Change the mbedtls_zeroize() implementation to use memset() instead of a
custom implementation, for performance reasons. Furthermore, we would
also like to prevent, as far as possible, compiler optimisations that
remove the zeroization code.
The implementation of mbedtls_zeroize() now uses a volatile function
pointer to memset() as suggested by Colin Percival at:
http://www.daemonology.net/blog/2014-09-04-how-to-zero-a-buffer.html
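A minimal sketch of that technique (not necessarily the exact mbed TLS code):
calling memset() through a volatile function pointer means the compiler can no
longer prove that the call has no observable effect, so it cannot remove it.

    #include <stddef.h>
    #include <string.h>

    /* The volatile qualifier forces the compiler to perform the call. */
    static void * (* const volatile memset_func)( void *, int, size_t ) = memset;

    void mbedtls_zeroize( void *buf, size_t len )
    {
        memset_func( buf, 0, len );
    }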
Improve the position of the breakpoint so that it is set at a line of code
that is less likely to be optimised out by the compiler. Setting the
breakpoint at a place that can easily be optimised out causes the gdb
script to fail, because it cannot match the source code line to the
compiled code. For this reason the breakpoint is now set at the fclose()
call, which is very unlikely to be optimised out, since removing it would
cause a resource leak.
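For illustration only, a hypothetical outline of the end of such a test
program; the file handling, buffer size and other details are assumptions, not
the actual programs/test/zeroize.c:

    #include <stdio.h>
    #include <string.h>

    #define BUFFER_LEN 1024

    /* Zeroization through a volatile function pointer, as sketched above. */
    static void * (* const volatile memset_func)( void *, int, size_t ) = memset;

    static void mbedtls_zeroize( void *buf, size_t len )
    {
        memset_func( buf, 0, len );
    }

    int main( int argc, char *argv[] )
    {
        FILE *fp;
        char buf[BUFFER_LEN];
        size_t n;

        if( argc != 2 || ( fp = fopen( argv[1], "r" ) ) == NULL )
            return( 1 );

        /* Read and echo the file so that buf is live until the very end. */
        while( ( n = fread( buf, 1, sizeof( buf ), fp ) ) > 0 )
            fwrite( buf, 1, n, stdout );

        mbedtls_zeroize( buf, sizeof( buf ) );

        fclose( fp );   /* breakpoint goes here: the compiler cannot drop this
                           call without leaking the FILE handle */

        return( 0 );
    }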
Add a new macro, MBEDTLS_UTILS_ZEROIZE, that, when defined, allows users
to configure an alternative definition of mbedtls_zeroize(). If the macro
is not defined, then mbed TLS uses the default definition of the
function.
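One plausible way to wire up such a hook, sketched under the assumption that
the default definition sits next to the guard (the actual layout in the source
may differ):

    #include <stddef.h>
    #include <string.h>

    #if !defined(MBEDTLS_UTILS_ZEROIZE)
    /* Default definition: memset() via a volatile function pointer. */
    static void * (* const volatile memset_func)( void *, int, size_t ) = memset;

    void mbedtls_zeroize( void *buf, size_t len )
    {
        memset_func( buf, 0, len );
    }
    #endif /* !MBEDTLS_UTILS_ZEROIZE */

    /* When MBEDTLS_UTILS_ZEROIZE is defined, the user supplies their own
       definition of mbedtls_zeroize() elsewhere. */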
This commit removes all the static occurrences of the function
mbedtls_zeroize() in each of the individual .c modules. Instead, the
function has been moved to utils.h, which is included by each of those
modules.
The gdb script loads the programs/test/zeroize program and feeds it its
own source code as input. It then sets a breakpoint just before the
program's final return statement and checks that every element of the
buffer in memory was zeroized. Otherwise it signals a failure and
terminates.
The test was added to all.sh.
The idea is to use a simple program that is expected to change rarely,
set a breakpoint at a specific line, and check that the function
mbedtls_zeroize() actually sets the buffer to 0 and is not optimised out
by the compiler.
The new header contains functionality that is common across various
mbed TLS modules and avoids code duplication. To start, utils.h currently
contains only the mbedtls_zeroize() function.
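A hypothetical outline of the new header; the guard name and exact contents
are assumptions:

    #ifndef MBEDTLS_UTILS_H
    #define MBEDTLS_UTILS_H

    #include <stddef.h>

    /**
     * \brief       Securely zeroize a buffer.
     *
     * \param buf   Buffer to be zeroized.
     * \param len   Length of the buffer in bytes.
     */
    void mbedtls_zeroize( void *buf, size_t len );

    #endif /* MBEDTLS_UTILS_H */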
Minor documentation improvements:
* Standardized file brief descriptions.
* Separated return statements.
* Reordered tags within documentation blocks so that params and returns come last in the block.
* Changed p_rng descriptions from "parameter" to "context".
* Where multiple failure return codes are listed, suggested specifying the cause of each return code, or generalizing.
* Minor improvements to parameter documentation proposed by eng.
1. Add the 3DES context to those for which an alternative definition is
   allowed.
2. Move some ECP structs so that alternative definitions of them are
   disallowed, as other modules rely on them.
The specification requires that P and Q are not too close. The specification
also requires that you generate a P and stick with it, generating new Qs until
you have found a pair that works. In practice, it turns out that a particular
P sometimes makes it very unlikely that a Q matching all the constraints can
be found. So we keep the original behavior, where a new P and a new Q are
generated every round.
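A rough sketch of the "fresh P and Q every round" structure described above,
written against the public bignum API; the actual rsa.c loop, the remaining
checks and the exact closeness threshold differ (the constant below is only a
placeholder for a FIPS 186-4 style |P - Q| check):

    #include "mbedtls/bignum.h"

    /* Illustrative only: regenerate both primes on every iteration and accept
     * the pair only once the checks pass. nbits is the modulus size. */
    static int gen_prime_pair( mbedtls_mpi *P, mbedtls_mpi *Q, size_t nbits,
                               int (*f_rng)(void *, unsigned char *, size_t),
                               void *p_rng )
    {
        int ret = 0;
        mbedtls_mpi T;

        mbedtls_mpi_init( &T );

        do
        {
            /* Generate a fresh P and a fresh Q every round. */
            MBEDTLS_MPI_CHK( mbedtls_mpi_gen_prime( P, nbits >> 1, 0,
                                                    f_rng, p_rng ) );
            MBEDTLS_MPI_CHK( mbedtls_mpi_gen_prime( Q, nbits >> 1, 0,
                                                    f_rng, p_rng ) );

            /* Reject the pair if P and Q are too close to each other. */
            MBEDTLS_MPI_CHK( mbedtls_mpi_sub_mpi( &T, P, Q ) );
        }
        while( mbedtls_mpi_bitlen( &T ) <= ( nbits >> 1 ) - 100 );
        /* ... further checks (bit length of N, gcd(E, (P-1)(Q-1)) == 1, size
         *     of D) would also cause the loop to restart ... */

    cleanup:
        mbedtls_mpi_free( &T );
        return( ret );
    }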
The specification requires that candidate numbers are taken directly from the
raw entropy (apart from the odd/even adjustment) and are at least
2^(nbits-0.5). If not, new random bits need to be used for the next candidate.
Similarly, if the number is not prime, new random bits need to be used.
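For context, a standard way to see why this lower bound matters (assuming
nbits here is the bit size of each prime candidate): if both primes satisfy
P, Q >= 2^(nbits-0.5) = sqrt(2) * 2^(nbits-1), then N = P*Q >= 2^(2*nbits-1),
so the resulting modulus always has the full 2*nbits bit length.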
Attacks against RSA exist for small D. [Wiener] established this for
D < N^0.25. [Boneh] suggests the bound should be N^0.5.
Multiple possible values of D might exist for the same set of E, P, Q. The
attack works when there exists any possible D that is small. To make sure that
the generated key is not susceptible to attack, we need to make sure we have
found the smallest possible D, and then check that D is big enough. The
Carmichael function λ of p*q is lcm(p-1, q-1), so we can apply Carmichael's
theorem to show that D = d mod λ(n) is the smallest.
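A sketch of how the smallest D can be derived with the bignum API; this is a
hypothetical helper, not the actual rsa.c code. It computes
lambda(N) = lcm(P-1, Q-1) = (P-1)(Q-1) / gcd(P-1, Q-1) and inverts E modulo
it:

    #include "mbedtls/bignum.h"

    static int rsa_smallest_d( mbedtls_mpi *D, const mbedtls_mpi *E,
                               const mbedtls_mpi *P, const mbedtls_mpi *Q )
    {
        int ret = 0;
        mbedtls_mpi P1, Q1, H, G, L;

        mbedtls_mpi_init( &P1 ); mbedtls_mpi_init( &Q1 ); mbedtls_mpi_init( &H );
        mbedtls_mpi_init( &G );  mbedtls_mpi_init( &L );

        MBEDTLS_MPI_CHK( mbedtls_mpi_sub_int( &P1, P, 1 ) );        /* P - 1         */
        MBEDTLS_MPI_CHK( mbedtls_mpi_sub_int( &Q1, Q, 1 ) );        /* Q - 1         */
        MBEDTLS_MPI_CHK( mbedtls_mpi_mul_mpi( &H, &P1, &Q1 ) );     /* (P-1)(Q-1)    */
        MBEDTLS_MPI_CHK( mbedtls_mpi_gcd( &G, &P1, &Q1 ) );         /* gcd(P-1, Q-1) */
        MBEDTLS_MPI_CHK( mbedtls_mpi_div_mpi( &L, NULL, &H, &G ) ); /* lambda(N)     */
        MBEDTLS_MPI_CHK( mbedtls_mpi_inv_mod( D, E, &L ) );         /* smallest D    */

    cleanup:
        mbedtls_mpi_free( &P1 ); mbedtls_mpi_free( &Q1 ); mbedtls_mpi_free( &H );
        mbedtls_mpi_free( &G );  mbedtls_mpi_free( &L );
        return( ret );
    }

The key would then be rejected if the bit length of this D is too small.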
[Wiener] Michael J. Wiener, "Cryptanalysis of Short RSA Secret Exponents"
[Boneh] Dan Boneh and Glenn Durfee, "Cryptanalysis of RSA with Private Key d Less than N^0.292"