MD2, MD4, MD5, DES and SHA-1 are considered weak and their use
constitutes a security risk. If possible, we recommend avoiding
dependencies on them, and considering stronger message digests and
ciphers instead.
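For illustration, a minimal sketch of moving to a stronger digest through the
generic md layer (assuming SHA-256 is enabled in the build; this is not code
from the library itself):

    #include <stddef.h>
    #include "mbedtls/md.h"

    /* Hash a buffer with SHA-256 instead of MD5, via the generic md layer. */
    int hash_sha256( const unsigned char *input, size_t ilen,
                     unsigned char output[32] )
    {
        const mbedtls_md_info_t *info =
            mbedtls_md_info_from_type( MBEDTLS_MD_SHA256 );

        if( info == NULL )
            return( -1 );   /* SHA-256 not compiled in */

        return( mbedtls_md( info, input, ilen, output ) );
    }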
This change fixes a problem in the tests pk_rsa_alt() and
pk_rsa_overflow() from test_suite_pk.function that would cause a
segmentation fault. The problem is that these tests are only designed
to run on platforms where sizeof(size_t) > sizeof(unsigned int).
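A minimal sketch of the kind of guard that avoids the crash (the overflow
length and the exact form of the guard used in test_suite_pk.function are
assumptions, not the literal change):

    #include <stdint.h>
    #include <limits.h>

    /* These checks only make sense where size_t is strictly wider than
     * unsigned int; skip them elsewhere instead of crashing. */
    #if SIZE_MAX > UINT_MAX
        size_t hash_len = (size_t) UINT_MAX + 1;   /* deliberately > UINT_MAX */
        /* ... exercise mbedtls_pk_verify() / mbedtls_pk_sign() with hash_len ... */
    #endif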
On x32, pointers are only 4 bytes wide and need to be loaded with the "movl"
instruction instead of "movq" to avoid loading garbage into the register.
The MULADDC routines for x86-64 are adjusted to work on x32 as well by using
tighter register constraints, so that gcc loads all the registers for us in
advance (and stores them back afterwards). The "b", "c", "D" and "S"
constraints correspond to the rbx, rcx, rdi and rsi registers respectively.
On x32 systems, pointers are 4 bytes wide and are therefore stored in %e?x
registers (instead of %r?x registers). These registers must be accessed with
"addl" instead of "addq"; however, the GNU assembler will accept the generic
"add" instruction and determine the correct opcode based on the registers
passed to it.
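A minimal sketch of the approach (gcc inline asm; a simplified stand-in, not
the actual MULADDC macros from bn_mul.h):

    #include <stdint.h>

    /* One multiply-accumulate step: *d += *s * b + c, returning the new
     * carry. The "c", "D", "S" and "b" constraints bind the C variables to
     * rcx, rdi, rsi and rbx, so gcc itself emits the pointer loads and
     * store-backs with the right width (32-bit on x32, 64-bit otherwise). */
    uint64_t muladdc_one( uint64_t *d, const uint64_t *s,
                          uint64_t b, uint64_t c )
    {
        __asm__(
            "movq   (%%rsi), %%rax      \n\t"   /* rax = *s                */
            "mulq   %%rbx               \n\t"   /* rdx:rax = *s * b        */
            "addq   %%rcx, %%rax        \n\t"   /* add the incoming carry  */
            "adcq   $0, %%rdx           \n\t"
            "addq   %%rax, (%%rdi)      \n\t"   /* accumulate into *d      */
            "adcq   $0, %%rdx           \n\t"
            "movq   %%rdx, %%rcx        \n\t"   /* rcx = outgoing carry    */
            : "+c" (c), "+D" (d), "+S" (s)
            : "b" (b)
            : "rax", "rdx", "cc", "memory" );

        return( c );
    }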
A new test for mbedtls_set_alarm(0) was introduced in PR 1136, which also
fixed that case on Unix. Apparently test results on MinGW were not checked at
that point, so we missed that this new test was also failing on that platform.
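For reference, the zero-second case boils down to something like the following
sketch (not the exact test code; it only uses the public mbedtls_set_alarm()
and mbedtls_timing_alarmed interface):

    #include "mbedtls/timing.h"

    /* A zero-second alarm should be reported as already expired,
     * without waiting. */
    int check_zero_alarm( void )
    {
        mbedtls_set_alarm( 0 );
        return( mbedtls_timing_alarmed == 1 ? 0 : 1 );
    }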
Generate and add ctest test suites, with the --verbose argument to be given
to the test suites.
The verbose output will be shown **only** if ctest is run with the `-v`
parameter. The verbose argument is passed to the test suites only when they
are run through `ctest`.
The race goes this way:
1. ssl_recv() succeeds (i.e. no signal received yet)
2. processing the message leads to aborting handshake with ret != 0
3. reset ret if we were signaled
4. print error if ret is still non-zero
5. go back to net_accept() which can be interrupted by a signal
We print the error message only if the signal is received between steps 3 and
5, not when it arrives between steps 1 and 3.
This can cause failures in ssl-opt.sh where we check for the presence of "Last
error was..." in the server's output: if we perform step 2, the client will be
notified and exit, then ssl-opt.sh will send SIGTERM to the server; if the
server has not already passed step 3 by the time the signal arrives, step 3
swallows the error and the expected message never appears.
The purpose of step 3 was to avoid spurious "Last error" messages in the
output so that ssl-opt.sh can check for a successful run by the absence of
that message. However, it is enough to suppress that message when the last
error we get is the one we expect from being interrupted by a signal - doing
more could hide real errors.
Also, improve the messages printed when interrupted to make it easier to
distinguish the two cases - this could be used by a testing script that wants
to check that the server doesn't see the client as disconnecting unexpectedly.
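A sketch of the narrowed step 3 (the received_sigterm flag name and the choice
of MBEDTLS_ERR_NET_ACCEPT_FAILED as the "expected" error are assumptions made
for illustration, and the printed wording is not the program's actual output):

    #include <signal.h>
    #include <stdio.h>
    #include "mbedtls/net_sockets.h"

    static volatile int received_sigterm = 0;   /* assumed flag name */

    static void term_handler( int sig )
    {
        (void) sig;
        received_sigterm = 1;
    }

    /* ... in the serving loop, where step 3 used to reset ret blindly ... */
        if( received_sigterm )
        {
            /* distinguish "interrupted while serving" from "interrupted
             * while waiting in net_accept()" (wording illustrative) */
            printf( " interrupted by SIGTERM\n" );

            /* Only swallow the error we expect from being interrupted;
             * any other error still produces "Last error was: ...". */
            if( ret == MBEDTLS_ERR_NET_ACCEPT_FAILED )
                ret = 0;
            goto exit;
        }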
If lsof is not available, wait_server_start uses a fixed timeout,
which can trigger a race condition if the timeout turns out to be too
short. Emit a warning so that we can tell from the test logs when this is
happening.
- Some of the CI machines don't have lsof installed yet, so we rely on
sleeping for an arbitrary number of seconds while the server starts. We're
seeing occasional failures with the current delay because the CI machines are
highly loaded, which seems to indicate the current delay is not quite enough,
but hopefully not too far off either, so double it.
- While at it, also double the watchdog delay: while I don't remember seeing
many failures due to client timeout, this change doesn't impact the normal
running time of the script, so better to err on the safe side.
These changes don't affect the test and should only affect the false positive
rate coming from the test framework in those scripts.
1) The MPI test for prime generation missed a return value
check for a call to `mbedtls_mpi_shift_r`. This is neither
critical nor new but should be fixed.
2) The RSA key generation example program contained code
initializing an RSA context after a potentially failing
call to CTR DRBG seeding, leaving the corresponding
RSA context free call in the cleanup section orphaned.
The commit fixes this by moving the initialization of the
RSA context before the first potentially failing call.
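The gist of the second fix, as a sketch (simplified and reordered for
illustration; not the literal diff to the example program):

    #include <string.h>
    #include "mbedtls/entropy.h"
    #include "mbedtls/ctr_drbg.h"
    #include "mbedtls/rsa.h"

    int genkey_skeleton( void )
    {
        mbedtls_entropy_context entropy;
        mbedtls_ctr_drbg_context ctr_drbg;
        mbedtls_rsa_context rsa;
        const char *pers = "rsa_genkey";
        int ret;

        mbedtls_entropy_init( &entropy );
        mbedtls_ctr_drbg_init( &ctr_drbg );
        /* Init the RSA context *before* the first call that can fail, so the
         * mbedtls_rsa_free() below is never reached with an uninitialized
         * context. */
        mbedtls_rsa_init( &rsa, MBEDTLS_RSA_PKCS_V15, 0 );

        if( ( ret = mbedtls_ctr_drbg_seed( &ctr_drbg, mbedtls_entropy_func,
                                           &entropy,
                                           (const unsigned char *) pers,
                                           strlen( pers ) ) ) != 0 )
            goto exit;

        /* ... generate the key with mbedtls_rsa_gen_key() ... */

    exit:
        mbedtls_rsa_free( &rsa );
        mbedtls_ctr_drbg_free( &ctr_drbg );
        mbedtls_entropy_free( &entropy );
        return( ret );
    }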
* mbedtls-2.1:
selftest: fix build error in some configurations
Timing self test: shorten redundant tests
Timing self test: increased duration
Timing self test: increased tolerance
selftest: allow excluding a subset of the tests
selftest: allow running a subset of the tests
selftest: fixed an erroneous return code
selftest: refactor to separate the list of tests from the logic
Timing self test: print some diagnosis information
mbedtls_timing_get_timer: don't use uninitialized memory
timing interface documentation: minor clarifications
Timing: fix mbedtls_set_alarm(0) on Unix/POSIX
* public/pr/1223:
selftest: fix build error in some configurations
Timing self test: shorten redundant tests
Timing self test: increased duration
Timing self test: increased tolerance
selftest: allow excluding a subset of the tests
selftest: allow running a subset of the tests
selftest: fixed an erroneous return code
selftest: refactor to separate the list of tests from the logic
Timing self test: print some diagnosis information
mbedtls_timing_get_timer: don't use uninitialized memory
timing interface documentation: minor clarifications
Timing: fix mbedtls_set_alarm(0) on Unix/POSIX
Increase the duration of the self test; otherwise it tends to fail on
a busy machine even with the recently upped tolerance. But run the
loop only once, which is enough for a simple smoke test.
mbedtls_timing_self_test fails annoyingly often when running on a busy
machine such as can be expected of a continuous integration system.
Increase the tolerances in the delay test, to reduce the chance of
failures that are only due to missing a deadline on a busy machine.
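For instance, a widened acceptance window for a nominal one-second delay could
look like this sketch (the bounds are made up for illustration and are not the
actual numbers used in timing.c):

    #include "mbedtls/timing.h"

    /* Measure roughly one second and accept a generous window, so that a
     * loaded CI machine does not turn scheduling jitter into a self-test
     * failure. */
    int check_one_second_delay( void )
    {
        struct mbedtls_timing_hr_time timer;
        unsigned long elapsed_ms;

        (void) mbedtls_timing_get_timer( &timer, 1 );   /* reset the timer */

        mbedtls_set_alarm( 1 );
        while( !mbedtls_timing_alarmed )
            ;                                           /* wait for ~1 second */

        elapsed_ms = mbedtls_timing_get_timer( &timer, 0 );

        return( elapsed_ms >= 900 && elapsed_ms <= 1400 ? 0 : 1 );
    }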