Add mbedTLS.vcxproj to the VS2010 application template so that the next
time we auto-generate the application project files, the
LinkLibraryDependencies for mbedTLS.vcxproj are maintained.
Fixes #1347
MD2, MD4, MD5, DES and SHA-1 are considered weak and their use
constitutes a security risk. If possible, we recommend avoiding
dependencies on them, and considering stronger message digests and
ciphers instead.
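As an illustration only (not part of this change), switching to a stronger digest is straightforward through the generic message-digest layer. The helper below is a minimal sketch; the function name is made up for this example and it assumes a build with MBEDTLS_MD_C and SHA-256 support enabled.

```
#include <stddef.h>
#include "mbedtls/md.h"

/* Sketch only: compute a SHA-256 digest through the generic md layer as a
 * stronger alternative to MD5 or SHA-1. Assumes MBEDTLS_MD_C and
 * MBEDTLS_SHA256_C are enabled in the build configuration. */
static int hash_sha256(const unsigned char *input, size_t ilen,
                       unsigned char output[32])
{
    const mbedtls_md_info_t *md_info =
        mbedtls_md_info_from_type(MBEDTLS_MD_SHA256);

    if (md_info == NULL)
        return -1; /* SHA-256 not compiled in */

    return mbedtls_md(md_info, input, ilen, output);
}
```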
This change fixes a problem in the tests pk_rsa_alt() and
pk_rsa_overflow() from test_suite_pk.function that would cause a
segmentation fault. The problem is that these tests are only designed
to run on computers where sizeof(size_t) > sizeof(unsigned int).
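A standalone sketch of the underlying portability issue follows (hypothetical demonstration code, not the test suite itself): the overflow tests need a length that cannot be represented in an unsigned int, which only exists when size_t is wider.

```
#include <limits.h>
#include <stddef.h>
#include <stdio.h>

/* Sketch only: on targets where size_t is no wider than unsigned int,
 * the value below wraps to 0 and the overflow tests' assumptions break,
 * so such cases need to be skipped there. */
int main(void)
{
    size_t too_large_for_uint = (size_t) UINT_MAX + 1;

    if (sizeof(size_t) > sizeof(unsigned int))
        printf("overflow cases can use length %zu\n", too_large_for_uint);
    else
        printf("skipping: size_t cannot exceed UINT_MAX on this platform\n");

    return 0;
}
```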
On x32, pointers are only 4 bytes wide and need to be loaded using the "movl"
instruction instead of "movq" to avoid loading garbage into the register.
The MULADDC routines for x86-64 are adjusted to work on x32 as well by using
better register constraints, so that gcc loads all the registers for us in
advance (and stores them back afterwards). The b, c, D and S constraints
correspond to the rbx, rcx, rdi and rsi registers respectively.
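As a hedged illustration of the constraint mechanism (this is not the actual MULADDC macro; the function and its arguments are invented for the example), the sketch below uses the "S", "D", "b" and "c" constraints so that gcc itself emits the ABI-correct pointer loads before the asm runs:

```
#include <stdint.h>

/* Sketch only: the "S", "D", "b" and "c" constraints place the operands in
 * rsi/esi, rdi/edi, rbx/ebx and rcx/ecx respectively, so gcc emits loads of
 * the correct width for the ABI in use (64-bit on x86-64, 32-bit on x32)
 * and the asm never hard-codes movq for a pointer itself. */
static inline void load_add_store(const uint64_t *src, uint64_t *dst,
                                  uint64_t b, uint64_t c)
{
    __asm__ volatile(
        "movq (%%rsi), %%rax \n\t"
        "addq %%rbx, %%rax   \n\t"
        "addq %%rcx, %%rax   \n\t"
        "movq %%rax, (%%rdi) \n\t"
        : /* no register outputs; *dst is written through memory */
        : "S"(src), "D"(dst), "b"(b), "c"(c)
        : "rax", "memory", "cc"
    );
}
```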
On x32 systems, pointers are 4 bytes wide and are therefore stored in %e?x
registers (instead of %r?x registers). These registers must be accessed using
"addl" instead of "addq"; however, the GNU assembler will accept the generic
"add" instruction and determine the correct opcode based on the registers
passed to it.
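A minimal sketch of that behaviour (illustrative only, not the bignum code; the function name is made up): with the generic mnemonic and compiler-chosen register operands, the same source assembles to addl where the operands are 32-bit registers and to addq where they are 64-bit ones.

```
#include <stddef.h>

/* Sketch only: size_t is 4 bytes on x32 and 8 bytes on x86-64, so the
 * compiler substitutes 32- or 64-bit registers for %0 and %1, and the
 * generic "add" mnemonic assembles to addl or addq accordingly. */
static inline size_t add_words(size_t a, size_t b)
{
    __asm__("add %1, %0"
            : "+r"(a)
            : "r"(b)
            : "cc");
    return a;
}
```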
A new test for mbedtls_timing_alarm(0) was introduced in PR 1136, which also
fixed it on Unix. Apparently test results on MinGW were not checked at that
point, so we missed that this new test was also failing on this platform.
Generate and add ctest test suites, with the --verbose argument given to
the test suites.
The verbose output will be shown **only** if ctest is run with the `-v` parameter;
the verbose argument is passed to the test suites only when they are run through `ctest`.
The race goes this way:
1. ssl_recv() succeeds (ie no signal received yet)
2. processing the message leads to aborting handshake with ret != 0
3. reset ret if we were signaled
4. print error if ret is still non-zero
5. go back to net_accept() which can be interrupted by a signal
We print the error message only if the signal is received between steps 3 and
5, not when it arrives between steps 1 and 3.
This can cause failures in ssl-opt.sh where we check for the presence of "Last
error was..." in the server's output: if we perform step 2, the client will be
notified and exit, then ssl-opt.sh will send SIGTERM to the server; but if the
server didn't get a chance to run and pass step 3 in the meantime, we're in trouble.
The purpose of step 3 was to avoid spurious "Last error" messages in the
output so that ssl-opt.sh can check for a successful run by the absence of
that message. However, it is enough to suppress that message when the last
error we get is the one we expect from being interrupted by a signal - doing
more could hide real errors.
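A hedged sketch of that suppression logic follows. The variable name received_sigterm and the choice of MBEDTLS_ERR_NET_RECV_FAILED as the "expected" error are assumptions for illustration, not necessarily what ssl_server2.c actually uses.

```
#include <stdio.h>
#include "mbedtls/error.h"
#include "mbedtls/net_sockets.h"

/* Sketch only: suppress the "Last error was" line only when the error is
 * exactly the one we expect from a signal-interrupted network read; any
 * other error is still reported, so real errors are not hidden. */
static void report_exit_status(int ret, int received_sigterm)
{
    char error_buf[200];

    if (received_sigterm && ret == MBEDTLS_ERR_NET_RECV_FAILED) {
        printf("interrupted by SIGTERM (not an error)\n");
        return;
    }

    if (ret != 0) {
        mbedtls_strerror(ret, error_buf, sizeof(error_buf));
        printf("Last error was: -0x%04X - %s\n", (unsigned) -ret, error_buf);
    }
}
```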
Also, improve the messages printed when interrupted to make it easier to
distinguish the two cases - this could be used in a testing script that wants
to check that the server doesn't see the client as disconnecting unexpectedly.
If lsof is not available, wait_server_start uses a fixed timeout,
which can trigger a race condition if the timeout turns out to be too
short. Emit a warning so that we can tell from the test logs when this
is going on.
- Some of the CI machines don't have lsof installed yet, so we rely on sleeping
for an arbitrary number of seconds while the server starts. We're seeing
occasional failures with the current delay because the CI machines are highly
loaded, which seems to indicate the current delay is not quite enough, but
hopefully not too far off either, so double it.
- While at it, also double the watchdog delay: while I don't remember seeing
many failures due to client timeout, this change doesn't impact the normal
running time of the script, so better to err on the safe side.
These changes don't affect the tests themselves and should only affect the false
positive rate coming from the test framework in those scripts.