AES-XEX is a building block for other cryptographic standards and not yet a
standard in and of itself. We therefore provide only the standardized AES-XTS
algorithm, not AES-XEX. The AES-XTS algorithm and interface provided here
can be used to perform AES-XEX when the length of the input is a multiple
of the AES block size.
XTS mode is fully known as "xor-encrypt-xor with ciphertext-stealing".
It is a generalization of XEX mode.
This implementation is limited to an 8-bit (1 byte) boundary, which does
not seem to be what the specification intended, judging by some of the
test vectors [1].
This commit comes with tests, extracted from [1], and benchmarks. The
benchmarks aren't very meaningful here, though, as they work on a buffer
whose length is a multiple of 16 bytes, so the ciphertext-stealing path
that sets XTS apart from XEX is never exercised.
[1] http://csrc.nist.gov/groups/STM/cavp/documents/aes/XTSTestVectors.zip
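
For illustration, a minimal usage sketch of the XTS interface, assuming
the mbedtls_aes_xts_* names as they appear in released Mbed TLS versions
(the exact interface introduced by this commit may differ). Because the
length is a multiple of the block size, the result equals AES-XEX with the
same key and tweak:

    #include "mbedtls/aes.h"

    /* Illustrative helper: encrypt one 32-byte data unit with AES-128-XTS.
     * No ciphertext stealing occurs for block-aligned input, so this is
     * also AES-XEX for the same key and tweak. */
    int xts_encrypt_unit( const unsigned char key[32],       /* two 128-bit keys */
                          const unsigned char data_unit[16], /* tweak, e.g. sector number */
                          const unsigned char input[32],
                          unsigned char output[32] )
    {
        mbedtls_aes_xts_context ctx;
        int ret;

        mbedtls_aes_xts_init( &ctx );
        ret = mbedtls_aes_xts_setkey_enc( &ctx, key, 256 );
        if( ret == 0 )
            ret = mbedtls_aes_crypt_xts( &ctx, MBEDTLS_AES_ENCRYPT, 32,
                                         data_unit, input, output );
        mbedtls_aes_xts_free( &ctx );

        return( ret );
    }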
* development: (504 commits)
Fix minor code style issues
Add the update to the soversion to the ChangeLog
Fix the ChangeLog for clarity, english and credit
Update version to 2.9.0
ecp: Fix binary compatibility with group ID
Changelog entry
Change accepted ciphersuite versions when parsing server hello
Remove preprocessor directives around platform_util.h include
Fix style for mbedtls_mpi_zeroize()
Improve mbedtls_platform_zeroize() docs
mbedtls_zeroize -> mbedtls_platform_zeroize in docs
Reword config.h docs for MBEDTLS_PLATFORM_ZEROIZE_ALT
Organize CMakeLists targets in alphabetical order
Organize output objs in alphabetical order in Makefile
Regenerate errors after ecp.h updates
Update ecp.h
Change variable bytes_written to header_bytes in record decompression
Update ecp.h
Update ecp.h
Update ecp.h
...
The idea is to use a simple program, which is expected to be modified
rarely, set a breakpoint at a specific line, and check that the
function mbedtls_zeroize() does actually set the buffer to 0 and is not
optimised out by the compiler.
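
A minimal sketch of such a program, using mbedtls_platform_zeroize() from
platform_util.h as the public counterpart of the mbedtls_zeroize()
mentioned above (the program actually added by this commit may differ):

    #include <string.h>
    #include "mbedtls/platform_util.h"

    int main( void )
    {
        unsigned char buf[64];

        memset( buf, 0xAA, sizeof( buf ) );
        mbedtls_platform_zeroize( buf, sizeof( buf ) );

        /* Breakpoint here: buf must contain only zeros at this point,
         * proving the zeroization call was not optimised out. */
        return( 0 );
    }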
Also, introduce MBEDTLS_EINTR locally in net_sockets.c as the
platform-dependent error code used by the `select` call to indicate that
the poll was interrupted by a signal handler: on Unix the corresponding
code is EINTR, while on Windows it's WSAEINTR.
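
Sketch of the platform-dependent definition (the exact preprocessor
guards in net_sockets.c may differ):

    /* Map the "interrupted by a signal" error code for select(). */
    #if defined(_WIN32) || defined(_WIN32_WCE)
    #define MBEDTLS_EINTR WSAEINTR
    #else
    #define MBEDTLS_EINTR EINTR
    #endif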
Previously, the idling loop in ssl_server2 didn't check whether
the underlying call to mbedtls_net_poll signalled that the socket
became invalid. This had the consequence that during idling, the
server couldn't be terminated through a SIGTERM, as the corresponding
handler would only close the sockets and expect the remainder of
the program to shut down gracefully as a consequence of this.
A subsequent attempt to fix this changed ssl-opt.sh to terminate the
server through a KILL signal, which however led to other problems when
run under valgrind.
This commit changes the idling loop in ssl_server2 and ssl_client2
to obey the return code of mbedtls_net_poll and shut down gracefully
if an error occurs, e.g. because the socket was closed.
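
For reference, an illustrative version of the idling pattern (simplified;
the helper name is not the actual ssl_server2/ssl_client2 code):

    #include "mbedtls/net_sockets.h"

    /* Idle until data is available, but return the error as soon as
     * mbedtls_net_poll() reports that the socket became invalid. */
    static int idle_until_readable( mbedtls_net_context *fd )
    {
        int ret;

        do
        {
            ret = mbedtls_net_poll( fd, MBEDTLS_NET_POLL_READ, 10 /* ms */ );
            if( ret < 0 )
                return( ret );   /* e.g. socket closed: caller shuts down */
        } while( ( ret & MBEDTLS_NET_POLL_READ ) == 0 );

        return( 0 );
    }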
As a consequence, the server termination via a KILL signal in
ssl-opt.sh is no longer necessary, with the previous `kill; wait`
pattern being sufficient. The commit reverts the corresponding
change.
Initializing arrays using non-constant expressions is not permitted in
C89, and was causing errors when compiling with Metrowerks CodeWarrior
(for classic MacOS) in C89 mode. Clang also produces a warning when
compiling with '-Wc99-extensions':
test/benchmark.c:670:42: warning: initializer for aggregate is not a compile-time constant [-Wc99-extensions]
const unsigned char *dhm_P[] = { dhm_P_2048, dhm_P_3072 };
^~~~~~~~~~
test/benchmark.c:674:42: warning: initializer for aggregate is not a compile-time constant [-Wc99-extensions]
const unsigned char *dhm_G[] = { dhm_G_2048, dhm_G_3072 };
^~~~~~~~~~
Declaring the arrays as 'static' makes them constant expressions.
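
A standalone illustration of the constraint and the fix (not the actual
benchmark.c code; the array names here are made up):

    /* In C89, every expression in an aggregate initializer list must be a
     * constant expression. The address of an automatic array is not one,
     * but the address of a static array is, so adding 'static' to the
     * referenced arrays resolves the warning. */
    void c89_example( void )
    {
        static const unsigned char p_2048[] = { 0x01 }; /* static: address constant */
        static const unsigned char p_3072[] = { 0x02 };
        const unsigned char *table[] = { p_2048, p_3072 }; /* valid C89 now */

        (void) table;
    }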
fixes #1353
The race goes this way:
1. ssl_recv() succeeds (i.e. no signal received yet)
2. processing the message leads to aborting handshake with ret != 0
3. reset ret if we were signaled
4. print error if ret is still non-zero
5. go back to net_accept() which can be interrupted by a signal
We print the error message only if the signal is received between steps 3 and
5, not when it arrives between steps 1 and 3.
This can cause failures in ssl-opt.sh where we check for the presence of "Last
error was..." in the server's output: if we perform step 2, the client will be
notified and exit, then ssl-opt.sh will send SIGTERM to the server, but if it
didn't get a chance to run and pass step 3 in the meantime, we're in trouble.
The purpose of step 3 was to avoid spurious "Last error" messages in the
output so that ssl-opt.sh can check for a successful run by the absence of
that message. However, it is enough to suppress that message when the last
error we get is the one we expect from being interrupted by a signal - doing
more could hide real errors.
Also, improve the messages printed when interrupted to make it easier to
distinguish the two cases; this could be used by a testing script that
wants to check that the server doesn't see the client as disconnecting
unexpectedly.
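
An illustrative sketch of the suppression logic (the flag and error-code
names are assumptions, not the exact ssl_server2 code):

    #include "mbedtls/net_sockets.h"
    #include "mbedtls/platform.h"

    /* Only clear the error when it is the one expected from a call that
     * was interrupted by the signal handler; anything else is reported. */
    static int suppress_expected_error( int ret, int received_sigterm )
    {
        if( received_sigterm &&
            ( ret == MBEDTLS_ERR_NET_ACCEPT_FAILED ||
              ret == MBEDTLS_ERR_NET_RECV_FAILED ) )
        {
            mbedtls_printf( " interrupted by SIGTERM (not an error)\n" );
            return( 0 );    /* no "Last error was ..." for this case */
        }

        return( ret );
    }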
The _ext suffix suggests "new arguments", but the new functions have
the same arguments. Use _ret instead, to convey that the difference is
that the new functions return a value.
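
For example, the SHA-256 pair follows this pattern (signatures as in the
2.x headers, shown only to illustrate the suffix choice):

    #include <stddef.h>

    /* Deprecated wrapper: same arguments, no return value. */
    void mbedtls_sha256( const unsigned char *input, size_t ilen,
                         unsigned char output[32], int is224 );

    /* New function: same arguments, returns 0 or an error code. */
    int mbedtls_sha256_ret( const unsigned char *input, size_t ilen,
                            unsigned char output[32], int is224 );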
Conflict resolution:
* ChangeLog: put the new entries in their rightful place.
* library/x509write_crt.c: the change in development was whitespace
only, so use the one from the iotssl-1251 feature branch.
1) `mbedtls_rsa_import_raw` used an uninitialized return
value when it was called without any input parameters.
While not sensible, this is allowed and should be a
successful no-op.
2) The MPI test for prime generation missed a return value
check for a call to `mbedtls_mpi_shift_r`. This is neither
critical nor new but should be fixed.
3) Both the RSA key generation example program and the
RSA test suites contained code initializing an RSA context
only after a potentially failing call to CTR DRBG initialization,
leaving the corresponding RSA context free call in the
cleanup section of the respective function orphaned.
While this defect existed before, Coverity picked up on
it again because of newly introduced MPIs that were
likewise initialized only after the call to CTR DRBG
init. The commit fixes both the old and the new issue
by moving the initialization of both the RSA context and
all MPIs before the first potentially failing call, as
sketched below.
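
Sketch of the resulting pattern (simplified; not the exact rsa_genkey or
test-suite code):

    #include "mbedtls/rsa.h"
    #include "mbedtls/ctr_drbg.h"
    #include "mbedtls/entropy.h"

    static int genkey_pattern( void )
    {
        int ret;
        mbedtls_rsa_context rsa;
        mbedtls_entropy_context entropy;
        mbedtls_ctr_drbg_context ctr_drbg;
        mbedtls_mpi N, E;

        /* Initialize everything before the first call that can fail, so
         * that the cleanup path may free all of it unconditionally. */
        mbedtls_rsa_init( &rsa, MBEDTLS_RSA_PKCS_V15, 0 );
        mbedtls_entropy_init( &entropy );
        mbedtls_ctr_drbg_init( &ctr_drbg );
        mbedtls_mpi_init( &N ); mbedtls_mpi_init( &E );

        ret = mbedtls_ctr_drbg_seed( &ctr_drbg, mbedtls_entropy_func,
                                     &entropy, NULL, 0 );
        if( ret != 0 )
            goto cleanup;

        ret = mbedtls_rsa_gen_key( &rsa, mbedtls_ctr_drbg_random, &ctr_drbg,
                                   2048, 65537 );
        if( ret == 0 )
            ret = mbedtls_rsa_export( &rsa, &N, NULL, NULL, NULL, &E );

    cleanup:
        mbedtls_mpi_free( &N ); mbedtls_mpi_free( &E );
        mbedtls_ctr_drbg_free( &ctr_drbg );
        mbedtls_entropy_free( &entropy );
        mbedtls_rsa_free( &rsa );
        return( ret );
    }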
* public/pr/1136:
Timing self test: shorten redundant tests
Timing self test: increased duration
Timing self test: increased tolerance
Timing unit tests: more protection against infinite loops
Unit test for mbedtls_timing_hardclock
New timing unit tests
selftest: allow excluding a subset of the tests
selftest: allow running a subset of the tests
selftest: refactor to separate the list of tests from the logic
Timing self test: print some diagnosis information
mbedtls_timing_get_timer: don't use uninitialized memory
timing interface documentation: minor clarifications
Timing: fix mbedtls_set_alarm(0) on Unix/POSIX