The original definition of MBEDTLS_PK_SIGNATURE_MAX_SIZE only took RSA
into account. An ECDSA signature may be larger than the maximum
possible RSA signature size, depending on build options; for example
this is the case with config-suite-b.h.
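For illustration, a minimal sketch of a definition that takes both algorithms into account, assuming the existing MBEDTLS_MPI_MAX_SIZE and MBEDTLS_ECDSA_MAX_LEN macros (the exact guard structure is illustrative):

    /* Sketch only: pick the larger of the RSA and ECDSA maxima. */
    #define MBEDTLS_PK_SIGNATURE_MAX_SIZE 0

    #if (defined(MBEDTLS_RSA_C) || defined(MBEDTLS_PK_RSA_ALT_SUPPORT)) && \
        MBEDTLS_MPI_MAX_SIZE > MBEDTLS_PK_SIGNATURE_MAX_SIZE
    #undef MBEDTLS_PK_SIGNATURE_MAX_SIZE
    #define MBEDTLS_PK_SIGNATURE_MAX_SIZE MBEDTLS_MPI_MAX_SIZE
    #endif

    #if defined(MBEDTLS_ECDSA_C) && \
        MBEDTLS_ECDSA_MAX_LEN > MBEDTLS_PK_SIGNATURE_MAX_SIZE
    #undef MBEDTLS_PK_SIGNATURE_MAX_SIZE
    #define MBEDTLS_PK_SIGNATURE_MAX_SIZE MBEDTLS_ECDSA_MAX_LEN
    #endif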
In pk_sign_verify, if mbedtls_pk_sign() failed, sig_len was passed to
mbedtls_pk_verify_restartable() without having been initialized. This
only worked because, in the only test case where signing is expected to
fail, the verify implementation does not look at sig_len before failing
for the expected reason.
The value of sig_len is undefined if sign() fails, so set sig_len to
something sensible beforehand.
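A minimal sketch of the initialization, where the surrounding test code, the RNG callback name and the chosen dummy value are illustrative:

    /* Give sig_len a well-defined value even if signing fails, so the
     * later verify call never reads uninitialized data. */
    size_t sig_len = MBEDTLS_PK_SIGNATURE_MAX_SIZE + 1; /* deliberately out of range */

    ret = mbedtls_pk_sign(&pk, MBEDTLS_MD_SHA256, hash, sizeof(hash),
                          sig, &sig_len, rnd_std_rand, NULL);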
This issue has been reported by Tuba Yavuz, Farhaan Fowze, Ken (Yihang) Bai,
Grant Hernandez, and Kevin Butler (University of Florida) and
Dave Tian (Purdue University).
In AES encrypt and decrypt, the values of some local variables were left
behind on the stack after the function returned. These values can be used
to recover the last round key. To follow best practice and to limit the
impact of buffer overread vulnerabilities (such as Heartbleed), zeroize
them before exiting the function.
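A minimal sketch of the pattern, assuming mbedtls_platform_zeroize() from platform_util.h; the variable names and grouping are illustrative:

    #include <stdint.h>
    #include "mbedtls/platform_util.h"

    static void aes_rounds_sketch(void)
    {
        struct {
            uint32_t X[4];
            uint32_t Y[4];
        } t = { { 0 }, { 0 } };

        /* ... encryption/decryption rounds using t.X and t.Y ... */

        /* Make sure key-dependent values do not linger on the stack. */
        mbedtls_platform_zeroize(&t, sizeof(t));
    }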
It was not obvious before that `AES_KEY` and `RSA_KEY` were shorthand
for key material. A user copy-pasting the code snippet would run into a
compilation error if they didn't realize this. Make it more obvious that
key material must come from somewhere external by turning the snippets
that used global keys into functions that take a key as a parameter.
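A minimal sketch of the reworked snippet shape; the function name and key size are illustrative:

    #include "mbedtls/aes.h"

    /* The key is an explicit parameter instead of a global AES_KEY. */
    static int encrypt_block(const unsigned char key[32],
                             const unsigned char input[16],
                             unsigned char output[16])
    {
        mbedtls_aes_context ctx;
        int ret;

        mbedtls_aes_init(&ctx);
        ret = mbedtls_aes_setkey_enc(&ctx, key, 256);
        if (ret == 0)
            ret = mbedtls_aes_crypt_ecb(&ctx, MBEDTLS_AES_ENCRYPT, input, output);
        mbedtls_aes_free(&ctx);
        return ret;
    }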
When writing a private EC key, use a constant size for the private
value, as specified in RFC 5915. Previously, the value was written
as an ASN.1 INTEGER, which caused the size of the key to leak
about 1 bit of information on average, and could cause the value to be
1 byte too large for the output buffer.
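A minimal sketch of the change in spirit, assuming the usual mbedtls_ecp_keypair fields; the surrounding ASN.1 writing code is omitted and the buffer name is illustrative:

    /* RFC 5915 fixes the length of the private value to the curve's byte
     * length, so write it left-padded with zeros instead of as a
     * minimal-length INTEGER. */
    size_t byte_length = (ec->grp.pbits + 7) / 8;

    ret = mbedtls_mpi_write_binary(&ec->d, buf, byte_length);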
Add pk_write test cases where the minimal ASN.1 INTEGER encoding of the
private value does not match the mandatory fixed size of the OCTET STRING
that contains the value.
ec_256_long_prv.pem is a random secp256r1 private key, selected so
that the private value is >= 2^255, i.e. the top bit of the first byte
is set (which would cause the INTEGER encoding to have an extra
leading 0 byte).
ec_521_short_prv.pem is a random secp521r1 private key, selected so
that the private value is < 2^519, i.e. the first byte is 0 and the
top bit of the second byte is 0 (which would cause the INTEGER
encoding to have one less 0 byte at the start).
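For illustration, the DER size difference that the two keys exercise (tag and length bytes shown in hex):

    /* secp256r1, private value with its top bit set:
     *   INTEGER      needs a leading 0x00 -> 02 21 00 <32 value bytes>
     *   OCTET STRING is exactly 32 bytes  -> 04 20    <32 value bytes>
     *
     * secp521r1, private value < 2^519 (66-byte fixed length):
     *   INTEGER      drops leading zero bytes -> fewer than 66 value bytes
     *   OCTET STRING is always 66 bytes       -> 04 42 <66 value bytes>
     */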
The corner case tests were designed for 32-bit and 64-bit limbs
independently and were performed only on the matching platform. On the
other platform they are no longer corner cases, but we can still
exercise them.
The corner case tests were designed for 64-bit limbs and failed on 32-bit
platforms, because the numbers in the test ended up being stored in a
different number of limbs and the function (correctly) returned an error
upon receiving them.
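A small illustration, assuming the standard MPI API; the value shown is illustrative:

    #include "mbedtls/bignum.h"

    /* The same value occupies a different number of limbs depending on
     * the limb size, so a number chosen to sit exactly on a limb boundary
     * on one platform is no longer a corner case on the other. */
    mbedtls_mpi X;
    mbedtls_mpi_init(&X);

    /* 2^64 - 1: one limb with 64-bit limbs, two limbs with 32-bit limbs. */
    mbedtls_mpi_read_string(&X, 16, "FFFFFFFFFFFFFFFF");

    mbedtls_mpi_free(&X);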
In the case of *ret we might need to preserve a 0 value throughout the
loop and therefore we need an extra condition to protect it from being
overwritten.
The value of done is always 1 after *ret has been set and does not need
to be protected from overwriting. Therefore in this case the extra
condition can be removed.
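A minimal sketch of the loop described above; ct_lt() stands for an assumed constant-time limb comparison helper returning 0 or 1, and the loop shape is illustrative:

    size_t i;
    unsigned done = 0;
    *ret = 0;

    for (i = X->n; i > 0; i--) {
        unsigned lt = ct_lt(X->p[i - 1], Y->p[i - 1]); /* 1 if the X limb is smaller */
        unsigned gt = ct_lt(Y->p[i - 1], X->p[i - 1]); /* 1 if the X limb is larger  */

        /* *ret needs the extra (1 - done) condition: only the most
         * significant differing limb may set it, and a 0 result must be
         * preserved for the rest of the loop. */
        *ret |= lt & (1 - done);

        /* done needs no such protection: once *ret has been set, done is
         * already 1 and overwriting it with 1 again is harmless. */
        done |= lt | gt;
    }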
The code relied on the assumptions that CHAR_BIT is 8 and that unsigned
has no padding bits.
In the Bignum module we already assume that the sign of an MPI is either
-1 or 1. Using this, we eliminate the above-mentioned dependency.
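A minimal sketch of the idea; with X->s known to be -1 or +1, the expression below yields the 0/1 flag without shifting by a width derived from CHAR_BIT (it relies on the two's complement bit pattern of -1, not on the exact width of unsigned):

    unsigned X_is_negative = (X->s & 2) >> 1;  /* 1 if s == -1, 0 if s == 1 */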
The signature of mbedtls_mpi_cmp_mpi_ct() meant to support using it in
place of mbedtls_mpi_cmp_mpi(). This meant full comparison functionality
and a signed result.
To make the function more universal and friendly to constant-time
coding, we change the result type to unsigned. Theoretically, we could
still encode the full three-way comparison result in an unsigned value,
but it would be less intuitive.
Therefore we no longer represent the full comparison result, and the
functionality is constrained to checking whether the first operand is
less than the second. This is sufficient to support the current use case
and to check any relationship between MPIs.
The only drawback is that we need to call the function twice when
checking for equality, but this can be optimised later if and when it is
needed.
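A minimal sketch of the resulting shape; the renamed function in later Mbed TLS versions is mbedtls_mpi_lt_mpi_ct(), but treat the exact prototype here as illustrative:

    /* *ret is set to 1 if X < Y and to 0 otherwise. */
    int mbedtls_mpi_lt_mpi_ct(const mbedtls_mpi *X, const mbedtls_mpi *Y,
                              unsigned *ret);

    /* Equality still takes two calls: X == Y iff !(X < Y) && !(Y < X). */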
Multiplication is known to have measurable timing variations depending on
the operands; for example, it is typically much faster when one of the
operands is zero. Remove such multiplications from constant-time code.
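A minimal sketch of one way to avoid such a multiplication, assuming cond is an unsigned value that is either 0 or 1:

    /* Instead of a data-dependent multiplication such as r = cond * x,
     * build an all-zeros/all-ones mask from cond and select with bitwise
     * operations, whose timing does not depend on the operand values. */
    unsigned mask = ~(cond - 1u);      /* 0 if cond == 0, all ones if cond == 1 */
    r = (x & mask) | (r & ~mask);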
1. rename the variable according to the Mbed TLS coding style;
2. add a comment explaining the safety of the optimization;
3. initialize T2 more safely and zeroize memory on function exit.
Enabling memory_buffer_alloc is slow and makes ASan ineffective. We
have a patch pending to remove it from the full config. In the
meantime, disable it explicitly.
None of the test cases in test_suite_memory_buffer_alloc actually
need MBEDTLS_MEMORY_DEBUG. Some perform additional checks when
MBEDTLS_MEMORY_DEBUG is enabled, but all are useful even without it. So
enable them all and #ifdef out the parts that require MBEDTLS_MEMORY_DEBUG.
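A minimal sketch of the pattern used in the test functions; the helper API names are recalled from memory_buffer_alloc.h and the test body is illustrative:

    static void memory_buffer_alloc_sketch(void)
    {
        unsigned char *p = mbedtls_calloc(1, 42);

        /* ... checks that are valid with or without MBEDTLS_MEMORY_DEBUG ... */

    #if defined(MBEDTLS_MEMORY_DEBUG)
        /* Only the checks that need the debug bookkeeping are guarded. */
        size_t max_used, max_blocks;
        mbedtls_memory_buffer_alloc_max_get(&max_used, &max_blocks);
        /* ... assertions on max_used / max_blocks ... */
    #endif

        mbedtls_free(p);
    }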
The test case "Memory buffer small buffer" emits the message
"FATAL: verification of first header failed". In this test case it is
actually expected, but it looks weird to see this message from a
passing test. Add a comment that states this explicitly, modify the
test description to indicate that the failure is expected, and change
the test function name to be more accurate.
Fix #309