The documentation of HMAC_DRBG erroneously claimed that
mbedtls_hmac_drbg_set_entropy_len() had an impact on the initial
seeding. This is in fact not the case: mbedtls_hmac_drbg_seed() forces
the entropy length to its chosen value. Fix the documentation.
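As an illustration of the call order implied by the corrected
documentation (a minimal sketch with error checking omitted, using the
entropy module as the entropy source):

    mbedtls_entropy_context entropy;
    mbedtls_hmac_drbg_context drbg;

    mbedtls_entropy_init(&entropy);
    mbedtls_hmac_drbg_init(&drbg);

    /* The initial seeding uses the entropy length that
     * mbedtls_hmac_drbg_seed() itself chooses. */
    mbedtls_hmac_drbg_seed(&drbg,
                           mbedtls_md_info_from_type(MBEDTLS_MD_SHA256),
                           mbedtls_entropy_func, &entropy, NULL, 0);

    /* This only affects reseed operations performed from now on. */
    mbedtls_hmac_drbg_set_entropy_len(&drbg, 48);
    mbedtls_hmac_drbg_reseed(&drbg, NULL, 0);
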
The documentation of CTR_DRBG erroneously claimed that
mbedtls_ctr_drbg_set_entropy_len() had an impact on the initial
seeding. This is in fact not the case: mbedtls_ctr_drbg_seed() forces
the initial seeding to grab MBEDTLS_CTR_DRBG_ENTROPY_LEN bytes of
entropy. Fix the documentation and rewrite the discussion of the
entropy length and the security strength accordingly.
Explain how MBEDTLS_CTR_DRBG_ENTROPY_LEN is determined next to the
statement of the security strength, rather than giving only a partial
explanation (the current setting) in the documentation of
MBEDTLS_CTR_DRBG_ENTROPY_LEN itself.
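As a side note (illustrative, not part of the change itself), the
configuration dependency can be observed directly, since this constant
is what the initial seeding grabs regardless of any earlier
mbedtls_ctr_drbg_set_entropy_len() call:

    /* The value of MBEDTLS_CTR_DRBG_ENTROPY_LEN depends on the
     * compile-time configuration, e.g. on which hash the entropy
     * module can use. */
    printf("initial seeding grabs %u bytes of entropy\n",
           (unsigned) MBEDTLS_CTR_DRBG_ENTROPY_LEN);
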
NIST and many other sources call it a "personalization string", and
certainly not "device-specific identifiers", which is somewhat
misleading since device-specific identifiers are just one of many
things that might go into a personalization string.
Improve the formatting and wording of the documentation, following
what was done for CTR_DRBG.
Document the maximum size and nullability of some buffer parameters.
Exercise the library functions with calloc returning NULL for a size
of 0. Make this a separate job with UBSan (and ASan) to detect
places where we try to dereference the result of calloc(0) or to do
things like
buf = calloc(size, 1);
if (buf == NULL && size != 0) return INSUFFICIENT_MEMORY;
memcpy(buf, source, size);
which has undefined behavior when buf is NULL at the memcpy call even
if size is 0.
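One way to keep such code well-defined, shown here only as a sketch of
the general pattern rather than as the exact fix used in the library,
is to guard the dereference on the size:

    buf = calloc(size, 1);
    if (buf == NULL && size != 0) return INSUFFICIENT_MEMORY;
    if (size != 0) memcpy(buf, source, size);
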
This is needed because the jobs for other test components either use
the system malloc, which returns non-NULL for a size of 0 on Linux and
FreeBSD, or the memory_buffer_alloc malloc, which returns NULL but does
not give as useful feedback under ASan (because the whole heap is a
single C object).
Add a very basic test of calloc to the selftest program. Here the
selftest program acts in its capacity as a platform compatibility
checker rather than as a test of the library.
The main objective is to report whether calloc returns NULL for a size
of 0. Also observe whether a free/alloc sequence returns the address
that was just freed and whether a size overflow is properly detected.
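A minimal sketch of the kind of check described (the names and exact
output of the real selftest addition may differ):

    #include <stdio.h>
    #include <stdlib.h>

    static int calloc_self_test(void)
    {
        void *empty = calloc(0, 1);
        void *buf1, *buf2, *overflow;
        int failures = 0;

        /* Main objective: report whether calloc returns NULL for a
         * size of 0. */
        printf("calloc(0): %s\n", empty == NULL ? "NULL" : "non-NULL");

        /* Observe (informationally) whether a free/alloc sequence
         * returns the address that was just freed. */
        buf1 = calloc(16, 1);
        free(buf1);
        buf2 = calloc(16, 1);
        printf("free then alloc returns the same address: %s\n",
               buf1 == buf2 ? "yes" : "no");

        /* A size overflow in calloc's multiplication must be detected. */
        overflow = calloc((size_t) -1, (size_t) -1);
        if (overflow != NULL)
        {
            printf("FAILED: calloc does not detect size overflow\n");
            free(overflow);
            ++failures;
        }

        free(empty);
        free(buf2);
        return failures;
    }
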
This is a documentation-only change, but one that users who care about
NIST compliance may want to review to check whether they are using the
module in a compliant way.
Document that a derivation function is used.
Document the security strength of the DRBG depending on the
compile-time configuration and how it is set up. In particular,
document how the nonce specified in SP 800-90A is set.
Mention how to link the ctr_drbg module with the entropy module.
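For instance, the usual way to hook the two modules together looks
roughly like this (a sketch; the function name and personalization
string are arbitrary examples):

    #include <string.h>

    #include "mbedtls/ctr_drbg.h"
    #include "mbedtls/entropy.h"

    int generate_random_bytes(unsigned char *output, size_t len)
    {
        mbedtls_entropy_context entropy;
        mbedtls_ctr_drbg_context ctr_drbg;
        const char *pers = "my_app_v1"; /* example personalization string */
        int ret;

        mbedtls_entropy_init(&entropy);
        mbedtls_ctr_drbg_init(&ctr_drbg);

        /* mbedtls_entropy_func is the link between the two modules:
         * it is the entropy callback that the DRBG uses for seeding
         * and reseeding. */
        ret = mbedtls_ctr_drbg_seed(&ctr_drbg, mbedtls_entropy_func,
                                    &entropy,
                                    (const unsigned char *) pers,
                                    strlen(pers));
        if (ret == 0)
            ret = mbedtls_ctr_drbg_random(&ctr_drbg, output, len);

        mbedtls_ctr_drbg_free(&ctr_drbg);
        mbedtls_entropy_free(&entropy);
        return ret;
    }
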
* State explicitly whether several numbers are in bits or bytes.
* Clarify whether buffer pointer parameters can be NULL.
* Explain the values of constants that depend on the configuration.
ssl_decompress_buf() was operating on data from the ssl context, but it
was called at a point where this data is actually in the rec structure.
Call it later, so that the data is back in the ssl structure.
Signed-off-by: Simon Butcher <simon.butcher@arm.com>
There is a 50% performance drop in the SCA_CM-enabled encrypt and
decrypt functions. Therefore, use the older version of the
encrypt/decrypt functions when SCA_CM is disabled.
- Do not reuse any part of the randomized number; use a separate byte
  for each purpose.
- Combine some separate loops to get rid of the gaps between them.
- Extend the usage of flow_control (see the sketch below).
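A generic illustration of the flow-control idea (the names here are
made up for the sketch, not the ones used in the code):

    /* A flow-control counter is incremented after each sensitive step
     * and checked at the end, so that a skipped step (e.g. caused by
     * fault injection) is detected. */
    #define FAULT_DETECTED -1

    static volatile unsigned flow_ctrl;

    static void step_one(void)   { /* ... */ }
    static void step_two(void)   { /* ... */ }
    static void step_three(void) { /* ... */ }

    static int protected_operation(void)
    {
        flow_ctrl = 0;

        step_one();   ++flow_ctrl;
        step_two();   ++flow_ctrl;
        step_three(); ++flow_ctrl;

        /* All three steps must have run for the counter to reach 3. */
        return flow_ctrl == 3 ? 0 : FAULT_DETECTED;
    }
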
* upstream/pr/2945:
Rename macro MBEDTLS_MAX_RAND_DELAY
Update signature of mbedtls_platform_random_delay
Replace mbedtls_platform_enforce_volatile_reads 2
Replace mbedtls_platform_enforce_volatile_reads
Add more variation to random delay countermeasure
Add random delay to enforce_volatile_reads
Update comments of mbedtls_platform_random_delay
Follow Mbed TLS coding style
Add random delay function to platform_utils
When reading the input, the buffer will be initialised with random data
and the reading will start from a random offset. When writing the data,
the output will be initialised with random data and the writing will start
from a random offset.
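A rough sketch of the idea with made-up names (the real code's helpers
and random source differ):

    #include <stddef.h>
    #include <stdlib.h>

    /* Fill the destination with random data, then copy starting from
     * a random offset and wrapping around, so that the access pattern
     * is less predictable. rand() stands in for the platform's random
     * source here. */
    static void copy_from_random_offset(unsigned char *dst,
                                        const unsigned char *src,
                                        size_t len)
    {
        size_t start, i;

        if (len == 0)
            return;

        /* Initialise the output with random data first. */
        for (i = 0; i < len; i++)
            dst[i] = (unsigned char) rand();

        /* Start the actual copy at a random offset. */
        start = (size_t) rand() % len;
        for (i = 0; i < len; i++)
        {
            size_t j = (start + i) % len;
            dst[j] = src[j];
        }
    }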