We previously introduced a safety check ensuring that if a datagram had
already been dropped twice, it would no longer be dropped or delayed
after that.
This missed an edge case: if a datagram is dropped once, it can be
delayed any number of times. Since "delay" is not defined in terms of
time (x seconds) but in terms of ordering with respect to other messages
(the datagram will be forwarded after the next message is forwarded),
depending on the RNG output this can turn into an endless loop where
every message is delayed until the next one, which is itself delayed,
and so on, so that no message is ever forwarded.
The probability of this happening n times in a row is (1/d)^n, where d
is the value passed as delay=d. For delay=5 and n=5 that is
(1/5)^5 = 1/3125, around 0.03%, which seems small, but we still hit
such an occurrence in practice:
tests/ssl-opt.sh --seed 1625061502 -f 'DTLS proxy: 3d, min handshake, resumption$'
results (according to debug statements added during the investigation)
in the ClientHello of the second handshake being dropped once, then
delayed 5 times, after which the client stops retrying and the test
fails for no interesting reason.
Make sure this doesn't happen again by putting a cap on the number of
times we fail to forward a given datagram immediately.
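A minimal sketch of the idea, with hypothetical names (packet,
may_hold, MAX_HOLD) rather than the exact udp_proxy.c change:

    /* Hypothetical sketch: track how many times a datagram has been
     * dropped or delayed, and force forwarding once a cap is hit. */
    #define MAX_HOLD 5 /* hypothetical cap on drops + delays */

    typedef struct {
        unsigned hold_count; /* times this datagram was held back */
        /* ... datagram payload ... */
    } packet;

    /* Return 1 if the datagram may be dropped/delayed once more,
     * 0 if it must be forwarded immediately. */
    static int may_hold(packet *p)
    {
        if (p->hold_count >= MAX_HOLD)
            return 0; /* cap reached: forward now */
        p->hold_count++;
        return 1;
    }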
Signed-off-by: Manuel Pégourié-Gonnard <manuel.pegourie-gonnard@arm.com>
If PSA_CIPHER_ENCRYPT_OUTPUT_SIZE is called on a non-symmetric key,
a division by zero can happen: PSA_CIPHER_BLOCK_LENGTH returns 0 for
such a key, and PSA_ROUND_UP_TO_MULTIPLE divides by the block length.
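For illustration, PSA_ROUND_UP_TO_MULTIPLE is defined along these
lines, so a block length of 0 faults on the division:

    /* Simplified illustration of the failure mode: when block_size is
     * 0 (as for a non-symmetric key), the division below is a
     * division by zero. */
    #define PSA_ROUND_UP_TO_MULTIPLE(block_size, length) \
        (((length) + (block_size) - 1) / (block_size) * (block_size))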
Signed-off-by: Paul Elliott <paul.elliott@arm.com>
Restore the optimization done in
HEAD^{/Speed up the generation of storage format test cases}
which was lost in the refactoring done when adding support for
implicit usage flags.
There is still more than one call to the C compiler, but the extra
calls are only for some key usage test cases.
This is an internal refactoring. This commit does not change the
output of generate_psa_tests.py.
Signed-off-by: Gilles Peskine <Gilles.Peskine@arm.com>
In order for tests to pass from the previous commit (which makes it
mandatory for all pk verify/sign functions to be given a hash_len that
is exactly equal to the message digest length of md_alg), the hash_len
supplied to the function cannot be MBEDTLS_MD_MAX_SIZE; that would
result in all tests failing. Since the md algorithm for all of these
functions is SHA-256, we can use mbedtls functions to get the required
length of a SHA-256 digest (32 bytes). That number can then be used
for allocating the hash buffer.
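For illustration (alloc_hash_buffer is a hypothetical helper, not part
of this patch), sizing the buffer from the run-time digest length
could look like this:

    #include <stdlib.h>
    #include "mbedtls/md.h"

    /* Hypothetical helper: allocate a hash buffer sized to the exact
     * SHA-256 digest length instead of MBEDTLS_MD_MAX_SIZE. */
    static unsigned char *alloc_hash_buffer(size_t *hash_len)
    {
        const mbedtls_md_info_t *info =
            mbedtls_md_info_from_type(MBEDTLS_MD_SHA256);
        if (info == NULL)
            return NULL;
        *hash_len = mbedtls_md_get_size(info); /* 32 for SHA-256 */
        return calloc(1, *hash_len);
    }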
Signed-off-by: Nick Child <nick.child@ibm.com>
The function `pk_hashlen_helper` exists to ensure that a valid hash_len
is used in pk_verify and pk_sign functions. When the user passes 0 for
the hash_len argument, this function adjusts it to the length
corresponding to the given md algorithm; if the user does not pass 0
as the hash_len, it is not adjusted. This is problematic if the user
gives a hash_len and hash buffer that are smaller than the length
associated with the md algorithm: the error goes unchecked and
eventually leads to a buffer overread when the hash is given to
specific pk_sign/verify functions, since they both ignore the hash_len
argument if md_alg is not MBEDTLS_MD_NONE.
This commit adds a conditional to `pk_hashlen_helper` so that an
error is returned if the user specifies a hash_len (not 0) that is
not equal to the expected length for the associated message digest
algorithm. This aligns better with the API documentation, which states
"If hash_len is 0, then the length associated with md_alg is used
instead, or an error returned if it is invalid".
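A minimal sketch of the added check, assuming the existing shape of
the helper (not the exact upstream diff):

    #include "mbedtls/md.h"

    /* Sketch: reject a caller-supplied hash_len that does not match
     * md_alg's digest length, instead of only filling it in when 0. */
    static int pk_hashlen_helper(mbedtls_md_type_t md_alg,
                                 size_t *hash_len)
    {
        const mbedtls_md_info_t *md_info;

        if (md_alg == MBEDTLS_MD_NONE)
            return 0;

        if ((md_info = mbedtls_md_info_from_type(md_alg)) == NULL)
            return -1;

        if (*hash_len == 0) {
            *hash_len = mbedtls_md_get_size(md_info);
            return 0;
        }

        if (*hash_len != mbedtls_md_get_size(md_info))
            return -1; /* mismatch would risk a buffer overread */

        return 0;
    }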
Signed-off-by: Nick Child <nick.child@ibm.com>
Signed-off-by: Nayna Jain <nayna@linux.ibm.com>