To prevent dropping the same message over and over again, the UDP proxy
test application programs/test/udp_proxy _logically_ maintains a mapping
from records to the number of times the record has already been dropped,
and stops dropping once a configurable threshold (currently 2) is passed.
However, the actual implementation deviates from this logical view
in two crucial respects:
- To keep the implementation simple and independent of
implementations of suitable map interfaces, it only counts how
many times a record of a given _size_ has been dropped, and
stops dropping further records of that size once the configurable
threshold is passed. Of course, this is not foolproof, but it is a
good enough approximation for the proxy, and it allows the required
map to be implemented as a simple (if inefficient) array; see the
sketch after this list.
- The implementation mixes datagram lengths and record lengths:
When deciding whether it is allowed to drop a datagram, it
uses the total datagram size as a lookup index into the map
counting the number of times a datagram has been dropped. However,
when updating this map, the UDP proxy traverses the datagram
record by record, and updates the mapping at the level of record
lengths.
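For illustration, a minimal sketch of the size-indexed drop counter
described in the first item could look as follows (identifiers such as
`dropped` and `DROP_THRESHOLD` are made up for this example and are
not taken from udp_proxy.c):

    #include <stddef.h>

    /* Illustrative only: a plain array indexed by message size serves
     * as the "map" from sizes to drop counts. */
    #define MAX_MSG_SIZE    16384 /* assumed upper bound on datagram size */
    #define DROP_THRESHOLD  2     /* the configurable threshold           */

    static unsigned dropped[ MAX_MSG_SIZE + 1 ];

    static int may_drop( size_t len )
    {
        if( len > MAX_MSG_SIZE || dropped[ len ] >= DROP_THRESHOLD )
            return( 0 );

        dropped[ len ]++;
        return( 1 );
    }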
Apart from this inconsistency, the introduction of the Connection ID
feature leads to yet another problem: The CID length is not part of
the record header but dynamically negotiated during (potentially
encrypted!) handshakes, and it is hence impossible for a passive traffic
analyzer (in this case our UDP proxy) to reliably parse record headers;
in particular, it isn't possible to reliably infer the length of a record,
nor to dissect a datagram into records.
The previous implementation of the UDP proxy was not CID-aware and
assumed that the record length would always reside at offsets 11, 12
in the DTLS record header, which would allow it to iterate through
the datagram record by record. As mentioned, this is no longer possible
for CID-based records, and the current implementation can run into
a buffer overflow in this case (because it doesn't validate that
the record length is not larger than what remains in the datagram).
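To make the problem concrete, here is a rough sketch (not the actual
udp_proxy code) of such a record-by-record traversal of a datagram
holding plain DTLS 1.2 records, with the bounds checks whose absence
caused the overflow marked explicitly; the fixed 13-byte header
assumption is exactly what breaks once CID-based records may occur:

    #include <stddef.h>

    /* Plain DTLS 1.2 record header:
     * type(1) version(2) epoch(2) seq_num(6) length(2),
     * i.e. 13 bytes with the record length at offsets 11 and 12. */
    static void update_drop_counters( const unsigned char *data, size_t len )
    {
        size_t offset = 0;

        while( offset + 13 <= len )            /* check: header must fit */
        {
            size_t rec_len = ( (size_t) data[ offset + 11 ] << 8 ) |
                               (size_t) data[ offset + 12 ];

            if( rec_len > len - offset - 13 )  /* check: body must fit   */
                break;

            /* ... update the per-length drop counter for this record ... */

            offset += 13 + rec_len;
        }
    }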
This commit removes the inconsistency in datagram vs. record length
and resolves the buffer overflow issue by not attempting any dissection
of datagrams into records, and instead only counting how often _datagrams_
of a particular size have been dropped.
There is only one practical situation where this makes a difference:
If datagram packing is used by default but disabled on retransmission
(which OpenSSL has been seen to do), it can happen that we drop a
datagram on its initial transmission, then also drop some of its records
when they are retransmitted one-by-one afterwards, yet still keep the
drop counter at 1 instead of 2. However, even in this situation, we'll
correctly count the number of droppings from that point on and eventually
stop dropping, because the peer will not fall back to using packing
and hence use stable record lengths.
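For concreteness, a hypothetical trace of this corner case (sizes are
made up for the example):
- A 250-byte datagram packing two records of 100 and 150 bytes
  (headers included) is dropped on its initial transmission: the
  counter for size 250 becomes 1.
- The peer retransmits without packing, as separate datagrams of 100
  and 150 bytes; if these are dropped too, the counters for sizes 100
  and 150 are incremented, while the counter for size 250 stays at 1.
- All further retransmissions keep using 100- and 150-byte datagrams,
  so their counters reach the threshold and dropping stops.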
Increase the SO versions of libmbedx509 and libmbedtls due to the
addition of fields in publicly visible (non-opaque) structs:
- mbedtls_ssl_config
- mbedtls_ssl_context
- mbedtls_x509_crt
Due to the way the current PK API works, it may not have been clear
to library clients how large the output buffers passed to the signing
functions need to be: depending on the key type, the required size is
governed by MPI- or EC-specific compile-time constants.
Inside the library, there were places where it was assumed that the
MPI size would always be enough, even for ECDSA signatures. However,
for very small values of MBEDTLS_MPI_MAX_SIZE and a sufficiently large
key, the EC signature could exceed the MPI size and cause a stack
overflow.
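As a hedged sketch of what defensive buffer sizing on the caller side
can look like under these constraints (SIG_BUF_SIZE is an illustrative
name, not a library macro, and ECDSA support is assumed to be compiled
in):

    #include "mbedtls/pk.h"      /* mbedtls_pk_sign()     */
    #include "mbedtls/bignum.h"  /* MBEDTLS_MPI_MAX_SIZE  */
    #include "mbedtls/ecdsa.h"   /* MBEDTLS_ECDSA_MAX_LEN */

    /* Use the larger of the RSA (MPI) and ECDSA bounds instead of
     * relying on MBEDTLS_MPI_MAX_SIZE alone. */
    #if MBEDTLS_ECDSA_MAX_LEN > MBEDTLS_MPI_MAX_SIZE
    #define SIG_BUF_SIZE MBEDTLS_ECDSA_MAX_LEN
    #else
    #define SIG_BUF_SIZE MBEDTLS_MPI_MAX_SIZE
    #endif

    unsigned char sig[ SIG_BUF_SIZE ];
    size_t sig_len = 0;
    /* ... mbedtls_pk_sign( &pk, MBEDTLS_MD_SHA256, hash, hash_len,
     *                      sig, &sig_len, f_rng, p_rng ); ... */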
This test establishes both conditions -- small MPI size and the use
of a long ECDSA key -- and attempts to sign an arbitrary file.
This can cause a stack overflow if the signature buffers are not
big enough, therefore the test is performed for an ASan build.
For unit tests and sample programs, CFLAGS=-m32 is enough to get a
32-bit build, because these programs are all compiled directly
from *.c to the executable in one shot. But with makefile rules that
first build object files and then link them, LDFLAGS=-m32 is also
needed.
Remove the "Decrypt empty buffer" test, as ChaCha20 is a stream cipher
and 0 bytes encrypted is identical to a 0 length buffer. The "ChaCha20
Encrypt and decrypt 0 bytes" test will test decryption of a 0 length
buffer.
Previously, even in the ChaCha20 and ChaCha20-Poly1305 tests, we would
test that decryption of an empty buffer would work with
MBEDTLS_CIPHER_AES_128_CBC.
Make the cipher used with the dec_empty_buf() test configurable, so that
the ChaCha20 and ChaCha20-Poly1305 empty buffer tests can use ciphers
other than AES CBC. Then, make the ChaCha20 and ChaCha20-Poly1305 empty
buffer tests use the MBEDTLS_CIPHER_CHACHA20 and
MBEDTLS_CIPHER_CHACHA20_POLY1305 cipher types.
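A rough sketch of this parametrization, in the spirit of the test
suite code (the real function lives in
tests/suites/test_suite_cipher.function and differs in detail):

    #include "mbedtls/cipher.h"

    /* Sketch: the cipher is a parameter instead of being hard-coded
     * to MBEDTLS_CIPHER_AES_128_CBC. Key/IV handling is elided. */
    static int dec_empty_buf( mbedtls_cipher_type_t cipher_type )
    {
        mbedtls_cipher_context_t ctx;
        const mbedtls_cipher_info_t *info;
        int ret = -1;

        mbedtls_cipher_init( &ctx );

        info = mbedtls_cipher_info_from_type( cipher_type );
        if( info != NULL )
            ret = mbedtls_cipher_setup( &ctx, info );

        /* ... set key and IV, then check that decrypting a 0-length
         * buffer succeeds and produces 0 bytes of output ... */

        mbedtls_cipher_free( &ctx );
        return( ret );
    }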
When testing a configuration where no ciphersuites have a MAC, via
component_test_when_no_ciphersuites_have_mac(), perform a targeted run
of only the Encrypt-then-MAC tests within ssl-opt.sh.
When MBEDTLS_SSL_ENCRYPT_THEN_MAC is enabled, but not
MBEDTLS_SSL_SOME_MODES_USE_MAC, mbedtls_ssl_derive_keys() and
build_transforms() will attempt to use a non-existent `encrypt_then_mac`
field in the ssl_transform.
Compile [ 93.7%]: ssl_tls.c
[Error] ssl_tls.c@865,14: 'mbedtls_ssl_transform {aka struct mbedtls_ssl_transform}' has no member named 'encrypt_then_mac'
[ERROR] ./mbed-os/features/mbedtls/src/ssl_tls.c: In function 'mbedtls_ssl_derive_keys':
./mbed-os/features/mbedtls/src/ssl_tls.c:865:14: error: 'mbedtls_ssl_transform {aka struct mbedtls_ssl_transform}' has no member named 'encrypt_then_mac'
transform->encrypt_then_mac = session->encrypt_then_mac;
^~
Change mbedtls_ssl_derive_keys() and build_transforms() to only access
`encrypt_then_mac` if `encrypt_then_mac` is actually present.
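The guarded access looks roughly as follows (the exact preprocessor
context in ssl_tls.c may differ):

    #if defined(MBEDTLS_SSL_ENCRYPT_THEN_MAC) && \
        defined(MBEDTLS_SSL_SOME_MODES_USE_MAC)
        transform->encrypt_then_mac = session->encrypt_then_mac;
    #endif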
Add a regression test to detect when we have regressions with
configurations that do not include any MAC ciphersuites.
Fixes d56ed2491b ("Reduce size of `ssl_transform` if no MAC ciphersuite is enabled")
Lengths below 128 bytes must be encoded as a single 'XX' byte in DER,
but two tests in the X.509 CRT parsing suite used the BER (but non-DER)
encoding '81 XX': the first byte 10000001 indicates that the length
follows in a subsequent byte (high bit set) and occupies 1 byte (low bits).
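For example (bytes chosen arbitrarily), an OCTET STRING holding the
three bytes AA BB CC:

    /* Mandatory DER form: single-byte length '03'. */
    static const unsigned char der_ok[]  = { 0x04, 0x03, 0xAA, 0xBB, 0xCC };
    /* Long-form length '81 03': valid BER, but not valid DER for
     * lengths below 128. */
    static const unsigned char ber_bad[] = { 0x04, 0x81, 0x03, 0xAA, 0xBB, 0xCC };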
Previously, a test exercising the X.509 CRT parser's behaviour
on unexpected tags would use a '00' byte in place of the tag
for the expected structure. This makes reviewing the examples
harder because the binary data isn't valid DER-encoded ASN.1.
This commit uses the ASN.1 NULL TLV '05 00' in place of the expected
tag, and adapts the surrounding structures' length values accordingly.
This eases reviewing because now the ASN.1 structures are still
well-formed at the place where the mismatch occurs.
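A made-up example of the substitution (not taken from the actual test
data), for a SEQUENCE that is expected to contain an INTEGER:

    /* Valid reference: SEQUENCE { INTEGER 5 }. */
    static const unsigned char valid_seq[] = { 0x30, 0x03, 0x02, 0x01, 0x05 };
    /* Old test data: '00' in place of the INTEGER tag - not valid ASN.1. */
    static const unsigned char old_data[]  = { 0x30, 0x03, 0x00, 0x01, 0x05 };
    /* New test data: NULL TLV in place of the INTEGER - still well-formed;
     * note the enclosing SEQUENCE length shrinks from 03 to 02. */
    static const unsigned char new_data[]  = { 0x30, 0x02, 0x05, 0x00 };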