To prevent dropping the same message over and over again, the UDP proxy
test application programs/test/udp_proxy _logically_ maintains a mapping
from records to the number of times the record has already been dropped,
and stops dropping once a configurable threshold (currently 2) is passed.
However, the actual implementation deviates from this logical view
in two crucial respects:
- To keep the implementation simple and independent of
implementations of suitable map interfaces, it only counts how
many times a record of a given _size_ has been dropped, and
stops dropping further records of that size once the configurable
threshold is passed. Of course, this is not foolproof, but it is a
good enough approximation for the proxy, and it allows the use of an
inefficient but simple array for the required map.
- The implementation mixes datagram lengths and record lengths:
When deciding whether it is allowed to drop a datagram, it
uses the total datagram size as a lookup index into the map
counting the number of times a packet has been dropped. However,
when updating this map, the UDP proxy traverses the datagram
record by record, and updates the mapping at the level of record
lengths.
Apart from this inconsistency, the introduction of the Connection ID
feature leads to yet another problem: The CID length is not part of
the record header but dynamically negotiated during (potentially
encrypted!) handshakes, and it is hence impossible for a passive traffic
analyzer (in this case our UDP proxy) to reliably parse record headers;
in particular, it isn't possible to reliably infer the length of a record,
nor to dissect a datagram into records.
The previous implementation of the UDP proxy was not CID-aware and
assumed that the record length would always reside at offsets 11, 12
in the DTLS record header, which would allow it to iterate through
the datagram record by record. As mentioned, this is no longer possible
for CID-based records, and the current implementation can run into
a buffer overflow in this case (because it doesn't validate that
the record length is not larger than what remains in the datagram).
This commit removes the inconsistency in datagram vs. record length
and resolves the buffer overflow issue by not attempting any dissection
of datagrams into records, and instead only counting how often _datagrams_
of a particular size have been dropped.
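A minimal sketch of the new per-datagram counting, under the assumption
of a simple global array used as the map; all names and sizes are
illustrative and not taken verbatim from udp_proxy:

    #include <stddef.h>

    #define MAX_DATAGRAM_SIZE 2048  /* illustrative bound          */
    #define DROP_THRESHOLD       2  /* matches the threshold above */

    /* Inefficient but simple "map": index = datagram size,
     * value = number of times a datagram of that size was dropped. */
    static unsigned drop_count[ MAX_DATAGRAM_SIZE + 1 ];

    /* Return 1 if the datagram may be dropped (and count the drop),
     * 0 if it must be forwarded. */
    static int may_drop_datagram( size_t len )
    {
        if( len > MAX_DATAGRAM_SIZE )
            return( 0 );
        if( drop_count[len] >= DROP_THRESHOLD )
            return( 0 );
        drop_count[len]++;
        return( 1 );
    }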
There is only one practical situation where this change makes a difference:
If datagram packing is used by default but disabled on retransmission
(which OpenSSL has been seen to do), it can happen that we drop a
datagram in its initial transmission and then also drop some of its
records when they are retransmitted one by one afterwards, while the
drop counter stays at 1 instead of 2. However, even in this situation,
we'll correctly count the number of drops from that point on and
eventually stop dropping, because the peer will not fall back to
packing and will hence use stable datagram lengths.
We have explicit recommendations to use US spelling for technical writing, so
let's apply this to code as well for uniformity. (My fingers tend to prefer UK
spelling, so this needs to be fixed in many places.)
sed -i 's/\([Ss]eriali\)s/\1z/g' **/*.[ch] **/*.function **/*.data ChangeLog
This allows callers to discover what an appropriate size is. Otherwise they'd
have to either try repeatedly, or allocate an overly large buffer (or some
combination of those).
Adapt documentation and example usage in ssl_client2.
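A sketch of the calling pattern this enables, assuming the serialization
function is mbedtls_ssl_session_save() and that it reports the required
size through its output-length parameter when the supplied buffer is too
small (the helper below is purely illustrative):

    #include <stdlib.h>
    #include "mbedtls/ssl.h"

    static int save_session( const mbedtls_ssl_session *session,
                             unsigned char **out, size_t *out_len )
    {
        size_t needed = 0;
        int ret;

        /* First call with an empty buffer to learn the required size. */
        ret = mbedtls_ssl_session_save( session, NULL, 0, &needed );
        if( ret != MBEDTLS_ERR_SSL_BUFFER_TOO_SMALL )
            return( ret );

        *out = malloc( needed );
        if( *out == NULL )
            return( MBEDTLS_ERR_SSL_ALLOC_FAILED );

        /* Second call with a buffer of exactly the required size. */
        return( mbedtls_ssl_session_save( session, *out, needed, out_len ) );
    }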
Avoid a useless copy with mbedtls_ssl_get_session() before serialising.
Used in ssl_client2 for testing and demonstrating usage, but unfortunately
that means mbedtls_ssl_get_session() is no longer tested, which will be fixed
in the next commit.
This provides basic testing for the session (de)serialisation functions, as
well as an example of how to use them.
Tested locally with tests/ssl-opt.sh -f '^Session resume'.
This commit adds the command line option 'bad_cid' to the UDP proxy
`./programs/test/udp_proxy`. It takes a non-negative integer value N
which, if not 0, has the effect of duplicating one in every N CID
records and modifying the CID in the first copy sent.
This is to exercise the stack's documented behaviour on receipt
of unexpected CIDs.
It is important to send the record with the unexpected CID first,
because otherwise the packet would be dropped already during
replay protection (the same holds for the implementation of the
existing 'bad_ad' option).
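An illustrative sketch (not udp_proxy's actual code) of what modifying
the CID in the duplicated copy amounts to, assuming the DTLS 1.2 CID
record header layout type(1) version(2) epoch(2) seqnum(6) cid(...)
length(2), so that the CID starts at offset 11, and assuming the draft's
tls12_cid content type value of 25:

    #include <stddef.h>

    #define CID_CONTENT_TYPE 25 /* tls12_cid, assumed value */

    /* Flip the first CID byte in a copy of a CID record before
     * sending it ahead of the unmodified original. */
    static void corrupt_cid_copy( unsigned char *record, size_t len )
    {
        if( len > 11 && record[0] == CID_CONTENT_TYPE )
            record[11] ^= 0xFF;
    }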
This commit modifies the CID configuration API mbedtls_ssl_conf_cid_len()
to allow the configuration of the stack's behaviour when receiving an
encrypted DTLS record with an unexpected CID.
ApplicationData records are not protected against loss by DTLS
and our test applications ssl_client2 and ssl_server2 don't
implement any retransmission scheme to deal with loss of the
data they exchange. Therefore, the UDP proxy programs/test/udp_proxy
does not drop ApplicationData records.
With the introduction of the Connection ID, encrypted ApplicationData
records cannot be recognized as such by inspecting the record content
type, as the latter is always set to the CID-specific content type for
protected records using CIDs, while the actual content type is hidden
in the plaintext.
To keep tests working, this commit adds CID records to the list of
content types which are protected against dropping by the UDP proxy.
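The resulting protection rule, sketched with illustrative names and
assuming content type values 23 (ApplicationData) and 25 (tls12_cid) in
the first byte of the record:

    #include <stddef.h>

    /* Return 1 if the proxy may drop this record, 0 if it must not. */
    static int record_may_be_dropped( const unsigned char *record,
                                      size_t len )
    {
        if( len == 0 )
            return( 0 );
        if( record[0] == 23 )   /* ApplicationData: never dropped       */
            return( 0 );
        if( record[0] == 25 )   /* CID record: may hide ApplicationData */
            return( 0 );
        return( 1 );
    }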
Context:
The CID draft does not require the length of the CIDs used for incoming
records to stay constant over the course of a connection. Since the record
header does not contain a length field for the CID, this means that if
CIDs of varying lengths are used, the CID length must be inferred from
other aspects of the record header (such as the epoch) and/or by means
outside of the protocol, e.g. by coding its length in the CID itself.
Inferring the CID length from the record's epoch is theoretically possible
in DTLS 1.2, but it requires the information about the epoch to be present
even if the epoch is no longer used: That's because one should silently drop
records from old epochs, but not the entire datagrams to which they belong
(there might be entire flights in a single datagram, including a change of
epoch); however, in order to do so, one needs to parse the record's content
length, the position of which is only known once the CID length for the epoch
is known. In conclusion, inferring the CID length from the record epoch
puts a significant burden on the implementation and moreover entangles
record processing with the high-level logic of the protocol (determining
which epochs are in use in which flights, when they are changed, etc. --
this would normally determine when we drop epochs).
Moreover, with DTLS 1.3, CIDs are no longer uniquely associated to epochs,
but every epoch may use a set of CIDs of varying lengths -- in that case,
it's even theoretically impossible to do record header parsing based on
the epoch configuration only.
We must therefore seek a way for standalone record header parsing, which
means that we must either (a) fix the CID lengths for incoming records,
or (b) allow the application code to configure a callback to implement
an application-specific CID parsing which would somehow infer the length
of the CID from the CID itself.
Supporting multiple lengths for incoming CIDs significantly increases
complexity while, on the other hand, the restriction to a fixed CID length
for incoming CIDs (which the application controls - in contrast to the
lengths of the CIDs used when writing messages to the peer) doesn't
appear to severely limit the usefulness of the CID extension.
Therefore, the initial implementation of the CID feature will require
a fixed length for incoming CIDs, which is what this commit enforces,
in the following way:
In order to avoid a change of API in case support for variable-length
CIDs is added at some point, we keep mbedtls_ssl_set_cid(), which
includes a CID length parameter, but add a new API mbedtls_ssl_conf_cid_len()
which applies to an SSL configuration and fixes the CID length that any
call to mbedtls_ssl_set_cid() on an SSL context bound to the given SSL
configuration must use.
While this creates a slight redundancy of parameters, it leaves room to
later add an API like mbedtls_ssl_conf_cid_len_cb() which
could allow users to register a callback which dynamically infers the
length of a CID at record header parsing time, without changing the
rest of the API.
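A sketch of the intended usage, based on the function names in this
message; the exact prototypes, in particular the signature assumed for
mbedtls_ssl_conf_cid_len() and the enable flag passed to
mbedtls_ssl_set_cid(), are assumptions for illustration only:

    #include "mbedtls/ssl.h"

    static void configure_cid( mbedtls_ssl_config *conf,
                               mbedtls_ssl_context *ssl )
    {
        static const unsigned char own_cid[4] = { 0xDE, 0xAD, 0xBE, 0xEF };

        /* Fix the length of incoming CIDs once, at configuration level. */
        mbedtls_ssl_conf_cid_len( conf, sizeof( own_cid ) ); /* assumed */

        /* Any CID set on a context bound to this configuration must now
         * have exactly the configured length. */
        mbedtls_ssl_set_cid( ssl, MBEDTLS_SSL_CID_ENABLED,
                             own_cid, sizeof( own_cid ) );
    }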
We used the name tinycrypt in the file names, but uecc in config.h, all.sh and
other places, which could be confusing. Just use tinycrypt everywhere because
that's the name of the project and repo where we took the files.
The changes were made using the following commands (with GNU sed and zsh):
sed -i 's/uecc/tinycrypt/g' **/*.[ch] tests/scripts/all.sh
sed -i 's/MBEDTLS_USE_UECC/MBEDTLS_USE_TINYCRYPT/g' **/*.[ch] tests/scripts/all.sh scripts/config.pl
This commit improves hygiene and formatting of macro definitions
throughout the library. Specifically:
- It adds parentheses around parameters to avoid unintended
interpretation of arguments, e.g. due to operator precedence.
- It adds uses of the `do { ... } while( 0 )` idiom for macros that
can be used as commands (both points are illustrated below).
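Illustrative before/after examples, using hypothetical macros rather
than ones from the library:

    /* Before: neither the parameters nor the expansion are parenthesized,
     * so SCALE_BAD( a + b, 2 ) expands to a + b * 2 + 1. */
    #define SCALE_BAD( x, f )    x * f + 1

    /* After: parameters and the whole expansion are parenthesized. */
    #define SCALE_GOOD( x, f )   ( ( x ) * ( f ) + 1 )

    /* After: a command-like macro wrapped in do { ... } while( 0 ),
     * so it behaves as a single statement, e.g. in an unbraced if/else. */
    #define SWAP_INT( a, b )                    \
        do                                      \
        {                                       \
            int swap_tmp_ = ( a );              \
            ( a ) = ( b );                      \
            ( b ) = swap_tmp_;                  \
        } while( 0 )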
Remove the ssl_cert_test sample application, as it uses
hardcoded certificates that moved, and is redundant with the x509
tests and applications. Fixes #1905.
* restricted/pr/550:
Update query_config.c
Fix failure in SSLv3 per-version suites test
Adjust DES exclude lists in test scripts
Clarify 3DES changes in ChangeLog
Fix documentation for 3DES removal
Exclude 3DES tests in test scripts
Fix wording of ChangeLog and 3DES_REMOVE docs
Reduce priority of 3DES ciphersuites
* public/pr/2429:
Add ChangeLog entry for unused bits in bitstrings
Improve docs for ASN.1 bitstrings and their usage
Add tests for (named) bitstring to suite_asn1write
Fix ASN1 bitstring writing