Introduce MBEDTLS_X509_INFO to indicate the availability of the
mbedtls_x509_*_info() functions and closely related APIs. When this is
not defined, also omit name and description from
mbedtls_oid_descriptor_t, and omit OID arrays, macros, and types that
are entirely unused. This saves several KB of code space.
Change-Id: I056312613379890e0d70e1d08c34171287c0aa17
* origin/pr/2481:
Document support for MD2 and MD4 in programs/x509/cert_write
Correct name of X.509 parsing test for well-formed, ill-signed CRT
Add test cases exercising successful verification of MD2/MD4/MD5 CRT
Add test case exercising verification of valid MD2 CRT
Add MD[245] test CRTs to tree
Add instructions for MD[245] test CRTs to tests/data_files/Makefile
Add support for MD2 to CSR and CRT writing example programs
Convert further x509parse tests to use lower-case hex data
Correct placement of ChangeLog entry
Adapt ChangeLog
Use SHA-256 instead of MD2 in X.509 CRT parsing tests
Consistently use lower case hex data in X.509 parsing tests
To prevent dropping the same message over and over again, the UDP proxy
test application programs/test/udp_proxy _logically_ maintains a mapping
from records to the number of times the record has already been dropped,
and stops dropping once a configurable threshold (currently 2) is passed.
However, the actual implementation deviates from this logical view
in two crucial respects:
- To keep the implementation simple and independent of
implementations of suitable map interfaces, it only counts how
many times a record of a given _size_ has been dropped, and
stops dropping further records of that size once the configurable
threshold is passed. Of course, this is not foolproof, but it is a
good enough approximation for the proxy, and it allows the use of an
inefficient but simple array for the required map.
- The implementation mixes datagram lengths and record lengths:
When deciding whether it is allowed to drop a datagram, it
uses the total datagram size as a lookup index into the map
counting the number of times a packet has been dropped. However,
when updating this map, the UDP proxy traverses the datagram
record by record, and updates the mapping at the level of record
lengths.
Apart from this inconsistency, the current implementation suffers
from a lack of bounds checking for the parsed length of incoming
DTLS records that can lead to a buffer overflow when facing
malformed records.
This commit removes the inconsistency in datagram vs. record length
and resolves the buffer overflow issue by not attempting any dissection
of datagrams into records, and instead only counting how often _datagrams_
of a particular size have been dropped.
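The resulting logic is roughly the following (a sketch with made-up
names and bounds; the actual udp_proxy code differs in detail):

    #include <stddef.h>   /* size_t */

    #define DROP_THRESHOLD   2    /* stop dropping after this many drops   */
    #define MAX_DGRAM_LEN 2048    /* simple array indexed by datagram size */

    static unsigned drop_count[MAX_DGRAM_LEN + 1];

    /* Return 1 if a datagram of the given size may still be dropped. */
    static int may_drop_datagram(size_t len)
    {
        if (len > MAX_DGRAM_LEN || drop_count[len] >= DROP_THRESHOLD) {
            return 0;
        }
        drop_count[len]++;    /* count at datagram, not record, level */
        return 1;
    }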
There is only one practical situation where this makes a difference:
If datagram packing is used by default but disabled on retransmission
(which OpenSSL has been seen to do), it can happen that we drop a
datagram in its initial transmission, then also drop some of its records
when they are retransmitted one by one afterwards, yet still keep the
drop counter at 1 instead of 2. However, even in this situation, we'll
correctly count the number of droppings from that point on and eventually
stop dropping, because the peer will not fall back to using packing
and hence use stable record lengths.
For now the option has no effect.
Adapted existing example config files. The fact that I needed to do this
highlights that this is a slightly incompatible change: existing users need to
update their existing custom configs (if standalone as opposed to based on the
default config) in order to still get the same behaviour.
The alternative would be to have a negative config option (eg NO_TLS or
DTLS_ONLY) but this doesn't fit as nicely with the existing options, so
hopefully the minor incompatibility is acceptable.
I don't think it's worth adding a new component to all.sh:
- builds with both DTLS and TLS are done in the default (and full) config
- TLS-only builds are done with eg config-suite-b.h in test-ref-configs
- a DTLS-only build is done with config-thread.h in test-ref-configs
- builds with none of them (and SSL_TLS_C enabled) are forbidden
The introduction of the Connection ID feature adds a further problem:
the CID length is not part of the record header but is negotiated
dynamically during (potentially encrypted!) handshakes. It is hence
impossible for a passive traffic analyzer (in this case our UDP proxy)
to reliably parse record headers; in particular, it is impossible to
reliably infer the length of a record, or to dissect a datagram into
records.
The previous implementation of the UDP proxy was not CID-aware and
assumed that the record length would always reside at offsets 11 and 12
in the DTLS record header, which allowed it to iterate through the
datagram record by record. This is no longer possible for CID-based
records, and could lead to a buffer overflow (since the implementation
did not validate that the parsed record length doesn't exceed what
remains in the datagram). Counting dropped datagrams instead of
records, as described above, also resolves this issue, since no
dissection of datagrams into records is attempted anymore.
The example programs programs/x509/cert_req and programs/x509/cert_write
(demonstrating the use of X.509 CSR and CRT writing functionality)
previously didn't support MD2 signatures.
For testing purposes, this commit adds support for MD2 to cert_req,
and support for MD2 and MD4 to cert_write.
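Under the hood this amounts to mapping the requested digest name to the
corresponding mbedtls_md_type_t before signing; roughly (a sketch,
set_md_for_testing is a hypothetical helper):

    #include <string.h>
    #include "mbedtls/x509_crt.h"

    static void set_md_for_testing(mbedtls_x509write_cert *crt,
                                   const char *md_name)
    {
        mbedtls_md_type_t md = MBEDTLS_MD_SHA256;  /* sane default */

    #if defined(MBEDTLS_MD2_C)
        if (strcmp(md_name, "MD2") == 0)
            md = MBEDTLS_MD_MD2;    /* for testing only -- MD2 is broken */
    #endif
    #if defined(MBEDTLS_MD4_C)
        if (strcmp(md_name, "MD4") == 0)
            md = MBEDTLS_MD_MD4;    /* likewise insecure */
    #endif
        mbedtls_x509write_crt_set_md_alg(crt, md);
    }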
We have explicit recommendations to use US spelling for technical writing, so
let's apply this to code as well for uniformity. (My fingers tend to prefer UK
spelling, so this needs to be fixed in many places.)
sed -i 's/\([Ss]eriali\)s/\1z/g' **/*.[ch] **/*.function **/*.data ChangeLog
This allows callers to discover what an appropriate size is. Otherwise they'd
have to either try repeatedly, or allocate an overly large buffer (or some
combination of those).
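For example, assuming the save function reports the required size when
called with a too-small (here: empty) buffer, callers can use a
two-call pattern (a sketch; error handling elided):

    #include <stdlib.h>
    #include "mbedtls/ssl.h"

    static int save_session(const mbedtls_ssl_session *session,
                            unsigned char **out, size_t *out_len)
    {
        size_t needed;

        /* First call only queries the required buffer size. */
        (void) mbedtls_ssl_session_save(session, NULL, 0, &needed);

        *out = malloc(needed);
        if (*out == NULL)
            return -1;

        return mbedtls_ssl_session_save(session, *out, needed, out_len);
    }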
Adapt documentation and example usage in ssl_client2.
Avoid useless copy with mbedtls_ssl_get_session() before serializing.
Used in ssl_client2 for testing and demonstrating usage, but unfortunately
that means mbedtls_ssl_get_session() is no longer tested, which will be fixed
in the next commit.
This provides basic testing for the session (de)serialization functions, as
well as an example of how to use them.
Tested locally with tests/ssl-opt.sh -f '^Session resume'.
This allows testing PSK-based ciphersuites via ssl_server2 in builds
which have MBEDTLS_X509_CRT_PARSE_C enabled but both MBEDTLS_FS_IO and
MBEDTLS_CERTS_C disabled.
This allows testing PSK-based ciphersuites via ssl_client2 in builds
which have MBEDTLS_X509_CRT_PARSE_C enabled but both MBEDTLS_FS_IO and
MBEDTLS_CERTS_C disabled.
A similar change is applied to the `crt_file` and `key_file` arguments.
This commit adds the command line option 'bad_cid' to the UDP proxy
`./programs/test/udp_proxy`. It takes a non-negative integer value N
which, if nonzero, has the effect of duplicating roughly 1 in N records
that use a CID, modifying the CID in the first copy sent.
This is to exercise the stack's documented behaviour on receipt
of unexpected CIDs.
It is important to send the record with the unexpected CID first,
because otherwise the packet would be dropped already during
replay protection (the same holds for the implementation of the
existing 'bad_ad' option).
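The gist of the logic, as a sketch (send_packet, MAX_REC_LEN, and the
exact CID offset are illustrative, not the actual udp_proxy code):

    #include <stddef.h>
    #include <string.h>

    #define MAX_REC_LEN 2048   /* illustrative bound */

    extern void send_packet(const unsigned char *buf, size_t len);

    static unsigned cid_record_count;  /* counts records carrying a CID */

    static void forward_cid_record(const unsigned char *rec, size_t len,
                                   unsigned bad_cid /* N from the CLI */)
    {
        if (len > MAX_REC_LEN)
            return;                 /* keep the sketch memory-safe */

        if (bad_cid != 0 && ++cid_record_count % bad_cid == 0) {
            unsigned char bad[MAX_REC_LEN];
            memcpy(bad, rec, len);
            bad[11] ^= 0x5A;        /* corrupt one CID byte (offset is
                                       illustrative) */
            send_packet(bad, len);  /* corrupted copy must go out first,
                                       see note on replay protection */
        }
        send_packet(rec, len);      /* then the genuine record */
    }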
This commit modifies the CID configuration API mbedtls_ssl_conf_cid_len()
to allow the configuration of the stack's behaviour when receiving an
encrypted DTLS record with an unexpected CID.
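Configuration might then look as follows (a sketch; the extended
signature and the constant name are assumptions based on the
description above and may differ in the final API):

    #include "mbedtls/ssl.h"

    static void configure_cid(mbedtls_ssl_config *conf)
    {
        /* Fix the incoming CID length to 4 bytes and silently ignore
         * records arriving with an unknown CID instead of failing. */
        mbedtls_ssl_conf_cid_len(conf, 4, MBEDTLS_SSL_UNEXPECTED_CID_IGNORE);
    }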
ApplicationData records are not protected against loss by DTLS
and our test applications ssl_client2 and ssl_server2 don't
implement any retransmission scheme to deal with loss of the
data they exchange. Therefore, the UDP proxy programs/test/udp_proxy
does not drop ApplicationData records.
With the introduction of the Connection ID, encrypted ApplicationData
records cannot be recognized as such by inspecting the record content
type, as the latter is always set to the CID-specific content type for
records protected using a CID, while the actual content type is hidden
inside the encrypted payload.
To keep tests working, this commit adds CID records to the list of
content types which are protected against dropping by the UDP proxy.
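Roughly, the proxy's drop-protection check becomes (a sketch; the
actual helper in udp_proxy differs):

    #include "mbedtls/ssl.h"

    /* Return 1 if a record must never be dropped by the proxy. */
    static int record_must_not_be_dropped(const unsigned char *rec)
    {
        unsigned char type = rec[0];   /* DTLS record content type */

        if (type == MBEDTLS_SSL_MSG_APPLICATION_DATA)
            return 1;
    #if defined(MBEDTLS_SSL_DTLS_CONNECTION_ID)
        /* Protected records using a CID may carry application data. */
        if (type == MBEDTLS_SSL_MSG_CID)
            return 1;
    #endif
        return 0;
    }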
Context:
The CID draft does not require the length of the CIDs used for incoming
records to stay constant over the course of a connection. Since the record
header does not contain a length field for the CID, this means that if
CIDs of varying lengths are used, the CID length must be inferred from
other aspects of the record header (such as the epoch) and/or by means
outside of the protocol, e.g. by coding its length in the CID itself.
Inferring the CID length from the record's epoch is theoretically possible
in DTLS 1.2, but it requires the information about the epoch to be present
even if the epoch is no longer used: That's because one should silently drop
records from old epochs, but not the entire datagrams to which they belong
(there might be entire flights in a single datagram, including a change of
epoch); however, in order to do so, one needs to parse the record's content
length, the position of which is only known once the CID length for the epoch
is known. In conclusion, inferring the CID length from the record epoch
puts a significant burden on the implementation and moreover entangles
record processing with the high-level logic of the protocol (determining
which epochs
are in use in which flights, when they are changed, etc. -- this would normally
determine when we drop epochs).
Moreover, with DTLS 1.3, CIDs are no longer uniquely associated with epochs,
but every epoch may use a set of CIDs of varying lengths -- in that case,
it's even theoretically impossible to do record header parsing based on
the epoch configuration only.
We must therefore seek a way for standalone record header parsing, which
means that we must either (a) fix the CID lengths for incoming records,
or (b) allow the application code to configure a callback to implement
an application-specific CID parsing which would somehow infer the length
of the CID from the CID itself.
Supporting multiple lengths for incoming CIDs significantly increases
complexity while, on the other hand, the restriction to a fixed CID length
for incoming CIDs (which the application controls - in contrast to the
lengths of the CIDs used when writing messages to the peer) doesn't
appear to severely limit the usefulness of the CID extension.
Therefore, the initial implementation of the CID feature will require
a fixed length for incoming CIDs, which is what this commit enforces,
in the following way:
In order to avoid an API change in case support for variable-length
CIDs is added at some point, we keep mbedtls_ssl_set_cid(), which
includes a CID length parameter, but add a new API mbedtls_ssl_conf_cid_len()
which applies to an SSL configuration and fixes the CID length that any
call to mbedtls_ssl_set_cid() on an SSL context bound to the given SSL
configuration must use.
While this creates a slight redundancy of parameters, it leaves room to
later add an API such as mbedtls_ssl_conf_cid_len_cb(), which could let
users register a callback that dynamically infers the length of a CID
at record header parsing time, without changing the rest of the API.
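Putting the two calls together, usage would look roughly like this (a
sketch based on the names above, using the original two-parameter form
of mbedtls_ssl_conf_cid_len(), before the extension with the
unexpected-CID flag mentioned earlier; error handling elided):

    #include "mbedtls/ssl.h"

    /* ssl is assumed to be set up against conf elsewhere. */
    static int enable_cid(mbedtls_ssl_context *ssl, mbedtls_ssl_config *conf)
    {
        unsigned char my_cid[4] = { 0xDE, 0xAD, 0xBE, 0xEF };

        /* Fix the length of incoming CIDs once per configuration. */
        mbedtls_ssl_conf_cid_len(conf, sizeof(my_cid));

        /* Any context bound to conf must then use exactly that length. */
        return mbedtls_ssl_set_cid(ssl, MBEDTLS_SSL_CID_ENABLED,
                                   my_cid, sizeof(my_cid));
    }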
We used 'tinycrypt' in the file names, but 'uecc' in config.h, all.sh, and
other places, which could be confusing. Just use 'tinycrypt' everywhere,
because that's the name of the project and repo from which we took the files.
The changes were made using the following commands (with GNU sed and zsh):
sed -i 's/uecc/tinycrypt/g' **/*.[ch] tests/scripts/all.sh
sed -i 's/MBEDTLS_USE_UECC/MBEDTLS_USE_TINYCRYPT/g' **/*.[ch] tests/scripts/all.sh scripts/config.pl