This is enabled by default as we generally enable things by default unless
there's a reason not to (experimental, deprecated, security risk).
We need a compile-time option because, even though the functions themselves
can easily be garbage-collected by the linker, implementing them requires
saving the 64 bytes of Client/ServerHello.random values after the handshake,
which would otherwise not be needed, and people who don't need this feature
shouldn't have to pay the price of increased RAM usage.
All sample and test programs had a definition of mbedtls_param_failed.
This was necessary because we wanted to be able to build them in a
configuration with MBEDTLS_CHECK_PARAMS set but without a definition
of MBEDTLS_PARAM_FAILED. Now that we activate the sample definition of
MBEDTLS_PARAM_FAILED in config.h when testing with
MBEDTLS_CHECK_PARAMS set, this boilerplate code is no longer needed.
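For reference, the boilerplate each program carried was essentially along the
following lines (a sketch; the exact wording varied slightly between programs,
and it relies on the mbedtls_printf/mbedtls_exit aliases the programs already
set up):

    #if defined(MBEDTLS_CHECK_PARAMS)
    #include "mbedtls/platform_util.h"
    void mbedtls_param_failed( const char *failure_condition,
                               const char *file, int line )
    {
        mbedtls_printf( "%s:%i: Input param failed - %s\n",
                        file, line, failure_condition );
        mbedtls_exit( MBEDTLS_EXIT_FAILURE );
    }
    #endif /* MBEDTLS_CHECK_PARAMS */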
Introduce a new configuration option MBEDTLS_CHECK_PARAMS_ASSERT,
which is disabled by default. When this option is enabled,
MBEDTLS_PARAM_FAILED defaults to assert rather than to a call to
mbedtls_param_failed, and <assert.h> is included.
This fixes #2671 (no easy way to make MBEDTLS_PARAM_FAILED assert)
without breaking backward compatibility. With this change,
`config.pl full` runs tests with MBEDTLS_PARAM_FAILED set to assert,
so the tests will fail if a validation check fails, and programs don't
need to provide their own definition of mbedtls_param_failed().
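A rough sketch of the resulting selection logic for MBEDTLS_PARAM_FAILED (the
actual header code may differ in detail):

    #if defined(MBEDTLS_CHECK_PARAMS) && !defined(MBEDTLS_PARAM_FAILED)
    #if defined(MBEDTLS_CHECK_PARAMS_ASSERT)
    #include <assert.h>
    #define MBEDTLS_PARAM_FAILED( cond )   assert( cond )
    #else
    #define MBEDTLS_PARAM_FAILED( cond ) \
        mbedtls_param_failed( #cond, __FILE__, __LINE__ )
    #endif
    #endif /* MBEDTLS_CHECK_PARAMS && !MBEDTLS_PARAM_FAILED */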
A positive option looks better, but comes with the following compatibility
issue: people who use a custom config.h that is not based on the default
config.h and who need TLS support would have to manually change their config
in order to still get TLS.
Work around that by making the public option negative. Internally the positive
option is used, though.
In the future (when preparing the next major version), we might want to switch
back to a positive option as this would be more consistent with other options
we have.
* origin/pr/2481:
Document support for MD2 and MD4 in programs/x509/cert_write
Correct name of X.509 parsing test for well-formed, ill-signed CRT
Add test cases exercising successful verification of MD2/MD4/MD5 CRT
Add test case exercising verification of valid MD2 CRT
Add MD[245] test CRTs to tree
Add instructions for MD[245] test CRTs to tests/data_files/Makefile
Add support for MD2 to CSR and CRT writing example programs
Convert further x509parse tests to use lower-case hex data
Correct placement of ChangeLog entry
Adapt ChangeLog
Use SHA-256 instead of MD2 in X.509 CRT parsing tests
Consistently use lower case hex data in X.509 parsing tests
To prevent dropping the same message over and over again, the UDP proxy
test application programs/test/udp_proxy _logically_ maintains a mapping
from records to the number of times the record has already been dropped,
and stops dropping once a configurable threshold (currently 2) is passed.
However, the actual implementation deviates from this logical view
in two crucial respects:
- To keep the implementation simple and independent of
implementations of suitable map interfaces, it only counts how
many times a record of a given _size_ has been dropped, and
stops dropping further records of that size once the configurable
threshold is passed. Of course, this is not foolproof, but it is a
good enough approximation for the proxy, and it allows the use of
an inefficient but simple array for the required map.
- The implementation mixes datagram lengths and record lengths:
When deciding whether it is allowed to drop a datagram, it
uses the total datagram size as a lookup index into the map
counting the number of times a packet has been dropped. However,
when updating this map, the UDP proxy traverses the datagram
record by record, and updates the mapping at the level of record
lengths.
Apart from this inconsistency, the current implementation suffers
from a lack of bounds checking on the parsed length of incoming
DTLS records, which can lead to a buffer overflow when facing
malformed records.
This commit removes the inconsistency in datagram vs. record length
and resolves the buffer overflow issue by not attempting any dissection
of datagrams into records, and instead only counting how often _datagrams_
of a particular size have been dropped.
There is only one practical situation where this makes a difference:
If datagram packing is used by default but disabled on retransmission
(which OpenSSL has been seen to do), it can happen that we drop a
datagram in its initial transmission, then also drop some of its records
when they are retransmitted one by one afterwards, yet still keep the
drop counter at 1 instead of 2. However, even in this situation, we'll
correctly count the number of droppings from that point on and eventually
stop dropping, because the peer will not fall back to using packing
and hence use stable record lengths.
For now the option has no effect.
Adapted existing example config files. The fact that I needed to do this
highlights that this is a slightly incompatible change: existing users need to
update their existing custom configs (if standalone as opposed to based on the
default config) in order to still get the same behaviour.
The alternative would be to have a negative config option (eg NO_TLS or
DTLS_ONLY) but this doesn't fit as nicely with the existing options, so
hopefully the minor incompatibility is acceptable.
I don't think it's worth adding a new component to all.sh:
- builds with both DTLS and TLS are done in the default (and full) config
- TLS-only builds are done with eg config-suite-b.h in test-ref-configs
- a DTLS-only build is done with config-thread.h in test-ref-configs
- builds with none of them (and SSL_TLS_C enabled) are forbidden
To prevent dropping the same message over and over again, the UDP proxy
test application programs/test/udp_proxy _logically_ maintains a mapping
from records to the number of times the record has already been dropped,
and stops dropping once a configurable threshold (currently 2) is passed.
However, the actual implementation deviates from this logical view
in two crucial respects:
- To keep the implementation simple and independent of
implementations of suitable map interfaces, it only counts how
many times a record of a given _size_ has been dropped, and
stops dropping further records of that size once the configurable
threshold is passed. Of course, this is not foolproof, but it is a
good enough approximation for the proxy, and it allows the use of
an inefficient but simple array for the required map.
- The implementation mixes datagram lengths and record lengths:
When deciding whether it is allowed to drop a datagram, it
uses the total datagram size as a lookup index into the map
counting the number of times a packet has been dropped. However,
when updating this map, the UDP proxy traverses the datagram
record by record, and updates the mapping at the level of record
lengths.
Apart from this inconsistency, the introduction of the Connection ID
feature leads to yet another problem: The CID length is not part of
the record header but dynamically negotiated during (potentially
encrypted!) handshakes, and it is hence impossible for a passive traffic
analyzer (in this case our UDP proxy) to reliably parse record headers;
especially, it isn't possible to reliably infer the length of a record,
nor to dissect a datagram into records.
The previous implementation of the UDP proxy was not CID-aware and
assumed that the record length would always reside at offsets 11, 12
in the DTLS record header, which would allow it to iterate through
the datagram record by record. As mentioned, this is no longer possible
for CID-based records, and the current implementation can run into
a buffer overflow in this case (because it doesn't validate that
the record length is not larger than what remains in the datagram).
This commit removes the inconsistency in datagram vs. record length
and resolves the buffer overflow issue by not attempting any dissection
of datagrams into records, and instead only counting how often _datagrams_
of a particular size have been dropped.
There is only one practical situation where this makes a difference:
If datagram packing is used by default but disabled on retransmission
(which OpenSSL has been seen to do), it can happen that we drop a
datagram in its initial transmission, then also drop some of its records
when they are retransmitted one by one afterwards, yet still keep the
drop counter at 1 instead of 2. However, even in this situation, we'll
correctly count the number of droppings from that point on and eventually
stop dropping, because the peer will not fall back to using packing
and hence use stable record lengths.
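A minimal sketch of the resulting per-datagram-size drop counting (names and
the size bound are illustrative, not the actual udp_proxy code):

    #define MAX_MSG_LEN     16384   /* assumed upper bound on datagram size */
    #define DROP_THRESHOLD  2       /* stop dropping after this many drops  */

    static unsigned drop_count[ MAX_MSG_LEN + 1 ];

    /* Return 1 if a datagram of the given length may still be dropped. */
    static int may_drop( size_t len )
    {
        if( len > MAX_MSG_LEN || drop_count[ len ] >= DROP_THRESHOLD )
            return( 0 );
        drop_count[ len ]++;
        return( 1 );
    }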
The example programs programs/x509/cert_req and programs/x509/cert_write
(demonstrating the use of X.509 CSR and CRT writing functionality)
previously didn't support MD2 signatures.
For testing purposes, this commit adds support for MD2 to cert_req,
and support for MD2 and MD4 to cert_write.
We have explicit recommendations to use US spelling for technical writing, so
let's apply this to code as well for uniformity. (My fingers tend to prefer UK
spelling, so this needs to be fixed in many places.)
sed -i 's/\([Ss]eriali\)s/\1z/g' **/*.[ch] **/*.function **/*.data ChangeLog
This allows callers to discover what an appropriate size is. Otherwise they'd
have to either try repeatedly, or allocate an overly large buffer (or some
combination of those).
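A hedged sketch of the intended two-call pattern, assuming
mbedtls_ssl_session_save() reports the required size in *olen when the
provided buffer (possibly NULL with length 0) is too small:

    size_t len = 0;
    unsigned char *buf;
    int ret;

    /* First call: only discover the required buffer size. */
    ret = mbedtls_ssl_session_save( &session, NULL, 0, &len );
    if( ret != MBEDTLS_ERR_SSL_BUFFER_TOO_SMALL )
        goto exit;

    if( ( buf = mbedtls_calloc( 1, len ) ) == NULL )
        goto exit;

    /* Second call: actually serialize the session. */
    ret = mbedtls_ssl_session_save( &session, buf, len, &len );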
Adapt documentation and example usage in ssl_client2.
Avoid useless copy with mbedtls_ssl_get_session() before serialising.
Used in ssl_client2 for testing and demonstrating usage, but unfortunately
that means mbedtls_ssl_get_session() is no longer tested, which will be fixed
in the next commit.
This provides basic testing for the session (de)serialisation functions, as
well as an example of how to use them.
Tested locally with tests/ssl-opt.sh -f '^Session resume'.
This allows testing PSK-based ciphersuites via ssl_server2 in builds
which have MBEDTLS_X509_CRT_PARSE_C enabled but both MBEDTLS_FS_IO and
MBEDTLS_CERTS_C disabled.
This allows testing PSK-based ciphersuites via ssl_client2 in builds
which have MBEDTLS_X509_CRT_PARSE_C enabled but both MBEDTLS_FS_IO and
MBEDTLS_CERTS_C disabled.
A similar change is applied to the `crt_file` and `key_file` arguments.
This commit adds the command line option 'bad_cid' to the UDP proxy
`./programs/test/udp_proxy`. It takes a non-negative integer value N
which, if not 0, has the effect of duplicating one out of every N CID
records and modifying the CID in the first copy sent.
This is to exercise the stack's documented behaviour on receipt
of unexpected CIDs.
It is important to send the record with the unexpected CID first,
because otherwise the packet would be dropped already during
replay protection (the same holds for the implementation of the
existing 'bad_ad' option).
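A hypothetical sketch of the duplicate-and-corrupt step (helper names, the
size bound and the fixed CID offset of 11 bytes into the DTLS record header
are illustrative assumptions, not the actual udp_proxy code):

    static unsigned cid_records_seen = 0;

    static void forward_cid_record( const unsigned char *msg, size_t len )
    {
        /* opt.bad_cid is the N from the command line. */
        if( opt.bad_cid != 0 && ++cid_records_seen % opt.bad_cid == 0 )
        {
            unsigned char bad[ MAX_MSG_LEN ];
            memcpy( bad, msg, len );
            bad[11] ^= 0x5A;          /* corrupt the first CID byte          */
            send_packet( bad, len );  /* unexpected CID must go out first... */
        }
        send_packet( msg, len );      /* ...then the unmodified record       */
    }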
This commit modifies the CID configuration API mbedtls_ssl_conf_cid_len()
to allow the configuration of the stack's behaviour when receiving an
encrypted DTLS record with unexpected CID.
ApplicationData records are not protected against loss by DTLS
and our test applications ssl_client2 and ssl_server2 don't
implement any retransmission scheme to deal with loss of the
data they exchange. Therefore, the UDP proxy programs/test/udp_proxy
does not drop ApplicationData records.
With the introduction of the Connection ID, encrypted ApplicationData
records cannot be recognized as such by inspecting the record content
type, as the latter is always set to the CID specific content type for
protected records using CIDs, while the actual content type is hidden
in the plaintext.
To keep tests working, this commit adds CID records to the list of
content types which are protected against dropping by the UDP proxy.
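In code terms, the protection amounts to a check along the following lines
(a sketch; the actual list of protected content types in udp_proxy is longer):

    /* Never drop ApplicationData, nor protected records using a CID,
     * since the latter may carry ApplicationData in their plaintext. */
    if( msg[0] == MBEDTLS_SSL_MSG_APPLICATION_DATA ||
        msg[0] == MBEDTLS_SSL_MSG_CID )
        return( 0 ); /* not droppable */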
Context:
The CID draft does not require the length of the CIDs used for incoming
records to stay constant over the course of a connection. Since the record
header does not contain a length field for the CID, this means that if
CIDs of varying lengths are used, the CID length must be inferred from
other aspects of the record header (such as the epoch) and/or by means
outside of the protocol, e.g. by coding its length in the CID itself.
Inferring the CID length from the record's epoch is theoretically possible
in DTLS 1.2, but it requires the information about the epoch to be present
even if the epoch is no longer used: That's because one should silently drop
records from old epochs, but not the entire datagrams to which they belong
(there might be entire flights in a single datagram, including a change of
epoch); however, in order to do so, one needs to parse the record's content
length, the position of which is only known once the CID length for the epoch
is known. In conclusion, it puts a significant burden on the implementation
to infer the CID length from the record epoch, which moreover entangles record
processing with the high-level logic of the protocol (determining which epochs
are in use in which flights, when they are changed, etc. -- this would normally
determine when we drop epochs).
Moreover, with DTLS 1.3, CIDs are no longer uniquely associated to epochs,
but every epoch may use a set of CIDs of varying lengths -- in that case,
it's even theoretically impossible to do record header parsing based on
the epoch configuration only.
We must therefore seek a way for standalone record header parsing, which
means that we must either (a) fix the CID lengths for incoming records,
or (b) allow the application code to configure a callback implementing
application-specific CID parsing which would somehow infer the length
of the CID from the CID itself.
Supporting multiple lengths for incoming CIDs significantly increases
complexity while, on the other hand, the restriction to a fixed CID length
for incoming CIDs (which the application controls - in contrast to the
lengths of the CIDs used when writing messages to the peer) doesn't
appear to severely limit the usefulness of the CID extension.
Therefore, the initial implementation of the CID feature will require
a fixed length for incoming CIDs, which is what this commit enforces,
in the following way:
In order to avoid a change of API in case support for variable-length
CIDs is added at some point, we keep mbedtls_ssl_set_cid(), which
includes a CID length parameter, but add a new API mbedtls_ssl_conf_cid_len()
which applies to an SSL configuration and fixes the CID length that any
call to mbedtls_ssl_set_cid() on an SSL context bound to the given SSL
configuration must use.
While this creates a slight redundancy of parameters, it makes it possible
to later add an API like mbedtls_ssl_conf_cid_len_cb() which could allow
users to register a callback that dynamically infers the
length of a CID at record header parsing time, without changing the
rest of the API.
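A hedged usage sketch of the resulting two-level API (exact signatures at this
stage of development may differ from what is shown; error checking omitted):

    /* Fix the length of incoming CIDs for every context bound to this
     * configuration ... */
    mbedtls_ssl_conf_cid_len( &conf, 4 );

    /* ... then each context must use a CID of exactly that length. */
    unsigned char own_cid[4] = { 0xDE, 0xAD, 0xBE, 0xEF };
    mbedtls_ssl_set_cid( &ssl, MBEDTLS_SSL_CID_ENABLED,
                         own_cid, sizeof( own_cid ) );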
We called it tinycrypt in the file names, but uecc in config.h, all.sh and
other places, which could be confusing. Just use tinycrypt everywhere, because
that's the name of the project and repo from which we took the files.
The changes were made using the following commands (with GNU sed and zsh):
sed -i 's/uecc/tinycrypt/g' **/*.[ch] tests/scripts/all.sh
sed -i 's/MBEDTLS_USE_UECC/MBEDTLS_USE_TINYCRYPT/g' **/*.[ch] tests/scripts/all.sh scripts/config.pl
This commit improves hygiene and formatting of macro definitions
throughout the library. Specifically:
- It adds parentheses around macro parameters to avoid unintended
interpretation of arguments, e.g. due to operator precedence.
- It adds uses of the `do { ... } while( 0 )` idiom for macros that
can be used as commands.
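An illustrative example of the two conventions (not macros taken from the
library):

    /* Expression-like macro: parenthesize the parameters and the body. */
    #define GET_BYTE( x, n )   ( ( (x) >> ( 8 * (n) ) ) & 0xFF )

    /* Statement-like macro: wrap in do { ... } while( 0 ) so it behaves
     * like a single statement, e.g. after an if without braces. */
    #define ZERO_AND_FREE( p, len )         \
        do                                  \
        {                                   \
            memset( (p), 0, (len) );        \
            free( (p) );                    \
        } while( 0 )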
Remove the ssl_cert_test sample application, as it uses
hardcoded certificates that have been moved, and is redundant with the x509
tests and applications. Fixes #1905.
* restricted/pr/550:
Update query_config.c
Fix failure in SSLv3 per-version suites test
Adjust DES exclude lists in test scripts
Clarify 3DES changes in ChangeLog
Fix documentation for 3DES removal
Exclude 3DES tests in test scripts
Fix wording of ChangeLog and 3DES_REMOVE docs
Reduce priority of 3DES ciphersuites
* public/pr/2429:
Add ChangeLog entry for unused bits in bitstrings
Improve docs for ASN.1 bitstrings and their usage
Add tests for (named) bitstring to suite_asn1write
Fix ASN1 bitstring writing
The previous prototype gave warnings because the strings produced by #cond and
__FILE__ are const, so we shouldn't implicitly cast them to non-const.
While at it, modify most example programs to:
- include the header that has the function declaration, so that the definition
can be checked to match by the compiler
- fix whitespace
- make it work even if PLATFORM_C is not defined:
- CHECK_PARAMS is not documented as depending on PLATFORM_C and there is
no reason why it should
- so, remove the corresponding #if defined in each program...
- and add missing #defines for mbedtls_exit when needed (see the sketch below)
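The fallback added where needed is roughly the usual pattern below (a sketch;
individual programs may differ slightly):

    #if !defined(MBEDTLS_PLATFORM_C)
    #include <stdlib.h>
    #define mbedtls_exit            exit
    #define MBEDTLS_EXIT_SUCCESS    EXIT_SUCCESS
    #define MBEDTLS_EXIT_FAILURE    EXIT_FAILURE
    #endif /* !MBEDTLS_PLATFORM_C */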
The result has been tested (make all test with -Werror) with the following
configurations:
- full with CHECK_PARAMS with PLATFORM_C
- full with CHECK_PARAMS without PLATFORM_C
- full without CHECK_PARAMS without PLATFORM_C
- full without CHECK_PARAMS with PLATFORM_C
Additionally, it has been manually tested that adding
mbedtls_aes_init( NULL );
near the normal call to mbedtls_aes_init() in programs/aes/aescrypt2.c has the
expected effect when running the program.