* origin/pr/2481:
Document support for MD2 and MD4 in programs/x509/cert_write
Correct name of X.509 parsing test for well-formed, ill-signed CRT
Add test cases exercising successful verification of MD2/MD4/MD5 CRT
Add test case exercising verification of valid MD2 CRT
Add MD[245] test CRTs to tree
Add instructions for MD[245] test CRTs to tests/data_files/Makefile
Add support for MD2 to CSR and CRT writing example programs
Convert further x509parse tests to use lower-case hex data
Correct placement of ChangeLog entry
Adapt ChangeLog
Use SHA-256 instead of MD2 in X.509 CRT parsing tests
Consistently use lower case hex data in X.509 parsing tests
To prevent dropping the same message over and over again, the UDP proxy
test application programs/test/udp_proxy _logically_ maintains a mapping
from records to the number of times the record has already been dropped,
and stops dropping once a configurable threshold (currently 2) is passed.
However, the actual implementation deviates from this logical view
in two crucial respects:
- To keep the implementation simple and independent of
implementations of suitable map interfaces, it only counts how
many times a record of a given _size_ has been dropped, and
stops dropping further records of that size once the configurable
threshold is passed. Of course, this is not foolproof, but a
good enough approximation for the proxy, and it allows the use of
an inefficient but simple array for the required map.
- The implementation mixes datagram lengths and record lengths:
When deciding whether it is allowed to drop a datagram, it
uses the total datagram size as a lookup index into the map
counting the number of times a packet has been dropped. However,
when updating this map, the UDP proxy traverses the datagram
record by record, and updates the mapping at the level of record
lengths.
Apart from this inconsistency, the current implementation suffers
from a lack of bounds checking for the parsed length of incoming
DTLS records that can lead to a buffer overflow when facing
malformed records.
This commit removes the inconsistency in datagram vs. record length
and resolves the buffer overflow issue by not attempting any dissection
of datagrams into records, and instead only counting how often _datagrams_
of a particular size have been dropped.
There is only one practical situation where this makes a difference:
If datagram packing is used by default but disabled on retransmission
(which OpenSSL has been seen to do), it can happen that we drop a
datagram in its initial transmission, then also drop some of its records
when they are retransmitted one-by-one afterwards, yet still keep the
drop counter at 1 instead of 2. However, even in this situation, we'll
correctly count the number of droppings from that point on and eventually
stop dropping, because the peer will not fall back to using packing
and hence use stable record lengths.
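As an illustration of the new counting approach, here is a minimal sketch of
per-size datagram drop counting using a plain array as the map; the names and
constants are hypothetical, not the actual udp_proxy code:

    #include <stddef.h>   /* size_t */

    /* Hypothetical sketch, not the actual udp_proxy implementation. */
    #define MAX_MSG_SIZE    ( 16384 + 2048 )  /* upper bound on datagram size  */
    #define DROP_THRESHOLD  2                 /* stop dropping after this many */

    static unsigned int dropped[MAX_MSG_SIZE + 1];

    /* Return 1 if a datagram of the given length may still be dropped. */
    static int may_drop( size_t len )
    {
        return( len <= MAX_MSG_SIZE && dropped[len] < DROP_THRESHOLD );
    }

    static void mark_dropped( size_t len )
    {
        if( len <= MAX_MSG_SIZE )
            ++dropped[len];
    }

Because the lookup and the update both use the total datagram length, no
record parsing (and hence no bounds checking of record lengths) is needed.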
For now the option has no effect.
Adapted existing example config files. The fact that I needed to do this
highlights that this is a slightly incompatible change: existing users need to
update their existing custom configs (if standalone as opposed to based on the
default config) in order to still get the same behaviour.
The alternative would be to have a negative config option (eg NO_TLS or
DTLS_ONLY) but this doesn't fit as nicely with the existing options, so
hopefully the minor incompatibility is acceptable.
I don't think it's worth adding a new component to all.sh:
- builds with both DTLS and TLS are done in the default (and full) config
- TLS-only builds are done with eg config-suite-b.h in test-ref-configs
- a DTLS-only build is done with config-thread.h in test-ref-configs
- builds with none of them (and SSL_TLS_C enabled) are forbidden
To prevent dropping the same message over and over again, the UDP proxy
test application programs/test/udp_proxy _logically_ maintains a mapping
from records to the number of times the record has already been dropped,
and stops dropping once a configurable threshold (currently 2) is passed.
However, the actual implementation deviates from this logical view
in two crucial respects:
- To keep the implementation simple and independent of
implementations of suitable map interfaces, it only counts how
many times a record of a given _size_ has been dropped, and
stops dropping further records of that size once the configurable
threshold is passed. Of course, this is not foolproof, but a
good enough approximation for the proxy, and it allows the use of
an inefficient but simple array for the required map.
- The implementation mixes datagram lengths and record lengths:
When deciding whether it is allowed to drop a datagram, it
uses the total datagram size as a lookup index into the map
counting the number of times a packet has been dropped. However,
when updating this map, the UDP proxy traverses the datagram
record by record, and updates the mapping at the level of record
lengths.
Apart from this inconsistency, the introduction of the Connection ID
feature leads to yet another problem: The CID length is not part of
the record header but dynamically negotiated during (potentially
encrypted!) handshakes, and it is hence impossible for a passive traffic
analyzer (in this case our UDP proxy) to reliably parse record headers;
especially, it isn't possible to reliably infer the length of a record,
nor to dissect a datagram into records.
The previous implementation of the UDP proxy was not CID-aware and
assumed that the record length would always reside at offsets 11, 12
in the DTLS record header, which would allow it to iterate through
the datagram record by record. As mentioned, this is no longer possible
for CID-based records, and the current implementation can run into
a buffer overflow in this case (because it doesn't validate that
the record length is not larger than what remains in the datagram).
This commit removes the inconsistency in datagram vs. record length
and resolves the buffer overflow issue by not attempting any dissection
of datagrams into records, and instead only counting how often _datagrams_
of a particular size have been dropped.
There is only one practical situation where this makes a difference:
If datagram packing is used by default but disabled on retransmission
(which OpenSSL has been seen to do), it can happen that we drop a
datagram in its initial transmission, then also drop some of its records
when they are retransmitted one-by-one afterwards, yet still keep the
drop counter at 1 instead of 2. However, even in this situation, we'll
correctly count the number of droppings from that point on and eventually
stop dropping, because the peer will not fall back to using packing
and hence use stable record lengths.
The example programs programs/x509/cert_req and programs/x509/cert_write
(demonstrating the use of X.509 CSR and CRT writing functionality)
previously didn't support MD2 signatures.
For testing purposes, this commit adds support for MD2 to cert_req,
and support for MD2 and MD4 to cert_write.
We have explicit recommendations to use US spelling for technical writing, so
let's apply this to code as well for uniformity. (My fingers tend to prefer UK
spelling, so this needs to be fixed in many places.)
sed -i 's/\([Ss]eriali\)s/\1z/g' **/*.[ch] **/*.function **/*.data ChangeLog
This allows callers to discover what an appropriate size is. Otherwise they'd
have to either try repeatedly, or allocate an overly large buffer (or some
combination of those).
Adapt documentation and example usage in ssl_client2.
Avoid useless copy with mbedtls_ssl_get_session() before serialising.
Used in ssl_client2 for testing and demonstrating usage, but unfortunately
that means mbedtls_ssl_get_session() is no longer tested, which will be fixed
in the next commit.
This provides basic testing for the session (de)serialisation functions, as
well as an example of how to use them.
Tested locally with tests/ssl-opt.sh -f '^Session resume'.
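For reference, a minimal usage sketch of the size-discovery pattern described
above; the save function name and the convention that a zero-length buffer
merely reports the required size follow later Mbed TLS releases and are
assumptions here, not something this commit defines:

    #include "mbedtls/ssl.h"
    #include "mbedtls/platform.h"   /* mbedtls_calloc(), mbedtls_free() */

    /* 'session' is an mbedtls_ssl_session obtained earlier. */
    size_t len = 0;
    unsigned char *buf;

    /* First call with no buffer: fails, but reports the needed size in len. */
    (void) mbedtls_ssl_session_save( &session, NULL, 0, &len );

    buf = mbedtls_calloc( 1, len );
    if( buf != NULL &&
        mbedtls_ssl_session_save( &session, buf, len, &len ) == 0 )
    {
        /* buf[0..len-1] holds the serialized session; it can be stored
         * and later restored with mbedtls_ssl_session_load(). */
    }
    mbedtls_free( buf );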
This allows testing PSK-based ciphersuites via ssl_server2 in builds
which have MBEDTLS_X509_CRT_PARSE_C enabled but both MBEDTLS_FS_IO and
MBEDTLS_CERTS_C disabled.
This allows testing PSK-based ciphersuites via ssl_client2 in builds
which have MBEDTLS_X509_CRT_PARSE_C enabled but both MBEDTLS_FS_IO and
MBEDTLS_CERTS_C disabled.
A similar change is applied to the `crt_file` and `key_file` arguments.
This commit adds the command line option 'bad_cid' to the UDP proxy
`./programs/test/udp_proxy`. It takes a non-negative integral value N,
which, if not 0, has the effect of duplicating 1 out of every N records
that use a CID and modifying the CID in the first copy sent.
This is to exercise the stack's documented behaviour on receipt
of unexpected CIDs.
It is important to send the record with the unexpected CID first,
because otherwise the packet would be dropped already during
replay protection (the same holds for the implementation of the
existing 'bad_ad' option).
This commit modifies the CID configuration API mbedtls_ssl_conf_cid_len()
to allow the configuration of the stack's behaviour when receiving an
encrypted DTLS record with unexpected CID.
ApplicationData records are not protected against loss by DTLS
and our test applications ssl_client2 and ssl_server2 don't
implement any retransmission scheme to deal with loss of the
data they exchange. Therefore, the UDP proxy programs/test/udp_proxy
does not drop ApplicationData records.
With the introduction of the Connection ID, encrypted ApplicationData
records cannot be recognized as such by inspecting the record content
type, as the latter is always set to the CID specific content type for
protected records using CIDs, while the actual content type is hidden
in the plaintext.
To keep tests working, this commit adds CID records to the list of
content types which are protected against dropping by the UDP proxy.
Context:
The CID draft does not require the length of CIDs used for incoming
records to stay constant over the course of a connection. Since the record
header does not contain a length field for the CID, this means that if
CIDs of varying lengths are used, the CID length must be inferred from
other aspects of the record header (such as the epoch) and/or by means
outside of the protocol, e.g. by coding its length in the CID itself.
Inferring the CID length from the record's epoch is theoretically possible
in DTLS 1.2, but it requires the information about the epoch to be present
even if the epoch is no longer used: That's because one should silently drop
records from old epochs, but not the entire datagrams to which they belong
(there might be entire flights in a single datagram, including a change of
epoch); however, in order to do so, one needs to parse the record's content
length, the position of which is only known once the CID length for the epoch
is known. In conclusion, it puts a significant burden on the implementation
to infer the CID length from the record epoch, which moreover entangles record
processing with the high-level logic of the protocol (determining which epochs
are in use in which flights, when they are changed, etc. -- this would normally
determine when we drop epochs).
Moreover, with DTLS 1.3, CIDs are no longer uniquely associated to epochs,
but every epoch may use a set of CIDs of varying lengths -- in that case,
it's even theoretically impossible to do record header parsing based on
the epoch configuration only.
We must therefore seek a way for standalone record header parsing, which
means that we must either (a) fix the CID lengths for incoming records,
or (b) allow the application code to configure a callback to implement
an application-specific CID parsing which would somehow infer the length
of the CID from the CID itself.
Supporting multiple lengths for incoming CIDs significantly increases
complexity while, on the other hand, the restriction to a fixed CID length
for incoming CIDs (which the application controls - in contrast to the
lengths of the CIDs used when writing messages to the peer) doesn't
appear to severely limit the usefulness of the CID extension.
Therefore, the initial implementation of the CID feature will require
a fixed length for incoming CIDs, which is what this commit enforces,
in the following way:
In order to avoid an API change in case support for variable-length
CIDs is added at some point, we keep mbedtls_ssl_set_cid(), which
includes a CID length parameter, but add a new API mbedtls_ssl_conf_cid_len()
which applies to an SSL configuration and fixes the CID length that
any call to mbedtls_ssl_set_cid() on an SSL context bound
to the given SSL configuration must use.
While this creates a slight redundancy of parameters, it leaves room
to later add an API like mbedtls_ssl_conf_cid_len_cb() which
could allow users to register a callback which dynamically infers the
length of a CID at record header parsing time, without changing the
rest of the API.
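To make the split between the two APIs concrete, a hypothetical usage sketch;
the exact signatures are assumptions for illustration, not the committed API:

    /* Signatures assumed for illustration only. */
    unsigned char cid[] = { 0xDE, 0xAD, 0xBE, 0xEF };

    /* Fix the length of incoming CIDs once, at the configuration level. */
    mbedtls_ssl_conf_cid_len( &conf, sizeof( cid ) );

    /* Set the actual CID on a context bound to that configuration; the
     * length passed here must match the length fixed in the configuration. */
    mbedtls_ssl_set_cid( &ssl, MBEDTLS_SSL_CID_ENABLED, cid, sizeof( cid ) );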
We called it tinycrypt in the file names, but uecc in config.h, all.sh and
other places, which could be confusing. Just use tinycrypt everywhere because
that's the name of the project and repo where we took the files.
The changes were made using the following commands (with GNU sed and zsh):
sed -i 's/uecc/tinycrypt/g' **/*.[ch] tests/scripts/all.sh
sed -i 's/MBEDTLS_USE_UECC/MBEDTLS_USE_TINYCRYPT/g' **/*.[ch] tests/scripts/all.sh scripts/config.pl
This commit improves hygiene and formatting of macro definitions
throughout the library. Specifically:
- It adds brackets around parameters to avoid unintended
interpretation of arguments, e.g. due to operator precedence.
- It adds uses of the `do { ... } while( 0 )` idiom for macros that
can be used as commands.
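A small before/after illustration of both points (a generic example, not
taken from the library):

    /* Before: unparenthesized parameters, body unusable as a single statement. */
    #define SCALE_BAD( x, f )   x * f
    #define RESET_BAD( ctx )    ctx->pos = 0; ctx->err = 0

    /* After: parenthesized parameters and the do { ... } while( 0 ) idiom,
     * so that SCALE( a + 1, 2 ) and "if( e ) RESET( c ); else ..." both
     * behave as expected. */
    #define SCALE( x, f )   ( ( x ) * ( f ) )
    #define RESET( ctx )                \
        do                              \
        {                               \
            ( ctx )->pos = 0;           \
            ( ctx )->err = 0;           \
        } while( 0 )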
Remove the ssl_cert_test sample application, as it uses
hardcoded certificates that moved, and is redundant with the x509
tests and applications. Fixes #1905.
* restricted/pr/550:
Update query_config.c
Fix failure in SSLv3 per-version suites test
Adjust DES exclude lists in test scripts
Clarify 3DES changes in ChangeLog
Fix documentation for 3DES removal
Exclude 3DES tests in test scripts
Fix wording of ChangeLog and 3DES_REMOVE docs
Reduce priority of 3DES ciphersuites
* public/pr/2429:
Add ChangeLog entry for unused bits in bitstrings
Improve docs for ASN.1 bitstrings and their usage
Add tests for (named) bitstring to suite_asn1write
Fix ASN1 bitstring writing
The previous prototype gave warnings, as the strings produced by #cond and
__FILE__ are const, so we shouldn't implicitly cast them to non-const.
While at it, modify most example programs:
- include the header that has the function declaration, so that the definition
can be checked to match by the compiler
- fix whitespace
- make it work even if PLATFORM_C is not defined:
- CHECK_PARAMS is not documented as depending on PLATFORM_C and there is
no reason why it should
- so, remove the corresponding #if defined in each program...
- and add missing #defines for mbedtls_exit when needed
The result has been tested (make all test with -Werror) with the following
configurations:
- full with CHECK_PARAMS with PLATFORM_C
- full with CHECK_PARAMS without PLATFORM_C
- full without CHECK_PARAMS without PLATFORM_C
- full without CHECK_PARAMS with PLATFORM_C
Additionally, it has been manually tested that adding
mbedtls_aes_init( NULL );
near the normal call to mbedtls_aes_init() in programs/aes/aescrypt2.c has the
expected effect when running the program.
The sample programs require an additional handler function,
mbedtls_param_failed(), to handle any failed parameter validation checks enabled
by the MBEDTLS_CHECK_PARAMS config.h option.
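A minimal sketch of such a handler, close to what the sample programs define
when MBEDTLS_CHECK_PARAMS is enabled (the exact body and output format here
are illustrative):

    #if defined(MBEDTLS_CHECK_PARAMS)
    #include "mbedtls/platform_util.h"

    void mbedtls_param_failed( const char *failure_condition,
                               const char *file,
                               int line )
    {
        mbedtls_printf( "%s:%i: Input param failed - %s\n",
                        file, line, failure_condition );
        mbedtls_exit( MBEDTLS_EXIT_FAILURE );
    }
    #endif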
Some sample programs access structure fields directly. Making these work is
desirable in the long term, but these are not essential for the core
functionality in non-legacy mode.
This commit adds a command line option `md` to the example application
`programs/x509/cert_req`, allowing the user to specify the hash algorithm to use
when signing the CSR.
* development:
ssl-opt.sh: change expected output for large srv packet test with SSLv3
Adapt ChangeLog
Fix bug in SSL ticket implementation removing keys of age < 1s
ssl-opt.sh: Add DTLS session resumption tests
Add ChangeLog entry
Fix typo
Fix hmac_drbg failure in benchmark, with threading
Remove trailing whitespace
Remove trailing whitespace
ssl_server2: add buffer overhead for a termination character
Add missing large and small packet tests for ssl_server2
Added buffer_size and response_size options for ssl-server2. Added appropriate tests.
Solving a conflict in tests/ssl-opt.sh: two sets of tests were added at the
same place (just after large packets):
- restartable ECC tests (in this branch)
- server-side large packets (in development)
Resolution was to move the ECC tests after the newly added server large packet
ones.
This commit replaces multiple `memset()` calls in the example
programs aes/aescrypt2.c and aes/crypt_and_hash.c by calls to
the reliable zeroization function `mbedtls_zeroize()`.
While not a security issue because the code is in the example
programs, it's bad practice and should be fixed.
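The point of the zeroization helper is that, unlike a plain memset() that the
compiler may optimize away before a free(), its stores cannot be elided. A
typical implementation looks roughly like this (sketch):

    #include <stddef.h>   /* size_t */

    static void mbedtls_zeroize( void *v, size_t n )
    {
        volatile unsigned char *p = (unsigned char *) v;
        while( n-- )
            *p++ = 0;
    }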
When using a primality testing function the tolerable error rate depends
on the scheme in question, the required security strength and whether it
is used for key generation or parameter validation. To support all use
cases we need more flexibility than what the old API provides.
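A hedged sketch of what a caller-controlled error bound looks like; the
function name and signature follow the API added in later Mbed TLS versions
and are an assumption here, not part of this commit:

    /* The caller picks the number of Miller-Rabin rounds, and hence the
     * tolerable error probability (at most 4^-rounds), to match the use
     * case (key generation vs. parameter validation). */
    int ret = mbedtls_mpi_is_prime_ext( &X, 50,
                                        mbedtls_ctr_drbg_random, &ctr_drbg );
    if( ret == 0 )
        ; /* X is prime with the requested confidence */
    else if( ret == MBEDTLS_ERR_MPI_NOT_ACCEPTABLE )
        ; /* X is composite */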
If `MBEDTLS_MEMORY_BUFFER_ALLOC_C` is configured and Mbed TLS'
custom buffer allocator is used for calloc() and free(), the
read buffer used by the server example application is allocated
from the buffer allocator, but freed after the buffer allocator
has been destroyed. If memory backtracing is enabled, this leaves
a memory leak in the backtracing structure allocated for the buffer,
as found by valgrind.
Fixes #2069.
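The fix boils down to ordering: anything obtained from the buffer allocator
must be freed before the allocator itself is torn down. Schematically (a
sketch, not the actual ssl_server2 code):

    #include "mbedtls/memory_buffer_alloc.h"
    #include "mbedtls/platform.h"   /* mbedtls_calloc(), mbedtls_free() */

    static unsigned char alloc_buf[100000];

    mbedtls_memory_buffer_alloc_init( alloc_buf, sizeof( alloc_buf ) );

    unsigned char *readbuf = mbedtls_calloc( 1, 4096 ); /* served from alloc_buf */

    /* ... use readbuf ... */

    mbedtls_free( readbuf );             /* must happen first ...            */
    mbedtls_memory_buffer_alloc_free();  /* ... then tear down the allocator */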
* The variables `csr` and `issuer_crt` are initialized but not freed.
* The variable `entropy` is unconditionally freed in the cleanup section
but there's a conditional jump to that section before its initialization.
This commit moves it to the other initializations happening before the
first conditional jump to the cleanup section.
Fixes #1422.
* development-restricted: (578 commits)
Update library version number to 2.13.1
Don't define _POSIX_C_SOURCE in header file
Don't declare and define gmtime()-mutex on Windows platforms
Correct preprocessor guards determining use of gmtime()
Correct documentation of mbedtls_platform_gmtime_r()
Correct typo in documentation of mbedtls_platform_gmtime_r()
Correct POSIX version check to determine presence of gmtime_r()
Improve documentation of mbedtls_platform_gmtime_r()
platform_utils.{c/h} -> platform_util.{c/h}
Don't include platform_time.h if !MBEDTLS_HAVE_TIME
Improve wording of documentation of MBEDTLS_PLATFORM_GMTIME_R_ALT
Fix typo in documentation of MBEDTLS_PLATFORM_GMTIME_R_ALT
Replace 'thread safe' by 'thread-safe' in the documentation
Improve documentation of MBEDTLS_HAVE_TIME_DATE
ChangeLog: Add missing renamings gmtime -> gmtime_r
Improve documentation of MBEDTLS_HAVE_TIME_DATE
Minor documentation improvements
Style: Add missing period in documentation in threading.h
Rename mbedtls_platform_gmtime() to mbedtls_platform_gmtime_r()
Guard decl and use of gmtime mutex by HAVE_TIME_DATE and !GMTIME_ALT
...
Previously, the UDP proxy could only remember one delayed message
for future transmission; if two messages were delayed in succession,
without another one being normally forwarded in between,
the message that got delayed first would be dropped.
This commit enhances the UDP proxy so that it can delay an arbitrary
(compile-time fixed) number of messages in succession.
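Schematically, the single delayed-message slot becomes a small fixed-size
queue; the type, field and constant names below are hypothetical:

    #include <stddef.h>   /* size_t */

    #define MAX_DELAYED_MSG 2     /* compile-time bound, hypothetical name */

    typedef struct
    {
        unsigned char buf[2048];  /* hypothetical buffer size */
        size_t len;
    } delayed_msg;

    static delayed_msg queue[MAX_DELAYED_MSG];
    static size_t queued = 0;     /* number of messages currently held back */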
This setting belongs to the individual connection, not to a configuration
shared by many connections. (If a default value is desired, that can be handled
by the application code that calls mbedtls_ssl_set_mtu().)
There are at least two ways in which this matters:
- per-connection settings can be adjusted if MTU estimates become available
during the lifetime of the connection
- it is at least conceivable that a server might recognize restricted clients
based on range of IPs and immediately set a lower MTU for them. This is much
easier to do with a per-connection setting than by maintaining multiple
near-duplicated ssl_config objects that differ only by the MTU setting.
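Because the setting lives on the connection, it can be adjusted at any point
during the connection's lifetime, for instance (sketch):

    /* Conservative initial estimate when the connection is set up ... */
    mbedtls_ssl_set_mtu( &ssl, 1280 );

    /* ... later, once a better path-MTU estimate becomes available, raise
     * it for this connection only, without touching the shared config. */
    mbedtls_ssl_set_mtu( &ssl, 1450 );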
This commit adds a new command line option `dgram_packing`
to the example client application programs/ssl/ssl_client2, making it
possible to allow or forbid the use of datagram packing.
This commit adds a new command line option `dgram_packing`
to the example server application programs/ssl/ssl_server2
making it possible to allow or forbid the use of datagram packing.
For now, just check that it causes us to fragment. More tests are coming in
follow-up commits to ensure we respect the exact value set, including when
renegotiating.
sprintf( (char *) buf, "%s\r\n", base );
The code above generates a -Wformat-overflow warning since buf and base
have the same size. buf should hold sizeof( base ) plus the characters added
by the format, in this case 2 bytes for "\r\n".
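A minimal sketch of the resulting fix, sizing the destination for the bytes
added by the format (variable names and sizes assumed for illustration):

    #include <stdio.h>

    char base[256] = "example";     /* some NUL-terminated content           */
    char buf[sizeof( base ) + 2];   /* +2 for the "\r\n" added by the format */

    /* No -Wformat-overflow warning: the destination provably fits the
     * longest possible result, including the terminating NUL. */
    sprintf( buf, "%s\r\n", base );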
When MBEDTLS_MEMORY_BUFFER_ALLOC_C was defined, the sample ssl_server2.c was
using its own memory buffer for memory allocated by the library. The memory
used wasn't obvious, so this adds a macro for the size of the allocated memory
buffer, to make it more obvious and hence easier to configure.
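For instance (macro name and value are illustrative assumptions, not
necessarily what ssl_server2.c uses):

    #include "mbedtls/memory_buffer_alloc.h"

    #if defined(MBEDTLS_MEMORY_BUFFER_ALLOC_C)
    #define MEMORY_HEAP_SIZE    120000   /* heap size now visible and tunable */
    static unsigned char alloc_buf[MEMORY_HEAP_SIZE];
    #endif

    int main( void )
    {
    #if defined(MBEDTLS_MEMORY_BUFFER_ALLOC_C)
        mbedtls_memory_buffer_alloc_init( alloc_buf, sizeof( alloc_buf ) );
    #endif
        /* ... */
        return( 0 );
    }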