If int is not capable of storing as many values as unsigned, the code
may generate a trap value. If signed int and unsigned int aren't
32-bit types, the code may calculate meaningless values.
The elements of the HAVEGE state are manipulated with bitwise
operations, with the expectation that the elements are 32-bit
unsigned integers (or larger). But they are declared as int, and so
the code has undefined behavior. Clang with Asan correctly points out
some shifts that reach the sign bit.
Use unsigned int internally. This is technically an aliasing violation
since we're accessing an array of `int` via a pointer to `unsigned
int`, but since we don't access the array directly inside the same
function, it's very unlikely to be compiled in an unintended manner.
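A minimal sketch of the approach, using a hypothetical state type and step
function (the real HAVEGE state and collection routine differ), and assuming
unsigned int is at least 32 bits wide as stated above:

    #include <stddef.h>

    /* Hypothetical state type: the public struct keeps `int` for API
     * compatibility, while the internals want well-defined unsigned shifts. */
    typedef struct {
        int pool[16];
    } state_t;

    static void step(state_t *s)
    {
        /* Access the int array through an unsigned int pointer. This is
         * technically an aliasing violation, but the array is not accessed
         * as int inside this function, so a miscompilation is very unlikely. */
        unsigned int *p = (unsigned int *) s->pool;
        size_t i;

        for (i = 0; i < sizeof(s->pool) / sizeof(s->pool[0]); i++) {
            /* Shifting into the top bit is well defined on unsigned int. */
            p[i] = (p[i] << 1) | (p[i] >> 31);
        }
    }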
* origin/pr/2481:
Document support for MD2 and MD4 in programs/x509/cert_write
Correct name of X.509 parsing test for well-formed, ill-signed CRT
Add test cases exercising successful verification of MD2/MD4/MD5 CRT
Add test case exercising verification of valid MD2 CRT
Add MD[245] test CRTs to tree
Add instructions for MD[245] test CRTs to tests/data_files/Makefile
Add support for MD2 to CSR and CRT writing example programs
Convert further x509parse tests to use lower-case hex data
Correct placement of ChangeLog entry
Adapt ChangeLog
Use SHA-256 instead of MD2 in X.509 CRT parsing tests
Consistently use lower case hex data in X.509 parsing tests
* origin/pr/2497:
Re-generate library/certs.c from script
Add new line at the end of test-ca2.key.enc
Use strict syntax to annotate origin of test data in certs.c
Add run to all.sh exercising !MBEDTLS_PEM_PARSE_C + !MBEDTLS_FS_IO
Allow DHM self test to run without MBEDTLS_PEM_PARSE_C
ssl-opt.sh: Auto-skip tests that use files if MBEDTLS_FS_IO unset
Document origin of hardcoded certificates in library/certs.c
Adapt ChangeLog
Rename server1.der to server1.crt.der
Add DER encoded files to git tree
Add build instructions to generate DER versions of CRTs and keys
Document "none" value for ca_path/ca_file in ssl_client2/ssl_server2
ssl_server2: Skip CA setup if `ca_path` or `ca_file` argument "none"
ssl_client2: Skip CA setup if `ca_path` or `ca_file` argument "none"
Correct white spaces in ssl_server2 and ssl_client2
Adapt ssl_client2 to parse DER encoded test CRTs if PEM is disabled
Adapt ssl_server2 to parse DER encoded test CRTs if PEM is disabled
To prevent dropping the same message over and over again, the UDP proxy
test application programs/test/udp_proxy _logically_ maintains a mapping
from records to the number of times the record has already been dropped,
and stops dropping once a configurable threshold (currently 2) is passed.
However, the actual implementation deviates from this logical view
in two crucial respects:
- To keep the implementation simple and independent of
implementations of suitable map interfaces, it only counts how
many times a record of a given _size_ has been dropped, and
stops dropping further records of that size once the configurable
threshold is passed. Of course, this is not foolproof, but it is a
good enough approximation for the proxy, and it allows the use of
an inefficient but simple array for the required map.
- The implementation mixes datagram lengths and record lengths:
When deciding whether it is allowed to drop a datagram, it
uses the total datagram size as a lookup index into the map
counting the number of times a packet has been dropped. However,
when updating this map, the UDP proxy traverses the datagram
record by record, and updates the mapping at the level of record
lengths.
Apart from this inconsistency, the current implementation suffers
from a lack of bounds checking for the parsed length of incoming
DTLS records that can lead to a buffer overflow when facing
malformed records.
This commit removes the inconsistency in datagram vs. record length
and resolves the buffer overflow issue by not attempting any dissection
of datagrams into records, and instead only counting how often _datagrams_
of a particular size have been dropped.
There is only one practical situation where this makes a difference:
If datagram packing is used by default but disabled on retransmission
(which OpenSSL has been seen to do), it can happen that we drop a
datagram in its initial transmission, then also drop some of its records
when they are retransmitted one-by-one afterwards, while still keeping the
drop counter at 1 instead of 2. However, even in this situation, we'll
correctly count the number of drops from that point on and eventually
stop dropping, because the peer will not fall back to using packing
and hence use stable record lengths.
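A minimal sketch of the per-size drop counter described above (the array
bound, the helper name and the threshold constant are illustrative, not the
actual udp_proxy configuration):

    #include <stddef.h>

    #define MAX_DATAGRAM_SIZE 2048   /* illustrative upper bound */
    #define DROP_THRESHOLD    2      /* stop dropping after this many drops */

    static unsigned dropped[MAX_DATAGRAM_SIZE + 1];

    /* Return 1 if a datagram of the given size may still be dropped, and
     * count the drop; return 0 once the threshold for that size is reached. */
    static int may_drop_datagram(size_t len)
    {
        if (len > MAX_DATAGRAM_SIZE || dropped[len] >= DROP_THRESHOLD)
            return 0;

        dropped[len]++;   /* count whole datagrams, not individual records */
        return 1;
    }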
Remove the "Decrypt empty buffer" test, as ChaCha20 is a stream cipher
and encrypting 0 bytes yields a 0 length buffer. The "ChaCha20
Encrypt and decrypt 0 bytes" test will test decryption of a 0 length
buffer.
Previously, even in the Chacha20 and Chacha20-Poly1305 tests, we would
test that decryption of an empty buffer would work with
MBEDTLS_CIPHER_AES_128_CBC.
Make the cipher used with the dec_empty_buf() test configurable, so that
Chacha20 and Chacha20-Poly1305 empty buffer tests can use ciphers other
than AES CBC. Then, make the Chacha20 and Chacha20-Poly1305 empty buffer
tests use the MBEDTLS_CIPHER_CHACHA20 and
MBEDTLS_CIPHER_CHACHA20_POLY1305 cipher types.
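A hedged sketch of the idea, using the generic cipher layer directly rather
than the test suite's dec_empty_buf() glue (the helper name and the trimmed
error handling are assumptions):

    #include "mbedtls/cipher.h"

    /* Decrypt an empty buffer with a caller-chosen cipher type; return 0 if
     * decryption succeeds and produces no output. */
    static int dec_empty_buf_with(mbedtls_cipher_type_t cipher_type,
                                  const unsigned char *key, int keybits,
                                  const unsigned char *iv, size_t iv_len)
    {
        mbedtls_cipher_context_t ctx;
        unsigned char inbuf[16] = { 0 }, outbuf[64] = { 0 };
        size_t outlen = 0;
        int ret;

        mbedtls_cipher_init(&ctx);

        ret = mbedtls_cipher_setup(&ctx,
                                   mbedtls_cipher_info_from_type(cipher_type));
        if (ret == 0)
            ret = mbedtls_cipher_setkey(&ctx, key, keybits, MBEDTLS_DECRYPT);
        if (ret == 0)
            ret = mbedtls_cipher_set_iv(&ctx, iv, iv_len);
        if (ret == 0)
            ret = mbedtls_cipher_reset(&ctx);
        if (ret == 0)
            ret = mbedtls_cipher_update(&ctx, inbuf, 0, outbuf, &outlen);

        mbedtls_cipher_free(&ctx);

        return (ret == 0 && outlen == 0) ? 0 : -1;
    }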
- Explain the use of explicit ASN.1 tagging for the extensions structure
- Remove a misleading comment which suggests that mbedtls_x509_get_ext()
also parses the header of the first extension, which is not the case.
Some functions within the X.509 module return a low-level ASN.1
error code where this error code should instead be wrapped in a
high-level X.509 error code, as is done in the bulk of the module.
Specifically, the following functions are affected:
- mbedtls_x509_get_ext()
- x509_get_version()
- x509_get_uid()
This commit modifies these functions to always return an
X.509 high level error code.
Care has to be taken when adapting `mbedtls_x509_get_ext()`:
Currently, the callers `x509_crl_get_ext()` and `x509_crt_get_ext()`
treat the return code `MBEDTLS_ERR_ASN1_UNEXPECTED_TAG` specially to
gracefully detect and continue if the Extensions structure is not
present. Wrapping the ASN.1 error with
`MBEDTLS_ERR_X509_INVALID_EXTENSIONS` and adapting the check
accordingly would mean that an unexpected tag anywhere further
down the extension parsing would be ignored by the caller.
The way out of this is the following: luckily, the Extensions
structure is always the last field in the surrounding structure,
so if there is any data remaining, it must be an Extensions
structure, and we don't need to handle a tag mismatch gracefully
in the first place.
We may therefore wrap the return code from the initial call to
`mbedtls_asn1_get_tag()` in `mbedtls_x509_get_ext()` by
`MBEDTLS_ERR_X509_INVALID_EXTENSIONS` and simply remove
the special treatment of `MBEDTLS_ERR_ASN1_UNEXPECTED_TAG`
in the callers `x509_crl_get_ext()` and `x509_crt_get_ext()`.
This would render `mbedtls_x509_get_ext()` unsuitable if an
Extensions structure were ever optional and did not occur at the
end of its surrounding structure, but for CRTs and CRLs, it's fine.
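A minimal sketch of the wrapping pattern, assuming the mbedtls 2.x convention
of combining the (negative) high- and low-level error codes by addition; the
function below is illustrative, not the actual body of mbedtls_x509_get_ext():

    #include "mbedtls/asn1.h"
    #include "mbedtls/x509.h"

    /* Parse the outer Extensions tag and wrap any ASN.1 error in the
     * high-level X.509 error code. */
    static int get_extensions_tag(unsigned char **p, const unsigned char *end,
                                  size_t *len, int tag)
    {
        int ret = mbedtls_asn1_get_tag(p, end, len,
                                       MBEDTLS_ASN1_CONSTRUCTED |
                                       MBEDTLS_ASN1_CONTEXT_SPECIFIC | tag);
        if (ret != 0)
            return MBEDTLS_ERR_X509_INVALID_EXTENSIONS + ret;

        return 0;
    }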
The following tests need to be adapted:
- "TBSCertificate v3, issuerID wrong tag"
The issuerID is optional, so if we look for its presence
but find a different tag, we silently continue and try
parsing the subjectID, and then the extensions. The tag '00'
used in this test doesn't match either of these, and the
previous code would hence return LENGTH_MISMATCH after
unsuccessfully trying issuerID, subjectID and Extensions.
With the new code, any data remaining after issuerID and
subjectID _must_ be Extension data, so we fail with
UNEXPECTED_TAG when trying to parse the Extension data.
- "TBSCertificate v3, UIDs, invalid length"
The test hardcodes the expectation of
MBEDTLS_ERR_ASN1_INVALID_LENGTH, which needs to be
wrapped in MBEDTLS_ERR_X509_INVALID_FORMAT now.
Fixes #2431.
When parsing a substructure of an ASN.1 structure, no field within
the substructure may exceed the bounds of the substructure.
Concretely, the `end` pointer passed to the ASN.1 parsing routines
must be updated to point to the end of the substructure while parsing
the latter.
This was previously not the case for the routines
- x509_get_attr_type_and_value(),
- mbedtls_x509_get_crt_ext(),
- mbedtls_x509_get_crl_ext().
These functions kept using the end of the parent structure as the
`end` pointer and would hence allow substructure fields to cross
the substructure boundary. This could lead to successful parsing
of ill-formed X.509 CRTs.
This commit fixes this.
Care has to be taken when adapting `mbedtls_x509_get_crt_ext()`
and `mbedtls_x509_get_crl_ext()`, as the underlying function
`mbedtls_x509_get_ext()` returns `0` if no extensions are present
but, in that case, doesn't set the variable which holds the bounds
of the Extensions structure. This commit addresses
this by returning early from `mbedtls_x509_get_crt_ext()` and
`mbedtls_x509_get_crl_ext()` if parsing has reached the end of
the input buffer.
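A minimal sketch of the bounds fix described above, using a hypothetical
substructure parser rather than the actual library routines:

    #include "mbedtls/asn1.h"

    /* Parse a SEQUENCE substructure and bound all inner fields by the end
     * of the substructure itself, not the end of the parent structure. */
    static int parse_substructure(unsigned char **p, const unsigned char *end)
    {
        size_t len;
        const unsigned char *end_of_sub;
        int ret;

        ret = mbedtls_asn1_get_tag(p, end, &len,
                                   MBEDTLS_ASN1_CONSTRUCTED |
                                   MBEDTLS_ASN1_SEQUENCE);
        if (ret != 0)
            return ret;

        end_of_sub = *p + len;   /* inner fields must stay within this bound */

        /* ... parse inner fields, always passing end_of_sub as `end` ... */

        if (*p != end_of_sub)
            return MBEDTLS_ERR_ASN1_LENGTH_MISMATCH;

        return 0;
    }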
The following X.509 parsing tests need to be adapted:
- "TBSCertificate, issuer two inner set datas"
This test exercises the X.509 CRT parser with a Subject name
which has two empty `AttributeTypeAndValue` structures.
This is supposed to fail with `MBEDTLS_ERR_ASN1_OUT_OF_DATA`
because the parser should attempt to parse the first structure
and fail because of a lack of data. Previously, it failed to
obey the (0-length) bounds of the first AttributeTypeAndValue
structure and would try to interpret the beginning of the second
AttributeTypeAndValue structure as the first field of the first
AttributeTypeAndValue structure, returning an UNEXPECTED_TAG error.
- "TBSCertificate, issuer, no full following string"
This test exercises the parser's behaviour on an AttributeTypeAndValue
structure which contains more data than expected; it should therefore
fail with MBEDTLS_ERR_ASN1_LENGTH_MISMATCH. Because of the missing bounds
check, it previously failed with UNEXPECTED_TAG because it interpreted
the remaining byte in the first AttributeTypeAndValue structure as the
first byte in the second AttributeTypeAndValue structure.
- "SubjectAltName repeated"
This test should exercise two SubjectAltNames extensions in succession,
but a wrong length value makes the second SubjectAltNames extension appear
outside of the Extensions structure. With the new bounds in place, this
therefore fails with a LENGTH_MISMATCH error. This commit adapts the test
data to put the 2nd SubjectAltNames extension inside the Extensions
structure, too.
The X.509 parsing test suite test_suite_x509parse contains a test
exercising X.509 verification for a valid MD4/MD5 certificate in a
profile which doesn't allow MD4/MD5. This commit adds an analogous
test for MD2.
The example programs programs/x509/cert_req and programs/x509/cert_write
(demonstrating the use of X.509 CSR and CRT writing functionality)
previously didn't support MD2 signatures.
For testing purposes, this commit adds support for MD2 to cert_req,
and support for MD2 and MD4 to cert_write.
When running make with parallelization, running both "clean" and "lib"
with a single make invocation can lead to the two targets running in
parallel. It's bad if lib is partway through building something and
clean then deletes what was built. This can lead to errors later on in the
lib target.
$ make -j9 clean lib
CC aes.c
CC aesni.c
CC arc4.c
CC aria.c
CC asn1parse.c
CC ./library/error.c
CC ./library/version.c
CC ./library/version_features.c
AR libmbedcrypto.a
ar: aes.o: No such file or directory
Makefile:120: recipe for target 'libmbedcrypto.a' failed
make[2]: *** [libmbedcrypto.a] Error 1
Makefile:152: recipe for target 'libmbedcrypto.a' failed
make[1]: *** [libmbedcrypto.a] Error 2
Makefile:19: recipe for target 'lib' failed
make: *** [lib] Error 2
make: *** Waiting for unfinished jobs....
To avoid this sort of trouble, always invoke clean by itself without
other targets throughout the library. Don't run clean in parallel with
other rules. The only place where clean was run in parallel with other
targets was in list-symbols.sh.
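For example, instead of the failing invocation above, run the two targets as
separate make invocations so that clean completes before lib starts:

    $ make clean
    $ make -j9 lib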
- Replace 'RSA with MD2' OID '2a864886f70d010102' by
'RSA with SHA-256' OID '2a864886f70d01010b':
Only the last byte determines the hash, and
`MBEDTLS_OID_PKCS1_MD2 == MBEDTLS_OID_PKCS1 "\x02"`
`MBEDTLS_OID_PKCS1_SHA256 == MBEDTLS_OID_PKCS1 "\x0b"`
See oid.h.
- Replace MD2 dependency by SHA256 dependency.
- Adapt expected CRT info output.
All of them are copied from (former) CRT and key files in `tests/data_files`.
For files which have been regenerated since they've been copied to `certs.c`,
update the copy.
Add declarations for DER encoded test CRTs to certs.h
Add DER encoded versions of CRTs to certs.c
fix comment in certs.c
Don't use (signed) char for DER encoded certificates
Consistently use `const char *` for test CRTs regardless of encoding
Remove nonsensical and unused PW variable for DER encoded key
Provide test CRTs in PEM and DER fmt, + pick suitable per config
This commit modifies `certs.h` and `certs.c` to adopt the
following pattern for the provided test certificates and files:
- Raw test data is named `NAME_ATTR1_ATTR2_..._ATTRn`
For example, there are
`TEST_CA_CRT_{RSA|EC}_{PEM|DER}_{SHA1|SHA256}`.
- Derived test data with fewer attributes is iteratively defined as one
of the raw test data instances that suits the current configuration.
For example,
`TEST_CA_CRT_RSA_PEM`
is one of `TEST_CA_CRT_RSA_PEM_SHA1` or `TEST_CA_CRT_RSA_PEM_SHA256`,
depending on whether SHA-1 and/or SHA-256 are defined in the current
config.
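A sketch of the derivation, with illustrative config guards (the actual
guards used in certs.h may differ):

    /* Pick the raw test CRT that matches the current configuration. */
    #if defined(MBEDTLS_SHA256_C)
    #define TEST_CA_CRT_RSA_PEM TEST_CA_CRT_RSA_PEM_SHA256
    #elif defined(MBEDTLS_SHA1_C)
    #define TEST_CA_CRT_RSA_PEM TEST_CA_CRT_RSA_PEM_SHA1
    #endif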
Add missing public declaration of test key password
Fix signedness and naming mismatches
Further improve structure of certs.h and certs.c
Fix definition of mbedtls_test_cas test CRTs depending on config
Remove semicolon after macro string constant in certs.c
This allows testing PSK-based ciphersuites via ssl_server2 in builds
which have MBEDTLS_X509_CRT_PARSE_C enabled but both MBEDTLS_FS_IO and
MBEDTLS_CERTS_C disabled.
This allows testing PSK-based ciphersuites via ssl_client2 in builds
which have MBEDTLS_X509_CRT_PARSE_C enabled but both MBEDTLS_FS_IO and
MBEDTLS_CERTS_C disabled.
A similar change is applied to the `crt_file` and `key_file` arguments.
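As an illustration (the PSK value and program path are assumptions; the
argument names are those mentioned above), such a build could run a
PSK-only server along these lines:

    $ programs/ssl/ssl_server2 ca_file=none crt_file=none key_file=none psk=abc123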