The intended logic around MBEDTLS_xxx_ALT options is to exclude them from
the full config because they require an alternative implementation of one
or more library functions, except that MBEDTLS_PLATFORM_xxx_ALT are
different: they're alternative implementations of a platform function and
they have a built-in default, so they should be included in the full
config. Document this.
Fix a bug whereby MBEDTLS_PLATFORM_xxx_ALT didn't catch symbols where
xxx contains an underscore. As a consequence,
MBEDTLS_PLATFORM_GMTIME_R_ALT and MBEDTLS_PLATFORM_NV_SEED_ALT are now
enabled in the full config. Explicitly exclude
MBEDTLS_PLATFORM_SETUP_TEARDOWN_ALT because it behaves like the
non-platform ones, requiring an extra build-time dependency.
Explicitly exclude MBEDTLS_PLATFORM_NV_SEED_ALT from baremetal
because it requires MBEDTLS_ENTROPY_NV_SEED, and likewise explicitly
unset it from builds that unset MBEDTLS_ENTROPY_NV_SEED.
Signed-off-by: Gilles Peskine <Gilles.Peskine@arm.com>
Make it possible to use a compiler that isn't in $PATH, or that's
installed with a different name, or even a compiler for a different
target such as arm-linux-gnueabi.
Signed-off-by: Gilles Peskine <Gilles.Peskine@arm.com>
Almost everything the selftest program does is also covered by the test
suites. But just in case, run the selftest program itself once in the full
configuration, and once in the default configuration with ASan, in
addition to running it out of the box.
Signed-off-by: Gilles Peskine <Gilles.Peskine@arm.com>
When parsing a certificate with the basic constraints extension,
the max_pathlen read from it was incremented regardless
of its value. However, if max_pathlen is equal to INT_MAX (which
is highly unlikely), undefined behaviour would occur.
This commit adds a check to ensure that such a value is not accepted
as valid. Relevant tests for INT_MAX and INT_MAX-1 are also introduced.
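A minimal sketch of the kind of guard added (illustrative names, not the
exact library code):

#include <limits.h>
#include "mbedtls/asn1.h"
#include "mbedtls/x509.h"

/* Illustrative only: reject a parsed pathLenConstraint of INT_MAX so that
 * the later "+ 1" (used to encode "constraint present") cannot overflow. */
static int get_max_pathlen(unsigned char **p, const unsigned char *end,
                           int *max_pathlen)
{
    int ret = mbedtls_asn1_get_int(p, end, max_pathlen);
    if (ret != 0)
        return MBEDTLS_ERR_X509_INVALID_EXTENSIONS + ret;

    if (*max_pathlen == INT_MAX)
        return MBEDTLS_ERR_X509_INVALID_EXTENSIONS +
               MBEDTLS_ERR_ASN1_INVALID_LENGTH;

    (*max_pathlen)++;
    return 0;
}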
Certificates added in this commit were generated using the
test_suite_x509write, function test_x509_crt_check. Input data taken
from the "Certificate write check Server1 SHA1" test case, so the generated
files are like the "server1.crt", but with the "is_ca" field set to 1 and
max_pathlen as described by the file name.
Signed-off-by: Andrzej Kurek <andrzej.kurek@arm.com>
Signed-off-by: Piotr Nowicki <piotr.nowicki@arm.com>
Simplify the code in minor ways. Each of these changes fixes a warning
from Pylint 2.4 that doesn't appear with Pylint 1.7.
Signed-off-by: Gilles Peskine <Gilles.Peskine@arm.com>
Python 2 is no longer supported upstream. Actively drop compatibility
with Python 2.
Removing the explicit inheritance of a class from object pacifies recent
versions of Pylint (useless-object-inheritance).
Signed-off-by: Gilles Peskine <Gilles.Peskine@arm.com>
check_python_files was optional in all.sh because we used to have CI
machines where pylint wasn't available. But this had the downside that
check_python_files kept breaking because it wasn't checked in the CI.
Now our CI has pylint, so check_python_files should no longer be optional.
Signed-off-by: Gilles Peskine <Gilles.Peskine@arm.com>
On some systems, such as Ubuntu up to 19.04, `pylint` is for Python 2
and `pylint3` is for Python 3, so we should not use `pylint` even if
it's available.
Use the Python module instead of the trivial shell wrapper. This way
we can make sure to use the correct Python version.
Fix #3111
Signed-off-by: Gilles Peskine <Gilles.Peskine@arm.com>
Check Windows files for some issues, including permissions. Omit the
checks related to special characters (whitespace, line endings,
encoding) as appropriate.
Signed-off-by: Gilles Peskine <Gilles.Peskine@arm.com>
To test a file name exactly, prepend a / to the base name.
files_to_check actually checks suffixes, not file names, so rename it
to extensions_to_check.
Signed-off-by: Gilles Peskine <Gilles.Peskine@arm.com>
The identifiers of the unmet dependencies of a test case are
stored in a buffer of fixed size that can potentially be too
small to store all the unmet dependencies. Indicate in test
reports if some unmet dependencies are missing.
Signed-off-by: Ronald Cron <ronald.cron@arm.com>
Fix potential buffer overflow when tracking the unmet dependencies
of a test case. The identifiers of unmet dependencies are stored
in an array of fixed size. Ensure that we don't overrun the array.
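As a hedged sketch of the bounds check (the names below are hypothetical,
not the actual test framework identifiers):

#include <stddef.h>

#define MAX_UNMET_DEPENDENCIES 20

typedef struct {
    int ids[MAX_UNMET_DEPENDENCIES];
    size_t count;
    int truncated; /* nonzero if some identifiers could not be recorded */
} unmet_deps_t;

static void record_unmet_dependency(unmet_deps_t *deps, int dep_id)
{
    if (deps->count < MAX_UNMET_DEPENDENCIES)
        deps->ids[deps->count++] = dep_id;
    else
        deps->truncated = 1; /* flag the overflow instead of overrunning ids[] */
}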
Signed-off-by: Ronald Cron <ronald.cron@arm.com>
Use size_t for some variables that are array indices.
Use unsigned for some variables that are counts of "small" things.
This is a backport of commit 3c1c8ea3e7.
Signed-off-by: Ronald Cron <ronald.cron@arm.com>
Since unmet_dependencies only ever contains strings that are integers
written out in decimal, store the integer instead. Do this
unconditionally since it doesn't cost any extra memory.
This commit saves a little memory and more importantly avoids a gotcha
with uninitialized pointers which caused a bug during development (the
array was only initialized in verbose mode).
Signed-off-by: Gilles Peskine <Gilles.Peskine@arm.com>
There are currently 4 tests in ssl-opt.sh with either -C "resend" or -S
"resend", that is, asserting that no retransmission will occur. They sometimes
fail on loaded CI machines as one side doesn't send a message fast enough,
causing the other side to retransmit, which makes the test fail.
(For the "reconnect" test there was another issue causing random failures,
fixed in a previous commit, but even after that fix the test would still
sometimes randomly fail, even if much more rarely.)
While it's a hard problem to fix in a general and perfect way, in practice the
probability of failures can be drastically reduced by making the timeout
values much larger.
For some tests, where retransmissions are actually expected, this would have
the negative effect of increasing the average running time of the test, as
each side would wait for longer before it starts retransmission, so we have a
trade-off between average running time and probability of spurious failures.
But for tests where retransmission is not expected, there is no such trade-off
as the expected running time of the test (assuming the code is correct most of
the time) is not impacted by the timeout value. So the only negative effect of
increasing the timeout value is on the worst-case running time of the test,
which is much less important, as tests should only fail quite rarely.
This commit addresses the easy case of tests that don't expect retransmission
by increasing the value of their timeout range to 10s-20s. This value
corresponds to the value used for tests that assert `-S "autoreduction"` which
are in the same case and where the current value seems acceptable so far.
It also represents an increase, compared to the values before this commit, of
a factor 20 for the "reconnect" tests which were frequently observed to fail
in the CI, and of a factor 10 for the first two "DTLS proxy" tests, which were
observed to fail much less frequently, so hopefully the new values are enough
to reduce the probability of spurious failures to an acceptable level.
Signed-off-by: Manuel Pégourié-Gonnard <manuel.pegourie-gonnard@arm.com>
The server must check client reachability (we chose to do that by checking a
cookie) before destroying the existing association (RFC 6347 section 4.2.8).
Let's make sure we do, by having a proxy-in-the-middle inject a ClientHello -
the server should notice, but not destroy the connection.
Signed-off-by: Manuel Pégourié-Gonnard <manuel.pegourie-gonnard@arm.com>
- "Default" should only be used for tests that actually use the defaults (ie,
not passing options on the command line, except maybe debug/dtls)
- All tests in the "Encrypt then MAC" group should start with that string as a
common prefix
Signed-off-by: Manuel Pégourié-Gonnard <manuel.pegourie-gonnard@arm.com>
When installed as a distro package, Pylint can be named pylint3, whilst
when installed as a PEP egg it can be named pylint.
This commit changes the scripts to use pylint if it is installed, and
optionally to look for pylint3 if pylint is not installed. This allows a
preference for the PEP version over the distro version, assuming the PEP
one is more likely to be the correct one.
Signed-off-by: Simon Butcher <simon.butcher@arm.com>
The ssl-opt.sh test cases using session resumption tend to fail occasionally
on the CI due to a race condition in how ssl_server2 and ssl_client2 handle
the reconnection cycle.
The server does the following in order:
- S1 send application data
- S2 send a close_notify alert
- S3 close the client socket
- S4 wait for a "new connection" (actually a new datagram)
- S5 start a handshake
The client does the following in order:
- C1 wait for and read application data from the server
- C2 send a close_notify alert
- C3 close the server socket
- C4 reset session data and re-open a server socket
- C5 start a handshake
If the client has been able to send the close_notify (C2) and it has been
delivered to the server before the server closes the client socket (S3), then
when the server reaches S4 the datagram that starts the new connection will be
the ClientHello and everything will be fine.
However if S3 wins the race and happens before the close_notify is delivered,
in S4 the close_notify is what will be seen as the first datagram in a new
connection, and then in S5 this will rightfully be rejected as not being a
valid ClientHello and the server will close the connection (and go wait for
another one). The client will then fail to read from the socket and exit
non-zero and the ssl-opt.sh harness will correctly report this as a failure.
In order to avoid this race condition in test using ssl_client2 and
ssl_server2, this commit introduces a new command-line option
skip_close_notify to ssl_client2 and uses it in all ssl-opt.sh tests that use
session resumption with DTLS and ssl_server2.
This works because ssl_server2 knows how many messages it expects in each
direction and in what order, and closes the connection after that rather than
relying on close_notify (which is also why there was a race in the first
place).
Tests that use another server (in practice there are two of them, using
OpenSSL as a server) wouldn't work with skip_close_notify, as the server won't
close the connection until the client sends a close_notify, but for the same
reason they don't need it (there is no race between receiving close_notify and
closing as the former is the cause of the latter).
An alternative approach would be to make ssl_server2 keep the connection open
until it receives a close_notify. Unfortunately it creates problems for tests
where we simulate a lossy network, as the close_notify could be lost (and the
client can't retransmit it). We could modify udp_proxy with an option to never
drop alert messages, but when TLS 1.3 comes that would no longer work, as the
record content type will be encrypted.
Signed-off-by: Manuel Pégourié-Gonnard <manuel.pegourie-gonnard@arm.com>
(Only the top-level ones, ie, for each call to eg asn1_get_mpi(), ensure
there's at least one test case that makes this call fail in one way, but don't
test the various ways to make asn1_get_mpi fail - that should be covered
elsewhere.)
- the new checks added by the previous commits needed exercising
- existing tests sometimes had wrong descriptions or were passing for the
wrong reason (eg with the "length mismatch" test, the function actually
failed before reaching the length check)
- while at it, add tests for the rest as well
The valid minimal-size key was generated with:
openssl genrsa 128 2>/dev/null | openssl rsa -outform der 2>/dev/null | xxd -p
Goals:
* Build with common compilers with common options, so that we don't
miss a (potentially useful) warning only triggered with certain
build options.
* A previous commit removed -O0 test jobs, leaving only the one with
-m32. We have inline assembly that is disabled with -O0, falling
back to generic C code. This commit restores a test that runs the
generic C code on a 64-bit platform.
The splitting of this test into two versions depending on whether SHA-1 was
allowed by the server was a mistake in
5d2511c4d4 - the test has nothing to do with
SHA-1 in the first place, as the server doesn't request a certificate from
the client so it doesn't matter if the server accepts SHA-1 or not.
While the whole script makes (often implicit) assumptions about the version of
GnuTLS used, generally speaking it should work out of the box with the version
packaged on our reference testing platform, which is Ubuntu 16.04 so far.
With the update from Jan 8 2020 (3.4.10-4ubuntu1.6), the patches for rejecting
SHA-1 in certificate signatures were backported, so we should avoid presenting
SHA-1 signed certificates to a GnuTLS peer in ssl-opt.sh.
* origin/mbedtls-2.16:
Fix some pylint warnings
Enable more test cases without MBEDTLS_MEMORY_DEBUG
More accurate test case description
Clarify that the "FATAL" message is expected
Note that mbedtls_ctr_drbg_seed() must not be called twice
Fix CTR_DRBG benchmark
Changelog entry for xxx_drbg_set_entropy_len before xxx_drbg_seed
CTR_DRBG: support set_entropy_len() before seed()
CTR_DRBG: Don't use functions before they're defined
HMAC_DRBG: support set_entropy_len() before seed()
None of the test cases in test_suite_memory_buffer_alloc actually
need MBEDTLS_MEMORY_DEBUG. Some have additional checks when
MBEDTLS_MEMORY_DEBUG is enabled, but all are useful even without it. So
enable them all and #ifdef out the parts that require MBEDTLS_MEMORY_DEBUG.
The test case "Memory buffer small buffer" emits a message
"FATAL: verification of first header failed". In this test case, it's
actually expected, but it looks weird to see this message from a
passing test. Add a comment that states this explicitly, and modify
the test description to indicate that the failure is expected, and
change the test function name to be more accurate.
Fix #309
The corner case tests were designed for 32-bit and 64-bit limbs
independently and were only performed on the matching platform. On the
other platform they are not corner cases anymore, but we can still
exercise them.
The corner case tests were designed for 64 bit limbs and failed on 32
bit platforms because the numbers in the test ended up being stored in a
different number of limbs and the function (correctly) returned an error
upon receiving them.
The signature of mbedtls_mpi_cmp_mpi_ct() was meant to support using it in
place of mbedtls_mpi_cmp_mpi(). This meant full comparison functionality
and a signed result.
To make the function more universal and friendly to constant-time
coding, we change the result type to unsigned. Theoretically, we could
encode the three-way comparison result in an unsigned value, but it would
be less intuitive.
Therefore we won't be able to represent the full comparison result anymore
and the functionality will be constrained to checking whether the first
operand is less than the second. This is sufficient to support the
current use case and to check any relationship between MPIs.
The only drawback is that we need to call the function twice when
checking for equality, but this can be optimised later if and when it is
needed.
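For illustration, a hedged sketch of an equality check built from two
calls to the constant-time less-than function (named mbedtls_mpi_lt_mpi_ct()
in released Mbed TLS versions; the helper itself is hypothetical):

#include "mbedtls/bignum.h"

static int mpi_equal_ct(const mbedtls_mpi *X, const mbedtls_mpi *Y,
                        unsigned *equal)
{
    unsigned x_lt_y, y_lt_x;
    int ret;

    if ((ret = mbedtls_mpi_lt_mpi_ct(X, Y, &x_lt_y)) != 0)
        return ret;
    if ((ret = mbedtls_mpi_lt_mpi_ct(Y, X, &y_lt_x)) != 0)
        return ret;

    /* X == Y exactly when neither operand is less than the other. */
    *equal = 1 - (x_lt_y | y_lt_x);
    return 0;
}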
mbedtls_ctr_drbg_seed() always set the entropy length to the default,
so a call to mbedtls_ctr_drbg_set_entropy_len() before seed() had no
effect. Change this to the more intuitive behavior that
set_entropy_len() sets the entropy length and seed() respects that and
only uses the default entropy length if there was no call to
set_entropy_len().
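A hedged usage sketch of the new behaviour (the contexts are assumed to be
already initialized with mbedtls_ctr_drbg_init()/mbedtls_entropy_init(), and
the 48-byte entropy length is just an example):

#include "mbedtls/ctr_drbg.h"
#include "mbedtls/entropy.h"

int seed_with_custom_entropy_len(mbedtls_ctr_drbg_context *drbg,
                                 mbedtls_entropy_context *entropy)
{
    /* Now honored: the length set here is used by the seeding below. */
    mbedtls_ctr_drbg_set_entropy_len(drbg, 48);
    return mbedtls_ctr_drbg_seed(drbg, mbedtls_entropy_func, entropy,
                                 NULL, 0 /* no personalization string */);
}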
The former test-only function mbedtls_ctr_drbg_seed_entropy_len() is
no longer used, but keep it for strict ABI compatibility.
When running 'make test' with GNU make, a test suite program that
displayed "PASSED" was automatically counted as a pass. This
would in particular count as passing:
* A test suite with the substring "PASSED" in a test description.
* A test suite where all the test cases succeeded, but the final
cleanup failed, in particular if a sanitizer reported a memory leak.
Use the test executable's return status instead to determine whether
the test suite passed. It's always 0 on PASSED unless the executable's
cleanup code fails, and it's never 0 on any failure.
Fix ARMmbed/mbed-crypto#303
Some sanitizers default to displaying an error message and recovering.
This could result in a test being recorded as passing despite a
complaint from the sanitizer. Turn off sanitizer recovery to avoid
this risk.
* origin/pr/2864:
Fix compilation error
Add const to variable
Fix endianity issue when reading uint32
Increase test suite timeout
Reduce stack usage of test_suite_pkcs1_v15
Reduce stack usage of test_suite_pkcs1_v21
Reduce stack usage of test_suite_rsa
Reduce stack usage of test_suite_pk
Exercise the library functions with calloc returning NULL for a size
of 0. Make this a separate job with UBSan (and ASan) to detect
places where we try to dereference the result of calloc(0) or to do
things like
buf = calloc(size, 1);
if (buf == NULL && size != 0) return INSUFFICIENT_MEMORY;
memcpy(buf, source, size);
which has undefined behavior when buf is NULL at the memcpy call even
if size is 0.
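For contrast, a hedged sketch of a pattern that stays well defined even
when calloc(0, 1) returns NULL (illustrative helper, not library code):

#include <stdlib.h>
#include <string.h>

int copy_buffer(unsigned char **dst, const unsigned char *src, size_t size)
{
    *dst = NULL;
    if (size == 0)
        return 0;               /* nothing to allocate or copy */

    *dst = calloc(size, 1);
    if (*dst == NULL)
        return -1;              /* insufficient memory */

    memcpy(*dst, src, size);    /* both pointers are non-NULL here */
    return 0;
}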
Such a separate job is needed because other test jobs either use the
system malloc, which returns a non-NULL pointer for a size of 0 on Linux
and FreeBSD, or the memory_buffer_alloc malloc, which returns NULL but
does not give as useful feedback with ASan (because the whole heap is a
single C object).
In the tests, the uint32 is given as a big-endian stream; however,
the char buffer that collected the stream read it as-is,
without converting it. Add a temporary buffer, call `greentea_getc()`
8 times, and then put the result in the correct endianness for input to `unhexify()`.
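As a hedged illustration of the conversion involved (not the actual test
code): assembling a uint32 from big-endian bytes explicitly gives the same
value on little-endian and big-endian targets.

#include <stdint.h>

static uint32_t be_bytes_to_uint32(const unsigned char b[4])
{
    return ((uint32_t) b[0] << 24) |
           ((uint32_t) b[1] << 16) |
           ((uint32_t) b[2] << 8)  |
            (uint32_t) b[3];
}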
Reduce the stack usage of `test_suite_pkcs1_v21` by reducing the
size of the buffers used in the tests to a reasonably large size,
and change the size sent to the API to sizeof(output).
Reduce the stack usage of `test_suite_rsa` by reducing the
size of the buffers used in the tests to a reasonably large size,
and change the data size to decrypt in the data file.
With the removal of MBEDTLS_MEMORY_BUFFER_ALLOC_C from the
full config, there are no tests for it remaining in all.sh.
This commit adds a build as well as runs of `make test` and
`ssl-opt.sh` with MBEDTLS_MEMORY_BUFFER_ALLOC_C enabled to all.sh.
Previously, numerous all.sh tests manually disabled the buffer allocator
or memory backtracing after setting a full config as the starting point.
With the removal of MBEDTLS_MEMORY_BACKTRACE and MBEDTLS_MEMORY_BUFFER_ALLOC_C
from full configs, this is no longer necessary.
* origin/mbedtls-2.16:
Changelog entry
Check for zero length and NULL buffer pointer
ssl-opt.sh: wait for proxy to start before running the script further
Adapt ChangeLog
Fix mpi_bigendian_to_host() on bigendian systems
compat.sh used to skip OpenSSL altogether for DTLS 1.2, because older
versions of OpenSSL didn't support it. But these days it is supported.
We don't want to use DTLS 1.2 with OpenSSL unconditionally, because we
still use legacy versions of OpenSSL to test with legacy ciphers. So
check whether the version we're using supports it.
Without any -O option, the default is -O0, and then the assembly code
is not used, so this would not serve as a non-regression test against
assembly code that fails to build.
This test case was only executed if the SHA-512 module was enabled and
MBEDTLS_ENTROPY_FORCE_SHA256 was not enabled, so "config.pl full"
didn't have a chance to reach it even if that enabled
MBEDTLS_PLATFORM_NV_SEED_ALT.
Now all it takes to enable this test is MBEDTLS_PLATFORM_NV_SEED_ALT
and its requirements, and the near-ubiquitous MD module.
Call mbedtls_entropy_free on test failure.
Restore the previous NV seed functions which the call to
mbedtls_platform_set_nv_seed() changed. This didn't break anything,
but only because the NV seed functions used for these tests happened
to work for the tests that got executed later in the .data file.
memset has undefined behavior when the pointer is NULL, which
can be the case when it's the result of malloc/calloc with a size of 0.
The memset calls here are useless anyway since they come immediately
after calloc.
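A hedged illustration of why the calls can simply be dropped (illustrative
helper, not the actual code):

#include <stdlib.h>

unsigned char *alloc_zeroed(size_t n)
{
    unsigned char *buf = calloc(n, 1); /* already zero-filled, or NULL */
    /* memset(buf, 0, n);  <- redundant, and undefined if buf == NULL */
    return buf;
}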
* origin/mbedtls-2.16:
Fix parsing issue when int parameter is in base 16
Refactor receive_uint32()
Refactor get_byte function
Make the script portable to both pythons
Update the test encoding to support python3
update the test script
tests: Limit each log to 10 GiB
Fix the error `ValueError: invalid literal for int() with base 10:` that
is raised when a parameter is given in base 16. Use the relevant base
when calling the `int()` function.
Call `greentea_getc()` 8 times and then `unhexify` once, instead of
calling `receive_byte()` for every byte, which internally calls
`greentea_getc()` twice (once per hex digit).
Since Python 3 handles encoding differently from Python 2,
a change in the way the data is encoded and sent to the target is needed.
1. Change the test data to be sent as a hex string.
2. Convert the characters to binary bytes.
This is done because the mbed tools (mbed-greentea and mbed-htrunner)
translate the encoding differently.
Limit log output in compat.sh and ssl-opt.sh, in case of failures with
these scripts where they may output seemingly unlimited length error
logs.
Note that ulimit -f uses units of 512 bytes, so we use 10 * 1024 * 1024
* 2 to get 10 GiB.
* origin/mbedtls-2.16:
Split _abi_compliance_command into smaller functions
Record the commits that were compared
Document how to build the typical argument for -s
Allow running /somewhere/else/path/to/abi_check.py
Allow TODO in code
Use the docstring in the command line help
* restricted/pr/582:
Add a test for signing content with a long ECDSA key
Add documentation notes about the required size of the signature buffers
Add missing MBEDTLS_ECP_C dependencies in check_config.h
Change size of preallocated buffer for pk_sign() calls
* origin/pr/2701:
Add all.sh component that exercises invalid_param checks
Remove mbedtls_param_failed from programs
Make it easier to define MBEDTLS_PARAM_FAILED as assert
Make test suites compatible with #include <assert.h>
Pass -m32 to the linker as well
* origin/pr/2053:
Clarify ChangeLog entry for fix to #1628
Add Changelog entry for clang test-ref-configs.pl fix
Enable more compiler warnings in tests/Makefile
Change file scoping of test helpers.function
With the change to the full config, there were no longer any tests
that exercise invalid-parameter behavior. The test suite exercises
invalid-parameter behavior by calling TEST_INVALID_PARAM and friends,
relying on the test suite's mbedtls_check_param function. This
function is only enabled if MBEDTLS_CHECK_PARAMS is defined but not
MBEDTLS_CHECK_PARAMS_ASSERT.
Add a component to all.sh that enables MBEDTLS_CHECK_PARAMS but
disables MBEDTLS_CHECK_PARAMS_ASSERT and doesn't define
MBEDTLS_PARAM_FAILED. This way, the xxx_invalid_param() tests do run.
Since sample programs don't provide a mbedtls_check_param function,
this component doesn't build the sample programs.
Don't use the macro name assert. It's technically permitted as long as
<assert.h> is not included, but it's fragile, because it means the
code and any header that it includes must not include <assert.h>.
For unit tests and sample programs, CFLAGS=-m32 is enough to get a
32-bit build, because these programs are all compiled directly
from *.c to the executable in one shot. But with makefile rules that
first build object files and then link them, LDFLAGS=-m32 is also
needed.
* origin/pr/2481:
Document support for MD2 and MD4 in programs/x509/cert_write
Correct name of X.509 parsing test for well-formed, ill-signed CRT
Add test cases exercising successful verification of MD2/MD4/MD5 CRT
Add test case exercising verification of valid MD2 CRT
Add MD[245] test CRTs to tree
Add instructions for MD[245] test CRTs to tests/data_files/Makefile
Add suppport for MD2 to CSR and CRT writing example programs
Convert further x509parse tests to use lower-case hex data
Correct placement of ChangeLog entry
Adapt ChangeLog
Use SHA-256 instead of MD2 in X.509 CRT parsing tests
Consistently use lower case hex data in X.509 parsing tests
* origin/pr/2497:
Re-generate library/certs.c from script
Add new line at the end of test-ca2.key.enc
Use strict syntax to annotate origin of test data in certs.c
Add run to all.sh exercising !MBEDTLS_PEM_PARSE_C + !MBEDTLS_FS_IO
Allow DHM self test to run without MBEDTLS_PEM_PARSE_C
ssl-opt.sh: Auto-skip tests that use files if MBEDTLS_FS_IO unset
Document origin of hardcoded certificates in library/certs.c
Adapt ChangeLog
Rename server1.der to server1.crt.der
Add DER encoded files to git tree
Add build instructions to generate DER versions of CRTs and keys
Document "none" value for ca_path/ca_file in ssl_client2/ssl_server2
ssl_server2: Skip CA setup if `ca_path` or `ca_file` argument "none"
ssl_client2: Skip CA setup if `ca_path` or `ca_file` argument "none"
Correct white spaces in ssl_server2 and ssl_client2
Adapt ssl_client2 to parse DER encoded test CRTs if PEM is disabled
Adapt ssl_server2 to parse DER encoded test CRTs if PEM is disabled
Due to the way the current PK API works, it may not have been clear to
library clients how big the output buffers passed to the signing
functions should be. Depending on the key type, the required size is
given by MPI- or EC-specific compile-time constants.
Inside the library, there were places where it was assumed that
the MPI size will always be enough, even for ECDSA signatures.
However, for very small values of MBEDTLS_MPI_MAX_SIZE and a
sufficiently large key, the EC signature could exceed the MPI size
and cause a stack overflow.
This test establishes both conditions -- small MPI size and the use
of a long ECDSA key -- and attempts to sign an arbitrary file.
This can cause a stack overflow if the signature buffers are not
big enough, therefore the test is performed for an ASan build.
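A hedged sketch of the buffer-sizing idea (not the exact patch; the macro
arrangement is illustrative, only the constants come from the library
headers):

#include "mbedtls/bignum.h"
#include "mbedtls/ecdsa.h"

#if defined(MBEDTLS_ECDSA_MAX_LEN) && MBEDTLS_ECDSA_MAX_LEN > MBEDTLS_MPI_MAX_SIZE
#define SIG_BUF_SIZE MBEDTLS_ECDSA_MAX_LEN   /* ECDSA bound dominates */
#else
#define SIG_BUF_SIZE MBEDTLS_MPI_MAX_SIZE    /* RSA/MPI bound dominates */
#endif

static unsigned char sig_buf[SIG_BUF_SIZE];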
Remove the "Decrypt empty buffer" test, as ChaCha20 is a stream cipher
and 0 bytes encrypted is identical to a 0 length buffer. The "ChaCha20
Encrypt and decrypt 0 bytes" test will test decryption of a 0 length
buffer.
Previously, even in the Chacha20 and Chacha20-Poly1305 tests, we would
test that decryption of an empty buffer would work with
MBEDTLS_CIPHER_AES_128_CBC.
Make the cipher used with the dec_empty_buf() test configurable, so that
Chacha20 and Chacha20-Poly1305 empty buffer tests can use ciphers other
than AES CBC. Then, make the Chacha20 and Chacha20-Poly1305 empty buffer
tests use the MBEDTLS_CIPHER_CHACHA20 and
MBEDTLS_CIPHER_CHACHA20_POLY1305 cipher suites.
Some functions within the X.509 module return an ASN.1 low level
error code where instead this error code should be wrapped by a
high-level X.509 error code as in the bulk of the module.
Specifically, the following functions are affected:
- mbedtls_x509_get_ext()
- x509_get_version()
- x509_get_uid()
This commit modifies these functions to always return an
X.509 high level error code.
Care has to be taken when adapting `mbedtls_x509_get_ext()`:
Currently, the callers `x509_crl_get_ext()` and `x509_crt_get_ext()` treat the
return code `MBEDTLS_ERR_ASN1_UNEXPECTED_TAG` specially to
gracefully detect and continue if the extension structure is not
present. Wrapping the ASN.1 error with
`MBEDTLS_ERR_X509_INVALID_EXTENSIONS` and adapting the check
accordingly would mean that an unexpected tag somewhere
down the extension parsing would be ignored by the caller.
The way out of this is the following: Luckily, the extension
structure is always the last field in the surrounding structure,
so if there is some data remaining, it must be an Extension
structure, so we don't need to deal with a tag mismatch gracefully
in the first place.
We may therefore wrap the return code from the initial call to
`mbedtls_asn1_get_tag()` in `mbedtls_x509_get_ext()` by
`MBEDTLS_ERR_X509_INVALID_EXTENSIONS` and simply remove
the special treatment of `MBEDTLS_ERR_ASN1_UNEXPECTED_TAG`
in the callers `x509_crl_get_ext()` and `x509_crt_get_ext()`.
This renders `mbedtls_x509_get_ext()` unsuitable if it ever
happened that an Extension structure is optional and does not
occur at the end of its surrounding structure, but for CRTs
and CRLs, it's fine.
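A hedged sketch of the wrapping pattern (illustrative function; the tag
shown is the context-specific [3] tag of the v3 Extensions field):

#include "mbedtls/asn1.h"
#include "mbedtls/x509.h"

static int get_ext_wrapped(unsigned char **p, const unsigned char *end,
                           size_t *len)
{
    int ret = mbedtls_asn1_get_tag(p, end, len,
                                   MBEDTLS_ASN1_CONSTRUCTED |
                                   MBEDTLS_ASN1_CONTEXT_SPECIFIC | 3);
    if (ret != 0)
        return MBEDTLS_ERR_X509_INVALID_EXTENSIONS + ret; /* wrap, don't pass through */

    return 0;
}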
The following tests need to be adapted:
- "TBSCertificate v3, issuerID wrong tag"
The issuerID is optional, so if we look for its presence
but find a different tag, we silently continue and try
parsing the subjectID, and then the extensions. The tag '00'
used in this test doesn't match either of these, and the
previous code would hence return LENGTH_MISMATCH after
unsuccessfully trying issuerID, subjectID and Extensions.
With the new code, any data remaining after issuerID and
subjectID _must_ be Extension data, so we fail with
UNEXPECTED_TAG when trying to parse the Extension data.
- "TBSCertificate v3, UIDs, invalid length"
The test hardcodes the expectation of
MBEDTLS_ERR_ASN1_INVALID_LENGTH, which needs to be
wrapped in MBEDTLS_ERR_X509_INVALID_FORMAT now.
Fixes #2431.
When parsing a substructure of an ASN.1 structure, no field within
the substructure must exceed the bounds of the substructure.
Concretely, the `end` pointer passed to the ASN.1 parsing routines
must be updated to point to the end of the substructure while parsing
the latter.
This was previously not the case for the routines
- x509_get_attr_type_and_value(),
- mbedtls_x509_get_crt_ext(),
- mbedtls_x509_get_crl_ext().
These functions kept using the end of the parent structure as the
`end` pointer and would hence allow substructure fields to cross
the substructure boundary. This could lead to successful parsing
of ill-formed X.509 CRTs.
This commit fixes this.
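A hedged sketch of the bounds pattern (illustrative function, not the
exact library code):

#include "mbedtls/asn1.h"

static int parse_substructure(unsigned char **p, const unsigned char *end)
{
    size_t len;
    const unsigned char *sub_end;
    int ret = mbedtls_asn1_get_tag(p, end, &len,
                                   MBEDTLS_ASN1_CONSTRUCTED |
                                   MBEDTLS_ASN1_SEQUENCE);
    if (ret != 0)
        return ret;

    /* Bound the inner fields by the end of the substructure, not the parent. */
    sub_end = *p + len;

    /* ... parse inner fields against sub_end, not end ... */

    if (*p != sub_end)
        return MBEDTLS_ERR_ASN1_LENGTH_MISMATCH;

    return 0;
}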
Care has to be taken when adapting `mbedtls_x509_get_crt_ext()`
and `mbedtls_x509_get_crl_ext()`, as the underlying function
`mbedtls_x509_get_ext()` returns `0` if no extensions are present
but doesn't set the variable which holds the bounds of the Extensions
structure in case the latter is present. This commit addresses
this by returning early from `mbedtls_x509_get_crt_ext()` and
`mbedtls_x509_get_crl_ext()` if parsing has reached the end of
the input buffer.
The following X.509 parsing tests need to be adapted:
- "TBSCertificate, issuer two inner set datas"
This test exercises the X.509 CRT parser with a Subject name
which has two empty `AttributeTypeAndValue` structures.
This is supposed to fail with `MBEDTLS_ERR_ASN1_OUT_OF_DATA`
because the parser should attempt to parse the first structure
and fail because of a lack of data. Previously, it failed to
obey the (0-length) bounds of the first AttributeTypeAndValue
structure and would try to interpret the beginning of the second
AttributeTypeAndValue structure as the first field of the first
AttributeTypeAndValue structure, returning an UNEXPECTED_TAG error.
- "TBSCertificate, issuer, no full following string"
This test exercises the parser's behaviour on an AttributeTypeAndValue
structure which contains more data than expected; it should therefore
fail with MBEDTLS_ERR_ASN1_LENGTH_MISMATCH. Because of the missing bounds
check, it previously failed with UNEXPECTED_TAG because it interpreted
the remaining byte in the first AttributeTypeAndValue structure as the
first byte in the second AttributeTypeAndValue structure.
- "SubjectAltName repeated"
This test should exercise two SubjectAltNames extensions in succession,
but a wrong length value makes the second SubjectAltNames extension appear
outside of the Extensions structure. With the new bounds in place, this
therefore fails with a LENGTH_MISMATCH error. This commit adapts the test
data to put the 2nd SubjectAltNames extension inside the Extensions
structure, too.
The X.509 parsing test suite test_suite_x509parse contains a test
exercising X.509 verification for a valid MD4/MD5 certificate in a
profile which doesn't allow MD4/MD5. This commit adds an analogous
test for MD2.
When running make with parallelization, running both "clean" and "lib"
with a single make invocation can lead to each target building in
parallel. It's bad if lib is partially done building something, and then
clean deletes what was built. This can lead to errors later on in the
lib target.
$ make -j9 clean lib
CC aes.c
CC aesni.c
CC arc4.c
CC aria.c
CC asn1parse.c
CC ./library/error.c
CC ./library/version.c
CC ./library/version_features.c
AR libmbedcrypto.a
ar: aes.o: No such file or directory
Makefile:120: recipe for target 'libmbedcrypto.a' failed
make[2]: *** [libmbedcrypto.a] Error 1
Makefile:152: recipe for target 'libmbedcrypto.a' failed
make[1]: *** [libmbedcrypto.a] Error 2
Makefile:19: recipe for target 'lib' failed
make: *** [lib] Error 2
make: *** Waiting for unfinished jobs....
To avoid this sort of trouble, always invoke clean by itself without
other targets throughout the library. Don't run clean in parallel with
other rules. The only place where clean was run in parallel with other
targets was in list-symbols.sh.
- Replace 'RSA with MD2' OID '2a864886f70d010102' by
'RSA with SHA-256' OID '2a864886f70d01010b':
Only the last byte determines the hash, and
`MBEDTLS_OID_PKCS1_MD2 == MBEDTLS_OID_PKCS1 "\x02"`
`MBEDTLS_OID_PKCS1_SHA256 == MBEDTLS_OID_PKCS1 "\x0b"`
See oid.h.
- Replace MD2 dependency by SHA256 dependency.
- Adapt expected CRT info output.
We've observed that sometimes check-names.sh exits unexpectedly with
status 2 and no error message. The failure is not reproducible. This
commit makes the script print a trace if it exits unexpectedly.
When doing ABI/API checking, it's useful to have a list of all the
identifiers that are defined in the internal header files, as we
do not promise compatibility for them. This option allows for a
simple method of getting them for use with the ABI checking script.
Run ssl-opt.sh on x86_32 with ASan. This may detect bugs that only
show up on 32-bit platforms, for example due to size_t overflow.
For this component, turn off some memory management features that are
not useful, potentially slow, and may reduce ASan's effectiveness at
catching buffer overflows.
* origin/pr/2470:
Silence pylint
check-files.py: readability improvement in permission check
check-files.py: use class fields for class-wide constants
check-files.py: clean up class structure
abi_check.py: Document more methods
check-files.py: document some classes and methods
Fix pylint errors going uncaught
Call pylint3, not pylint
New, documented pylint configuration
* origin/pr/2364:
Increase okm_hex buffer to contain null character
Minor modifications to hkdf test
Add explanation for okm_string size
Update ChangeLog
Reduce buffer size of okm
Reduce Stack usage of hkdf test function
* restricted/pr/553:
Fix mbedtls_ecdh_get_params with new ECDH context
Add changelog entry for mbedtls_ecdh_get_params robustness
Fix ecdh_get_params with mismatching group
Add test case for ecdh_get_params with mismatching group
Add test case for ecdh_calc_secret
Fix typo in documentation
It was failing to set the key in the ENCRYPT direction before encrypting.
This just happened to work for GCM and CCM.
After re-encrypting, compare the length to the expected ciphertext
length not the plaintext length. Again this just happens to work for
GCM and CCM since they do not perform any kind of padding.
* origin/pr/2436:
Use certificates from data_files and refer them
Specify server certificate to use in SHA-1 test
refactor CA and SRV certificates into separate blocks
refactor SHA-1 certificate defintions and assignment
refactor server SHA-1 certificate definition into a new block
define TEST_SRV_CRT_RSA_SOME in similar logic to TEST_CA_CRT_RSA_SOME
server SHA-256 certificate now follows the same logic as CA SHA-256 certificate
add entry to ChangeLog
* restricted/pr/550:
Update query_config.c
Fix failure in SSLv3 per-version suites test
Adjust DES exclude lists in test scripts
Clarify 3DES changes in ChangeLog
Fix documentation for 3DES removal
Exclude 3DES tests in test scripts
Fix wording of ChangeLog and 3DES_REMOVE docs
Reduce priority of 3DES ciphersuites
* public/pr/2429:
Add ChangeLog entry for unused bits in bitstrings
Improve docs for ASN.1 bitstrings and their usage
Add tests for (named) bitstring to suite_asn1write
Fix ASN1 bitstring writing