Following discussion in the team, it was deemed preferable for the restart
context to be explicitly managed by the caller.
This commit is the first in a series moving in that direction: it starts by
changing only the public API, while still using the old design internally.
Future commits in the series will switch the internals to the new design.
The test function was simplified as it no longer makes sense to test for some
memory management errors since that responsibility shifted to the caller.
In case of an argument change, freeing everything is not the most efficient
approach (it wastes one free()+calloc()), but it makes the code simpler, which
is probably more important here.
We'll need to store MPIs and other things that allocate memory in this
context, so we need a place to free it. We can't rely on doing it before
returning from ecp_mul() as we might return MBEDTLS_ERR_ECP_IN_PROGRESS (thus
preserving the context) and never be called again (for example, TLS handshake
aborted for another reason). So, ecp_group_free() looks like a good place to
do this, if the restart context is part of struct ecp_group.
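As a rough sketch of that shape (all names except mbedtls_mpi_free(),
mbedtls_free() and MBEDTLS_ERR_ECP_IN_PROGRESS are made up for illustration;
this is not the actual structure layout):

    #include "mbedtls/bignum.h"
    #include "mbedtls/platform.h"

    /* Hypothetical restart context holding partial results */
    typedef struct {
        mbedtls_mpi partial_R;      /* an MPI that allocates memory */
    } restart_ctx;

    /* Hypothetical group that embeds a pointer to the restart context */
    typedef struct {
        /* ... curve parameters as in the real group ... */
        restart_ctx *rs;            /* NULL when no operation is in progress */
    } group_t;

    static void group_free( group_t *grp )
    {
        /* ... free the curve parameters ... */
        if( grp->rs != NULL )
        {
            mbedtls_mpi_free( &grp->rs->partial_R );
            mbedtls_free( grp->rs );
            grp->rs = NULL;
        }
    }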
This means it's not possible to use the same ecp_group structure in different
threads concurrently, but:
- that's already the case (and documented) for other reasons
- this feature is precisely intended for environments that lack threading
An alternative would be for the caller to allocate/free the restart context
and pass it explicitly, but this means creating new functions that take a
context argument, and putting a burden on the user.
Our current behaviour is a bit inconsistent here:
- when the bad signature is made by a trusted CA, we stop here and don't
include the trusted CA in the chain (don't call vrfy on it)
- otherwise, we just add NOT_TRUSTED to the flags but keep building the chain
and call vrfy on the upper certs
This ensures that the callback can actually clear that flag, and that it is
seen by the callback at the right level. This flag is not set at the same
place as the others, and this difference will get bigger in the upcoming
refactor, so let's ensure we don't break anything here.
When a trusted CA is rolling its root keys, it could happen that for some
users the list of trusted roots contains two versions of the same CA with the
same name but different keys. Currently this is supported, but it was not
tested.
Note: the intermediate file test-ca-alt.csr is committed on purpose, as not
committing intermediate files causes make to regenerate files that we don't
want it to touch.
As we accept EE certs that are explicitly trusted (in the list of trusted
roots) and usually look for parents by subject, and in the future we might want
to avoid checking the self-signature on trusted certs, there could be a risk
that we
incorrectly accept a cert that looks like a trusted root except it doesn't
have the same key. This test ensures this will never happen.
The tests cover chains of length 0, 1 and 2, with one error, located at any of
the available levels in the chain. This exercises all three call sites of
f_vrfy (two in verify_top, one in verify_child). Chains of greater length
would not cover any new code path or behaviour that I can see.
So far there was no test ensuring that the flags passed to the vrfy callback
are correct (i.e. the flags for the current certificate, not including those of
the parent).
Actual test cases making use of that test function will be added in the next
commit.
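For illustration, here is a sketch of the kind of callback such a test function
can install (the names vrfy_spy and spy_cb are made up; only the callback
signature expected by mbedtls_x509_crt_verify() is the real API):

    #include <stdint.h>
    #include "mbedtls/x509_crt.h"

    #define SPY_MAX_DEPTH 3

    typedef struct {
        uint32_t seen[SPY_MAX_DEPTH];   /* flags reported at each level */
    } vrfy_spy;

    static int spy_cb( void *p_ctx, mbedtls_x509_crt *crt, int depth,
                       uint32_t *flags )
    {
        vrfy_spy *spy = (vrfy_spy *) p_ctx;
        (void) crt;

        if( depth < SPY_MAX_DEPTH )
            spy->seen[depth] = *flags;  /* should be this cert's flags only */

        return( 0 );                    /* don't alter the verification result */
    }

The test can then pass spy_cb and a vrfy_spy to mbedtls_x509_crt_verify() and
compare seen[] against the expected per-level flags.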
We have code to skip them but didn't have explicit tests ensuring they are
skipped (the corresponding branch was never taken).
While at it, remove the extra copy of the chain in server10*.crt, which was
duplicated for no reason.
This shows inconsistencies in how flags are handled when the callback fails:
- sometimes the flags set by the callback are transmitted, sometimes not
- when the cert is not trusted, sometimes BADCERT_NOT_TRUSTED is set,
sometimes not
This adds coverage for 9 lines and 9 branches. Now all lines related to
callback failure are covered.
Now all checks related to profile are covered in:
- verify_with_profile()
- verify_child()
- verify_top()
(that's 10 lines that were previously not covered)
Leaving aside profile enforcement in CRLs for now, as the focus is on
preparing to refactor cert verification.
Previously, flags was left at whatever value it had before. It's cleaner to
make sure it has a definite value, and setting all bits looks like the safest
choice for when things went very wrong.
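In code, that simply means something like the following (a sketch of the error
path, not the exact surrounding context):

    /* On an unexpected failure, report every possible problem */
    *flags = (uint32_t) -1;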
The X509 test suite assumes that MBEDTLS_X509_MAX_INTERMEDIATE_CA is below the
hardcoded threshold 20 used in the long certificate chain generating script
tests/data_files/dir-max/long.sh. This commit adds a compile-time check for
that.
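The check is a plain preprocessor guard along these lines (the exact threshold
and message here are illustrative, see the note about 18 below):

    #include "mbedtls/x509_crt.h"

    /* tests/data_files/dir-max/long.sh hardcodes a maximum depth of 20,
     * which requires the configured limit to be at most 18. */
    #if MBEDTLS_X509_MAX_INTERMEDIATE_CA > 18
    #error "MBEDTLS_X509_MAX_INTERMEDIATE_CA too large for the test data; re-run long.sh"
    #endif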
If we didn't walk the whole chain, then there may be any kind of errors in the
part of the chain we didn't check, so setting all flags looks like the safe
thing to do.
Inspired by test code provided by Nicholas Wilson in PR #351.
The test will fail if someone sets MAX_INTERMEDIATE_CA to a value larger than
18 (default is 8), which is hopefully unlikely and can easily be fixed by
running long.sh again with a larger value if it ever happens.
Current behaviour is suboptimal as flags are not set, but currently the goal
is only to document/test existing behaviour.
By default, keep allowing SHA-1 in key exchange signatures. Disabling
it causes compatibility issues, especially with clients that use
TLS1.2 but don't send the signature_algorithms extension.
SHA-1 is forbidden in certificates by default, since it's vulnerable
to offline collision-based attacks.
There is now one test case to validate that SHA-1 is rejected in
certificates by default, and one test case to validate that SHA-1 is
supported if MBEDTLS_TLS_DEFAULT_ALLOW_SHA1 is #defined.
SHA-1 is now disabled by default in the X.509 layer. Explicitly enable
it in our tests for now. Updating all the test data to SHA-256 should
be done over time.
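If an application (or a test) does need SHA-1-signed certificates, one way to
opt back in, sketched here rather than copied from the actual test code, is to
install a verification profile that extends the default one:

    #include "mbedtls/ssl.h"
    #include "mbedtls/x509_crt.h"

    /* Must outlive the SSL configuration, which only stores a pointer */
    static mbedtls_x509_crt_profile profile_with_sha1;

    static void allow_sha1_certs( mbedtls_ssl_config *conf )
    {
        profile_with_sha1 = mbedtls_x509_crt_profile_default;
        profile_with_sha1.allowed_mds |= MBEDTLS_X509_ID_FLAG( MBEDTLS_MD_SHA1 );
        mbedtls_ssl_conf_cert_profile( conf, &profile_with_sha1 );
    }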
The ECJPAKE test suite uses a zero-size array for the empty password
used in the tests, which is not valid C. This commit fixes this.
This originally showed up as a compilation failure on Visual Studio
2015, documented in IOTSSL-1242, but can also be observed with GCC
when using the -Wpedantic compilation option.
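The change is essentially of this shape (a sketch; the actual names in the
test suite differ):

    #include <stddef.h>

    /* Before: invalid C, rejected by VS 2015 and by GCC with -Wpedantic
     *     static const unsigned char empty_password[0] = { };
     * After: a one-byte array whose declared size is never relied on,
     * paired with an explicit length of zero passed to the ECJPAKE calls. */
    static const unsigned char empty_password[1] = { 0 };
    static const size_t empty_password_len = 0;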
The test case was generated by modifying our signature code so that it
produces a 7-byte long padding (which also means garbage at the end, so it is
essential to check that the error that is detected first is indeed the
padding error rather than the final length check).
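For reference, the layout the verifier expects is (PKCS#1 v1.5 requires the
padding string to be at least 8 bytes of 0xFF):

    00 01 FF FF FF FF FF FF FF FF ... FF 00 <DigestInfo with the hash>

so 7 bytes of 0xFF is one byte short of valid padding.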
The modular inversion function hangs when given the modulus 1. This commit
makes it reject this modulus with a BAD_INPUT error code, and adds a test for
this case.
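A sketch of what the new test checks (not the actual test suite code; only the
mbedtls_mpi_* calls and the error code are the real API):

    #include "mbedtls/bignum.h"

    static void check_inv_mod_rejects_modulus_one( void )
    {
        mbedtls_mpi X, A, N;

        mbedtls_mpi_init( &X ); mbedtls_mpi_init( &A ); mbedtls_mpi_init( &N );
        mbedtls_mpi_lset( &A, 3 );
        mbedtls_mpi_lset( &N, 1 );

        /* Must return immediately with an error instead of looping forever */
        if( mbedtls_mpi_inv_mod( &X, &A, &N ) != MBEDTLS_ERR_MPI_BAD_INPUT_DATA )
        {
            /* test failure handling would go here */
        }

        mbedtls_mpi_free( &X ); mbedtls_mpi_free( &A ); mbedtls_mpi_free( &N );
    }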
Fix a buffer overflow when writing a string representation of an MPI
number to a buffer in hexadecimal. The problem occurs because hex
digits are written in pairs and this is not accounted for in the
calculation of the required buffer size when the number of digits is
odd.
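The fix is conceptually to round the number of hex digits up to a full pair
when sizing the buffer; a sketch of the idea (not the library's actual code):

    #include <stddef.h>

    /* Bytes needed to print an MPI of 'bits' bits in hexadecimal,
     * leaving room for a possible '-' sign and the terminating '\0'. */
    static size_t hex_buf_size( size_t bits )
    {
        size_t digits = ( bits + 3 ) / 4;   /* hex digits for the value    */

        digits += digits & 1;               /* digits are written in pairs */

        return( digits + 2 );               /* sign + '\0'                 */
    }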