Due to the way the current PK API works, it may not have been clear
to library clients how large the output buffers passed to the signing
functions need to be. Depending on the key type, the required size is
governed by MPI- or EC-specific compile-time constants.
Inside the library, there were places where it was assumed that the
MPI size would always be enough, even for ECDSA signatures. However,
with a very small MBEDTLS_MPI_MAX_SIZE and a sufficiently large key,
the EC signature could exceed the MPI size and cause a stack overflow.
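For illustration, a minimal caller-side sketch of the safe sizing (not
part of the library; it assumes an ECDSA-enabled build, and the
sign_hash() helper is hypothetical):
```
#include <stddef.h>
#include "mbedtls/bignum.h"
#include "mbedtls/ecdsa.h"
#include "mbedtls/pk.h"

/* Sketch only: size the signature buffer for the worst case of both
 * key families instead of relying on MBEDTLS_MPI_MAX_SIZE alone. */
#define SIG_BUF_LEN ( MBEDTLS_ECDSA_MAX_LEN > MBEDTLS_MPI_MAX_SIZE ? \
                      MBEDTLS_ECDSA_MAX_LEN : MBEDTLS_MPI_MAX_SIZE )

int sign_hash( mbedtls_pk_context *pk,
               const unsigned char *hash, size_t hash_len,
               int (*f_rng)( void *, unsigned char *, size_t ), void *p_rng )
{
    unsigned char sig[SIG_BUF_LEN];
    size_t sig_len = 0;

    /* The buffer is large enough for both RSA and ECDSA signatures,
     * however small MBEDTLS_MPI_MAX_SIZE is configured to be.
     * (A real caller would copy the signature out before returning.) */
    return( mbedtls_pk_sign( pk, MBEDTLS_MD_SHA256, hash, hash_len,
                             sig, &sig_len, f_rng, p_rng ) );
}
```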
This test establishes both conditions -- small MPI size and the use
of a long ECDSA key -- and attempts to sign an arbitrary file.
This can cause a stack overflow if the signature buffers are not
big enough, therefore the test is performed for an ASan build.
Remove the "Decrypt empty buffer" test, as ChaCha20 is a stream cipher
and 0 bytes encrypted is identical to a 0 length buffer. The "ChaCha20
Encrypt and decrypt 0 bytes" test will test decryption of a 0 length
buffer.
Previously, even in the Chacha20 and Chacha20-Poly1305 tests, we would
test that decryption of an empty buffer would work with
MBEDTLS_CIPHER_AES_128_CBC.
Make the cipher used with the dec_empty_buf() test configurable, so that
Chacha20 and Chacha20-Poly1305 empty buffer tests can use ciphers other
than AES CBC. Then, make the Chacha20 and Chacha20-Poly1305 empty buffer
tests use the MBEDTLS_CIPHER_CHACHA20 and
MBEDTLS_CIPHER_CHACHA20_POLY1305 cipher suites.
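For reference, a minimal sketch of decrypting a zero-length buffer
through the generic cipher layer with MBEDTLS_CIPHER_CHACHA20 (not the
actual test code; the helper name is illustrative and MBEDTLS_CHACHA20_C
is assumed to be enabled):
```
#include <stddef.h>
#include "mbedtls/cipher.h"

/* Sketch: decrypt 0 bytes with ChaCha20 through the generic cipher layer. */
int chacha20_dec_empty( const unsigned char key[32],
                        const unsigned char nonce[12] )
{
    mbedtls_cipher_context_t ctx;
    unsigned char in[1], out[16];
    size_t olen = 0;
    int ret;

    mbedtls_cipher_init( &ctx );

    ret = mbedtls_cipher_setup( &ctx,
              mbedtls_cipher_info_from_type( MBEDTLS_CIPHER_CHACHA20 ) );
    if( ret == 0 )
        ret = mbedtls_cipher_setkey( &ctx, key, 256, MBEDTLS_DECRYPT );
    if( ret == 0 )
        ret = mbedtls_cipher_set_iv( &ctx, nonce, 12 );
    if( ret == 0 )
        ret = mbedtls_cipher_reset( &ctx );
    if( ret == 0 )
        ret = mbedtls_cipher_update( &ctx, in, 0, out, &olen );

    /* For a stream cipher, decrypting 0 bytes succeeds with olen == 0. */
    mbedtls_cipher_free( &ctx );
    return( ret );
}
```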
Some functions within the X.509 module return a low-level ASN.1
error code where this error code should instead be wrapped in a
high-level X.509 error code, as in the bulk of the module.
Specifically, the following functions are affected:
- mbedtls_x509_get_ext()
- x509_get_version()
- x509_get_uid()
This commit modifies these functions to always return a high-level
X.509 error code.
Care has to be taken when adapting `mbedtls_x509_get_ext()`:
Currently, the callers `x509_crl_get_ext()` and `x509_crt_get_ext()` treat the
return code `MBEDTLS_ERR_ASN1_UNEXPECTED_TAG` specially to
gracefully detect and continue if the extension structure is not
present. Wrapping the ASN.1 error with
`MBEDTLS_ERR_X509_INVALID_EXTENSIONS` and adapting the check
accordingly would mean that an unexpected tag somewhere
down the extension parsing would be ignored by the caller.
The way out of this is the following: Luckily, the extension
structure is always the last field in the surrounding structure,
so if there is some data remaining, it must be an Extension
structure, so we don't need to deal with a tag mismatch gracefully
in the first place.
We may therefore wrap the return code from the initial call to
`mbedtls_asn1_get_tag()` in `mbedtls_x509_get_ext()` by
`MBEDTLS_ERR_X509_INVALID_EXTENSIONS` and simply remove
the special treatment of `MBEDTLS_ERR_ASN1_UNEXPECTED_TAG`
in the callers `x509_crl_get_ext()` and `x509_crt_get_ext()`.
This renders `mbedtls_x509_get_ext()` unsuitable should an optional
Extensions structure ever occur somewhere other than at the end of
its surrounding structure, but for CRTs and CRLs, it's fine.
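Schematically, the wrapping follows the module's usual convention of
adding the high-level error code to the low-level ASN.1 code; the
sketch below is simplified and not the literal library code:
```
#include "mbedtls/asn1.h"
#include "mbedtls/x509.h"

/* Simplified sketch of the initial tag check on the Extensions structure:
 * wrap the ASN.1 error instead of returning it verbatim. */
static int get_ext_wrapped( unsigned char **p, const unsigned char *end,
                            mbedtls_x509_buf *ext, int tag )
{
    int ret = mbedtls_asn1_get_tag( p, end, &ext->len,
                                    MBEDTLS_ASN1_CONTEXT_SPECIFIC |
                                    MBEDTLS_ASN1_CONSTRUCTED | tag );
    if( ret != 0 )
        return( MBEDTLS_ERR_X509_INVALID_EXTENSIONS + ret );

    /* ... continue parsing the Extensions structure ... */
    return( 0 );
}
```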
The following tests need to be adapted:
- "TBSCertificate v3, issuerID wrong tag"
The issuerID is optional, so if we look for its presence
but find a different tag, we silently continue and try
parsing the subjectID, and then the extensions. The tag '00'
used in this test doesn't match either of these, and the
previous code would hence return LENGTH_MISMATCH after
unsuccessfully trying issuerID, subjectID and Extensions.
With the new code, any data remaining after issuerID and
subjectID _must_ be Extension data, so we fail with
UNEXPECTED_TAG when trying to parse the Extension data.
- "TBSCertificate v3, UIDs, invalid length"
The test hardcodes the expectation of
MBEDTLS_ERR_ASN1_INVALID_LENGTH, which needs to be
wrapped in MBEDTLS_ERR_X509_INVALID_FORMAT now.
Fixes #2431.
When parsing a substructure of an ASN.1 structure, no field within
the substructure may exceed the bounds of the substructure.
Concretely, the `end` pointer passed to the ASN.1 parsing routines
must be updated to point to the end of the substructure while parsing
the latter.
This was previously not the case for the routines
- x509_get_attr_type_and_value(),
- mbedtls_x509_get_crt_ext(),
- mbedtls_x509_get_crl_ext().
These functions kept using the end of the parent structure as the
`end` pointer and would hence allow substructure fields to cross
the substructure boundary. This could lead to successful parsing
of ill-formed X.509 CRTs.
This commit fixes this.
Care has to be taken when adapting `mbedtls_x509_get_crt_ext()`
and `mbedtls_x509_get_crl_ext()`, as the underlying function
`mbedtls_x509_get_ext()` returns `0` if no extensions are present
but doesn't set the variable which holds the bounds of the Extensions
structure in case the latter is present. This commit addresses
this by returning early from `mbedtls_x509_get_crt_ext()` and
`mbedtls_x509_get_crl_ext()` if parsing has reached the end of
the input buffer.
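Schematically, the bounds handling looks as follows (a simplified
sketch with an illustrative function name, not the literal library
code):
```
#include <stddef.h>
#include "mbedtls/asn1.h"
#include "mbedtls/x509.h"

/* Simplified sketch: bound the parsing of a SEQUENCE substructure by its
 * own length rather than by the end of the parent structure. */
static int parse_substructure( unsigned char **p, const unsigned char *end )
{
    int ret;
    size_t len;

    if( ( ret = mbedtls_asn1_get_tag( p, end, &len,
                MBEDTLS_ASN1_CONSTRUCTED | MBEDTLS_ASN1_SEQUENCE ) ) != 0 )
        return( MBEDTLS_ERR_X509_INVALID_FORMAT + ret );

    end = *p + len; /* from here on, `end` is the end of the substructure */

    /* ... parse the fields of the substructure against the new `end` ... */

    if( *p != end )
        return( MBEDTLS_ERR_X509_INVALID_FORMAT +
                MBEDTLS_ERR_ASN1_LENGTH_MISMATCH );

    return( 0 );
}
```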
The following X.509 parsing tests need to be adapted:
- "TBSCertificate, issuer two inner set datas"
This test exercises the X.509 CRT parser with a Subject name
which has two empty `AttributeTypeAndValue` structures.
This is supposed to fail with `MBEDTLS_ERR_ASN1_OUT_OF_DATA`
because the parser should attempt to parse the first structure
and fail because of a lack of data. Previously, it failed to
obey the (0-length) bounds of the first AttributeTypeAndValue
structure and would try to interpret the beginning of the second
AttributeTypeAndValue structure as the first field of the first
AttributeTypeAndValue structure, returning an UNEXPECTED_TAG error.
- "TBSCertificate, issuer, no full following string"
This test exercises the parser's behaviour on an AttributeTypeAndValue
structure which contains more data than expected; it should therefore
fail with MBEDTLS_ERR_ASN1_LENGTH_MISMATCH. Because of the missing bounds
check, it previously failed with UNEXPECTED_TAG because it interpreted
the remaining byte in the first AttributeTypeAndValue structure as the
first byte in the second AttributeTypeAndValue structure.
- "SubjectAltName repeated"
This test should exercise two SubjectAltNames extensions in succession,
but a wrong length value makes the second SubjectAltNames extension appear
outside of the Extensions structure. With the new bounds in place, this
therefore fails with a LENGTH_MISMATCH error. This commit adapts the test
data to put the 2nd SubjectAltNames extension inside the Extensions
structure, too.
The X.509 parsing test suite test_suite_x509parse contains a test
exercising X.509 verification for a valid MD4/MD5 certificate in a
profile which doesn't allow MD4/MD5. This commit adds an analogous
test for MD2.
When running make with parallelization, running both "clean" and "lib"
with a single make invocation can lead to each target building in
parallel. It's bad if lib is partially done building something, and then
clean deletes what was built. This can lead to errors later on in the
lib target.
$ make -j9 clean lib
CC aes.c
CC aesni.c
CC arc4.c
CC aria.c
CC asn1parse.c
CC ./library/error.c
CC ./library/version.c
CC ./library/version_features.c
AR libmbedcrypto.a
ar: aes.o: No such file or directory
Makefile:120: recipe for target 'libmbedcrypto.a' failed
make[2]: *** [libmbedcrypto.a] Error 1
Makefile:152: recipe for target 'libmbedcrypto.a' failed
make[1]: *** [libmbedcrypto.a] Error 2
Makefile:19: recipe for target 'lib' failed
make: *** [lib] Error 2
make: *** Waiting for unfinished jobs....
To avoid this sort of trouble, always invoke clean by itself without
other targets throughout the library. Don't run clean in parallel with
other rules. The only place where clean was run in parallel with other
targets was in list-symbols.sh.
- Replace 'RSA with MD2' OID '2a864886f70d010102' by
'RSA with SHA-256' OID '2a864886f70d01010b':
Only the last byte determines the hash, and
`MBEDTLS_OID_PKCS1_MD2 == MBEDTLS_OID_PKCS1 "\x02"`
`MBEDTLS_OID_PKCS1_SHA256 == MBEDTLS_OID_PKCS1 "\x0b"`
See oid.h.
- Replace MD2 dependency by SHA256 dependency.
- Adapt expected CRT info output.
We've observed that sometimes check-names.sh exits unexpectedly with
status 2 and no error message. The failure is not reproducible. This
commit makes the script print a trace if it exits unexpectedly.
When doing ABI/API checking, it's useful to have a list of all the
identifiers that are defined in the internal header files, as we
do not promise compatibility for them. This option allows for a
simple method of getting them for use with the ABI checking script.
Run ssl-opt.sh on x86_32 with ASan. This may detect bugs that only
show up on 32-bit platforms, for example due to size_t overflow.
For this component, turn off some memory management features that are
not useful, potentially slow, and may reduce ASan's effectiveness at
catching buffer overflows.
* origin/pr/2470:
Silence pylint
check-files.py: readability improvement in permission check
check-files.py: use class fields for class-wide constants
check-files.py: clean up class structure
abi_check.py: Document more methods
check-files.py: document some classes and methods
Fix pylint errors going uncaught
Call pylint3, not pylint
New, documented pylint configuration
* origin/pr/2364:
Increase okm_hex buffer to contain null character
Minor modifications to hkdf test
Add explanation for okm_string size
Update ChangeLog
Reduce buffer size of okm
Reduce Stack usage of hkdf test function
* restricted/pr/553:
Fix mbedtls_ecdh_get_params with new ECDH context
Add changelog entry for mbedtls_ecdh_get_params robustness
Fix ecdh_get_params with mismatching group
Add test case for ecdh_get_params with mismatching group
Add test case for ecdh_calc_secret
Fix typo in documentation
It was failing to set the key in the ENCRYPT direction before encrypting.
This just happened to work for GCM and CCM.
After re-encrypting, compare the length to the expected ciphertext
length, not the plaintext length. Again, this just happens to work for
GCM and CCM since they do not perform any kind of padding.
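Schematically, the corrected re-encryption step looks like this (an
illustrative sketch with a made-up helper name, not the literal test
code):
```
#include <stddef.h>
#include "mbedtls/cipher.h"

/* Illustrative: before re-encrypting, the context must be keyed in the
 * ENCRYPT direction; reusing the DECRYPT-direction key only happened to
 * work for GCM and CCM. */
static int reencrypt( mbedtls_cipher_context_t *ctx,
                      const unsigned char *key, int key_bitlen,
                      const unsigned char *iv, size_t iv_len,
                      const unsigned char *ad, size_t ad_len,
                      const unsigned char *pt, size_t pt_len,
                      unsigned char *ct, size_t expected_ct_len,
                      unsigned char *tag, size_t tag_len )
{
    size_t olen = 0;
    int ret;

    if( ( ret = mbedtls_cipher_setkey( ctx, key, key_bitlen,
                                       MBEDTLS_ENCRYPT ) ) != 0 )
        return( ret );

    if( ( ret = mbedtls_cipher_auth_encrypt( ctx, iv, iv_len, ad, ad_len,
                                             pt, pt_len, ct, &olen,
                                             tag, tag_len ) ) != 0 )
        return( ret );

    /* Compare against the expected ciphertext length, not the plaintext
     * length: the two only coincide for modes without padding. */
    return( olen == expected_ct_len ? 0 : -1 );
}
```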
* origin/pr/2436:
Use certificates from data_files and refer them
Specify server certificate to use in SHA-1 test
refactor CA and SRV certificates into separate blocks
refactor SHA-1 certificate definitions and assignment
refactor server SHA-1 certificate definition into a new block
define TEST_SRV_CRT_RSA_SOME in similar logic to TEST_CA_CRT_RSA_SOME
server SHA-256 certificate now follows the same logic as CA SHA-256 certificate
add entry to ChangeLog
* restricted/pr/550:
Update query_config.c
Fix failure in SSLv3 per-version suites test
Adjust DES exclude lists in test scripts
Clarify 3DES changes in ChangeLog
Fix documentation for 3DES removal
Exclude 3DES tests in test scripts
Fix wording of ChangeLog and 3DES_REMOVE docs
Reduce priority of 3DES ciphersuites
* public/pr/2429:
Add ChangeLog entry for unused bits in bitstrings
Improve docs for ASN.1 bitstrings and their usage
Add tests for (named) bitstring to suite_asn1write
Fix ASN1 bitstring writing
The test used 3DES as the suite for SSLv3, which now makes the handshake fail
with "no ciphersuite in common", failing the test as well. Use Camellia
instead (as there are not enough AES ciphersuites before TLS 1.2 to
distinguish between the 3 versions).
Document some dependencies, but not all. Just trying to avoid introducing new
issues by using a new cipher here, not trying to make it perfect, which is a
much larger task out of scope of this commit.
Line issue trackers are conceptually a subclass of file issue
trackers: they're file issue trackers where issues arise from checking
each line independently. So make it an actual subclass.
Pylint pointed out the design smell: there was an abstract method that
wasn't always overridden in concrete child classes.
Make check-python-files.sh run pylint on all *.py files (in
directories where they are known to be present), rather than list
files explicitly.
Fix a bug whereby the return status of check-python-files.sh was only
based on the last file passing, i.e. errors in other files were
effectively ignored.
Make check-python-files.sh run pylint unconditionally. Since pylint3
is not critical, make all.sh skip running check-python-files.sh if
pylint3 is not available.
The pylint configuration in .pylintrc was a modified version of the
output of `pylint --generate-rcfile` from an unknown version of
pylint. Replace it with a file that only contains settings that are
modified from the default, with an explanation of why each setting is
modified.
The new .pylintrc was written from scratch, based on the output of
pylint on the current version of the files and on a judgement of what
to silence generically, what to silence on a case-by-case basis and
what to fix.
Add a test case for doing an ECDH calculation by calling
mbedtls_ecdh_get_params on both keys, with keys belonging to
different groups. This should fail, but currently passes.
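A hedged sketch of the scenario (helper and parameter names are
illustrative):
```
#include "mbedtls/ecdh.h"

/* Sketch: feeding two keys from different groups into one ECDH context
 * must be rejected. */
static int try_mismatched_groups( const mbedtls_ecp_keypair *our_p256_key,
                                  const mbedtls_ecp_keypair *their_p384_key )
{
    mbedtls_ecdh_context ecdh;
    int ret;

    mbedtls_ecdh_init( &ecdh );

    ret = mbedtls_ecdh_get_params( &ecdh, our_p256_key, MBEDTLS_ECDH_OURS );
    if( ret == 0 )
        ret = mbedtls_ecdh_get_params( &ecdh, their_p384_key,
                                       MBEDTLS_ECDH_THEIRS );
    /* With the fix, the second call fails instead of silently accepting
     * a key from a different group. */
    mbedtls_ecdh_free( &ecdh );
    return( ret );
}
```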
When all.sh invokes check_headers_in_cpp, a backup config.h exists. This
causes a stray difference vs cpp_dummy_build.cpp. Fix by only collecting
the *.h files in include/mbedtls.
Change-Id: Ifd415027e856858579a6699538f06fc49c793570
`test_hkdf` in the hkdf test suite consumed ~6KB of stack with
6 buffers of ~1KB each. This causes a stack overflow on some platforms
with a smaller stack. The buffer sizes were reduced. By testing, the
sizes can be reduced even further, as the largest size seen is
82 bytes (for okm).
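For reference, a hedged sketch of the kind of call the test makes, with
the OKM buffer sized closer to the largest output in the test data (the
helper name and hash choice are illustrative):
```
#include <stddef.h>
#include "mbedtls/hkdf.h"
#include "mbedtls/md.h"

/* Illustrative: the output keying material buffer only needs to be as
 * large as the largest OKM in the test data (82 bytes), not ~1KB. */
static int derive_okm( const unsigned char *salt, size_t salt_len,
                       const unsigned char *ikm, size_t ikm_len,
                       const unsigned char *info, size_t info_len,
                       unsigned char *okm, size_t okm_len )
{
    const mbedtls_md_info_t *md =
        mbedtls_md_info_from_type( MBEDTLS_MD_SHA256 );

    return( mbedtls_hkdf( md, salt, salt_len, ikm, ikm_len,
                          info, info_len, okm, okm_len ) );
}
```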
Wildcard patterns now work with command line COMPONENT arguments
without --except as well as with. You can now run e.g.
`all.sh "check_*` to run all the sanity checks.
After backing up and restoring config.h, `git diff-files` may report
it as potentially-changed because it isn't sure whether the index is
up to date. Use `git diff` instead: it actually reads the file.
Only look for armcc if component_build_armcc is to be executed,
instead of requiring the option --no-armcc.
You can still pass --no-armcc, but it's no longer required when
listing components to run. With no list of components or an exclude
list on the command line, --no-armcc is equivalent to having
build_armcc in the exclude list.
Build the list of components to run in $RUN_COMPONENTS as part of
command line parsing. After parsing the command line, it no longer
matters how this list was built.
Extract the list of available components by looking for definitions of
functions called component_xxx. The previous code explicitly listed
all components in run_all_components, which opened the risk of
forgetting to list a component there.
Add a conditional execution facility: if a function support_xxx exists
and returns false then component_xxx is not executed (except when the
command line lists an explicit set of components to execute).
MAKEFLAGS was set to -j if it was already set, instead of being set if
not previously set as intended. So now all.sh will do parallel builds
if invoked without MAKEFLAGS in the environment.
Don't bail out of all.sh if the OS isn't Linux. We only expect
everything to pass on a recent Linux x86_64, but it's useful to call
all.sh to run some components on any platform.
In all.sh, always run both MemorySanitizer and Valgrind. Valgrind is
slower than ASan and MSan but finds some things that they don't.
Run MSan unconditionally, not just on Linux/x86_64. MSan is supported
on some other OSes and CPUs these days.
Use `all.sh --except test_memsan` if you want to omit MSan because it
isn't supported on your platform. Use `all.sh --except test_memcheck`
if you want to omit Valgrind because it's too slow.
Make the test scripts more portable (tested on FreeBSD): don't insist
on GNU sed, and recognize amd64 as well as x86_64 for `uname -m`. The
`make` utility must still be GNU make.
Call `set disable-randomization off` only if it seems to be supported.
The goal is to neither get an error about disable-randomization not
being supported (e.g. on FreeBSD), nor get an error if it is supported
but fails (e.g. on Ubuntu).
Only fiddle with disable-randomization from all.sh, which cares
because it reports the failure of ASLR disabling as an error. If a
developer invokes the Gdb script manually, a warning about ASLR
doesn't matter.
Use `cmake -D CMAKE_BUILD_TYPE=Asan` rather than manually setting
`-fsanitize=address`. This lets cmake determine the necessary compiler
and linker flags.
With UNSAFE_BUILD on, force -Wno-error. This is necessary to build
with MBEDTLS_TEST_NULL_ENTROPY.
Merge the work on all.sh that was done on mbedtls-2.14.0 with the
changes from mbedtls-2.14.0 to mbedtls-2.16.0.
There is a merge conflict in test/scripts/all.sh, which is the only
file that was modified in the all.sh work branch. I resolved it by
taking the copy from the all.sh branch and applying the changes
between mbedtls-2.14.0 and mbedtls-2.16.0. These changes consisted of
two commits:
* "Add tests to all.sh for CHECK_PARAMS edge cases": adds two
test components which are reproduced here as
test_check_params_without_platform and component_test_check_params_silent.
* "tests: Backup config.h before modifying it": moot because the
component framework introduced in the all.sh branch backs up config.h
systematically.
In all.sh, always save config.h before running a component, instead of
doing it manually in each component that requires it (except when we
forget, which has happened). This would break a script that requires
config.h.bak not to exist, but we don't have any of those.
Call cleanup from run_component instead of calling it from each
individual component function.
Clean up after each component rather than before. With the new
structure it makes more sense for each component to leave the place
clean. Run cleanup once at the beginning to start from a clean slate.
Move almost all the code of this script into functions. There is no
intended behavior change. The goal of this commit is to make
subsequent improvements easier to follow.
A very large number of lines have been reindented. To see what's going
on, ignore whitespace differences (e.g. diff -w).
I followed the following rules:
* Minimize the amount of code that gets moved.
* Don't change anything to what gets executed or displayed.
* Almost all the code must end up in a function.
* One function does one thing. For most of the code, that's from one
"cleanup" to the next.
* The test sequence functions (run_XXX) are independent.
The change mostly amounts to putting chunks of code into a function
and calling the functions in order. A few test runs are conditional;
in those cases the conditional is around the function call.
The test suites `test_suite_gcm.aes{128,192,256}_en.data` contain
numerous NIST test vectors for AES-*-GCM against which the GCM
API mbedtls_gcm_xxx() is tested.
However, one level higher at the cipher API, no tests exist which
exercise mbedtls_cipher_auth_{encrypt/decrypt}() for GCM ciphers,
although test_suite_cipher.function contains the test auth_crypt_tv
which does precisely that and is already used e.g. in
test_suite_cipher.ccm.
This commit replicates the test vectors from
test_suite_gcm.aes{128,192,256}_en.data in test_suite_cipher.gcm.data
and adds a run of auth_crypt_tv for each of them.
The conversion was mainly done through the sed command line
```
s/gcm_decrypt_and_verify:\([^:]*\):\([^:]*\):\([^:]*\):\([^:]*\):\([^:]*\):\([^:]*\):\([^:]*\):\([^:]*\):\([^:]*\):\([^:]*\)/auth_crypt_tv:\1:\2:\4:\5:\3:\7:\8:\9/
```
tests/Makefile had some warnings about unused code disabled
unnecessarily, which test-ref-configs.pl was turning back on. We don't
need to disable these warnings, so I'm turning them back on.
Depending on the configured options, not all of the helper functions
were being used, which was leading to warnings about unused functions
with Clang.
To avoid any complex compile-time options, or adding more logic to
generate_test_code.py to screen out unused functions, the functions
that were provoking the warning had their static qualifier removed,
taking them out of file scope and exposing them to the linker.
Document when a context must be initialized or not, when it must be
set up or not, and whether it needs a private key or whether a
public key will do.
The implementation is sometimes more liberal than the documentation,
accepting a non-set-up context as a context that can't perform the
requested operation. This preserves backward compatibility.
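As a hedged illustration of the documented life cycle (the flow below
is an example use, not the documentation itself):
```
#include <stddef.h>
#include "mbedtls/pk.h"

/* Illustrative life cycle: the context must be initialized before any
 * other call, and set up (here by parsing a key) before operations that
 * need key material. Verification needs at least a public key; signing
 * needs a private key. */
static int verify_with_public_key( const unsigned char *pubkey_der, size_t len,
                                   const unsigned char *hash, size_t hash_len,
                                   const unsigned char *sig, size_t sig_len )
{
    mbedtls_pk_context pk;
    int ret;

    mbedtls_pk_init( &pk );          /* must come first */
    ret = mbedtls_pk_parse_public_key( &pk, pubkey_der, len ); /* sets it up */
    if( ret == 0 )
        ret = mbedtls_pk_verify( &pk, MBEDTLS_MD_SHA256,
                                 hash, hash_len, sig, sig_len );
    mbedtls_pk_free( &pk );
    return( ret );
}
```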
The MPI_VALIDATE_RET() macro cannot be used for parameter
validation of mbedtls_mpi_lsb() because this function returns
a size_t.
Use the underlying MBEDTLS_INTERNAL_VALIDATE_RET() instead,
returning 0 on failure.
Also, add a test for this behaviour.
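Schematically (a simplified sketch with an illustrative wrapper name,
not the library code itself):
```
#include <stddef.h>
#include "mbedtls/bignum.h"
#include "mbedtls/platform_util.h"

/* Sketch: a function returning size_t cannot report
 * MBEDTLS_ERR_MPI_BAD_INPUT_DATA, so it validates with
 * MBEDTLS_INTERNAL_VALIDATE_RET() and returns 0 on a NULL argument. */
size_t my_mpi_lsb( const mbedtls_mpi *X )
{
    MBEDTLS_INTERNAL_VALIDATE_RET( X != NULL, 0 );

    return( mbedtls_mpi_lsb( X ) ); /* delegate to the real implementation */
}
```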
For mbedtls_pk_parse_key and mbedtls_pk_parse_keyfile, the password is
optional. Clarify what this means: NULL is ok and means no password.
Validate parameters and test accordingly.
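For example (a hedged sketch; the helper name is illustrative):
```
#include <stddef.h>
#include "mbedtls/pk.h"

/* Sketch: parsing an unencrypted DER key; passing NULL/0 for the
 * password is valid and means "no password". */
static int load_unencrypted_key( mbedtls_pk_context *pk,
                                 const unsigned char *key_der, size_t key_len )
{
    return( mbedtls_pk_parse_key( pk, key_der, key_len, NULL, 0 ) );
}
```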
The test that mbedtls_aria_free() accepts NULL parameters
can be performed even if MBEDTLS_CHECK_PARAMS is unset, but
was previously included in the test case aria_invalid_params()
which is only executed if MBEDTLS_CHECK_PARAMS is set.
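A hedged sketch of the check that can now run unconditionally (the
function name is illustrative):
```
#include <stddef.h>
#include "mbedtls/aria.h"

/* Sketch: freeing a NULL context must be a harmless no-op, with or
 * without MBEDTLS_CHECK_PARAMS. */
void aria_free_null_is_noop( void )
{
    mbedtls_aria_free( NULL );
}
```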
It's good to make a backup of config.h before modifying it, so that when
"cleanup" runs the next test has a clean default config.h to start from.
Fixes 840af0a9ae ("Add tests to all.sh for CHECK_PARAMS edge cases")
Parameter validation was previously performed and tested unconditionally
for the ChaCha/Poly modules. This commit therefore only needs to guard the
existing tests accordingly and use the appropriate test macros for parameter
validation.