This commit adds four tests to tests/ssl-opt.sh:
(1) & (2): Check behaviour of optional/required verification when the
trusted CA chain is empty.
(3) & (4): Check behaviour of optional/required verification when the
client receives a server certificate with an unsupported curve.
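For reference, the two verification modes exercised by these tests are the
ones selected through mbedtls_ssl_conf_authmode(); a minimal sketch (the
conf variable name is illustrative):

    /* 'Required' verification: the handshake is aborted if the peer
     * certificate cannot be verified. */
    mbedtls_ssl_conf_authmode( &conf, MBEDTLS_SSL_VERIFY_REQUIRED );

    /* 'Optional' verification: the handshake continues, and the result
     * can be inspected with mbedtls_ssl_get_verify_result(). */
    mbedtls_ssl_conf_authmode( &conf, MBEDTLS_SSL_VERIFY_OPTIONAL );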
In the TLS test client, allow SHA-1 as a signature hash algorithm.
Without this, the renegotiation tests failed.
A previous commit had allowed SHA-1 via the certificate profile, but that
only applied to certificate verification, not to the signature_algorithms
extension sent in the initial negotiation.
SHA-1 is now disabled by default in the X.509 layer. Explicitly enable
it in our tests for now. Updating all the test data to SHA-256 should
be done over time.
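Re-enabling SHA-1 in the tests amounts to two configuration calls; a rough
sketch, assuming the mbed TLS 2.x APIs (variable names are illustrative,
and the profile must remain in scope as long as the config uses it):

    /* Accept SHA-1 in certificates (test use only). */
    mbedtls_x509_crt_profile test_profile = mbedtls_x509_crt_profile_default;
    test_profile.allowed_mds |= MBEDTLS_X509_ID_FLAG( MBEDTLS_MD_SHA1 );
    mbedtls_ssl_conf_cert_profile( &conf, &test_profile );

    /* Also offer SHA-1 in the signature_algorithms extension. */
    static const int test_sig_hashes[] = {
        MBEDTLS_MD_SHA256, MBEDTLS_MD_SHA1, MBEDTLS_MD_NONE /* terminator */
    };
    mbedtls_ssl_conf_sig_hashes( &conf, test_sig_hashes );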
Adding the CA suppression list option to the 'ssl_server2' sample
program is a prerequisite for adding tests for this feature to the
integration test suite (ssl-opt.sh).
In the ecdsa.c sample application we don't hash the message; we apply
ECDSA directly to a buffer containing plain text. Although the text
explains that the input should be the message hash, this can still be
confusing.
Any misunderstandings here are potentially very dangerous, because ECDSA
truncates the message hash if necessary and this can lead to trivial
signature forgeries if the API is misused and the message is passed
directly to the function without hashing.
This commit adds a hash computation step to the ecdsa.c sample
application and clarification to the doxygen documentation of the
ECDSA functions involved.
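The hash-then-sign flow added to the sample looks roughly like this
(error handling omitted; the context and RNG names are illustrative):

    unsigned char hash[32];
    unsigned char sig[MBEDTLS_ECDSA_MAX_LEN];
    size_t sig_len = 0;

    /* 1. Hash the message first; ECDSA must never be fed the raw message. */
    mbedtls_sha256( message, message_len, hash, 0 /* 0 = SHA-256 */ );

    /* 2. Sign the 32-byte digest, not the message itself. */
    ret = mbedtls_ecdsa_write_signature( &ctx, MBEDTLS_MD_SHA256,
                                         hash, sizeof( hash ),
                                         sig, &sig_len,
                                         mbedtls_ctr_drbg_random, &ctr_drbg );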
The sample application programs/ssl/ssl_server2.c was previously
modified to use inttypes.h to parse a string into a 64-bit integer.
However, MSVC does not support C99, so compilation fails. This
patch modifies the sample app to use the MSVC specific parsing
functions instead of inttypes.h.
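The resulting parsing code is roughly of this shape (variable names are
illustrative, not the ones used in the sample):

    #if defined(_MSC_VER)
        value = _strtoui64( str, NULL, 10 );   /* MSVC-specific */
    #else
        value = strtoull( str, NULL, 10 );     /* C99, <stdlib.h> */
    #endif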
Add a test to ssl-opt.sh to ensure that in DTLS a 6 byte record counter
is compared in ssl_check_ctr_renegotiate() instead of the 8 byte one used
in the TLS case. Because there are currently no testing facilities to
check that the renegotiation routines are triggered after a given number
of input/output messages, the test consists of setting a renegotiation
period that cannot be represented in 6 bytes, but whose least-significant
byte is 2. If the library behaves correctly, the renegotiation routines
will be executed after two exchanged messages.
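Concretely, the period is configured through
mbedtls_ssl_conf_renegotiation_period(), which takes an 8-byte big-endian
counter; a sketch with an illustrative value:

    /* Does not fit in 6 bytes, but has a least-significant byte of 2:
     * with a correct 6-byte DTLS comparison, renegotiation triggers
     * after two records. */
    const unsigned char renego_period[8] =
        { 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02 };
    mbedtls_ssl_conf_renegotiation_period( &conf, renego_period );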
The sample applications programs/pkey/cert_req.c and
programs/pkey/cert_write.c use the library functions
mbedtls_x509write_csr_pem() and mbedtls_x509write_crt_pem() respectively,
which depend on the configuration option MBEDTLS_PEM_WRITE_C. If the
option isn't defined, the build breaks.
This change adds the compilation condition MBEDTLS_PEM_WRITE_C to these
sample applications.
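The guard follows the usual pattern of the sample programs: when the
option is missing, the program body is replaced by a stub that only
prints a notice. A minimal sketch of that pattern:

    #if !defined(MBEDTLS_PEM_WRITE_C)
    #include <stdio.h>
    int main( void )
    {
        printf( "MBEDTLS_PEM_WRITE_C not defined.\n" );
        return( 0 );
    }
    #else
    /* normal body of the sample application */
    #endif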
The sample application programs/pkey/gen_key.c uses the library function
mbedtls_pk_write_key_pem() which is dependent on the configuration option
MBEDTLS_PEM_WRITE_C. If the option isn't defined, the build breaks.
This change adds the compilation condition MBEDTLS_PEM_WRITE_C to the gen_key.c
sample application.
The library/net.c and its corresponding include/mbedtls/net.h file are
renamed to library/net_sockets.c and include/mbedtls/net_sockets.h
respectively. This is to avoid naming collisions in projects which also
have files with the common name 'net'.
* Fix crypt_and_hash to support decrypting GCM encrypted files
* Fix documentation in crypt_and_hash for the generic case
* Remove unused lastn from crypt_and_hash
lastn is not needed with the cipher layer, as the layer already handles
padding and keeps track of the length of the original data.
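In outline, the decryption loop now just hands each chunk to the cipher
layer and writes out whatever it returns (error handling omitted, buffer
names illustrative):

    /* The cipher layer reports how many plaintext bytes each call
     * produced, so no separate lastn bookkeeping is required. */
    mbedtls_cipher_update( &cipher_ctx, buffer, ilen, output, &olen );
    fwrite( output, 1, olen, fout );

    /* The final call flushes any buffered data and strips padding
     * for the modes that use it. */
    mbedtls_cipher_finish( &cipher_ctx, output, &olen );
    fwrite( output, 1, olen, fout );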
Instead of polling the hardware entropy source a single time and
comparing the output with itself, the source is now polled at least twice
and the separate outputs are checked to be different.
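A minimal sketch of such a check, assuming the MBEDTLS_ENTROPY_HARDWARE_ALT
hook mbedtls_hardware_poll() (buffer sizes are illustrative):

    #include <string.h>

    unsigned char buf0[32], buf1[32];
    size_t len0, len1;

    /* Poll the hardware source twice... */
    if( mbedtls_hardware_poll( NULL, buf0, sizeof( buf0 ), &len0 ) != 0 ||
        mbedtls_hardware_poll( NULL, buf1, sizeof( buf1 ), &len1 ) != 0 )
        return( 1 );

    /* ...and fail if the two outputs are identical. */
    if( len0 == len1 && memcmp( buf0, buf1, len0 ) == 0 )
        return( 1 );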