New header file crypto_struct.h. The main file crypto.h declares
structures which are implementation-defined. These structures must be
defined in crypto_struct.h, which is included at the end so that the
structures can use types defined in crypto.h.
Implement psa_hash_start, psa_hash_update and psa_hash_final. This
should work for all hash algorithms supported by Mbed TLS, but has
only been smoke-tested for SHA-256, and only in the nominal case.
Don't use the pk module except as required for pkparse/pkwrite. The
PSA crypto layer is meant to work alongside pk, not on top of it.
Fix the compile-time dependencies on RSA/ECP handling in
psa_export_key, psa_destroy_key and psa_get_key_information.
Define psa_key_type_t and a first stab at a few values.
New functions psa_import_key, psa_export_key, psa_destroy_key,
psa_get_key_information. Implement them for raw data and RSA.
Under the hood, create an in-memory, fixed-size keystore with room
for MBEDTLS_PSA_KEY_SLOT_COUNT - 1 keys.
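For illustration, a key store of this shape could look as follows (a minimal sketch with made-up names and an example slot count, not the actual implementation; one way to arrive at "COUNT - 1 keys" is to reserve slot 0 so that key handles 1 .. COUNT - 1 map directly to slots):

    #include <stddef.h>
    #include <stdint.h>

    #define MBEDTLS_PSA_KEY_SLOT_COUNT 32   /* example value */

    typedef uint32_t psa_key_type_t;        /* assumed integral encoding */

    typedef struct
    {
        psa_key_type_t type;                /* 0 = slot unused */
        union
        {
            struct { uint8_t *data; size_t bytes; } raw; /* raw key material     */
            void *rsa;                      /* e.g. an mbedtls_rsa_context *      */
        } data;
    } key_slot_t;

    /* Slot 0 is reserved, so handles 1 .. MBEDTLS_PSA_KEY_SLOT_COUNT - 1 are
     * valid, i.e. there is room for MBEDTLS_PSA_KEY_SLOT_COUNT - 1 keys. */
    static key_slot_t global_key_slots[MBEDTLS_PSA_KEY_SLOT_COUNT];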
Add a new function mbedtls_rsa_get_bitlen which returns the RSA key
size, i.e. the bit size of the modulus. In the pk module, call
mbedtls_rsa_get_bitlen instead of mbedtls_rsa_get_len, which gave the
wrong result for key sizes that are not a multiple of 8.
This commit adds one non-regression test in the pk suite. More tests
are needed for RSA key sizes that are not a multiple of 8.
This commit does not address RSA alternative implementations, which
only provide an interface that returns the modulus size in bytes.
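To make the distinction concrete: for a 2049-bit modulus, mbedtls_rsa_get_len() reports 257 bytes, which scaled by 8 gives 2056 bits, whereas the actual bit length is 2049. A sketch of the new accessor, assuming the standard software implementation where the modulus is stored in ctx->N (not necessarily the exact code):

    #include "mbedtls/rsa.h"
    #include "mbedtls/bignum.h"

    /* Sketch: report the exact bit size of the modulus N instead of
     * deriving it from the byte length. */
    size_t rsa_get_bitlen_sketch( const mbedtls_rsa_context *ctx )
    {
        return( mbedtls_mpi_bitlen( &ctx->N ) );
    }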
New module psa_crypto.c (MBEDTLS_PSA_CRYPTO_C):
Platform Security Architecture compatibility layer on top of
libmbedcrypto.
Implement the psa_crypto_init function, which sets up an RNG.
Add a mbedtls_psa_crypto_free function which deinitializes the
library.
Define a first batch of error codes.
By the standard (RFC 6066, Sect. 4), the Maximum Fragment Length (MFL)
extension limits the maximum record payload size, but not the maximum
datagram size. However, not inferring any limitations on the MTU when
setting the MFL means that a party has no means to dynamically inform
the peer about MTU limitations.
This commit changes the function ssl_get_remaining_payload_in_datagram()
to never return more than
MFL - { Total size of all records within the current datagram }
thereby limiting the MTU to MFL + { Maximum Record Expansion }.
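An illustrative sketch of the resulting computation (placeholder names and simplified arguments, not the actual library code):

    #include <stddef.h>

    /* Cap the remaining payload space in the current datagram by both the
     * MTU-derived space and MFL minus the records already written. */
    static size_t remaining_payload_in_datagram( size_t mtu_space,
                                                 size_t max_frag_len,   /* 0: no MFL       */
                                                 size_t bytes_written ) /* records so far  */
    {
        size_t remaining = mtu_space;

        if( max_frag_len != 0 )
        {
            size_t mfl_remaining = ( bytes_written >= max_frag_len )
                                 ? 0 : max_frag_len - bytes_written;
            if( mfl_remaining < remaining )
                remaining = mfl_remaining;
        }

        return( remaining );
    }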
The function ssl_free_buffered_record() frees a future epoch record, if
such is present. Previously, it was called in mbedtls_ssl_handshake_free(),
i.e. an unused buffered record would be cleared at the end of the handshake.
This commit moves the call into ssl_buffering_free(), the function responsible
for freeing all buffering-related data, which is called not only at
the end of the handshake, but at the end of every flight. In particular,
future epoch records are no longer buffered across flight boundaries,
and they shouldn't be.
The previous code appended messages to flights only if their handshake type,
as derived from the first byte in the message, was different from
MBEDTLS_SSL_HS_HELLO_REQUEST. This check should only be performed
for handshake records, while CCS records should immediately be appended.
In SSLv3, the client sends a NoCertificate alert in response to
a CertificateRequest if it doesn't have a CRT. This previously
led to a failure in ssl_write_handshake_msg(), which only accepted
handshake or CCS records.
Previous commits introduced the field `total_bytes_buffered`
which is supposed to keep track of the cumulative size of
all heap allocated buffers used for the purpose of reassembly
and/or buffering of future messages.
However, the buffering of future epoch records was not reflected
in this field so far. This commit changes this, adding the length
of a future epoch record to `total_bytes_buffered` when it's buffered,
and subtracting it when it's freed.
This commit adds a static function ssl_buffer_make_space() which
takes a buffer size as an argument and attempts to free as many
future message buffers as necessary to ensure that the desired
amount of buffering space is available without violating the
total buffering limit set by MBEDTLS_SSL_DTLS_MAX_BUFFERING.
If the next expected handshake message can't be reassembled because
buffered future messages have already used up too much of the available
space for buffering, free those future message buffers in order to
make space for the reassembly, starting with the handshake message
that's farthest in the future.
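A sketch of this eviction strategy (structure and names are placeholders standing in for the buffering substructure and MBEDTLS_SSL_MAX_BUFFERED_HS, not the actual implementation):

    #include <stddef.h>
    #include <stdlib.h>

    #define MAX_BUFFERED_HS 4              /* placeholder for MBEDTLS_SSL_MAX_BUFFERED_HS */

    typedef struct
    {
        unsigned char *data;               /* NULL if the slot is unused */
        size_t len;
    } hs_buffer_t;

    typedef struct
    {
        hs_buffer_t hs[MAX_BUFFERED_HS];   /* window of future handshake messages */
        size_t total_bytes_buffered;       /* cumulative size of all buffers      */
    } buffering_t;

    /* Free future handshake buffers, farthest-in-the-future first, until
     * 'desired' bytes fit under the total buffering 'limit'.
     * Invariant assumed: total_bytes_buffered <= limit. */
    static int buffer_make_space( buffering_t *buf, size_t desired, size_t limit )
    {
        int offset;

        for( offset = MAX_BUFFERED_HS - 1; offset >= 0; offset-- )
        {
            if( desired <= limit - buf->total_bytes_buffered )
                return( 0 );

            if( buf->hs[offset].data != NULL )
            {
                buf->total_bytes_buffered -= buf->hs[offset].len;
                free( buf->hs[offset].data );
                buf->hs[offset].data = NULL;
                buf->hs[offset].len = 0;
            }
        }

        return( desired <= limit - buf->total_bytes_buffered ? 0 : -1 );
    }

Freeing the farthest-in-the-future buffers first preserves the messages that are most likely to be needed next.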
This commit adds a static function ssl_buffering_free_slot()
which allows freeing a particular structure used to buffer
and/or reassemble a handshake message.
This commit introduces a compile time constant MBEDTLS_SSL_DTLS_MAX_BUFFERING
to mbedtls/config.h which allows the user to control the cumulative size of
all heap buffers allocated for the purpose of reassembling and buffering
handshake messages.
It is put to use by introducing a new field `total_bytes_buffered` to
the buffering substructure of `mbedtls_ssl_handshake_params` that keeps
track of the total size of heap allocated buffers for the purpose of
reassembly and buffering at any time. It is increased whenever a handshake
message is buffered or prepared for reassembly, and decreased when a
buffered or fully reassembled message is copied into the input buffer
and passed to the handshake logic layer.
This commit does not yet take future epoch record buffering into
account; this will be done in a subsequent commit.
Also, it is now conceivable that the reassembly of the next expected
handshake message fails because too much buffering space has already
been used up for future messages. This case currently leads to an
error, but instead, the stack should get rid of buffered messages
to be able to buffer the next one. This will need to be implemented
in one of the next commits.
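For example, a constrained configuration could cap the cumulative buffering in mbedtls/config.h (the value below is purely illustrative), and every reassembly or buffering allocation is then checked against, charged to, and later released from `total_bytes_buffered`:

    #include <stddef.h>

    /* mbedtls/config.h -- example value, not a recommendation */
    #define MBEDTLS_SSL_DTLS_MAX_BUFFERING 16384

    /* Accounting sketch (placeholder helper, not the actual code) */
    static int charge_buffering( size_t *total_bytes_buffered, size_t len )
    {
        if( len > MBEDTLS_SSL_DTLS_MAX_BUFFERING - *total_bytes_buffered )
            return( -1 );                /* would exceed the cumulative limit    */
        *total_bytes_buffered += len;    /* increased when a buffer is allocated */
        return( 0 );
    }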
A previous commit introduced the function ssl_prepare_reassembly_buffer()
which took a message length and a boolean flag indicating if a reassembly
bit map was needed, and attempted to heap-allocate a buffer of sufficient
size to hold the message, its header, and potentially the reassembly
bitmap.
A subsequent commit is going to introduce a limit on the amount of heap
allocations allowed for the purpose of buffering, and the code enforcing
this limit will need to know the reassembly buffer size before attempting
the allocation.
To this end, this commit changes ssl_prepare_reassembly_buffer() into
ssl_get_reassembly_buffer_size(), which solely computes the reassembly
buffer size; the heap allocation itself is now performed directly in
ssl_buffer_message().
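A sketch of what such a size computation looks like, assuming the 12-byte DTLS handshake header and a reassembly bitmap of one bit per content byte (names illustrative, not the exact code):

    #include <stddef.h>

    static size_t get_reassembly_buffer_size( size_t msg_len, int add_bitmap )
    {
        size_t alloc_len;

        alloc_len  = 12;            /* DTLS handshake header             */
        alloc_len += msg_len;       /* message content                   */

        if( add_bitmap )            /* one bit per content byte, rounded */
            alloc_len += msg_len / 8 + ( msg_len % 8 != 0 );

        return( alloc_len );
    }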
This commit moves the length and content check for CCS messages to
the function mbedtls_ssl_handle_message_type() which is called after
a record has been deprotected.
Previously, these checks were performed in the function
mbedtls_ssl_parse_change_cipher_spec(); however, now that
the arrival of out-of-order CCS messages is remembered
as a boolean flag, the check also has to happen when this
flag is set. Moving the length and content check to
mbedtls_ssl_handle_message_type() allows both checks
to be treated uniformly.
Depends on the current transform, which might change when retransmitting a
flight containing a Finished message, so compute it only after the transform
is swapped.
This setting belongs to the individual connection, not to a configuration
shared by many connections. (If a default value is desired, that can be handled
by the application code that calls mbedtls_ssl_set_mtu().)
There are at least two ways in which this matters:
- per-connection settings can be adjusted if MTU estimates become available
during the lifetime of the connection
- it is at least conceivable that a server might recognize restricted clients
based on a range of IPs and immediately set a lower MTU for them. This is much
easier to do with a per-connection setting than by maintaining multiple
near-duplicated ssl_config objects that differ only by the MTU setting.
The SSL context is passed to the reassembly preparation function
ssl_prepare_reassembly_buffer() solely for the purpose of allowing
debugging output. This commit marks the context as unused if
debugging is disabled (through !MBEDTLS_DEBUG_C).
This commit implements the buffering of a record from the next epoch.
- The buffering substructure of mbedtls_ssl_handshake_params
gets another field to hold a raw record (incl. header) from
a future epoch.
- If ssl_parse_record_header() sees a record from the next epoch,
it signals that it might be suitable for buffering by returning
MBEDTLS_ERR_SSL_EARLY_MESSAGE.
- If ssl_get_next_record() finds this error code, it passes control
to ssl_buffer_future_record() which may or may not decide to buffer
the record; it does so if
- a handshake is in progress,
- the record is a handshake record, and
- no record has already been buffered.
If these conditions are met, the record is backed up in the
aforementioned buffering substructure.
- If the current datagram is fully processed, ssl_load_buffered_record()
is called to check if a record has been buffered, and if yes,
if by now its epoch is the current one; if yes, it copies
the record into the (empty! otherwise, ssl_load_buffered_record()
wouldn't have been called) input buffer.
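The decision whether to buffer can be sketched as follows (placeholder names; the content-type constant 22 is the TLS handshake record type; this is not the actual code):

    #include <stddef.h>

    static int should_buffer_future_epoch_record( int handshake_in_progress,
                                                  unsigned char record_type,
                                                  const unsigned char *buffered_record )
    {
        if( !handshake_in_progress )
            return( 0 );            /* only buffer during a handshake        */
        if( record_type != 22 )
            return( 0 );            /* only buffer handshake records         */
        if( buffered_record != NULL )
            return( 0 );            /* at most one future record is buffered */
        return( 1 );
    }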
This commit implements future handshake message buffering
and loading by implementing ssl_load_buffered_message()
and ssl_buffer_message().
Whenever a handshake message is received which is
- a future handshake message (i.e., the sequence number
is larger than the next expected one), or which is
- a proper fragment of the next expected handshake message,
ssl_buffer_message() is called, which does the following:
- Ignore message if its sequence number is too far ahead
of the next expected sequence number, as controlled by
the macro constant MBEDTLS_SSL_MAX_BUFFERED_HS.
- Otherwise, check if buffering for the message with the
respective sequence number has already commenced.
- If not, allocate space to back up the message within
the buffering substructure of mbedtls_ssl_handshake_params.
If the message is a proper fragment, allocate additional
space for a reassembly bitmap; if it is a full message,
omit the bitmap. In any case, fall through to the next case.
- If the message has already been buffered, check that
the header is the same, and add the current fragment
if the message is not yet complete (this excludes the
case where a future message has been received in a single
fragment, hence omitting the bitmap, and is afterwards
also received as a series of proper fragments; in this
case, the proper fragments will be ignored).
For loading buffered messages in ssl_load_buffered_message(),
the approach is the following:
- Check the first entry in the buffering window (the window
is always based at the next expected handshake message).
If buffering hasn't started or if reassembly is still
in progress, ignore. If the next expected message has been
fully received, copy it to the input buffer (which is empty,
as ssl_load_buffered_message() is only called in this case).
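The indexing of the buffering window can be sketched like this (MAX_BUFFERED_HS stands in for MBEDTLS_SSL_MAX_BUFFERED_HS; not the actual code):

    #define MAX_BUFFERED_HS 4   /* example value */

    /* Map a received handshake sequence number to a slot in the buffering
     * window, which is always based at the next expected message. */
    static int buffer_slot_for_seq( unsigned recv_seq, unsigned expected_seq )
    {
        unsigned offset;

        if( recv_seq < expected_seq )
            return( -1 );                 /* old message: nothing to buffer  */

        offset = recv_seq - expected_seq; /* 0 = next expected message       */
        if( offset >= MAX_BUFFERED_HS )
            return( -1 );                 /* too far in the future: ignore   */

        return( (int) offset );
    }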
This commit returns the error code MBEDTLS_ERR_SSL_EARLY_MESSAGE
for proper handshake fragments, forwarding their treatment to
the buffering function ssl_buffer_message(); currently, though,
this function does not yet buffer or reassemble HS messages, so:
! This commit temporarily disables support for handshake reassembly !
This commit introduces helper functions
- ssl_get_hs_frag_len()
- ssl_get_hs_frag_off()
to parse the fragment length and fragment offset fields, respectively,
in the handshake header.
Moreover, building on these helper functions, it adds a
function ssl_check_hs_header() checking the validity of
a DTLS handshake header with respect to the specification,
i.e. the indicated fragment must be a subrange of the total
handshake message, and the total handshake fragment length
(including header) must not exceed the record content size.
These checks were previously performed at a later stage during
ssl_reassemble_dtls_handshake().
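For reference, the DTLS handshake header is 12 bytes: msg_type (1), length (3), message_seq (2), fragment_offset (3), fragment_length (3). A sketch of such helpers and of the validity check (illustrative names, not the actual code):

    #include <stddef.h>
    #include <stdint.h>

    static uint32_t read_uint24( const unsigned char *p )
    {
        return( ( (uint32_t) p[0] << 16 ) |
                ( (uint32_t) p[1] <<  8 ) |
                  (uint32_t) p[2] );
    }

    static uint32_t hs_total_len( const unsigned char *hdr ) { return( read_uint24( hdr + 1 ) ); }
    static uint32_t hs_frag_off ( const unsigned char *hdr ) { return( read_uint24( hdr + 6 ) ); }
    static uint32_t hs_frag_len ( const unsigned char *hdr ) { return( read_uint24( hdr + 9 ) ); }

    /* The fragment must lie within the total message, and the fragment
     * including its header must fit within the record content. */
    static int hs_header_is_valid( const unsigned char *hdr, size_t record_content_len )
    {
        uint32_t total = hs_total_len( hdr );
        uint32_t off   = hs_frag_off( hdr );
        uint32_t flen  = hs_frag_len( hdr );

        if( record_content_len < 12 || flen > record_content_len - 12 )
            return( 0 );
        if( off > total || flen > total - off )
            return( 0 );

        return( 1 );
    }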
This commit introduces a static helper function ssl_get_hs_total_len()
parsing the total message length field in the handshake header, and
puts it to use in mbedtls_ssl_prepare_handshake_record().
This commit introduces, but does not yet put to use, a sub-structure
of mbedtls_ssl_handshake_params::buffering that will be used for the
buffering and/or reassembly of handshake messages with handshake
sequence numbers that are greater than or equal to the next expected
sequence number.
This commit introduces a sub-structure `buffering` within
mbedtls_ssl_handshake_params that shall contain all data
related to the reassembly and/or buffering of handshake
messages.
Currently, only buffering of CCS messages is implemented,
so the only member of this struct is the previously introduced
`seen_ccs` field.
This commit introduces a static function ssl_hs_is_proper_fragment()
to check if the current incoming handshake message is a proper fragment.
It is used within mbedtls_ssl_prepare_handshake_record() to decide whether
handshake reassembly through ssl_reassemble_dtls_handshake() is needed.
The commit changes the behavior of the library in the (unnatural)
situation where proper fragments for a handshake message are followed
by a non-fragmented version of the same message. In this case,
the previous code invoked the handshake reassembly routine
ssl_reassemble_dtls_handshake(), while with this commit, the full
handshake message is directly forwarded to the user, without altering
the handshake reassembly state -- in particular, without freeing it.
As a remedy, freeing of a potential handshake reassembly structure
is now done as part of the handshake update function
mbedtls_ssl_update_handshake_status().
This commit adds a parameter to ssl_prepare_reassembly_buffer()
that allows disabling the allocation of space for a reassembly bitmap.
This will allow the function to be used to allocate buffers
for future handshake messages in case these arrive without fragmentation.
This commit moves the code-path preparing the handshake
reassembly buffer, consisting of header, message content,
and reassembly bitmap, to a separate function
ssl_prepare_reassembly_buffer().
This leads future HS messages to traverse the buffering
function ssl_buffer_message(), which, however, doesn't yet do
anything for HS messages. Since the error
code MBEDTLS_ERR_SSL_EARLY_MESSAGE is afterwards remapped
to MBEDTLS_ERR_SSL_CONTINUE_PROCESSING -- which is what
was returned prior to this commit when receiving a future
handshake message -- this commit therefore does not yet
introduce any change in observable behavior.
This commit implements support for remembering out-of-order
CCS messages. Specifically, a flag is set whenever a CCS message
is read, and it remains set until the end of the flight; when a
CCS message is expected and a CCS message has been seen in the
current flight, a synthesized CCS record is created.
This commit introduces a function ssl_record_is_in_progress()
to indicate if there is more data within the current
record to be processed. Further, it moves the corresponding
call from ssl_read_record_layer() to the parent function
mbedtls_ssl_read_record(). With this change, ssl_read_record_layer()
has the sole purpose of fetching and decoding a new record,
and hence this commit also renames it to ssl_get_next_record().
Subsequent commits will potentially inject buffered
messages after the last incoming message has been
consumed, but before a new one is fetched. As a
preparatory step to this, this commit moves the call
to ssl_consume_current_message() from ssl_read_record_layer()
to the calling function mbedtls_ssl_read_record().
The first part of the function ssl_read_record_layer() was
to mark the previous message as consumed. This commit moves
the corresponding code-path to a separate static function
ssl_consume_current_message().
This function was previously global because it was
used directly within ssl_parse_certificate_verify()
in library/ssl_srv.c. The previous commit removed
this dependency, replacing the call by a call to
the global parent function mbedtls_ssl_read_record().
This renders mbedtls_ssl_read_record_layer() internal
and therefore allows making it static and renaming it
accordingly to ssl_read_record_layer().
Usually, debug messages beginning with "=>" and "<="
match up and indicate entering of and returning from
functions, respectively. This commit fixes one exception
to this rule in mbedtls_ssl_read_record(), which sometimes
printed two messages of the form "<= XXX".
Previously, mbedtls_ssl_read_record() always updated the handshake
checksum in case a handshake record was received. While desirable
most of the time, for the CertificateVerify message the checksum
update must only happen after the message has been fully processed,
because the validation requires the handshake digest up to but
excluding the CertificateVerify itself. As a remedy, the bulk
of mbedtls_ssl_read_record() was previously duplicated within
ssl_parse_certificate_verify(), making maintenance harder in case
mbedtls_ssl_read_record() is subject to changes.
This commit adds a boolean parameter to mbedtls_ssl_read_record()
indicating whether the checksum should be updated in case of a
handshake message or not. This allows using it also for
ssl_parse_certificate_verify(), manually updating the checksum
after the message has been processed.
This for example led to the following corner-case bug:
The code attempted to piggy-back a Finished message at
the end of a datagram where precisely 12 bytes of payload
were still available. This led to an empty Finished fragment
being sent, and when mbedtls_ssl_flight_transmit() was called
again, it believed that it was just starting to send the
Finished message, thereby calling ssl_swap_epochs() again, even
though the swap had already happened in the call that sent the empty
fragment. Therefore, the second call would send the 'rest' of the
Finished message with the wrong epoch.
This commit adds a public function
`mbedtls_ssl_conf_datagram_packing()`
that allows the user to allow or forbid the packing of multiple
records within a single datagram.
The `partial` argument is only used when DTLS and same port
client reconnect are enabled. This commit marks the variable
as unused if that's not the case.
If neither the maximum fragment length extension nor DTLS
are used, the SSL context argument is unnecessary as the
maximum payload length is hardcoded as MBEDTLS_SSL_MAX_CONTENT_LEN.
This commit finally enables datagram packing by modifying the
record preparation function ssl_write_record() so that it no longer
always calls mbedtls_ssl_flush_output().
The packing of multiple records within a single datagram works
by increasing the pointer `out_hdr` (pointing to the beginning
of the next outgoing record) within the datagram buffer, as
long as space is available and no flush was mandatory.
This commit does not yet change the code's behavior of always
flushing after preparing a record, but it introduces the logic
of increasing `out_hdr` after preparing the record, and resetting
it after the flush has been completed.
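Schematically (placeholder structure, not the actual code):

    #include <stddef.h>

    typedef struct
    {
        unsigned char  buf[1500];   /* outgoing datagram buffer (example size) */
        unsigned char *out_hdr;     /* where the next record's header goes     */
    } out_ctx_t;

    /* After a record has been prepared: pack the next record behind it,
     * as long as space remains and no flush is mandatory. */
    static void record_prepared( out_ctx_t *ctx, size_t protected_record_len )
    {
        ctx->out_hdr += protected_record_len;
    }

    /* After the datagram has been flushed: start over at the buffer start. */
    static void flush_completed( out_ctx_t *ctx )
    {
        ctx->out_hdr = ctx->buf;
    }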
Previously, the record sequence number was incremented at the
end of each successful call to mbedtls_ssl_flush_output(),
which works as long as there is precisely one such call for
each outgoing record.
When packing multiple records into a single datagram, this
property is no longer true, and instead the increment of the
record sequence number must happen after the record has been
prepared, and not after it has been dispatched.
This commit moves the code for incrementing the record sequence
number from mbedtls_ssl_flush_output() to ssl_write_record().
This commit is another step towards supporting the packing of
multiple records within a single datagram.
Previously, the incremental outgoing record sequence number was
statically stored within the record buffer, at its final place
within the record header. This slightly increased efficiency
as it was not necessary to copy the sequence number when writing
outgoing records.
When allowing multiple records within a single datagram, it is
necessary to allow the position of the current record within the
datagram buffer to be flexible; in particular, there is no static
address for the record sequence number field within the record header.
This commit introduces an additional field `cur_out_ctr` within
the main SSL context structure `mbedtls_ssl_context` to keep track
of the outgoing record sequence number independent of the buffer used
for the current record / datagram. Whenever a new record is written,
this sequence number is copied to the address `out_ctr` of the
sequence number header field within the current outgoing record.
The SSL/TLS module maintains a number of internally used pointers
`out_hdr`, `out_len`, `out_iv`, ..., indicating where to write the
various parts of the record header.
These pointers have to be kept in sync and sometimes need updating:
Most notably, the `out_msg` pointer should always point to the
beginning of the record payload, and its offset from the pointer
`out_iv` pointing to the end of the record header is determined
by the length of the explicit IV used in the current record
protection mechanism.
This commit introduces functions deducing these pointers from
the pointers `out_hdr` / `in_hdr` to the beginning of the header
of the current outgoing / incoming record.
The flexibility gained by these functions will subsequently
be used to allow shifting of `out_hdr` for the purpose of
packing multiple records into a single datagram.
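A sketch of such a pointer-update helper for the outgoing direction, based on the standard record header layouts (TLS: type 1 + version 2 + length 2 = 5 bytes; DTLS: type 1 + version 2 + epoch 2 + sequence number 6 + length 2 = 13 bytes); names and structure are illustrative, not the actual code:

    #include <stddef.h>

    typedef struct
    {
        int            is_dtls;
        size_t         explicit_iv_len;  /* depends on the current transform  */
        unsigned char *out_hdr;          /* start of the outgoing record       */
        unsigned char *out_len;          /* record length field                */
        unsigned char *out_iv;           /* end of header / start of expl. IV  */
        unsigned char *out_msg;          /* start of the record payload        */
    } out_ptrs_t;

    static void update_out_pointers( out_ptrs_t *ssl )
    {
        if( ssl->is_dtls )
        {
            ssl->out_len = ssl->out_hdr + 11;
            ssl->out_iv  = ssl->out_hdr + 13;
        }
        else
        {
            ssl->out_len = ssl->out_hdr + 3;
            ssl->out_iv  = ssl->out_hdr + 5;
        }

        /* out_msg always points to the start of the payload; its offset from
         * out_iv is the explicit IV length of the current transform. */
        ssl->out_msg = ssl->out_iv + ssl->explicit_iv_len;
    }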
For now, just check that it causes us to fragment. More tests are coming in
follow-up commits to ensure we respect the exact value set, including when
renegotiating.
Note: no interop tests in ssl-opt.sh for now, as some of them make us run into
bugs in (the CI's default versions of) OpenSSL and GnuTLS, so interop tests
will be added later once the situation is clarified. <- TODO
This will allow fragmentation to always happen in the same place, always from
a buffer distinct from ssl->out_msg, and with the same way of resuming after
returning WANT_WRITE
- take advantage of the fact that we're only called for first send
- put all sanity checks at the top
- rename and constify shortcut variables
- improve comments
`mbedtls_ssl_get_record_expansion()` is supposed to return the maximum
difference between the size of a protected record and the size of the
encapsulated plaintext.
It had the following two bugs:
(1) It did not consider the new ChaChaPoly ciphersuites, returning
the error code #MBEDTLS_ERR_SSL_INTERNAL_ERROR in this case.
(2) It did not correctly estimate the maximum record expansion in case
of CBC ciphersuites in (D)TLS versions 1.1 and higher, in which
case the ciphertext is prefixed by an explicit IV.
This commit fixes both bugs.
In `mbedtls_ccm_self_test()`, enforce input and output
buffers sent to the ccm API to be contiguous and aligned,
by copying the test vectors to buffers on the stack.
In ecp_mul_comb(), if (!p_eq_g && grp->T == NULL) and then ecp_precompute_comb() fails (which can
happen due to OOM), then the new array of points T will be leaked (as it's newly allocated, but
hasn't been assigned to grp->T yet).
Symptom was a memory leak in ECDHE key exchange under low memory conditions.
The length included in the debug message could conceivably leak through the
time it takes to print it, and that length would in turn reveal whether the
padding was correct or not.
The basis for the Lucky 13 family of attacks is for an attacker to be able to
distinguish between (long) valid TLS-CBC padding and invalid TLS-CBC padding.
Since our code sets padlen = 0 for invalid padding, the length of the input to
the HMAC function, and the location where we read the MAC, give information
about that.
A local attacker could gain information about that by observing via a
cache attack whether the bytes at the end of the record (at the location of
would-be padding) have been read during MAC verification (computation +
comparison).
Let's make sure they're always read.
The basis for the Lucky 13 family of attacks is for an attacker to be able to
distinguish between (long) valid TLS-CBC padding and invalid TLS-CBC padding.
Since our code sets padlen = 0 for invalid padding, the length of the input to
the HMAC function gives information about that.
Information about this length (modulo the MD/SHA block size) can be deduced
from how much MD/SHA padding (this is distinct from TLS-CBC padding) is used.
If MD/SHA padding is read from a (static) buffer, a local attacker could get
information about how much is used via a cache attack targeting that buffer.
Let's get rid of this buffer. Now the only buffer used is the internal MD/SHA
one, which is always read fully by the process() function.
Move definition of `MBEDTLS_CIPHER_MODE_STREAM` to header file
(`mbedtls_cipher_internal.h`), because it is used by more than
one file. Raised by TrinityTonic in #1719
The TLS layer checks the mode, such as GCM, CCM, CBC or STREAM. ChachaPoly
needs to have its own mode, even if it's used by just one cipher, in order to
allow consistent handling of the mode in the TLS layer.
* development: (182 commits)
Change the library version to 2.11.0
Fix version in ChangeLog for fix for #552
Add ChangeLog entry for clang version fix. Issue #1072
Compilation warning fixes on 32b platfrom with IAR
Revert "Turn on MBEDTLS_SSL_ASYNC_PRIVATE by default"
Fix for missing len var when XTS config'd and CTR not
ssl_server2: handle mbedtls_x509_dn_gets failure
Fix harmless use of uninitialized memory in ssl_parse_encrypted_pms
SSL async tests: add a few test cases for error in decrypt
Fix memory leak in ssl_server2 with SNI + async callback
SNI + SSL async callback: make all keys async
ssl_async_resume: free the operation context on error
ssl_server2: get op_name from context in ssl_async_resume as well
Clarify "as directed here" in SSL async callback documentation
SSL async callbacks documentation: clarify resource cleanup
Async callback: use mbedtls_pk_check_pair to compare keys
Rename mbedtls_ssl_async_{get,set}_data for clarity
Fix copypasta in the async callback documentation
SSL async callback: cert is not always from mbedtls_ssl_conf_own_cert
ssl_async_set_key: detect if ctx->slots overflows
...
For the situation where the mbedTLS device has limited RAM, but the
other end of the connection doesn't support the max_fragment_length
extension. To be spec-compliant, mbedTLS has to keep a 16384 byte
incoming buffer. However the outgoing buffer can be made smaller without
breaking spec compliance, and we save some RAM.
See comments in include/mbedtls/config.h for some more details.
(The lower limit of outgoing buffer size is the buffer size used during
handshake/cert negotiation. As the handshake is half-duplex it might
even be possible to store this data in the "incoming" buffer during the
handshake, which would save even more RAM - but it would also be a lot
hackier and error-prone. I didn't really explore this possibility, but
thought I'd mention it here in case someone sees this later on a mission
to jam mbedTLS into an even tinier RAM footprint.)
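Assuming the split buffer-size options are named MBEDTLS_SSL_IN_CONTENT_LEN and MBEDTLS_SSL_OUT_CONTENT_LEN, a configuration along these lines (values purely illustrative) keeps the spec-mandated 16384-byte incoming buffer while shrinking the outgoing one:

    /* mbedtls/config.h */
    #define MBEDTLS_SSL_IN_CONTENT_LEN  16384   /* required for spec compliance   */
    #define MBEDTLS_SSL_OUT_CONTENT_LEN  4096   /* must still fit the largest
                                                   outgoing handshake message     */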
Fix compilation warnings with IAR toolchain, on 32 bit platform.
Reported by rahmanih in #683
This is based on work by Ron Eldor in PR #750, some of which was independently
fixed by Azim Khan and already merged in PR #1646.
The AES XTS self-test was using a variable len, which was declared only when CTR
was enabled. Changed the declaration of len to be conditional on CTR and XTS.
The AES OFB self-test made use of a variable `offset` but failed to have a
preprocessor condition around it, so unless CTR and CBC were enabled, the
variable would be undeclared.
In ssl_parse_encrypted_pms, some operational failures from
ssl_decrypt_encrypted_pms lead to diff being set to a value that
depended on some uninitialized unsigned char and size_t values. This didn't
affect the behavior of the program (assuming an implementation with no
trap values for size_t) because all that matters is whether diff is 0,
but Valgrind rightfully complained about the use of uninitialized
memory. Behave nicely and initialize the offending memory.
The function `mbedtls_gf128mul_x_ble()` doesn't multiply by x, x^4, and
x^8. Update the function description to properly describe what the function
does.
mbedtls_aes_crypt_xts() currently takes a `bits_length` parameter, unlike
the other block modes. Change the parameter to accept a bytes length
instead, as the `bits_length` parameter is not actually ever used in the
current implementation.
Add a new context structure for XTS. Adjust the API for XTS to use the new
context structure, including tests suites and the benchmark program. Update
Doxygen documentation accordingly.
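A usage sketch of the resulting API (function names and signatures as assumed from the description above; check aes.h of the actual release): encrypt one data unit with AES-128-XTS, whose key is double-length, i.e. 256 bits:

    #include <stddef.h>
    #include "mbedtls/aes.h"

    int xts_encrypt_example( const unsigned char key[32],
                             const unsigned char data_unit[16],
                             const unsigned char *plain, size_t len,
                             unsigned char *cipher )
    {
        mbedtls_aes_xts_context ctx;
        int ret;

        mbedtls_aes_xts_init( &ctx );
        ret = mbedtls_aes_xts_setkey_enc( &ctx, key, 256 );
        if( ret == 0 )
            ret = mbedtls_aes_crypt_xts( &ctx, MBEDTLS_AES_ENCRYPT, len,
                                         data_unit, plain, cipher );
        mbedtls_aes_xts_free( &ctx );
        return( ret );
    }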
AES-XEX is a building block for other cryptographic standards and not yet a
standard in and of itself. We'll just provide the standardized AES-XTS
algorithm, and not AES-XEX. The AES-XTS algorithm and interface provided
can be used to perform the AES-XEX algorithm when the length of the input
is a multiple of the AES block size.
If we're unlucky with memory placement, gf128mul_table_bbe may spread over
two cache lines and this would leak b >> 63 to a cache timing attack.
Instead, take an approach that is less likely to make different memory
loads depending on the value of b >> 63 and is also unlikely to be compiled
to a condition.
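An illustrative way to achieve this (not the library's exact code; it assumes a little-endian host for the 64-bit loads and uses the little-endian block convention of XTS) is to turn the carry bit into a mask instead of using it as a table index:

    #include <stdint.h>
    #include <string.h>

    /* Multiply a 128-bit block by x in GF(2^128), little-endian block
     * convention, without a memory load indexed by the secret bit b >> 63. */
    static void gf128mul_x_ble_sketch( unsigned char r[16], const unsigned char x[16] )
    {
        uint64_t a, b, ra, rb, mask;

        memcpy( &a, x,     8 );               /* low 64 bits (LE host assumed)  */
        memcpy( &b, x + 8, 8 );               /* high 64 bits                   */

        mask = (uint64_t) 0 - ( b >> 63 );    /* all-ones iff a carry falls out */
        ra = ( a << 1 ) ^ ( 0x87 & mask );    /* reduce by x^128+x^7+x^2+x+1    */
        rb = ( b << 1 ) | ( a >> 63 );

        memcpy( r,     &ra, 8 );
        memcpy( r + 8, &rb, 8 );
    }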
XTS mode is fully known as "xor-encrypt-xor with ciphertext-stealing".
This is the generalization of the XEX mode.
This implementation is limited to an 8-bit (1-byte) boundary, which
doesn't seem to be what was intended, considering some test vectors [1].
This commit comes with tests, extracted from [1], and benchmarks.
However, the benchmarks aren't very meaningful here, as they work with a
buffer whose size is a multiple of 16 bytes, which isn't a challenge for
XTS compared to XEX.
[1] http://csrc.nist.gov/groups/STM/cavp/documents/aes/XTSTestVectors.zip
As seen from the first benchmark run, AES-XEX was performing poorly (even
slower than AES-CBC). This commit doubles the performance of the
current implementation.
XEX mode, known as "xor-encrypt-xor", is the simple case of the XTS
mode, known as "XEX with ciphertext stealing". When the buffers to be
encrypted/decrypted have a length divisible by the length of a standard
AES block (16), XTS is exactly like XEX.
When MBEDTLS_PLATFORM_MEMORY is defined but MBEDTLS_PLATFORM_FREE_MACRO or
MBEDTLS_PLATFORM_CALLOC_MACRO are not defined then the actual functions
used to allocate and free memory are stored in function pointers.
These pointers are exposed to the caller, and it means that the caller
and the library have to share a data section.
In TF-A, we execute in a very constrained environment, where some images
are executed from ROM and other images are executed from SRAM. The
images that are executed from ROM cannot be modified. The SRAM size
is very small and we are moving libraries to the ROM that can be shared
between the different SRAM images. These SRAM images could import all the
symbols used in mbedtls, but it would create an undesirable hard binary
dependency between the different images. For this reason, all the library
functions in ROM are accessed using a jump table whose base address is
known, allowing the images to execute with different versions of the ROM.
This commit changes the function pointers to actual functions,
so that the SRAM images only have to use the new exported symbols
(mbedtls_calloc and mbedtls_free) using the jump table. In
our scenario, mbedtls_platform_set_calloc_free is called from
mbedtls_memory_buffer_alloc_init which initializes the function pointers
to the internal buffer_alloc_calloc and buffer_alloc_free functions.
No functional changes to mbedtls_memory_buffer_alloc_init.
Signed-off-by: Roberto Vargas <roberto.vargas@arm.com>
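The approach can be sketched as follows (simplified; the real code lives in platform.c and uses the configured standard functions rather than calloc/free directly):

    #include <stddef.h>
    #include <stdlib.h>

    /* mbedtls_calloc()/mbedtls_free() become real exported functions that
     * dispatch through library-private pointers, so callers only need the
     * function symbols, not access to the pointer variables. */
    static void * (*calloc_func)( size_t, size_t ) = calloc;
    static void   (*free_func)( void * )           = free;

    void *mbedtls_calloc( size_t nmemb, size_t size )
    {
        return( (*calloc_func)( nmemb, size ) );
    }

    void mbedtls_free( void *ptr )
    {
        (*free_func)( ptr );
    }

    int mbedtls_platform_set_calloc_free( void * (*calloc_arg)( size_t, size_t ),
                                          void   (*free_arg)( void * ) )
    {
        calloc_func = calloc_arg;
        free_func   = free_arg;
        return( 0 );
    }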
Add error handling to mbedtls_aes_crypt_ofb for AES errors, a self-test
for the OFB mode using NIST SP 800-38A test vectors, and a check for
potential errors returned when setting the AES encryption key in the OFB
test suite.
* development: (97 commits)
Updated version number to 2.10.0 for release
Add a disabled CMAC define in the no-entropy configuration
Adapt the ARIA test cases for new ECB function
Fix file permissions for ssl.h
Add ChangeLog entry for PR#1651
Fix MicroBlaze register typo.
Fix typo in doc and copy missing warning
Fix edit mistake in cipher_wrap.c
Update CTR doc for the 64-bit block cipher
Update CTR doc for other 128-bit block ciphers
Slightly tune ARIA CTR documentation
Remove double declaration of mbedtls_ssl_list_ciphersuites
Update CTR documentation
Use zeroize function from new platform_util
Move to new header style for ALT implementations
Add ifdef for selftest in header file
Fix typo in comments
Use more appropriate type for local variable
Remove useless parameter from function
Wipe sensitive info from the stack
...
Motivation is similar to NO_UDBL_DIVISION.
The alternative implementation of 64-bit mult is straightforward and aims at
obvious correctness. Also, visual examination of the generate assembly show
that it's quite efficient with clang, armcc5 and arm-clang. However current
GCC generates fairly inefficient code for it.
I tried to rework the code in order to make GCC generate more efficient code.
Unfortunately the only way to do that is to get rid of 64-bit add and handle
the carry manually, but this causes other compilers to generate less efficient
code with branches, which is not acceptable from a side-channel point of view.
So let's keep the obvious code that works for most compilers and hope future
versions of GCC learn to manage registers in a sensible way in that context.
See https://bugs.launchpad.net/gcc-arm-embedded/+bug/1775263
- in x509_profile_check_pk_alg
- in x509_profile_check_md_alg
- in x509_profile_check_key
and in ssl_cli.c : unsigned char gets promoted to signed integer
Allowing DECRYPT with crypt_and_tag is a risk as people might fail to check
the tag correctly (or at all). So force them to use auth_decrypt() instead.
See also https://github.com/ARMmbed/mbedtls/pull/1668
When MBEDTLS_TIMING_C was not defined in config.h, but the MemSan
memory sanitizer was activated, entropy_poll.c used memset without
declaring it. Fix this by including string.h unconditionally.
As a protection against the Lucky Thirteen attack, the TLS code for
CBC decryption in encrypt-then-MAC mode performs extra MAC
calculations to compensate for variations in message size due to
padding. The amount of extra MAC calculation to perform was based on
the assumption that the bulk of the time is spent in processing
64-byte blocks, which is correct for most supported hashes but not for
SHA-384. Correct the amount of extra work for SHA-384 (and SHA-512
which is currently not used in TLS, and MD2 although no one should
care about that).
Fix IAR compiler warnings
Two warnings have been fixed:
1. code 'if( len <= 0xFFFFFFFF )' gave warning 'pointless integer comparison'.
This was fixed by wrapping the condition in '#if SIZE_MAX > 0xFFFFFFFF'.
2. code 'diff |= A[i] ^ B[i];' gave warning 'the order of volatile accesses is undefined in'.
This was fixed by reading the volatile data into temporary variables before the computation.
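Sketches of the two fixes (illustrative, not the exact patch):

    #include <stddef.h>
    #include <stdint.h>

    /* 1. Only perform the comparison when size_t is wider than 32 bits, so the
     *    check isn't flagged as pointless on 32-bit targets. */
    static int length_fits_in_32_bits( size_t len )
    {
    #if SIZE_MAX > 0xFFFFFFFF
        return( len <= 0xFFFFFFFF );
    #else
        (void) len;
        return( 1 );
    #endif
    }

    /* 2. Read the volatile operands into temporaries before combining them,
     *    so the order of the volatile accesses is well defined. */
    static unsigned char const_time_diff( volatile const unsigned char *A,
                                          volatile const unsigned char *B,
                                          size_t len )
    {
        unsigned char diff = 0;
        size_t i;

        for( i = 0; i < len; i++ )
        {
            unsigned char x = A[i];
            unsigned char y = B[i];
            diff |= x ^ y;
        }

        return( diff );
    }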
Explain IAR warning on volatile access
Consistent use of CMAKE_C_COMPILER_ID
The cast to void was motivated by the assumption that the functions only
return non-zero when passed bad arguments, but that might not be true of
alternative implementations, for example on hardware failure.
- need HW failure codes too
- re-use relevant poly codes for chachapoly to save on limited space
Values were chosen to leave 3 free slots at the end of the NET odd range.
That's what it is. So we shouldn't set a block size != 1.
While at it, move call to chachapoly_update() closer to the one for GCM, as
they are similar (AEAD).
This reduces clutter, making the functions more readable.
Also, it makes lcov see each line as covered. This is not cheating, as the
lines that were previously seen as not covered are not supposed to be reached
anyway (failing branches of the selftests).
Thanks to this and previous test suite enhancements, lcov now sees chacha20.c
and poly1305.c at 100% line coverage, and for chachapoly.c only two lines are
not covered (error returns from lower-level module that should never happen
except perhaps if an alternative implementation returns an unexpected error).
This module used (len, pointer) while (pointer, len) is more common in the
rest of the library; in particular, it's what's used in the GCM API, which is
very comparable to it, so switch to (pointer, len) for consistency.
Note that the crypt_and_tag() and auth_decrypt() functions were already using
the same convention as GCM, so this also increases intra-module consistency.
This module used (len, pointer) while (pointer, len) is more common in the
rest of the library, in particular it's what's used in the CMAC API that is
very comparable to Poly1305, so switch to (pointer, len) for consistency.
In addition to making the APIs of the various AEAD modules more consistent
with each other, it's useful to have an auth_decrypt() function so that we can
safely check the tag ourselves, as the user might otherwise do it in an
insecure way (or even forget to do it altogether).
While the old name is explicit and aligned with the RFC, it's also very long,
so with the mbedtls_ prefix prepended we get a 31-char prefix to each
identifier, which quickly conflicts with our 80-column policy.
The new name is shorter, it's what a lot of people use when speaking about
that construction anyway, and hopefully should not introduce confusion, as
it seems unlikely that variants other than 20/1305 will be standardised in the
foreseeable future.
- in .h files: only put the context declaration inside the #ifdef _ALT
(this was changed in 2.9.0, ie after the original PR)
- in .c file: only leave selftest out of _ALT: even though some functions are
trivial to build from other parts, alt implementors might want to go another
way about them (for efficiency or other reasons)