The function mbedtls_mpi_sub_abs first checked that A >= B and then
performed the subtraction, relying on the fact that A >= B to
guarantee that the carry propagation would stop, and not taking
advantage of the fact that the carry when subtracting two numbers can
only be 0 or 1. This made the carry propagation code a little hard to
follow.
Write an ad hoc loop for the carry propagation, checking the size of
the result. This makes termination obvious.
The initial check that A >= B is no longer needed, since the function
now checks that the carry propagation terminates, which is equivalent.
This is a slight performance gain.
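To illustrate the shape of such a loop, here is a minimal standalone sketch
(hypothetical names, uint32_t limbs; not the actual mbedtls code): the loop
stops as soon as the borrow is absorbed, and running off the end of the
result means the precondition A >= B was violated, which is where the real
function reports MBEDTLS_ERR_MPI_NEGATIVE_VALUE.

    #include <stddef.h>
    #include <stdint.h>

    /* Propagate a borrow c (0 or 1) through the n-limb number d, starting at
     * limb i.  Returns 0 on success, nonzero if the borrow is never absorbed,
     * i.e. if the minuend was smaller than the subtrahend. */
    static int propagate_borrow(uint32_t *d, size_t n, size_t i, uint32_t c)
    {
        for (; c != 0; i++) {
            if (i >= n)
                return -1;           /* A < B: a negative result would be needed */
            uint32_t t = (d[i] < c); /* does this limb underflow? */
            d[i] -= c;
            c = t;                   /* borrow out is 0 or 1 */
        }
        return 0;
    }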
Signed-off-by: Gilles Peskine <Gilles.Peskine@arm.com>
There was some confusion during review about when A->p[n] could be
nonzero. In fact, there is no need to set A->p[n]: only the
intermediate result d might need to extend to n+1 limbs, not the final
result A. So never access A->p[n]. Rework the explanation of the
calculation in a way that should be easier to follow.
Signed-off-by: Gilles Peskine <Gilles.Peskine@arm.com>
The function mpi_sub_hlp had confusing semantics: although it took a
size parameter, it accessed the limb array d beyond this size, to
propagate the carry. This made the function difficult to understand
and analyze, with a potential buffer overflow if misused (not enough
room to propagate the carry).
Change the function so that it only performs the subtraction within
the specified number of limbs, and returns the carry.
Move the carry propagation out of mpi_sub_hlp and into its caller
mbedtls_mpi_sub_abs. This makes the code of subtraction very slightly
less neat, but not significantly different.
In the one other place where mpi_sub_hlp is used, namely mpi_montmul,
this is a net win because the carry is potentially sensitive data and
the function carefully arranges to not have to propagate it.
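As a sketch of the new contract, with simplified types and hypothetical
names: the helper subtracts within exactly n limbs and hands the final
borrow back to the caller instead of writing past the array.

    #include <stddef.h>
    #include <stdint.h>

    /* d := d - s over exactly n limbs; returns the final borrow (0 or 1).
     * No access beyond d[n-1] or s[n-1]; the caller decides what to do with
     * the returned borrow (propagate it, or feed it to constant-time code). */
    static uint32_t limbs_sub(uint32_t *d, const uint32_t *s, size_t n)
    {
        uint32_t c = 0;
        for (size_t i = 0; i < n; i++) {
            uint32_t z = (d[i] < c);    /* borrow from subtracting c */
            uint32_t t = d[i] - c;
            z += (t < s[i]);            /* borrow from subtracting s[i] */
            d[i] = t - s[i];
            c = z;
        }
        return c;
    }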
Signed-off-by: Gilles Peskine <Gilles.Peskine@arm.com>
mpi_sub_hlp performs a subtraction A - B, but takes its parameters in the
order (B, A). Swap the parameters so that they match the usual
mathematical syntax.
This has the additional benefit of putting the output parameter (A)
first, which is the normal convention in this module.
Signed-off-by: Gilles Peskine <Gilles.Peskine@arm.com>
Let code analyzers know that this is deliberate. For example MSVC
warns about the conversion if it's implicit.
Signed-off-by: Gilles Peskine <Gilles.Peskine@arm.com>
In mpi_montmul, an auxiliary function for modular
exponentiation (mbedtls_mpi_mod_exp) that performs Montgomery
multiplication, the last step is a conditional subtraction to force
the result into the correct range. The current implementation uses a
branch and therefore may leak information about secret data to an
adversary who can observe what branch is taken through a side channel.
Avoid this potential leak by always doing the same subtraction and
doing a constant-trace conditional assignment to set the result.
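The general shape of the fix, as a self-contained sketch with uint32_t limbs
and hypothetical names (the real code also has to take an extra carry limb
into account): perform the subtraction unconditionally into scratch space,
then select the result with a mask derived from the borrow rather than with
an if/else.

    #include <stddef.h>
    #include <stdint.h>

    /* Branch-free final reduction: always compute A - N into the scratch
     * buffer T, then overwrite A with the difference only when the
     * subtraction did not borrow (i.e. when A >= N). */
    static void final_reduce(uint32_t *A, const uint32_t *N, uint32_t *T, size_t n)
    {
        uint32_t borrow = 0;
        for (size_t i = 0; i < n; i++) {
            uint32_t z = (A[i] < borrow);
            uint32_t t = A[i] - borrow;
            z += (t < N[i]);
            T[i] = t - N[i];
            borrow = z;
        }
        /* mask is all-ones when there was no borrow, all-zeros otherwise */
        uint32_t mask = 0u - (1u ^ borrow);
        for (size_t i = 0; i < n; i++)
            A[i] = (A[i] & ~mask) | (T[i] & mask);
    }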
Signed-off-by: Gilles Peskine <Gilles.Peskine@arm.com>
Separate out a version of mpi_safe_cond_assign that works on
equal-sized limb arrays, without worrying about allocation sizes or
signs.
Signed-off-by: Gilles Peskine <Gilles.Peskine@arm.com>
This reverts commit 2cc69fffcf.
A check was added in mpi_montmul because clang-analyzer warned about a
possibly null pointer. However this was a false positive. Recent
versions of clang-analyzer no longer emit a warning (3.6 does, 6
doesn't).
Incidentally, the size check was wrong: mpi_montmul needs
T->n >= 2 * (N->n + 1), not just T->n >= N->n + 1.
Given that this is an internal function which is only used from one
public function and in a tightly controlled way, remove both the null
check (which is of low value to begin with) and the size check (which
would be slightly more valuable, but was wrong anyway). This allows
the function not to need to return an error, which makes the source
code a little easier to read and makes the object code a little
smaller.
Signed-off-by: Gilles Peskine <Gilles.Peskine@arm.com>
When parsing a certificate with the basic constraints extension,
the max_pathlen that was read from it was incremented regardless
of its value. However, if max_pathlen is equal to INT_MAX (which
is highly unlikely), the increment is undefined behaviour.
This commit adds a check to ensure that such a value is not accepted
as valid. Relevant tests for INT_MAX and INT_MAX-1 are also introduced.
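A sketch of the added guard (hypothetical helper; the real check lives in the
basic constraints parsing code and returns an X.509/ASN.1 error): refuse
INT_MAX instead of incrementing it.

    #include <limits.h>

    /* The stored max_pathlen is the parsed pathLenConstraint plus one, with 0
     * meaning "no constraint", so incrementing INT_MAX would overflow.
     * Reject such a certificate instead. */
    static int bump_max_pathlen(int *max_pathlen)
    {
        if (*max_pathlen == INT_MAX)
            return -1;          /* the real code rejects the extension here */
        (*max_pathlen)++;
        return 0;
    }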
Certificates added in this commit were generated using the
test_suite_x509write, function test_x509_crt_check. Input data taken
from the "Certificate write check Server1 SHA1" test case, so the generated
files are like the "server1.crt", but with the "is_ca" field set to 1 and
max_pathlen as described by the file name.
Signed-off-by: Andrzej Kurek <andrzej.kurek@arm.com>
Signed-off-by: Piotr Nowicki <piotr.nowicki@arm.com>
See the comments in the code for how an attack would go, and the ChangeLog
entry for an impact assessment. (For ECDSA, leaking a few bits of the scalar
over several signatures translates to full private key recovery using a
lattice attack.)
Signed-off-by: Manuel Pégourié-Gonnard <manuel.pegourie-gonnard@arm.com>
We keep track of the current epoch and record sequence number in out_ctr,
which was overwritten when writing the record containing the
HelloVerifyRequest starting from out_buf. We can avoid that by only using the
rest of the buffer.
Using MBEDTLS_SSL_MAX_CONTENT_LEN as the buffer size is still correct, as it
was a pretty conservative value when starting from out_buf.
Note: this bug was also fixed unknowingly in 2.13 by introducing a new buffer
that holds the current value of the sequence number (including epoch), while
working on datagram packing: 198594709b
Signed-off-by: Manuel Pégourié-Gonnard <manuel.pegourie-gonnard@arm.com>
The current logging was sub-standard; in particular there was no trace
whatsoever of the HelloVerifyRequest being sent. Now it's being logged at
the usual levels: 4 for the full content, 2 for the return value of f_send,
and 1 for the decision about sending it (or taking other branches in the same
function), because that's the same level as state changes in the handshake,
and also the same as the "possible client reconnect" message to which it's
the logical continuation (what are we doing about it?).
Signed-off-by: Manuel Pégourié-Gonnard <manuel.pegourie-gonnard@arm.com>
In x509.c, the self-test code is dependent on MBEDTLS_CERTS_C and
MBEDTLS_SHA256_C being enabled. At some point in the recent past the
dependency changed from MBEDTLS_SHA1_C to MBEDTLS_SHA256_C, but the comment
wasn't updated. This commit updates the comment.
Signed-off-by: Simon Butcher <simon.butcher@arm.com>
Some code paths want to access members of the mbedtls_rsa_context structure.
We can only do that when using our own implementation, as otherwise we don't
know anything about that structure.
When parsing a PKCS#1 RSAPrivateKey structure, all parameters are always
present. After importing them, we need to call rsa_complete() for the sake of
alternative implementations. That function interprets zero as a signal for
"this parameter was not provided". As that's never the case, we mustn't pass
any zero value to that function, so we need to explicitly check for it.
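A sketch of the idea using the public RSA API (assumptions: the parameters
were read into separate mbedtls_mpi values first, and an error of -1 stands
in for the real "invalid key format" error): since rsa_complete() treats zero
as "not provided", reject zero values explicitly before handing them over.

    #include "mbedtls/bignum.h"
    #include "mbedtls/rsa.h"

    /* N, P, Q, D, E have just been read from an RSAPrivateKey structure.
     * They must all be nonzero, otherwise rsa_complete() would silently
     * treat them as absent. */
    static int import_checked(mbedtls_rsa_context *rsa,
                              const mbedtls_mpi *N, const mbedtls_mpi *P,
                              const mbedtls_mpi *Q, const mbedtls_mpi *D,
                              const mbedtls_mpi *E)
    {
        int ret;

        if (mbedtls_mpi_cmp_int(N, 0) == 0 || mbedtls_mpi_cmp_int(P, 0) == 0 ||
            mbedtls_mpi_cmp_int(Q, 0) == 0 || mbedtls_mpi_cmp_int(D, 0) == 0 ||
            mbedtls_mpi_cmp_int(E, 0) == 0)
            return -1;

        if ((ret = mbedtls_rsa_import(rsa, N, P, Q, D, E)) != 0)
            return ret;
        return mbedtls_rsa_complete(rsa);
    }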
This reverts commit 130e136439, reversing
changes made to 071b3e170e.
stat() will never return S_IFLNK as the file type, as stat() explicitly
follows symlinks.
Fixes #3005.
If Y was constructed through functions in this module, then Y->n == 0
iff Y->p == NULL. However we do not prevent filling mpi structures
manually, and zero may be represented with n=0 and p a valid pointer.
Most of the code can cope with such a representation, but when such an mpi
is the source operand of mbedtls_mpi_copy, this would cause an integer underflow.
Changing the test for zero from Y->p==NULL to Y->n==0 causes this case
to work at no extra cost.
The comment on TEST_SRV_CRT_RSA_SHA256 claiming it was
tests/data_files/server2-sha256.crt was a lie: the contents were actually
those of the mbedtls-2.16 version of the same file.
While it didn't have a noticeable impact on its own, it was confusing and
distracting while investigating an issue that caused gnutls-cli to not trust
the default RSA-SHA256 cert given test-ca.crt as a root, so it's worth fixing.
When mbedtls_x509_crt_parse_path() checks each object in the supplied path, it only processes regular files. This change makes it also accept a symlink to a file. Fixes #3005.
This was observed to be a problem on Fedora/CentOS/RHEL systems, where the ca-bundle in the default location is actually a symlink.
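On POSIX systems the check amounts to something like the following sketch:
because stat() follows symlinks, testing S_ISREG() on its result accepts both
regular files and symlinks that resolve to regular files.

    #include <sys/stat.h>

    /* Returns 1 if path should be parsed as a certificate file: a regular
     * file, or a symlink that ultimately points to one. */
    static int is_parsable_cert_file(const char *path)
    {
        struct stat st;
        if (stat(path, &st) != 0)   /* stat() follows symlinks */
            return 0;
        return S_ISREG(st.st_mode);
    }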
* origin/mbedtls-2.7:
Enable more test cases without MBEDTLS_MEMORY_DEBUG
More accurate test case description
Clarify that the "FATAL" message is expected
Note that mbedtls_ctr_drbg_seed() must not be called twice
Fix CTR_DRBG benchmark
Changelog entry for xxx_drbg_set_entropy_len before xxx_drbg_seed
CTR_DRBG: support set_entropy_len() before seed()
CTR_DRBG: Don't use functions before they're defined
HMAC_DRBG: support set_entropy_len() before seed()
The functions mbedtls_ctr_drbg_random() and
mbedtls_ctr_drbg_random_with_add() could return 0 if an AES function
failed. This could only happen with alternative AES
implementations (the built-in implementations of the AES functions
involved never fail), typically due to a failure in a hardware
accelerator.
Bug reported and fix proposed by Johan Uppman Bruce and Christoffer
Lauri, Sectra.
In ssl_parse_hello_verify_request, we read 3 bytes (version and cookie
length) without checking that there are that many bytes left in
ssl->in_msg. This could potentially read from memory outside of the
receive buffer (which would be a remotely exploitable
crash).
In ssl_parse_hello_verify_request, we print cookie_len bytes without
checking that there are that many bytes left in ssl->in_msg. This
could potentially log data outside the received message (not a big
deal) and could potentially read from memory outside of the receive
buffer (which would be a remotely exploitable crash).
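A sketch of the two bounds checks from these fixes, using hypothetical
buffer/length names since the real code works directly on ssl->in_msg:
verify that the three fixed bytes and then cookie_len bytes actually fit in
the received message before reading them.

    #include <stddef.h>

    /* msg/msg_len describe the received HelloVerifyRequest body: 2 bytes of
     * server version, 1 byte of cookie length, then the cookie itself. */
    static int check_hello_verify_request_len(const unsigned char *msg,
                                              size_t msg_len)
    {
        if (msg_len < 3)                /* version (2) + cookie length (1) */
            return -1;                  /* would read past the received data */

        size_t cookie_len = msg[2];
        if (cookie_len > msg_len - 3)   /* cookie must fit in what is left */
            return -1;

        return 0;
    }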
* restricted/pr/666: (24 commits)
Add ChangeLog entry
mpi_lt_mpi_ct: fix condition handling
mpi_lt_mpi_ct: Add further tests
mpi_lt_mpi_ct: Fix test numbering
mpi_lt_mpi_ct perform tests for both limb size
ct_lt_mpi_uint: cast the return value explicitly
mbedtls_mpi_lt_mpi_ct: add tests for 32 bit limbs
mbedtls_mpi_lt_mpi_ct: simplify condition
Rename variable for better readability
mbedtls_mpi_lt_mpi_ct: Improve documentation
Make mbedtls_mpi_lt_mpi_ct more portable
Bignum: Document assumptions about the sign field
Add more tests for mbedtls_mpi_lt_mpi_ct
mpi_lt_mpi_ct test: hardcode base 16
Document ct_lt_mpi_uint
mpi_lt_mpi_ct: make use of unsigned consistent
ct_lt_mpi_uint: make use of biL
Change mbedtls_mpi_cmp_mpi_ct to check less than
mbedtls_mpi_cmp_mpi_ct: remove multiplications
Remove excess vertical space
...
This issue has been reported by Tuba Yavuz, Farhaan Fowze, Ken (Yihang) Bai,
Grant Hernandez, and Kevin Butler (University of Florida) and
Dave Tian (Purdue University).
In AES encrypt and decrypt some variables were left on the stack. The value
of these variables can be used to recover the last round key. To follow best
practice and to limit the impact of buffer overread vulnerabilities (like
Heartbleed) we need to zeroize them before exiting the function.
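The pattern, as a small sketch (mbedtls_platform_zeroize() is the helper
designed not to be optimised away; the variable names are stand-ins for the
round-state words):

    #include <stdint.h>
    #include "mbedtls/platform_util.h"

    /* Keep the round state in one local struct and wipe it before returning,
     * so a later buffer overread cannot recover data related to the last
     * round key. */
    static void rounds_and_wipe(void)
    {
        struct { uint32_t X[4]; uint32_t Y[4]; } t = { {0}, {0} };

        /* ... the encryption/decryption rounds would use t.X and t.Y here ... */

        mbedtls_platform_zeroize(&t, sizeof(t));
    }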
In the case of *ret we might need to preserve a 0 value throughout the
loop and therefore we need an extra condition to protect it from being
overwritten.
The value of done is always 1 after *ret has been set and does not need
to be protected from overwriting. Therefore in this case the extra
condition can be removed.
The code relied on the assumptions that CHAR_BIT is 8 and that unsigned
does not have padding bits.
In the Bignum module we already assume that the sign of an MPI is either
-1 or 1. Using this, we eliminate the above mentioned dependency.
The signature of mbedtls_mpi_cmp_mpi_ct() was meant to support using it in
place of mbedtls_mpi_cmp_mpi(). This meant full comparison functionality
and a signed result.
To make the function more universal and friendly to constant time
coding, we change the result type to unsigned. Theoretically, we could
encode the comparison result in an unsigned value, but it would be less
intuitive.
Therefore we won't be able to represent a full three-way comparison result
anymore and the functionality will be constrained to checking if the first
operand is less than the second. This is sufficient to support the
current use case, and any relationship between MPIs can still be checked
by combining calls.
The only drawback is that we need to call the function twice when
checking for equality, but this can be optimised later if and when it is
needed.
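For illustration, a hedged sketch of how other relationships can still be
derived from the constant-time "less than" alone, assuming
mbedtls_mpi_lt_mpi_ct(X, Y, &flag) sets flag to 1 iff X < Y (and requires
both operands to have the same number of limbs):

    #include "mbedtls/bignum.h"

    /* X == Y iff neither X < Y nor Y < X; this costs two calls. */
    static int mpi_eq_ct(const mbedtls_mpi *X, const mbedtls_mpi *Y,
                         unsigned *eq)
    {
        unsigned lt_xy, lt_yx;
        int ret;

        if ((ret = mbedtls_mpi_lt_mpi_ct(X, Y, &lt_xy)) != 0)
            return ret;
        if ((ret = mbedtls_mpi_lt_mpi_ct(Y, X, &lt_yx)) != 0)
            return ret;

        *eq = 1 - (lt_xy | lt_yx);
        return 0;
    }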
Multiplication is known to have measurable timing variations based on
the operands. For example it typically is much faster if one of the
operands is zero. Remove them from constant time code.
The blinding applied to the scalar before modular inversion is
inadequate: Bignum is not constant time/constant trace, so side channel
attacks can retrieve the blinded value and factor it (it is smaller than
RSA keys and not guaranteed to have only large prime factors). The key
can then be recovered by brute force.
Reducing the blinded value makes factoring useless because the adversary
can only recover pk*t+z*N instead of pk*t.
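A sketch of the added reduction using public bignum calls (the variable
names are hypothetical and the surrounding blinding logic is omitted): k is
the scalar to be inverted, t the random blinding factor, N the group order.

    #include "mbedtls/bignum.h"

    /* Reduce k*t mod N before the modular inversion: a side channel can then
     * at best observe k*t mod N, which cannot be factored to recover k*t. */
    static int blind_and_reduce(mbedtls_mpi *blinded, const mbedtls_mpi *k,
                                const mbedtls_mpi *t, const mbedtls_mpi *N)
    {
        int ret;

        if ((ret = mbedtls_mpi_mul_mpi(blinded, k, t)) != 0)
            return ret;
        return mbedtls_mpi_mod_mpi(blinded, blinded, N);  /* the added step */
    }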
mbedtls_ctr_drbg_seed() always set the entropy length to the default,
so a call to mbedtls_ctr_drbg_set_entropy_len() before seed() had no
effect. Change this to the more intuitive behavior that
set_entropy_len() sets the entropy length and seed() respects that and
only uses the default entropy length if there was no call to
set_entropy_len().
The former test-only function mbedtls_ctr_drbg_seed_entropy_len() is
no longer used, but keep it for strict ABI compatibility.
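With the new behaviour, a call sequence like the following sketch (public
API, error handling trimmed, 32 is just an example length) has the intended
effect: the entropy length set before seeding is respected by
mbedtls_ctr_drbg_seed().

    #include "mbedtls/ctr_drbg.h"
    #include "mbedtls/entropy.h"

    static int seed_with_custom_entropy_len(mbedtls_ctr_drbg_context *ctr_drbg,
                                            mbedtls_entropy_context *entropy)
    {
        mbedtls_ctr_drbg_init(ctr_drbg);

        /* Previously this call was silently overridden by seed(); now it sticks. */
        mbedtls_ctr_drbg_set_entropy_len(ctr_drbg, 32);

        return mbedtls_ctr_drbg_seed(ctr_drbg, mbedtls_entropy_func, entropy,
                                     (const unsigned char *) "my drbg", 7);
    }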
Move the definitions of mbedtls_ctr_drbg_seed_entropy_len() and
mbedtls_ctr_drbg_seed() to after they are used. This makes the code
easier to read and to maintain.
mbedtls_hmac_drbg_seed() always set the entropy length to the default,
so a call to mbedtls_hmac_drbg_set_entropy_len() before seed() had no
effect. Change this to the more intuitive behavior that
set_entropy_len() sets the entropy length and seed() respects that and
only uses the default entropy length if there was no call to
set_entropy_len().
According to SP800-90A, the DRBG seeding process should use a nonce
of length `security_strength / 2` bits as part of the DRBG seed. It
further notes that this nonce may be drawn from the same source of
entropy that is used for the first `security_strength` bits of the
DRBG seed. The present HMAC DRBG implementation does that, requesting
`security_strength * 3 / 2` bits of entropy from the configured entropy
source in total to form the initial part of the DRBG seed.
However, some entropy sources may have thresholds in terms of how much
entropy they can provide in a single call to their entropy gathering
function which may be exceeded by the present HMAC DRBG implementation
even if the threshold is not smaller than `security_strength` bits.
Specifically, this is the case for our own entropy module implementation
which only allows requesting at most 32 bytes of entropy at a time
in configurations disabling SHA-512, and this leads to runtime failure
of HMAC DRBG when used with Mbed Crypto's own entropy callbacks in such
configurations.
This commit fixes this by splitting the seed entropy acquisition into
two calls, one requesting `security_strength` bits first, and another
one requesting `security_strength / 2` bits for the nonce.
Fixes #237.
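As a worked example of the arithmetic: with SHA-256, `security_strength` is
256 bits, so the old single request asked for 48 bytes of entropy, above the
32-byte per-call limit of the entropy module in SHA-256-only configurations;
two requests of 32 and 16 bytes stay within it. A sketch of the split
(hypothetical names):

    #include <stddef.h>

    /* f_entropy may refuse requests larger than its per-call limit, so gather
     * the seed entropy and the nonce in two separate calls. */
    static int gather_seed_material(int (*f_entropy)(void *, unsigned char *, size_t),
                                    void *p_entropy, unsigned char *seed,
                                    size_t entropy_len /* e.g. 32 */,
                                    size_t nonce_len   /* e.g. 16 */)
    {
        int ret;

        if ((ret = f_entropy(p_entropy, seed, entropy_len)) != 0)
            return ret;
        return f_entropy(p_entropy, seed + entropy_len, nonce_len);
    }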
* origin/mbedtls-2.7:
Changelog entry for HAVEGE fix
Prevent building the HAVEGE module on platforms where it doesn't work
Fix misuse of signed ints in the HAVEGE module
The failure of mbedtls_md was not checked in one place. This could have led
to an incorrect computation if a hardware accelerator failed. In most cases
this would have led to the key exchange failing, so the impact would have been
a hard-to-diagnose error reported in the wrong place. If the two sides of the
key exchange failed in the same way with an output from mbedtls_md that was
independent of the input, this could have led to an apparently successful key
exchange with a predictable key, thus a glitching md accelerator could have
caused a security vulnerability.
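The fix amounts to checking the return value, roughly as in this sketch (the
actual call site is in the key exchange code and uses its own error-handling
conventions):

    #include "mbedtls/md.h"

    /* Never ignore the return code of mbedtls_md(): alternative (hardware)
     * implementations may fail at runtime. */
    static int hash_or_fail(const mbedtls_md_info_t *md_info,
                            const unsigned char *input, size_t ilen,
                            unsigned char *output)
    {
        int ret = mbedtls_md(md_info, input, ilen, output);
        if (ret != 0)
            return ret;     /* propagate instead of using a stale output buffer */
        return 0;
    }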
If int is not capable of storing as many values as unsigned, the code
may generate a trap value. If signed int and unsigned int aren't
32-bit types, the code may calculate meaningless values.
The elements of the HAVEGE state are manipulated with bitwise
operations, with the expectation that the elements are 32-bit
unsigned integers (or larger). But they are declared as int, and so
the code has undefined behavior. Clang with Asan correctly points out
some shifts that reach the sign bit.
Use unsigned int internally. This is technically an aliasing violation
since we're accessing an array of `int` via a pointer to `unsigned
int`, but since we don't access the array directly inside the same
function, it's very unlikely to be compiled in an unintended manner.
* restricted/pr/581:
Remove unnecessary empty line
Add a test for signing content with a long ECDSA key
Add documentation notes about the required size of the signature buffers
Add missing MBEDTLS_ECP_C dependencies in check_config.h
Change size of preallocated buffer for pk_sign() calls
* origin/pr/2713:
programs: Make `make clean` clean all programs always
ssl_tls: Enable Suite B with subset of ECP curves
windows: Fix Release x64 configuration
timing: Remove redundant include file
net_sockets: Fix typo in net_would_block()
- Explain the use of explicit ASN.1 tagging for the extensions structure
- Remove misleading comment which suggests that mbedtls_x509_get_ext()
also parsed the header of the first extension, which is not the case.
Some functions within the X.509 module return an ASN.1 low level
error code where instead this error code should be wrapped by a
high-level X.509 error code as in the bulk of the module.
Specifically, the following functions are affected:
- mbedtls_x509_get_ext()
- x509_get_version()
- x509_get_uid()
This commit modifies these functions to always return an
X.509 high level error code.
Care has to be taken when adapting `mbedtls_x509_get_ext()`:
Currently, the callers `x509_crl_get_ext()` and `x509_crt_get_ext()` treat the
return code `MBEDTLS_ERR_ASN1_UNEXPECTED_TAG` specially to
gracefully detect and continue if the extension structure is not
present. Wrapping the ASN.1 error with
`MBEDTLS_ERR_X509_INVALID_EXTENSIONS` and adapting the check
accordingly would mean that an unexpected tag somewhere
down the extension parsing would be ignored by the caller.
The way out of this is the following: Luckily, the extension
structure is always the last field in the surrounding structure,
so if there is some data remaining, it must be an Extension
structure, so we don't need to deal with a tag mismatch gracefully
in the first place.
We may therefore wrap the return code from the initial call to
`mbedtls_asn1_get_tag()` in `mbedtls_x509_get_ext()` by
`MBEDTLS_ERR_X509_INVALID_EXTENSIONS` and simply remove
the special treatment of `MBEDTLS_ERR_ASN1_UNEXPECTED_TAG`
in the callers `x509_crl_get_ext()` and `x509_crt_get_ext()`.
This renders `mbedtls_x509_get_ext()` unsuitable if it ever
happened that an Extension structure is optional and does not
occur at the end of its surrounding structure, but for CRTs
and CRLs, it's fine.
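In this module, high- and low-level error codes are combined by addition, so
the wrapping described above looks roughly like this sketch (simplified from
the real `mbedtls_x509_get_ext()`):

    #include "mbedtls/asn1.h"
    #include "mbedtls/x509.h"

    /* Wrap the low-level ASN.1 error in the high-level X.509 error; the two
     * ranges do not overlap, so they are combined by simple addition. */
    static int get_ext_start(unsigned char **p, const unsigned char *end,
                             size_t *len, int tag)
    {
        int ret = mbedtls_asn1_get_tag(p, end, len,
                MBEDTLS_ASN1_CONTEXT_SPECIFIC | MBEDTLS_ASN1_CONSTRUCTED | tag);
        if (ret != 0)
            return MBEDTLS_ERR_X509_INVALID_EXTENSIONS + ret;
        return 0;
    }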
The following tests need to be adapted:
- "TBSCertificate v3, issuerID wrong tag"
The issuerID is optional, so if we look for its presence
but find a different tag, we silently continue and try
parsing the subjectID, and then the extensions. The tag '00'
used in this test doesn't match either of these, and the
previous code would hence return LENGTH_MISMATCH after
unsuccessfully trying issuerID, subjectID and Extensions.
With the new code, any data remaining after issuerID and
subjectID _must_ be Extension data, so we fail with
UNEXPECTED_TAG when trying to parse the Extension data.
- "TBSCertificate v3, UIDs, invalid length"
The test hardcodes the expectation of
MBEDTLS_ERR_ASN1_INVALID_LENGTH, which needs to be
wrapped in MBEDTLS_ERR_X509_INVALID_FORMAT now.
Fixes #2431.
When parsing a substructure of an ASN.1 structure, no field within
the substructure may exceed the bounds of the substructure.
Concretely, the `end` pointer passed to the ASN.1 parsing routines
must be updated to point to the end of the substructure while parsing
the latter.
This was previously not the case for the routines
- x509_get_attr_type_and_value(),
- mbedtls_x509_get_crt_ext(),
- mbedtls_x509_get_crl_ext().
These functions kept using the end of the parent structure as the
`end` pointer and would hence allow substructure fields to cross
the substructure boundary. This could lead to successful parsing
of ill-formed X.509 CRTs.
This commit fixes this.
Care has to be taken when adapting `mbedtls_x509_get_crt_ext()`
and `mbedtls_x509_get_crl_ext()`, as the underlying function
`mbedtls_x509_get_ext()` returns `0` if no extensions are present
but doesn't set the variable which holds the bounds of the Extensions
structure in case the latter is present. This commit addresses
this by returning early from `mbedtls_x509_get_crt_ext()` and
`mbedtls_x509_get_crl_ext()` if parsing has reached the end of
the input buffer.
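The underlying idiom, as a sketch: after reading the substructure's tag and
length, shrink the bound to the end of the substructure so that no field
inside it can be parsed past its boundary, and check for a length mismatch
when done.

    #include "mbedtls/asn1.h"

    /* Parse a SEQUENCE substructure without letting its fields read past the
     * substructure's own end. */
    static int parse_substructure(unsigned char **p, const unsigned char *end)
    {
        size_t len;
        int ret = mbedtls_asn1_get_tag(p, end, &len,
                MBEDTLS_ASN1_CONSTRUCTED | MBEDTLS_ASN1_SEQUENCE);
        if (ret != 0)
            return ret;

        const unsigned char *end_sub = *p + len;    /* bound for the inner fields */

        /* ... parse the substructure's fields against end_sub, not end ... */

        if (*p != end_sub)
            return MBEDTLS_ERR_ASN1_LENGTH_MISMATCH;
        return 0;
    }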
The following X.509 parsing tests need to be adapted:
- "TBSCertificate, issuer two inner set datas"
This test exercises the X.509 CRT parser with a Subject name
which has two empty `AttributeTypeAndValue` structures.
This is supposed to fail with `MBEDTLS_ERR_ASN1_OUT_OF_DATA`
because the parser should attempt to parse the first structure
and fail because of a lack of data. Previously, it failed to
obey the (0-length) bounds of the first AttributeTypeAndValue
structure and would try to interpret the beginning of the second
AttributeTypeAndValue structure as the first field of the first
AttributeTypeAndValue structure, returning an UNEXPECTED_TAG error.
- "TBSCertificate, issuer, no full following string"
This test exercises the parser's behaviour on an AttributeTypeAndValue
structure which contains more data than expected; it should therefore
fail with MBEDTLS_ERR_ASN1_LENGTH_MISMATCH. Because of the missing bounds
check, it previously failed with UNEXPECTED_TAG because it interpreted
the remaining byte in the first AttributeTypeAndValue structure as the
first byte in the second AttributeTypeAndValue structure.
- "SubjectAltName repeated"
This test should exercise two SubjectAltNames extensions in succession,
but a wrong length value makes the second SubjectAltNames extension appear
outside of the Extensions structure. With the new bounds in place, this
therefore fails with a LENGTH_MISMATCH error. This commit adapts the test
data to put the 2nd SubjectAltNames extension inside the Extensions
structure, too.