We don't really need a secure hash for that, something like CRC32 would
probably be enough - but we have SHA-256 handy, not CRC32, so use that for the
sake of simplicity.
By semi-internal I mean functions that are only public because they're used in
more than one compilation unit in the library (for example in ecc.c and
ecc_dsa.c) but should not really be part of the public-facing API.
Same motivation as for the other parameters. This is the last one; removing it
would leave the curve structure empty, so a dummy member is kept so that the
struct remains legal C (a struct with no members is not valid standard C).
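As a minimal sketch, assuming a hypothetical shape for the structure (the real
member name may differ):

    /* All real curve parameters are gone; a struct with no members is not
     * valid standard C, so one dummy member is kept. */
    struct uECC_Curve_t {
        char dummy; /* placeholder only, never read */
    };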
Validating the input is always a good idea. Validating the output protects
against some fault injections that would make the result invalid.
Note: valid_point() implies that the point is not zero.
Adding validation to mult_safer() makes the check in compute_shared_secret()
redundant. This will also make it easier to add future counter-measures in a
single place (see the sketch below).
In practice this change means that:
- compute_public_key() now uses projective coordinate randomisation, which it
should, as this is a protection against template attacks, for example.
- mult_safer() now checks that the result is not the point at infinity, which
it can, as the result is never expected to be the point at infinity.
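A minimal sketch of the resulting shape of mult_safer(); the signature and the
return values are illustrative, only mult_safer(), valid_point(),
compute_shared_secret() and compute_public_key() are names taken from the
discussion above:

    static int mult_safer(uECC_word_t *result, const uECC_word_t *point,
                          const uECC_word_t *scalar)
    {
        if (!valid_point(point))     /* validate the input                */
            return UECC_FAILURE;

        /* ... regularised, coordinate-randomised scalar multiplication ... */

        if (!valid_point(result))    /* validate the output; this also    */
            return UECC_FAILURE;     /* rules out the point at infinity   */

        return UECC_SUCCESS;
    }

With the checks living here, compute_shared_secret() and compute_public_key()
can simply call mult_safer() and propagate its result.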
Inspection of the generated assembly showed that before this commit, armcc 5
was optimizing away the successive reads of the volatile local variable that's
used for double-checks. Inspection also revealed that inserting a call to an
external function is enough to prevent it from doing that.
The tested versions of ARM-GCC, Clang and Armcc 6 (aka armclang) all keep the
double read, with or without a call to an external function in the middle.
The inserted function can also be changed to insert a random delay in the
future if desired, as it is appropriately placed between the reads.
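A minimal sketch of the pattern, with hypothetical helper and constant names:

    int do_critical_step(void)
    {
        volatile int flag = critical_check(); /* volatile local used for
                                                 the double-check          */
        if (flag != CHECK_OK)
            return UECC_FAILURE;

        /* out-of-line call: keeps armcc 5 from merging the two reads and
         * could later be extended to insert a random delay */
        mbedtls_platform_enforce_volatile_reads();

        if (flag != CHECK_OK)                 /* second, independent read  */
            return UECC_FAILURE;

        return UECC_SUCCESS;
    }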
This can be used by Mbed TLS functions in any module to signal that a fault
attack is likely happening, so that it can be handled appropriately by the
application (report, fall back to a safer mode, or even halt, etc.).
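A hedged guess at the shape of this, with a purely illustrative name and value
(not taken from the actual code):

    /* Dedicated error code that any module can return when it suspects a
     * fault attack is in progress; the application decides how to react.
     * Name and value are illustrative only. */
    #define MBEDTLS_ERR_PLATFORM_FAULT_DETECTED  -0x0071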
This hardens against attacks that glitch the conditional branch by making it
necessary for the attacker to inject two consecutive faults instead of one. If
desired, we could insert a random delay in order to further protect against
double-glitch attacks.
Also, when a single glitch is detected we report it.
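Continuing the volatile double-read sketch from above (hypothetical names
again), the hardened branch and the reporting could look like this:

    int do_sensitive_operation(void)
    {
        volatile int ok = security_check();

        if (ok == CHECK_OK) {
            /* a random delay could be inserted here in the future */
            if (ok == CHECK_OK) {
                return UECC_SUCCESS; /* both reads of the volatile passed */
            } else {
                /* reads disagree: strong sign of a glitch, so report it */
                return MBEDTLS_ERR_PLATFORM_FAULT_DETECTED;
            }
        }

        return UECC_FAILURE;
    }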
Previously it was returning 0 or 1, so flipping a single bit in the return
value reversed its meaning. Now it's returning the diff itself.
This is safe because in the two places it's used (signature verification and
point validation), invalid values will have a large number of bits differing
from the expected value, so diff will have a large Hamming weight.
An alternative would be to return for example -!(diff == 0), but the
comparison itself is prone to attacks (glitching the appropriate flag in the
CPU flags register, or the conditional branch if the comparison uses one). So
we'd need to protect the comparison, and it's simpler to just skip it and
return diff itself.
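A minimal sketch of the idea, assuming the function in question is
uECC_vli_equal() (the real signature may differ):

    uECC_word_t uECC_vli_equal(const uECC_word_t *left,
                               const uECC_word_t *right,
                               wordcount_t num_words)
    {
        uECC_word_t diff = 0;
        wordcount_t i;

        for (i = 0; i < num_words; i++) {
            diff |= (left[i] ^ right[i]); /* accumulate every differing bit */
        }

        return diff; /* 0 only if all words are equal */
    }

Callers then treat any non-zero value as "different"; as explained above, in
the two existing call sites a wrong input makes diff have a large Hamming
weight, so it cannot be glitched to 0 by flipping one or two bits.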
This is a first step in protecting against fault injection attacks: the
attacker can no longer change failure into success by flipping a single bit.
Additional steps are needed to prevent other attacks (instruction skipping,
etc.) and will be the subject of future commits.
The return value of uECC_vli_equal() should be protected too; this will also
be done in a future commit.
Currently functions that may return success or failure tend to do so by
returning 0 or 1. If an active physical attacker can flip a bit in memory or
registers at the right time, they may easily change a failure value into a
success value, with potentially catastrophic security consequences.
As typical attackers can only flip a few bits, an element of protection
against such attacks is to ensure a sufficient Hamming distance between
failure values and the success value. This commit introduces such values,
which will be put to use in critical functions in future commits.
In addition to SUCCESS and FAILURE, a third value ATTACK_DETECTED is
introduced, which can be used later when suspicious-looking events are noticed
(static data changed when it shouldn't, double condition checking returning
inconsistent results, etc.).
Values are chosen so that Hamming distances are large, and that no value is
the complement of another, in order to avoid unwanted compiler optimisations.
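For illustration, values satisfying those constraints could look like this
(hypothetical names and values, not necessarily the ones actually chosen):

    #define UECC_SUCCESS          0
    #define UECC_FAILURE          0x75555555
    #define UECC_ATTACK_DETECTED  0x7AAAAAAA

    /* Hamming distance from UECC_SUCCESS is 17 for each of the other two,
     * the distance between UECC_FAILURE and UECC_ATTACK_DETECTED is 28,
     * and no value is the bitwise complement of another. */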
Note: the error values used by Mbed TLS are already safe (assuming 32-bit
integers) as they are of the form -x with x in the range [1, 2^15) so their
Hamming distance with the success value (0) is at least 17, so it's hard for
an attacker to turn an error value into the success value (or vice-versa).
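For instance, with 32-bit two's complement representation:

    -1     = 0xFFFFFFFF  ->  Hamming weight 32
    -32767 = 0xFFFF8001  ->  Hamming weight 18

Any -x with 1 <= x < 2^15 has the bit pattern 0xFFFF8001 or above, so bits
15..31 are all set and the Hamming distance from 0 is at least 17.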
This is a temporary work-around for an integration issue.
A future task will re-integrate randomness into these functions, as their
entire point is to be randomized; this is really just temporary.
This avoids the need for each calling site to manually regularize the scalar
and randomize coordinates, which makes for simpler safe use and saves 50 bytes
of code size in the library.
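A before/after sketch of a call site, with hypothetical names (only the
general shape matters):

    /* before: every call site applied the counter-measures itself */
    regularize_k(k, k_reg);                  /* constant-trace scalar */
    generate_random_z(initial_Z);            /* random projective Z   */
    EccPoint_mult(result, point, k_reg, initial_Z);

    /* after: the counter-measures live inside mult_safer() */
    mult_safer(result, point, k);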