After the record contents are decompressed, in_len is no longer accessed
directly; only in_msglen is accessed. in_len is only read by
ssl_parse_record_header(), which happens before ssl_prepare_record_contents().
This is also supported by the fact that in_len is not updated after decryption
anyway: if it were read after that point it would hold a wrong value whenever
encryption is in use, and since no such problem exists, in_len is clearly not
accessed after that point.
Previously this failed with errors about headers not being found, which was
suboptimal in terms of clarity. Now give a clean error with a pointer to the
documentation.
Do the checks in the .c files rather than in check_config.h, as that keeps them
closer to the platform-specific implementations.
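For illustration, this is roughly the shape such a check takes near the top of
a .c file like net.c (the exact platform macro list here is a sketch, not a
verbatim quote):

    #if !defined(MBEDTLS_CONFIG_FILE)
    #include "mbedtls/config.h"
    #else
    #include MBEDTLS_CONFIG_FILE
    #endif

    #if defined(MBEDTLS_NET_C)

    /* Fail early with a readable message instead of obscure compile errors
     * later on, and point the user at the relevant config option. */
    #if !defined(unix) && !defined(__unix__) && !defined(__unix) && \
        !defined(__APPLE__) && !defined(_WIN32)
    #error "This module only works on Unix and Windows, see MBEDTLS_NET_C in config.h"
    #endif

    /* ... platform-specific implementation ... */

    #endif /* MBEDTLS_NET_C */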
The previous documentation was not explicit about what was expected of the
callbacks - the user had to infer that from the descriptions in net.h or
timing.h, and it was not clear what was part of the calling convention and
what was specific to our implementation.
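To make that concrete, here is a rough sketch of a send callback following the
convention now spelled out for mbedtls_ssl_set_bio() (this assumes the context
wraps a plain POSIX socket fd, and simplifies the choice of error codes):

    #include <errno.h>
    #include <unistd.h>
    #include "mbedtls/ssl.h"

    /* Sketch of a send callback: return the number of bytes actually written,
     * or MBEDTLS_ERR_SSL_WANT_WRITE if the operation would block. Any other
     * negative value is treated as a fatal error by the library. */
    static int my_send( void *ctx, const unsigned char *buf, size_t len )
    {
        int fd = *(int *) ctx;
        ssize_t ret = write( fd, buf, len );

        if( ret < 0 )
        {
            if( errno == EAGAIN || errno == EWOULDBLOCK )
                return( MBEDTLS_ERR_SSL_WANT_WRITE );
            return( -1 ); /* a real callback would return a specific error code */
        }

        return( (int) ret );
    }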
This re-introduces the apidoc with full config.h, but hopefully without the race
conditions and other issues that the previous implementation had.
Adapt doxygen test script to use that new script, and also check for errors
in addition to warnings while at it.
This partially reverts 1989caf71c (only the changes to Makefile and
CMakeLists, the addition to scripts/config.pl is kept).
Modifying config.h in the apidoc target creates a race condition with
make -j4 all apidoc
where some parts of the library, tests or programs could be built with the
wrong config.h, resulting in all kinds of (semi-random) errors. Recent
versions of CMake mitigate this by adding a .NOTPARALLEL target to the
generated Makefile, but people would still get errors with older CMake
versions that are still in use (e.g. in RHEL 5), and with plain make.
An additional issue is that, by failing to use cp -p, the apidoc target was
updating the timestamp on config.h, which seems to cause further build issues.
Let's get back to the previous, safe, situation. The improved apidoc building
will be resurrected in a script in the next commit.
fixes #390
fixes #391
Apparently Travis has an old version of doxygen that doesn't know all the tags in
our config. That's not something we care about; we only want to know about
warnings in our doxygen content.
On my machine, that reduces running time from about 30 minutes to less than 10
minutes, while maintaining a good probability of catching the most likely
issues in practice.
armar doesn't understand the syntax without a dash. OTOH, the syntax with a dash
is the only one specified by POSIX, and it's accepted by GNU ar, BSD ar (as
bundled with OS X) and armar, so it looks like the most portable syntax.
fixes #386
* yanesca/iss309:
Improved on the previous fix and added a test case to cover both types of carries.
Removed recursion from fix #309.
Improved on the fix of #309 and extended the test to cover subroutines.
Tests and fix added for #309 (in-place mpi doubling).
When we use the same documentation for a list of #defines, we used to use a
generic name in the \def command. Use the first name of the list instead so
that doxygen stops complaining, and mention the generic name in the longer
description.
This is not entirely satisfactory as the full list of macros will not be
included in the generated doc, but it's still an improvement as at least the
first macro is documented now, with a hint that there are others.
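For example, the shared block now looks roughly like this (the macro names are
illustrative of the pattern, not a verbatim quote of config.h):

    /**
     * \def MBEDTLS_AES_ALT
     *
     * MBEDTLS__MODULE_NAME__ALT: uncomment a macro to let mbed TLS use your
     * alternate implementation of the corresponding module (this block also
     * covers MBEDTLS_SHA256_ALT, MBEDTLS_SHA512_ALT, ...).
     */
    //#define MBEDTLS_AES_ALT
    //#define MBEDTLS_SHA256_ALT
    //#define MBEDTLS_SHA512_ALT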
Otherwise we get warnings that some documentation items don't have a
corresponding #define, and more importantly the corresponding snippets are not
included in the output.
For that we need a modified version of the "full" argument for config.pl.
Also, the new CMakeLists.txt target only works on Unix (which was already the
case for the Makefile target). Hopefully this is not an issue, as people are
unlikely to need that target on Windows.
See for example page 8 of
http://csrc.nist.gov/publications/nistpubs/800-38D/SP-800-38D.pdf
The previous constant probably came from a typo as it was 2^26 - 2^5 instead
of 2^36 - 2^5. Clearly the intention was to allow for a constant bigger than
2^32 as the ull suffix and cast to uint64_t show.
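As a sketch of what the corrected bound amounts to (the names below are
hypothetical, not the actual gcm.c code): 2^36 - 2^5 bytes is exactly the
2^39 - 256 bit per-invocation limit from SP 800-38D.

    #include <stdint.h>

    /* (1ULL << 36) - 32 == 0xFFFFFFFE0: 2^36 - 2^5 bytes, i.e. 2^39 - 256 bits,
     * the per-invocation plaintext limit from SP 800-38D. */
    #define GCM_MAX_INPUT_BYTES 0xFFFFFFFE0ull

    /* Hypothetical helper: can 'add' more bytes be processed when 'done' bytes
     * have already been fed in, without exceeding the limit or wrapping? */
    static int gcm_length_ok( uint64_t done, uint64_t add )
    {
        return( done + add >= done && done + add <= GCM_MAX_INPUT_BYTES );
    }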
fixes #362
Looking just at that test, it looks like 2 + dn_size could overflow. In fact
that can't happen, as it would mean we've read a CA cert whose size is too big
to be represented by a size_t.
However, it's best for code to be more obviously free of overflow without
having to reason about the bigger picture.
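A minimal sketch of the more obviously overflow-free formulation (names are
illustrative, not the actual library code):

    #include <stddef.h>

    /* Check that 2 + dn_size bytes fit in the remaining output space without
     * ever computing 2 + dn_size, so the test is overflow-free by construction
     * even if dn_size were close to SIZE_MAX. */
    static int dn_fits( size_t remaining, size_t dn_size )
    {
        return( remaining >= 2 && remaining - 2 >= dn_size );
    }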
In case an entry with the given OID already exists in the list passed to
mbedtls_asn1_store_named_data() and there is not enough memory to allocate
room for the new value, the existing entry will be freed but the preceding
entry in the list will still hold a pointer to it. (And the following entries
in the list are no longer reachable.) This results in a memory leak or a double
free.
The issue is we want to leave the list in a consistent state on allocation
failure. (We could add a warning that the list is left in inconsistent state
when the function returns NULL, but behaviour changes that require more care
from the user are undesirable, especially in a stable branch.)
The chosen solution is a bit inefficient in that there is a time where both
blocks are allocated, but at least it's safe and this should trump efficiency
here: this code is only used for generating certificates, which is unlikely to
be done on very constrained devices, or to be in the critical loop of
anything. Also, the sizes involved should be fairly small anyway.
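In outline, the chosen approach looks like the sketch below (a simplified
stand-in, not the real body of mbedtls_asn1_store_named_data()): allocate the
new block first, and only modify the existing entry once that allocation has
succeeded.

    #include <string.h>
    #include "mbedtls/platform.h"
    #include "mbedtls/asn1.h"

    /* Replace the value held by an existing entry; on allocation failure the
     * entry and the list around it are left untouched. */
    static int replace_value( mbedtls_asn1_named_data *cur,
                              const unsigned char *val, size_t val_len )
    {
        unsigned char *new_p = mbedtls_calloc( 1, val_len );
        if( new_p == NULL )
            return( -1 );           /* no leak, no dangling pointer in the list */

        memcpy( new_p, val, val_len );

        mbedtls_free( cur->val.p ); /* both blocks briefly coexist: a small cost
                                       in peak memory, in exchange for safety */
        cur->val.p   = new_p;
        cur->val.len = val_len;
        return( 0 );
    }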
fixes #367