LZ4_decompress_safe_partial()
now also supports a scenario where
nb_bytes_to_generate is <= block_decompressed_size
and
nb_bytes_to_read is >= block_compressed_size.
Previously, the only supported scenario was
nb_bytes_to_read == block_compressed_size.
Note that, if nb_bytes_to_read is > block_compressed_size,
then, necessarily, nb_bytes_to_generate must be <= block_decompressed_size.
If both are larger, the function will generate corrupted data.
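For illustration, a minimal sketch of the newly supported call pattern (the helper name and sizes are hypothetical, not part of the library):

```c
#include "lz4.h"

/* Sketch : decode only the first `target` bytes of a compressed block,
 * while providing more input than the block strictly requires.
 * `compressed`, `compressedSize` and the destination buffer are assumed
 * to come from an earlier LZ4_compress_default() call. */
int decode_prefix(const char* compressed, int compressedSize,
                  char* dst, int dstCapacity, int target)
{
    /* target <= dstCapacity is required;
     * compressedSize may now be >= the block's actual compressed size */
    int const decoded = LZ4_decompress_safe_partial(compressed, dst,
                                                    compressedSize,
                                                    target, dstCapacity);
    if (decoded < 0) return -1;   /* malformed input */
    return decoded;               /* nb of bytes written, <= target */
}
```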
EndMark, the 4-byte value indicating the end of the frame,
must be `0x00000000`.
Previously, it was just mentioned as a `0-size` block.
But such a definition could also encompass uncompressed blocks of size 0,
which carry a header of value `0x80000000`.
However, the intention is to also support uncompressed empty blocks,
distinct from EndMark.
They could be used as a keep-alive signal.
Note that compressed empty blocks are already supported;
they simply have a size of 1 instead of 0 (to hold the `0` token).
Unfortunately, the decoder implementation was also wrong,
and would interpret a `0x80000000` block header as an EndMark.
This issue evaded detection so far simply because
the situation never happens in practice: LZ4Frame always issues
a clean 0x00000000 value as EndMark,
and it does not flush empty blocks.
This is fixed in this PR.
The decoder can now deal with empty uncompressed blocks,
and no longer confuses them with EndMark.
The specification is also clarified.
Finally, FrameTest is updated to randomly insert empty blocks during fuzzing.
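For illustration, a hypothetical decoder-side helper showing the clarified distinction (not the actual decoder code):

```c
#include <stdint.h>

/* The block header is a 4-byte little-endian value :
 * only the exact value 0x00000000 terminates the frame;
 * 0x80000000 is an uncompressed block of size 0 (e.g. a keep-alive),
 * and must not be treated as EndMark. */
typedef enum { block_end_mark, block_uncompressed, block_compressed } block_kind;

static block_kind classify_block_header(uint32_t header)
{
    if (header == 0x00000000U) return block_end_mark;      /* EndMark */
    if (header & 0x80000000U)  return block_uncompressed;  /* size = header & 0x7FFFFFFF, may be 0 */
    return block_compressed;                                /* size = header */
}
```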
It's now possible to select a custom LZ4_DISTANCE_MAX at compile time,
provided it's <= 65535.
However, in some cases (when compressing in byU16 mode),
the new distance wasn't respected,
as the code implicitly assumed it was necessarily within range.
Added a distance check for this case.
Also : added a new TravisCI test which ensures that
custom LZ4_DISTANCE_MAX compiles correctly
and compresses correctly (relying on `assert()` to find outsized offsets).
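A hedged sketch of the compile-time selection; the example value and build command are illustrative, and the guard below mirrors the intended constraint rather than the library's exact code:

```c
/* Illustrative only : LZ4_DISTANCE_MAX is selected when compiling lz4.c,
 * typically on the compiler command line, e.g.
 *   cc -DLZ4_DISTANCE_MAX=4096 -c lz4.c
 * The value must remain <= 65535. */
#ifndef LZ4_DISTANCE_MAX
#  define LZ4_DISTANCE_MAX 65535   /* default maximum match distance */
#endif

#if LZ4_DISTANCE_MAX > 65535
#  error "LZ4_DISTANCE_MAX must be <= 65535"
#endif
```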
This emulates a streaming scenario,
where the caller rigorously follows the srcSize hints
provided as return value of LZ4F_decompress().
This is useful to show the issue in #714,
where data is uselessly copied in a tmp buffer first.
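A minimal sketch of such a hint-following loop, assuming a FILE* input and illustrative buffer sizes (not the actual test code):

```c
#include <stdio.h>
#include "lz4frame.h"

/* Sketch of a decoding loop which follows the srcSize hints
 * returned by LZ4F_decompress(). */
static int decompress_following_hints(FILE* fin)
{
    LZ4F_dctx* dctx;
    if (LZ4F_isError(LZ4F_createDecompressionContext(&dctx, LZ4F_VERSION)))
        return -1;

    char srcBuf[64 * 1024];
    char dstBuf[64 * 1024];
    size_t nextToRead = 7;   /* enough for a minimal frame header */

    while (nextToRead != 0) {
        /* providing less than the hint is legal; the hint is the ideal amount */
        size_t const toRead = nextToRead < sizeof(srcBuf) ? nextToRead : sizeof(srcBuf);
        size_t srcSize = fread(srcBuf, 1, toRead, fin);
        if (srcSize == 0) break;   /* truncated input */

        size_t dstSize = sizeof(dstBuf);
        nextToRead = LZ4F_decompress(dctx, dstBuf, &dstSize, srcBuf, &srcSize, NULL);
        if (LZ4F_isError(nextToRead)) {
            LZ4F_freeDecompressionContext(dctx);
            return -1;
        }
        /* dstBuf[0..dstSize) holds regenerated data : consume it here.
         * For simplicity, this sketch assumes every provided byte is consumed;
         * real code must otherwise keep the remaining bytes for the next call. */
    }
    LZ4F_freeDecompressionContext(dctx);
    return 0;
}
```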
- only run listTest with `make test`, not with `make all`, which is limited to building
- update `clangtest`, so that it's possible to disable `-O3` optimization, for faster processing
Improve formatting
Include static assert
Use UTIL_fseek to handle potential fseek limitation
Be explicit when refusing to read from stdin
Properly free dctx after use
Include valgrind tests
Use the list of prerequisites instead of listing those files manually;
this way they will never fall out of sync.
Also update the amalgamation example to use a single cat command.
Fixes: a7e8d394 ("[amalgamation] add test")
Fixes: b192c86b ("[amalgamation] lz4frame.c")
Moved a few other tests to Makefiles.inc. Other things might need to go there.
Made a test for symlink appropriateness. Windows can NOT handle them the same way Unix-like operating systems do (if at all). This is mostly the same as the Visual C projects.
Embed version info into the .dll and .exe files that are redistributed.
This behavior has been preserved for compatibility with the existing ecosystem.
But it's problematic, as some environments start `lz4` without identifying stdout as a console by default,
leading to a change of behavior for the same line of script.
A more sensible policy would be to default to stdout only when input is stdin.
Soft change for the time being : keep the behavior, just print a warning message.
Users should prefer `-c` to explicitly select `stdout`.
Also : updated tests in Makefile to explicitly select `stdout` with `-c`.
The context is no longer dirty after a failed block compression.
Actually, all indexes remain valid.
It stays compatible with continued streaming, and with reset*_fast().
This is the reverse of `-BD`, and the current default.
This option can be useful to reverse a previous `-BD` setting.
It may in the future be more important
if `lz4` switches to generating dependent blocks by default.
- promoted LZ4_resetStream_fast() to stable
- moved LZ4_resetStream() to deprecated, but without triggering a compiler warning
- updated all sources to no longer rely on LZ4_resetStream()
note : LZ4_initStream() proposal is slightly different :
it's able to initialize any buffer, provided that it's large enough.
To this end, it accepts a void*, and returns an LZ4_stream_t*.
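A small usage sketch of that proposal (the buffer source and names below are illustrative):

```c
#include <stdlib.h>
#include "lz4.h"

/* Sketch of the LZ4_initStream() usage described above :
 * any sufficiently large, properly aligned buffer can back the stream state. */
void example_initStream(void)
{
    void* const stateBuffer = malloc(sizeof(LZ4_stream_t));
    if (stateBuffer == NULL) return;

    /* returns a valid LZ4_stream_t*, or NULL if the buffer is too small */
    LZ4_stream_t* const stream = LZ4_initStream(stateBuffer, sizeof(LZ4_stream_t));
    if (stream != NULL) {
        /* stream is now ready for LZ4_compress_fast_continue(), etc. */
    }
    free(stateBuffer);
}
```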
- promoted LZ4_resetStreamHC_fast() to stable
- moved LZ4_resetStreamHC() to deprecated (but do not generate a warning yet)
- Updated doc, to highlight difference between init and reset
- switched all invocations of LZ4_resetStreamHC() to LZ4_initStreamHC()
- misc: ensure `make all` also builds /tests
which actively tries to make it write out of bounds.
For this scenario to be possible,
it's necessary to set dstCapacity < LZ4F_compressBound()
When a compression operation fails,
the CCtx context is left in an undefined state;
therefore, compression cannot resume.
As a consequence :
- round trip tests must be aborted, since there is nothing valid to decompress
- most users avoid this situation, by ensuring that dstCapacity >= LZ4F_compressBound()
For these reasons, this use case was poorly tested up to now.
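A hypothetical sketch of the constraint (the helper name is illustrative):

```c
#include <stdio.h>
#include "lz4frame.h"

/* If dstCapacity < LZ4F_compressBound(srcSize, prefs), LZ4F_compressUpdate()
 * may fail, and the cctx must then be considered unusable. */
static int compress_chunk(LZ4F_cctx* cctx,
                          void* dst, size_t dstCapacity,
                          const void* src, size_t srcSize)
{
    if (dstCapacity < LZ4F_compressBound(srcSize, NULL /* default preferences */)) {
        fprintf(stderr, "warning: dstCapacity below LZ4F_compressBound(); this call may fail\n");
    }
    size_t const written = LZ4F_compressUpdate(cctx, dst, dstCapacity, src, srcSize, NULL);
    if (LZ4F_isError(written)) {
        /* the cctx is now in an undefined state : compression cannot resume,
         * and any round-trip test must be aborted */
        return -1;
    }
    return (int)written;
}
```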
so "funny" thing with cppcheck
is that no 2 versions give the same list of warnings.
On Mac, I'm using v1.81, which had all warnings fixed.
On Travis CI, it's v1.61, and it complains about a dozen more/different things.
On Linux, it's v1.72, and it finds a completely different list of a half dozen warnings.
Some of these seem to be bugs/limitations in cppcheck itself.
The TravisCI version, v1.61, seems unable to understand %zu correctly, and assumes it means %u.
Make a round-trip test with an arbitrary input file,
and generate an `abort()` on error,
to work in tandem with `afl`.
note : currently locked on level 9, to investigate #560.
The error can be reproduced using the following command :
./frametest -v -i100000000 -s1659 -t31096808
It's actually a bug in the streaming LZ4 API,
when starting a new stream
and providing a first chunk to compress with size < MINMATCH.
In that case, the chunk becomes a dictionary.
No hash was generated and stored,
but the chunk remains accessible, since default position 0 points to dictStart,
and position 0 is still within MAX_DISTANCE.
Then, the next attempt to read 32 bits from position 0 fails.
The issue would have been mitigated by starting from index 64 KB,
effectively eliminating position 0 as too far away.
The proper fix is to reject such a "dictionary" as too small,
which is what this patch does.
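A hypothetical reproduction sketch of the triggering call sequence (data and sizes are illustrative):

```c
#include <string.h>
#include "lz4.h"

/* Start a fresh stream and feed a first chunk smaller than MINMATCH (4 bytes),
 * then continue compressing : the tiny first chunk ends up acting as a dictionary. */
void tiny_first_chunk(void)
{
    LZ4_stream_t* const stream = LZ4_createStream();
    char dst[LZ4_COMPRESSBOUND(1024)];

    const char tiny[3] = { 'a', 'b', 'c' };          /* size < MINMATCH */
    LZ4_compress_fast_continue(stream, tiny, dst, sizeof(tiny), sizeof(dst), 1);

    char next[1024];
    memset(next, 'a', sizeof(next));
    /* before the fix, this follow-up call could read 32 bits from an
     * out-of-range position inherited from the undersized "dictionary" */
    LZ4_compress_fast_continue(stream, next, dst, sizeof(next), sizeof(dst), 1);

    LZ4_freeStream(stream);
}
```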
* Uninstall didn't remove the pkg-config correctly.
* Fix `mandir`
* Allow overriding either upper- or lower-case location variables, but
always use the lower case variables.
* Add test case that ensures overriding both upper- and lower-case
variables is the same, and that the directory is empty after uninstall.
Ring buffer tests were performed only with LZ4_decompress_safe_continue(),
leaving my buggy changes to LZ4_decompress_fast_continue() undetected.
The tests are now replicated and performed in a similar manner for both
LZ4_decompress_safe_continue() and LZ4_decompress_fast_continue() (except
for the small-buffer case, where only one function can be tested,
because part of the dictionary is overwritten with the output).
I also updated function names in the messages (changed them to the
actual ones). The error was reported for LZ4_decompress_safe(),
which I found misleading.
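For context, a hypothetical sketch of the ring-buffer decoding pattern these tests exercise (sizing and helper are illustrative; see lz4.h for the exact ring-buffer conditions):

```c
#include "lz4.h"

#define MESSAGE_MAX   1024
#define RING_SIZE     (64 * 1024 + MESSAGE_MAX)   /* illustrative sizing */

static char ring[RING_SIZE];

/* Decode each compressed message at the current ring offset,
 * wrapping once there isn't room for a full message. */
int decode_message(LZ4_streamDecode_t* lz4sd,
                   const char* cmp, int cmpSize, int* ringOffset)
{
    if (*ringOffset + MESSAGE_MAX > RING_SIZE) *ringOffset = 0;   /* wrap */
    int const decoded = LZ4_decompress_safe_continue(lz4sd, cmp,
                                                     ring + *ringOffset,
                                                     cmpSize, MESSAGE_MAX);
    if (decoded > 0) *ringOffset += decoded;
    return decoded;
}
```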