Commit Graph

375 Commits

Author SHA1 Message Date
Yann Collet
ad2d2764c7 fix minor static analyzer warnings
detected by scan-build, cppcheck and advanced compilation flags
fix #786
2020-09-29 17:20:52 -07:00
Yann Collet
78f4fdbb89
Merge pull request #923 from lz4/fix784
fix efficiency of LZ4_compress_HC_destSize()
2020-09-28 14:04:56 -07:00
Yann Collet
ab89dda91d improved last literals run on LZ4_compress_destSize
applying the new, more accurate formula from LZ4_compress_HC_destSize()

also : fixed a minor display issue in tests/frametest
2020-09-28 11:39:00 -07:00
Yann Collet
89736e4e27 ensure last match not too close to end
must respect MFLIMIT distance from oend
2020-09-27 23:59:56 -07:00
Yann Collet
8a362a8ac8
Merge pull request #921 from lz4/doubleNull
fix compressing into NULL
2020-09-27 21:09:06 -07:00
Yann Collet
e7fe105ac6 fix efficiency of LZ4_compress_HC_destSize()
LZ4_compress_HC_destSize() had a tendency
to discard its last match when this match overflowed the specified dstBuffer limit.
The impact is generally moderate,
but occasionally huge,
typically when this last match is very large
(such as compressing a bunch of zeroes).

Issue #784 fixed for both Chain and Opt implementations.

Added a unit test, suggested by @remittor, covering this scenario.
2020-09-27 21:04:40 -07:00
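For reference, a minimal usage sketch of LZ4_compress_HC_destSize() as declared in lz4hc.h; the 1 KB destination size and the helper name are illustrative, not part of the library:

```
#include <stdlib.h>
#include "lz4hc.h"

/* Sketch: compress as much of 'src' as fits into a fixed 1 KB destination.
 * On return, 'consumed' reports how many input bytes were actually used. */
static int compress_into_1k(const char* src, int srcSize, char* dst1k)
{
    int consumed = srcSize;                      /* in: available input, out: input consumed */
    void* const state = malloc((size_t)LZ4_sizeofStateHC());
    int written = 0;
    if (state != NULL) {
        written = LZ4_compress_HC_destSize(state, src, dst1k,
                                           &consumed, 1024,
                                           LZ4HC_CLEVEL_DEFAULT);
        free(state);
    }
    return written;                              /* 0 means nothing fit or an error occurred */
}
```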
Anton Kochkov
9730d91110 Fix compilation with TinyCC 2020-09-27 17:07:51 +08:00
Yann Collet
ee4f37d284 fix compressing into NULL
it now fails properly
bug discovered by oss-fuzz
2020-09-26 11:31:57 -07:00
Yann Collet
10d2e1c694 fixed lz4frame with blocks of size 1
properly track history
2020-09-17 14:43:02 -07:00
Yann Collet
ee01df1271 added the actual code change 2020-09-16 23:46:39 -07:00
Yann Collet
c5d6f8a8be fix #783
LZ4_decompress_safe_partial()
now also supports a scenario where
nb_bytes_to_generate is <= block_decompressed_size
and
nb_bytes_to_read is >= block_compressed_size.

Previously, the only supported scenario was
nb_bytes_to_read == block_compressed_size.

Note that,
if nb_bytes_to_read is > block_compressed_size,
then, necessarily,
nb_bytes_to_generate must be <= block_decompressed_size.
If both are larger, it will generate corrupted data.
2020-08-27 00:17:57 -07:00
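A minimal sketch of the newly supported scenario, using the public LZ4_decompress_safe_partial() declaration; the 64-byte prefix size is illustrative:

```
#include "lz4.h"

/* Sketch: decode only the first 64 bytes of a block.
 * srcSize may exceed the block's compressed size (trailing data is tolerated),
 * as long as the requested output stays within the block's decompressed size. */
static int decode_prefix64(const char* src, int srcSize, char* dst64)
{
    return LZ4_decompress_safe_partial(src, dst64,
                                       srcSize,   /* nb_bytes_to_read     */
                                       64,        /* nb_bytes_to_generate */
                                       64);       /* dst capacity         */
}
```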
Yann Collet
3e3a006c6f Merge branch 'dev' into extraInput 2020-08-26 23:20:28 -07:00
Yann Collet
5243173b23 added documentation about LZ4_FORCE_SW_BITCOUNT
Also : added a memory-frugal software byte count for big-endian 64-bit CPUs.
Disabled by default.
2020-08-25 22:17:29 -07:00
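For context, a sketch of the kind of software fallback such a flag selects; this is illustrative and not the library's exact routine:

```
#include <stdint.h>

/* Sketch: count trailing zero bytes of a non-zero 64-bit XOR difference
 * (little-endian view) without compiler intrinsics or lookup tables. */
static unsigned nb_common_bytes_sw(uint64_t diff)   /* precondition: diff != 0 */
{
    unsigned n = 0;
    while ((diff & 0xFF) == 0) { diff >>= 8; n++; }
    return n;
}
```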
Yann Collet
ee0a3cfa0c
Merge pull request #898 from aqrit/aqrit-prefixlen
rejigger bit counting intrinsics
2020-08-24 15:13:18 -07:00
Yann Collet
0002563be8 removed LZ4_compress_fast_force()
which no longer serves any purpose.

The comment implies that the mere presence of this unused function was affecting performance,
and that's the reason why it was not removed earlier.
This is likely another side effect of instruction alignment.

It's obviously unreliable to rely on such an effect,
meaning that the impact could be different, positive or negative,
with any minor code change, any minor compiler version change, or even a parameter change.
2020-08-21 19:23:49 -07:00
aqrit
e45defa8bd
silence warning
MSVC debug mode complains
2020-08-17 17:53:07 -04:00
BellaXlp
ab713923a2
fix issue #783 (#862)
* fix issue #783
2020-08-12 14:42:10 -07:00
aqrit
e72897402d
rejigger bit counting intrinsics
Fix lz4/lz4#867
Optimize software fallback routines.
Delete some faulty (and dead?) MSVC big endian code.
2020-08-11 21:14:09 -04:00
Yann Collet
b26c140a54
Merge pull request #895 from lz4/hugefast
Fix #876
2020-08-10 12:52:32 -07:00
Yann Collet
7b1b078dfc fix #876
by introducing an upper limit on the acceleration value
2020-08-10 11:03:27 -07:00
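The idea of the fix, as a sketch; the exact upper bound used by lz4.c is an assumption here:

```
/* Sketch: clamp caller-provided acceleration into a bounded range so that
 * extreme values can no longer overflow internal arithmetic. */
static int clamp_acceleration(int acceleration)
{
    if (acceleration < 1)     acceleration = 1;       /* default acceleration */
    if (acceleration > 65537) acceleration = 65537;   /* assumed maximum      */
    return acceleration;
}
```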
W. Felix Handte
a78235e6ad Fix Enum Casts
Fixes `-Wsign-compare` issues.
2020-08-10 13:47:08 -04:00
W. Felix Handte
9af86f0841 Remove dirty Field From LZ4_stream_t 2020-08-06 16:06:40 -04:00
W. Felix Handte
d7399232a4 Remove Extraneous Reset in LZ4_attach_dictionary()
Nothing internally sets dirty anymore. The only way to get that is if you use
an uninitialized context, in which case your warranty is void anyway.
2020-08-05 12:46:32 -04:00
Nick Terrell
fe2a1b3707 Call LZ4_memcpy() instead of memcpy()
`LZ4_memcpy()` uses `__builtin_memcpy()` to ensure that clang/gcc
can inline the `memcpy()` calls in freestanding mode.

This is necessary for decompressing the Linux Kernel with LZ4.
Without an analogous patch decompression ran at 77 MB/s, and with
the patch it ran at 884 MB/s.
2020-08-03 11:28:02 -07:00
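The wrapper is roughly the following (a sketch of the approach rather than a verbatim copy of lz4.c):

```
/* Route memcpy() through the compiler builtin where available, so freestanding
 * builds (such as the kernel) still get it inlined instead of calling into libc. */
#if defined(__GNUC__) && (__GNUC__ >= 4)
#  define LZ4_memcpy(dst, src, size) __builtin_memcpy(dst, src, size)
#else
#  define LZ4_memcpy(dst, src, size) memcpy(dst, src, size)
#endif
```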
Yann Collet
f4054274fa
Merge pull request #860 from adeason/old-style-definitions
Avoid old-style function definitions
2020-07-28 17:44:25 -07:00
Yann Collet
48f9ecfb34
Merge pull request #863 from Devernua/reducing_stack_usage_in_t_alignment
Reducing stack usage in _t_alignment checks
2020-07-16 09:44:40 -07:00
Alexander Gallego
e68c7d3878 avoid computing 0 offsets from null pointers
Similar work in the kernel:
https://patchwork.kernel.org/patch/11351499/

UBsan (+clang-10) complains about doing pointer arithmetic (adding 0)
on a null pointer.

This patch is tested with clang-10+ubsan
2020-07-08 08:30:07 -07:00
Aleksandr Kukuev
7a75b045bd Reducing stack usage in _t_alignment checks 2020-05-11 23:32:02 +03:00
Andrew Deason
12001d6c1a Avoid old-style function definitions
Define 0-argument functions like foo(void) instead of foo(), in order
to avoid a warning with -Wold-style-definition. This makes it easier
to embed lz4.c in projects that compile with -Werror
-Wold-style-definition.
2020-05-07 15:02:09 -05:00
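For illustration, the two definition styles the warning distinguishes:

```
int old_style() { return 0; }       /* K&R-style, unspecified parameters: triggers -Wold-style-definition */
int new_style(void) { return 0; }   /* explicitly takes no arguments: no warning */
```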
Bartosz Taudul
7224f9bd5d Force inline small functions used by LZ4_compress_generic. 2020-01-17 00:37:47 +01:00
Yann Collet
d755f87f9f fixed lz4hc assert error
when the src ptr is in a very low memory area (< 64K),
the virtual reference to data in the dictionary
might end up at a very high memory address.

Since it's not a "real" memory address,
just a virtual one used to calculate distance,
it doesn't matter : only the distance matters.

The assert was too restrictive.
Fixed.
2019-12-03 14:49:22 -08:00
Yann Collet
0f6cbd996f faster decoding speed with Visual
by enabling the fast decoder path.
Visual requires a different set of macro constants to detect x86 / x64.

On my laptop, decoding speed on x64 went up from 3.12 to 3.45 GB/s.
32-bit is less impressive, though still favorable,
with speed increasing from 2.55 to 2.60 GB/s.

So both cases are now enabled.

Suggested by Bartosz Taudul (@wolfpld).
2019-12-02 16:38:33 -08:00
Nigel Tao
c5a83c1a48 Have read_variable_length use fixed size types
Otherwise, the output from decoding LZ4-compressed input could be
platform dependent.

Also add a compile-time check to confirm the existing code's assumptions
that, if <stdint.h> isn't used, then sizeof(int) == 4.

Updates #792
2019-09-21 12:38:46 +10:00
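A sketch of the length-extension rule this helper implements; the function name and error convention are illustrative, and fixed-size types keep the result identical across platforms:

```
#include <stdint.h>

/* Sketch: lengths that exceed the token's 4-bit field are extended by a run of
 * 255-valued bytes plus one final byte, all summed together. */
static uint32_t read_extended_length(const uint8_t** ip, const uint8_t* iend, int* error)
{
    uint32_t length = 0;
    uint8_t s;
    *error = 0;
    do {
        if (*ip >= iend) { *error = 1; return 0; }   /* truncated input */
        s = *(*ip)++;
        length += s;
    } while (s == 255);
    return length;
}
```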
Nick Terrell
d7cad81093 [LZ4_compress_destSize] Fix off-by-one error
PR#756 fixed the data corruption bug, but didn't clear `ip`. PR#760
fixed that off-by-one error, but missed the case where `ip == filledIp`,
which is harder for the fuzzers to find (it took 20 days not 1 day).

Verified this fixed the issue reported by OSS-Fuzz.

Credit to OSS-Fuzz.
2019-08-09 10:36:46 -07:00
W. Felix Handte
4c58006719 Only Bump Offset When Attaching Non-Null Dictionary
We do want to bump, even if the dictionary is empty, but we **don't** want to
bump if the dictionary is null.
2019-08-06 19:08:41 -04:00
W. Felix Handte
4f49d744e8 Add Attach Dict Debug Log 2019-08-06 18:54:03 -04:00
W. Felix Handte
918269a4e3 Make Attaching an Empty Dict Behave the Same as Using it Directly
When using an empty dictionary, we bail out of loading or attaching it in
ways that leave the working context in potentially slightly different states.
In particular, in some paths, we will cause the currentOffset to be non-zero,
while in others we would allow it to remain 0.

This difference in behavior is perfectly harmless, but in some situations, it
can produce slight differences in the compressed output. For sanity's sake,
we currently try to maintain a strict correspondence between the behavior of
the dict attachment and the dict loading paths. This patch restores them to
behaving identically.

This shouldn't have any negative side-effects, as far as I can tell. When
writing the dict attachment code, I tried to preserve zeroed currentOffsets
when possible, since they benchmarked as very slightly faster. However, the
case of attaching an empty dictionary is probably rare enough that it's
acceptable to minutely degrade performance in that corner case.
2019-08-06 18:50:33 -04:00
Yann Collet
e18fbd51c1 silence msan warning when offset==0 2019-08-06 15:35:49 +02:00
Yann Collet
7a516411d4
Merge pull request #760 from terrelln/destSize
[LZ4_compress_destSize] Fix off-by-one error in fix
2019-07-19 15:22:51 -07:00
Nick Terrell
1f236e0790 Fix LZ4_attach_dictionary with empty dictionary 2019-07-18 12:29:15 -07:00
Nick Terrell
7c32101c65 [LZ4_compress_destSize] Fix off-by-one error in fix
The next match is looking at the current ip, not the next ip,
so it needs to be cleared as well.

Credit to OSS-Fuzz
2019-07-18 12:20:29 -07:00
Nick Terrell
13a2d9e34f [LZ4_compress_destSize] Fix overflow condition 2019-07-17 11:50:47 -07:00
Nick Terrell
6bc6f836a1 [LZ4_compress_destSize] Fix rare data corruption bug 2019-07-17 11:38:38 -07:00
Nick Terrell
690009e2c2 [LZ4_compress_destSize] Allow 2 more bytes of match length 2019-07-17 11:07:24 -07:00
Yann Collet
7654a5a6d2
Merge pull request #752 from terrelln/fuzzers
[ossfuzz] Improve the fuzzers
2019-07-16 11:18:09 -07:00
Nick Terrell
725cb0aafd [lz4] Fix bugs in partial decoding
* Partial decoding could read a few bytes beyond the end of the input
* Partial decoding returned an error with an empty output buffer
2019-07-15 12:21:59 -07:00
Yann Collet
6654c2cd3b ensure conformance with custom LZ4_DISTANCE_MAX
It's now possible to select a custom LZ4_DISTANCE_MAX at compile time,
provided it's <= 65535.

However, in some cases (when compressing in byU16 mode),
the new distance wasn't respected,
as the code used to assume it was necessarily within range.

Added a distance check for this case.
Also : added a new TravisCI test which ensures that
custom LZ4_DISTANCE_MAX compiles correctly
and compresses correctly (relying on `assert()` to find outsized offsets).
2019-07-15 12:11:34 -07:00
Sylvestre Ledru
12e5841e76
Remove a useless declaration 2019-07-04 18:13:36 +02:00
Nick Terrell
e72d442300 Fix out-of-bounds read of up to 64 KB in the past 2019-06-28 14:58:35 -07:00
Yann Collet
348e107d99 restored FORCE_INLINE 2019-06-04 14:04:49 -07:00
Yann Collet
280fc0856d
Merge pull request #717 from lz4/inplace
Added documentation and macro to support in-place compression and decompression
2019-05-31 12:59:38 -07:00
Yann Collet
33cb8518ac decompress: changed final memcpy() into memmove()
for compatibility with in-place decompression scenarios.
2019-05-31 11:44:37 -07:00
Chenxi Mao
64b5917736 FAST_DEC_LOOP: only do the offset check under a specific condition.
When running FAST_DEC_LOOP performance tests, I found the
offset check is executed much more often than in v1.8.3.

You can see the difference in condition checks via lzbench with the dickens test case:
v1.8.3   34959
v1.9.x   1055885

After investigating the code, the difference becomes clear.
v1.8.3 skips the condition check unless
the condition at
https://github.com/lz4/lz4/blob/v1.8.3/lib/lz4.c#L1463
is true
AND the condition at
https://github.com/lz4/lz4/blob/v1.8.3/lib/lz4.c#L1478
is also true, in which case the offset check is invoked.

In v1.9.x,
the offset check is invoked on every loop iteration, which leads to the slowdown.
The fix is to move this check under the same specific condition
to avoid useless condition checks.

After this change, the call count is the same as in v1.8.3.
2019-05-31 08:36:13 +08:00
Yann Collet
76116495bf some more minor conversion warnings fixes 2019-05-29 13:14:52 -07:00
Yann Collet
444550defa ensure lz4.h can be included with or without LZ4_STATIC_LINKING_ONLY in any order
ensure correct propagation of LZ4_DISTANCE_MAX
2019-05-29 12:21:14 -07:00
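The guaranteed property, in a tiny example:

```
#include "lz4.h"                 /* first inclusion: public API only */
#define LZ4_STATIC_LINKING_ONLY
#include "lz4.h"                 /* later inclusion now also exposes the static-only declarations */
```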
Yann Collet
b17f578a91 added comments and macros for in-place (de)compression 2019-05-29 12:06:13 -07:00
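A sketch of the intended in-place decompression layout; the buffer-size macro name is taken from lz4.h's static section and should be treated as an assumption, as is the fixed decompressed size:

```
#define LZ4_STATIC_LINKING_ONLY
#include "lz4.h"

#define DECOMPRESSED_SIZE 4096   /* illustrative: known decompressed size */

/* Sketch: park the compressed block at the end of a buffer sized for the
 * decompressed data plus a margin, so the decoder's writes never catch up
 * with its reads, then decompress toward the front of the same buffer. */
static int decompress_in_place(char* buffer, int compressedSize)
{
    size_t const bufferSize = LZ4_DECOMPRESS_INPLACE_BUFFER_SIZE(DECOMPRESSED_SIZE);
    char* const compressed = buffer + bufferSize - (size_t)compressedSize;
    return LZ4_decompress_safe(compressed, buffer, compressedSize, DECOMPRESSED_SIZE);
}
```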
George Prekas
605d811e6c enable LZ4_FAST_DEC_LOOP build macro on aarch64/GCC by default 2019-05-07 08:36:06 -05:00
Yann Collet
ba99eac4d0 several minor style changes recommended by clang-tidy 2019-04-24 10:03:02 -07:00
Yann Collet
ae199124e5 fixed read-after-input in LZ4_decompress_safe() 2019-04-18 18:50:51 -07:00
Yann Collet
5acfb15df0 re-enable FORCE_INLINE
was disabled for tests
2019-04-17 15:33:37 -07:00
Yann Collet
25d96f1e4d fix out-of-bound read within LZ4_decompress_fast()
and deprecate LZ4_decompress_fast(),
with deprecation warnings enabled by default.

Note that, as a consequence of the fix,
LZ4_decompress_fast is now __slower__ than LZ4_decompress_safe().
That's because, since it doesn't know the input buffer size,
it must progress more cautiously into the input buffer
to ensure it does not read out-of-bounds.
2019-04-17 15:01:53 -07:00
Norm Green
1848ea5cbd Fix AIX errors/warnings 2019-04-17 09:20:09 -07:00
Yann Collet
920c988669 simplified output_directive 2019-04-15 14:13:10 -07:00
Yann Collet
55f6f0dd74 fix comma for pedantic 2019-04-15 11:22:25 -07:00
Yann Collet
474c17cdc4 unified limitedOutput_directive
between lz4.c and lz4hc.c .
It was left in a strange state after the "amalgamation" patch.

Now only 3 directives remain,
with the same name across both implementations,
and a single place of definition.

This might allow some light simplification due to the reduced number of possible states.
2019-04-15 11:09:56 -07:00
Yann Collet
481a37fe47 fixed lz4frame with linked blocks
when one block was not compressible,
it would tag the context as `dirty`,
resulting in compression automatically bailing out of all future blocks,
leaving the rest of the frame uncompressed.
2019-04-15 10:28:36 -07:00
Yann Collet
f8b7605034 fixed minor Visual warnings
since Visual 2017,
it worries about potential overflows, which are actually impossible.
Replaced (c * a) by (c ? a : 0).
The compiler will likely replace the * with a cmov.
Probably harmless for performance.
2019-04-12 16:49:01 -07:00
Yann Collet
8d76c8a44a introduce LZ4_DISTANCE_MAX build macro
makes it possible to generate LZ4-compressed blocks
with a controlled maximum offset (necessarily <= 65535).

This could be useful for compatibility with decoders
using a very limited memory budget (< 64 KB).

Answers #154
2019-04-11 14:15:33 -07:00
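A sketch of how a build might pin the macro; the 4 KB value is illustrative:

```
/* Sketch: compile lz4.c with a reduced match window, e.g. via
 *   -DLZ4_DISTANCE_MAX=4096
 * so every emitted offset fits a decoder that keeps only a 4 KB history.
 * Values above 65535 are not supported. */
#ifndef LZ4_DISTANCE_MAX
#  define LZ4_DISTANCE_MAX 4096
#endif
```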
Yann Collet
14c71dfa9c modified LZ4_initStreamHC() to look like LZ4_initStream()
it is now a pure initializer, for statically allocated states.
It can initialize any memory area, and because of this, requires a size.
2019-04-09 13:55:42 -07:00
Yann Collet
5ef4f3ce91 check some more initialization results
ensure they're not NULL.
2019-04-08 16:51:22 -07:00
Yann Collet
111df0fa45 removed LZ4_stream_t alignment test on Visual
it fails in x86 32-bit mode :
Visual reports an alignment of 8 bytes (even with alignof())
but actually only aligns LZ4_stream_t on 4 bytes.
The alignment check then fails, resulting in missed initialization.
2019-04-08 16:47:21 -07:00
Yann Collet
c198a39a66 LZ4_initStream() checks alignment restriction
updated associated documentation
2019-04-08 12:49:54 -07:00
Yann Collet
2ece0d8380 created LZ4_initStream()
- promoted LZ4_resetStream_fast() to stable
- moved LZ4_resetStream() to deprecated status, but without triggering a compiler warning
- update all sources to no longer rely on LZ4_resetStream()

note : LZ4_initStream() proposal is slightly different :
it's able to initialize any buffer, provided that it's large enough.
To this end, it accepts a void*, and returns an LZ4_stream_t*.
2019-04-05 12:56:26 -07:00
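A short usage sketch of the proposal described above, using the declaration from lz4.h:

```
#include "lz4.h"

static LZ4_stream_t streamBody;   /* statically allocated state */

/* Sketch: LZ4_initStream() accepts any sufficiently large, properly aligned
 * buffer and returns it as a ready-to-use LZ4_stream_t*, or NULL on failure. */
static LZ4_stream_t* get_stream(void)
{
    return LZ4_initStream(&streamBody, sizeof(streamBody));
}
```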
Yann Collet
f2755c9887 minor comments and reformatting 2019-04-03 08:59:29 -07:00
Yann Collet
753076bfa4 fixed minor conversion warnings 2019-04-02 17:16:43 -07:00
Yann Collet
2589c4424f created LZ4_FAST_DEC_LOOP build macro 2019-04-02 16:22:11 -07:00
Yann Collet
7d9d00f4df fixed a few minor conversion warnings 2019-04-02 16:06:37 -07:00
Yann Collet
d85bdb4ff2
Merge pull request #645 from djwatson/optimize_decompress_generic
Optimize decompress generic
2019-02-11 16:58:53 -08:00
Dave Watson
5d7d1166cb decompress_generic: Limit fastpath to x86
New fastpath currently shows a regression on Qualcomm
ARM chips. Restrict it to x86 for now.
2019-02-11 11:44:51 -08:00
Dave Watson
75fb878a90 decompress_generic: Add fastpath for small offsets
For small offsets of size 1, 2, 4 and 8, we can set a single uint64_t,
and then use it to do a memset() variation.  In particular, this makes
the somewhat-common RLE (offset 1) about 2-4x faster than the previous
implementation - we avoid not only the load blocked by store, but also
avoid the loads entirely.
2019-02-08 13:57:23 -08:00
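An illustrative sketch of the technique (not the library's exact code):

```
#include <stdint.h>
#include <string.h>

/* Sketch: for offsets 1, 2, 4 or 8, materialize the repeating match once as an
 * 8-byte pattern, then stamp it out 8 bytes at a time, avoiding the loads an
 * overlapping byte-by-byte copy would need. May overwrite up to 7 extra bytes,
 * so the caller must reserve that much slack. */
static void pattern_copy(uint8_t* op, const uint8_t* match, size_t length, size_t offset)
{
    uint64_t pattern;
    uint8_t* const p = (uint8_t*)&pattern;
    size_t i;
    for (i = 0; i < 8; i++) p[i] = match[i % offset];   /* offset divides 8, so the pattern tiles */
    for (i = 0; i < length; i += 8) memcpy(op + i, &pattern, 8);
}
```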
Dave Watson
faac110e20 decompress_generic: Unroll loops a bit more
Generally we want our wildcopy loops to look like the
memcpy loops from our libc, but without the final byte copy checks.
We can unroll a bit to make long copies even faster.

The only catch is that this affects the value of FASTLOOP_SAFE_DISTANCE.
2019-02-08 13:57:23 -08:00
Dave Watson
1fbaf84306 decompress_generic: remove msan write
This store is also causing load-blocked-by-store issues, remove it.
The msan warning will have to be fixed another way if it is still an issue.
2019-02-08 13:57:23 -08:00
Dave Watson
28b824921d decompress_generic: re-add fastpath
This is the remainder of the original 'shortcut'. If true, we can avoid
the loop in LZ4_wildCopy, and copy directly instead.
2019-02-08 13:57:23 -08:00
Dave Watson
232f1e261f decompress_generic: drop partial copy check in fast loop
We've already checked that we are more than FASTLOOP_SAFE_DISTANCE
away from the end, so this branch can never be true, we will have
already jumped to the second decode loop.
2019-02-08 13:57:23 -08:00
Dave Watson
59332a3026 decompress_generic: Optimize literal copies
Use LZ4_wildCopy16 for variable-length literals.  For literal counts that
fit in the flag byte, copy directly.  We can also omit oend checks for
roughly the same reason as the previous shortcut:  We check once that both
match length and literal length fit in FASTLOOP_SAFE_DISTANCE, including
wildcopy distance.
2019-02-08 13:57:23 -08:00
Dave Watson
5dfa7d422b decompress_generic: optimize match copy
Add an LZ4_wildCopy16, that will wildcopy, potentially smashing up
to 16 bytes, and use it for match copy.  On x64, this avoids many
blocked loads due to store forwarding, similar to issue #411.
2019-02-08 13:57:23 -08:00
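A sketch of the "wild copy" idea (illustrative, not the exact LZ4_wildCopy16 implementation):

```
#include <string.h>

/* Sketch: copy in fixed 16-byte chunks, accepting up to 15 bytes of overwrite
 * past dstEnd; callers must keep at least that much slack before the true end
 * of the output buffer. */
static void wild_copy16(void* dstPtr, const void* srcPtr, void* dstEnd)
{
    char* d = (char*)dstPtr;
    const char* s = (const char*)srcPtr;
    do { memcpy(d, s, 16); d += 16; s += 16; } while (d < (char*)dstEnd);
}
```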
Dave Watson
28356e02ad decompress_generic: Add a loop fastpath
Copy the main loop, and change checks such that op is always less
than oend-SAFE_DISTANCE.  Currently these are added for the literal
copy length check, and for the match copy length check.

Otherwise the first loop is exactly the same as the second.  Follow on
diffs will optimize the first copy loop based on this new requirement.

I also tried instead making a separate inlineable function for the copy
loop (similar to the existing partialDecode flags, etc.), but I think the
changes might be significant enough to warrant doubling the code, instead
pulling out common functionality into separate functions.

This is the basic transformation that will allow several following optimisations.
2019-02-08 13:57:19 -08:00
Dave Watson
4da336062e decompress_generic: Refactor variable length fields
Make a helper function to read variable lengths for literals and
match length.
2019-02-08 13:42:42 -08:00
Jeremy Maitin-Shepard
26e7635a0e Eliminate optimize attribute warning with clang on PPC64LE 2019-02-04 12:22:56 -08:00
W. Felix Handte
4e3accccb2 Fix Dict Size Test in LZ4_compress_fast_continue()
Dictionaries don't need to be > 4 bytes, they need to be >= 4 bytes. This test
was overly conservative.

Also removes the test in `LZ4_attach_dictionary()`.
2018-12-05 11:24:33 -08:00
W. Felix Handte
535636ff5c Don't Attach Very Small Dictionaries
Fixes a mismatch in behavior between loading into the context (via
`LZ4_loadDict()`) a very small (<= 4 bytes) non-contiguous dictionary, versus
attaching it with `LZ4_attach_dictionary()`.

Before this patch, this divergence could be reproduced by running

```
make -C tests fuzzer MOREFLAGS="-m32"
tests/fuzzer -v -s1239 -t3146
```

Making sure these two paths behave exactly identically is an easy way to test
the correctness of the attach path, so it's desirable that this remain an
unpolluted, high signal test.
2018-12-04 14:05:11 -08:00
Bing Xu
17f5071e72 Enable amalgamation of lz4hc.c and lz4.c 2018-11-15 22:24:25 -08:00
Oleg Khabinov
28eb88d988 Some followups and renamings 2018-10-01 15:19:45 -07:00
Oleg Khabinov
f2ae385c2f Rename initCheck to dirtyContext and use it in LZ4_resetStream_fast() to check if full reset is needed. 2018-09-28 14:55:05 -07:00
Yann Collet
cb917827f9
Merge pull request #578 from lz4/support128bit
Support for 128bit pointers like AS400
2018-09-26 13:57:09 -07:00
Yann Collet
b2215f2a89 tried to clean another bunch of cppcheck warnings
the "funny" thing with cppcheck
is that no 2 versions give the same list of warnings.

On Mac, I'm using v1.81, which had all warnings fixed.
On Travis CI, it's v1.61, and it complains about a dozen more/different things.
On Linux, it's v1.72, and it finds a completely different list of half a dozen warnings.

Some of these seem to be bugs/limitations in cppcheck itself.
The Travis CI version, v1.61, seems unable to understand %zu correctly, and appears to assume it means %u.
2018-09-19 12:12:49 -07:00
Yann Collet
8bea19d57c fixed minor cppcheck warnings in lib 2018-09-18 15:51:26 -07:00
Yann Collet
6381d828fd increase size of LZ4 contexts for 128-bit systems 2018-09-17 17:31:57 -07:00
Yann Collet
6103b4c9b4 use byU32 mode for any pointer > 32-bit
including 128-bit, like IBM AS-400
2018-09-14 15:27:48 -07:00
Yann Collet
6d32240b2e clarify constant MFLIMIT
and separate it from MATCH_SAFEGUARD_DISTANCE.

While both constants have the same value,
they do not serve the same purpose, hence should not be confused.
2018-09-11 10:00:13 -07:00