Benchmark for multiple tuple sizes
The following cases are tested (see the sketch after the list):
- sorted tuple
- reverse sorted tuple
- randomized tuple
- sorted tuple except first element
- sorted tuple except last element
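As a rough, hypothetical sketch of what these cases look like, assuming they
feed something like a `hana::sort` benchmark (the variable names and the tiny
size are made up; the real inputs are generated from the benchmark templates
at many sizes):

```cpp
#include <boost/hana/integral_constant.hpp>
#include <boost/hana/sort.hpp>
#include <boost/hana/tuple.hpp>
namespace hana = boost::hana;

int main() {
    // sorted tuple
    auto sorted    = hana::make_tuple(hana::int_c<0>, hana::int_c<1>, hana::int_c<2>, hana::int_c<3>);
    // reverse sorted tuple
    auto reversed  = hana::make_tuple(hana::int_c<3>, hana::int_c<2>, hana::int_c<1>, hana::int_c<0>);
    // randomized tuple
    auto shuffled  = hana::make_tuple(hana::int_c<2>, hana::int_c<0>, hana::int_c<3>, hana::int_c<1>);
    // sorted tuple except first element
    auto bad_first = hana::make_tuple(hana::int_c<3>, hana::int_c<0>, hana::int_c<1>, hana::int_c<2>);
    // sorted tuple except last element
    auto bad_last  = hana::make_tuple(hana::int_c<0>, hana::int_c<1>, hana::int_c<3>, hana::int_c<2>);

    // The quantity of interest would presumably be the time it takes to
    // compile the sort of each case.
    auto result = hana::sort(shuffled);
    (void)sorted; (void)reversed; (void)bad_first; (void)bad_last; (void)result;
}
```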
Those are old compilers, and dropping support for them allows removing a
couple of workarounds. It also reduces the CI burden and will allow us to
test more recent and more relevant compilers.
Metabench now contains almost all of Hana's algorithms, so we document
the existence of Metabench instead of redundantly generating the
benchmarks on Travis for Hana only.
It makes very little sense to compare find_if for set and map with
find_if for linear data structures, since that is not the intended
use case for set and map. Until we have a better comparison of
associative data structures ready, I do not want to present this
data as it could be misleading.
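For illustration only (the values and names below are invented, not taken
from the benchmarks): the intended use case for `map` and `set` is keyed
lookup and membership testing, while `find_if` is a linear, predicate-driven
search that fits sequences:

```cpp
#include <boost/hana.hpp>
namespace hana = boost::hana;

int main() {
    // Intended use of map: lookup by key.
    auto sizes = hana::make_map(
        hana::make_pair(hana::type_c<int>,  hana::size_c<sizeof(int)>),
        hana::make_pair(hana::type_c<char>, hana::size_c<sizeof(char)>)
    );
    auto int_size = hana::find(sizes, hana::type_c<int>);   // hana::just(...)

    // Intended use of set: membership testing.
    auto keys = hana::make_set(hana::type_c<int>, hana::type_c<char>);
    auto has_int = hana::contains(keys, hana::type_c<int>); // compile-time true

    // find_if on a sequence: a linear search driven by a predicate.
    auto xs = hana::make_tuple(1, 'x', 3.5);
    auto first_double = hana::find_if(xs, [](auto const& x) {
        return hana::typeid_(x) == hana::type_c<double>;
    });                                                      // hana::just(3.5)

    (void)int_size; (void)has_int; (void)first_double;
}
```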
Initially, this commit was supposed to provide a slightly
optimized version of `find_if` for `integer_sequence`.
Unfortunately, benchmarking did not show any significant
difference, and so the current implementation will be kept.
Benchmark data is here: http://pastebin.com/t3M8YwzD
- chart.html contains a link to external content, but that's OK.
- doc/header.html and doc/footer.html contain "invalid links" and
"invalid characters" because of $relpath^, which is understood by
Doxygen.
Also slightly improve the benchmarking framework:
- Allow passing an additional environment to benchmarks
- Add the directory of the .erb.cpp file to the include path
- Output stdout when a compilation error occurs
Specifically,
(1) We now benchmark with fusion::list too
(2) We now document our methodology for forcing the evaluation of algorithms
Note that we still use `as_list` and `as_vector` to force the evaluation
of algorithms instead of using e.g. `for_each`. This is because we want
to compare apples with apples, and for this we need to get a sequence of
computed values, not just a for_each over the view. The disclaimer in the
tutorial saying "Fusion might encourage a different design" already warns
people that we're not necessarily using idiomatic Fusion, so there is no
need to benchmark unfairly to try to account for that.
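To make the distinction concrete, here is a rough sketch with made-up values,
using Fusion's `reverse` as a stand-in for whichever algorithm is being
measured; Fusion algorithms return lazy views, and only a conversion like
`as_vector` or `as_list` produces an actual sequence of results comparable to
what the Hana algorithms compute:

```cpp
#include <boost/fusion/include/as_vector.hpp>
#include <boost/fusion/include/make_vector.hpp>
#include <boost/fusion/include/reverse.hpp>
namespace fusion = boost::fusion;

int main() {
    auto xs = fusion::make_vector(1, '2', 3.0);

    // Fusion algorithms return lazy views: no result sequence exists at this
    // point, and iterating the view with for_each would never build one.
    auto view = fusion::reverse(xs);

    // as_vector (or as_list) forces the view into an actual sequence of
    // results, which is what the Hana side of the benchmark produces as well.
    auto results = fusion::as_vector(view);
    (void)results;
}
```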
The benchmarks and documentation were only ever updated from Travis, and it
is much simpler to do that directly in bash from Travis than to write it
in CMake as we did.