[/
/ Copyright (c) 2003 Boost.Test contributors
/
/ Distributed under the Boost Software License, Version 1.0. (See accompanying
/ file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
/]

[section:log_floating_points Logging floating point type numbers]

It may appear that floating-point numbers are displayed by the __UTF__ with an excessive number of decimal digits.
However, the number of digits shown is chosen to avoid apparently nonsensical displays like `[1.00000 != 1.00000]`
when comparing exact unity against a value that has been increased by just one least significant binary digit, using
the default precision for `float` of just 6 decimal digits given by `std::numeric_limits<float>::digits10`.
The number of decimal digits displayed is instead the value proposed for a future C++ Standard in
[@http://www2.open-std.org/JTC1/SC22/WG21/docs/papers/2005/n1822.pdf A Proposal to add a max
significant decimal digits value], called `std::numeric_limits<FPT>::max_digits10` (now part of C++11).
For 32-bit floats, 9 decimal digits are needed to ensure that a single bit change produces a different decimal
digit string.

A much more helpful display using 9 decimal digits is thus
`[1.00000000 != 1.00000012]`, showing that the two values are in fact different.

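The effect is easy to reproduce outside the framework. The following standard C++ sketch (plain `iostream`
output, not part of the __UTF__ API) prints exact unity and a value one least significant bit larger, first
with `digits10` and then with `max_digits10` decimal digits:

    #include <cmath>
    #include <iomanip>
    #include <iostream>
    #include <limits>

    int main()
    {
        float a = 1.0f;
        float b = std::nextafter(1.0f, 2.0f);  // unity increased by one least significant bit

        // With digits10 (6) decimal digits the two values print identically ...
        std::cout << std::setprecision(std::numeric_limits<float>::digits10)
                  << a << " vs " << b << '\n';

        // ... while max_digits10 (9) decimal digits makes the difference visible: 1 vs 1.00000012
        std::cout << std::setprecision(std::numeric_limits<float>::max_digits10)
                  << a << " vs " << b << '\n';
    }
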
For __IEEE754__ 32-bit `float` values, 9 decimal digits are shown; for 64-bit __IEEE754__ `double`, 17 decimal digits;
for 80-bit __IEEE754__ extended `long double`, 21 decimal digits; and for 128-bit __IEEE754__ quadruple-precision
`long double` and 128-bit SPARC extended `long double`, 36 decimal digits. For floating-point types, a convenient
formula to calculate `max_digits10` is `2 + std::numeric_limits<FPT>::digits * 3010/10000`.

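As a quick check of that formula, the short program below (an illustration only; `approximate_max_digits10` is
not a Boost.Test facility) evaluates `2 + std::numeric_limits<FPT>::digits * 3010/10000` for the built-in types
and prints the result next to the standard `max_digits10` constant available since C++11:

    #include <iostream>
    #include <limits>

    // Illustrative helper: the 2 + digits * 3010/10000 approximation from the text.
    template <typename FPT>
    int approximate_max_digits10()
    {
        return 2 + std::numeric_limits<FPT>::digits * 3010 / 10000;
    }

    int main()
    {
        std::cout << approximate_max_digits10<float>() << ' '
                  << std::numeric_limits<float>::max_digits10 << '\n';        // 9 9
        std::cout << approximate_max_digits10<double>() << ' '
                  << std::numeric_limits<double>::max_digits10 << '\n';       // 17 17
        std::cout << approximate_max_digits10<long double>() << ' '
                  << std::numeric_limits<long double>::max_digits10 << '\n';  // 21 21 for an 80-bit extended long double
    }
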
[note
A user-defined floating-point type UDFPT must define
`std::numeric_limits<UDFPT>::is_specialized = true` and provide an appropriate value
for `std::numeric_limits<UDFPT>::digits`, the number of bits used for the significand
(mantissa). For example, the SPARC extended long double 128 uses 113 bits for the significand (one of
which is implicit).
]

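For a user-defined type, the requirement in the note amounts to providing a `std::numeric_limits`
specialization. The sketch below uses a hypothetical placeholder type `udfpt` and shows only the two members
mentioned above; a real type would define the remaining members as well:

    #include <iostream>
    #include <limits>

    // Hypothetical user-defined floating-point type, shown only as a placeholder
    // for the specialization below.
    struct udfpt { /* ... 128-bit representation ... */ };

    namespace std {
        template <>
        class numeric_limits<udfpt>
        {
        public:
            // The two members required by the note above.
            static const bool is_specialized = true;
            static const int  digits = 113;  // significand bits, as in the SPARC 128-bit format
            // ... remaining members as appropriate for the real type ...
        };
    }

    int main()
    {
        std::cout << std::numeric_limits<udfpt>::digits << '\n';  // prints 113
    }
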
[endsect] [/section:log_floating_points]