The ones marked *** have been addressed
The ones marked ??? (there are only a couple) are still being looked at
On 3/6/17 7:35 AM, Steven Watanabe via Boost wrote:
> AMDG
>
> All comments are as of 3bcfabe1ca
>
> automatic.hpp
>
> Actually it looks like calculate_max_t was originally
> written to return T or U, but was then altered to
> use [u]intmax_t.
Right - my intention was that automatic would reflect
the standard C++ promotions but would return
[u]intmax_t. This is later reduced according to
the anticipated range of the result. So the whole
business is about trying to get the sign right.
I made the changes, reran the tests, and nothing changed.
***
automatic has been altered. It doesn't
attempt to use the rules from C++. It uses its own rules,
as documented in the manual.
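For illustration, here is a minimal plain-C++ sketch of the idea behind automatic promotion (not the library's code): pick a result type wide enough to hold any value the operand ranges can produce, so the operation itself can never overflow.

    #include <cstdint>

    // Illustrative plain-C++ sketch only - not the library's code. The idea
    // behind automatic promotion: choose a result type wide enough for the
    // worst case of the operand ranges, so this addition can never overflow
    // and needs no runtime check.
    std::int16_t add_widened(std::int8_t t, std::int8_t u){
        // [-128,127] + [-128,127] = [-256,254], which always fits in int16_t
        return static_cast<std::int16_t>(static_cast<std::int16_t>(t) + u);
    }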
>
> 97:
> // clause 4 - otherwise use unsigned version of the signed type
> std::uintmax_t
> It isn't clear to me why uintmax_t is preferable
> to intmax_t when neither is able to represent every
> possible result.
If both operands are signed - then uintmax_t can hold
one more bit than intmax_t can. In these cases the
result can be twice as large before runtime
checking is necessary.
***
The rules have been simplified and work better now.
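For reference, the extra bit mentioned above can be seen directly (a sketch; true on a typical two's complement implementation, not formally guaranteed on exotic platforms with padding bits):

    #include <cstdint>
    #include <limits>

    // On a typical two's complement implementation uintmax_t has one more
    // value bit than intmax_t, which is the extra headroom referred to above.
    static_assert(
        std::numeric_limits<std::uintmax_t>::digits
        == std::numeric_limits<std::intmax_t>::digits + 1,
        "uintmax_t carries one more value bit than intmax_t"
    );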
> 124:
> using t_base_type = typename base_type<T>::type;
> This is called from safe_base_operations:259, which
> has already called base_type.
>
> From the documentation, I had assumed that automatic
> used the range from safe_base. However this range
> is not actually passed to the PromotionPolicy, so
> automatic always bumps the type to the next largest
> integer type, until it reaches [u]intmax_t. This
> means than any expressions with more than a few
> terms will always be carried out using [u]intmax_t,
> which is potentially inefficient, e.g. intmax_t may
> require software implementation if it's larger
> than the hardware register size.
***
Code has been altered so that the safe range is passed to
all the promotion policies. Promotion policies return
a numeric type - which is not a safe type - which
depends on the particular policy. E.g. native returns
the result type based on C++ rules while automatic
returns a generally larger result type.
Clearly this is a sticky point. There are two issues
here. One is the result base type and the other is the
min/max range of the safe result type returned. The returned
safe type includes the proper range but may have
a base type which is larger than necessary. I'll have
to look into this.
***
I feel that I've got this pinned down now.
> A greater
> problem arises from the fact that mixed signed/unsigned
> arithmetic eventually chooses unsigned when the
> range is too large. If we have a slightly complex
> expression with both signed and unsigned terms, the
> result type will eventually be selected as uintmax_t,
> which may be incapable of representing the result,
> thus causing an error.
***
This can happen, but I believe that the new rules diminish
this possibility.
Hmmm - My thinking for "automatic" was that it should mimic
the "native" C/C++ promotion but return a larger size. I'll
have to think about this some more.
I was wrong about this.
> (I know that this scenario
> can also fail with builtin integer types, but
> it seems to me that the whole point of 'automatic'
> is to make things "just work" without having to
> worry about pesky things like signedness)
Right. To the extent possible.
Looking at this observation, I see that native promotion
has the same problem in that it's unclear whether it
should take safe types or native base types as arguments.
More stuff to think about.
***
fixed - takes safe types and returns built-in type
I'm sure that when I started out, I thought I had it
all clear in my mind. But then as I worked through
my tests and examples, my "fixes" clouded the issue.
>
> 247:
> using result_base_type = typename boost::mpl::if_c<
> std::numeric_limits<t_base_type>::is_signed
> || std::numeric_limits<u_base_type>::is_signed,
> std::intmax_t,
> std::uintmax_t
> >::type;
> Why division doesn't use calculate_max_t like multiplication
> deserves some explanation. Division can overflow
> too (numeric_limits<uintmax_t>::max() / 1).
Hmmm - I'm sure it's because I've presumed that the result
can always be contained in the same size type as the number
being divided. That is, t / x <= t - and divide by zero is
checked separately. Am I wrong about this?
***
addressed by new rules - but numeric_limits<uintmax_t>::max() / 1
will still overflow, because uint / int -> int,
while numeric_limits<uintmax_t>::max() / 1u
won't, because uint / uint -> uint
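To make that concrete (a sketch):

    #include <cstdint>
    #include <limits>

    // Under the rule "unsigned / signed -> signed result" described above,
    // uintmax_t-max / 1 must be range checked: the exact quotient cannot be
    // represented in intmax_t.
    static_assert(
        std::numeric_limits<std::uintmax_t>::max()
        > static_cast<std::uintmax_t>(std::numeric_limits<std::intmax_t>::max()),
        "the quotient UINTMAX_MAX / 1 does not fit in intmax_t"
    );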
>
> 275:
> static constexpr divide(
> This function appears to be necessary, but is
> not documented as part of the PromotionPolicy
> concept. The same goes for modulus.
Damn - I forgot about this. It couples the operation
with the promotion policy. I spent an incredible
amount of time trying to avoid doing this. But in the
end I just figured out how to do it. So I had to put
these in.
??? - still investigating whether this can
be eliminated
>
> 328:
> template<typename T, typename U>
> struct left_shift_result
> Inconsistent with the documentation of PromotionPolicy
> which says: PP::left_shift_result<T>::type. Same
> for right_shift_result, and bitwise_result.
Right
*** fixed - improved left shift implementation
>
> 362:
> using type = typename boost::mpl::if_c<
> (sizeof(t_base_type) > sizeof(u_base_type)),
> t_base_type,
> u_base_type
> >::type;
> Is it actually correct to combine &, |, and ^
> in the same class? The result of & will always
> fit in the smaller type, but | and ^ require
> the larger type.
Right - I'll fix this
*** fixed - broke out results for &, |, and ^
>
> checked.hpp:
>
> 90:
> // INT32-C Ensure that operations on signed
> // integers do not overflow
> This is the unsigned case.
*** fixed
>
> 220:
> template<class R, class T, class U>
> constexpr checked_result<R> add(
> Consider the following (assuming 2's complement):
> checked::add<int>(INT_MIN, UINT_MAX)
> The result should be INT_MAX,
*** now it is on both automatic promotion policies
I'm not seeing this. With 8 bit ints we've got
INT_MIN = x80 = -128
UINT_MAX = x80 = 128
adding together x100
truncating = 0
which might be OK on current machines, but it does violate the
standard in that it is undefined behavior. Given that compiler optimizers
can do anything they want on UB, I don't think we can do this
> but your implementation
> will generate an error when converting UINT_MAX
> to int.
As I believe it should -
UINT_MAX = 0x80
convert to int - nothing changes
x80 interpreted as an int is -128
So this would transform an arithmetic value of 128 to
a value of -128. Not a good thing.
What I've done is apply the standard: convert each
type to the result type, then do the addition. If that
conversion changes the value - game over - invoke the
error.
> 227:
> if(! rt.no_exception() )
> The double negative makes this confusing to read.
*** fixed
>
> 230:
> return detail::add<R>(t, u);
> This relies on implicit conversion and may generate
> spurious warnings.
conversion - I don't see it. detail::add<R>(t, u)
returns a checked_result<R> which is returned as another
checked_result<R>. No conversion. Probably not even
a copy, with copy elision.
*** still not seeing a problem here
>
> 301:
> Everything I said about add also applies to subtract.
OK - if we can agree on add we'll agree on subtract.
>
> 411:
> exception_type::underflow_error,
> This should be overflow_error.
OK
>
> 467:
> Again as with add. This time the problematic case is
> multiply<unsigned>(-1, -1)
again - same response
>
> 539:
> return detail::divide<R>(tx.m_r, ux.m_r);
> Just because m_r is public, doesn't mean that
> you should access it directly.
OK
>
> 583:
> constexpr divide_automatic(
> The difference between divide and divide_automatic
> could use some explanation. It looks like the only
> difference is that divide_automatic doesn't cast
> the divisor, but it isn't clear why. Also, why
> do you need two checked divide functions in the
> first place? There should really just be one
> that always works correctly.
LOL - that's what I thought. I couldn't make one which worked
for both native and automatic promotions.
The case which is giving me fits is dividing INT_MIN / -1
INT_MIN = 0x80 = -128
-128 / -1 = +128, which wraps to 0x80 = INT_MIN again.
So we have INT_MIN / -1 -> INT_MIN, an incorrect result.
Now I forget - but this is only a problem with automatic
promotions.
??? need to check this again
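For reference, here is a plain sketch (not the library's divide or divide_automatic; names are illustrative) of the two cases a checked signed division has to trap:

    #include <limits>
    #include <stdexcept>

    // Illustrative sketch only - not the library's divide / divide_automatic.
    // Signed division has exactly two hazardous cases: division by zero, and
    // INT_MIN / -1, whose mathematical result (-INT_MIN) is not representable
    // and is undefined behavior for the builtin operator.
    template<class T>
    constexpr T checked_divide(T t, T u){
        return
            u == T(0)
            ? throw std::domain_error("division by zero")
            : (t == std::numeric_limits<T>::min() && u == T(-1))
                ? throw std::range_error("INT_MIN / -1 overflows")
                : t / u;
    }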
>
> 622:
> return cast<R>(abs(t) % abs(u));
> This likely differs from the builtin % and should be documented
> as such.
Hmmm - slightly sticky issue here. I relied on §5.6/4:
"If both operands are nonnegative then the remainder is nonnegative; if not, the sign of the remainder is implementation-defined."
This wording from ISO 14882:2003(E) is no longer present in ISO 14882:2011(E).
But even so, my implementation should have trapped on negative
operands.
I'll consider what to do about this.
> Also I prefer % to have the property that
> t / u * u + t % u = t, which this implementation
> does not satisfy.
I think this can be achieved
*** I think I've fixed this.
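For illustration (a sketch, illustrative names): a checked % built on the builtin operator keeps the identity above, since C++11 mandates truncation toward zero, and only two cases need trapping.

    #include <limits>
    #include <stdexcept>

    // Illustrative sketch only. C++11 mandates truncation toward zero, so the
    // builtin % already satisfies (t / u) * u + t % u == t. The only cases to
    // trap are u == 0 and INT_MIN % -1 (the latter is UB for the builtin
    // operator because the intermediate quotient overflows; mathematically the
    // remainder is 0).
    template<class T>
    constexpr T checked_modulus(T t, T u){
        return
            u == T(0)
            ? throw std::domain_error("modulus by zero")
            : (t == std::numeric_limits<T>::min() && u == T(-1))
                ? T(0)
                : t % u;
    }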
> 639:
> // INT34-C C++ standard paragraph 5.8
> if(u > std::numeric_limits<T>::digits){
> You're off-by-one
> "The behavior is undefined if the right operand
> is negative, or greater than *or equal to* the length in
> bits of the promoted left operand" (emphasis added)
>
> left_shift and right_shift (625-792) have several problems:
> - 640: You're checking the width of T even though the
> actual shift is done after casting to R.
> - 696, 708: These overloads serve no purpose. The
> only difference between them is checks that
> are already handled by check_shift.
> - 785: This test duplicates work done be check_shift
***
I've pretty much totally redone left and right shift last weekend.
I think they are more likely to be correct now.
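For reference, the count check Steven describes looks something like this (a sketch with illustrative names, written against R, the type the shift is actually performed in):

    #include <limits>
    #include <stdexcept>

    // Illustrative sketch only. The count must be non-negative and strictly
    // less than the width of the promoted left operand; the check is written
    // against R, the type the shift is actually performed in, not T.
    // numeric_limits<R>::digits excludes the sign bit, so for signed R the
    // bound is slightly conservative - which is the safe direction.
    template<class R, class T>
    constexpr R checked_left_shift(T t, int count){
        return
            (count < 0 || count >= std::numeric_limits<R>::digits)
            ? throw std::domain_error("invalid shift count")
            : static_cast<R>(static_cast<R>(t) << count);
    }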
>
> checked_result.hpp:
>
> 36: // can't select constructor based on the current status of another
>
> // checked_result object. So no copy constructor
> Saying that there is no copy constructor is misleading,
> as the default cctor exists and is used.
*** fixed
OK
> 67:
> //assert(no_exception());
> Why is this commented out? reading a
> not-currently-active member of a union
> is undefined behavior.
The assert conflicts with constexpr, so I can't use it.
When one tries to cast to an R at compile time, it's trapped
just fine by my clang compiler on my Mac.
The second case seems to pass because I never retrieve the
error message at compile time.
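For reference, the usual C++11-era workaround is to fold the check into the return expression (a sketch with illustrative names, not the library's checked_result):

    #include <stdexcept>

    // Illustrative sketch, not the library's checked_result: prior to C++14's
    // relaxed constexpr, the usual assert() substitute is a conditional whose
    // error branch throws. Reaching the throw during constant evaluation makes
    // the call a compile-time error; at runtime it throws an exception instead.
    template<class R>
    struct result_sketch {
        bool m_valid;
        R m_value;
        constexpr R value() const {
            return m_valid
                ? m_value
                : throw std::logic_error("checked_result holds an error, not a value");
        }
    };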
>
> 99:
> constexpr boost::logic::tribool
> operator<=(const checked_result<T> & t) const {
> return ! operator>(t) && ! operator<(t);
> }
> This is operator==. The correct way is just ! operator>(t).
OK
*** fixed
>
> cpp.hpp:
>
> 54:
> using rank =
> Rank is only used for comparisons. I don't see
> any reason to use instead of using sizeof directly.
> This applies to automatic::rank as well.
> (Note: rank matters for builtin integer promotion,
> because types with the same size can have different
> ranks. Your implementation of rank, however, is
> strictly based on size, making it essentially useless.)
Then I should probably change the implementation so
that types of different rank but the same size (e.g.
int and long on some machines) are handled correctly.
I don't think this is too hard to fix.
*** fixed
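Something along these lines, perhaps - a rank keyed on the type itself rather than on sizeof (a sketch; the values are arbitrary, only the ordering matters):

    #include <type_traits>

    // Illustrative sketch only: a rank keyed on the type itself rather than on
    // sizeof, mirroring the standard's conversion-rank ordering, so that
    // same-sized types (e.g. int and long on many targets) still get distinct
    // ranks. The values are arbitrary; only the ordering matters.
    template<class T> struct int_rank;
    template<> struct int_rank<signed char> : std::integral_constant<int, 1> {};
    template<> struct int_rank<short>       : std::integral_constant<int, 2> {};
    template<> struct int_rank<int>         : std::integral_constant<int, 3> {};
    template<> struct int_rank<long>        : std::integral_constant<int, 4> {};
    template<> struct int_rank<long long>   : std::integral_constant<int, 5> {};
    // ... unsigned types would share the rank of their signed counterparts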
>
> exception.hpp:
>
> 15:
> // contains operations for doing checked aritmetic on NATIVE
>
> // C++ types.
> Wrong file. Also s/aritmetic/arithmetic/ for
> wherever this was pasted from.
OK
*** fixed
>
> exception_policies.hpp:
>
> 1:
> #ifndef BOOST_NUMERIC_POLICIES_HPP
> #define BOOST_NUMERIC_POLICIES_HPP
> Make this BOOST_NUMERIC_EXCEPTION_POLICIES_HPP?
*** fixed
>
> interval.hpp:
>
> 18:
> #include <cstdlib> // quick_exit
> I don't see quick_exit anywhere in this file.
*** fixed
>
> 87:
> // account for the fact that for floats and doubles
> There's also long double, you know.
OK
*** fixed
>
> 101:
> namespace {
> Please don't use the unnamed namespace in headers.
OK - but I forgot why it's not recommended
*** fixed
>
> 133:
>
> template<typename R>
> constexpr bool less_than(
> const checked_result<R> & lhs,
> const checked_result<R> & rhs
> Why is this here and not in checked_result.hpp?
*** Removed
> Also this can be implemented easily using operator<:
> return lhs < rhs;
> (The implicit conversion from tribool to bool
> does exactly what you want here.)
OK - I'll take your word for it. Used lhs < rhs in a lambda.
***
LOL - lambdas can't be constexpr! So I just eliminated this and
used safe_compare directly. Things should be clearer now.
>
> 257:
> checked::modulus<R>(t.l, u.l),
> checked::modulus<R>(t.l, u.u),
> checked::modulus<R>(t.u, u.l),
> checked::modulus<R>(t.u, u.u)
> This appears to be copied from divide, but it's totally
> wrong. modulus is not monotonic, so you can't get
> away with checking only the boundaries.
*** I spent significant time looking into the mod operator
at all levels. I'm pretty convinced that the current implementation
(which is pretty similar to the old one, but simpler) is
indeed correct. It handles cases where the divisor
includes zero and correctly handles the sign.
> 339:
> const R rl = safe_compare::greater_than(t.l, u.l) ? t.l : u.l;
> The implicit cast to R may not be safe.
> (same at 340, 356, and 357)
*** replaced with checked cast.
> 359:
> if(safe_compare::greater_than(rl, ru)){
> This check isn't necessary. The union of two non-empty
> intervals can't be empty.
OK
*** fixed
>
> 453:
> template<>
> std::ostream & operator<<(std::ostream & os, const
> boost::numeric::interval<unsigned char> & i)
> - This specialization is not a template, and can
> only appear in one translation unit.
Please expand upon this
> - The implementation requires <ostream>, but you only #include <iosfwd>
Right - but any other code which invokes this function will already have
included <ostream>. That's why I included <iosfwd> rather than <ostream>,
since this operator is used almost exclusively for debugging/examples.
*** replaced ostream with basic_ostream
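For reference, the basic_ostream form might look like this (a sketch; it assumes the interval members l and u used elsewhere in this thread, and the include path is an assumption):

    #include <ostream>
    #include "interval.hpp"   // the header under review; include path is an assumption

    namespace boost { namespace numeric {

    // Sketch of the shape of the fix: keeping the operator a template sidesteps
    // the ODR problem (a full specialization is an ordinary function and, if
    // defined in a header, must be inline or confined to one translation unit).
    template<class CharT, class Traits>
    std::basic_ostream<CharT, Traits> & operator<<(
        std::basic_ostream<CharT, Traits> & os,
        const interval<unsigned char> & i
    ){
        return os << '[' << static_cast<unsigned>(i.l)
                  << ',' << static_cast<unsigned>(i.u) << ']';
    }

    } } // namespace boost::numeric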
> native.hpp:
>
> 27:
> // Standard C++ type promotion for expressions doesn't depend
> // on the operation being performed so we can just as well
> // use any operation to determine it. We choose + for this
> // purpose.
> Comment out-dated. The code (somewhat) uses decltype on the
> correct operators.
*** fixed
> safe_base.hpp:
>
> 240:
> safe_base operator++(){ // pre increment
> pre-increment and pre-decrement should return by reference.
> post-increment and post-decrement should return by value.
> You currently have everything returning by value
> except post-decrement (which returns a dangling
> reference)
Right
*** fixed
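For reference, the conventional signatures (an illustrative type, not safe_base itself):

    // Illustrative type, not safe_base itself - the conventional signatures:
    struct counter {
        int value;
        counter & operator++()    { ++value; return *this; }                   // pre-increment: by reference
        counter   operator++(int) { counter tmp(*this); ++value; return tmp; } // post-increment: by value
        counter & operator--()    { --value; return *this; }
        counter   operator--(int) { counter tmp(*this); --value; return tmp; }
    };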
>
> 258:
> constexpr safe_base operator-() const { // unary minus
> should unary minus allow unsigned to become signed? (subject to
> the PromotionPolicy, of course).
LOL - I struggled with this - switched a couple of times back
and forth. I decided to leave the type the same. That probably
conflicts with automatic promotion. This still needs thinking
about.
***
After much consideration, I've permitted the resulting value of a unary
- to change the type. The C++ standard does invoke integral promotions,
so it's changing the type as well.
If this is an unsigned type and the promotion policy is native,
the result will be unsigned. But then the operation will fail
according to the requirements of arithmetic correctness.
If this is an unsigned type and the promotion policy is automatic,
the result will be signed.
> 261:
> constexpr safe_base operator~() const {
> This will fail if Min/Max are anything other than
> the full range of the Stored type.
***
This never occurred to me. It's a common case. It really highlights the
fact that this operation is not really an arithmetic operation. It's
probably a good thing that safe numerics will trap in most cases.
>
> safe_base_operations.hpp:
>
> 82:
> indeterminate(t_interval < this_interval),
> This works, but it would be a bit more intuitive
> if interval had an 'intersects' function.
***
Hmmm - it has intersection. Would the following be better?
static_assert(
intersection<Stored>(t_interval, this_interval).exception(),
"safe type cannot be constructed with this type"
);
>
> 244:
> // Note: the following global operators will be only found via
> // argument dependent lookup. So they won't conflict any
> // other global operators for types in namespaces other than
> // boost::numeric
> Unfortunately, ADL casts a very wide net.
> What makes these operators (mostly) safe
> is the use of enable_if.
*** changed to:
// Note: the following global operators will be found via
// argument dependent lookup.
>
> 267:
> // Use base_value(T) ( which equals MIN ) to create a new interval. Same
> // for MAX. Now
> Now ... what?
>
> 288:
> constexpr static const interval<result_base_type> type_interval =
> exception_possible() ?
> interval<result_base_type>{}
> This can overestimate the result interval if the
> interval only overflows on one side.
???
LOL - I never thought about that refinement. I'll have to think about this. I'm thinking that this might well apply to most of the operations.
>
> 313:
> using type_helper = typename boost::mpl::if_c<
> std::numeric_limits<result_base_type>::is_integer,
> safe_type,
> unsafe_type
> >::type;
> This is what? Support for floating point in the future?
???
I think I put this in because of the possibility of
safe<int> x = 10;
float y = 10.0;
auto z = x + y;
so z would be a float.
Of course now I don't actually remember. As I write this I don't feel
confident that I actually know what the above will do. Though I'm sure
I had an idea at one time. I'll take another look.
>
> 531:
> // when we add the temporary intervals above, we'll get a new
> interval
> // with the correct range for the sum !
> I think you mean multiply and product, not add and sum.
>
***
of course
> 676:
> constexpr static const interval<result_base_type> type_interval =
> exception_possible() ?
> At this point, you lose any benefit of divide_nz over divide.
> You should check r_interval.exception() instead of exception_possible().
> The same goes for modulus.
???
Hmmm - still looking at this.
>
> 748:
> // argument dependent lookup should guarantee that we only get here
> I don't understand this comment.
*** removed
LOL - that makes two of us.
>
> 867:
> static_cast<result_base_type>(base_value(t))
>
> % static_cast<result_base_type>(base_value(u))
> Okay, we have a problem here: checked::modulus does /not/
> have the same behavior as the builtin % when the divisor
> is negative, which means that treating them as interchangeable
> here will result in weird and inconsistent behavior (not to
> mention silently producing values outside the expected interval.)
*** fixed - I think
>
> 907:
> typename boost::lazy_enable_if_c<
> ...,
> boost::mpl::identity<bool>
> enable_if_c with bool should work just as well.
*** fixed
>
> 932:
> // if the ranges don't overlap
> (! boost::logic::indeterminate(r)) ?
> // values in those ranges can't be equal
> false
> This is operator<, not operator==. You should return r, not false here.
> Same at 970 in operator>.
*** fixed
> Also, you're already implementing
> operator<= and >= in terms of < and >. Is there a reason
> to implement > separately? (t > u) == (u < t)
*** fixed
>
> 1221:
> // handle safe<T> << int, int << safe<U>, safe<T> << safe<U>
> // exclude std::ostream << ...
> Copy/paste. Should be >> and istream.
*** fixed
>
> 1276:
>
> base_value(std::numeric_limits<T>::max())
> This doesn't take the input range into account
??? I think it does
> can drastically overestimate the result range.
??? I don't think so - unless one is referring to the Min value
>
> 1401:
> using bwr = bitwise_or_result<T, U>;
> I think it would be slightly better to create an alias
> template<class T, class U>
> using bitwise_xor_result = bitwise_or_result<T, U>;
> instead of using bitwise_or_result directly in
> the implementation of xor. (Note that the range
> of ^ can be larger than that of | if the lower bound
> of the parameters is greater than 0.)
>
> 1432, 1474:
> class P, // promotion polic
> s/polic/policy/
>
> 1435:
> std::ostream & operator<<(
> std::ostream & os,
> Should this use std::basic_ostream<CharT, Traits>?
I think so too.
*** done
>
> safe_common.hpp:
>
> safe_compare.hpp:
>
> safe_integer.hpp:
>
> safe_literal.hpp:
>
> 111:
> constexpr safe_literal_impl operator-() const { // unary minus
> Err, what? This will work iff. N is 0.
that's out
>
> 140:
> #define safe_literal(n) \
> This is not acceptable. It totally violates
> the naming rules for macros.
I'm still struggling with this. What do you suggest?
>
> safe_range.hpp:
>
> utility.hpp:
>
> 26:
> constexpr log(T x){
> A function with a common name, that is not in a private
> namespace, and can match almost anything is a really
> bad idea.
Hmmm - I could change this to something like bit_count.
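Something like this, perhaps (a sketch - whether these are exactly the semantics of the current log() would need checking):

    #include <cstdint>

    // Illustrative sketch: the number of bits needed to represent x, i.e.
    // floor(log2(x)) + 1, with bit_count(0) == 0.
    constexpr unsigned bit_count(std::uintmax_t x){
        return x == 0 ? 0u : 1u + bit_count(x >> 1);
    }

    static_assert(bit_count(1) == 1, "");
    static_assert(bit_count(255) == 8, "");
    static_assert(bit_count(256) == 9, "");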
>
> concept/exception_policy.hpp:
>
> The reason that the check is commented out deserves
> explanation. (Missing functions are acceptable and
> will cause a compile-time error instead of checking
> at runtime.)
>
> concept/*.hpp: no comments
>
> In Christ,
> Steven Watanabe
Misc:
The cpp / automatic policies don't address floating point.

a) made trap_exception work
b) updated manual and examples to show how to use the library to eliminate the runtime penalty
c) added in safe_literal
d) made corrections of various types

Passes all tests.
Adds documentation of library internals.
Implements trap_exception for a compile-time guarantee of program correctness.
Still needs an update to support the above for operations in addition to + and -.
This led to numerous small changes - which of course weren't so small.
Added static asserts for some range constructions / assignments.
Passing all tests.