tech-userlevel archive


Re: detecting integer over/underflow

On Sun, 4 Mar 2012 14:37:31 -0500 (Christos Zoulas) wrote:

> Currently it is not easy to detect if a value will fit into a type
> so I wrote the following macros to simplify this.

I like this idea very much.  FreeTDS, for example, is full of ints and
shorts that get compared to pointer differences and strlen().  I've
never been comfortable either writing the casts or ignoring the
warnings.

> Under normal operation nothing changes, i.e. the compiler ends up
> generating no different code than before, since the _DIAGASSERT()
> turns into nothing. 

I have a heretical suggestion: *do* change what happens under normal
operation.

If there actually is a real problem, the ramifications will be
unpredictable and almost untraceable.  Isn't it better, in the default
case, to fail in some noisy way, particularly in the "can't happen"
case?

Someone will doubtless mention efficiency.  Yet we all know that
optimization defies instinct, requires measurement.  Why then assume
that the overhead of verification is less important than the assurance
of verification?  

To the person who says "my code can't stop", I have two suggestions.
Second best is to log the problem as a warning.  Best, obviously, is to
(re)write the code correctly.  Use size_t instead of int.  

I understand that takes time and is sometimes impossible.  I suffer the
same constraints.  It might mean an ungodly amount of work or an ABI
change or even an RFC.   Still, code that really, really can't stop
also can't afford to have its lengths truncated.  

Efficiency and necessity aren't reasons to sweep errors under the rug
and hope for the best.  This is a perfect opportunity to favor
correctness over expedience.  

Humbly submitted, 

