[Cryptography] GCC bug 30475 (was Re: bounded pointers in C)

Bear bear at sonic.net
Fri Apr 25 22:58:15 EDT 2014


On Fri, 2014-04-25 at 08:31 +0200, Stephan Neuhaus wrote:

> And what do you do if al and be are of type ptrdiff_t, for example?  The
> only thing I can think of would be something like
> 
> #include <stdint.h>
> 
> #if sizeof(ptrdiff_t) == 4
> #  define PTRDIFF_T_MAX INT32_MAX
> #elif sizeof(ptrdiff_t) == 8
> #  define PTRDIFF_T_MAX INT64_MAX
> #else
> #  error Your ptrdiff_t has a weird size
> #endif
> 
> I look at this code and am uneasy. It isn't portable, since int32_t and
> int64_t are optional, and hence, so are the corresponding _MAX macros.
> Is it guaranteed to work? I'm sure I could find out from reading the
> standard, but it's not intuitive to me.

This is one of the reasons I usually write to the C++11 standard
and tell the compiler to be absolutely pedantic about it.  In C++11
you can count on int32_t and int64_t in practice (any implementation
with 32- and 64-bit two's-complement integer types is required to
provide them), and the code is portable to anything that supports
C++11.
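
For what it's worth, the ptrdiff_t case doesn't even need the
width-guessing dance.  Both C99/C11 and C++11 require PTRDIFF_MAX
in <stdint.h>/<cstdint>, and C++ additionally guarantees a
numeric_limits specialization for ptrdiff_t.  A minimal C++11
sketch (the names are mine, purely illustrative):

    #include <cstddef>
    #include <cstdint>
    #include <limits>

    // std::numeric_limits is specialized for every arithmetic type,
    // so the maximum of std::ptrdiff_t is available without guessing
    // its width.
    constexpr std::ptrdiff_t kPtrdiffMax =
        std::numeric_limits<std::ptrdiff_t>::max();

    // <cstdint> also provides PTRDIFF_MAX directly.  (The quoted
    // "#if sizeof(ptrdiff_t) == 4" idiom can't work in any case:
    // sizeof is not evaluated by the preprocessor.)
    static_assert(kPtrdiffMax == PTRDIFF_MAX,
                  "the two sources of the limit must agree");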

> > What you can't do is check for overflow *AFTER* the operation that 
> > might commit an overflow.  The instant you actually perform an 
> > operation that might commit an overflow, the compiler is in fact 
> > free to create any old evil code it wants including ignoring your 
> > subsequent check.
> 
> If I read correctly, the compiler may even create evil code for stuff
> that came textually before the overflowing computation, provided that
> these preceding computations don't interfere with the overflow-producing
> one and can thus be legitimately reordered.

You are correct.  Once your code is beyond a sequence point where 
it is committed to perform some 'undefined' operation, whether it 
has reached that operation or not, its semantics are undefined and 
you have invoked nasal demons.  

The result, however, is the same; the correct response is to ensure
that your code NEVER commits itself to performing an 'undefined'
operation, insofar as you can identify all such operations.

The fact that undefined or "just plain wrong" behavior can start
even *before* the operation that renders your code's semantics
meaningless has actually been executed is counterintuitive and
annoying, but true.
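
As a concrete illustration (my own sketch, not code from the bug
report): the first function below commits the overflow and only
then tests for it, so the compiler may rewrite the test under the
assumption that no overflow happened; the second decides whether
the addition *would* overflow using only well-defined comparisons.

    #include <climits>

    // BROKEN: the addition itself is the undefined operation.  Since
    // signed overflow is undefined, the compiler may transform the
    // test assuming the sum did not wrap (e.g. fold it to "b < 0"),
    // so it no longer detects anything.
    bool overflowed_after_the_fact(int a, int b)
    {
        int sum = a + b;     // undefined behaviour if this overflows
        return sum < a;
    }

    // OK: decide whether the addition would overflow *before* doing
    // it, using comparisons that can never themselves overflow.
    bool addition_would_overflow(int a, int b)
    {
        if (b > 0)  return a > INT_MAX - b;
        if (b < 0)  return a < INT_MIN - b;
        return false;
    }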

> And while I'm sure that you're technically correct, I still think that
> the compiler writers should not read the standard as giving them license
> to do literally anything (without a warning, which was the original
> point of the bug report), but should instead try to preserve the
> intention of the programmer (with a warning).

I think you're absolutely right about this.  I used to assume
2's-complement semantics, because that was what every compiler in
the world did and I hadn't actually studied the standard yet, until
somebody quit doing it and I got bitten by this very same bug.

I'm mostly annoyed with the standard writers for not specifying 2's 
complement signed overflow semantics (this is something Java got
right). I'm only a little bit annoyed with the compiler writers 
though; that which is not in the standard, however much it should 
be, is not something we can *rely* on from any other compiler 
either, so it behooves us to fix the code anyway.  Still, failing
to support that widespread assumption broke a lot of legacy code, or
made it silently skip checks, over something that, by rights,
*shouldn't* even have been a bug in the first place if the standards
people had gotten it right.

After reading the standard and the documentation, I wound up putting
-fstrict-overflow -Wstrict-overflow=4 in my compile options (telling
the optimizer it may assume signed arithmetic never overflows, and
asking for a warning on any code it simplifies based on that
assumption) in order to make absolutely sure that I never again write
code that fails to notice signed overflow or relies on it having some
particular semantics.  That goes along with various conversion
warnings that alert me to conversions between signed and unsigned
types.  The litany of safety options and warnings grows long.
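
For concreteness, this is the sort of code those warnings are aimed
at; compiled with optimization, GCC typically reports that it is
"assuming signed overflow does not occur" when it simplifies the
comparison (my sketch; the exact wording and the level at which each
warning fires vary between GCC versions):

    // Built with something like:
    //   g++ -std=c++11 -O2 -fstrict-overflow -Wstrict-overflow=4 -c demo.cpp
    bool counter_wrapped(int len)
    {
        // Under strict-overflow rules len + 1 can never be less than
        // len, so the compiler folds this to false and warns about it.
        return len + 1 < len;
    }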

			Bear





