[Cryptography] GCC bug 30475 (was Re: bounded pointers in C)

D. Hugh Redelmeier hugh at mimosa.com
Tue Apr 22 00:51:33 EDT 2014


| From: Arnold Reinhold <agr at me.com>

| The events described in the above-linked Bugzilla thread regarding
| Bug 30475, as I read it, are that the GCC team was informed that the
| GCC compiler in its common mode of operation is, without any warning,
| removing safety checks that have been inserted in a wide variety of
| existing programs; that the safety checks were inserted by competent
| programmers who were unaware of any potential problem with their use;
| that the safety checks, if left in place, would be functional and
| could avert serious security lapses; and that it is not feasible to
| find all the instances of these checks and apply the proposed
| workarounds in a reasonable amount of time and effort.

I don't like the way the C language handles overflow with signed ints.
I think signed overflow should, by default, cause a trap.

But the problem actually lies with the success of the PDP-11.  That
machine uses a pun: signed and unsigned addition of two's-complement
numbers produce exactly the same result bits from the same operand
bits.  The only difference is how to interpret the signedness and
overflow of the result.  So the PDP-11 had a single add instruction.
It set a lot of condition bits: the program could test the bits that
were relevant to the intended representation.  But the computer could
not trap on overflow because it didn't know whether an overflow had
occurred.
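
Here is a minimal C sketch of the pun (the program and its names are
mine, not from the bug thread).  On a two's-complement machine both
additions produce the same result bits; only the interpretation
differs:

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t a = -5, b = 3;

        /* Reinterpret the same operand bits as unsigned. */
        uint32_t ua = (uint32_t)a, ub = (uint32_t)b;

        uint32_t usum = ua + ub;  /* unsigned add: wraps mod 2^32     */
        int32_t  ssum = a + b;    /* signed add: -2, no overflow here */

        /* Both lines print 0xfffffffe: one adder serves both
           interpretations; only the overflow rule differs.   */
        printf("unsigned: 0x%08" PRIx32 "\n", usum);
        printf("signed:   0x%08" PRIx32 "\n", (uint32_t)ssum);
        return 0;
    }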

Contrast this with the IBM/360.  It too used two's complement.  But it
had distinct signed and unsigned add operations (it called the
unsigned operations "Logical").  It could generate a trap on overflow.

Almost all important machines after the PDP-11 copied it in this 
respect (and much else).

C is a close-to-the-metal language.  The designers didn't wish to
impose inefficiency on programs.  Furthermore, the C committee tried
to be as hardware-agnostic as it could afford.  So the committee
decided early on that the result of signed int overflow was
undefined.  This allowed, but didn't require, trap-on-overflow.

The assertion
	assert(a + 100 > a);
is nonsense: it assumes a definition of arithmetic overflow that does
not apply.  Because signed overflow is undefined, the compiler may
assume a + 100 never wraps, so the test is always true.  I can see
that as a human.  But compilers often see tests that are redundant,
and this one just looks redundant.

I like my compilers to warn me when I write nonsense.  But like
everyone else, I get on my high horse when there are false positives.
Intentionally redundant code is hard to separate from redundant code
that is a mistake.

I write a lot of assertions that I hope are redundant.  I love it when the 
compiler can make them free!

I know enough to not write overflow tests that create overflows.  I
write them to prevent overflows.
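
For instance (a sketch of the idiom in my own words, not code from
the bug report): compare against the limit before adding, so the
test itself can never overflow:

    #include <assert.h>
    #include <limits.h>

    /* Overflow guard with no undefined behaviour: test against
       INT_MAX *before* the addition happens.                   */
    int add_100_checked(int a)
    {
        assert(a <= INT_MAX - 100);  /* this test cannot overflow */
        return a + 100;
    }

A compiler has to keep that assertion (unless it can prove it true),
because no undefined behaviour is involved.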

The first careful work on this issue that I was aware of was by David
Wortman.  It was published 35 years ago:

D. B. Wortman, “On Legality Assertions in Euclid”, IEEE Transactions
on Software Engineering, Vol. SE-5, No. 4, July 1979, pp. 359-367.

I imagine that numerical analysts dealt with these issues earlier.  I can 
imagine William Kahan railing about this kind of thing.  The problems in 
floating point are much more, uh, interesting.

In any case, I think that there is a much more surprising (even to me) 
example of GCC getting rid of a test.  It can eliminate a null pointer 
test if it is preceded by a dereference of the pointer.  This is well 
known now:

<http://lwn.net/Articles/342330/>
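
A schematic version of the pattern from that article (the names here
are invented for illustration):

    struct tun_like { int flags; };

    int get_flags(struct tun_like *t)
    {
        int flags = t->flags;  /* dereference first: from here on the
                                  compiler may assume t is non-null   */
        if (t == NULL)         /* ...so GCC can delete this dead test */
            return -1;
        return flags;
    }

GCC's -fno-delete-null-pointer-checks turns the optimization off; as
I understand it, the Linux kernel has built with that flag since the
episode described in the article.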

I am very happy that systems I run trap null dereferences without help
from the compiler.  I like this so much that I once hacked circuits
and the OS on my computer to get this feature.

