[Cryptography] portable, reliable code ... Eventus incertus delendus est

John Denker jsd at av8n.com
Wed Oct 28 00:49:41 EDT 2015


On 10/25/2015 11:46 AM, Ray Dillinger wrote:

> [...] writing code that's acceptable to gcc is important for two
> reasons.  First, if you don't have a particularly "adversarial"
> compiler, you will never find the problems in your code where a
> silly or utterly useless interpretation of what you wrote, is
> allowed by the standard.  gcc specializes in finding the silliest
> or most utterly useless permissible interpretation of your code,
> so making it your usual compiler makes that your testing baseline.
> Gcc is a sort of worst-case nightmare scenario of the language
> standard.  Code that works on gcc will need very little maintenance
> in the future, because later language standards are likely to make
> a difference only to code that the current standard allows silly
> or useless interpretations of - which you can't write if gcc is
> one of your regular test compilers - and no other compiler
> working within the current standard will ever find more excuses
> to produce silly or useless executables.
> 
> Second because when you release code to clients, they're
> going to use "whatever" compiler

We agree that it is important to recognize the distinction between
development and deployment.
 -- During development, we want to maximize the visibility of
  any bugs.
 -- When the product is deployed, we want to minimize the
  visibility of whatever bugs remain.

On the other hand, I don't agree that compiling with gcc is
"important" as a means of reaching either goal.  As it stands,
we cannot rely on it to conceal bugs ... but neither can
we rely on it to make bugs manifest.  It does silly things 
/sometimes/, when it feels like it.  In other words:  It's
not reliably friendly, but it's also not reliably adversarial.

We need a different approach entirely.  During development, we
need a compiler that provides maximally accurate and informative
warnings.  By "accurate" I mean that it should not generate 
false negatives *or* false positives.

Also, generating silly executable code is never the correct 
way to warn the user.  During development, the compiler
should emit clear, informative warnings.  If it can't 
generate good defensive code, it should generate no code
at all, and throw an error.

On 10/27/2015 03:36 AM, Watson Ladd wrote:

> So go use Java, which defines 2's complement semantics for integer
> overflow.

I'm not convinced that Java is the solution.  If you want 2's
complement wraparound behavior, you can get it in C by using
unsigned ints, whose arithmetic is defined to wrap modulo 2^N.

Meanwhile ... I'm not convinced that 2's complement is the solution
to all the world's problems.  The world is a complicated place.
For example, there are lots of GPUs where overflow results in 
/saturation/ at MAX_INT or MIN_INT.  And yes, some folks use
GPUs as crypto accelerators.

More generally, I can imagine a /base language/ that leaves open
various details such as sizeof(int), overflow behavior, etc...
in combination with one or more /dialects/ that specify all the 
details.  The key here is that the base language should specify 
that every compiler MUST implement a specific dialect, so that 
when the rubber meets the road, nothing is left unspecified.

I can also imagine a language that is much more expressive, allowing
the user to specify that variable XXX should exhibit 2's complement
behavior, variable YYY should exhibit trichotomy and monotonic 
behavior, variable ZZZ should throw an exception on overflow, etc.

There are lots of properties that one might like to have, but
it's impossible to have them all at once.  
 -- The associative property is compatible with 2's complement 
  but not with saturation.
 -- The weak monotonic property is compatible with saturation
  but not with 2's complement.
 -- The trichotomy property is not compatible with floating-point NaN.
 -- et cetera.

Eventus incertus delendus est.
