[Cryptography] letter versus spirit of the law ... Eventus incertus delendus est

John Denker jsd at av8n.com
Mon Oct 26 07:54:04 EDT 2015


On 10/25/2015 02:13 PM, D. Hugh Redelmeier wrote:

> If I were redesigning C, I would make it the compiler's job to ensure
> that a mathematically correct value were computed from each
> expression.

That makes sense.

> Crashing is generally a better policy than continuing with a wrong
> answer.  This forces bugs to be noticed and to be fixed. 

That's overstated.  We agree that /some/ types of strict
checking are good.  Bugs should be found and fixed.
However, "crashing" is not the right word.  Crashing is 
not "generally" good policy.

In the aerospace field, crashing is generally considered 
a Bad Thing.  Consider for example the Apollo 11 moon 
landing.  During the descent, the LM guidance computer 
started throwing "1201" and "1202" program alarms.  However,
steely-eyed mission controllers decided, correctly, that 
the alarms could be ignored.  The program did not crash,
and the spacecraft did not crash.

Note that the pilots did not screw up.  They followed the
checklist, but the checklist was wrong.  The simulations
during training were not sufficiently faithful to catch
the error.  The simulations were, however, good enough
to train the controllers to not over-react. 

The program was fault-tolerant;  it performed brilliantly
in the face of "impossible" inputs.


Here's a contrasting example from 2200 years ago:
  Chen Sheng:  "What's the penalty for being late?"
  Wu Guang:    "Death."
  Chen Sheng:  "What's the penalty for rebellion?"
  Wu Guang:    "Death."
  Chen Sheng:  "Well, guess what:  We're already late."

So began the Dazexiang Uprising, leading eventually to
the fall of the Qin dynasty.

Let's be clear:  All the extremes are wrong:
 -- Being overly tolerant of errors is wrong.  Ignoring 
  errors, especially during training, is bad policy.  
 -- Responding overly harshly to errors is also wrong.

References:
  https://www.hq.nasa.gov/alsj/a11/a11.1201-fm.html
  http://www.rulit.me/books/failure-is-not-an-option-mission-control-from-mercury-to-apollo-13-and-beyond-read-239767-78.html
  http://www.airspacemag.com/daily-planet/troubleshooting-101-1201-actually-and-1202-too-111339271/?no-ist

  http://english.cri.cn/1702/2005-4-29/14@232828.htm


>  (Throwing
> an exception instead of crashing would be OK except that programmers
> would start to write code that intended to generate exceptions.)

Again I say:  All the extremes are wrong.
 -- Sure, exceptions can be abused.  Any tool can be abused.
 -- Sometimes exceptions are a perfectly reasonable technique.

Example:  Suppose that N layers deep in a library, the
file-open routine throws a file-not-found exception.  A
much higher level knows that the lack of a configuration
file is harmless, indeed routine, and continues using 
the default configuration.  Using exceptions for things
like this is easier and generally more reliable than 
checking return status codes, layer upon layer upon layer.
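To make that concrete, here is a minimal C++ sketch.  The
names, the config-file path, and the two-layer structure are
illustrative only, not from any particular library:

  #include <fstream>
  #include <iterator>
  #include <stdexcept>
  #include <string>

  // Deep inside the library: open the file or throw.  The layers
  // in between need no error-handling code at all; the exception
  // propagates right past them.
  std::string read_file(const std::string& path) {
      std::ifstream in(path);
      if (!in) throw std::runtime_error("cannot open: " + path);
      return std::string(std::istreambuf_iterator<char>(in),
                         std::istreambuf_iterator<char>());
  }

  // Many layers up: this caller knows a missing config file is
  // routine, and falls back to the default configuration.
  std::string load_config() {
      try {
          return read_file("/etc/myapp.conf");   // hypothetical path
      } catch (const std::runtime_error&) {
          return "default-config";               // harmless; carry on
      }
  }

Compare that to threading a status code through every
intermediate layer by hand, where forgetting a single check
silently loses the error.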



------------------
On 10/22/2015 02:32 AM, John Gilmore wrote:

>> I fixed the program rather than bitching about the compiler. 

So the advice is:  We should all write perfect code.

Here's some better advice:  Never put yourself in a 
position where the first misstep is fatal.

Any real-world high-reliability system is designed around
the premise that things will go wrong.  
 -- Pilots will screw up.
 -- Air traffic controllers will screw up.
 -- The guys who write the checklists will screw up.
 -- A large-enough number of geese will destroy an engine.
 -- et cetera.

Reliability comes from having layer upon layer of safety
margin, so that you can recover from one or two errors,
and usually even from three or four.

Recovering from an error does *not* mean ignoring the error.
Just the opposite, really, since ignoring errors allows
them to pile up, leading to an unrecoverable situation.
Good policy is to recover from the error and learn from 
it:  Track down the root cause and fix things so it never
happens again.


Some people would argue that crypto is different: "We
need to be super-strict.  We need every misstep to result
in a crash."  I disagree.  I say that crypto is the servant,
not the master.  Security is supposed to serve the overall
mission, by increasing reliability -- not by reducing it.

Obviously a compile-time error is infinitely preferable
to a run-time error.
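For instance (a hedged illustration, not from the original
post), C++'s static_assert -- like C11's _Static_assert --
moves a whole class of checks from run time to compile time:

  #include <array>
  #include <cstdint>

  // Refuse, at compile time, to accept a key buffer of the wrong
  // size.  Passing a 16-byte array is a build error, not a
  // run-time surprise.
  template <std::size_t N>
  void load_key(const std::array<std::uint8_t, N>& key) {
      static_assert(N == 32, "load_key requires a 256-bit key");
      // ... use key ...
      (void)key;
  }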

>> The standard C language has a cure for [zeroization]
>> too, the "volatile" declaration.  I have the same advice about
>> getting crypto code from people who are unwilling to type that extra
>> word.

The "volatile" keyword is a half-step in the right
general direction, but it is not a "cure".  Not even
close.  The "volatile" keyword protects the memory
location, but it doesn't protect the /value/, and it's
the value that we care about.  The value (along with
various intermediate results) can be held in registers.
The registers can be spilled into memory in lots of
different ways.  The C spec imagines an abstract machine
where all calculations are done memory-to-memory, but
there is no reason to expect that the actual hardware
works that way.  
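To make the distinction concrete, here is a sketch of the
usual volatile-based zeroizer.  Writing through a
volatile-qualified pointer keeps the stores from being
deleted as dead, but that is all it does:

  #include <stddef.h>
  #include <stdint.h>

  /* Classic workaround: the volatile-qualified pointer means the
     compiler cannot prove these stores are dead, so they survive. */
  void zeroize(void *buf, size_t len) {
      volatile uint8_t *p = (volatile uint8_t *)buf;
      while (len--) *p++ = 0;
  }
  /* This scrubs the memory LOCATION.  It says nothing about the
     VALUE: copies in registers, register spills, or compiler
     temporaries are untouched. */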

I wouldn't want to get crypto code from people who
think that "volatile" is a "cure".

Let's get some perspective:  any discussion of zeroization
that stays within the C abstract machine is oxymoronic to
begin with.  A cold-boot attack lies entirely outside that
machine; in spec terms it is pure UB.  To say the same
thing the other way, if
the real world conformed to the abstract machine imagined
by the C spec, cold-boot attacks would be impossible, and
there would never be any need to zeroize anything.  This 
explains why attempts to write reliable zeroizers within 
the C99 spec have always failed, and will always fail.
You're trying to solve a problem that the spec assumes
does not exist.  No wonder the code gets optimized out.
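For the record, here is the sort of zeroizer that gets
optimized out.  Under the as-if rule the final memset is a
dead store; the optional memset_s of C11 Annex K exists
precisely because mainstream compilers delete code like this:

  #include <string.h>

  void do_crypto(void) {
      char key[32];
      /* ... fill key, use key ... */
      memset(key, 0, sizeof key);   /* Dead store: key is never read
                                       again, so within the abstract
                                       machine this call has no effect,
                                       and the compiler may remove it. */
  }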

Possibly constructive suggestion:  Rather than relying on 
"volatile", we need a different language extension, perhaps
"secret", that applies to the /value/ not just to the memory
location.  Implementing this would require cooperation from
the compiler, operating system, and hardware.  For example,
it might well require a system call to expunge copies of the 
value that might be held in the Task State Segment, various
coprocessors, et cetera.
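To make the suggestion concrete, here is a purely
hypothetical sketch -- no compiler implements anything like
this today, and the syntax is invented for illustration:

  /* HYPOTHETICAL -- 'secret' does not exist in any C compiler.
     It would taint the VALUE, not the location: every register,
     spill slot, and temporary the value flows through is tracked. */
  secret uint8_t key[32];

  derive_key(key, passphrase);   /* intermediates inherit 'secret' */
  encrypt(buf, len, key);
  /* At end of life, compiler + OS + hardware cooperate to expunge
     every copy: registers, spills, coprocessor state, and so on. */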

--------------
>  Intel 8086
> even had an instruction "INTO" (Interrupt if overflow flag set) that
> apparently nobody used 

That's the wrong solution anyway.  The 8086 was designed
in the 1970s.  The 1970s have been over for a while now.
Transistors are cheap nowadays, and parallelism is king.
The overflow check should be done in hardware, in parallel
with the arithmetic, without requiring a separate opcode
fetch.

>  if unsigned and signed arithmetic were supported,
> the architecture needed distinct signed and unsigned instructions.  On
> the /360: A for signed add, AL for unsigned.  The PDP-11 designers
> decided that since the values (in twos complement) were the same, the
> distinction would be left in the condition code, for subsequent
> instructions to interpret.  Every machine since the PDP-11 seems to
> have copied this.

To say it another way, it's a vicious circle:  Trapping
on integer overflow had a nonzero cost on the PDP-11, so
the C language didn't specify it, so programmers have no
way of asking for it, so there's no incentive for any
hardware to support it ... for the rest of eternity.

I say that's no excuse.  It should be added to the list of 
obvious stupidities that have gone unaddressed for far too 
long.

  Note that it doesn't have to be that way.  By way of 
  analogy: Lots of PDP-11s shipped with no hardware floating
  point, but the C language specifies floating point anyway.
  A decent language should implement things that people 
  need, even if the hardware imposes some nonzero cost.

Getting rid of the overflow check is an optimization.  Again
I am reminded of Knuth's dictum: 
  Premature optimization is the root of all evil.


----------------------------
Consider the phrase WYTM (What's Your Threat Model?).
That is often used in this forum, as it should be.

The same concept applies to optimizing compilers:  What's
Your Objective?  Hint: optimizing for speed is not the 
only possible objective.

It would be nice to have a compiler that optimizes for
reliability.  That includes things like trapping integer
overflow, which is vastly easier for the compiler than
for the ordinary programmer ... since it is usually 
supported at least partially in hardware, whereas it
is a nightmare to do it by hand, i.e. by performing
arithmetical checks on the arguments before each 
operation.
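To illustrate the asymmetry, here is a sketch assuming GCC
or Clang, whose type-generic __builtin_add_overflow compiles
down to the add plus a branch on the hardware overflow flag:

  #include <limits.h>
  #include <stdbool.h>

  /* By hand: the operands must be vetted BEFORE the add, because
     the overflowing add itself is UB.  Easy to get wrong, and done
     entirely in software. */
  bool add_checked_by_hand(int a, int b, int *sum) {
      if ((b > 0 && a > INT_MAX - b) ||
          (b < 0 && a < INT_MIN - b))
          return false;                    /* would overflow */
      *sum = a + b;
      return true;
  }

  /* With compiler help: one line, and the hardware does the check
     in parallel with the arithmetic. */
  bool add_checked_by_compiler(int a, int b, int *sum) {
      return !__builtin_add_overflow(a, b, sum);
  }

GCC and Clang will also trap or diagnose every signed
overflow in a whole program via -ftrapv or
-fsanitize=signed-integer-overflow, which is exactly the
sort of knob a reliability-oriented compiler would turn on
by default.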

On 10/25/2015 06:02 PM, Mansour Moufid wrote:
> Check out Clight and CompCert:
> 
> http://pauillac.inria.fr/~xleroy/publi/Clight.pdf
> http://compcert.inria.fr/
> 
> I compile large projects like Tor using CompCert with a simple
> "./configure CC=ccomp" and the results are just as fast.


At the opposite extreme, some compiler-writers claim 
that the spec says that any misstep, however small, 
gives them a license to kill.  To them I say Wow, that
is amazingly arrogant.  You are not a team player.  You 
have not the slightest understanding of reliability.
You should not be allowed to come anywhere near a 
critical application.

I went into science because I didn't want to go to 
law school.  Sure, people should read and follow the 
instructions ... but they shouldn't be required to 
read the spec with lawyer's eyes, looking for some 
asinine legalistic pharisaical pettifogging chicanery
that tramples on the spirit of the law and tramples 
on longstanding tradition.

The rule should be, if you encounter a situation that
is not covered by the spec, or encounter an out-and-out
mistake, do something that makes the situation better
... not worse.

See also https://en.wikipedia.org/wiki/Principle_of_least_astonishment

Eventus incertus delendus est.  (The uncertain outcome must be destroyed.)


