[Cryptography] Formal Verification (was Re: Trust & randomness in computer systems)

Henry Baker hbaker1 at pipeline.com
Fri Mar 18 15:50:04 EDT 2016


At 11:45 AM 3/18/2016, Ray Dillinger wrote:
>From: Ray Dillinger <bear at sonic.net>
>
>On 03/18/2016 12:55 AM, Viktor Dukhovni wrote:
>> 
>>> On Mar 18, 2016, at 2:51 AM, Ron Garret <ron at flownet.com> wrote:
>>>
>>>> Formal verification is not a panacea
>>>
>>> Indeed not.
>>>
>>> http://spinroot.com/spin/Doc/rax.pdf
>>>
>>> TL;DR: In the 1990s we formally verified a portion of the flight software for an unmanned spacecraft.  Despite this, and despite extensive ground testing, the software nonetheless failed in flight.  (Happily, the failure was not catastrophic.)
>>>
>>> There's no getting around the GIGO problem.
>> 
>> I'm confused.  Are you claiming that formal methods failed here?
>> On the face of it, they were quite successful...
>
>If the system can fail because of something that was not
>formally verified, then even if the formal verification
>was perfectly implemented and proved perfect compliance
>with the specification...
>
>It does not matter because the specification itself
>contained a bug.
>
>This is the real issue with formal verification.  You
>can prove that software complies with a specification,
>but you cannot prove that the specification itself is
>bug-free.  The specification is also code and may
>also contain bugs.

+1

My biggest problem with "specification languages"
is that each is yet *another* language, with its
own semantics, its own standards committee, its
own bugs, etc.  More stuff to memorize, more stuff
that is too easily forgotten.

Even if your specification is flawless, you often
aren't any closer to having working code (unless the
specification language has its own interpreter, in
which case why not dispense with the "target"
language completely!).

The reason for using liberal ASSERT statements is
that they *are* executable, and are active during
debugging, so at least all of your test cases pass
the ASSERT's.

If there are any differences between the specification
and the implementation, I'd rather err on the side of
the *intersection* -- i.e., only those computations
which are consistent with *both* the specification and
the implementation will run without raising an
exception.

As some wise men (Knuth, Dijkstra, etc.) once
said, correctness first, efficiency second.

The most dangerous time for any code isn't right
after it is first released.  Sure, there will be
weirdnesses in user interfaces, but not so many
outright bugs.

The most dangerous time for any code is during
an *optimization* phase, where you may have forgotten
all your original test examples and the corner cases
that they tested.  This optimization is often done
by someone other than the original author, so they
usually don't understand the reasons behind all the
original implementation decisions.  The elegant
simplicity of the original code becomes hidden behind
cached values, unrolled loops, SIMD arithmetic.  The
code becomes impenetrable; e.g., look at some
implementations of crypto code; you'd never guess
that there's a pony in there somewhere!
