[Cryptography] Formal Verification (was Re: Trust & randomness in computer systems)

Perry E. Metzger perry at piermont.com
Fri Mar 18 17:33:21 EDT 2016


On Fri, 18 Mar 2016 11:45:59 -0700 Ray Dillinger <bear at sonic.net>
wrote:
> >> http://spinroot.com/spin/Doc/rax.pdf
> >>
> >> TL;DR: In the 1990s we formally verified a portion of the
> >> flight software for an unmanned spacecraft.  Despite this, and
> >> despite extensive ground testing, the software nonetheless
> >> failed in flight.  (Happily, the failure was not catastrophic.)
> >>
> >> There's no getting around the GIGO problem.
> >
> > I'm confused.  Are you claiming that formal methods failed here?
> > On the face of it, they were quite successful...
>
> If the system can fail because of something that was not
> formally verified, then even if the formal verification
> was perfectly implemented and proved perfect compliance
> with the specification...
>
> It does not matter because the specification itself
> contained a bug.

Actually, in this case the failure was because of a section of the
system that was not verified. The verification of the other section
clearly reduced the bug count in the overall system.

Now, on the more general question, although it is true that
specifications can contain bugs as well, that's not a reason to think
formal verification isn't exceptionally useful.

For a real-world example, CompCert has a huge spec to formalize. The
key theorem proven about it is that it preserves observational
equivalence between the C source code and the output assembly code,
which means the team had to create a full formal semantics of C. (The
full formal semantics for the assembly language was much simpler.)
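Roughly, the main theorem has the following shape (a paraphrase of
the semantic-preservation statement, not CompCert's exact Coq
wording):

```latex
% Paraphrase: if compilation of source program S succeeds and yields
% compiled code C, then every observable behavior of C is an allowed
% behavior of S.
\forall S\, C,\quad \mathrm{compile}(S) = \mathtt{OK}(C)
  \;\Longrightarrow\; \mathrm{Behaviors}(C) \subseteq \mathrm{Behaviors}(S)
```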

Nonetheless, essentially no bugs have been found so far in
CompCert. (There have, in fact, been two, but one was really a bug in
a Linux include file, and the other was a bug in a portion of the
separate compilation functionality about which nothing had been
formalized; once that was pointed out, it was successfully
formalized and fixed.)

Most importantly, if there is a bug found in the spec, once it is
fixed, the bug will be guaranteed to be gone forever.

This is a feature formal verification has that ordinary testing does
not. In ordinary testing, you hope that a bug never reappears and that
your regression test for it is good enough, but it is rare that you
can be 100% sure. In formal verification, you get a ratchet. Once
you've fixed something, you will never make that mistake again, full
stop. The real world shows this actually works.
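A minimal sketch of the ratchet (in Lean 4; the function and names
here are illustrative, not from any real system). Suppose a bug was
once found where `double` returned `n + n + 1`:

```lean
def double (n : Nat) : Nat := n + n

-- Once the fix is captured as a proved theorem, the proof is
-- re-checked on every build. Reintroducing the off-by-one bug makes
-- this theorem fail to compile, unlike a regression test that only
-- samples particular inputs.
theorem double_eq (n : Nat) : double n = 2 * n := by
  unfold double; omega
```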

> This is the real issue with formal verification.  You
> can prove that software complies with a specification,
> but you cannot prove that the specification itself is
> bug-free.  The specification is also code and may
> also contain bugs.

Sure, but the spec usually turns out to be dramatically simpler and,
more to the point, the spec provides the ratchet feature that I
describe above.

It also turns out that you can protect truly huge amounts of code with
fairly small formally verified "firewalls" that provide the sole
communication path to that software.

> The benefit is that the specification language is usually
> less susceptible to misunderstanding or unsuspected bugs
> because it has much cleaner semantics, semantics for which
> a compiler would take at least days to compile code (and
> which in some cases cannot be directly translated into
> machine code at all, but can still be checked).

In general, you probably couldn't derive a compiler from the formal
semantics for C, or from the statement of the theorem that your
compile function's output preserves those semantics.

And again, the big win is not perfection (which is difficult to
achieve) but rather the ratchet effect.

> and nobody really cares if the theorem prover (as opposed to the
> compiler) runs for days.

In general, systems like Coq do not find proofs on their own. They
provide a tactic language that allows a human to construct a
proof. That proof can be checked very quickly once created.

It is true that some of the tactics in Coq (like omega, which invokes
a decision procedure for Presburger arithmetic) do seem like magic, of
course. It is also true that in the future there may be much more
dependence on powerful tactics as people come up with better ways of
automating proofs.
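A small illustration of the division of labor (shown here in Lean 4,
whose `omega` tactic plays the same role as Coq's): the human chooses
the tactic, the decision procedure for linear (Presburger) arithmetic
searches for the proof, and the resulting proof term is then checked
quickly by the kernel.

```lean
-- The human states the goal; `omega` finds the proof automatically.
-- From 3x + 2y = 12 and x >= 2, we get 2y <= 6, hence y <= 3.
example (x y : Int) (h₁ : 3 * x + 2 * y = 12) (h₂ : x ≥ 2) : y ≤ 3 := by
  omega
```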

Perry
-- 
Perry E. Metzger		perry at piermont.com

