[Cryptography] Formal Verification (was Re: Trust & randomness in computer systems)
Perry E. Metzger
perry at piermont.com
Sat Mar 19 16:35:11 EDT 2016
On Sat, 19 Mar 2016 12:56:29 -0700 Ron Garret <ron at flownet.com> wrote:
> On Mar 18, 2016, at 12:55 AM, Viktor Dukhovni
> <cryptography at dukhovni.org> wrote:
> >> On Mar 18, 2016, at 2:51 AM, Ron Garret <ron at flownet.com> wrote:
> >>
> >>> Formal verification is not a panacea
> >>
> >> Indeed not.
> >>
> >> http://spinroot.com/spin/Doc/rax.pdf
> >>
> >> TL;DR: In the 1990s we formally verified a portion of the
> >> flight software for an unmanned spacecraft. Despite this, and
> >> despite extensive ground testing, the software nonetheless
> >> failed in flight. (Happily, the failure was not catastrophic.)
> >>
> >> There's no getting around the GIGO problem.
> >
> > I'm confused. Are you claiming that formal methods failed here?
>
> No, it was the application of formal methods that failed. But it's
> not just a random bit of stupidity, there was an interesting
> structure to this particular failure. It was never practical to
> verify the entire software stack, there was just too much of it.
> So we tried to structure the software with a secure, formally
> verified "core" that was supposed to maintain certain invariants
> (like no deadlocks). The theory was that anything built on top of
> that core would maintain those invariants.
How could that possibly be when there were unverified components that
could clearly violate the deadlock invariant?
What was proven was not that code built on top of this core would be
free of deadlocks -- what was proven was only that a model of the
core itself was free of deadlocks, and the core ran alongside
concurrent, non-verified code.
Your claim makes no sense to me. Indeed, the paper doesn't seem to
make any claim that would give one the impression that deadlock
freedom would be preserved by other subsystems interacting with the
verified code. Can you point me to a quotation indicating otherwise?
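To make that concrete, here's a toy illustration (mine, not the
paper's -- the process definitions and checker are hypothetical, just
to show the shape of the problem): an exhaustive interleaving check
can prove a "core" process deadlock-free in isolation while the
composition with an unverified process still has a reachable deadlock.

```python
from collections import deque

# Each process is a sequence of lock operations: ("acq", name) or ("rel", name).
VERIFIED_CORE = [("acq", "A"), ("acq", "B"), ("rel", "B"), ("rel", "A")]
UNVERIFIED    = [("acq", "B"), ("acq", "A"), ("rel", "A"), ("rel", "B")]

def has_deadlock(procs):
    """Exhaustively explore every interleaving of the processes and
    report whether some reachable state has unfinished processes but
    no enabled move -- i.e., a deadlock."""
    start = (tuple(0 for _ in procs), frozenset())  # (program counters, held (lock, owner) pairs)
    seen, queue = {start}, deque([start])
    while queue:
        pcs, held = queue.popleft()
        owners = {lock for lock, _ in held}
        moves = []
        for i, pc in enumerate(pcs):
            if pc == len(procs[i]):
                continue  # process i has finished
            op, lock = procs[i][pc]
            if op == "acq" and lock in owners:
                continue  # blocked waiting for the lock
            nh = held | {(lock, i)} if op == "acq" else held - {(lock, i)}
            moves.append((pcs[:i] + (pc + 1,) + pcs[i + 1:], nh))
        if not moves and any(pc < len(p) for pc, p in zip(pcs, procs)):
            return True  # every unfinished process is blocked
        for s in moves:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return False

print(has_deadlock([VERIFIED_CORE]))              # -> False: the core alone is deadlock-free
print(has_deadlock([VERIFIED_CORE, UNVERIFIED]))  # -> True: the composition can deadlock
```

Verifying the core in isolation tells you nothing about what happens
once unverified concurrent code enters the picture -- the composed
state space contains states the isolated check never explored.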
> It was a riff on this
> theme from Perry's original post:
>
> > However, it is an astonishingly useful *ratchet*. If you verify a
> > property, and you specified the property correctly, no new bugs
> > will appear of that particular flavor.
>
> It's a very attractive theory. I believed it myself. But I now
> think that the RAX bug falsified it.
But this isn't an instance of a ratchet.
To repeat, the "ratchet" effect is this: although you might forget to
verify some vital property and have a bug as a result, once you've
realized your mistake and added that property to the list of
properties you are verifying, bugs of that sort will never return,
provided the verification is maintained as the software is maintained.
The deadlock described in the paper isn't anything like that. Indeed,
deadlock freedom was a property the verified code was checked for, and
this problem happened in entirely unverified code.
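As a toy illustration of the ratchet (my sketch, nothing from the
paper -- the lock names and the property are made up): once "acquire
locks in a fixed global order" is added to the list of checked
properties, any change that reintroduces an order inversion -- the
flavor of bug behind this deadlock -- fails the check instead of
slipping back in silently.

```python
LOCK_ORDER = ["A", "B", "C"]  # hypothetical global acquisition order

def respects_order(proc):
    """The ratcheted property: a process may only acquire a lock that
    comes after every lock it already holds in LOCK_ORDER. Kept in the
    verification suite, this keeps order-inversion deadlocks from
    silently returning as the software is maintained."""
    held = []
    for op, lock in proc:
        if op == "acq":
            if held and LOCK_ORDER.index(lock) < LOCK_ORDER.index(held[-1]):
                return False  # order inversion: the bug flavor is back
            held.append(lock)
        else:
            held.remove(lock)
    return True

print(respects_order([("acq", "A"), ("acq", "B"), ("rel", "B"), ("rel", "A")]))  # -> True
print(respects_order([("acq", "B"), ("acq", "A"), ("rel", "A"), ("rel", "B")]))  # -> False
```

The point is not the particular check; it's that once the check is in
the suite, that class of bug can't quietly recur -- the ratchet only
fails for properties nobody ever added, which is exactly what happened
at RAX.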
BTW, your view here contradicts the paper's claims. I'm going to quote
an extensive section of the paper's conclusion here:
    This experience gave a clear demonstration that model checking
    can locate errors that are very hard to find with normal testing
    and can nevertheless compromise a system’s safety. It stands as
    one of the more successful applications of formal methods to
    date. In its report of the RAX incident, the RAX team indeed
    acknowledges that it “provides a strong impetus for research on
    formal verification of flight critical systems” [13].

    A posteriori, given the successful partial results, one can
    wonder why the first verification effort was not extended to the
    rest of the Executive, which might have spotted the error before
    it occurred in flight. There are two reasons for that. First,
    the purpose of the effort was to evaluate the verification
    technology, not to validate the RA. The ASE team did not have
    the mission nor the resources needed for a full-scale modeling
    and verification effort. Second, the part of the code in which
    the error was found has been written after the end of the first
    verification experience.

    Regarding software verification, the work presented here
    demonstrates two main points. First of all, we believe that it
    is worthwhile to do source code verification since code may
    contain serious errors that probably will not reveal themselves
    in a design. Hence, although design verification may have the
    economical benefit of catching errors early, code verification
    will always be needed to catch errors that have survived any
    good practice. Code will always by definition contain more
    details than the design – any such detail being a potential
    contributor to failure.

    Second, we believe that model checking source code is practical.
    The translation issue can be fully automated, as we have
    demonstrated. The remaining technical challenge is scaling the
    technology to work with larger programs – programs that could
    have very large state spaces unless suitably abstracted.
    Abstraction is of course a major obstacle, but our experience
    has been that this effort can be minimized given a set of
    supporting tools.
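On the abstraction point, here's a minimal sketch of the idea (mine,
with made-up transitions, not anything from the paper's tooling):
collapse an unbounded counter into the finite abstract domain
{0, 1, "many"} so exhaustive exploration terminates. Because the
abstract transitions over-approximate the concrete ones, showing that
no reachable abstract state is negative shows the concrete counter
never goes negative either.

```python
from collections import deque

def succs(a):
    """Abstract successors of a counter under inc and guarded dec
    (dec only fires when the counter is > 0), over {0, 1, "many"}."""
    if a == 0:
        return {1}            # inc; dec is disabled at zero
    if a == 1:
        return {0, "many"}    # dec -> 0; inc -> 2, abstracted to "many"
    return {1, "many"}        # from "many": dec may reach 1 or stay; inc stays

def reachable(start):
    """Plain breadth-first exploration of the abstract state space."""
    seen, queue = {start}, deque([start])
    while queue:
        for t in succs(queue.popleft()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

print(reachable(0))  # three abstract states, all non-negative: property holds
```

The concrete system has infinitely many states; the abstraction makes
the check finite at the cost of precision, which is the trade the
paper's conclusion is pointing at.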
Perry
--
Perry E. Metzger perry at piermont.com