[Cryptography] Security proofs prove non-failproof

Perry E. Metzger perry at piermont.com
Thu Feb 16 20:58:11 EST 2017


On Fri, 17 Feb 2017 00:57:44 +0000 Peter Gutmann
<pgut001 at cs.auckland.ac.nz> wrote:
> Perry E. Metzger <perry at piermont.com> writes:
> >This is a very, very powerful tool, and one that I think is going
> >to revolutionize our field, but it is still a tool, not a panacea,
> >and you have to remain vigilant.
> 
> I guess we'll have to disagree on this... I've been hearing this
> since the 1970s (well, reading about it after the event in the case
> of the 1970s), any minute now formal methods will magically start
> working and all our code will be a lot better.

And that was an utterly unfulfilled promise for a long time, just as
good computer chess was perpetually said to lie in the distant
future, until suddenly it was something that had already happened.

The big change in formal methods happened only in the last decade. We
now have several formally verified microkernels, a formally verified C
compiler that also doubles as a proof tool, a nearly complete formally
verified TLS stack (Project Everest), and so on. The number of
formally verified artifacts keeps multiplying, and the complexity
involved keeps increasing. Where, for example, the seL4 proof depended
on the kernel having no concurrency, CertiKOS's creators were able to
build a system that handles fine-grained concurrency, and with far
fewer man-years devoted to the verification, thanks to substantial
improvements in methodology made in less than a decade.

What has changed since 30 or 40 years ago is that back then people
simply didn't know how to build the tools that help others do formal
verification (it turned out to be a very hard problem for a number of
reasons), whereas now we largely do. There has also been a huge
revolution in understanding how, at an engineering level, to structure
the proofs. (It turns out that proof engineering is a thing, just as
software engineering is a thing.)

It's very early days yet -- I would compare the current maturity of
formal methods to the state of compilers and language design in the
mid-1960s, when things were still far too ad hoc and not yet subject
to truly sound engineering practice -- but things are moving very,
very, very rapidly indeed.

> We've certainly come a long way since Ina Jo/Ina Mod and Gypsy and
> the Boyer-Moore theorem prover (whose use was famously described as
> "like trying to make a string go in a certain direction by pushing
> it"), but in 40 years we've only managed to go from "near-impossible
> to use" to "very difficult to use".

I disagree. Much of the difficulty in using a system like Coq now lies
in problems we understand how to solve, like terrible error messages
and terrible UI (mere matters of people doing the work rather than not
knowing how to do the work even in principle), rather than in problems
we don't know how to solve, like how to even go about doing the whole
thing in the first place.
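
For readers who haven't touched these tools: using a proof assistant
means stating a proposition and building a machine-checked proof of
it. A trivial sketch, written here in Lean rather than Coq, purely to
illustrate the style of work (not any particular proof from the
projects above):

    -- State a proposition and prove it; the tool checks every step.
    theorem add_comm_example (a b : Nat) : a + b = b + a := by
      exact Nat.add_comm a b

The pain being discussed is in the surrounding experience -- error
messages, editor integration, proof search -- not in whether such
proofs can be constructed at all.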

> Extrapolating from 40 years of previous work, at this rate it's
> going to be centuries before they're ready to go mainstream.

I entirely disagree. I suspect that we're going to see serious
mainstream acceptance of the tools for high-value products inside of
ten years, and we're *already* seeing formally verified artifacts used
in some niche products. The main problems at this point are making the
tools more comfortable to use and making people aware that they exist.

> And given that tools like PREfast and Coverity
> and Fortify haven't sat still in the meantime, they've got pretty
> stiff competition to deal with.

You can run Coverity over a program every day and still miss serious
problems that formal verification would utterly eliminate. Static
analysis simply can't do a lot of this: it searches for known classes
of bugs, while verification proves that the code meets its
specification. Static analysis is great, but it isn't the same thing.
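
To make that concrete, here's a small illustration of my own (a
hypothetical function, not anything from a real code base). It is
memory-safe and free of undefined behavior, so a pattern-matching
static analyzer has nothing to complain about, yet it violates its
intended specification in a way a machine-checked proof would catch
immediately:

    #include <stddef.h>

    /* Intended spec: return the index of the first occurrence of
     * `key` in `buf`, or `len` if it does not occur.  The code is
     * memory-safe, so static analysis stays quiet -- but the loop
     * starts at 1 and never examines buf[0], so any attempt to prove
     * "returns the smallest i with buf[i] == key" would fail. */
    size_t find_first(const unsigned char *buf, size_t len,
                      unsigned char key)
    {
        for (size_t i = 1; i < len; i++)  /* bug: should start at 0 */
            if (buf[i] == key)
                return i;
        return len;
    }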

Perry
-- 
Perry E. Metzger		perry at piermont.com

