[Cryptography] Security proofs prove non-failproof

Peter Gutmann pgut001 at cs.auckland.ac.nz
Sun Feb 19 22:09:23 EST 2017


Perry E. Metzger <perry at piermont.com> writes:

>We now have several formally verified microkernels, a formally verified C
>compiler that also doubles as a proof tool, we nearly have a formally
>verified TLS stack (Project Everest), etc., etc. 

None of which are actually used in practice, at least not to any noticeable
extent.  I don't know of anything built around L4, mostly because by the time
you add enough bits to it to be useful you've got nothing like a formally
verified system any more.  CompCert is nice, but can't generate code anywhere
near the quality of a standard C compiler's output, and if there's one thing
that the continued use of gcc has taught us, it's that people prefer optimised
flaky code to non-optimised correct code.  miTLS is a cool project, but I don't know of
anyone using it in production, they want WolfSSL compiled with gcc, not miTLS
compiled with... is there even a formally verified F* compiler?  In fact, is
there even an F* compiler of any kind?

miTLS specifically brings up another interesting point.  One of the skills
required in implementing a standard is knowing which bits you need to ignore
in order for it to work.  For example, in TLS 1.2 everyone knows that you
ignore the requirement for the signature algorithms in the server's certs to
match what's given in the signature_algorithms extension in the handshake,
because otherwise things break; you pretend that the introduction of MD5 + RSA
as a signature format, added in 2008 (!!), never actually happened; and so on.
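As a rough illustration (hypothetical helper names, not taken from any real
stack), the by-the-book check would look something like the following, and
it's exactly the check that deployed implementations quietly skip:

```python
# Hypothetical sketch: a strict reading of TLS 1.2 says the algorithm used to
# sign the server's certificates should appear in the signature_algorithms
# list the client offered in its handshake.  Deployed stacks generally relax
# this, because plenty of real certificate chains would otherwise fail.

def strict_cert_check(cert_sig_alg, offered_sig_algs):
    """Return True iff the cert's (hash, signature) pair was offered by the client."""
    return cert_sig_alg in offered_sig_algs

# A client that only offered SHA-256-based algorithms...
offered = [("sha256", "rsa"), ("sha256", "ecdsa")]

# ...talking to a server whose chain contains a SHA-1-signed cert:
print(strict_cert_check(("sha1", "rsa"), offered))    # strict reading: reject
print(strict_cert_check(("sha256", "rsa"), offered))  # accept
```

The point isn't the two-line function, it's that the strict version breaks
interop, so nobody ships it.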

A specific miTLS example came up a while back where the miTLS developers
reported that they'd found issues when interleaving application data and
handshake records.  The initial response was that this is irrelevant because
you can't actually do that.  Turns out the spec does actually allow it, and
miTLS was (probably) the first implementation that had ever done it, which
showed up the problem.  Because miTLS very carefully implemented everything in
the spec while everyone else just did the bits that made sense, everyone
else's code was naturally immune to the issue.
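To see why interleaving is even possible, here's a sketch of the record layer
(framing simplified for illustration, and the constants are just the TLS
content-type values): each record carries its own content-type byte, so
nothing in the framing stops a handshake record from arriving between two
application-data records; it's entirely up to the implementation to route
each record to the right state machine.

```python
# Minimal sketch of TLS record-layer demultiplexing.  Framing is simplified
# to type + 2-byte length + payload (real records also carry a version field
# between the two).  The point: the content type is per-record, so handshake
# (22) and application-data (23) records may legally interleave on the wire.

HANDSHAKE, APPLICATION_DATA = 22, 23

def demux(stream):
    """Split a byte stream of records into per-content-type payload buffers."""
    buffers = {HANDSHAKE: b"", APPLICATION_DATA: b""}
    i = 0
    while i < len(stream):
        ctype = stream[i]
        length = int.from_bytes(stream[i + 1:i + 3], "big")
        buffers[ctype] += stream[i + 3:i + 3 + length]
        i += 3 + length
    return buffers

# Application data interleaved with a handshake record:
wire = (bytes([APPLICATION_DATA, 0, 2]) + b"hi" +
        bytes([HANDSHAKE, 0, 3]) + b"\x01\x02\x03" +
        bytes([APPLICATION_DATA, 0, 2]) + b"!!")
print(demux(wire))
```

An implementation that assumed the handshake finishes before any application
data arrives would never exercise the interleaved path, which is why only a
spec-complete implementation like miTLS tripped over it.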

So "correct" software isn't just "follows the spec", it's "follows a sensible
interpretation of the spec".  And that's something that previous attempts at
verifying both SSH and TLS had run into, the spec was such that it wasn't
possible to verify certain properties because it was unclear whether you were
supposed to do A, B, or C at a particular point, so people doing the
verification had to come up with their own best guess at what was intended.  A
lot of implementations of this stuff are relatively safe because they only
implement a basic subset of the full functionality of a particular spec.  It's
only when you try and implement every single bit of everything
coughOpenSSLcough that you run into problems.

Note that I'm not trying to bash any of L4, CompCert, miTLS, etc.  They're all
extremely useful... well OK, I don't know how useful L4 is but CompCert and
miTLS certainly are.  However, they're specialised, craftsman's tools, not
something you bake into a shipping product, in the same way that you don't use
a McLaren to deliver UPS packages.

>I disagree. Much of the difficulty in using a system like Coq now lies in
>problems we understand how to solve, like terrible error messages and
>terrible UI 

It's not the error messages that are the problem, it's the learning cliff (not
a curve, a vertical cliff).  I've played with some tools built around Coq, and
even with my somewhat elevated level of masochism for doing code analysis and
lack of the-boss-wants-it-by-Tuesday constraints, it was just too painful to
continue.

>I entirely disagree. I suspect that we're going to see serious mainstream
>acceptance of the tools for high value products inside of ten years, 

Again, we'll have to disagree on this: I see no sign of formal methods going
mainstream, not just in ten years but ever.  Better yet, if in ten years
they've gone mainstream, I'll buy you dinner at Tony's Steak House in Victoria
St, Auckland.  If not, you owe me dinner at the same venue.

>You can run Coverity over a program every day and not find serious problems
>in it that a formal verification will utterly eliminate. 

The difference is that anyone can run Coverity, while almost no-one can do a
formal verification.  As the saying goes, a Smith and Wesson beats four aces
(which isn't expressing quite the right thing, but the general sentiment is
there, if anyone knows of something more apropos...).

Peter.

