[Cryptography] Security proofs prove non-failproof

Florian Weimer fw at deneb.enyo.de
Wed Feb 22 16:45:40 EST 2017


* Peter Gutmann:

> I guess we'll have to disagree on this... I've been hearing this since the
> 1970s (well, reading about it after the event in the case of the 1970s), any
> minute now formal methods will magically start working and all our code will
> be a lot better.  We've certainly come a long way since Ina Jo/Ina Mod and
> Gypsy and the Boyer-Moore theorem prover (whose use was famously described as
> "like trying to make a string go in a certain direction by pushing it"), but
> in 40 years we've only managed to go from "near-impossible to use" to "very
> difficult to use".  Extrapolating from 40 years of previous work, at this rate
> it's going to be centuries before they're ready to go mainstream.  And given
> that tools like PREfast and Coverity and Fortify haven't sat still in the
> meantime, they've got pretty stiff competition to deal with.

Those commercial tools don't care about soundness, so there isn't much
actual proving going on.  (Not sure about PREfast; it's somewhat
difficult to find accurate information.)  There is some correlation
between their findings and customer-encountered bugs, but it's not as
strong as one would want it to be.  I suspect a lot of use of those
tools is compliance-driven and gives good results mainly because any
interesting-looking report prompts real people to look at the source
code, where they find and fix the actual problems (which are not
always the ones reported by the tool).

My main problem with formal methods (apart from odd things like Z3
giving me an invalid solution for pretty much the first problem I
threw at it) is that with the current tools, verification is a
one-time effort, just like an audit.  You can't put production code
through the proof tool.  Few people say this explicitly,
unfortunately, but there always seems to be an expectation that you
massage the code (e.g. go from Ada to SPARK, from C to ACSL), prove
the baseline properties (such as termination, bounded resource use,
and absence of memory safety violations, all of which are relatively
easy to specify in general but can be quite hard to prove in specific
cases), and then, after patching things up, go back to the original
code to fix things there as well.
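
To make that massaging concrete, here is roughly what it looks like
for a trivial C function under Frama-C/ACSL.  The function and its
contract are invented for illustration, not taken from any real code
base, and the annotations are only a sketch of the general idea:

    #include <stddef.h>

    /* Returns the maximum element of a[0..n-1]. */
    /*@ requires n > 0;
        requires \valid_read(a + (0 .. n-1));
        assigns \nothing;
        ensures \forall integer k; 0 <= k < n ==> \result >= a[k];
        ensures \exists integer k; 0 <= k < n && \result == a[k];
    */
    int max_of(const int *a, size_t n)
    {
        int m = a[0];
        size_t i = 1;
        /*@ loop invariant 1 <= i <= n;
            loop invariant \forall integer k; 0 <= k < i ==> m >= a[k];
            loop invariant \exists integer k; 0 <= k < i && m == a[k];
            loop assigns i, m;
            loop variant n - i;
        */
        while (i < n) {
            if (a[i] > m)
                m = a[i];
            i++;
        }
        return m;
    }

The contract and loop annotations already outnumber the lines of
actual code, and they are tied to this exact loop structure: rewrite
the loop and you rewrite the proof.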

But now someone makes a change to the original code to implement a new
feature, and you have to do all that source code massaging again from
scratch.  That clearly does not work.

