[Cryptography] Verified implementation of TLS: miTLS

Perry E. Metzger perry at piermont.com
Thu Nov 26 10:43:40 EST 2015


On Thu, 26 Nov 2015 06:32:38 -0800 Henry Baker <hbaker1 at pipeline.com>
wrote:
> http://www.mitls.org/wsgi/home
> 
> miTLS is a verified reference implementation of the TLS protocol.
> Our code fully supports its wire formats, ciphersuites, sessions
> and connections, re-handshakes and resumptions, alerts and errors,
> and data fragmentation, as prescribed in the RFCs; it interoperates
> with mainstream web browsers and servers.  At the same time, our
> code is carefully structured to enable its modular, automated
> verification, from its main API down to computational assumptions
> on its cryptographic algorithms.
> 
> The stable version of miTLS, including the new 0.9 release, is
> written in F# and specified in F7.

Formal verification of large software artifacts has finally become
quite practical. Thirty and even twenty years ago the tools were
simply inadequate and there was no way to perform the work in a
reasonable amount of time; even within the last decade that has
changed dramatically. I believe this will be one of the biggest new
frontiers in systems and network security in the next decade or two.

Verification is not a panacea -- you can only verify properties you
think to ask about, and you inevitably have to make assumptions (e.g.,
even formally verified hardware can't be proof against someone
physically modifying it).

However, it is an astonishingly useful *ratchet*. If you verify a
property, and you specified the property correctly, no new bugs of
that particular flavor will appear. If you've formally verified (say)
that all encryption operations will take identical time (subject to
some model of caching behavior), no timing attacks will be possible.
If you formally verify that there are no buffer overflows, then no
buffer overflow attacks will be possible.
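
To make the timing example concrete, here is a minimal sketch in C
(my own illustration, not miTLS code) of the sort of constant-time
comparison whose "same running time on all inputs" property one could
ask a verifier to prove, under a simple cost model that ignores
caching. A naive comparison that returns at the first mismatching
byte leaks, through timing, how much of a secret MAC an attacker has
guessed correctly; the branch-free version touches every byte no
matter what:

    #include <stddef.h>
    #include <stdint.h>

    /* Returns 0 if the two buffers are equal, nonzero otherwise.
       No branch depends on the secret contents of a or b, so the
       loop runs the same number of steps on every input of a given
       length -- exactly the kind of property a verifier can check
       against a simple timing model. */
    static int ct_compare(const uint8_t *a, const uint8_t *b, size_t len)
    {
        uint8_t diff = 0;
        for (size_t i = 0; i < len; i++)
            diff |= a[i] ^ b[i];  /* accumulate differences, no early exit */
        return diff;              /* 0 iff every byte matched */
    }

As with the caching caveat above, such a proof is only as good as the
machine model it assumes.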

This ratchet effect is really useful. In our current paradigm for
building secure systems, we keep learning about more and more attacks
with time, but we are never sure, having discovered an attack, that
our systems have been made fully immune to it. Testing only goes so
far, and sometimes mistakes are made in tests, so the scope for
attacks keeps broadening as new attacks are discovered. With formal
verification, you can't be sure no new attack will be discovered, but
you *can* be sure you've closed the door to the old attacks, so the
balance between attack and defense can become significantly better.

In other words, you can finally make forward progress, because the
formal verification ratchet keeps you from backsliding. Even if, to
pick an example, you did a flawed timing proof that ignored caching,
and someone then attacked you using knowledge of cache behavior, you
could fix your code once the attack was discovered, amend the proof
to cover those cache properties, and you would not backslide.

Perry
-- 
Perry E. Metzger		perry at piermont.com

