[Cryptography] GnuTLS -- time to look at the diff.

Jerry Leichter leichter at lrw.com
Tue Mar 11 19:57:19 EDT 2014

On Mar 11, 2014, at 5:01 PM, Nico Williams <nico at cryptonector.com> wrote:
>> It seems like a more useful thing for the standards writers to do would be to produce a pretty comprehensive set of test cases (mostly things that should be rejected), and maybe offer a bounty on stuff that the protocol says should be rejected, but for which there is no test case exercising that bit of the code.
> Er, yes, agreed, but the standards generally say these things, just
> not in a way that can be easily extracted for the purpose of building
> a wasp nest / test suite.
> Perhaps we need to consider a more formal standards-writing language,
> but there's a lot of resistance to that (see the recent discussions
> about JSON schema languages in the IETF JSON WG).  A more realistic
> alternative might be to produce an Informational follow-on to any
> standard like TLS that has a description of all the test cases related
> to violations of requirements in the standard.
Many, many years ago, when DEC had produced a couple of VT1xx terminals which were supposed to be compatible but had some subtle differences, it was decided that it was time to write and publish (internally) a "VT100 standard".  The guy who put it together (Ram Sudama) decided on an interesting approach:  There was a textual definition of each possible command you could send to a VT1xx terminal and what would happen; and a bit of Pascal that implemented that command on a simple simulation of a terminal.  You could actually use a tool to extract all the Pascal code, compile it, and get a running (if rather slow) emulator of a VT102 terminal.  You could also extract just the textual descriptions - which I believe formed the basis of the published VT102 programming manuals.  (To this day, the VT102 as described in that manual is the best definition of a "VT100" terminal - the VT102 added a couple of commands to the VT100.)
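The extraction trick can be sketched in a few lines - this is a hypothetical toy in Python rather than the original Pascal, with invented command names and data layout, just to show how one source can yield both a manual and a (slow) reference emulator:

```python
# Hypothetical sketch of an "extractable spec": each entry pairs normative
# prose with reference code, so one source yields both the manual text and
# a runnable (if slow) emulator.  Names and layout are invented, not DEC's.

SPEC = [
    {
        "name": "CUU (Cursor Up)",
        "prose": "Moves the cursor up N lines, stopping at the top margin.",
        "code": "def cuu(term, n=1):\n    term['row'] = max(0, term['row'] - n)",
    },
    {
        "name": "CUD (Cursor Down)",
        "prose": "Moves the cursor down N lines, stopping at the bottom margin.",
        "code": "def cud(term, n=1):\n"
                "    term['row'] = min(term['rows'] - 1, term['row'] + n)",
    },
]

def extract_manual(spec):
    """Emit just the textual descriptions (the 'programming manual')."""
    return "\n".join(f"{e['name']}: {e['prose']}" for e in spec)

def extract_emulator(spec):
    """Compile the embedded code - the normative part - into handlers."""
    handlers = {}
    for e in spec:
        ns = {}
        exec(e["code"], ns)  # run the definition, like compiling the Pascal
        func = next(v for k, v in ns.items()
                    if callable(v) and not k.startswith("__"))
        handlers[e["name"].split()[0]] = func
    return handlers

term = {"row": 5, "rows": 24}
handlers = extract_emulator(SPEC)
handlers["CUU"](term, 3)
print(term["row"])  # 2
```

The point is that the code and the prose live side by side, so keeping them saying the same thing is a matter of review rather than archaeology.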

The Pascal code was, in standardese terms, "normative" (though there was a really significant effort to ensure that the Pascal and the English "said the same thing".)

We used to define processor instruction sets in fairly informal terms, which led to programmers finding odd corners of instruction behavior, sometimes with model-specific quirks.  The IBM 360 was the first architecture that tried to pin down fairly exactly what every instruction did and didn't do.  The VAX took that a step further; the Alpha even further.  By now, we expect instruction sets to be fully and rigorously defined.

We really should expect no less of our protocols.  But we still tend to specify just the responses to *correct* messages, leaving it up to the implementor to decide for himself what to do about *incorrect* ones.  That's what we used to do - but haven't done in decades - in specifying instruction sets.

It takes some concentrated effort, but it's really not all *that* hard.  And the excuse that implementors need some freedom so that they can get higher performance is just not acceptable any more.

There have been specification languages out there for years, but they aren't really used much for Internet and related protocols.  The bias has always been for "code", to the extent that to this day, if you had only the RFCs to work from, it's unlikely you could produce an interoperable TCP implementation.  Understandable when you know the history - but still pretty disturbing.
                                                        -- Jerry

