[Cryptography] VW/EPA tests as crypto protocols ?
leichter at lrw.com
Sun Sep 27 18:32:47 EDT 2015
>> 1. You cannot distinguish correct from incorrect outputs. These are
>> rare, but there is one cryptographic element that falls into this
>> category: Random number generators. These simply cannot be tested by
>> looking at the results. (It's interesting that the only example I know of
>> in this class takes no inputs.)
> Indeed. I seem to deliver a talk on this topic about once a week. You have
> to look at the entropy source process to have a hope of determining whether it's
> actually entropic, and to determine whether it meets the input requirements of the
It occurred to me after writing that message that there is another kind of software to be found "in the wild" that falls into this class: any implementation of proprietary cryptography. And how is such crud sold? Why, by challenging anyone to break it (i.e., to find a test it fails).
Deterministic cryptography allows you to separate the algorithm from the implementation. Testing of the implementation can prove it implements the algorithm correctly "with high probability". If you're paranoid, you wrap the algorithm (both plaintext/ciphertext and key) in a whitening function, protecting yourself even from attacks that depend on certain rare inputs known only to the attacker.
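The "prove it implements the algorithm" step for a deterministic primitive is just known-answer testing: compare any candidate implementation byte-for-byte against published test vectors. A minimal sketch, using SHA-256 and two well-known NIST vectors (the `check` helper and the vector table are mine, for illustration):

```python
import hashlib

# Published known-answer vectors for SHA-256 (message -> hex digest).
VECTORS = {
    b"abc": "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad",
    b"":    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def check(impl):
    """Return True iff impl matches the reference on every vector."""
    return all(impl(msg).hex() == digest for msg, digest in VECTORS.items())

# The stdlib implementation passes, as it should.
assert check(lambda m: hashlib.sha256(m).digest())
```

Because the function is deterministic, agreement on a rich enough vector set gives exactly the "correct with high probability" confidence described above - against *random* failures, at least.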
However, deterministic cryptography cannot attain certain desirable security guarantees. Those require a randomizing element within the crypto. Add that, and the system becomes untestable.
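To see why the randomizing element kills known-answer testing: once encryption draws a fresh nonce internally, two encryptions of the same plaintext under the same key differ, so no fixed expected output exists - only round-trip and statistical properties remain testable. A toy sketch (this hash-counter keystream is purely illustrative, not a vetted cipher):

```python
import hashlib, secrets

def keystream(key, nonce, n):
    # Toy keystream: SHA-256 in counter mode. For illustration only.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key, msg):
    nonce = secrets.token_bytes(16)          # fresh randomness per call
    ks = keystream(key, nonce, len(msg))
    return nonce + bytes(a ^ b for a, b in zip(msg, ks))

def decrypt(key, ct):
    nonce, body = ct[:16], ct[16:]
    ks = keystream(key, nonce, len(body))
    return bytes(a ^ b for a, b in zip(body, ks))

key = b"k" * 16
c1 = encrypt(key, b"hello")
c2 = encrypt(key, b"hello")
assert c1 != c2                       # no fixed test vector can exist
assert decrypt(key, c1) == b"hello"   # only round-trip properties remain
```

The round-trip assertion tells you decryption inverts encryption; it tells you nothing about whether the nonce is actually unpredictable - which is exactly the untestable part.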
>> 2. You can construct an input set with the property that if the results
>> are correct on the input set, they are correct everywhere. There are
>> systems that we assume have this property in the face of *random*
>> failures, and we often implicitly assume it - e.g., with sets of test
>> vectors for cryptographic algorithms. Of course, in the face of
>> deliberate wrongware, it's easy for the system to give the right answers
>> exactly for the test vectors.
>> Almost all traditional testing really falls into this category, even if we
>> don't explicitly write down the assumed "indicator set": It's implicit in
>> any fixed collection of deterministic tests.
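The "deliberate wrongware" point in the quoted text is easy to make concrete: an implementation that special-cases exactly the published vectors passes every fixed known-answer test while being wrong everywhere else. A toy sketch (the function names are mine):

```python
import hashlib

TEST_VECTORS = {b"abc", b""}   # the fixed "indicator set"

def wrongware_sha256(msg):
    # Malicious implementation: correct exactly on the known test
    # vectors, subtly wrong on every other input.
    d = hashlib.sha256(msg).digest()
    if msg in TEST_VECTORS:
        return d
    return d[:-1] + bytes([d[-1] ^ 1])   # flip one bit elsewhere

# Passes every fixed known-answer test...
assert all(wrongware_sha256(m) == hashlib.sha256(m).digest()
           for m in TEST_VECTORS)
# ...yet fails on any input outside the indicator set.
assert wrongware_sha256(b"x") != hashlib.sha256(b"x").digest()
```

This is why a fixed, public test-vector set only defends against *random* failures, never against an adversarial implementor.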
> On this I think the world is a little further along than you suggest. In
> hardware, formal equivalence verification and formal assertion
> verification is routine....
No disagreement. I was specifically talking about *testing*. The fact is, very little software is formally verified. Way too little software is even subject to proper static analysis, which is considerably weaker than formal verification - but still useful, and increasingly useful as the tools and techniques get better.
The industry's go-to answer to problems with software quality is ... more testing. While that's fine as far as it goes ... it doesn't go nearly far enough. And too much of what we do to improve testing remains uninformed about the realities. My favorite: coverage statistics. At one time I started a little project, long since abandoned for lack of time, to collect and categorize classes of bugs that would be missed by tests with 100% coverage. I once tried to use that as a question in interviews with QA people, but pretty much no one could answer it. I *do* have a question that involves handing an alleged implementation of a simple, fully-defined input-output transformation to a candidate and asking them to construct a thorough test. Hardly anyone gets that right either - though at least people can make a start on it.
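One whole class of bugs that 100% coverage cannot catch is the bug of *omission*: code that was never written cannot be executed, so no coverage metric will flag it. A minimal sketch (the spec and function are invented for illustration):

```python
def bucket(n):
    """Intended spec: n < 0 -> 'neg', 0 <= n < 10 -> 'low', n >= 10 -> 'high'."""
    if n >= 10:
        return "high"
    return "low"    # bug of omission: the negative case was never written

# These two tests execute every line and every branch - 100% line and
# branch coverage - yet the bug survives untouched.
assert bucket(3) == "low"
assert bucket(12) == "high"
# Wrong per the spec, but no coverage tool will ever complain:
assert bucket(-1) == "low"
```

Coverage measures how much of the code you exercised; it says nothing about how much of the *specification* you exercised.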
So ... even in areas where testing is appropriate, those who do it don't manage to get it right. And we know that there are things that even thorough testing cannot find.
BTW, formal verification without a great deal of support is *hard*. I once wrote a classic buffering algorithm with get()'s, unget()'s, and buffer re-fills. This was at the center of a communications system, so it needed to be very efficient. It also had to be very safe, as it faced the outside world. So I devised a set of correctness conditions on the variables defining the algorithm, and proved that each call preserved them. Several people checked this and agreed that the assertions and the proofs were fine.
And they were. The code was never seen to exceed the buffer bounds, the various pointers were always in the right relative positions, etc. But ... there was a bug *anyway*. It was possible to make a sequence of calls which resulted in get() returning a byte that had not actually received new data (either from the input or from an unget()) since the last time its value was returned. (That is, some byte value could be returned twice.) Fixing the bug was easy - but even formalizing what needed to be shown here proved much harder than all the rest of it combined.
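One way to formalize that history property - assuming a get/unget/refill interface like the one described, which is my reconstruction, not the original code - is a reference model in which the property holds by construction: a queue where refill() delivers bytes, get() consumes the next one, and unget() re-arms the byte most recently returned. A real implementation can then be checked against the model on random call sequences (differential testing), which catches exactly the "returned twice without re-arming" bug that the per-call invariants missed:

```python
from collections import deque

class ModelBuf:
    """Reference model of the buffer's *specification*, not an efficient
    implementation. The history property - no byte is returned by get()
    unless a refill() or unget() armed it since its last return - holds
    here by construction."""
    def __init__(self):
        self.q = deque()
        self.last = None     # byte eligible for unget(), if any
    def refill(self, chunk):
        self.q.extend(chunk)
    def get(self):
        self.last = self.q.popleft()
        return self.last
    def unget(self):
        assert self.last is not None, "nothing to unget"
        self.q.appendleft(self.last)
        self.last = None

m = ModelBuf()
m.refill(b"ab")
assert m.get() == ord("a")
m.unget()
assert m.get() == ord("a")   # legal: the unget() re-armed this byte
assert m.get() == ord("b")
```

The point is not that the model is clever - it isn't - but that once it exists, "the implementation agrees with the model on every call sequence" captures the history property that the per-call invariants could not express.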