[Cryptography] The GOTO Squirrel! [was GOTO Considered Harmful]

ianG iang at iang.org
Sat Mar 1 10:55:55 EST 2014


On 1/03/2014 01:33 am, Peter Gutmann wrote:
> "Dennis E. Hamilton" <dennis.hamilton at acm.org> writes:
> 
>> It is not about the code.  It is not about the code.  It is not about goto.
>> It is not about coming up with ways to avoid introducing this particular
>> defect by writing the code differently.


Au contraire!  You've fallen into exactly the trap you accuse others
of.  You see that we are all focussed on cause A, bad code, and are
missing cause B, lack of process.  But by saying it is all about B, you
are missing that A is a contributor.

All of these examples, including Ariane 5, are characterised by
'accident chains': the set of circumstances in which multiple checks all
fail to block the disaster.  (Aviation disaster investigations are
basically that; figure out the accident chain that led to the disaster.)

>> I say this is all about the engineering and delivery process that allowed
>> this gaff to be introduced into production code for a security-important
>> procedure and allowed to remain there until someone noticed externally.  The
>> coding style could have been perfect, with the code still not establishing
>> security correctly and it would have been put into the live release, all else
>> being equal.


In this case, the code is so woeful, so cause-full, that cause B will
always be suspect.  IMHO you can't actually design a good process of
review and control around a spaghetti mess of gotos for error trapping.

Or, in other words, as suspected by a consensus of programmers here
who've lost their dinner, a massive refactoring exercise is *the good
engineering and delivery process that was missing in OpenSSL*.

(This is not a new observation; I first saw the admission that OpenSSL
is a dog's breakfast at least 15 years ago.)


> I was just about to say the same thing.  Even if you rewrote the entire thing
> in Haskell (the newspeak of programming languages in which it's impossible to
> write incorrect code[0]), you can still produce something where some crypto
> check is missed.  This isn't an issue of coding style, it's one of software
> engineering practice (or lack thereof).


This is true, but it misses the point by focussing on the either/or
binary possibility.

The coding style process (and indeed the choice of language, as a subset
of that) is not there to make the code perfect; it is there to eliminate
easy errors, to make the code tractable, and to let the programmer
think at a higher layer.  In short, to help the coder concentrate on
security rather than on every other aspect.

Of course it is impossible to eliminate errors in the code by coding
style alone, just as it is possible to write secure code in assembler.
The real question is: what is the likelihood and the cost of secure
results, in either approach?


>> There are innumerable ways the particular defect could have been detected and
>> remedied well before the code was committed to the code base.  A walkthrough
>> would likely catch it, assuming a skilled human other than the original
>> programmer simply read through it.  I bet explaining it on a walkthrough
>> would have led the originator to notice it.
> 
> This could be, and should have been, caught with automated testing.  The
> problem is that most (all?) testing of this type of code is along the lines of
> "do the things that should happen, happen?", with very little testing of "do
> the things that shouldn't happen, not happen?".  What happens if a bit in the
> SSL handshake is flipped?  What if a bit in the payload data is flipped?  What
> if the server presents cert A and signs with cert B?  What if a bit in the
> signed DH parameters is flipped?  What if the hash has a valid signature but
> it's the wrong hash for the data?  (Those are all self-checks that my code
> performs, if there's anything else obvious that I've missed I'd love to hear
> about it so I can add checks for that too).  These are trivial, automated
> checks that you can run before you ship, and several of them would have caught
> Apple's bug (the signature wasn't checked at all, so wrong-key, DH-parameter-
> manipulation, and valid-sig-on-wrong hash would all have caught the problem).


Right.  This is where it does get hard.  I agree with that, and to be
frank, I don't do a lot of it myself.

The problem that I think will occur here is that the test cases become
so complicated and impenetrable that they overwhelm the code -- both
economically and in agility.  So in practice, the only groups who've
been able to maintain such a lifestyle have been those with a sum of
components equal to one (a single product to maintain), or those with
infinite budgets (aviation).


> Peter.
> 
> [0] Some advocates of Haskell actually seem to believe this, which always
>     provides for much entertainment during discussions.


It is always easy to derail an opponent by pointing to another
language as a better hammer ;)



iang


More information about the cryptography mailing list