[Cryptography] The GOTO Squirrel! [was GOTO Considered Harmful]
Dennis E. Hamilton
dennis.hamilton at acm.org
Fri Feb 28 20:03:12 EST 2014
I just sent the following note to the Risks list before noticing that exactly the same problem is cropping up here:
Oh look, a misplaced goto statement that short-circuits a security procedure.
It is amazing to me that, once the specific defect is disclosed (and the diff of the actual change has also been published), the discussion has devolved into one of coding style and whose code is better. I remember similar distractions around the Ariane 501 defect, although in that case there was nothing wrong at the level of the code. The error was that the routine was run when it wasn't needed, and it was never simulation-tested with the new launch parameters, under the mistaken assumption that code which worked for Ariane 4 would work for Ariane 5 and its very different launch profile.
It is not about the code. It is not about the code. It is not about goto. It is not about coming up with ways to avoid introducing this particular defect by writing the code differently.
I say this is all about the engineering and delivery process that allowed this gaffe to be introduced into production code for a security-important procedure and to remain there until someone outside noticed. The coding style could have been perfect with the code still failing to establish security correctly, and, all else being equal, it would still have gone into the live release. Some of the offered alternatives, I daresay, provide many ways to inject a comparable defect that is far less apparent.
The Apple defect was introduced while the code was being patched to change the signatures of some of the functions being called. This strikes me as a classic lapse: not testing what is thought to be obvious. I have no idea what the actual scenario was, though.
There are innumerable ways the particular defect could have been detected and remedied well before the code was committed to the code base. A walkthrough would likely have caught it, assuming a skilled human other than the original programmer simply read through the code. I suspect that explaining the change in a walkthrough would have led the originator to notice it.
A pretty-printer (or any IDE that reflows indentation) would point it out.
So would a modern IDE that identifies unreachable code.
Any practical code-coverage testing would reveal it too.
Furthermore, it is incomprehensible to me that a change to security-important code wasn't subjected to regression testing and confirmation of the procedure. For that matter, I'm a little disappointed that review and sign-off by a second, senior technical-staff member was evidently not required before commit.
What's appalling to me is the evident absence of risk management and procedures for detection and mitigation of regressions.
It is incumbent on all of us to stand back from the code and look at the process that allowed an injected regression to sit there and fester all this time.