[Cryptography] Heartbleed and fundamental crypto programming practices

Jerry Leichter leichter at lrw.com
Wed Apr 16 12:42:36 EDT 2014


On Apr 15, 2014, at 6:14 PM, Bear wrote:
> Indeed, the real issue is that security software cares about 
> "outputs" that are important in no other kind of programming....
AKA side channels.

All absolutely true - and this kind of thing can and does happen at multiple levels in the system stack.  A historical tale (which I may have told here in the past; if so, apologies to those who've seen it before):

The DEC VAX was an early example of an attempt to pin down exactly the architecture of a series of CPUs.  On many earlier machines, instructions were often described in terms of their expected effects - with other effects left open to implementors.  Users of particular implementations would explore these unspecified regions and often discover new and useful primitives.  Of course, those would fail on the next machine....  So the VAX architects tried to specify many details.  For example, if a register was an input to an instruction and only some of its bits were used, the others would be marked MBZ - Must Be Zero - and the hardware would take a fault if an MBZ field was *not* zero.

However, the architects did leave themselves an "out" for certain *output* fields.  These might be defined as "unspecified", meaning the hardware could do whatever it liked with those bits.

Years after there were multiple VAX implementations out in the field, someone (Hi, Joe M!) raised the following question in an internal VAX architecture discussion (Notesfile, for those who know what these were):  Could an "unspecified" field receive information to which the user program did not otherwise have access?  For example, could a user-mode program execute an instruction that would cause stuff from kernel-access-only memory to show up in an "unspecified" field?

DEC being an engineering company, the actual hardware designers jumped on this and responded within a day or two.  They went through every unspecified field and each of their designs and checked to see just what might show up.  As it happened, there was no issue:  Every existing implementation either zeroed or left unchanged all unspecified fields.

There followed an attempt to come up with language for the architecture spec that would prevent the "reveals secret information" problem without going to the extreme of mandating *exactly* how such fields were to be treated.  This turned out to be impossible to retrofit into the existing specs; it remained one of those things hardware designers simply had to be aware of.  (It may have made it into annotations on the internal version of the architecture spec - the public version was an extract that omitted various implementation hints and guidelines.)

A few years later, when the Alpha architecture was being defined, the architects were careful to avoid this problem.  Their definition of "undefined" carefully enumerates everything an "undefined" value may actually depend on - excluding, of course, anything the program could not have read directly for itself.

This stuff is *hard* to get right.  Side channels are insidious, and attackers keep coming up with new ones.  As a fairly simple, straightforward RISC architecture, the Alpha was probably immune to certain classes of timing attacks: all instructions took the same amount of time regardless of input.  On the other hand, timing attacks based on cache behavior or page faults would have been easy to mount.  Why the distinction?  Because both properties are side-effects of completely unrelated design decisions: no one at the time - at least in the non-clandestine community - knew or talked about timing attacks.
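The same uniform-vs-data-dependent timing distinction shows up in software.  The classic illustration (mine, not from the VAX/Alpha story above) is an early-exit comparison versus a constant-time one:

```c
#include <stddef.h>
#include <stdint.h>

/* Early-exit comparison: running time depends on how many leading bytes
 * match, so an attacker who can time it learns a secret byte by byte. */
int cmp_leaky(const uint8_t *a, const uint8_t *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i])
            return 0;              /* bails out at the first mismatch */
    return 1;
}

/* Constant-time comparison: always touches every byte and accumulates
 * differences, so timing no longer depends on where a mismatch falls. */
int cmp_ct(const uint8_t *a, const uint8_t *b, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= a[i] ^ b[i];       /* no data-dependent branch */
    return diff == 0;
}
```

On a machine with uniform instruction timing, the first version still leaks through its loop-trip count; cache and page-fault effects just add further channels on top.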


							-- Jerry
