[Cryptography] Vulnerability of RSA vs. DLP to single-bit faults

Jerry Leichter leichter at lrw.com
Sun Nov 2 14:56:13 EST 2014

On Nov 2, 2014, at 6:13 AM, Michael Kjörling <michael at kjorling.se> wrote:
> ...If I then fill that block by for example reading key data from disk, let's just say that with 50% probability, I have a bad situation because RAM now holds something other than what came from storage. _Reliably_
> detecting the problem without hardware support is a non-trivial
> problem; _at a minimum_, all memory-writing operations would need to
> double-check the results in a way that is immune to caching.
Cue the famous Arpanet collapse of 1980.  For those whose memories have lost the bits, or never had them:  Multiple things contributed, but one router - IMP, in those days - developed a stuck-at-zero block of memory.  Unfortunately what was in that block of memory was the table of distances to other nodes.  The Arpanet used a distance-vector routing algorithm, in which each node maintains such a table and floods it to its nearest neighbors.  The net effect was that that IMP informed all other IMPs that it had a path of length 0 to every node on the Arpanet - promptly re-directing *all* traffic on the Arpanet to itself.
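(A toy sketch of the mechanism, for those who haven't worked with distance-vector routing - the node names, link costs, and function are invented for illustration, not the actual IMP code.  A node adopts a route via a neighbor whenever link cost plus the neighbor's advertised distance beats what it already has:)

```python
# Hypothetical distance-vector update step; all names/costs invented.
LINK_COST = 1  # assume unit-cost links for simplicity

def update_routes(routes, neighbor, advertised):
    """Merge a neighbor's advertised distance vector into our table.

    routes:     {dest: (distance, next_hop)}
    advertised: {dest: distance} as flooded by the neighbor
    """
    for dest, dist in advertised.items():
        candidate = LINK_COST + dist
        if dest not in routes or candidate < routes[dest][0]:
            routes[dest] = (candidate, neighbor)
    return routes

# A healthy routing table at some node:
routes = {"A": (3, "B"), "C": (2, "B"), "D": (4, "B")}

# The faulty IMP's distance table went stuck-at-zero, so it
# advertised distance 0 to *every* destination:
faulty_advert = {"A": 0, "C": 0, "D": 0}

update_routes(routes, "faulty-imp", faulty_advert)

# Every destination now appears one hop away via the faulty IMP,
# so all traffic is redirected toward it.
print(routes)
```

Run it and every entry in the table now points at the faulty node with distance 1 - exactly the "shortest path to everywhere" black hole described above.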

The table was actually checksummed in memory, but the checksum in use computed a zero result for an all-zero input - so it passed.  There were other issues that made this worse - see http://www.csl.sri.com/users/neumann/umd+H2_5.html for a discussion - but two lessons learned were (a) the checksum must follow the data from birth to death - memory to network transmission to memory to use, in this case; (b) given that stuck-at-zero (and stuck-at-one) are common failure modes, checksums should be designed so that the checksum of all zeroes is not zero, and the checksum of all ones is not all ones.
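(Lesson (b) is easy to see concretely.  This is not the actual IMP checksum - the seed value and mixing below are invented - but it shows why a plain additive checksum is blind to stuck-at-zero, while seeding with a nonzero constant and folding in byte positions is not:)

```python
# Illustrative only; the actual IMP checksum differed.

def naive_sum(data):
    # Plain additive checksum: maps an all-zero block to zero,
    # so a stuck-at-zero memory fault "verifies" correctly.
    return sum(data) & 0xFFFF

def seeded_sum(data):
    # Nonzero seed plus position-dependent mixing: an all-zero
    # block can no longer produce a zero checksum.
    acc = 0xA5A5  # arbitrary nonzero seed (invented for this sketch)
    for i, b in enumerate(data):
        acc = (acc + b + i + 1) & 0xFFFF
    return acc

zeros = bytes(16)
print(naive_sum(zeros))   # 0 -- the fault slips through
print(seeded_sum(zeros))  # nonzero -- the fault is caught
```

The same reasoning is why real checksums like CRC32 typically start from a nonzero initial value rather than zero.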

(Related long-ago story:  The original VAX, the 11/780, had a UBA - Unibus adapter - to allow it to use existing peripherals designed for the PDP-11.  Among those peripherals was the first DEC Ethernet adapter, the DEUNA.  The Unibus had no parity checking, much less ECC - at the time of its design in 1969 it was considered too expensive and complex.  (In fact, the whole Unibus design was based on keeping the necessary logic to an absolute minimum - extra logic was a big deal in MSI days.)  The Unibus was also known to corrupt bits if the devices on it drew too much power - and the DEUNA was a *very* power-hungry device; it could share a Unibus only with low-powered devices.  (You could have more than one UBA on a VAX 11/780, so this wasn't a killer constraint.)

If you transferred stuff through DECnet (this would have been Phase II DECnet, which only did one-hop routing; you had to specify a path if you wanted to go multi-hop), the only part of the low-level path that was *not* checksummed was the transit of the data from the DEUNA over the UBA to memory, and then back.  The analogue of FTP used with DECnet *did* compute an end-to-end checksum.  File transfers would appear to work - but on completion would report a "DAP CRC checksum failure" when the end-to-end checksum didn't match.  You could be pretty sure that the cause was someone's overloaded DEUNA.  Those who understood this stuff would look at the path the data had followed and then use divide-and-conquer to find the machine that was corrupting the data.  Many a VAX system manager came in to work to find mail from someone he'd never heard of telling him that his DEUNA was on an overloaded UBA - the management software for DECnet allowed you to determine pretty exactly how the device forwarding the packets was configured on the machine.)
							-- Jerry