[Cryptography] GCHQ Penetration of Belgacom
Henry Baker
hbaker1 at pipeline.com
Sun Dec 21 10:21:51 EST 2014
At 09:42 PM 12/20/2014, Henry Baker wrote:
>At 06:56 PM 12/20/2014, dan at geer.org wrote:
>>hbaker1 writes:
>> | >From: dan at geer.org
>> | >It's my second-hand understanding that it would take perhaps 3,000
>> | >gates to implement intentional sensitivity to a pre-designed kill
>> | >packet. The addition of 3,000 gates to any current chipset will
>> | >never be found in current hardware, e.g., the iPhone 6 has two
>> | >billion transistors on the system chip.
>> | >
>> | >Others more knowledgeable welcome to correct my understanding.
>> |
>> | So Intel&Apple have provided PRC with netlists for their processor chips?
>> |
>> | Of course, PRC shouldn't believe them, unless they could also
>> | manufacture their own chips from the netlists.
>>
>>You missed the point well enough that it must have been on purpose.
>>
>>Nevertheless, to reword in the interest of clarity, hiding something in
>>hardware is, AND ALWAYS WILL BE, impossible to detect or disprove.
>>
>>--dan
>
>http://spectrum.ieee.org/tech-talk/semiconductors/devices/contracts_awarded_for_darpas_t
>
>Posted 6 Dec 2007 | 12:36 GMT
>
>"Trust, but verify. ... A year shy of its 50th birthday, the Defense Advanced Research Projects Agency has launched the *** Trust in Integrated Circuits *** program, the goal of which is a microchip verification process. It's basically a Pentagon Good Housekeeping Seal of Approval. A chip bearing the Trusted imprimatur will be *** guaranteed free of malicious content. ***"
>
>I've been sleeping better ever since.
I hope that the sarcasm drippage was evident.
The malware problem usually (always??) involves a discrepancy between the model and the reality. This is one reason why "proofs" of non-maliciousness are never going to be enough: these proofs live within a mathematical model, and almost any discrepancy with reality might be elevated into an attack. E.g.:

- the model assumes proper voltage; improper voltage can create havoc;
- the model assumes proper clocking; improper clocking can create havoc;
- the model assumes normal temperatures; abnormal temperatures allow pwnage;
- the model assumes normal entry to subroutines; ROP is based upon abnormal entry; etc.
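The model/reality gap can be made concrete with a toy example (names and values are my own, not from the thread): a bounds check that is provably sound in the mathematical model of unbounded integers, yet broken in the reality of 32-bit wrap-around arithmetic.

```python
MASK32 = 0xFFFFFFFF  # simulate 32-bit unsigned hardware arithmetic

def naive_in_bounds(offset, length, buf_size):
    """Model: offset + length < buf_size implies the access is safe.
    Reality: the sum wraps modulo 2**32, so a huge `length` can make
    it look small and slip past the check."""
    return ((offset + length) & MASK32) < buf_size

def careful_in_bounds(offset, length, buf_size):
    """The same check, written against the machine's actual arithmetic:
    compare without ever forming a sum that can wrap."""
    return length < buf_size and offset < buf_size - length

# 16 + 0xFFFFFFF8 wraps to 8, which is < 1024, so the naive check passes.
print(naive_in_bounds(16, 0xFFFFFFF8, 1024))    # -> True (wrongly "safe")
print(careful_in_bounds(16, 0xFFFFFFF8, 1024))  # -> False (correctly unsafe)
```

The "proof" of the naive check is perfectly valid -- in a model whose integers never wrap.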
In viruses & bacteria, evolution has already explored a lot of the discrepancies -- DNA that can be read two (or more) different ways, depending upon the initial offset (0, 1, 2 mod 3); DNA that is read backwards; RNA that codes for something and is also active in its own right; DNA that "borrows" sequences from an old attacking virus to produce good stuff, etc. But several billion years allows for a lot of hacking & defense attempts.
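The reading-frame trick above is easy to demonstrate: one string, several "programs", depending on where you start reading. A minimal sketch (the sequence and the tiny codon table are mine, chosen just for the demo):

```python
# A few entries from the standard genetic code -- enough for this demo.
CODONS = {
    "ATG": "Met", "GCC": "Ala", "TGG": "Trp", "CCT": "Pro",
    "GGC": "Gly", "CTG": "Leu", "TGC": "Cys",
}

def read_frame(dna, offset):
    """Split the sequence into 3-letter codons starting at `offset`
    (offset 0, 1, or 2 -- i.e., mod 3)."""
    return [dna[i:i + 3] for i in range(offset, len(dna) - 2, 3)]

dna = "ATGGCCTGC"
for offset in range(3):
    codons = read_frame(dna, offset)
    print(offset, codons, [CODONS.get(c, "?") for c in codons])
# frame 0: ATG GCC TGC -> Met Ala Cys
# frame 1: TGG CCT     -> Trp Pro
# frame 2: GGC CTG     -> Gly Leu
```

Nine bases, three entirely different readings -- the biological analogue of code that disassembles differently at different entry offsets.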
Governments keep worrying about a viral or bacterial pandemic like the 1918 flu that kills a material fraction of the people in the world. Given the limited diversity in digital HW & SW systems, a pandemic that *** destroys a material fraction of the computers in the world *** is a heck of a lot more likely. Such a pandemic will almost certainly result from a nation-state screwup with a Stuxnet-like weapon whose code turns out to be "too good", and a minor modification turns a "highly directed" attack into a global pandemic.
The 1918 flu killed 3-5% of the world's population; a digital pandemic could kill 50% of the world's smartphones or PCs. (Google "firing squad synchronization problem".)
There are nowhere near enough ethicists with top-secret clearances to properly vet some of the craziness being deployed and contemplated.
Think about the havoc that would result from a Stuxnet-quality worm that took out _all_ of the existing iOS devices. (Taking out all of the Android devices is a lot more difficult due to the large number of different versions.)