[Cryptography] The FBI can (almost certainly) crack the San Bernardino iPhone without Apple's help

Phillip Hallam-Baker phill at hallambaker.com
Wed Mar 2 00:03:06 EST 2016


On Tue, Mar 1, 2016 at 7:16 PM, John Gilmore <gnu at toad.com> wrote:
> Ron Garret suggested:
>> The attack is not a brute force attack on the AES key, it's a brute force attack on the PIN.  It works like this:
>> 1.  De-solder the flash chip and read its contents
>> 2.  Replace the flash chip with a ZIF socket (probably connected to a short ribbon cable).
>> 3.  Re-install the flash chip and make five guesses at the PIN.
>> 4.  Power down, replace the flash chip with a fresh copy of the original, and go to Step 3.
>
> You may even be able to do a simpler attack, by just filtering out all
> the system's attempts to write to the flash chip.  The standard flash
> chip interface has a Write Protect signal; shorting that pin to
> permanently "on" should prevent any alteration.  Then the software
> won't be able to erase keys, won't be able to increment the count of
> bad attempts, etc.  The question is whether the system will fail for
> other unrelated reasons if its flash chip becomes mysteriously
> read-only.

Unless you have something like the secure enclave, i.e. a dedicated
CPU with direct connection to the associated storage, the system is
going to be hackable.
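
To gauge the cost of the flash-mirroring attack quoted above, some rough arithmetic helps. The figures below are illustrative assumptions (a 4-digit PIN space, 5 guesses before the phone would escalate, and a guessed per-cycle swap time), not measurements:

```python
# Rough worst-case cost of the flash-mirroring PIN attack.
# All figures are illustrative assumptions, not measurements.

PIN_SPACE = 10_000        # 4-digit PIN: 0000-9999
GUESSES_PER_CYCLE = 5     # guesses before the device would lock/escalate
MINUTES_PER_CYCLE = 2     # assumed time to power down, swap flash, reboot

# Worst case: exhaust the whole PIN space.
cycles = PIN_SPACE // GUESSES_PER_CYCLE   # number of restore cycles
hours = cycles * MINUTES_PER_CYCLE / 60   # total attack time in hours

print(f"{cycles} restore cycles, about {hours:.0f} hours worst case")
```

Even under these pessimistic assumptions the attack finishes in days, which is why the escrow-free security of the device cannot rest on the guess counter alone.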

I have been rather peeved by Apple's approach here, because they have
taken a very high-risk strategy that seems to be more about covering
up the fact that the phone, as designed, has a backdoor that cannot
actually be closed.

This was the ground the FBI chose: a high-profile terrorist case.
Now that Scalia is dead, the strategy does not look quite so reckless.
But there is still a substantial risk that if Apple wins in court, the
response will be a push for new legislation.

A system whose security relies on the manufacturer refusing to obey a
warrant isn't acceptably secure.

Yes, Apple's newer iOS devices have the secure enclave. But what about
their desktops? What about Windows? What about Linux?

Microsoft did some really good work on the TPM system some years back,
and I was mighty upset that it got such a bad reception from the
EFF/GNU/BULLRUN crowd.

A Security Computing Unit (SCU) should be as ubiquitous in computer
hardware as a GPU or a WiFi chip.

Yes, infrastructure such as TPM chips and secure enclave could
conceivably be used by DRM systems. But that isn't why those systems
were developed and that is not what they are actually good at.

Stopping leakage of private keys is actually quite a straightforward
technical challenge because to succeed in a confidentiality attack on
MY data you have to break one of MY devices. And I don't have that
many devices connected to my Mesh profile. I very much doubt that I
would ever allow more than a few dozen machines to connect to my
confidential data profile. An attacker would have to get hold of one
of those physically and decap the SCU.
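
The property that matters here is that the key material and the retry
counter live inside the same tamper-resistant part, so there is no
external chip to mirror or restore. A toy sketch of that interface
(the class, method names, and attempt limit are all hypothetical; a
real SCU enforces this in silicon, and an HMAC stands in for a real
device key):

```python
import hashlib
import hmac
import os


class SecureComputingUnit:
    """Toy model of an SCU: the key never leaves the object, and
    repeated authentication failures wipe it irreversibly. All names
    and limits here are hypothetical illustrations."""

    MAX_ATTEMPTS = 5

    def __init__(self, pin: str):
        self._key = os.urandom(32)  # device key, never exported
        self._pin_hash = hashlib.sha256(pin.encode()).digest()
        self._failures = 0

    def sign(self, pin: str, message: bytes) -> bytes:
        """Sign a message iff the PIN is right; wipe the key after
        too many failures. There is deliberately no get_key()."""
        if self._key is None:
            raise RuntimeError("key wiped after too many bad attempts")
        if hashlib.sha256(pin.encode()).digest() != self._pin_hash:
            self._failures += 1
            if self._failures >= self.MAX_ATTEMPTS:
                self._key = None  # irreversible wipe
            raise PermissionError("bad PIN")
        self._failures = 0
        return hmac.new(self._key, message, hashlib.sha256).digest()
```

Because the counter cannot be reset from outside, the flash-mirroring
trick quoted earlier does not apply: the attacker's only remaining
move is to physically decap the part and read the key off the die.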

Stopping leakage of copyright content is a much harder problem because
DRM is a break once, run anywhere problem. Having an SCU might make
the task of the DRM folk a little bit easier but it is still an
intrinsically harder problem than protecting personal digital assets.
