[Cryptography] Apple GovtOS/FBiOS & Proof of Work

Jerry Leichter leichter at lrw.com
Sat Mar 19 18:54:18 EDT 2016


> If Apple is willing to put some serious Proof of Work into constructing *every* firmware update, then it could achieve some level of privacy...
This discussion has gotten into all kinds of technologies, but it's not clear to me that the underlying properties achieved make much sense.  So let's go back and look at what we're trying to accomplish and why.

Problem:  Apple (the working example) could be forced into producing and signing a "bad" update.  We want to make sure that phones refuse to apply "bad" updates.  (It's worth noting that the assumption that you can get Apple to issue any update you want and remain quiet about it is equivalent, as far as anyone on the outside could tell, to being able to get a copy of Apple's signing key.)

Proposed Solution:  Apple binds itself into a situation in which it can, at least on average, issue no more than one legitimate update per interval T.  Since it will, in fact, *always* issue an update once per interval T, if someone manages to force it to issue a "rogue" update, when the time comes to issue the next legitimate one, it won't be able to, and the "rogue" update will be "outed".

Note that as given, this solution requires no cryptography at all!  It's ultimately the phone that has to enforce the property that only one update per interval will be treated as valid, and it can do that by simply remembering the time of the last update.
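As a sketch of that non-cryptographic enforcement, here is roughly what the phone-side check could look like.  This is a minimal illustration, not anyone's actual update mechanism; the class name, the concrete value of T, and the in-memory timestamp (which a real phone would have to persist in tamper-resistant storage) are all hypothetical:

```python
import time

UPDATE_INTERVAL_T = 90 * 24 * 3600  # hypothetical T of ~3 months, in seconds


class UpdateGate:
    """Phone-side rate limit: treat at most one update per interval T as valid.

    No cryptography is needed for the rate limit itself -- the phone just
    remembers when it last applied an update.
    """

    def __init__(self):
        # In a real device this would live in persistent, tamper-resistant storage.
        self.last_update_time = None

    def try_apply(self, update, now=None):
        now = time.time() if now is None else now
        if (self.last_update_time is not None
                and now - self.last_update_time < UPDATE_INTERVAL_T):
            return False  # refuse: an update was already applied this interval
        self.last_update_time = now
        return True  # accept and remember when we did
```

A second update arriving within T of the first is simply refused, regardless of its signature.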

So ... does this solution help?  Not really.  Given the assumption that the attacker can force Apple to sign anything, and the assumption that they are willing to send a rogue update to every phone in the world, at best this delays a rogue push by T/2 on average.

If you remove the second part of the assumption and require that the rogue update be delivered only to a single phone ... you're no better off.  Within T/2 at most, every phone in the world except the target receives a signed proper update, and the single target receives the "rogue" update.  The only way it could tell that its update is rogue is by comparing it to (a sample of) everyone else's.  But if it has that ability ... where's the need for the timing mechanism?
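The cross-check alluded to above amounts to comparing a digest of your own update against digests reported by a sample of other phones.  A minimal sketch, assuming SHA-256 as the identifier and leaving entirely open the hard part (how a phone securely obtains an honest sample of peer digests in the first place):

```python
import hashlib
from collections import Counter


def update_digest(update_bytes: bytes) -> str:
    # Identify an update by its SHA-256 digest.
    return hashlib.sha256(update_bytes).hexdigest()


def looks_rogue(my_update: bytes, peer_digests: list, quorum: float = 0.5) -> bool:
    """Flag our update as rogue if at least a quorum of sampled peers
    agree on a digest that differs from ours.  Obtaining peer_digests
    securely is exactly what this sketch does not solve."""
    mine = update_digest(my_update)
    most_common_digest, votes = Counter(peer_digests).most_common(1)[0]
    return most_common_digest != mine and votes / len(peer_digests) >= quorum
```

But as the paragraph above notes, if a phone already has a trustworthy channel to sample everyone else's digests, the timing mechanism adds nothing.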

And, of course, this ignores the need for emergency updates.  But even regular updates tend to be reasonably frequent - T really can't be more than 3 months or so, which makes the average delay to a rogue push only 6 weeks.  That's not much protection.

So ... it's not at all clear to me what you would gain by this mechanism - regardless of the fancy technologies used (unnecessarily) to implement it.

                                                        -- Jerry


