[Cryptography] Fwd: OPENSSL FREAK

Jerry Leichter leichter at lrw.com
Tue Apr 7 21:04:31 EDT 2015


On Apr 7, 2015, at 6:50 PM, dan at geer.org wrote:
> Perhaps what is needed is for embedded systems ..., if having no
> remote management interface and thus out of reach, are a life form
> and as the purpose of life is to end, an embedded system without a
> remote management interface must be so designed as to be certain
> to die no later than some fixed time.
The reaction to such devices is completely predictable:  It's all about profit, they're forcing me to buy a new X when my old X was working perfectly well.  What a scam.

And, of course, there will be plenty of X devices out there for which this will be exactly the case.  Don't you know that the new Version 2 K-cups for coffee makers include DRM to "protect you, the consumer"?

Look at all the complaints that Apple (and now Google) prevents you from reverting your phone to an earlier version of the software.  It's all just a conspiracy to take control of your devices away from you!

There's some device out there - I can't remember what - that is limited by software to 150 uses.  Then you have to go buy another one.  (Or look around on-line for ways to reset the counter.)

Right now, it's only a small minority of techies who complain about this stuff.  But when significant parts of the population find themselves required to replace "perfectly good" devices because the maker decided it was time for them to die ... expect consumer protection laws requiring that makers provide "unlock keys".

I could imagine a requirement that the device shut down, but that it could be revived by establishing a connection to an update service.  But this is much harder than it looks as a general approach.  The obvious thing to do is bring it to the store where you got it, or some similar location, where a direct connection can be made.  But "embedded" devices may well be *physically* embedded, and there may be no practical way to "bring" them anywhere.  Do you then somehow have to reconfigure your network to allow connection?  How many people will know how to do that?  For that matter, how will an embedded device even tell you that it needs an update?  It may not talk directly to you at all - only provide some simple data stream to layers of software.
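To make the shape of that "shut down, but revivable" design concrete, here is a minimal sketch.  Everything in it - the class names, the lease mechanism, the reachability flag - is invented for illustration; the point is only that revival depends on successfully reaching an update service, which a physically embedded device may never manage:

```python
import time

class UpdateService:
    """Stand-in for the maker's update endpoint (hypothetical)."""
    def __init__(self, reachable=True):
        self.reachable = reachable

    def check_in(self, device):
        # A real service would authenticate the device and push
        # patches; here we only model whether it can be reached.
        return self.reachable

class ExpiringDevice:
    """A device that disables itself unless its lease is renewed."""
    def __init__(self, lifetime_seconds, clock=time.time):
        self._clock = clock
        self._expires_at = clock() + lifetime_seconds

    def is_alive(self):
        return self._clock() < self._expires_at

    def renew(self, service, extension_seconds):
        # Revival requires a live connection to the update service;
        # no network path means no renewal, ever.
        if service.check_in(self):
            self._expires_at = self._clock() + extension_seconds
            return True
        return False
```

The failure mode described above falls straight out of the sketch:  if `service.check_in` can never return True for a walled-in device, the device is simply dead at expiry.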

I don't think anything of this sort can work except in constrained environments.  *Maybe* you can get the military to do this - though I have my doubts; a device that may suddenly shut itself down because it *might* be insecure, in the middle of a battle, is not going to get any military support.  Similar kinds of limitations apply in industrial settings.  In fact, I'm finding it hard to come up with a realistic scenario in which such a device would be acceptable:  Basically, from the outside it's a device with an additional failure mode.  Why would I buy such a thing?  There are enough failure modes already!

Besides ... breaks occur at unpredictable times.  A device with a 5-year lifetime against which an attack is found 6 months after release is a hell of a lot more dangerous than a device with no defined lifetime against which no attack has yet been found.

Every time I think about this issue, I come back to the same place:  We need very simple enforcers of very simple security properties - secure kernels - which we can use as a base.  These things have to be so simple that we can realistically convince ourselves that they obviously have no bugs (cue Tony Hoare).  They then never need to be replaced, never need to be upgraded, need no management interface.  Instead, you build your more complex functionality - including a management interface - on top of these things.  The base has enough security properties that you can build a secure update facility for the rest on top of it.
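As a rough sketch of that layering - and nothing more than a sketch - the immutable base could do exactly one job: verify that an update for the layers above it is authentic.  HMAC stands in here for a real public-key signature scheme purely because it is in the standard library; all names are illustrative:

```python
import hashlib
import hmac

class SecureKernel:
    """The never-updated base: small enough to audit exhaustively.
    Its only security property is verifying update authenticity."""
    def __init__(self, root_key: bytes):
        self._root_key = root_key  # baked in at manufacture

    def verify_update(self, blob: bytes, tag: bytes) -> bool:
        expected = hmac.new(self._root_key, blob, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

class UpdatableLayer:
    """Everything replaceable - management interface, application
    logic - lives above the kernel and rides on its guarantee."""
    def __init__(self, kernel: SecureKernel):
        self._kernel = kernel
        self.firmware = b"v1"

    def apply_update(self, blob: bytes, tag: bytes) -> bool:
        if not self._kernel.verify_update(blob, tag):
            return False  # kernel refuses; bad blob never runs
        self.firmware = blob
        return True
```

The kernel itself never changes and needs no management interface; the update facility for everything else is built on its one verified property.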

Can we really build such a thing?  I don't know, though we seem to be at the point where we can prove implementations of entire simple microprocessors and entire OS's secure.  We have enough confidence in *some* of our crypto protocols - particularly symmetric primitives - to think we can keep data safe for tens of years.  So perhaps we're getting there.  Exactly what security properties such a secure kernel should implement, I don't think we know.  It's an interesting question.

> Conversely, an embedded
> system with a remote management interface must be sufficiently
> self-protecting that it is capable of refusing a command.
The only way this kind of capability makes sense is if there's a layer within the remote management interface that you trust to enforce that self-protection - and then you can't update that layer for any *current* purpose, only for possible *future* purposes.  *Safely* fixing bugs in the self-protection mechanism is very challenging.  Maybe this isn't quite my "never to be changed" secure kernel, but it's pretty close.
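One way to picture that trusted-but-frozen layer - command names and policy invented for illustration - is a fixed filter that every remote command must pass before the updatable code ever sees it:

```python
# Hypothetical self-protection layer: a fixed policy that vets
# remote-management commands before the updatable layer executes
# them.  The forbidden set is illustrative only.
FORBIDDEN = {"disable_signature_check", "open_debug_port"}

def policy_layer(command: str) -> bool:
    """Immutable self-protection: refuse any command that would
    undermine the device's own security guarantees."""
    return command not in FORBIDDEN

def handle_remote_command(command: str, executor) -> str:
    if not policy_layer(command):
        return "refused"
    return executor(command)
```

The catch the paragraph above points out is visible here:  `policy_layer` cannot itself be updated through `handle_remote_command` without giving up exactly the guarantee it exists to provide.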

> Inevitable
> death and purposive resistance are two aspects of the human condition
> we need to replicate, not somehow imagine that to overcome them is
> to improve the future.
I don't buy the analogy, sorry.  How does the inevitable death of individual human beings contribute to the *security* of any human-based system?  (Purposive resistance can cut both ways:  If correctly applied to resist an attack, it helps; if incorrectly applied, it's a way to produce denial of service; and if inappropriately applied - to work around or just plain refuse to implement real security requirements - it's a common cause of broken systems.)

                                                        -- Jerry
