[Cryptography] Something that's bothering me about the heartbleed discussion.....

Jerry Leichter leichter at lrw.com
Thu Apr 17 23:47:28 EDT 2014


On Apr 17, 2014, at 10:28 PM, Peter Trei <petertrei at gmail.com> wrote:
> So it's not just OpenSSL.  It's every bit of code that *uses* OpenSSL, and every bit of code that *uses* the code that *uses* OpenSSL.
> 
> I think you may have missed my point. This style of security hole could exist in server programs which don't use OpenSSL; indeed, which don't use crypto at all.
Oh, I got that.  I used OpenSSL as an example.

> All it requires is that the client/server protocol allows the client to cause an unchecked read as I described, and that sensitive data be available in program-accessible memory, whether put
> there by the server, or dredged up unzeroed in a malloc.
> 
> Fixing crypto code, and/or walling it off as you suggest, won't prevent
> Heartbleed-style bugs in other server code. 
What it will do is lower the sensitivity level of the data available to such a bug.  Any bug that allows an attacker to read arbitrary memory from a server is a problem, but Heartbleed caused such a panic because that memory could - and often did - contain very high-value information (like keys).  As part of a defense in depth, it makes sense to lower the expected - and perhaps also the maximum - value of that information.

> We've known for years that buffer overflows can be used for code injection. In Heartbleed, we're seeing the same problem being used for data exfiltration.
Not quite.  What Heartbleed returns is the contents of memory returned by malloc().  If the memory allocator cleared memory on free, the actual value to an attacker of reading uninitialized memory would be zero.
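A clear-on-free allocator along those lines is easy to sketch (a toy of my own devising, not OpenSSL's actual allocator): stash the size in a hidden header and scrub the payload before releasing it, so memory handed out by a later malloc never contains stale secrets.

```c
#include <stdlib.h>
#include <string.h>

/* Allocate n bytes, remembering n in a header just before the payload. */
void *scrub_malloc(size_t n) {
    size_t *p = malloc(sizeof(size_t) + n);
    if (!p) return NULL;
    *p = n;                      /* stash the size before the payload */
    return p + 1;
}

/* Zero the payload before returning it to the allocator. */
void scrub_free(void *q) {
    if (!q) return;
    size_t *p = (size_t *)q - 1;
    memset(q, 0, *p);            /* clear payload before releasing */
    free(p);
}
```

One real-world caveat: a compiler may legally optimize away a plain memset() just before free(), since the memory is "dead"; production code would use explicit_bzero() or C11's memset_s() for the scrub.  (OpenSSL's own freelists famously bypassed whatever hygiene the system allocator might have provided.)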

> Fixes which prevent read/write access to code segment memory, or execution 
> of data as code, won't solve this. Perhaps Intel MPX will, once we move to 
> processors  which have it, compilers support that feature, and server software 
> is rebuilt.
There are a million possible attacks.

We've spent decades developing a model in which the unit of mutual suspicion is the process, and the hardware provides strong isolation between processes.  Then we turn around and cram mutually suspicious contexts into a single process, with no hardware support for inter-context isolation.  That leaves the compiler, or the user code itself.  The first can be effective, though history is against it.  (When was the last time you heard of a hardware bug that allowed a process to gain access to another process's memory?  They certainly happen, but about all I can recall seeing in many years are timing attacks through shared caches and TLBs and such.  On the other hand, problems with compilers are commonplace.  It's just so much harder to generate code that will be correct, first time every time, than to have hardware that's checking a small number of conditions at every instruction.)  Still, we're getting pretty good at it.

The last - explicit user code - is what most of the Internet relies on.  It's *very* delicate - always one bug away from disaster.

Ironically, the original Unix model - of forking a new process for each incoming connection - was safe, but it failed to scale.  Could we somehow build scalable systems with the good characteristics of such isolation?  Linux actually started out with finer-grained versions of fork - clone() and its sharing flags - allowing you to share or not share various bits of process state.  All that's still in there, but people were used to the full fork and insisted on it - and it's not clear the more tightly constrained primitives would have had sufficient performance in any case.  Other models of things we might loosely call "processes" have existed in other OS's.  There was an old DEC real-time OS in which "tasks" shared a common heap, but had hardware-isolated stacks.  In this kind of model, if you kept all your transaction-specific stuff in stack locals, you'd be safe.  (Not appropriate for heap-based programming models in languages like Java - but an illustration that other approaches are possible.)
                                                        -- Jerry

