[Cryptography] cheap sources of entropy

Jerry Leichter leichter at lrw.com
Mon Feb 3 18:12:40 EST 2014

On Feb 3, 2014, at 12:14 PM, John Kelsey <crypto.jmk at gmail.com> wrote:
>> So, if an attacker running malware in a hypervisor (or SMM) knew you
>> were depending on disk drive timings for the random numbers that
>> create your encryption keys, how easily could they attack you by
>> rigidizing those interrupt timings, e.g. delaying your virtual machine
>> interrupts to the next even 1/60th of a second?
> Maybe this is just my lack of understanding coming out, but I'm having a hard time seeing how any crypto code is going to remain secure if the hypervisor controlling the VM it's running on is under an attacker's control.  
No, you've got it right.  A hypervisor has complete control over everything the guest OS can see.  It controls the horizontal; it controls the vertical.  :-)  While a hypervisor could not realistically monitor every instruction a guest executes - the slowdown would be recognized rapidly - it does have control over every interaction the guest has with the outside world.  And that includes, of course, program loading, paging, and so on.
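To make the quoted attack concrete, here is a toy simulation (Python, with made-up jitter figures - the 1 ms service-time jitter and 1/60 s delivery grid are assumptions for illustration, not measurements) showing how snapping interrupt delivery to a fixed grid collapses the entropy of inter-arrival timings:

```python
import collections
import math
import random

def shannon_entropy(samples):
    """Empirical Shannon entropy in bits per sample."""
    counts = collections.Counter(samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(0)
# Hypothetical disk-interrupt arrival times: one per 1/60 s,
# plus up to 1 ms of genuine service-time jitter.
raw = [i / 60 + random.uniform(0, 1e-3) for i in range(10_000)]

# The same stream after a malicious hypervisor delays each delivery
# to the next 1/60 s boundary, as in the quoted attack.
rigid = [math.ceil(t * 60) / 60 for t in raw]

def deltas_us(times):
    """Inter-arrival gaps in whole microseconds - the usual entropy input."""
    return [int((b - a) * 1e6) for a, b in zip(times, times[1:])]

print(f"jittered deltas:  {shannon_entropy(deltas_us(raw)):.2f} bits/sample")
print(f"rigidized deltas: {shannon_entropy(deltas_us(rigid)):.2f} bits/sample")
```

The rigidized stream is perfectly predictable (every gap is exactly 1/60 s), so an entropy pool fed from it collects nothing, while the pool believes it is collecting several bits per interrupt.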

In general*, every layer of the abstraction stack can completely subvert every layer above it.  A user-mode process is completely at the mercy of the OS; the OS is completely at the mercy of the hypervisor; the hypervisor is completely at the mercy of the microcode; the microcode is completely at the mercy of the hardware.  (Everything above the microcode is also completely at the mercy of System Management Mode, which doesn't fit exactly into this hierarchy - but in some sense is really "below" the hypervisor.  Similarly, you have to consider attacks on pieces of hardware other than the CPU - see the paper recently referred to here showing how to "spike" disk drive code.)

*The "in general" isn't quite the case.  There's some very interesting recent work on how to use the virtualization hardware to protect processes from malicious OS's.  Very clever stuff.  Basically, you arrange that any given page is set in the hardware so that the process XOR the OS has access to it.  When the OS needs access, you encrypt and MAC the page before resetting the page tables; when the process needs access back, you check the MAC and decrypt.  You similarly protect anything that gets written to disk.  So the OS can manage the processes - create them, give them memory, map code into them - but it can neither read nor modify the code that gets run or anything that code actually does in memory.  (There are tons more details.  I can't immediately lay my hands on the paper I read, but https://www.usenix.org/legacy/events/hotsec08/tech/full_papers/ports/ports_html/ is an overview from some members of the group working on this.)
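The seal/unseal step in that scheme can be sketched as follows.  This is my own toy illustration in Python, not code from the paper; the SHA-256-counter keystream stands in for whatever real cipher the actual system uses, and the class and function names are all invented:

```python
import hashlib
import hmac
import os
import secrets

PAGE_SIZE = 4096

def keystream(key, nonce, length):
    """Toy CTR-mode keystream built from SHA-256 - illustrative only;
    a real system would use a proper AEAD cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

class ProtectedPage:
    """One guest page, sealed whenever the untrusted OS may touch it."""

    def __init__(self, enc_key, mac_key):
        self.enc_key = enc_key
        self.mac_key = mac_key

    def seal(self, plaintext_page):
        """Encrypt-then-MAC before flipping the page over to the OS."""
        nonce = secrets.token_bytes(16)
        ks = keystream(self.enc_key, nonce, PAGE_SIZE)
        ct = bytes(a ^ b for a, b in zip(plaintext_page, ks))
        tag = hmac.new(self.mac_key, nonce + ct, hashlib.sha256).digest()
        return nonce, ct, tag

    def unseal(self, nonce, ct, tag):
        """Verify the MAC, then decrypt, before the process gets the page back."""
        expected = hmac.new(self.mac_key, nonce + ct, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("page was modified while under OS control")
        ks = keystream(self.enc_key, nonce, PAGE_SIZE)
        return bytes(a ^ b for a, b in zip(ct, ks))

page = ProtectedPage(os.urandom(32), os.urandom(32))
original = os.urandom(PAGE_SIZE)
nonce, ct, tag = page.seal(original)
assert page.unseal(nonce, ct, tag) == original  # round trip succeeds
```

The point of the encrypt-then-MAC ordering is exactly the property the paragraph describes: the OS can shuffle the sealed page around (paging, swap, migration) but any read yields ciphertext, and any modification is caught by the MAC check on the way back in.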

The reason I mention this is to raise the question:  Can similar ideas be applied at other levels of the abstraction hierarchy?  Until I saw these papers, I would have dismissed the idea that one could protect a process from a malicious OS.  Granted, you now have to trust the virtualization hardware and the protection code that uses it.  But (a) you always end up trusting *something*; (b) the pieces you have to trust are much smaller (hence much more amenable to verification) than a full OS.

                                                        -- Jerry

