[Cryptography] Use of RDRAND in Haskell's TLS RNG?

David Johnston dj at deadhat.com
Sat Nov 26 15:52:41 EST 2016

On 11/25/16 12:55 PM, Arnold Reinhold wrote:

> On 11/23/2016 02:23 AM, Darren Moffat wrote:
>> What is a "proper audit" and why do you think that Intel hasn't done that
>> already? What more could they (or any chip designer/builder) do to convince
>> you?
> In addition to the need for the proper published audit that Bear suggested,
> the most glaring defect in the Intel design is the lack of access to the
> un-whitened random bits. Adding a mode that bypassed the whitener would
> have been simple. Statistical analysis of the raw bit stream can provide
> ongoing assurance that the RNG is doing what it says. Likely there will be
> correlations between raw-bit statistics and external parameters such as
> chip temperature and supply voltage. Of course it is possible for a
> deterministic generator to mimic such variations, but it would have to have
> a relatively large footprint on the die compared to simply using the
> whitener in a feedback mode or similar mischief.
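[The kind of ongoing statistical check being asked for here can be sketched
with a NIST SP 800-22-style monobit (frequency) test. The raw bit source in
the example is hypothetical, since, as discussed below, production parts
expose no such tap:]

```python
import math

def monobit_test(bits, alpha=0.01):
    """SP 800-22-style monobit test: is the count of ones consistent
    with a fair source? Returns (p_value, passed)."""
    n = 0
    s = 0
    for b in bits:
        s += 1 if b else -1   # map 1 -> +1, 0 -> -1
        n += 1
    s_obs = abs(s) / math.sqrt(n)
    p_value = math.erfc(s_obs / math.sqrt(2))
    return p_value, p_value >= alpha

# A strongly biased (hypothetical) raw tap fails; a balanced one passes.
p_bad, ok_bad = monobit_test([1] * 900 + [0] * 100)
p_good, ok_good = monobit_test([0, 1] * 500)
```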
This has been addressed multiple times on this list, so once again:

The raw access mode is disabled on production parts because it would be an 
attack vector: log into a VM on a cloud machine, put the shared RNG into 
raw mode, and processes in other VMs would then be receiving unwhitened 
data while believing they were getting the output of a CSPRNG (RdRand) or 
an ENRBG (RdSeed).
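[To illustrate why a silent raw-mode switch is an attack: consumers expect
the conditioned (whitened) output, not the raw samples. The sketch below
uses SHA-256 as a stand-in conditioner; Intel's DRNG actually uses
AES-CBC-MAC-based conditioning feeding an AES-CTR DRBG, so this is only
illustrative:]

```python
import hashlib

def conditioned_output(raw_entropy: bytes) -> bytes:
    """Stand-in conditioner: compress possibly-biased raw samples into
    fixed-size output. SHA-256 is an illustrative substitute for the
    DRNG's real AES-based conditioning."""
    return hashlib.sha256(raw_entropy).digest()

# Visibly biased raw samples: mostly 0xFF bytes.
raw = bytes([0xFF] * 120 + [0x00] * 8)
out = conditioned_output(raw)
# A raw-mode switch would hand every consumer `raw` instead of `out`,
# with no indication that the contract had changed.
```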

If we enabled ring 0 processes to put the shared RNG into raw mode to 
satisfy the curiosity of people on this list, we would be enabling an 
attack on most users. We would also be violating the FIPS 140-2 and 
SP 800-90 requirements. The quality of the raw data is tested internally 
to the DRNG with the algorithms we have published.
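[The flavor of such internal health testing can be sketched with an
SP 800-90B-style repetition count test; the cutoff here is an arbitrary
illustrative value, not the one derived from a claimed min-entropy as in
the real spec, and this is not Intel's published algorithm:]

```python
def repetition_count_test(samples, cutoff=5):
    """SP 800-90B-style repetition count health test: flag the source
    if any value repeats `cutoff` or more times in a row (i.e. the
    noise source looks stuck). Cutoff of 5 is illustrative only."""
    run = 0
    prev = None
    for s in samples:
        run = run + 1 if s == prev else 1
        prev = s
        if run >= cutoff:
            return False  # health test failure
    return True

# A stuck source fails; a varying source passes.
stuck_ok = repetition_count_test([7] * 10)
varied_ok = repetition_count_test([0, 1] * 20)
```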

Feel free to talk to NIST about adding things to their specs to enable 
raw access while simultaneously operating in a FIPS context and a cloud 
context.


More information about the cryptography mailing list