[Cryptography] Use of RDRAND in Haskell's TLS RNG?

Viktor Dukhovni cryptography at dukhovni.org
Sat Nov 19 22:59:54 EST 2016


I've been teaching myself Haskell lately, while developing some
new code for my ongoing DANE TLSA survey[*].  As part of that, I
need a TLS stack, and have started exploring Network.TLS.

A syscall trace of the resulting TLS client code shows that while
it probes for the existence of /dev/random and /dev/urandom to seed
the per-context DRBG (it uses ChaCha for that), it does not (on my
Intel laptop) end up reading either device.  This is because the
Entropy sources are configured as:

    Crypto.Random.Entropy.Backend:

        -- | All supported backends
        supportedBackends :: [IO (Maybe EntropyBackend)]
        supportedBackends =
            [
        #ifdef SUPPORT_RDRAND
            openBackend (undefined :: RDRand),
        #endif
        #ifdef WINDOWS
            openBackend (undefined :: WinCryptoAPI)
        #else
            openBackend (undefined :: DevRandom),
            openBackend (undefined :: DevURandom)
        #endif
            ]
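
A throwaway test along the lines below (my own scratch code, assuming
the cryptonite package, with getEntropy being, as far as I can tell,
the entry point that walks the backend list above) is enough to
reproduce the trace result:

    import           Crypto.Random.Entropy (getEntropy)
    import qualified Data.ByteString       as B

    -- Ask the entropy machinery for 32 bytes; running this under strace
    -- shows whether /dev/random or /dev/urandom is ever actually read.
    main :: IO ()
    main = do
        bytes <- getEntropy 32 :: IO B.ByteString
        print (B.length bytes)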

For each request, each backend is tried in turn (non-blocking in
the /dev/random case), each contributing up to the number of bytes
still needed, until the full request is satisfied.  With RDRAND
always ready and willing to provide bytes, the others go unused.
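
In other words, the selection logic amounts to something like the
sketch below.  This is my own restatement, not the library's actual
code; the Backend type is just a hypothetical stand-in for a
per-backend "read up to n bytes" action:

    import qualified Data.ByteString as B

    -- Hypothetical stand-in for one backend: return at most n bytes.
    type Backend = Int -> IO B.ByteString

    -- Walk the backends in order, asking each for whatever is still
    -- missing, and stop as soon as the request is satisfied.
    gatherAll :: [Backend] -> Int -> IO B.ByteString
    gatherAll backends want = go backends want []
      where
        go _        0    acc = pure (B.concat (reverse acc))
        go []       _    acc = pure (B.concat (reverse acc))  -- ran short
        go (b : bs) need acc = do
            chunk <- b need
            go bs (need - B.length chunk) (chunk : acc)

With RDRAND first in the list and able to satisfy the whole request
on its own, the recursion bottoms out before the device backends are
ever consulted.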

What is the crypto community's current state of concern around
RDRAND?  Should Haskell's Crypto avoid seeding exclusively from
RDRAND?
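
If the answer turns out to be "don't trust RDRAND alone", one cheap
mitigation would be to mix it with the kernel's CSPRNG rather than
replace it.  The sketch below is only an illustration of that idea,
not a proposal for the library's API; the 40-byte seed size, the
direct read of /dev/urandom and the use of os2ip to build a Seed are
all my own choices:

    import           Crypto.Number.Serialize (os2ip)
    import           Crypto.Random           (ChaChaDRG, drgNewSeed, seedFromInteger)
    import           Crypto.Random.Entropy   (getEntropy)
    import           Data.Bits               (xor)
    import qualified Data.ByteString         as B
    import           System.IO               (IOMode (ReadMode), withFile)

    -- Seed a ChaCha DRG from the XOR of two independently obtained
    -- inputs: the built-in entropy path (RDRAND on this build) and
    -- /dev/urandom read directly.  The mix is at least as unpredictable
    -- as the stronger of the two sources.
    mixedDRG :: IO ChaChaDRG
    mixedDRG = do
        hw <- getEntropy 40 :: IO B.ByteString
        os <- withFile "/dev/urandom" ReadMode (\h -> B.hGet h 40)
        let mixed = B.pack (B.zipWith xor hw os)
        pure (drgNewSeed (seedFromInteger (os2ip mixed)))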

-- 
	Viktor.

[*] For my purposes (DANE deployment surveys), the RNG could yield
    a constant stream of 9's and I'd be none the worse for wear,
    but other users may have stronger security expectations.

	http://dilbert.com/strip/2001-10-25

