[Cryptography] A TRNG review per day: Turbid
waywardgeek at gmail.com
Mon Oct 27 13:09:54 EDT 2014
Turbid is a FOSS TRNG that generates high quality random data from a
system's sound card. It is free, and when calibrated correctly by an expert
with a sound card capable of reliably amplifying thermal noise, it
generates provable amounts of entropy. Being totally FOSS, it's 100%
auditable, making it one of the few decent choices out there for a good
TRNG.
The Turbid paper is here:
It discusses many important concepts in TRNGs, and is an excellent
contribution on its own. It is easier to be a critic than an author of
good ideas, but my role here is pointing out the bad along with the good. I
applaud the authors for excellent work in general, but I do not consider
Turbid's approach of using a sound card as an entropy source to be a
particularly good idea for these reasons:
- A sound card used by Turbid cannot be used for normal audio input,
meaning most users need a second sound card.
- Once a user is buying extra hardware for use as a TRNG, there is no
reason to use a sound card, when a TRNG designed for the purpose can do a
better job.
- Turbid needs to be calibrated for each type of sound card by an expert at
Turbid configuration. Given how few people there are who can do this
correctly, availability of properly tuned Turbid installations will likely
remain limited.
- ALSA has to be patched to ensure exclusive access to the mic input, so a
good sys-admin is also required.
Given the difficulty of analyzing a system's sound card for potential for
producing entropy, it makes more sense, IMO, to use a dedicated hardware
TRNG, where the entropy can be proven once. In this case, using a sound
card is just one possible solution among many, and not my preferred
solution. A circuit that amplifies thermal noise, carefully designed and
shielded for the purpose, would be better, for example, than a random
sound card that was never designed for generating cryptographically secure
random data. There are cheap A/D based TRNGs out
there that do exactly this, though I would go with a OneRNG or possibly an
Entropy Key before one of those. There is a *huge* number of threats to
consider, like whether a USB key can PWN your system, and sound card USB
keys simply aren't designed for security.
However, there is one good thing about Turbid vs custom TRNG hardware. As
IanG states, using a dedicated hardware TRNG is like having a "Kick me"
sign on your back. That device is a prime target for attackers, while
buying a sound card at Best Buy would go unnoticed. However, I would
prefer to rely on the security measures proposed by the OneRNG team than
trying to get a Turbid install right.
Turbid is not the only system with an entropy lower bound proven by
physics. For example, my Infinite Noise Multiplier gives log2(K) bits of
entropy per output bit, even if the only noise is the resistors around the
op-amp. In contrast, Turbid requires a skilled analyst to determine the
lower bound of entropy for any given system. TRNGs using zener noise have
trouble proving such a bound, but those amplifying thermal noise, which are also common,
generate easily provable entropy. Those based on "A/D converter noise" are
common thermal entropy sources.
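For illustration, here is a toy software model of an INM loop. This is my own
simplification (function names, the gain and noise parameters, and the unit
"supply range" are all mine; the real device is an analog circuit): multiply
the state by a gain K modulo the supply range, add a little thermal-style
noise, and emit the top bit. The model predicts roughly log2(K) bits of
entropy per output bit, which is 1 bit here with K=2.

```python
import random

def inm_bits(n, gain=2.0, noise_sd=1e-6, seed=1):
    """Toy model of an Infinite Noise Multiplier: amplify, wrap modulo
    the supply range (normalized to [0, 1)), add tiny noise, emit the
    top bit. Even vanishingly small noise is doubled each step, so it
    dominates the output after a few dozen iterations."""
    rng = random.Random(seed)
    v = 0.5
    bits = []
    for _ in range(n):
        v = (v * gain + rng.gauss(0.0, noise_sd)) % 1.0
        bits.append(1 if v >= 0.5 else 0)
    return bits
```

After a warm-up period during which the injected noise is amplified up to
the full range, the output bits become balanced, regardless of how small
`noise_sd` is.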
The paper states:
"It harvests entropy from physical processes, and uses that entropy
efficiently. The hash saturation principle is used to distill the data, so
that the output has virtually 100% entropy density. This is calculated from
the laws of physics, not just statistically estimated, and is provably
correct under mild assumptions."
I particularly like their coverage of the hash saturation principle. This
is used by most TRNGs. This paper quantifies how many extra bits of entropy
are needed to saturate the entropy pool, and it is surprisingly few! I feed
in 2X as much input entropy as I take out as hashed output data, which may
be overkill.
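The effect is easy to see in a toy model of my own construction (this is not
the paper's math; the 8-bit "hash" built from the first byte of SHA-256 is an
assumption for illustration): feed k bits of input entropy into an 8-bit hash
and measure the Shannon entropy of the output distribution.

```python
import hashlib
import math
from collections import Counter

def toy_hash(i: int) -> int:
    # 8-bit "hash": first byte of SHA-256 over the input integer
    return hashlib.sha256(i.to_bytes(8, "big")).digest()[0]

def output_entropy(k: int) -> float:
    """Feed 2**k equally likely distinct inputs (k bits of input entropy)
    through the 8-bit toy hash and return the Shannon entropy of the
    resulting output distribution, in bits."""
    n = 2 ** k
    counts = Counter(toy_hash(i) for i in range(n))
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

With k = 8 the output falls noticeably short of 8 bits, but by k = 12 it is
within a few hundredths of a bit, and by k = 16 the deficit is negligible:
only a handful of surplus input bits are needed to saturate the pool.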
Getting a system to work well with Turbid first requires a "good-quality"
sound card:
"We start with a raw input, typically from a good-quality sound card."
I would dispute “good-quality” here. What they need is an A/D converter
with enough bits to digitize the thermal noise on the mic input. A 24-bit
A/D converter is simply a marketing tool, since the low 8-ish bits will be
random. That's not a “good” sound card, IMO, probably just a waste of
money, but it is wonderful for use as a TRNG. A sensible 12-bit A/D mic
input is probably unusable with Turbid.
Here's what they say in Appendix B about their assumptions:
"Let C be a machine that endlessly emits symbols from some alphabet Z. We
assume the symbols are IID, that is, independent and identically
distributed. That means we can calculate things on a symbol-by-symbol
basis, without worrying about strings, the length of strings, or any of
that. Let PC(i) denote the probability of the ith symbol in the alphabet.
Note: Nothing is ever exactly IID, but real soundcards are expected to be
very nearly IID. At the least, we can say that they are expected to have
very little memory. What we require for good random generation is a very
mild subset of what is required for good audio performance."
I had difficulty reading their proofs given this invalid assumption that
samples are independent. They are not independent, or even close to
independent. However, I read through the paper, and can see how the
arguments can be extended to handle correlation between samples easily
enough. Their conclusions seem sound to me, but this assumption was a
short-cut they didn't need to take. It also set off alarms in my head when
I read it. I had read this assumption a while back, and stopped reading the
paper right there. I didn't return to Turbid until today, and if you had
asked me about Turbid yesterday, I would have had some uncomplimentary
things to say about the authors' making unrealistic assumptions and
"proving" things with them, just like a lot of snake-oil TRNG
manufacturers do.
They also say:
"We use no secret internal state and therefore require no seed, no
This is touted as a strength when in fact it is a weakness. Turbid uses
SHA-1 to concentrate entropy and whiten its output. If they were to use
the init/update/finalize interface, make a copy of the state before
finalizing, and use that copy for the next sample, they could carry entropy
from one SHA-1 application to the next, which would make their output
less predictable. Some inputs they pass to SHA-1 will be far more likely
than others, and because of this, the corresponding outputs will also be
more likely.
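Here is a minimal sketch of the change I'm suggesting, using Python's
hashlib (the class name and interface are mine, not Turbid's code): finalize
a copy of the SHA-1 state for each output, so the live state keeps absorbing
every sample seen so far.

```python
import hashlib

class CarryingWhitener:
    """Sketch of a whitener that carries entropy across samples: the
    SHA-1 state is never reset, and each output digest is taken from a
    finalized *copy*, so every earlier sample keeps influencing every
    later output."""

    def __init__(self, seed: bytes = b""):
        self._h = hashlib.sha1(seed)

    def whiten(self, sample: bytes) -> bytes:
        self._h.update(sample)          # absorb the new noise sample
        return self._h.copy().digest()  # finalize a copy; state persists
```

With this construction, two identical input samples produce different
outputs, because the second digest covers the accumulated history rather
than the sample alone.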
They go on to say:
"Best performance and maximally-trustworthy results depend on proper
calibration of the hardware. This needs to be done only once for each make
and model of hardware, but it really ought to be done. Turbid provides
extensive calibration features."
I feel this is the single most important point about Turbid. So long as
someone skilled at the task calibrates Turbid for each revision of each
make and model of hardware, assuming it has a suitable sound input that
no one wants to use for inputting sound, it can be made secure. How
often do systems have redundant sound inputs? How many skilled technicians
do we have seeking out these otherwise-unused mic inputs for use with Turbid?
In section 4, "surprisal" is discussed, but with the assumption that each
output symbol from the sound card is independent of the others, apparently
regardless of how fast the mic input is sampled, which is far from being
true. However, sampling fast will capture more of the available entropy, so
there's no harm in doing so. I would feel better about Turbid if they were
to estimate the entropy in the input, and compare this estimate to the
theoretical result, and show that there is a close match. I do this for my
INM, for example, and others do this for their TRNGs. I built three of
them yesterday, and all three output measured entropy within 0.5% of the
model's prediction. Turbid's theory is solid, but when a Turbid technician
goofs, it would be nice to catch the error.
Sound samples will be correlated when sampled at high speed. To help
account for this short-term correlation, Turbid could keep a histogram of
the next sample given several previous samples. This would give a good
estimate of surprisal, allowing more accurate entropy estimation. This
could then be compared to the predicted entropy.
The paper states:
"If there is some 60-cycle hum or other interference, even something
injected by an adversary, that cannot reduce the variability (except in
This is the basic concept behind an Infinite Noise Multiplier, where
signals added by an attacker cannot reduce the entropy of the output. Many
other TRNGs also rely on this principle, and like Turbid, an attacker who
can inject a large enough signal can saturate the output, controlling the
bits produced. This problem is worse in cheap zener-noise TRNGs, which
saturate easily, but with a 24-bit A/D, not much amplification is required
to sample thermal noise.
"We also need a few specialists who know how to tune a piano. Similarly,
we need a few specialists who understand in detail how turbid works.
Security requires attention to detail. It requires double-checking and
"Understanding turbid requires some interdisciplinary skills. It requires
physics, analog electronics, and cryptography. If you are weak in one of
those areas, it could take you a year to catch up."
This is the weakest point of Turbid, IMO. Security needs to be simple in
order to be secure. As the paper shows with the line-in example on a
ThinkPad, if the upstream gain times the expected thermal noise level
(given the capacitance to GND) is less than 1 bit's worth of input voltage
on the A/D converter, then most of the entropy will be lost. Other TRNG
architectures do not have such problems. Turbid is difficult to get right.
Section 8.3 is titled, "Whitener Considered Unhelpful". This is just a
matter of semantics, IMO. I would call Turbid's output hash function a
whitener, so hearing them claim whiteners are not helpful seems strange to
me. Most people working on TRNGs would call Turbid's output hash a
whitener, I think.
I would prefer Blake2b rather than SHA-1 in Turbid, since it is faster
and more secure, and they should keep the internal state for the next
snippet of data to randomize the chances of any given output occurring,
rather than what they have now, where some outputs are more likely than
others.
The health checks for Turbid sound weak, such as checking for bits stuck
at 1 or 0. In my INM driver, as well as drivers for OneRNG, Entropy Key,
and others, entropy is statistically estimated; if any sample fails, it is
discarded, and if this continues for long enough, all output stops.
Here's a part in the paper I found very helpful:
"A subtle type of improper reseeding or improper stretching (failure 3) is
pointed out in reference 22. If you have a source of entropy with a small
but nonzero rate, you may be tempted to stir the entropy into the internal
state of the PRNG as often as you can, whenever a small amount of entropy
(ΔS) becomes available. This alas leaves you open to a track-and-hold
attack. The problem is that if the adversaries had captured the previous
state, they can capture the new state with only 2^ΔS work by brute-force
search, which is infinitesimal compared to brute-force capture of a new
state from scratch. So you ought to accumulate quite a few bits, and then
stir them in all at once (“quantized reseeding”). If the source of entropy
is very weak, this may lead to an unacceptable interval between reseedings,
which means, once again, that you may be in the market for a HRNG with
plenty of throughput, as described in this paper."
This is why TRNGs should mix entropy into /dev/random only in
cryptographically strong chunks; 256 bits at a time should do the trick.
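The quantized-reseeding discipline amounts to simple accounting, sketched
below (class name and interface are mine; the pool write is stubbed out,
where a real driver would condition the chunk and feed it to /dev/random):

```python
class QuantizedReseeder:
    """Accumulate raw entropy and release it only in full 256-bit
    chunks, so an adversary who knows the old pool state cannot
    brute-force each small increment as it arrives."""

    CHUNK_BITS = 256

    def __init__(self, bits_per_byte: float):
        # Conservative estimate of entropy per raw byte, supplied by
        # the caller from the source's model.
        self.bits_per_byte = bits_per_byte
        self.buf = bytearray()
        self.buf_bits = 0.0
        self.released = []  # stub: a real driver would write to the pool

    def add(self, raw: bytes):
        self.buf += raw
        self.buf_bits += len(raw) * self.bits_per_byte
        if self.buf_bits >= self.CHUNK_BITS:
            self.released.append(bytes(self.buf))
            self.buf = bytearray()
            self.buf_bits = 0.0
```

With, say, a conservative 2 bits of entropy per byte, nothing is released
until 128 raw bytes have accumulated, and then the whole chunk goes in at
once.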
They also said:
"Therefore in some sense /dev/urandom can be considered a stretched random
generator, but it has the nasty property of using up all the available
entropy from /dev/random before it starts doing any stretching. Therefore
/dev/urandom provides an example of bad side effects (failure 4). Until the
pool entropy goes to zero, every byte read from either /dev/random or
/dev/urandom takes 8 bits from the pool. That means that programs that want
to read modest amounts of high-grade randomness from /dev/random cannot
coexist with programs reading large amounts of lesser-grade randomness from
/dev/urandom. In contrast, the stretched random generator described in this
paper is much better behaved, in that it doesn’t gobble up more entropy
than it needs."
Linux (at least Ubuntu 14.04) lets users read from /dev/random when only 64
bits of entropy exist in the pool, meaning if an attacker knows the state
when the pool is at 0, he can guess your keys read from /dev/random in 2^64
guesses. I guess in real life that's a lot, but I think this makes it
harder than it needs to be to reseed the Linux pool when compromised.
Force-feeding the entropy pool >= 4096 bits *might* be good enough... not
100% sure. Why isn't the lower limit something stronger, like 160 bits or
more?
The paper states:
"The least-fundamental threats are probably the most important in
practice. As an example in this category, consider the possibility that the
generator is running on a multiuser machine, and some user might
(inadvertently or otherwise) change the mixer gain. To prevent this, we
went to a lot of trouble to patch the ALSA system so that we can open the
mixer device in “exclusive” mode, so that nobody else can write to it."
Instructions are provided for patching ALSA. However, until those patches
are mainstream, users of Turbid will also need to be good at system
administration.
Turbid violates my KISS rule. Security this complex isn't secure.
All that said, I think their paper is outstanding, and I benefited
substantially from reading it. I just don't expect to set up a Turbid
installation any time soon. It *is* excellent work, however, and it
advanced the state of the art so far as I know it (which is limited) a ton.