[Cryptography] Hard Truths about the Hard Business of finding Hard Random Numbers

ianG iang at iang.org
Fri Jan 31 02:55:57 EST 2014


On 31/01/14 04:29 AM, John Kelsey wrote:
> On Jan 29, 2014, at 7:38 PM, John Denker <jsd at av8n.com> wrote:
> ...
>> Also:  One point that the web page doesn't mention:  It helps
>> to use general-purpose components.  Using special-purpose 
>> crypto chips (including RNG chips) is like putting a "kick 
>> me" sign on your own back.  In contrast, a sound card can be 
>> put to lots of different uses, and it is relatively hard for 
>> the bad guys to mess with it in a way that subverts the crypto 
>> without making the device unusable for other purposes.
> 
> I very strongly disagree with this.  There is a tradeoff between purpose-built crypto hardware, and off-the-shelf computers and devices pressed into service to do crypto.

It's definitely a trade-off; we're all agreed here.  We are in
engineering space.
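For concreteness, Denker's sound-card idea might look something like the sketch below.  This is an illustration, not a vetted design: `os.urandom` stands in for a real audio-capture API, and keeping only the least-significant bit of each sample before hashing is one simple whitening choice among many.

```python
import hashlib
import os
import struct

def extract_entropy(samples, pool=b""):
    """Whiten noisy audio samples: keep only the least-significant
    bit of each sample (where the analogue noise lives), pack the
    bits into bytes, and hash them together with the existing pool."""
    bits = [s & 1 for s in samples]
    packed = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        packed.append(byte)
    return hashlib.sha256(pool + bytes(packed)).digest()

# Stand-in for a real sound-card read: os.urandom simulates
# 1024 signed 16-bit samples of microphone noise.
raw = os.urandom(2048)
samples = struct.unpack("<1024h", raw)
pool = extract_entropy(samples)
```

The point of the whitening step is that even a heavily biased sample stream contributes *some* unpredictability, and the hash compresses it down conservatively.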


> The purpose-built crypto hardware and software is a bigger target for very high end attackers, but it is also almost certain to be designed to be harder to tamper with in the field, and it's probably designed with security in mind to a far greater extent than general-purpose hardware and software.


Yes, but the cost of that is very high.  Moderately well designed
equipment seems to run to around $1000, and for top-class equipment, add
another zero.  And that's before we add the much higher costs to program
and maintain the gear.  These costs aren't easily split across projects
(proprietary dev kits typically just push the problem around), whereas
the costs generated by FLOSS tools are more easily spread out.

Which is to say, we pay a high price for that supposedly better designed
security, and the justification is often not economic but
mandatory/compliance-driven.


> Worse, if some commonplace software or hardware component becomes the thing everyone bases their entropy collection on, that will become a tempting point for a targeted attack, but the sound card manufacturer or whatever won't think they're primarily building a security product.  


Oh, that we had that problem!  Please!

The market will self-correct soon enough; serious security folk
will come out with more serious solutions.


> A dedicated crypto device can be designed to try to resist a lot of attacks that will pretty trivially compromise most off the shelf hardware and software devices, like side-channel attacks.  It normally will be resistant to compromise by someone who takes over the computer it's installed in or connected to.


Yes, *but* how many of those attacks are real?  Validated?  A clear and
present danger?  If the market were allowed to operate (which means
getting rid of the compliance millstone that breaks the economic
equation), then we would find out.  Then there would be a market for
better gear.


> It can have an entropy source that's purpose-designed and analyzed as an entropy source, reasonably resistant to intentional or accidental outside interference, etc.


Well, except that a cold-hearted analysis in the hard light of day
has it that the entropy source is likely the low-hanging fruit for a
built-in compromise of a high-value HSM.


> For whatever it's worth, it can also be tested by some organization that validates hardware crypto devices.  Those validations all have problems, but they're probably better than no validation, which is the practical alternative.  


As it is, the compliance model has killed the market for serious gear:
the costs are so high that no compromise is possible, which forces the
costs higher still and delivers less economy.  It's a self-defeating
feedback cycle, so the only recommendation that makes any sense is to
avoid compliance gear altogether, and do the best you can without it.

I agree with the theory of the compliance model.  I think it is,
however, evident that it has failed to deliver a workable and economic
product to market.


> ...
>>  a) We agree that statistical tests on the output are mostly
>>   window-dressing.  As Dijkstra said, testing can show the 
>>   presence of bugs, but it can never show the absence of bugs.
> 
> How do you recognize when your source is no longer behaving according to the model you so carefully built of its behavior, if you aren't doing some kind of ongoing health testing?  


Which is just shifting the burden.  The on-board testing is itself
subject to all the other questions...
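That said, Kelsey's point about catching a source that stops behaving is easily illustrated.  Here is a minimal sketch in the style of NIST SP 800-90B's repetition count test; the cutoff value depends on the entropy you claim per sample, and the one below is an illustrative assumption, not a computed bound.

```python
def repetition_count_test(samples, cutoff=34):
    """Flag a stuck source: raise if any value repeats `cutoff` or
    more times in a row (cf. the repetition count health test of
    NIST SP 800-90B; the cutoff here is illustrative only)."""
    run_value, run_length = None, 0
    for s in samples:
        if s == run_value:
            run_length += 1
            if run_length >= cutoff:
                raise RuntimeError("entropy source failure: stuck output")
        else:
            run_value, run_length = s, 1

# A varying stream passes silently; a stuck source trips the alarm.
repetition_count_test([0, 1, 0, 1, 1, 0])
try:
    repetition_count_test([7] * 40)
    tripped = False
except RuntimeError:
    tripped = True
```

A test like this can't certify entropy (statistical tests are window-dressing, as above), but it does catch the catastrophic failure mode of a dead or wedged source.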


> ...
>> Let's be clear:  You can have a HRNG without a PRNG but not
>> vice versa.
> 
> Right.  The goal of your entropy source really needs to be to generate an impossible to guess seed for your PRNG, and then to periodically reseed it.


(As a minor quibble, the reseed requirement derives from the goal of
a platform PRNG.  If one moves into app space (as my OP was about) then
there is more flexibility.  If one cares about a freeze-and-copy-state
attack, then the app can simply request new PRNGs on demand.  E.g., this
is what I do for key generation.)


> That means you can probably accept a relatively low rate of entropy produced per second, if you can know how much you are getting.  



iang


