[Cryptography] Mac OS 10.7.5 Random Numbers

ianG iang at iang.org
Tue Feb 4 00:23:57 EST 2014


On 4/02/14 00:00 AM, Jerry Leichter wrote:
> On Feb 3, 2014, at 12:30 PM, ianG wrote:
>>> Any validation process you come up with is going to have the same feature: once you've gotten something validated, changing it is, in general, going to mess up your validation.  Otherwise the validation means nothing, because you could get a crypto device validated using RSA2048 and SHA256, and then change it over to using RSA512 and MD5.  
>> Why can't validation say RSA2048 or longer, SHA1 or longer (SHAn)?
>> PRNG has a 160-bit state, or more, all else held constant?
> "RSA2048 or longer" has a clear meaning:  The RSA algorithm is well defined, the key lengths it uses are well defined.  Sure, there are implicit assumptions (like that the modulus is actually a product of exactly two primes of close to equal size), but we can deal with that if we have to.
> 
> I have *no clue* what "SHA-1 or longer" means.  The SHA-1 algorithm is defined over 512-bit input blocks, with 160 bits of intermediate state and 160 bits of output.  You can readily decrease the output size (choose some subset of the bits).  It's not clear that there's a secure way to decrease the input block size - some kind of padding, but padding has proven to introduce vulnerabilities in the past.  You can do nothing meaningful about the intermediate state without changing the compression function, which is specific to the state size.  How one might produce what everyone would agree is a "longer version of SHA-1" is anyone's guess.


Well, Jerry, if you wish to torture yourself, please don't let me stop
you :)

Yes, for it to work in this context it has to be simply stated
somewhere, in the same sense that a FIPS verification is ... a
statement.  (The concept of reliance on statements is well understood in
the audit world.)


> Now, if you say "a one-way hash with all the properties of SHA-1 but with a longer intermediate state and output block size", well, that's kind of saying what you want - but not in a way that's actually useful unless you can pin down just what "all the properties of SHA-1" might mean.
> 
> In effect, you're making the validation dependent on the validation of something else (the hash function)

Correct.

> - but you aren't saying what *kind* of validation is appropriate for that "something else".  Can I substitute some particular variant of Salsa, because being accepted by DJB counts as "validation"?  How about a hash algorithm developed and published for general use by the Chinese government?  Or, for that matter, the NSA?

Yes, you can do all that.  Someone simply needs to state what
substitutions are allowed, and you can rely on it.

In detail the process would be something like this:

NIST (or whoever) publishes a table that states:

     SHA1 < SHA2 <= Salsa20/HashVariant < SHA3/1234

     SHA2 == Chinese variant X not Y

Etc.  Now, you look up the table and can 'rely' on what it says.  To
some extent, NSA does this with Suite B.  OpenSSL could do it with its
ciphersuites, but that would ruin everyone's fun.
http://www.keylength.com/ does it from an academic point of view.
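
To make the reliance mechanical, here is a minimal sketch in Python of
how a consumer might encode and query such a table.  The entries,
ranks, and names are made up for illustration; a real table would be
published and maintained by NIST or whoever is being relied upon.

    # Each algorithm maps to a strength rank; anything of equal or
    # higher rank may be substituted for a validated algorithm.
    # Ranks and names here are illustrative only.
    STRENGTH_RANK = {
        "SHA1": 1,
        "SHA2": 2,
        "Salsa20/HashVariant": 2,   # stated as >= SHA2 in the table
        "SHA3/1234": 3,
    }

    # Stated equivalences, e.g. "SHA2 == Chinese variant X not Y".
    EQUIVALENT = {"ChineseVariantX": "SHA2"}

    def may_substitute(validated, proposed):
        """True if `proposed` may replace `validated` per the table."""
        validated = EQUIVALENT.get(validated, validated)
        proposed = EQUIVALENT.get(proposed, proposed)
        return STRENGTH_RANK[proposed] >= STRENGTH_RANK[validated]

    print(may_substitute("SHA1", "SHA3/1234"))        # True
    print(may_substitute("SHA2", "ChineseVariantX"))  # True
    print(may_substitute("SHA2", "SHA1"))             # False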

Of course, the downside is that this encourages soft cryptographic
numerology rather than hard cryptoplumbing, which is a bad thing.  But
with verification processes, you get what you get.


> The sponge construction, for the first time, gives us a generic description of a family of hash functions whose strength is given by security parameters we can set pretty much as we want.  It'll take a bit of time before we have adequate confidence to say "just go ahead and change the security parameters; we understand the effects".  With luck, we might even end up with an appropriately parameterized family of stream ciphers out of the same construction as well.  There have been attempts to construct similar families of block ciphers - I vaguely recall MARS, an AES candidate, was one - but none so far has gotten broad acceptance.


Right.
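
For what it's worth, the SHAKE extendable-output functions from the
SHA-3 standard already expose exactly that output-length knob.  A
quick illustration using Python's hashlib (standard library since
Python 3.6; nothing here is assumed beyond that):

    import hashlib

    msg = b"parameterized hashing"

    # Same sponge, same security capacity, different output lengths
    # (the argument to hexdigest() is the output length in bytes).
    print(hashlib.shake_256(msg).hexdigest(20))   # 160-bit output
    print(hashlib.shake_256(msg).hexdigest(32))   # 256-bit output
    print(hashlib.shake_256(msg).hexdigest(64))   # 512-bit output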

Speaking of Keccak, did the controversy over the proposed change in
emphasis among the various pre-image resistances ever get resolved?
The sort of change that had been mooted would have ramifications for
this table idea.

> *If* we have such accepted families of components, and *if* we come up with ways to approve entire families (note that some combinations of security parameters might, in principle, leave an otherwise secure primitive much more vulnerable to side-channel attacks), and *if* we can come up with ways to specify appropriate combinations of security parameters for different primitives making up some reviewed component ... then we might be in a position to define "family approval" for a class of instantiations of some software module.


Verifications are always broad-brush, covering what we care about
today, and it is a mistake to place them on a pedestal.  Did all those
FIPS validations check for constant-time operation?  For a clear
explanation of where the defaults come from?
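
On the constant-time point, the property is at least checkable.  A
minimal sketch in Python, using the standard library's
hmac.compare_digest, of the kind of comparison a validation could
demand:

    import hmac

    expected = bytes.fromhex("a1b2c3d4e5f6")
    received = bytes.fromhex("a1b2c3d4e5f7")

    # `==` may short-circuit at the first differing byte, leaking
    # information about the match through timing.
    print(expected == received)

    # compare_digest examines every byte regardless of mismatches,
    # so its running time does not depend on where they differ.
    print(hmac.compare_digest(expected, received))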


> But we have enough trouble doing meaningful approvals for single implementations today that I don't see this happening soon.


Right, the problem space is immense, and any thought of a solution tends
to overwhelm.

Seriously, someone at NIST should contract some local economics
post-grads to research the economic effects of the approvals, as
against, say, baselines of open source, Apple updates, or Microsoft's
Patch Tuesday.  Who's delivering more value?

iang


