[Cryptography] Another crypto paper in the wrong place

dj at deadhat.com dj at deadhat.com
Mon May 19 15:20:45 EDT 2014


>> They tested 625MBytes of raw data compressed to 156 MBytes. When I have
>> 'close to perfect' raw data, I typically need 100GBytes or so to get a
>> reasonable error margin on my min-entropy estimates.
>>
>
> Just a very quick question, what methodology or tools do you use to
> estimate min-entropy from such data?
>
> Thanks :]
> -mk
>

For raw data, first I take the basic statistical properties using ent
(bias, SCC, compression, Chi-square TOR, etc.).
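
Roughly, in Python (a sketch of the same figures of merit, not ent itself;
the function name is mine):

# Rough ent-style statistics on a raw bit stream: bit bias, serial
# correlation coefficient (SCC) and byte-histogram chi-square.
from collections import Counter

def basic_stats(bits):               # bits: list of 0/1 ints
    n = len(bits)
    ones = sum(bits)
    bias = ones / n - 0.5            # deviation of P(1) from 0.5

    mean = ones / n                  # serial correlation of adjacent bits
    num = sum((bits[i] - mean) * (bits[i + 1] - mean) for i in range(n - 1))
    den = sum((b - mean) ** 2 for b in bits)
    scc = num / den if den else 0.0

    nbytes = n // 8                  # chi-square of bytes vs. uniform
    counts = Counter(int("".join(map(str, bits[8 * i:8 * i + 8])), 2)
                     for i in range(nbytes))
    expected = nbytes / 256
    chi2 = sum((counts.get(v, 0) - expected) ** 2 / expected
               for v in range(256))
    return bias, scc, chi2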

Then I use the min-entropy algorithms from SP800-90B. The Markov-Rényi
min-entropy test is particularly useful because it is sensitive to bias,
correlation and non-stationarity. So the min-entropy of the measured data
should closely match that of data generated with the same bias as the
measured bias if there is no correlation, and deviate a lot if there is.

Using these tests we can judge whether the model of the source, run in
simulation, matches reality. This gives us (or denies us) confidence in the
model. From the model we must show model_min-entropy >
measured_min-entropy.
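
In code terms the check looks something like this, with a bias-matched
memoryless coin standing in for the full source model (an assumption; it
reuses the markov_min_entropy_rate() sketch above):

import random

def compare_to_bias_matched(bits, trials=10):
    p1 = sum(bits) / len(bits)                 # measured P(1)
    measured = markov_min_entropy_rate(bits)
    simulated = [markov_min_entropy_rate(
                     [1 if random.random() < p1 else 0
                      for _ in range(len(bits))])
                 for _ in range(trials)]
    # Correlation or non-stationarity in the real data pulls "measured"
    # below the bias-matched simulated average.
    return measured, sum(simulated) / trials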

Once we have that, we can add some conservative margin to the min-entropy
estimate and then know how much conditioning to do.
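
With made-up numbers (the function and the margin are illustrative, not a
policy): credit only a derated fraction of the estimated min-entropy per
raw bit and size the conditioner input from that.

import math

def raw_bits_needed(h_min_per_bit, margin, target_bits):
    credited = h_min_per_bit * (1.0 - margin)  # conservative credit per raw bit
    return math.ceil(target_bits / credited)

print(raw_bits_needed(0.8, 0.25, 256))  # 0.8 b/bit, 25% margin -> 427 raw bits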

The smaller the deviation from pure random, the more data you need to
detect it or to support the null hypothesis. So crappy data is easier to
characterize. Good data deserves more scrutiny when checking claims of zero
correlation or perfect stationarity (which never occur in real circuits).
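
To put a rough number on that (a back-of-the-envelope sketch, not a formal
power calculation): the standard error of the observed mean of n fair bits
is 0.5/sqrt(n), so seeing a bias of epsilon at z standard errors takes on
the order of (0.5*z/epsilon)^2 bits.

def bits_to_detect_bias(epsilon, z=5.0):
    # n such that epsilon equals z standard errors of the sample mean
    return int((z * 0.5 / epsilon) ** 2)

print(bits_to_detect_bias(0.001))    # ~6.25e6 bits for a 0.1% bias
print(bits_to_detect_bias(0.00001))  # ~6.25e10 bits for a 0.001% bias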

DJ


