[Cryptography] Name for a specific type of preimage resistance

Tushar Patel tjpatel.tl at gmail.com
Tue Dec 13 15:48:48 EST 2022


On Mon, Dec 12, 2022 at 10:29 PM Viktor Dukhovni <cryptography at dukhovni.org>
wrote:

> On Mon, Dec 12, 2022 at 05:54:45AM +0000, Peter Gutmann wrote:
>
> > In the absence of any actual name, could I suggest "singular preimage
> > resistance"?
>
> The closest extant term is "1st preimage resistance", where all you have
> is the digest, rather than "2nd preimage resistance" where you have some
> given preimage and the goal is to find another.
>
> However, you're asking for something that's mathematically less well
> defined, because mathematically all 1st preimages are alike, but
> you're looking for "the one true" preimage: one that has low entropy,
> or some expected structure, that would mark it as better than the rest.
>
> So I don't think there's a well-established term for that.  The expected
> number of preimages of a "random" function is (if I'm not mistaken)
> e/(e-1) ~ 1.5819767068693, so if a digest is close to "ideal", finding
> any preimage gives you an ~63% chance of finding "the one true"
> preimage, but of course you're concerned with "less than ideal"
> digests...
>
> --
>     Viktor.
> _______________________________________________
> The cryptography mailing list
> cryptography at metzdowd.com
> https://www.metzdowd.com/mailman/listinfo/cryptography


Though I'm catching a running train here, it may be worth considering the
entropy of the input set. If the domain holds fewer than, say, on the order
of a million values, an SP 800-38G / format-preserving encryption of it can
be reversed by brute force; similarly, a hash of a limited-entropy field
such as a password can be identified by lookup in a large precomputed data
set, rather than by attacking the full output entropy. The same issue
applies to the crypt function with, say, 200000 rounds: the iteration is
obfuscation, and whether you stop at the first hash or the 200000th, the
preimage resistance against a low-entropy input stays the same. Hence the
preimage resistance should be calculated from the entropy of the input
fields after removing constants such as the round count, and later
iterations contribute only the variance added at each step: Hash(Hash(X))
is still determined by X alone, whereas Hash((Hash(X + entropy1)) +
entropy2) can gain or lose strength depending on the variance/entropy in
entropy1 and entropy2.
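The point about round counts can be made concrete. In the sketch below (my own illustration; `iterated_hash`, the round values, and the 4-digit PIN domain are assumptions, not anything from crypt or SP 800-38G), raising the public iteration count multiplies the work per guess but does not enlarge the preimage space, so a low-entropy input falls to the same enumeration either way.

```python
import hashlib

def iterated_hash(data, rounds):
    # Hash(Hash(...Hash(data)...)); rounds is a public constant, so it
    # adds work per guess but no entropy to the search space.
    h = data
    for _ in range(rounds):
        h = hashlib.sha256(h).digest()
    return h

def brute_force_pin(target, rounds):
    # The preimage space is still only the 10,000 possible PINs,
    # whatever the round count: same resistance as a single hash.
    for pin in range(10000):
        guess = f"{pin:04d}".encode()
        if iterated_hash(guess, rounds) == target:
            return guess.decode()
    return None
```

The enumeration succeeds at any round count; only the constant factor per guess changes.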

I forgot to add that the term could be "Dispersion Cofactor", and that
testing it under FIPS would require:
1. 8 to 10 million-plus bits from iteration 1 of the hash on random sample
data fields.
2. For iterations 2 through N, another 8 to 10 million bits each of the
same data run through the second to Nth iterations of the hash (or a
compensating hash function for PQC).

Validators can then run the IID or min-entropy estimators on the output and
accept or reject it.
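As one example of the kind of estimator a validator might run, here is a sketch of the most-common-value min-entropy estimate along the lines of NIST SP 800-90B section 6.3.1 (the function name and byte-oriented sample handling are my own simplification):

```python
import math
from collections import Counter

def mcv_min_entropy(samples):
    # Most-common-value estimate (cf. NIST SP 800-90B, 6.3.1): a
    # conservative per-sample min-entropy bound -log2(p_upper), where
    # p_upper is an upper confidence bound on the mode's probability.
    n = len(samples)
    p_hat = Counter(samples).most_common(1)[0][1] / n
    p_upper = min(1.0, p_hat + 2.576 * math.sqrt(p_hat * (1 - p_hat) / (n - 1)))
    return -math.log2(p_upper)
```

A degenerate sample scores 0 bits per byte, while a uniform byte stream scores close to 8.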

Thx.,
Tushar

