[Cryptography] Blockchain without proof of work

Mark Steward marksteward at gmail.com
Sun Jan 6 09:56:33 EST 2019


On Sat, Jan 5, 2019 at 10:26 PM Phillip Hallam-Baker
<phill at hallambaker.com> wrote:
>
>
>
> On Sat, Jan 5, 2019 at 2:55 PM Mark Steward <marksteward at gmail.com> wrote:
>>
>> On Sat, Jan 5, 2019 at 7:46 AM Phillip Hallam-Baker
>> <phill at hallambaker.com> wrote:
>> >
>> > OK so I have decided to make a few changes to the structure here so the values will change.
>> >
>> > Let us say that m = "Konrad is a Kompromised agent".
>> > First choose a random key kt = P(0xB0 || rand(512), 125)
>> > (where P is the presentation/truncation function: Base32 encoding with dashes every 5 characters, truncated to 125 bits)
>> >
>> > We use SHA-2-512 to construct a Keyed UDF, so HKDF(), H(), and HMAC() all use it as the base digest:
>> >
>> > k = HKDF (kt, 512)
>> > f = HMAC (k, ct || ':' || H(m))
>> > udf = P((v || f), 125)
>> >
>> > Where v is simply a tag identifying the fingerprint type, and ct is the IANA content type.
>> >
>>
>> So that's effectively just HMAC(rand(117), m), truncated to 120 bits
>> (i.e. a birthday complexity of 60 bits). What purpose does the
>> obfuscation serve?
>
>
> A straight hash function would allow a dictionary attack. Adding in the key means that a brute-force attack has a work factor of the dictionary size times 2^512, which is obviously infeasible.
>

I don't mean "why use an HMAC"; I mean why take 512 bits of randomness,
truncate them to 117 bits (by prefixing 8 constant bits and calling
P), and then stretch the output back out and pretend it's 512 bits?
Why prefix the content type unless you're anticipating reusing keys?
And why truncate the final output to 117 bits again, and format it as
if it were meant to be entered by hand, when a simple HMAC would
suffice? There must be a reason you've chosen this inscrutable
structure.
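
For reference, here's my reading of the construction as a sketch in
Python. The helper names and the default version byte are mine, not
from your spec, and I've assumed RFC 5869 HKDF with empty salt and
info, so correct me if I've misread the layout:

import base64
import hashlib
import hmac
import os

def present(data, bits):
    # P(): Base32, dashes every 5 characters, truncated to `bits` bits
    # (5 bits per Base32 character).
    chars = bits // 5
    b32 = base64.b32encode(data).decode().rstrip("=")[:chars]
    return "-".join(b32[i:i + 5] for i in range(0, len(b32), 5))

def hkdf_sha512(ikm, length):
    # Plain RFC 5869 extract-and-expand with HMAC-SHA-512.
    prk = hmac.new(b"\x00" * 64, ikm, hashlib.sha512).digest()
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + bytes([counter]),
                         hashlib.sha512).digest()
        okm += block
        counter += 1
    return okm[:length]

def keyed_udf(m, content_type, version=b"\x60"):  # version byte guessed
    # kt = P(0xB0 || rand(512), 125): 8 constant bits + 117 random bits
    kt = present(b"\xB0" + os.urandom(64), 125)
    # k = HKDF(kt, 512): stretched to 512 bits, still 117 bits of entropy
    k = hkdf_sha512(kt.encode(), 64)
    # f = HMAC(k, ct || ':' || H(m))
    f = hmac.new(k, content_type + b":" + hashlib.sha512(m).digest(),
                 hashlib.sha512).digest()
    # udf = P(v || f, 125): truncated again for presentation
    return kt, present(version + f, 125)

kt, udf = keyed_udf(b"Konrad is a Kompromised agent", b"text/plain")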

>>   https://github.com/hallambaker/Mathematical-Mesh/blob/8f309a1/Libraries/Goedel.Cryptography/UDF.cs#L212-L222
>>
>> That link is to some bizarre functionality to "compress" the output by
>> adding the number of leading zero bytes to the versionID, which
>> further reduces the hash to 117 bits, and allows fingerprints from two
>> different versions to collide.
>
>
> That is important and useful functionality for key fingerprints. The compression function does not affect the brute-force work factor for the attacker. What it means is that if the user puts in a Work Factor of 2^24 on key generation, the work factor of the resulting 125-bit fingerprint will be 2^(117+24) rather than 2^117.
>
> That is a standard construction that has been reviewed by the IETF. Unfortunately, there is a patent on a particular approach held by Microsoft. While they have licensed that for certain IETF protocols, it is not generally licensed.
>

I can't make sense of what you're trying to say here. The initial
zeroes are part of the hash, not leading zeroes from prime numbers or
similar. If you removed this suspicious and buggy functionality, would
that break anything?
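
For what it's worth, the scheme you seem to be describing would look
something like the sketch below. To be clear, this is my reading of
the idea, not what the linked UDF.cs actually does:

import hashlib
import os

def grind_fingerprint(zero_bytes):
    # Grind candidate keys until the digest starts with enough zero
    # bytes, then fold that count into the version identifier so the
    # zeroes don't consume any of the 125 presented bits.
    while True:
        key = os.urandom(32)                 # candidate key material
        digest = hashlib.sha512(key).digest()
        if digest.startswith(b"\x00" * zero_bytes):
            version = 0x60 + zero_bytes      # illustrative version bump
            return key, version, digest[zero_bytes:]

# Each extra zero byte costs the key generator about 2^8 more work,
# which is where a claimed 2^(117 + work) attacker cost would come from.
key, version, compressed = grind_fingerprint(2)  # ~2^16 tries on average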

>> Your convenient Validate function isn't constant time:
>>
>>   https://github.com/hallambaker/Mathematical-Mesh/blob/8f309a1/Libraries/Goedel.Cryptography/UDF.cs#L407-L408
>
>
> Probably not. That is currently just a placeholder. It doesn't even accept longer fingerprints to check against shorter patterns.
>

Checking shortened fingerprints is an antipattern, as demonstrated by
GPG short key ID collisions. If humans need to enter hashes by hand,
something's gone very wrong already.

> I don't plan to rely on constant-time implementations to protect against side-channel attacks. They tend to be brittle in the face of aggressive compiler optimization. The Kocher patents should expire soon if they haven't already, and they provide a more powerful approach that I need to implement for other reasons.
>

Are you talking about the patent from 1998? I don't see how that can
help with hash comparison.
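
For the record, a fixed-time comparison doesn't need anything exotic:
accumulate the XOR of every byte pair and only branch at the very end.
A sketch in Python; the standard library's hmac.compare_digest does
the same thing:

import hmac

def equal_fixed_time(a, b):
    # Compare without short-circuiting on the first mismatching byte.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

assert equal_fixed_time(b"abc", b"abc")
assert equal_fixed_time(b"abc", b"abd") == hmac.compare_digest(b"abc", b"abd")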


Mark

