[Cryptography] quantum computers & crypto

Ray Dillinger bear at sonic.net
Sun Oct 31 17:29:55 EDT 2021



On 10/31/21 4:50 PM, Jerry Leichter wrote:
>> Software ought to have three to seven algorithms, at most, built in.  
.....
>> Ideally that's implemented as contagious de-certification - with some
>> form of accepted, certified proof that the de-certification is real. 

> I see two problems with this approach:
> 1.  If some of the three to seven algorithms are known - or at least strongly believed - to be stronger than the others ... why include the weaker ones?  But if they are all (as far as we know) equally strong, then an attacker might break any one of them at any time.  If it happens to be the one currently in use ... good times for them.  If it's one of the others ... you've just created a huge incentive for the attacker to try to force a roll-forward to the algorithm he has an attack against.  Just what looks like a partial attack, or even just rumors of an attack, will likely be enough for some users to decide, in the famous abundance of caution, to roll forward - right into the ambush.
I would include something believed "weaker" if I had reason to believe
that it would withstand the family of attacks most likely to break one
or more of the "stronger" ones.  Ideally, they should rest on *different*
QC-hard problems.  The idea is that if a "stronger" one breaks, it
indicates that some class of attacks has had a breakthrough; but if it
rests on an entirely different problem, the "weaker" algorithm might be
left standing.

An attacker with an attack against a cipher that isn't being used yet
is far less dangerous than one who can force the choice through key
negotiation.  The first generation of algorithm agility, sadly, we got
flatly wrong - we gave the attackers choices.  Revolver crypto, in this
case, buys us an interim period during which his attack may be
discovered by someone who publishes, resulting in that algorithm being
de-certified before it ever comes up.  And if it turns out that no
suitable attack against A is ever found, the 'interim' period may turn
out to be indefinite.  In either case the attack on B is never
criminally useful for an instant.
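
To make the "no choices for the attacker" point concrete, here's a
minimal sketch of the revolver idea (all names hypothetical, nobody's
actual protocol): the roster is fixed at build time, nothing is
negotiated, and the only event that moves the selection is a verified
de-certification.

    # Hypothetical sketch of a fixed "revolver" roster: no negotiation,
    # hence no attacker-influenced choice.  Everyone uses the first
    # algorithm on the baked-in list that has not been de-certified.
    ROSTER = ["ALG-A", "ALG-B", "ALG-C"]   # fixed at build time, ideally
                                           # resting on unrelated hard problems
    decertified = set()                    # grows only via verified de-certs

    def current_algorithm():
        """Return the first still-certified algorithm on the roster."""
        for name in ROSTER:
            if name not in decertified:
                return name
        raise RuntimeError("every built-in algorithm is de-certified; "
                           "replace the software itself")

    def apply_decert(name, proof_is_valid):
        """Roll forward only on a verified de-cert, never on rumor."""
        if proof_is_valid and name in ROSTER:
            decertified.add(name)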

If an attacker tries to force a roll-forward via social engineering,
i.e., by just lying about a break repeatedly until the weak-minded start
to believe it, he doesn't get what he wants just by playing the mob.
Mobs can't be trusted to make good decisions about complicated math in
the face of rumors and lies.  But in this case the mob has no choice to
make in algorithm negotiation, so they can't leap forward into the
ambush until A is actually de-certified.

A de-certification is done either with a credible proof that can be
automatically checked, or by a trusted authority responding to evidence
of some type that can't be automatically checked.  The attacker isn't
going to produce a credible machine-checkable proof, so the length of
the interim roll-forward period depends on how skeptical the trusted
authority is.  There are no real guarantees there, but the authority is
likely to be better informed and more able to detect bullshit, so it
does buy some time.

This situation works against the attacker.  The more people are
yelling about a break in algorithm A for which there is no evidence,
the more convinced some researchers will become that someone has
discovered, but not published, a break in B and wants to use it.  So
for every grad student checking the bogus claims about A, there's
probably going to be another one looking for a likely "easy" discovery
in B that they can publish.  An attempt to force a roll-forward will
cause suspicion that is likely to make the attack irrelevant before the
roll-forward happens.

> 2.  Just what would constitute a sufficient proof of decertification?  Is it a signed note from a bunch of pre-accepted authorities attesting to a problem?  Signed with what algorithm:  The current one, so that you get a message using a particular algorithm asserting that the algorithm is no good?  Oddly, that kind of works ... but it seems very peculiar.  If not that ... then what?  Signed using the *next* algorithm?  Or one of the other "future" algorithms?  That falls right into the ambush of the previous item!  Signed using some other algorithm?  Does it have to be stronger than any of the seven?  If so, why aren't you just using it?

I was thinking of two kinds of de-certification: the first based on
credible machine-checkable proofs, and the second based on the judgment
of a trusted authority.
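
In code terms - again, just a hypothetical sketch of the record, not a
wire format - the two kinds might look like:

    # Hypothetical de-cert record covering the two kinds described above.
    from dataclasses import dataclass
    from enum import Enum, auto

    class DecertKind(Enum):
        MACHINE_CHECKABLE = auto()   # anyone's machine can verify the proof
        AUTHORITY_SIGNED = auto()    # judgment call by the trusted authority

    @dataclass
    class Decert:
        algorithm: str               # which roster entry is being retired
        kind: DecertKind
        evidence: bytes              # the proof itself, or the signed statement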

There are known feats that anyone with a significant attack ought to be
able to do, and published test vectors can allow these feats to be
attested.  These constitute machine-checkable proofs.  For example, if I
have a significant attack on a 64-round block cipher, then I ought to be
able to decrypt a test vector encrypted with a 32-round version of that
cipher.  If I have an improvement in factoring that endangers the
products of ten-thousand-bit primes, then I ought to be able to
completely factor a test vector consisting of a product of
two-thousand-bit primes.  If I can build hardware that can brute-force
3DES, then I ought to be able to decrypt DES.  So for crypto subject to
such a "partial solution" proof, any user anywhere ought to be able to
create a de-cert if they can solve one of the published test vectors -
anybody at all should be able to de-certify a cipher by publishing that
decryption or those factors.
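
For the factoring case the check is trivial to automate.  A toy
sketch - the numbers are stand-ins, where a real published test vector
would be a product of two-thousand-bit primes:

    # Anyone can verify claimed factors of the published challenge; no
    # trust in the claimant is needed.  Toy-sized numbers for illustration.
    PUBLISHED_TEST_VECTOR = 3233           # stand-in for a published semiprime

    def is_prime(n):
        """Naive trial division; fine for a toy-sized example."""
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def verify_factoring_decert(p, q, n=PUBLISHED_TEST_VECTOR):
        """A valid solution to the published test vector is itself a de-cert."""
        return (p * q == n and 1 < p < n and 1 < q < n
                and is_prime(p) and is_prime(q))

    assert verify_factoring_decert(53, 61)   # 53 * 61 == 3233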

So Findlay Foont comes up with a couple of images that have the same SHA
hash, publishes the machine-checkable proof, and the next day Eggs
Ackley notices that his GPUs are a different temperature because now
they're doing matrix signatures.  And this all happens while Mister
Natural (the Trusted Authority) is on vacation.  Ackley doesn't
reconfigure his software.  He doesn't have to keep up with every crypto
journal.  And he doesn't get a chance to make a mistake.
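
Foont's collision, once published, is exactly the kind of proof a
machine can confirm on its own.  A sketch, assuming the roster functions
from the earlier sketch and using SHA-256 purely as the example hash:

    # A collision - two distinct inputs, one digest - is a de-cert that
    # anyone's machine can confirm without reading a crypto journal.
    import hashlib

    def verify_collision_decert(a, b, algorithm="sha256"):
        return a != b and (hashlib.new(algorithm, a).digest()
                           == hashlib.new(algorithm, b).digest())

    # if verify_collision_decert(image_1, image_2):
    #     apply_decert("ALG-A", proof_is_valid=True)   # everyone rolls forward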

The second variety is issued by the Trusted Authority.  When Mister
Natural gets back from his fission trip, he discovers that a whole lot
of people are telling him somebody's getting into their accounts, and
he figures that, even though nobody on the white-hat side has worked
out how, somebody has to have broken the main encryption algorithm.  So
he issues a de-cert, and somewhere Devil Girl's botnet stops working.
The following day Foont and Ackley are both using encryption algorithm
B, although neither of them has even noticed and neither has had an
opportunity to make a mistake.
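
The type-2 path only needs the authority's key to be pre-installed with
the software.  A sketch using the pyca/cryptography Ed25519 API purely
as an illustration - what the de-cert itself should be signed with is
its own question, as below:

    # Accept a type-2 de-cert only if the pinned authority key signed it.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_authority_decert(statement, signature, pinned_key):
        """pinned_key is an Ed25519PublicKey shipped with the software."""
        try:
            pinned_key.verify(signature, statement)
            return True
        except InvalidSignature:
            return False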

It's amusing to think that the de-certification of 8192-bit RSA could be
signed with 4096-bit RSA.  Faking the Trusted Authority's signature on a
type-2 would be a dinkum checkable proof if published as a type-1.  But
I think there are probably good reasons not to do that.

Bear





> I agree that there's something very tempting about this approach.  Actually, I've proposed a broader application of a simpler version for trustable hardware/software:  You build a secure kernel that's so simple that you can really prove it secure *and never allow any patching.*  A patch mechanism is a potential attack vector, and one that immensely opens the potential attack surface.  Leave it out.  If it turns out you got it wrong, replace the whole thing with a new one designed along similar principles.
>
>                                                         -- Jerry
>
>


