Exponent 3 damage spreads...

Peter Gutmann pgut001 at cs.auckland.ac.nz
Thu Sep 14 07:40:49 EDT 2006


Simon Josefsson <jas at extundo.com> writes:
>pgut001 at cs.auckland.ac.nz (Peter Gutmann) writes:
>>Simon Josefsson <jas at extundo.com> writes:
>>>The second problem is that the "parameters" field can ALSO be used to store
>>>data that may be used to manipulate the signature value into being a cube.
>>>To my knowledge, this was discovered by Yutaka Oiwa, Kazukuni Kobara, Hajime
>>>Watanabe.  I didn't attend Crypto 06, but as far as I understand from Hal's
>>>post, this aspect was not discussed. Their analysis isn't public yet, as far
>>>as I know.
>>
>>Can you make a guess at what it is?  Is it the fact that you can have NULL
>>parameters for algorithms or optionally non-NULL parameters?
>
>Yes.  Implementations that didn't validate the parameters field are
>potentially vulnerable; the attacker can put garbage in the parameters field
>to make the signature value a cube.

In that case (and because of something else I thought of after I posted; I was
just heading out for dinner at the time) I think it's game over for RSA e=3
(see below).
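To make the mechanism concrete, here's a toy illustration (my own sketch, with
made-up sizes and a dummy hash value, not the exact attack or a real key) of
why attacker-controlled trailing bytes are fatal for e=3: the forger just takes
an integer cube root, no private key required.

```python
# Toy sketch: with e = 3 and attacker-controlled low-order bytes, forging
# a "signature" is just an integer cube root.  Sizes and the 0xaa "hash"
# are illustrative only.

def icbrt(n):
    """Floor of the integer cube root, via binary search."""
    lo, hi = 0, 1 << ((n.bit_length() + 2) // 3 + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

FREE = 100  # number of low-order bytes the attacker may fill with garbage

# Fixed high bytes (PKCS#1-style framing plus a 20-byte dummy hash),
# followed by FREE bytes the verifier never checks.
prefix = int.from_bytes(b"\x00\x01\xff\x00" + b"\xaa" * 20, "big")
target = prefix << (8 * FREE)

s = icbrt(target) + 1        # round up so that s**3 >= target
forged = s ** 3              # what an e=3 verifier computes from s

# The fixed prefix survives; only the unchecked low bytes are garbage.
assert forged >> (8 * FREE) == prefix
```

The point of the sketch: consecutive cubes near the target are much closer
together than 2^(8*FREE), so some cube always lands in the unchecked region.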

>>Changing this could be tricky because there are all sorts of
>>inconsistencies both in standards and implementations, the standard
>>practice has been to skip the parameters field because if you don't,
>>things break.
>
>I don't think so.  The contents of the parameters field depends on the hash
>algorithm.  As far as I know (but I didn't read the scriptures), for normal
>hashes like SHA-1 the parameters field should not be used.

It may or may not be used, depending on which standard you follow.  First of
all, even for the simple case of SHA-1, the parameters can be present or not.
See the note in RFC 3274:

  There are two possible encodings for the [...] parameters field which arise
  from the fact that when the 1988 syntax for AlgorithmIdentifier was
  translated into the 1997 syntax, the OPTIONAL associated with the
  AlgorithmIdentifier parameters got lost.  Later it was recovered via a
  defect report, but by then, everyone thought that algorithm parameters were
  mandatory.  Because of this, some implementations will encode null
  parameters as an ASN.1 NULL element and some will omit them entirely (see
  for example section 12 of CMS [RFC2630]).

So for both standards and implementations it's pretty much a coin-toss (crap-
shoot if you're in the US) as to what you'll find there.  Because of this,
standard practice in implementations has been to skip any parameters.
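For SHA-1 the two encodings in question look like this; a quick Python
illustration of the raw DER bytes (the OID 1.3.14.3.2.26 is SHA-1's):

```python
# The two DER encodings of the SHA-1 AlgorithmIdentifier described in the
# RFC 3274 note: NULL parameters present vs. parameters absent.
sha1_oid = bytes.fromhex("06052b0e03021a")  # OBJECT IDENTIFIER 1.3.14.3.2.26

# SEQUENCE { OID, NULL } -- parameters encoded as an ASN.1 NULL element
with_null = bytes([0x30, len(sha1_oid) + 2]) + sha1_oid + b"\x05\x00"

# SEQUENCE { OID } -- parameters omitted entirely
absent = bytes([0x30, len(sha1_oid)]) + sha1_oid

assert with_null.hex() == "300906052b0e03021a0500"
assert absent.hex() == "300706052b0e03021a"
```

Both mean "SHA-1, no parameters", and an interoperable implementation has to
accept both, which is exactly why skipping the field became standard practice.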

But wait, there's more!  Some AlgorithmIdentifiers have parameters that you
can set to either a large range of fixed values or even arbitrary values.  For
example the Ascom Tech IDEA AlgoID (which admittedly isn't a hash algorithm,
but bear with me) has three optional parameters for CFB mode which can take
various values and may or may not be present (AES almost took this path as
well, luckily NIST decided against it at the last minute and went with simple,
streamlined parameters).

OK, so you don't sign IDEA values so this isn't a problem.  However, a related
AlgoID is the DES one, which for CBC mode has as its parameter an IV: 64 bits
of arbitrary user-chosen data.  I've seen a banking security system from
either Belgium or the Netherlands that signs CBC-MACs (I'm using a borrowed
machine to send this so I can't provide names and numbers at the moment, it's
from memory), and knowing the organisation that produced it, it wouldn't
surprise me if they also used keys with e=3.

So let's extend this further.  There have been a pile of designs for hash
algorithms with user-definable parameters.  If these ever get used in
standards then no doubt there'll be AlgoIDs defined that allow an attacker to
set arbitrary values in the AlgoID through them.  So the arms-race of trying
to track invalid data now becomes a problem of proving a negative, i.e.
proving that there isn't some AlgoID out there somewhere that allows you to
set one or two parameter bytes to arbitrary values.

But wait, there's more!  From what I understand of the attack, all you need
for it to work is for the signature value to be a perfect cube.  To do this,
all you need to do is vary a few of the bytes of the hash value, which you can
do via a simple brute-force search.  So even with a perfect implementation
that does a memcmp() of a fixed binary string for all the data present but the
hash, the attack still works.
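The core check in such a brute-force search would be a perfect-cube test,
which might be sketched like this (my own illustration, not from any published
analysis of the attack):

```python
# Perfect-cube test: the check a brute-force forger would run on each
# candidate signature block while varying a few hash bytes.

def icbrt(n):
    """Floor of the integer cube root, via binary search."""
    lo, hi = 0, 1 << (n.bit_length() // 3 + 1)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= n:
            lo = mid
        else:
            hi = mid - 1
    return lo

def is_perfect_cube(n):
    r = icbrt(n)
    return r ** 3 == n  # if so, r verifies as an e=3 "signature" for n

assert is_perfect_cube(12345 ** 3)
assert not is_perfect_cube(12345 ** 3 + 1)
```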

In either of these cases, RSA e=3 is dead.  Obesa cantavit.

So the fix isn't an ongoing penetrate-and-patch arms race to try and filter
out more and more hard-to-find possibilities, it's to immediately deprecate
e=3.  Grab a pile of IETF boilerplate, add a single sentence "Don't use RSA with
e <= 3" (I'd actually say "<= 17" since there's no good reason not to), and
apply it as a BCP by reference to SSL/TLS, IPsec, S/MIME, PGP, DNSSEC, and so
on.  There'll always be broken standards out there that require e=3 (I know of
at least one that uses e=2, and another that uses raw, unpadded RSA, and
another that... well you get the idea), but the only quick, sure fix is to
kill e=3, not to try and anticipate every potential way of trying to use it,
because you'll never be secure that way.
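As a sketch of what that blanket check could look like in code (my own
illustration of the "refuse e <= 17" rule, not taken from any of the standards
named above):

```python
# Sketch of the proposed blanket fix: refuse any RSA public exponent at
# or below 17 (the post's suggested cutoff), rather than trying to filter
# every malleable encoding individually.

MIN_EXPONENT = 17  # reject e <= 17; 65537 is the common safe choice

def accept_rsa_exponent(e):
    # e must be odd (it has to be coprime to phi(n), which is even),
    # and above the small-exponent danger zone.
    return e > MIN_EXPONENT and e % 2 == 1

assert not accept_rsa_exponent(3)      # the attack target
assert not accept_rsa_exponent(17)     # at the cutoff, still refused
assert accept_rsa_exponent(65537)      # F4, fine
```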

Peter.

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majordomo at metzdowd.com
