Montgomery Multiplication

Don Davis dtd at world.std.com
Tue Jul 2 13:47:08 EDT 2002


> I was just wondering if anyone knew where to get
> a good explanation of Montgomery multiplication
> for the non-mathematician? 

here's an explanation i wrote a couple of years ago:

				- don davis, boston



what's going on with montgomery reduction:
do you remember in your first digital-hardware class, how they taught
you that you don't need separate AND, OR, and NOT gates to build
full-function logic circuits, because you can do everything with just
NAND gates?  montgomery's algorithm is kind of like that: it shows
that you don't need separate multiply, divide, and remainder
operations to do bignum modular exponentiations, because you can use
this one montgomery-multiplication operation over and over, instead.
the advantage of montgomery multiplication is that it avoids the
remaindering operation, and so avoids expensive divisions, but
nevertheless manages to keep an exponentiation's intermediate
products small, in a very cheap & easy way.

it works rather like logarithms, and the performance trade-offs are
similar: to multiply two numbers, you transform them both to the
montgomery representation, montgomery-multiply the transformed
numbers, then transform the product's montgomery representation back
to the normal representation.  for just two factors, this takes
longer than a normal multiply-and-take-remainder step would, even
though the montgomery-multiplication itself is just about as cheap as
a normal multiplication, because the transformations at either end
cost more than the one remainder they save.  but for exponentiation,
where we montgomery-multiply many times, we only have to do the
representation-change twice, at the beginning and at the end, whereas
to do a modular exponentiation in the normal way, we'd have to take a
remainder after every multiplication.  so, that's how montgomery
saves time: convert the operand to a representation in which the
remaindering operation disappears, multiply that representation by
itself as many times as the exponent indicates, then change the
representation back.  this is like the advantage of logarithms: if
you're multiplying many numbers together, you convert all the
operands once to their logs, and you convert the final result back,
but you don't have to convert any of the intermediate results, so the
cost of converting to & from logarithm format is more than offset by
the cheapness of adding instead of multiplying.
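
to make that trade-off concrete, here's a toy sketch in python.  the
modulus N = 97 and the scale factor R = 2^8 are scaled way down from
the 2^1024 case, and the names mont_mul and mont_exp are just my own
illustrative labels.  mont_mul is written the naive way here, with an
explicit modular inverse, only to show what quantity it computes; the
sketch after the next paragraph shows how montgomery gets the same
answer without any division.  notice that the representation-change
happens only twice, no matter how long the exponent is:

    N = 97                    # a toy odd modulus; real crypto uses ~2^1024
    k = 8
    R = 1 << k                # the "2^1024" of the text, scaled down to 2^8
    R_inv = pow(R, -1, N)     # used only by this naive stand-in for mont_mul

    def mont_mul(a, b):
        # the montgomery product of two montgomery-form numbers x*R and y*R
        # is x*y*R mod N, i.e. still in montgomery form; computed naively here
        return (a * b * R_inv) % N

    def mont_exp(x, e):
        xm  = (x * R) % N     # convert the base into montgomery form: done once
        acc = R % N           # montgomery form of 1
        while e:              # ordinary square-and-multiply, in montgomery form
            if e & 1:
                acc = mont_mul(acc, xm)
            xm = mont_mul(xm, xm)
            e >>= 1
        return (acc * R_inv) % N   # convert back to normal form: done once

    assert mont_exp(5, 23) == pow(5, 23, 97)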

now, how does montgomery representation make remaindering go away?
the basic idea is that when we're doing a long chain of modular
multiplications, we need to keep the intermediate products from
getting too big, but taking their remainders mod N isn't the only
way to keep them small.  montgomery's conversion just multiplies
each operand by a fixed power of two at least as big as the operands,
say 2^1024, modulo the crypto-system's modulus N.  now, when we
multiply two such transformed operands, the product gets bigger than
we want to keep around, just as in normal modular arithmetic, but
it's much easier to make the montgomery representation's excess go
away at each step.  the heart of montgomery's theorem is the trick of
adding a certain easily-computed multiple of the modulus N to this
intermediate product, making a modulus-N equivalent that happens to
be a multiple of 2^1024.  now, recall that this intermediate product
is not a good montgomery representation; it's too big by just this
factor of 2^1024.  so, by throwing away the least-significant 1024
bits, we kill two birds with one stone: we keep the intermediate
product's size manageable, and we keep the intermediate product in
montgomery representation, so it's ready for another round of
montgomery multiplication.  it's very slick, but none of the
explanations i've read treat this intermediate-size issue at all.
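
and here's the same toy setup with the reduction done montgomery's
way; redc and n_prime are the customary names for this step and its
precomputed constant.  the only "mod" operations left are masks of
the low k bits, and the division by 2^k is a plain right-shift,
because the added multiple of N guarantees the low k bits are zero:

    N = 97
    k = 8
    R = 1 << k
    n_prime = (-pow(N, -1, R)) % R   # precomputed once: N * n_prime == -1 (mod R)

    def redc(T):
        # T is an intermediate product: congruent to what we want mod N,
        # but carrying one extra factor of R; T must be smaller than R*N.
        # the "certain simple multiple" of N is m = (T mod R) * n_prime mod R;
        # both "mod R" steps are just masking off the low k bits, no division.
        m = ((T & (R - 1)) * n_prime) & (R - 1)
        # T + m*N is congruent to T mod N and is an exact multiple of R, so
        # throwing away the low k (zero) bits is a plain shift, not a remainder.
        t = (T + m * N) >> k
        return t - N if t >= N else t    # at most one conditional subtraction

    def mont_mul(a, b):
        # multiply two montgomery-form numbers; redc strips the extra factor
        # of R, leaving a small result that's still in montgomery form.
        return redc(a * b)

    # sanity check: the montgomery product of x*R and y*R is x*y*R (mod N)
    x, y = 42, 77
    assert mont_mul((x * R) % N, (y * R) % N) == (x * y * R) % N

computing n_prime does need one modular inverse, but only once per
modulus, not once per multiplication, so its cost disappears into the
setup, along with the two representation-changes.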



