Bernstein's NFS machine

Nomen Nescio nobody at dizum.com
Sun Mar 3 04:20:12 EST 2002


David Wagner writes:
> Bernstein's analysis is based on space*time as your cost metric.
> What happens if we assume that space comes for free, and we use simply
> time as our cost metric?  Do his techniques lead to an improvement in
> this case?

Bernstein basically treats memory and processing elements as
interchangeable to a first approximation.  After all, they're just
different ways of laying out silicon.  So assuming that space comes for
free is equivalent to assuming that you have an infinite number of
processors to bring to bear on the problem.
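
As a toy illustration of that accounting (nothing Bernstein-specific,
just the metric itself):

    def cost(processors, time_steps):
        # space*time: silicon area, whether memory or ALUs, times how
        # long you run it
        return processors * time_steps

    # Doubling the hardware halves the time but leaves the cost fixed:
    assert cost(1, 1024) == cost(2, 512) == cost(1024, 1)
    # Under a time-only metric, by contrast, the third machine looks
    # 1024x better; that is why "free space" amounts to "as many
    # processors as you like".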

> It looks to me like there is no improvement in the first phase, but
> there is some improvement in the second phase (reduction in time from
> B^2 to B^1.5).

If you had an unlimited number of processors for the first phase, and
you had X values to check for smoothness, then you could use X
processors and run one test on each value, completing the whole thing in
time L^o(1).  This does not reduce Bernstein's cost metric, but it would
reduce the time considerably.  I don't think such a speedup is possible
with the sieving approach.
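
To make the parallel picture concrete, here is a toy sketch in Python.
Trial division over a hypothetical small factor base stands in for the
real L^o(1) smoothness test (Bernstein's machine would use ECM); the
point is only that each candidate is tested independently, so with one
processor per candidate the elapsed time is that of a single test.

    from concurrent.futures import ProcessPoolExecutor

    FACTOR_BASE = [2, 3, 5, 7, 11, 13]   # hypothetical toy smoothness bound

    def is_smooth(n):
        """True if n factors completely over FACTOR_BASE."""
        for p in FACTOR_BASE:
            while n % p == 0:
                n //= p
        return n == 1

    def find_relations(candidates):
        # Every test is independent, so all of them could run at once
        # given enough processors; the pool here is only a stand-in.
        candidates = list(candidates)
        with ProcessPoolExecutor() as pool:
            smooth_flags = pool.map(is_smooth, candidates)
        return [c for c, ok in zip(candidates, smooth_flags) if ok]

    if __name__ == "__main__":
        print(find_relations(range(2, 200)))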

However, this is not physically reasonable, as the number of values to
be tested for smoothness is approximately L^2, which would be on the
order of 2^90 for a 1024-bit key.  If each cell were .01 microns on a
side, you would still need about 100,000 square kilometers of chip area.
So this extrapolation is not very meaningful.
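
For what it's worth, the arithmetic checks out; a quick Python
back-of-the-envelope:

    cells = 2**90                  # candidate values, one cell apiece
    cell_side_m = 0.01 * 1e-6     # .01 microns, in meters
    area_m2 = cells * cell_side_m ** 2
    print(area_m2 / 1e6)          # ~1.2e5 square kilometers

That comes to roughly 120,000 km^2, i.e. on the order of 100,000 square
kilometers, as stated above.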
