[Cryptography] NSA says China's supercomputing advances put US at risk

Tom Mitchell mitch at niftyegg.com
Fri Mar 17 16:22:26 EDT 2017


On Fri, Mar 17, 2017 at 12:15 PM, James A. Donald <jamesd at echeque.com> wrote:
> On 2017-03-17 21:28, Jerry Leichter wrote:
>>
>> But just outright dismissing the whole issue as
>> "more pandering
....
>
> Answer: A GPU is a supercomputer that solves a kind of problem very similar
> to those that government supercomputers are supposed to solve at about one
> ten millionth the cost.

There are issues here worth thinking about and reviewing. It is not just
big clusters of commodity hardware acting as a solution amplifier. If you
are serious about supercomputing, a lot of things need to be looked at,
and floating point is only one of them. What other data operations, and
what problems with those operations, apply to this group's agenda?

This one issue of floating-point data is a pet of mine, going back to a
customer who demanded results reproducible to the 19th digit on a
multiprocessor system, yet had a card reading "PI = 3.14" in his deck.
His program deck was unstable if the card was changed to
PI = 3.14159265358979323846.
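
A minimal sketch of why that demand could not be met (my illustration,
not his actual deck): IEEE 754 addition is not associative, so a
multiprocessor reduction that changes the summation order changes the
answer in the last digits.

    import random

    random.seed(1)
    xs = [random.uniform(-1.0, 1.0) for _ in range(100000)]

    serial = sum(xs)               # one fixed left-to-right order
    shuffled = list(xs)
    random.shuffle(shuffled)       # stand-in for a per-processor order
    parallel_like = sum(shuffled)

    print(serial == parallel_like)   # almost always False
    print(serial - parallel_like)    # a few ULPs of drift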

Given the talk below: does a better FPU design, with data widths suited
to cryptography, allow interesting attacks on cryptosystems? If not
FPUs, are there special-purpose processing units (SPUs) that could be
built and installed into the same system bus interfaces that GPU (CUDA)
hardware lives in?
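
One concrete reason the FPU route is awkward today (my aside, not from
the talk): an IEEE double is only exact for integers up to 2^53, while
cryptographic operands run to thousands of bits, which is why big-number
modular arithmetic lives in integer units.

    big = 2.0 ** 53
    print(big + 1.0 == big)     # True: the +1 is lost to rounding
    print(2**53 + 1 == 2**53)   # False: Python's exact integers keep it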

A recent Stanford EE380 talk.

Stanford EE380 Computer Systems Colloquium Seminar
Beyond Floating Point: Next-Generation Computer Arithmetic
Speaker: John L. Gustafson, National University of Singapore
Published on Feb 2, 2017

A new data type called a "posit" is designed for direct drop-in
replacement for IEEE Standard 754 floats. Unlike unum arithmetic,
posits do not require interval-type mathematics or variable size
operands, and they round if an answer is inexact, much the way floats
do. However, they provide compelling advantages over floats, including
simpler hardware implementation that scales from as few as two-bit
operands to thousands of bits. For any bit width, they have a larger
dynamic range, higher accuracy, better closure under arithmetic
operations, and simpler exception-handling. For example, posits never
overflow to infinity or underflow to zero, and there is no
"Not-a-Number" (NaN) value. Posits should take up less space to
implement in silicon than an IEEE float of the same size. With fewer
gate delays per operation as well as lower silicon footprint, the
posit operations per second (POPS) supported by a chip can be
significantly higher than the FLOPs using similar hardware resources.
GPU accelerators, in particular, could do more arithmetic per watt and
per dollar yet deliver superior answer quality.
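
To make the format concrete, here is a rough Python sketch of a posit
decoder written from the talk's description (sign bit, a run-length
"regime" field, es exponent bits, then fraction bits with a hidden 1);
treat it as my reading of the encoding, not reference code:

    def decode_posit(p, nbits=8, es=1):
        """Decode an nbits-wide posit with es exponent bits to a float."""
        mask = (1 << nbits) - 1
        p &= mask
        if p == 0:
            return 0.0
        if p == 1 << (nbits - 1):
            return float('inf')            # the single "infinity" pattern
        sign = -1.0 if p >> (nbits - 1) else 1.0
        if sign < 0:
            p = (-p) & mask                # negate via two's complement
        bits = format(p, '0%db' % nbits)[1:]        # drop the sign bit
        run = len(bits) - len(bits.lstrip(bits[0]))
        k = run - 1 if bits[0] == '1' else -run     # regime value
        rest = bits[run + 1:]              # skip the terminating regime bit
        e = int((rest[:es] or '0').ljust(es, '0'), 2)  # missing bits = 0
        frac = rest[es:]
        f = int(frac, 2) / (1 << len(frac)) if frac else 0.0
        useed = 1 << (1 << es)             # useed = 2**(2**es)
        return sign * useed**k * 2**e * (1.0 + f)

For posit(8,1) this decodes 0b01000000 to 1.0 and 0b01100000 to 4.0.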

A series of comprehensive benchmarks compares how many decimals of
accuracy can be produced for a set number of bits-per-value, using
various number formats. Low-precision posits provide a better solution
than "approximate computing" methods that try to tolerate decreases in
answer quality. High-precision posits provide better answers (more
correct decimals) than floats of the same size, suggesting that in
some cases, a 32-bit posit may do a better job than a 64-bit float. In
other words, posits beat floats at their own game.
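
The "decimals of accuracy" metric in those benchmarks is easy to
reproduce for IEEE floats; assuming the usual -log10(relative error)
definition, a sketch (posit arithmetic itself is not implemented here):

    import math, struct
    from decimal import Decimal, getcontext

    getcontext().prec = 50
    PI = Decimal('3.14159265358979323846264338327950288419716939937510')

    def decimals(approx):
        # Digits of agreement with the reference value of pi.
        return float(-(abs((Decimal(approx) - PI) / PI)).log10())

    single = struct.unpack('f', struct.pack('f', math.pi))[0]
    print(decimals(single))    # ~7.6 decimals from a 32-bit float
    print(decimals(math.pi))   # ~16.4 decimals from a 64-bit float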

About the Speaker:
Dr. John L. Gustafson is an applied physicist and mathematician. He is
a former Director at Intel Labs and former Chief Product Architect at
AMD. A pioneer in high-performance computing, he introduced cluster
computing in 1985 and first demonstrated scalable massively parallel
performance on real applications in 1988. This became known as
Gustafson's Law, for which he won the inaugural ACM Gordon Bell Prize.
He is also a recipient of the IEEE Computer Society's Golden Core
Award.

Search for the above to find this.

https://youtu.be/aP0Y1uAA-2Y

And now I need to look at Julia.



-- 
  T o m    M i t c h e l l

