[Cryptography] floating point

Henry Baker hbaker1 at pipeline.com
Sat Dec 27 12:18:19 EST 2014


At 05:54 AM 12/27/2014, Jerry Leichter wrote:
>IEEE representations have one enormous advantage:  They come in hardware, so have much higher performance than any software alternative.  (I haven't looked at the FP that comes with graphics cards; I suspect it's mainly a return to the "make it fast" philosophy of the early days, especially since the uses at which they are primarily aimed - computing stuff to put on the screen - aren't sensitive to fine details.  It'll be interesting to see how it evolves as graphics cards are increasingly used as highly-parallel compute engines for other purposes entirely.)  Those who are unhappy with the properties of IEEE arithmetic spend their efforts in building better - more appropriate for their purpose - systems with IEEE as the primitives.

IEEE floats in GPUs are getting closer & closer to the IEEE standard, simply because GPU customers are more & more people like physicists rather than gamers.  I believe that contemporary GPU floats follow everything in the IEEE standard except gradual underflow and perhaps some of the exception handling.
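For concreteness, here is a small C sketch (my illustration, not anything GPU-specific) of what gradual underflow means: on fully compliant hardware, halving the smallest normal float yields a subnormal, while flush-to-zero hardware returns 0.

  /* Gradual underflow vs. flush-to-zero, on whatever the host FPU does. */
  #include <stdio.h>
  #include <float.h>
  #include <math.h>

  int main(void) {
      volatile float smallest_normal = FLT_MIN;  /* 2^-126; volatile keeps the divide at runtime */
      float half = smallest_normal / 2.0f;

      /* fpclassify distinguishes a subnormal result from a flush to zero. */
      if (fpclassify(half) == FP_SUBNORMAL)
          printf("gradual underflow: %g is subnormal\n", half);
      else if (half == 0.0f)
          printf("flush-to-zero: result underflowed to 0\n");
      return 0;
  }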

>Does that mean the standard Baker refers to is "wrong"?  From an elegance/correctness point of view, sure.  From a "usefulness" point of view, probably not.  IEEE FP support is cheap and fast and everywhere.  Any other representation would be more difficult to support widely - though one could well argue that Knuth dealt with exactly this issue in two dimensions in TeX, and one should just use his design and code, which is freely available.  (TeX positioning errors are given in absolute terms and are well under the wavelength of light - certainly good enough for typesetting, good enough for 3D fabrication unless you're building metamaterials.)

The biggest problem with the IEEE standard is its complete ignorance of compilers, optimizers and (in general) program analysis tools.  I'm not aware of any program analysis tool that can "think" like a topologist; all the ones that I know of "think" like an algebraist.  In order for a data type & its operations to be compatible with program analysis tools -- e.g., compilers, optimizers, etc. -- the type has to have _algebraic_ properties, like commutativity, associativity, etc.  Knuth attempted to axiomatize these algebraic properties, but subsequent work hasn't gone much beyond Knuth's.
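A one-line example of what the optimizer is up against: float addition isn't even associative, so a sum cannot legally be reassociated.  A small C sketch (my numbers, chosen to make the rounding visible):

  /* (a + b) + c and a + (b + c) differ for ordinary float values:
   * in the right-hand grouping, b + c rounds back to -1.0e8f. */
  #include <stdio.h>

  int main(void) {
      float a = 1.0e8f, b = -1.0e8f, c = 1.0f;
      float left  = (a + b) + c;   /* = 0 + 1 = 1 */
      float right = a + (b + c);   /* = 1.0e8 + (-1.0e8) = 0 */
      printf("(a+b)+c = %g, a+(b+c) = %g\n", left, right);
      return 0;
  }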

Properly optimizing floating point arithmetic in a compiler requires the ability to "dispatch" on certain bit configurations of the floating point numbers, and this unfortunately requires moving the floats into fixed point registers in order to check these configurations.  Given the distinct datapaths for fixed & float values, this is _extremely expensive_, so you are forced to choose between speed and correctness.  It should be possible to perform these bit dispatches & fixups even in deep pipelines, but I'm not aware of any floating point HW that allows this.
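Here is a rough C sketch of such a bit dispatch (a portable approximation, not actual compiler output): the float's bits are moved into an integer register, and the exponent field is tested directly.  The memcpy is the portable bit-cast; on real hardware it is exactly the fixed/float datapath crossing that makes these checks expensive.

  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>
  #include <math.h>

  static const char *classify(float x) {
      uint32_t bits;
      memcpy(&bits, &x, sizeof bits);        /* float -> integer register */
      uint32_t exponent = (bits >> 23) & 0xFF;
      uint32_t mantissa = bits & 0x7FFFFF;

      if (exponent == 0xFF) return mantissa ? "NaN" : "infinity";
      if (exponent == 0)    return mantissa ? "subnormal" : "zero";
      return "normal";
  }

  int main(void) {
      printf("%s %s\n", classify(1.0f), classify(NAN));
      return 0;
  }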

The good news about IEEE floats: they are _standard_, so every computer now makes the same mistakes.

The bad news about IEEE floats: they are standard, so there is no pressure to improve their properties, so every computer now makes (the same) mistakes.

I recall a discussion about benchmarking a program: the new version was faster, but its results were slightly different.  Those differences were not acceptable, even though they were within the same error bounds as the original program.
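A hypothetical C sketch of the effect (the values are made up): summing the same array in two orders is mathematically identical, yet the float results differ, and both are within the same bound of the true sum.

  /* Same data, two evaluation orders, two different float answers. */
  #include <stdio.h>

  int main(void) {
      float data[4] = {1.0e7f, 3.14159f, -1.0e7f, 2.71828f};
      float fwd = 0.0f, bwd = 0.0f;
      for (int i = 0; i < 4; i++)  fwd += data[i];
      for (int i = 3; i >= 0; i--) bwd += data[i];
      printf("forward = %.9g, backward = %.9g, equal = %d\n",
             fwd, bwd, fwd == bwd);
      return 0;
  }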

Customers don't mind mistakes, so long as they are the _same mistakes_ that everyone else makes.

(Hmmm....  This sounds very much like the behavior of the banks in the recent financial crisis; so long as everyone else made exactly the same errors ("Value-At-Risk", etc.), nobody cared.)


