[Cryptography] floating point

Jerry Leichter leichter at lrw.com
Sat Dec 27 13:15:29 EST 2014


On Dec 27, 2014, at 12:18 PM, Henry Baker <hbaker1 at pipeline.com> wrote:
> The biggest problem with the IEEE standard is its complete ignorance of compilers, optimizers and (in general) program analysis tools.  I'm not aware of any program analysis tool that can "think" like a topologist; all the ones that I know of "think" like an algebraist.  In order for a data type & data operations to be compatible with program analysis tools -- e.g., compilers, optimizers, etc. -- such a data type has to have _algebraic_ properties, like commutativity, associativity, etc.  Knuth attempted to axiomatize these algebraic properties, but subsequent work hasn't gone much beyond Knuth's work.
Is the issue here *IEEE* floating point in particular, or *floating point* in general?  I think the latter.  And it's not even specific to FP:  many of the algebraic properties you'd like to be able to rely on aren't even true for bounded integer arithmetic.  There was a long thread on exactly this topic on this list not long ago.  I forget the exact details, but the question was whether a C compiler was allowed to "optimize away" an overflow "present in the source" (in quotes since the standard specifically says that the overflow causes "undefined behavior", so "present in the source" is a statement that's tricky to make in the model presented by the standard).
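For concreteness, here's a minimal C sketch of the kind of check in question (not the code from that thread).  A compiler is entitled to assume signed overflow never happens and fold the test to false, and GCC and Clang typically do exactly that at -O2 unless you ask for wrapping semantics with something like -fwrapv:

    #include <limits.h>
    #include <stdio.h>

    /* Signed overflow is undefined behavior, so the compiler may assume
       x + 1 never wraps and reduce this function to "return 0;". */
    int will_overflow(int x) {
        return x + 1 < x;
    }

    int main(void) {
        /* May print 0 even for INT_MAX: the overflow "present in the
           source" has been optimized away. */
        printf("%d\n", will_overflow(INT_MAX));
        return 0;
    }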

I've never seen a floating point representation that has clean algebraic properties.  I doubt such a thing could exist:  In any finite FP representation, there will always be a non-zero value e such that 1+e produces the same (rounded/truncated) representation as 1.  So 1+e == 1 and yet e != 0.  You could trap such values or produce something like NaN, but neither of those gives you nice algebraic properties.  You could try to remove such small values from the set of model numbers, but since FP deals with *relative* values ... where do you stop?  The E such that 10^20+E produces 10^20 is rather large.
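To make that concrete in C (the volatile stores are only there to force the sums to be rounded to double, so the comparisons aren't confused by any wider intermediate precision):

    #include <stdio.h>
    #include <float.h>

    int main(void) {
        double e = DBL_EPSILON / 4.0;   /* nonzero, but under half an ulp of 1.0 */
        double E = 1000.0;              /* large absolutely, tiny relative to 1e20 */

        volatile double a = 1.0 + e;
        volatile double b = 1e20 + E;

        printf("e != 0          : %d\n", e != 0.0);   /* prints 1 */
        printf("1 + e == 1      : %d\n", a == 1.0);   /* prints 1 */
        printf("1e20 + E == 1e20: %d\n", b == 1e20);  /* prints 1 */
        return 0;
    }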

It's not even clear that this is worth doing.  Some compilers have modes that allow "unsafe" optimizations, but they don't seem to buy much.  The optimizations you need to get real gains *with accurate results* in FP computations are subtle and way beyond reasonable compiler technology.
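The classic example is compensated (Kahan) summation - sketched below, not tied to any particular compiler - where the whole trick depends on the compiler *not* doing the obvious algebra.  An "unsafe" optimizer that reassociates FP expressions is free to notice that the correction term is algebraically zero, delete it, and quietly hand you back plain, less accurate summation:

    #include <stddef.h>

    double kahan_sum(const double *x, size_t n) {
        double sum = 0.0;
        double c = 0.0;                 /* compensation for lost low-order bits */
        for (size_t i = 0; i < n; i++) {
            double y = x[i] - c;
            double t = sum + y;
            c = (t - sum) - y;          /* zero in exact arithmetic, not in FP */
            sum = t;
        }
        return sum;
    }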

> Properly optimizing floating point arithmetic in a compiler requires the ability to "dispatch" on certain bit configurations of the floating point numbers, and this unfortunately requires moving the floats into fixed point registers in order to check these configurations.
The set of FP computations that gain much from this kind of thing is rather specialized.  Those who do these kinds of computations complain about features missing from machines out there, but they are an insignificant minority of the audience, so they have never gained any traction.  (There was a guy who used to be a regular on the comp.compilers newsgroup years ago.  He was a statistician at some university - apparently quite competent in that field - but he kept complaining about the "obvious" things missing from existing programming languages and machine instruction sets.  The ability to do certain bit operations on FP values was one thing he insisted was "absolutely essential".  People eventually gave up arguing with him - he just could not see that the specialized computations that were important to *him* were simply not that interesting to anyone else - and that no one was going to build him either the compiler or the hardware he wanted just because he wanted it.)

> Given the distinct datapaths for fixed & float values, this is _extremely expensive_, so you are forced to make a choice between speed and correctness.  It should be possible to perform these bit dispatches & fixups even in deep pipelines, but I'm not aware of any floating point HW that allows this.
The old CDC 6000's used the same registers for integers (well, some integers) and floats.  Efficient algorithms for certain common operations - ABS(), MAX(), MIN() - made use of this fact:  They could be computed using the integer and Boolean operations.  (In fact, the FORTRAN compiler generated in-line code using these algorithms.)  But even these machines had separate functional units for Boolean/integer/FP operations (though integer multiply was actually done by using non-standard FP values in the FP multiply units).  As you move on to the Cray vectorized hardware, the vector registers split off as a completely separate set which mostly gets accessed from the FP units.  That's been a broad trend:  The more potential units a register has to feed, the more complex the logic.  Keeping FP computation separate from "control" - which is what the boolean and integer operations are principally about in heavily FP code - gets you more bang for the buck.

Yes, the unusual algorithms that mix the two domains are expensive.  It's all tradeoffs.
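Here's the flavor of it in modern terms - a sketch, not CDC code - of ABS() done with an integer bit operation on an IEEE 754 double.  On a machine with separate integer and FP register files, the value typically has to cross the datapath boundary both ways, which is exactly the expense at issue:

    #include <stdint.h>
    #include <string.h>

    static double abs_via_bits(double x) {
        uint64_t bits;
        memcpy(&bits, &x, sizeof bits);    /* FP domain -> integer domain */
        bits &= ~(UINT64_C(1) << 63);      /* clear the sign bit */
        memcpy(&x, &bits, sizeof x);       /* integer domain -> FP domain */
        return x;
    }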

> I recall a discussion about benchmarking a program, and the new program was faster, but the results were slightly different.  These differences were not acceptable, even though they were within the same error bounds as the original program.
> 
> Customers don't mind mistakes, so long as they are the _same mistakes_ that everyone else makes.
Many years ago, when I was working at DEC, a friend was asked to port a FORTRAN program from PDP10's to VAXes.  The code implemented a model that predicted the cost of maintaining hardware; it was used to set support prices.  Deep in the FORTRAN, there was a central algorithm that rounded intermediate values for no apparent reason.  The comment described the mathematics involved - all of which made perfect sense; but it made no mention of intermediate rounding.  My friend asked me what to do with this code.  I examined it, examined the comments, and told her that the code clearly didn't implement the model correctly and that she should just toss all the extra rounding. 
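To show the kind of effect such intermediate rounding has, here's a purely made-up illustration (the figures and the rounding rule are invented; this is not the actual model):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double monthly[] = { 103.40, 98.60, 101.50, 99.49 };  /* invented figures */
        double exact = 0.0, with_rounding = 0.0;

        for (int i = 0; i < 4; i++) {
            exact         += monthly[i];
            with_rounding += floor(monthly[i] + 0.5);  /* round each term to whole dollars */
        }
        /* prints: exact: 402.99   with intermediate rounding: 403.00 */
        printf("exact: %.2f   with intermediate rounding: %.2f\n",
               exact, with_rounding);
        return 0;
    }

Small per-term differences like that flow straight into whatever prices get computed downstream.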

The new program was delivered to the customer.   They soon came back with a problem:  As a test, they had run it for some older hardware.  The answers differed from what the old code produced.

We looked closely at the results and, indeed, it was due to the rounding we'd removed.  DEC being an engineering company at that time, we could easily explain this to our customers.  They agreed - our results were a more accurate representation of their model than the old results.  But ... many millions of dollars were tied up in support contracts based on the old results. It would produce a huge mess if those had to be changed.  Could we put things back the way they had been?

Now, you can take this as a sign of irrationality.  But then again, the model itself was a simplification of reality.  Who could say whether it, or the messy output with the internal rounding, was a better predictor of actual costs?  The fact is, the old version's output had been found to "work" empirically.  Living with the churn that would result from a change, with no actual assurance that the new results were in any real sense "better", wasn't justifiable.

                                                        -- Jerry


