# [Cryptography] floating point

Henry Baker hbaker1 at pipeline.com
Sun Dec 28 10:52:40 EST 2014

At 12:06 PM 12/27/2014, Ray Dillinger wrote:
>The IEEE floats are, mathematically speaking, a set of
>numeric values which approximate (though not very closely)
>a logarithmic distribution.
>
>This has always annoyed me somewhat.  If you're going to
>approximate a logarithmic distribution, why not just make it
>BE a logarithmic distribution?  Define your numeric value as
>the result of raising some root to the power of the bit
>representation, and you get a logarithmic distribution that
>minimizes relative error better than the system IEEE is using
>now.  Further, you get efficient and accurate(!) FP
>multiplication and division using the same hardware you
>use for integer addition.
>
>Of course you'd still have inaccurate addition and subtraction,
>but heck, you've got that now.  You could at least get
>multiplication and division right, which is better than
>IEEE does.
>
>You still need an analogue of "denormalized" floats right around
>zero because it breaks down there for the same reasons the IEEE
>mantissa+exponent system does - but you need fixed-point
>representation anyway! You've got to have it for things like
>modeling software and accounting, etc. where you're trying to
>minimize or eliminate absolute error rather than relative error,
>so using it for numbers near zero doesn't really increase
>overall requirements.

Yes, there have been electronic calculators built (Wang??)
that utilized pure log numbers (slide rules, anyone?).
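As an illustrative sketch (not the design of any particular calculator), here is the core idea of a pure-log representation: store each positive value as its base-2 logarithm, and multiplication and division collapse into addition and subtraction of the stored representations. The function names are hypothetical; a real log number system would also carry a sign bit and a special encoding for zero.

```python
import math

def to_log(x):
    """Encode a positive value as its base-2 logarithm.
    (A real LNS would also carry a sign bit and a zero flag.)"""
    return math.log2(x)

def from_log(e):
    """Decode a log-domain representation back to an ordinary float."""
    return 2.0 ** e

def log_mul(a, b):
    # Multiplication in the log domain is just addition of the
    # representations -- the same adder used for integer arithmetic.
    return a + b

def log_div(a, b):
    # Division is subtraction in the log domain.
    return a - b

# 6 * 7 and 42 / 6, computed entirely in the log domain:
product = from_log(log_mul(to_log(6.0), to_log(7.0)))
quotient = from_log(log_div(to_log(42.0), to_log(6.0)))
```

Addition and subtraction, by contrast, have no such shortcut in the log domain, which is the inaccuracy Dillinger concedes above.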

If you want better behavior near zero -- aka denormalized
floats -- then you really want ASINH(x/2) rather than LOG(x).
ASINH(x/2) is linear around zero, but behaves like LOG(x)
once you get far enough from zero.
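A quick numerical check of that claim (my sketch, using Python's math.asinh; not part of the original post):

```python
import math

# Near zero, asinh(x/2) ~= x/2: the map is essentially linear,
# giving graceful "denormal-like" behavior around zero.
x_small = 1e-6
near = math.asinh(x_small / 2)   # ~= x_small / 2

# Far from zero, asinh(x/2) = ln(x/2 + sqrt(x*x/4 + 1)) ~= ln(x):
# the map becomes logarithmic, like an ordinary log encoding.
x_big = 1e6
far = math.asinh(x_big / 2)      # ~= ln(x_big)

# Unlike log, asinh is an odd function, so negative inputs
# need no separate sign handling.
symmetric = math.asinh(-1.0) + math.asinh(1.0)   # ~= 0
```

So a single smooth map covers the whole real line, rather than bolting a separate denormalized regime onto a log encoding.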