[Cryptography] floating point

Bill Stewart billstewart at pobox.com
Wed Dec 24 16:30:18 EST 2014


At 10:32 PM 12/23/2014, John Levine wrote:
> >I still remain astonished, that, since Babbage/Lovelace/Turing/etc, that
> >many alleged programmers still do not know that 0.0 != 0.
>
>To pile on a little, on every reasonably modern* architecture I know,
>0.0 == 0, both semantically and the bit representation.

IEEE 754 0.0 is 16, 32, 64, 80, or 128 bits, depending on the format.
The standard came out in 1985, and it took a while before everybody
adopted it, though the Intel 8087 chip had been running a draft of the
standard since 1980.
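
In case it's useful, here's a minimal C sketch (assuming the usual
mapping of float to binary32 and double to binary64, which C doesn't
guarantee but every mainstream platform does) showing that +0.0 is the
all-zeros bit pattern at both widths, i.e. the same bits as the integer 0:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        float  f = 0.0f;   /* binary32, on the usual platforms */
        double d = 0.0;    /* binary64, likewise */
        uint32_t fbits;
        uint64_t dbits;
        /* memcpy is the portable way to look at the raw bits. */
        memcpy(&fbits, &f, sizeof fbits);
        memcpy(&dbits, &d, sizeof dbits);
        printf("binary32 +0.0: %08x\n", (unsigned)fbits);
        printf("binary64 +0.0: %016llx\n", (unsigned long long)dbits);
        return 0;   /* prints 00000000 and 0000000000000000 */
    }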

These days, most computers represent integers as 32 or 64 bits,
but in your "since the PDP-6" timeframe, many used 16 bits or even 8,
and sometimes weirder widths like 12 or 24 (though probably none of those
machines did IEEE floating point; the 24-bit ints I've seen were on DSPs).

Then of course there's -0.0, just to be annoying: -0.0 == 0.0 compares
true even though the two have different bit representations, so -0.0 == 0
holds as well.
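
And a similar sketch for the negative-zero case, again assuming
binary64 doubles:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        double pz = 0.0, nz = -0.0;
        uint64_t pb, nb;
        memcpy(&pb, &pz, sizeof pb);
        memcpy(&nb, &nz, sizeof nb);
        printf("-0.0 == 0.0: %d\n", nz == pz);  /* 1: they compare equal  */
        printf("-0.0 == 0:   %d\n", nz == 0);   /* 1: int 0 becomes +0.0  */
        printf("+0.0 bits: %016llx\n", (unsigned long long)pb);
        printf("-0.0 bits: %016llx\n", (unsigned long long)nb);
        return 0;   /* 0000000000000000 vs 8000000000000000: sign bit set */
    }

About the only ways to tell the two zeros apart are looking at the bits,
calling signbit(), or dividing by them (1.0 / -0.0 gives -infinity).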



