[Cryptography] floating point

John Denker jsd at av8n.com
Fri Dec 26 00:55:12 EST 2014


Executive summary:
 *) There are things you can do with floating point, and things you can't.
 *) Recognizing the distinction does not make you an "alleged programmer".
 *) Name-calling is not an acceptable substitute for facts or logic.
 *) Name-dropping allusions to famous scientists is not an acceptable
  substitute for facts or logic.


Here is a /partial/ list of issues and non-issues:

1) There is a distinction between /numbers/ and /numerals/.  It
 is quite common to have multiple representations for the same
 number, such as seventeen, 17, XVII, 0x11, 17.0000, and literally
 infinitely many others.  This is something that people were 
 supposed to learn in third grade.

 The fact that IEEE floating point has two numerals that represent
 the number zero is not a serious problem.  It is a third-grade
 problem.  It is manageable with minimal effort.

2) IEEE floating point is not the only game in town.  Forsooth,
 it seems likely that most of the floating point done in the
 world today is /not/ IEEE compliant.  Hint:  GPUs.

 If you want IEEE floating point behavior, and it is not 
 supported in the hardware, it is painful to implement it
 in software.

3) There are many different equivalence relationships in the
 world.  A blue triangle is shapewise_equivalent to a red 
 triangle.  A blue triangle is colorwise_equivalent to a 
 blue square.  Take your pick.

 In the computer, we have arithmetical comparison as well as
 bitwise comparison, stringwise comparison, et cetera.  Take
 your pick.  Different ones are good for different purposes.
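
 For instance, in C++ (on an LP64 machine where long and double are
 both 8 bytes) the same pair of objects can be "equal" under one
 comparison and not under another:

    #include <cstring>
    #include <string>
    int main() {
        double x = 1.0;
        long   y = 1;
        bool a = (x == y);                              // arithmetic: true
        bool b = (std::memcmp(&x, &y, sizeof x) == 0);  // bitwise: false
        bool c = (std::to_string(x) == std::to_string(y));
                 // stringwise: false, "1.000000" vs "1"
        return (a && !b && !c) ? 0 : 1;
    }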

4) Restricting attention now to IEEE floating point default
 behavior, there are three types of entities to consider:
  -- rfloat: regular floating point numbers, representing a 
    subset of the rational numbers.
  -- xfloat:  all the above, extended to include +inf and -inf.
    I don't care whether you consider infinity to be a "number"
    or not.
  -- xxfloat: all of the above, extended to include NaN, which
    (as it says on the tin) is /not/ a number.  Even though it
    is not a number, it is an xxfloat entity.
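
 In C++ terms, all three kinds of entity fit in an ordinary double:

    #include <limits>
    double regular = 1.5;                                    // rfloat
    double inf  = std::numeric_limits<double>::infinity();   // xfloat, +inf
    double qnan = std::numeric_limits<double>::quiet_NaN();  // xxfloat, NaN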

 So, to answer a question that was previously asked:  In the
 domain of xxfloats, the arithmetic == operator is definitely
 /not/ an equivalence relation, because it fails the "reflexive"
 requirement.  Specifically, (NaN == NaN) is false.  If we 
 don't have a viable notion of equality, we can't even ask
 the question about whether something is a function or not.

 In the xxfloat domain, arithmetical == does not imply bitwise
 equality ... or vice versa:
  -- 0.0 == -0.0 but the representations are bitwise different.
  -- NaN != NaN but the representations are bitwise the same.
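
 Here is a small demonstration, using memcmp as the bitwise
 comparison (typical x86-64 build, default compiler settings,
 i.e. no -ffast-math):

    #include <cassert>
    #include <cmath>
    #include <cstring>
    int main() {
        double a = 0.0, b = -0.0;
        assert(a == b);                             // arithmetically equal
        assert(std::memcmp(&a, &b, sizeof a) != 0); // bitwise different: sign bit

        double n = std::nan("");                    // some particular NaN pattern
        double m = n;                               // a bit-for-bit copy
        assert(std::memcmp(&n, &m, sizeof n) == 0); // bitwise identical
        assert(n != m);                             // yet arithmetically unequal
        return 0;
    }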

 Do not ask me to explain why NaN != NaN returns true.  I am 
 quite aware of the traditional and historical explanations.
 They just don't make any sense.

 If we focus attention on the xfloat domain, narrowly speaking,
 then arithmetic == is an equivalence relation as far as I can
 tell.  In particular, let 
        a = 0.0 
        b = -0.0
 Then a is == to b, and any xfloat that is == to a is also ==
 to b.

 On the other hand, if we mix ints and xfloats, then arithmetic
 == is not an equivalence relation, because it fails the
 "transitive" requirement.  Specifically, due to a peculiar 
 notion of /promotion to float/ you can have p == q and q == r 
 but p != r.  I have code that demonstrates this.
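
 The gist of it, using 64-bit ints that get silently rounded when
 promoted to double (round-to-nearest assumed):

    #include <cassert>
    int main() {
        long long p = 9007199254740993LL;  // 2^53 + 1 (not representable)
        double    q = 9007199254740992.0;  // 2^53
        long long r = 9007199254740992LL;  // 2^53
        assert(p == q);   // p gets promoted to double and rounds to 2^53
        assert(q == r);   // exact
        assert(p != r);   // integer compare, no promotion: transitivity fails
        return 0;
    }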

 On the third hand, if p == 0 and 0 == r then p == r, so 
 the example that provoked this discussion is still not a 
 good example.

5) Within the xfloat domain, atan2 is /not/ a function, but 
 that's overkill;  there are plenty of simpler examples, not 
 involving transcendental functions.  Indeed, xfloat divide 
 is not a function, because (a == b) is true whereas (1/a == 1/b)
 is false.  Recall that
        a = 0.0 
        b = -0.0
 Also the usual decimal formatting does not respect the arithmetic
 == relation:  b is arithmetically == to a, but str(b) is not
 stringwise equal to str(a).  That is, "-0" is not stringwise equal
 to "0".
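
 Concretely, with default settings:

    #include <cmath>
    #include <cstdio>
    int main() {
        double a = 0.0;
        double b = -0.0;
        std::printf("%d\n", a == b);             // 1: arithmetically equal
        std::printf("%g %g\n", 1/a, 1/b);        // inf -inf
        std::printf("%g %g\n", std::atan2(a, -1.0),
                               std::atan2(b, -1.0));   // 3.14159 -3.14159
        std::printf("%g %g\n", a, b);            // 0 -0
        return 0;
    }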

6) Even within IEEE floating point, default behavior is not
 the only allowed behavior.  If you are writing a low-level
 library, with no control over the big picture, it is difficult
 to find /any/ floating point operations that are safe.  Even
 simple operations such as add, subtract, multiply, and divide
 could trap out and die.

 Conversely, some things that you might want to trap out or
 return NaN -- such as atan2(0,0) -- do not.  OTOH you could
 write wrappers for such things.
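
 For instance, a hypothetical wrapper (the name strict_atan2 is made
 up) that refuses to treat atan2(0,0) as well-defined:

    #include <cmath>
    #include <limits>
    double strict_atan2(double y, double x) {
        if (y == 0.0 && x == 0.0) {   // catches -0.0 too, since -0.0 == 0.0
            // report NaN instead of quietly returning 0 or pi
            return std::numeric_limits<double>::quiet_NaN();
        }
        return std::atan2(y, x);
    }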

7) Looking at the standards won't tell you what happens in the
 real world.
  -- Chez moi clang++ will not trap on floating divide by zero,
   no matter how politely you ask, even though other traps
   behave as expected.
  -- Chez moi g++ labels every FP trap as "divide by zero", even
   when something else (e.g. underflow) actually happened.
  -- et cetera.
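
 For reference, the usual way to /request/ traps under glibc is the
 non-portable feenableexcept extension; whether the request is then
 honored is exactly the issue:

    #include <fenv.h>    // glibc extension; needs _GNU_SOURCE
    #include <cstdio>
    int main() {
        feenableexcept(FE_DIVBYZERO | FE_INVALID | FE_OVERFLOW);
        volatile double zero = 0.0;   // volatile, so the divide happens at run time
        double x = 1.0 / zero;        // should deliver SIGFPE if the trap is armed
        std::printf("no trap: x = %g\n", x);
        return 0;
    }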

8) The things I am calling FP "traps" are commonly called exceptions,
 but they are not to be confused with C++ exceptions.  With a bit
 of work you can convert an FP trap into a C++ exception.
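
 One relatively tame way to do that -- polling the sticky flags
 rather than fielding SIGFPE -- is sketched below; the name
 checked_div is made up:

    #include <cfenv>
    #include <stdexcept>
    // Strictly speaking this wants "#pragma STDC FENV_ACCESS ON",
    // which not every compiler honors.
    double checked_div(double x, double y) {
        std::feclearexcept(FE_ALL_EXCEPT);
        double r = x / y;
        if (std::fetestexcept(FE_DIVBYZERO | FE_INVALID | FE_OVERFLOW)) {
            throw std::domain_error("FP exception flag raised");
        }
        return r;
    }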


