Weird numeric predicates?

   Date: Sun, 19 Mar 89 17:22 EST
   From: Alan Bawden <Alan@AI.AI.MIT.EDU>

       Date: Fri, 17 Mar 89 11:26:05 -0500
       From: jinx@chamartin.AI.MIT.EDU (Guillermo J. Rozas)
       I haven't thought about it carefully, but it may not be reasonable
       (unless it is defined that way) to implement n-ary < and friends as
       the "AND accumulation" of binary <.  Comparisons between exact and
       inexact numbers should coerce the exact numbers to inexact, and this
       value may have to be used consistently afterwards.

   If <= is to behave transitively, even on inexact arguments, then you have
   to coerce inexact numbers to exact numbers in order to perform comparisons.
   (Or you must behave as if you did.)  To see why, consider the following
   three numbers:

     (DEFINE A (- (EXPT 10 38) 1))
     (DEFINE B 1E38)
     (DEFINE C (+ (EXPT 10 38) 1))

   Assuming your implementation has an exact representation for A and C
   (probably as a BIGNUM) and the inexact B is represented in a floating point
   format with fewer (probably far fewer!) than 38 digits of precision, then
   coercing either A or C to inexact will most likely return B.  If comparison
   predicates coerce EXACT->INEXACT, then the following will be true:

     (<= C B)	==>  #T
     (<= B A)	==>  #T
     (<= C A)	==>  #F

   Perhaps worse:

     (= C B)	==>  #T
     (= B A)	==>  #T
     (= C A)	==>  #F

   If instead comparison predicates coerce INEXACT->EXACT then consistently
   transitive answers will be obtained.

I think that what this example really speaks to is the limitations of
implicit arithmetic coercion.  It is really a form of DWIM
(do-what-I-mean).  (I am opposed to DWIM in any form -- some forms
more than others.)

To clarify what I am talking about, let me sketch an alternative set
of arithmetic primitives for a hypothetical version of Scheme.  (This
is not yet intended to be a formal proposal.)  Suppose there were two
sets of primitives, one for exact arithmetic, one for inexact; for
instance, exact equality might be EX= and inexact equality INEX=.
Both these primitives are required to accept both kinds of numbers as
arguments (this is *not* an efficiency hack to allow them to make
assumptions about the argument types); they then coerce both arguments
to exact or inexact, respectively, before making the comparison.
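A minimal sketch of how these two primitives might be defined on top
of the standard coercion procedures (EX= and INEX= are hypothetical
names, not anything in the report):

     (DEFINE (EX= X Y)
       (= (INEXACT->EXACT X) (INEXACT->EXACT Y)))

     (DEFINE (INEX= X Y)
       (= (EXACT->INEXACT X) (EXACT->INEXACT Y)))

A real implementation would make these n-ary, but the binary versions
suffice to show the idea.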

My claim is that distinguishing the two primitives makes sense because
they are really two different operations; INEX= really asks if the
arguments are equal within the floating-point precision of the
implementation.  Similar arguments hold for the other arithmetic
predicates and operations.

Implicit coercion is DWIMmy because it asks the system to choose one
of these two arguably different operations based on what the user
probably meant.  Especially in an untyped language where the
discrimination is made at runtime, this is an excellent way to produce
unmaintainable code.  (Suppose you have a program that does a lot of
arithmetic and, in debugging it, you find an inexact value where you
expected an exact one.  How are you going to find out where the
inexact operand was introduced?)
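For instance (a made-up illustration): one inexact element silently
turns the whole result inexact, and the final value gives no clue as
to where the contagion began:

     (DEFINE (SUM L)
       (IF (NULL? L)
           0
           (+ (CAR L) (SUM (CDR L)))))

     (SUM '(1 2 3))      ; exact result
     (SUM '(1 2 3.0))    ; inexact result -- but which element did it?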

Notice that, given Alan's example values, both EX= and INEX= are
transitive:

     (EX= C B)	==>  #F
     (EX= B A)	==>  #F
     (EX= C A)	==>  #F

     (INEX= C B)  ==>  #T
     (INEX= B A)  ==>  #T
     (INEX= C A)  ==>  #T

Same for EX<= and INEX<=.
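Incidentally, once the coercion is pinned down this way, the n-ary
versions *can* safely be the "AND accumulation" of the binary
predicate that Jinx was worried about.  A sketch (hypothetical names
again):

     (DEFINE (INEX<=* X . REST)
       (OR (NULL? REST)
           (AND (INEX<= X (CAR REST))
                (APPLY INEX<=* REST))))

Since every argument is coerced to inexact before comparison, the
pairwise answers are mutually consistent and the accumulation is
transitive.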

I am well aware that I am bucking Lisp tradition in making this
proposal.  However, there have been times when I was trying to do
numerics in Lisp when I wished I had a strongly-typed language, for
just this kind of reason -- a float would creep in somewhere and I
would have trouble finding where.  In (most?) strongly-typed languages
one still doesn't explicitly specify the mode in which the operation
is done separately from the types of the operands, but at least the
coercion rules apply at compile time.

(If you really want to make floating-point programs portable, how
'bout an optional third argument to *every* INEXop which is the number
of bits of precision (mantissa)?  (And even a fourth, which is the
number of bits of exponent?  (And then there's rounding...)))
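To make that concrete, a call to such a hypothetical operation might
look like:

     (INEX+ X Y 24)      ; add, rounding the result to a 24-bit mantissa
     (INEX+ X Y 24 8)    ; ...and confining the exponent to 8 bits

(No implementation provides this; it is just the shape of the
interface I have in mind.)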

To restate: I am basically proposing that the mode of each arithmetic
operation (exact or inexact; should there also be integer mode?) be
specified independently of the types of the operands.

-- Scott