#372 closed defect (invalid)
Problem with basic arithmetic
| Reported by: | wvdb | Owned by: | gb |
| --- | --- | --- | --- |
| Priority: | critical | Milestone: | |
| Component: | ANSI CL Compliance | Version: | trunk |
| Keywords: | arithmetic | Cc: | |
Description
Hard to believe, but this is what I'm getting:
? (= (+ .6 .8) 1.4)
NIL
? (+ .8 .6)
1.4000001
? (+ .6 .8)
1.4000001
? (+ .6 .8 .1)
1.5000001
? (- .8 .6)
0.19999999
I do not see this for other numbers, e.g.
? (+ .6 .9)
1.5
? (+ .6 .7)
1.3
.6 and .8 somehow seem to be involved, but not systematically, e.g.
? (+ .1 .6 .8)
1.5
I tried other numbers but could not find similar behavior for addition, but for subtraction I'm also getting:
? (- .9 .6)
0.29999995
? (- .6 .9)
-0.29999995
? (- .6 .5)
0.100000024
? (- .7 .8)
-0.100000024
I also seem to be getting similar problems with multiplication, but only with numbers 0 < x < 1 written with a single decimal digit.
This is one particular terminal test session:
$ ccl
Welcome to Clozure Common Lisp Version 1.2-r11241M  (DarwinX8664)!
? (= (+ .6 .8) 1.4)
NIL
? (= (+ .6 .9) 1.5)
T
? (+ .6 .8)
1.4000001
? (+ .6 .9)
1.5
? (+ .6 .7)
1.3
? (+ .8 .6)
1.4000001
? (+ .5 .6)
1.1
? (+ .8 .6 .1)
1.5000001
? (- .6 .8)
-0.19999999
? (- .5 .8)
-0.3
? (- .6 .6)
0.0
? (- .6 .7)
-0.099999964
? (* .1 .1)
0.010000001
? (* 1 1)
1
? (* 2 .1)
0.2
? (* 1.1 1.1)
1.21
? (* 1.1 .1)
0.11000001
? (* .2 .2)
0.040000003
? (* .1 1)
0.1
? (* .1 .1111)
0.01111
? (* .1 .11)
0.011
? (* .1 .1)
0.010000001
? (* .11 .11)
0.0121
platform: Mac OS X 10.5.5 on 64-bit Intel
ccl: 1.2-r11241M (DarwinX8664), compiled from svn, checked out on Wed 29 Oct 2008
What could be the problem here?
Change History (3)
comment:1 Changed 13 years ago by gb
- Resolution set to invalid
- Status changed from new to closed
comment:2 Changed 13 years ago by rme
For more about floating-point arithmetic than you probably want to know, see http://docs.sun.com/source/806-3568/ncg_goldberg.html
The problem is that floating-point numbers are not infinitely precise, and you seem to be assuming that single-precision floats are precise enough to represent all numbers that can accurately be represented in decimal.
The function CCL::SINGLE-FLOAT-BITS (which isn't otherwise interesting) returns a 32-bit integer that has bits set in the same positions as its SINGLE-FLOAT argument. If we print this integer in hex, it may make it easier to visualize what the single-float "looks like":
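The example output appears to have been lost from this comment. The same bit pattern can be reproduced outside Lisp; a minimal Python sketch (using `struct` to view the 32-bit IEEE single encoding, as a stand-in for printing CCL::SINGLE-FLOAT-BITS in hex):

```python
import struct

# Encode 0.1 as a 32-bit IEEE single float and show its bits in hex,
# analogous to printing (CCL::SINGLE-FLOAT-BITS 0.1) with ~X.
bits = struct.pack('>f', 0.1).hex()
print(bits)  # 3dcccccd -- the mantissa's ...CCCC pattern, rounded up to ...CD
```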
If we had an infinite number of bits in which to express 0.1's mantissa in base 2 ("#x...CCCCCCCCCCCCCC..."), there wouldn't be a problem accurately representing 0.1 as an IEEE single float. Unfortunately, we don't have an infinite number of bits (we only have 23 or so), and we have to use an approximation and decide whether and how to round the infinite value so that it fits. If we round up ("...CCCCCD"), we get a number that's a little bigger than 0.1; if we round down ("...CCCCCC"), we get a number that's a little smaller. In either case, the number that we can represent is a finite approximation of 0.1 in base 2, just as 0.33, 0.333, and 0.3333333 are finite approximations of 1/3 in base 10.
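The infinitely repeating base-2 expansion of 0.1 is easy to see directly. A short Python sketch, using exact rational arithmetic to generate binary digits one at a time (the names are illustrative):

```python
from fractions import Fraction

# Generate the first 28 binary digits of 1/10 after the binary point
# by repeated doubling of an exact rational.
f = Fraction(1, 10)
digits = []
for _ in range(28):
    f *= 2
    digits.append('1' if f >= 1 else '0')
    if f >= 1:
        f -= 1
print(''.join(digits))  # 0001100110011001100110011001 -- "0011" repeats forever
```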
Rounding the way that we do, the SINGLE-FLOAT number that reads and prints as 0.1 is really something like "0.100000001"; if we'd rounded down, it would be something like "0.099999994". Whatever those values are, we can't represent anything in between them in a 32-bit IEEE single-float.
A rule of thumb is that about 6 or 7 decimal digits of an IEEE single-float are accurate; the lisp printer prints the single-float value "0.100000001" as 0.1, but it's not really "precisely the same as the result of dividing 1 by 10" any more than 0.33333... is "precisely the same as the result of dividing 1 by 3".
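One way to see what the printer is hiding is to widen the single to a higher-precision format before printing. A Python sketch (round-tripping 0.1 through the 32-bit single format and viewing the result as a double):

```python
import struct

# The exact single-float nearest to 0.1, viewed at double precision:
nearest_single = struct.unpack('>f', struct.pack('>f', 0.1))[0]
print(nearest_single)  # roughly 0.100000001..., not 0.1
```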
Once you start doing arithmetic involving imprecise approximations of real numbers, the rounding error starts to accumulate and becomes visible in the printed representation of the results (and may deviate significantly from the true value of the result, which may or may not be accurately representable.)
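The reported (+ .6 .8) case can be reproduced step by step. A Python sketch that emulates 32-bit single-float arithmetic via a struct round-trip (`to_single` is an illustrative helper, not a CCL function):

```python
import struct

def to_single(x):
    """Round a Python float (an IEEE double) to the nearest 32-bit IEEE single."""
    return struct.unpack('>f', struct.pack('>f', x))[0]

a = to_single(0.6)        # slightly above 0.6
b = to_single(0.8)        # slightly above 0.8
total = to_single(a + b)  # the sum, rounded back to single precision
print(total)                    # 1.4000000953674316 -- Lisp prints this as 1.4000001
print(total == to_single(1.4))  # False, hence (= (+ .6 .8) 1.4) => NIL
```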
DOUBLE-FLOATs are also imprecise finite approximations, but they're significantly more precise than SINGLE-FLOATs are.
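The difference in precision is easy to quantify on the same example. A Python sketch (Python's float is an IEEE double; the single-precision result is simulated with a struct round-trip, and `to_single` is an illustrative helper):

```python
import struct

def to_single(x):
    # Round an IEEE double to the nearest 32-bit IEEE single.
    return struct.unpack('>f', struct.pack('>f', x))[0]

single_sum = to_single(to_single(0.6) + to_single(0.8))
double_sum = 0.6 + 0.8  # carried out entirely in IEEE doubles

print(abs(single_sum - 1.4))  # ~1e-7: error visible in the 7th decimal digit
print(abs(double_sum - 1.4))  # ~2e-16: still not exact, but far smaller
```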
Exactly how the CL types SHORT-FLOAT, SINGLE-FLOAT, DOUBLE-FLOAT, and LONG-FLOAT map onto whatever types the hardware and system software support is up to the implementation. In CCL, SHORT-FLOAT and SINGLE-FLOAT map to 32-bit IEEE single floats and DOUBLE-FLOAT and LONG-FLOAT map to 64-bit IEEE double-floats. This is fairly typical and has the property that lisp SINGLE-FLOATs are IEEE single-floats and lisp DOUBLE-FLOATs are IEEE double-floats, which is generally less confusing than other mappings. Since the initial value of *READ-DEFAULT-FLOAT-FORMAT* is required by ANSI CL to be SINGLE-FLOAT, this partitioning does mean that floating-point constants written without explicit exponent markers are 32-bit IEEE single-floats.
(MCL made the CL type SINGLE-FLOAT map to IEEE double-float, which made arithmetic involving unqualified floating-point constants and coercions of rational values via FLOAT et al. more precise, but this partitioning - where SINGLE-FLOATs were double-floats - led to significant confusion in other cases.)