Tuesday, February 10, 2015

The Egyptians were better at fractions than our floating-point standard

When it comes to fractions, the Egyptians were just as good as, and possibly better than, our computing standards for floating-point math.

Our floating-point arithmetic has a lot in common with the Egyptian methods. Both build numbers as a series of fractions. Not just any fractions, but fractions in the form "1/n". Both limit the series to unique values; you cannot repeat a value in the series. When writing the value 3/5, the Egyptians would write not "1/5 + 1/5 + 1/5" but instead "1/2 + 1/10".
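
As a rough illustration, decompositions of this kind can be reproduced in code with the greedy method usually credited to Fibonacci. That is not necessarily how Egyptian scribes worked, but it produces the same kind of sums, including 3/5 = 1/2 + 1/10. A minimal sketch in C (the function name is my own):

#include <stdio.h>

/* Greedy unit-fraction decomposition of p/q: repeatedly take the largest
   unit fraction that fits, so every denominator is distinct. */
static void unit_fractions(long long p, long long q)
{
    int first = 1;
    printf("%lld/%lld = ", p, q);
    while (p > 0) {
        long long n = (q + p - 1) / p;   /* smallest n with 1/n <= p/q */
        printf("%s1/%lld", first ? "" : " + ", n);
        first = 0;
        /* remainder: p/q - 1/n = (p*n - q)/(q*n) */
        p = p * n - q;
        q = q * n;
    }
    printf("\n");
}

int main(void)
{
    unit_fractions(3, 5);   /* prints: 3/5 = 1/2 + 1/10 */
    return 0;
}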

Our floating-point algorithms enforce unique values by the nature of binary representations. A number stored in floating-point format has a number of binary places, values to the right of the (binary) point. Those values are 1/2, 1/4, 1/8, 1/16, ... the negative powers of two, and one bit is allocated to each. The bit for a value may be one (on, indicating that the value is part of the series) or zero (off; the value is not part of the series), but each value may be used only once.
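
Here is a small sketch of that idea, assuming the usual IEEE 754 double: it lists which binary fractions are "on" for a value, each one at most once. (The function name and the 60-term cutoff are just choices for the illustration.)

#include <stdio.h>

/* Print a value between 0 and 1 as the series of distinct binary fractions
   (1/2, 1/4, 1/8, ...) whose bits are "on" in its representation. */
static void show_binary_fractions(double x)
{
    double term = 0.5;          /* 1/2, then 1/4, 1/8, ... */
    long long denom = 2;
    int first = 1;

    printf("%.10g = ", x);
    for (int k = 1; k <= 60 && x > 0.0; k++) {
        if (term <= x) {        /* this negative power of two is part of the series */
            printf("%s1/%lld", first ? "" : " + ", denom);
            x -= term;          /* exact: removing one binary-fraction bit */
            first = 0;
        }
        term /= 2.0;
        denom *= 2;
    }
    printf("\n");
}

int main(void)
{
    show_binary_fractions(0.6875);   /* prints: 0.6875 = 1/2 + 1/8 + 1/16 */
    return 0;
}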

The Egyptian system was more flexible than our floating-point system. The values for our floating-point system are restricted to the negative powers of two; the Egyptian system allowed any fraction in the form "1/n". Thus the Egyptians could have values 1/5, 1/7, and 1/10.

Our floating-point system does not have values for 1/5, 1/7, or 1/10. Instead, a sum of the permitted values must be used in their place. And here is where things get difficult for the floating-point system.

As odd as it may appear, the value 1/10 cannot be represented exactly in our floating-point representation. Let's look at how a value is constructed:

We must build the value for 0.10 with a set of binary fractions, with at most one of each. The possible values are:

1/2     0.5
1/4     0.25
1/8     0.125
1/16    0.0625
1/32    0.03125
1/64    0.015625
1/128   0.0078125
1/256   0.00390625

The first three values are larger than 0.10, so those bits must be zero. Therefore, the first part of our binary representation must be "0.000" (an initial zero and then three zero bits for 1/2, 1/4, and 1/8).

The value 0.0625 can be used, and in fact must be used (all the smaller fractions together add up to no more than 1/16, so without it we could never reach 0.10), so our value becomes "0.0001". But we're not at 1/10, so we must add more values.

The value 0.03125 can be added without pushing us over the desired value, so we add it. Now our series is 1/16 + 1/32, or 0.00011.

But 1/16 + 1/32 has the value 0.09375, which is not 1/10. We must add more values.

The value 1/64 would push us over 1/10, as would 1/128, so we do not add either. We can add 1/256, giving us 1/16 + 1/32 + 1/256 (or 0.00011001 in binary, or 0.09765625).

You may be getting a bit tired at this point. How many more iterations of this algorithm must we endure? The answer is shocking and daunting: an infinite number!

It turns out that the value 1/10 cannot be represented, exactly, in our floating-point system of binary fractions. (Just as the value 1/3 cannot be represented exactly with a finite number of decimal digits.)
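
One way to confirm the repetition without using floating-point arithmetic at all is to do the long division 1 divided by 10 in base two with exact integers; each doubling of the remainder asks the same question as the steps above, "does the next binary fraction fit?". A short sketch in C:

#include <stdio.h>

int main(void)
{
    /* Long division of 1 by 10 in base 2. Double the remainder; if it
       reaches 10, the next binary-fraction bit is 1, otherwise 0. */
    int remainder = 1;

    printf("1/10 = 0.");
    for (int bit = 1; bit <= 24; bit++) {
        remainder *= 2;
        if (remainder >= 10) {
            putchar('1');
            remainder -= 10;
        } else {
            putchar('0');
        }
    }
    /* Output: 1/10 = 0.000110011001100110011001...
       The remainder cycles through 2, 4, 8, 6 and never reaches zero, so
       the bit pattern 0011 repeats forever and the expansion never ends. */
    printf("...\n");
    return 0;
}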

The popular programming languages C, C++, Java, and C# all use floating-point numbers with binary fractions. They all represent the value 1/10 inexactly. In C, for example, the following program shows the error (the same four assignments misbehave in C++, Java, and C# as well):

#include <stdio.h>

int main(void)
{
    double a = 0.1;
    double b = 0.2;
    double c = 0.3;
    double d = a + b - c;   /* mathematically zero */
    printf("%.17g\n", d);   /* prints 5.5511151231257827e-17, not 0 */
    return 0;
}

Our modern floating-point systems cannot represent 1/10 exactly.

Yet the Egyptians had the value 1/10.
