In floating-point arithmetic, only a fixed number of significant digits is maintained; floating-point numbers can only approximate most real numbers.
Consider the real number
.1234567891234567890 .
A floating-point representation of this number on a machine that keeps 10 floating-point digits would be
.1234567891,
which seems pretty close--the difference is very small in comparison with either of the two numbers.
Now perform the calculation
.1234567891234567890 - .1234567890 .
The real answer, accurate to 10 digits, is
.0000000001234567890
But on the 10-digit floating-point machine, the calculation yields
.1234567891 - .1234567890 = .0000000001 .
Whereas the original numbers are accurate in all of their first (most significant) 10 digits, their floating-point difference is accurate only in its first digit. This amounts to a loss of information.
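This loss is easy to reproduce. The sketch below (my own illustration, not part of the original text) uses Python's decimal module to simulate a machine that keeps 10 floating-point digits:

```python
from decimal import Decimal, getcontext

# Simulate a machine that keeps 10 floating-point digits.
getcontext().prec = 10

# Unary + rounds each value to the context precision (10 digits).
x = +Decimal("0.1234567891234567890")   # stored as 0.1234567891
y = +Decimal("0.1234567890")            # fits in 10 digits; stored exactly

diff = x - y
print(diff)   # 1E-10: only the first digit of the true answer survives
```

The true difference, .0000000001234567890, agrees with the computed 1E-10 only in its leading digit.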
It is possible to do all rational arithmetic keeping all significant digits, but doing so is often prohibitively slow compared with floating-point arithmetic. Furthermore, it usually only postpones the problem: what if the data are accurate to only 10 digits? The same effect will occur.
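As an illustration of the exact-arithmetic alternative (my own sketch, using Python's fractions module), rational arithmetic keeps every digit of the difference above:

```python
from fractions import Fraction

x = Fraction("0.1234567891234567890")
y = Fraction("0.1234567890")

# No rounding ever occurs: the difference is exact.
exact = x - y
print(exact == Fraction("0.000000000123456789"))  # True
```

Each operation, however, can make numerators and denominators grow, which is one reason this approach is often too slow in practice.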
One of the most crucial parts of numerical analysis is to avoid or minimize loss of significance in calculations. If the underlying problem is well-posed, there should be a stable algorithm for solving it. The art is in finding a stable algorithm.
For example, consider the venerable quadratic formula for solving a quadratic equation
ax^2 + bx + c = 0 .
The quadratic formula gives the two solutions as
x = ( -b ± sqrt(b^2 - 4ac) ) / (2a) .
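Transcribed directly into code (a sketch of my own; the function name is illustrative), the formula reads:

```python
import math

def quadratic_roots(a, b, c):
    """Both roots of a*x**2 + b*x + c = 0, computed naively from
    the quadratic formula (assumes real roots and a != 0)."""
    d = math.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(quadratic_roots(1, -3, 2))  # (2.0, 1.0)
```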
The case a = 1, b = 200, c = -0.000015 will serve to illustrate the problem:
x^2 + 200x - 0.000015 = 0 .
We have
sqrt(b^2 - 4ac) = sqrt(200^2 + 4 × 1 × 0.000015) = 200.0000001499999999... .
In real arithmetic, the roots are
( -200 - 200.0000001499999999... ) / 2 = -200.0000000749999999... ,
( -200 + 200.0000001499999999... ) / 2 = 0.0000000749999999... .
In 10-digit floating-point arithmetic,
( -200 - 200.0000001 ) / 2 = -200.0000001 ,
( -200 + 200.0000001 ) / 2 = 0.00000005 .
Notice that the solution of greater magnitude (greater in absolute value) is accurate to ten digits, but the first nonzero digit of the solution of lesser magnitude is wrong.
Because of the subtraction that occurs in the quadratic formula, it does not constitute a stable algorithm for calculating the two roots of a quadratic equation.
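A standard remedy (sketched below in Python; this code is my own illustration, not from the original text) is to evaluate the formula only for the root in which -b and the square root have the same sign, so that no cancellation occurs, and then to recover the other root from the identity x1 * x2 = c/a. Replaying the example on a simulated 10-digit machine:

```python
from decimal import Decimal, getcontext

getcontext().prec = 10   # simulate 10-digit floating-point arithmetic

a, b, c = Decimal(1), Decimal(200), Decimal("-0.000015")
d = (b * b - 4 * a * c).sqrt()   # 200.0000001 at this precision

# Naive formula for the small root: -b + d cancels almost completely.
naive_small = (-b + d) / (2 * a)

# Stable version: -b - d adds two like-signed terms (b > 0 here),
# then the small root comes from the identity x1 * x2 = c / a.
big = (-b - d) / (2 * a)
stable_small = c / (a * big)

print(naive_small)    # 5E-8   -- the first nonzero digit is already wrong
print(stable_small)   # 7.5E-8 -- agrees with the true root to ten digits
```

The extra cost is one division, and both roots now come out accurate to working precision.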