A major advantage of positional numeral systems over other ways of writing down numbers is that they facilitate the usual grade-school method of long multiplication: multiply the first number by every digit of the second number and then add up all the properly shifted results. In order to perform this algorithm, one needs to know the products of all possible digits, which is why multiplication tables have to be memorized. Humans use this algorithm in base 10, while computers employ the same algorithm in base 2. The algorithm is a lot simpler in base 2, since the multiplication table has only 4 entries. Rather than first computing the products and then adding them all together in a second phase, computers add each product to the result as soon as it is computed. Modern chips implement this algorithm for 32-bit or 64-bit numbers in hardware or in microcode. To multiply two numbers with n digits using this method, one needs about n² operations. More formally: the time complexity of multiplying two n-digit numbers using long multiplication is Θ(n²).
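As a concrete illustration, here is a minimal sketch of long multiplication in Haskell, operating on little-endian lists of base-10 digits; the representation and the names (Digits, addD, mulDigit, longMul) are illustrative choices, not from any particular library. As in the hardware version described above, each shifted partial product is added to the running result as soon as it is computed.

    -- Little-endian lists of base-10 digits: 123 is [3,2,1].
    type Digits = [Int]

    -- Add two digit lists, propagating carries.
    addD :: Digits -> Digits -> Digits
    addD = go 0
      where
        go 0 []     []     = []
        go c []     []     = [c]
        go c (a:as) []     = go c (a:as) [0]
        go c []     (b:bs) = go c [0] (b:bs)
        go c (a:as) (b:bs) = let s = a + b + c
                             in s `mod` 10 : go (s `div` 10) as bs

    -- Multiply a digit list by a single digit, again with carries.
    mulDigit :: Int -> Digits -> Digits
    mulDigit d = go 0
      where
        go 0 []     = []
        go c []     = [c]
        go c (a:as) = let s = d * a + c in s `mod` 10 : go (s `div` 10) as

    -- One shifted partial product per digit of the second factor,
    -- each added to the accumulated result as soon as it is computed.
    longMul :: Digits -> Digits -> Digits
    longMul xs ys = foldl add [] (zip [0 ..] ys)
      where
        add acc (i, d) = addD acc (replicate i 0 ++ mulDigit d xs)

For example, longMul [3,2,1] [6,5,4] (that is, 123 × 456) evaluates to [8,8,0,6,5], i.e. 56088.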
An old method for multiplication that does not require multiplication tables is the Peasant multiplication algorithm; this is in effect a method of multiplication in base 2.
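In Haskell the method is a short recursion (a sketch, assuming a non-negative first argument): halve one factor, double the other, and add up the doubled values wherever the halved factor is odd. Only halving, doubling and parity tests are needed, hence no multiplication table.

    -- Peasant multiplication: halve x, double y, add y whenever x is odd.
    -- Assumes x >= 0.
    peasant :: Integer -> Integer -> Integer
    peasant 0 _ = 0
    peasant x y
      | odd x     = y + peasant (x `div` 2) (2 * y)
      | otherwise =     peasant (x `div` 2) (2 * y)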
For systems that need to multiply huge numbers in the range of several thousand digits, such as computer algebra systems and bignum libraries, this algorithm is too slow. These systems employ Karatsuba multiplication, which was published in 1962 and proceeds as follows: suppose you work in base 10 (unlike most computer implementations) and want to multiply two n-digit numbers x and y, and assume n = 2m is even (if not, add zeros at the left end). We can write

    x = x1 * 10^m + x2
    y = y1 * 10^m + y2

where x1, x2, y1 and y2 are m-digit numbers, and therefore

    xy = x1*y1 * 10^(2m) + (x1*y2 + x2*y1) * 10^m + x2*y2.

Computing the right-hand side seems to require four multiplications of m-digit numbers, but Karatsuba's key observation is that three suffice:

    x1*y2 + x2*y1 = (x1 + x2) * (y1 + y2) - x1*y1 - x2*y2

so the middle coefficient comes for the price of one extra multiplication, plus some additions and subtractions, which are cheap. The three products x1*y1, x2*y2 and (x1 + x2)*(y1 + y2) are themselves computed recursively by the same method.
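A minimal sketch of this recursion in Haskell, using the language's built-in arbitrary-precision Integers and splitting in base 10 to match the description above. The function names are illustrative, non-negative inputs are assumed, and real implementations split in base 2 and fall back to long multiplication below a tuned threshold.

    karatsuba :: Integer -> Integer -> Integer
    karatsuba x y
      | x < 10 || y < 10 = x * y                 -- single-digit base case
      | otherwise = p1 * shift * shift + (p3 - p1 - p2) * shift + p2
      where
        m        = (max (numDigits x) (numDigits y) + 1) `div` 2
        shift    = 10 ^ m
        (x1, x2) = x `divMod` shift              -- x = x1 * 10^m + x2
        (y1, y2) = y `divMod` shift
        p1 = karatsuba x1 y1                     -- high halves
        p2 = karatsuba x2 y2                     -- low halves
        p3 = karatsuba (x1 + x2) (y1 + y2)       -- the one extra product

    numDigits :: Integer -> Int
    numDigits = length . show

For instance, karatsuba 1234 5678 computes p1 = 12*56, p2 = 34*78 and p3 = 46*134 recursively and combines them to give 7006652.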
If T(n) denotes the time it takes to multiply two n-digit numbers with Karatsuba's method, then we can write

    T(n) = 3 T(n/2) + cn

for some constant c: there are three multiplications of numbers with half as many digits, plus additions, subtractions and shifts that each take time proportional to n. This recurrence has the solution T(n) = Θ(n^(log2 3)) ≈ Θ(n^1.585), asymptotically faster than long multiplication.
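To see where the exponent comes from, one can unroll the recurrence, assuming for simplicity that n = 2^k and that T(1) is constant; the geometric sum is dominated by its largest term, which is proportional to 3^k:

    T(n) = 3 T(n/2) + cn
         = 3^2 T(n/4) + cn (1 + 3/2)
         = 3^3 T(n/8) + cn (1 + 3/2 + (3/2)^2)
         = ...
         = 3^k T(1) + cn (1 + 3/2 + ... + (3/2)^(k-1))
         = Θ(3^k) = Θ(3^(log2 n)) = Θ(n^(log2 3))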
It is possible to verify experimentally whether a given system uses Karatsuba's method or long multiplication: take your favorite two 100,000-digit numbers, multiply them, and measure the time it takes. Then take your favorite two 200,000-digit numbers and measure the time it takes to multiply those. If Karatsuba's method is being used, the second time will be about three times as long as the first, since doubling n multiplies the running time by 2^(log2 3) = 3; if long multiplication is being used, it will be about four times as long, since 2^2 = 4.
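A rough harness for this experiment in Haskell might look as follows (the constants and digit counts are arbitrary choices of mine; note also that GHC's Integer arithmetic is backed by GMP, which may use algorithms faster than Karatsuba at these sizes, so the measured ratio reflects whatever the runtime actually provides):

    import System.CPUTime (getCPUTime)

    timeMul :: Int -> IO Double
    timeMul d = do
      let x = 10 ^ (d - 1) + 12345           -- an arbitrary d-digit number
          y = 10 ^ (d - 1) + 67890
      x `seq` y `seq` return ()              -- build the operands first
      t0 <- getCPUTime
      (x * y) `seq` return ()                -- force the product
      t1 <- getCPUTime
      return (fromIntegral (t1 - t0) / 1e12) -- picoseconds -> seconds

    main :: IO ()
    main = do
      t1 <- timeMul 100000
      t2 <- timeMul 200000
      putStrLn ("ratio: " ++ show (t2 / t1)) -- ~3 Karatsuba, ~4 long mult.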
Another method of multiplication, called Toom-Cook or Toom-3, generalizes this idea: instead of splitting each number into two parts, it splits them into three, replacing what would be nine multiplications of third-size numbers by five, for a time complexity of Θ(n^(log 5 / log 3)) ≈ Θ(n^1.465).
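A sketch of Toom-3 in the same style, with the same caveats: base-10 splitting for readability, non-negative inputs assumed, and none of the tuning a real library would have. Each operand is viewed as a quadratic polynomial in b = 10^m, both polynomials are evaluated at the five points 0, 1, -1, 2 and infinity, the five values are multiplied recursively, and the product polynomial's five coefficients are recovered by exact interpolation (all the divisions below are exact).

    toom3 :: Integer -> Integer -> Integer
    toom3 x y
      | x < 1000 || y < 1000 = x * y          -- small operands: directly
      | otherwise = foldl (\acc w -> acc * b + w) 0 [w4, w3, w2, w1, w0]
      where
        m  = (max (numDigits x) (numDigits y) + 2) `div` 3
        b  = 10 ^ m
        (x2, x1, x0) = split3 b x             -- x = x2*b^2 + x1*b + x0
        (y2, y1, y0) = split3 b y
        p0   = rec x0 y0                                 -- value at 0
        p1   = rec (x2 + x1 + x0) (y2 + y1 + y0)         -- at 1
        pm1  = rec (x2 - x1 + x0) (y2 - y1 + y0)         -- at -1
        p2   = rec (4*x2 + 2*x1 + x0) (4*y2 + 2*y1 + y0) -- at 2
        pinf = rec x2 y2                                 -- at infinity
        w0 = p0
        w4 = pinf
        w2 = (p1 + pm1) `div` 2 - w4 - w0
        a  = (p1 - pm1) `div` 2               -- equals w3 + w1
        c  = (p2 - 16*w4 - 4*w2 - w0) `div` 2 -- equals 4*w3 + w1
        w3 = (c - a) `div` 3
        w1 = a - w3
        -- the point -1 can make intermediate values negative
        rec u v = signum u * signum v * toom3 (abs u) (abs v)

    split3 :: Integer -> Integer -> (Integer, Integer, Integer)
    split3 b n = let (hi, lo)   = n `divMod` b
                     (top, mid) = hi `divMod` b
                 in (top, mid, lo)

    numDigits :: Integer -> Int
    numDigits = length . show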
There exist even faster algorithms, based on the fast Fourier transform. The idea, due to Strassen (1968), is the following: multiplying two numbers represented as digit strings is virtually the same as computing the convolution of those two digit strings. Instead of computing a convolution, one can first compute the discrete Fourier transforms, multiply them entry by entry, and then compute the inverse Fourier transform of the result. (See convolution theorem.) The fastest known method based on this idea was published in 1971 by Schönhage and Strassen and has a time complexity of Θ(n ln(n) ln(ln(n))). These approaches are not used in computer algebra systems and bignum libraries because they are difficult to implement and don't provide speed benefits for the sizes of numbers typically encountered in those systems. The GIMPS distributed Internet prime search project deals with numbers having several million digits and employs a Fourier transform based multiplication algorithm. Using number-theoretic transforms instead of discrete Fourier transforms avoids rounding error problems, by working in modular arithmetic instead of with complex numbers.
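The following sketch shows the convolution-theorem idea end to end in Haskell, with a textbook radix-2 FFT over complex doubles (all names are mine). Because of floating-point rounding it is only reliable for modestly sized inputs; serious implementations bound the rounding error carefully or use the number-theoretic transforms mentioned above.

    import Data.Complex (Complex, cis, conjugate, realPart)

    -- Recursive radix-2 Cooley-Tukey FFT; the length must be a power of two.
    fft :: [Complex Double] -> [Complex Double]
    fft [z] = [z]
    fft zs  = zipWith (+) es ts ++ zipWith (-) es ts
      where
        n          = length zs
        (evs, ods) = deinterleave zs
        es         = fft evs
        ts         = zipWith (\k o -> cis (-2 * pi * fromIntegral k
                                             / fromIntegral n) * o)
                             [0 :: Int ..] (fft ods)

    deinterleave :: [a] -> ([a], [a])
    deinterleave (a:b:rest) = let (as, bs) = deinterleave rest in (a:as, b:bs)
    deinterleave zs         = (zs, [])

    -- Inverse transform via the conjugation trick.
    ifft :: [Complex Double] -> [Complex Double]
    ifft zs = map ((/ fromIntegral (length zs)) . conjugate)
                  (fft (map conjugate zs))

    -- Transform the digit strings, multiply pointwise, transform back,
    -- round to integers, and propagate carries.
    fftMul :: Integer -> Integer -> Integer
    fftMul x y = carry 0 (map (round . realPart) (ifft (zipWith (*) fx fy)))
      where
        dx = digitsLE x
        dy = digitsLE y
        n  = head [p | i <- [0 :: Int ..], let p = 2 ^ i,
                       p >= length dx + length dy]
        fx = fft (pad n dx)
        fy = fft (pad n dy)

    pad :: Int -> [Int] -> [Complex Double]
    pad n ds = map fromIntegral ds ++ replicate (n - length ds) 0

    digitsLE :: Integer -> [Int]     -- base-10 digits, least significant first
    digitsLE 0 = [0]
    digitsLE n = map (fromIntegral . (`mod` 10))
                     (takeWhile (> 0) (iterate (`div` 10) n))

    carry :: Integer -> [Integer] -> Integer  -- carry-propagate, reassemble
    carry c []     = c
    carry c (d:ds) = (c + d) `mod` 10 + 10 * carry ((c + d) `div` 10) ds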
All the above multiplication algorithms can also be used to multiply polynomials.
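Polynomial coefficients play exactly the role the digits played above, except that no carrying is needed. A naive sketch of the underlying convolution in Haskell (Karatsuba, Toom-3 and the FFT method drop in unchanged, operating on coefficients instead of digits):

    -- Coefficient lists are little-endian: constant term first.
    polyMul :: Num a => [a] -> [a] -> [a]
    polyMul xs ys =
      [ sum [ xs !! i * ys !! (k - i)
            | i <- [max 0 (k - length ys + 1) .. min k (length xs - 1)] ]
      | k <- [0 .. length xs + length ys - 2] ]

For example, polyMul [3,2] [1,4] represents (3 + 2t)(1 + 4t) = 3 + 14t + 8t².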
A simple improvement to the basic recursive multiplication algorithm, in which x·y is computed by adding x to the product x·(y - 1), is to halve y and double x at each step instead, as in peasant multiplication above; this replaces Θ(y) recursive steps by Θ(log y). A sketch is given after the next paragraph.
This may not help much for multiplication by real or complex values, but it is useful for multiplication of very large integers, which are supported natively in some programming languages such as Haskell, Ruby, and Common Lisp.
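The original code for this section is not preserved, so the following Haskell sketch is a plausible reconstruction under that reading, assuming a non-negative integer multiplier:

    -- Basic recursion: Θ(y) additions.
    mulBasic :: Integer -> Integer -> Integer
    mulBasic _ 0 = 0
    mulBasic x y = x + mulBasic x (y - 1)

    -- Improvement: halve y and double x, as in peasant multiplication,
    -- for Θ(log y) steps.
    mulFast :: Integer -> Integer -> Integer
    mulFast _ 0 = 0
    mulFast x y
      | even y    = mulFast (2 * x) (y `div` 2)
      | otherwise = x + mulFast (2 * x) (y `div` 2)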