Leibniz notation
In calculus, the Leibniz notation, named in honor of the 17th-century German philosopher and mathematician Gottfried Wilhelm Leibniz (pronounced LIPE-nits), was originally the use of dx and dy and so forth to represent "infinitely small" increments of quantities x and y, just as Δx and Δy represent finite increments of x and y respectively. According to Leibniz, the derivative of y with respect to x, which mathematicians later came to view as the limit

    dy/dx = lim (Δx → 0) Δy/Δx,

was the quotient of an infinitely small (i.e., infinitesimal) increment of y by an infinitely small increment of x. Thus if y = f(x), then dy/dx = f'(x).
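The finite quotient Δy/Δx that Leibniz's dy/dx idealizes can be computed directly. The sketch below (an illustration, not part of the original text; the function f and the point x = 3 are arbitrary choices) shows the quotient approaching the derivative as the increment Δx shrinks:

```python
def difference_quotient(f, x, dx):
    """Finite analogue of Leibniz's dy/dx: the quotient Δy/Δx
    for a finite increment Δx of x."""
    return (f(x + dx) - f(x)) / dx

f = lambda x: x ** 2  # example function; f'(x) = 2x

# As Δx shrinks, Δy/Δx at x = 3 approaches f'(3) = 6.
for dx in (0.1, 0.01, 0.001):
    print(dx, difference_quotient(f, 3.0, dx))
```

Each printed quotient lies closer to 6 than the last, mirroring the limit that dy/dx denotes.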
Similarly, where mathematicians may now view an integral ∫ f(x) dx as a limit of finite sums,

    ∫ f(x) dx = lim (Δx → 0) Σ f(xᵢ) Δx,

Leibniz viewed it as the sum of infinitely many infinitely small quantities f(x) dx.
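The finite sums that the integral idealizes can likewise be computed. This sketch (an illustration added here, with the integrand x² and interval [0, 1] chosen arbitrarily) accumulates many small terms f(xᵢ) Δx:

```python
def riemann_sum(f, a, b, n):
    """Left Riemann sum of f over [a, b]: the sum of n terms
    f(x_i) * Δx, with Δx = (b - a) / n."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

# Example: the exact value of the integral of x^2 over [0, 1] is 1/3,
# and the sum approaches it as the increments Δx shrink.
print(riemann_sum(lambda x: x ** 2, 0.0, 1.0, 100000))
```

With n = 100000 subintervals the sum agrees with 1/3 to about five decimal places.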
Well before the end of the 19th century, mathematicians had ceased to take Leibniz's notation for derivatives and integrals literally. A number of 19th-century mathematicians found logically rigorous ways to treat derivatives and integrals without infinitesimals. In the 1950s and 1960s, Abraham Robinson introduced ways of treating infinitesimals both literally and logically rigorously, and of rewriting calculus from that point of view. But Robinson's methods are not used by most mathematicians. (One mathematician, Jerome Keisler, has gone so far as to write a first-year calculus textbook according to Robinson's point of view.)
Nonetheless, Leibniz's notation remains in general use today, and few doubt its utility in certain contexts. Although most people using it do not construe it literally, they find it simpler than alternatives when the technique of separation of variables is used in the solution of differential equations. In physical applications, one may for example regard f(x) as measured in meters per second and dx in seconds, so that f(x) dx is in meters, and so is the value of its definite integral. In that way the Leibniz notation is in harmony with dimensional analysis.
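Separation of variables is exactly the technique in which dy and dx are manipulated as if they were ordinary quantities. As an illustrative sketch (the equation dy/dx = k·y and the values of k and y₀ below are arbitrary choices, not from the original text): writing dy/y = k dx and integrating both sides gives ln y = k·x + C, i.e. y(x) = y₀·e^(k·x). The code checks that formula against a step-by-step accumulation of finite increments Δy = k·y·Δx (Euler's method):

```python
import math

def euler_solve(k, y0, x_end, steps):
    """Integrate dy/dx = k*y by repeatedly adding the finite
    increment Δy = k * y * Δx, Leibniz-style, step by step."""
    dx = x_end / steps
    y = y0
    for _ in range(steps):
        y += k * y * dx
    return y

k, y0, x_end = 0.5, 2.0, 1.0
exact = y0 * math.exp(k * x_end)       # result of separating variables
approx = euler_solve(k, y0, x_end, 100000)
print(exact, approx)  # the two agree closely for many small steps
```

The agreement of the two values illustrates why treating dy/dx as a genuine quotient, while not literal, reliably leads to correct answers here.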