Table of contents

1 History of natural numbers and the status of zero
2 Notation
3 Formal definitions
4 Properties
5 Generalizations
6 Footnote

History of natural numbers and the status of zero
The natural numbers presumably had their origins in the words used to count things, beginning with the number one.
The first major advance in abstraction was the use of numerals to represent numbers. This allowed systems to be developed for recording large numbers. For example, the Babylonians developed a powerful place-value system based essentially on the numerals for 1 and 10. The ancient Egyptians had a system of numerals with distinct hieroglyphs for 1, 10, and all the powers of 10 up to one million. A stone carving from Karnak, dating from around 1500 BC and now at the Louvre in Paris, depicts 276 as 2 hundreds, 7 tens, and 6 ones; and similarly for the number 4,622.
A much later advance in abstraction was the development of the idea of zero as a number with its own numeral. A zero digit had been used in place-value notation as early as 700 BC by the Babylonians, but it was never used as the final element of a number.¹ The Olmec and Maya civilizations used zero as a separate number as early as the 1st century BC, apparently having developed it independently, but they did not pass it on beyond Mesoamerica. The modern concept dates to the Indian mathematician Brahmagupta in 628 AD. It took more than five centuries for European mathematicians to accept zero as a number, and even when they did, it was not counted as a natural number.
The first systematic study of numbers as abstractions (that is, as abstract entities) is usually credited to the Greek philosophers Pythagoras and Archimedes. However, independent studies also occurred at around the same time in India, China, and Mesoamerica.
In the nineteenth century, a set-theoretical definition of natural numbers was developed. With this definition, it was more convenient to include zero (corresponding to the empty set) in the naturals. Wikipedia follows this convention, as do set theorists, logicians, and computer scientists. Some other mathematicians, mainly number theorists, prefer to follow the old tradition and exclude zero from the natural numbers.
The term whole number is used informally by some authors for an element of the set of integers, the set of non-negative integers, or the set of positive integers.
Notation

Mathematicians use N or ℕ (an N in blackboard bold) to refer to the set of all natural numbers. This set is infinite but countable by definition, since a set is called countable precisely when it can be put in one-to-one correspondence with (a subset of) the natural numbers.
W or 𝕎 is sometimes used to refer to the set of whole numbers, by authors who do not identify it with the integers.
Formal definitions

The precise mathematical definition of the natural numbers has not been easy. The Peano postulates state conditions that any successful definition must satisfy:

- There is a natural number 0.
- Every natural number a has a successor, denoted by a + 1.
- There is no natural number whose successor is 0.
- Distinct natural numbers have distinct successors: if a ≠ b, then a + 1 ≠ b + 1.
- If a property is possessed by 0 and also by the successor of every natural number which possesses it, then it is possessed by all natural numbers. (This postulate is what validates proofs by mathematical induction.)

If zero is excluded from the natural numbers, every 0 in the postulates above should be replaced by a 1.
A standard construction in set theory is to define each natural number as the set of natural numbers less than it, so that 0 = {}, 1 = {0}, 2 = {0,1}, 3 = {0,1,2}... When you see a natural number used as a set, this is typically what is meant. Under this definition, there are exactly n elements in the set n and if m is bigger than n, then n is a subset of m. Although this particular construction is useful, it is not the only possible construction. For example, one could define 0 = {}, 1 = {0}, 2 = {1}, and so on.
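To make the construction concrete, here is a minimal Python sketch; the function name von_neumann is our own label for this example, and the naturals passed in are ordinary Python integers. The assertions check the two properties just mentioned.

```python
# Illustrative sketch of the set-theoretic construction described above:
# each natural number is encoded as the set of all smaller natural numbers.

def von_neumann(n):
    """Return the set encoding n, i.e. {0, 1, ..., n - 1}."""
    number = frozenset()                # 0 is the empty set {}
    for _ in range(n):
        number = number | {number}      # the successor of s is s ∪ {s}
    return number

# The set n has exactly n elements ...
assert len(von_neumann(3)) == 3
# ... and if m is bigger than n, then n is a proper subset of m.
assert von_neumann(2) < von_neumann(4)
```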
Properties

One can inductively define an addition on the natural numbers by requiring a + 0 = a and a + (b + 1) = (a + b) + 1. This turns the natural numbers (N, +) into a commutative monoid with neutral element 0, the so-called free monoid with one generator. This monoid satisfies the cancellation property and can therefore be embedded in a group. The smallest group containing the natural numbers is the integers.
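As a small illustrative sketch (not a formal construction), the inductive definition of addition can be written in Python, with naturals represented by non-negative integers and the successor written as + 1:

```python
def add(a, b):
    """Inductive addition: a + 0 = a, and a + (b + 1) = (a + b) + 1."""
    if b == 0:
        return a
    return add(a, b - 1) + 1

assert add(2, 3) == 5
```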
Analogously, a multiplication * can be defined via a * 0 = 0 and a * (b + 1) = ab + a. This turns (N, *) into a commutative monoid; addition and multiplication are compatible, as expressed in the distributive law:
a * (b + c) = ab + ac.
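The same style of sketch works for multiplication; the final assertion checks the distributive law on one example. As before, this is illustrative only, with naturals as Python integers:

```python
def mul(a, b):
    """Inductive multiplication: a * 0 = 0, and a * (b + 1) = a * b + a."""
    if b == 0:
        return 0
    return mul(a, b - 1) + a

# Distributive law, checked on one example: a * (b + c) = ab + ac.
assert mul(2, 3 + 4) == mul(2, 3) + mul(2, 4)
```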
Furthermore, one defines a total order on the natural numbers by writing a ≤ b if and only if there exists a natural number c with a + c = b. This order is compatible with the arithmetical operations in the following sense: if a, b and c are natural numbers and a ≤ b, then a + c ≤ b + c and ac ≤ bc. An important property of the natural numbers is that they are well-ordered: every non-empty set of natural numbers has a smallest element.
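The order can likewise be sketched directly from its definition; the function name leq is ours, and the bounded search over c works because c can never exceed b:

```python
def leq(a, b):
    """a ≤ b iff some natural number c satisfies a + c = b."""
    return any(a + c == b for c in range(b + 1))

assert leq(3, 7) and not leq(7, 3)
```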
While it is in general not possible to divide one natural number by another and get a natural number as result, the procedure of division with remainder is available as a substitute: for any two natural numbers a and b with b ≠ 0, we can find natural numbers q and r such that

a = bq + r and r < b.

The number q is called the quotient and r is called the remainder of the division of a by b. The numbers q and r are uniquely determined by a and b. This quotient-remainder theorem is key to several other properties (divisibility), algorithms (such as the Euclidean algorithm), and ideas in number theory.
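A direct sketch of division with remainder computes q and r by repeated subtraction; the function name is our own, and Python's built-in divmod yields the same pair:

```python
def divide_with_remainder(a, b):
    """Return (q, r) with a = b*q + r and 0 <= r < b; assumes b != 0."""
    q = 0
    while a >= b:
        a -= b       # subtract b until the remainder is smaller than b
        q += 1
    return q, a

assert divide_with_remainder(17, 5) == (3, 2)   # 17 = 5*3 + 2
assert divide_with_remainder(17, 5) == divmod(17, 5)
```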
Generalizations

Two generalizations of natural numbers arise from their two uses: ordinal numbers are used to describe the position of an element in an ordered sequence, and cardinal numbers are used to specify the size of a given set.
For finite sequences or finite sets, both of these are of course the same as the natural numbers.
Other generalizations are discussed in the article on numbers.
Footnote
¹ "... a tablet found at Kish ... thought to date from around 700 BC, uses three hooks to denote an empty place in the positional notation. Other tablets dated from around the same time use a single hook for an empty place." [1]