# Natural Numbers and Integers

*Natural numbers* are sometimes called the *counting numbers*. They include all of the numbers from *one* to *infinity* {1, 2, 3, 4, 5, ...} that we use to count things like apples, sheep, and people. The term *infinity* is used here for convenience, and simply means that there is no limit to the magnitude of a natural number - the sequence of natural numbers starts at *one* and goes on forever. The set of natural numbers is an ordered sequence in which each number is one greater than its *predecessor* (the number that appears immediately *before* it in the sequence), and one less than its *successor* (the number that appears immediately *after* it). The only exception to this rule is *one*, which, being the first number in the sequence, has no predecessor.

You may have noticed that the natural numbers are what we often call *whole numbers* (i.e. numbers that are not fractions, and have no fractional part). In fact, the only difference between the set of natural numbers and the set of whole numbers is that the set of whole numbers {0, 1, 2, 3, 4, 5, ...} includes *zero* (0). The distinction reflects the fact that zero cannot be included in the counting numbers - there must, after all, be at least *one* of something (e.g. one apple, one sheep or one person) for a count to be made. By this definition, all natural numbers are whole numbers, but not all whole numbers are natural numbers (the single exception being zero).

Natural numbers include several other sets of numbers that you have probably come across, such as *odd* whole numbers and *even* whole numbers. *Odd* whole numbers are those whole numbers that cannot be divided by two to give a result that is also a whole number. The set of odd whole numbers includes every other whole number, starting with one {1, 3, 5, 7, 9, ...}. *Even* whole numbers *can* be divided by two to give a result that is a whole number. The set of even whole numbers includes every other whole number, starting with two {2, 4, 6, 8, 10, ...}. The natural numbers also include the set of whole numbers known as *prime numbers*. A prime number is any natural number greater than one that can only be divided by itself or one to give a result that is still a whole number {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, ...}. Note that the number *one* (1) is excluded from the list of prime numbers, as it does not meet the criteria set out in the formal mathematical definition of a prime number. All natural numbers greater than one are either prime numbers or *composite numbers* (a composite number is a number that has at least one factor besides itself and one).
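The distinction between prime and composite numbers is easy to check programmatically. The short Python sketch below tests whether a number is prime by searching for a factor other than itself and one (the function name `is_prime` is just an illustrative choice):

```python
def is_prime(n):
    """Return True if n is prime: a natural number greater than one
    whose only whole-number divisors are one and itself."""
    if n < 2:
        return False  # one (and anything smaller) is excluded by definition
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False  # d is a factor besides 1 and n, so n is composite
    return True

# Every natural number greater than one is either prime or composite.
primes = [n for n in range(1, 30) if is_prime(n)]
print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Note that the loop only needs to test candidate factors up to the square root of `n`, since any factor larger than the square root must be paired with one smaller than it.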

At this stage it should perhaps be pointed out that not all mathematicians and scientists would agree with the above definition of natural numbers. Some would insist that the set of natural numbers must include zero. This is perhaps understandable when you consider areas such as set theory, where zero may be used to represent the *cardinality* of an empty set (cardinality in this case refers to the number of items a set contains, and is therefore very much related to counting). One thing that (almost) everyone agrees on is that the set of natural numbers does *not* include negative numbers. When talking about whole numbers, things are not so clear cut, since there is no formal definition of what constitutes a whole number. Some people think that the set of whole numbers should be considered to include negative numbers, as well as zero. If that were the case, however, there would be no distinction between whole numbers and *integers*.

Fortunately, when it comes to integers, there is no controversy about the definition. The set of integers includes all of the natural numbers, their negative counterparts, and zero {... -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, ...}. We can of course distinguish between *positive* integers {1, 2, 3, 4, 5, ...} and *negative* integers {-1, -2, -3, -4, -5, ...}. This does however leave us with the question of zero - does it belong to the positive integers or the negative integers? Positive integers must, by definition, be greater than zero. Similarly, negative integers must be less than zero, so zero can't belong to either set of numbers. Sometimes, however, we want to refer to the set of numbers that includes all of the positive integers *and* zero {0, 1, 2, 3, 4, 5, ...}. We can get around the limitations of the term *positive integer* (which *doesn't* include zero) by using the term *non-negative integer* (which by definition *does* include zero). Similarly, if we want to refer to the set of numbers that includes all of the *negative* integers and zero {... -5, -4, -3, -2, -1, 0}, we can use the term *non-positive integer*, since this will also by definition include zero.
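The four groupings described above overlap at zero, and a small Python sketch makes the relationships explicit (the `classify` function here is purely illustrative):

```python
def classify(n):
    """Return the labels that apply to an integer with respect to zero."""
    labels = []
    if n > 0:
        labels.append("positive")
    if n < 0:
        labels.append("negative")
    if n >= 0:
        labels.append("non-negative")  # the positive integers and zero
    if n <= 0:
        labels.append("non-positive")  # the negative integers and zero
    return labels

# Zero is neither positive nor negative, but belongs to both
# the non-negative and the non-positive integers.
print(classify(0))   # ['non-negative', 'non-positive']
print(classify(3))   # ['positive', 'non-negative']
print(classify(-5))  # ['negative', 'non-positive']
```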

Numbers that are specifically stated to be either positive or negative are referred to as *signed numbers*, because we use either a *plus sign* (+) or a *minus sign* (-) to indicate whether they are positive or negative. In practice, we rarely bother to use a plus sign to indicate positive numbers, since the absence of a minus sign is usually taken to mean that the number is positive. If we *do* have to write a number that has a negative value, we place a minus sign to the left of the number (e.g. *minus three* would be written as -3). When computers have to deal with integers, life is not quite so simple. A computer can store the absolute value of any integer from *zero* to *two hundred and fifty-five* (0-255) in a single *byte* (eight bits). If we want the computer to know whether the number is positive or negative, we can use one of the bits (usually the *left-most* or *most significant* bit) to indicate the sign - an approach known as *sign-and-magnitude* representation. A zero in the sign bit indicates that the number is positive, while a one indicates that the number is negative. The *magnitude* of the largest positive or negative number we can represent as a signed integer using a single byte is therefore only half that of the largest unsigned integer, since we have one fewer bit available with which to represent it.
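The sign-bit scheme described above can be sketched in a few lines of Python (the function name `to_sign_magnitude` is an illustrative choice, and this is just one of several ways a computer might represent signed numbers):

```python
def to_sign_magnitude(n):
    """Encode an integer as an 8-bit sign-and-magnitude byte:
    the most significant bit holds the sign (0 = positive, 1 = negative)
    and the remaining seven bits hold the magnitude."""
    if not -127 <= n <= 127:
        raise ValueError("magnitude does not fit in seven bits")
    sign_bit = 0b10000000 if n < 0 else 0
    return sign_bit | abs(n)

# With one bit reserved for the sign, only seven bits remain for the
# magnitude, so the largest magnitude drops from 255 (unsigned) to 127.
print(format(to_sign_magnitude(5), '08b'))   # 00000101
print(format(to_sign_magnitude(-5), '08b'))  # 10000101
```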

Programmers or students studying computer science will probably be aware that there are several quite different and sophisticated ways in which computers can represent signed numbers internally. Most of us, however, do not have to worry about such things, interesting as they may be. What *is* of interest for the purposes of this discussion is that we can get around any possible ambiguity surrounding the definition of natural numbers using one of the groups of integers defined above. If we consider the set of natural numbers to be all of the positive whole numbers *excluding* zero (which is the traditional definition), we can substitute the term *positive integers* for the term *natural numbers*. If, on the other hand, we want to *include* zero in our definition of the natural numbers, we can instead use the term *non-negative integers*.

Special symbols are used to represent both the set of natural numbers and the set of integers. The set of natural numbers is usually denoted using ℕ (an upper case double struck N). In order to indicate whether or not zero is considered to be included in the set of natural numbers, some form of additional notation is often used. For example, ℕ_{1} is sometimes used to show that the first number is considered to be *one* {1, 2, 3, 4, 5, ...}, while ℕ_{0} may be used to indicate that the first number is *zero* {0, 1, 2, 3, 4, 5, ...}. The symbol commonly used to denote the set of integers is ℤ (an upper case double struck Z). Additional notation can be used to signify a subset of integers. The set of *positive* integers {1, 2, 3, 4, 5, ...} is often represented by ℤ_{+}, while the set of *negative* integers {-1, -2, -3, -4, -5, ...} is represented by ℤ_{-}. For the *non-negative* integers {0, 1, 2, 3, 4, 5, ...}, we can use ℤ_{≥0} to indicate that the set includes all of the positive integers *and* zero. Conversely, we can use ℤ_{≤0} to represent the *non-positive* integers {0, -1, -2, -3, -4, -5, ...}.

The mathematical properties of natural numbers and integers differ slightly. The sum or product of any two natural numbers is always itself a natural number, and the same is true of any two integers. In the case of integers, however, we can additionally state that the difference of any two integers will always be an integer, since the set of integers includes negative numbers. It is *not* true that the difference of two natural numbers will always be a natural number, since the result of subtracting one natural number from another may be negative. For both natural numbers and integers, division may result in a fraction or a number with a fractional component. The quotient of two natural numbers is therefore not guaranteed to be another natural number, and the quotient of two integers is likewise not guaranteed to be another integer.
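These closure properties are easy to demonstrate. The short sketch below uses Python's built-in integers, which behave like mathematical integers for these purposes (they have unlimited precision, so there is no overflow to worry about):

```python
a, b = 3, 7

# The sum, product and difference of two integers are always integers.
assert isinstance(a + b, int)
assert isinstance(a * b, int)
assert isinstance(a - b, int)

# The difference of two natural numbers may fall outside the naturals:
print(a - b)   # -4: an integer, but not a natural number

# Division can leave the integers entirely:
print(a / b)   # a fraction (3/7), not an integer at all
```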

The following diagram (known as an *Euler* diagram) shows the relationship between the set of integers, ℤ, the set of whole numbers (essentially the non-negative integers, ℤ_{≥0}) and the set of natural numbers excluding zero, ℕ_{1}. You can see that all natural numbers are also both whole numbers and integers - a good example of how a number may simultaneously belong to more than one number type. We mentioned previously that prime numbers are a subset of the natural numbers, and that any natural number greater than one is either a prime number or a composite number. You should be able to see, therefore, that diagrams like this could become rather complicated if we tried to include all of the possible number types. Fortunately, we rarely need to consider the relationship between more than two or three different types of number at any given time, so such complexity can usually be avoided.

The relationship between integers, whole numbers and natural numbers (ℕ_{1})