# Prime Numbers

*Prime numbers* (or just *primes*) are a subset of the *natural numbers* {0, 1, 2, 3, ...}. In fact, every natural number greater than one is either a prime number or a *composite number*. By definition, a prime number is a natural number greater than *one* (1) that has *no positive factors* other than itself and one. It follows that a *composite number* is any natural number greater than one that *does* have factors other than itself and one. There has been some controversy in the past over whether the number one should be included in the set of prime numbers, but the modern consensus is that it does not satisfy the strict definition of a prime. The number *two* (2) is a prime, but is distinguished by being the *only even* prime number: any even number greater than two has two as a factor, in addition to itself and one. The set of all prime numbers is often denoted by the symbol ℙ (an upper-case double-struck P). Here is the set of prime numbers between one and one hundred:

{2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97}

The ancient Egyptians are thought to have been aware of the existence of prime numbers, but do not appear to have attached any special significance to them. As far as we can ascertain, prime numbers were first studied for their own sake by Greek mathematicians during the third century BCE. Even so, they were considered little more than a mathematical anomaly for many centuries. Mathematicians have studied them more out of curiosity than because they were thought to have any practical application. That changed in the 1970s, thanks to the fact that any composite number is the product of a unique set of prime numbers. In other words, no two composite numbers can be expressed as the product of the same set of prime numbers (more about this shortly). For this reason, prime numbers became the basis for various *data encryption schemes*, and the study of prime numbers took on a new significance.

Greek mathematicians were able to use what they learned about prime numbers, and their relationship with composite numbers, to formulate the *fundamental theorem of arithmetic*. The theorem first appeared in the writings of the Greek mathematician *Euclid* in his great work the *Elements*, circa 300 BCE. It states that any positive integer greater than one can be written as the product of one or more prime numbers in a way that is unique, apart from the ordering of the prime factors. This means that any natural number greater than one is either a prime number itself, or is the product of two or more prime numbers. The process of finding all of the prime factors of a composite number is called *prime factorisation*. Regardless of the method used to carry out the prime factorisation, the result will always be the same for a given number. The prime factors of all composite numbers between one and twenty are listed below.

4 = 2 × 2

6 = 2 × 3

8 = 2 × 2 × 2

9 = 3 × 3

10 = 2 × 5

12 = 2 × 2 × 3

14 = 2 × 7

15 = 3 × 5

16 = 2 × 2 × 2 × 2

18 = 2 × 3 × 3

20 = 2 × 2 × 5

The prime factors of a positive integer *a* will be the set of prime numbers by which *a* can be divided exactly (if the positive integer happens to be a prime number, then the only prime factor will be the number itself). If you study the examples above carefully, you will see that for each number there is only one set of prime numbers that can be multiplied together to produce that number. The order in which they appear is unimportant, although they are usually shown in ascending order from left to right. Note that the same prime number can appear multiple times in the list of prime factors for a particular composite number. For small numbers such as these, finding the prime factors is a relatively trivial exercise. As the numbers get larger, however, we need to be a bit more methodical in our approach. If the numbers we wish to factorise are not *too* big, we can use a process called *trial division* (or *direct search factorisation*).

Using trial division, we can find the prime factors of a composite number by systematically carrying out division operations on the number using each of the prime numbers in turn, starting with the smallest prime number that will yield an integer result. An example will help to illustrate how this works. Let's say that we want to find all of the prime factors of the number *four hundred and eighty* (480). We proceed as follows:

480 ÷ 2 = 240 (2 is the first prime factor)

240 ÷ 2 = 120 (2 is the second prime factor)

120 ÷ 2 = 60 (2 is the third prime factor)

60 ÷ 2 = 30 (2 is the fourth prime factor)

30 ÷ 2 = 15 (2 is the fifth prime factor)

15 ÷ 3 = 5 (3 and 5 are the sixth and seventh prime factors)

In this case, the smallest prime number by which we can divide our composite number and get a whole number result is *two*, because four hundred and eighty is an even number. Since the result of the division (two hundred and forty) is also an even number, we can again divide by two. Eventually, we have a result (fifteen) that cannot be divided by two. We must therefore divide fifteen by the next largest prime number that will give an integer result - in this case *three*. The result of this division is itself a prime number (*five*), so we have found all of the prime factors of four hundred and eighty. We can now write down the result of our factorisation:

480 = 2 × 2 × 2 × 2 × 2 × 3 × 5
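The trial division procedure described above is straightforward to automate. Here is a minimal sketch (the function and variable names are our own) that divides out each candidate factor as many times as possible before moving on; composite candidates can never divide evenly, because their own prime factors have already been removed:

```python
def prime_factors(n):
    """Find the prime factors of n by trial division."""
    factors = []
    candidate = 2
    while candidate * candidate <= n:
        # Divide out this candidate as many times as possible.
        # Composite candidates never divide n here, because their
        # own prime factors have already been divided out.
        while n % candidate == 0:
            factors.append(candidate)
            n //= candidate
        candidate += 1
    if n > 1:
        factors.append(n)   # whatever remains is itself prime
    return factors

print(prime_factors(480))   # [2, 2, 2, 2, 2, 3, 5]
```

Note that once the remaining value of *n* drops below the square of the current candidate, the search can stop, because whatever is left must itself be prime.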

Sometimes, the numbers to be factorised are so large that using trial division is simply not practical. There are of course other, more sophisticated methods that can be used for the prime factorisation of very large numbers. Nevertheless, even with the aid of computers and the most efficient algorithms available to us, factorising such large numbers can present a formidable challenge. No generic method has yet been devised that will allow us to quickly find the prime factors of any number, regardless of its size. Indeed, if such a method *were* to be developed, it would effectively render many of our current data encryption schemes useless (more about that later). Given a sufficiently large number, even the world's most powerful computers can be kept busy working on a factorisation problem for days, months or even years.

One of the earliest methods used to find prime numbers is thought to have been devised circa 200 BCE by the Greek mathematician and astronomer *Eratosthenes* (c. 276 - 194 BCE), and is called the *Sieve of Eratosthenes* after its creator. This prime number sieve is useful for finding relatively small primes, and can be used to find all of the prime numbers up to a specified integer value. We call it a *sieve*, because it essentially works by catching all of the composite numbers between one and the specified integer, leaving only the prime numbers behind. In the following example, we will use the sieving method to find all of the primes that occur in the first one hundred positive integers. The first step is to arrange the numbers in a grid:

The numbers are arranged in a grid

We have already established that *one* (1) is not a prime number. We have also established that *two* (2) *is* a prime number, but that it is the only *even* prime number. We can therefore blank out one, highlight two, and then blank out all of the remaining even numbers.

Blank out 1, and all of the even numbers except 2

We can now highlight the next largest prime number, which is *three* (3), and blank out all of the remaining numbers that are *multiples* of three (some of these numbers will already have been blanked out because they are even numbers).

Highlight 3, and blank out all of its multiples

Next, we highlight the numbers *five* (5) and *seven* (7), both of which are prime numbers, and blank out any multiples of these numbers that have not already been eliminated.

Highlight 5 and 7, and blank out all of their multiples

The process continues for all prime numbers that are less than, or equal to, the square root of the largest number in the grid. In this case, the largest number on the grid is *one hundred* (100), and the next prime number is *eleven* (11). Since the square root of one hundred is *ten* (10), we are finished. Any numbers remaining that have not so far been highlighted or blanked out are prime numbers, and can therefore also be highlighted. Hopefully you can see why we don't need to worry about multiples of prime numbers greater than the square root of the largest number on the grid. These prime numbers will have no multiples on the grid that have not already been eliminated, since any such multiples would also be multiples of smaller primes that have already been dealt with.

The highlighted numbers are all primes
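The sieving procedure translates naturally into code. A minimal sketch (names are our own):

```python
def sieve_of_eratosthenes(limit):
    """Find all primes up to and including limit using the sieve."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False       # one is not a prime (nor is zero)
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # Blank out the multiples of p; multiples smaller than p * p
            # were already eliminated as multiples of smaller primes.
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n in range(2, limit + 1) if is_prime[n]]

primes = sieve_of_eratosthenes(100)
print(len(primes))   # 25 primes below one hundred
```

As in the grid example, the outer loop only needs to run up to the square root of the limit.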

Mathematicians have known that there are an infinite number of prime numbers since the third century BCE, when Euclid formulated his *infinitude of primes* theorem. Euclid's proof of the theorem is presented in the *Elements*. The quest to identify ever larger prime numbers is thus an ongoing one. Indeed, the non-profit organisation *Electronic Frontier Foundation* (EFF), which is based in the United States, offers significant financial rewards to the first group or individual to discover a prime number having more than some specified number of digits. In 2000, they awarded a prize of fifty thousand US dollars ($50,000) for the discovery of the first prime number with over one million (1,000,000) decimal digits. In 2009, they awarded one hundred thousand US dollars ($100,000) for the discovery of the first prime number with over ten million (10,000,000) decimal digits. Currently, they are offering one hundred and fifty thousand US dollars ($150,000) for the discovery of the first prime number with over one hundred million (100,000,000) decimal digits, and two hundred and fifty thousand US dollars ($250,000) for the discovery of the first prime number with over one billion (1,000,000,000) decimal digits.

At the time of writing, the largest known prime number is 2^{57,885,161} - 1, discovered in January 2013. When expressed as a decimal number, it has 17,425,170 digits. As you can probably imagine, one of the biggest problems facing mathematicians is how to determine whether such a large number is a prime or not, which essentially involves finding out whether or not it can be factorised. For relatively small numbers, we can use the trial division method described above to determine whether the number has any prime factors other than itself. If no prime factors are found, the number must be a prime. As we have already discussed, however, this method is totally impractical for numbers that are so large they have hundreds, thousands or even millions of digits.
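For small numbers, a trial division primality test can be sketched as follows. We only need to test divisors up to the square root of *n*, because any factor larger than the square root must be paired with a factor smaller than it:

```python
def is_prime(n):
    """Test n for primality by trial division."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    divisor = 3
    while divisor * divisor <= n:   # only up to the square root of n
        if n % divisor == 0:
            return False            # found a factor, so n is composite
        divisor += 2                # even divisors were ruled out above
    return True

print([n for n in range(2, 30) if is_prime(n)])   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```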

Identifying prime numbers would be a lot easier if we could find some simple formula that allowed us to predict where they were going to occur within the domain of the natural numbers. Unfortunately, the distribution of prime numbers doesn't seem to conform to any discernible pattern. Many mathematicians have tried and failed to find some order in the sequence of prime numbers. Of course, we can identify overall trends in the distribution. We can also identify several groups of numbers that will definitely not be primes. For the natural numbers greater than two, we can eliminate every even number from our search. We can also eliminate any number greater than five that ends in either *five* or *zero*, since five will always be a factor of these numbers. This means that all prime numbers greater than five must end with *one* (1), *three* (3), *seven* (7), or *nine* (9). There are other simple tests you can perform to eliminate a number from the search. For example, if you add together all of the digits in a large number and the result is a multiple of *three* (3), then the number itself is also a multiple of three, and therefore cannot be a prime number.

Although Greek mathematicians in the third century BCE were able to determine many of the properties of prime numbers, very little further interest seems to have been taken in them until the seventeenth century CE. Prior to that time, the largest prime numbers found tended to take the form 2^{n} - 1, where *n* is a prime number. Numbers of this form later became known as *Mersenne primes*, for reasons we will explain shortly. The two largest prime numbers known to exist at this time were both discovered in 1588 by the Italian mathematician *Pietro Antonio Cataldi* (1548 - 1626). The first of these was 2^{17} - 1 = 131,071, and the second was 2^{19} - 1 = 524,287. Not all numbers in this form are primes, however. Cataldi subsequently claimed that values for *n* of 23, 29, 31 and 37 would also yield prime numbers, whereas in fact all but one of the resulting numbers were later found to be composite (the exception being 2^{31} - 1). Nevertheless, Cataldi's two confirmed primes (now designated M_{17} and M_{19}) would remain the largest known prime numbers for the better part of the next two centuries.

In the seventeenth century CE, a significant contribution to the study of prime numbers was made by the French lawyer and amateur mathematician *Pierre de Fermat* (1601 - 1665). We should perhaps start by mentioning the fact that Fermat incorrectly asserted that all numbers of the form 2^{2^n} + 1 were prime numbers. This does in fact hold true for the first five values of *n*. The numbers *three* (2^{1} + 1 = 3), *five* (2^{2} + 1 = 5), *seventeen* (2^{4} + 1 = 17), *two hundred and fifty-seven* (2^{8} + 1 = 257) and *sixty-five thousand, five hundred and thirty-seven* (2^{16} + 1 = 65,537) are all prime. However, the next number in the series is 2^{32} + 1 = 4,294,967,297. It turns out that this number is composite, although this was not discovered until a century later when the Swiss mathematician and physicist *Leonhard Euler* (1707 - 1783) showed that it could be factorised. To date, no other numbers of this form (known as *Fermat numbers*) have been found to be prime numbers.
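We can check Fermat's claim, and Euler's refutation of it, directly. Euler showed that F_{5} has the factor 641:

```python
# Fermat numbers F_n = 2^(2^n) + 1
fermat_numbers = [2 ** (2 ** n) + 1 for n in range(6)]
print(fermat_numbers[:5])   # [3, 5, 17, 257, 65537] -- all prime

# The sixth, F_5 = 4,294,967,297, is composite (Euler's factorisation)
assert fermat_numbers[5] == 4294967297
assert fermat_numbers[5] == 641 * 6700417
```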

In about 1640, Fermat made a more significant contribution to prime number theory with what came to be known as *Fermat's Little Theorem*. It was so named in order to distinguish it from another theorem formulated by Fermat some time later, known as *Fermat's Last Theorem*. Fermat's Little Theorem states that for any prime number *p* and any integer *a* that is not divisible by *p*, *a* ^{p-1} - 1 is an integer multiple of (and thus divisible by) *p*. Taking the contrapositive gives us a test: for any number *n* and any integer *a* not divisible by *n*, if *a* ^{n-1} - 1 is not divisible by *n*, then *n* cannot be prime. A formal way of expressing Fermat's Little Theorem using the notation of modular arithmetic would be:

*a* ^{p-1} ≡ 1 (mod *p*)

This notation looks a little confusing if you are not familiar with modular arithmetic, but it basically means that if you subtract *one* (1) from the first term (*a*^{p-1}), it will result in a value that is a multiple of (or can be divided by) the term appearing after the word mod inside the brackets, which in this case is *p*. An example might make things a bit clearer. Supposing we take the number *sixteen* (16) and raise it to the power of the prime number *five* (5) *minus one*. We get:

16 ^{5-1} ≡ 1 (mod 5)

This is the same as saying:

16 ^{4} - 1 is divisible by five.

or:

65,535 is divisible by five.

This is clearly true, since any natural number greater than five that ends in either zero or five will yield an integer result when divided by five. You can try other examples using different numbers, but so long as you make sure that the value used for *p* is a prime number, you will get the same result. Supposing we use the expression *a* ^{n-1} - 1 as a test for whether or not *n* is a prime number (Fermat did in fact propose this as a test for primality). If *n* is not a prime number, we will probably find that the statement evaluates to false. Supposing we again take the number sixteen, but this time raise it to the power of *nine* (9). We get:

16 ^{9-1} ≡ 1 (mod 9)

This is the same as saying:

16^{8} - 1 is divisible by nine.

or:

4,294,967,295 is divisible by nine.

This turns out to be false, since the division leaves us with a remainder of *three* (3). This confirms the rule: for any number *n* and any *a* not divisible by *n*, if *a* ^{n-1} - 1 is not divisible by *n*, then *n* is not prime. Although Fermat offered no proof for his theorem (apparently he was rather notorious for not bothering with such requirements), it was later proved to be correct by the German mathematician and philosopher *Gottfried Wilhelm Leibniz* (1646 - 1716) and later still by Leonhard Euler. Unfortunately, there are (infinitely) many composite values of *n* that will actually pass Fermat's primality test. Thus, while the test will certainly tell us which numbers are definitely *not* primes, it cannot tell us for certain whether a candidate number *is* a prime. We therefore refer to Fermat's primality test as *probabilistic* (meaning that it tells us something about the *probability* of a number being prime) rather than *deterministic*. Nevertheless, when mathematicians want to write computer programs for finding prime numbers, Fermat's Little Theorem is very useful for reducing the number of candidates, enabling more efficient algorithms to be written. Fermat's other contributions include his discovery that any prime number of the form 4*n* + 1 can be written as the sum of two squares, and a new method for factorising large numbers.
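Both worked examples, and the probabilistic nature of the test, are easy to verify in code. Python's built-in `pow(a, k, m)` computes a^{k} mod m efficiently, which is what makes the test practical even for very large numbers:

```python
def fermat_test(n, a):
    """Return True if a^(n-1) ≡ 1 (mod n).
    False means n is definitely composite; True means n *might* be prime."""
    return pow(a, n - 1, n) == 1

assert fermat_test(5, 16)         # 16^4 - 1 is divisible by 5
assert not fermat_test(9, 16)     # 16^8 - 1 is not divisible by 9...
assert (16 ** 8 - 1) % 9 == 3     # ...the remainder is 3

# The test is only probabilistic: 341 = 11 x 31 is composite,
# yet it passes the test to base 2 (a 'Fermat pseudoprime')
assert 341 == 11 * 31
assert fermat_test(341, 2)
```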

Another contributor during the early seventeenth century was the French monk *Marin Mersenne* (1588 - 1648) who studied mathematics (among other things) and had an interest in prime numbers. In particular, he studied prime numbers of the form M_{n} = 2 ^{n} - 1 which are today referred to as *Mersenne primes* in recognition of the work carried out by Mersenne. Not all numbers that take this form are prime numbers. In fact, if *n* is a composite number, then so is 2 ^{n} - 1. Nevertheless, Mersenne numbers have distinct properties that allow them to be tested for primality using relatively fast and efficient algorithms, not least of which is the fact that any Mersenne number M_{n} = 2 ^{n} - 1 for which *n* is composite can be immediately eliminated. Mersenne himself compiled a list of Mersenne primes with values for *n* ranging from *two* (2) to *two hundred and fifty-seven* (257), although his list was subsequently found to contain a number of errors and omissions.

As of 2013, a total of forty-eight Mersenne primes have been verified. In fact, the ten largest prime numbers discovered to date have all been Mersenne primes. Whether or not there are an infinite number of Mersenne primes is still a matter for conjecture. For much of the last two decades, the only new Mersenne primes have been discovered through the *Great Internet Mersenne Prime Search* (GIMPS), which is a distributed computing project run by *Mersenne Research, Inc.* (you can find out more about the project by visiting *http://www.mersenne.org*). The first four Mersenne primes (M_{2} = 3, M_{3} = 7, M_{5} = 31 and M_{7} = 127) have been known since at least the time of the early Greeks. We have already mentioned the sixth and seventh (M_{17} = 131,071 and M_{19} = 524,287), which were discovered in the sixteenth century by Pietro Cataldi. Only a handful more were discovered up until the 1950s, when computers became available, enabling the necessary calculations to be made many times faster than was previously possible.

The largest Mersenne prime to be verified using calculations made by hand is M_{127} = 2^{127} - 1. The calculations were carried out by French mathematician *Édouard Lucas* (1842 - 1891) in 1876 - no mean feat, given that this is a thirty-nine digit number. Not surprisingly, this number held the record for being the largest prime number discovered for over seventy years, although smaller, previously undiscovered primes were found during the intervening years. Even using modern computers, the exponential rate at which Mersenne numbers increase in size limits the frequency with which they can be found. One of the most efficient methods for checking the primality of Mersenne numbers, the *Lucas-Lehmer* primality test, was originally devised by Lucas, and further refined by the American mathematician *Derrick Henry Lehmer* in the 1930s. In 1952, Lehmer oversaw the first successful attempt to identify a Mersenne prime (M_{521}) using a computer, at the University of California, Los Angeles. Ironically, despite this being the first Mersenne prime to be identified in nearly forty years, the next one (M_{607}) was found by the same computer less than two hours later.
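A minimal sketch of the Lucas-Lehmer test: starting from four, square and subtract two (reducing modulo M_{p} at every step) a total of *p* - 2 times; M_{p} is prime exactly when the final value is zero. Lucas's thirty-nine digit record prime takes only a moment to verify this way:

```python
def lucas_lehmer(p):
    """Lucas-Lehmer primality test for M_p = 2^p - 1 (odd prime p only)."""
    m = 2 ** p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m     # reduce modulo M_p at every step
    return s == 0

assert lucas_lehmer(13)         # M_13 = 8,191 is prime
assert not lucas_lehmer(11)     # M_11 = 2,047 = 23 x 89 is composite
assert lucas_lehmer(127)        # Lucas's M_127, verified almost instantly
```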

If you can imagine the sheer size of the largest prime numbers currently known to us, you will begin to appreciate the scale of the problem we face in determining whether such a number is a prime or not. To illustrate the point, consider that the largest prime number discovered to date (2^{57,885,161} - 1), when expressed as a decimal number, has 17,425,170 digits. If we wanted to publish this number in its full comma-delimited decimal format, we would need to produce a document several thousand pages in length. The exact number of pages would of course depend on the type font, line spacing, and page margins used. Based on a quick simulation we carried out using *Microsoft Word* and a 12-point serif font, we estimate that we can get something like *fifty* (50) lines per page, with *sixty* (60) digits per line, for a total of *three thousand* (3,000) digits per page. On that basis, our document would require *five thousand, eight hundred and nine* (5,809) pages of digits!
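There is no need to write the number out in full to count its digits: since 2^{p} is never an exact power of ten, 2^{p} - 1 has exactly ⌊p log₁₀ 2⌋ + 1 decimal digits. The page estimate above (using our assumed 3,000 digits per page) can be reproduced the same way:

```python
import math

exponent = 57_885_161
# 2^p - 1 has the same digit count as 2^p, since 2^p is never a power of 10
digits = math.floor(exponent * math.log10(2)) + 1
print(digits)                       # 17425170 decimal digits

pages = math.ceil(digits / 3000)    # assuming 50 lines x 60 digits per page
print(pages)                        # 5809 pages
```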

We have talked about the distribution of prime numbers throughout the domain of the natural numbers. As you have probably realised, there are actually quite a few prime numbers between one and a hundred. A quick check will show, however, that there are more prime numbers between one and fifty than between fifty and a hundred. If you were to conclude from this that the gap between one prime number and the next will tend to increase as the numbers get bigger, you would be right - at least in terms of the overall trend. For any given number *n*, there is a mathematical function that gives the number of primes less than or equal to *n*. This function is called the *prime counting function*, and is denoted π (*n*). For example, supposing we wanted to use the prime counting function to express the fact that there are fifteen prime numbers less than or equal to fifty. We would express this as:

π (50) = 15

Over a broad range of numbers, the distribution of primes is fairly even. It is described by the *prime number theorem*, which was originally proposed at the beginning of the nineteenth century CE by the French mathematician *Adrien-Marie Legendre* (1752 - 1833) and the German mathematician and physicist *Carl Friedrich Gauss* (1777 - 1855), working independently. The prime number theorem states that, for any randomly chosen number *n*, the probability that *n* is a prime is inversely proportional to the number of digits it has. An informal way of saying this would be that the bigger the number (i.e. the more digits it has), the smaller the chance that it is a prime. This fits in with the notion that prime numbers become harder to find as the numbers in the range we are looking at get bigger. In slightly more formal terms, we can say that the probability that *n* is a prime is inversely proportional to the logarithm of *n*. We can express this as follows:

Probability that n is prime ≈ 1 / ln n

where ln *n* is the natural logarithm of *n*. Legendre and Gauss also conjectured that the approximate value of the *prime counting function* for a given natural number *n* - i.e. the number of primes we can expect to find that are less than or equal to *n* - is approximately equal to *n* divided by the natural logarithm of *n*. We can state this formally as:

π (n) ≈ n / ln n

Let's see if that works out for the numbers up to one hundred. We already know how many primes there are between one and one hundred. Indeed, we listed them earlier on this page, and a simple count reveals that there are *twenty-five* (25) in total. Applying the formula we saw above for the prime counting function, we get:

π (100) ≈ 100 / ln 100 ≈ 100 / 4.605 ≈ 21.715

Well, we did say it was an approximation! Applying the same formula to a value for *n* of *one million* (1,000,000) gives us the following:

π (1,000,000) ≈ 1,000,000 / ln 1,000,000 ≈ 1,000,000 / 13.816 ≈ 72,382

The actual number of primes between one and one million is *seventy-eight thousand, four hundred and ninety-eight* (78,498), so again, we're in the right ball park. Both Legendre and Gauss provided modified formulae in order to achieve more accurate results. For example, Legendre's revised estimate for π (*n*) is given by:

π (n) ≈ n / (ln n - 1.08366)

If we apply Legendre's modified formula to a value for *n* of *one million*, we get:

π (1,000,000) ≈ 1,000,000 / (ln 1,000,000 - 1.08366) ≈ 1,000,000 / 12.732 ≈ 78,543

This new approximation is very close to the actual figure (78,498). In 1896, both the French mathematician *Jacques Salomon Hadamard* (1865 - 1963) and the Belgian mathematician *Charles Jean de la Vallée-Poussin* (1866 - 1962), working independently, were able to present proofs that the conclusions reached by Legendre and Gauss about the distribution of prime numbers had been correct.
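The estimates above are easy to reproduce. A quick sketch that counts the primes up to one million with a sieve and compares the count against both approximations:

```python
import math

def count_primes(limit):
    """Count primes up to limit with a Sieve of Eratosthenes."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return sum(is_prime)

n = 1_000_000
actual = count_primes(n)                   # 78,498
gauss = n / math.log(n)                    # about 72,382
legendre = n / (math.log(n) - 1.08366)     # about 78,543
print(actual, round(gauss), round(legendre))
```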

There are a number of other conjectures relating to prime numbers, some of which have been proved and some of which have not. For example, the German mathematician *Christian Goldbach* (1690 - 1764) proposed that every even number greater than two can be expressed as the sum of two prime numbers. Although this has not so far been mathematically proven, we know that it holds true for all even numbers up to and including 2 × 10^{17}. Other conjectures relate to the intervals between consecutive primes. The prime number theorem tells us that the gap between consecutive primes will increase as their size increases, and this is generally true. However, they often occur at much smaller intervals than the theorem predicts (and sometimes, it should also be said, at much greater intervals). It has been known for a very long time, for example, that there are many pairs of consecutive primes with a difference of just *two* (2). Eight such pairings occur between primes of less than one hundred, which we list here:

{3, 5}

{5, 7}

{11, 13}

{17, 19}

{29, 31}

{41, 43}

{59, 61}

{71, 73}

The *twin prime conjecture* asserts that there are an infinite number of pairs of primes that have a difference of two. Although the conjecture has never been proved, and despite the fact that twin primes occur less frequently as the numbers get larger, they do seem to occur throughout the range of natural numbers. In fact, at the time of writing, the largest twin primes found so far are 3,756,801,695,685 × 2^{666,669} - 1 and 3,756,801,695,685 × 2^{666,669} + 1. The French mathematician *Alphonse de Polignac* (1826 - 1863) went even further, claiming that for every positive integer *n*, there must be an infinite number of pairs of consecutive prime numbers that differ by 2*n*. No doubt the subject of prime numbers and their properties will continue to be of interest to mathematicians, especially those involved in the study of the branch of mathematics known as *number theory*. As mentioned earlier, however, prime numbers are of particular interest in the age of electronic communication because of their importance in the field of data encryption.
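The eight twin prime pairs listed above can be recovered with a few lines of code (a sketch using a simple sieve; the names are our own):

```python
def primes_up_to(limit):
    """Return all primes up to and including limit (simple sieve)."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n in range(2, limit + 1) if is_prime[n]]

primes = primes_up_to(100)
prime_set = set(primes)
twin_pairs = [(p, p + 2) for p in primes if p + 2 in prime_set]
print(twin_pairs)   # eight pairs, from (3, 5) to (71, 73)
```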

One of the best-known encryption algorithms in the public domain is the *RSA public key encryption algorithm*, which first appeared in 1977 and was named for its co-inventors, *Ron Rivest*, *Adi Shamir* and *Leonard Adleman*. An organisation wishing to use RSA encryption creates a *public key* by generating a very large composite number (called a *semiprime*) using two large prime numbers. The semiprime (which is simply the *product* of the two prime numbers) is paired with a very much smaller number known as the *auxiliary number* (more commonly called the *public exponent*). Together, the semiprime and the auxiliary number form the public key. The public key is then published so that anyone wishing to do so may send RSA encrypted messages to the organisation (of course, the prime numbers used to generate the semiprime must *not* be disclosed). The public key is used to encrypt the message. The organisation also generates a *private key*, using the same prime numbers and auxiliary number, by means of a slightly more complex mathematical process.

The encryption algorithm itself is essentially a mathematical function that accepts two parameters - the message to be encrypted, and the public key. The *output* from the encryption algorithm is the encrypted version of the message. The decryption algorithm decodes the encrypted message using the private key, which is known only to the receiver. Its output is the original message. Because the encryption and decryption algorithms use different keys generated from the same pair of prime numbers, they are said to be *asymmetric*. You may have surmised from the above that it is theoretically possible for someone with a detailed knowledge of RSA encryption to use what they know to decrypt an encoded message. Information about the encryption and decryption algorithms, together with the methods used to calculate public and private keys, is publicly available (see *http://tools.ietf.org/html/rfc3447*). Anyone with access to an organisation's public key could factorise the semiprime component into its constituent prime numbers. They could then use the results, together with the auxiliary number component, to derive the private key. Once the private key was acquired, decoding encrypted messages sent to the organisation would be relatively easy.
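To make the mechanics concrete, here is a toy version of RSA key generation, encryption and decryption using deliberately tiny primes. The specific numbers below are purely illustrative; real keys use primes hundreds of digits long, together with padding schemes that this sketch omits:

```python
p, q = 61, 53                 # the two (secret) primes
n = p * q                     # the semiprime: n = 3233
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # the small public 'auxiliary number'
d = pow(e, -1, phi)           # private exponent: modular inverse of e (Python 3.8+)

public_key = (n, e)
private_key = (n, d)

message = 65
ciphertext = pow(message, e, n)      # anyone can encrypt with (n, e)
recovered = pow(ciphertext, d, n)    # only the holder of d can decrypt
assert recovered == message
```

Notice that everything a would-be attacker needs is recoverable from the public key alone, *provided* they can factorise n back into p and q - which is exactly the problem that becomes intractable when the primes are large.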

Although the scenario described above is theoretically possible, the sheer size of the semiprime number component of an RSA public key presents a would-be hacker with a daunting task. The strongest version of RSA (as of 2003) uses a 3072-bit key. The semiprime component, even expressed as a decimal integer, would have hundreds of digits. Most would-be hackers simply do not have access to the computational resources needed to factorise a number of this size. Even if such resources somehow became available, the amount of time required could render the exercise pointless, since the kind of information that needs to be encrypted is often only useful for a short time. Of course, processor technology continues to evolve, and the algorithms used to carry out such analysis are constantly being refined. Even so, the scope of the problem is demonstrated by the results of a distributed computing experiment carried out at the *École Polytechnique Fédérale de Lausanne* in Switzerland in 2007. It reportedly took up to four hundred desktop and laptop computers, working together, nearly a year to factorise a single 1039-bit integer!