On primes and Pluto

Burkard Polster and Marty Ross

The Age, 3 March 2014

Is 1 a prime number? Not according to the Australian Curriculum, which bluntly declares that, whatever else, a number must be greater than 1 to be prime. So, that settles it. We'll be back next week.

Except, of course, such an answer settles nothing. As many a curious maths student has noted, the apposite question is the follow-up: why is 1 not prime? After all, it is clearly not a product of other numbers. However, the frequent reply is a dismissive "because your teacher (and the Australian Curriculum) said so". Great help.

There are variations on the don't-bother-me response. For example, your Maths Masters are often apprised that a prime is a natural number (positive whole number) with exactly two distinct factors, namely 1 and itself; that leaves us with the expected 2, 3, 5, 7, 11 and so on. However, this is nothing but legal shenanigans: the contrived qualification "exactly two distinct factors" has been inserted precisely to exclude 1, with still no justification for doing so.
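For readers who like to tinker, that "exactly two distinct factors" definition can be checked mechanically. Here is a small sketch of our own (not from any official source) that simply counts factors the naive way; notice how the wording quietly disqualifies 1, which has only one factor:

```python
def is_prime(n):
    """A number is 'prime' if it has exactly two distinct factors: 1 and itself."""
    factors = [d for d in range(1, n + 1) if n % d == 0]
    return len(factors) == 2

# 1 has exactly one factor (itself), so it fails the "exactly two" test.
print([n for n in range(1, 12) if is_prime(n)])  # [2, 3, 5, 7, 11]
```

Drop the word "exactly" and 1 sneaks back in, which is precisely the legal shenanigans at issue.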

And the justification can't be all that simple. After all, the number 1 used to be prime. Really.

Exactly 100 years ago mathematician D. N. Lehmer compiled his List of Prime Numbers from 1 to 10,006,721. There, proudly occupying first place in the table, is the number 1.

Was Lehmer a one-off, 1-loving crank? Decidedly no. In the centuries prior to Lehmer many tables of prime numbers were published and some, though not all, began with the number 1. Clearly some explanation is in order, and the explanation begins on Pluto.

Here's the question: is Pluto a planet?

The answer, as most readers will know, is that Pluto is not a planet but that it used to be. Pluto was discovered in 1930 and was proclaimed to be the ninth planet in our solar system. However, astronomers always recognised that Pluto is very different from the other planets, and in 2006 the International Astronomical Union reclassified Pluto as a lesser, dwarf planet.

The point is that the notion of "planet" is not God-given and it is not set in stone. Rather, astronomers define what they mean by the word. Then, as astronomers learn more about just what kind of things are zooming around out there, they refine their classifications, making "planet" more precise and more useful. And so, as it happened, Pluto was demoted.

The question of which numbers are prime is analogous. We cannot hope to prove that 1 is or is not prime; it is simply a question of how mathematicians have chosen to define "prime". And, though it is now accepted that 2 should be the first prime number, historically mathematicians have been neither clear nor consistent.

So how did mathematicians come to agree to exclude 1 as a prime, and why did it take them so long to do so? The answers take us into some weird and fascinating history.

The importance of primes is that they are the building blocks of the natural numbers; any composite number (a number greater than 1 that is not itself prime) can be written as a product of primes. For example, 84 is the product 2 x 2 x 3 x 7. Moreover, except for changing the order of the factors, 84 can be written as a product of primes in just that one way. That the same is true for any composite number is the very important (and not so easy to prove) fundamental theorem of arithmetic.

What if we permitted 1 to be prime? In that case, 84 would also have the "prime" factorisation 1 x 1 x 1 x 2 x 2 x 3 x 7. That is, 84 could still be factorised, but it would no longer have a unique prime factorisation.
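The loss of uniqueness is easy to see by experiment. As a sketch of our own devising (not part of the column's argument, just an illustration of it), the following factorises 84 by trial division and then pads the answer with 1s, producing a different "prime" factorisation with the very same product:

```python
from math import prod

def prime_factors(n):
    """Factorise n into primes greater than 1 by repeated trial division."""
    factors = []
    d = 2
    while n > 1:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    return factors

print(prime_factors(84))  # [2, 2, 3, 7] -- the one and only factorisation

# If 1 counted as prime, we could pad with any number of 1s:
padded = [1, 1, 1] + prime_factors(84)
print(prod(padded))  # 84 -- same product, a different "prime" factorisation
```

Since any quantity of 1s can be tacked on, 84 would have infinitely many "prime" factorisations, and the word "unique" in the fundamental theorem would need rescuing.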

The upshot is, if 1 is a prime number then describing prime factorisations is more complicated. That would seem sufficient reason to exclude 1 as a prime, and it is the reason your Maths Masters have always accepted. However, both mathematically and historically, that reason turns out to be somewhat wide of the mark.

Consider again the fundamental theorem of arithmetic. Is it such a big deal if we're forced to replace "product of primes" with "product of primes greater than 1"? Hardly, and no one ever really considered it so. Indeed, until relatively recently the question barely even arose.

The fascinating history of such questions is documented in a wonderful paper by mathematicians Chris Caldwell and Yeng Xiong. They point out (quoting another excellent survey) that prime factorisation was not of any great interest before the 19th century. Yes, division by prime numbers was of practical importance, but not the complete factorisation.

It all really dates from 1801, when the great Carl Friedrich Gauss gave the first explicit statement of the fundamental theorem of arithmetic. Then, mathematicians began thinking seriously about the structure of numbers, and of whole worlds of numbers. And, once they started considering more exotic number worlds, things got very confusing.

In particular, mathematicians discovered very strange worlds in which a number can be factorised into "primes" in fundamentally different ways. (We'll discuss these bizarre worlds in a future column.) In order to figure out what was going on, mathematicians were forced to be extremely thoughtful and precise with their definitions. A critical component of these definitions was to get the distractions out of the way, to classify do-nothing numbers such as 1 into their own separate group. That was the real impetus for 1 to cease being prime, though it still took another century for the modern definitions to take hold.

That would seem to be pretty much the story, except there is one more, astonishing twist. From the 19th century on there was good reason to exclude 1 from the primes, but what about prior to that? Yes, a number of mathematicians classified 1 as a prime number, but before 1600 it was very uncommon to do so. Why? Because 1 wasn't a number!

We bet you didn't see that coming. Certainly your Maths Masters didn't. In their paper (and in an extensive companion survey coauthored by Angela Reddick), Caldwell and Xiong consider very carefully how mathematicians throughout history have thought of "number". It turns out that, at least as far back as Euclid, most mathematicians prior to 1600 regarded 1 not as a number at all but as the unit, the raw material of counting. By contrast, "numbers" proper were different, effectively created from 1 by addition. It was only with the emergence of decimals in the late 16th century that excluding 1 as a number began to seem arbitrary and unnecessary.

The amazing fact is, for most of its history 1 has not been a prime number, simply because it hasn't been a number. If it seems difficult figuring out what mathematics is about now, just imagine what it was like in times past.

Burkard Polster teaches mathematics at Monash and is the university's resident mathemagician, mathematical juggler, origami expert, bubble-master, shoelace charmer, and Count von Count impersonator.

Marty Ross is a mathematical nomad. His hobby is helping Barbie smash calculators and iPads with a hammer.

www.qedcat.com

Copyright 2004-∞ All rights reserved.