PHILOSOPHY OF MATHEMATICS EDUCATION JOURNAL 11 (1999)

 

THE VARIETIES OF NUMERICAL EXPERIENCE

Jerry Ravetz

It doubtless gives comfort to some philosophers to know that a number is "the set of all sets that are similar to a given set". But for those who use and teach mathematics at any level but the most ethereal, Bertrand Russell's classic formulation leaves much to be desired.

The debate over the essence of number goes back continuously to Plato and Aristotle. I would like to try another approach. I will start by considering the various functions performed by numbers, and then see what this variety can tell us about how numbers are to be used, conceived and, possibly, taught.

Here is the list. Numbers can be used, variously, to: count, name, calculate, measure, estimate, order, denote, theorise, enrich.

The last entry may be obscure; it is the word I have chosen (for lack of a better one) for the functions performed by such numbers as 3, 7, 666 and 2000. It provides a link between the worlds of scientific rationality and of symbolism and magic. The latter, after all, has been an important, if not indeed primary, use of numbers down through the ages, and even we are not immune to its influence.

I will argue that for each function there are appropriate concepts, symbols and techniques. Together these form a family that we call "number". Each "functionally defined number concept" is a product of the interaction of practice with theory in history; it has its own richness, and its own counter-intuitive aspects, obscurities and contradictions. To ignore such variety and depth, and to conceive and teach "number" as a single, simple thing, is to produce confusion and incompetence among pupils and teachers alike.

Let us start at the beginning. The counting of the smallest sets can be done by tallying, but soon it requires names. And the naming process runs up against the problem of the indefinite extension of the sequence of counters. No matter how high we count, there is always another element there, at least in our thoughts. The most primitive solution is to have "many" after 1, 2, 3, ..., representing the sets of objects that the mind cannot hold in its grasp at once, perhaps anything more than 5 or 7. Or, more sophisticatedly, we might choose a largest unit, like "myriad", which we know we will never reach in practice, and so be secure in the knowledge that "myriad plus one" is a meaningless expression. But once we allow mathematicians to theorise, no such barriers are effective. Even the child who asks, "What is the last number?" and is silenced by scorn, has sensed the infinite in the apparently trivial.

Naming of numbers is thus intimately connected with counting. For mathematicians, this function might seem totally non-theoretical and trivial. Indeed, once we have the indefinitely extensible exponential notation, it is in principle straightforward. In its absence, it took an Archimedes to construct recursive systems of naming of large quantities (in his "Sand-reckoner"). In practice, the matter is far from trivial. Not only technical practice, but even power-politics can determine names; note how the old British Billion, or a million-million, was silently discarded in favour of the Franco-American variety (thousand-million) not many years ago.
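As a present-day gloss on Archimedes' idea, here is a minimal sketch in Python; the function name and the choice of a myriad (10,000) as the unit of recursion are mine for illustration, not a reconstruction of the "Sand-reckoner":

    # Decompose n into "orders" of a myriad: n = sum of orders[k] * 10,000**k.
    MYRIAD = 10_000

    def myriad_orders(n):
        orders = []
        while n > 0:
            n, part = divmod(n, MYRIAD)
            orders.append(part)
        return orders  # orders[k] counts units of MYRIAD ** k

    print(myriad_orders(65_000_000))  # [0, 6500]: 6,500 myriads and no units

The point of the recursion is Archimedes' own: once a naming scheme can be applied to its own largest unit, no barrier like "myriad plus one" can stand.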

The naming of numbers can also introduce us to counter-intuitive features in a way that can enhance our imaginations. Some will remember the programs in "Multi-base arithmetic for tots" that formed one of the more notorious components of the New Maths. Naming also forms a direct link to the magical functions of numbers, through gematria. This "science" was natural when letters of the alphabet were used to denote numbers; add up a name, and you have its number. It actually happened to me, not to a friend of a friend, to be told by an educated Dutch woman that during the war their Calvinist preacher instructed them that Hitler's number was 666, confirming what they already knew about his wickedness and thereby justifying joining the Resistance. (The conversation was in the summer of 1965, in a suburb of Utrecht).
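For the curious, gematria is trivially mechanised. A toy sketch in Python: the letter-value schemes are standard illustrations, and the second (A = 100, B = 101, ...) is the one by which wartime numerologists famously extracted 666 from HITLER:

    # Sum the letter-values of a name under a chosen scheme.
    def gematria(name, base=1):
        return sum(ord(c) - ord('A') + base for c in name.upper() if c.isalpha())

    print(gematria("HITLER"))            # 72  under A=1, B=2, ...
    print(gematria("HITLER", base=100))  # 666 under A=100, B=101, ...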

When we come to calculating, the fun really begins. Here I can enunciate the principle: it is the inverse operation that requires extensions of the original set of objects, and the extended set always includes non-standard objects. (The basic insight for this was given to me by D. T. Whiteside, the Newtonian scholar at Cambridge). Thus we can add and multiply the numbers with which we count, and we just get more of the same. But when we subtract, we get "negative" numbers. Of these, the bigger are in a sense smaller; and the multiplication rule (-1)(-1) = +1 needs explanations which themselves need explanations.
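For the record, one standard explanation derives the rule from distributivity alone (a sketch, not the only route):

    0 = (-1) x 0 = (-1) x (1 + (-1)) = (-1) x 1 + (-1) x (-1) = -1 + (-1) x (-1),

so (-1) x (-1) must be +1 if the familiar laws of arithmetic are to survive the extension. That, of course, is an explanation which itself needs an explanation: why should the laws survive?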

Subtraction also shows what happens when a naming system is tightly bound up with calculation, and then shares the contradictions that arise on its extension. "Place-value" is a graphical naming system, with quite reasonable rules for "carrying" extra digits in a sum. But when we subtract larger digits from smaller, there's trouble ahead. My impression is that wars between religious sects are nothing in ferocity compared to those between the adherents of the different ways of dealing with that particular monster. Either one engages in the peculiar practice of "borrow and pay back", or one breaks the basic principle of the notation, rewriting a number like 2\5 as 1\15 so that there is a valid subtraction to perform in the units column. (If I am mistaken here, I will be pleased to accept correction from those closer to the chalk-face.) And I believe that the ferocity of the arguments arises from the same cause as in religious sectarianism: the conviction that there is a simple truth, indeed there must be a simple truth; and so those who disagree with me are defective in both reason and morals.
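A minimal sketch of the "decomposition" camp's procedure, in Python; the digit-list representation (least-significant digit first) and the function name are my illustrative choices:

    # Column subtraction by decomposition: when a top digit is too small,
    # break a unit off the column to the left (so 2\5 becomes 1\15).
    # Assumes top >= bottom and equal-length digit lists.
    def subtract_digits(top, bottom):
        result, borrow = [], 0
        for t, b in zip(top, bottom):
            t -= borrow
            borrow = 1 if t < b else 0
            result.append(t + 10 * borrow - b)
        return result  # least-significant digit first

    print(subtract_digits([5, 2], [9, 1]))  # 25 - 19 -> [6, 0], i.e. 06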

While we are on the interaction between naming and calculating, we might as well mention The Zero. Here is a quasi-number which obeys some laws of calculation but not others (you can do everything with Zero except divide by it). Also, it behaves rather like a cardinal (quantity) but not at all like an ordinal. There is no zeroth element of a sequence, which leads to confusion in numbering and naming centuries (the Italians are at least consistent: their Cinquecento is our 1500's, or, ordinally, the "Sixteenth" century). Even when "the Millennium" begins is a matter of fiat. I have seen philosophers in some confusion when talking about Number: is Zero a number, or isn't it?

All these topics I have dealt with so far are in the realm of "elementary" mathematics, which "every schoolboy", and every schoolteacher, should find easy to learn and teach. I could go on about Fractions, but I will forbear.

Next comes measurement. From classical antiquity until modern times, it was accepted that the two great branches of mathematics were those dealing with discrete quantity and continuous magnitude, respectively. The former was the field of arithmetic, the latter of geometry. The division broke down in Renaissance times. First, algebra developed beyond the solution of puzzles with integer answers. And the use of numbers to describe continuous magnitudes (traditional in astronomy) was extended to a variety of new fields. In direct measurement, there is always a "last digit" of estimation between the finest marked gradations. Hence fractions, at least of one digit (or with simple denominators), perform a real function in the measurement process.

In a way that was nearly unique in history, the little book on "The Decimal" ("De Thiende"), published by Simon Stevin in 1585, immediately transformed practice. A new common sense was instantly created, and it is left for historians to puzzle out what counter-intuitive features of decimal fractions were resolved or suppressed by Stevin's account.

It could be that he showed practitioners that they just didn't need to worry about the geometrical meaning of the "powers" of numbers, meanings that we still retain in the terms "square" and "cube". A digit located two places to the right of the decimal point was just there; attempts to explain it in geometrical terms were redundant. Judging from previous practice, it seems that decimal fractions, as in trigonometrical tables, could be rationalised in terms of a "root", say 10,000; that could be the number of units in an hypotenuse, and the sine or cosine would be so many of those units. Again, Stevin showed that no pitfalls were lurking if one simply threw away that "root" and used as many decimal places as necessary. Of course, that opened up the way to the infinite, but I know of no record of people objecting. Arithmetic was for practical men rather than for philosophers; Descartes resolved the problem of dimensions for geometry in a rather different way.
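To make the contrast concrete (my reconstruction, not an example from Stevin): in a table built on a root of 10,000, the sine of 30 degrees appears as 5,000 — that many units of a hypotenuse divided into 10,000 parts. Stevin's notation simply discards the root and writes sin 30° = 0.5000, with as many places as the work demands and no geometrical story to tell about them.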

The point here is that Stevin's notation seemed to work; and in practice anyone could know how many "significant digits" to take in measuring a quantity against a standard. That might well be so; but some of us are aware that when calculation is mixed with measurement, bizarre results are all too easily obtained. How many digits are significant when two one-digit numbers are multiplied, or divided? The answer is far from trivial. It is all too easy to retain lots of digits; one can still hear the argument that one shouldn't throw away all those digits, as they might actually express the right answer! A more familiar example is those numerical tables where the percentages add up to exactly 100. This is either the result of good luck, or of someone fiddling the numbers rather than confronting the paradox that rounding-off produces counter-intuitive results.
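A minimal illustration in Python (the data are invented: three equal shares):

    # Three equal shares, each rounded to the nearest whole percent.
    counts = [1, 1, 1]
    total = sum(counts)
    percents = [round(100 * c / total) for c in counts]
    print(percents, sum(percents))  # [33, 33, 33] 99 -- not 100

No honest rounding rule will make these columns sum to 100; a table that does has either been lucky or fiddled.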

Also, few are aware of the pitfalls arising in the representation of the uncertainties in inverse arithmetical operations. Consider the example 1/(b - a), where b = 100, a = 97, and each has an "error" of 2%. The nominal answer is 1/3; but b may lie anywhere in [98, 102] and a anywhere in [95.06, 98.94], so b - a may lie anywhere between about -0.9 and +6.9 — an interval that contains zero, so the quotient is unbounded and may even change sign. The inversion of matrices easily becomes an art rather than a science when uncertain quantities are involved.
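The same computation as a sketch in interval arithmetic (Python; the bounds follow directly from the stated 2% errors):

    # 2% error bands on b = 100 and a = 97.
    b_lo, b_hi = 100 * 0.98, 100 * 1.02    # [98.0, 102.0]
    a_lo, a_hi = 97 * 0.98, 97 * 1.02      # [95.06, 98.94]
    # Worst-case bounds for the difference:
    d_lo, d_hi = b_lo - a_hi, b_hi - a_lo  # about [-0.94, 6.94]
    print(d_lo, d_hi)  # the interval straddles zero: 1/(b - a) is unbounded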

I have mentioned estimation in connection with measurement, as the means of obtaining that last digit. But estimation goes much further than that. It involves the explicit management of significant uncertainties, to avoid results that are meaningless in their hyper-precision, or perhaps even meaningless in themselves. I believe that in the last century, estimation was taught in schools along with mental arithmetic; and perhaps it will come back some day.

A subtle use of estimation is in connection with large aggregates of things, let us say of the order of millions. Here our standard number system is both unsubtle and ambiguous. In speech we can say "one million", "a million" or "millions", where the increasing vagueness of expression connotes decreasing precision in the number referred to. With place-value notation we have the totally misleading 1,000,000; and even scientific notation allows us only 1.E6.

That last example highlights a feature of the zero, whose importance is matched only by its neglect. I can illustrate it with a joke about fossils. (I read it in the Reader's Digest many years ago; but Lewis Carroll had already made the same point in _Sylvie and Bruno_.) There was this museum curator who told a group of schoolchildren that a particular fossil was sixty-five million and four years old. Someone asked how he knew; and he explained that he had learned from Jurassic Park that the fossil was sixty-five million years old, and that had been four years previously. Someone else scoffed, and he said, OK, let's do the sum:

  65,000,000
+          4
------------
  65,000,004

One of the children then turned to the teacher, and asked her whether the sum was correct. She was in a quandary; although the curator's answer was nonsense, she could not bring herself to write the meaningful sum:

  65,000,000
+          4
------------
  65,000,000

What she could not explain, because it had never been explained to her, was that the Zero, along with its other non-standard properties, actually means two very different things. In manipulation among other digits, it is a counter, arising when equals are subtracted from equals. But in the naming of estimates of large numbers in place-value notation, it is a filler. We can suppose that all six of the Zeroes in the expression 65,000,000 are fillers; together they mean "millions". But the usage is ambiguous; thus if we had 60,000,000, we do not know whether this means six ten-millions or sixty millions.
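A small Python illustration of the filler ambiguity (the numbers are mine): the place-value numeral cannot record how many of its zeroes are significant, while scientific notation with an explicit significand can.

    n = 60_000_000
    print(f"{n:,}")        # 60,000,000 -- counters or fillers? the numeral won't say
    print(f"{6.0e7:.1e}")  # 6.0e+07    -- two significant figures, stated
    print(f"{6e7:.0e}")    # 6e+07      -- one significant figure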

Of course this ambiguity and the resulting confusion are not usually of critical importance; but they contribute to the haze of misunderstanding of mathematics which is all the more damaging because it is deemed to be nonexistent or illegitimate. One consequence of this suppressed muddle is the prevalence of cases of hyper- or pseudo-precision in the numbers quoted in public discussion of science, technology and public affairs. Thus we will have social statistics given to four or more significant digits, and translations between Imperial and Metric units producing ludicrous numbers. (One of my favourites is the figure of 335.28 m as the distance between the landing point of a moon-probe and its target. This is actually "a thousand feet + 10%" translated precisely: 1,100 feet at exactly 0.3048 m to the foot.) But we should not be too hard on the publicity persons; the Système International itself, with its abolition of the centimetre, enforces a pseudo-precision by requiring measurements in millimetres even where these are of dubious significance.

I need not spend too much time on the functions at the end of the list. The use of numbers for ordering has the hidden pitfall of expressing more precision than is justified or intended. I recall using numerical grades for assessing students' work, where each number had a little poetic characterisation of its meaning. Then we would average them! Of course, my colleagues and I knew what we were doing, and if the average was ambiguous, then we would think again. But it could be a ripe subject of anthropological research, to see which departmental cultures believe in the absolute quantitative truth and accuracy of their two-digit numbers.
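A toy illustration in Python (the grades and scale are invented): averaging one-digit ordinal labels manufactures digits of precision that the labels never carried.

    grades = [7, 8, 6]  # each number really a label with a poetic gloss
    print(sum(grades) / len(grades))  # 7.0 -- looks exact; means little
    grades = [7, 8, 7]
    print(sum(grades) / len(grades))  # 7.333... -- four digits from one-digit labels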

I mention denoting as another, broader function. Normally it is harmless; we know that the numbers on footballers' shirts do not convey an order, and still less a quantity. But subtle problems can arise when computers interpret such denotations as numbers and not as strings. For example, zeroes to the left of an expression may be disregarded, while they actually are significant in some contexts.
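A minimal Python sketch of the pitfall (the shirt number is an invented example): once the denotation is parsed as a number, its leading zeroes — significant as a label — are gone.

    shirt = "007"
    print(int(shirt))       # 7 -- the leading zeroes have vanished
    print(int(shirt) == 7)  # True: the label has collapsed into a quantity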

Finally, there are numbers as the basis for theorising. We see this first with "algebraic" numbers, starting with the square root of two, and proceeding to "imaginary" and "complex" numbers. Each of these was the occasion of confusion or crisis on its first appearance; and their descendants become increasingly counter-intuitive. I recall feeling distinctly odd on being told (by a physicist, no less) that Vector cross-multiplication is non-associative; that's a funny sort of quantity, I thought.
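The physicist's point, checked in a few lines of Python (the basis vectors are the standard i, j, k; the function is my own sketch):

    # Cross product of 3-vectors, represented as tuples.
    def cross(u, v):
        return (u[1]*v[2] - u[2]*v[1],
                u[2]*v[0] - u[0]*v[2],
                u[0]*v[1] - u[1]*v[0])

    i, j = (1, 0, 0), (0, 1, 0)
    print(cross(i, cross(i, j)))  # i x (i x j) = i x k = (0, -1, 0)
    print(cross(cross(i, i), j))  # (i x i) x j = 0 x j = (0, 0, 0)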

To finish my story, there are the Alephs of Georg Cantor, with which he tried to count infinity. They bring us back to the contradictions of counting, in that it deals with finite things but cannot keep them in bounds. For Cantor, the pitfall came in the idea of "set", which the proponents of the New Maths considered to be quite elementary.

What does all this amount to? For me, it is a practical refutation of the faith in mathematics that we have inherited from Descartes, Galileo and their followers. We do not need to go as far as The Calculus to find mysteries and confusions. They are there in the basic concepts we use to organise our experience of the world, even in the most abstract way. My point could be summed up in the slogan, "Zeno was right!". The question is whether we continue in the millennial endeavour to refute his demonstration of the contradictoriness of our basic conceptual apparatus; or whether we learn to live with it and make it a source of a mathematical understanding that is fruitful and effective.

 © The Author 1999
