Infinity. It is a concept that defies imagination. We have a hard enough time trying to wrap our minds around things that are merely extremely big: our solar system, our galaxy, the observable universe. But those scales are nothing compared with the infinite. Just thinking about it can make you queasy.

But we cannot avoid it. Mathematics as we know it is riddled with infinities. The number line stretches to eternity and beyond, and is infinitely divisible: countless more numbers lurk between any two others. The number of digits in a constant like pi is limitless. Whether geometry, trigonometry or calculus, the mathematical manipulations we use to make sense of the world are built on the idea that some things never end.

Trouble is, once unleashed these infinities are wild, unruly beasts. They blow up the equations with which physicists attempt to explain nature’s fundamentals. They obstruct a unified view of the forces that shape the cosmos. Worst of all, add infinities to the explosive mixture that made up the infant universe and they prevent us from making any scientific predictions at all.

All of which encourages a bold speculation among a few physicists and mathematicians: can we do away with infinity?

Belief in the never-ending has not always been a mainstream view. “For most of the history of mathematics, infinity was kept at arm’s length,” says mathematician Norman Wildberger of the University of New South Wales in Sydney, Australia. For greats of the subject, from Aristotle to Newton and Gauss, the only infinity was a “potential” infinity. This type of infinity allows us to add 1 to any number without fear of hitting the end of the number line, but is never actually reached itself. That is a long way from accepting “actual” infinity – one that has already been reached and conveniently packaged as a mathematical entity we can manipulate in equations.

Things changed in the late 19th century, when the German mathematician Georg Cantor invented set theory, now the standard foundation of modern mathematics. He argued that sets containing an infinite number of elements were themselves mathematical objects. This masterstroke allowed the meaning of numbers to be pinned down in a rigorous way that had long eluded mathematicians. Within set theory, the infinite continuum of the “real” numbers, including all the rational numbers (those, like ½, which can be expressed as a ratio of integers) and the irrational numbers (those that cannot, like pi), came to be treated as actual, rather than potential, infinities. “No one shall expel us from the paradise Cantor has created,” the mathematician David Hilbert later declared.

For physicists, however, the infinite paradise has become more like purgatory. To take one example, the standard model of particle physics was long beset by pathological infinities, for instance in quantum electrodynamics, the quantum theory of the electromagnetic force, whose early calculations predicted the mass and charge of the electron to be infinite.

Decades of work, rewarded by many a Nobel prize, banished these nonsensical infinities – or most of them. Gravity has notoriously resisted unification with the other forces of nature within the standard model, seemingly immune to physicists’ best tricks for neutralising infinity’s effects. In extreme circumstances such as in a black hole’s belly, Einstein’s equations of general relativity, which describe gravity’s workings, break down as matter becomes infinitely dense and hot, and space-time infinitely warped.

But it is at the big bang that infinity wreaks the most havoc. According to the theory of cosmic inflation, the universe underwent a burst of rapid expansion in its first fraction of a second. Inflation explains essential features of the universe, including the existence of stars and galaxies. But it cannot be stopped. It continues inflating other bits of space-time long after our universe has settled down, creating an infinite “multiverse” in an eternal stream of big bangs. In an infinite multiverse, everything that can happen will happen an infinite number of times. Such a cosmology predicts everything – which is to say, nothing.

This disaster is known as the measure problem, because most cosmologists believe it will be fixed with the right “probability measure” that would tell us how likely we are to end up in a particular sort of universe and so restore our predictive powers. Others think there is something more fundamental amiss. “Inflation is saying, hey, there’s something totally screwed up with what we’re doing,” says cosmologist Max Tegmark of the Massachusetts Institute of Technology (MIT). “There’s something very basic we’ve assumed that’s just wrong.”

For Tegmark, that something is infinity. Physicists treat space-time as an infinitely stretchable mathematical continuum; like the line of real numbers, it has no gaps. Abandon that assumption and the whole cosmic story changes. Inflation will stretch space-time only until it snaps. Inflation is then forced to end, leaving a large, but finite, multiverse. “All of our problems with inflation and the measure problem come immediately from our assumption of the infinite,” says Tegmark. “It’s the ultimate untested assumption.”

### Disruptive influence

There are also good reasons to think it is an unwarranted one. Studies of the quantum properties of black holes by Stephen Hawking and Jacob Bekenstein in the 1970s led to the development of the holographic principle, which limits the maximum amount of information that can fit into any volume of space-time to a quarter of the area of its boundary, measured in Planck units. The largest number of informational bits a universe of our size can hold is about 10^{122}. If the universe is indeed governed by the holographic principle, there is simply not enough room for infinity.
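That 10^{122} figure can be checked with a back-of-the-envelope calculation: take a horizon the size of the Hubble radius, divide its area by four Planck areas. The constants below are rough textbook values used only for this order-of-magnitude sketch, not part of the original argument.

```python
import math

# Rough, hedged estimate of the holographic information bound for a
# horizon the size of our observable universe.
c = 3.0e8              # speed of light, m/s
H0 = 2.3e-18           # Hubble constant (~70 km/s/Mpc), in 1/s
l_planck = 1.616e-35   # Planck length, m

R = c / H0                          # Hubble radius, m
area = 4 * math.pi * R ** 2         # horizon area, m^2
nats = area / (4 * l_planck ** 2)   # entropy bound: area/4 in Planck units
bits = nats / math.log(2)           # convert natural units to bits

print(f"{bits:.1e}")                # lands on the order of 10^122
```

Any reasonable choice of Hubble radius gives the same ballpark: the bound is enormous, but resolutely finite.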

Certainly we need nothing like that number of bits to record the outcome of experiments. David Wineland, a physicist at the National Institute of Standards and Technology in Boulder, Colorado, shared the 2012 Nobel prize in physics for the world’s most accurate measuring device, an atomic clock that can measure increments of time out to 17 decimal places. The electron’s anomalous magnetic moment, a measure of tiny quantum effects on the particle’s spin, has been measured out to 14 decimal places. But even the best device will never measure with infinite accuracy, and that makes some physicists very itchy. “I don’t think anyone likes infinity,” says Raphael Bousso of the University of California at Berkeley. “It’s not the outcome of any experiment.”

But if infinity is such an essential part of mathematics, the language we use to describe the world, how can we hope to get rid of it? Wildberger has been trying to figure that out, spurred on by what he sees as infinity’s disruptive influence on his own subject. “Modern mathematics has some serious logical weaknesses that are associated in one way or another with infinite sets or real numbers,” he says.

For the past decade, he has been working on a new, infinity-free version of trigonometry and Euclidean geometry. In standard trigonometry, the infinite is ever-present. Angles are defined by reference to the circumference of a circle and thus to an infinite string of digits, the irrational number pi. Mathematical functions such as sines and cosines that relate angles to the ratios of two line lengths are defined by infinite numbers of terms and can usually be calculated only approximately. Wildberger’s “rational geometry” aims to avoid these infinities, replacing angles, for example, with a “spread” defined not by reference to a circle, but as a rational output extracted from mathematical vectors representing two lines in space.
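A minimal sketch of the idea, assuming Wildberger’s standard definition of the spread between two lines with direction vectors u and v: every operation is rational, so no pi, no circles and no infinite decimal expansions ever enter.

```python
from fractions import Fraction

def spread(u, v):
    """Wildberger's 'spread' between two lines with direction vectors
    u and v: s = 1 - (u.v)^2 / (|u|^2 |v|^2). Rational inputs give an
    exactly rational output."""
    dot = u[0] * v[0] + u[1] * v[1]
    return 1 - Fraction(dot * dot,
                        (u[0]**2 + u[1]**2) * (v[0]**2 + v[1]**2))

# Lines at 45 degrees have spread sin^2(45) = 1/2, exactly:
print(spread((1, 0), (1, 1)))   # 1/2
print(spread((1, 0), (0, 1)))   # 1 (perpendicular lines)
print(spread((1, 0), (2, 0)))   # 0 (parallel lines)
```

The spread plays the role the squared sine of an angle plays in ordinary trigonometry, but it is computed purely from the vectors, with no trigonometric functions and no approximation.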

Doron Zeilberger of Rutgers University in Piscataway, New Jersey, thinks the work has potential. “Everything is made completely rational. It’s a beautiful approach,” he says.

Then again, Zeilberger himself subscribes to a view of infinity so radical that it would have even the pre-Cantor greats of mathematics stirring in their coffins. While Wildberger’s work is concerned with doing away with actual infinity as a real object used in mathematical manipulations, Zeilberger wants to dispose of potential infinity as well. Forget everything you thought you knew about mathematics: there is a largest number. Start at 1 and just keep on counting and eventually you will hit a number you cannot exceed – a kind of speed of light for mathematics.

That raises a host of questions. How big is the biggest number? “It’s so big you could never reach it,” says Zeilberger. “We don’t know what it is so we have to give it a name, a symbol. I call it N_{0}.” What happens if you add 1 to it? Zeilberger’s answer comes by analogy to a computer processor. Every computer has a largest integer number that it can handle: exceed it, and you will either get an “overflow error” or the processor will reset the number to zero. Zeilberger finds the second option more elegant. Enough of the number line, stretching infinitely far in both directions. “We can redo mathematics postulating that there is a biggest number and make it circular,” he says.
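Zeilberger’s processor analogy can be made concrete with a toy model. The 64-bit width below is purely illustrative (his N_{0} is deliberately left symbolic): arithmetic is done modulo the biggest number plus one, so the number line bends into a circle.

```python
# A toy 'circular' number system with a largest element, mimicking how
# an unsigned 64-bit register wraps on overflow. The width is an
# illustrative stand-in for Zeilberger's symbolic biggest number.
BIGGEST = 2**64 - 1

def circ_add(a, b):
    """Addition modulo BIGGEST + 1: one past the biggest number is zero."""
    return (a + b) % (BIGGEST + 1)

print(circ_add(BIGGEST, 1))   # 0 -- wraps around rather than overflowing
print(circ_add(BIGGEST, 2))   # 1
print(circ_add(5, 7))         # 12 -- ordinary sums are unaffected
```

Far from exotic, this is exactly the arithmetic every CPU performs on fixed-width integers; Zeilberger’s proposal is to take it as fundamental rather than as an engineering compromise.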

Hugh Woodin, a set theorist at the University of California, Berkeley, is sceptical. “He could be correct, of course. But to me the view is a limiting view. Why take it unless one has strong evidence that it is correct?” For him, the success of set theory with all its infinities is reason enough to defend the status quo.

So far, finitist mathematics has received most attention from computer scientists and robotics researchers, who work with finite forms of mathematics as a matter of course. Finite computer processors cannot actually deal with real numbers in their full infinite glory. They approximate them using floating-point arithmetic – a form of scientific notation that allows the computer to drop digits from a real number, and so save on memory without losing track of its overall scale.

The idea that our finite universe might work similarly has a history. Konrad Zuse, a German engineer and one of the pioneers of floating-point arithmetic, built one of the world’s first programmable computers in his parents’ living room in 1938. Seeing that his own machine could solve differential equations (which ordinarily use infinitely small steps to calculate the evolution of a physical system) without recourse to the infinite, he was persuaded that continuous mathematics was just an approximation of a discrete and finite reality. In 1969, Zuse wrote a book called *Calculating Space* in which he argued that the universe itself is a digital computer – one with no room for infinity.
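The trick Zuse exploited is the one numerical analysts still use: replace the infinitesimal step of calculus with a small but finite one. A sketch using Euler’s method, the simplest such scheme, on the equation dy/dt = -y, whose exact solution at t = 1 is 1/e:

```python
import math

def euler(f, y0, t_end, steps):
    """Euler's method: advance y by finite steps dt in place of the
    infinitely small increments of the continuum."""
    dt = t_end / steps
    y = y0
    for _ in range(steps):
        y += dt * f(y)
    return y

# dy/dt = -y with y(0) = 1; the exact answer at t = 1 is exp(-1).
approx = euler(lambda y: -y, 1.0, 1.0, 10_000)
print(abs(approx - math.exp(-1.0)))   # a tiny discretisation error
```

Shrinking the step improves the answer, but the scheme never needs the step to be infinitely small: a finite machine tracks the continuum as closely as any measurement demands.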

Tegmark for his part is intrigued by the fact that the calculations and simulations that physicists use to check a theory against the hard facts of the world can all be done on a finite computer. “That already shows that we don’t need the infinite for anything we’re doing,” he says. “There’s absolutely no evidence whatsoever that nature is doing it any differently, that nature needs to process an infinite amount of information.”

Seth Lloyd, a physicist and quantum information expert also at MIT, counsels caution with such analogies between the cosmos and an ordinary, finite computer. “We have no evidence that the universe behaves as if it were a classical computer,” he says. “And plenty of evidence that it behaves like a quantum computer.”

At first glance, that would seem to be no problem for those wishing to banish infinity. Quantum physics was born when, at the turn of the 20th century, physicist Max Planck showed how to deal with another nonsensical infinity. Classical theory predicted that the amount of energy emitted by a perfectly absorbing and radiating body should be infinite, which clearly was not the case. Planck solved the problem by suggesting that energy comes not as an infinitely divisible continuum, but in discrete chunks – quanta.

The difficulties start with Schrödinger’s cat. When no one is watching, the famous quantum feline can be both dead and alive at the same time: it hovers in a “superposition” of multiple, mutually exclusive states that blend together continuously. Mathematically, this continuum can only be depicted using infinities. The same is true of a quantum computer’s “qubits”, which can perform vast numbers of mutually exclusive calculations simultaneously, just as long as no one is demanding an output. “If you really wanted to specify the full state of one qubit, it would require an infinite amount of information,” says Lloyd.
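Lloyd’s point can be seen in miniature. A single qubit’s state a|0⟩ + b|1⟩ is fixed by continuous amplitudes, so specifying it exactly would take unboundedly many digits, yet measuring it yields just one classical bit. A sketch (the particular angle below is illustrative):

```python
import math
import random

# A qubit state a|0> + b|1> is parameterised by a continuous angle:
# theta could be any real number, so the exact state carries
# unboundedly much information. Measurement returns a single bit.
theta = 1.234567890123456               # one point on a continuum of states
a = math.cos(theta / 2)                 # amplitude of |0>
b = math.sin(theta / 2)                 # amplitude of |1>
assert abs(a**2 + b**2 - 1.0) < 1e-12   # amplitudes are normalised

outcome = 0 if random.random() < a**2 else 1   # Born rule: P(0) = |a|^2
print(outcome)                          # a single classical bit: 0 or 1
```

The continuum lives in the amplitudes, not in what any experiment can extract from them, which is exactly the tension between quantum theory and a strictly finite universe.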

### Down the rabbit hole

Tegmark is unfazed. “When quantum mechanics was discovered, we realised that classical mechanics was just an approximation,” he says. “I think another revolution is going to take place, and we’ll see that continuous quantum mechanics is itself just an approximation to some deeper theory, which is totally finite.”

Lloyd counters that we ought to work with what we have. “My feeling is, why don’t we just accept what quantum mechanics is telling us, rather than imposing our prejudices on the universe? That never works,” he says.

For physicists looking for a way forward, however, it is easy to see the appeal. If only we could banish infinity from the underlying mathematics, perhaps we might see the way to unify physics. For Tegmark’s particular bugbear, the measure problem, we would be freed from the need to find an arbitrary probability measure to restore cosmology’s predictive power. In a finite multiverse, we could just count the possibilities. If there really were a largest number then we would only have to count so high.

Woodin would rather separate the two issues of physical and mathematical infinities. “It may well be that physics is completely finite,” he says. “But in that case, our conception of set theory represents the discovery of a truth that is somehow far beyond the physical universe.”

Tegmark, on the other hand, thinks the mathematical and physical are inextricably linked – the further we plunge down the rabbit hole of physics to deeper levels of reality, the more things seem to be made purely of mathematics. For him, the fatal error message contained in the measure problem is saying that if we want to rid the physical universe of infinity, we must reboot mathematics, too. “It’s telling us that things aren’t just a little wrong, but terribly wrong.”

*(Amanda Gefter is a science writer based in Cambridge, Massachusetts. Her book Trespassing on Einstein’s Lawn will be published by Random House in January 2014)*
