Are you real? What about me? We might live in a computer program, but it may not matter.
These used to be questions that only philosophers worried about. Scientists just got on with figuring out how the world is, and why. But some of the current best guesses about how the world is seem to leave the question hanging over science too.
Several physicists, cosmologists and technologists are now happy to entertain the idea that we are all living inside a gigantic computer simulation, experiencing a Matrix-style virtual world that we mistakenly think is real.
Our instincts rebel, of course. It all feels too real to be a simulation. The weight of the cup in my hand, the rich aroma of the coffee it contains, the sounds all around me – how can such richness of experience be faked?
But then consider the extraordinary progress in computer and information technologies over the past few decades. Computers have given us games of uncanny realism – with autonomous characters responding to our choices – as well as virtual-reality simulators of tremendous persuasive power.
It is enough to make you paranoid. The Matrix formulated the narrative with unprecedented clarity. In that story, humans are locked by a malignant power into a virtual world that they accept unquestioningly as “real”. But the science-fiction nightmare of being trapped in a universe manufactured within our minds can be traced back further, for instance to David Cronenberg’s Videodrome (1983) and Terry Gilliam’s Brazil (1985).
Over all these dystopian visions, there loom two questions. How would we know? And would it matter anyway? The idea that we live in a simulation has some high-profile advocates.
In June 2016, technology entrepreneur Elon Musk asserted that the odds are “a billion to one” against us living in “base reality”. Similarly, Google’s machine-intelligence guru Ray Kurzweil has suggested that “maybe our whole universe is a science experiment of some junior high-school student in another universe”.
What’s more, some physicists are willing to entertain the possibility. In April 2016, several of them debated the issue at the American Museum of Natural History in New York, US.
None of these people are proposing that we are physical beings held in some gloopy vat and wired up to believe in the world around us, as in The Matrix.
Instead, there are at least two other ways that the Universe around us might not be the real one. Cosmologist Alan Guth of the Massachusetts Institute of Technology in the US has suggested that our entire Universe might be real yet still a kind of lab experiment. The idea is that our Universe was created by some super-intelligence, much as biologists breed colonies of micro-organisms.
There is nothing in principle that rules out the possibility of manufacturing a universe in an artificial Big Bang, filled with real matter and energy, says Guth.
Nor would it destroy the universe in which it was made. The new universe would create its own bubble of space-time, separate from that in which it was hatched. This bubble would quickly pinch off from the parent universe and lose contact with it.
This scenario does not then really change anything. Our Universe might have been born in some super-beings’ equivalent of a test tube, but it is just as physically “real” as if it had been born “naturally”.
However, there is a second scenario. It is this one that has garnered all the attention, because it seems to undermine our very concept of reality.
Musk and other like-minded folk are suggesting that we are entirely simulated beings. We could be nothing more than strings of information manipulated in some gigantic computer, like the characters in a video game.
Even our brains are simulated, and are responding to simulated sensory inputs. In this view, there is no Matrix to “escape from”. This is where we live, and is our only chance of “living” at all.
But why believe in such a baroque possibility? The argument is quite simple: we already make simulations, and with better technology it should be possible to create the ultimate one, with conscious agents that experience it as totally lifelike.
We carry out computer simulations not just in games but in research. Scientists try to simulate aspects of the world at levels ranging from the subatomic to entire societies or galaxies, even whole universes.
For example, computer simulations of animals may tell us how they develop complex behaviours like flocking and swarming. Other simulations help us understand how planets, stars and galaxies form.
We can also simulate human societies using rather simple “agents” that make choices according to certain rules. These give us insights into how cooperation appears, how cities evolve, how road traffic and economies function, and much else.
These simulations are getting ever more complex as computer power expands. Already, some simulations of human behaviour try to build in rough descriptions of cognition. Researchers envisage a time, not far away, when these agents’ decision-making will not come from simple “if…then…” rules. Instead, they will give the agents simplified models of the brain and see how they respond.
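The kind of "if…then…" agent described above can be sketched in a few lines. Everything here is a toy invented for illustration, not taken from any cited study: twenty agents sit on a ring, and each follows a single rule.

```python
# A toy agent-based model (illustrative only, not from any cited study).
# Twenty agents sit on a ring. Each follows one "if...then..." rule:
# keep cooperating, and start cooperating if either neighbour does.

def step(states):
    n = len(states)
    return ["C" if states[i] == "C"
                 or states[(i - 1) % n] == "C"
                 or states[(i + 1) % n] == "C" else "D"
            for i in range(n)]

agents = ["D"] * 19 + ["C"]   # a single cooperator among twenty agents
for _ in range(10):
    agents = step(agents)

print("".join(agents))        # prints "CCCCCCCCCCCCCCCCCCCC"
```

Even this trivial rule produces a collective outcome, cooperation sweeping the whole ring, that no individual rule mentions. That emergence of group behaviour from simple local rules is the basic appeal of agent-based simulation.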
Who is to say that before long we will not be able to create computational agents – virtual beings – that show signs of consciousness? Advances in understanding and mapping the brain, as well as the vast computational resources promised by quantum computing, make this more likely by the day.
If we ever reach that stage, we will be running huge numbers of simulations. They will vastly outnumber the one “real” world around us. Is it not likely, then, that some other intelligence elsewhere in the Universe has already reached that point? If so, it makes sense for any conscious beings like ourselves to assume that we are actually in such a simulation, and not in the one world from which the virtual realities are run. The probability is just so much greater.
Philosopher Nick Bostrom of the University of Oxford in the UK has broken down this scenario into three possibilities. As he puts it, either:
(1) Intelligent civilisations never get to the stage where they can make such simulations, perhaps because they wipe themselves out first; or
(2) They get to that point, but then choose for some reason not to conduct such simulations; or
(3) We are overwhelmingly likely to be in such a simulation.
The question is which of these options seems most probable.
Astrophysicist and Nobel laureate George Smoot has argued that there is no compelling reason to believe (1) or (2).
Sure, humanity is causing itself plenty of problems at the moment, what with climate change, nuclear weapons and a looming mass extinction. But these problems need not be terminal.
What’s more, there is nothing to suggest that truly detailed simulations, in which the agents experience themselves as real and free, are impossible in principle. Smoot adds that, given how widespread we now know other planets to be (with another Earth-like one right on our cosmic doorstep), it would be the height of arrogance to assume that we are the most advanced intelligence in the entire Universe.
What about option (2)? Conceivably, we might desist from making such simulations for ethical reasons. Perhaps it would seem improper to create simulated beings that believe they exist and have autonomy.
But that too seems unlikely, Smoot says. After all, one key reason we conduct simulations today is to find out more about the real world. This can help us make the world better and save lives. So there are sound ethical reasons for doing it.
That seems to leave us with option (3): we are probably in a simulation. But this is all just supposition. Could we find any evidence?
Many researchers believe that depends on how good the simulation is. The best way would be to search for flaws in the program, just like the glitches that betray the artificial nature of the “ordinary world” in The Matrix. For instance, we might discover inconsistencies in the laws of physics.
Alternatively, the late artificial-intelligence maven Marvin Minsky has suggested that there might be giveaway errors due to “rounding off” approximations in the computation. For example, whenever an event has several possible outcomes, their probabilities should add up to 1. If we found that they did not, that would suggest something was amiss.
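Minsky's "rounding off" worry has a homely analogue in ordinary floating-point arithmetic, which any simulation built on finite-precision numbers inherits. Here is a minimal illustration:

```python
# Finite-precision arithmetic as an analogue of Minsky's point:
# probabilities that should sum exactly to 1 can miss by a rounding
# error when stored with limited precision.

probs = [0.1] * 10           # ten outcomes, each with probability 0.1
total = sum(probs)

print(total)                 # 0.9999999999999999, not exactly 1.0
print(total == 1.0)          # False: the "rounding off" giveaway
```

The discrepancy is tiny, but it is exactly the kind of systematic, detectable error a computation with finite resources cannot avoid.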
In May 2016, astronomers from the Kepler spacecraft team announced the discovery of 1,284 new planets, all orbiting stars outside our solar system. The total number of such “exoplanets” confirmed via Kepler and other methods now stands at more than 3,000.
This represents a revolution in planetary knowledge. A decade or so ago the discovery of even a single new exoplanet was big news. Not anymore. Improvements in astronomical observation technology have moved us from retail to wholesale planet discovery. We now know, for example, that every star in the sky likely hosts at least one planet.
But planets are only the beginning of the story. What everyone wants to know is whether any of these worlds has aliens living on it. Does our newfound knowledge of planets bring us any closer to answering that question?
A little bit, actually, yes. In a paper published in the May issue of the journal Astrobiology, the astronomer Woodruff Sullivan and I show that while we do not know if any advanced extraterrestrial civilizations currently exist in our galaxy, we now have enough information to conclude that they almost certainly existed at some point in cosmic history.
Among scientists, the probability of the existence of an alien society with which we might make contact is discussed in terms of something called the Drake equation. In 1961, the National Academy of Sciences asked the astronomer Frank Drake to host a scientific meeting on the possibilities of “interstellar communication.” Since the odds of contact with alien life depended on how many advanced extraterrestrial civilizations existed in the galaxy, Drake identified seven factors on which that number would depend, and incorporated them into an equation.
The first factor was the number of stars born each year. The second was the fraction of stars that had planets. After that came the number of planets per star that traveled in orbits in the right locations for life to form (assuming life requires liquid water). The next factor was the fraction of such planets where life actually got started. Then came factors for the fraction of life-bearing planets on which intelligence and advanced civilizations (meaning radio signal-emitting) evolved. The final factor was the average lifetime of a technological civilization.
Drake’s equation was not like Einstein’s E = mc². It was not a statement of a universal law. It was a mechanism for fostering organized discussion, a way of understanding what we needed to know to answer the question about alien civilizations. In 1961, only the first factor — the number of stars born each year — was understood. And that level of ignorance remained until very recently.
That’s why discussions of extraterrestrial civilizations, no matter how learned, have historically boiled down to mere expressions of hope or pessimism. What, for example, is the fraction of planets that form life? Optimists might marshal sophisticated molecular biological models to argue for a large fraction. Pessimists then cite their own scientific data to argue for a fraction closer to 0. But with only one example of a life-bearing planet (ours), it’s hard to know who is right.
Or consider the average lifetime of a civilization. Humans have been using radio technology for only about 100 years. How much longer will our civilization last? A thousand more years? A hundred thousand more? Ten million more? If the average lifetime for a civilization is short, the galaxy is likely to be unpopulated most of the time. Once again, however, with only one example to draw from, it’s back to a battle between pessimists and optimists.
But our new planetary knowledge has removed some of the uncertainty from this debate. Three of the seven terms in Drake’s equation are now known. We know the number of stars born each year. We know that the percentage of stars hosting planets is about 100. And we also know that about 20 to 25 percent of those planets are in the right place for life to form. This puts us in a position, for the first time, to say something definitive about extraterrestrial civilizations — if we ask the right question.
In our recent paper, Professor Sullivan and I did this by shifting the focus of Drake’s equation. Instead of asking how many civilizations currently exist, we asked what the probability is that ours is the only technological civilization that has ever appeared. By asking this question, we could bypass the factor about the average lifetime of a civilization. This left us with only three unknown factors, which we combined into one “biotechnical” probability: the likelihood of the creation of life, intelligent life and technological capacity.
You might assume this probability is low, and thus the chances remain small that another technological civilization arose. But what our calculation revealed is that even if this probability is assumed to be extremely low, the odds that we are not the first technological civilization are actually high. Specifically, unless the probability for evolving a civilization on a habitable-zone planet is less than one in 10 billion trillion, then we are not the first.
To give some context for that figure: In previous discussions of the Drake equation, a probability for civilizations to form of one in 10 billion per planet was considered highly pessimistic. According to our finding, even if you grant that level of pessimism, a trillion civilizations still would have appeared over the course of cosmic history.
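The arithmetic behind that figure fits on the back of an envelope. The stellar count below is a rough standard estimate for the observable universe; the fractions are the ones quoted above.

```python
# Back-of-envelope check of the trillion-civilisations claim.
# The star count is a rough standard estimate; the fractions are
# the article's figures.

stars = 2e22                      # stars in the observable universe (approx.)
f_planets = 1.0                   # essentially every star hosts planets
f_habitable = 0.2                 # ~20% have a habitable-zone planet
habitable_planets = stars * f_planets * f_habitable

p_pessimistic = 1e-10             # "one in 10 billion" per planet
expected = habitable_planets * p_pessimistic
print(f"{expected:.1e}")          # a few hundred billion civilisations:
                                  # the article's "a trillion" in order
                                  # of magnitude

# The threshold runs the other way: we would likely be the first only if
# the per-planet probability fell below roughly one over the number of
# habitable planets.
print(f"{1 / habitable_planets:.1e}")   # ~2.5e-22, roughly the article's
                                        # "one in 10 billion trillion"
```

The point of the exercise is that the sheer number of habitable planets is now so large that only an almost absurdly small per-planet probability could make us unique.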
In other words, given what we now know about the number and orbital positions of the galaxy’s planets, the degree of pessimism required to doubt the existence, at some point in time, of an advanced extraterrestrial civilization borders on the irrational.
In science an important step forward can be finding a question that can be answered with the data at hand. Our paper did just this. As for the big question — whether any other civilizations currently exist — we may have to wait a long while for relevant data. But we should not underestimate how far we have come in a short time.
The theory of quantum mechanics is one of the most successful in all of science. It explains the behaviour of very small objects, such as atoms and their constituent fundamental particles. It can predict all kinds of phenomena, from the shapes of molecules to the way light and matter interact, with phenomenal accuracy.
Quantum mechanics treats particles as if they are waves, and describes them with a mathematical expression called a wave function.
Perhaps the strangest feature of a wave function is that it allows a quantum particle to exist in several states at once. This is called a superposition.
But superpositions are generally destroyed as soon as we measure the object in any way. An observation “forces” the object to “choose” one particular state.
This switch from a superposition to a single state, caused by measurement, is called “collapse of the wave function”. The trouble is, it is not really described by quantum mechanics, so no one knows how or why it happens.
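In the standard textbook notation, a two-outcome superposition and the measurement rule look like this (a sketch of the usual formalism, not tied to any one experiment):

```latex
% A two-state superposition: the particle is in both states at once,
% weighted by complex amplitudes alpha and beta.
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1.
% A measurement yields outcome 0 with probability |alpha|^2 and
% outcome 1 with probability |beta|^2; afterwards the state is simply
% |0> or |1>. That abrupt replacement of the superposition by a single
% state is the "collapse" described in the text, and nothing in the
% equation above says how or why it happens.
```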
In his 1957 doctoral thesis, the American physicist Hugh Everett suggested that we might stop fretting about the awkward nature of wave function collapse, and just do away with it.
Everett suggested that objects do not switch from multiple states to a single state when they are measured or observed. Instead, all the possibilities encoded in the wave function are equally real. When we make a measurement we only see one of those realities, but the others also exist.
This is known as the “many worlds interpretation” of quantum mechanics. Everett was not very specific about where these other states actually exist. But in the 1970s, the physicist Bryce DeWitt argued that each alternative outcome must exist in a parallel reality: another world.
Suppose you conduct an experiment in which you measure the path of an electron. In this world it goes one way, but in another world it goes another way.
That requires a parallel apparatus for the electron to pass through. It also requires a parallel you to measure it. In fact you have to build an entire parallel universe around that one electron, identical in all respects except where the electron went.
In short, to avoid wave function collapse, you must make another universe.
This picture really gets extravagant when you appreciate what a measurement is. In DeWitt’s view, any interaction between two quantum entities, say a photon of light bouncing off an atom, can produce alternative outcomes and therefore parallel universes.
As DeWitt put it, “every quantum transition taking place on every star, in every galaxy, in every remote corner of the Universe is splitting our local world on earth into myriads of copies.”
Not everyone sees Everett’s many-worlds interpretation this way. Some say it is largely a mathematical convenience, and that we cannot say anything meaningful about the contents of those alternative universes.
But others take seriously the idea that there are countless other “yous”, created every time a quantum measurement is made. The quantum multiverse must be in some sense real, they say, because quantum theory demands it and quantum theory works. You either buy that argument or you do not. But if you accept it, you must also accept something rather unsettling.
The other kinds of parallel universes, such as those created by eternal inflation, are truly “other worlds”. They exist somewhere else in space and time, or in other dimensions. They might contain exact copies of you, but those copies are separate, like a body double living on another continent.
In contrast, the other universes of the many-worlds interpretation do not exist in other dimensions or other regions of space. Instead, they are right here, superimposed on our Universe but invisible and inaccessible. The other selves they contain really are “you”.
In fact, there is no meaningful “you” at all. “You” are splitting into distinct beings an absurd number of times every second: just think of all the quantum events that happen as a single electrical signal travels along a single neuron in your brain. “You” vanish into the crowd.
In other words, an idea that started out as a mathematical convenience ends up implying that there is no such thing as individuality.
When Albert Einstein’s theory of general relativity began to come to public attention in the 1920s, many people speculated about the “fourth dimension” that Einstein had allegedly invoked. What might be in there? A hidden universe, maybe?
This was nonsense. Einstein was not proposing a new dimension. What he was saying was that time is a dimension, similar to the three dimensions of space. All four are woven into a single fabric called space-time, which matter distorts to produce gravity. Even so, other physicists were already starting to speculate about genuinely new dimensions in space.
The first intimation of hidden dimensions began with the work of the theoretical physicist Theodor Kaluza. In a 1921 paper Kaluza showed that, by adding an extra dimension to the equations of Einstein’s theory of general relativity, he could obtain an extra equation that seemed to predict the existence of light.
That looked promising. But where, then, was this extra dimension? The Swedish physicist Oskar Klein offered an answer in 1926. Perhaps the fifth dimension was curled up into an unimaginably small distance: about a billion-trillion-trillionth of a centimetre.
The idea of a curled-up dimension may seem strange, but it is actually a familiar phenomenon. A garden hose is a three-dimensional object, but from far enough away it looks like a one-dimensional line, because the other two dimensions are too small to make out. Similarly, Klein’s extra dimension is far too small for us to notice it.
Physicists have since taken Kaluza and Klein’s ideas much further in string theory. This seeks to explain fundamental particles as the vibrations of even smaller entities called strings.
When string theory was developed in the 1980s, it turned out that it could only work if there were extra dimensions. In the modern version of string theory, known as M-theory, there are up to seven hidden dimensions.
What’s more, these dimensions need not be compact after all. They can be extended regions called branes (short for “membranes”), which may be multi-dimensional.
A brane might be a perfectly adequate hiding place for an entire universe. M-theory postulates a multiverse of branes of various dimensions, coexisting rather like a stack of papers.
If this is true, there should be a new class of particles called Kaluza-Klein particles. In theory we could make them, perhaps in a particle accelerator like the Large Hadron Collider. They would have distinctive signatures, because some of their momentum is carried in the hidden dimensions.
These brane worlds would remain quite distinct and separate from each other, because particles and forces such as electromagnetism are confined to their own brane and cannot pass between them. But if branes collide, the results could be monumental. Conceivably, such a collision could have triggered our own Big Bang.
It has also been proposed that gravity, uniquely among the fundamental forces, might “leak” between branes. This leakage could explain why gravity is so weak compared to the other fundamental forces.
As Lisa Randall of Harvard University puts it: “if gravity is spread out over large extra dimensions, its force would be diluted.”
In 1999, Randall and her colleague Raman Sundrum suggested that the branes do not just carry gravity, they produce it by curving space. In effect this means that a brane “concentrates” gravity, so that it looks weak in a second brane nearby.
This could also explain how we could live on a brane alongside an infinite extra dimension without noticing it. If their idea is true, there is an awful lot of space out there for other universes.
Another kind of multiverse avoids what some see as the slipperiness of this reasoning, offering a solution to the fine-tuning problem without invoking the anthropic principle.
It was formulated by Lee Smolin of the Perimeter Institute for Theoretical Physics in Waterloo, Canada. In 1992 he proposed that universes might reproduce and evolve rather like living things do.
On Earth, natural selection favours the emergence of “useful” traits such as fast running or opposable thumbs. In the multiverse, Smolin argues, there might be some pressure that favours universes like ours. He calls this “cosmological natural selection”.
Smolin’s idea is that a “mother” universe can give birth to “baby” universes, which form inside it. The mother universe can do this if it contains black holes.
A black hole forms when a huge star collapses under the pull of its own gravity, crushing all the atoms together until they reach infinite density.
In the 1960s, Stephen Hawking and Roger Penrose pointed out that this collapse is like a mini-Big Bang in reverse. This suggested to Smolin that a black hole could become a Big Bang, spawning an entire new universe within itself.
If that is so, then the new universe might have slightly different physical properties from the one that made the black hole. This is like the random genetic mutations that mean baby organisms are different from their parents.
If a baby universe has physical laws that permit the formation of atoms, stars and life, it will also inevitably contain black holes. That will mean it can have more baby universes of its own. Over time, universes like this will become more common than those without black holes, which cannot reproduce.
It is a neat idea, because our Universe then does not have to be the product of pure chance. If a fine-tuned universe arose at random, surrounded by many other universes that were not fine-tuned, cosmic natural selection would mean that fine-tuned universes subsequently became the norm.
The details of the idea are a little woolly, but Smolin points out that it has one big advantage: we can test it.
For example, if Smolin is right we should expect our Universe to be especially suited to making black holes. This is a rather more demanding criterion than simply saying it should support the existence of atoms.
But so far, there is no evidence that this is the case – let alone proof that a black hole really can spawn an entirely new universe.
Some physicists have long been searching for a “theory of everything”: a set of basic laws, or perhaps just a single equation, from which all the other principles of physics can be derived. But they have found there are more alternatives to choose from than there are fundamental particles in the known universe.
Many physicists who delve into these waters believe that an idea called string theory is the best candidate for a “final theory”. But the latest version offers a huge number of distinct solutions: 1 followed by 500 zeros. Each solution yields its own set of physical laws, and we have no obvious reason to prefer one over any other.
The inflationary multiverse relieves us of the need to choose at all. If parallel universes have been popping up in an inflating false vacuum for billions of years, each could have different physical laws, determined by one of these many solutions to string theory.
If that is true, it could help us explain a strange property of our own Universe. The fundamental constants of the laws of physics seem bizarrely fine-tuned to the values needed for life to exist.
For example, if the strength of the electromagnetic force were just a little different, atoms would not be stable. Just a 4% change would prevent all nuclear fusion in stars, the process that makes the carbon atoms our bodies are largely made of.
Similarly, there is a delicate balance between gravity, which pulls matter together, and so-called dark energy, which does the opposite and makes the Universe expand ever faster. The balance is just right to let stars and galaxies form without the Universe either collapsing back on itself or flying apart before matter can clump.
In this and several other ways, the Universe seems fine-tuned to host us. This has made some people suspect the hand of God.
Yet an inflationary multiverse, in which all conceivable physical laws operate somewhere, offers an alternative explanation.
In every universe set up in this life-friendly way, the argument goes, intelligent beings will be scratching their heads trying to understand their luck. In the far more numerous universes that are set up differently, there is no one to ask the question.
This is an example of the “anthropic principle”, which says that things have to be the way we find them: if they were not, we would not be here and the question would never arise.
For many physicists and philosophers, this argument is a cheat: a way to evade rather than explain the fine-tuning problem.
How, they ask, can we test these assertions? Surely it is defeatist to accept that there is no reason why the laws of nature are what they are, and simply say that in other universes they are different?
The trouble is, unless you have some other explanation for fine-tuning, someone will assert that God must have set things up this way. The astrophysicist Bernard Carr has put it bluntly: “If you don’t want God, you’d better have a multiverse”.
The second multiverse theory arises from our best ideas about how our own Universe began.
According to the predominant view of the Big Bang, the Universe began as an infinitesimally tiny point and then expanded incredibly fast in a super-heated fireball. A fraction of a second after this expansion began, it may have fleetingly accelerated at a truly enormous rate, far faster than the speed of light. This burst is called “inflation”.
Inflationary theory explains why the Universe is relatively uniform everywhere we look. Inflation blew up the fireball to a cosmic scale before it had a chance to get too clumpy.
However, that primordial state would have been ruffled by tiny chance variations, which also got blown up by inflation. These fluctuations are now preserved in the cosmic microwave background radiation, the faint afterglow of the Big Bang. This radiation pervades the Universe, but it is not perfectly uniform.
Several satellite-based telescopes have mapped out these variations in fine detail, and compared them to those predicted by inflationary theory. The match is almost unbelievably good, suggesting that inflation really did happen.
This suggests that we can understand how the Big Bang happened – in which case we can reasonably ask if it happened more than once.
The current view is that the Big Bang happened when a patch of ordinary space, containing no matter but filled with energy, appeared within a different kind of space called the “false vacuum”. It then grew like an expanding bubble.
But according to this theory, the false vacuum should also experience a kind of inflation, causing it to expand at fantastic speed. Meanwhile, other bubble universes of “true vacuum” can appear within it – and not just, like our Universe, 13.8 billion years ago, but constantly.
This scenario is called “eternal inflation”. It suggests there are many, perhaps infinitely many, universes appearing and growing all the time. But we can never reach them, even if we travel at the speed of light forever, because they are receding too fast for us ever to catch up.
The UK Astronomer Royal Martin Rees suggests that the inflationary multiverse theory represents a “fourth Copernican revolution”: the fourth time that we have been forced to downgrade our status in the heavens. After Copernicus suggested Earth was just one planet among others, we realized that our Sun is just one star in our galaxy, and that other stars might have planets. Then we discovered that our galaxy is just one among countless more in an expanding Universe. And now perhaps our Universe is simply one of a crowd.
We do not yet know for sure if inflationary theory is true. However, if eternal inflation does create a multiverse from an endless series of Big Bangs, it could help to resolve one of the biggest problems in modern physics.