Every time a bit of information is erased, we know it doesn’t disappear. It goes out into the environment. It may be horribly scrambled and confused, but it never really gets lost. It’s just converted into a different form.
A concept developed for computer science could have a key role in fundamental physics — and point the way to a new understanding of space and time.
When physicist Leonard Susskind gives talks these days, he often wears a black T-shirt proclaiming “I ♥ Complexity”. In place of the heart is a Mandelbrot set, a fractal pattern widely recognized as a symbol for complexity at its most beautiful.
That pretty much sums up his message. The 74-year-old Susskind, a theorist at Stanford University in California, has long been a leader in efforts to unify quantum mechanics with the general theory of relativity — Albert Einstein’s framework for gravity. The quest for the elusive unified theory has led him to advocate counter-intuitive ideas, such as superstring theory or the concept that our three-dimensional Universe is actually a two-dimensional hologram. But now he is part of a small group of researchers arguing for a new and equally odd idea: that the key to this mysterious theory of everything is to be found in the branch of computer science known as computational complexity.
Before Newton there was no concept of universal laws that applied both to astronomical objects such as the planets and to earthly things such as falling rain and flying arrows. Newton's laws of motion were the first example of such universal laws. But even for the mighty Sir Isaac Newton it was too great a leap to suppose that the same laws led to the creation of human beings: he devoted more time to theology than to physics.
I am no historian, but I will venture an opinion: modern cosmology really began with Darwin and Wallace. Unlike anyone who had tried before them, they offered explanations of our existence that completely dispensed with supernatural agents.
Darwin and Wallace set a standard not only for the life sciences but for cosmology. The laws governing the birth and evolution of the Universe must be the same laws that govern falling stones, the chemistry and nuclear physics of the elements, and the physics of elementary particles. They freed us from the supernatural by showing that complex and even intelligent life could arise from chance, competition and natural causes. Cosmologists would have to do the same: cosmology had to rest on impersonal rules that are the same throughout the universe, and whose origin has nothing to do with our own existence.
The language of physics is mathematics, and it cannot be done honestly without mathematics. That makes it inaccessible. The language of literature is English or Chinese or whatever, and that makes it accessible. And literature is about the human condition. Physics is about the nonhuman condition. It’s not a taste that all human beings have.
People are also social creatures, and literature fits in with that. Physics is perceived as a lonesome, nerdy kind of enterprise that has very little to do with human feelings and the things that excite people day-to-day about each other. Yet physicists in their own working environment are very social creatures.
There is a philosophy that says that if something is unobservable – unobservable in principle – it is not part of science. If there is no way to falsify or confirm a hypothesis, it belongs to the realm of metaphysical speculation, together with astrology and spiritualism. By that standard, most of the universe has no scientific reality – it’s just a figment of our imaginations.
What is your favorite, deep, elegant, or beautiful explanation?
Leonard Susskind — Stanford professor of theoretical physics, one of the greatest minds of the second half of the 20th century, close friend of my personal hero Richard Feynman, author and incredible storyteller — answers the question "What is your favorite, deep, elegant or beautiful explanation?", the question of the year on edge.org.
“That’s a tough question for a theoretical physicist; theoretical physics is all about deep, elegant, beautiful explanations; and there are just so many to choose from.
Personally, my favorites are explanations that get a lot for a little. In physics that means a simple equation or a very general principle. I have to admit, though, that no equation or principle appeals to me more than Darwinian evolution, with the selfish-gene mechanism thrown in. To me it has what the best physics explanations have: a kind of mathematical inevitability. But there are many people who can explain evolution much better than I, so I will stick to what I know.
The guiding star for me, as a physicist, has always been Boltzmann’s explanation of the second law of thermodynamics: the law that says that entropy never decreases. To the physicists of the late 19th century this was a very serious paradox. Nature is full of irreversible phenomena: things that easily happen but could not possibly happen in reverse order. However, the fundamental laws of physics are completely reversible: any solution of Newton’s equations can be run backward and it is still a solution. So if entropy can increase, the laws of physics say it must be able to decrease. But experience says otherwise. For example, if you watch a movie of a nuclear explosion in reverse, you know very well that it’s fake. As a rule, things go one way and not the other. Entropy increases.
What Boltzmann realized is that the second law—entropy never decreases—is not a law in the same sense as Newton’s law of gravity, or Faraday’s law of induction. It’s a probabilistic law that has the same status as the following obvious claim: if you flip a coin a million times you will not get a million heads. It simply won’t happen. But is it possible? Yes, it is; it violates no law of physics. Is it likely? Not at all. Boltzmann’s formulation of the second law was very similar. Instead of saying entropy does not decrease, he said entropy probably doesn’t decrease. But if you wait around long enough in a closed environment, you will eventually see entropy decrease: by accident, particles and dust will come together and form a perfectly assembled bomb. How long? According to Boltzmann’s principles the answer is the exponential of the entropy created when the bomb explodes. That is a very long time, a lot longer than the time it takes to flip a million heads in a row.
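The scale of that improbability is easy to make concrete. A rough back-of-the-envelope sketch (plain Python, purely illustrative numbers):

```python
import math

def prob_all_heads(n_flips):
    # Probability that n fair tosses all land heads: (1/2)**n
    return 0.5 ** n_flips

# Ten heads in a row: rare, but you could see it in an afternoon.
print(prob_all_heads(10))          # 0.0009765625

# A million heads: the expected waiting time is roughly
# 2**1_000_000 flips -- a number with this many decimal digits:
digits = int(1_000_000 * math.log10(2)) + 1
print(digits)                      # 301030
```

A waiting time with three hundred thousand digits is the sense in which "possible but never happens" is meant: not forbidden, just absurdly unlikely.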
I’ll give you a simple example to see how it is possible for things to be more probable one way than the other, despite both being possible. Imagine a high hill that comes to a narrow point—a needle point—at the top. Now imagine a bowling ball balanced at the top of the hill. A tiny breeze comes along. The ball rolls off the hill and you catch it at the bottom. Next, run it in reverse: the ball leaves your hand, rolls up the hill, and with infinite finesse, comes to the top—and stops! Is it possible? It is. Is it likely? It is not. You would have to have almost perfect precision to get the ball to the top, let alone to have it stop dead-balanced. The same is true with the bomb. If you could reverse every atom and particle with sufficient accuracy, you could make the explosion products reassemble themselves. But a tiny inaccuracy in the motion of just one single particle, and all you would get is more entropy.
Here’s another example: drop a bit of black ink into a tub of water. The ink spreads out and eventually makes the water grey. Will a tub of grey water ever clear up and produce a small drop of ink? Not impossible, but very unlikely.
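The one-way character of diffusion can be caricatured with a random walk: the "ink particles" spread out on average, and the spreading only grows with time. A toy simulation (illustrative parameters, not a physical model of ink):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def spread(n_particles=1000, n_steps=200):
    # Each "ink particle" does an unbiased random walk from the origin.
    positions = [0] * n_particles
    for _ in range(n_steps):
        positions = [x + random.choice((-1, 1)) for x in positions]
    # Mean squared displacement grows in proportion to n_steps:
    # the drop spreads out, and keeps spreading.
    return sum(x * x for x in positions) / n_particles

early, late = spread(n_steps=50), spread(n_steps=500)
print(early < late)  # True: the cloud widens; it never re-concentrates
```

Nothing in the update rule forbids all the walkers from drifting back to the origin at once; it is simply so improbable that no run will ever show it.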
Boltzmann was the first to understand the statistical foundation of the second law, but he was also the first to understand the inadequacy of his own formulation. Suppose that you came upon a tub that had been filled a zillion years ago, and had not been disturbed since. You notice the odd fact that it contains a somewhat localized cloud of ink. The first thing you might ask is what will happen next. The answer is that the ink will almost certainly spread out more. But by the same token, if you ask what most likely took place a moment before, the answer would be the same: it was probably more spread out a moment ago than it is now. The most likely explanation would be that the ink blob is just a momentary fluctuation.
Actually I don’t think you would come to that conclusion at all. A much more reasonable explanation is that for reasons unknown, the tub started not-so-long-ago with a concentrated drop of ink, which then spread. Understanding why ink and water go one way becomes a problem of “initial conditions”. What set up the concentration of ink in the first place?
The ink and water are an analogy for the question of why entropy increases. It increases because it is most likely that it will increase. But the equations say that it is also most likely that it increases toward the past. To understand why we have this sense of direction, one must ask the same question that Boltzmann did: why was the entropy very small at the beginning? What created the universe in such a special, low-entropy way? That’s a cosmological question that we are still very uncertain about.
I began telling you what my favorite explanation is, and I ended up telling you what my favorite unsolved problem is. I apologize for not following the instructions. But that’s the way of all good explanations. The better they are, the more questions they raise.”
In early 2009, determined to make the most of his first sabbatical from teaching, Mark Van Raamsdonk decided to tackle one of the deepest mysteries in physics: the relationship between quantum mechanics and gravity. After a year of work and consultation with colleagues, he submitted a paper on the topic to the Journal of High Energy Physics.
In April 2010, the journal sent him a rejection — with a referee’s report implying that Van Raamsdonk, a physicist at the University of British Columbia in Vancouver, was a crackpot.
His next submission, to General Relativity and Gravitation, fared little better: the referee’s report was scathing, and the journal’s editor asked for a complete rewrite.
But by then, Van Raamsdonk had entered a shorter version of the paper into a prestigious annual essay contest run by the Gravity Research Foundation in Wellesley, Massachusetts. Not only did he win first prize, but he also got to savour a particularly satisfying irony: the honour included guaranteed publication in General Relativity and Gravitation. The journal published the shorter essay1 in June 2010.
Still, the editors had good reason to be cautious. A successful unification of quantum mechanics and gravity has eluded physicists for nearly a century. Quantum mechanics governs the world of the small — the weird realm in which an atom or particle can be in many places at the same time, and can simultaneously spin both clockwise and anticlockwise. Gravity governs the Universe at large — from the fall of an apple to the motion of planets, stars and galaxies — and is described by Albert Einstein’s general theory of relativity, announced 100 years ago this month. The theory holds that gravity is geometry: particles are deflected when they pass near a massive object not because they feel a force, said Einstein, but because space and time around the object are curved.
Both theories have been abundantly verified through experiment, yet the realities they describe seem utterly incompatible. And from the editors’ standpoint, Van Raamsdonk’s approach to resolving this incompatibility was strange. All that’s needed, he asserted, is ‘entanglement’: the phenomenon that many physicists believe to be the ultimate in quantum weirdness. Entanglement lets the measurement of one particle instantaneously determine the state of a partner particle, no matter how far away it may be — even on the other side of the Milky Way.
Einstein loathed the idea of entanglement, and famously derided it as “spooky action at a distance”. But it is central to quantum theory. And Van Raamsdonk, drawing on work by like-minded physicists going back more than a decade, argued for the ultimate irony — that, despite Einstein’s objections, entanglement might be the basis of geometry, and thus of Einstein’s geometric theory of gravity. “Space-time,” he says, “is just a geometrical picture of how stuff in the quantum system is entangled.”
“I had understood something that no one had understood before.”
This idea is a long way from being proved, and is hardly a complete theory of quantum gravity. But independent studies have reached much the same conclusion, drawing intense interest from major theorists. A small industry of physicists is now working to expand the geometry–entanglement relationship, using all the modern tools developed for quantum computing and quantum information theory.
“I would not hesitate for a minute,” says physicist Bartłomiej Czech of Stanford University in California, “to call the connections between quantum theory and gravity that have emerged in the last ten years revolutionary.”
Gravity without gravity
Much of this work rests on a discovery2 announced in 1997 by physicist Juan Maldacena, now at the Institute for Advanced Study in Princeton, New Jersey. Maldacena’s research had led him to consider the relationship between two seemingly different model universes. One is a cosmos similar to our own. Although it neither expands nor contracts, it has three dimensions, is filled with quantum particles and obeys Einstein’s equations of gravity. Known as anti-de Sitter space (AdS), it is commonly referred to as the bulk. The other model is also filled with elementary particles, but it has one dimension fewer and doesn’t recognize gravity. Commonly known as the boundary, it is a mathematically defined membrane that lies an infinite distance from any given point in the bulk, yet completely encloses it, much like the 2D surface of a balloon enclosing a 3D volume of air. The boundary particles obey the equations of a quantum system known as conformal field theory (CFT).
Maldacena discovered that the boundary and the bulk are completely equivalent. Like the 2D circuitry of a computer chip that encodes the 3D imagery of a computer game, the relatively simple, gravity-free equations that prevail on the boundary contain the same information and describe the same physics as the more complex equations that rule the bulk.
“It’s kind of a miraculous thing,” says Van Raamsdonk. Suddenly, he says, Maldacena’s duality gave physicists a way to think about quantum gravity in the bulk without thinking about gravity at all: they just had to look at the equivalent quantum state on the boundary. And in the years since, so many have rushed to explore this idea that Maldacena’s paper is now one of the most highly cited articles in physics.
Among the enthusiasts was Van Raamsdonk, who started his sabbatical by pondering one of the central unsolved questions posed by Maldacena’s discovery: exactly how does a quantum field on the boundary produce gravity in the bulk? There had already been hints3 that the answer might involve some sort of relation between geometry and entanglement. But it was unclear how significant these hints were: all the earlier work on this idea had dealt with special cases, such as a bulk universe that contained a black hole. So Van Raamsdonk decided to settle the matter, and work out whether the relationship was true in general, or was just a mathematical oddity.
He first considered an empty bulk universe, which corresponded to a single quantum field on the boundary. This field, and the quantum relationships that tied various parts of it together, contained the only entanglement in the system. But now, Van Raamsdonk wondered, what would happen to the bulk universe if that boundary entanglement were removed?
He was able to answer that question using mathematical tools4 introduced in 2006 by Shinsei Ryu, now at the University of Illinois at Urbana–Champaign, and Tadashi Takayanagi, now at the Yukawa Institute for Theoretical Physics at Kyoto University in Japan. Their equations allowed him to model a slow and methodical reduction in the boundary field’s entanglement, and to watch the response in the bulk, where he saw space-time steadily elongating and pulling apart (see ‘The entanglement connection’). Ultimately, he found, reducing the entanglement to zero would break the space-time into disjointed chunks, like chewing gum stretched too far.
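The tools in question centre on the Ryu–Takayanagi formula, which ties the entanglement entropy of a boundary region A to the area of the minimal bulk surface anchored on A's edge (stated here in natural units, with G_N Newton's constant):

```latex
S(A) = \frac{\operatorname{Area}(\gamma_A)}{4\,G_N}
```

Less entanglement on the boundary means a smaller minimal-surface area in the bulk, which is how a purely quantum quantity can register as geometry.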
The geometry–entanglement relationship was general, Van Raamsdonk realized. Entanglement is the essential ingredient that knits space-time together into a smooth whole — not just in exotic cases with black holes, but always.
“I felt that I had understood something about a fundamental question that perhaps nobody had understood before,” he recalls: “Essentially, what is space-time?”
Quantum entanglement as geometric glue — this was the essence of Van Raamsdonk’s rejected paper and winning essay, and an idea that has increasingly resonated among physicists. No one has yet found a rigorous proof, so the idea still ranks as a conjecture. But many independent lines of reasoning support it.
In 2013, for example, Maldacena and Leonard Susskind of Stanford published5 a related conjecture that they dubbed ER = EPR, in honour of two landmark papers from 1935. ER, by Einstein and American-Israeli physicist Nathan Rosen, introduced6 what is now called a wormhole: a tunnel through space-time connecting two black holes. (No real particle could actually travel through such a wormhole, science-fiction films notwithstanding: that would require moving faster than light, which is impossible.) EPR, by Einstein, Rosen and American physicist Boris Podolsky, was the first paper to clearly articulate what is now called entanglement7.
Maldacena and Susskind’s conjecture was that these two concepts are related by more than a common publication date. If any two particles are connected by entanglement, the physicists suggested, then they are effectively joined by a wormhole. And vice versa: the connection that physicists call a wormhole is equivalent to entanglement. They are different ways of describing the same underlying reality.
No one has a clear idea of what this underlying reality is. But physicists are increasingly convinced that it must exist. Maldacena, Susskind and others have been testing the ER = EPR hypothesis to see if it is mathematically consistent with everything else that is known about entanglement and wormholes — and so far, the answer is yes.
Other lines of support for the geometry–entanglement relationship have come from condensed-matter physics and quantum information theory: fields in which entanglement already plays a central part. This has allowed researchers from these disciplines to attack quantum gravity with a whole array of fresh concepts and mathematical tools.
Tensor networks, for example, are a technique developed by condensed-matter physicists to track the quantum states of huge numbers of subatomic particles. Brian Swingle was using them in this way in 2007, when he was a graduate student at the Massachusetts Institute of Technology (MIT) in Cambridge, calculating how groups of electrons interact in a solid material. He found that the most useful network for this purpose started by linking adjacent pairs of electrons, which are most likely to interact with each other, then linking larger and larger groups in a pattern that resembled the hierarchy of a family tree. But then, during a course in quantum field theory, Swingle learned about Maldacena’s bulk–boundary correspondence and noticed an intriguing pattern: the mapping between the bulk and the boundary showed exactly the same tree-like network.
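The tree-like hierarchy Swingle noticed can be sketched in a few lines: pair adjacent sites, then pair the pairs, and so on up to a single root. This is only a schematic of the family-tree structure, not a real tensor-network calculation:

```python
def coarse_grain(sites):
    """One layer of the tree: merge adjacent pairs of sites into parent nodes."""
    return [(sites[i], sites[i + 1]) for i in range(0, len(sites), 2)]

layer = list(range(8))   # 8 "boundary" sites, labelled 0..7
tree = [layer]
while len(layer) > 1:
    layer = coarse_grain(layer)
    tree.append(layer)

for depth, nodes in enumerate(tree):
    print(depth, nodes)
# depth 0: 8 sites; depth 1: 4 pairs; depth 2: 2; depth 3: 1 root.
# The extra "depth" direction is what plays the role of the bulk's
# radial dimension in the bulk-boundary correspondence.
```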
“You can think of space as being built from entanglement.”
Swingle wondered whether this resemblance might be more than just coincidence. And in 2012, he published8 calculations showing that it was: he had independently reached much the same conclusion as Van Raamsdonk, thereby adding strong support to the geometry–entanglement idea. “You can think of space as being built from entanglement in this very precise way using the tensors,” says Swingle, who is now at Stanford and has seen tensor networks become a frequently used tool to explore the geometry–entanglement correspondence.
Another prime example of cross-fertilization is the theory of quantum error-correcting codes, which physicists invented to aid the construction of quantum computers. These machines encode information not in bits but in ‘qubits’: quantum states, such as the up or down spin of an electron, that can take on values of 1 and 0 simultaneously. In principle, when the qubits interact and become entangled in the right way, such a device could perform calculations that an ordinary computer could not finish in the lifetime of the Universe. But in practice, the process can be incredibly fragile: the slightest disturbance from the outside world will disrupt the qubits’ delicate entanglement and destroy any possibility of quantum computation.
That need inspired quantum error-correcting codes, numerical strategies that repair corrupted correlations between the qubits and make the computation more robust. One hallmark of these codes is that they are always ‘non-local’: the information needed to restore any given qubit has to be spread out over a wide region of space. Otherwise, damage in a single spot could destroy any hope of recovery. And that non-locality, in turn, accounts for the fascination that many quantum information theorists feel when they first encounter Maldacena’s bulk–boundary correspondence: it shows a very similar kind of non-locality. The information that corresponds to a small region of the bulk is spread over a vast region of the boundary.
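The simplest classical caricature of such a code is the three-bit repetition code: one logical bit is spread over three physical bits, and recovery consults the whole block rather than any single bit. A minimal sketch (a classical stand-in for the quantum case, not an actual quantum code):

```python
def encode(bit):
    # Spread one logical bit over three physical bits.
    return [bit, bit, bit]

def recover(bits):
    # Majority vote: recovery needs the whole spread-out block,
    # not any one physical bit -- the code is "non-local".
    return int(sum(bits) >= 2)

word = encode(1)
word[0] = 0               # corrupt one physical bit
print(recover(word))      # 1 -- the logical bit survives
```

Real quantum codes are far subtler, but they share this signature: the protected information lives in correlations across a region, just as a small piece of the bulk is smeared over a large piece of the boundary.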
“Anyone could look at AdS–CFT and say that it’s sort of vaguely analogous to a quantum error-correcting code,” says Scott Aaronson, a computer scientist at MIT. But in work published in June9, physicists led by Daniel Harlow at Harvard University in Cambridge and John Preskill of the California Institute of Technology in Pasadena argue for something stronger: that the Maldacena duality is itself a quantum error-correcting code. They have demonstrated that this is mathematically correct in a simple model, and are now trying to show that the assertion holds more generally.
“People have been saying for years that entanglement is somehow important for the emergence of the bulk,” says Harlow. “But for the first time, I think we are really getting a glimpse of how and why.”
That prospect seems to be enticing for the Simons Foundation, a philanthropic organization in New York City that announced in August that it would provide US$2.5 million per year for at least 4 years to help researchers to move forward on the gravity–quantum information connection. “Information theory provides a powerful way to structure our thinking about fundamental physics,” says Patrick Hayden, the Stanford physicist who is directing the programme. He adds that the Simons sponsorship will support 16 main researchers at 14 institutions worldwide, along with students, postdocs and a series of workshops and schools. Ultimately, one major goal is to build up a comprehensive dictionary for translating geometric concepts into quantum language, and vice versa. This will hopefully help physicists to find their way to the complete theory of quantum gravity.
Still, researchers face several challenges. One is that the bulk–boundary correspondence does not apply in our Universe, which is neither static nor bounded; it is expanding and apparently infinite. Most researchers in the field do think that calculations using Maldacena’s correspondence are telling them something true about the real Universe, but there is little agreement as yet on exactly how to translate results from one regime to the other.
Another challenge is that the standard definition of entanglement refers to particles only at a given moment. A complete theory of quantum gravity will have to add time to that picture. “Entanglement is a big piece of the story, but it’s not the whole story,” says Susskind.
He thinks physicists may have to embrace another concept from quantum information theory: computational complexity, the number of logical steps, or operations, needed to construct the quantum state of a system. A system with low complexity is analogous to a quantum computer with almost all the qubits on zero: it is easy to define and to build. One with high complexity is analogous to a set of qubits encoding a number that would take aeons to compute.
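As a loose classical analogue of that definition, one can count the minimal number of elementary operations — here, single-bit flips — needed to build a target bit string starting from the all-zeros string. This is only a caricature: genuine quantum complexity counts elementary gates acting on entangled qubits, which is vastly harder to compute.

```python
def toy_complexity(target):
    # Minimal number of single-bit flips needed to reach `target`
    # from the all-zeros string -- a classical caricature of
    # circuit complexity.
    return sum(target)

print(toy_complexity([0, 0, 0, 0]))  # 0: "all qubits on zero", trivial to build
print(toy_complexity([1, 0, 1, 1]))  # 3: more operations, more complex
```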
Susskind’s road to computational complexity began about a decade ago, when he noticed that a solution to Einstein’s equations of general relativity allowed a wormhole in AdS space to get longer and longer as time went on. What, he wondered, did that correspond to on the boundary? What was changing there? Susskind knew that it couldn’t be entanglement, because the correlations that produce entanglement between different particles on the boundary reach their maximum in less than a second10. In an article last year11, however, he and Douglas Stanford, now at the Institute for Advanced Study, showed that as time progressed, the quantum state on the boundary would vary in exactly the way expected from computational complexity.
“It appears more and more that the growth of the interior of a black hole is exactly the growth of computational complexity,” says Susskind. If quantum entanglement knits together pieces of space, he says, then computational complexity may drive the growth of space — and thus bring in the elusive element of time. One potential consequence, which he is just beginning to explore, could be a link between the growth of computational complexity and the expansion of the Universe. Another is that, because the insides of black holes are the very regions where quantum gravity is thought to dominate, computational complexity may have a key role in a complete theory of quantum gravity.
Despite the remaining challenges, there is a sense among the practitioners of this field that they have begun to glimpse something real and very important. “I didn’t know what space was made of before,” says Swingle. “It wasn’t clear that question even had meaning.” But now, he says, it is becoming increasingly apparent that the question does make sense. “And the answer is something that we understand,” says Swingle. “It’s made of entanglement.”
As for Van Raamsdonk, he has written some 20 papers on quantum entanglement since 2009. All of them, he says, have been accepted for publication.
Nature 527, 290–293 (19 November 2015) doi:10.1038/527290a
Forty years ago Stephen Hawking predicted that black holes emit a special kind of radiation, and should therefore be able to shrink and eventually evaporate. The radiation arises when virtual particles — pairs of particles that quantum fluctuations continually create in the vacuum, and that normally annihilate each other almost instantly — appear near the event horizon. There the pair can be split: one partner disappears into the black hole, taking its quantum-mechanical information with it, while the other becomes a real particle. The black hole therefore radiates, but the radiation is so faint that astronomical observation of it is all but impossible. To get empirical evidence, scientists have to simulate black holes instead.
The physicist Jeff Steinhauer of the Technion, the university of technology in Haifa, Israel, did exactly this. Realizing an idea of the physicist Bill Unruh, he built an acoustic event horizon. He used a cloud of rubidium atoms cooled to just above absolute zero; trapped in an electromagnetic field, the atoms form a Bose–Einstein condensate, inside which the speed of sound is only about half a millimetre per second. By accelerating part of the condensate beyond this speed, he created an artificial event horizon. At these low temperatures, quantum fluctuations produce pairs of phonons, and in the simulation these pairs are likewise split: one is trapped behind the supersonic horizon, while the other escapes as a kind of Hawking radiation.
Whether the experiment really simulates black holes is still uncertain. According to Ulf Leonhardt, it does not prove conclusively that the two phonons are entangled — and hence that each pair arose from a single fluctuation. Leonhardt even doubts that the cloud of atoms is a genuine Bose–Einstein condensate.
Leonard Susskind thinks this experiment does not reveal the mysteries of black holes: for instance, it does not address the information paradox, because acoustic black holes do not destroy information.
NEW THEORY OF GRAVITY MIGHT EXPLAIN DARK MATTER
A new theory of gravity might explain the curious motions of stars in galaxies. Emergent gravity, as the new theory is called, predicts exactly the same deviations in stellar motions that are usually explained by inserting dark matter into the theory. Prof. Erik Verlinde, renowned expert in string theory at the University of Amsterdam and the Delta Institute for Theoretical Physics, published a new research paper today in which he expands his groundbreaking views on the nature of gravity.
In 2010, Erik Verlinde surprised the world with a completely new theory of gravity. According to Verlinde, gravity is not a fundamental force of nature, but an emergent phenomenon. In the same way that temperature arises from the movement of microscopic particles, gravity emerges from the changes of fundamental bits of information, stored in the very structure of spacetime.
Newton’s Law from Information
In his 2010 article [http://link.springer.com/article/10.1007/JHEP04%282011%29029], Verlinde showed how Newton’s famous second law, which describes how apples fall from trees and satellites stay in orbit, can be derived from these underlying microscopic building blocks. Extending his previous work and work done by others, Verlinde now shows how to understand the curious behaviour of stars in galaxies without adding the puzzling dark matter.
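In rough outline, that 2010 derivation combines three ingredients: the entropy change when a mass m moves a distance Δx toward a holographic screen, the Unruh temperature associated with an acceleration a, and the entropic-force relation F Δx = T ΔS:

```latex
\Delta S = 2\pi k_B \,\frac{mc}{\hbar}\,\Delta x, \qquad
k_B T = \frac{\hbar a}{2\pi c}, \qquad
F = T\,\frac{\Delta S}{\Delta x} = ma .
```

Eliminating the temperature and the entropy change leaves F = ma — Newton's second law emerging from information-theoretic bookkeeping. (This is a compressed sketch of the published argument, not the full derivation.)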
Puzzling Star Velocities
The outer regions of galaxies, like our own Milky Way, rotate much faster around the centre than can be accounted for by the quantity of ordinary matter — stars, planets and interstellar gases. Something else has to produce the required gravitational force, and so dark matter entered the scene. Dark matter seems to dominate our universe: more than 80% of all matter must be dark. Yet the alleged dark-matter particles have never been observed, despite many efforts to detect them.
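The mismatch is easy to quantify. If only the visible mass pulled on an orbiting star, its speed would fall off as v = sqrt(GM/r); measured rotation curves instead stay roughly flat far from the centre. A sketch with an assumed, round-number visible mass (the mass value is illustrative, not a measured figure):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_VISIBLE = 2e41     # kg -- rough visible mass of a Milky-Way-like galaxy (assumption)
KPC = 3.086e19       # metres per kiloparsec

def keplerian_speed(r):
    # Orbital speed if only the enclosed visible mass pulls: v = sqrt(G M / r)
    return math.sqrt(G * M_VISIBLE / r)

for r_kpc in (5, 10, 20, 40):
    v_kms = keplerian_speed(r_kpc * KPC) / 1000
    print(f"{r_kpc:3d} kpc: {v_kms:6.1f} km/s")
# The prediction falls as 1/sqrt(r); observed curves stay roughly flat
# far out -- the discrepancy that dark matter was invented to fix.
```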
No Need for Dark Matter
According to Erik Verlinde, there is no need to add a mysterious dark-matter particle to the theory. In a new paper, which appeared today on the arXiv preprint server, Verlinde shows how his theory of gravity accurately predicts the velocities with which stars rotate around the centre of the Milky Way, as well as the motion of stars inside other galaxies. “We have evidence that this new view of gravity actually agrees with the observations,” says Verlinde. “At large scales, it seems, gravity just doesn’t behave the way Einstein’s theory predicts.”
At first glance, Verlinde’s theory has features similar to modified theories of gravity like MOND (Modified Newtonian Dynamics), proposed by Mordehai Milgrom in 1983. However, where MOND tunes the theory to match the observations, Verlinde’s theory starts from first principles. “A totally different starting point,” according to Verlinde.
Adapting the Holographic Principle
One of the ingredients in Verlinde’s theory is an adaptation of the holographic principle, introduced by his tutor Gerard ‘t Hooft (Nobel Prize 1999, Utrecht University) and Leonard Susskind (Stanford University). According to the holographic principle, all the information in the entire universe can be described on a giant imaginary sphere around it. Verlinde now shows that this idea is not quite correct: part of the information in our universe is contained in space itself.
Information in the Bulk
This extra information is required to describe that other dark component of the universe: the dark energy, which is held responsible for the accelerated expansion of the universe. Investigating the effects of this additional information on ordinary matter, Verlinde comes to a stunning conclusion. Whereas ordinary gravity can be encoded using the information on the imaginary sphere around the universe only – as he showed in his 2010 work – the result of the additional information in the bulk of space is a force that nicely matches the one so far attributed to dark matter.
On the Brink of a Scientific Revolution
Gravity is in dire need of new approaches like the one by Verlinde, since it doesn’t combine well with quantum physics. These two theories, the crown jewels of twentieth-century physics, cannot both be true at the same time. The problems arise in extreme conditions: near black holes, or during the Big Bang. Verlinde: “Many theoretical physicists like me are working on a revision of the theory, and some major advancements have been made. We might be standing on the brink of a new scientific revolution that will radically change our views on the very nature of space, time and gravity.”
A team of physicists has provided some of the clearest evidence yet that our universe could be just one big projection.
In 1997, theoretical physicist Juan Maldacena proposed that an audacious model of the Universe in which gravity arises from infinitesimally thin, vibrating strings could be reinterpreted in terms of well-established physics. The mathematically intricate world of strings, which exist in nine dimensions of space plus one of time, would be merely a hologram: the real action would play out in a simpler, flatter cosmos where there is no gravity.
Maldacena’s idea thrilled physicists because it offered a way to put the popular but still unproven theory of strings on solid footing—and because it solved apparent inconsistencies between quantum physics and Einstein’s theory of gravity. It provided physicists with a mathematical Rosetta stone, a “duality,” that allowed them to translate back and forth between the two languages, and solve problems in one model that seemed intractable in the other and vice versa. But although the validity of Maldacena’s ideas has pretty much been taken for granted ever since, a rigorous proof has been elusive.
In two papers posted on the arXiv repository, Yoshifumi Hyakutake of Ibaraki University in Japan and his colleagues now provide, if not an actual proof, at least compelling evidence that Maldacena’s conjecture is true.
In one paper, Hyakutake computes the internal energy of a black hole, the position of its event horizon (the boundary between the black hole and the rest of the Universe), its entropy and other properties based on the predictions of string theory as well as the effects of so-called virtual particles that continuously pop into and out of existence. In the other, he and his collaborators calculate the internal energy of the corresponding lower-dimensional cosmos with no gravity. The two computer calculations match.
“It seems to be a correct computation,” says Maldacena, who is now at the Institute for Advanced Study in Princeton, New Jersey and who did not contribute to the team’s work.
The findings “are an interesting way to test many ideas in quantum gravity and string theory,” Maldacena adds. The two papers, he notes, are the culmination of a series of articles contributed by the Japanese team over the past few years. “The whole sequence of papers is very nice because it tests the dual [nature of the universes] in regimes where there are no analytic tests.”
“They have numerically confirmed, perhaps for the first time, something we were fairly sure had to be true, but was still a conjecture — namely that the thermodynamics of certain black holes can be reproduced from a lower-dimensional universe,” says Leonard Susskind, a theoretical physicist at Stanford University in California who was among the first theoreticians to explore the idea of holographic universes.
Neither of the model universes explored by the Japanese team resembles our own, Maldacena notes. The cosmos with a black hole has ten dimensions, with eight of them forming an eight-dimensional sphere. The lower-dimensional, gravity-free one has but a single dimension, and its menagerie of quantum particles resembles a group of idealized springs, or harmonic oscillators, attached to one another.
Nevertheless, says Maldacena, the numerical proof that these two seemingly disparate worlds are actually identical gives hope that the gravitational properties of our universe can one day be explained by a simpler cosmos purely in terms of quantum theory.
This article is reproduced with permission from the magazine Nature. The article was first published on December 10, 2013.
PHYSICS OF THE DAY: An Introduction To Black Holes, Information And The String Theory Revolution by Leonard Susskind
A unique exposition of the foundations of the quantum theory of black holes, including the impact of string theory, the idea of black hole complementarity and the holographic principle. It aims to educate the physicist or student of physics who is not an expert on string theory about the revolution that has grown out of black hole physics and string theory.
Particles are particles, waves are waves. How can a particle be a wave?
A wave in an ocean is not a particle. The ocean is made out of particles, but the ocean itself is not a particle. And rocks are not waves; rocks are rocks. So a rock is an example of a particle, and the ocean is an example of a wave. And now somebody is trying to tell you that a rock is the ocean. What?