Entanglement is one of the strangest aspects of quantum mechanics, whereby two subatomic particles can be so closely connected that one can seem to influence the other even across long distances. Albert Einstein dubbed it “spooky action at a distance,” and two new experiments have now definitively shown that the phenomenon is real.

Okay, I’ll say it: yes, Einstein was wrong — at least when it comes to his proposed alternative explanation for spooky action.

Along with Boris Podolsky and Nathan Rosen, Einstein suggested in 1935 that entanglement might be the result of what are now called hidden variables: properties fixed in advance that predetermine the outcome of every measurement. There is no spookiness involved, he argued. And since these variables could only influence things in their immediate vicinity (locality), this view has become known as local realism.


Thirty years later, John Bell proposed a means of testing that hypothesis — an experimental set-up in which local hidden variables and quantum mechanics predict measurably different results. You take an entangled pair of photons, separate them, and at each location ask one of two possible “questions,” i.e., make one of two possible measurements of different properties, selected at random. If hidden variables predetermined the answers, the correlations between the two locations can only be so strong — a strict limit now known as Bell’s inequality. Quantum mechanics predicts correlations that exceed that limit. The familiar intuition still holds: if we measure one photon of an entangled pair and find that it is red, we know the other must automatically be blue, even if they are miles apart. What distinguishes spooky action from hidden variables is how strongly the answers stay correlated across the different randomly chosen questions.
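The gap Bell identified can be sketched numerically. The toy simulation below is an illustration, not the NIST apparatus: it uses the CHSH form of Bell’s inequality, a simple deterministic local hidden-variable rule, and the standard quantum prediction for an entangled pair. The local model can reach a correlation score of at most 2; quantum mechanics predicts 2√2 ≈ 2.83.

```python
import math
import random

# Toy CHSH illustration (an assumption of this sketch, not the NIST set-up).
# Each side measures along one of two angles; a local hidden-variable model
# must fix every outcome from a shared variable lam alone, with no access
# to the far side's setting.

def lhv_outcome(angle, lam):
    # Deterministic local rule: +1 or -1 from the local angle and lam only.
    return 1 if math.cos(angle - lam) >= 0 else -1

def lhv_correlation(a, b, trials=200_000, rng=random.Random(0)):
    total = 0
    for _ in range(trials):
        lam = rng.uniform(0, 2 * math.pi)  # hidden variable fixed at the source
        total += lhv_outcome(a, lam) * lhv_outcome(b, lam)
    return total / trials

def quantum_correlation(a, b):
    # Quantum prediction for a singlet-like entangled pair.
    return -math.cos(a - b)

def chsh(E):
    # Standard CHSH combination, with the angles that maximize the violation.
    a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
    return abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))

S_local = chsh(lhv_correlation)        # ~2.0: no local model can exceed 2
S_quantum = chsh(quantum_correlation)  # 2*sqrt(2) ~ 2.83: quantum violates the bound
```

Real Bell tests measure which of these two predictions nature actually obeys.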

Physicists have been conducting variations of the Bell test ever since, with greater and greater precision, but could never quite claim to have produced definitive proof of spooky action, because there were still critical loopholes in the experimental design. Until quite recently, physicists simply didn’t have sufficiently advanced technology to close those loopholes.

Krister Shalm, a physicist at the National Institute of Standards and Technology (NIST) in Boulder, Colorado, draws an analogy with the recent VW emissions scandal. The car manufacturer figured out the assumptions built into the EPA’s emissions tests and tuned its cars’ behavior during testing to exploit that loophole. The Bell tests conducted over the past 50+ years likewise rest on certain assumptions — and a sufficiently devious hidden-variable theory could, in principle, exploit them.


So like others before them, Shalm and his colleagues at NIST had to close those loopholes. By doing so with greater precision than ever before, they hammered the final nail in local realism’s coffin. “That idea is now out the window,” Shalm told Gizmodo.

The NIST version of the Bell test involved placing a photon source and two detectors far apart, in three different rooms in the laboratory building. The source pumped out entangled pairs of photons, which were then separated and sent via fiber optic cables to the detectors. While the photons were still en route, the scientists used a random number generator to make a “choice” — the equivalent of calling heads or tails for the flip of a coin. That outcome determined the analyzer setting, and if the photon matched that setting, it was considered a detection.


The first loophole involves fair sampling. Let’s say you wanted to test whether a coin was biased. You would flip it 100 times, then count how many times you got heads, and how many times you got tails. If you got, say, 70 heads and only 30 tails, there’s a good chance the coin is biased. But what if for half of those 100 flips, the coin bounced away and fell down the drain? You’d only be counting the remaining 50 flips, in which the coin landed equally on heads and tails. You can assume that the other unrecorded flips were fair, but the coin could still be biased and you wouldn’t know it, because half the results are hidden from you. “If every time I got heads, I threw it away and didn’t show it to you, you’d be fooled into thinking it was fair,” said Shalm.
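Shalm’s coin analogy is easy to simulate. In this hypothetical sketch (the 70/30 bias and drop rate are invented for illustration), a coin biased toward heads looks perfectly fair once a selective “detector” quietly loses just enough heads:

```python
import random

# Sketch of the fair-sampling (detection) loophole: a biased coin can
# masquerade as fair if heads are selectively "lost down the drain."

rng = random.Random(42)
recorded = []
for _ in range(20_000):
    flip = "heads" if rng.random() < 0.7 else "tails"  # truly biased coin
    # Drop heads with probability 4/7, so surviving heads and tails are
    # equally likely: P(record heads) = 0.7 * (1 - 4/7) = 0.3 = P(tails).
    if flip == "heads" and rng.random() < 4 / 7:
        continue  # this flip falls down the drain and is never recorded
    recorded.append(flip)

heads_fraction = recorded.count("heads") / len(recorded)
# heads_fraction comes out near 0.5: the surviving sample looks fair
# even though the coin is badly biased.
```

Closing the loophole means detecting so many of the photons that no such selective-loss story can fit the data — which is where the 72% threshold below comes from.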

That’s what happened in prior incarnations of the Bell test: until quite recently, the photon detectors weren’t capturing a sufficiently high fraction of the photons — above 72% is the critical threshold, according to Shalm. The NIST experiment detected 75% of the photons, thanks to the team’s development of much-improved sources of entangled photons and the use of superconducting materials in its detectors.

It’s incredibly difficult to achieve those numbers, given the intricate process involved. Shalm calls it “quantum archery”: the entangled photon pairs must be coupled to optical fibers — a target about 10 microns in size — and then sent to separate locations hundreds of meters away with minimal losses, weaving in and out of the optical fibers along the way before they finally reach the detectors. Any kind of outside interference will break the entanglement between the photon pair.


This brings us to the second loophole: the possibility that the two sides could somehow communicate. Closing it means the photon detectors must be far enough apart that no signal traveling at the speed of light can reach either photon before the “choice,” or measurement, at its location is complete. Any such signal would take 617 nanoseconds to travel between the two NIST detectors, and the measurements were completed a good 40 nanoseconds faster than that, so the team successfully ruled out the possibility of some kind of mysterious communication between the photons.
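The timing condition is simple arithmetic, sketched here from the figures quoted above (the ~185 m separation is derived from the 617 ns light-travel time; it is not stated in the text):

```python
# Timing budget for the locality loophole, using the numbers in the article.
c = 299_792_458.0               # speed of light, m/s
signal_time_ns = 617.0          # light-travel time between the two detectors
measurement_ns = signal_time_ns - 40.0  # measurements finished 40 ns sooner

separation_m = c * signal_time_ns * 1e-9  # implied detector separation, ~185 m

# The measurement must finish before any light-speed signal could arrive.
locality_closed = measurement_ns < signal_time_ns
```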

The third and final loophole involves freedom of choice, and Shalm admits that it can never truly be closed, because at some point you find yourself in the untestable realm of metaphysics. When that happens, “You’re firmly out of the realm of physics, in my opinion,” he said. But he and his colleagues did address the issue by combining two different processes to generate truly random numbers, thereby closing as much of that loophole as it is possible to close.

They also added a third random bit to ensure there could be no outside manipulation, mashing together data from films and TV shows, such as Back to the Future or Saved by the Bell, with the digits of pi. In order to cling to the super-determinism implied by local realism, “You’d have to believe that sometime before the experiment started, the photons (or whatever was creating the photons) have to be able to influence what two quantum random number generators were doing, and would also have to know what Marty McFly was doing in Back to the Future, along with other correlations and the digits of pi,” said Shalm. Any such model would be absurd.
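The article doesn’t spell out how the sources were combined, but a standard construction is to XOR independent bit streams: the mixed output remains unpredictable as long as at least one source is genuinely random. A hypothetical sketch — the source data and the `bits_from` helper are invented for illustration:

```python
import hashlib

def bits_from(source: bytes, n: int) -> list[int]:
    # Whiten an arbitrary byte source through a hash, then take n bits.
    digest = hashlib.sha256(source).digest()
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(n)]

n = 16
movie_bits = bits_from(b"placeholder: Back to the Future frames", n)
pi_bits = bits_from(b"3141592653589793238462643383279", n)
hardware_bits = bits_from(b"placeholder: quantum RNG samples", n)

# XOR-mix the streams: predicting the output requires predicting all of
# them, so an adversary would need to compromise every source at once.
mixed = [a ^ b ^ c for a, b, c in zip(movie_bits, pi_bits, hardware_bits)]
```

This is why the super-determinism escape route is so contrived: a hidden-variable conspiracy would have to anticipate every one of these streams.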



And what about implicit bias? The NIST team thought about that, too, bringing in another physicist who firmly believed in local realism to help design the experiment, lest their own bias against local realism sneak in. When the results came in, that physicist had to adjust his thinking. “He’s a good scientist,” said Shalm. “He saw the evidence and changed his opinion.” But he still needed a period of mourning: “He had a few days where he moped around the halls. You could almost see the five stages of grief.”

Even other physicists might not appreciate the difficulty of firmly closing all those loopholes. If prior Bell tests were akin to climbing Mount Kilimanjaro or Mount Fuji, Shalm said, the NIST experiment is “like climbing Everest or K2 without oxygen.” It took many, many scientists working 20 hours a day, seven days a week, to get the experiment up and running, because every component had to be precise to within one part in a million or billion — like crossing millions of T’s and dotting billions of i’s.

“I’ve poured my blood and tears into this,” Shalm admitted. “I got married in the middle of the experiment and I was analyzing data the day of the wedding. I’m glad my bride didn’t run away.”

Science usually progresses in incremental improvements. The NIST results come on the heels of a similar successful Bell test announced earlier this year by physicists at Delft University of Technology in the Netherlands. The Dutch scientists entangled two electrons at separate corners of the campus, 1.3 kilometers apart, and also found that spooky action was real. They achieved higher detection efficiency than the NIST team, and also closed the loopholes, albeit with substantially weaker statistics (a p-value of about 4%, versus roughly 1 in a billion for the NIST experiment).


And a second team of physicists at the University of Vienna just conducted yet another version of the Bell test using one of NIST’s single-photon detectors. They reported similar results, submitting their own paper to the journal Physical Review Letters at the same time as Shalm and his co-authors.

Taken together with the results from Delft and Vienna, NIST’s loophole-free Bell test should settle the question once and for all. As Shalm said, “You would have to have a very bizarre model of the universe to explain these three independent tests [without spooky action].”



Bell, John S. (1964) “On the Einstein-Podolsky-Rosen Paradox,” Physics 1: 195–200.


Einstein, A.; Podolsky, B.; Rosen, N. (1935) “Can Quantum-Mechanical Description of Physical Reality be Considered Complete?” Physical Review 47 (10): 777–780.

Giustina, Marissa et al. (2015) “Significant-loophole-free test of Bell’s theorem with entangled photons,” Physical Review Letters (submitted).

Hensen, B. et al. (2015) “Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometers,” Nature 526: 682–686.


Shalm, L.K. et al. (2015) “A strong loophole-free test of local realism,” Physical Review Letters (submitted).

Top image: Krister Shalm adjusts the photon source in his Bell test experimental set-up. Credit: Burrus/NIST. Bottom image: Site A set-up of Delft University’s Bell test earlier this year. Credit: Frank Auperle.