Posts Tagged 'quantum measurement'

Entangled by quantum mechanics

No topic in quantum physics has inspired as much confusion, frustration, and experimental creativity as entanglement. Certainly more nonsense has been written and spoken about entanglement than about any other concept; even physicists who understand it struggle to explain it to others, and often screw up. And even if you understand it pretty well (as I like to think I do), it can seem profoundly disturbing. I would say that if it doesn’t bother you on some level, you haven’t thought about it carefully.

In general terms, entanglement involves the following steps:

1. A single quantum system is prepared carefully, then split into two. The practical example (which I’ll elaborate on below) is to create two photons from a single source with opposite but indeterminate polarizations. (See the previous posts in this series for more on polarization.)
2. Because the two new systems are correlated, their properties are not independent: if one photon’s polarization is vertical, then the other must be horizontally polarized. However, there is no way to know which one has which polarization without measuring: it’s indeterminate, and as usual in quantum physics, the best we can do is assign probabilities to each possible outcome of a measurement.
3. If you measure the polarization of one of those photons (using a polarization filter and seeing if it goes through), any polarization measurement on the other photon can be predicted, no matter how far the photons have traveled. Thus, if you use a horizontal filter on the first photon and it gets through, then the second photon will not get through a horizontal filter, because it must be vertically polarized.
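Quantum theory predicts the joint statistics for those three steps exactly. Here’s a minimal sketch of the predicted probabilities, assuming the idealized rotationally invariant (“singlet”) two-photon state with opposite polarizations and perfect filters; the function name and angles are my own illustration:

```python
import math

def joint_probs(alpha_deg, beta_deg):
    """Joint outcome probabilities for one entangled pair with opposite
    polarizations, measured with polarizing filters at angles alpha and beta.
    (True, True) means both photons passed their filters."""
    delta = math.radians(alpha_deg - beta_deg)
    same = 0.5 * math.sin(delta) ** 2   # both pass, or both blocked
    diff = 0.5 * math.cos(delta) ** 2   # one passes, the other is blocked
    return {(True, True): same, (False, False): same,
            (True, False): diff, (False, True): diff}

# Filters at the same angle: the photons NEVER both pass (opposite polarizations).
print(joint_probs(0, 0)[(True, True)])   # 0.0

# Yet each photon on its own passes half the time, whatever the other filter does.
for beta in (0, 30, 60, 90):
    p = joint_probs(0, beta)
    print(p[(True, True)] + p[(True, False)])   # always 0.5
```

The second loop is the no-signaling property in miniature: the statistics at one filter don’t depend on how the distant filter is set, so no message rides along with the correlation.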

So here’s the problem: the first measurement does not cause anything to happen to the second system. The two cannot be in communication in any way, because the distance between them is arbitrary: they could be separated by several parsecs without changing the outcome, so if they were actually passing information, they would be violating relativity. (Though parsec-scale distances are impractical, real experiments have verified entanglement across a lake and between islands.) As a result, you can’t send signals faster than light using entanglement. The only way you could kinda-sorta communicate is if two groups of researchers agreed in advance on what the settings of their instruments would be before they parted company; but then no new information is transferred, since the real communication takes place at light-speed or slower, before the measurements are even performed. There’s even a theoretical result showing that faster-than-light information transfer can’t happen without allowing other things to violate relativity, which we know ain’t so.

Something else must be going on, then: either things truly are indeterminate and non-local (meaning the quantum system doesn’t depend on where the measurements are performed), or there is a “hidden variable” (which may involve a random fluctuation) connecting the two far-flung systems that determines what the outcome of each measurement must be, or yet another idea that I may not know about. The first general explanation is from the standard “Copenhagen” interpretation of quantum theory: it says not to worry about things being non-local, as long as no information is being transferred. The Copenhagen interpretation declares: there is no independent reality beyond our measurements, so all we need is the probability of a particular outcome. Other explanations are plagued by difficulties: they involve interpretation only and so are not subject to experimental tests, or they are difficult to distinguish from the Copenhagen interpretation, or they predict things that just ain’t so.

Historical digression

The first paper to try to grapple with quantum entanglement came from Albert Einstein, Boris Podolsky, and Nathan Rosen, so it is known as the EPR paper. (Although the “Schrödinger’s cat” thought experiment is better known, it deals primarily with a separate problem with the interpretation of quantum mechanics—the interaction between a microscopic system dictated by quantum processes and a macroscopic cat—so I think it’s not very useful for understanding entanglement itself.) The EPR paper, published in 1935, is titled “Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?”; whenever a headline asks a “yes or no” question, the authors expect the answer to be “no”, and this paper is no exception. Einstein and his coauthors conclude that the standard interpretation of quantum mechanics must be wrong.

Though I have read it several times and still find it fascinating, I’m not going to explain the original EPR paper in detail: the experiment they propose is eminently impractical, not to mention kind of messy to discuss. I don’t think anyone has ever proposed carrying it out, so pretty much every discussion of entanglement following EPR is based on later papers by David Bohm and Yakir Aharonov (the same dudes as the Aharonov-Bohm effect) and especially John S. Bell. Most modern experiments either assume entanglement is correct and use the predictions of quantum theory to interpret their results, or test one of the Bell inequalities, which are a set of mathematical relations predicting how quantum physics would differ from a class of alternative models involving hidden variables. For more about Bell’s research, see Aatish Bhatia’s excellent explanation in Wired.
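The most commonly tested Bell inequality, the CHSH form, fits in a few lines of arithmetic: any hidden-variable model of the kind Bell considered must satisfy |S| ≤ 2 for a certain combination S of correlations, while quantum theory predicts up to 2√2 for well-chosen filter angles. A sketch, using the quantum correlation E(α, β) = −cos 2(α − β) for a pair with opposite polarizations (the angles below are the standard optimal choice, not anything specific to a particular experiment):

```python
import math

def E(alpha_deg, beta_deg):
    """Quantum correlation of outcomes (+1 = photon passes, -1 = blocked)
    for an entangled pair with opposite polarizations."""
    return -math.cos(math.radians(2 * (alpha_deg - beta_deg)))

# Standard CHSH angle choices, in degrees: two filter settings per side.
a, a2 = 0, 45
b, b2 = 22.5, 67.5

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # ~2.828, i.e. 2*sqrt(2): beats the hidden-variable bound of 2
```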

Entanglement in action

Inverse Thunderdome: one photon goes in, two come out. The two top pictures are a schematic of a laser shining on a crystal, which then emits two photons. The bottom two pictures are a schematic of the internal quantum transition of an electron within the crystal. The two new photons have correlated polarizations: they are entangled.

The example I’m describing here is a simplified version of a common type. (See one of my earlier articles for a more detailed discussion of an eight-photon entanglement experiment.) In these experiments, researchers send single photons from a laser onto a special type of crystal. The single photon excites the crystal into a higher energy state, but when it decays, it emits two photons.

Polarization is (among other things) a measure of the photon’s spin: its rotational state independent of its motion. Spin is conserved, so the total amount of spin the original photon carried to the crystal must equal the total spin of the two photons emitted afterwards. That means the emitted photons have correlated polarizations: if one is horizontally polarized, the other must be vertical. However, the specific polarization of either one of those photons is indeterminate! We only know that they are correlated.

Schematic of the entanglement experiment.

The new photons are sent along different paths, which can be very long, so long as the photons aren’t tampered with en route. At the end of each path, the photons meet up with some kind of filter that measures their polarization with respect to the filter orientation. Polarization can be any angle as long as it’s perpendicular to the path the photon follows, so experimenters have a lot of freedom in the filter choice. John Bell proposed changing the filter orientations while the photons are actually in transit, to rule out dynamic changes in photon properties (as well as preclude any goofy ideas about photon communication or telepathy). To be even more sure, a trio of physicists proposed using light from quasars to set the orientation of the filters, which eliminates even more subtle effects.

A variety of experiments starting in the early 1980s demonstrated that the photons’ polarizations are correlated even though they cannot be directly interacting at the time of measurement. These experiments used a range of filter orientations, including random settings and filters changed while the experiment was in progress to preclude hidden communication, along with other complicated methods — all to demonstrate entanglement is real. Later experiments have entangled more than two photons, but the principle is still the same: that the photons were originally part of a single system means that the results of measurements on one are not independent of measurements on the others.

As with the case of ordinary polarization experiments, measurements on single photons (or rather individual pairs) don’t let us reconstruct all we need to know. We must repeat the experiment for many pairs of entangled photons to reconstruct all the probabilities and to show that the two photons are not independent of each other. However, the results are clear: no matter how far apart the polarization filters, the two photons behave as two parts of a single system.
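That many-pairs procedure is easy to simulate. Here’s a sketch that draws outcomes for many pairs from the quantum joint distribution and compares the estimated correlation against a toy local hidden-variable model, in which each pair carries a shared polarization angle fixed at the source. The sample size and the hidden-variable rule are my own illustration, not taken from any actual experiment:

```python
import math
import random

random.seed(1)

def quantum_pair(alpha, beta):
    """Sample (out1, out2) for one entangled pair; filters at alpha/beta degrees.
    +1 means the photon passed its filter, -1 means it was blocked."""
    d = math.radians(alpha - beta)
    same = 0.5 * math.sin(d) ** 2   # probability of (+1,+1), and of (-1,-1)
    diff = 0.5 * math.cos(d) ** 2   # probability of (+1,-1), and of (-1,+1)
    r = random.random()
    if r < same: return (1, 1)
    if r < 2 * same: return (-1, -1)
    if r < 2 * same + diff: return (1, -1)
    return (-1, 1)

def hidden_variable_pair(alpha, beta):
    """Toy local model: both outcomes are fixed by a shared angle lam
    (the hidden variable) chosen at the source; photon 2 is at lam + 90."""
    lam = random.uniform(0, 180)
    out1 = 1 if math.cos(math.radians(lam - alpha)) ** 2 > 0.5 else -1
    out2 = 1 if math.cos(math.radians(lam + 90 - beta)) ** 2 > 0.5 else -1
    return (out1, out2)

def correlation(sampler, alpha, beta, n=100_000):
    return sum(x * y for x, y in (sampler(alpha, beta) for _ in range(n))) / n

alpha, beta = 0, 22.5
print(correlation(quantum_pair, alpha, beta))          # ~ -0.707 (= -cos 45 deg)
print(correlation(hidden_variable_pair, alpha, beta))  # ~ -0.5: the local model misses
```

At filters set to the same angle, both models predict perfect anticorrelation; it’s at intermediate angles like the 22.5° offset above that the statistics come apart, which is exactly where Bell-test experiments look.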

Now there are a lot of details I haven’t included. Entanglement experiments are tricky, because a number of things can mess them up. Single photons are very easily lost or otherwise affected by random environmental influences, which carry the technical name “decoherence”. (Maybe the next entry in this series should be on decoherence?) Entanglement of matter particles has proven even more difficult. Yet, every experiment supports the reality of entanglement, leaving us with the sometimes uncomfortable task of trying to understand what is really going on.

[As I continue to battle deadlines — and heading to ScienceOnline 2014 this week — this post is a heavily modified and updated version of an earlier post.]