> And it gets weirder: Measuring which slit such a particle goes through will invariably indicate it only goes through one—but then the wavelike interference (the “quantumness,” if you will) vanishes. The very act of measurement seems to “collapse” the superposition. “We know something fishy is going on in a superposition,” says physicist Avshalom Elitzur of the Israeli Institute for Advanced Research. “But you’re not allowed to measure it. This is what makes quantum mechanics so diabolical.”
But (afaik) measuring means disturbing, because you have to exchange some energy with the system in order to perform the measurement.
So in the above quote, if you replace "measuring" by "disturbing", then all of a sudden, the whole paragraph doesn't make any sense ... Can anyone clarify this?
>But (afaik) measuring means disturbing, because you have to exchange some energy with the system in order to perform the measurement.
This isn't necessarily true; QM allows what's called interaction-free measurement. Say you have a particle that's in a superposition of two states (or places, or whatever), which I'll call A and B. You do something that'll interact with (and detect) it if it's in A. If you didn't detect it, that means you've effectively measured it as being in B.
You can also turn it around, and use a particle in a superposition to see if an object (that it would interact with if it's in state A) exists; if the object exists, the possibility of interacting with it will destroy interference effects between the parts of the particle's superposition, allowing you to infer the existence of the object. The Elitzur–Vaidman bomb tester (https://en.wikipedia.org/wiki/Elitzur–Vaidman_bomb_tester) is an extreme example of this: it lets you verify that a bomb is "live" (will explode if interacted with), with only a 50% chance of setting it off.
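To make that concrete, here's a toy numpy calculation of the Mach–Zehnder version of the bomb tester (the basis labels and variable names are my own; it's just a 2x2 unitary sketch, not anyone's official implementation):

```python
import numpy as np

# Basis: index 0 = lower interferometer arm, index 1 = upper arm.
BS = np.array([[1, 1j],
               [1j, 1]], dtype=complex) / np.sqrt(2)  # 50/50 beam splitter

photon_in = np.array([1, 0], dtype=complex)  # photon enters the lower port

# No bomb: the two beam splitters interfere and one output port stays dark.
out = BS @ BS @ photon_in
print("no bomb, dark-port probability:", abs(out[0])**2)  # 0.0

# Live bomb in the upper arm: it effectively measures which arm the photon took.
mid = BS @ photon_in                 # (|lower> + i|upper>)/sqrt(2)
p_explode = abs(mid[1])**2           # 0.5: the photon hit the bomb
survived = np.array([mid[0], 0.0])   # otherwise: collapsed to the lower arm
survived /= np.linalg.norm(survived)
out = BS @ survived
p_dark = (1 - p_explode) * abs(out[0])**2
print("explode:", p_explode, "dark click:", p_dark)  # 0.5 and 0.25
```

A dark-port click can only happen when the bomb is live, yet the photon never touched it.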
Another example is the interaction-free version of the quantum Zeno effect. The quantum Zeno effect is that (under the right circumstances) a particle can be trapped in a particular state by continuously measuring whether it's still in the state. In the interaction-free version, you use something that'll interact if the particle ever leaves the state... and it never does. This has actually been done; see https://www.nature.com/articles/ncomms7811?WT.ec_id=NCOMMS-2...
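For the ordinary (measurement-based) Zeno effect, a minimal sketch, assuming a qubit that rotates from |0> toward |1> at a constant rate, with an ideal projective measurement after each of n small steps:

```python
import numpy as np

# Survival probability in |0> after n ideal measurements, for a total
# rotation angle of pi/2 (all parameters here are illustrative).
def survival(n, total_angle=np.pi / 2):
    step = total_angle / n           # rotation between measurements
    return np.cos(step) ** (2 * n)   # cos^2 per measurement, n times

for n in (1, 10, 100, 1000):
    print(n, survival(n))
# 1 -> 0.0 (fully rotated), 10 -> ~0.78, 100 -> ~0.98, 1000 -> ~0.998
```

The more often you look, the more the state is pinned in place, which is the effect the linked paper exploits without direct interaction.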
Does anyone know of actual videos of the experiment? All the videos I find online only show the wave interference pattern, but not the particle-like pattern shown when measuring/observing the particles.
In the case of an "interaction-free" measurement (i.e. one that supposedly would not disturb the system), as mentioned by the parent, if the interaction did not happen, then the system continues to be in a superposition of the two base states (0 and 1, in your example).
Citation needed. In a two-level quantum system, if I make any measurement of a particle such that the distribution of outcomes depends on whether the particle is in state 0 or 1, then I have disturbed the state of the system. This is true no matter how fancy the measurement is.
This can’t even be avoided with hacks that depend on detailed knowledge of the system. If the initial state is |0>+|1>, then, sure, I can measure and then reset the state. But if the initial state was entangled with some far-away particle, I can’t reset the state of the nearby particle without completely forgetting the outcome of the measurement.
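A quick density-matrix check of this claim (the construction is mine, just the standard textbook calculation): measure a superposition in the {|0>, |1>} basis, throw the outcome away, and the coherence terms are gone anyway:

```python
import numpy as np

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)
rho_before = np.outer(plus, plus.conj())

# Projective measurement in the computational basis, outcome forgotten:
P0 = np.diag([1, 0]).astype(complex)
P1 = np.diag([0, 1]).astype(complex)
rho_after = P0 @ rho_before @ P0 + P1 @ rho_before @ P1

print(rho_before.real)  # off-diagonal 0.5 entries: the interference terms
print(rho_after.real)   # diagonal only: the state has been disturbed
```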
> you have to exchange some energy with the system in order to perform the measurement
That's true, but what matters is not the energy exchange so much as the resulting entanglement. It is entanglement, not measurement per se, that "destroys" the interference. See:
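A toy calculation makes the point (my own construction, not from a particular reference): couple the particle's path to a probe, trace the probe out, and the fringe visibility equals the overlap of the probe states. No energy bookkeeping appears anywhere:

```python
import numpy as np

# Particle takes path A or B; a probe records the path with overlap
# <probe_A|probe_B> = c. Visibility of the interference fringes is |c|.
def visibility(c):
    probe_A = np.array([1, 0], dtype=complex)
    probe_B = np.array([c, np.sqrt(1 - abs(c)**2)], dtype=complex)
    path_A = np.kron(np.array([1, 0]), probe_A)   # |A> (x) |probe_A>
    path_B = np.kron(np.array([0, 1]), probe_B)   # |B> (x) |probe_B>
    psi = (path_A + path_B) / np.sqrt(2)
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_path = np.trace(rho, axis1=1, axis2=3)    # trace out the probe
    return 2 * abs(rho_path[0, 1])                # off-diagonal = fringes

print(visibility(1.0))  # probe learns nothing  -> full interference
print(visibility(0.5))  # partial entanglement  -> reduced fringes
print(visibility(0.0))  # perfect path record   -> no interference
```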
For context, in classical mechanics we also have to perturb the system to measure it, but we can make the perturbations arbitrarily small. That's why we call it a measurement rather than an interaction, and can get away with not explicitly modeling it.
A seminal paper on this question in classical mechanics was Bohr and Rosenfeld's 1933 paper on measuring electromagnetic fields, which showed that classical EM fields can indeed be measured accurately (I can't find it online right now, but the follow-up on QED is [1]). The question also appears in the context of general relativity, as the question of when extended bodies follow geodesics.
These are all different from the situation in QM. There is an irreducible alteration of the state in QM. It resembles in many ways the updating of a probability distribution when you gain new information.
Imagine you have two envelopes with a blue and a red card in them. You mix them and send one of them far away. Now you know that the far away envelope has a 50/50 chance of being blue. It is in a superposition of red and blue. If you open your envelope and look in, the superposition collapses, and you suddenly know exactly what state the other envelope is in, even though you didn't touch it, didn't disturb it at all.
QM measurements behave just like this "updating of probabilities upon learning something" _except_ that we know from other, more subtle properties (Bell's inequalities) that they are _not_ probabilities referring to a hidden state.
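A short check of that last point (the angles below are the standard CHSH choices): any "hidden card in the envelope" model obeys |S| <= 2, while the quantum singlet correlations reach 2*sqrt(2):

```python
import numpy as np

# Quantum correlation for spin measurements on a singlet pair,
# at analyzer angles theta_a and theta_b.
def E(theta_a, theta_b):
    return -np.cos(theta_a - theta_b)

a, a2 = 0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # ~2.828 > 2: no pre-written "card" can reproduce this
```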
You might want to look up the quantum eraser experiments. You can have a particle interact with the system in such a way that the result of the measurement cannot be known.
The issue is that no-one has a cast-iron definition of what exactly amounts to a measurement, outside of something entangling with the system being measured.
You can "disturb" a two-slit system, for example by creating a third slit which will change the "interference pattern" or pattern of probability of detection on the screen, without this being a measurement, though, so the concepts of "disturb" and "measure" are distinct.
> look which experiment shows interference and which doesn't
This is also a kind of measurement, and one you can't meaningfully perform before the other. By the time you know the interference pattern, it's already too late to decide whether you want to measure the intermediate position.
You might try to measure without looking at the results, but that doesn't change anything about the fact that the interaction happened. (Superpositions do not collapse by being observed by a human, they collapse by interacting with the rest of the world.)
> Superpositions do not collapse by being observed by a human, they collapse by interacting with the rest of the world.
But the article says: "Measuring which slit such a particle goes through will invariably indicate it only goes through one—but then the wavelike interference (the “quantumness,” if you will) vanishes. The very act of measurement seems to “collapse” the superposition." "Aharonov’s approach is called the two-state-vector formalism (TSVF) of quantum mechanics, and postulates quantum events are in some sense determined by quantum states not just in the past—but also in the future. "
Doesn't it mean that future measurement 'pushes' superposition into a definite state in the past?
> The apparent vanishing of particles in one place at one time—and their reappearance in other times and places—suggests [...] a particle’s presence in one place is somehow “canceled” by its own “counterparticle” in the same location. [...] These putative counterparticles should possess negative energy and negative mass, allowing them to cancel their counterparts.
I thought negative energies in Quantum Mechanics give rise to senseless infinities that can't be eliminated. What gives?
This is a horrendous article, clearly written by someone who doesn’t grasp the basics. I gave up when they conflated how the double-slit experiment demonstrates wave-particle duality with a demonstration of superposition. I can’t believe how much of a joke SciAm has become, it makes me sad.
Your criticism is totally incorrect. In fact the 2-slit experiment does demonstrate superposition. Wave-particle duality and superposition of states are really just complementary ways of describing the same situation.
To understand this, please try to read any undergrad level treatment of quantum mechanics. For example -
See Feynman's lectures volume 3.
"When a particle can reach a given state by two possible routes, the total amplitude for the process is the sum of the amplitudes for the two routes considered separately. In our new notation we write that
⟨x|s⟩_both holes open = ⟨x|s⟩_through 1 + ⟨x|s⟩_through 2.   (3.4)"
You can derive the result by treating the particle like a wave, but that is clearly a false/incomplete model. There is only one particle, the only thing it can be interfering with is itself. It's in a superposition of going through both slits.
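Here's a numerical version of Feynman's Eq. (3.4) (the geometry numbers are made up, chosen only so the fringes are visible): adding the two amplitudes and then squaring gives interference; squaring each first, as a "definitely went through one slit" story would require, gives none:

```python
import numpy as np

x = np.linspace(-20e-3, 20e-3, 1000)   # screen positions (m)
d, L, lam = 50e-6, 1.0, 500e-9         # slit spacing, screen distance, wavelength

r1 = np.hypot(L, x - d / 2)            # path length via slit 1
r2 = np.hypot(L, x + d / 2)            # path length via slit 2
amp1 = np.exp(2j * np.pi * r1 / lam)
amp2 = np.exp(2j * np.pi * r2 / lam)

both_open = abs(amp1 + amp2) ** 2               # fringes: amplitudes add
one_or_other = abs(amp1) ** 2 + abs(amp2) ** 2  # flat: probabilities add
print(both_open.min(), both_open.max())         # ~0 .. ~4: interference
print(one_or_other.min(), one_or_other.max())   # 2 .. 2: no interference
```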
I.e., in plain English: the particle goes through one slit, its wave goes through both slits, then the wave interferes with itself and alters the course of the particle.
Since the interference pattern is built over time, how do we know a particle is actually a wave that interferes with itself and not a particle that selects a different path from a set each time it is fired?
There are lots of ways to explain the interference pattern. Probably the most famous "renegade" theory is Bohmian mechanics, which suggests that there's a real particle that is guided by a wave that we can't see. The particle doesn't interfere with itself, but it's guided by a wave that does.
I don't think the particle can select a path from a set at the time it's fired. That sounds very much like a local hidden variable, which is more or less strictly ruled out by violations of Bell's inequality. Unless you allow causality to go back in time (in which case, it can make its decision on where to land based on, for instance, how it will eventually be measured). You can do things to the particle (like trying to measure it, or entangling it with other particles and measuring them, etc) which make it seem less and less likely that the particle could have selected a "which-way" as it was fired.
For instance: suppose you put a detector on one of the slits as the particles are "in flight". We know this results in no 'interference pattern' observed. How do the particles know to scramble themselves as soon as the detector is put on? It doesn't seem like a local effect, since all the particles that go through the other slit never interacted with the detector at all and therefore shouldn't have been affected. But they are anyway. So either there's some information being passed back in time along the particles' history, or there's some strange nonlocal effect.
If it were acting like a particle and selecting a different path each time, the expectation would be only two peaks, each following the line of sight from the source through a slit.
That doesn't solve the problem, because the question is really how the set of paths available to the particle is constrained.
If you go Full Copenhagen, there are no waves, no particles, and no paths. There are only evolving probabilities - which act in unintuitively non-local but predictable ways - and events which appear to sample one possible result from the current state of a probability density.
The probabilities can be composite, which allows for entanglement and superposition.
The probability part is fairly well understood, or at least fairly easy to calculate.
The exact nature of an event/measurement/whatever is still a complete mystery.
But there is nothing in a naive wave/particle/path model that makes it any less mysterious or easier to understand.
It has everything to do with superposition. A photon can only exhibit wave-like effects (i.e. interference) as long as its position ('x') remains in a superposition. Measuring which slit the particle travels through collapses its position into a definite state, thereby removing the interference effect.
I’m admittedly pretty basic on the subject matter myself and wouldn’t have been able to notice that. Any suggested light reading that would clarify this distinction?
What intrigues me about the entire situation is that the scientists are not standing back and looking at all the assumptions they use.
Simple things like the belief that there are "virtual" particles "popping" in and out of existence, and that they don't affect the path of "real" particles like photons or electrons.
Simple things like what processes the matter on each side of the slits undergoes in relation to the particles that pass through, and how each affects the other.
Simple things like what is a photon or other such particle.
Is the particle/wave duality idea hindering or helping the further understanding of these processes? Is quantum mechanics the best approximation that we have, or are there other ideas that would simplify the models in use?
I liken it to a programming project that has gone down one path based on a series of assumptions and when these assumptions are found to be incorrect or different from what is actually there, it becomes very difficult to change direction without doing a complete rebuild.
If one looks at the history of investigations over the last 100 years or so, one finds a variety of ideas that never gained any traction at the time they were proposed, yet today appear to give a handle on some of the puzzles that are being found. These ideas remain ignored because they didn't gain traction when first proposed.
Mayhaps it would be worth spending some time investigating whether they have any merit. They may not, but it can't hurt to check.
Notions like wave-particle duality, virtual particles "popping in and out", etc. aren't input assumptions. They're metaphors that come out of the equations, which are precise and well-founded. These have a useful role as levers to spur intuition, but ought not to be confused with the theories themselves.
It turns out questions like "what is a photon" are not that simple. For example, the very phrase: "a photon." Well, the particle number is an operator which is subject to the uncertainty principle and doesn't commute with other measurements, so already you're at risk... Our intuition misleads us in the quantum realm.
Yes, they are metaphors. However, they come out of prior assumptions and they are used as input assumptions for later work. The equations are simplified mathematical characterisations of the experimental evidence. Certain simplifying assumptions are made to create these equations. Theories are an attempt to clarify what is believed to be happening and are always based on some kind of simplifying assumptions.
Often, it is difficult to take a step back and take another long hard look at what the evidence is.
The point of my mentioning "what is a photon?" is that we make certain assumptions (as per your operators, uncertainty principle, etc) which we don't step back from and see if there are alternatives. Even trying to get a handle on what the uncertainty principle means gets us into a philosophical quagmire. Our view of the "quantum realm" is coloured by a huge amount of background assumptions, including the "intuitions" of those investigating the subject matter.
There is an oft-stated saying, "the science is settled", over some matter or another. Yet the whole point of scientific endeavour is that it is not settled; we are still investigating (or at least should be).
In Carlo Rovelli's Reality Is Not What It Seems [0], he speaks of this final point in its short, final chapter.
He starts with an example from Plato's Phaedo, where Plato acknowledges the limits of the knowledge of his time, in relation to Socrates being unsure of his "belief" that the Earth is a sphere.
Rovelli says:
"This acute awareness of our ignorance is the heart of scientific thinking. It is thanks to this awareness of the limits of our knowledge that we have learned so much. We are not certain of all which we suspect, just as Socrates was not sure of the spherical nature of the Earth. We are exploring at the borders of our knowledge."
As was Newton, Maxwell, Einstein, amongst others.
Rovelli goes on:
"Science is not reliable because it provides certainty. It is reliable because it provides us with the best answers we have at present. ... They are the best we have because we don’t consider them to be definitive, but see them as open to improvement. It’s the awareness of our ignorance that gives science its reliability."
Science is never settled. Rovelli's book is about loop quantum gravity, as unsettled a field of research as there is in science _today_.
I think you might be underestimating physicists, or over-estimating yourself... rest assured, modern physics is not a blind cargo-cult.
The mainstream complex models are the simplest explanations consistent with empirical evidence, and gaps or weirdness in those models drives further experimentation. So far, reality has proven to be more complex than the trivially simple and clean abstractions found in software.
I don't think I am underestimating physicists, simply because whether you are a physicist or not is irrelevant: you are a human first, with all the foibles associated with that. Physicists are not immune to being pig-headed, obstinate, arrogant, self-righteous and just out and out plain wrong. These are attributes of all humans.
The mainstream complex models are not the simplest explanations consistent with empirical evidence. There are various kinds of models that explain some phenomena better, but since they are not the consensus view, they are not further investigated. That's the point I'm making. There is no funding for anything that is not part of the current suite of models.
The reason for mentioning software was simply the mindset demonstrated by people to how they look at something. Reality is way more complex than any theory or model that we have so far come up with. Software is a dog's breakfast and that's my opinion after nearly 40 years being involved with it.
One of the advantages of peer review is using the natural human tendency to be “pig-headed, obstinate, arrogant, self-righteous” to poke each other’s work for flaws we cannot find in our own work.
QM is the most rigorously tested and most accurate scientific theory in the whole of human history and nothing even comes remotely close. You seem to think you have insights that the greatest minds in history have missed, then please by all means enlighten us plebs and take your Nobel Prize.
> is that the scientists are not standing back and looking at all the assumptions they use
Plenty of physicists do exactly this. I have seen a number of physics papers that center around re-thinking or re-deriving basic assumptions. What gave you the impression otherwise?
By the way, if you're inferring anything at all about the field via popular science articles, please don't. They are almost always extremely misleading.
Some physicists will do this (in part). I, too, have seen various papers that look at the basic assumptions. What I am saying is that, on the whole, physicists don't do this. The impression comes from seeing the lack of diversity in funding, the standard discouragement of investigations that revisit assumptions, the ad hominem attacks by various segments, and from actually communicating with various scientists in various fields about their subject matter.
Popular science articles, well, there's another problem.
I suppose what I am saying here is that scientists of all stripes (including physicists) are human, with all the same human fallibilities of politics, anger management, distrust, hatred, cowardice, hero worship, arrogance, hypocrisy and backbiting, to name but a few, that occur in all human environments.
Different interpretations of QM suggest different things. There's no wave/particle duality in Bohmian mechanics: there's literally a wave, and literally a particle, and the particle is guided by the wave.
And the many-worlds interpretation does yet a third thing. They all make the same predictions of measurements though, which means it's rather a matter of philosophy and/or practicality which one you subscribe to.
"The universe seems to like talking to itself faster than the speed of light,” said Steinberg. “I could understand a universe where nothing can go faster than light, but a universe where the internal workings operate faster than light, and yet we’re forbidden from ever making use of that at the macroscopic level — it’s very hard to understand.”
This can only lead to one conclusion: the Universe is a computer program. We can observe some of the side effects of the underlying computer, but we cannot use it from inside the simulation.
Local hidden variables have been ruled out, notably by experiments with Bell's inequality.
This means that quantum effects cannot be caused by some unseen property of the objects involved (e.g. if particles had some extra 'charge' we didn't know about).
The effects could be caused by some unseen "non-local" property, e.g. some property of space itself which propagates faster than light. A nice example is to model spacetime as a network (e.g. http://www.wolframscience.com/nks/p475--space-as-a-network ) with information/causality propagating along edges in the network. If most of the edges are short, and form a 3D lattice, we'd get the universe we're familiar with. We can then model quantum phenomena, like entanglement, by introducing some long edges, connecting regions of the network which would otherwise be far apart (the entangled particles).
Since there are so many possible non-local theories, with no way to distinguish between them, physicists tend to prefer quantum descriptions of local, observable properties, since that seems like less of a leap of faith.
That's only true if you assume standard probability theory holds. If you allow for so-called exotic probability, then local hidden variable theories are not ruled out. See [1] for a list of relevant literature.
> Is the particle/wave duality idea hindering or helping the further understanding of these processes?
Is that an idea? I thought it was an observed fact. Some people believed that light was a wave, others believed it was particles, so they each tried their assumptions, and both calculations were correct, as long as you don't mix them. Maybe it makes it more difficult to understand reality, but sweeping duality under the rug doesn't seem right.
The slit experiment and its outcomes make it pretty easy for laymen to understand the duality. Feynman used it to great effect (I just watched that lecture last week - it’s really great.)
We observe certain things; the interpretation is the question. I am in no way saying that duality should be swept under the rug, but rather that the entire interpretation and its supporting information should be held up to the light, to see what other information can be brought to bear on the problems this duality gives rise to.
As I intimated, should the observations that give rise to the interpretation that there are "virtual" particles popping in and out of existence be brought to bear on this particular problem? There are those who would have you believe that the quantum vacuum is not empty but full of these "virtual" particles. Does this idea give any explanation of why we see a wave phenomenon? Or is there something else that might give a better handle on the subject matter?
Is how we measure adding additional things that give rise to the effects we see? There are so many questions that I do not see being asked.
We should not be confusing the evidence we see with the interpretation we come up with. Too often, the interpretation of the facts is treated as if it is the fact.
One thing kinda like it is the geometric optics used to describe a mirror: you use virtual image points behind the mirror (if my French lycée memory serves me well), and they still produce the correct answer.
In a way you could say that waves (from the lycée too) behave as described by the equations... even though they might be pictured as ripples in a fluid of water molecules.
Sometimes it's hard to reconsider the model and the perception of reality we have.
Nice to see you kept commenting after having so many downvotes ;)
Well, being ill in bed does put a crimp in responding. What I find interesting is the different views that come out when someone makes a comment that others find unsatisfactory. Further grist for the mill in understanding people and the nature of the universe around us should be welcome.
The wonderful thing is that there are so many questions and so few real answers. It means that we are able to have fun in investigating the universe around us. The child's "why?" is a wonderful question and should be delighted in.
Duality seems to be less an interpretation than a state of uncertainty, since we can't discard either of the possible models. Models are, after all, mathematical devices built around a metaphor: EM waves are similar to, but not the same as, waves in a fluid, and subatomic particles are more like machines than like little balls. But the maths stand.
Actually, I just disagree with you on this point. I also believe that physicists are too confident in chains of deductions that could fail at some point.