Jason Rush sent me a link to a new insane anti-quantum article spread by the BBC:

Quantum test pricks uncertainty. The subtitle is particularly "cool". In recent months, we "learned" from some would-be scientific journals and the mainstream media that the wave function is "surely" not interpreted statistically. Now we're told that the Heisenberg uncertainty principle has been wrong from the beginning, too. In fact, the subtitle finally makes the "paradigm shift" that all the anti-quantum zealots have been expecting since at least 1925 as simple and clear as possible: quantum mechanics itself has been put in doubt.

Subtitle: "Pioneering experiments have cast doubt on a founding idea of the branch of physics called quantum mechanics."

*What happens when a few assholes acquire access to cool lasers?*

Fine. What concepts are these hardcore cranks relying upon?

The crackpot paper that managed to swim in between the anti-pseudoscience filters directly to the prestigious *Physical Review Letters* makes the answer very clear.

They "prove" that one may violate the uncertainty principle by violating a completely different and physically uninteresting non-principle that contains a "weak measurement" instead of a "measurement". But despite the name, a "weak measurement" isn't a measurement at all. It's a bizarre construct expressed by a formula that generalizes the measurement in a certain way, but the "mutation" is serious enough that you know it's not a measurement at all. If you explained the definition of this "weak measurement" concept to Werner Heisenberg, he would surely need just minutes to show that this "weak measurement" fails to have many properties that a proper measurement possesses, and he would surely add "WTF?", too.

Weak measurements have been mentioned on this blog in 2004 and 2011. Related: see also an article on non-demolition measurements.

When I tell you what a weak measurement is, you should be able to immediately see what the "trick" is. Indeed, the main reason why "weak measurements" became a popular term in the anti-quantum literature was the suggestion that "we can measure something without disturbing it after all". Except that we obviously can't, a revolutionary insight for which Werner Heisenberg rightfully received his 1932 physics Nobel prize (without sharing it with anyone).

The term "weak measurement" was coined in the 1988 article in

*Physical Review Letters*

"How the result of a measurement of a component of the spin of a spin-1/2 particle can turn out to be 100" by Yakir Aharonov, David Z. Albert, and Lev Vaidman, who slightly extended some speculative 1932 work by John von Neumann. Everyone who knows this basic historical fact must clearly see that this "new kind of a measurement" isn't a measurement at all. If \(WM(j_z)\) for a spin-1/2 particle may deviate from the standard set \(j_z=\pm 1/2\) and take values such as \(WM(j_z)=100\), it's very clear that \(WM(j_z)\) isn't \(j_z\) in any sense. It can't even be an average (or expectation) value. If it were \(j_z\), it just couldn't be equal to \(100\).

This criticism – pointing out that there is really nothing new and interesting, nothing that would challenge conventional quantum mechanics in the AAV 1988 paper – was made already in this comparably well-known 1989 paper.

Let me mention that AAV 1988 didn't just introduce a controversial new phrase. They also designed a method to calculate the expectation value of an observable out of many measurements none of which disturbs the measured object too much. The actual expectation value may be obtained from an average of such "weak values". However, the real problem is whether the individual terms in the average should be interpreted as properties of the physical system, generalized values, at all.
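That averaging property is easy to check numerically. The following sketch (my own illustration in Python/NumPy, not the AAV construction itself) post-selects onto a complete orthonormal basis, computes the "weak value" for each outcome, and weights it by the probability of that post-selection outcome; the weighted sum reproduces the ordinary expectation value exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random normalized state and a random Hermitian observable (3 levels)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)
psi /= np.linalg.norm(psi)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = (M + M.conj().T) / 2.0

# Post-select onto each vector |f> of an orthonormal basis, compute the
# "weak value" <f|A|psi>/<f|psi>, and weight it by the probability
# |<f|psi>|^2 of that post-selection outcome
total = 0.0
for f in np.eye(3):
    amp = f.conj() @ psi
    weak_value = (f.conj() @ A @ psi) / amp
    total += abs(amp)**2 * weak_value

# The weighted sum is exactly the ordinary expectation value <psi|A|psi>
print(np.allclose(total, psi.conj() @ A @ psi))  # True
```

The identity is trivial algebra: the weight times the weak value collapses to \(\braket{\psi}{f}\bra{f}A\ket{\psi}\), and summing over the basis gives \(\bra{\psi}A\ket{\psi}\). The individual "weak values" may be complex and arbitrarily large; only the sum is physical.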

To answer this question, I may enthusiastically recommend an almost unknown but extremely sensible 2009 paper by Stephen Parrott (arXiv), who retired already in 2002 (UMass Boston: younger people may be increasingly insane about QM) and who explains that the "weak measurement" yields a value that doesn't really say anything about the system itself; it always depends on all the details of the measurement. It's like a survey in which all the numbers may be internally distorted by various hidden choices, but which still claims to measure the same thing. That's why one can derive nonsensical conclusions such as values outside the interval of allowed eigenvalues.

Fine, what is a weak measurement? The term has been used in a very vague and sloppy way in the literature. It's meant to be some kind of a measurement that tries "not to affect the system too much". But whenever people stick to the proper definitions, a weak measurement is any process to quantify the following ratio known as the "weak value":\[

WM(j_z) = \frac{ \bra{\phi_\text{before}}j_z\ket{\phi_\text{after}} }{ \braket{\phi_\text{before}}{\phi_\text{after}} }.

\] The numerator and the denominator differ by the insertion of the \(j_z\) only. Note that while the ket vectors are the "state vectors after", as they should be, the bra vectors are "state vectors before". That's of course too bad because a genuine measurement must be given by this formula:\[

E(j_z) = \frac{ \bra{\phi_\text{after}}j_z\ket{\phi_\text{after}} }{ \braket{\phi_\text{after}}{\phi_\text{after}} }.

\] All the bra and ket vectors are the "states after" because the very point of a measurement is to acquire some information about what will be happening with the system after the measurement. That's how it must be; we can only measure by affecting the object so the measured value is linked to the "state after". In other words, one can never really measure properties of objects "before the measurement" in quantum mechanics (the "states before" are interfering and have all the other wonderful non-classical properties) and not even the "mixed inner products". Note that I didn't call the ratio simply \(j_z\) because it's just the expectation value \(E(j_z)\). You probably won't get this value during any measurement. You will get random values and \(E(j_z)\) is their average.

So even if those people admitted that the "weak measurement" isn't an actual measurement, there's still another dishonesty hiding in the terminology. The "weak value" doesn't actually generalize the "measured value". Instead, it generalizes the "expectation value".
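The contrast is easy to see in a tiny numerical example (my own sketch in Python/NumPy, using the two formulas quoted above): with nearly orthogonal "before" and "after" states for a spin-1/2 particle, the "weak value" of \(j_z\) comes out near 100, far outside the spectrum \(\{+1/2,-1/2\}\), while a genuine expectation value can never leave the interval \([-1/2,+1/2]\).

```python
import numpy as np

# Spin operator j_z for a spin-1/2 particle: eigenvalues +1/2 and -1/2
Sz = np.diag([0.5, -0.5])

# "Before" and "after" states chosen nearly orthogonal;
# theta is tuned so that <before|after> = cos(2*theta) = 0.005
theta = 0.5 * np.arccos(0.005)
before = np.array([np.cos(theta), np.sin(theta)])
after = np.array([np.cos(theta), -np.sin(theta)])

# "Weak value" per the mixed bra/ket formula quoted in the post
weak_value = (before.conj() @ Sz @ after) / (before.conj() @ after)

# Genuine expectation value: the same state on both sides
expectation = (after.conj() @ Sz @ after) / (after.conj() @ after)

print(weak_value)   # ~100: far outside the spectrum {+1/2, -1/2}
print(expectation)  # ~0.0025: safely inside [-1/2, +1/2]
```

The tiny denominator \(\braket{\phi_\text{before}}{\phi_\text{after}}\) is doing all the work: as the two states approach orthogonality, the "weak value" blows up without bound, which is exactly why it cannot be a value of \(j_z\).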

One may invent mumbo-jumbo explanations why it could be natural to consider both the initial state and the final state in similar ratios (an example is to model the weak measurement as a process that takes some time and requires a time-dependent Hamiltonian). But all this mumbo-jumbo is pure demagogy. The reason is that the information about a physical system in quantum mechanics (information only about the system, not some mixed information relating the systems to lots of extra conventions) may only be found out by "actual measurements", not by their "weak generalizations". The actual measurements may have some extra sources of inaccuracy but as long as they find out something about the system, they always disturb the system at least as much as Heisenberg and quantum mechanics in general state. In particular, if you're measuring the position of a particle with precision \(\Delta x\), you inevitably disturb the momentum at least by \(\Delta p = \hbar/(2\Delta x)\). Regardless of the claim in PRL, I can rigorously prove this version of Heisenberg's principle involving "disturbances", too.
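The intrinsic inequality itself can be checked numerically in a few lines (a minimal sketch of my own, with \(\hbar=1\) on a discretized line): a Gaussian wavepacket is the minimum-uncertainty state and saturates \(\Delta x\,\Delta p = \hbar/2\).

```python
import numpy as np

hbar = 1.0
x = np.linspace(-50.0, 50.0, 4096)
dx = x[1] - x[0]
sigma = 2.0

# Normalized Gaussian wavepacket (the minimum-uncertainty state)
psi = np.exp(-x**2 / (4.0 * sigma**2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Position uncertainty Delta x from the probability density |psi|^2
prob_x = np.abs(psi)**2 * dx
mean_x = np.sum(x * prob_x)
delta_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x))

# Momentum uncertainty Delta p from the Fourier transform of psi
p = 2.0 * np.pi * hbar * np.fft.fftfreq(len(x), d=dx)
prob_p = np.abs(np.fft.fft(psi))**2
prob_p /= prob_p.sum()
mean_p = np.sum(p * prob_p)
delta_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p))

print(delta_x * delta_p)  # ~0.5 = hbar/2: the bound is saturated, never beaten
```

Changing `sigma` only trades \(\Delta x\) for \(\Delta p\); the product stays pinned at \(\hbar/2\) for Gaussians and only grows for any other shape.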

That's what the new crackpot paper hyped by the BBC wants to deny – and it denies it just by changing the rules of the game and pretending that something that isn't a measurement is a measurement.

If you think about the formula for the weak value for a while, you will notice that it's really inconsistent because it suggests that one may know the final state just by making a procedure that yields a value \(WM(j_z)\) which, as I explained, is a generalized expectation value, not an actual value. This is of course an inconsistent mixture of quantities that may be measured during one particular repetition of the experiment with things that can only be extracted from a statistical treatment of many repetitions.

In proper quantum mechanics, you may only learn something about the "after state" if you actually know a quantity you have measured. This measured value gets imprinted into the nearly classical devices and brains as "effectively classical information", and one may show that the new state of the subsystem is the corresponding eigenstate, as one may exactly reproduce by a quantum analysis of the whole apparatus-plus-small-system composite system.
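A toy version of this statement can be written down explicitly (my own sketch, not anything from the papers discussed): a CNOT-type interaction copies the measured qubit's \(j_z\) value into a pointer, and tracing out the pointer leaves the system's density matrix diagonal, i.e. a classical mixture of the eigenstates with the Born-rule probabilities.

```python
import numpy as np

# System qubit in a superposition; pointer ("apparatus") starts in |0>
alpha, beta = 0.6, 0.8
system = np.array([alpha, beta])
pointer = np.array([1.0, 0.0])

# Measurement interaction: a CNOT copies the system's j_z eigenvalue
# into the pointer, imprinting effectively classical information
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
state = CNOT @ np.kron(system, pointer)

# Reduced density matrix of the system: trace out the pointer
rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
rho_sys = np.trace(rho, axis1=1, axis2=3)

print(rho_sys)  # diagonal, diag(0.36, 0.64): a classical mixture of eigenstates
```

The off-diagonal elements vanish exactly because the two pointer states are orthogonal; reading the pointer then selects one eigenstate with probability \(|\alpha|^2\) or \(|\beta|^2\).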

If you only know some generalized "expectation value", and the "weak value" is an example of that, you don't actually know anything about a particular repetition of the experiment, so you can't determine the "after state".

The claim "one may circumvent the uncertainty principle after all" is of course the most obvious application of the concept of the weak measurement by the pseudoscientists and we were only waiting for someone to make himself "famous" by this obvious piece of crackpot interpretation. However, the "weak measurement" has been misused for more modest pseudoscientific goals in the past, too. For example, the "weak measurement" has been claimed to "solve" Hardy's paradox.

What did that mean?

Well, obviously, there's no Hardy's paradox in quantum mechanics. Quantum mechanics predicts probabilities for combined results of any measurements regardless of the types of measurements that the experimenters choose to perform and regardless of their timing. The predictions are unique and the probabilities obey all the logical rules they have to obey (belonging to the interval \([0,1]\), being added when mutually exclusive outcomes are connected by "OR", causality, locality, and so on). So there can't possibly be any paradox! You either know the right prediction or you don't.

Hardy's paradox, much like all other pieces of "quantum recreational mathematics", is only paradoxical if you're trying to think about the world classically. So the claim that the "weak measurement has solved Hardy's paradox" was nothing else than another classical mumbo-jumbo justification for the indefensible, namely that one can construct a "classical model" that gives right predictions for situations such as Hardy's experiment. But this is only "possible" if one throws all standards out of the window and if one uses concepts such as "weak measurement" in such a deliberately vague way that one overlooks the fact that it isn't a measurement at all.

Let me mention that there are various legitimate ways to measure the position "approximately" so that the momentum doesn't get disturbed infinitely strongly. For example, one may ask what is the probability that the system finds itself in a state associated with a particular "fuzzy bump" in the phase space – which is given by just another vector in the Hilbert space. One may measure many such probabilities simultaneously (try to cover the phase space with similar bumps or their modifications) and obtain some approximate information both for \(x\) and \(p\) (phase cells have nonzero width and height) while both of them are only disturbed by a finite amount. In this "Ansatz" for a "not too pushy measurement", Heisenberg's inequality may again be proven rigorously; it's the same proof involving the error of position and the error of momentum, just applied at the "benchmark states" onto which we project instead of the actual state. Well, you must interpret the uncertainty of the position and the momentum in the "benchmark state" to be a part of the disturbance even though you may be lucky and have an actual state equal to a benchmark state (which means that the state won't be disturbed at all). But if you average over lucky and unlucky cases, you will be again able to prove that the averaged disturbances obey the inequality even if the disturbance only counts the change of the actual state vector.
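A minimal numerical sketch of such a "bump covering" (my own illustration with \(\hbar=1\); the grid spacings and bump widths are arbitrary choices): projecting a state onto a family of Gaussian phase-space bumps yields probabilities that peak near the state's actual \(x\) and \(p\), giving approximate information about both at once.

```python
import numpy as np

hbar = 1.0
x = np.linspace(-30.0, 30.0, 2048)
dx = x[1] - x[0]

def bump(x0, p0, s=1.0):
    """A normalized Gaussian 'bump' centered at (x0, p0) in phase space."""
    g = np.exp(-(x - x0)**2 / (4.0 * s**2) + 1j * p0 * x / hbar)
    return g / np.sqrt(np.sum(np.abs(g)**2) * dx)

# State to probe: a wider Gaussian centered at x = 3 with momentum p = 2
psi = bump(3.0, 2.0, s=1.5)

# Cover a patch of phase space with benchmark bumps and compute the
# probability |<bump|psi>|^2 for each of them
x_grid = np.linspace(-10.0, 10.0, 41)   # step 0.5
p_grid = np.linspace(-6.0, 8.0, 29)     # step 0.5
P = np.array([[abs(np.sum(bump(x0, p0).conj() * psi) * dx)**2
               for p0 in p_grid] for x0 in x_grid])

i, j = np.unravel_index(P.argmax(), P.shape)
print(x_grid[i], p_grid[j])  # peak near (3, 2): both x and p, approximately
```

The resolution of this covering is limited by the bump's own widths, which is exactly why the recovered \(x\) and \(p\) are only approximate and Heisenberg's inequality survives.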

There are other, effectively equivalent yet legitimate ways to limit the "disturbance" of the system for the price of learning the values of \(x\) and \(p\) only approximately but the "weak measurement" isn't one of them because it's an arbitrary "statistic" that always depends on the precise protocol defining which weak measurements we perform and how. A way to distinguish legitimate realizations of "inaccurate measurements of the position" from the illegitimate one is to analyze whether they obey Heisenberg's inequality for the "precision of position" and "disturbance of momentum". The legitimate techniques to measure obey this inequality; the illegitimate ones often violate it.

I never know whether people making the claims – or encouraging the journalists to make claims – that "we have proved that Heisenberg was wrong all along" are extraordinarily dumb or extraordinarily dishonest but the most frequent answer I tend to pick in similar situations is that it is something in between. Those people must subconsciously know that what they're saying is rubbish so they're not "intrinsically stupid"; but they just decide that the most convincing method to deceive other people is to deceive themselves, too. So they actually work hard on themselves so that at least superficially, they believe their own garbage.

At any rate, it's clear that at a deeper level, they must know that this research is pure garbage. It isn't a paper about a shocking experiment in which some physicists did something that happened to lead to surprising and important results. Instead, this new "BBC" paper and all similar papers always start with the conclusions. The authors decide what they want to "achieve", e.g. to prove that Heisenberg has always been wrong, and they just construct a required combination of loaded and distorted terminology and redundant experimental setups that make it look like that there is some science behind the preconceived and wrong conclusion.

Of course these people "intrinsically" know some basics of quantum mechanics and they used these basics to construct the experiment. Of course they knew in advance what the result would be and it's the result predicted by quantum mechanics. Of course there isn't an infinitesimal piece of evidence for the claim that quantum mechanics is in doubt. And in fact, the PRL paper doesn't claim that they cast doubt on quantum mechanics itself (only the BBC does); it "only" (still incorrectly) claims that they disprove Heisenberg's "generalized" uncertainty principle for the "error in position" and "disturbance of momentum by the measurement" by dishonestly redefining what the measured position is (a "generalized measurement" that can give you \(100\) for the spin isn't a legitimate measurement of the spin). But by a combination of vague and distorted redefinitions of concepts, loaded language, and "legitimate simplifications" expected from journalists, it is always possible to write down a paper that will lead to 100.0000% incorrect subtitles such as "pioneering experiments cast doubts on quantum mechanics" and I guess that this has been the goal from the beginning.

It's a nasty junk science and famehunting and all the scientists and journalists around this scandal are assholes.

And that's the memo.

## snail feedback (39) :

Fundamental physics is "refuted" on a weekly basis. I noticed this when I was 19 and had a subscription to New Scientist, which I ended up not renewing once I realized they never did follow-ups on all the heroes who took down Einstein.

I'm sure other people notice this too, and it's probably bad for the image of science. Bad journalism of science is a major problem, and something needs to be done, though I'm not sure what.

I saw this article, I thought, I'm not going to believe this until Dr. Motl says it's correct, I come here, and yup. Thanks for another great article.

Reading physics articles in such popular magazines and often other popular media too is no longer that much fun for me :-(.

90% of the time they write about and promote scum that tries to overthrow valid basics of physics without any good reason, just for the heck of it, and even if they manage to mention cool, nice, serious physics, they mostly do nothing but spit and spat on it.

The one explained in this article is a particularly bad textbook example that confirms this trend :-(

Everybody who has taken a QM class (not by one of the authors ...) knows that the weak value is not a measurement.

I'm getting fed up with crap being over-reported and over-hyped these days.

Dear Lubos,

this weak measurement thing really sounds incorrect.

You haven't mentioned it, but if the bra and ket vectors are different, then the weak value is generally a complex number even if the operator of the physical quantity is self-adjoint. This seems even worse to me than the 100 value for J_z.


Hi Lubos,

I agree that the "weak measurement" papers are crap. However, you made some conceptual mistakes in your paragraph starting "Fine, what is a weak measurement?...": By the projection postulate of QM, the value of your expression\[

\frac{ \bra{\phi_\text{after}}j_z\ket{\phi_\text{after}} }{ \braket{\phi_\text{after}}{\phi_\text{after}} }

\]is uniquely given by the eigenvalue associated with the state \(\ket{\phi_\text{after}}\), which must lie in a subspace associated with this eigenvalue. In the usual bad terminology, the wavefunction has "collapsed" to produce \(\ket{\phi_\text{after}}\) with a definite value of the observable \(j_z\).

On the other hand, the expectation value of an operator \(Q\) is, in your notation, given by\[

E(Q) = \frac{ \bra{\phi_\text{before}}Q\ket{\phi_\text{before}} }{ \braket{\phi_\text{before}}{\phi_\text{before}} }

\](note that "before", not "after", appears here) and is in general not an observable quantity in a single measurement, as it is a statistical average over the different components of the state \(\ket{\phi_\text{before}}\) decomposed into eigenstates of the operator \(j_z\).

Regards,

Fellow Quantum Theorist

(and long-time silent reader of your blog)

They should launch a new scientific magazine in which, beside every physics article, there would be a counter-argument article. Same with climate matters... No preconceived opinions, just the blunt theories explained, for the reader to deduce which is the most sensible theory. I've seen it in some magazines a couple of times about climate controversies and the 9/11 conspiracy theories. It was pretty good. However, the public doesn't like to be left alone making up its mind; people like to be guided, even if they feel the manipulation. Do we all own some critical thinking, I wonder?

Why the hell don't other scientists on the PRL board speak up, why doesn't PRL apologize for publishing that junk? Leaving it the way it is and staying silent means they are helping to popularize that crap and letting people think it is true science.


Right. Just to be sure, the phase doesn't cancel between the numerator and the denominator. Example: consider J_x and J_y for initial and final states being up, down in a spin-1/2 system. Clearly, one of them is real and one of them is imaginary. Too bad.

Journalists are like flies on a fresh cow pat :-)

This magazine already exists, of course. You can find it at: http://motls.blogspot.com

Thank me very much. You're welcome. ;-)

Haha, duh ! ;-)

Yep LOL ... :-D

This is my favorite science magazine :-)))

Hi Lubos, I was extremely busy in the real world, so I haven't posted on your blog for a while. Your article Paul Dirac's Forgotten Quantum Wisdom (http://motls.blogspot.com/2011/12/paul-diracs-forgotten-quantum-wisdom.html ) is very good. Media tend to hype worthless books like "Not Even Wrong" and "The Trouble with Physics", but they haven't noticed much more relevant books like Dirac's "The Principles of Quantum Mechanics". The Heisenberg picture, Feynman's argument involving the double-slit experiment and the arguments of Dirac in that book make it quite clear that the Copenhagen interpretation of quantum mechanics is exactly correct. In fact, after the formulation of the Copenhagen interpretation, only two useful concepts have been added to the fundamentals of QM: consistent histories and decoherence. These things should be clear to any physics graduate, but unfortunately we live in a world where even some top physicists make wrong comments regarding the Copenhagen interpretation. Let me point out something disturbing. In 2009, Yakir Aharonov was awarded the National Medal of Science. And what was the citation for the medal?

"For his contributions to the foundations of quantum physics and for drawing out unexpected implications of that field ranging from the Aharonov-Bohm effect to the theory of weak measurement." (https://www.nsf.gov/od/nms/recip_details.cfm?recip_id=5300000000461 )

Is this some kind of a joke? If Aharonov had won the medal for the Aharonov-Bohm effect, it would have been alright, but how could they award such a prestigious medal for a totally flawed concept like weak measurement?

The original paper by Aharonov et al. isn't bad. They designed a valid procedure to measure the expectation value by many "not too invasive" measurements. All the problems started with the overinterpretation of the individual "non-invasive measurements".

But even if the original paper were wrong, there is nothing shocking about prizes, especially not-too-prestigious ones such as the National Medal of Science, being given for bad work.

Dear Lubos,

I noticed an error in your description of normal strong measurements. You define E(A) as a sandwich with the "after state" bra and ket, and then you talk about it as the expectation value of the operator A. But the "after state" should be the eigenstate of A, so the sandwich is the eigenvalue, which is the actual result of the measurement. The "after state" is chosen randomly from the possible eigenstates, so if you calculate the expectation value from the possible eigenvalues and the probability distribution, you will find that it is equal to the sandwich with the "before state".

So the sandwich with the "before state" gives the expectation value, while the sandwich with the "after state" gives the actual result, where both the "after state" and the actual result are random.

You're right. In the correct QM, one gets the actual eigenvalue, one of them, different in each repetition. But in the weak measurement, because of the presence of the initial state which is not the chosen "post-selected" eigenstate, it's not the case. So that object, the "weak value", can't be identified with the particular repetition of the measurement - via the bra, it also depends on the pre-selected pre-measurement state. So it's a bizarre mixed object.

... but sometimes I think reading TRF too enthusiastically has done me no good ... ;-P

I'm so nerdy now that I cannot even read about contours in the complex plane without thinking, when looking at a picture of an open curve that crosses itself, about the emission of a graviton ... LOL :-D

It is with great frustration that one sees anti-quantum zeal begin to gain ground. What is more frustrating is that many want to paint this as a revolution in thinking and not several steps backward. I think it's that they never really understood quantum mechanics in the first place, and instead of trying to learn, they are trying to make the world conform to their expectations. Very troubling times.

Good points, Ehab. Unfortunately, I think that even PRL benefits from this coalition. When the BBC publishes an article saying that the whole of quantum mechanics is in doubt according to PRL, it still makes PRL more important because, while not a legitimate scientific research resource, the BBC is still considered a totally acceptable major mainstream "opinionmaker". So PRL enjoys its place in the unholy coalition. People who are in charge of things like PRL are "politicians playing PR games", too. They just happened to be put in positions where the science standards used to be high, but they don't mind forming new coalitions with people whose standards are really low.

I noticed at the bottom of the PRL article on this subject that PRL is seeking an assistant editor; maybe you could help them ;-)

Yep,

it would be much needed that things get cleaned up there ... !!!

But Lumo should be the editor in chief :-)

And crackpot theories and crackpot reporting will never change how nature really works. Sadly I suppose they can influence what sort of research gets funding and that's quite unfortunate, if so.

I am not going to pay for it, so I can't read the crackpot piece. There is a synopsis, though, which seems more correct than the press presentation. It distinguishes between the two notions: "... a rigorously proven relationship about uncertainties intrinsic to any quantum system, often referred to as 'Heisenberg's uncertainty principle.' Heisenberg originally formulated his ideas in terms of a relationship between the precision of a measurement and the disturbance it must create. Although this latter relationship is not rigorously proven, it is commonly believed (and taught) as an aspect of the broader uncertainty principle...."

I was taught that the former rigorous relationship implied, correctly, that no classical statement makes sense except as an approximation of expectations after measurement, and that the naive notion that the uncertainty arose from disturbance was an incorrect notion, sometimes used to placate laypersons and sometimes believed by those who could not accept the implication that classical concepts and our senses are not right.

So are the authors claiming more than that one can confirm what I thought I was taught was right? Whether or not a measurement can be made that reduces the disturbance closer to the limits of uncertainty without exceeding them seems a question of technology, not new science. Does the paper claim more? If that is all it claims, why would it shake any foundation? I apologize, but my last serious schooling in QM was about when Lubos was in kindergarten, and so I fail to see why this would be controversial or consequential if the synopsis quoted above is all there is to their paper.

what about this paper

http://pra.aps.org/abstract/PRA/v22/i6/p2362_1

It also came from U Toronto, but in 1980! By now the same people have written a book:

http://www.amazon.com/Fundamentals-Nuclear-Models-Foundational-v/dp/9812569561

So, the latest PRL is just the latest in a long series of papers ranging from Nature, PRL, Annals of Physics, European J. of Physics, Phys. Letters A, etc. Surely, they are all assholes, except those who like your blog.

Dear Michael, you build your claims of "seriousness" of the new PRL paper on the fact that it extends some work by a Mr Ozawa.

But the paper by Ozawa isn't terribly serious by itself. It's not really correct and you really don't want to confuse this freaky maverick paper with Heisenberg's actual discoveries; even sociologically, the Ozawa paper has just 66 citations after 10 years or so, which is weak, too. Moreover, the bulk of Ozawa's paper is a *proof* that a relationship holds. He has also invented some extra mutations of these principles that aren't valid; that's always possible. It's always possible to decide which of the versions is right and which of them is wrong within minutes. It's complete nonsense to suggest that these are some great questions that have to be solved for decades and described in PRL papers.

When defined properly, the Heisenberg uncertainty principle is valid in all of its forms stated by Heisenberg, including the disturbance-error relationship. Everything else is just wrong or a strawman.

The weak value is indeed a complex number and the imaginary part indicates a displacement in momentum space of the pointer after the protocol is carried out. While the weak value is not a measurement of the quantum system, as this article explains, it is a predictable value that is related to the system and the measurement protocol and is very often much easier to measure than other values, which makes it useful for calculating the other variables directly. For instance, you could set up an experiment to obtain a weak value that is exactly 100 times some other value (say... the refraction index of a crystal) and it would be a more precise measurement of it. Post-selection is quite useful.

You confuse the weak measurement with the weak value, dude. A weak measurement is a measurement where the initial state of the pointer is not the pure state marking 0. Weak measurements are in fact measurements: they measure, to a degree, the state of the quantum system prior to the interaction.

On the other hand, the weak value can only be observed through the use of weak measurements *and* strong measurements.

And yes, they don't reinvent QMs or break any QM law. What they do is break paradigms. For instance, they quantify the strength of the measurements by studying the entanglement between the pointer and the quantum system and removing the idealization that the initial state of the pointer is pure.

No, your comment is rubbish. The "weak measurement" and the measurement of the "weak value" are exactly the same thing, see e.g. page 11 above eqn 21 of

http://arxiv.org/pdf/quant-ph/0105101v2.pdf

None of those - identical - things is a measurement in the proper sense (the reason is that it's just not "strong measurement" and nothing else; using "strong measurement" at some substep wouldn't make it a strong measurement, anyway!) and none of these extra constructions changes any paradigms.

Also, it's complete nonsense to suggest that the assumption that the initial state is pure poses a limitation on the formalism of quantum mechanics. All mixed states behave according to laws that are completely determined by the behavior of the pure states, because of the linearity of the terms in the density matrix

\[ \rho = \sum_i p_i \ket{\psi_i}\bra{\psi_i}. \]

If one knows what happens with the pure states \(\psi_i\), one also knows what happens with any \(\rho\): it is just the usual statistical mixture of different possibilities! There is absolutely no breaking of paradigms. Moreover, the conventional weak measurement formalism works with pure states, anyway.
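This linearity is trivial to verify numerically; the following sketch (my own, in Python/NumPy, not from the discussion above) checks that the expectation value in a mixture equals the probability-weighted average of the pure-state expectation values:

```python
import numpy as np

# Two pure qubit states and classical mixing probabilities p_i
psi1 = np.array([1.0, 0.0])
psi2 = np.array([1.0, 1.0]) / np.sqrt(2.0)
p1, p2 = 0.3, 0.7

# Density matrix rho = sum_i p_i |psi_i><psi_i|
rho = p1 * np.outer(psi1, psi1.conj()) + p2 * np.outer(psi2, psi2.conj())

# Any observable, e.g. the Pauli x matrix
X = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Tr(rho X) equals the p_i-weighted average of pure-state expectations,
# so nothing new can possibly happen for mixed states
mixed = np.trace(rho @ X)
pure_avg = p1 * (psi1.conj() @ X @ psi1) + p2 * (psi2.conj() @ X @ psi2)
print(mixed, pure_avg)  # both ~0.7
```

The agreement is an identity of linear algebra, not an accident of this example; it holds for any observable and any mixture.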

Dr. Motl, thank you for providing this critique. I am not in the field and have only recently become infatuated with QM as a means of fascination and wonder. I am a retired attorney by trade, but am now making feature films and am cooking up a new screenplay from the world of QM. My excitement as a lay person has been fueled by the twin concepts of superposition and entanglement. My home and vehicle are crowded with texts and audio books spanning everything from Einstein to string theory. (ST actually makes me kind of sick and is rather boring to me). I digress.

So being in a sort of love affair with QM history (Feynman/Copenhagen/Shimony/Aspect) etc., I was truly rocked and disturbed by this paper on weak measurement that you have so eloquently trashed. And while I do not have the mathematical ability to understand the proofs of your critique, like another commenter here I feel that a better description of the results would have been "weak average"; had that terminology been used, the hoopla over this paper might have been avoided. But the mania over this paper did scare me into thinking that my original reasons for fascination might have been disposed of by the conclusions and hoopla surrounding this weak measurement phenomenon.

The March 2013 issue of Physics World even hypes "the new paradigm of 'Weak Measurement'". For my purpose, wanting to properly discuss the current state of QM in my script and not to give an outdated version, I began to doubt I had the ability to separate the wheat from the chaff. I started to consider dumping the project. But I was able to find enough evidence from people such as yourself to continue the journey.

But the paper nearly knocked me away from going any further down the path of QM enlightenment, as did the Afshar experiment, which I now know, from my research, has been soundly critiqued as well, although I would love to know your thoughts on the Afshar situation.

But now there's one more paper that has kind of brought a depression over me, and this one seems, to my lay eye, to be the most problematic yet. I'm surprised it hasn't been hyped as much as the weak measurement was. It was highlighted in Ars Technica as claiming a DIRECT violation of complementarity. I have included below the link to the article, "Wave-particle dualism and complementarity unraveled by a different mode", as well as a link to the paper, headed by R. Menzel, and I have included a link to a paper that supports the conclusions. Please let us know what you think of the following:

http://arstechnica.com/science/2012/05/disentangling-the-wave-particle-duality-in-the-double-slit-experiment/?comments=1&start=40

http://www.pnas.org/content/109/24/9314.full

Paper that supports it:

Challenges to Bohr's Wave-Particle Complementarity Principle

arxiv.org/pdf/1211.1916

It alleges that they were able to discern the exact which-path info and still maintain the interference pattern by using entangled particles. Please advise.

Dear Mr Wintzer, it's touching to hear about your excitement but if your excitement gets erased whenever you see a crackpot paper that claims that quantum mechanics is wrong, then you have almost no chance to complete the project because dozens of such crackpot papers are published every year, if not every month.

Mr Rabinowitz understands at most coaxial cables

http://scholar.google.com/scholar?q=%22Mario+Rabinowitz%22&hl=en&lr=&btnG=Search

and his comments about QM - that one can determine which-slit information *and* preserve the interference pattern - are pure rubbish. The paper seems to be an even more uninsightful piece of junk than the violation-of-complementarity paper by one Shahriar Afshar that I clarified many years ago (I even met with him and found out that he was faking everything, including his affiliations, to promote his garbage):

http://motls.blogspot.com/2004/11/violation-of-complementarity.html?m=1

The same is true for the crackpots in the Menzel et al. "experimental" team:

http://www.pnas.org/content/109/24/9314.full

Sorry, I don't want to spend my life interacting with similar hopelessly stupid hacks. You have three choices. You either learn QM well enough to see what's wrong with those papers; or you trust me that these papers, and all other papers in the future claiming that one can simultaneously determine the values of non-commuting observables, are fundamentally wrong; or you abandon or muddy your projects about QM and movies etc.

Good luck

Lubos

Dr. Motl, thanks for the quick reply. I am moving as fast as I can, and I have avoided many cranks, as you say. But when papers are published in peer-reviewed journals and hyped by the science media, a lay person such as myself is bound to become confused. So I turn to others such as yourself. I look for passion, and you seem to have plenty. Perhaps, should I ever get funding for this script, I could hire you as a technical consultant, although... you might make a good character in my film! But how would you go about learning the area, at least to some reasonable ability to read papers, if you were me? If you give me some kind of syllabus, I will take it seriously. Thanks

LOL, thanks but I will probably prefer avoiding becoming a movie star.

Peer-reviewed means "reviewed by peers", not "reviewed by infallible, perfect scientists". When too many peers are idiots, many peer-reviewed published papers end up being complete rubbish, and that's the situation we are already in.

OK, noted. But can you just take a look at the Menzel paper and give a very brief explanation of why it is bogus? I thought PNAS had some credibility. No?

Re the Menzel paper, the only real discussion I can find is by Chad Orzel: http://scienceblogs.com/principles/2012/06/04/single-photons-are-still-photons-wave-particle-dualism-and-complementarity-unraveled-by-a-different-mode/

And he admits to being a bit confused by the paper. I read the Afshar article and comments, and again was very impressed by your critique. Since nobody has taken apart the Menzel paper, perhaps you might consider enlightening us thereon. The paper makes a paradigm-shifting claim. Why is it wrong?

Dear Jet, the interference pattern that Menzel et al. see actually isn't created by the existence of the two slits at all. They observe the which-slit information through the other photon in the entangled pair, which is why they destroy the interference that would be created by the existence of the two slits. But this doesn't prohibit other sources of an interference pattern, and the whole extra superconstruction with the "other modes" in the paper is what makes the result look like an interference pattern even though the double slit isn't the reason for it. So complementarity is completely preserved - the relevant interference, the one that enters complementarity, is destroyed.
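The mechanism by which the entangled partner kills the two-slit fringes can be sketched in a toy model (this is an illustration of the general principle, not of the actual Menzel setup; the phases and idler states below are made up). The screen intensity is I(x) = |a_L|² + |a_R|² + 2 Re[a_L* a_R ⟨i_L|i_R⟩], so the cross term - the fringes - is multiplied by the overlap of the idler states tagging the two paths. Orthogonal tags (which-path info available) give overlap zero and a flat pattern:

```python
import numpy as np

# Screen positions and toy path amplitudes for a double slit
x = np.linspace(-5, 5, 1000)
aL = np.exp(1j * 2.0 * x)   # amplitude via the left slit (made-up phase)
aR = np.exp(-1j * 2.0 * x)  # amplitude via the right slit

# Idler states tagging the path taken by the signal photon
idler_same = (np.array([1, 0]), np.array([1, 0]))  # indistinguishable tags
idler_orth = (np.array([1, 0]), np.array([0, 1]))  # which-path info exists

def intensity(iL, iR):
    # I(x) = |aL|^2 + |aR|^2 + 2 Re[aL* aR <iL|iR>]
    overlap = np.vdot(iL, iR)
    return (np.abs(aL)**2 + np.abs(aR)**2
            + 2 * np.real(np.conj(aL) * aR * overlap))

I_fringes = intensity(*idler_same)  # cross term survives -> fringes
I_flat = intensity(*idler_orth)     # cross term vanishes -> flat

print(I_fringes.std() > 0.5, np.allclose(I_flat, 2.0))  # True True
```

The moment the idler carries the which-slit information, the two-slit cross term is gone, exactly as complementarity demands; any fringes seen on top of that must come from some other source.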

PNAS - well, it isn't publishing the top research in any of the related fields of theoretical physics. In fact, this is one of the first quantum mechanics papers in the journal's history. They don't have the folks to do it right.

Thank you for taking a look. Much appreciated. You may watch my multiple-award-winning post-apocalyptic feature film TOWERS for free at the following link (feel free to remove the comment after copying, don't want to be seen as spamming off-topic stuff). Please friend me on Facebook as well. Best, Jet Wintzer

http://vimeo.com/69130020
