I am often asked what I think of this or that report on an anomaly in particle physics, like the B meson anomaly at the Large Hadron Collider that made headlines last month, or the muon g-2 anomaly that is currently in the news. But I figured that instead of just giving you my opinion, which you may or may not trust, I will give you some background to help you judge the relevance of such headlines for yourself. Why are there so many anomalies in particle physics? And how seriously should you take them? That's what we'll talk about today.
The Higgs boson was discovered in 1984. I'm serious. The Crystal Ball experiment at DESY in Germany saw a particle that fit expectations as early as 1984. It made it into the New York Times with the headline "Physicists Report Mysterious Particles". But the supposed mystery particle turned out to be a fluctuation in the data. The Higgs boson was only discovered in 2012, at the Large Hadron Collider at CERN. And 1984 was quite a year, because supersymmetry was also "observed", and then vanished.
How can this happen? Particle physicists calculate what to expect in an experiment based on the best theory they have at the time. This is currently the standard model of particle physics. In 1984, this would have been the standard model without the particles that had not yet been discovered.
However, theory alone does not tell you what to expect from a measurement. You also have to take into account how the experiment is set up, for example which beam is used and at what luminosity, how the detector works, and how sensitive it is. Together, theory, setup, and detector give you an expectation for your measurement. What you are looking for are deviations from this expectation, because such deviations would be evidence of something new.
Here is the problem. These expectations are always probabilistic. They don't tell you exactly what you will see; they only give you a distribution of possible outcomes. This is partly due to quantum indeterminism, but partly just due to classical measurement uncertainty.
Hence, it is possible that you will see a signal where there isn't one. Suppose I randomly place one hundred points on this square. If I divide the square into four equal parts, I expect about twenty-five points in each quadrant. Indeed, that turns out to be about right for this random distribution. Here is another random distribution. Looks reasonable.
Now let's do this a million times. No, actually, we won't do that. I've had my computer do it a million times, and here is one of the results. Whoa. That doesn't look like coincidence! It looks like something is dragging the dots into that one quadrant. Maybe it's new physics!
No, it is not new physics. Note that this distribution, too, was created randomly. There's no signal here; it's all noise. It's just that, every now and then, noise looks like a signal.
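If you want to play with this yourself, here is a minimal sketch of the same idea (not the author's actual simulation, and the trial count and threshold are my own choices): scatter one hundred random points in a square, count how many land in each quadrant, and repeat many times. Sooner or later, pure noise produces a quadrant that looks suspiciously overcrowded.

```python
import random

def quadrant_counts(n_points=100, rng=random):
    """Scatter n_points uniformly in the unit square and count
    how many land in each of the four quadrants."""
    counts = [0, 0, 0, 0]
    for _ in range(n_points):
        x, y = rng.random(), rng.random()
        counts[2 * (y >= 0.5) + (x >= 0.5)] += 1
    return counts

rng = random.Random(42)  # fixed seed so the run is reproducible
trials = 10_000

# Track the fullest quadrant seen across all trials. The expectation
# is 25 points per quadrant, yet over many repetitions some quadrant
# can end up holding 40 or more points by chance alone.
most_extreme = 0
for _ in range(trials):
    most_extreme = max(most_extreme, max(quadrant_counts(rng=rng)))

print("fullest quadrant across all trials:", most_extreme)
```

Each single trial usually looks unremarkable; it's the repetition that manufactures the apparent "signal".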
For this reason, particle physicists, like scientists in other disciplines, assign their observations a "confidence level", which quantifies how "confident" they are that the observation was not a statistical fluctuation. To do this, they calculate the probability that the supposed signal could have been generated purely by chance. If fluctuations produce a signature like the one you're looking for one time in twenty, the confidence level is 95%. If fluctuations produce it one time in a hundred, the confidence level is 99%, and so on. The higher the confidence level, the more remarkable the signal.
But exactly at which confidence level you declare a discovery is a convention. Since the mid-1990s, particle physicists have used a confidence level of 99.99994 percent for discoveries. That's about a one-in-a-million chance that the signal was a random fluctuation. It is also often referred to as 5σ, where σ stands for one standard deviation. (This relationship, however, only holds for the normal distribution.)
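The translation between sigmas and probabilities is just the Gaussian tail, which you can compute with the standard library alone. A small sketch (my own illustration, not from the original; note that the exact figures depend on whether you count one tail or both, and the conventions differ between experiments):

```python
import math

def tail_probability(sigma):
    """One-sided Gaussian tail: the probability that pure noise
    fluctuates at least `sigma` standard deviations upward."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# 2 sigma is roughly 1 in 44, 3 sigma roughly 1 in 740,
# and 5 sigma roughly 1 in a few million.
for sigma in (2, 3, 4, 5):
    p = tail_probability(sigma)
    print(f"{sigma} sigma: p = {p:.2e}  (about 1 in {1 / p:,.0f})")
```

This is why a 3σ anomaly is interesting but nowhere near a discovery: a one-in-a-thousand accident is cheap when you run thousands of analyses.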
But of course, deviations from the expectation attract attention even below the discovery threshold. Here is a little more history. For all we currently know, quarks are elementary particles, meaning that we have not seen any substructure. But many physicists have speculated that quarks could themselves be made up of smaller things. These smaller particles are usually referred to as "preons". They were "found" in 1996. The New York Times reported, "The smallest nuclear building block may not be the quark." The significance of the signal was about three sigma, which is about a one-in-a-thousand chance of it being random, and about the same as the current B meson anomaly. The supposed quark substructure, however, was a statistical fluctuation.
In 2000, the Higgs was "discovered" again, this time at the Large Electron Positron collider at CERN. It was an excess of Higgs-like events that made it to nearly 4σ, which is a probability of about one in sixteen thousand of being a random fluctuation. That signal is gone, too.
Then, in 2003, supersymmetry was "discovered" again, this time in the form of a putative sbottom quark, the hypothetical supersymmetric partner of the bottom quark. This signal was also around 3σ, and then disappeared.
And in 2015, we saw the diphoton anomaly, which made it above 4σ before disappearing again. There have even been six-sigma signals that disappeared, though these were not interpreted in terms of new physics.
For example, in 1998 the Tevatron at Fermilab measured some events, termed "superjets", at 6σ. They were never seen again. In 2004, the HERA collider at DESY saw pentaquarks – that is, particles made up of five quarks – with a significance of 6σ, but this signal also disappeared. And then there is the muon g-2 anomaly, which recently increased from 3.7σ to 4.2σ but still hasn't crossed the discovery threshold.
Of course, not all vanished discoveries in particle physics were due to fluctuations. For example, in 1984 the UA1 experiment at CERN saw eleven particle decays of a certain type when only 3.5 were expected. The signature matched what was expected for the top quark. Physicists were pretty optimistic they'd found the top quark, and that news also made it into the New York Times.
It turned out that they had misjudged the expected number of such events; there really wasn't anything out of the ordinary. The top quark was only discovered in 1995. Something similar happened in 2011, when the CDF collaboration at Fermilab saw an excess of events at around 4σ. These weren't fluctuations, but they required a better understanding of the background.
And then, of course, there are potential problems with the data analysis. For example, there are various tricks you can play to inflate the supposed significance of a signal. This basically doesn't happen in collaboration papers, but sometimes you see individual researchers using very, um, creative methods of analysis. And there can be systematic problems with the detection, the triggers, the filters, and so on.
In summary: possible reasons why a discovery can disappear are (a) fluctuations, (b) miscalculations, (c) analysis errors, and (d) systematics. If you just look at the history, the most common reason is fluctuations. And why are there so many fluctuations in particle physics? It's because particle physicists have a lot of data. The more data you have, the more likely you are to find fluctuations that look like signals. Incidentally, this is why particle physicists introduced the five-sigma standard in the first place. Otherwise they would constantly have "discoveries" that disappear.
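The "more data, more fluctuations" point can be made quantitative with a back-of-the-envelope calculation (my own sketch, treating the searches as independent, which real analyses are not): if each search has a small chance of a chance fluctuation, the chance that *at least one* of many searches shows one grows rapidly.

```python
import math

def chance_of_false_alarm(n_searches, sigma):
    """Probability that at least one of n independent searches
    fluctuates past the given significance (one-sided Gaussian tail)."""
    p_single = 0.5 * math.erfc(sigma / math.sqrt(2))
    return 1 - (1 - p_single) ** n_searches

# With ~1000 independent searches, a fake 3-sigma "signal" somewhere
# becomes more likely than not; at 5 sigma it stays rare.
for n in (1, 100, 1000, 10_000):
    print(f"{n:>6} searches: P(fake 3-sigma somewhere) = "
          f"{chance_of_false_alarm(n, 3):.3f}")
```

That, in a nutshell, is the case for the five-sigma convention: it keeps the false-alarm rate manageable even across thousands of analyses.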
What about that B meson anomaly at the LHC that made headlines recently? It has been around since 2015, but recently a new analysis came out, and it was back in the news. It currently lingers at 3.1σ. As we've seen, signals of this strength go away all the time, but it's interesting that this one stays rather than going away. That makes me think it's either a systematic problem or, indeed, a real signal.
Note: I have a lengthy comment on the most recent muon g-2 measurement here.