On April 7, particle physicists around the world were thrilled by the announcement of a measurement of the behavior of muons – the heavier, unstable subatomic cousins of electrons – that differed significantly from the expected value.
When historians look back on this moment a century from now, will they understand this excitement? They certainly won’t see a major turning point in the history of science. No riddle was solved, no new particle or field discovered, no paradigm of our view of nature overturned. What happened on April 7 was simply an announcement that the wobbling of the muon – a value called g-2 – had been measured a little more precisely than before, and that the international community of high-energy physicists was therefore a little more confident that particles and fields still remain out there to be discovered.
Nonetheless, a historian of science might well regard this as a special moment – not because of what was measured, but because of how the measurement was made. The first results of the experiment at Fermilab were the product of a remarkable and perhaps even unprecedented series of interactions among an extraordinarily diverse range of scientific cultures that developed independently yet came to depend on one another over 60 years.
Early theoretical calculations of g-2 according to quantum electrodynamics got a jolt in 1966, when Cornell theorist Toichiro Kinoshita realized that his earlier studies had prepared him well to work out its value. His first calculations were done by hand, but they soon became too unwieldy to be done this way, and he came to depend on computers and specialized software. To make the prediction ever more precise, he had to incorporate the work of various groups of theorists who specialized in the large and diverse range of interacting particles and forces that subtly affect the g-2 value. (Kinoshita is retired, and today more than 100 physicists work on the theoretical value.) The result was a specific prediction, drawing on the input of many theorists, whose tiny error bar made it a clear experimental target.
The first experimental work on a g-2 measurement, which began at CERN in 1959, involved a multi-stage process. The experimenters used a particle accelerator to create unstable particles called pions and directed them into a flat magnet, where the pions decayed into muons. The muons were made to circle inside the magnet, and the circulating muons gradually “walked” down its length. They emerged at the other end of the magnet into a field-free region where their orientation could be measured, allowing the experimenters to infer their g-2.
The next experiment, which began at CERN in 1966, used a more powerful accelerator to produce larger numbers of pions and injected them into a storage ring five meters in diameter, with a magnetic gradient to contain the resulting hordes of muons. The third CERN experiment, which started operating in 1969, was a great step forward. It used a much larger storage ring, 14 meters in diameter, and ran at a certain “magic” energy at which electric fields would not affect the muon spin. This made a uniform magnetic field possible, which drastically sharpened the sensitivity of the measurement. With this increased sensitivity, however, came new sources of instrumental noise that sabotaged precision. New methods had to be devised to reduce uncertainties in the magnets and to measure the magnetic field.
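The “magic” energy can be sketched numerically. In the standard spin-precession formula for a stored muon, the term contributed by electric focusing fields is proportional to a_μ − 1/(γ² − 1), which vanishes when γ = √(1 + 1/a_μ). A minimal sketch (the numeric values for the anomaly a_μ and the muon mass are standard reference figures, not taken from this article):

```python
import math

# Muon anomaly a_mu = (g-2)/2 and muon mass (standard reference values)
A_MU = 0.00116592        # dimensionless
M_MU_GEV = 0.1056584     # muon mass in GeV/c^2

# Electric fields leave the spin precession untouched when
#   a_mu - 1/(gamma^2 - 1) = 0   =>   gamma_magic = sqrt(1 + 1/a_mu)
gamma_magic = math.sqrt(1.0 + 1.0 / A_MU)

# Corresponding momentum and total energy of the stored muons
p_magic = M_MU_GEV * math.sqrt(gamma_magic**2 - 1.0)  # GeV/c
e_magic = gamma_magic * M_MU_GEV                      # GeV

print(f"gamma_magic ~ {gamma_magic:.2f}")    # ~29.3
print(f"p_magic     ~ {p_magic:.3f} GeV/c")  # ~3.094 GeV/c
print(f"E_magic     ~ {e_magic:.2f} GeV")    # ~3.1 GeV
```

The resulting energy of roughly 3.1 GeV is exactly the muon energy the later experiments, described below, ran at.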
The fourth generation of g-2 experiments, begun in 1999 at Brookhaven National Laboratory, required still more years of arduous struggle to beat back sources of error and control various confounders. As in the third CERN experiment, it used a storage ring, 3.1-giga-electron-volt muons, the magic energy and a uniform field. Unlike the CERN experiment, however, it had a higher flux, muon injection instead of pion injection, superconducting magnets, a cart of NMR probes that could roam the vacuum chamber to check the magnetic field, and a kicker in the storage ring.
These and other features added to the complexity and cost of the experiment, in which 60 physicists from 11 institutes took part. The experiment ended in 2004; the g-2 storage ring was later transported from Brookhaven to Fermilab, where it was reinvigorated, remodeled and operated with a host of increasingly subtle and refined new tricks needed to push the limits of precision still further. Ultimately, all of these overlapping decades of work together produced the measurement announced this month, one with an error bar tiny enough to make comparison with the theoretical prediction meaningful – for by then the prediction, too, had a tight error bar.
The late Francis Farley, spokesman for the very first g-2 experiment at CERN, once told me: “What theorists do and what we experimenters do is completely different. They talk about Feynman diagrams, amplitudes, integrals, expansions, and a whole load of complex math. We hook up an accelerator to send beams through pipes and steering magnets into the device itself, which is filled with wires, thousands of cables, timing devices, sensors, and the like. They are two completely different worlds! But they come out with a number, we come out with a number, and those numbers match to parts per million! It’s incredible! That, for me, is the most amazing thing!”
At the announcement on April 7, the participating physicists showed a graph with two error bars, one for the theoretical prediction and one for the experimental measurement. All the fuss came from the tiny but undeniable gap between the two – 2.5 parts per billion. Had either bar been wider, it would have merged into the other, and the measurement would not have shown that new physics was waiting to be discovered. To carry out the experiment, the scientific community and the government agencies that funded it had placed tremendous trust in the international team.
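How decisive that gap is depends on the widths of the two error bars. As a rough sketch of the comparison, using the anomaly values published with the April 2021 announcement (the combined experimental result and the theory prediction, in units of 10⁻¹¹ – these specific numbers come from the published results, not from this article), the tension is the difference divided by the errors combined in quadrature:

```python
import math

# a_mu values in units of 1e-11 (2021 announcement: combined
# Fermilab+Brookhaven measurement vs. the consensus theory prediction)
exp_val, exp_err = 116592061.0, 41.0
thy_val, thy_err = 116591810.0, 43.0

gap = exp_val - thy_val                      # size of the discrepancy
combined_err = math.hypot(exp_err, thy_err)  # errors added in quadrature
sigma = gap / combined_err                   # significance of the gap

print(f"gap       = {gap:.0f} x 1e-11")
print(f"combined  = {combined_err:.0f} x 1e-11")
print(f"tension   ~ {sigma:.1f} standard deviations")  # ~4.2 sigma
```

If either error bar were substantially wider, the same central values would yield a much smaller sigma, and the gap would dissolve into statistical noise – which is why decades of work went into shrinking both bars.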
I think what will astonish historians of science in the future is that today’s scientists could produce this measly, yet nonetheless revealing, gap.
This is an opinion and analysis article.