The OPERA experiment has failed to find an error in its measurement of neutrino speeds, which shows them travelling faster than light. The earlier result was most strongly criticized for the statistical nature of the measurement, which involved fitting the timing profile of many observed events at Gran Sasso to the known shape of long-duration proton pulses at the source at CERN, 730 km away. This objection has now been quashed by using much shorter pulses, so that the effect can be seen with just a few neutrinos. While the previous measurement used data gathered over three years, this new confirmation took just a few weeks.
The crucial new plot is this one:
The timing of the neutrinos is spread over a 50 ns window but is still clearly separated from the zero mark that would correspond to travel at the speed of light. The spread could be due either to inaccuracies in the timing or, if the effect is real, to differences in the speeds of the neutrinos themselves. An interesting question is whether there is any correlation between the timing offset and the energy of the neutrinos; I don't know if they have that data.
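If per-event energies were ever published alongside the arrival times, the check would be simple. Here is a minimal sketch; the event sample is randomly generated for illustration, not real OPERA data:

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Purely hypothetical events: energies in GeV, timing offsets in ns.
# Generated with no built-in energy dependence, so r should sit near zero.
random.seed(0)
energies = [random.uniform(5, 50) for _ in range(100)]
offsets = [60 + random.gauss(0, 10) for _ in energies]

r = pearson(energies, offsets)
# a value of |r| far from zero would hint at energy-dependent speeds
```

With only a handful of events the statistical error on r would of course be large, but even a crude version of this test would be informative.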
In fact this spread is the most exciting part of the new result. As far as I know it is bigger than the known systematic errors. If there were an unknown systematic error in the measurement of the distance between the experiments, or in the timing, we would expect it to be constant. Here I am assuming that atomic clocks were used at each end to keep the timing stable, rather than constant referral to GPS time, which could vary. If so, the spread actually rules out several other sources of systematic error.
Factoring in my prior probabilities from preconceived theoretical prejudices, I can now say that the probability of the result being correct has increased from one in a million to one in 100,000 (numbers are illustrative 🙂 ). This is sufficient to convince most of the collaboration to sign the paper, which may now go forward to peer review. To convince more theorists they may need to do more checks on the result. The strongest criticisms will now fall on the use of GPS. To eliminate these they should check the timing and the distance calculation independently. The timing could be checked by flying a portable atomic clock from CERN to OPERA and back at high speed on a helicopter to calibrate the clocks at either end. Portable clocks can be stable to within a few nanoseconds over the course of a day, so it should be possible to carry out this check with sufficient accuracy, and it would not be too expensive. The distance measurement also needs to be repeated, preferably using old-fashioned surveying techniques rather than GPS between the two locations.
If this also fails to find an error, then the probability of the result being correct goes up to one in 10,000. The next most likely source of error is the timing measurement for the collisions that generate the neutrinos at CERN. This involves some electronics with a lag that may not be precisely known. To eliminate it they may need to build a near detector to catch neutrino events in the path of the beam near CERN. If the beam is everywhere deep underground this could be an expensive addition to the experiment, but it would be a very significant check, taking the probability of the result being real up to one in 100 or better, depending on what other possible sources of error might be left.
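The odds bookkeeping in the last two paragraphs is just repeated multiplication; as a toy trace (all the factors are illustrative, exactly as the numbers in the text are):

```python
# Toy bookkeeping of the (illustrative!) probabilities quoted above:
# each independent check that fails to find an error multiplies the
# probability that the result is correct by some fixed factor.
p = 1e-6    # theoretical prior: one in a million
p *= 10     # short-pulse rerun          -> one in 100,000
p *= 10     # clock flight + new survey  -> one in 10,000
p *= 100    # near detector at CERN      -> one in 100
```

The factor for each check is a judgment call about how much of the remaining error budget it closes off, which is why the near detector, eliminating the biggest remaining unknown, gets the largest one here.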
Really confirming that neutrinos are faster than light will require independent measurements from other labs that could not be subject to correlated errors. Hopefully these will arrive next year.
For more details see TRF where this was reported two days ago based on a tachyonic version of twitter, or AQDS using conventional light-speed technology from arXiv. The official press release is here.
Update: People are telling me that the timing calibration has already been done. An update from Dorigo makes some interesting points, including the fact that the timing depends on a 20 MHz clock signal. This explains the spread of the measurements over 50 ns. In fact it means that the true time offset must be very sharp, which is not such good news: it makes a constant systematic error seem much more likely.
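The 20 MHz figure explains the spread by simple arithmetic: one clock period is 1/(20 × 10⁶) s = 50 ns, so each event is stamped at the next tick, and even a perfectly sharp arrival time smears over a full period. A minimal simulation of that quantization (the 60 ns offset below is just a stand-in for the reported early arrival):

```python
import math
import random

CLOCK_PERIOD_NS = 50.0  # a 20 MHz clock ticks once every 1 / 20e6 s = 50 ns

def timestamp(event_ns, phase_ns):
    """Stamp an event at the first clock tick at or after it."""
    k = math.ceil((event_ns - phase_ns) / CLOCK_PERIOD_NS)
    return phase_ns + k * CLOCK_PERIOD_NS

# A perfectly sharp arrival offset, sampled with random clock phase:
random.seed(1)
true_offset_ns = 60.0  # stand-in for the reported early arrival
measured = [timestamp(true_offset_ns, random.uniform(0, CLOCK_PERIOD_NS))
            for _ in range(10_000)]

# Even though every event "arrived" at exactly 60 ns, the recorded
# times smear out over a full 50 ns clock period.
spread = max(measured) - min(measured)
```

This is why the observed 50 ns spread is consistent with a single sharp offset, and hence why a constant systematic error becomes harder to rule out.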
I think another essential upgrade to the experiment would be to record a timestamp for events with nanosecond accuracy.