Top Ten Bits of Ig Nobel Trivia

September 29, 2011

As a prelude to the official Nobel prizes that will be announced next week, the less austere Ig Nobel prizes will be awarded at a ceremony today. The event will be webcast starting at 7:30pm Boston time. Here is a trailer and ten interesting facts:

  1. The Ig Nobel prizes were first awarded in 1991 and included three awards for research that was simply made up, as well as a peace prize to Edward Teller for the H-bomb.
  2. The prizes were originally described as being awarded for research that “cannot, or should not, be reproduced”, but this was later changed to the more ingratiating slogan that the research should “make people LAUGH, and then THINK”.
  3. The name of the award should be pronounced IG-NO-BELL to emphasize its relation to the Nobel Prize, rather than like the word ignoble.
  4. Two Ig Nobel prizes have been awarded for homeopathy, one in 1991 and another in 1998.
  5. It used to be a tradition to throw paper airplanes onto the stage as the prizes were awarded, but this had to be stamped out for security reasons.
  6. In 1995 Robert May, Baron May of Oxford and then chief scientific advisor to the British government, requested that the organizers no longer award Ig Nobel prizes to British scientists. He said that the awards risked holding genuine experiments up to ridicule.
  7. Despite the air of ridicule that comes with the award of an Ig Nobel prize, several winning lines of research have had practical value, including the observation that mosquitoes are as much attracted to the smell of Limburger cheese as they are to human feet.
  8. When Andre Geim won the Nobel prize last year for his work on graphene he became the first person to have won both the Nobel and the Ig Nobel. The latter was awarded to him 10 years earlier for levitating frogs.
  9. Despite the reputation for toilet humor, the Ig Nobel prize citations have only mentioned toilets, urine and poo once each (Update: the urine count has gone up to two).
  10. The Ig Nobel blog has ridiculed us at least twice this year, but we are still waiting for the award of a prize to a paper from our archive.

Shutdown approaches for the Tevatron

September 28, 2011

They say that all good things must end and that is certainly true for the Tevatron. The US hadron collider based at Fermilab will finally shut down in just two days’ time. The last moments, when the switch is thrown to kick the store of protons and anti-protons into the graphite dump blocks, will be webcast live from the control rooms at 2 p.m. CDT on the 30th of September. There will be celebrations but they are likely to be somewhat muted. The operators would have liked to continue for the chance of finding the Higgs boson before their European rivals at CERN, but Congress refused the funding.

The Tevatron’s greatest discovery was the top quark, which it found in 1995. In 2002 the new higher-luminosity Run II began. Between then and the final day the two detectors, CDF and D0, will have recorded about 10.7/fb of collision events. Run II has had many smaller successes, including the exclusion of the Higgs boson over a range of masses, but it has missed out on the opportunity to take the prize for its discovery. That glory will now almost certainly go to the LHC. When Run II began the LHC had just been approved and was expected to start running in 2005. Project delays and funding reviews pushed that back to 2007. Then further delays due to accidents that damaged the magnets meant that the final startup came at the end of 2009, with the first serious data being recorded in 2010. It is tempting to wonder whether things would have been done differently had it been known earlier just how late the LHC would start operating.

The recent decision to end the Tevatron was, I think, the right one. It had become clear that the LHC results would overshadow anything that it could now produce. Fermilab has remained optimistic that their final total will help find the Higgs. If you add the Tevatron data to the LHC Higgs plots it makes a difference at the low-mass end where we now think the Higgs is hiding. In this plot the black line is the unofficial combination for the latest LHC data while the red line shows a fuller combination with CDF and D0 included too.

But that is not the whole story. While the LHC is starting to see bumps that might be the first hints of where the Higgs could lie, the Tevatron just sees a broad and statistically weak excess. The difference may be that the detectors at the LHC have much better energy resolution. They are next generation gadgets in a world where technology is moving very fast. At low mass the Higgs resonance is narrow and they will need good energy resolution to see it well.

I am not sufficiently well versed in the lore of detectors to know if the Tevatron could have beaten the LHC to the Higgs discovery if they had put more into the collider before the start of Run II. I know that some opportunities to upgrade the detectors were passed over. Perhaps that would have been sufficient to see the signal the Higgs produces. With the benefit of hindsight we can see that things might have been done differently if they had known of the delays that would befall the LHC.

The race between the LHC and the Tevatron has been a classic hare-and-tortoise story, and as in the original fable the hare took its time to get going. Two years ago, when the LHC was just starting up, the Tevatron had already excluded the Higgs at 170 GeV. If it had been sitting at that mass there is little doubt that the tortoise would have claimed the prize. In this real-life version of the tale, however, the distance to go was a little longer and now the hare is coming through to take the finish line.

This will certainly not be the end of the story for the accelerator complex at Fermilab. Stopping the Tevatron gives the neutrino experiments a chance to take over the injectors. Two new neutrino experiments will be added to help in a field of research where there are many mysteries to be resolved. Another line of development known as Project X will explore the intensity frontier with muon beams, and perhaps eventually an ambitious muon collider will be funded. The future for collider physics in the US now depends on the choices that the government has to make.

LHC prospects for 2012

September 28, 2011

The Large Hadron Collider continues its remarkable 2011 run with the 4/fb mark passed today for all-time delivered luminosity (depending on whose stats you believe). With four more weeks of proton physics left this year, some of it reserved for TOTEM, we can expect the final count to reach 5/fb if the present run efficiency is maintained.

Some of you will want to remind me that earlier in the year I optimistically predicted a total of 10/fb. At that time the expected peak luminosity for this year was around 1.7/nb/s, but by pushing bunch intensity, emittance and squeeze beyond design limits they have actually doubled this to 3.3/nb/s. The reasons that this did not lead to even more integrated luminosity were (A) they took a bit longer than I expected to ramp up to maximum and (B) the running efficiency has not been quite as good as it could have been.

Nevertheless the run for this year still counts as a humongous success given that the original target was just 1/fb for the whole year. Collecting five times that amount means that they now have enough data to catch a glimpse of the long-sought-after God particle whose mass is now rumored to be 119 GeV while his partner Goddess particle sits nearby at 140 GeV. The next conference where more new information may be released is Hadron Collider Physics 2011 due to open in Paris on 14th November, just three weeks after ATLAS and CMS stop recording data for the year.

What Run Parameters for 2012?

The next question is how much data they can collect during 2012. A previous tentative schedule that would have delayed the start of the run to allow more work during the winter shutdown has apparently been ditched. That is probably a good thing, because with no beyond-standard-model physics emerging at 7 TeV yet they will now want to get to the design energy of 14 TeV as soon as possible. Even so, that will not happen until 2014. Meanwhile the draft schedule for 2012 looks like this.

Possible scenarios for the 2012 running parameters according to Mike Lamont are as follows

A couple of crucial questions not yet decided are what energy to run at and whether to move to 25ns bunch spacing or stick with 50ns. Whether they can go to 4 TeV per beam, for a centre-of-mass energy of 8 TeV instead of 7 TeV, can only be determined during the winter shutdown using thermal amplifier tests to check the splices. A higher energy of 9 TeV is not ruled out but will probably be deemed too risky. Let’s assume they will run at 8 TeV.

The next question is about the bunch spacing. In principle moving to 25ns doubles the number of proton bunches that can be fitted in, which would double the luminosity, but you can see from the table above that this is not quite the case. Firstly, the emittance at 25ns would not be so small because of limitations of the injector chain. Secondly, with more bunches they cannot push the bunch intensity quite so high because the extra load on the cryogenic systems would just be too much. In fact the projected luminosity may be less at 25ns. So far they have only run with 216 bunches at 25ns and it is not certain what the limitations may be. So why would they still consider 25ns?

The effects of Pile-up

The answer is pile-up, meaning the number of collision events that take place in each bunch crossing. This number, shown as “Peak mean mu” in the table, has already gone up to an average of 16. This was much more than the experiments expected to see at this stage of the game and they have had to work hard to cope with it. There is a limit to how many sets of event data the onboard computers can register and save before the rate gets too high. To control this they need to adjust the trigger settings to catch just the most interesting events. They also have to work harder to reconstruct the collision vertices, which can be very close together.
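As a sanity check on that number, the mean pile-up is just the inelastic collision rate divided by the bunch-crossing rate. The cross-section, bunch count and revolution frequency below are round numbers I have assumed for illustration, not official machine parameters:

```python
SIGMA_INEL_NB = 7.0e7    # assumed inelastic pp cross-section, ~70 mb expressed in nb
F_REV_HZ = 11245.0       # LHC revolution frequency
N_BUNCHES = 1380         # assumed number of colliding bunches at 50ns spacing

def mean_pileup(lumi_nb_per_s):
    """Mean number of inelastic collisions per bunch crossing."""
    event_rate = lumi_nb_per_s * SIGMA_INEL_NB   # collisions per second
    crossing_rate = N_BUNCHES * F_REV_HZ         # bunch crossings per second
    return event_rate / crossing_rate

print(round(mean_pileup(3.3), 1))
```

With a 3.3/nb/s peak luminosity this comes out at about 15 collisions per crossing, in the right ballpark for the quoted average of 16.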

More uncertainty is unavoidable as the pile-up increases and this affects some analyses more than others. The worst hit results are the ones that look for missing energy and momentum, a classic signal of particles leaving the experiment undetected. These could be ordinary neutrinos or massive particles responsible for dark matter that are as yet unknown to science. To tell the difference they need to account for all the missing energy and momentum by adding up the individual contributions from every particle that is detected and subtracting the total from the energy and momentum of the original particles in the collision. Given the precise energy and momentum of a particle you can work out its mass. The ATLAS and CMS detectors have been designed to have an exceptional ability to catch as many of these particles as possible and measure them accurately, but if two collision events are too close it becomes hard to be sure which event each particle came from, and the energy resolution suffers.

As well as searches for dark matter and supersymmetry, some channels used in the Higgs boson search are being hit by the high pile-up seen this year. The WW decay mode has been particularly affected because the W boson decays always produce neutrinos, which are not directly detected. The lower energy resolution means that this channel is not going to be so good for identifying the mass of the Higgs boson and may even mislead us into thinking it has been ruled out at its real mass. Luckily the Higgs at low mass can also decay into two hard photons or two Z bosons that decay into leptons. These are easily detected and the higher luminosity means that they are seen more often. These are now the best channels to look for the Higgs at the LHC.
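The missing-energy bookkeeping described above can be illustrated with a toy calculation; the function below is purely schematic (real reconstruction works with calorimeter objects and vertex assignment, which is exactly what pile-up makes difficult):

```python
import math

def missing_et(particles):
    """Magnitude of the negative vector sum of transverse momenta.
    `particles` is a list of (pt, phi) pairs for every detected particle."""
    px = -sum(pt * math.cos(phi) for pt, phi in particles)
    py = -sum(pt * math.sin(phi) for pt, phi in particles)
    return math.hypot(px, py)

# Two back-to-back particles balance, so nothing is missing;
# if one escapes undetected, its transverse energy shows up as missing.
balanced = missing_et([(50.0, 0.0), (50.0, math.pi)])
one_lost = missing_et([(50.0, 0.0)])
print(round(balanced, 6), round(one_lost, 6))  # 0.0 50.0
```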

Overall I think the reduction of pile-up will be seen as worth the extra effort and risk of moving to 25ns so I am placing my bet on them aiming for that.

Yet more squeeze?

From Lamont’s table it can be seen that there is not much prospect of further major increases in luminosity next year. The LHC has already been pushed very close to its limits and will require some major work on the cryogenics, injectors and of course the splices before it can perform at energies and luminosities much beyond what we have seen this year. At best a small increase in energy and a 50% increase in luminosity can be expected for 2012.

But Lamont has been too shy to mention one further possibility. During the machine development periods this year they have tested a different way to squeeze the beams at the collision points using ATS optics. In this way they showed that it may be possible to get down to a beta* of 0.3m at the present energy. That means a further factor of 3 improvement in luminosity. It is a delicate and difficult process and I have no idea if they will really be able to make use of it next year, but at least there is some hope of further luminosity improvements without adding much strain to the cryogenic systems. We shall see.

Runtime efficiency

The remaining big unknown is the runtime efficiency achievable next year. The rapid push to high luminosities has taken its toll on the percentage of time that the LHC has been in stable beams. For the long term the strategy to maximise peak luminosity as soon as possible makes sense. By learning about the problems that come up now they can plan the work during the long shutdown in 2013 so that they have smoother running when they restart at full energy in 2014. For now this has meant running with a rather low efficiency, measured with a Hübner Factor of just 0.2.

In comparison to other colliders such as LEP and the Tevatron a figure of 0.2 is not unreasonable, but those machines were always limited to that amount by their design. The Tevatron takes a day to build up a store of anti-protons with enough intensity for a single run. In the meantime each store keeps running but the luminosity decreases with a half-life of just a few hours. This means that no matter how smoothly it runs, the Tevatron can never do much better than a Hübner Factor of around 0.2. The LHC avoids these limitations because it only uses protons, which can be extracted from a bottle of hydrogen on demand. It also has much better luminosity lifetimes, ranging from 10 hours at the start of a run to 30 hours as the luminosity decreases. In theory, if the LHC can be made to run smoothly without unwanted dumps and without delays while getting ready for the next fill, it could reach a Hübner Factor of over 0.5.
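That 0.5 ceiling can be motivated with a simple fill-cycle model: integrate an exponentially decaying luminosity over one fill and divide by what running at peak for the whole cycle would have delivered. The lifetime, fill length and turnaround time below are my own illustrative guesses, not machine figures:

```python
import math

def hubner_factor(lifetime_h, fill_h, turnaround_h):
    """Integrated luminosity per cycle divided by peak luminosity times cycle
    length, for fills with exponential decay L(t) = L0 * exp(-t / lifetime)."""
    per_fill = lifetime_h * (1.0 - math.exp(-fill_h / lifetime_h))
    return per_fill / (fill_h + turnaround_h)

# A smooth LHC-like cycle: 20 h luminosity lifetime, 12 h fills,
# 4 h turnaround between fills (all assumed numbers).
print(round(hubner_factor(20.0, 12.0, 4.0), 2))  # just over 0.5
```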

But for now we must accept the vagaries of the machine. It uses technologies such as cryogenics and superconducting magnets running at scales way beyond anything ever tried before. It will take time to iron out all the problems, so for next year we should still expect a factor of 0.2. Given 130 days of clear proton physics and luminosities around 4/nb/s, this implies a total luminosity of about 10/fb for 2012. Enough to identify the real Higgs and perhaps see some hints of more beyond, if nature is kind to us.
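That closing estimate is simple enough to check directly, using the figures quoted in the paragraph above:

```python
DAYS = 130                 # days of clear proton physics assumed for 2012
PEAK_LUMI_NB_S = 4.0       # assumed peak luminosity in /nb/s
HUBNER_FACTOR = 0.2
SECONDS_PER_DAY = 86400
NB_PER_FB = 1.0e6          # 1/fb = 10^6 /nb

total_fb = DAYS * SECONDS_PER_DAY * PEAK_LUMI_NB_S * HUBNER_FACTOR / NB_PER_FB
print(round(total_fb, 1))  # about 9/fb, i.e. roughly the 10/fb quoted
```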

Can Neutrinos be Superluminal? Ask OPERA!

September 19, 2011

Four days ago a rumour started circulating in the comments at Resonaances that a “6.1 sigma” signal of new physics had been seen at CERN. I reported it in an update on the Seminar Watch post. There had been a seminar titled “Seminar DG” which was listed on Indico and removed the day before it was due. The rumour confirmed that this meeting had been rescheduled to Friday, but as an update on OPERA, the neutrino experiment which a couple of years ago saw its first tau neutrino. The claim now is that they have measured the speed of muon neutrinos and got a result faster than the speed of light!

This is of course a crazy idea, because if true it would violate everything we think we know about causality. Even if neutrinos are hard to detect, it should be possible to use them to send information into the past if this result holds up. That does not sound very likely (but I am now setting up a neutrino beam to send the news back in time so that it was actually me who leaked the story).

Hypothetical superluminal particles are known as tachyons and they always move faster than light because they have imaginary valued mass, but quantum field theories for tachyons have terrible problems. Aside from the causality issues, the vacuum becomes unstable because you can create neutrino pairs with negative energy out of nothing. You would need a very unconventional variation of relativistic quantum field theory to stop the universe degenerating into an instant burst of neutrinos, and we don’t have that.

However, this is not the first time that superluminal neutrinos have been reported. Some people claimed that observations of neutrinos arriving before gamma rays from supernovae implied that they are superluminal. Other people just say that the neutrinos were created before the gamma rays. In fact some “crazy” people believed in superluminal neutrinos well before that: early attempts to measure the squared mass of the neutrino in the 1990s always seemed to give negative results. I have not had time to look back at those old ideas but it may be time to do that.

Of course such extraordinary claims need very good evidence, and for now the most likely explanation by far is a systematic error. The rumoured “6.1 sigma” significance probably accounts only for the statistical error, and it will be important to consider any systematic sources of error before coming to conclusions. For now we will need to wait for the official seminar at CERN on Friday to see what they have to say about that.
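For a sense of what “6.1 sigma” would mean if the uncertainty really were purely statistical and Gaussian, the corresponding two-sided p-value is around one in a billion, which is why the systematics matter so much more here; a minimal sketch:

```python
import math

def two_sided_p_value(n_sigma):
    """Probability of a Gaussian fluctuation at least n_sigma from the mean."""
    return math.erfc(n_sigma / math.sqrt(2.0))

print(two_sided_p_value(6.1))  # roughly 1e-9
```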


Update: It is of course worth recalling that the MINOS experiment also measured the neutrino speed and got a result faster than the speed of light, at 1.7 sigma. If the OPERA measurements are consistent with this measurement it will have to be taken seriously. As far as I can tell no measurement of neutrino speed or mass refutes the claim that they are tachyons; it’s just the theory that’s a problem.

Measurements of mass-squared from beta decay in tritium have tended to give negative values with error bars consistent with zero or positive values. This plot shows how the measurements have developed over time. The latest result I can find is -0.6 ± 2.2 (stat) ± 2.1 (syst) eV². These are measurements for the electron anti-neutrino, not the muon neutrino that OPERA is looking at.

If you are wondering about theories that allow tachyonic neutrinos, the least wacky one I can find is that neutrinos can take “shortcuts off the brane through large extra dimensions”.

What about the supernova observations? The timing of neutrinos vs light from supernova 1987a constrains the speed of neutrinos to be within one part in 10⁸ of the speed of light, while the MINOS measurement gave a speed of about (v-c)/c = (5.1 ± 2.9) x 10⁻⁵, so these seem inconsistent, even taking into account any differences of energy. Since neutrinos oscillate between different flavours we can’t make the excuse that one case looks at electron neutrinos and the other muon neutrinos, can we?

That said, neutrino physics has many unknowns. Other experiments hint at sterile neutrinos and even differences in mass between neutrinos and their anti-particles, even though we don’t even know what kind of spinors they are yet. If the large extra dimension theory has any bearing they may only travel faster than light in the presence of a gravitational field. It all sounds too crazy to be true but I am reserving judgement until at least we have heard from OPERA to see what they are actually claiming and how confident they are.

Meanwhile we have other views from Motl, Strassler and Kea.

Update 23-Sep-2011: The news is now officially out with a CERN press release and an arXiv submission. The result they have obtained is that the neutrinos arrive early by 60.7 ns ± 6.9 ns (statistical) ± 7.4 ns (systematic). On the face of it this is a pretty convincing result for faster-than-light travel, but such a conclusion is so radical that higher than usual standards of scrutiny are required.

The deviation from the speed of light in relative terms is (v-c)/c = (2.48 ± 0.28 ± 0.30) x 10⁻⁵ for neutrinos with an average energy of 28.1 GeV. The neutrino energy was in fact variable, and they also split the sample into two bins, for energies above and below 20 GeV, to get two results.

13.9 GeV: (v-c)/c = (2.16 ± 0.76 ± 0.30) x 10⁻⁵

42.9 GeV: (v-c)/c = (2.74 ± 0.74 ± 0.30) x 10⁻⁵

These can be compared with the independent result from MINOS, a similar experiment in the US with a baseline of almost exactly the same length but lower energy beams.

3 GeV: (v-c)/c = (5.1 ± 2.9) x 10⁻⁵

If we believe in a tachyonic theory, with neutrinos of imaginary mass, the value of (v-c)/c would decrease as the inverse square of the energy. This is inconsistent with the results above, where the velocity excess looks more like a constant independent of energy, or at most a slower variation.
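The tension with the tachyonic expectation is easy to quantify from the two bins above. If the excess scaled as 1/E², the high-energy bin should come out roughly a factor of ten lower than the low-energy one, instead of slightly higher:

```python
# Measured OPERA values of (v-c)/c in the two energy bins.
low_e, low_excess = 13.9, 2.16e-5     # GeV
high_e, high_excess = 42.9, 2.74e-5   # GeV

# Tachyonic expectation: (v-c)/c ~ |m|^2 c^4 / (2 E^2), i.e. scaling as 1/E^2.
predicted_high = low_excess * (low_e / high_e) ** 2
print(f"{predicted_high:.1e} vs measured {high_excess:.1e}")
```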

We also have a constraint from supernova SN1987A, where measurement of neutrino arrival times compared to optical observation sets |v-c|/c < 2 x 10⁻⁹ for neutrino energies on the order of 10 MeV. At smaller energies we should expect a more significant anomaly, so this is important, but perhaps the energy dependence is very different from this expectation.

So if this is a real effect it has to be something that does not affect the cosmic neutrinos in the same way. For example it may only happen over short distances or in the presence of a gravitational field. It would still be a strong violation of Lorentz invariance, of a type for which we do not really have an adequate theory.

So obviously there could be some error in the experiment, but where? The distances have been measured to 20cm accuracy and even earthquakes during the course of the experiment can only account for 7cm variations. The Earth moves about 1m round its axis in the time the neutrinos travel, but this does not need to be taken into account in the reference frame fixed to the Earth. The excess distance by which the neutrinos are ahead of where they should be is on the order of 20 metres, so distance measurements are unlikely to be a source of significant error.
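That 20 metre figure is just the early-arrival time converted to a distance, and dividing by the baseline reproduces the quoted fractional excess; a quick check:

```python
C_M_PER_S = 2.998e8     # speed of light
BASELINE_M = 730.0e3    # approximate CERN to Gran Sasso baseline
EARLY_NS = 60.7         # measured early arrival of the neutrinos

lead_m = C_M_PER_S * EARLY_NS * 1e-9   # distance light travels in 60.7 ns
excess = lead_m / BASELINE_M           # fractional speed excess (v-c)/c
print(round(lead_m, 1), f"{excess:.2e}")  # about 18 m, matching (v-c)/c ~ 2.5e-5
```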

Timing is more difficult. You might think that it is easy to synchronise clocks by sending radio waves back and forth and taking half the two-way travel time, but these experiments are underground and radio waves from the ground would have to bounce off the upper atmosphere or be relayed by a series of transceivers. This is not a practical method. What about taking atomic clocks back and forth between the two ends of the experiment? The best atomic clocks lose or gain about 20 picoseconds per day, but portable atomic clocks at best lose a few nanoseconds in the time it would take to get them from one end to the other. This could be a good check to carry out if a good atomic clock could be flown on a helicopter, but as far as I know this has not been done.

Instead, the best way to synchronise clocks over such distances is to use GPS, which sends signals from satellites in medium Earth orbit. Each satellite has four atomic clocks which are constantly checked against better ground-based clocks. The ground positions are measured very accurately with the same GPS, and in this way a synchronisation of about 0.1 ns accuracy can be obtained at ground level. The communication between ground and experiment adds delay and uncertainty, but this part has been checked several times over the course of the experiment with portable atomic clocks and is good to within a couple of nanoseconds. The largest timing uncertainties come from the electronic systems that time the pulses of neutrinos from the source at CERN. The overall systematic error is the quoted 7.4 ns, well within the 60 nanosecond deviation observed. Unless a really bad error has been made in the calculations, these timings must be good enough.

The rest of the error is statistical, so it is worth looking at the variations in timings to see if another error could be hidden there. Here is a plot from the paper of some of the timings over the years the experiment has run. The blue band shows the average delay relative to the time of flight expected for travel at the speed of light, which was later calculated to be 987.8 ns. I have added a green band at this value plus or minus the 6.9 ns error so that we can see how cleanly the measurements are displaced.

It looks pretty consistent. I think the only conclusion we can draw at this point is that further independent results are required. Perhaps MINOS could upgrade their timing measurements to see if they get a similar result with increased precision. T2K might also be able to attempt a measurement but their baseline is 295km compared with 730km for OPERA and MINOS. Otherwise a new experiment with shorter neutrino pulses and superaccurate timers may be the only way to resolve it. OPERA could also remove possible systematic timing errors at the source end by installing a second (much smaller) neutrino detector much nearer to CERN.

Some more reports: Ars Technica, BBC, and of course Dorigo, whose earlier post was ordered offline by big cheeses at CERN. Look out for his repost of his interesting review of where he thinks problems may lie.

Post-talk update: The webcast talk at CERN was very interesting, with lots of good questions. The most striking thing for me was the lack of any energy dependence in the result, a confirmation of what I noted this morning. The energies of the neutrinos have a fairly wide spread. If these were massive particles, or light being refracted by a medium, there would be a very distinct dependence of the speed on the energy of the particles, but no such dependence was observed. The speaker showed how the form of the pulse detected by OPERA matched very nicely the form measured at CERN. If there were any kind of spread in the speed of the neutrinos this shape would be blurred a little, and this is not seen.

Most physical effects you could imagine would have an energy dependence of some sort. A weak energy dependence is possible in the data but that would still be hard to explain. On the other hand, any systematic error in the measurement of the time or distance would be distinguished by just such a lack of energy dependence.

The only physical idea that would correspond to a lack of energy dependence would be if the universe had two separate fixed speeds, one for neutrinos and one for photons. I don’t think such a theory could be made to work, and even if it did you would have to explain why the SN1987A neutrinos were not affected. I think the conclusion has to be that there is no new physical effect, just a systematic error that the collaboration needs to find.

Higgs Days at Santander

September 18, 2011

Santander is a Spanish port on the Bay of Biscay coast that next week will host its fourth annual workshop on the Higgs boson. This meeting will be very different in character from the huge summer conferences where exciting new results on searches for the Higgs boson were recently presented to thousands of physicists. The Santander meeting involves just 30 participants, a mix of theorists and experimenters involved in the analysis of data from Fermilab and CERN. Half their time will be spent presenting slides and the other half in discussions covering searches for the standard model Higgs and other models, including the charged Higgs sector of SUSY. They will talk about the procedures for combining Higgs searches across experiments and the implications of any findings. The aim is to promote a dialogue between theorists and experimenters about what data needs to be shared and how.

Santander beaches, photo by yeyo

There is no indication that the discussions will be webcast or recorded for public viewing, and it is not certain that all the slides will appear online, so as outsiders the rest of us may have very little indication of what they decide. It is unlikely that new data will be made public, but there is some chance that we may finally get to see a combination of ATLAS and CMS search data. Originally we were promised a combination of the searches shown at EPS in July using the first 1/fb of data from the LHC. Instead we got a new helping of plots from the individual experiments using 1.6/fb in the most important channels, and even 2.3/fb for the ZZ channel in ATLAS. These were shown at the Lepton-Photon conference in August. Theorists would now very much like to see the combinations of these data sets and it is not clear why they have been held back.

One question has become very topical and has already surfaced at some of the larger Higgs workshops: Is it right to do quick approximate combinations of Higgs search data, or do we need to wait for the lengthy process of producing the official combinations? This summer I have become quite notorious for doing these quick combinations and showing them on viXra log. These have variously been described by experts as “nonsense” (Bill Murray), “garbage” (John Ellis) and “wrong” (Eilam Gross), but just how bad are they? Here is a plot of my handcrafted combination of the D0 and CDF exclusion plots compared with the official combo. The thick black line is my version of the observed exclusion limit, to be compared with the dotted line of the official result, while the solid blue line is my calculated expected limit, to be compared with the official dashed line. You need to click on the image for a better view.

My result is not perfect but I hope you will agree that it provides similar information and you would not be misled into drawing any wrong conclusions from it that were not in the official plot. Any discrepancy is certainly much smaller than the statistical variations indicated by the green and yellow bands for one and two sigma variations.

A more ambitious project is to combine exclusion plots for individual channels to reproduce the official results for each experiment. Here is my best attempt for the latest ATLAS results, where I have combined all eight channels for primary decay products of the Higgs boson.

The result here is not as good and could only serve as a rough estimate of the proper combination. Why is that? There are several sources of error involved. Firstly, the data for the individual channels had to be digitised from the plots. This was not the case for the Tevatron combination above, where the plots were published in tabular form. ATLAS and CMS have only published such numerical data for a few channels, and in some cases the quality of the plots shown is extremely poor. For example, this is the best plot that ATLAS has shown for the important H → ZZ → 4l channel:

As you can see it is very hard to follow the lines on this plot, especially the dashed expected limit line. I don’t want to be over-critical but seriously guys, can’t you do better than this?

Another source of error comes from neglecting correlations between the individual plots, where background estimates may have the same or related systematic errors. The Higgs combination group at CERN point to this as one of the reasons why these quick combinations can’t be right, but I doubt that these effects are significant at all. If they were, I would not be getting such good results for the Tevatron combination.

In fact the main source of error lies in the approximations used in my combination algorithm. It assumes that each statistical distribution of the underlying signals can be modeled by a flat normal distribution with mean \mu_i and standard deviation \sigma_i. Combining normal distributions is standard stuff in particle physics; the combined mean \mu and standard deviation \sigma are given by these formulae:

\frac{1}{\sigma^2} = \sum_i{\frac{1}{\sigma_i^2}}

\frac{\mu}{\sigma^2} = \sum_i{\frac{\mu_i}{\sigma_i^2}}

For example, if one experiment tells me that the mass of the proton is 938.41 ± 0.21 MeV and another tells me it is 938.22 ± 0.09 MeV, and I know that the errors are independent, then I can combine with the above formulae to get a value of 938.25 ± 0.08 MeV. The Particle Data Group does this kind of thing all the time.
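The inverse-variance weighting above takes only a few lines of code. Here is a minimal sketch using the hypothetical proton-mass numbers from the text (the function name is my own):

```python
# Sketch of the inverse-variance weighted combination of independent
# normal measurements, as in the formulae above.

def combine(measurements):
    """Combine a list of (mu_i, sigma_i) pairs into a single (mu, sigma)."""
    inv_var = sum(1.0 / s**2 for _, s in measurements)       # 1/sigma^2
    mu = sum(m / s**2 for m, s in measurements) / inv_var    # mu/sigma^2 summed
    sigma = inv_var**-0.5
    return mu, sigma

mu, sigma = combine([(938.41, 0.21), (938.22, 0.09)])
print(round(mu, 2), round(sigma, 2))  # → 938.25 0.08
```

Note that the combined error is always smaller than the smallest individual error, since every measurement adds information.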

A plot of the signal for the Higgs boson given by the ATLAS results would look like this,

The black line (the value of \mu) is the observed combined signal for the Higgs boson, normalised to a scale where no Higgs boson gives zero and a standard model Higgs boson gives one. The blue and cyan bands show the one and two sigma statistical uncertainties (\mu \pm\sigma and \mu \pm 2\sigma). Don’t think about where the Higgs boson is for now. Just look at the upper two sigma curve and compare it with the ATLAS Higgs exclusion plot above (i.e. the dotted line; click to enlarge for a better view). These are of course the same lines, because exclusion at the 95% level occurs when the two sigma error band is below the signal for an SM Higgs. The expected line on the exclusion plot is just where the observed line would be if the signal were everywhere zero, i.e. it is a plot of 2\sigma. In summary, the observed limit for CL_s in the exclusion plot is just \mu + 2\sigma and the expected limit is just 2\sigma. We can derive one plot from the other using this simple transformation.

From this it should be clear how to combine the exclusion plots. We first transform them all to signal plots, then they can be combined as if they are normal distributions. Finally the combined signal plot can be transformed back to give the combined exclusion plot. This is what I did for the viXra combinations above.
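The whole pipeline can be sketched in a few functions, using the relations observed = \mu + 2\sigma and expected = 2\sigma from above. The input numbers here are invented for illustration; real inputs would be the digitised observed and expected 95% CL_s limits at each mass point:

```python
def limits_to_signal(observed, expected):
    """Transform exclusion limits at one mass point into a signal (mu, sigma),
    using observed = mu + 2*sigma and expected = 2*sigma."""
    sigma = expected / 2.0
    mu = observed - expected
    return mu, sigma

def signal_to_limits(mu, sigma):
    """Inverse transformation: back from (mu, sigma) to exclusion limits."""
    return mu + 2.0 * sigma, 2.0 * sigma

def combine_limits(channels):
    """Combine (observed, expected) limit pairs from independent channels
    by inverse-variance weighting of the underlying signals."""
    signals = [limits_to_signal(o, e) for o, e in channels]
    inv_var = sum(1.0 / s**2 for _, s in signals)
    mu = sum(m / s**2 for m, s in signals) / inv_var
    sigma = inv_var**-0.5
    return signal_to_limits(mu, sigma)

# Two hypothetical channels at the same mass point:
obs, exp = combine_limits([(2.4, 2.0), (1.6, 1.4)])
```

Applied at every mass point along the exclusion plot, this gives the combined observed and expected curves. The combined expected limit always comes out below each individual channel's, as it should.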

Ignoring the digitisation errors and the unknown correlations, the largest source of error is the assumption that the distribution is normal. In reality a log normal distribution or a Poisson distribution would be better, but these require more information. Fortunately the central limit theorem tells us that any such distribution approaches a normal distribution when high enough statistics are available, so the combination method gets better as more events accumulate. That is why the viXra combination of the exclusion plots for each experiment is more successful than the combination of individual channels. The number of events seen in some of these channels is very low and the flat normal distribution is not a great approximation to use. As more data is collected the result will get better. Of course we cannot expect a reliable signal to emerge from individual channels until the statistics are good, so it could be argued that the approximation is covered by the statistical fluctuations anyway.
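The point about the central limit theorem is easy to check numerically. A Poisson count with a large mean is well approximated by a normal distribution with the same mean and a standard deviation equal to its square root, while at low counts the approximation is poor. This toy comparison (my own illustration, using a continuity correction) quantifies that:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for a Poisson distribution with mean lam."""
    term = math.exp(-lam)  # P(X = 0)
    total = term
    for i in range(1, k + 1):
        term *= lam / i    # recurrence for successive Poisson probabilities
        total += term
    return total

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Error of the normal approximation (mean lam, sigma = sqrt(lam), with a
# continuity correction) near the "2 sigma" point of each distribution:
errors = {}
for lam in (2, 200):
    k = int(lam + 2 * math.sqrt(lam))
    errors[lam] = abs(poisson_cdf(k, lam) - normal_cdf(k + 0.5, lam, math.sqrt(lam)))

print(errors)
```

The discrepancy for a mean of 200 events is an order of magnitude smaller than for a mean of 2, which is why the method works better once the channels have accumulated decent statistics.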

I don’t know if a full LHC combination will emerge next week at the Santander workshop but in case it does, here is my best prediction from the most recent data for comparison with anything they might show.

Some people say that there is no point producing these plots because the official versions will be ready soon enough, but they are missing the point. The LHC will produce vast amounts of data over its lifespan and these Higgs plots are just the beginning. The experimenters are pretty good at doing the statistics and comparing with some basic models provided by the theorists, but this is just a tiny part of what theorists want to do. The LHC demands a much more sophisticated relationship between experimenters and theorists than any previous experiment, and it will be necessary to provide data in numerical forms that the theorists can use to investigate a much wider range of possible models.

As a crude example of what I mean, just look at the plot above. It provides conflicting evidence for a Higgs boson signal. At 140 GeV there is an interesting excess but it is below the exclusion limit line. Is this a hint of a Higgs signal or not? To answer this I might look at different channels combined over the experiments. Here is the ZZ channel combined over ATLAS and CMS.

The Higgs hint at 140 GeV is now nice and clear, though not significant enough yet for a reliable conclusion. Here is the diphoton channel combination.

Again the 140 GeV signal is looking good. What about the WW channel?

Here is where the problem lies. The WW channel has a broad excess from 120 GeV to 170 GeV at two sigma significance, but it is excluded from about 150 GeV. In fact the energy resolution in the WW channel is not very good because it relies on missing energy calculations to reconstruct the neutrino component of the mass estimation. Perhaps it would be better to combine just the diphoton and ZZ channels, which have better resolution. I can show the result in the form of a signal plot.

It’s still inconclusive, but at least it is not contradictory.

This is just an example of why it will be useful for theorists to be able to explore the data themselves. The signal for the Higgs will eventually be studied in detail by the experiments, but what about other models? There is a limit to how many plots the experiments can show. To really explore the data that the LHC will produce, theorists will need to be able to plug data into their own programs and compare it with their own models. The precise combinations produced by the Higgs combination groups take hundreds of thousands of CPU hours to build and are fraught with convergence issues. My combinations are done in milliseconds and give a result that is just as useful.

There is no reason why the experiments can’t provide cross-section data in numerical form for a wide range of channels, with better approximations than flat normal distributions if necessary. This would allow accurate combinations to be generated for an infinite range of models with varying particle spectra and branching ratios. It will be essential that any physicist is able to do this. I hope that this is what the theorists will be telling the experiments at Santander next week, and that the experiments will be listening.

Update 26 Sept 2011: I found a better version of the ATLAS ZZ -> 4l plot that I was moaning about. It has not appeared in the conference notes for some reason, but it is the same data from LP11 so I think it must be OK to show.

The latest expectation from the combination group is that a Lepton-Photon based combo will be ready for Hadron Collider Physics 2011, which starts in Paris on 14th November.

Update 1-Oct-2011: Most of the slides from the Santander meeting have now been uploaded.

New luminosity record marks great start for LHC run

September 9, 2011

The Large Hadron Collider has logged a new luminosity record with 2.57/nb/s in ATLAS and 2.69/nb/s in CMS, beating the previous figure of 2.4/nb/s.

It is just one week since the start of the final proton physics run for 2011 and already they have returned to colliding the current maximum of 1380 bunches per beam. This run is using a better squeeze of beta*=1.0 meters which should be enough to increase luminosity by 50%. Further records can therefore be anticipated on subsequent runs as emittance and bunch intensity are brought back to former levels.
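The quoted 50% gain follows from the fact that instantaneous luminosity scales as 1/beta* with all other beam parameters held fixed. A back-of-envelope check, assuming (my assumption) that the previous squeeze was beta* = 1.5 m:

```python
# Luminosity scales as 1/beta*; going from beta* = 1.5 m (assumed previous
# value) to 1.0 m gives the expected fractional gain.
beta_old, beta_new = 1.5, 1.0   # metres; beta_old is my assumption
gain = beta_old / beta_new - 1.0
print(f"{gain:.0%}")  # → 50%
```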

To have reached this point so quickly after the end of the technical stop is a good sign for the collider.  After previous stops it has typically taken two weeks to iron out problems and return to previous luminosities. The change in the squeeze could have required collimator settings to be adjusted but luckily the old settings have proved more or less sufficient, avoiding delays.

This final run has seven more weeks to go, with everyone anxious to see as many inverse femtobarns as possible added to the 2.7/fb already delivered. The increased luminosity and good stability (so far) are good signs that a high total is achievable for 2011, giving good prospects for seeing clear signs of the Higgs boson or other new physics by November.

Grail about to launch

September 8, 2011

NASA’s Grail mission is about to launch, bound for the Moon.

The Delta II rocket will release two satellites into orbit above the lunar surface that will study the effect of gravitational anomalies on their flight trajectories. This will provide information about the internal density and structure of the Moon. They will also carry cameras to send back stereo 3D pictures of the Moon’s surface.

Countdown is currently 4 minutes and holding.

Update: launch scrubbed for today due to bad weather. There are two launch windows tomorrow but forecast high winds may push the date back further.

When it does happen, it will be covered on NASA TV.

Update 10-Sep-2011: GRAIL has now launched