The Large Hadron Collider continues its remarkable 2011 run, with the 4/fb mark passed today for all-time delivered luminosity (depending on whose stats you believe). With 4 more weeks of proton physics left this year, some of it reserved for TOTEM, we can expect the final count to reach 5/fb if the present run efficiency is maintained.
Some of you will want to remind me that earlier in the year I optimistically predicted a total of 10/fb. At that time the expected peak luminosity for this year was around 1.7/nb/s, but by pushing bunch intensity, emittance and squeeze beyond design limits they have actually doubled this to 3.3/nb/s. The reasons this did not lead to even more integrated luminosity were (A) they took a bit longer than I expected to ramp up to maximum, and (B) the running efficiency has not been quite as good as it could have been.
Nevertheless the run for this year still counts as a humongous success given that the original target was just 1/fb for the whole year. Collecting five times that amount means that they now have enough data to catch a glimpse of the long-sought-after God particle whose mass is now rumored to be 119 GeV while his partner Goddess particle sits nearby at 140 GeV. The next conference where more new information may be released is Hadron Collider Physics 2011 due to open in Paris on 14th November, just three weeks after ATLAS and CMS stop recording data for the year.
What Run Parameters for 2012?
The next question is how much data they can collect during 2012. A previous tentative schedule that would have delayed the start of the run to allow more work during the winter shutdown has apparently been ditched. That is probably a good thing: with no beyond-standard-model physics emerging at 7 TeV yet, they will now want to get to the design energy of 14 TeV as soon as possible. That will still not happen until 2014. Meanwhile the draft schedule for 2012 looks like this.
Possible scenarios for the 2012 running parameters, according to Mike Lamont, are as follows:
A couple of crucial questions not yet decided are what energy to run at and whether to move to 25ns bunch spacing or stick with 50ns. Whether they can go to 4 TeV per beam, for a centre-of-mass energy of 8 TeV instead of 7 TeV, can only be determined during the winter shutdown using thermal amplifier tests to check the splices. A higher energy of 9 TeV is not ruled out but will probably be deemed too risky. Let’s assume they will run at 8 TeV.
The next question is the bunch spacing. In principle, moving to 25ns doubles the number of proton bunches that can be fitted in, which would double the luminosity, but you can see from the table above that this is not quite the case. Firstly, the emittance at 25ns would not be so small, because of limitations of the injector chain. Secondly, with more bunches they cannot push the bunch intensity quite so high, because the extra load on the cryogenic systems would just be too much. In fact the projected luminosity may be less at 25ns. So far they have only run with 216 bunches at 25ns and it is not certain what the limitations may be. So why would they still consider 25ns?
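To get a feel for how these knobs trade off against each other, here is a quick sketch using the standard round-beam luminosity formula. The parameter values are my own rough approximations to the 2011 running conditions (1380 bunches, 3.5 TeV per beam, beta* of 1.0m), not official figures, and the geometric crossing-angle factor is guessed at 0.85:

```python
import math

def peak_luminosity(n_protons, n_bunches, gamma, emittance_n, beta_star,
                    f_rev=11245.0, geometric_factor=0.85):
    """Peak luminosity in cm^-2 s^-1 from the standard round-beam formula
    L = F * N^2 * n_b * f_rev * gamma / (4*pi * eps_n * beta*).
    Lengths are in metres; the result is converted from m^-2 to cm^-2."""
    L = (geometric_factor * n_protons ** 2 * n_bunches * f_rev * gamma
         / (4 * math.pi * emittance_n * beta_star))
    return L * 1e-4  # m^-2 s^-1 -> cm^-2 s^-1

# Illustrative 2011-like parameters (approximate, not official numbers):
# 1380 bunches of 1.45e11 protons, 2.4 um normalised emittance,
# beta* = 1.0 m, 3.5 TeV per beam (gamma about 3730).
L = peak_luminosity(1.45e11, 1380, 3730, 2.4e-6, 1.0)
print(L / 1e33)  # about 3.4 /nb/s, close to the 3.3/nb/s achieved
```

The N-squared dependence is why pushing bunch intensity paid off so handsomely this year, and why halving the intensity at 25ns spacing cancels much of the gain from doubling the bunch count.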
The effects of pile-up
The answer is pile-up, meaning the number of collision events that take place in each bunch crossing. This number, shown as “Peak mean mu” in the table, has already gone up to an average of 16. This is much more than the experiments expected to see at this stage of the game and they have had to work hard to cope with it. There is a limit to how many sets of event data the onboard computers can register and save before the rate gets too high. To control this they need to adjust the trigger settings to catch just the most interesting events. They also have to work harder to reconstruct the collision vertices, which can be very close together.
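The mean pile-up follows directly from the luminosity and the bunch count. As a sketch, taking an approximate inelastic pp cross section of about 70 mb at these energies and roughly 1380 colliding bunches (both my assumptions, not figures from the table):

```python
def mean_pileup(lumi_cm2s, n_bunches, sigma_inel_cm2=7e-26, f_rev=11245.0):
    """Average interactions per bunch crossing:
    mu = L * sigma_inel / (n_bunches * f_rev).
    sigma_inel of ~70 mb (7e-26 cm^2) is an approximate inelastic
    proton-proton cross section at LHC energies."""
    return lumi_cm2s * sigma_inel_cm2 / (n_bunches * f_rev)

# At a peak luminosity of 3.3/nb/s with about 1380 colliding bunches:
mu = mean_pileup(3.3e33, 1380)
print(round(mu, 1))  # about 15, in the ballpark of the quoted mu of 16
```

This also shows why 25ns spacing helps: the same luminosity spread over twice as many bunch crossings halves mu.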
More uncertainty is unavoidable as the pile-up increases, and this affects some analyses more than others. The worst hit are the ones that look for missing energy and momentum, a classic signal of particles leaving the experiment undetected. These could be ordinary neutrinos or massive particles responsible for dark matter that are as yet unknown to science. To tell the difference they need to account for all the missing energy and momentum by adding up the individual contributions from every particle that is detected and subtracting the total from the energy and momentum of the original particles in the collision. Given the precise energy and momentum of a particle you can work out its mass.

The ATLAS and CMS detectors have been designed with an exceptional ability to catch as many of these particles as possible and measure them accurately, but if two collision events are too close together it becomes hard to be sure which event each particle came from, and the energy resolution suffers. As well as searches for dark matter and supersymmetry, some channels used in the Higgs boson search are being hit by the high pile-up seen this year. The WW decay mode has been particularly affected because the W boson decays always produce neutrinos, which are not directly detected. The lower energy resolution means that this channel is not going to be so good for pinning down the mass of the Higgs boson, and may even mislead us into thinking it has been ruled out at its real mass. Luckily the Higgs at low mass can also decay into two hard photons, or into two Z bosons that decay into leptons. These are easily detected, and the higher luminosity means that they are seen more often. These are now the best channels to look for the Higgs at the LHC.
Overall I think the reduction of pile-up will be seen as worth the extra effort and risk of moving to 25ns so I am placing my bet on them aiming for that.
Yet more squeeze?
From Lamont’s table it can be seen that there is not much prospect of further major increases in luminosity next year. The LHC has already been pushed very close to its limits and will require major work on the cryogenics, the injectors and of course the splices before it can perform at energies and luminosities much beyond what we have seen this year. At best a small increase in energy and a 50% increase in luminosity can be expected for 2012.
But Lamont has been too shy to mention one further possibility. During the machine development periods this year they have tested a different way to squeeze the beams at the collision points using ATS optics. In this way they showed that it may be possible to get down to a beta* of 0.3m at the present energy. That would mean a further factor of 3 improvement in luminosity. It is a delicate and difficult process and I have no idea whether they will really be able to make use of it next year, but at least there is some hope of further luminosity improvements without adding much strain to the cryogenic systems. We shall see.
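The factor of 3 is just the inverse ratio of the beta* values, since luminosity scales as 1/beta* with everything else held fixed. Taking 1.0m as the squeeze achieved this year (my assumption for the comparison):

```python
# Luminosity scales inversely with beta*, other parameters held fixed,
# so squeezing from this year's 1.0 m down to 0.3 m with ATS optics
# would gain roughly a factor of three.
beta_star_now, beta_star_ats = 1.0, 0.3  # metres
gain = beta_star_now / beta_star_ats
print(round(gain, 1))  # about 3.3
```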
The remaining big unknown is the runtime efficiency achievable next year. The rapid push to high luminosities has taken its toll on the percentage of time that the LHC has been in stable beams. For the long term, the strategy of maximising peak luminosity as soon as possible makes sense: by learning about the problems that come up now, they can plan the work during the long shutdown in 2013 so that they have smoother running when they restart at full energy in 2014. For now this has meant running with a rather low efficiency, measured by a Hübner factor of just 0.2.
In comparison with other colliders such as LEP and the Tevatron a figure of 0.2 is not unreasonable, but those machines were always limited to that amount by their design. The Tevatron takes a day to build up a store of anti-protons with enough intensity for a single run. In the meantime each store keeps running, but the luminosity decreases with a half-life of just a few hours. This means that no matter how smoothly it runs, the Tevatron can never do much better than a Hübner factor of around 0.2. The LHC avoids these limitations because it uses only protons, which can be extracted from a bottle of hydrogen on demand. It also has much better luminosity lifetimes, ranging from 10 hours at the start of a fill to 30 hours as the luminosity decreases. In theory, if the LHC can be made to run smoothly without unwanted dumps and without delays while getting ready for the next fill, it could reach a Hübner factor of over 0.5.
But for now we must accept the vagaries of the machine. It uses technologies such as cryogenics and superconducting magnets running at scales way beyond anything tried before. It will take time to iron out all the problems, so for next year we should still expect a factor of 0.2. Given 130 days of clear proton physics and luminosities around 4/nb/s, this implies a total of about 10/fb for 2012. Enough to identify the real Higgs, and perhaps see some hints of more beyond, if nature is kind to us.
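The arithmetic behind that 10/fb estimate is simple enough to check: the Hübner factor folds run efficiency and luminosity decay into a single number multiplying peak luminosity times calendar time.

```python
def integrated_lumi_fb(peak_lumi_nb_per_s, days, hubner_factor):
    """Rough integrated luminosity estimate in /fb.
    Integrated L = H * L_peak * calendar time; 1 /fb = 1e6 /nb."""
    seconds = days * 86400
    return peak_lumi_nb_per_s * hubner_factor * seconds / 1e6

# 130 days of proton physics at a 4/nb/s peak with H = 0.2:
total = integrated_lumi_fb(4.0, 130, 0.2)
print(round(total, 1))  # 9.0, i.e. about 10/fb
```

The same formula shows what is at stake in the efficiency battle: lifting H from 0.2 towards 0.5 would be worth more than any realistic increase in peak luminosity next year.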