The Large Hadron Collider is currently in a technical stop following a very productive machine development break. Normal running has now brought the collider to its intended running parameters for this year, with 1380 bunches per beam colliding at 7 TeV centre of mass. With good fills the peak luminosity should be around 1.4/nb/s and there are 103 days of scheduled proton physics left this year. Assuming a reasonable Hübner factor of 0.3, this will be sufficient to provide 0.3 × 1.4 × 3600 × 24 × 103/nb ≈ 3.7/fb. Add this to the 1.3/fb already delivered for a grand total of 5.0/fb in 2011. To put that in context, here is the latest table of Higgs sensitivities.
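For anyone who wants to check the arithmetic, here is the same estimate as a minimal Python sketch; the peak luminosity, remaining days and Hübner factor are just the round numbers quoted above, not official projections.

```python
# Rough integrated-luminosity estimate for the rest of 2011.
# Inputs are the round numbers from the text, not official projections.
peak_lumi_nb_per_s = 1.4      # peak luminosity in nb^-1 per second
days_left = 103               # scheduled proton physics days remaining
hubner_factor = 0.3           # assumed fraction of ideal peak-rate running
already_delivered_fb = 1.3    # integrated luminosity delivered so far, in fb^-1

seconds = days_left * 24 * 3600
extra_nb = peak_lumi_nb_per_s * seconds * hubner_factor   # in nb^-1
extra_fb = extra_nb / 1.0e6                               # 1 fb^-1 = 10^6 nb^-1

print(f"extra: {extra_fb:.1f}/fb, total for 2011: {already_delivered_fb + extra_fb:.1f}/fb")
# -> extra: 3.7/fb, total for 2011: 5.0/fb
```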
5/fb is enough to exclude the Higgs or find evidence for it up to 600 GeV (LEP has already excluded it below 114 GeV). This will be a tremendous result for the LHC and much sooner than expected. At yesterday’s DG address to CERN staff, Rolf-Dieter Heuer was keen to make it clear that a Higgs exclusion would be just as exciting as a Higgs discovery! At a CERN seminar today, Carminati said that there will be an update on Higgs searches using 1/fb from ATLAS in two weeks’ time.
From the blog comments I know that there are a few of you out there who are interested in what more can be done to increase luminosity this year, so let’s forget about the particle physics for now and see what the possibilities are for beam physics. Unless you are a hardened LHC geek you probably don’t want to read any further.
I don’t know everything that the beam operations teams are considering, but the reports from the MD phase provide some interesting clues. Things should be much clearer after the “mini-Chamonix” meeting on 15th July. Some of you may be aware of points I missed, so please do comment.
Not long ago it seemed most likely that very little would be done to increase luminosity this year, but the mini-Chamonix is purely dedicated to what can be done in the rest of 2011 and everything is on the agenda. The opportunities for more luminosity can be summarised as follows:
- Improved Hübner factor – luminosity × 1.5
- Reduced emittance by half – luminosity × 2
- Bunch intensity to twice nominal – luminosity × 4
- ATS squeeze to 0.3 m – luminosity × 5
- Bunch spacing to 25 ns – luminosity × 2
The Hübner factor depends on how efficiently they can run the collider and how much of the time they can keep it in stable beams. I think it is not unreasonable to expect a Hübner factor of 0.3 if they keep all the parameters fixed and concentrate on improving efficiency. If they are lucky and things go very well an improvement of up to 50% is conceivable, but if they decide to try out some more risky ways to increase luminosity then a lower Hübner factor is more likely, perhaps around 0.2.
The emittance is a measure of how well the protons keep going in a straight and narrow line. Better emittance means the protons in a bunch can be kept closer together. Halving the emittance will double the luminosity. The nominal emittance is 3.75 μm, but by improving the injector chain they have found ways to get it down to about 1.7 μm. I don’t know of any downside to reducing emittance (anyone?). It does not lead to higher beam currents, just better luminosity. In that case it is a no-brainer that they will want to run with better emittance as soon as possible, isn’t it?
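To see where that factor of two comes from, here is the standard round-beam luminosity formula as a small Python sketch. The bunch charge, β* and crossing-angle factor I plug in are rough illustrative guesses on my part, not the real machine settings; the point is just the 1/ε scaling.

```python
import math

def lumi(N, n_b, eps_n, beta_star, f_rev=11245.0, gamma=3730.0, F=0.95):
    """Round-beam peak luminosity L = N^2 n_b f_rev gamma / (4 pi eps_n beta*) * F
    in m^-2 s^-1, where F is the geometric crossing-angle reduction factor."""
    return N**2 * n_b * f_rev * gamma / (4 * math.pi * eps_n * beta_star) * F

# Illustrative numbers: 1.25e11 protons per bunch, 1380 bunches, beta* = 1.5 m.
L_nominal_eps = lumi(N=1.25e11, n_b=1380, eps_n=3.75e-6, beta_star=1.5)
L_small_eps   = lumi(N=1.25e11, n_b=1380, eps_n=1.7e-6,  beta_star=1.5)

print(f"gain from emittance alone: x{L_small_eps / L_nominal_eps:.1f}")
# -> x2.2; everything else cancels, so luminosity simply scales as 1/emittance.
```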
“Nominal” bunch intensity is 115 billion protons per bunch but they have already been running with up to about 125 billion protons per bunch. The “ultimate” bunch intensity was supposed to be 170 billion, but in an earlier MD slot they already pushed it beyond that to 195 billion. In the latest tests they went as far as twice nominal at 250 billion! Sorry that is already too many exclamation marks for one post, but some of these numbers are really surprising. Nobody expected them to do this. How did they do it?
Luminosity goes up with the square of bunch intensity so this is a great way to increase luminosity, but there is a catch. Too much beam current is dangerous for the collider and the protection systems may not be sufficient beyond some limit. The cryogenics can also only take so much before the extra heat from the beams causes them to fail. Some increase in bunch intensity at 1380 bunches is possible, though I don’t know how much. To make use of the higher intensities that they can now reach they would have to decrease the number of bunches. For example, they could increase the bunch intensity to 170 billion and decrease the number of bunches to 1010. The total beam current would stay the same because it depends on bunch intensity times bunch number, but the luminosity would go up by about 36% because luminosity depends on bunch intensity squared times bunch number.
The major downside of going for this option is that event pile-up increases. ATLAS and CMS already see about 8 events each time two bunches intersect. This goes up with bunch intensity squared, so for a 36% luminosity increase they get roughly 85% more pile-up. High pile-up rates make it hard to reconstruct individual events in the detectors. There will be more background and more uncertainty in the numbers produced. It will be up to CMS and ATLAS to decide how much pile-up they want to take at this point versus the potential for more luminosity.
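A quick sketch of that trade-off, using the two scenarios above: beam current scales as n_b·N, luminosity as n_b·N², and pile-up per crossing as N². The exact bunch numbers are only illustrative, not a fill plan.

```python
# Compare two fills with (roughly) the same total beam current.
# Beam current ~ n_b * N, luminosity ~ n_b * N^2, pile-up per crossing ~ N^2.
scenarios = {
    "1380 bunches @ 1.25e11": (1380, 1.25e11),
    "1010 bunches @ 1.70e11": (1010, 1.70e11),
}

base_nb, base_N = scenarios["1380 bunches @ 1.25e11"]
base_pileup = 8.0  # approximate events per crossing seen now by ATLAS/CMS

for name, (n_b, N) in scenarios.items():
    current = n_b * N / (base_nb * base_N)
    lumi    = n_b * N**2 / (base_nb * base_N**2)
    pileup  = base_pileup * (N / base_N) ** 2
    print(f"{name}: current x{current:.2f}, luminosity x{lumi:.2f}, ~{pileup:.0f} events/crossing")

# -> the second scenario keeps the current at ~1.0x but gives ~1.35x luminosity,
#    at the price of roughly 15 events per crossing instead of 8.
```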
The Achromatic Telescopic Squeezing (ATS) scheme is a complicated way of modifying the optics to focus the beams more tightly at the interaction points. In tests it has been possible to squeeze down to β* = 0.3 m, compared to the 1.5 m in present use. This implies a factor of 5 in luminosity with no extra beam current! But that would require reduced crossing angles, which means more parasitic collisions, especially if they also increase the bunch intensity. Realistically a factor of 1.5 might be gained this year using ATS without too many side-effects, but this is unclear at the moment.
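To see why the raw 1/β* gain is tangled up with the crossing angle, here is the usual geometric reduction factor as a sketch; the emittance, bunch length and crossing angle below are illustrative guesses on my part, not the actual machine settings.

```python
import math

def reduction_factor(beta_star, eps_n=2.0e-6, gamma=3730.0, sigma_z=0.09, theta_c=240e-6):
    """Geometric luminosity reduction F = 1/sqrt(1 + (theta_c*sigma_z/(2*sigma*))^2),
    with transverse beam size at the IP sigma* = sqrt(eps_n*beta*/gamma)."""
    sigma_star = math.sqrt(eps_n * beta_star / gamma)
    piwinski = theta_c * sigma_z / (2 * sigma_star)
    return 1.0 / math.sqrt(1.0 + piwinski**2)

F_15 = reduction_factor(1.5)   # present beta* = 1.5 m
F_03 = reduction_factor(0.3)   # ATS squeeze to beta* = 0.3 m

gain = (1.5 / 0.3) * (F_03 / F_15)  # 1/beta* gain times the change in F
print(f"F(1.5 m) = {F_15:.2f}, F(0.3 m) = {F_03:.2f}, net gain x{gain:.1f} at the same crossing angle")
# -> the naive factor of 5 shrinks to roughly x4 unless the crossing angle is reduced,
#    and reducing the angle is what brings in the extra parasitic collisions.
```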
Tests during the MD with injection at 25 ns bunch spacing and 216 bunches have been encouraging. The smaller spacing would mean that they could increase the bunch number up to the nominal figure of 2808, which would double the luminosity. The advantage of this over increased bunch intensity is that pile-up during bunch crossings is not affected, so CMS and ATLAS would prefer it.
The downside is that they cannot increase bunch numbers and bunch intensity at the same time because it would mean too much beam current. The luminosity increases are not as dramatic as what can be achieved by bunch intensity increases alone. Again it will be the trade-off between pile-up and luminosity that must be considered.
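A rough beam-current check makes the constraint concrete. The scenario list below is illustrative; the reference point is the nominal design of 2808 bunches at 1.15e11 protons, which corresponds to roughly 0.58 A per beam.

```python
# Beam current I = n_b * N * e * f_rev; the nominal LHC design is
# 2808 bunches of 1.15e11 protons, i.e. roughly 0.58 A per beam.
e_charge = 1.602e-19   # proton charge in coulombs
f_rev = 11245.0        # LHC revolution frequency in Hz

scenarios = {
    "now: 1380 x 1.25e11":            (1380, 1.25e11),
    "high intensity: 1380 x 1.7e11":  (1380, 1.70e11),
    "25 ns nominal: 2808 x 1.15e11":  (2808, 1.15e11),
    "both at once: 2808 x 1.7e11":    (2808, 1.70e11),
}

for name, (n_b, N) in scenarios.items():
    current = n_b * N * e_charge * f_rev
    print(f"{name}: {current:.2f} A")
# -> ~0.31 A now, ~0.42 A or ~0.58 A for either upgrade alone,
#    but ~0.86 A for both together, well beyond anything run so far.
```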
In any case I am not sure they are ready for 25ns spacing this year. Further tests during later MD slots are necessary. This step is more likely for next year.
Details will become clearer at the mini-Chamonix. From what I have seen so far I think the best we can hope for this year is some modest increase in bunch intensity and improvement in emittance, leaving them to concentrate on keeping a good Hübner factor. Some use of ATS optics towards the end of the year’s run may be tried. A doubling of peak luminosity during 2011 is probably an option if general machine running is smooth enough and if CMS and ATLAS can take the extra pile-up.
For 2012 there is much more scope for improvements. I think they will go with 25 ns spacing and use ATS to increase luminosity further without exceeding beam current limits. Ten times the current luminosity is not beyond the realm of possibility, so they could deliver 50/fb during 2012 at 9 TeV. Perhaps I am being too optimistic again. What do you think?
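For what it’s worth, the 50/fb figure follows from the same back-of-the-envelope arithmetic as above, provided 2012 has a proton run of similar length; the ~140 days in the sketch below is purely my guess.

```python
# Hypothetical 2012 estimate: ten times the current peak luminosity,
# a guessed ~140 days of proton physics, and the same Hübner factor of 0.3.
peak_lumi_nb_per_s = 14.0   # 10 x the ~1.4/nb/s peak quoted for 2011
days = 140                  # assumed; the 2012 schedule is not fixed
hubner_factor = 0.3

total_fb = peak_lumi_nb_per_s * days * 24 * 3600 * hubner_factor / 1.0e6
print(f"~{total_fb:.0f}/fb")   # -> ~51/fb, i.e. the 50/fb ballpark
```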