For LHC Geeks Only

The Large Hadron Collider is currently in a technical stop following a very productive machine development break. Normal running has now brought the collider to its intended running parameters for this year, with 1380 bunches per beam colliding at 7 TeV centre of mass. With good fills the peak luminosity should be around 1.4/nb/s and there are 103 days of scheduled proton physics left this year. Assuming a reasonable Hübner factor of 0.3, this will be sufficient to provide 0.3*1.4*3600*24*103/nb ≈ 3.7/fb. Add this to the 1.3/fb already delivered for a grand total of 5.0/fb in 2011. To put that in context, here is the latest table of Higgs sensitivities.
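(For anyone who wants to check that arithmetic, here is a quick back-of-the-envelope sketch in Python; everything in it is just the numbers from this paragraph plus unit conversion.)

```python
# Back-of-the-envelope estimate of the remaining 2011 luminosity,
# using the figures quoted above.
peak_lumi_nb_per_s   = 1.4    # peak luminosity in nb^-1 s^-1 (= 1.4e33 cm^-2 s^-1)
hubner_factor        = 0.3    # fraction of ideal delivery actually achieved
days_remaining       = 103    # scheduled proton physics days left in 2011
already_delivered_fb = 1.3    # integrated luminosity already delivered, in fb^-1

seconds      = days_remaining * 24 * 3600
remaining_nb = hubner_factor * peak_lumi_nb_per_s * seconds  # in nb^-1
remaining_fb = remaining_nb / 1e6                            # 1 fb^-1 = 1e6 nb^-1

print(f"additional luminosity: {remaining_fb:.1f}/fb")                         # ~3.7/fb
print(f"projected 2011 total:  {remaining_fb + already_delivered_fb:.1f}/fb")  # ~5.0/fb
```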

5/fb is enough to exclude the Higgs, or find evidence for it, up to 600 GeV (LEP has already excluded it below 114 GeV). This will be a tremendous result for the LHC and much sooner than expected. At yesterday’s DG address to CERN staff, Rolf-Dieter Heuer was keen to make it clear that a Higgs exclusion would be just as exciting as a Higgs discovery! At a CERN seminar today Carminati said that there will be an update on Higgs searches using 1/fb from ATLAS in two weeks’ time.

From the blog comments I know that there are a few of you out there who are interested in what more can be done to increase luminosity this year, so let’s forget about the particle physics for now and see what the possibilities are for beam physics. Unless you are a hardened LHC geek you probably don’t want to read any further.

I don’t know everything that the beam operations teams are considering but the reports from the MD phase provide some interesting clues. Things should be much clearer after the “mini-Chamonix” meeting on 15th July. Some of you may be aware of points I missed so please do comment.

Not long ago it seemed most likely that very little would be done to increase luminosity this year, but the mini-Chamonix is dedicated purely to what can be done in the rest of 2011 and everything is on the agenda. The opportunities for luminosity can be summarised as follows:

  • Improved Hübner factor – luminosity x 1.5
  • Emittance reduced by half – luminosity x 2
  • Bunch intensity to twice nominal – luminosity x 4
  • ATS squeeze to 0.3 m – luminosity x 5
  • Bunch spacing to 25 ns – luminosity x 2
So there is a potential increase in luminosity by a factor of 120! But no, it is not possible to do all these things at once until some major hardware upgrades are made. That will be for the High Luminosity LHC project a few years down the road. For now we need to look at these factors one at a time and see what combinations are possible and most desirable.
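For those who want to see where the combined factor comes from, here is a minimal sketch using the standard scaling L ∝ n_b N² / (ε β*), where n_b is the number of bunches, N the bunch intensity, ε the emittance and β* the focusing at the collision point. The particular Hübner improvement assumed (0.3 to 0.45) is just an illustration.

```python
def lumi_ratio(n_b=1.0, N=1.0, eps=1.0, beta_star=1.0):
    """Relative change in luminosity for relative changes in bunch number n_b,
    bunch intensity N, emittance eps and beta*, using the standard scaling
    L ∝ n_b * N^2 / (eps * beta_star)."""
    return n_b * N**2 / (eps * beta_star)

gain_beam = lumi_ratio(n_b=2.0,        # 25 ns spacing -> roughly twice the bunches
                       N=2.0,          # bunch intensity to twice nominal
                       eps=0.5,        # emittance halved
                       beta_star=0.2)  # ATS squeeze: beta* from 1.5 m to 0.3 m

hubner_gain = 1.5                      # e.g. Hübner factor 0.3 -> 0.45 (assumption)

print(gain_beam)                # 80.0  -- from the beam parameters alone
print(gain_beam * hubner_gain)  # 120.0 -- the headline figure (not all at once!)
```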

Hübner factor

This factor depends on how efficiently they can run the collider and how much of the time they can keep it in stable beams. I think it is not unreasonable to expect a Hübner factor of 0.3 if they keep all the parameters fixed and concentrate on improving efficiency. If they are lucky and things go very well, an improvement of up to 50% is conceivable, but if they decide to try out some riskier ways to increase luminosity then a lower Hübner factor is more likely, perhaps around 0.2.

Reduced Emittance

The emittance is a measure of how much the protons in a bunch spread out in position and angle; lower emittance means the protons can be kept closer together and focused into a smaller spot at the collision points. Halving the emittance will double the luminosity. The nominal emittance is 3.75 μm, but by improving the injector chain they have found ways to get this down to about 1.7 μm. I don’t know of any downside to reducing emittance (anyone?). It does not lead to higher beam currents, just better luminosity. In that case it is a no-brainer that they will want to run with better emittance as soon as possible, isn’t it?

Bunch Intensity

“Nominal” bunch intensity is 115 billion protons per bunch, but they have already been running with up to about 125 billion protons per bunch. The “ultimate” bunch intensity was supposed to be 170 billion, but in an earlier MD slot they already pushed it beyond that to 195 billion. In the latest tests they went beyond twice nominal, to 250 billion! Sorry, that is already too many exclamation marks for one post, but some of these numbers are really surprising. Nobody expected them to do this. How did they do it?

Luminosity goes up with the square of bunch intensity so this is a great way to increase luminosity, but there is a catch. Too much beam current is dangerous for the collider and the protection systems may not be sufficient beyond some limit. The cryogenics can also only take so much before the extra heat from the beams causes them to fail. Some increase in bunch intensity at 1380 bunches is possible, I don’t know how much. To make use of the higher intensities that they can now reach they would have to decrease the number of bunches. For example, they could increase bunch intensity to 170 billion and decrease the bunch numbers to 1010. The total beam current would stay the same because it depends on bunch intensity times bunch number. But the luminosity would go up by 36% because luminosity depends on bunch intensity squared times bunch number.

The major downside of going for this option is that event pile-up increases. ATLAS and CMS already see about 8 events each time two bunches cross. Pile-up goes up with bunch intensity squared, so for a 36% luminosity increase they get about 85% more pile-up. High pile-up rates make it hard to reconstruct individual events in the detectors. There will be more background and uncertainty in the numbers produced. It will be up to CMS and ATLAS to decide how much pile-up they are willing to accept at this point in exchange for more luminosity.
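Here is a quick check of that arithmetic, assuming emittance and β* stay fixed so that luminosity scales as n_b N² and pile-up per crossing as N².

```python
# Check of the trade-off described above: raise the bunch intensity from
# ~125 billion to 170 billion protons while cutting the number of bunches
# from 1380 to 1010, keeping the total beam current roughly constant.
# Emittance and beta* are assumed unchanged.

N_old, nb_old = 1.25e11, 1380
N_new, nb_new = 1.70e11, 1010

current_ratio = (N_new * nb_new)    / (N_old * nb_old)     # ~1.00 (same current)
lumi_gain     = (N_new**2 * nb_new) / (N_old**2 * nb_old)  # ~1.35 (about +36%)
pileup_gain   = N_new**2 / N_old**2                        # ~1.85 (about +85%)

print(f"beam current         : x{current_ratio:.2f}")
print(f"luminosity           : x{lumi_gain:.2f}")
print(f"pile-up per crossing : x{pileup_gain:.2f}")
```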

ATS squeeze

Achromatic Telescopic Squeezing (ATS) is a sophisticated scheme for modifying the optics to focus the beams more tightly at the interaction points. It has been possible to squeeze down to beta* = 0.3 m, compared to the 1.5 m in present use. This implies a factor of 5 in luminosity with no extra beam current! But it would require a reduced crossing angle, which means more parasitic collisions, especially if they also increase bunch intensity. Realistically a factor of 1.5 might be gained this year using ATS without too many side effects, but this is unclear at the moment.
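To see roughly why the naive factor of 5 does not come for free, here is a sketch of the standard crossing-angle (Piwinski) reduction factor. All the parameter values in it (2 μm normalised emittance, 7.5 cm bunch length, a 240 μrad full crossing angle, γ ≈ 3730 at 3.5 TeV per beam) are my own illustrative assumptions, not official machine settings.

```python
import math

def geometric_factor(beta_star, theta_c, eps_n=2.0e-6, sigma_z=0.075, gamma_r=3730.0):
    """Crossing-angle luminosity reduction F = 1/sqrt(1 + (theta_c*sigma_z/(2*sigma*))^2),
    where sigma* = sqrt(eps_n * beta_star / gamma_r) is the transverse beam size
    at the interaction point.  Default values are illustrative assumptions."""
    sigma_star = math.sqrt(eps_n * beta_star / gamma_r)
    piwinski   = theta_c * sigma_z / (2.0 * sigma_star)
    return 1.0 / math.sqrt(1.0 + piwinski**2)

theta_c = 240e-6                          # assumed full crossing angle in rad
f_15 = geometric_factor(1.5, theta_c)     # beta* = 1.5 m (present)
f_03 = geometric_factor(0.3, theta_c)     # beta* = 0.3 m (ATS)

naive_gain  = 1.5 / 0.3                   # factor 5 from 1/beta* alone
actual_gain = naive_gain * f_03 / f_15    # ~4.3 once the crossing angle bites

print(f"F at 1.5 m: {f_15:.2f}, F at 0.3 m: {f_03:.2f}, net gain ~{actual_gain:.1f}x")
```

With those assumptions the squeeze alone gives a bit over a factor of 4 rather than 5 unless the crossing angle is also reduced, which is exactly the trade-off against parasitic collisions mentioned above.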

Bunch Spacing

Tests during the MD with injection at 25 ns bunch spacing and 216 bunches have been encouraging. The smaller spacing would mean that they could increase the bunch number up to the nominal figure of 2808 bunches, doubling the luminosity. The advantage of this over increased bunch intensity is that pile-up during bunch crossings is not affected, so CMS and ATLAS would prefer it.

The downside is that they cannot increase bunch numbers and bunch intensity at the same time because it would mean too much beam current. The luminosity increases are not as dramatic as what can be achieved by bunch intensity increases alone. Again it will be the trade-off between pile-up and luminosity that must be considered.
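As a toy comparison of the two routes (ignoring beam-current limits and emittance changes, and using the same scalings as above):

```python
# Toy comparison: pile-up per crossing scales like N^2 (per-bunch luminosity),
# total luminosity like n_b * N^2.  Ratios are relative to the current setup.

routes = {
    "25 ns: double the bunches, same intensity": dict(n_b=2.0, N=1.0),
    "50 ns: same bunches, double the intensity": dict(n_b=1.0, N=2.0),
}

for name, p in routes.items():
    lumi   = p["n_b"] * p["N"]**2
    pileup = p["N"]**2
    print(f"{name}: luminosity x{lumi:.0f}, pile-up x{pileup:.0f}")
```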

In any case I am not sure they are ready for 25 ns spacing this year. Further tests during later MD slots are necessary. This step is more likely for next year.

Conclusions?

Details will become clearer at the mini-Chamonix. From what I have seen so far, I think the best we can hope for this year is some modest increase in bunch intensity and improvement in emittance, leaving them to concentrate on keeping a good Hübner factor. Some use of ATS optics towards the end of the year’s run may be tried. A doubling of peak luminosity during 2011 is probably an option if general machine running is smooth enough and if CMS and ATLAS can take the extra pile-up.

For 2012 there is much more scope for improvements. I think they will go with 25 ns spacing and use ATS to increase luminosity further without exceeding beam current limits. Ten times luminosity is not beyond the realm of possibility so they could deliver 50/fb during 2012 at 9 TeV. Perhaps I am being too optimistic again. What do you think?


30 Responses to For LHC Geeks Only

  1. Anders Lund says:

    Thank you for this post, it has been an exciting week, but too much has happened to keep up.
    If my 2+ years of following the LHC have taught me anything, it is that they cannot just leave the machine alone without steadily improving it. What I think will happen is that they will adiabatically increase the bunch intensity for the rest of the year, possibly to values above 1.7e11, depending on the stability. 25 ns and ATS will be on the agenda next year is my guess.
    One thing you did not mention is luminosity leveling. It is a method where the beams are separated by a feedback system, which keeps the luminosity constant at a value lower than the maximum. This is already used at LHCb, which is kept at 300 (μb s)^-1. This system works well, it seems. If used at CMS and ATLAS it would mean that, even if the detectors cannot handle the maximum luminosity (because of pile-up), they would be able to run constantly at a lower luminosity with no decay. This might also lead to some very long fills (thus increasing the Hübner factor). Imagine running at >1300 (μb s)^-1 for 50 hours!
    One of the largest showstoppers could be radiation. At high luminosity and beam current the radiation dose delivered to the electronics in the tunnel might be too high for the systems to handle. I believe that they are installing additional shielding in the tunnel during this technical stop.

    /Anders Lund

  2. Philip Gibbs says:

    Good points. Just increasing the intensity (with improved emittance) is the obvious route to take. There will be a limit set by the vacuum and cryogenics. It will depend on factors such as how well the e-cloud is cleared, but I don’t think they will be able to go above 1.7e11.

    Luminosity leveling is an interesting possibility, especially if one of CMS or ATLAS wants more luminosity than the other. There would be strong competition between them to improve their reconstruction analysis and take more luminosity.

    Radiation is also an important topic to bring up. It would make future upgrades both more necessary and more difficult. There is a talk at EPS-HEP about coating critical components with diamond for radiation protection, which shows how significant it is. They may limit intensity and/or luminosity just because of radiation considerations. It would be a shame to age the detectors too much before they reach 14 TeV.

    • Bill K says:

      Radiation damage occurs in two places – the electronics, and the inner detectors in CMS and ATLAS. The original plans anticipated that radiation exposure would need to be addressed after only 0.5/fb.

      “There is a talk at EPS-HEP about coating critical components with diamond for radiation protection which shows how significant it is.”

      Talk of diamonds made the kobolds’ ears perk up! However this is only a long-term prospect, still a ways down the road. The idea is to fabricate completely new solid state detectors using diamond in place of silicon. Diamond works very well for this, and is considerably more radiation resistant than silicon. What has made the idea possible is success with the CVD (Chemical Vapor Deposition) manufacturing process for creating diamond structures. Undoubtedly the military is thinking about this too.

  3. algernon says:

    “So there is a potential increase in luminosity by a factor of 120!”

    Sounds great, but is that 120x increase even necessary?

    In the future they’ll start to get diminishing returns where adding more lumi might no longer be worth the effort and the risks, as they will only be able to marginally improve their 14 TeV measurements.
    Since the machine is performing much better than anticipated, this point in time might not be as many years away as initially thought, right? (Budget considerations also come to mind, especially if no new physics is found – it only takes ~16/fb to find a 5-sigma SM Higgs @ 115 GeV, and that’s the “hardest” mass.)

    What key science results can you possibly get with, say, 3000-4000/fb that you cannot get with 1500-2000/fb?
    Is there a ballpark long-term total lumi figure where CERN will likely just call it quits and direct funds to other projects – perhaps a new collider going well above 14 TeV?

    • Philip Gibbs says:

      These large amounts of data will be very important for two reasons. Firstly there can be some rare events from decay modes with low cross section. You have to sift through huge amounts of data to find them. Secondly, more data allows precision measurements such as the asymmetries seen at the Tevatron, or just the mass and width of a particle like the Higgs. You can compare these numbers with predictions to see how well you understand the decay processes. Any deviation in the width can indicate a new particle for example.

      Most of the collision events are due to strong and electromagnetic interactions between particles, but some of the most interesting possibilities come from the weak force, for example the creation of dark matter. A lot of events are needed to see these or exclude them.

  4. Philip Gibbs says:

    There are indications that the experiments are anticipating luminosity increase by a further factor of 4 this year, so my 10/fb prediction could still transpire.

    Reality depends on the decisions to be taken at the mini-Chamonix, where all possibilities and factors will be considered, and on the vagaries of the machine itself.

    • carla says:

      A further increase by 4? At the start of the previous run in mid-May the luminosity was 0.9/nb/s, and at the end of June, 1.2/nb/s. Maybe they can double the previous difference of 0.3/nb/s, giving 1.8/nb/s at the end of the next run and 3/nb/s at the end of the year.

      It’ll be interesting to see what they manage to do with the LHC for a few days before the mini-Chamonix, such as raising luminosity to 1.3 or 1.4/nb/s with no change in emittance.

    • Philip Gibbs says:

      All they did since May was increase bunch numbers. Intensity went down if anything. This was made difficult by cryo and RF problems. Remember how the RF limited the total intensity, but they removed that obstacle in the last couple of weeks.

      Now there is nothing to stop them increasing bunch intensity up to around 170 billion and emittance can be reduced too. This is where the 4x factor could come from, but there are lots of ifs.

      I think they will be slow to come out of this technical stop because cryo is still warm with not much time left.

    • algernon says:

      Really? 4x before the end of the year? Seems bold…

      Could it be that they’re beginning to see signals from the Higgs – say, in the 130-140 GeV region – and are trying to reach discovery status before the end of 2011? And then maybe cancel the 2012 run and shut down for an immediate 14 TeV upgrade…

      (I’m just having fun speculating a bit, we’ll know more in just a few days anyway… a very exciting time for HEP indeed)

    • Philip Gibbs says:

      I don’t think they are being influenced by undisclosed results. The plan has always been to get as much luminosity as possible during 2011.

      We all know how fluid their plans can be but I think they will still run in 2012 because they have to plan the shutdown well in advance. If they find something this year the incentive to increase energy in 2012 will be stronger.

  5. B Jackson says:

    Can someone explain the ATS squeeze? Supposedly it gets β* down to 30 cm, but on p. 33 and 34 of the above-linked “reports from the MD phase” there is a graph of beta with a spike to 6800 m circled. Is this some sort of reciprocal relationship between β and β* that makes this graph meaningful? Where does the 2000 m² proportionality constant come from?

  6. Derk says:

    Does anyone know how long this tech stop is going to be? They haven’t updated the “Coordination” vistar.

  7. Marc says:

    I don’t fully understand the table. If you look at
    http://www.science20.com/quantum_diaries_survivor/point_higgs_searches-79568

    then ATLAS claims that 2/fb will give a 95% exclusion down to 123-124 GeV, but your table has it going all the way down to 114. Am I misinterpreting the meaning of the first column? If ATLAS has 2/fb, then the total would be 4…

    • Anon says:

      The luminosities in the table are per experiment, so the limits are what one would get after accumulating that lumi in each experiment concurrently, and then combining ATLAS and CMS.

  8. Bill K says:

    I thought maybe the recent disagreement between D0 and CDF would have warned us of the dangers of “combining” results. There are great advantages to having two independent experiments, and combining their results does not enhance the error bars, it enhances the errors!

  9. Anon says:

    So long as one makes sure that the separate results are self-consistent (for instance via the combined likelihood or chi^2), and one makes sure that the correlated and uncorrelated uncertainties are correctly treated, there’s no reason not to do this – statistically it is perfectly valid.

  10. carla says:

    Is the amount of data collected by ATLAS equal to that for CMS when searching for the Higgs? I thought there was a significant difference, with the CMS detector optimised for detecting processes connected with the production of the Higgs.

    • Anon says:

      ATLAS and CMS each have their strengths so on average the performance is similar, even if one experiment may do better in one particular channel. The datasets are basically the same however, unless one springs a water leak again …

    • Philip Gibbs says:

      I suppose they are optimised for different channels with e.g. CMS being more optimised for decay modes that produce muons.

      I don’t see a reason why combining the data sets from both should be much more difficult than combining the different channels from either one of them, except there are more data and more people involved.

      • carla says:

        Combining isn’t the problem, it’s the assumption that ATLAS and CMS need the same amount of data to discover a Higgs at the same confidence level. I thought CMS was more efficient compared to ATLAS.

      • Philip Gibbs says:

        What you may find is that CMS is better at the high masses where the Higgs decays more to muons so it will have smaller uncertainties than ATLAS at high mass. For low mass it could be the other way round because ATLAS can record the jets and photons a bit better.

        The table above just assumes they have the same capabilities, so you are right that it is a crude approximation. However, the statistical fluctuations can reverse their fortunes anyway.

        The important thing is that the actual combination is done properly. This table is just a guide to what we can expect and is not worth slaving over.

    • Bill K says:

      One thing that makes combining results from the two experiments difficult is that the events are being filtered differently. Likewise if the energy is raised next year, how do you combine 7 TeV data with 8 TeV data – you are measuring two different things. You can only report them side by side.

      At any rate I wasn’t questioning the difficulty of combining, I was questioning the wisdom. If an important result is found, one wants independent confirmation. And for this reason the LHC designers did not build two CMS’s, or two ATLASes, they built one of each, to serve as a cross-check. Obvious. But if you throw all the results into one basket you’ve defeated the purpose. And you are left with no way to check it except to split the results back apart again and look at the correlation.

      If CDF and D0 had reported their results in combination, would they have seen a bump? Or not?

      • Philip Gibbs says:

        The Tevatron Higgs exclusions are from combined results. D0 and CDF would not have much exclusion on their own yet.

        In principle combining results from different detectors is not difficult. Even if the energies are different and different triggers have been used you can still combine the results. You just have to make sure that your background estimates combine all the different elements too. It is more work and there is more scope for making mistakes, so it is only worthwhile if the datasets you combine are about the same size or have complementary profiles.

        They already face this complexity to some extent when they combine last year’s LHC data with this year’s. The triggers and pile-up effects are different, so they should estimate the backgrounds individually for each part of the dataset and combine them. I think this is one reason why they don’t always add the 2010 data in.

        It is still true that the different detectors cross-check each other, but this is to ensure that systematic errors are not overlooked. This is what is happening with the Wjj signals from D0 and CDF. We know one of them must have made an error because they are not consistent, but it must be something systematic because the statistical side is too good. Once it has been confirmed that the data from two detectors are consistent and we believe that the systematics are correctly understood, then they can be safely combined.

        I am sure that they do plan to combine ATLAS and CMS data for Higgs, but not yet.

  11. Philip Gibbs says:

    Mike Lamont in the CERN bulletin has confirmed the plan to increase the luminosity further during 2011 using bunch intensity and emittance:

    “This week, the LHC has been in a 5-day technical stop (4-8 July), which is to be followed by a sustained 6-week physics production run. The aim is to ramp quickly back up to 1380 bunches per beam and start “turning the handle”. The Booster and PS are able to offer somewhat higher bunch intensities and smaller beam sizes than those used at present, and the hope is to gently push these parameters in the search for even higher luminosities.”

    • carla says:

      hmm… I wonder if it’ll be a slow, slooooooow or sloooooooooooow type of increase. I’ll put a bet on sloooooooooooow with 0.1 /nb/s luminosity increases per week and collecting 3/fb by the end of the 6 weeks 🙂
