The Hübner Factor for the LHC

Update: New record luminosity 876/μb/s in ATLAS and 800/μb/s in CMS using 768 bunches per beam. The total integrated luminosity delivered has now passed 300/pb.

A couple of days ago we had some fun estimating how much integrated luminosity the Large Hadron Collider could collect this year. My estimate was 10/fb but a poll of readers favoured a more sober 5/fb-7/fb range. The official estimates shown at Chamonix said there would be about 2/fb-4/fb collected in 2011. Following the comments I think it is worth going into a little more detail about how these calculations are done.

There are many factors involved, but we can conveniently write an equation in terms of three factors:

L_I = 8.64 \times 10^{-5} D L_p H

L_I is the integrated luminosity in inverse femtobarns. The numerical constant is just the 86400 seconds in a day divided by the 10^9 inverse microbarns that make up an inverse femtobarn.

D is the number of running days scheduled for physics, assuming no unplanned stoppages. If we allow for the build-up time this is going to be about 135 days this year.

L_p is the peak luminosity in inverse microbarns per second. Based on current luminosities and the assumption that they will increase bunch numbers to 1404, with most of them colliding, we can estimate a peak luminosity of 1600/μb/s. At Chamonix they estimated 2000/μb/s given the 50ns spacing being used, so there may yet be room for improvement.

H is the Hübner Factor, which is the hardest part to estimate. It is the ratio of actual delivered luminosity to the amount you could collect by running continuously at the peak luminosity, which makes the equation above correct by definition. The official estimates for the Hübner Factor are about 0.2, or even just 0.15 for running at 50ns spacing. The actual Hübner Factor over the last few weeks has been in the range 0.3 to 0.4. To get my estimate of 10/fb they would need a factor of 0.53 at a peak luminosity of 1600/μb/s, or 0.43 at 2000/μb/s. Obviously I am being optimistic, but these numbers are not impossible.
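To make the arithmetic concrete, here is a minimal Python sketch of the formula above; the helper names are mine, not anything official, and the inputs are just the scenario numbers quoted in this post.

```python
# L_I = 8.64e-5 * D * L_p * H, with L_p in /ub/s and L_I in /fb.
# The constant is 86400 seconds per day divided by the 1e9 inverse
# microbarns in an inverse femtobarn.

SECONDS_PER_DAY = 86400.0
INV_UB_PER_INV_FB = 1e9

def integrated_lumi(days, peak_ub_s, hubner):
    """Integrated luminosity in /fb for D physics days at peak L_p (/ub/s)."""
    return days * SECONDS_PER_DAY * peak_ub_s * hubner / INV_UB_PER_INV_FB

def required_hubner(target_fb, days, peak_ub_s):
    """Hübner Factor needed to reach a target integrated luminosity."""
    return target_fb * INV_UB_PER_INV_FB / (days * SECONDS_PER_DAY * peak_ub_s)

print(integrated_lumi(135, 1600, 0.53))   # ~9.9 /fb, the optimistic estimate
print(required_hubner(10.0, 135, 1600))   # ~0.53
print(required_hubner(10.0, 135, 2000))   # ~0.43
```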

The official estimates for the Hübner Factor are based on the observation that LEP realised a figure of 0.2, so the LHC might be expected to be similar. However, the LHC is very different from LEP and the Tevatron, both of which required a store of anti-particles. I think there is good reason to expect a better result from the LHC, especially given recent performance.

It is also possible to calculate theoretical values for the Hübner Factor given T_h, the luminosity half-life for the beams, and T_c, the mean cycle time between fills. The optimum time to keep each fill is then around T = 1.5 \sqrt{T_h T_c}. Assuming the luminosity decays exponentially and fills can always be kept for this time without an unplanned dump, each fill delivers L_p (T_h/\ln 2)(1 - 2^{-T/T_h}) over a total cycle of T + T_c, so the Hübner Factor is easily calculated as H = (T_h/\ln 2)(1 - 2^{-T/T_h})/(T + T_c). For example, taking the current half-life of 20 hours and an average time between fills of 7 hours, we find that fills of length 18 hours are about optimal and give H = 0.53 as required! The minimum cycle time is even better at about 3 hours, but the real question is how much this will be extended due to failed injections and other faults.
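As a sanity check, here is a short sketch of that calculation (same Python as above, assumed helper name), for purely exponential luminosity decay; it reproduces the H = 0.53 figure for a 20-hour half-life and 7-hour turnaround.

```python
import math

def hubner_factor(half_life_h, cycle_h, fill_h=None):
    """Theoretical Hübner Factor for exponentially decaying luminosity.

    half_life_h: luminosity half-life T_h (hours)
    cycle_h:     mean turnaround time T_c between fills (hours)
    fill_h:      fill length T (hours); defaults to the near-optimal
                 T = 1.5 * sqrt(T_h * T_c) used in the post
    """
    if fill_h is None:
        fill_h = 1.5 * math.sqrt(half_life_h * cycle_h)
    # Each fill delivers L_p * (T_h / ln 2) * (1 - 2^(-T/T_h)), spread over
    # a full cycle of T + T_c notionally running at peak luminosity L_p.
    per_fill = (half_life_h / math.log(2)) * (1 - 2.0 ** (-fill_h / half_life_h))
    return per_fill / (fill_h + cycle_h)

print(hubner_factor(20, 7))   # fills of ~18 h, H ~ 0.53
print(hubner_factor(20, 3))   # with the minimum ~3 h turnaround, H ~ 0.65
```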


10 Responses to The Hübner Factor for the LHC

  1. Kevin says:

    One margin that remains is the bunch size. They only used 1.15 to 1.25 × 10^11 protons per bunch, but that can go up to 1.5 × 10^11 protons with 50 ns spacing. And the luminosity depends on the square of the bunch size. So they can have a 44% gain just from the bunch size. With the current filling scheme, it would give 1.15/nb/s instead of 800/μb/s. With 1300 colliding bunches, it gives a 2.1/nb/s luminosity (see the quick check sketched after this comment).
    In computing H, one must be careful to properly analyse the availability time. Between two major failures H is between 0.3 and 0.4. If we take into account major failures, more than a day long, H drops a lot (look at the past 3 days, H is 0.02 or so).
    A good estimate for the integrated luminosity in 2011 would be 4-8/fb.
    When you look at the vistars, you can see that they are now testing 1.5-1.6 × 10^11 proton bunches.
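
    A quick numerical check of the square-law scaling described above, using the figures quoted in the comment (the helper name is illustrative, not an official tool):

    ```python
    # Luminosity scales as n_colliding * N^2 for bunch intensity N.
    def scaled_lumi(base_ub_s, base_n, new_n, base_bunches=1, new_bunches=1):
        """Scale a measured luminosity by intensity squared and bunch count."""
        return base_ub_s * (new_n / base_n) ** 2 * (new_bunches / base_bunches)

    print(scaled_lumi(800, 1.25e11, 1.5e11))             # ~1150 /ub/s, i.e. ~1.15/nb/s
    print(scaled_lumi(800, 1.25e11, 1.5e11, 768, 1300))  # ~1950 /ub/s, close to the 2.1/nb/s quoted
    ```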

    • Philip Gibbs says:

      That’s an interesting point. I can see that they are trying out these higher-intensity bunches now with some injections of 8 bunches. If they can work with those this year it will make a big difference.

      • carla says:

        It would be nice if you could blog on the machine development phase going on right now and what we should expect when it starts to fill for physics a week on Saturday.

        Right now, there’s a message “High Beta optics MD unsqueezing… Beta star@ip1 and 5 now 90m!!”

        I’d love to join in the excitement, but I haven’t a clue what any of it means.

      • Philip Gibbs says:

        I think there is too much missing information for me to be able to do that well. There is a report on the MD at http://lhc-commissioning.web.cern.ch/lhc-commissioning/news-2011/presentations/week18/05-05-11_MD1.pptx but it assumes a lot of specialized knowledge. Maybe some others can add some comments about it.

      • Philip Gibbs says:

        Usually the beams are squeezed down to beta=1.5m to concentrate the protons at the collision points to maximise luminosity for ATLAS and CMS. However, there is another experiment called TOTEM that requires the opposite effect. It will look at particles created very close to the direction of the beams. For this they need to unsqueeze the beams to beta=90m. TOTEM has goals such as measuring the size of the protons and getting an accurate estimate of luminosity. http://public.web.cern.ch/public/en/lhc/TOTEM-en.html

      • Philip Gibbs says:

        Most of the rest of the MD seems to be looking at bunch instabilities, either on their own or when they interact. By understanding these things better they can prevent the thing going crazy, leading to unwanted beam dumps. Instability effects get worse as the number of bunches goes up, so they have to keep working at it. That sums up the basic level of my understanding. 🙂

        As Kevin pointed out, they seemed to be studying the instabilities with higher-intensity bunches. I have no idea if this is because they would consider running with higher-intensity bunches, or simply because they get better info about instabilities that way.

      • Philip Gibbs says:

        As far as I know, when they come out of the technical stop they will continue as before, increasing the bunch number in steps of 144 every few days until they get to 1404. At some point they will do Van der Meer scans to analyse the beam distribution and check luminosity. They also have to schedule in some time for the TOTEM runs. I am not aware of anything else that might happen.

    • Philip Gibbs says:

      Now they are trying 1.7 × 10^11 protons per bunch. They want to find the limits of the stability. These intensities are a long way past nominal, and it’s impressive that they can go so far. Perhaps it is too much to hope that they will be able to run physics at this level in the near future.

