Over the last few weeks the LHC controllers have been working towards an improved luminosity target using squeezed beams. This morning they succeeded when they declared stable beams in the new configuration. Since 30th March, when protons were first collided at 3.5 TeV per beam, they have been running with a configuration of 3.5 TeV/11 m/10 billion/2 bunches (energy per beam/beta/protons per bunch/bunches per beam). The new configuration is 3.5 TeV/2 m/12 billion/3 bunches. This should increase the luminosity by a factor of about 10 (×5 from the squeeze and ×2 from the extra bunches), but they may need to do some luminosity scans to reposition the beams before they actually gain that much collision rate.

The bucket configuration being used is (1, 8941, 17851) for beam 1 and (1, 8911, 17851) for beam 2. The total number of buckets is set by the frequency of the RF fields used to accelerate the beams and the circumference of the collider ring. The result is that there are exactly 35640 buckets in each beam where bunches of protons can be placed. The bunches are injected from the SPS ring into the main LHC ring with careful timing so that they land in the buckets the controllers want.
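The 35640 figure can be checked from two published machine parameters: the RF frequency (about 400.79 MHz) and the ring circumference (about 26659 m, with the protons moving at essentially the speed of light). The number of buckets is just the RF frequency divided by the revolution frequency; a quick sketch, assuming those round figures:

```python
# Rough check of the bucket count, assuming the published LHC figures:
# RF frequency ~400.79 MHz, circumference ~26659 m, protons at ~c.
C = 26659.0          # ring circumference in metres
c = 299792458.0      # speed of light in m/s
f_rf = 400.79e6      # RF frequency in Hz

f_rev = c / C            # revolution frequency, roughly 11.245 kHz
buckets = f_rf / f_rev   # buckets per beam (the RF harmonic number)
print(round(buckets))    # -> 35640
```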

The buckets chosen determine where the protons in the two beams will cross over and collide. Bunches in bucket 1 of beam 1 and bucket 1 of beam 2 circulate in opposite directions, so they come together at two points diametrically opposite each other around the ring. These two points are called IP1 and IP5, and they are where the two biggest experiments live (ATLAS and CMS). Other bunches that share the same bucket number will also collide at these points. E.g. with today's bucket numbers the bunches in bucket 17851 of either beam will also collide in ATLAS and CMS, but the bunches in buckets 8941 and 8911 will miss, so these experiments are now getting twice as many collisions as in the previous configuration.

The other bucket numbers are chosen to provide collisions at the other two intersection points, IP2 (ALICE) and IP8 (LHCb). The point IP2 is exactly one eighth of the way round the collider ring. Because the beams circulate in opposite directions, the meeting point shifts by only half a bucket spacing for each unit of difference in bucket numbers, so bunches collide at IP2 when the difference of bucket numbers (b2 – b1) is exactly 35640/4 = 8910. With today's configuration the bunch in bucket 8911 of beam 2 collides with the bunch in bucket 1 of beam 1, and 17851 of beam 2 collides with 8941 of beam 1. So ALICE is also getting twice as many bunches colliding as before.

Finally, LHCb at IP8 sits approximately one eighth of the way round the ring in the other direction, but because of the nature of the detector its collision point is 11.5 meters away from the exact one-eighth point. This means that the difference in bucket numbers must be -8940 rather than the more convenient -8910. With the new numbers, bucket 8941 of beam 1 collides with bucket 1 of beam 2, and bucket 17851 of beam 1 collides with bucket 8911 of beam 2. So LHCb also sees two collisions for every circuit of the ring, and the controllers have been fair to each of the experiments.
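The bookkeeping above boils down to checking (b2 – b1) modulo 35640 against one target difference per interaction point: 0 for IP1/IP5, +8910 for IP2 and -8940 for IP8. A small sketch of that counting, using the bucket numbers from this post (the function name is just illustrative):

```python
# Count colliding bunch pairs at each experiment, using the bucket
# differences quoted in the post: 0 for IP1/IP5 (ATLAS, CMS),
# +8910 for IP2 (ALICE) and -8940 for IP8 (LHCb), modulo 35640.
H = 35640  # buckets per beam

BEAM1 = (1, 8941, 17851)
BEAM2 = (1, 8911, 17851)

# target value of (b2 - b1) mod H for a collision at each point
TARGETS = {"IP1/IP5 (ATLAS, CMS)": 0 % H,
           "IP2 (ALICE)": 8910 % H,
           "IP8 (LHCb)": -8940 % H}

def collisions(beam1, beam2):
    """Count colliding bunch pairs per circuit at each interaction point."""
    counts = {name: 0 for name in TARGETS}
    for b1 in beam1:
        for b2 in beam2:
            for name, diff in TARGETS.items():
                if (b2 - b1) % H == diff:
                    counts[name] += 1
    return counts

print(collisions(BEAM1, BEAM2))
# Each experiment sees two colliding pairs per circuit.
```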

| b2 - b1 | b1 = 1 | b1 = 8941 | b1 = 17851 |
|---|---|---|---|
| **b2 = 1** | 0 (IP1+IP5) | -8940 (IP8) | -17850 |
| **b2 = 8911** | 8910 (IP2) | -30 | -8940 (IP8) |
| **b2 = 17851** | 17850 | 8910 (IP2) | 0 (IP1+IP5) |

As the number of bunches is increased the controllers will have to work harder to find the best bucket numbers. For CMS and ATLAS they want all bunches in equal bucket numbers to maximise the number of collisions. To please ALICE they should place them at intervals a quarter of the way round. Four bunches in buckets (1, 8911, 17821, 26731) for both beams would be ideal for CMS, ATLAS and ALICE, which would each see four collisions per circuit, but it would fail for LHCb. They will have to offset the bucket numbers to get the best results.
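A quick way to see why the symmetric four-bunch scheme fails for LHCb is to count the bucket differences against each experiment's collision condition (0 for ATLAS/CMS, +8910 for ALICE, -8940 for LHCb, modulo 35640). A sketch, with illustrative names:

```python
# Score the symmetric four-bunch scheme from the post against the
# collision conditions quoted there. All differences in this scheme
# are multiples of 8910, so the -8940 condition for LHCb never fires.
H = 35640
TARGETS = {"ATLAS+CMS": 0, "ALICE": 8910 % H, "LHCb": -8940 % H}

def score(beam1, beam2):
    """Collisions per circuit at each experiment for a filling scheme."""
    counts = {name: 0 for name in TARGETS}
    for b1 in beam1:
        for b2 in beam2:
            d = (b2 - b1) % H
            for name, target in TARGETS.items():
                if d == target:
                    counts[name] += 1
    return counts

quarter = (1, 8911, 17821, 26731)   # quarter-ring spacing, both beams
print(score(quarter, quarter))
# ATLAS+CMS and ALICE each get four collisions; LHCb gets none.
```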

As the number of bunches gets larger the problem eases. If they could have 1188 bunches placed at 30-bucket intervals then all four experiments would see 1188 collisions per circuit. In practice this is not possible because some gaps must be left to allow safe dumping of the beams at the end of each run. The bunches must also be at least 10 bucket numbers apart. There are further constraints from how bunches can be injected, among other considerations. In fact the highest number of bunches planned is 2808 per beam.
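The 1188-bunch claim is easy to verify: all three target differences (0, +8910 and -8940, the last being 26700 mod 35640) are multiples of 30, so with 30-bucket spacing every bunch of one beam finds a partner at every interaction point. A sketch of that check (ignoring the abort gap and other real-world constraints):

```python
# Check the hypothetical 1188-bunch scheme: a bunch every 30 buckets
# in both beams, scored against the collision conditions in the post.
H = 35640
beam = [1 + 30 * i for i in range(1188)]   # 1188 * 30 = 35640 buckets
occupied = set(beam)

def hits(target):
    """Collisions per circuit at a point needing (b2 - b1) = target mod H."""
    return sum((b1 + target - 1) % H + 1 in occupied for b1 in beam)

for name, target in [("ATLAS+CMS", 0), ("ALICE", 8910), ("LHCb", -8940)]:
    print(name, hits(target))   # 1188 collisions at every point
```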

Before they get there we will see them go through other carefully worked out bucket schemes with possibly 16, 43, 96, 156 or 936 bunches per beam. Juggling the precise bucket numbers to please all the experiments is going to be a delicate business.

**Update:** The stable beam was held for 30 hours before being deliberately dumped. This is a new record for longevity.

What’s the second number in your beam parameter list? I’m referring to the “11m” and “2m” numbers. Is this some measure of the beam focus? The others I can all understand, but that one’s a bit unclear to someone not steeped in accelerator lore.

That is beta, which is roughly the width of the beam squared divided by the emittance. See http://en.wikipedia.org/wiki/Beam_emittance for more info.

As beta is reduced the luminosity is reduced by about the same factor.

Thanks! I also found it just a few postings back at LHC needs more luminosity.

I have to admit that wikipedia article isn’t very helpful on the subject of “beta”. Emittance I sort of get; it’s kind of like Heisenberg uncertainty: you can move uncertainty between position and momentum, but reducing the overall product is a lot harder.

And I understand that you generally “blow up” the beam, fuzzing the position in exchange for momentum stability, between intersection points, but do the opposite at the intersection point, trying to focus it as tightly as possible in space.

But I have a bit to learn before I understand how the beta parameter measures that spatial focus. Still, thanks.

“As beta is reduced the luminosity is ->reduced<- by about the same factor”

Increased, you certainly mean!

oops, you are right of course, thanks
