LHC end of proton-run Update

This week marks the end of proton physics runs at the LHC. The last days are dedicated to machine development, in particular test runs at 25ns. This shot shows the scrubbing runs during which they filled the collider to its full capacity for the first time. Record intensities of 270 trillion protons per beam were reached, with 2748 bunches injected in 288-bunch trains with 25ns spacing. This doubles the intensity numbers used in the proton physics runs this year, but it comes at a cost. In the pictures you can see how fast the beam intensity drops due to losses from the e-cloud effect. The purpose of the scrubbing runs this weekend was to clean out the e-cloud and improve beam lifetime. After nine runs the effect was significantly reduced but not fully removed. During the last few remaining days we may see some runs bringing 25ns beams into collision, but perhaps not at these intensities.
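As a quick sanity check of those headline numbers, here is a back-of-envelope sketch; only the quoted totals come from the figures above, and the comparison to the nominal bunch population is my own rough arithmetic:

```python
# Back-of-envelope check of the quoted scrubbing-run intensities
# (illustrative arithmetic only, using the numbers quoted above).
protons_per_beam = 270e12   # record intensity: 270 trillion protons per beam
n_bunches = 2748            # bunches injected per beam

protons_per_bunch = protons_per_beam / n_bunches
print(f"protons per bunch ~ {protons_per_bunch:.2e}")  # close to 1e11, near the nominal bunch population
```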


The point of these tests is to work out if and how the next runs can work at 25ns spacing rather than 50ns. That will happen when the LHC restarts at 13 TeV in 2015 after the long shutdown. We still have some heavy-ion runs before the shutdown, but otherwise it is going to be a long wait for new data. During the LHCC meeting last week Steve Myers gave an overview of the main considerations for running at 25ns vs 50ns. You can watch the video from here. Myers revealed that other tests had shown that they can increase the brightness of the beams from the injectors by 50% using new optics. In addition, the beta* in the next runs will come down to 0.5m or perhaps even 0.4m, so all other things being equal luminosities could be three times as high. The problem is that pile-up with 50ns spacing is already near the limit of what the experiments can take. Switching to 25ns will halve the pile-up, making the situation much more tolerable. The other alternative would be to use luminosity levelling to artificially keep the luminosity down during the first part of each run.
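The pile-up arithmetic can be made explicit with a toy calculation. The factor of three and the halving are the estimates quoted above; the 2012 pile-up figure is an assumed ballpark for illustration, not a number from the talk:

```python
# Toy pile-up scaling: pile-up per crossing ~ luminosity / bunch-crossing rate.
lumi_factor = 3.0        # quoted estimate: brighter beams plus smaller beta*
pileup_2012_50ns = 20.0  # assumed ballpark pile-up for 2012 running at 50ns

# At fixed luminosity, 25ns spacing doubles the number of colliding
# bunches, so each crossing sees half as many interactions.
pileup_next_50ns = pileup_2012_50ns * lumi_factor        # three times worse
pileup_next_25ns = pileup_2012_50ns * lumi_factor / 2.0  # halved again by the spacing
print(pileup_next_50ns, pileup_next_25ns)
```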

This means the pressure to run at 25ns is high: it will make a big difference to the physics reach, but the technical issues are very troublesome. As well as the e-cloud problems, which could mean losing maximum luminosity far too fast, they also have to worry about excess heating, which has already been a problem in this year's run, forcing them to wait for things to cool down before refills. Another big worry is that UFO events become much more frequent at 25ns, so even if they can maintain the luminosity they may keep losing the beams through unplanned dumps. Switching between 25ns and 50ns can lose a week of runs, so they must decide which setting to use from the start of 2015 and try to stick to it. This makes the present 25ns tests very important. They had been planned for a few weeks ago to allow plenty of time, but some injector problems set them back, as explained by Myers in his talk. Hopefully they will get all the data they need this week.

Meanwhile this week is also the occasion of the annual CERN Council Meetings. Remember that last year this was the event where they announced the first signs of an excess at 125 GeV in the Higgs searches. There are rumours coming in via Twitter of new updates from CMS on Wednesday and ATLAS on Thursday (see calendar comments). There is nothing yet scheduled in Indico that I can find, apart from a status update on the 13th (not physics) and the CCM open session on Thursday. We are still waiting for reports of the analysis using 12/fb at 8 TeV that were missing this year at the HCP meeting in Tokyo, especially the diphoton channel. In anticipation here is the latest CMS combo plot, which has been around for a few weeks but has not been much discussed.

The peak at 125 GeV is clear, but what about the excesses that continue up to 200 GeV? No doubt these are due to systematic errors and fluctuations that will go away, but any new updates will be keenly awaited, just in case.

The LHC has now delivered 23/fb to CMS and ATLAS at 8 TeV of which about 20/fb will be usable data. The complete analysis could be ready in time for Moriond in March with the diphoton over-excess being the most likely centre of attention.

Update: Indications are that the CMS and ATLAS updates were cancelled.

Update: Peter Woit thinks that ATLAS will give new diphoton and ZZ results at the LHC status meeting tomorrow. Meetings with this title usually indicate technical updates on the running of the collider and its experiments, not new physics results. It looks a lot like they are trying to spring a surprise by stealth 🙂 A presentation later at KITP confirms that they are planning to talk. It still seems that CMS are not ready to give their diphoton update but they do have a status update.

24 Responses to LHC end of proton-run Update

  1. Lubos Motl says:

    I am not sure that the general excess up to 200 GeV should be expected to go away. It’s an excess relative to the “null hypothesis” that doesn’t have any Higgs boson at all. That’s a different theory from the “currently relevant” null hypothesis with the 126 GeV Higgs boson. Especially the final states with neutrinos – missing energy – may get contributions at significantly higher energies than the actual Higgs mass, and those won’t disappear with more data.

    It would be good if someone redrew the graphs with the new null hypothesis, one that does include the 126 GeV Higgs and that may, for example, look for another Higgs.

    • Philip Gibbs says:

      It’s true that the low resolution channels such as WW can produce excesses away from the Higgs mass, but the WW channel only has a smaller excess there.

      I agree that it would be good to change the null hypothesis, but you would need a new signal hypothesis too in order to make this variety of plot.

    • anonymous says:

      I agree with Lubos here. The signal tail up to 200 GeV might well be from the Higgs boson itself, namely in channels with low or nonexistent mass resolution. The exclusion plot is a comparison of data to a background-only hypothesis; however, the observation of “something” at 125 GeV already tells us that this background hypothesis is wrong.

      Apart from that, there is currently an excess in the measurement of the WW cross section at LHC (see Resonaances blog), which might partially account for the upward fluctuation in the 160-200 GeV region.

      Time will tell.

  2. Tony Smith says:

    Is ”the latest CMS combo plot” a combo of 2011 and 2012 data for the diphoton channel only?

    If so, I would be very interested to see whether the ZZ-to-4l channel also sees the peaks around 200 GeV and 270 GeV with cross sections at least around 20 per cent of the full SM single-Higgs cross section.
    Those two peaks at such cross sections are predictions of my model with 3 Higgs mass states.


  3. Robert L. Oldershaw says:

    Oh, oh. Rumor has it that different masses for the resonance are seen for different decay paths.

    If Higgsy goes the way of FTL neutrinos, will it be time to consider a new general paradigm? Or time to dig in further and double-down?

  4. Philip Gibbs says:

    Sorry, can't post at the moment due to a disk crash, but ATLAS have updated their ZZ and diphoton results. They get a lower estimate of the Higgs mass from ZZ than other measurements and have said it represents a 2.3 to 2.7 sigma discrepancy. This is not a big deal. The ZZ sensitivity drops rapidly as mass decreases, so a mild upward fluctuation at a slightly lower mass can easily upset the books. The systematic and statistical errors then combine in complex ways that make it easy to underestimate the effect. The correct thing to do at this point is ignore the individual channel mass and use the high resolution channels (ZZ and gamgam) combined to get the mass estimate. The individual channel data can then be used to estimate the signal strengths for each decay at that mass (now looking to be around 126 GeV). The gamgam channel still gives an over-excess at 2.2 sigma, but we still wait for CMS to tell us what they have for gamgam. Spin and parity measurements were also presented today, and they favour the spin-zero, positive-parity standard model case over the alternatives.

  5. Lubos Motl says:

    Interesting. Because the total luminosity at 7-8 TeV is below 30/fb, we can’t complete our SUSY bet with Jester yet, so I actually can’t send him the $100 before the 13 TeV run starts. Bad luck. And SUSY may appear abruptly at higher energies if it’s there, so it’s not a formality, and his $10,000 won’t be safe even after all the 8 TeV papers are out and showing nothing.

    • someone says:

      Bad luck?
      I have an idea to make a SUSY application.
      If I input the observed energy scales or ranges of some SUSY particles, it suggests several favourable SUSY models fitting the observations, so anyone can easily see that SUSY is a real enterprise physics model before we get Planck-scale experiments.
      And if it has several options there, it would be perfectly nice, as FTL neutrinos or any other unusual things could also be explained by it 😉

      • Lubos Motl says:

        Dear Someone,

        There is no way SUSY could allow superluminal neutrinos. Quite the contrary: supersymmetry makes it more prohibited to have faster-than-light things. It prohibits tachyons – for example, that’s why tachyons in string theory disappear when SUSY is incorporated.

        Otherwise your comments are formulated so that they’re funny, but what you write is completely serious and sensible. Indeed, even if we were to continue increasing the collider energy up to the Planck scale, there would always be some reason to expect new physics – although it would never be guaranteed – and unless some completely different and new phenomena were found or competing motivated theories proposed, SUSY would remain the single most likely set of new phenomena to be expected from such a raising of colliders’ energy (and/or luminosity). There’s nothing funny about it; that’s how physics works. Of course, at any point people could decide that the probability of major new discoveries is too low to justify the high investments. We’re not far from that point today. But assuming they decide that construction of ever higher-energy colliders is warranted, SUSY will remain an important motivation for them.


      • someone says:

        Dear Dr. Luboš Motl,
        Thank you for your reply.
        Please let me be honest and ask you one question at this time. So another option for SUSY is a theory defined by the expectation of a null result?

      • Lubos Motl says:

        Dear Someone,

        if I understand you well, you are talking about a subclass of SUSY models that will remain experimentally indistinguishable from the Standard Model.

        But there isn’t any fixed class that could be defined in this way. By definition, every SUSY theory has to deviate from the non-SUSY Standard Model at *some* point. You’re not saying anything about the point.

        Even if you specified the point so that your subset of SUSY models were well-defined, it would be a very awkward class, because it would still contain wildly different types of SUSY models. They may look similar at low energies, but they’re fundamentally different because they look different at higher, i.e. more fundamental, energies.

        Indeed, the experiment is giving us the low-energy information, up to some scale. And we have to determine what’s going on fundamentally, at higher energies. But it’s not impossible.

        The non-SUSY Standard Model valid up to huge scales is possible but has flaws, e.g. the apparent contradiction with the dark matter observations, fine-tuning, etc. That’s why SUSY models will be preferred by many top theorists even if they’re required to reduce to the SM up to much higher energies than those currently probed by the LHC. You may find it counterintuitive, but that’s because you don’t really understand the logic behind those things.

        Best regards

      • someone says:

        Dear Dr. Luboš Motl,

        Thank you for your continued replies.
        My point of view might be a bit different from yours. The difference might be whether the Schwarzschild metric is regarded as an exact solution or not, as stated in an article on fqxi.org below,

        SUSY is indeed derived for the quantization of gravity, in which we think the Schwarzschild metric is correct. I would appreciate it if you read it through.

        Thank you again,

  6. someone says:

    So we do not need to wait over 100 years for enough experimental upgrades to settle your $100!
    And I will pay 100 bucks to you if you make an enterprise app (for iOS or whatever you like)!

  7. carla says:

    They’ve managed to fill with 804 bunches without collisions, and the beam lifetime still looks good, dropping by 0.3-0.6% per hour. Looking good for 25ns physics in 2015?

    • Philip Gibbs says:

      It seems like they lose protons fast at 450 GeV, then it's OK when ramped up. Filling before the ramp is going to be a bit like filling a bucket with a big hole. At the top intensities they were losing protons as fast as they could inject them. Perhaps they have other measures they can use to improve, plus lots of scrubbing.
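      The bucket-with-a-hole picture corresponds to a simple rate equation. Here is a minimal sketch; the injection rate and lifetime are made-up illustrative numbers, not machine data:

```python
import math

# dN/dt = R - N/tau: injection at rate R against losses with lifetime tau.
# The intensity saturates at N_max = R * tau, where injection just
# balances the losses - the bucket filling against its hole.
R = 3e13    # protons injected per hour (assumed)
tau = 5.0   # effective beam lifetime at 450 GeV in hours (assumed)

def intensity(t_hours):
    """Intensity after t hours of continuous filling from empty."""
    return R * tau * (1.0 - math.exp(-t_hours / tau))

print(f"saturation intensity ~ {R * tau:.1e} protons")
```

      A shorter lifetime (a bigger hole) lowers the ceiling in proportion, which is why scrubbing to improve tau matters for reaching full intensity.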

      • carla says:

        Clicking on the image at the top to enlarge it, we can see the losses look negligible up to an intensity of 1E14 at 450 GeV, and then start to increase dramatically as more bunches are added.

        I’d expect the losses to increase linearly with the number of bunches, unless the effect is more non-linear than I’d expect.

  8. s.vik says:

    Can anyone explain why the scattering amplitude calculations, which are critical for the background at the LHC, have simple formulas, as discovered in the last 5 or so years (i.e. using recursion and unitarity methods)? Since there is no new physics yet at the LHC, this may be the bigger discovery.


  9. Nick says:

    It’s shutting down, so it won’t be running on December 21st, 2012. A lot of people probably feared the LHC would be behind the Mayan Apocalypse, but it’s not running that day!
