LHC end of run update

October 30, 2011

Today is scheduled as the end of proton physics at the Large Hadron Collider, and the last few fills are circulating this morning. The integrated luminosity recorded this year will end at about 5.2/fb each for CMS and ATLAS, 1.1/fb for LHCb and 5/pb for ALICE. For the remainder of the year the machine will switch to heavy-ion physics until the winter shutdown.

The good news this year has been the high luminosity achieved, with peaks at 3.65/nb/s, compared with the pre-run estimate of 0.288/nb/s. The higher luminosity has been made possible by pushing beam parameters (number of bunches, bunch intensity, emittance, beta*) to give better-than-expected performance. The not-so-good news is that out of the 230 days available for physics runs, only 55 (24%) were spent in stable beams. This was due to a barrage of technical difficulties, including problems with RF, vacuum, cryogenics, power stability, UFOs (unidentified falling objects), SEUs (single-event upsets) and more. There were times when everything ran much more smoothly and the time in stable beams was then twice the average. The reality is that the Large Hadron Collider pushes a number of technologies far beyond anything attempted before, and nothing on such scales can be expected to run smoothly first time out. The remarkable amount of data collected this year is testament to the competence and dedication of the teams of engineers and physicists in the operations groups.
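As a rough sanity check on the numbers quoted above (my own back-of-envelope arithmetic, not an official LHC accounting), you can compare the recorded integrated luminosity with what the peak rate would give over the stable-beam time:

```python
# Back-of-envelope check of the 2011 figures quoted above.
# This is a rough illustration, not an official calculation.

peak_lumi_nb = 3.65              # peak luminosity, in /nb per second
days_available = 230
days_stable = 55
seconds_stable = days_stable * 86400

# Upper bound: if the machine had run at peak luminosity for the
# entire 55 days of stable beams.  (1/fb = 1e6 /nb.)
max_integrated_fb = peak_lumi_nb * seconds_stable / 1e6

print(f"stable-beam fraction: {days_stable / days_available:.0%}")
print(f"bound at constant peak: {max_integrated_fb:.1f}/fb")
print(f"recorded 5.2/fb is about {5.2 / max_integrated_fb:.0%} of that bound")
```

The recorded 5.2/fb comes out at roughly 30% of the constant-peak bound, which is consistent with luminosity decaying during each fill and the peak rate only being reached late in the year.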

After the heavy ion runs, attention will turn to next year. There will be a workshop at Evian in mid-December to review the year and prepare for 2012. Mike Lamont, the LHC Machine Coordinator, will be providing a less technical overview in the John Adams Lecture on 18th November.

Brian Cox, Bloggers and Peer Review

October 24, 2011

Brian Cox is a professor of physics at Manchester and a member of the ATLAS collaboration. He is very well known as a television science presenter, especially in the UK, and has been credited with a 40% increase in the uptake of maths and science subjects at UK schools. He is clever, funny and very popular. If you are in the UK and missed his appearance on the comedy quiz QI last week, you should watch it now (4 days left to view).

At the weekend the Guardian published a great question-and-answer article with Brian Cox and Jeff Forshaw, with whom I am less familiar. The answers all made perfect sense except one:

How do you feel about scientists who blog their research rather than waiting to publish their final results?

BC: The peer review process works and I’m an enormous supporter of it. If you try to circumvent the process, that’s a recipe for disaster. Often, it’s based on a suspicion of the scientific community and the scientific method. They often see themselves as the hero outside of science, cutting through the jungle of bureaucracy. That’s nonsense: science is a very open pursuit, but peer review is there to ensure some kind of minimal standard of professionalism.

JF: I think it’s unfair for people to blog. People have overstepped the mark and leaked results, and that’s just not fair on their collaborators who are working to get the result into a publishable form.

I would be interested to know what Brian Cox was thinking of here. Which bloggers does he think see themselves as “the hero outside of science”, and what examples back up the idea that bloggers try to circumvent peer review?

It is not clear to me whether Brian Cox is referring to the internal review process that experimental collaborations go through, or the peer review provided by journals as part of publication. Surely it cannot be the latter, because most science research, and especially that from CERN, is widely circulated long before it reaches the desk of any journal editor: not by bloggers but by CERN itself, through conferences, press releases, preprints, etc. So Cox must be talking about internal review, but that does not really count as peer review in the usual sense. In any case, people within a collaboration do not get away with blogging about results before approval.

There have been a few leaks of results from CERN and Fermilab before approval by the collaborations. For example, one plot featured here earlier this year came from a talk that turned out not to be intended for the public. However, by the time I had passed it on it was already on Google, having been “accidentally” released in a form that made it look like any other seminar where new preliminary results are shown. There were a few other examples of leaks, but none that fit what Cox is saying that I can think of.

Given his obvious dislike for blogs I can’t hold much optimism that Brian will comment here and elaborate on what he means, but it would be nice if he did. Otherwise perhaps someone else knows some examples that could justify his answer. Please let us know.

Help CERN search for the Higgs boson

August 11, 2011

If you have been following our reports on new developments in the search for the Higgs boson, you may be itching to get involved yourself. Now you can, by joining LHC@Home 2.0, a new project that lets people run simulations of LHC particle collisions on their home PCs.

Projects like this used to be difficult to set up because the software is written to run on Linux systems, but a new virtual machine environment from Oracle has made it much easier. CERN runs simulations on a powerful global computing grid, but you can never have too much CPU time for the calculations they have to do.

Running Monte Carlo simulations to compare with the latest experiments is an important part of the analysis that goes into the plots shown at the conferences. CERN has been making extraordinary efforts to show results to the public as quickly as possible, but these calculations are one of the limiting factors that keep us waiting. Getting the public involved in the process may be one way to ease the problem.
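For readers unfamiliar with the method, a Monte Carlo calculation simply means estimating a quantity by averaging over many randomly generated samples. A minimal toy example (nothing to do with the real LHC event generators, which simulate whole collision events, but the same principle):

```python
import random

# Toy Monte Carlo: estimate pi by throwing random points at the unit
# square and counting the fraction that land inside the quarter circle.
# The area of the quarter circle is pi/4, so the hit fraction times 4
# converges to pi as the number of samples grows.

random.seed(42)  # fixed seed so the run is reproducible
n_samples = 1_000_000
inside = sum(1 for _ in range(n_samples)
             if random.random() ** 2 + random.random() ** 2 < 1.0)
pi_estimate = 4 * inside / n_samples
print(f"pi is approximately {pi_estimate:.3f}")
```

The statistical error shrinks only as one over the square root of the number of samples, which is why serious simulations eat so much CPU time and why volunteer computing helps.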

D0 sees no bump

June 10, 2011

Sadly the D0 experiment sees no bump in boson+dijet at Fermilab, dismissing the 4.1 sigma result of CDF.

This has already been reported here, here, here, and here. The original paper is here.

Now the two experiments need to get together to work out which is wrong and why. It is not a foregone conclusion that D0 is right, but it seems more likely that someone would see an effect that isn’t there by mistake than that someone would fail to see an effect that is there. This gives D0 a strong psychological advantage.
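To put a number on how unlikely a pure fluctuation would be (my own illustration, not a calculation from either paper), the 4.1 sigma significance quoted by CDF translates to a one-sided Gaussian tail probability:

```python
import math

def one_sided_p(sigma):
    """One-sided Gaussian tail probability for an excess of `sigma`
    standard deviations (the usual particle-physics convention)."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# The CDF bump was quoted at 4.1 sigma:
p = one_sided_p(4.1)
print(f"4.1 sigma -> p = {p:.2e}")  # roughly 2e-5
```

A probability of order 2 in 100,000 sounds decisive, but it assumes the background model is correct, which is precisely what is in question here; a systematic error in the background can mimic a very significant bump.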

To find out what went wrong, they will have to compare the raw plots and the backgrounds as seen in these plots.

The differences are too subtle to see from the visual images alone, and it does not help that the two experiments used different bins. There do appear to be significant differences in the backgrounds, while the data look quite similar. If that is the case, then the problem is purely theoretical and they just need to compare their background calculations. However, the detectors are different, so perhaps the backgrounds should not look exactly the same. Only the people directly involved have enough detail to get to the bottom of it.

I hope they will work it out and let us know because it would be nice to see that their understanding of their results can be improved to give better confidence in any similar future results.

By the way, you can still vote for us on 3QuarksDaily

“crackpots” who were right: the conclusion

August 28, 2010

I have been posting a blog series about scientists who were called “crackpots” but eventually turned out to be right. There is a convenient archive of the posts under the tag crackpots-who-were-right in case you missed any of these fascinating stories. I could carry on the series forever, but I want to do other things so I’m going to conclude it with this last post.

If I had continued I would have gone on to tell you about Barry Marshall, who got the Nobel Prize after showing that stomach ulcers are caused by a bacterium rather than stress, as everyone believed. He found it so difficult to convince anyone that he eventually drank a petri dish of the bacteria to prove it. I also wanted to write a bit about Robert Chambers, who wrote a popular book about evolution before Darwin. He was ridiculed by biologists for his misuse of terminology, but the public were won over and he paved the way for acceptance of Darwin’s theory while much of the scientific establishment held on to creationism. I also never got round to the famous case of Hannes Alfvén, another Nobel laureate, who faced ridicule when he realised that plasmas and magnetic and electric fields are important in galactic physics, not just gravity as everyone else believed. Nor have I mentioned Subrahmanyan Chandrasekhar, who showed that stars above a certain mass would eventually collapse to form black holes, at a time when others did not believe they could really exist. The lambasting he got from Eddington almost ended his brilliant career. Then there was Joseph Goldberger, who showed that pellagra is a disease caused by dietary deficiency, but for political reasons his opponents continued to claim it was infectious. Others on my list are William Harvey for blood circulation, Doppler for light frequency shifts, Peyton Rous for showing viruses can cause cancer, Boltzmann, Dalton, Tesla, Alvarez, Margulis, Krebs, and on and on. All of them had to fight against resistance before their ground-breaking work gained the recognition it deserved.

But so what? What can we draw from this? Some people have commented that these people were not real crackpots. They worked as real scientists and had ideas that just took time to establish. They are not like the people who turn up in physics and maths forums with crazy ideas that show no respect for hundreds of years of progress in science. Furthermore, our “crackpots”-who-were-right are a tiny minority compared to all the ones who were wrong.

I disagree with these points. Firstly, these people really were treated as crazy and were subjected to ridicule or were ignored. The cases described here are the extremes. There are many more who have merely had an important paper rejected. In fact it is hard to know the real extent of the problem because only the most important stories get told in the history of science. My guess is that these people represent the tip of a large iceberg most of which lies hidden below the threshold it takes for historians to take note.

Furthermore, even if the “crackpots” who were right are the minority among all “crackpots”, they are still the most significant part. It is better to create an environment in which these people can have their theories recorded for the sake of the few who are right, than to try to dispel them all because of some irrational fear that they disrupt real science.

And, even amongst those who have really crazy ideas, there will be people like Ohm who also have some valid ideas hidden underneath. No amount of peer review or archive moderation can reliably separate the good ideas from the bad. The only solution is to allow everyone to have their say and to record it in a permanent, accessible form. Some people ask me why I expect scientists to wade through so many papers looking for something they find worthwhile. The answer is I don’t. Work of no value will be ignored, while useful ideas will be found by someone doing related research who comes across them through keyword searches or other means. Even in the academically run archives there are vast numbers of papers that will never be cited or read by many people. Scientists find out about new ideas through citations, seminars, conferences, word of mouth, etc.

I hope that at least some people will read this series and get the point about why we run the viXra archive with an open policy that allows any work on a scientific topic to be recorded. I can’t say that some future Nobel Prize winner will be among our deposits, but it is not impossible. More likely there will be lots of smaller good ideas that move science along in less dramatic steps, but that is the way most science is done.