String Theorists get biggest new science prize

July 31, 2012

Yuri Milner is a Russian hi-tech investor who dropped out of physics classes as a student. He must have done quite well with his investments because he has just given away $27,000,000 in prizes to nine physicists in $3,000,000 chunks. He plans to do the same every year, making his the biggest recurring science prize of them all. The recipients of this year's prize, which is given in fundamental physics, are Ed Witten, Alan Guth, Nima Arkani-Hamed, Juan Maldacena, Nathan Seiberg, Maxim Kontsevich, Ashoke Sen, Alexei Y. Kitaev and Andrei Linde. Congratulations to them all.

Past winners will select future winners so we can expect to see a lot of rich people in string theory and cosmology in the coming years.

String Theory returns to symmetry

July 31, 2012

The Strings 2012 conference has finished and it is great to see that all the talks are online as slides and videos. Despite what you hear from some quarters, string theory is alive and progressing, with many of the brightest young people in physics still wanting to do strings. Incredibly, the next three Strings conferences, in Korea, the US and India, are already being organised. How many conference series have that many groups keen to organise them?

It has become a tradition for David Gross to give some kind of outlook talk at these conferences, and this time he said there were three questions he would like to see answered in his lifetime:

  • How do the forces of nature unify?
  • How did the universe begin and how will it end?
  • What is string theory?

The last of these questions is one he has been asking for quite a few years now. We know string theory only as a small set of perturbative formulations linked together by non-perturbative dualities. There has to be an underlying theory based on some unifying principle and it is important to find it if we are to understand how string theory works at the all-important Planck scale. This time Gross told us that he has heard of something that may answer the question. Firstly he now thinks the correct question to ask is “What are the underlying symmetries of string theory?” and he thinks that work on higher spin symmetries could lead to the answer. What is this about?

For about 16 years it has been known that an important element of quantum gravity is the holographic principle. This says that, in order to avoid information loss in black holes, the amount of information in any volume of space must be bounded by the area of a surface that surrounds it, in Planck units. This might mean that the theory in the bulk of spacetime is equivalent to a different theory on the boundary. How can that happen? How can all the field variables in a volume of spacetime carry only an amount of information that can be contained on its surface? We can reason that measurement below the Planck length is not possible, but even then there should be at least a few valid field parameters for each Planck volume of space. If the holographic principle is right there must be a huge amount of redundancy in this volumetric description of field theory.

Redundancy can be taken to imply symmetry. Each degree of symmetry, i.e. each dimension of the Lie algebra of the group, tells us that one field variable is redundant and can be removed by gauge fixing it. In gauge theories we get one set of redundant parameters for each point in spacetime, but if the holographic principle is correct there must be a redundancy for almost every field variable in the bulk of spacetime, and it will need to be a supersymmetry to deal with the fermions. I call this complete symmetry and I’ve no idea if anyone else appreciates its significance. It means that the fields of the theory are given by a single adjoint representation of the symmetry. This does not happen in normal gauge theories, in general relativity, or even in supergravity, but it does happen in Chern-Simons theory in 3D, which can be reduced to a 2D WZW model on the boundary, so perhaps something is possible.

Some people think that the redundancy aspect of symmetry means that it is unimportant. They think that the field theory can be reformulated in a different way without the symmetry at all. This is incorrect. The redundant nature of the local symmetry hides the fact that it has global characteristics that are not redundant. In holographic theories you can remove all the local degrees of freedom over a volume of space, but you are left with a meaningful theory on the boundary.
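To get a feel for just how much redundancy the holographic principle demands, here is a back-of-the-envelope sketch. It assumes, purely for illustration, one degree of freedom per Planck volume in the bulk versus the standard one bit per four Planck areas on the boundary; the exact counting conventions don't matter for the point being made.

```python
import math

l_p = 1.616e-35  # Planck length in metres (CODATA approximate value)

def naive_counts(radius_m):
    """Compare a naive one-degree-of-freedom-per-Planck-volume count
    with the holographic bound of one bit per 4 Planck areas."""
    volume = (4.0 / 3.0) * math.pi * radius_m**3
    area = 4.0 * math.pi * radius_m**2
    bulk_bits = volume / l_p**3           # naive volumetric count
    boundary_bits = area / (4.0 * l_p**2) # holographic bound
    return bulk_bits, boundary_bits

# A sphere of radius 1 metre
bulk, boundary = naive_counts(1.0)
print(f"bulk estimate:     {bulk:.2e}")
print(f"holographic bound: {boundary:.2e}")
print(f"redundancy factor: {bulk / boundary:.2e}")
```

The redundancy factor grows linearly with the radius (it is of order r/l_p), so for any macroscopic region almost every bulk field variable must be redundant.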

If there is symmetry for every degree of freedom in the bulk then the generators of the symmetries must match the spin characteristics of the fields. Supergravity only has symmetries corresponding to spin half and spin one fields but it has fields from spin zero scalars up to spin two. String theory goes even further with higher excitations of the string providing an infinite sequence of possible states with unlimited spin. This may be why the idea of higher spin symmetries is now seen as a possible solution to the problem.

Surprisingly, the idea of higher spin symmetry as a theory of quantum gravity is far from new. It goes back to the 1980s, when it was pioneered by Vasiliev and Fradkin. It is a difficult and messy idea, but recent progress means that it is now becoming popular both in its own right and as a possible new understanding of string theory.

There is one other line of development that could lead to a new understanding of the subject, namely the work on supersymmetric scattering amplitudes. Motl has been following this line of research, which he calls the twistor mini-revolution, for some time and has a nice summary of the conference talk on the subject by Nima Arkani-Hamed. It evolved partly out of the need to calculate scattering amplitudes for the LHC, where people noticed that long pages of solutions could be simplified to some very short expressions. After much thought, these expressions seem to be about permutations and Grassmannians, with things like infinite-dimensional Yangian symmetry playing a big role. Arkani-Hamed believes that this is also applicable to string theory and could explain the holographic principle. The Grassmannians also link nicely to algebraic geometry and possibly to work on hyperdeterminants and qubits.

I have to confess that as an undergraduate at Cambridge University in the late 1970s I was completely brainwashed into the idea that symmetry is the route to the underlying principles of nature. At the time the peak of this idea was supergravity, and Stephen Hawking – who had recently been appointed to the Lucasian chair at Cambridge – was one of its greatest advocates. When string theory took over shortly after, people looked for symmetry principles there too, but without convincing success. It is true that there are plenty of symmetries in string theory, including supersymmetry of course, but different sectors of string theory have different symmetries, so symmetry seems more emergent than an underlying principle. I think the generations of undergraduates after mine were given a much more prosaic view of the role of symmetry and they stopped looking out for it as a source of deep principles.

Due to my brainwashing I have never been able to get over the idea that symmetry will play a huge role in the final theory. I think that all the visible symmetries in string theory are remnants of a much larger hidden symmetry, so that only different residual parts of it are seen in different sectors. In the 1990s I developed my own idea of how infinite dimensional symmetries from necklace algebras could describe string theory in a pregeometric phase. The permutation group played a central role in those ideas and was extended to larger string-inspired groups, with the algebra of string creation operators also generating the Lie algebra of the symmetry. Now that I know about the importance of complete symmetry and higher spin symmetry, I recognise that these aspects of the theory could also be significant. Perhaps it is just a matter of time now before string theorists finally catch up with what I did nearly twenty years ago 🙂

In any case it is good to see that there is now some real hope that the very hard problem of understanding string theory from the bottom up may finally be solved. It will be very interesting to see how these ideas mature over the next few Strings conferences.

H → WW Revisited

July 18, 2012

Before the Independence Day ICHEP Higgs discovery I raised a question about the Higgs decay to the WW channel. In the early days it had shown a broad excess, but this had then faded to the point where it was more consistent with no Higgs anywhere than with the signal seen in some other channels. I asked how well we could trust these results.

The deficit was especially noticeable in ATLAS, with CMS showing a less significantly low event count. Today at the Higgs Hunting workshop ATLAS released an update for their WW channel at low mass with a combination of 7 TeV and 8 TeV data. Now they once again have a broad excess more consistent with a boson in the low mass range. There is also a conference note giving all the details.

Using unofficial combinations I can now update the plot that shows the size of the signal in each channel. Here it is with the earlier results from 2011 shown in blue and the updated versions in green. This is a global combination with the Tevatron data helping in the bb channel.

The diphoton channel still shows an excess while the ditau now has a deficit. The others are broadly in line with the standard model Higgs. In any case there is not yet enough data to draw firm conclusions, but that is no reason not to speculate about what might explain the results if they hold up.

UK to make all publicly funded research open access

July 17, 2012

A few months ago the UK government announced that it wanted all UK research that is funded from public money to be available through open access. Now they have told us how they plan to do it. They will pay the journals a fee for each paper they publish.

In the traditional publishing system journals charge people to read a paper, or libraries are charged a fee to hold copies of the journal. In recent years this has moved largely to electronic systems but the principle remains the same. In some cases the authors may pay the journal a fee for their work to be available to everyone for free, but this is the exception rather than the rule.

Now UK researchers will have to use the open access system for all their publications if it is publicly funded. This has to be a good thing because it will make the research more widely available, but how will it alter the dynamics of research?

According to an article in New Scientist, the UK government has set aside 1% of research budgets to pay the open access fees, and the fees are estimated at £2000 per article. This means there will be enough to pay for one publication for every £200,000 spent on research. That does not sound like very much, especially in subjects like theoretical physics where many papers are produced by doctoral students and postdocs who don’t cost much. Is it enough? Will the money be distributed unevenly, with theory departments getting much more of it? Let’s look at it another way. The total amount allowed for the fees is £50 million. At £2000 a paper that is enough to pay for 25,000 papers each year. So how many research papers does the UK produce each year? The answer is at least 100,000, and perhaps several times that. Clearly it does not add up. So how will the system shake out? It will be interesting to see.
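The arithmetic above is simple enough to spell out. This sketch uses only the figures quoted from the New Scientist estimate; the "100,000 papers" output figure is the lower bound mentioned above.

```python
# Back-of-the-envelope check of the open-access funding figures.
total_fund = 50_000_000     # £50 million set aside for fees
fee_per_paper = 2_000       # estimated £2000 article fee

funded_papers = total_fund // fee_per_paper
print(funded_papers)        # papers the fund can cover per year -> 25000

uk_output_low = 100_000     # lower estimate of annual UK paper output
shortfall = uk_output_low - funded_papers
print(shortfall)            # papers left without fee provision -> 75000
```

Even on the most optimistic output estimate, the fund covers only a quarter of UK papers, which is why the distribution question matters.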

Guest Post by Felix Lev

July 17, 2012
Today viXra log is proud to host a guest post by one of our regular contributors to the archive. Felix Lev gained a PhD from the Institute of Theoretical and Experimental Physics (Moscow) and a Dr. Sci. degree from the Institute for High Energy Physics (also known as the Serpukhov Accelerator). In Russia Felix Lev worked at the Joint Institute for Nuclear Research (Dubna). Now he works as a software engineer but continues research as an independent physicist in a range of subjects including quantum theory over Galois fields.

Spreading of Ultrarelativistic Wave Packet and Redshift

In standard cosmology, the redshift of light coming to the Earth from distant objects is usually explained as a consequence of the fact that the Universe is expanding. This explanation has been questioned by many authors and many other explanations have been proposed. One example is a recent paper by Leonardo Rubio, “Layer Hubble and the Alleged Expansion of the Universe”, viXra:1206.0068.

The standard explanation implies that photons emitted by distant objects travel through the interstellar medium practically without interacting with interstellar matter, and hence can survive their long (even billions of years) journey to the Earth. I believe that this explanation has the following obvious flaw: it does not take into account the well-known quantum effect of wave-packet spreading, and the photons are treated as classical particles (for which wave-packet spreading is negligible). The effect of wave-packet spreading has been known practically since the discovery of quantum mechanics. For classical nonrelativistic particles the effect is negligible, since the characteristic time of wave-packet spreading is of the order of ma²/ℏ, where m is the mass of the body and a is its typical size. In optics, wave-packet spreading is usually discussed in view of the dispersion law ω(k) when a wave travels in a medium. But even if a photon travels in empty space, its wave function is subject to wave-packet spreading.

A simple calculation, the details of which can be found in my paper viXra:1206.0074, gives for the characteristic time t* of spreading of the photon wave function a quantity given by the same formula but with m replaced by E/c², where E is the photon energy. This result can be rewritten as t* = 2πT(a/λ)², where T is the period of the wave, λ is the wavelength and a is the dimension of the photon wave function in the direction perpendicular to the photon momentum. Hence even for optimistic values of a this quantity is typically much less than a second.
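To see how small t* actually is, one can plug in illustrative numbers. The choice of green light with a transverse packet size of 1 mm below is my own example, not a figure from the paper, but it counts as an "optimistic" value of a.

```python
import math

c = 2.998e8  # speed of light, m/s

def spreading_time(wavelength_m, a_m):
    """Characteristic spreading time t* = 2*pi*T*(a/lambda)^2,
    where T = lambda/c is the wave period and a is the transverse
    size of the photon wave packet."""
    period = wavelength_m / c
    return 2.0 * math.pi * period * (a_m / wavelength_m) ** 2

# Green light (500 nm) with a generous 1 mm transverse packet size
t_star = spreading_time(500e-9, 1e-3)
print(f"t* = {t_star:.1e} s")  # of order tens of nanoseconds
```

Even with this generous transverse size, t* comes out around 4×10⁻⁸ s, far less than a second, consistent with the claim above.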

If spreading is so fast, the question arises of why we can see stars and even planets rather than an almost isotropic background. The only explanation is that the interaction of photons with the interstellar medium cannot be neglected. At the quantum level a description of the interaction is rather complicated, since several processes should be taken into account. For example, a photon can be absorbed by an atom and re-emitted in approximately the same direction. This process is an illustration of the fact that in a medium the speed of propagation is less than c, because after absorbing a photon the atom lives for some time in an excited state. The process plays an important role from the point of view of wave-packet spreading. Indeed, the atom emits a photon with a wave packet of small size. If the photon encounters many atoms on its way, this does not allow the photon wave function to spread significantly.

In view of this qualitative picture it is clear that at least part of the redshift can be a consequence of energy loss, and the greater the distance to an object, the greater the loss. This picture also suggests that the density of the interstellar medium might be much greater than usually believed. Among the different scenarios discussed in the literature are dark energy, dark matter and others. As shown in my papers (see e.g. viXra:1104.0065 and references therein), the cosmological acceleration can be easily and naturally explained from first principles of quantum theory without involving dark energy, an empty space-time background or other artificial notions. However, the other possibilities seem to be more realistic and are now being intensively studied.

Global combination gives unofficial Higgs discovery with 2011 data.

July 10, 2012

Warning for allergy sufferers, this post contains multiple sigmas 🙂

When ATLAS and CMS first published their results based on 2011 data in December, an unofficial combination of the results gave an excess with a significance of 3.74 sigma, a long way short of the 5 sigma needed to claim a discovery. Adding the Tevatron results available at that time only made it worse, with a drop to 3.69 sigma. In February CMS added some extra diphoton events that pushed the LHC combination up to 4.3 sigma; then at the Moriond conference in March both CMS and ATLAS updated their combinations, with the result that the significance dropped to 3.64 sigma. At the same meeting CDF and Dzero presented an update using the full dataset from the Tevatron. This time the combination with the Tevatron data improved the result, pushing the significance back up to 4.25 sigma.

With all the data in use it looked like new data from the 2012 LHC run would be needed to reach discovery significance. Three days before the ICHEP conference the Tevatron collaborations presented updated combinations using some updated analysis from Dzero. This pushed the significance of the global combination up to 4.39 sigma. Then of course ATLAS and CMS added their 2012 data to reach 5.0 sigma individually with the combination reaching an impressive 7.45 sigma.

Later, when the data was published as analysis notes, more detail was given, including data for the diphoton and 4-lepton channels at 7 TeV. These had been updated yet again, with ATLAS improving their analysis technique and CMS finding an extra 0.33/fb of 2011 data. Using these, a new unofficial combination for the 2011 data can be generated, and the result is dramatic. The LHC combination jumped from 3.67 sigma to 4.64 sigma, while the global combination with all the 2011 data jumped from 4.4 sigma to 5.27 sigma. Even taking into account the error margins of the unofficial combination, this means that the global combination has risen to discovery-level significance based on 2011 data alone, an impressive result.

So where did this increase of nearly 1 sigma from the 2011 data come from? Looking at the individual contributions, the CMS combination increased by 0.35 sigma and the ATLAS combination increased by 0.85 sigma. Of course all these results are approximate, unofficial and not endorsed by the experiments.

You can generate all the combinations here using the unofficial Higgs combination tool.

Who will/should get the Nobel Prize for the Higgs Boson

July 9, 2012

With the discovery of the Higgs Boson now in the bag it seems inevitable that someone will be getting a Nobel prize for it but who? There may even be two prizes, one for the theory and one for the experiment, but I think it more likely that only one prize will be awarded. Peter Higgs and François Englert seem dead certs but the committee can choose up to three living physicists. Will there be a third man and if so who? If you want a reminder of the history my earlier chronology of contributions may help.

The physics prize can only be given to living individuals (unlike the peace prize, which can be given to an organisation), so if they want to honour CERN they will have to give it to an individual representative.

So let’s have a poll. Actually, let’s make it two. Assuming that I am correct about the first two laureates, who else do you think should get the prize because they deserve it, and who else do you predict will get it?

By the way I don’t think that the prize will be awarded this year because nominations needed to be in by the 1st January, unless some nominations were made based on evidence from last year.

Update: After a day of voting the clear leaders after “no third person” are Anderson, Evans, Goldstone and Kibble. Any of these would be a worthy winner and it is just unfortunate that the others (including Kibble’s collaborators) would be overlooked. I don’t think the rule of three will be changed, but you have to wonder what will happen when a collaboration of four makes a ground-breaking discovery.

It is not unlikely that a separate prize will be given for the experiments. I sense that CERN are promoting Lyn Evans as the one who led the LHC, especially as he has now come back to take on the difficult task of leading the ILC project. In this case people will argue about whether the Tevatron also deserves recognition for its contribution. That will be another difficult question, one that could conveniently be dodged by splitting a prize across the discoveries of the top quark and the Higgs. The theory prize for the top prediction was given in 2008 to Kobayashi and Maskawa.

There were some suggestions for others as follows:

  • Peter Higgs – someone did not read the text
  • Phil Gibbs – you are too kind, LOL
  • any of a number of passed over theorists
  • Eridtoto – who?
  • Al Gore – I didn’t know he read this blog
  • Jesus – if this is a God particle joke Moses would have been marginally less lame
  • me – Al, you can only vote once.

Timelapse video of the Higgs Boson Discovery plot evolution

July 8, 2012

Post-Higgs LHC Update

July 7, 2012

Just because the Higgs has been discovered does not mean there is any rest for the physicists at the LHC. They are now a few days out of the technical stop and have already returned to running with maximum bunch numbers (at slightly reduced intensity for now). They have added 0.33/fb and have done some special runs for TOTEM.

The CERN directorate would not feel that they were doing their job if they did not change the plan at least three times a year, so at the 4th July press conference the DG announced that they had decided to run the LHC for an extra three months and then shut down the collider for a longer period of two years. It will not restart at higher energy until 2015. There will be about two months of extra proton physics this year, which could add another 5/fb to 10/fb to the total delivered. The hope is that this will give enough data to study the properties of the Higgs and perhaps find something else, so that the physicists have plenty of analysis to work on during the two-year break while the LHC and the detectors are being upgraded. For more details see the ICHEP talk here.

Are unofficial Higgs Combinations “Valid” ?

July 5, 2012

The Unofficial Higgs Combination Tool has now been updated with all the new Higgs plots released in the last few days, including the Tevatron updates and the new 8 TeV data from the LHC. There will probably be more to add on 7th and 9th July from ICHEP. Feel free to play around with it.

At the CERN press conference yesterday the Director General, Rolf Heuer, warned journalists about unofficial combinations. What he said exactly was as follows (it is at 26:50 if you are watching the recording):

“The fact that they [CMS and ATLAS] have not yet combined their results today is that they did not have enough time. We should have shifted the Melbourne conference by 2 weeks or 3 weeks or 4 weeks but that was not possible. You have to stay tuned until at some time they combine their results. Whatever combination you get beforehand is unauthorised and is certainly not valid because you have to take into account the different correlations, one has to be very careful.”

I agree with what he says. The unofficial combinations you find on this blog are approximate and unofficial and should be used with caution. I have always made that clear. It is not just the correlations that are neglected. The quick combination method assumes that the statistical errors have a flat normal distribution and that is not quite correct. The detector collaborations don’t provide detailed likelihood data to outsiders so this is the best I can do. Luckily all statistical errors tend towards the normal Gaussian as the quantity of data increases (central limit theorem) and in most cases there is enough data for the results to be good, with a few exceptions.
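For readers curious what the "flat normal distribution" approximation amounts to, something like the following inverse-variance weighting is the textbook Gaussian combination of independent measurements. This is a generic sketch, not the actual code behind the combination tool, and the input numbers are made up for illustration.

```python
import math

def combine(measurements):
    """Inverse-variance combination of independent measurements,
    each given as a (value, sigma) pair. This assumes Gaussian
    errors and neglects correlations and non-Gaussian tails,
    which is exactly the approximation discussed above."""
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    total_w = sum(weights)
    mean = sum(w * v for w, (v, _) in zip(weights, measurements)) / total_w
    return mean, math.sqrt(1.0 / total_w)

# Hypothetical signal strengths mu from two experiments
mu, err = combine([(1.4, 0.5), (0.9, 0.4)])
print(f"combined: {mu:.2f} +/- {err:.2f}")
```

The combined error is always smaller than either input error, which is why combinations gain significance, and also why neglected correlations (which would inflate the true error) make unofficial combinations optimistic.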

Whether the combinations are “valid” or not depends on what you are using them for. I don’t consider them valid for writing up published results of any kind, but they are good enough as a rough guide to theorists looking for possible signals in the data and there is nothing wrong with showing them at conferences as some eminent theorists have already done, provided they come with appropriate caveats.

I have previously shown some comparisons between official combinations and my unofficial ones to show how accurate they can be (or not). I think it is worth doing a few more now using some of the recent results where the amount of data has increased. In all the plots below the red line is the official result and the black is the unofficial. First up is the latest version of the Tevatron combination compared with an unofficial combination of the updated Dzero and the latest CDF plot that was updated in March. You can click on the plots to get a larger version.

The combinations across all channels have always worked quite well because they use lots of data. The last time the LHC provided an official combination for ATLAS + CMS was in November, when there was only 2.3/fb. Here is how it looked next to the unofficial combination that I had done 10 weeks earlier.

Notice here how the accuracy gets worse at higher energies where there is less data available. Heuer seemed to be implying that there should be another combination due out soon. If so it will be interesting to see if the comparison improves as I would expect.

The combinations for single channels have been less successful in the past, but now they are improving. Here is a reconstruction of the ATLAS combination for 7 TeV + 8 TeV data in the diphoton channel.

But the results don’t always come out so well, even now. The 4-lepton channel uses very few events in both the signal and the background. Here is the result of a similar combination. (Update: There was an error in the digitisation that I have now fixed, so it is not so bad now.)

The combination across ATLAS and CMS should be better because it involves twice as much data. They should also have twice as much again by the end of the year, so by then the combination should work OK even in this channel.

If you want to try more the Higgs combination tool is easy to use and free.

Update: I said that I don’t think these combination methods should be used in published papers, but other theorists are apparently not as reticent. arXiv:1207.1347 is one example of a paper showing a combined signal plot as well as combined channel values and other fits. Their conclusion is that everything fits the standard model except that the diphoton rate is 2.5 sigma too high, in agreement with my figure.