Vote for inflation Nobel Prize

March 20, 2014

So who do we think would be the right recipients for any Nobel Prize that might be awarded for inflation, assuming the BICEP2 results hold up? (Update 28-03-2014: Since people keep commenting that this is premature, let me stress again that this vote is based on the assumption that the experimental results hold up and that the theory is agreed to confirm inflation; see my earlier post for my somewhat skeptical take on the current standing. Despite the uncertainty it is still of interest to think about who are considered the main discoverers of inflation theory, because a lot of news reports are simplifying the history.)

Cast your votes in this poll. You could vote multiple times, for up to three winners of a theory prize and another three for an experimental prize. (Multiple voting is now closed.)

Update (22-Mar-2014): After a few days we can see where this voting is going so thanks to all those who voted so far.

The Guardian has now also discussed the same question and made the point that the Nobel committee will have a hard choice, but you will see that they have not identified all the candidates that we have here. Some people have responded by saying that we should not be thinking about who should win a Nobel yet because it is too soon. I disagree. The story of who the main people behind this discovery are is of immediate interest, and by focusing on the possibility of Nobel prizes I think we highlight the human side of the discovery. It is true that we should be cautious about the discovery until it has been confirmed, but that does not stop us talking about its consequences, either scientifically or sociologically. There is a danger of being too negative and missing the opportunity to make some science and worthy scientists known to the wider public while their gaze falls fleetingly upon physics and cosmology.

So what do the poll results say? The first thing that stands out is that the theorists are getting the most votes, especially Linde, Guth and Starobinsky. Linde has now rushed ahead of Guth after suddenly gaining 50 extra votes. The theoretical bias is perhaps understandable because the media (including me) have said more about the theorists, who have been familiar to us for many years. Indeed Guth and Linde in particular were tipped for the Nobel long before this discovery. The experimenters are new stars, so they have a smaller fan club and get fewer votes, but the Nobel committee may see it the other way round. If BICEP2 is confirmed by Planck then it will be clear that a Nobel-worthy discovery has been made even if the theory behind it remains uncertain. When the prize was given for the accelerating cosmic expansion, the committee made it clear that the award was for the observation irrespective of how theorists interpreted it, and they are likely to see this discovery the same way until it is clear that inflation is the correct explanation rather than one of the alternatives.

The Nobel committee could in fact play things in several different ways:

  1. A prize for the experimental side first followed by the theory prize later
  2. A prize for the theory side first followed by the experiment
  3. A combined prize for both
  4. A prize just for the theory
  5. A prize just for the experiment
  6. No prize at all.

I predict option 1, assuming confirmation, but any of the others is quite possible. Choosing the experimental prize is already difficult. There is an interesting story about how it was Caltech postdoc Brian Keating who originated the idea for this experiment and then persuaded Jamie Bock to take it on. This would suggest that Keating and Bock are key candidates for the prize, but Keating seems to have dropped out of the picture at some point, so he does not get many votes. John Kovac has been promoted as the main leader of the experiment, but Chao-Lin Kuo led the team that really made the instrument work and Clem Pryke’s team made crucial discoveries in the analysis. I find it painful to think that at least one of these people will have to be left out, but that is the way the Nobel works. If I am forced to make a prediction at this stage I would go with the voting so far and say it will be Kovac and Bock who take the honour on behalf of the BICEP2 team, while Uros Seljak would make a fitting third laureate for his seminal work on B-modes that made the experiment possible.

On the theory side, which I have already covered, there are three classes of theoretical work on inflation that could eventually be rewarded. First there is the initial realisation that inflation may be a feature of cosmology and could solve certain problems (flatness, horizon, monopole etc.); Guth, Starobinsky, Kazanas and Sato are independently responsible for this idea. Then there are the people who made crucial predictions of gravitational waves and anisotropies in the microwave background; the ones who got there first are Starobinsky, Einhorn and Mukhanov. The committee favours such predictions for obvious good reasons, so any of these people could be up for the prize. Finally we have those who have worked on specific models, including Linde, Albrecht and Steinhardt. The problem for these people is that no particular model of inflation has been shown to work yet. It is possible that their work will yet be completed, or that a more recent specific model will be shown to be right. However, Linde is such a big figure in inflationary cosmology, and has been tipped for the Nobel for so many years already, that I think the weight of nominations will be in his favour, and if that is the case then he is surely deserving enough. In my opinion the destination of the theory component of the prize is not yet determined even if the experimental discovery is confirmed, and will depend on work that is still to come; otherwise I would expect it to go to Guth, Linde and Starobinsky as indicated by the voting.

Update 27-03-2014: see the comments for information about Erast Gliner who published an inflation theory in 1965. I have added his name to the poll but too late.

How certain are the BICEP2 findings?

March 20, 2014

Now that the excitement over the BICEP2 results has abated a little it is a good time to look at just how much we can believe the BICEP2 results and the conclusions that could be drawn from them. This actually breaks down into a whole sequence of questions. Has BICEP2 really seen cosmological B-modes? Were these B-modes formed by primordial gravitational waves? If so does this confirm inflation? Does it tell us anything more specific about inflation? Does it support the multiverse?

There have already been some words of caution passed around. On the blogosphere this is most notable from Matt Strassler, Peter Coles, Neil Turok and Ted Bunn. I endorse their point of view, and I think it is fair to say that every scientist we have heard from has expressed an appropriate degree of caution, but because of their natural excitement it is easy for onlookers to miss this. There is a danger that soon Planck or another experiment will publish a contradictory result that seems to rule out B-modes at the level BICEP2 has claimed, and then the news headlines will be that scientists were wrong again. This would be unfair. It is important to understand that the results are understood to be preliminary, but there is no need to wait for confirmation before thinking about what the implications will be on the assumption that they are confirmed.

If anyone was not able to see much of the press conference on Monday due to the web overload, a video recording is available; here is a direct link. There was also a live-streamed colloquium from Stanford yesterday evening that was very watchable and informative. I don’t know if a recording will be made available. Many questions were raised at these events about the reliability of the experiment and its implications. Even the question about the multiverse came up several times. More about that later.

So let’s look at some of the important questions in a little more detail.

Has BICEP2 really seen cosmological B-modes?

Everybody agrees that the BICEP2 team know their stuff and have been very professional in their approach to evaluating their results. They have performed many consistency checks on their data. In particular they have played the game of partitioning their data in two according to a variety of criteria and then examining each half independently. This is a standard technique for identifying systematic errors, and every test was passed. They have expressed high confidence that their observation really shows B-mode polarisation in the cosmic microwave background, but everyone is subject to the human failing of cognitive bias, so this alone is not enough.

The positive cross-correlations between BICEP1, BICEP2 and Keck are another good indication that the results are not instrument error, but there are common elements of the analysis that could still be wrong. For example, much was made at the colloquium of a numerical analysis method, devised by one of the team members, that greatly improved their ability to extract the signal. The B-mode signal is less than 20% of the strength of the E-mode signal from which it must be separated, so you have to be careful that there is no systematic error (leakage) in this process. The BICEP2 team know this as well as anyone and have used data simulations to confirm that the method works. I have no reason to doubt that this is a good check, but it would be wise to see if other independent teams can replicate the same B-mode pattern independently. There was also some talk of doing a correlation check with data from the South Pole Telescope. These correlation checks are valid confirmations even though BICEP1 and SPT do not have sufficient sensitivity on their own, but they are nothing like as good as another team independently producing a B-mode signal that shows the same bumps.


It is worth trying to put a confidence level on how good you think the result is. I don’t do bets, but I can consider a thought experiment in which Hawking forces me to make a bet under threat of running me down at high speed in his wheelchair. That makes it possible to imagine the break-even odds I would accept, as a way to evaluate my Bayesian estimate of the probability that BICEP2 has indeed seen cosmological B-modes. I would put this figure at about 80% for now, and I consider that a good level of confidence at this early stage. It will change dramatically, either up or down, when other completely independent results either confirm or refute the BICEP2 findings.
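The break-even odds trick is just the standard conversion between a subjective probability and fair betting odds. A quick sketch (the numbers are mine and purely illustrative):

```python
# Convert a subjective probability into the fair betting odds it implies.
p = 0.80                   # my estimated probability that the B-modes are real
odds_on = p / (1 - p)      # fair odds "on" (in favour): risk 4 to win 1
decimal_odds = 1 / p       # break-even decimal odds a bookmaker would offer
print(f"{odds_on:.0f} to 1 on, break-even at decimal odds {decimal_odds:.2f}")
# 4 to 1 on, break-even at decimal odds 1.25
```

In other words, if I would only accept the forced bet at odds better than 4 to 1 on, my degree of belief is at least 80%.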

There was an interesting exchange during the discussion at the end of yesterday’s colloquium when someone asked, “How could Planck have missed this?” It was pointed out that Planck has a very broad mission compared to BICEP2, whose only goal was to find the B-modes. Someone from Planck then stated quite pointedly that they had not yet published anything that would justify the claim that they had missed this signal. Very interesting! There is a further long list of other experiments with the capability to look for B-modes too, so soon we should know the answer to this particular question with a much better level of confidence.

Were these B-modes formed by primordial gravitational waves?

So if we now assume that the observation of B-modes is good, we can ask how certain we can be that these are a signature of primordial gravitational waves. At the colloquium we heard from Uros Seljak, who in 1996 was the first cosmologist to realize that B-mode CMB polarisation could be used as a signal of these gravitational waves. He was followed shortly after by Marc Kamionkowski, Arthur Kosowsky and Albert Stebbins. See this report from Berkeley for the details of the story. If I were in a position to make nominations for the Nobel prize for the observational side of this discovery I would want to include Seljak along with people from BICEP2, but past awards indicate that the phenomenologists who take these crucial steps usually fail to be recognised at this level, because they fall between the two stones marking the original theoretical insight and the final experimental discovery. Too bad.

I noticed that several theorists claimed that the B-modes can only be produced by gravitational waves. I think we need to be cautious about this claim. The well-known additional source of B-modes is gravitational lensing by background galaxy clusters. Fortunately this effect can be accurately modelled, because we can observe the galaxies and we now know very well how much dark matter is clumped around them. The power spectrum plot from BICEP2 compares their signal with the lensing background. Here is the plot with the lensing background filled in yellow, to show clearly how the signal stands out above the lensing. Simply put, the gravitational wave spectrum is seen at larger angular scales than the lensing, so it is clearly separated.


Could something other than lensing or gravitational waves produce B-modes? One thought that came to my mind is that magnetic fields can also twist the polarisation of radiation, as first shown by Michael Faraday. None of the theorists have discussed this possibility, so I was at first willing to accept that you need tensor modes rather than vector fields to produce the B-modes, but then I found this presentation by Levon Pogosian that seems to say otherwise. Traditionally magnetic fields have been discounted in cosmological models, but in recent years some have found reason to be more open to their importance. I have no idea if this is a viable alternative source for the B-modes, but until I hear a plausible direct refutation I am keeping an open mind. The good news is that new radio telescopes such as SKA and LOFAR should be able to detect signals from cosmic magnetic fields, so I think this is a question that can be settled.

Other sources of B-modes such as galactic dust and synchrotron radiation were considered by the BICEP2 team, but were only discounted at 97% confidence. That is not a high confidence level for such an important result. The significance of an observation is only valid when all possible backgrounds are accounted for, so the 5 to 7 sigma claims are questionable here. We have seen many observations in particle physics and cosmology at the 2 to 3 sigma level fade away as new data arrived. As outsiders who hear such results from the hundreds of experiments taking place, we should not discount the “look elsewhere effect” that this implies.
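To put the 97% figure on the same scale as the sigma claims, here is the standard Gaussian conversion (whether the one-sided or two-sided number applies depends on the details of their test, which I am not privy to):

```python
from statistics import NormalDist

# Translate a 97% confidence exclusion into Gaussian standard deviations.
conf = 0.97
two_sided = NormalDist().inv_cdf((1 + conf) / 2)   # ~2.17 sigma
one_sided = NormalDist().inv_cdf(conf)             # ~1.88 sigma
print(f"two-sided: {two_sided:.2f} sigma, one-sided: {one_sided:.2f} sigma")
```

Either way, a foreground only excluded at roughly 2 sigma sits uncomfortably under a 5 to 7 sigma headline claim.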

The only other possibility I can think of as an alternative explanation for the B-modes would be if cosmic structure had formed in dark matter before the moment of last scattering, when the CMB decoupled from the visible matter. This is not the accepted view in cosmology, but since the surprisingly early formation of galaxies after that time has not been explained, I will not discount it. Ironically, something like this is more likely to have happened if the primordial gravitational waves are present, which weakens the doubt this alternative casts. Finally, we should perhaps allow for the more remote possibility that something else we have not thought of can produce these B-modes.

So, with Hawking on my heels, how would I rate my confidence that the B-modes are a signature of gravitational waves? Again I think I would have to put it at about the 80% level, which is pretty good confidence. This figure is harder to improve than the certainty in the BICEP2 experiment itself, but future measurements of cosmic magnetic fields would make a significant difference.

Assuming primordial gravitational waves have been observed, did inflation happen?

Primordial gravitational waves have been described as a “smoking gun” for inflation. That would of course be the starter gun that set the universe off rather than a murder weapon, I hope. However, most versions of inflation theory that had risen in popularity before these results were announced predicted very small values of r. Very few can come close to accounting for r=0.2, or even r=0.1 if we take the lower side of the error range. There are some models that might, such as axion monodromy inflation, and even Linde’s chaotic inflation, which had been all but abandoned until now. Indeed, you could make the case that it would have been a better signal for inflation if primordial gravitational waves had been ruled out at this amplitude, because that is what most inflation models predict. In a different sense it is a good thing that so many inflation models would be discounted by BICEP2, if it stands, because it concentrates inflation theorists in a new direction; but the harsh fact remains that they do not yet have a fully viable theory of inflation. Scale invariance in the CMB has already been a good result for inflation, but there may be other theories that explain it (e.g. gravitational waves from other phase transitions have been cited as an alternative).

The simple truth is that we know so little about the earliest origins of the universe that we cannot honestly place very high confidence on the claim that primordial gravitational waves are a sure-fire signature of inflation. Nevertheless, I will once again place my confidence level at 80% for inflation, if the primordial gravitational waves are well confirmed.

What would it take to improve this figure? I would like to see a fully viable theory of the inflation mechanism with an uncontrived prediction (or retrodiction) of the power spectrum of gravitational waves. When this power spectrum is known to better precision (and it should now converge rapidly over the next few years) and it agrees nicely with the model, then we can be very happy with the result.

So has inflation been confirmed as a Nobel worthy theory?

Not yet. The problem is that there are at least these three major levels of uncertainty. For each one I give a good confidence level of 80%, but 80% cubed is only about 50%.
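To make the arithmetic explicit (treating the three uncertainties as independent, which is of course itself an approximation):

```python
# Chain three roughly independent 80% confidence levels together.
p_bmodes = 0.80       # BICEP2 really saw cosmological B-modes
p_gravwaves = 0.80    # the B-modes come from primordial gravitational waves
p_inflation = 0.80    # the gravitational waves are a signature of inflation

p_total = p_bmodes * p_gravwaves * p_inflation
print(f"{p_total:.0%}")  # 51%
```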

When Sean Carroll asked us three years ago if we thought inflation happened, I put my assessment at only 40%. That has probably increased to about 60% in the intervening years because of the Planck results, but I am still open-minded either way. I should make it clear that I do have a high level of confidence in the big bang theory itself, but the inflation part cannot be considered settled until we have both a good theory and a solid observational confirmation. I do appreciate its strength in resolving cosmological problems and its successful prediction of the CMB fluctuations, but I want to see more.

The BICEP2 result does not yet change my mind very much, but the good news is that there is now hope that further confirmation will make a huge difference. More detailed measurements of the B-modes have the potential to tell us in great detail how the process of inflation started, progressed and ended. Just when we thought our understanding of physics was hitting a brick wall we get this gift of an observation that promises to completely revolutionize our understanding once again.

Does BICEP2 support the multiverse?

This question was asked twice at the press conference (44 minutes and 53 minutes into the video recording). John Kovac was quick to interject that, as an experimentalist, he is firmly opposed to theories that have no observational consequences. The theorists in the room countered by pointing out that people initially doubted inflation because they did not believe it could make testable predictions (this was all said with a lot of laughter). Linde and Guth were then consulted, and they said that it is hard to develop models of inflation that do not lead to the multiverse (such as eternal inflation). There are certainly many other theorists who would have provided a very different view if they had been there.

My personal view does not count for much, but I have always discounted eternal inflation because it is a conclusion that sits at the end of a long chain of speculative and unconfirmed ideas. It also does not fit with my favourite philosophical position, but nobody else should care about that, and anyone needs to be ready to update their philosophy if experiment requires it. On the other hand, I do find that the multiverse is a fitting explanation for small levels of fine-tuning, and it follows naturally from what quantum gravity seems to be trying to tell us about the landscape of the vacuum (in string theory and its alternatives). I just see the multiverse as the range of logically possible solutions of the physics rather than something that is actually realised.

Now if the results from BICEP2 are confirmed and improved leading to a solid theory of inflation and that theory tells us in a convincing way that the multiverse must be out there, then things will be very different. Most of the levels of speculation I was concerned about would then have turned into solidly confirmed science.  It is hard even then to see how a directly observable prediction could be made using the multiverse although that is not something we can completely discount and if the theory is complete and convincing enough it may even not be necessary.

I am sure there will be as many opinions as there are physicists with an opinion on this subject, but, for what it is worth, that is mine.

Primordial Gravitational Waves?

March 15, 2014

The rumor mill has once again turned its wheels a few cogs to throw out some new grist for physicists and cosmologists. This follows last Wednesday’s announcement of an announcement this Monday, in which the Harvard-Smithsonian Center for Astrophysics will reveal to the world a “Major Discovery”. The best available rumours now say that astrophysicists working with the BICEP observatory in Antarctica will reveal the discovery of primordial gravitational waves in the cosmic microwave background. If true, this would be a very big deal indeed, because it could be a direct experimental hook into the physics of inflation and even quantum gravity. These are of course the least well understood and most exciting uncharted waters of fundamental physics. Any observation that could provide phenomenology for these areas would be the greatest empirical discovery for the foundations of our universe in decades.

Before going any further it is worth recalling that we have tuned into some webcast announcements recently only to be disappointed that the expected discovery came in the form of a negative result setting new limits on what we wanted (e.g. LUX, AMS-02 etc.). This could turn out to be another case of the same, but if so the “Major Discovery” tag will be pointed out as a major piece of over-hype. It is also possible that the announcement has nothing to do with what the rumors say, but that looks increasingly unlikely at this point. So let’s try to understand a little better what it might be about.

The cosmic microwave background has been mapped out in exquisite detail by a series of space- and ground-based observatories, including the European Planck mission, which provided the best-resolution all-sky survey of the CMB. So far Planck has only shown us the fluctuations of the scalar modes, but it also looked at the polarisation of the background. Although it stopped working back in 2012, we are still waiting for those maps. Meanwhile some smaller-scale results for the polarisation have already come in from ground-based observatories.


Microwave polarisation can be broken down into two modes using a Helmholtz decomposition, which splits a vector field into a sum of two parts: the E-mode, whose curl is zero, and the B-mode, whose divergence is zero. The E-mode in the CMB was first observed in 2002 by the DASI interferometer, but it is not particularly interesting. E-mode polarisation is generated by scattering from atoms before the radiation decoupled from matter, but long after the period of inflation. Last summer the South Pole Telescope (SPT) found B-modes in the CMB for the first time, but these were known to be due to gravitational lensing of the radiation around massive galaxy clusters. Lensing can twist the E-mode polarisation to form B-modes, so these are only slightly more interesting than the E-modes themselves. Really, these lensing B-modes are little better than a background that needs to be subtracted to see the more interesting B-modes that may be the signature of primordial gravitational waves.

The B-modes will have an anisotropy spectrum just as the scalar modes do, and Planck may eventually provide us with a plot of this spectrum, but as an initial result we are interested in the peak ratio of the tensor modes to the scalar modes, which is given by a parameter known simply as r. The latest rumors say that a value of r has been measured by the BICEP2 observatory in Antarctica, a smaller rival to the SPT, both housed at the Dark Sector Lab (pictured). Some more precise and less reliable versions of the rumor say that the answer is r=0.2. This is somewhat bigger than expected and could be as good as a 3 or 4-sigma signal, because the sensitivity of BICEP2 was estimated at r=0.06. If this is true it has immediate implications for inflationary models and quantum gravity. It would rule out quite a lot of theories while giving hope to others. For example, you may hear a lot about axion monodromy inflation if this rumor is confirmed, but there will be many other ideas that could explain the result, and it will be impossible to separate them at least until a detailed spectrum is available rather than a single data point. Another implication of such a high value of r might be that primordial gravitational waves could have a bigger impact on galaxy formation than previously envisioned. This could help explain why galaxies formed so quickly and why there is more large-scale structure than expected in the galaxy distribution (see my previous speculations on this point).

The most important thing about a strong signal of primordial gravitational waves, for now, would be that it shows there is something there that can be measured, so more effort and funding are likely to be turned in that direction. But first the new result (if it is what the rumors say) will be scrutinised, not least by rival astronomers from the SPT and POLARBEAR observatories, who only managed to detect lensing B-modes. Why would BICEP2 succeed where they failed? Can they be sure that they correctly subtracted the background? These questions are premature, and even immature, before we hear the announcement, but it is good to go along prepared for the kind of questions that may need to be asked.

For more see TRF, Resonaances, TToD, Excursionset, Bruce and The Guardian. Update: Blank on the Map, Prep. Uni., In the Dark

Fundamental Physics 2013: What is the Big Picture?

November 26, 2013

2013 has been a great year for viXra. We already have more than 2000 new papers, taking the total to over 6000. Many of them are about physics, but other areas are also well covered. The range is bigger and better than ever and could never be summarised, so as the year draws to its end, here instead is a snapshot of my own view of fundamental physics in 2013. Many physicists are reluctant to speculate about the big picture and how they see it developing. I think it would be useful if they were more willing to stick their necks out, so this is my contribution. I don’t expect much agreement from anybody, but I hope it will stimulate some interesting discussion and thoughts. If you don’t like it you can always write your own summary of physics, or any other area of science, and submit it to viXra.


The discovery of the Higgs boson marks a watershed moment for fundamental physics. The standard model is complete but many mysteries remain. Most notably the following questions are unanswered and appear to require new physics beyond the standard model:

  • What is dark matter?
  • What was the mechanism of cosmic inflation?
  • What mechanism led to the early production of galaxies and structure?
  • Why does the strong interaction not break CP?
  • What is the mechanism that led to matter dominating over anti-matter?
  • What is the correct theory of neutrino mass?
  • How can we explain fine-tuning of e.g. the Higgs mass and cosmological constant?
  • How are the four forces and matter unified?
  • How can gravity be quantised?
  • How is information loss avoided for black holes?
  • What is the small scale structure of spacetime?
  • What is the large scale structure of spacetime?
  • How should we explain the existence of the universe?

It is not unreasonable to hope that some further experimental input may provide clues that lead to some new answers. The Large Hadron Collider still has decades of life ahead of it while astronomical observation is entering a golden age with powerful new telescopes peering deep into the cosmos. We should expect direct detection of gravitational waves and perhaps dark matter, or at least indirect clues in the cosmic ray spectrum.

But the time scale for new discoveries is lengthening and the cost is growing. It might be unrealistic to imagine the construction of new colliders on larger scales than the LHC. A theist vs atheist divide increasingly polarises Western politics and science. It has already pushed the centre of big science out of the United States and over to Europe. As the jet stream invariably blows weather systems across the Atlantic, so too will come their political ideals, albeit at a slower pace. It is no longer sufficient to justify fundamental science as a pursuit of pure knowledge when the men with the purse strings see it as an attack on their religion. The future of fundamental experimental science is shifting further east, and its hopes will be found in Asia along with the economic prosperity that depends on it. The GDP of China is predicted to surpass that of the US and the EU within 5 years.

But there is another avenue for progress. While experiment is limited by the reality of global economics, theory is limited only by our intellect and imagination. The beasts of mathematical consistency have been harnessed before to pull us through. We are not limited by just what we can see directly, but there are many routes to explore. Without the power of observation the search may be longer, but the constraints imposed by what we have already seen are tight. Already we have strings, loops, twistors and more. There are no dead ends. The paths converge back together taking us along one main highway that will lead eventually to an understanding of how nature works at its deepest levels. Experiment will be needed to show us what solutions nature has chosen, but the equations themselves are already signposted. We just have to learn how to read them and follow their course. I think it will require open minds willing to move away from the voice of their intuition, but the answer will be built on what has come before.

Thirteen years ago at the turn of the millennium I thought it was a good time to make some predictions about how theoretical physics would develop. I accept the mainstream views of physicists but have unique ideas of how the pieces of the jigsaw fit together to form the big picture. My millennium notes reflected this. Since then much new work has been done and some of my original ideas have been explored by others, especially permutation symmetry of spacetime events (event symmetry), the mathematical theory of theories, and multiple quantisation through category theory. I now have a clearer idea about how I think these pieces fit in. On the other hand, my idea at the time of a unique discrete and natural structure underlying physics has collapsed. Naturalness has failed in both theory and experiment and is now replaced by a multiverse view which explains the fine-tuning of the laws of the universe. I have adapted and changed my view in the face of this experimental result. Others have refused to.

Every theorist working on fundamental physics has a set of ideas or principles that guides their work and each one is different. I do not suppose that I have a gift of insight that allows me to see possibilities that others miss. It is more likely that the whole thing is a delusion, but perhaps there are some ideas that could be right. In any case I believe that open speculation is an important part of theoretical research and even if it is all wrong it may help others to crystallise their own opposing views more clearly. For me this is just a way to record my current thinking so that I can look back later and see how it succeeded or changed.

The purpose of this article then is to give my own views on a number of theoretical ideas that relate to the questions I listed. The style will be pedagogical without detailed analysis, mainly because such details are not known. I will also be short on references, after all nobody is going to cite this. Here then are my views.


Causality

Causality has been discussed by philosophers since ancient times and many different types of causality have been described. In terms of modern physics there are only two types of causality to worry about. Temporal causality is the idea that effects are due to prior causes, i.e. all phenomena are caused by things that happened earlier. Ontological causality is about explaining things in terms of simpler principles. This is also known as reductionism. It does not involve time and it is completely independent of temporal causality. What I want to talk about here is temporal causality.

Temporal causality is a very real aspect of nature and it is important in most of science. Good scientists know that it is important not to confuse correlation with causation. Proper studies of cause and effect must always use a control to eliminate this easy mistake. Many physicists, cosmologists and philosophers think that temporal causality is also important when studying the cosmological origins of the universe. They talk of the evolving cosmos, eternal inflation, or numerous models of pre-big-bang physics or cyclic cosmologies. All of these ideas are driven by thinking in terms of temporal causality. In quantum gravity we find Causal Sets and Causal Dynamical Triangulations, more ideas that try to build in temporal causality at a fundamental level. All of them are misguided.

The problem is that we already understand that temporal causality is linked firmly to the thermodynamic arrow of time. This is a feature of the second law of thermodynamics, and thermodynamics is a statistical theory that emerges at macroscopic scales from the interactions of many particles. The fundamental laws themselves can be time reversed (combined with charge conjugation and parity, i.e. CPT, to be exact). Physical law should not be thought of in terms of a set of initial conditions and dynamical equations that determine evolution forward in time. It is really a sum over all possible histories between past and future boundary states. The fundamental laws of physics are time symmetric and temporal causality is emergent. The origin of time’s arrow can be traced back to the influence of the big bang singularity where complete symmetry dictated low entropy.
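Put schematically (my own shorthand, included only to illustrate the point), the amplitude between a past boundary state and a future boundary state is a sum over all histories connecting them:

```latex
% Sum over histories between boundary states |i> (past) and |f> (future):
\langle f \,|\, i \rangle \;=\; \int_{\phi(t_i)=\phi_i}^{\phi(t_f)=\phi_f} \mathcal{D}\phi \; e^{\,i S[\phi]/\hbar}
% The action S is invariant under time reversal combined with C and P,
% so the thermodynamic arrow can only enter through the boundary
% conditions (low entropy at the big bang), not through the dynamics.
```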

The situation is even more desperate if you are working on quantum gravity or cosmological origins. In quantum gravity space and time should also be emergent; then the very description of temporal causality ceases to make sense because there is no time in which to express it. In cosmology we should not think of explaining the universe in terms of what caused the big bang or what came before. Time itself begins and ends at spacetime singularities.


Symmetry

When I was a student around 1980 symmetry was a big thing in physics. The twentieth century started with the realisation that spacetime symmetry was the key to understanding gravity. As it progressed gauge symmetry appeared to eventually explain the other forces. The message was that if you knew the symmetry group of the universe and its action then you knew everything. Yang-Mills theory only settled the bosonic sector but with supersymmetry even the fermionic side would follow, perhaps uniquely.

It was not to last. When superstring theory replaced supergravity the pendulum began its swing back, taking away symmetry as a fundamental principle. It was not that superstring theory did not use symmetry; it had the old gauge symmetries, supersymmetries, new infinite-dimensional symmetries, dualities, mirror symmetry and more, but there did not seem to be a unifying symmetry principle from which it could all be derived. There was even an argument called Witten’s Puzzle, based on topology change, that seemed to rule out a universal symmetry. The spacetime diffeomorphism group is different for each topology, so how could there be a bigger symmetry independent of the solution?

The campaign against symmetry strengthened as the new millennium began. Now we are told to regard gauge symmetry as a mere redundancy introduced to make quantum field theory appear local. Instead we need to embrace a more fundamental formalism based on the amplituhedron where gauge symmetry has no presence.

While I embrace the progress in understanding that string theory and the new scattering amplitude breakthroughs are bringing, I do not accept the point of view that symmetry has lost its role as a fundamental principle. In the 1990s I proposed a solution to Witten’s puzzle that sees the universal symmetry for spacetime as permutation symmetry of spacetime events. This can be enlarged to large-N matrix groups to include gauge theories. In this view spacetime is emergent, like the dynamics of a soap bubble formed from intermolecular interactions. The permutation symmetry of spacetime is also identified with the permutation symmetry of identical particles or instantons or particle states.
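The enlargement from permutations to matrix groups is the standard embedding by permutation matrices (stated here only as a reminder):

```latex
% A permutation \sigma \in S_N acts as the matrix
(P_\sigma)_{ij} = \delta_{i\,\sigma(j)}, \qquad P_\sigma P_\tau = P_{\sigma\tau},
% which is orthogonal, giving the chain of subgroups
S_N \subset O(N) \subset U(N) \subset GL(N,\mathbb{C}).
% Enlarging event symmetry to large-N matrix groups therefore keeps
% the permutations as a discrete subgroup while adding the continuous
% symmetries needed for gauge theories.
```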

My idea was not widely accepted even when, shortly afterwards, matrix models for M-theory were proposed that embodied the principle of event symmetry exactly as I had envisioned. Later the same idea was reinvented in a different form for quantum graphity, with permutation symmetry over the points in space of random graph models, but the fundamental idea is still not widely recognised.

While the amplituhedron removes the usual gauge symmetry it introduces new dual conformal symmetries described by Yangian algebras. These are quantum symmetries unseen in the classical Super-Yang-Mills theory, but they combine permutation symmetry over states with spacetime symmetries in the same way as event symmetry. In my opinion different dual descriptions of quantum field theories are just different solutions to a single pregeometric theory with a huge and pervasive universal symmetry. The different solutions preserve different sectors of this symmetry. When we see different symmetries in different dual theories we should not conclude that symmetry is less fundamental. Instead we should look for the greater symmetry that unifies them.

After moving from permutation symmetry to matrix symmetries I took one further step. I developed algebraic symmetries in the form of necklace Lie algebras with a stringy feel to them. These have not yet been connected to the mainstream developments but I suspect that these symmetries will be what is required to generalise the Yangian symmetries to a string theory version of the amplituhedron. Time will tell if I am right.


Cosmology

We know so much about cosmology, yet so little. The cosmic horizon limits our view to an observable universe that seems vast but which may be a tiny part of the whole. The heat of the big bang draws an opaque veil over the first few hundred thousand years of the universe. Most of the matter around us is dark and hidden. Yet within the region we see the ΛCDM standard model accounts well enough for the formation of galaxies and stars. Beyond the horizon we can reasonably assume that the universe continues the same for many more billions of light years, and the early big bang back to the first few minutes or even seconds seems to be understood.

Cosmologists are conservative people. Radical changes in thinking such as dark matter, dark energy, inflation and even the big bang itself were only widely accepted after observation forced the conclusion, even though evidence built up over decades in some cases. Even now many happily assume that the universe extends to infinity looking the same as it does around here, that the big bang is a unique first event in the universe, that space-time has always been roughly smooth, that the big bang started hot, and that inflation was driven by scalar fields. These are assumptions that I question, and there may be other assumptions that should be questioned. These are not radical ideas. They do not contradict any observation, they just contradict the dogma that too many cosmologists live by.

The theory of cosmic inflation was one of the greatest leaps in imagination that has advanced cosmology. It solved many mysteries of the early universe at a stroke and its predictions have been beautifully confirmed by observations of the background radiation. Yet the mechanism that drives inflation is not understood.

It is assumed that inflation was driven by a scalar inflaton field. The Higgs field is mostly ruled out (exotic couplings to gravity notwithstanding), but it is easy to imagine that other scalar fields remain to be found. The problem lies with the smooth exit from the inflationary period. A scalar inflaton drives a de Sitter universe. What would coordinate a graceful exit to a nice smooth universe? Nobody knows.

I think the biggest clue is that the standard cosmological model has a preferred rest frame defined by comoving galaxies and the cosmic background radiation. It is not perfect on small scales but over hundreds of millions of light years it appears rigid and clear. What was the origin of this reference frame? A de Sitter inflationary model does not possess such a frame, yet something must have co-ordinated its emergence as inflation ended. These ideas simply do not fit together if the standard view of inflation is correct.

In my opinion this tells us that inflation was not driven by a scalar field at all. The Lorentz geometry during the inflationary period must have been spontaneously broken by a vector field with a non-zero component pointing in the time direction. Inflation must have evolved in a systematic and homogeneous way through time, keeping this field’s direction constant over large distances and smoothing out any deviations as space expanded. The field may have been a fundamental gauge vector or a composite condensate of fermions with a non-zero vector expectation value in the vacuum. Eventually a phase transition ended the symmetry breaking phase and Lorentz symmetry was restored to the vacuum, leaving a remnant of the broken symmetry in the matter and radiation that then filled the cosmos.

The required vector field may be one we have not yet found, but some of the required features are possessed by the massive gauge bosons of the weak interaction. The mass term for a vector field can provide an instability favouring timelike vector fields because the signature of the metric reverses sign in the time direction. I am by no means convinced that the standard model cannot explain inflation in this way, but the mechanism could be complicated to model.
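As a rough sketch of the sign argument (an illustrative potential of the type used in Lorentz-violating vector models, not a claim about the actual mechanism):

```latex
% With signature (+,-,-,-) we have A_\mu A^\mu = A_0^2 - |\vec{A}|^2,
% so for the illustrative potential
V(A) = -\tfrac{1}{2}\mu^2 A_\mu A^\mu + \tfrac{\lambda}{4}\,(A_\mu A^\mu)^2
% the minimum sits at A_\mu A^\mu = \mu^2/\lambda > 0, i.e. at a
% timelike vacuum expectation value A_\mu = (\mu/\sqrt{\lambda},0,0,0),
% which breaks boosts while preserving spatial rotations.
```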

Another great mystery of cosmology is the early formation of galaxies. As ever more powerful telescopes have penetrated back towards times when the first galaxies were forming, cosmologists have been surprised to find active galaxies rapidly producing stars, apparently with supermassive black holes ready-formed at their cores. This contradicts the predictions of the cold dark matter model according to which the stars and black holes should have formed later and more slowly.

The conventional theory of structure formation is very Newtonian in outlook. After baryogenesis the cosmos was full of gas with small density fluctuations left over from inflation. As radiation decoupled, these anomalies caused the gas and dark matter to gently coalesce under their own weight into clumps that formed galaxies. This would be fine except for the observation of supermassive black holes in the early universe. How did they form?

I think that the formation of these black holes was driven by large scale gravitational waves left over from inflation rather than density fluctuations. As the universe slowed its inflation there would be parts that slowed a little sooner and others a little later. Such small differences would have been amplified by the inflation, leaving a less than perfectly smooth universe for matter to form in. As the dark matter followed geodesics through these waves in spacetime it would be focused, just as light on the bottom of a swimming pool is focused by surface waves into intricate patterns. At the caustics the dark matter would come together at high speed, compressed into structures along lines and surfaces. Large black holes would form at the sharpest focal points and along strands defined by the caustics. The stars and remaining gas would then gather around the black holes, pulled in by their gravitation to form the galaxies. As the universe expanded the gravitational waves would fade, leaving the structure of galactic clusters to mark where they had been.

The greatest question of cosmology asks how the universe is structured on large scales beyond the cosmic horizon. We know that dark energy is making the expansion of the universe accelerate so it will endure for eternity, but we do not know if it extends to infinity across space. Cosmologists like to assume that space is homogeneous on large scales, partly because it makes cosmology simpler and partly because homogeneity is consistent with observation within the observable universe. If this is assumed then the question of whether space is finite or infinite depends mainly on the local curvature. If the curvature is positive then the universe is finite. If it is zero or negative the universe is infinite, unless it has an unusual topology formed by tessellating polyhedra larger than the observable universe. Unfortunately observation fails to tell us the sign of the curvature. It is near zero but we can’t tell which side of zero it lies on.
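For reference, the link between curvature and density is the standard Friedmann relation:

```latex
% Friedmann equation and the density parameter \Omega = \rho/\rho_c:
H^2 = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2}
\quad\Longrightarrow\quad
\Omega - 1 = \frac{k c^2}{a^2 H^2}
% k = +1 (\Omega > 1): closed, finite volume;
% k =  0 (\Omega = 1): flat;
% k = -1 (\Omega < 1): open.
% Observations give \Omega - 1 consistent with zero within error,
% leaving the sign of k undetermined.
```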

This then is not a question I can answer, but the holographic principle in its strongest form contradicts a finite universe. An infinite homogeneous universe also requires an explanation of how the big bang can be coordinated across an infinite volume. This leaves only more complex solutions in which the universe is not homogeneous. How can we know if we cannot see past the cosmic horizon? There are many inhomogeneous models such as the bubble universes of eternal inflation, but I think that there is too much reliance on temporal causality in that theory and I discount it. My preference is for a white hole model of the big bang where matter density decreases slowly with distance from a centre and the big bang singularity itself is local and finite, with an outer universe stretching back further. Because expansion is accelerating we will never see much outside the universe that is currently visible, so we may never know its true shape.


Fine Tuning

It has long been suggested that the laws of physics are fine-tuned to allow the emergence of intelligent life. This strange illusion of intelligent design could be explained in atheistic terms if in some sense many different universes existed with different laws of physics. The observation that the laws of physics suit us would then be no different in principle from the observation that our planet suits us.

Despite the elegance of such anthropic reasoning many physicists including myself resisted it for a long time. Some still resist it. The problem is that the laws of physics show some signs of being unique according to theories of unification. In 2001 I, like many, thought that superstring theory and its overarching M-theory demonstrated this uniqueness quite persuasively. If there was only one possible unified theory with no free parameters how could an anthropic principle be viable?

At that time I preferred to think that fine-tuning was an illusion. The universe would settle into the lowest energy stable vacuum of M-theory and this would describe the laws of physics with no room for choice. The ability of the universe to support life would then just be the result of sufficient complexity. The apparent fine-tuning would be an illusion resulting from the fact that we see only one form of intelligent life so far. I imagined distant worlds populated by other forms of intelligence in very different environments from ours, based on other solutions to evolution making use of different chemical combinations and physical processes. I scoffed at science fiction stories where the alien life looked similar to us except for different skin textures or different numbers of appendages.

My opinion started to change when I learnt that string theory actually has a vast landscape of vacuum solutions and that they can be stabilised to such an extent that we need not be living at the lowest energy point. This means that the fundamental laws of physics can be unique while different low energy effective theories are realised as solutions. Anthropic reasoning was back on the table.

It is worrying to think that the vacuum could decay to a lower energy state at any place and moment. If it did, a bubble of the new vacuum would expand at the speed of light, changing the effective laws of physics as it spread and destroying everything in its path. Many times in the billions of years and billions of light years of the universe in our past light cone, there must have been neutron stars that collided with immense force and energy. Yet not once has the vacuum been toppled to bring doom upon us. The reason is that the energies at which the vacuum state was forged in the big bang are at the Planck scale, many orders of magnitude beyond anything that can be repeated in even the most violent events of astrophysics. It is the immense range of scales in physics that creates life and then allows it to survive.

The principle of naturalness was spelt out by ‘t Hooft in the 1980s, except he was too smart to call it a principle. Instead he called it a “dogma”. The idea was that the mass of a particle or other physical parameters could only be small if they would be zero given the realisation of some symmetry. The smallness of fermion masses could thus be explained by chiral symmetry, but the smallness of the Higgs mass required supersymmetry. For many of us the dogma was finally put to rest when the Higgs mass was found by the LHC to be unnaturally small without any sign of the accompanying supersymmetric partners. Fine tuning had always been a feature of particle physics but with the Higgs it became starkly apparent.

The vacuum would not tend to squander its range of scope for fine-tuning, limited as it is by the size of the landscape. If there is a cheaper way the typical vacuum will find it so that there is enough scope left to tune nuclear physics and chemistry for the right components required by life. Therefore I expect supersymmetry or some similar mechanism to come in at some higher scale to stabilise the Higgs mass and the cosmological constant. It may be a very long time indeed before that can be verified.

Now that I have learnt to accept anthropic reasoning, the multiverse and fine-tuning, I see the world in a very different way. If nature is fine-tuned for life it is plausible that there is only one major route to intelligence in the universe. Despite the plethora of new planets being discovered around distant stars, the Earth appears as a rare jewel among them. Its size and position in the goldilocks zone around a long-lived stable star in a quiet part of a well-behaved galaxy is not typical. Even the moon and the outer gas giants seem to play their role in keeping us safe from natural instabilities. Yet if we were too safe, life would have settled quickly into a stable form that could not evolve to higher functions. Regular cataclysmic events in our history were enough to cause mass extinction events without destroying life altogether, allowing it to develop further and further until higher intelligence emerged. Microbial life may be relatively common on other worlds but we are exquisitely rare. No sign of alien intelligence drifts across time and space from distant worlds.

I now think that where life exists it will be based on DNA and cellular structures much like all life on Earth. It will require water and carbon and to evolve to higher forms it will require all the commonly available elements each of which has its function in our biology or the biology of the plants on which we depend. Photosynthesis may be the unique way in which a stable carbon cycle can complement our need for oxygen. Any intelligent life will be much like us and it will be rare. This I see as the most significant prediction of fine tuning and the multiverse.

String Theory

String theory was the culmination of twentieth century developments in particle physics leading to ever more unified theories. By 2000 physicists had what appeared to be a unique mother theory capable of including all known particle physics in its spectrum. They just had to find the mechanism that collapsed its higher dimensions down to our familiar four-dimensional spacetime.

Unfortunately it turned out that there were many such mechanisms and no obvious means to figure out which one corresponds to our universe. This leaves string theorists in a position unable to predict anything useful that would confirm their theory. Some people have claimed that this makes the theory unscientific and that physicists should abandon the idea and look for a better alternative. Such people are misguided.

String theory is not just a random set of ideas that people tried. It was the end result of exploring all the logical possibilities for the ways in which particles can work. It is the only solution to the problem of finding a consistent interaction of matter with gravity in the limit of weak fields on flat spacetime. I don’t mean merely that it is the only solution anyone could find; it is the only solution that can work. If you throw it away and start again you will only return to the same answer by the same logic.

What people have failed to appreciate is that quantum gravity acts at energy scales well above those that can be explored in accelerators or even in astronomical observations. Expecting string theory to explain low energy particle physics was like expecting particle physics to explain biology. In principle it can, but to derive biochemistry from the standard model you would need to work out the laws of chemistry and nuclear physics from first principles and then search through the properties of all the possible chemical compounds until you realised that DNA can self-replicate. Without input from experiment this is an impossible program to put into practice. Similarly, we cannot hope to derive the standard model of particle physics from string theory until we understand the physics that controls the energy scales that separate them. There are about 12 orders of magnitude in energy scale that separate chemical reactions from the electroweak scale and about 17 orders of magnitude that separate the electroweak scale from the Planck scale. We have much to learn.

How then can we test string theory? To do so we will need to look beyond particle physics and find some feature of quantum gravity phenomenology. That is not going to be easy because of the scales involved. We can’t reach the Planck energy, but sensitive instruments may be able to probe very small distance scales as small variations of effects over large distances. There is also some hope that a remnant of the initial big bang remains in the form of low frequency radio or gravitational waves. But first string theory must predict something to observe at such scales and this presents another problem.

Despite nearly three decades of intense research, string theorists have not yet found a complete non-perturbative theory of how string theory works. Without it predictions at the Planck scale are not in any better shape than predictions at the electroweak scale.

Normally quantised theories explicitly include the symmetries of the classical theories they quantise. As a theory of quantum gravity, string theory should therefore include diffeomorphism invariance of spacetime, and it does but not explicitly. If you look at string theory as a perturbation on a flat spacetime you find gravitons, the quanta of gravitational interactions. This means that the theory must respect the principles of general relativity in small deviations from the flat spacetime but it is not explicitly described in a way that makes the diffeomorphism invariance of general relativity manifest. Why is that?

Part of the answer coming from non-perturbative results in string theory is that the theory allows the topology of spacetime to change. Diffeomorphisms on different topologies form different groups so there is no way that we could see diffeomorphism invariance explicitly in the formulation of the whole theory. The best we could hope would be to find some group that has every diffeomorphism group as a subgroup and look for invariance under that.

Most string theorists just assume that this argument means that no such symmetry can exist and that string theory is therefore not based on a principle of universal symmetry. I on the other hand have proposed that the universal group must contain the full permutation group on spacetime events. The diffeomorphism group for any topology can then be regarded as a subgroup of this permutation group.
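The group-theoretic point is elementary (a sketch, not a new result): a diffeomorphism is in particular a bijection of the underlying point set, so

```latex
% For any topology on the set M of events,
\mathrm{Diff}(M) \;\subset\; \mathrm{Sym}(M),
% where Sym(M) is the full permutation group on the events.
% A symmetry built on Sym(M) is therefore independent of the
% topology of any particular solution, evading Witten's puzzle.
```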

String theorists don’t like this because they see spacetime as smooth and continuous whereas permutation symmetry would suggest a discrete spacetime. I don’t think these two ideas are incompatible. In fact we should see spacetime as something that does not exist at all in the foundations of string theory. It is emergent. The permutation symmetry on events is really to be identified with the permutation symmetry that applies to particle states in quantum mechanics. A smooth picture of spacetime then emerges from the interactions of these particles, which in string theory are the partons of the strings.

This was an idea I formulated twenty years ago, building symmetries that extend the permutation group first to large-N matrix groups and then to necklace Lie-algebras that describe the creation of string states. The idea was vindicated when matrix string theory was invented shortly after but very few people appreciated the connection.

These matrix theories vindicated the matrix extensions in my work. Since then I have been waiting patiently for someone to vindicate the necklace Lie algebra symmetries as well. In recent years we have seen a new approach to quantum field theory for supersymmetric Yang-Mills which emphasises a dual conformal symmetry rather than the gauge symmetry. This is a symmetry found in the quantum scattering amplitudes rather than the classical limit. The symmetry takes the form of a Yangian symmetry related to the permutations of the states. I find it plausible that this will turn out to be a remnant of the necklace Lie algebras in the more complete string theory. There seems to be still some way to go before this new idea, expressed in terms of an amplituhedron, is fully worked out, but I am optimistic that I will be proven right again, even if few people recognise it.

Once this reformulation of string theory is complete we will see string theory in a very different way. Spacetime, causality and even quantum mechanics may be emergent from the formalism. It will be non-perturbative and rigorously defined. The web of dualities connecting string theories and the holographic nature of gravity will be derived exactly from first principles. At least that is what I hope for. In the non-perturbative picture it should be clearer what happens at high energies when space-time breaks down. We will understand the true nature of the singularities in black-holes and the big bang. I cannot promise that these things will be enough to provide predictions that can be observed in real experiments or cosmological surveys, but it would surely improve the chances.

Loop Quantum Gravity

If you want to quantise a classical system such as a field theory there are a range of methods that can be used. You can try a Hamiltonian approach, or a path integral approach for example. You can change the variables or introduce new ones, or integrate out some degrees of freedom. Gauge fixing can be handled in various ways as can renormalisation. The answers you get from these different approaches are not quite guaranteed to be equivalent. There are some choices of operator ordering that can affect the answer. However, what we usually find in practice is that there are natural choices imposed by symmetry principles or other requirements of consistency, and the different results you get using different methods are either equivalent or very nearly so, if they lead to a consistent result at all.

What should this tell us about quantum gravity? Quantising the gravitational field is not so easy. It is not renormalisable in the same way that other gauge theories are, yet a number of different methods have produced promising results. Supergravity follows the usual field theory methods while String theory uses a perturbative generalisation derived from the old S-matrix approach. Loop Quantum Gravity makes a change of variables and then follows a Hamiltonian recipe. There are other methods such as Twistor Theory, Non-Commutative Geometry, Dynamical Triangulations, Group Field Theory, Spin Foams, Higher Spin Theories etc. None has met with success in all directions but each has its own successes in some directions.

While some of these approaches have always been known to be related, others have been portrayed as rivals. In particular the subject seems to be divided between methods related to string theory and methods related to Loop Quantum Gravity. It has always been my expectation that the two sides will eventually come together, simply because of the fact that different ways of quantising the same classical system usually do lead to equivalent results. Superficially strings and loops seem like related geometric objects, i.e. one dimensional structures in space tracing out two dimensional world sheets in spacetime.

String Theorists and Loop Quantum Gravitists alike have scoffed at the suggestion that these are the same thing. They point out that strings pass through each other, unlike the loops which form knot states. String theory also works best in ten dimensions while LQG can only be formulated in 4. String Theory needs supersymmetry and therefore matter, while LQG tries to construct first a consistent theory of quantum gravity alone. I see these differences very differently from most physicists. I observe that when strings pass through each other they can interact, and the algebraic diagrams that represent this are very similar to the skein relations used to describe the knot theory of LQG. String theory does indeed use the same mathematics of quantum groups to describe its dynamics. If LQG has not been found to require supersymmetry or higher dimensions it may be because the perturbative limit around flat spacetime has not yet been formulated, and that is where the consistency constraints arise. In fact the successes and failures of the two approaches seem complementary. LQG provides clues about the non-perturbative background independent picture of spacetime that string theorists need.

Methods from Non-Commutative Geometry have been incorporated into string theory and other approaches to quantum gravity for more than twenty years and in the last decade we have seen Twistor Theory applied to string theory. Some people see this convergence as surprising but I regard it as natural and predictable given the nature of the process of quantisation. Twistors have now been applied to scattering theory and to supergravity in 4 dimensions in a series of discoveries that has recently led to the amplituhedron formalism. Although the methods evolved from observations related to supersymmetry and string theory they seem in some ways more akin to the nature of LQG. Twistors were originated by Penrose as an improvement on his original spin-network idea and it is these spin-networks that describe states in LQG.

I think that what has held LQG back is that it separates space and time. This is a natural consequence of the Hamiltonian method. LQG respects diffeomorphism invariance, unlike string theory, but it is really only the spatial part of the symmetry that it uses. Spin networks are three-dimensional objects that evolve in time, whereas Twistor Theory tries to extend the network picture to 4 dimensions. People working on LQG have tended to embrace the distinction between space and time in their theory and have made it a feature, claiming that time is philosophically different in nature from space. I don’t find that idea appealing at all. The clear lesson of relativity has always been that they must be treated the same up to a sign.

The amplituhedron makes manifest the dual conformal symmetry of Yang-Mills theory in the form of an infinite-dimensional Yangian symmetry. These algebras are familiar from the theory of integrable systems, where they may be deformed to bring in quantum groups. In fact the scattering amplitude theory that applies to the planar limit of Yang-Mills does not use this deformation, but here lies the opportunity to unite the theory with Loop Quantum Gravity, which does use the deformation.

Of course LQG is a theory of gravity, so if it is related to anything it would be supergravity or string theory, not Yang-Mills. In the most recent developments the scattering amplitude methods have been extended to supergravity by making use of the observation that gravity can be regarded formally as the square of Yang-Mills. Progress has thus been made on formulating 4D supergravity using twistors, but so far without this deformation. A surprising observation is that supergravity in this picture requires a twistor string theory to make it complete. If the Yangian deformation could be applied to these strings then they could form knot states just like the loops in LQG. I can’t say if it will pan out that way, but I can say that it would make perfect sense if it did. It would mean that LQG and string theory would finally come together, and methods that have grown out of LQG, such as spin foams, might be applied to string theory.

The remaining mystery would be why this correspondence works only in 4 spacetime dimensions. Both twistors and LQG use related features of the symmetry of 4-dimensional spacetime that make it unclear how to generalise to higher dimensions, while string theory and supergravity have higher forms that work up to 11 dimensions. Twistor theory is related to conformal field theory, whose symmetry descends from the geometry of a space two dimensions higher: for example, the 4-dimensional conformal group is the same as the 6-dimensional spin group. By a unique coincidence the 6-dimensional symmetries are isomorphic to unitary or special linear groups over 4 complex variables, so these groups have the same representations. In particular the fundamental 4-dimensional representation of the unitary group is the same as the Weyl spinor representation in six real dimensions. This is where twistors come from, so a twistor is just a Weyl spinor. Such spinors exist in any even number of dimensions, but without the special properties found in this particular case. It will be interesting to see how the framework extends to higher dimensions using these structures.
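The chain of coincidences described here can be written out explicitly. The spin cover of the 4-dimensional conformal group and its spinor representation are

```latex
\mathrm{Spin}(4,2) \;\cong\; SU(2,2), \qquad
\mathbb{C}^4 \;=\; \text{fundamental of } SU(2,2)
\;=\; \text{Weyl spinor of } \mathrm{Spin}(4,2),
```

so a twistor $Z^\alpha \in \mathbb{C}^4$ is precisely a Weyl spinor of the conformal group of a space two dimensions up, and it is this low-dimensional accident that has no direct analogue in higher dimensions.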

Quantum Mechanics

Physicists often chant that quantum mechanics is not understood. To paraphrase some common claims: if you think you understand quantum mechanics, you are an idiot. If you investigate what it is about quantum mechanics that is so irksome, you find several features that can be listed as potentially problematic: indeterminacy, non-locality, contextuality, observers, wave-particle duality and collapse. I am not going to go through these individually; instead I will just declare myself a quantum idiot if that is what understanding implies. All these features of quantum mechanics are experimentally verified and there are strong arguments that they cannot be easily circumvented using hidden variables. If you take a multiverse view there are no conceptual problems with observers or wavefunction collapse. People only have problems with these things because they are not what we observe at macroscopic scales, and our brains are programmed to see the world classically. This can be overcome through logic and mathematical understanding, in the same way as the principles of relativity.

I am not alone in thinking that these things are not to be worried about, but there are some other features of quantum mechanics of which I take a more extraordinary view. Another aspect of quantum mechanics that gives some cause for concern is its linearity. Theories that are linear are usually too simple to be interesting: everything decouples into modes that act independently in a simple harmonic way. In quantum mechanics we can in principle diagonalise the Hamiltonian to reduce the whole universe to a sum over energy eigenstates. Can everything we experience be encoded in that one-dimensional spectrum?
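The worry can be stated in one line. Diagonalising the Hamiltonian decouples the universe into independently rotating phases,

```latex
H\,|n\rangle = E_n\,|n\rangle, \qquad
|\psi(t)\rangle = \sum_n c_n\, e^{-iE_n t/\hbar}\, |n\rangle,
\qquad c_n = \langle n|\psi(0)\rangle,
```

so in a strictly linear theory all of the dynamics is carried by the one-dimensional list of energies $E_n$ and the fixed coefficients $c_n$.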

In quantum field theory this is not a problem, but there we have spacetime as a frame of reference relative to which we can define a privileged basis for the Hilbert space of states. It is no longer just the energy spectrum that counts. But what if spacetime is emergent? What then do we choose our Hilbert basis relative to? The symmetry of the Hilbert space must be broken for this emergence to work, but linear systems do not break their symmetries. I am not talking about classical symmetries of the type that gets broken by the Higgs mechanism. I mean the quantum symmetries in phase space.

Suppose we accept that string theory describes the underlying laws of physics, even if we don’t know which vacuum solution the universe selects. Doesn’t string theory also embody the linearity of quantum mechanics? It does so long as you already accept a background spacetime, but in string theory the background can be changed by dualities. We don’t know how to describe the framework in which these dualities are manifest but I think there is reason to suspect that quantum mechanics is different in that space, and it may not be linear.

The distinction between classical and quantum is not as clear-cut as most physicists like to believe. In perturbative string theory the Feynman diagrams are given by string worldsheets which can branch when particles interact. Is this the classical description or the quantum description? The difference between classical and quantum is that the worldsheets will extremise their area in the classical solutions but follow any history in the quantum. But then we already have multi-particle states and interactions in the classical description. This is very different from quantum field theory.

Stepping back, though, we might notice that quantum field theory also has some schizophrenic characteristics. The Dirac equation is treated as classical with non-linear interactions, even though it is a relativistic Schrödinger equation with quantum features such as spin already built in. After you second-quantise you get a sum over all possible Feynman graphs, much like the quantum path integral’s sum over field histories, but in this comparison the Feynman diagrams act as classical configurations. What is this telling us?

My answer is that first and second quantisation are the first steps in a sequence of multiple iterated quantisations. Each iteration generates new symmetries and dimensions. For this to work the quantised layers must be non-linear, just as the interaction between electrons and photons is non-linear in the so-called first-quantised field theory. The idea of multiple quantisations goes back many years and did not originate with me, but I have a unique view of its role in string theory based on my work with necklace Lie algebras, which can be constructed in an iterated procedure where one necklace dimension is added at each step.

Physicists working on scattering amplitudes are at last beginning to see that the symmetries in nature are not just those of the classical world. There are dual-conformal symmetries that are completed only in the quantum description. These seem to merge with the permutation symmetries of the particle statistics. The picture is much more complex than the one painted by the traditional formulations of quantum field theory.

What then is quantisation? When a Fock space is constructed, the process is formally like an exponentiation. In the categorical picture we start to see an origin of what quantisation is, because exponentiation generalises to the process of constructing all functions between sets, or all functors between categories, and so on to higher n-categories. Category theory seems to encapsulate the natural processes of abstraction in mathematics. This, I think, is what lies at the base of quantisation. Variables become functional operators, objects become morphisms. Quantisation is a particular form of categorification, one we don’t yet understand. Iterating this process constructs higher categories until the unlimited process itself forms an infinite omega-category that describes all natural processes in mathematics and in our multiverse.
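The formal sense in which Fock space is an exponentiation can be made explicit. For a one-particle Hilbert space $H$, the bosonic Fock space is

```latex
F(H) \;=\; \bigoplus_{n=0}^{\infty} \mathrm{Sym}^n(H),
```

which mirrors the series $e^{x} = \sum_n x^n/n!$ term by term, with symmetrisation playing the role of the $1/n!$; the categorical analogue is passing from objects to the exponential object of all maps between them.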

Crazy ideas? Ill-formed? Yes, but I am just saying – that is the way I see it.

Black Hole Information

We have seen that quantum gravity can be partially understood by using the constraint that it needs to make sense in the limit of small perturbations about flat spacetime. This led us to strings and supersymmetry. There is another domain of thought experiments that can tell us a great deal about how quantum gravity should work, and it concerns what happens when information falls into a black hole. The train of arguments is well known so I will not repeat it here. The first conclusion is that the entropy of a black hole is given by its horizon area in Planck units, and the entropy in any other volume is less than the Bekenstein bound taken from the surrounding surface. This leads to the holographic principle: everything that can be known about the state inside the volume can be determined from a state on its surface. To explain how the inside of a black hole can be determined from its event horizon or outside, we use black hole complementarity, which exploits the fact that we cannot observe both the inside and the outside at a later time. Although the reasoning that leads to these conclusions is long and unsupported by any observation, it is in my opinion quite robust, and it is backed up by theoretical models such as AdS/CFT duality.
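For reference, the horizon-area entropy and the Planck length used here are

```latex
S_{BH} \;=\; \frac{k_B\, c^3 A}{4 G \hbar} \;=\; \frac{k_B\, A}{4\,\ell_P^{\,2}},
\qquad \ell_P = \sqrt{\frac{G\hbar}{c^3}},
```

so the entropy counts roughly one bit per four Planck areas of horizon, not per unit of enclosed volume.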

There are some further conclusions that I would draw from black hole information that many physicists might disagree with. If the information in a volume is limited by the surrounding surface, then we cannot be living in a closed universe with a finite volume, such as a 3-sphere. If we were, you could extend the boundary surface until it shrank back to zero on the far side and conclude that there is no information in the universe. Some physicists prefer to think that the Bekenstein bound should be modified on large scales so that this conclusion cannot be drawn, but I think the holographic principle holds at all scales, and the universe must be infinite, or finite with a different topology.

Recently there has been a claim that the holographic principle leads to the conclusion that the event horizon must be a firewall through which nothing can pass. This conclusion is based on the assumption that information inside a black hole is replicated outside through entanglement. If you drop two particles with fully entangled spin states into a black hole, you cannot have another particle outside that is also entangled with them; that does not make sense. I think the information is replicated on the horizon in a different way.

It is my view that the apparent information in the bulk volume field variables must be mostly redundant, and that this implies a large symmetry where the degrees of symmetry match the degrees of freedom in the fields or strings. Since there are fundamental fermions it must be a supersymmetry. I call a symmetry of this sort a complete symmetry. We know that when there is a gauge symmetry there are corresponding charges that can be determined on a boundary by measuring the flux of the gauge field. In my opinion a generalisation of this using a complete symmetry accounts for holography. I don’t think that this complete symmetry is a classical symmetry. It can only be known properly in a full quantum theory, much as dual conformal gauge symmetry is a quantum symmetry.

Some physicists assume that if you could observe Hawking radiation you would be looking at information coming from the event horizon. It is not often noticed that the radiation is thermal, so if you observe it you cannot determine where it originated. There is no detail you could focus on to measure the distance of the source. It makes more sense to me to think of this radiation as emanating from a backward singularity inside the black hole. This means that a black hole, once formed, is also a white hole. This may seem odd but it is really just an extension of black hole complementarity. I also agree with those who say that as black holes shrink they become indistinguishable from heavy particles that decay by emitting radiation.
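The thermality is exact at leading order: a black hole of mass $M$ radiates as a blackbody at the Hawking temperature

```latex
T_H \;=\; \frac{\hbar\, c^3}{8\pi G M k_B},
```

a spectrum that depends only on $M$ and so carries no positional detail that could be traced back to a source at the horizon.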


Ontology

Every theorist working on fundamental physics needs some background philosophy to guide their work. They may think that causality and time are fundamental, or that they are emergent, for example. They may have the idea that deeper laws of physics are simpler. They may like reductionist principles or instead prefer a more anthropomorphic world view. Perhaps they think the laws of physics must be discrete, combinatorial and finite. They may think that reality and mathematics are the same thing, or that reality is a computer simulation, or that it is in the mind of God. These things affect the theorist’s outlook and influence the kind of theories they look at. They may be metaphysical and sometimes completely untestable in any real sense, but they are still important to the way we explore and understand the laws of nature.

In that spirit I have formed my own elaborate ontology as my way of understanding existence and the way I expect the laws of nature to work out. It is not complete or finished and it is not a scientific theory in the usual sense, but I find it a useful guide for where to look and what to expect from scientific theories. Someone else may take a completely different view that appears contradictory but may ultimately come back to the same physical conclusions. That I think is just the way philosophy works.

In my ontology it is universality that counts most. I do not assume that the most fundamental laws of physics should be simple or beautiful or discrete or finite. What really counts is universality, but that is a difficult concept that requires some explanation.

It is important not to be misled by the way we think. Our mind is a computer running a program that models space, time and causality in a way that helps us live our lives, but that does not mean that these things are important in the fundamental laws of physics. Our intuition can easily mislead our way of thinking. It is hard to understand that time and space are interlinked and to some extent interchangeable, but we now know from the theory of relativity that this is the case. Our minds understand causality and free will, the flow of time and the difference between past and future, but we must not make the mistake of assuming that these things are also important for understanding the universe. We like determinacy, predictability and reductionism, but we can’t assume that the universe shares our likes. We experience our own consciousness as if it were something supernatural, but perhaps it is no more than a useful feature of our psychology, a trick to help us think in a way that aids our survival.

Our only real ally is logic. We must consider what is logically possible and accept that most of what we observe is emergent rather than fundamental. The realm of logical possibilities is vast and described by the rules of mathematics. Some people call it the Platonic realm and regard it as a multiverse within its own level of existence, but such thoughts are just mind tricks. They form a useful analogy to help us picture the mathematical space, when really logical possibilities are just that. They are possibilities stripped of attributes like reality or existence or place.

Philosophers like to argue about whether mathematical concepts are discovered or invented. The only fair answer is both or neither. If we made contact with alien life tomorrow it is unlikely that we would find them playing chess. The rules of chess are mathematical but they are a human invention. On the other hand we can be quite sure that our new alien friends would know how to use the real numbers if they are at least as advanced as us. They would also probably know about group theory, complex analysis and prime numbers. These are the universal concepts of mathematics that are “out there” waiting to be discovered. If we forgot them we would soon rediscover them in order to solve general problems. Universality is a hard concept to define. It distinguishes the parts of mathematics that are discovered from those that are merely invented, but there is no sharp dividing line between the two.

Universal concepts are not necessarily simple to define. The real numbers, for example, are notoriously difficult to construct if you start from more basic axiomatic constructs such as set theory. To do that you have to first define the natural numbers using the cardinality of finite sets and Peano’s axioms. This is already an elaborate structure, and it is just the start. You then extend to the rationals and then to the reals using something like the Dedekind cut. Not only is the definition long and complicated, it is also very non-unique. The aliens may have a different definition and may not even consider set theory the right place to start, but it is sure and certain that they would still possess the real numbers as a fundamental tool with the same properties as ours. It is the higher-level concept that is universal, not the definition.
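As a toy illustration of how the definition is arbitrary while the arithmetic is not, here is a minimal sketch of Peano naturals in Python. The encoding of numerals as nested tuples, and the helper names `from_int` and `to_int`, are my own arbitrary choices; any other encoding yields the same arithmetic.

```python
# Peano naturals: zero is the empty tuple, successor wraps in a tuple.
ZERO = ()

def succ(n):
    return (n,)

def add(m, n):
    # Peano recursion: m + 0 = m,  m + S(n) = S(m + n)
    return m if n == ZERO else succ(add(m, n[0]))

def mul(m, n):
    # Peano recursion: m * 0 = 0,  m * S(n) = m * n + m
    return ZERO if n == ZERO else add(mul(m, n[0]), m)

def from_int(k):
    # Encode an ordinary int as a Peano numeral.
    n = ZERO
    for _ in range(k):
        n = succ(n)
    return n

def to_int(n):
    # Decode a Peano numeral for comparison with ordinary arithmetic.
    count = 0
    while n != ZERO:
        n = n[0]
        count += 1
    return count

# The encoding is invented; the arithmetic it defines is discovered:
assert to_int(add(from_int(3), from_int(4))) == 7
assert to_int(mul(from_int(3), from_int(4))) == 12
```

A different civilisation could swap the tuples for sets or strings and would still land on the same addition and multiplication tables.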

Another example of universality is the idea of computability. A universal computer is one that is capable of following any algorithm. To define this carefully we have to pick a particular mathematical construction of a theoretical computer with unlimited memory space. One possibility for this is a Turing machine but we can use any typical programming language or any one of many logical systems such as certain cellular automata. We find that the set of numbers or integer sequences that they can calculate is always the same. Computability is therefore a universal idea even though there is no obviously best way to define it.
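A hedged toy example of this model-independence: a hand-written Turing machine and a one-line Python expression compute the same function and agree on every input. The tape alphabet, states and transition rules below are my own illustrative construction, not a canonical machine.

```python
# A tiny Turing machine that doubles a number written in unary.
# For each '1' it marks it 'X' and appends two 'Y's at the right end.
RULES = {
    ('s0', 'X'): ('s0', 'X', +1),  # skip already-processed marks
    ('s0', '1'): ('s1', 'X', +1),  # mark a 1, go append two Ys
    ('s1', '1'): ('s1', '1', +1),
    ('s1', 'Y'): ('s1', 'Y', +1),
    ('s1', '_'): ('s2', 'Y', +1),  # write first new symbol
    ('s2', '_'): ('s3', 'Y', -1),  # write second, turn back
    ('s3', 'Y'): ('s3', 'Y', -1),
    ('s3', '1'): ('s3', '1', -1),
    ('s3', 'X'): ('s0', 'X', +1),  # return to the scan state
}

def run(n):
    tape = dict(enumerate('1' * n))  # unary input; blanks read as '_'
    state, head = 's0', 0
    while (state, tape.get(head, '_')) in RULES:
        state, symbol, move = RULES[(state, tape.get(head, '_'))]
        tape[head] = symbol
        head += move
    return sum(1 for s in tape.values() if s == 'Y')

# Two very different models of computation, one function:
for n in range(6):
    assert run(n) == 2 * n
```

Any other universal model, whether a cellular automaton or a programming language, carves out exactly the same class of computable functions, even though no single definition is obviously the best one.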

Universality also appears in complex physical systems, where it is linked to emergence. The laws of fluid dynamics, elasticity and thermodynamics describe the macroscopic behaviour of systems built from many small interacting elements, where the details of those interactions are not important. Chaos arises in any nonlinear system of equations at the boundary where simple behaviour meets complexity, and we find that chaos is described by certain numbers that are independent of how the system is constructed. These examples show how universality is of fundamental importance in physical systems and motivate the idea that it can be extended to the formation of the fundamental laws too.
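One of those "certain numbers" can be checked at home. This sketch estimates the first Feigenbaum constant, roughly 4.669, from the logistic map's superstable parameter values; the secant-method seeding is my own choice of numerics, and the constant would come out the same for any map with a quadratic maximum.

```python
# Estimate the Feigenbaum constant delta from the logistic map
# f(x) = r*x*(1-x). Superstable parameters s_n satisfy
# f^(2^n)(1/2) = 1/2; ratios of their gaps converge to delta.

def g(r, n):
    # Iterate the critical point 2^n times; zero means superstable.
    x = 0.5
    for _ in range(2 ** n):
        x = r * x * (1.0 - x)
    return x - 0.5

def superstable(n, guess):
    # Secant method on g(., n), seeded close to the expected root.
    r0, r1 = guess, guess + 1e-5
    for _ in range(60):
        f0, f1 = g(r0, n), g(r1, n)
        if f1 == f0 or abs(r1 - r0) < 1e-13:
            break
        r0, r1 = r1, r1 - f1 * (r1 - r0) / (f1 - f0)
    return r1

s = [2.0, 1.0 + 5 ** 0.5]  # periods 1 and 2, known exactly
for n in range(2, 6):
    # Extrapolate a seed using the expected geometric shrinking.
    guess = s[-1] + (s[-1] - s[-2]) / 4.669
    s.append(superstable(n, guess))

deltas = [(s[i] - s[i - 1]) / (s[i + 1] - s[i])
          for i in range(1, len(s) - 1)]
print(deltas[-1])  # slowly approaching 4.669...
```

The ratios converge regardless of whether the underlying system is a dripping tap, a driven circuit or this one-line map, which is exactly the kind of universality meant here.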

Universality and emergence play a key role in my ontology, and they work at different levels. The most fundamental level is the Platonic realm of mathematics. Remember that the use of the word realm is just an analogy. You can’t destroy this idea by questioning the realm’s existence or whether it is inside our minds. It is just the concept that contains all logically consistent possibilities. Within this realm there are things that are invented, such as the game of chess, the text that forms the works of Shakespeare, or gods. But there are also the universal concepts that any advanced team of mathematicians would discover to solve the general problems they invent.

I don’t know precisely how these universal concepts emerge from the Platonic realm, but I use two different analogies to think about it. The first is emergence in complex systems, which gives us the rules of chaos and thermodynamics. This can be described using statistical physics, which leads to critical systems and scaling phenomena where universal behaviour is found. The same might apply to the complex system consisting of the collection of all mathematical concepts. From this system the laws of physics may emerge as universal behaviour. I call this analogy the Theory of Theories; others have called a similar idea the Mathematical Universe Hypothesis. However, this statistical physics analogy is not perfect.

Another way to think about what might be happening is in terms of the process of abstraction. We know that we can multiply some objects in mathematics, such as permutations or matrices, and that they follow the rules of an abstract structure called a group. Mathematics has other abstract structures like fields, rings, vector spaces and topologies. These are clearly important examples of universality, but we can take the idea of abstraction further. Groups, fields, rings and the rest all have a definition of isomorphism and also something equivalent to homomorphism. We can look at these concepts abstractly using category theory, a generalisation of set theory encompassing them. In category theory we find universal ideas, such as natural transformations, that help us understand the lower-level abstract structures. This process of abstraction can be continued, giving us higher-dimensional n-categories. These structures also seem to be important in physics.

I think of emergence and abstraction as two facets of the deep concept of universality. It is something we do not understand fully but it is what explains the laws of physics and the form they take at the most fundamental level.

What physical structures emerge at this first level? Statistical physics systems are very similar in structure to quantum mechanics: both are expressed as a sum over possibilities. In category theory we also find abstract structures very like quantum mechanical systems, including structures analogous to Feynman diagrams. I think it is therefore reasonable to assume that some form of quantum physics emerges at this level. However, time and unitarity do not. The quantum structure is something more abstract, like a quantum group. The other physical idea present in this universal structure is symmetry, but again in an abstract form more general than group theory. It will include supersymmetry and other extensions of ordinary symmetry. I think it likely that this is really a system described by a process of multiple quantisation, where structures of algebra and geometry emerge but with multiple dimensions and a single universal symmetry. I need a name for this structure that emerges from the Platonic realm, so I will call it the Quantum Realm.

When people reach for what is beyond M-theory, or for an extension of the amplituhedron, they are looking for this quantum realm. It is something that we are just beginning to touch with 21st-century theories.

From this quantum realm another, more familiar level of existence emerges. This is a process analogous to the superselection of a particular vacuum. At this level space and time emerge, and the universal symmetry is broken down to a much smaller one. Perhaps a different selection would provide different numbers of space and time dimensions and different symmetries. The laws of physics that then emerge are the laws of relativity and particle physics we are familiar with. This is our universe.

Within our universe there are further processes of emergence that we are more familiar with. Causality emerges from the laws of statistical physics, with the arrow of time rooted in the big bang singularity. Causality is therefore much less fundamental than quantum mechanics or space and time. The familiar structures of the universe also emerge within it, including life. Although this places life at the least fundamental level, we must not forget the anthropic influence it has on the selection of our universe from the quantum realm.

Experimental Outlook

Theoretical physics continues to progress in useful directions but to keep it on track more experimental results are needed. Where will they come from?

In recent decades we have got used to mainly negative results in experimental particle physics, or at best results that merely confirm theories from 50 years ago. The significance of negative results is often understated to the extent that the media portray them as failures. This is far from being the case.

The LHC’s negative results for SUSY and other BSM exotics may be seen as disappointing but they have led to the conclusion that nature appears fine-tuned at the weak scale. Few theorists had considered the implications of such a result before, but now they are forced to. Instead of wasting time on simplified SUSY theories they will turn their efforts to the wider parameter space or they will look for other alternatives. This is an important step forward.

A big question now is: what will be the next accelerator? The ILC or a new LEP would be great Higgs factories, but it is not clear that they would find much beyond what we already know. Given that the Higgs mass gives it a narrow width, I think it would be better to build a new detector for the LHC specialised for seeing diphoton and 4-lepton events with the best possible energy and angular resolution. The LHC will continue to run for several decades and can be upgraded to higher luminosity and even higher energy. This should be taken advantage of as much as possible.

However, the best advance that would make the LHC more useful would be to change the way it searches for new physics. It has been too closely designed with specific models in mind and should have been run to search for generic signatures of particles with the full range of possible quantum numbers, spin, charge, lepton and baryon number. Even more importantly the detector collaborations should be openly publishing likelihood numbers for all possible decay channels so that theorists can then plug in any models they have or will have in the future and test them against the LHC results. This would massively increase the value of the accelerator and it would encourage theorists to look for new models and even scan the data for generic signals. The LHC experimenters have been far too greedy and lazy by keeping the data to themselves and considering only a small number of models.

There is also a movement to construct a 100 TeV hadron collider. This would be a worthwhile long-term goal, and even if it did not find new particles that in itself would be a profound discovery about the ways of nature. If physicists want to do that they are going to have to learn how to justify the cost to contributing nations and their taxpayers. It is no use talking about just the value of pure science and some dubiously justified spin-offs. CERN must reinvent itself as a postgraduate physics university where people learn how to do highly technical research in collaborations that cross international frontiers. Most will go on to work in industry using the skills they have developed in technological research, or even as technology entrepreneurs. This is the real economic benefit that big physics brings, and if CERN can’t track how that works and promote it, it cannot expect future funding.

With the latest results from the LUX experiment, hopes of direct detection of dark matter have faded. Again the negative result is valuable, but it may just mean that dark matter does not interact weakly at all. The search should go on, but I think more can be done with theory to model dark matter and its role in galaxy formation. If we can assume that dark matter started out with the same temperature as the visible universe, then it should be possible to model its evolution as it settled into galaxies and estimate the mass of the dark matter particle. This would help in searching for it. Meanwhile the searches for dark matter will continue, including searches for other possible forms such as axions. Astronomical experiments such as AMS-02 may find important evidence, but it is hard to find optimism there. A better prospect exists for observations of the dark age of the universe using new radio telescopes such as the Square Kilometre Array, which could detect hydrogen gas clouds as they formed the first stars and galaxies.

Neutrino physics is one area that has seen positive results going beyond the standard model, so it is an important area to keep going. Experiments need to settle the question of whether neutrinos are Majorana spinors and produce figures for the neutrino masses. Observation of cosmological high-energy neutrinos is also an exciting area, with the IceCube experiment proving its value.

Gravitational wave searches have so far been a disappointment, but this is probably due to over-optimism about the nature of cosmological sources rather than a failure of the theory of gravitational waves itself. The new run with Advanced LIGO must find them, otherwise the field will be in trouble. The next step would be LISA or a similar detector in space.

Precision measurements are another area that could bring results. Measurements of the electron’s electric dipole moment can be further improved, and there must be other similar opportunities for inventive experimentalists. If a clear anomaly is found it could set the scale for new physics and justify the next generation of accelerators.

There are other experiments that could yield positive results, such as cosmic ray observatories and low-frequency radio antennae that might find an echo from the big bang beyond the veil of the primordial plasma. But if I had to nominate one area for new effort it would have to be the search for proton decay. So far results have been negative, pushing the proton lifetime to at least 10^34 years, but this has helped eliminate the simplest GUT models, which predicted a shorter lifetime. SUSY models predict lifetimes of over 10^36 years, but this could be reached if we are willing to set up a detector around a huge volume of clear Antarctic ice. IceCube has demonstrated the technology, but for proton decay a finer array of light detectors is needed to catch the lower-energy radiation. If decays were detected they would give us positive information about physics at the GUT scale. This is something of enormous importance and its priority must be raised.

Apart from these experiments we must rely on the advance of precision technology and the inventiveness of the experimental physicist. Ideas such as the holometer may have little hope of success, but each negative result tells us something, and if someone gets lucky a new flood of experimental data will nourish our theories. There is much that we can still learn.

Naturally Unnatural

July 18, 2013


Today is the first day of the EPS-HEP conference in Stockholm, the largest particle physics conference of the year. In recent years such conferences have been awaited with great anticipation because of the prospects of new results in the latest LHC and Tevatron reports but this year things are a little more subdued. We will have to wait another two years before the LHC restarts and we can again follow every talk expecting the unexpected. Perhaps there will be some surprises in a late LHC analysis or something from dark matter searches, but otherwise this is just a good time to look back and ask, what did we learn so far from the LHC?

Nightmare Scenario

The answer is that we have learnt that the mass of the Higgs boson is around 125 GeV and that this lies near the minimum end of the range of masses that would allow the vacuum to be stable even if there are no new particles to help stabilize it. Furthermore, we do indeed find no evidence of other new particles up to the TeV range and the Higgs looks very much like a lone standard model Higgs. Yes, there could still be something like SUSY there if it has managed to hide in an awkward place. There could even be much lighter undiscovered particles such as those hinted at by some dark matter searches, if they are hard to produce or detect at colliders, but the more obvious conclusion is that nothing else is there at these energies.
This is what many people called the “nightmare scenario” because it means that there are no new clues that can tell us about the next model for particle physics. Many theorists had predicted SUSY particles in this energy range in order to remove fine-tuning and have been disappointed by the results. Instead we have seen that the Higgs sector is probably fine-tuned at least by some small factor. If no SUSY is found in the next LHC run at 13 TeV then it is fine-tuned at about the 1% level.


Many physicists dislike fine-tuning. They feel that the laws of physics should be naturally derived from a simple model that leaves no room for such ambiguity. When superstring theory first hit the street it generated a lot of excitement precisely because it seemed to promise such a model. The heterotic string in particular looked just right for the job because its E8 gauge group is the largest exceptional simple Lie algebra and it is just big enough to contain the standard model gauge group with suitable chiral structures. All they needed to do was figure out which Calabi-Yau manifold could be stabilised as a compactification space to bring the number of dimensions down from 10 to the 4 space and time dimensions of the real world. They would then see quickly how the symmetry got broken and the standard model emerged at low energy, or so they hoped.

The problem is that there has been evidence for fine-tuning in nature for a long time. One of the earliest known examples was the carbon resonance predicted by Hoyle at precisely the right energy to allow carbon to form in stellar nucleosynthesis. If it were not there the cosmos would not contain enough carbon for us to exist. Hoyle was right and the resonance was soon found in nuclear experiments. Since then we have realized that many other parameters of the standard model are seemingly tuned for life. If the strong force were slightly stronger, two neutrons would form a stable bound state, providing a simple form of matter that would replace hydrogen. If the cosmological constant were stronger the universe would have collapsed before we had time to evolve; any weaker and galaxies would not have formed. There are many more examples. If the standard model had fallen out of heterotic string theory as hoped we would have to accept these fine-tunings as cosmic coincidences with no possible explanation.

The Multiverse

String theorists did learn how to stabilize the string moduli space but they were disappointed. Instead of finding a unique stable point to which any other compactification would degenerate, they found that fluxes could stabilize a vast landscape of possible outcomes. There are so many possible stable states for the vacuum that the task of exploring them to find one that fits the standard model seems well beyond our capabilities. Some string theorists saw the bright side of this. It offers the possibility of selection to explain fine-tuning. This is the multiverse theory, which says that all the possible states in the landscape exist equally, and by anthropic arguments we find ourselves in a universe suitable for life simply because there is no intelligent life in the ones that are not fine-tuned.

Others were not so happy. The conclusion seems to be that string theory cannot predict low energy physics at all. This is unacceptable according to the scientific method, or so they say. There must be a better way out, otherwise string theory has failed and should be abandoned in favor of a search for a completely different alternative. But the string theorists carry on. Why is that? Is it because they are aging professors who have invested too much intellectual capital in their theory? Are young theorists doomed to be corrupted into following the evil ways of string theory by their egotistical masters when they would rather be working on something else? I don’t think so. Physicists did not latch onto string theory just because it is full of enchanting mathematics. They study it because they have come to understand the framework of consistent quantum theories and they see that it is the only direction that can unify gravity with the other forces. Despite many years of trying, nothing else offers a viable alternative that works (more about LQG in another post).

Many people hate the very idea of the multiverse. I have heard people say that they cannot accept that such a large space of possibilities exists. What they don’t seem to realize is that standard quantum field theory already offers this large space. The state vector of the universe comes from a Hilbert space of vast complexity. Each field variable becomes an operator on a space of states, and the full Hilbert space is the tensor product of all those spaces. Its dimension is the product of the dimensions of all the local spaces, and the state vector has a component amplitude for each dimension in this vast multiverse of possibilities. This is not some imaginary concept. It is the mathematical structure that successfully describes the quantum scattering of particles in the standard model. The only significant difference for the multiverse of string theory is that many of the string theory states describe different stable vacua, whereas in the standard model the stable vacua are identical under gauge symmetry. If string theory is right then the multiverse is not some hypothetical construct that we cannot access. It is the basis of the Hilbert space spanned by the wavefunction.

Some skeptics say that there is no such fine-tuning. They say that if the parameters were different then life would have formed in other ways. They say that the apparent fine-tuning that sets the mass of the Higgs boson and the small size of the cosmological constant is just an illusion. There may be some other way to look at the standard model which makes it look natural instead of fine-tuned. I think this is misguided. During the inflationary phase of the universe the wavefunction sat in some metastable state where the vacuum energy produced a huge effective cosmological constant. At the end of inflation it fell to a stable vacuum state whose contribution to the cosmological constant is much smaller. Since this is a non-symmetrical state it is hard to see why opposite-sign contributions from bosons and fermions would cancel. Unless there is some almost miraculous hidden structure, the answer seems to be fine-tuned. The same is true for the Higgs mass and other finely tuned parameters. It is very hard to see how they can be explained naturally if the standard model is uniquely determined.

People can complain as much as they like that the multiverse is unscientific because it does not predict the standard model. Such arguments are worthless if that is how the universe works. The multiverse provides a natural explanation for the unnatural parameters of physics. We do not say that astrophysics is unscientific because it does not give a unique prediction for the size and composition of the Sun. We accept that there is a landscape of possible stellar objects and that we must use observation to determine what our star looks like. The same will be true for the standard model, but that does not stop us understanding the principles that determine the landscape of possibilities, or from looking for evidence in other places.

What does it mean for life in the universe?

If the landscape of vacua is real and the world is naturally unnatural it may take many centuries to find convincing evidence, but it will have consequences for life in the universe. If you think that life arises naturally no matter what the parameters of physics are, then you would expect life to take a very diverse range of forms. I don’t just mean that life on Earth is diverse in the way we are familiar with. I mean that there should be different solutions to the chemistry of life that work on other planets. On Earth there is just one basic chemistry, based on DNA and RNA. This also includes the chemistry of metabolism, photosynthesis and other biochemical processes without which life on Earth would be very different. If we find that all higher lifeforms on other planets use these same processes then we can be sure that physics is fine-tuned for life. If any one of them did not work there would be no life. Either this fine-tuning must arise naturally from a multiverse, or we would have to accept that the existence of life at all is an almost miraculous coincidence. If on the other hand we find complex lifeforms based on molecules unlike DNA and supported by completely different mechanisms, then the argument for fine-tuning in nature is weaker.

Theorist Nima Arkani-Hamed recently suggested that it would be worth building a 100 TeV hadron collider even if the only outcome was to verify that there is no new physics up to that energy. It would show that the Higgs mass is fine-tuned to one part in 10,000 and that would be a revolutionary discovery. If it failed to prove that, it would find something less exciting such as SUSY. I don’t think this argument will raise the funding required, but if the LHC continues to strengthen the case for fine-tuning we must accept the implications.

Update 29-Jul-2013: I am just updating to add some linkbacks to other bloggers who have followed up on this. Peter Woit takes the usual negative view about what he continues to call “multiverse mania” and summarised my post by saying “Philip Gibbs, … argues that what we are learning from the LHC is that we must give up and embrace the multiverse.” To respond, I don’t think that recognising the importance of the multiverse constitutes giving up anything except the failed idea of naturalness (in future I will always couple the word “naturalness” with the phrase “failed idea” because this seems to be a successful debating technique). In particular, phenomenologists and experimenters will continue to look for physics beyond the standard model in order to explain dark matter, inflation etc. People working on quantum gravity will continue to explore the same theories and their phenomenology.

Another suggestion we see coming from Woit and his supporters is that the idea that physics may be unnaturally fine-tuned is coming from string theory. This is very much not the case. It is being driven by experiment and ordinary TeV scale phenomenology.  If you think that the string theory landscape has helped convert people to the idea you should check your history to see that the word “landscape” was coined by Lee Smolin in the context of LQG. Anthropic reasoning has also been around since long before string theory. Of course some string theorists do see the string theory landscape as a possible explanation for unnaturalness but the idea certainly exists in a much wider context.

Woit also has some links to interesting documents about naturalness from the hands of Seiberg and Wilczek.

Lubos Motl posted a much more supportive response to this article. He also offers an interesting idea about fermion masses from Delta(27), which I think tends to go against the idea that the standard model comes from a fine-tuned but otherwise unspecial compactification drawn from the string landscape. It is certainly an interesting possibility though, and it shows that all philosophical options remain open. Certainly there must be some important explanation for why there are three fermion generations, but this is one of several possibilities, including the old one that they form a multiplet of SO(10) and the new one from geometric unity.

Planck thoughts

March 22, 2013

It’s great to see the Planck cosmic background radiation data released, so what is it telling us about the universe? First off the sky map now looks like this


Planck is the third satellite sent into space to look at the CMB and you can see how the resolution has improved in this picture from Wikipedia


Like the LHC, Planck is a European experiment. It was launched back in 2009 on an Ariane 5 rocket along with the Herschel Space Observatory. The US also contributed through NASA.

The Planck data has given us some new measurements of the key cosmological parameters. The universe is made up of 69.2±1.0% dark energy, 25.8±0.4% dark matter, and 4.82±0.05% visible matter. The percentage of dark energy increases as the universe expands while the ratio of dark to visible matter stays constant, so these figures are valid only for the present. Contributions to the total energy of the universe also include a small amount of electromagnetic radiation (including the CMB itself) and neutrinos. The proportion of these is small and decreases with time.

Using the new Planck data, the age of the universe is now 13.82 ± 0.05 billion years. WMAP gave an answer of 13.77 ± 0.06 billion years. In the usual spirit of blogger combinations we bravely assume no correlation of errors to get a combined figure of 13.80 ± 0.04 billion years, so we now know the age of the universe to within about 40 million years, less than the time since the dinosaurs died out.
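
For the record, the blogger combination above is just an inverse-variance weighted mean; a minimal sketch, valid under the stated assumption of uncorrelated Gaussian errors:

```python
# Inverse-variance weighted combination of independent measurements.
import math

def combine(measurements):
    """Weighted mean and error of (value, sigma) pairs, errors uncorrelated."""
    weights = [1.0 / s**2 for _, s in measurements]
    mean = sum(w * v for w, (v, _) in zip(weights, measurements)) / sum(weights)
    sigma = 1.0 / math.sqrt(sum(weights))
    return mean, sigma

# Planck and WMAP ages in billions of years:
age, err = combine([(13.82, 0.05), (13.77, 0.06)])
# age ≈ 13.80, err ≈ 0.04
```

If the errors were actually correlated (both experiments share some modelling assumptions) the true combined error would be somewhat larger.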

The most important plot that the Planck analysis produced is the multipole analysis of the background anisotropy shown in this graph


This is like a Fourier analysis done on the surface of a sphere, and it is believed that the spectrum comes from quantum fluctuations during the inflationary phase of the big bang. The points follow the predicted curve almost perfectly, and certainly within the expected range of cosmic variance given by the grey bounds. A similar plot was produced before by WMAP, but Planck has been able to extend it to higher frequencies because of its superior angular resolution.

However, there are some anomalies at the low-frequency end that the analysis team have said are in the range of 2.5 to 3 sigma significance, depending on the estimator used. In a particle physics experiment this would not be much, but there is no look-elsewhere effect to speak of here, and these are not statistical errors that will get better with more data. This is essentially the final result. Is it something to get excited about?

To answer that it is important to understand a little of how the multipole analysis works. The first term in a multipole analysis is the monopole, which is just the average value of the radiation. For the CMB this is determined by the temperature and is not shown in this plot. The next multipole is the dipole. This is determined by our motion relative to the local preferred reference frame of the CMB, so it is specified by the three numbers of a velocity vector. This motion is considered a local effect, so it too is subtracted off in the CMB analysis and not regarded as part of the anisotropy. The first component that does appear is the quadrupole, which is the first point on the plot. The quadrupole is determined by 5 numbers, so it is shown as an average and a standard deviation. As you can see, it is significantly lower than expected. This was already known after WMAP, but it is good to see it confirmed. It contributes to the 3 sigma anomaly, but on its own it is more like a one sigma effect, so nothing too dramatic.

In general there is a multipole for every whole number l, starting with l=0 for the monopole, l=1 for the dipole and l=2 for the quadrupole. This number l is labelled along the x-axis of the plot. It does not stop there of course. We have an octupole for l=3, a hexadecapole for l=4, a dotriacontapole for l=5, a tetrahexacontapole for l=6, an octacosahectapole for l=7, etc. It goes up to l=2500 in this plot. Sadly I can’t write the name for that point. Each multipole is described by 2l+1 numbers. If you are familiar with spin you will recognise this as the number of components that describe a particle of spin l; it’s the same thing.
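
To make the counting concrete, here is a small sketch (my own illustration, not part of any analysis pipeline) of how many independent numbers the anisotropy map encodes once the monopole and dipole have been subtracted:

```python
# Each multipole l contributes 2l+1 components; the anisotropy analysis
# keeps l = 2 .. l_max (monopole and dipole removed).

def n_components(l):
    """Number of independent components of multipole l (like a spin-l particle)."""
    return 2 * l + 1

def total_anisotropy_numbers(l_max):
    # sum over l = 2..l_max of (2l+1), which equals (l_max+1)**2 - 4
    return sum(n_components(l) for l in range(2, l_max + 1))

quadrupole = n_components(2)            # 5 numbers, as stated in the text
total = total_anisotropy_numbers(2500)  # over six million numbers in the map
```

So the Planck plot, going up to l=2500, compresses more than six million independent quantities into one power spectrum.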

If you look carefully at the low-l end of the plot you will notice that the even-numbered points are low while the odd-numbered ones are high. This is the case up to l=8. In fact above that point they start to merge a range of l values into each point on the graph, so this effect could extend further for all I know. Looking back at the WMAP plot of the same thing, it seems that they started merging the points from about l=3, so we never saw this before (though some people did, because they wrote papers about it). It was hidden, yet it is highly significant, and for the Planck data it is responsible for the 3 sigma effect. In fact if they used an estimator that looked at the difference between odd and even points the significance might be higher.
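
One could quantify the odd-even effect with a simple parity statistic like the toy sketch below. To be clear, this is my own illustration on made-up C_l values, not the estimator the Planck team actually used:

```python
# Toy parity statistic: ratio of mean odd-l power to mean even-l power
# over the low multipoles. A ratio well above 1 signals an odd-l excess.

def parity_asymmetry(c_l, l_min=2, l_max=8):
    """c_l is a list indexed by l; returns mean(odd C_l) / mean(even C_l)."""
    odd = [c_l[l] for l in range(l_min, l_max + 1) if l % 2 == 1]
    even = [c_l[l] for l in range(l_min, l_max + 1) if l % 2 == 0]
    return (sum(odd) / len(odd)) / (sum(even) / len(even))

# Entirely made-up spectrum with an odd-l excess (entries 0 and 1 unused):
c_l = [0, 0, 200, 1100, 250, 1300, 300, 1500, 350]
ratio = parity_asymmetry(c_l)  # > 1 for this toy spectrum
```

A real analysis would of course have to calibrate such a statistic against simulated sky maps to assign a significance.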

There is another anomaly, called the cold spot, in the constellation of Eridanus. This is not on the axis of evil, but it is not terribly far off it. Planck has also verified this spot, first seen in the WMAP survey, which is 70 µK cooler than the average CMB temperature.

What does it all mean? No idea!

Guest Post by Felix Lev

July 17, 2012
Today viXra log is proud to host a guest post by one of our regular contributors to the archive. Felix Lev gained a PhD from the Institute of Theoretical and Experimental Physics (Moscow) and a Dr. Sci. degree from the Institute for High Energy Physics (also known as the Serpukhov Accelerator). In Russia Felix Lev worked at the Joint Institute for Nuclear Research (Dubna). Now he works as a software engineer but continues research as an independent physicist in a range of subjects including quantum theory over Galois fields.

Spreading of Ultrarelativistic Wave Packet and Redshift

In standard cosmology, the redshift of light coming to the Earth from distant objects is usually explained as a consequence of the fact that the Universe is expanding. This explanation has been questioned by many authors and many other explanations have been proposed. One example is a recent paper by Leonardo Rubio, “Layer Hubble and the Alleged Expansion of the Universe”, viXra:1206.0068.

The standard explanation assumes that photons emitted by distant objects travel through the interstellar medium practically without interacting with interstellar matter, and hence can survive their long (even billions of years) journey to the Earth. I believe that this explanation has the following obvious flaw: it does not take into account the well-known quantum effect of wave-packet spreading, and the photons are treated as classical particles (for which wave-packet spreading is negligible). The effect of wave-packet spreading has been known practically since the discovery of quantum mechanics. For classical nonrelativistic particles this effect is negligible since the characteristic time of wave-packet spreading is of the order of ma²/ℏ, where m is the mass of the body and a is its typical size. In optics, wave-packet spreading is usually discussed in view of the dispersion law ω(k) when a wave travels through a medium. But even if a photon travels in empty space, its wave function is subject to wave-packet spreading.

A simple calculation, the details of which can be found in my paper viXra:1206.0074, gives for the characteristic time t* of spreading of the photon wave function a quantity given by the same formula but with m replaced by E/c², where E is the photon energy. This result can be rewritten as t* = 2πT(a/λ)², where T is the period of the wave, λ is the wavelength and a is the dimension of the photon wave function in the direction perpendicular to the photon momentum. Hence even for optimistic values of a this quantity is typically much less than a second.
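
Plugging illustrative numbers into this formula shows the scale involved; the choice of a green optical wavelength and a one-metre transverse packet size below are my own assumptions, purely for orientation:

```python
# Evaluate t* = 2*pi*T*(a/lambda)**2 for an optical photon.
import math

C = 3.0e8  # speed of light, m/s

def spreading_time(wavelength, a):
    """Characteristic wave-packet spreading time t* = 2*pi*T*(a/lambda)^2,

    where T = lambda/c is the wave period and a is the transverse size
    of the photon wave packet.
    """
    period = wavelength / C
    return 2 * math.pi * period * (a / wavelength) ** 2

# Green light (500 nm) with a generous 1 m transverse packet size:
t_star = spreading_time(500e-9, 1.0)  # a few hundredths of a second
```

Even with such a generous transverse size the spreading time comes out at a few hundredths of a second, which is the point being made: far shorter than any astronomical travel time.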

If spreading is so fast, then a question arises: why can we see stars and even planets rather than an almost isotropic background? The only explanation is that the interaction of photons with the interstellar medium cannot be neglected. At the quantum level a description of the interaction is rather complicated since several processes should be taken into account. For example, a photon can be absorbed by an atom and reemitted in approximately the same direction. This process illustrates the fact that in a medium the speed of propagation is less than c: after absorbing a photon, the atom remains in an excited state for some time before reemitting. This process plays an important role from the point of view of wave-packet spreading. Indeed, the atom emits a photon with a wave packet of a small size. If the photon encounters many atoms on its way, this does not allow the photon wave function to spread significantly.

In view of this qualitative picture it is clear that at least a part of the redshift can be a consequence of this energy loss, and the greater the distance to an object, the greater the loss. This picture also raises the problem that the density of the interstellar medium might be much greater than usually believed. Among the different scenarios discussed in the literature are dark energy, dark matter and others. As shown in my papers (see e.g. viXra:1104.0065 and references therein), the cosmological acceleration can be easily and naturally explained from first principles of quantum theory without involving dark energy, an empty space-time background and other artificial notions. However, the other possibilities seem to be more realistic and they are now being intensively studied.

How did early supermassive black holes form?

July 2, 2011

It seems like every few weeks we hear news of a new study of the early universe showing that black holes formed earlier than expected and that structure in the pattern of galaxies extends to larger distances than expected.

The ruling paradigm says that galaxies formed when hydrogen gas and dark matter slowly clumped together under their mutual gravitational pull. Stars formed and continued to collapse together into galaxies. The earliest stars, which were large, would die quickly and form black holes, which would coalesce into the supermassive black holes at the centres of galaxies.

The process was seeded by density perturbations in the gas that existed at the time of last light scattering. The effects of these perturbations are seen in the cosmic microwave background and are very familiar to cosmologists. They are believed to be due to fluctuations during the inflationary epoch and they have the right scale invariant spectrum to fit that hypothesis. You can model the formation of galaxies and large scale galactic structure in cold dark matter models using computer simulations. With the right parameters set for the mass of dark matter particles you can get good agreement with observations.

But the agreement is not good enough. It predicts that the black holes form after the stars, yet we see quasars appearing in the early universe containing huge black holes that must have formed much earlier. The latest example is a quasar with a mass of 2 billion suns, observed just 770 million years after the big bang by ESO’s Very Large Telescope. We have seen proto-galaxies and gamma-ray bursts from even earlier.

We also observe structure in the distribution of galaxies that extends out to very large scales. This is not predicted by the cold dark matter theory of structure formation. An example is the Sloan Great Wall, a vast planar structure spanning 5% of the size of the observable universe. Up to these scales the distribution of galaxies forms clusters and filaments, as well as voids separated by these walls. How could these have formed so soon and so big?

One possible answer is that they did not form through gravitational collapse at all, but instead by a process of caustic focusing of dark matter by gravitational waves. Let me explain.

We know very little about how the inflationary epoch ended. The vacuum state would have changed as the inflationary scalar field dropped into a broken phase. There may have been a phase transition, but it may have been a soft second-order transition or even a smooth crossover. We don’t even know when it happened. It may have been the electroweak transition or something earlier. With new physics from the LHC we may be able to work out how it happened.

It is likely that the transition did not happen simultaneously at all points in space. Fluctuations would mean that inflation continued a little longer in some places than others. This would leave a remnant gravitational wave background in the universe which in time would have cooled and weakened as the universe expanded more slowly. It would be hard to detect directly today because of its very low frequency and weak amplitude, but in the early universe during baryogenesis it would have been stronger.

The effect on baryonic matter would, however, have been washed out by electromagnetic forces acting more strongly than anything these waves could do. Dark matter, on the other hand, is uncharged and only interacts weakly. The gravitational waves, if strong enough, could have influenced the distribution of dark matter. This is more true if dark matter particles are heavy, so that they move more slowly at a given temperature. So what would happen?

In fact heavy particles would follow geodesics through the gravitational waves, which would focus them onto caustic surfaces. The process is very similar to the focusing of light by waves on the surface of the sea or a swimming pool, creating the familiar patterns of light on the bottom. The caustic lines are replaced by surfaces stretching across the universe, just like the ones seen in the Sloan survey. Where the walls meet, even denser concentrations of matter would form.
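
The caustic mechanism can be illustrated with a toy one-dimensional model. To be clear, this is entirely my own sketch and not a cosmological simulation: free-streaming particles whose initial velocities carry a sinusoidal perturbation pile up into sharp density caustics once enough time has passed, exactly the folding that produces bright lines on a pool floor.

```python
# Toy 1-D caustic formation: particles start uniformly in [0,1) with
# velocity v(x) = A*sin(2*pi*k*x) and free-stream for a time t.
# Caustics (formally infinite density) appear once t*A*2*pi*k >= 1.
import math

def density_contrast(t, amplitude, k, n=100000, bins=100):
    """Maximum binned density relative to the mean after free-streaming."""
    counts = [0] * bins
    for i in range(n):
        x0 = i / n                                           # initial position
        x = x0 + t * amplitude * math.sin(2 * math.pi * k * x0)
        counts[int((x % 1.0) * bins)] += 1
    return max(counts) / (n / bins)

low = density_contrast(t=0.01, amplitude=1.0, k=2)   # before caustics form
high = density_contrast(t=0.2, amplitude=1.0, k=2)   # after folds: sharp peaks
```

Before the fold time the density contrast stays modest; afterwards narrow bins near the fold points collect several times the mean count, and in the cosmological analogue those overdense sheets are where dark matter would start to self-gravitate.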

Caustic light patterns formed by water waves quickly shift to disappear and reform elsewhere, but when enough dark matter is concentrated into one place it will itself gravitate and form dark stars or black holes which lock in the pattern. This could have happened very early in the universe, possibly even before the cosmic radiation background last scattered off hot baryonic matter.

The ordinary stars would form around these structures either by gravitational attraction or due to the pressure of radiation from dark stars and gas falling into the black holes.  Either way the structure in the distribution of galaxies would be largely determined by the caustic patterns provided by the gravitational waves so it can extend much further depending on the spectrum of the waves at large wavelengths. The large black holes that form quasars would originate from concentrations of dark matter at the densest points where the caustic planes meet.

Even as more discoveries appear to contradict the  ΛCDM theory cosmologists stick to the old paradigm because it almost works. Λ, the cosmological constant is there and so is Cold Dark Matter, but that does not mean that they explain the formation of super-massive black holes and large scale structure in the universe. Cosmologists need to wake up to this fact and start exploring alternatives such as the caustic theory outlined here. As usual I will quietly wait while they ignore it and eventually reinvent the idea for themselves. Good luck guys.  🙂