Nobel Anticipation

October 7, 2013

This is Nobel week and prize handouts start today with Medicine. Tomorrow is Physics, and Chemistry is on Wednesday. All others are political prizes of no interest here.

The physics prize should be awarded for the Higgs Boson and most likely Higgs himself and Englert will get it, but a third share may go to some other wildcard person or organisation. This will be decided by a vote at TRF but I can also remind you that a similar vote has been running for some time on viXra which has already been used to elect two winners of the much larger Fundamental Physics Prize.

Update 8-Oct-2013:  Congratulations to James E. Rothman, Randy W. Schekman and Thomas C. Südhof who won the Physiology or Medicine prize “for their discoveries of machinery regulating vesicle traffic, a major transport system in our cells”.

Today will be the turn of the physics prize and at this point it may be worth hedging bets by noting that this might not even be given for the Higgs. Each year the Nobel committee has a big pile of worthy nominations and they may just decide that one of them fits better. However the chances for Higgs seem slightly better than even.

Some people are suggesting that the third share for the Higgs could go to CERN. The committee have hinted that an organisation is not ruled out even though they have never used that option before, but I think CERN would be a mistake. This is because many people in the CMS and ATLAS collaborations are not strictly speaking part of CERN. You could argue that CERN played a deserving role, but to then leave out the physicists who actually made the discovery would be a new kind of mistake for the Nobel committee to make. Lumping CMS and ATLAS together as one also seems a bit forced, but they could give it to Higgs, CMS and ATLAS leaving out Englert. That is more like the kind of mistake they have made before. I favour the option of reserving the prize for the theorists. If they start giving it to big organisations then they will also have to look at many other big collaborations in the future and they probably don't want to set a precedent that could radically change the nature of the award.


Super Yang-Mills vs Loop Quantum Gravity

July 19, 2013

Some of you may remember my Xtranormal video from a few years back “A Double Take on the String Wars.” Here it is if you missed it.

If you enjoyed that you will be pleased to know that there is a new sequel called “SYM and LQG: The Same Bloody Thing”

No offense intended to anyone who may accidentally resemble the characters in this video 🙂 This is just for fun.


Naturally Unnatural

July 18, 2013

EPS-HEP

Today is the first day of the EPS-HEP conference in Stockholm, the largest particle physics conference of the year. In recent years such conferences have been awaited with great anticipation because of the prospects of new results in the latest LHC and Tevatron reports but this year things are a little more subdued. We will have to wait another two years before the LHC restarts and we can again follow every talk expecting the unexpected. Perhaps there will be some surprises in a late LHC analysis or something from dark matter searches, but otherwise this is just a good time to look back and ask, what did we learn so far from the LHC?

Nightmare Scenario

The answer is that we have learnt that the mass of the Higgs boson is around 125 GeV and that this lies near the minimum end of the range of masses that would allow the vacuum to be stable even if there are no new particles to help stabilize it. Furthermore, we do indeed find no evidence of other new particles up to the TeV range and the Higgs looks very much like a lone standard model Higgs. Yes, there could still be something like SUSY there if it has managed to hide in an awkward place. There could even be much lighter undiscovered particles such as those hinted at by some dark matter searches, if they are hard to produce or detect at colliders, but the more obvious conclusion is that nothing else is there at these energies.
This is what many people called the “nightmare scenario” because it means that there are no new clues that can tell us about the next model for particle physics. Many theorists had predicted SUSY particles at this energy range in order to remove fine-tuning and have been disappointed by the results. Instead we have seen that the Higgs sector is probably fine tuned at least by some small factor. If no SUSY is found in the next LHC run at 13 TeV then it is fine-tuned at about the 1% level.

Fine-tuning

Many physicists dislike fine-tuning. They feel that the laws of physics should be naturally derived from a simple model that leaves no room for such ambiguity. When superstring theory first hit the street it generated a lot of excitement precisely because it seemed to promise such a model. The heterotic string in particular looked just right for the job because its E8 gauge group is the largest exceptional simple Lie algebra and it is just big enough to contain the standard model gauge group with suitable chiral structures. All they needed to do was figure out which Calabi-Yau manifold could be stabilised as a compactification space to bring the number of dimensions down from 10 to the 4 space and time dimensions of the real world. They would then see quickly how the symmetry gets broken and the standard model emerges at low energy, or so they hoped.

The problem is that there has been evidence for fine-tuning in nature for a long time. One of the earliest known examples was the carbon resonance predicted by Hoyle at precisely the right energy to allow carbon to form in stellar nucleosynthesis. If it was not there the cosmos would not contain enough carbon for us to exist. Hoyle was right and the resonance was soon found in nuclear experiments. Since then we have realized that many other parameters of the standard model are seemingly tuned for life. If the strong force was slightly stronger then two neutrons would form a stable bond to provide a simple form of matter that would replace hydrogen. If the cosmological constant was stronger the universe would have collapsed before we had time to evolve, any weaker and galaxies would not have formed. There are many more examples. If the standard model had fallen out of heterotic string theory as hoped we would have to accept these fine tunings as cosmic coincidences with no possible explanation.

The Multiverse

String theorists did learn how to stabilize the string moduli space but they were disappointed. Instead of finding a unique stable point to which any other compactification would decay, they found that fluxes could stabilize a vast landscape of possible outcomes. There are so many possible stable states for the vacuum that the task of exploring them to find one that fits the standard model seems well beyond our capabilities. Some string theorists saw the bright side of this. It offers the possibility of selection to explain fine-tuning. This is the multiverse theory that says all the possible states in the landscape exist equally and by anthropic arguments we find ourselves in a universe suitable for life simply because there is no intelligent life in the ones that are not fine-tuned.

Others were not so happy. The conclusion seems to be that string theory cannot predict low energy physics at all. This is unacceptable according to the scientific method, or so they say. There must be a better way out, otherwise string theory has failed and should be abandoned in favor of a search for a completely different alternative. But the string theorists carry on. Why is that? Is it because they are aging professors who have invested too much intellectual capital in their theory? Are young theorists doomed to be corrupted into following the evil ways of string theory by their egotistical masters when they would rather be working on something else? I don’t think so. Physicists did not latch onto string theory just because it is full of enchanting mathematics. They study it because they have come to understand the framework of consistent quantum theories and they see that it is the only direction that can unify gravity with the other forces. Despite many years of trying nothing else offers a viable alternative that works (more about LQG is for another post).

Many people hate the very idea of the multiverse. I have heard people say that they cannot accept that such a large space of possibilities exists. What they don’t seem to realize is that standard quantum field theory already offers this large space. The state vector of the universe comes from a Hilbert space of vast complexity. Each field variable becomes an operator on a space of states and the full Hilbert space is the tensor product of all those spaces. Its dimension is the product of the dimensions of all the local spaces and the state vector has a component amplitude for each dimension in this vast multiverse of possibilities. This is not some imaginary concept. It is the mathematical structure that successfully describes the quantum scattering of particles in the standard model. The only significant difference for the multiverse of string theory is that many of the string theory states describe different stable vacua whereas in the standard model the stable vacua are identical under gauge symmetry. If string theory is right then the multiverse is not some hypothetical construct that we cannot access. It is the basis of the Hilbert space spanned by the wavefunction.
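
To make the dimension counting concrete, here is a toy Python sketch (the local dimension and the number of lattice sites are arbitrary small numbers picked for illustration, nothing physical): the dimension of the product space is the product of the local dimensions, which is why even a modest lattice of field variables spans an astronomically large space of states.

```python
import numpy as np

# Toy illustration only: the Hilbert space of a lattice of field variables is
# the tensor product of the local spaces, so its dimension is the product of
# the local dimensions.  The numbers here are arbitrary, not physical.
local_dim = 4        # assumed dimension of one truncated local mode
n_sites = 10         # assumed number of lattice sites

print("dimension of the product space:", local_dim ** n_sites)   # 1048576

# Explicitly: a product state is the Kronecker product of the local states.
local_state = np.zeros(local_dim)
local_state[0] = 1.0
state = local_state
for _ in range(n_sites - 1):
    state = np.kron(state, local_state)
print("state vector length:", state.shape[0])                    # 1048576
```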

Some skeptics say that there is no such fine-tuning. They say that if the parameters were different then life would have formed in other ways. They say that the apparent fine-tuning that sets the mass of the Higgs boson and the small size of the cosmological constant is just an illusion. There may be some other way to look at the standard model which makes it look natural instead of fine-tuned. I think this is misguided. During the inflationary phase of the universe the wavefunction sat in some metastable state where the vacuum energy produced a huge effective cosmological constant. At the end of inflation it fell to a stable vacuum state whose contribution to the cosmological constant is much smaller. Since this is a non-symmetrical state it is hard to see why opposite sign contributions from bosons and fermions would cancel. Unless there is some almost miraculous hidden structure the answer seems to be fine-tuned. The same is true for the Higgs mass and other finely tuned parameters. It is very hard to see how they can be explained naturally if the standard model is uniquely determined.

People can complain as much as they like that the multiverse is unscientific because it does not predict the standard model. Such arguments are worthless if that is how the universe works. The multiverse provides a natural explanation for the unnatural parameters of physics. We do not say that astrophysics is unscientific because it does not give a unique prediction for the size and composition of the Sun. We accept that there is a landscape of possible stellar objects and that we must use observation to determine what our star looks like. The same will be true for the standard model, but that does not stop us understanding the principles that determine the landscape of possibilities or from looking for evidence in other places.

What does it mean for life in the universe?

If the landscape of vacua is real and the world is naturally unnatural it may take many centuries to find convincing evidence, but it will have consequences for life in the universe. If you think that life arises naturally no matter what the parameters of physics are then you would expect life to take a very diverse range of forms. I don't just mean that life on Earth is diverse in the way we are familiar with. I mean that there should be different solutions to the chemistry of life that work on other planets. On Earth there is just one basic chemistry based on DNA and RNA. This also includes the chemistry of metabolism, photosynthesis and other biochemical processes without which life on Earth would be very different. If we find that all higher lifeforms on other planets use these same processes then we can be sure that physics is fine-tuned for life. If any one of them did not work there would be no life. Either this fine-tuning must arise naturally from a multiverse or we would have to accept that the existence of life at all is an almost miraculous coincidence. If on the other hand we find complex lifeforms based on molecules unlike DNA and supported by completely different mechanisms then the argument for fine-tuning in nature is weaker.

Theorist Nima Arkani-Hamed recently suggested that it would be worth building a 100 TeV hadron collider even if the only outcome was to verify that there is no new physics up to that energy. It would show that the Higgs mass is fine-tuned to one part in 10,000 and that would be a revolutionary discovery. If it failed to prove that, it would instead find something less exciting such as SUSY. I don't think this argument will raise the funding required, but if the LHC continues to strengthen the case for fine-tuning we must accept the implications.

Update 29-Jul-2013: I am just updating to add some linkbacks to other bloggers who have followed up on this. Peter Woit takes the usual negative view about what he continues to call “multiverse mania” and summarised my post by saying “Philip Gibbs, … argues that what we are learning from the LHC is that we must give up and embrace the multiverse.” To respond, I don't think that recognising the importance of the multiverse constitutes giving up anything except the failed idea of naturalness (in future I will always couple the word “naturalness” with the phrase “failed idea” because this seems to be a successful debating technique). In particular phenomenologists and experimenters will continue to look for physics beyond the standard model in order to explain dark matter, inflation etc. People working on quantum gravity will continue to explore the same theories and their phenomenology.

Another suggestion we see coming from Woit and his supporters is that the idea that physics may be unnaturally fine-tuned is coming from string theory. This is very much not the case. It is being driven by experiment and ordinary TeV scale phenomenology.  If you think that the string theory landscape has helped convert people to the idea you should check your history to see that the word “landscape” was coined by Lee Smolin in the context of LQG. Anthropic reasoning has also been around since long before string theory. Of course some string theorists do see the string theory landscape as a possible explanation for unnaturalness but the idea certainly exists in a much wider context.

Woit also has some links to interesting documents about naturalness from the hands of Seiberg and Wilczek.

Lubos Motl posted a much more supportive response to this article. He also offers an interesting idea about fermion masses from Delta(27) which I think tends to go against the idea that the standard model comes from a fine-tuned but otherwise unspecial compactification drawn from the string landscape. It is certainly an interesting possibility though and it shows that all philosophical options remain open. Certainly there must be some important explanation for why there are three fermion generations but this is one of several possibilities including the old one that they form a multiplet of SO(10) and the new one from geometric unity.


Debating Open Access

July 8, 2013

Starting in April the UK research councils have pushed ahead with a policy that all research they fund must be published in Open Access journals so that everyone has free online access to the research they funded. The issues involved are actually much more complex than this simple principle, so if you support Open Access you will agree that it is a good thing that they have rashly gone ahead rather than waiting for the inevitable long debate to reach some conclusions.

Now the British Academy has published 9 short articles by academics presenting their opinions. The British Academy supports the Humanities and Social Sciences and this is reflected in the fact that only one of the articles was written by a scientist (a biologist). The principles for publishing in the humanities are not very different from those in the sciences but support for open access from the humanities tends to be less enthusiastic than it is from the sciences.

One of the articles by historian Robin Osborne is particularly negative. He argues that academics should not be obliged to publish in open access because they get their skills and information from a wider range of sources than just those they are paid to look at by the research councils. He also claims that research should not be free to view by the public because they do not have the training to understand it. I think many people will agree with me that these arguments are outrageously misguided, yet he may represent the opinion of many academics in both the humanities and sciences so it is important to have these points debated openly.

The other articles recognize many of the complex issues involved, such as the effect of open access on learned societies who are funded by their publishing empires and the wider questions about how peer-review needs to evolve. Another point made, which is too frequently overlooked, concerns the conflicting motivations behind the open access movement. For many scientists and mathematicians the main purpose of open access is to destroy the business model of private publishing houses who have been making vast profits by charging academics for their own research through their libraries. Just look up articles on the subject by John Baez or Timothy Gowers to see how true this is (the linked posts are just the most recent of many and I mostly agree with their views).

On the other hand the Finch Report, which is behind the open access policy in the UK, makes no mention of this and is only concerned with the need to make research more available to industry and the public. In fact they aim to protect publishers and increase the amount of the research funding spent on publishing in order to increase access. It is not difficult to see why this is. Elsevier is one of the top FTSE listed companies and the UK cannot afford to risk damaging such industry giants (even if they are really based in another country). Despite pressure from an academic boycott Elsevier-Reed have seen a more than 50% increase in their share price in the last year (see here for latest figure). They are also using their influence to push back on the extent of open access, e.g. by arguing strongly against copyright reforms and trying to lengthen embargo times on green open access. See their position statements for the UK policy makers to get an idea of how this works.

Update: Robin Osborne has posted some of his article on the Guardian Network. It would be helpful if there were some rational comments to explain to him why it is wrong to stop the public reading academic research in case they misunderstand it.



Brain Power

June 28, 2013

Supercomputers

In 1984 when big brother was meant to invade our privacy I was a graduate student in Glasgow working on lattice gauge theories. As part of the research towards my doctorate I spent a week on a special mission to Germany where I was allowed into a secret nuclear base to borrow some computer time on a Cray-XMP. It was the world’s fastest supercomputer of the time and there was only one in Europe, so I was very privileged to get some time on it even if it was only a few hours of CPU. Such resources would have been hugely expensive if we had to pay for them. I remember how the Germans jokingly priced the unseen cost in BMWs. The power of that computer was 400 Megaflops and it had a massive 512 Megabyte RAM disk.

The problem I was working on was to look for chiral symmetry breaking in QCD at high temperatures and densities using lattice simulations. In the last few years this has been seen experimentally at the LHC and other heavy ion accelerators but back then it was just theory. To do this I had to look at the linear discretised Dirac equation for quarks on a background of lattice gauge fields. This gave a big Hermitian N×N matrix where N is the number of lattice sites times 3 for the QCD colours. On a small lattice of 16⁴ sites (working in 4D spacetime) this gave matrices of size 196,608 square and I had to find their smallest eigenvalues. The density of this spectrum says whether or not chiral symmetry is broken. Those are pretty big matrices to calculate the eigenvalues of, but they are sparse matrices with only 12 complex non-zero components in each row or column. My collaborators and I had some good tricks for solving the problem. Our papers are still collecting a trickle of citations.
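
For anyone curious what this kind of calculation looks like with today's tools, here is a rough Python sketch. A one-dimensional hopping matrix with an antiperiodic wrap stands in for the real lattice Dirac operator (the operator, lattice size and boundary conditions are all simplifications for illustration, not what we actually used): a sparse Lanczos-type solver picks out the eigenvalues nearest zero, whose density is what signals chiral symmetry breaking.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy stand-in for the lattice Dirac operator: nearest-neighbour hopping on a
# 1D lattice with an antiperiodic wrap (so there is no exact zero mode).
N = 50_000                 # toy size; the 16^4 QCD lattice gives N = 196,608
hop = np.ones(N - 1)
D = sp.diags([hop, -hop], [1, -1], format="csr")   # antisymmetric hopping
D = D + sp.csr_matrix(([-1.0, 1.0], ([N - 1, 0], [0, N - 1])), shape=(N, N))
H = 1j * D                 # i times an antisymmetric matrix is Hermitian

# Shift-invert Lanczos iteration for the few eigenvalues closest to zero.
vals = spla.eigsh(H, k=8, sigma=0.0, which="LM", return_eigenvectors=False)
print(np.sort(vals))
```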


Thirty years later big brother has finally succeeded in monitoring what everyone is doing in the privacy of their own homes and my desktop computer has perhaps 100 times the speed and 30 times the memory of the Cray-XMP, which makes me wonder what I should be doing with it. The title for the fastest supercomputer has recently been taken by China’s Tianhe-2 which has been benchmarked at 33.86 Petaflops and has a theoretical peak performance of 53.9 Petaflops, so it is about 100,000,000 times faster than the Cray. This beats Moore’s law by a factor of 5000 which may be in part due to governments being willing to spend much more money on them. The US, who more commonly hold the record, won’t be beaten for long because the NSA is said to have a secret and very expensive project to build a supercomputer to surpass the Exaflop mark in the next few years. I doubt that any HEP grad students will have a chance to use it.
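
As a back-of-envelope check on those figures (the doubling times below are my own assumptions, and the exact excess over Moore's law depends heavily on which one you pick):

```python
# Rough check of the speed-up quoted above; the doubling times are assumptions.
cray_xmp_flops = 400e6        # 400 Megaflops, 1984
tianhe2_flops = 33.86e15      # 33.86 Petaflops benchmark, 2013
years = 2013 - 1984

speedup = tianhe2_flops / cray_xmp_flops
print(f"raw speed-up: {speedup:.1e}")                     # roughly 10^8

for doubling_time in (1.5, 2.0):                          # years per doubling (assumed)
    moore = 2 ** (years / doubling_time)
    print(f"doubling every {doubling_time} yr: Moore's law gives {moore:.1e}, "
          f"excess factor about {speedup / moore:,.0f}")
```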

This begs the question: Why do they need such powerful computers? In the past they may have been used to simulate nuclear explosions or design stealth fighters. Now they may be needed to decrypt and search all our e-mails for signs of dissenting tendencies, or perhaps there is an even more sinister purpose.

Artificial Intelligence

When computer pioneers such as Von Neumann and Turing conceived the possibility of building electronic computers they thought it would be easy to make computers think like humans even though they had no idea how fast computers would become. This turned out to be much harder than expected. Despite some triumphs such as “superhuman” chess programs which can now crush the best grandmasters (see discussion at World Science Festival) the problem of making computers think like us has seen little progress. One possibility that looked promising back in the 1980s was neural networks. When I left academia some of my colleagues at Edinburgh were switching to neural networks because the theory and the computing problems were very similar to lattice calculations. Today their work has applications in areas such as facial recognition but it has failed to deliver any real AI.

Now a new idea is raising hopes based on the increasing power of computers and scanning technologies. Can we simply map the brain and simulate it on a computer? To get a flavour of what is involved you can watch this TED talk by neuroscientist Sebastian Seung. His aim is to simulate a small part of a mouse brain, which seems quite unambitious but actually it is a huge challenge. If they can get that working then it may be simply a case of scaling up to simulate a complete human brain. If you want to see a project that anyone can join try OpenWorm, which aims to simulate the 7000 neural connections of a nematode worm, the simplest functioning brain in nature (apart from [insert your favourite victim here]).

Brain Scans

An important step will be to scan every relevant detail of the brain, which consists of 100 billion neurons connected by a quadrillion synapses. Incredibly the first step towards this has already been taken. As part of the European Human Brain Project, funded with a billion Euros, scientists have taken the brain of a 65 year old woman who died with a healthy brain, and they have sliced it into 7404 sections each just 20 microns thick (Obama’s Brain Mapping Project which has had a lot of publicity is just a modest scaled down version of the European one). This is nearly good enough detail to get a complete map of the synaptic connections of every neuron in the brain, but that is not clear yet. If it is not quite good enough yet it is at least clear that with an order of magnitude more detail it will be, so it is now only a matter of time before that goal is achieved.

If we can map the brain in such precise detail will we be able to simulate its function? The basic connectivity graph of the neurons forms a sparse matrix much like the ones I used to study chiral symmetry breaking but with about a trillion times as many numbers. An Exaflop supercomputer is about a trillion times more powerful than the one I used back in 1984, so we are nearly there (assuming linear scaling). The repeated firing of neurons in the brain is (to a first approximation) just like multiplying the signal repeatedly by the connection matrix. Stable signals will be represented by eigenvectors of the matrix, so it is plausible that our memories are just the eigenvalue spectrum of the synaptic map and the numerical methods we used in lattice gauge theories will be applicable here too.
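
Here is a crude Python sketch of that picture, with a random sparse matrix standing in for the synaptic map (the sizes and weights are toy numbers, nothing like a real brain): repeated “firing” is just repeated multiplication by the connection matrix, and the signal settles onto the dominant eigenvector, which is exactly the power iteration used in sparse matrix methods.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(1)
n_neurons = 100_000            # toy size, nowhere near 10^11
k = 10                         # synapses per neuron in this toy model

# Random sparse connectivity matrix standing in for the synaptic map.
rows = np.repeat(np.arange(n_neurons), k)
cols = rng.integers(0, n_neurons, size=n_neurons * k)
weights = rng.random(n_neurons * k)
W = sp.csr_matrix((weights, (rows, cols)), shape=(n_neurons, n_neurons))

# Repeated "firing": multiply the signal by the connection matrix over and over.
signal = rng.random(n_neurons)
for _ in range(50):
    signal = W @ signal
    signal /= np.linalg.norm(signal)    # keep the activity bounded

# The signal has converged on the leading eigenvector (power iteration).
print("dominant eigenvalue estimate:", signal @ (W @ signal))
```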

However, the processes of logical reasoning are more than just recalling memories and will surely depend on non-linear effects in the brain just as the real physics of lattice QCD depends on the highly non-linear interactions of the gauge field. Will they be able to simulate those for a human brain on a computer? I have no idea, but the implications of being able to do so are enormous. People are starting to talk seriously about the moral implications as well as what it may bring in capability. I can understand that some agencies may want any such simulations to be conducted under a veil of secrecy if possible. Is this what is driving governments to push supercomputer power so far?

It would be ironic if the first true artificial intelligence is actually a faithful simulation of a human brain. No doubt billionaires will want to fund the copying of their own brains to giant supercomputers at the end of their lives if this becomes possible. But once we have the capability to simulate a brain we will also start to understand how it works, and then we will be able to build intelligent computers whose power of thought goes far beyond our own. Soon it may no longer be a question of if this is possible, just when.


Why I Still Like String Theory

May 16, 2013

There is a new book coming up by Richard Dawid, “String Theory and the Scientific Method”. It has been reviewed by Peter Woit and Lubos Motl who give their expected opposing views. Apparently Woit gets it through a university library subscription. I can’t really review the book because at £60 it is a bit too expensive. Compare this with the recent book by Lee Smolin which I did review after paying £12.80 for it. These two books would have exactly the same set of potential readers but Smolin is just better known, which puts his work into a different category where a different type of publisher accepts it. I don't really understand why any author would choose to allow publication at a £60 price-tag. They will sell very few copies and get very little back in royalties, especially if most universities have free access. Why not publish a print-on-demand version which would be cheaper? Even the Kindle version of this book is £42 but you can easily self publish on Kindle for much less and keep 70% of profits through Amazon.

My view is as predictable as anyone else's since I have previously explained why I like String Theory. Of the four reasons I gave previously the main one is that it solves the problem of how quantum gravity looks in the perturbative limit about a flat space-time with gravitons interacting with matter. This limit really should exist for any theory of quantum gravity and it is the realm that is most like familiar physics, so it is very significant that string theory works there when no other theory does. OK, so perturbative string theory is not fully sewn up but it works better than anything else. The next best thing is supergravity which is just an effective theory for superstrings.

My second like is that String Theory supports a holographic principle that is also required for quantum gravity. This is a much weaker reason because (a) it is in less well-known territory of physics and requires a longer series of assumptions and deductions to get there and (b) it is not so obvious that other theories won't also support the holographic principle.

Reason number three has not fared so well. I said I liked string theory because it would match well with TeV scale SUSY, but the LHC has now all but ruled that out. It is possible that SUSY will appear in LHC run 2 at 13 TeV or later, or that it is just out of reach, but already we know that the Higgs mass in the standard model is fine-tuned. There is no stop or Higgsino where they would be needed to control the Higgs mass. The only question now is how much fine-tuning is there?

Which brings me to my fourth reason for liking string theory. It predicts a multiverse of vacua in the right quantities required to explain anthropic reasoning for an unnatural fine-tuned particle theory. So my last two reasons were really a hedge. The more evidence there is against SUSY, the more evidence there is in favour of the multiverse and the string theory landscape.

Although I don't have the book I know from Woit and Motl that Dawid provides three main reasons for supporting string theory that he gathered from string theorists. None of my four reasons are included. His first reason is “The No Alternatives Argument”: apparently we do string theory because despite its shortcomings there is nothing else that works. As Lee Smolin pointed out over at NEW, there are alternatives. LQG may succeed but to do so it must give a low energy perturbation theory with gravitons or explain why things work differently. Other alternatives mentioned by Smolin are more like toy models but I would add higher spin gravity as another idea that may be more interesting. Really though I don't see these as alternatives. The “alternative theories” view is a social construct that came out of in-fighting between physicists. There is only one right theory of quantum gravity and if more than one idea seems to have good features, without them meeting at a point where they can be shown to be irreconcilable, then the best view is that they might all be telling us something important about the final answer. For those who have not seen it I still stand by my satirical video on this subject:

A Double Take on the String Wars

Dawid’s second reason is “The Unexpected Explanatory Coherence Argument.” This means that the maths of string theory works surprisingly well and matches physical requirements in places where it could easily have fallen down. It is a good argument but I would prefer to cite specific cases such as holography.

The third and final reason Dawid gives is  “The Meta-Inductive Argument”. I think what he is pointing out here is that the standard model succeeded because it was based on consistency arguments such as renormalisability which reduced the possible models to just one basic idea that worked. The same is true for string theory so we are on firm ground. Again I think this is more of a meta-argument and I prefer to cite specific instances of consistency.

The biggest area of contention centres on the role of the multiverse. I see it as a positive reason to like string theory. Woit argues that it cannot be used to make predictions so it is unscientific which means string theory has failed. I think Motl is (like many string theorists) reluctant to accept the multiverse and prefers that the standard model will fall out of string theory in a unique way. I would also have preferred that 15 years ago but I think the evidence is increasingly favouring high levels of fine-tuning so the multiverse is a necessity. We have to accept what appears to be right, not what we prefer. I have been learning to love it.

I don't know how Dawid defines the scientific method. It goes back many centuries and has been refined in different ways by different philosophers. It is clear that if a theory is shown to be inconsistent, either because it has a logical fault or because it makes a prediction that is wrong, then the theory has to be thrown out. What happens if a theory is eventually found to be uniquely consistent with all known observations but its characteristic predictions are all beyond technical means? Is that theory wrong or right? Mach said that the theory of atoms was wrong because we could never observe them. It turned out that we could observe them, but what if we couldn't for practical reasons? It seems to me that there are useful things a philosopher could say about such questions and to be fair to Dawid he has articles freely available online that address this question, e.g. here, so even if the book is out-of-reach there is some useful material to look through. Unfortunately my head hits the desk whenever I read the words “structural realism”, my bad.

Update: see also this video interview with Nima Arkani-Hamed for a view I can happily agree with.

 https://www.youtube.com/watch?v=rKvflWg95hs


Book Review: Time Reborn by Lee Smolin

April 24, 2013

Fill in the blank in this sentence:

“The best studied approach to quantum gravity is ___________________ and it appears to allow for a wide range of choices of elementary particles and forces.”

Did you answer “String Theory”? I did, but Lee Smolin thinks the answer is his own alternative theory “Loop Quantum Gravity” (page 98). This is one of many things he says in his new book that I completely disagree with. That’s fine because while theoretical physicists agree rather well on matters of established physics such as general relativity and quantum mechanics, you will be hard pushed to find two with the same philosophical ideas about how to proceed next. Comparing arguments is an important part of looking for a way forward.

Here is another non-technical point I disagree with. In the preface he says that he will “gently introduce the material the lay reader needs” (page xxii). Trust me when I say that although this book is written without equations it is not for the “lay reader” (an awkward term that originally meant non-clergyman). If you are not already familiar with the basic ideas of general relativity, quantum mechanics etc and all the jargon that goes with them, then you will probably not get far into this book. Books like this are really written for physicists who are either working in similar areas or who at least have a basic understanding of the issues involved. Of course if the book were introduced as such it would not be published by Allen Lane. Instead it would be a monograph in one of those obscure vanity series by Wiley or Springer where they run off a few hundred copies and sell them at $150/£150/€150 (same number in any other currency). OK, perhaps I took too many cynicism pills this morning.

The message Smolin wants to get across is that time is “real” and not an “illusion”. Already I am having problems with the language. When people start to talk about whether time is real I hear in my brain the echo of Samuel Johnson’s well quoted retort “I refute it thus!” OK, you can’t kick time but you can kick a clock, and time is real. The real question is “Is time fundamental or emergent?” and Smolin does get round to this more appropriate terminology in the end.

In the preface he tells us what he means when he says that time is real. This includes “The past was real but is no longer real” and “The future does not yet exist and is therefore open” (page xiv). In other words he is taking our common language based intuitive notions of how we understand time and saying that this is fundamentally correct. The problem with this is that when Einstein invented relativity he taught me that my intuitive notions of time are just features of my wetware program that evolved to help me get around at a few miles per hour, remembering things from the past so that I could learn to anticipate the future etc. It would be foolish to expect these things to be fundamental in realms where we move close to the speed of light, let alone at the centre of a black hole where density and temperature reach unimaginable extremes. Of course Smolin is not denying the validity of relative time, but he wants me to accept that common notions of the continuous flow of time and causality are fundamental, even though the distinction between past and future is an emergent feature of thermodynamics that is purely statistical and already absent from known fundamental laws.

His case is even harder to buy given that he does accept the popular idea that space is emergent. Smolin has always billed himself as the relativist (unlike those string theorists) who understands that the principles of general relativity must be applied to quantum gravity. How then can he say that space and time need to be treated so differently?

This seems to be an idea that came to him in the last few years. There is no hint of it in a technical article he wrote in 2005 where he makes the case for background independence and argues that both space and time should be equally emergent. This new point of view seems to be a genuine change of mind and I bought the book because I was curious to know how this came about. The preface might have been a good place for him to tell me when and how he changed his mind but there is nothing about it (in fact the preface and introduction are similar and could have been stuck together into one section without any sign of discontinuity between them).

Smolin does however explain why he thinks time is fundamental. The main argument is that he believes the laws of physics have evolved to become fine-tuned, with changes accumulating each time a baby universe is born. This is his old idea that he wrote about at length in another book, “The Life of the Cosmos”. If this theory is to be true he now thinks that time must be fundamentally similar to our intuitive notions of continuously flowing time. I would tend to argue the converse, that time is emergent, so we should not take the cosmological evolution theory too seriously.

I don’t think many physicists follow his evolution theory but the alternatives such as eternal inflation and anthropic landscapes are equally contentious and involve piling about twenty layers of speculation on top of each other without much to support them.  I think this is a great exercise to indulge in but we should not seriously think we have much idea of what can be concluded from it just yet.

Smolin does have some other technical arguments to support his view of time, basically along the lines that the theories that work best so far for quantum gravity use continuous time even when they demonstrate emergent space. I don’t buy this argument either. We still have not solved quantum gravity after all. He also cites lots of long gone philosophers especially Leibniz.

Apart from our views on string theory, time and who such books are aimed at, I want to mention one other issue where I disagree with Smolin. He says that all symmetries and conservation laws are approximate (e.g. pages 117-118). Here he seems to agree with Sean Carroll and even Motl (!) (but see comments). I have explained many times why energy, momentum and other gauge charges are conserved in general relativity in a non-trivial and experimentally confirmed way. Smolin says that “we see from the example of string theory that the more symmetry a theory has, the less its explanatory power” (page 280). He even discusses the preferred reference frame given by the cosmic background radiation and suggests that this is fundamental (page 167). I disagree and in fact I take the opposite (old fashioned) view that all the symmetries we have seen are part of a unified universal symmetry that is huge but hidden and that it is fundamental, exact, non-trivial and really important. Here I seem to be swimming against the direction the tide is now flowing but I will keep on going.

OK, so I disagree with Smolin but I have never met him and there is nothing personal about it. If he ever deigned to talk to an outsider like me I am sure we could have a lively and interesting discussion about it. The book itself covers many points and will be of interest to anyone working on quantum gravity who should be aware of all the different points of view and why people hold them, so I recommend it to them, but probably not to the average lay person living next door.

see also Not Even Wrong for another review, and The Reference Frame for yet another. There is also a review with a long interview in The Independent.


UK Open Access policy launches today

April 1, 2013

The UK Research Councils RCUK have today begun the process of making all UK government-funded publications open access. Details of the scheme can be found here.

Some other countries are looking at similar initiatives or have already implemented them in some subjects (e.g. medicine in the US) but the UK scheme will be watched as a pioneering effort to bring Open Access to all public research.

In fact the system will be phased in over a period of five years with 45% of publications to be open access this year. Both gold and green open access standards are approved. In the case of the gold standard, publishers will be paid up front to make papers open access from the publisher’s website immediately from publication. The budget for this has been set at about £1650 per paper but there is considerable variability depending on the journal. It will be interesting to see if market forces can keep these prices down. Money will be allocated to research institutions who will distribute it around their departments. The figures are set out here.

The system will also accept the option of green open access where a journal simply allows the author to put their own copy of the paper online. Here there is a big catch: the RCUK will accept that the journal can embargo public access for six months or maybe even a year. To my mind this is not real open access at all. Publications should be open access from the moment they are accepted if not before. And there is another catch with this option. I don’t see anything in the RCUK guidelines to ensure that the document is put online by the author or that it will be kept online. For both open access standards it is not clear that there can be any guarantee that papers will be kept online forever. What if a gold standard journal disappears? What if a repository disappears? Under what circumstances can an author withdraw a paper? Perhaps there are answers to these questions somewhere but I don’t see them.

Another set of questions might be asked about how Article Processing Charges will affect the impartiality and standards of journals. You might also want to know whether paying for open access up front will eventually reduce the cost to libraries of paying for subscriptions, or whether they will still always have to pay for access to papers published under the old system, and for papers that are privately funded.

I hope that the answer is that none of this will matter for long because another system of open access will evolve with a new way to do non-profit peer-review without the old journal system at all, but perhaps that’s just a pipe dream.


Fifth FQXi Essay Contest: It From Bit, or Bit From It?

March 26, 2013

The Fifth essay contest from the Foundational Questions Institute is now underway. The topic is about whether information is more fundamental than material objects. The subject is similar to the contest from two years ago but with a different slant. In fact one of the winning essays by Julian Barbour was called “Bit From It”. Perhaps he could resubmit the same one. The topic also matches the FQXi large grant awards for this year on the physics of information. Sadly I have already been told, unsurprisingly, that my grant application fell at the first hurdle but the essay contest provides an alternative (less lucrative) chance to write on this subject. Last year I did not get in the final but that really doesn’t matter. The important thing is to give your ideas an airing and discuss them with others, honestly.

In last year’s FQXi contest 50 essays were submitted by viXra authors. With the number of viXra authors increasing rapidly I hope that we will increase that figure this year. There has been a change in the rules to try to encourage more of FQXi’s own members to take part and improve the voting. Members will automatically get through to the final if they vote for 5 essays and leave comments. Last year there were about 15 FQXi member essays in the competition and if I am not mistaken only two failed to make the final, so it will not affect the placings much, but it should encourage the professional entrants to enter into the discussions and community rating which cannot be a bad thing.

For many of the independent authors who submit their work to viXra, getting feedback on their ideas is very hard. The FQXi contest is one way to get people to comment, so get writing. We have until June to make our entries.

Please note that FQXi is not connected to viXra in any way.


Planck thoughts

March 22, 2013

It’s great to see the Planck cosmic background radiation data released, so what is it telling us about the universe? First off the sky map now looks like this

[Planck CMB sky map]

Planck is the third satellite sent into space to look at the CMB and you can see how the resolution has improved in this picture from Wikipedia

[Comparison of the COBE, WMAP and Planck sky maps]

Like the LHC, Planck is a European experiment. It was launched back in 2009 on an Ariane 5 rocket along with the Herschel Space Observatory. The US through NASA also contributed though.

The Planck data has given us some new measurements of key cosmological parameters. The universe is made up of 69.2±1.0% dark energy, 25.8±0.4% dark matter, and 4.82±0.05% visible matter. The percentage of dark energy increases as the universe expands while the ratio of dark to visible matter stays constant, so these figures are valid only for the present. Contributions to the total energy of the universe also include a small amount of electromagnetic radiation (including the CMB itself) and neutrinos. The proportion of these is small and decreases with time.
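
To see why those percentages only hold for the present, here is a small sketch assuming standard ΛCDM scaling (matter densities dilute as a⁻³ while the dark energy density stays constant; radiation and neutrinos are ignored):

```python
# Sketch of how the energy fractions shift with the scale factor a, assuming
# standard LambdaCDM scaling: matter dilutes as a^-3, dark energy is constant.
omega_de, omega_dm, omega_vis = 0.692, 0.258, 0.0482   # today's fractions (a = 1)

def fractions(a):
    de = omega_de              # constant energy density
    dm = omega_dm / a**3       # dilutes with the expanding volume
    vis = omega_vis / a**3
    total = de + dm + vis
    return de / total, dm / total, vis / total

for a in (0.5, 1.0, 2.0):
    de, dm, vis = fractions(a)
    print(f"a = {a}: dark energy {de:.1%}, dark matter {dm:.1%}, "
          f"visible {vis:.1%}, dark/visible ratio {dm/vis:.2f}")
```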

Using the new Planck data the age of the universe is now 13.82 ± 0.05 billion years. WMAP gave an answer of 13.77 ± 0.06 billion years. In the usual spirit of bloggers' combinations we bravely assume no correlation of errors to get a combined figure of 13.80 ± 0.04 billion years, so we now know the age of the universe to within about 40 million years, less than the time since the dinosaurs died out.
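
The combination is just the standard inverse-variance weighting for uncorrelated errors, which a few lines of Python reproduce:

```python
import numpy as np

# Inverse-variance weighting of two measurements, assuming uncorrelated errors.
measurements = [(13.82, 0.05),   # Planck age of the universe (Gyr)
                (13.77, 0.06)]   # WMAP

values = np.array([v for v, _ in measurements])
weights = np.array([1 / s**2 for _, s in measurements])

combined = np.sum(weights * values) / np.sum(weights)
error = 1 / np.sqrt(np.sum(weights))
print(f"combined: {combined:.2f} +/- {error:.2f} Gyr")   # about 13.80 +/- 0.04
```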

The most important plot that the Planck analysis produced is the multipole analysis of the background anisotropy shown in this graph

[Planck CMB angular power spectrum]

This is like a Fourier analysis done on the surface of a sphere, and it is believed that the spectrum comes from quantum fluctuations during the inflationary phase of the big bang. The points follow the predicted curve almost perfectly and certainly within the expected range of cosmic variance given by the grey bounds. A similar plot was produced before by WMAP but Planck has been able to extend it to higher frequencies because of its superior angular resolution.

However, there are some anomalies at the low-frequency end that the analysis team have said are in the range of 2.5 to 3 sigma significance depending on the estimator used. In a particle physics experiment this would not be much, but there is no look-elsewhere effect to speak of here, and these are not statistical errors that will get better with more data. This is essentially the final result. Is it something to get excited about?

To answer that it is important to understand a little of how the multipole analysis works. The first term in a multipole analysis is the monopole which is just the average value of the radiation. For the CMB this is determined by the temperature and is not shown in this plot. The next multipole is the dipole. This is determined by our motion relative to the local preferred reference frame of the CMB so it is specified by three numbers from the velocity vector. This motion is considered to be a local effect so it is also subtracted off the CMB analysis and not regarded as part of the anisotropy. The first component that does appear is the quadrupole, which is the first point on the plot. The quadrupole is determined by 5 numbers so it is shown as an average and a standard deviation. As you can see it is significantly lower than expected. This was known to be the case already after WMAP but it is good to see it confirmed. This contributes to the 3 sigma anomaly but on its own it is more like a one sigma effect, so nothing too dramatic.

In general there is a multipole for every whole number l starting with l=0 for the monopole, l=1 for the dipole, l=2 for the quadrupole. This number l is labelled along the x-axis of the plot. It does not stop there of course. We have an octupole for l=3, a hexadecapole for l=4, a dotriacontapole for l=5, a tetrahexacontapole for l=6, an octacosahectapole for l=7 etc. It goes up to l=2500 in this plot. Sadly I can't write the name for that point. Each multipole is described by 2l+1 numbers. If you are familiar with spin you will recognise this as the number of components that describe a particle of spin l; it's the same thing.
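
As a toy illustration of the counting (the amplitudes below are just random numbers, not CMB data): each multipole l carries 2l+1 components a_lm, and the power C_l plotted above is the average over those components.

```python
import numpy as np

rng = np.random.default_rng(2)
lmax = 10

for l in range(2, lmax + 1):                 # start at the quadrupole, l = 2
    a_lm = rng.normal(size=2 * l + 1)        # stand-in amplitudes, not real data
    C_l = np.mean(a_lm**2)                   # average power of the 2l+1 components
    print(f"l = {l:2d}: {2*l + 1:2d} components, C_l = {C_l:.3f}")
```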

If you look carefully at the low-l end of the plot you will notice that the even-numbered points are low while the odd-numbered ones are high. This is the case up to l=8. In fact above that point they start to merge a range of l values into each point on the graph so this effect could extend further for all I know. Looking back at the WMAP plot of the same thing it seems that they started merging the points from about l=3 so we never saw this before (but some people did because they wrote papers about it). It was hidden, yet it is highly significant and for the Planck data it is responsible for the 3 sigma effect. In fact if they used an estimator that looked at the difference between odd and even points the significance might be higher.

There is another anomaly called the cold spot in the constellation of Eridanus. This is not on the axis of evil but it is terribly far off. Planck has also verified this spot first seen in the WMAP survey which is 70 µK cooler than the average CMB temperature.

What does it all mean? No idea!