Congratulations to the winners of the FQXi essay contest “Questioning the Foundations”. The results show an impressive and diverse range of ideas about common assumptions that need to be questioned to make progress in foundational physics. This was the fourth contest of its type run by FQXi, and these contests provide a unique opportunity for professional and independent physicists to cross swords in a public forum on subjects like this. I know there will always be criticisms of the results and of the imperfect voting system, but the contest is still a very worthy exercise.

This year there were 272 entries, significantly more than in previous contests, so the top 36 from the community voting who made the final cut should be extra proud of their success, even if they were not among the final winners. This year I narrowly missed out on joining them, but many other good essays did not make it either, so there is no need to feel left out. Taking part and having a chance to air our views on physics is much more important than winning.

One last word of congratulations goes to the Perimeter Institute, since the vast majority of the winners have strong connections with the centre, such as being past or present researchers there. The Perimeter Institute is well known for its research on foundational issues, so their success here is not surprising. They should also be applauded for a culture that seems to encourage taking part, when many professional scientists from other centres are too shy to try.
The winning essay, “The paradigm of kinematics and dynamics must yield to causal structure”, was written by Perimeter Institute theorist Robert Spekkens. The idea of questioning the separation of kinematics and dynamics is very original. I had never thought of it in this context myself, even though I made a similar point in a physics.stackexchange answer about a year ago. Spekkens goes on to link this to causality and the use of posets (partially ordered sets) in models of fundamental physics. This aspect of his essay is a perfect example of what my own essay on causality argues against. In my view the concept of temporal causality (every effect has a cause preceding it in time) is not fundamental at all. It is linked to the arrow of time, which emerges as an aspect of thermodynamics. It is not written into the laws of physics, which as we know them are perfectly symmetric under time reversal (or more precisely CPT inversion). I therefore question why it needs to be used in approaches to understanding the fundamental laws of physics. My point did not go down well with other contestants, and Spekkens was not the only prize winner who advocated the importance of causality as something to preserve while throwing out other assumptions. Of course this just makes me more pleased that I chose to make this point; winning is not what matters.
Aside from that, there is something else about the contest that is of special interest on this blog. By my count, exactly 50 of the 295 authors (17%) who wrote essays have also submitted papers to the viXra archive, while 95 (32%) have submitted papers to the arXiv. This provides a rare opportunity for a comparative statistical analysis of the range of quality of papers submitted to these repositories. Of these authors, 11 can be found on both arXiv and viXra (including myself), leaving 161 authors (54%) who have used neither. The authors who use arXiv are mostly professional physicists, because the endorsement system used by Cornell to filter arXiv submissions makes it difficult, though not impossible, for most independent scientists to get approval; so we can conclude that about a third of the FQXi contest entrants are professionals. However, I am more interested in what can be learnt about viXra authors.
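The head-count arithmetic here is simple inclusion-exclusion. A minimal sketch (the counts come from the tallies above; the underlying author lists themselves are not reproduced):

```python
# Inclusion-exclusion over the author counts quoted in the text.
TOTAL_AUTHORS = 295
VIXRA = 50    # authors with viXra submissions
ARXIV = 95    # authors with arXiv submissions
BOTH = 11     # authors found in both repositories

either = VIXRA + ARXIV - BOTH    # authors using at least one repository
neither = TOTAL_AUTHORS - either

print(f"viXra share:  {VIXRA / TOTAL_AUTHORS:.1%}")    # 16.9%
print(f"arXiv share:  {ARXIV / TOTAL_AUTHORS:.1%}")    # 32.2%
print(f"used neither: {neither} authors ({neither / TOTAL_AUTHORS:.1%})")
```

Subtracting the 11 dual users once stops them being double-counted, which is how the figure of 161 non-users follows from the other three counts.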
I started viXra in 2009 to help scientists who have been excluded from the arXiv, either because they do not know anyone who can act as their endorser or because the arXiv administrators have specifically excluded them. Many people at the time said that viXra would only support crackpots, and this opinion persists in many places. When someone wrote an entry for viXra on Wikipedia, some administrators actively campaigned (unsuccessfully) to have it deleted, calling viXra a “crank magnet” and concluding that it had no scientific value. Last month the wave of censorship even reached Google, which suddenly removed all viXra entries from Google Scholar. Only about 3% of our hits came from there, so it was not such a great loss, but it leaves us with no way of tracking citations of viXra papers, which is a great disservice. This development reflects the opinion of many professional scientists that viXra at best provides no value to science and merely serves to keep crackpots in one safe place. Some are even less charitable and believe that it actively promotes bad research and is harmful to science. Are they right?
When viXra was launched I said that it would also serve as an experiment to see whether arXiv’s moderation policy was excluding some good science. Nobody should be surprised that there is a lot of bad-quality research on viXra, because it does not have any filtering and makes no claim to endorse its individual contents (personally I am of the opinion that even bad research can have value as a creative work and may even contain hidden gems of knowledge). But does it nevertheless host work of high value that would otherwise be lost? A recent paper by Lelk and Devine, submitted to both arXiv and viXra, tried to carry out a quantitative assessment of viXra in comparison to arXiv. It found that 15% of articles on viXra were published in peer-reviewed journals (based on a very small sample). This may sound low, but you should take into account that many independent scientists are less interested in journal publication because they do not need to build a CV. In any case, 15% of 4000 papers is a non-negligible count, if you do think this is a good measure of value.
How else, then, can the value of viXra be assessed if its papers are not being rated via peer review? One answer is to use the ratings of its authors as provided by the FQXi contest. Essays in the contest were rated with marks given by the contest authors themselves. This is not a perfect system by any means: there were essays placed either much lower or much higher in the results than they deserved. Nevertheless, the overall ranking is statistically a good measure of the essays’ quality in the terms demanded by the contest rules, with mostly good papers ending up at the top and bad ones at the bottom. It can therefore be used to collectively analyse the range of ability of the authors using either arXiv or viXra.
Let’s start with arXiv, whose authors have been endorsed and moderated by its administrators. Given such filtering, it is easy to predict that they should do well in the contest. Here is a graph of their placings, counted in ten bins of about 29.5 authors each. The lowest-rated essays are in bin 1 on the left and the highest in bin 10 on the right.
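For concreteness, the binning can be sketched in a few lines of Python. The bin boundaries (295 ranked essays split into ten equal-width bins of 29.5 positions) follow the description above; the example ranks are made up for illustration:

```python
# Assign contest ranks (1 = lowest rated, 295 = highest) to ten
# equal-width bins of 29.5 rank positions each.
N_ESSAYS = 295
N_BINS = 10
BIN_WIDTH = N_ESSAYS / N_BINS    # 29.5 positions per bin

def bin_of(rank: int) -> int:
    """Return the bin number (1..10) for a given contest rank."""
    return min(N_BINS, int((rank - 1) // BIN_WIDTH) + 1)

# Tally a made-up set of author ranks into the ten bins.
example_ranks = [3, 40, 150, 151, 290, 295]
counts = [0] * N_BINS
for rank in example_ranks:
    counts[bin_of(rank) - 1] += 1
print(counts)    # one histogram bar per bin
```

Because 29.5 is not a whole number, adjacent bins alternate between 29 and 30 ranks, but each covers the same share of the field.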
As expected, the majority of arXiv authors made it into the top bins: 87 were ranked in the top half and only 17 in the lower half.
How would you expect the distribution to look for viXra authors? If we are indeed all crackpots, as many people suggest, then the distribution should be the opposite, with most authors doing badly and hardly any making the top bins dominated by the arXiv authors. Here is the actual result.
In fact the distribution is essentially flat within the statistical error bars (not shown), and there are plenty of viXra authors who did well. Indeed, six viXra authors made the final cut.
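To give a feel for the size of those error bars, here is a rough estimate assuming simple Poisson counting statistics (my assumption, not a calculation from the actual essay data): with 50 viXra authors spread over ten bins, a flat distribution predicts about five per bin, with a one-sigma fluctuation of roughly two.

```python
import math

# Poisson estimate of the per-bin scatter for the viXra histogram.
# The 50 authors and 10 bins are from the text; Poisson statistics is assumed.
n_authors = 50
n_bins = 10
expected = n_authors / n_bins        # 5 authors per bin if flat
sigma = math.sqrt(expected)          # ~2.2, the one-sigma fluctuation
print(f"expected per bin: {expected:.1f} +/- {sigma:.1f}")
```

With fluctuations that large relative to the expected count, bins holding anywhere from about three to seven authors are consistent with a flat distribution, which is why small bumps in the histogram carry little significance.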
What should be concluded from this? If someone is identified to you as an author who submits papers to viXra, how should you judge their status? Is it justified to assume that they must be a crank with no useful knowledge because they apparently can’t get their research into arXiv? The answer, according to this analysis, is that you should judge them the same way you would judge a typical author who has submitted an essay to the FQXi contest. They may not all be good, but they could well be of a similar standard to the authors who submit papers to arXiv. I don’t suppose this will change the opinions of our critics, but it should. Google are happy to index FQXi essays on Google Scholar, so why should they refuse to index viXra papers?