2013 has been a great year for viXra. We already have more than 2000 new papers, taking the total to over 6000. Many of them are about physics but other areas are also well covered. The range is bigger and better than ever and could never be summarised, so as the year draws to its end here instead is a snapshot of my own view of fundamental physics in 2013. Many physicists are reluctant to speculate about the big picture and how they see it developing. I think it would be useful if they were more willing to stick their necks out, so this is my contribution. I don’t expect much agreement from anybody, but I hope that it will stimulate some interesting discussion and thoughts. If you don’t like it you can always write your own summary of physics or any other area of science and submit it to viXra.

The discovery of the Higgs boson marks a watershed moment for fundamental physics. The standard model is complete but many mysteries remain. Most notably the following questions are unanswered and appear to require new physics beyond the standard model:

- What is dark matter?
- What was the mechanism of cosmic inflation?
- What mechanism led to the early production of galaxies and structure?
- Why does the strong interaction not break CP?
- What is the mechanism that led to matter dominating over anti-matter?
- What is the correct theory of neutrino mass?
- How can we explain fine-tuning of e.g. the Higgs mass and cosmological constant?
- How are the four forces and matter unified?
- How can gravity be quantised?
- How is information loss avoided for black holes?
- What is the small scale structure of spacetime?
- What is the large scale structure of spacetime?
- How should we explain the existence of the universe?

It is not unreasonable to hope that some further experimental input may provide clues that lead to some new answers. The Large Hadron Collider still has decades of life ahead of it while astronomical observation is entering a golden age with powerful new telescopes peering deep into the cosmos. We should expect direct detection of gravitational waves and perhaps dark matter, or at least indirect clues in the cosmic ray spectrum.

But the time scale for new discoveries is lengthening and the cost is growing. It might be unrealistic to imagine the construction of new colliders on larger scales than the LHC. A theist vs atheist divide increasingly polarises Western politics and science. It has already pushed the centre of big science out of the United States over to Europe. As the jet stream invariably blows weather systems across the Atlantic, so too will come their political ideals, albeit at a slower pace. It is no longer sufficient to justify fundamental science as a pursuit of pure knowledge when the men with the purse strings see it as an attack on their religion. The future of fundamental experimental science is beginning to shift further East, and its hopes will be found in Asia along with the economic prosperity that depends on it. The GDP of China is predicted to surpass that of the US and the EU within 5 years.

But there is another avenue for progress. While experiment is limited by the reality of global economics, theory is limited only by our intellect and imagination. The beasts of mathematical consistency have been harnessed before to pull us through. We are not limited by just what we can see directly, but there are many routes to explore. Without the power of observation the search may be longer, but the constraints imposed by what we have already seen are tight. Already we have strings, loops, twistors and more. There are no dead ends. The paths converge back together taking us along one main highway that will lead eventually to an understanding of how nature works at its deepest levels. Experiment will be needed to show us what solutions nature has chosen, but the equations themselves are already signposted. We just have to learn how to read them and follow their course. I think it will require open minds willing to move away from the voice of their intuition, but the answer will be built on what has come before.

Thirteen years ago at the turn of the millennium I thought it was a good time to make some predictions about how theoretical physics would develop. I accept the mainstream views of physicists but have unique ideas of how the pieces of the jigsaw fit together to form the big picture. My millennium notes reflected this. Since then much new work has been done and some of my original ideas have been explored by others, especially permutation symmetry of spacetime events (event symmetry), the mathematical theory of theories, and multiple quantisation through category theory. I now have a clearer idea about how I think these pieces fit in. On the other hand, my idea at the time of a unique discrete and natural structure underlying physics has collapsed. Naturalness has failed in both theory and experiment and is now replaced by a multiverse view which explains the fine-tuning of the laws of the universe. I have adapted and changed my view in the face of this experimental result. Others have refused to.

Every theorist working on fundamental physics has a set of ideas or principles that guides their work and each one is different. I do not suppose that I have a gift of insight that allows me to see possibilities that others miss. It is more likely that the whole thing is a delusion, but perhaps there are some ideas that could be right. In any case I believe that open speculation is an important part of theoretical research and even if it is all wrong it may help others to crystallise their own opposing views more clearly. For me this is just a way to record my current thinking so that I can look back later and see how it succeeded or changed.

The purpose of this article then is to give my own views on a number of theoretical ideas that relate to the questions I listed. The style will be pedagogical without detailed analysis, mainly because such details are not known. I will also be short on references; after all, nobody is going to cite this. Here then are my views.

## Causality

Causality has been discussed by philosophers since ancient times and many different types of causality have been described. In terms of modern physics there are only two types of causality to worry about. Temporal causality is the idea that effects are due to prior causes, i.e. all phenomena are caused by things that happened earlier. Ontological causality is about explaining things in terms of simpler principles. This is also known as reductionism. It does not involve time and it is completely independent of temporal causality. What I want to talk about here is temporal causality.

Temporal causality is a very real aspect of nature and it is important in most of science. Good scientists know that it is important not to confuse correlation with causation. Proper studies of cause and effect must always use a control to eliminate this easy mistake. Many physicists, cosmologists and philosophers think that temporal causality is also important when studying the cosmological origins of the universe. They talk of the evolving cosmos, eternal inflation, or numerous models of pre-big-bang physics or cyclic cosmologies. All of these ideas are driven by thinking in terms of temporal causality. In quantum gravity we find Causal Sets and Causal Dynamical Triangulations, more ideas that try to build in temporal causality at a fundamental level. All of them are misguided.

The problem is that we already understand that temporal causality is linked firmly to the thermodynamic arrow of time. This is a feature of the second law of thermodynamics, and thermodynamics is a statistical theory that emerges at macroscopic scales from the interactions of many particles. The fundamental laws themselves can be time reversed (in combination with charge conjugation and parity reversal, to be exact). Physical law should not be thought of in terms of a set of initial conditions and dynamical equations that determine evolution forward in time. It is really a sum over all possible histories between past and future boundary states. The fundamental laws of physics are time symmetric and temporal causality is emergent. The origin of time’s arrow can be traced back to the influence of the big bang singularity where complete symmetry dictated low entropy.

The situation is even more desperate if you are working on quantum gravity or cosmological origins. In quantum gravity space and time should also be emergent, then the very description of temporal causality ceases to make sense because there is no time to express it in terms of. In cosmology we should not think of explaining the universe in terms of what caused the big bang or what came before. Time itself begins and ends at spacetime singularities.

## Symmetry

When I was a student around 1980 symmetry was a big thing in physics. The twentieth century started with the realisation that spacetime symmetry was the key to understanding gravity. As it progressed gauge symmetry appeared to eventually explain the other forces. The message was that if you knew the symmetry group of the universe and its action then you knew everything. Yang-Mills theory only settled the bosonic sector but with supersymmetry even the fermionic side would follow, perhaps uniquely.

It was not to last. When superstring theory replaced supergravity the pendulum began its swing back taking away symmetry as a fundamental principle. It was not that superstring theory did not use symmetry, it had the old gauge symmetries, supersymmetries, new infinite dimensional symmetries, dualities, mirror symmetry and more, but there did not seem to be a unifying symmetry principle from which it could be derived. There was even an argument called Witten’s Puzzle based on topology change that seemed to rule out a universal symmetry. The spacetime diffeomorphism group is different for each topology so how could there be a bigger symmetry independent of the solution?

The campaign against symmetry strengthened as the new millennium began. Now we are told to regard gauge symmetry as a mere redundancy introduced to make quantum field theory appear local. Instead we need to embrace a more fundamental formalism based on the amplituhedron where gauge symmetry has no presence.

While I embrace the progress in understanding that string theory and the new scattering amplitude breakthroughs are bringing, I do not accept the point of view that symmetry has lost its role as a fundamental principle. In the 1990s I proposed a solution to Witten’s puzzle that sees the universal symmetry for spacetime as permutation symmetry of spacetime events. This can be enlarged to large-N matrix groups to include gauge theories. In this view spacetime is emergent like the dynamics of a soap bubble formed from intermolecular interaction. The permutation symmetry of spacetime is also identified with the permutation symmetry of identical particles or instantons or particle states.

My idea was not widely accepted even when, shortly afterwards, matrix models for M-theory were proposed that embodied the principle of event symmetry exactly as I had envisioned. Later the same idea was reinvented in a different form for quantum graphity, with permutation symmetry over points in space in random graph models, but still the fundamental idea is not widely recognised.

While the amplituhedron removes the usual gauge symmetry it introduces new dual conformal symmetries described by Yangian algebras. These are quantum symmetries unseen in the classical Super-Yang-Mills theory, but they combine permutation symmetry over states with spacetime symmetries in the same way as event symmetry. In my opinion different dual descriptions of quantum field theories are just different solutions to a single pregeometric theory with a huge and pervasive universal symmetry. The different solutions preserve different sectors of this symmetry. When we see different symmetries in different dual theories we should not conclude that symmetry is less fundamental. Instead we should look for the greater symmetry that unifies them.

After moving from permutation symmetry to matrix symmetries I took one further step. I developed algebraic symmetries in the form of necklace Lie algebras with a stringy feel to them. These have not yet been connected to the mainstream developments but I suspect that these symmetries will be what is required to generalise the Yangian symmetries to a string theory version of the amplituhedron. Time will tell if I am right.

## Cosmology

We know so much about cosmology, yet so little. The cosmic horizon limits our view to an observable universe that seems vast but which may be a tiny part of the whole. The heat of the big bang draws an opaque veil over the first few hundred thousand years of the universe. Most of the matter around us is dark and hidden. Yet within the region we see the ΛCDM standard model accounts well enough for the formation of galaxies and stars. Beyond the horizon we can reasonably assume that the universe continues the same for many more billions of light years, and the early big bang back to the first few minutes or even seconds seems to be understood.

Cosmologists are conservative people. Radical changes in thinking such as dark matter, dark energy, inflation and even the big bang itself were only widely accepted after observation forced the conclusion, even though evidence had built up over decades in some cases. Even now many happily assume that the universe extends to infinity looking the same as it does around here, that the big bang is a unique first event in the universe, that space-time has always been roughly smooth, that the big bang started hot, and that inflation was driven by scalar fields. These are assumptions that I question, and there may be other assumptions that should be questioned. These are not radical ideas. They do not contradict any observation, they just contradict the dogma that too many cosmologists live by.

The theory of cosmic inflation was one of the greatest leaps of imagination that has advanced cosmology. It solved many mysteries of the early universe at a stroke, and its predictions have been beautifully confirmed by observations of the background radiation. Yet the mechanism that drives inflation is not understood.

It is assumed that inflation was driven by a scalar inflaton field. The Higgs field is mostly ruled out (exotic couplings to gravity notwithstanding), but it is easy to imagine that other scalar fields remain to be found. The problem lies with the smooth exit from the inflationary period. A scalar inflaton drives a de Sitter universe. What would coordinate a graceful exit to a nice smooth universe? Nobody knows.

I think the biggest clue is that the standard cosmological model has a preferred rest frame defined by comoving galaxies and the cosmic background radiation. It is not perfect on small scales, but over hundreds of millions of light years it appears rigid and clear. What was the origin of this reference frame? A de Sitter inflationary model does not possess such a frame, yet something must have coordinated its emergence as inflation ended. These ideas simply do not fit together if the standard view of inflation is correct.

In my opinion this tells us that inflation was not driven by a scalar field at all. The Lorentz geometry during the inflationary period must have been spontaneously broken by a vector field with a non-zero component pointing in the time direction. Inflation must have evolved in a systematic and homogeneous way through time while keeping this field's direction constant over large distances, smoothing out any deviations as space expanded. The field may have been a fundamental gauge vector or a composite condensate of fermions with a non-zero vector expectation value in the vacuum. Eventually a phase transition ended the symmetry-breaking phase and Lorentz symmetry was restored to the vacuum, leaving a remnant of the broken symmetry in the matter and radiation that then filled the cosmos.

The required vector field may be one we have not yet found, but some of the required features are possessed by the massive gauge bosons of the weak interaction. The mass term for a vector field can provide an instability favouring timelike vector fields because the signature of the metric reverses sign in the time direction. I am by no means convinced that the standard model cannot explain inflation in this way, but the mechanism could be complicated to model.
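The signature argument can be made explicit with a toy potential of the well-known "bumblebee" type (a schematic illustration only, with hypothetical parameters λ and b, not a detailed model of the mechanism described above):

```latex
% Schematic "bumblebee"-type Lagrangian for a vector field A^mu.
% With metric signature (+,-,-,-):  A_mu A^mu = A_0^2 - |\vec{A}|^2 ,
% so the invariant is positive for timelike fields, negative for spacelike.
\mathcal{L} = -\tfrac{1}{4} F_{\mu\nu}F^{\mu\nu}
              - \tfrac{\lambda}{2}\left(A_\mu A^\mu - b^2\right)^2
% The potential is minimised on the surface  A_mu A^mu = b^2 > 0,
% which can only be satisfied by a timelike field, e.g.
%   <A_0> = b ,  <A_i> = 0 .
% The vacuum expectation value picks out a preferred time direction,
% spontaneously breaking Lorentz symmetry as in the scenario above.
```

The point of the sketch is only that the relative sign between time and space components in the metric is what allows a mass-type term to favour a timelike condensate.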

Another great mystery of cosmology is the early formation of galaxies. As ever more powerful telescopes have penetrated back towards times when the first galaxies were forming, cosmologists have been surprised to find active galaxies rapidly producing stars, apparently with supermassive black holes ready-formed at their cores. This contradicts the predictions of the cold dark matter model according to which the stars and black holes should have formed later and more slowly.

The conventional theory of structure formation is very Newtonian in outlook. After baryogenesis the cosmos was full of gas with small density fluctuations left over from inflation. As radiation decoupled, these anomalies caused the gas and dark matter to gently coalesce under their own weight into clumps that formed galaxies. This would be fine except for the observation of supermassive black holes in the early universe. How did they form?

I think that the formation of these black holes was driven by large scale gravitational waves left over from inflation rather than density fluctuations. As the universe slowed its inflation there would be parts that slowed a little sooner and others a little later. Such small differences would have been amplified by the inflation, leaving a less than perfectly smooth universe for matter to form in. As the dark matter followed geodesics through these waves in spacetime it would be focused, just as light falling on the bottom of a swimming pool is focused by surface waves into intricate patterns. At the caustics the dark matter would come together at high speed to be compressed in structures along lines and surfaces. Large black holes would form at the sharpest focal points and along strands defined by the caustics. The stars and remaining gas would then gather around the black holes, pulled in by their gravitation to form the galaxies. As the universe expanded the gravitational waves would fade, leaving the structure of galactic clusters to mark where they had been.

The greatest question of cosmology asks how the universe is structured on large scales beyond the cosmic horizon. We know that dark energy is making the expansion of the universe accelerate so it will endure for eternity, but we do not know if it extends to infinity across space. Cosmologists like to assume that space is homogeneous on large scales, partly because it makes cosmology simpler and partly because homogeneity is consistent with observation within the observable universe. If this is assumed then the question of whether space is finite or infinite depends mainly on the local curvature. If the curvature is positive then the universe is finite. If it is zero or negative the universe is infinite unless it has an unusual topology formed by tessellating polyhedrons larger than the observable universe. Unfortunately observation fails to tell us the sign of the curvature. It is near zero but we can’t tell which side of zero it lies.
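The standard relation behind this statement is the Friedmann equation, which ties the sign of the spatial curvature k to the total density parameter Ω:

```latex
% Friedmann equation rearranged to isolate the curvature term:
\frac{k c^2}{a^2 H^2} = \Omega - 1 ,
\qquad
\Omega \equiv \frac{8\pi G \rho_{\mathrm{tot}}}{3 H^2} .
% Omega > 1  =>  k > 0  (positively curved, finite volume)
% Omega = 1  =>  k = 0  (flat; infinite unless the topology is non-trivial)
% Omega < 1  =>  k < 0  (negatively curved; likewise infinite)
```

Observations pin Ω to unity within roughly a percent, which is exactly why the sign of the curvature, and hence the finite/infinite question, remains undetermined.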

This then is not a question I can answer, but the holographic principle in its strongest form contradicts a finite universe. An infinite homogeneous universe also requires an explanation of how the big bang can be coordinated across an infinite volume. This leaves only more complex solutions in which the universe is not homogeneous. How can we know if we cannot see past the cosmic horizon? There are many inhomogeneous models such as the bubble universes of eternal inflation, but I think that there is too much reliance on temporal causality in that theory and I discount it. My preference is for a white hole model of the big bang, where matter density decreases slowly with distance from a centre and the big bang singularity itself is local and finite, with an outer universe stretching back further. Because expansion is accelerating we will never see much beyond the universe that is currently visible, so we may never know its true shape.

## Naturalness

It has long been suggested that the laws of physics are fine-tuned to allow the emergence of intelligent life. This strange illusion of intelligent design could be explained in atheistic terms if in some sense many different universes existed with different laws of physics. The observation that the laws of physics suit us would then be no different in principle from the observation that our planet suits us.

Despite the elegance of such anthropic reasoning, many physicists including myself resisted it for a long time. Some still resist it. The problem is that the laws of physics show some signs of being unique according to theories of unification. In 2001 I, like many others, thought that superstring theory and its overarching M-theory demonstrated this uniqueness quite persuasively. If there was only one possible unified theory with no free parameters, how could an anthropic principle be viable?

At that time I preferred to think that fine-tuning was an illusion. The universe would settle into the lowest energy stable vacuum of M-theory and this would describe the laws of physics with no room for choice. The ability of the universe to support life would then just be the result of sufficient complexity. The apparent fine-tuning would be an illusion resulting from the fact that we have seen only one form of intelligent life so far. I imagined distant worlds populated by other forms of intelligence in very different environments from ours, based on other solutions to evolution making use of different chemical combinations and physical processes. I scoffed at science fiction stories where the alien life looked similar to us except for different skin textures or different numbers of appendages.

My opinion started to change when I learnt that string theory actually has a vast landscape of vacuum solutions, and that they can be stabilised to such an extent that we need not be living at the lowest energy point. This means that the fundamental laws of physics can be unique while different low energy effective theories are realised as solutions. Anthropic reasoning was back on the table.

It is worrying to think that the vacuum is waiting to decay to a lower energy state at any place and moment. If it did so, an expanding sphere of energy would spread out at the speed of light, changing the effective laws of physics and destroying everything in its path. Many times in the billions of years and billions of light years of our past light cone, neutron stars must have collided with immense force and energy. Yet not once has the vacuum been toppled to bring doom upon us. The reason is that the energies at which the vacuum state was forged in the big bang are at the Planck scale, many orders of magnitude beyond anything that can be repeated in even the most violent events of astrophysics. It is the immense range of scales in physics that creates life and then allows it to survive.

The principle of naturalness was spelt out by ‘t Hooft in the 1980s, except he was too smart to call it a principle. Instead he called it a “dogma”. The idea was that the mass of a particle or other physical parameters could only be small if they would be zero given the realisation of some symmetry. The smallness of fermion masses could thus be explained by chiral symmetry, but the smallness of the Higgs mass required supersymmetry. For many of us the dogma was finally put to rest when the Higgs mass was found by the LHC to be unnaturally small without any sign of the accompanying supersymmetric partners. Fine tuning had always been a feature of particle physics but with the Higgs it became starkly apparent.
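The content of the dogma can be illustrated with the standard one-loop estimates (schematic only, with g a generic coupling and Λ a generic cutoff scale): chiral symmetry forces corrections to a fermion mass to be proportional to the mass itself, while nothing similar protects a scalar.

```latex
% Chiral symmetry protects fermion masses: the correction
% vanishes as m_f -> 0, so a small mass stays small.
\delta m_f \;\sim\; \frac{g^2}{16\pi^2}\, m_f \,\ln\frac{\Lambda}{m_f}
% No symmetry protects a scalar mass: the correction grows
% with the square of the cutoff.
\delta m_H^2 \;\sim\; \frac{g^2}{16\pi^2}\, \Lambda^2
% For Lambda near the Planck scale this exceeds the observed
% (125 GeV)^2 by roughly 30 orders of magnitude unless it is
% cancelled, e.g. by supersymmetric partner loops.
```

It is this quadratic sensitivity that made the light Higgs, found without accompanying superpartners, look so unnatural.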

The vacuum would not squander its scope for fine-tuning, limited as it is by the size of the landscape. If there is a cheaper way the typical vacuum will find it, so that enough scope is left to tune nuclear physics and chemistry for the components required by life. Therefore I expect supersymmetry or some similar mechanism to come in at some higher scale to stabilise the Higgs mass and the cosmological constant. It may be a very long time indeed before that can be verified.

Now that I have learnt to accept anthropic reasoning, the multiverse and fine-tuning, I see the world in a very different way. If nature is fine-tuned for life it is plausible that there is only one major route to intelligence in the universe. Despite the plethora of new planets being discovered around distant stars, the Earth appears as a rare jewel among them. Its size and position in the goldilocks zone around a long-lived stable star in a quiet part of a well-behaved galaxy are not typical. Even the moon and the outer gas giants seem to play their role in keeping us safe from natural instabilities. Yet if we were too safe, life would have settled quickly into a stable form that could not evolve to higher functions. Regular cataclysmic events in our history were enough to cause mass extinctions without destroying life altogether, allowing it to develop further and further until higher intelligence emerged. Microbial life may be relatively common on other worlds but we are exquisitely rare. No sign of alien intelligence drifts across time and space from distant worlds.

I now think that where life exists it will be based on DNA and cellular structures much like all life on Earth. It will require water and carbon and to evolve to higher forms it will require all the commonly available elements each of which has its function in our biology or the biology of the plants on which we depend. Photosynthesis may be the unique way in which a stable carbon cycle can complement our need for oxygen. Any intelligent life will be much like us and it will be rare. This I see as the most significant prediction of fine tuning and the multiverse.

## String Theory

String theory was the culmination of twentieth century developments in particle physics leading to ever more unified theories. By 2000 physicists had what appeared to be a unique mother theory capable of including all known particle physics in its spectrum. They just had to find the mechanism that collapsed its higher dimensions down to our familiar four-dimensional spacetime.

Unfortunately it turned out that there were many such mechanisms and no obvious means to figure out which one corresponds to our universe. This leaves string theorists unable to predict anything useful that would confirm their theory. Some people have claimed that this makes the theory unscientific and that physicists should abandon the idea and look for a better alternative. Such people are misguided.

String theory is not just a random set of ideas that people tried. It was the end result of exploring all the logical possibilities for the ways in which particles can work. It is the only solution to the problem of finding a consistent interaction of matter with gravity in the limit of weak fields on flat spacetime. I don’t mean merely that it is the only solution anyone could find; it is the only solution that can work. If you throw it away and start again you will only return to the same answer by the same logic.

What people have failed to appreciate is that quantum gravity acts at energy scales well above those that can be explored in accelerators or even in astronomical observations. Expecting string theory to explain low energy particle physics was like expecting particle physics to explain biology. In principle it can, but to derive biochemistry from the standard model you would need to work out the laws of chemistry and nuclear physics from first principles and then search through the properties of all the possible chemical compounds until you realised that DNA can self-replicate. Without input from experiment this is an impossible program to put into practice. Similarly, we cannot hope to derive the standard model of particle physics from string theory until we understand the physics that controls the energy scales that separate them. There are about 12 orders of magnitude in energy scale that separate chemical reactions from the electroweak scale and about 17 orders of magnitude that separate the electroweak scale from the Planck scale. We have much to learn.

How then can we test string theory? To do so we will need to look beyond particle physics and find some feature of quantum gravity phenomenology. That is not going to be easy because of the scales involved. We can’t reach the Planck energy, but sensitive instruments may be able to probe very small distance scales as small variations of effects over large distances. There is also some hope that a remnant of the initial big bang remains in the form of low frequency radio or gravitational waves. But first string theory must predict something to observe at such scales and this presents another problem.

Despite nearly three decades of intense research, string theorists have not yet found a complete non-perturbative theory of how string theory works. Without it predictions at the Planck scale are not in any better shape than predictions at the electroweak scale.

Normally quantised theories explicitly include the symmetries of the classical theories they quantise. As a theory of quantum gravity, string theory should therefore include the diffeomorphism invariance of spacetime, and it does, but not explicitly. If you look at string theory as a perturbation on a flat spacetime you find gravitons, the quanta of gravitational interactions. This means that the theory must respect the principles of general relativity in small deviations from the flat spacetime, but it is not described in a way that makes the diffeomorphism invariance of general relativity manifest. Why is that?

Part of the answer, coming from non-perturbative results in string theory, is that the theory allows the topology of spacetime to change. Diffeomorphisms on different topologies form different groups, so there is no way that we could see diffeomorphism invariance explicitly in the formulation of the whole theory. The best we could hope for would be to find some group that has every diffeomorphism group as a subgroup and look for invariance under that.

Most string theorists just assume that this argument means that no such symmetry can exist and that string theory is therefore not based on a principle of universal symmetry. I on the other hand have proposed that the universal group must contain the full permutation group on spacetime events. The diffeomorphism group for any topology can then be regarded as a subgroup of this permutation group.

String theorists don’t like this because they see spacetime as smooth and continuous, whereas permutation symmetry would suggest a discrete spacetime. I don’t think these two ideas are incompatible. In fact we should see spacetime as something that does not exist at all in the foundations of string theory. It is emergent. The permutation symmetry on events is really to be identified with the permutation symmetry that applies to particle states in quantum mechanics. A smooth picture of spacetime then emerges from the interactions of these particles, which in string theory are the partons of the strings.

This was an idea I formulated twenty years ago, building symmetries that extend the permutation group first to large-N matrix groups and then to necklace Lie-algebras that describe the creation of string states. The idea was vindicated when matrix string theory was invented shortly after but very few people appreciated the connection.

Since then I have been waiting patiently for someone to vindicate the necklace Lie algebra symmetries as well. In recent years we have seen a new approach to quantum field theory for supersymmetric Yang-Mills which emphasises a dual conformal symmetry rather than the gauge symmetry. This is a symmetry found in the quantum scattering amplitudes rather than the classical limit. The symmetry takes the form of a Yangian symmetry related to the permutations of the states. I find it plausible that this will turn out to be a remnant of necklace Lie algebras in the more complete string theory. There seems to be still some way to go before this new idea, expressed in terms of an amplituhedron, is fully worked out, but I am optimistic that I will be proven right again, even if few people recognise it.

Once this reformulation of string theory is complete we will see string theory in a very different way. Spacetime, causality and even quantum mechanics may be emergent from the formalism. It will be non-perturbative and rigorously defined. The web of dualities connecting string theories and the holographic nature of gravity will be derived exactly from first principles. At least that is what I hope for. In the non-perturbative picture it should be clearer what happens at high energies when spacetime breaks down. We will understand the true nature of the singularities in black holes and the big bang. I cannot promise that these things will be enough to provide predictions that can be observed in real experiments or cosmological surveys, but it would surely improve the chances.

## Loop Quantum Gravity

If you want to quantise a classical system such as a field theory there are a range of methods that can be used. You can try a Hamiltonian approach, or a path integral approach for example. You can change the variables or introduce new ones, or integrate out some degrees of freedom. Gauge fixing can be handled in various ways, as can renormalisation. The answers you get from these different approaches are not quite guaranteed to be equivalent. There are some choices of operator ordering that can affect the answer. However, what we usually find in practice is that there are natural choices imposed by symmetry principles or other requirements of consistency, and the different results you get using different methods are either equivalent or very nearly so, if they lead to a consistent result at all.

What should this tell us about quantum gravity? Quantising the gravitational field is not so easy. It is not renormalisable in the same way that other gauge theories are, yet a number of different methods have produced promising results. Supergravity follows the usual field theory methods while string theory uses a perturbative generalisation derived from the old S-matrix approach. Loop Quantum Gravity makes a change of variables and then follows a Hamiltonian recipe. There are other methods such as Twistor Theory, Non-Commutative Geometry, Dynamical Triangulations, Group Field Theory, Spin Foams, Higher Spin Theories etc. None has met with success in all directions, but each has its own successes in some.

While some of these approaches have always been known to be related, others have been portrayed as rivals. In particular the subject seems to be divided between methods related to string theory and methods related to Loop Quantum Gravity. It has always been my expectation that the two sides will eventually come together, simply because different ways of quantising the same classical system usually do lead to equivalent results. Superficially strings and loops seem like related geometric objects, i.e. one dimensional structures in space tracing out two dimensional worldsheets in spacetime.

String Theorists and Loop Quantum Gravitists alike have scoffed at the suggestion that these are the same thing. They point out that strings pass through each other, unlike loops which form knot states. String theory also works best in ten dimensions while LQG can only be formulated in 4. String theory needs supersymmetry and therefore matter, while LQG tries first to construct a consistent theory of quantum gravity alone. I see these differences very differently from most physicists. I observe that when strings pass through each other they can interact, and the algebraic diagrams that represent this are very similar to the skein relations used to describe the knot theory of LQG. String theory does indeed use the same mathematics of quantum groups to describe its dynamics. If LQG has not been found to require supersymmetry or higher dimensions it may be because the perturbative limit around flat spacetime has not yet been formulated, and that is where the consistency constraints arise. In fact the successes and failures of the two approaches seem complementary. LQG provides clues about the non-perturbative background-independent picture of spacetime that string theorists need.

Methods from Non-Commutative Geometry have been incorporated into string theory and other approaches to quantum gravity for more than twenty years and in the last decade we have seen Twistor Theory applied to string theory. Some people see this convergence as surprising but I regard it as natural and predictable given the nature of the process of quantisation. Twistors have now been applied to scattering theory and to supergravity in 4 dimensions in a series of discoveries that has recently led to the amplituhedron formalism. Although the methods evolved from observations related to supersymmetry and string theory they seem in some ways more akin to the nature of LQG. Twistors were originated by Penrose as an improvement on his original spin-network idea and it is these spin-networks that describe states in LQG.

I think that what has held LQG back is that it separates space and time. This is a natural consequence of the Hamiltonian method. LQG respects diffeomorphism invariance, unlike string theory, but it is really only the spatial part of the symmetry that it uses. Spin networks are three dimensional objects that evolve in time, whereas Twistor Theory tries to extend the network picture to 4 dimensions. People working on LQG have tended to embrace the distinction between space and time in their theory and have made it a feature, claiming that time is philosophically different in nature from space. I don’t find that idea appealing at all. The clear lesson of relativity has always been that they must be treated the same up to a sign.

The amplituhedron makes manifest the dual conformal symmetry of Yang-Mills theory in the form of an infinite dimensional Yangian symmetry. These algebras are familiar from the theory of integrable systems, where they were deformed to bring in quantum groups. In fact the scattering amplitude theory that applies to the planar limit of Yang-Mills does not use this deformation, but here lies the opportunity to unite the theory with Loop Quantum Gravity, which does use the deformation.

Of course LQG is a theory of gravity, so if it is related to anything it would be supergravity or string theory, not Yang-Mills. In the most recent developments the scattering amplitude methods have been extended to supergravity by making use of the observation that gravity can be regarded as formally the square of Yang-Mills. Progress has thus been made on formulating 4D supergravity using twistors, but so far without this deformation. A surprise observation is that supergravity in this picture requires a twistor string theory to make it complete. If the Yangian deformation could be applied to these strings then they could form knot states just like the loops in LQG. I can’t say if it will pan out that way, but I can say that it would make perfect sense if it did. It would mean that LQG and string theory would finally come together, and methods that have grown out of LQG such as spin foams might be applied to string theory.

The remaining mystery would be why this correspondence works only in 4 spacetime dimensions. Both twistors and LQG use related features of the symmetry of 4 dimensional spacetime that make it not obvious how to generalise to higher dimensions, while string theory and supergravity have higher forms that work up to 11 dimensions. Twistor theory is related to conformal field theory, whose symmetry is that of a geometry 2 dimensions higher. E.g. the 4 dimensional conformal group is the same as a 6 dimensional spin group. By a unique coincidence the 6 dimensional symmetries are isomorphic to unitary or special linear groups over 4 complex variables, so these groups have the same representations. In particular the fundamental 4 dimensional representation of the unitary group is the same as the Weyl spinor representation in six real dimensions. This is where twistors come from, so a twistor is just a Weyl spinor. Such spinors exist in any even number of dimensions but without the special properties found in this particular case. It will be interesting to see how the framework extends to higher dimensions using these structures.
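The group-theoretic coincidence referred to here can be written out explicitly. In each signature the 6 dimensional spin group happens to be isomorphic to a group acting on 4 complex (or quaternionic) variables:

```latex
\mathrm{Spin}(6) \;\cong\; SU(4), \qquad
\mathrm{Spin}(4,2) \;\cong\; SU(2,2), \qquad
\mathrm{Spin}(5,1) \;\cong\; SL(2,\mathbb{H})
```

Here Spin(4,2) is the double cover of the conformal group of 4 dimensional Minkowski space, and the fundamental 4 dimensional representation of SU(2,2) is the twistor representation, which is the same thing as a Weyl spinor of the 6 dimensional group. These accidental isomorphisms exist only in low dimensions, which is why the construction does not generalise straightforwardly.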

## Quantum Mechanics

Physicists often chant that quantum mechanics is not understood. To paraphrase some common claims: if you think you understand quantum mechanics, you are an idiot. If you investigate what it is about quantum mechanics that is so irksome you find that there are several features that can be listed as potentially problematical: indeterminacy, non-locality, contextuality, observers, wave-particle duality and collapse. I am not going to go through these individually; instead I will just declare myself a quantum idiot if that is what understanding implies. All these features of quantum mechanics are experimentally verified and there are strong arguments that they cannot be easily circumvented using hidden variables. If you take a multiverse view there are no conceptual problems with observers or wavefunction collapse. People only have problems with these things because they are not what we observe at macroscopic scales and our brains are programmed to see the world classically. This can be overcome through logic and mathematical understanding in the same way as the principles of relativity.

I am not alone in thinking that these things are not to be worried about, but there are some other features of quantum mechanics of which I take a more extraordinary view. One aspect of quantum mechanics that gives some cause for concern is its linearity. Theories that are linear are usually too simple to be interesting. Everything decouples into modes that act independently in a simple harmonic way. In quantum mechanics we can in principle diagonalise the Hamiltonian to reduce the whole universe to a sum over energy eigenstates. Can everything we experience be encoded in that one dimensional spectrum?

In quantum field theory this is not a problem, but there we have spacetime as a frame of reference relative to which we can define a privileged basis for the Hilbert space of states. It is no longer just the energy spectrum that counts. But what if spacetime is emergent? What then do we choose our Hilbert basis relative to? The symmetry of the Hilbert space must be broken for this emergence to work, but linear systems do not break their symmetries. I am not talking about the classical symmetries of the type that gets broken by the Higgs mechanism. I mean the quantum symmetries in phase space.

Suppose we accept that string theory describes the underlying laws of physics, even if we don’t know which vacuum solution the universe selects. Doesn’t string theory also embody the linearity of quantum mechanics? It does so long as you already accept a background spacetime, but in string theory the background can be changed by dualities. We don’t know how to describe the framework in which these dualities are manifest but I think there is reason to suspect that quantum mechanics is different in that space, and it may not be linear.

The distinction between classical and quantum is not as clear-cut as most physicists like to believe. In perturbative string theory the Feynman diagrams are given by string worldsheets which can branch when particles interact. Is this the classical description or the quantum description? The difference between classical and quantum is that the worldsheets extremise their area in the classical solutions but follow any history in the quantum theory. But then we already have multi-particle states and interactions in the classical description. This is very different from quantum field theory.
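The classical/quantum distinction here can be made concrete with the area action. In Nambu-Goto form the worldsheet action is proportional to the induced area:

```latex
S_{\mathrm{NG}} \;=\; -T \int d^2\sigma \, \sqrt{-\det\left( \partial_a X^\mu \,\partial_b X_\mu \right)}
```

where $T$ is the string tension and $X^\mu(\sigma)$ embeds the worldsheet in spacetime. A classical solution extremises $S_{\mathrm{NG}}$; the quantum theory instead sums $e^{iS_{\mathrm{NG}}}$ over all worldsheet histories, including the branched ones that already describe interactions.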

Stepping back though we might notice that quantum field theory also has some schizophrenic characteristics. The Dirac equation is treated as classical with non-linear interactions even though it is a relativistic Schrödinger equation, with quantum features such as spin already built-in. After you second quantise you get a sum over all possible Feynman graphs much like the quantum path integral sum over field histories, but in this comparison the Feynman diagrams act as classical configurations. What is this telling us?

My answer is that the first and second quantisation are the first in a sequence of multiple iterated quantisations. Each iteration generates new symmetries and dimensions. For this to work the quantised layers must be non-linear, just as the interaction between electrons and photons is non-linear in the so-called first-quantised field theory. The idea of multiple quantisations goes back many years and did not originate with me, but I have a unique view of its role in string theory based on my work with necklace Lie algebras, which can be constructed in an iterated procedure where one necklace dimension is added at each step.

Physicists working on scattering amplitudes are at last beginning to see that the symmetries in nature are not just those of the classical world. There are dual-conformal symmetries that are completed only in the quantum description. These seem to merge with the permutation symmetries of the particle statistics. The picture is much more complex than the one painted by the traditional formulations of quantum field theory.

What then is quantisation? When a Fock space is constructed the process is formally like an exponentiation. In the categorical picture we start to see an origin of what quantisation is, because exponentiation generalises to the process of constructing all functions between sets, or all functors between categories, and so on to higher n-categories. Category theory seems to encapsulate the natural processes of abstraction in mathematics. This I think is what lies at the base of quantisation. Variables become functional operators, objects become morphisms. Quantisation is a particular form of categorification, one we don’t yet understand. Iterating this process constructs higher categories until the unlimited process itself forms an infinite omega-category that describes all natural processes in mathematics and in our multiverse.
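The sense in which Fock space construction is an exponentiation can be seen in the bosonic case, where the Fock space is the symmetric algebra over the one-particle space $H$:

```latex
\mathcal{F}(H) \;=\; \bigoplus_{n=0}^{\infty} \mathrm{Sym}^n H
\;\cong\; \bigoplus_{n=0}^{\infty} \frac{H^{\otimes n}}{S_n}
```

Dividing the $n$-th tensor power by the permutation group $S_n$ mirrors the $n!$ in the Taylor series of $e^x$, which is why the symmetric algebra is sometimes written $e^H$. It is exactly this exponential-like construction that generalisations such as "all functors between categories" extend.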

Crazy ideas? Ill-formed? Yes, but I am just saying – that is the way I see it.

## Black Hole Information

We have seen that quantum gravity can be partially understood by using the constraint that it needs to make sense in the limit of small perturbations about flat spacetime. This led us to strings and supersymmetry. There is another domain of thought experiments that can tell us a great deal about how quantum gravity should work, and it concerns what happens when information falls into a black hole. The train of arguments is well known so I will not repeat it here. The first conclusion is that the entropy of a black hole is given by its horizon area in Planck units, and the entropy in any other volume is less than the same Bekenstein bound taken from the surrounding surface. This leads to the holographic principle that everything that can be known about the state inside the volume can be determined from a state on its surface. To explain how the inside of a black hole can be determined from its event horizon or outside, we use a black hole correspondence principle which uses the fact that we cannot observe both the inside and, at a later time, the outside. Although the reasoning that leads to these conclusions is long and unsupported by any observation, it is in my opinion quite robust and is backed up by theoretical models such as AdS/CFT duality.
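For reference, the entropy statements above are the Bekenstein-Hawking formula and the Bekenstein bound:

```latex
S_{\mathrm{BH}} \;=\; \frac{k_B c^3 A}{4 G \hbar} \;=\; \frac{A}{4} \quad \text{(horizon area } A \text{ in Planck units)},
\qquad
S \;\le\; \frac{2\pi k_B R E}{\hbar c}
```

where the bound applies to a system of energy $E$ fitting inside a sphere of radius $R$. The striking point is that the maximum entropy scales with area rather than volume, which is what motivates holography.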

There are some further conclusions that I would draw from black hole information that many physicists might disagree with. If the information in a volume is limited by the surrounding surface then it means we cannot be living in a closed universe with a finite volume like the surface of a 4-sphere. If we did you could extend the boundary until it shrank back to zero and conclude that there is no information in the universe. Some physicists prefer to think that the Bekenstein bound should be modified on large scales so that this conclusion cannot be drawn but I think the holographic principle holds perfectly to all scales and the universe must be infinite or finite with a different topology.

Recently there has been a claim that the holographic principle leads to the conclusion that the event horizon must be a firewall through which nothing can pass. This conclusion is based on the assumption that information inside a black hole is replicated outside through entanglement. If you drop two particles with fully entangled spin states into a black hole you cannot also have a particle outside that is entangled with them; that would not make sense. I think the information is replicated on the horizon in a different way.

It is my view that the apparent information in the bulk volume field variables must be mostly redundant and that this implies a large symmetry where the degrees of symmetry match the degrees of freedom in the fields or strings. Since there are fundamental fermions it must be a supersymmetry. I call a symmetry of this sort a complete symmetry. We know that when there is gauge symmetry there are corresponding charges that can be determined on a boundary by measuring the flux of the gauge field. In my opinion a generalisation of this using a complete symmetry accounts for holography. I don’t think that this complete symmetry is a classical symmetry. It can only be known properly in a full quantum theory, much as dual conformal gauge symmetry is a quantum symmetry.
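The statement that gauge charges can be determined on a boundary is just Gauss’s law; schematically:

```latex
Q \;=\; \oint_{\partial V} \vec{E} \cdot d\vec{A}
\qquad \text{or covariantly} \qquad
Q \;=\; \oint_{\partial V} \star F
```

The charge enclosed in a volume $V$ is fixed entirely by the field flux through its boundary $\partial V$. The conjecture sketched above is that a large enough "complete" symmetry would let every bulk degree of freedom be read off from surface integrals of this kind.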

Some physicists assume that if you could observe Hawking radiation you would be looking at information coming from the event horizon. It is not often noticed that the radiation is thermal, so if you observe it you cannot determine where it originated. There is no detail you could focus on to measure the distance of the source. It makes more sense to me to think of this radiation as emanating from a backward singularity inside the black hole. This means that a black hole, once formed, is also a white hole. This may seem odd but it is really just an extension of the black hole correspondence principle. I also agree with those who say that as black holes shrink they become indistinguishable from heavy particles that decay by emitting radiation.

## Ontology

Every theorist working on fundamental physics needs some background philosophy to guide their work. They may think that causality and time are fundamental or that they are emergent, for example. They may have the idea that deeper laws of physics are simpler. They may like reductionist principles or instead prefer a more anthropomorphic world view. Perhaps they think the laws of physics must be discrete, combinatorial and finite. They may think that reality and mathematics are the same thing, or that reality is a computer simulation or that it is in the mind of God. These things affect the theorist’s outlook and influence the kind of theories they look at. They may be metaphysical and sometimes completely untestable in any real sense, but they are still important to the way we explore and understand the laws of nature.

In that spirit I have formed my own elaborate ontology as my way of understanding existence and the way I expect the laws of nature to work out. It is not complete or finished and it is not a scientific theory in the usual sense, but I find it a useful guide for where to look and what to expect from scientific theories. Someone else may take a completely different view that appears contradictory but may ultimately come back to the same physical conclusions. That I think is just the way philosophy works.

In my ontology it is universality that counts most. I do not assume that the most fundamental laws of physics should be simple or beautiful or discrete or finite. What really counts is universality, but that is a difficult concept that requires some explanation.

It is important not to be misled by the way we think. Our mind is a computer running a program that models space, time and causality in a way that helps us live our lives, but that does not mean that these things are important in the fundamental laws of physics. Our intuition can easily mislead our way of thinking. It is hard to understand that time and space are interlinked and to some extent interchangeable, but we now know from the theory of relativity that this is the case. Our minds understand causality and free will, the flow of time and the difference between past and future, but we must not make the mistake of assuming that these things are also important for understanding the universe. We like determinacy, predictability and reductionism but we can’t assume that the universe shares our likes. We experience our own consciousness as if it is something supernatural but perhaps it is no more than a useful feature of our psychology, a trick to help us think in a way that aids our survival.

Our only real ally is logic. We must consider what is logically possible and accept that most of what we observe is emergent rather than fundamental. The realm of logical possibilities is vast and described by the rules of mathematics. Some people call it the Platonic realm and regard it as a multiverse within its own level of existence, but such thoughts are just mind tricks. They form a useful analogy to help us picture the mathematical space when really logical possibilities are just that. They are possibilities stripped of attributes like reality or existence or place.

Philosophers like to argue about whether mathematical concepts are discovered or invented. The only fair answer is both or neither. If we made contact with alien life tomorrow it is unlikely that we would find them playing chess. The rules of chess are mathematical but they are a human invention. On the other hand we can be quite sure that our new alien friends would know how to use the real numbers if they are at least as advanced as us. They would also probably know about group theory, complex analysis and prime numbers. These are the universal concepts of mathematics that are “out there” waiting to be discovered. If we forgot them we would soon rediscover them in order to solve general problems. Universality is a hard concept to define. It distinguishes the parts of mathematics that are discovered from those that are merely invented, but there is no sharp dividing line between the two.

Universal concepts are not necessarily simple to define. The real numbers for example are notoriously difficult to construct if you start from more basic axiomatic constructs such as set theory. To do that you have to first define the natural numbers using the cardinality of finite sets and Peano’s axioms. This is already an elaborate structure and it is just the start. You then extend to the rationals and then to the reals using something like the Dedekind cut. Not only is the definition long and complicated, but it is also very non-unique. The aliens may have a different definition and may not even consider set theory as the right place to start, but it is sure and certain that they would still possess the real numbers as a fundamental tool with the same properties as ours. It is the higher level concept that is universal, not the definition.
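To illustrate how elaborate even the first step of this construction is, here is a toy sketch in Python of Peano-style naturals, where every number is built from zero and a successor operation and addition is defined by recursion. The class and function names are purely illustrative, not any standard library:

```python
# A minimal sketch of Peano arithmetic: naturals built from zero and successor.
# Names (Nat, Zero, Succ, add) are hypothetical, chosen for this illustration.

class Nat:
    """Base class for a Peano numeral."""

class Zero(Nat):
    """The numeral 0."""

class Succ(Nat):
    """The successor S(n) of a numeral n."""
    def __init__(self, pred):
        self.pred = pred  # the predecessor numeral

def add(a, b):
    """Addition by the Peano recursion: a + 0 = a, a + S(b) = S(a + b)."""
    if isinstance(b, Zero):
        return a
    return Succ(add(a, b.pred))

def from_int(k):
    """Build the Peano numeral for a non-negative Python int."""
    n = Zero()
    for _ in range(k):
        n = Succ(n)
    return n

def to_int(n):
    """Count successors to recover an ordinary int for inspection."""
    count = 0
    while isinstance(n, Succ):
        count += 1
        n = n.pred
    return count
```

With this machinery, `to_int(add(from_int(2), from_int(3)))` returns `5`. Everything beyond this, the integers, the rationals and finally the reals via Dedekind cuts, has to be layered on top, which is the point: the definition is long and non-unique even though the concept being defined is universal.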

Another example of universality is the idea of computability. A universal computer is one that is capable of following any algorithm. To define this carefully we have to pick a particular mathematical construction of a theoretical computer with unlimited memory space. One possibility for this is a Turing machine but we can use any typical programming language or any one of many logical systems such as certain cellular automata. We find that the set of numbers or integer sequences that they can calculate is always the same. Computability is therefore a universal idea even though there is no obviously best way to define it.
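A universal system can be startlingly simple. The elementary cellular automaton Rule 110, which has been proven Turing-complete, fits its entire update table into one byte. A small Python sketch (function names are my own choice for illustration):

```python
# Rule 110 elementary cellular automaton -- a system proven to be
# Turing-complete, illustrating that universality needs no elaborate definition.

RULE = 110  # the whole update table, packed into the bits of one byte

def step(cells):
    """One synchronous update of a row of 0/1 cells with fixed zero boundaries."""
    n = len(cells)
    out = [0] * n
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        centre = cells[i]
        right = cells[i + 1] if i < n - 1 else 0
        # The three neighbours form a 3-bit index into the rule number.
        out[i] = (RULE >> (4 * left + 2 * centre + right)) & 1
    return out

def run(cells, steps):
    """Return the history of the automaton over a number of steps."""
    history = [list(cells)]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history
```

For example, a single live cell `[0, 0, 0, 1, 0, 0, 0]` evolves in one step to `[0, 0, 1, 1, 0, 0, 0]`. Despite the triviality of the rule, the set of computations this system can perform is exactly the universal set shared by Turing machines and every programming language.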

Universality also appears in complex physical systems, where it is linked to emergence. The laws of fluid dynamics, elasticity and thermodynamics describe the macroscopic behaviour of systems built from many small interacting elements, but the details of those interactions are not important. Chaos arises in any nonlinear system of equations at the boundary where simple behaviour meets complexity. Chaos, we find, is described by certain numbers that are independent of how the system is constructed. These examples show how universality is of fundamental importance in physical systems and motivate the idea that it can be extended to the formation of the fundamental laws too.
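These system-independent numbers can be computed directly. A standard example is the Lyapunov exponent of the logistic map, which at r = 4 equals ln 2 regardless of the initial condition, a quantitative fingerprint of the chaos rather than of the particular equation. A short sketch (function name is my own):

```python
import math

def lyapunov_logistic(r=4.0, x0=0.3, transient=1000, iterations=200_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    by averaging log|f'(x)| = log|r*(1 - 2x)| along a chaotic orbit.
    For r = 4 the exact value is ln 2."""
    x = x0
    for _ in range(transient):          # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(iterations):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / iterations
```

Running `lyapunov_logistic()` gives a value close to `math.log(2)` (about 0.693) for essentially any starting point, which is the sense in which such numbers are universal.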

Universality and emergence play a key role in my ontology and they work at different levels. The most fundamental level is the Platonic realm of mathematics. Remember that the use of the word realm is just an analogy. You can’t destroy this idea by questioning whether the realm exists or whether it is inside our minds. It is just the concept that contains all logically consistent possibilities. Within this realm there are things that are invented, such as the game of chess, or the text that forms the works of Shakespeare, or gods. But there are also the universal concepts that any advanced team of mathematicians would discover to solve general problems they invent.

I don’t know precisely how these universal concepts emerge from the Platonic realm, but I use two different analogies to think about it. The first is emergence in complex systems that gives us the rules of chaos and thermodynamics. This can be described using statistical physics, which leads to critical systems and scaling phenomena where universal behaviour is found. The same might apply to the complex system consisting of the collection of all mathematical concepts. From this system the laws of physics may emerge as universal behaviour. I call this analogy the Theory of Theories; others call it the Mathematical Universe Hypothesis. However this statistical physics analogy is not perfect.

Another way to think about what might be happening is in terms of the process of abstraction. We know that we can multiply some objects in mathematics such as permutations or matrices and they follow the rules of an abstract structure called a group. Mathematics has other abstract structures like fields and rings and vector spaces and topologies. These are clearly important examples of universality, but we can take the idea of abstraction further. Groups, fields, rings etc. all have a definition of isomorphism and also something equivalent to homomorphism. We can look at these concepts abstractly using category theory, which is a generalisation of set theory encompassing these concepts. In category theory we find universal ideas such as natural transformations that help us understand the lower level abstract structures. This process of abstraction can be continued giving us higher dimensional n-categories. These structures also seem to be important in physics.

I think of emergence and abstraction as two facets of the deep concept of universality. It is something we do not understand fully but it is what explains the laws of physics and the form they take at the most fundamental level.

What physical structures emerge at this first level? Statistical physics systems are very similar in structure to quantum mechanics, both of which are expressed as a sum over possibilities. In category theory we also find abstract structures very like quantum mechanical systems, including structures analogous to Feynman diagrams. I think it is therefore reasonable to assume that some form of quantum physics emerges at this level. However time and unitarity do not. The quantum structure is something more abstract, like a quantum group. The other physical idea present in this universal structure is symmetry, but again in an abstract form more general than group theory. It will include supersymmetry and other extensions of ordinary symmetry. I think it likely that this is really a system described by a process of multiple quantisation where structures of algebra and geometry emerge, but with multiple dimensions and a single universal symmetry. I need a name for this structure that emerges from the Platonic realm, so I will call it the Quantum Realm.

When people reach for what is beyond M-Theory or for an extension of the amplituhedron they are looking for this quantum realm. It is something that we are just beginning to touch with 21st century theories.

From this quantum realm another more familiar level of existence emerges. This is a process analogous to superselection of a particular vacuum. At this level space and time emerge and the universal symmetry is broken down to a much smaller symmetry. Perhaps a different selection would provide different numbers of space and time dimensions and different symmetries. The laws of physics that then emerge are the laws of relativity and particle physics we are familiar with. This is our universe.

Within our universe there are other processes of emergence which we are more familiar with. Causality emerges from the laws of statistical physics within our universe, with the arrow of time rooted in the big bang singularity. Causality is therefore much less fundamental than quantum mechanics, space and time. The familiar structures of the universe then emerge within it, including life. Although this places life at the least fundamental level, we must not forget the anthropic influence it has on the selection of our universe from the quantum realm.

## Experimental Outlook

Theoretical physics continues to progress in useful directions but to keep it on track more experimental results are needed. Where will they come from?

In recent decades we have got used to mainly negative results in experimental particle physics, or at best results that merely confirm theories from 50 years ago. The significance of negative results is often understated to the extent that the media portray them as failures. This is far from being the case.

The LHC’s negative results for SUSY and other BSM exotics may be seen as disappointing but they have led to the conclusion that nature appears fine-tuned at the weak scale. Few theorists had considered the implications of such a result before, but now they are forced to. Instead of wasting time on simplified SUSY theories they will turn their efforts to the wider parameter space or they will look for other alternatives. This is an important step forward.

A big question now is what will be the next accelerator? The ILC or a new LEP would be great Higgs factories, but it is not clear that they would find enough beyond what we already know. Given that the Higgs is at a mass that gives it a narrow width, I think it would be better to build a new detector for the LHC that is specialised for seeing diphoton and 4 lepton events with the best possible energy and angular resolution. The LHC will continue to run for several decades and can be upgraded to higher luminosity and even higher energy. This should be taken advantage of as much as possible.

However, the best advance that would make the LHC more useful would be to change the way it searches for new physics. It has been too closely designed with specific models in mind and should have been run to search for generic signatures of particles with the full range of possible quantum numbers, spin, charge, lepton and baryon number. Even more importantly the detector collaborations should be openly publishing likelihood numbers for all possible decay channels so that theorists can then plug in any models they have or will have in the future and test them against the LHC results. This would massively increase the value of the accelerator and it would encourage theorists to look for new models and even scan the data for generic signals. The LHC experimenters have been far too greedy and lazy by keeping the data to themselves and considering only a small number of models.

There is also a movement to construct a 100 TeV hadron collider. This would be a worthwhile long term goal and even if it did not find new particles that would be a profound discovery about the ways of nature. If physicists want to do that they are going to have to learn how to justify the cost to contributing nations and their tax payers. It is no use talking about just the value of pure science and some dubiously justified spin-offs. CERN must reinvent itself as a postgraduate physics university where people learn how to do highly technical research in collaborations that cross international frontiers. Most will go on to work in industry using the skills they have developed in technological research or even as technology entrepreneurs. This is the real economic benefit that big physics brings and if CERN can’t track how that works and promote it they cannot expect future funding.

With the latest results from the LUX experiment, hopes of direct detection of dark matter have faded. Again the negative result is valuable, but it may just mean that dark matter does not interact weakly at all. The search should go on, but I think more can be done with theory to model dark matter and its role in galaxy formation. If we can assume that dark matter started out at the same temperature as the visible universe, then it should be possible to model its evolution as it settled into galaxies and estimate the mass of the dark matter particle. This would help in searching for it. Meanwhile the searches for dark matter will continue, including other possible forms such as axions. Astronomical experiments such as AMS-02 may find important evidence, but it is hard to be optimistic there. A better prospect exists for observations of the dark ages of the universe using new radio telescopes such as the Square Kilometre Array, which could detect hydrogen gas clouds as they formed the first stars and galaxies.

Neutrino physics is one area that has seen positive results going beyond the standard model, so it is an important area to keep going. Experiments need to settle the question of whether neutrinos are Majorana spinors and produce figures for the neutrino masses. Observation of cosmological high-energy neutrinos is also an exciting area, with the IceCube experiment proving its value.

Gravitational wave searches have continued to be a disappointment but this is probably due to over-optimism about the nature of cosmological sources rather than a failure of the theory of gravitational waves themselves. The new run with Advanced LIGO must find them otherwise the field will be in trouble. The next step would be LISA or a similar detector in space.

Precision measurements are another area that could bring results. Measurements of the electron dipole moment can be further improved and there must be other similar opportunities for inventive experimentalists. If a clear anomaly is found it could set the scale for new physics and justify the next generation of accelerators.

There are other experiments that could yield positive results, such as cosmic ray observatories and low-frequency radio antennae that might find an echo from the big bang beyond the veil of the primordial plasma. But if I had to nominate one area for new effort it would have to be the search for proton decay. So far results have been negative, pushing the proton lifetime to at least 10^34 years, but this has helped eliminate the simplest GUT models, which predicted a shorter lifetime. SUSY models predict lifetimes of over 10^36 years, but this could be reached if we are willing to set up a detector around a huge volume of clear Antarctic ice. IceCube has demonstrated the technology, but for proton decay a finer array of light detectors is needed to catch the lower-energy radiation from proton decay. If decays were detected they would give us positive information about physics at the GUT scale. This is something of enormous importance and its priority must be raised.
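To see why such a huge volume is needed, here is a rough back-of-the-envelope sketch (my own illustrative numbers, not a detector proposal): count the protons in a megaton of ice and divide by the lifetime to get the expected decay rate.

```python
AVOGADRO = 6.022e23          # molecules per mole
MOLAR_MASS_H2O = 0.018       # kg per mole of water/ice
PROTONS_PER_MOLECULE = 10    # H2O has 10 protons (8 in O, 1 in each H)

def expected_decays_per_year(mass_kg, lifetime_years):
    """Expected proton decays per year in a detector of the given mass,
    assuming a lifetime much longer than the observation time."""
    protons = mass_kg / MOLAR_MASS_H2O * AVOGADRO * PROTONS_PER_MOLECULE
    return protons / lifetime_years

megaton = 1e9  # kg
rate_current_bound = expected_decays_per_year(megaton, 1e34)  # ~33 per year
rate_susy_scale = expected_decays_per_year(megaton, 1e36)     # ~0.3 per year
print(rate_current_bound, rate_susy_scale)
```

So even a megaton of ice would see only a handful of events per year at the 10^36 year scale, which is why kilometre-scale instrumented ice volumes become attractive.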

Apart from these experiments we must rely on the advance of precision technology and the inventiveness of the experimental physicist. Ideas such as the holometer may have little hope of success, but each negative result tells us something, and if someone gets lucky a new flood of experimental data will nourish our theories. There is much that we can still learn.

Wow Phil, that is not a blog post but rather a whole book chapter … 😀

I look forward to reading this 🙂

Cheers

This is the biggest picture in physics. Aether was imagined by Descartes and has finally been found to actually exist, pervading all of space. Newton & Einstein were proved wrong. The existence of aether was proved by the Michelson-Morley experiment. Read “Michelson-Morley Experiment: A Misconceived & Misinterpreted Experiment” at http://www.indjst.org/index.php/indjst/issue/view/2879 (Serial No.25)

Congratulations Phil you did it.

May I phill in some answers already present in my Vixra listing?

What is dark matter?

Clumpy Dark Matter Around Dwarf Galaxies a Support for an Alternative Black Hole Theory According to the Quantum Function Follows Form Model.

http://vixra.org/abs/1208.0031

What was the mechanism of cosmic inflation?

The Splitting Dark Matter- Black Hole- Big Bang and the Cyclic Multiverse.

http://vixra.org/abs/1306.0065

What mechanism led to the early production of galaxies and structure?

New Dark Matter Black Holes and a New Dark Energy Higgs Field, Lead to a Bouncing CP Symmetrical Multiverse, and New Experiments.

http://vixra.org/abs/1301.0050

How are the four forces and matter unified?

Wavefunction Collapse and Human Choice-Making Inside an Entangled Mirror Symmetrical Multiverse.

http://vixra.org/abs/1103.0015

How can gravity be quantised?

Quantum Gravity and Electro Magnetic Forces in FFF-Theory. http://vixra.org/abs/1103.0024

How is information loss avoided for black holes?

An Alternative Black Hole, Provided with Entropy Decrease and Plasma Creation.

http://vixra.org/abs/1103.0011

What is the small scale structure of spacetime?

Construction Principles for Chiral “Atoms of Spacetime Geometry”.

http://vixra.org/abs/1102.0052

What is the large scale structure of spacetime?

Reconciliation of QM and GR and the Need for a Pulsating Entangled CPT Symmetric Raspberry Shaped Multiverse.

http://vixra.org/abs/1111.0096

Leo.

Some extra poster information can be found at:

http://www.flickr.com/photos/93308747@N05/?details=1

and also in general on viXra:

http://vixra.org/author/leo_vuyk

Leo.

A notion called “contemporary physics” exists. It is a mixture of more and less accepted theories that have brought us some useful results. However, they have also brought us the insight that these theories are not consistent with each other, and in most cases they are not even self-consistent. Drastic departures from contemporary physics are given a hard time and are countered with arguments such as not following the scientific method, which in the case of physics usually requires that any statement must be experimentally verified. This blocks any kind of free thinking. At the same time the progress of fundamental contemporary physics seems to be blocked. Taking that block away requires fundamental rethinking.

For example, why isn’t it possible that the universe possesses a universal clock that ticks everywhere in the same way? Its existence would drastically change our view of fundamental physics. At the same time I challenge everybody to prove that such a universal time clock is impossible.

What I wanted to prove is that we are hopelessly stuck in our contemporary knowledge and beliefs.

Nobody dares to think drastically different!

I was impressed to find this paper from Taiwan which deduces universal time from Gerlach interference.

arXiv:1201.3164v4 [gr-qc] 5 Jul 2013

Now lost behind the image of the Stern-Gerlach effect, Gerlach in 1928 in fact not only gave the only consistent approach to quantum probabilities, but also anticipated the whole Aharonov-Bohm debate, with the right answer. His experiment sets a subtle yet realistic tone close to your reference-group of French empiricists.

I find their approach gives a good model of how we construct the object-world in relation to action/pragmatic outcomes, which places it with the logic of experiment, and captures one of the most attractive features of Matti Pitkanen’s system.

The twist in the tale is that the Taiwan paper cited above throws up an SO(3,1) symmetry, which is the Pin symmetry, beyond the Standard range, and about the only clear lead on chirality in neutrinos, which is the clear way beyond the Standard Model. I find this very interesting, perceiving that this symmetry operates within communication, opening a dimension which is “unphysical” by Classical and Standard standards.

This relates neatly to Edgar’s relentless challenge to conventional wisdom, and your quaternion dogma, Hans. Why should all physical processes be associative in their logic? Why?

Here I’m right with Phil on “logic is our only real ally,” and in the open-ended spirit of combinatorial possibilities.

It remains hard today to dissent from “Standard standards”. The very idea of a Standard Model is tyrannical! It seems to come from Grothendieck once speaking of “standard conjectures,” meaning the Weil Conjectures.

WHICH REMAIN UNPROVEN AS A WHOLE! NO WE DON’T HAVE A STANDARD MODEL, WE HAVE STANDARD CONJECTURES AND THE NEUTRINOS PROVE THEM WRONG!!!

It is difficult for me to interpret the paper of the two Taiwanese scientists correctly, but when I try to do so, I think that they convert equations of motion from a spacetime model that has a Minkowski signature into a space-progression model that has a Euclidean signature. That is what I pursue in my own model of physics.

Yes, Matti gets that result too, coming from topology. If you go strictly local the space is always Euclidean, and the ‘curvature’ can be reinterpreted as light refracting as the energy-density changes! That returns relativity from metaphysics to optics!

And the ideas go back with optics a long way, and were recovered just before Gerlach’s experiment:

MacDonald, Continuous re-creation and atomic time in Muslim scholastic theology, Isis 9: 326-44 (1927).

Avicenna, in notes on Nicomachus the Pythagorean, gave an event-matrix of space and times! That was his way of showing how a statement can be in general true, but only at specific moments. And his kind of analysis is really important for disentangling logical ‘necessity’ from local causation, which is in fine detail undecidable, and will never decide our metaphysical arguments…

I find it sad that philosophers still don’t notice this heritage of ancient science and medicine, and instead blather about ‘two-dimensional semantics’.

All space-progression models possess the notion of an observer’s time clock and the notion of the observed time clock.

Observer’s time ticks at the location of the observer and travels with the observer. Observer’s time differences can be measured by appropriate instruments.

Observed time ticks at the location of the observed item and travels with that item. In general observed time cannot be measured.

The difference between observer’s time and observed time depends on the path that information takes to travel from the observed item to the observer and on the number of time ticks that information takes to travel this path.

Contemporary physics is based on the spacetime model. In this model operating space and progression are coupled via the local speed of information transfer. In this model the observer’s time will be used as the common time concept. This results in a spacetime continuum that is characterized by a Minkowski signature. In the spacetime model in principle the observer’s time setting can be selected freely. It means that the observed time depends on the selected observer’s time.

In a paginated space-progression model all observed time clocks are considered to be synchronized. In this model nature possesses a universal time clock. It means that in this model nature steps with universal time ticks from each static status quo to the next static status quo. In this model the universe can be considered to be recreated at every universal time tick. This results in a space-progression vector space that has a Euclidean signature. This model shows that quaternions represent nature’s preferred number system.

In the paginated model the observer’s time can no longer be selected freely. It must be deduced from the universal time clock, which equals any observed time clock. The universal time clock runs at a super-high frequency and for that reason it cannot be measured directly. Only derived clocks that run synchronously at a much lower frequency can be observed.

If we mindlessly continue in our unjustified faith in strict reductionism, we will continue to get nothing but Ptolemaic model-building that is untestable and in the end will be judged to be rubbish as natural philosophy.

You say that the multiverse fantasy explains fine-tuning. This rivals astrology for the lack of scientific content.

I am in the process of proving that binary star systems have a mass distribution that is characterized by integral multiples of a unit mass (0.145 solar mass). Just like atoms/molecules.

Now THAT would be something new and useful to know!

It’s a fractal world after all, my friend.

RL Oldershaw said “I am in the process of proving that binary star systems have a mass distribution that is characterized by integral multiples of a unit mass (0.145 solar mass). Just like atoms/molecules.”

Sounds interesting. I would be pleased for this possibility to be established as fact by your work and others.

But why is there quantization of atoms and molecules in the first place?

The discovery of quantization was a strange, unexpected surprise. Its raison d’être, to me, has never been properly explained (except now by me in an upcoming publication).

We have Bohr’s ad hoc notion that the angular momentum in an atom is “just” quantized; then we have de Broglie’s idea that the spectrum is quantized because of “constructive interference”. This is necessary because if interference did not occur, the electron would crash into the proton. A lot of people believe in this explanation for the quantization of the spectrum of atoms and molecules. But constructive interference is found with waves that travel through a medium. Where’s the medium that the electron in an atom flows through?

Maybe it is true that the world is fractal. But why is it fractal? What is causing it to be fractal? Nature cannot just make it fractal by fiat, as any long-suffering theorist is tempted to do. There has to be some mechanism that can be found in nature that makes the spectrum of atoms and molecules quantized. We can identify the mechanism that makes Chladni figures fractal. So we have to believe that nature has a real, valid mechanism that makes the spectrum quantized. You have succinctly criticized my life’s work* and I thought I would return the favor.

*which, I now believe thanks to others, I can now successfully squirm out of.

First of all, it’s logical for photons to be quanta of energy. Since a photon is defined in a wave function as an oscillation e^i, it *has* to be limited in space, or it would technically be a “ray” from -infinity to +infinity in space. In terms of spectrum analysis, the probability density function is a kind of “window function” that limits the oscillation spatially. This way a quantum is created.

Why is an atom’s spectrum quantized? Physics does not give an answer, I think; it only postulates and proves that it is quantized.

I can speculate that protons carry a “superthin” structure with them that creates a kind of spatial energetic “grooves” for electrons, where they fall and cannot get out or go to another level unless given an additional energy input by a photon.
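The windowed-oscillation picture above can be made concrete with a small numerical sketch (illustrative parameters of my own choosing): a Gaussian window applied to a plane-wave oscillation gives a packet of finite spatial width, and its Fourier transform shows the complementary spread in wavenumber, with the familiar minimum uncertainty product of 1/2.

```python
import numpy as np

n, L = 4096, 100.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]

sigma, k0 = 2.0, 5.0  # window width and carrier wavenumber (arbitrary choices)
# Gaussian window times oscillation: |psi|^2 has standard deviation sigma
psi = np.exp(-x**2 / (4 * sigma**2)) * np.exp(1j * k0 * x)

def weighted_std(values, weights):
    w = weights / weights.sum()
    mean = np.sum(w * values)
    return np.sqrt(np.sum(w * (values - mean) ** 2))

std_x = weighted_std(x, np.abs(psi) ** 2)

# Spread of the same packet in wavenumber space
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=dx))
psi_k = np.fft.fftshift(np.fft.fft(psi))
std_k = weighted_std(k, np.abs(psi_k) ** 2)

print(std_x, std_k, std_x * std_k)  # roughly 2.0, 0.25, 0.5
```

A Gaussian window is the special case that saturates the uncertainty bound; any window function gives a product of at least 1/2.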

Hi Phil,

I have a technical question: Does WordPress have a way to ignore specific commenters, similar to the feature Amazon offers, and if so could it be enabled?

That would be helpful 🙂

There are not many options on wordpress.com. You will have to use your own inbuilt filter 🙂

I found a very good catalog of detached eclipsing binary stars with mass determinations “accurate to 2%”. It is an ongoing catalog with new systems being added as they are published.

http://www.astro.keele.ac.uk/jkt/debcat/

This would appear to offer a good preliminary sample with which to test my hypothesis that the total masses of binary star systems (and single white dwarfs) have distributions that are characterized by preferred masses that are integer multiples of 0.145 solar mass.

Taking only 2012 and 2013 data from Southworth’s catalog (I am only interested in new mass determinations), I find a sample of 36 systems with sufficiently narrow error bars for an adequate preliminary test of the hypothesis.

Of the 36 test systems, 77% are located at 0.04 solar mass.

The total masses for the EcB systems cluster around the predicted masses. A histogram of the + and – deviations is centrally peaked at the generic multiple value.

These results seem much better than the results for the small sample of neutron star binary systems, with the previously noted heterogeneity in error estimates.

I am of course wondering if I have something to write home about yet, and I am hoping that the readers of SAR will have some constructive criticism.

Perhaps I might also get some advice on the best way to analyze the data so as to clearly demonstrate what the data say, and what they do not say.
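One simple way to frame the test (a sketch with hypothetical masses, not the actual catalogue data): compute each system's distance to the nearest multiple of 0.145 and compare the fraction within tolerance against the chance expectation. Note that under a uniform null, a fraction 2 x 0.04 / 0.145, about 55%, would land within 0.04 of some multiple anyway, so the observed 77% needs to be judged against that baseline.

```python
import numpy as np

UNIT = 0.145   # proposed unit mass in solar masses
TOL = 0.04     # tolerance quoted above

def residuals(masses):
    """Distance of each mass to the nearest integer multiple of UNIT."""
    r = np.mod(np.asarray(masses, dtype=float), UNIT)
    return np.minimum(r, UNIT - r)

def fraction_within(masses, tol=TOL):
    return float(np.mean(residuals(masses) <= tol))

# Hypothetical example masses, NOT the real catalogue values:
example = [0.29, 1.47, 2.18, 0.60, 1.02]
print(fraction_within(example))

# Chance expectation if total masses were uniformly distributed:
print(2 * TOL / UNIT)  # about 0.55
```

A binomial test of the observed count against that 55% baseline, or a histogram of the residuals against a flat distribution, would make the claim quantitative.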

RLO

Fractal Cosmology/Discrete Scale Relativity

[Deal with it, Mr. Dilaton. Or better yet: don’t]

Sorry, 77% are located WITHIN 0.04 solar mass of a predicted multiple.

Thanks for the snapshot! There you have a blueprint for a book 😉 If you ever write one (and you most definitely should) I’m the first one to buy it.

I wrote a book in 1998 that includes some of the ideas in this post. It is on viXra http://vixra.org/abs/0911.0042 . It would be nice to update it and use print-on-demand technology to publish in hardcopy, but not yet.

Phil, that’s a pretty interesting paper, I must read it; I know a thing or two about Clifford algebras… also, “The universe is made of stories, not of atoms.”

Muriel Rukeyser … saw that quote on a plaque on a sidewalk in New York City last time I was there

It’s a nice quote that I have interpreted in a completely different way from what she intended 🙂

There’s a very high probability (say unity) that the Muriel Rukeyser poetry line quoted was inspired by Hopi mythology. The Hopi creatrix, Ts’ its’ tsi’ nako or Thought Woman a.k.a. Spider Woman, sits at the center of her web continuously weaving the creation and dissolution of reality with her imagination; the “stories” she weaves become reality! You find this present in every one of her myths . . .

I thought it was inspired by path integrals.

Phil, in your 1998 book you said:

“… the algebra of creation and annihilation operators for fermions

is … isomorphic to a Clifford algebra …

In developing an algebraic string theory

the first step would be to define creation and annihilation operators for strings analogous to Dirac’s operators for bosonic and fermionic particles.

It might be possible to do this if strings are described as composites of particles like a string of beads.

The creation and annihilation operators can then be strings of ordinary bosonic or fermionic operators.

The algebras … are … isomorphic to algebras of string creation and annihilation operators …”.

Since the fermionic Clifford algebra is based on 2×2 complex matrices

and

since Complex Clifford Algebras have periodicity 2

that allows factorization of a large Complex Clifford Algebra CCl(2N)

into the tensor product CCl(2) x …(N times tensor product)… x CCl(2)

so that the union of all such tensor products and its completion

all have Clifford structure and form the II1 hyperfinite von Neumann factor algebra

that gives the conventional fermionic AQFT

and

since you want to generalize to include both fermions and bosons

then

how about going to Real Clifford Algebras with periodicity 8

so that

the basic unit would be the Real Clifford Algebra Cl(8)

which

has 8 vectors (for 4+4 Kaluza-Klein spacetime)

and 28 bivectors (for gauge bosons)

and two sets of 8 half-spinors (for fermionic particles and antiparticles)

and

then seeing very large Real Clifford Algebras Cl(8N) as tensor products of Cl(8)

and

then completing the union of all such tensor products to make an unconventional AQFT

???

If you feel restricted by Cl(8) having only 28 gauge boson generators

then

you could get more by using Cl(16) = Cl(8)xCl(8) as the basic building block for the AQFT

(note that Cl(16) contains E8)
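The grade counts quoted above are easy to verify mechanically (bookkeeping only, not a construction of the algebra itself): the dimensions come from binomial coefficients, and the periodicity claim is just a dimension check on the tensor product.

```python
from math import comb

n = 8
total_dim = 2 ** n                  # dim Cl(8) = 256
vectors = comb(n, 1)                # grade-1 elements: 8
bivectors = comb(n, 2)              # grade-2 elements: 28
spinor_dim = 2 ** (n // 2)          # spinor dimension 16 = 8 + 8 half-spinors

# Dimension bookkeeping behind periodicity 8:
# dim Cl(16) = 2**16 matches dim(Cl(8) tensor Cl(8))
assert 2 ** 16 == total_dim * total_dim

print(vectors, bivectors, spinor_dim)  # 8 28 16
```

The same check scales: dim Cl(8N) = (dim Cl(8))^N, which is the dimension count underlying the tensor-product factorization described above.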

One more comment/question:

How about physically interpreting strings as world-lines of particles

with closed strings being virtual loops in quantum processes ?

Tony

Thanks for looking at it Tony. I am not really looking to find gauge groups in the algebra. I’ve got used to the idea that the low energy gauge groups are not easily seen in the Planck scale physics. I could be wrong. I am more looking out for a connection with scattering theory.

I do agree though that there is lots of interesting maths in the periodicity. I just don’t know how to fit it together.

Hi Tony,

What about geometrically interpreting simple, differently shaped strings as strings of exactly 4 beads with three rotation hinges in between, forming special shapes for electrons, positrons, photons, gluons or neutrinos?

Then quarks could be compound particles of one or more electrons combined with one or more photons, or for positively charged particles, positrons combined with photons?

The Higgs should be the origin of all those particles as a closed string, with a ring shape.

wrong idea?

Great review, but I hardly think it’s correct to say that it is “theism” which has pushed (some) basic physics research to Europe. Theists have never really cared one way or the other about basic physics. A more reasonable explanation is contained in your subsequent sentence “…its future hopes will be found in Asia along with the economic prosperity that depends on it.” To wit, economic prosperity no longer *does* depend on fundamental experimental science, hence the public and politicos are logically less willing to fund it (as it simultaneously grows more expensive).

“Theists have never really cared one way or the other about basic physics.” Which is exactly the point, but it is only true of certain right wing politicians. There are plenty of people who follow a religion who are just as interested in science as anyone else, but they are not the ones making the decisions.

“economic prosperity no longer *does* depend on fundamental experimental science” Not directly, but as I said it is the educational output that counts. People doing science learn research methods and this has economic benefits. Any science will do but the students need to be inspired to make the effort and fundamental physics is particularly inspirational.

It is hard for politicians and the public to understand where the value from fundamental science comes from, and that the amounts being spent are small fractions of national budgets compared to other forms of expenditure. Places like CERN need to be saying the right things and they aren’t.

A week ago I visited CERN and got a close look at the ATLAS detector. During that visit I noticed the concern of the leading scientists that their jobs might finish soon if sufficient results do not become available. At the same time they are trying to commercialize their accelerator technology.

As I have said there are other things they can do to mitigate such a situation.

(1) They appreciate and explain the value of negative results

(2) They produce likelihood data for a wide range of cross-sections and make it open access for the benefit of all theorists to test their models against

(3) They produce research that evaluates their impact through technological innovation, post-graduate education etc. and how this compares with other research spending.

An excellent update to your Cyclotron Notebooks Phil; I especially appreciate your ontology section with its emphasis on universality and emergence.

Regarding your theist v. atheist divide, I don’t feel this is as much of an issue as you suggest. If you take the time to read the October issue of Scientific American all of these issues are well addressed: political dysfunction by Alex “Sandy” Pentland, director of MIT’s Human Dynamics Lab, with his article “The Data-Driven Society”; scientific policy by David Kappos, former head of the USPTO and current member of the World Economic Forum’s Global Agenda Council, with his article “Who Will Bankroll the Next Big Idea”; the rise of China by Lee Branstetter, Guangwei Li, and Francisco Veloso, all academics at Carnegie Mellon, with their article “The Polyglot Patent Boom”. The overwhelming conclusion is that science is a) increasingly becoming an international phenomenon due to advancement in communication technologies and infrastructure and b) much of the R&D work is taking place in Eastern Europe, India, and China because of the skilled labor market. Consider, in 2010, of 3,176 U. S. patents granted to groups with one or more Chinese residents, 30% were purely Chinese-invented but assigned to multinational firms and 37% were co-invented and assigned to multinational firms; that’s 67% of the total assigned to multinationals!

Really what you see in the U.S. is privatization – period! Corporations are increasingly taking over more and more aspects of society, certainly the politics. According to the Supreme Court corporations are individuals entitled to free-speech, free-speech which is, unfortunately, cost-prohibitive to most. And this, of course, is really a global phenomenon; evince the recent Wikileaks Trans-Pacific Partnership and Trans-Atlantic Free Trade Agreement document release as indication thereof: (http://www.nytimes.com/2013/06/03/opinion/obamas-covert-trade-deal.html?_r=0); a TED talk by Lawrence Lessig (http://www.ted.com/talks/lawrence_lessig_we_the_people_and_the_republic_we_must_reclaim.html); and a free documentary, Ethos, (https://www.facebook.com/pages/Ethos-the-Movie/115151175221520).

But anyway, it’s quite possible, etymologically, that universe comes from uni, meaning “one”, and versa, meaning “body of words”, so perhaps it IS all just a story . . .

I do appreciate the unfortunate power of big corporations in politics. It is a big issue here in the UK as it is in the US and many other countries. But big corporations are not anti-science. I am not the only one who sees how important the political conflict between religion and science is.

http://www.scientificamerican.com/article.cfm?id=antiscience-beliefs-jeopardize-us-democracy

A search will find articles going into much more detail e.g. http://www.huffingtonpost.com/victor-stenger/rising-antiscience-faith_b_3991677.html http://www.sciencemeetsreligion.org/blog/2012/10/antiscience-beliefs-and-u-s-politics/ http://www.forbes.com/sites/petergleick/2011/08/31/why-anti-science-ideology-is-bad-for-america/

Actually, big corporations turned out to be anti-science. I published as a freelancer in a journal not requiring academic affiliation. In the end they asked that I name my city as my affiliation.

Okay, Phil, I couldn’t resist these links. Check out the chart Paul Krugman links to: in 2009, 39% of U. S. Republicans believed humans and other life forms have existed in their present form since the beginning of time (say 6000 years); in 2013, 48% believe the same (http://www.pewforum.org/2013/12/30/publics-views-on-human-evolution/)! Crazy . . . And it’s not just Republicans. But religion is a tool wielded by the State and the State serves corporate America (http://morrisberman.blogspot.com/2013/12/home-of-brave.html)! Even the Scientific American article you link to addresses the apparent partnership between corporations and fundamentalists.

And what happened to Obama’s change? His HOPE campaign was the ultimate rope-a-dope, as Edward Snowden demonstrated! The point I’m making is . . . well, I’m not entirely certain how to properly express the point I’m trying to make . . . To the State, hence corporate America (or the corporate world, America’s hardly unique), science is nothing but a convenient tool, the same as religion, to be utilized when convenient and beneficial to the corporate ideology, and ignored or suppressed otherwise.

Yeah in addition to Phil’s post here, I liked Phil’s video with string theory and loop quantum gravity being the “same thing”. Closed strings vs loops sounds like the same idea. I remember Urs Schreiber mentioning Feynman paths and string theory worldsheets having a deep relationship. I also remember you and John Baez talking about wanting to use exceptional algebras for a spin foam but it being not possible to do (without your suggestion of having the exceptional algebra as part of a Clifford Algebra).

It seems like string theory and loop quantum gravity really will end up being the “same thing” and symmetry should be very important. Baez, Lisi, you and the E6 GUT for string theory are all into exceptional algebras for that symmetry. Symmetry amazes me. One thing I tried to look at recently is what changes for the root lattice when you go from E8’s compact form to the split form. With a quick look, it seemed like quaternionic imaginaries swap places with Triality and the quaternionic reals swap with the adjoint (D4s).

Adjoint, vectors, and spinors seem tailor made for bosons, spacetime, and fermions. If details/experiments have a problem with this picture, it really seems like it’s the details/experiments that have to be better understood.

I like what you wrote.

“… My preference is for a white hole model…”

“… proton decay. So far results have been negative pushing the proton lifetime to at least 10^34 years…”

Therefore, the perpetually immortal proton must also have unchanging variables in what makes up a proton: no inputs of energy and no dissipation of energy.

(quarks, gluons, higgs)

“…Someone else may take a completely different view that appears contradictory but may ultimately come back to the same physical conclusions. …”

Since I am learning, I reserve the right to change my mind.

Ever since I found out about dark energy, I have been trying to make a working image of our universe.

The best that I can come up with would be an hourglass where the particles (protons) would be the neck and be responsible for the transfer of energy from dark energy to vacuum energy or vice versa.

What if we were able to experimentally determine that a proton is dissipating energy and that it needed an input of energy to remain stable?

What if that effect of that inflow of energy was gravity?

Where would we look?

What would be the constrains?

Here is a paper to whet your appetite.

http://arxiv.org/abs/1202.5097

Matter Non-conservation in the Universe and Dynamical Dark Energy

Harald Fritzsch, Joan Sola

(Submitted on 23 Feb 2012 (v1), last revised 30 Aug 2012 (this version, v3))

I wonder why people keep juggling with Clifford algebras. All we know of current fundamental physics can be solved with quaternionic algebra and with quaternionic function theory. All relevant discrete symmetries are implemented by the 16 discrete symmetry versions that quaternionic number systems and continuous quaternionic functions can take.

In the same way spinors and corresponding matrices can be replaced by quaternionic functions, whose discrete symmetry version is indicated by a special index.

For example the coupling equation for entangled systems runs

∇ψ= m φ, where ψ and φ are normalized quaternionic functions and m is the coupling factor.

For elementary particles the coupling equation runs:

∇ψᵘ= m ψᵛ , where ψᵘ and ψᵛ are equal apart from their discrete symmetries, which are indicated by the indices.

In this way the Dirac equation for free electrons runs

∇ψ= m ψ* , where ψ* is the quaternionic conjugate of ψ.

∇ is the quaternionic nabla.
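For readers unused to quaternionic notation, here is a minimal sketch of the quaternion arithmetic these equations rely on (plain illustrative code of my own, not the Hilbert Book Model itself): the Hamilton product is non-commutative, and a quaternion times its conjugate gives the squared norm.

```python
from dataclasses import dataclass

@dataclass
class Quaternion:
    w: float  # real part
    x: float  # i component
    y: float  # j component
    z: float  # k component

    def __mul__(self, o):
        # Hamilton product: i^2 = j^2 = k^2 = ijk = -1
        return Quaternion(
            self.w*o.w - self.x*o.x - self.y*o.y - self.z*o.z,
            self.w*o.x + self.x*o.w + self.y*o.z - self.z*o.y,
            self.w*o.y - self.x*o.z + self.y*o.w + self.z*o.x,
            self.w*o.z + self.x*o.y - self.y*o.x + self.z*o.w,
        )

    def conj(self):
        return Quaternion(self.w, -self.x, -self.y, -self.z)

    def norm2(self):
        return self.w**2 + self.x**2 + self.y**2 + self.z**2

i = Quaternion(0, 1, 0, 0)
j = Quaternion(0, 0, 1, 0)
k = Quaternion(0, 0, 0, 1)

print(i * j)  # equals k
print(j * i)  # equals -k: multiplication is non-commutative

q = Quaternion(1, 2, 3, 4)
print((q * q.conj()).w)  # 30 = 1 + 4 + 9 + 16, the squared norm
```

The non-commutativity is exactly what distinguishes the discrete symmetry versions mentioned above from the commutative complex case.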

Hi Hans,

Would it mean that quaternionic algebra could describe strings based on exactly 4 beads with internal curvature and three internal quarter-rotating hinges?

like this; http://vixra.org/pdf/1103.0002v4.pdf

Leo

I do not know enough about string theory in order to judge whether quaternions fit.

In the Hilbert Book Model, which is a quaternionic model, elementary particles walk along a stochastic micro-path that is formed by a distribution of step stones. The step stone density distribution corresponds to the squared modulus of the wave function of the particle. The micro-path is not an elastic string.

Hans,

And the step stones have a basic scale of the Planck length? to form a 3D lattice?

Leo, in the HBM elementary particles are point-like objects. They get their properties from the symmetries of the quaternions that are embedded in the quaternionic function that acts as our curved living space. This quaternionic function has its own discrete symmetries, and the difference between the symmetry sets defines properties such as electric charge, color charge and spin.

The embedding process goes together with a singularity that represents a local source or drain in the embedding field. The singularity causes a wave front that moves away at maximal speed. The embedding recurs at every subsequent progression step, but at a slightly different location. The recurrence represents a super-high frequency that is so high that it cannot be observed. The wave fronts combine into super-high-frequency waves, and the averaged effects of these waves become noticeable as the particle’s potentials.

This sequence of embedding locations causes the stochastic micro-path. Thus the particle itself has no size, but the micro-path relates to the wave function.
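
The claimed link between step-stone density and the squared modulus of the wave function can at least be pictured numerically. The sketch below simply postulates a Gaussian |ψ|² and samples step stones from it; the width and step count are arbitrary choices of mine, not part of the Hilbert Book Model:

```python
import random

# Illustrative sketch only: the HBM narrative above says a particle's step
# stones form a random distribution whose density matches |psi|^2. Here we
# simply postulate a Gaussian |psi|^2 of width SIGMA and draw step stones
# from it; SIGMA and N_STEPS are arbitrary choices, not part of the model.

random.seed(1)
SIGMA = 0.5        # assumed width of |psi|^2
N_STEPS = 100_000  # number of sampled progression steps

step_stones = [random.gauss(0.0, SIGMA) for _ in range(N_STEPS)]

# The sample density of the step stones recovers the postulated |psi|^2:
mean = sum(step_stones) / N_STEPS
var = sum((s - mean) ** 2 for s in step_stones) / N_STEPS
print(round(mean, 3), round(var, 3))   # close to 0.0 and SIGMA**2 = 0.25
```

Nothing here validates the model, of course; it only shows that "step-stone density = squared modulus" is a well-defined statistical statement.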

Hans this becomes a long story, I will react by email

Very interesting ideas to ponder. I have several more questions I will post. First relates to quantum mech; here is “the problem” as I see it, and I’m wondering what your reaction would be, as I never see it discussed in these terms.

The “problem” is that in a closed system, i.e. with actual physical measuring devices also contained in the system, one cannot define measurement probabilities that satisfy the laws of probability. Essentially, decoherence is never complete. This I find very bothersome because I don’t understand what probabilities mean if they aren’t actually probabilities.

The probabilities are real but they only come into play when “I” make an observation. You should not try to imagine wave collapse in a closed system that you are not observing. It is not happening.

But to define “I” is murky. There is the problem of sorites, i.e., if you keep taking away one atom from an “observer”, when does it stop being an observer. Similarly if one dials up h-bar a little at a time, decoherence goes away and there are no observers. So what is the threshold? What is the criterion that distinguishes QM systems that have “sufficiently good approximate observers” to have physics, and those that don’t? I am puzzled that people seem very concerned about the unitarity of QM under baroque circumstances, but not at all concerned that even unitary QM does not have a clear-cut, state-independent way to produce probabilities which add up to one.

Good article Phil. A lot of food for thought.

One question not on the list is ‘How do we solve the problems with time?’ Some see that as the most important question physics is up against now, and a fast growing number of us think only a radical solution will work.

Two questions on the list assume spacetime to be right; it’s worth pointing out that it’s untested, untestable, and that it has led to what some see as major conceptual inconsistencies. Being in some ways intertwined with SR, spacetime often sponges off its neighbour a lot of the credibility that really belongs only to SR, because SR has been well tested by experiment.

Making time emergent is not as easy as it might seem. I’ll point out one or two of several difficulties with that. Starting out from Minkowski spacetime, you inevitably get block time. If you want time to be emergent from that, and linked to thermodynamics, the problem is that there’s a pre-existing sequence built into the block, and it includes the laws of physics in it. The laws of physics need time if they are to work, and if you make time emergent, a lot of the laws of physics would need to be emergent as well, because they need a pre-existing flow of time.

If you have a pre-existing sequence in the block that is static, and say that then something causes time to emerge, it’s not only very difficult to get something that seems to move and change to emerge from something that doesn’t. The problem is also that in that picture the laws of physics were already pre-implied, and frozen into the block, waiting for something more superficial to come along and make them work. And that looks contrived.

Then there’s the question of why an emergent flow of time, which is presumably therefore unreal, varies in speed locally, and these variations in the time rate leave behind them permanent age differences between objects.

I hope this makes sense. I think there are approaches that work fine in the mathematical picture, but when you get to the conceptual picture, as we have to with the time issues, it turns out that it’s rather different. Along with Lee Smolin, George F R Ellis, J R Lucas and others, I think a real flow of time must exist somehow.

Excellent point! Here’s my two cents on time http://www.toebi.com/blog/theory-of-everything-by-illusion/time/

What do you guys think of it?

Jonathan, thanks for your questions. The problems I listed do not assume that “spacetime is right”. Asking about the small-scale/large-scale structure means determining what it really is or what it needs to be replaced with.

IMO the “problem of time” has been overstated. You can do quantum mechanics with 0 or 2 time dimensions just as well as with 1. In GR the metric is dynamic and there is no barrier to change it dynamically to configurations where the signature changes, so the number of time dimensions is also dynamic, not kinematic. QM is just about summing over a phase space of possibilities. It does not have to be an evolution of something through time.

I don’t see the motivation for thinking that the flow of time is a fundamental feature of physics, especially if space is not. This just contradicts what we know from GR and QM for no good reason. The flow of time is a psychological construct and saying it is fundamental to physics is like saying that good and evil are fundamental. We are past that stage.

Hi, thanks for your reply. It obviously depends on how much one thinks standard theory is right. You say:

‘I don’t see the motivation for thinking that the flow of time is a fundamental feature of physics, especially if space is not. This just contradicts what we know from GR and QM for no good reason.’

Personally, I think ‘what we know from GR and QM’ is unreliable and incomplete – this seems reasonable, because they’re incompatible. So it may be a bit early to draw conclusions from there, because it looks like there’s a flaw somewhere. This gives good reason to point out possible flaws in the spacetime element of SR, and my criticisms of that are part of a general search for flaws in GR and QM recently, which was partly set off by attempts to unify them into QG.

The time issues are important in that, because there are places where GR and QM don’t match up. The future is one example – in physics it must either exist or not exist. The fundamental unpredictability in QM suggests it doesn’t yet exist, but in block time it’s already sitting there unalterable, and just as real as any other part of the block.

That’s one of quite a list of contradictions, and to me the interesting thing is that even a minor change to spacetime could remove all of them. If the rules about simultaneity at a distance, for instance, were slightly different, you don’t necessarily get block time. And to change the rules about simultaneity at a distance, or reference frames in relation to time, all you have to say is that the time dimension has further differences from the other dimensions that we don’t yet know about, along with the many differences we’ve already accepted. That seems possible, and if it were true, four or five major contradictions might go.

PS. A New Scientist article last month called ‘Saving time: Physics killed it. Do we need it back?’ has comments about a possible flow of time from Smolin, Ellis, Carroll, Craig Callender.

http://www.newscientist.com/article/mg22029410.900-saving-time-physics-killed-it-do-we-need-it-back.html

It’s interesting to see how widely they disagree, just as in the 2008 FQXi essays on time. It’s certainly an issue again now, for new reasons as well as old ones. The flaw, if there is one, could be there because what led to block time included a set of assumptions about time, ie. a set of assumptions about something we don’t yet understand.

Isn’t “causality in time” always strictly dictated by the velocity of particles? It takes time for a particle to reach another particle, and time is dictated by velocity. The “speed of light” is our universal clock source; the duration of anything can be measured objectively if we compare it to the arrival time of a photon.

A way to define the “arrow of time” via thermodynamics is not useful. The laws of thermodynamics are born from the proposition that random exchanges of energy between particles exist; this process, if not spatially constrained, leads to an outward spatial expansion and cooling of the matter. However, the net energy of the matter does not change; only its volume increases. Such expansion is not an example of an arrow of time, it’s just a statistical picture of kinetic-energy exchanges between particles. But it can’t be reversed, because particles reflect or deflect from each other upon collision. So the “arrow of time” is only a philosophical concept; it does not exist when you run a physical model. If you time-reverse the physical model, the physical laws will also change in an obvious way, and will simply become invalid. In other terms, the “arrow of time” is “encoded” in physical laws like the law of reflection or energy exchange.
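
The “random exchanges of energy between particles” can be made concrete with a toy simulation (my own minimal illustration, not the commenter’s model): start all particles with equal energy, let random pairs repeatedly re-split their combined energy, and watch the total stay fixed while the spread grows and never returns to zero:

```python
import random

# A toy version of the "random exchanges of energy between particles"
# described above (my own minimal illustration, not the commenter's model).
# N particles start with identical energy; each step picks a random pair
# and splits their combined energy at a random fraction.

random.seed(0)
N = 1000
energies = [1.0] * N   # everyone starts with the same energy

def step(e):
    i, j = random.sample(range(len(e)), 2)
    total = e[i] + e[j]
    split = random.random()                 # random fraction for particle i
    e[i], e[j] = split * total, (1.0 - split) * total

for _ in range(20_000):
    step(energies)

# Total energy is conserved, but the spread has grown irreversibly from 0.
variance = sum((x - 1.0) ** 2 for x in energies) / N
print(round(sum(energies), 3), round(variance, 2))
```

This is exactly the statistical one-way behaviour being discussed: every microscopic exchange is reversible, yet the variance of the ensemble only ever grows toward its equilibrium value.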

What would be interesting to know is the photon frequency limit of our reality. Is it in the exahertz region? This could give an idea of which frequency range our universe spans in the global multiverse spectrum.

The other reason to think spacetime is wrong is that it leads to the fixed future of block time, which contradicts QM, if one takes the unpredictability in QM to be fundamental, which we very widely do.

Block time also has all events inevitable, including past ones. So all of human history would have been inevitable, and already in the block. In a general way this goes against a lot of our science. The sacrifices that have to be made to go on believing that a more or less untestable idea, spacetime, is right, are not to be ignored.

Spacetime seems to tidy up the picture; that’s why it wasn’t questioned for so long. But it tidies up only the mathematical picture. The reason we’re questioning it now at last (apart from needing to because of quantum gravity) is that it plays havoc with the conceptual picture. The time issues will hopefully bring us back to the conceptual side, which we moved away from some time ago, but which is where we may need to go to make real progress.

I can only agree. However, if we change to another space-progression model, then a lot of physical models must change as well. Personally I have explored such a model and it seems to be viable. It is even a lot easier to comprehend than the Minkowski spacetime model. The model I use has a universal time clock that ticks synchronously at all observed locations/events. The model is called the “Hilbert Book Model”.

Not sure if I’m entirely correct, but here I’ve proposed a “statistical spacetime curvature” to resolve complexities of GR. Practically, it is close to modeling of time dilatation effects that depend on the gravity field’s strength, but does not require non-linear space.

The statistical space time curvature is based on the proposition that particles closer to the gravity field’s center gain and exchange more energy with their neighbors than particles farther away from the gravity field’s center. It’s quite an easy concept, yet it exhibits statistical “time dilatation” effects. The speed of light does not depend on the gravity field strength, so statistical modeling in gravity field becomes a breeze, much like with thermodynamics equations.

http://vixra.org/abs/1310.0050 Version 12 is the latest one. Prior versions are suboptimal and wrong in some aspects. I’m sorry for my style in advance, I’m not a professional physicist, just know signal processing, time and spectral series well.

Any switch to another space-progression model must preserve the notion of fields and particles and must replicate the SM zoo of elementary particles. It must support curved space and it must present similar equations of motion.

Why would you want to support curved space if you want to get rid of it in the first place?

The target is a model that is in concordance with physical reality.

Yes, of course, but this does not imply the curved space should be supported in any outcome.

If you are familiar with function finding, every additional day spent on function finding increases the chance of finding a less complex and more precise function.

Curved space is related to graviton, and if graviton can be sufficiently formulated without curved space, you won’t need curved space in any model that requires graviton.

Gravitons are still hypothetical.

Sure thing, gravitons are still hypothetical. But again, this does not mean the “curved space” born from GR can’t be exchanged for gravitons defined in a newer, simpler framework. If GR and this newer framework are both sufficiently precise, it’s usually an easy step to switch to an easier-to-use framework and equations – especially for those who struggle with GR in their work.

Before we dump the most elegant (conceptually) and empirically validated physics theory we have – General Relativity – we should demand very strong empirical motivations and definitive predictions/testing, rather than theoretical fizzibabble.

I may sound too demanding and too layman, but the widely accepted theory of gravity, General Relativity, gives too few answers and has too few practical applications. From what I understand, we can’t control gravity using GR’s framework. We can only make corrections to measurements using GR, so it has not moved far beyond Newtonian gravity in that respect. Most other physical theories offer very practical applications: thermodynamics, electrodynamics, chemistry, etc. Gravity still looks like an underdefined concept, since we can’t control it.

I don’t think we can have an argument.

Elegant yes. But, empirically, General Relativity is both less precisely measured and less completely empirically tested than the Standard Model by a long shot. While every test done of GR has confirmed it, our empirical tests of GR are not nearly so precise, as for example, our electroweak measurements. The gravitational constant is known only to four significant digits and the cosmological constant is known only to one significant digit.

Equally important, there are whole domains of GR at very short distances and in very strong fields or at long ranges in strong electromagnetic fields, for example, where our measurements are not particularly precise. We have precision tests of special relativity, and we have moderately precise tests of GR effects like frame dragging and the precession of planets, but there are lots of gaps where we lack direct measurements.

Also, since GR is important in the same domain of measurement where dark matter is important, and we don’t understand dark matter, we must rely on faith that the differences between GR without dark matter and what is observed are all due to DM rather than some flaw in GR.

The binary pulsar test of GR gave an accuracy of 14 digits and is regarded as one of the most precise tests in fundamental physics. The GPS corrections are also quite good. So there are some precise results and some less precise. The same is true when measuring parameters of the standard model.

But yes, there is still room for GR to fail at short distances etc., just as there could be loads of other forces around that we don’t know about because they are too weak or too short-range.

Please stop promoting this idea that GR has been tested to 14 significant figures. It hasn’t. Just because you have an atomic clock accurate to 1 part in 10^14 does not mean that any experiment you perform using that clock tests its subject matter to a precision of 14 significant figures. The original experimenters observing these systems in the 70s, who understood this, only claimed to be within 1% of the GR prediction.

Good, but not comparable to QED. Sorry, couldn’t reply directly to Philip so this is going to end up in not quite the right place.

I think Phil mentioned something that is not well comprehended or appreciated by lay people: there is an extremely huge desert-like hierarchical gap between the low-energy classical states of GR and quantum field theories in the electroweak range and the insanely high energies of the Planck realm. Quantum gravity is expected to exist at the Planck energies, some 19 orders of magnitude above the low-energy vacuum conditions we exist in. In a sense the Planck energy lies near the initial conditions close to t = 0 (near the Planck time). So we can hardly expect the low-energy state of our current Universe to look or act like the initial physics near the Big Bang.

What makes all this very interesting is the question of how the Standard Model physics parameters of our macro Universe were selected. One thing is for sure: there will eventually be a scientific explanation of how this occurred, however we may or may not like it. A lot of effort is being concentrated on quantum gravity, and I think superstring theory is by far the most probably correct in the Planck-energy arena. In the low-energy realm of the Standard Model, I think the so-called effective theories or approximations (emergences etc.) reflect some unknown connections with the Planckian initial conditions. The question of what is fundamental then arises, and there may be some confusion here. Is there anything fundamental in our low-energy classical world that translated down from the Planck-energy realm? I think the answer would be yes, but I could not tell you what it would actually be. I don’t know.

There is always good news in that science is always finding something new (or refining a concept) that adds to knowledge and will help us understand. Just recently it was found that the ‘fine structure constant’ exhibits no variation in the gravitational field of a white dwarf star.
See http://physics.aps.org/synopsis-for/10.1103/PhysRevLett.111.010801

and http://blog.physicsworld.com/2013/07/05/is-the-fine-structure-constant-affected-by-gravity/ This is in keeping with the equivalence principle and Einstein’s insistence that constants exhibit a rigidity in a correct theory of Nature. So Einstein wins again. There is probably no variation in the ‘fine structure constant’ over time and space within our observable Universe (the change from 1/137 to 1/128 due to the screening charge cloud of the electron does not count, because that is a change in vacuum energies due to accelerative conditions, natural or man-made).

Lastly, I think figuring out selective processes is very difficult because there are so many to choose from. The most interesting to me is based on GR: that of the Weyl curvature. It must be extremely limited in our classical Universe, given that neutron stars have a mass limit and that only moderate-size and monstrous-size black holes exist. If primordial smaller black holes existed, the Weyl curvature would give way to something closer to quantum gravitational curvature and we would observe some quantum gravitational effects from them. Thankfully our Universe is not permeated with quantum gravitational effects, or we would not exist. Eventually, on the wind-down of our Universe in 10^66 years, some black holes will get smaller and gamma radiation will be ubiquitous. I do have some ideas on pure mathematical structures and rigidities of dimensionless constants such as the ‘fine structure constant’, based on the Monster Group order. The neutron star at 1.36 solar masses is a limiting object in classical physics. It is possible that there are two computable ‘gravitational coupling constants’ (very weak gravitational coupling) and ‘fine structure constants’: that of the neutron star, and that of the pre-neutron-star state where quantum superposition of proton quantum states could occur in a classical object. I would be willing to bet that the latter does not exist but is an effective mathematical tool (utilising isospin symmetry between proton and neutron). My site is https://sites.google.com/site/nonanthropicconstants/

Nice comment. I sometimes think that it is not only “lay people” who do not appreciate the significance of the large-scale divide between electroweak and Planck. Many experts also expect that the standard model should be derived easily from quantum gravity. I agree that there may be some signal that permeates down; perhaps the number of generations is related to the number of curled-up dimensions, or something like that.

If we cannot understand 95% of the stuff that makes up 95% of the universe, how wrong observationally can General Relativity be?

Newton said about his own gravity theory, “You sometimes speak of gravity as essential and inherent to matter. Pray do not ascribe that notion to me, for the cause of gravity is what I do not pretend to know.”

Did Einstein ever address this mysterious, unbelievable power of mass to attract other mass or warp space? Just because a theory makes predictions, that does not make it right. That fantastic, unbelievable idea that the earth possessed an incredible ability to make everything in the Heavens rotate around it in a 24-hour period had, and still does have, some easily observed observational support.

With very few resources and no formal training in physics, I can get a ~0.5 kg copper test mass to change its weight by 9% (47 gm) by applying heat below it and its opposite, cold, above it! A. Dmitriev, in a number of studies, and P. E. Shaw in 1916 have also observed a change in the weight of a test mass with the application of heat!

http://vixra.org/abs/0907.0018

http://arxiv.org/abs/1201.4461

http://docs.lib.noaa.gov/rescue/mwr/044/mwr-044-09-0515.pdf

It seems to me that these studies raise some serious questions about the Principle of Equivalence, whose falsification should make the whole edifice of General Relativity crumble, as finally happened to the 1500-year-old Ptolemaic edifice in the 1850s when the amateur Leon Foucault demonstrated that the earth rotated on its axis. After this finding, Galileo’s book was at last taken off the proscribed list.

Thanks for these references! They are useful for my “delta-integrator” model which basically states that gravity field is created due to kinetic energy interactions between particles, directly leading to a conclusion that heating may increase gravity.

The same is true for a body suspended in a gravity field – if heated, its particles will integrate the gravity field faster.

I will update my paper with these conclusions later.

http://vixra.org/abs/1310.0050

It is rather simple to explain how curvature is caused by the embedding of particles in the field that we consider as our curved living space. However, for that purpose it is necessary to switch to a space-progression model in which a universal clock ticks at all locations where an item/event can be observed. This space-progression view differs significantly from the spacetime model that is used by contemporary physics. That does not mean that the universal clock based model is not a viable view of physical reality.

Interesting essay. But I have to say I felt it was loaded with assumption and abstraction and interpretation. Take this for example:

“Our intuition can easily mislead our way of thinking. It is hard understand that time and space are interlinked and to some extent interchangeable but we now know from the theory of relativity that this is the case. Our minds understand causality and free will, the flow of time…”

IMHO that isn’t what relativity says. Yes Minkowski trumpeted the union of space and time, but one has to go back to basics and look carefully at the ontology and what Einstein actually said. He put quotes round Minkowski’s “world”. He talked of what clocks do. He didn’t talk about curved spacetime, he gave the equations of motion. But spacetime is an abstract thing, a block universe that is utterly static. There is no motion in spacetime, and our world is a world of space and motion.

Take a look at A World Without Time: The Forgotten Legacy of Gödel and Einstein. Then take a look at what a clock really does, and then you understand that time doesn’t literally flow. It’s just a cumulative measure of motion. After that you understand what Einstein said about the speed of light, and after that you understand gravity. Deeply. A gravitational field isn’t curved space, it’s just a place where you can plot a curvature in your measurements, your metric. It’s inhomogeneous space, like Einstein said.

Read what Einstein said about a field being a state of space, note the shear stress in the stress-energy tensor, then read the original Maxwell and Percy Hammond too, and you gain a different view of electromagnetism. It isn’t some kind of magic, it has a screw nature, like Minkowski said, like frame-dragging. Then what hansvanleunen said strikes a chord. A photon is a wave in space. Space waves. And when it does, space is curved.

I think there’s a lot to gain from a fresh look at things.

In the literature I have not yet seen approaches that make use of a universal clock. I have investigated this possibility and it looks very promising.

The universal clock ticks at the locations of all items and events that can be observed.

Universal time comes close to the notion of proper time that is used in relativity. In a universal clock model all proper time clocks are synchronized.

In a universal time model the universe steps, at a super-high progression frequency, from each static status quo to the next static status quo.

It is possible to assume that in this model the universe is recreated at every progression step.

Without extra care this model will lead to dynamical chaos. Thus some external mechanism must take care that sufficient coherence exists between subsequent members of the sequence of static status quos.

On the other hand the cohesion must not be too stiff otherwise no dynamics will be possible.

This seems to be an odd space-progression model, but extending it shows that it can represent a viable view of physical reality.
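
The “coherent but not too stiff” requirement can be caricatured with a tiny toy: treat each progression step as a small random relocation of the state. The coherence knob below is my own invention, not part of the model being described:

```python
import random

# A purely illustrative toy of the requirement above: each "progression
# step" recreates the state at a slightly different location. The knob
# COHERENCE_STEP is my own invention; 0 would freeze the dynamics, while a
# huge value would give the dynamical chaos the comment warns about.

random.seed(42)
COHERENCE_STEP = 0.01   # size of the relocation per tick (assumed)
TICKS = 10_000

x = 0.0
path = [x]
for _ in range(TICKS):                        # one tick = one "page"
    x += random.gauss(0.0, COHERENCE_STEP)    # small, coherent relocation
    path.append(x)

# Successive pages stay close together (coherence) ...
max_jump = max(abs(path[i + 1] - path[i]) for i in range(TICKS))
# ... while the path can still drift over many ticks (dynamics).
print(round(max_jump, 3), round(path[-1], 3))
```

The two printed numbers make the trade-off visible: individual steps stay tiny (coherence) while the accumulated position still wanders (dynamics).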

I use such an approach in my model. The photon has a fixed universal velocity, this velocity does not depend on gravity fields, and photons can’t be deflected – they may only be red- or blueshifted. And when you see a photo where a photon is seemingly deflected, the first thing you have to do is think about atmospheric-like reflection; there are a lot of types of “atmospheres” – gases, plasmas, etc. Pure gravitational deflection is the “last resort”; I personally do not believe it exists.

In most physics models the (local) speed of information transfer is taken as a model constant. If the information path is highly curved, then the average speed of information can still look slower, but it never will be faster than the constant.

This however, has no direct relation with a universal clock model. A constant local speed of information transfer can go together with the existence of a universal clock.

Hi John, nice to see you here.

Yes the essay is full of assumptions, abstractions and interpretations. That is the point of it. It is speculation about what I think could be beyond what we know. It is an exercise in finding a consistent big picture. Reality may be different. I like to see others try different possibilities, but then I can argue about anything that I think is not consistent. I tried to make this spirit of the essay clear.

Einstein was a smart man but we don’t need to heed his words too closely. The equations are all that really counts and we can form a better interpretation. I think Minkowski’s realisation that it is about 4D geometry was spot on and it helped Einstein to see how GR should proceed after SR because it emphasized the importance of the metric.

The final theory will look very different again, but what we see from GR and QM can tell us a lot about what it is not like at least.

Philip, if you want a significant change in current theories, then a switch in the space-progression model can bring you the way out.

In my opinion it is possible to synchronize all proper time clocks. That clock will then become a universal time clock. In that case coordinate time can no longer be freely selected and must be deduced from the universal time clock. The result is a very interesting space-progression model. The problem with that model is that in general the universal time cannot be measured. There are two reasons for this. The first is that in general the attributes of the information path from observed item/event to observer are not known. The second is that the frequency of the universal clock must be higher than that of any other clock, and in that way it becomes non-measurable. This does not exclude the universal clock model from being a viable model. However, these considerations make the universal time model fundamentally a deduced model.

The spacetime model of conventional physics has the advantage that the time at the location of the observer can be freely selected and measured with appropriate instruments. However, also in the spacetime model, the time at the location of the observed item/event cannot be measured without knowing the attributes of the information path.

The universal time model has some particular properties. In that model the universe steps, universe-wide, from any static status quo to the next one. Further, it is possible to consider that in this model the universe is recreated at every progression step.

This also means that an external mechanism must exist that establishes sufficient cohesion between subsequent members of the sequence of static status quos, otherwise the model will result in dynamical chaos. On the other hand the cohesion must not be too stiff because in that case no dynamics will be possible.

The static status quos can be described by static sub-models. In that case the dynamic model takes the form of a book in which the pages are the static status quos. Within these pages nothing happens. Interaction will run from lower page numbers to higher page numbers.

A Hilbert space together with its Gelfand triple is a perfect candidate for the environment that can be used to describe the static status quo. The whole Hilbert space will then be a function of the page number, which acts as the progression parameter.

The nice thing about this model is that it has a Euclidean space-progression signature.

Hans, I know this is not what you are referring to, but do you know about NTP, the Network Time Protocol? I really don’t see why you would want to synchronize all the clocks though. In my first week of a job a few years ago we would get bouts of sleepiness, and then the admins would exclaim “wtf, why are all the clocks on these computers drifting?”, and other remarkably weird shit occurred. It’s freaky to think that human beings and cold machines can interact in such weird ways.

The most important thing behind this model is that the universe is recreated at every progression step. That recreation must be done in a coherent fashion, otherwise the model will end in dynamical chaos. On the other hand, the recreation cannot be too stiff, otherwise no dynamics will result. Thus the embedding of particles in the embedding continuum that we consider as our living space recurs at a super-high frequency and at slightly different locations. In other words, these particles appear to walk along a stochastic micro-path whose step stones form a coherent discrete distribution.

This distribution can be described by a continuous object density distribution. If this density distribution can be considered as a probability density distribution, then it can be considered as the squared modulus of the wave function of the particle.

This makes the universal clock model a quantum physical model!

All points noted Philip. But you know, I think the search for understanding is like being a detective. When you reach an impasse you must backtrack looking for that vital clue. And I remember some years ago looking in the mirror after I realised that something I’d believed for decades was… wrong. I blinked at myself, rocked by the realisation that I could not trust my own assumptions, my axioms, the foundations of everything I thought I knew. Please bear this in mind, and please bear with me:

It’s one thing to contradict Einstein, who said the speed of light varies with position. But optical clocks run slower when they’re lower, and there is no literal time flowing in those clocks. At the fundamental level it’s light, moving. So when a clock goes slower it’s because the light goes slower.

It’s another thing to say equations are what really count, but when your understanding contradicts both Einstein and the evidence, you should know you’re onto something. You should know you’re getting warm.

Now take a look at “Is the speed of light constant?” on the Baez website. I don’t know if you wrote it all, but see the GR section. See the sentence that says this: “This interpretation is perfectly valid and makes good physical sense, but a more modern interpretation is that the speed of light is constant in general relativity”. Now see the sentence that says this: “Finally, we come to the conclusion that the speed of light is not only observed to be constant; in the light of well tested theories of physics, it does not even make any sense to say that it varies”.

It’s one thing to contradict Einstein, it’s another thing to contradict the evidence. When you also contradict yourself, you know that you have found your clue. It is hidden in plain sight, and yet once you can see it, it’s as obvious as the nose in front of your face.

Perhaps I could elaborate, and you could attempt to argue about anything you think is not consistent. If you find you cannot, then perhaps you will agree that you are finding the answers you seek. I think it’s like pulling a thread with Einstein’s name on it, and out comes a string of pearls. Your interpretation shifts, things change in ways that make sense, and I like to think that at last, we can see the road ahead.

Why do you think there is an impasse?

Of course there are things we realize are wrong and our opinions must change. I gave one of my examples at the start of this post. But you don’t go back on a century of physics. There is nothing wrong with general relativity. It agrees with all observations and is internally consistent until you reach a singularity. Of course GR will be shown to be “wrong” in the same sense that Newtonian physics was shown to be wrong by relativity, but it is still good for physics in the right situations.

The question about constancy of the speed of light in GR is a question of interpretation. When we change paradigms some of our old terminology is no longer meaningful without new definitions. Physicists only need to concern themselves with the new equations. The equations of GR are consistent but people may use the old language differently and make it look like there are inconsistencies when ideas are expressed in English. How can you talk about speed or mass in GR unless you define how you intend to measure it?

I think there’s an impasse because there’s been scant progress in physics in recent decades. Now there’s an ever-growing threat of funding cuts.

I don’t think GR is wrong, I think the interpretation changed and is now wrong in some crucial ways. For example you mentioned singularities. See Kevin Brown’s The Formation and Growth of Black Holes, where you can read about point singularities vs frozen stars and the “geometrical interpretation” vs the “field interpretation”. This used to refer to Wheeler and Weinberg respectively. I queried it with Weinberg and it now refers to Einstein and QFT respectively. IMHO it shouldn’t, because Einstein would have objected to the point singularities. It’s as if Wheeler’s interpretation of GR is now presented as the only interpretation of GR that there ever was. And the result is an elephant in two places at once. Look at the Kruskal-Szekeres coordinates in figure 32.1 of MTW. The infalling observer goes to the end of time and back. With one small change to your view on time you will know that this is wrong.

I’m afraid I think the speed of light in GR is far more than a question of interpretation. I think it’s an absolutely crucial issue. And I feel I can talk about the speed of light in GR by simply pointing to one clock going slower than another. Like this:

http://www.sciforums.com/attachment.php?attachmentid=6257&d=1367740440

Yes it’s an exaggeration. Yes the mirrors should lean back a little and the light paths should curve a little. But with no detectable tidal force the curves are the same. And still the lower clock goes slower. Make no mistake Philip, when that light is lower, it goes slower. That’s what Einstein said, that’s what the Shapiro delay tells us, that’s what the NIST optical clocks are telling us, and more besides. Do you recall http://arxiv.org/abs/0705.4507 ? Magueijo and Moffat were right.

IMHO until we can fix this issue, relativity is doomed to stay the Cinderella of contemporary physics.

She has some ugly sisters.

John, I am afraid I have to disagree with just about everything you say here. There has been remarkable progress in fundamental physics over the last decades, both on the experimental and theory sides. Was it not progress that they found the top, the Higgs, neutrino oscillations etc? That is just a tiny sample of the experimental progress, which also includes many important negative results such as the repeated no-show of SUSY, proton decay, WIMPs etc. Progress from cosmology has been particularly spectacular, finding dark energy, CMB fluctuations, etc. This is just a small sample of highlights.

On the theory side there has been steady progress in string theory related work. The way it is understood has changed but it still stands as a consistent theory for quantum gravity. I also count loop quantum gravity and twistor theory results as important progress as I explained in this post.

The only bone of contention is that there has not been so much interplay between theory and experiment, but that is to be expected because most advances in theory are relevant to the Planck scale while experiment is working at mostly lower scales. Time will bring them back together. That is not a lack of progress. It is just the nature of the beast.

On GR I urge you to try to understand the mathematics better, nobody who understands it thinks there is a problem with clock rates. Its consistency up to the point where singularities form is a mathematical fact. I don’t think anything further I can say in English can explain to you why there is no problem. At some point you need to match up your thoughts against the geometry and equations to clear up your misunderstanding.

On the subject of Wheeler, Einstein, Weinberg etc., I don’t take anything they say for granted. I learn the subject from them and form my own understanding. I can see areas where Einstein and Wheeler have been wrong, as can others, though oddly enough Weinberg is one of the few physicists who I have never found reason to dispute. He is a real master.

I must have given what I called the universal time clock a wrong name, because this notion of time already exists with another meaning. Maybe universe-wide time clock is a better name.

It’s interesting, good luck with it. I think basing a model on principles from QM would be better if QM wasn’t so uninterpreted.

I don’t believe in a universal clock, because I think block time is wrong (the ‘grain in the block’ I mentioned was to point out what one would get even if block time was right). For block time to be wrong, removing a long list of contradictions, simultaneity at a distance would have to be wrong. That doesn’t look like a universal clock; instead it fits with the local time rates of SR, which can’t be related at all via their ‘now’ moments.

The tick of the universal clock has a duration. It is very small.

I do not comprehend your arguments against progression steps. What is block time?

Block time is the frozen universe picture, in which you have a static four-dimensional block, and nothing moves through time at all. It came out of Minkowski spacetime, and in 1966 a rigorous proof of it was published (later called the Rietdijk-Putnam argument). People used to see that as a proof that time doesn’t exist, and the idea that it doesn’t became part of the landscape and was simply accepted. But 50 years later, during a time of renewed questioning of these areas because of QG, it was noticed that this ‘proof’ only applies if Minkowski spacetime is right. So some of us have been questioning spacetime.

I have only shown some arguments against there being a universal clock, namely that time looks local if block time is wrong. That’s because the Rietdijk-Putnam argument assumed that simultaneity as in SR holds over long distances, beyond the light cone. If that single assumption is wrong (it’s untestable), then our whole present view of time is wrong.

The Hilbert Book Model is a paginated space-progression model that consists of an ordered sequence of static sub-models that each represent a static status quo of the whole universe. The sequence number acts as the progression parameter. Thus the HBM is a realization of the Rietdijk-Putnam argument. The main difference is that the HBM uses the proper time (at the location of the observed item/event) as its notion of progression, instead of the time that is measured at the location of the observer.

It’s possible that we have a fixed “universal clock”. To find its rate, we should measure the maximal photon frequency that can be emitted in our vacuum; in digital signal processing terms this is the Nyquist frequency. Then double that frequency can be the time quantization frequency. Once we have this time quantization frequency, we can find the space quantization step, which will set the fundamental speed limit, which may be way above the speed of light. This speed limit is fundamental since, when going above it, spectral aliasing errors will happen, and I bet our reality is constructed in a way that minimizes these errors.
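The Nyquist point can be illustrated numerically: any tone above half the sampling rate aliases down to a lower apparent frequency. A toy sketch (function name mine):

```python
def alias_frequency(f, fs):
    """Apparent frequency of a tone f after sampling at rate fs (aliasing)."""
    f_folded = f % fs
    return f_folded if f_folded <= fs / 2 else fs - f_folded

# With fs = 10 kHz the Nyquist frequency is 5 kHz: a 7 kHz tone shows up at 3 kHz.
print(alias_frequency(7000.0, 10000.0))  # 3000.0
print(alias_frequency(3000.0, 10000.0))  # 3000.0 (below Nyquist: unchanged)
```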

However, I do not believe that the vacuum is fundamentally curved. Nor do I believe that energy is fundamentally quantized. Otherwise the frequency scale would be discrete, which it is not.

A photon can have any energy, as photon frequencies are not fundamentally quantized. The quantization takes its roots in discrete particle counts and atomic orbital constraints, and is not fundamental.

In the HBM photons appear as modulations of the super-high frequency waves that carry the potentials of the particles that emit these photons. Thus in my opinion the frequency of these carrier waves is much higher than the highest frequency of photons.

The super-high frequency diminishes as a function of time. It goes together with space expansion and red-shifting. However, in the HBM the speed of information transfer is a model constant.

I’ll probably write a paper on “Universe as a digital information superstore”; it should be fun, as there is only a limited number of basic particles (quarks, leptons and gauge bosons), with the neutrino probably being a lost mass of the photon and gluon, and the Higgs boson being a “kludge”. If physics is right, and most particles are constructs of these basic particles, the basic “digital” model may be quite simple.

For the given laws of physics, the “universal time quantization” should be chosen as optimal. I do not think it is much higher than the maximal possible photon frequency, maybe 2-8 times higher, not much more than that, or there will be vacuum state redundancy. What for?

We must admit that physics currently has no fundamental law to limit the speed of light. What we see as a limit does not mean it is the actual limit. What if cosmic rays are not exactly “heavy” particles, but lighter particles accelerated to speeds above the commonly measured speed of light? This should not be assumed to be impossible.

Photons and gluons are quite different beasts than the other (massive) particles including neutrinos.

In a model that is based on a universe wide clock, the complete model is recreated at every progression step. That step has certain duration. During that step the wave fronts that carry the potentials have proceeded over a given space step. The progression step and the space step define a speed of information transfer. You can either take the progression step or the speed of information transfer as a model constant. In the latter case space expansion goes together with expansion of the progression step.

A universe wide clock already produces a paginated space-progression model. If no extra measures are taken, then this model leads to dynamical chaos. Thus, an external mechanism must establish sufficient coherence between the subsequent pages. On the other hand the coherence must not be too stiff, otherwise no dynamics will take place. The result is that each page is a faithful but not precise replica of the previous page. A point-like particle will appear to walk along a stochastic micro-path that is formed by a coherent distribution of stepping stones. This establishes the seeds of the stochastic nature of quantum physics.
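As a toy numerical illustration of this idea (my sketch, not part of the HBM itself): draw the particle’s location at each progression step from one fixed “coherent” distribution, and check that the empirical density of the stepping stones converges to that distribution, which plays the role of the squared modulus of the wave function.

```python
import random

random.seed(1)
# Each progression step re-embeds the particle at a location drawn from a
# fixed Gaussian around the nominal position (the "coherent" distribution).
stones = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# The empirical density of the stepping stones reproduces the generating
# distribution: sample mean ~ 0, sample variance ~ 1.
mean = sum(stones) / len(stones)
var = sum((s - mean) ** 2 for s in stones) / len(stones)
print(round(mean, 2), round(var, 2))
```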

I remember that some group of scientists concluded from the relic radiation that the universe is actually a box, with its sides glued to each other. So, position is a modulo operation like x0=x%1000, where 1000 is the span of the dimension.
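A box with its sides joined is a periodic (toroidal) topology, and the modulo captures it directly; a tiny sketch, using the span 1000 from the example above:

```python
SPAN = 1000.0  # span of one dimension of the toy "box" universe

def wrap(x, span=SPAN):
    """Periodic boundary: a coordinate leaving one side re-enters the opposite side."""
    return x % span

print(wrap(1234.5))  # 234.5
print(wrap(-10.0))   # 990.0 (Python's % keeps the result in [0, span))
```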

This is partly confirmed by the latest discovery:

http://www.scientificamerican.com/article.cfm?id=universe-may-be-curved-not-flat

Such curving of the relic radiation may be emergent from the “boxness” of the universe – a long wave that went to the right corner was superimposed from the left corner.

So, the space is not emergent, it always existed.

I do not mean to compare the photon to the neutrino; I’m just suggesting that the neutrino is a “kludge” for apparent non-symmetry in some particle interactions with the *photon*.

This means that while in most interactions involving the photon its mass is transferred symmetrically with its E=hv energy, in other interactions its mass is carried away by another particle, so a split happens between the photon’s mass and its E=hv component.

This “another” particle is also photon, but with “E=hv” part being zero. This actually equates to the actual mechanism of mass transfer.

The electron inside the atom switches from a higher energy oscillation to a lower energy oscillation mode and as a consequence a photon is emitted. For the atom (or in fact the electron) the difference in energy means a difference in mass. The photon encodes this energy jump in its frequency.

Yes, but that’s only a part of the possibilities. How would you “interface” the transfer of kinetic energy, and hence mass, from one atom to another? Then consider electron-atom interaction in elastic or inelastic scattering. You also have to consider Bremsstrahlung, which “converts” kinetic energy into x-rays…

It’s easy in maths, but not so easy to describe in a stepped, concise, “mechanical”, manner.

What I’ve thought about photon is that its energy can be expressed as E=Eb*(1+v^2/c^2)+h(f-Eb*(1+v^2/c^2)/h).

Where Eb is the photon’s “base” mass-energy (per c^2). The term (1+v^2/c^2) resembles the inverse square of the Lorentz factor with the sign of v^2/c^2 flipped, but this is only a coincidence. The meaning of (1+v^2/c^2) is that when the light source has velocity “v=c”, the photon’s “base” mass-energy doubles. The relationship could be different from (1+v^2/c^2), as I’ve assumed a linear dependence. It is unlikely that the dependence is similar to the Lorentz factor, since in that case the photon’s mass would have been detected long ago in experiments. So, it should be assumed its mass is quite small.

This also demonstrates how photon’s frequency is shifted when it is emitted by a source of light moving at velocity “v”. This is not due to time dilation or length contraction. The proposed mechanism is that light source’s velocity is added to the speed of light.

The exact energy Eb can actually be found by measuring the frequency shift. So, photon has mass, and it can be calculated. And this mass is most probably *negative*.

When a photon enters a gravity field, it behaves much like a massive body: it accelerates, decelerates, deflects, and simultaneously changes its frequency.

Alternative, more classical, variant of expression is

E=0.5*Eb*v^2/c^2+h(f-0.5*Eb*v^2/c^2/h).

Then again, expression

E=0.5*Eb*v^2/c^2+h(f-0.5*Eb*v^2/c^2/h)

assumes that velocity can be zero. For the case of photon it’s probably more correct to assume that velocity is added to the speed of light: so, here is a corrected expression:

E=0.5*Eb*(v+c)^2/c^2+h(f-0.5*Eb*(v+c)^2/c^2/h)

=

E=0.5*Eb*(1+v/c+v^2/c^2)+h(f-0.5*Eb*(1+v/c+v^2/c^2)/h)

which is quite curious…

Sorry, fast typing…

E=0.5*Eb*(v+c)^2/c^2+h(f-0.5*Eb*(v+c)^2/c^2/h)

=

E=0.5*Eb*(1+2*v/c+v^2/c^2)+h(f-0.5*Eb*(1+2*v/c+v^2/c^2)/h)
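The corrected expansion is just the binomial identity (v+c)^2/c^2 = 1 + 2*v/c + v^2/c^2, which can be checked numerically (sample velocities mine):

```python
# Check (v + c)^2 / c^2 == 1 + 2*v/c + v^2/c^2 for a few sample speeds.
c = 299792458.0  # m/s
for v in (0.0, 1e3, 1e7, c / 2):
    lhs = (v + c) ** 2 / c ** 2
    rhs = 1 + 2 * v / c + v ** 2 / c ** 2
    assert abs(lhs - rhs) < 1e-12 * lhs
print("expansion checked")
```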

You need to focus on the change of mass in a gravitational field; it will lead you to some interesting conclusions…

It’s refreshing to hear somebody mention this, Oliver (?). When you throw a brick into the air you do work on it. Yes, conservation of p=mv momentum applies, but the Earth is so much bigger than the brick that the brick gets virtually all the KE=½mv² kinetic energy. As it rises, kinetic energy is converted into potential energy, in the brick. At the top of its arc, it is momentarily at rest. Its E=mc² rest mass has increased. As it falls back down some of that energy is converted into kinetic energy, which is radiated away when the brick hits the ground. So the mass of the brick is reduced to its original value. Invariant mass varies, hence the mass deficit. See wiki:

http://en.wikipedia.org/wiki/Mass_in_general_relativity#Questions.2C_answers.2C_and_simple_examples_of_mass_in_general_relativity

There’s a dreadful subtlety wherein light is deflected twice as much as matter, but perhaps that’s one for another day. As is conservation of energy and descending photons.
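For scale, the rest-mass increase of the raised brick follows from Δm = mgh/c²; a back-of-envelope sketch (the brick’s mass and height are numbers I picked):

```python
# Mass gained by a 2 kg brick raised 10 m against Earth's gravity.
m, g, h = 2.0, 9.81, 10.0      # kg, m/s^2, m
c = 299792458.0                # m/s
delta_m = m * g * h / c ** 2   # potential energy stored as extra rest mass
print(delta_m)      # ~2.2e-15 kg
print(delta_m / m)  # fractional increase ~1.1e-15
```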

The answers in Wikipedia to the mass questions are good except the last, which quotes MTW on energy in GR. MTW got that wrong. Energy conservation has been argued about here at length; see http://vixra.org/abs/1305.0034 for the answers on that.

I read it Philip, and I noted this:

“In my view it is a great pity that very few modern general relativity text books treat the conservation laws fully. A window of understanding on the universe has been blacked out”.

I agree with you, only more so. You know I mentioned descending photons? When you release a 511keV photon such that it descends into a black hole, the black hole mass increases by 511keV/c². No more. You might measure the photon frequency to be higher when you’re lower, but that’s only because you and your clocks are going slower. The photon frequency didn’t change one jot. Nor did its energy. Conservation of energy applies. It is the meta-law because energy exists. And in a strange fundamental way, it is the only thing that does.

Clocks always count at the rate of one second per second in their own reference frame. That is a tautology. There is no clear meaning in your statement that “clocks are going slower” unless you define how you are measuring the rate. What can be said is that frequencies of photons and the passage of time are relative to your state of motion in SR, and in GR they are also relative to position, so a clock as observed in one reference frame can appear to be slower or faster than it is in its own reference frame. As I said above, these words will not help you understand because you have heard them before. I think that only learning the mathematics of curved geometry better is going to clear this up for you. One thing for certain is that there is no inconsistency here. That is not a question of physics. It is a mathematical fact.

@John You said that the BH gains only an additional mass of 511keV/c² when the photon arrives. Where’s the excess energy due to blueshifting?

@John I mean that if you annihilate electrons here on Earth and in intergalactic space and measure those photons’ energies on Earth, they have different energies. Or do you disagree?

Thank you John (Oliver I am 🙂 ) . This is what I think (you should ignore it):

http://vixra.org/author/oliver_r_jovanovic

I’m not working on that anymore; I had enough of that…

Thank you Philip, you are our liberator.

About the white elephant in the room… electron electric charge is -1, proton electric charge is +1, and everybody thinks that is by accident (tomorrow’s children will laugh at us for this).

Furthermore you could find regions in this universe where a PROTON has a rest mass identical to that of an ELECTRON on the Earth’s surface.

Regardless of that, and many other things, I am the only one who believes in electron quark-ed structure.


@jovanovicoliver Obviously you haven’t read my http://toebi.com/documents/AtomModelAndRelativity.pdf 😉 You ain’t the only one believing in electron based hadrons.

The clear meaning comes from the ontology. You have two NIST optical clocks* in front of you. One is 30cm lower than the other, and because of that it’s going slower than the other. And they’re optical clocks. There is no literal time “passing” within them. A clock isn’t some kind of cosmic gas meter that literally measures the flow of time. It features some kind of regular cyclic motion, and you can idealise those NIST optical clocks to parallel-mirror light clocks. When the lower clock is going slower, the light’s going slower:

http://imageshack.us/photo/my-images/855/miyz.gif/

Yes, like I said the picture is exaggerated and the mirrors should be tilted back and the light paths curved. But with no detectable tidal force in the room you’re in, the tilts and curves are the same. And yet the lower clock still goes slower than the upper clock. Because light goes slower when it’s lower. The speed of light varies with position, just like Einstein said. It’s a physical fact. What isn’t, is the notion that time is literally flowing or passing at different rates at different altitudes. The “time” is just your clock reading. It’s just the number of reflections, nothing more. Call it 9,192,631,770 reflections for a second.

Gaze at the picture and let it sink in. Don’t tell yourself the distances are different because the slower light is countered by the bigger second and the horizontal metre is unchanged. Forget everything else and focus on the light moving through space. What you’re looking at is all frames at once. That’s the big picture. The big picture is that the speed of light varies with position, and that Einstein said it, and the hard scientific evidence says it too.

* http://www.learner.org/courses/physics/scientist/transcripts/wineland.html
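The size of the effect in those optical-clock comparisons follows from the weak-field relation Δf/f ≈ gΔh/c²; for a 30 cm height difference it is a few parts in 10^17 (a back-of-envelope sketch):

```python
# Fractional rate difference between two clocks 30 cm apart in height.
g, dh, c = 9.81, 0.30, 299792458.0  # m/s^2, m, m/s
shift = g * dh / c ** 2
print(shift)  # ~3.3e-17: the lower clock ticks slower by this fraction
```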

duffieldjohn, why do you assume the light is going slower? The whole clock’s mechanism has enough matter to introduce matter-born delays. Every light reflection is the point where the *matter*’s clock-work interferes with the light. The light is not reflected by some “magical” matter-less way, it is always absorbed and then re-emitted at a reflection angle. Maybe, after all, the matter at a lower point actually runs *faster* and atomic energy levels are blueshifted? Clock rate can be a vague source of knowledge if you do not understand every tiny bit of it.

From what I have gathered, nobody has really done any “real life” comparisons of the processes of aging on the Earth and in orbit. If you can point me to these, I would be grateful.

So far I assume that cesium clocks may run faster in orbit because the energy lines of cesium change and this increases the transition rate. However, I’m more than sure it’s possible to find a substance whose transition rate will be reduced when placed in orbit.

So, SR may be useful at establishing atomic energy level changes, but the conclusion about “natural clock” rate change may be quite opposite.

Take a pendulum, for example: it swings faster in a stronger gravity field.
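The pendulum remark follows from the small-angle period T = 2π√(L/g): a larger g gives a shorter period, hence faster swings. A one-function check (the surface-gravity values are standard figures I supplied):

```python
import math

def pendulum_period(length, g):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length / g)

t_earth = pendulum_period(1.0, 9.81)     # ~2.01 s
t_jupiter = pendulum_period(1.0, 24.79)  # ~1.26 s: faster in stronger gravity
print(round(t_earth, 2), round(t_jupiter, 2))
```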

We tend to time things with an Earth surface clock. In GR, in relation to an Earth surface clock, the speed of light varies. That’s all John has pointed out. It’s constant in relation to a local clock.

When anyone tries to explain time away, they tend to use time in the description. For instance:

‘A clock isn’t some kind of cosmic gas meter that literally measures the flow of time. It features some kind of regular cyclic motion.’

‘Regular cyclic motion’ implies a sequence of events, and a flow of time. (The guy from NIST you linked to does exactly the same thing.)

Aleksey’s comments underline the point that physics is full of equivalence. That’s why we need the clues from the conceptual picture, because the clues from the mathematical picture are ambiguous. The same mathematics could be describing ten or fifty different underlying pictures. But when the conceptual clues seem to be telling us something, that’s more reliable.

Jonathan: regular cyclic motion implies a sequence of events, but it doesn’t imply a flow of time. It’s important to be empirical with all this. Light moves. You can see things moving, because light moves to your eye, and electrochemical signals move in your brain.

But you can’t see time flowing. Because in truth, there IS no time flowing. There is no time flowing between those parallel mirrors. It’s just light, moving, and that’s it.

It’s that simple.

Yes, light moves and its velocity sets the timing reference. However, the light always moves at the same velocity, without dependence on the gravity field’s strength. Where GR implies light’s speed should change, it at the same time implies that light’s frequency should change. This poses a hardly resolvable problem; it’s actually an important “paradigmatic” point. We should choose which way to go: fix the speed of light, or fix its frequency. In my opinion, fixing the frequency is the easier way as it allows us to live without curved space.

Has anyone really measured the speed of light at distant orbits like 26000 km? Using cesium clocks’ rate change as an estimate of the speed of light is generally incorrect, because cesium clocks do not measure the speed of light. The speed of light can be measured objectively only over very big distances, and there seems to be a lack of such measurements at distant orbits. If you measure the speed of light over short distances, energy level shifts of matter begin to take a large part in the equation, and such measurements immediately become incorrect.

Sorry, I meant “fixing the speed of light” gives an easier world-view.

In all your posts, you haven’t said one single thing to back up your statement that there is no flow of time. You describe the workings of a clock (a light clock), and say that we don’t see time flowing, just the clock mechanism working. That’s true with any clock, but it’s utterly irrelevant. The question is whether there’s an underlying flow of time, and you haven’t even looked at it.

You use words that invoke a flow of time constantly, such as “light moves”. That statement implies a flow of time. You say:

“The “time” is just your clock reading. It’s just the number of reflections, nothing more.” You’ve done it again! A reflection takes time, so you’ve used time in the description.

“Regular cyclic motion” you admit has a sequence of events. A sequence of events implies a flow of time, unless you can say why there isn’t one.

To say why there isn’t one, you have to explain the apparent motion, or flow of time we observe, or seem to observe, because we do. To explain it, you presumably have to make the observed apparent motion either an illusion or emergent. In either case you then need to explain why something emergent or psychological varies in speed, and why these local speed differences in the apparent flow leave permanent age differences between objects behind them. One problem with the illusion idea is that if you slow an illusion down and then speed it back up again, there should be no permanent traces left behind. Some of the problems with time being emergent, on the other hand, are in posts above; most of those apply to both illusion and emergent time.

To make the point you’re trying to make, it’s not enough to just have in your mental picture the idea that time doesn’t exist. We all observe the same apparent flow of time, and we measure variations to its apparent rate. In science, we try to explain what we observe.

The reason some really good physicists are grappling with these questions (read the article I linked to above – btw, Tim Maudlin is among those who are trying to create theories that bring back a flow of time into physics, along with Ellis and Smolin), is that dismissing the apparent flow of time has simply not worked so far. It may work in the future, but it hasn’t yet.

Alexey, I think your signals perspective is valuable: Einstein started in a patent office where they were processing that stuff! viz. Volterra filters from Volterra–Stieltjes integrals, time-slicing like in Hans van Leunen’s stuff.

People still stumble over the difference between an instrument registering an event, and what the observer may deduce from that: this was Bohr’s interest, and he was always misunderstood.

Here’s Knuth on how these *deductions make a *logic with a Minkowski signature:

arXiv:1209.0881v1 [math-ph] 5 Sep 2012

Meanwhile the ‘real’ local metric remains Euclidean… My interest now is in the fine structure of these logics, where even the mathematicians trip up: esp. associative and distributive laws.

orwin, reminds me of the work of another Bohr, that of almost periodic functions. also, i give you props for the Volterra integral mentions, I fondly recall reading the works of the great mathemagician Norbert Wiener. Peace

Jonathan: I have looked at the flow of time. And it simply isn’t there in any real sense. I can demonstrate space and motion with my hands. Can you demonstrate the flow of time? No. Is there any actual evidence for it? No. And please note this isn’t something I’ve made up. See A World Without Time: The Forgotten Legacy of Gödel and Einstein.

This issue isn’t irrelevant, it’s crucial. To understand the universe and see the big picture you have to pay attention to what’s there. You must not allow yourself to be distracted by abstractions that are not there. I cannot prove to you that there is no flow of time just as I cannot prove to you that there is no heaven. But I don’t need to explain motion, because I can show it to you. It is real, it is empirical, it is evidential, it isn’t some illusion. And I can also show you one thing moving faster than another.

I didn’t say time does not exist. Time exists like heat exists. Heat is real, it burns you, but it is an emergent property of motion. The temperature of an ideal gas is a measure of the average KE and thus the average motion of its atoms. Time is a cumulative measure of local motion, and the wave nature of matter should tell you that for SR this is reduced by macroscopic motion through space because the total rate of motion is c. See the Wiki time dilation page and look at the “simple inference” using parallel-mirror light clocks. The Lorentz factor comes from Pythagoras’s theorem. It really is that simple.
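That “simple inference” can be reproduced in a few lines: in a light clock moving at speed v, one tick traces the hypotenuse of a right triangle, and Pythagoras gives the Lorentz factor directly (a minimal sketch):

```python
import math

def gamma_from_light_clock(v, c=1.0, gap=1.0):
    """Tick ratio of a moving parallel-mirror light clock.

    Light crosses the mirror gap while the clock moves v*t sideways, so
    (c*t)^2 = gap^2 + (v*t)^2. Solving for t and dividing by the rest
    tick gap/c yields the Lorentz factor from Pythagoras alone."""
    t_moving = gap / math.sqrt(c ** 2 - v ** 2)
    t_rest = gap / c
    return t_moving / t_rest

print(gamma_from_light_clock(0.6))  # 1.25, i.e. 1/sqrt(1 - 0.36)
```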

Yes, in science we try to explain what we observe. And when we observe light going slower when it’s lower we don’t say it isn’t and claim that it is. Do we? We don’t dismiss the patent scientific evidence by referring to something we can neither observe nor demonstrate. Do we? And we don’t claim that we live in a block universe that is static. Do we? Amazingly, the answer is yes.

I have read the New Scientist article. It’s popscience I’m afraid, intended to sell magazines, featuring people with “really good” credentials. Pay no attention to credentials, pay attention to the empirical evidence. Pay attention to what you can see.

It seems you haven’t read my posts closely. For instance, I said light does go slower lower down, not that it doesn’t – in relation to the Earth-surface clock you’re referring to by implication.

You talk about light clocks as if they had some weird difference about them, there’s none. With any clock, you only see the mechanism at work, you don’t see time flowing. But with the light clock, this fact seems enough to prove something to you.

You say time is emergent, that’s a respectable view, though you haven’t addressed the problems I’ve pointed out with it. Anyway, we’re off topic, and I feel like Kimmo did at 4.17.

Einstein interested in Volterra kernels? No surprise. Full Volterra kernels are “arcane” stuff as they model non-linear time lapse effects. In digital signal processing we usually use diagonal Volterra kernels only, to model the harmonic distortion. It’s close to impossible to calculate the full “black box” Volterra kernel out of input and output signals, it is much like solving a cryptographic problem.

I personally think we do not need a curved space; it’s a complexity to train the mind, not a practical requirement. Sometimes I dislike seeing people juggle complex equations and concepts. I think if somebody is smart, he should simplify things, not make them complex.

John, the “frozen universe” is strictly due to the Bianchi identity in the matrix algebra. Levi-Civita found a work-around using symplectic manifolds, but in the Einstein-Cartan theory the topos acts up, because it’s not distributive. This is close to saying it’s non-Abelian, like quantum theory. So is there a quantum of gravity waves: not a graviton, but maybe the Higgs phenomenon?

As for evidence, yes, in practice electrodynamic solutions come through Green’s functions in the boundary conditions, giving just relativistic “images”: how the process appears from different points of view.

Alexey: yes Volterra is an off-beat example, like affine manifolds, which are more general than physics. The straightforward time-slicing is Riemann–Stieltjes. And there are optical axioms of relativity which reduce it to just what is observable. It would help to have a common-sense approach for communications engineering. I would encourage you to write a book like that.

And if you want time to flow you can order it with Stokes’ Theorem:

arXiv:1101.1690v1 [gr-qc] 10 Jan 2011

It does help to see the flow as an independent assumption.

orwin, I think I’ve implicitly, without prior knowledge, used Riemann–Stieltjes integration in my paper in function (12) http://vixra.org/abs/1310.0050 It’s that simple, but may be a bit different still (in my approach the f(x) and g(x)=dP/dt are not known a priori, cannot be calculated directly, they are revealed in the process of integration).

This works very efficiently if you work with body or particle energies rather than with accelerations. And it applies to relativistic and non-relativistic velocities; just change the formula for dP/dt (P is position).

In my approach, the energy gain per unit time depends on the body’s velocity, while the time, which is “x” in the Riemann–Stieltjes formula, is incremented linearly.

What I have found out is that, when you model a body in a gravity field, such “energy integrator” based modeling gives the same result as the classical velocity-acceleration “Newtonian” based modeling.

It’s also interesting to note that my “delta-integrator model” approach rules out the curved space in the case of photon. Or otherwise, a formula should be invented which physically ties the mass-less photon and its energy (it does not have any other properties on its own) to produce its non-constant velocity. This is impossible to do without adding some external variable into the equation (like gravity field strength), but then you’ll just visibly violate the postulate of the constant speed of light. To resolve it, you’ll have to modify the “time step”, and then you have a mess which is hard to comprehend.
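The energy-integrator claim above can be illustrated with a toy free-fall sketch. This is my own construction, not the actual function (12) of the paper: uniform field, unit mass, and a small initial velocity so the energy stepping has something to start from.

```python
import math

g = 9.81        # field strength, m/s^2
v0 = 1.0        # small initial velocity so the kinetic energy is nonzero
dt = 1e-4       # time step, s
steps = 20000   # 2 seconds total

# Classical velocity-acceleration ("Newtonian") stepping
v, x_newton = v0, 0.0
for _ in range(steps):
    v += g * dt
    x_newton += v * dt

# Energy-based stepping: track kinetic energy E = v^2/2 (unit mass);
# the energy gain per unit time depends on the body's velocity: dE/dt = g*v
E, x_energy = 0.5 * v0 ** 2, 0.0
for _ in range(steps):
    vel = math.sqrt(2.0 * E)
    x_energy += vel * dt
    E += g * vel * dt

# Both track the exact x = v0*t + g*t^2/2 = 21.62 m closely
print(x_newton, x_energy)
```

Both loops land within a few millimetres of the analytic answer and of each other, which is the sense in which the two modelings agree.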

Alexey, your method sounds like Emmy Noether tidying up the Lagrangians after Einstein, to get the components of conserved energy. But it didn’t stop there, and integration theory is where people are working now.

But aren’t you missing the old Weber formula:

c^2 = 1/(mu*epsilon), the permittivity and permeability of free space? That allows the speed of light to vary with the medium, as we know it does. Feynman’s path integrals for the Lamb shift were about getting that right, for radio waves passing through the atom, and that’s QED.
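That relation is easy to check numerically with the textbook free-space values:

```python
import math

mu0 = 4.0e-7 * math.pi       # vacuum permeability, H/m (classical value)
epsilon0 = 8.8541878128e-12  # vacuum permittivity, F/m

c = 1.0 / math.sqrt(mu0 * epsilon0)
print(c)  # ~2.99792458e8 m/s
```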

orwin, I’m currently focused on gravity fields. Vacuum permeability is a different thing, it’s actually defined for 2 conductors, so it does not have the same fundamental order as “space metric”. Conductors are matter and the change of speed of light may not be strictly due to “space metric” (presumably changing with vacuum permeability), but can be due to photon exchange between matter of conductors. The same applies to photons going through some medium, be it atom with its orbitals or crystals. Permeability may not be exactly vacuum’s property, but may be the property of transition between matter and vacuum. The math is not specific about this.

I’m trying to get the “space metric” to be linear everywhere.

orwin, “transition between matter and vacuum” is what can be called a “boundary effect”. It’s great you’ve mentioned this “permeability” formula. I think the formula actually lacks the alternating current element. Has anybody measured how far AC runs from conductor’s boundary? I bet the higher the frequency, the farther away from the conductor the AC runs, and “alternating permeability” increases.

Take the Tesla coil, for example. I know the establishment does not regard Tesla’s experiments and ideas highly, but there are two facets to it. First of all, it’s not understood; secondly, Morgan and Co could actually threaten Tesla with death. If you can run a very high-frequency current somewhere higher into the atmosphere, the permeability will be so high, it will suck energy from “thin air” in unlimited quantities. The Wardenclyffe tower was a master idea of Tesla’s. It’s not about distribution of energy, it’s a high-tech power generator; the electricity ran from the air to the tower, not the other way round.

Matter-non matter is a simple dichotomy in my opinion. Matter has mass and can absorb and emit photons (integrate and “de-integrate”). Non-matter cannot integrate photons, hence photons are “non-matter”. However, both matter and non-matter can integrate and de-integrate (emit) gravitons.

Currently, I’m working on the “oscillatory integrator” piece. This way it’s very easy to demonstrate how a photon’s energy can be integrated by a complex-numbered oscillator. If you multiply an e^i(f1) oscillator by an e^i(f2) oscillator, you get e^i(f1+f2), exactly what’s needed to fulfill conservation of energy. Also, oscillation phases sum, which is equivalent to a phase shift. If you add phase quantization here you get time delaying effects “for free”. So, the speed of light is constant, but when it interacts with matter, it is delayed. This is what current research seemingly strives to achieve – a variable light delay. Signal processing achieved this a long time ago in bucket-brigade delays.
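The phase-summing property is easy to verify in a couple of lines (the phase values are arbitrary examples):

```python
import cmath

f1, f2 = 0.3, 1.1   # example phases, radians

z1 = cmath.exp(1j * f1)
z2 = cmath.exp(1j * f2)

product = z1 * z2                  # e^{i*f1} * e^{i*f2}
expected = cmath.exp(1j * (f1 + f2))

# Phases add, magnitudes stay 1: the product is e^{i*(f1+f2)}
print(cmath.phase(product), f1 + f2)
```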

(sorry, non-matter cannot emit gravitons)

This is getting gritty. On matter/non-matter: the Schroedinger wave is asymptotic to infinity, so there’s no place with no amplitude. They just call it vacuum polarization, but Weber’s formula also holds everywhere.

On the oscillator: aren’t you looking at the complex dielectric which is now emerging in private sector development? See Remote Sensing Systems: http://www.rmss.com. This is hydrodynamic stuff: see arxiv:1201.681v1 [cond-mat-soft], so think magnetohydrodynamics in astrophysics.

Tesla’s struggle: this option wasn’t on the table, of course, and Helmholtz did the wave function decomposition strictly in terms of positive definite real values, like a good Prussian bureaucrat. Hence, after all these years, a quiet privatization of physics…

I mean, a recent scan picked up high energy protons/cosmic rays up to 300 GeV. More solar energy in there! And those amplitudes are also everywhere.

It strikes me that when Einstein hit the freeze with the Bianchi identity on GR1, he made it over like thermodynamics in Euler stresses. So the ‘curvature’ should resolve as refraction due to the mass-energy density, but the integration is tricky: the ‘ultraviolet catastrophe’ back then, now renormalization issues.

On skin depth in conductivity: I haven’t seen frequency analysed, but I know it’s important: indeed, just where energy breaks into frequencies, classical gives way to quantum: h = E/f. Also, resistivity is frequency-dependent so there’s no thermal equilibrium short of frequencies. Again a thermodynamic problem.

crow: the short answer is Hermitian (!) Hamiltonian scalar product, with positive-definite values (sigh!). Anyone for ghost-gauge?

Yes, the Schroedinger wave spans -inf to +inf, because e^ix spans that way. However, the square of the wave function, a probability density function, sets a narrow area of space where a photon can be located.

In my opinion, a photon is always bounded by a “spatial probability density function”, for which I’ve finally chosen the Cauchy PDF.

The double slit experiment is great, but it does not imply that the wave is really infinite: even in this experiment the wave visibly dims at the edges, a fact which should not be taken lightly. This fits Cauchy PDF well.
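That dimming at the edges can be made concrete with the Cauchy PDF itself (unit scale sigma; a toy illustration): the tails fall off like 1/x^2, fading fast but never quite reaching zero.

```python
import math

def cauchy_pdf(x, sigma=1.0):
    # Cauchy (Lorentzian) probability density with scale parameter sigma
    return 1.0 / (math.pi * sigma * (1.0 + (x / sigma) ** 2))

# Density at the centre versus far out in the tail
centre = cauchy_pdf(0.0)    # 1/pi
far = cauchy_pdf(100.0)     # 1/(pi * 10001): tiny but nonzero

print(centre / far)  # 10001.0 — the tail dims as ~1/x^2
```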

c0=1/sqrt(mu0*epsilon0) may not be as fundamental as it seems. c0 may actually be a constant, always. Then this formula can only be used to define the “boundary conditions” between a conductor and vacuum: e.g. when mu0 is increased, epsilon0 is decreased. According to Wikipedia, NIST does not regard this formula as fundamental.

Curvature as refraction? I do not really know. What I know (maybe wrongly), is that change of speed of light in gravity field has no solid long-distance experimental evidence. I can agree that gravity field’s strength (or its velocity which is equivalent) affects atomic energy levels (lines), this effect is akin to the compression produced by acceleration; this is again “very thermodynamic”.

On Tesla’s Wardenclyffe, I find it very amusing to read such bold statements as these:

“The transmitter itself was to have been powered by a 200 kilowatt Westinghouse alternating current industrial generator.”

“Beneath the tower, a shaft sank 120 feet (37 m) into the ground.”

http://en.wikipedia.org/wiki/Wardenclyffe_Tower

No wonder, many perceive Tesla’s ideas as partly insane… until you realize that a hangar with “hay” label on it really held something different, and the history behind it was partly “forged”.

Sorry for injecting mentions of Tesla, but this is fundamental. From what I’ve gathered, by “wireless energy transfer” Tesla actually meant “wireless energy transfer from the atmosphere”, hence the need for deep grounding. Otherwise Tesla should have built at least two towers – a sender and a receiver – to demonstrate the proof of concept, yet history has no record of a second tower.

Here’s the latest Mar’13 patent from MIT and US government on wireless energy transfer, works at 5MHz: http://www.google.com/patents/US8400018

I wonder if a similar but larger device worked at 5 GHz – I bet it would cause permanent lightning strikes in the area. Dangerous without solid grounding and a high standing structure. If a lightning strike can be caused at will, that equals enormous amounts of energy going like high-speed trains between air and ground. One just taps into this flow and the circuits have power.

A lot of dark matter seems to exist in physical sciences. It must have been generated by the dark energy of the minds of some physicists that love to keep things complicated. Know that enlightenment is located in simplicity and not in mixing non-comprehended stuff with more non-comprehended stuff.

Real science tries to bring problems down to their bare essence.

Deep grounding: of course, to get beneath the surface flux of electrons. This is long known in American outback science: there’s a diurnal flux which drove the pioneering long-distance telegraph; now replaced by solar.

That ‘making hay’ story: runs to an ironic twist in the ‘barn’ measure for experimental information: as in the now classic ‘inverse femtobarns’ at CERN.

Skin depth again: I actually probed that, interested in the transfinite diameter. For material conductors, frequency gets lost behind the roughness of the surface!! It’s happening right on the surface in all its microscopic detail!! So now they find a cos^2 dependency on the bond angles at the edge of a carbohydrate molecule!! And just now, carbon nanotubes wrap round and eliminate this edge: the resistance then drops to near zero: call it superconduction without superfluid coherence.

I’m impressed that you noticed the Cauchy distribution, but even the faintest tails are non-zero and can sustain what they call vacuum polarization. Have you looked at the Clausius-Mossotti/Maxwell/Lorentz-Lorenz law? Refraction etc. direct from the complex dielectric constant.

You are using the engineer’s delta^2 = zero, which is a fudge. Strictly, that’s not zero, it’s a symplectic (Cauchy, of course!) infinitesimal, and this now matters: that’s exactly where the Weil conjectures collapse, with infinitesimal divergences, as in neutrinos with mass!!

If you recall that Weyl wrote the neutrino equation in 1924, and it’s now tripped the standard conjectures/model….

Back to skin depth: the place to look is heavy astrophysics, where it appears in jets and the like.

arXiv:1009.5406v1 [gr-qc] 27 Sep 2010

frequency-dependent skin at the event horizon, relative to the frequency associated with the Schwarzschild radius. That’s now enough to suggest that in conductivity the ‘deep’ thermal factor involves quantum tunnelling!! A seriously knotty issue, then….

There must be something to building a Tesla coil at a high altitude. Well, maybe combine high frequency with high voltage. Some group used a small rocket to push a conductive tether into the atmosphere; it got burnt by a huge power surge from the atmosphere to the ground. Maybe it’s possible to “dose” the power surges by using a Tesla coil; there is little hope any other technology can be applied to make use of the atmospheric potential. It’s either something which can control the inflow of energy or something which will get burnt: no practical tether can stand that power.

Of course, the original idea behind Tesla coil was to transfer information and power through the ground.. like it’s super-conductive. Well, enough of that, I’m probably too intrigued by these coils…

orwin,

What’s the reason behind the introduction of vacuum polarization as a “fact”? Was it due to energy uncontrollably leaking from the system, e.g. like the situation with dark energy? Was it due to strict spatial constraints in the equations? But do actual matter (and photons) have strict spatial constraints? I doubt it; a photon’s tails won’t be cut off abruptly unless you do it with the “help” of math. Just increase the area of interest so that the tails are not cut – this is what is usually necessary to do in signal processing, “pad the signal with zeroes on both sides”. It usually does not take infinity to achieve a tail with a tolerable discontinuity.

I think I understand what you mean by referring to refraction and the complex dielectric constant, but I can’t see how they are applicable to space if its metric is assumed to be linear everywhere. Until I see some very conclusive long-distance speed of light measurements that demonstrate a variation in the speed of light rather than a variation in atomic clock rate, I can’t introduce an unnecessary element into my worldview.

…well, the worldview changes with time of course.. What if a photon has “kinetic mass” which is added and subtracted every time the photon is emitted and absorbed? This is consistent with special relativity (if the speed of light does not depend on the inertial frame then the velocity of the inertial frame must be added to the photon “behind the scenes”). I’ll think along these lines; this seems to be compatible with my “delta-integrator model”, but does allow a variable speed of light when inertial frames are summed. I just need to find a balance between “kinetic mass” and E=hv.

Should be fun 🙂 well, I’m not hurrying with my model, it’s a refreshing hobby, even if I sometimes write a complete BS…

orwin, I do not see where “I’m using” delta^2 = zero. What is that?

The Cauchy PDF scales well, via its “sigma” parameter. While Earth’s gravity field is considerable for the Moon as a whole, this gravity field is insignificant for an electron on the Moon.

They now see gamma rays from electrical storms, which are hectic; and there are quieter large energies called sprites, after the musings of old German nature philosophers. But if we can detect gamma rays and now transmit data in orbital angular momenta, is it impossible to build a cosmic ray conductor? Just asking – the problem would be more than complex because of antimatter – like the Dirac equation, as it turned out.

The delta^2 = zero is the old way of managing infinitesimals, as errors so small that their variance is negligible. To say the squared amplitude of the Schroedinger wave vanishes is like that.
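Incidentally, that delta^2 = 0 rule, taken exactly rather than as a fudge, is the dual-number arithmetic behind forward-mode automatic differentiation. A minimal sketch (my own illustration, unrelated to the physics under discussion):

```python
class Dual:
    """Numbers a + b*eps with the rule eps^2 = 0 taken exactly."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + a2*b1)*eps,
        # because the eps^2 term vanishes exactly
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

# Seed x = 3 + 1*eps; then f(x) = x*x + x carries f'(3) in the eps part
x = Dual(3.0, 1.0)
y = x * x + x
print(y.a, y.b)  # 12.0 7.0 : f(3) = 12, f'(3) = 2*3 + 1 = 7
```

The eps part of the result is the exact derivative, with no limit-taking: the “error so small its square is negligible” made into arithmetic.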

The vacuum polarization comes in with Feynman’s path integral solution for the Lamb shift in spectra due to passing radio waves. Julian Schwinger was very sarcastic – “who needs it?” – but he eventually talked himself out of physics into the mists of cold fusion. On a close examination, Feynman was correcting for the dielectric within the atom.

orwin, what do you think about the idea that “space” has a kind of “friction” against rest mass? For the proton’s mass, this friction adds a correction of roughly the square of the Lorentz factor: E = (m/2)*v^2/(1-v^2/c^2). Increasingly lighter particles may have a different correction, and even go faster than “c”.

Now, set the photon’s mass to a very small value, small enough to not detect its gravity field, and you have almost no “rest mass friction”; practically, in our known universe, the speed of light can reach “2*c”: from 0 up to “c” from the kinetic energy of the light source, plus exactly “c” as a constant velocity.

Maybe neutrino is that missing photon’s mass?

Then, hypothetically, it should be possible to arrange a system that pushes the lightest available particles like billiard balls, in a cascade arrangement (each cascade level increases energy by roughly 1.7; two particles on average strike a single particle at an angle, and this single particle together with another particle strikes a particle at the next level, etc). And voilà, you have an information channel with velocity >c.

I really wonder why physics articles DO NOT mention a single easy-to-understand idea that a photon’s velocity equals “v+c”. I find it hard to believe I’m the first to think about it that way.

I read a lot about aether ideas about photons being waves like sound waves. And I actually was carried away by them..until now.

And then Einstein creates his theory as a “solution” to these wrong aether ideas. But he ALSO insists that speed of light is constant. Why? Add “v” to “c” and you won’t notice the change of speed of light due to movement of inertial frame. But as a result you have no length contraction, no time dilation…

Jonathan: I’ve read every word of your posts, and I’ve responded faithfully to your points. With respect you cannot demonstrate that time flows because this is merely a figure of speech. IMHO you should examine this issue carefully instead of saying you’re retiring from the discussion because you’re bored. When attempting to gain understanding it’s important to be clinical and logical and empirical. I find it helps to look in the mirror and say something like this:

Is there any evidence to support my belief?

Yes.

Then give it.

I can’t.

Why not?

I just can’t.

Let’s start again, OK?

OK.

Is there any evidence to support my belief?

No.

Then why do I believe it?

I don’t know.

That’s not very scientific, now is it?

No.

So don’t believe it, OK?

OK.

I’ve given plenty of evidence, see posts above (you haven’t addressed a single one of the points I made). One bit of evidence was the permanent age differences between objects left behind by time dilation. These traces left behind – after differences to the rate of the apparent flow that we all seem to observe – leave a kind of ‘grain’ in the block of block time. This grain suggests not only motion through time, but laws of motion involving motion through time. This is one of the many things you need to explain to argue that there’s no flow of time.

At one point you said something like ‘you can’t show me a flow of time, but I can show you x, and y’ etc. and one thing you mentioned was motion. Well if you can show me motion, I can show you a flow of time, because you can’t have one without the other. More generally, the laws of physics need a flow of time, and we seem to observe one. So the burden of proof, if you say it doesn’t exist, is very much with you. You have a lot to explain if you say it doesn’t exist.

Just because physics can’t explain something doesn’t mean it doesn’t exist. We’ve known about all kinds of things before being able to explain them. And we even used to admit that they appeared to exist.

Admitting what we don’t know helps progress in physics, and failing to admit what we don’t know holds progress in physics back.

Jonathan, did you consider the possibility that nature possesses a universe-wide time clock that ticks with a super-high frequency that is higher than any other frequency?

Such a notion of time seems to flow, but in fact it steps.

See:

http://www.e-physics.eu/TheStochasticNatureOfQuantumPhysics.pdf

[sorry, my post in reply to this one from Hans is above]

Alexey, from what I understand:

*photon mass: relativistic or virtual mass is known experimentally from the solar wind.

*a velocity above c must be a group velocity, actually a shuffling of phases in the wave packet. But real enough as such.

*space does offer resistance; it’s been calculated recently, from vacuum polarization.

*Einstein objected to Weyl’s take on GR, denying any histories involving change in spectral data. Weyl then backed off, but the Weyl action is now back in the game: interestingly, Grothendieck of category theory fame is quietly working there now.

The question of where the virtual mass goes when a photon is absorbed seems to me interesting and important. It’s the energy factor in the gravitational potential again: it must deform the atom, and that response would have a bearing on elasticity and scattering. And quite possibly massive neutrinos.

**Does this not involve the Weyl action, and deformation into the range of the Kaluza-Klein dimension, which may well be inverted, i.e. act in the range of magnetism/torsion/the spin factor as a soliton/dark matter/KdV wave?**

orwin, thanks.. Why a group velocity? To satisfy SR and GR? But they deny >c velocities, be they group or not.

Do you know if anybody analyzed the actual speed of the cosmic rays/particles? (not resorting to the speed of light constant)

Is vacuum polarization an effect related to mass? I’ve read it’s related to charges and currents only.

What I’ve meant is vacuum’s resistance to gaining momentum. The formula for relativistic momentum can be seen as formula that requires increasingly more energy for each velocity increment. It’s like “vacuum’s mass resistance”.

So, Einstein didn’t like the idea that a photon gains or loses energy in the gravity field like a massive body? That’s understandable if one offers a theory where the vacuum is non-linear and the change of spectral lines is not due to a change of self-energy, but due to a change of metric.

But I do not see why another model must surely be wrong, where the gravity field is a *field*, not a spacetime curvature.

*gravity field: yes, that was Weyl’s strong intuition.

*change of momentum: there the vacuum resistance is called ‘backreaction’

*>c: was confirmed at CERN recently in the neutrino experiment, with the wave packets en route to the vertex where the neutrino appears

*cosmic rays: would be rather slow, being heavy. the ‘imaginary’ factor here is surely antimatter, not >c. The Dirac equation throws up another square root giving imaginary solutions, and much to Dirac’s surprise, positrons.

I’m interested in your Cauchy distribution: that seems to me to cover the coherence of matter at the level of alpha particles (so not stray protons): hence the clumping of radiation in ‘hot spots’.

Weyl action: this paper was left unpublished for twenty years (those Star Wars years!): as the author says, the epistemology is non-standard. I find it interesting for introducing the Berry phase, which links to the fine structure constant.

arXiv:1212.6786v1 [quant-ph] 30 Dec 2012

All this stuff is now in heavy matrix calculus, which is far from your spectral analysis. Which is a problem for communications engineering: the internet is horribly sticky with unattended delay factors, not to mention topological knots. It helps to use Intel’s IA64 architecture, but Oracle were trying to annihilate it.

On the brighter side, Leonardo DiCaprio is entering a team in Formula E racing: electric cars, where these ideas find application. So at least California is on the move again.

And your Yablo team in the smartphone wars have a better design idea: a two-sided screen to match the duality of electromagnetism!

OK, so vacuum induces “backreaction” like impedance. But this is valid against charged particles. Mass is not equivalent to charge. So, it’s not “that” thing in a general case.

3-dimensional spectral analysis does exist, it’s also “heavy matrix calculus”. But I’m not much interested in spectral analysis as you suggested, I’m interested in 3-dimensional convolution, it is in this domain particles get their fields.

Cauchy distribution, when “converted” into 3-dimensional finitely-integrable field gives 1/(1+x^4) tails, very short tails. However, when you integrate the field over infinite area, it has 1/(1+x^2) dependence. So, when “working” with the field it’s usually as simple as calculating field’s intensity at a point and taking a square root of it, no imaginary parts exist. Then the differential of 1/(1+x^2) has 1/(1+x^3) dependence, reminiscent of magnetic field, and this differential has 2 polarities.

Fine structure constant may be a simple thing. It corresponds to the difference in “sigmas” between the Mass field and Charge field, if they are both expressed via Cauchy distribution. This difference in “sigmas” leads to separation of force scales.

For heavy matrix calculus you’d use graphics accelerator cards; they all now have 24-bit mantissa floating point, which should be enough to calculate the tails. IA64 is a lost cause… it turned out to be a test bed for Intel.

On the cosmic rays note… Has anybody really measured cosmic ray velocity in some laboratory arrangement? (e.g. detecting a cosmic particle at 50m altitude and then detecting it at 1m altitude and measuring time)

I understand that if you see some decay when a cosmic particle hits a target, you can do some extrapolations based on the current theories. But what if speed of light is not constant for many cosmic particles, which are actually pretty light, but carry a lot of energy? Was this ruled out definitely?

The differential of 1/(1+x^2) actually falls off closer to -2/(1+x^3). Which suggested to me (remembering Einstein’s 2GM) that the Mass/Gravity field for a *photon* is like a magnetic field to a charged particle.

Then the differentiated Charge (Electric) field is equivalent to Magnetic field.

To be exact: -2*x/(1+x^2)^2. Of course, full exact expressions include the “pi”s and “sigma”s.
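For reference, the exact derivative of 1/(1+x^2) is -2x/(1+x^2)^2, which does fall off like -2/x^3 at large x; a quick numerical check by central differences:

```python
import math

def f(x):
    return 1.0 / (1.0 + x * x)

def df_exact(x):
    # d/dx [1/(1+x^2)] = -2x/(1+x^2)^2, falling off like -2/x^3
    return -2.0 * x / (1.0 + x * x) ** 2

x, h = 1.7, 1e-6  # arbitrary test point, small step
df_numeric = (f(x + h) - f(x - h)) / (2.0 * h)

print(df_numeric, df_exact(x))  # agree to better than 1e-8
```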

orwin, what’s your take on water flush rotation being different in the Northern and Southern hemispheres?

Alternating current creates an electric field differential, which is a magnetic field.

An electron can integrate the electric field differential, be it atom-based (magnetic domains) or vacuum-based (AC), but repels from a constant electric field.

This also means that a moving electron has a small self dipole moment, because the movement of an electron in vacuum creates a small electric differential along its path over the electron’s whole charge field.

In a closed energy-conserving electrodynamic system it’s not even necessary to use photons to describe electromagnetism. Photons are merely the system’s energy losses.

* * *

Using a similar logic, the gravity field can be handled.

Optical axiomatization was attempted by one Richard Robb way back:

https://archive.org/details/theoryoftimespac00robbrich

and two other works there by the same author. But his optics never caught up with wave mechanics. With Einstein’s non-linear GR, the problem turned out to involve Gödel undecidability:

http://www.renyi.hu/conferences/nemeti70/Lefever.pdf

If you allow feedback in a flexing surface, you get freedom: that’s just how bacteria do it.

One can still envisage a thermodynamic overview, and I always read Einstein that way, but that would require second-order arithmetic, Peano through Dedekind, which brings you round to deformation algebras.

Now if the fine structure relates charge and mass, it also links impedance to mass: atomic matter is charged matter. I’m not talking about static external charge here, but the internal resistance to the deformation due to uptake of relativistic mass, which brings one close to the Lamb shift problem.

In accordance with Lefever, I believe both SR and GR accomplished their “higher” goals to generate a spectrum of related ideas. However, as Lefever’s work suggests, in the long run SR and GR should be changed to something much more “decidable”, or there will be no further progress, that’s obvious now.

What should a new theory predict? Time dilation of atomic clocks that depends on velocity and gravity field strength; photon red- and blueshift, including electron orbital energy line shift, due to velocity and gravity field strength. That would be enough, because “length contraction” (or a change of the speed of light) has not been sufficiently proven yet. The equivalence of velocity and gravity field strength is proven, so kinetic energy is equivalent to a gravity field (E=mc^2 also insists on this).

The overall electron orbital shift due to an increase of the gravity field is quite easy to understand and define: the gravity field’s intensity adjusts the “fine structure constant” for a given atom. When an atom has little kinetic energy, the “fine structure constant” is a known value; otherwise, as the gravity field increases, this value changes, probably decreases. It’s easy to derive shifts from this proposition without resorting to SR or GR. The math is available in abundance.

This orbital shift can indeed raise the “internal resistance” of an atom to the raising of its local gravity field/mass. So, the change of the “fine structure constant” can probably include a “saturation curve”.

The “fine structure constant” may actually increase with increasing velocity/gravity field strength, if it is considered as a ratio between the electron’s electrostatic repulsion and gravitational attraction.

Fine structure variation: yes, that’s where Bernd Binder has been working: on Berry phase, where a fractal distribution of alpha appears – arguably the root fractal in existence. Here’s Berry phase in relation to Weyl action:

arXiv:1212.6786v1 [quant-ph] 30 Dec 2012

I must admit I’m a bit of a GR sceptic. If one follows your Cauchy distribution, then Levi-Civita’s symplectic fix for GR2 is cut out, and one must look to GR3 – Einstein-Cartan theory, where the ‘freeze’ problem appears again. To me that’s real: where a field acts we can place a relativistic image in the boundary conditions, but any causal relation must depend on the real state of the intervening medium.

The rest is wave mechanics, but that now overlaps with GR3 through solitons. The propagation of light has been described (by Bohr, I think) as a perturbation of the medium, which then gives a wave-packet with a group velocity. And solitons do propagate faster than the real speed of light, up to the theoretical maximum, which seems inviolable. The folks in astrophysics are touchy about that limit, so I don't believe there are any known exceptions: just some issues with cosmic neutrinos.

With GR3, which is still very little discussed, you start to see that torsion offers a different kind of reference frame, as with the different twist of vortices in each hemisphere. Gödel showed that the whole relativistic universe may be rotating, so now people look to black hole horizons for a sense of the limit. And there things are still being discovered, as with that frequency dependence I flagged. I think, Aleksey, that you've outrun the GR2 range, with a better sense of the probabilities involved. That's impressive.

Thanks, however, I do not think I'm anywhere near offering a "knowledge" system which can be used in practice. Also, the math currently used in physics, in almost all areas, is very complex. By choosing a particular mathematical domain, you end up being understandable only to a narrow expert audience, which is not acceptable.

The most generally problematic part, I think, is going from quantum or relativistic to classical. I think most complexities arise from the requirement to end up in classical terms and scales, which the Copenhagen interpretation requires.

Digital signal processing offers a very interesting, and not particularly complex, approach to the problem of rescaling, called resampling (I'm currently involved in a project: https://code.google.com/p/r8brain-free-src/ )

Most importantly, the fields, given that they follow the principle of superposition, can be resampled independently from the "particle cores" (I've called them that to distinguish them from plain "particles", which combine various intrinsic fields).

So, in order to change scale, the "particle cores" space should be resampled first. Every resampling operation requires spatial low-pass filtering. If before resampling the particle cores were unit points spread over space, after resampling these particle cores form a "particle field" that estimates particle density. There can be various types of particle cores and particle fields. Unit points correspond to the centers of the Electric and Mass fields. But there can also be "spin cores" that lead to "spin fields", which may be zero for a plasma, but non-zero for a magnetic material.
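A minimal numerical sketch of that step, with purely hypothetical numbers: unit point cores are low-pass filtered into a smooth "particle field" whose integral still counts the particles:

```python
import numpy as np

# five unit "particle cores" as point spikes on a 1-D grid
n = 100
cores = np.zeros(n)
cores[[10, 12, 50, 51, 52]] = 1.0

# spatial low-pass filter: a simple normalized 9-sample box kernel
kernel = np.ones(9) / 9.0
field = np.convolve(cores, kernel, mode="same")  # the "particle field"

# smoothing preserves the total "particle count" (density estimate)
print(field.sum())  # ≈ 5.0
```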

Then the Electric and Mass fields these particles possess are also resampled, independently. If a Cauchy distribution is used, the result of resampling is also a Cauchy distribution, but with the "sigma" parameter changed.
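That stability property of the Cauchy family can be checked numerically: convolving (spatially low-pass filtering) a Cauchy profile with a Cauchy kernel yields another Cauchy whose scale parameters simply add. A sketch with arbitrary scales 1.0 and 2.0:

```python
import numpy as np

def cauchy_pdf(x, gamma):
    # Cauchy (Lorentzian) density with scale ("sigma") parameter gamma
    return gamma / (np.pi * (x**2 + gamma**2))

# odd-length symmetric grid, so "same"-mode convolution stays centred
x = np.linspace(-100.0, 100.0, 8001)
dx = x[1] - x[0]

profile = cauchy_pdf(x, 1.0)  # original field profile
kernel = cauchy_pdf(x, 2.0)   # Cauchy-shaped low-pass kernel

# spatial low-pass filtering = convolution
resampled = np.convolve(profile, kernel, mode="same") * dx

# stability of the Cauchy family: the scales add, 1.0 + 2.0 = 3.0
expected = cauchy_pdf(x, 3.0)
print(np.max(np.abs(resampled - expected)))  # small discretization error
```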

After resampling it is possible to *convolve* the “particle field” with the resampled Electric and Mass fields.

By taking the differential of the Electric field, it's possible to derive the Magnetic field.

After the convolution is performed, it is possible to see the distribution of e.g. the Mass, Charge, Magnetic and Spin fields, and make predictions about the evolution of the system on a larger scale.

Of course, all these fields can be calculated in complex-valued 3-dimensional space, and the Magnetic field can be taken to be in quadrature with the Electric field; for the Cauchy distribution that's the same thing as the differential (if I remember correctly).

Aleksey, orwin and Jonathan,

Why don't you open a discussion on LinkedIn? There you can start dedicated discussions in a suitable group. Here your chat is getting off-topic.

Last comment… my intuition tells me that photons may not fundamentally carry any frequency information. They are just ultra-light particles, with a mass in the region of 10^-54 kg, as the latest estimates from the solar wind suggest.

These particles can also be represented as “particle cores”.

When the particle cores are spatially low-pass filtered, they combine into electromagnetic waves.

The mechanism of frequency formation by these particles is probabilistic. For example, take an electron that emits an E=hv photon. The electron starts to "dump" its mass by emitting photon particles in succession at random intervals, with this random interval function modified by the photon's required frequency. So, halving the average random interval doubles the photon-particle "fire rate", and so doubles the energy. Such a picture of photons has a physical meaning.

Ultimately, E=hv represents an array of photons which, when spatially averaged, brings the electromagnetic wave into existence.
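The rate-doubling argument above is easy to simulate. This is a toy model only: I assume, hypothetically, that the random emission intervals are exponentially distributed (a Poisson-style process), which is not something the comment establishes:

```python
import random

def emissions_in_window(mean_interval, window, rng):
    # count emissions inside a fixed "possibility window", with random
    # (exponentially distributed) intervals between successive emissions
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(1.0 / mean_interval)  # expovariate takes a rate
        if t > window:
            return n
        n += 1

rng = random.Random(1)
trials, window = 5000, 1.0
slow = sum(emissions_in_window(0.010, window, rng) for _ in range(trials)) / trials
fast = sum(emissions_in_window(0.005, window, rng) for _ in range(trials)) / trials
print(fast / slow)  # ≈ 2: halving the mean interval doubles the average count
```

Doubling the average count inside the fixed window doubles the energy carried, which is the E=hv scaling the comment describes.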

Then the gravity field can alternatively be formulated as photon emission at random intervals with a constant expectation, so such an electromagnetic field has zero frequency. If each photon is entangled with its source, then it's possible to immediately subtract the energy from the source when the photon is absorbed, some time later, at a distance.

This brings the unification of gravity and electromagnetism.

(but that means that every particle in the Universe is indexed; everything is counted, like in a relational database)

And, also important: when formulated classically via E=hv, the mass of a photon increases with its frequency, since the energy of the photon wave equates to the number of elementary photons. Call them Photinos. 🙂

Photons are energy quanta. They have no mass; otherwise they could not propagate at light speed. On detection, the whole energy of the photon is absorbed by the detector. The energy of the photon is encoded in its frequency.

Sorry, explosion of ideas… A constant emission of photinos actually describes gravity quite well. It's a low-energy emission, and its intensity obeys the inverse-square law. Being a constant emission, it permeates all matter without frequency-dependent absorption. It also describes atomic decay quite well, and permits decay of any mass-bearing particle. This poses the question of whether stable elements radiate mass at all. So, for example, the proton may have its intrinsic mass, but since it does not decay, it may not have any measurable gravity field: no radiation equates to no gravity field.

Oh yes, the detector absorbs the "whole" photon. But even if you quantize this photon further, as I suggest, it will still be absorbed fully, because the photinos that constitute the photon travel in the same direction and the detector can't bypass this stream. Integrally, this stream is the photon.

In my opinion, an ultra-small mass can travel at the speed of light and beyond. The existence and behavior of ultra-small-mass physics has not been analyzed by the "big" physics celebrities, hence no consensus is available. It's a new area of research: analysis of the solar wind.

Think about how a photon is emitted by an atom.

Is it emitted by the atom, or is it emitted by the electron that switched its oscillation mode?

Is it emitted in one stroke, or does it take a number of wave cycles?

In the latter case, what is the duration of the emission?

Is it a separate wave, or is it a modulation of another wave?

Similar questions can be asked about absorption. How is it arranged that the atom absorbs the complete photon?

I do not have many problems with these questions…

It's common knowledge that photons are emitted mainly by a "probabilistic" electron in an atomic orbit. An electron in any orbit may emit a photon; which one does is governed by probability.

An electron in an atom is always in "oscillation mode".

Per my suggestion, the photon is not emitted "in one stroke"; it is emitted via a stream of "Photinos", a further quantization of the photon. Emission start and end times are governed by a kind of "possibility window", which I think is fixed for a photon of any wavelength. Hence, the more photinos are emitted during this time window, the higher the photon energy. This "possibility window" is like the free-fall segment of a ballistic trajectory: the duration of the segment is fixed, but the peak height can differ.

A detector is never a single atom; it's usually millions of atoms. Hence a directed photon (or a stream of photinos) has a nearly 100% chance of being absorbed by some electron at some orbital (all probabilistic). If it's the "wrong" amount of energy, the electron will return to the ground level and re-emit the photon.

Electron absorption of the photon (photinos) is the rising segment of that ballistic trajectory. The duration of this segment is also generally fixed.

All elementary particles have a stochastic nature. That does not explain how photons are emitted.

Photons are waves. If photons are emitted in a finite period of time, then a finite number of wave fronts are emitted. The recurrence frequency of this emission equals the frequency of the photon.

The amplitude of the photon is never significant. Only its frequency counts.

Only a single atom detects the photon. The surplus energy is used to emit lower-frequency photons that can then be absorbed by another atom. This happens, for example, with gamma quanta in scintillation crystals.

Photons are not waves, because there is no medium to vibrate. Aether theory was discarded a long time ago. More importantly, nobody knows how to measure the wave characteristics of an aether.

As I suggested, photons may actually be stochastically emitted Photinos, with the spatial distribution of the photinos following a wave-like formation.

Of course, when a photon is emitted, a limited number of wave-fronts is emitted; this corresponds to the wavefunction-squared probability density. It is never infinite; it actually damps quite quickly, similar to a Gaussian bell curve, in most cases.

The amplitude of a photon is always 1, because Photinos, as particles, have constant characteristics. The phase of a photon may vary, depending on the moment when the Electron began emission.

I agree that only a single atom detects the photon, but this single atom is among millions. Electrons "revolve" in atomic orbitals slower than the speed of light, and the spatial size of a Photino is most probably smaller than that of an electron. The Electron absorbs each Photino in a quantum manner: the Photino simply dissolves in the Electron's field after reaching a certain radius within the Electron, adding energy to it.

Thus a single photon can be represented as a string of unidirectional Photinos, reminiscent of string theory.

"Photon as a string of Photinos" actually explains why no anti-photons exist: photons are virtual integral particles; they do not have any physical counterpart.

The space we live in is curved. That curvature is not static. Thus, this space changes. It has all the aspects of a field. Why can't it vibrate? In fact, it vibrates with a super-high frequency that is so high that you cannot observe the waves. For photons these waves act as carrier waves. Photons modulate these carrier waves.

The super-high frequency waves are constituted from wave-fronts that are emitted by massive elementary particles. In averaged form these waves form the potentials of the particle. You might call these wave fronts photinos. Otherwise I would not have any explanation for your photinos.

See: http://www.e-physics.eu/TheStochasticNatureOfQuantumPhysics.pdf

Working on digital signal processing (DSP) projects, I've come to the conclusion that you never know everything about DSP; you build understanding, it works for a moment, but fails in certain situations. Ten years of experience obviously was not enough.

The same applies to understanding physics. It takes a lot of trial and error.

I do not pretend the Photinos I've described here are real, but for me they contain the "ultimate" answer at this moment: they allow one to linearize the model of both space and time and to define a universal carrier of mass, electromagnetic and kinetic energy. Kinetic-energy exchanges (phonons) can also be represented as local non-oscillating streams of Photinos, and such non-oscillation is also true for the gravity interaction.

Photinos on average are emitted at the speed of light, but they have an ultra-small mass and their velocity can change. Surely, the velocity of the reference frame is added to the emitted photino's velocity, thus not requiring length-contraction corrections.

Photinos should interact with each other; this way they gain additional momentum when directed toward a gravity field's center.

My intuition tells me that Photinos should have an ultra-small *negative* mass, so when they are absorbed or scattered, their momentum is subtracted from the target, not added. Maybe hypothetical negative masses do not have a speed-of-light limitation?

Yes, we all have our models and theories. We believe in them and we defend them against better knowledge. That is why we find it difficult to listen to other people's theories and why we mistrust other models.

It's also interesting to note that, mathematically, changing the momentum of the group of Photinos that represents a photon is equivalent to shifting the frequency of the photon. The basic stochastic wave-like formation remains unchanged (neither spatially stretched nor contracted), but the group momentum changes. For the Electron, the change of Photino momentum is equivalent to a change of the photon's frequency (which, in other words, corresponds to the stochastic number of Photinos).

*complex 3-space: this paper shows how useful quaternions can be there: that's where Hans is coming from. But it's Weyl-like Cartan geometry, which is strictly post-Einstein! arXiv:1101.3606v1 [gr-qc] (2011)

*that 'window': here it's defined as T/m (temperature/mass), which gives the passage from metrics and energy to heat capacity/thermodynamics, where we register events in instruments. arXiv:hep-ph/0306211v1

*photinos: here's a wave-packet view from the Hamilton-Jacobi (HJ) equation, which introduces cocycles, where the Higgs parameter space now seems warped: arXiv:1211.0798v1 [hep-th]. They assume a potential function as well as energy, and isn't that the missing factor in the propagation of wave-packets: the extended field, which is always there from the background.

*Hans: those extra particles that show up in your SM derivation: aren’t they knot-invariants related to quantum tunnelling? And so the thermal factor in resistance?

Orwin

The "extra" particles are not extra. They are types that cannot be discerned by measurements. In fact this is also the case for quarks that differ only in color charge, because color charge cannot be measured. We know that color charge exists, because the Pauli principle would otherwise forbid the existence of some hadrons.

Quaternions can be avoided by using Clifford algebras or by using spinors and matrices. In my opinion the usage of quaternions makes theories simpler, but then the different discrete symmetry versions in which quaternions and continuous quaternionic functions exist must be indicated with special indices; otherwise confusion will enter equations that treat a mixture of versions.

Connections between gauge theory and measure theory? I cut my calculus teeth on "Measure Theory and Integration", 2nd edition, by M.M. Rao, and… then I noticed gauge theory is a core component of modern-day (theoretical) physics. So I did a search and found "Generalized Measures in Gauge Theory" (http://arxiv.org/abs/hep-th/9310201) by John Baez, a fairly renowned mathematician and all-around clever fellow from what I gather. Interesting; I wish I had thought about this earlier, it would have saved me some frustration 🙂

The trouble is that Baez is a fine mathematician but does not have strong physical intuition, so he treats Wilson renormalization-group theory as a cook-book procedure. In the heat of the quantum controversy, it was Heitler and London who noticed the group velocity, which can run way over the speed of light; that was actually confirmed at CERN in the neutrino-speed debate. Follow the London limit and you get to Ginzburg-Landau theory, where one scents the Fundamental Physics Prize…

I'm seriously interested in differentials of the form d/d(1/g), as in group velocity, with applications through to growth dynamics and economics. You can think of them as differentials of scaling.

Fine points Orwin. My immediate concern is how to tune out these gawdamn gas-powered weedeaters these hourly people are blasting outside my window. Corporate dullards, leave the damn shrubs how they are… let nature be free!!!

Problem (at least conditionally/temporarily) solved. Walking outside, waving my hands about my head and motioning towards my ears seems to have communicated something that led to them taking the infernal devices away, such that the effective decibel level is now back to tolerable. I know you are right about that, Orwin… my impression was "it ain't that simple dude, you glossed over a lot of stuff".

This seems interesting, “String Theory From Gauge Interactions”. retroshare://file?name=String%20Theory_%20From%20Gauge%20Interactions%20t%20-%20Laurent%20Baulieu.pdf&size=4392243&hash=2a386c6063a189710df1b79022d879aeb518a94e

Also, this seems to make sense: http://arxiv.org/abs/gr-qc/9606089 (referenced from the Baez paper a few posts ago). I paste from the abstract: "An anomaly-free operator corresponding to the Wheeler-DeWitt constraint of Lorentzian, four-dimensional, canonical, non-perturbative vacuum gravity is constructed in the continuum. This operator is entirely free of factor ordering singularities and can be defined in symmetric and non-symmetric form. We work in the real connection representation and obtain a well-defined quantum theory. We compute the complete solution to the Quantum Einstein Equations for the non-symmetric version of the operator and a physical inner product thereon. The action of the Wheeler-DeWitt constraint on spin-network states is by annihilating, creating and rerouting the quanta of angular momentum associated with the edges of the underlying graph while the ADM-energy is essentially diagonalized by the spin-network states. We argue that the spin-network representation is the "non-linear Fock representation" of quantum gravity, thus justifying the term "Quantum Spin Dynamics (QSD)"."

That "physical inner product" sounds right. But working through the implications he seems to get a purely Abelian model, which can't be right:

http://arxiv.org/abs/1101.1690

Here's why: having got to the Euclidean local phenomenology, all the action is in the matrix diagonal, which then covers just Erik Verlinde's equilibrium states (the Gaussian normal range, 68%, no dark energy or matter).

http://arxiv.org/abs/1109.1290

Einstein managed to appear deeply convinced that God has it all under control, but that surely underestimates the moral challenge of existence at the local level/dark side of reality.

Behind the scenes, of course, he staggered on through endless difficulties, with Élie Cartan, to find some soliton-like solutions for torsion which are now widely ignored. Shades of the Goldstone boson? Does reality eat itself? Will the Higgs-like phenomenon ever own up? What is the sound of one isospin clapping? Was Cartan a Bourbaki? There the standard answer is now yes, but since André Weil's conjectural proof strategy for the Riemann conjecture actually fails in symplectic manifolds at infinity, apparently their cohomology evacuates itself and you are back with Bianchi's "frozen" cosmos, a 2+1 reality, all surfaces and no substance.

Here you do need more than talented mathematicians, but too many are frozen in turn in awe of the Weil cult, which now keeps critical details off the wikis. Excuse my rambling, but it was Peter Woit who said that right now nobody seems to know where to go from here.

Orwin, here is Rao's definition of gauge… does it have any relation to the "physics definition"? http://173.172.177.172:42000/measure-theory-gauge.jpg

On a fancy jpg? Is that necessary? On a real slow connection? Jpgs are not secure, remember. You must have heard the mantra "Abelian gauge symmetry": it runs round and round like a dizzy tune.

There's lots to chew on here. Baez starts with Legendre, but says "it's not well-defined": no, like the affine manifold, it's more general than physics, in the sense of physical equilibrium. But the original connection with reality is like that, right?

And I do want to say that André Weil was in a rush and well aware of his limitations, and warned that he didn't have the historical background. I can trace the problem all the way back to Apollonius in antiquity, through involutes (self-regulation) and evolutes (expression). That's like "speakable and unspeakable in quantum theory": the Cambridge line on Aharonov-Bohm. But we speak in the flow, so that brings in the Stokes assumption!

Weyl discovered gauge as a phase, and I don’t see it being tamed short of something like Kaluza-Klein. How that goes with flow takes you all the way to renormalization group flow, which brings in p-adics.

So if Rao doesn't have a p-adic definition, it's not so interesting.

orwin, what do you mean by secure? I fail to see how I could be harmed by that jpeg being public, other than some script kiddie knowing what firmware I have on my phone by inspecting the exif headers. Slowness, on a connection I control rather than sitting on some evil 3rd party's machines (Google, Microsoft and the like). I guess I could have converted the jpeg to text with some sort of OCR scanning tool first and then pasted the text, but that… IP addresses do not identify people. At best someone could figure out I'm using Time Warner Cable and am in the US. Anyway, imho it's not secure to post anything on a public forum, which is why I have been promoting retroshare, but everyone seems to think security is a lost cause (hint: it's not). If you generate a private RSA key at 2048 bits and are careful about where you store the key, then it IS secure, in the sense that it will at the very least cause those turd burglars at the NSA to store the data off somewhere and start cracking… well, joke's on them; if they want to crack everything then that's ridiculous and almost as bad as the SETI project uselessly processing static for ages…

Clifford algebras and the Duflo isomorphism, http://arxiv.org/abs/math/0304328

I'll take a look and see if Dr. Rao has written anything about p-adics.

orwin, p-adics are not covered in "Measure Theory and Integration", but the Gram-Schmidt process (orthonormalization) is, so that is kinda like renormalization… or the same thing? But the Gram-Schmidt process tells you how to write a bunch of integrals and whatnot, or to make a computer program to calculate these, which I have no idea how it relates to things that are actually useful.
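Since the Gram-Schmidt process came up: a minimal sketch of the orthonormalization algorithm itself, in generic numerical form (nothing here is specific to Rao's book):

```python
import numpy as np

def gram_schmidt(vectors):
    # Orthonormalize linearly independent vectors: subtract from each
    # vector its projections onto the basis built so far, then normalize.
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, b) * b for b in basis)
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

Q = gram_schmidt([np.array([3.0, 1.0]), np.array([2.0, 2.0])])
print(np.round(Q @ Q.T, 10))  # identity matrix: the rows are orthonormal
```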

Hold on: down in the flow of things, bringing Stokes' old flow theorem to Chern-Simons topology, where it gets knotted, Thiemann and company now say you face ordering choices to get solutions, which is just the kind of factor introduced on viXra by Leo Vuyk, which took him off to FQXi.

http://arxiv.org/abs/1101.1690

I would appeal to an unphysical alternative inner product posing the choices, for a pluralist and transcendental ontology: like you can back off into memory and construct your own continuity of things, for which your only guarantee is internal to your constitution as a conscious being…

Orwin, mind = blown. I have to take some time to let this sink in. I was kinda thinking something like that. Sometimes memories and real-world observations and internet stuff appear to intersect in quite uncanny ways, and I'm fully aware that my apparent internal linkage is not always a figment… ever wonder why they claim some people "break with reality"? Well, who exactly is the author of reality, I ask with a mischievous grin 🙂

Also found this gem on the Wikipedia page about Cherenkov radiation (I just love setting off these big-brother keyword triggers):

"Cherenkov radiation can be generated in the eye by charged particles hitting the vitreous humour, giving the impression of flashes,[5] as in cosmic ray visual phenomena."

That's where it all started, after Huygens, with eigenvalues in spectral analysis, and a whole bunch of questions now boil down to base and change of base. But what happens under deformation? That's why cohomology seems so critical, and that's where all the ghosts of measure theory sidle back into the equation.

Makes me think the trigonometric-type methods are ultimately more powerful, which is now setting people after log-algebraic stuff.

But Wilson renormalization involves both dimension and organization, dimensions of organization, and change in the same. There's a kind of switch logic there, which Stu Kauffman tried to binarize with Boolean networks, and 't Hooft bought in recently. That's the cracked/publicist face of von Neumann; I find his algebra types much more interesting, where temperature shows up, so one can think of transcendental thermodynamics.

This is now Koopman-von Neumann theory: see KVN theory on the arXiv, e.g. quant-ph/0301172v1 (2003), a PhD thesis.

This article features Thiemann in the debate on life after Loop Quantum Gravity, which Phil revisits here. They did the 3D analysis and found tight uniqueness theorems, too tight for comfort, at least if you have the belly for beyond-SM physics.

arxiv:gr-qc/0507078v2 2005

crow, it seems to me that your gauge-measure question is now very apt and takes us back to that unruly Legendrian. Legendre transforms are the mainstay of thermodynamics, with its intensive and extensive variables: all the baggage of states of matter/renormalization/unruly infinities, etc. But GR is quietly like that, what with local and global gauge symmetries!!!

In my opinion gauge theories are restricted to one-dimensional abstractions.

Hans, your high-frequency carrier tone formally sets a cut-off and thus defines a renormalization. I don’t suppose you’ve looked for the infra-red fixed point.

Orwin

A model needs a few constants. Among them are a fixed minimal progression step, a fixed speed of information transfer, and a fixed minimal spatial step. You cannot choose all three of them, because they are related. Most models accept space expansion and a constant speed of information transfer. This means that in those models the progression step must expand as well, and correspondingly the highest frequency in the model decreases with increasing progression.

If you take gyroscopic stability as your reference frame, you find it's not perfect, and it may seem, as Newton said, that God must intervene at each moment to preserve us. That was his contribution to the natural theology of Barrow. But he worked in optics and alchemy and there anticipated Einstein: "Query: Does not gravity bend the path of light?"

The axial-radial perspective has a one-dimensional gauge foundation. And it's interesting to find that, as John has been suggesting, time as distinct from events must be "metered", in the strict sense of being a function of measure:

http://arxiv.org/abs/1105.5959

http://arxiv.org/abs/1105.3755

This starts to answer crow’s interesting questions.

Just to avoid misunderstanding: I haven't argued for time as necessarily distinct from events. A good way of putting the central point is in terms of emergence. If the universe is seen as a series of layers, each of which emerges from the more fundamental one underneath it, the need is to identify the layers and arrange them in the right order.

Minkowski spacetime is telling us that two of the layers in that sequence come in a particular order, and I’ve tried to show that they must come in the reverse order. One of these layers is the laws of physics, such as laws of motion. The other has some sort of flow of time, or motion through time (real or unreal).

Spacetime leads directly, via an exact proof, to the time layer being more superficial, and the laws of physics running deeper, and so 'preceding' the other layer. This proof depends on the relativity of simultaneity across a distance, and the idea that different observers see the same event as past or future, depending on how they're moving. So it seems to show time to be a perception-based illusion. But it depends on untested assumptions coming out of Minkowski spacetime.

That proof (the Rietdijk-Putnam argument) leads to block time, but I’ve argued that if block time is right, then the laws of physics were pre-implied in the slices in the block, and then something more superficial like an illusion, or emergence at a more superficial level, happens to come along and make these laws work by running the slices in a sequence? That looks contrived.

No-one has argued the other way on that point. To me, Minkowski spacetime is like a whodunnit detective novel, where it always turns out at the end to be the one person who is utterly above suspicion. But in the case of spacetime, if you look at what has actually been tested, it's the one main part of our picture that hasn't been. And with the problems with time being what they are, we should be questioning any untested assumptions anyway.

No model that we know of is correct in all its aspects.

Models that differ considerably can still give acceptable views of physical reality. These different views offer different insights. Together, these models offer a more complete view of physical reality.

I see a place for supersymmetry in the 1-D gauge of the spinor axis: when the impact of a collision is taken in the axis itself. One could think of the axis as gyroscopic stability having a stringy tension, and the 1-D gauge then prevents runaway high-energy speculations or undecidable feedback. You can call it minimalist supersymmetry: anything more is pretty much ruled out by now.

I am ready to comment, but later. All the questions raised require a thorough approach. I will comment, or rather set out my point of view, without fail!

With sincere respect to you,

V. Smolensky

Kimmo:

“You said that BH gains only additional mass of 511keV/c² when photon arrives. Where’s the excess energy due to blue shifting?”

There isn’t any. A descending photon doesn’t gain any energy. Conservation of energy applies. Think about the principle of equivalence and imagine a photon coming towards you. When you accelerate towards it, that photon doesn’t change one jot. Yes you measure its frequency to be higher, but it didn’t change, you did.

“I mean that if you annihilate electrons here on Earth and in intergalactic space and measure those photons energy on Earth they have different energies. Or do you disagree?”

I agree. Start with two electrons, A and B, in free space. Let electron A fall to Earth, then dissipate its kinetic energy. This kinetic energy came from the electron, nowhere else. So electron A now comprises less energy than electron B. When you annihilate both electrons, photon A appears to be a 511 keV photon and photon B appears to be a 511.000001 keV photon. Photon B appears to have gained energy, but it hasn't. You and your clocks are going slower on Earth than they were in free space, so you measure its frequency to be higher, that's all. When you descended to Earth, you lost energy in line with electron A and photon A. So photon B appears to have gained energy; you measure its frequency to be higher, but it didn't change, you did.

Seems to me that you changed the goalposts here. Originally you were talking about photons sent to a BH. In that case, gravitational blueshifting applies.

Same applies to my example.

One other thing… how does the speed of light affect (optical) atomic clocks? Basically, (optical) atomic clocks measure frequencies.

I didn’t change the goalposts Kimmo. When you drop a 511keV photon into a black hole the black hole mass increases by 511keV. That’s it. There is no magical way by which the black hole mass increases by any more than 511keV. Conservation of energy applies. The descending photon does not gain any energy. Its frequency does not change. It is not blue shifted. You and your clocks go slower when you’re lower, that’s all.

You hear plenty about frequency when you read about atomic clocks. But read up on the NIST caesium fountain clock. We use it to define the second as "the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom". You'll hear about a "peak frequency", but note that frequency is measured in Hertz, which is defined as cycles per second, and we are defining the second. So what we're really doing here is counting incoming microwave peaks. When we get to 9,192,631,770, we say "that's a second". After that, the frequency is 9,192,631,770 Hertz by definition.
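The counting logic described above is simple enough to write down. The constant is the SI definition of the second; the function is just an illustration of "counting peaks":

```python
# SI definition: 9,192,631,770 cycles of the caesium-133 hyperfine
# transition radiation make one second, so the frequency of that
# radiation is 9,192,631,770 Hz *by definition*.
CS_CYCLES_PER_SECOND = 9_192_631_770

def elapsed_seconds(cycles_counted):
    # a caesium clock effectively just counts incoming microwave peaks
    return cycles_counted / CS_CYCLES_PER_SECOND

print(elapsed_seconds(9_192_631_770))   # 1.0
print(elapsed_seconds(27_577_895_310))  # 3.0
```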

Please consider this experiment: http://en.wikipedia.org/wiki/Pound%E2%80%93Rebka_experiment
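
For scale, the fractional blueshift Pound and Rebka looked for over their 22.5 m Harvard tower is just gh/c². A back-of-envelope sketch assuming standard surface gravity:

```python
# Pound-Rebka: predicted fractional frequency shift for a photon falling
# a height h in Earth's field, weak-field approximation g*h/c^2.
g = 9.81           # m/s^2, surface gravity
h = 22.5           # m, height of the tower used in the 1959 experiment
c = 299_792_458.0  # m/s

shift = g * h / c**2
print(f"fractional shift ~ {shift:.2e}")   # about 2.5e-15
```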

Red- and blueshifts DO happen. However, in the case of a black hole, the energy gained above 511 keV by the time the photon hits the ground is returned to the black hole.

Conservation of energy certainly applies. If that photon is ejected back (somehow) from the BH, it experiences redshifting.

Atomic clocks just lock onto the peak frequency (via a feedback loop); neither time nor the speed of light is involved at that stage. Once the peak frequency is acquired, the clock starts counting.

Kimmo: the energy of a photon that is (somehow) emitted from a black hole doesn’t change at all. Let’s say it’s emitted vertically. Its path doesn’t curve. It doesn’t fall back into the black hole. It doesn’t slow down. And there is no magical mysterious mechanism which reduces the energy of that ascending photon. Just as there is no magical mysterious mechanism that increases the energy of the descending photon. Conservation of energy applies. The energy doesn’t change, and nor does the frequency.

So don’t be distracted by frequency when it comes to atomic clocks. Also note this: you cannot assert that the second is different *because* the frequency is different, because frequency is defined *from* the second.

What can I say? 🙂 I got bored already, so somebody else should carry on this debate.

Oliver: I watched your videos and sent you an email. My comments were bulky and arguably off-topic.

I thought about your email … for a while… some pretty things there, not correct but nice anyway. do you prefer private, or public response (in my opinion better privately)

Sorry Oliver I’ve only just seen this.

I don’t prefer private, I have nothing to hide, but it’s all very bulky and there’s so many comments here, best send me an email.

you no worry 🙂 I’m lost here too. I’ll send you email, as soon as I find some time.

Einstein’s theory cannot be applied directly to the exotic phase changes observed

Einstein’s formula m = m₀/√(1 − v²/c²) cannot be expressed or evaluated directly, says Sankara Velayudhan Nandakumar: exotic phase transitions require his theory of gravity to be re-evaluated, surprisingly supporting the American new-science requirement. For instance, at low temperatures liquid helium’s properties change dramatically, becoming a “superfluid” that can overcome friction. In fact, we have co-opted the mathematics of exotic phase transitions to build our theory of gravity along the 12 magneto-optic quantum sectors of space, breaking the symmetry always along the plane of the hologram. Mass changes to invisible disappearance at √2 c velocity (1.414c), instead of a further increase in mass to infinity at velocity c. Also typical is the ESP power of the brain to have future-prediction capability, as the destiny of space is already decided. Mass seems to increase to infinity at v = c, but a violent mass transition is observed directly by mathematical application, as observed by Sir Arthur Eddington, and further observation is essential as the prediction sometimes goes wrong, but mass seems to go to the imaginary side of disappearance at 1.41c, as envisaged by Dr. Chandrasekhar.

“In my entire scientific life, extending over forty-five years, the most shattering experience has been the realisation that [New Zealand mathematician Roy Kerr’s] exact solution of Einstein’s equations of general relativity provides the *absolutely exact representation* of untold numbers of massive black holes that populate the universe. This “shuddering before the beautiful,” this incredible fact that a discovery motivated by a search after the beautiful in mathematics should find its exact replica in Nature, persuades me to say that beauty is that to which the human mind responds at its deepest and most profound.” (Research group working at the Oxford-Cambridge-Anna University astrophysics team, with Sankara Velayudhan Nandakumar, special officer on behalf of the Loyola College nanotechnology dept., Cape Institute of Technology, Nagercoil, formerly research scholar at KNSK Engineering College, Nagercoil, Anna University, with the Hubble space research committee of Hon. Roger Davies, Hon. Collin Webbs FRS of the laser division of Oxford UK, and Hon. Martin Rees, Emeritus Professor of Cosmology, Cambridge, former President of the Royal Society, London.)

Sankara Velayudhan Nandakumar, member PNAS, American, on behalf of Loyola College of Engineering and Technology, member of the American Committee for the Weizmann Institute of Science, Energy Renovation Committee, Cape Institute of Technology, Nagercoil, former guest lecturer, KNSK Engineering College, Anna University, has surprisingly found out a genetic mirror

1) Your call CNSHD766034 regarding Einstein’s theory can not be applied directly on Exotic phase changes observed-reg has been received Outreach@stsci.edu

Physics has won over mathematics: as part of another longstanding scientific dispute, Hawking had emphatically argued, and bet, that the Higgs boson would never be found.[265] The particle, proposed to exist as part of the Higgs field theory by Peter Higgs in 1964, became discoverable with the advent of Fermilab near Chicago and of the Large Electron-Positron Collider and the Large Hadron Collider at CERN.[266] Hawking and Higgs engaged in a heated and public debate over the matter in 2002 and again in 2008, with Higgs criticising Hawking’s work and complaining that Hawking’s “celebrity status gives him instant credibility that others do not have.”[266] The particle was discovered at CERN in July 2012: Hawking quickly conceded that he had lost his bet[267][268] and said that Higgs should win the Nobel Prize for Physics.

It is not too difficult to explain a mechanism that installs space curvature for all massive elementary particles. If such a mechanism exists, what then is the task of the Higgs mechanism? Does the Higgs mechanism explain how space is curved?

OT Phil, when are you going to do a posting on the upgrade of the LHC and how it’s progressing?

Carla, As far as I know it is all on schedule and on target and we just have to wait for 2015 when it gets interesting again

Philip: sorry to be tardy. And apologies, we’re getting rather narrow up above, so I’ll reply here if you don’t mind.

We’ll have to agree to differ on progress in fundamental physics in recent decades. Read Unzicker’s book*. I don’t think it was progress that they “found” the top quark. Its lifetime is so short that it didn’t leave the collision point, and its existence is inferred from a particle that was similarly inferred. As for the Higgs, I’m afraid the Higgs mechanism is responsible for at most 1% of the mass of matter, and moreover it contradicts E=mc².

I don’t take much issue with neutrinos or cosmology, but come now, no SUSY and no WIMPs is not a sign of progress. Particularly since electron models have been studiously ignored. As for string theory, it is not scientific in that there has been NO interplay between theory and experiment for forty years.

When it comes to GR I understand the mathematics well enough. I also understand what Einstein said, and more importantly that the scientific evidence backs him up. With respect, IMHO there are important differences between this and what you think of as mathematical fact. I would urge you to examine these differences carefully. Focus on the scientific evidence, distinguish it from abstraction. Ask yourself “what can I really see?” IMHO that’s how you get to see the big picture. Take it one step at a time. Baby steps. But IMHO one thing leads inexorably to another, and it is very much simpler than you think.

* I blogged about it, see http://bogpaper.com/the-higgs-fake/

Duffield

Do you conclude that contemporary physics is on the wrong track?

If so, what would be the cure?

Hans: not quite. There’s a great deal of physics that’s on the right track, like Materials Science and Condensed Matter Physics and Optics. And whilst cosmology isn’t quite physics and suffers from speculation, I think that’s broadly on the right track too. Ditto for relativity, but see my conversations with Philip. It’s like it’s stuck in a siding because Wheeler broke the engine.

But I would say that particle physics aka HEP is on the wrong track, and as a result fundamental physics is on the wrong track.

IMHO the cure is to persuade particle physicists to look within the Standard Model and appreciate that this is where work is required rather than “beyond the standard model”. If they will not be persuaded, then we must call them out like Unzicker has done. If that still doesn’t persuade them, then in the end the cure is for their funding to be removed because they are impeding scientific progress. After that other physicists and mathematicians can reprise optics, classical electromagnetism, QED, and TQFT to fully develop the nature of the photon, the photon-photon interaction in gamma-gamma pair production, the Dirac-belt electron, the unification of electromagnetism and gravity, and so on. With demonstrable progress that can be explained simply, thereby reawakening public enthusiasm and political support. And taking us places.

We all inherited the spacetime model from Einstein, Lorentz and Minkowski. However, these people are so famous that no one dares to think differently. In relativity a proper time lives at the location of the observed item/event. With these great scientists we selected the time that can be measured at the location of the observer as our common notion of time and we try to deduce proper time from this measured time.

What if we make a different choice and use proper time in order to describe our observations? Next, is it possible to synchronize all proper-time clocks? This would mean that the time at the location of the observer can no longer be selected freely and must be deduced from this universe-wide proper-time clock. In this new space-progression model the universe-wide time cannot be measured, but it can be used as a parameter in the model. This new approach will enable a different view on physical reality. It does not create a different reality. However, in this new model the universe proceeds with universe-wide progression steps from each static status quo to the next one. If this occurs, then it must occur with a super-high frequency that is higher than any other frequency that occurs in the model. This also means that the universe is recreated at this super-high frequency.

This offers a revolutionary new space-progression model. At first sight it looks insane. However, closer investigation shows that this model is compatible with many aspects of the spacetime view of contemporary physics.

So can it be that this new view will offer new insights?

I am convinced that it does!

Hans,

Could the super-high frequency between stepping stones hint at a super-high Higgs field frequency? Perhaps even a base for local variations in local time, light speed (via lensing effects) and the speed of entanglement, such as Nicolas Gisin suggests?

http://www.newton.ac.uk/programmes/MQI/seminars/2013101609001.html

Leo

The only field in the Hilbert Book Model that can act as a kind of Higgs field is the curved field that embeds particles and that we consider as our living space. Indeed, this field forms the super-high frequency waves that carry the potentials of the particles. However, I cannot see how this field relates to a particle that can act as a Higgs particle. In the HBM the Higgs mechanism is not necessary. The wave fronts that form the super-high frequency waves perform the task of adding the capability to curve space to the massive particles.

See: http://www.e-physics.eu/TheStochasticNatureOfQuantumPhysics.pdf

Hans,

You wrote: “I cannot see how this field relates to a particle that can act as a Higgs particle. In the HBM the Higgs mechanism is not necessary. The wave fronts that form the super-high frequency waves perform the task of adding the capability to curve space to the massive particles.”

What is in a name? If it is not the Higgs mass but the Higgs energy of the oscillating field that is 126 GeV, your problem is solved and you can forget the SM Higgs mechanism.

The energy of the super-high frequency oscillating fields cannot be measured, because the frequency of these waves is so high that they cannot be observed. Only the averaged effects of these waves become noticeable as potentials of the particles. Thus the energy differs per particle type.

I do not feel that there is a need to represent the Higgs field or particle in the Hilbert Book Model

The HBM offers a suitable alternative.

Hans: I don’t think there’s any problem with the spacetime model so long as you remember it’s a model. IMHO the problem comes when people start thinking of spacetime as being the same thing as space. Look out for people talking about things moving through spacetime. That’s wrong. Spacetime is an abstract “mathematical space” which models all times. As such, it is static. There is no motion in it. The map is not the territory.

There’s no real problem with proper time either provided you remember that clocks don’t literally measure the flow of time. They merely “clock up” local motion and show the cumulative result as “the time”.

Einstein did think along these lines, and IMHO people do try to use proper time to describe observations. Hence we have Kruskal-Szekeres coordinates for the black-hole infalling observer. The problem with them is that they attempt to cancel a stopped clock by putting a stopped observer in front of it, and overlook the fact that the observed goes to the end of time and back and is in two places at once.

I don’t feel attracted to your universe-wide super-high-frequency clock, I’m afraid. We have no evidence for it.

My conviction is that the spacetime view of contemporary physics and the paginated space-progression model that is based on a universe wide time clock are both valid but different views of the same physical reality. In the Hilbert Book Model I prove that the second view provides new insights.

Jonathan: let’s start a new subthread.

I’m sorry, but you have given no evidence that time literally flows. Time dilation isn’t evidence for it at all. Again see the simple inference of time dilation on wiki and note that all we have is the light and the mirrors moving. Time dilation occurs when local motion is reduced by virtue of the macroscopic motion through space, the maximum rate of motion being c. There is no literal block of “block time”, and no grain in it either. These things are abstractions. Motion is not. I can show it to you. You can see it. It is empirical. It is wrong to claim that time must therefore flow, just as it is wrong to claim that angels’ wings must therefore beat. You know it would be wrong to claim that time dilation is evidence that the angels’ wings beat slower. There ARE no angels beating their wings. There is no time flowing either. And you CANNOT show it to me.
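
The light-clock inference mentioned above is just Pythagoras: the moving clock’s light follows a longer zig-zag path, so each tick takes longer by the Lorentz factor. A minimal sketch:

```python
import math

def tick_ratio(v_over_c: float) -> float:
    """Lorentz factor from the light-clock geometry: 1/sqrt(1 - (v/c)^2)."""
    return 1.0 / math.sqrt(1.0 - v_over_c**2)

# A light clock moving at 0.6c ticks 1.25x slower. Nothing here is a
# 'flow of time'; only the light and the mirrors move.
print(tick_ratio(0.6))  # 1.25
```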

The burden of proof does not lie with me to disprove the flow of time. Just as the burden of proof does not lie with me to disprove the existence of fairies. There is no evidence for these things. But there is evidence that motion occurs. There is evidence that clocks clock up some form of regular cyclic local motion and display a cumulative result called the time. Open up a clock. That’s what clocks do, and you can see it. It is said that clocks measure the flow of time. Open up a clock. Can you see it? No.

As regards explanation and existence and evidence, IMHO unsupported conviction is what holds back scientific progress. And one such is the conviction that time literally flows. There is no evidence to support this conviction. It is a falsehood that bars the way to scientific progress.

You’ve still not answered any of the points I’ve made. The specific evidence in the permanent age differences between objects I’ve mentioned about four times now. And several clear arguments that time can’t be emergent, as in my first post on this page (30/11 12:16). And about motion – if you can see motion happening, then you’re seeing an object at a series of points in time, so it seems that either you’re seeing it moving through time, or you’re seeing some sort of flow of time. (See my post from 11:36 today for part of why the idea that there’s no motion through time got so ingrained.)

But on the other hand, I’ve answered your points, and showed what you called “The big picture” to be nothing outside well-known, standard GR. First Phil pointed out that your idea that light goes slower lower is only meaningful if you say how it’s measured, then I pointed out that you’re implicitly referring to an Earth surface clock, which we often have underneath our view, whether we know it or not. (In fact, if you take a 1 AU clock in the Sun’s field, in relation to it of course light goes slower lower, that’s very widely understood because of the Shapiro effect.)

And your points about light clocks turned out to apply to absolutely any clock, though you made out that there was some difference about light clocks.

So I’m leaving it there, as I’ve answered your points directly, but you’ve not answered mine.

I’ve explained time dilation twice. If you can see motion, you can see motion. Not something else. You can’t see a point in time. Or a series of them. You can’t see motion through time. You can’t see any flow of time. So when a light clock goes slower when it’s lower, it isn’t because time goes slower, it’s because light goes slower. I’m glad you can see that. Phil can’t. Yes other types of clocks go slower too, but that isn’t the point. The point is that the speed of light varies with position, like the patent evidence says, like Einstein said. And not just in 1911.

But OK, let’s leave it there.

Ok. I say this not to argue with you, but to clarify a point I made above, about the Shapiro delay. Light always travels at c in relation to a local clock. In relation to a 1 AU clock in the Sun’s field, it goes slower lower over the Euclidean distances, but not over the GR distances, which are longer. That explains part of this. But the other thing you should know is that the Shapiro delay has two equal components, a space component and a time component. Clifford Will has written about this. The time component is simply due to gravitational time dilation, so time does go slower.

Sorry, to put it the right way round, the longer GR distances can be taken to explain the space component of the Shapiro delay, but not the time component.
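
For scale, the size of the Shapiro delay being discussed can be estimated from the standard weak-field logarithmic formula. A sketch assuming a radar bounce off Mars at superior conjunction, with the ray grazing the solar limb:

```python
import math

# One-way Shapiro delay for a ray passing the Sun with impact parameter b:
# dt ~ (2GM/c^3) * ln(4 * r_transmitter * r_reflector / b^2)
GM_SUN = 1.327e20          # m^3/s^2
C = 299_792_458.0          # m/s
B = 6.96e8                 # m, solar radius (grazing ray)
R_EARTH_ORBIT = 1.496e11   # m, 1 AU
R_MARS_ORBIT = 2.28e11     # m, mean Mars orbital radius

one_way = (2 * GM_SUN / C**3) * math.log(4 * R_EARTH_ORBIT * R_MARS_ORBIT / B**2)
round_trip = 2 * one_way
print(f"round-trip delay ~ {round_trip * 1e6:.0f} microseconds")  # roughly 250
```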

Noted Jonathan. The Shapiro delay is said to be “nine tenths timelike” and “one-tenth spacelike”. But if you sent a light wave back and forth between two almost-adjacent stars, it wouldn’t bend at all, and the Shapiro-like delay would be “ten-tenths timelike”. However note the Einstein quote in this old wiki article:

http://en.wikipedia.org/w/index.php?title=Shapiro_delay&oldid=346844409

What he actually said in the original German was “die Ausbreitungsgeschwindigkeit des Lichtes mit dem Orte variiert”. That translates as “the propagation speed of the light varies with the place”. Think about how you would measure the thing we call time if you were between those two stars. If you were a relativity purist like me, you would use a light clock. That clock goes slower because the light goes slower. And because of the wave nature of matter, all other clocks “go slower” too. And so do you.

It’s good to talk. It would be nice if we had separate smaller topics so we could talk with less confusion. Another “time” perhaps.

It’s five tenths. The only thing to add about the Shapiro delay is that a common interpretation of the time component is a lengthening of the time axis due to curvature, equivalent and equal to the lengthening of the space axis. I’ve gone into that here, showing it does not work in tandem with illusion/emergent time: http://www.fqxi.org/community/forum/topic/1359

Yes, block time and motion through time can’t both be real. Block time certainly isn’t. It is totally incompatible with what we actually see. But that doesn’t then mean that motion through time is real. See where you said this?

“And if motion through time has to be unreal, the laws would then reside within the illusion, and physics would risk being the study of what an unexplained illusion does”.

There is a third way: motion through space is real. There’s no denying that you can see it. You can even demonstrate it to somebody with your fist. And motion through space within a mechanical clock is real too. You can open up a clock and you can see it. And you can examine that clock to see that this motion is geared down to the motion of the hands on the face of that clock. When the little hand has moved an inch we say an hour has passed. But hours don’t literally pass like buses. Nor does time. It doesn’t literally flow, and we don’t literally travel through it. See below for more:

http://www.physicsdiscussionforum.org/time-explained-t3.html

Note though that all this doesn’t mean relativity is wrong. Au contraire, it means it is right, and more important than people realise. Have a look at “The Other Meaning of Special Relativity” by Robert Close.

I simply don’t understand why whether you can see something has anything to do with it. A lot of good physics has been done on things we couldn’t see at the time. Talking to some physicists about the time issues, you can discuss what the answer might be – I’d rather do that than talk to someone who doesn’t even see the question. So I’m leaving it.

Should we change the paradigm in physics? I would say Yes! How can we do it? http://www.toebi.com/blog/applications/changing-paradigm/

Wish me luck! 🙂

@Kimmo

What is the application?

In the end, it might be some kind of fusion ignition system. An antimatter bomb is also one potential application. But my goal in future experiments is to keep the amount of annihilation as low as possible.

FYI I’ll start conducting my experiments today.

Phil, speaking about universal concepts of mathematics, I would mention clock (modular) arithmetic.
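
For readers unfamiliar with the term, clock arithmetic just means hours wrap around modulo 12. A trivial sketch with a hypothetical `add_hours` helper:

```python
# Clock arithmetic: hour values live on a 12-hour dial, so addition wraps.
def add_hours(hour: int, delta: int) -> int:
    """Clock-face hour after moving delta hours forward (12 shown as 12, not 0)."""
    return (hour + delta - 1) % 12 + 1

print(add_hours(9, 5))    # 2  : five hours after 9 o'clock
print(add_hours(12, 12))  # 12 : a full revolution returns to the start
```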

@Phil Gibbs (“String Theory”) “String theory … is the only solution to the problem of finding a consistent interaction of matter with gravity in the limit of weak fields on flat spacetime.” I agree with the preceding statement but with 2 caveats. First, if Wolfram’s “A New Kind of Science” is one of the greatest books ever written (as he seems to think) then string theory might have a surprisingly simple computational method based upon Wolfram’s mobile automaton. Second, string theory might be fundamentally correct (in its current form) but might require a generalization that is an order of magnitude more complicated than the current theory.

http://quantumfrontiers.com/2013/11/05/fundamental-physics-prize-prediction-green-and-schwarz/#comments

“How then can we test string theory?” If the monster group and the 6 pariah groups enable M-theory to have a computational method, then I claim that the alleged Fernández-Rañada-Milgrom effect and the Space Roar Profile Prediction are decisive tests of M-theory. (If the monster group and the 6 pariah groups fail to enable M-theory to have a computational method, then the string landscape might provide an infinite class of models that defy refutation.) Do dark energy and dark matter obey the equivalence principle? INDEED, IS THERE ANY EXPERIMENTAL PROOF THAT DARK MATTER PARTICLES EXIST?

Is there any experimental proof that dark energy and dark matter obey the equivalence principle?

“The process of learning what string theory is still has a long way to run. I don’t think we are close to seeing the end of it.” — Edward Witten, Newton Medal Ceremony, July 1, 2010

Einstein: Measurements show that inertial mass-energy equals gravitational mass-energy.

Brown: String theory might require a rethinking of Einstein’s equivalence principle. All measured mass-energy occurs on the boundary of the multiverse. All non-measured mass-energy occurs in the interior of the multiverse. For measured mass-energy, inertial mass-energy equals gravitational mass-energy. For non-measured mass-energy, the inertial mass-energy is always zero but virtual mass-energy that is not measured always has nonzero gravitational mass-energy. Dark energy has zero inertial mass-energy and negative gravitational mass-energy. Dark matter has zero inertial mass-energy and positive gravitational mass-energy. Is the preceding a meaningless farrago of verbiage? The alleged Fernández-Rañada-Milgrom effect might yield a decisive test.

http://vixra.org/pdf/1207.0049v1.pdf Gravity Probe B and the Rañada-Milgrom Effect

Is Milgrom the Kepler of contemporary cosmology? Are Milgrom’s acceleration law and the space roar the two main keys to unlocking the mysteries of string theory? If X is to string theory as Kepler’s laws are to Newtonian theory, then what is X?

Convex, the geometry of any string wrapped as a brane, where they now look for observables. And there’s a serious problem there, in the critical area of density functional theory (remember Neumann’s density of states?) and scalar potentials:

http://arxiv.org/abs/1210.2291

Opening up the problem seems to take one on to Sturm-Liouville integration, which then covers Lebesgue, Hermite, Laguerre and Bessel functions: all the knotty aspects of the quantum solution spaces. Then a new take on supersymmetry:

http://arxiv.org/abs/1206.4966

Gravitational lensing proves the existence of dark matter.

The existence of isotropic space extension might indicate the presence of dark energy, but it might just as well indicate the existence of matter beyond our information horizon.

There is no reason to call upon string theory in order to explain these dark influencers.

For Europe at least, Eric Verlinde has wrapped up dark energy: it’s off-diagonal, off-equilibrium, mostly free protons. Where the dispersion reduces the directional gravitational effect, there is a place for MOND solutions.

But looking off-equilibrium means looking beyond the physical in the classical sense! And that’s why it’s important now to consider more generalized models, and currents constituted by interaction and correlation. Much interesting work will follow in this line.

First, let me say, as many probably have, that viXra is in itself a contribution to physics merely by its open nature and its moderator’s acceptance of new ideas (thanks Phil!). This is one of the problems in physics for sure: extreme closed-mindedness fostered by years of academic stuffiness that has created the dogma which is starting to show holes today. Regardless of anybody’s pet theory, we need openness and tolerance of even the craziest of ideas (remember Oppenheimer’s quote about needing to hear some crazy ideas); we have moved away from innovation and exchanged it for complacency about ideas that have been solidified in granite.

But the new jackhammer for that granite is experiment (as it always has been), and as much as everybody is cutting back there are still experimental programs without astronomical budgets like the LHC’s that are doing good work: Herschel, LIGO (and yes, not seeing gravitational waves is useful information!), ISS dark matter experiments and many others. I think we will find that the space telescopes will challenge our view of the universe (especially its age), and that there are many ground-based experiments yet to be performed (because we have overlooked minor details in some existing “simple” experiments) that will reveal more about fundamental physics. Necessity is the mother of invention, and the need to answer some fundamental questions about all the “dark” entities will resolve itself right here on Earth with less-than-million-dollar budgets, in my opinion. The days of large expensive colliders are coming to an end; no news from the LHC by 2015 means the end of super-high-energy experiments, and the soul searching that often results in simpler experiments (and cosmic-ray observations) that will bring about more answers.

Yes, math is good, but it is not a physical theory, and as the LHC has been showing us, beautiful mathematics does not necessarily result in experimental evidence (but can result in a long wait for evidence that brings back nothing). So the real path, in my opinion, is revisiting our own steps in fundamental physics to find the mistakes (yes, mistakes made by people who are human). I believe it will start with the starburst galaxies and other cosmic anomalies that are being found, and end with much simpler experiments on Earth that could have been done 100 years ago if somebody had taken the time to think about them. The standard model has been successful enough to deceive us about the complete truth, and settling for anything less than the complete truth is the sin of the last generation of physicists that needs to be resolved (like the maze with a tantalizing path that comes so close to the exit).

Well spoken Mike! I totally agree with you.

Phil,

I agree that LQG and string Phenomenology share some SUSY issues. What keeps the core views apart?

I saw a paper that wants to look again at LQG quantization formulas that they say perhaps we should reconsider this numerology.

Why, when there is perfectly good standard mathematics that says otherwise?

http://matpitka.blogspot.com/2014/01/magnetic-hysteresis-super-string.html#c5291507946486265280

There have been some comments that High Energy Physics experimental research is very expensive and I want to set this in perspective:

The price of one Nimitz-class aircraft carrier of the type the US deploys is $4.5 billion, and the US has ten of them: http://en.wikipedia.org/wiki/Nimitz-class_aircraft_carrier

The total budget of the LHC is $9 billion: two aircraft carriers.

The cost is shared among many countries, not only the CERN members, it is a world project.

An aircraft carrier will end up either at the bottom of the ocean or as scrap metal, its use just a deterrent instrument for a powerful nation. The data from the LHC and any future collider will be available to future generations, as a stepping stone in the saga of elementary interactions.

There is still hope that the 14 TeV data will give us a glimpse of future physics, or that some brilliant young researcher will be able to separate better the baby from the bathwater.

As long as countries are willing to burn money in huge army projects one should not regret the much smaller percent spent in basic physics.

I still think though that not enough money is spent on trying to find new methods of acceleration for particles. One would think that with nanotechnology advancing in leaps and bounds, some new way of utilizing the fields around nuclei might be developed that could be used in accelerators and would not require huge new tunnels ( http://www.nature.com/news/physicists-plan-to-build-a-bigger-lhc-1.14149 ). Thus I would favor the next accelerator being the ILC or an improved LEP on the Higgs. This would give accelerator technology time to make a breakthrough.

How the universe was created from nothing

Origin And Nature of the Universe

New Science 2013 versus classical science

Classical Science Is Replaced By 2013 Gravity Comprehension !!!

http://universe-life.com/2014/02/24/gravity/

Attn classical science hierarchy, including Darwin and Einstein…

“I hope that now you understand what gravity is and why it is the monotheism of the universe…DH”

=================================

Gravity is the natural selection of self-attraction by the elementary particles of an evolving system on their cyclic course towards the self-replication of the system. Period

( Gravitons are the elementary particles of the universe. RNA nucleotides genes and serotonin are the elementary particles of Earth life)


Dov Henis(comments from 22nd century)

http://universe-life.com/2013/11/14/subverting-organized-religious-science/

http://universe-life.com/2013/09/03/the-shortest-grand-unified-theory/

It seems rather reasonable to make a number of notes here that are mostly “philosophical”.

First of all – among the existing sciences there are three truly fundamental ones: physics, mathematics, and biology; and all of them use some meta-physical (meta-mathematical, meta-biological) notions that require some grounds, which, quite naturally, cannot be deduced from the sciences themselves.

Corresponding attempts – and there are a number of examples of such attempts – are, of course, unsatisfactory.

These notions occur at the next level – the “Meta-level”. And this level indeed exists – it is rigorously argued that everything that exists, observable and unobservable alike, consists of informational patterns (see “The Information as Absolute”, http://viXra.org/abs/1402.0173), which exist as elements of the absolutely infinite Set “Information”, with, e.g., Matter being some (practically infinitesimal) subset of this Set.

This includes the physical meta-notions “space”, “time”, and “Matter”, which become definable in principle. For example, the notions “space” and “time” are some universal rules/possibilities that govern any informational structure in the Set – some analogues of grammar rules. When applied in Matter they are realized specifically, but they are absolute; there is no way to change them if an observer is in Matter, including “in reference frames”. So space and time as possibilities realize themselves as a 4D Emptiness, where there are no “points of reference”; to have such a point some “material structure” is necessary – as Poincaré said:

“… Again, it would be necessary to have an ether in order that so-called absolute movements should not be their displacements with respect to empty space, but with respect to something concrete…”

It seems that such “a concrete” is some very regular and dense 4D lattice, some “everyferriouse” ether; everything that exists in Matter consists of local disturbances of this ether.

(see also “Space and Time” http://arxiv.org/abs/1110.0003 , “To measure the absolute speed is possible?” http://viXra.org/abs/1311.0190)

Cheers