The LHC shut down for the winter on 6th December, and already this month we have had several conferences with new results released and some indications of future plans for the LHC.

In some cases we have seen plots using almost all of the 45/pb of data collected so far. The summary is that no new beyond-the-Standard-Model discoveries have shown up yet. Some new limits have been set for exotic particles and the allowed parameter space for SUSY has been trimmed down.

The lack of black holes in particular has led to a flurry of headlines saying that string theory predictions have failed. On this subject I am fully in agreement with Lubos when he says that very few theorists (including string theorists) expected to see these black holes. They are part of a speculative theory about large extra dimensions. It has some interesting features, but it is not a definite prediction of string theory.

Lubos has also pointed out that some results from the Tevatron are favouring a SUSY Higgs sector. The elimination of some SUSY parameter space is not a big deal for SUSY supporters. There was always a possibility of SUSY showing up at this point, but it is much more likely to appear with more data. Until the Higgs sector has been fully explored and understood by the LHC, everything is to play for.


This entry was posted on Tuesday, December 21st, 2010 at 9:34 am and is filed under Conference, Large Hadron Collider, String Theory. You can follow any responses to this entry through the RSS 2.0 feed.
Both comments and pings are currently closed.

Well, I see from Duff’s latest stringer rant that the rest of us are all still just crackpots. Obviously he hasn’t been thinking carefully enough about Jordan algebras.

Merry Christmas. Apologies for asking a simple question, but did you change your identity, undergo brain transplantation surgery, pass an exam on state-of-the-art theoretical physics, or something like that?

Or why exactly should the “rest of you” cease to be what you have always been?

Indeed, as Phil emphasizes below, Duff is among the theorists who spend the most time with similar algebras, to say the least, so maybe his comments are not despite this fact, but perhaps partially because of this fact? 😉

Just a thought but maybe it would be a heresy for you to consider this possibility, right?

Dear Lubos, I greatly admire much of the mathematics in String Theory, and have done ever since I studied mathematical physics at Princeton and in Australia in the ’90s. In fact, I would not dare be so critical of the non-existent Physics if I had not spent roughly 20 years figuring out exactly how it could all be wrong.

A difference between Duff and other algebraist modellers is that their team just tripped over all the octonion and division algebra stuff without expecting it. In some of the first preprints you can see comments of the kind “of course this is well known by mathematicians, but I was first told of it by Mr XXX”.

I think he is thinking a lot about Jordan algebras. I don’t know what he thinks of work such as yours, but the wording of his letter to NS was significantly modified by the editors so be careful how you read it.

Dear Phil, a self-evidently “up to his job” reader argues that the bbb signal at the Tevatron, if real, would still require tan(beta) around 70, which is very high and incompatible with other measurements, including the b-tau-tau events. So now I am not sure whether this interpretation of the deviation is the right one.

It could be something more surprising – something that distinguishes quarks from leptons, a non-standard replacement for GUT, whatever…

Meanwhile, the CMS’ increased bounds at 35/pb, excluding SUSY to 1/2 of the Indian island masses, are pretty much unreachable by the Tevatron. I think that in the search for any new physics that is at least a bit heavier than 150 GeV or so, the Tevatron can’t match what the LHC has already done even if it ran for another decade.


By the way, I have advanced in my hidden SU(5) flavour symmetry of the scalars of the SUSY SM, and I will probably send you something for viXra next week.
My three or four hep-th reports in the last five years were unable to identify three spurious states in the 15 multiplet, because of their very exotic values of electric charge, but now I have noticed that they are partners of chiral fermions, so they can only see the chiral charge, and the total charge admits more sensible values. In fact I suspect they are the partners of the chiral fermions that marry the gauge supermultiplets; that should mean that three of the Higgses are actually composites, or at least part of an SU(5) symmetry. Only h and H are now left out of the SU(5) scheme.

I also like SU(5). But we already have an approximately correct start with a Georgi-Glashow SU(5) GUT. Could your SU(5) of hidden flavor combine with the Georgi-Glashow SU(5) to form an SU(7)? [And the extra 2 dimensions are the so-called M2 Black-brane?] By the way, it is interesting that you call it “Hidden Flavor” – I had an idea in my January 2008 book that I called “Hyperflavor”.

Are your three spurious states right-handed neutrinos? My crazy models also include “fermions” with apparently tachyonic properties.

I think that the “scalar” content must be much more complicated than even the MSSM Higgs pair of complex scalar doublets. Yes – that theory permits the W and Z to gain mass, but I don’t think it is complicated enough to explain the entire fundamental fermion mass spectrum.

Hi Ray. Grammar, like the octonions, is not associative: a Hidden “Flavor Symmetry” is not the same as a “Hidden Flavor” Symmetry 😀 The flavour in my preprints has always been the usual, very explicit one. The symmetry was hidden, and it is related to the symmetry that should protect the Yukawas of the quarks in order for them to be null.

Although it is true that some of my older ideas had extra generations, and therefore, extra flavors, I also mean that the known flavors are broken in a hidden sector by a hidden force that I call Hyperflavor.

Dear Philip, by the method of extrapolation of previous measurements near the critical point, which is right above this comment ;-), I discourage you from answering that question because I can predict where such an answer will lead. 😉

Yes Lubos, I think I can see the hook sticking out of the bait 🙂

All I can say is that the traditional renormalisation procedure for perturbative QED is formally consistent and has plenty of empirical success. Ultimately the perturbative series is divergent and QED may be inconsistent due to the Landau pole. I don’t think this question is resolved, but it is a long time since I studied such things. Asymptotically free theories such as QCD have a better chance of being fully consistent non-perturbatively, but this is not proven either.

If there are inconsistencies then they will be resolved through the replacement of the standard model by something better at higher energy scales. We know this only through the observation that the universe exists. I don’t think there is much more I can say 🙂
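The divergence of the perturbative series mentioned above can be illustrated numerically. A hypothetical toy sketch (not an actual QED computation): if the n-th order contribution grows like n!·αⁿ, as is typical of large-order perturbation theory, the terms shrink until n ≈ 1/α ≈ 137 and then blow up, so the series is asymptotic rather than convergent.

```python
import math

ALPHA = 1 / 137.036  # fine-structure constant

def log_term(n, alpha=ALPHA):
    """Log of a model n-th order term ~ n! * alpha^n (factorial growth
    is characteristic of large-order perturbation theory)."""
    return math.lgamma(n + 1) + n * math.log(alpha)

# Find the order at which the terms stop shrinking and start to grow.
terms = [log_term(n) for n in range(1, 300)]
n_min = 1 + min(range(len(terms)), key=terms.__getitem__)

print(n_min)  # optimal truncation order, close to 1/alpha
```

Truncating the series near this order gives the best achievable accuracy; beyond it, adding terms makes things worse.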

The Landau pole is not a problem for me. I just do not like the corrections, finite or infinite, that appear to the fundamental constants. And I do not like discarding any corrections (subtractions). I think we constructed a wrong initial Hamiltonian and we discard the bad contributions, so it is not an achievement but a failure of physics and mathematics. Because we repair the wrong solutions on the go, we still do not know what the real electron is, despite the “success” of such a prescription. Otherwise we would start the perturbation theory from it.

Lubosh, please, participate constructively, if you can.

Renormalization at first sounds counter intuitive. The one thing it does lead to is the renormalization group, which is a very elegant construction. Renormalization has continued to provide us with deeper insights into quantum fields.

You said “If renormalizations are understood as discarding unnecessary corrections to physical constants, no renormalization group arises.”

I’m not sure I understand your point, but consider the example of the U(1) “electromagnetic” Renormalization Group Equation (RGE). At very low energies, the fine structure constant has a value of ~1/137. At the Z mass, the fine structure constant has increased to ~1/128. These numbers are predicted by the RGEs and measured in experiments. Perhaps we could argue that the experiments had assumed some of this theory and were, therefore, predetermined to confirm this theory…

I agree that renormalization is an “ugly” process, but it seems to work. From a personal perspective, I would be more pleased if we could “explain away” infinities and their inverses with the mathematical properties of Scales.
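The running described above can be reproduced in rough outline. A hedged sketch of one-loop QED running, summing charged-fermion loops; the quark masses below are naive constituent-style values chosen for illustration (the proper hadronic contribution needs data), so treat them as assumptions:

```python
import math

M_Z = 91.19  # GeV
INV_ALPHA_0 = 137.036  # 1/alpha at low energy

# (electric charge, colour factor, assumed mass in GeV) for each charged
# fermion lighter than M_Z; quark masses are rough illustrative values.
fermions = [
    (-1, 1, 0.000511),   # e
    (-1, 1, 0.1057),     # mu
    (-1, 1, 1.777),      # tau
    (2/3, 3, 0.3),       # u
    (-1/3, 3, 0.3),      # d
    (-1/3, 3, 0.5),      # s
    (2/3, 3, 1.5),       # c
    (-1/3, 3, 4.5),      # b
]

# One-loop: 1/alpha(M_Z) = 1/alpha(0) - (2/3pi) * sum N_c q^2 ln(M_Z/m_f)
delta = sum((2 / (3 * math.pi)) * nc * q**2 * math.log(M_Z / m)
            for q, nc, m in fermions)
inv_alpha_mz = INV_ALPHA_0 - delta
print(round(inv_alpha_mz, 1))  # lands in the neighbourhood of the measured ~128
```

Even with these crude thresholds, the logarithmic running carries 1/137 down to roughly 1/128 at the Z mass, which is the point of Ray's example.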

Renormalisation theory is understood today in very mathematical terms: Hopf algebras for diagrams, and rules for associating analytic content to diagrams. This can be studied quite independently of Lagrangian formulations for gauge theories and conventional renormalisation group techniques. Although any reformulation of the successful Standard Model must take a stand on the emergence of this abstract renormalisation procedure, there is no reason a physical electron cannot be discussed in some other context. In fact, modern twistor techniques, which make solid contact with HEP experiment, do remove themselves from renormalisation theory by working with non-local operators. Physicists have no excuse for their continuing lack of familiarity with these results.

This is the key problem – “Renormalisation theory is understood today in very mathematical terms: Hopf algebras for diagrams, and rules for associating analytic content to diagrams.”

Each such diagram should be multiplied by zero because it describes an elastic channel of charge scattering whose probability is zero. In the end, people obtain correct, inclusive, non-zero cross sections with the right behavior, but only in the end, not before!

In other words, however ingenious the mathematics found to describe a fallacious approach, it does not advance physical understanding of the final results. And the final (inclusive) results are physically very different from the first Born approximation results. The right results are obtained by repairing wrong results on the go, not by advancing a more physical approach from the very beginning.

Dear Kea, any diagram with a finite number of initial and final particles is missing a factor equal to zero. Is this factor involved in your favorite approach?

Are you saying that a “proper” Feynman vertex should have an infinite number of interactions?

The Feynman vertex represents a matrix (such as the Gell-Mann matrices for color theory), and this would require an infinite-by-infinite matrix. As such, any TOE based on this model would likewise be infinite and impossible to comprehend…

I’m not “trashing” that particular idea, because I think that there are many (perhaps infinite at some scale?) “dimensions” that correspond to higher-energy phenomena. Perhaps an “infinite” can be modeled with a “trans-finite”.

Or you could use perturbation theory with an infinite number of Feynman 3-legged vertices. [The Standard Approach -you could still have an infinite number of initial and final states, but now you have an infinite number of successive 3-legged vertices.] These vertices would be multiplied by different powers of couplings: alpha^2, alpha^4, alpha^6, etc. At some point, it would be appropriate to ignore contributions of order (alpha)^2N where N is sufficiently large [for all but QCD interactions with alpha>1]. I guess that these are the “zeros” that you mention.

I would like to eliminate the infinities and inverse infinities with Scales. Our top scale-limit is the speed of light. Our bottom scale-limit is the Planck scale. It seems that a physical infinity cannot exist in an apparently finite Universe. Dirac’s Large Number of 10^40 and geometrical powers thereof (for instance, the Cosmological Constant is 10^(-120) ~ (10^40)^(-3) in 3 spatial dimensions?) may be as close to infinity as we can physically realize.

V, you appear to be under the impression that I am discussing ordinary Feynman diagrams. I am not. If you want to know something about what I am saying, I have written plenty about it and much of this research is available online.

Renormalization is a way of realizing effective field theories. The Landau pole is assigned to an energy scale that is outside the bounds of the effective theory. Further, this upper bound can be rescaled in a way so as to describe running parameters. Hence we have the renormalization group. It is clear that at around 10^{-17} cm in length scale standard QFTs go bad, where some additional physics such as the Higgs field is required. So the “problems” of QED renormalization are rolled into a bigger picture.

I suppose that if you are working with physics in the 100GeV range of energy it is not that problematic to shove the infinities to a scale far beyond your domain of interest. Of course now there are some subtle issues with strings and extra large dimensions, which do make such connections.

I agree that we do not know the high-energy excitations and the corresponding physics, but that is not the problem for QED and other QFTs. The first problem is in admitting a self-action potential energy (infinite, or finite if cut off), which we then get rid of by discarding the corrections to masses. I.e., this part of the interaction is at least unnecessary and only complicates calculations. The second problem is in treating the whole coupling perturbatively whereas the charge-field coupling is too “strong”; hence IR divergences. In fact, the lack of soft radiation in the first Born approximation is a sign of a very bad start in the perturbation theory – the most probable events are missing. No wonder IR divergences appear at the next step.

All that means we can try to reformulate the theory so that the soft radiation is automatically present in the lowest order (due to permanent coupling) and no UV corrections appear (due to excluding self-action from the interaction Hamiltonian from the very beginning). I have a toy example of such construction (“electronium”).

I skimmed your 30 page paper – obviously it deserves a more serious read-through. The application vaguely reminds me of Plasma Physics whereby a proton “drags” an electron cloud, or of Solid State Physics whereby an electron “drags” a hole. At the Particle Physics level, we could probably imagine a high-energy particle (particularly something in the TeV energy scale) “dragging” the Dirac Sea/ Vacuum.

I agree that at some “scale” it may be more appropriate to treat “fundamental particles” as “composite particles”. I did something similar in my latest paper:

No, electronium is still a charged but compound system, with a center of inertia and “internal” (relative motion) degrees of freedom. The charge and the quantized electromagnetic field are permanently coupled in it, so this part of the interaction is taken into account exactly. This is a better initial approximation for QED. It is more reminiscent of an atom or a negative ion with its charge clouds than of a magnetic monopole.

I think a quasi-particle approach is more physical and fruitful than the “elementary” particle approach. In the latter we have difficulties with coupling, and in the former the decoupled equations describe different features of a coupled system in terms of separated variables.

Your program seems to be a way of keeping QED at tree level. Of course this requires some clever machinations to absorb the rest of the radiative correction terms into the tree level.

Renormalization has a curious history, for it assumes we can always shove unwanted infinities outside of the physics of interest. Quantum gravity of course has been a black cloud that loomed over the enterprise, but string theory and vertex topology (Veneziano amplitudes etc) and the rest has softened these concerns.

Charge renormalization still has some problems. The amplitudes result in integrals of the form

∫d^4k [(k – p)^2(k – q)^2]^{-1}

which are logarithmically divergent and give terms such as ln(Λ/m), where Λ is the cutoff that leads to the logarithmic divergence as Λ → ∞. So we keep Λ at some finite cutoff value. The fact that these divergences are logarithmic is nice, for much the same reason that “slop” in statistical mechanics (estimating macrostate volumes etc.) does not contribute a huge error. Charge renormalization is one place where the estimate enters, and the observed charge q_{ob} is related to the bare charge by

q_{ob} = q_0/(1 + βq_0 ln(Λ/m)),

which looks rather trivial, for as the cutoff goes to infinity the observed charge becomes zero. However, this equation may be inverted to solve for q_0 in terms of q_{ob}, and you get

q_0 = q_{ob}/(1 – βq_{ob} ln(Λ/m)),

which diverges for βq_{ob} ln(Λ/m) = 1, i.e. for a cutoff Λ = m e^{1/(βq_{ob})}. So additional behavior is required, where β = β(q), and one works with the Gell-Mann–Low equation

dq/d ln(k) = β(q) = Σ_n β_n q^n

and this enters into the Callan–Symanzik equation, with q replaced by any coupling parameter of a YM gauge theory,

[m∂/∂m + β(g)∂/∂g + …]G(x_1,x_2,…,x_n, m, g) = 0

which scales the n-point correlation function with respect to energy. The above charge renormalization feeds into that part. This then leads to the renormalization group and …. well some pretty interesting stuff.
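The location of that divergence can be estimated numerically. A sketch under the simplest assumptions (pure QED with a single charged fermion, β = 2/3π in the inverse-coupling convention, electron mass as the reference scale), showing why the pole sits absurdly far beyond any physical scale:

```python
import math

m_e = 0.000511        # GeV, reference scale
inv_alpha = 137.036   # 1/alpha at the electron mass

# q_0 = q_ob/(1 - beta q_ob ln(Lambda/m)) blows up when
# beta * alpha * ln(Lambda/m) = 1, with beta = 2/(3*pi) for one fermion.
beta = 2 / (3 * math.pi)
log_ratio = inv_alpha / beta  # ln(Lambda/m_e) at the pole

landau_log10_gev = log_ratio / math.log(10) + math.log10(m_e)
print(round(landau_log10_gev))  # pole near 10^277 GeV, far above the Planck scale
```

Since the Planck scale is only ~10^19 GeV, the QED Landau pole lies vastly beyond the domain where the effective theory could be trusted anyway, which is the sense in which the infinity gets "shoved outside the physics of interest".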

The Callan-Symanzik equation is similar to the Navier-Stokes equation for fluid flow. In some ways that is what it tells us: how n-point correlation functions “flow” with a rescaling of energy.

It is not really tree level but an interaction of compound systems that excites their internal degrees of freedom.

/
———-/———–>
________/
\
\________
/
“we can always shove unwanted infinities outside of the physics of interest. ”

But this is achieved with subtractions – a modification of the perturbative result. A computer cannot do it while iterating some equation numerically. So such a subtraction is a human intervention into calculations.

As I said previously, one can just discard unnecessary “corrections” to the fundamental physical constants, and no bare constants with their cut-off dependence arise.

Look at this: I have a charge e. In the next order I manage to separate a part of the perturbative correction as a “correction” de to e. But e was OK in the first Born approximation – it gave good numerical results. Why should I change it? It is not a calculable constant but a fundamental one. Besides, any correction to e worsens agreement with experiment. So I discard it and, oh miracle, my results become reasonable and even more precise!

But to camouflage this discarding (it is not sensible math), they invent a bare charge e_b and “define” the physical charge as a sum e = e_b + de. Obviously it is the same as discarding de and keeping e physical. But now we have an additional degree of freedom in choosing e_b(Λ) and de(Λ) so that e = e_b + de. This liberty leads to the “renormalization group”. To me it is clear that the RG cannot help us out of this divergent situation. Renormalizations remain “sweeping garbage under the rug” (i.e., banal discarding).

Try to understand that there were no bare particles in Dirac’s mind when he wrote QED. Dirac said: “I advanced a wrong Hamiltonian, let us look for a better one”, or words to that effect. That’s what I am trying to do.

I did not manage to draw it carefully: everything has shifted and one line disappeared. Anyway, two parallel dashed and solid lines are a bound state of a compound system (an atom, for example), both for the target and for the projectile. When atoms scatter to large angles, it is their nuclei that repel and change trajectories. Crossed dashed and slash lines correspond to superpositions of excited final atomic states.

Similarly for scattering of electroniums – all soft photon lines usually drawn in numbers while explaining the IR catastrophe are replaced with one horizontal line crossed with the charge line. Parallel, they describe a certain initial state of electronium; crossed, they describe a superposition of excited states (photons) in the final state.

If you go back to basic electromagnetism you learned about the electric displacement vector D = ε_0E + P = εE. The change in the field value in a media is due to the polarization of charge pairs (such as those composing molecules) which screens the charge immersed in the media. Charge renormalization is similar, for the virtual pairs which compose the vacuum (e-e^+ pairs) polarize around the bare charge. In QCD something similar works, but it is a sort of anti-screening.
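The dielectric analogy above is easy to quantify. A small illustrative example (textbook electrostatics, not QED): the potential of a point charge immersed in a medium is reduced by the relative permittivity, just as vacuum polarization screens a bare charge.

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602e-19  # elementary charge, C

def potential(r_m, eps_r=1.0):
    """Coulomb potential of one elementary charge at distance r, screened
    by a medium of relative permittivity eps_r (D = eps0*E + P = eps*E)."""
    return E_CHARGE / (4 * math.pi * EPS0 * eps_r * r_m)

phi_vacuum = potential(1e-9)            # ~1.44 V at 1 nm
phi_water = potential(1e-9, eps_r=80)   # screened by a factor of ~80 in water
print(round(phi_vacuum, 2), round(phi_vacuum / phi_water))
```

In the QED analogy the "medium" is the polarizable cloud of virtual pairs, and the observed charge plays the role of the screened one.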

Yes, when I was a student, our professor used exactly this analogy – screening of a charge in a dielectric medium. So there is a bare-particle Lagrangian and a counter-term particle Lagrangian, with neither kind of particle ever observed but both “necessary” to describe experiments. What a physics!

Let me remind you that an electron in an atom not only screens the nucleus charge at long distances but also smears it at short ones. So no Coulomb singularity appears for an external observer.

A “bare” charge would not only be “screened” but also “smeared” due to interaction with the “bare particles”. Those who talk about screening forget about the motion effect. Apparently because it is not “foreseen” in the counter-term Lagrangian.

Tell me, why a bare charge interacting with vacuum pairs should stay still?

Certainly, these screening effects (and even anti-screening effects) occur in many fields of Physics. I also have no problem with composite particles [IMHO, these may imply higher-order String and/or lattice effects].

My first question is whether or not your idea is more efficient or more economical than the standard program involving renormalization and perturbation theory.

My second question is whether or not your idea implies any new and/or testable Physics.

You asked “Tell me, why a bare charge interacting with vacuum pairs should stay still?”

If we represent the vacuum with a collection of localized electron/positron pairs [of course the Pauli Exclusion Principle prevents this idealistic vacuum model], then the tree-level interaction between the bare charge and the electron is “perfectly” [in theory – in reality there may be higher-moment effects] counterbalanced by the tree-level interaction between the bare charge and the positron.

Dear Ray, do you know that any nucleus in a crystal lattice is smeared quantum mechanically, with a smear size of the order of the lattice step? In QM, interaction leads to smearing.

I know about the screening effect in plasma, for example (Debye screening). It proceeds from a static probe charge and gives a good estimate of the Debye radius, but in reality the screening is dynamical (which is the more physical picture).
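The Debye estimate mentioned here is easy to make concrete. A standard sketch (the density and temperature below are arbitrary illustrative choices, not values from the discussion): λ_D = sqrt(ε₀ k_B T / (n e²)) sets the distance beyond which a probe charge in a plasma is effectively screened.

```python
import math

EPS0 = 8.854e-12      # vacuum permittivity, F/m
K_B = 1.381e-23       # Boltzmann constant, J/K
E_CHARGE = 1.602e-19  # elementary charge, C

def debye_length(n_m3, t_kelvin):
    """Debye screening length for electron density n and temperature T."""
    return math.sqrt(EPS0 * K_B * t_kelvin / (n_m3 * E_CHARGE**2))

# e.g. a laboratory plasma: n = 1e18 m^-3, T = 1e4 K
lam = debye_length(1e18, 1e4)
print(f"{lam:.1e} m")  # of order a few micrometres
```

Beyond a few Debye lengths the probe charge is invisible, which is the classical counterpart of the charge-screening picture being debated.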

1) I really hope so since no renormalization is involved and the soft radiation is obtained automatically in the first Born approximation rather than after summing up all orders of soft diagrams,

2) My idea was not advanced to explain some unexplained experimental data, so it is just a reformulation of QED in terms of the usual QED ingredients, coupled differently. It does not have a Landau pole, but I cannot imagine an experiment where its results would differ from the standard QED final physical results.

Currently my “theory” is more at a conceptual stage, rather undeveloped. For example, I take into account exactly only the quantized electromagnetic field, but not virtual pairs. However, it is already sufficient to obtain quantum mechanical smearing of the electron charge. In fact, the high-frequency modes contribute less to the smearing effect than the low-frequency (soft) modes. At high energies we can add e-p pairs to the spectrum of excitations, muon pairs, etc. Their presence will influence high-energy physics (the final particle zoo) but not the low-energy spectrum, which is purely electromagnetic (photons).

Yes – I am familiar with all of your examples. I started grad school in experimental Plasma Physics (at U. Texas), switched over to experimental Solid State Physics (also at U. Texas), and finally completed my Doctorate in High Energy Physics Phenomenology (at Florida State U.). My friends and family probably thought I was going to be a professional student…

It is OK to agree with current theory – any new theory must accomplish at least that much. Perhaps your idea has aesthetic advantages (by eliminating the Landau pole) over standard renormalization. I expect “composite particle” behavior to help signal the presence of Strings, sort of like phonons indicating lattice effects.

There is nothing bad in a wide scientific background like yours. On the contrary, I think, it is a plus.

I agree with QED final results but not with the way they are obtained. What I am looking for is a short-cut to them. I think it is possible to achieve if we recognize that a charge and its quantized electromagnetic field are intrinsically coupled (never decoupled). We can advance an initial approximation from this understanding (electronium-like) and the perturbative series will be different (without divergences). It has not only aesthetic advantages but also physical, mathematical, and conceptual ones.




And have you succeeded in your predetermined goal “to figure out exactly how it could all be wrong”?

Sorry, this question is just a rhetorical one – it’s one of my classical pranks, bazinga.

As a rule one does not disprove a theory with a theory. 😉

Normally yes, Lawrence, but when A Dogma is capable of ignoring all experimental evidence, one must at least try this alternative means.


Lubos, thanks for the update. Perhaps by this time next year the mystery will be resolved.


Leptophobic models. Hmm.

Completely independently and accidentally, Arvind Rajaraman et al. have a new SUSY model

http://motls.blogspot.com/2010/12/single-higgs-doublet-supersymmetric.html

with a single Higgs doublet. It has a fourth antigeneration and no normal Higgses; the fourth antigeneration has reversed signs of parity; its sneutrino plays the role of the down-type Higgs; there’s no up-type Higgs.

However, the new generation has new quarks and squarks. The interactions of the b’s and maybe the Higgs could produce an excess of the bbb events while creating no new excess in b-tau-tau – maybe. At this level, it’s my wishful thinking supported by vague analyses of the asymmetry of the spectrum.

Cheers

LM

Parity meant R-parity.





Hi Philip,

What do you think of renormalizations?

Hi. Renormalisation is fine, just scaling laws near a critical point. Why do you ask?

I am unhappy with the renormalizations in QFT.

OK, if it is fine with you, can you describe qualitatively what a dressed (real) electron is, please? How is it different from a free Dirac electron?

Dear Philip, by the method of extrapolation of previous measurements near the critical point, which is right above this comment ;-), I discourage you from answering that question because I can predict where such an answer will lead. 😉

Yes Lubos, I think I can see the hook sticking out of the bait 🙂

All I can say is that the traditional renormalisation procedure for perturbative QED is formally consistent and has plenty of empirical success. Ultimately the perturbative series is divergent, and QED may be inconsistent due to the Landau pole. I don’t think this question is resolved, but it is a long time since I studied such things. Asymptotically free theories such as QCD have a better chance of being fully consistent non-perturbatively, but this is not proven either.

If there are inconsistencies, then they will be resolved through the replacement of the standard model by something better at higher energy scales. We know this only through the observation that the universe exists. I don’t think there is much more I can say 🙂

Thanks, Philip.

The Landau pole is not a problem for me. I just do not like the corrections, finite or infinite, that appear to the fundamental constants, and I do not like discarding any corrections (subtractions). I think we constructed a wrong initial Hamiltonian and we discard the bad contributions, so it is not an achievement but a failure of physics and mathematics. Because we repair the wrong solutions on the go, we still do not know what the real electron is, despite the “success” of such a prescription. Otherwise we would start the perturbation theory from it.

Lubosh, please, participate constructively, if you can.

Renormalization at first sounds counterintuitive. The one thing it does lead to is the renormalization group, which is a very elegant construction. Renormalization has continued to provide us with deeper insights into quantum fields.

So, with this deeper insight you can describe a real free electron, can’t you?

The renormalization “group” is only “possible” if you admit the existence of “bare” particles, about which you had no idea while writing the QED equations.

If renormalizations are understood as discarding unnecessary corrections to physical constants, no renormalization group arises.

Dear Vladimir,

You said “If renormalizations are understood as discarding unnecessary corrections to physical constants, no renormalization group arises.”

I’m not sure I understand your point, but consider the example of the U(1) “electromagnetic” Renormalization Group Equation (RGE). At very low energies, the fine structure constant has a value of ~1/137. At the Z mass, the fine structure constant has increased to ~1/128. These numbers are predicted by the RGEs and measured in experiments. Perhaps we could argue that the experiments had assumed some of this theory and were therefore predetermined to confirm it…

I agree that renormalization is an “ugly” process, but it seems to work. From a personal perspective, I would be more pleased if we could “explain away” infinities and their inverses with the mathematical properties of Scales.
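As a rough illustration of that running, here is a one-loop, leading-log toy estimate. The light-quark masses below are conventional effective values standing in for the non-perturbative hadronic contribution (which in practice is extracted from data), so the numbers are only indicative:

```python
import math

ALPHA_0 = 1 / 137.036  # fine structure constant at zero momentum
M_Z = 91.19            # Z boson mass in GeV

# Charged fermions lighter than M_Z: (mass in GeV, electric charge, color factor).
# The light-quark masses are effective values, not physical ones.
FERMIONS = [
    (0.000511, -1, 1), (0.1057, -1, 1), (1.777, -1, 1),   # e, mu, tau
    (0.3, 2/3, 3), (0.3, -1/3, 3), (0.5, -1/3, 3),        # u, d, s (effective)
    (1.5, 2/3, 3), (4.5, -1/3, 3),                        # c, b
]

def inv_alpha(scale):
    """Leading-log vacuum-polarization estimate of 1/alpha at `scale` (GeV)."""
    delta = sum(
        (ALPHA_0 / (3 * math.pi)) * nc * q**2 * math.log(scale**2 / m**2)
        for m, q, nc in FERMIONS if m < scale
    )
    return (1 - delta) / ALPHA_0

print(inv_alpha(M_Z))  # roughly 128, versus ~137 at low energy
```

Even this crude sum over vacuum-polarization logs reproduces the measured shift from ~1/137 to ~1/128.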

Have Fun!

Renormalisation theory is understood today in very mathematical terms: Hopf algebras for diagrams, and rules for associating analytic content to diagrams. This can be studied quite independently of Lagrangian formulations of gauge theories and conventional renormalisation group techniques. Although any reformulation of the successful Standard Model must take a stand on the emergence of this abstract renormalisation procedure, there is no reason a physical electron cannot be discussed in some other context. In fact, modern twistor techniques, which make solid contact with HEP experiment, do remove themselves from renormalisation theory by working with non-local operators. Physicists have no excuse for their continuing lack of familiarity with these results.

Dear Ray,

I have some simple explanation indeed. See http://arxiv.org/abs/0811.4416

Also you can consult my blog and my research group.

http://groups.google.com/group/qed-reformulation

http://vladimirkalitvianski.wordpress.com/

Have fun!

Dear Kea,

This is the key problem – “Renormalisation theory is understood today in very mathematical terms: Hopf algebras for diagrams, and rules for associating analytic content to diagrams.”

Each such diagram should be multiplied by zero because it describes an elastic channel of charge scattering whose probability is zero. In the end, people obtain the correct, inclusive, non-zero cross sections with the right behavior, but only in the end, not before!

In other words, however ingenious the mathematics found to describe a fallacious approach, it does not advance physical understanding of the final results. And the final (inclusive) results are physically very different from the first Born approximation results. The right results are obtained by repairing wrong results on the go, not by advancing a more physical approach from the very beginning.

V, you should not criticise the approaches I favour without knowing anything about them. They are highly non standard.

Dear Kea, any diagram with finite number of initial and final particles is missing a factor equal to zero. Is this factor involved in your favorite approach?

Dear Vladimir,

Are you saying that a “proper” Feynman vertex should have an infinite number of interactions?

The Feynman vertex represents a matrix (such as the Gell-Mann matrices for color theory), and this would require an infinite-by-infinite matrix. As such, any TOE based on this model would likewise be infinite and impossible to comprehend…

I’m not “trashing” that particular idea, because I think that there are many (perhaps infinite at some scale?) “dimensions” that correspond to higher-energy phenomena. Perhaps an “infinite” can be modeled with a “trans-finite”.

Or you could use perturbation theory with an infinite number of Feynman 3-legged vertices. [The standard approach – you could still have an infinite number of initial and final states, but now you have an infinite number of successive 3-legged vertices.] These vertices would be multiplied by different powers of the couplings: alpha^2, alpha^4, alpha^6, etc. At some point it would be appropriate to ignore contributions of order alpha^(2N), where N is sufficiently large [for all but QCD interactions with alpha > 1]. I guess that these are the “zeros” that you mention.
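As a toy illustration of that truncation (schematic powers of the coupling only, not actual QED amplitudes):

```python
def partial_sums(alpha, n_terms=8):
    """Partial sums of a schematic perturbative series alpha^2 + alpha^4 + ..."""
    total, sums = 0.0, []
    for n in range(1, n_terms + 1):
        total += alpha ** (2 * n)  # contribution of order alpha^(2n)
        sums.append(total)
    return sums

qed_like = partial_sums(1 / 137.0)  # weak coupling: higher orders are negligible
strong = partial_sums(1.5)          # alpha > 1: truncation is useless
```

With alpha ~ 1/137 the sum is already stable after the first term; with alpha > 1 every additional order dominates the one before it.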

I would like to eliminate the infinities and inverse infinities with Scales. Our top scale-limit is the speed of light. Our bottom scale-limit is the Planck scale. It seems that a physical infinity cannot exist in an apparently finite Universe. Dirac’s Large Number of 10^40 and geometrical powers thereof (for instance, the Cosmological Constant is 10^(-120) ~ (10^40)^(-3) in 3 spatial dimensions?) may be as close to infinity as we can physically realize.

Have Fun!

V, you appear to be under the impression that I am discussing ordinary Feynman diagrams. I am not. If you want to know something about what I am saying, I have written plenty about it and much of this research is available online.

Renormalization is a way of realizing effective field theories. The Landau pole is assigned to an energy scale that is outside the bounds of the effective theory. Further, this upper bound can be rescaled in a way that describes running parameters. Hence we have the renormalization group. It is clear that at around 10^{-17} cm in length scale standard QFTs go bad, where some additional physics such as the Higgs field is required. So the “problems” of QED renormalization are rolled into a bigger picture.

I suppose that if you are working with physics in the 100 GeV energy range it is not that problematic to shove the infinities to a scale far beyond your domain of interest. Of course there are now some subtle issues with strings and large extra dimensions, which do make such connections.

I agree that we do not know the high-energy excitations and the corresponding physics, but that is not the problem for QED and other QFTs. The first problem is in admitting a self-action potential energy (infinite, or finite if cut off), which we then get rid of by discarding the corrections to the masses. I.e., this part of the interaction is at least unnecessary and only complicates calculations. The second problem is in treating the whole coupling perturbatively whereas the charge-field coupling is too “strong”; hence the IR divergences. In fact, the lack of soft radiation in the first Born approximation is a sign of too bad a start in the perturbation theory – the most probable events are missing. No wonder IR divergences appear in the next step.

All that means we can try to reformulate the theory so that the soft radiation is automatically present in the lowest order (due to the permanent coupling) and no UV corrections appear (due to excluding the self-action from the interaction Hamiltonian from the very beginning). I have a toy example of such a construction (“electronium”).

Dear Vladimir,

I skimmed your 30-page paper – obviously it deserves a more serious read-through. The application vaguely reminds me of Plasma Physics, whereby a proton “drags” an electron cloud, or of Solid State Physics, whereby an electron “drags” a hole. At the Particle Physics level, we could probably imagine a high-energy particle (particularly something at the TeV energy scale) “dragging” the Dirac Sea/Vacuum.

I agree that at some “scale” it may be more appropriate to treat “fundamental particles” as “composite particles”. I did something similar in my latest paper:

http://www.prespacetime.com/index.php/pst/article/viewFile/124/124

Does “electronium” behave anything like a Magnetic Monopole?

Have Fun!

No, electronium is still a charged but compound system, with center-of-inertia and “internal” (relative motion) degrees of freedom. The charge and the quantized electromagnetic field are permanently coupled in it, so this part of the interaction is taken into account exactly. This is a better initial approximation for QED. It is more reminiscent of an atom or a negative ion with its charge clouds than of a magnetic monopole.

I think a quasi-particle approach is more physical and fruitful than the “elementary” particle approach. In the latter we have difficulties with the coupling, while in the former the decoupled equations describe different features of a coupled system in terms of separated variables.

Your program seems to be a way of keeping QED at tree level. Of course this requires some clever machinations to absorb the rest of the radiative correction terms into the tree level.

Renormalization has a curious history, for it assumes we can always shove unwanted infinities outside of the physics of interest. Quantum gravity of course has been a black cloud that loomed over the enterprise, but string theory and vertex topology (Veneziano amplitudes etc) and the rest has softened these concerns.

Charge renormalization still has some problems. The amplitudes result in integrals of the form

∫d^4k [(k – p)(k – q)]^{-2}

which are logarithmic and give terms such as ln(Λ/m), where Λ is the cutoff that leads to the logarithmic divergence as Λ → ∞. So we keep Λ at some finite cutoff value. The fact that these divergences are logarithmic is nice, for much the same reason that “slop” in statistical mechanics (estimating macrostate volumes etc.) does not contribute a huge error. Charge renormalization is one place where the estimate enters in, and the observed charge q_{ob} is related to the bare charge by

q_{ob} = q_0/(1 + βq_0 ln(Λ/m)),

which looks rather trivial, for as the cutoff goes to infinity the observed charge becomes zero. However, this equation may be inverted to solve for q_0 in terms of q_{ob}, and you get

q_0 = q_{ob}/(1 – βq_{ob}ln(Λ/m)),

which diverges for βq_{ob} ln(Λ/m) = 1, i.e. at the cutoff Λ = m e^{1/(βq_{ob})}. So additional behavior is required, where β = β(q), and one works with the Gell-Mann–Low equation

dq/d ln(k) = β(q) = Σ_n β_n q^n

and this enters into the Callan–Symanzik equation, with q replaced by any coupling parameter of a YM gauge theory,

[m∂/∂m + β(g)∂/∂g + …]G(x_1,x_2,…,x_n, m, g) = 0

which scales the n-point correlation function with respect to energy. The above charge renormalization feeds into that part. This then leads to the renormalization group and …. well some pretty interesting stuff.

The Callan-Symanzik equation is similar to the Navier-Stokes equation for fluid flow. In some ways that is what it tells us: how n-point correlation functions “flow” with a rescaling of energy.
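A numerical sketch of the inversion above, treating β as a constant for illustration (the values of β, q_ob, and m below are arbitrary):

```python
import math

BETA = 0.05   # hypothetical constant beta coefficient
Q_OB = 0.3    # "observed" charge at the reference scale m
M = 1.0       # reference mass scale

def bare_charge(cutoff):
    """q_0 = q_ob / (1 - beta * q_ob * ln(cutoff/m)): grows with the cutoff."""
    return Q_OB / (1 - BETA * Q_OB * math.log(cutoff / M))

# The denominator vanishes at the Landau pole:
landau_pole = M * math.exp(1 / (BETA * Q_OB))

print(bare_charge(10.0))                 # modestly larger than Q_OB
print(bare_charge(0.999 * landau_pole))  # blows up approaching the pole
```

The bare charge stays close to the observed one for modest cutoffs and diverges as the cutoff approaches m·e^{1/(βq_ob)}, which is exactly the pathology the constant-β treatment exposes.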

It is not really a tree level but interaction of compound systems with exciting their internal degrees of freedom.

[Mangled ASCII sketch: a horizontal scattering line crossed by the lines representing the bound state of the compound system.]

“we can always shove unwanted infinities outside of the physics of interest. ”

But this is achieved with subtractions – a modification of the perturbative result. A computer cannot do it while iterating some equation numerically, so such a subtraction is a human intervention in the calculation.

As I said previously, one can just discard unnecessary “corrections” to the fundamental physical constants, and no bare constants with their cut-off dependence arise.

Look at this: I have a charge e. In the next order I manage to separate out a part of the perturbative correction as a “correction” de to e. But e was OK in the first Born approximation – it gave good numerical results. Why should I change it? It is not a calculable constant but a fundamental one. Besides, any correction to e worsens the agreement with experiment. So I discard it and, oh miracle, my results become reasonable and even more precise!

But to camouflage this discarding (it is not sensible math), they invent a bare charge e_b and “define” the physical charge as a sum e = e_b + de. Obviously this is the same as discarding de and keeping e physical. But now we have an additional degree of freedom in choosing e_b(Λ) and de(Λ) so that e = e_b(Λ) + de(Λ). This liberty leads to the “renormalization group”. To me it is clear that the RG cannot help us out of this divergent situation. Renormalization remains “sweeping garbage under the rug” (i.e., banal discarding).

Try to understand that there were no bare particles in Dirac’s mind when he wrote QED. Dirac said, “I advanced a wrong Hamiltonian; let us look for a better one,” or words to that effect. That’s what I am trying to do.

A “tree level” diagram for a projectile-atom collision is given here:

https://docs.google.com/leaf?id=0B4Db4rFq72mLOGFlNTRhMDctYjgyMS00ZDkwLThjYzAtNjllNzQzYzkwYTE5&hl=en

I did not manage to draw it carefully: everything has shifted and one line disappeared. Anyway, two parallel dashed and solid lines are a bound state of a compound system (an atom, for example), both for the target and for the projectile. When atoms scatter to large angles, it is their nuclei that repel and change trajectories. Crossed dashed and solid lines correspond to superpositions of excited final atomic states.

Similarly for the scattering of electroniums – all the soft photon lines usually drawn in great numbers when explaining the IR catastrophe are replaced by one horizontal line crossed with the charge line. Drawn parallel, they describe a certain initial state of the electronium; crossed, they describe a superposition of excited states (photons) in the final state.

If you go back to basic electromagnetism you learned about the electric displacement vector D = ε_0E + P = εE. The change in the field value in a medium is due to the polarization of charge pairs (such as those composing molecules), which screens a charge immersed in the medium. Charge renormalization is similar, for the virtual pairs which compose the vacuum (e⁻e⁺ pairs) polarize around the bare charge. In QCD something similar occurs, but it is a sort of anti-screening.
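The dielectric analogy can be made concrete with a toy calculation (the permittivity and distance below are illustrative values, not tied to any particular experiment):

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_R = 80.0       # relative permittivity of the medium (roughly water)
Q = 1.602e-19      # charge immersed in the medium, C
R = 1e-9           # observation distance, m

e_vacuum = Q / (4 * math.pi * EPS0 * R**2)          # unscreened Coulomb field
e_medium = Q / (4 * math.pi * EPS0 * EPS_R * R**2)  # screened by polarization

# A distant observer infers a smaller, "renormalized" charge:
q_effective = Q / EPS_R
```

The observer in the medium sees the same 1/r² law but a reduced effective charge, which is the classical counterpart of vacuum-polarization screening of the bare charge.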

Yes, when I was a student our professor used this very analogy – screening of a charge in a dielectric medium. So there is a bare-particle Lagrangian and a counter-term particle Lagrangian, neither kind of particle ever observed but both “necessary” to describe experiments. What a physics!

Let me remind you that an electron in an atom not only screens the nucleus charge at long distances but also smears it at short ones. So no Coulomb singularity appears to an external observer.

A “bare” charge would not only be “screened” but also “smeared” due to its interaction with the “bare particles”. Those who talk about screening forget about the motion effect, apparently because it is not “foreseen” in the counter-term Lagrangian.

Tell me, why should a bare charge interacting with vacuum pairs stay still?

Dear Vladimir,

Certainly, these screening effects (and even anti-screening effects) occur in many fields of Physics. I also have no problem with composite particles [IMHO, these may imply higher-order String and/or lattice effects].

My first question is whether or not your idea is more efficient or more economical than the standard program involving renormalization and perturbation theory.

My second question is whether or not your idea implies any new and/or testable Physics.

You asked, “Tell me, why should a bare charge interacting with vacuum pairs stay still?”

If we represent the vacuum as a collection of localized electron/positron pairs [of course the Pauli Exclusion Principle prevents this idealistic vacuum model], then the tree-level interaction between the bare charge and the electron is “perfectly” [in theory – in reality there may be higher-moment effects] counterbalanced by the tree-level interaction between the bare charge and the positron.

Have Fun!

Dear Ray, do you know that any nucleus in a crystal lattice is smeared quantum mechanically, with a smear size of the order of the lattice step? In QM, interaction leads to smearing.

I know about the screening effect in plasma, for example (Debye screening). It proceeds from a still probe charge and gives a good estimate for the Debye radius, but in reality the screening is dynamical (which is the more physical picture).

1) I really hope so, since no renormalization is involved and the soft radiation is obtained automatically in the first Born approximation rather than after summing up all orders of soft diagrams.

2) My idea was not advanced to explain some unexplained experimental data, so it is just a reformulation of QED in terms of the usual QED ingredients, coupled differently. It does not have a Landau pole, but I cannot imagine an experiment where its results would differ from the standard QED final physical results.

Currently my “theory” is at a rather conceptual, undeveloped stage. For example, I take into account exactly only the quantized electromagnetic field, not the virtual pairs. However, this is already sufficient to obtain quantum mechanical smearing of the electron charge. In fact, the high-frequency modes contribute less to the smearing effect than the low-frequency (soft) modes. At high energies we can add electron-positron pairs to the spectrum of excitations, muon pairs, etc. Their presence will influence high-energy physics (the final particle zoo) but not the low-energy spectrum, which is purely electromagnetic (photons).

Dear Vladimir,

Yes – I am familiar with all of your examples. I started grad school in experimental Plasma Physics (at U. Texas), switched over to experimental Solid State Physics (also at U. Texas), and finally completed my Doctorate in High Energy Physics Phenomenology (at Florida State U.). My friends and family probably thought I was going to be a professional student…

It is OK to agree with current theory – any new theory must accomplish at least that much. Perhaps your idea has aesthetic advantages (by eliminating the Landau pole) over standard renormalization. I expect “composite particle” behavior to help signal the presence of Strings, sort of like phonons indicating lattice effects.

Have Fun!

There is nothing bad in a wide scientific background like yours. On the contrary, I think it is a plus.

I agree with the QED final results but not with the way they are obtained. What I am looking for is a short-cut to them. I think it is possible to achieve if we recognize that a charge and its quantized electromagnetic field are intrinsically coupled (never decoupled). We can advance an initial approximation from this understanding (electronium-like), and the perturbative series will be different (without divergences). It has not only aesthetic advantages but also physical, mathematical, and conceptual ones.