Yes, I know that physicists don’t use the term “God particle”, but it has entered popular culture, and when the terms “Higgs Boson” and “God Particle” were trending on Twitter and Google earlier this week it was the latter that trended higher. Contrary to what some scientists imagine of the interested public, very few think that there is some religious significance attached to the particle because of this name; it’s just a catchy moniker and we need not be afraid to use it.

Following the CERN announcement earlier this week, physicists have been giving some very different assessments of the chances that the ATLAS and CMS detectors have seen the Higgs boson. The CERN DG says merely that they have seen some “interesting fluctuations”, while Tommaso Dorigo (an expert on the statistical aspects of the CMS analysis) calls it “firm evidence”. Theorist Lubos Motl is even more positive. He says that it is a “sure thing”, but another theorist, Matt Strassler, has criticised such positive reports. He regards the situation as 50/50 and backed this up with a poll of experimenters that came up 9 to 1 in favour of uncertainty. This contrasts with a similar poll by Bill Murray, who is lead Higgs analyst for the ATLAS collaboration. In an interview he reported a 10 to 0 vote that the Higgs had indeed been found.

## What is the question?

So can we make a more objective and quantitative assessment of the current level of uncertainty over the result? You might want to know the probability that the Higgs Boson has been seen, for example. Unfortunately this quantity depends on the prior probability that the Higgs Boson exists. Theoretical physicists have a very wide range of opinions on this depending on which theories they favour. Experimenters are supposed to make their assessments independently of such prejudices. So how can we measure the situation objectively?

Luckily there is a different question that is model independent. We can ask for the probability that the experiments would produce results as strong or stronger than those reported if there were no Higgs Boson. This conditional probability removes the theory dependence in the question so the answer should be a number that everyone could in principle agree on. The smaller this probability is, the better the certainty that the Higgs Boson has been found.
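As a concrete illustration of this kind of conditional probability, here is a minimal Python sketch for a simple counting experiment, with made-up numbers (35 events and a background of 20 are purely hypothetical, not values from either experiment): the chance of seeing at least the observed number of events if only the background process were present.

```python
from math import exp, factorial

def poisson_pvalue(observed, background):
    """One-sided p-value: probability of seeing `observed` or more events
    if only the background (with the given expected count) is present."""
    # P(N >= observed) = 1 - P(N <= observed - 1) for N ~ Poisson(background)
    cdf = sum(background ** k * exp(-background) / factorial(k)
              for k in range(observed))
    return 1.0 - cdf

# Hypothetical illustration only: 35 events over an expected background of 20.
print(poisson_pvalue(35, 20))  # roughly 0.0015, about a 3 sigma excess
```

The smaller this number, the harder it is for a Higgsless world to have produced the data by chance, which is exactly the sense in which the probability is model independent.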

Before we can calculate the result we must define precisely what we mean by the “strength of the result”. This has to be a single number so it should come from the combined results of both experiments. I will define it to be the maximum value of the CLs likelihood ratio anywhere on the plot. This takes into account both the exclusion side and the signal side of the statistics and is in standard use for Higgs searches. Don’t worry if you are not familiar with this quantity; it will become clearer in a minute.

### Can we trust the combination?

The Higgs combination group have tried to spread propaganda that my unofficial combinations cannot be trusted because only people familiar with the inner details of the experimental analysis are capable of doing it correctly. This is not true. I repeatedly acknowledge that my method is an approximation and that only the official combination can be used to claim a discovery, but it is a good approximation and is perfectly acceptable for making a rough assessment of the combined certainty.

They warn that people should not add the event histograms from separate experiments but that is not how my combination is done. They say that only the experts can understand the systematic uncertainties of the detectors well enough to do the combination, but these uncertainties are all built into the individual exclusion plots that they have shown and are therefore taken into account when I combine them. They warned in the past that there are correlations between the background calculations because both experiments use the same algorithms. These correlations are there and must be accounted for to get the most accurate combination possible, but they have been shown to be small. You can ignore these correlations and still get a very good approximation.

In fact the largest source of error comes from the fact that the approximate combination method assumes a flat normal probability distribution at each mass point, when in reality a more complex function based on Poisson distributions would be correct. Happily the central limit theorem says that any probability distribution with a finite variance becomes approximately normal given high enough statistics, so the approximation gets better as more data is added.
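To show what the normal approximation buys you, here is a sketch of the standard inverse-variance combination of two Gaussian measurements of the signal strength at a single mass point. The numbers are illustrative only, not real ATLAS or CMS values:

```python
from math import sqrt

def combine_gaussian(mu1, sigma1, mu2, sigma2):
    """Combine two independent Gaussian measurements of the signal
    strength mu at one mass point (inverse-variance weighting)."""
    w1, w2 = 1.0 / sigma1 ** 2, 1.0 / sigma2 ** 2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    sigma = sqrt(1.0 / (w1 + w2))
    return mu, sigma

# Hypothetical inputs standing in for the two experiments at one mass point:
mu, sigma = combine_gaussian(1.2, 0.5, 0.9, 0.6)
print(mu, sigma)  # combined estimate, with a smaller uncertainty than either input
```

This is only the Gaussian shortcut; the official combination builds the full Poisson likelihoods, which is why it remains the only basis for a discovery claim.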

When the combination group published their first result in November I was able to compare it with my unofficial combination done in September. This confirmed that the approximation was good. This was no surprise to me because it had already been demonstrated with the Tevatron combinations and some earlier unpublished LHC combinations. I acknowledge that my combinations for some of the individual channels were not so good because the number of events has been low, especially for the ZZ channels. This will have improved for the latest results because there is now much more data but still these individual channel combinations should be considered less certain than the overall combination.

The assessment I am doing today depends mainly on the overall combination, so this is not a big issue. However, it is worth showing one further comparison between my combination and the official one for a signal channel. The plot below shows the official combination for the diphoton channel published in November, when ATLAS used just 1.1/fb and CMS used just 1.7/fb. The red line is the unofficial result from viXra. It will be interesting to see how much this has improved for 5/fb.

### What must be evaluated?

It is possible to do a systematic evaluation of the probability in question using the combined plot. This takes into account the statistical uncertainties as well as the theoretical uncertainties in the background due to imprecise measurements of the standard model parameters (e.g. the W mass) and the approximation methods used in the theory. It also includes the uncertainty in the energy resolution and other similar uncertainties in detector performance. All these things have been considered by the experts from the experimental collaborations and built into the plots, so we don’t need to know the details to do the calculation. (If anyone tries to claim otherwise, they are wrong.)

However, there is also the possibility that the experimenters have made some more fundamental kind of error. There may be a subtle fault in the detectors that has not shown up in all the calibration tests which causes an excess on the plot where there should not be one. This should not happen because there are hundreds of people checking for such errors and they are all very competent. Nevertheless bad luck can strike and throw everything out. This has been the case before and it is probably the case with the OPERA result indicating that neutrinos are faster than light.

A second similar possibility is that the theorists have underestimated the accuracy of some of their calculations so that the background calculation is a little off in one mass range. The analysis involves subtracting a very small signal from a large background, especially in the diphoton channel, so the scope for magnifying any inaccuracy has to be considered. A miscalculation of the signal size is also possible but less likely to lead to a bad result.

As I said, the published plots include all the known experimental and theoretical uncertainties, but these other unknown errors in experiment and theory cannot be accounted for exactly. They can only be estimated based on past experience. Some “expert theorists” say that we more “naive theorists” don’t appreciate these facts. Do we really sound so stupid?

### What is the chance of an experimental fault?

How often do experimental faults contribute to a false positive like the excess reported this week? We can only look at past performance, but I am not aware of any careful surveys, so a guesstimate is required. Someone else may be able to do better. The answer might be one in a hundred, but let’s be more conservative and say one in ten. If you think it is more common please feel free to re-evaluate for yourselves.

However, with the CERN Higgs result we have good evidence that such a fault is not the cause of the excess. That is because there are two independent experiments reporting a very similar result. ATLAS and CMS may seem very similar from the answers they produce, but the detector technologies they use are quite different. The chance of a common fault producing the excess in both detectors must therefore be very small. I am going to assume that this is negligible. If anyone thinks otherwise please explain why.

This means that if the excess is due to such a fault it must be a coincidence that it has a similar effect for both experiments. If there is a one in ten chance of a fault for one experiment, the chance for two independent experiments is one in 100, and even then the two faults would most likely appear at different places. Let’s have a look at the two signal plots together.

The positions of the maximum excesses differ by about 2 GeV, but the mass resolution is around 2%, so this is not an inconsistency. If these excesses were produced by detector faults then the chance of them lining up so closely would be small. How small? That depends on some unknowns. We can’t just say the fault could appear anywhere in the mass range, so let’s be conservative and just call it a one in three chance.

Overall then we arrive at a one in 300 chance for the observed excess to be explained by a coincidental combination of detector faults. I think this is conservative. Someone else might estimate it to be more probable.
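The arithmetic behind that one in 300 is simple enough to set out explicitly. The inputs are of course my guesstimates from above, not measured quantities:

```python
p_fault_single = 1 / 10               # guesstimated chance of an unnoticed fault in one experiment
p_both_faulty = p_fault_single ** 2   # independent faults in both ATLAS and CMS: 1/100
p_same_place = 1 / 3                  # conservative chance the two fake excesses line up in mass
p_detector_fault = p_both_faulty * p_same_place
print(1 / p_detector_fault)           # ~300
```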

### What is the chance of a theoretical fault?

The other outside possibility is that the result has been afflicted by a misunderstood background so that the observed excess is really just a subtle effect of the Higgsless standard model that the theorists failed to recognize or estimate correctly. Again this is unlikely but it happens and must be considered. How often does it happen? Once in a hundred perhaps? I will be more cautious and assume one in ten. You may think that is an underestimate in which case you can make your own evaluation.

But again we have more than one place to look. The separate experiments could well be affected by the same theoretical error but the different decay channels are much more independent. There may be some small chance that a single theoretical error could affect all the channels but this would have a small probability, say one in a hundred. If you think it is bigger please justify how that could happen.

So now let’s look at the combined signal plots for the three main channels: diphoton, ZZ->4l and WW->lvlv. For the WW plot I can’t use the latest CMS results because the plots shown are frankly rubbish quality. I hope they will improve them before publication. However, the WW channel has good sensitivity even with less data, so I will show the combination from the summer.

All three channels show an excess in the same low mass region, so if this is due to independent faults it would require a coincidence. However, the excess is not as strong in ZZ and WW as in the diphoton channel. I am going to put the probability at one in a hundred overall and add to this the probability of one in a hundred for a common fault that affects all three. So the overall chance for a fault from theory is one in 50. Some people will say that this is a high estimate and some will say that it is low. Others will say that it is nonsense to attempt such an estimate. Never mind, I am just giving it my best honest shot. Let others do the same.

### What is the chance of a statistical fluctuation?

The last thing to consider is the probability of getting a signal as strong or stronger than that observed according to the statistical analysis. Actually this also takes into account some theoretical uncertainty and measurement error, but mostly it is statistical. This is a probability that can be worked out more scientifically, but it does include the Look Elsewhere Effect, which is partly subjective.

First consider what would be the chance of seeing a signal as strong as the one reported at the fixed mass point of the maximum excess if in fact there was no Higgs Boson. The plot shows a three sigma excess at 124-125 GeV. This would have been much stronger if the peaks from the two experiments had coincided more closely, possibly about 4 sigma. This discrepancy may be due to some detector calibration that could be corrected but it is correct that we do not take that possibility into account. The 3 sigma excess is what we should work with.

As everyone knows, the probability of a three sigma fluctuation is one in 370, but that allows for fluctuations up or down. So the probability for an excess this big or stronger at this point is one in 740. But we need to know the probability for an excess this strong anywhere on the plot. In other words, we need to multiply by the Look Elsewhere Effect factor. Have a look at the plot over the entire range.

Notice that for the entire range from 130 GeV to 600 GeV the line remains within 2 sigma of the zero line. Big deviations are indeed rare but how rare?

Another point to consider is that if there were a three sigma fluctuation at, say, 180 GeV, the Higgs would still be excluded at that point. This would not count as such a strong signal. This is why I specified that the strength should be measured using the CLs statistic, which takes the ratio of the probability for the signal hypothesis over the probability for the null hypothesis. This means that the probability of getting a signal as strong in the regions where the Higgs is excluded is much smaller. In fact we can neglect this altogether. So we need only count the regions from 114 GeV (using LEP) to about 135 GeV and perhaps 500 to 600 GeV. How big is the LEE factor for these regions? This depends on the width of the signal, which we see to be about 5 GeV in the low mass range due to the mass resolution of the detector, and which is much bigger above 500 GeV due to the very large natural width of a high mass Higgs Boson. The LEE factor will therefore be about 6, but let’s call it 10 to be extra cautious.

This gives a final answer for the probability of a fluctuation to be about one in 70.
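The sigma-to-probability conversions, and the effect of the LEE factor, can be checked with a few lines of Python:

```python
from math import erfc, sqrt

def two_sided_p(n_sigma):
    """Chance of a fluctuation at least n_sigma from the mean, in either
    direction, for a normally distributed quantity."""
    return erfc(n_sigma / sqrt(2))

p_fixed = two_sided_p(3) / 2   # excess only, at one fixed mass point: ~1/740
lee_factor = 10                # cautious Look Elsewhere Effect factor from above
p_anywhere = p_fixed * lee_factor
print(1 / p_anywhere)          # ~74, i.e. about one in 70
```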

### The final answer?

Combining the three things I have considered, I get an overall probability for such a strong signal if there is no Higgs to be about 1 in 30. Perhaps I have failed to account for combinations where more than one of these effects could combine. That requires further coincidences, but let’s just call the overall result 1 in 20. In other words, everything considered I take the observed result to be a two sigma effect.
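Putting the three pieces together gives the headline number. Here is a sketch of the arithmetic, using the estimates from the sections above:

```python
from math import erfc, sqrt

# My estimated probabilities for the three Higgsless explanations:
p_detector = 1 / 300   # coincident detector faults in both experiments
p_theory   = 1 / 50    # a misjudged background from theory
p_stat     = 1 / 70    # a statistical fluctuation, after the LEE factor

# The events are rare and assumed independent, so the probabilities add:
p_total = p_detector + p_theory + p_stat
print(1 / p_total)     # ~27, rounded down in the text to 1 in 20

# For comparison, a two sigma (two-sided) effect corresponds to about 1 in 22:
print(1 / erfc(2 / sqrt(2)))
```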

### What about prior probabilities?

There is one more thing you need to take into account when considering how likely a result of any number of sigmas significance is to stand the test of time. That is your prior estimate for the probability of it being true. The OPERA neutrino observation is a good example of an extreme case. A six sigma effect was observed, but the prior probability of neutrinos going faster than light would be considered very small by most theoretical physicists. It follows that the probability of this result going away is quite high despite the statistical significance. An experimental fault is likely to be the biggest contributing factor despite the care of the experimenters.

In fact most 3 sigma excesses for observations beyond the standard model do go away. This is because the prior chance of any one such effect being correct is very small. You can consider this to be part of the Look Elsewhere Effect too. However, the observation of the Higgs Boson is a very different case. Most theoretical physicists would estimate the prior probability for the existence of a Higgs(like) Boson to be very high. The standard model provides a very simple explanation of electroweak symmetry breaking, but there is no simple way to understand a Higgsless universe. This makes the prior probability high, which means that the chance of the 2 sigma result going away is small. There is a bigger chance, however, that it could move to a different mass.

Not everyone agrees with this. Some people do not think that the Higgs Boson can exist. Stephen Hawking is one of them. These people would assign a low value to the prior probability that a signal for the Higgs will be seen, and so they will consider it very likely that the present observation will go away. I doubt that there are enough people of this opinion to account for much doubt among the experimenters.

### How long will it take to settle this?

To claim a discovery the combined results must give a 5 sigma excess without considering the Look Elsewhere Effect. How long this takes depends on a certain amount of luck. If the peaks of the excesses come closer together with more data, then the excess will grow faster than you would otherwise expect. In that case the matter might be settled with just twice as much data and the whole thing will be over by the summer. On the other hand, if they are unlucky it could easily require the full dataset from 2012 to get enough data to finish the job properly. It will then not be until March 2013, when the combination is ready, that they will finally be able to declare a discovery.
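The reason twice the data might suffice comes from the usual rule of thumb that the significance of a genuine signal grows like the square root of the accumulated luminosity. A sketch, assuming (for illustration only) that the combined excess stands at about 3 sigma now:

```python
from math import sqrt

def projected_sigma(current_sigma, luminosity_ratio):
    """Naive projection: the significance of a real signal in a counting
    experiment grows roughly like the square root of the data collected."""
    return current_sigma * sqrt(luminosity_ratio)

# Assumed starting point of about 3 sigma for the combined excess:
print(projected_sigma(3.0, 2))  # twice the data: ~4.2 sigma
print(projected_sigma(3.0, 3))  # three times the data: ~5.2 sigma, discovery level
```

Whether the peaks align is what decides if the real growth beats or falls short of this square-root baseline.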

**Endnote:**

I have been accused by theorist and blogger Matt Strassler of being over-enthusiastic about the case for the Higgs Boson and the strength of the latest results. In fact I have not made any overly strong claims. Examples of things I have said previously include:

*“The result is very convincing if you start from the assumption that there should be a Higgs Boson somewhere in the range. Everywhere is ruled out except 115 GeV to 130 GeV and within that window there is a signal with the right strength at around 125 GeV with 3 sigma significance. They will have to wait for that to reach 5 sigma to claim discovery and next years data should be enough to get there or almost.”*

*“Some caution is merited. The signal is only 3 sigma combined and the possibility of systematic contributions is there. However, look elsewhere effect is very small given that most regions are strongly excluded. Systematic effects look less likely because of consistency across channels. I agree with Dorigo’s more optimistic assessment but until they have 5 sigma it is not a discovery and collapse of the signal is not out of the question.”*

*“This is basically a half-full/half-empty result. You can state it optimistically or pessimistically according to your political requirements. Another twelve months will be needed to settle it, but it is much more probable that it will be settled with a positive outcome.”*

I have not said anything stronger than this and I stand by what I said. These are not bold claims and are no different from what has been said by some of the prominent members of the collaborations. I find it very bizarre that someone is insinuating that my conclusions are overstated and naive. My detailed assessment of the situation here bears out my earlier level of optimism. If anyone wants to criticize any aspect of my calculations I am open to discussion, but if you think I should just bow to the superior authority of people in apparently better positions, please forget it.

“If the peaks of the excesses come closer together with more data, then the excess will grow faster than you would otherwise expect.”

They don’t see different Higgs bosons, an ATLAS boson at 126 GeV and a CMS boson at 124 GeV. The peaks will obviously come closer together, since what they see is the one and same Higgs boson, whatever mass it actually has. This of course assumes they don’t have a ridiculously unlikely combination of excesses spanning many channels times two experiments that just happens to mimic the model that has been the standard for 40+ years.

It is possible that they are different due to a systematic error, which may or may not get corrected. Sources of error this big are indicated in the talks.

Would an error like this be in a single channel only? Diphoton would be the only candidate, I assume. Oh well, I hope they can pinpoint any such problems quickly.

And what could happen if the Higgs is massless?

LHC Signals Between 121-130 Gev Interpreted with Quantum-FFF Theory

Authors: Leo Vuyk

Abstract: In Quantum Function Follows Form theory (Q-FFF), the Higgs particle is interpreted as a massless transformer particle able to create the universe by transforming its shape after real mechanical collision and able to merge with other shaped particles into composite geometrical knots called Quarks and some Leptons (Muons and Tauons). Singular particles are the two leptons, the electron and positron, plus different shaped photons, gluons and neutrinos, all originated out of one single transformed Higgs. As a result, Q-FFF theory leads to a NON Standard Model of elementary particles, also described in this paper. As a consequence the recent Large Hadron Collider (LHC) results, showing special values between 121-130 GeV for the predicted signal of the massive Standard Model (SM) Higgs, should be interpreted as the result of one or more different composite particle decay- and collision processes and not as the result of Higgs decay. In this paper I present some possible transformations after the collision of (Non-SM) Proton particles, interpreted as Quark-Gluon cloud collisions, into the observed production and decay results such as gg into Di-photons, ZZ into 4 Lepton or WW into LvLv.

http://bigbang-entanglement.blogspot.com/

These are all good questions asking the big what-if that often leads to discovery. One could add to the list of crackpots like ourselves who could be right: a certain “patent clerk” had a prior good grounding in mathematical physics and could base his understanding on the previous work of Maxwell, Hertz, Lorentz and Poincaré as well as the experiments of Michelson and Morley, and “just” had to look at the ideas in a new way. True, it took him many years to get it right, and in the end he could not face certain implications of quantum theory that demanded in itself an extension to SR. But down through history it has always been the misfits that allowed us to take a real leap forward and not the establishment, which in his case was more ruled by Newtonian idealism than anything else.

Today we face the same thing, only now the Establishment itself has become the entrenched dogma from which perhaps one “patent clerk” of a new generation will look upon the past and have the insight to see a fruitful step forward.

In fact, what is even more telling is that in the posts here one can view a wide dispersion of views within that establishment and see there is a lot of disorder below the surface, giving rise to a diverse set of theories and ideas, all based upon some seldom addressed issues like: what if the Higgs is massless, what if neutrinos do travel superluminally, etc. The right questions are being asked, but the response for the most part is coming from what some call the crackpots themselves. The ones staying silent are those who, as someone here termed it, are trying to tell us what to believe.

If I were personally going to propose something outside the mainline I might be tempted to take a hint from here, look at the disorder generating patterns and try to find the real strange attractor out there in nature that conspires to be the God-particle and as such provides a pattern below the surface, much as the debate here does the same. But isn’t that itself very much what all the Big Ifs are designed to do?

So in essence, if anyone out there thinks groups like this or the organization itself serve no real purpose: the real purpose they serve is to ask the big questions, no matter how strange they may seem, and to keep the establishment honest.

By the way, I like your general idea.

Thanks Paul Holland.

I am indeed a humble crackpot who is only trying to translate Function into Form, no more.

I am an architect.

Do not feel alone; I myself am somewhat of a crackpot. The best description I ever got was from New Scientist, which once labeled me as someone who challenges the establishment. Actually, one reason I have never favored the Higgs was along similar lines and a very similar idea. However, if they do actually find a Higgs it would not totally spoil the picture, and I could take some of the same and adapt it to that situation. But to be honest I still rather favor it being real as far as an origin goes, but more a composite as far as a measurable reality goes.

Thank you for your kind words,

Leo.

Phil,

Any consideration given to a possible artifact induced by two-photon emission by the detector substrate itself? This may be far-fetched, but a slight dependence of permittivity on incident beam intensity may generate two outgoing photons instead of one. I assume that they accounted for this possibility, but who knows?…

Radiation of this sort could only produce much lower energy photons I would think. Also they pinpoint the origin of the event with some accuracy and would not be fooled by an event outside the beam pipe.

Hope you’re right… I think along the same lines: this scenario is highly unlikely in principle, but some systematic artifacts of this kind are still possible when working with high energy beams… recall the story of the 17 keV neutrino.

Hi Phil.,

Excellent Analysis!

With the hints of Higgs Boson reported on Dec 13th, PSTJ has decided to publish a 13th Issue of Volume 2 around or on Christmas Day to celebrate this [pre-]gift of this Holiday Season.

Well informed scholars are welcome to submit their assessments or related writings for consideration in this or coming issues of PSTJ.

PSTJ Editor

Tommaso has just published eloquent evidence that “six-sigma happens”: http://www.science20.com/quantum_diaries_survivor/lhcb_confirms_cdf_measurement_omega_b_mass-85591

It may concern the superluminal neutrino as well as the Higgs “combinations”. As to the latter, we need a lot of statistics and other particular data as well as careful physical analysis to judge definitively. So far it looks ridiculous: they searched for an excess “predicted” by the SM but found a “mass” that makes the SM invalid. And do not tell me “all our theories must be invalid”, I do not buy it.

P.S. “Dog particle” is better than “God particle”, in my opinion. 😉

How many sigma do you have for the exclusion of an SM Higgs? 😉

From what I gather, most seem to think the SM is not all there is, but the SM is pretty accurate. Nothing ridiculous about it breaking down at high energies.

With the current tendency to produce and defend catastrophes, it may indeed be OK to break down at “high energies”, as they say.

Disfavoring ‘Dog’ over the alternative: the Higgs does celebrate mass…

Agree, as an alternative I propose a “Gibbs boson” if it is found to be a boson. 😉

Well, 1-in-30 means about 97% which is still, in the IPCC language, “extremely likely” (95%-100%) that the Higgs is there.

While this could be an OK result, I still feel uneasy about your punishments for possible errors at every level. Such punishments are inevitably subjective. What are the odds that this guy made a mistake? This lady made a mistake? These three grad students hacked the file and replaced it by fake data?

I don’t question that any of these things could happen but it is simply not compatible with rigorous research to assume that a new error was introduced with a 10% probability. Errors may happen, but the consistent thing is for the bloggers calculating combinations to err as well. If those excesses are flukes or some errors, well, I will be proved wrong. It’s not deadly.

To summarize, I find it pathological to talk about a two-sigma signal if there are a couple of channels, each of which gives 3+ sigma evidence. One may be afraid that there are human errors anywhere but one shouldn’t include this fear into the quantitative evaluations of the sigmas because if this became the norm, all important numbers in physics would be deduced by purely subjective methods.

The better way is to assume that other physicists and detectors have done a superb job and reliably build on them, and when a mistake happens, just throw away the whole line of research that got invalidated because of that. But “partly discarding” each step in science is just not a way to progress.

I agree that I have been conservative for the possibility of unexpected systematic errors, but since these probabilities cannot be accurately estimated it seems right to err on the cautious side.

Dear Phil, couldn’t you say the same thing about any other question in the world? Probabilities of a mistake of an unknown origin may never be accurately estimated. Does it mean it is right to err because of that?

Much more generally, I don’t understand why you think it’s right to err or even “err on a side”. It’s the very point of doing good science that one should try not to err, and if there is a risk of his erring, he should make sure that the risks of erring on both (or all) sides are balanced.

Trying to selectively err on one side, whatever is the adjective that you attribute to your preferred side, just means to fail to be an impartial, objective judge/scientist. Sorry. There’s no cautious side. If you attack Gordon Kane or anyone else for using the observation of a 125 GeV Higgs, then you’re hurting progress in physics, the scientific community’s and public’s knowledge of the facts in physics, and this is at least as un-cautious as making premature conclusions about the Higgs.

So please don’t ever try again to worship errors and systematic distortions by adding unjustifiable adjectives such as “cautious” to them. Thank you.

It depends on what you are trying to achieve. I wanted to show that even if you are conservative with your estimates you will still come to the conclusion that this is a strong result.

I said in the text that if you want to estimate the probabilities differently you are welcome to put your numbers in, or you can take different things into account that might help reduce the probability for a Higgsless outcome further. I would be very interested to see what answers other people including you would get for the same question.

Supporting (partly) Lubos’s comment: if Eratosthenes had opted to “err” on some side, his measurement of the Earth’s circumference would have been worse.

By keeping to the data, he was “fortunate” that both errors in his calculation cancelled. Of course his “error bars” accumulated, but they accumulate in any case. Erring towards a side really means that you want to show asymmetrical error bars in the final result: if the error is not symmetrical (nor Gaussian) from the start, that is still a valid option.

Right, Alejandro. What I really want to say is that there is never a preferred side in the search for the truth. Any deviation away from the truth is an error, something one should try to avoid.

This is of course a theme that is omnipresent in the global warming debate. They also “err on the cautious side” by “protecting the Earth” and saying that the climate sensitivity isn’t 1 but 3 or 5 °C. That will make people more concerned, and so on, the message goes.

In reality, they’re not protecting anything. Even at those high values, Earth wouldn’t be threatened, and with promoting this bias, they are threatening the global economy and bringing a possibly constant recession (which may be their goal, after all).

When a public health official wanted to fight against smoking, he could have erred on the cautious side and spread papers saying that 90% of smokers instantly die. But that wouldn’t be valid science and it wouldn’t be helpful either, because those people simply don’t die and lots of people manage to release stress by smoking, and so on, and their lives would be made worse. It’s not true that there only exist arguments in one direction and that one side has the right to be called “cautious”.

In this case, denying a result as strong as the Higgs one could stop serious research which needs to assume the mass (it didn't stop it, of course, and there are already a dozen papers on HEP-PH that try to apply the 125 GeV information to various things), and it could delay progress in HEP physics by a year. It's clear that not all research in HEP-PH will be about a 125 GeV Higgs, but it is damn reasonable if a substantial part of it takes this likely truth into account. That part should be sanely correlated with the degree of certainty about the result, which is rather high. This is the rational approach: the evidence actually matters. The approach which decides a priori which outcome is the "cautious" one is an approach based on prejudices.

Being cautious either by trying to spread a shocking result, or trying to suppress a shocking result to make everyone sure that scientists never ever draw conclusions prematurely, these are two sides of the same coin: bias and distortion of the truth. They’re really equivalent. The equivalence is obtained by nothing more complicated than reversing the question.

Data may indicate that something is overwhelmingly right but it can still be wrong. In that case, a line of research has to be thrown into the trash bin. But research can’t assume a 10% punishment of reduced certainty at each step. This would not only quickly make all “chains of arguments” totally unusable: it could also cripple the desire of researchers to get things right because they would rightfully conclude that no one will take their results quite seriously, so there’s no reason to do it in a perfectionist way.

Well, the sentence “The LEE factor will therefore be about 6 but let’s call it 10 to be extra cautious.” makes it kind of obvious this is not a best guess but an “if we play it safe”-scenario.

Gibbs wants to defend himself against Strassler's criticism by showing that even a very cautious interpretation gives a strong reason to believe this is the real deal.

We could of course pick this post apart to approximate the real confidence 🙂
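For what it's worth, the trials-factor arithmetic behind that "LEE factor will be about 6 but let's call it 10" sentence can be sketched in a few lines (the 3-sigma local significance below is a hypothetical number, purely for illustration):

```python
from math import erfc, sqrt

def p_local(sigma):
    # One-sided tail probability of a Gaussian fluctuation at 'sigma' deviations.
    return 0.5 * erfc(sigma / sqrt(2))

def p_global(p, trials):
    # Crude look-elsewhere correction: the chance that at least one of
    # 'trials' independent mass bins fluctuates this far by accident.
    return 1 - (1 - p) ** trials

p = p_local(3.0)            # a local 3-sigma excess, about 0.00135
print(p_global(p, 6))       # LEE factor of ~6: roughly 0.008
print(p_global(p, 10))      # the "extra cautious" factor of 10: roughly 0.013
```

Even with the cautious factor of 10, the global probability of a pure background fluctuation stays near the percent level, which is the point being made above.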

One problem today is the need for instant gratification. People tend to want that big, hormone-inducing, lightbulb-flashing-on event, while science tends to work in small calculated steps. There is also the big distrust issue that some seem to have when it comes to the establishment.

The first is part of the human equation. The second, while related to that, also has its roots in individual idealism, where some who do not see things the same way as the establishment do not think the establishment would ever admit it is wrong.

If the Higgs had been found dead in the SM there would have been those who cried foul. If it had been found 100% you would still have those crying in their coffee.

We all have differences of opinion on this. I for one have always favored the idea that there was another mechanism that worked like the Higgs, but I have also been open-minded enough to accept that the Higgs could exist. My money was on it being found, if it existed, in a borderline place that supports the SM and also supports extensions to that model.

But the slow way science works is just a fact that people have to live with. We do not leap out on faith like religion does, nor do we have absolute proofs like in math. We make every mistake along the way as we wait for the truth to reveal itself.

The only “problem” here is that some people are trying to tell us what we should think when we are perfectly capable of looking at the probabilities shown in the data ourselves and forming our own opinions, weighted by our own personal theoretical assessments of what the likely outcomes are. We are happy following the ongoing results at whatever pace they develop. It is great that the experimenters are being open and quick to reveal what they have, but we don't need them to tell us that we should play down the conclusions. We don't need to trust the “establishment” because we are capable of our own reasoning. There is no sense in which we are looking for “instant gratification” and we will not “cry foul” if nature tells us that the answer was not the one we most expected. We are here to learn the truth and use it to get a better picture of how the universe works.

I don't agree with the interpretations. But at least one sees that the WW and ZZ channels meanwhile reach an uncertainty of 1 to 2.5 times that of the diphoton channel, yet have no peak in the interval 120 to 130 GeV. Not so good: one sees from these plots that together they have enough weight to exclude the Higgs with almost 95% (OK, perhaps with 94%). Also not so good: one sees that for the diphoton channel the integral (the net observed minus expected) is almost zero, so that effectively no (or very few) extra photons are produced, only those of the background shifted a little in certain wavelengths, which produces that kind of 'spectrum' (if it is significant at all).

Sorry, but significant means something different from that!!!

Another point that caught my eye. This is most likely drivel, but here goes anyway.

First, all the mass energy given are center of mass energies, yes?

Second, when you say in this mass range most of the Higgs decay to two b quarks. I assume you mean via the Z^0 decay?

Third, can they (or are they) measuring the ratio of the number of three-jet events to the number of two-jet events, i.e. the strong coupling constant? If so, what is the SM prediction relating that coupling constant to the existence of the Higgs?

Hence, if this discovery pans out and there is a 125 GeV Higgs, its existence so very close to the Z^0 resonance could be of major significance. I guess what I am asking is: if there are more three-jet than two-jet events, and with fairly large cross-sections, could both particles have a lot in common, enough maybe that they are being confused as two separate particles instead of one very unstable particle?

The Higgs decays directly to bb without going via the Z. It can decay to Z plus gamma, but rarely. They have lots of results for different numbers of jets but I have not looked at those details.

“It can decay to Z plus gamma but rarely.”

Do they expect (or have they seen yet) any events of that mode in this low energy range?

Does the SM allow for the mode at this end of the spectrum?

Is it of any particular significance that the Higgs would be so close to the Z resonance?

How stable is the Z?

Could the Z resonance be a composite of multiple Higgs bosons, each of which (or a subset of which) could account for the different masses that have condensed so far, quarks and fermions? As for the W boson, it is also close enough to the Z that it too could be of similar consequence.

The gamma in the Z, gamma mode could be a telltale sign of supersymmetry breaking, i.e. the gamma is the highest frequency we know of, true?

Sorry about my depth of understanding of the physics. But as an engineer on space and aerospace programs for over 30 years, with a great deal of environmental testing experience, I am very encouraged with the results so far and so soon. Although the sigmas for the specific measurement, the Higgs, may still be low, they nevertheless show that the machine is in fairly good shape. Let us not forget this is a very complex machine, and like humans, machines too have ills and pains, and we need to sort them out before we can have sufficient confidence in the parameters.

I think it is too early to say the machine is fully calibrated, and it will take many more cycles at each power level and time for the machine to "age" (burn in) properly and find its peak condition. Only when all the deltas of the machine parameters have been zeroed out will we have confidence in the collision parameters.

I don't know CERN's definition of the stabilization times for each parameter, but judging by the complexity it should be many more years before the deltas settle and the characterization is complete.

I hope the folks at CERN don't confuse the actual testing with the burn-in.

” But as an engineer on the space and aerospace programs for over 30 years and a great deal of environmental testing experience, I am very encouraged with the results so far and so soon. ”

As such, you should remember the Pioneer 'Anomaly'. Although it is well known that plenty of modelling uncertainties/errors enter into orbital motion and parameter determination, and the surface-to-mass ratio for small objects (like space vehicles) is higher than for big objects (like planets), so that surface-proportional effects (like heat radiation) enter easily, for a long time that 'anomaly' was considered significant and even a call for a revolution in physics (gravitation theory).

The observations by the LHC, until now, don't give any significant indication of a Higgs.

I disagree, wl59. These findings are significant because you have two independent experiments, ATLAS and CMS, with two independent sets of parameters, giving very similar results. Although you cannot say identical, since that would require >5 sigma, they are nevertheless similar and well beyond the region of coincidence at 2-3 sigma.

I say beyond coincidence because the two results have passed the 1-2 sigma range which is normally due to anomalies of the machine. It is clear to me, with my experience in complex setups (NOT as complicated as the ones at CERN), that once you pass 2 sigma it is only a question of fine tuning, proper burn-in, and in some respects waiting for the machine to settle, i.e. for all the deltas to reach steady state. Then you will see a dramatic rise beyond 3 sigma in a very short time compared to the burn-in time. Although reaching 5 sigma is directly proportional to the complexity of the machine, and more specifically to the number of contradictory events that can cause parameter divergence, it is still encouraging that they got beyond 2 sigma fairly quickly at high power.

My recommendation would be to stay at the same power until they reach 3-4 sigma before increasing the power. They may even have to lower the power slightly to reach 3-4 sigma. Once they are in this range it will clearly indicate the machine is ready to rumble. 🙂

Soap Bubbles,

So now that OPERA has 5 sigma and MINOS has more than 2 sigma, can we consider the FTL neutrinos a given? Let's be careful about combining independent experimental results.
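For what it's worth, the textbook way to combine independent significances of the same effect is Stouffer's method (the numbers below are hypothetical, not an official ATLAS/CMS combination), and the caveat is exactly the one above: it is only valid when the experiments really measure the same quantity with independent errors:

```python
from math import sqrt

def combine_stouffer(sigmas):
    # Stouffer's method for n independent measurements of the SAME effect:
    # the combined z-value is the sum of the individual z-values over sqrt(n).
    return sum(sigmas) / sqrt(len(sigmas))

# two independent excesses of similar size at the same mass
print(combine_stouffer([2.5, 2.3]))  # about 3.4 sigma combined
```

Two independent ~2.5-sigma excesses at the same mass legitimately combine to more than 3 sigma, whereas a 5-sigma timing result and a 2-sigma one from setups with entirely different systematics cannot be naively combined this way.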

You mention that the OPERA neutrino observation is likely to go away because "the prior probability of neutrinos going faster than light would be considered very small by most theoretical physicists." But suppose that the neutrinos are not going faster than light but simply taking a short cut through some string-theory extra dimensions. The analogy might be a plane taking a great circle route across the globe rather than being confined to a flat two-dimensional world where a straight line is always the shortest distance between two points. Presumably there is no a priori objection to that, which could shift the estimate of the probability of the OPERA results standing up?

These things have been discussed at great length on the relevant neutrino posts so you should look there for answers. I don’t want to start a repeat thread along those lines here.

I have given it some thought, especially in light of one of Fernando's articles a few years back where he suggested high energy particles could be used to probe and modify the Israel junction condition. When I combine that basic idea with the fact that the measurement here is faster than the supernova one, I suspect there is merit in that idea. By the way, if you can get a copy of his older hyperdrive idea and the articles he had on brane lensing, he gives some math; one could plug in the known energy of the experiment that produced those neutrinos and see what you figure out.

I might also add that on a quantum level such a short cut is not forbidden. In fact, there was an article a while back suggesting particles in a brane world could do just that.

As to “… people should not add the event histograms from separate experiments …”

if the histograms are from the same channel

then the only reason that I can see to not add them is that backgrounds can (and do) vary between ATLAS and CMS

but

I do not see any problem with adding the excess/deficit for each corresponding Higgs mass bin. (Maybe the bins are not exactly the same size, but that should work out on average over a region of interest like 125 to 135.)

If you take that approach, the diphoton channel in the bins at the interesting energy 126 GeV for CMS and for ATLAS seems to me to show that the highest excess for ATLAS corresponds to a deficit for CMS as shown by the red arrows on the image at

so maybe the Moriond 2012 diphoton ATLAS/CMS combos might show the 126 GeV bump going away.

As to ATLAS using a 3-event ZZto4l bin to support the 126 bump,

it may be of interest to note that the two adjacent bins above and below the 3-event bin are both empty

so

if those 3 events were spread among the 3 bins,

the 0-3-0 event structure for the 3 bins at 126

would become

a 1-1-1 event structure for the 3 bins at 126 which is almost exactly what would be expected as background.
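A toy Poisson calculation makes this concrete (the expected background of ~1 event per bin is an assumed round number, purely for illustration):

```python
from math import exp, factorial

def poisson(k, mu):
    # Probability of observing k events when mu events are expected.
    return mu**k * exp(-mu) / factorial(k)

mu = 1.0  # assumed background expectation per bin

# probability of the observed 0-3-0 pattern vs the "typical" 1-1-1 pattern
p_030 = poisson(0, mu) * poisson(3, mu) * poisson(0, mu)
p_111 = poisson(1, mu) ** 3
print(p_030)  # about 0.0083
print(p_111)  # about 0.0498
```

Under pure background the 1-1-1 pattern is about six times more likely than 0-3-0, but with many bin triplets across the search range such a cluster somewhere is not especially rare, which is the look-elsewhere effect again.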

Tony

PS – If you look closely at the CMS diphoton histogram around 125 to 135 you see what looks to me like a flat deficit valley bounded by two high ridges

which

to me just does not look like a single bump (Gaussian or Poisson).

It is interesting that ATLAS does not show that structure.

Further,

the region 125 to 135 is where the CDF Wjj bump lives but is not seen by D0

much like the CMS structure is not seen by ATLAS.

Maybe something interesting is going on around 125 to 135 but maybe it is not SM Higgs.

Tony, I think you are spot on here. Even if the four color 125 GeV prediction is basically correct, it requires BSM physics to make sense of it!

Or the Higgs is a composite and the bump high up is one of those composite parts.

Then there are:

Technicolor, a class of models that attempts to mimic the dynamics of the strong force as a way of breaking electroweak symmetry; extra-dimensional Higgsless models, where the role of the Higgs field is played by the fifth component of the gauge field; Abbott-Farhi models of composite W and Z vector bosons; top quark condensate theory, in which a fundamental scalar Higgs field is replaced by a composite field composed of the top quark and its antiquark; and the braid model of Standard Model particles by Sundance Bilson-Thompson, compatible with loop quantum gravity and similar theories.

The Higgs boson does not exist simply because quarks are composite. You may say now ‘come on, we haven’t seen it’, and the truth is that we have seen several indications of it. The first one was found in 1956 by Hofstadter when he determined the charge distributions of both nucleons. (one can see them around p. 450 (depending on edition) of the Berkeley Physics Course, vol. 1 (Mechanics)). We clearly see that both nucleons have two layers of internal constituents. Unfortunately these results were put aside from 1964 on due to the success of the quark model and of QCD later on. From 1985 on we began to see more indications of compositeness, but we were so enthusiastic with the SM that we didn’t pay much attention to them. A partial list of them: 1) in 1983 the European Muon Collaboration (EMC) at CERN found that the quarks of nucleons are slower when the nucleons are inside nuclei; 2) in 1988 the SLAC E143 Collaboration and the Spin Muon Collaboration found that the three quarks of the proton account for only half of its total spin (other subsequent collaborations (EMC in 1989 and Hermes in 2007) have confirmed this result which is called the proton spin puzzle); 3) in 1995 CDF at Fermilab found hard collisions among quarks indicating that they have constituents (this was not published because CDF didn’t reach a final consensus); 4) Gerald Miller at Argonne (Phys. Rev. Lett. 99, 112001 (2007)) found that close to its center the neutron has a negative charge equal to -1/3e (inside the positive region with +1/2e); 5) new measurements of the EMC effect have been carried out by J. Arrington et al. at Jefferson Lab and they have shown that the effect is much stronger than was previously observed; 6) the ad hoc terms of the matrix of Kobayashi-Maskawa; etc.

Gerald Miller wrongly attributed the -1/3 charge at the neutron center to d quarks, but as the neutron is a udd system we know (from QCD) that none of the 3 quarks spends much time at the center.

The relevant paper on this subject is Weak decays of hadrons reveal compositeness of quarks which can be accessed from Google (it is at the top of the list on the subjects Weak decays of hadrons, Decays of Hadrons and Weak decays).

Therefore, we should go back and probe further the nucleons in the low energy scale, and carry on Miller’s experiment with the proton.

You say:

“Therefore, we should go back and probe further the nucleons in the low energy scale, and carry on Miller’s experiment with the proton.”

Looking back at the history of physics, it was high energy scattering that uncovered the complexity of the nucleus and then of the nucleon. If there exists a further level of complexity it will be seen with higher energy probes. It seems strange that you are asking for low energy scales.

Miller's experiment is a low energy scale experiment. Imagine, for a moment, that the constituents of quarks have very low masses and that each quark has a size of about 0.5 F. In this case high energy collisions cannot see them, because they don't carry much momentum; the collisions see only the quarks. Take a look at Hofstadter's figures of the charge distributions of the nucleons and at the paper WEAK DECAYS OF HADRONS REVEAL COMPOSITENESS OF QUARKS.

In discussing the question of whether the Higgs has been found or not, one should not forget that the signal could also be something other than the Higgs.

On the basis of statistical arguments one can consider only the question of whether there is a genuine signal or not. One must also consider whether there are signals at higher energies. These signals would correspond to a signal cross section above the standard model background and could be well below the signal cross section due to the Higgs.

The presence of these signals, for which there is evidence, is an argument against the interpretation of the 125 GeV signal as the Higgs. The vacuum instability is also a theoretical argument against the Higgs interpretation.

For more see http://matpitka.blogspot.com/2011/12/comparison-of-m-89-hadron-physics-and.html .

I think the evaluation of the LHC results, whether there is a real signal (potentially from a Higgs) or not, should be based exclusively on these data themselves (i.e., their statistics and possible systematic errors/uncertainties). What consequences a Higgs at 124 GeV would have is another question.

… and in the current situation, I think the best thing is to verify whether the supposed 'signal' occurs only in the diphoton channel, or also in the WW and ZZ channels. Moreover, the Higgs should primarily give mass to the W and Z, so its existence should be more connected with and relevant for them, while photons of the background could be energy-shifted by many effects.

We should analyze the case of the 140 GeV excess from last summer as an example – how likely is that scenario playing out here?

I was thinking of that too. It was not as good a case as this one, but it had a lot in its favour. There was a combined excess of three sigma overall, but I think the truth is that that did not all go away. It came mostly from the WW channel and the resolution there is so poor that a Higgs at 125 GeV can account for it.

In other channels it only reached a combined two sigma, with a lot of room for a much larger LEE factor than the one we get now. With the new results the excess is nearly 3 sigma without any help from WW.

140 GeV was the best at the time but there were other excesses that were also candidates preferred by others. Although I admit I was keen on the 140 GeV excess, I think what I am saying now is consistent with everything I said at the time.

I think overall the evidence for a Higgs Boson somewhere was promising even then, but the evidence for it being at 140 GeV rather than somewhere else in the range 120 to 150 GeV was weak. Now the evidence for an excess somewhere is a bit better, but the allowed range is perhaps more like 120 GeV to 130 GeV.

As an all-day crackpot I find the list of anomalies by M. E. de Souza highly interesting.

a) The observation that quarks in nucleons are slower when the nucleons are inside nuclei might have deep significance. In the TGD framework the color magnetic flux tubes are the essence of non-perturbative QCD and responsible for basic aspects of jet hadron physics: jets, quark gluon plasma, etc. In nuclear physics, nuclei are described as sequences of nucleons connected by color magnetic flux tubes with light quarks at their ends. This predicts a lot of new nuclear physics, among other things making cold fusion possible and explaining strange findings at the keV scale. Maybe the slowness of quarks in nucleons inside nuclei could relate to the color magnetic flux tubes connecting the nucleons?

b) Also the evidence for an object with charge -1/3 units inside the neutron is interesting. I would interpret it as a d quark and would be ready to give up QCD-based ideas, since we really know very little about non-perturbative QCD. Could the vision of the nucleon as a structure involving flux tubes connecting the quarks help to understand the situation? Also the recent finding that the proton's charge radius is not quite what one would expect could have an explanation in terms of color magnetic flux tubes, which are quite long.

c) The hard collisions among quarks are very interesting. The recent TGD picture is that all elementary particles have Kaehler magnetically charged wormhole throats (the charge is "homological" and due to the M^4xCP_2 geometry) as structural units. Physical particles must have vanishing net magnetic charge. Free fermions consist of pairs of wormhole throats carrying opposite magnetic charges. One end carries the quantum numbers and the second end a neutrino pair canceling the weak isospin, so that weak screening results. Gauge bosons correspond to pairs of wormhole contacts: the wormhole pairs at both sheets are similar magnetic dipoles, and a neutrino pair at the second end cancels the weak charges. Weak massivation is a result of magnetic confinement and screening.

One could say that wormhole throats or contacts are the basic constituents of elementary particles. Could the hard collisions among quarks correspond to situations in which the quark behaves like a single wormhole throat rather than a pair of them? If the length of the quark flux tube is of order the weak length scale (I do not, however, see an obvious reason why it could even be of order the Compton length), this would mean that these "hard" collisions could become visible at LHC energies.

A second option is M_89 hadron physics, which should have a mass scale 512 times that of ordinary hadron physics. It would be the quarks of M_89 hadron physics which become visible at these energies. Ordinary quarks could contain inside themselves hadrons of M_89 hadron physics as hot spots with high p-adic temperature, created in high energy collisions.

d) My own addition to the list of anomalies is a three-year-old article telling about satellites of the pion. These states can be arranged on a Regge trajectory but with mass differences of order a few tens of MeV. Could color magnetic flux tubes assigned to light quarks with masses of order 5-20 MeV give rise to "infrared" Regge trajectories?

Also the signal cross section as a function of the hypothesized Higgs mass shows a bumpy structure, and the TGD explanation could be in terms of satellites of the M_89 pion. The Shnoll effect, as a universal effect related to p-adic physics, suggests an alternative but possibly equivalent explanation. See this. This would be low energy hadron physics not describable in terms of QCD.

The link to the article Higgs or M_89 hadron physics? in the last paragraph of the above response did not work. Maybe I will have better luck this time.

What makes me skeptical is that spatial dimensions, that is mass, are the fabric of our universe, and massless particles are like a shadow to anything with mass. All the energy in our universe is based on quasi-spatial dimensions. What gives something mass is a form energy takes, and should go beyond one single field of force. I can see the Higgs field doing something to keep spatial dimensions organized, but only some of it, such as perhaps synchronizing the rate of time and the speed of light.

Take the line element in its dynamical/impulse representation, i.e. with E, P, …, which without space curvature is the four-vector, but with curvature is, to first order, the same corrected by a potential term (see e.g. http://en.wikipedia.org/wiki/Schwarzschild_geodesics , section 'Effective radial potential energy', the second formula, for a central field). At least in the classical limit, dS² = -m²c⁴ dTau² = L² dt², so that macroscopically it is anyway correct that the proper time corresponds to the Action S, which we put on the right side, together with E, P. Thus we get: 0 = -(S/h)² + (E/Epl)² - (P/Ppl)² … The dimension of Action, however, is discrete, so that microscopically any event or action means a little forward jump of the proper time.

One can see that the mass m is just a proportionality constant of the OBJECT passing along its world line, in the sense that two, three, … times bigger objects together have a two, three, … times bigger Action, Energy, Impulse. In contrast, S/m, E/m, P/m, i.e. Action, Energy, Impulse divided by the mass, or per mass unit, are geometrical properties of the DIMENSIONS, where for full accuracy we would still have to add the metric coefficients. This representation is equivalent to the static form, i.e. the geometry expressed by n, t, x. The two aspects and their comparison contain the basic relations of physics (for example, conservation laws; transformations/variance depending on the observer, …).

From this it should be clear that physics, its rules, the movements of objects, … are connected with the geometric properties of the dimensions. Also, that mass doesn't need any special mechanism; it's just a proportionality constant for a certain representation of physics using observables such as Action, Energy, Impulse (and others like Force, …) which, for convenience in our daily life, are proportional to the 'bigness' of the object and additive, rather than specific. Such object-size-proportional quantities, like the volume or number of atoms, don't need a special mechanism; it's sufficient that objects can accumulate into bigger ones and that there are several sensible (e.g. proportional) measures for that.

In the special case that the mass is zero, we have to be careful, formally not to produce senseless results by dividing by zero, and physically to observe which results are meaningful and which are not.

In the macroscopic case, and also in the microscopic case for observers in whose proper frame there are no intermediary events, if m = 0 and the proper time is zero, then this means that the Action is also zero, and such an 'object' doesn't exist in the same proper frame because it doesn't act there. In these cases we have the well-known solutions of the line element, like light with (almost) any value of E, P, but with a fixed quotient E/P according to 0 = (dE/Epl)² - (dP/Ppl)². In this case the photon, during this state of lightlikeness, doesn't belong to these dimensions, so it doesn't need to fulfill their E/m, P/m, … values, and the relations E/P = c etc. 'for' lightlike objects aren't a property of the individual object but are just general conditions between the dimensions (something like a relative speed between them).

Thus, the mass is just a proportionality constant, a measure of the 'bigness' of objects, for all the dimensions they are connected to.

We have to underline that the mass is NOT connected to only one dimension. It appears proportionally in the observables of all dimensions. It is not linked exclusively to Inertia, although inertia is a good example of how objects with mass are linked to Time and Space: they have inertia, or resistance against reaching light speed relative to any object (and thus relative to all objects), as a pure geometric property of these dimensions, including as a smallest-distance property of any quadratic metric (which also makes Mach's ideas irrelevant).

That the mass is not linked only to inertia but to all dimensions (classically it is on the left side of the four-vector), and that such a proportionality constant should obviously exist for smaller and bigger objects, makes the Higgs mechanism redundant, at least for normal objects.

If it is even so necessary for certain objects, according to the SM, that makes that part of the SM questionable.

There is also the aspect where the Higgs mechanism was utilized as an answer to infinities in solutions, where an opposite yet smaller infinity was used to cancel them when it came to QM. Added to this is the aspect of needing some mechanism to account for mass.

About what you said, “From this, it should be clear, that physics, its rules, movements of objects, … are connected with the geometric properties of the dimensions.”

Geometric dimensions–That’s the word I was looking for.

Though the proof is even more evident from the simple fact that math works to measure it in the first place.

There are many people who say things like reality is a hologram and belief twists the laws of physics, and observation has shown that's not true.

“mass don’t need any special mechanism, it’s just a proportional constant for a certain representation of physics using observables”

You left out that mass is when particles have some kind of form in their energy that says they can only exist in a certain space and nothing else can. When another particle, with or without mass, goes there, it interacts. Compare two light beams crossing vs. two proton beams crossing.

The Higgs boson is meant to explain why the W & Z bosons have mass but the photon doesn't. Actually, when photons are virtual particles, like the W & Z bosons, they too have mass.

Oh, by the way, the gluon, which has only been found as a virtual particle, is often said to be massless, but that's not confirmed; we only know that if it has mass, it must be small.

I don't see any problem there. To make it easier, let us stay with the situation without space curvature/gravitation. Then c⁴m² = S²/Tau² = E² - c²p², or, with speeds in fractions of c, m² = S²/Tau² = E² - p², where the action-per-proper-time-per-mass on the left side should be the same for all objects with m > 0, so that (macroscopically, i.e. apart from microscopic discretization) the proper times of nearby objects run at roughly the same speed, and objects with a different action-per-mass production, and thus a different proper-time speed, would neither appear and disappear in the past/future nor show a time-dilation sphere around them.

Thus, for objects with mass m > 0, which exist and act, the mass is a proportionality factor in S, E, p. All of them get x times bigger for an x times bigger object (e.g. an accumulation of x objects of unit mass) in the same situation. The mass is an OBJECT-dependent property. In contrast, S/m, E/m, p/m are GEOMETRY-dependent (more generally we have to add metric coefficients); the above formula divided by m is normally considered a four-vector of length 1, where both E and p can increase but the difference of their squares stays constant at 1.

Generally we agree that "only what acts exists, and where and how it acts". Also in physics we understand that what has no effects, or more strictly what is not measurable in principle, doesn't 'exist'. You can now ask what about objects which don't act, that is, which don't exist in the 'normal' manner. An example is a photon between emission and absorption (more exactly, for observers in whose 'world' and proper frame there are no intermediary events between emission and absorption; as Events and Action are a dimension of their own, they are observer-variant, and although in the event-like world line and proper frame of the photon, emission and absorption are immediately neighboured, in the proper frame of other observers there can be events in between, such as diffraction).

One sees that with this interpretation the action, and hence the left side, is zero, so such an object has zero rate of proper-time production and no defined mass. From 0 = E² − c²p² we get only the condition E/p = c for such objects. Writing the line element in the static form, we get the same condition for coordinate intervals as well, i.e. dl/dt = c.

If m is defined, then the energy-mass equivalence holds: just put p = 0 in the formula above, giving c⁴m² = E². The essence of that equation is a simple equation of motion for E in the special case p = 0. For objects without a (defined) mass, by contrast, we have the condition 0 = E² − c²p², and they can have energy even without mass.

Thus there is no problem with lightlike objects.
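The bookkeeping in the comment above can be checked numerically. Here is a minimal sketch in units where c = 1, using the standard dispersion relation m² = E² − p²; the function name and the sample numbers are illustrative, not taken from the comment:

```python
import math

def invariant_mass(E, p):
    """Invariant mass from the dispersion relation m^2 = E^2 - p^2 (units c = 1)."""
    m2 = E ** 2 - p ** 2
    return math.sqrt(m2) if m2 > 0 else 0.0

# A massive object at rest (p = 0): E reduces to m, the mass-energy equivalence.
assert abs(invariant_mass(938.3, 0.0) - 938.3) < 1e-9   # proton at rest, in MeV

# A lightlike object: E = p forces m = 0, yet the energy is nonzero.
assert invariant_mass(2.5, 2.5) == 0.0

# A moving massive object: E and p both grow, but E^2 - p^2 stays fixed.
E = 1000.0
p = math.sqrt(E ** 2 - 938.3 ** 2)
assert abs(invariant_mass(E, p) - 938.3) < 1e-6
```

The massless case shows the point being argued: E/p = c is the only constraint left when m is undefined or zero.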

The phenomenon of matter is conditioned on a curvature-defined space: on a certain connection of the object with the space, and on the existence of space at all. If we understand the primordial origination of dimensions in the sequence events; time; kinematic extension (1 spacelike coordinate); curvature (2 spacelike coordinates), then objects without mass are not localizable in the last space dimension, i.e. they are not completely localizable in all the dimensions normal for us as observers. As said before, inertia is the natural force of the kinematic extension, and gravitation that of the curvature-defined extension; both of these forces ensure that objects, once belonging to their space, cannot unlock from it: inertia prevents speeds exceeding the 'relative speed' of extension to time, and gravitation prevents position shifts outside the universe. Mass, however, is not related exclusively to one dimension/force such as inertia or gravitation; in the dynamic representation of the line element it is a proportionality factor of all terms/dimensions, that is, of S, E, p and the space curvature.

There is not much more to say about objects without mass, or about the meaning of mass. However, I do not see why we would need a Higgs to explain mass. It is also NOT necessary, nor plausible, that the phenomenon of inertia is caused by the Higgs; as said, apart from the proportionality to object size, inertia (the prevention of anything reaching light speed, becoming lightlike, and disconnecting from space) is a property of the dimensions themselves, present since their origination, certainly much earlier than the appearance of 'particles'.

Rather akin to this: drop an empty metal sphere into a container filled with water. It is the volume of the sphere, not its mass, that produces the displacement of water; the water then exerts a pressure on the surface of the sphere. But water is not the quantum vacuum. So even if one wants to follow a similar idea, it would be volumes with mass that one would actually have to use, which leads back to the whole issue of mass in the first place. Volume in this case is your spatial or quasi-spatial dimensions. But for that volume to have any effect, since the quantum vacuum is generally seen as cancelling or nearly cancelling, there has to be mass/energy contained in the volume designated by that mass/energy. The whole dimensional argument just circles right back to the original problem of accounting for the mass in that dimensional spatial unit to begin with.

In essence, to reach that conclusion in the first place you have in fact traded dimensional units of the vacuum for something more akin to Newton's aether, even if the aether you use does not, on the surface, have an absolute frame of rest. But to even begin to explain the mass of that volume you have to invoke some sort of reference frame by which to measure it. That frame becomes the substitute for Newton's absolute frame of rest, in that it provides some basic measuring rod by which that mass/energy is determined, even if it is Lorentz invariant.

The Higgs mechanism simply relies upon a scalar field, with a basic quantum called the Higgs, in an already Lorentz-invariant energy field we call the vacuum, whose basic value is determined by that scalar itself.

Assuming that dimensional units remove the problem of what causes mass is just a circular argument back to square one. Dimensionality, by whatever process it arose, is expressed via the quantum vacuum. Mass arose through symmetry breaking to begin with.

Symmetry breaking is more closely related to entropy or temperature, and temperature itself can be considered a scalar. So the dimensional units of the vacuum in a sense have a certain temperature, based on some fundamental unit of it, which we call the Higgs, and that is what gives a volume of mass its mass/energy to begin with, irrespective of the dimensional unit. When it comes to mass/energy, that unit has no value without the mechanism that generates the mass. It is simply an empty set with no more value than dimensional units of measurement.

Even if you argue that the vacuum has a non-cancelling energy which can be broken down into units, you have simply traded one quantum for another to perform the same function. Even that quantum, if we had enough energy, could itself be separated out as an individual unit. Sure, it would have some form of dimensional quality. But it is the energy, not its volume, that is the origin of mass.

At http://tgd.wippiespace.com/public_html/articles/higgsigma.pdf, on page 6, I quote the following:

The first possibility is that the second eigenstate is tachyonic due to very strong mixing proportional and must be excluded from the spectrum.

Now this is interesting, in that tachyon states are generally rejected. But this whole argument rests upon the interpretation of a detected signal in this range. So in essence the first idea rests upon a real tachyon signature, if this is the case. In fact, it would imply that tachyon states are generated in high-energy events.

The second alternative is that it is a SUSY signature. Here again it would translate into a de facto extension of the SM, which by the way requires tachyon modes again in some forms. In fact, M89 hadron physics requires things like instantons, tachyon modes, etc., and it is exactly this extension that the author and others are arguing for. In general, while they tend to discount outside superluminal ideas, it is interesting that they can indirectly admit that superluminal states can at least exist and play a role in a lot of the alternatives to the Higgs.

What I fail to see is how they can propose solutions that involve negative energy states and yet reject any idea of travel based upon similar energy, especially when you can generate such energy through an inflation field that their own physics models demand.

So in general, whichever way it goes, either you have a small Higgs mass with a possibly metastable vacuum, or you have evidence of an extension beyond the SM with either no Higgs or a still-missing Higgs. Isn't that what further research is supposed to determine? And in general, isn't the consensus that we have indirect evidence of physics beyond the SM?

The reason I mention all this is that the establishment is very quick to judge anything that goes against the SM, yet every time one turns around there is evidence of things that go beyond it. You cannot have it both ways. In fact, even if this signal does turn out to be the Higgs, then in essence you have positive proof of physics beyond the SM, and if it turns out not to be the Higgs, say some supersymmetric partner, you have evidence of the same.

What this tells me is that the general consensus is that we have evidence for physics beyond the SM.

Now that is not people telling you things; that is letting the evidence speak for itself, even if it is limited at this point.

Some think the Smarandache hypothesis (a radical idea in its own right, on the fringe) can be applied only to entities that do not have real mass or energy or information. But negative energy and negative mass come up in quantum equations all the time. The same goes for supersymmetry, string theory, brane theory, ZPF-based ideas, etc. The original equations for the quantum vacuum have it as a sea of such states. So what happens to physics when you consider these states real and not some myth in the equations?

This is my opinion only. I suspect nature does allow these states as long as causality is not violated. Granted, some physical processes are reversible and some are not. But nothing in physics forbids travel to the future, just travel to the past. We travel forward in time every day, but no one has ever shown any proof of backwards time travel. Even entanglement experiments show a forward-in-time arrow; it's just forward in time faster than c can account for.

In the entanglement case there has never been honest proof that the change happens at infinite velocity. It may just be a velocity greater than c, because it takes place in a space-time structure that allows a vacuum potential where c is higher. That does not violate the spirit of SR; it simply enforces that same spirit, in that even in that state there is some limit to the speed of information transfer.

That same space-time structure appears again and again in everything from QM to modern brane theory, and no one to date has ever taken an honest look at what happens to physics if such causally limited states do exist. And yet here we have people mentioning them as possible objections to the Higgs being in a metastable region.

To me it looks like some are trying to have their cake and eat it too. Logic dictates you cannot.

By the way, this was just a general thought about what I see out here. Like the other post, which one reader took as directed at him when it was not, it is directed more at those who reject the whole Higgs idea simply because they see the establishment as wrong to begin with. Those who honestly question the results I have no problem with. You are right to ask the important questions, as should any honest scientist.

Still a comment. Both the research group and Phil do an excellent job of applying statistical methods, and *if* one assumes that the possible signal is necessarily the Higgs, then the conclusion is clear: it is a Higgs with mass about 125 GeV. If it is the Higgs, it is the Higgs!

The problem is that there are also alternative explanations for the signal, and the existing data already suggest that the standard model Higgs is not the correct explanation. 125 GeV is just at the verge of the vacuum instability, which requires more particles. There are also additional structures at higher mass, and although they have a smaller signal cross section than that for the Higgs, that does not mean they are statistical fluctuations. There is also an oscillatory bump structure. We simply do not know what the situation is.

For more details see

http://matpitka.blogspot.com/2011/12/comparison-of-m-89-hadron-physics-and.html .

What if we find it and it doesn’t have the right spin? Say it’s 1/2 or 1?

A particle of spin 1/2 or 1 could not have the same couplings to other particles that the Higgs has, nor would it obey the same dynamics itself. For example, it could not decay into two photons, because by spin conservation only spin-0 or spin-2 particles can do that. So such a particle would be something completely different and would not be regarded as a Higgs boson in any sense. As far as I know there are no theories of EW symmetry breaking induced by fields of spin 1/2 or 1 either (except where these bind to form a composite spin-zero Higgs boson), but if there were, it would be different from the Higgs mechanism.

Dear M. Pitkänen,

Just take a look at the electric charge distributions of the nucleons, as found by Hofstadter (who won the Nobel prize for this in 1961). You can clearly see that both nucleons have two internal shells and that the innermost shell is common to both of them. Of course, three point-like quarks do not reproduce such charge distributions. Now take a look at Gerald Miller's results and the long list of anomalies; by the way, the worst anomaly is the mass of the top quark, 173 GeV. Just because of the top quark mass, a 125 GeV Higgs is a complete disaster, a chimera. This Higgs-boson-at-any-price goal has become a ridiculous fairy tale. This is not science! This is RELIGION!

Not exactly correct. The actual case is as follows:

The theory of electrodynamics has been tested and found correct to a few parts in a trillion. The theory of weak interactions has been tested and found correct to a few parts in a thousand. Perturbative aspects of QCD have been tested to a few percent. In contrast, non-perturbative aspects of QCD have barely been tested.

The first computations of transport coefficients have recently been completed. These indicate that the mean free time of quarks and gluons in the QGP may be comparable to the average interparticle spacing. The weakly interacting interpretation derives from lattice QCD calculations, in which the entropy density of the quark-gluon plasma approaches the weakly interacting limit. But both the energy density and the correlations show significant deviations from that limit, and many authors have pointed out that there is in fact no reason to assume a QCD "plasma" at the transition point should be weakly interacting; if anything, it seems that is not the case at all. When you add in the CERN claim of a neutrino velocity departing from the normal SR picture, and the current Higgs sitting in a metastable range, one has even more evidence of a departure from SR.

The indication that the mean free time of quarks and gluons in the QGP is comparable to the average interparticle spacing has interesting aspects when you look at the relationship of size and distance to quantum-scale entanglement. And if you throw extensions like brane theory into the mix, where under, say, an RS model the Israel junction condition is itself energy dependent and can vary the scale of its effects, there is a lot of uncertainty about what would work and what would not.

The search for the Higgs is not some religious fairy tale, because only through a proper search can we actually narrow down what is correct and what is not. In fact, any search for anything, even your own idea, would become nothing more than a fairy tale without such a proper search. So real science, which departs from religious dogma, demands such a search.

We already know through proper searches that the entropy density of the quark-gluon plasma approaches the weakly interacting limit but does not exactly match it, as mentioned before. That in itself was derived from high-energy particle experiments built as a means to search for things like the ridiculous fairy tale you speak of. So most of the problems you find in QED/QCD would not even be known were it not for that search for what you see as religion.

It is our search to prove or disprove the SM, which includes the Higgs, that has given birth to everything we know to date that is wrong with the SM in the first place. Yes, a 125 GeV Higgs spells trouble for the SM; but that is just what almost everyone with any knowledge of physics has expected over the last few years.

If you want to be fair, this whole issue of whether the SM is wrong or not is rather moot: the origin of mass, the strong CP problem, neutrino oscillations, matter–antimatter asymmetry, the nature of dark matter and dark energy, and even the possible 125 GeV Higgs are all aspects that point to a problem with the SM. It is a good model in need of a lot of refinement. In fact, three of those are long-standing problems with the SM, and the first goes right back to the whole Higgs idea, which is the point of this so-called fairy-tale research that led us to discover the other problems.

My prediction is that we must return to the pre-Higgsian roots. The partially conserved axial current (PCAC) hypothesis is an extremely general hypothesis of old hadron physics, and has the sigma model as a possible but not necessary realization. PCAC fixes the pseudo-scalar couplings completely, since it says that the pion (meson) field is proportional to the divergence of the appropriate axial current. It predicts couplings very similar to those of the Higgs.

If one accepts the sigma model, the sigma expectation value effectively replaces the Higgs expectation value: in the TGD framework the extrapolation from the ordinary pion predicts its value to be v = m_W, so that the counterpart of the Higgs scale reduces to the intermediate boson mass scale. The signal cross section is predicted to be a factor of 1.54 higher than for the Higgs, and the observed signal cross section is consistent with this.

The sigma model, which is a QFT model in M^4 and only an approximation in the TGD framework, would allow one to model the dominant non-perturbative contribution to nucleon masses (both M_107 and M_89), which in the TGD framework is explained as the magnetic energy of color magnetic flux tubes carrying monopole fluxes. Quark masses would result from p-adic thermodynamics rather than the Higgs mechanism, which is only a parameterization of data, as we should know but forgot long ago ;-).

In the middle of this fuss about the Higgs boson, one tends to forget how poor the understanding of the non-perturbative aspects of QCD really is. We have jet QCD, but it involves a long list of factorization theorems which need not follow from QCD proper. We have the quark-gluon plasma, where QCD would instead predict a gas of partons rather than a glass-like phase, and which the AdS/CFT approach has failed to describe.

One amazing, and only three-year-old, experimental claim is that the pion and the nucleons have exotic variants for which the scale of the mass differences is measured with 10 MeV as a natural unit. The organization of the states into "infrared" Regge trajectories is suggestive, and in the TGD framework points to string-like excitations assignable to the color magnetic flux tubes accompanying u and d quarks.

If true, this single experimental finding kills QCD as a theory of strong interactions, and the standard model too, at one blow.

Anyone seriously interested in hadron physics, and in physics in general(!), should read the article, which can be found here:

http://www1.jinr.ru/Pepan_letters/panl_5_2008/02_tat.pdf .

The simple finding of this potential Higgs at a metastability boundary tends to argue for the SM being in need of modification.

Dear Phil: if "Combining the three things I have considered I get an overall probability for such a strong signal if there is no Higgs to be about 1 in 30", then is the probability of the existence of the Higgs here almost 1 (1 − 1/30 ≈ 0.967)?

No, this is incorrect. The "probability for such a strong signal if there is no Higgs" is a conditional probability. To go from there to "the probability of the existence of the Higgs", which is unconditional, you also need to use your assessment of the prior probability of its existence. See http://en.wikipedia.org/wiki/Conditional_probability
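To make this concrete, here is a minimal sketch of Bayes' theorem using the 1-in-30 figure from the post; the assumption P(signal|Higgs) = 1 and the three priors are purely illustrative:

```python
def posterior_higgs(p_signal_no_higgs, p_signal_higgs, prior):
    """Bayes' theorem: P(Higgs | signal) from the two conditional
    probabilities and a prior P(Higgs)."""
    num = p_signal_higgs * prior
    return num / (num + p_signal_no_higgs * (1.0 - prior))

# Even with the same 1/30 conditional probability, the unconditional
# answer still depends on the prior you bring to the question:
for prior in (0.1, 0.5, 0.9):
    print(prior, round(posterior_higgs(1 / 30, 1.0, prior), 3))
# -> 0.769, 0.968 and 0.996 respectively
```

The spread between 0.77 and 0.996 is exactly the theory dependence that the conditional-probability question was designed to avoid.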

Well: let B be "the Higgs exists" and A be a signal of this strength.

If P(A|¬B) = 1/30, then P(B) = 1 − 30·(P(A) − P(A∩B)).

If P(A) → 1, then P(A∩B) → P(B), and then P(B) → 1?
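For what it's worth, the identity in the comment above can be checked numerically, and it holds for every prior P(B) (here with the illustrative extra assumption P(A|B) = 1), which is exactly why it cannot determine P(B) on its own:

```python
# The identity P(B) = 1 - 30 * (P(A) - P(A and B)) follows from
# P(A|not B) = 1/30 alone, so it is satisfied by EVERY prior P(B).
for prior in (0.2, 0.5, 0.8):
    p_a_and_b = 1.0 * prior                    # P(A and B) = P(A|B) * P(B)
    p_a = p_a_and_b + (1 / 30) * (1 - prior)   # law of total probability
    assert abs((1 - 30 * (p_a - p_a_and_b)) - prior) < 1e-12
```

So the algebra is internally consistent, but it is a tautology rather than a derivation of P(B); the prior still has to come from somewhere.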

http://arxiv.org/abs/math/0304081

Phil, thanks for the stats, which I suppose has much in common with a risk-assessment.

Just a question, given that the Higgs boson is proposed to be in copious existence, and the apparatus was specifically designed to detect it, why is the effect size so small?

They have a very large data-set, yet still can’t state with reasonable confidence whether the effect is there or not. That seems strange. Did they misjudge their initial power calculation?

Another strange thing is that the recent announcement was made at all: why bother with an inconclusive announcement? The overall impression is one of haste. Are they feeling obliged and under pressure?

Dirk, these are interesting questions.

The results they are getting are completely in line with the calculations they did beforehand. For a Higgs boson in the mass region not yet excluded, they would have produced Higgs bosons numbering in the thousands or tens of thousands, so they are quite rare. Although the Higgs field has a huge effect everywhere, the production of its on-shell excitations requires weak interactions and does not have a big cross section at 7 TeV. To make it worse, most of the Higgs bosons that are produced decay into bottom quarks, which are very difficult to detect above the background. Only a fraction of a percent decay into photons or other particles that can easily be detected and seen above the background.
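As a rough sketch of the arithmetic behind these numbers: the expected yield is luminosity × cross section × branching ratio. The values below are illustrative round numbers of my own, not the collaborations' official figures:

```python
def expected_events(lumi_inv_fb, sigma_pb, branching_ratio):
    """Expected event count N = L * sigma * BR (1 fb^-1 = 1000 pb^-1)."""
    return lumi_inv_fb * 1000.0 * sigma_pb * branching_ratio

# Assumed round numbers: ~5 fb^-1 recorded, ~15 pb production cross
# section at 7 TeV, and a diphoton branching ratio of roughly 0.2%.
total = expected_events(5.0, 15.0, 1.0)       # ~75000 Higgs bosons produced
diphoton = expected_events(5.0, 15.0, 0.002)  # ~150 clean diphoton events
```

Tens of thousands produced, but only a couple of hundred in the easily detectable channel, before detector efficiency even enters; this is why the signal is still marginal after 5/fb.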

It is also not strange that the announcement was made. There has been a succession of reports on the Higgs searches at 30/pb, 200/pb, 1/fb, 2/fb and now 5/fb. If they report like this, they are bound to pass through a stage where there is some tentative sign of the Higgs boson that is still not conclusive. That is where they are now.

There are a number of reasons why they have decided to report incrementally rather than waiting until they have conclusive results. One reason is that there is so much interest, and the collaborations are so large, that there would be no hope of keeping the results secret for very long. It is better to state what they have than to allow rumours to persist unchecked.

Another reason is that they are anxious to firm up their plans for the next generation of accelerators as soon as possible. They don't yet know whether they will go with the ILC or CLIC, and it is not certain that either will get funding. This depends critically on what the LHC is telling us about new physics, and it means they need to report whatever they have as soon as possible, so yes, they are under pressure.

The last reason is that lots of people are interested in the progress towards discovering the Higgs. There is a unique opportunity here to show science progressing as it happens. This is good for public education and for raising the profile of the work that CERN does. Despite the fact that the results are still inconclusive, the media have been keen to report every step and the public are paying attention. This is really good for science, and you can be sure that many future scientists will say they were inspired by observing this as it unfolded, rather than just being given the definitive result when all the most interesting discussions were over.