It’s great to see the Planck cosmic background radiation data released, so what is it telling us about the universe? First off, the sky map now looks like this:

Planck is the third satellite sent into space to look at the CMB, and you can see how the resolution has improved in this picture from Wikipedia:

Like the LHC, Planck is a European experiment. It was launched back in 2009 on an Ariane 5 rocket along with the Herschel Space Observatory. The US also contributed through NASA.

The Planck data has given us some new measurements of key cosmological parameters. The universe is made up of 69.2±1.0% dark energy, 25.8±0.4% dark matter, and 4.82±0.05% visible matter. The percentage of dark energy increases as the universe expands while the ratio of dark to visible matter stays constant, so these figures are valid only for the present. Contributions to the total energy of the universe also include a small amount of electromagnetic radiation (including the CMB itself) and neutrinos. The proportion of these is small and decreases with time.
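The statement that the dark-energy fraction grows while the dark-to-visible ratio stays fixed follows from how each component dilutes with expansion. A minimal sketch, assuming a flat universe containing only matter and a cosmological constant, with rounded Planck-era density parameters:

```python
def fractions(a, omega_l=0.692, omega_m=0.308):
    """Energy fractions as a function of scale factor a (a=1 today),
    for a flat universe with matter plus a cosmological constant.
    Matter density dilutes as a^-3; dark energy density stays fixed,
    so its *fraction* of the total grows as the universe expands."""
    matter = omega_m * a ** -3
    total = matter + omega_l
    return omega_l / total, matter / total

# Today versus when the universe has doubled in linear size:
today = fractions(1.0)   # ≈ (0.69, 0.31)
later = fractions(2.0)   # dark energy fraction has grown toward 1
```

The dark-to-visible ratio is untouched by this because both dilute as a⁻³.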

Using the new Planck data, the age of the universe is now 13.82 ± 0.05 billion years. WMAP gave an answer of 13.77 ± 0.06 billion years. In the usual spirit of blogger combinations we bravely assume no correlation of errors to get a combined figure of 13.80 ± 0.04 billion years, so we now know the age of the universe to within about 40 million years, less than the time since the dinosaurs died out.
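The blogger combination above is just an inverse-variance weighted average. A minimal sketch, assuming uncorrelated Gaussian errors as stated:

```python
def combine(measurements):
    """Inverse-variance weighted mean of (value, sigma) pairs,
    assuming independent Gaussian errors."""
    weights = [1.0 / s ** 2 for _, s in measurements]
    mean = sum(v * w for (v, _), w in zip(measurements, weights)) / sum(weights)
    sigma = (1.0 / sum(weights)) ** 0.5
    return mean, sigma

# Planck and WMAP ages in billions of years
age, err = combine([(13.82, 0.05), (13.77, 0.06)])
print(f"{age:.2f} ± {err:.2f} Gyr")  # → 13.80 ± 0.04 Gyr
```

Any correlation between the two measurements would make the true combined error larger than this.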

The most important plot that the Planck analysis produced is the multipole analysis of the background anisotropy shown in this graph

This is like a Fourier analysis done on the surface of a sphere, and it is believed that the spectrum comes from quantum fluctuations during the inflationary phase of the big bang. The points follow the predicted curve almost perfectly, certainly within the expected range of cosmic variance given by the grey bounds. A similar plot was produced before by WMAP, but Planck has been able to extend it to higher multipoles because of its superior angular resolution.
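In outline, the spectrum is built by expanding the temperature map in spherical harmonics and averaging the squared coefficients at each *l*. A toy version of that estimator, assuming the a_lm coefficients are already in hand (random numbers here stand in for a real map):

```python
import numpy as np

def power_spectrum(alm, lmax):
    """Toy angular power spectrum estimator:
    C_l = (1 / (2l+1)) * sum over m of |a_lm|^2,
    given a dict mapping (l, m) -> complex coefficient."""
    cl = np.zeros(lmax + 1)
    for l in range(lmax + 1):
        cl[l] = sum(abs(alm[(l, m)]) ** 2
                    for m in range(-l, l + 1)) / (2 * l + 1)
    return cl

# Example: coefficients for a statistically isotropic Gaussian sky
rng = np.random.default_rng(0)
lmax = 10
alm = {(l, m): rng.normal() + 1j * rng.normal()
       for l in range(lmax + 1) for m in range(-l, l + 1)}
cl = power_spectrum(alm, lmax)
```

Real analyses (e.g. with HEALPix tools) must also handle sky masks and noise, which this sketch ignores.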

However, there are some anomalies at the low-frequency end that the analysis team have said are in the range of 2.5 to 3 sigma significance depending on the estimator used. In a particle physics experiment this would not be much, but there is no look-elsewhere effect to speak of here, and these are not statistical errors that will get better with more data. This is essentially the final result. Is it something to get excited about?

To answer that, it is important to understand a little of how the multipole analysis works. The first term in a multipole analysis is the monopole, which is just the average value of the radiation. For the CMB this is determined by the temperature and is not shown in this plot. The next multipole is the dipole. This is determined by our motion relative to the local preferred reference frame of the CMB, so it is specified by the three numbers of a velocity vector. This motion is considered to be a local effect, so it too is subtracted off in the CMB analysis and not regarded as part of the anisotropy. The first component that does appear is the quadrupole, which gives the first point on the plot. The quadrupole is determined by 5 numbers, so it is shown as an average with a standard deviation. As you can see, it is significantly lower than expected. This was already known after WMAP, but it is good to see it confirmed. It contributes to the 3 sigma anomaly, but on its own it is more like a one sigma effect, so nothing too dramatic.

In general there is a multipole for every whole number *l*, starting with *l*=0 for the monopole, *l*=1 for the dipole and *l*=2 for the quadrupole. This number *l* is labelled along the x-axis of the plot. It does not stop there of course. We have an octupole for *l*=3, a hexadecapole for *l*=4, a dotriacontapole for *l*=5, a tetrahexacontapole for *l*=6, an octacosahectapole for *l*=7, etc. It goes up to *l*=2500 in this plot. Sadly I can’t write the name for that point. Each multipole is described by 2*l*+1 numbers. If you are familiar with spin you will recognise this as the number of components that describe a particle of spin *l*; it’s the same thing.
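The counting of components can be checked directly; the `components` helper below is purely illustrative:

```python
def components(l):
    """A multipole of order l carries 2l+1 independent numbers,
    the same count as the components of a spin-l particle."""
    return 2 * l + 1

# quadrupole: l=2 gives 5 numbers, as in the text
print(components(2))  # → 5

# a full map up to l=2500 carries (2500+1)^2 numbers in total,
# since the sum of 2l+1 from l=0 to L is (L+1)^2
print(sum(components(l) for l in range(2501)))  # → 6255001
```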

If you look carefully at the low-*l* end of the plot you will notice that the even-numbered points are low while the odd-numbered ones are high. This is the case up to *l*=8. In fact, above that point they start to merge a range of *l* values into each point on the graph, so this effect could extend further for all I know. Looking back at the WMAP plot of the same thing, it seems that they started merging the points from about *l*=3, so we never saw this before (though some people did, because they wrote papers about it). It was hidden, yet it is highly significant, and for the Planck data it is responsible for the 3 sigma effect. In fact, if they used an estimator that looked at the difference between odd and even points, the significance might be higher.
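An odd-versus-even estimator of the kind just mentioned could look like the sketch below; the `cl` values here are made up for illustration and are not Planck numbers:

```python
def parity_asymmetry(cl, lmin=2, lmax=8):
    """Ratio of mean odd-l power to mean even-l power over [lmin, lmax].
    Values well above 1 indicate the odd-high / even-low pattern."""
    odd = [cl[l] for l in range(lmin, lmax + 1) if l % 2 == 1]
    even = [cl[l] for l in range(lmin, lmax + 1) if l % 2 == 0]
    return (sum(odd) / len(odd)) / (sum(even) / len(even))

# Illustrative toy spectrum, indexed by l (entries 0 and 1 unused)
cl = [0, 0, 200, 900, 300, 1100, 400, 1300, 500]
ratio = parity_asymmetry(cl)  # well above 1 for this toy input
```

Published analyses use more careful statistics (and simulations to calibrate significance), but the idea is the same.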

There is another anomaly called the cold spot, in the constellation of Eridanus. This is not on the axis of evil, but it is not terribly far off either. Planck has also verified this spot, first seen in the WMAP survey, which is 70 µK cooler than the average CMB temperature.

What does it all mean? No idea!

Rotation… on every scale. That comes to my mind as a first reaction.

Interestingly I was commenting on this sort of view, an article I called Camera Obscura sums it up… all the synchronicity kicks in I suppose like the violets compared to independent inventors of the non-Euclidean geometry… I fall down into the conceptual black hole singularity (or who knows, something like a wall or boundary after all?) with Vuyk and others and their dawning insights. I hope those who may benefit from the experimental observation and the metaphors I use will benefit from the figures that contain details of the mathematics as a form of over language of the binary symmetry information.

L. Edgar Otto

Interestingly, I’m pretty sure observations of this kind were made in ancient times, using the Pythagorean law of illumination, i=I/d^2, in a camera obscura. They were trying to analyze the astrological effects of the constellations, and poor Virgo got aligned with the contrary movement in Plato, which he saw as degenerate. Aristotle struck back in On Generation with his view of the reversible cycle of the elements.

The key is the old Rhoemer color-temperature scale, which comes out of furnace craft, just five stops up to about 5000 degrees. Here 2500 is aligned with 90 degrees, giving the sun at white light on 180 for the passage across the sky, and 360 as the Pythagorean central fire matching 1000. So divide the color-temp by 1000 and you get Rhoemer. This was Pythagoras’ answer to Babylonian astrological math.

Orwin O’Dowd: “This was Pythagoras’ answer to Babylonian astrological math.”

Good story. But, I think that there is a better one. Let’s invent a fictitious particle, called “iceberg” which is composed of three parts.

a. A big chunk of ice.

b. A big ocean of water.

c. A big sky of space.

This particle (iceberg) is defined as only the visible part of the ice, and the sea water is completely opaque. For a universe which contains only one single such particle (iceberg), if the three parts are equal in terms of energy, then the big chunk of ice weighs about 33% of the total. In terms of the “real” ice, the tip of the iceberg is about 10% of the total ice mass. In this story, let the particle (visible part of the ice) be 5% and the invisible ice be 28% (making a total of 33%). Now, this iceberg universe has 67% dark energy, 28% dark matter and 5% visible mass. This is a good story, perhaps a bit better than the Babylonian math. Yet, this story can easily be transformed into a model. When we let particle (iceberg) = quark, we have a new particle physics and a new cosmology which matches the Planck data.
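For what it’s worth, the commenter’s iceberg arithmetic can be checked directly (a toy bookkeeping exercise, not a physical model):

```python
# Three equal parts of the total energy; the ice part is split into
# a visible "tip" (5%) and an invisible remainder, per the story above.
total = 100.0
ice = total / 3                  # ≈ 33.3% of the total
visible = 5.0                    # the tip of the iceberg
invisible_ice = ice - visible    # ≈ 28.3%
dark_energy = total - ice        # ≈ 66.7%
```

Rounded, these give the 67/28/5 split the comment quotes, close to but not exactly the Planck 69.2/25.8/4.8 figures.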

It means the relation between Earth and its environment:

Yes I fully agree

What does it tell us?

1: The cold spot is located in the “warmer” hemisphere.

2: The colder hemisphere is larger than the warmer hemisphere.

According to Q-FFF theory, both elements tell us about our 3D position in the universal bubble of the raspberry multiverse; see my topological image of the multiverse cross-section.

See:

http://www.esa.int/Our_Activities/Space_Science/Planck/Planck_reveals_an_almost_perfect_Universe

and

http://bigbang-entanglement.blogspot.nl/2013/03/final-esa-cmb-map-2013-is-support-for.html

Hi Leo, the Lyman-alpha is quasars, which is a hell of a long way from any originary cosmic spasm. Aren’t you talking about galactic evolution? Anyway, here’s a well-formed measure of phase changes in gravity, which relate to ferromagnetism, as by gravitomagnetic coupling, where the fine structure constant is implicated. http://arxiv.org/abs/hep-lat/9812022

If quasar kinetics were somehow attenuating a star-gas, that could seed something like the Baryon Acoustic Oscillation. Their picture only reaches two sigma, but it’s only a statistical pattern, and not centered on our view. Still, it’s the only real candidate for a cosmic center. http://www.sdss3.org/dr9/

Hi Orwin,

I did not point at quasars.

I postulate that the cold spot could be the center of the raspberry multiverse.

“Life is like arriving late for a movie, having to figure out what is going on without bothering everybody with a lot of questions, and then being called away unexpectedly before you find out how it ends.” Joseph Campbell

Yeah, I’m telling you; this is about as close to a movie script/sequential function chart as we’re probably going to get, at least for a while. I read an interview with Max Tegmark and if he’s any indication the cosmologists seem to be quite happy. Congratulations to the European Space Agency for mission accomplished.

The image linked above is quite interesting but what does it say, really, about dark matter/ dark energy? According to contemporary models, two galaxies can pass through one another without the collision of visible matter but what of the dark energy/dark matter? It’s a fascinating mystery . . .

Hi Wes,

Do you react on my blog composite image?

Meet “Maxwell”

Imagine a “Maxwell Demon” of infinitesimal size deep in the interior of a type-II supernova event.

Surveying his observable environment of about 10^-18 cubic centimeters, he draws the following conclusions.

1. There is global expansion, as he can see from the velocities of the 10^11 gigantic particles.

2. Superimposed upon this global expansion there are random velocities of about 700 km/sec that he calls “peculiar velocities”, which indicate some unexplained very high-energy and chaotic phenomena.

3. The unusual “weblike” filamentary/void distribution of the particles reminds him of high energy plasma phenomena.

4. The overall distribution of the gigantic particles looks very homogeneous, at least statistically speaking, but there is a small dipole anisotropy, i.e., slightly more particles and slightly higher temperatures in one direction and slightly lower values in the opposite direction.

We then move “Maxwell” by about 10^15 centimeters to a location far outside of the supernova event. With mouth and eyes wide open, he utters two 4-letter words. The first is “Holy” and the second begins with “S”.

Get the point?

There’s a hole in your UniVerse

Lisa, dear Lisa,

And a hole in the finances

Oh dear god Whole!

Traditional, in variational renditions, since Newton’s bucket experiment.

“…while the ratio of dark to visible matter increases”

Why is this? Isn’t the amount and therefore ratio of dark to visible matter stable?

“…there is no look elsewhere effect to speak of here”

Isn’t cosmic variance a kind of look-elsewhere effect, and isn’t the gray area due to cosmic variance rather than statistical error?

Yes, the ratio of dark to visible remains constant. I don’t know why I said it increases; it is now fixed.

Yes, neither the gray region nor the bars on the points are experimental errors in the usual sense. They are cosmic variance, and perhaps I should have used that term; I will review it. These things are still statistical in a sense, but that word may lead to confusion with statistical errors in particle physics experiments, which can be reduced with more data. For the CMB the grey area is fixed, while the bars on the points could only be reduced if we could observe the universe from other vantage points billions of light years away.
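The cosmic-variance floor mentioned here has a simple form for a Gaussian, full-sky map: only 2*l*+1 independent modes exist at each multipole, so the fractional uncertainty on C_l is sqrt(2/(2*l*+1)). A quick sketch:

```python
import math

def cosmic_variance(l):
    """Fractional cosmic variance on C_l for a Gaussian, full-sky map:
    only 2l+1 independent m-modes exist at multipole l, so
    ΔC_l / C_l = sqrt(2 / (2l+1)), no matter how good the instrument."""
    return math.sqrt(2.0 / (2 * l + 1))

# The quadrupole is intrinsically uncertain at the ~60% level,
# while high multipoles are pinned down tightly:
print(cosmic_variance(2))     # ≈ 0.63
print(cosmic_variance(2500))  # ≈ 0.02
```

This is why the low-*l* grey band in the plot is so wide and cannot be shrunk by more data.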

Look Elsewhere Effect is a relative term depending on what range of things you were looking at in the first place. For example, if you notice that the point at l=14 is low by about three sigmas someone would point out that this could have been true for any of the 100 points on the plot so you need to adjust for LEE. Whether the same applies to the low-l multipole points depends on how you approached the data. If you specifically wanted to look at those points to test models of inhomogeneous cosmologies on large scales then no LEE applies.
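The trial-factor arithmetic in the *l*=14 example can be made concrete, assuming 100 independent Gaussian points:

```python
import math

def two_sided_p(sigma):
    """Two-sided Gaussian tail probability for a given significance."""
    return math.erfc(sigma / math.sqrt(2))

def global_p(local_p, trials):
    """Chance of at least one fluctuation this large among `trials`
    independent points: the look-elsewhere correction."""
    return 1 - (1 - local_p) ** trials

p_local = two_sided_p(3.0)         # ≈ 0.0027 for one pre-chosen point
p_global = global_p(p_local, 100)  # ≈ 0.24 for any of 100 points
```

So a lone three-sigma dip somewhere on a 100-point plot is expected roughly a quarter of the time, which is why it would impress nobody.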

LEE is also relevant when looking at the cold spot since you could have looked for it anywhere in the sky. It is said to be significant even after this is taken into account but there is always the larger LEE due to the fact that many other types of anomalies could have been observed in the data but weren’t. This effect is impossible to estimate reliably.

Has this measurement shown any preferred direction? Maybe in the cold spot direction? For instance, when variation of the fine structure constant was measured, they found:

http://arxiv.org/abs/1008.3907

From the article it is evident that the preferred direction is at right ascension 17.3 ± 0.6 hours, declination −61 ± 9 degrees.

It is also visible that this direction lies near the galactic equator (but not toward the center of our galaxy).

Wish there was something to get excited about, but alas it looks like another day of expensive science, with confirmation of the same measurements as before and only tantalizing hints of small change. If something this much more accurate than WMAP produces next to nothing in the way of new results, nobody should (or likely will) get funded for the next biggest instrument in the future. I personally believe we are dead wrong on so many aspects of dark energy/matter that we won’t see anything new until our models change.

You are absolutely right. There is no need for dark matter or dark energy. As long as modern science holds on to this paradigm, no real progress will be made.

The interest seems high for this Planck-level data (which I abbreviate as hmap), so if anyone is interested in the philosophy of seeing these sorts of sky maps, or even an artistic view, I have posted simple modifications of the new map to express my general take on unified theories and so on. Let us take our assertions a little lighter and our speculations a little deeper in asking why; and we do need to expand and invest in new science projects, perhaps urgently. A map such as this, or a collider microscope, tells us a little more, rather than telling us all or nothing at the frontier foundational level.

http://www.pesla.blogspot.com space and energy hmap…post

ThePeSla

Does anyone know if the acceleration of expansion is now computed to be slower, or if the range of particle-core physical parameters has shifted slightly since the so-called big bang?

For centuries many have assumed that the observable universe (u) was essentially equivalent to the whole Universe (U).

A minority of natural philosophers, notably Spinoza and Kant, have strenuously countered that the u = U assumption is very dubious, is empirically unmotivated, and indicates a regrettable anthropocentric bias.

They argued that if one used the observable universe as a guide for modeling the Universe, then an infinite hierarchical model was a much more scientific assumption.

Since neither Spinoza nor Kant knew that stars were hierarchically organized into vast “island universes” called galaxies, the discovery of galaxies in the 20th century was an impressive vindication of their hierarchical paradigm.

Why do many still assume that u = U? Well, perhaps those who do not learn the lessons of history are condemned to repeat past mistakes.

My main bias against the theory of cosmological inflation is the extrapolation of the values of the fundamental constants back to the moment of the Big Bang.

“rapid exponential expansion of the early universe by a factor of at least 10^78 in volume, driven by a negative-pressure vacuum energy density. The inflationary epoch comprises the first part of the electroweak epoch following the grand unification epoch. It lasted from 10^−36 seconds after the Big Bang to sometime between 10^−33 and 10^−32 seconds. Following the inflationary period, the universe continued to expand, but at a slower rate.” http://en.wikipedia.org/wiki/Cosmological_inflation

10^−32, 10^−36, 10^−43 seconds. This is ridiculous…
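For scale, the quoted numbers can be unpacked: a volume factor of 10^78 is a linear expansion factor of 10^26, or about 60 “e-folds”, which is the figure inflation models usually quote. A quick check:

```python
import math

# Convert the quoted volume expansion factor into a linear factor
# and the number of e-folds N = ln(linear factor).
volume_factor = 10.0 ** 78
linear_factor = volume_factor ** (1.0 / 3.0)  # ≈ 1e26
e_folds = math.log(linear_factor)             # ≈ 26 * ln(10) ≈ 60
```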

Yes, a classic just-so story. But mathematics has the amazing property of being able to make the ridiculous look oh so scientific.

The sad part is that it’s all directed at hiding monopoles, which have proved awkwardly unobservable. Dirac’s great idea was to use monopoles to quantize charge, and with that hypothesis he ventured on to quantize the fields associated with charges. No monopoles, and the whole house of cards collapses.

The math tells the inside story: it all happens in what’s called a Gelfand triple or a rigged Hilbert space, or in the latest axiomatic views, a triple of Russian-doll sets. Well, charges are found in neighbourhoods where the EM eddies set in, and these in larger distributions of temperature. You just can’t do QM thoroughly without the Boltzmann factor kT, and that’s just not a local phenomenon. An atom as such has no temperature.

But the actual function that mediates is not a Lagrangian or a Hamiltonian, but the discrete Legendre transform. Guess what? It’s discrete: it’s naturally quantized!

So how did we get into this mess? Well, the limit of the classical regime in the old thermodynamics (where the Legendre transform works just fine) is given by the, uh, matter wave, which sweeps up very close particles into a correlated dance. Pity de Broglie messed the idea up with particle metaphysics. That was the whole Bohm “local realism” fiasco, and then the string bandwagon swept over us.

Then they silenced the principal competition, from Mohamed El Naschie at the journal Chaos, Solitons and Fractals. He took a complex view of vacuum dynamics and found the fractal known as the Sierpinski gasket, but not as a dissipation of energy as in Poincaré’s dynamics. This makes it rather the Kolmogorov vague attractor, and on that note you find Alexander Polyakov relying on Kolmogorov’s view of turbulence. Polyakov is big news now, and back in July 2011 John Baez was writing about this theme in Scientific American.

They’re saying that the void, the vacuum is not any fluid aether, but something vaguer than that, which resonates with the logic of game theory. No random dynamics, not chaos, but pure potential, or possibility. The mathematical signature is just the Golden Ratio, and this research picks up in continued fractions where the Pythagoreans left off:

http://www.elnaschiewatch.com/categories/published-papers/golden-ratio-in-quantum-mechanics/