A good friend (thanks, Betty!) suggested that it is rather difficult to judge the advantage of the theories I discuss over those of consensus science, especially the standard model, without making it clear what the limitations of the latter are. That is a very good point, hence this page – for which the real title is “What is wrong with consensus science?” I’ll mainly use Wikipedia for the references below rather than the original sources, seeing as Wikipedia is basically the home of consensus science, and is often the first port-of-call for someone wishing to learn more about a particular subject, although many teachers consider it to be unreliable.
A colleague also suggested to me that evidence of how good the standard model is could be found in Quantum Mechanics, which some people describe as “probably the most precise scientific discipline ever devised by humankind”, a statement that Miles Mathis and I, amongst others, have very strong opinions about – hence the separate page I put together specifically on that subject.
As a historical reminder – in the past consensus science models of the Universe included the following theories:
- The Earth is flat
- The Earth is carried through space on the back of a giant turtle
- The Earth sits within a celestial orb surrounded by a bright substance; stars are small holes in the orb that allow the light to shine in
- The Earth is at the centre of the solar system; the planets and Sun rotate around it
- There is only one galaxy
- Despite the Universe being filled with plasma, which is made up of charged particles, and despite any flow of charged particles constituting an electric current with a magnetic field around it, there are no electric currents in space – but there are magnetic fields, the origin of which is unexplained
At times people have been quite violent when you disagreed with the above; Galileo had his book banned and was put under what we would call house arrest for many years by the Catholic Church for agreeing with Copernicus that the Earth rotated around the Sun, and he got off relatively easily compared to Giordano Bruno, who was burned at the stake (although there are arguments as to how much of this was due to his cosmological beliefs, and how much his religious beliefs – but back then the two were closely intertwined).
By the way, the last theory above is still part of consensus science in 2021.
The Standard Model
So, what is the standard model? Well, there really isn’t one. In the field of physics the very word “standard” suggests that there is a model of the universe that everyone has long accepted as fact. In reality there are a huge number of separate theories that all fit under the umbrella of this phrase, similar to that of the Electric Universe, and they are constantly being tweaked in an attempt to explain new findings that disagree with previous predictions – which is one of my key objections.
Anyway, the standard model these days refers mainly to the groups of theories that describe what matter is made of at the smallest level and how these bits interact to form the atoms, molecules and eventually us.
As described elsewhere the idea of the atom has been around for nearly 2500 years. In 1904 JJ Thomson proposed that the fundamental components of elements had structure themselves, a “plum pudding” of negatively charged electrons, that had recently been “discovered”, embedded in a positively charged “pudding”.
This was replaced in 1911 with the Rutherford planetary model, which moved the electrons out of the central positive charge and had them orbiting it, like the planets around the Sun, and which was tweaked in 1913 to become the Rutherford–Bohr model.
This is the main model that was taught in schools for some time, and a stylised version of it served as the logo of the US Atomic Energy Commission.
Although this model seemed to be quite accurate in explaining experimental results, it had big problems from a theoretical perspective. The positively charged protons all reside in the nucleus, and the negatively charged electrons all reside outside it. Charges of the same type are meant to repel each other, with that force increasing dramatically as you come closer – halve the distance and the repulsion is four times as much. So at the scale of the atom the nucleus would explode. As such the strong force had to be invented to explain this – unlike electromagnetism and gravity which we have direct experience of, the strong force has never been experienced or measured – merely postulated to explain the obvious problem with atomic theory.
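To put a rough number on that repulsion, here is a quick back-of-the-envelope calculation; the one-femtometre spacing is just an assumed, textbook-style figure for two protons sitting side by side in a nucleus, not something from the text above.

```python
# Coulomb repulsion between two protons at an assumed nuclear-scale separation.
# The constants are standard; the 1 femtometre spacing is an illustrative assumption.
K = 8.988e9        # Coulomb constant, N*m^2/C^2
E = 1.602e-19      # elementary charge, C

def coulomb_force(separation_m):
    """Repulsive force between two protons, in newtons."""
    return K * E * E / separation_m ** 2

r = 1e-15  # ~1 femtometre, roughly the spacing inside a nucleus
print(f"Force at {r} m: {coulomb_force(r):.0f} N")       # ~230 N pushing each proton apart
print(f"Force at {r/2} m: {coulomb_force(r/2):.0f} N")   # halve the distance -> four times the force
```

Two hundred-odd newtons acting on something that weighs less than a billionth of a billionth of a gram is the scale of the problem the strong force was invented to paper over.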
The weak force also had to be invented to explain why a homogeneous nucleus of evenly spread protons and neutrons would decay in some situations, causing fission and radiation. A new particle, the neutrino, also had to be invented to explain missing energy and momentum in radioactive decay, with no charge like the neutron, but with virtually no mass – an electron is meant to be around 500,000 times as massive.
There were also severe problems with the electrons – they should either repel each other due to charge repulsion, or be sucked into the nucleus due to the charge attraction. Even if you argue that they are orbiting the nucleus so they won’t get sucked in due to electromagnetic attraction, just as the planets aren’t sucked into the Sun despite gravity, the orbits wouldn’t be stable – accelerating charges lose energy in the form of photons, and anything orbiting a fixed point has to be accelerating. To avoid this issue it was initially just decreed that this wouldn’t happen for electrons orbiting atoms, for no reason.
These faults started becoming too obvious for physicists to explain away, so that’s when quantum physics came to the fore and physicists decided to stop being physicists (which by its very definition requires one to deal with the physical) and instead become mathematicians and magicians. Electrons stopped being particles that occupied a specific position, but instead became an electron cloud or atomic orbital, which rather than being anything physical is merely a mathematical formula that specifies the probability of something with a negative charge being in a certain location at the time you try to detect it.
That meant the magicians could rewind the clock to the plum pudding model and just pretend that the negative charge was a property of a volume of space (although spread out in a special pattern), so you no longer had to worry about specific bits of charge repelling each other or accelerating. This is despite the fact that there are a huge number of experiments distinctly showing electrons acting as particles with a very definite volume and mass. This is also the basis behind Einstein’s theory of the photoelectric effect, which we know works because otherwise solar panels would not exist. I go into a lot more of the problems with this type of reasoning on the Quantum Mechanics page.
Thanks to the development of particle accelerators in the 1950s, issues with this model also started appearing: physicists found that when they smashed electrons and nuclei into each other at high speed, a whole heap of other “elementary particles” were created, many of them quite a bit more massive than protons and neutrons – so many that the term “particle zoo” ended up being used.
As such it was decided that protons and neutrons weren’t fundamental particles at all, but instead were made up of smaller particles that came to be known as quarks, which each had a fraction of a charge compared to the electron and proton – despite “fractionally” charged particles having never been detected then or since. On top of that new properties of matter also had to be invented to explain why these quarks, electrons and neutrinos did or didn’t appear in certain combinations, which were eventually called “flavour” and “colour”.
After a few more tweaks (force carriers) were added, we ended up with the commonly accepted key tenets of the standard model as of 2021: matter is made out of elementary particles that are either quarks, of which there are 3 pairs, or leptons, of which there are also 3 pairs, the lightest being the electron and the electron neutrino. On top of that there are 4 forces that result in these elementary particles being attracted to each other or repelled, due to “force carrying” particles, of which photons are the only ones we readily detect, plus the unexplained rules of colour and flavour to restrict the ways in which the particles interact.
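For reference, here is that particle content laid out as a simple bit of Python – nothing here is my own claim, it is just the conventional catalogue as consensus science presents it:

```python
# The conventional particle content of the standard model, as described above.
# Three generations ("pairs") of quarks and of leptons, plus the force-carrying bosons.
quarks = [("up", "down"), ("charm", "strange"), ("top", "bottom")]
leptons = [("electron", "electron neutrino"),
           ("muon", "muon neutrino"),
           ("tau", "tau neutrino")]
force_carriers = {
    "electromagnetism": ["photon"],
    "weak force": ["W boson", "Z boson"],
    "strong force": ["gluon"],
    # Gravity is the fourth force, but the standard model has no working
    # force carrier for it; the "graviton" remains hypothetical.
}

for generation, (q, l) in enumerate(zip(quarks, leptons), start=1):
    print(f"Generation {generation}: quarks {q}, leptons {l}")
```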
As a further example of how ridiculous all of the above is, consider the W and Z bosons, which are meant to be the “force carrying” particles of the weak force, and so mediate neutrinos being absorbed or emitted. As per above, the lightest neutrino is meant to be 500,000 times less massive than an electron. The W boson, on the other hand, is meant to be about 157,000 times MORE massive than an electron.
That means that the W boson is about 78 thousand million times MORE massive than the neutrino it is meant to “mediate”. That’s a bit like saying that the “mediators” of fleas are elephants. This is truly ridiculous; the “mediators” of a force should at worst be on a par with the size of the particles they are meant to be affecting, or in general a lot smaller, like photons and charged particles, not the exact opposite. One wonders how they even keep a straight face when making claims like this.
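The ratios above are easy to check from the commonly published masses; note that the ~1 eV neutrino mass below is an assumed illustrative value (the actual mass is not known), chosen to match the 500,000:1 figure used earlier.

```python
# Checking the mass ratios quoted above, with all masses in electronvolts (eV).
# The neutrino mass is not precisely known; 1 eV is an assumed illustrative value
# consistent with the "500,000 times lighter than an electron" figure in the text.
electron_mass_ev = 0.511e6    # 0.511 MeV
w_boson_mass_ev  = 80.4e9     # ~80.4 GeV
neutrino_mass_ev = 1.0        # assumed, for illustration

print(f"W boson / electron:  {w_boson_mass_ev / electron_mass_ev:,.0f}")   # ~157,000
print(f"electron / neutrino: {electron_mass_ev / neutrino_mass_ev:,.0f}")  # ~500,000
print(f"W boson / neutrino:  {w_boson_mass_ev / neutrino_mass_ev:.1e}")    # ~8e10
```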
The above chart is an excellent summary of the standard model as it stands in 2021, and I would highly recommend it for teaching purposes, except for the fact that I come to bury the standard model, not to praise it, to misquote Shakespeare.
Despite all the above there is still no explanation of what charge really is and how it works (especially how oppositely charged particles attract each other at a distance, “force carrying” particles notwithstanding), nor what electrical current is and why it is found in combination with magnetism that spins around the current. There is a property called “spin” that has been assigned to elementary particles, but it is supposedly no more meant to indicate a real physical spin than the flavour of a quark is meant to indicate what it tastes like.
Chemistry
Some claim that quantum mechanics explains the periodic table of elements, and by extension much of Chemistry, but that is not my understanding. The problems were so evident to me that by the time I’d reached University I already realised it was no use taking classes in Chemistry as there were so many fundamental issues; it took me a few more years to realise that there were similar issues in most areas of physics and cosmology as well. Here are some of the issues I originally had with Chemistry:
- Why doesn’t the position of an element in the periodic table tell us all of the physical properties that element would have on Earth in bulk? For example, whether it is a solid/liquid/gas and what density it is, whether it decays through radiation, whether it can become a magnet, whether it is a superconductor, etc.
- Similarly with the properties of compounds of different elements: on Earth both Hydrogen and Oxygen are gases, but water, which is the combination of the two, is (usually) a liquid. Sodium is a highly reactive element – especially in the presence of water – and Chlorine is a highly toxic gas, yet Sodium Chloride – common table salt – is essential for our health, dissolves readily in water and is highly stable.
- Why does water have so many properties that differ from other liquids (e.g. on Earth solid water – ice – is less dense than liquid water so it floats, and liquid water forms clouds that float high in the sky)?
There are also more complex issues, including bridging the gap between what we call inert compounds and what we call life, and a number of other issues that require a lot of background knowledge to explain, many of which are listed on Wikipedia.
Cosmology
Now that I’ve discussed a lot of the issues I have with the science of the very small, I’ll jump up a few orders of magnitude and discuss consensus science views of the very large.
Electric Currents
The largest problem in Cosmology currently in my opinion is the complete refusal to accept that electric currents exist in space, contrary to evidence that has steadily been increasing in the last century, and especially in the last 60 years since we were able to start sending probes beyond the Earth’s atmosphere. It was only a few years after the discovery of the electron that Kristian Birkeland, thanks to his direct experience of auroras, came up with the theory that auroras were caused by electric currents travelling from the Sun to the Earth, which was expanded upon later by Nobel Prize winner Hannes Alfvén.
This theory was dismissed in the UK and USA, due initially to the intervention of the very renowned Lord Kelvin, whose life’s work on thermodynamics naturally led him to a heat-based explanation, and who proclaimed (by fiat, with no evidence) that currents could not possibly extend all the way from the Sun to the Earth. His dogma was continued by Sydney Chapman.
Birkeland was finally vindicated thanks to the readings on magnetometers on US Navy satellites launched in 1963 and 1966, although it took another year before the results were written up by Alex Dessler, leading to the suggestion by Alfvén that they be called Birkeland-Dessler currents.
Despite this admission, and the vast number of observations whose structure is predicted as being caused by electric currents and can often be duplicated at a smaller scale in a laboratory, no-one who wants a career in astronomy/cosmology will admit to electric currents in space, beyond a few self-contained ones around planets or in the Sun. In fact, apart from a few (mainly from Scandinavia), most people will not even use the accepted term “Birkeland current”, but will instead use the term “field-aligned current”, or more often FAC, which by its very name implies that there is some magic field embedded in space that is directing the path of the current.
It is now admitted in consensus science that magnetic fields are everywhere we see matter (or at least light) in space – despite the fact we know electricity and magnetism are two sides of the same coin, and there is not meant to be any electricity of note in the cosmos.
Gravity
On a similar note, consensus science would have you believe that gravity is the only force responsible for the structure of the Universe on a large scale. This basically dates back to Newton, whose law of universal gravitation works very well for describing the motion of the planets around the Sun (well, except for Mercury, as it turns out), so many people decided it should be good enough for describing motion in our Milky Way galaxy as well, and in other galaxies and larger structures. Except it doesn’t – in the Solar System the closer you are to the Sun the faster the orbit, but the Milky Way and other similar galaxies behave more like a rotating record: the outer regions keep pace with the inner ones, orbiting far faster than Newton’s law and the visible matter predict – otherwise the spiral structure would fall apart.
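To make the contrast concrete, here is what Newton’s law predicts for orbital speeds around a single central mass – the fall-off with distance that the Solar System shows but spiral galaxies do not (the distances are round numbers I have picked for illustration):

```python
import math

# Newtonian orbital speed around a central mass: v = sqrt(G*M / r).
# In the Solar System (mass dominated by the Sun) this matches observation;
# in spiral galaxies the measured speeds stay roughly flat with radius instead.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
SUN_MASS = 1.989e30    # kg
AU = 1.496e11          # metres

def orbital_speed(central_mass_kg, radius_m):
    return math.sqrt(G * central_mass_kg / radius_m)

for r_au in (0.39, 1.0, 5.2, 30.1):   # roughly Mercury, Earth, Jupiter, Neptune
    v = orbital_speed(SUN_MASS, r_au * AU)
    print(f"r = {r_au:5.2f} AU -> v = {v/1000:5.1f} km/s")
# Speeds drop from ~47 km/s (Mercury) to ~5 km/s (Neptune); measured galaxy
# rotation curves show no such fall-off, which is the discrepancy described above.
```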
Dark Matter
The spiral structure of some galaxies is predicted if you include the effects of electromagnetism in space and Birkeland currents. As consensus scientists refuse to recognise this, they had to invent dark matter. I say invent because, despite 50 years of looking and numerous tweaks to the theory of what it could be, there have been absolutely no confirmed observations of any form of dark matter, except as large holes in the theories. Note also that dark matter isn’t a minor tweak; current theories require around 85% of ALL matter in the Universe to be dark matter.
Another unmentioned large problem with dark matter is where it occurs – in order for dark matter to explain the spiral structure of the Milky Way, as well as many other galaxies, it has to occur in a very precise distribution – not that you can get anyone to admit this. All distributions of dark matter are either very vague or else artist’s impressions. As someone with a background in statistics I can tell you that the probability of this happening by chance is virtually zero, without some additional theory to explain why the distribution occurs – which has never been provided.
Redshift
Most of the current set of problems in cosmology started with the discovery of redshift. Most people are familiar with the Doppler effect – when a sound is being emitted by something travelling towards you, e.g. the whistle on a train, the frequency goes up as the sound waves are bunched together, whereas when it is moving away from you the waves are stretched out and the frequency drops.
Over a century ago something similar was noticed with emission spectra from far-away objects (which generally turned out to be other galaxies) – the lines that we see that indicate the presence of various elements were found shifted in frequency compared to what we see from the Sun, nearly always to the red (lower frequency) end. So it wasn’t a leap to suggest that this meant that the galaxies were travelling away from us as the light was being expanded (or in the rare case with blueshift compressed as the objects headed towards us).
Several people, with Edwin Hubble being the best-known (due to his name gracing the resulting law of cosmological expansion, although he was actually beaten to the punch two years earlier by Georges Lemaître), took this a step further and showed (based on other observations and theories) that the farther an object was from Earth the faster it was moving away from us, and hence the higher its redshift.
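For reference, here is the basic arithmetic consensus science uses: redshift as the fractional stretch of a spectral line, and Hubble’s law turning the inferred velocity into a distance (the Hubble constant and the example wavelength below are round illustrative values, not figures from the text):

```python
# Redshift z is the fractional shift of a spectral line; Hubble's law (as used in
# consensus cosmology) then relates recession velocity to distance: v = H0 * d.
C = 299_792.458        # speed of light, km/s
H0 = 70.0              # assumed Hubble constant, km/s per megaparsec (illustrative)

def redshift(observed_nm, emitted_nm):
    return (observed_nm - emitted_nm) / emitted_nm

# Example: the hydrogen-alpha line (656.3 nm at rest) observed at 700 nm.
z = redshift(700.0, 656.3)
v = C * z              # low-redshift approximation of the recession velocity
d = v / H0             # implied distance in megaparsecs under Hubble's law
print(f"z = {z:.3f}, v ~ {v:,.0f} km/s, d ~ {d:,.0f} Mpc")
```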
Expanding Space
However, Einstein’s special theory of relativity, which consensus science had thoroughly embraced, had shown that the speed of light (in a vacuum) has to be constant, which has the side-effect of it not being compressed or expanded when travelling through a vacuum. So in order to get around this expanding space had to be invented – the idea being that the light itself is travelling at the one speed, but the space that the light is travelling through is actually expanding. Despite space being, you know, space. The absence of anything (at least anything we can observe).
An increasing number of observations showed that virtually everything outside of our galaxy was redshifted, and quite a bit in some cases. This is generally explained using the analogy of an expanding balloon – if the galaxies are on the surface of the balloon as you blow it up they all get farther apart – which sounds great, but is a complete failure as an explanation in that space is three dimensional, not two like the surface of a balloon. And, you know, space.
Big Bang
That led to the Big Bang Theory – it’s not just a popular TV series. If (nearly) everything in the Universe is moving away from each other, then that would suggest that at some distant time in the past everything must have all been close together. Unfortunately that caused even more problems.
A major issue is that better instrumentation started showing objects with such a high redshift that they would have to be travelling away from us faster than the speed of light – which once again breaks special relativity, but is again explained away by saying it’s the space that’s expanding faster than the speed of light, not the objects we’re observing in it. This excuse started falling apart when combined with other observations in the late 1990s, which showed that, if the Universe was expanding, then after the initial explosion the expansion must first have slowed down a bit, but the rate of expansion then started increasing over time.
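To see why this matters, compare the naive v = cz rule with the relativistic Doppler formula – once redshifts exceed 1 the naive rule already implies faster-than-light speeds, which is exactly where the “it’s space that’s expanding” argument gets wheeled out (the example redshifts below are just illustrative):

```python
# Naive Doppler (v = c*z) versus the relativistic Doppler formula.
# The example redshifts are illustrative; objects with z > 1 are now routinely observed.
C = 299_792.458  # km/s

def naive_velocity(z):
    return C * z

def relativistic_velocity(z):
    # From 1 + z = sqrt((1 + beta) / (1 - beta)), solved for beta = v/c.
    s = (1 + z) ** 2
    return C * (s - 1) / (s + 1)

for z in (0.1, 1.0, 2.0, 7.0):
    print(f"z = {z}: naive v = {naive_velocity(z)/C:.2f} c, "
          f"relativistic v = {relativistic_velocity(z)/C:.2f} c")
# The naive rule exceeds the speed of light for z > 1; the relativistic formula never
# does, so consensus cosmology instead attributes high redshifts to expanding space.
```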
Dark Energy
For this to work consensus science then had to go one better over dark matter and invent dark energy. This required normal matter to be further demoted; under the new model around 68% of the entire mass-energy of the Universe must be dark energy (which, like dark matter, has yet to be actually observed), 26% dark matter (which we still can’t find) and only around 5% the actual matter which we’re used to, with the remaining “very small amount” contributed by neutrinos and photons. Although dark energy doesn’t really explain why the expansion of the Universe slowed down in the past after the big bang, and then started speeding up again.
Redshift Redux and Quasars
Before we get lost even more down this rabbit hole, let us revisit redshift. Dark Matter, the Big Bang, the Expanding Universe and Dark Energy, just to name a few unproven theories, all stem from the fundamental decision that redshift is a type of Doppler Effect. But is it?
Interestingly Edwin Hubble himself in 1941 reported that a six-year survey did NOT support the expanding universe theory, so he spent the latter years of his life arguing against the “law” named after him.
Halton Arp certainly didn’t think so either. He pointed out that there are many observations of what appear to be linked galaxies or other objects – especially galaxies and quasars – where one of the bodies has a very different redshift from the other, which should not be possible. Initially this was explained away by saying that they just happened to look like they were linked, and in fact one object was much farther away than the other. As our telescopes and other technology have improved, showing not only more examples but better resolution, this has become an increasingly unlikely explanation – especially as, for it to be true, the more distant object has to be much larger in size for the comparison to work.
Several of these appeared in Arp’s Atlas of Peculiar Galaxies back in 1966; they were the main subject of Quasars, Redshifts and Controversies and Seeing Red, and later of the Catalogue of Discordant Redshift Associations. Many more examples have been found since, including a very telling quasar that is almost certainly in front of the spiral galaxy NGC 7319, despite its redshift implying it is several billion light years behind it. That single example invalidates the entire theory that redshift can only be due to a Doppler effect. Recent observations also seem to contradict Doppler redshift.
Speaking of quasars – which according to consensus science are objects a lot smaller than galaxies but thousands of times as bright – they also have a large number of issues that have been known since 1967. The list of problems has been growing; dozens of quasars have apparently disappeared over as little as ten years, with a normal galaxy appearing in their place.
So are there any alternative explanations to Doppler redshift? Yes. Emil Wolf predicted in 1987 that it was possible for redshift to occur even when the two objects were not moving in relation to each other; this was proven in a laboratory a year later leading to the Wolf effect being named after him. This might explain the difference between the redshifts of connected objects like quasars and galaxies.
Dozens of alternative theories have been proposed; many have been disproved, but many more are still awaiting the necessary evidence to determine their accuracy. One of the leading suggestions, by Ari Brynjolfsson, is that in hot, low density plasmas there is some energy loss of the photons travelling through them to the plasma, causing the plasma to heat up and the photons to drop in frequency.
Originally it was thought that plasma would only be found in limited parts of the Universe, around stars and nearby objects, but with the confirmation of Birkeland currents and the solar wind, the interplanetary medium, the interstellar medium and the intergalactic medium, it’s been acknowledged that plasma exists in space pretty much everywhere we care to look for it; in fact it is estimated that over 99% of all matter in the Universe is made out of plasma, completely at odds with the world of solids, liquids and gases we’re used to. If this is correct it would also mean that redshifted objects are merely a long way from us, not that they’re moving away, or even accelerating away, from us.
Pulsars
In 1967 Jocelyn Bell discovered a radio source in the sky that appeared to be pulsing, sending out signals about every 1.33 seconds. A few months later she found a second source. A year later it was decided to call it a pulsar (pulsing star), and a theory was suggested that it was a special type of collapsed star, or star-core, only 20km across or so, a bit like a lighthouse, with the light (visible, X-ray, radio and even in some cases gamma rays) being emitted out of the poles as the object rotated.
Ignoring the fact that we have an exceptionally small object (compared to a star – or even a planet – or even an asteroid) that is often producing more energy than a full-size star, for the pulsing to be due to rotation the object must be rotating at an incredibly fast rate. Even allowing for two pulses per rotation, giving a rotation period of 2.66 seconds, a circumference of around 70 km (in line with the ~20 km diameter above) means the equator of the star would be moving at about 26 km/sec, around 94,000 km/hour.
We now know of over 1500 pulsars, and as of 2021 the one with the shortest period seems to be rotating 716 times per second, implying an equatorial speed of around 70,000 km/sec – around 24% of the speed of light! There is no way any material object could handle the centrifugal forces – it would break apart well before reaching this speed (more likely even before reaching the rotation rate of the first pulsar) – so to counter this it was simply decreed that centrifugal forces don’t apply in these cases. Just because.
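A rough back-of-the-envelope check of the figures above; the ~70 km circumference comes from the first pulsar as discussed, while the ~16 km radius for the fastest-known pulsar is an assumed, commonly quoted value rather than a measurement, so treat these numbers as illustrative.

```python
import math

# Equatorial speed of a rotating body: circumference / rotation period.
def equatorial_speed_km_s(circumference_km, period_s):
    return circumference_km / period_s

# Original pulsar: ~70 km circumference (about a 22 km diameter), one rotation
# per 2.66 s if each rotation produces two observed pulses.
v1 = equatorial_speed_km_s(70.0, 2.66)
print(f"First pulsar: ~{v1:.0f} km/s (~{v1*3600:,.0f} km/h)")   # ~26 km/s, ~94,000 km/h

# Fastest known pulsar: 716 rotations per second; assuming a ~16 km radius.
radius_km = 16.0
v2 = equatorial_speed_km_s(2 * math.pi * radius_km, 1 / 716)
print(f"Fastest pulsar: ~{v2:,.0f} km/s (~{v2/299_792:.0%} of light speed)")  # ~72,000 km/s, ~24% of c
```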
We also now know of pulsars with irregular pulsing – which, if they’re rotating, means that they’re stopping and starting very quickly, which would cause even more stress and require a further explanation for the acceleration and deceleration. One pulsar that was quite happily spinning around about once a second suddenly stopped pulsing completely for 580 days, then started pulsing again. Another regularly pulses for a week and then turns off for a month. Others just have irregular, or semi-regular, pulsing cycles.
One explanation is that there is another body (like a normal star or planet) near the pulsar that every so often causes the pulses to miss the Earth (so the claim is that the pulsing is still regular, we just don’t see some of the pulses), but this can’t explain all of the irregularities found so far. Other pulsars change not only their spin rate over time, but also the frequency of the light they are emitting.
Variable Stars
It turns out that it is not only special stars like pulsars that can vary quite quickly; “normal” stars can as well. Consensus science teaches us that stars evolve over enormous periods of time as they burn through their Hydrogen supplies, with lifetimes varying from millions to trillions of years. This means that stars should be extremely slow to change, and they should change along a defined path, never returning to a previous state.
Contradicting this we now know of many stars that actually change over small time periods. Betelgeuse (the star, not the film) in the 1900s was the tenth brightest star in the sky – the red one in the constellation of Orion. Back in the 1800s Herschel noticed that over a number of years Betelgeuse quickly brightened, then dimmed, then didn’t change much for several years. This cycle has been repeated a few times since, with it dimming by over 35% in the last few years.
The most recent explanation is that parts of the star are rotating at different rates, which then result in an expulsion of material which absorbs some of the light, causing it to dim. However, there is no real reason given as to why this is happening, apart from some “convection cell” that exists in the star for some reason for very long periods of time – which might be a suitable explanation if this was a ball of liquid, but not if it is plasma.
It also doesn’t explain why around 2300 years ago Chinese astronomers recorded the star as being yellow, not red, and why in the 1600s it was for a long period even brighter than Rigel, which is also in Orion and is normally the brighter of the two. The variability also seems to have been recorded by First Nations people in Australia.
Other stars have been even more dramatic, with hundreds appearing to vanish entirely, only to reappear later – well, most of them. In fact, our own Sun is a highly variable star – just not in the visible spectrum. The overall luminosity is only meant to vary by about 1 part in 1,000 over the (roughly) 11 year solar cycle. But the amount of ultraviolet can vary by thousands of times, and the change can take place over just a few seconds. Similarly X-ray production can vary by a huge amount, often in cycles like the solar cycle.
Speaking of the solar cycle – what causes it, and why is the variability so small in the infrared and visible parts of the spectrum, but so large in other parts? Coincidentally, the variability of the length of the solar cycle seems to line up quite well with the orbits of the Jovian planets (Jupiter, Saturn, Uranus and Neptune), especially the orbit of Jupiter, but consensus science would have you believe that there is no way for the planets to affect the Sun – such a suggestion generally means that you will be labelled as practising astrology, and hence immediately ignored by any career scientist.
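For what it’s worth, here are the basic numbers behind that claimed alignment – standard published orbital periods next to the usual average cycle length, nothing more:

```python
# Average sunspot cycle length alongside the orbital periods of the Jovian planets.
# All figures are standard published values, listed here purely for comparison.
solar_cycle_years = 11.0          # approximate average length of the sunspot cycle

jovian_orbits_years = {
    "Jupiter": 11.86,
    "Saturn": 29.46,
    "Uranus": 84.0,
    "Neptune": 164.8,
}

for planet, period in jovian_orbits_years.items():
    print(f"{planet:8s}: {period:6.2f} yr  (ratio to solar cycle: {period / solar_cycle_years:.2f})")
```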
So what does cause the solar cycle? A few theories have been put forward, but so far none of them have been accurate in predicting future solar cycle lengths or sunspot intensities, nor even able to match past variability, including breaks in the cycle like the Maunder Minimum, a period of 70 years when virtually no sunspots were observed, coinciding with a noticeable cooling in at least Europe and North America known as the Little Ice Age.
But wait, there’s more
This is a reasonable sampling of most of the problems I am aware of in consensus science in both the little and big ends of town. I will be adding more over time; one will be black holes, and how they’ve become a lot less black and a lot less holey since their first invention.