The sudden appearance of the notions of Dark Energy and Dark Matter was the game changer that finally nudged me onto the path of outright scientific heresy. I had been a physicist for many years, researching the central nervous system using high-technology imaging techniques like MRI and Nuclear Medicine. I had pretty much accepted all I had read in my old textbooks, though I had maintained what I believed was a healthy scepticism. I understood that, now and again, new observations would come along that would overturn whatever was the scientific orthodoxy of the time. Nevertheless, I believed that eventually science would converge on the truth and ultimately reveal the workings of the universe.
I don’t believe that now.
So why was this a defining moment? For most of the twentieth century, the firm consensus had been that the universe had started with the Big Bang, that it had exploded into existence out of nothing, and that it has been expanding outward ever since. Although other theories had come and gone since this idea was first mooted in 1927, the Big Bang Theory, and the theory of gravity that underpinned it, had become received wisdom: gospel, almost.
This had got to the point where the big question was no longer whether this was true, but what the ultimate consequences would be. Would the universe expand forever, each star flying further and further away from the others, burning up all its fuel and becoming a dead cinder? In this version the whole universe would end in a dying whimper.
Or alternatively, would there be enough matter in the universe for gravity to overcome the explosive effects of the Big Bang, eventually drawing everything back into one great ball of matter in a process called the Big Crunch, perhaps for the whole cycle of expansion and contraction to begin again?
For a long time it looked as if it could go either way because the estimated mass of the universe seemed perilously close to the tipping point.
But then cosmology, which is the study of the origin and fate of the universe, became rather surreal. Respected physicists announced that, as well as the mass and energy that made up the known universe, there was also another nineteen times as much that we hadn’t known about.
Oh yes, and that this extra nineteen times as much mass and energy was all invisible and undetectable. They called this stuff Dark Energy and Dark Matter, which made them sound rather spooky and sexy.
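For readers wondering where that factor of roughly nineteen comes from, here is a minimal back-of-the-envelope sketch in Python. It assumes the approximate mass-energy budget usually quoted by cosmologists (around five per cent ordinary matter and energy, with the rest dark); the exact percentages here are illustrative rather than definitive.

```python
# Back-of-the-envelope check of the 'nineteen times as much' figure,
# using the approximate budget usually quoted by cosmologists.
ordinary = 0.05      # ~5%: all the matter and energy we can actually detect
dark_matter = 0.27   # ~27%: the invisible matter said to hold galaxies together
dark_energy = 0.68   # ~68%: the stuff said to push the universe apart

dark_total = dark_matter + dark_energy
print(round(dark_total / ordinary))   # 19 -- nineteen times as much dark as ordinary
```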
Then they went even further and said that this Dark Energy acted in the opposite way to gravity: it pushed matter apart rather than pulled it together. This was quite a stretch as nothing even remotely like that had ever been detected, yet now the universe was supposedly teeming with this Dark Energy.
And the reason for making this incredible announcement wasn’t that Dark Energy and Dark Matter had been observed, but that they supposedly had to be there to explain experimental measurements that would otherwise undermine some rather precious theories. Specifically, astronomers had found that distant galaxies, rather than slowing down under the effect of their mutual gravitational attraction, were instead accelerating away from each other.
This would make nonsense of the long held theories that underpinned what we thought we knew about gravity. If cosmologists were to preserve their faith in these theories then a couple of very large rabbits would have to be pulled out of a very big hat.
In other words Dark Energy and Dark Matter were fudges of, literally, astronomical proportions.
And how did the rest of the world react to this stunning announcement? What was the effect when they realised that physics had been so fabulously in error? Did they burn their textbooks or hang the scientists from street posts? Did they never believe a word the scientists said again?
Not at all. Those who had any interest became intrigued by the science fiction-like ideas of invisible matter and anti-gravity forces. In other words they focused their attention on the rabbits, rather than the fact they had appeared out of thin air.
I, on the other hand, found my faith in physics shaken. This prompted me to really go back to basics and examine our most fundamental scientific assumptions. And wherever I looked, whether it was in physics or biology or medicine, or at even the maths that underpinned them, all certainties dissolved like morning mist in sunlight.
Not only that, but I discovered that for thousands of years respected scientists and philosophers have been warning of these problems inherent in all aspects of science. Often couched in abstruse language and hidden away in dusty textbooks, these warnings have been ignored or even suppressed or just plain forgotten in the face of the apparent successes of science. Indeed it is with the aim of bringing these concerns to the general reader without any training in science that I have written my book Science for Heretics.
Let’s consider just a very few examples of these problems involving maths, physics, and our understanding of the human brain and mind.
Take the simple equation 1 + 1 = 2. Self-evident, unequivocal and the basis of our whole mathematics, it is also completely wrong in the real world. It makes the assumption, sound only in the fictional domain of mathematics, that there are at least two identical things in the universe. In reality, no two things in the universe are truly identical, from grains of sand up to galaxies. We may consider things like atoms and sub-atomic particles to be identical but that is only an assumption. Whenever we’ve been able to look at supposedly identical things with sufficient resolution (for example when newly developed light microscopes were used to look at creatures like fleas) we have always found some differences.
If there are indeed no two things in the universe that are identical then 1 + 1 never equals 2. At best 1 + 1 is only approximately true, though do you remember your maths teacher ever pointing that out? The only way that one object plus another object equals two objects is if you ignore all the things that make them different.
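To make the point concrete, here is a purely illustrative sketch in Python; the two ‘identical’ one-kilogram weights and their measured values are invented for the example. Counting treats the objects as interchangeable units, but adding what we actually measure never gives exactly two of anything.

```python
# Idealised arithmetic: two interchangeable, perfectly identical units.
print(1 + 1)                 # 2, exactly -- but only by ignoring what makes things differ

# Two real, supposedly 'identical' one-kilogram weights, as actually measured.
# (These figures are invented purely for illustration.)
weight_a = 1.0000013         # kg
weight_b = 0.9999978         # kg
print(weight_a + weight_b)   # roughly 1.9999991 kg -- close to 2, never exactly 2
```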
Indeed, it might be argued that many young children labelled as being ‘bad with numbers’ are simply struggling to fit this fictionalised mathematical system into a universe that to them is obviously far more complicated.
Perhaps this non-identicality is the only fundamental truth underlying the universe yet, if so, it is a truth that our maths and science studiously ignore.
Maths also breeds other possibly quite fictitious concepts such as infinity and as a result physics, in thrall to mathematics and equations, in turn throws up what may be entirely mythical beasts like Black Holes. These latter are a consequence of infinities appearing in gravitational equations (even Einstein, who developed the theory, wasn’t convinced of their reality). Yet is there anything in the universe that is actually infinite? Is there really any such thing except in the intellectual realm of mathematics, itself based on a fundamental fiction?
One major problem in physics is the disconnect between the physics of the very large (cosmology) and the very small (particle physics). Even the mathematics governing the equations of the two is quite different: smooth and continuous in the former, abrupt and discontinuous in the latter. When attempts are made to unite the two physics, via versions of a Grand Unified Theory (GUT), physicists find themselves resorting to at least six new spatial dimensions for which there is absolutely no evidence. Not only that, but the new particles required by the theory will never be detectable due to their supposed high energies. In other words this new String Theory can never be proved. Wolfgang Pauli, a famous physicist, once described physics theories that could never be validated as: ‘Not even wrong’.
It is not only undetectable new dimensions that physicists find themselves having to resort to in some GUTs, but also multiple, maybe even infinite, other universes. These too are undetectable.
Physicists who believe in these unverifiable things are showing faith in a strange new religion, yet this is a religion in which man, rather than God, is at the centre of the universe. There is no space here to go into the quantum mechanical world of poor Schrödinger’s Cat but suffice it to say that experimental evidence shows that a ‘particle’ like a photon may act as a wave in one circumstance (when there is no human observer) but like a particle in another (when a human observer is involved). Physicists extrapolate from this to the point where they consider that the universe is in a series of ghostlike states (alternative realities) each of which resolves into only one state if a human observer is looking. Einstein was never happy about this and once asked: ‘Do you really think the moon isn’t there if you aren’t looking at it?’
Incidentally, if you don’t like this man-centric view of the universe, then physics offers an alternate theory: that each individual quantum event (of which there are countless millions just in your thumb over the next second) will split the universe into two versions and it just so happens you wind up in one of those many universes. In other words, we are back to resorting to multiple unprovable universes again.
Something is surely going badly wrong here, yet these sorts of ideas are the scientific orthodoxy of the present day.
This sort of disconnect between the physics of the very large and the very small is reflected throughout the sub-specialities of physics such as electromagnetism or thermodynamics or mechanics, as their laws and theories generally do not translate from one speciality to the next. Thus the core problem with the notion of physics being a unified science is that it simply isn’t. As time goes by, the situation gets worse as the science splits into smaller and smaller sub-specialities, each with their own sets of ‘fundamental’ laws.
It’s also worth mentioning that the twenty-fold Dark Matter and Dark Energy fudge is trivial compared to the rule-bending necessary to keep many other physics theories on the rails.
One begins to wonder whether our theories of physics are simply a projection. Perhaps these are like the lines of latitude and longitude we project onto the surface of the Earth. As far as the Earth and its workings are concerned, these lines have no relevance whatsoever to its reality: how it works, how it exists, its complexity, its past and its future. Nevertheless latitude and longitude can be very useful to us and allow us to navigate over its surface with great success, though they are a purely human fiction. Perhaps the universe has no laws underlying it at all. Perhaps the universe just is the way it is and we are simply spinning tales when we try to explain it.
Even a scientific heretic has to agree that physics and the other sciences can be useful. Experiment can show regularities in the behaviour of the universe (at least here and at least now) that we can then exploit. However, the theories we subsequently construct to explain our experimental findings may have nothing to do with the truth, and have to become more and more elaborate and bizarre in a futile attempt to understand something that has no explanation.
So, my researches into the basics of both physics and maths had served only to undermine what, I now realise, had been my naive faith in all theories and supposed laws. Even more dismaying, wherever I looked in other fields such as biology and chemistry and medicine I found the same thing. These many problems are described in detail in the book but here we’ll look at just two examples from studies of the human brain and mind.
Hundreds of thousands of neuroscientists have devoted their careers to attempting to understand the human brain. This consists of about 100 billion interconnected brain cells (neurons). However, we still have no idea, from a neuronal point of view, why some of us feel love, appreciate music or find old Laurel and Hardy films funny.
One can even begin to wonder if the whole neuronal model of thought may be a fundamental misconception. Insects, despite in some cases having only a millionth of the neurons of mammals such as ourselves, can still do quite a lot. Their compound eyes have hundreds of separate lenses to our two, yet somehow their limited number of neurons can integrate them all into an overall view of the world. They also have antennae to feel, which we generally don’t, and their tiny brains need to take on board the stimuli coming from the numerous hairs on their bodies that allow them to sense the wind. They also sometimes have wings to control and, if you’ve ever tried to swat a spirited fly, you’ll know what exquisite control they have of these.
They also know how to hunt and trap prey, how to eat it, how to mate, how to recognise certain smells, and how to navigate away from and back to their nests.
And some insects can build truly impressive things. Even though termites have so few neurons, they produce vast and labyrinthine citadels up to ten metres in height. They build roads, bridges and tunnels across the forest floor and all with two lanes for the two directions of travel. Below their mounds they excavate down to sixty metres to find water.
Insects cast even further doubt on our understanding of the brain. Carl Linnaeus, the father of the taxonomic classification of creatures into different groups, in the eighteenth century constructed his insect grouping on the basis that they all had no brain at all. He based this on the observation that insects such as horseflies can live after decapitation. The Comte de Buffon, another naturalist of distinction at the time, went so far as to say the ‘…horsefly, will live, run, nay, even copulate, after being deprived of its head’. There is even some recent evidence that parts of a cockroach’s body other than the brain may be associated with memory. Indeed cockroaches can survive several weeks without having a brain at all.
We’ll leave this section on the brain with the words of one worker in the field, Christof Koch, Chief Scientific Officer at the Allen Institute for Brain Science in Seattle: ‘The round worm has exactly 302 neurons, and we still have no frigging idea how the animal works’.
The historical failings of medical theory are obvious. For two thousand years, medics were in thrall to Hippocratic theory whose chief treatments were the bleeding and purging of their patients. Bloodletting was used in almost all diseases, even for patients who had already bled heavily from wounds or were about to undergo amputation. It is thought that George Washington, a haemorrhoid sufferer, died in 1799 as a result of the ‘heroic bloodletting’ administered by his doctors. Thus it is likely that medics killed far more people than they ever saved. More recently, medicine has moved to a much more evidence-based approach (seeing what actually happens rather than depending on an over-arching theory) and has become far more effective.
Nevertheless, there is one aspect of medicine where theory, or to be precise two theories, still hold sway. This is in the medicine of the mind, where these theories have split the psychiatric profession into two warring factions. One side is trying to shoehorn the wide spectrum of mental illnesses into possibly quite fictitious diagnostic pigeonholes generated by committee, at the stroke of a pen turning tens of millions of us into sufferers overnight. The other side, meanwhile, isn’t even sure that mental illnesses are disease entities at all, seeing them instead as the product of internal responses to external social factors rather than actual diseases that somehow originate from an organic abnormality. These latter psychiatrists, from the Freudian tradition, believe that a patient’s ‘craziness’ as perceived by the psychiatrist is in fact only a measure of the social distance between the two. In other words, if the psychiatrist were in the patient’s shoes and had had the same life experiences, they would see exactly where the patient was coming from.
However, the former group, the categorisers, believe the broad spectrum of mental illness can be broken down into hundreds of discrete categories of disease. For the last thirty years the categorisers have been in the ascendant, though there rarely seems to be any anatomical basis for mental illness, as at autopsy the brains of individual sufferers are usually indistinguishable from those considered normal. The categorisers’ work is enshrined in the Diagnostic and Statistical Manual of Mental Disorders (DSM). Started in the 1950s, it has gone through a number of editions. Though originally a little-regarded academic exercise, the impact of its later editions on the global practice of psychiatry cannot be overestimated. The hundreds of categories of mental illness vary from edition to edition, each one provoking acrimonious debate and often requiring a vote amongst psychiatrists to decide whether a disease exists and should be included.
For example, in DSM-I homosexuality was identified as a mental illness. This did not go down well with the gay community so in the next edition, DSM-II, homosexuality was instead considered a mental illness only if the ‘sufferer’ was troubled by their sexual orientation. They even coined an entirely new term for this: homodysphilia. People tried to point out that perhaps homosexuals who felt distress at their condition didn’t do so because of some underlying disease but because society, certainly in the 1970s, hounded and ridiculed and discriminated against them. In fact this rather chimed in with the old Freudian psychoanalytical view that mental ill health could be seen as the result of external rather than internal factors. Despite acrimonious debate, the categorisers’ view prevailed and the condition still appeared in DSM-III though, instead of being called homodysphilia, it was now called ego-dystonic homosexuality. So homosexuality was still at least indirectly pathologised (treated as a disease) because there was no corresponding ego-dystonic heterosexuality.
This issue of the DSM’s variable treatment of homosexuality illustrates a telling point. In this case at least, the identification or otherwise of a so-called mental disorder had been a function of time. What this means is that the categorisation of the ‘illness’ had changed as society’s attitudes to homosexuality had changed. Who is to say that this isn’t also true, to some extent at least, of other ‘mental disorders’: that they are not the absolute disease entities the categorisers would have us believe, but rather change with time as society’s attitudes towards them change? A similar issue pertains to tomboyism, which was pathologised in DSM-III. The fact that girls might wish in some ways to be like boys could at that time have been quite easily explained without resorting to mental illness. Up until at least the time of DSM-III (1980), it was boys who had more power over their lives and were generally given more options in terms of how they lived them. That alone might explain why some little girls wanted to be like them.
In DSM-III whole new disorders emerged such as attention deficit hyperactivity disorder (ADHD). There was even suddenly a ‘Tobacco Use Disorder’, later renamed ‘Tobacco Dependence’ in response to the furore and ridicule caused by tens of millions of ‘normal’ people being suddenly medicalised overnight. The conditions of autism, post-traumatic stress disorder (PTSD), anorexia nervosa, bulimia and panic disorders also suddenly appeared and grew ‘empires of sufferers’. ‘Neurotic depression’ was no longer considered either neurotic or a depression. ‘Hysteria’, beloved by medics and psychiatrists for thousands of years, became factitious (in other words artificial or a scam) and equally applicable to both sexes for the first time.
There is a growing feeling nowadays that the DSM’s simple disease model approach may be fallacious and that no specific disease entities underlie what is simply a range of symptoms; that the categorisation of mental disease is itself a phantasm. There are certainly no other fields of science where truth has to be decided by a vote.
So what does it matter that in science we tend to follow theories that may have no basis in reality? One of the purposes of Science for Heretics is to point out the risks we run, perhaps at extinction level for mankind, by doing this. Science gives us theories that purport to explain how the universe works. This breeds confidence in scientists who then go on to do things that carry certain risks. These risks are then rationalised away on the basis of existing theory. There are vast energies in the cosmos as evidenced by the bursts of high energy cosmic rays of unknown origin that sleet down through our atmosphere. By tinkering with physics and biology are we like children walking through a minefield? Could we accidentally unleash these huge energies, or produce some new biological organism that might wipe us out?
We’ve actually been ambling through this minefield for quite a while now with scientists at each stage rationalising away the possibility of catastrophe. So far they have been right, though usually for the wrong reasons, but will our luck hold forever?
Take for example the discovery of radiation. No theory of the day predicted its fatal effects on the human body. Or the first hydrogen bomb test: some physicists thought it possible that the hydrogen fusion process in the bomb would set off a chain reaction in the hydrogen of the atmosphere and so ‘set the atmosphere on fire’. Only three scientists were put on the job of investigating whether this was a possibility and, using the existing theory, rickety and speculative and often just plain wrong as it was at the time (the current orthodoxy of the Standard Model that superseded it was still thirty years in the future), they concluded that such a thermonuclear reaction would not kill all life on Earth.
Luckily, they were right, though the theory they used was wrong in other ways as demonstrated by the explosive yield of the bomb being far greater than predicted.
The Large Hadron Collider (LHC) is the latest example of how we have managed, so far, to get away with it. Concerns were raised by some physicists that colliding beams of particles at such energies might cause the creation of theoretical phenomena like Black Holes, strangelets, magnetic monopoles, Bose-Novas or vacuum bubbles, all of which could extinguish life on Earth and, in the last case, destroy the whole universe.
A specially convened LHC Safety Assessment group was set up to use existing physics theory to show these events could not happen. Theory that, if we have learned anything from the history of physics, we know will very likely be overturned at some point. Indeed, it was to validate some aspects of existing theory that the LHC was built in the first place. On such circular arguments may the existence of life on Earth depend.
And similar problems apply to biology due to both our limited understanding and our poor record of securely handling pathogens, though subsequent deaths might only be in the billions rather than at extinction level.
So where does this all leave us? Perhaps the idea of laws underpinning reality is a falsehood and as a result we need more and more scientists and more and more computational power to produce greater and greater elaborations on our theories to make them fit inconvenient experimental data. We’re forced to make our theories increasingly complicated and also to break science down into smaller and smaller sub-specialities, each with ever more disparate theories pertaining only to their speciality and not to others. Perhaps when it comes to research we should put the emphasis back on open minded observation and away from attempting to corroborate or disprove rigid theories. Not being shackled by theory in this way may actually be quite liberating. And as for the unanticipated dangers, at the very least the scientific community needs to exercise far more humility and caution.