Today is Easter Sunday, the primary “movable feast” in the church calendar, on which the other movable feasts depend. Easter Sunday is supposed to fall on the first Sunday after the first full moon on or after the Vernal Equinox, because it’s connected to the Jewish Passover. That’s complicated enough as it is, and it’s further complicated by the fact that the date varies between the Eastern Orthodox Church and the Western Churches, and complicated again by the fact that both the Vernal Equinox and the full moon are defined by the Church rather than by looking at the actual sky. The equinox is fixed on 21st March, and the ecclesiastical full moon, which is the full moon as defined by the Church, is the fourteenth day of the ecclesiastical lunar month. This differs from the real lunar month, whose length varies between about 29.27 and 29.83 days. The calculation is referred to as computus; it goes back to the early Church, but the version we use was instituted by the Roman Catholic Church in 1582, when it fixed the Vernal Equinox as part of the Gregorian calendar reform. In 2015, Pope Francis proposed harmonising the date with the Orthodox Church, whose calendar is still Julian rather than Gregorian like ours, as a show of support. The Julian calendar is currently thirteen days behind ours because it treats all years ending in two zeroes as leap years, whereas ours skips the ones not divisible by 400.
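For the curious, the Gregorian computus can be written down in a few lines. This is the well-known anonymous “Meeus/Jones/Butcher” algorithm, not anything of my own devising, sketched here in Python:

```python
# Anonymous Gregorian computus ("Meeus/Jones/Butcher" algorithm):
# the date of Easter Sunday for any year in the Gregorian calendar.
def easter(year: int) -> tuple[int, int]:
    """Return (month, day) of Easter Sunday for a given Gregorian year."""
    a = year % 19                       # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)                 # century leap-year corrections
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30  # epact: age of the ecclesiastical moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(easter(1978))  # (3, 26) - 26th March
print(easter(2017))  # (4, 16) - 16th April
```

Running it for 1978 gives 26th March, which matches the date mentioned further down this post.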
It’s all a bit complicated and peculiar. The “movable feasts”, whose dates depend on the date of Easter, include, in the Western churches, the Rogation Days, the Global Day Of Prayer For Peace, Pentecost, Whitsun and the other Whit Days, the Feasts of Christ the Priest and the Crown of Thorns, Trinity Sunday, Corpus Christi, Petertide, Sea Sunday, the Day Of Prayer For The Peace Of Jerusalem, World Communion and Mission Sundays, All Saints’ Day (which is surprising, as Hallowe’en is not movable), Septuagesima, Sexagesima, Quinquagesima, Shrove Tuesday, Ash Wednesday, Quadragesima, Palm Sunday, Maundy Thursday and Good Friday, and no, I don’t know what all of those are.
As time goes by, this planet rotates more slowly. This is because it’s braked by the Moon (I’m resisting calling it Cynthia, although it does feel wrong to still call it the Moon), and as that happens the Moon moves outward from us and the month consequently gets longer. The energy and angular momentum of Earth’s spin are gradually being transferred to the Moon, which raises its orbit, so it moves outward and takes longer to go round us. Meanwhile, the date of the Vernal Equinox varies because the ellipse that is our orbit round the Sun gradually swivels, like a Spirograph pen would. Various other things happen as well. Consequently, the question arises of how many Easters there can possibly be according to computus.
How can we know how long the day was in prehistoric times? The answer is that there are certain living things, and other processes, which occur in daily cycles: rings can be counted in shells and corals, which also show seasonal variations according to hot and cold, and layers of silt can be laid down seasonally, forming a primitive calendar. The further back you go, the more days there are per year. This would mean either that the days are getting longer or that the years are. However, if the years were getting longer it would mean our orbit used to be smaller, and the planet would have been much hotter unless the Sun was warming at precisely the same rate as the orbit was widening. For some reason the day, though it lengthens constantly, doesn’t do so particularly steadily. At around the time of the extinction of the non-avian dinosaurs, 65 million years ago, the day was around twenty minutes shorter than it is now, so there were around 370 days in a year. Go back to the beginning of the age of dinosaurs, well over a hundred million years earlier, though, and the day was only half an hour shorter than it is now, so the slowing is not linear.
The year currently lasts about 365.25 days. Incidentally, there is no single definitive day length, because it depends on whether it’s measured by the Sun, the stars or the Moon, all of which give different answers. Leaving that aside, the question arises of when there were last 366 days in a year and when there will be exactly 365. The answer to the first question is around 20 million years ago, when there were definitely apes but none of them were particularly human-like. That point isn’t especially important with reference to Easter and Passover, because religious festivals didn’t exist yet. However, since we do now have Abrahamic religion, the question arises of how long it would make sense to adhere to such a calendar. This calendar, with its leap years generally every four years, skipping three every four centuries, will cease to make sense once there are exactly 365 days in a year. By that time the month will also be longer, and the date of the Vernal Equinox will have changed – it will in fact have cycled completely round the calendar a couple of hundred times.
The answer is that the calendar, and with it Easter and various other things, will cease to make sense according to that reckoning roughly 5 700 000 years from now. By an interesting coincidence, if the day, the year, the position of the Vernal Equinox and the length of the month are all assumed to be constant, the cycle of possible dates on which Easter Sunday falls repeats exactly over the same period of time – 5 700 millennia. Hence the length of the Easter cycle coincides quite closely with the point at which the calendar as it currently stands stops working completely.
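The timescale can be roughed out. The day must gain nearly a minute before 365 days fill the year exactly, and how long that takes depends on the tidal slowdown rate, which is uncertain (long-term figures quoted in the literature span very roughly one to two milliseconds per century), so I’ve left it as a parameter in this sketch. With one millisecond per century you land close to the 5 700 000-year figure:

```python
# Rough sketch: how long until exactly 365 days fit into one year?
# The slowdown rate is the big unknown, so it is left as a parameter.
SECONDS_PER_DAY = 86_400
YEAR_DAYS = 365.2422  # current mean tropical year, in current days

def years_until_365(slowdown_ms_per_century: float) -> float:
    # The day must lengthen until 365 of them span one whole year:
    target_day_s = YEAR_DAYS * SECONDS_PER_DAY / 365
    deficit_ms = (target_day_s - SECONDS_PER_DAY) * 1000
    centuries = deficit_ms / slowdown_ms_per_century
    return centuries * 100

print(f"{years_until_365(1.0):,.0f} years")  # about 5.7 million
```

This is first-order arithmetic only – it ignores the fact that the year’s day-count itself drifts as the day lengthens – but it shows the order of magnitude is right.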
The bar graph at the top of this post represents the number of times Easter Sunday falls on each possible date in this cycle. It so happens that this is also the actual number of possible Easter Sundays in all time, from the first Passover right up until the Sun becomes a red giant and wipes out the Earth. By some quirk, Easter as we know it will happen roughly 220 875 times on 19th April, standing out as more frequent than any other possible date. The rarest date, 22nd March, will only occur 28 500 times or so. The first year after the Council of Nicaea, which settled the method of determining the date in 325, in which Easter fell on 19th April was 330.
As a Christian, I just slightly wonder if that 19th April date was the actual date of the first Easter Sunday, but I haven’t done the maths on that and I should probably just leave it.
Another consequence of movable feasts is that years in which the same dates fall on the same days of the week, for instance 1978 and 2017, are nevertheless different in other ways. Easter Sunday 1978 fell on 26th March; in 2017 it falls on 16th April. Therefore any “perpetual” calendar (which isn’t perpetual, for the reason I’ve just given) would need to take the date of Easter into consideration to be genuinely reliable from the perspective of a Christian-influenced culture.
The other thing this makes me think, though, is how Easter could possibly have any meaning at all getting on for six million years from now. I mentioned in the last post that there are different “theories of atonement”. Similarly, there are different theories of eschatology, the study of the end of the world, and where these are Christian they don’t all require that the Day of Judgement come to pass at all. It still seems to me that, for Christianity as most Christians currently understand it to be true, it would surely have blown it if the Second Coming still hadn’t happened by the year 5 700 000 AD. I don’t know what I think, but I do think there is no way there will be 5 700 000 celebrations of Easter Sunday.
I find Will Self quite a readable and captivating author, although, as with many other writers, I feel he may be wasted on me and that a lot of his stuff must be going straight over my head. Having said that, I did feel that he got to the heart of a particular aspect of bereavement in his short story ‘The North London Book Of The Dead’, used as the basis of his novel ‘How The Dead Live’. Very obviously there will now be spoilers for those two works, and possibly for other stories in his collection ‘The Quantity Theory Of Insanity’, and they’re good stories, so you might want to take this particular warning seriously. Against this I could also set the thought that literary prose is unspoilable because it’s not primarily about the plot.
In ‘The North London Book Of The Dead’, which is apparently “now a major motion picture” (why is nothing ever a minor motion picture?), a man whose mother has recently died keeps thinking he sees her, and it turns out that when people die they go to live in a different part of London. Later on, if I recall correctly, they move out to the “provinces”. Ignoring the Londocentricity, which I’m sure is there for a reason, this does actually capture quite well one of the experiences of bereavement: the recurrent impression of glimpsing the person who has died, hearing her voice and the like.
You’ve had your spoiler warning. Here’s your trigger warning. At this rate nobody will be left to read this by the end. Okay: trigger warning for acute bereavement. I messed up seriously on this front a few years ago and I don’t want to do it again. Some of my friends and acquaintances and I experienced quite a traumatic loss a number of years ago, and someone close to the person concerned looked askance at my reaction to it. For all I know, I may be venturing into traumatic territory here, but I want to make it worthwhile. Again, this is where literary sensibilities might help: the sufficiently talented can pull this sort of thing off, but I don’t think I’m one of them, so please forgive my clumsiness in advance.
So we had a friend with a history of depression. While one shouldn’t speak ill of the dead, the fact remains that she was a real asset to her community, her family and so on. She was absolutely not a waste of space. Nobody’s a waste of space, of course, but I don’t think anyone who knew her could even be tempted to call her that. And she was a really nice person too. She did, though, suffer recurrent bouts of deep depression, and sometimes she would hide away at those times. Two things about her stick in my mind. The first is that she was probably the best adult outside our immediate family at relating to our son as a toddler. The second, which sounds quite trivial, is that she taught me how to make the tea at church the first time I did it, so whenever I make tea for a large number of people it reminds me of her.
She triumphed over her depression countless times, and sometimes it was very bad indeed. As someone who had won so often, she deserves to be remembered for the numerous occasions where she did prevail, and also not to be defined by her illness at all but by her enormous value to the people around her. Even so, on one occasion she did not prevail. Her illness did, and she went missing. Only after a month or so did it become clear that she had definitely found it too hard to go on living, and in the meantime, although most of us were pretty convinced about what had happened we couldn’t be absolutely sure. In other words we didn’t have closure. None of us knew for certain what had happened, not 100%, and we needed a definite answer to the question of where she’d gone, even though we knew really. I’m avoiding saying that we knew “deep down”, because actually we pretty much knew on the surface too and we were all pretty definite about what had happened, but sometimes you need to do the doubting Thomas thing and stick your fingers in the nail holes and the wound in your friend’s side.
During the intervening period, I and many other people kept thinking we could see her in the distance or even quite nearby, although of course none of those identifications were correct. Nonetheless she was out there somewhere for us because we didn’t have an answer for what had happened. After a few weeks of course, we did get an answer, and that got us there to some extent. Even so, way after her funeral I for one, and probably others, still glimpsed her in the distance quite often. The experience of not having a resolution stayed and consequently this experience, which had gone on for far longer than was good for our mental health, left its mark. I won’t say scar because there are “wounds” you don’t want to heal. This is another thing I have left of her. It’s been a long time since I “saw” her, but those inverted commas still hurt and so they should.
This is of course a fairly intense personal story, though it’s shared with a lot of other people. Then there’s public grief, and here the death of Diana Spencer comes to mind. Whereas I can see that she shouldn’t have been constantly pestered as a means of selling newspapers and I feel sympathy for her and respect for her work with land mines and people with AIDS, there’s no way I could feel a personal connection with her and the outpouring of grief didn’t seem authentic to me, even though it did affect some close friends quite profoundly. I’ve never been able to pin down exactly what seemed in poor taste about that reaction but since I don’t really do much of the role model or hero thing, maybe there’s something about who I am which means I’ll never get it. I’ve already done grieving celebrities though, so I won’t go on.
One of the unexpected consequences of all this grief was that it challenged my faith in an unusual way. The gospels tell of a period after Christ’s death when people met him at length without realising it was him. For instance, Mary Magdalene meets him in the garden and thinks he’s the gardener, and then he goes unrecognised on the road to Emmaus. There are a couple of ways in which this could be taken, but given my experience with my friend at the time, it shook my faith, not along the lines of “why does God let bad things happen to good people?” but more in terms of seeing how loss without closure can warp one’s perception.
I want to entertain for a while the possibility that this hallucinatory experience is what the gospel accounts describe. One influential consequence might be that people learned to treat everyone they meet with the same degree of respect and love as they would treat Christ. And this is where we get to the central question: whether Jesus existed as a historical personage.
There are various theories of atonement, and most of them do require Jesus to have been a real person. There’s at least one which doesn’t, known as the moral influence theory. In this version, people are inspired to behave better as a result of their perception of Christ’s sacrifice on the Cross. This requires neither a resurrection nor even the gospel accounts being true, although I suspect that almost every Christian who believes in moral influence also accepts other theories of atonement, which are not mutually exclusive. One of the oddities of theology is that its terminology includes a lot of “-ologies” which are not so much thoroughgoing discrete disciplines as subject areas within the larger subject, so there’s mariology, the study of Mary; soteriology, the doctrine of salvation; christology, the nature of Christ; and so on. Similarly, theological theories can coexist even when they’re about the same subject, and consequently it’s possible to believe both that the Crucifixion is an inspiring story and that Christ acted as bait which Satan unjustly took, thereby forfeiting his claim and bringing about his own defeat, which is the ransom theory. So you can dispense entirely with the story of Jesus as a matter of historical fact and still accept the moral influence theory, though probably most people who believe in moral influence don’t do that.
The story as it stands, however, with the authenticity of seeing the person you loved after their death, seems very genuine. Treated as a work of literature, this raises a question for me which I can’t seem to answer right now: how did people in that place and time grieve, and how well did they recognise the grieving process? It seems very realistic that losing a charismatic leader they were expecting to become an all-conquering Messiah, to capital punishment of all things, would not involve closure, and that they would then experience this manifestation of difficulty in accepting their loss. What I don’t know, because it’s a question of emotional realism in literature, is whether that detail, taken seriously, far from counting towards the story being fictional, actually supports the idea that it’s true. Even if people universally experience grief in such ways, it doesn’t follow that this was recognised by the evangelists or whoever came up with the possibly fictional story of Jesus; and if it wasn’t recognised, its presence in the account strongly suggests to me that the account is historical. However, because I’m literarily impaired, I have no idea.
The other aspect of all this is how personal and meaningful it is to me. I can recount the events of my friend’s death and feel it very deeply and personally. Likewise, talking about Jesus in this way really does feel just as personal and real. That doesn’t mean it is, but it does mean that it might be polite to behave with some respect towards people who believe these things. They’re not just fun, interesting theoretical discussions, or things people believe just to be difficult, but things which are as real to them as their friends are. The same applies to other faiths. However, what people then do with those beliefs is often dodgy and dangerous, because that genuine emotion can be hijacked, by the person concerned or by others, for ends up to and including, well, actual hijacking.
The Big Bang Theory is today generally accepted by scientists and in fact people in general, and there certainly seems to be a lot of evidence for it. However, I personally don’t happen to believe in it. Before I explain why, I want to go into why people do.
The astronomer Vesto Slipher noticed, starting in 1912, that the light of most spiral galaxies was shifted towards the red, and it later became clear that the further away a galaxy is, the redder its light becomes. This was explained by the Doppler Effect, which is the way sound, for example, goes up in pitch as a fire engine approaches and down as it recedes. Galaxies therefore seem to be receding faster the further away they are, which is weird because it makes it look as though we’re the centre of the Universe, which is most unlikely. The way this problem was resolved was to suppose that galaxies in general are receding from each other rather than just all getting further away from here, because space itself is expanding. This also resolves something called Olbers’ Paradox: the night sky is black, when, if the Universe were infinite and unchanging, one might expect starlight to be coming towards us from all directions, meaning that every point in the Universe should be at about the temperature of the hottest stars and there should be no solid matter in the Universe at all. The reason this can’t happen is that the further two objects are apart, the faster they’re moving apart, and once they’re more than around fourteen billion light years apart they are doing so faster than light, which means the light emitted by stars further away than that can never reach us. Also, space is not infinite, though it is endless.
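That “around fourteen billion light years” is just the distance at which Hubble’s law formally gives a recession speed equal to that of light, sometimes called the Hubble radius. A quick sketch, taking the Hubble constant as a round 70 km/s per megaparsec (its exact value is still argued over):

```python
# Distance at which Hubble-law recession reaches the speed of light
# (the "Hubble radius"), assuming H0 = 70 km/s/Mpc as a round figure.
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # Hubble constant, km/s per megaparsec
LY_PER_MPC = 3.2616e6  # light years in one megaparsec

hubble_radius_mpc = C_KM_S / H0
hubble_radius_gly = hubble_radius_mpc * LY_PER_MPC / 1e9
print(round(hubble_radius_gly, 1))  # ≈ 14.0 billion light years
```

(Strictly, in an expanding universe light can still sometimes reach us from beyond this radius, but as a rough boundary for the argument above it serves.)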
A rather misleading analogy used at this point is of a balloon being inflated with dots on its surface representing galaxies. This tends to lead people to the question of what the Universe is expanding into. Two possible answers to it are offered respectively by so-called “‘brane theory” and a philosophical idea about the nature of space and time. ‘Brane theory holds that space is a three-dimensional surface of a hyperspatial “membrane”, so the answer in that case is that it’s expanding into hyperspace, and there may be many other universes around it doing the same thing which could even collide with this one. I don’t think this is what’s happening though.
My take on it is that space and time are relations rather than particulars. There’s distance and direction, both of which are relationships between locations. In terms of time, some events seem to take place before, after or at the same time as others, although this may be illusory. Space is more relevant to this. It’s not a container for locations or objects, but a combination of direction and distance, neither of which is a real “thing”. It’s more like a temperature scale: it makes no sense to imagine a temperature below absolute zero, or to think of negative Fahrenheit or Celsius readings as somehow “subterranean” or underwater, because temperature is merely a measurement. Direction is the same. It’s an angle between two objects in three dimensions. What the idea of space being endless but finite communicates is that there is, at any one time, a maximum distance between any two points, and that travel in that direction will eventually lead to the distance between those two points starting to reduce and the direction suddenly reversing. The idea of an expanding Universe is the claim that the point at which direction reverses recedes as time goes by. In other words, the maximum possible distance between any two places is increasing. There need not be any “outside” to space for this to be true, nor need there be any edge to space. Only objects have edges.
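The one-dimensional analogue of this is easy to play with: points on a circle. Separation grows as you walk, peaks at half the circumference, then shrinks again, and “expansion” just means that maximum grows with time. This analogy is my own illustration of the idea, not anything the formal cosmology requires:

```python
# One-dimensional analogue of finite-but-endless space: points live on
# a circle, so separation can never exceed half the circumference, and
# "expansion" simply means that maximum grows with time.
def separation(a: float, b: float, circumference: float) -> float:
    """Shortest distance between two points on a circle."""
    d = abs(a - b) % circumference
    return min(d, circumference - d)

C = 100.0
print(separation(0, 30, C))      # 30.0: walking away, distance grows
print(separation(0, 60, C))      # 40.0: past halfway, it shrinks again
print(separation(0, 60, 2 * C))  # 60.0: after "expansion", the same walk
                                 # no longer reaches the turning point
```

There is no “outside” the circle anywhere in that function, and no edge either: every point on it is like every other.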
If the Universe is not infinitely large and points in it tend to move away from each other, then, rewinding the film as it were, there seems to have been a point where everything was in the same place. This is the basis of the Big Bang Theory. Also, the further back you go, the hotter the Universe was on average, because the same amount of energy was present in a smaller space, meaning that at the very start everything was unimaginably hot. The traces of this are said to still exist in the form of what’s known as “3K radiation”, or the cosmic microwave background. Just as a hot object glows red and a hotter one orange, the whole of space is filled with a slight glow in the redder-than-red microwave range, indicating a temperature of about −270°C, just above the coldest possible temperature of −273.15°C. This is pretty good evidence for the Big Bang theory. So why don’t I believe in it?
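That “redder-than-red” glow can be pinned down with Wien’s displacement law, which relates a blackbody’s temperature to the wavelength at which it glows most brightly. At 2.725 K the peak falls at about a millimetre, squarely in the microwave band; the same law puts the Sun’s peak in visible light:

```python
# Wien's displacement law: the wavelength at which a blackbody of a
# given temperature glows most brightly.
WIEN_B = 2.897_771e-3  # Wien's displacement constant, metre-kelvins

def peak_wavelength_mm(temp_k: float) -> float:
    """Peak emission wavelength in millimetres for temperature in kelvins."""
    return WIEN_B / temp_k * 1000  # metres -> millimetres

print(round(peak_wavelength_mm(2.725), 2))    # ≈ 1.06 mm: microwaves
print(round(peak_wavelength_mm(5800) * 1e6))  # the Sun: ~500 nm, visible
```

So a sky glowing at 2.725 K really is a sky glowing in microwaves, which is why the CMB was first picked up by a radio antenna rather than a telescope.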
Before the Big Bang theory the most popular cosmological view was the Steady State theory. This already included the idea that space was constantly expanding but it rejected the idea that it was finite. The problem then became how to account for the fact that it wasn’t empty, because after a while the only visible galaxy would be the one we are in, and since this theory also holds that there was no beginning to the Universe at all, that’s quite a while. The answer to that was that tiny amounts of matter spring into existence all the time here and there and eventually there’s enough of it to start forming into new galaxies.
The virtue of the Steady State theory is its simplicity. There’s no need for peculiar geometry like the thing about direction reversing, and the past, the future and the whole of space are all seen as similar to each other, which is most unlike the somewhat messier Big Bang theory. The problems were finding a mechanism for matter to spring spontaneously into existence and accounting for the cosmic microwave background, and as a result the Big Bang theory won through. However, a few scientists carried on believing in it, including Fred Hoyle, who also came to believe that some of the reddening observed at astronomical distances is due to space being filled with micro-organisms. It’s also notable that Terry Pratchett, although he didn’t believe in the Steady State theory itself, did use it in his work, for instance in ‘The Dark Side Of The Sun’ and ‘Eric’. Another difficulty with the Steady State theory is that distant galaxies look systematically different from nearby ones, i.e. we see them at an earlier stage in their history: many of them are quasars, although at first it wasn’t realised that quasars were outside this galaxy. There’s a relic of the idea that quasars are inside this galaxy in the Star Trek episode ‘The Galileo Seven’, but since quasars turn out to be far outside it, they have to be phenomenally luminous. The fact that we are surrounded at a great distance by quasars suggests that, again, we aren’t special: that’s just what galaxies used to be like.
One of the things which bothers me about the Big Bang theory is that it was thought up by a Roman Catholic priest, Georges Lemaître, and it is sometimes suggested that a Universe with a beginning sat more comfortably with his faith in God, although Lemaître himself insisted on keeping his science and his religion separate. Whether or not that was the motivation, it wouldn’t in itself mean the Big Bang theory isn’t true. However, I don’t believe in a “God of the Gaps”, that is, a God who is used to explain things we don’t yet have a scientific explanation for, because such a God would constantly recede from plausibility as new discoveries are made and better theories developed. Consequently, although I believe in a creator God, the kind of creation I imagine happens at every instant of, and every point in, the Universe, i.e. God holds the Universe in existence. This is another form of continuous creation, but it’s still compatible with the idea that there was a Big Bang.
A Marxist cosmologist, Eric J Lerner, happens to agree with me on this point. He feels that the motivation behind the idea that the Universe had any beginning at all is theistic or deistic, and more to do with the idea that we find it hard to cope psychologically with the idea that there is no creator than anything scientific. I like this idea for two reasons. One is that it throws the idea of a God of the Gaps out of the window. The other is that it’s Marxist, and Marxism is basically true. Like other theories, it approaches truth without quite getting there and it needs updating in various ways, but the principles are sound.
Lerner points to several problems which the Big Bang theory seems unable to explain. Firstly, science has somehow got to the point where, in order for galaxies to rotate as they do and for structure to have formed as quickly as it has since the Big Bang, the Universe has to contain a lot of extra matter which can’t be detected, in other words dark matter. I’ve said this before (but I can’t find where, because I don’t use tags): dark matter is a myth, and it’s a nasty bit of science too, because something has been posited to exist purely to explain other things, yet cannot itself be detected. Modified Newtonian Dynamics is by far a better theory than dark matter, as it explains things much more neatly. There are a few other problems too. The large-scale structure of the Universe is kind of frothy: there are large empty bubbles in space surrounded by “membranes” (again) consisting of galaxies fairly close together. However, according to the Big Bang theory there hasn’t been time for this frothiness to form, because the bubbles are too big. Also, there seem to be stars older than the Universe.
Lerner’s solution to all this is plasma cosmology. The Big Bang theory places a lot of emphasis on gravity, but gravity is in fact a very weak force compared to electromagnetism. Plasma is ionised matter behaving as a fluid, and in fact virtually all visible matter in the Universe is plasma. Gases and solids are minor impurities, and liquids are even scarcer because they can only exist under pressure in a small temperature range. It seems reasonable therefore to expect electromagnetism to be more important to the Universe than gravity. If gravity were to be “turned off” for some reason, plenty of matter would continue to exist, but if electromagnetism were to cease to be, everything in the Universe would basically disintegrate instantly into subatomic particles, many of which would themselves fall apart and disappear. Plasma cosmology can explain why spiral galaxies are that shape without having to pretend there’s this thing called dark matter, for example. That said, I can’t say for sure that I personally actually believe in plasma cosmology.
My personal argument against the Big Bang theory is rather different, although it may be compatible with plasma cosmology as I also reject the existence of dark matter. It starts with the idea of Boltzmann Brains. This is a rather disturbing idea which starts from the perspective of an eternal Universe.
Scientists who believe in the Big Bang theory usually also believe that the Universe will always exist and that time will never end. A few of them believe that the Universe will collapse in on itself, or that there is an endless cycle of expansion and contraction, perhaps repeating itself in exactly the same way every time, but on the whole the consensus is reflected by the illustration at the start of this blog post. The Universe began very bright and hot, then stars formed, then more stars plus planets, getting us to the present day. After us, the stars will all burn out, leading to a very dark, cold future punctuated occasionally by smaller dead stars colliding and flaring into life again for a while. Later still, the remaining matter will end up in black holes, and after that those black holes will gradually evaporate due to a process called Hawking Radiation, which again I’ve mentioned somewhere on this blog but lost. This Flanders and Swann song becomes relevant, and they do a better job of explaining it than me.
The Universe after that point becomes very cold, very dark and very quiet. However, this is not the end. There are tiny fluctuations in space which cause subatomic particles to pop into existence spontaneously, and in fact if this happened often enough it would be an adequate mechanism for continuous creation, although that’s not quite where I’m going with this. Sometimes this will cause a hydrogen atom to appear from nowhere. Less often it will, by pure chance (because this is operating by pure chance rather than any supposed chain of cause and effect), create a hydrogen molecule. Even more seldom, a water molecule will appear, and so on, down a range of less and less frequent events involving the spontaneous appearance of more and more complex objects. Any given one of these will happen unimaginably rarely, but since we’re looking at eternity, each will happen an infinite number of times. This means that there will also be infinite occurrences of your brain, in the state it was in at every moment of your life. In fact, and to me this is where it gets really vertiginous and frightening, compared to the real me, who existed when the Universe was young and busy, the versions of my brain which pop into existence believing wrongly that they aren’t disembodied brains floating in an empty Universe, about to freeze out of existence in a few seconds, are infinitely more common, which basically means that the probability of my being right about living on the planet Earth in a human body in the twenty-first century is zero. This is the Boltzmann Brain paradox.
If the Universe is eternal and the Big Bang theory is true, this is the reality of our situation. However, disturbing though this is, I don’t believe it is so. Here’s why.
Complexity in the Universe has arisen from much simpler situations. For instance, snowflakes and salt crystals form from random assemblages of molecules, atoms and ions with no real structure, Earth formed from a cloud of gas and chunks of rock and dust billions of years ago, and complex life evolved from simpler forms over many aeons. One of these simpler situations is the apparent early Universe. Go back far enough and the entire Universe was a single subatomic particle containing all the potential matter that would ever exist, and although this was a remarkable situation it was also a very simple one. Now, we are able to look back into the past and work out that everything seems to have exploded from a single point billions of years ago. However, there is a big problem with that. Just as Boltzmann Brains would spontaneously come into existence an infinite number of times throughout eternity in a mostly quiet and uneventful Universe, so, if time is eternal, fluctuations could create an infinite number of situations in which a Big Bang would merely seem to have happened a few billion years ago. Consequently, the probability of us being this close to the beginning of time is zero. The genuine Big Bang could be the one we think we can see evidence for, but it almost certainly isn’t. However, it could be something else quite similar.
If the Big Bang happened, it would happen an infinite number of times like everything else. Also, since a subatomic particle with the mass of the entire Universe is much, much simpler than a human brain, the chances of it coming into existence are inconceivably greater than those of a Boltzmann Brain doing the same thing, and these other “universes” will often contain versions of our own brains actually in proper universes and correct about things. These would, in any finite sample of time long enough to include these spontaneous universes, be much more common than the other versions of us, and therefore the Boltzmann Brain paradox is incorrect.
This also means that it’s unlikely that this is the “first” Big Bang, or that the Big Bang happened at all. The same kind of illusion which would lead us to think we aren’t disembodied brains floating in space is more likely to occur with reference to what we think of as the afterglow of creation. Therefore, I don’t believe the Big Bang ever happened, or at least if it did, it was only one of many. Otherwise we would be confronted with the fact that we are in a state of affairs with zero probability – living within measurable distance of the Big Bang. That could be so, and if there was a first Big Bang there would’ve been people living back then, but we almost certainly aren’t them.
Including thumbs that is. Having said that, not everyone has those:
I can only imagine how difficult not having thumbs must be. This is of course literally a five-fingered hand, meaning that it’s impossible for the first digit to touch the others, fingerpad to fingerpad, which is crucial to tool use. Primates generally do have thumbs of course, although their opposability isn’t always like ours. This is a squirrel monkey’s hand:
Squirrel monkeys are from the other half of the monkey clade and have pseudo-opposable thumbs, meaning that they operate like hinges but can’t swivel to touch the other fingertips. Tarsiers, which are nocturnal versions of our direct ancestors the omomyids, have completely non-opposable thumbs:
Clearly tarsiers elicit a cuteness response in humans, probably because they’re proportioned like babies, which is presumably an ideal shape for a small primate, or at least one option.
An oddity about hands and feet, which I mentioned yesterday, is that whereas there are many species with fewer than five digits per limb, for instance horses and many lissamphibians, there never seem to be any animals which usually have more than five actual digits. There are animals with apparent extras, such as the dew claws of cats and dogs, but these are vestigial first digits rather than additions to the five, and the giant panda’s famous “extra thumb” is really a wrist bone rather than a digit at all. Even odder is the fact that whereas there are no species with more than five digits as standard, there are many cases of individuals in certain species born with more than five, including humans:
As a healthcare professional I have come across a few patients with more than the usual number of digits per hand, and because we generally have two types of digit there are two ways in which this can happen – two or more thumbs, or five or more fingers. Also, the extras tend to branch from other digits rather than simply coming off the hand as such. I could go on about Robinow Syndrome at this point but that really belongs on another blog, as does Kennedy’s Syndrome incidentally, which I mention because its social construction is similar.
The issue of functional extra digits, each with their own nails, bones, muscles, blood vessels, nerves and part of the brain onto which their sensory and motor functions are mapped, illustrates an oddity about the nature of genes and DNA. My own fifth digits are bent, a trait known as clinodactyly which can be associated with various other genetic rarities such as the chromosomal Turner and Down syndromes. These involve quite different chromosomal anomalies – Turner syndrome a missing sex chromosome, Down syndrome an extra copy of chromosome 21 – yet one possible result of both is clinodactyly, meaning that it can occur due to completely different sets of genes being affected. Similarly, although it’s tempting to think of there being genes for specific features of different digits and their associated muscles, nerves and other organs, the fact that people can have fully functional extra fingers is strong evidence against the idea that genes work that way. What there seems to be instead is a set of inherited traits for the ends of one’s limbs to become “frayed” as an embryo, and the tendency to have extra digits is about something else, meaning also that the tendency not to have them is as well.
So why five? If it isn’t completely genetic – there’s not a separate gene or set of genes for each finger – then where does the fiveness come from? And why five, when having more than five can clearly happen without any obvious disadvantage? I personally think the answer lies in the Fibonacci Series.
At this point you’re going to have to indulge me because I have no idea whether the Fibonacci Series is well-known or not. It’s a sequence of numbers each of which is the previous two added together, so it goes 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89… ad infinitum. The other thing about it is that if you divide a Fibonacci number by its predecessor you get a number close to its own reciprocal plus one, and the higher the two numbers are the closer that ratio is to the value called φ. To illustrate this, the number φ, which like π goes on forever, is roughly equal to 1.61803399. 5/3 is about 1.667, 144/89 is about 1.61797753 and so on, and of course the reciprocal, which is one divided by that number, is 0.61805555… in the latter case. This is known as the Golden Ratio.
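Since I can’t tell how familiar this is, here’s a minimal sketch in Python (any language would do) of the sequence and the way successive ratios home in on φ:

```python
# Successive ratios of Fibonacci numbers converge on the Golden Ratio.
def fib(n):
    """Return the first n Fibonacci numbers: 1, 1, 2, 3, 5, 8, ..."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

phi = (1 + 5 ** 0.5) / 2  # closed form, roughly 1.61803399

seq = fib(15)
for a, b in zip(seq, seq[1:]):
    print(f"{b}/{a} = {b / a:.8f}")
# The ratios oscillate around phi, e.g. 5/3 = 1.66666667 and
# 144/89 = 1.61797753, getting closer the further along you go.
```

The ratios alternately overshoot and undershoot φ, which is why 5/3 is noticeably off while 144/89 agrees to five decimal places.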
I think nearly everybody knows all that but I’m not sure, so I’m just mentioning it in case there are people who don’t know it.
Something I’ve never understood about either the Fibonacci Series or the Golden Ratio is why they turn up so much in nature, but they do. For instance, here’s a picture of an ox-eye daisy with a crab spider:
The florets in the centre of the inflorescence (what people generally refer to incorrectly as a flower, when like all plants in that family a daisy’s “flower” is in fact a bouquet of many flowers) occur in spirals of 21 in one direction and 34 in the other. These kinds of numbers also turn up in the spirals of pine cones, pineapples and cauliflowers. However, they needn’t be directly governed by genes alone, as this picture of the M51 galaxy shows:
The spiral arms of the galaxy, like many others possibly including the Milky Way, pass through the rectangles, each of which is a golden rectangle with the sides in proportion of around 1.618. The same applies to the shell of a nautilus and the cloud swirls in hurricanes:
These are all logarithmic spirals and the hurricane and galaxy have nothing to do with genetic inheritance in their form. The spiral and its association with the Golden Ratio just represent a path of least resistance.
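For the curious, the spiral associated with those golden rectangles can be written down directly: it’s the logarithmic spiral whose radius multiplies by φ every quarter turn. A small sketch, assuming a starting radius of one unit:

```python
# The "golden spiral": a logarithmic spiral whose radius grows by a
# factor of phi (about 1.618) every quarter turn, matching the nested
# golden rectangles.  Starting radius is taken as 1 for simplicity.
import math

PHI = (1 + math.sqrt(5)) / 2

def golden_spiral_radius(theta):
    """Radius at angle theta (radians) for r = phi ** (2*theta/pi)."""
    return PHI ** (2 * theta / math.pi)

# Radius after each successive quarter turn: 1, phi, phi^2, phi^3, ...
for k in range(5):
    print(round(golden_spiral_radius(k * math.pi / 2), 5))
```

The same equation with a different growth factor describes nautilus shells and hurricane arms; only the rate of growth changes, not the shape of the law.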
As for the actual numbers of the Fibonacci sequence itself, these turn up as well. For instance, the crab spider has eight legs, a starfish has five arms and so on. However, when the numbers get that small the chances of coincidences increase dramatically because smaller Fibonacci numbers are more frequent than larger ones. Also, it starts to look a bit like numerology, because whereas an octopus or a spider might have eight appendages, the actual reason for that might be that it has four on each side multiplied by two due to its bilateral symmetry, and although that symmetry itself is in the Fibonacci series – two sides – it starts to feel to me like I’m seeing patterns everywhere which aren’t really there.
Nonetheless, I do consider five to be an important number. There are only a few different forms of symmetry among animals. An animal may be completely asymmetrical, for instance some sponges are:
In the distant past there were also life forms with threefold – triplanar – symmetry. However, there are no animals with sixfold or sevenfold symmetry, and this to me is significant. The reason starfish and their relatives evolved fivefold symmetry seems to have been that it makes them tougher. This isn’t immediately apparent with starfish but with a sand dollar it’s a different matter:
These animals are tough. Although their shells are made up of five plates joining at corners, the weak lines of the cracks between these plates are compensated for by the fact that the opposite point is the middle of a plate, meaning that every weak point is accompanied by a strong one. This would also be true, however, of a heptagonal animal:
The seven-sided fifty pence piece is, incidentally, designed so as to have the same diameter in all directions so it can work in slot machines. So I’ve heard anyway. The point is that there is no real reason why a sand dollar shouldn’t be a sand fifty pence piece, were it not for the sole fact that seven is not in the Fibonacci series. I don’t know this for sure, but I suspect that’s the reason echinoderms have five sides rather than seven. Having said that, I find the Fibonacci series mysterious and I don’t know why it turns up all the time.
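The “weak point opposite strong point” argument is really just a fact about odd numbers, and it can be checked in a couple of lines: in a shell of n plates arranged like a regular n-gon, the point diametrically opposite a suture lands mid-plate exactly when n is odd.

```python
# In a regular n-plate shell, is the antipode of each plate corner
# (suture) the middle of a plate?  Half a turn is n/2 corner-steps:
# a half-integer number of steps (n odd) lands mid-plate, a whole
# number (n even) lands on another corner.
def opposite_a_midpoint(n):
    steps = n / 2
    return steps != int(steps)

for n in (5, 6, 7, 8):
    print(n, opposite_a_midpoint(n))
# Five- and seven-plate shells pair every corner with a plate middle;
# six- and eight-plate shells line their corners up with each other.
```

So the argument really does admit seven as readily as five, which is why the Fibonacci question arises at all.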
Of course, what I’m working up to is the claim that limbs have a maximum of five digits because the number five is in the sequence. I suspect that if there is vertebrate-like life elsewhere in the Universe it will turn out to have something like three, five or eight digits rather than six or seven. I think also that there’s a way of testing this hypothesis, although I haven’t done it.
Limbs evolved from the fins of fish, particularly their pectoral fins. If I’m right, a prediction which could be made would be that pectoral and pelvic fins will tend to have a number of rays in the Fibonacci sequence. This would be telling evidence, because fins have much larger numbers of rays than hands and feet have digits, thereby reducing the chances of coincidence. However, I haven’t checked. If that turns out not to be true, though, it may not mean I’m wrong.
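The check itself would be mechanical. A number n is a Fibonacci number exactly when 5n² + 4 or 5n² − 4 is a perfect square, so testing a table of ray counts is trivial; the counts below are made-up placeholders, not real data, which would have to come from the ichthyological literature.

```python
# Membership test for the Fibonacci sequence, using the standard
# 5n^2 +/- 4 perfect-square criterion.
def is_fibonacci(n):
    def is_square(m):
        r = int(m ** 0.5)
        return r * r == m
    return is_square(5 * n * n + 4) or is_square(5 * n * n - 4)

# Hypothetical fin-ray counts, for illustration only.
for count in (13, 14, 21, 34):
    print(count, is_fibonacci(count))
```

Running real counts through something like this would show whether Fibonacci numbers turn up more often than chance allows.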
So basically I can’t tell if I’m being a mathematician or a numerologist about this. Or indeed a palm-reader.
The only obvious connection I can come up with between this post and political matters is Ken Livingstone, the well-known mayor of London and salamander enthusiast. Drawing a veil over recent events, let’s escape into Newtville.
People sometimes seem to think of amphibians as failed reptiles or bracket them with reptiles. They are also of course very much endangered nowadays, which considering their very long history is a terrible thing. Pressing though that is, I don’t want to talk about it right now. What I do want to talk about is the way amphibians contradict the idea that there is a ladder of evolution with single-celled organisms at the bottom and humans at the top. In this view, the vertebrates would go: jawless fish; sharks and rays; bony fish; lungfish; amphibians; reptiles; birds; mammals. This isn’t how things are of course and I think most people realise that on some level, but it’s difficult to shed old ideas of what the tree of life is like.
An amphibian is not by any means a failed reptile. Nor is it a primitive ancestor of future vertebrates. If it’s around today, clearly it can’t be ancestral to anything else that’s around today. Evolution didn’t know where it was going, and in fact today’s amphibians might just have almost as little to do with the ancestors of all other land vertebrates as mammals have.
Before I go there though, I want to talk about how they breathe, and how they don’t breathe, because it’s a little surprising at first to realise this. For an amphibian, lungs are not actually very important to respiration, to the extent that there are quite a lot of amphibians which live on land but have no lungs at all: the lungless salamanders. In fact there are also lungless frogs and caecilians, although not so many. There are far more species of lungless salamander (around 380) than there are of salamanders with lungs. In other words, it’s unusual for a salamander to have lungs.
By contrast, most frogs do have lungs, and this is a clue to the function of amphibian lungs. They don’t have them for breathing, even when they’re on land. This is a slightly misleading statement because in fact amphibians with lungs do use them to breathe. However, they also breathe through their skin, and the fact that they breathe through their lungs is more because they happen to be there than because they need them for this purpose. Lungs provide a moist surface across which oxygen and carbon dioxide can cross, but so does amphibian skin. Frogs at least use lungs to make sound. They do breathe with lung movements, but this is by means of pushing air in and out of their lungs using their mouths because, unlike mammals but like many other amphibians, they have no ribs or diaphragm. Some of them also use their lungs to give birth, by inflating them and pushing out their offspring. Incidentally, some of them gestate in their stomachs.
Lungless salamanders are notably skinny, meaning that every part of their body is close to the air. This enables them to acquire oxygen and give up carbon dioxide without breathing. They also have a groove between their nostrils and mouths like the organ found in many mammals which allows them to smell things, because nostrils without air passing through them wouldn’t work very well as sense organs in that respect. In some places, not only do they make up the majority of salamander species but they actually comprise the majority of the mass of vertebrates in the habitat. Having lungs is apparently overrated.
Much of the amphibian body plan and lifestyle is dictated by the fact that they breathe through their skins. They tend to be small. The largest living amphibian is the Chinese giant salamander, illustrated above, which is up to the size of adult humans. However, they can only survive in fast-flowing and therefore highly-oxygenated water and are quite inactive. Amphibians can also only survive in fresh or slightly salty water because otherwise osmosis would pull the water out of their bodies through their permeable skins and they would die of dehydration. The ocean is a desert with the life underground and the perfect disguise above. They are also carnivorous because herbivores need longer digestive systems and there’s no room inside an amphibian for a long gut or anything similar. However, perhaps contrary to expectations, not all amphibians need to live near water or start out as tadpoles. Some hatch out as small adults and others are born live rather than as eggs.
One of the odd things about amphibians is that the ones seen today, also known as lissamphibians, are quite unlike the first four-legged animals which crawled out of the water, or scudded themselves through muddy puddles and shallow river beds. These animals had heavy ribcages which seem to have been adapted to help them breathe air with their lungs. Many of them were quite large – up to the length of a bus – and the teeth of some suggest they tended towards herbivory. These animals were also the ancestors of reptiles, mammals and birds, and later on at least had scaly skins through which gases would not have been able to pass. They also tend to resemble reptiles quite closely and gradually became them, insofar as it’s possible to become a reptile when reptiles don’t really form a group of closely related animals. In other words, if you want to see what most of the amphibians were like at their peak, really what you need to do is imagine something rather like a veggie saltwater crocodile laying eggs in the water and having tadpoles instead of baby crocs. Lissamphibians just aren’t like that.
For this reason, it’s been suggested that lissamphibians may not even be related to our ancestors. Rather, it’s been theorised that a different group of fish struggled their way onto land and evolved into them separately. Whereas I find that idea seductive, I don’t think it’s true because like every other species of quadruped, no lissamphibian normally has more than five digits on each limb. This is markedly unlike the earliest four-footed beasts who had more than five toes per foot:
This raises the question of why no vertebrates today usually seem to have more than five digits. There are certainly people with six or more fingers and toes but they’re exceptional, and there are many species with fewer than five digits per limb such as many amphibians, birds and hoofed mammals, but for some reason there seem to be none which usually have more than five. My own hypothesis is that five is in the Fibonacci series of 1, 1, 2, 3, 5, 8, …, which turns up a lot in nature. That raises a further question of course, but it is at least a possible partial answer. Anyway, given that early tetrapods such as Ichthyostega had more than five digits per limb, the fact that all amphibians, reptiles, birds and mammals have at most five strongly suggests they’re all related and therefore that the amphibians didn’t arise separately from fish.
Recently it’s been thought that lissamphibians are descended from the earliest quadrupeds but not necessarily in one go. There were two major groups of amphibians at first. One group were the large, often lizard-like forms which include our ancestors. It so happens that our evolutionary history at this point involves us splitting from the reptiles so early on that it’s almost true to say that we ourselves evolved directly from amphibians. The earliest known synapsids, the group including mammals, are about 320 million years old, and the earliest reptiles, that is the ancestors of the dinosaurs, birds and turtles, appear to be from about the same time. In other words the fork occurred before there were even proper reptiles. That said, mammals, reptiles and birds are all descended from reptiliomorph amphibians. Lissamphibians, on the other hand, have been argued to be descended from lepospondyls, which were all small, tended to be salamander- or snake-like in shape, probably occupied specialised ecological niches, and had appeared as a lineage distinct from our ancestors by about 350 million years ago. However, the worm-like caecilians may themselves not be closely related to salamanders, frogs and toads.
Casting a bit of perspective on this, this means humans and lizards are more closely related to each other than either is to newts and salamanders. As such, it seems a bit unfair to lump reptiles and amphibians together, because although lizards and salamanders may look quite similar and even have quite similar lifestyles in some cases, they haven’t really got a lot in common genetically.
What can be learned from all this? Well, one thing is that modern appearances can often mask historical truths. Another is that we shouldn’t try to judge other living things in terms of hierarchy. No other land vertebrate can manage as well as an amphibian can in cold, damp conditions. Mammals, for example, can adapt, but constantly need to produce enough heat in their bodies to raise their temperatures high enough for the chemical reactions which keep them alive. Consequently they have to eat more than amphibians, and some of them would also risk lung problems where an amphibian, whose lungs matter so much less, would not – an amphibian wouldn’t get pneumonia where we would. We are what we are and they are what they are, but there is no reason to regard them as inferior to us simply because they don’t always have lungs. All that’s out there in the living world is a host of different species, none better or worse than any other.
This is a map of the proposed North American Technate, one goal of the Technocracy Movement of the 1930s, whose symbol was the Monad:
The crucial things to remember about automation are that it ought to be a solution rather than a problem, and that as technology it’s not new, but part of our nature as a species since we were Australopithecines. Hominid tool use is primarily motivated by a drive to ease life and raise living standards and automation is just a continuation of this. Any illusion of scarcity is artificial and unnecessary. The corollary of automation should be something like a basic income scheme or a technocratic social order, and to be frank I can’t understand why everyone isn’t outraged that this still doesn’t exist and isn’t demanding that it happen immediately. The fact is that there is simply no reason for anyone to be exploited or to have an unacceptably low standard of living. It’s hard to imagine a bigger scandal than this in the whole of human history, and this scandal isn’t even new. I can only imagine there is a psychological need for some people to imagine they’re superior to others.
Technological change through the Palaeolithic seems to have led to increasing population and life expectancy, suggesting that the advent of new technology increased hominid fitness to survive and thrive in its environment. The adoption of agriculture had a number of drawbacks in this respect, such as the possible emergence of a more hierarchical society, patriarchy and the problems of managing infectious disease and malnutrition due to the change to a lifestyle to which we are not adapted, but in some ways there was a further increase in living standards brought on by technology, at least for some. However, inequality grew and there was a drift away from providing for the common good.
The Industrial Revolution brought fear that means of livelihood, now substantially centred around factories, would be lost with increasing mechanisation. It was actually suggested quite early on that those who were put out of work by machinery should simply be paid enough to live on, although I can’t track down a source – it may have been late eighteenth or early nineteenth century.
An argument frequently deployed against basic income is that it’s psychologically damaging, and another is that it could lead to unrest or lack of motivation. People are seen as benefitting from paid work with employers and as primarily motivated by monetary gain. However, if work is worth doing, literally, i.e. it is heart work which fulfils a fundamentally useful social function and cannot be automated well, then it’s worth doing without pay. The wages paid for work have the function of financially supporting someone who doesn’t have the time to support themselves by pursuing a hunter-gatherer way of life or self-sufficiency in other ways because they’re working for someone else. This compensation needn’t come from the actual employer. Much unpaid work is already done, such as parenting and housework, and the motivation to do those tasks doesn’t come from the prospect of monetary gain. There is also much work which simply should not be done. Financial services come to mind, and there’s nothing to be proud of in being a citizen of a nation which is a world leader in swindling people out of their income and forcing them into debt, which is basically what financial services amount to much of the time. As far as mental health issues are concerned, much work is seriously deleterious to happiness, and the constant anxiety and depression which emanate from the symbolic estimation of people’s lives as worthless and expendable, and from the removal of meaningful work from their lives, definitely constitute a mental health hazard. These factors need to be set against the supposed dignity of “work” in the restricted sense of the word, namely paid work with an employer. A change in the relationship with income could also free parents from having to organise childcare in the form of state schooling, which is now clearly superfluous as a means of relevant or efficient education.
Incidentally, education needs also to carry the message of self-motivation in its delivery, which is currently impaired by societal factors.
It should also be borne in mind that there can be virtue in useless employment. If your job involves providing essential goods and services there is a sense in which you are holding the beneficiaries of those services to ransom by asking to be paid for doing that. Work which is “useless”, such as in the creative arts and entertainment, is more interchangeable. It can “say” something important but an audience can prefer Ben Jonson to Shakespeare or the Stones to the Beatles. Consequently it makes sense to ask for money in such a situation. If the work is something like providing adequate sanitation, growing food, putting out fires or life-saving medical treatment, that work needs to be remunerated in a way that is unconditional in order to prevent the ransom situation from arising. This is another way in which basic income could address the problem. It may of course also be that such essential work is more likely to be replaced through automation than less vital work, which is another reason for basic income.
There are in fact both left and right wing arguments for basic income, each providing a counter-argument for the other side. The right wing case is that it simplifies the welfare system. Milton Friedman argued for it as a “negative income tax”, i.e. a tax which is paid to individuals below a certain income threshold. This would make it means-tested and thus introduce bureaucracy. It’s also seen as reducing the incentive to work, and in this scenario work is seen as an unequivocally good thing because considerations are primarily in terms of traditional economics and work is not seen as an intrinsic part of human nature. This potential disincentive could be seen as a bad thing from a left wing viewpoint because it could reduce the potential number of trade union members. Another right wing argument is that it could completely remove the need for the lowest paid employees to be paid at all by an employer, and lead to abolition of the minimum wage. Friedman also questioned whether those relying on basic income should still have the right to vote, since he saw them as inevitably voting for increases in basic income, making the scheme impractically expensive.
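Friedman’s negative income tax is simple enough to sketch: below a threshold the tax owed goes negative, i.e. becomes a payment. The threshold and rate here are made-up illustrative numbers, not anything Friedman proposed:

```python
# A sketch of a negative income tax: a positive result is tax owed,
# a negative result is a payment received.  Threshold and rate are
# purely illustrative.
def negative_income_tax(income, threshold=12_000, rate=0.5):
    return rate * (income - threshold)

for income in (0, 6_000, 12_000, 24_000):
    print(income, negative_income_tax(income))
# Someone with no income receives 6,000; someone at the threshold
# neither pays nor receives; above it, ordinary tax is owed.
```

The means-testing the scheme requires is visible in the code itself: the payment depends on knowing each person’s income, which is exactly where the bureaucracy comes back in.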
A number of potential problems have been raised regarding basic income. One is that it could lead to inflation of accommodation costs. Since everyone would then have a certain guaranteed income, rent and other costs might then rise according to market forces, thereby wiping out any advantage it might have. Possibly for this reason, some people advocate that the level of basic income should be set slightly below subsistence level. It’s also possible that those with greater needs such as the disabled would not be provided for because the welfare state would potentially have been dismantled.
One of the most obvious objections to basic income is its affordability. However, the cost of not having basic income is also enormous. If you consider, for example, the expense of dealing with the mental illness, homelessness, physical ill-health and crime resulting from poverty, then for unaffordability to be the strongest argument against it, basic income would have to be too expensive even to be considered as an investment in the future. Some people also believe that new jobs will arise as automation proceeds, a phenomenon seen as having occurred throughout history.
Leaving the objections aside, I see basic income as a solution to many problems. It removes the motive to do harmful work just for the money. It means the lowest-paid employees needn’t be paid at all by their employers. It reduces the bureaucracy of the welfare state. It means that people will work for the work’s own sake rather than for money. It will also save money by reducing the cost of crime, substance abuse, poverty, homelessness and poor physical and mental health. However, I consider the reasons it isn’t implemented to be unconnected to any of these things. Not having a basic income is cost-effective, I think, because it means the poor live in fear, which has both social and psychological functions. The fear of penury prevents poor workers from demanding better working conditions and job security, meaning that a frightened and ground-down workforce is cheap and disposable. This means that the vast investment necessary to ensure the existence of a large number of desperate, hopeless people pays for itself many times over. I’ll come back to disposability in a minute. It also performs an important psychological function, although at the cost of preventing a generally happier society. It isn’t enough for some people that they succeed in their own terms of wealth and possessions. It’s also important to them to know that there are many other people living in misery and want, not knowing where their children’s next meal is coming from or if they will die of hypothermia tonight, because it makes them feel more secure and valuable as individuals themselves. Against this can be placed the fact that the happiest societies are the most equal in terms of income. These are the reasons, I think, that basic income will never be implemented.
On the matter of disposability, it occurs to me that the response of the rich and secure to an automated society would not be so much concern for the physical needs of the poor and unemployed as fear that these idle hands are expensive to maintain and yield no return, and that they may rise up against them and overthrow the system. Consequently, the rational response may be to drive them to an early grave either through their own decision to kill themselves or simply by not bothering to take care of them at all, which is of course very cheap. Maybe what the rich really want is for most poor people simply to die. Basic income doesn’t achieve that, so that’s another reason it may not be implemented.
Technocracy
This is the belief that society is best managed by experts in the likes of engineering and science rather than by politicians. This idea was particularly popular in the 1920s and 1930s but was overtaken by events such as the Great Depression and Second World War. As can be seen from the map at the top, technocrats in North America believed in the unification of the North American continent and nearby areas into a unit they referred to as the “Technate”. They had what they referred to as the “Energy Theory Of Value”, which held that the basic measure common to all goods and services is energy, so the sole scientific foundation for the monetary system is also energy. Therefore they would issue individuals with energy certificates instead of money, which could be exchanged for the equivalent energy use. To take a simple example, someone who shifted sixteen tons two metres vertically during a day would then be entitled to use the same amount of electricity or fuel, or to buy food providing that amount of energy, with the energy used to produce those goods accounted for in the same way, so they would be another day older but they wouldn’t be deeper in debt unless they were using more energy than they were expending.
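The sixteen-tons example can actually be worked out, and doing so shows how small a day’s muscle work is in energy terms. A sketch of the accounting, assuming metric tons and nothing about the movement’s actual certificate scheme:

```python
# Energy accounting for the post's own example: lifting sixteen
# (metric) tons through two metres.  Figures are illustrative only.
G = 9.81  # gravitational acceleration, m/s^2

def work_done_joules(mass_kg, height_m):
    """Mechanical work W = m * g * h, in joules."""
    return mass_kg * G * height_m

def joules_to_kwh(joules):
    return joules / 3_600_000  # 1 kWh = 3.6 MJ

w = work_done_joules(16_000, 2)
print(w, "J, or about", round(joules_to_kwh(w), 3), "kWh")
# Sixteen tons raised two metres is roughly 314 kJ -- under a tenth
# of a kilowatt-hour.
```

In other words a certificate for that day’s lifting would buy only a few minutes of electric heating, which hints at how tricky it is to price labour and goods on a common energy scale.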
Technocracy could be seen as the extension of automation all the way up to government. There are, however, various problems with it. It’s not clear, for example, which kind of expertise is appropriate, or which theory within it. It’s notable that educational theory and the psychology of learning are quite different in nature, so which would be applied to educational policy? Theories are not free of value or political bias either. Evolutionary psychology, sociobiology and social Darwinism all spring to mind here, as does Lamarckism on the other side of the political spectrum. Technocracy was also used recently and controversially in Italy to implement neoliberal economic policies. Technocrats are also distant from popular opinion, although the two may sometimes coincide. Technocracy is not democracy. However, it also strikes me as potentially quite left-wing because it doesn’t rely on “the school of hard knocks”, which may or may not be a bad thing. Right wing anti-intellectualism would certainly seem to be opposed to it.
The Venus Project is a modern manifestation of technocracy. This is a long-term project started by the architect Jacque Fresco and featured in the film ‘Zeitgeist Addendum’. Fresco’s view, which I happen to agree with in general, is that the alternatives for the future are utopia or oblivion, with utopia in the form of technocratically-organised sustainable cities. Like other forms of technocracy, however, there appears to be little room for non-conformity. The main problem as I see it with the Venus Project is that the will to save the world is not there. I would argue that the laws of thermodynamics, particularly the tendency towards entropy, mean that there is a drive towards self-destruction in all living organisms, which is of course balanced in part by evolved homeostatic feedback mechanisms but cannot be completely eliminated. This is what Freud called Thanatos, and although his ideas are largely discredited this one in particular is useful. There may be a tendency for people to turn against positive, life-affirming and optimistic ideas and plans precisely because they have those features. Therefore, to me the optimism and positivity of both technocracy and the idea of basic income are the precise reasons why they will inevitably fail.
The Gig Economy
Before I get down to discussing this, I should define what I mean by this currently popular term. The “Gig Economy” is a labour market characterised by the prevalence of short-term contracts and freelance work. The likes of Uber and Deliveroo are often focussed on in this respect: their workers are nominally self-employed, and because of this have none of the recently acquired entitlements which employees normally enjoy, yet they get all of their work from a single source. In other words, it seems to be a ruse designed to get round legal requirements for employers to provide their employees with such things as pension schemes, sick pay and the like. Zero-hours contracts are another common feature of such arrangements where people are directly employed.
Two things strike me about this. One is that this sounds like the kind of situation with which working class people have long been very familiar. The difference, I suppose, is that people from a middle class background have greater social capital and are therefore able to make a more visible fuss about it, and also their mindset may have been less ground down than working class people’s, although it will probably soon be ground down too. In other words, the middle class is disappearing for this reason as well as through automation.
The other thing about this, to me, is that it sounds a little like an inferior version of what’s been called the “Catholic Economy”, and that there may be a connection there. There is a somewhat convoluted link between the concept of the catholic economy and the coalition government which formed after the 2010 election.
The catholic economy, although initially associated with the Roman Catholic Church, is now probably better referred to as distributivism. This is the idea that private ownership is important to all members of society and a basic right, and that the means of production should be distributed as evenly as possible throughout society. As such, the idea is not easily categorisable as either right or left wing. A distributivist society would be one in which most people are self-employed sole traders, though they may be organised into guilds. It sees both capitalism and socialism as products of the Enlightenment and prefers to hark back to a mediaeval system, though I would see that as very idealised. Against that, of course, it could be said that my own description of pre-agricultural society itself partakes of the myth of the Noble Savage.
Distributivism goes hand in hand with the theological position of Radical Orthodoxy, which rejects modernity via postmodernity to arrive at a position where the world is interpreted theologically, science and similar disciplines being seen as essentially secular, atheistic and nihilist.
The reason this is relevant is that Phillip Blond, a proponent of Radical Orthodoxy and distributivism, was a key figure in the construction of the “Big Society” agenda of Cameron’s Conservative Party, one of whose slogans held that “there is such a thing as society; it’s just not the same as government”. This is presumably meant to emphasise the idea of the organic growth of customs and institutions into a society which works for all without state intervention, but also, notably, without the intervention of monopoly capitalist corporations. It seems, however, to be largely a rhetorical device. To illustrate, consider the coalition government’s policy on free schools. The idea seems at first to be about giving parents, religious groups and others the legal right to establish their own educational institutions. However, education need not be carried out in specific physical premises or locations: it could be online, achieved via home tuition, or take place in rented rooms or in people’s homes. The legislation nevertheless required free schools to have physical premises, which immediately prices poorer people out and involves property and building firms quite heavily in the establishment of such schools, when that is in fact entirely unnecessary.
I suspect that the gig economy is in fact what’s become of the catholic economy in the hands of Conservative pragmatism and realpolitik. Hence a lot of people are nominally self-employed now, and in purely technical legal terms there are now a lot of self-employed sole traders, just as there are supposed to be under distributivism. However, these people own precious little and are fragmented, having little recourse to professional bodies, trade unions or guilds, and consequently they have few rights and little power. Nominally, though, the situation does seem to have quite a lot in common with Phillip Blond’s ideas, even if he would wish to dissociate himself from it.
This dissociation, however, could be key to the success of a more left-wing approach. Just as New Labour in government didn’t do what many Labour members and voters wanted it to do, it seems to me equally possible that the current Conservative government isn’t doing what its own members and voters wanted it to do either. This dissatisfaction, which I believe must exist, is probably fairly typical of the disillusionment felt by ordinary voters and party members when their party is in office. It could also potentially be exploited by the Labour party as it is now. It’s been clearly demonstrated that the practical result of the Big Society is just business as usual and the permanent government rather than anything like distributivism, and I suspect there is a strong groundswell of dissatisfaction among people who voted Tory and are now repenting at leisure. I suggest therefore that this is something which other groups could capitalise upon, and obviously I have Labour in mind here.
Those of us of a certain vintage will of course remember this guy, to whom I used to be compared a lot as a child. His name was of course Magnus Pyke, and he seemed to portray himself deliberately as a stereotypical “mad scientist”. Another example which comes to my mind is Professor Marius, the fifty-first century inventor of the robot dog from Doctor Who, K-9:
Finally, I absolutely have to mention Professor Branestawm, my other childhood nickname, marvellously portrayed by Harry Hill on the BBC in the past couple of years:
If you wanted a definition of “madness”, once you got over the rather disrespectful term, which was acceptable in the ’80s, you might plump for either “doing the same thing every time and expecting a different result” or “being out of touch with reality”. Although many people might caricature scientists as being socially out of touch with reality, doing the same thing every time and expecting a different result is more or less the opposite of what scientists are supposed to do. Or is it?
Yesterday’s entry got out of hand and I decided to cut it off and leave the rest for today, but I’m going to have to repeat some of what I said back then, so it’s slow progress. I want to talk about the philosophy of science.
When Christians, for example, say “atheists” they usually seem to mean people who are also metaphysical naturalists and scientific realists. Focussing on the second, there’s been a lot of argument about what scientific realism is, but it amounts to something like the idea that the world described by science is the real world, or an approximation to it. Christians who are also Young Earth Creationists sometimes seem to feel the need to couch their beliefs in scientific terms, presenting what they see as evidence for a young Earth, a global Flood and the perceived impossibility of evolution. They might also see the world as in constant physical decline and deterioration, as with the idea that the speed of light is slowing down. There have also been attempts to reconcile what we seem to see out there in the Universe, which appears to be old, with their views: some see space as non-Euclidean in the opposite way to Einstein, supposing that parallel lines diverge with distance rather than remaining equidistant, which allows them to conceive of time as running at a different rate on Earth than in apparently distant galaxies. Or, in a view which has recently become more popular, they may simply believe the world is flat.
Many Muslims and a tiny minority of Jews have the same sort of beliefs. In some countries the majority of Muslims are creationist and there is also a case on record of a rabbi refusing to certify the kosher status of a range of milk cartons because they depicted dinosaurs, which he believed could not have existed because of the world being less than six thousand years old. Having said that, I understand that Jewish creationists are very rare indeed.
Whereas all of that is in my view based on a fundamental misunderstanding of sacred texts, the difference between fundamentalist Abrahamic scientific “realism” and secular scientific realism is that the former rejects anything which appears to contradict a particular literal interpretation of the sacred text in question even if it is otherwise irrefutably supported by empirical evidence. This is because they have as much confidence that their literal interpretation of their sacred text is correct as they have that 2+2=4. Whereas they would probably see it as the Word of God, it’s actually their reading of the text in which they have confidence, not its actual content or anyone else’s reading. Nonetheless, they would very probably still see themselves as scientific realists – they seem to believe that science, their “science” that is, describes the real world or approximates it.
When I studied scientific realism at university I was rather puzzled by the reluctance my lecturers seemed to have to pin down what it actually was. It turns out that almost everyone has their own view of what it is. The general idea seems to be that science is true and about real things even when they’re not observable. Dark matter is a really good example of this, one in which I happen not to believe, but another example is neutrinos. These are subatomic particles with no charge and almost no mass which hardly interact at all with ordinary matter, and because our sense organs are made of ordinary matter, as are scientific instruments, the chances of detecting them are pretty small.
A neutrino “telescope” can consist of a large tank of dry-cleaning fluid buried deep underground, with neutrinos detected by finding the occasional chlorine atom which has been transmuted into argon over the course of years. About forty years ago there was a bit of a scientific panic because such detectors couldn’t find as many neutrinos coming from the Sun as the theories predicted, meaning that either the theories were wrong or the Sun was about to go out. The deficit was eventually explained by neutrinos changing type, or “oscillating”, on their way here, which incidentally means they have a tiny mass, and in any case the Sun was clearly still shining eight minutes ago or I wouldn’t be able to see to type this.
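To put “pretty small” in perspective, a bit of back-of-the-envelope arithmetic helps. The figures below are rough order-of-magnitude values, not precise measurements:

```python
# Back-of-envelope arithmetic: how many solar neutrinos stream through
# a person every second, yet essentially never interact.
# Both figures are rough order-of-magnitude estimates, not precise data.

SOLAR_NEUTRINO_FLUX = 6e10   # neutrinos per cm^2 per second at Earth (approx.)
BODY_CROSS_SECTION = 7000    # rough frontal area of a human torso, in cm^2

per_second = SOLAR_NEUTRINO_FLUX * BODY_CROSS_SECTION
print(f"Roughly {per_second:.0e} solar neutrinos pass through you each second")
```

Despite numbers like that, interactions are so rare that a human body intercepts only on the order of one neutrino in a lifetime, which is why detectors need enormous tanks and years of patience.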
Scientific realism is committed to the idea that the world doesn’t go away when you stop believing in it, and to the thought that you don’t make the world. This could be seen as important from a Christian perspective as it seems to mean there’s no practical result to prayer. However, it also seems to contradict physics as it’s understood today because of the ideas of quantum mechanics and Schrödinger’s Cat, which is neither dead nor alive until it’s observed. Since this suggests that cats are not conscious, that example doesn’t seem to work very well. Most contemporary physicists, therefore, would not always be scientific realists. It would also be a problem in social science because clearly people do behave differently if they’re observed. Psychology calls this the “audience effect”. This might seem to rule out the possibility of the likes of psychology, sociology or economics being true sciences altogether.
Another option is of course instrumentalism, which is quite relevant to economics. Instrumentalism is, for the third time of saying, the belief that scientific entities are fictional and only there to account for what the instrument readings tell you. Sometimes this is probably true, as with this thing:
This is a Mark VIII E-meter as used by Scientologists to detect engrams. It works rather like a lie detector, measuring the conductivity of human skin to detect stress when the subject is being asked questions. Scientologists claim that these measurable stresses indicate that someone has had difficult past life experiences which have become blockages to personal growth, and that these can be cleared by certain methods. People who are neither Scientologists nor members of the breakaway group from Scientology known as the Freezone would probably accept that the E-meter does detect changes in the conductivity of the skin under stress, which was one principle on which lie detectors were based, but reject the idea that engrams are real. So where E-meters are concerned, most of us non-Scientologists are probably instrumentalists and likely to remain so. However, a scientific instrumentalist would also reject the idea that this is a real picture of platinum atoms under a field ion microscope:
This seems to mean also that scientific instrumentalists don’t believe in atoms. In some areas this kind of thing might make some sense because, for instance, a biological species seems to be real, i.e. it’s a population of organisms which can breed together and produce viable offspring, but I have to say that when I think about what physicists say about subatomic particles or dark matter, I find my bogometer goes off the scale. Then again, maybe I’m being too realist about the existence of bogons, and how bogus something is can’t be quantifiably measured at all.
My problem with instrumentalism is twofold. To illustrate the first problem, suppose there was a scientific experiment conducted to test the theory that fish feel pain. If it were then found that they did, according to scientific instrumentalism that would be a convenient fiction, and the question arises of how far this fiction could be extended. For instance, would it be okay to experiment on children to see if they felt pain and then regard that as a convenient fiction? For this reason I reject instrumentalism – it means there is a risk of not taking suffering seriously.
The second problem is related to the first. The idea that a cat is not herself an observer seems to fail to respect the likelihood of a conscious cat being able to suffer or otherwise have experiences. If you take this further to ever-simpler forms of life there doesn’t seem to be a reasonable cut-off point where experience will no longer occur. For example, it’s sometimes claimed that vegans shouldn’t worry about eating mussels, cockles and oysters because they haven’t got brains, but they do have a sense of taste and are even able to do something we can’t, which is to tell when the tide will change even in a fish tank thousands of miles from the nearest sea. They also do have nervous systems, the difference being that these are not concentrated in a single organ like the brain, but the question arises of why, just because they’re not like us, we should assume that they can’t suffer.
Pursue this to its logical conclusion and you’re forced to see everything as a potential observer, meaning that there is a problem with quantum mechanics. There would then be no such thing as an unobserved subatomic particle, and quantum mechanics would appear to be stuck.
Another observation, slightly different this time, is that our own observations are theory-laden and in a sense our own sense organs are scientific instruments. This would mean that our beliefs about the world, through our senses, are in fact theories, and therefore in the view of scientific instrumentalism, merely useful fictions. In other words, take this far enough and the world turns out not to exist. I believe the world does exist though.
This is MONIAC, a water-based analogue computer built in 1949 by the economist Bill Phillips to model the British economy. Money would literally flow, in the form of water, around the machine, which would make usable predictions about what different economic policies and conditions would do. This is a realist way of looking at economics, and the machine was used while Keynesian economics was popular. I don’t know if Keynes was the economic equivalent of a scientific realist or not, but I do know that Milton Friedman, the economist who supplanted him after 1979 as the theorist most influential on British economic policy, was not a realist but an instrumentalist. Like many others, he saw economics as divided into political economy and positive economics, the latter being a value-free scientific approach. It would clearly be possible to be a scientifically realist positive economist, but Friedman wasn’t. He believed instead that economic theories were useful fictions, among other things: means to ends which don’t actually describe anything real.
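The stock-and-flow logic MONIAC embodied can be caricatured in a few lines of code. This is a toy sketch with invented parameter values, not a reconstruction of the machine itself, which modelled many more flows (savings withdrawals, investment, imports and exports among them):

```python
# A very loose, toy sketch of MONIAC-style circular-flow modelling.
# All parameter values here are invented for illustration only.

def simulate(income=100.0, tax_rate=0.25, savings_rate=0.1,
             govt_spending=30.0, steps=50):
    """Iterate a crude circular flow of income until it settles."""
    for _ in range(steps):
        taxes = tax_rate * income
        disposable = income - taxes
        consumption = (1 - savings_rate) * disposable
        # Next period's income is the spending flowing back to households:
        # consumption plus government spending injected into the flow.
        income = consumption + govt_spending
    return income

print(round(simulate(), 2))  # settles at about 92.31
```

Run repeatedly, the flow converges to a steady level, much as the water levels in MONIAC’s tanks did; a realist reading treats those levels as standing for something genuinely there in the economy, whereas for an instrumentalist they are just a device that happens to yield usable predictions.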
Now I’m not able completely to justify what this makes me think, so I’ll just baldly assert it. It seems very interesting to me that just as scientific instrumentalism allows scientists to imagine that they aren’t really causing pain when they experiment on other species of animals, so economic instrumentalism seems to be particularly associated with an economic theory which is often seen as responsible for the economic policies of Thatcher, Reagan and their successors, which are so often seen as causing incalculable suffering, and which continue today. It just seems rather suspicious to me that the same theory which would make vivisection seem more acceptable than common sense suggests was also held by the economist responsible for the dismantling of the welfare state.
Of course I’m not actually scientifically realist either, but that’s a story for another time.