The Four Horsemen

Apart from the awesome arrival of our granddaughter, two things are on my mind just now, partly in connection with that event, because it focusses one on the future – which in this case will, I hope, extend into the twenty-second century. It has provoked the ironic purchase of more energy-efficient white goods (although clearly there is embodied energy in them), because we have to preserve this world for our grandchildren: this time it’s personal, as the cliché has it.

This is an eldritch chimæra of four works to which I’ve recently been exposed: ‘The Archers’ (I’ve been exposed to that since 1973 at the latest, but even so), ‘The Last Man On Earth’ (which apparently has really bad acting, but thanks to my lowbrow taste I’m able to enjoy it anyway), James Lovelock’s ‘Novacene’ and Edmund Cooper’s ‘The Overman Culture’. Three of these deal with a potential apocalypse, two of them focus on artificial intelligence and one of them is an everyday story of country folk. I’ll talk about the AI angle elsewhere. ‘The Archers’ is a notably non-apocalyptic soap opera, though with educational intent, at least initially. However, it’s recently been bugging me – apparently this is called a “plot bunny” – that the soap tells the tale of a village which clearly has a considerable history extending well back into the Dark Ages and probably Roman times. It has prequel novels which I understand go back to the early twentieth century, and there’s a volume covering its fictitious history which I’ve ordered and am currently awaiting with anticipation, but oddly, to me, what they’ve never really done is explore the mediæval aspect. Thus I decided to do that, and am currently researching and planning a seven-episode series covering the years 1315 to 1381, chosen for the eventful nature of the fourteenth century. They were, as the phrase has it, interesting times. In fact they were bloody awful, even leaving aside the fact that feudalism characterised the period. Although most people with more than a passing knowledge of English history would doubtless be aware of this, it’s still gobsmacking quite how shockingly grim the fourteenth century really was. The bucolic tone of ‘The Archers’ can’t really survive this turn of events, and the best that could be managed would be a kind of ‘Blackadder’-style gallows humour, if I decide to go in that direction.

The 1300s marked the end of the Mediæval Warm Period and the start of the Little Ice Age, which only seems to have ended as a result of the Industrial Revolution, as James Lovelock explores in his ‘Novacene’. In other words, the previously mild English climate favourable to the growth of crops, which moreover enabled marginal land to be brought into full cultivation, underwent a change to cooler summers, colder winters and more rain and snow. Europeans suffered terribly because of this. The warm period had enabled the population to grow considerably from about the beginning of the ninth century onward, and the climate change, nota bene, led to a crash with enormous consequences. Firstly, the cold wet weather of 1315 meant that crop yields for wheat crashed from a seven-to-one ratio to two-to-one or below. That proportion allows for one grain for planting and one for eating, which is just about subsistence agriculture, and the collapse was reflected in the productivity of other food crops too. Consequently there was no food; the wet conditions made it hard to store seed, salt used for preservation got damp and washed away, the fish moved south because of the cooling water, and there was considerable inflation. Villeins and serfs ended up eating the seed grain to survive, then slaughtered their beasts of burden for food for the same reason, leaving them unable to plough the fields – they’d eaten the oxen – and with nothing to plant even if they had been able to. Older people voluntarily starved themselves to death so the younger generation could survive, and there was also infanticide and cannibalism. This went on until 1317, and food stocks and farming didn’t recover completely until the mid-1320s. That was the Second Horseman – Famine.
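A rough back-of-envelope calculation shows why the fall from seven-to-one to two-to-one was so catastrophic. This is just a sketch: the yield ratios are the ones quoted above, and the “net grain” model (one grain from each harvest reserved for the next sowing) is my simplification:

```python
# Net food per grain sown, given a yield ratio of harvested:sown grain.
# One grain from each harvest must be kept back for the next sowing.
def net_food_per_grain_sown(yield_ratio):
    return yield_ratio - 1

normal = net_food_per_grain_sown(7)   # 6 grains to eat per grain sown
famine = net_food_per_grain_sown(2)   # 1 grain to eat per grain sown

print(f"Normal surplus: {normal} grains eaten per grain sown")
print(f"Famine surplus: {famine} grain eaten per grain sown")
print(f"Food supply fell to roughly {famine / normal:.0%} of normal")
```

In other words, even before yields dropped below two-to-one, the edible surplus had already fallen to about a sixth of normal, and eating the seed grain then removed the following year’s harvest entirely.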

Then there was the start of the Hundred Years’ War in 1337. The Hundred Years’ War actually lasted more than a hundred years and included several truces, and, being a mediæval war, it also broke for winter and for other observances – holy days and fast days, I’m guessing. One cause of the war was the attempt to deny the right of Isabella of France to “transmit” her succession to her sons, and the rivalry between the Plantagenets and the House of Valois was also a factor. Armies were fed from local food sources, so, happening as it did just a couple of decades after the Great Famine, this was not particularly marvellous either. As far as I can tell, this was a self-inflicted calamity with little to redeem it, although from a modern perspective the fact that it was partly caused by a gender-related issue redeems it a little. Even so, it still seems pretty appalling that the main reason for taxation up until very recently was to enable wars to be fought. Relating this to the Great Famine (not to be confused with the Irish and Scottish Great Famine of the nineteenth century), there was a general upturn in violence resulting from the desperate circumstances the famine had wrought, so it’s possible that the war too relates to the onset of the Little Ice Age. Even so, the Hundred Years’ War strikes me as the fourteenth-century version of Brexit, in that it was basically the fault of the royal houses of Europe. Like Brexit, it involved Calais, which was captured by the English and held onto for a couple of centuries. It’s interesting how the realities of physical geography produce these kinds of parallels. The First Horseman – War.

The Third Horseman is of course Pestilence, which for my purposes right now is the most “interesting” of the four, and also the most notorious event of the fourteenth century – the Black Death. The infection seems to have entered Europe via ships putting in at Sicily, Venice, Sardinia and Corsica in 1347. The signs and symptoms are pretty well known, but I’ll go over them again anyway. Tumors (note the spelling – not “tumours”) the size of apples arose in the armpits and/or groin, which oozed pus and blood when opened; the lungs filled with fluid and the body became afflicted with melancholy skin lesions. The victim also had a fever and vomited blood, and death occurred within a week. It was attributed at the time to a foul miasma brought by a wind, and looking at it from the perspective of humoral medicine, to me it looks like an excess of melancholy. That said, it notably marked the onset of doubt in the medical profession that the principles of Galen and Hippocrates were effective, and probably led to the introduction of chemotherapy shortly afterwards (not “cancer chemotherapy”). I could get led into a quagmire here because this is thoroughly homeedandherbs territory, but even so I’m sticking this, and the rest of what I’m going to say, here. In any case, whatever the cause, the disease killed around a quarter of the population of Europe.

The standard explanation nowadays is that the Black Death was caused by Yersinia pestis, carried by rat fleas, and was bubonic and/or pneumonic plague. A third variety, septicæmic plague, also exists. This explanation has been questioned, though. The Black Death spread far faster than those plagues would be expected to, crossed mountain ranges where rats didn’t go, and afflicted Iceland, where there were no rats at the time. There were also no reports of mass die-offs of rats, which might be expected. One form does produce the buboes mentioned in Boccaccio’s description, and there are petechial haemorrhages as described, but the spread may have been between humans rather than via rat fleas or rats. Pneumonic plague can be spread by droplet infection. It’s been suggested that the Black Death was in fact either anthrax or the African disease Ebola, which would make it a hæmorrhagic viral fever rather than a bacterial disease, and it’s notable that the first infections in Europe were in the Mediterranean region, just as it’s thought that AIDS spread into Europe from Africa via Sicily. However, Yersinia pestis DNA is found in plague pits – the problem with that being that it might have been there anyway, because rats were so common.

The soil and seed analogy emphasises that there are two groups of factors in infection: the pathogen itself and the state of the body in which it finds itself. Some of the time, the physical health of the body is fairly irrelevant because of the virulence of the organism associated with the disease, but this is by no means always so, and this, I think, is key to what the Black Death actually was. This can be illustrated with reference to the horrific disfiguring ailment known as noma, cancrum oris or “water canker”.

Noma is a disease now found mainly in Africa south of the Sahara. It starts with a gum infection in childhood, which then spreads to the cheek and elsewhere in the mouth, causing the tissues, including bone, to become gangrenous. The victim is left with a large hole in her face passing through to the inside of the mouth, which makes it hard to eat and leads to social stigma. It can also cause blindness due to inability to close the eye. Predisposing conditions include measles, poor dental hygiene, proximity to livestock and, particularly noteworthy, malnutrition. Babies are occasionally born with it, and having recently become a grandparent, I find this particularly appalling. In a sense it’s easily preventable, through good nutrition and antibiotics, and it’s a neglected disease. This term, which is official, refers to serious diseases which are widespread but relatively little studied and into which few resources are invested. Incidentally, in my alternate history of the Caroline Era, AIDS is a neglected disease: it was confined to Sub-Saharan Africa after John Paul II’s assassination brought a liberal pope whose acceptance of condoms stopped it spreading further, and it has practically wiped out the population of Central Africa. The fact that noma even still exists is appalling and ought to spur us all into bringing about global socialism, along with about a million other things. Of course this doesn’t happen, because the wealthy and “powerful” either never see it or are sociopathic due to their upbringing, so we continue with global capitalism and its outrageous toll of death and suffering.

Noma used to be found a fair bit in Europe, including Britain, where it persisted until at least Victorian times. We can assume, in fact, that it existed in England in the fourteenth century, where it would undoubtedly have led to the conclusion that a family was cursed, and caused social ostracism and persecution. The point about noma, however, is that without malnutrition and other stresses on the body, the same pathogens associated with it cause only self-limiting conditions. After the Great Famine, the population was weakened, and tuberculosis and pneumonia were widespread. Children who weren’t killed or eaten during this time would’ve grown up fairly sickly. I suspect, therefore, that the Black Death is not so much plague as such as the way Yersinia pestis takes advantage of a weakened and constitutionally compromised body – very much like noma, whose associated bacteria generally cause nothing worse than tonsillitis and sore throats here. In other words, it was all the rest of what was going on in the fourteenth century that led to the Black Death turning out the way it did. Subsequent waves of infection were milder, probably partly because of evolution in the human population but also due to improving general health and nutrition. Before I move on: the Black Death was very probably spread partly by flagellants – people walking to distant towns and cities whipping themselves and others in penitence for the sins of the world, as a way of assuaging God’s wrath – and also by the movement of troops in the War.

Rather than getting into the consequences of the Black Death just now, which are interesting and valuable material for the ‘Archers’ project, I’m going to turn to ‘The Last Man On Earth’. There are clearly going to be spoilers at this point, but I think probably the series is little known and not popular or critically acclaimed, so if you go on reading it’s not much of a loss and in any case they’re quite mild. In ‘The Last Man On Earth’, almost all vertebrates including humans have been wiped out by what seems to be a viral hæmorrhagic fever. So virulent was this that the apparent number of survivors in the whole of North America, whose population is currently 579 million, only seems to be in low double figures. This is of course a plot device to get all the people out of the way so the apocalyptic fun and games can start, but oddly, in spite of the fact that it’s based on gallows humour, it manages to introduce a number of realistic features which are usually ignored in post-apocalyptic fiction. What piqued my curiosity, however, was the feasibility of humanity being practically wiped out by a virus or other pathogen.

The Plague of Justinian in the sixth century is estimated to have wiped out up to a quarter of our species. Even this is far less devastating than the Mount Toba eruption, which on some estimates reduced humanity to a few thousand individuals. As for the Black Death itself, which like the Plague of Justinian is associated with Yersinia pestis, that seems to have killed about a quarter of the population of Europe, along with a large part of the human population of other parts of the Old World such as China. This is devastating, but also relatively easy to recover from. Spanish ’flu wiped out about 3% of Homo sapiens, and like the Black Death it correlated with a major European war. Prevailing wisdom holds that a virus is unlikely to wipe out our species, because a pathogen which destroys its host is destroying its habitat and would therefore wipe itself out. The problem to my mind with this argument is that we are ourselves in the process of making our environment uninhabitable, and there’s little sign of that changing. It also seems to me that this puts the cart before the horse in evolutionary terms. Mutations and evolution may lead to greater fitness to survive in the long run, but that doesn’t mean that individual mutations don’t lead to extinction in the short term, leaving other species who do have greater fitness to survive and reproduce. Maybe we’ve just been lucky up until now.

Everything I’ve written here so far has had a rather glib atmosphere to it, but this is of course a serious matter. Apart from anything else, it involves us, our families and friends. The abstraction and detachment I feel is probably an issue of scale. Even so, the Doomsday Argument is about the near future, and although I’ve addressed its validity elsewhere, because it may not in fact work particularly well, it’s worth covering again. Suppose one’s life to be a random sample of all the human lives which will ever occur, and more specifically that one’s thought that humanity might be about to disappear is a random sample. This thought clearly did occur many times in the fourteenth century, as is seen from accounts of the time along with the art and literature it produced. Even so, it seems reasonable to suppose that one’s life occurs about half way through all human lives which will ever be. It was calculated in about 1970 that seventy-five thousand million people had lived up to that point, with the cut-off in terms of evolution being Homo erectus, who lived from around a million years ago onward. Whether they were capable of having such a thought is another matter. Presumably it correlates with behavioural modernity, so this is in fact an overestimate of the number of people capable of thinking that way. Also in about 1970, population was doubling about every thirty years. This has apparently now slowed, but population growth generally slows because of development – children are then not being used as much for labour or care of the elderly, and infant mortality goes down – so an underdeveloped world such as this one, in which noma is still rife, has rapid population growth. Population reached seven thousand million in 2011. Seventy-five is roughly eleven times seven, and eleven is less than 2⁴.
Since doubling occurs every thirty years in current conditions, those four doublings amount to one hundred and twenty years, yielding a date of 2131 – we can expect the last human being to be born about then, given these assumptions. Note that this argument has nothing to say about the cause of our extinction, just that it’s likely to happen shortly after 2131, or at least within a human lifetime of that date.
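The arithmetic above can be made explicit. This is a sketch of the crude model just described – another seventy-five thousand million lives still to come, with numbers doubling every thirty years from the 2011 figure – not a rigorous demographic projection:

```python
# Doomsday Argument, crude version: ~75 thousand million humans have
# lived so far; assume we're halfway, so ~75 thousand million are still
# to come. With ~7 thousand million alive in 2011 and numbers doubling
# every 30 years, count the doublings needed to reach that total.
past_lives = 75          # thousand millions
population_2011 = 7      # thousand millions
doubling_period = 30     # years

doublings = 0
n = population_2011
while n < past_lives:
    n *= 2
    doublings += 1

years = doublings * doubling_period
print(f"{doublings} doublings = {years} years -> {2011 + years}")
# 4 doublings = 120 years -> 2131
```

Three doublings only reach fifty-six thousand million, which is why the argument needs the full four, and hence the one hundred and twenty years.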

There are naturally major flaws with this argument. For instance, it might just be predicting when humans become immortal, or when we stop being pessimistic about our future. It also leaves the mechanism of our demise entirely mysterious. At first it looks as if it suggests a cause connected to overpopulation, but in fact it does nothing of the kind. What it does do, though, is focus the mind on the future and possible reasons for human extinction, particularly if one has descendants, cares about people who have them, or is just young. My father is currently ninety. He lived through a time, and in conditions, which were not particularly conducive to health. Nonetheless he’s still here. Extending that to our granddaughter, who was born last week, it’s reasonable to expect her to live until at least 2109, a mere twenty-two years before the supposèd cutoff date, which is itself well within the expected lifetime of any children she might have. This brings the prospect vividly home to me, and it makes me wonder, moreover, what explains the disturbing lack of concern shown by our apparent leaders and the high and mighty generally. While I’m aware that they might not buy into the Doomsday Argument, which even I think has its flaws, it’s no longer rational to deny anthropogenic climate change, and on the whole these people have children and probably grandchildren. What are they expecting to happen?

During the ‘noughties, it was notable that the schooling system did not appear to be preparing the rising generation for survival in a post-apocalyptic environment, and in fact seemed to be mainly concerned with short-term economic goals. This was under the auspices of New Labour, which was enough to discredit the Blairite project completely and make a vote for Labour a crime against one’s children and grandchildren. This doesn’t currently hold true, although it does of course apply to voting Conservative at the moment. I find it hard to avoid the conclusion that the richest governments and powers on the planet are engineering some kind of population crash, although this may be paranoid. Regardless, we can enumerate the possibilities:

  • They’re just short-termists.
  • They are aware of the risk and have a plan but are keeping it secret in order to hide the fact that it doesn’t include most of us.
  • They’re in denial about the risk and therefore have no relevant plans.
  • They’re aware of the risk but expect something like a technical or market-based fix.
  • They’re dispensationalist premillennialist Christians and see this as the apocalypse, and are therefore not interested in sorting it out.

I think the last is true of some religious people. The penultimate possibility is compatible with Singularitarianism – an idea I’ll explore later when I get to talking about ‘Novacene’ and ‘The Overman Culture’ – which has been described as “The Rapture For Nerds”. One problem with it, for them, is that it doesn’t look like it’ll end with them still in power, or rather, still able to maintain the illusion to themselves that they are. Another is that it’s been possible to provide for every member of the human race for a very long time now – though probably not in the fourteenth century – but this simply hasn’t been done, and that doesn’t depend on technology. The idea that they’re in denial is rather feasible, and was suggested to me recently. The second possibility strikes me as both the most plausible and the most paranoid, and that bothers me because I can’t decide which it is.

But I want to leave you with this. I now have a grandchild. On the whole, most people in the developed world become grandparents at some stage, although of course many people are also child-free, not least because of the state of the planet and society. Considering that this is where the majority finds itself, we can surely be expected to have common cause in getting this sorted out. It’s just extremely concerning to me and also rather mystifying. Any thoughts?

Nessie And Friends

For someone living in Great Britain, Loch Ness is an absolutely awesome place. It holds more water than all the lakes of England and Wales combined. Glen Mòr, in which it’s situated, marks the fault line separating off the part of the Highlands which once belonged to the same mountain chain as the Appalachians; the loch includes the only inland lighthouse in Britain and is connected to the sea by the fastest-flowing river in Britain. At 227 metres deep it is the second deepest loch, after Loch Mhòrar, and with a mean depth of eighty-seven metres it is the deepest body of fresh water on this island on average. It has a volume of 7.4 cubic kilometres. The water is incredibly cold, really clean and somewhat stained by peat. It looks rather like whisky, in fact. It’s only the second longest loch, and Lough Neagh in Ireland, the biggest inland body of water in these isles, is seven times larger in area, but unlike its Scottish rival Lough Neagh is quite shallow, averaging nine metres deep with a maximum depth of only twenty-five. Loch Ness is long and narrow, and this may be significant in the perception that there’s a monster in it.
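The contrast between the two lakes can be checked with rough numbers. The surface areas below are my assumptions – Loch Ness is usually given as about 56 km², and Lough Neagh as roughly seven times that – while the depths and the 7.4 km³ volume are the figures quoted above:

```python
# Compare the two volumes: a much larger surface area
# doesn't compensate for being shallow.
loch_ness_volume = 7.4                 # cubic km, as quoted
lough_neagh_area = 56 * 7              # km^2: "seven times larger" in area
lough_neagh_mean_depth = 0.009         # km (nine metres)

lough_neagh_volume = lough_neagh_area * lough_neagh_mean_depth
print(f"Lough Neagh: ~{lough_neagh_volume:.1f} cubic km")
print(f"Loch Ness holds ~{loch_ness_volume / lough_neagh_volume:.1f}x as much water")
```

So on these rough figures the smaller, deeper loch holds about twice the water of the much larger lough.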

(c) 1972, Academy of Applied Science/Loch Ness Investigation Bureau

Right now, I don’t believe in the Loch Ness Monster, Nessiteras rhombopteryx. As a child, I truly did, and I wasn’t alone in that. I really wanted to believe there was a Mesozoic plesiosaur living in the loch, and as usual with young children, if a strong emotion accompanies a thought, that thought becomes a belief. You’ll also appreciate that the really big sciency things for children at the time were space and dinosaurs, and as a lifelong nerd it isn’t surprising I was into all that stuff. In addition, though, I was also into a load of other “weird” stuff which nowadays I’d probably call Forteana – UFOs as alien spacecraft or time machines, psychic powers, past life regression, all the usual suspects. A lot of my peers were interested in that kind of thing too. But I, and they as it turned out, was oddly discriminating about what I chose to believe in. I had no trouble believing in the Loch Ness monster or other lake monsters, sea serpents and the likes of giant dinosaurs who had survived from the Mesozoic into the Glam Rock Era. After all, there was even a band named after a dinosaur at the time. But for some reason I did have a problem believing in the Yeti and Sasquatch. There were several reasons for this, I think. One was that they were a bit too much like teddy bears, and I was trying to put away childish things, which ironically led to that childishly irrational choice. Another, something which I noticed about myself quite early on, was that I was drawn to the strange, i.e. that which was unlike the prosaic. For instance, birdwatching has never appealed to me much because the animals concerned are everywhere, and at that stage I found cetaceans way more interesting than land mammals, marsupials more interesting than placental mammals, and so on. Sasquatches and Yetis were both placental mammals and very humanoid, thereby reducing their appeal.
Another reason was the rather poor marketing job done on the Yeti and Sasquatch by calling them the “Abominable Snowman” and “Bigfoot”, both of which came across as rather goofy names, not allowing the cryptids concerned to be taken seriously. I mentioned also that my friends were sceptical about certain “mysteries”, for want of a better word, and in particular that they didn’t believe in the Cottingley Fairies. I think this is gender-based. They were happy to believe in the more macho monsters – massive, aggressive, hulking muscle-bound role models perhaps – but not in the rather feminine little flying girls dancing in rings at the bottom of the garden. Not that I necessarily believed in fairies myself, although they did once nick one of my library books, but that’s another story.

The funny thing about what I’m going to call the “mountain men” – because I do think there’s a gender-based element in believing in them and in their image – is that although the Yeti is in theory more plausible than the Sasquatch, a lot more evidence has been produced for the latter than the former. The reason I say this is that a few million years ago there was in fact a very large ape called Gigantopithecus living in South Asia, terrestrial and related to the orangutans. He (see above) was getting on for three metres tall standing upright, and it makes sense to suppose that come the ice ages he would’ve become adapted to the colder conditions, ending up retreating into Tibet and the Himalayas when the ice sheets withdrew. There’s an entirely feasible process whereby yetis could’ve evolved and ended up in that part of Asia. By contrast, Sasquatches are much harder to make work. Either apes would have had to enter the Pacific Northwest of North America somehow, or there could’ve been convergent evolution from New World monkeys when the fauna exchange occurred in the Pliocene (I’m doing this from memory, incidentally, so I might’ve got the exact date of the formation of the Isthmus of Panama wrong, but it was sometime around then). New World monkeys are more arboreal and smaller than Old World monkeys and never gave rise to apes, although there could be other reasons for that, such as the absence of the right kind of environment or ecological niche. If apes – particularly our closest relatives, the other hominins such as the Neanderthals – had reached North America, they could’ve been expected to leave remains of their activities and bodies, and that didn’t happen.

Nonetheless, the Sasquatch is far more “popular” than the Yeti. I haven’t heard anything about yetis for decades now, and they don’t seem fashionable compared to their American cousins. Reports of yetis are largely based on hearsay, and the few apparent samples have turned out to be from bears. It’s possible that there’s a known endangered species of bear living up those mountains which gets confused with a humanoid. Bigfoot is another question entirely. There’s alleged film footage, there are samples which have been subjected to genome sequencing, and there’s even a semi-official Latin binominal for them: Homo sapiens cognatus, recognised by ZooBank. This would make them a human subspecies, which seems odd to me because I’ve always thought of them as belonging to a separate genus entirely. Clearly with science the jury is out until something is well-corroborated and replicated, but I don’t feel able to accept that they’re real at all.

Bigfoot investigators have themselves been investigated scientifically, and are said to be people who feel excluded from the establishment and the usual academic career ladder. This shouldn’t be taken as a comment on their intelligence, but it does often mean that they’re self-taught and not necessarily trained in scientific investigation, and they are of course largely excluded from a community of researchers which could otherwise provide either a sanity check or groupthink, maybe both. Their situation reminds me of shop stewards, that is, people excluded from middle class career progression and therefore pursuing promotion or a role which uses their abilities and experience by other means. The presence of the Rockies in two developed and wealthy nations also means that there are facilities and infrastructure available which may not apply so much to Nepal, Tibet and associated regions.

But for whatever reason, I don’t believe in Bigfoot.

As a child, my understanding of Nessie was that it was a species of surviving plesiosaur which had become trapped in the loch when Glen Mòr first formed, although since the glen was apparently completely covered in glaciers for many thousands of years relatively recently, I don’t know how that works exactly. It suggests that plesiosaurs were around more widely in the world before this happened, and this is to my mind one of the problems with the idea. Although there are plenty of other lake monster accounts in the world, including ones in North America and Japan, the niches occupied by plesiosaurs now seem to be filled by the likes of whales and seals, and in fact there seems to be a more general problem with the survival of any of the large animals of the Cretaceous onward. If they had managed to survive more than just marginally at the beginning of our era, they would’ve been in an excellent position simply to take over again and exclude the rise of the mammals. Either plesiosaurs or large dinosaurs must surely have been very close to being wiped out immediately after the Chicxulub impact, or they would just have come back again immediately afterwards. I do in fact think that there probably were a very few survivors into the Palaeocene, but they were too isolated to find each other and mate, or possibly the populations were so small that they would’ve become inbred and unable to thrive. In fact, so far as carrion-eaters were concerned, it’s even possible that they underwent a temporary, unsustainable population explosion. One thing which definitely did happen was that some of the small reptiles and birds who made it through evolved into larger forms millions of years later, such as Diatryma and Phorusrhacos, giant flightless birds getting on for three metres high, and the lizard Barbaturex, which although it was only about 2.6 metres long was probably the largest animal in its habitat at the time.
But there are no signs at all of plesiosaurs and if there had been, they would’ve had to have been pretty competitive to survive at all, and therefore probably quite common.

When I wrote ‘Here Be Dragons’, I tried to make the Loch Ness Monster work, and the only way I managed it was to imagine desmostylids surviving in the Atlantic Ocean. Desmostylids were amphibious relatives of the sirenians – roughly like manatees and dugongs, but able to climb onto land as well. The trouble with them as candidates for Nessie is that they only lived in the Pacific; they didn’t spread to this side of the world at all and are distinctively American and Far Eastern mammals. This does suggest, though, that Nessie might not be a plesiosaur at all but some other exotic kind of animal. But there’s a big problem with anything of that size living in the loch. The monsters are supposed to be twelve metres long, and there can’t realistically be fewer than about twenty or thirty of them, for reasons of genetic diversity. There is not, however, enough living matter in the loch to support a community of that size. Not enough food. If they were mammals this would be even harder, because they’d need more fuel to maintain their body temperature. This may, though, be part of a clue to what’s actually going on.
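Whether the loch could feed such a community can be sketched with a back-of-the-envelope energy budget. Every figure here is an illustrative assumption – a guessed mass for a twelve-metre aquatic animal, a rough fish-stock estimate of the kind often quoted for the loch, and the crude ecological rule of thumb that a predator population needs something like ten times its own biomass in available prey:

```python
# Rough feasibility check for a breeding population of large predators.
monster_count = 25          # minimum viable population, as discussed above
monster_mass_tonnes = 5     # assumed mass of a 12 m aquatic animal
fish_stock_tonnes = 20      # rough illustrative estimate for the loch

predator_biomass = monster_count * monster_mass_tonnes
# Rule of thumb: a predator population needs on the order of 10x its
# own biomass in available prey to sustain itself.
prey_needed = predator_biomass * 10

print(f"Predator biomass: {predator_biomass} tonnes")
print(f"Prey biomass needed: ~{prey_needed} tonnes")
print(f"Available fish: ~{fish_stock_tonnes} tonnes")
```

Even with these generous guesses the shortfall is well over an order of magnitude, and it would be worse still for warm-blooded animals.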

Looking at the loch, one notices a couple of things. One is that there are plenty of grebes in the water, who have a habit of rearing up on their legs while swimming, which makes them look like the neck and head of a larger animal most of whose body is underwater. The other is that there are peculiar standing waves in the water running lengthwise which look either like snakes or solid humps in the water, which is how the monster is described. I don’t know what causes these but I presume it’s something to do with the shape of the body of water and the banks. There are in fact thermal standing waves deep in the water although these aren’t supposed to have any visible surface manifestation. The waves you can see are pretty distinctive though.

Sir Peter Scott, the well-known ornithologist and son of Scott of the Antarctic, strongly believed in the monster and went so far as to give it the Latin name I mentioned earlier, Nessiteras rhombopteryx – “Ness monster with diamond-shaped fins”. Some joker later pointed out that the name is an anagram of “Monster hoax by Sir Peter S”, which is interesting but probably a coincidence, although a very good one. On the whole I think the people who wrote in and said they were looking into the story were genuine, honest people and definitely not hoaxers, with some exceptions: the famous “Surgeon’s Photograph”, for example, really is a hoax. A submersible was sent down to scan the water with sonar and found a large echo, which was either a large moving object or a shoal of fish. Parsimony demands it was the second, of course.

There is, to my mind, just one possibility, though a pretty slender one, that there is indeed a large animal – or rather a whole community of large animals – in the loch which does make sense to me. This is the suggestion that there are in fact Greenland sharks in the water. These are extremely long-lived animals, living up to perhaps five hundred years. They grow to about six metres long and live at very low temperatures, so their metabolism is very slow and they wouldn’t need much food. They are also found locally, in the waters around Scotland and further north. They’re said to be able to adjust to living in fresh water, too, which makes sense because the Arctic Ocean is less salty than the other oceans: it’s almost landlocked, with many rivers flowing into it along with meltwater from snow and ice. They’re also said to be difficult to detect with sonar, and if they grew to a considerable size in the loch they wouldn’t be able to return to the sea. So although I have been very doubtful about the existence of any large animals in Loch Ness, I have to admit that, although I’ve only heard about this idea recently, it does sound quite feasible.

I’ve written all this without looking at what’s now thought about most of the topics covered, so I might be way off-beam. I used to believe strongly in Nessie, then stopped, mainly because of the food argument and the improbability of there being plesiosaurs around nowadays. I’m still pretty convinced there’s nothing interesting there, but I have to admit it’s just possible there’s a small number of freshwater Greenland sharks in the loch.

Or it could just be the frog exaggerator.

Theistic Stockholm Syndrome

As you must surely know, Stockholm Syndrome is a situation where a kidnappee comes to ally herself, apparently willingly, with her kidnapper. It’s named after a 1973 incident at Norrmalmstorg in Stockholm, where a bank robber took four people hostage, at least one of whom later showed sympathy for him and complained about his treatment to Olof Palme. The classic example, though, is the Symbionese Liberation Army’s kidnap of Patty Hearst in 1974, who later proceeded to rob banks with them and unsuccessfully pleaded Stockholm Syndrome when caught. It’s also said to occur in child sexual abuse situations, and classically in cases of domestic abuse. As a former kidnappee myself, you might think I have some insight into the situation, and to a limited extent I have, though I can see that kind of behaviour much more clearly in other parts of my life. I would say, though, that the reason my case was newsworthy is that I had several opportunities to report the incident to the police which I didn’t take, and that I felt a strong need to protect my kidnapper. This was partly for pragmatic reasons: I felt a prison sentence would mean he would resort to crime later, having no good prospects, and there would be further victims down the line from myself. That may be a rationalisation, but apart from the general sympathy I feel for people just because they’re people, I wouldn’t say I felt especially sympathetic towards him.

The diagnostic criteria for Stockholm Syndrome are the development of positive feelings towards one’s captor and negative feelings towards the police and authorities. I didn’t satisfy those criteria because I started with negative feelings towards the authorities and police, and in fact felt more positively towards them as a result. The characteristic features of being captured in this way include poor memory of the events, denial of their reality, flashbacks, confusion, fear, despair, lack of feeling, depression, guilt, anger, PTSD, reliance on the captor, the gradual ramping up of physical issues associated with the situation such as thirst and hunger and their possible adverse consequences, caution, aloofness, anxiety and irritability. I can see some of that in my case, but not all of it. For me, as an example, it was more like a flashbulb memory, and I was able to tell the CID in great detail – more than they were expecting – what had happened and when, which might be the result of a predilection for depressive thinking, which gives me better recall of negative experiences than positive ones.

On the whole, people who believe in God seem to say that God is good, although there’s also misotheism and deism. Prima facie, it seems odd to start from first principles and conclude that not only is there a God, but that that God is morally aligned by human standards. Hence you might believe in a morally neutral God or, given your life, the state of the human world or the existence of worms who eat babies’ eyes or whatever, that there is a God but that that God is evil. Nonetheless, most people seem to opt either for atheism, apatheism, agnosticism or theism. This is, as an aside, somewhat reminiscent of the limited motives people have for killing themselves – people do it because of depression or as a form of euthanasia but rarely for other reasons, at least in the West. Nonetheless, theism – belief in a loving, personal and involved God – is very common.

Given, then, that people do believe in a loving, personal and involved God, one might think they would associate their version of God with actions which seem intuitively good or positive. Sometimes this does happen, but quite often the reverse seems to. Things get a bit difficult here, because if one does believe in a good, loving God – and I do – one may be tempted to model one’s idea of God on one’s own values, and the chances are that, since one is not perfect, not all of those values will be ideal. This, incidentally, doesn’t depend on whether God exists. For instance, one might take a different view of infidelity than one’s partner does. Belief in objective morality doesn’t have to go with theism, and moral relativism needn’t go with atheism. Beyond a certain point, though, it would become very difficult to sustain a certain set of ethical beliefs while also believing in, for example, the “Old Testament God”. My discussion here is not about whether God really exists or is good, so much as about a particular combination of beliefs which I think may lead to a certain coping strategy.

The God I’m talking about is wrathful, vengeful and judgemental, is very willing to kill people and to exhort others to kill, and maintains an eternal torture chamber we call Hell, but is at the same time called loving and perfect by His followers (and also tends to be referred to as “He” and “Father”). Also, and this is crucial, this God is all-knowing and therefore telepathic – He can read your mind, and so knows at all times what you’re thinking and feeling. Having just watched the second series of ‘Criminal Justice’, abusive heterosexual marriages are very much on my mind at this point, but even if they weren’t, I think this would still look very much like one to me. The abuser is male and controlling, keeps a record of everything you do and holds the threat of violence and other forms of abuse over your head at all times. In fact the relationship between God and the Church is often explicitly compared, even by Christians, to a marriage, although the spin given to it is unsurprisingly much more positive. The comparison becomes all the starker considering the patriarchal nature of the society in which this was a popular belief, with the father having enormous power over his whole family and slaves, even including the power of life and death in some cases (which incidentally is why it’s absurd to believe the Bible condemns abortion – what it condemns is women having control over their own bodies, because of the paterfamilias).

Before I go on, I don’t want to give the impression that men are never the recipients of abuse in relationships. Of course they are. There are statistical differences of course but it’s not even relatively rare for that to happen in heterosexual relationships. It’s just that this example involves a deity seen as male. God is also our father in this scenario rather than our husband, so it’s not particularly about marriage either, but it clearly is about a more powerful person who is telepathic. I’m going to carry on calling God “him” in this because it reflects what I think is the view of the people involved, not because I think all abusers are male or that God is.

Even in abusive situations between humans, the victims can come to believe that they are in the wrong, and fool themselves that they love the abuser and consider him to be good. They may have come to persuade themselves that he is more powerful mentally than he in fact is, without quite straying into the realm of believing him to be telepathic – though I can easily believe that, in someone prone to psychotic thinking, the stress of being in that situation might lead them there. In God’s case, however, we assume Him to be a mind-reader, and therefore that our minds are transparent to Him; consequently, if we even let ourselves believe He is not good, He will visit upon us the kind of vengeance He visited on Sodom and Gomorrah. For this reason, I believe that certain theistic religious believers, deep down, don’t believe that their God is good or loving at all, but try really hard to push that belief down, with the result that they have Stockholm Syndrome.

Having said all that, I’m now going to move on to the cheery subject of Hell. We generally tend to believe the Bible tells us that Hell is a physical location underground where the souls of the dead are tortured for all eternity in fire by innumerable demons whose king is Satan. However, just as the popular view of God is really Zeus, with the long white beard and the thunderbolts, so our view of Hell is strongly influenced by Greek and other mythology (as is the Bible itself, of course). The Greek Tartaros is a place deep underground where the Titans are imprisoned and tormented along with wicked humans. It’s the place where Tantalos (after whom the metal tantalum is named) is unable to reach the grapes just above his head or the water in which he stands, because the former are pulled out of his way and the latter recedes as he reaches for it, and the place where Sisyphos has to push a boulder up a hill only for it to roll back down just before reaching the peak. The word “ταρταρος” occurs (in verbal form) only in the very suspicious second epistle of Peter, where it describes the place the rebellious angels are kept in chains while awaiting judgement. Apart from that, the clearest reference to Hell seems to be in the story referred to as “Dives and Lazarus”, in which a rich man traditionally called Dives – though that’s a description rather than a name, simply meaning “rich man” – goes to Hell and a beggar goes to Heaven, as it’s commonly understood. The problem with taking this story literally is that it has Dives seeing Lazarus from Hell (and it doesn’t actually say Lazarus is in Heaven, but does refer to an enormous chasm between the two of them), which makes it rather unfeasible. Even so, the word used for the place of torment here is “ᾍδης”, a word used many times throughout the Greek version of the Bible, including the Septuagint, where it translates the Hebrew “sheol”.
In Greek mythology, Hades refers to a shadowy realm where the spirits of the dead dwell, not in suffering but in a kind of half-life. This is also a traditional Jewish view. The people there have not necessarily lived particularly good lives; there’s a mixture of people, rather than the sorting into sheep and goats that we see in the New Testament.

Another common claim is that “Hell is the grave”. “Grave” is another possible translation of sheol, perhaps as the common grave of the human race – simply that part of the Earth beneath the surface where the dead are interred. Thus apparent references to Hell in Bible translations might only be referring to the grave, or perhaps slightly more figuratively to the “fatal urn of our destiny”, and on the whole they don’t seem to admit the idea that the dead are conscious, with the exception of the “Witch of Endor” incident, where the spirit of Samuel speaks to Saul, which many Christians have found problematic. There’s also the issue of the Lake of Fire. This is referred to in Egyptian and Greek mythology and also in the Book of Revelation, and seems to be the same as Gehinnom – the Greek Gehenna, from the Hebrew name of a valley in Jerusalem where the kings of Judah appear (according to one interpretation) to have burnt their children to death, and which was therefore considered cursed. In the Tanakh it’s referred to in the books of Chronicles, Kings, Nehemiah and Joshua, and possibly alluded to in Isaiah, but as a physical place on Earth rather than somewhere the dead go to be punished. Rabbinical Judaism, I hear, regards it as a purgatory-like place where the sins of the dead are burned away.

I’m not going to deny that Hell, more or less as it’s popularly understood, is in the Bible, and for my own reasons I have to say that I do believe in both Satan and Hell, but I’m not going into that here. My point is really that it is up for debate. Another view taken by some Christians is annihilationism: the idea that those who are not saved simply cease to exist at the Day of Judgement. I think that takes away any motivation for believing for selfish reasons – which wouldn’t work anyway – and means that, in a Pascal’s Wager kind of way, there’s not much point in believing, because the stakes aren’t high enough.

But I’ve now run out of time, so I’ll leave you with this. I firmly believe that many people who call themselves Christian, and who hold that familiar idea of a God who is wrathful yet also supposed to be loving, are suffering from a kind of denial about the fact that such a God would not be good or lovable, and are in fact in an abusive relationship with that kind of God. This further makes me wonder whether they are also the perpetrators and/or victims of such relationships in their real lives, and perhaps – to be topical for a second – whether they would vote in a similarly abusive head of state.

Untranslatability And Rubik’s Cubes

Are there really untranslatable words? If so, could a language be entirely untranslatable and if so (again) how? I’ll start with Rubik’s cubes and move on to saudade and sisu.

I never managed to solve the Rubik’s cube. The closest I can get is three sides. I refuse to cheat by reading up on how to do it, and I don’t know how many people who can do the cube have cheated in that way. However, certain things can be seen to be true of the cube which don’t depend on knowing how to solve it. One is that it’s impossible to complete five faces without the sixth also being complete: if five faces were done and the sixth were not, at least one facelet of some sub-cube would have to be the wrong colour, which can’t happen on an unmodified cube. That said, there could be other versions of the cube, maybe prank ones, with a facelet deliberately the wrong colour. There’s more to it than that, too. Cubes can easily be dismantled and put back together again, but unless you reassemble a cube in an arrangement you know for sure is reachable from the completed state, the probability is that you will have put it into a position from which you can’t get back to the original state without taking it to bits again. This is because only one arrangement in twelve can be reached from the perfect starting state. The number of possible arrangements, although vast, splits into twelve equally sized sets, and no arrangement in one set can be reached from any of the others. The branch of maths known as group theory can be applied to cube-solving and to these permutations, whose sets have been referred to as “orbits”.
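The factor of twelve can be checked with a little arithmetic rather than any cube-solving skill: count every way of reassembling a dismantled cube, then divide by the well-known count of positions reachable from the solved state. A quick sketch in Python – the reachable-state figure is the standard published one, not something derived here:

```python
from math import factorial

# Every way of reassembling a dismantled 3x3x3 cube:
# 8 corner pieces (8! placements, 3 orientations each) and
# 12 edge pieces (12! placements, 2 orientations each).
all_assemblies = factorial(8) * 3**8 * factorial(12) * 2**12

# Positions actually reachable from the solved state
# (the standard ~4.3 * 10**19 figure).
reachable = 43_252_003_274_489_856_000

orbits = all_assemblies // reachable
print(orbits)  # 12 - a random reassembly lands in one of twelve orbits
```

The division comes out exact, which is itself a nice sanity check: the reachable positions are precisely one twelfth of all mechanical assemblies.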

Now for language. When one “does” language, one is attempting to express oneself clearly in a way which can be understood by others – or perhaps by oneself, although Ludwig Wittgenstein would have a lot to say about that particular idea. It’s a process which reminds me somewhat of Rubik’s cubes, and in fact there are notations for cubes which are rather language-like, though somewhat restricted: they’re not going to be able to describe the world as a whole so much as the very restricted but still gigantic world of The Cube. A string of letters and punctuation, upper and lower case, can be used to describe how to turn the parts of a cube to get it to a particular point, such as F, B, U, D, L and R for Front, Back, Up, Down, Left and Right, with an apostrophe marking an anticlockwise turn, and so on. Other notations exist, such as ones which spell out clockwise and anticlockwise instead of using the apostrophe, but translation between them is easy, so this is not what we want. If, however, there’s a way of comparing the transformations of a cube to the communication of ideas, we might be onto something. If there were a scrambled cube in a different orbit, and the aim was to get it into a particular pattern inaccessible from another orbit, the same string of letters would be fine as a way of instructing someone how to twist the sides, but the end result would be different and communication would have failed. This seems much more promising. Now imagine this. There’s a community of language users whose languages are each based on the cube and how to turn it, and the instructions for getting from the completed cube to particular patterns are used as words for the concepts those patterns stand for. For instance, twisting the middle layer to produce horizontal stripes from a complete cube becomes a word meaning “stratified”, and turning the cube in a manner which produces a chessboard-style arrangement becomes a word meaning “chequered”.
A completed cube has a special, simple word and comes to mean “clean” or “perfect”. Nobody from the twelve communities has ever seen cubes from the others, but their languages use the same words. These words will fail to communicate across communities for quite some time. The set-up is quite artificial, though, and closely resembles Wittgenstein’s Private Language Argument (PLA).
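The way the same instruction string succeeds within one orbit and fails across orbits can be sketched in miniature. This is a toy model of my own, not a real cube: the “cube” is just six stickers in a row, the only legal move is a three-cycle of positions (an even permutation), and all the names are invented. A state differing from the solved one by a single swap then sits in a different “orbit”, and the same “word” produces a different pattern from each starting point:

```python
# Toy model (hypothetical, not a real cube): a state is a tuple of six
# stickers, and the only legal move is an even permutation of positions,
# so a state differing from "solved" by one swap is in another orbit.

def move_x(s):
    # cycle the stickers at positions 0 -> 1 -> 2 -> 0, leaving the rest
    return (s[2], s[0], s[1], s[3], s[4], s[5])

solved = ('R', 'O', 'Y', 'G', 'B', 'W')
swapped = ('O', 'R', 'Y', 'G', 'B', 'W')  # one transposition: another orbit

# The same "word" (the instruction "do move_x twice") applied to both:
a = move_x(move_x(solved))
b = move_x(move_x(swapped))
print(a == b)  # False - same instructions, different resulting pattern
```

Since three-cycles are even permutations and a single swap is odd, no sequence of legal moves ever carries the second state onto the first – which is the toy analogue of the twelve mutually unreachable orbits.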

Wittgenstein often wrote philosophy in quite an aphoristic way. One of the things he asked us to imagine was that we each carry a matchbox containing a beetle which we never show anyone else. For all anyone knows, a matchbox could be empty, and when someone says “beetle”, they might not be referring to anything. If we imagine twelve communities, each with a differently arranged cube, it does become easier to understand from an outsider’s perspective that “doing the Rubik’s cube” means something both different and the same for each group. It differs from the beetle-in-a-matchbox situation, though, because everyone in a particular social group can see everyone else’s cubes within that group, so it isn’t the same as a private language. Wittgenstein’s argument is that an essentially private word which could not be defined in terms of other words cannot mean anything, because there’s no way to distinguish between its seeming correctly applied and its actually being correct. I also suspect that Wittgenstein is rather too much of a logical positivist for my tastes, something which, oddly, I haven’t seen anyone else say. That is, he takes meaningful statements to have to be axiomatic, logically derived or verifiable by the senses, and in terms of philosophy of mind that would make him a behaviourist, which involves the denial of all purely subjective mental states. That said, he did say useful things, and the PLA is not just about logical positivism, and may not even apply to our dozen secret Rubik’s cube communities.

Wittgenstein also said that if a lion could speak, we could not understand him. If you hear a conversation between two people about a soap opera you’ve never come across, you might hear them referring to people like Vera and Ena as if they were real people. I used to have aunts called Ena and Vera. As a child, if I’d never seen ‘Coronation Street’, I might have heard some people on a bus going on about what was happening between Vera Duckworth and Ena Sharples and wonder why I’d never heard of any of that going on between my aunts (and I must admit right now that I’m curious about any story lines which might have involved both of them but I can’t remember). I wouldn’t understand the conversation, but I might think I could. Something even further removed from my experience would be talk of the “offside rule” and the “five-yard line”, which I think is what they call certain things in soccer but I have no idea what they are and I couldn’t participate in such a conversation. Or could I? Is there a way of manipulating talk about those things which means I could fake it? If I could fake that, are there whole fields of discourse which are fake? But leaving that for the time being, the more different someone’s world is from yours, the harder it is to understand what they’re saying, and this seems to be what Wittgenstein means about the lion. It’s been said that the apparent difficulty some non-neurotypical people have in empathising is not what it seems. The process of empathising seems to involve the faculty of placing oneself mentally in the other person’s position and imagining what it’s like for them, and the idea is that a non-neurotypical person doesn’t have difficulty in doing that, but once they’ve done it they’re not similar enough to the other person to succeed in imagining them accurately. It isn’t because they don’t go through the same process as anyone else. 
I personally tend to think being on the “spectrum” is more about salience than the primary absence of a theory of (other) mind(s), but it could lead to such circumstances. In such a situation, you can imagine someone saying “I want to eliminate world hunger” and then pursuing that by trying to wipe out all animal and fungal life on the planet. The initial statement, “I want to eliminate world hunger”, is in English, but that doesn’t help most people to understand its full meaning. If everything about that person’s mental world is sufficiently different from our own supposèdly shared one, the fact that they were speaking English wouldn’t even matter, and in a sense it would be untranslatable. But the reason is that it would take too long to outline their assumptions and views for it to be practical. Given enough time, it could be done.

Non-Cantorian set theory was a response to Russell’s Paradox: whether the set of all sets which are not members of themselves is itself a member of itself. This paradox led to older notions of set theory being thrown out, or at least placed in question, and a new set of axioms arose in response, aiming to avoid it. All are expressed in the notation of predicate and propositional calculus, and most are quite easy to translate into English. My grasp of maths is rather weak and also patchy: I noticed, for example, that I could often understand the content of the first and final year BSc maths syllabus at my university but not the second year. Nevertheless I don’t have a huge problem understanding Zermelo–Fraenkel set theory. Here’s a fairly easily translatable axiom from it, known as the Axiom of Extensionality:

∀x∀y[∀z(z∈x<=>z∈y)=>x=y]

That is, “for any x and any y, if for any z, z is a member of x if and only if z is a member of y, then x equals y”. This bare-bones “translation” of the above sequent is of course rather opaque, but it can be disentangled and simplified to read “two sets are equal if they have the same members”. The next few axioms are similarly translatable until one reaches the sixth, the Axiom of Replacement:
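Extensionality also mirrors the way set equality behaves in most programming languages, which may make it feel more familiar. A trivial sketch in Python:

```python
# Extensionality in miniature: sets are equal exactly when they have
# the same members, however they were constructed or written down.
a = {x for x in range(10) if x % 2 == 0}  # the evens below ten
b = {0, 2, 4, 6, 8}
c = {8, 6, 4, 2, 0, 0}  # order and repetition are irrelevant
print(a == b == c)  # True
```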

∀A∀w1∀w2…∀wn[∀x(x∈A=>∃!yφ)=>∃B∀x(x∈A=>∃y(y∈B/\φ))]

This is difficult even to type – I had to resort to using HTML directly to write the above line. It means that the image of a set under any definable function will also fall inside a set. That isn’t an immediately clear thing to say to most non-mathematicians. The above is also an axiom schema rather than a single axiom: it stands for one axiom per formula φ, so it belongs to a metalanguage referring to the language the axioms themselves are written in. It also occurs to me that there might be an issue with the use of the cardinal integers in it, because Russell’s Paradox is itself applicable to the foundations of arithmetic, so it could presumably be profitably expanded for consistency with Peano’s axioms of number theory. “∀w1∀w2…∀wn” also stands for any number of parameters, so taken together the schema’s instances form a countably infinite family of axioms – in practical terms inexpressible unless you just say something like “and so on, forever” or “ad infinitum”. This kind of thing takes most people into a realm where English, and probably most natural languages, are inadequate to describe something which is nevertheless not nonsense. It isn’t anybody’s fault that this can’t be expressed clearly in English, as far as I can tell.

Another example of this might be APL – “A Programming Language”. Like my other favourite programming language, FORTH, APL has been described as a “write-only programming language”: it has the reputation of being fairly easy to write but impossible to understand once written. I disagree with this assessment of FORTH, because giving words sensible names and inserting comments, as usual with coding, will lead to FORTH code making sense to other people. For instance, “: CHARSET 127 32 DO I EMIT LOOP ;” has a series of English words in it, the first being the name you’re giving to the new word, which could therefore be called something clearer like “ALLTHECHARACTERSINORDER”. APL, though, is not the same, because it uses symbols rather than letters and is very pithy.

(~R∊R∘.×R)/R←1↓⍳1000

will find all prime numbers lower than a thousand. It makes sense if you know APL but wouldn’t be easy to express in English.
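For comparison, the same algorithm can be spelled out in Python, which makes the length difference concrete. This is my own rendering of the idiom, not a mechanical translation: keep every number from 2 upwards that does not appear in the multiplication (“outer product”) table of those same numbers:

```python
# The APL idiom in slow motion: form the "outer product" of the
# candidates with themselves, then keep each candidate that is
# absent from the resulting multiplication table.
candidates = range(2, 1000)
products = {i * j for i in candidates for j in candidates}
primes = [n for n in candidates if n not in products]

print(primes[:5], len(primes))  # [2, 3, 5, 7, 11] 168
```

It works because every composite below the limit is some product of two candidates, while no prime is – wildly inefficient next to a proper sieve, but faithful to the outer-product trick the APL uses.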

Most of the time, the problem with setting these over into English or most other natural languages is that they take a lot longer to express when translated. Whereas I’ve described what the above APL does, I haven’t set out the algorithm in words, because it would be much longer, and the question of maintaining comprehension arises because of attention span. This feels like a bit of a cheat to me, because the weakness is to do with something which could be extended with practice, at which point the sequents would be understood. Untranslatability proper, to me, would mean a language which simply cannot be translated no matter what, and to illustrate that I can finally get round to talking about the likes of saudade and sisu.

Saudade is a Portuguese word which is often said to be untranslatable, although it can be described roughly as “longing”, “nostalgia” or “missing”. I don’t speak Portuguese, but the English words are said to be insufficient because they don’t express the strength of feeling involved. I wonder if it expresses first-stage bereavement, where there’s denial that something or someone is gone for good. Welsh has a similar word, “hiraeth”, and German has “Sehnsucht”, which, although it doesn’t strike me as untranslatable, I find myself thinking in German rather than English when I try it in my head, so maybe it is. This might mean it can’t be translated into English but can be into other languages which do have the same concept.

Sisu is a Finnish word meaning something like “steadfastness” or “perseverance”, or perhaps “foolish bravado”. I’ve only ever puttered around in the foothills of Finnish, so I can’t comment much on this. Finnish also has a word for Schadenfreude: vahingonilo. When coming across words like this, it can be easy to be hypnotised by the pride someone might have in their culture or language, which stops one from being able to think of an equivalent word. Even so, sisu seems to me to describe the quality one might need to succeed in giving birth vaginally, or perhaps to push through the wall during a marathon, but maybe I’m wrong.

A recent popular word of this kind is the Danish hygge, which I perceive as a synonym for Gemütlichkeit – a kind of homely cosiness. Other words claimed to be untranslatable include mångata, sobremesa, toska and iktsuarpok – Cynthia’s (the Moon’s) reflection on the water, which looks like a street heading towards her (Swedish); the convivial feeling after a meal (Castilian); gloominess/ennui/lugubriousness (Russian); and waiting impatiently for something to turn up (Inuit). Considering the first, the idea of multiple reflections on a methane ocean of objects in the night sky of Titan would extend this meaning, perhaps allowing for a whole series of roads to different moons and planets, and I could perhaps invent a word for that – but I’ve just been able to describe what it would mean in English. Iktsuarpok is a particularly useful word in these days of constant deliveries of stuff we’ve bought online. Could there be an entire language consisting of such words, though?

Jorge Luis Borges once wrote a story called ‘Tlön, Uqbar, Orbis Tertius’ which is pretty amazing, but rather than go into it all now, I want to talk about just one aspect of the story, which is disconcertingly vertiginous and rather like an earworm. The nations of Tlön, which appears to be imaginary in the story, are idealist in the sense that for them the world is not a collection of objects but a succession of separate, dissimilar acts. Thus their language is based on verbs rather than nouns, a sample phrase being “hlör u fang axaxaxas mlö” – “upward behind the onstreaming it mooned”, or “the moon rose above the river” (Tlön isn’t Earth). In the Northern Hemisphere, though, the languages are based on a different principle: that of the monosyllabic adjective. Thus the moon is described as “pale-orange-of-the-sky” or “round-airy-light-on-dark” depending on the impression given. Two different sensory impressions can be mixed, such as the cry of a bird and the sunset it is heard against. Hence there is a vast number of nouns, including direct translations of all those found in English and Spanish, but none of the speakers gives them any credence, as they denote only transient impressions. Both of these types of language, particularly the second, correspond closely to the idea of untranslatability, although coincidentally translatable words would turn up in them from time to time, and it would be alien to the spirit of the story to exclude such words. There’s plenty more in the story than that, incidentally, but I don’t want to veer off-topic.

Dolphins have been said to transmit sound pictures of their perceptions in order to communicate. Although I find it hard to credit that anyone would be able to demonstrate that this is in fact really happening, it’s still an interesting idea, and given that a picture is worth a thousand words, would seem to be untranslatable. A similar idea was pursued by the poet Les Murray in his ‘Bat’s Ultrasound’:

Bat’s Ultrasound

Sleeping-bagged in a duplex wing
with fleas, in rock-cleft or building
radar bats are darkness in miniature,
their whole face one tufty crinkled ear
with weak eyes, fine teeth bared to sing.

Few are vampires. None flit through the mirror.
Where they flutter at evening’s a queer
tonal hunting zone above highest C.
Insect prey at the peak of our hearing
drone re to their detailing tee:

ah, eyrie-ire; aero hour, eh?
O’er our ur-area (our era aye
ere your raw row) we air our array
err, yaw, row wry—aura our orrery,
our eerie ü our ray, our arrow.
A rare ear, our aery Yahweh.

Then again, a series of pictures could just become like a pictographic script with stylised images, although this wouldn’t necessarily impose syntax on it. It might get quite difficult to express certain abstract concepts in it unless a ‘Darmok’-like approach were taken, with abbreviated references to well-known myths and fables. Even in English this could become hard to make sense of. One might say “The Fox And The Grapes”, referring to Aesop’s fable, whence the idea of sour grapes originates; there’s also the concept of “sweet grapefruit”, which reverses the same idea but which, as far as I’m aware, has no associated fable. Hence one could coin “the hound and the lemon” to refer to a situation where something worthless one possesses is subjectively perceived to be of greater value in order to conceal its cost from oneself.

To conclude then, it does in fact seem that several kinds of practically untranslatable language are possible. There could be languages which refer primarily to experiences outside those of most humans, as with the other Rubik’s cube orbits. A species whose dominant sense was smell and whose vision was poor might use a fairly untranslatable language, because for example it would have “insmells” rather than “insights” and wouldn’t “regard” anything so much as “scent” it, and beyond that it would have a whole world of sensation as rich as our visual one but based entirely on odour rather than light. Or it might have a magnetic sense, which could be even harder to relate to. Or there could be languages which simply take too long to translate for the human attention span, so that they would be translatable in principle but not in practice. There could also be languages which have developed metaphors and formed words and phrases as depicted in ‘Darmok’, where the dependence on shared narrative culture is so strong that it’s impossible to make sense of them. Or there could be languages which combine two or more of these things.

One thing I find quite unsatisfactory is that I haven’t been able to articulate clearly what I would think of as the ultimate case of an untranslatable language – one which does the same job as natural languages as we know them, but based on entirely different principles. The closest to this is the putative delphinese, using sound pictures, but I wonder what else is out there and how it can be made sense of, if at all. Or is it just that the realms in which these languages would operate are so obscure to the Anglophone (or even Pirahãphone or Ubykhophone) mind that they are inconceivable to us?

Racism In Politics

A couple of recent affairs currently in the news have revolved around two different kinds of racism, and I have a few thoughts on how to respond to them. These are, of course, Trump’s recent racism against members of Congress and the accusations of anti-semitism in the “U”K Labour Party.

Donald Trump, as I’m sure you know, recently said of four Representatives, all of them women of colour, that they should go back to where they came from. All four of them are American citizens, although one is originally from Somalia. He later compounded this unacceptable behaviour by tweeting that “I don’t have a racist bone in my body!”, and a crowd chanted “send her back!” about Ilhan Omar at a public meeting, to which Trump was seen to nod, I presume in agreement. He stood there for fifteen seconds and didn’t condemn them. Contrast this with the Republican John McCain, who interrupted a supporter who described Barack Obama as an Arab, taking away her microphone to condemn the statement.

This just is racism. There’s no argument about the definition here, no ambiguity, and it’s not really even an evaluative statement to call it that. In the past people have been proudly racist and scientifically racist, and they would have agreed with the epithet – it isn’t always used as a pejorative term, although clearly most people would see it as pejorative. Trump said later that he disagreed with the chant, but of course “he would say that, wouldn’t he?” is the obvious response there, whether one agrees with him or not. I’m not sure I agree with Omar in describing him as fascist, because other words do just as well and don’t have the same history, although I’m open to that interpretation; but describing this behaviour as racist would simply be a neutral, objective description, and there isn’t really any arguing with that.

That leads me to wonder about the BBC, who are not calling it racist. The BBC is supposed to be an unbiassed, neutral institution, and it seems to me that not calling this racist is a form of bias. The coverage I’ve heard doesn’t paint it positively, but they have not come out and stated unequivocally that it’s racist, and this makes me wonder. It also made me curious about how the BBC described apartheid in South Africa, segregation in the US and the behaviour of the Nazi party at the time. There is a risk inherent in exploring the last, because comparison to the Nazis is clichéd and lays one open to criticism, but I can’t recall them describing apartheid South Africa as a racist régime. I think they should have. It isn’t okay to be silent about something like this, not just in the sense of not reporting it but also in the sense of not describing it accurately. It’s a little similar to false balance.

Furthermore, it’s even more depressing to note that only four Republicans censured Trump for his racism.

I want to turn now to “I don’t have a racist bone in my body!”. I am racist, of course. That doesn’t mean I’m pro-racist so much as that I’m aware of racism in myself and of the need to strive to reduce it. Racism is a bit like sin, as well as actually being a sin: the attitude that all have sinned and fallen short of the glory of God is a healthy one, for two reasons. It means one isn’t worse than anyone else, and it means that residual wrongdoing is more likely to continue to be addressed if one suspects oneself of being racist. The point at which one declares oneself non-racist rather than anti-racist is the point at which one’s racism will never reduce. Many people see this as insulting, but it isn’t so much an insult as a recognition that nobody’s perfect – and not in the sense of shrugging one’s shoulders and planning to continue negligently in the same way. It means nobody’s perfect and therefore everyone should work on improving their thoughts and behaviour to be fairer and more compassionate. It seems to me in particular that a white person claiming not to be racist is like a man claiming to be feminist.

I’m pretty sure I’ve covered this before but it probably bears repeating. There’s a fairly widespread concept of racism which asserts that non-whites can’t be racist because of structural issues with society, such as the gross inequality caused by white people’s plundering of the rest of the world during the imperial age and the ongoing practice of that policy by other means today. I’m not making that claim, but it’s clear that the issue of white racism is more important than most other forms of the prejudice, and as far as a white person is concerned racism is in any case universal, whether or not it’s because they’re white. The other issue would be whether racism occurs among non-whites. I think it’s pretty clear that it does, although it very often seems to be directed against other ethnic minorities, at least in white-majority countries. Looking at racism again in a non-evaluative sense – as something which simply exists as a phenomenon rather than something which is condemned, although obviously I do condemn it – it is, like most or all other forms of prejudice, an error of inductive inference. Inductive reasoning is the drawing of a tentative general conclusion from particular examples. For instance, “all swans are white” generally worked in Europe and North America until the people living there learnt that there were black swans. It’s always logically possible that a conclusion reached by inductive reasoning will be proven false, but it’s necessary to use inductive inference in order to function at all, so we continue to do it. The situation is somewhat complicated by the fact that there are false propositions about othered ethnicities which have no evidence supporting them at all, but we do generally draw inferences from imperfect information, and that means we will always be racist unless we’re drastically neurologically compromised.
I suspect, for example, that someone with advanced dementia or severe learning difficulties would not be racist because they’re simply not making persistent inferences at all. Hence we just are racist, and in particular white people in white-majority countries are extra racist on top of that due to structural and institutional racism. For instance, we might not expect someone in a position of authority to be black because of various social factors preventing them from reaching such a position, but that assumption is nonetheless racist, and also important to notice in oneself.
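
The black swan example can be put as a toy sketch (purely illustrative, not part of the original argument): induction over uniform observations yields a general rule, and a single counterexample overturns it.

```python
# Toy model of inductive inference: generalise a property from examples.
observed = ["white", "white", "white", "white"]

def induce_colour(observations):
    """Tentatively conclude all swans share a colour if every example agrees."""
    if observations and all(c == observations[0] for c in observations):
        return observations[0]
    return None

# Every observed swan is white, so induction yields a general rule...
assert induce_colour(observed) == "white"

# ...which a single black swan, as found in Australia, falsifies.
observed.append("black")
assert induce_colour(observed) is None
```

The point of the sketch is that the inference was reasonable given the data, and wrong anyway – which is exactly the structure of prejudice as an error of inductive inference.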

Moving on, just before eliciting the racist chanting, Trump accused Omar of being anti-semitic. I don’t know the details of the accusation, but it brings me to the second concern which is in and out of the news a lot: accusations of anti-semitism in the Labour Party. I do actually believe it’s possible that there is a particular form of anti-semitism among Labour Party members if they, for example, believe in the idea of a Jewish conspiracy supporting global capitalism, and this is of course completely unacceptable. The reason I believe this is possible is that a very large number of people joined the Party in the last few years, and it seems to me probable that at least a few of those would be conspiracy theorists of one kind or another. We don’t want conspiracy theories, racist or not, because they distract from deeper problems. But I don’t want to get into the issue of whether anti-Zionism is automatically anti-semitic or not, because there’s a way of broadening the issue which is likely to make it more neutral. Anti-semitism is of course a form of racism. At the same time, governments often pursue policies which violate civil liberties, and Israel is one of these countries, as is Egypt, which I understand also restricts movement from the Gaza Strip into its territory. So there are two issues here: possible racism in political parties, and support of oppressive policies and actions by other countries which are considered allies of the United Kingdom. Consequently I have a proposal – and in fact I’m pretty serious about this – which I’d like to pursue as a possibly more neutral response to accusations of anti-semitism.

There would seem to be no good reason not to undertake an independent investigation into racism and dealings with oppressive activities by allies of Britain in all major political parties in this country: the Lib Dems, the Tories, the SNP, the Greens, Sinn Féin, the DUP, anything you like. This would include anti-semitism in the Labour Party, and although Muslims are not an ethnicity, Islamophobia in Labour and the Tories, and, well, whatever counts as racist. It would avoid the tactic of appearing to accuse others of something just as a distraction from one’s own wrongdoing and it would in any case address serious issues across the political spectrum which need to be addressed regardless of anti-semitism. We shouldn’t just be concentrating on one kind of racism. Then there are the dealings HM Government has with in particular Sa`udi Arabia, a notorious violator of civil liberties and also highly anti-semitic. Pupils in Sa`udi schools are taught that the ‘Protocols Of The Elders Of Zion’ is a genuine Jewish document and the government officially believes in a Jewish conspiracy to take over the world. If we’re going to condemn the Labour Party for being anti-semitic, even to that extent, does it not also make sense to condemn the Conservative Party for promoting trade agreements with an openly anti-semitic government like that of Sa`udi Arabia? It’s simple consistency.

That’s all I’ve got to say today really. That there should be a country-wide investigation into all forms of racism in all major British political parties and those in the Six Counties, and also into their dealings with oppressive régimes, including anti-semitic ones, and that the BBC should call a spade a spade and describe Trump and the Republican Party as racist, because it’s a matter of fact and not an example of bias or evaluation.

On Not Writing What You Want

Yesterday’s post mentioned ‘Islamic Societies And The Great Transformation’, a dissertation I wrote back in the day which was pretty lacklustre, although it was helpful to write it and gave me a few insights which I still use today. It got me thinking about the tendency to write something other than what you want to, which plagues at least me and probably lots of people, so I launched upon a new post on that subject, only to find that I wasn’t saying what I wanted to say in it! This, then, is the second attempt at covering that subject.

I’ve written three dissertations in my time, and probably would’ve written more if I were less scattered in how I approach things. None of them ended up saying what I wanted, and in fact they all also involved me selling myself short. But did they?

When I was about twelve, my voice had a whistle register. That’s the pitch range above falsetto which, for example, Minnie Riperton uses in her 1975 song ‘Lovin’ You’. Apparently it’s also used by Ariana Grande and Mariah Carey. Unsurprisingly, the effect of testosterone on my larynx led to me losing that range of my voice, as far as I can tell, but when you lose an ability, it doesn’t always go in quite the way you might expect. In a phenomenon similar to phantom limbs, the impression I had was that it was still there and I just needed to clear my throat sufficiently to reach it. This was of course an illusion, and it makes me wonder what other abilities might feel as though they’re just beyond one’s reach when they are in fact well beyond it. An example of this might be one’s magnum opus. Maybe one just doesn’t have a great work in one, or even well-expressed thoughts, but it could still seem that if you pushed yourself just that little bit more, you’d get there. Well, maybe you wouldn’t.

Every time I’ve written a dissertation it’s ended up feeling that it falls short. This is partly because the subject matter has always turned out different from what I intended. It makes sense to push oneself beyond the familiar in writing or other creative activities, to be sure, but if this happens in the wrong way, that unfamiliarity, far from being stimulating, ends up putting the writer in a realm which she simply doesn’t care enough about to do more than a workaday, unspectacular job. That job might well end up appealing to other people, even a huge number of them, but that doesn’t mean she can identify with it, that it belongs to her or that she has a sense of control over it. This is the contrary position to feeling that something is just beyond one’s reach when it’s really completely impossible to achieve, because in that case one still feels ownership of it even though it isn’t really there, like a lost limb. Here, although the work was well within one’s abilities, it doesn’t feel like part of oneself. If someone were to criticise it harshly, it would stand no chance of upsetting you on a deep level, because you don’t care enough about it. There is a neurological analogue to this in the conviction some people have that certain body parts are not theirs. On the whole, we all probably feel something like that when we consider the bodies of strangers, but we may disagree on which limbs are part of ourselves.

The mention of the word “care” brings Heidegger to mind. His concept of Sorge, “care”, concerns that which matters to one. This could also be linked to the soul-destroying job – paid work which only seems to involve tasks one doesn’t care about. This could happen in a time-serving sort of way, where one writes, for example, what people want to read, what has a ready market, for some time, building a reputation, and on having reached a certain status is then able to do one’s own “thing”. My Masters dissertation, ‘A Comparison Of Dialectic And Supervenience’, serves to illustrate how this can go wrong. Supervenience is a philosophical concept about the relationship between mind and body, and also between ethics and description, among other things, and is crucial to the attempted solution to the mind-body problem known as anomalous monism. Without going into detail, I’m one of the few people who has done much significant work on the concept of supervenience, and if I hadn’t dropped out of academia, I can easily see myself having become some kind of recognised authority on it, particularly in the philosophy of mind. The only trouble is that I don’t believe in anomalous monism at all. I’m a panpsychist, and that could be characterised as the opposite position to anomalous monism. I can envisage a path never taken in my life where I advocated anomalous monism solely for career reasons while deep down remaining a panpsychist, recognising that it would be imprudent to say anything to that effect.
One coping mechanism for such a situation would be to lie to oneself, in my case to talk myself into believing that anomalous monism was correct, and I wonder if I’d done that if some kind of God of Philosophy, maybe Athena, would have struck me down by making it impossible to publish anything on the matter, due to the possibility that although I might argue that position well, the reviewers would be able to intuit unconsciously that there was something not quite right about my work. The question then arises of whether that particular intuition could ever be examined in such a philosophical framework, or whether it would simply go unacknowledged.

Naturally, none of that happened, and I’m free to continue to believe in panpsychism and subject myself to what’s been called the “argument from incredulous stare”, though that properly applies to modal realism – the idea that all possible worlds are actual and this one is just where we happen to be located. That said, a particularly stark version of this situation is known to arise among religious ministers, where clergy lose their faith but continue to practise as ministers because so much of their lives is invested in the Church: an income, accommodation, friendship and, I hope, the more rewarding parts of their jobs. I now have to ask myself whether this passage is itself getting away from me. I hope it isn’t.

In the Simpsons episode ‘Bart Gets Famous’, Bart becomes known for his accidental catchphrase “I didn’t do it” as ‘The “I Didn’t Do It” Boy’. He finds it impossible to escape from this reputation, and when he attempts to comment on the state of the rainforests on a chat show, the host irritatedly responds “just say the words”. The details of that may be wrong, but there’s a strong tendency for the media to package people into a particular persona, and it can be very hard to forge a new one. One can get lucky and find a match between what one is moved to create and popular appeal, but there should be some kind of match between who one really is and what one puts out there, because people can have excellent insincerity detectors, and not being able to churn the stuff out sincerely is a serious problem. It has to come from you, and that means you have to be the right kind of person. It might be that the kind of person you are is simply one who can fake sincerity and consciously build a persona, and there will be people out there to help you if that is who you are, but this is in fact, paradoxically, a form of sincerity. You can be entirely fake in such a way that your very fakeness is who you are, in which case you are not fake at all. A fairly innocuous example of that might be camp. Perhaps that’s who Julian Clary really is, and the jokes about him secretly having a wife and children are in a way a reference to that.

Another aspect of this is the way in which one’s work takes on a life of its own when it leaves one’s fingers. Writing letters used to be a marvellous example of this. One might write a very emotive missive, stick it in an envelope and post it into a pillar box, and as that envelope dropped from one’s grasp there could be a sense of crossing the Rubicon. The postal service prides itself on not allowing anyone to intercept the mail. What’s done is done, and there followed a growing sense of anticipation or foreboding concerning the imagined consequences of one’s letter. This could apply to a love letter, hatemail, a CV or so-called “blackmail”. It still works today with online communication, except that the response is often far more immediate. The same would also be true of written creations submitted to a potential publisher. But the feeling of casting a message in a bottle onto the waves doesn’t end with the submission. If it’s accepted, or if something goes viral, one has lost control of the consequences and it takes on a life of its own. You can never grasp the skein of cause and effect, wrap it up neatly into a ball and stow it away unobtrusively in your sewing bag. In fact, if you have a contract you are legally constrained from even trying, and if it’s gone far enough – as with J K Rowling, for example – the livelihoods and careers of thousands may depend on your work. This very consequence can be deadening, I would imagine, and lead to just the kind of inauthenticity which can kill.

All this could be seen as a form of loss of control over the creative process, but such a loss is also essential. I can imagine throwing paint at a canvas and, if my talents were in that area, finding inspiration to produce something representational from the results. That also applies to writing. On re-reading a first draft, one can become aware of patterns or aspects which one had no intention of creating but which are doubtless perceived, and then choose to augment them and make them look deliberate. An audience for one’s work also contributes in this respect, because by critiquing intelligently and enthusing about aspects of it, they can add to one’s creation things which they have in a sense created themselves by reading one’s inkblot. This means that something like the audience being an extra member of the band (thinking of the Simpsons again) might sound insincere but is pretty close to the truth.

I’ll leave it up to you to decide if this post is really mine or not, and if it really belongs to me.

Islam And Civilisation (and Boris)

In the ‘noughties, Boris Johnson wrote a book about the Roman Empire, ‘The Dream Of Rome’, comparing it favourably to the EU, in which he argued for Turkey joining the EU as a way of bringing it out of what I presume he regarded as the “Dark Ages”. It was, as I understand it, a pro-EU book, seeing the organisation as a successor to the Empire, which he would have preferred to include Turkey as a representative of the Eastern Roman Empire, so the context is rather interesting and makes me feel he’s an opportunist who says he’s pro-Brexit when in fact he isn’t. There are various explanations for this, but rather than go into them, I want to talk about another aspect of the book. In it, he describes Islam as putting the Muslim world “centuries behind” the West, by which he appears to mean both intellectually and socially.

I don’t know much about the book concerned and I’m not about to swell Boris’s coffers by buying and reading it, so I’m necessarily relying on second-hand accounts which may misrepresent it. It also seems a little unfair to dredge someone’s past to find something to defame him with in this way, because after all this is a book which, I’ve been given the impression, advocates for the EU, and clearly he’s not doing that now, so why would we expect him still to believe that Islam holds back progress? Maybe he didn’t even believe it then. Having said that, his “letterbox” comment, though made in a context which supported freedom of dress, would certainly seem to suggest that he is at least passively Islamophobic if not actively so. The book itself purports to look into why the Roman Empire worked and the EU doesn’t, to which my answer might be, to the extent that it doesn’t, that the Empire was a more unified political entity than the EU. Oswald Mosley was of course very much in favour of a European union as a kind of homeland for white people, so the idea of being pro-EU is by no means an essentially liberal one, but I’m getting too much into speculation here. The argument appears to be that the EU was born out of weakness but the Empire out of strength: the EU is a coalition of fading imperial powers, whereas the Roman Empire was more like the United States, based more on confidence and strength.

To a limited extent I’m qualified to comment on this, because I wrote a dissertation on ‘Islamic Societies And The Great Transformation’ in 1986. It was by no means particularly marvellous, well-researched or accurate, but it does mean I know more than nothing on this particular topic, although a little knowledge is a dangerous thing and I’ve already talked a lot about Dunning-Kruger. In this work, I attempted to apply theories of liberal democracy, Marxism, the rationalisation thesis and anomie to Islamic societies, with particular reference to a possible discrepancy between their scientific and technological development on the one hand and what we would perhaps perceive as their social conservatism on the other. My conclusion, which I think was quite forced, was that these theories couldn’t be applied and that such societies worked in an entirely different way from anything these Western ideas could come to grips with. As usual with dissertations (I’ve written three), I really felt I didn’t do myself justice, and it came out quite shallow and facile, partly because I got stressed out by how much was riding on it and partly because I was in a hurry. But I don’t want to get into personal stuff too much here, because I’m trying to address his ideas with mine and it would pull me off-topic.

He asserts that the first printing press didn’t arrive in Istanbul until the mid-nineteenth century, and that liberal democracy couldn’t develop in Islamic societies because capitalism couldn’t do so either. I wish I knew more than I do about his argument. Islamic societies are, according to him, inherently less tolerant and more violent. The conflicts which exist between the Islamic world and the West are not due to Islam and Christianity but Islamic and Roman values. The most interesting assertion is the one about capitalism and liberal democracy.

Marxism says society traverses several stages, starting with primitive communism, then some time later proceeding through feudalism and capitalism and finally to communism. Many argue that governments using Marxist-sounding rhetoric are not in fact Marxist because they transitioned immediately from a feudal to a nominally communist system, which brings with it a number of problems, such as those arising from the pre-revolutionary education of such societies. The issue with Islamic societies is rather different, because they theoretically undermine commodification before it even starts by identifying the banking process as usury and forbidding it, and to some extent the provision of compulsory zakat provides a kind of welfare system, although this kind of thing existed in the West too before the likes of income tax. Thus there are ways in which, at least at first glance, capitalism cannot function in an Islamic society, and if you believe that capitalism is essential for liberal democracy, that would presumably imply that you also believe liberal democracy to be incompatible with such societies. The question is then: how would capitalism foster liberal democracy? The welfare state could be seen as having a role in increasing the stability of society, as there might be less crime or unrest if there’s a safety net, and the welfare system built into Islam would seem to afford that. Perhaps more promisingly, having a stake in the economic system, such as home or share ownership or a paid job, might give one more of a personal interest in a stable society, and I presume the argument is that only banks able to turn a profit can afford to provide this, because they can then lend money at interest, allowing ambitious and ultimately lucrative projects to be pursued.
But even if this is true, it doesn’t mean the system as it exists now is still doing that because my perception of it is that it’s rigged to syphon money from the poor to the rich, which could be expected to destabilise democracy through the likes of rioting and vandalism, but oddly, doesn’t. I honestly don’t understand why this is. Nonetheless it could be that historically, capitalism did in fact nurture the idea of individual rights and civil liberties. Whether this is significant is another matter.

In fact, in both Mediaeval Europe and today’s Islamic societies, there are a series of contracts and loopholes which lead to interest being charged. For instance, loans can be construed as the rental of money and there’s a system of three contracts none of which violate Islamic principles in themselves but which together allow Western-style banking. The same system evolved independently in Europe. Moreover, shortly after the Islamisation following on from the Hejira, a mercantile economy developed in those countries and persisted for several centuries.
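
To make the three-contract mechanism concrete, here is a toy sketch with entirely hypothetical figures: three agreements, each arguably unobjectionable on its own, which together replicate a fixed-interest loan.

```python
# Toy illustration of the three-contract workaround (all figures hypothetical).
principal = 1000.0

# Contract 1: an investment partnership – the financier invests the
# principal in the borrower's venture, entitled to a share of uncertain profit.
# Contract 2: the borrower guarantees the principal against loss,
# in exchange for a fee paid by the financier.
insurance_fee = 30.0
# Contract 3: the financier sells the uncertain profit share back to the
# borrower for a fixed sum, payable along with the principal.
fixed_sum_for_profit_share = 80.0

# Net effect: the borrower repays a fixed amount above the principal,
# exactly as with an interest-bearing loan.
repayment = principal + fixed_sum_for_profit_share - insurance_fee
effective_rate = (repayment - principal) / principal
print(repayment)       # 1050.0
print(effective_rate)  # 0.05, i.e. a 5% fixed return
```

The point is that no single contract charges interest, yet the composition yields a guaranteed fixed return on the money advanced.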

The matter of the printing press may be a good point, but it’s hard at first to see a link with Islam. The invention of movable type was a major stage in the dissemination of information, as it brought down the prices of books dramatically and put them within the reach of the peasants, or at least the middle class. It also led to the translation of the Bible into the vernacular and thereby the Reformation. It’s easier to print non-cursive scripts: the Gothic alphabet of the fourth century CE, for example, was a non-cursive book hand, and the Diamond Sutra was block-printed in the ninth Christian century. Arabic script doesn’t lend itself easily to printing because it’s essentially cursive – each of the twenty-eight consonants has more than one form, many have four according to their position in the word, and they often have to be linked to each other. There are also subscript and superscript signs used to indicate the likes of the presence or absence of vowels or the case of a noun. Turkish was only officially written in Latin script from 1928, so the absence of a printing press in Istanbul until the mid-nineteenth Christian century is not surprising. However, Urdu newspapers, which use a different version of the Arabic script, were written cursively by pen (and then presumably copied) into at least the 1980s and are mass-market publications, so the barrier may be artificial, and of course this is about the use of the Arabic script rather than Islam itself. The script works quite well for writing the Arabic language itself, and would also work well for related languages such as Hebrew and Maltese, but despite the nationalistic insistence on using Arabic script for certain other languages such as Urdu, Malay and formerly Turkish, it doesn’t work as well for them. Studies have also shown that even proficient readers of Arabic take longer to read texts than adept readers of languages written in the Latin alphabet, because they need to assimilate more diacritics.
Hence it could be said that poor literacy is built into cultures which primarily use Arabic script, particularly for non-Afro-Asiatic languages, and there could be elements of information-hoarding and antilanguage in its use. This presumably would impact on the social development of Islamic societies which use the Arabic script, and in such societies it does have a privileged position which could be exclusive in a similar way to Latin in the Roman Catholic church and mediaeval Europe. Nonetheless there are Islamic societies which don’t use Arabic script for their vernaculars, such as Albania and the largest of all, Indonesia.
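
The point about contextual letter forms can be seen directly in Unicode, which encodes the positional variants of Arabic letters as explicit “presentation forms”. A quick sketch in Python, using the letter beh as an example:

```python
import unicodedata

# Arabic letters take different shapes depending on their position in a word.
# Unicode encodes these positional variants explicitly; beh has four of them.
beh_forms = {
    "isolated": "\uFE8F",
    "final": "\uFE90",
    "initial": "\uFE91",
    "medial": "\uFE92",
}
for position, char in beh_forms.items():
    print(position, unicodedata.name(char))
# e.g. the last line printed is: medial ARABIC LETTER BEH MEDIAL FORM
```

In practice modern text renderers select these shapes automatically from the base letter, but the fourfold variation is exactly what made Arabic movable type so much more complex than Latin type.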

The Arab world was of course the repository of much knowledge which would otherwise have been lost with the onset of the European dark ages. Most star names other than Bayer designations (such as α Centauri) are Arabic-derived – Thuban and Betelgeuse, for example – so clearly astronomy was very significant and relatively advanced at the time. The Indian use of zero and place value entered Europe via the Arab world, which considerably advanced mathematics. Arabic medicine pioneered the use of non-biochemical compounds as drugs, and paper, adopted from China, was in use in the Arab world before it reached Europe. Hence the Arab world was technologically ahead of Europe for most of the period up to the end of the European Middle Ages – a reminder that the Middle Ages are a solely European phenomenon, which doesn’t apply to China or Mesoamerica, for example. There’s a lot of debate about what caused the Dark and Middle Ages and what ended them; one suggestion is that they began with the Roman Empire’s adoption of Christianity, which led to a loss of civic virtue and a focus on the afterlife, and ended in connection with the Reformation.

It is the case that countries with a majority of Muslims are more likely to have a large number of creationists than most other countries, and this does have practical consequences. For instance, the treatment and understanding of cancer and antibiotic resistance depend on accepting evolutionary theory, yet in some Islamic countries the teaching of evolution has been removed from state school curricula. However, the same problem exists in the United States, so this can’t be seen as a specifically Islamic issue. Oddly, the predominant belief among Islamic creationists is the least sustainable option, old Earth creationism, which has to maintain that mutations do not significantly influence living things even over hundreds of millions of years. I would suggest this means there’s a major disconnection between different aspects of biology in places where it’s maintained to be true. You can have young Earth creationism, which although obviously untrue doesn’t need to explain why mutations have no major influence on the course of life, and you can have young Earth evolution, though nobody actually seems to believe that, but you cannot possibly sustain old Earth creationism. It’s by far the most absurd of the four options. Separately, where women in particular are denied full participation in society, there will be a waste of potential which could disadvantage a country, or skew its research and development in a particular direction because it isn’t informed by the distinctive experience of women.

There are, of course, democratic Muslim-majority countries such as Pakistan, Malaysia and Indonesia, but it isn’t clear that democracy implies liberal democracy. Democratisation has sometimes led to the imposition of what many Westerners would perceive as the more oppressive aspects of Shari`a law, such as stoning and execution for homosexual acts and adultery, the amputation of limbs for theft, or the prohibition of alcohol. However, countries such as Pakistan, formerly part of the British Raj, had never existed as separate political entities and so had no specific national identity, and Islam has been employed to create one. This also means that, just as we might identify what we see as the more negative aspects of certain countries with Islam and treat Islam as a negative force, Muslim-majority countries may oppose not only the colonialist legacy of the West but also the liberalism that forms part of Western identity. The social development which occurred in the home countries of the imperial powers didn’t take place to the same degree in countries which happen to have Muslim majorities, because of the stagnation imposed by colonialism. There’s a Christian parallel in non-Islamic former colonies, where there is more official homophobia precisely because the Western powers created laws to that effect.

When a society seems far away or “othered”, as Islam tends to be to WASPs, we tend to generalise. In fact, although Islam does pride itself on its unity, there are open and closed interpretations of the faith. Where a conservative interpretation of organised religion has been allowed to dominate, the result is often disastrous. Some might say this is seen in the US, formerly in Ireland, and also in both Israel and certain Muslim-majority countries. The situation in Israel, which is officially secular, arguably has less to do with the ethnicity of its citizens than with the secular pursuit of power combined with a conservative religious strand within Judaism – an observation which condemns neither Judaism as a whole nor the Jews – and the same problems are replicated in Muslim-majority countries. In Turkey, for instance, the situation has arisen not so much from Islamic fundamentalism as from the tendency of a party to grab power, which then established a precedent and an imbalance that made that approach seem more acceptable.

There is also the question of what constitutes Islamic values. It’s perfectly valid within Islam to emphasise principles such as justice, freedom and respect for human life, and to base one’s political approach on those. Just because stonings, executions, corporal punishment, sexism and homophobia are done in the name of Islam, that doesn’t necessarily mean all Muslims want them, or that the stress on the role of Islam in public life needs to focus on them. However, it’s problematic for many Muslims when we in the West simply see them as “backward” and not yet arrived at our “enlightened” state. That presumes Islamic societies will evolve in the same direction as the West has, and not only does that not follow, but there’s also a case for it being kind of racist. I say “kind of” because although I would see Islam as a protected characteristic, I don’t see Muslims as a race and don’t think they would want to see themselves that way either – Islam is for the whole human race. Nonetheless, exhorting a particular group to become more liberal does reflect a form of discrimination against them.

Secularism, or at least things done in its name, can also be oppressive. This can be seen in China, with its current persecution not only of Uighur Muslims but also of Falun Gong practitioners. It’s easy to argue that this isn’t true secularism, as that involves the equal treatment of all belief systems, but it could equally well be argued that what’s done in the name of Islam is not true Islam, so this leads to a kind of stalemate. It’s also been noted by certain citizens of Muslim-majority nations that when Shari`a law is implemented it has a disproportionately negative effect on women, on gender and sexual minorities, and on religious minorities, including non-believers. If this is also linked with democracy, it constitutes the tyranny of the majority: government is simply dictated by a particular interpretation of Islam rather than informed by the experience and judgement gained from actually working in government. And that interpretation strikes me, at least, as odd, because the Qur’an says “there is no compulsion in religion” and advocates the tolerance and perhaps equal treatment of Christians, Jews and even the Sabians, who were not monotheists. There is a problem with this, though: as represented in the Qur’an, the beliefs of the people it calls Christians don’t correspond to mainstream Protestant, Catholic or Orthodox Christianity as it’s understood today.

But I have to confess to a feeling of discomfort in talking about any of this, because it feels like I’m telling Muslims how to practise their faith, and probably presuming a lot more than I actually know about Islam. Boris Johnson doesn’t seem to have the same kind of hesitancy in his pronouncements, and as we have seen recently from a certain prominent head of state, that can constitute a major problem. But to conclude, I suppose I’d say the following. Islam clearly did not historically hold back scientific and other intellectual progress. The prevalence of creationism, and a possible failure to use its human resources as fully as the West does, might hold the Islamic world back at least economically, but that may be not so much the result of Islam as of colonialism and the search for national identity. Also, we should be wary of imposing a preordained path on the future of the Islamic world just because we consider ourselves to be ahead of it in some way, and be aware that what we perceive as Islamic may be just one interpretation among many – one which may suit us if we wish to other them.