World’s Whitest Woman?

I recently witnessed an interesting example of the kind of misunderstanding that tends to occur online nowadays, although to be fair it’s probably existed as long as we’ve had language. I stuck a status update on FB saying that I was quite possibly the whitest person in the world, as a preliminary to saying stuff about my ancestry, which I’ve recently gleaned from a DNA test. One person, and I’m not maligning them because I’m sure I do the same, said that what I’d said came across as quite Trumpian, which is quite possible. But as it turns out, the reason I made that statement was to head off any impression that I might be attempting to appropriate the Black identity in what I was about to say, because the kind of announcement I was about to make could easily be misconstrued in all sorts of negative ways. In a way it shouldn’t even be seen as that interesting, but the world we live in makes it inevitable that it could be seen as notable. All of this nuance was, however, lost due to the nature of that thin trickle of text to which we are subjected on social media (except we aren’t, because of all the pics and ads).

My actual intention was to emphasise in advance that a statement regarding my genetics is not meant to make some kind of facile outsider’s claim on a “cool” identity, or for that matter cleave to an oppressed or deprecated aspect of ethnicity. Practically all of my genome is not only Northwest European but also Western Irish and what might fallaciously be called Celtic or Gaelic, which of course I already knew. I do in fact feel not only a real allegiance to that identity but also an obligation towards keeping it alive or supporting it in some other way, although I also feel that Celtic heritage tends to be both romanticised and emphasised at the expense of Germanic, partly no doubt due to Hitler.

Now to cut to the chase and actually mention in earnest those things to which I’ve previously only alluded. For a number of years I have suspected that I had Black ancestry on my father’s side in the eighteenth century. I don’t know how this could’ve happened, but it seemed likely for a number of reasons. For instance, although my hair is fairly typical for someone with Western Scottish lineage, the only kind of comb which really works on it is an Afro comb, and left to itself it will tend to dread more quickly than that of most other white people I know. My lips are also thicker than most white people’s, and as an adolescent I had a skin condition called pityriasis versicolor, which is much more common among Black people than white. At the same time I have very fair skin (although it never burns, for what it’s worth), a leptorrhine nose (unusually narrow even for a white person), blue eyes and hair that’s easily bleached by sunlight. There’s a more significant physical factor which clinched it for me but which I’ve unfortunately forgotten. More importantly, I’m culturally quite incredibly white, and of course this kind of statement is what my friend zeroed in on as sounding like Donald Trump. But I’m not prioritising whatever might be construed as “white culture”, quite a frightening-sounding phrase, above what might be understood to be “black”, nor for that matter denigrating the former in comparison with the latter.

I should probably specify at this point that what I think of as “white” seems wider than most other people’s, because I would include Jews other than the Beta Israel, North Afri?ans (I’ll get to that question mark in a bit), Hispanics and Middle Easterners in that category as well as stereotypical Northwestern Europeans. That said, I personally only identify as Northwest European. There just is nobody else who feels like my kin, which is not to say I have any conscious disrespect for anyone else, just that it’s a fact. I do of course acknowledge the idea of white privilege and the hoarding and plundering of resources white people have inflicted on the rest of the species, and I’d even say that Brexit would go some way to redressing that balance were it not for the fact that money would stay in the hands of the heirs of the people who did that in the first place and go nowhere else. Moreover, mention of the hands of the heirs brings up the issue that whereas we can’t be held responsible for the accident of our birth, there is something inherently racist about the fact that we have the privilege we have in the first place, and if we believe in the success of rational endeavours at all, it makes sense to try to redress that balance. At the same time I’m conscious that this is a white person saying this, and these views are being expressed by someone who is not in the position of having experienced, or been unconsciously subject to, the kind of racism I’m talking about, so I know not whereof I speak. Because I’m white.

That said, it is indeed the case that I have a North Afri?an ancestor on my father’s side who lived during the eighteenth century, exactly as I suspected. The eighteenth century because I know the entire family tree on that side, at least as officially reported, back to the early nineteenth century, so the only scope for such an ancestor to exist without either being recorded or being drowned out by other influences to the extent of no longer being discernible is for the person concerned to have lived in the eighteenth century. But it gets confusing at this point, because the question arises: are North Afri?ans black? But before I get to that, I’ll address the question mark.
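To sketch why the window narrows like that (the generation length and the detection threshold here are my own rough assumptions for illustration, not figures from any testing service): an ancestor n generations back contributes on average about one part in 2^n of your autosomal DNA, so a single ancestor much earlier than seven or eight generations ago would usually be diluted below the level a test can confidently pick out.

```python
# Expected autosomal DNA share from a single ancestor n generations back.
# The ~30-year generation length is an assumption for illustration only.
def expected_share(generations_back):
    # Each generation halves the expected contribution of any one ancestor.
    return 0.5 ** generations_back

for n in range(4, 10):
    print(f"{n} generations (~{30 * n} years ago): {expected_share(n):.2%}")
```

On those assumptions an eighteenth-century ancestor, six to eight generations back, would be expected to show up as somewhere between very roughly half a percent and a couple of percent of the genome: detectable, but easily lost if much earlier.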

It’s been said that Afri?a should not be spelt with a C but with a K. The arguments for this are that most indigenous Afrikan languages spell their word for Afrika with a K, that European colonial languages such as French, English and Portuguese introduced the C spelling, that the K symbolises the possibility of a pan-Afrikan linguistic unity, and that when Afrikan languages are written in the Latin alphabet at all, their orthographies tend to use a K rather than a C. That’s all interesting, and my first “instinct” is to go with that, because it’s an Afrikan way of thinking about it and imposing a C on it from outside signifies the opinion of a European being foisted onto Afrika. But I have to confess that the arguments don’t sound very strong to me, that writing “Afrika” isn’t really enough and is perhaps just a form of slacktivism, drawing attention to how “woke” I am in a manner which is ultimately unhelpful. It seems like tokenism.

Looking at each of those arguments, it comes to mind that Afrikan languages are, and were historically, often not even written in scripts with a “K” in them, which amount to Latin, Greek and Cyrillic. Coptic, however, does have a K in its alphabet, and is a thoroughly Afrikan language, being a form of Egyptian (as in hieroglyphics). I can’t honestly say that Coptic had a word for Afrika: although there is of course a Bible translation into Coptic which mentions regions of Afrika such as Nubia, Egypt, Ethiopia, the Roman province of Africa and perhaps Libya, it doesn’t appear to have a concept of a continent, which raises another issue – should we subsume all Afrikan identities under one heading like this? Maybe in opposition to post-colonial powers? But why allow outsiders to define who you are in that way? Isn’t that part of the problem? Moving on to other scripts, there’s the Ge’ez of Ethiopia, Tifinagh of the Berber folk, Vai as used in Liberia and Sierra Leone and, probably formerly the most widespread script in pre-European colonial times, Arabic. Not having anything visually resembling a letter K or C, none of these scripts would’ve been used to write Afrika with either a C or a K, and in fact some of them used a character often transliterated as “Q”, a spelling which has never been suggested. Hence it isn’t clear why this would work well as an argument for writing “Afrika”. Moreover, Afrikaans, which could be seen as a particularly imperialist language, does spell Afrika that way. As far as I can tell, the Roman Empire was the first to use the spelling, so it is in fact true, though in a rather unexpected way, that a European power introduced the C. But to be honest, my main reasons for not writing “Afrika” would be that it seems ostentatious and quirky, and I don’t want to be ostentatious and quirky about the serious matter of how to approach this continent.
That could simply mean that my bubble is European – I just don’t come across people for whom it is an important issue. Consequently, for now, provisionally, I am going to write “Afrika”, in the absence of a real reason not to.
Getting back to the North Afrikan ancestor, it’s rather hard to be more specific. This is mainly because the DNA testing service I used, although it has millions of people on its database, has a heavy White Anglo-Saxon Protestant bias, and therefore has relatively few black people on it. Consequently the information which places people’s ancestry has to be drawn from more general population data. It might be logical to go by either mitochondrial DNA or Y chromosomes. Y chromosomes, being inherited only along the paternal line, and mitochondrial DNA only along the maternal, allow large movements of people to be traced. North Afrika has three major Y-chromosomal groups: Egyptian, Berber and Tuareg. The odd ones out are the Tuareg, who consider themselves part of the Berber but have mitochondrial DNA more like that found in southern Europe, which probably means there’s been a flow of people between North Afrika, Malta, Sicily and the Italian peninsula. This is just my guess, incidentally. I don’t actually know. The Berber and Egyptian populations are partly eclipsed by the Arabisation of the Maghreb, and there are Berber nationalist movements which seek to assert the distinct identity of the Berber. Although I’ve said “black ancestry”, it really seems most likely that the person concerned was Berber, which is where the true fluidity of human genetics gives the lie to the conventional WASP idea of race based on skin tone, because the skin tone of the Berber peoples is highly variable and our genes don’t pay much attention at all to what we imagine are fixed racial shoeboxes.

I’ve mentioned previously that the biggest genetic variation is in Afrika, particularly south of the Sahara, next to which the variation in Europe, for example, is rather small. This is a slight distortion of the facts, as Southern and Southeastern Asia is also quite variable, but it does also mean that whereas we might think of people as simply “Black Afrikan”, it’s closer to the truth to say that there are a few non-Afrikan “races” plus a larger number of Black Afrikan ones. This genetic diversity, and the fact that Afrikans alone have not had to pass through bottlenecks such as Sinai, the Bering Straits, the Wallace Line and Panama, is a good a priori argument for Black Afrikans being the most intelligent human population, were it not for the disadvantages imposed on them by their colonial history, because the relative inbreeding of the rest of the human race probably wouldn’t do our IQ any favours. However, intelligence is as contentious an issue as race, and there’s ample evidence that there is in fact no essential difference between the cognitive skills of Afrikans and anyone else, if they have the same background, which is of course a big “if” owing to racism.

The Berber people today tend to be quite Arabised and have mixed with the Arab population, but prehistoric North Afrika was quite genetically isolated by the Sahara and the Mediterranean. Egypt and the land nearby were less so, thanks to the Suez land bridge. Hence the reference to “North African” (with a C!) is more likely to refer to Berber than Egyptian ancestry, Egyptians’ genes being thoroughly mixed with Arabs’. Going by the rather crass notion of skin tone, the Berber are a particularly striking example of how the colour of someone’s skin is not a good guide to who they are or where they come from, even culturally or linguistically. Malian Tuareg, going by photos, often look black to me, but more “pure” Berber, including Tuareg, from further north look white to me, though probably not to most other people who see themselves as white. If I were going to be really essentialist about this I might say that the perception of skin tone as a marker of whether someone is in an in-group or an out-group could be genetically determined, but given the rather small amount of DNA I have from the people concerned, I’d be inclined to put it down to something else even if it were true, which I seriously doubt.

I find it particularly mysterious that the person concerned seems to have been Berber rather than from further south, because I’ve always assumed that whoever it was had been caught up in the Atlantic slave trade and ended up in Scotland. There were plenty of black slaves in Scotland in the eighteenth century, so that always made sense to me. However, maybe it’s my ignorance of the nature of the slave trade, but to me it seems out of place that someone from the Sahara should have been enslaved, so I feel like there’s a different history here. It’s also not someone I could really fit into any standard narrative of being descended from Afrikan slaves like so many other people can. At least one of two assumptions needs to shift in this. One is my presumption that I had an eighteenth century Afrikan slave ancestor. The other is that there was no Berber DNA among Afrikan slaves. But I don’t know enough about the situation or Black History to say what the likely explanation is.

I was going to say more here but this will now have to wait for another time.

Adidas-Related Karma

One of the reasons I blog, and you’ll note that I haven’t done it for a while, is to brain-dump. It stops me from plonking walls of text on social media and going on and on at Sarada, and it also means I can link to things I find myself saying over and over again online, such as on reddit and Yahoo! Answers. Technically that makes it kind of blog spam, but I don’t think constantly copypasting the same old screeds ad infinitum would go down as well. This would also confirm the dictum that writers write for themselves rather than an audience, though perhaps in the hope that their words will be received positively, because it gives potential readers the option of ignoring what they’ve written while allowing the writer to fantasise that they are being acknowledged.

Today’s post is perhaps even more like that than usual, as it reflects a possibly coincidental series of events which I feel rather sheepish admitting to, for a possibly surprising reason. It has to do with ethical consumerism, something which I have been heavily into since I was about twenty. The basic principle, as I’m sure you know, is that one boycotts some products and possibly opts to buy others in order to put pressure on companies considered to have unethical practices and “reward” companies for doing something more positive. On the whole I’ve emphasised the boycott rather than the support alternative because I suspect that a company whose ethical record seems spotless is simply one which is particularly good at hiding its dodgy behaviour. If you take it far enough it goes way beyond merely boycotting and tips over into self-sufficiency, because one view is that the only person whose ethics one can trust is oneself. This underlines an aspect of economics which is not always clearly appreciated: you pay people to do what you can’t do for yourself. When people do things for you, there has to be an element of trust there: that they’re not going to put cyanide in your cuppa or prepare food unhygienically, and beyond that, that they won’t use slave labour to make your clothes, make products which break down and can’t be recycled, or sell essential bits at a premium (razor blades and ink cartridges). It’s been estimated by Baby Milk Action, the pressure group which organises the Nestlé boycott, that a company will start to change a policy if a boycott reduces its profits by 2%. Nestlé went so far as to employ an “issues manager”, who used to send back a form letter to complaints, to counter the accusations that its marketing was causing the deaths of young babies. Scott Paper Company did something similar after receiving complaints about its activities in East Timor, and in fact it’s quite a common occurrence. I presume this is a financial calculation: how much will it cost to employ a department for arguing against consumer boycotts compared to the loss of profits? Consumer boycotts are usually absolutely minute compared to the size of the company, so on the whole it probably isn’t worth their while to bother.

Perhaps surprisingly, I do have a counter-argument to consumer boycotts, which roughly goes along these lines. I used to work for a charity, and once I knew what went on inside it I came up with a similar aphorism to the one above: an organisation you believe to be good is one about whose inner workings you know little. I don’t want to name this charity because it would be unfair and fail to illustrate the wider point, that organisations in general are problematic, and become more problematic the larger they get. This applies as much to governments and public bodies as it does to charities and multinationals. I can think of at least two major reasons for this. One is that a large organisation is more likely to have people in it who are out of touch with the outside world, as it were. They don’t deal face to face with consumers, manual workers, suppliers and so forth, and as a result they can lose track of the practical reasons for the existence of the organisation, and it also becomes easier to sketch out a career path without having much to do with the bare function of the group. This is of course complicated further by the fact that the purpose of most companies is survival, growth and profit, and there’s even some honour in that if you believe in what you’re doing, in the quality of your products for example or the value of your service to the public. If you work, for example, for Apple and you honestly believe Apple products are better than their competitors, in a way you have a duty to try to promote them at the expense of the others. It’s a conditional duty because of the likes of vendor lock-in, the origin of the tantalum in the capacitors and built-in obsolescence, all of which are ultimately bad things, looking beyond Apple itself, but nonetheless it is a duty. 
It’s like working in an office and doing your job well, being nice to your colleagues and generally being a decent human being, all the while selling weapons of mass destruction to both sides of a Third World civil war.

The other reason is more sympathetic to the organisations involved. If there’s something wrong with everything, you could end up doing nothing, and just as taking no risks is the biggest risk of all, doing nothing is sometimes the worst thing you can do. If a company or other organisation does something, it risks doing something negative, and the more it does, the more likely it is that it will do something bad. It isn’t helped by the reality of existing merely for the purposes of increasing profit and growth, but it’s still possible for bad things to be done by good organisations. The most decent social security or public healthcare system is still going to have some irate service users, say because the wrong leg was amputated or someone simply fell through the holes in the safety net. It must also be said that it has become much harder to do the right thing nowadays, because public services are substantially entangled with, or influenced by, the private sector and the world is being run by sociopaths and psychopaths, although that last bit is nothing new.

One of my favourite shows is ‘The Good Place’, for which I’m about to utter a major spoiler. The basic premise of the series is that when you die, you go to the Good Place or the Bad Place depending on your deeds in life, with the single exception of someone who ended up going to the Medium Place because her good and bad deeds balanced out exactly. Sometime during the third season it emerges that nobody has ended up in the Good Place since the fifteenth century because life has become too complex to make the right moral choices. Someone said to me recently that Extinction Rebellion were hypocrites because they all used smart phones. One of the problems with contemporary society is that it forces you to do impure things all the time just to survive, so I’d argue that what Extinction Rebellion would doubtless regard as the practical necessity of using smart phones is precisely the kind of thing they’re demonstrating against. Nonetheless I do have some sympathy for the critic’s position, although it’s often advocated by people who don’t do anything themselves, and in this respect they’re similar to trolls. It’s notable that the most critical comments made on my YouTube channels, for example, were invariably made by people who hadn’t uploaded a single video. This may be due to wanting to watch videos rather than contribute them, but behind that, studies have shown that trolls frequently hold themselves to such high standards of quality that they never contribute anything positive for fear that it won’t be good enough. That’s a simplistic view of trolling, which is a lot more complex than that, but in a sense refusal to engage with the world due to high moral standards is the “real world” equivalent of trolling. Hence you should probably do something, although inaction, rather than action for its own sake, does have much to recommend it.

The title of this post is of course “Adidas-Related Karma”. Now as you know, there’s a very successful sportswear company out there called Adidas, although other sportswear companies are available, notably the even more successful Nike. Nowadays, Adidas primarily market their footwear, which most of the time is just beyond the pale for me because of their use of leather, and for me veganism trumps everything. Hence you will never, ever find me in a pair of Adidas trainers, and not just because I will be running too fast (I won’t be – my running is basically a slightly speedy amble most of the time). The reason I mention veganism, which most people think of as having to do with “animal rights”, is that there are clearly many other problems with Adidas trainers than just the fact they use leather, notably that they’re manufactured in sweatshops by literal slave labour. That said, Adidas are attempting to clean up their supply chain by eliminating forced labour and have made some progress in this area, although considering that slavery has been illegal for getting on for two centuries now, it’s not exactly something to boast about. In terms of their environmental record, they’re committed to being 90% free of polyfluorinated chemicals, because PFCs disrupt hormones and contribute to cancer, and are moreover used a lot in outdoor gear and therefore end up in quite fragile wilderness, or at least remote, environments, not to mention their effects on the wearer. They tend to use tax havens for their holding companies and their boss had a €12 000 000 annual bonus in 2015, so it’s pretty clear they really aren’t one of the good guys, to no-one’s surprise. But I do wear Adidas stuff as running gear some of the time, and I’ve just noticed something interesting.

I’ve gone into the issue of running gear and gender dysphoria before on another blog, and another aspect of this is a comment someone made just after I transitioned socially – “we always wondered why you wore ladies’ [sic] tracksuits all the time” – when I had practically no idea that I was wearing clothes which didn’t look unisex, and my attempt to be gender-neutral had completely failed despite my efforts not to dress “like a woman”, whatever that means. I try to go running as much as possible in the dark, ideally first thing in the morning, because otherwise I worry that people will see me and read me as male. Nonetheless I find running a bit of a slog, and in order to “bribe” myself to go running I’ve spent a fair bit of money on clothes to do it in, from three or four sources, the theory being that in order to justify the investment I actually have to go running. As I’ve said, I never wear Adidas trainers. The trainers I do wear are Nike, but they were a present so it doesn’t really count. In fact, as I’ve said before, the only thing which really matters with running, I think, is footwear. There’s plenty of rationalisation about compression wear and so forth reducing recovery time, but I think that’s basically a marketing exercise. And I do wear Adidas stuff to run in, though not often up until now. Maybe about ten percent of the time.

This, however, is a record of all the injuries I’ve sustained during running:

  1. January 2019: Tripped over a churchyard wall and barked my shin, wearing a blue Adidas trackie top and blue Adidas running tights.
  2. June 2019: Inexplicably fell over sideways almost immediately after starting running, wearing a one-piece Adidas thingy I don’t know how to describe, winding myself and bruising some ribs.
  3. October 2019: Tripped over a kerb, grazing my right knee in three separate places and hurting my right hand (thenar eminence), wearing brand new black Adidas leggings, which also ripped in the process of me doing myself an injury.
  4. October 2019 (yesterday in fact): Distracted by a red fox in the distance, turned over my ankle in an unexpected hole in the ground and wrenched the extensor retinaculum in my right foot, wearing a second brand new pair of Adidas leggings.

And the thing is, I don’t actually wear Adidas that often! It’s very tempting to see this as karma or divine retribution for wearing unethically-sourced clothes. The probability can kind of be worked out. I’ve probably gone for at least a hundred runs since January at this point. On perhaps ten or fewer occasions I’ve worn Adidas stuff. I’ve injured myself four times. The chances of injuring myself on any given run are therefore about one in twenty-five, and the chances of my wearing Adidas about one in ten. So, and I may be reasoning fallaciously here, by the usual one-in-twenty standard it looks statistically significant that the combination of the two has so far been universal. So, why? Is it because I’m bribing myself into running by wearing Adidas, so it tends to happen at a time when I’m not really into it and am therefore failing to concentrate, or after a break when I may be less aware and focussed on running? Or is it God punishing me? Or me punishing myself? I don’t really believe in that kind of God though.
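For what it’s worth, the coincidence can be put on a slightly firmer footing than my back-of-envelope version. If the four injuries had fallen at random among the hundred runs, independently of what I was wearing, the chance that all four landed on the ten Adidas runs is a simple counting problem (the run counts are my own rough estimates from above):

```python
from math import comb

# Probability that 4 randomly placed injuries all fall on the 10 Adidas
# runs out of 100 total runs (counts are rough estimates, as in the text).
runs, adidas_runs, injuries = 100, 10, 4
p_all_adidas = comb(adidas_runs, injuries) / comb(runs, injuries)
print(p_all_adidas)  # roughly 5.4e-05, far beyond the usual 1-in-20 threshold
```

Of course, the injuries probably aren’t independent of the clothing at all; the “bribing myself” explanation above would account for the clustering without invoking karma.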

Yes, I really am that shallow and I really am letting Adidas fleece me even though they’re a highly unethical company. Maybe it serves me right then.

The Four Horsemen

Apart from the awesome arrival of our granddaughter, two things are on my mind just now, partly in connection with that event because it focusses one on the future, which in this case will, I hope, extend into the twenty-second century. This has provoked the ironic purchase of white goods which are more energy-efficient, although clearly there is embodied energy in them, but we have to preserve this world for our grandchildren so this time it’s personal, as the cliché has it.

This is an eldritch chimæra of four works to which I’ve recently been exposed: ‘The Archers’ (been exposed to that since 1973 at the latest, but even still), ‘The Last Man On Earth’ (which apparently has really bad acting, but because of my lowbrow taste I’m able to enjoy it anyway), James Lovelock’s ‘Novacene’ and Edmund Cooper’s ‘The Overman Culture’. Three of these deal with a potential apocalypse, two of them focus on artificial intelligence and one of them is an everyday story of country folk. I’ll talk about the AI angle elsewhere. ‘The Archers’ is a notably non-apocalyptic soap opera, though with educational intent, at least initially. However, it’s recently been bugging me – apparently this is called a “plot bunny” – that the soap tells a tale of a village which clearly has a considerable history extending well back into the Dark Ages and probably Roman times. It has prequel novels which I understand go back to the early twentieth century, and there’s a volume covering its fictitious history which I’ve ordered and am currently awaiting with anticipation, but oddly, to me, what they’ve never really done is to explore the mediæval aspect. Thus I decided to do that, and am currently researching and planning a seven-episode series covering the years 1315 to 1381, because of the eventful nature of the fourteenth century. They were, as the phrase has it, interesting times. In fact they were bloody awful, even leaving aside the fact that feudalism characterised that time. Although most people with more than a passing knowledge of English history would doubtless be aware of this, it’s pretty gobsmacking how appalling the fourteenth century really was. It was shockingly grim. The bucolic tone of ‘The Archers’ can’t really survive this turn of events and the best that could be managed would be a kind of ‘Blackadder’-style gallows humour, if I decide to go in that direction.

The 1300s marked the end of the Mediæval Warm Period and the start of the Little Ice Age, which only seems to have ended as a result of the Industrial Revolution, as James Lovelock explores in his ‘Novacene’. In other words, the previously mild English climate favourable to the growth of crops, which moreover enabled marginal land to be brought into full cultivation, underwent a change to cooler summers, colder winters and more rain and snow. Europeans suffered terribly because of this. The warm period had enabled the population to grow considerably from about the beginning of the ninth century onward, and the climate change, nota bene, led to a crash with enormous consequences. Firstly, the cold wet weather of 1315 meant that crop yields crashed from a seven-to-one ratio for wheat to a two-to-one ratio or below. That lower proportion allows, for every grain sown, just one grain for replanting and one for eating, which is barely subsistence agriculture, and it was also reflected in the productivity of other food crops. Consequently there was no food; the wet conditions made it hard to store up seed, salt used for preservation got damp and washed away, the fish moved south because of the cooling water and there was considerable inflation. Villeins and serfs ended up eating the grain intended for planting in order to survive, then later slaughtered their beasts of burden for food for the same reason, and were left in a situation where they couldn’t use oxen to plough the fields because they’d eaten them all, and in any case had nothing to plant even if they had been able to. Older people voluntarily starved themselves to death so the younger generation could survive, and there was also infanticide and cannibalism. This went on until 1317, and food stocks and farming didn’t recover completely until the mid-1320s. That was the Second Horseman – Famine.
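The yield-ratio arithmetic above can be made explicit. At a given yield ratio (grains harvested per grain sown), the share of the harvest left to eat after reserving next year’s seed works out as follows; this is just a sketch of the reasoning in the text, not historical data:

```python
# Fraction of the harvest available to eat after setting aside seed corn.
def edible_share(yield_ratio):
    # Of every `yield_ratio` grains harvested, one must be replanted.
    return (yield_ratio - 1) / yield_ratio

print(f"7:1 yield: {edible_share(7):.0%} of the harvest can be eaten")
print(f"2:1 yield: {edible_share(2):.0%}, i.e. bare subsistence")
```

At two to one, half of everything harvested has to go straight back into the ground, and anything below that ratio means eating the seed corn, which is exactly what happened.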

Then there was the start of the Hundred Years’ War in 1337. The Hundred Years’ War actually lasted more than a hundred years and had several truces in it; being a mediæval war, it also broke for the winter and other happenings – I’m guessing holy days and fast days, for example. One cause of the war was the attempt to deny the right of Isabella of France to “transmit” her succession to her sons, and the rivalry between the Plantagenets and the House of Valois was also a factor. Armies were fed from local food sources, so happening as it did just a couple of decades after the Great Famine, this was not particularly marvellous either. As far as I can tell, this was a self-inflicted calamity with little to redeem it, although from a modern perspective the fact that it was partly caused by a gender-related issue redeems it a little. Even so, it still seems pretty appalling that the main reason for taxation up until very recently was to enable wars to be fought. Relating this to the Great Famine (not to be confused with the Irish and Scottish Great Famine of the nineteenth century), there was a general upturn in violence resulting from the desperate circumstances it had wrought, so it’s possible that this too could relate to the onset of the Little Ice Age. Even so, the Hundred Years’ War strikes me as the fourteenth-century version of Brexit in that it was basically the fault of the royal houses of Europe. Like Brexit, it involved Calais, which was captured by the English and held onto for a couple of centuries. It’s interesting how the realities of physical geography bring about these kinds of parallels. The First Horseman – War.

The Third Horseman is of course Pestilence, which for my purposes right now is the most “interesting” of the four and also the most notorious event of the fourteenth century – the Black Death. This infection seems to have spread into Europe via ships putting in at Sicily, Venice, Sardinia and Corsica in 1347. The signs and symptoms are pretty well known but I’ll go over them again anyway. Tumors (note the spelling – not “tumours”) the size of apples arose in the armpits and/or groin, oozing pus and blood when opened; the lungs filled with fluid and the body became afflicted with melancholy skin lesions. The victim also had a fever and vomited blood, and death occurred within a week. It was attributed to a foul miasma brought by a wind, and looking at it from the perspective of humoral medicine, to me it looks like an excess of melancholy. That said, it notably marked the onset of doubt in the medical profession that the principles of Galen and Hippocrates were effective, and probably led to the introduction of chemotherapy shortly afterwards (in the older sense of the word – not “cancer chemotherapy”). I could get led into a quagmire here because this is thoroughly homeedandherbs territory, but even so I’m sticking this, and the rest of what I’m going to say, here. In any case, whatever the cause, the disease killed around a quarter of the population of Europe.

The standard explanation nowadays is that the Black Death was caused by Yersinia pestis, carried by rat fleas, and was bubonic and/or pneumonic plague. A third variety, septicæmic plague, also exists. This explanation has been questioned, though. The Black Death spread orders of magnitude faster than those plagues would be expected to, crossed mountain ranges where rats didn’t go, and afflicted Iceland, where there were no rats at the time. There were also no reports of the mass die-offs of rats which might be expected. One form of plague does produce the buboes mentioned in Boccaccio’s description, and there are petechial hæmorrhages as described, but the spread may have been between humans rather than via rat fleas or rats. Pneumonic plague can be spread via droplet infection. It’s been suggested that the Black Death was in fact either anthrax or the African disease Ebola, which would make it a hæmorrhagic viral fever rather than a bacterial disease, and it’s notable that the first infections in Europe were in the Mediterranean region, just as it’s thought that AIDS spread into Europe from Africa via Sicily. Yersinia pestis DNA is admittedly found in plague pits, but the problem with that is that it might’ve been there anyway, because rats were so common.

The soil and seed analogy emphasises that there are two groups of factors in infection: the pathogen itself and the state of the body in which it finds itself. Some of the time, the physical health of the body is fairly irrelevant because of the virulence of the organism associated with the disease, but this is by no means always so, and this, I think, is key to what the Black Death actually was. This can be illustrated with reference to the horrific disfiguring ailment known as noma, cancrum oris or “water canker”.

Noma is a disease now found mainly in Africa south of the Sahara. It starts with a gum infection in childhood, which then spreads to the cheek and elsewhere in the mouth, causing the tissues, including bone, to become gangrenous. The victim is left with a large hole in her face passing through to the inside of the mouth, which makes it hard to eat and leads to social stigma. It can also cause blindness due to inability to close the eye. Predisposing conditions include measles, poor dental hygiene, proximity to livestock and, particularly noteworthy, malnutrition. Babies are occasionally born with it, and having recently become a grandparent, I find this particularly appalling. In a sense it’s easily preventable, through good nutrition and antibiotics, and it’s a neglected disease. This term, which is official, refers to serious diseases which are widespread but relatively little studied and in which few resources are invested. Incidentally, in my alternate history of the Caroline Era, AIDS is a neglected disease: a liberal pope, elected after John Paul II’s assassination, permitted the use of condoms, so AIDS stayed confined to Sub-Saharan Africa, where it has practically wiped out the population of Central Africa. The fact that noma even still exists is appalling and ought to spur us all into bringing about global socialism, along with about a million other things. Of course this doesn’t happen, because the wealthy and “powerful” either never see it or are sociopathic due to their upbringing, so we continue with global capitalism and its outrageous toll of death and suffering.

Noma used to be found a fair bit in Europe, including Britain, where it persisted until at least Victorian times. We can assume, in fact, that it existed in England in the fourteenth century, where it would undoubtedly have led to the conclusion that a family was cursed, and caused social ostracism and persecution. The point about noma, however, is that without malnutrition and other stresses on the body, the same pathogens associated with it cause only self-limiting conditions. After the Great Famine, the population was weakened, and tuberculosis and pneumonia were widespread. Children who weren’t killed or eaten during that time would’ve grown up fairly sickly. I suspect, therefore, that the Black Death is not so much plague as such as the way Yersinia pestis takes advantage of a weakened and constitutionally compromised body, which makes it very much like noma, whose associated bacteria generally cause nothing worse than tonsillitis and sore throats here. In other words, it was all the rest of what was going on in the fourteenth century that led to the Black Death turning out the way it did. Subsequent waves of infection were milder, probably partly because of evolution in the human population but also due to improving general health and nutrition. Before I move on: the Black Death was very probably spread partly by flagellants – people walking to distant towns and cities whipping themselves and others in penitence for the sins of the world as a way of assuaging God’s wrath – and also by the movement of troops in the War.

Rather than getting into the consequences of the Black Death just now, which are interesting and valuable material for the ‘Archers’ project, I’m going to turn to ‘The Last Man On Earth’. There are clearly going to be spoilers at this point, but I think probably the series is little known and not popular or critically acclaimed, so if you go on reading it’s not much of a loss and in any case they’re quite mild. In ‘The Last Man On Earth’, almost all vertebrates including humans have been wiped out by what seems to be a viral hæmorrhagic fever. So virulent was this that the apparent number of survivors in the whole of North America, whose population is currently 579 million, only seems to be in low double figures. This is of course a plot device to get all the people out of the way so the apocalyptic fun and games can start, but oddly, in spite of the fact that it’s based on gallows humour, it manages to introduce a number of realistic features which are usually ignored in post-apocalyptic fiction. What piqued my curiosity, however, was the feasibility of humanity being practically wiped out by a virus or other pathogen.

The Plague of Justinian in the sixth century is estimated to have wiped out up to a quarter of our species. Even that is far less devastating than the Toba eruption, which on some estimates reduced the human population to a few thousand individuals. As for the Black Death itself, which like the Plague of Justinian is associated with Yersinia pestis, that seems to have killed about a quarter of the population of Europe and also a large part of the human population of other parts of the Old World, such as China. This is devastating but also relatively easy to recover from. Spanish ’flu wiped out about 3% of Homo sapiens and, like the Black Death, correlated with a major European war. Prevailing wisdom holds that a virus is unlikely to wipe out our species, because a pathogen which destroys its host is destroying its own habitat and would therefore wipe itself out. The problem to my mind with this argument is that we are ourselves in the process of making our environment uninhabitable and there’s little sign of that changing. It also seems to me that it puts the cart before the horse in evolutionary terms. Mutations and evolution may lead to greater fitness to survive in the long run, but that doesn’t mean that individual mutations don’t lead to extinction in the short term, leaving other species which do have greater fitness to survive and reproduce. Maybe we’ve just been lucky up until now.

Everything I’ve written here so far has had a rather glib atmosphere to it, but this is of course a serious matter. Apart from anything else, it involves us, our families and friends. The abstraction and detachment I feel is probably an issue of scale. Even so, the Doomsday Argument is about the near future, and although I’ve addressed its validity elsewhere, because it may not in fact work particularly well, it’s worth covering it again. Suppose one’s life to be a random sample of all the human lives which will ever occur, and more specifically that one’s thought that humanity might be about to disappear is a random sample of all such thoughts. This thought clearly did occur many times in the fourteenth century, as can be seen from accounts of the time along with the art and literature it produced. Even so, it seems reasonable to suppose that one’s life occurs about half way through all the human lives there will ever be. It was calculated in about 1970 that seventy-five thousand million people had lived up to that point, the cut-off in terms of evolution being Homo erectus, who lived from around a million years ago onward. Whether they were capable of having such a thought is another matter; presumably it correlates with behavioural modernity, in which case this is an overestimate of the number of people capable of thinking that way. Also in about 1970, population was doubling about every thirty years. This has apparently now slowed, but population growth generally slows because of development, since children are then not used as much for labour or care of the elderly and infant mortality goes down, so an underdeveloped world such as this one, in which noma is still rife, has rapid population growth. Population reached seven thousand million in 2011. Seventy-five thousand million is roughly eleven times that number, and eleven is less than 2⁴, i.e. four doublings.
Since doubling occurs every thirty years under current conditions, four doublings amounts to one hundred and twenty years, yielding a date of 2131 – we can expect the last human being to be born about then, given these assumptions. Note that this argument has nothing to say about the cause of our extinction, just that it’s likely to happen shortly after 2131, or at least within a human lifetime of that date.
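The back-of-envelope arithmetic above can be sketched in a few lines of Python. The figures are the ones used in the text (seventy-five thousand million ever born by about 1970, seven thousand million alive in 2011, a thirty-year doubling time) and the whole thing is of course only as good as those assumptions:

```python
import math

born_so_far = 75e9      # people ever born by ~1970, per the text
population_2011 = 7e9   # world population in 2011
doubling_years = 30     # doubling time assumed in the text

# If one's life falls about half way through all human lives, roughly as many
# people are still to be born as have been born already. Assuming cumulative
# births track population doublings, we need enough doublings of 7e9 to
# reach ~75e9 further births.
doublings = math.ceil(math.log2(born_so_far / population_2011))
years_left = doublings * doubling_years
last_birth_year = 2011 + years_left

print(doublings, years_left, last_birth_year)  # -> 4 120 2131
```

The `math.ceil` rounds eleven-ish up to the nearest power of two, which is where the “less than 2⁴” step in the prose comes from.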

There are naturally major flaws with this argument. For instance, it might just be predicting when humans become immortal, or when we stop being pessimistic about our future. It also leaves the mechanism of our demise entirely mysterious. It looks at first as though it suggests a cause connected to overpopulation, but in fact it does nothing of the kind. What it does do, though, is focus the mind on the future and possible reasons for human extinction, particularly if one has descendants or cares about people who have them, or who are just young. My father is currently ninety. He lived through a time, and in conditions, which were not particularly conducive to health. Nonetheless he’s still here. Extending that to our granddaughter, who was born last week, it’s reasonable to expect her to live until at least 2109, a couple of decades before the supposèd cutoff date, which itself is well within the expected lifetime of any children she might have. This brings the prospect vividly home to me, and it makes me wonder, moreover, what explains the disturbing lack of concern shown by our apparent leaders and the high and mighty generally. While I’m aware that they might not buy into the Doomsday Argument, which even I think has its flaws, it’s no longer rational to deny anthropogenic climate change, and on the whole these people have children and probably grandchildren. What are they expecting to happen?

During the ‘noughties, it was notable that the schooling system did not appear to be preparing the rising generation for survival in a post-apocalyptic environment, and in fact seemed to be mainly concerned with short-term economic goals. This was under the auspices of New Labour, which was enough to discredit the Blairite project completely and make a vote for Labour a crime against one’s children and grandchildren. This doesn’t currently hold true, although it does of course apply to voting Conservative at the moment. I find it hard to avoid the conclusion that the richest governments and powers on the planet are engineering some kind of population crash, although this may be paranoid. Regardless, we can enumerate the possibilities:

  • They’re just short-termists.
  • They are aware of the risk and have a plan but are keeping it secret in order to hide the fact that it doesn’t include most of us.
  • They’re in denial about the risk and therefore have no relevant plans.
  • They’re aware of the risk but expect something like a technical or market-based fix.
  • They’re dispensationalist post-millennialist Christians who see this as the apocalypse, and are therefore not interested in sorting it out.

I think the last is true of some religious people. The penultimate possibility is compatible with Singularitarianism – an idea I’ll explore later when I get to talking about ‘Novacene’ and ‘The Overman Culture’ – which has been described as “The Rapture For Nerds”. One problem with it, for them, is that it doesn’t look like it’ll end with them still in power, or rather, still able to maintain the illusion to themselves that they are. Another is that it’s been possible to provide for every member of the human race for a very long time now – though probably not in the fourteenth century – but this simply hasn’t been done, and it doesn’t depend on technology. The idea that they’re in denial is rather feasible, and was suggested to me recently. The second possibility strikes me as both the most feasible and the most paranoid, and that bothers me because I can’t decide which it is.

But I want to leave you with this. I now have a grandchild. On the whole, most people in the developed world become grandparents at some stage, although of course many people are also child-free, not least because of the state of the planet and society. Considering that this is where the majority finds itself, we can surely be expected to have common cause in getting this sorted out. It’s just extremely concerning to me and also rather mystifying. Any thoughts?

Nessie And Friends

For someone living in Great Britain, Loch Ness is an absolutely awesome place. It holds more water than every lake in England combined; Glen Mòr, in which it’s situated, marks the fault separating off the part of the Highlands which once belonged to the same mountain range as the Appalachians; it includes the only inland lighthouse in Britain; and it’s connected to the sea by the fastest-flowing river in Britain. At 227 metres deep it is the second deepest loch, after Loch Mhòrar, and with a mean depth of eighty-seven metres it’s the deepest body of fresh water on this island on average. It has a volume of 7.4 cubic kilometres. The water is incredibly cold, really clean and somewhat stained by peat – it looks rather like whisky, in fact. It’s only the second longest loch, and Lough Neagh in Ireland is seven times larger by area, making it the biggest inland body of water in these isles, but unlike its Scottish rival, Lough Neagh is quite shallow, averaging nine metres deep with a maximum of only twenty-five. Loch Ness is long and narrow, and this may be significant in the perception that there’s a monster in it.

(c) 1972, Academy of Applied Science/Loch Ness Investigation Bureau

Right now, I don’t believe in the Loch Ness Monster, Nessiteras rhombopteryx. As a child, I truly did, and I wasn’t alone in that. I really wanted to believe there was a Mesozoic plesiosaur living in the loch, and as usual with young children, if a strong emotion accompanies a thought, that thought becomes a belief. You’ll also appreciate that the really big sciency things for children at the time were space and dinosaurs, and as a lifelong nerd it isn’t surprising I was into all that stuff. In addition to that, though, I was also into a load of other “weird” stuff which nowadays I’d probably call Forteana – UFOs as alien spacecraft or time machines, psychic powers, past life regression, all the usual suspects. A lot of my peers were interested in that kind of thing too. But I, and they as it turned out, was oddly discriminating about what I chose to believe in. I had no trouble believing in the Loch Ness Monster or other lake monsters, sea serpents and the like – giant dinosaurs who had survived from the Mesozoic into the Glam Rock Era. After all, there was even a band named after a dinosaur at the time. But for some reason I did have a problem believing in the Yeti and Sasquatch. There were several reasons for this, I think. One was that they were a bit too much like teddy bears, and I was trying to put away childish things, which ironically led to that childishly irrational choice. Another, something I noticed about myself quite early on, was that I was drawn to the strange, i.e. that which was unlike the prosaic. For instance, birdwatching has never appealed to me much because the animals concerned are everywhere, and at that stage I found cetaceans way more interesting than land mammals, marsupials more interesting than placental mammals, and so on. Sasquatches and Yetis were both placental mammals and very humanoid, thereby reducing their appeal.
Another reason was the rather poor marketing job done on the Yeti and Sasquatch by calling them the “Abominable Snowman” and “Bigfoot”, both of which came across as rather goofy names, not allowing the cryptids concerned to be taken seriously. I mentioned also that my friends were sceptical about certain “mysteries”, for want of a better word, and in particular that they didn’t believe in the Cottingley Fairies. I think this is gender-based. They were happy to believe in the more macho monsters – massive, aggressive, hulking muscle-bound role models perhaps – but not in the rather feminine little flying girls dancing in rings at the bottom of the garden. Not that I necessarily believed in fairies myself, although they did once nick one of my library books, but that’s another story.

The funny thing about what I’m going to call the “mountain men”, because I do think there’s a gender-based element in believing in them and in their image, is that although the Yeti is in theory more plausible than the Sasquatch, a lot more evidence has been produced for the latter than the former. The reason I say this is that a few million years ago there was in fact a very large ape called Gigantopithecus living in South Asia, terrestrial and related to the orangutans. He (see above) was getting on for three metres tall standing upright, and it makes sense to suppose that come the ice ages he would’ve become adjusted to the colder conditions and ended up withdrawing to Tibet and the Himalayas when the ice sheets retreated. So there’s an entirely feasible process whereby yetis could’ve evolved and ended up in that part of Asia. By contrast, Sasquatches are much harder to make work. Either apes would have had to have entered the Pacific Northwest of North America somehow, or there could’ve been convergent evolution from New World monkeys when the fauna exchange occurred in the Pliocene (I’m doing this from memory, incidentally, so I might’ve got the exact date of the formation of the Isthmus of Panama wrong, but it was sometime around then). New World monkeys are more arboreal and smaller than Old World monkeys and never gave rise to apes, although there could be other reasons for that, such as the absence of the right kind of environment or ecological niche. If apes – particularly our closest relatives, the other hominins such as the Neanderthals – had reached North America, they could’ve been expected to have left remains of their activities and bodies, and that didn’t happen.

Nonetheless, the Sasquatch is far more “popular” than the Yeti. I haven’t heard anything about yetis for decades now, and they don’t seem fashionable compared to their American cousins. Reports of yetis are largely based on hearsay, and the few apparent samples have turned out to be from bears. It is possible that there’s a known endangered species of bear living up those mountains which is confused with humanoids. Bigfoot is another question entirely. There’s alleged film footage, there are samples which have been subjected to genome sequencing, and there’s even a semi-official Latin binomial for them: Homo sapiens cognatus, recognised by ZooBank. This would make them a human subspecies, which seems odd to me because I’ve always thought of them as belonging to a separate genus entirely. Clearly with science the jury is out until something is well-corroborated and replicated, but I don’t feel able to accept that they’re real at all.

Bigfoot investigators have themselves been investigated scientifically, and are said to be people who feel excluded from the establishment and the usual academic career ladder. This shouldn’t be taken as a comment on their intelligence, but it does often mean that they’re self-taught and not necessarily trained in scientific investigation, and they are of course largely excluded from a community of researchers which could otherwise provide either a sanity check or groupthink, maybe both. Their situation reminds me of shop stewards, that is, people excluded from middle class career progression and therefore pursuing promotion or a role which uses their abilities and experience by other means. The presence of the Rockies in two developed and wealthy nations also means that there are facilities and infrastructure available which may not apply so much to Nepal, Tibet and associated regions.

But for whatever reason, I don’t believe in Bigfoot.

As a child, my understanding of Nessie was that it was a surviving plesiosaur which had become trapped in the loch when Glen Mòr first formed, although since the glen was apparently completely covered in glaciers for many thousands of years relatively recently, I don’t know how that works exactly. It implies that plesiosaurs were around more widely in the world before this happened, and this is to my mind one of the problems with the idea. Although there are plenty of other lake monster accounts in the world, including ones in North America and Japan, the niches occupied by plesiosaurs now seem to be filled by the likes of whales and seals, and in fact there seems to be a more general problem with the survival of any of the large animals from the Cretaceous onward. If they had managed to survive more than just marginally at the beginning of our era, they would’ve been in an excellent position simply to take over again and forestall the rise of the mammals. Either plesiosaurs or large dinosaurs must surely have been very close to being wiped out immediately after the Chicxulub impact, or they would just have come back again straight afterwards. I do in fact think that there probably were a very few survivors into the Palaeocene, but they were too isolated to find each other and mate, or possibly the populations were so small that they would’ve become inbred and unable to thrive. So far as carrion-eaters were concerned, it’s even possible that they underwent a temporary, unsustainable population explosion. One thing which definitely did happen was that small reptiles and birds who made it through evolved into larger forms millions of years later, such as Diatryma and Phorusrhacos, giant flightless birds getting on for three metres high, and the lizard Barbaturex, which, although it was only about 2.6 metres long, was probably the largest animal around in its habitat at the time.
But there are no signs at all of plesiosaurs, and if there had been any, they would’ve had to have been pretty competitive to survive at all, and would therefore probably have been quite common.

When I wrote ‘Here Be Dragons’, I tried to make the Loch Ness Monster work, and the only way I managed that was to imagine that desmostylids had survived in the Atlantic Ocean. Desmostylids were amphibious relatives of the sirenians, so roughly like manatees and dugongs but able to haul themselves onto land as well; the trouble with them as candidates for Nessie is that they only lived in the Pacific. They didn’t spread to this side of the world at all and are distinctively American and Far Eastern mammals. This does suggest, though, that Nessie might not be a plesiosaur at all but some other exotic kind of animal. But there’s a big problem with anything of that size living in the loch. The monsters are supposed to be twelve metres long, and there can’t realistically be fewer than about twenty or thirty of them, because of the need for genetic diversity. There is not, however, enough living matter in the loch to support a community of that size. Not enough food. If they were mammals this would be even harder, because they’d need more fuel to maintain their body temperature. This may, though, be part of a clue to what’s actually going on.
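The food problem can be made concrete with a deliberately rough Fermi sketch in Python. Every number here is an illustrative assumption of mine, not a measurement – the body mass, the daily ration percentages and the fish stock figure are all guesses chosen only to show the shape of the argument:

```python
# Fermi sketch: could the loch feed a breeding population of monsters?
n_animals = 25        # minimum viable population, per the text's 20-30
mass_kg = 5_000       # guessed mass of a 12 m aquatic animal (assumption)

# Guessed daily rations as a fraction of body mass (assumptions):
# a cold-blooded animal might need ~1% of body mass per day, a mammal ~5%.
food_reptile_kg_day = n_animals * mass_kg * 0.01
food_mammal_kg_day = n_animals * mass_kg * 0.05

# Assume, purely for illustration, a standing fish stock of ~30 tonnes.
fish_stock_kg = 30_000

print(food_reptile_kg_day)                      # -> 1250.0
print(food_mammal_kg_day)                       # -> 6250.0
print(fish_stock_kg / food_reptile_kg_day)      # -> 24.0 days to empty
```

Even with the cold-blooded figure, the herd would strip that fish stock in under a month without replenishment, and the warm-blooded case is five times worse – which is the point the paragraph above is making.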

Looking at the loch, one notices a couple of things. One is that there are plenty of grebes in the water, who have a habit of rearing up on their legs while swimming, which makes them look like the neck and head of a larger animal most of whose body is underwater. The other is that there are peculiar standing waves in the water running lengthwise which look either like snakes or solid humps in the water, which is how the monster is described. I don’t know what causes these but I presume it’s something to do with the shape of the body of water and the banks. There are in fact thermal standing waves deep in the water although these aren’t supposed to have any visible surface manifestation. The waves you can see are pretty distinctive though.

Sir Peter Scott, the well-known ornithologist and son of Scott of the Antarctic, strongly believed in the monster and went so far as to give it the Latin name I mentioned earlier, Nessiteras rhombopteryx – “Ness monster with diamond-shaped fin”. Some joker later pointed out that the name is an anagram of “Monster hoax by Sir Peter S”, which is interesting but probably a coincidence, although a very good one. On the whole I think the people who wrote and said they were looking into the story were genuine, honest people and definitely not hoaxers, with some exceptions – the famous “Surgeon’s Photograph”, for example, really is a hoax. A submersible was sent down to scan the water with sonar and found a large echo which was either a large moving object or a shoal of fish. Parsimony demands it was the latter, of course.
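The anagram claim, at least, is easy to verify mechanically. A minimal sketch in Python, comparing letter counts while ignoring case, spaces and punctuation:

```python
from collections import Counter

def is_anagram(a: str, b: str) -> bool:
    """True if a and b use exactly the same letters, ignoring
    case and anything that isn't a letter."""
    def letters(s: str) -> Counter:
        return Counter(c for c in s.lower() if c.isalpha())
    return letters(a) == letters(b)

print(is_anagram("Nessiteras rhombopteryx",
                 "Monster hoax by Sir Peter S"))  # -> True
```

Both phrases boil down to the same twenty-two letters, so the joker was right – though, as I say, that proves nothing about Scott’s intentions.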

There is, to my mind, just one possibility, though a pretty slender one, under which there is indeed a large animal, or in fact a whole community of large animals, in the loch which does make sense to me. This is the suggestion that there are in fact Greenland sharks in the water. These are extremely long-lived animals, living up to five hundred years; they grow to about six metres long and live at very low temperatures, so their metabolism is very slow and they wouldn’t need much food. They are in fact found locally as well, in the waters around Scotland and further north. They’re also said to be able to adjust to living in fresh water, which makes sense because the Arctic Ocean is less salty than the rest of the oceans, being almost landlocked, with many rivers flowing into it along with meltwater from snow and ice. So although I have been very doubtful about the existence of any large animals in Loch Ness, I have to admit that right now, although I’ve only heard about this idea recently, it does sound quite feasible. Greenland sharks also tend to be hard to detect on sonar, and if they grew to a considerable size in the loch they wouldn’t be able to return to the sea.

I’ve written all this without looking at what’s now thought about most of the topics covered, so I might be way off-beam. I used to believe strongly in Nessie, then stopped, mainly because of the food argument and the improbability of there being plesiosaurs around nowadays. Right now, though I’m still pretty convinced there’s nothing interesting there, I have to admit it’s just possible there’s a small number of freshwater Greenland sharks in the loch.

Or it could just be the frog exaggerator.

Theistic Stockholm Syndrome

As you must surely know, Stockholm Syndrome is the phenomenon whereby a kidnappee comes to ally herself, apparently willingly, with her kidnapper. It’s named after a 1973 incident in Norrmalmstorg, Stockholm, where a bank robber took four people hostage, at least one of whom later showed sympathy for him and complained about his treatment to Olof Palme. The classic example, though, is the Symbionese Liberation Army’s kidnap of Patty Hearst in 1974, who later proceeded to rob banks with them and unsuccessfully pleaded Stockholm Syndrome when caught. It’s also said to occur in child sexual abuse situations, and classically in cases of domestic abuse. You might think that, as a former kidnappee, I have some insight into the situation, and to a limited extent I have, though I can see that kind of behaviour much more clearly in other parts of my life. I would say, though, that the reason my case was newsworthy is that I had several opportunities to report the incident to the police which I didn’t take, and that I felt a strong need to protect my kidnapper, although this was partly for pragmatic reasons: I felt a prison sentence would mean he would resort to crime later, due to not having good prospects, and there would be further victims down the line after me. That may be a rationalisation, but apart from the general sympathy I feel for people just because they’re people, I wouldn’t say I felt especially sympathetic towards him.

The diagnostic criteria for Stockholm Syndrome are the development of positive feelings towards one’s captor and negative feelings towards the police and authorities. I didn’t satisfy those criteria, because I started with negative feelings towards the authorities and police, and in fact felt more positively towards them as a result. The characteristic features of being captured in this way include poor memory of the events, denial of their reality, flashbacks, confusion, fear, despair, lack of feeling, depression, guilt, anger, PTSD, reliance on the captor, a gradual ramping up of physical issues associated with the situation, such as thirst and hunger and their possible adverse consequences, caution, aloofness, anxiety and irritability. I can see some of that but not all of it. For me, as an example, it was more like a flashbulb memory, and I was able to tell the CID in great detail – more than they were expecting – what had happened and when, which might be the result of a predilection to depressive thinking, which gives me better recall of negative experiences than positive ones.

On the whole, people who believe in God seem to say that God is good, although there’s also misotheism and deism. Prima facie, it seems odd to start from first principles and conclude not only that there is a God, but that that God is morally aligned by human standards. One might instead believe in a morally neutral God or, given one’s life, the state of the human world or the existence of worms who eat babies’ eyes or whatever, that there is a God but that that God is evil. Nonetheless, most people seem to opt for atheism, apatheism, agnosticism or theism. This is, as an aside, somewhat reminiscent of the limited range of motives people have for killing themselves – people do it because of depression or as a form of euthanasia but rarely for other reasons, at least in the West. At any rate, theism – belief in a loving, personal and involved God – is very common.

Given, then, that people do believe in a loving, personal and involved God, one might think they would associate their version of God with actions which seem intuitively good or positive. Sometimes this does happen, but quite often the reverse seems to. Things get a bit difficult here because if one does believe in a good, loving God, and I do, one may be tempted to fashion one’s idea of God after one’s own values, and the chances are that, given that one is not perfect, not all of one’s values will be ideal. This, incidentally, doesn’t depend on whether God exists. For instance, one might take a different view of infidelity than one’s partner does. Belief in objective morality doesn’t have to go with theism, and moral relativism needn’t go with atheism. Beyond a certain point, though, it would become very difficult to sustain a certain set of ethical beliefs while also believing in, for example, the “Old Testament God”. My discussion here is not about whether God really exists or is good so much as about a particular combination of beliefs which I think may lead to a certain coping strategy.

The God I’m talking about is wrathful, vengeful, judgemental and very willing to kill people and to exhort others to kill, and maintains an eternal torture chamber we call Hell, but is at the same time called loving and perfect by His followers (and also tends to be referred to as “He” and “Father”). Also, and this is crucial, this God is all-knowing and therefore telepathic – He can read your mind and so knows at all times what you’re thinking and feeling. Having just watched the second series of ‘Criminal Justice’, abusive heterosexual marriages are very much on my mind at this point, but even if they weren’t, I think this would still look very much like one to me. The abuser is male, controlling, keeps a record of everything you do and holds the threat of violence and other forms of abuse over your head at all times. In fact the relationship between God and the Church is often explicitly compared, even by Christians, to a marriage, although the spin given it is unsurprisingly much more positive. Considering the patriarchal nature of the society in which this was a popular belief, with the father having enormous power over his whole family and slaves, even including the power of life and death in some cases, the parallel is hardly surprising. (Incidentally, this is why it’s absurd to believe the Bible condemns abortion – what it condemns is women having control over their own bodies, because of the paterfamilias.)

Before I go on, I don’t want to give the impression that men are never the recipients of abuse in relationships. Of course they are. There are statistical differences of course but it’s not even relatively rare for that to happen in heterosexual relationships. It’s just that this example involves a deity seen as male. God is also our father in this scenario rather than our husband, so it’s not particularly about marriage either, but it clearly is about a more powerful person who is telepathic. I’m going to carry on calling God “him” in this because it reflects what I think is the view of the people involved, not because I think all abusers are male or that God is.

Even in abusive situations between humans, victims can come to believe that they are in the wrong, and fool themselves that they love the abuser and consider him to be good. They may persuade themselves that he is more powerful mentally than he in fact is, without quite straying into the belief that he is telepathic, though I can easily believe that if someone is prone to psychotic thinking, the stress of being in that situation might lead them there. In God’s case, we assume Him to be a mind-reader, and therefore that our minds are transparent to Him, and consequently that if we even let ourselves believe He is not good, He will visit upon us the kind of vengeance He visited on Sodom and Gomorrah. For this reason, I believe that certain theistic religious believers, deep down, don’t believe that their God is good or loving at all, but try really hard to push that belief down, with the result that they have Stockholm Syndrome.

Having said all that, I’m now going to move on to the cheery subject of Hell. We generally tend to believe the Bible tells us that Hell is a physical location underground where the souls of the dead are tortured for all eternity in fire by innumerable demons whose king is Satan. However, just as the popular view of God is really Zeus, with the long white beard and the thunderbolts, so our view of Hell is strongly influenced by Greek and other mythology (as is the Bible itself, of course). The Greek Tartaros is a place deep underground where the Titans are imprisoned and tormented along with wicked humans. It’s the place where Tantalos (after whom the metal tantalum is named) is unable to reach the grapes just above his head or the water in which he stands, because the former are pulled out of his way and the latter drains away as he reaches for it, and the place where Sisyphos has to push a boulder up a hill only for it to roll back down just before reaching the peak. The word “ταρταρος” occurs only in the very suspicious second epistle of Peter, where it describes the place in which the rebellious angels are held in chains while awaiting judgement. Apart from that, the clearest reference to Hell seems to be in the story known as “Dives and Lazarus”, in which a rich man, traditionally referred to as Dives – though that’s a description meaning “rich man” rather than a name – goes to Hell and a beggar goes to Heaven, as it’s commonly understood. The problem with taking this story literally is that it describes Dives seeing Lazarus from Hell (and it doesn’t actually say Lazarus is in Heaven, but does refer to an enormous chasm between the two of them), which makes it rather unfeasible. Even so, the word used for the place of torment here is “ᾍδης”, a word used many times throughout the Greek version of the Bible, including the Septuagint, where it translates the Hebrew “sheol”.
In Greek mythology, Hades refers to a shadowy realm where the spirits of the dead dwell, not in suffering but in a form of half-life. This is also a traditional Jewish view. These are not necessarily people who have lived particularly good lives; there’s a mixture of people there, and not the sorting into sheep and goats that we see in the New Testament.

Another common claim is that “Hell” means the grave. “Grave” is another possible translation of sheol, perhaps as the common grave of the human race – simply that part of Earth under the surface where the dead are interred. Thus apparent references to Hell in Bible translations might only be referring to the grave, or perhaps slightly more figuratively to the “fatal urn of our destiny”, and on the whole they don’t seem to admit of the idea that the dead are conscious, with the exception of the “Witch of Endor” incident, where the spirit of Samuel speaks to Saul – which many Christians have found problematic. There’s also the issue of the Lake of Fire. This is referred to in Egyptian and Greek mythology and also in the Book of Revelation, and seems to be the same as Gehinnom, based on Gehenna, a valley in Jerusalem where the kings of Judah appear to have burnt their children to death (according to one interpretation) and which was therefore considered cursed. In the Tanakh it’s referred to in the books of Chronicles, Kings, Nehemiah and Joshua, and possibly alluded to in Isaiah, but as a physical place on Earth rather than somewhere the dead go to be punished. Rabbinical Judaism, I hear, regards it as a purgatory-like place where the sins of the dead are burned away.

I’m not going to deny that Hell, more or less as it’s popularly understood, is in the Bible, and for my own reasons I have to say that I do believe in both Satan and Hell, but I’m not going into that here. My point is really that it’s up for debate. Another view taken by some Christians is annihilationism: the idea that those who are not saved simply cease to exist at the Day of Judgement. I think this takes away any motivation to believe for selfish reasons – which wouldn’t work anyway – because in a Pascal’s Wager kind of way the stakes aren’t high enough.

But I’ve now run out of time, so I’ll leave you with this. I firmly believe that many people who call themselves Christian, and who hold that familiar idea of a wrathful God who is also supposed to be loving, are likely to be suffering from a kind of psychological problem in which they’re in denial about the fact that such a God would be neither good nor lovable, and are in fact in an abusive relationship with that kind of God. That further makes me wonder whether they are also the perpetrators and/or victims of such relationships in their real lives, and perhaps, to be topical for a second, would vote for a similarly abusive head of state.

Untranslatability And Rubik’s Cubes

Are there really untranslatable words? If so, could a language be entirely untranslatable and if so (again) how? I’ll start with Rubik’s cubes and move on to saudade and sisu.

I never managed to solve the Rubik’s cube. The closest I can get is three faces. I refuse to cheat by reading up on how to do it, and I don’t know how many people who can do the cube have cheated in that way. However, certain things can be seen to be true of the cube which don’t depend on knowing how to solve it. One of these is that it’s impossible to complete five faces without the sixth also being complete: if five faces were done but not the sixth, at least one facelet of a sub-cube would have to be the wrong colour, which can’t happen on a standard cube. That said, there could be other versions of the cube, maybe prank ones, which do have a facelet the wrong colour and so can never be completed. It goes further than that, too. Cubes can easily be dismantled and put back together, but unless you reassemble a cube in an arrangement you know for sure can be reached from the completed state, the probability is that you will have put it into a position from which you can’t get back to the original state without taking it to bits again. This is because only one arrangement in twelve can be reached from the solved state. The number of possible arrangements, although vast, is only one of twelve sets of such arrangements, and none can be reached from any of the others. The branch of maths known as group theory can be applied to cube-solving and these permutations, whose sets have been referred to as “orbits”.
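The twelve orbits arise from three independent invariants: total corner twist (mod 3), total edge flip (mod 2), and whether the corner and edge permutations have equal parity. Here’s a minimal sketch in Python – the state representation (lists of twists and flips plus two permutations) is my own assumption for illustration, not any standard library:

```python
def permutation_parity(perm):
    """Parity of a permutation (0 = even, 1 = odd), given as a tuple
    where perm[i] is the slot piece i has been moved to."""
    parity = 0
    seen = [False] * len(perm)
    for start in range(len(perm)):
        if seen[start]:
            continue
        # Walk the cycle containing `start`; a cycle of length k
        # contributes k - 1 transpositions.
        length = 0
        i = start
        while not seen[i]:
            seen[i] = True
            i = perm[i]
            length += 1
        parity ^= (length - 1) & 1
    return parity

def orbit(corner_twists, edge_flips, corner_perm, edge_perm):
    """Label the orbit of a reassembled cube state.

    Only states labelled (0, 0, 0) can be twisted back to the
    solved cube; the other eleven labels are unreachable from it.
    """
    twist = sum(corner_twists) % 3            # 3 possible values
    flip = sum(edge_flips) % 2                # 2 possible values
    parity = permutation_parity(corner_perm) ^ permutation_parity(edge_perm)  # 2 values
    return (twist, flip, parity)              # 3 * 2 * 2 = 12 orbits
```

The solved cube – no twists, no flips, identity permutations – gets the label (0, 0, 0), while a cube reassembled with a single corner twisted in place gets (1, 0, 0) and can never be solved, which is where the one-in-twelve figure comes from.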

Now for language. When one “does” language, one is attempting to express oneself clearly in a way which can be understood by others, or perhaps by oneself, although Ludwig Wittgenstein would have a lot to say about that particular idea. It’s a process which reminds me somewhat of Rubik’s cubes, and in fact there are notations for cubes which are rather language-like, though somewhat restricted. They’re not going to be able to describe the world as a whole so much as the very restricted but still gigantic world of The Cube. A string of letters and punctuation, upper and lower case, can be used to describe how to turn the parts of a cube to get it to a particular point, such as F, U, L, R and D for Front, Up, Left, Right and Down, and so on. Other versions exist, such as ones spelling out clockwise and anticlockwise rather than using the apostrophe to indicate anticlockwise, but translation between them is easy, so this is not what we want. If, however, there’s a way of comparing the transformations of a cube to the communication of ideas, we might be onto something. If there were a scrambled cube in a different orbit, and the aim was to get it into a particular pattern unreachable from that orbit, the same string of letters would be fine as a way of instructing someone how to twist the sides, but the end result would be different and communication would have failed. This seems much more promising. Now imagine this. There’s a community of language users whose languages are each based on the cube and how to turn it, and the instructions for getting from the completed cube to particular patterns are used as words for concepts for which the patterns are metaphors. For instance, twisting the middle layer to produce horizontal stripes from a complete cube becomes a word meaning “stratified”, and turning the cube in a manner which produces a chessboard-style arrangement becomes a word meaning “chequered”.
A completed cube has a special, simple word which comes to mean “clean” or “perfect”. Nobody from the twelve communities has ever seen the cubes of the others, but their languages use the same words. These words will fail to communicate for quite some time, but the set-up is quite artificial and closely resembles Wittgenstein’s Private Language Argument (PLA).
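The letter notation for cube moves mentioned above can be treated as a miniature formal language. A minimal tokeniser for it might look like this in Python (assuming Singmaster-style tokens: a face letter, an optional apostrophe for anticlockwise, an optional 2 for a half turn):

```python
import re

def parse_moves(seq):
    """Split a move sequence like "F U R' D2" into individual tokens."""
    tokens = re.findall(r"[FBUDLR](?:'|2)?", seq)
    # Anything the pattern didn't consume is not part of the notation.
    if ''.join(tokens) != seq.replace(' ', ''):
        raise ValueError('unrecognised notation: ' + seq)
    return tokens
```

Translating between this and a notation which spells out “clockwise” and “anticlockwise” is then just token-by-token substitution, which is why that kind of translation is trivially easy.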

Wittgenstein often wrote philosophy in quite an aphoristic way. One of the things he asked us to imagine is that we each carry a matchbox containing a beetle which we never show anyone else. For all anyone knows, a given matchbox could be empty, and when its owner says “beetle”, they may not be referring to anything. If we imagine twelve communities, each with a differently arranged cube, it becomes easier to understand from an outsider’s perspective that “doing the Rubik’s cube” means something both different and the same for each group. It differs from the beetle-in-a-matchbox situation because everyone in a particular social group can see everyone else’s cubes, so it isn’t the same as a private language. Wittgenstein’s argument is that an essentially private word, one which could not be defined in terms of other words, cannot mean anything, because there’s no way to distinguish between its seeming correctly applied and its actually being correct. I also suspect that Wittgenstein is rather too much of a logical positivist for my tastes, something which oddly I haven’t seen anyone else say. That is, he holds that meaningful statements have to be axiomatic, logically derived or verifiable by the senses, and in the philosophy of mind that would make him a behaviourist, which involves the denial of all purely subjective mental states. That said, he did say useful things, the PLA is not just about logical positivism, and it may not even apply to our dozen secret Rubik’s cube communities.

Wittgenstein also said that if a lion could speak, we could not understand him. If you hear a conversation between two people about a soap opera you’ve never come across, you might hear them referring to people like Vera and Ena as if they were real people. I used to have aunts called Ena and Vera. As a child, if I’d never seen ‘Coronation Street’, I might have heard some people on a bus going on about what was happening between Vera Duckworth and Ena Sharples and wonder why I’d never heard of any of that going on between my aunts (and I must admit right now that I’m curious about any story lines which might have involved both of them but I can’t remember). I wouldn’t understand the conversation, but I might think I could. Something even further removed from my experience would be talk of the “offside rule” and the “five-yard line”, which I think is what they call certain things in soccer but I have no idea what they are and I couldn’t participate in such a conversation. Or could I? Is there a way of manipulating talk about those things which means I could fake it? If I could fake that, are there whole fields of discourse which are fake? But leaving that for the time being, the more different someone’s world is from yours, the harder it is to understand what they’re saying, and this seems to be what Wittgenstein means about the lion. It’s been said that the apparent difficulty some non-neurotypical people have in empathising is not what it seems. The process of empathising seems to involve the faculty of placing oneself mentally in the other person’s position and imagining what it’s like for them, and the idea is that a non-neurotypical person doesn’t have difficulty in doing that, but once they’ve done it they’re not similar enough to the other person to succeed in imagining them accurately. It isn’t because they don’t go through the same process as anyone else. 
I personally tend to think being on the “spectrum” is more about salience than the primary absence of a theory of (other) mind(s), but this could lead to such circumstances. In such a situation, you can imagine someone saying “I want to eliminate world hunger” and setting about it by trying to wipe out all animal and fungal life on the planet. The initial statement, “I want to eliminate world hunger”, is in English, but that doesn’t help most people to understand its full meaning. If everything about that person’s mental world is sufficiently different from our own supposedly shared one, the fact that they were speaking English wouldn’t even matter, and in a sense it would be untranslatable. But the reason is that it would take too long to outline their assumptions and views for it to be practical. Given enough time, it could be done.

Non-Cantorian set theory was a response to Russell’s Paradox: the question of whether the set of all sets which are not members of themselves is a member of itself. This paradox led to older notions of set theory being thrown out, or at least placed in question, and a new set of axioms arose in response, aiming to avoid it. All are expressed using predicate and propositional calculus notation, and most are quite easy to translate into English. My grasp of maths is rather weak and also patchy. I noticed, for example, that I could often understand the content of the first and final year BSc maths syllabus at my university but not the second year. Nevertheless, I don’t have a huge problem understanding Zermelo-Fraenkel set theory. Here’s a fairly easily translatable axiom from that system, known as the Axiom of Extensionality:

∀x∀y[∀z(z∈x<=>z∈y)=>x=y]

That is, “for any x, for any y: if, for any z, z is a member of x if and only if z is a member of y, then x is equal to y”. This bare-bones “translation” of the above sequent is of course rather opaque, but it can be disentangled and simplified to read “two sets are equal if they have the same members”. The next few axioms are similarly translatable until one reaches the sixth, the Axiom of Replacement:

∀A∀w1∀w2…∀wn[∀x(x∈A=>∃!yφ)=>∃B∀x(x∈A=>∃y(y∈B∧φ))]

This is difficult even to type – I had to resort to using HTML directly to write the above line. It says that the image of a set under any definable function also falls inside a set. That isn’t an immediately clear thing to say to most non-mathematicians. The above is also an axiom schema rather than a single axiom: it’s a template, stated in a metalanguage about the language of the axioms themselves, which stands for infinitely many axioms, one for each formula φ. It also occurs to me that there might be an issue with the use of the natural numbers in the subscripts, because Russell’s Paradox is itself applicable to the foundations of arithmetic, so the schema could presumably be profitably examined for consistency with Peano’s axioms of number theory. “∀w1∀w2…∀wn” also stands for arbitrarily many parameters, so in practical terms the schema is inexpressible in full unless you just say something like “and so on, forever” or “ad infinitum”. This kind of thing takes most people into a realm where English, and probably most natural languages, are inadequate to describe something which is nevertheless not antilanguage. It isn’t anybody’s fault that this can’t be expressed clearly in English, as far as I can tell.
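Loosely – and this is only an analogy, not set theory proper – both axioms have echoes in programming languages with set types. A sketch in Python:

```python
# Extensionality: a set is determined by its members alone, not by
# how it was written down or constructed.
a = frozenset([1, 2, 3])
b = frozenset([3, 2, 1, 1])   # different order, a duplicate - same members
assert a == b

# Replacement, very roughly: the image of a set under a definable
# function is itself a set.
A = frozenset([1, 2, 3])
image = frozenset(x * x for x in A)
assert image == frozenset([1, 4, 9])
```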

Another example of this might be APL – “A Programming Language”. Like my other favourite programming language, FORTH, APL has been described as a “write-only programming language”. That is, it has the reputation of being fairly easy to write but impossible to understand once written. I disagree with this assessment of FORTH because giving words names which make sense and inserting comments, as usual with coding, will lead to code in FORTH making sense to other people. For instance, “: CHARSET 127 32 DO I EMIT LOOP ;” has a series of English words in it, the first just being the label for what you’re going to call that word, which could therefore be named something clearer like “ALLTHECHARACTERSINORDER”. APL, though, is not the same because it uses symbols rather than letters and is very pithy.
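For comparison, here’s a rough Python rendering of that FORTH word (a FORTH DO loop excludes its limit, so the index I runs from 32 to 126):

```python
def charset():
    """Emit the printable ASCII characters, as
    ": CHARSET 127 32 DO I EMIT LOOP ;" does - the loop index runs
    from 32 up to, but not including, 127."""
    return ''.join(chr(i) for i in range(32, 127))
```

The result starts with a space and ends with a tilde – 95 characters in all.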

(~R∊R∘.×R)/R←1↓⍳1000

will find all prime numbers lower than a thousand. It makes sense if you know APL but wouldn’t be easy to express in English.
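A paraphrase in Python shows what the APL is doing, reading right to left: build the integers from 2 upwards, form their multiplication table with the outer product, and keep only the numbers which don’t appear in it (this is my own line-by-line rendering, not anything standard):

```python
def primes_up_to(n):
    # R <- 1 drop iota n : drop 1 from the integers 1..n, leaving 2..n
    R = range(2, n + 1)
    # R outer-product-times R : every pairwise product, i.e. every composite in range
    table = {a * b for a in R for b in R}
    # compress R by not-member-of-table : keep the elements with no such factorisation
    return [r for r in R if r not in table]
```

For n = 1000 this yields the 168 primes below a thousand, just as the APL one-liner does.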

Most of the time, the problem with rendering these in English or most other natural languages is that they take a lot longer to express when translated. Whereas I’ve described what the above does in APL, I haven’t set out the algorithm in words, because it would be much longer and the question of maintaining comprehension arises because of attention span. This feels like a bit of a cheat to me, because the weakness is to do with something which could be extended with practice, at which point the sequents would be understood. Untranslatability to me would mean a language which simply cannot be translated no matter what, and to illustrate this I can finally get round to talking about the likes of saudade and sisu.

Saudade is a Portuguese word which is often said to be untranslatable, although it can be glossed roughly as “longing”, “nostalgia” or “missing”. I don’t speak Portuguese, but the words used in English are insufficient because they don’t express the strength of feeling involved. I wonder if it expresses first-stage bereavement, where there’s denial that something or someone is gone for good. Welsh has a similar word, “hiraeth”, and German has “Sehnsucht”, which doesn’t strike me as untranslatable, although I find myself thinking the word in German rather than English when I try to translate it in my head, so maybe it is. This might mean it can’t be translated into English but can be into other languages which do have the same concept.

Sisu is a Finnish word meaning something like “steadfastness” or “perseverance”, or perhaps “foolish bravado”. I’ve only ever puttered around in the foothills of Finnish so I can’t comment much on this. Finnish also has a word for Schadenfreude: vahingonilo. When coming across words like this, it can be easy to be hypnotised by the pride someone might have in their culture or language, which stops one from being able to think of an equivalent word. Even so, sisu seems to me to describe the quality one might need to succeed in giving birth vaginally, or perhaps to push through the wall during a marathon, but maybe I’m wrong.

A recent popular word of this kind is the Danish hygge, which I perceive as being a synonym for Gemütlichkeit – a kind of homely cosiness. Other words claimed to be untranslatable include mångata, sobremesa, toska and iktsuarpok – Cynthia’s reflection on the water which looks like a street heading towards her (Swedish), the convivial feeling after a meal (Castilian), gloominess/ennui/lugubriousness (Russian), and waiting impatiently for something to turn up (Inuit). Considering the first, the idea of multiple reflections of objects in the night sky of Titan on a methane ocean would extend this meaning, perhaps allowing for a whole series of roads to different moons and planets, and I could perhaps invent a word for that, but I’ve already been able to describe what it would mean in English. Iktsuarpok is a particularly useful word in these days of constant deliveries of stuff we’ve bought online. Could there be an entire language consisting of such words, though?

Jorge Luis Borges once wrote a story called ‘Tlön, Uqbar, Orbis Tertius’ which is pretty amazing, but rather than go into that now, I want to talk about just one aspect of the story, which is disconcertingly vertiginous and rather like an earworm. The nations of Tlön, which appears to be imaginary in the story, are idealist in the sense that for them the world is not a collection of objects but a succession of separate, dissimilar acts. Thus their language is based on verbs rather than nouns, a sample phrase being “hlör u fang axaxaxas mlö” – “upward behind the onstreaming it mooned”, or “the moon rose above the river” (Tlön isn’t Earth). In the Northern Hemisphere, though, the languages are based on a different principle: that of the monosyllabic adjective. Thus the moon is described as “pale-orange-of-the-sky” or “round-airy-light-on-dark” depending on the impression given. Two different sensory impressions can be mixed, such as the cry of a bird at sunset. Hence there is a vast number of nouns, including direct translations of all those found in English and Spanish, but none of the speakers gives them any credence, as they are transient impressions. Both of these types of language, particularly the second, correspond closely to the idea of untranslatability, although there would be times when coincidentally translatable words would turn up in the languages, and it would be alien to the spirit of the story to exclude such words. Incidentally there’s plenty more in the story than that, but I don’t want to veer off-topic.

Dolphins have been said to transmit sound pictures of their perceptions in order to communicate. Although I find it hard to credit that anyone would be able to demonstrate that this is in fact really happening, it’s still an interesting idea, and given that a picture is worth a thousand words, would seem to be untranslatable. A similar idea was pursued by the poet Les Murray in his ‘Bat’s Ultrasound’:

Bat’s Ultrasound

Sleeping-bagged in a duplex wing
with fleas, in rock-cleft or building
radar bats are darkness in miniature,
their whole face one tufty crinkled ear
with weak eyes, fine teeth bared to sing.

Few are vampires. None flit through the mirror.
Where they flutter at evening’s a queer
tonal hunting zone above highest C.
Insect prey at the peak of our hearing
drone re to their detailing tee:

ah, eyrie-ire; aero hour, eh?
O’er our ur-area (our era aye
ere your raw row) we air our array
err, yaw, row wry—aura our orrery,
our eerie ü our ray, our arrow.
A rare ear, our aery Yahweh.

Then again, a series of pictures could just become like a pictographic script with stylised images, although this wouldn’t necessarily impose syntax on it. It might get quite difficult to express certain abstract concepts in it unless a ‘Darmok’-like approach were taken, with abbreviated references to well-known myths and fables. Even in English this could become hard to make sense of. One might say “The Fox and the Grapes”, referring to Aesop’s fable, whence the idea of sour grapes originates; there’s also the concept of “sweet lemons”, a reversal of the same idea which, however, has no associated fable as far as I’m aware. Hence one could proceed to refer to “the hound and the lemon” for a situation in which having something worthless is subjectively perceived to be of greater value in order to conceal its cost from oneself.

To conclude, then, it does in fact seem that several kinds of practically untranslatable language are possible. There could be languages which refer primarily to experiences outside those of most humans, as with the other Rubik’s cube orbits. A species whose dominant sense was smell and whose vision was poor might use a fairly untranslatable language, because for example it would have “insmells” rather than “insights” and wouldn’t “regard” anything so much as “scent” it, and beyond that would have a whole world of sensation as rich as our visual one but based entirely on odour rather than light. Or it might have a magnetic sense, which could be even harder to relate to. There could be languages which simply take too long to translate for the human attention span, so they could be translated in principle but not in practice. There could also be languages which have developed metaphors and formed words and phrases as depicted in ‘Darmok’, where the dependence on shared narrative culture is so strong that it’s impossible to make sense of them. Or there could be languages which combine two or more of these things.

One thing I find quite unsatisfactory is that I haven’t been able to articulate clearly what I would think of as the ultimate case of an untranslatable language – one which does the same job as natural languages as we know them, but is based on entirely different principles. The closest to this is the putative delphinese, using sound pictures, but I wonder what else is out there and how it could be made sense of, if at all. Or is it just that the relative obscurity, to the Anglophone (or even Pirahãphone or Ubykhophone) mind, of the realms in which these languages would operate makes them inconceivable to us?

Racism In Politics

Two recent affairs in the news revolve around two different kinds of racism, and prompt a few thoughts on how to respond to them. These are, of course, Trump’s recent racism against members of Congress and the accusations of anti-semitism in the “U”K Labour Party.

Donald Trump, as I’m sure you know, recently said of four Representatives, all of them women of colour, that they should go back where they came from. All four are American citizens, although one is originally from Somalia. He later compounded this unacceptable behaviour by tweeting “I don’t have a racist bone in my body!”, and a crowd chanted “send her back!” about Ilhan Omar at a rally, to which Trump was seen to nod, I presume in agreement. He stood there for fifteen seconds and didn’t condemn them. Compare the Republican John McCain, who interrupted a speaker describing Barack Obama as an Arab, took away her microphone and condemned the statement.

This just is racism. There’s no argument about the definition here, no ambiguity, and it’s not really even an evaluative statement to call it that. In the past people have been proudly and “scientifically” racist, and would have agreed with that epithet – it isn’t always used as a pejorative term, although clearly most people would see it as one. Trump said later that he disagreed with the chant, but of course “he would say that, wouldn’t he?” is the obvious response, whether one agrees with him or not. I’m not sure I agree with Omar in describing him as fascist, because other words do just as well and don’t have the same history, although I’m open to that interpretation. Calling this behaviour racist, though, is clearly just a neutral, objective description, and there isn’t really any arguing with it.

That leads me to wonder about the BBC, which is not calling it racist. The BBC is supposed to be an unbiased, neutral institution, and it seems to me that not calling this racist is itself a form of bias. The coverage I’ve heard doesn’t paint it positively, but they have not come out and stated unequivocally that it’s racist, and this makes me wonder. It also made me curious about how the BBC described apartheid in South Africa, segregation in the US and the behaviour of the Nazi Party at the time. There’s a risk inherent in exploring the last, because comparison to the Nazis is clichéd and lays one open to criticism, but I can’t recall the BBC describing apartheid South Africa as a racist régime. I think they should have. It isn’t okay to be silent about something like this – not just in the sense of not reporting it, but also in the sense of not describing it accurately. It’s a little similar to false balance.

Furthermore, it’s even more depressing to note that only four Republicans censured Trump for his racism.

I want to turn now to “I don’t have a racist bone in my body!”. I am racist, of course. That doesn’t mean I’m pro-racist so much as that I’m aware of racism in myself and of the need to strive to reduce it. Racism is a bit like sin, as well as actually being a sin: the attitude that “all have sinned and fallen short of the glory of God” is a healthy one, for two reasons. It means one isn’t worse than anyone else, and it means that residual wrongdoing is more likely to continue to be addressed if one suspects oneself of being racist. The point at which one declares oneself non-racist rather than anti-racist is the point at which one’s racism will never reduce. Many people see this as insulting, but it isn’t so much an insult as a recognition that nobody’s perfect – and not in the sense of shrugging one’s shoulders and planning to continue negligently in the same way. It means nobody’s perfect and therefore everyone should work on improving their thoughts and behaviour to be fairer and more compassionate. It seems to me in particular that a white person claiming not to be racist is like a man claiming to be a feminist.

I’m pretty sure I’ve covered this before, but it probably bears repeating. There’s a fairly widespread concept of racism which asserts that non-whites can’t be racist because of structural issues with society, such as the gross inequality caused by white people’s plundering of the rest of the world during the imperial age and the continuation of that policy by other means today. I’m not making that claim, but it’s clear that white racism is more consequential than most other forms of the prejudice, and as far as a white person is concerned racism is in any case universal, whether or not that’s because they’re white. The other issue is whether racism occurs among non-whites. I think it’s pretty clear that it does, although it very often seems to be directed against other ethnic minorities, at least in white-majority countries.

Looking at racism again in a non-evaluative sense, as something which simply exists as a phenomenon rather than something to be condemned (although obviously I do condemn it), it is, like most or all other forms of prejudice, an error of inductive inference. Inductive reasoning is the use of a number of examples to draw a tentative general conclusion. For instance, “all swans are white” generally worked in Europe and North America until the people living there learnt that there were black swans. It’s always logically possible that a conclusion drawn by inductive reasoning will be proven false, but inductive inference is necessary in order to function at all, so we continue to use it. The situation is somewhat complicated by the fact that there are false propositions about othered ethnicities which have no evidence supporting them at all, but we do generally draw inferences from imperfect information, and that means we will always be racist unless we’re drastically neurologically compromised. I suspect, for example, that someone with advanced dementia or severe learning difficulties would not be racist, because they’re simply not making persistent inferences at all. Hence we just are racist, and white people in white-majority countries are extra racist on top of that due to structural and institutional racism. For instance, we might not expect someone in a position of authority to be black because of the various social factors preventing Black people from reaching such positions, but that assumption is nonetheless racist, and important to notice in oneself.

Moving on: just before eliciting the racist chanting, Trump accused Omar of being anti-semitic. I don’t know the details of the accusation, but it brings me to the second concern which is in and out of the news a lot: accusations of anti-semitism in the Labour Party. I do believe it’s possible that there’s a particular form of anti-semitism among Labour Party members if, for example, they believe in the idea of a Jewish conspiracy supporting global capitalism, and this is of course completely unacceptable. The reason I believe this is possible is that a very large number of people joined the Party in the last few years, and it seems to me probable that at least a few of those would be conspiracy theorists of one kind or another. We don’t want conspiracy theories, racist or not, because they distract from deeper problems. But I don’t want to get into the question of whether anti-Zionism is automatically anti-semitic, because there’s a way of broadening the issue which is likely to make it more neutral. Anti-semitism is of course a form of racism. At the same time, governments often pursue policies which violate civil liberties, and Israel is one such country, as is Egypt, which I understand also restricts movement from the Gaza Strip into its territory. So there are two issues here: possible racism in political parties, and support for oppressive policies and actions by other countries which are considered allies of the United Kingdom. Consequently I have a proposal, and in fact I’m pretty serious about it and would like to pursue it, as a possibly more neutral response to accusations of anti-semitism.

There would seem to be no good reason not to undertake an independent investigation into racism, and into dealings with oppressive allies of Britain, in all major political parties in this country: the Lib Dems, the Tories, the SNP, the Greens, Sinn Féin, the DUP, anything you like. This would include anti-semitism in the Labour Party and, although Muslims are not an ethnicity, Islamophobia in Labour and the Tories, and, well, whatever counts as racist. It would avoid the tactic of appearing to accuse others of something merely as a distraction from one’s own wrongdoing, and it would in any case tackle serious issues across the political spectrum which need attention regardless of anti-semitism. We shouldn’t just be concentrating on one kind of racism. Then there are the dealings HM Government has with, in particular, Sa`udi Arabia, a notorious violator of civil liberties and also a highly anti-semitic state: pupils in Sa`udi schools are taught that the ‘Protocols Of The Elders Of Zion’ is a genuine Jewish document, and the government officially believes in a Jewish conspiracy to take over the world. If we’re going to condemn the Labour Party for anti-semitism, which doesn’t exist there to anything like that extent, does it not also make sense to condemn the Conservative Party for promoting trade agreements with an openly anti-semitic government like that of Sa`udi Arabia? It’s simple consistency.

That’s all I’ve got to say today, really: that there should be a country-wide investigation into all forms of racism in all major British political parties and those in the Six Counties, and also into their dealings with oppressive régimes, including anti-semitic ones; and that the BBC should call a spade a spade and describe Trump and the Republican Party as racist, because that’s a matter of fact and not an example of bias or evaluation.