I’m not keen on the idea that a person has internal conditions which are problematic. I prefer the social model of disability, which holds that society disables people. For example, most people in the West eat dairy, and lactose intolerance is therefore seen as a disorder, but it wouldn’t exist if there were no sources of lactose in the diet. Also, there’s a strong tendency for disabilities and disorders to become part of one’s identity, and this is not helpful. That said, I have two official disorders, which could be shoehorned into a psychiatric diagnosis should one choose to do so. One of them is, very obviously, gender incongruence, which was diagnosed sometime early last decade by the NHS. But I also have another diagnosis which is much older, from about 1975 if I recall correctly, and that’s what’s now known as ADHD but back then was called “hyperactivity”. Because I was understood to be a boy back then, I got this diagnosis much more easily than I would’ve done otherwise, and I think my ADHD shows itself very clearly in this blog. I haven’t been very closely focussed on it much of the time, and of course that may be part of it.

There’s probably no doubt that I’m neurodiverse, and I would frankly be astonished if I couldn’t easily be diagnosed as depressive, but I don’t think there’s any good reason to pursue such a diagnosis, as it wouldn’t be useful and I don’t consider my depressiveness to be essentially problematic. There’s a whole plethora of other things going on, some more nebulous than others, including probable dyspraxia, possible Geschwind Syndrome and a weirdly split form of what might be thought of as Asperger’s, which again suffers from being underdiagnosed in women and from manifesting differently in us. The ASD aspect of my personality is, however, odd, and not officially diagnosed, because in some ways I’m a classic aspie but in others I’m almost the opposite. I think of myself as falling under a wastebasket diagnosis which may or may not exist, but which I would call “neurodiversity not otherwise specified”. I do not consider myself in any way disabled and I place any problems I might encounter outside myself. This is partly because epistemologically I am more externalist than most people: concepts are not mental but objective entities which exist independently of being conceived of, in spite of the etymological link between those two words.

But none of this so far has been particularly personal, so I shall now remedy that and talk about my so-called “hyperactivity”. My experience of my first primary school was that it was under-stimulating. Nothing on the overt curriculum was new to me, and for a while I used to hope that teachers would introduce something I didn’t already know about, but it never happened. I found this very disappointing, and came to regard school as a distraction from serious academic study. This was okay because I could still pursue my own hobbies in my own time and got fairly far with those. It’s notable that when we later came to ensure that our children were aware that school attendance was optional, and they opted not to go, the other families with whom we participated in education had a strong tendency to perceive school as involving overachievement rather than underachievement, whereas my initial expectation of our children was that they ought to be able to knock off a few IGCSEs by the time they were seven or so. However, I don’t believe in hothousing and that didn’t happen. Bearing in mind the significance of all this for a child at primary school, I would say that a hyperactive child is frequently simply bored, and that almost any child – though not me – needs physical activity to stop them moving around at other times in a way the staff deem problematic. I also think that, like people with many other pathologised neural differences, hyperactive people are likely to have filled some kind of social niche which is currently not recognised in most post-industrial societies, or for that matter industrial ones, and no, I don’t know what that is. I’ve also deliberately used the inaccurate term “hyperactive” here because one of the D’s in ADD and ADHD stands for disorder, and I don’t consider myself disordered in that respect. However, of course not all people who can be fitted into this diagnosis are hyperactive, and I definitely wasn’t.

After my diagnosis, I was on medication for two years. I don’t know what it was, except that I’m aware it was neither Ritalin nor anything like it. It was a sedative. After a year or so, I began to feel uneasy and tended to get depersonalised a lot, so it was discontinued. It’s been said that sedatives are the opposite of what someone with ADD needs in order to conform, because they sedate the very faculty which would dampen down their activity, help them to extend their attention span and the like. To that end, I sometimes wonder if the fact that I find lavender oil stimulating – actually it makes me irritable – and rosemary sedating is linked to this effect. Likewise, and this is of course just anecdotal but also phenomenological, the colour red is a low-energy, downer of a colour to me and blue is high-energy and cheerful, and I strongly suspect this has something to do with this aspect of my neurodiversity.

One of the projections made at the time of my diagnosis and afterward was that food additives worsened the condition. I find this idea rather akin to that other idea, that vaccines cause autism. It isn’t that it’s right or wrong so much as that it frames ADD as problematic, and therefore as having an aetiology like a disorder. Having said that, my experience as a clinician strongly suggests that the likes of coal tar dyes and in particular aspartame are quite harmful, and the liver failure our son experienced is attributed by the orthodox medical profession to the formation of immune complexes between self antigens and erythromycin, which is similar to a food dye – hence the word “ἐρυθρός”, meaning “red”, in its name. There’s a very strong tendency for suspicion of aspartame in particular to be stigmatised, but the people who do that cannot have had my experience of many patients whose lifestyles and diets appeared to be flawless apart from the presence of aspartame in their food, and whose health problems disappeared once they eliminated it and did absolutely nothing else.

From a Marxist perspective, the presence of colours and preservatives in food and beverages is substantially about the alienation of use value from exchange value. Under capitalism, a commodity has two different values. One is its use value, its actual usefulness: for example, an apple is nutritious and enjoyable. It also has exchange value, and maximising this often requires it to have, for example, a longer shelf life (preservatives) or more appeal to the senses (food dyes). These often reduce the use value of the commodity, and this is a major reason why capitalism is irrational and needs to be superseded. In the case of food, it may become less nutritious due to the presence of additives and the fact that it can be stored for longer. Therefore, whether or not the likes of azo or coal tar dyes are relevant to ADD, they shouldn’t exist. There are plenty of directly biochemical alternatives such as anthocyanins, chlorophyll and carotenoids. Note that I’m not making a distinction between the natural and unnatural here, as I consider that dichotomy spurious.

One practically all-pervading experience I had during secondary school might be called “the paradox of effort”. My school had a monthly effort report system: if you were deemed to be trying harder than average you got a plus, if you were working about average in their judgement you got a zero, and if they considered you were slacking you got a minus. To me this felt like a pit of despair, but apart from that, the months when things came easily were when I got good reports, and I got poor ones when I felt I was striving. A further problem, connected to dyspraxia I think, was that I got a minus in metalwork in the first month, and getting a bad initial report was practically unheard of. It was also true that metalwork seemed too stereotypically masculine to me and I didn’t like it for that reason. That gave me a reputation for laziness. I share this paradox, though, in case anyone else has had a similar experience. I don’t know if I’m lazy or not. I think I am to some extent, but some of that view is internalised from this rather formative period in my life.

It’s a platitude, but it’s probably worth saying that in a way having internet access is a bit like an alcoholic having a kitchen tap which dispenses alcohol, if you want to pathologise ADHD. This form of distraction is so much more common nowadays than it used to be, and I think it’s led to a further shortening of my attention span. You can see some of this in the way this blog so often tends to flit around and ramble off-topic, although that’s probably partly down to my compulsion to write. I also tend to write things down quickly for fear of forgetting them.

ADHD also has comorbidities, one of which is schizophrenia. Others are generalised anxiety disorder, depression, intermittent explosive disorder, dyslexia, dyscalculia, insomnia, restless legs, substance abuse, phobias, psychopathy and oppositional defiant disorder. Looking at that list, I can see some of them as resulting from difficulty in fitting in, making progress or otherwise being successful. As applied to myself: I have not one jot of dyslexia or dyscalculia, but used to suffer very badly from insomnia. Although I’m not myself psychopathic (and yes, I do know that’s a deprecated term), my father is, and the genetic element that exists in personality disorders has presumably led to me having a disordered personality, but the specifics of antisocial personality disorder don’t apply to me at all. My father also has intermittent explosive disorder. I do have restless legs, but I’m practically teetotal and a non-smoker, and I now have two cups of coffee a day, having once gone without for five years. That, actually, may be a form of self-medication, because those five years seemed to involve endless withdrawal which I hoped for a long time would come to an end but just didn’t, and I ultimately decided that even if caffeine did shorten my life it wasn’t worth not being on it, so I just went back to it.

Getting back to gender and neurodiversity, probably the worst gender dysphoria I feel is over the possibility that I may be on the autistic spectrum. If I thought about it too much I would probably feel like ending my life, not because autism is a problem – it absolutely isn’t – but because of Simon Baron-Cohen’s “extreme male brain” theory. This particular line of thought doesn’t really belong here though. The same does not apply to ADHD in my mind. I’ve never perceived ADHD, considered as internal, as having anything to do with gender. However, it’s also true that it’s underdiagnosed in women and presents differently, just as being on the autistic spectrum tends to. ADHD is just as common in women as in men, but tends to be underdiagnosed due to the erroneous and probably structurally sexist attitude that it’s less common in girls than boys. Regarding schooling, girls are more likely than boys to do more homework and ask their parents for help to compensate. In my case all this is complicated by having been misgendered in my childhood. Teachers are less likely to notice girls who are either inattentive or hyperactive (two different ways in which ADHD presents itself in children) than they are boys, and since I was perceived as a boy, it’s likely that this would’ve been picked up more readily in me. In fact it wasn’t, because there were forty-six pupils in my primary school class, and my mother noticed something instead. This also means that women are more likely to proceed through their lives without being able to identify this feature and the disabling influence society may have on “people like them”, and it’s therefore likely to be more of a revelation to them when they realise it applies to them. This doesn’t apply to me because I’ve known about it since I was a child, although I don’t often think about it nowadays.
It might, though, also help if one is diagnosed at a point in one’s life where one has a certain degree of productive self-reflection. Whether this applied to me as an eight-year-old, I don’t know. I should probably say here, because it doesn’t fit in anywhere else, that it can be expensive being ADHD, because I can never find anything and am very messy (although I also believe that society has got it wrong in where it positions optimum tidiness, but that’s another story).

I haven’t really mentioned the criteria for diagnosis yet, because this is more about my personal experience. It’s also the case that what I can attribute clearly to ADHD in my life and experience may be obfuscated by other stuff going on in my head, such as the weird split aspie/”Williamsoid” state of my emotional life and empathy. As I said, what I have is neurodiversity not otherwise specified, which includes ADHD-like features striking enough to be noticed and to fit into that diagnosis, rather than just simple ADHD. Then again, textbook cases of most conditions are more the exception than the rule, and the real mystery is how anyone’s condition resembles anyone else’s at all, so maybe I’m not unusual in that respect. But for the sake of completeness, this is ADHD according to the medics:

ADHD has two main aspects, and “sufferers” tend to fall into one or the other (who’s inflicting the suffering, though?): inattention, and hyperactivity along with impulsivity. To be diagnosed, one must have at least six of the following signs as a child, or five as an adolescent (because it’s said to “improve” with age): forgetfulness; distractibility; losing items important for daily activity (in my case this tended to be my glasses or PE kit); trouble organising things; often failing to pay attention to school work (I once answered the question “which is the biggest whale?” with “the blue whale is the blue whale”, and said that Elizabeth of England wasn’t a very good “king” (although I tend to mix gendered nouns and pronouns up anyway, so this may not be a sign)); difficulty in maintaining attention on tasks (not so much a problem as a child as it is now); and failure to finish tasks (this drives Sarada round the bend actually, because it extends as far as not finishing jars of peanut butter and the like). But in my defence, at no time during my childhood did I ever mislay my mobile phone!

On the impulsiveness/hyperactivity side, which influenced me less but was there to some extent, there again need to be at least six signs as a child or five as an adolescent: acting as if driven by a motor (this happens when I’m tired but not otherwise – I’m pretty torpid a lot of the time, to be honest); excessive talking (definitely, and more so as a child – I used to be separated from the class for talking too much); answering before a question is completed (yes – sounds useful for ‘University Challenge’); trouble with turn-taking (no); being unable to participate in leisure activities quietly (not quite – more unable to be inactive, but fine with being quiet); fidgeting (yes – apparently I have a genetic propensity to move around a lot when I sleep as well); running about or climbing a lot (no, although I did walk a lot – I don’t think this is significant); and tending to interrupt a lot (no, but this is also part of a typically masculine use of language, so it probably bears closer examination). As an aside, it’s notable that although these signs are supposed to be present in different settings, such as at home as well as school, a lot of them seem to be very firmly to do with how a child behaves in a traditional school setting. That strikes me as potentially irrelevant, and more a problem with schooling than anything else, but it does at least mean that problems which would otherwise only be encountered as an ADHD adult might be detected early, because of the kind of educational system this society has been saddled with. There are several criteria outside the specific signs.
The child must have exhibited these before the age of twelve; the signs must be explained better by this diagnosis than by another (this is boilerplate – it’s in practically every set of criteria for psychiatric conditions); they must, as I’ve said, be present in more than one setting (this takes some of the issue of schooling being a dysfunctional environment out of the picture); and the symptoms must interfere with school, work or social function.

As I’ve already said, there are two poles here and a grey area in the middle, between hyperactivity/impulsivity on the one side and inattentiveness on the other, and I’m more inattentive than impulsive. I could probably do with being more impulsive in fact.

As an adult, the NHS observes, ADHD can make it difficult to maintain friendships or romantic relationships, can lead to poor driving (I’m actually the opposite, although I have never had enough money to take a driving test) and tends to mean one underachieves at paid work or in education (which explains never having had a driving test!). I have in fact underachieved in education, partly because I spread myself too thinly, which is indeed to do with ADHD, and partly just anyway, although it may not be obvious because I have postgraduate qualifications. In my case, however, this is also substantially due to internalised transphobia and toxic masculinity on the part of others at my university department.

That, then, is a rough sketch of my take on my ADHD, and I hope it helps. In keeping with the poor planning involved, this post will now end rather abruptly.

Anti-Asian Racism

It would be patronising and inappropriate for me to presume to speak on behalf of Asian people in this blog post. Furthermore, it would be unsurprising were I to make unwittingly racist statements or reveal my White privilege in this post, and I positively invite anyone who perceives this to call me out on it. Even so, it would be foolish of me to pretend that this isn’t going on, even though I’m neither a victim nor have noticed it locally.

First of all, I’m going to have to define my terms, and in itself I think this reveals something about racism. The word “Asian” refers to different groups in British and American English, and both tend to refer to a narrower group of people than the literal term does. Moreover, there’s a geographical restriction in the very idea of Asia which may reflect racism, but I’ll go into that later. In British English, “Asian” often refers to people of South Asian origin, including the countries of Pakistan, India, Sri Lanka, Nepal, Bhutan, the Maldives and Bangladesh. By contrast, the American English term refers to people from the Koreas, China, Taiwan, Mongolia, Tibet, Japan, the Philippines and Southeast Asia. Both of these are quite wide-ranging and include more than a billion people, but they exclude much of the landmass of Asia itself. Note also that I say “landmass” rather than “continent”. There are many other Asian ethnicities and nations: many Jews, who could be considered to have Asian origins since they’re from the eastern Mediterranean; Arabs to some extent, although not those in the Maghreb; Turks; Iranians; Afghans; many Russians; the various Siberian peoples; and some Inuit. But most English speakers think of Asians as either South or East Asians.

Also, there’s a problem with terminology regarding the very concept of Asia. Significantly, Asia is not a continent, because Europe isn’t. The reason we think of it as one is that we think of Europe as one. My concept of Europe stretches from these islands to the Urals and the Caucasus, and there are some countries in both Europe and Asia, such as Turkey, Russia, Kazakhstan and Azerbaijan. In the prehistoric past, Europe and Asia were indeed separate, although Europe was less unified and the Indian subcontinent started off as part of Antarctica. It’s also possible to think of Afro-Eurasia as a single continent, and in fifty million years’ time there will be a single northern continent comprising North America, Eurasia, Afrika and Australia, although by then East Afrika will have become a separate continent in the Indian Ocean. Getting back to the present day and human geography, you may think I’m playing mind games here, but there is an element of racial segregation in the separation of Europe. It is important to recognise the historical and cultural significance of this part of the world being the bit that plundered and oppressed much of the rest of it, but at the same time I suspect that the idea of Whiteness is supported by placing two borders next to Europe, one running through the Med and the other along the Urals. Although I want to focus on Asians here, I also want to point out that we Europeans tend to have a mindset where we regard Afrika as south of the Sahara, which I acknowledge is a physical barrier, and Europe as north of the Mediterranean. This ignores the gradual blending of skin tone which tends to characterise the people of North Afrika, and makes us imagine that Black people are a single race apart from White Europeans, rather than a huge variety of ethnicities, some of which include darker and lighter skinned people, or even whole groups of people who don’t see themselves as either, such as the Mestiços of Cabo Verde.
But as I say, my focus is on Asia today, and I would contend that the conceptual barrier between Europe and Asia has racist overtones too. Both of these barriers are also permeable however much some people might prefer them not to be.

And this brings me to the European Union. As you probably know, I am only very reluctantly in favour of the EU, and the situation I’ve just described is a good illustration of what’s wrong with it. I’ve probably mentioned this before, but one early proponent of a European Union – and that is what he called it at the time – was Oswald Mosley. This was because he wanted there to be a White homeland. It shouldn’t be forgotten that there are Fascist elements to the idea of the EU, particularly for many of the racist political parties operating within it. The EU is loosely speaking Fascist in the sense that it’s a binding together of a large number of White-majority nations which also happen to be very rich and powerful, and the symbolism of the fasces is that there is strength through union, which can be used to bludgeon other, more vulnerable nations – and to some extent this is what it does. I don’t think it would be going too far to characterise the EU as a potentially racist organisation, one of which I am nevertheless in favour, but with open eyes and fully acknowledging its fascist roots. And considering the plethora of racist parties in the European Parliament and the hostility to asylum seekers and, yes, to economic migrants, I think there’s a lot of evidence to back this up.

Onto actual anti-Asian racism, though. Six books in the ‘Dr Seuss’ series have recently been discontinued, and there’s been a lot of possibly fake outrage whipped up about this. One of them, which I vaguely remember, includes a caricature of an East Asian with the usual “slitty eyes” and cone-shaped hat. I don’t think there’s really any doubt that this is deeply racist and offensive. Of course he was a man of his time, but “banning” these books isn’t any different from doing the same with the racism of Enid Blyton’s Noddy series, which was a lot worse; that took place more than twenty years ago, and the books came back in modified form. Going even further back, the original illustrations in ‘Charlie And The Chocolate Factory’ showed the Oompa-Loompas as Black people, and that was changed in the early 1970s for the same reason. It doesn’t seem to me that the relatively small number of ‘Dr Seuss’ books couldn’t be modified in a similar way rather than simply being discontinued, but it’s important to observe that the fact those racist illustrations were still in them really makes them an anachronism, in view of the fact that Roald Dahl’s work was edited around half a century ago, and it isn’t part of some ongoing “woke” project.

A major trigger for the resurgence in anti-Asian racism is of course Covid-19. It wasn’t helped when Trump referred to it as “the China virus”. Although it may have crossed over to humans as a result of the wet market in Wuhan, the problem is not a uniquely Chinese one but more one of unsustainable industrial and post-industrial societies. The recent Ebola outbreak and the advent of AIDS are both similar phenomena, caused by deforestation and the mixing of humans with other species which haven’t previously encountered each other much. The Chinese government is of course a significant offender in causing environmental damage, substantially because Westerners buy so much stuff from their country, but this is not an ethnic issue. We don’t know where the next pandemic of this kind will originate. It’s been suggested that there’s a risk from Svalbard due to climate change causing permafrost to melt and release so-called Spanish ‘flu from frozen corpses. Also, China is just one nation among many in East Asia, and unlike White Europeans and Americans, many East Asians have made a habit of using face masks when they have a respiratory infection, something which was formerly seen as a cultural characteristic but which we all do now. It would be good if we adopted this practice permanently, in the same circumstances as East Asians long have themselves.

Therefore, briefly:

  • Referring to viruses by ethnic epithets has led to active and overt racism in the past. The World Health Organisation now advises against doing this because, for instance, the name Middle East Respiratory Syndrome has had negative consequences for people from and in the Middle East. The practice links a disease, which many associate with various kinds of poor hygiene, to a particular group of people. It makes them seem defiled in some people’s minds.
  • Systemic racism still exists and whereas in a non-racist utopia (but with pandemics – strange utopia!) it might be okay to name diseases by their geographical origin, that isn’t this world as it is.
  • It was clearly a deliberate policy on Trump’s part, and others’, to stigmatise Chinese people, and by extension other East Asians will suffer because many overtly racist White people are not going to make that distinction.

I don’t want to say too much else because I can’t presume to speak on behalf of people with East Asian heritage, but I just wanted to make these points.

Six-Footed Beasts

In 1658, an 1100-page book by Edward Topsell was published, thirty years after his death, entitled ‘Topsell’s History Of Four-Footed Beasts And Serpents’. Although it’s in the tradition of bestiaries to some extent, it differs somewhat in acknowledging that a lot of its material is hearsay, and Topsell himself was, as was so often the case, a member of the clergy. It was written at a time when science was still in the process of taking shape, and it seems that for the purposes of this book he merely acknowledged that the information within it might not be well-founded, rather than going so far as to ensure that much of the gen was reliable, although he did at least attempt to confirm it in several sources.

I have myself published a “modern bestiary”, rather unimaginatively called ‘Here Be Dragons’. It has many disadvantages, among which is that the title is shared by so many books that it can’t really be located or promoted distinctively. The big problem with self-publishing is that you need to be able to market your book effectively, which is clearly why publishers exist. On a slightly related note, one would think that the reason adult ed exists in the form it does is so it can provide venues and promotion. If you have your own venue, all you need is promotion, which could also be done independently. If your venue is someone else’s, say a college’s, you may mainly need them as a means of marketing your course, and if they don’t do that it makes them kind of pointless. Taking these two things together leads me to the conclusion that the most successful self-published books are on the subject of marketing, simply because the person who wrote them knows how to do it. As for courses, perhaps the same applies if they’re not actually at a physical college. If those are not successfully marketed, it’s probably very frustrating for the tutor. I don’t know how much luck’s involved or how much one can make one’s own luck. Even so, the book is “out there” and here’s a link. Topsell clearly put a lot of work into his book, which illustrates how much the vocation of parson must have changed since then, and even since the nineteenth century, because the vicars I know definitely couldn’t fit anything like that into their lives. ‘Here Be Dragons’ took me eight months to write and has considerably fewer than eleven hundred pages, and of course very few illustrations – thereby hangs a tale, but let’s not get into that now.

Sometimes people get earworms – songs they can’t get out of their heads. I presume these can sometimes be tunes of their own making, yet to be written down or performed. I’ve mentioned hypergraphia on here before, so I too suffer from this in what might grandly be described as a literary sense. Right now I have just such an issue, so as usual I’m just going to splurge it on here. Here it is.

Firstly, here’s a bit of real-world stuff. Back in the Ordovician period, around half an aeon ago, an apparently minor phylum now called the chordates developed fins and articulated hard endoskeletons moved by muscles, and animals resembling fish first appeared. These became enormously successful and soon modified their gills to open and close their mouths, in an innovation we now call jaws. Some of them entered freshwater, a considerable feat, as the freshwater environment is surprisingly harsh. It requires some means of keeping the inside of the body saline enough, and at the right pH, for biochemical functions to remain fairly stable, and because bodies of freshwater are on the whole much smaller than the sea, food is scarcer and harder to get to. Freshwater also tends to fluctuate a lot more than the ocean, which, being huge, is not as susceptible to entropic processes such as silting up or droughts. Fish would therefore frequently find themselves sandwiched between the river or lake bed and a rapidly diminishing water level, with not a lot of oxygen in the water, so it’s unsurprising that those who could scoot along the bottom and use the oxygen in the atmosphere above did fairly well in the harshest of these environments. Hence came the earliest four-footed beasts, actually evolving shortly before they left the water. Although they were amphibious, and might be described as amphibians, even when they did make their way onto the land they were unlike today’s in a number of ways. For instance, they seem to have had heavy ribs which weighed them down in the water, relied more on lungs than on gas exchange through the skin, and were largely herbivorous, unlike today’s carnivorous amphibia. It was once suggested that they are not in fact the direct ancestors of the lissamphibia – the amphibia around now – and that the latter must’ve evolved separately, although this is no longer thought to be the case.

Amphibia are an oddly neglected vertebrate class. They tend to be thought of as wannabe reptiles, but that isn’t what they are at all. They are in fact the terrestrial vertebrate class which has been separate from the reptiles for the longest, as the reptilian grade is ancestral to the “reptiles” (which don’t strictly speaking exist, as I’ve mentioned elsewhere), birds and mammals, and split off from the amphibia at a very early stage. They rely more than most other vertebrates on exchanging oxygen and carbon dioxide through their thin skins, and use their lungs less. Many of them don’t even have lungs, including some who live on land much of the time, and those who do have them may be more interested in using them to vocalise than to breathe, such as frogs and toads. Because the way they respire means they need all of their bodies to be near the surface of their skins, they have a maximum size, found today in the rather inactive Japanese giant salamander, who is up to 1.5 metres long and can’t extract energy fast enough to do much more than lie around all day underwater. There are three living orders of amphibia: the urodela – salamanders and newts, who tend to resemble lizards and, like them, have a tendency not to keep their limbs; the gymnophiona – very worm-like, blind, solely tropical forms, who are distinctive in being possibly the only companion animals other than mammals I’d be happy to have around the house, and who also have hundreds of heart-like organs and more vertebrae than any other species; and the anura – frogs and toads, who are in my opinion remarkably humanoid in form, but much smaller. There was also an order called the allocaudata, which survived into Australopithecine times only to become extinct just before our own genus Homo appeared. These were salamander-like but had bony scales in their skin, which right now makes me wonder if they depended more on their lungs than modern amphibia do.

Amphibia are limited in a number of ways – to put that in perspective, we mammals are too, just in different ways. They are mainly carnivorous because their bodies aren’t usually large enough to house the lengthy digestive systems required to digest plants, which is in turn because they respire through their skins. They also either lay shell-less eggs in water or other fluids or give birth to live young like most mammals. There’s an alpine salamander with the longest pregnancy of any animal – she can take three years to give birth, mainly because she lives in cold conditions and her metabolism and embryology are slow. In warmer conditions the young would be born earlier than that. However, when their ancestors came out of the water for the first time, they still had to mate and lay eggs in the water, and the permeability of their skins meant they could not survive in water that’s more than slightly saline. This means that amphibia and echinoderms (such as starfish, sea urchins and sea cucumbers) can never meet, because the latter can only survive in salt water. These two facts taken together incidentally prove that the Biblical Flood cannot have happened as described in Genesis: both taxa still exist, and salinity would’ve been the same all over the world if it had.

Viviparity (birth-giving as opposed to laying eggs) has evolved a number of times independently among fish and other taxa, although in surviving mammals it only seems to have evolved once or twice – but in the ancestors of almost all species alive now. Birds are the only terrestrial vertebrates who absolutely always lay eggs. I presume this has made it easier for them to fly, and in fact the closest any bird comes to viviparity is the flightless kiwi, who lays a proportionately enormous egg from which a well-developed chick hatches. Since the common ancestor of all land vertebrates laid eggs in water, amphibia were somewhat tied to it in most cases unless they became viviparous or evolved a shelled, self-contained egg like those of birds, reptiles and egg-laying mammals. This tie to the water presumably slowed the early progress of vertebrate life on land.

As well as many of them being viviparous, many fish are partly “warm-blooded” (endothermic). This is not necessarily an advantage, as it means they need more calories to survive, and for a fish it’s almost impossible to be entirely endothermic because all of their blood must pass through their gills, where the water flowing past the blood vessels has a cooling effect. There is a possible way round this, found in penguins’ feet and human testes, where heat is exchanged between adjacent blood vessels running in opposite directions – countercurrent exchange – to maintain a temperature different from the rest of the body, but this seems hardly ever to have evolved in fish gills, the opah being a possible exception. Consequently endothermic fish generally heat only parts of their bodies, such as certain muscles, the brain and the eyes. Endothermy is a potential disadvantage.

This is where my conceptual earworm comes in.

Here in the real world, we’re descended from amphibia who were all egg-laying at the time, emerged from fresh water rather than the seas or oceans, and evolved legs shortly before using them on land. They were also poikilothermic – they relied on external heat sources and had much the same internal temperature as their surroundings. It didn’t have to be like this as far as I can tell, though of course I may just be ignorant. Arthropods, for example, evolved legs a very long time before they left the water, and many of the crustaceans who did leave the water did so directly from the sea. They do, however, generally lay eggs, though not always. Molluscs, more specifically slugs and snails (gastropods), didn’t evolve legs before adopting a terrestrial lifestyle, and I’m not sure whether land snails and slugs are descended from sea snails, but they did have to start laying hard-shelled eggs and still depend on damp conditions on the whole.

Here comes the “what if”.

What if:

  • Vertebrates had evolved legs in the sea?
  • Vertebrates with legs had evolved viviparity in the sea?
  • Air-breathing vertebrates with lungs had not come to rely on transcutaneous respiration?
  • Land vertebrates had come from the oceans rather than fresh water?

A couple of other things about this: I considered making them endothermic, but that would probably mean they’d need too much energy to maintain their temperature if they were damp when they emerged. And I decided to imagine they’d have six legs, three on either side of the body, instead of four. There isn’t a particular reason for this as a precondition, but when animals have a fairly small number of limbs it seems to vary between four and ten. I used to think quadrupeds had four limbs because of the gravity on this planet and that higher gravity would’ve forced them to have more, such as six, but in fact insects have six legs and arachnids eight, and ten-legged animals such as lobsters live underwater where the fluid supports their bodies, so I don’t think this can be the reason and I now think it’s arbitrary. However, when you roll evolutionary dice with those choices, the chance of getting four limbs is only about one in four each time, so getting bilateral animals with four limbs twice over would be more like one in sixteen. I do think primitively bipedal animals are possible though, just less likely, and these would literally only have one pair of limbs rather than being secondarily bipedal like humans, kangaroos or birds. One further point: crustaceans and molluscs both emerged from the water with land-ready impervious covers, so lung-breathing vertebrates could have done the same, being armoured with tooth-like scales such as many fish at the time had.

Imagine, then, that as well as fish there are six-legged animals, a little like lizards, covered in dentine-like chain mail scales and breathing through lungs, whose gill-breathing ancestors have already colonised niches on the sea bed and the bottom of the ocean, which perhaps also have swimming forms alongside the fish, and some of whom have adapted to living in the harsher environment of the shoreline, where they may get trapped in rock pools. These animals are not endothermic, but they give birth to live young, can breathe air immediately and are already resistant to desiccation. This skips a whole evolutionary stage. In reality Elginerpeton, the earliest known amphibian, lived in Scotland 368 million years ago, but Hylonomus, the first known reptile, appeared by 315 million years ago in Nova Scotia. Land arthropods also preceded them by quite some time: the earliest known land animal of all seems to have been a millipede 425 million years ago, again in Scotland. At this point jawed, armoured fish existed, and lungs evolved in fish quite early, way before they came onto the land, so it doesn’t stretch credulity too much to suppose that reptile-like forms could have appeared 110 million years earlier than they did, and that they would be giving birth to live young.

What makes this timing even more interesting (to me anyway) is that there was a major ice age 360 to 260 million years ago, which I can imagine forcing the evolution of endothermy. If the timing of the evolution of true mammals – who were incidentally probably not warm-blooded, given that their lifespans were more typical of reptiles their size than of mammals today – were set back 110 million years, this puts their appearance at around 300 million years ago. However, those were not the first endothermic animals in the lines leading to us. The rather mammal-like therapsids evolved 275 million years ago, in the early Permian. Set those back 110 million years and they appear 385 million years back, in other words twenty-five million years before the ice age concerned starts.

The dominance of the dinosaurs seems to have resulted from a mass extinction at the end of the Triassic. Up until then, there had been a large variety of successful mammal-like animals. Dinosaurs had evolved but they were more like equal partners with the group of animals which were later to include the first mammals, and if the mass extinction hadn’t happened, the Jurassic would probably have been an age of mammals as well as one of dinosaurs. In the scenario I’m describing, mammal-like vertebrates could have appeared by the beginning of the cold period and that accelerates evolution still further, placing the start of a Cenozoic-like era at around 360 million years ago, and the appearance of human-like, or perhaps centaur-like, intelligent species as far back as 295 million years ago, long before even the Permian mass extinction.

This is all contingent on the given that there are multicellular oxygen-breathing animals and land plants on this planet. That is not inevitable, and nor is the evolution of vertebrates at all inevitable as far as anyone can tell. However, it’s often claimed that the long periods during which the only life on this planet was single-celled and, earlier still, didn’t breathe oxygen – the former amounting to something like three-quarters of this planet’s history so far – mean we should be pessimistic about the evolution of intelligent life in the Universe. What I’ve just described is what seems to me to be a very plausible set of circumstances in which intelligent life on this planet could have evolved when Earth was six percent younger. Although this isn’t a particularly large portion of this planet’s history, it does suggest a couple of things.

The fairly conservative change of simply mixing the characteristics of two major marine phyla could have led to the evolution of human-like intelligence nearly three hundred million years earlier. Apart from making intelligent life in the Galaxy more probable on Earth-like planets or moons in the solar systems of stars almost identical to our Sun, it also makes it possible for it to evolve around somewhat more massive stars with shorter lifespans during which conditions hospitable to life can exist. Moreover, this is just one change. What other plausible changes taking place earlier in the history of life would have facilitated the evolution of intelligence even earlier, making life possible around an even wider variety of stars?

What if we’re actually late developers?

A Shoe Shop Intensifier Ray

My devotion to ‘The Hitch-Hiker’s Guide To The Galaxy’ sometimes worries me a little. Someone once suggested that my writing tended to be derivative of Douglas Adams, and on another occasion my space opera version of the Book of Jonah was said to come across as if it had been written by him, which is interesting with regard to his somewhat proselytising atheism. Although he was indeed atheist, there was a period, at least in the UK, when atheism was in many circles just a minor detail of one’s beliefs, something which was assumed to be true but wasn’t very interesting as such. It was about as significant as the brand of toilet paper or baked beans you bought back then, and in some places still is. One would be atheist but it didn’t matter: Heinz baked beans were in the larder, Andrex was hung up in the bathroom and there was no God. Simple as.

Nonetheless, perhaps because of the impressionable age at which I was introduced to the work, H2G2 often intrudes into my thoughts and gets quite elaborated. Two interesting things occurred to me today from the work: the Shoe Event Horizon and the Restaurant At The End Of The Universe. I’ll cover the first first.

The Shoe Event Horizon comes up twice in the epic adventure in time and space. It’s in the second series of the original radio series and also in the second book, on two completely different planets. However, this is fine because it’s a tendency found throughout the Galaxy on various worlds, possibly even including Earth if it hadn’t been destroyed by the Vogons, which is surprising because of the whole Fenchurch incident. It works like this:

If you are in an optimistic and energetic civilisation, you tend to look up and see the stars, which inspires you to go into space. However, if you’re in a stagnant and declining civilisation which is quite pessimistic, you tend to look down and see your shoes, which inspires you to buy a new pair. This feeds the shoe shop sector of the economy and leads to more and more shoe shops until there’s no way economically to do anything other than build more shoe shops, which have to sell shoes of poor quality with rapidly changing styles. The economy collapses and the survivors forswear walking and evolve into flying forms to avoid this. It’s tempting to imagine that this is what happens on Olaf Stapledon’s Venus, with the Flying Men, but future history does not record.

Although I can accept intellectually that this is probably a parody of some kind of economic theory, and of course that it wouldn’t happen in reality, situations such as hyperinflation suggest that almost equally ridiculous economic catastrophes do happen and have, unfortunately, major real world consequences such as the rise of Nazism. In H2G2 this is shown as happening on the Frogstar worlds and Brontitall, and a minor character also has a throwaway line suggesting it would’ve happened on Earth as well. This last comment should really be put aside as an expression of opinion by the archaeologist concerned, because it’s hard to imagine the “good and happy place” Fenchurch would’ve caused to develop also being a stagnant and declining civilisation where people cheer themselves up by buying shoes.

The DolManSaxLil Galactic Shoe Corporation, it has been pointed out to me, is in fact a cosmic-scale version of the British Shoe Corporation, comprising as it did Dolcis, Freeman Hardy & Willis, Saxone and Lilley & Skinner. As far as the employees of that company (the galactic one, that is, not BSC) are concerned, they have a Shoe Shop Intensifier Ray which forces the inhabitants of Brontitall to develop an inexplicable urge to build shoe shops. There are, in my opinion, two odd things about this ray. One is the acknowledged fact that it’s a dummy. It doesn’t actually work at all, except as a kind of prop to make people think they’re doing something, when the urge is in reality just part of the natural course of events. The other is that it’s on the far side of Brontitall’s moon, which suggests either that shoe shop intensifier radiation can pass through that moon or that there’s a system of mirror satellites or something which the ray bounces off before reaching the planet’s surface. If so, one would expect a particularly intense desire to build shoe shops near one of the mirrors. It is in fact entirely feasible for there to be radiation which passes through a planet-sized lump of rock and magma, because that’s what neutrinos, which are real, do, but in that case the question arises of why such radiation would influence the brains of the people on Brontitall at all, having passed through the moon without being blocked or having much apparent effect. Therefore I prefer the idea that there were mirrors, although maybe the Galactic Shoe Corporation people just didn’t overthink things the way I do. But as far as I can tell, the Shoe Shop Intensifier Ray is unique in H2G2 in being a fictional device within a fictional setting, unlike other devices such as the Infinite Improbability Drive and Joo Janta Peril Sensitive Sunglasses, which are presumed to work in the fictional universe.
Oddly, there are real world Peril Sensitive Sunglasses, produced by Infocom as part of the packaging of its text adventure game in the mid-1980s, but they were just permanently opaque. Maybe they’d block shoe shop intensifier radiation though, if it existed.

There are a few other economic jokes in Adams’s work, notably the way one pays for one’s meal at Milliways, which is to deposit one penny in a bank account in one’s own era, whose compound interest will be enough to pay for the meal after 578 thousand million years, when the restaurant will be established. There are a few flaws with this idea – for instance, what happens if you live very late in the history of the Universe, just before Milliways is built? But it does seem to work to some extent, and is even amenable to maths. If annual interest is one percent, and there’s no economic collapse in the meantime, one penny would exceed the value of this entire world’s economy within a few thousand years, let alone 578 thousand million. However, there’s another aspect to this, or rather a couple. One is, unsurprisingly given the less than perfect nature of the universe, that capitalism can be expected to last forever, which presumably also means that the Ultimate Question will never be discovered, but then we already knew that. The other is that Milliways is an event horizon like the shoe one. The time will come when most of the money in the Universe will be in the form of interest accruing on the pennies placed in bank accounts in the diners’ own eras, and the only economic activity in the Universe will be the construction and running of the Restaurant. This is of course trivially true right at the end, but will also have been true for quite some time before that. Basically, the whole economy has collapsed and the only work which exists is connected to Milliways. But at least it isn’t a shoe shop.
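The compound interest claim is easy to check. Here’s a minimal sketch: the one-percent rate is the one given above, while the hundred-trillion figure for the value of the world economy is my own rough assumption.

```python
import math

PENNY = 0.01           # initial deposit
RATE = 0.01            # one percent annual compound interest
WORLD_ECONOMY = 1e14   # rough assumption: the whole world economy, ~100 trillion

# Solve PENNY * (1 + RATE) ** years >= WORLD_ECONOMY for years
years = math.log(WORLD_ECONOMY / PENNY) / math.log(1 + RATE)
print(round(years))    # → 3703
```

So the penny overtakes the entire world economy after well under four thousand years, which makes 578 thousand million look like absurd overkill, as Adams no doubt intended.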

Most of the devices in H2G2 are a huge challenge to build, if they can be constructed at all. A long time ago on this blog I had a go at working out how much information could really be extracted from a small piece of fairy cake. At some point I will probably try to come up with an argument for the existence of rice pudding given the premise of “I think, therefore I am”, and in a way it’s surprising I haven’t done that yet. On another occasion, though not online, I wrote a paper describing the natural environment of Babel Fish which led to their evolution – they live on Santraginus V, where there’s a giant hive mind made largely of coral, and their function within that biosphere is similar to that of neurotransmitters in the human nervous system. All of these are flights of fancy and probably a complete waste of time. However, there is one exception to the first attribute here, although it would probably also be a complete waste of time unless you could get people to buy them, and maybe you could: you really can build Shoe Shop Intensifier Rays. This is because even in the H2G2 setting they don’t work, and it’s entirely feasible to build a machine which doesn’t do what it’s supposed to.

I can’t really draw, but in my mind’s eye I think of the Intensifier Ray as a kind of massive torch in a sturdy frame bolted to the ground, giving off a brilliant blue-white beam about fifty centimetres across which, like a laser, doesn’t diverge but stays the same diameter. The device itself is somewhat larger and points horizontally, being aimed at a mirror in stationary orbit around the moon. An alternative is a similar device which points straight down, radiating right into the ground. Maybe the moon itself can be thought of as focussing the beam like a lens. But in fact it can’t be like this, because the beam has to cover the whole planet, so it would at least need to be of sufficient diameter to cover the inhabited regions of each side of the planet – bearing in mind that if it can penetrate the moon, it might be able to do the same to the planet. But all of this is made up of course, as it should be.

The next question is whether there are enough people out there in the world who are so into ‘The Hitch-Hiker’s Guide To The Galaxy’ that they would buy one of these. Maybe. I don’t know. But a frightening prospect emerges at this point: what if it proved to be so popular and lucrative making these things that it would bring about the apocalyptic Shoeshop Intensifier Ray Event Horizon?

Webdriver Torso and Unfavorable Semicircle

I love words. I’m beginning to think our grandchild also will, owing to something she said this morning, but I can’t currently remember what that was and probably never will. They seem almost magic to me. When I was studying postgraduate philosophy I gained a reputation for giving presentations on individual words, which seemed rather dismissive to me, but nonetheless I can believe that I come across that way. There’s a way of making something sound brainy and scientific where you use either one or two surnames followed by a noun. It crops up in ‘Look Around You’ with the “Beaumont Grill”, which may or may not have had an E on the end of it. For that reason, in my book ‘Here Be Dragons’ I came up with the Yates-Leason Effect, which is that teleportation always occurs along a diagonal in all dimensions and involves the exchange of equal masses. This is to preserve the genuine Novikov Self-Consistency Principle, which is that if an event would cause a temporal paradox, the probability of that event occurring is zero. One particularly neat way of satisfying this principle is to suppose that time travel into the past always involves a shift in location at least as great as the distance light could cover in the time jumped, as this means that nothing can ever happen which could change the past, the speed of light being the ultimate speed limit. This is quite elegant because of the concept in physics of the “light cone”, which is the region of spacetime a flash of light can reach. Back travel, as it’s sometimes known, wouldn’t create any paradoxes if it only occurs outside that “cone”. ‘Doctor Who’ came up with something very similar to this called the Blinovitch Limitation Effect, which is that you can’t cross your own timeline, by which is meant worldline. Let’s not get bogged down though.
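Or, to bog down just slightly after all: the displacement rule fits in one line of code. This is only a toy sketch of the idea, and the function name and units are my own.

```python
C = 299_792_458.0  # speed of light in metres per second

def is_paradox_safe(dt_seconds: float, dx_metres: float) -> bool:
    """True if arriving dt_seconds in the past, dx_metres away, puts you
    outside the light cone of your departure point, so that no signal -
    not even light - could get back in time to change what you left."""
    return dx_metres >= C * dt_seconds

# Jumping one second into the past, you must arrive at least one
# light-second (about 300,000 km) away for the past to be safe.
print(is_paradox_safe(1.0, 3.0e8))  # → True
print(is_paradox_safe(1.0, 1.0e6))  # → False
```

In other words, a one-second hop into the past has to dump you somewhere past the Moon’s orbit before it stops being dangerous.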

Recently on here I mentioned in passing Webdriver Torso and Unfavorable Semicircle. These are two mysterious internet phenomena, one of which has been solved, one not. They mainly relate to two YouTube channels of those names. The first, Webdriver Torso, is no longer mysterious.

There are somewhere near three-quarters of a million videos of this kind there, mostly only a few seconds long and consisting of a series of sine-wave tones and blue and red rectangles, sometimes overlapping, against a white background. One of the gratifying things about these videos for me, being an over-the-hill eight-bit computer nerd, is that something similar could’ve been created by most computers with single-channel pitch-duration sound and four-colour fairly high resolution graphics, probably even by a 16K ZX Spectrum and definitely on a BBC Micro. It would be dead easy to write a program to do this. Anyway, all of these uploads took place within a period of about three years, so that works out at a mean of roughly six hundred videos a day. I once spent a year uploading one YT vid a day and it was pretty gruelling, although Simon Whistler for some reason seems to manage to do more than that over a period of years without any dip in quality (not on that channel, though). If you work out the exact details, a video was uploaded every one minute and eleven seconds, the total number of uploads being 624 740. This is an odd coincidence, because on the day the uploads ceased, the Rick Astley “rickroll” video had that exact same number of comments, and the interval between choruses on that video is one minute and eleven seconds. One of the Webdriver Torso videos is also a parody of that video. All of that is fun, but it’s probably either a coincidence or an in-joke, and it would be worthwhile crunching a few numbers to work out the probability, bearing in mind that one could’ve forced other numbers out of rickrolling one way or another which would’ve fitted different stats.
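Taking the figures above at face value, the mean upload rate is easy to check; the three-year window is only approximate, so this is just a sanity check rather than an exact calculation.

```python
TOTAL_UPLOADS = 624_740   # the total quoted above
PERIOD_DAYS = 3 * 365     # "about three years" - an approximation

per_day = TOTAL_UPLOADS / PERIOD_DAYS
print(round(per_day))     # → 571, i.e. roughly six hundred a day
```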
It did occur to me that the Rickroll video was being used as a seed to generate the Webdriver torso videos: every time someone commented on it, YouTube would use the time stamp to generate a pseudorandom number perhaps?

In the end, Google said the purpose of the channel was internal testing of YouTube performance, and this seems feasible. I haven’t noted the exact size of the rectangles, but it would probably make a difference whether they were an exact multiple of eight pixels wide and whether their corners fell on eight-pixel boundaries, because although I don’t know much about video compression, I do know that it uses similar principles to JPEG compression, which among other things involves dividing an image into 8×8 pixel blocks. These can often be seen when the image quality is poor – it ends up looking like muddy blocks of that size. This is in fact why I mentioned the ZX Spectrum earlier. Colour on that computer was only addressable in 8×8 blocks, each of which could only contain two colours, even though there were eight – or arguably sixteen, counting by the standards of the day – altogether. The characters labelling each video are in fact in an 8×8 bitmapped font and seem to refer to Shockwave Flash, although the font used is of the kind used on American microcomputers rather than British ones, with two pixels per vertical stroke rather than one, which I find ugly.
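If you wanted to test the eight-pixel idea on an actual frame, the check itself is trivial. This is a hypothetical helper of my own devising, assuming rectangles are described by their top-left corner plus width and height in pixels.

```python
def aligned_to_blocks(x: int, y: int, width: int, height: int, block: int = 8) -> bool:
    """True if all four corners of a rectangle sit on the 8x8 grid that
    JPEG-style compression divides an image into, so that its edges never
    cut through the middle of a block."""
    return all(v % block == 0 for v in (x, y, width, height))

print(aligned_to_blocks(64, 32, 128, 80))  # → True: every corner on the grid
print(aligned_to_blocks(65, 32, 128, 80))  # → False: shifted one pixel right
```

A block-aligned rectangle should compress very cleanly, whereas one shifted by a pixel forces mixed-colour blocks along its edges, which is exactly where artifacts would show up.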

Google took a rather humorous approach to the whole thing in the end, and whereas some people want to cling to the mystery and believe they’re fobbing people off, I don’t think they are. It’s just a test channel with fairly constrained and simplified variables such as the colour (FF0000, 0000FF or FFFFFF), shape (sharp-edged rectangles which I think occur at exact multiples of eight pixels) and tone (simple sine waves within human hearing range). My only doubt really is that this seems not to be particularly useful because it consists largely of videos rather unlike what most users would in fact upload, so it doesn’t seem to be a particularly good test. Incidentally, if it really does match up to 8×8 cells, it basically is doable on a Speccy except that the tones would be square waves rather than sine, and would need to be amplified. It might even do a better job than YouTube, where the slides are not perfectly solid colour but show artifacts near their edges, where the colours change slightly.

The other mysterious channel, which is somewhat similar, is Unfavorable Semicircle. This was not owned by Google, and was in fact closed down by them as malicious or misleading. Like the other channel, it uploaded a large number of short, quite abstract videos at one heck of a rate, but since their purpose is unknown it might be unwise to attempt to view them. They consist of blurry pixelated shapes, often apparently random, against solid colour backgrounds, with the kind of “muddy” audio you get on poor-quality mobile phone calls. The title of each video starts with the Sagittarius symbol ♐ followed by a short word and a number. There are in particular two videos entitled LOCK and DELOCK which are widely considered to be attempts to hack the devices playing them, which is why I don’t want to link to any of the stuff which has been reuploaded. Having said that, I think the most likely explanation is that it’s just a Webdriver Torso copycat channel, a simple waste of time with no significance beyond that. That is, however, not the only hypothesis, and because there’s some chance that it’s an exploratory attempt to interfere with ’phone or other device security, it may be best to ignore it, at least in terms of actually watching the videos.

It’s also been considered outsider art – art made outside the established art world, often by self-taught creators. Taking it beyond that, it’s also possible that it’s more centrally driven by mental illness, like this blog could conceivably be. A further suggestion is that it’s an alternate reality game, that is, a game which seems like it might be real and which aims to warp the possibly unwilling players’ perceptions of reality, a bit like a hoax but without intending to deceive in the long term. Then there’s the inevitable and now rather predictable “numbers station” hypothesis. Yet another explanation which I personally find quite interesting, if improbable, is that it’s a series of programs in the Piet language. Piet is an esoteric programming language whose source code takes the form of graphic images resembling Mondrian paintings. I really want this to be true, perhaps too much to believe that it can be.

Investigators have taken Unfavorable Semicircle videos and subjected them to analysis in such a way as to produce three dimensional plots of their data. Frames can be stacked up like slices through a block, and these sometimes produce significant-looking objects such as a diagonal plane or a mysterious object that looks a bit like a bicycle bell. The diagonal plane isn’t necessarily significant as a line moving gradually up or down would appear like that.

I haven’t gone into much depth here because the main point of this blog post was really just to explain what these two things I mentioned previously were. If you want to know more, there’s the Unfavorable Semicircle Wiki, and plenty of information on Webdriver Torso online. It may of course all be a colossal waste of time as well!

Pseudosciences and Overstatement

You might think I’m hardly qualified to comment on these issues owing to the fact that I am not only religious but believe in Nostradamus, and some of you might go further than that and point out that I’m a herbalist. It is true, of course, that the mental world I live in is not that close to that of a metaphysical naturalist in some ways, but as I’ve said before, the reed that bends with the wind survives when the tree would be blown down by it, so it’s in my opinion important to be a bit delusional in order to preserve one’s sanity. And we’re all delusional anyway, which is no bad thing because it helps us thrive mentally. Nonetheless, I want to mention two things today, one of which I think works and one of which doesn’t, although it may not matter that it doesn’t. Please bear in mind that I have a lot of respect for evidence, and sometimes I feel people who pride themselves on being sceptical are in fact not.

I’ll start with graphology. This is usually understood to be a pseudoscience, but before I get into that I need to make a distinction which is sometimes glossed over when it’s discussed. There is such a thing as forensic handwriting analysis, which is incontrovertibly scientific – for instance, it can be used to detect forged signatures or to determine whether an apparent suicide note was in fact written by a murderer. This presumably isn’t as useful as it was back in the day, due to people writing by hand less often than they used to, but few people would dispute that it’s scientific. More controversial, apparently, is graphology as a means of assessing personality, and it’s this I find strange. I don’t think it is a science, and it may therefore include a lot which isn’t scientific in the sense of having gone through a hypothesis-experiment-theory process, but that’s not the same as saying it’s wrong. There are plenty of respectable academic disciplines, such as history and literary criticism, which don’t operate in this way. In that sense graphology is a pseudoscience; however, that doesn’t make it wrong. Nor does it follow that it couldn’t be made scientific, because one could in fact look for correlations between psychometric tests and handwriting samples on a large scale, and either refine it out of existence or turn it into a science. It is of course important to be wary of common sense, because that could be wrong. For instance, the temperature of Rovaniemi on the Arctic Circle went up to 40°C a couple of years ago, and that certainly sounds like global warming – and I still believe it was – but rather frustratingly it isn’t an indication of it, because it’s a single data point and not a measure of climate.
The reason this is frustrating is that it would be more effective propaganda (in a positive sense) to be able to point to something definite and dramatic to persuade climate change deniers of the truth. Because of the way science works, though, all one can do is say things like “there are a lot more hurricanes today than there used to be” without being able to attribute any one hurricane to human activity, which rather blunts the message, and will continue to do so unless the majority of people in a particular jurisdiction are scientifically literate or have good critical thinking skills – which may not be in the immediate interests of people who want to sell them a lot of shoddy tat.

But humpback bridges, for example, are overengineered and a lot stronger than they need to be, because they weren’t scientifically designed, unlike suspension bridges, which were, and which can be destroyed by wind-driven harmonic oscillations. Renaissance art was painted using tried and tested techniques, but when Leonardo da Vinci chose to experiment with reproducing the appearance of an oil painting on a monastery wall in ‘The Last Supper’, it started deteriorating within his own lifetime, because it was a failed experiment. This is in fact one of the arguments I use to support the efficacy of herbalism. Herbalism is a craft rather than a science, and has a long tradition of use, just as the methods used to build many surviving ancient buildings have been in use for millennia, and yet we trust those buildings when we go into them. We don’t expect the lintels of Stonehenge to fall on our heads, but there was probably no science involved in putting them up there.

Back to graphology. I have no particular axe to grind here because as far as I know nothing significant in my life depends on it being accurate or useful. It’s just something “out there” which has long seemed entirely plausible, and as a body of information I know a fair bit about it, to the extent that friends have become insecure when I’ve told them they have interesting handwriting. That’s not the same as believing in it of course, but it so happens that I do think that it’s entirely sensible to do so to some extent, although how far that extends beyond the obvious is another question.

I would suggest, for instance, that the following are likely to be true:

  • Neat handwriting shows a person is organised.
  • Heavy pressure indicates frustration, aggression, high energy or physical strength combined with not writing much.
  • Precisely placed i-dots and t-bars indicate attention to detail.
  • Habitually hurried handwriting suggests someone is overstimulated or bipolar.
  • Handwriting which is particularly close to how one was originally taught to write suggests difficulty in letting go of the past, arrested development or conformity.
  • Large letters may be written by a narcissist.

None of those features seem at all far-fetched to me, and it strikes me as odd that graphology is rejected out of hand even in such aspects as the above, which just seem very likely to be true, and not even controversial, although there may be other explanations for them.

Somewhat more dubious are the following:

  • Rightward or upright slant is significant as well as leftward. Sometimes the slant is probably just due to how one habitually holds pen or paper. That said, since nobody is ever taught to write with a leftward slant, a leftward slant probably is significant if the position of the writing implement is not a factor – though I should admit I have considerable emotional investment in that possibility.
  • Arcade connections between letters – “n” shapes rather than “u” shapes – mean the writer is guarded in their thoughts.
  • Angular connections represent determination, compulsiveness and rigidity. I don’t think this is true because although I no longer use that hand, I first successfully learnt cursive as italic handwriting (after failing at Marion Richardson, which is the most common hand in England), and that automatically has angular connections.
  • Garlands (“u” shaped connections) and bowls open at the top (e.g. “a” written like a “u”) indicate emotional openness.

Claims of this kind seem less rational to me. They seem to be about the idea that handwriting is a subliminal symbolic work of art, and whereas I agree that it is, I don’t think the symbols are strong enough to transfer between people’s personalities. The rational approach to this, though, would be to test these claims against samples of handwriting and personality tests.

I have noticed that when I’m down, my handwriting tends not to stay on the line but drift downward on the page, which is supposed to be a graphological sign. That said, I am aware of that significance being given to it so it may not be something to rely on with a less informed writer. Also, although this is no longer the case I used to make a conscious effort to write very narrow letters without much space between them to save paper. A graphologist would probably put a different spin on that.

Nonetheless, I think it’s going too far to throw the whole of graphology out of the window simply because of these apparently more fanciful, but as far as I know still untested, claims. The first set of claims seems eminently sensible to me. Why would someone with neat handwriting not also be a neat person as well, usually?

The second pseudoscience I want to look at today is Western astrology. It wouldn’t be entirely accurate to say that I don’t think it’s worthwhile, but it has the distinction of being one of the few subjects I’ve tried to conduct scientific research of my own on. There is said to be a correlation between a patient’s sun sign and the system or organs which give them most trouble healthwise. Since I have had thousands of medical histories which include dates of birth, I was able to look into this, although my sample size was still rather small. I used recognised statistical measures of correlation, more specifically ones for nominal variables, because I don’t consider sun signs to be ordinal, and found no correlation at all in my sample between any kind of chronic health condition and sun sign, whether or not the conditions corresponded to the traditional understanding of what each sign governs. More research could of course be done, but probably not with my data set.

This, however, satisfied me that astrology doesn’t work, and that’s actually quite surprising, because I would expect there to be connections between the time of year someone is born and their health. For instance, in Iceland people born at a particular time of year used to have a higher risk of Type II diabetes mellitus, because their parents ate a particular kind of food at Christmas which increased the fetal risk of developing it, though surprisingly not Type I, and babies born at the end of winter tend to be lighter, which exposes them to different health risks. Moreover, a baby born in August is likely to be one of the youngest in her year at school, and that could be expected to have consequences. Oddly, though, none of that was borne out by my research, and I’m now confident there’s no correlation – a conclusion which has been forced upon me.
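For what it’s worth, the kind of nominal-association measure I mean can be sketched briefly. This toy example computes Cramér’s V (a chi-squared-based statistic for nominal variables) from a contingency table of sun sign against condition category; the counts and the three-by-three layout are entirely made up for illustration, not my actual data:

```python
# A toy version of a nominal-association test: chi-squared on a
# contingency table of sun sign against condition category, summarised
# as Cramér's V (0 = no association, 1 = perfect association).

def cramers_v(table):
    rows, cols = len(table), len(table[0])
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(table[r][c] for r in range(rows)) for c in range(cols)]
    chi2 = 0.0
    for r in range(rows):
        for c in range(cols):
            expected = row_totals[r] * col_totals[c] / n
            chi2 += (table[r][c] - expected) ** 2 / expected
    return (chi2 / (n * (min(rows, cols) - 1))) ** 0.5

# Rows: three hypothetical sun signs; columns: three condition categories.
counts = [
    [10, 12, 11],
    [ 9, 13, 10],
    [11, 10, 12],
]
print(f"Cramér's V = {cramers_v(counts):.3f}")  # near zero: no association
```

A near-uniform table like this one yields a V close to zero, which is the shape my own results took.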

Western astrology is, however, interesting and significant in historical and cultural ways. For instance, Nancy Reagan’s belief in astrology is said to have influenced some of President Reagan’s decisions and in centuries past this has presumably happened many times. It’s also part of the structure of intellectual information which forms a kind of collective memory palace for Western culture, so for example alchemy and astrology are closely linked.

The reason I’ve chosen to look at these two pseudosciences here is that I sometimes feel people understanding themselves to be sceptics are not in fact sceptics but make major assumptions about the way the world is. For instance, it’s very common for sceptics to reject homoeopathy out of hand, but as far as I know there has never been a coöperative project between homoeopaths and people who don’t believe in homoeopathy to come up with a protocol with ecological validity to test homoeopathy. I should point out here that as a herbalist I have no emotional or business interest at all in homoeopathy, which is not herbalism and has no bearing on my practice, just in case you don’t know. Given the improbability of a single molecule of a particular remedy being present in a homoeopathic remedy, it does seem unlikely, but sharks are able to sense blood at concentrations too low to have probable components of that blood in their nostrils and moths are also able to detect pheromones which are so rarefied that the chances of them encountering them at all are virtually nil, so the sceptical thing to do, it seems to me, is to refrain from commenting until a good quality agreed study between the disputing parties is undertaken. The other thing is pseudoscepticism. I’ve been into this before.

I would also say this applies to anti-theism, and particularly to opposition to Christian beliefs. Atheists sometimes claim that atheism is simply the absence of belief in the existence of God. Secular definitions of atheism in philosophy differ from this, usually describing it in terms of the presence of the belief that there are no deities. To define atheism as a mere absence of belief is to imply that, for example, newborn babies are atheists because they may well have no beliefs at all. By that token, the cloth I use to clean my glasses is also atheist, and it clearly isn’t. Nor is it theist. The other claim is more specifically aimed at Christianity: that there was no historical Jesus. While it’s perfectly respectable to posit that there was no man born of a virgin who was also God, performed miracles and came back to life, it’s far less respectable to make out that there isn’t evidence that a religious figure called Yeshua lived in first century Judaea. I don’t want to go into too much detail to debunk this now, but I do want to point out that religious Jews had every reason in the world to claim that he didn’t exist, and they never did that.

I also want to add a point about Nostradamus which I’ve mentioned before on this blog. People who see themselves as sceptics were for a time very keen to point out that there was a fake verse of his circulating online regarding 9/11, and went on to claim that at no point before the destruction of the Twin Towers did anyone foresee it using Nostradamus’s quatrains. This is absolutely not the case. In believing circles, the understanding of a particular set of verses, not the same as the fake one, was that there would be an incident where planes would crash into skyscrapers in New York City and cause a huge disaster, and this has been held since at least the late 1970s. It simply isn’t the case that anything was either made up or made to fit this incident after the fact. I expected it to happen because I read it in Nostradamus, although I didn’t know when (I assumed it would be some time between 1986 and 2001).

My point then is this. Many of us tend to think we’re rational and that we only believe what there’s evidence for, but regardless of whether you accept pseudoscience or are religious, or are the most scientific and non-religious person in the world, the chances are that you are not as sceptical as you think you are, and that you believe certain things in the absence of evidence simply because their being true would seem ludicrous or would upset your view of reality too much. In particular, although it isn’t one currently, I honestly can’t see a reason why graphology shouldn’t be made into a respectable science given a bit of help from the scientific method. I have no personal interest in making that claim except that I think it’s rejected out of hand, and I’ve chosen graphology precisely because of my lack of emotional investment in it.

Sometimes it’s healthy to be able to bend with the wind.

Markovian Parallax Denigrate and the Halfbakery

Back in the mid-1990s, the internet was rather different from how it is today. I am, perhaps surprisingly, not an early adopter. I didn’t get internet access until September 1999 and even that was only via AOL. That said, I was aware since the mid-’80s that there was something called Usenet which you needed a better computer than I had to access, and that there was an email server in London called Telecom Gold which you needed a better computer than I had to access, and that there was something called JANET which meant that the university library computers wouldn’t work properly if there was a problem in Bristol or something. Once I actually got online in a manner I recognised as being that, it didn’t take me long to notice that a very large number of documents and web pages seemed to date from 1996. I presume this is because the year before that was when Microsoft had decided to include Internet Explorer with their Windows 95 operating system, so that if you bought either a new PC or installed that version of Windows you’d be close to having access to the World Wide Web, and given the few months people took to get themselves sorted out, 1996 would be the year they’d be likely to start producing public content.

However, three years before that something happened called “The September That Never Ended”, chiefly affecting Usenet. In case you don’t know, Usenet was basically a load of discussion fora somewhat like bulletin boards but accessible via the internet rather than just a modem or acoustic coupler plus a landline ‘phone. Every September there used to be a new influx of users on Usenet who didn’t know what counted as politeness on the internet. In 1993, AOL started their massive mailing campaign, sending out millions of floppies, later CD-ROMs, with their online access software on them, and there began the endless September which we are still experiencing, and which now seems to be having a major impact on world affairs and mental health. Personally I think September 1993 is just unusually long. It’ll end any day now I expect.

One of the things which happened during the September that never ended, in 1996, was the famous Markovian Parallax Denigrate spam incident. It wasn’t the first spam by any means, but it did come across as a particularly baffling and unusual form of spam. Here’s a sample:

On Monday, August 5, 1996 at 2:00:00 AM UTC-5, Chris Brokerage wrote:

jitterbugging McKinley Abe break Newtonian inferring caw update Cohen
air collaborate rue sportswriting rococo invocate tousle shadflower
Debby Stirling pathogenesis escritoire adventitious novo ITT most
chairperson Dwight Hertzog different pinpoint dunk McKinley pendant
firelight Uranus episodic medicine ditty craggy flogging variac
brotherhood Webb impromptu file countenance inheritance cohesion
refrigerate morphine napkin inland Janeiro nameable yearbook hark

mibulls sailcloth blindsight lifeline anan rectipetality faultlessly offered scleromalacia neighed catholicate

This is fairly typical. Markovian Parallax Denigrate is named after what was alleged to be a frequent subject line for these emails, which consisted of strings of apparently random words. However, it’s also alleged that this was not as widespread a title as had been claimed and was as apparently random as the rest of the text, so it’s difficult to know if there’s any mileage in reading anything into these three words. In particular the first word, “Markovian”, seems very significant because it seems to refer to a Markov chain: a series of possible events whose next member only depends on the previous one. A well-known Markov chain is used in the social media game where one types in the start of a sentence and lets one’s device predict the rest of the sentence, word by word. Here’s an example (after “is”): “A Markov chain is not okay for a few weeks and then we are not usually the Guardian because it’s so hard that they probably won’t make sure.” This predicts what you’re going to type next based on the probability of what you’ve tended to type before, following a particular word.
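For anyone curious, a toy word-level Markov chain of this kind takes only a few lines to build: record which words follow which, then walk the table. The corpus and starting word here are invented purely for illustration:

```python
import random
from collections import defaultdict

# A toy word-level Markov chain: each next word depends only on the
# current word, chosen with the frequency it followed that word before.
def build_chain(text):
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=8, seed=0):
    random.seed(seed)  # deterministic for the example
    out = [start]
    for _ in range(length):
        options = chain.get(out[-1])
        if not options:  # dead end: no word ever followed this one
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = ("a markov chain is a chain of events and a chain of words "
          "is a chain a markov model can follow")
print(generate(build_chain(corpus), "a"))
```

Because duplicates are kept in each list, common pairings come up proportionally more often, which is exactly why the social media game produces sentences that sound eerily like their author.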

Sometime in the ‘teens, I think, a particular member of US Congress was alleged to have been responsible for this spam, but this turned out to be groundless. However, this drew people’s attention to the incident again and it became known as “the oldest mystery on the internet”. Its name is also similar to several other well-known internet mysteries such as Webdriver Torso and Unfavorable Semicircle, and it also resembles the default passwords generated for AOL software such as “avidly-binned” and “heaps-budger”. But it was quite old by Web standards, as it were, although it occurred in that legendary year of 1996 during which everything in my early internet encounters seemed to have happened. I didn’t encounter Markovian Parallax Denigrate in that form, but I certainly encountered something very similar from my very first days when I was knowingly online and had access to the Web (doesn’t that sound quaint?). I am apparently cursed with a tendency to think outside the box without being able to think inside it, which probably goes some way towards explaining the nature of this blog. This meant that when I used a search engine, although some of the time I’d find something relevant, I was more often likely to find pages of the above apparently random strings of words, particularly if I searched for two consecutive words. I never found out what these were, but they were web pages and consisted entirely of this kind of thing. As well as being mystifying, I also found it very frustrating. However, in the absence of any knowledge of Markovian Parallax Denigrate or ideas about what that might be, I did formulate some kind of hypothesis about them.

Suppose there is a computer program out there somewhere which generates pages consisting of pseudo-random words, in the sense that the probability of the occurrence of each word in a dictionary file is roughly the same as that of any other word. This would, I think, produce strings like the example given above. However, human users don’t generally do this, or at least they didn’t back then before Markov chains became part of a widespread game. Now suppose there are millions of internet users looking for search strings. Some of them will end up finding what they want, but others will be frustrated by finding these apparently arbitrary walls of text. However, the server on which these are stored will register when it serves such a page to a user, so the probability of those strings having meaning is greater if that happens. If that server then scores the number of times a particular page is viewed, it may be able to determine which pages are more apparently meaningful. It can then discard the others and base further pages on the more meaningful strings of words, thereby gradually breeding out the meaningless text and selecting for the meaningful. Eventually, this could be expected to lead to completely meaningful documents. It’s basically evolution.
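A toy simulation of the selection process hypothesised above might look like the sketch below. Every detail – the vocabulary, the queries, the scoring rule – is invented for illustration; the point is only the mechanism of discarding unviewed pages and breeding from viewed ones:

```python
import random

random.seed(42)  # deterministic for the example

VOCAB = "jitterbugging rue caw update air file inland napkin hark novo".split()
QUERIES = {"air file", "update file", "napkin air"}  # hypothetical searches

def random_page(length=6):
    return [random.choice(VOCAB) for _ in range(length)]

def views(page):
    # A page scores one "view" per query whose words all appear in it.
    return sum(all(word in page for word in query.split())
               for query in QUERIES)

def evolve(pages, generations=20):
    for _ in range(generations):
        pages.sort(key=views, reverse=True)
        survivors = pages[: len(pages) // 2]   # keep the most-viewed half
        children = []
        for page in survivors:                 # copy each survivor and
            child = page.copy()                # mutate one word at random
            child[random.randrange(len(child))] = random.choice(VOCAB)
            children.append(child)
        pages = survivors + children
    return max(pages, key=views)

best = evolve([random_page() for _ in range(30)])
print(best, "scores", views(best), "views")
```

Because the fittest pages always survive each round, the best score can never fall, and pages drift toward wording that matches what users actually search for – evolution by selection, as described.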

I don’t understand why I encountered what appeared to be Markovian Parallax Denigrate as web pages and not Usenet posts, since I did access Usenet a fair bit at the time. I also don’t know how much Usenet would lend itself to the kind of evolutionary process I’ve just described. But even now I think that might have been what was going on, and it also explains the title, although here it’s important to beware of seeing patterns where none exist.

I’ve already explained Markov chains. These pages don’t actually seem to have been Markov chains, although given enough time they might ultimately have become them. I encountered them by searching for pairs of consecutive words, and the problem is that the words I was likely to search for are not probable couplings, so if that was what was happening I probably didn’t help them become more apparently meaningful, or I wouldn’t’ve been constantly plagued by these meaningless passages. Then comes the word “Parallax”. Parallax is a well-known term, but it might be worthwhile spelling out the meaning: an apparent change in the position of an object against its background when seen from a different point of view. Finally, there’s “denigrate”, in other words “disparage” or “discount”. Put these three words together and a picture emerges, which may be entirely imaginary. Two users searching for two different pairs of consecutive words could encounter the same page, which is then seen by two different pairs of eyes with a different apparent meaning for each – a form of parallax, perhaps? I know it’s a stretch, but it seems like an appropriate word to use here. Then the pages which don’t get viewed by more than one person for more than one string are “denigrated”: they are not as valuable, and get discarded. This could be nothing of course, but it does feel like I’m almost grasping that sequence of words and reading a meaning into it which may or may not be present. This is close to what I hypothesised was going on with these pages: trying to evolve meaning out of nonsense.

Other hypotheses about this included that it was like a numbers station, which is a bit of a cliché. In case you don’t know, a numbers station is a radio station where a series of apparently random numbers is broadcast by a sampled human voice generated by a machine, whose purpose appears to be to provide some kind of standard code for spies. One famous example is Lincolnshire Poacher, which is named after the music it plays at intervals to identify it. I first came across these in the 1970s and had no idea what to make of them, but nowadays the internet makes it easier to compare notes with other people. There also used to be phone lines which did something similar but which could be used to make free long-distance phone calls if two people anywhere in the world were to ring the same number at the same time – they’d be able to hear each other over the sound of the numbers being read. It seems unlikely that these have anything in common with MPD.

There’s a remarkable sequel to my apparent experience of MPD. I had this frustrating experience for a couple of years after I got internet access from home. It would happen intermittently and blocked me from finding anything useful on the topics concerned. Then, gradually, it began to change. At first, although I still ended up with page upon page of gibberish, I used to encounter the occasional meaningful and relevant piece of content written by an actual human being. These slowly became more frequent. Then the point came when the majority of my searches on these two-word phrases pointed me in the direction of this content, and the MPD started to fade out. I then noticed that almost all of the hits were to one specific website: the Halfbakery. This happened in ’03 or ’04. For the uninitiated, the Halfbakery is an ideas bank whose users discuss inventions of their own imaginings. I don’t know how other Halfbakers encountered the place for the first time, but this is my story: it gradually emerged from my apparent experience of Markovian Parallax Denigrate.

Cliché Christianity (Part 2)

If you want a reminder of what I mean by “Cliché Christianity”, check this link to yesterday’s post.

Angels are of course rather White and female, with long blond hair, white robes and halos. Oh, and wings between their shoulderblades. There are also Cupid-like beings called cherubim. All of this is rather strange, because that isn’t at all how angels are described in the Bible. For a start, the Old English word “engel” is masculine, as are the Greek «ἄγγελος» and the Hebrew “mal’akh” (which I can’t easily type in its own script). However, these are languages with grammatical rather than so-called “natural” gender, and the general idea is that angels are not gendered. Secondly, angels are described in various ways in the Bible, notably apparently as winged wheels with lots of eyes. Cherubim are also nothing like the “putti” they’re usually depicted as. Putti originated as pagan beings and were later used to illustrate alchemical and anatomical texts, carrying out roles such as holding flasks and killing stray dogs for dissection. I have to admit that I can’t understand how they came to be used to depict cherubim, which are completely unlike them. In terms of the post-Biblical model of choirs of angels, cherubim are the second choir in the first sphere, and therefore particularly close to God. I have a bit of an issue with this description because it seems to have been elaborated out of nothing, and the whole idea of choirs of angels seems to be about imposing human social order, which is of course imperfect and based on sins such as the unjust wielding of power, on Heaven. In fact, one of the biggest problems I have with the Christian faith is that I’m supposed to believe in angels. I can believe in angels in the sense of messengers perhaps unconsciously conveying God’s intention to human beings. For instance, I don’t have much problem with the idea that someone might pick up a hitch-hiker who is moved by God to drop a casual phrase into conversation which ends up changing the course of the motorist’s life.
I do absolutely have a problem with the idea of permanent supernatural beings acting as go-betweens, because it seems to detract from the unity of the Godhead. Another problem is specific to cherubim. I believe that logic dictates that there can only be one omniscient being because “another” omniscient being would consist of a mind with identical content and therefore be the same being. Cherubim are supposed to have identical knowledge to God, which would make them God, which is logically impossible. Incidentally, if you are not yourself theist this probably sounds like a mental glass bead game to you, but divorcing it from theism I hope you can see the logic of this argument, which may of course be flawed.

I presume that angels are shown that way in Renaissance art in order to indicate that they are in the sky, i.e. heaven, and beautiful. John Chrysostom said almost exactly this in the fourth or fifth century, and the origin of the depiction itself is from around the same time. Like the depiction of cherubim as putti, they originate from Greco-Roman paganism in the form of the goddess Victoria or Nike. Although it might be disquieting that both “cherubim” and “angels” have pagan origins, I quite like the idea of Abrahamic religion not being sealed off from other spiritual expressions. But although seraphim, cherubim and angels are all in the Bible, they bear no resemblance to their popular depiction and the idea that supernatural beings are humanoid is disturbing and perhaps even slightly nauseating.

Then there’s the idea that humans become angels after death. George Gershwin’s ‘Summertime’ from ‘Porgy And Bess’ sums the sentiment up pretty well here:

One of these mornings you’re gonna rise up singing

And you’ll spread your wings and you’ll take to the sky

But till that morning, there ain’t nothin’ can harm you

So hush little baby, don’t you cry.

This communicates a sense of reassurance. If God decides to take you now, you will escape from your earthly situation by flying away to heaven, but while you’re still alive nothing can harm you because you haven’t died yet, kinda thing. There may also be another connection with putti here, since often we’d be thinking of babies or children who die in this situation and this association still exists among bereaved parents. This illustrates very clearly the comforting function of this mythos, which of course brings me to the rainbow bridge and doggy heaven. The idea here is that there is a possibly unconditional place companion animals go when they die, closely related to the notorious “gone to live on a farm in Wales” lie told to children, including myself on one occasion when a cat died. The rainbow bridge, however, is from Germanic mythology rather than the Abrahamic tradition. Other species are generally regarded within Christianity as being incapable of sin because they’re not fallen – they lack the knowledge of good and evil.

Then there’s the interesting question of halos. The Transfiguration probably helps make sense of this. Luke 9:28-36 and the other two synoptic gospels all refer to this and it also crops up in 2 Peter 1:16-18. Jesus is seen as having a glowing face, which brings to mind also the Aaronic blessing from Numbers 6:24-26, which includes the phrase “יָאֵר יהוה פָּנָיו אֵלֶיךָ, וִיחֻנֶּךָּ‎” – whereof a Jewish translation into English reads “The Lord deal kindly and graciously with you” but which has also been translated less figuratively as “The Lord make his countenance shine upon you”. The Aaronic Blessing has the distinction of being the only Biblical element recited by the person who named our son, although there were other religious elements in the ceremony from non-celebrants, and we partly chose it because it was from the Hebrew Bible rather than the New Testament. I think the discrepancy between the Jewish and Christian translations is due to Christianity attempting to make it refer forward to the Transfiguration.

Outside a Christian context, halos are perhaps auras, emanating from the whole body but being particularly focussed on the head as a kind of shorthand. They’ve been truncated further until they just seem to form a glowing ring like the kind you’re supposed to get when you use a certain brand of toothpaste, and which is also seen hovering over Ian Ogilvy’s and Roger Moore’s heads in ‘The Saint’. I’m not sure about this, but I think it’s probably also associated with the Tabor Light, which is the uncreated light seen in the Transfiguration and also by Paul on the Road to Damascus. I tend to think of this as the bit of the Cosmos which, to quote ‘Roobarb and Custard’, “hasn’t been coloured in yet”. Everything else is created but the Tabor Light isn’t, and is also seen radiating from saints. The doctrine of the Tabor Light in this setting was a factor in the Byzantine Civil War of the fourteenth century. To me, it sounds markèdly non-Abrahamic, reminding me strongly of Hinduism and Yogic meditative practices and certainly none the worse for that, but I can certainly understand why the idea might lead to a war. Anyway, it seems to be the original reason why saints have halos, and for Protestantism this is a problem.

The very idea of saints as a separate class of Christians from other Christians is not accepted in Protestantism on the whole, although I presume some Anglicans do agree with it. This is because according to some passages in the New Testament, all saved people are saints. This appeals to me because it’s egalitarian, and another problem with saints in the non-Protestant sense is that people often pray to them in a similar manner to angels, which makes them intermediaries between God and humanity other than Christ, and nobody other than Christ, as God, is supposed to be revered in that way. God should be enough. Nonetheless, the idea of saints as a special category among Christians is popular, and non-Roman Catholics can be found to use figures of St Christopher and others as talismans. Although this isn’t Protestant, rather surprisingly the use of talismans also seems to crop up in Orthodox Judaism without condemnation, as it’s referred to in the Talmud. I would say this is an indication of a general tendency among human beings to do this, for better or for worse.

For some reason I can’t make sense of, although the general playful view of Christianity includes the idea that consciousness is not significantly interrupted before the person enters Heaven or Hell, which is not generally what the Bible is understood to imply, it also includes the idea of a Day of Judgement when people are selected according to their score of good and evil deeds. This doesn’t make sense because if they’re already in Heaven or Hell and still conscious and in existence, why are they being judged again at the end of time? Nonetheless people generally seem to include this in their mythos regarding the nature of the afterlife. Biblical Christianity would generally understand the situation to be that death is a temporary but probably very long-term cessation of consciousness followed eventually by a return to life in a resurrection body whose precise nature is not clear, which is the form in which one is finally judged by God. There are clear questions raised here about the nature of personal identity.

One of the surprising things about attempting to recount “Cartoon Christianity” was how much of it there seemed to be. I probably haven’t mentioned everything, but it took me a surprisingly long time to recount all of that yesterday. I’ve missed some things out, such as the idea that 25th December is Jesus’s birthday, which doesn’t correspond to events in the Gospels at all and is to do with Saturnalia and the northern winter solstice. I want to remind you that I’m writing this without my Christian hat on. I’m simply comparing more and less systematised belief systems. These can be seen in other, non-Christianised parts of the world, as with the contrast between popular and intellectual Daoism and in the more graduated distinction between different forms of Hinduism. In less wonderful circumstances there’s the idea of FGM being Islamic. In a somewhat similar vein, I’ve also heard people who regard themselves as Christian insist that the Bible forbids women from whistling, even quoting non-attributable “proof texts” to that effect. But all of this fulfils a psychological function, similar to that outlined by Marx – “the cry of the oppressed creature” and “the heart of a heartless world”. One of Marx’s mistakes was to assume this would disappear if material needs were universally satisfied. It seems to me that unless a socialist society manages to eliminate death, and maybe it would, who knows, humans would still be confronted with issues they found difficult to cope with emotionally, and therefore that they would still want to do things like imagine their loved ones were in a better place and pray for people and themselves. The ideas I’ve outlined do serve an important function in people’s lives even though they don’t seem to take them seriously, as a buffer against harsh realities, and although I am, for now, standing outside my own religious beliefs, it strikes me as significant that they persist, because they serve a function. 
It also seems to me that some secularist atheists assume these tendencies are merely childish rather than serving some kind of deep purpose which can’t be eliminated from the human psyche, regardless of whether they have any basis in reality. Nor does it seem very rational or scientific to assume that religion of some kind is not essential to humanity.

Cliché Christianity

The title of this post is perhaps a little unfair. The way the human mind and cultures work, we have a tendency to accumulate “folk” beliefs and stories, and it’s an important part of who we are. In the West, perhaps I should say in Christendom, there’s a kind of naïve version of something vaguely similar to the Christian world view which is, however, not really much like what most people calling themselves Christian seem to believe in reality. In what follows, I plan to trace the origins and significance of these beliefs and compare and contrast them with what Christians actually believe. Having said that, there is a problem in stating what we do believe, because it isn’t unified, although many soi-disant Christians will claim that it is. The Roman Catholic Church, evangelical Protestants and Jehovah’s Witnesses all have remarkably distinct views on the nature of things, and I should come out right now and say that my own beliefs have historically probably been closest to those of the last two. Yes, surprisingly my theology is in some ways more similar to that of the Witnesses than to twenty-first century evangelical Protestantism. This is partly because I’m a liberal Christian, insofar as I am one at all (thereby hangs a tale too off-topic to shoehorn into this post), and the Witnesses are strongly influenced by nineteenth century liberal theology, and partly the result of extensive discussions with Witnesses during which I became steadily more convinced that they’re wrong, possibly because of the way my mind works, although I did take on board some of their points. The view Witnesses have of mainstream Christianity, incidentally, tends to be quite stereotyped, painting it as substantially more like Roman Catholicism than most of it in fact is, but again this is beyond the scope of this bit.

Before I go on, I want to emphasise that I don’t think this in toto is what people really believe about the world, although I presume there are a few people who do, and I also want to point out that whereas I am in a sense Christian, for the purposes of this post I am merely placing two world views side by side in a kind of folklorish way. I’m not trying to convert anyone or promote the truth or otherwise of one over the other. It’s more about what their purpose is. This description is kind of cartoonish. It’s the kind of “Christianity” which shows up in comedy shows and Western animation. Interestingly, anime has a different version, and in Japan there’s also historically been a separate tradition of Christian-like religion which went its own way after the country closed its borders in the early modern period. That said, I do think it serves a comforting purpose in people’s lives, even if they don’t take it seriously.

Here we go then:

There is a male person called God. He’s a White man in a white robe and a long white beard who lives in the sky. He has a tendency to smite people with thunderbolts when they’re naughty and he tends to get angry a lot. He has a son called Jesus, who is also White and looks a bit like a hippy. He was born of a virgin on Christmas Day and told people to be nice to each other a lot, and is generally way more chilled-out than his dad. He also performed miracles, got crucified by the Romans and came back to life three days later. Forty days after that he floated up into space, presumably.

God punishes you when you’re bad and rewards you when you’re good. He keeps a ledger of everyone’s good and bad deeds throughout their lives. When you die, if you’re good you possibly turn into an angel (this bit varies) and go up into the sky to live forever in a place called Heaven, where angels sit on clouds and play harps. Heaven has a pair of pearly gates guarded by a guy called Saint Peter, who looks after the keys. If you’ve done more bad things than good, when you die you go to a hot, fiery place underground called Hell, which is ruled by a guy with goat’s feet, a pointy tail and horns on his head called the Devil or Satan. He carries a pitchfork and has ugly assistants called demons, who also carry pitchforks and will torture you forever. Satan isn’t White and nor are his demons. Hmm.

There are also angels, who are once again White: women with long blond hair and swan-like wings attached between their shoulder blades. They have white robes and look after you while you’re alive. They also have halos, as do saints and Jesus. These are glowing rings which float around or above the head. If you’re a really good person, you might get your own halo while you’re still alive.

Really good people are called saints. These people pray a lot and are very patient. They can sometimes perform miracles like Jesus. There are also bad people called Satanists, who always worship the Devil and make deals with him which bring success during life, followed by an eternity in Hell. You can sell your soul to the Devil. Witches worship the Devil too, and there are rituals, often cruel and violent, which they use to get what they want.

At the end of time, God will sit on a great white throne and judge everyone, sending them to Heaven or Hell. This belief doesn’t really fit in with the rest but it’s out there for some reason. There are probably quite a number of other inconsistencies I haven’t spotted yet.

Finally, there is doggy heaven, where all dogs go when they die, and this may be extended to other species apart from humans. I’m not sure what happens to farm animals in this scenario.

Again, I want to stress that this is by no means what people actually believe, on the whole. It’s a collection of fictionalised clichés floating around in the social ether which nobody takes seriously but seems to serve the function of comforting people and cushioning them from what they suspect is the truth – that there is no God and that you permanently cease to exist when you die, and that life is not and never will be fair. “Believing” in all this stuff is a bit like being a fan of a cult fantasy or SF TV series or stories, and serves a similar purpose.

I’ll take these points one at a time.

It’s fairly well-known that this version of God is based on the Greek high god Zeus, hence the thunderbolts, him being a thunder god. Although some parts of the Hebrew Bible do attribute human physical characteristics to the being referred to by the Name, the whole picture doesn’t add up to this. God is referred to in the Bible in various ways, one of which is something like “the Many-Breasted One” – El Shaddai, for instance in Genesis 17:1. Hence God is not a man. In fact, God is unlike any created thing, and any attempt to make an image of this being is sinful, and also necessarily inaccurate. I should point out that I may not in fact be expressing my own opinions here about the way things actually are, but I may well be expressing opinions about what Christians generally believe. In any event, the image we tend to have of God is as described above. In this respect, although it still counts as idolatry, I’ve found the film ‘Dogma’, which depicts God as Alanis Morissette, to be a useful counterpoint. In my aborted attempt to write an SF version of the Bible, I depicted God as a woman with long black braided hair, because she made the Universe from strands of fundamental aspects of reality, i.e. her hair. This is of course also metaphorical, but it probably reflects an image I have of God in my head, i.e. a woman with long braided hair. God is, however, very obviously not human and therefore not a woman. Nor does God live above us, but is everywhere. This is orthodox.

Jesus is the unique fully-human and fully-divine perfect person who came to Earth to die for our sins and rise again to life on the third day. He was limited in the same way as any human was, and also tempted as much as any of us, but never, not once, succumbed to that temptation. He was indeed born of a virgin. The purpose of his death and resurrection was atonement, and there are various views on what that constitutes, but its essence is to make humanity right with God provided one commits to Christ and repents of one’s sin. I have done this, and for a long time I believed in substitutionary atonement – that Christ stood in my place as the perfect sacrifice for my sin, so that I would be saved. There are other views, notably the ransom theory, that Christ was a kind of bait for the Devil to take which, because it was unjust, satisfied the operation of justice in the world, which is problematic for various reasons such as the fact that it seems to imply God deceived Satan. A notable view, because it doesn’t depend on the historical Jesus, is moral influence, that we are to imitate Christ as a paragon of virtue. However, most people who believe in moral influence not only believe in the historical Jesus but also in other versions of what atonement is.

Then there’s the Trinity. This is not in fact a single belief. There’s the ontological Trinity – the idea that God is simultaneously and eternally three-in-one as part of God’s essence – and the economic Trinity – the idea that each aspect of God reflects the history of the relationship between God and humanity. I have to be honest here and confess that I do not see any Biblical evidence for the ontological Trinity whatsoever, but this isn’t the same as saying that the Trinity is not a truly Christian belief because Scripture is not the only acknowledged source of authority for Christians, another being the Church. I do, however, believe that the economic Trinity is Biblical, notably because of the views expressed in the Pauline epistles about how the relationship between God and humans evolved over time. Jehovah’s Witnesses are unitarian of course, and among Roman Catholics I’ve heard that there are people who would include Mary as a person of the Trinity, which presumably then needs to be called something else.

It’s probably worthwhile to mention Jesus as an historical figure here. There are many metaphysically naturalistic, scientifically realist, Western anti-theistic atheists (which is what many people mean when they say “atheist”, even though they’re really only a subset of atheists) who also seek to deny the very existence of the historical Jesus. This position doesn’t in fact stand up to academic scrutiny, and most secular ancient historians accept that there was such a person, though not usually one responsible for miracles, born of a virgin or coming back to life. Denying the existence of the historical Jesus is a kind of over-statement of the case, similar to the definition of atheism as the absence of belief in deities rather than the presence of the belief that there are no deities. Neither of these is a reputable academic position. Rather interestingly, they are kind of “out there” in the same way as some of the above stereotypical beliefs are, and they appear to be emotionally motivated. It would seem more sensible for atheists to be at least agnostic about the existence of the historical Jesus rather than to assert baldly that no such person ever existed. I have my own doubts about the existence of the historical Socrates, but that doesn’t lead to me not taking Plato’s philosophy seriously, and the situation is similar – a non-royal figure who lived in antiquity. If you don’t believe in an historical Jesus, you should have similar doubts about the existence of various other people whose existence you are in fact more likely to accept.

Jesus was, however, very obviously not the White guy seen in all those popular religious paintings. He was more likely to have had a complexion similar to a Mizrahi. I should probably explain this for anyone who doesn’t know. Today, there are a number of different ethnicities under the umbrella term “Jew”. There are the Beta Israel, who are incontrovertibly Black and originate from Ethiopia. They’re probably not directly relevant to the discussion, but are worth focussing on for a moment. They have Fitzpatrick skin tone V and 4B or 4C hair, i.e. they would be perceived as Black by most White people, e.g. those with Fitzpatrick skin tone I, 1A fair hair and blue eyes, and DNA studies show that they are Jewish. However, they are also often denied Ḥok ha-Shvut (the Law of Return) even though they are observant Jews subject to racism in their own country, and it’s strongly tempting to believe that this is racist. That said, there was probably no time when the majority of Jews would’ve looked like Beta Israel people. The Ashkenazim and Sephardim are respectively the groups of Jews who have lived in Northern and Mediterranean Europe for centuries. I perceive them as White. However, the Sephardim are subject to racism from both Ashkenazim and some White Gentiles, some of it in connection with their skin tone, possibly – and this is a guess – because they’re perceived as Hispanic in the US. It is in fact very likely that a large number of Sephardim in Latin America don’t even know they’re Jewish. Finally, and this is an over-simplification, there are the Mizrahim. Some people claim, incidentally, that the very word “Mizrahi” was popularised in order to deny them their Arab identity, but I have no idea what kind of politics these claimants have, so I’m only going to mention it in passing. These people are Middle Eastern Jews, i.e. Jewish people whose ancestors have never been in Northern Europe or the Mediterranean.
The stereotypical White person I mentioned above would probably perceive them as Middle Eastern or Arab, although their appearance varies a lot. Now, I haven’t tracked this down and I could well be wrong, but I suspect that the Mizrahim are closest genetically to the Jews of Roman Judaea, and therefore that Jesus would’ve looked like a Mizrahi Jew. I think he would’ve had dark eyes, 3C hair and Fitzpatrick Type IV skin tone. In other words, he would probably have struggled to find the right makeup in many Western shops, which strikes me as a fairly good measure of what colour he was, although obviously I don’t think he would have worn any, and if he’d walked down various streets in Britain, people would probably have thought he was Asian – and of course they’d be right, but not in that way. I have a pet theory about Jesus’s skin tone as a Christian. I think God decided to make Jesus average in ethnic appearance so that we could all relate to him equally, provided we had an accurate image of how he looked. If we’re White, we see someone darker than us and it reminds us that those we are racist against are made in the image of God.

It is by no means a standard Protestant view that God has a tally of good and bad human deeds. Rather, God regards all humans except Jesus as bad but loves us anyway, and is prepared to overlook our sinfulness if we repent and commit to Christ. Heaven is not one’s reward for being good. Also, all sins are equal. Weeing in the sink is as bad as mass murder. This doesn’t appear to be the same as the Roman Catholic view, which divides sins into mortal and venial, i.e. more or less serious.

Christianity is also arguably physicalist. A person does not consist of a body and a soul with equal ontological status. To explain: Descartes expressed a popular view of the mind-body problem very clearly. He claimed that there are two types of substance. One is extended and never conscious, i.e. the matter we perceive all around us. The body is made of such matter. The other is unextended – i.e. it exists at a location, but only as a point in space, although it might move – and always conscious. This is not a Jewish or Christian view, although it has some resemblance to Islamic views on the matter. In Christianity, I would argue, and I’m not alone in this view, the person is seen as essentially physical, without a separate soul in the Cartesian sense. However, there is then an issue with the resurrection, because a resurrected version of you didn’t do any of the things you did unless there’s a connection between them and you which somehow makes them you, and it’s therefore hard to see them as responsible for your sin. There are possible solutions to this, but it’s important to recognise that the Bible only rarely appears to depict the soul of a human being as separate from its body, 1 Samuel 28 being one example. Jehovah’s Witnesses, of course, believe in physicalism.

Another thing Witnesses believe in is a Kingdom of God on Earth, and I would agree that this is the Biblical view. An ex-JW comedian once described their view of the coming Kingdom of God as a place where everyone was vegan and spoke Esperanto, and naturally I would personally love to think that’s in our future, except that I’d worry that Esperanto wasn’t a particularly good choice for a neutral international language. The Bible describes God as our Father in Heaven, but the book of Revelation shows the saved human race living in a giant shining cube-shaped city over 2000 kilometres wide, which sounds a bit like a Borg cube to me, but still. JWs also believe in what I think is the less Biblically-sanctioned view known as annihilationism, the belief that after the Day of Judgement you cease to exist unless you are among the elect. Unfortunately I don’t think the Bible does back this up, and I think it does require one to believe in Hell to be a truly faithful believing Christian. But “Heaven”, in any case, is in the physical Universe, not on another plane, and Hell is a state of eternal torment. I’m not going to pass over that without comment. I am not condoning the view or claiming it to be mine, but I am claiming that it’s an essential part of Bible-based Christian belief, as is the belief that anyone who has rejected Christ will end up in that state forever.

The description of the Devil corresponds to various non-Christian religious deities, but I don’t think it can be traced back to exactly one of them. It seems to resemble Pan quite closely, which makes sense because Pan is god of the physical world, and also the Minotaur and, I presume, various Middle Eastern deities. C S Lewis said of this image: “The fact that ‘devils’ are predominantly comic figures in the modern imagination will help you. If any faint suspicion of your existence begins to arise in his mind, suggest to him a picture of something in red tights, and persuade him that since he cannot believe in that (it is an old textbook method of confusing them) he therefore cannot believe in you.” I do in fact believe in the Devil, although the Biblical view, particularly taking the Hebrew Bible into consideration, doesn’t particularly depict Satan as evil so much as presenting the case for the prosecution. The reason I believe in an evil version of Satan is that the alternative, for me, is to believe in conspiracy theories, and I don’t. I don’t believe things in the world, and particularly in politics, can possibly go as badly as they do without some kind of organising principle behind them, and since I’m not about to believe in the Illuminati or something, I choose to believe that that force is not human, and is supernatural and powerful.

Satanists are usually not people who believe in Satan so much as kind of theatrical versions of anti-theist atheists with what strikes me as a rather Ayn Randian bent. Most of us grow out of that kind of thing when we’re about four and listen to the grown ups telling us to share and be nice.

I think at this point I should probably stop and hold the rest over for tomorrow’s post, as this seems to have got rather long.

Is There Any Point To Demos?

In the previous post, I mentioned demonstrations and the pointlessness or otherwise of going on them. In this post I want to go into more depth on that issue because I don’t think the answer is a simple one.

First of all, demos themselves are not simple. Although my image of a demo would probably be a large number of people parading through a large city in support of a left-wing cause with placards and banners, this is not what they always are, and in fact they aren’t even necessarily anti-establishment, although this last category always strikes me as strange and orchestrated with ulterior motives, which is probably how more conservative people view left-wing ones, hence the term “rent-a-mob”. Although I think of them as primarily a young adult activity, this isn’t borne out by my experience of them, since, particularly with peace marches, the people I’ve been on them with have often been in their sixties or older.

I probably should state how I used to see such protests back in the 1980s when I was myself a young adult. To my mind, at that stage the purpose of a demo was to change the minds of the general public, raise awareness of an issue and achieve some kind of governmental policy change. In other words it was highly practical in nature, with a clear, linear, cause-and-effect process intended. You don’t like nukes, so you protest against them and the government disarms, in an ideal world. This comes across to me now as rather naïve, because the history of that kind of political process is very long and its success may be limited. I don’t know when the first demo happened, but I’m guessing it may have been what led to the Peterloo Massacre, although the Peasants’ Revolt certainly sounds quite like a series of demos, among other things. The point being that some research may need to be done regarding their fruitfulness. It has been said that all but one of the Chartists’ demands were eventually met, the single exception being annual parliamentary elections. Whether that was substantially due to protests is another matter.

One particularly notable thing about them is that they are rarely reported in depth by the mainstream media. A national demo in Westminster on almost any subject might get a few seconds near the end of the TV news, but that’s it, unless there’s violence or the destruction of property of course. On one of the Gulf War demos, a particularly irksome banner carried by the so-called Revolutionary Communist Party read “VICTORY TO IRAQ”, and that got a lot of coverage. Someone ended up setting fire to it, so justice was served in a way, but that wasn’t mentioned. I’m not suggesting that the news media are biased politically, although they are of course, but an extra problem with their bias is that it’s towards making everything into a story, putting a negative spin on it, being sensationalist and looking for visually impressive images. Leaving aside any kind of overt political bias, this is bad enough and likely to cause major problems in the public perception of issues, and also likely to create or fuel anxiety and depression. Nonetheless, demos tend not to be reported much if they are peaceful, even if they are visually impressive.

The other major thing that happens when demos are reported is that the police state a much lower number of participants than the organisers. I don’t know why this happens, and I’ve obviously never been in the police, but given estimates based on how long it takes a protest to pass a particular point and how many people fit in a given area, the police figures seem to have been wrong for every demo I’ve been on, unless I’m overlooking something obvious.

I do not, however, think demos are pointless. There is, first of all, the question of establishment agenda. If there is a cause and effect relationship between big demos and policy change, it would be in the interests of the establishment to play down their influence and do something like say “well we were going to do it anyway.” It’s important to bear this in mind in the following discussion. It may just be that they are influential sometimes but “They” don’t want you to realise that. However, the rest of what I’m going to say will proceed on the basis that they never lead to changes in policy. I should probably define this more precisely. The assumption is that when the government is aware that a large demonstration has taken place on a particular issue, it leads neither to that government changing its policy on that issue nor to a change of government which then changes policy on that issue. This never happens. That’s my thesis.

Given that, there is still a point.

1991 was a pivotal year in my life for several reasons, one of which was the Gulf War. This led to me going on a lot of demos and doing various other things, and the incident I referred to above took place on one of those protests. By that time, I had reached the conclusion stated in the above paragraph. Now, of course cognitive dissonance leads one to rationalise one’s actions, but rationalisations can be valid. I was coming out of postgraduate work in continental philosophy, which had led to me seeing things in rather a different way than before, politically and otherwise (if there is an “otherwise”), and it’s probably quite telling that many of my fellow students ended up going into the theatre, which is perhaps unexpected to an outsider. This influence also existed in my life, and one of its manifestations was to see the point of a demo as emotional expression – to be able to stand up and say this is not in my name – and while that isn’t enough, it may be all one can realistically achieve. It’s a passionate public performance. And performance it is: it’s street theatre, and in order to express how one feels, perhaps undertake some kind of cathartic process, one might make it more elaborate, such as staging a die-in or all the other stuff I’ve done as part of the peace movement. The more “arty” a protest is, the better in this respect. It won’t make any difference directly to government policy, but it’s still therapeutic. The trouble is, though, that this approach could edge into self-indulgence and detachment from the issues involved. For this reason, there are other aspects of demonstrations which are also worthwhile.

One is that they can be educational. There are always links between issues, and people who are protesting for one reason may also be involved in others, and it helps one make connections between them. Another is that, since they are primarily emotional expressions, they can be encouraging. They remind one that one isn’t the only person affected or who cares, and that means that if they are coupled with actual activity on the issues concerned, they can lead to more concerted community efforts on the matter. To some extent, while I don’t want to diss the emotional expression angle too much, I would personally consider going on one nowadays, for me, to be more or less pointless, because I don’t currently express myself in that way. They don’t hit the spot for me, but that doesn’t mean I think that way of protesting isn’t entirely valid and worthwhile for others. That said, it would be an additional positive for someone to be sufficiently encouraged by going on a demo to take further action of some kind. But this has to be very direct. It isn’t enough to assuage one’s feelings by just walking down a street holding a placard, or to get other people to do that.

There’s a fairly well-known leaflet, sometimes handed out at these events, which takes the mickey out of them and emphasises the anodyne nature of some of them. In sarcastic tones, it describes how much of a waste of time they are, and also a lost opportunity, because they sometimes seem to aim at undermining the potential of getting a large mob of people together. Leaving their political leanings aside for a moment, the recent invasion of the US Capitol building was at least not an example of this, although its results were not positive at all.

What bothers me most of all about demos is that they can leave you feeling that you’ve done something when in terms of practical consequences you haven’t. The Socialist Workers’ Party in the UK seemed to suffer from this problem because so much of its activity, apart from selling newspapers, was “building for the next demonstration”. At the time I was going on lots of demos it was primarily a student party which members left soon after graduating, which apart from anything else made it very middle class while maintaining it was working class. It definitely did not mainly consist of people in typical working class jobs or the unemployed. I am of course thoroughly drenched in my middle-classness, and I fully acknowledge that fact. I have absolutely zero pretensions to be anything else and therefore recognise that there’s a massive chunk of experience and therefore praxis to which I have no direct access. Leaving the SWP aside, this kind of thing is a form of slacktivism – the kind of thing which happens today on social media where there’s limited real engagement. It’s nothing new.

In conclusion then, demos are probably not even supposed to be about changing the world so much as changing one’s own world. They’re performances and expressions of emotion, and entirely valid for that reason, but it should always be remembered that they are completely pointless if practised as one’s main form of political activism. It’s fair that they form a small part of that, and it’s okay just to do that if there are various obstacles in your life which stop you from doing anything else, but the world needs people who are drawn to an entirely different kind of progressive radicalism. I’m going to leave the question open as to what, because it’s not my place to lecture people on this. I’m White, middle-class, able-bodied and a first-language English speaker living in one of the richest countries on the planet. The world has had enough people like that thinking they know best.