How To Argue With An Atheist

Sarada has been waiting rather a long time for a book called ‘How To Argue With An Atheist’. It’s unclear whether this book is part of a series along with ‘How To Argue With A Racist’. If it is, I would expect it to be partly along the same lines as that book, which presents evidence in a logical fashion to dismantle the concept of a biological basis to ethnicity. All of that is very interesting of course, but unfortunately probably more interesting to someone who is anti-racist and therefore already persuaded. As I said in the post covering that work, it was disappointing because it didn’t address the actual problem it purported to. People don’t believe things for logical reasons, and as the book itself admitted when it quoted Swift, you can’t argue a person out of a position not arrived at rationally by using reason (he put it a lot better than I did of course). People very often believe things they want to believe, and the reasons they want to believe them tend to be that it helps them identify with other people and feel part of a community, or that they want a sense of certainty, or perhaps a need to feel they haven’t wasted their lives persisting in a folly which has had quite a big impact on how those lives have gone. On the other hand, many atheists do seem to believe they have arrived at their beliefs via rational means, and who am I to contradict them? Except that I’m not convinced that most of the things we hold dear are held for rational reasons, whether or not they’re true, so it would be odd if atheists, being people, were atheist as a result of logical thought any more than we theists believe in God for logical reasons. Frequently, we probably believe in God because we’re surrounded by other people who also believe in God, or we may feel a lack of control over our lives and therefore find prayer helpful because it gives us hope. Incidentally, that feeling of powerlessness is accurate, whether we’re theist or not.

It’s entirely possible that I am a brain in a vat being fed hallucinatory impressions of a version of a Universe which doesn’t exist. It’s equally feasible that I am the only conscious being and that “everyone” else is a mindless zombie. It’s also possible that life is a dream or that the world was created last Thursday with fake memories in everyone’s heads to make it seem like it’s older. On the whole, someone who believed any of these things and acted upon them, or more likely didn’t act because they’d consider action pointless in such circumstances, would be sectionable, and it would probably be a good idea for them to be sectioned as they’d be likely to put themselves in danger by giving up eating and drinking or trying to fly off the top of a high building. Hence on the whole we do accept a whole load of suppositions about reality based on faith. We believe, for instance, that the world has existed since long before we were born, that we are examples of conscious beings among others in a physical world, and that we’re awake and perceiving things fairly accurately. In particular, we tend to assume that our nearest and dearest are not mere automata. It would in fact be quite rude to act on a belief that another person was a mere thing with no thoughts and feelings, and it would also be considered manipulative and coercive to try to persuade someone that someone else was, as the current terminology has it, a “Non-Player Character” – the lights are on but nobody is home. The Numbskulls have left the building, as it were. For us theists, God can be as real to us as our closest friends and significant others. To be fair, most atheists are just that. Atheism is nothing other than the belief that no supreme supernatural person exists. Believing that does not imply that one acts on that belief or that one wishes to disabuse others of their opposing belief. Nonetheless there are also anti-theistic atheists, who do, with considerable reason, believe that theistic religious faith is positively harmful to society. But, to put it as patronisingly as possible towards theists, it could be cruel to disabuse a child of her belief in her imaginary friend, and the possession of that belief may fulfil some kind of function for her. Perhaps she’s an only child with few real friends. Likewise, if you believe I too have an imaginary friend as an adult, how do you know that that “person” doesn’t serve some function in my psyche? I have seen people lay into recently bereaved theists who also believe in an afterlife (the two do not necessarily go together, and one can believe in an afterlife without believing in God of course) in a brutal, callous fashion, telling them there’s no God and that their mother’s corpse is worm food because she ceased to exist when she died. This kind of approach is not helpful, is unlikely to persuade anyone and seems to be more about a rigid principle of asserting that God does not exist for the sake of the person who is uttering it than anything positive. This is not a made-up example, incidentally. I’ve seen it happen more than once. But of course such people are not representative of most atheists, or even most anti-theist atheists, even proselytising ones.

This last example can be turned round as “how not to argue with a theist”. Don’t argue with a theist when they are at their highest emotional need for their belief, and don’t argue to make yourself feel better or superior. Likewise, for a theist, don’t argue with an atheist when they are in a similar state. Don’t argue, for example, with a gay atheist who has just been harangued by a manic street preacher about their homosexuality, and don’t take that tone with them either. If you’ve been through a lifetime of homophobia and have been told that if you express love for the people you are naturally inclined to love, God will inflict eternal, infinite suffering on both of you for doing so, it’s hardly surprising that you’ve stopped believing in God, partly because it’s an absurd proposition that a supposedly loving God would do that. This emphasises the fact that a fair amount of the time, it’s we theists who are to blame for a former theist becoming atheist. It’s our fault. Consequently we might not even be the right people to argue with such atheists, and in any case it may be more about living by example and allowing God to express love and compassion towards someone rather than actually talking about religious doctrine.

Nonetheless, I do in fact have several arguments I tend to use to support my theism when asked. Before I come to them though, I want to talk about something rather tangential: what I would probably believe were I not theist. Because for me the position is not theism vs. atheism but theism vs. ignosticism, also known as theological non-cognitivism, and also to some extent theism and atheism vs. agnosticism. You don’t hear the word “ignosticism” very often, but it’s basically the position that religious language is neither true nor false but nonsensical. It’s like the famous “colourless green ideas sleep furiously”. Well, do they? Likewise a statement such as “there is a God” seems to presume that we know what we mean when we say the word, that it’s rigorously defined in a coherent manner. Maybe it isn’t. Moreover, not only might it not be, but the idea that it isn’t – in spiritual terms, that God is beyond the comprehension of a finite mind or, adopting the via negativa, that God can only be said not to be certain things rather than to have certain attributes – is actually a pretty respectable mystical idea held by people whom outsiders are likely to think of as theists. Hence I’m not just playing games when I say that if I wasn’t theist I’d be ignostic. In a sense I’m already there, and that position is less confrontational and less opposed to atheism than might be thought.

Even so, I do believe God exists, and when I say God I mean the unique omnipotent, omnipresent, omniscient supreme supernatural being who holds reality in existence and is not dependent on the physical Universe, who interacts with human beings and loves us. That’s the God I believe in and am in a relationship with. That doesn’t make me superior in any way to anyone else. It’s just a fact. Moreover, I may as well be honest and say I just am going to believe in such a God anyway and that my belief is likely to be recalcitrant to attempts, however rational, to persuade me otherwise. If you want a reason for my theism, I would offer you the following: it’s the result of a coping strategy deployed as a young child to handle separation anxiety. If, as a toddler, I found my parents absent, I would reassure myself that there was a supernatural being who took care of me and soothed my fears. There you go: I obviously wasn’t there in the ’60s because this is a personal memory from that decade. And I would say, furthermore, that I think this is probably the cause of many other people’s theism: our God is an ersatz parent imagined to help us cope with being left in a cot in a dark room with our parents nowhere to be seen. That doesn’t, however, mean that God doesn’t exist. We can after all hit on a true conclusion by pure chance. There is more to say about this though. Muslims have a concept of “fiṭrah”, which is the belief that we are all born with the innate knowledge that a single God exists and is good. Freud seemed to believe something slightly similar: that we begin with “infantile omnipotence”, that we are the Universe, and that only later do we become aware of things we can’t control. It’s always seemed entirely feasible to me that such things are true, and therefore that the assertion made by certain atheists that all babies are born atheist is incorrect. This is going to need some unpacking.

Atheism is, to quote the 2000 edition of the Concise Routledge Encyclopedia of Philosophy, “the position that affirms the non-existence of God. It proposes positive disbelief rather than mere suspension of belief”. Although the last clause concerns differentiating the position from agnosticism, the most interesting aspect of this definition, in a secular philosophical encyclopedia with no theological axe to grind, is that atheism is an actual conscious belief that there is no God. It isn’t enough to have grown up in Albania under Enver Hoxha, for example, and simply never to have been introduced to the idea of God – I’m guessing this was the case there, but maybe not, so substitute the real or imaginary regime of your choice if you like. That would be a person who simply lacks a belief in God, like I lack belief in something I’ve never thought about, such as the contents of an arbitrary chest of drawers in Ulan Bator. It’s a completely respectable scientific position to assert that a newborn baby isn’t even a conscious being, although I don’t agree with it. If you assert that a non-conscious object such as a neonate is atheist, it follows that that chest of drawers I just mentioned is also atheist, and that’s a somewhat peculiar thing to believe. Something which entirely lacks beliefs of any kind is somehow co-opted into the community of atheists, and that to me seems like projection. We don’t remember early infancy, and we can just as well project theism onto a person at that stage in their existence as we can atheism. Hence my separation-anxiety-motivated theism is no less valid than any atheism. Maybe it emerged from my own infantile omnipotence or my fiṭrah and I just parcelled off a bit of reality, of which I was already aware, and labelled it “God”. And it is God. Hence the reason I believe in God is that God has caused me to believe in God. God could arrange for us to perceive the divine in such situations. It isn’t necessary to the process conceptually, nor is it ruled out. And fine, you can prefer the psychological explanation, and that is also doubtless correct. It doesn’t contradict the existence of God though.

Somewhat related to this is the sense of the numinous people tend to have. We generally have a strong sense of the spiritual. Even my strongly atheist ex has this, which she associates with beautiful rural scenery. This illustrates what Paul Tillich calls “Ultimate Concern”, a term chosen in an attempt to be non-committal about its essence. Ultimate Concern is the numinous or holy, distinct from any profane reality. Hence it can include such things as the nation state for fascists and the coming communist utopia for Marxists, but also the usual things associated with religion such as God, enlightenment or the Dao. We have various senses, and those senses are sometimes mistaken. For instance, dreams are at best alternative perceptions of reality to those of waking life, and there are other examples such as the tingling of a limb one has slept on, the ringing of tinnitus or the visual impression of an afterimage. However, there seem to be no examples of senses which never have objective stimuli. Therefore, if the sense of the numinous is literally a sense, it would be highly anomalous if it never had such a stimulus, in the form of what I would painfully call God if forced to give it a name, to paraphrase the ‘Dao De Jing’. There is, though, a pretty obvious flaw in this argument, namely that the sense of the numinous may only be a sense in metaphorical terms. This sense is fairly easy to induce by the use of electromagnetic fields on the temporal lobe of the brain, and some people have a temporal lobe condition called Geschwind Syndrome which seems to involve persistent or repetitive religiosity. I personally think Paul’s Road to Damascus experience was a temporal lobe seizure, but this has no adverse effect on my belief, because to me that was just one of the ways God talks to us, so of course it would take that form. Some religious people would deny this, but why?

Evolution is true. Consequently, it’s fairly common for inefficiently organised aspects of the body to be selected against, since they’re a waste of energy and reduce fitness. This has not happened with the sense of the numinous, so it seems to have adaptive value. Either that or it has yet to be selected against. This doesn’t entail that God exists, but to me it suggests that religious belief serves a biological function, and also that the belief some anti-theist atheists have that religion has been superseded and can be eliminated is an unwarranted leap of faith. Even if this faculty is not influential or doesn’t exist at all, the psychological bonding aspect of ritual and belief means that irrational beliefs which arise in groups due to happenstance will sometimes be retained and come to serve a function. The idea that religion even can be eliminated seems to be based on the supposition that people are primarily rational animals and can function well without it. I think it’s pretty clear that what actually happens is that, if people are no longer able to sustain their previous religious faith, they develop new beliefs to serve that function as time goes by, such as belief in flying saucers or fortune-telling. It just is going to happen because that’s what people are like.

Another aspect of evolution being true is the oddity of a line of primates living on the Afrikan savannah developing the ability to do decorative embroidery, drive cars and understand nuclear physics. None of that seems to follow from the ability to forage for berries and insects in a tropical grassland or to chase antelopes until they’re exhausted and then slit their throats with a sharpened stone. Clearly organisms are flexible and don’t just follow the rules like computers, but I have to say that, though it irks me, this capacity we have is a bit strange and unnecessary. It might also mean we’re completely wrong about everything, and in fact in scientific terms, maybe we are, but is that the hand of God? I don’t know. Just seems a bit suspicious, is all. By the way, this is the only concession I make to the Design Argument for God’s existence. Nothing else suggests that to me: the Universe is indeed fine-tuned for us in terms of physical laws, constants, number of dimensions and so on, but that just suggests there are a load of other universes with different properties which are completely empty or rather boring. Humans evolved on a possibly unusual planet in an unusual galaxy and may have been extraordinarily lucky to exist at all, but the Universe is a big place with countless stars and planets in it and it’s easily possible, though infinitely depressing, that we’re the only example of intelligent life in it, so that doesn’t imply anything either. And so on.

Finally, there’s prayer, and there are good rational and psychological reasons to believe in this. My case for prayer is this: if one prays for something specific, that will be followed by events which are improbable in ways relevant to the topic about which one has prayed. There is of course the element of pareidolia to be considered – we may be seeing patterns which aren’t there, as our brains are wont to do, as with constellations for example. Improbable events also happen all the time anyway, and we may be engaging in cherry-picking, choosing to ignore all the times it doesn’t happen. I can give examples from my own life, but I don’t need to because you the reader can just pray and see what happens. However, even if prayer works it doesn’t mean there’s a God, because for all we know we might simply be generating some kind of psionic force like a poltergeist with our minds, or perhaps doing so as a religious community such as a church, to achieve the same ends. But the trouble with this belief is that it’s arrogant and likely to lead to delusions of grandeur, so it makes more sense to attribute prayer to an external force simply for the sake of sanity.
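To make the cherry-picking worry concrete, here is a minimal sketch, in Python, of how often at least one “one-in-a-hundred” coincidence turns up purely by chance if you notice enough events in a week. The probabilities and counts are invented assumptions for illustration, not data about prayer or anything else.

```python
# A minimal sketch of the cherry-picking worry: if you watch enough
# independent events, "improbable" coincidences become near-certain.
# The probability and counts below are invented purely for illustration.
import random

random.seed(1)

P_COINCIDENCE = 0.01   # assumed chance that any single event looks "relevant"
EVENTS_PER_WEEK = 200  # assumed number of noticeable events in a week
TRIALS = 10_000        # simulated weeks

weeks_with_a_hit = sum(
    any(random.random() < P_COINCIDENCE for _ in range(EVENTS_PER_WEEK))
    for _ in range(TRIALS)
)

# With these assumptions the large majority of weeks contain at least one
# 1-in-100 coincidence, so remembering the hits and forgetting the misses
# proves very little either way.
print(f"Weeks containing at least one 'improbable' event: "
      f"{weeks_with_a_hit / TRIALS:.1%}")
```

Under these made-up assumptions most weeks contain at least one such hit, which is the cherry-picking problem in miniature; it cuts against naive claims for prayer, but equally it doesn’t settle anything on its own.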

None of these are clinching arguments by any means. Nor do we have clinching arguments for all the other stuff I mentioned at the start. Maybe life is a dream and everyone else is a robot, maybe we’re all brains in a vat which sprang into existence from nothing last Thursday. Or maybe we are the products of millions of years of evolution living in a society consisting of conscious individuals acting with a purpose. Likewise, maybe God doesn’t exist because of all the counterarguments to my arguments, which are in any case not that strong. Or maybe God exists; and even if there is no God, it might still be inevitable that one tends to acquire irrational beliefs, so why not this one?

But of course there is a God.

Rachel Doležal

Most people probably don’t know Nkechi Amale Diallo by that name, hence the title of this blog post, and immediately I have a dilemma. Just as I’d prefer not to be called by my former name, I can only presume that she would prefer the same, and the question arises of whether this should be respected or not. This is an example of the deeply problematic nature of the behaviour of the person involved, because it isn’t clear if her name change should be respected or ignored. On the other hand, most people won’t remember her former name anyway, so it may not be important whether I mention it or not. However, this name – Nkechi Amale Diallo – is of Nigerian origin and not from her own tradition. Her birth name was Hebrew and Indo-European, and I choose to mention it to illustrate the point – Rachel Doležal. And now I don’t know if I should’ve done that or not.

This person is well-known as the former president of the Spokane chapter of the NAACP, and was born to White parents with no known Black ancestry except in the trivial senses that our species originated in Afrika and that Caucasians were all dark-skinned until a few thousand years ago. They were of Eastern European extraction and there was also Swedish ancestry. She was born to them in Montana in an almost exclusively White area, the younger of two biological children. Further children, all adopted, were Black, and had no contact with other people of their ethnicity. Her parents home-educated her, had an authoritarian parenting style and were fundamentalist Christians. Her birth certificate names “Jesus Christ” as being in attendance at her birth. Her parents deny this version of her childhood but her siblings agree that it’s accurate. As children they thought they had a “skin disease” because of their skin tone. She then went to a Christian university in Mississippi followed by the historically Black Howard University in Washington DC. At some point in her early adult life, she began to identify as Black herself, used dark makeup on her skin and adopted “Black” hairstyles. In 2014 she became president of the Spokane chapter of the NAACP, the civil rights organisation for racial equality. Spokane is apparently also a particularly White city. She then reported receiving hate mail, but it had been placed in a box to which only she and one other person had access; in the ensuing press coverage her parents publicly stated that she was White, like them, and she resigned from her post. They also said she had mental health problems. Up until that point she had been described as biracial, and apparently when this first happened publicly she chose not to correct the account. She also lectured in Africana studies, concentrating on the significance of hair, and is an artist.

Understandably, a lot of wrath was directed against her for cultural appropriation, and it seems entirely clear that this is what this is. To state the obvious, she has not had the life-long experience of prejudice that genuinely Black people have, and consequently it’s entirely clear that she can opt out of “being Black” at any time. She also claims, though, that ethnicity is a social construct and therefore performative ethnicity is valid. This is all very complicated. It should go without saying that what she’s done is highly inappropriate and that she isn’t Black, and that her claim to be, in her words, “transracial”, isn’t a thing.

Trigger warning: The next paragraph is going to talk about Janice Raymond and gender-critical stuff, so skip what I’m about to say if you like.

Janice Raymond and other gender critics have likened M2F transition to blackface – the caricature by White people of Black people using the likes of makeup. She also claims that people do not alter their appearance to make themselves more like those of a different ethnicity. But they do. In particular, Black people use skin-lightening creams, disguise and alter their hair and even undergo lip and nose reduction operations, to some extent because whiteness is socially constructed as superior to what are seen as deviations from it. To some extent the same kinds of thing happen the other way round. Raymond’s point is that you don’t change your ethnicity by doing this, and there’s a wider point that it’s also extremely racist to “black up”, but for decades I found her analogy completely convincing and that was a major reason for not transitioning. However, I don’t want to turn this into a discussion about gender identity because it distracts from the main point. What I do want to observe is that Raymond missed the practice of Black people altering their appearance to be more like White people, and I wonder if this is because she is herself White, which raises issues regarding her commitment to anti-racism for me, although I have to confess I know very little about what she’s done in that area.

Okay, skip to here.

Black people do make attempts to alter their appearance, but as far as I know they don’t claim to be transracial in doing this. A White person isn’t in a position to comment on this except to say that it seems fundamentally different from what Nkechi Diallo does, and I say “does” because it’s ongoing. She continually decides to style her hair in particular ways, for example, and uses dreadlock wigs. For some reason this in particular seems distasteful, probably because it is literally putting on a costume. That said, she has Black sons and is therefore a member of a biracial family; her sons experience racism at school, and as a family all of them experience it in the community, such as being stopped by the police a lot. As a family, they actually are subject to racism in a structural way.

The story, as usual, isn’t quite as simple as it looks. On the one hand, Nkechi Diallo has been found guilty of benefit fraud because she continued to profit from her art and writing while claiming welfare. As such, she does count as a criminal, and this could be seen as reflecting on her character and casting her other actions as fraudulent. Regarding the hate mail, she’s said to have received a lot more of that than previous Black presidents of that chapter of the NAACP, and it’s been suggested that she manufactured hate crime incidents to draw attention to herself. Even so, other people have received hate mail with the same handwriting and the same distinctive layout of the addresses on the envelopes, so the possibilities here seem to be either that she wrote all the hate mail or that there was a genuine hate crime incident. Although Spokane is only 2.3% Black, about the same as Southampton (and it’s also about the same size as Southampton), she has raised the profile of Black Lives Matter in the area, and there have been demonstrations and actions which probably wouldn’t have happened there were it not for her. Her family is also demonstrably subject to hate incidents, including her youngest child, more or less from birth, and this is incontrovertibly wrong whether or not she’s responsible for the initial issue.

There are several colliding narratives here. One is that of criminalisation. She is fraudulent, and it’s possible to extend her benefit fraud, which is partly motivated by her loss of income when she was outed, to her claim that she’s Black. Alternatively it’s possible to use a mental health narrative and say she’s deluded and that her benefit fraud is the result of being victimised and losing a source of income. Then there’s her own social construction narrative, that she’s transracial. Although it is in fact true that racism tends to focus on a very small number of biological traits compared to the whole person, it’s only really possible to assert this if your view of semantics is internalist. I will come back to this as it has enormous implications.

A couple of other things. She claimed a specific Black man as her father, and he didn’t deny this was so. Looking at this in the context of her own family, most of whose children were adopted, it certainly seems to make sense that her concept of who counts as a child and a parent is not biologically-based, and this would also seem to be the right way of looking at it because otherwise adopted children would be seen as second-class members of families, and absentee fathers could also be seen as having less of a claim on being parents than step-fathers. She has also, of course, changed her name, she says because her name is now so notorious that she can’t get a job in her field of expertise under her birth name. Some people also object to the fact that her name now translates as “Bold Gift Of God”, but it might be in order to point out that a man named “Jonathan Kühn” would have a name of exactly the same meaning and this would not be considered problematic even if he wasn’t Jewish. Nonetheless this does seem like cultural appropriation – it’s a thoroughly Nigerian name. And one final point: she actually has Native American ancestry and seems to have completely ignored this fact, which seems very odd to me considering her appropriation of Black ethnic identity.

As an armchair psychologist who is in fact somewhat qualified in actual psychology at degree level, I believe I can see another version of this story. A White girl is born to authoritarian parents in an almost completely non-Black community. She has three adopted Black siblings who consider themselves to have a skin disease because their skin is a different colour from that of everyone else they know. After an abusive childhood, she disowns those parents and identifies more closely with her siblings and their Blackness. She has no wish to identify with her abusive, authoritarian and fundamentalist White parents, so instead she identifies with her siblings, and to her this is a solution which makes sense. She also drifts into identifying as Black: rather than actively asserting that she is, she accepts the label without correcting the people who apply it. As an adult, it’s been suggested that she receive therapy for her condition, but she denies that she needs it, and in fact that also makes sense, because to her she has the solution and is congruent with it. I don’t want to pathologise the situation, and I recognise that it’s deeply problematic, but she has found an answer. But that may of course just be my psychobabble, and I may be committing the sin of purporting to have more knowledge of her than she has of herself.

Here’s why she’s wrong though.

The experience of ethnicity in the twenty-first century West is considerably centred on skin tone, although other factors also come into play such as hair, name and language use. What leads to someone being Black is the predominant perception of their appearance, particularly in their childhood, and that of their usual biological relatives. To be absolutely literal, it doesn’t depend on genetics or gene expression, although it almost always will. Transracialism does occur in certain circumstances. For instance, a Black child brought up in an entirely White environment with no perception of other Black people, as has happened in Ireland, is transracial, but still Black. Nkechi Diallo sees ethnicity as internal, which is incorrect, because in terms of semantics nothing is internal. This can be illustrated by the so-called “‘fis’ phenomenon”. A small child calls a fish a “fis”. An adult tries to correct them, saying “it’s ‘fish’, not ‘fis’”, and the child replies, “yes, I said ‘fis’”, being unable to produce the difference between an alveolar and a postalveolar voiceless fricative. But the child has succeeded in saying “fish” if the people hearing her understand her to have meant “fish” rather than “fis” (which as far as I know is not an English word). It’s the consent and perception of the community that defines when, for example, a baby says her first word. Ethnicity is the same. While contemporary Western culture makes skin tone meaningful, and it’s possible to manufacture highly artificial situations in which a person with no alleles linked to dark skin would be perceived as Black (for instance, she might have been blacked up every day from birth), these situations are unrealistic and too “playful” for such a serious matter. The judgement of ethnicity is not made every day by genome sequencing and the tracing of family history, but in the same way as other kinds of meaning are made – externally to any sense a person might have of their own ethnicity. And this is why Nkechi is wrong – because semantics are external. They are not in the head, or at least not in the head of the person who thinks they know what they mean.

This is actually a really good example of why philosophy is important and relevant to everyday life. Hilary Putnam was a semantic externalist and wrote an influential pair of papers on the subject, called ‘The Meaning Of “Meaning”’ and ‘Meaning And Reference’. Oddly, I actually have a major issue with the claims of these papers even though I do believe in semantic externalism. Putnam imagines a twin Earth which is indistinguishable from our Earth in every respect except that water is a different substance, although it does all the same stuff as water does here – e.g. it’s essential to life, freezes at the same temperature, is less dense as a solid than as a liquid, and covers 71% of the planet’s surface, and so forth. When a native says “water” on this planet, they have the same mental state as someone in the same circumstances on Earth, but they are not referring to water, because the meaning of “water” is not in anyone’s mind but “out there” in the world. This has led to the further claim that “water is not necessarily H₂O”, and I disagree with that. There is a sense in which water is not necessarily H₂O, but that’s because different isotopes of hydrogen might replace the protium which is in most water, and this depends on a semantic network: oxygen-16 and oxygen-18 are different isotopes of oxygen but are still called oxygen. If we used different words for different isotopes of oxygen, the statement “water necessarily contains oxygen” would not be true, but there would be a history behind that claim which led to oxygen-18, for example, being called something else.

This can be transferred easily to the ethnicity situation. People with long-term heritage south of the Sahara are usually Black, but there are other examples of Black people such as Melanesians and Australian Aboriginals, and in fact the genetic variation among Black people south of the Sahara is as great as that between Black Afrikans and Melanesians, though an interesting genetic difference between Melanesians and Black Afrikans is that the former have Denisovan DNA as well as H. sapiens DNA, and also Neanderthal DNA. None of this is going to matter to a white supremacist, a fairly naïve white racist recruiter in a job interview or a racist white police officer. This is because semantics are external, and it’s not just an ivory tower philosophical point but is hugely significant.

Consequently, Nkechi Diallo is not Black. She is in a sense biracial because of her combined Caucasian and Native American heritage, which oddly she has chosen to ignore. That said, she did a lot to raise the consciousness of people in Spokane regarding racism, and this shouldn’t be ignored, and her family is plainly biracial and subject to racism, and moreover, her actions have led to people close to her being harassed, and this really is not fair and loses sight of the original problem.

The Luxury Of Ignorance

Sometimes it’s important just to keep quiet so as to clear the decks for the words of the othered, and I’ve been doing this for a while now. Due to the Black Lives Matter activism, I’ve been saying nothing on here for quite some time. As I type this, I have the unusual opportunity to be more thoughtful and terser than usual since I won’t be posting it for several days. Whether I’ll actually bother to be salient, considerate or brief is another matter.

In the absence of the opportunity to go out and actually do anything, and it must be borne in mind that outdoor public protests are a form of performance art which is intrinsically valuable but also politically inconsequential, I have been reading. Clearly, listening to people who are not white would be more productive, but we’re still isolated so the chances of that are non-existent unless you count online interactions, which I am having. And oh goodness am I ever having those! More on that later though.

This is already getting rambling and chaotic, but I’m going to stick with that. What I am going to try to avoid doing, though, is “whitesplain”, if that’s a thing. There’s plenty of good material out there about racism written by non-whites, so my task here is to contribute my own perspective insofar as it might be helpful. Probably not very, as the world’s whitest woman. But I have been reading, two books in fact, so maybe approach this as a review of those? There’s more though.

The first was ‘Why I’m No Longer Talking To White People About Race’ by Reni Eddo-Lodge. This was great. I have nothing bad to say about it. One significant thing to take away from it is the idea that being “colour-blind” is not good enough. This is easily illustrated, though not in the book, by the use of AI algorithms on employment candidates and facial recognition. AI learns what current practices are in a particular area, which is why, for example, generative adversarial networks can be trained to produce such convincing images of faces for non-existent people. This means that it may pick up on tacit prejudice. It can recognise white people’s faces, but has been known to come unstuck with those of Black people, likening them to gorillas or simply not recognising their faces at all. This can be a problem with photo ID when run through an IT system without human intervention. A similar problem arises when applying machine learning to curricula vitae. In this case, the model compares the profiles of successful candidates to those of applicants, and ends up choosing those who are most similar to the people already in particular jobs. The result of this is sometimes that the algorithm will automatically reject non-white candidates, because their names are different and probably also their qualifications, skills and experiences. In other words, an apparently colour-blind approach does little more than freeze racism as it currently operates, and this is why affirmative action is necessary. “I don’t see colour” is inadequate. It’s a lie, and if it’s true it’s harmful to the cause of anti-racism. Other than that I don’t have a lot to say about Eddo-Lodge’s book, not because it isn’t good but because it’s so good I feel like I can’t add anything to it, not to mention that I shouldn’t. Its realm of discourse is mainly non-scientific and therefore experiential, and that experience is not mine because I’m white. I will have a couple of things to say about that later though.
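To make the CV-screening point a bit more concrete, here is a minimal, hypothetical sketch in Python of how a model trained only on past hiring decisions can reproduce historical bias without ever being shown ethnicity directly. Everything here – the data, the feature names, the numbers – is invented for illustration; it isn’t a description of any real vendor’s system.

```python
# A minimal, hypothetical sketch of how a CV-screening model trained on past
# hiring decisions can reproduce historical bias. All data is synthetic and
# the feature names are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two equally skilled groups; "group" stands in for ethnicity, which the
# model never sees directly.
group = rng.integers(0, 2, n)            # 0 = majority, 1 = minority
skill = rng.normal(0, 1, n)              # genuinely job-relevant signal

# A proxy feature correlated with group membership, e.g. whether the name or
# school "looks familiar" to past recruiters.
familiar_name = (group == 0).astype(float) * 0.9 + rng.normal(0, 0.3, n)

# Historical hiring decisions: skill mattered, but so did the proxy,
# i.e. past recruiters favoured "familiar" candidates.
hired = (skill + 1.5 * familiar_name + rng.normal(0, 0.5, n)) > 1.5

# Train only on features a naive "colour-blind" system would keep.
X = np.column_stack([skill, familiar_name])
model = LogisticRegression().fit(X, hired)

# The trained model now recommends majority-group candidates far more often,
# even though skill is identically distributed in both groups.
scores = model.predict_proba(X)[:, 1]
for g in (0, 1):
    print(f"group {g}: mean recommendation score = {scores[group == g].mean():.2f}")
```

The point isn’t the particular numbers but the mechanism: a “colour-blind” model that never sees ethnicity can still learn a proxy for it from whatever the historical decisions rewarded, which is why simply leaving the sensitive attribute out of the data doesn’t make the system fair.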

Adam Rutherford’s ‘How To Argue With A Racist’ is the other book of interest, and I have considerably more to say about that. Rutherford is a geneticist of mixed heritage from Ipswich and his angle is science. This means his book is logos whereas Eddo-Lodge’s is pathos and ethos, although ethos is also prominent in his. As far as it goes, he reiterates many scientifically entrenched positions, as well as their history, which I think are probably well-known, but they may not be. In particular, he makes the observation that if you were to select two people arbitrarily from two tribes of the San people of the Kalahari, the chances are they would be more genetically different from each other than a Western European and a Sri Lankan are. I’ve more or less mentioned this before. He also talks about the variable resolution involved in genome sequencing. The ABO blood group system has been known for a long time and represents one set of alleles which have been used unsuccessfully to distinguish between races. This could be one data point, and other genes could be added. At a certain point early on in this process, the division of human diversity seems to resolve itself in the same way as the early twentieth-century fivefold division into the cringeworthily-named Negroid, Capoid, Australoid, Mongoloid and Caucasoid. It’s possible to jump on this and go “Aha! They were right!” except that this degree of resolution is arbitrary. Step it up one notch and a small group of people living in the mountains of Pakistan becomes distinct too, a group which nobody has ever considered to be an ethnicity apart on the same level as “Whites” or “Blacks”. He also makes the nowadays pretty obvious observation that all the groups blend into each other and are merely peaks in an undulating landscape. He goes on to discuss prowess in sport, which I think was probably supposed to make a connection with his intended audience but which failed in my case to do so because I find the topic so tedious, while recognising that it’s a worthwhile exercise to make this comparison. In this case he notes that many of the best Olympic long-distance runners are from East Afrika, and in fact from a specific settlement in Kenya (I may have this wrong because I know nothing of this), but that there are hardly any Afrikan Olympic medallists for swimming events. This is nothing to do with genetics. Although the concentration of the relevant alleles among these Olympic runners is higher, it isn’t particularly high either in Kenya as a whole or in that particular part of Kenya, and the low number of Black Olympic swimmers, who could be expected to be more successful given that the same kinds of allele might in isolation be expected to help people swim faster, reflects the fact that Black people are less likely to have learnt to swim at all, a fact which shows up in drowning statistics. Some Black people have internalised the idea of a genetic difference to the extent that they actually say they have denser bones and can’t float as easily. This has no basis in fact at all.

From physical prowess, Rutherford moves on to intelligence. Here he makes the interesting move of looking at the behaviour of participants in online pro-racist fora such as ‘Stormfront’, which turns out to be quite familiar. Those who attempt to examine differences in intelligence according to ethnicity usually claim that they are exploring a taboo and neglected subject. This is by no means the case. In fact, there have been numerous attempts to demonstrate differences in intelligence of this kind, sometimes in one direction and sometimes in the other. The number of genes contributing to variation in intelligence is very large indeed because, as he puts it, the human brain is the most complex object in the known Universe. In fact this isn’t so – that honour belongs to the Mandelbrot Set, along presumably with other similar sets such as the Burning Ship and so on. It remains true, though, that the human brain may be the most complex physical object of which we are aware. Because of the very large number of genes influencing intelligence, many of which are unknown, the most genetically diverse populations could be expected to be the most intelligent, which would obviously mean the San plus other peoples south of the Sahara. This isn’t what shows up, but the observation is made that average IQ tends to increase with economic and social development. The lower average IQs found in Afrika correlate quite well with lower economic development. All this leaves aside, of course, the question of what intelligence is and what IQ tests measure, although there are clearly cultural biases in these, yes, even unto Raven’s Progressive Matrices if you know what those are. Hence the variation in intelligence found within and outside Afrika is a legacy of European colonialism, as might be expected, and is to do with the likes of poorer health, malnutrition and stressful domestic environments rather than anything genetic. Prima facie it could be expected that a randomly selected individual from Afrika brought up in the same environment as a randomly selected East Asian, to use another racial stereotype, would end up scoring higher on an IQ test. This, incidentally, is another example of why colour-blindness is a bad idea – it will tend to obscure the fact of the major disadvantage at which Afrikan nations south of the Sahara are placed economically due to Western imperialism, not to mention culturally.

This, then, is pretty much hunky-dory stuff. However, I did find one point of contention regarding what’s apparently known as the genetic isopoint. It’s well known that since each individual has two parents, the point arises pretty quickly where the number of great-great-...-grandparents one notionally has exceeds the human population of the planet at the time. Eventually we must reach the point where every person living back then is either ancestral to everyone living now or has no living descendants, and according to this book this took place around 3,500 years ago, at the time of the European Bronze Age. I don’t have a problem with the idea that the whole species is a single family, but the idea that it was that recent is harder for me, and probably many other people, to accept. This is because the North Sentinelese are said to have been isolated for sixty thousand years, the peoples of Papua often for forty thousand, and there are groups of people in the Amazon who are said never to have been contacted. It turns out that this is often transparently not so. Apparently “uncontacted” is not used literally but means “uncontacted by the West”. It’s notable, for example, that every community in the Amazon said to have been uncontacted speaks a language which is not a linguistic isolate and therefore is quite recently derived from other people living in the area. Also, the fact that the languages concerned are known in spite of not having their own written form strongly suggests they have been contacted, although that conclusion isn’t inevitable of course, as they may simply have been overheard. Therefore it’s entirely possible that the very idea that there are uncontacted tribes is in fact racist. That said, if there are genuinely uncontacted tribes whose genomes have therefore never been sequenced, it’s also possible that the genetic isopoint is further back, and in fact in the past it would’ve been, because for example there was probably no contact on either side of the Wallace Line separating Eurasia and Oceania for hundreds of centuries, or between the Americas and Eurasia. This is rather odd, because it means that to some extent the further back you go, the further back the genetic isopoint recedes. In personal terms, a rather disturbing probability is that very many of these closer relations were forged through rape.
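The “ancestors quickly outnumber the planet” step is just repeated doubling, and a rough back-of-the-envelope sketch in Python shows how fast it bites. The generation length and the historical population figure are loose assumptions of mine, not numbers from Rutherford’s book.

```python
# A rough back-of-the-envelope sketch of why your notional ancestor count
# quickly exceeds the historical world population, which is what forces
# pedigree collapse and, eventually, a genetic isopoint. The generation
# length and population figure are loose assumptions, not data from the book.
GENERATION_YEARS = 28          # assumed average generation length
WORLD_POPULATION_THEN = 50e6   # very rough assumed population a few millennia ago

ancestors = 1
generation = 0
while ancestors < WORLD_POPULATION_THEN:
    generation += 1
    ancestors *= 2             # two parents per notional ancestor

print(f"After {generation} generations (~{generation * GENERATION_YEARS} years), "
      f"you would notionally have {ancestors:,} ancestors, "
      f"more than the assumed population of {int(WORLD_POPULATION_THEN):,}.")
```

Pedigree collapse – the same people appearing in your family tree many times over – is what stops the doubling in reality, but the arithmetic shows why, once populations mix at all, a surprisingly recent isopoint isn’t absurd on its face.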

This form of oppression brings me to the issue of gender relations generally. There is an issue concerning feminism and anti-racism which I’ve never quite been able to understand, where it’s claimed that feminism has a tendency to be very “white”. For this reason, Alice Walker came up with the word “womanism”. The fact that it’s from Alice Walker is potentially problematic as she has herself been called anti-semitic due to her endorsement of David Icke’s views, although this happened much later than her introduction of the concept in 1979. I have to admit I don’t fully understand Walker’s idea beyond the likelihood that feminism and Black liberation must surely connect. It’s been claimed that second-wave feminism focussed substantially on the liberation of White women and the Black Power movement on the liberation of Black men. It’s also claimed that some of the suffragettes were white supremacist. Eddo-Lodge claims that feminism as practiced in the British public arena in recent years still tends to be racist. She says that racism receives much less attention than sexism in this context. It might be thought that similar experiences and lessons could be shared between feminism and anti-racism. However, since I’m both White and a trans woman I can only comment on this as an outsider and since I’m an outsider in two different ways here, I’m not going to say much more. I am, however, going to say something else which is relevant to my situation.

Every year there’s a Transgender Day Of Remembrance. The purpose of this event is to commemorate the lives of trans people who have been killed by anti-transgender violence. This is honoured more in the breach than the observance in my experience. It’s mostly ignored and many people don’t seem to be aware of it. Even among trans people, the fact that it was the only specifically trans event in the global calendar (which Pride obviously isn’t – Pride is wider and that’s fine, but it isn’t specifically trans, and I personally don’t want anything to do with it because I see it as having been co-opted by neoliberalism and as tending to be dominated by white cis gay men) led to a second event being added – Trans Day Of Visibility. I’m not a fan of that either, because for many trans people every day is a trans day of visibility, since they don’t have passing privilege, and having two days also detracts from the other one. Why is this important, and what has it to do with anti-racism? Well, if you see photos of the victims of anti-transgender murders, you might notice that they almost always have something in common. They are more or less always non-White trans women. Note that I’m calling it anti-transgender, not transphobic. A phobia is an irrational aversion to a stimulus, like xenophobia, which is almost synonymous with racism. Here are two ways of looking at Transgender Day Of Remembrance. One is to see it as a distraction from the constant everyday violence and murder of women which have been going on since time immemorial – there may or may not be a special day to commemorate the victims of domestic violence and other situations where women in general have been murdered, and from that perspective it seems unfair that there should even be a special day of this kind for trans people, although it does strongly suggest that there should be such a well-publicised day for cis women victims of violence. The other is to see it as a day which remembers Black and Hispanic murder victims, which happens to be what it is.

There’s a fairly well-known quote from a gender-critical source whose name I’m afraid I’ve forgotten, which is along the lines of “they expect us to be shocked by the number of trans women who commit suicide [sic]. They don’t get that we wish they were all dead”. On the whole, through most of my life, I would have had complete sympathy with this sentiment, and although it takes the quote away from its context, which may be unfair, it seems feasible to conclude that you could replace the idea of suicide with murder and still express a similar sentiment, except that it wouldn’t be murder so much as justifiable homicide, as in the trans panic defence for example. But there is even so a problem with the idea that the deaths of trans women should be celebrated, which is this: it’s celebrating the deaths of primarily non-White people, and is therefore tantamount to racism. White trans people are much safer from the threat of dying at someone else’s hand than non-Whites. Therefore there’s a racist element to supporting violence against “us”, and when certain behaviour creates a hostile environment for trans people, in certain parts of the world it creates a disproportionately more hostile environment for Black trans women. This is mainly oblivious racism, typical of the luxury of obliviousness which privilege brings.

Now I have to be honest here and say that I do see some parallels between racism, sexism and what’s popularly referred to as transphobia. I’ve said elsewhere that certain aspects of sexism and “transphobia” are homeomorphic: some sexism can be mapped onto discrimination (as opposed to prejudice) against trans women. A good example is that if someone is dressed as I am now, in a floral strappy skater dress, they are less likely to be taken seriously by cis men, so so-called “feminine” clothing is basically a massive gaudy sign pointing at the person wearing it saying “DISRESPECT ME” regardless of the perceived gender of the person concerned. I would imagine that similar comparisons can be made between sexism and racism, and I would hope, for example, that it’s possible for Black heterosexual couples to recognise similarities in the oppression of the man by society at large and the oppression of women by men in heterosexual relationships, although again I have no access to this.

However, there’s a problem, and this is where the idea of “all lives matter” comes in. As a trans woman I could imagine myself to be subject to transphobia, but of course that idea is completely fictitious. Similarly, a white cis man, with the various privileges that involves and the concomitant obliviousness to those privileges, might suppose that he too is oppressed by so-called reverse sexism and reverse racism. This ignores the structural elements of those two sets of prejudices. Public policies, institutions and representations of women and Black people in popular culture manufacture pre-existing conditions from which it’s much harder to escape, so that even if a situation looks superficially equal to an outsider, to a person subject to structural sexism or racism it will be anything but, even if they aren’t conscious of that fact. False consciousness is a thing too. It perhaps amounts to a person who is in an oppressed position because of a fixed characteristic having the same beliefs and attitudes as someone who does not have that characteristic. Uncle Tom, in other words. Someone who could be seen as betraying their sociocultural allegiance. A very mainstream example might be a poorly-paid worker not joining a trade union, although of course things are not that simple since, for instance, trade unions themselves have a history of sexism and racism. A fairly close sex-based analogue might be a Stepford Wife.

Particularly in these days of online interaction, people with common ground find it easier to discover each other and band together, and in many cases are able to empathise on an aspect of their identity which is clearly entirely valid. Obvious examples of this are feminist and ethnically-based groups. But there are also other groups which seem less valid, such as Flat Earthers, “incels” and Targeted Individuals. I would also place people who say “all lives matter” under the same heading. But the problem is how to distinguish between legitimate and illegitimate grievances. To me it seems thoroughly obvious that feminist movements and Black Lives Matter are worthy of support and equally obvious that Targeted Individuals are mentally ill without insight into their true predicament. It’s equally clear to me that veganism is a moral imperative for every human who is able to make that decision and that there’s structural anti-veganism, although veganism is a conscious choice. Someone else might believe men get a raw deal across the board, and on the whole they’d be wrong. It’s even more obvious that white people don’t get a raw deal and although there can be prejudice against White people it doesn’t have the structural and global elements of anti-Black racism. But to a koumpounophobe, any dress code requiring garments with buttons is prejudicial, and that’s so rare and obscure (one in 73 000) that society can’t be expected to take account of it and that means they just will be excluded or disadvantaged in many ways, such as at school with uniforms or working as a security guard or in an office. Similarly, some people are fatally allergic to apple skin or have seizures if they see an open safety pin, and there’s a whole range of other hidden disadvantages. All of these people need to be taken seriously but it’s surely impractical to do it for everyone, because there’s so much individual variation. In these cases, the luxury of obliviousness still exists and there’s a passive, tacit kind of structural prejudice, but not in the same way as racism and sexism. As an individual, how is it possible to distinguish between ways in which one could legitimately claim to be part of an oppressed group and where one simply has an exaggerated sense of entitlement?

Nevertheless, the fact remains that racism, sexism, homophobia, ableism and ageism are all major problems which we must address, particularly when one is male, white, able-bodied, heterosexual and neither young nor old. Presumably the more of these easily-articulated prejudices one is adversely affected by, the more one would be able to perceive similar mass prejudices accurately.

I want to paint a picture of certain aspects of my own identity which are trivial, fragmented and faint echoes of prejudice, so as to contrast them with more systematic and objectifying forms of prejudice. My surname is usually pronounced or spelt incorrectly, and it begins with an unusual letter. My pigeonhole at university was shared with other people whose names began with letters near the end of the alphabet, and it was always full of gunk, torn envelopes and other rubbish because the other students used it as a dustbin, assuming that nobody’s name began with that letter. I also have some suspicions that I’ve been passed over for employment because my name is unusual in England and puts me near the end of the alphabet. Because it has only three letters, for a while I couldn’t have a bank account under my real surname. But on the whole this is trivial and a mild annoyance. As a teenager, I had two skin conditions, plus a third, which are only occasionally found in people of my ethnicity. Because my mother was unfamiliar with these skin conditions, she simply believed I wasn’t washing properly, and the other one wasn’t diagnosed at all. Again, the far more widespread problem of acne vulgaris was much more significant in affecting my self-image, and the other conditions were simply a bit of a puzzle. If they had occurred in someone of the ethnicity with which they are more usually associated, they might well also have gone untreated and would’ve been much more debilitating and worse for their self-image than they were for mine. Finally, when I comb it at all I use an afro comb on my hair because brushes and other combs just don’t work, so there’s a trivial sense in which my hair is subject to “straight hair privilege”. Again, this is trivial. All of these things taken together are trivial because they’re not part of a major system of oppression. They’re just individual idiosyncrasies of who I am. I suppose there’s one positive thing about being privileged in every way. I might be able to empathise to some degree with other privileged people who are unaware of that privilege and perhaps persuade them of why they’re wrong. But I doubt it, and I honestly don’t know how to distinguish between merely imagining and genuinely being aware that part of who one is makes one oppressed, because I myself am not oppressed at all. I can therefore relate to that position even though it’s obviously wrong.

Finally, I want to return to ‘How To Argue With A Racist’. It was an interesting read to some extent, but it didn’t help, because people did not reason themselves into racism and therefore can’t be reasoned out of it. You can throw as much evidence as you like at sexist or racist attitudes and it will get you nowhere, because people who are either unaware of their sexism or racism or celebrate them didn’t get there as a result of evidence or reasoning and they’re emotionally invested in their prejudice. I do think there’s a way of doing it, but I don’t yet know what it is.

A Poison Tree

This is basically a FB status update, but if I put it on there it’d be a wall of text, so I’m putting it here instead.

I’ll start with a Blake poem:

I was angry with my friend; 
I told my wrath, my wrath did end.
I was angry with my foe: 
I told it not, my wrath did grow. 

And I waterd it in fears,
Night & morning with my tears: 
And I sunned it with smiles,
And with soft deceitful wiles. 

And it grew both day and night. 
Till it bore an apple bright. 
And my foe beheld it shine,
And he knew that it was mine. 

And into my garden stole, 
When the night had veild the pole; 
In the morning glad I see; 
My foe outstretched beneath the tree.

Am I angry with my friend? I don’t know. I think I’m depressed by what my friend is doing. Am I telling my friend that I’m peed off? No I’m not. Maybe I should, I dunno.

This is complicated. There are a lot of people I count as friends, both face to face and online, whose opinions are very different from mine. Apart from not wanting to make it an issue because other things are more important, such as the value of friendship, which kind of saves the world one person at a time, I also think it’s vital that we try to expose ourselves to contrary opinions, even where we disagree very strongly with them.

I’m going to have to be vague about this bit, but I want to provide some context. I’ve known this person for around two decades, our lives have overlapped considerably and our children have been friends and have a lot of mutual friends. We’ve done a lot of work together and been to each other’s parties; I mean, it’s the whole kit and caboodle of an emotional closeness. This isn’t just some randomer out of nowhere.

But recently they seem to have taken a turn for the, well, I’ll be frank, right-wing extremist: supporting Trump, opposing anti-racism, reading anti-abortionist websites approvingly and getting all their information from places which are probably on some kind of list.

Now there’s a lot going on right now. A trans board moderator has just told me I “don’t belong in the LGBTQI+ community”, which didn’t exactly make my day and which I didn’t exactly find validating. The violence against police horses on the London BLM demo was also pretty depressing, and by that I mean the fact that it happened and its consequences, not the argument about whether it was down to agents provocateurs or to wilful animal abuse from within the movement. The behaviour of the police in the US, for instance mowing down peaceful protestors with a car and attacking them from behind without warning – and those are police, not false flag operations – is also depressingly unsurprising.

Also, I’ve just noticed that the bread mix I’m using isn’t vegan, which has really got my goat because as usual there is simply no need for there to be any animal products in it but they just stuck ’em in there anyway, so I’m now going to have to make a whole new loaf of bread before I can even eat, so my blood sugar is low and I’m probably hangry.

Cutting off ties with this person is a step in the direction of both them and me retreating into a bubble with little hope of dialogue, and as I’ve said before the polarisation issue is probably more important than the actual issues about which we are all polarised. It’s interesting that both the 2016 US presidential election and the Brexit vote were very close to fifty-fifty, and I wonder if this somehow reflects that polarisation, although I can’t put my finger on why.

The problem, though, is that right now I’m feeling rather emotionally fragile and I just can’t cope with seeing how far this person has strayed from their former views. In a way it isn’t even about the views. I remember a guy I used to know whose opinions seemed very easily swayed; I was even annoyed when he agreed with me, because it seemed like he did so too easily, and we need people in the world who have well-thought-through opinions. But this has purely and simply just happened, with no apparent critical thought or reasoning, and I’m at least as disappointed by that as by the actual content of the opinions. It might even be that which is bugging me, because there are loads of people at the other ends of various spectra from me for whom I still have a great deal of respect and love, and I don’t really care that much that we disagree. I’ve never subscribed to “if you know a racist who thinks that he’s (sic) your friend, then now is the time for that friendship to end”, as the Specials once sang, because as a white person who knows other white people that would not only be most of the people I know, but it would also not help me or them to address our racism and try to do something about it. It’s a self-righteous and judgemental position, and obviously it applies equally to misogyny, homophobia, ableism, speciesism, and even more to the things I haven’t mentioned, because the fact that I haven’t been able to bring them to mind means I’m insufficiently aware of them.

Nevertheless, I feel my hand has been forced. I just can’t stand to see this endless stream of bigotry from a person I thought was on my side morally and politically. Therefore now really is the time for this friendship to end.

Ugh.

Baby Cavies

This post is about guinea pigs.  Partly.  It’s also being written in the teeny box of the app because it’s kind of a spontaneous thang.

There seem to be two major reproductive strategies, although apparently even this is disputed; I don’t know why. My understanding is that at one extreme, organisms produce millions of offspring in one go and die immediately afterwards. Almost all of those offspring die before being able to reproduce and there’s no parental care, but a few do make it through and do the same thing. At the other extreme, organisms produce very few young and devote a lot of time to parenting, so reproduction isn’t the end of their lives. These animals may produce litters or single young (on the whole – I think most animals who usually have one baby at a time also occasionally have twins or other multiple births), but from the viewpoint of replacement, animals who reproduce sexually must on average manage at least two surviving offspring each, assuming the population is neither shrinking nor expanding invasively. In extreme cases the reproductive period may even end long before the animal dies of old age, as happens in humans.
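Just to make that concrete, here’s a toy sketch in Python (the numbers are invented purely for the sake of the arithmetic, not measurements of any real species): a broadcast spawner with a vast brood and minuscule survival, and an attentive parent with a small litter and high survival, can both come out at roughly replacement level.

    # Toy comparison of the two strategies; all figures are hypothetical.
    strategies = {
        "broadcast spawner": {"offspring": 1_000_000, "survival": 0.000002},
        "attentive parent":  {"offspring": 8,         "survival": 0.25},
    }

    for name, s in strategies.items():
        expected_survivors = s["offspring"] * s["survival"]
        print(f"{name}: about {expected_survivors:.1f} offspring expected to reach breeding age")

Both come out at about 2.0, i.e. bare replacement, which is the point: the two strategies differ wildly in style but not necessarily in outcome.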

Okay, now you’re gonna get your guinea pigs.

Apparently nobody knows why they’re called guinea pigs, because they have no association with either Guinea or New Guinea, and I wonder whether at some point they cost a guinea each or whether they’re like turkeys: assumed to come from somewhere which Western Europeans considered exotic and therefore associated with an arbitrary distant land. The German name is Meerschweinchen, literally “little sea-pig” (Meerschwein is also an old word for a porpoise), and I don’t know why they’re called that either. The other name, cavy, is from the Tupi saujá, which means “spiny rodent”, which they definitely aren’t, and in fact even the name isn’t that close, for no reason known to me. I’m going to call them cavies because the name “Guinea pig” freaks me out a bit, since I don’t know where it comes from, although “cavy” is almost equally weird.

There are, as far as I know, three main suborders of rodent: the mouse-like muriomorphs, the squirrel-like sciuromorphs and the cavy-like caviomorphs. This has probably changed since I learned it, due to the revolution in taxonomy which resulted from advances in genetic sequencing, but it remains the case that, in terms of number of species and population, rodents are the most successful order of mammals. If mammals survive the current mass extinction at all, rodents will, and they could therefore end up being the last mammals of all. The first mammals were not rodents, but they were somewhat rodent-like in form, because in a way the default body plan for mammals is to be rodent-like, as seen in many marsupials and also in shrews, golden moles and others. Multituberculates, arguably the most successful mammals ever, were also rodent-like, and it’s theorised that they became extinct partly because their litter sizes were smaller than rodents’.

Cavies themselves never existed in the wild as the species they currently are. They are closely related to a wild cavy found in the Andes, and I presume they have rapidly evolved to become reproductively incompatible with it. They can reproduce at the age of five weeks and produce two to four precocial pups, but the gestation period is fairly long at around two months. I should probably explain “precocial” versus “altricial” at this point. Cavies clearly do practise parental care, and when an animal does this the evolutionary option exists for young to be born before they are anything like fully developed. These are known as “altricial”. Humans do this, and it’s also common among birds, though palaeognaths such as ostriches and tinamous don’t, so probably the first real birds didn’t either. Egg-laying mammals, however, do, and so do most rodents. Guinea pigs are unusual, compared at least to muriomorphs, in that they produce “precocial” young who are already furry and a little more independent than, say, baby mice or hamsters. They cannot, however, churn out massive litters of children over and over again like muriomorphs, and probably for this reason, although they do sometimes eat their pups, it’s relatively rare. There is a stark disposability to the offspring of many glires. Okay, I’ll explain glires again too.

The Glires are the superorder comprising lagomorphs (rabbits, hares and pikas) and rodents; treeshrews (Scandentia) are usually placed just outside it, nearer the primates, in the wider group which also includes us. I have a thing about the insistence that lagomorphs are technically not rodents, because there is no definition of what constitutes any taxonomic rank other than the species (a breeding population): all those families, orders, classes and the like are individually defined, but there is no criterion at all which determines which level any of them sits at. Therefore, either get rid of the idea of rodents or plonk lagomorphs together with ’em. I have slightly more sympathy with the idea that tupaias (treeshrews) are separate from rodents.

But anyway, rabbits breed like rabbits, and consequently it makes biological sense for them to end up eating their young. Not all archontoglires (including primates) do this. To quote Willy Wonka, “But that is called cannibalism, children, and is in fact frowned upon in most societies.” Humans generally have a taboo about eating babies. Not so most other archontoglires.

Caviomorphs, who I understand we’re now supposed to call “hystricomorphs”, were originally from Afrika like a lot of other animals, and got to South America by floating across on vegetation back when the Atlantic was narrower, during the Eocene. At the time, the only other placental mammals in South America were bats, xenarthrans (sloths, anteaters and armadillos, who are arguably the sister clade of all other surviving placental mammals and differ from the rest of us in interesting ways, e.g. they tend towards being bulletproof, have a lower metabolic rate and an unusually large number of ribs) and the native hoofed mammals such as litopterns and notoungulates, so the newcomers were able to radiate into all sorts of forms which would have been unfeasible in the rest of the world, such as capybaras, who are basically rodent hippos. The largest extinct relatives of the capybara were the size of small cars, but those died out a long time ago, probably when the Isthmus of Panama formed (that’s a guess).

Like all placental mammals, cavies lack epipubic bones, don’t lay eggs and suckle their young through nipples. In their case they have only a single pair of nipples, and if a female doesn’t become pregnant while she’s fairly young, the pubic symphysis can fuse, making it impossible for her to give birth vaginally. This is actually quite similar to the situation in non-eutherian mammals generally, who retain epipubic bones thought to prevent them from carrying large fetuses to term – marsupials give birth to extremely undeveloped live young and monotremes lay eggs. And so we ask ourselves, how did we get here? How did we get into a situation where humans give birth vaginally to mainly singleton altricial babies and suckle them from a pair of pectoral nipples? What does it mean about our society? I haven’t filled in all of the second bit yet.

Humans are synapsids. We are descended from animals in the late Carboniferous which arose more or less directly from amphibians, or rather from a vague group of vertebrates which included forms ancestral to the living animals now called amphibians. We are not sauropsids – “reptiles” or birds. Synapsids have tended to go to considerable lengths to regulate their temperature, for instance by having large sails on their backs to absorb the sun’s heat and radiate it away again as the wind blew past. The descendants of these animals began to use chemical reactions which created more heat than they strictly needed, sometimes even running apparently pointless cycles, because doing so raised their body temperature above the ambient. It’s worth bearing in mind, incidentally, that over most of the history of the synapsids the struggle would often have been to keep cool rather than warm, because of the climate, so in a very real sense it was the mammals and their ancestors who were “cold-blooded” and the “reptiles” who were “warm-blooded”.

There are various ways in which synapsids regulate their temperature, one of which is sweating. That sweat carries antibodies, which are made of protein. The small babies hatching out from the eggs with a need to generate their own heat or keep themselves cool would need to be curled around by parents. One of the distinctive things about synapsid spines is that they can roll up – they can bend backwards and forwards as well as sideways. Hence they can keep themselves warm and their young can be too, at a point where the small size of their bodies means they get hot or cold very easily.

Imagine, then, sweaty things down burrows in the Permian, at a time when practically all the land formed a single continent almost from pole to pole, three times the size of today’s Eurasia – Pangaea. Such a vast continent would be mainly desert simply because so much of it would be so far from the single ocean, Panthalassa. Deserts away from the poles are hot during the day and cold at night, due to lack of cloud cover. Thus these sweaty things down burrows would have to huddle very close, and in doing so the young would lick the sweat for salt and to educate their immature immune systems with antibodies against the infections their parents had previously acquired immunity to. Later on they’d derive protein and other nutrition from the perspiration as well, and so was born suckling. Duck-billed platypodes and echidnas still suckle by licking skin secretions from their mothers.

It’s easy to think of those last two kinds of mammal as primitive, but in fact monotremes, for such are they dubbed, have unusually large brains for their size. They are, however, unusual among the mammals living today, and they did split off from the rest of us very early. From today’s perspective it makes sense to look at monotremes as one group, and marsupials and placental mammals as another, since the last two share a much more recent common ancestor with each other than either does with the first.

One of the unusual things about monotremes is that they have venomous spurs on their feet – they are venomous mammals. There is a sense in which other mammals are venomous too, because our saliva is full of bacteria and our bites can infect and kill other animals. This is like the old account of the Komodo dragon’s deadly bite, although Komodo dragons are now known to have true venom glands, and it’s quite unlike snake venom, which is produced in dedicated glands. Human bites are, after those of dogs and cats, the most common bites leading to medical emergencies, and are usually inflicted by children. One hand infection in three is caused by a human bite. Cases of limb amputation, necrotising fasciitis and death from infection have been reported. Nonetheless, platypus venom is another matter and is unlike both snake venom and the toxins produced by salivary bacteria. It derives from modified immune system genes and causes a drop in blood pressure with no necrotic effects. It also contains a right-handed amino acid, and although females have the spurs when young, as do echidnas, in those cases they’re vestigial. The venom is also secreted in the tears. Since similar spurs have been found in Mesozoic mammals, it seems reasonable to assume that they too were venomous. The venom can kill smaller animals and cause pain for months in humans. Questions of gender role arise in my mind here. Like mammary glands, venom glands are related to the immune system, but whereas milk is nurturing, venom is destructive and defensive. I’m imagining male animals going out and hunting or fighting over females using their venom. Some multituberculates had spurs, as did Zhangheotherium quinquecuspidens and Gobiconodon, and although it hasn’t so far been possible to show that these spurs were venomous (some mammals today have spurs which are not), they are thought to share their origin with those of monotremes. Just as a small animal today, such as a wasp or a weever fish, might need venomous defences, it’s easy to imagine that mammals might have needed them too at a time when they tended to be underdogs.

Looking at their genomes, the common ancestor of monotremes and therian mammals seems to have lived around 210 million years ago, meaning that early mammal fossils such as Morganucodon and Megazostrodon are about twenty million years younger than the split. Before that, synapsids were non-mammalian. Growth rings in the teeth of the earliest fossil mammals reveal another surprise: they probably weren’t “warm-blooded”. Today a mammal the size of Morganucodon could be expected to live a year or two, but they seem to have had a much longer life expectancy, of up to about fourteen years, which is similar to that of a living reptile of the same size. This is all the more surprising given that they were apparently already producing sweat and regulating their temperature that way, so maybe some mammals actually lost the ability to generate their own heat internally. This of course contradicts the “whig prehistory” assumption that everything is trying to turn into a human. In fact endothermy requires small animals to work like anything to get enough food: there are shrews who need to eat their own body weight in insects and other protostomes more than once a day. Ectothermy, by contrast, is a very efficient way to run a body, particularly if there are easy external sources of heat, as there would have been in many parts of the planet in the late Triassic.

To monkey sensibilities such as our own, particularly in the richest parts of that same planet in the early twenty-first century of the Common Era, the disposability of muriomorph rodents seems most disturbing. A house mouse can be expected to live well under a year in the wild, although a genetically modified mouse living in captivity could live up to five years, and a mouse whose genes have not been directly tinkered with up to four, given a sufficiently friendly environment. They can produce up to fourteen litters a year of up to ten young, although the averages are much lower. At those rates, a mouse which had bred for a year could have produced as many as a hundred and forty children, and if half of those had themselves been breeding for a further year, that could mean almost ten thousand descendants by the time she was two. This usually means, of course, that most of them will have died by that point, which in turn means that life is cheap, and they may well have died because their parents ate them. As humans we find that harder to handle, but even conditions for us used to be a lot harsher. I am one of seven children, three of whom died and two of whom were adopted in; this is unusual for a mid-twentieth-century British family, but not so much a few generations ago or in other parts of the world today, and it can lead to a certain lack of emotional engagement with the youngest children for one’s own emotional protection. Nonetheless, the need for that emotional protection implies that children matter a lot to us. They also matter to other mammals and birds, and it’s important neither to anthropomorphise nor to floccinaucinihilipilificate that in other species.
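For what it’s worth, here’s that back-of-the-envelope arithmetic as a few lines of Python, using the upper-bound figures quoted above (so it’s an extreme case, not a typical one):

    # Back-of-the-envelope check of the mouse arithmetic, with the quoted upper bounds:
    # fourteen litters a year of ten pups each.
    litters_per_year = 14
    pups_per_litter = 10
    per_year = litters_per_year * pups_per_litter      # 140 children from a year of breeding

    children = per_year                                # first generation
    grandchildren = (children // 2) * per_year         # half of them breeding for a further year
    print(children + grandchildren)                    # 9940, i.e. "almost ten thousand descendants"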

Another notable aspect of most mammals alive today is that we are born surrounded by at most a thin set of membranes, and in the case of placentals we are retained within our mothers’ bodies for a relatively long time. In humans that retention is actually shorter than it might otherwise be, because of our proportionately large heads, which necessitate fontanelles even so, and this is one reason we’re altricial at birth. But the question arises of how most mammalian embryos came to be retained rather than laid in eggs – viviparity.

Viviparity has evolved independently many times in vertebrates, for example in sharks, bony fish, amphibians and reptiles. Birds always lay eggs, I presume ultimately because of flight, although it also applies to flightless birds. The closest thing to an exception is the kiwi, which lays the largest egg in proportion to its body size and whose chicks hatch unusually well-developed. In the case of humans and other placental mammals, the origin of the placenta seems to be partly viral.

As viruses work by using their host cells to help them reproduce, they sometimes write their own genes into host DNA to do so. If this happens to a spermatozoön, ovum or zygote and the cell concerned survives, this will be present in the genome of most or all of the nucleated cells (or their mitochondria I imagine) of the organism concerned. Around eight percent of the human genome consists of viral genes, and in fact it’s theorised that ultimately the entire genome of most living organisms may originate from viruses, though it will clearly have evolved since. Leaving this possibility aside, most viral DNA in our genomes has stopped working completely, but the way placentae work is different and resembles an infection. The ball of cells all mammals start off as implants itself in the wall of an internal organ and bathes itself in the mother’s blood. This would normally provoke a successful immune response, but doesn’t because the fetal cells fuse, preventing white blood cells from getting a purchase on them. This is known as a syncytium, also found for example in respiratory syncytial virus infections, which protects dodgy cells from being attacked and enables them to produce more viruses. This is made possible by a viral protein called syncytin, previously used by viruses to bind with cells. Given that both marsupials and placental mammals have placentas, in the former case a rudimentary one formed from the yolk sac, and in some marsupials such as bandicoots even a more sophisticated placenta shortly before giving birth, this must have happened before they split from each other about 160 million years ago towards the end of the mid-Jurassic. Hence at some point, as if it wasn’t bad enough to scratch a living dodging the feet of thirty-ton Diplodoci, some ovum or zygote was infected with a virus. Instead of developing a shell and being pushed out, this egg started to invade the wall of the mother’s reproductive system and got stuck, only later being ejected. It probably happened to a lot of mammals, and a lot of them probably died in childbirth or underwent retention of dead fetuses which prevented them from reproducing, or maybe they just died straightforwardly of viral infections, but after this pandemic, a new kind of reproduction had begun to evolve and those mammals are our direct ancestors.

Bringing this back to the present, it’s interesting to note that we owe our existence as placental mammals who bond intergenerationally and invest our time in parenting to the extent we do, to a viral pandemic, and the hope remains that the current viral pandemic will lead to a similar leap in social evolution in the near future. A costly leap, but considering the price we are paying, we may as well get value for money out of it.

Cavy Babies

This is not about guinea pigs.

Sarada and I are now grandparents, and as such we’re watching our grandchild pass the usual milestones. This is of course a blessing not everyone has because not all children survive infancy and of those who do, some are learning disabled. I try not to forget our tremendous luck that so far, all of our descendants are still alive and have the kind of abilities associated with humans. It also makes me wonder about the past of the human race.

We generally can’t remember much about our infancy, and what we do seem to remember is often confabulated because human memory is not so much a store and recall process as a recall and rewrite process. It seems that it’s more important that we seem to remember things than that we actually remember them, and the further back you go in your memory, the more repeatedly recalled memories are likely to have become distorted by the creative, active revision process that is our apparent remembrance. Fortunately, we often have parents and siblings around us who can help us gain accurate information about our early lives, although they too can be influenced by family mythology. Since the invention of writing it has become easier to make accurate records of the past not immediately dependent on current human beings being around to recount the events, although this is probably an early stage in the steady process towards outsourcing our memories of which the likes of Google constitute a more recent stage. Left to our own devices, though, we don’t remember much about our early lives and much of what we do remember has been rewritten. We create our own mythologies about our infancy.

This is similar to human prehistory. Thinking of the Greeks, never far from my mind right now, the likes of Hesiod set down the general idea that before the current Iron Age, which is incidentally still current, was the Age of Heroes, corresponding to the Greek Dark Ages, preceded by the Bronze Age, before which came the Silver Age and before that the Golden Age. This is partly accurate, and confirmed to some extent by archaeological findings. I’ve talked about this elsewhere.

When our granddaughter, facilitated by videophone, lifts herself up into an upright position, having also recently begun to crawl, it makes me wonder about the past of the human race. At some point in the Pliocene our ancestors began to walk upright, possibly after having hung down from the branches of trees or, according to Elaine Morgan, having needed to wade into deeper water to escape predators or find food, which is in fact the situation in which other apes also stand upright. There are three changes involved in human bipedalism, one of which is the angle of the pelvis and the other two of which I’ve forgotten, each contributing about thirty degrees of the shift away from quadrupedalism. Is our granddaughter recapitulating that early human history, or rather prehistory? We see her reach up for a toy rather than a branch carrying fruit, but that toy is fruit for her, and for all we know maybe there was just something really interesting on that branch which our great-great-….great-grandmother seriously wanted to take a look at. Before walking, babies often go through a crawling phase, which seems rather odd to me for various reasons. One is that for some reason they take longer to learn to crawl than they actually spend crawling. It’s been said that one reason they take such a long time is that they don’t see as many examples of people crawling around them as they do walking, so they have to invent it for themselves, but it still seems rather odd that they bother to do it at all. I suppose it keeps them busy, although they get really frustrated by it too. On the whole, when they do crawl they seem always to do it on hands and knees, which makes sense but also makes me wonder, because I’m not aware of any other quadruped who moves around like that, including those whose hind legs are longer than their forelimbs, and even animals whose normal gait is bipedal don’t walk on their knees. Neither does that family in the Middle East who walk on all fours for neurological reasons, whose gait has been claimed to be connected to a former evolutionary stage in apes generally. Consequently, what babies do may not be an accurate representation of our past.

One thing hearing babies tend to do, it’s said, is produce a very wide range of speech sounds, and it’s even been claimed that a hearing baby with a typical human vocal tract will produce every sound in every language during the babbling stage. Having listened carefully to our own children, I’m fairly sure this isn’t true. They did produce a wider range of sounds than are present in German and English, but there were a lot of sounds I never heard either of them utter, so I strongly suspect the claim is false. Further evidence that the phonological inventory of a babbling baby is smaller than the sum of the sounds of spoken human languages comes from research showing that babies adopted at birth still pick up their birth mothers’ spoken language more quickly than a foreign one – that is, they learn faster if adopted into a family speaking the language they heard before birth. This strongly suggests that the fetus is listening in utero and has already started to divide up the speech sounds it hears in terms of a particular spoken language, although whether this is reflected in their babbling is another question entirely.

I recently commented on this blog on the slightly disturbing tendency for spoken languages to simplify in inflection and phonetics as time goes by. For instance, Greek lost its distinctions between η, ι & υ, and between ω & ο, and started to drop its aitches ages before Cockneys were even thought of, and so on. English, likewise, merged I and Y before the Norman Conquest, leading to the familiar medieval habit of spelling words with interchangeable Y’s and I’s, and within my own lifetime the distinction between WH and W has been lost and the dark L at the ends of syllables has become a vowel or semivowel for many speakers. This phonological simplification influences inflection. For instance, because schwa took over practically every unstressed short vowel in English, the older distinction between final -u, -a and -e in noun endings was eroded and lost, and in any case there’s a general tendency towards levelling in inflection anyway. Again, as I’ve said before, if you wind the clock back far enough you end up with languages which are much harder to pronounce and which have much more complicated grammar, on the whole though not exclusively: Greek’s ancestor didn’t distinguish the passive and middle voices, but Ancient Greek does.

Due to this general trend, one is left imagining some prehistoric stage when people just babbled at each other and hoped for the best. If you regularly read this blog, you probably wonder whether that stage ever really ended! However, people other than me can express themselves to each other a lot more clearly than I seem to manage. It’s still confusing though, because one would expect more complex languages to take longer to learn and since people didn’t live as long back then, having to pick up a really complicated language would probably have taken up a relatively much bigger part of their lives.

One feature in particular never seems to have survived into any official national language of the past few centuries (though Greenlandic, now official in Greenland, is arguably an exception): polysynthesis. There are certainly languages, such as Swahili, which inflect their verbs according to subject and object, meaning for example that it’s entirely possible to express something like “she used to visit me” in a single word, but there’s a more complex stage before that where something like “they wouldn’t easily let themselves become Greenlanders” or “the praising of the evil of the liking of the finding of the house is right” could be expressed in single words. Ainu, spoken in Northern Japan and previously Sakhalin and now practically extinct, evolved from a polysynthetic stage into a simpler form during recorded history. I know practically nothing about this, but I do wonder why polysynthesis never seems to turn up in the widely spoken or national languages of the twenty-first century.

Caucasian languages are characterised by complex grammar and very large numbers of consonants compared to most other languages. Linguists who have attempted to reconstruct prehistoric languages have sometimes produced something rather similar in that respect to one of those languages, Georgian, although it’s now thought that the whole exercise is futile. That said, it does seem likely to me that the complexity of Georgian’s grammar and phonological inventory is fairly representative of a spoken late Palaeolithic language.

It used to be thought that the Caucasian languages held the record for the number of consonants in any spoken language, but this turns out not to be so. The record seems to belong to the click languages of Southern Afrika. Western linguists realised that each of what they had previously heard as a small number of clicks was in fact several distinct sounds, and that, together with a number of other subtle distinctions between speech sounds of other kinds, means that the language with the most consonants is in fact !Xóõ, with seventy-seven. I’ve tended to call this language !Xo, so I’ll carry on doing that, mainly because it’s easier to type on an English keyboard. Click consonants do occur outside Afrika. For instance, there is a ceremonial register of an Australian language which uses them, and they’re also used to express the negative in the Balkans and to express irritation and affection in English, though not as parts of ordinary words; that Australian register aside, it’s only in Afrika that they’re found as ordinary phonemes.

It used to be thought that there was a “Khoisan” language family to which most click languages belonged and whose members were somewhat related to each other. Even then, though, it was acknowledged that several Bantu languages such as Xhosa and Zulu had their own click consonants and were not related to !Xo, !Kung, the gloriously named //au//’e and the rest. More recently it’s been recognised that most of the click languages are unrelated to each other except insofar as they all have clicks, and that they seem to form a Sprachbund like the Balkan languages, sharing features because they’re spoken in the same area and have become more like each other. I suspect that the languages currently spoken in these isles also form a Sprachbund, because for example they all have a circumlocutory way of expressing verbs, but maybe not.

There are probably three examples of language families which are merely convenient groupings based on geography rather than genuine relatedness. The other two are the Papuan languages of New Guinea and the Australian Aboriginal languages, the latter of which incidentally also share various features, such as the absence of fricatives (e.g. S, F, TH, V, Z) and a special form of the noun used to express fear of the thing referred to. In both of these cases, the languages have been spoken in those regions for such a long time in relative isolation that if they were ever all descended from a single ancestor, it’s no longer possible to trace them back that far. I would suggest that the situation with click languages is somewhat different. As I’ve said before, the human population of Afrika is genetically the most diverse, so it’s not a huge exaggeration to say that the world consists of a number of ethnicities in Afrika plus one more which inhabits the rest of the planet. This isn’t quite true, because the genetics of North Afrikans tend to be quite close to those of various peoples around the rest of the Mediterranean and in Western Asia, and there’s quite a bit of variation in Central Asia too. Even so, there’s a lot of variation in Afrika, particularly south of the Sahara.

This variation, I think, is probably a clue to the nature of the click languages. The reason Afrikan populations vary more than the rest of the human race is probably that the species has spent much longer in that continent than elsewhere, so they’ve had longer to evolve, and there are also no geographical bottlenecks like Sinai and Gibraltar. Incidentally, Afrikans are also genetically the purest examples of Homo sapiens, the rest of us having Denisovan and/or Neanderthal ancestors as well as other H. saps. I think the same is likely to apply to Afrikan languages.

What I think is happening with Khoisan languages is that far from being a family or Sprachbund, they are in fact relatively conservative descendants of Palaeolithic languages which retain the wide range of different consonants which earlier spoken languages had because they were closer to babbling. This is not in any way a negative comment on the languages concerned. On the contrary, the difference is that Khoisan languages are more phonetically sophisticated than most other adult speech in that respect. They are probably ultimately related, but they’re also basal.

To illustrate what I mean, and the mistake which I think has been made here, I want to point out three other examples of this happening, two from biology and one from comparative linguistics. Flowering plants used to be thought of as divided into two main taxa: monocots and dicots. Monocotyledons have parallel venation, no tap roots and no true wood, among other things. Dicotyledons have tree-like branching veins, tap roots and are sometimes trees. It turns out that the monocots really are a single branch of the family tree, whereas the old “dicots” are not all closely related to each other: several “dicot” lineages are basal, descended from the earliest flowering plants, and are sister groups to each other plus the rest, so, ironically, it’s the plants with the branching veins whose evolution doesn’t form a single branch. Vertebrates have done something similar. There are synapsids (“mammals”), sauropsids (“reptiles” and birds), amphibia and various kinds of fish, including eel-like jawless fish such as hagfish and lampreys. On the traditional view, the hagfish in particular are only distantly related to all other vertebrates, who form a more closely related bunch, although more recent molecular work suggests that hagfish and lampreys may actually form a group of their own. Finally, the linguistic example is found in our own Indo-European language family, where it used to be thought that there was a fundamental split between SATEM languages such as Polish, Bengali, Armenian and Albanian on the one side and KENTUM languages such as English, Greek, Tocharian and French on the other. Once again, it turned out that the SATEM languages share an innovation and so look more like a genuine group, whereas the KENTUM languages merely retain the original consonants and are not otherwise especially closely related to each other.

This is what I think click languages are evolutionarily. Before humans left Afrika, we spoke a large range of languages with lots of different sounds in them, probably often including clicks. These were the putative “what the heck are you talking about?” languages which were, in a good way, closer to babbling than most of the languages spoken today. The rest of the world’s languages became simplified and easier to learn and pronounce, but some of the languages of Southern Afrika, although they diverged enormously from one another, retained their large phonological inventories, and these are the click languages. Interestingly, the highest incidence of albino humans is found in the same area as the click languages are spoken, and I think this reflects the genetic and linguistic diversity of the human population of Southern Afrika.

In closing, I want to stress very strongly that none of this means that Khoisan languages, or the people who speak them, are in any way backward or primitive just because the languages are phonologically conservative. I suspect in fact that the grammar of Khoisan languages is not that complex compared, for example, to Georgian or some northern Native American languages. What I do think is that they have retained a complexity which the rest of us have lost. Moreover, I don’t think clicks were all there ever was to it. I think the first spoken languages of our species probably had sounds in them we can hardly imagine and subtle distinctions which nobody would be able to hear or produce nowadays. Click languages are a globally valuable legacy of a glorious linguistic past.

Norman Is In Ireland

This is possibly not going to be one of my more coherent posts, although you could be forgiven for not noticing much difference between it and any others. It is partly about Dominic Cummings and this John Donne poem:

No man is an Iland, intire of itselfe; every man
is a peece of the Continent, a part of the maine;
if a Clod bee washed away by the Sea, Europe
is the lesse, as well as if a Promontorie were, as
well as if a Manor of thy friends or of thine
owne were; any mans death diminishes me,
because I am involved in Mankinde;
And therefore never send to know for whom
the bell tolls; It tolls for thee.

It isn’t difficult to think of examples of this, one of which is the life of the typical human being. We are born, having formed from matter in the biosphere, spend our lives exchanging much of the substance of our bodies with that biosphere and, on dying, become one with it again. On the whole, that is. Occasionally the carbon in our ashes is converted to diamond along with some of the nitrogen, or something might happen to preserve our bodies, such as falling into a tar pit, but hominin fossils are rare because we tend to be able to protect ourselves more effectively from physical threats, to the extent that one of the most dangerous animals in our environment is actually Homo sapiens. Leaving aside our ultimate fate, we are physically dependent both on the outside world and on society, which there is such a thing as. All of this is pretty bleedin’ obvious. The Spartan ἀγωγά involved a number of stringent measures, including an examination by the Γερουσια soon after birth and the abandonment on a mountain, either to die or to survive for several days, of any baby deemed unfit to become a soldier. This was actually admired by other Greek states. Regardless of the ethics of the situation, it illustrates very well that we are fundamentally social and cultural beings. There is not currently such a thing as a solitary human. There are hermits and feral children, to be sure, but feral children ally themselves with other social species and hermits have previously been social.

Heideggers (note the lack of apostrophe) insistence that all being is being towards death, and that death is a solitary event which is never present within one’s own existence and is always in the future, has been criticised for ignoring the natal aspect of our existence. There is a kind of solitariness in existentialism, and the introduction of others is perceived as a threat or an onerous responsibility. And of course in a sense it is. What we owe to other people can be perceived as a great burden, but it’s so much more than that. We also have an origin, although we can’t perceive that origin because our minds are insufficiently organised at the start of our existence for that to happen. Heideggers view seems in a sense to be that of a sole individual standing alone before an abyss, who had no parents or family, who made everything himself, and who is male. Sartre’s view of the Look portrays the awareness of oneself as an object for the other’s subjectivity as irredeemably negative, and of course objectification, for instance sexual objectification by the male gaze, is indeed pretty negative, but away from that objectifying context there is the responsibility one has towards others, and what one owes them, because without them one would not exist.

This is where I get to Dominic Cummings. He has been much vilified recently, and I’ll come back to that attitude in a minute, for taking his family to Durham rather than observing the lockdown rules. In a way, he can’t be blamed for this because he’s a product of the isolating attitude engendered by Thatcher’s “there’s no such thing as society: there are only individuals and their families”, i.e. the view that we’re all separate even though we’re using a language invented by a whole community, are eating food produced by an army of farmers and other workers and emerged from a person’s body who themselves emerged from another’s all the way back to when a placental syncytial virus infected early eutherians back in the Cretaceous. And we rely on that virus too. We owe our existence as placental mammals to that event, and the many other viruses which have written their genes into our DNA over aeons.

This ecological idea of interdependence between people, and also between us and our planet or Universe more widely, doesn’t seem to be controversial from a scientific perspective. Ecology, linguistics, archaeology and the rest are all entirely respectable academic disciplines, but the fact of interdependence was applied to human relationships during the nineteenth century by Marx and Engels.

As well as being a theory of socio-economic relationships, Marxism arose out of Hegels dialectical idealism as expressed in his Phänomenologie des Geistes of 1807. In its form as understood by Marx and Engels, it has become dialectical materialism, which is based on several metaphysical principles: contradictions exist in the real world; entities are dynamic rather than static and they exist in relationships with other entities. All of these things are necessary conditions of entities, and incidentally when I say “entity” I mean “a thing with distinct and independent existence”, except of course that nothing is independent at all. Given that Marxism is said to arise out of this ontology, one might consider it to be uncontroversial. The problem, though, is that in fact Marxism does not seem to rest on these principles as firmly as one might hope. Marx is said to have “toyed with” the idea of using dialectical materialism as a means of explaining commodification, but given that he doesn’t seem to express this explicitly and it doesn’t seem to detract from understanding the idea, maybe it isn’t as clearly built on these foundations as might be thought. In fact there have been other attempts to approach Marxism, notably the post-war efforts to forge a different kind of political philosophy referred to as “Analytical Marxism”. This involved the view that the Marxist theory of history, which clearly would involve the “thesis-antithesis-synthesis” of historical materialism and the idea of dynamism – that nothing can be realistically considered as static and frozen in a particular instant – was in fact obscurantist and entirely unnecessary. As far as I know, analytical Marxism is now dead and discredited, although I don’t know the details. However, if it could be shown that a political theory could emerge logically from a rational and evidence-based view of reality, it would be good.

A rather similar phenomenon, I’ve long thought, is found in Jean-Paul Sartre’s adherence to Marxism. In his preface to Frantz Fanon’s Les Damnés de la Terre, Sartre states it to be a work in which “The Third World finds itself and speaks to itself through his voice”. All that is fine and good, since Fanon was in fact from the Caribbean and had a right to do that without false consciousness, and I don’t object to it. What I do object to is the feeling that Sartre is using his own terminology and philosophy as a kind of “bolt-on” appendix which is not organically part of his Marxism, and in fact I think he does that throughout. When you look at existentialism as a whole, it’s really about the individual rather than our relations and unity with the world, and it’s hard to see how it could be compatible with Marxism except through some kind of fancy word play which actually signals privilege and knowledge-hoarding on the part of the writers concerned.

One of the questions which arises for me in connection with politics is whether the individual is important. There is, first of all, a sense in which one’s duties towards each individual are supreme, and we also need to recognise that each individual has a unique perspective informed by her experiences of life. From a left-wing perspective, this can be linked to the idea that the working class know what works for them in production, but that information may not be communicated back to the bosses, to use a rather outmoded model of how a business might work. Thus information is lost and production is less efficient than it might be. But this is also a right-wing idea, potentially: an industry ought to be able to run itself rather than be regulated because it understands better than the political class how to manage its affairs most effectively. There are also more atomised needs, for instance in terms of unique aspects of personality. I appreciate this because as far as I can understand it I am very atypical neurologically in a way which doesn’t seem to fit particularly well into any particular diagnosis – I’m neurodiverse and there are aspects of my personality which can be classified as, for example, gender incongruence, ADHD and possibly being on the autistic spectrum, but none of these things is “textbook”. And it never is, for anyone. One of the more startling experiences of taking a consultation is that the very occasional client turns out to be a textbook case of a person with a particular condition. In that sense we are all individuals and it’s a moral imperative to take that into consideration. You have to consider, for example, that some people have peanut allergies or button phobias. That looks like a flat, straightforward medical implication.

But on another level, individuals are not important. Here in England, Henry VIII started the Reformation; in the Holy Roman Empire it was Martin Luther who did so. If by some artifice we went back in time and ensured that the Battle of Bosworth Field was won by Richard III and his cronies rather than the Tudors, maybe the Reformation here would have started later and been started by someone other than Henry VIII with his perceived need for a male heir and therefore an annulment or divorce, but it would still have happened. That king is like a chesspiece. It doesn’t matter to a game of chess that the king has a particular design, other than that it’s distinguishable from a queen or pawn and moves in a particular way, and being in check or achieving checkmate is possible in innumerable ways, but the game is still won or lost, as is the establishment of Protestantism. The individual does not matter in politics. Our Margaret Thatcher is the British version of Ronald Reagan.

Likewise Dominic Cummings is not important. The fact that he broke the lockdown, probably as a result of neoliberalism convincing him that he’s an isolated individual who doesn’t owe anything to anyone else, is not the point. He doesn’t matter as himself. The way in which he does matter is that he represents a particular irresponsible attitude which has been encouraged by a particular social environment and set of attitudes which this society has been encouraging for decades. But that doesn’t mean we should blame him for that, because he is, like all of us, the product of his environment. If we accuse him personally, we’re playing a game which allows Jeremy Corbyn, for example, to be criticised on the basis of his personality rather than his ideas, and that’s a double-edged sword.

I haven’t met Dominic Cummings in person, but the impression given by the representations I’ve seen on this screen and others is of a non-conformist “weirdo”. When we feel tempted to use that word, we should check ourselves, because maybe for other people we too are weirdos. I don’t understand why he doesn’t conform to the usual standards of grooming and dress set by the generally blandly besuited individuals to whom we are accustomed, but I don’t hold that against him. I know that I don’t even know how to conform, or what conformity is, except for a vague collection of things such as the tendency for male establishment figures to wear suits. There may even be some kind of fellow-feeling there, unless of course Cummings’ image is carefully constructed, as it very well may be.

So to conclude this bizarre pell-mell rant, I would say this about Cummings. He is a chesspiece in the impersonal game of politics, who has fallen into his position through historical determinism as the representative of an unsustainable attitude, and who has set a bad example by being typical of the current political environment. As such, it doesn’t matter who he is; in another world, or perhaps another country in this one, there’s another person in the same position. His personality put him where he is, but that personality is the end of a long chain of events which simply means he’s the wrong person in the wrong place and time doing the wrong thing. I don’t care who he is, how he looks, anything like that. What’s important, as always, is that the impersonal forces of history have pushed us all into a situation where it’s particularly clear that we can’t carry on as we have. In a more real sense than usual, Cummings is a dinosaur: perfectly adapted to his environment, but the environment has just been hit by an asteroid and his kind must cease to be influential. The future belongs to the birds and the larger mammals. But I don’t care who he is, and this shouldn’t be made personal.

Alien Cows and Cookie Dough

Most people know what a pufferfish is, and of course they’re interesting. I only recently managed to find out whether they inflate themselves with water or air – it’s mainly water, unless they’re taken out of the water. Since it’s a stress response, while it might be fun to get one to do it, it isn’t good for them. This means, of course, that the dolphins who seem to use them to get high, by stimulating that stress response and the release of tetrodotoxin, are not being very nice if they have any sense of empathy, which raises all sorts of interesting questions I’m going to ignore for now.

Fugu, by the way, are pufferfish rather than porcupine fish, but the two groups are closely related and in the same order, the Tetraodontiformes. Another family in this order is the Ostraciidae or boxfish. Whereas pufferfish and porcupine fish are covered in spines to defend themselves against predators, boxfish have their own sort of armour in the form of hexagonal plates on their skin adapted from scales, which give their bodies a “boxy” shape and make them very rigid, unlike the clearly highly pliable skin of pufferfish and porcupine fish. They’ve taken the same “idea” in the opposite direction, and protect themselves just as well as their relatives: instead of making themselves too spiky, occasionally too enormous to be swallowed, and poisonous as well, they’ve turned themselves into tanks. Boxfish are apparently kept as aquarium fish, which is a little surprising to me, and again this could take me into the interesting but twiddly area of the ethics of aquaria. Okay, just a bit: I am interested in the idea of keeping ornamental seaweed or other marine or freshwater plants in an aquarium in a kind of abstract way, and I suspect that there would be little in the way of ethical problems in keeping a caecilian in one provided she’d been bred in captivity, but beyond those, it’s probably not ideal. I’ll never get round to it anyway, and it feels like too big a responsibility to have to take care of fish, unlike human children, who are a cinch.

Anyway, boxfish. Regular hexagons behave differently from the other regular polygons with few sides. They tile a flat surface excellently, as squares and equilateral triangles do, but they can’t wrap round to close up a three-dimensional solid, because their angles are too wide to leave any room to fold at a corner. There is one highly abstract exception to this which kind of fits into the series of Platonic polyhedra (tetrahedron, cube, octahedron, dodecahedron and icosahedron), which is that an infinite hexagonally-faced regular polyhedron could exist. I can’t remember what they’re called. This does, however, raise the question of what happens to the plates on a boxfish at the edges and corners.
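A quick worked version of that angle argument, just to make it concrete (this is standard geometry rather than anything specific to boxfish): the interior angle of a regular hexagon is

\[ \frac{(6-2)\times 180^{\circ}}{6} = 120^{\circ}, \]

so even the minimum of three faces meeting at a corner gives \(3 \times 120^{\circ} = 360^{\circ}\) – a flat sheet, with nothing left over to fold up into a corner. Compare regular pentagons, where \(3 \times 108^{\circ} = 324^{\circ}\) leaves an angular deficit, which is why the dodecahedron can close up while an all-hexagon solid can only exist in the infinite sense mentioned above.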

On the whole, fish tend to swim by undulating their bodies in a moving horizontal S shape, unlike aquatic mammals such as whales who do so vertically. Boxfish can’t do this because their trunks are rigid. Instead, they slowly row through the water with their fins alone. They don’t need speed because they can secrete poison into the surrounding water and are difficult to get into. Consequently they have warning colouration.

Two of the boxfish are called cowfish. This brings to mind an outdated, pre-Darwinian idea that everything on land had to have a marine counterpart, so for example there are dragons and dragonfish, elephants and elephant seals and so on. This led to the oddity of the insistence that there were such entities as sea bishops, who as far as I can tell don't exist, and it's quite odd in fact to think of an office like that as having a biological form. Is there an accountant fish? It's long seemed to me that there are in fact two sets of these. On the one hand there are “sea” versions, as in sea cow, sea lion and the like, and on the other the “fish” versions such as lionfish and cowfish. I also wonder if there are aerial versions too. In any case, cowfish do exist. They're up to about fifteen centimetres long and they have horns, hence the name, although they're also slow like bovines. At this point I should probably allow myself a brief digression to point out that the weird English language has no generic basic word for these animals, only words for varieties of them, which I think is probably because we're so close to them conceptually that we can't see the wood for the trees. That's not true in many other languages by the way. It would be interesting to see what Frisian does.

Cowfish have horns for the same reason porcupine fish and pufferfish have spikes – to make them hard to eat. So far as I know they don't charge and attempt to impale predators with them. But it's the fact that they have horns which makes it possible to depart into a bit of worldbuilding.

We’re so familiar with life on this planet that a lot of the time it seems there’s no other options but to have organisms of that form, and in fact there is some justification for this. Ant eating mammals are very similar in form to each other though they’re not closely related, there have been marsupial wolves and big cats and South America used to be home to very horse-like animals. In the sea there are dolphins now, but previously the very similar ichthyosaurs and also a load of large fish and sharks which were again quite similar, their forms strongly dictated by the laws of physics. A second line of possibility emerges from the idea of simple or easily generated shapes such as trees and spirals, which turn up repeatedly in all sorts of life forms. Spheres are a good example, turning up in such organisms as sea urchins and coronaviruses, and means that if complex organic life large enough to see exists at all, tribbles almost certainly do. Another very common form on the submicroscopic scale, which oddly isn’t found at all in organisms or organs (such as fruit or eggs) visible to the naked eye is the regular icosahedron. Virions are so often icosahedral as to seem the rule rather than the exception, and it’s quite odd that there doesn’t seem to be, for instance, a crunchy icosahedral fruit or a cactus-like icosahedral plant with spines at its vertices.

Animal symmetry has a strong tendency to be bilateral, like a butterfly for example, where one side is a mirror image of the other. The major exception to this is found among the echinoderms, including starfish, sea urchins and sea cucumbers, all of which have pentaradiate symmetry like that of pentagons and pentagrams. This is thought to be because, with an odd number of plates, each joint between plates – a potential weak point – lies opposite the solid middle of a plate on the other side rather than opposite another joint. Very early in the history of animal life there were even triradial animals, based on triangular symmetry, and the ancestors of all vertebrates are thought to have been completely asymmetrical. Hence simple geometrical shapes are influential in the structure of living organisms. In the case of boxfish, this shape is the hexahedron, more specifically cubes and rectilinear cuboids. There is, however, a weakness in this shape, suggested by the purported reason for pentaradiate symmetry: the vertices and edges lie directly opposite other vertices and edges, so weak points face weak points. Moreover, a box can be distorted without altering the lengths of its edges, unlike a tetrahedron, whose triangular faces make it rigid. Due to this weakness, there is selective pressure for a cuboid animal to develop reinforcements at the corners, and this opens up an interesting possibility: alien cows.

Imagine a rectilinear cuboid body of a mobile terrestrial animal, perhaps a grazer. The vertices of this animal could be strengthened with projections, namely horns above and limbs below. As the Norn riddle has it, “føre honga, føre gonga, føre stad apo skø” – “four hang, four go (walk), four stand up to (the) sky” – except this has no udders. At this point, the aphis can be used as inspiration. This is a giant aphis, in a sense, although not a sap-drinker. The trunk is entirely inflexible, so the “head”, although it has a mouth, two eyes and horns, has no neck. Therefore the mouth is on the end of a kind of tube and the “grass” is nibbled at that end before being swallowed upwards. The other end of the body contains a cloaca out of which calves are born. These have to be quite small because the body has no room for expansion, but this is also true of aphids, so it doesn't require egg-laying. The pair of projections at the back are in a way analogous to udders, although these are not mammals and don't secrete milk for their young in the mammalian sense. Like those of aphids, they secrete a nutrient liquid, maybe like cookie dough.

As I mentioned yesterday, aphids and waterfleas share a practically identical reproductive strategy. Over much of the active period of the year they’re almost exclusively female and give birth to pregnant offspring without mating. Later in the season, males appear and mate with the females, producing eggs which then preserve the species over the winter. This can be made to work for the alien cows.

These cows don’t “know” they’re cows, and therefore it’s important not to boumorphise them. They’re alien animals on an alien world, and not vertebrates. Their bodies have armoured exoskeletons like lobsters, though not jointed, and they grow like arthropods, shedding these boxes and hiding away in caves while their new boxes harden. This leads to plains strewn with abandoned cow skeletons which are used by another species for shelter, in other words as huts. This other species is centaur-like, six-limbed and the front limbs have hands with opposable thumbs. They are in fact tool users, but they have an ancient symbiotic relationship with the cows, just as ants have with aphids on Earth. They “milk” these cows for their dough, on which they feed. This benefit is repaid by the centaurs keeping the cows safe from potential predators who also live on the plain.

A couple of things to note about these sentient centaurs. Their housing needs are satisfied by the cows rather than plain technology and are instinctive. Perhaps more importantly, although they aren’t vegan their relationship with the cows is truly symbiotic and they have no choice about it. They’re farmers, but instinctive farmers. It isn’t part of their culture because culture is everything you don’t have to do. A centaur without access to cow dough is as doomed as a human without gut flora. The ethics of this situation are interestingly different from dairy farming. Nonetheless, as well as having these two givens in their life, the centaurs do have technology and culture. They take the dough home with them and bake it into cookies, which they eat. If they ever decide to go into space and settle there, they’re going to have to take their cows with them, or rather, it probably wouldn’t occur to them not to, which makes the eggs rather convenient for them.

There is a fairly obvious problem with this scenario. Waterfleas and aphids live less than a year, but these alien cows (and they are mainly cows because they're almost all female), being larger and needing to grow, have to live longer because they need to be able to eat enough. The trouble is that in order to be habitable, a planet would seem to have to have a year no more than about two “of our Earth years” long. The hotter and larger the star, the further out its Goldilocks zone, but also the shorter its lifetime and the less time available for these organisms to evolve. If a star is more than about 40% more massive than the Sun, its main-sequence lifetime is shorter than the four and a half billion years or so Earth took to produce us, so it would have rendered Earth and most of its solar system uninhabitable by the time human-type life was able to evolve, assuming Earth's typical. This seems to rule out an ecosystem with cows of this kind, because to have a year long enough for them to mature it would have to be in an orbit too far out to support life, like Jupiter's or Saturn's. Fortunately there's an answer to this, found in our own system, and it starts by demolishing one assumption we tend to make about life outside this solar system: that it's found on planets alone. This world is not a planet. It's a moon of a planet bigger than Jupiter.
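
To put very rough numbers on that, here's a back-of-the-envelope sketch using the standard main-sequence scaling relations, with everything in solar and Earth units; the exponents are approximations, so treat the result as indicative rather than exact:

\[
L \approx M^{3.5},\qquad a_{\mathrm{HZ}} \approx \sqrt{L} \approx M^{1.75},\qquad
P_{\mathrm{HZ}} \approx \sqrt{\frac{a_{\mathrm{HZ}}^{3}}{M}} \approx M^{2.1},\qquad
t_{\mathrm{MS}} \approx 10\,\frac{M}{L} \approx \frac{10}{M^{2.5}}\ \mathrm{Gyr}.
\]

Plugging in a star 40% more massive than the Sun gives a habitable-zone year of about \(1.4^{2.1} \approx 2\) Earth years and a main-sequence lifetime of about \(10/1.4^{2.5} \approx 4\) billion years – roughly the age of the Earth, which is exactly the squeeze described above.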

The moon Io is hot and covered in active volcanoes, to the extent that mapping it is a Forth Bridge-type task due to the constant remodelling its surface undergoes from the eruptions. This is despite the fact that at Jupiter's distance from the Sun, the average temperature at the planet's cloud tops is -145°C. Io manages to be so hot because Jupiter raises enormous tides within it, and the other large satellites in the Jovian system keep its orbit just eccentric enough that the flexing never stops, heating it via friction. Hence some kind of “Super-Jupiter” could have an Earth-sized satellite heated by tidal flexing in the same way, not to the degree Io is, but enough to give it a habitable climate. But this is a gargantuan planetary system even compared to Jupiter. Excluding Earth, which seems to be a special case, one of the largest moons relative to its primary's size is Triton, at around a five-thousandth of Neptune's mass, and moreover constituting 99.5% of the mass of all Neptune's moons put together. In order to have a similarly proportioned satellite, Earth would have to be orbiting a planet around fifteen times Jupiter's mass, and even then it looks like the other moons would be too small and far away to exert strong enough forces. But it isn't quite that bad. A planet with only forty percent of Earth's mass could be habitable for us, and life which evolved there could manage with less than that. At a pinch, it would only need to be slightly larger than Mars to have liquid water on its surface, but that's probably going too far.

Assume, therefore, that a planet with forty percent of the mass of Earth is the largest moon in a Jovian-style system, proportionate to Ganymede but, as it were, in Io's position, though sitting outside the primary's Van Allen belts so as to avoid the worst of the ionising radiation. A proportionately larger “Jupiter” would be sixteen times the real planet's mass, which is still small enough to avoid becoming a star. One of the largest known exoplanets is CT Chamaeleontis b, which has around seventeen times Jupiter's mass and may therefore be a brown dwarf. A planet twelve or thirteen times the mass of Jupiter is probably right at the upper limit for a planetary object as opposed to a brown dwarf; a brown dwarf is arguably a kind of failed star, would produce its own radiation and might therefore be too hot to allow an otherwise Earth-like moon to be habitable. But it is possible.
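
As a rough check on that sixteen-times figure, using the published masses (so the arithmetic is approximate):

\[
\frac{M_{\mathrm{Ganymede}}}{M_{\mathrm{Jupiter}}} \approx \frac{1.5\times10^{23}\ \mathrm{kg}}{1.9\times10^{27}\ \mathrm{kg}} \approx 8\times10^{-5},
\qquad
\frac{0.4\,M_{\oplus}}{8\times10^{-5}} \approx \frac{2.4\times10^{24}\ \mathrm{kg}}{8\times10^{-5}} \approx 3\times10^{28}\ \mathrm{kg} \approx 16\,M_{\mathrm{Jupiter}},
\]

so a moon of forty percent of Earth's mass, in Ganymede's proportion to its primary, does indeed call for a primary of around sixteen Jupiter masses.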

A cow’s gestation period is close to a human’s and they reach puberty averaging at about a year. Given that they are born pregnant, these alien cows would, assuming a similar time scale, be less than nine months old when they first give birth, and would therefore probably need to be fully grown by then unless the first calves are smaller than subsequent ones. This gives scope for several generations of cows in a Jovian year of 11.86 of ours. Jupiter, though, has no seasons because it has no axial tilt, and in any case this moon would be constantly heated. The other gas giants, though, do have seasons because they are tilted and their days vary in length, with the exception of Uranus whose axis is practically at right angles to the orbit. This would influence day length and therefore the ability of plants to grow, so in spite of the fairly stable climate, there would still be seasons due to the proportion of time during which light is available. The light is also weaker than on Earth due to the planet being further from the Sun. Consequently, overwintering still makes sense and a situation can be imagined where the eggs lie dormant for several years before hatching out in the spring, whereupon several generations of cows would ensue. Late in the summer, bulls would appear, mate with the cows and eggs would once again be laid before all the cows and bulls die off for the winter, less food being available. In the meantime, the centaurs could hibernate, waiting for the new cows to appear in the spring.

This scenario allows for some interesting explorations. For instance, there's a species of intelligent, technological but instinctive farmers who, if they wished to explore space and settle elsewhere, would have to take their cows with them, and who spend several years of each local year sleeping, which is in fact quite useful for travelling between the stars even with only sublight spacecraft; they would probably also be surprised to find life on actual planets as opposed to moons. Their ethics and values could be different from ours because of the necessity of farming animals, as opposed to our omnivory, which allows us to be vegan. This is all very fruitful, even though they just eat cookie dough.

My Glistening Secretions

This is going to sound quite pretentious, but I can assure you it isn't: I wish I could have writers' block. I don't, of course, but this doesn't mean I can produce good stuff. In the past I've pointed out that there is a sign called hypergraphia – compulsive writing. There are various manifestations of this, including people who write all over their walls and probably ceilings in a manner which somehow reminds me of hoarding, another feature of my personality. Another practice described as hypergraphia is the compulsive keeping of a highly detailed journal, rather along the lines of the ‘Rain Man’ “serious injury” journal – “squeezed and pulled and hurt my neck”. People can find themselves writing on toilet paper, and I once wrote an essay entirely on till receipts – as a medium rather than a topic – although I did later put it on A4 paper. Such behaviour is a sign rather than a condition as such; there is no diagnostic label “hypergraphia”. It can be part of Geschwind Syndrome, a manifestation of temporal lobe epilepsy involving hypergraphia, hyperreligiosity, atypical sexuality and circumstantial conversation. Anyone reading my stuff will have noticed the circumstantiality of my writing. Digression is my norm.

I have started to blog. It is not difficult enough. It’s always verbose rather than pithy and it flows, in a sense. It’s a secretion of my brain. Perhaps a glistening secretion. But is that glistening aesthetically pleasing or disgusting? I have no idea, and that’s the problem. Not having any insight into other people – I only know how to be myself – I can only conjecture that they have an inner critic which slows their flow: an encumbrance as herbalists might call it. I’m not sure if there’s one around here to ask, so I have to guess. This stuff, it flows out of me in typically verbose and circumstantial style, from day to day, like an exudate, as it were, and I feel the urge to show it to people on this blog and elsewhere.

There’s an apparently non-veganisable joke about someone who goes to a doctor and says “I think my brother is mad because he thinks he’s a chicken”. The doctor replies, “Should I arrange for him to see a psychiatrist”, and she answers, “well I would, but we need the eggs”. This is, I hope, my situation. I hope people need my eggs. Incidentally it has to be a man who thinks he’s a chicken because it adds a certain transphobic je ne sais quoi to the joke. It also brings up the issue of gestation and reproduction, because we seem to be assuming here that the eggs are sterile whereas this may be unwarranted. Maybe it’s parthenogenesis. This leads me to aphids of course.

Aphids are known to be “farmed” by ants. They suck the sap from plants and – well, what happens next? The ants milk the aphids for a sweet, sticky substance called honeydew, on which the ants feed. An obvious comparison to dairy farming can be made here although unlike that, this is an instinctive arrangement and form of symbiosis between ants and aphids. What isn’t clear to me is whether this stuff they produce is a secretion or an excretion. Are the aphids simply “overflowing” with plant juice and squeezing it out of their rear ends like dung or are they specifically producing it from something like glands on their backs? This is of course the problem with my own writing. Is it a secretion or an excretion? The aphis doesn’t know, so why would I?

And at this point I shall permit myself a digression. Aphids and waterfleas have a lot in common in terms of reproduction. For most of their active season they're all female. Their young are born live and pregnant rather than hatching from eggs, and they can colonise an area very quickly because they can reproduce without mating. At a certain time of year, a few male individuals appear – in aphids these are typically the winged ones, because they can fly around and sow their oats more easily, thereby increasing genetic diversity in the population. After mating, the females lay eggs which are tough and resistant to winter conditions, and then they all die off, leaving their eggs to hatch out in the spring and start the process all over again. The parallels between the two are fascinating, and I'd be prepared to bet that if complex life exists on other planets (and moons), it will turn out that many organisms there use the same strategy. But aphids are something else: they're born already pregnant, with the next generation developing inside daughters who haven't themselves been born yet.

Aphids are also somewhat bovine. They’re herbivores who live in herds and are milked by another species. The human practice of milking cows is not instinctive, but like other human activities does parallel those of other species which are instinctive: social insects also farm fungi, so that’s kind of arable farming, though for humans it’s an invention whereas for insects it’s instinctive. But what I don’t know about aphids, getting back to the point, is whether what I’m inaccurately describing as their exudate is an excretion or a secretion. It seems that if it’s a mere excretion, it isn’t very energy-efficient because they’re more than satisfying their metabolic needs and chucking away the rest, which takes extra energy. However, it may also be that the ants protect their herds from ladybirds and other predators, so it may not be wasted energy. If it’s a secretion, somehow that seems to make more sense, although the aphids are still not deriving nutrition from it and it might be energetically cheaper just to poo the stuff out without taking it into their internal environments, processing it and pushing it out again via glands on their backs. At this point I’m tempted to look this up but I’m not going to. I’m going to leave it at that because it raises a similar issue in my mind about my own secretions, or are they excretions?

I produce writing. I churn it out. I am an organism who writes. To me, this writing might just flow out without touching the sides, like regurgitating a textbook, or it might be getting processed and cast into a new form before being secreted. It occurs to me that, strictly speaking, if an aphis isn't secreting her honeydew, she isn't excreting it either: it's coming out without touching the sides, and that's defaecation rather than excretion. Excretion involves processing followed by the expulsion of the "ashes", such as urea or carbon dioxide depending on which end you're talking about (to some extent – sweating is almost identical to urination). Therefore I am secreting, because I don't just regurgitate text but think about it before setting digit to digital device. A lot of that thinking isn't under my own volition and much of it isn't even conscious. Secretion, then.

There used to be a book in my university library rejoicing in the highly memorable title ‘Sex And Internal Secretions’. I don't think I ever picked it up and read it, and oddly I think the allure of the title was probably accidental, or could at least have been passed off as such to the publisher. Perhaps disappointingly, it merely refers to reproductive hormones, and as such clearly demonstrates one way of looking at glands. There are glands with ducts, producing content which then exits the gland by a tube leading to an external surface, bearing in mind that that's topologically external rather than on the outside of the body – I'll come back to that. Then there are the so-called “ductless glands”, which are organs which produce signalling chemicals, i.e. hormones, and release them into the bloodstream. Obvious examples are the pituitary, thyroid, parathyroids and adrenals. Some organs do both, such as the gonads and pancreas, and there are also other organs which produce hormones, such as the lungs, kidneys and heart. The fact that the lungs and kidneys produce hormones regulating blood pressure is, incidentally, supremely relevant right now, as an enzyme which converts one of those hormones is currently being used as a receptor by a certain virus to get into cells and reproduce, which is why it tends to cause viral pneumonia but also makes blood pressure fluctuate wildly and injures some people's kidneys. I too produce internal literary secretions. I overthink and, like anyone else, many of my thoughts never reach the surface – not because they're any less worthy than the thoughts which do get expressed, but because they might not be well-received. In fact I have a history of this. As a schoolchild, I used to write long essays and destroy them rather than handing them in, giving the impression I was lazy and unproductive. The main change today is that I no longer destroy them, because I'm also a hoarder. I did, however, write two thirty-thousand-word biology essays and hand them in at some point, in order to prove to another pupil that a certain teacher didn't have 6/10 as a ceiling for his marks. It didn't really have anything to do with the subject – my motivation was that someone was wrong in the classroom, and since he was homophobic I wanted to retaliate in some way.

As I’ve said, topologically the question arises of what really counts as external and internal. To a topologist, a doughnut and a piece of paper with a pinprick in it are the same shape. Imagining that all solid objects are made of infinitely pliable plasticine which cannot, however, be torn, a doughnut, a ring and a sheet of paper with a tiny hole in the middle, and in fact even a teacup, are all the same shape. Applying this to the human body without being too fussy about the finer features of our anatomy, we are doughnuts, kind of – not quite. We have tubes running through us from our mouths to our anuses whose contents are therefore in a sense external. Likewise, our lungs are open to the air, fortunately, and therefore their surfaces are also outside our bodies. Even the uterus and uterine tubes are external in that sense, which means that a fetus is in a sense merely adhering to an external surface, which sounds a bit precarious but may help emphasise certain issues in reproductive ethics – pregnancies are in a sense occurring outside bodies. Even the abdominal cavity is external in a sense because the uterine tubes are open at both ends. But there is, more or less, an internal environment, consisting of the bloodstream, the bones, muscles and the walls of viscera, and the brain and spinal cord along with many other organs and whole systems. These are often the parts of the body formed from the mesoderm, which is the filling in the sandwich of many animals’ embryos, consisting of three layers which roll up to form a three-layered tube. This is referred to as the internal environment. Of course, on a finer scale even these are external, and there is no interior at all. There’s simply a cloud of subatomic particles whirling around in a void. The Greek philosopher Δημόκριτος (Democritus) once said, νόμωι (γάρ φησι) γλυκὺ καὶ νόμωι πικρόν, νόμωι θερμόν, νόμωι ψυχρόν, νόμωι χροιή, ἐτεῆι δὲ ἄτομα καὶ κενόν – “by convention there is sweetness, bitterness, colour: in reality, there are only atoms and the void”. That’s one possible way of viewing reality, which has problems, but does express a truth, and it means that in a sense there is no exterior or interior.

At this point I’m hoping to be able to climb out of this metaphysical hole I’ve dug myself with this runaway metaphor and apply it once again to creativity. I suppose all of my thoughts and writings do occur in the world. I presume that after my death, if my writings survive they might end up getting read by some poor benighted victim, or maybe they too will be destroyed, like Kafka wanted done with his. In that sense, the stuff I’ve actually put to paper and not eaten, discarded or ignited is merely internal temporarily, like my physical body: it will eventually become part of the world and be publicly read, or rather, the possibility exists that it will emerge into the sunlight. Most of it won’t, as it’s only ever existed in my thoughts, which by the way may also be external. Gottlob Frege, when he wasn’t busy being a proto-Nazi, made the interesting observation that thoughts are not so much the contents of the mind as things which exist “out there”, rather like Platonic forms, waiting to be discovered. Although he didn’t mean the same thing by “Begriff” (concept) as most people, the general concept is more widely applicable. Maybe works of art do exist out there waiting to be discovered. Maybe nothing is ever invented. Instead, the human mind is merely a device which opens tunnels into a vast multidimensional bladder from which innumerable creative works shoot under pressure into the physical world and get splurged onto paper, canvas, TV screens, pianos and websites. There are certainly examples of very similar works and tropes which are not, however, the result of plagiarism. At least two short stories about mathematicians involve a character holding blackboard chalk in his mouth like a cigarette. ‘The Time Traveler’s Wife’ is to me annoyingly similar to ‘Slaughterhouse Five’ and I’m prepared to believe that isn’t plagiarism. Although it’s a cliché today, 1977’s ‘Lucifer’s Hammer’ and 1979’s ‘The Hermes Fall’ are both about a massive astronomical object threatening to hit Earth and I believe there was a court case about the plot similarities. It seems, however, to be entirely accidental, except that ’twas the season for asteroid impact novels. But there are stories out there waiting to be written, and they already exist even though nobody has ever thought of them, and maybe never will.

Getting back to the question of blockage: a ducted gland with a blocked passage – what herbalists call encumbrance, but which could also be called obstruction – is generally not a good thing. The contents will build up and a cyst will form. The pancreas is the main gland which secretes digestive enzymes in the body, and normally produces and discharges them into the duodenum, where they break most food down into absorbable states. It's right next to the gall bladder, which makes sense because bile's main function is to emulsify the fat in food, such as it is at that point, breaking it into microscopic droplets and helping the enzymes secreted by the pancreas to get at it. Sometimes, of course, gallstones form, and these can slide out into the common duct shared by the gall bladder and the pancreas. When this happens, the digestive enzymes can back up in the pancreas rather than being released into the external environment and, most unfortunately, proceed to digest the pancreas itself, and having done that go on to digest much of the rest of the abdominal organs. It's probably one of the worst ways to die, and it's caused by the failure of pancreatic secretions to get out there into the world of the digestive system. If I don't write this stuff down somewhere, something similar happens to my mind. It's another example of having to express one's “insanity” in order to maximise mental health. If I didn't do this, and I suspect this applies to many other people, I would be lost in my internal musings and it would become increasingly difficult to engage with the world, even in relatively normal ways like going shopping or cooking dinner. The external world recedes from me if I don't do this.

Most of the time, the word secretion seems to bring nasty fluids to mind to which we have evolved instinctive revulsion in order to protect us. This applies mainly to our own bodies. We’re not keen on mucus, pus or sweat on the whole, and even substances which remind us of them can be hard to engage with. However, not all secretions are necessarily nasty. The labiates (I’m supposed to call them “lamiates” nowadays but I prefer a system which actually describes the organisms to one which just names them after one genus) attract pollinating insects with glands which secrete substances very wont to evaporate and diffuse through the air, which also gives them scent and flavour to human beings. Most culinary and fragrant herbs are labiates, such as mint, rosemary, marjoram, thyme, sage and lavender. These are aesthetically pleasing to many humans (other fragrant herbs are available, such as the umbellifers (“apiaceae” for heaven’s sake!)). Hence some secretions are aesthetically pleasing, so it might be worth airing them.

I hate pearls because they’re a response to a foreign body in a living animal, which to me makes them like pus. In fact to me they even look a bit purulent, like blobs of discharge from a boil. Therefore somewhat aside from the fact that they aren’t vegan, they’re distasteful to me. However, many other people seem to like them, to the extent of making jewellery from them. I would hope that my own glistening secretions lead to similar impressions, regardless of how much to me they seem to have festered and gone septic, and maybe I’m in with a chance.

My other problem is that I’m not good at endings.

A Language Written By The Victors

Learning another language is generally supposed to be good for your brain and mental health, whether or not you have anyone to communicate with. Hence Sarada is currently learning Classical Greek and finding it very stimulating. In the meantime, although I'm not formally learning it, it has piqued my interest in classical culture. For instance, I didn't previously fully appreciate that many of the tropes we take for granted in drama had to be thought up by someone, such as having more than one character in a piece of drama. Certainly we have the likes of Alan Bennett's ‘Talking Heads’, and isolation due to Covid-19 has led to the production of more monologue-type drama, or something close, such as ITV's Isolation Stories series, but on the whole we expect more. Also, a lot of Greek drama seems to have been effectively musical theatre, comedies weren't taken seriously for a long time because they weren't serious, and so on. Politics might be supposed to be outside the realm of drama, or maybe not, but it too is affected by what TV Tropes calls “Early Installment Weirdness”: tyranny, far from being a pejorative term, was actually considered a viable way to run a state, and democracy was openly unpopular compared with aristocracy, for clearly expressed reasons – bearing in mind that Greek democracy wasn't anything like democracy as we know it today, since hardly anyone was considered a citizen. Of course there is a pale ghost behind all this for that kind of reason – Classics is generally about dead white males, with some interesting exceptions such as Sappho and the Black Roman emperors Septimius Severus and Caracalla. Interestingly, Classical Mediterranean culture was basically ethnically colour-blind, and homophobia wasn't a thing either, but that's not to say that there aren't major problems with the likes of patriarchy and slavery.

Greek has generally just been Greek. Unlike Latin it hasn't given rise to a whole family of languages spoken today, although it has been a major influence on other languages, chiefly nowadays through the use of its script and vocabulary in scientific, mathematical and other technical realms. So extensive is this, in fact, that I can recognise the vocabulary of most texts written in Classical Greek, although my grammar isn't so hot. Greek also lent its script all over the place, crucially of course to Latin itself, but also to Old Church Slavonic and through that to many Slavic languages and the written languages of Soviet-influenced countries, and also to Gothic, Etruscan and Coptic, although in each case modified somewhat. But it has one living descendant: Modern Greek. In a sense Modern Greek is to Ancient Greek as Italian is to Latin, which provokes me into wondering what European languages would've existed if Greece had managed to maintain its European political ascendancy and take over the Roman Empire rather than the other way round. But it does mean Greek isn't particularly useful compared to Latin as a gateway to other languages. Gothic and Coptic both borrowed a lot from it but are no longer enormously useful, although the latter is at least still used seriously, mainly liturgically, and serves as a counter to Arab dominance in North Africa.

There are of course all sorts of reasons for learning languages other than being able to communicate in or understand them. They introduce a new way of conceiving of the world, for instance. English is unusual compared to many other languages in having a single word for “know” and two separate words for “do” and “make”. It also avoids using “thou” and has only one modern pronoun for “we”, sharing the latter with most other Western languages. Many other languages have a dual first person pronoun and separate inclusive and exclusive dual and plural personal pronouns, and in fact English used to have dual first and second person pronouns, namely “wit” and “git” (pronounced “yit”). Another reason for learning a dead language is the help it can give you in learning related languages, and this is particularly true of Latin, ancestral to languages now spoken natively by getting on for a billion people. The same applies to Sanskrit with the languages of Northern India and other parts of the Subcontinent. Other branches of the Indo-European language family may not have good written records going back millennia, or they may simply not have been very productive. Greek, Albanian and Armenian are each the only representatives of their branch, and Tocharian and Hittite have no living descendants. There are also branches of the family which have left hardly any trace: there's a group near Mongolia, separate from the Tocharians (who lived in today's Turkestan), who can fairly confidently be asserted to have spoken an Indo-European language, but it was never written down and completely disappeared without even leaving any loanwords in other languages, so it's gone forever. There may even be a branch of Indo-European spoken in the Pacific Northwest of North America by ancient non-European settlers, although this is highly controversial and seems pretty doubtful.

Due to the cultural biases of Western academia, the best-reconstructed ancient language not based on written records of any kind is Proto-Indo-European itself. Something like half of humanity speaks a language descended from it, including English, Bengali and Serbo-Croat – in other words, most of the languages of the Indian Subcontinent, many Central Asian languages and most of the widely spoken languages of Europe. It isn't entirely clear when and where this language was spoken, its speakers' own name for it is unknown and the people and culture involved are unclear, but it's possible to track down some details. For instance, if the words for a particular tree or animal are related across a wide range of scattered languages, the chances are that the tree or animal was native to the region where the parent language was spoken, and if the name for a piece of technology or other cultural feature is similarly widespread and its date of invention is known from archaeology or other records, this helps to determine the era during which the language was spoken. Because Hittite and its relatives are themselves the oldest written records of Indo-European which have come down to us, the parent language must have preceded that civilisation, which began around 1600 BCE. Likewise, the earliest Greek records, which are incidentally pre-alphabetic, using a syllabic script called Linear B, date from 1450 BCE or so. Another possible clue is genetics, which I'll come back to. There are shared words for “wheel” and for shorter edged weapons but not swords, so the original people must've post-dated the invention of the wheel but weren't familiar with swords.

There are four significant theories as to where the original speakers lived. One is the discredited and Nazi-adopted Northern European theory. This is based on the erroneous idea that the Aryans, and that is who we're talking about, were fair-haired, blue-eyed white people. It even went as far as the claim that they were originally from the Arctic ice caps, which I suppose enables people to say they were the original Europeans. Like some other ideas, although it was adopted by the Nazis it has a somewhat less dishonourable history and was convenient for propaganda purposes. It is, moreover, true that the original Germanic homeland is in southern Scandinavia, including the southern part of what's now Sweden. One reason for supposing the Aryans came from Scandinavia, or perhaps from what became Poland in the twentieth century, is the word “lachs” for salmon, found in all sorts of Indo-European languages in various forms. Salmon of the European kind are only found in rivers emptying into the Atlantic and its associated seas, so there seems to be a problem with the original speakers having this word; this is in fact known as the “Salmon Problem”. However, it's now thought that the word which became “lachs” originally meant “trout” and didn't refer to the leaping fish. There are of course other salmon, such as the sockeye of Pacific North America, but these are not strictly relevant, although it would be interesting if the Tsimshian people had a word like “lax” for them, because it would support the idea that they speak an Indo-European language. In fact the neighbouring Haida language calls one kind of salmon “chíin tluwáa”, which is nowhere near and is in any case a compound noun.

There’s a second nationalistically influenced theory of Aryan origin which claims that they arose from Northern India and spread across Eurasia. This one, like the Arctic theory, doesn’t really work at all and seems to be highly politically motivated. There used to be a bias in Indo-European philology towards Sanskrit which made reconstructions of the original language look a lot more like Sanskrit than it’s at all likely to have been, partly because Sanskrit, particularly the language of the Vedas, is one of the most conservative languages in the family which is still decipherable. There are political issues raised by the idea of an Aryan invasion of India. For instance, it’s been seen as justification for the British Raj. However, it requires the Indus Valley civilisation to be Aryan and doesn’t account for the language having words for many things which didn’t exist there at the time. It’s used by Hindu nationalists to justify racism and Islamophobia, which doesn’t make it untrue, but it is nonetheless untrue.

The other two, more respectable theories are the Anatolian and the Kurgan. The Anatolian theory involves the claim that since Hittite and its close relatives are the oldest recorded Indo-European languages, the family probably started in Asia Minor – modern Turkey. It's also noted that linguistic variation tends to be greatest close to a language's point of origin. This can be seen with English. Americans often perceive British English as having a myriad of accents and dialects which vary a lot over a small area. What's happened is that the places where English is now spoken across the world were settled by English speakers relatively recently and the language spread fairly rapidly, not allowing much time for change to accumulate, so whereas there are indeed variations in North American English, they're nothing like as big as they are in England. This is seen as applying to the Anatolian theory because, for example, the Hittites spoke one language, the Luwians another, then there's Greek, Armenian, Illyrian (the branch from which only Albanian survives) and Slavic, all in quite a small area, plus Indo-Iranian further to the east. This is associated with Nostratic, a probably non-existent language spoken during the last Ice Age which was supposed to be ancestral to Indo-European and a wide range of other languages such as Finnish and Tamil, partly because of the considerable linguistic diversity in the Caucasus. However, Proto-Indo-European had words for a number of things which were only invented long after the date the Anatolian theory would require for the parent language, and the method used to date it, Bayesian analysis, really dates words rather than languages, and those words could have been loans.

The most popular current theory, then, is the Kurgan Hypothesis. I realise I’ve used the word “theory” all the way through, so I should probably come clean and start calling these hypotheses, although it’s a bit unfortunate that I’ve now referred to two ideas popular with quasi-fascist groups as “theories” and this as a mere hypothesis even though it’s better supported. The Kurgan Hypothesis is that the Aryans were a nomadic people living in the Chalcolithic – the “Copper Age”, which is immediately after the Neolithic or New Stone Age – in the area north of the Black Sea. They’re called Kurgans because of their burial mounds, which are called that in Russian. These were the Yamnaya people, who lived in that area from about 3300 BCE, and are the strongest candidates for the original Aryans. Genetic studies also support this: the type of Y chromosome they had is now found all across the area where Indo-European languages were spoken up until about 1500 CE, with its strongest concentration along the Atlantic coast, including the West of Scotland, Ireland and Brittany. In other words, the Celtic fringe. It is in fact the Y chromosome type carried down on the father’s side of my own family. I’ve heard it said that there’s a stretch of DNA which is found most in areas most remote from the place where Homo sapiens originated, which is therefore seen as conferring adventurousness and curiosity, and it’s possible that this is what they had in mind although it’s very Eurasia-centric. If this is true, it might be expected that even if we expand into the Galaxy, the shock front of our settlement will also be marked by this same piece of DNA. It also reminds me of Enya’s song ‘Aldebaran’, which imagines a Celtic spacecraft reaching that star. Maybe.

Unfortunately, “adventurousness” may be a bit of a euphemism. What it may in fact mean is that the Yamnaya, assuming that's who they were, managed to spread their genes, languages and culture across much of the planet – and, let's face it, by now most of it. These people were and are basically “The West” in cultural terms, although with some qualifications, because the North Indians are them too and their religion was decidedly non-Abrahamic. What “adventurous” might mean in this context, sadly, may well be “belligerent and plundering”. It's thought that their warlike, aggressive ways led to them subjugating the more peaceful, less patriarchal cultures which had prevailed in those areas before they got there. And although you can put a different spin on them, the Bhagavad Gita, the Eddas and the Iliad all come across as pretty murderous and violent. As a child, I mainly skipped over the Iliad and went on to the much more interesting Odyssey, although apparently the former can be seen as Achilles avenging his same-sex lover, so all may not be lost. ‘The Silence Of The Girls’ also apparently takes a different view, telling it from a female perspective, and it hasn't escaped my attention that I'm talking about the spread of a Y chromosome here rather than of mitochondrial DNA.

Consequently there are a couple of disappointments and concerns about Proto-Indo-European. As I said, when you learn a language, you also pick up a whole world view. It may be similar to your own in some respects, because on the whole any language is known to other humans, or at least invented by one or more of them, but it's bound to differ in several ways. This is one reason why language loss is so tragic. It's a bit like losing, to rainforest devastation, a species of plant which would've cured cancer – I realise that's a crude example, but you know what I mean. Just to take a random example, an Australian language was lost a few decades ago which had different words for different kinds of hole, distinctions which English lacks. The Q-Celtic languages express possession as something being “at” a person, states like hunger as being “on” them, and needed and desired items as being “from” them, which gives one a new perspective on needs, wants and property. Many languages distinguish between alienable and inalienable possession: “my leg” and “my body odour” are not possessed in the same way as “my Rubik's cube” or “my Ford Cortina”, unless one is a Cortina/Rubik's Cube/human hybrid (just realised that refers to something I wrote elsewhere on another blog, but never mind, I'll leave it in). So learning Proto-Indo-European would provide insight into a prehistoric perspective on the world. If one subscribes to any extent to the Noble Savage myth, this might be expected to provide some kind of holistic, peaceful, unity-with-the-Cosmos type of take on things. Unfortunately, while for all anyone knows that might really have existed, it isn't how the Yamnaya saw the world at all, because they were the conquerors. The people they plundered, crushed, murdered and exploited would've had interesting languages too, but they're mainly gone – the only survivor in Europe is Basque so far as I can remember, although there are other candidates such as Burushaski, and older examples like Elamite and Etruscan. The other disappointment is that although this is a prehistoric language, just about, it isn't actually a Stone Age one. These people had copper weapons and knew how to smelt lead. They had just domesticated the horse, which is probably one reason why they conquered the world.

Another reason for learning some Proto-Indo-European is a bit more promising. Just as learning Sanskrit helps you pick up Sinhala, Hindi and Bengali, among many others, being their ancestor, and Latin helps you with Ladino and Dalmatian (which is sadly extinct due to the last speaker getting blown up during some road work in 1898), so Proto-Indo-European should help you with “everything”! Not literally everything of course, but most languages originating in Europe and many of those from Asia, which of course spread during the colonial era to much of the rest of the globe. It also confronts you with the issue of complexity.

One thing which really bothers me about language change is that it tends to go from complex to simple. Languages generally become easier to pronounce, lose complex grammar and so forth as time goes by. Among Indo-European languages, English is an extreme case of this. Most of our verbs have only four forms: walk – walks – walked – walking. Even the most complex verb in the English language, “be”, has only eight forms in present-day English prestige dialects. Almost all noun plurals end in “-s” or “-es”, and as for cases, most people will just look at you blankly if you even mention them. In the past, of course, English was much more complex. You only need to look at Shakespeare or the King James Bible to see all the “thees” and “thous” and their appropriate verb forms. There also used to be more strong verbs – verbs like drive – drives – drove – driven – driving. “Help” and “climb” used to be strong verbs too, along with many others. As I've already mentioned, we used to have dual pronouns. Go back a bit further and we had five cases, three numbers (singular, dual and plural) and appropriate verb forms for those numbers. English is now such a simple language in terms of inflections that if its history and connections weren't known, it might not even be recognised as Indo-European from its grammar alone. Ancient Greek and Sanskrit, of course, have much more complex grammar. The former has five cases and originally had a dual number. Sanskrit is notoriously complex, more so even than Greek, because it's more conservative. If you then reconstruct the ancestor of all these, Proto-Indo-European, it's on another plane of complexity, although still simpler than a lot of languages which survive today, such as the Inuit languages and Navajo. The same process leads to the loss of difficult consonant clusters – nobody pronounces the K in “know” or “knight” any more, and all those “ough” spellings just look confusing to anyone who doesn't know their history. Hence Proto-Indo-European is a fortress of complexity and difficulty in pronunciation, spoken by a warlike culture which devastated whole continents in late prehistory.

The reason this bothers me is a bit like the way history and ways of life bothered Sarada when she was a child. She lived through a time of perceived increasing fairness, mercy and the decline of various kinds of prejudice. She was aware that in Victorian times things were not so good, and extrapolating that led her to the conclusion that the distant past must’ve been unimaginably awful. I would tend to agree with her, although there also seems to be something of a cycle in these things and although, for instance, Georgian England was doubtless an awful place to live because, for example, of the Bloody Code which got people executed for stealing a handkerchief, it was also less prudish about sex, though in a very misogynistic and patriarchal way. I have a similar problem with the decline in complexity in language, which seems to be universal. It strongly suggests that there was a time when languages were so complicated that nobody could ever have learned them properly in a human lifetime, which was in any case a lot shorter than it is in the richer parts of the world today even taking infant mortality out of the equation. The almost extinct northern Japanese language Ainu, for example, used to use single words for entire sentences until fairly recently, and a lot of other languages still do. More than ninety percent of Inuit words are only spoken once, which is one reason why the myth of words for snow can’t be true or misses the point. It makes me think of cave people babbling gibberish at each other which occasionally made just a little bit of sense.

I have no intention of plunging into Proto-Indo-European and seriously learning it as if it were a going concern. It is in any case rather hard to do so, because much of it has disappeared without trace. There are three very important sounds in the language whose presence can only be inferred from their influence on pronunciation in its descendants, and which had long since vanished by the time writing was invented. Nonetheless it does have a draw to it, and I will be learning some. It's just a great pity that one of the few scraps of prehistoric culture which survives is the property of such an aggressive and destructive culture, although it might explain a lot about the nature of today's world. This is not to say that there wasn't a lot of violence and oppression elsewhere, so much as that all the older cultures which may have been more peaceful and laid back, and nicer to live in for most of their members, just got slaughtered and raped. Pretty depressing really.