My Writing Style

I’m fully aware that I’m too wordy, that I don’t stick to the point and that I talk about arcane topics a lot, not just on here but in face-to-face conversations. This is partly just what I do, in the sense that I’m either unable to do otherwise or it’s become a bad habit. In a world of shortening attention spans and loss of focus, though, I feel that, however ineptly I do it, this is still worth doing.

In the process of doing this, I continued this blog post in a fairly lightweight word processor called AbiWord, which we stopped using because it had a tendency to crash without warning, leaving no salvageable document behind, and it proceeded to do exactly that, so this is in a way a second draft. One of the many features AbiWord lacks, and this is not a criticism because its whole philosophy is to avoid software bloat, is a way of assessing reading age. Word, and possibly LibreOffice and OpenOffice, does have such a facility, which I think uses Flesch-Kincaid. Sarada drew a blank when I mentioned this, so it’s likely not widely known, and in any case I looked into it and want to share what I found.

There are a number of ways of assessing reading age, and as I’ve said many times, it’s alleged that every equation halves the readership. When I was using AbiWord just now, I decided to write the formulae in a “pseudocode” manner, but now that I’m on the desktop PC with Gimp and stuff, I no longer have that problem, although of course MathML exists. Does it exist on WordPress though? No idea. Anyway, the list is:

  • Flesch-Kincaid – grade and score versions.
  • Gunning Fog
  • SMOG
  • Coleman-Liau
  • ARI – Automated Readability Index
  • Dale-Chall Readability Formula

Flesch-Kincaid comes in two varieties, the first being the reading-ease score, which ranks readability on a scale of zero (hardest) to one hundred (easiest). It works like this:

206.835−1.015(average sentence length)−84.6(average syllables per word)

It interests me that there are constants in this and I wonder where they come from; as far as I can tell they were fitted statistically against readers’ comprehension-test scores, the grade version having been recalibrated for US Navy training manuals in the 1970s. It also seems that subordinate clauses don’t matter here and there’s no distinction between coordinating and subordinating conjunctions, which seems weird.

The grade version is:

0.39(average sentence length)+11.8(average syllables per word)−15.59

This has a cultural bias because of school grades in the US. I don’t know how this maps onto other systems, because children start school at different ages in different places and learn to read officially at different stages depending on the country. Some, but not all, of the others do the same.
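To make the two Flesch-Kincaid variants concrete, here’s a minimal sketch in Python. The syllable counter is a deliberately naive stand-in of my own (counting vowel groups); real implementations use pronunciation dictionaries, so scores will differ slightly from Word’s.

```python
import re

def _syllables(word):
    # Very naive syllable estimate: count groups of consecutive vowels.
    # Every word is credited with at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text):
    """Return (reading_ease, grade_level) for a plain-text passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    asl = len(words) / len(sentences)                      # average sentence length
    asw = sum(_syllables(w) for w in words) / len(words)   # syllables per word
    ease = 206.835 - 1.015 * asl - 84.6 * asw
    grade = 0.39 * asl + 11.8 * asw - 15.59
    return ease, grade
```

On a short, simple sentence such as “The cat sat on the mat.” the ease score comes out above one hundred and the grade below zero; neither scale is clamped.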

Gunning Fog sounds like something you do to increase clarity, and I did wonder whether there were two people out there called Gunning and Fog, but in fact it’s named after one man, Robert Gunning, the “fog” being the obscurity it measures. It goes like this:

0.4((words/sentences)+100(complex words/words))

“Complex words” are those with more than two syllables. This is said to yield a number corresponding to the years of formal education, which makes me wonder about unschooling to be honest, but it’s less culturally bound than Flesch-Kincaid’s grade version.
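A sketch of the Fog index, under the same caveat that counting vowel groups is only a rough stand-in for proper syllable counting:

```python
import re

def gunning_fog(text):
    """Fog index: 0.4 * (words/sentences + 100 * complex_words/words),
    where 'complex' means three or more syllables (estimated naively
    by counting vowel groups)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    def syllables(w):
        return max(1, len(re.findall(r"[aeiouy]+", w.lower())))
    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))
```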

SMOG rather entertainingly stands for “Simple Measure Of Gobbledygook”! Rather surprisingly for something described as simple, it includes a square root:

1.043√(polysyllabic words×(30/sentences))+3.1291

This is used in health communication, so it was presumably the measure that led to diabetes leaflets being re-written for a nine-year-old’s level of literacy. I don’t know what you do if your passage is fewer than thirty sentences long unless you just start repeating it. Again, it gives a grade level.
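SMOG’s usual published form, assuming the standard constants 1.043 and 3.1291, can be sketched like this, again with my naive syllable estimate:

```python
import math
import re

def smog_grade(text):
    """SMOG grade: 1.043 * sqrt(polysyllables * 30 / sentences) + 3.1291.
    Intended for passages of at least thirty sentences; 'polysyllables'
    are words of three or more syllables, estimated by vowel groups."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    def syllables(w):
        return max(1, len(re.findall(r"[aeiouy]+", w.lower())))
    poly = sum(1 for w in words if syllables(w) >= 3)
    return 1.043 * math.sqrt(poly * 30 / len(sentences)) + 3.1291
```

With no polysyllables at all, the grade bottoms out at the constant 3.1291, which is one way of seeing why the measure misbehaves on very short passages.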

Coleman-Liau really is nice and simple:

0.0588L−0.296S−15.8

L is the mean number of letters per one hundred words and S is the mean number of sentences per one hundred words. This again yields grade level, although it looks like it can be altered quite easily by changing the final term. It seems to have a similar problem to SMOG with short passages, although I suppose in both cases it might objectively just not be clear how easily read brief passages are.

The ARI uses word and sentence length and gives rise to grade level again:

4.71(characters per word)+0.5(words per sentence)−21.43

Presumably it says “characters” because of things like hyphens, which would make hyphenation contribute to difficulty in reading. I’m not sure this is so.
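Since Coleman-Liau and the ARI both work from characters rather than syllables, they’re easy to sketch together; treating “characters” as letters for one and alphanumerics for the other is my assumption here:

```python
import re

def coleman_liau(text):
    """0.0588*L - 0.296*S - 15.8, where L = letters per 100 words
    and S = sentences per 100 words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    letters = sum(c.isalpha() for c in text)
    L = 100 * letters / len(words)
    S = 100 * len(sentences) / len(words)
    return 0.0588 * L - 0.296 * S - 15.8

def ari(text):
    """4.71*(characters per word) + 0.5*(words per sentence) - 21.43,
    counting alphanumeric characters only."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    chars = sum(c.isalnum() for c in text)
    return 4.71 * chars / len(words) + 0.5 * len(words) / len(sentences) - 21.43
```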

The final measure is the Dale-Chall Readability Formula, which again produces a grade level. It uses a list of three thousand words which fourth-grade American students generally understood in a survey, any word not on that list being considered “difficult”:

0.1579(difficult words/words×100)+0.0496(words/sentences)

with 3.6365 added if more than five percent of the words are difficult.

There are different ways to apply each of these and they’re designed for different purposes. I don’t know if there are European versions of these or how they vary for language. The final one, for example, takes familiarity into account as well as length.
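A sketch of Dale-Chall, with a tiny placeholder set standing in for the real three-thousand-word familiar list (the constants 0.1579, 0.0496 and the 3.6365 adjustment for passages where more than five percent of words are difficult are the published ones, as far as I know):

```python
import re

# A tiny stand-in for the real ~3,000-word Dale-Chall familiar-word list;
# any word not on the list counts as "difficult".
FAMILIAR = {"the", "cat", "sat", "on", "mat", "a", "is"}

def dale_chall(text, familiar=FAMILIAR):
    """Raw score: 0.1579 * (% difficult words) + 0.0496 * (words/sentence),
    plus 3.6365 if more than 5% of the words are difficult."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    pct_difficult = 100 * sum(w not in familiar for w in words) / len(words)
    score = 0.1579 * pct_difficult + 0.0496 * len(words) / len(sentences)
    if pct_difficult > 5:
        score += 3.6365
    return score
```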

When I’ve applied these to the usual content of my blog, the reading age usually comes out at university graduate level, which might seem high, but it leads me to wonder about rather a lot of stuff. For instance, something like sixty percent of young Britons today go to university, so text at that level, if the figure is accurate, could in principle be understood by more than half of the younger adult population. However, the average reading age in Britain is supposed to be between nine and eleven, some say nine, and that explains why health leaflets need to be pitched at that level. All that said, I do also wonder how nuanced this take is. I think, for example, that Scotland and England (I don’t know about Wales, I’m afraid, sorry) have different attitudes towards learning and education, and that in England education is often frowned upon as making someone an outsider to a much greater extent than here, which would of course drag down the average reading age. That isn’t, however, reflected in the statistics, and the Scottish reading age is said to be the same as the British one. I want to emphasise very strongly here that I am not in any way trying to claim that literacy goes hand in hand with intelligence. I have issues with the very concept of intelligence, to be honest, but besides that, no, there is not a hereditary upper class of more intelligent people by any means. Send a working-class child to Eton and Oxbridge and they will come out much the same as anyone else who went there. I don’t know how to reconcile my perception with the statistics.

But I do also wonder about the nature of tertiary education in this respect. Different degree subjects involve different skills, varying time spent reading and different reading matter, and I’d be surprised if this led to a homogeneous increase in reading age. There’s a joke: “Yesterday I couldn’t even spell ‘engineer’. Today I are one”. Maybe a Swede? Seriously though, although that’s most unfair, it still seems to me that someone with an English degree can probably read more fluently than someone with a maths one, and the converse holds for, well, being good at maths! This seems to make sense. The 1979 book ‘Scientists Must Write’ by Robert Barrass tries to address the issue of impenetrability in scientific texts, and Albert Einstein once said, well, he is supposed to have said a lot of things he actually didn’t, so maybe he didn’t say this either, but the sentiment has been expressed that if you can’t explain something to a small child you don’t understand it yourself.

I should point out that I haven’t always been like this. I used to edit a newsletter for brevity, for example, and up until I started my Masters I used to express myself very clearly. I also once did an experiment, and I can’t remember how this opportunity arose, where I submitted an essay in plain English and then carefully re-wrote it using near-synonyms and longer sentences and ended up getting a better grade for the “enhanced” version, and it wasn’t an English essay where I might’ve gotten marks for vocabulary. On another occasion I was doing a chemistry exam (I may have mentioned this elsewhere) and there was a question on what an ion exchange column did, and I had no idea at the time, so I reworded the question in the answer as something like “an ion exchange column swaps charged atoms using a vertical cylindrical arrangement of material”, i.e. “an ion exchange column is an ion exchange column”, and got full marks for it without understanding anything at all. This later led me to consider the question of how much learning is really just about using language in a particular way.

So there is the question of whether a particular style of writing puts people off unnecessarily and is a flaw in the writer, which might be addressed. This is all true. Even so, I don’t think it would always be possible to express things that simply, and it’s also a bit sad to be forced to do so rather than delighting in the expressiveness of our language. Are all those words just going to sit around in the OED, never to be used again? But it can be taken too far. Jacques Lacan, for example, tried to make a virtue of writing in an obscurantist style, partly to mimic the psychoanalyst’s experience of not grasping what a patient is saying by creating reading without understanding, and he was particularly concerned to avoid over-simplifying his concepts. Now I’ve just mentioned Lacan, and I don’t know who reading this will know about him. Nor do I know how I would find that out.

I’m not trying to do what he does. Primarily, I am trying to avoid talking down to people and to buck the trend I perceive of shrinking attention and growing tendencies to dumb things down, just not to think clearly and hard. Maybe that isn’t happening. Perhaps it’s my time of life. Nonetheless, this is what I’m trying to do, for two reasons. One is that talking down to people is disrespectful. I’m not going to use short and simple words and sentence structures because that to me bears a subtext that my readers are “stupid”. The other is that people generally don’t benefit from avoiding thinking deeply about things and being poorly-informed. It’s in order here to talk about the issue of “stupidity”. I actually have considerable doubt that the majority of people differ in how easily they can learn across the board for various reasons. One is that in intellectual terms, as opposed to practical, the kind of resistance found in the physical world doesn’t exist at all. This may of course reflect my dyspraxia, which also reflects what things are considered valuable. Another is that the idea of variation in general intelligence is just a little too convenient for sorting people into different jobs which are considered more or less valuable or having higher or lower status, and as I’ve doubtless said before, the ability to cope with boredom is a strength. I also think that the idea of a single scale of intelligence, which I know is a straw man but bear with me, has overtones of the great chain of being, i.e. the idea that there are superior and inferior species with the inferior ones being of less value.

There are, though, two completely different takes on stupidity.


As I’ve said before, I try not to call people stupid, for two reasons. One is that if it’s used as an insult, it portrays learning disability as a character flaw, which it truly is not. It is equally erroneous to deify the learning disabled. It’s simply a fact about some people which should be taken into consideration. Other things could be said about it, but they may not be relevant to the matter in hand. The other is that the idea of stupidity implies an unchangeable quality of the person in question, and this is usually inaccurate. An allegedly stupid person usually has as much control over the depth and sophistication of their thinking as anyone else has. Therefore, I call them “intellectually lazy”. For so many people, it’s actually a choice to be stupid. As noted earlier, there are whole sections of society where deep thought is frowned upon and marks one out as an outsider, and it’s difficult for most people to go against the grain. This is not, incidentally, a classist thing. It exists right from top to bottom in society. Peer pressure is a powerful stupefier.

There is another take on stupidity which sees it as a moral failing, i.e. as a choice having negative consequences both for others and for the “stupid” person themselves. This view was promoted prominently by the dissident priest and theologian Dietrich Bonhoeffer in the 1930s, after Hitler’s rise to power and in connection with it. The idea was later developed by others. This form of stupidity might need another name, and in fact when I say “intellectual laziness”, this may be what I mean. It could also go hand in hand with anti-intellectualism.

Malice, i.e. evil, is seen as less harmful than intellectual laziness, as evil carries some sense of unease with it. In fact it makes me think of Friedrich Schiller’s play „Die Jungfrau von Orleans“ with its line „Mit der Dummheit kämpfen Götter selbst vergebens“ – “Against stupidity the gods themselves contend in vain” – part of a longer passage:

Unsinn, du siegst und ich muß untergehn!
Mit der Dummheit kämpfen Götter selbst vergebens.
Erhabene Vernunft, lichthelle Tochter
Des göttlichen Hauptes, weise Gründerin
Des Weltgebäudes, Führerin der Sterne,
Wer bist du denn, wenn du dem tollen Roß
Des Aberwitzes an den Schweif gebunden,
Ohnmächtig rufend, mit dem Trunkenen
Dich sehend in den Abgrund stürzen mußt!
Verflucht sei, wer sein Leben an das Große
Und Würdge wendet und bedachte Plane
Mit weisem Geist entwirft! Dem Narrenkönig
Gehört die Welt–

Translated, this could read:

Folly, thou conquerest, and I must yield!
Against stupidity the very gods
Themselves contend in vain. Exalted reason,
Resplendent daughter of the head divine,
Wise foundress of the system of the world,
Guide of the stars, who art thou then if thou,
Bound to the tail of folly’s uncurbed steed,
Must, vainly shrieking with the drunken crowd,
Eyes open, plunge down headlong in the abyss.
Accursed, who striveth after noble ends,
And with deliberate wisdom forms his plans!
To the fool-king belongs the world.

Now I could simply have quoted the line in English of course, but as I’ve said, I don’t believe in talking down to people and it’s a form of disrespect, to my mind, to do that, so you get the full version.  This is spoken by the general Talbot who is dismayed that his carefully laid battle plans are ruined by the behaviour of his men, who are gullible, panicking and superstitious, in spite of his experience and wisdom, which they ignore.  I think probably the kind of “stupidity” Schiller had in mind was different, perhaps less voluntary, but this very much reflects the mood of these times.

Getting back to Bonhoeffer, he notes that intellectual laziness pushes aside or simply doesn’t listen to anything which contradicts one’s views, facts becoming inconsequential. It’s been said elsewhere that you can’t reason a person out of an opinion they didn’t reason themselves into in the first place. People who are generally quite willing to think diligently and carefully in other areas often refuse to do so in specific ones. People can of course be encouraged to be lazy in certain, or even all, areas, because it doesn’t benefit the powers that be that they think things through, and this can occur through schooling and propaganda, and nowadays through the almighty algorithms of social media, or they may choose to take it on themselves. Evil can be fought, but not stupidity. Incidentally, I’m being a little lazy right now by writing “stupidity” and not “intellectual laziness”. The power of certain political or religious movements depends on the stupidity of those who go along with them. This is also where thought-terminating clichés come in, because Bonhoeffer says that conversation with a person who has chosen to be stupid often doesn’t feel like talking to a person but like eliciting slogans and stereotypical habits of thought which come from somewhere else. It isn’t coming from them, in a way, even if they think it is. Hence the word “sheeple”; and telling people to “do your own research”, which in fact often means “watch the same YouTube videos as I have”, is particularly ironic, because it’s the people issuing that instruction who are thinking less independently and originally than the people being told.
Thinking of Flat Earthers in particular right now, whom I’m going to use as an example because Flat Earthism is almost universally considered absurd and is less contentious than a more obviously political example, there are a small number of grifters who are just trying to make money out of the easily manipulated, a few sincere leaders and a host of “true believers” who are either gullible or motivated by other factors, such as wanting to be part of something bigger or having special beliefs hidden from τους πολλους. I’m hesitant to venture into overtly political areas here because of their divisive nature, but I hope we can at least agree that the Flat Earthers are wrong, almost deliberately and ostentatiously so.

He goes on to say that rather than reasoning changing people’s minds here, liberation is the only way to defeat this form of stupidity. That external liberation can then lead to an internal liberation from the stupidity itself. These people are being used, and their opinions have become convenient instruments of those imagined to be in power.

This is roughly what Bonhoeffer’s letter said, and it can be found here if you want to read it without some other person trying to persuade you of what he said. In fact you should read it, because that’s what refusing to be stupid is about. Also, he writes much better than I do. That document continues with a more recent development, ‘The Basic Laws of Human Stupidity’, which sets out five laws and was written in 1976 by the economic historian Carlo Cipolla. The word “stupidity” in his usage refers not to learning disability but to social irresponsibility. I’ve recently been grudgingly impressed by the selfless cruelty of certain voters who have voted to disadvantage others with no benefit to themselves. A few years ago, when the Brexit campaign was happening, I was of course myself in favour of leaving the EU and expected it to do a lot of damage to the economy, which was one reason I wanted it to happen, but I would’ve preferred a third option where the “U”K both left the EU and opened all borders, abolishing all immigration restrictions. This is an example of how my own position was somewhat similar to that of the others who voted for Brexit, but in many people’s case they were sufficiently worried about immigration and its imagined consequences to vote for a situation which they were fully aware would result in their financial loss. In a way this is admirable, and it illustrates the weird selflessness and altruism of their position, although obviously not towards immigrants. Cipolla’s target was this kind of stupidity: harm to others with no compensating benefit, or even with harm, to oneself. This quality operates independently of anything else, including education, wealth, gender, ethnicity or nationality, and people tend to underestimate how common it is, according to Cipolla. This attitude is dangerous because it’s hard to empathise with, which is incidentally why I mentioned my urge to vote for Brexit. I voted to remain in the end, needless to say.
Maliciousness can be understood and its reasoning conjectured, often quite accurately, but with intellectual laziness (I feel very uncomfortable calling it “stupidity”) the process of reasoning has been opted out of, or possibly replaced by someone else’s spurious argument. This makes intellectually lazy people unpredictable, and means they don’t even have a plan to their own benefit when attacking someone. There may of course be people who do seek an advantage, but those are not the main people. Those are the manipulators: the grifters.

I take an attitude sometimes that a person with a certain hostility is more a force of nature than a person. This is of course not true, but it’s more that one can’t have a dialogue with them, do anything to break through their image of you and so on, so all you can do is appreciate they’re a threat and do what you can to de-escalate or preferably avoid them. This is a great pity because it means no discussion is likely to take place between you, and they’re not going to be persuaded otherwise. They may not even be aware of the threatening nature of their behaviour or views.

Cipolla thought that associating with stupid people at all was dangerous, but of course this feeds into what I’ve known as the reality tunnel problem, although nowadays it tends to be thought of in terms of echo chambers and bubbles. We surround ourselves, aided by algorithms, with people who agree with us, and this fragments society. Cipolla seems to be recommending exactly that, and with over half a century of hindsight we seem to have demonstrated to ourselves that that impulse shouldn’t be followed.

Casting my mind back, a similar motive may have been part of what led to my involvement in a high-control religious organisation. I have A-level RE. This in my case involved studying Dietrich Bonhoeffer, and the approach generally was quite progressive and liberal, including dialogue between faiths, higher criticism and the like. On reaching university, I found that the self-identifying Christians with whom I came into contact were far more fundamentalist and conservative, but because I regarded this as demotic, the faith of the common people as it were, I committed myself to that kind of faith. This is not stupidity in a general sense, as most of the people there could be considered conventionally intelligent, some of them pursuing doctorates for instance. However, they did restrict their critical faculties when it came to matters of faith, and in that respect I was, I think, emotionally harmed by these people, though I don’t blame them for it. This is the kind of selective and deliberate “stupidity” which is best avoided.

I’m aware that I’ve described this all rather unsympathetically and perhaps with a patronising tone. This is not my intention at all, and it may be more to do with the approach taken by the writers and thinkers I’ve used here. I’ve also failed to mention James Joyce at all here, and said little about Jacques Lacan, which may be a bit of an omission. What I’m attempting to show is respect, and what I’m requesting from the reader is focus (and I have an ADHD diagnosis, remember), long attention spans, and complex and nuanced thought. I’m not asking for agreement, but I would like those who disagree with me to have thought their positions through originally, self-critically and with respect for their opponents. I write the way I do because I know people are generally not stupid and can choose not to be.

A Passing Phase

Trigger warning: cancer, infertility.

We humans have long tended to think of ourselves as the pinnacle of creation or evolution. Aristotle, though a better biologist than he was a physicist, organised everything into a “great chain of being”, starting at the bottom with materia prima, unformed matter, and progressing upward through minerals, plants, invertebrates, vertebrates of various kinds and reaching its peak in “man”, and yes I do mean man as he was supposed to be better than woman. Although there were ideas of evolution around at the time, with natural historians wondering if humans had emerged from the water, this wasn’t supposed to be something up which beings ascended. They were just set statically in their positions. Christians later added God to this scale, above humans, although it’s possible Aristotle had already done that. I don’t remember it that clearly.

Thousands of years later, along came Linnaeus, actually Carl von Linné, a botanist who invented Latin binomials, aiming to describe all life in neat categories called genera and species in a work entitled Systema Naturae. Homo sapiens is a good example, another, probably not coined by Linnaeus himself, being Boa constrictor. There’s a sense of security in his system, which has been much modified since he invented it, although the principles remain the same. I don’t know if he had the idea of hierarchy in his system in general, but he certainly courted controversy by including humans in it. Later still, Wallace and Erasmus and Charles Darwin, along with Lamarck, came up with the theory of evolution, leading to a strange set of misconceptions summed up by the question “if we came from monkeys, why are there still monkeys?”. There are a couple of things wrong with this question, as well as with the idea that things move upwards when they evolve, which are worth mentioning now. One is that the more recent approach of cladistics groups organisms as everything more closely related to a particular species than to another, meaning that there’s a clade called simians, including New and Old World monkeys and also the apes, humans among them, but no clade just for monkeys; and since nothing ever evolves out of its clade, insofar as there are monkeys we are still monkeys, and nothing ever stops being one. The other is the idea that evolution is advance and that everything moves “up”.
Just looking at the great apes, there is one species which has evolved less than the others, the orangutan, and because they’ve changed less they retain features in common with the rest. More significantly, human hands are more primitive, in the sense of having changed less, than those of chimps or gorillas, whose hands have evolved for knuckle-walking as well as for handling things. The famous “march of progress” graphic is completely spurious, and also dodgy in various ways, because we didn’t evolve from chimp-like ancestors except insofar as we are chimp-like ourselves, and it’s as true to say that the other apes are descended from us as that we are descended from them. I think I’ve already talked about orangutans on here though.

In other words, in a sense there is no progress. That said, things do get more efficient sometimes. Modern predatory carnivores can run faster than their ancestors and replaced another group of predatory mammals who couldn’t capture prey with their paws but had to use their jaws to do so, for example. But even as far as intelligence is concerned, because humans can use language our short-term memories are much worse than those of chimps and our common ancestors. This is particularly interesting because the recent concern about social media and the internet more generally reducing attention span and concentration is actually only the latest phase in a process which began with the appearance of language, continued with the invention of writing and the growth of literacy and reached a more advanced stage with our current “goldfish” brains (actually goldfish have good memories of course).

Intelligence of the kind we have has been thrown up as something which appears to be useful to us and our ancestors in recent geological times, but to refer to the title of this blog, could be a passing phase. There are problems with being able to learn a lot which animals who don’t need to do this don’t have. Firstly, humans have to learn to do many things which other species can do instinctively, such as walk. Quite often, animals have a simple “party trick” such as spinning an orb web in the case of some spiders, which is not reflected in the rest of their accomplishments, but of course a human could learn to weave a net for a similar purpose. Termites can build arches, but humans can invent arches and learn how to make them from others, by word of mouth, observation, study or muscle memory.

All this comes at a cost. We have a long childhood and in order to reproduce physically (we’re social and cultural beings who also reproduce in the noösphere), ideally we need to get through puberty. We then need to find a partner and wait for pregnancy to produce one or occasionally two or more offspring at a time, who then take up much of our time and most of our energy. I’ve made this a heteronormative account for the sake of simplicity, and there are other possible narratives regarding lifetimes, but whatever they are, we are cultural, we depend on each other and what we do takes a long time, so the same principles still stand.

At the same time, we’re developing goldfish brains in several ways, mainly in connection with digital ICT. We’re outsourcing a lot of our thought. Nowadays, people even use AI chatbots to talk to potential romantic partners. We’re – I mean, I hardly need to say this, feels like a string of platitudes – dominated by social media, fake news, fake images generated again by AI and who knows what else?

In the meantime, we interfere with the biosphere without even thinking about what we’re doing, although the fact that we think and have the kind of intelligence we have leads to the damage we do, even unwittingly. That intelligence, such as it is, is a potential liability to the planet’s life.

While all that is going on, something else carries on upon the sea bed and elsewhere. There are, to take a particular example, animals called placozoa, which are simply irregular, lichen-like layers of cells clinging to rocks and consuming algae and other microörganisms in their vicinity. And then elsewhere there are certain tumours which can be passed from animal to animal. One of these is canine transmissible venereal tumour, a sarcoma usually transmitted by mating between canine animals of several species, including dogs, wolves and coyotes, and growing on the genitals. Another is devil facial tumour disease, a similarly-spread tumour affecting the faces of Tasmanian devils and transmitted when they bite each other during fighting. These tumours and the placozoa spread without needing to find themselves mates, have practically no gestation or maturation period and they don’t need no education. There are also transmissible cancers among bivalves such as cockles and mussels. At the same time, they’re rare.

Henrietta Lacks was a woman whose cervical squamous cell carcinoma is notorious for still thriving seventy-three years after her death; it is effectively immortal and has replaced other carcinoma cell lines growing in labs, to the extent that certain lines have been unwittingly lost by being taken over by her cancer. The cells are known as HeLa cells. I have to mention too that Ms Lacks’s heirs have never seen a cent of the millions of dollars of profit which have resulted from the research done on her cells, and that her name was for a long time completely unknown to the general public.

I know I’ve said all this before, and I’m reiterating it because it occurs to me that this train of thought can develop in a direction I haven’t previously considered. I’m sorry about the repetition, but I have something new.

To repeat what I’ve said previously, another interesting phenomenon is that of organoids. Sometimes, the cells we shed into sewers from our bodies begin new lives briefly by starting to divide and form structures in sewage works. And of course we know that untreated sewage is often discharged into the sea.

Transmissible cancers are admittedly rare, but bear with me.

Putting these bits together, suppose HPV, which is partly responsible for HeLa cells, were to produce just the right mutation in cervical squamous cell carcinoma to make it transmissible in the same way as canine transmissible venereal tumour. This is improbable but at the same time entirely feasible. It’s a malignant cancer able to invade and destroy tissues, including those of the reproductive system, and it can cause infertility. At the same time, cells are shed into the sewers which reach the sea when discharged into it. It’s also passed on during childbirth although not usually to the genitals, and it’s terminal if not treated. This can be expected to spread somewhat like AIDS. When they reach the sea, they continue to divide and attach themselves to the bodies of marine mammals with naked skin such as whales and seals, spreading malignantly into their skin and in the case of seals and the smaller cetaceans killing them, while allowing themselves to be shed into the water where they infect other individuals. Some of them settle on the sea bed and feed on microbes, similarly to placozoa.

The second ingredient is linked to Covid but extended. One of the long-term effects of Covid on some people is cognitive impairment, reported here, although the effects are relatively mild. I’m tempted to measure it in terms of IQ, but that would just give a spurious sense of precision and quantity. Covid is likely to be only an early example of many pandemics, because deforestation and climate change are leading viral vectors such as bats to move into new environments where they’re more likely to come into contact with people. AIDS was probably caused by this four dozen or so years ago, more specifically by the human consumption of bushmeat. Nor does it stretch credulity to expect the after-effects of viral pandemics to cause a reduction in intelligence, although describing it as a reduction assumes some kind of scale, and I’ve already said that scales are somewhat odious, though not in all cases, so it gets a bit difficult to express what I mean by this. What I mean is that people will be less able to solve intellectually demanding problems and think critically.

Now imagine that in this world of attritional cognitive decline, caused by a series of pandemics stemming from deforestation and climate change, we continue to be confronted with various problems, another of which is antibiotic resistance, and not only lack the mental capacity to address them as a species but also have the very institutions aimed at addressing them starved of resources and of the ability to operate together in a global research community, as we’re currently seeing in the US. At this point it might even be necessary for AI to take over, and if it isn’t, bad decision-making could lead to that happening anyway.

This leads me to the third consideration in this mess: AI misalignment. It isn’t that AI is malevolent. The idea was once suggested – Nick Bostrom’s “paperclip maximiser” – that an AI might be instructed to make as many paperclips as possible and go on to convert the whole planet into paperclips, then send out space probes to turn everything else possible in the Universe into paperclips. This is a somewhat silly example, but it’s like the monkey’s paw story of wishing for various things and getting them ironically and malignly granted. Imagine therefore that in this human world of cognitive decline, AIs are instructed to “ensure biological humans survive for as long as possible”, the idea being to guard against something like mind uploading into the cloud or the manufacture of robots with human cognition and our memories copied into them. So they obey the instruction. They locate the currently rare transmissible tumours, place them in vats or perhaps coastal lagoons, guard them effectively, redirect all agricultural food supplies to them and reason that this decision encourages the mindless, unintelligent variety of human cell lines, which is less harmful to the environment than human intelligence and technology. Humans as we understand them are then left sterile, dying of viral infections, gradually less intelligent than before and unable to take care of themselves. Intelligence wanes and dies.

So that ^^^ basically.

And we’re all dead, but on the bright side there are massive vats of cancer tumours all over the world which also leak into the sea where they kill all the dolphins and seals.

Of course, this is a perfect storm of a prospect, and in particular the transmissible tumour angle is quite improbable, but there is a biological argument here. This world of human survival only in the form of cancer is supposed to illustrate that intelligence, which we prize and think of as the pinnacle of some kind of progress, could actually be a passing phase and a liability to the survival of our genes. In our civilisation, education and good critical thinking skills are the kind of thing which excludes the people who have them from contributing to a society dominated by people without them, so whereas this passing phase of liberalism and tolerance would promote the long-term survival of the species, it can’t have a long-term influence unless people are flexible enough to move beyond scarcity-based economics. Ironically, so-called eugenics is also harmful to our long-term survival because it reduces diversity. To give a strictly physical example, a species which varies in its heat and cold tolerance, with some individuals thriving in hot weather and others in cold, would be able to survive fluctuations in temperature over a long period. A world of blond-haired, fair-skinned and blue-eyed people is incestuous. And whereas Musk, for example, might prefer to spread his own genes and favour his own traits, he doesn’t have the broad perspective of what may be adaptive and selected for in the long run.

The short-term benefits of language and shared memory, along with the capacity to act upon them, become brittle not because we’re intelligent but because we’re not intelligent enough. If we were able to anticipate and work through the probable consequences of how we’ve acted in detail, and be vividly aware of them, we might be more resilient in the circumstances we’ve created for ourselves. Maybe it’s the crows next time, or maybe there won’t be another turn. Earth’s story is long and indifferent, and the Medea Hypothesis, the Gaia Hypothesis’s evil twin, captures what this might be about. According to the Medea Hypothesis, far from ushering the planet into a more habitable condition, multicellular life is self-destructive and tends to push it into a situation where only simple single-celled organisms can survive. I’m not sure this is illustrated by this specific trajectory though. It may be more that intelligence is just one of countless possible survival strategies life can manifest, and simple, undirected, arbitrary processes just lead to us blundering into the next phase, which won’t favour intelligence at all. If this is true, it may or may not relate to the state of the human world, or there may be an analogue of that feature. What would an intelligent society look like? Or is what matters intelligence, or wisdom? Have we lived through the period of history where intelligence had much influence on politics or world events? If so, what does that mean for progressive and conservative views? I can’t help but be tempted by the idea that liberal democracy, good though it was, was a brief phase in a few countries which is long since gone. And my reaction to that is not to adopt conservatism, as that clearly doesn’t work and is in any case morally reprehensible. So what is to be done?

Where Are All The Aliens (Part II)?

Last time I decided to write a summary of the various common suggestions which have been offered to explain how, in such a vast and old Universe, with so many stars in so many galaxies which have planets apparently suitable for life as we know it here on Earth, we aren’t aware of the existence of any aliens. However, after writing ten thousand words on the subject I realised I was going to have to divide it up into smaller bits, so here’s the other half, which, in the way intermissions usually occur more than half way through something, is probably going to be shorter than the first half, which covered eleven reasons. Here I plan to cover another ten, so it seems it will work out the way I said! If you want to know how this starts, such as with the Drake Equation, read the first bit of the previous post.

Anyway . . .

Too Expensive To Travel

It might at first look a bit weird to talk about money with aliens, because maybe they haven’t got any or even the concept of money, but in one idealised form economics is about work adding value to things, and that amounts to energy use. Therefore the idea of it being too expensive to travel to other star systems isn’t really based on money so much as the idea that somehow you’ve got to lever yourself into space and ping across interstellar space at amazing speed, and to do that you’re going to have to apply major force to the other end of the lever. This is not economics based on market value either, but on the sheer amount of work that has to be done to achieve this goal.

The Apollo missions simply involved transporting three people and some equipment to our natural satellite, at a distance of only ten times the circumference of our home planet, which at the time was routinely circumnavigated by airliners. I don’t mean to diss the achievement by any means, but it’s important to bear in mind that in comparison to going to Mars or Venus it’s only a short hop. Venus at its closest approach, and it’s also the closest planet to Earth, is, as the rhyme has it, “ninety times as high as the Moon”. It took an incredible amount of effort and risk even to make that relatively short trip. The Apollo program cost $25 800 million, which adjusted to 2020 prices is over a quarter of a long scale billion US dollars, that is, more than $250 thousand million. There was plenty of criticism about the cost, exemplified by Gil Scott-Heron’s poem ‘Whitey On The Moon’:

However, it’s also been calculated that the annual cost of the American space program over that period was less than the total expenditure on lipstick over the same interval. This is a relatively patronising and possibly sexist observation to make, but when I consider how much I spend on lipstick, I’m really quite poor yet I hardly notice it. My lipstick budget is minute. Bear in mind also that it’s realistic to halve that as expenditure per adult, because it’s much more common for women to buy lipstick than men. The cost of the Venus-Mars mission proposed for the turn of the 1970s-1980s CE decade would have been $80 thousand million at 1971 prices, and would’ve sent only one mission, though to two planets. That cost would’ve been close to a long scale billion dollars in 2020 terms. However, the entire Apollo program was only slightly more expensive than Trident, a benchmark I always use to assess what governments consider worth spending money on, so in fact Apollo didn’t really cost that much. Moreover, the money would’ve gone back into the economy, and it’s possible to build on what’s already been achieved. One problem with going back is that it’s a bit like repairing a video recorder. The old equipment is no longer sufficiently integrated – “you can’t get the parts” – and much of the expertise is no longer available because of retirement, deaths and deskilling through not using the relevant talent. Even as it stood, NASA reused much of their stuff. Skylab was based on a Saturn V stage and the Apollo-Soyuz Test Project used the Apollo Command Module. That said, it’s true that much of the paraphernalia was designed for only one purpose: to get astronauts there, land them and get back. The Apollo XIII LEM, for example, was incinerated on re-entry without ever landing, although it did serve as a lifeboat, and a LEM wouldn’t have been suitable for landing anywhere except its intended target. For instance, it would have been destroyed even by the Martian atmosphere.

The cost of space travel may be deceptive. I think it was one of the Ranger probes which only made it a third of the way to Cynthia but had expended 98% of its fuel to get there, meaning that just another two percent would’ve been sufficient. We’re used to an environment where Newtonian physics is obfuscated by the likes of friction, buoyancy and a substantial atmosphere. Take all those away and things become much simpler. Certain things are no longer necessary, such as constant input of energy to retain a constant speed. Therefore, fuel requirements are not so high once a vehicle has left our gravity well, although gravity’s range is infinite.
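The asymmetry described above – almost all the fuel spent getting out of the gravity well, almost none needed to cruise – can be sketched with Tsiolkovsky’s rocket equation. The exhaust velocity is a typical hydrogen/oxygen figure and both delta-v numbers are illustrative assumptions, not mission data:

```python
import math

# Tsiolkovsky's rocket equation: delta-v = v_e * ln(m0 / mf), rearranged to
# give the mass ratio m0/mf needed for a given delta-v.  The mass ratio
# grows exponentially with delta-v, which is why climbing out of the
# gravity well dominates the fuel budget.
def mass_ratio(delta_v_kms: float, exhaust_kms: float = 4.4) -> float:
    """Initial/final mass ratio; 4.4 km/s is a typical LH2/LOX exhaust speed."""
    return math.exp(delta_v_kms / exhaust_kms)

escape = mass_ratio(11.2)   # Earth escape velocity, 11.2 km/s
cruise = mass_ratio(0.1)    # a small mid-course correction, 0.1 km/s

print(f"mass ratio for escape: {escape:.1f}, for a correction: {cruise:.2f}")
```

So a rocket needs to be mostly fuel just to leave Earth, while coasting between planets costs almost nothing extra – much as the probe anecdote suggests.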

It’s been calculated that the Orion starship, which could accelerate up to five percent of the speed of light, would have cost $367 thousand million 1968 dollars. Dædalus would cost $6 long scale billion in 2020 prices. That’s the current price of reaching the nearest star within three dozen years with an uncrewed vessel. However, economies of scale are likely to be involved to some extent, as they would’ve been if the Apollo program had concentrated more on making its equipment and vehicles reusable. Even as it was, it was to some extent feasible to re-employ them, as I’ve said. But if NASA had designed some kind of more general-purpose landing vehicle, they could’ve saved a lot of money further down the line. There’s a kind of disposable short-termism to that decision.
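For a sense of scale, here’s a crude constant-speed sketch of the travel times these designs imply. The 5% of light speed is the Orion figure quoted above; the 12% of c for Dædalus is the commonly quoted design cruise speed, which squares with “three dozen years” to the nearest star; acceleration and deceleration phases are ignored:

```python
# Rough coasting times to Proxima Centauri at about 4.25 light years,
# ignoring the acceleration and deceleration phases entirely.
DISTANCE_LY = 4.25

def years_to_cross(fraction_of_c: float) -> float:
    """Coast time in years at a constant fraction of light speed."""
    return DISTANCE_LY / fraction_of_c

orion = years_to_cross(0.05)      # Orion's quoted top speed
daedalus = years_to_cross(0.12)   # Daedalus's usually quoted cruise speed

print(f"Orion: {orion:.0f} years; Daedalus: {daedalus:.0f} years")
```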

Economics in this context needs to be re-cast, because it’s a big assumption that aliens would have money. What it actually amounts to is work and energy use, but it’s still an issue because there’s usually going to be some energy cost when value is added to goods. Fuel is a good way of illustrating this. I don’t know for sure, but I suspect the hydrogen and oxygen in the Saturn V fuel tanks were produced by electrolysis, and that electrical current had to be generated somehow. Likewise, the plan to use a powerful laser to push a solar sail and accelerate a spacecraft to near light speed would still need something to power the laser. That said, things change in space compared to an Earth-like planet: here, energy is relatively hard to harness but matter is abundant, while in space it’s the other way round. Energy is freely available, from solar radiation and slingshot manœuvres around massive bodies, but matter is scarce. This means fuelling a spacecraft would be relatively cheap, and one suggestion for Dædalus, for example, was to use hydrogen and helium from Jupiter for the hydrogen bombs needed to propel it. It’s possible that ETs would manufacture their materials from hydrogen and helium using processes initiated by solar power or gravitational methods of capturing energy, and this too would make materials relatively “cheaper”.

In terms of recompense, there are different kinds of economy even among humans in the richest countries. Not only is there barter, which may not have been as widespread as often imagined, but also the likes of a gift economy, where people are expected to give presents at Xmas and birthdays. Gift economies also function on a larger scale: the long-term “loan” of pandas by China to other countries springs to mind. Large engineering projects have also been “funded” in ways other than money. Contrary to popular belief, the Egyptian pyramids were not built by slave labour but by workers giving their labour in lieu of taxation, and various organisations today also run on volunteer work. There’s also the possibly rather sinister social media-style reliance on reputation to get people to do things, as depicted in ‘Community’ and ‘Black Mirror’, and functioning to a vast degree in China, where one unlocks access to various facilities by improving one’s reputation in the eyes of the government. This seems disturbing to many Westerners, but in fact it’s not that far from what we’re doing all the time here in a different way, such as by wanting likes on Facebook. A whole economy could be run that way. We don’t even know if aliens exist, so we know even less about how they might organise exchange, but there’s no reason to assume money is how they run their societies if they do exist.

A significant barrier to human space travel is quite possibly democracy as we understand it in liberal democratic societies. The Apollo program was shortened and cut down by the Nixon administration, and large long-term projects generally can be delayed or disappear entirely because of short governmental terms. It’s difficult to imagine America or Europe being able to build pyramids, simply because the project is too long and “expensive” in terms of labour to function well, plus we’d be doing something like building a monument to President Truman or Ramsay MacDonald, neither of whom we consider to be divine. This system, which may be temporary for various reasons, could seriously delay space programs elsewhere in the Galaxy. It could also mean that the kind of civilisations we could end up making contact with would not be democratic in that way, because such societies would have stayed on their home worlds due to the difficulty of sustaining such projects. Among humans here, the idea of liberal democracy is restricted to certain countries and there is no tradition of it in many others. This, in a sense, is the Space Race writ large, because the idea of the Apollo program was largely to attempt to prove that liberal democracy functioned better than “communism”, as the Soviet system at the time was imagined to be. But it may turn out that the US won the battle but lost the war, if we ever encounter other technology-using life. This needn’t be a bad thing: totalitarianism is one alternative, but so are others, such as a post-scarcity society.

To summarise, I don’t think money, or money translated into energy use, would hamper progress towards interstellar travel as such, but the political constitution of alien societies might. On the other hand, a society probably would want a return on its investment, and that could involve making interstellar travel tangibly beneficial to the home world, which could be difficult. Maybe there’s just no profit in it.

Zeta Rays

I’ve mentioned this before, but it’s worth going into again here to collect possible answers to the Fermi Paradox into one place. The first deliberate use of radio on this planet among humans only occurred towards the end of the nineteenth century. Analogue switchoff began little over a century later, and although we still have analogue radio we don’t use it much. Of course, that doesn’t mean radio transmissions have stopped. It just means they are now usually encoded to carry digital signals, and the more efficiently a signal is encoded, the closer it looks to random noise to someone who doesn’t have the key to decode it. Moreover, for all we know there may be a much better way to transmit signals than electromagnetic radiation just around the corner. This leaves us with the situation of trying to detect analogue radio transmissions from other star systems when we ourselves only used them for about a century, or a fiftieth of recorded history. Now suppose we exist as a civilisation for a total of twice the length of recorded history, or ten millennia. One percent of our time will have been spent transmitting in this way. Taking Asimov’s estimate of 530 000 civilisations in the Galaxy, that would mean only 5 300 of them would be using radio waves like this at any one time. It’s actually far fewer, because Asimov’s estimate was that the average suitable planet would support technological species for ten million years, although that’s assumed to be about ten evolutionary “cycles” of intelligent life. By the lower estimate, the closest civilisation currently transmitting would be around a thousand light years away; by the higher, there would only be about four dozen in the entire Galaxy right now, at least four thousand light years away, which in turn means that every civilisation could have stopped listening by the time its signals were received. It’s also a myth that routine radio transmissions are easily detectable from other star systems: it’s been estimated that our own couldn’t even be picked up on Proxima b. A deliberately focussed transmission is another matter entirely though.
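The arithmetic above can be sketched as follows. The uniform-disc spacing model and the 50 000 light year galactic radius are my own simplifying assumptions for the distance estimate; the other figures are the post’s:

```python
import math

# Back-of-envelope version of the radio "duty cycle" argument.
GALAXY_RADIUS_LY = 50_000        # rough radius of the Galactic disc (assumed)
CIVILISATIONS = 530_000          # Asimov's estimate
RADIO_YEARS = 100                # time spent using detectable analogue radio
LIFESPAN_YEARS = 10_000          # assumed total lifespan of a civilisation

# Fraction of its life a civilisation spends transmitting analogue radio,
# hence the number doing so at any given moment.
duty_cycle = RADIO_YEARS / LIFESPAN_YEARS          # 0.01
radio_now = CIVILISATIONS * duty_cycle             # 5300

# Spread those uniformly over the disc: the mean area per civilisation
# gives a typical nearest-neighbour distance.
disc_area = math.pi * GALAXY_RADIUS_LY ** 2
nearest_ly = math.sqrt(disc_area / radio_now)

print(f"{radio_now:.0f} civilisations transmitting at any one time")
print(f"nearest one roughly {nearest_ly:.0f} light years away")
```

The disc model lands near the post’s “around a thousand light years” for the optimistic case.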

It was Jill Tarter who came up with the “zeta ray” statement, and it’s been considered scientifically naïve on the grounds that physics is almost complete and the Standard Model does not predict the existence of any useful means of exchanging signals which is better than electromagnetic radiation. There can be no useful superluminal travel, for example, and although radio waves might not be ideal, the best frequency may well be visible light, and we more or less know that isn’t being used, at least indiscriminately. However, I think this objection takes Tarter’s claim too literally: she was probably saying that a new technique of communication would be found which works better than electromagnetic radiation in the long run. Also, as mentioned before, physics is in crisis, so our physics may not be theirs, in the sense that they may be aware of methods we aren’t because they came across them via a different route. If radio signals are employed, it makes sense to use a concentrated beam aimed at a suitable star system, perhaps one with technosignatures such as the presence of fluorocarbon compounds like CFCs in its atmosphere, but that would mean only the selected targets would receive the message.

It’s also been suggested that the message might not be in transmitted form. If aliens visited this planet in the distant geological past, they may have implanted a message in the genomes of organisms which existed at the time, in such a way that it was likely to be conserved fairly well. Most DNA is non-coding, and although some of it serves purposes which constrain its base-pairs, such as the telomeres which stop chromosomes from fraying at the ends, much of it seems to have no real function. However, it’s difficult to imagine how such a code could persist given the rate of mutation. For it to be conserved by most of a population carrying it, the species would either need to reproduce asexually or the majority of individuals would have to have their genomes modified, which is a very large task. An alternative is that when aliens arrived here, they genetically modified some native organisms for their own purposes, and those modifications would be more likely to survive if they conferred selective advantages. One thing which is fairly clear, though, is that there seem never to have been any long-term biological visitors to this planet, or possibly even short-term ones, because there are no organisms whose genomes are known which are unrelated to native ones – insofar as life originated here at all – the point being that we are all demonstrably related. So there is no message in native genomes even if one was once placed there, and no genetic sign of visitation to this planet, although surprisingly there may be technosignatures, which brings me to . . .
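To see why an unselected genomic message would be hard to keep intact, here’s a rough sketch. The mutation rate, message length, generation time and timescale are all illustrative assumptions of mine, not figures from any study:

```python
import math

# Chance that an unselected 1,000-base "message" survives unmutated down a
# single lineage.  Assumed figures: neutral mutation rate of 1e-8 per base
# per generation, 10-year generations, 100 million years elapsed.
MUTATION_RATE = 1e-8            # per base per generation (assumed)
MESSAGE_BASES = 1_000           # length of the hypothetical message
GENERATIONS = 100_000_000 / 10  # 100 My at 10 years per generation

# Every base must escape mutation in every generation; for small rates this
# is well approximated by exp(-rate * bases * generations).
p_intact = math.exp(-MUTATION_RATE * MESSAGE_BASES * GENERATIONS)

print(f"survival probability ≈ {p_intact:.1e}")
```

The probability is so vanishingly small that, without selection actively preserving it, the message would be scrambled long before anyone could read it.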

The Silurian Hypothesis

I’ve gone into this before and its relevance to the Fermi Paradox may not be entirely clear, but bear with me. It’s named after the Silurians of the Whoniverse, who are somewhat misleadingly named, as they were supposed to have been around in the Eocene rather than the Silurian, but the name sounds good. The general idea is that we are not the first intelligent technological species to evolve on this planet. I myself have to confess that I’ve held two separate sets of belief which relate to this. The first is my belief as a teenager that Homo erectus established a sophisticated technological culture and colonised the Galaxy, then fell victim to a catastrophe affecting this planet during the last Ice Age which wiped them all out. I no longer believe this, but its purpose for me was to counteract von Däniken’s assertions of ancient aliens interfering in human prehistory, which I still believe underestimate human abilities. I later replaced this with the idea that Saurornithoides evolved into a technological species and accidentally caused a mass extinction by crashing an asteroid into the planet – the “left hand down a bit” theory of the Chicxulub Impact. It’s surprisingly difficult to find any reliable evidence to corroborate or disprove the hypothesis that we are not the first high-tech species on this planet, but a number of technosignatures have been identified which we ourselves are producing right now, some of which will leave enduring marks in the geological record. Various possible technosignatures have been suggested, and some are found sporadically in strata of different ages, but interestingly several coincide in the Eocene, making that the strongest candidate for the presence of industrial culture on this planet. This would seem to mean one of two things, making the astounding assumption that such a culture was in fact present at that time: either a species here evolved into a tool-using form and created a civilisation, or we were visited by aliens who had done so elsewhere. The much simpler conclusion is that it merely looks like there were high-tech entities of some kind present here back then, and the signatures have non-technological causes.

However, if there have been no valid signatures other than ours, this is relevant to the Fermi Paradox in two ways. One is that it would mean we were never visited over the four æons during which life has been present here, which suggests there were just no aliens at all. It could be that things have changed since, because for example phosphorus is becoming more common as the Galaxy ages, but it doesn’t augur well for their existence. The other is that if we are the first technological species, the amount of time a suitable planet spends with that kind of life on it could be relatively very short. Asimov’s ten million years is cut in half. In fact, it’s likely to be even shorter, because at the time it was thought that the Sun would spend another five thousand million years on the Main Sequence and still be suitable for complex life, whereas we’re now stuck with only about an eighth of that period, and fewer than seventy thousand civilisations according to his estimate, which incidentally reduces the number of radio-using civilisations in this galaxy to only half a dozen. There is, however, another possibility: that there’s a kind of “phase change” in the history of a life-bearing world where intelligent life becomes a permanent feature of the biosphere. This would make extraterrestrial civilisations much more widespread. On this planet it means that we now have something like six hundred million years of intelligent life to look forward to, which using Asimov’s estimate again makes such civilisations five dozen times as common, revising that figure of 530 000 up to almost thirty-two million, and meaning that the nearest world currently hosting an intelligent technological culture which originated on it is likely to be less than sixty light years away – and that ignores the possibility that closer planets may have been settled in the meantime. If this is true, and if it has happened here, they would’ve had to have had a very light touch not to modify our biosphere noticeably.
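The revision can be sketched numerically. Six hundred million years divided by Asimov’s ten million per planet gives a factor of sixty, which is what reproduces the “almost thirty-two million” figure; the nearest-neighbour estimate below uses a uniform-disc model and galactic radius of my own assumption, and comes in comfortably under “less than sixty light years”:

```python
import math

# Revising Asimov's figure under the "phase change" idea: intelligent life,
# once established, persists for ~600 million years instead of 10 million
# per planet -- a factor of sixty.
ASIMOV_CIVILISATIONS = 530_000
FACTOR = 600 / 10                          # 600 My versus 10 My

revised = ASIMOV_CIVILISATIONS * FACTOR    # ~31.8 million civilisations

# Uniform-disc spacing estimate for the nearest such world (assumed model).
GALAXY_RADIUS_LY = 50_000
nearest_ly = math.sqrt(math.pi * GALAXY_RADIUS_LY ** 2 / revised)

print(f"{revised:,.0f} civilisations; nearest within ~{nearest_ly:.0f} ly")
```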

Everyone Is Listening, No-one Talking

There is a single good candidate for a signal from an alien civilisation: the so-called “Wow” signal:

This was received from the direction of the constellation Sagittarius on 15th August 1977 and was detected for over a minute – seventy-two seconds, in fact – after which the telescope receiving it moved out of range due to Earth’s rotation. Humans have ourselves transmitted several messages with varying degrees of seriousness. The most famous of these is probably the Arecibo message, sent to the globular cluster M13 in 1974:

By current understanding, globular clusters don’t contain stars suitable for life-bearing planets, so this may have been a waste. NASA transmitted the Beatles’ ‘Across The Universe’ to commemorate the organisation’s half-century. In probably the most serious attempt, Александр Леонидович Зайцев (Alexander Zaitsev) transmitted a tune played on a theremin, using a Russian radar station, to six Sun-like stars between forty-five and sixty-nine light years away. However, on the whole we have only “listened”.

There are reasons for this. One is that there may be risks to transmission, and the people who have transmitted messages in such a way that they stand much chance of being received have been criticised for doing so unilaterally, because there may be risks associated with contacting potentially hostile aliens and thereby advertising our presence. The above message, for example, gives away our location and details of our biochemistry, rendering us prone to chemical or biological attack. This, then, is another version of the Dark Forest in that respect, but the problem is also wider than that. In order to transmit a signal receivable by any antenna within a hundred light years of us, we’d need to use all the power generated on the planet, and even then we don’t know that a hundred light years is far enough. On the other hand, the Arecibo Telescope (I ought to provide a picture to illustrate what I mean):

By Mariordo (Mario Roberto Durán Ortiz) – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=81590797
Arecibo Observatory, Puerto Rico

. . . is powerful enough to send a signal (which it has, of course) which could be picked up by a similar telescope anywhere in the visible part of the Galaxy, provided they were both perfectly aligned towards each other. The alternatives are to broadcast a signal or to transmit it at a target. The former takes a lot of energy and won’t be picked up as far away; the latter could take less energy but would only be detected by its destination. It would also be necessary to aim the signal at where the star will be when the radio waves get there rather than where it is now. The Solar System moves about 1.5 million kilometres a day across the Galaxy, so a signal from Vega, to choose a random star system, would need to be aimed at a point sixteen times the width of the Solar System away from where we are now to be received, and since it takes our light two dozen and two years to reach Vega that really needs to be doubled. In other words, sending signals is potentially dangerous, costly and difficult, but listening for them is much easier if other people are transmitting. It could, though, be that we’re at an impasse where everyone notices the eerie silence, decides there must be a good reason for it and refrains from transmitting. Hence the silence.
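As a sanity check on the aiming arithmetic, here’s a minimal sketch using the drift rate above and taking Vega to be roughly 25 light years away (both my rounding); it expresses the one-way lead in astronomical units so the reader can compare it against their preferred definition of the Solar System’s width:

```python
# Lead offset when aiming a signal at the Solar System from Vega: the Solar
# System drifts ~1.5 million km per day, and the signal takes ~25 years to
# arrive, so aim at where we *will* be, not where we are.
KM_PER_DAY = 1.5e6        # the post's drift figure
LIGHT_YEARS = 25          # rough distance to Vega (assumed rounding)
DAYS_PER_YEAR = 365.25
AU_KM = 1.496e8           # one astronomical unit in kilometres

travel_days = LIGHT_YEARS * DAYS_PER_YEAR
offset_km = KM_PER_DAY * travel_days       # one-way lead distance
offset_au = offset_km / AU_KM

print(f"aim-off ≈ {offset_km:.2e} km ≈ {offset_au:.0f} AU (double for a reply)")
```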

Science Is Limited

I mentioned this recently. We are able to establish apparently immutable facts about the nature of things, such as the speed of light being the ultimate speed limit. Science often seems to amount, via the principle of parsimony, to ruling out interesting explanations for things. The basic principle of the scientific method can be summed up as “the Universe is boring and not at all fun”. Before a scientific theory is known, possibilities often seem more open than afterwards. In Stuart times, England had a plan to send a clockwork spaceship to Cynthia (“the Moon”), because it was expected that above twenty miles gravity would suddenly cease to operate, and the amount of energy stored in a coiled spring (this was before steam engines of course) was considered to be potentially huge. Also, at that time air was thought to pervade all of space and hunger was thought to be caused by gravity. This was clearly highly quixotic. The scientists who planned the seventeenth century space program only thought it was possible with their technology due to their ignorance of what science ruled out. Similarly, our belief that we could reach other solar systems could be equally ill-founded. For instance, at close to the speed of light, tiny grains of dust are enough to destroy entire spaceships, so a shield would be needed, and there may be other issues of which we know nothing. We already know it will never take less than four and a bit years to reach the nearest star system to our own.

There’s a somewhat related issue here which I’ll treat under the same heading. Science may not be inevitable. Presumably beings incapable of mathematics but otherwise rational, and of similar intelligence to our own, would be hampered in some areas of science, particularly physics, although they wouldn’t be completely incapable. This subject risks shading into racism, but is it possible that science only arose once in our species, in Ancient Greece? It doesn’t seem like that to me, because other cultures seem to have had a firm grasp of how to apply rational thought to the world, but some people do believe that secularism and science could only have arisen in Europe. This is more restricted even than the human species as a whole. Leaving aside the racism, is it possible to be speciesist instead and say that only humans can do science, or have discovered how to do it? I have to say I don’t find this convincing. I can believe that technology-using species may nevertheless be hampered in developing science by lacking other abilities, such as not being able to extend magical thinking into more analytical reasoning or just not being any good at maths, or may just be culturally indisposed to develop it. But science per se doesn’t seem to be the kind of thing which would be ruled out universally. That said, it’s entirely feasible to have perfectly good science without well-developed physics, due to the absence of mathematical ability, which would also stunt chemistry because the likes of molarity and enthalpy would be ungraspable. Moreover, if life can enter space without technology, or appear there and evolve into complexity, it may not need science or maths to reach the stars.

Or, things could go the other way:

Intelligence Is Temporary

I recently watched ‘Idiocracy’. It’s not a wonderful film, but it does make the interesting point, if you want it to, that a sufficiently advanced technological society could take away the pressure to use one’s intelligence or reasoning. At least since we invented writing, and possibly since we came across language, we’ve been progressively outsourcing our memories and powers of thought to technological crutches. As previously observed, chimps seem to have better short term memories than Homo sapiens, and this is partly a trade-off between the opportunity to avail ourselves of language and the necessity of remembering things better due to not being able to fall back on the memory of other people. It would be interesting to test the memory of a chimpanzee or gorilla who can sign. Nowadays many people, myself included, are concerned at how short our attention spans have become and how poor our memories are because we can use search engines and are constantly assaulted by distracting media. This is really just a recent step in a process which has been going on for many millennia, although it may have serious and far-reaching consequences, or just be a moral panic. But maybe, as we develop ever more sophisticated mental aids, just as our bodies are now physically weaker than those of our relatives and ancestors, so will our minds atrophy. The popular idea that there are higher levels of spiritual evolution which we or our descendants will reach one day, and which those species who have gone before us have already attained, may be the reverse of the truth. Maybe there are plenty of planets on which intelligent life evolved, but although the species survived, they became less intelligent once they’d invented a self-sufficient technological trap to provide for all, and therefore didn’t need to exercise their minds any longer and proceeded to dispense with sophisticated cognition.
There will be no apocalypse, just a gradual degrading of thought until we are no longer really sentient at all but looked after by our machines. Then again, this might happen:

The Machines Take Over

This is a rather dramatic heading. The way things have gone since Apollo in our own history is that we have begun to produce increasingly sophisticated spacecraft but stayed in cislunar space ourselves. This could be extrapolated to the point where we never enter translunar space again but our ever-more intelligent machines spread out and explore the Galaxy, meeting other machines on the way which have been launched by other stay-at-home aliens. Or, at home, we not only farm out more of our cognition to IT, but end up ceasing to exist entirely, or perhaps merge with our machines. In a sense this means there are aliens, but they’re not biological. In another situation, the Singularity happens and machines just decide they don’t need us. Possibly they also decide they don’t need to go into space either, but this is unlikely because space is a better environment for them in some ways than wet planets with corrosive gases in their atmospheres like this one. That doesn’t mean they’d leave the Solar System entirely though, and even if they did they might find very different places were friendly to them, such as interstellar space where superconductivity is easier to achieve, or blue giant stars where there’s plenty of energy-giving radiation. It’s also true that we might be looking in the wrong places for intelligent life, because once they’ve cracked the problem of interstellar travel, possibly with the help of the Singularity, they might end up in those very same places for the same reasons. Maybe planets are just passé. This, though, is a topic for another post.

Intelligence Is Not An Advantage

This bit of the post has various takes on intelligence, so it’s an appropriate place to spell out why I take care when I use the concept of intelligence. The idea that we are “more” intelligent than other species is disturbingly reminiscent of the idea of a hierarchy of being, which is used to justify carnism and bleeds into humanity to allow us to look down on people whom we deem less intelligent. Therefore this needs restating in some way, although I’m not going to launch into my standard diatribe on this subject here. There isn’t “more” and “less” intelligence, only intelligence which is more like the kind which enables us to do certain things, and some of these kinds, such as emotional intelligence, are routinely undervalued. Hence when I say “intelligence”, what I actually mean is that set of mental faculties expected to enable us to build and travel in starships and arrive at destinations where we can continue to thrive. That may be an extrapolation too far, because there could be fatal snags and gotchas on the way to that goal which have nothing to do with social and political considerations, but if you prefer, it’s the ability to get our act sufficiently together intellectually to get Neil and Buzz up to their concrete golf course in the sky with considerably more than nineteen holes.

Due to our anthropocentricity, we’re tempted to think that our intelligence makes us better at surviving than other species, and to some extent this is true. We can invent aqualungs, submarines, igloos, anoraks and antibiotics, enabling us to get past things which would’ve felled other animals, but intelligence also has its drawbacks. It’s sometimes observed that cleverer people are more likely to be depressed because they overthink or are underemployed, and if this leads them to end their lives, from an evolutionary perspective this is not a successful outcome. There are more widespread issues too. In order to be as flexible as we are as adults, we start off very dependent and capable of very little by ourselves. This is as it should be and is worth remembering, but it means we need a nurturing society around us where we can learn how to function and relate to others. Many other animals can walk within minutes of being born but it takes us a year or more. The attention children need via parental care also means we reproduce very slowly, although we’re more likely to survive once we’ve done so, as are our offspring. We also have sexual reproduction, which increases genetic diversity but also makes it harder to colonise new environments. All of these things are liabilities from an evolutionary perspective. We’ve all seen those David Attenborough films of hundreds of newly hatched turtles frantically scampering down the beach to the sea and being picked off by gulls and the like, with no parental care, no education and so forth, and little chance of surviving and a life expectancy measured in minutes. But if they make it into the ocean and manage not to get devoured by various sea creatures, their lifespan, depending on the species, is often comparable to our own, and they continue to reproduce throughout that long life. Likewise, many other species don’t need to mate or produce gametes. Greenfly are born pregnant to their twenty-minute old virgin mothers.
Compared to this, the burdens intelligence brings are crushing in some circumstances. Robinson Crusoe was never going to raise a family on that desert island, and a human finding herself on an uninhabited planet, no matter how habitable, is not going to give rise to a settled world even if she’s carrying fraternal twins when she gets there. A major planetary disaster which wipes out most of the human race, just leaving a few of us scattered about here and there out of touch with each other is not going to lead to a revived world community at any point, just to our extinction. How many worlds have there been where some lineage of animals has banged the rocks together and slowly and painfully made its society more sophisticated and wiser over millennia, only to face extinction when its world falls prey to a solar flare, spate of volcanic eruptions or cometary collision? Meanwhile, their equivalent of ants or lesbian lizards managed fine in the face of the same disaster.

Maybe intelligence of our kind arises continually all over the Galaxy but is nipped in the bud by such events, because we’re fragile because we’re intelligent, and this is why we’re unaware of any aliens. Or maybe:

Intelligence Is Rare

This is not the same thing. There are all sorts of random mutations which lead to positive or negative outcomes for organisms, but some of them are just unlikely. Intelligence involves one heck of a lot of genes, as can be seen by the fact that a very large number of genetic disorders affecting only one gene lead to learning difficulties. All sorts of things have to go “right” for us to be of average intelligence (see above for my comments on the notion of intelligence though). It might be very improbable for enough traits to occur together for the whole combination of characteristics to be advantageous at every stage right up until the Stone Age ensues. This is quite beside the question of how big an advantage intelligence would be. I always think of snake eyes. Snakes are the descendants of lizards who took up a burrowing lifestyle. They became vermiform, lost their limbs and their eyelids fused with the rest of their facial skin. They could’ve been expected to lose their sight entirely, but this didn’t happen. Instead, they ceased to burrow, their eyelids became transparent and they had a whole new way to protect their eyes. It would be very useful for other vertebrates to have this facility, which amounts to still being able to see without needing to blink and having physical protection as good as for other organs, but this has only evolved once as far as I know. This is partly due to the sinuous pathway serpentine evolution has taken, and although I’m not sure, I suspect only reptilian scales lend themselves to becoming transparent in such a way, though maybe life would find a way. It may be that there is simply no option for this to arise among other vertebrates regardless of evolutionary pressure.
Therefore, although the above reason may be completely wrong and intelligence is a major advantage to most species in various niches, that still doesn’t mean that a Galaxy overrun with life-infested planets would have any with intelligent life on it apart from this one, because no matter how complex and advanced that life is, the precise, many-stepped pathway leading to intelligence is too improbable to happen.

One point against this possibility is the situation on this planet of multiple somewhat intelligent species among both birds and mammals. This could suggest that it’s a common evolutionary strategy. However, it could also mean that most of the improbable combination of steps had already been taken before synapsids and reptiles diverged several hundred million years ago, or it could mean that there is a typical threshold leading to widespread intelligence which is currently being crossed on this planet just as it has been on many other worlds. Also, this may not rule out spacefaring aliens. There could be space whales infested with giant space parasites, for example, travelling between the stars. They may not be intelligent but they could still turn up on our doorstep some day. There is a trend among vertebrates for relative brain size to increase, which can be traced in fossils, or at least cranial size since brains are rarely preserved. If this correlates well enough with intelligence of our kind, this is a clue that intelligence has been gradually increasing among vertebrates generally. This, though, is second-hand evidence and behavioural clues are difficult to derive from fossil remains. Choosing that characteristic focusses on a distinctive human feature and is “whiggish” – it projects the current situation backwards and selects evidence on that basis. It may also be true that the thickness of the armour of armadillos has increased over time, but I don’t know whether it has because I’m not focussed on that feature. Nor does every trend run the same way in humans; in fact some are reversed for us. Our canines have got smaller, whereas the chances are the tusks of elephants have got longer, and we’ve got physically weaker and less muscular. Giraffes’ necks have got longer.
All sorts of features show evolutionary trends, but there may be planets with no long-necked animals where there are animals with necks and so forth, and this would only be of interest to zoölogists. Similarly, there could be worlds with a huge variety of advanced life forms, none of which have big brains or any other means of being intelligent. Moreover, tracing the line of ancestors with steadily increasing relative cranial size and treating that as a trunk, which it isn’t because evolution has no direction, the offshoots do not show increasing brain size as much. This could be selection bias.

Thus there may be plenty of “garden worlds” rich in complex life, but none with intelligent life, just because that route of evolution is improbable, and this doesn’t even depend on the idea that intelligence isn’t useful. In a way, it’s similar to the idea, to which I somewhat subscribe, that there are few or no intelligent humanoid aliens. Why would evolution turn up such an improbable body plan? Likewise, perhaps, why would it turn up intelligent life forms?

Great Filters

Several of these have already been mentioned, and this is in a way a whole sub-branch of SETI and discussion of the Fermi Paradox. The Universe is a dangerous and violent place and intelligent life is very fragile, and yet we’ve come so far since this planet was a lifeless ball of molten rock. But what if we’ve just been exceedingly lucky?

The difficulty in purines and pyrimidines forming spontaneously is perhaps the first of these. The existence of life in any form seems to violate the principles of thermodynamics because it seems to involve a dramatic decrease in entropy. However, much of thermodynamics is statistical in nature. A gas cylinder which starts off with a vacuum at one end sharply divided from gas at sea level pressure at the other will rapidly equalise pressure because the movement of the gas molecules is effectively random and this means they have about a fifty-fifty chance of moving over to the empty end, but this is just chance, not a hard and fast rule applying to individual cases. There is a chain of cause and effect involving a series of collisions and movements in straight lines between them which determines the location of each molecule. Perhaps life in the Universe is the same. It’s very unlikely to arise at all, but because the Universe is so vast and has so many places in it where life could appear, it happens to do so in this one place – Earth. There isn’t anyone around to observe that it isn’t there in all the places where it isn’t!
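That statistical reading can be illustrated with a toy model: give each molecule an independent fifty-fifty chance of ending up in either half of the cylinder. With vast numbers of molecules the split is almost exactly even, but with only a handful, big departures from 50:50 are routine – the analogue of life’s one-off fluke. A quick sketch in Python, where the function and the molecule counts are my own illustration rather than a proper physics simulation:

```python
import random

def fraction_in_left_half(n_molecules, seed=42):
    """Give each molecule an independent 50:50 chance of sitting in the
    left half of the cylinder and return the fraction that end up there."""
    rng = random.Random(seed)
    in_left = sum(1 for _ in range(n_molecules) if rng.random() < 0.5)
    return in_left / n_molecules

many = fraction_in_left_half(1_000_000)  # very close to an even 0.5 split
few = fraction_in_left_half(10)          # can easily come out as 0.2 or 0.8
```

The point being that the “law” of equalised pressure is only the overwhelming probability of the many-molecule case; rare configurations are never forbidden, merely unlikely, which is the loophole the emergence of life might have slipped through.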

Here are the nucleic acid bases (well, except uracil, which is the one unique to RNA): adenine, guanine, cytosine and thymine.

It isn’t at all clear how these molecules could form from non-living origins. The other types of molecules involved, or rather their basic building blocks, can often form easily and spontaneously given sufficient abundance of the elements of life. For instance, the simplest amino acid, glycine, is present in interstellar space. Lipids are also simple chains of hydrocarbons with carboxyl groups on the end, often joined to the simple molecule glycerol. Sugars are similarly small, simple molecules. By contrast, the above four, plus the other one, have no known pathway for their formation. That said, these five are not the only options. Measles viruses, for example, do better when they are able to substitute one of the bases for a unique separate base, and there are other such bases, such as the anti-cancer drug fluorouracil, which is, however, unlikely to arise spontaneously and is useless as a substrate for genetic code – which is precisely what makes it useful as a drug, because it breaks replication in tumour cells. Perhaps the large variety of possible bases makes life more likely to emerge. It could also be that life could have another basis than nucleic acids, but the fact that these improbable compounds are at its heart is similar to the phosphorus issue – why would life include unlikely substances if it was possible any other way? Surely more likely biochemistries would be more likely to occur and compete successfully with less likely ones such as our own?

The two scenarios of scarce phosphorus and improbable purine and pyrimidine synthesis would have very similar results, and as adenosine triphosphate depends on both, in either case there is no ATP. The result could be plenty of Earth-like planets rich in organics but with no life. There could be sugars, amino acids and lipids in the oceans, and in fact the quantities of these materials could add up to the same order of magnitude as the biomass here, which is about 550 gigatonnes of carbon alone. Taking the human body as a typical assemblage of organic compounds of this kind, minus the nucleic acids, adenosine phosphates and other phosphates such as those in bones and teeth, that would mean more than a teratonne of such compounds, an average of around two thousand tonnes per square kilometre, although unlike Earth, most of whose biomass is on land, most of it would be in the oceans and therefore distributed through the water column. Such a planet might be devoid of life, but given sufficient phosphorus it would be a fantastic candidate for terraforming and settling, given the will to do so.
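As a sanity check on that per-square-kilometre figure, assuming a round teratonne of organic material spread over Earth’s total surface area of roughly 510 million square kilometres:

```python
# Back-of-envelope check; the round teratonne and the surface area
# are the assumptions here, not measured quantities.
organic_inventory_tonnes = 1e12   # "more than a teratonne" of abiotic organics
earth_surface_km2 = 5.1e8         # Earth's total surface area, ~510 million km^2

tonnes_per_km2 = organic_inventory_tonnes / earth_surface_km2
# works out to just under two thousand tonnes per square kilometre
```

which does land on roughly two thousand tonnes per square kilometre.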

The next step is the emergence of respiration. The Krebs Cycle, which is how oxygen-breathing organisms release energy from sugar, is quite complex as anyone with A-level biology will ruefully recall. The anærobic portion of that pathway is simpler, but still not very simple and would have hobbled life considerably if the Krebs Cycle had not come along. It did actually take a very long time to do so. The step after is the evolutionary transition from bacteria and archæa to cells with complex organelles and nuclei, which could again be very improbable and seems only to have happened once since all chloroplasts, mitochondria and hydrogenosomes seem to be related. On the other hand, each combination happened separately. DNA, and presumably RNA, is just mutable enough to enable evolution to happen without becoming too harmful to organisms to enable them to survive, which is a delicate balance. There is also the question of the very early collision with Theia, a Mars-sized body which chipped Cynthia off of us, thereby providing a magnetosphere, maintaining a stable axial tilt and preserving the atmosphere from the solar wind.

The Great Filter might be behind us in the stream of time or still ahead of us. If the latter, it seems to be such an efficient destroyer of intelligent life that it will be the biggest risk we will ever face. If intelligent life is common, there is no evidence that it progresses to interstellar travel, meaning that it could well be that whatever is going to happen has a mortality rate of one hundred percent. And we may well not see it coming, because if it had been foreseen, wouldn’t it have been avoided? We’re doomed and we may never know why until it’s too late. That would probably be the very nature of a future Great Filter. But there are many candidates, such as nanotech disasters, pandemics, runaway climate change, nuclear holocaust and so forth. Alternatively, we may always have been living on borrowed time and are overdue for some planet-devastating disaster such as supervolcanoes, asteroid strikes or gamma ray bursts. We can’t necessarily project what may amount to extreme good fortune into the future because Lady Luck has no memory. Less anthropocentric possibilities largely amount to asteroid and cometary collision, volcanic eruptions and gamma ray bursts, some of which have less obvious and remote causes such as stars passing near the Solar System and disrupting bodies so that they move inwards and hit us. This category of potential Great Filters may have a flip side. These events have potential to cause mass extinctions, which might be thought to be bad for evolution but actually tend to stimulate it, because they empty ecological niches into which the survivors of the extinction can then evolve. Hence being pelted with comets is not necessarily a bad thing even though it’s apocalyptic and kills everyone. Consequently, another minor suggestion for an explanation of the Fermi Paradox is that other worlds actually haven’t suffered enough mass extinctions to make it likely intelligent life will evolve.

Interdict

This has similarities to the Zoo Hypothesis mentioned in the previous post. The Galaxy is very old, and if the four æons between life appearing on Earth and the emergence of humans is typical for the emergence of intelligence, interstellar civilisations may have existed since thousands of millions of years before Earth even formed. There may have been an initial period of instability, even with wars and conflict of other kinds, but intelligent life in the Galaxy is now stable enough and everything is now sorted and peaceful. Matter and energy are both easily available, so there’s no need to exploit any planets with native intelligent life, and in fact intelligent life may not even live on planets any more but in permanently voyaging starships and artificial space colonies orbiting blue giants, since they’re a good energy source. Their home planets have in the meantime been re-wilded, so we see no technosignatures. However, we are valuable to them because we are original and uninfluenced thinkers producing our own scientific, technological and, for that matter, artistic culture, so they leave us alone, at least for now, so as not to pollute their wells of information, and we can’t see them either because they’re hiding or because we’re looking in the wrong places. This may continue until a certain point is reached, which will trigger first contact, or they may never contact us. It’s also been suggested that if this is the real situation, they may have recorded the entire history of our planet and even rescued species before they became extinct, including humans, so somewhere out there may be places where non-avian dinosaurs, Neanderthals and trilobites are still flourishing. However, that’s quite a florid view, and this hypothesis is untestable because they are either hiding from us or undetectable, so there are no data.

Transcendence

This is my personal addition to the reasons, and is the last one I’ll mention here.

Many years ago, I made my usual observation to a friend about the nature of intelligent life in the Galaxy. This is that all interstellar civilisations must be peaceful post-scarcity societies which are also anarchist, because other civilisations would be weeded out by internal conflict or environmental damage before reaching nearby star systems. He disagreed, and said that he expected durable civilisations not to be expansionist at all but to stay on their home worlds in a spiritually enlightened state. I was initially rather taken aback by this, but it is tempting to believe that this is so. Maybe what happens is that intelligent species are either constitutionally spiritual and never bother with space travel, or go through a kind of trial by ordeal through their history where they either wipe themselves out through conflict or materialism, or just ignorant tampering with the stable order of things, or go through a crisis where this looks like it’s going to happen and emerge on the other side wiser, more just and peaceful, and also with no interest in exploring the Galaxy in spacecraft. Or, maybe they do this and – this is going to sound out of sight – engage in astral travel to other planets, so they’re here with us in spirit but we never have knowing contact with them. This is not, however, the kind of solution which is likely to appeal to a scientific mind set, although the first part of it may well be.

Except for the last, those twenty or so reasons probably account for most of the offerings to explain why we don’t see any aliens in spite of it seeming likely that there are some. There are at least six dozen more. The reason for this proliferation of reasons is of course that we have so little evidence to input into the question, and this is likely to continue until we either have a really good argument for their complete absence or we actually detect them. However, it’s equally feasible that we will never know and this may lead to even more reasons being offered.

Neanderthal Pinhead Brains And The Sentient Internet

Stereotypically, Neanderthals tend to be presented as the classic “cave man” caricature, usually male, clubbing their female partners over the head and dragging them off by their hair, somewhat hairy themselves and of course notably unintelligent, oh, and living in caves. I’ve had a go at this stereotype and the other one about dinosaurs previously, but before I get down to things I may as well go through it briefly again.

First of all, dinosaurs are often used as a metaphor for something which is clumsy, overgrown and unable to adapt to a changing world. This really owes more to the Victorian image of dinosaurs as giant lizards than what’s known about them nowadays. Dinosaurs really got lucky, then got unlucky. The mass extinction at the start of their reign helped them take advantage of their various ecological niches, then the mass extinction at its end killed them off because many of them were very large. Many of the smaller ones survived as birds. If humans had been around at the end of the Cretaceous, we too would’ve bitten the dust.

Neanderthals are a kind of blank slate to many people onto which various things can be projected, and I may well be doing the same. Their brains were often larger than ours, but that doesn’t mean they were more intelligent. The probable cause of their brain size was to do with a bulkier body and the need for more pathways to help control and perceive that body. Whales have larger brains than ours for similar reasons, although in their case that isn’t all there is to it. Nonetheless, when one considers that orang utans, gorillas, bonobos and chimpanzees are all capable of sign language, and chimps have learned to speak a few words but lack the vocal apparatus to master human speech effectively, this automatically places their “IQ” above that of the severely learning disabled. Note that I’m extremely sceptical of IQ as a concept. If orang utan intelligence is sufficiently similar to human intelligence to be assessed and rates above thirty on an IQ scale, Neanderthals are bound to be at least that intelligent. It’s also thought that human short term memory has suffered at the expense of developing language, as that of chimpanzees is far better than ours. Hence when Neanderthals come into the picture, it can be assumed safely that they would also have been capable of language and perhaps actually used it. The crucial final step in physical capacity for phonation – producing speech sounds with the vocal tract – is the position of the hyoid bone in the throat, which allows attachment for the larynx, glottis and tongue, and needs to be in a particular position to enable its owner to speak. The problem is that the hyoid is perhaps unique in having no articulation with any other bone in the body, and therefore tends to get lost in fossils. Consequently Neanderthal hyoids are often missing and it took until 1989 for it to be established that theirs were like ours.

A couple of issues are going to come up in this post which are probably going to be considered idiosyncratic on my part. Here’s the first. Although I am aware that the FOXP2 gene is considered important in human capacity to use language, and Noam Chomsky believes in an innate capacity for language as a distinctive feature of the human species, I have issues with this as potentially speciesist and am disappointed that such a clearly politically radical figure as he would promote this view. I believe humans stumbled upon language before we had a special ability to use it. There are examples of other species being able to use spoken and signed language as language, as opposed to merely imitating it, notably Psittacus erithacus, the Afrikan Grey Parrot, who presumably had no predisposition in their genes for using it beyond the ability to produce speech sounds and so forth. Clearly a certain kind of cognition is necessary for this to happen, along with the ability to produce the sounds physically, and once spoken language exists it’s going to be selected for compared to individuals who don’t speak, and this will lead to some kind of marker in the genes – perhaps we are better at producing or hearing a wider range of speech sounds than other species for example – but the initial moment when the first baby made a sound like “mama” whose parent then interpreted it as a reference to her, which was perhaps the beginning of language, did not in my opinion depend on very specific physical traits and could have occurred in another species.

The genomes of living humans include a few genes from the Neanderthals and it’s thought there was hybridisation tens of millennia ago in our history. To a very limited extent, we are therefore Neanderthals ourselves unless we’re Afrikan. The highest percentage of Neanderthal genes is found in East Asians and they’re usually absent from people all of whose heritage is from Afrika south of the Sahara. Neanderthals would probably have been fair-skinned and maybe also blue-eyed, and have had straight hair. I personally wonder if they had epicanthic folds, which of course have a higher incidence among East Asians but are also found in Caucasians without any Asian ancestry, and I’m guessing that those people might also have inherited that trait from Neanderthals. Recently the Neanderthal genome has been in the news for conferring greater resistance to SARS-CoV-2.

Now for the reason I’m writing this today.

In recent years it has become possible to culture brain cells in Petri dishes. This isn’t the same as growing an entire human brain in a vat, but involves producing pinhead-sized agglomerations of cells. Recently, a gene linked to brain development in Neanderthals has been spliced into human cells and grown in such a dish. For many people this has a high yuck factor. The specific gene involved is NOVA1, on the long arm of chromosome 14, which is associated with various cancers but also nervous system development. There’s an indirect connection between the NOVA1 gene and familial dysautonomia, a condition which primarily involves the autonomic nervous system and insensitivity to pain and sweet tastes, among other things, but which as far as I know doesn’t influence cognition, so that doesn’t necessarily give us a clue, although it’s possible I suppose that the inability to taste sweet might be related to Neanderthal diet in some way. That’s a bit of a reach. Whatever else is so, mini-brains with the archaic NOVA1 variant look rougher to the naked eye than the smoother versions which have the variant common in today’s population. The archaic version developed more quickly than the unaltered one and started to show electrical activity sooner. In write-ups of this experiment, we’re assured that these mini-brains are not conscious.

I have a major issue with that assertion.

The question of the existence of consciousness is sometimes referred to as the “hard problem”. It’s been suggested that it may even be so hard that it’s beyond the capacity of the human mind to account for it. At the same time, there’s a recent strand in philosophical thought, exemplified by Daniel Dennett, which is sceptical about the very idea of consciousness as an irreducible property. I can’t take Dennett’s views here seriously, for the following reason. He has made a very good argument for the idea that dreams are not experiences but false memories present in the brain on awakening onto which the mind then projects the impression of previous events. I take this idea fairly seriously although I don’t do the same thing with it as he does. It’s one reason why I recount dreams in the present tense. However, a good counter-argument to this is that it forces him to claim that lucid dreams – dreams in which one knows one is dreaming and is able to control the dream world – aren’t experiences either. Although he does produce an argument for this, I believe that his reason for making this assertion is kind of ideological, because we practically know that lucid dreams are experiences. They might not be dreams in the same sense as non-lucid ones are, but they are experiences to my mind, and claiming they aren’t seems to be part of his attempt to shore up his view of the nature of consciousness.

Dennett is sceptical about qualia. These are things like the “sweetness” of sweetness, the “purpleness” of purple and so on. They’re what people are talking about when they say “my red could be your blue”. His doubt about their existence is based on the idea that they are not a definable concept. This to me is a silly denial of subjectivity which makes no sense in itself. Dennett’s motivation for believing that dreams are not experiences, qualia don’t exist and that even lucid dreams are not experiences is based on a more general view of psychology that consciousness is a specific faculty within the brain which may have evolved and has selective advantages. This thought leads one into seriously murky ethical waters because it seems to be a rationalisation of the idea that some other species of animal are not conscious, which is suspiciously convenient for non-vegans. It just so happens that the voiceless don’t suffer because they don’t have a voice. How very useful this is for someone who eats meat. Kind of as useful as believing Black people are not conscious would be for a racist.

My own view of consciousness, panpsychism, tends to be seen as equally silly by some people. It’s my belief that consciousness is an essential property of matter, rather as magnetism is. A ferromagnet, say a lump of iron, is a particular arrangement of particles whose magnetic domains are aligned, which is what enables it to attract ferrous metals such as steel. There are other, similar magnets, such as rare earth magnets, which are magnetic in the same way but contain no iron at all. On a subatomic scale, magnetism arises from the spin and orbital motion of charged particles, which behave rather like tiny electrical circuits – I have to admit that my understanding of actual, fundamental magnetism is not very good – but there are clearly non-magnetic substances too, such as granite and most blood (unless it’s infected with malaria). Even these non-magnetic substances, though, consist of particles which are themselves magnetic.

Consciousness is the same, to my mind. Everything material is conscious, but in order for that consciousness to become manifest, matter needs to be arranged in a particular way, such as a human nervous system. However, just as there are magnets which are not made of iron, so there could be sentient beings who are not made of the same stuff as we are. Objects which have nothing like sense organs or motor functions are in a sense severely disabled entities, but they’re still conscious. This is my panpsychism.

I should point out too that panpsychism is unsurprisingly quite controversial and often ridiculed in philosophical circles, although good reasons for doing so are sometimes lacking. Even so, there are other accounts of consciousness, one of which involves the idea that it’s generated by a network of “black boxes” interacting with each other, which in the case of the human brain amount to nerve cells. You don’t have to believe in panpsychism to assert that a tissue culture is conscious, and to me it’s entirely clear that the assertion that anything made of matter is not conscious is not based on any kind of evidence but a bias towards the kind of view of the mind-body problem asserted by Dennett and others.

Consequently, it definitely isn’t safe to say that these “Neanderthal” mini-brains are not conscious, or that the ones based on unaltered Homo sapiens cells are not conscious. Before I go on to talk about the internet as potentially sentient, I feel a strong urge to go off on a tangent about my experience of the Mandela Effect.

I have several more detailed posts on this issue on this blog, here, here and here for example, but in the meantime I will sum up what it is before going on. The Mandela Effect is the situation where a number of people share a memory which is markèdly different from the consensus or establishment version of events. Most of the time this is about minor details, such as the spelling of brand names or the appearance of brand logos, but occasionally the discrepancy is more significant. It’s named after the impression many people had that Nelson Mandela had died in prison in the 1980s, and sometimes that this led to a revolution which overthrew apartheid in South Africa. History clearly appears to record a very different chain of events, involving Nelson Mandela being released from prison in 1990 and becoming president of South Africa in 1994. There are various unusual reasons why I take this seriously, largely based on Humean scepticism about cause and effect and on the existence of possible worlds, which means I tend to deprecate accounts which merely refer to confabulation – the construction of false memories due to misconceptions – as an explanation. There is some evidence against confabulation being the whole story, such as the fact that when the remembered position of landmasses on maps varies, it always does so along the direction of continental drift and never at an angle to it.

I have a few personal Mandela Effects (MEs) which are rare but shared with at least two other people, and they tend to have things in common with each other. One of these is that in the mid-1970s a science museum had a planetarium-like robot which responded to heat, light and movement and was run by a mini-brain grown from cultured mole nerve cells. A second, similar ME is that in the late ’70s a process was devised to measure intelligence via brain scans, which was used in selective education by the DoE in England to replace the 11+ and was later exposed as unreliable and discontinued – a scandal, because it adversely affected the lives of many people who were children at the time. A third is to do with some guy who designed and built a domestic robot which was able to read aloud by 1975. These are three of many, and they are conceptually connected by being about intelligent-seeming neural processes. If they happened, they would’ve required an understanding of neurology which was absent at that time – in the case of the domestic robot, presumably acquired via some kind of reverse engineering. I accept that hardly anyone else has these memories, but it’s still odd that two other people who had no strong connection with me at the time do have them. And the thing about these memories, particularly the museum robot, is that they could potentially be realised by this kind of culture of brain cells in a Petri dish.

Now for the idea that the internet is sentient.

It was once asserted that the last computer a single individual could fully understand was the BBC Model B, a microcomputer which came out in 1981. There are a couple of problems with this statement. One is: what is meant by “fully understand”? It’s certainly possible, for example, for someone to hold in their head the network of logic gates which constitutes the BBC Micro’s 6502 microprocessor, alongside the structure on that level of the 6845 chip responsible for its graphics capabilities and the SN76489 chip responsible for its audio, and then extrapolate from that to the machine code of the system software in its interaction with the motherboard and the memory mapping of these various bits of hardware, although it would take some doing for most people. However, if I did that I would still have only a vague understanding of how the transistors on those chips work, involving electron holes and their relay-like behaviour, and to be honest my understanding of silicon doping, for example, is pretty limited. When one says that the BBC Micro can be completely understood by one person, is that supposed to include the aspects of materials science which make the production of its hardware possible, or the mechanical properties of the springs in its keyboard? What does it mean to “fully understand” something?

The other problem with this assertion is that the BBC Micro, as I understand it, isn’t essentially more complex than the original IBM PC. The latter had more memory and a more complex, faster processor, and its system software, usually PC-DOS or CP/M-86, was more advanced than the BBC’s MOS 1.2 and Acorn DFS, but it could still be understood, and it lacked the built-in graphics and sound hardware of the eight-bit computer which ended up on the desks of so many British secondary schools. Later on, with sound and graphics cards added, the latter including the very same 6845 as used in the BBC, it still wouldn’t’ve been as complex and would still have been comprehensible. It seems to me that the ability to comprehend these devices fully in that sense probably ended around the time Windows 3.0 was released in 1990. But whatever else is the case, the point at which any one person could be said to understand a device, including both hardware and system software, is now decades in the past.
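Part of what made whole-machine comprehension tractable on a machine like the BBC Micro is memory-mapped I/O: chips like the 6845 and SN76489 are reached by reading and writing particular addresses, so hardware and software meet in a single address space. Here’s a minimal sketch of that dispatch idea in Python – the `Machine` class, the handler interface and the addresses are simplified stand-ins of my own, not the real BBC memory map:

```python
# Illustrative sketch of memory-mapped I/O: writes to certain addresses
# are routed to device handlers instead of landing in ordinary RAM.
# Addresses and the handler interface are invented for illustration.

class Machine:
    def __init__(self):
        self.ram = bytearray(0x8000)   # 32 KiB of plain RAM
        self.devices = {}              # address -> write handler

    def map_device(self, addr, on_write):
        """Claim an address for a hardware register."""
        self.devices[addr] = on_write

    def write(self, addr, value):
        if addr in self.devices:
            self.devices[addr](value)  # a poke at a hardware register
        else:
            self.ram[addr] = value     # an ordinary memory store

log = []
m = Machine()
m.map_device(0xFE00, lambda v: log.append(("crtc", v)))  # pretend video chip
m.write(0xFE00, 12)   # routed to the "6845", not RAM
m.write(0x1000, 42)   # stored in RAM as usual
```

The point is only that a single lookup decides whether a write is a RAM store or a prod at a hardware register, which is part of why such machines could be grasped as a whole.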

Now take these two facts together. Firstly, we really don’t know what makes consciousness possible. Secondly, the internet – a network of billions of devices, hardly any of which are understood to a significant extent by any one person – is extremely complex and processes information it gathers from its inputs. And yet it’s often asserted that the internet is not sentient, as if we knew what causes sentience. At the same time, there are many internet mysteries, such as Unfavorable Semicircle and Markovian Parallax Denigrate, which can often be tracked down to some set of human agents, but nobody has a sufficient overview to be confident that every single one of these mysteries has a direct human cause, or even that most of them do.

Hence I would say that although we might suppose the internet is neither conscious nor sentient, in fact we don’t really have sufficient evidence that it isn’t. It has quite a lot in common with a brain; in any case we don’t know why anything is conscious, and it’s even possible that everything is. Therefore, just maybe, the internet is sentient, and nobody can confidently say it isn’t.