A few years back a macaque took a selfie on a photographer’s camera and PETA decided to use it as a test case to move towards establishing personhood for non-human individuals. As usual, this was to my mind a meaningless and pointless stunt – I’m not a fan of PETA by any means. The reason I think it was misguided is that I don’t believe in rights. To my mind, the idea of moral rights is based on human custom or on analogy with the law, and it has no basis in reality. In that respect I think in terms of duties alone, because duties arise from one’s own situation with regard to others. I’m not sure what practical significance that has.
This whole attitude of mine is based on two things. One is that I’m a philosophical anarchist, although the issue rarely arises in practical terms. This means I think of the law as enforced solely through the threat of violence and as having no authority; although it often coincides with morality, morality is not its basis. The other is that, viewing the Universe dispassionately in terms of what is most real, I have traditionally been a nominalist, meaning that I didn’t believe any human concepts were more than arbitrary. I’ve since changed my mind on this, because one unfortunate consequence of that view is that it seems to make facts relative, which seems unhealthy. I now think I do believe in natural kinds, for instance elements and species, though the problem of what actually counts as a natural kind remains. Nonetheless, the law is just a deadly serious game humans play, and while it has plenty of philosophically interesting aspects, these are relative to that fact and operate within what amounts to a contingent human custom. Extending this, rights in a moral sense only exist where failing to respect them would make it difficult to discharge duties. For instance, if someone has medical knowledge, skills and experience, she may be obliged to practise as a doctor, but that could be difficult if she’s a refugee in another country, so her human rights might have to be respected in order for her to be of greatest use to that society. She has those rights, but they exist because she has obligations prior to them. That actually applies to most people, although of course I tend to think of people as having infinite value.
In a similar way, when we decide to practise a vegan lifestyle we’re adopting a human custom and set of practices based on our possession of concepts of right and wrong, or perhaps good and bad. For most other species it appears that, although they often do experience love, compassion, loyalty and companionship, this rarely adds up consciously into a global obligation extending beyond their own species. We are in any case surrounded by a biosphere which only works because food circulates through it, and in many cases this food consists of the bodies of animals who suffer and die of necessity so that others may survive. Next to this, as I’ve observed before, the suffering and death associated with human activity, counted in terms of individuals, is utterly trivial. Nonetheless I feel obliged to avoid being party to avoidable suffering and death, so I’m vegan. To pick an exclusively human example, a series of disasters such as quakes, tsunamis, volcanic eruptions and asteroid strikes could eventually lead someone to compassion fatigue or burnout, and that needs to be treated sympathetically, but it would still be better for someone living in such a world to take steps to treat others well and on the whole avoid harming or killing them. Abandoning veganism because of the vast scale of carnage around us would be similar.
This is one respect in which veganism is essentially for humans. The wider living world knows suffering and death, but we alone are responsible towards it. More specifically, in fact, I can only say that I’m responsible towards it because attributing responsibility to others is an unwarranted imposition. Obviously I do think it would be a lot better if most Westerners were vegan, but it’s their decision. This is to some extent inconsistent, because one action I could take is to persuade others to go vegan too, and through me there would then be less suffering and death. Nonetheless I don’t go there. So in that sense, veganism is for humans.
A few posts ago, I was writing about food deserts, which seem to be one area in which it’s practically difficult for some people to become vegan. In the same way as I can’t presume to extend my pacifism to every political situation and every marginalised group, I also have to recognise that some people may find it impossible to go vegan in the world as it’s currently constituted. However, in such a situation it isn’t enough to sit back and leave it alone. The question must always be how to make it easier to go vegan. The same applies to health situations. Although I’ve never encountered a patient who couldn’t adopt a plant-based diet for health reasons, I’m prepared to believe there may be a few who currently can’t. In that case the problem is not that they can’t go vegan in absolute terms, but that either the research or the economic situation is such that they can’t, so the priority is to address that so that they can. It does, however, mean that they may not be directly responsible for their own non-veganism. All this means that even considered purely as plant-based eating, veganism has wider implications than individuals choosing not to consume animal products. Moreover, doing this for people would generally be positive for them in any case. Once again, veganism is for humans.
This links to a third sense in which veganism is for humans. Humans are animals. We are also the animals with whom many of us have the most socially extensive and meaningful connections. Because we’re animals, and veganism is against the exploitation of animals, vegans must oppose the exploitation of humans. Therefore veganism is more like pacifism than vegetarianism. You can’t consistently be vegan only for the sake of other species. You also need to be against avoidable human death and suffering, and because we are ourselves human, this is in fact the most important aspect of veganism. If I worked on an animal farm I would probably have more to do with other species than with humans, and my direct obligations might be different, but because I live surrounded by other humans, most of my immediate duties are to other humans, and through them to ending the chains of exploitation found in mineral resource extraction, working conditions, income, unionisation and so on. It’s just as vegan to oppose the Palestinian genocide as it is to refuse to consume dairy products.
So this is probably going to be quite a short post, just to say that paradoxically although veganism might look as if it’s about non-human animals, there are several senses in which it’s actually a human thing, and it’s none the worse for that.
Gàidhlig is, as you know, a language I find phenomenally hard. I’ve said in the past that the best way of learning it is to know it already. It’s a bit like when you stop and ask for directions to somewhere and the reply is “Oh, I wouldn’t start from here if I were you”. Nonetheless it’s got to be done.
The above picture is of the Burns Centre in Dumfries. The obvious joke will not be made here, as Doonhamers are thoroughly sick of it and have heard it a thousand times. It occurred to me the other day, though, that although I know the Welsh words for “centre” – they’re “canol” and “canolfan”, and not speaking Welsh I have no idea what “-fan” does – owing to almost getting a job at Canolfan y Dechnoleg Amgen in Wales, I had no idea at all what the Gàidhlig word was. I do know the word for “middle” – meadhan – but “middle” is not “centre”. I’m also aware that the word in English is used figuratively as well as literally, which is also a usage of “canolfan” in Welsh, but I wasn’t cognisant of whether there was such a usage in Gàidhlig.
Well, it turns out, unsurprisingly, that it isn’t that simple, although the reasons aren’t quite linguistic. It starts out fairly straightforwardly. The Robert Burns Centre is probably called something like An Ionad Raibeart Burns, assuming “Raibeart Burns” doesn’t need to be put into the genitive, and it’s even true that “ionad” means “centre” in both figurative and literal senses, as well as meaning “location” and “situation”. I’m still confused as to how it’s pronounced, because there’s clearly a rule about whether the I or the O is sounded: I initially thought it was “yonnat” but apparently it’s “innet”. (I’m not bothering with IPA at the moment because I’m on the wrong device for it, and in any case it’s been said that the IPA is inadequate for transcribing this language, which is a whole other conversation.) So you might think you’ve got it sorted and everything’s very very good, but actually it isn’t, at least from about 2008 CE onward, because at that point someone did something subversive.
Technical language is often perceived as a barrier to understanding which maintains an in-group and an out-group. This is certainly sometimes true, but at other times not using it makes it almost impossible to talk about something. Cults, sorry, new religious movements, often seem to use language this way in order to exclude outsiders from understanding what they’re talking about, and also often to make it seem to their followers that they know what they’re on about. When ordinary words are given technical senses instead, as notably occurs in botany with “nut” and “berry”, people often object, because it leads to bananas being called berries and peanuts not being nuts. In fact hardly anything is a nut. To hide this quandary away, scientists and mathematicians often draw on Greek or Latin as a kind of nice neat cover for the messy box of what to call things. Hebrew and Sanskrit are also sometimes used; Sanskrit terms, in fact, are often used formally to describe phenomena in Gàidhlig. Rather refreshingly, Gàidhlig’s own “ionad” has been borrowed in this way, presumably as part of some kind of statement against the Latinisation or Hellenisation of technical terms.
Understanding this usage is possibly one of the steepest learning curves I’ve ever encountered. This is how it’s described when you type something related into Google:
As a Grothendieck topos is a categorified locale, so an ionad is a categorified topological space. While the opens are primary in topoi and locales, the points are primary in ionads and topological spaces.
Clear? Didn’t think so. It isn’t even as straightforward as being about topology or group theory. It sounds like a concept related to topological space, but that’s only tangentially true, because apparently this is category theory. The idea seems to be to take various branches of maths and generalise the concepts and processes which exist and occur within them. It feels like a theory of everything but it isn’t. It’s kind of metamathematics, although I’d prefer to reserve that word for something like number theory. It involves three types of thing, one made up of the other two: categories, which consist of objects and morphisms. To my rather naive brain this sounds a bit like group theory and a bit like topology, and probably a bit like linear algebra if I knew what that was, which I don’t, so I’m going to wrestle with this here and try to understand it.
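To make the objects-and-morphisms idea concrete for myself, here’s a tiny sketch in Python – entirely my own toy example, nothing official – treating a few functions between numbers as morphisms and chaining them with composition, which as far as I understand is really all a category asks for, plus identities and associativity.

```python
def compose(g, f):
    """Composition of morphisms: (g . f)(x) = g(f(x))."""
    return lambda x: g(f(x))

identity = lambda x: x  # every object has an identity morphism

double = lambda n: 2 * n     # a morphism f : A -> B
successor = lambda n: n + 1  # a morphism g : B -> C

h = compose(successor, double)  # the composite g . f : A -> C
print(h(3))  # 7: double first, then add one

# The category laws in miniature: identities are neutral,
# and composition is associative
assert compose(identity, double)(5) == compose(double, identity)(5) == 10
assert compose(compose(successor, double), identity)(4) == \
       compose(successor, compose(double, identity))(4) == 9
```

The point, I think, is that nothing here cares that the objects are numbers; the same skeleton works for any objects and any arrows between them.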
My first thought was the Canterbury Cross, which was used as the emblem for my secondary school and looks like this:
Back when I’d just started at that school, we were supposed to make an ashtray, because in those days tobacco smoking lacked the stigma it has now been allowed to acquire. This involved taking a square sheet of aluminium and clipping the corners to make a shape somewhat like this, then folding them inward. Being dyspraxic, my attempt to do this was catastrophic. I was shockingly bad at practical subjects, or rather the ones I was actually allowed to do, which is again another story. On one occasion I was simply sawing a piece of perspex into the right shape when it literally exploded, very loudly, in a puff of acrid smoke, to which my plastics teacher’s weary response was “What have you done now?”. I could go into the gender politics of all this, but anyway, we’re talking about the Canterbury Cross. My initial attempt at understanding an ionad is that it’s like the middle portion of this cross, in that you can trace a line from it to each of the arms but not from one arm to another. That’s not quite what I mean, of course, because it never is: it’s in fact dead easy to draw a line from one arm to another, but the centre still seems to be connected in a way the arms aren’t. This is probably not it though.
Category theory is apparently difficult because it’s an abstraction of an abstraction. Group theory and topology are already quite abstract, though still applicable quite easily. Category theory takes it a step further. I’m going to have another go.
Maths generally consists of objects and operations on those objects. 2+2=4: addition is the operation there and the numbers are the objects. Likewise, the top slice of a Rubik’s cube can be turned clockwise through a right angle and then turned back, and there are twelve possible sets of arrangements of a Rubik’s cube, none of which can be reached from any of the others. These operations of turning act within those sets of arrangements, and this is a typical application of group theory; the sets of arrangements are the objects. I’m currently trying to imagine a species of intelligent extraterrestrials who grasp group theory intuitively but can’t count, because they have five sexes. More on that another time. Anyway, geometry has this too: a shape can be reflected, magnified, rotated and so on. In each of these cases and many others, there are the operations and the objects. Category theory summarises branches of mathematics by turning them into a series of items joined together in various ways by arrows, so it aims to do to maths what maths aims to do to the world, and it does it with things like this:
Presumably, and this is just me, if you can find two branches of maths which can be summarised using the same diagrams, they’re really the same branch; and if one branch has a diagram the other lacks, while all their other diagrams match, it’s worth looking into whatever that extra diagram represents, as it might well work in the other branch too.
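Returning to the Rubik’s cube point above, here’s a minimal sketch – my own toy example, not anything canonical – of the group-theoretic pattern, using the four rotations of a square written as permutations of its corners. Composing any two of them always lands back in the set, which is the closure that makes them a group.

```python
def compose(p, q):
    """Apply permutation q first, then p."""
    return tuple(p[i] for i in q)

r0 = (0, 1, 2, 3)  # identity: leave the corners alone
r1 = (1, 2, 3, 0)  # rotate 90 degrees
r2 = (2, 3, 0, 1)  # rotate 180 degrees
r3 = (3, 0, 1, 2)  # rotate 270 degrees

rotations = {r0, r1, r2, r3}

# Closure: composing any two rotations is another rotation in the set
assert all(compose(p, q) in rotations for p in rotations for q in rotations)

# A quarter turn followed by its inverse is the identity, just like
# turning a Rubik's cube slice and turning it straight back
assert compose(r1, r3) == r0
```

The objects-and-operations picture is all here in miniature: the square’s arrangements are the objects and the turns are the operations acting on them.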
I seem to have gone rather far from the Canterbury Cross here, and that might well be because there’s no connection between the two topics. In fact I think there’s bound to be a connection, because of the nature of the shape, but it might not be the one I think it is. For instance, you can reflect a Canterbury Cross horizontally, vertically or diagonally without changing the shape, and you can also rotate it by a quarter turn, so there are clearly symmetry groups which apply to it which don’t apply to, for example, the conventional long cross used as a symbol of the Christian faith. So things can be done to this shape which are relevant to group theory. The thing is, though, that you could do the same kind of thing with a Star of David, differing in detail because that shape can be rotated onto itself in ways a Canterbury Cross can’t, and all of that could be represented very generally in a category theory diagram, but there’s nothing special about it. So it seems I haven’t got anywhere near understanding what an ionad actually is, except that it’s something to do with category theory.
So my next guess, which might well be wrong for all I know, is that a Grothendieck topology is a way of looking at those diagrams which compares them, so that one can generalise from them and make useful advances by comparing different mathematical fields. Is it that? I don’t know! And I seem to have to work out what that is in order to work out what “ionad” actually means in this sense.
So I seem to have arrived in some sort of state of conceptual splodge and confusion. It almost feels like I can’t bridge the gap between incomprehension and the holy grail that is the concept of “ionad”. I feel the same way about calculus, one half of which I take to be the idea of being able to tell which way a wiggly line will go next, and I wonder vaguely whether astrologers use it to locate planets or whether they just use ephemerides, and that’s as far as I can get. With calculus, by the way, I’m aware of there being two mutually inverse types. With category theory, who knows? How do you get to the point where you can confidently say you understand something? How do you know you haven’t got it completely wrong? Usually, I suppose, you can test it in the real world: if I wire a three-pin plug wrongly I will briefly know when the electric shock throws me across the room and kills me, and if I make a (vegan) soufflé wrongly I will become aware of that when it collapses as soon as I take it out of the oven. But in this case, how will I know when I’ve got it wrong? It seems too abstract to test. I want to savour this state of personal bafflement and adumbrate its characteristics.
(Can you even make vegan soufflés?)
So to survey my mathematical knowledge, I can manage the following:
I scraped an O-level in maths. This probably doesn’t indicate much about how well I understand it though, because I’m fluent in French even though I failed the O-level but not in Spanish even though I have a B at GCSE.
At first degree level, I’ve studied statistics to the extent that I can see through deceptive practices which purport to employ it, use it in my own quantitative research and assess the quality of other quantitative research. However, stats is arguably not maths.
Also at first degree level, I’m very confident in the use of formal logic and have extended my knowledge beyond the mere understanding of sequents, truth-tables and well-formed formulae, and I also have a firm grasp of the foundations of mathematics, which extends into number theory.
I’ve pratted about a bit with stuff like fractals, non-Euclidean geometry and things which take my fancy on the lower levels of the kind of fun maths which crops up in the likes of Martin Gardner’s and Douglas Hofstadter’s writing.
Not sure if it’s maths but I’m kind of okay at coding provided OOP isn’t involved and it follows an imperative paradigm.
I’m also not scared of maths. I’m not wonderfully good at it, but in the same way as someone who feels almost alien to me might enjoy a kick-about with a football of a Saturday afternoon as opposed to playing in the FA Cup, I dabble a little bit. For instance, I’m motivated to find a non-iterative algorithm for calculating square roots, although I haven’t got round to it yet. I also find it incomprehensible how people can say they’ve never applied most of the maths they learnt at school, and wonder how hard their lives must be as a result, unless they don’t realise they’re applying it. Last night I used E=mc² and 4πr², along with a bit of trig, to work out how much energy our solar panels are likely to get from the Sun today. To me that seems useful, if somewhat inaccurate owing to the fact that the planet inconveniently has an atmosphere, furthermore with clouds in it, and it really is not that hard, although it takes quite a long time if you don’t use a calculator – and where’s the fun in that? I suppose it has the same role in my life as football does in someone else’s. But I still can’t understand this. I also wish I knew how close I was getting.
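Sketched in Python, with textbook values rather than my scribbled ones, the core of that calculation is just the inverse-square law: the Sun’s total output spread over a sphere with the radius of Earth’s orbit, then a bit of trig for panel tilt.

```python
import math

L_SUN = 3.828e26  # the Sun's total power output in watts
AU = 1.496e11     # mean Earth-Sun distance in metres

# Inverse-square law: the Sun's output spread over a sphere of radius AU,
# whose surface area is 4*pi*r^2
flux = L_SUN / (4 * math.pi * AU**2)
print(f"{flux:.0f} W/m^2")  # about 1361 W/m^2: the "solar constant"

# A panel tilted 30 degrees away from the Sun's rays catches
# proportionally less; this is where the bit of trig comes in
tilt = math.radians(30)
on_panel = flux * math.cos(tilt)
```

The atmosphere and the clouds then take their cut, which is why the real answer on a Scottish roof is rather smaller.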
Let’s have another go.
There are these things called topoi, and other things called pre-sheaves and sheaves, and they relate to this situation. Topoi appear to be places set up to do particular kinds of maths comfortably. Is that what they are? Well, I just asked an AI and it may have been trying to please me because that’s what they do, but it agreed that that’s what a topos is. It also started talking about sheaves, so yikes.
Okay, so what’s a sheaf and why are there pre-sheaves? My initial thought here is that we have conceptual ring binders, we’re wandering all over a large warehouse covered in mathematical papers from all sorts of fields, and we’re collecting them together in the ring binders according to what category (there’s that word again) they’re in, and that the pre-sheaves are the empty binders and the sheaves are the full binders. Is that it? Plug that metaphor into an AI and see what it says…
Right, I’ve done that with two different AI chatbots, and although I’m wary that they may be eager to please, both said I wasn’t too far off, although open sets are involved. I think of open sets as akin to the Bedeutungen of family-resemblance definitions as opposed to those of definitions based solely on necessary and sufficient conditions, and to be honest I think I’m right about that. I could be confidently incorrect, of course. And once again, leaving the sycophancy problem aside, although I’m not completely correct, I’m not one hundred percent wrong either. There also seems to be something about them sharing a corner.
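To push the ring-binder metaphor slightly further into actual notation, here’s a toy sketch in Python – everything in it invented for illustration – of the one thing I’m fairly sure a presheaf does: assign data to open sets, with restriction maps that cut a section over a big open set down to a smaller one, consistently.

```python
# Open sets of a tiny topological space on two points
U = frozenset({1, 2})  # the whole space
V = frozenset({1})     # a smaller open set inside it

def restrict(section, smaller):
    """Restriction map: cut a section over a big open set down to a
    smaller one by forgetting what it says about points outside it."""
    return {point: value for point, value in section.items() if point in smaller}

# A "section" over U: data assigned across the whole open set
s = {1: "red", 2: "blue"}

# Restricting to V keeps only what s says about the points of V
assert restrict(s, V) == {1: "red"}

# Restricting in two steps agrees with restricting in one go,
# which is the coherence a presheaf demands of its ring binders
W = frozenset()
assert restrict(restrict(s, V), W) == restrict(s, W) == {}
```

In the metaphor, the open sets are the binders and the restriction maps are the rule that a paper filed in a big binder must still make sense when you photocopy just part of it into a smaller one.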
As I’ve said, there was this guy called Alexander Grothendieck, who was unlucky enough to be born in Germany in 1928. After a traumatic childhood he became a mathematician, some say the most important of the twentieth century CE. At some point he left mathematical academia and became a political activist and a religious recluse, and he gave lectures in Vietnam while it was being bombed. I know very little about him, but given that limited information I wonder whether his life indicates the potential role of maths in people’s lives as a source of inner peace, and also the affinity between mathematical beauty and the spiritual realm. I’m actually trying to do that right now in writing this: trying to escape, and hoping to provide others a temporary respite, from the vexing nature of current political developments. All that said, I also wonder if it is in fact germane to the current situation in some way. For instance, while I’m writing this I’m not worrying about Gaza, the rise of global fascism or the toilet problem. It may, however, be the source of a potential argument against the supreme court ruling on “single sex” spaces, but it doesn’t have to be to serve a therapeutic purpose.
And I’ll carry on. I’d say that Grothendieck was responsible for innumerably many mathematical ideas except that because he was a mathematician one must pick one’s words carefully and note that in fact the cardinality of his ideas is not the same as the power of the continuum and that, depending on how you count ideas, he probably had a finite number of them. On the other hand, it might depend on what counts, so to speak, as an idea. In any case, one of the many things he came up with is the aforementioned Grothendieck Topology. I’m abandoning this for now due to sheer bafflement and lack of mental energy.
Here’s a thought. England’s surface southeast of the Tees-Exe Line, and the English coastline from the Tees to the Exe, are very different in character from Scotland’s surface and coastline. Is it possible that the concept of the ionad is more useful or applicable to either of those aspects of Scotland than to the part of England mentioned? Of course I’d like that, because the concept itself is from Gàidhlig, or rather Q-Celtic. The big difference between the two coastlines, to start with, is that Scotland is more fractal than lowland England – than any of England, actually, but it’s more striking defined thus. Something similar also applies to mainland Scotland combined with its islands, to Scotland with the lochs and sea lochs, and by extension to Scotland including the mountains. And this has practical applications: it’s harder to get around here than it is in lowland England, and you get situations where the Mull of Kintyre is seventy kilometres from Kilmarnock as the crow flies but 272 kilometres by road, mainly due to Loch Fyne. Here there can be steep slopes, lochs in the way and a very fractal coastline, or islands at varying distances from each other which may even exist intermittently according to the tide. Southeast England is much smoother and less complicated on the whole. At the same time it’s worth remembering that an ionad is a concept found in an abstraction of abstractions, which may therefore still not apply very well to the physical geography of Scotland.
Except that I think it does. There are several aspects to this place resulting from its geology, which has consequences for its terrain, coastline, transport network, biomes, other aspects of ecology, dialects and presumably other cultural aspects. For instance, here’s the Scottish rail network:
. . .and this is the Central Belt’s rail network, found in the rectangle within the other map:
Due to the population distribution and engineering difficulties, the complexity of the rail network is the inverse of the complexity of Scottish terrain. It seems feasible that some kind of table of ratios could be constructed between the fractal dimension of the surface in a particular area and the number of train stations or connections, and there might also be some mileage, so to speak, in working out how long it takes to get between two places by rail and then comparing it to how long it takes by road, separating the latter into walking, cycling, driving and taking the bus, or for that matter a ferry or plane. All this analysis could reveal things about transport policy and decisions made by the Westminster or Scottish governments. Considering the fractal nature of the terrain and coastline together with the topology of various transport networks also suggests it would be useful to find some way of unifying these two different mathematical ways of considering the country.
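The “fractal dimension of the surface” part is at least computable. Here’s a hedged sketch in Python of box counting, the standard way of estimating it – the “coastline” below is a made-up wiggly curve, not real Scottish data: count how many grid squares of shrinking size the curve touches, and take the slope of log(count) against log(1/size).

```python
import math

def box_count(points, size):
    """Number of grid boxes of the given size the curve passes through."""
    return len({(int(x // size), int(y // size)) for x, y in points})

# A synthetic, deliberately wiggly "coastline" (purely illustrative)
coast = [(t / 1000, 0.3 * math.sin(20 * t / 1000)) for t in range(1001)]

sizes = [0.1, 0.05, 0.025, 0.0125]
counts = [box_count(coast, s) for s in sizes]

# Least-squares slope of log(count) against log(1/size)
xs = [math.log(1 / s) for s in sizes]
ys = [math.log(c) for c in counts]
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2
)
print(f"estimated dimension: {slope:.2f}")
```

A smooth-ish curve like this comes out near 1; a genuinely fractal coastline like Scotland’s west coast sits somewhere between 1 and 2, which is exactly what the ratio table would record.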
It goes beyond that too. The Gàidhlig language is, at least from the outside, characterised by remarkable variations in accent. Moreover, the distribution, both today and historically, of different dialects and languages in Scotland is likely to be connected to the terrain and the accessibility of different parts of the country. In England, at least historically, there has been notable variation in accent in Lancashire in particular, and it seems that similar variation occurs in the Gàidhealtachd, to the extent that if your Gàidhlig is poor, people might just perceive you as being from a different island rather than as not very good at it. This is because of the divisions caused by multiple islands and by glens separated by peaks – a similar situation to the one obtaining in New Guinea, and interestingly also in the sea around New Guinea, causing respectively great linguistic and biological diversity. It’s been said that Scotland is able to masquerade as all sorts of other countries, such as Norway, the Caribbean and maybe Austria. All of this variation is linked to the terrain, and I’m sure it could be usefully modelled mathematically. I’d also be very surprised if it was irrelevant to ecology and biomes.
Therefore, there are several different fields of maths which could be used to capture and express the complexity of this country in various useful ways. For instance, anyone who’s played Britannia will be aware that it usually takes ages for the Picts to disappear, something reflected in real-world history, and this is, I guess, because they were hunkered down in remote areas which couldn’t easily be accessed by other peoples, and maybe the living was also so hard there that the others didn’t bother. This hypothesis could, I think, be tested using some kind of mathematical approach. There is also a very small tree line in the Cairngorms, and there seem to have been glaciers there, again in a small area, until something like the seventeenth century. It took longer for wolves to become extinct here than it did in England. There are all sorts of things like this which result from the distinctive characteristics of the northwestern part of Great Britain and its associated smaller islands, which can be modelled mathematically in different ways, and they’re practically very important. The logistics of moving things or oneself around the country, for example, or of understanding the locals in different places, are connected to this.
Here, then, are various mathematical ways of approaching the question of Scotland.
Firstly, the inverse correlation between rail network complexity and terrain complexity lends itself to graph theory, operations research and algebraic topology. In the last of these, islands and mountains constitute holes. The problem of finding the most efficient routes between places belongs to operations research. So with this there are:
Graph theory
Algebraic topology (I hold my hands up here to say I only have a vague grasp of what this is).
Operations research (which was actually my dad’s job).
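As a taste of the operations research item, here’s a minimal sketch – invented towns and distances, loosely inspired by the Kintyre detour mentioned earlier – of Dijkstra’s algorithm, the classic way of finding the shortest route through a weighted road network.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a dict-of-dicts weighted graph;
    returns the length of the shortest route, or infinity if none."""
    queue = [(0, start)]
    best = {start: 0}
    while queue:
        dist, node = heapq.heappop(queue)
        if node == goal:
            return dist
        if dist > best.get(node, float("inf")):
            continue  # stale queue entry
        for neighbour, weight in graph[node].items():
            new = dist + weight
            if new < best.get(neighbour, float("inf")):
                best[neighbour] = new
                heapq.heappush(queue, (new, neighbour))
    return float("inf")

# A hypothetical network: the loch blocks any direct edge,
# so the only route is the long way round via Tarbert
roads = {
    "Kilmarnock": {"Tarbert": 150},
    "Tarbert": {"Kilmarnock": 150, "Campbeltown": 120},
    "Campbeltown": {"Tarbert": 120},
}
print(shortest_path(roads, "Kilmarnock", "Campbeltown"))  # 270
```

The interesting geographical quantity is then the ratio between this road figure and the crow-flies distance, which is exactly where the terrain’s awkwardness shows up as a number.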
Secondly, the isogloss patterns in Gàidhlig accent variation could involve:
Graph theory again, regarding communities as nodes and communication links as edges of various weights.
Topological spaces, where dialect regions are open sets with isoglosses as boundaries between them.
Sheaf theory, apparently. Goodness knows how. I haven’t got to the point where I understand this much except to imagine lots of people wandering around with ring binders in a warehouse with scattered random maths papers all over the floor. I’m getting there.
Thirdly (this is stylistically frowned upon, isn’t it?), biome variation:
Cellular automata, of all things! The idea being that a particular area may have more or fewer of the resources required by particular species, which determines whether they flourish or something else does – or perhaps something else flourishes on the corpses of what didn’t.
Statistics, to pick up the patterns of biomes. In particular I strongly suspect that there’s more biodiversity at boundaries between biomes than deep within large homogeneous ones, and Scotland of all places has those boundaries in spades, and I’d like to look into that.
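Since cellular automata are at least easy to play with, here’s a toy one in Python – the habitat pattern and the rules are entirely invented – in the spirit of the idea above: a row of cells, a founding population in the middle, colonisation of any habitable neighbouring cell, and spread stopped dead by patches with no resources, the way a loch or a scree slope might stop it.

```python
WIDTH = 20

# Invented habitat: every cell habitable except two barren patches,
# which act as barriers to colonisation
resources = [i not in (7, 13) for i in range(WIDTH)]

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # a single founding population in the middle

for _ in range(10):
    nxt = cells[:]
    for i in range(WIDTH):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < WIDTH - 1 else 0
        if not resources[i]:
            nxt[i] = 0   # nothing survives without resources
        elif left or right:
            nxt[i] = 1   # colonised from an occupied neighbour
    cells = nxt

print("".join("#" if c else "." for c in cells))  # ........#####.......
```

The species fills exactly the habitable patch between the two barriers and no further, which is the flavour of boundary effect the biodiversity point is about.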
Fourthly, climate:
Fluid dynamics (some of these things are just words to me, but not this one).
Differential equations (these definitely are).
Fifthly, the legendary fractal nature of the coastline:
Fractal geometry (who’d’ve thought?).
Chaos theory.
Something called Measure Theory.
The power law regarding the size of lochs, islands and their distribution:
This is again fractal geometry, as it’s essentially a vertical version of the coastline issue.
Statistical distributions along the lines of Zipf’s Law and, I’m guessing, the Pareto distribution, alias the 80:20 rule.
The phenomenon of clustering in random and pseudorandom distributions, manifested here on a plane.
In this case, deviations from these tendencies are themselves interesting. For instance, it might turn out that the majority of lochs together constitute far less than half of the water area in Scotland. For a start, Scotland holds around nine-tenths of Britain’s fresh surface water, and Loch Ness alone contains more water than all the lakes of England and Wales combined.
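The Zipf item is easy to make concrete. Here’s a hedged sketch in Python, using invented areas that follow an exact power law rather than real loch data: sort them by size and fit the slope of log(area) against log(rank), which for Zipf-like data comes out near −1.

```python
import math

# Invented "loch areas" following an exact power law, purely to
# illustrate the method; real data would be messier
areas = sorted((100 / rank for rank in range(1, 51)), reverse=True)

xs = [math.log(rank) for rank in range(1, len(areas) + 1)]
ys = [math.log(area) for area in areas]

# Least-squares slope of log(area) against log(rank)
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2
)
print(round(slope, 2))  # -1.0: the signature of Zipf-like data
```

With real loch areas, the interesting finding would be how far the fitted slope deviates from −1, and whether a single outlier like Loch Ness drags the whole fit about.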
The fields which come up repeatedly here are fractal geometry, topology and actually measure theory, which I mainly left out because I don’t know what it is. It’s closely connected to the Banach-Tarski paradox, which includes such oddities as being able to disassemble a single ball and then build two balls of the same size as the first out of a finite number of components, or taking a ball bearing apart mathematically and reassembling it into an object the size of the Earth. Clearly these things can’t actually be done, but they seem less baffling when you look at the details, because spheres and balls each consist of infinitely many points, and you can take an infinite number of items out of an infinite set and still be left with an infinite set. Measure theory addresses this problem by providing a way to decide exactly how big sets are; the paradoxical pieces turn out not to have a well-defined size at all. I can’t take this any further.
So, there are three areas of maths, along with certain others, which come up at least a couple of times: measure theory, topology and fractal geometry. Just in passing, Scotland is not unique in this respect, because there are other countries and regions in the world to which these same features apply. These include the aforementioned Papua New Guinea, the South Island of Aotearoa/New Zealand, Japan (maybe Hokkaido even more than the whole of Japan), Switzerland, Norway and of course Nova Scotia. Not all of these have the full set, and it should also be borne in mind that there are “anti-Scotlands”, including the Netherlands, countries which include bits of the Sahara Desert, most of Antarctica and Kansas. I’d also be very interested to know how North Carolina fits in. It isn’t that these countries and regions are boring, or even that the same mathematical fields don’t apply to them; it’s that the fields in question don’t apply to them usefully or interestingly. In British terms, the opposite of Scotland in these respects is probably East Anglia. Hence this comparison has already become meaningful and productive and hasn’t just been a waste of time. And seriously, no disrespect to the places which are “boring” in this respect; for all I know there are different aspects of those countries to which exactly the same mathematical fields could become relevant, such as the distribution of sizes of grains of sand in the Sahara.
All of these fields include concepts of dimension, open sets, functions and spaces. The concepts of sheaves and ionadan also come up, so at long last I might be ready to understand what “ionad” actually means.
An ionad – the word is actually taken from the Irish sense rather than the Gàidhlig, but it’s the same word barring accent and pronunciation – means “place” or “locale”. In that way it’s a little similar to the concept of a locus in geometry, and it aims to combine topology and category theory in such a way as to allow one to reason spatially in a point-free and structured manner. An ionad is like a topological space whose open sets are the starting point, with points derived from those open sets rather than assumed from the outset. If topology, category theory and sheaf theory are each thought of as circles in a Venn diagram, like red, green and blue in additive colour or cyan, magenta and yellow in subtractive colour, an ionad is the bit in the middle which is white in the former case and the infamous “brown splodge” in the latter. Of course I’m nowhere near understanding sheaf theory at this point and still have the Filofax people wandering all over the explosion in the maths warehouse in my head, but I’m closer. But apparently an ionad is useful in the following ways (and others):
It explains how different parts of the ScotRail network interact without assuming the points are primary, so presumably it could work as a way of explaining train delays and replacement bus services.
It helps to describe when native speakers of Gàidhlig are likely to perceive each other as speaking with different accents and when they’re likely to hear them as familiar, even when there are some differences in those accents.
It enables you to model what happens on the borders of two biomes such as peatland and Caledonian rain forest rather than having to think of the border as merely a line between two more easily understood biomes. There, it allows smooth models rather than sudden jumps.
You can spot scaling rules about the coastline of Scotland and understand its geometry without having to think of it as a series of straight lines or curves.
It does the same thing with the size distribution of lochs, which is hardly surprising considering the Scottish terrain is just a plane-based version of the line which is the coastline.
The idea over all of this is that you don’t start with the points but with open ideas about what categories might be needed, so you might think in terms of Highland and coastal towns, towns with active train stations and the Gàidhealtachd.
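The point-free idea itself can be made concrete in a few lines of Python. The following toy sketch is my own illustration of the general locale-theoretic move, not anything from the ionad literature: start with the open sets of a tiny space, forget the points entirely, and then recover them as the “completely prime filters” of the lattice of opens.

```python
from itertools import chain, combinations

# Toy T0 space on {0, 1, 2}; the opens and names are mine, purely to
# illustrate recovering points from open sets alone.
E = frozenset()
A = frozenset({0})
B = frozenset({0, 1})
C = frozenset({0, 2})
X = frozenset({0, 1, 2})
opens = [E, A, B, C, X]

def subsets(xs):
    """All subsets of a list, as tuples."""
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def completely_prime_filter(F):
    """Check the filter axioms against the lattice of opens."""
    F = set(F)
    if X not in F or E in F:
        return False
    # upward closed: any open containing a member is a member
    if any(U in F and U <= V and V not in F for U in opens for V in opens):
        return False
    # closed under intersection (intersections of opens are open here)
    if any((U & V) not in F for U in F for V in F):
        return False
    # prime: if a union of opens lands in F, one of the parts must too
    if any((U | V) in F and U not in F and V not in F
           for U in opens for V in opens):
        return False
    return True

points = [set(F) for F in subsets(opens) if completely_prime_filter(F)]
print(len(points))  # one recovered "point" per point of the original space
```

The three filters found correspond exactly to the three points of the original space – for instance, the filter {B, X} is the set of opens containing the point 1 – which is the sense in which points can be derived from the opens rather than assumed.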
So to finish: whereas I still don’t really have a confident understanding of what an ionad is, I do very much feel that as a mathematical concept it seems particularly apt as applied to Scotland. My general feeling is that it’s like when oil floats on water or an air bubble rises through a burn, but with the boundary between them, or the skin of the bubble, treated in its own right and as primary. That, I think, is what an ionad is!
And I’m perfectly happy for someone to come along and explain why I’m completely wrong.
We humans have long tended to think of ourselves as the pinnacle of creation or evolution. Aristotle, though a better biologist than he was a physicist, organised everything into a “great chain of being”, starting at the bottom with materia prima, unformed matter, and progressing upward through minerals, plants, invertebrates, vertebrates of various kinds and reaching its peak in “man”, and yes I do mean man as he was supposed to be better than woman. Although there were ideas of evolution around at the time, with natural historians wondering if humans had emerged from the water, this wasn’t supposed to be something up which beings ascended. They were just set statically in their positions. Christians later added God to this scale, above humans, although it’s possible Aristotle had already done that. I don’t remember it that clearly.
Thousands of years later, along came Linnaeus, actually Carl von Linné, a botanist who invented Latin binomials aiming to describe all life in neat categories called genera and species in a work entitled Systema Naturae. Homo sapiens is a good example; another, probably not coined by Linnaeus himself, is Boa constrictor. There’s a sense of security in his system, which has been much modified since he invented it, although the principles remain the same. I don’t know if he had the idea of hierarchy in his system in general, but he certainly courted controversy by including humans in it. Later still, Wallace and Erasmus and Charles Darwin, along with Lamarck, came up with the theory of evolution, leading to a strange set of misconceptions summed up by the question “if we came from monkeys, why are there still monkeys?”. There are a couple of things wrong with this question, as well as with the idea that things move upwards when they evolve, which are worth mentioning now. One is that the more recent approach of cladistics groups organisms as everything more closely related to a particular species than to another. On this view there’s a clade called simians, including New and Old World monkeys and also the apes, humans among them, but there isn’t really a clade just for monkeys; and since nothing ever evolves out of its clade, insofar as there are monkeys we are still monkeys, and nothing ever stops being one. The other is the idea that evolution is advance and everything moves “up”.
Just looking at the great apes, there is one species which has evolved less than the others: the orangutan. Because they’ve changed less, they retain more features in common with the ancestors of the whole group. More significantly, human hands are more primitive, in the sense of having changed less, than those of chimps or gorillas, whose hands have evolved for knuckle-walking as well as for handling things. And the famous “march of progress” graphic is completely spurious, and also dodgy in various ways, because we didn’t evolve from chimp-like ancestors except insofar as we are chimp-like ourselves, and it’s as true to say that the other apes are descended from us as that we are descended from them. I think I’ve already talked about orangutans on here though.
In other words, in a sense there is no progress. That said, things do get more efficient sometimes. Modern predatory carnivores can run faster than their ancestors and replaced another group of predatory mammals who couldn’t capture prey with their paws but had to use their jaws to do so, for example. But even as far as intelligence is concerned, because humans can use language our short-term memories are much worse than those of chimps and our common ancestors. This is particularly interesting because the recent concern about social media and the internet more generally reducing attention span and concentration is actually only the latest phase in a process which began with the appearance of language, continued with the invention of writing and the growth of literacy and reached a more advanced stage with our current “goldfish” brains (actually goldfish have good memories of course).
Intelligence of the kind we have has been thrown up as something which appears to be useful to us and our ancestors in recent geological times, but to refer to the title of this blog, could be a passing phase. There are problems with being able to learn a lot which animals who don’t need to do this don’t have. Firstly, humans have to learn to do many things which other species can do instinctively, such as walk. Quite often, animals have a simple “party trick” such as spinning an orb web in the case of some spiders, which is not reflected in the rest of their accomplishments, but of course a human could learn to weave a net for a similar purpose. Termites can build arches, but humans can invent arches and learn how to make them from others, by word of mouth, observation, study or muscle memory.
All this comes at a cost. We have a long childhood and in order to reproduce physically (we’re social and cultural beings who also reproduce in the noösphere), ideally we need to get through puberty. We then need to find a partner and wait for pregnancy to produce one or occasionally two or more offspring at a time, who then take up much of our time and most of our energy. I’ve made this a heteronormative account for the sake of simplicity, and there are other possible narratives regarding lifetimes, but whatever they are, we are cultural, we depend on each other and what we do takes a long time, so the same principles still stand.
At the same time, we’re developing goldfish brains in several ways, mainly in connection with digital ICT. We’re outsourcing a lot of our thought. Nowadays, people even use AI chatbots to talk to potential romantic partners. We’re – I mean, I hardly need to say this, feels like a string of platitudes – dominated by social media, fake news, fake images generated again by AI and who knows what else?
In the meantime, we interfere with the biosphere without even thinking about what we’re doing, although the fact that we think and have the kind of intelligence we have leads to the damage we do, even unwittingly. That intelligence, such as it is, is a potential liability to the planet’s life.
While all that is going on, something else carries on upon the sea bed and elsewhere. There are, to take a particular example, animals called placozoa, which are simply irregular, lichen-like layers of cells clinging to rocks and consuming algae and other microörganisms in their vicinity. And then elsewhere there are certain tumours which can be passed from animal to animal. One of these is canine transmissible venereal tumour, a sarcoma usually transmitted by mating between canine animals of several species, including dogs, wolves and coyotes, which grows on the genitals. Another is devil facial tumour disease, a similarly-spread tumour affecting the faces of Tasmanian devils and transmitted when they bite each other during fighting. These tumours and the placozoa spread without needing to find themselves mates, have practically no gestation or maturation period and they don’t need no education. There are also transmissible cancers among bivalves such as cockles and mussels. At the same time, they’re rare.
Henrietta Lacks is a well-known woman whose cervical squamous cell carcinoma is notorious for still thriving seventy-three years after her death. The cell line, known as HeLa, is effectively immortal and has overgrown other carcinoma cell lines in labs to the extent that certain lines have been unwittingly lost by being taken over by her cancer. I have to mention too that Ms Lacks’s heirs have never seen a cent of the millions of dollars of profit which have resulted from the research done on her cells, and that her name was for a long time completely unknown to the general public.
I know I’ve said all this before, and I’m reiterating it because it occurs to me that this train of thought can develop in a direction I haven’t previously considered. I’m sorry about the repetition, but I have something new.
To repeat what I’ve said previously, another interesting phenomenon is that of organoids. Sometimes, the cells we shed into sewers from our bodies begin new lives briefly by starting to divide and form structures in sewage works. And of course we know that untreated sewage is often discharged into the sea.
Transmissible cancers are admittedly rare, but bear with me.
Putting these bits together, suppose HPV, which is partly responsible for HeLa cells, were to produce just the right mutation in a cervical squamous cell carcinoma to make it transmissible in the same way as canine transmissible venereal tumour. This is improbable, but not impossible. It would be a malignant cancer able to invade and destroy tissues, including those of the reproductive system, and able to cause infertility; it would also be passed on during childbirth, although not usually to the genitals, and it would be terminal if not treated. It could be expected to spread somewhat like AIDS. At the same time, cells would be shed into the sewers and reach the sea when sewage is discharged into it. There, they continue to divide and attach themselves to the bodies of marine mammals with naked skin, such as whales and seals, spreading malignantly into their skin and, in the case of seals and the smaller cetaceans, killing them, while also being shed into the water where they infect other individuals. Some of them settle on the sea bed and feed on microbes, similarly to placozoa.
The second ingredient is linked to Covid but extended. One of the long-term effects of Covid on some people is cognitive impairment, reported here, although the effects are relatively mild. I’m tempted to measure it in terms of IQ, but that would just give a spurious sense of precision and quantity. Covid is likely to be only an early example of many pandemics, because deforestation and climate change lead to the movement of viral vectors such as bats into new environments where they’re more likely to come into contact with people. AIDS was probably caused by this, four dozen or so years ago, more specifically by the human consumption of bushmeat. It doesn’t stretch credulity either to expect the after-effects of viral pandemics to cause a reduction in intelligence, although describing it as a reduction assumes some kind of scale, and I’ve already said that such scales are somewhat odious, if not in all cases, so it gets a bit difficult to express what I mean by this. What I mean is that people will be less able to solve intellectually demanding problems and think critically.
Now imagine that in this world of attritional cognitive decline, caused by a series of pandemics stemming from deforestation and climate change, we continue to be confronted with various problems, another of which is antibiotic resistance, and that we not only lack the mental capacity to address them as a species but also have the very bodies meant to address them starved of resources and of the ability to operate together in a global research community, as we’re currently seeing in the US. At this point it might even be necessary for AI to take over, and if it isn’t, bad decision-making could lead to that happening anyway.
This leads me to the third consideration in this mess: AI misalignment. It isn’t that AI is malevolent. The idea was once suggested that an AI might be instructed to make as many paperclips as possible and go on to convert the whole planet into paperclips, then send out space probes to turn everything else possible in the Universe into paperclips. This is a somewhat silly example, but it’s like the monkey’s paw story of wishing for various things and getting them ironically and malignly granted. Imagine therefore that in this human world of cognitive decline, AIs are instructed to “ensure biological humans survive for as long as possible”, the idea being to guard against something like mind uploading into the cloud or the manufacture of robots with human cognition and our memories copied into them. So they obey the instruction. They locate the currently rare transmissible tumours, place them in vats or perhaps coastal lagoons, guard them effectively, redirect all agricultural food supplies to them and reason that this decision encourages the mindless, unintelligent variety of human cell line, which is less harmful to the environment than human intelligence and technology. Humans as we understand ourselves are then left sterile, dying of viral infections, less intelligent than before by gradual degrees and unable to take care of ourselves. Intelligence wanes and dies.
So that ^^^ basically.
And we’re all dead, but on the bright side there are massive vats of cancer tumours all over the world which also leak into the sea where they kill all the dolphins and seals.
Of course, this is a perfect storm of a prospect, and the transmissible tumour angle in particular is quite improbable, but there is a biological argument here. This world of human survival only in the form of cancer is supposed to illustrate that intelligence may be something we prize and think of as the pinnacle of some kind of progress, but could actually be a passing phase which is a liability to the survival of our genes. In our civilisation, education and good critical thinking skills are the kind of thing which excludes the people who have them from contributing to a society dominated by people without them, so whereas this passing phase of liberalism and tolerance would promote the long-term survival of the species, it can’t have a long-term influence unless people are flexible enough to move beyond scarcity-based economics. Ironically, so-called eugenics is also harmful to our long-term survival because it reduces diversity. To give a strictly physical example, a species which varies in its heat and cold tolerance, with some individuals thriving in hot weather and others in cold, would be able to survive fluctuations in temperature over a long period. A world of blond-haired, fair-skinned and blue-eyed people is incestuous. And whereas Musk, for example, might prefer to spread his genes, preferring his own traits, he doesn’t have the broad perspective of what may be adaptive and selected for in the long run.
The short-term benefits of language and shared memory along with the capacity to act upon them become brittle not because we’re intelligent but because we’re not intelligent enough. If we were able to anticipate and work through the probable consequences of how we’ve acted in detail and be vividly aware of them, we might be more resilient in the circumstances we’ve created for ourselves. Maybe it’s the crows next time, or maybe there won’t be another turn. Earth’s story is long and indifferent, and the Medea Hypothesis captures what this might be about. This is the Gaia Hypothesis’s evil twin. According to the Medea Hypothesis, far from ushering the planet into a more habitable condition, multicellular life is self-destructive and tends to push it into a situation where only simple single-celled organisms can survive. I’m not sure this is illustrated by this specific trajectory though. It may be more that intelligence is just one of countless possible survival strategies life can manifest and simple undirected arbitrary processes just lead to us blundering into the next phase, which won’t favour intelligence at all. If this is true though, it may or may not relate to the state of the human world, or there may be an analogue to that feature. What would an intelligent society look like? Or is it intelligence or wisdom? Have we lived through the period of history where intelligence has much influence on politics or world events? If so, what does that mean for progressive and conservative views? I can’t help but be tempted by the idea that liberal democracy, good though it was, was a brief phase in a few countries which is long since gone. And my reaction to that is not to adopt conservatism as that clearly doesn’t work and is in any case morally reprehensible. So what is to be done?
I recently had a rather heated discussion with someone over ethical scepticism. Putting this in context, I recently wrote a blog post about the Zizians which I think illustrates a rather analogous approach. When one tries to learn something off the internet, and I’m bound to be as guilty as anyone else here, just not in this area, it can involve “tunnelling into” a subject until one reaches what one thinks one needs to know and then just stopping, without knowledge of the areas around it. Often this is fine, but it means, for example, that my knowledge of plumbing tends to involve olives, PTFE tape, doing stuff to ballcocks and nothing else, and in the fine tradition of everything looking like a nail when all you have is a hammer, I end up trying to apply this to everything – although I did once try to repair a burst pipe using melted HDPE, so not quite everything (it didn’t work).
The conversation involved David Hume and veganism. Now David Hume is a much-respected and studied Scottish philosopher and happens to have been my specialist author in the final year of my first degree, so I know more than nothing about him. That said, I’m really not an expert. I probably know him about as well as George Orwell (I’m not going to say it). The point made about veganism was that the moral injunction not to be complicit in or directly cause suffering or death in members of other species doesn’t follow from the fact that they can suffer. This is a special case of Hume’s more general point that you can’t derive an “ought” from an “is”. I do have an answer to this issue, but my disagreement with this person in particular was that picking veganism as an example was completely arbitrary because more broadly this could simply amount to ethical scepticism, that is, the belief that right and wrong are meaningless, and that acting on this idea is sociopathy, or psychopathy if it comes “naturally”. The person with whom I was discussing this didn’t take kindly to being described as sociopathic, but although it might be perceived pejoratively, in one sense it’s an entirely neutral term – simply descriptive. There’s therefore an irony in the person in question perceiving me as expressing a moral judgement when I was in fact simply applying a label to their behaviour. If they perceive that as negative, it calls their own claim of not being able to derive an “ought” from an “is” into question, because of their reaction. If this is the kind of reaction most people would show, it also suggests a mechanism for finding a basis for ethics.
The rest of this post is almost going to reproduce what American universities might call “Ethics 201”, hence the title: the advanced undergraduate course in ethics found in many analytical philosophy courses. As such, I feel I’m cheating a bit because I’ve taken it directly from my own degree syllabus, but I’m doing that because I’m getting a wee bit tired of people “mining” philosophy for answers rather than considering things more broadly. There’s also quite a lot missing, such as the question of regret versus remorse, the extent of responsibility and the conflict between tolerance and commitment, but for now I’ll leave those aside. I’ll also inject my own views.
Ethics 201 covers a history of ethics in Western academic analytical philosophy, dating from the nineteenth century CE up until the publication of Alasdair MacIntyre’s ‘After Virtue’ in 1981. It starts with utilitarianism, which sounds remarkably like common sense: actions are right in proportion as they tend to promote the greatest happiness of the greatest number. This is, incidentally, often used as the basis of veganism. There’s also negative utilitarianism: the view that suffering should be minimised and happiness ignored. There are a lot of well-explored problems with this family of beliefs. One is Robert Nozick’s “utility monster”: a single individual able to feel more pleasure than everyone else put together. This would mean that nobody else matters, and that if the only way the individual could be made really happy were to torture the rest, according to utilitarianism that’s exactly what should be done. This also raises the question of how different people’s pleasure can be compared. You can’t necessarily add it all up, because nobody knows what it’s like to be anyone else. It also fails to account for painless death in isolation – a person who dies alone and forgotten, painlessly in her sleep, is, according to utilitarianism, morally irrelevant. This really happens, and when it does people consider it highly tragic. Well, not according to utilitarianism. It also fails to account for justice. An unfair situation which makes everyone happier, or even most people happier, is absolutely fine for a utilitarian, so for example if someone is falsely accused of serial murder and imprisoned or executed due to a miscarriage of justice, and that then deters the real murderer, that’s also okay, and obviously it’s not. I’ve attempted to repair utilitarianism, and the best I can manage is modal negative utilitarianism: the aim should be for the largest number of people to be suffering the least.
Imagine a bar graph showing the number of people suffering at each level of severity on a scale of zero to twelve. Comparing two such graphs, the better outcome is the one with more people in the lowest column. However, utilitarianism is basically irredeemable so far as I can tell, although many people, particularly other vegans, would disagree.
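My reading of that comparison can be put into a few lines of Python – a hedged sketch, since “modal negative utilitarianism” is my own coinage and this is just one possible way of formalising it: walk up the severity columns from zero, and prefer whichever distribution first has more people at a lower severity.

```python
def better(dist_a, dist_b):
    """Return True if dist_a is preferred: walking up from severity 0,
    the first column where the two differ has more people in dist_a."""
    for a, b in zip(dist_a, dist_b):
        if a != b:
            return a > b
    return False  # identical distributions: no preference

# Two hypothetical populations of 100 people across severities 0..12.
mild_world = [60, 20, 10, 5, 3, 2, 0, 0, 0, 0, 0, 0, 0]
harsh_world = [40, 25, 15, 10, 5, 3, 2, 0, 0, 0, 0, 0, 0]

print(better(mild_world, harsh_world))  # True: more people at severity 0
```

The rule is lexicographic, so it never trades a worse-off person at a low severity for any number of improvements higher up, which is both its appeal and, arguably, its weakness.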
Although it has these and other drawbacks, most criticism from early twentieth-century philosophers focused on the naturalistic fallacy. The first form of this is the attempt to establish the utility principle as good because pleasure is desired: it’s “natural” to desire pleasure, but it doesn’t follow that it ought to be pursued. This is the is-ought problem Hume had highlighted quite some time before. Another fallacy is the move from everyone desiring their own happiness to everyone desiring happiness for everyone else. These are the two famous flaws which may be fatal for utilitarianism. Because of this, G E Moore came up with a new ethical theory, referred to as intuitionism, which claims that goodness is a simple, non-natural property which we can all intuitively perceive. One problem with this is that we disagree, though when that happens it may be because our judgements are built up from a number of simpler perceptions of right and wrong. I always think of the statement made by an Iranian man in about 1980 CE: “The Ayatollah is a good man: he has banned women from appearing on TV”. To that man, the inference was axiomatic: P, therefore he is good. Most people in the West would say: P, therefore he is bad. Each person can then claim that they simply intuitively know they’re right, and all that’s left is disagreement with no ability to argue anyone round. This particular issue comes into focus later, but intuitionism, although it’s my favourite, is not widely accepted any more.
Intuitionism was followed by the rise of logical positivism, the belief that statements mean something only if they’re axiomatic, logically necessary or verifiable by observation. Because ethical statements don’t fall into any of these categories, emotivism arose: the belief that ethical statements are articulations of approval or disapproval provoked by feelings alone. Moreover, feelings themselves were defined by logical behaviourism, the belief that internal mental states are merely reports of physical sensations which get labelled by words like “happy”, “angry”, “sad” and so on. They don’t mean anything beyond the likes of a fast, strong heartbeat, a dry throat, sweating and the rest. I imagine most people today would look at this idea of emotions and emotivism and conclude that the person who thought of it needs therapy, but would probably never volunteer for it. This brings up the issue of ethical scepticism again, which is the belief that there simply is no right and wrong. One of the surprising things about Bertrand Russell, who was very much involved in social reform and peace campaigning, is that he was actually close to being an ethical sceptic and never made any link between his philosophy and his political activism, although some work done since his death suggests otherwise. His view seems to have been basically emotivism, which isn’t quite ethical scepticism, but it’s odd that there’s such a disconnection between his activism and his actual ethics.
Emotivism is one of the two major non-cognitivist positions in twentieth century ethics, but there are actually two varieties of it. A J Ayer’s adherence to logical positivism led to him being rather dismissive of ethics and he didn’t seem to focus very much on the details. Later on, C L Stevenson refined it somewhat and placed it more at the centre of his attention, seeing ethical language as similar to imperatives, i.e. commands and requests, despite their apparently declarative form. I think it’s probably relevant that I’m not aware of any language at all which uses imperatives to express moral statements, because if they were really that similar one might expect this to happen, and it doesn’t. On the other hand, we do say “should” and we also say “you shall do that” as a way of commanding someone, so maybe. The difference between A J Ayer’s emotivism and C L Stevenson’s is that the latter claims ethical statements have a persuasive element, which is more sophisticated and can stand on its own easily rather than simply being motivated by logical positivism, which nowadays is basically dead anyway. There are still emotivists today.
I just want to insert a note here about what metaethics is: it’s the study of the basis of ethics, that is, of what, if anything, makes anything right or wrong, good or bad.
As I said, emotivism is an example of non-cognitivism. Non-cognitivism is a variety of metaethical theory which takes conventional meaning out of ethical language. For non-cognitivism, looking up a word like “good” or “bad” in the dictionary shouldn’t lead to a definition but should be more like “a word indicating that the speaker approves of something”. This led to the development of a second metaethical theory called prescriptivism, whose main proponent was R M Hare. This has been summed up as the view that “X is good” means “I approve of X: do so as well”. The standard analysis here is that ethical statements are those which are universalisable and entail imperatives, and at this point it probably becomes obvious to a lot of people that this is basically Kantian ethics, which dates from 1785. This basically amounts to “what if everyone did the same?”, and it doesn’t work because “the same” is not defined by the theory. As I said, it’s non-cognitivist, meaning that it actually seeks not to define the content of moral language. Therefore, imagine the following: someone steals a loaf of bread from a bakery to feed their starving children because they have no money. Two ways of putting this are:
A person does what they have to do to feed their dependents.
A person deprives another of a source of livelihood generated by that person’s own efforts.
The first is universalisable, the other not, but they’re two descriptions of the same situation, and prescriptivism doesn’t give us any means of deciding between them. I think this makes prescriptivism useless.
Then came the Scottish-American philosopher Alasdair MacIntyre with his ‘After Virtue’. This consists partly of a survey of the history of ethics, and this is where I get back to the way intuitionists would argue. There’s little common ground there, because one can simply assert that something is good based on one’s intuition and someone else can assert the opposite, and then you just get nowhere. After some time of this sort of thing going on, you end up with arguments and opinions that look as if they have no conventionally meaningful content, with the result that non-cognitivism seems to be true. Much of the discussion about metaethics after that was provoked by that initial notion, although Stevenson did also say that he was trying to look at how everyday value judgements were made, such as comments about news stories, and not at how philosophers talk in seminar rooms or wherever. A J Ayer’s influence is probably not to do with that either.
MacIntyre looked to Aristotle here. He saw ancient discussions of ethics as more firmly grounded than they became from the eighteenth century onwards, and thought that virtues and vices made more sense as a way of looking at right and wrong. He put the later loss of grounding down to the rejection of the idea of purpose. In the ancient world and mediaeval Europe, people firmly believed human life had a purpose and an ideal course, and harnessing that idea of an ideal provides a link with ethics. There is a sense in which a sharp knife is better than a blunt one, although obviously one used in a murder would be better if it were blunt, but that’s in a broader context. If we agree on a purpose for humanity, it seems to make sense to suppose that what happens between humans, or between us and other things and entities such as the environment, could be judged in a similar way to a good knife being suitable for chopping up food. In other words, MacIntyre believes the Enlightenment was a mistake because it attributed moral agency to individuals, reducing morality in the end, after a lot of unravelling, to nothing more than individual opinion. Kalani Kaleiʻaimoku o Kaiwikapu o Laʻamea i Kauikawekiu Ahilapalapa Kealiʻi Kauinamoku o Kahekili Kalaninui i Mamao ʻIolani i Ka Liholiho, the second king of Hawaiʻi, abolished the idea of taboos as an influence on Hawaiian life, and met with no opposition. MacIntyre, using that example, said that Friedrich Nietzsche had done something similar for Europe by opposing Enlightenment morality. He didn’t agree with where Nietzsche took it after that though. He took from Aristotle the ideas that how people should be is not the same as how they are, that moral rules are based on virtues, which stem from an understanding of human purpose, and that values had to be derived not from individual opinions but in a more broadly social form.
All of this relates to Elizabeth Anscombe’s view of ethical discussions. She sees them as being couched in similar terms to law and crime but without a lawgiver. If there is no God, according to her, this has to be inappropriate, leaving the problem of how you can talk about it at all. I don’t know whether this meant ethical scepticism or something else.
This, then, is part of the wider context of this individual statement about Hume and veganism, and it’s another example of a “tunnel”. Someone took a random idea and applied it to veganism, when actually it could both be applied to ethical questions more widely and was part of a philosophical discussion which has gone on for something like two centuries since then. There’s an obvious parallel with the Zizian narrowness, and this is likely to be an increasing problem today because of the online tendency to present things out of context. There need to be experts.
Now you might have found this post very boring, and in a way that’s the point. To you it might seem an abstruse and tedious passage, but it’s one of my areas of expertise, and areas of expertise are very likely to be boring and inaccessible to outsiders. I would find plumbing very boring and I’d be bad at it, which is why plumbers are useful and I trust them. It’s about trust. Either you get the long, boring monologue and someone going on and on at you, or you trust experts to some extent. Yes, sometimes they will get things wrong and they will have biases, but to some extent the choice is between going and living in a cave in a forest and surviving by foraging, or living in a functioning society and trusting experts with a certain healthy degree of suspicion. But if you do get suspicious, you then need to do the work to find out what the bigger picture is. That may not be easy, but it’s what must be done to express an opinion worth taking seriously if you disagree with the evidence-supported views of experts.
I’m currently sitting on our favourite couch. It is in turn sitting in a room downstairs in our house in Scotland. We bought it in England and tried to get it up the stairs of our English house, because our living room was upstairs there. We had enormous trouble getting it past the bends in the stairs, and eventually I decided to measure the bend and the couch, so I measured the depth and height of the couch and then the three dimensions of which the bend consisted. Using the well known right angle triangle equation a²+b²=c², I took the square root of a²+b² to find c, the hypotenuse of the couch’s cross-section. I then made the slightly more complex calculation of combining the hypotenuse of the stair bend’s dimensions with the height of the ceiling above the stairs to work out the maximum length of an object which could be fitted through the gap, and since that second figure was smaller than c, I was able to prove, and I have to state this carefully to be precise, that the couch would not fit into the space on the stair bend, and therefore it would be impossible to take it up the stairs and put it in our living room, so it remained downstairs. Now there could’ve been some other approaches, such as taking the feet off or the banisters down, but in fact both of those were parts of the objects concerned and it wasn’t going to happen, because I’m not Bernard Cribbins.
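The check can be sketched in a few lines of code. The measurements below are made up for illustration, since the real ones weren’t recorded here, and the sketch simplifies the stair bend to a single two-dimensional gap rather than the full three-dimensional problem:

```python
import math

# All measurements in centimetres; these particular numbers are
# hypothetical and only illustrate the method.
couch_depth = 95
couch_height = 85

# c = sqrt(a^2 + b^2): the diagonal of the couch's cross-section
couch_diagonal = math.hypot(couch_depth, couch_height)

# The same calculation for the stair bend, simplified to one
# rectangular gap.
gap_width = 80
gap_height = 75
gap_diagonal = math.hypot(gap_width, gap_height)

print(round(couch_diagonal, 1))        # 127.5
print(round(gap_diagonal, 1))          # 109.7
print(gap_diagonal >= couch_diagonal)  # False: the couch stays downstairs
```

With these figures the gap’s diagonal is shorter than the couch’s, so the couch cannot pass, which is exactly the form the real argument took.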
This is of course Pythagoras’s Theorem. People often say they never apply anything they learnt in maths to their lives after leaving school, leading me to conclude either that their lives are unnecessarily hard or that they don’t realise they’re using it, because this kind of problem comes up all the time in everyday adult life and I can only surmise that people think really strangely in this area. I scraped an O-level pass in maths and this is obvious to me. In fact I almost stayed in the CSE group and was the lowest graded person to go “up”. I should also mention that there is a famous Moving Sofa Problem in mathematics, but this isn’t that. The moving sofa problem is the question of which rigid two-dimensional shape of the largest area can be manoeuvred through an L-shaped planar region with legs of unit width. It didn’t help us because the stairs were three-dimensional, i.e. they went up diagonally, turned through two ninety degree angles while continuing to ascend, and the ceiling of the ground floor was in the way too. There might be some couch-stair combinations it could’ve been useful for, but not this one.
Most people know one thing about Pythagoras, and that’s that he’s responsible for Pythagoras’s Theorem: that the square on the hypotenuse is equal to the sum of the squares on the other two sides of a right angled triangle. This also brings up the issue of the square root of two being irrational, i.e. not expressible as a ratio, i.e. a fraction, because an isosceles right angled triangle with unit opposite and adjacent sides will have a hypotenuse whose length is the square root of two units. As a child I thought this proved that units of measurement didn’t exist, but obviously that was my child’s mind failing to grasp things properly. The only thing is, Pythagoras probably didn’t come up with his theorem himself. It was known about before his time, and it’s more likely that people decided to attribute it to him to give it some kudos.
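As it happens, the irrationality of the square root of two has a very short classic proof by contradiction; this is the standard argument, not anything attributable to Pythagoras himself:

```latex
\text{Suppose } \sqrt{2} = p/q \text{ with } p, q \text{ whole numbers sharing no common factor.} \\
\text{Squaring: } p^2 = 2q^2, \text{ so } p^2 \text{ is even, hence } p \text{ is even: write } p = 2k. \\
\text{Then } 4k^2 = 2q^2, \text{ so } q^2 = 2k^2 \text{ and } q \text{ is even too.} \\
\text{But } p \text{ and } q \text{ cannot both be even if they share no common factor, so no such fraction exists.}
```

So no fraction, however fine the units, measures that hypotenuse exactly, which is what so unsettled both the Pythagoreans and my childhood self.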
Unfortunately I don’t seem to be able to answer satisfactorily the question of whether Pythagoras existed. He may well not have done. I want to start by mentioning a few other figures: Nero was the Roman emperor who fiddled while Rome burned and rebuilt the city in a much improved condition; George Washington was the guy who cut down the fruit tree as a boy and admitted to it, saying “I cannot tell a lie”; and Archimedes was that bloke who got in the bath which overflowed, giving him the inspiration to tell whether a crown was solid gold, and shouted “Eureka!”, running down the street naked. Or maybe not. I haven’t checked these and they’re very likely to be just stories, and actually the question of whom we refer to when we tell stories like this is a modern philosophical problem. So Pythagoras, by the same token, was an ancient Greek philosopher who discovered something important about triangles, was vegetarian, wouldn’t eat beans and thought numbers were very important to the nature of reality. That’s probably more than most people “know” about him.
So I’m going to start with the question of whether he existed. At least three other important Greek men wrote about him and his life: Aristotle the philosopher, Herodotus the historian and Iamblichus the Neoplatonist philosopher. There was a whole school of philosophy named after him which he’s said to have founded, although that doesn’t mean he existed. That school of philosophy has a consistent belief system rather than just being arbitrary unconnected beliefs, so there is such a thing as a Pythagorean philosophy. However, no writings at all can be attributed to him because Pythagorean philosophy was an oral tradition. It was passed on by word of mouth long before it started to be written down, and this of course means it could’ve ended up being distorted even if he did exist. There was also a tendency in the Greco-Roman world for people to attribute ideas and quotes to people to make them seem more important and respectable than they would’ve been perceived as otherwise, rather like how lots of quotes today are attributed to Churchill and Einstein that they never said.
And the thing is, Pythagoras as he was understood in ancient Greek sounds absolutely bizarre. He had a thigh made of gold, was able to be in two places at once and could converse with non-human animals, and there were a few other things about him which were odd-sounding. He comes across as a kind of magical cult leader and demigod, perhaps a shaman or a sage rather than a philosopher. This partly reflects how philosophy was not neatly parcelled off from religion and spirituality as it is today, at least in academia, and what we separate today was actually considered together until at least the time of Newton. The difficulty, in fact, is similar to those of establishing the nature of the real Jesus and Socrates. So we’re in a situation where the one thing everyone thinks they know about him isn’t true and he was seen as some kind of superhero with incredible psychic powers. But in a way the question of whether he existed or not is the most boring thing about him. Everything I say about him from this point on has therefore to be attributed to some kind of possibly mythical or otherwise fictional figure rather than any real person called Pythagoras living in Ancient Greece.
He was seen as an expert on the soul. In Ancient Greek times before him, nobody thought there was a separate soul which survives death. This was more an Ancient Egyptian thing, and for all we know that’s where it originated. Because of this expertise, combined with his belief in reincarnation, he was said to be able to remember his past lives. He once got someone to stop beating a dog because he recognised the cries as those of a dead friend reincarnated in the dog’s body. This is also why he was able to talk to members of other species. And whether or not he existed, there was clearly a cult based on his apparent beliefs, and this cult was also rather strange. They believed that the right shoe should always be taken off before the left one but that the left foot should always be washed before the right, and that no-one should eat anything red, and they were seriously into numerology and vegetarianism. In fact, before the invention of the English word “vegetarian”, we were called “Pythagoreans”. They also included both women and men, which seems to have been unusual at the time. We may assume that an institution which admitted women was the exception back then, but we don’t actually know. You also had to be silent for five years once you joined. Returning to the vegetarianism, although they did believe in it, justified through the idea of human souls being reincarnated in other forms, they also believed in sacrificing animals to deities. There’s even a story that Pythagoras was once seen eating chicken, and that he replied to the objection that he was supposed to be veggie and not eat live animals by saying that the animal he was eating was dead. This makes me wonder if they were actually vegetarian or simply sacrificing animals so they could eat them. Even so, many veggies do have stories like that made up about them, and most surviving records about Pythagoras are concerned with either criticising or lauding him and his followers.
There isn’t much attempting to be objective. Incidentally, although he had a religious cult of his own, he still worshipped the Greek deities of the time and what they did was “extra”: it was still dodekatheism, as it’s known nowadays, but a kind of denomination of it rather than a separate religion.
Pythagoras was of course into maths, which he combined with numerology because at the time there was no distinction. He seems to have been the first person to connect mathematics to an attempt to explain the world. This particular notion has been extremely influential. Even today, a hard science has to include maths to be taken seriously. One of the reasons psychology emphasises statistics so heavily is that it wants to be a “proper” natural science. However, the way Pythagoreans approached maths and its relationship to the physical world back then seems quite different to how they’re approached now. For instance, even numbers were considered female and odd numbers male, and since the number 1 wasn’t considered a number at all because it didn’t have a beginning and an end, five was considered the number of marriage, as it was the union of the first female number with the first male number. The number seven was considered sacred because, being prime, nothing could make it up and it could make up nothing. Two was considered the number of justice because it enabled things to be divided equally into two halves. Three was considered to sum up the whole Universe as it was the first number to have a beginning, middle and end. He also discovered triangular numbers. The number three was considered to represent a human being, and was of course male, representing the threefold virtues of prudence, good fortune and drive. That almost sounds like it’s out of a contemporary self-help book.
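Triangular numbers are easy to illustrate. A minimal sketch, showing the first few and the formula they follow:

```python
# Triangular numbers: the n-th is the sum 1 + 2 + ... + n,
# i.e. the count of dots in a triangle with n rows,
# which works out to n * (n + 1) / 2.
def triangular(n):
    return n * (n + 1) // 2

print([triangular(n) for n in range(1, 6)])  # [1, 3, 6, 10, 15]

# The Pythagoreans' sacred ten is the fourth triangular number:
print(triangular(4) == 1 + 2 + 3 + 4)  # True
```

The fourth of these, ten, is the one the Pythagoreans held sacred, as described below.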
Although the links Pythagoras made between numbers and the Universe were peculiar, he also connected geometry and arithmetic more thoroughly than his predecessors, because of the hypotenuse connection with the square root of 2 and because of his theorem, although that had been known to the Babylonians. He was the first person to come up with a method for constructing a dodecahedron, and connected many shapes to the Cosmos, bringing me to what ought to be the most famous thing he was known for: he was the first person to claim Earth was round. Remarkably, although this has turned out to be correct, his reasoning had no connection to any observations, because science wasn’t there yet. In addition to that, he came up with the idea that Earth and other planets moved in orbits, although oddly not around the Sun but around a central fire, and also that there was a counter-Earth, required to make up the numbers in the system. There are convoluted reasons for all this.
This initially peculiar link between the Universe and mathematics, once forged, has stayed ever since and may not in fact be as obvious as it seems. I have suggested before that one solution to the Fermi Paradox (“where are all the aliens?”) might be that they’re all really bad at maths compared to humans, but another solution may be that although they’re perfectly good at maths, they never had a Pythagoras to make a link between the two and it’s never occurred to them to apply maths in this way. Hence their science is still Babylonian in nature, or even less like Western European science than that. They never got any further. If that’s true, it makes Pythagoras, even if he never existed, an incredibly important figure.
Another aspect of all this is that we can look back from our own “rational” viewpoint and pooh-pooh the idea that he was an ancient Doctor Dolittle who could be in two places at once and remember past lives, when actually maybe he could do all of that and it’s our own restrictive mindsets which have stopped that from happening. This doesn’t sound sane, but when we consider what many Christians believe about Jesus it becomes more a case of us simply having decided that one ancient semi-mythical person has such attributes rather than the other. It only sounds crazy today because we chose to retain the deification of Christ rather than that of Pythagoras, which could be seen as practically a coin-toss. There is a world not far from here where many millions of people still believe Pythagoras had something in common with C3PO.
Another numerological aspect of Pythagoreanism was that nobody should gather in groups of more than ten, because ten was 1+2+3+4, so ten in particular was a sacred number to them. This extended to them composing prayers to that number, and I find this interesting because it creates a link between mathematical entities and deities and other spirits. Platonism and intuitionism are two opposing views of maths. Intuitionism holds that humans invent maths as we go along, i.e. it’s a creation of the mind just like a poem might be, whereas Platonism holds that maths is discovered: it’s already out there before we get to it. So for example, there are considered to be eight planets in this solar system. Assuming there are no others, there were also eight planets when the first trilobites appeared 521 million years ago. In fact, at that point there was a number representing the global population of trilobites, just as there is today, although today that number is zero. So does that mean that the number eight exists independently of human consciousness or, more precisely, of the ability to count? I have a strongly atheist friend who is also a Platonist, and she acknowledges that it’s an odd position to be in. The Ontological Argument for God tries to bootstrap God into existence from the concept of God, and this perhaps reflects the notion that God exists as a concept in a more objective manner than an atheist or agnostic would usually be expected to think. The concept of God is “out there” in the Cosmos in some way, and maybe in the same way as maths is said to be by Platonists. But this, well, I’m going to have to use the word “idea” at some point, of deities existing abstractly is usually considered separately nowadays from the idea that squares or numbers exist. We have a partition in our thoughts which the Pythagoreans had yet to erect.
This can be directed back on Pythagoras. Clearly the idea of Pythagoras does exist, although it seems to have varied. We have Pythagoras as the triangle guy and the first person to suggest that the world is round, although actually that might’ve been one of his successors. But Pythagoras himself may not have existed in the same sense that Elizabeth I of England did, and as such this accords quite well with the general attitudes of the time and the problems of ancient history. Also, back at that time and place, the Greeks seem to have taken their religion quite literally so for them Zeus was as real as Pythagoras whether or not we think of him as real.
On consideration though, I do think he existed in the way we generally understand existence today, i.e. not just as an abstract or mythological entity. The reason for this is that his cult existed and was quite forceful and distinct in nature. It seems to me that a requirement for a large group of people to avoid speaking for five years and never to eat beans sounds like the kind of thing a charismatic leader would get their followers to do, and it really sounds like cultish behaviour by today’s standards. It makes cults seem like constant fixtures in human life rather than phenomena characteristic of the modern world. This is probably not terribly surprising, but maybe this assumes too much, because it might be that cults with leaders are more recent developments connected to individualism and a tendency for people to seek complete answers to life’s problems. I haven’t checked, but I don’t think the Essenes had a founder or leaders.
Here’s the weird bit though. As I’ve said before, although Pythagoreans seem to have been the first people to link maths and science, from today’s perspective they seem to have come up with a list of arbitrary superstitions and ideas without a thorough connection to reality. But despite this, somehow they were able to assert the correct idea that the world is round, which to us seems to depend on observation rather than philosophical or mathematical abstraction. Nobody seems to have had that idea before. Later Greek philosophers came up with ways of testing this and measuring Earth’s size, but it wasn’t those careful tests which led to the initial thought. What are we to make of this? Maybe the idea crept in from somewhere else.
We still have the metric system. Does that maybe represent a similar superstition about numbers? We happen to have ten digits on our hands and it’s led to us producing a system which is easier to use than imperial because of how we count, but are we also partaking of Pythagorean mysticism there? We’ve put that into the box of rationality, but maybe it’s more to do with custom. Also it seems that the real mystery is how maths actually manages to engage with the world at all. Why would this be?
This post is not about Nostradamus, although I have written something about him. It would also be easy to write me off on the strength of what I wrote there, but the approach here is very different and in fact suggested by the opinions of the Zizians and other rationalists. It’s based on probability.
First of all, as things stood before Trump’s election, the human race was due to die out in the 2060s from respiratory paralysis, along with all reptiles, mammals and fish, the last for other reasons. With the change in US policy on carbon emissions, that date has now been brought forward, but this is not about that. I now realise that I’ve told you two things this isn’t about.
You might remember my post on the Doomsday Argument (there’s probably more than one) a few years ago. The idea goes back to an estimate of when the Berlin Wall would come down. In 1969, when the astrophysicist J Richard Gott III visited the then eight year old Wall, he reasoned from the Copernican Principle, that there’s nothing special about a particular observation, individual and so forth, that his visit was best assumed to fall at a random point in the Wall’s total lifespan, most likely somewhere near the middle. On that basis he gave an estimate of 50/50 that it would be gone by 1993. In fact it came down in 1989, which is quite close. The Doomsday Argument applies the same reasoning to human lives: one’s own birth is best estimated as falling about halfway through the total number of human births there will ever be. With the population growth during the twentieth century of doubling every thirty years, an estimate of the number of human lives lived so far at seventy five thousand million since 600 000 BP, and taking my own birth in 1967 as an example, it being the only one I can, it appears that the human species will probably be extinct by 2133. There are numerous flaws in this argument, but it’s important to note that it isn’t an argument that overpopulation will cause extinction or that any cause in particular will do so. There will of course be a cause, but we don’t seem to be able to tell from this argument what that would be. Nonetheless, if population growth slows, the prediction extends further into the future, and it also depends substantially on assumptions about which entities are likely to have these thoughts, that is, when we became human and started to conceive of the idea of the end of the world, the human race and so forth. In fact, population growth is indeed decelerating, and this stretches our probable prospects well into the future.
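Gott’s reasoning can be sketched numerically. This assumes nothing beyond the uniform-position idea described above: with 50% confidence, the elapsed fraction of a thing’s lifespan at the moment you observe it lies between a quarter and three quarters, so its remaining life is between a third of and three times its current age.

```python
# Gott's "delta t" argument: treat the moment of observation as a
# uniformly random point in the observed thing's total lifespan.
# With a given confidence, the elapsed fraction f lies in the middle
# band of (0, 1), and the remaining lifetime is age * (1 - f) / f.
def gott_interval(age, confidence=0.5):
    low_f = (1 - confidence) / 2    # earliest plausible elapsed fraction
    high_f = (1 + confidence) / 2   # latest plausible elapsed fraction
    return age * (1 - high_f) / high_f, age * (1 - low_f) / low_f

# The Berlin Wall was eight years old when Gott saw it in 1969:
lo, hi = gott_interval(8)
print(round(1969 + lo, 1), round(1969 + hi, 1))  # 1971.7 1993.0
```

The upper end of the 50% window is 1969 plus twenty-four years, i.e. 1993, which is where the figure in the text comes from; the Wall actually fell in 1989, inside the window.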
I’ve talked about all of this before, but I think it’s a measure of the occurrence of the thought and not the occurrence of humans. An outbreak of optimism about the future of the human race by the early 22nd century would mean that no more ideas of that kind will occur, or that they’ll be rarer, so maybe what we’re really measuring is the extinction of doomerism, not that of humanity. There are all sorts of reasons why this might happen. It could be that our descendants are all parasitic tumour cells with no brains and therefore no expectations, that we’re all wiped out by AI which doesn’t have that thought or that things are going to get a lot better. Hence this apparently cold mathematical argument has so many hidden variables that it may be worthless.
There is another, similar, argument which I’ve used to predict a future without human space exploration, and it goes like this. Suppose there are a million habitable exoplanets which will one day be within human reach, or alternatively the same capacity in the form of artificial space habitats of some kind. This is a very conservative estimate, as it would mean either that only one star system in four hundred thousand has such a planet or that the technology to produce such habitats is very inefficient. Now suppose that each of these planets (I’ll use the planet settlement scenario for simplicity’s sake) has an average population of only a million, counted as a discrete number per century, so for example there are a million people on one such planet and a century later they’ve died but another million people have replaced them. Suppose this goes on for ten thousand years. That’s 100 × 1 million × 1 million, which is 10¹⁴ people. Going back to the original figure of 7.5 × 10¹⁰ people having lived so far, that makes everyone who has ever lived a tiny fraction of the number of people who will live in this scenario, namely 0.075%. This means that the probability of living at a time before all this has happened, i.e. of not being one of these people, is only around one in 1300. These are ridiculous betting odds which nobody rational would risk their money on. Also, the estimate I’ve made is extremely conservative. The Galaxy has been estimated to contain around 300 million habitable planets, which will continue to be habitable for on average several hundred million years each and could support a population of ten thousand million people each. If the other scenarios are explored, a much wider variety of stars could support a Dyson swarm, i.e. a roughly spherical shell of space habitats with many times Earth’s land surface area, which would dwarf even the second estimate, at the order of 10²⁵ people.
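The arithmetic of the conservative scenario is short enough to check directly:

```python
# The conservative settlement scenario described above:
planets = 1_000_000             # habitable worlds eventually settled
people_per_century = 1_000_000  # population of each world per century
centuries = 100                 # ten thousand years

future_people = planets * people_per_century * centuries
print(future_people == 10**14)  # True

# Rough count of everyone who has ever lived so far:
people_so_far = 7.5e10

fraction = people_so_far / future_people
print(fraction)             # 0.00075, i.e. 0.075%
print(round(1 / fraction))  # odds of about 1 in 1333
```

So a randomly chosen human from this whole history has odds of roughly 1 in 1333 of living before the settlement era rather than during it, which is the figure of around one in 1300 used in the argument.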
If one considers one’s life as a random sample from human history, with these odds it’s all but guaranteed that, if humans settle space substantially in the future, one would be living during that era and not this one. Our very existence now therefore makes it practically certain it’ll never happen. It doesn’t give the reason for it though.
I actually think this is more productive than the Doomsday Argument, but it’s also flawed. Suppose you consider the far smaller probability of having been born at all. The chances of that for each person are lower than one in sixty thousand million, assuming three hundred ovulations per lifetime and 200 million sperm per ejaculate. This also assumes that our identity depends on genes, which I strongly disagree with, but it’s an interesting thought with substantial basis in reality. It’s still a tiny probability, but even so, every one of us does exist. That probability, incidentally, could perhaps be multiplied by the number of generations since the point at which a single allele could be definitively traced to an individual, which is actually only around sixteen, or by the number of generations since the start of sexual reproduction, although since fish, for example, don’t ovulate single eggs but produce similar numbers of eggs as they do sperm, the numbers get wild before about four hundred million years back. Nevertheless, here we are.
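Multiplying the two figures given here shows where that bound comes from:

```python
# The raw egg-and-sperm arithmetic: each lifetime egg could in
# principle have met any sperm from a single ejaculate.
ovulations_per_lifetime = 300
sperm_per_ejaculate = 200_000_000

combinations = ovulations_per_lifetime * sperm_per_ejaculate
print(combinations)  # 60000000000: sixty thousand million, long scale
```

The true probability of any particular person is lower still, since this counts only one ejaculate per egg and ignores which prospective parents ever meet.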
But suppose the argument works. It seems to have predictive power of some kind, although what exactly it predicts is unclear. It might simply mean that we won’t make a Dyson swarm, that distances between stars are too large or even that there isn’t enough phosphorus. It’s also closely coupled to the Fermi Paradox, because whatever stops that from happening may also stop other cultures from doing the same, which is why there are no aliens in contact with us, so maybe we’re about to find out why that is. I personally think it means that something will happen, or is happening, which will prevent that future from unfolding. It could be something positive. Maybe we will achieve a degree of enlightenment which leads us to stay on our planet and make it an earthly paradise which nobody will want to leave. Or maybe we’ll just bomb ourselves to bits, or die in the ocean acidification scenario, or whatever. Just thinking of this in the wider “where are all the aliens?” setting, it’s also possible that the Great Filter only applies to us because there are no intelligent aliens. Just to spell it out, the Great Filter is the idea that an event takes place everywhere life might be expected to develop and prevents it from getting to the point where intelligent representatives start visiting other star systems. The past Great Filters could include these: Earth-like planets are rare; phosphorus is too scarce and too vital for life of any kind to develop; there aren’t enough mass extinctions to stimulate evolution, or there are usually too many of them for intelligent life to evolve; intelligent life is just unlikely; intelligent life is common but tends to develop at the bottom of the ocean; or it’s common but really bad at maths. Future ones could include these: AI takes over; we wipe ourselves out through war; pandemics put paid to us; we get too engrossed in online activities to bother; or space exploration turns out to be a flash in the pan. There are plenty of others.
If there are no spacefarers because there’s no life elsewhere, many of those still apply to us.
Ultimately, we only have the brute fact that we’re intelligent tool-using entities which have not colonised the Galaxy. It’s difficult to draw conclusions from that, and lack of information tends to stimulate speculation too much. Venus is a good example. At some point, astronomers realised that the reason Venus looks so bright is that it’s covered in clouds. They couldn’t see any surface features. Because the only clouds they knew about back then were the ones here on Earth, they drew the erroneous conclusion that Venerean clouds were also made of water vapour, and in fact this is a parsimonious conclusion because it doesn’t posit anything else in the absence of information. From that, they further concluded that Venus must be warm (fair enough, it being near the Sun) and humid, perhaps being covered in swamps, rainforests or just a global water or carbonic acid (fizzy water) ocean. Since at the time it was thought that the planets further from the Sun were older, some scientists also wondered if it was home to dinosaur-like creatures. All this, as Carl Sagan observed, from the fact that you can’t see any surface features through a telescope. Lack of knowledge begets dinosaurs.
We don’t actually know that we’re not doing something similar from this lack of knowledge, but it’s hard to restrain oneself from trying to fill in the gaps. I want, though, to start from the position that it does seem to be a good argument that this will never happen, for whatever reason. I do think it’d be good if it did, because for example the overview effect influencing a lot of people would make the world a better place. The overview effect is the influence seeing Earth from space has on astronauts, where they begin to see humanity as one and the planet as a precious and delicate place worth preserving. It’s been described as “a state of awe with self-transcendent qualities, precipitated by a particularly striking visual stimulus”. When people have spent some time in space, they come back changed, usually positively so, and actually settling in space, I think, would have a lot of other positive results, including some which would promote radical left wing and Green political activism here on Earth, which is why I’m so focussed on it. All that said, it doesn’t follow that it would be a good thing in the end, and staying here on Earth and turning our back on all that is seen by many people as a good thing. There’s a pretty good case for this too: the sums of money and resources spent on space while there are starving people down here… well, you know the argument. There’s a famous poster by the artist Kelly Freas from the early 1970s which comes across as finely balanced in this respect:
Presumed to be copyright NASA and therefore in the public domain but will be removed on request
The motivation behind this picture was to encourage support for the Apollo program and, more widely, the space program in general, but I think to a 21st-century viewer it comes across as emphasising the problems down here and makes the Saturn V seem like a wasteful attempt to escape them and distract the world, along the lines of Gil Scott-Heron’s ‘Whitey on the Moon’. In other words, the possibility that astronauts’ days are numbered can be regarded as a neutral fact rather than as either utopian or appalling. Even so, it still appears to offer a way of predicting the future.
A while ago, I raised questions about the Artemis program. If a probable result of the return of humans to the lunar surface would be a large number of people living in space, a number which then increases until it exceeds the population that has ever lived on Earth, the probabilistic argument I offered above predicts that this is unlikely to happen. It could still happen if the number of people in space always stays very small, or even if it’s relatively large but short-lived. Something will have to stop it from happening, unless it amounts to no more than a pointless publicity stunt. Paradoxically, Elon Musk seems to think it’s vital for humans to settle on other planets for the sake of the long-term survival of the species, and that may well be true, but he also seems to be very good at preventing it from happening through incompetence and overreaching himself, plus the mere fact that he’s close to being a (long scale) billionaire (he’s only a billionaire using the American system). To be highly specific, this argument seems to predict, in the current period, that Artemis will fail. Weirdly, this appears to be a form of retroactive causation: the cause follows the effect. Because one can have a high degree of confidence that there will be no significant human space program in the future, one can conclude that Artemis will fail. It’s as if the failure is caused by the way things are in the future rather than the other way round.
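For concreteness, the self-sampling arithmetic behind this kind of prediction can be sketched in a few lines of Python. The figures are illustrative assumptions, not numbers from the argument itself: a rough hundred billion humans so far, and a few hypothetical sizes for a space-settled future.

```python
# Self-sampling sketch: if N_PAST humans have ever lived on Earth and a
# space-settlement future would add n_future more, a randomly sampled
# human finds themselves in the pre-settlement group with probability
# N_PAST / (N_PAST + n_future). If that probability is tiny, finding
# ourselves this early counts as evidence against such a future.

N_PAST = 1e11  # rough, commonly quoted estimate of humans who have ever lived

for n_future in (1e11, 1e13, 1e15):
    p_early = N_PAST / (N_PAST + n_future)
    print(f"future population {n_future:.0e}: P(being this early) = {p_early:.6f}")
```

The larger the imagined space-faring population, the more surprising our early position becomes, which is the sense in which the argument “predicts” against it.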
This of course has a Zizian flavour, and more broadly that of Roko’s Basilisk (don’t look it up – it’s almost certainly wrong, but in case it’s right, it’s better not to know what it is). Both of these seem to be examples of the future influencing the past, and that makes it appear possible to predict certain aspects of the future. A really obvious example is that time machines which can travel back before their own first instance will never be invented, since if they were, we might expect to have witnessed time travellers, and we haven’t. There may be some stipulations here, and it’s worthwhile putting in the work to determine exactly what we’re attempting to predict, hence for instance the proviso that they can’t travel back before their first instance. There might be other elements. For instance, it might be that time travel backwards is possible but that it kills the time traveller, erases them from ever having come into existence, or makes them undetectable. We would have to be precise about what we know, but once we’ve reached that precision, we basically have on our hands a way of predicting certain facts about the future, and also of revealing a weird reverse-causality phenomenon. It’s pretty revolutionary in itself that effect can precede cause in some situations.
Something rather similar can be done regarding the present moment and the past. Our existence guarantees that we live in a Universe which is not entirely hostile to intelligent tool-using entities, which in our case arose through the appearance and evolution of biochemical life. We also know that Earth formed, is currently habitable, and that there was no time between the appearance of life here and today when it was completely wiped out. However, one thing we don’t know is how improbable our coming into existence was. The fact that we’ve lived on a planet which has been hit by a few comets and asteroids without all life on it being killed, and which hasn’t been sterilised by a gamma-ray burst, doesn’t mean such events are unlikely, because our existence today is a given. Any of them could happen tomorrow for all we know, and there may be nothing keeping the future like the past at all. We just don’t know how precarious our situation is.
I want to talk about something similar now, and although I don’t quite know how to link it, I’m convinced it’s related. The past having been as it was in certain respects is assured by “survivorship bias”: we have no option but to find ourselves in circumstances where we’re still here and where we came into existence. Survivorship bias is a logical error. One example involves the card-guessing tests used in psychic research, where a researcher with a large number of subjects might single out a subject she thinks is psychic because they’ve guessed correctly every time. Suppose there are 1024 subjects, each asked to guess a sequence of cards bearing one of four symbols. Under the null hypothesis, about 256 of them will guess the first card correctly, 64 the first two, and so on, until after five cards one person can be expected to have guessed correctly every time. Suppose further that there are 1024 of these studies going on in universities all over the world. There will be variation between them in the number of successful guessers, and some will contain “super-guessers”: statistically, one person in the whole combined group can be expected to guess correctly ten times in a row. Moreover, there’s roughly a one-in-four chance that someone will manage eleven, about one in sixteen that someone will manage twelve, and so on. Once the probability falls below one in twenty, the arbitrarily chosen threshold for statistical significance, a researcher can publish a result suggesting that at least one subject’s run of thirteen correct guesses is statistically significant, and there’s then a danger of that paper receiving all the attention while the papers showing nothing remarkable remain unpublished. This is supposed to be avoided because it distorts the results: negative findings are as important as positive ones, if not more so.
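This can be checked by brute force. The sketch below assumes exactly the setup in the text (four symbols, 1024 subjects per study, 1024 studies, correct guesses counted from the first card); the seed is arbitrary.

```python
import random

SYMBOLS, SUBJECTS, STUDIES = 4, 1024, 1024
rng = random.Random(0)

def opening_streak():
    # How many cards in a row this subject guesses correctly, starting
    # from the first; each guess succeeds with probability 1/SYMBOLS.
    n = 0
    while rng.randrange(SYMBOLS) == 0:
        n += 1
    return n

best_overall = max(opening_streak() for _ in range(SUBJECTS * STUDIES))
print("best streak across all studies:", best_overall)

# Expected number of subjects worldwide whose opening streak reaches n:
for n in (5, 10, 11, 12, 13):
    print(n, SUBJECTS * STUDIES / SYMBOLS ** n)
```

The expected counts come out at 1024 worldwide for five in a row, one for ten, a quarter for eleven and 1/64 (well under one in twenty) for thirteen, matching the figures above.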
This is potentially an aspect of academic research which is distorted by a need to be perceived as doing something notable, because it means negative results are buried.
Survivorship bias may influence our perception of how typical our history and planet, and possibly even our universe, are. We’re here, so it follows, for example, that Earth hasn’t recently been hit by a large asteroid and that Covid didn’t wipe us all out – it wasn’t actually that kind of virus anyway, although it could’ve been a lot worse. The fact that the former didn’t happen dictates that the asteroids mainly orbit in a belt far from our orbit rather than us being situated in the middle of an asteroid belt, but it may also be that that kind of solar system is short-lived or rare anyway. We may seem to have lived charmed lives in a sense, and this is where things could be extended into the future.
Quantum immortality is a concept whose scientific respectability has never been clear to me. The idea is that as the timelines branch (I actually don’t think they branch as such, but that’s not something I want to go into just now), we inevitably end up in the ones where we continue to be conscious. For instance, when I was eight, I rushed out of my primary school and was almost hit by a car, but survived, of course. There are, depending on how firm determinism is, other timelines where I was fatally injured, but I’m obviously not in any of those, at least in the current year. In fact I couldn’t be, given the simple fact that I’m still here typing this. The extension of this thought is that none of us ever dies, and our consciousnesses never end, not just subjectively but in terms of continuing to survive as observed by others. Every time a potentially consciousness-terminating event occurs, we take the road where our consciousness continues. Note that I’m talking about the permanent cessation of consciousness here, since we’re clearly temporarily unconscious on a regular basis during dreamless sleep. Hence the idea is that subjectively each of us will never die. A way of linking it to quantum ideas more clearly is to imagine a machine gun which works like the Schrödinger’s Cat thought experiment, except that the radioactive particle is replaced by a radioactive sample whose decay gives each bullet a 50% chance of firing, one bullet per second. The subject sits in front of the gun, which is aimed at their head. Subjectively, the gun will never fire, because if it ever did, there would be no observer left to experience the outcome; the observer only ever witnesses the bullets not firing. This is rather sloppily put together, but I hope you get my point.
After five minutes the gun has potentially fired up to three hundred times, and the probability of it not having fired is one in 2^300, odds against a number some ten orders of magnitude greater than the number of atoms in the observable Universe, so it can be almost guaranteed that no-one outside the firing line will observe the victim still alive at the end of the five-minute period; but for the “victim” the situation is one hundred percent safe. Of course, somewhere out there in the Multiverse there is someone who has the reputation of being fantastically fortunate. Other people exist.
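Python’s big integers make the arithmetic easy to check; the 10^80 figure for atoms in the observable Universe is the usual rough estimate.

```python
# Probability of 300 consecutive 50% non-firings is 2**-300, i.e. odds
# of 2**300 to one against.
odds = 2 ** 300
atoms = 10 ** 80  # rough standard figure for atoms in the observable Universe

print(len(str(odds)) - 1)           # order of magnitude of the odds: 90
print(len(str(odds // atoms)) - 1)  # orders of magnitude beyond the atom count: 10
```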
Extending this to every event while keeping the quantum component, it’s easy to imagine that each timeline begins with a quantum event which ends up determining the whole future in that timeline until it’s observed, and since it has to keep being observed, there has to be at least one immortal being in each. This means that in the majority of universes, which appear often to be merely composed of hydrogen rather sparsely distributed throughout space, there are no observers and therefore they actually don’t exist, although this would be countered by either panpsychism or the existence of an omniscient deity. I am of course panpsychist myself. A more conventional way of understanding it is that you are immortal in any timeline you actually experience. The bullet misses you, the car crash isn’t fatal, you recover from the infection and your cancer goes into remission.
However, this is not a recipe for ceasing to worry about the future. If you’ve read ‘Gulliver’s Travels’, you’ll know about the Struldbrugs of Luggnagg, who are born with a red dot above their left eyebrows which changes colour until it’s black. Swift obviously did a better job than I’m about to, so you can read his own words on them here. It’s in Chapter Ten. It won’t surprise you to learn that their immortality is not a blessing but a curse. The condition isn’t hereditary, and a baby of this kind is born only every few years in the whole country. Lemuel imagines Struldbrugs to be mentally liberated from the prospect of death, able to become extremely wise and to pass on their wisdom to the younger generations as a positive jewel to the land. However, what they actually do is serve as a dreadful warning to the populace, which makes people feel relieved that they’re mortal, as the Struldbrugs’ presence is a constant reminder of old age. They have, as the phrase has it, years in their lives but no life in their years, because they continue to age despite being immortal. Just as the old in our society tend to be world-weary, think they know more than they do and have contempt for the young (don’t shoot the messenger – this is Swift talking here, not me), they have all the more vices owing to their knowledge that they’ll never die. They’re “not only opinionative, peevish, covetous, morose, vain, talkative, but incapable of friendship, and dead to all natural affection”, and they don’t care about any of their descendants beyond their grandchildren. They’re mainly envious and frustrated, and basically wish they were dead the whole time, lamenting at funerals because they know they’ll never have that release. Past the age of eighty, if they’re married to each other, the state mercifully dissolves their union out of compassion, because otherwise their marriage would become a living hell of being totally sick of each other.
They’re also declared dead at eighty in order that their heirs can inherit and although they are either allowed to continue on a pittance from their own estate or receive welfare, they can’t own property or even rent it. Any diseases of old age continue, though they don’t get worse, and due to changes in language, after about two hundred years they cease to be able to hold any conversations with people outside their generation, who in any case are very few, and they also have dementia.
Swift wasn’t the only person to make this observation, although it is of course typical of him. There’s also the ancient Greek myth of Τιθωνός, lover of Eos, for whom she obtained immortality from Zeus but forgot to ask for eternal youth, so he withered endlessly, ending up shut away in a room, babbling, until he was mercifully turned into a cicada (the wish for as many years as there are grains of sand in a handful belongs to the Cumaean Sibyl, who made the same mistake). There’s also an Asimov story, ‘The Last Trump’, in which the dead and the living are given eternal life and youth, and initially suppose they’re in paradise, but soon realise that they’re damned and that eternal life will become unbearably boring. They’re then reprieved on a technicality when an angel points out that the date of the resurrection differs between calendars, so it can’t have been a proper doomsday.
For this is what quantum immortality is. You don’t die, and you remain conscious, but you also deteriorate without end, so your life becomes unbearable. It’s also compatible with dementia, at least to some extent: you don’t need a good memory, only the ability to sense things in one way or another, perhaps with the last remaining cone cell in one retina. Perhaps you occasionally notice a red dot and then forget about it immediately. It isn’t good, really. In fact it wouldn’t even be good if you retained all your faculties, because your life would be poisoned by boredom and over-familiarity.
This raises a few questions, one being what ageing actually is. In a sense, not all organisms age or die of old age. There’s a species of petrel, a bird, which appears not to senesce, and a jellyfish which responds to injury by regressing to infancy and beginning to mature again. However, these are not in fact immortal: both, for example, would die in a fire or if eaten by a predator. So is ageing the accumulation of internal insults and health problems which eventually proves fatal? If so, it’s effectively the same as accidental death; it’s just that the accidents are things like oxidative stress, cardiovascular deterioration or cancer. Or do we have an allotted span, such that we die after a certain number of years determined by an internal clock? This clearly does apply to many species which die immediately after reproducing, which is just as well, because otherwise they would use up the resources needed by their offspring, who would then starve, or they would end up eating their offspring shortly after hatching. Some might say that this is what one current generation of humans in positions of wealth and power is actually doing right now. We hang around for our children and grandchildren, but on the whole we need to die to get out of the way of future generations.
Presumably with quantum immortality, the former scenario is assumed to be in play: we don’t have an inherent life expectancy, but simply accumulate injuries until they become fatal, except that in each subjective case those injuries never end up killing us. Obviously we’re not surrounded by immortals, so each of us has their own private world in this scenario, dying in an increasing number of timelines but persisting in a dwindling number of them, which, however, never reaches zero. One major problem with this is that it seems to be solipsistic, as all the “people” around you are still mortal and are just shadows with no consciousness. You’re in your own world. This may, however, have a form of retrocausality too. For instance, two ways of living longer are to be lucky with your genes and to inherit or adopt health-promoting attitudes from your family or community, meaning that you are, for example, more likely to find yourself in a personal timeline with particularly healthy and long-lived relatives. This doesn’t rule out straightforwardly accidental death, but it does mean your timeline is likely to have been selected for long-lived relatives. Therefore, if you believe in quantum immortality, it would often be reasonable to conclude that your relatives, while not immortal, might end up living a particularly long time or be especially healthy in old age. It might even go further than that, with the possibility of living a relatively charmed life in a stable political environment, free from local wars and famines, for example, or with a particularly low rate of serious crime.
This raises an ethical problem: it could make you complacent. You’d know that everyone else was subjectively immortal and that you yourself would never encounter a fatal outcome. Therefore you might well be less motivated to do good to others, or even to bother looking after yourself. In the initial example, you could just wander in front of the quantum machine gun, secure in the knowledge that you’ll be unharmed despite the increasingly vast odds against that being so. But you and others still wouldn’t have life in your years, and that would still be worth preserving. It’s a heady prospect, but probably not a good one, because you might stop caring about those affected by the troubles and hardships of the world, although suffering would still exist, more of it in fact than if we were mortal.
Hugh Everett was a prominent proponent of this idea, although I have to say it’s a fairly obvious one, so I doubt he was the first. He was the first well-known theorist of the many-worlds interpretation of quantum physics, the apparently branching-paths idea (in fact the branches would probably always have existed but be indiscernible) of innumerable parallel universes forking at each probabilistic event. He believed that because of this he would never die. From our perspective he is in fact dead, although if he was right that has no bearing on whether he’s immortal: he would simply be “elsewhere”, and we just happen to live in one of the majority of universes where he is deceased. He died suddenly of a heart attack on 19th July 1982 at the age of fifty-one, having smoked sixty a day, drunk heavily, become grossly obese, never exercised and never gone to the doctor. His son was very angry after his death that he had never taken care of himself, although he also observed that his father had simply done what he wanted without interference, withholding no pleasures from himself, and then just died. Everett also wanted to be cremated and have his ashes thrown out with the rubbish, something his widow wasn’t keen on for a few years after his death but eventually complied with. Incidentally, if you know the band Eels, that’s the son who commented thus, and there’s an album inspired by his death. Of course, this album doesn’t really exist because Hugh Everett is immortal! It seems to me that this kind of self-neglect may have resulted precisely from his belief in quantum immortality: in his view there’s simply no point in looking after your health.
I’m not sure this follows, to be honest. I think that apart from anything else you probably would want to be healthy for as long as possible in order to enjoy life, and also to spare the feelings of people close to you. Also, what if you’re wrong? I don’t think many people who have recently touched grass, as the phrase has it, would willingly step in front of that machine gun. Certain persons, of course, haven’t done that recently.
The Doomsday Argument and Quantum Immortality feel like they come from the same stable, so it’s worthwhile working out what they have in common. Both start from a kind of Cartesian position of noting that one is currently conscious and attempting to draw conclusions from that bare fact, though unlike Descartes they raise neither the possibility that the physical world doesn’t exist nor the possibility that God does, which gives them greater traction on the consensus view of reality and the Universe. Both constrain the Universe through the fact that we’re observing it, like the anthropic principle’s claim that the Universe must have certain physical constants and laws in order to produce conscious beings. Both involve vast numbers of items: in the Doomsday Argument, everyone who has ever lived or will ever live; in Quantum Immortality, the number of possible worlds in which one has existed or currently exists. In fact I don’t believe the many worlds are strictly separate, but that’s an argument for another time. Oddly, though, they draw opposite conclusions from their reasoning. The Doomsday Argument concludes that we’re all going to die, but Quantum Immortality decides that each of us is individually, though perhaps unhealthily, immortal, and that our consciousness will never permanently end. Neither is amenable to observational testing: the former can’t be observed by human scientists because it says there won’t be any, and the latter can only be observed by all the lonely people, individually.
Another significant concept linked to both of these is Roko’s Basilisk, which we cannot talk about. A fourth is the Simulation Argument, which has been popular with Elon Musk but doesn’t seem to work. The claim is that we are much more likely to be living in a simulation than in the real world, because any civilisation which lasted long enough and became sufficiently advanced in computing would eventually decide to simulate the world. Those simulated worlds would then simulate other worlds once their own simulations were sophisticated enough to do so, and so forth. This would mean that, of all instances of apparently real worlds, almost all are simulated. Compared to the others, this argument seems almost trivially easy to refute. Firstly, taking it at face value, it implies a cascading tree of simulations, each generation more numerous than the last and also more simplistic, and therefore less realistic, due to lack of computing power; so the fact that our universe is more complex than it might be means we aren’t in the most numerous types of simulation, in which case why would we be in a simulation at all? Secondly, again taking it at face value, the three-body problem and beyond can in most cases eat up all available computing resources. I actually don’t think this second objection works, because in the non-special cases a pseudorandom number generator could simply be substituted and the chances are nobody would be any the wiser, since the motion of a large number of bodies is in practice unpredictable anyway. I suppose this could be tested by looking at one’s own simulations of three-body problems using various pseudorandom generator algorithms, or for that matter true randomness. But beyond all this, the really big assumption seems to be that any civilisation would inevitably end up bothering to simulate the world in the first place.
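The unpredictability doing the work in that objection is sensitive dependence on initial conditions, which is easy to exhibit in a toy simulation. Everything below is invented for illustration: the units, masses, starting positions and softening term are arbitrary, and the integrator is deliberately crude.

```python
import math

G = 1.0     # toy gravitational constant; all units are made up
EPS = 1e-3  # softening term to avoid numerical blow-up at close passes

def accelerations(pos, masses):
    """Pairwise (softened) Newtonian gravity for point masses in the plane."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy + EPS) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def simulate(pos, vel, masses, dt=1e-3, steps=20000):
    """Semi-implicit Euler integration; returns final positions."""
    pos = [p[:] for p in pos]
    vel = [v[:] for v in vel]
    for _ in range(steps):
        acc = accelerations(pos, masses)
        for i in range(len(pos)):
            vel[i][0] += acc[i][0] * dt
            vel[i][1] += acc[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

masses = [1.0, 1.0, 1.0]
start  = [[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]]
vels   = [[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]]

a = simulate(start, vels, masses)

# Nudge one coordinate by one part in a billion and rerun.
start2 = [p[:] for p in start]
start2[2][0] += 1e-9
b = simulate(start2, vels, masses)

divergence = max(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(a, b))
print(f"final separation from a 1e-9 nudge: {divergence:.3g}")
```

A nudge of one part in a billion typically grows to a visible difference in the final configuration, which is why a simulator quietly substituting plausible pseudorandom detail for exact gravitation would be hard to catch out.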
As I’ve said before, apart from anything else they might just be really bad at maths, and with anything else maybe they’ve got more important things to do.
All of these seem to have a self-centred element to them. There’s also an arrogance to them, in that they boldly assert that the person proposing or learning of them has taken everything into consideration and nothing can assail the argument. The Simulation Argument is obviously full of holes, but the holes are the blind spots of a probably autistic sociopath in that the assumption is that just because one person or a group of people working in a particular field would try to do this, thereby incidentally becoming a God to the sims, everyone else would, regardless of their personality or neurodiversity. Quantum Immortality and the Simulation Argument both seem to leave us with “non-player characters”, i.e. zombie shells of people who aren’t really conscious and don’t really matter, so that’s sociopathy and lack of empathy again. They seem to provide an excuse to ignore people’s needs. The Doomsday Argument assumes that humans all contemplate the end of the world or the human race and are all that matters, rather than it being the thought of the end of the world which is significant. There needs to be a cut-off point or certainty that we are the only conscious beings in the Universe for it to work.
In the end, although these arguments are interesting I think they really say more about the people who think of them than the actual world they’re supposed to be applied to. I do think that something will prevent the Artemis Project from succeeding, and that is because of the future galactic civilisation thing, but there could be really positive reasons why it won’t. As for the others, well, they all have a kind of solipsistic and self-centred air to them which it doesn’t seem healthy to entertain. But who knows? Maybe there are other kinds of argument of this nature which do have real predictive power, and if there are that would be fascinating and also useful.
There’s an immediate problem with writing this, because I need to refer to an infohazard without giving enough detail to make it clear what it is. It’s also what’s been called a “zero-infinity problem”: the chances of it being valid are small, but the scale of the bad things which would happen if it were valid is very large. Probability, incidentally, doesn’t work in such a way that the chance of its being valid can meaningfully be measured.
So what I’m going to do is use ROT-13 to obfuscate what this thing is, but before that I should probably tell you what an infohazard is. An infohazard is something which is innately harmful if known or believed. These things can also be memetic, that is, they can spread from mind to mind. Many people would agree that certain forms of fundamentalist Abrahamic religions are infohazards and they are also memetic. Evangelical Christianity which espouses the idea of eternal conscious torment is a potential infohazard if true because believers think God’s telling them to spread it but if someone hears and rejects it, they’ll go to Hell, whereas if they hadn’t heard it in the first place they’d be “judged accordingly”, as the Bible says at some point. That is, they will be judged by God as if they had been presented with the gospel in a manner as sympathetic to their situation and character as possible and reacted accordingly. Since evangelical Christians are likely to believe that God will be better at doing this than they would by their own efforts, this seems to mean that they shouldn’t spread the gospel, but they also read the gospel as telling them to spread it. Other forms of Christianity are of course available.
Outside the realm of religion, if you watched ‘EastEnders’ in the ’80s: when Michelle tells Sharon who her baby’s father is, that’s an infohazard. It’s also possible that what Ian remembers after the car crash is an infohazard. It’s something “Man Was Not Meant To Know”. A related concept is the cognitohazard, and this does exist, at least in a minor form, as the McCollough effect: a pattern of coloured stripes which can mess up your visual perception for months if you look at it for more than a couple of minutes. I think I probably do have the kind of brain on which certain visual stimuli like the McCollough effect have an adverse influence, although they were worse when I was a child. If you’re familiar with the way that looking at stripes makes things look “swirly” afterwards, that’s similar to a cognitohazard. Cognitohazards are basically things which mess up your brain long-term if you encounter them, and for some people, apparently, the thing I’m about to refer to counts as one.
The vast majority of people, when they hear about the thing I’m about to mention, will just shrug it off as silly, and it does indeed stand a good chance of just being silly. There are actually very good rational arguments for the idea that it’s a harmless mind game like a creepypasta, and that’s my judgement as well as the judgement of a lot of other people who call themselves rationalists. When the idea was first proposed on the rationalist website lesswrong.org in 2010, discussion of it was banned after a short period. This was probably a mistake because banning something can draw attention to it and it would be better just to ignore it.
To go off on one briefly, ‘Ghostwatch’ seemed like a harmless hoax when broadcast. This was a mockumentary in a series of clearly fictional programmes, this time about a haunted house and presented by people such as Michael Parkinson who are generally associated with the factual realm and are well-trusted, which was unfortunately taken as gospel, as it were, by certain members of the public, one of whom, Martin Denham, ended up killing himself. For him, ‘Ghostwatch’ was an infohazard and a cognitohazard, though one created in all innocence by the Screen One team. This can’t always be anticipated.
The infohazard posted on LessWrong in 2010 is now known as Roko’s Basilisk. Whereas I think it’s harmless to most people, I don’t think it is to everyone, and whereas it may be silly, that’s only my judgement, and I don’t want anyone to suffer (a phrase I’ll be coming back to) just in case I’m wrong. It definitely did not seem to be harmless to Ziz and her followers, or to certain people unfortunate enough to come into contact with them.
Just to describe ROT-13 then: imagine you have two wheels with the letters of the alphabet written around them, one on top of and smaller than the other. When you start, the two are lined up so that A on the bigger one lines up with A on the smaller, and so on all the way to Z. You spin the smaller wheel so that it’s thirteen places ahead of the bigger one, then you take a text and replace each letter with the corresponding letter on the other wheel, so that A becomes N, M becomes Z and so on. This system is known as ROT-13, and it’s dead easy to reverse and therefore useless for real encryption, although it has mistakenly been used for that in some places, having been given as a teaching example by virtue of being easy to understand. It’s used a lot for spoilers. To decrypt, you just apply it again and the text will become clear. This is what I’m going to use to disguise Roko’s Basilisk. The next paragraph describes it. If you want to read what it says, and it’ll probably be harmless although it might not be, go here and copy-paste the paragraph.
Fhccbfr gurer vf n trarenyyl oraribyrag fhcrevagryyvtrag NV va gur shgher juvpu jvyy orarsvg gur jubyr bs uhznavgl naq znxr gur jbeyq n hgbcvn. Vg pna qb nalguvat. Bar bs gur guvatf vg pna qb vf gb gbegher sbe rgreavgl nalbar jub xabjf nobhg gur cbffvovyvgl bs vg orvat vairagrq naq qbrfa’g znkvzvfr gur cebonovyvgl bs vg orvat perngrq nf fbba nf cbffvoyr, vapyhqvat qrnq crbcyr, jubz vg jvyy erfheerpg va beqre gb qb guvf. Nalbar jub qbrfa’g xabj nobhg vg vf rkrzcg sebz guvf sngr.
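If you’d rather decode that locally than paste it into some website, the wheel procedure described above is just a thirteen-place rotation of each letter, which takes a few lines of Python (demonstrated here on an innocuous word rather than the paragraph above; the decoding is left to you):

```python
def rot13(text: str) -> str:
    # Rotate each letter 13 places around the 26-letter alphabet wheel;
    # since 13 + 13 = 26, applying it twice restores the original text.
    out = []
    for ch in text:
        if "a" <= ch <= "z":
            out.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
        elif "A" <= ch <= "Z":
            out.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
        else:
            out.append(ch)  # punctuation and spaces pass through unchanged
    return "".join(out)

print(rot13("Hello"))         # → Uryyb
print(rot13(rot13("Hello")))  # → Hello
```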
Okay, so that’s it, and at this point I’m reminded of the Charn Bell in ‘The Magician’s Nephew’, which hung next to this inscription:
Make your choice, adventurous Stranger; Strike the bell and bide the danger, Or wonder, till it drives you mad, What would have followed if you had
So if you’re curious, actually don’t worry about it. The chief problem for me now is how to proceed from this point and continue to make any sense, assuming anything I’ve said so far does make sense. But there are certain dangerous ideas which are only dangerous for some people, or dangerous to others if they’re known to some people. Roko’s Basilisk did turn out to be dangerous in the wrong minds.
If you’ve read that, you can see the argument that knowledge of it endangers the knower, but there are also, as I said, strong arguments against this. For instance, it isn’t clear that identity is persistent over time if there’s a break in that identity. Maybe an identical future person isn’t you. It also isn’t clear what the Basilisk would really stand to gain or how it would think. This is what a mere human is thinking about it, and it’s probably wrong. A popular lesswrong argument against it involves a chain of objections which, taken together and multiplied, make the probability of it being valid vanishingly small, one figure quoted being ten thousand sextillion to one against, long scale. Since there are several existential threats to the human species right now which are all quite urgent and clearly, uncontroversially real, it probably doesn’t matter as such. What does unfortunately matter is that some people strongly believe in it.
A bit more background is needed here. The renowned Peter Singer, whose arguments persuaded me and many other people to go vegan, has various other views which are quite thought-provoking. One of them is the idea that most of us are actually pretty evil. The argument he used to support this idea is this: suppose you are wearing expensive clothes and are on your way to an event when you come across a drowning child in a pond near you. You can wade in at no risk to yourself and rescue the child, but it will ruin your clothes. If instead of saving the child you continue to your event and tell anyone there the anecdote, most people are likely to be rightly shocked and horrified by your failure to act. So far, this is uncontroversial. However, consider this. There are children all over the world whose deaths you can prevent in one way or another but which you choose not to do anything about for your own status or comfort. Therefore, you’re evil. The mere distance and lack of acquaintance with these children, or for that matter adults in the same kind of situation, are not considered morally relevant.
This, by the way, is related to my “doing the hoovering in Dunedin” thought experiment.
One thing which we can be fairly confident about is that not everyone is about to give all their disposable income to charities saving children in the Third World. However, it has happened that the occasional very rich person has given a lot of their money away to do this kind of thing, the obvious example being Bill and Melinda Gates. This is part of a movement, associated with rationalism, known as “Effective Altruism”. Ideas include donating to carefully chosen charities and following careers which are likely to further the greater good, and this “greater good” is very much thought of in utilitarian terms – “actions are right in proportion as they tend to promote the greatest happiness of the greatest number”. There’s also negative utilitarianism, which aims to minimise suffering rather than maximise happiness. This sounds relatively okay, if ineffective. However, it can also be coupled with “longtermism”, which likewise sounds okay: it is, rather obviously, the idea that we should, morally speaking, consider things in the long term rather than the immediate future. In utilitarian terms, this could mean ensuring that as many people as possible exist over the whole span of human history, living lives as happy, or at least as devoid of suffering, as possible. A popular way of understanding how to make this happen is to invent a superintelligent AI which can do this for us better than we ever could, because, y’know, it’s superintelligent isn’t it? Someone has calculated, I’m sure bogusly, that a nonillion (long scale) humans will ever exist, so they are the best object of our moral focus. Next to them, everyone living today is insignificant. There is, by the way, a cryptocurrency fraud issue connected to all this, but that’s not what I’m talking about here.
Roko’s Basilisk is an example of Timeless Decision Theory, which is the idea that one should act as if one is determining the result of the decision one is making. Effectively this means that the present can influence the past. Significantly, this is linked to Bayesian probability, which is counterintuitive to most people but, and this is of some concern to me, not to myself. TDT is quite plausibly deeply flawed, as is utilitarianism, negative or otherwise, but this is not the point. The point is their draw on certain types of people, and the disturbing fact that Bayesian probability is intuitive to me suggests that I may be such a person.
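As an aside on the Bayesian point: the standard illustration of why Bayesian reasoning is counterintuitive to most people is the base-rate problem. Here’s a sketch with made-up numbers, nothing to do with the Basilisk itself:

```python
def posterior(prior: float, sensitivity: float, false_positive_rate: float) -> float:
    """Bayes' theorem: probability of having a condition given a positive test."""
    true_pos = prior * sensitivity                 # really have it and test positive
    false_pos = (1 - prior) * false_positive_rate  # don't have it but test positive anyway
    return true_pos / (true_pos + false_pos)

# A test which is right 99% of the time, for a condition affecting 1 in 1000 people:
p = posterior(prior=0.001, sensitivity=0.99, false_positive_rate=0.01)
print(round(p, 3))  # -> 0.09, i.e. a positive result still means only about a 9% chance
```

Most people intuit that a 99%-accurate test makes the answer 99%, which is what makes the rarity of the condition so easy to overlook.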
There’s a lot of this to get through before I can reach clarity here, and I’m now going to mention something else. There is out there an enormous piece of Harry Potter fan fiction, written in the early 2010s, which comes in here. It’s 660 000 words long across 122 chapters, is called ‘Harry Potter And The Methods Of Rationality’, and envisages Aunt Petunia marrying a philosophy professor and Harry Potter entering Ravenclaw instead of Gryffindor. Its main aim is to introduce rationalism as a power greater than magic even in the Wizarding World, but one which, unlike that magic, actually exists in the real world and can be applied. At more than half the length of the entire original Harry Potter series, it ended up being very influential on young people attracted to the rationalist community, which is partly a sign of the huge success Harry Potter had at the time. Clearly J K Rowling is not responsible for what grew out of the movement, and it’s worth noting this in the light of one of its other characteristics.
Who, then, is Ziz, and why is she associated with this movement? Well, Ziz is one of a number of people linked to Zizianism, and the movement has been named after her, but not by her. This should be taken in proportion: their divergence from the main rationalist community should be borne in mind, just as a small Christian sect shouldn’t be seen as representative of the mainstream Church. Ziz and some of the others are why I find the movement particularly interesting, because she and they have been vegan trans anarchists strongly influenced by analytical philosophy. This is exactly who I am. I am all of these things. It’s also why I mention the fact that J K Rowling has indirectly influenced them, because just as she can’t be held responsible for their actions, nor can their being trans. Being vegan, on the other hand, is significant.
Ziz LaSota qualified in computer engineering at the University of Alaska, had an internship at NASA, dropped out of a Masters at the University of Illinois and has otherwise been underemployed. At some point she became committed to the Effective Altruism (EA) movement and moved to California to get more involved. Like many other people who have moved to some parts of that state, she first had to resort to sharing a house, which she moved out of due to transphobia and lack of money, and then to sharing a boat with other EA people. Her plan was then to set up a fleet of boats to promote her ideas. One big issue she had with the rest of EA was their exclusive focus on human well-being rather than that of other species, and she also felt the pull of Roko’s Basilisk. Hence her aim, and that of other Zizians, was to promote the existence of an AI working for the good of all animals rather than simply humans. She also claimed that EA was institutionally transphobic, although other trans members of EA deny this is so. She did not transition medically or surgically.
Owing to their general commitment to a game theory-like and TDT approach to the world, Zizians believe in immediate escalation in response to opposition, so they believe in murdering anyone who gets in their way or otherwise confronts them. This policy is supposed to prevent anyone from attacking them in advance because they know they’ll lose. It actually goes further than that but I don’t want to go there. Ziz concentrated on recruiting particularly intelligent autistic trans women to her cause. Most of that makes sense, and by that I don’t mean it’s a good idea but that it’s consistent with her world view. Elon Musk actually seems to have a somewhat similar idea, in that he appears to have played with the concept of banning neurotypical people from voting on the grounds that they don’t make rational decisions, but I think this is probably connected to his idea that empathy harms civilisations. This may also be connected to Ziz’s worldview, because she also seems to believe that sociopathy needs to be encouraged for the future’s sake. The way she planned to do this was odd, needless to say.
Zizians are very keen on the idea that the separate hemispheres of the brain express different personalities. I would “gently” suggest that this should be subject to the usual scepticism about the nature of the bicameral mind, because really this seems very much like pop psychology to me. Anyway, they – actually I need to clear something up here: Zizian blogs are very long and verbose, and it’s difficult to maintain the attention needed to follow everything that’s going on in them, so although I may be getting the details wrong here, it’s actually quite hard not to get them wrong and it may even be inadvisable to try. Hence what I’m about to say may be inaccurate or incorrect, but this is how I understand it. Their understanding of the brain is that the two hemispheres express different personalities, and one hemisphere is usually “better” than the other. There are also “double-good” people out there, both of whose hemispheres are effectively altruistic. Unsurprisingly, considering that for some reason a lot of them are trans, they also believe it’s possible for each hemisphere to be gendered differently, and at this point I’m going to have to leap in. The differences between female and male brains are much less significant than the differences between human brains generally, but one frequent difference is that the corpus callosum, i.e. the bundle of nerve fibres linking the two hemispheres, has more fibres in it on average in female brains, and another difference arises from the karyotype – the chromosomes. In bodies with two X chromosomes there’s a mosaic of cell lines, some of which express the genes on one X chromosome and some of which express those on the other. The X chromosome is one of the larger chromosomes and carries around a thousand genes, meaning that many of them are going to be involved in brain structure and function.
This means that the brains of people with more than one X chromosome consist of mosaics of nerve cells expressing different traits working together, whereas those of people with only one X chromosome don’t. Incidentally, the reason I’m saying “more than one X chromosome” is nothing to do with not being transphobic, because it means that people with multiple X chromosomes, XY cis women, women with Turner’s Syndrome, people with Klinefelter’s Syndrome and probably also women with Fragile X will all have brains of an atypical kind for their gross external phenotypical sex. Taking all of this together, the Zizian view that hemispheres can be gendered differently at least seems to reflect ignorance of basic brain anatomy and physiology. In particular, the corpus callosum issue seems to be completely ignored. It might still work if gender identity is socially constructed in a significantly internalised way, but hemispheres operating independently is usually constructed socially as a disability. If the corpus callosum itself is a significant factor, it seems to be complete nonsense.
This is of course partly me caving in to the urge to go on and on, but it isn’t just that. My point is that in this particular instance the Zizians seem to have adopted a naïve approach to the nature of the brain which is based on popular misconceptions about brain anatomy and physiology and is, as the phrase has it, “not even wrong” rather than “less wrong”. I’ll come back to this.
A long time ago, I met a group of people who were avoiding sleep and attempting to deactivate the right hemispheres of their brains because they hypothesised that the Fall in Christian terms involved the absence of monoamine oxidase inhibitors from the diet which led to the left hemisphere becoming dominant and controlling the right hemisphere through fear. I don’t subscribe to that point of view but I do think the influence of nutrition on brain development and function is important – that’s such a given that it’s a platitude. They were planning to attempt to anaesthetise their right hemispheres. I don’t know what became of them, but their approach is strongly reminiscent of the Zizian approach to this – “unihemispheric sleep”. Dolphins are well-known for sleeping with half of their brains at a time because otherwise they’d drown. The Zizians try to induce the left and right hemispheres of their brains to sleep at different times in a similar way. This idea crops up in at least two places in SF, and probably others. Iain M Banks has a character in the ‘Culture’ novels who does it, and the Space Marines in ‘Warhammer 40000’ also have this. This makes me wonder if they’ve just plucked ideas out of sci-fi rather than actually properly researched them in a sensible and rigorous way, and in fact I think I can see a theme emerging here. What it means in immediate practical terms is that there was, and possibly still is but I’ve lost track, a small, isolated community of individuals who were sleep-deprived and only talking among themselves. This is of course what happens in cults.
Another part of their approach is that they aim to find people who are unusually good in their estimation, recruit them to their organisation and turn them into sociopaths for the sake of the long-term common good. This is because of, of all things, ‘The Office’ sitcom! And yes, this all seems to be becoming increasingly bizarre. They see the character of David Brent as the ideal organisational leader, and believe that the most effective altruistic organisation would be led by such people. So we’ve got some Warhammer 40K stuff, Harry Potter, Peter Singer and ‘The Office’ all mixed together with rationalism, game theory, utilitarianism and Timeless Decision Theory. There’s nothing wrong with being eclectic of course, but from the outside this really seems to be a rather arbitrary collection of ideas which don’t really belong together, or if they do, to have been assembled and interconnected in a manner which is not very logical at all.
There’s also some stuff about being a “vegan sith lord” which I don’t understand because I’ve always considered Star Wars to be an irredeemable pile of garbage hardly worth my time to get my head round.
So anyway, then they went out and murdered people. They murdered their landlord, or rather their “sealord” (unless this is a different landlord) because they didn’t think it was ethical to pay him rent and he was likely to prevent them from pursuing their end. I think this was probably because the rent was thought to be better invested in developing a superintelligent AI to take over the world than paying their landlord, Curtis Lind. The fact that he was murdered, after a previous attempted murder, is partly to do with this, partly to do with the idea that property is theft and partly linked to the idea of immediate escalation of violence as a deterrent, which for them is presumably a game theory thing. Some time before this, I think, Ziz faked her death, which led to a very resource-intensive search of San Francisco Bay. The parents of one Zizian were also murdered but it isn’t clear if this was connected to them. If it was, it was probably for inheritance purposes, again to invest in the development of a superintelligent AI, but the law there (and I hope everywhere except maybe for euthanasia purposes) prevents heirs from inheriting from someone they’ve killed. Regarding the rent situation, the line of thought seems to have been that the landlord needed to be killed because he was having them evicted, and this would ultimately involve police officers who would probably kill them. I guess TDT sees this as pre-emptive self-defence. They then went on the run and attempted to cross into Canada at Vermont, at which point there was a shoot-out with border guards, one of whom was killed along with one of the Zizians. I’m going to ignore the details of the news reports I’ve found on this because of certain inaccuracies, but they do seem to agree with this account of the events.
So that didn’t end well, but this seems particularly germane to me because of who I am. It is very clearly far away from my approach to life, but also very close. Discussing it is a little hampered by the fact that I won’t describe Roko’s Basilisk, and also by the deluge of writing produced by these people, which has itself been alleged to be potentially harmful. They use a lot of specific jargon which makes little sense to most outsiders, and somewhere there’s a glossary which is itself considered an infohazard, as reading it tends to convince certain people they’re right. There are some notable parallels with a certain notorious and litigious cult, for instance the use of boats and some overlapping terminology such as “tech” to describe activities aimed at upgrading consciousness. It’s also possible that if rental prices in the area hadn’t been so high, a lot of this wouldn’t’ve happened because they’d’ve been less isolated, although it’s entirely feasible to be lonely in a crowd of course.
When you don’t know how to do something, internet access in particular allows you to “dig a tunnel” down to the knowledge and skill necessary to find out. Examples of this are McKenzie friends, people without legal training who are nonetheless able to accompany accused people to court and advise them, sometimes successfully, because they have some legal knowledge, and phrase books, where you learn the minimum needed to get by in a foreign language rather than the whole language. Another example would be what I’ve just done, which is to look up how to use our bread machine to make pizza dough, or you might want to find out how to change a car battery. The internet isn’t necessary to take this approach but it can be very handy. What it doesn’t bring is wide-ranging knowledge and experience. We are of course currently learning Gàidhlig, so for example I might be able to ask someone “where do you stay?” in Gàidhlig, bearing in mind things like pre-aspiration of stops and a good approximation of slender T and R, and with my lack of experience (nearly four dozen years with very little progress on account of living in Kent followed by the English Midlands), I’ll probably end up uttering something which can just about be made out by a native speaker who will then answer me in English, which is just as well because I doubt I’d be able to understand her answer. This will, I hope, gradually improve, and it’s already better than fishing the necessary question out of Omniglot or typing it into Google Translate, or, heaven forfend, actually looking it up in a paper phrasebook!
Is this relevant? Yes! Because what the Zizians have done is tunnel into the corpus of academic philosophy and dig out the narrow range of things they think are relevant to their experience, along with a load of other stuff like Harry Potter, Roko’s Basilisk and so forth, without a broad experience of the actual discipline of the subject, and then discuss it in a tiny clique unconnected to the academic community, and they’ve done so with misplaced bravado which has led them to certain conclusions. To use a somewhat analogous example, from the ’70s on there were attempts to help chimps and gorillas learn American Sign Language. It’s a long story, and the human effort to do this only achieved illusory success, for various reasons. Although this did change later, it began with the attitude that signing was a simple system of communication which a hearing person who had used exclusively spoken language for decades could easily pick up (and it was usually a man; I’m not using this pronoun wantonly here) without even including any native signers in the process, and then “teach” to a chimpanzee, secure in the belief that he’d mastered it because it was apparently so simple. And now, thanks to the Zizians, a number of people have been murdered or maimed and various other people have killed themselves after concluding that their lives were of “negative value”. The reason for this was that they were not philosophers. Ziz’s degree is in computer engineering, which is connected to analytical philosophy but isn’t philosophy. As far as I know, few to none of the Zizians are philosophers. If they were, they would’ve been exposed to various schools of thought, a wider range of ideas, and alternative ways of looking at ethics beyond just plain, simplistic old utilitarianism from over two centuries ago and some cobbled-together ideas from a newsgroup with no professional philosophers in it.
This is what happens when the education system orients itself mainly towards training for scientific and, well, I would say vocational work, except that a lot of paid work nowadays is hardly a vocation, and ignores the intrinsic value of the humanities as well as their value when applied to the workplace. They thought they could do without philosophy, and the result is that a crass group of over-confident individuals goes out and murders people, or its members kill themselves, in pursuit of an aim which makes zero sense to almost anyone. It does make sense to me, and I don’t want to be arrogant here, but I do have two degrees in philosophy and kept my hand in afterwards, and I’m aware even so of my relative isolation from the community; despite that, and despite these people being superficially almost me, i.e. vegan trans women on the autistic spectrum, their missteps are as plain to me as they are to the general public.
The value of philosophy has been ignored for so long that there are maggots burrowing into it and infesting its corpse. That happens to be my area, but similar problems arise from the neglect of the other humanities, such as history and literature. Neither is my forte, but even I know the adage that those who don’t learn from history are condemned to repeat it, and I’m aware of the value of literature to emotional intelligence and to awareness of the techniques of persuasion used in propaganda. What the Zizians have done is just the tip of the iceberg here, and there you go, if I had a degree in English Literature I probably wouldn’t’ve used that cliché but come up with a more effective and original turn of phrase of my own, but instead of studying that I studied a different and equally valuable humanities subject, and when you do that you tend to lack the time to hone your skills in other areas. The Zizians are absolutely wrong-doers and are substantially responsible for their actions if anyone is, and that’s a philosophical question too, but they’re not alone, and part of the guilt can be attributed to those who denigrated the liberal arts, underfunded them for decades and set up a system discouraging people from reading such degrees. This is why I haven’t been living on a houseboat and stabbing my landlord and they have.
A long time ago, I did postgraduate philosophy at the University of Warwick. I’ve talked before about how disillusioning it was and also about my personal tutor Nick Land, but like having connections with Boris, this situation has now become more relevant to current affairs, because Land was one of the founders of the Dark Enlightenment, and that movement is central to much of what’s currently being attempted in the US. Additionally, technofeudalism may be relevant. I’m going to try to go into this without injecting personal bias, except to say this: you don’t negotiate with people like this. As a near-pacifist, I wouldn’t sanction violence against them either, but they do have to be defeated or overcome, simply because they get in the way of addressing the climate emergency, and anyone who does that is acting contrary to the interests of the biosphere and the human race regardless of their political complexion. It may of course be that certain other views entail denialism, in which case those views need to end, but with an open mind and a degree of sarcasm, maybe the Nazis were absolutely fine in that respect.
At the moment, then, we have the noisy distracting guy in front doing all the outrageous stuff, who will be there just as long as the techbros need him to be, and then we have the techbros themselves, mainly Elon Musk. Conspiracy theories are all the rage, but this is not the same as simply reporting, without bias, what Musk believes politically, and his views and those of many others in Silicon Valley are based on those of Curtis Yarvin and Nick Land. I should mention that Yarvin has previously used the moniker “Mencius Moldbug” online, and that he’s an accelerationist. Yarvin’s beliefs have been summarised thus:
1. Campaign on Autocracy: Promote centralized, strong leadership.
2. Purge the Bureaucracy: Remove mid-level officials to streamline government.
3. Ignore the Courts: Undermine judicial authority.
4. Co-opt Congress: Align legislative bodies with the new regime.
5. Centralize Police and Powers: Consolidate law enforcement under federal control.
6. Shut Down Elite Media and Academia: Dismantle institutions that challenge the new order.
7. Mobilize Public Support: Rally the people for the regime.
8. Introduce Technocratic Governance: Replace politics with corporate management.
Yarvin believes in an accountable monarchy. He’s an authoritarian libertarian, which of course sounds contradictory. One of his big ideas is the Cathedral, which is the academic and media élite, in fact the same élite Elon Musk refers to. In financial terms, Musk is of course the élite, but that’s not what he means. He means that people in the academic-media complex, as it were, determine public opinion and march in step, agreeing with each other and not allowing any contrary opinions. This is one reason for Trump’s hostility to the Department of Education and state-funded scientific research: he sees them as promoting a single set of beliefs without admitting alternative views of any kind. Yarvin, for example, believes that White people are genetically more intelligent than Black people, which is obviously a view not expected to be entertained in academia.
Additionally, being in favour of authoritarianism, Yarvin is very keen on Singapore as a successful authoritarian state. William Gibson, inventor of the word “cyberspace”, has described Singapore as “Disneyland with the death penalty”. Obviously Gibson is not a fan. Yarvin was a de facto guest of honour at Trump’s inauguration in 2025, so I don’t think there’s much doubt that his ideas have been very influential. Trump is king of course, but he may also be a figurehead monarch.
As for Nick Land, well, he’s more in the background. He’s been called the godfather of accelerationism. Back in the days when I knew him, he had the reputation of being left wing but was very much in favour of élitism. I’ve written about him on here before too (same link as before incidentally) because someone on YouTube was curious about my experience of him. He has many fans. I don’t understand why anyone would think he was left wing. One of the problems with engaging with his work is that he doesn’t distinguish between theory and fiction, so you never know if he’s serious. The same accusation has been levelled at me too. Hyperstitions are an important concept in his work: he describes these as ideas that bring about their own reality. This is interestingly similar to the idea that one should take Trump seriously but not literally – the reactions to his behaviour can kind of make it real, if that makes sense. It seems fair to say that both Land and Yarvin are strongly opposed to egalitarianism, which explains the rejection of DEI. Land believes that democracy restricts freedom and accountability; he would want a president to be a kind of CEO of a corporation rather than someone elected. In their opinion, it’s straight White cis able-bodied men who are disadvantaged, and this needs to be remedied because they’re better – more competent, more intelligent, harder-working and so on.
I’m saying all this simply as a report of what’s going on. Yarvin’s views are being enacted through Musk’s DOGE. I do, however, want to mention a couple of puzzling aspects to this. At least in Britain, Conservatism has traditionally tended to see itself as a political philosophy without an ideology. All of this looks to me like an ideology, i.e. a belief system to do with political power. Also, although it’s substantially about suspicion of the Cathedral, it seems to be part of it. Nick was my personal tutor at a Russell Group university. It was of course “Margaret Thatcher’s Favourite University™”, and its very existence seems to refute the Dark Enlightenment position. I once asked him if he thought of Roger Scruton as a philosopher and he denied that he was, because Nick was very keen on Nietzsche, whose version of a philosopher was someone who would probably end up in jail or a mental asylum, and to be honest I would concur that someone who follows their principles as a philosopher would probably do that. For instance, suppose you are wedded to solipsism, the idea that you are the only person who exists. If you take this seriously, you may end up getting sectioned or feel very lonely. If you’re a moral sceptic, i.e. you believe there is no right or wrong, in society’s eyes that makes you a sociopath or psychopath, and if you aren’t also prudently restrained, you will again end up somewhere secure, possibly a coffin, where you can’t continue to pursue your diet of people’s brains. So yes, he’s absolutely right in that respect: a philosopher with the courage of their convictions is not going to be lecturing at Birkbeck College and writing books about Immanuel Kant, no matter how snarky the introductions are about Kaliningrad. But then this raises the problem of why Nick was even at Warwick, or it would do if he wasn’t so nihilist. Here’s an interesting bit about him.
In a sense he was a proper philosopher though, but then that’s the problem because that makes him part of the Cathedral.
Okay, so there’s that. There’s also this.
Friedrich Hayek famously wrote the hugely influential book ‘The Road To Serfdom’, which majorly influenced Margaret Thatcher and others. In it, he claimed that centralised government planning is dangerous because it leads to tyranny. This political philosophy has dominated the human world since the late 1970s CE, although the book was actually published in 1944. It’s basically Thatcherism and Reaganomics, but the reason it’s relevant right now is the word “serfdom”. A serf is an adult labourer bound by feudalism to work on the estate of the lord of the manor, and that’s what we’re drifting towards right now. Let me be a lot more specific about serfdom, or rather feudalism, which Hayek saw neoliberal economics as protecting us from. Feudalism was the economic system arising in Europe in the centuries following the fall of the Roman Empire, and it worked as follows. There were two types: one was to do with landholding and the other with mutual protection. The Roman latifundia were large agricultural estates relying on slavery for production. Since the nobles couldn’t manage the land themselves, they delegated it to others in return for a beneficium, generally the right to work the land in exchange for military service. Because Rome fell, the emperor could no longer guarantee safety to his subjects and it became necessary to adopt a patronage system, whereby nobles organised some faithful retainers around themselves in return for taking care of them. This was similar to the Germanic arrangement already in place, where the chiefs chose outstanding warriors who swore loyalty to them and were fed and provided with arms for doing so. This situation was how things were before full feudalism swang (nope, not a spelling mistake!) into action a few centuries later, which I’ll tell you about in a minute.
Just as an aside then, feudalism fascinates me for two reasons. One is that capitalism replaced it and is better than it: capitalism is progress compared to feudalism, so to that extent it’s worth celebrating. The other is that it isn’t capitalism, making it an example of an alternative system worth investigating and studying to show that capitalism isn’t the only way of doing things. Having said that, I’ve not spent much time looking at it. Back to my main point then.
This was the Carolingian Empire:
This is the empire over which Charlemagne ruled after he was crowned emperor in Rome in order to demonstrate continuity with the Western Roman Empire. It was a bit of an on-again/off-again state of affairs, which led to the emperors being forced to recognise fiefdoms as hereditary and exempt from royal interference. At the same time, mounted knights were increasingly being granted land. Meanwhile, Muslim control of the Mediterranean made it impossible to trade widely beyond Europe and the region had to become self-sufficient. This made the rural economy more important than cities and towns, and manors became small communities able to make and do everything they needed, meaning that money ceased to circulate as it had before. Small landowners almost disappeared, the landed aristocracy became independent, there were the renowned mounted knights and there was no more Mediterranean commerce. Hence feudalism, which had arisen by 900: political authority wielded by landed nobility, theoretically ruled by the king but actually able to take the law into their own hands most of the time. The king kept large areas of land for personal use, giving the rest to the highest nobles, and so on down to the knights, who had just enough land to support themselves and their horses. A serf was tied to the manor and couldn’t leave without the lord’s consent. It was hereditary – your children weren’t going to escape it and there was basically no such thing as upward mobility. The villeins could buy the right to leave the manor by paying the lord, or could send their sons to learn trades or enter the Church. Peasants had to work several days a week for free on the lord’s land and pay him in produce, or money if available. This allowed the rule of law to continue. Although serfs couldn’t be bought and sold individually, they could be exchanged along with the land.
This may surprise you, but to some extent I actually agree that this is so, although I also think voting for a government which does this kind of thing is actually consent to its doing it. However, it’s also true – and this is my opinion again, and very far from Hayek’s – that the same applies to profit. Having said that, I really should restrain myself from expressing my own opinions in this way. Suffice it to say that the Musk situation is the end of a rather long process examined by the Greek economist Γιάνης Βαρουφάκης (Yanis Varoufakis), who was Greek minister of finance in 2015 when the global economy decided to ignore the colossal contribution the Greek people have made to the modern world, which is so extensive that it’s impossible for them to owe us anything ever (sorry, personal opinion again – I must restrain myself), and to declare the Greek economy bankrupt. Βαρουφάκης claims that we are no longer operating under a capitalist system but under what he calls a technofeudalist one. What he means by this can be partly illustrated by the online world as it’s developed since 1989. In that year, Tim Berners-Lee invented the World Wide Web, which he gave to the world for free because he considered it too important to be paid for. This effectively created a new “space”, which at that point was described as being like the Wild West. That is, it was common ground. It was as if a vast new continent had been discovered to which anyone could lay claim, but which was substantially shared. As time went by, though, users tended to be corralled into proprietary spaces, particularly social media such as Facebook, Twitter and the rest, which are of course free, but only in monetary terms. It’s been said that if you’re getting something for free, such as the food eaten by serfs which they hadn’t paid for, it’s because it’s you who are being sold. This is clearly true of social media and mobile ’phone use, because by using either we tend to provide lots of data about our movements, purchasing habits, tastes, political opinions and relationships, among other things, and these are of course highly valuable to the likes of advertisers, lobby groups and political parties in all sorts of ways. And it goes further than that. There are lots of things we no longer own. For instance, we might listen to music on iTunes or Spotify, read ebooks on Kindles or watch TV on streaming services.
If you actually look at the terms and conditions of the companies providing us with those services, you’ll find that you don’t actually own any of that stuff, and nowadays physical media have practically disappeared. Amazon more generally is another example. According to the technofeudalist view, Amazon exists by charging rent, not by selling things. This is true of Prime and Kindles of course, but also of the general marketplace there. Everyone goes to Amazon and it’s hard to go anywhere else. Βαρουφάκης compares it to a town where all the shops belong to Jeff Bezos, which is of course also known as a company town, but it’s also similar to the situation of the serfs, who couldn’t get hold of anything which wasn’t made on the estate or brought into it by someone. A rather important side-point is that other items are now rented rather than bought. This arrangement exists with cars (and I don’t mean hiring a vehicle), leasehold property and software, notably Microsoft Office. It’s better for the corporations and worse for the consumer, because we constantly lose money and they constantly gain it. There was a phrase used a few years ago by the World Economic Forum – “You will own nothing and be happy” – which led to panic among conspiracy hypothesists who equated it with communism. In fact it refers to this technofeudal situation. We own nothing, or rather less, because the corporations own it, and to some extent us, via social media and other apps and services. We have governance by the few over the many, and that’s pretty close to feudalism. Moreover, it wasn’t what Hayek feared – state control and what he probably thought of as creeping socialism and communism – which achieved this. It was capitalism.
Βαρουφάκης in fact claims that capitalism is over, so to some extent Marx was right; the trouble is that it wasn’t replaced by communism but by a return to feudalism, which may become ever more pronounced over the few years remaining before we go extinct. Another aspect of this is that although Βαρουφάκης is a Marxist, another author making practically the same claim, Roger McNamee, is a venture capitalist who provided much of the original funding to set up Facebook. The claim is therefore hardly even a political position, just a pretty much neutral description of what’s happened.
To conclude then, it’s no secret that Musk and the other billionaires are inspired by and are following the plans of the accelerationists, and in any case capitalism may well be over now, replaced by its predecessor. Please note also that, most of the time, I’m trying not to insert my own political opinions about this so much as to state what we can note, observe and think, and what others have thought about the situation. I obviously do have positions on all this stuff, but all I’m doing right now is presenting the facts.
I’ve decided to try writing more spontaneously rather than delving a lot into sources of information like I have been recently. It’s good exercise for the memory and makes for a livelier style. Maybe it’ll also end up being less accurate, as I’m drawing on stuff from the 1980s here.
The other day someone posted a meme about humans being cute for various reasons. In general it was a good meme, but one probable inaccuracy jumped out at me. It was something like “although they’re not aquatic or amphibious, humans flock to be near water just for the pleasure of splashing about in it”. Fair enough as a bit of a meme I suppose, but probably wrong, because some people think we were once “aquatic apes” as the phrase has it, notably Elaine Morgan and manwatcher Desmond Morris. That’s in quotes because the idea isn’t that we used to be like dolphins, living in the sea full-time, but amphibious, living on beaches and in the sea, perhaps foraging in both and escaping from predators by wading into the water. It’s also suggested that the surviving species of elephant have a similar history. This is in contrast to the more usual savannah theory, which claims that we are descended from an ape who had to adapt to the veldt when the African rainforests dwindled due to the world drying up. I’m going to talk about this bit too.
During the Miocene there were a huge number of different ape species. This has led to the human evolutionary “tree” being described as more like a bush, partly because some of them also show parallel evolution, becoming steadily more like hominins while in fact being our sister groups. The world was wetter at the time because the Tethys Ocean, which encircled the equator, was able to flow all the way round, meaning there was no permanent ice in the Arctic and therefore more water available to the planet’s weather systems. This in turn meant larger rainforests. Then North and South America collided, forming the Isthmus of Panama, which caused the warm current to swirl round the Gulf of Mexico and head north, where precipitation increased and snow and ice built up, increasing the planet’s reflectivity and cooling it in a vicious circle which also dried it. Hence the rainforests shrank and some apes were forced onto the savannah, where according to Elaine Morgan they then died out, but according to other people they evolved into humans. Morgan managed to resolve this problem to her own satisfaction by suggesting that our ancestors survived by becoming amphibious and living on beaches and in the sea.
This is the evidence cited to support this claim:
We have a diving reflex. If we are for some reason submerged, our hearts slow down.
We are largely hairless. The body hair we have follows a streamlining pattern.
We have more breath control than other apes have. Think of the hooting made by chimps. They do that because they can’t control their respiration.
We have a hymen which protects us from sand entering our reproductive systems before penis in vagina sex takes place.
Penis in vagina sex usually occurs face to face as in other aquatic mammals.
The female orgasm. I can’t remember the argument for this.
Large amounts of adipose tissue in breasts, enabling them to float and suckle young more easily in water.
Long scalp hair which babies can cling to in the water.
Downward-facing nostrils protecting us from accidentally inhaling water.
Bipedalism is easier in water and is adopted by other apes when they are wading through water and may therefore have first evolved due to this lifestyle.
There may be other reasons but those are the ones I can remember and as I stated earlier I’m trying to research less and type more spontaneously. There are also a number of other observations which don’t pertain directly to the human body:
There’s a gap in the hominin fossil record of several million years. I can’t remember where this gap is supposed to be or whether it’s still there, since Morgan’s ‘The Descent Of Woman’ was published in 1972 CE.
All Afrikan primates except humans have a baboon-derived retrovirus written into their genomes; no non-Afrikan primates do. This suggests that our ancestors were, for whatever reason, isolated from other primates when this happened.
The oldest hominin remains, including tools, are found in Ethiopia and move south into the Rift Valley with time, suggesting that we spread from the Gulf of Aden southwards rather than from the Congo.
The first human stone tools are made from pebbles, suggesting that the technology arose first on beaches.
There’s also a side argument that succeeds or fails separately from the aquatic ape hypothesis, that elephants also had an amphibious phase in their evolution due to several features they have in common with humans but not mammoths.
One reason Morgan made this claim was that she believed palaeoanthropology focussed too much on male bodies and that if female bodies became the focus a number of traits would be easier to explain, namely the ones listed above. Humans considered as female make much more sense as amphibious life forms than humans considered as male savannah-dwellers. There is, in other words, a strong feminist motivation in her acceptance of this hypothesis, or conversely, a strong patriarchal motivation in the establishment’s rejection of it. Now to me the interesting aspect of all this is not directly whether the hypothesis is well-corroborated but what it says about the scientific establishment and academic thought and research, particularly from a pro-feminist perspective. It’s also interesting to contemplate how I perceive it.
The hypothesis is generally viewed as pseudoscientific and thoroughly refuted, but it’s recognised that it still surfaces from time to time and there is some endorsement from celebrity science popularisers such as Desmond Morris and David Attenborough. One issue with it is that because almost none of it refers to bones and teeth, fossilised hominin remains are hard to assess on this basis. It can be asserted that we have a hymen, breasts, approach hairlessness and so forth, but none of that has to do with the skeleton. Against this, and this is just my opinion, is the fact that adaptations to bipedalism are reflected in bones and joints. However, it is true that the fossil record is difficult to use to back this up, and this highlights a general problem with the reconstruction of vertebrates from most fossils: soft parts are rarely preserved compared to hard parts. This applies particularly to non-avian dinosaurs, which, being closely related to birds, might be expected to have structures like wattles and combs, but it’s unlikely we’ll ever know unless we find alien video recordings of them or something. Pebbles, on the other hand, clearly are preserved, and those are again hard “parts”, so the question is: does this hypothesis really depend only on soft parts? It seems it doesn’t entirely.
I’m not a scientist. I have a fair bit of scientific knowledge and am aware of the scientific method, but I’ve done little research of my own since I finished A-level biology. Not none, because some of my herbalism-related CPD involved original quantitative research, but I’m not a palaeoanthropologist by any means. Gutsick Gibbon, however, is, and it seems fair to bow to her superior knowledge and experience. The issue is with the source. Elaine Morgan’s perspective on the issue was informed by her gender and allegiance to feminism: another of her books is ‘The Descent Of Woman’, which emphasises the increased explanatory power of a model of evolution which sets female bodies as the default rather than male. There’s a strong emphasis on “Man The Hunter” in traditional palaeoanthropology, which portrays men as going out to hunt dangerous prey and bringing it home to the cave while women stay in it, do a bit of foraging and take care of the children, and which also claims that most of the nutritional value of the food they ate was in animals rather than plants. Apparently, though, this is not reflected in hunter-gatherer societies as observed by Western anthropologists. The trouble is that we tend to project our own ideas onto the past, and that hunter-gatherer societies today, rather than being remnants of the Stone Age, have just as long a history as Western civilisation and its predecessors. The other aspect of this is that Morgan was probably surrounded by men in her profession and field, and therefore that she and her opinions were likely to be at a disadvantage, which led to more people working to refute her hypothesis unsympathetically. This is why I find Gutsick Gibbon’s rejection of it interesting, as she doesn’t seem to be motivated in such a way. However, it may also be that she’s influenced by the general dismissal of the idea by her colleagues and mentors. All of this brings up the question of how scientific theories change.
All of this is therefore about bowing to the opinions of experts who can fairly be imagined not to be biassed in unhelpful ways. There’s a degree of trust in professionals there which may have been eroded in recent years, leading to various beliefs being accepted which would previously have been ignored. To my mind, it goes hand in hand with a lack of deference, which is often a good thing. For instance, nowadays there seems to be either more awareness of corruption in authority or more actual corruption, and where it’s detected accurately, this must surely be a good thing. However, this attitude of doubt may itself be dubious. An opinion is not correct, or even worth considering, in itself when set against other, more learnèd opinions. Experience from outside a field may not be valid within it. OFSTED comes to mind here. Why should outsiders be listened to or taken seriously by educationalists and teachers with years or decades of experience?
Also, sometimes a particular characteristic can give rise to excessive sympathy. For instance, there is a Black supremacist group which maintains, among other things, that melanin alone is the seat of consciousness and therefore that only Black people are conscious. As a White person, I know this isn’t true. They also believe that a Black scientist working thousands of years ago invented the White race through genetic manipulation. There is certainly a sense of empowerment in these claims, but they occupy a special epistemological position because White people actually know that this is not the case. Regarding the origin of fair skin, it has evolved several times in hominin evolution, notably among the Neanderthals, but its most recent appearance is apparently among the Eastern Hunter-Gatherers of the future Russian steppes about ten thousand years BP (BP = before 1950). Another, similar example is the spelling, which I’ve adopted, of Afrika with a K. The reason given for this doesn’t seem to be very soundly based. The claim is that the spelling “Africa” is entirely colonial and should therefore be rejected. That said, I also have the impression that that spelling is primarily promoted by Afrikan Americans rather than actual Afrikans, and the K is also used in the Afrikaans spelling of the word, Afrikaans often being seen as a language of conquest. Another big issue with this spelling is that Afrikan languages which don’t use Latin script would use neither C nor K, and in transliteration the former Roman province of Ifriqiya was written with the Arabic letter Qaf in mediaeval times (and of course the word “mediaeval” is Eurocentric in any case). It is, however, spelt “Afrika” in Maltese and Cape Verdean Creole, and also in Swahili. In Wolof, it’s actually spelt “Afrig”!
So the issue here seems to be that the K spelling, though it does exist in many Afrikan languages, may reflect a mistaken claim made by Afrikan Americans about the culture of an entire continent about which it’s impossible to generalise. That mistaken claim may in turn arise from the people concerned lacking the opportunity or the information to recognise that it’s dubious, and therefore I’m still going to spell it with a K. Maybe there’s something I don’t know, but the truth seems to be that the spelling varies, and does sometimes include a C in languages which the people concerned own emotionally and consider to be Afrikan languages, such as English, French and Portuguese, whereas the claim to the contrary seems to be pressed from outside the continent. Maybe I’m wrong, and I’m very open to that possibility.
Elaine Morgan, who sadly died in 2013 CE, was a somewhat surprising person. Her degree was in English and she was a TV scriptwriter, so she was an outsider with respect to palaeoanthropology. However, the aquatic ape hypothesis was not originally hers; it had mainly been promoted by the marine biologist Alister Hardy. Of course, a marine biologist is not an anthropologist, but he was a life scientist. Her motivation for adopting the hypothesis was, as I said, that the idea of “Man The Hunter” is androcentric but leaves a gap if rejected, as it’s then necessary to explain the differences between humans and other apes. It was also claimed that she didn’t realise Hardy’s suggestion was a glib, off-the-cuff remark which was never intended to be taken seriously. This is not so: he actually wrote the Foreword to the second edition of her book.
Most naked mammals with subcutaneous fat are aquatic. This is the basis of Morgan’s claim. Philip Tobias, discoverer of Homo habilis and a shaper of the savannah hypothesis, eventually came to reject the latter. David Attenborough, and Desmond Morris, the former promoter of the “Man The Hunter” idea which had previously irked Morgan and persuaded her to think otherwise, both appear to support the hypothesis and her. One startling claim of hers is that early hominins were already relatively hairless. I’ve already mentioned that the idea that our ancestors were as hairy as chimps and gorillas may be mistaken, because the orangutan, the most conservative living great ape, is considerably less hairy than either, and it’s already established that gorillas’ and chimpanzees’ knuckle-walking evolved separately after they diverged from their common ancestors, so this kind of convergent evolution could equally apply to hairiness. Looking at it from the perspective of a through-line from the common ancestors of orangutans and humans to humans ourselves, their predecessors, related to gibbons, would’ve been hairier, and their descendants may have gradually lost their hair until we reach today’s situation with humans. This doesn’t mean, though, that hominins didn’t habitually enter the water, because that very lack of hair could’ve made it easier. Inherited characteristics appear before they’re tested. Moreover, our hair follows the lines of water currents across our bodies as if we were swimming forward in the water, with axillary and pubic hair, for example, in regions facing away from the flow, and also with tracks of lanugo or terminal hair running in the same direction.
An example of the kind of criticism Morgan received was that her ideas were “thought up by a Welsh housewife”. Not only is there nothing wrong with being either Welsh or a housewife, but that description also fails to take into account that she was a scriptwriter for ‘Doctor Finlay’s Casebook’ and later ‘The Life And Times Of David Lloyd George’ and the TV adaptation of ‘Testament Of Youth’. It might be a valid criticism of her writing that her degree was in English Literature rather than a science, but this wasn’t the focus. Instead, her academic credentials and career success were ignored completely, she was apparently assumed to be primarily a homemaker and her Celtic heritage was associated with ignorance and low intelligence, so the criticism was both racist and sexist. Her response, perhaps typically for a woman of her time, was to point out that it was an eminent male Sassenach biologist, knighted for services to science and a Fellow of the Royal Society, who had previously proposed the idea. In this case, though, Hardy’s ethnicity and gender didn’t protect him either, as his ideas were equally pooh-poohed by the scientific establishment. That doesn’t mean he was right of course, but it’s telling that the response to the same ideas being proposed by a Welsh woman focussed not on their validity or otherwise but on her identity. All that said, it doesn’t mean she was right either, and her position doesn’t confer infallibility. She could be expected to have some kind of academic rigour, but the fact is that she was not a scientist. Creative writing, however, does benefit from thorough research, and I’m guessing that her work on ‘Doctor Finlay’ increased her knowledge of human biology and of the process whereby diagnoses are made on the basis of evidence. Perhaps another main issue with her is that she was to some extent an autodidact.
Here comes another bullet list:
The only mammals with descended larynxes are humans, a species of North American deer and several species of aquatic mammals.
The only mammals which are born covered in vernix are humans and harp seals.
Baby humans have five times as much fat proportionately as baby baboons. When immersed in water they float face up due to the distribution of that fat.
Not only is our sense of smell weak because we’re apes, it’s actually even weaker than that of other apes. The only other mammals with such a poor sense of smell are aquatic, notably whales. This is because breath control makes smell less functional. Sperm whales, for example, regularly hold their breath for up to ninety minutes.
We sweat more than any other species of mammal. On the arid savannah, this would be a major liability.
The brain needs high levels of both ω-3 and ω-6 fatty acids, which are most common in the marine food chain. Just as a side note, although these fatty acids are generally used as an argument for eating fish, organ meat and wild animals, they’re plentiful in marine algae and are not made by animal sea food sources themselves, so this is not an argument not to be vegan.
Incidentally, it’s notable that the points about vernix and baby fat are likely to be more evident to people who have given birth than those who haven’t.
It was also recently found that the “savannah” sites where hominin fossils occur contain pollen from plants only found in forests, even including lianas, which are only found in very dense rainforests. Hence the theory that humans evolved on the savannah, sweating profusely and becoming dehydrated, seems now to have been refuted. Humans did evolve there to some extent, but the areas which are savannah now don’t seem to have been savannah back then. Note, though, that although the savannah hypothesis seems to have been refuted, it hasn’t been replaced by the aquatic ape hypothesis.
Even so, a wide-ranging comparison of humans and aquatic mammals, even beavers and otters, shows little similarity. Clearly swimmers and divers have health problems arising from their activities such as nitrogen narcosis and swimmers’ nodes in the external auditory meatus due to water getting trapped in the ears during diving. It is the case that diving animals do get the bends, and there are even fossils of marine reptiles showing evidence of it, so the mere fact of nitrogen narcosis may not be adequate evidence, but it isn’t at all clear why swimmers’ nodes would develop if we used to immerse our ears regularly.
What I take away from all this is a feeling of uncertainty. Although I can clearly see that Morgan’s ideas were rejected for ad hominem reasons, or at least that this was a factor in their rejection to a greater extent than for the ideas of others, there are clearly people out there with a lot more knowledge and experience in the field than I have who continue to reject them, presumably with good reason. It helps that a famous female palaeoanthropologist rejects them too. I wonder if this is connected with the wave of feminism each is associated with. The fact that they’re also endorsed by respectable science popularisers with backgrounds in relevant fields also seems to back them up, but by saying that I seem to be committing the same fallacy I’ve just accused others of committing against her. One thing is for sure, though: Morgan may be wrong, but the objections made to her were primarily sexist and to some extent racist, and we’re now left with no hypothesis at all regarding the circumstances of human evolution, which seems most unfortunate.
First of all, an apology. I’m generally committed to not referring to our natural satellite as “the Moon”, because perspective is important, so I often call it Cynthia. I regret choosing this name, although it’s a valid label since it is one of the names of the Greek lunar goddesses. Some others are Selene, which I like, Diana and Artemis. There’s an association with hunting because a bright nocturnal celestial luminary renders prey more visible. All of these names have a Western bias, so maybe that could be addressed for once, as it would be good if one of the best-known and most oft-mentioned celestial bodies had a non-European name. Because it also seems weird and distracting to keep calling it (her?) Cynthia, and indeed “her”, much of the time I refer to our companion in circumlocutory terms, so for example I talk about astronauts reaching “the lunar surface”, or do what I just did. This is actually also why it’s been called “Luna” rather than “Mensis”, the older Latin name: since “menses” refers to menstruation, the Romans seem to have felt they would be referring to a “period” in the sky, which could’ve been quite positive, but they were the Romans so it wasn’t seen that way.
Now for lunar landing denial, and there’s the circumlocution again. Humans did land on the lunar surface. Twelve of them in fact, between 1969 CE and 1972. Many people only remember Neil Armstrong and Edwin “Buzz” Aldrin, so I’m going to list all of them here: Neil Armstrong, Buzz Aldrin, Charles “Pete” Conrad, Alan Bean, Alan Shepard, Edgar Mitchell, David Scott, James Irwin, John Young, Charles Duke, Eugene Cernan and Harrison Schmitt. There were also six command module pilots, plus the crew of Apollo 13, whose landing attempt was aborted after an explosion. Although I’m tempted to mention their names, along with those of the three Apollo astronauts killed on the launchpad, I think I’ve made my point: twelve people have walked on the lunar surface. The reason this needs stating is twofold: most people have no recollection of the other ten, and apparently lunar landing deniers are under the impression that there’s only one lunar landing to deny.
How can we be confident that they happened? Well, for example, there are laser reflectors on the surface, placed there by Apollo astronauts and used by astronomers all over the world (along with one on the Lunokhod automatic lunar rover put there by the Russians); footage of dust kicked up by the Apollo lunar rovers describes a trajectory only possible in a near-vacuum under about one sixth of Earth gravity; and returnees develop cataracts significantly earlier than people who have never been there. Add to that that if it really was a conspiracy, all the people involved who knew about it would have had to take the secret to their graves, or at least to have kept silent to this day. I really can’t be bothered to go into too much detail about this, and other people have done it better than I could, but I’ll mention a couple of things. Stanley Kubrick’s ‘2001’ came out around the same time as the Apollo missions, so he is often named as a co-conspirator, but his lunar landscapes look the way others did before they were refuted by images from low orbiters or the astronauts themselves: they’re craggy and covered in cracks, because the surface was thought to be more or less uneroded, whereas actual pictures show soft, undulating hills and fairly thick dusty soil, which, however, wasn’t as deep as some astronomers expected and didn’t engulf the Lunar Module or the astronauts. The absolute minimum that happened was that the astronauts orbited and dropped probes, and that there was a sample return mission, and if they did all that they may as well have genuinely gone there. So believe me: humans have walked on the lunar surface.
HOWEVER
There is another issue.
Suppose it’s 1968. Apollo has yet to take anyone to another heavenly body. Moreover, it probably never will. This is because if it did, and that was the start of humanity spreading out into space and settling on other planets across the Galaxy, and at the time many people thought it was, the total population of the human race would probably come to dwarf the number of humans who have lived up until now, since at a very conservative estimate there could be a million Earth-like planets suitable for us to live on in the Galaxy. Each of those would only have to have a total population throughout its human history of less than a hundred thousand for the chances of being born before rather than after Neil Armstrong’s landing to be fifty-fifty, and that’s a tiny number of people. Therefore the chances of finding yourself born before he sets foot on the Sea of Tranquility are practically zero, unless the landing doesn’t lead to any further missions to settle, there or elsewhere, or for that matter to build any space habitats. Therefore, from the perspective of the late 1960s it makes perfect sense to assert that the Apollo missions will either fail or be faked. They’re a hoax.
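The arithmetic here can be checked in a few lines. A minimal sketch, using the essay’s own rough figures (roughly a hundred billion humans born to date is a commonly quoted estimate; the planetary figures are the essay’s assumptions):

```python
# Rough check of the anthropic arithmetic above. All numbers are the
# essay's assumptions, not established facts.
humans_born_by_1969 = 100e9    # roughly 10^11 people born before Apollo 11
earthlike_planets = 1e6        # "very conservative" guess for the Galaxy
births_per_planet = 100_000    # total births over each planet's history

future_births = earthlike_planets * births_per_planet
p_born_before_armstrong = humans_born_by_1969 / (humans_born_by_1969 + future_births)
print(p_born_before_armstrong)   # 0.5 - fifty-fifty, as the paragraph says

# With more realistic planetary populations (say five billion each),
# being born before Armstrong becomes vanishingly unlikely:
p_realistic = humans_born_by_1969 / (humans_born_by_1969 + earthlike_planets * 5e9)
print(p_realistic)               # about 0.00002
```

So even a trivially small population per settled planet already matches the whole of prior human history, which is why settlement on any serious scale makes a pre-Apollo birth date look wildly improbable.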
Only they weren’t, were they? As I’ve just said, the lunar landings happened. Returning to the present though (2024 right now), the same argument applies, and in fact it’s rather stronger, because more humans have been born by now than had been by 1968. We live in a young world. The median age of the world population is now thirty, meaning that half the people alive today have been born since 1994. We also lived in a young world back then, with the baby boom for example, though that was just in the West. More people have lived by now, and all of them have still lived on this planet. For everyone born since 1972, the chances of finding themselves in this situation have fallen even further.
This is of course similar to the Doomsday Argument, which I’ve mentioned on this blog before. The Doomsday Argument is an attempt to estimate whenabouts we are in human history by considering one’s birth as a random event in time. Given a thirty-year doubling time in human population growth and a birth in the late 1960s, such as mine, and assuming my birth was about halfway through the total number of human births ever, this would mean that the last human birth would take place around 2130. Right now this seems to me to be an overestimate: for environmental reasons to do with climate change, I’d expect the human race to go extinct around 2060. That said, human population growth is also slowing, and it’s a highly egocentric argument, because if someone else, born say in 2006, were to make the same calculation, even given the same doubling rate of population, the last human birth would take place quite a bit later.
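The self-sampling arithmetic behind this can be sketched in its standard generic form. This is not a reconstruction of the 2130 figure above, just the usual version of the argument, assuming a round hundred billion births to date and that one's birth rank is a uniform draw from all births that will ever occur:

```python
# Generic Doomsday Argument sketch (the standard self-sampling form),
# not an exact reconstruction of the 2130 estimate in the text.
# Assumption: your birth rank r is uniformly distributed among all
# N human births that will ever occur.

births_so_far = 100e9  # rough standard estimate of all births to date

def max_total_births(confidence: float) -> float:
    # If r/N is uniform on (0, 1), then with probability `confidence`
    # we have r/N > 1 - confidence, i.e. N < r / (1 - confidence).
    return births_so_far / (1 - confidence)

# At even odds, total births ever are at most twice those so far:
print(round(max_total_births(0.5) / 1e9))   # 200 (billion)

# At 95% confidence, at most twenty times:
print(round(max_total_births(0.95) / 1e9))  # 2000 (billion, i.e. 2 trillion)
```

Note how egocentric this is, as the text says: the bound scales directly with the number of births that have happened before the person doing the calculating, so a later observer always gets a later doomsday.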
We now have the Artemis program, aiming to return humans to the lunar surface in the near future and to facilitate human missions to Mars. If this happens as described, it sounds like the start of this species spreading into space, and we are once again confronted with trillions of probable future humans whose existence would entail that living before that happens is very improbable. This is the second time this has happened, in almost exactly the same way. The first time, it actually did happen. This time, just as I would’ve said in 1968, it won’t. Whatever has happened in the past has a 100% probability of having happened, because it did happen. This is true in one sense. In another, it isn’t. For instance, if you chose a random nation state in 2000 it would probably be a republic, but if you chose one in 1700 it would probably be a kingdom; and besides, the past can’t be perfectly known. It can, though, probably be known more accurately than many future trends and events. Anyway, this means that because humans did reach the lunar surface, they have a 100% chance of having done so. Paradoxically though, if the same prediction had been made in 1968, it would also probably have been true. This does raise issues about the nature of probability.
There’s this thing called “immanentising the Eschaton”, which is forbidden by the Roman Catholic Church. It means trying to make the world end by bringing about the kind of things that seem to be prophesied in the Book of Revelation. In the 1980s, Ronald Reagan was accused of doing this because of the Cold War. Well, this is what’s worrying me right now: the Artemis program is looking ever more likely, but we “know” it can’t happen, because if it did it would make our current existence improbable. Therefore, events can be expected to intervene to prevent it, and any other such project, from happening, because we’re alive now and living on Earth rather than in space or on another planet. The more likely it becomes, the more drastic the event preventing it would have to be. We can be confident that no chain of events leading to a high-population future off Earth can happen, but we don’t know why it won’t. Any extinction event is incompatible with future human beings being born, and so carries a high degree of certainty, so to speak, of preventing a “space future”. Nuclear holocaust, catastrophic climate change, pandemic, the Artificial General Intelligence apocalypse – any would be fine. We have what feels like an ever-lengthening list of apocalyptic scenarios.
There are ways in which both Apollo and Artemis could be predicted to happen. As long as they don’t lead to a likely expansion into space, they’re absolutely fine. Apollo was substantially a Cold War publicity stunt by the West, mainly the US, and could be expected not to lead to anything else. In fact, its scaling down and cancellation are possibly “predictable” simply because we’re still here. The same could apply to Artemis. If it’s just a pipe dream, it won’t happen. Also, if it’s hyped and does not in fact lead either to a permanent base or to people going to Mars, we might also be safe. On the other hand, anything which looks like it’s going to lead to an open future of humanity living permanently off this planet immediately becomes improbable for that very reason, and the probability of that future kind of retroactively “causes” events which prevent it.
This is not necessarily a pessimistic scenario. It simply means that if we have a long future, which right now seems very unlikely, it will be on Earth, and at no point will there be permanent settlements of fertile people in space or on other planets. It also suggests a rather weird solution to the Fermi Paradox – where are all the aliens? Maybe everybody realises this and has a failure of nerve, so nobody takes the risk. On the other hand, it also suggests there is a Great Filter approaching. The immediate solution to the Fermi Paradox in this case is the very vague idea that something stops aliens travelling through space, assuming they exist; the obvious alternative is that there are no aliens. Either way, it would mean that the Great Filter hasn’t already happened.
The Great Filter is the idea that somewhere between the appearance of the simplest life and the existence of advanced interstellar civilisations, a significant barrier prevents most life from reaching that stage. There are two major possibilities: either it’s already happened and we’ve gotten through it, or it hasn’t happened yet but it will. It could be pretty benign. For instance, maybe everyone decides not to bother going into space because they want to solve social problems at home, become spiritually enlightened and lose interest in doing so. I’ve mentioned various attempted solutions on here, including the combined importance and scarcity of phosphorus, the possibility that we might just be swamped in a Galaxy teeming with civilisations, that everyone else might be really bad at maths, or that we’ve committed some kind of faux pas that puts us beyond the pale. Another intriguing idea, and calling it a possibility may be going too far, is that civilisations get to the point where they discover backwards time travel and destroy themselves to the extent that they never existed in the first place, or are automatically pruned by that very discovery. In a way, this might amount to everyone else being too good at maths: so good that they discover time travel using it, and that causes them never to have existed.
The Great Filter could be divided into past and future, but there could of course be a third possibility: maybe it’s happening to us right now. Perhaps all our problems are combining together to wipe us out, or a specific event is occurring which is incompatible with us having a future of any kind.
But maybe Artemis won’t lead to an open space future. The plans after the lunar landing are vague and might not lead to anything much in the long term, so it could be a similar stunt to Apollo. The Chinese have a plan to build a base at the lunar South Pole too, though, so they may well make further plans. Another possibility is private enterprise taking over, but this might not be good. This is where I get into the whole “Up Wing” business, and maybe I shouldn’t go there. It could just be that due to the probabilistic argument, every attempt at a major space development project is destined to fail, and Artemis is just one of those. The Chinese program is too, and all of this can be concluded from the simple fact that we’re around now, not having settled elsewhere in the Universe. It isn’t because of any particular reason so much as that our existence ends up selecting a future without space travel. It is, I’ve long thought, very odd that predicted developments such as rotating space colonies and crewed missions to Mars did not come to pass, but maybe it’s just that if they had, the average human being would be someone living thousands of years in the future. If this is so, space exploration might simply look jinxed for no apparent reason. This does actually seem to happen in at least one particular case, referred to as the “Mars Curse”: only about 53% of missions to Mars have succeeded completely. This may not even be specifically because of something Mars does, as the flights have been known to fail before even leaving the atmosphere. Rather than adopting a superstitious approach, maybe it’s just probability at work: losing probes means we find out less about Mars, which scuppers the chances of humans getting there, so that’s what happens.
If it really is true that the probability argument works, there seem to be at least two applications to prediction here. One is the Doomsday Argument in general, which appears to have fairly major flaws: because it focusses on births, for instance, it might just predict the end of births rather than of the human race, perhaps through the end of mortality, or even the thought of extinction itself becoming extinct rather than us. Another is the possibility of eliminating an apparently plausible future, which may also connect to the Fermi Paradox. But might there not be other things which this kind of argument could predict? The Mars Curse could be a real thing which nonetheless has no causal, or for that matter acausal, explanation, but is just how things happen to be. It seems to me that this has potential, but it’s all rather imponderable.
Meanwhile in the real world, Artemis faces delays and constantly recedes from the near future, like the invention of efficient fusion power. What a surprise.