How Real Is Maths?

As you may know, I was involved in a high-control parachurch organisation in the mid-1980s CE when I went to university for the first time. Over the first few months, I didn’t resist them much, at least externally, because I wanted to give them a chance and see whether their claim that God and evangelical Protestantism really did have all the answers held up. I then went back to Canterbury for Xmas and bought my dad a book about mathematics, something he was very keen on and had a good grasp of at the time, which I also ended up reading myself. In this book, which I think may have been Martin Gardner’s ‘Mathematical Circus’, there was an interesting chapter on different degrees of infinity. In maths, there are countable and uncountable infinities. A countable infinity would take forever to count, but given an infinite period of time it could be done. An uncountable infinity can’t be counted even then. So for example, there are infinitely many whole numbers and infinitely many points in space, but those two infinities are different sizes. This can be shown as follows. Suppose you have an endless supply of cards with a one on one side and a zero on the other, and you lay out an infinite list of infinitely long rows of these cards, one row for each counting number. Have you then produced all possible infinite sequences of ones and zeros? No. Start in the top left hand corner of this array and turn the first card over, then move diagonally, one row and one column at a time, turning over each card you land on, forever. The sequence you have then generated, running diagonally down the arrangement, cannot appear anywhere in the list, because its nth digit always differs from the nth digit of the nth row. So no such list, however arranged, can capture every sequence, and hence there must be a larger infinity. This leads to peculiar consequences. 
For instance, a related result, the Banach–Tarski paradox, says you can in theory cut a solid sphere into a handful of (admittedly unmeasurable) pieces and reassemble them, without stretching anything, into two spheres each the same size as the original. Georg Cantor, who first thought of this way of understanding infinity, spent the later part of his life going in and out of mental hospitals, partly due to the hostility of other mathematicians to this concept and its implications and possibly also because the concept he came up with was a cognitohazard. To some extent, thinking of this may have broken his brain.
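The diagonal trick above can be made concrete in a few lines of Python. This is a sketch under my own naming (`diagonal_complement` is a made-up helper): given the first n rows of any proposed list of 0/1 sequences, flipping the nth bit of the nth row yields a sequence that disagrees with every row shown, which is the heart of Cantor’s argument.

```python
def diagonal_complement(rows):
    """Given the first n rows of a proposed list of 0/1 sequences
    (each row at least n long), return the sequence whose ith bit
    flips the ith bit of row i."""
    return [1 - row[i] for i, row in enumerate(rows)]

rows = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
d = diagonal_complement(rows)
print(d)  # [1, 0, 1, 0] -- differs from row i at position i, for every i
```

No finite demonstration proves the infinite case, of course, but the construction is the same: whatever the list, the flipped diagonal is missing from it.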

With steely determination, I returned to university and immediately confronted a member of the cult, not on this issue but other, more practical ones such as intolerance of other spiritual paths and homophobia. However, because we were discussing an infinite being, namely God, I mentioned in passing this concept, and his interesting response has often given me pause for thought since. He regarded this view of infinity, and by extension much of pure mathematics, as a symptom of the flawed nature of the limited and fallen human mind. I can’t remember exactly how he put it but that’s what it entailed. At a later point he tried to explain what I’d said to someone else as “infinity times infinity”, which is not what this is, and advised them not to think about it, which in a way is fair enough. He was a medical student, and it may not be worthwhile to waste your brain cells on it in such a situation, except that it might be useful for psychiatric purposes, because, well, what are cognitohazards? Are they actually significant threats to mental health and are there enough of them encountered in daily life or even occasionally for them to be proper objects of study?

Something which definitely would be a cognitohazard is Graham’s Number. Until fairly recently, Graham’s Number, hereinafter referred to as G, was the largest number ever used in a serious mathematical proof. Obviously you could talk about G+1 and so on, but that’s not entirely sensible. G is an upper bound for the solution to a particular problem involving two-coloured hypercubes. Take a hypercube of a certain number of dimensions and join all its vertices together to form a complete graph on 2^n vertices. Colour each edge either one colour or another. What’s the smallest number of dimensions such a hypercube must have to guarantee that every such colouring contains at least one single-coloured complete subgraph on four vertices lying in a plane? The answer might actually be quite small: thirteen is the best known lower bound. However, it might be, well, extremely large doesn’t really cut it to describe how big it could be, so let me just say it might not be that small at all. Graham proved it can be no bigger than G.

G can actually be expressed precisely, but in order to do so a special form called Knuth’s up-arrow notation has to be used. There’s an operation called exponentiation which is expressed very easily on computers and other such devices as “^”. Hence 2^2 is two squared, 2^3 two cubed and so on. Although it would probably be fine to use the caret to express this, in the past “↑” has been used for this operation, and Knuth’s notation builds on it. In his scheme, 2↑4 is 2 x 2 x 2 x 2, which is of course sixteen. However, more arrows can be added, so 2↑↑4 is “tetration”, 2↑(2↑(2↑2)), which is 65536. Then there’s “pentation”, with three arrows: 2↑↑↑4 expands further as 2↑↑(2↑↑(2↑↑2)), a tower of 65536 twos, and even the vastly smaller 2↑↑5, which is 2^65536, already has 19729 digits. This can be continued as long as necessary of course, and G is built up in sixty-four layers: the first layer is 3↑↑↑↑3, with four arrows, and each subsequent layer is 3↑…↑3 with the previous layer’s value as the number of arrows. G is the sixty-fourth layer. That is, perhaps surprisingly, an exact definition of the number. If every Planck volume in the observable Universe were to represent a digit, there still wouldn’t be enough space to write it out longhand. It’s said, only half-jokingly, that a brain dense enough to hold a full representation of G would have to collapse into a black hole. So Graham’s Number is also a cognitohazard.
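Up-arrow notation is simple enough to sketch in code, though only toy inputs ever finish. This is my own minimal recursive version (the name `arrow` is mine): one arrow is exponentiation and each extra arrow iterates the operation below it.

```python
import sys
sys.setrecursionlimit(100000)  # the recursion gets deep even for toy inputs

def arrow(a, n, b):
    """Knuth's up-arrow a ^(n arrows) b: n == 1 is plain exponentiation;
    each extra arrow iterates the previous operation b times."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(2, 1, 4))  # 16, i.e. 2↑4
print(arrow(2, 2, 4))  # 65536, i.e. 2↑↑4
print(arrow(3, 2, 2))  # 27, i.e. 3↑↑2 = 3^3

# Graham's number can only be *described* this way, never computed:
# layer = arrow(3, 4, 3)            # 3↑↑↑↑3 -- already utterly hopeless
# for _ in range(63):
#     layer = arrow(3, layer, 3)    # previous layer sets the arrow count
# G = layer
```

The commented-out loop mirrors the sixty-four-layer construction in the paragraph above; uncommenting it would never terminate on any physical machine.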

Nowadays, larger finite numbers have been used. TREE(3), which I’ve mentioned before, also involves graphs, as does SSCG(3), the third Simple Subcubic Graph number, which renders TREE(3) insignificant. There’s an even larger finite integer, Rayo’s Number, which resulted from a large-number duel at MIT in 2007, which I could represent here but I’d probably be talking to myself. Actually, I will:

This is too hard to type out without fiddling about with LaTeX, so here’s the first bit written out longhand, unfortunately with a bic. The next bit is based on this definition, and reads: “The smallest number bigger than every finite number m with the following property: there is a formula φ(x₁) in the language of first-order set theory (as presented in the definition of Sat) with less than a googol symbols and x₁ as its only free variable such that: (a) there is a variable assignment s assigning m to x₁ such that Sat([φ(x₁)], s), and (b) for any variable assignment t, if Sat([φ(x₁)], t), then t assigns m to x₁.”

Here [φ(x₁)] is a Gödelisation of the formula and s a variable assignment.

It wouldn’t be impossibly difficult to understand this, but I haven’t really pursued it. I showed you the actual notation to introduce a new point: mathematical formalism. Also, the fact that this might look like gibberish illustrates an important feature of mathematics on which formalists capitalise: maybe it’s just a game based on symbols.

When I first read ‘Beginning Logic’, at about the same time as I was resisting the cult, I was rather surprised when the author defined the logical symbols in terms of their physical appearance as marks on paper rather than in more mathematical terms, and the fact that I’ve written that out might tempt one to think that ultimately that’s all they are, and that this form is nothing more than a kind of game to which we give meaning. This appears to be formalism, an approach found in various disciplines which emphasises form over content. The possible connection to the slogan “form follows function”, coined by the architect Louis Sullivan and often associated with the Bauhaus, is not lost on me, but rather than pursue that right now I should probably talk about formalism itself. Formalism as applied to literature, for example, yields Russian formalism, an early twentieth-century and substantially Soviet-era movement linked to New Criticism, which held that literary criticism could be objective by letting the text stand by itself and ignoring influences and authorship. It focussed on autonomy (what I just said), unity, meaning that every part of a work should contribute towards the whole, and defamiliarisation, that is, making the familiar seem unfamiliar. Martian poetry springs to mind here.

Translating this to maths, formalism is the view that maths consists of statements about the manipulation of sequences of symbols using established rules. Like formalism in literary criticism, it ignores everything outside that realm, so among other things it turns everything into pure mathematics. This is what I was confronted with when I first learnt formal logic, hence that photo: a series of symbols on a piece of paper, with rules about how to manipulate them, which expresses a very large number only because of the comment underneath referring to it.

Now the reason this interests me in the context of my acquaintance (friend? I don’t know) is that there is another philosophical position about maths called Platonism, which is the belief that maths is discovered and already exists. This is similar to believing in the existence of God, so my friend (why not) held an unusual position in that he thought at least one area of maths, and I think by implication much of the rest of it, wasn’t “out there” but was invented by human beings, yet he also believed in God, i.e. something which is “out there” in just the sense in which mathematical Platonism sees maths. There doesn’t seem to be anything essentially wrong with this position but it is a bit odd and feels inconsistent. He also probably thought that the “plain reading” of Biblical values referred to objective principles such as not stealing, honouring the Sabbath and so on, principles which would then be objective in much the way many people, theistic or otherwise, take maths to be. But he didn’t view maths like that. I don’t know if he was aware of the apparent contradiction.

On the other hand, I can totally get on board with the idea that whatever we might think about how reality works is completely wrong because the Universe is beyond our comprehension. If we consider certain animals, we perceive their understanding as being limited in various ways. For instance, they might be blind cave fish or they might be sessile filter-feeders living in burrows below the high tide mark, and we suppose that they don’t understand the world as much as we do. Although I think this is accurate, and I should mention that we’re also limited in various ways, particularly in lacking a sense of smell as good as most other mammals, there’s no reason to suppose that the way we think is any more adequate or discerning about reality. All we might have is a system that works most of the time regardless of all the stuff we don’t know about. That said, it still feels like various things must exist, such as current experience and a physical world. In view of that possibility, I do have some sympathy with my friend’s take on this although it felt somewhat unconsidered in his case.

In fact I’d take it further into his world and say that as humans we do indeed have limited understanding, particularly when we compare ourselves with God. We’re fallible and certain things are beyond us. Moreover, there’s the question of the Fall, and I have to be careful here. Our understanding is also strongly constrained by the kinds of cultures and societies we live in, which to some extent is what the Fall really is. So like him, I do in fact link it to my spirituality and feel that a little bit of humility is in order. In that way, both constructivism and Platonism could be true. There could be mathematical truths known only to God, or for an atheist mathematical truths which could in theory be discovered by a sufficiently powerful mind, and other mathematical activities and forms which are merely games played by our own finite minds.

I’ve done a bit of a bait-and-switch here, by swapping formalism for constructivism, and they’re not the same thing. Constructivism, of which intuitionism is the best-known form, sees mathematics as built by mathematicians. Hence it does have a meaning beyond the mere manipulation of symbols through rules, but the meaning is given by the mathematicians. In other words, maths is invented, but it is real.

To illustrate the difference between formalism and constructivism, I’d like to go back to the diagonal proof mentioned above, which shows that the real numbers form a strictly larger infinity than ℵ₀, the countable one (their cardinality is ℵ₁ if the continuum hypothesis holds). According to formalism, these uncountable cardinals are validly defined symbols and the system is internally consistent, so there’s no problem. Constructivism, though, would reject the proof and even its premises. The set of all numbers, according to this view, is only ever potentially infinite as it can never be completed. Even real numbers, i.e. the numbers including all decimal fractions between the integers, are only valid insofar as they can be constructed in a finite way. That infinitely long sequence of zeros and ones, and all the ones under it, only exist up to the point where the process of writing them down has actually been carried out at some point in the history of the Universe, so in other words infinity of either kind is only a potential, and really not even that, since the Universe won’t exist forever in a form hospitable to minds capable of performing maths. I would say that this has to be a non-theistic view, since given theism there is an eternal and infinite mind which can and maybe does do all that, which makes Platonism true, although of course God might have better things to do or never get round to it.

An extreme form of constructivism is ultrafinitism. I think of this metaphorically as some mathematical objects being in focus and others being to a greater or lesser extent blurred. So for example, the lower positive integers are in perfect focus, sharp and truly instantiated by virtue of the extensive construction they’ve undergone through continual use. Less well-focussed are the non-integral rational numbers, zero and the negative numbers, and as one ascends higher, further away from zero, away from numbers which can be reduced to fractions and into imaginary, complex and hypercomplex numbers, the less sharply focussed they become, until something like Graham’s number or an octonion is just a meaningless blur and the infinities are grey blobs. This is just an image of course, so here goes.

To an ultrafinitist there is no infinite set of natural numbers because it can by definition never be completed. It goes beyond that though. For instance, a comparatively modest large number, Skewes’s Number, is about 10^10^10^34. It’s an upper bound for the point at which the logarithmic integral, a formula used to estimate the number of prime numbers below a given value, first switches from an overestimate to an underestimate. There are also higher Skewes’s Numbers for later crossings. It can be proven that the crossing happens, but the exact value of the first one is unknown and may never be calculable, putting it in a different position from G, which is precisely defined. Peculiarly, this could mean that Skewes’s Number doesn’t exist in ultrafinitist terms but Graham’s does.
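The crossing Skewes bounded can at least be illustrated at human scale. This is a rough sketch with my own helper names: it counts primes up to x with a sieve and approximates the logarithmic integral by simple numerical integration. For every x anyone can actually check, the integral overestimates the prime count; Skewes’s Number bounds where that pattern first reverses.

```python
import math

def prime_count(n):
    """pi(n): the number of primes <= n, via a sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return sum(sieve)

def li_offset(x, steps=200000):
    """Li(x): the integral from 2 to x of dt/ln(t), by the trapezoid rule."""
    h = (x - 2) / steps
    total = 0.5 * (1 / math.log(2) + 1 / math.log(x))
    for i in range(1, steps):
        total += 1 / math.log(2 + i * h)
    return total * h

for x in (100, 1000, 10000):
    print(x, prime_count(x), round(li_offset(x), 1))
# At these scales the integral always exceeds the true count; the
# first failure of that pattern lies somewhere below Skewes's bound.
```

Nothing here proves the crossing exists, of course; that takes the analytic argument Skewes and Littlewood supplied.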

This gives rise to a vague set known as the “feasible numbers”, which are numbers which can be realistically worked upon using computers and the like. The question arises of how to account for such things as π, because it seems like it goes on forever, but ultrafinitists apparently view it as a procedure in calculation rather than an actual number. Incidentally, it’s difficult to refer to numbers in this setting because words like “real” and “imaginary” have long since been nabbed by mathematicians for specific meanings which don’t refer to the obvious interpretation of those terms. I suppose I could say “existing” or “instantiated”.
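The view of π as “a procedure in calculation rather than an actual number” can be made quite literal in code. The generator below is Gibbons’s unbounded spigot algorithm for π: it never holds π as a finished object, only as a process you can run for as long as resources allow, which is about as ultrafinitist-friendly as a real number gets.

```python
def pi_digits():
    """Yield decimal digits of pi one at a time (Gibbons's spigot algorithm).
    Exact integer arithmetic throughout; runs until you stop asking."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # this digit is now settled; shift it out
            q, r, t, k, n, l = (10 * q, 10 * (r - n * t), t, k,
                                (10 * (3 * q + r)) // t - 10 * n, l)
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

gen = pi_digits()
print([next(gen) for _ in range(10)])  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

The generator is the number, on this reading: ask for another digit and the procedure obliges, but there is no completed infinite expansion anywhere.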

Some mathematicians also view maths as essentially granular. That is, the idea that there are two ways to do maths, one involving continuous functions as with infinitesimal calculus, the other exemplified by the group of integers with addition involving discrete entities, is flawed, and therefore there are no such things as irrational numbers.

Although he didn’t get as far as ultrafinitism itself, Wittgenstein’s thought does provide a useful basis for it. He viewed infinity as a procedural convenience, potential rather than actual, and maths as an activity involving the construction of novel concepts which didn’t pre-exist to be discovered. In general, he’s a very concrete philosopher. I’m actually not that keen on a lot of his thought, although some of it’s good, such as the family resemblance account of definition, which could be applied here. Logical positivism also wouldn’t allow for such concepts, but I don’t consider that a respectable school of philosophy so much as an interesting footnote in the history of ideas.

Ultrafinitism has major consequences for physics. Singularities arise in various places in physics and cosmology. A rather prosaic example is that, in the idealised continuum model, the stress at the tip of a crack in a material is predicted to be infinite. This can be resolved by dropping the idealisation that the material is a continuous substance rather than made up of atoms or other particles. Some other areas where singularities arise are more exciting, but this can serve as an illustration of how the problem might be addressed. Specifically, there was a singularity at the Big Bang, there’s one at the centre of a black hole, and the mass, time and length alterations of relativity diverge as the speed of light is approached. This has a remarkable consequence, at least as I see it: for an ultrafinitist, the speed of light can be exceeded. Ultrafinitism strongly suggests that faster-than-light travel is possible and that in some sense the Big Bang never happened. The first in turn implies that time travel backwards is also possible. At this point, ultrafinitism begins to feel too good to be true, but then a light bulb would probably have seemed like that to a mediaeval European, so that would be argument from incredulity.
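The light-speed singularity mentioned above is just the Lorentz factor γ = 1/√(1 − v²/c²) blowing up as v approaches c. A small sketch (variable names mine) shows the divergence numerically:

```python
import math

def lorentz_gamma(beta):
    """Lorentz factor for a speed v = beta * c; diverges as beta -> 1."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

for beta in (0.5, 0.9, 0.99, 0.999, 0.999999):
    print(f"v = {beta}c  gamma = {lorentz_gamma(beta):.3f}")
# gamma grows without bound as beta approaches 1; at beta = 1 the formula
# divides by zero -- the very singularity an ultrafinitist might read as a
# sign that the continuum model has overreached.
```

At β = 0.6, for instance, γ is exactly 1.25; by β = 0.999999 it is over seven hundred, and there is no finite value at β = 1 at all.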

There’s also a problem for the theist with ultrafinitism and finitism, in that they imply that any deity would not be eternal or infinite. However, it’s important not to allow a “God of the gaps” in at any point: God should never be used as an explanation for a physical phenomenon. Even so, the concept of God may be moribund for ultrafinitists anyway, given suggestions that octonions need to be posited as variables in Bell’s Theorem.

What all of this seems to mean is that quantum physics makes more sense than relativity for the ultrafinitist because it makes reality granular. The difficulties it poses for relativity and cosmology could be a sign that there’s something about relativity which is only an approximation of the real world, but we don’t know what. However, we don’t generally accept the idea that stress before a crack is infinite because it doesn’t accord with our view of the world that something so outlandish would exist in everyday life every time we drop a piece of porcelain onto a stone floor, so maybe we should also reject the idea of lightspeed being a limit or the Big Bang being a beginning. The fact remains that relativity is very well tested and used in daily life, for instance with satnav. It isn’t just an abstract theory about a realm of reality few people venture into and it does seem odd to say that despite all the evidence in its favour, it just will fail at a certain point. Moreover, although I’m at peace with the concept of time travel, many people would object to that implication.

To conclude, I’m aware that I’ve wandered all over the place with this, and my response to this impression is as follows: yesterday I heard someone on the radio comment that as one’s age advances it’s as if different parts of one’s brain want to break up the band and follow solo careers, so maybe this blog post is evidence of my melting brain.

Goddities

This is going to be me going at it like a bull at a gate rather than sitting down and composing my mind and thoughts about the issues at hand. My basic idea is to try to explore the common ground, or lack of it, between atheism and theism, because I sometimes wonder if we’re talking about the same thing or just using the same words. There are certain moves atheists have been known to make which I feel were designed for the specific occasion of an argument rather than having any wider respectability, and there are other things which, well, are just interesting for everyone, or at least might be, and I want to plonk all these things together today and talk about them.

The first one is something I’ve mentioned before, which is the question of active and passive atheism. I insist on a definition of atheism as the existence of a belief that no deities exist rather than the absence of a belief that a deity exists. I’ve been over this, so I’ll be brief. The motivation for defining atheism passively is to set it as the default belief, but in doing so one is forced to accept peculiar implications. We assume all sorts of things, which is in itself interesting and complicated because in fact we seem to have uncountably many assumptions but only a finite number of active beliefs. On that view an assumption is not something which is happening in anyone’s mind; it’s something one has not done. This seems messy and excessive to me, and is actually more or less the exact issue which many philosophers have with the nineteenth-century philosopher Gottlob Frege’s view of concepts, so it’s something which has been flogged to death in philosophy already, and to produce this definition at this stage, I think, reflects a lack of philosophical training. It comes across to me as naive, reflecting a kind of thinking on the spot which hasn’t had its rough edges knocked off. On the other hand, perhaps it reflects some kind of demographic shift. As I understand it, analytical philosophers have had very little interest in the concept of God since the start of the tradition, which was probably Frege’s own thought back in the 1870s CE, but they may also have been enjoying this lack of interest in a more overtly theistic and religious society than nowadays, or perhaps a less confrontational one in this area, so the definition of atheism as the absence of a belief may have become more accepted simply because more atheists, as opposed to the apatheists who probably characterise most philosophers, are now in academia. 
Nonetheless, there is no word for someone who doesn’t believe in Russell’s teapot or that there’s an invisible gorilla in every room, and if atheism were a mere absence of belief there would be no more need for a word for it than for those; but clearly there is such a word, and it does mean something. But I won’t go on.

Second issue: small g “god”. There are atheists who insist on using a small g for the name God. I think they do this because they want to equate God conceptually with what they think of as other deities. This, I think, is also erroneous and an example of an over-reaction to a situation they have kind of imagined. Look at it this way: atheists claim God is a fictional character. It’s possible to go further than that and claim that God is an incoherent concept, but that isn’t atheism, although it’s an interesting position to take and one I have more than a little sympathy with. Fictional characters are given names. We know who Gandalf is, who Bridget Jones is, and unfortunately we know who Bella Swan is (actually I forgot and had to look that up!), and they all have names beginning with capital letters. Is god supposed to be someone like e e cummings or archy the cockroach? Someone once said to me I was confusing myself by capitalising God, which they didn’t explain, but I think it’s along the lines that God is just one deity among many. It is, though, a little bit interesting that we generally just call God “God” and don’t say, for instance, Metod any more, which used to be a word used for God and seems to mean “measurer” (i.e. “mete-er”) and “arranger”, which could be a euphemism or a kind of title but is in any case a name for God.

This is of course related to “I only believe in one fewer deities than you do,” which involves the supposition that theistic Christians believe the likes of Ba`al and Zeus don’t exist. This also I think is seriously misconceived and fairly thoughtless. My view of the other deities is not that they don’t exist but that they’re God under different names. They do of course have other attributes, but then if God exists, God is beyond human understanding, so we have no better idea of what attributes are true of God than of any other deities who are, in any case, God by other names. So yes, I do believe in all those deities because they’re all the same deity. Another rather unsettling consequence of saying I’m atheist about all the other deities is that it’s very like the Islamophobic belief that Allah is not God and that Muslims are not worshipping the same god as Christians. It has disturbingly racist overtones to it, to my mind, which is of course a feature of “New Atheism”, and this is where it gets interesting. Many Christians claim Muslims worship a different, false god and not the God of the New Testament, or presumably the Hebrew scriptures, where they see continuity, and among Christian nationalists I would expect a very strong denial that Muslims worship God. This unifies some theists and atheists. The details of the denial may be different though. For instance, Christian nationalists might want to distinguish between the Christian trinitarian God and the Islamic indivisible divine unity, whereas the New Atheist approach is more likely to be along the lines of imaginary beings being given different attributes, including the trinity or otherwise.

Emphasising the fact that New Atheism is not all anti-theistic atheism is vital. It’s also possibly a movement whose time has passed. Nor would I want to say that anyone within that movement is overtly racist. They are characterised, and perhaps led, by Richard Dawkins, Daniel Dennett, Sam Harris and Christopher Hitchens, notably all White men, meaning that they will all have unconscious bias, some of which I inevitably share by virtue of my whiteness and to some extent other aspects of my social conditioning though not all. This by no means makes anti-theistic atheism unsalvageable, but equally it’s important to note that atheism is not monolithic. I always think of South Asia in this respect, with the separate Jain, Samkhya and Carvaka beliefs that God cannot or does not exist, among others, in one case because the force of karma is a sufficient explanation for the Cosmos, and more recently the Marxist anti-theistic movement there, though this is clearly influenced by the West. Some New Atheists see the development of European culture under Christian influence as a necessary precondition for the emergence of what might be termed a more liberal or progressive approach which includes atheistic approaches to reality, possibly including South Asian Marxist activists.

One major problem, I think, with anti-theist approaches in general is that they seem to make a major assumption which really doesn’t seem warranted and is odd for a group which tends to see itself as rational. That is the assumption that the urge to be religious can be removed from human psychology, even supposing it should be. It seems to me that there are several reasons why this is unlikely. We have cognitive biases involving finding patterns in things, we engage in magical thinking which may be the basis of rationality, and large communities tend to drift away from their constituted foundations after a while. We also have ego defences. The idea that a non-religious mindset could be adopted by the general population may not be realistic. There don’t seem to be any societies which are entirely non-religious, and when it does occur officially, religion creeps back in somewhere, such as superstitious beliefs about luck and fate. There are of course very large numbers of non-religious people whose lives are entirely healthy and well-adjusted, but they’re not an entire society and there’s too much diversity between people’s personalities and influences to conclude that everyone could live their lives that way. This has nothing to do with whether religious claims to truth are correct. This also seems to be an article of faith among, for example, humanists – that society can exist, whether or not it’s a good thing, without religion. I really want to stress that I’m not saying religion is needed, just that we don’t know if it even could be eliminated. In fact, ironically this belief is almost religious in itself, although I would also insist in defining religion in a different way which doesn’t emphasise belief.

I feel like I’ve spent several paragraphs low-key slagging off atheism. This isn’t what I want to do at all. I want it to be the way things are in my own life most of the time, and probably increasingly so in these isles with the possible exception of Ireland, that whether one is theist, atheist or agnostic is a private matter one would prefer not to talk about with people outside one’s possibly religious community and maybe not even that. What I’m trying to do is establish common ground and I’m not looking for a fight. There are more important things to engage in conflict over and it can be divisive even to bring this up, but at the same time it feels messy and naive, so I’m going to carry on.

Something which is not so divisive is the rather more nuanced approach found in both religious and non-religious circles which is not firmly atheist, theist, deist or agnostic, which is present both in some forms of mysticism and Western philosophy. Many religious mystics, and in fact a lot of just ordinary religious people like me, would say God is beyond human understanding, and in particular there’s the via negativa, which is the idea that you can best say what God is not in order to suggest what God is. God is also said to be unlike any created thing, and it’s a very familiar experience to find that one can’t express a religious experience in language. Similarly, there’s ignosticism and theological non-cognitivism, which I’ve talked about before on here. In the mid-twentieth century, there was a movement within analytical philosophy called logical positivism which attempted to establish that meaning, i.e. either truth or falsehood, only inheres in statements which are axiomatic, express necessary truths or can be empirically verified. Along with this claim was the one that religious statements were not in any of these categories and therefore they were meaningless. This is not the same thing as being false and in a way it corresponds quite well to the mystical position. Logical positivism is now considered passé, but other areas of Western philosophy have adopted a somewhat reminiscent position. My ex is of course German and among other things a philosopher in the continental tradition. When we got together, I was worried they might be Christian but it turned out that they saw religious claims very much as not having truth values in a manner I found reminiscent of logical positivism but which have much more in common with the postmodern condition, which sees philosophy as a branch of literature and everything as up for deconstruction. 
Statements about God make sense in their own communities and theology is a poetic or narrative truth, but these truth claims are no more or less valid than those of maths and science. Postmodern theology has been adopted by people in religious communities. There is, however, no truth outside language according to this.

I have reservations of course, as the claim that there is no truth outside language is arguably both ableist and speciesist, but it is nevertheless interesting that there is a kind of agreement in this area between, of all things, postmodernity, religious mysticism and logical positivism. These are not all there is to philosophy of course, but it strikes me that this shows a way forward for us all. There are of course also other non-theistic religions and non-theistic traditions within Christianity and Judaism.

Getting back to gripes though, there’s another cluster of beliefs which tend to be considered as universally associated. This is not a definitive list but I hope I’ve captured most of them:

  • Theism
  • An afterlife
  • Souls and bodies as separate items which coexist in the same sense
  • Varying fates according to actions in this life
  • Subjectively sequential time extending beyond death
  • Theological voluntarism/divine command theory
  • Literal and unironic belief

The first three in particular seem to be closely associated with each other. For instance, it’s often said that people want to believe in God because they don’t want to die; in other words, they see the prospect of an afterlife, or possibly reincarnation, as following from the idea that God exists. There’s also an implicit assumption in theism that God is good and/or loving, which, unless you accept the ontological argument (which, alone among the best-known “proofs” of God’s existence, builds goodness into the definition of God as a maximally great being), has no connection with whether God exists or not. In fact I strongly suspect a lot of fundamentalist evangelical Protestants don’t, deep down, believe God is good at all, but are afraid to admit it even to themselves because God would be telepathic and know they believe this. Nonetheless their public view is that God is good and just.

In each case you can uncouple the bullet-pointed belief from theism. It’s entirely feasible to believe in an afterlife in isolation, with no God. There are also Christian physicalists, who believe God will re-create us all in superior physical form at the end of time, with no separate entity bearing our consciousness in the meantime; Jehovah’s Witnesses may fall into this category. Alternatively, there are religions which are firmly atheist but believe in souls, such as the Jains. So far as I can tell, even traditional Judaism, as opposed to the Reconstructionist form, is pretty much agnostic about what happens after death, and as a Christian I think it’s important for ethical reasons to ignore any claims about what happens beyond this life, if anything. My views on the nature of time make it a bit involved for me to go into this just now without it taking over the post. Theological voluntarism and divine command theory are the idea that God alone makes ethics meaningful, a belief which can only sincerely be held by a psychopath. Finally, literal and unironic belief relies on Biblical literalism, which is seriously compromised by Biblical criticism; there is even a project to imagine history proceeding as young Earth creationists and other Biblical literalists suppose, but with no God. Incredibly, there really are people who believe that and are atheist.

I very much get the impression that some anti-theistic atheists really would prefer theistic Christians to be conservative evangelicals, and I seem to remember Richard Dawkins saying that liberal and progressive Christianity is dangerous because it represents a kind of gateway drug to extremism. It also seems to me that some anti-theists simply think that’s what Christians are like as a bloc, and I think this is our fault, because of those of us who are particularly strident and emphatic about our bigotry. In fact churches can be excellent factories for anti-theistic atheists, and we’re responsible for creating them in many cases. But on both sides there is a tendency, which I’ve probably exhibited here, to caricature the other side, whereas in fact there could be said to be no sides at all, just people dedicated to the truth.

Aquatic Apes

I’ve decided to try writing more spontaneously rather than delving a lot into sources of information like I have been recently. It’s good exercise for the memory and makes for a livelier style. Maybe it’ll also end up being less accurate, as I’m drawing on stuff from the 1980s here.

The other day someone posted a meme about humans being cute for various reasons. In general it was a good meme, but one probable inaccuracy jumped out at me. It was something like “although they’re not aquatic or amphibious, humans flock to be near water just for the pleasure of splashing about in it”. Fair enough as a bit of a meme I suppose, but probably wrong, because some people think we were once “aquatic apes” as the phrase has it, notably Elaine Morgan and manwatcher Desmond Morris. That’s in quotes because the idea isn’t that we used to be like dolphins, living in the sea full-time, but amphibious, living on beaches and in the sea, perhaps foraging in both and escaping from predators by wading into the water. It’s also suggested that the surviving species of elephant have a similar history. This is in contrast to the more usual savannah theory, which claims that we are descended from an ape who had to adapt to the veldt when the African rainforests dwindled due to the world drying up. I’m going to talk about this bit too.

During the Miocene there was a huge number of different ape species. This has led to the human evolutionary “tree” being described as more like a bush, because some of those species also show parallel evolution, becoming steadily more like hominins while in fact being our sister groups. The world was wetter at the time because the Tethys Ocean, which encircled the equator, allowed warm currents to flow all the way round the globe, meaning there was no permanent ice in the Arctic and therefore more water available to the planet’s weather systems. This in turn meant larger rainforests. Then North and South America joined at the Isthmus of Panama, closing the seaway between them, so the warm current swirled round the Gulf of Mexico and headed north, where precipitation increased and snow and ice built up, increasing the planet’s reflectivity and cooling it in a vicious circle which also dried it.  Hence the rainforests shrank and some apes were forced onto the savannah, where according to Elaine Morgan they then died out, but according to other people they evolved into humans.  Morgan resolved this problem to her own satisfaction by suggesting that our ancestors survived by becoming amphibious and living on beaches and in the sea.

This is the evidence cited to support this claim:

  • We have a diving reflex.  If we are for some reason submerged, our hearts slow down.
  • We are largely hairless.  The body hair we have follows a streamlining pattern.
  • We have more breath control than other apes have.  Think of the hooting made by chimps.  They do that because they can’t control their respiration.
  • We have a hymen which protects us from sand entering our reproductive systems before penis in vagina sex takes place.
  • Penis in vagina sex usually occurs face to face as in other aquatic mammals.
  • The female orgasm.  I can’t remember the argument for this.
  • Large amounts of adipose tissue in breasts, enabling them to float and suckle young more easily in water.
  • Long scalp hair to which babies can cling in water.
  • Downward-facing nostrils protecting us from accidentally inhaling water.
  • Bipedalism is easier in water and is adopted by other apes when they are wading through water and may therefore have first evolved due to this lifestyle.

There may be other reasons but those are the ones I can remember and as I stated earlier I’m trying to research less and type more spontaneously.  There are also a number of other observations which don’t pertain directly to the human body:

  • There’s a gap in the hominin fossil record of several million years.  I can’t remember where this gap is supposed to be or whether it’s still there, since Morgan’s ‘The Descent Of Woman’ was published in 1972 CE.
  • All Afrikan primates except humans carry a retrovirus sequence of baboon origin written into their genomes.  No non-Afrikan primates do.  This suggests that our ancestors were, for whatever reason, isolated from other primates when this happened.
  • The oldest hominin remains, including tools, are found in Ethiopia and move south into the Rift Valley with time, suggesting that we spread from the Gulf of Aden southwards rather than from the Congo.
  • The first human stone tools are made from pebbles, suggesting that the technology arose first on beaches.

There’s also a side argument that succeeds or fails separately from the aquatic ape hypothesis, that elephants also had an amphibious phase in their evolution due to several features they have in common with humans but not mammoths.

One reason Morgan made this claim was that she believed palaeoanthropology focussed too much on male bodies and that if female bodies became the focus a number of traits would be easier to explain, namely the ones listed above.  Humans considered as female make much more sense as amphibious life forms than humans considered as male savannah-dwellers.  There is, in other words, a strong feminist motivation in her acceptance of this hypothesis, or conversely, a strong patriarchal motivation in the establishment’s rejection of it.  Now to me the interesting aspect of all this is not directly whether the hypothesis is well-corroborated but what it says about the scientific establishment and academic thought and research, particularly from a pro-feminist perspective.  It’s also interesting to contemplate how I perceive it.

The hypothesis is generally viewed as pseudoscientific and thoroughly refuted, but it’s recognised that it still surfaces from time to time, and there is some endorsement from celebrity science popularisers such as Desmond Morris and David Attenborough.  One issue with it is that because little of it refers to bones and teeth, fossilised hominin remains are hard to assess on this basis.  It can be asserted that we have a hymen, breasts, approach hairlessness and so forth, but none of that has to do with the skeleton.  Against this, and this is just my opinion, adaptations to bipedalism are reflected in bones and joints.  It is true, though, that the fossil record is difficult to use to back this up, and this highlights a general problem with the reconstruction of vertebrates from most fossils:  soft parts are rarely preserved compared to hard parts.  This applies particularly to non-avian dinosaurs, who, being closely related to birds, might be expected to have had structures like wattles and combs, but it’s unlikely we’ll ever know unless we find alien video recordings of them or something.  Pebble tools, on the other hand, clearly are preserved, and these are hard “parts”, so the hypothesis doesn’t in fact rest on soft-part evidence alone.

I’m not a scientist.  I have a fair bit of scientific knowledge and am aware of the scientific method, but I’ve done little research of my own since I finished A-level biology.  Not none, because some of my herbalism-related CPD involved original quantitative research, but I’m not a palaeoanthropologist by any means.  Gutsick Gibbon, however, is, and it seems fair to bow to her superior knowledge and experience.  The issue is with the source.  Elaine Morgan’s perspective was informed by her gender and her allegiance to feminism:  another of her books is ‘The Descent Of Woman’, which emphasises the increased explanatory power of a model of evolution which sets female bodies as the default rather than male ones.  There’s a strong emphasis on “Man The Hunter” in traditional palaeoanthropology, which portrays men as going out to hunt dangerous prey and bringing it home to the cave while women stay in it, do a bit of foraging and take care of the children, and which also holds that most of the nutritional value of the food they ate came from animals rather than plants.  Apparently, though, this is not reflected in hunter-gatherer societies as observed by Western anthropologists.  The trouble is that we tend to project our own ideas onto the past, and that hunter-gatherer societies today, rather than being remnants of the Stone Age, have just as long a history as Western civilisation and its predecessors.  The other aspect of this is that Morgan was probably surrounded by men in her field, and therefore she and her opinions were likely to be at a disadvantage, which leads to more people working to refute her hypothesis unsympathetically.  This is why I find Gutsick Gibbon’s rejection of it interesting, as she doesn’t seem to be motivated in such a way.  However, it may also be that she’s influenced by the general dismissal of the idea by her colleagues and mentors.  All of this brings up the question of how scientific theories change.

All of this is therefore about bowing to the opinions of experts who can fairly be assumed not to be biassed in unhelpful ways.  There’s a degree of trust in professionals there which may have been eroded in recent years, leading to various beliefs being accepted which would previously have been ignored.  To my mind it goes hand in hand with a lack of deference, which is often a good thing.  For instance, nowadays there seems to be either more awareness of corruption in authority or more actual corruption, and where it’s detected accurately, this must surely be a good thing.  However, this sceptical approach can itself be dubious.  An opinion is not correct, or worth considering in its own right, simply because someone holds it, compared to more learnèd opinions.  Experience from outside a field may not be valid within it.  OFSTED comes to mind here.  Why should outsiders be listened to or taken seriously by educationalists and teachers with years or decades of experience?

Also, sometimes a particular characteristic can give rise to excessive sympathy.  For instance, there is a Black supremacist group which maintains, among other things, that melanin alone is the seat of consciousness and therefore that only Black people are conscious.  As a White person, I know this isn’t true.  They also believe that a Black scientist working thousands of years ago invented the White race through genetic manipulation.  There is certainly a sense of empowerment in these claims, but the first occupies a special epistemological position because White people actually know that it is not the case.  Regarding the origin of fair skin, this has happened several times in hominin evolution, notably among the Neanderthals, but the most recent appearance is apparently among the Eastern Hunter-Gatherers of the future Russian steppes about ten thousand years BP (BP = before 1950).  Another, similar example is the spelling, which I’ve adopted, of Afrika with a K.  The reason given for this doesn’t seem to be very soundly based.  The claim is that the spelling “Africa” is entirely colonial and should therefore be rejected.  That said, I also have the impression that that spelling is primarily promoted by Afrikan Americans rather than actual Afrikans, and the K is also used in the Afrikaans spelling of the word, Afrikaans often being seen as a language of conquest.  Another big issue with this claim is that Afrikan languages which don’t use Latin script would use neither C nor K:  in mediaeval times the name of the former Roman province, Ifriqiya, was written with the Arabic letter Qaf (and of course the word “mediaeval” is Eurocentric in any case).  It is, however, spelt “Afrika” in Maltese and Cape Verdean Creole, and also in Swahili.  In Wolof, it’s actually spelt “Afrig”!  
So the issue here seems to be that the K spelling, though it does exist in many Afrikan languages, may reflect a mistaken claim made by Afrikan Americans about the culture of an entire continent about which it’s impossible to generalise; but that mistaken claim may in turn arise from the people concerned lacking the opportunity or the information to recognise that it’s dubious, and therefore I’m still going to spell it with a K.  Maybe there’s something I don’t know, but the truth seems to be that the spelling varies and does sometimes include a C in languages which the people concerned own emotionally and consider to be Afrikan languages, such as English, French and Portuguese, whereas the claim to the contrary seems to come mainly from outside the continent.  Maybe I’m wrong, and I’m very open to that possibility.

Elaine Morgan, who sadly died in 2013 CE, was a somewhat surprising person. Her degree was in English and she was a TV scriptwriter, so she was an outsider with respect to palaeoanthropology.  However, the aquatic ape hypothesis was not originally hers:  it was formerly mainly promoted by the marine biologist Alister Hardy.  A marine biologist is not an anthropologist, of course, but he was a life scientist.  Her motivation for adopting the hypothesis was, as I said, that the idea of “Man The Hunter” is androcentric but leaves a gap if it’s rejected, as it’s then necessary to explain the differences between humans and other apes some other way.  It has also been claimed that she didn’t realise Hardy’s suggestion was a glib, off-the-cuff remark never intended to be taken seriously. This is not so:  he actually wrote the foreword to the second edition of her book.

Most naked mammals with subcutaneous fat are aquatic.  This is the basis of Morgan’s claim.  Philip Tobias, co-discoverer of Homo habilis and a shaper of the savannah hypothesis, eventually came to reject the latter.  David Attenborough, and Desmond Morris, whose earlier promotion of “Man The Hunter” had irked Morgan and prompted her thinking in the first place, both appear to support the hypothesis and her.  One startling claim of hers is that early hominins were already relatively hairless.  I’ve already mentioned that the idea that our ancestors were as hairy as chimps and gorillas may be mistaken, because the orangutan, the most conservative living great ape, is considerably less hairy than either, and it’s already established that gorillas’ and chimpanzees’ knuckle-walking evolved separately after they diverged from their common ancestors, so their hairiness could equally be convergent.  Looking at it as a through-line from the common ancestor of orangutans and humans to ourselves, its predecessors, related to gibbons, would’ve been hairier, and its descendants may have gradually lost their hair until we reach today’s situation with humans.  This doesn’t mean, though, that hominins didn’t habitually enter the water, because that very lack of hair could’ve made it easier:  inherited characteristics appear before they’re tested.  Moreover, our hair follows the lines of water currents across our bodies as if we were swimming forward, with axillary and pubic hair, for example, in regions facing away from the flow, and with tracks of lanugo or terminal hair running in the same direction.

An example of the kind of criticism Morgan received was that her ideas were “thought up by a Welsh housewife”.  Not only is there nothing wrong with being either Welsh or a housewife, but that also fails to take into account that she was a scriptwriter for ‘Doctor Finlay’s Casebook’ and later ‘The Life And Times Of David Lloyd George’ and the TV adaptation of ‘Testament Of Youth’.  It might be a valid criticism of her work that her degree was in English Literature rather than a science, but this wasn’t the focus.  Instead, her academic credentials and career success were ignored completely, she was apparently assumed to be primarily a homemaker, and her Celtic heritage was associated with ignorance and low intelligence, so the criticism was both racist and sexist.  Her response, perhaps typically for a woman of her time, was to point out that it was an eminent male Sassenach biologist, knighted for services to science and a Fellow of the Royal Society, who had previously proposed the idea.  In this case, though, Hardy’s ethnicity and gender didn’t protect him either, as his ideas were equally pooh-poohed by the scientific establishment.  It doesn’t mean he was right of course, but it’s telling that the response to the same ideas, when proposed by a Welsh woman, focussed not on their validity or otherwise but on her identity.  All that said, it doesn’t mean she was right either, and her position doesn’t confer infallibility.  She could be expected to have some kind of academic rigour, but the fact is that she was not a scientist.  Creative writing, however, does benefit from thorough research, and I’m guessing that her work on ‘Doctor Finlay’ increased her knowledge of human biology and of the process whereby diagnoses are made on the basis of evidence.  Perhaps another main issue is that she was to some extent an autodidact.

Here comes another bullet list:

  • The only mammals with descended larynxes are humans, a species of North American deer and several species of aquatic mammals.
  • The only mammals which are born covered in vernix are humans and harp seals.
  • Baby humans have five times as much fat proportionately as baby baboons.  When immersed in water they float face up due to the distribution of that fat.
  • Not only is our sense of smell weak because we’re apes, but it’s actually even weaker than that of other apes.  The only other mammals with such a poor sense of smell are aquatic, notably whales.  This is supposedly because breath control makes smell less functional.  Sperm whales, for instance, regularly hold their breath for up to ninety minutes.
  • We sweat more than any other species of mammal.  On the arid savannah, this would be a major liability.
  • The brain needs high levels of both ω-3 and ω-6 fatty acids, which are most common in the marine food chain.  Just as a side note, although these fatty acids are generally used as an argument for eating fish, organ meat and wild animals, they’re plentiful in marine algae and are not made by animal sea food sources themselves, so this is not an argument not to be vegan.

Incidentally, it’s notable that the points about vernix and baby fat are likely to be more evident to people who have given birth than those who haven’t.

It was recently found also that the “savannah” sites where hominin fossils are found have pollen from plants only found in forests, even including liana vines, which are only found in very dense rain forests.  Hence the theory that humans, sweating profusely and becoming dehydrated on the savannah, evolved there seems now to have been refuted.  Humans did evolve there to some extent, but the areas which are savannah now don’t seem to have been savannah back then.  Although the savannah hypothesis seems to have been refuted, it hasn’t been replaced by the aquatic ape hypothesis.

Even so, a wide-ranging comparison of humans and aquatic mammals, even beavers and otters, shows little similarity.  Human swimmers and divers also clearly have health problems arising from their activities, such as decompression sickness (“the bends”) and swimmers’ nodes in the external auditory meatus due to water getting trapped in the ears during diving.  It is the case that diving animals do get the bends, and there are even fossils of marine reptiles showing evidence of it, so decompression sickness on its own may not be adequate evidence against the hypothesis, but it isn’t at all clear why swimmers’ nodes would develop if we used to immerse our ears regularly.

What I take away from all this is a feeling of uncertainty.  Although I can clearly see that Morgan’s ideas were rejected partly for ad hominem reasons, or at least that this was a factor in their rejection to a greater extent than for the ideas of others, there are clearly people out there with a lot more knowledge and experience in the field than I have who continue to reject them, presumably with good reason.  It helps that a famous female palaeoanthropologist rejects them too; I wonder if this is connected with the wave of feminism each is associated with.  The fact that they’re also endorsed by respectable science popularisers with backgrounds in relevant fields seems to back them up, but in saying that I seem to be committing the same fallacy I’ve just accused others of committing against her.  But one thing is for sure:  Morgan may be wrong, but the objections made to her were primarily sexist and to some extent racist, and we’re now left with no hypothesis at all regarding the circumstances of human evolution, which seems most unfortunate.

Is Revelation A Source Of Knowledge?

This is not about the Book of Revelation, though as I typed it I realised it sounded like I was about to do some exegesis on the last book of the Bible. No, it means revelation in the sense of an experience of divine origin. The other thing is, this is something which I’ve been trying to sort out in my own mind for about fifteen years.

This may actually be quite a short post as it merely aims to pose a question, not to answer it.

I’ll start with a popular analytic definition of knowledge as justified true belief. A stricter standard, which I’ll use here, is belief which cannot rationally be doubted. There seem to be two sources of knowledge meeting that standard. One is direct experience. That is, although one might be dreaming, one cannot deny that one is currently experiencing a particular sensory quality while it’s happening. These are known as qualia: qualities or properties as experienced or perceived by a person. The singular is “quale”. Although the ringing in one’s ears may not reflect an actual sound, and the odour of burning may be the result of an imminent stroke, the fact remains that one does have the relevant experience. This is not in doubt and cannot in fact be doubted rationally.

The other source of knowledge is logic and mathematics, or at least it seems to be. For instance, 2+3=5. This can be known. It can also be known that if it’s raining then it’s raining. One might also go on to claim that two parallel lines never meet, but this is where a possible flaw in this source of certainty emerges, because it famously turned out not to be a necessary truth. Euclid’s Fifth Postulate, which attempts to capture the behaviour of parallel lines, is oddly wordy and unwieldy, and this is because the parallel-line claim turned out to be not axiomatic but based on observation; in actual physical space, “parallel” lines don’t always stay the same distance apart and can in fact meet at an enormous distance. Likewise, logic’s reliance on bivalent truth values may be a similar flaw, as two values may not be enough. There might be meaninglessness, for example, or tense-based truth: something might be true now but false in the future. All that said, logic and mathematics seem to be a good basis for certainty independent of experience: multivalent logics exist and so does non-Euclidean geometry. Incidentally, it’s worth noting that the number of things which can be known from this source alone is infinite, so it isn’t true that even a fairly extreme form of scepticism leaves one with knowledge of almost nothing.
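Since multivalent logic does some real work in this argument, here is a minimal sketch, purely illustrative, of one well-known system: Kleene’s strong three-valued logic, in which a third value sits between true and false for statements that are, say, meaningless or not yet settled. (The numeric encoding is just a convenience, not part of the logic itself.)

```python
# Kleene's strong three-valued logic, with a third value U ("unknown")
# between True and False. Encoding T=1, U=0.5, F=0 lets min, max and
# 1-x implement the standard connectives.
T, U, F = 1, 0.5, 0

def k_and(a, b):
    return min(a, b)   # conjunction is as true as its "least true" operand

def k_or(a, b):
    return max(a, b)   # disjunction is as true as its "most true" operand

def k_not(a):
    return 1 - a       # negation swaps T and F and leaves U fixed

print(k_and(T, U))  # 0.5: "true and unknown" stays unknown
print(k_or(T, U))   # 1:   "true or unknown" is true regardless
print(k_not(U))     # 0.5: the negation of an unknown is unknown
```

The point is only that perfectly consistent logics exist in which bivalence fails, just as consistent geometries exist in which the Fifth Postulate fails.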

Suppose, though, that you believe in an omnipotent source of reliable knowledge such as God. It doesn’t have to be God, but I am of course theist myself. If you’re not, this will probably sound highly arcane and theoretical, but you could treat it as a thought experiment, or perhaps apply it to some other force acting on consciousness; the point is that it may be logically possible for what I’m about to suggest to happen. Anyway, here it comes:

If an omnipotent and omniscient entity exists, that entity would be able to create knowledge in the human mind. Henceforth I’m going to call that entity, theoretical or otherwise, God. Putting it simply, God can do anything, so God can make people know things. That means that God can remove doubt when something is true, and if there is a God, revelation can be a source of knowledge.

However, there’s a caveat here. God doesn’t do everything God can do. When I was a child, I saw a graffito on a fence post saying “I hate you”, and for some reason interpreted it as God’s message to me. Don’t ask me why. I rushed home rather distressed and came into the kitchen, where my mother was listening to a song on cassette called ‘Our God Reigns’. In my perturbed state I heard this as “Our God hates”. I asked her if God hated me and she laughed, replying, “No! God is incapable of hate!”. This didn’t reassure me much, because I was aware that the concept of God included omnipotence, meaning that if God so chose, God could indeed hate. This is the prototype of a belief about God I hold today: that God is capable of anything but doesn’t invariably act on that capacity. Hence God can hate but doesn’t, or at least God chooses not to hate humans. Applying this to the matter at hand, it may be that God is able to force us to know things but chooses not to do so. Hence we are left with confident belief at most, rather than knowledge in the strict sense of something it’s rationally impossible to doubt.

To me, it seems quite invasive and controlling for God to cause this to happen in one’s consciousness. It seems to violate the principle of free will. However, it could be that God would respond to one giving consent to bring this about in some way. “God I believe: help my unbelief.” Would it happen then? Prayers are not always answered the way one might expect. It’s undoubtedly also true that omnipotence means God could create a feeling of complete confidence in something which isn’t so, which is not knowledge.

I think that’s the issue stated as clearly as I can, but there’s another approach to this based on the general use of language. In many cases, if we were to insist on exact meanings for words, they’d end up not referring to anything. Nothing physical is perfectly spherical, perfectly flat or perfectly smooth. Hence if I were to say something like “Here is a smooth one metre sphere resting on the flat upper face of a two metre cube”, it would fail to refer to any real situation because the “sphere” wouldn’t be perfectly spherical, exactly a metre in diameter or perfectly smooth, and it wouldn’t be resting on a perfectly flat perfect cube exactly two metres on an edge. Nonetheless I might seem to have referred to a situation correctly and usefully, and to be that nitpicky about language and reference is plainly silly. Now for the situation with God causing me to know something. Maybe my standard of what constitutes knowledge is too high with justified true belief. Maybe knowledge is just belief that is near enough to certainty that it would make no odds. Otherwise we’d be stuck with a concept of knowledge useless for a wide variety of practical situations.

So that is basically the question I’m asking and a few considerations related to it. It’s also something I asked a few times on Yahoo Answers of all places in the vain hope of getting a sensible answer. All I got in the long run was some legalistic moderator saying I shouldn’t ask the same question more than once, even though I asked it several years after failing to get a helpful answer. Ah well.

Our Shadow Twins

There more or less have to be parallel universes because this Universe is “fine-tuned”. The alternative would seem to be to require a Creator, and although there is a Creator, or rather a Sustainer because God is not within time, nothing in the Universe should be allowed to imply or suggest that there is one as that would be a “God Of The Gaps”.

I should probably explain fine tuning. There are certain constants governing the relative strengths of the four known forces in the Universe which, if they varied even slightly, would make rocky planets and life as we know it impossible. Examples are as follows:

  • Electromagnetism is a long-scale sextillion (a one followed by thirty-six zeroes) times stronger than gravity, comparing the forces between two protons. If that ratio were much smaller, i.e. if gravity were relatively stronger, the Universe would have collapsed in on itself before stars could have formed.
  • When hydrogen fuses into helium-4, the nuclei lose 0.7% of their mass as energy. If the figure were 0.6%, deuterium would not be stable and only hydrogen would exist, and if it were 0.8%, all the hydrogen in the Universe would’ve fused within a fraction of a second of the Big Bang and there would be no atomic matter as we know it at all. That said, that is quite a large range, determined by the strong nuclear force.
  • If dark energy was slightly stronger compared to gravity, stars would not be able to form because they’d be ripped apart by the expansion of space. If it was slightly weaker, the Universe would’ve collapsed by now.
  • If the number of extensive spatial dimensions were other than three (any others posited are curled up far too small to matter here), gravity would weaken with distance at a different rate, which would again either cause collapse or make stable orbits and stars impossible.
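The first of those ratios is easy enough to check for yourself. Here’s a minimal sketch in Python comparing the electric and gravitational forces between two protons, using standard textbook values for the constants (the separation cancels out of the ratio because both forces obey the inverse square law):

```python
# Electrostatic vs gravitational attraction between two protons.
# Both forces go as 1/r^2, so the separation cancels out of the ratio.
k = 8.988e9      # Coulomb constant, N·m²/C²
e = 1.602e-19    # elementary charge, C
G = 6.674e-11    # gravitational constant, N·m²/kg²
m_p = 1.673e-27  # proton mass, kg

ratio = (k * e**2) / (G * m_p**2)
print(f"{ratio:.2e}")  # ≈ 1.2e36 – a long-scale sextillion
```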

There are several other examples, but these taken together are enough to illustrate the issue, because the improbabilities multiply, and some of them even seem to be drawn from an infinite range of possibilities, usually very boring ones, because they involve either the Universe collapsing in on itself almost immediately after the Big Bang or its merely consisting of hydrogen atoms thinly spread throughout space. The situation we actually find is fantastically improbable because of this. It’s also been suggested that the specific existence of the element carbon is suspiciously unlikely, and water is also such an unusual compound that it too is unlikely, but the details of these once again involve the strength of the strong nuclear force in the case of carbon and that of electromagnetism in the case of water. There is presumably a version of water in a parallel universe which is still H2O but is a gas at well below its current freezing point, contracts when it freezes, is not a good solvent and so forth. In fact probably most versions of water are like that. Likewise, and I admit this is a very sloppy calculation because several different forces are involved in holding atomic nuclei together and they don’t obey the inverse square law, the heaviest element with stable isotopes is lead – bismuth-209, long thought stable, turned out in 2003 to have a half-life of about 2 × 10¹⁹ years – and if the strong nuclear force were forty percent weaker, stable carbon atoms could not exist and carbon-based life would be impossible.

Because of all this stuff, some theistic religious people believe that there must be a God. However, there’s a problem, or rather several problems, with that argument. Firstly, even if it does entail a creator, it fails to entail a God like the one in the Bible, Qur’an or whatever. Secondly, the Universe which actually exists is almost completely empty and life seems to be a mere detail, possibly on only one planet, and even if widespread it would still only have come into existence on tiny grains in a vast void. In fact, this almost completely empty void may be a clue to the nature of reality. What we’re confronted with when we look into the night sky is unimaginably enormous distances between stars, whose visible examples are unsuitable for life as we know it, organised into galaxies which are themselves separated by relatively much smaller distances and organised into clusters forming a kind of “foamy” arrangement around enormous voids, like bubbles. Only occasionally are the conditions suitable for the concentration of nuclear matter, and even more seldom do rocky globes form. When we consider Earth, we realise how special she is, but that exceptional nature is contingent on the fact that we are here in the first place to do the considering. The anthropic principle says the same is true of the Universe: there are plenty of other universes but they don’t have any life or observers in them. Ergo, there are parallel universes. The alternatives seem to be enforced belief in a Creator with a capital C or a multiverse, and that multiverse would likewise consist almost entirely of empty universes which have either already ceased to exist or contain only widely spaced hydrogen atoms and perhaps molecules floating in otherwise empty space. Although I’m a theist, I choose the latter.

The question then arises of how inevitable anything is. Alternate history usually depends on PODs, Points of Divergence, such as Hitler dying before coming to power or JFK not being assassinated, and from a macroscopic level it seems entirely plausible that Henry Tandey could have decided to shoot Hitler on 28th September 1918 or that Lee Harvey Oswald missed his target on 22nd November 1963. But in fact these PODs are only apparent. Free will is probably illusory, there’s a whole chain of unknown events influencing those moments and for all we know that chain of cause and effect stretches all the way back to the beginning of the Universe. It will undoubtedly be the case that slight variations in physical constants do indeed lead to differences in the universe, but what we imagine is easily possible could turn out to be completely impossible. The question of whether this is true depends on chaos theory and quantum physics.

I’ll take chaos theory first. This is the whole butterfly effect thing. Edward Lorenz found in the early 1960s that his weather-forecasting programs gave completely different results depending on how many decimal places the input data were rounded to. The Planck length and Planck time amount to a fixed “resolution” of the Universe, and there are very many decimal places between what we can measure and that limit; nor could the missing digits ever be measured, because the sheer number of perfect instruments required would itself nudge the weather in a particular direction. So there seems to be only a weak connection between cause and effect, and for all we know, as David Hume argued, none at all. If science is supposed to be based only on what can be observed, cause and effect can’t be observed, and it’s therefore problematic to include it in science at all, which rather undermines the whole of science. That said, it does still seem that in principle cause and effect often operate deterministically. You can’t usually expect to jump off the roof of a skyscraper and not fall to your very probable death. Maybe the improbabilities are smoothed out by the arbitrary nature of the universe on a small scale. I don’t think chaos theory is a very promising reason to posit that things could have been different.
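This sensitivity can be demonstrated without a weather model at all. Here’s a toy sketch using the logistic map, a standard textbook example of chaos; the starting value and the size of the nudge are arbitrary choices of mine:

```python
# Logistic map x -> r·x·(1 - x) at r = 4, a classic chaotic regime.
def logistic_orbit(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2, 60)
b = logistic_orbit(0.2 + 1e-10, 60)  # same start, nudged in the tenth decimal place
diffs = [abs(x - y) for x, y in zip(a, b)]
# The discrepancy grows roughly exponentially: negligible at first,
# then eventually as large as the values themselves.
print(diffs[10], max(diffs[40:]))
```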

Quantum mechanics is another matter entirely. There are no hidden variables, or at least no local ones, as Bell’s theorem shows. That is, if a radioactive atom is observed, there is no way to predict when it will decay into an atom of another element, and that isn’t just because we’re unable to observe processes going on at a sufficiently small scale, but because there simply is no causal chain involved at all. All that can be done is to predict that half of a sample of carbon-14, for example, will have decayed in 5 730 years, give or take forty years, and that prediction only approaches exactly fifty percent as the size of the sample increases. However, these are acausal processes. There is absolutely no chain of events, other than the formation of the atom, leading up to its destruction. It just happens.
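The carbon-14 statistics can be sketched in a few lines of Python. This is only a toy simulation assuming the exponential decay law; the sample size and random seed are arbitrary:

```python
import math
import random

random.seed(42)
HALF_LIFE = 5730.0              # carbon-14, in years
rate = math.log(2) / HALF_LIFE  # decay constant λ

# Each atom's lifetime is an independent exponential draw: nothing about an
# individual atom predicts when it goes; only the ensemble is predictable.
lifetimes = [random.expovariate(rate) for _ in range(100_000)]
surviving = sum(1 for t in lifetimes if t > HALF_LIFE) / len(lifetimes)
print(surviving)  # close to 0.5, and closer still with a larger sample
```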

Hence there are two contrary factors involved in the nature of parallel universes. On the one hand, there is the causal chain stretching back to the Big Bang, and on the other there are acausal events associated with quantum events. The question then arises of whether the Big Bang itself, or its immediate aftermath, was strongly associated with such events. It could be that things have always been different or that all significant events in our own history can be traced back to quantum events after the beginning of the Universe. All that can be said confidently is that if a known chain of events can be traced back to a quantum event, there are parallel universes where this turned out differently.

A fairly trivial example is the issue of the discoveries of technetium/masurium and astatine/alabamine. The actual names of these elements are technetium and astatine. Neither has stable isotopes. The reason their names could have been different is that they weren’t discovered for sure when they were first apparently identified. In 1925, German chemists bombarded a mineral called columbite with an electron beam and appeared to detect a faint X-ray signature of what would be element 43. Later researchers, however, could not replicate this experiment, and consequently, although it had been named masurium, it was still considered undiscovered. If fewer of the atoms of this element in the sample had decayed by the time the attempt was made to reproduce the result, element 43 would have been confirmed and would have been named masurium. Likewise with alabamine: scientists at Alabama Polytechnic believed in 1931 that they had discovered the missing halogen which belonged under iodine in the periodic table, but their method was found to be invalid. The case of alabamine is slightly different, which I’ll go into in a moment. But because of the manner of its discovery, there undoubtedly is a parallel universe in which technetium is called “masurium”. That’s a real place.

The case of astatine is slightly different. Astatine is only a couple of nucleons too heavy to be a stable element. Using the same rough and ready calculations as I did with carbon, for there to be a stable isotope of astatine the strong nuclear force would only have to be 0.08% stronger than it is. This may be the wrong figure but the principle is the same: it would only have to be a hairsbreadth stronger than it is “here” in our timeline for stable astatine to exist. In such a situation, polonium would also have a stable isotope and therefore would be less dangerous and would not have been used to poison Aleksandr Litvinenko. This, however, is a minor detail because probably it would just mean francium would’ve been used instead.

The two scenarios therefore represent two different ways alternate histories can happen. In one, the Universe has been different since the Big Bang, astatine is a stable element and Litvinenko was poisoned using francium instead of polonium. In the other, a timeline forked from ours in 1925 and is probably practically identical to our own except that technetium is called masurium.

This brings me to the Mandela Effect. Nowadays, most people seem to have reached the conclusion that the Mandela Effect is only accepted by cranks, and I would agree that there’s a lot of noise in the signal, but in the masurium/technetium example we have a real live Mandela Effect, present within the scientific community itself, which pivots on an acausal process. This is inside the establishment, although it looks very different from a typical ME. For this reason, I will continue to maintain that parallel timelines are a valid explanation for some MEs. That’s it: that’s all I’m going to say about this for now, because I know it’s generally considered crazy and you’re going to think I’ve gone to Nubicuculia if I go on.

There have been attempts to set up quantum lotteries. Although these work, as far as I know there are no serious lotteries using this principle. This is a pity, because if there were, they’d amount to real forks in history set off by quantum events. As it stands, the only examples I can think of which involve genuine quantum forks, other than masurium/technetium, are very improbable, although there are guaranteed to be timelines where they happened. For instance, radioactivity was first discovered when Henri Becquerel left some uranium salts next to a photographic plate in a drawer and found the plate had been blackened. If this hadn’t happened, radioactivity would have taken longer to be discovered. However, the only way that could have come about is if the number of atoms decaying was so small that it wasn’t enough to affect the emulsion on the plate, and considering the amount of substance involved, that’s very improbable. That said, somewhere out there such a timeline does exist. There’s presumably a timeline where radioactivity has yet to be discovered, which would leave a lot of mysteries about the Universe, such as how stars work or how old this planet is. There would be no radiotherapy, the Second World War would not have ended in the way it did, there would be no atomic batteries or nuclear power stations, no Cold War and so on. It is a fantastically improbable universe. But it does exist out there somewhere, and is a very different world. Even the people who live in it don’t understand it, because a big piece of the puzzle is missing. However, radioactivity can be discovered at any time. History is teetering on a knife edge in that world.

The question now arises of who we are. If a POD has occurred after our conception in any parallel universe, are we the same people? My ME explanation requires transworld identity, because I believe memories are transferred between universes when the brain is in an unusual state such as a stroke, seizure or coma. Transworld identity is the belief that an object can exist in more than one possible world, including the actual world (and here the word “actual” really just means “this”, and “actual world” means “here”). The alternative theory is that counterparts exist in other possible worlds but that they’re not the same thing. David Lewis held this view, for example. It’s feasible that most people would hold that one is the same person if a POD takes place after conception, or perhaps birth, rather than before it. If they believe in the transmigration of souls, they would almost certainly hold that it doesn’t require a POD to take place that late, because they would already claim that someone is the same person living a life in another time and place. If they also accepted that karma existed, different circumstances regarding conception might lead to that soul entering a different body, and this could mean that the “same” person could be different in many ways in another possible world, being born in the Congo rather than Canada, in the rainforest rather than Vancouver, and so forth. This is someone else’s belief system rather than mine.

Even so, I do have something in common with people who believe in reincarnation: I don’t actually believe personal identity depends on karyotype. Here’s why. If it turns out that someone has a genetic disorder, they and the people close to them would tend to wish that they had never acquired that disorder rather than wishing they were someone else. These are two different things. Therefore, we don’t identify with our genes and our identity doesn’t depend on having been conceived in a particular way. Nor does it depend on the specific substance of our bodies, because if our parents, particularly our pregnant mothers, had eaten a different diet (such as the potatoes on one side of the field rather than the other, not miso instead of yeast extract or something), it wouldn’t make us different people unless it had a major influence on our development, and possibly not even then. What does that leave? There is no soul, so it isn’t that. Nor is it our genes. Nor is it the substance of our bodies. The answer, I think, is that we are socially defined, both passively and actively. In one sense we are the “software” running on the “hardware” of our bodies, although the metaphor of the brain as computer shouldn’t be pushed too far and it’s important to be aware that other parts of our bodies, such as the endocrine system and the nerves in our digestive system, also form a supervenience base for our psyches. It’s difficult to know how close our brains are to computers and how relevant this is to our identities. In another sense, we are externally defined. For instance, we have the legal concept of “next of kin”, which formalises a custom which already exists in social life: we are siblings, offspring, parents and so forth. 
Therefore, in a parallel universe, a child whose genetic makeup is rather different from this one’s, who has a different temperament and so on, could still be the eldest daughter, have the same name, the same birthdate and so forth, and is arguably the same person. In particular, she might not have the leukæmia which killed her in another universe, because at no point was that leukæmia something anyone in the family owned psychologically: it was a disease attacking her, an outside enemy. I presume this is how many people with cancer approach their illness, but maybe I’m wrong. But that disease could be in her genome.

I don’t know enough detail about how ionising radiation interacts with DNA to be sure about this, and I should probably know more, but I would expect cosmic rays, which are nuclei and protons raining down onto Earth’s surface at near-light speed, to be to some extent the product of nuclear decay and to interact with the molecules in question in such a way as to change the isotope of specific atoms. The existence of radiation in the environment on this planet, whether or not it results from human activity, would certainly be non-deterministic in nature, although the actual presence of that radiation is only technically not so. That is, it’s possible for a scenario like the one described above, with a sample failing to be radioactive enough to affect a photographic plate, to occur, but its probability is infinitesimal. Hence there is an element of pure luck involved in mutation, which means it is possible for minor phenotypical differences between members of the same species in parallel worlds to occur, though only to the extent that this doesn’t influence their fitness to survive, although it does also mean there are extinctions which occurred in one world but not another. However, there is another aspect to identity which suggests the “shadow twins” I referred to in the title.

It’s widely known that ordinary human body cells carry their chromosomes in homologous pairs, which meiosis reduces to a single set in gametes:

[Image: “Overview of Meiosis” by Rdbickel (own work, 20 June 2016), slightly cropped]

It should be noted that the four daughter nuclei in this process are complementary to each other. The one at the top is a perfect counterpart to the one at the bottom, and the two in the middle are counterparts of each other. Therefore, for either of the gametes which led to the cell line associated with who we are, there is a complementary alternative. This means there are at least four possible versions of each of us, even assuming the copying process goes without a hitch, which incidentally it never does. For instance, for a White blue-eyed fair-haired child whose mother is White with brown eyes and dark hair and whose father is White with blue eyes and fair hair, there is another potential version who is perfectly complementary, and two more versions who are partly complementary, because different gametes united. These gametes will have existed at some point, and they might even produce a viable child in the case of fraternal twins. These complementary people probably do occasionally exist in the same world. I would estimate that this occurs in about one in 500 million pairs of fraternal twins. Since in a population of eight thousand million there are around 350 million twins, there’s a roughly even chance that somewhere out there today this situation exists, and there have probably been about six or seven such pairs in the whole of human history, which by the way emphasises just how many people are around today. But in any case, we all have these shadow people, which brings me to the illustration at the top of the post.
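The four-way complementarity can be made concrete with a toy model. This is just a sketch of the idea: it treats each chromosome as a single allele letter and ignores crossover entirely, which as I said never actually goes without a hitch:

```python
import random

def meiosis(haplotype_a, haplotype_b):
    """Pick one copy of each chromosome for a gamete; the complementary
    gamete receives the copy that wasn't chosen (crossover ignored)."""
    picks = [random.random() < 0.5 for _ in haplotype_a]
    gamete = [a if p else b for a, b, p in zip(haplotype_a, haplotype_b, picks)]
    shadow = [b if p else a for a, b, p in zip(haplotype_a, haplotype_b, picks)]
    return gamete, shadow

random.seed(1)
# Toy parents with three chromosomes apiece, alleles written as letters
mum = meiosis(["B", "D", "X"], ["b", "d", "X"])  # brown eyes, dark hair
dad = meiosis(["b", "d", "X"], ["b", "d", "Y"])  # blue eyes, fair hair

children = [(m, d) for m in mum for d in dad]
print(len(children))  # 4 potential versions of the same child
# children[0] is "this" child; children[3] is its perfect shadow twin
```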

This is a fairly famous gender-swapped version of the post-war Prime Ministers of the United Kingdom, which notably has only two men because there have only been two female PMs. The counterparts in question here would usually have different karyotypes. That is, if you are yourself XX, your shadow twin would be XY and therefore usually male. The main situation where they wouldn’t be, incidentally, is complete androgen insensitivity – this is not about trans issues at all right now. However, although we do tend to focus quite strongly on gender as part of identity, there would also be lots of other traits which would differ. We have two children, one of whom resembles one parent quite closely and the other of whom resembles the other. I presume this is because dominant traits from one gamete are more strongly expressed in one than the other. Their shadow twins would be the other way round, which means that they would look very like their siblings, just in a different birth order. Their eyes would also be a different colour. My own shadow twin would still have blue eyes, but also straighter hair. I say that, but the popularly understood traits said to be inherited by single alleles, such as eye colour, are often not. There’s also another sex-related issue. Two intersex-related conditions are Klinefelter’s syndrome and Turner syndrome. The former is XXY and the latter a single X chromosome with no counterpart. These two conditions are therefore complementary, and a Turner person’s shadow twin would be XXY and vice versa. There’s also chimerism. Some people would be reverse chimeras of their twins; for instance, they would be largely cell line A with some of cell line B, but their shadow twin would be largely cell line B with some of cell line A.

It’s also true that each generation of a lineage realises only a quarter of these potential individuals. This means there are also sixteen possible combinations of parents involved, and the number rapidly becomes extremely large. This brings home how unlikely it is that any of us were ever born. Just focussing on the perfect complements, the probability that every person in the world today is their shadow twin is of the order of one in four to the power of eight thousand million. Although this is very improbable, it’s far more likely than the situation I described with the discovery or otherwise of radioactivity.
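Writing that improbability out gives some feel for its size; here’s a quick back-of-the-envelope check, with eight thousand million standing in for the world population as above:

```python
import math

# Number of decimal digits in 4 ** 8_000_000_000, the rough odds against
# every person alive being their own shadow twin
digits = 8e9 * math.log10(4)
print(f"{digits:.2e}")  # about 4.8e9: a number nearly five billion digits long
```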

At this point it becomes clear that there is an issue with the nature of probability. The discovery of radioactivity was genuinely probabilistic and acausal. It could “just happen”, and there’s no need for an explanation. It isn’t so clear that the shadow twin situation could simply happen, because there definitely seems to be a deterministic thread running through the whole of meiosis and fertilisation. Probability is sometimes seen as simply a measure of the frequency of occurrences: for example, half the time a coin comes up heads and half the time tails, so it has a 50/50 probability of coming up either way. This is an empirical approach, as it’s simply based on observation. The other approach is based on rational degree of belief. For all we know, a coin tossed on a particular occasion might come down heads or it might come down tails, and there is no known reason to prefer one outcome over the other. However, there is in fact a cause, each time, for it landing the way it does, presumably to do with how forcefully it was flipped, the angle, air currents and tiny differences between individual coins which make them slightly unfair. For instance, I believe it’s slightly more likely that a coin will land heads up, because I think the tails side is slightly heavier and will tend to weigh the coin down, and I tested this once and found the coin I was tossing came up heads sixty-four times out of a hundred. This helps confirm the hypothesis but doesn’t prove it. Ultimately, there may be two kinds of probability, one deterministic and one not, but the deterministic version could stem from the initial conditions of the Big Bang and therefore not ultimately be deterministic at all. Incidentally, using possible worlds semantics makes it difficult to use certain terminology. For instance, the word “probably” then comes to mean “in most possible worlds”, in other words something like “usually”.
This gets confusing when referring to the theory itself. For instance, I can’t say “most parallel universes have always been separate” because I would then be effectively saying “in most possible worlds, most possible worlds have always been separate”. It could even be that this leads to a contradiction which refutes the theory of parallel universes, and that’s pretty serious because it starts to look like proof for the existence of some kind of First Cause and supports theism or deism to a limited extent.
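Going back to the coin for a moment, “helps confirm but doesn’t prove” can be given a number: here’s a quick sketch of how surprising sixty-four or more heads out of a hundred would be if the coin were actually fair, computed as an exact binomial tail:

```python
from math import comb

# Probability of getting 64 or more heads in 100 tosses of a fair coin
p = sum(comb(100, k) for k in range(64, 101)) / 2**100
print(f"{p:.4f}")  # roughly 0.003: unlikely under fairness, but hardly proof
```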

I am now going to make one of these odd-sounding statements. Namely, “it’s possible that shadow twins exist in other universes”. This could be expanded as saying “there are some possible worlds in which there are some possible worlds where there are shadow twins.” This sounds peculiar, and makes it sound like there are two levels of possible worlds, on whose higher level lies the idea that there are a vast number of arrays of further vast numbers of parallel universes. Using the “rational degree of belief” view of probability, this can be restated as “for all I know, there exist possible worlds where shadow twins exist”. If this is so, it’s possible to imagine the following situation. There is a parallel universe where every representative of a final generation of humans is their shadow twin. In fact there would be several. This uses the criterion of childlessness to select the set of people involved. There’s also the question of whichever cohort includes you. You have a shadow twin, and depending on whether you have descendants you are either in the final generation or one of its recent predecessors.

Getting back to the prime minister picture, these are not photographs of a common type of parallel universe. Not only would the individuals concerned look different besides their gender, and also probably have different personalities, but also these are photographs from a matriarchal society, and quite an odd one at that because the political system of the United Kingdom is otherwise very similar, with Eton, Oxbridge and so forth putting these people in the same positions. In reality, most of the people depicted in the picture would not have become Prime Minister at all because they would have different histories based on their gender. This picture asks us to believe that a woman, Winston Churchill’s shadow twin, would have become PM in 1940, only twenty-two years after Constance Markievicz, which is hard to imagine. Their lives would probably have been much more like those of their sisters, assuming they had any, than their lives in this world. The idea of shadow twins constitutes an interesting thought experiment regarding the nature of gender roles and the patriarchy.

Finally, I’m going to revisit the fringe theory of the Mandela Effect. If there really are shadow twins who are to some extent a sex- or gender-swapped version of oneself in parallel universes, this could sometimes have an interesting consequence which is similar to the idea of a soul of one gender in the body of another as an explanation for gender identity issues. My explanation for hardcore MEs is that individual experiences and memories occasionally get transferred into brains in parallel universes when the brain enters an unusual state. If this happened often enough with a shadow twin, the person concerned could conceivably end up with a different gender identity. However, this suggests that we all go around constantly thinking to ourselves something like “I am a man opening this door” or “I am a woman picking this apple”, when of course we do nothing of the sort. Also, it’s quite an outlandish explanation compared to something much simpler and more easily testable such as chimerism or CAG repeat sequences on the AR gene. Hence I’m going to put that out there, note its similarity to the dubious idea that there are not only souls but also that those souls are gendered, and acknowledge that believing in non-psychological explanations of MEs at all is widely considered dubious. But I do wonder sometimes.

Ethical Intuition And Homophobia

Back in the ’70s, when I was a child, my mother used to read the Bible to me. This was how I discovered that the written Torah appeared to condemn male homosexual acts. There are other takes on this apparently, but they’ve always seemed to be against the grain, perverse interpretations of what was pretty clearly extreme homophobia. At the time though, I didn’t have an issue with it and it seemed entirely logical that if sex was for reproduction, any form of sex which couldn’t lead to pregnancy was morally wrong. This was the simplistic understanding of a nine-year old.

When I was twelve, my English teacher compared homophobia to racism, and asked us, if we were opposed to racism, why would we be homophobic? It was the same kind of issue as far as he was concerned. This seemed an eminently consistent and sensible view to me, partly because at the time I considered racism to be a particularly terrible evil. One influence on my acceptance of this opinion was probably my own queerness, although I had yet to admit that to anyone. Certainly my White friend who was in the same English class as I and was similarly passionately anti-racist persisted in his homophobia for as long as I was aware of his opinions on the matter, which would’ve been another few years. In my case, I remember another pupil calling me “gay” in September 1981 and replying to him that it was terrible that he even considered it an insult. He too was still openly and strongly homophobic four years later. The one person who was aware of my sexuality and identity issues, which I used to call my “Problem”, once said of my opposition to homophobia that homosexuality was “not your Problem,” so clearly both she and I made a connection between the two.

But this post is not just about queerness and homophobia.

A few years later I went to University and became Christian. Before making a commitment, I expressed concern that I would have lots of questions about the issues the Christian faith raised for me, which were multiple. I was assured that this would not be a problem and that they encouraged questions. So, I converted and after a few months began to ask my questions, which were not all about homosexuality, but that was one major concern for me. So I brought it up, and the replies were varied. One was that it might currently be “fashionable” to tolerate homosexual activity but that God’s standards were unchanging and humans were not designed for that purpose. This was from a medical student by the way. Another homophobic Christian said, and this was more sympathetic, that he couldn’t imagine how bad it would be to find out you were gay and felt very sorry for them, but he was nonetheless still homophobic. But to me, this was just not an option, because by that point it seemed intuitively obvious that homosexual activity was not wrong and that homophobia was. As I’ve expressed it more recently, if the Bible told you that 2+2=5, you would either reject that part of the Bible (and possibly the whole thing) or try to work out why it seemed to you that it was saying that because it would clearly be saying something else, and since the Bible at least appears to condemn homosexual acts, that’s equally absurd and one could be expected to feel a similar motivation to resolve the problem.

This equation between the idea that 2+2=5 and the idea that homosexual acts are always sinful, I think, attempts to draw a parallel between the certainties of mathematics and the hope that ethics can be equally certain. There are positions in both ethics and mathematics which are called “intuitionism”. In maths, intuitionism is the position that maths has no external basis and is simply a creation of the mind. This is a more recent usage of the term in the philosophy of mathematics, preceded by the belief of Kant and his successors that intuition reveals the principles of maths as true a priori – known independently of observation, and grounded for Kant in our pure intuition of space and time rather than in logical deduction. This seems counterintuitive (ha!) because to us the Cosmos seems to run on maths and logic, and it’s also problematic for an externalist such as myself, because we see concepts and ideas as external to the mind and as having their own independent existence. It doesn’t seem to me that intuitionism and externalism could both be true, but since intuitionism can involve denial of the law of excluded middle (either P or not-P), maybe they could be. But at that point logic seems to have become what Arthur Norman Prior once called a “runabout inference ticket”, where you can just conclude what you like from any premises. It doesn’t seem to be ultimately useful. This could, however, be psychoanalysed as a need for a feeling of certainty and solid foundations rather than a matter of mere logic.

Geometry is a notorious example of something which used to seem purely logical and valid without any need for observation to verify it. Euclidean geometry is based on axioms which seem intuitively true, such as “a straight line segment is the shortest distance between two points” and “a straight line segment can be extended infinitely as a straight line”. However, the fifth postulate is difficult to state simply. It can be put thus: “If two lines are drawn which intersect a third in such a way that the sum of the inner angles on one side is less than two right angles, then the two lines must inevitably intersect each other on that side if extended far enough.” This amounts to the idea that parallel lines never meet, or meet at an infinite distance, and whereas it certainly seems true, the complexity of stating it rigorously makes it suspicious. In fact, it turns out that the fifth postulate is the result of observation rather than deduction, and other geometries are possible, based on assuming either that parallel lines diverge or that they converge. The former, known as hyperbolic geometry, can be locally true in this Universe and would be most noticeable near the event horizon of a black hole; the latter, known as Riemannian or elliptic geometry, applies wherever space is positively curved, as it may be over much of the Universe on a large scale. Counterintuitive truths which would hold in such a space include these: if you imagined the Earth wrapped in bandages and kept wrapping it in ever deeper layers, you would eventually find that you were surrounded by bandages and inside the ball rather than outside it; and at any one moment there is a finite maximum distance between two points, beyond which the direction between them reverses.
These facts can easily be seen to be true on a spherical surface: our antipodes lie at the maximum possible distance from us on the surface of this bandageless orb, and past that point directions reverse – go far enough east and you find yourself west of your starting point – while a large enough circle on the Earth’s surface will start to shrink if it “grows” any further.
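The shrinking-circle fact can be checked with a little spherical geometry: a circle drawn at surface distance r from a point on a sphere of radius R has circumference 2πR·sin(r/R), which grows until the circle is a quarter of the way round and then shrinks back to a point at the antipode. A quick sketch (the function name and the rounded Earth radius are my own choices):

```python
import math

R = 6371.0  # mean Earth radius in km (an approximation)

def circle_circumference(r_surface: float) -> float:
    """Circumference of a circle whose radius, measured along the
    sphere's surface, is r_surface, on a sphere of radius R."""
    return 2 * math.pi * R * math.sin(r_surface / R)

quarter = math.pi * R / 2   # a quarter of the way round: the largest circle
antipode = math.pi * R      # the maximum surface distance between two points

print(circle_circumference(quarter))   # equals Earth's full circumference, 2*pi*R
print(circle_circumference(antipode))  # effectively zero: the circle has shrunk to a point
```

Past the quarter-way mark the sine starts decreasing, which is exactly the point at which a growing circle on the Earth’s surface begins to shrink.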

If it’s possible for that postulate to be cast into doubt, and in fact turn out to be false, what else in mathematics could be? One possibility is that logic is also like this. For instance, truth and falsehood could simply be poles between which other truth values exist, or there could be truth values situated beyond truth: the step from falsehood to truth could simply be the first step towards a “supertruth” infinitely more true than mere truth itself. If there’s that much play in both geometry and logic, perhaps all mathematics is merely an intuitionistic game. Even so, most of the time we do tend to operate on the principle that maths is set in stone and reliable.
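The idea of truth values lying between the two poles has actually been made precise in many-valued logics such as Łukasiewicz’s three-valued system, in which the law of excluded middle fails for the middle value. A minimal sketch (the function names are my own; 0 is falsehood, 1 truth, 0.5 the value in between):

```python
# Łukasiewicz-style three-valued connectives over the values 0, 0.5 and 1.
def l3_not(a: float) -> float:
    return 1 - a

def l3_and(a: float, b: float) -> float:
    return min(a, b)

def l3_or(a: float, b: float) -> float:
    return max(a, b)

# Excluded middle (P or not-P) holds for the classical values...
print(l3_or(1, l3_not(1)))      # 1
print(l3_or(0, l3_not(0)))      # 1
# ...but fails for the middle value: the result is neither true nor false.
print(l3_or(0.5, l3_not(0.5)))  # 0.5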

Ethical intuitionism is in a sense the opposite kind of view to mathematical intuitionism. Formulated in response to the perceived failure of utilitarianism, it later ran into problems of its own, which are in turn thought to have shaped subsequent ethical thought. As I’ve mentioned before, the Utilitarians attempted to prove the desirability of the utility principle by saying that everyone desires to be happy. This is not in any case true, but it also suffers from the problem that one needs to prove that the greatest happiness of the greatest number is worthy of being desired, not merely able to be desired – a problem with the English language: we lack a word like “desirandous” and are stuck with the “-able” at the end of “desirable”. Consequently, in Edwardian times the philosopher G E Moore sought to base ethics on the idea that goodness was a simple, non-natural property which could be intuited by people – simple in the sense that it could not be analysed into anything more basic. There is a big problem with this: cultural and interpersonal relativism. The later philosopher Alasdair MacIntyre suggested that this step led to subsequent difficulties in discussing ethical issues, which were then picked up on and influenced newer theories.

As the twentieth century wore on, logical positivism and behaviourism became important. Both attempted to tidy up pesky things like religious language and psychological states by reducing them to what could be observed by the senses. According to MacIntyre, because ethics had come to be discussed in terms of what could be intuited but not analysed further, conversations about right and wrong in academic circles tended to be reduced to mere emotional expressions. This was known as emotivism, and it more or less amounts to ethical scepticism, although there were two versions of it. One actually attempted to reduce expressions of right and wrong to emotional expressions akin to screaming and laughter, just expressed verbally. The other claimed that ethical statements were simply expressions of approval and disapproval, implying exhortations to others to feel the same. Later still, prescriptivism emerged: a revival of Kantian ethics which claimed that to say something was good or right meant that it was universalisable (what if everyone did the same?) and entailed an imperative. The problem with this position is well known: it depends on how the action is described. “Everyone needs to eat” could be given as a reason for poor people to shoplift food, but “everyone needs to make a living” is a reason shoplifting might be wrong. Again, we could be reduced to merely emotional arguments. An oddity about this period of what’s called “non-cognitivist ethics” – the view that ethical statements have no factual content which could make them true or false – is that one of its holders was Bertrand Russell, who was nonetheless a strong campaigner on ethical issues such as free love and nuclear disarmament. He himself commented that he couldn’t reconcile the apparent contradiction.

Some theists, not including me incidentally, see ethics as that which is commanded by God – theological voluntarism. There is no right or wrong except what God chooses to tell us to do or forbids us from doing. This crops up, for example, in some Jewish views of the kosher diet. Although explanations have been offered, such as the idea that pork is forbidden to avoid parasites or that the forms of certain species, such as cloven hooves, allow special access to the divine, another explanation is that the rules are completely arbitrary and exist only to ensure that God’s people obey without asking why. Belief in theological voluntarism sometimes leads to the peculiar claim that atheists cannot have a moral compass, when it is in fact a pretty weak form of metaethics. It also gives rise to the moral argument for the existence of God: that our awareness of morality as a real thing, as opposed to mere custom with no real basis, means there must be an ultimate moral authority to back it up. I don’t see things this way at all. I see God as merely reporting, from a position of infinite wisdom and knowledge, on what the right thing to do is. God might sanction something, for instance, because of positive consequences which we can’t perceive ourselves. In the case of kosher food, it might be that there is a very good reason for it which we cannot understand. I would say that veganism is the “new kosher”, and indeed the “new halal”, so I do use my own reasoning to avoid the negative consequences and associations of deliberately eating animal products. Surprisingly, there are atheist theological voluntarists, who claim that ethics would make sense if there were a God, but there isn’t, so it doesn’t!

It certainly seems that any God would be bound by logic and mathematics, although this isn’t always held to be the case. By the same token, God to my mind would be aware of right and wrong, which means there is a fact of the matter about these things rather than their being non-cognitive. Alasdair MacIntyre sought to replace previous metaethical theories with ideas of vice and virtue, but I would reject that on the grounds that it seems to lead to judging people directly as essentially good or evil, which seems intuitively wrong to me. And there’s that concept again: intuition.

The essential problem with the idea of a moral sense is cultural relativism, and similarly, circumstances altering cases. Take the campaign against sex robots. Those who oppose them argue that it’s wrong to consume bodies as goods and that sex robots and sex workers have the same undesirable status: humans (let’s face it, probably men) would be using sex workers as means rather than ends, and therefore sex robots too. Others disagree, claiming that condemning sex robots transfers concerns about sexual objectification to actual objects. This is an example of how moral intuition can be questioned. The situation can also be tweaked: what’s the morality of allowing paedophiles to have robot children? These two examples also bring up the issue of “the wisdom of disgust”, something which is often invoked to justify homophobia and which might also explain kashrut. Disgust, culturally mediated in this case, is the reason sanitary towels are advertised using blue rather than red liquid. Presumably on another planet where the humanoids all have bright blue blood, red liquid is used instead. We have an instinctive abhorrence of excrement, which protects us from danger. A teleological view would say that God has made us disgusted by excrement in order to keep us healthy, and likewise has made people disgusted by homosexual activity, thereby justifying homophobia. I would say this is an excellent reason for rejecting the idea of the wisdom of disgust. Research has apparently shown that right-wing people are more likely to equate disgust and immorality, which means it might rhetorically be more persuasive to appeal to disgust if your interlocutor is right-wing.
To me, the idea of a strong connection between disgust and ethical judgement is never going to gain any ground, because I used to have a button phobia, and it’s clearly absurd for a person disgusted by one specific feature of the world to expect it to be banned or controlled simply because of that disgust. Disgust is clearly not a good guide to morals.

To return to the history of my opposition to homophobia as an intuition, it does seem to be informed by some kind of reasoning. I have a kind of tangential stake in it, some might say a direct one, but it’s also influenced by the fact that disgust as a guide to ethics is manifestly absurd to me due to the button phobia, and also by a kind of inductive inference from racism. But it’s also very deep-seated, to the extent that the very fact that fundamentalist Christians tend to be recalcitrantly homophobic is sufficient reason to reject their world view, and it’s disappointing that they don’t themselves perceive things that way.

I have to say that in spite of its difficulties, I find intuitionism the most appealing metaethical theory. Its biggest problem is that it seems to make it impossible to resolve disagreements about right and wrong; however, moral codes tend to agree broadly across cultures, even ones whose last contact must have been in the Palaeolithic, and to me this suggests that there is something like a moral sense. This is metaphorical. I don’t imagine there is a sensory organ of some kind in the brain which responds to “conscience radiation” or anything of the sort. However, I do think we have a moral instinct, and it makes sense for there to be an innate conscience which enables society to hold together and operate without individuals being taken advantage of too much, although sadly this seems to fail very often. There’s also the problem that if you actually try to extract widespread moral principles from the religious and social codes of the world, many of them turn out to be homophobic, sexist and so forth. This is why a deeper set of principles must be used. That was the subject of my first degree dissertation, which wasn’t actually very good, so I’m not going into it again here.

An ethical sense would seem to be identical with the conscience and distinct from disgust and charm, both of which are often misleading. For an example away from sexual ethics, disgust could prevent one from treating an illness, performing life-saving surgery or working in sanitation, all of which are ethically very positive things to do. Conscience has been called “the voice of God”. In a situation where a theist has difficulty with conservative religion because of its homophobia or sexism, their conscience cannot allow them to concede to or tolerate that prejudice, and if conscience is the voice of God, it would be God convicting them to rebel or otherwise act against it.

Although the moral argument for the existence of God doesn’t work for a separate divine being “out there” in the Universe or beyond it, there’s another possible take on this based on Ultimate Concern. The philosopher Paul Tillich manages to separate the issue of theism from religion with this concept, which makes the idea of religion less Westernised, since it allows for non-theistic religions – which of course also exist in the West, for example Spiritualism and the Free Zone. Tillich calls faith “the state of being ultimately concerned”, meaning by this whatever one holds sacred. Such an object exists, I think, in most people’s psyches, including those of the non-religious. It needn’t be God. It could be love, altruism, rationality, compassion, perhaps even one’s own ego for narcissists, but it’s just as real for most non-religious people as it is for religious people and theists. For a Quaker, it might be the spark of the divine in us all, and for atheist Quakers there may be no need to alter that. Conscience could be an Ultimate Concern – in other words, one’s God – and because this closes the concept off from arguments and questions about whether an external deity exists, it could be quite a good one. It’s even ineffable in some ways, because of the inscrutability of ethical intuition.

It is of course problematic to have a set of inaccessible moral principles, given the difficulty of seeing them collectively in the same way. Coming back to sexual orientation, though, this is something which actually can be known, because it isn’t so much observed as immediately present to consciousness when, for example, one feels attraction to someone of the same gender. One possible response is to deny it because it clashes with one’s religious values, and clearly this is a fairly common phenomenon, given the large number of people involved in reparative “therapy” who are either openly gay already or admit to it, and the pastors who have been stridently homophobic and again turn out to be gay; but this shouldn’t be taken as the rule for homophobia among the religious. There really are people who struggle with the homophobia of the Abrahamic religions and only very reluctantly concede to it. On the other hand, I used to know a man who said he wished he could be as disgusted by other kinds of sin as he was by what he saw as the sin of homosexually expressed love. There is an internal process going on here. In one situation, one is divided against oneself because one knows oneself to be queer but struggles against it. In the other, which rather self-righteously I would claim for myself, awareness of one’s queerness and its incontrovertible nature leads one to reject any understanding of religion which is homophobic; and to be honest, if it turned out that homophobia was central to any faith, the voice of God, as it were, would surely lead me to reject that faith.

Startling Semitic-Celtic Parallels And Overinterpretation

Some time ago, in the 1980s I think, I made one of my many attempts to learn Gàidhlig and noticed something rather strange. I already had some knowledge of Hebrew and Arabic from when I was younger, and it suddenly struck me that this Celtic language shared some remarkably unusual features with the other two. From what I can recall, these included verb-subject-object word order, two genders – feminine and masculine – and something I can only vaguely remember about how prepositions and pronouns work. At the time, I didn’t know what to make of it. It seemed to be more than a coincidence, because three shared features always count in my mind as more than chance allows, but it was difficult to think of how it could’ve happened. I eventually settled on the rather vague conclusion that maybe Semitic language speakers had travelled north from the Maghreb into Iberia, where Q-Celtic languages are sometimes claimed to have originated, and had then influenced the ancestor of the Irish language in some way. However, this doesn’t work particularly well, as it fails to explain how Welsh and Cornish also have these features. After a while, I just put it down to coincidence and my tendency to see patterns where none exist beyond the ones my mind imposes.

At this point I’m going to veer off into probability to illustrate why three things in common is my threshold for statistical significance. It’s common to plump for one in twenty as the point at which something is considered significant, and scientific experiments often use this. In recent years I’ve seen rather too many dubious-looking scientific papers which seem to use a much laxer threshold, and I now wonder if there has been a new development in statistical theory which justifies this, or whether it’s more to do with “publish or perish”. Anyway, probabilities of independent events multiply, so if you flip a fair coin three times and it comes up heads every time, the probability of that outcome is one in two times one in two times one in two. Since 2³ is eight, that’s one in eight – still more likely than one in twenty, so not significant – but the probability of an individual event is not always one in two. With a fair die, you’d only need to throw a six twice for the outcome to become significant: six squared is thirty-six, and one in thirty-six is rarer than one in twenty. Taking this the other way, the per-event probability needed for three independent events to multiply up to one in twenty is one over the cube root of twenty, which is just over one in 2.7. However, this reasoning is faulty, because we notice patterns rather than the absence of patterns: given the large number of grammatical features one could pluck out of Celtic and Semitic languages, the ones that don’t fit might be ignored. The calculation then becomes extremely complicated, because one has to decide how to delineate and count specific grammatical features, then work out the chances that two sets of languages share three of them given the number of possible options. For instance, with syntax the options, assuming a largely fixed word order (which doesn’t always obtain), are SVO, SOV, OVS, VSO, VOS and OSV – a one in six chance of a match. Other features, however, are quite open-ended.
There are languages out there with more than two dozen grammatical genders, for example. It’s possible to imagine a language whose every noun has a different gender.
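The arithmetic above can be sketched out, on the stated assumption that the events are independent:

```python
# Multiplying independent probabilities against the conventional
# one-in-twenty significance threshold.
threshold = 1 / 20

three_heads = (1 / 2) ** 3   # 1/8: more likely than 1/20, so not significant
two_sixes = (1 / 6) ** 2     # 1/36: rarer than 1/20, so significant

print(three_heads > threshold)  # True: three heads is not significant
print(two_sixes < threshold)    # True: two sixes is significant

# Per-event probability needed for three independent events to
# multiply up to exactly one in twenty: one over the cube root of 20.
per_event = 1 / 20 ** (1 / 3)
print(round(1 / per_event, 2))  # 2.71: "just over one in 2.7"
```

Cubing `per_event` recovers one in twenty, which is the sense in which three moderately unlikely coincidences together cross the significance threshold.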

Another pattern which definitely is meaningful and which can be plucked out of the Celtic languages as they are today is that they and the Romance languages – more precisely the Italic languages, i.e. the Romance languages plus Latin and its close relatives of the time – are closer to one another than they are to other branches of the Indo-European language family. Some of these features are the result of parallel evolution. For instance, all six surviving Celtic languages have two grammatical genders, feminine and masculine, and this is also true of all Western Romance languages (though not of Romanian, which still has a neuter). Besides this, other Indo-European languages tend to use an ending like “-est” to express the superlative of adjectives, but Italic and Celtic tend to use something like “-issimum” – English “best” versus Italian “bellissimo”, for example. There are a number of other similarities, which may be ancient features lost from the other languages, features acquired because they were neighbours, or features acquired in their common ancestral language. These are, though, easy to account for, because Italic and Celtic just are obviously related, were spoken near each other and so on. The idea of a parallel between Celtic and Semitic is much harder to explain, which is why it might not exist at all.

Recently, I discovered that my personal will o’ the wisp is not in fact just mine. Professional linguists have noticed this too, and there are even theories about how it might have happened, along with a number of other features in common. VSO order and inflected prepositions are just two of several parallels. I should explain that in Gàidhlig and its relatives, prepositions vary according to who they refer to, so for example “agam” means “at me” and “agat” “at thee”. The origin of these is easy to account for – the words have simply been run together over the millennia – but few other languages do this. Arabic and Hebrew, on the other hand, do. The languages also do things with these prepositions which other languages don’t: they express possession and obligation with them. “Am falt oirre” – “the hair on her” – is “her hair”, and “tha bhuam sgian” – “there is from me (a) knife” – is “I need/want/must have a knife”. That “(a)” indicates something else they have in common: they all have a word for “the” but none for “a”. It’s unusual for a language to have a way of expressing definiteness without indefiniteness. Interestingly, Anglo-Saxon and Old Norse, both spoken in these isles, also had a way to say “the” but not one to say “a(n)”, and this may be a clue as to how these apparent coincidences happened. Breton, however, does have an indefinite article. Likewise, all the languages repeat the pronoun at the end of a relative clause – “the chair which I sat on it” and not “the chair (which) I sat on”. There’s also the way the word for “and” is used – or rather, a word for “and”: “agus” in Gàidhlig (there’s another word, “is”), “wa” in Arabic and “ve” in today’s Hebrew. In English, “and” is a simple coördinating conjunction like “or” and “but”, but in the other languages it can also be used as a subordinating one, and can mean “when” or “as”. This is also unusual.
“Agus”/“wa” can also be used to mean “but” or “although”, and as I understand it, “wa” is one of the main ways Arabic expresses “but”. Besides this, there’s what’s known in English descriptions of Hebrew grammar as the construct state genitive. Arabic doesn’t say “the man’s house” but, in effect, “house the man”, like “taigh an duine” in Gàidhlig – literally “house the man”. This is in spite of the fact that the language in question has a genitive form for the noun. That makes approximately eight features found in Celtic and Semitic languages but only rarely in others.

And there’s more. The surviving Celtic languages are unusual among Indo-European languages in having these features, and are in general quite aberrant compared to the others. That said, there are other branches of the family with features unusual for it, such as Armenian, whose grammar is more like that of some non-Indo-European languages in that it hangs a separate suffix off the end of a word for each grammatical idea, rather than combining several ideas in one suffix (in English, for example, a single final S serves for both the genitive (possessive) and the plural, with nothing extra needed). Even so, were it not for the known history and the fact that so much Celtic vocabulary is clearly similar to that of other European languages, nobody would guess that the Celtic languages were Indo-European. In fact, the very features which they share with Semitic languages are the ones which make them unique in the Indo-European family.

They are also emphatically not related to each other – or at least so distantly related that there are languages native to Kenya and Tanzania which are closer to Hebrew and Arabic, and a dead language once spoken in present-day China (Tocharian) which is closer to Welsh (and in fact English), than they are to each other. Semitic languages are part of a family now referred to as “Afro-Asiatic”, which also includes Tamazight, a Berber language, and Ancient Egyptian, spoken five thousand years ago yet even then nowhere near the speech of the Kurgan peoples whose language was ancestral to Celtic, Germanic and the like. There are, however, a few theories about how this has happened.

One apparently anomalous circumstance which can be seen from the New Testament is that Paul wrote a letter to the Galatians. These lived in Anatolia, the Asian portion of present-day Turkey, and they spoke a Celtic language. This language was clearly in close proximity to the Semitic lingua franca of that region at the time, Aramaic, as well as various others such as Assyrian. It’s therefore been suggested that the whole of the Celtic branch was influenced by this local connection, all the way across to Ireland in the end. To me, this seems a little far-fetched, but it is true that there’s a concentration of a particular set of genes which marks the Irish, and incidentally myself, as possible wanderers from the Indo-European ancestral land who went as far as possible at the time. This may make the so-called Celts the ultimate invaders in a way and contradicts the common mystical, matriarchal and peaceful image some people seem to have of them. This migration also forms part of another theory, that farming, having been invented in the Fertile Crescent where Semitic languages were spoken, then spread culturally across Europe to these islands and took linguistic features with it. Either of these ideas being true could be expected to imply that all Celtic languages, not just the modern survivors here and in Brittany, had these features in common.

Significantly, the speakers of Celtic languages were probably the first Indo-European speakers to arrive in Great Britain and Ireland. Prior to that, there were clearly other people living here who had their own spoken but unwritten languages, and traces of these may survive in place names. It used to be thought that the Picts spoke a non-IE language, possibly related to Basque, but this has now been refuted. The features Irish, Welsh and the rest have in common with Hebrew and Arabic are also apparently shared with Tamazight and other languages of the Maghreb, although to me that’s hearsay – I haven’t checked them out. Consequently, one rather outré theory is that before the Celts got here, the folk of Albion and the Emerald Isle spoke a Semitic language, and Celtic was influenced by it when it arrived. However, there doesn’t seem to be much reason to suppose this to be so other than the connection itself.

Leaving those theories aside, I would bring up the issue of linguistic universals, particularly implicational universals. Some features are common to all spoken languages: every known spoken language has a vowel like the /a/ in “father”, every language which distinguishes questions tonally does so by changing the pitch of the voice towards the end of the sentence, and every language has at least some plural pronouns. There’s also a particular set of features SOV languages tend to have in common, such as being exclusively suffixing. The resemblance can be so strong that it used to be thought there was an “Altaic” language family including Turkish and Mongolian – some would even have included Japanese and Korean – but these languages have turned out not to be closely related. They have sometimes grown more alike through contact, and they share many of these implicational universals, suggesting to me some kind of “standard” human spoken language type with those grammatical features. I would tentatively suggest, and I may well be wrong, that the features Celtic and Semitic languages share are similarly implicational universals: both have an unusual syntax, and this may lead them both down the same path.

But there’s an extra layer to this which intrigues me. There used to be a famous Hebrew teacher who introduced the subject with “Gentlemen, this is the language God spoke” (yes, extremely sexist, but it was a long time ago), and similarly Arabic is considered a particularly sacred language, almost designed by God for the writing of the Qur’an. Hence the features mentioned are used in two very important sacred texts, and if I’m going to go all religious and mystical on you, just maybe the Celtic and Semitic languages have a special place in spiritual practices, and perhaps that’s what this is about. But leaving that aside, it still seems to me that the most likely explanation for the things they have in common is simply that they are a particular “type” of language, just as Japanese and Turkish are, without any need for a genetic relationship.

They’re also both really annoying!

The issue of overinterpretation will have to be held over until tomorrow, sorry.