How Real Is Maths?

As you may know, I was involved in a high-control parachurch organisation in the mid-1980s CE when I went to university for the first time. Over the first few months I didn’t resist them much, at least externally, because I wanted to give them a chance and see whether their claim that God and evangelical Protestantism had all the answers really held up. I then went back to Canterbury for Xmas and bought my dad a book about mathematics, something he was very keen on and had a good grasp of at the time, which I also ended up reading myself. In this book, which I think may have been Martin Gardner’s ‘Mathematical Circus’, there was an interesting chapter on different degrees of infinity.

In maths there are countable and uncountable infinities. A countable infinity would take forever to count, but every member of it would be reached eventually; an uncountable infinity cannot be counted even in principle. So, for example, there are infinitely many whole numbers and infinitely many points in space, but those two infinities are of different sizes. It can be proven that this is so as follows. Suppose you have an endless supply of cards with a one on one side and a zero on the other, and you lay them out as a list of infinitely long rows, one row for each counting number. Have you then produced all possible infinite sequences of ones and zeros? No. Start in the top left hand corner of this array and turn that card over, then go on diagonally forever, one row and one column at a time, turning each card you pass. The sequence you have then generated, running diagonally down the arrangement, is not in the list, because its bit n always differs from bit n of row n. Hence no such list can be complete, and there must be a larger infinity.

This leads to peculiar consequences. For instance, the Banach–Tarski paradox: you can in theory remove an infinite scattering of points from a solid sphere and rearrange them into a second sphere of the same size, while what remains can be rearranged back into the original, none the worse for wear. Georg Cantor, who first thought of this way of understanding infinity, spent the later part of his life going in and out of mental hospitals, partly due to the hostility of other mathematicians to this concept and its implications, and possibly also because the concept he came up with was a cognitohazard. To some extent, thinking of this may have broken his brain.
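To restate the trick symbolically (this is Cantor’s diagonal argument; the cards version above is the same thing in costume): given any list s₁, s₂, s₃, … of infinite sequences of ones and zeros, define a new sequence d by

$$d(n) = 1 - s_n(n)$$

Then d differs from every sₙ at position n, so it appears nowhere in the list, and no list of such sequences, even an infinite one, can ever be complete.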

With steely determination, I returned to university and immediately confronted a member of the cult, not on this issue but on more practical ones such as intolerance of other spiritual paths and homophobia. However, because we were discussing an infinite being, namely God, I mentioned this concept in passing, and his interesting response has often given me pause for thought since. He regarded this view of infinity, and by extension much of pure mathematics, as a symptom of the flawed nature of the limited and fallen human mind. I can’t remember exactly how he put it, but that’s what it entailed. At a later point he tried to explain what I’d said to someone else as “infinity times infinity”, which is not what this is, and advised them not to think about it, which in a way is fair enough. He was a medical student, and perhaps in that situation it isn’t worthwhile to waste your brain cells on it, except that it might be useful for psychiatric purposes, because, well, what are cognitohazards? Are they actually significant threats to mental health, and are there enough of them encountered in daily life, or even occasionally, for them to be proper objects of study?

Something which definitely would be a cognitohazard is Graham’s Number. Until fairly recently, Graham’s Number, hereinafter referred to as G, was reputedly the largest number ever used in a serious mathematical proof. Obviously you could talk about G+1 and so on, but that’s not entirely sensible. G is an upper bound for the solution to a particular problem involving bichromatic hypercubes. Take a hypercube of a certain number of dimensions n and join all the vertices together to form a complete graph on 2^n vertices. Colour each edge in one of two colours. What’s the smallest number of dimensions such a hypercube must have to guarantee that every such colouring contains at least one single-coloured complete subgraph on four vertices lying in a plane? This number might actually be quite small: it’s known to be at least thirteen. However, the proven upper bound might be, well, “extremely large” doesn’t really cut it to describe how big it is, so let me just say it might not be that small at all. It might be as big as G.

G can actually be expressed precisely, but in order to do so a special form called Knuth’s up-arrow notation has to be used. There’s an operation called exponentiation which is expressed very easily on computers and other such devices as “^”. Hence 2^2 is two squared, 2^3 two cubed and so on. Although it would probably be fine to use the caret to express this, “↑” has also been used for the same operation, and it’s the starting point of Knuth’s notation. In his scheme, 2↑4 is 2 x 2 x 2 x 2, which is of course sixteen. However, more arrows can be added, so 2↑↑4 is “tetration”, 2↑(2↑(2↑2)), which is 65536 (or ten less than three dozen and two zagiers in duodecimal). Then there’s “pentation”, with three arrows: even the modest-looking 2↑↑↑4 expands to 2↑↑(2↑↑(2↑↑2)), i.e. 2↑↑65536, a tower of 65536 twos, and merely the first five storeys of that tower, 2↑↑5 = 2^65536, already give a number with 19729 digits. This can be continued as long as necessary of course. G, though, is not simply a three with a row of arrows after it: it’s built in sixty-four layers. The first layer, g₁, is 3↑↑↑↑3; the next, g₂, is 3 with g₁ arrows between the threes; and so on, each layer using the previous layer’s value as its number of arrows, up to g₆₄, which is G. That is nonetheless an exact definition of the number. If every Planck volume in the observable Universe were to represent a digit, there still wouldn’t be enough space to write it out longhand. It has been claimed, not entirely as a joke, that a human head storing every digit of G would pack in more information than the Bekenstein bound allows for a head-sized region, and would therefore collapse into a black hole. So Graham’s Number is also a cognitohazard.
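For the record, the notation and the sixty-four-layer construction can be stated exactly:

$$a \uparrow b = a^{b}, \qquad a \uparrow^{n} b = \underbrace{a \uparrow^{n-1} \left(a \uparrow^{n-1} \left(\cdots \left(a \uparrow^{n-1} a\right)\right)\right)}_{b\text{ copies of }a}$$

$$g_{1} = 3\uparrow\uparrow\uparrow\uparrow 3, \qquad g_{k+1} = 3 \uparrow^{g_{k}} 3, \qquad G = g_{64}$$

Each layer uses the previous layer’s value as its number of arrows, which is why no printable row of arrows comes anywhere near.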

Nowadays, larger finite numbers have been defined. TREE(3), which I’ve mentioned before, also involves graphs, as does SSCG(3), the third Simple Subcubic Graph number, which renders TREE(3) insignificant. There’s an even larger finite integer, Rayo’s number, which resulted from a “big number duel” held at MIT in 2007. I could represent it here but I’d probably be talking to myself. Actually, I will:

This is too hard to type out without fiddling about with LaTeX, so here’s the first bit written out longhand, unfortunately with a Bic. The next bit is based on this definition, and reads:

“The smallest number bigger than every finite number m with the following property: there is a formula ϕ(x₁) in the language of first-order set theory (as presented in the definition of Sat) with less than a googol symbols and x₁ as its only free variable such that: (a) there is a variable assignment s assigning m to x₁ such that Sat([ϕ(x₁)], s), and (b) for any variable assignment t, if Sat([ϕ(x₁)], t), then t assigns m to x₁.”

Here [ϕ(x₁)] is the Gödelisation of the formula ϕ and s is a variable assignment.

It wouldn’t be especially difficult to understand this, but I haven’t entirely bothered to pursue it. I showed you the actual notation to introduce a new point: mathematical formalism. The fact that this might look like gibberish illustrates an important feature of mathematics on which the formalists capitalise: maybe it’s just a game based on symbols.

When I first read ‘Beginning Logic’, E. J. Lemmon’s textbook, at about the same time as I was resisting the cult, I was rather surprised that the author defined the logical symbols in terms of their physical appearance as marks on paper rather than in more mathematical terms, and the fact that I’ve written that out might tempt one to think that ultimately that’s all they are, and that this form is nothing more than a kind of game which we give meaning to. This is formalism, an approach found in various disciplines which emphasises form over content. The possible connection to the modernist slogan “form follows function”, coined by the architect Louis Sullivan and taken up by the Bauhaus, is not lost on me, but rather than pursue that right now I should probably talk about formalism itself. Formalism as applied to literature, for example, yields Russian formalism, an early twentieth century movement, beginning before the revolution and continuing into the Soviet era, which influenced New Criticism. It held that literary criticism could be objective by letting the text stand by itself and ignoring influences and authorship, focussing on autonomy (what I just said), unity, the idea that every part of a work should contribute towards the whole, and defamiliarisation, that is, making the familiar seem unfamiliar. Martian poetry springs to mind here.

Translating this to maths, formalism is the view that maths consists of statements about the manipulation of sequences of symbols according to established rules. Like formalism in literary criticism, it ignores everything outside that realm, so among other things it kind of turns all of maths into pure mathematics. This is what I was confronted with when I first learnt formal logic, and it’s why I showed you that definition: a series of symbols on a piece of paper, together with rules for manipulating them, which expresses a very large number only because of the comment underneath which interprets it.

Now the reason this interests me in the context of my acquaintance (friend? I don’t know) is that there is another philosophical position about maths called Platonism, which is the belief that maths already exists and is discovered rather than invented. This is similar to believing in the existence of God, so my friend (why not) held an unusual position: he thought at least one area of maths, and I think by implication much of the rest of it, wasn’t “out there” but was invented by human beings, yet he also believed in God, i.e. something which is “out there” in just the sense in which mathematical Platonism sees maths. There doesn’t seem to be anything essentially wrong with this position, but it is a bit odd and feels inconsistent. He also probably thought that the “plain reading” of Biblical values referred to objective principles, such as not stealing and honouring the Sabbath, which would make them like maths as many people, theistic or otherwise, view it. But he didn’t view maths like that. I don’t know if he was aware of the apparent contradiction.

On the other hand, I can totally get on board with the idea that whatever we might think about how reality works is completely wrong because the Universe is beyond our comprehension. If we consider certain animals, we perceive their understanding as being limited in various ways. They might be blind cave fish, say, or sessile filter-feeders living in burrows below the high tide mark, and we suppose that they don’t understand the world as well as we do. I think this is accurate, and we’re also limited in various ways ourselves, notably in lacking a sense of smell as good as most other mammals’. But beyond that, there’s no reason to suppose that the way we think is especially adequate or discerning about reality. All we might have is a system that works most of the time regardless of all the stuff we don’t know about. That said, it still feels like various things must exist, such as current experience and a physical world. In view of that possibility, I do have some sympathy with my friend’s take on this, although it felt somewhat unconsidered in his case.

In fact I’d take it further into his world and say that as humans we do in fact have limited understanding, particularly compared with God. We’re fallible and certain things are beyond us. Moreover, there’s the question of the Fall, and I have to be careful here. Our understanding is also strongly constrained by the kinds of cultures and societies we live in, which to some extent is what the Fall really is. So like him, I do in fact link this to my spirituality and feel that a little humility is in order. In that way, both constructivism and Platonism could be true. There could be mathematical truths known only to God, or for an atheist mathematical truths which could in theory be discovered by a sufficiently powerful mind, and other mathematical activities and forms which are merely games played by our own finite minds.

I’ve done a bit of a bait-and-switch here, by swapping formalism for constructivism, and they’re not the same thing. Constructivism, whose best-known form is intuitionism, sees mathematics as built by mathematicians. Hence it does have a meaning beyond the mere manipulation of symbols through rules, but the meaning is given by the mathematicians. In other words, maths is invented, but it is real.

To illustrate the difference between formalism and constructivism, I’d like to go back to the diagonal proof mentioned above, which establishes an uncountable infinity (its size is 2^ℵ₀, which equals ℵ₁ if the continuum hypothesis is true). According to formalism, ℵ₁ is a validly defined symbol and the system is internally consistent, so there’s no problem. Constructivism, though, would reject the proof and even its premises. The set of all numbers, according to this view, is only ever potentially infinite, as it can never be completed. Even real numbers, i.e. the set of numbers including all decimal fractions between the integers, are only valid insofar as they can be constructed in a finite way. That infinitely long sequence of zeros and ones, and all the ones under it, only exist up to the point where the process of producing them has actually been carried out at some point in the history of the Universe. In other words, infinity of either kind is only a potential, and really not even that, since the Universe won’t exist forever in a form hospitable to minds capable of performing maths. I would say that this has to be a non-theistic view, since given theism there is an eternal and infinite mind which can and maybe does do all that, which makes Platonism true, although of course God might have better things to do or never get round to it.

An extreme form of constructivism is ultrafinitism. I think of this metaphorically as some mathematical objects being in focus and others being to a greater or lesser extent blurred. The lower positive integers are in perfect focus, sharp and truly instantiated by virtue of the extensive construction they’ve undergone through continual use. Less well focussed are the non-integral rational numbers, zero and the negative numbers, and as one moves further from zero, away from numbers which can be written as fractions and into irrational, imaginary, complex and hypercomplex numbers, the focus softens, until something like Graham’s Number or an octonion is just a meaningless blur and the infinities are grey blobs. This is just an image of course, so here goes.

To an ultrafinitist there is no infinite set of natural numbers, because it can by definition never be completed. It goes beyond that though. For instance, a comparatively modest number, Skewes’s Number, is about 10^10^10^34. It’s an upper bound for the first point at which the logarithmic integral, a formula used to estimate the number of prime numbers below a given value, switches from an overestimate to an underestimate; in fact the difference switches sign infinitely often after that. It can be proven that this happens, but the exact location of the first crossing is unknown, and it may be impossible to calculate, putting it in a different position from G, which can be precisely known. Peculiarly, this could mean that Skewes’s Number doesn’t exist in these terms but Graham’s does.
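Hedged to the level of detail I’m confident of, Skewes’s 1933 result, which assumed the Riemann hypothesis, says:

$$\pi(x) > \operatorname{li}(x) \quad \text{for some} \quad x < e^{e^{e^{79}}} \approx 10^{10^{10^{34}}}$$

where π(x) counts the primes up to x and li(x) is the logarithmic integral used to estimate it; Littlewood had already proved that the difference changes sign infinitely often.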

This gives rise to a vague set known as the “feasible numbers”: numbers which can realistically be worked on using computers and the like. The question arises of how to account for such things as π, because it seems to go on forever, but ultrafinitists apparently view it as a procedure of calculation rather than an actual number. Incidentally, it’s difficult to refer to numbers in this setting, because words like “real” and “imaginary” have long since been nabbed by mathematicians for specific meanings which don’t match the obvious interpretation of those terms. I suppose I could say “existing” or “instantiated”.

Some mathematicians also view maths as essentially granular. That is, they reject the idea that maths comes in two kinds, one involving continuous functions, as with infinitesimal calculus, and the other involving discrete entities, as exemplified by the group of integers under addition. Only the discrete kind survives, and therefore there are no such things as irrational numbers.

Although he didn’t get as far as ultrafinitism itself, Wittgenstein’s thought does provide a useful basis for it. He viewed infinity as a procedural convenience, only ever potential rather than actual, and maths as an activity involving the construction of novel concepts which didn’t pre-exist waiting to be discovered. In general, he’s a very concrete philosopher. I’m actually not that keen on a lot of his thought, although some of it’s good, such as the family resemblance account of definition, which could be applied here. Logical positivism also wouldn’t allow for such concepts, but I don’t consider that a respectable school of philosophy so much as an interesting footnote in the history of ideas.

Ultrafinitism has major consequences for physics. Singularities arise in various places in physics and cosmology. A rather prosaic example: treated as an idealised continuous substance, a material develops infinite stress at the tip of a crack. This is resolved by dropping the idealisation, since the material is actually made up of atoms or other particles. Some other areas where singularities arise are more exciting, but this can serve as an illustration of how the problem might be addressed. Specifically, there was a singularity at the Big Bang, there’s one at the centre of a black hole, and the formulas for mass, time and length blow up at the speed of light. This has a remarkable consequence, at least as I see it: for an ultrafinitist, the speed of light can be exceeded. Ultrafinitism strongly suggests that faster-than-light travel is possible and that in some sense the Big Bang never happened. The first in turn implies that backwards time travel is also possible. At this point, ultrafinitism begins to feel too good to be true, but then a light bulb would probably have seemed like that to a mediaeval European, so rejecting it on those grounds would be argument from incredulity.

There’s also a problem for the theist with ultrafinitism and finitism, in that they imply that any deity would not be eternal or infinite. It’s important, though, not to allow a “God of the gaps” in at any point: God should never be used as an explanation for a physical phenomenon. In any case, the concept of God may be moribund for ultrafinitists because of the need to posit octonions as variables in Bell’s Theorem.

What all of this seems to mean is that quantum physics makes more sense than relativity for the ultrafinitist, because it makes reality granular. The difficulties ultrafinitism poses for relativity and cosmology could be a sign that relativity is only an approximation of the real world, though we don’t know in what respect. We don’t accept that the stress at a crack is really infinite, because it doesn’t accord with our view of the world that something so outlandish would happen every time we drop a piece of porcelain onto a stone floor, so maybe we should likewise reject the idea of lightspeed being a limit or the Big Bang being a beginning. The fact remains that relativity is very well tested and used in daily life, for instance in satnav. It isn’t just an abstract theory about a realm of reality few people venture into, and it does seem odd to say that despite all the evidence in its favour it will simply fail at a certain point. Moreover, although I’m at peace with the concept of time travel, many people would object to that implication.

To conclude, I’m aware that I’ve wandered all over the place with this, and my response to this impression is as follows: yesterday I heard someone on the radio comment that as one’s age advances it’s as if different parts of one’s brain want to break up the band and follow solo careers, so maybe this blog post is evidence of my melting brain.

The Way In

Backing the losers

I’ve a tendency to back losers. For instance, Prefab Sprout and The The were my favourite bands when I was in my late teens and both were markedly unsuccessful. In keeping with this trend, I used to have a Jupiter Ace computer and I’ve alluded to this a few times on this blog. Jupiter Cantab, the company which designed and made it, had a total of I think five employees, seemed to work out of a garage and a suburban house in Cambridgeshire and had a turnover of around five thousand pounds. They went bust maybe a year or two after releasing the Ace in October 1982 CE and had to sell off all their old stock, a happy thing for me as it meant I could acquire a new computer. Its hardware was very basic even for late ’82, but its firmware decidedly was not. Unlike practically every other home computer of the time, the Ace used not BASIC but FORTH as its native programming language. Perversely, I considered writing a BASIC interpreter for it for a while but it seemed a bit silly so I didn’t.

FORTH, unlike BASIC as it was at the time, was considered a “proper” programming language. It has two distinctive features. One is that it uses a data structure known as the stack, which is a list of numbers in consecutive locations in memory, presented to the user as having a top and a bottom. Words in the FORTH language usually take data off the top of the stack, operate on them and may leave one or more results on it. This gives the syntax a verb-final order, like Latin, Turkish or Sanskrit rather than English: instead of writing “2+2”, you write “2 2 +”, which leaves 4 on the stack. The other feature is that rather than writing single programs the user defines words, so for example to print out the character set one writes:
: CHARSET ( the name is the user's choice, anything except control characters ) 256 32 DO I EMIT LOOP ;
If one then types in CHARSET and presses RETURN (or ENTER in the Ace’s case), it will print out every character the Ace can display except for the graphics characters whose codes are below 32.

Typing VLIST makes the Ace list every word in its vocabulary; I think there are a total of about 140. All of them fit in 8K and show that FORTH is a marvellously compact language compared to BASIC, or in fact most other programming languages. By comparison, the ZX81’s BASIC has around forty-one words. FORTH on the Ace, and in general, was so fast that the cheapest computer faster than it, the IBM PC, cost more than a hundred times as much. For instance, in order to produce sound it was possible, as well as using the word BEEP, to define a word that counted from 0 to 32767 between vibrations and still produce a respectable note by moving the speaker in and out. By contrast, the ZX81 would take nearly ten minutes to count that high, and had no proper sound anyway. This is a somewhat unfair comparison, but it illustrates the gulf between the speed of this high-level language and the other.

Whittling Down The Vocab

As I’ve said, FORTH consists of words defined in terms of other words, and therefore some people object to calling code written in it “programs”, preferring “words”. The fact that this definitional process is core to the language immediately made me wonder what would constitute a minimal FORTH system. There are quite a few words in the Ace’s vocabulary which are easy to dispense with. For instance, the word SPACES prints whatever number of spaces is indicated by the number on top of the stack, so 32 SPACES prints 32 spaces. However, this word could’ve been defined by the user, thus:
: SPACES 0 DO SPACE LOOP ;

The DO-LOOP control structure takes a limit and a starting value from the top of the stack and executes the code between DO and LOOP once for each count from the start up to one below the limit. It can be taken a lot further than that though. SPACE and CR are two words with a similar structure: each prints out a single character. SPACE unsurprisingly prints a space; CR does a carriage return. Both characters are part of the standard ASCII set, and the word for printing the ASCII character indicated by the number on top of the stack is EMIT, so they can be defined thus:
: SPACE 32 EMIT ;

: CR 13 EMIT ;

Hence three words are already shown to be unnecessary to the most minimal FORTH system, and the question arises of what, then, would constitute such a system. What’s the smallest set of words needed to do this?

The Ace had already added quite a lot of words which are not part of standard FORTH-79 and omitted others which are easily defined; its additions include the floating point words, PLOT, BEEP, CLS, VIS and INVIS. Others are trivial to define, such as 2+, 1- and 1+. Others are a bit less obvious: PICK can be used to replace DUP and OVER (in FORTH-79, 1 PICK behaves like DUP and 2 PICK like OVER), while SWAP and ROT are special cases of ROLL (2 ROLL and 3 ROLL respectively) and so can be defined in those terms, as modelled in the sketch below. “.”, that full stop, which prints a number, can be replaced by the number formatting words <#, # and #>. You can continue to whittle it down until you have a very small number of words along with the software which accepts input and definitions, and you’re done. In fact, if you know the hardware well enough you can make it even smaller, because with the Jupiter Ace, for example, you know where the display is stored in RAM and how the stacks work (there are actually two, because practically all computers implicitly use a stack for subroutines), and when it comes down to it it’s even possible to define words which accept machine code, the numbers computers actually use, which represent simple instructions like adding two numbers together or storing one somewhere.
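Here’s the sketch I promised: a toy model of the data stack in Haskell rather than FORTH, and emphatically not the Ace’s actual implementation, just an illustration of why PICK and ROLL subsume the shuffle words. The list head stands for the top of the stack and FORTH-79’s 1-based indexing is assumed.

type Stack = [Int]

-- PICK copies the nth item, counting from the top, onto the top.
pick :: Int -> Stack -> Stack
pick n s = (s !! (n - 1)) : s

-- ROLL moves the nth item to the top, shuffling the others down.
-- (Assumes the stack is deep enough, as FORTH itself does.)
roll :: Int -> Stack -> Stack
roll n s = let (front, x:rest) = splitAt (n - 1) s in x : (front ++ rest)

dup, over, swap, rot :: Stack -> Stack
dup  = pick 1   -- DUP  is 1 PICK
over = pick 2   -- OVER is 2 PICK
swap = roll 2   -- SWAP is 2 ROLL
rot  = roll 3   -- ROT  is 3 ROLL

So, for example, rot [3, 2, 1] gives [1, 3, 2], just as ROT turns ( 1 2 3 ) into ( 2 3 1 ) with the top of the stack written on the right.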

This is about as far as I’d got until, recently, I managed to join together two ideas I’d never previously connected.

Logic To Numbers

As you probably know, my degrees are in philosophy, and my first degree is in the analytical tradition, which is the dominant one in the English-speaking world. It’s very common for philosophy degrees to be rubbished by the general public, and even within philosophy, continental and analytical philosophers are often hostile to each other. What may not be appreciated is that much of philosophy closely resembles mathematics and, by extension, computer science. When the department at my first university was closed down, some of it merged with computing. It also turns out, a little surprisingly, that one of my tutors, Nicholas Measor, was a significant influence on the theory of computing, having contributed to work on the modal mu calculus, which is used in reasoning about temporal logic and the correctness of systems. He co-wrote a paper called “Duality and the Completeness of the Modal mu-Calculus” in the ’90s. This has kind of caused things to fall into place for me.

The Dedekind-Peano axioms for the set of natural numbers are central to the theoretical basis of arithmetic. They go as follows:

  1. 0 is a natural number.
  2. For every natural number x, x=x.
  3. For all natural numbers x and y, if x=y then y=x.
  4. For all natural numbers x, y, z, if x=y and y=z then x=z.
  5. For all a and b, if b is a natural number and a=b, then a is a natural number.
  6. Let S(n) be “the successor of n”. Then for every natural number n, S(n) is a natural number.
  7. For all natural numbers m and n, if S(m)=S(n) then m=n.
  8. For every natural number n, S(n)=0 is false.
  9. If K is a set such that 0 is in K, and for every natural number n, n being in K implies that S(n) is in K, then K contains every natural number.

You can then go on to define addition, subtraction, multiplication and inequalities; addition, for example, needs only the two recursive clauses below. Division is harder to define because this is about integers, and dividing one integer by another may lead to fractions, decimals and so forth. I’ve known about all this since I was an undergraduate but hadn’t given it much thought. It is, incidentally, possible to take this further and define negative, real and presumably imaginary, complex and hypercomplex numbers this way, but knowing that it’s possible is enough really.
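In LaTeX terms, with S as the successor function:

$$a + 0 = a, \qquad a + S(b) = S(a + b)$$

$$a \times 0 = 0, \qquad a \times S(b) = a + (a \times b)$$

so that, for instance, 2 + 2 = S(S(0)) + S(S(0)) = S(S(S(S(0)))), which is 4.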

Dyscalculic Programming

If you have a language which can express all of these axioms, you have a system which can do most arithmetic. And this is where I had my epiphany, just last week: you could have a programming language which didn’t initially use numbers at all, because numbers could be defined in terms of these axioms instead, along the lines of the sketch below. It would be difficult to apply this to FORTH, because FORTH uses sixteen-bit signed binary integers as its only data type, but I don’t think it’s impossible, and it would mean there could be a whole earlier and more primitive programming language which doesn’t initially even use numbers. This is still difficult and peculiar, because all binary digital computers, so far as I know, use sequences of zeros and ones, making this rather theoretical. It’s particularly hard to see how to marry it with FORTH.
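A minimal sketch of what that dyscalculic language might feel like, in Haskell rather than FORTH and purely as an illustration: there isn’t a numeral anywhere below, only the two constructors the Dedekind-Peano axioms demand.

data Nat = Z | S Nat deriving (Show, Eq)

-- Addition straight from the axioms: a + 0 = a, a + S(b) = S(a + b).
add :: Nat -> Nat -> Nat
add a Z     = a
add a (S b) = S (add a b)

-- Multiplication as repeated addition: a x 0 = 0, a x S(b) = a + (a x b).
mul :: Nat -> Nat -> Nat
mul _ Z     = Z
mul a (S b) = add a (mul a b)

For example, add (S (S Z)) (S (S Z)) evaluates to S (S (S (S Z))), which is the thing we usually call four.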

Proof Assistants

Well, it turns out that such programming languages do exist, and that they occupy a kind of nebulous territory between what are apparently called “proof assistants” and programming languages. Some can be used as both, others just as the former. A proof assistant is a language somewhat similar to a programming language, but one which helps the user and computer together arrive at proofs. I have actually used one of these without realising that that was what I was doing, back in the ’80s, when the aforementioned Nicholas Measor wrote an application for the VAX called “Citrus”, after the philosopher E. J. Lemmon (who incidentally died 366 days before I was born), whose purpose was to assist the user in proving sequents in symbolic logic. My approach was to prove them myself, then just go along to the VAX in the computer basement and type in what I’d proven, although it was admittedly helpful on more than one occasion. While using this, I mused that it was somewhat like a programming language except that it wasn’t imperative but declarative, and wondered how one might go about writing something like that. I also considered the concept of expressive adequacy, also known as functional completeness, in this setting, once again in connection with FORTH, realising that if the Sheffer stroke were included as a word in FORTH, a whole host of easy definitions could provide any bitwise function, as sketched below. It was also borne in upon me that all the logics I’d come across so far were entirely extensional, and that it might even be a distinctive feature of logic and mathematics per se that they are completely extensional in form. However, I understand that there are such things as intensional logics, and I suppose modality might be seen in that way, although I always conceive of it in terms of possible world semantics and multivalent truth values.
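To make the Sheffer stroke point concrete, here’s a sketch in Haskell (a FORTH version would define bitwise words in just the same way): one primitive, and every other connective built from it alone.

-- The Sheffer stroke: false only when both inputs are true.
nand :: Bool -> Bool -> Bool
nand a b = not (a && b)

-- Everything below uses nothing but nand:
neg :: Bool -> Bool
neg a = nand a a

conj, disj, impl :: Bool -> Bool -> Bool
conj a b = neg (nand a b)        -- a AND b
disj a b = nand (neg a) (neg b)  -- a OR b
impl a b = nand a (neg b)        -- a IMPLIES b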

It goes further than that though. I remember noticing that ALGOL 60 lacks input/output facilities in its standard, which made me wonder how on earth it was supposed to be used. However, it turns out that if you are sufficiently strict with yourself you can absolutely minimise I/O and do everything inside the program except for some teensy little bit of interaction with the user. This instinct, if you follow it, is akin to functional programming, a much later idea which enables you to separate the gubbins from how it looks to the user. And there are purely functional languages out there, and at this point I should probably try to express what I mean.

From Metaphysics To Haskell

Functional programming does something rather familiar. Once you entertain the possibility that programming can dispense with numbers as basic features, the emphasis shifts to operators and functions, which become “first-class citizens”. This, weirdly but then again not so weirdly, is exactly what happens in category theory. Haskell is the absolutely paradigmatic functional language, and it’s been said that when you program in it, it feels like you’re actually just doing maths. This approach lacks what you’d think would be a crucial feature of operating a computer, just as ALGOL 60 can’t actually print or read input, and such things are known as “side-effects” in functional programming. If a function does anything other than take its arguments, perform an operation on them and return the result, that’s a side-effect. Avoiding side-effects makes it easier to formally verify a program, which links back to the mu calculus.

I’ve now mentioned Haskell, which brings me a bit closer to the title of this post, and now I’m going to have to talk about monads. Monads are actually something like three different things, and it’s now occurred to me that if you put an I at the start rather than an M you get “ionad”, which gives me pause, but this is all quite arcane enough. Firstly, Leibniz’s metaphysics prominently features the concept of monads. In 1714, he brought out a ninety-paragraph book called ‘The Monadology’ setting out his beliefs. It wasn’t originally his idea, but he developed it further than others. Leibniz’s monads are indivisible units of reality which have no smaller parts and are entirely self-contained, though not physical like atoms. Anything that changes within a monad has to arise from within itself – “it has to really want to change”. Since monads don’t interact, there’s an arrangement called “pre-established harmony” whereby events in each monad are destined to coincide appropriately. I mean, I think this is all very silly, and it arises from Descartes and the problem of the interaction of soul and body, but it’s still a useful concept and got adopted into maths, specifically into category theory. There, it’s notoriously and slightly humorously defined thus: “in concise terms, a monad is a monoid in the category of endofunctors of some fixed category”, and this at least brings us to the functor. A functor is a mapping between categories. Hence two different fields of maths might turn out to have identical relationships between their elements. It’s a little like intersectionality in sociopolitical terms, in that, for example, racism and sexism are different in detail but are both forms of marginalisation; the difference is that intersectionality is, well, intersectional, meaning that different kinds of oppression do interact, so it isn’t quite the same as either a monad or a functor. Finally, in Haskell a monad is – er. Okay, at this point I don’t really know what a monad is in Haskell, but the general idea behind Haskell was originally that it was safe and also useless, because you could never get anything into or out of a program written in it. This isn’t entirely true, because such a program does do work in a thermodynamic sense: take a computer which is switched on but doing nothing and run a Haskell program on it, and the machine does get at least slightly warmer. That is, the program does things to the data already inside the computer which you can never find out about; it’s entirely self-contained and does its own thing. So that’s all very nice for it, but rather frustrating, and just now I don’t know how to proceed with this exactly, except that I can recognise that the kind of discipline one places oneself under by not knowing how one is going to get anything onto the screen, out of the speakers or off the keyboard, trackball, mouse or joystick has the potential of making one’s programming extremely pure, if that’s the word: operating in an extremely abstract manner.
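I can at least show the discipline this enforces, even if I can’t define a monad. A minimal sketch, with the function and its name my own invention purely for illustration: all the work happens in a pure function, and the only contact with the world is quarantined inside main, which lives in the IO monad.

-- Pure: the same input always gives the same output, with no side-effects.
collatzSteps :: Int -> Int
collatzSteps 1 = 0
collatzSteps n
  | even n    = 1 + collatzSteps (n `div` 2)
  | otherwise = 1 + collatzSteps (3 * n + 1)

-- Impure: reading the keyboard and writing the screen live inside IO.
main :: IO ()
main = do
  line <- getLine
  print (collatzSteps (read line))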

Home Computing In The 1960s

I do, however, know how to proceed with what I was thinking about earlier. There is some tiny vocabulary of FORTH, perhaps involving a language which defines numbers in the terms outlined above, which would be simple enough to run on a very simple computer, and this is where things get theoretical, because according to Alan Turing, any general-purpose computer, no matter how simple, can do anything any other computer can do given enough time and resources. This is the principle of the universal Turing machine. Moreover, anything a Turing machine can compute can equally be expressed in the lambda calculus, Alonzo Church’s equivalent language of pure functions.
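The lambda calculus even conjures numbers out of nothing but function application, which loops back nicely to the numberless language above. A sketch of Church numerals in Haskell: the numeral n simply means “apply a function n times”.

type Church a = (a -> a) -> a -> a

zero, one, two :: Church a
zero _ x = x        -- apply f no times at all
one  f x = f x      -- apply f once
two  f x = f (f x)  -- apply f twice

-- The successor wraps one more application of f around n.
suc :: Church a -> Church a
suc n f x = f (n f x)

-- Convert back to an ordinary Int to inspect a result:
toInt :: Church Int -> Int
toInt n = n (+ 1) 0

Here toInt (suc two) gives 3.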

Underneath the user interface of the Jupiter Ace sits the Z80A microprocessor. This has 694 instructions, which is of course quite a bit more than the 140 words of the Jupiter Ace’s primitive vocabulary. Other processors have fewer instructions, but all are “Turing-complete”, meaning that given enough time and memory they can solve any computable problem. In theory a ZX81 could run ChatGPT, just very, very, very slowly and with a very big RAM pack. So the question arises of how far down you can strip a processor before it stops working, and this is the nub of where I’m going, because actually you can do it with a single instruction, and there are even choices as to which instruction’s best.

The easiest one to conceive of is “subtract and branch if negative”. This is a machine whose every instruction has two operands in memory. The first points at a number, which the machine subtracts from the number it already has in mind (the accumulator). If the result turns out to be negative, it jumps to the address given by the second operand; otherwise it just moves on to the next instruction and repeats the operation, as in the toy emulator below. It would also save space on a chip if the working values were stored in memory rather than on the chip itself, so I propose that the program counter and accumulator, i.e. where the data are kept, live in main memory too.
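Here’s that toy emulator in Haskell. The encoding is my own guess at a workable layout, not any real CPU’s: instructions are consecutive pairs of cells, the address of the value to subtract followed by the address to jump to if the accumulator goes negative, with a negative jump target meaning halt. For brevity the accumulator and program counter are function arguments here rather than the memory cells proposed above.

-- run memory programCounter accumulator; returns the final accumulator.
run :: [Int] -> Int -> Int -> Int
run mem pc acc
  | acc' < 0 && tgt < 0 = acc'                  -- branch to a negative address: halt
  | acc' < 0            = run mem tgt acc'      -- went negative: take the branch
  | otherwise           = run mem (pc + 2) acc' -- otherwise fall through to the next pair
  where
    acc' = acc - (mem !! (mem !! pc))  -- subtract the value the first operand points at
    tgt  = mem !! (pc + 1)

For example, run [2, -1, 7] 0 3 subtracts mem!!2 (i.e. 7) from 3, goes negative, branches to -1 and halts with -4.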

Moreover, I’ve been talking about a single instruction, but in fact that instruction can be implied. It doesn’t need to exist explicitly in the object code in memory. Instead it can be assumed, and the processor will do what the instruction demands anyway, so in a way this is a zero instruction set CPU.

What this very simple computer does is run a program that emulates a somewhat more complex machine, which runs the stripped-down FORTH natively. This is where it gets interesting, because the very earliest microprocessors, the 4004 and 4040, needed more transistors than this machine would, and it would’ve been entirely feasible to put it on a single piece of silicon in 1970. This is a microcomputer like those found in the early ’80s, built from technology and concepts which existed before the Beatles split up.

This is of course a bit of a mind game, though not an entirely useless one. What I’ve basically discovered is that I already have a lot of what I need to know to do this task, but it’s on a level which is hard to apply to the problem in hand. But it is there. This is the way in.

Orwell’s ‘Nineteen Eighty-Four’

This post’s title might be a bit confusing, coming as it does straight after the last one, so this might end up being even less read than usual due to people thinking it’s the same post. It isn’t. I’m also doing all of this from memory, without re-reading or re-watching anything, so I’m hoping I’ve got it right.

There was a time before I read ‘Nineteen Eighty-Four’, and it was before 1984. My image of it was very different from what it delivered. I imagined it would be futuristic and somewhat like ‘Brave New World’, which I think I read first. There are ways in which it is, from Orwell’s perspective anyway, and there is advanced technology in it, though not often in the way that might be expected. I think for someone who’s read neither, at least in the 1970s CE, the two novels are conceptually smushed together as weird high-tech dystopias without much distinction between the two. In fact I once came up with a fan theory to convert Orwell’s world into Aldous Huxley’s, which went on to become the world of the Eloi and Morlocks in H. G. Wells’s ‘The Time Machine’, but that’s the not-very-literary kind of tinkering of which I’m fond and which probably bores most people, and it can’t be done without altering details of Huxley’s back story unless that’s unreliable in-universe. Once I’d read it, I had to rewrite history with authentic memories.

Winston

With the exception of ‘Coming Up For Air’ and presumably ‘Animal Farm’, which I haven’t read, Orwell’s central characters are generally similar to himself, both psychologically and physically. Winston Smith is no exception. In fact, since Orwell was basically dying at the time, Winston is also not a well man; his varicose ulcer in particular gets mentioned a number of times. However, he’s also transposed down in history and some of his experiences are therefore inevitably different. He’s divorced, feels guilty about betraying his mother and sister, and is living in the aftermath of a nuclear war. He’s also complicit in the regime, like all Outer Party members, his job being to rewrite history to accord with the current party line. Orwell was involved in the wartime BBC propaganda effort, working from Room 101 of course, and I presume this reflects his ambivalence about that work. However, Winston is far more heavily coerced than the author was. He’s constantly surveilled, like all of the Outer Party. Incidentally, it’s notable that the proles are not surveilled to the same extent and seem to have a lot more fun than he and his colleagues do. It’s been said that fascist regimes rely very much on the middle class to succeed, so this may be why, and the low level of education among the poorest is accompanied by a lack of political awareness. The working class don’t come across very positively in this novel, and unfortunately, given the attitudes stereotypically associated with them in England today, the contempt for them continues. Orwell had seen their lives from the inside and it made him pessimistic about the idea that they could be the source of any revolutionary activity. This doesn’t sit well with me even while I suspect it’s often true. However, they’re not a monolith and different people have different attitudes and values.

Novel-writing machines

Julia, Winston’s love interest, works on the novel-writing machines and is of course mainly seen from his perspective in the novel. Recently, the novel ‘Julia’ has attempted to tell the same story from her viewpoint, which also helps the reader see Winston from outside. Julia disguises herself as an enthusiastic member of the Anti-Sex League, and this among other things provokes the thought that the whole society is built on dishonesty and bad faith. Everyone is encouraged to think that everyone else loves Big Brother. The concept of the novel-writing machine is interesting because it doesn’t seem to fit technologically. The trope arises repeatedly in science fiction and outside it – I think Roald Dahl uses it, and Jonathan Swift does too – and I suppose it’s the author’s nightmare; since Orwell seems to have been trying to cram everything he hated into the world of ‘1984’, it finds its place there. At the time of writing, though, it must’ve seemed completely impossible, and it seems out of place in the general grimy, low-tech atmosphere of Airstrip One. The solution, I think, is that the Party invents whatever it needs to keep the populace in check, whether propaganda or some other kind of technology: where there’s a will, there’s a way. It also makes me wonder whether technology is potentially much more advanced than what the common people see in day-to-day life, and whether they only get to avail themselves of it when that helps Ingsoc. This theme is also visited in ‘Brave New World’, where it’s openly admitted that technology is deliberately held back. Focussing on the very obvious thing which hasn’t been said yet: yes, this is AI chatbots, and they absolutely can produce stories of poor quality with lots of clichés and stereotypes in them, which is exactly what the machine-produced writing in ‘1984’ is like. Song lyrics are also written by machine, if I remember correctly. As in the real world, the fun creative thing which people actually want to do is taken away from them and they’re left with drudgery. Creativity would be subversive, of course. Another aspect of this is that Newspeak is quite mechanical in nature, and it might be easier to mechanise textual production in it than in English, but I’ll return to that later.

Telescreens are the most obvious bit of tech in the novel. Supplemented by microphones, they ensure that nobody outside the Inner Party can go unobserved. In a humorous note, the gym instructor can see Winston failing to do his physical jerks and criticises him through the telescreen. Anthony Burgess, incidentally, provocatively stated that “‘1984’ is essentially a comic book”, but what he seems to have intended by that, apart from being edgy, which I think is probably his main motivation, is that Orwell took the immediate post-war situation in Britain, with its austerity and rationing, and extrapolated it over almost four decades, leading to a caricature which might not have been meant to be taken entirely seriously. In my desire to make sense of the technological minutiae of the novel, which is never entirely absent from my mind, I’m given to wonder whether telescreens use cathode ray tubes like the televisions of the time or whether they’re flatscreens which work in a handwavy way, because there are enormous public telescreens in places like Victory Square, which suggests to me that there must be a massive long tube behind them the size of Nelson’s Column or something.

The other notable bit of technology in the book is the machine used to torture Winston during his interrogation. Probably like you, I’m not sure I want to go there in too much detail but it seems able to read his mind and there’s a quantitative rating system which reminds me of electric shock therapy for some reason. I get the impression that the machine can fix transitory thoughts in the mind before doubts set in.

The nature of truth

My English teacher once observed that the novel is as much a philosophical treatise as a work of fiction. This was before I’d formally studied philosophy, so it was presented to me before I had fully formed and thought-through ideas about such things, but the main issues seem to be those of history and truth, or perhaps the relationship between language, thought and experience. There’s an incident during Winston’s interrogation where O’Brien burns a piece of paper and says he doesn’t remember it. Winston has some difficulty conceiving of how he can refer to something which he claims not to remember. This is of course doublethink: being able to hold two contradictory beliefs at the same time. The idea seems to be not only that one outwardly expresses contradictory propositions but that the actual mental activity involves sincerely embracing the contradiction. It isn’t even a question of some thought being required to reveal the contradiction: it’s just there, blatant, as an object of one’s attention. There’s a theme throughout the novel that the indoctrination goes all the way to the centre of the mind.

This relates to the Party’s hostility to orgasm. An orgasm is a subjective experience, often ecstatic, over which the Party has no control. It can make the outside world as drab as it likes, but because orgasm is generally seen as pure pleasure, often shared between people, it has to be eliminated. There’s no control over it. It’s also possible that the existence of orgasms in such a stark world would reveal that things could be better in other ways too because of the contrast. Beyond this though, it seems to be control for its own sake, and it’s what the Anti-Sex League is about. It’s therefore a particularly telling contrast that Julia of all people is in that organisation. She is using doublethink against Big Brother.

Then there’s history. Winston is aware of the Party rewriting history to attribute the invention of the steam engine to Big Brother. He is himself involved in this activity. O’Brien’s burning of the paper is a reference to the immediate past.

Bad Faith

Parsons is Winston’s neighbour and colleague, and is scarily conformist in a very bad-faith kind of way. He and his wife, though not their daughter, have a deeply buried aversion to the regime, but they cover it with a veneer of approval, one which penetrates most of the way to the centre of their identity, though not quite all the way, and they won’t admit it even to themselves. Ingsoc has had more success with their daughter, who is no “oldthinker”. She bellyfeels Ingsoc because they have moulded her from birth, and she’s reminiscent both of the Hitler Youth and of the children who were to emerge in East Germany who reported their own parents to the government. She hardly belongs to the family and is really there as living surveillance. In a somewhat similar move to Winston’s betrayal as a boy, she betrays her father to the authorities with the possibly fabricated tale that he said “Down with Big Brother” in his sleep. Although this may be her lie, it could also be that this is really what Parsons said, because only in an unconscious state can he admit his abhorrence of his situation. Whatever actually happened, Parsons praises his daughter for turning him in before the rot had truly set in, that is, before he had to admit the truth to himself.

‘The Place Where There Is No Darkness’

The above is my favourite quote in the entire novel. Winston has previously dreamt that his boss O’Brien is his saviour, and O’Brien later appears to confirm this by letting him into what he presents as the inner circle of the resistance, which is illusory; what Winston is actually entering is the inner circle of the Party. He imagines that the place where there is no darkness is one of hope, but in fact it’s the Ministry of Love, where the lights are on all the time to prevent prisoners from sleeping, and where the light also penetrates their minds to reveal their secrets, deepest wishes and worst fears. Darkness in this context is simply anything Big Brother wants to get rid of, such as sexual pleasure and happiness in general. Although it’s not Orwell’s intention, I feel very much that this metaphor of light as evil and darkness as good is very productive, and it also reflects the fact that Oceania is an ethical photographic negative, as shown by slogans such as “Freedom Is Slavery” and “War Is Peace”.

Maintenance of hatred to distract and unify

A very familiar aspect of the novel is its emphasis on the need for an external enemy, whether Eurasia or Eastasia. Dorothy Rowe, the psychotherapist, used to concentrate very much on this idea, and I once went to a talk she gave on the subject, where she pointed out that soon after the Cold War ended, when many people expected a new era of peace, the first Gulf War ensued and we all of a sudden had a new enemy to distract us. During the real 1984, one recent manifestation of that enemy had been Argentina. Nowadays many people would say it was immigrants and asylum seekers, and here I have a question. Some people use this novel to defend what they see as the Free World against other agents and forces, such as what they call communism, while on the Left we tend to see it as being about totalitarianism and fascism in a more right-wing sense. It’s interesting that it should work so well in such a double-edged way. Orwell leads us to see that Ingsoc calls itself socialist when it clearly isn’t, and that would seem to accord with the general left-wing view of state capitalism as manifested in the Soviet Union and China, but it seems to work just as well the other way around. Recently we’ve had the “War on Terror”, which is more abstract but the same thing. Big Brother also regularly retcons the constantly alternating wars with Eastasia and Eurasia, more or less entailing that the three powers constantly shift between alliance and war, each needing the other two as enemies. This is a particularly vivid and relevant aspect of the novel today.

Newspeak

English is called “Oldspeak” in Oceania. The idea of Newspeak is twofold. One aspect of it, within the regime, is to close down thought and reasoning subversive to Ingsoc, but it also serves the purpose for Orwell of being ugly and unpleasant, and also kind of mechanical, requiring not deep thought but doublethink. There’s a third aspect to it which I’ll come to in a bit. I’m not entirely sure about this, but I have the impression that there are no capital letters. Winston doesn’t use them in his diary, which is in Oldspeak, and there are also no capital letters in Minitrue memoranda. Winston observes that someone using Newspeak speaks like a block of text with no spaces between the words, or it may be an aspect of simplifying the language while losing nuance – destroying it, actually. However, there are some capitals, such as “BIG BROTHER IS WATCHING YOU” and “INGSOC”. I’m sure I don’t need to go into much detail about the language if you’ve read the book. Orwell seems to buy very much into the Sapir-Whorf hypothesis that language shapes thought, and therefore that restricting language restricts freedom of thought. I don’t agree with this, and in fact the strong version of the hypothesis is, I think, largely discredited nowadays. Interestingly, to me, Suzette Haden Elgin tried to do the opposite by creating Láadan, a constructed language specifically geared to women’s experience, but later decided that it wasn’t actually any harder to articulate that experience in natural languages, although other women have taken up and developed her conlang and disagree. It does appear to be true that we think of things somewhat differently depending on the language we’re using: I found it much harder to express philosophical ideas in Gàidhlig than in English, and I don’t think that was just my lack of competence in the language.

The extra aspect I mentioned, and I’m not sure whether it’s intentional, is that the simplicity of Newspeak reflects Esperanto, which had reached its peak about fifteen years before the novel was written. In fact I have written a short story in Newspeak to explore this, set in a community where only Esperanto is spoken. I’m not aware of any other fiction written in Newspeak. In general, Esperanto was considered progressive at the time, so I have some difficulty reconciling this, but then Orwell was also like that – he engaged in doublethink himself to an extent, so maybe he was externalising a habit of mind. Zamenhof’s popular conlang had its momentum destroyed by fascism and Nazism.

Film Adaptations

To be fair, this should be called “The Film Adaptation”, because although several have been made, I have the 1984 version in mind. I found it very faithful in terms of the events. It would have been difficult to reproduce Winston’s thoughts verbatim, but at one point O’Brien bends down next to him in the torture chamber looking old and tired, and the text in the book reads “you are thinking . . . that my face is old and tired”. I was of course primed by having read it, but that does, I think, get very clearly communicated in the film. Michael Radford, the director, said that there was nothing in the film that wasn’t happening somewhere in the world that year, a very similar claim to Margaret Atwood’s concerning ‘The Handmaid’s Tale’, that nothing in it had not been done to women somewhere. Orwell seems to have anticipated that one day the technology would exist to keep tabs on people minutely, which by the time of the real 1984 already seemed to have gone too far, and has since then only gone further. In a review of the film from the time of its release, “Shoplifters Will Be Prosecuted” was said to be the “real” version of “BIG BROTHER IS WATCHING YOU”. That year, the Met had set up a bank of cameras at Brent Cross which could recognise the number plates of cars leaving and entering London by that route and cross-reference them with DVLA records in Swansea. That was over forty years ago now. There were also concerns about computers keeping track of credit card transactions and cheques. Nowadays, of course, everything is done by card or bank transfer and those worries seem trivial, which just shows how much we’ve normalised all this. MI5 had also just bought two ICL mainframes with 20 Gb of storage, which doesn’t sound like very much now, but compared to the 5 Mb which many hard drives could accommodate at the time it was a heck of a lot, and it had been done secretly – why? Another notable aspect of the film is that it shows nothing which didn’t exist in Orwell’s lifetime, so for instance IT is still based on valves. This leads to a little distortion in the story, particularly in the interrogation scenes, as they were clearly supposed to be more advanced than is shown on screen. Since Orwell’s central characters are self-inserts, John Hurt must have resembled him quite closely physically at the time, and I get the impression he must have starved himself to achieve that gaunt appearance. Apparently Orwell’s inspiration for the idea of altering back copies of ‘The Times’ came from the editing of ‘The Great Soviet Encyclopaedia’ in the 1930s under Stalin’s orders, in which articles on, for example, Trotsky were deleted and photos of scenes from the Russian Revolution airbrushed. Radford points out that for all the disquiet and woe of his situation, created by the Party itself, Winston actually seems genuinely to enjoy his job. Syme says that the destruction of words is a beautiful thing, and given that O’Brien says the only pleasure the Party wants to preserve is that of a boot stamping on a human face forever, both of them – Syme much more overtly, but Winston too – enjoy that aspect of their work in different ways. Syme is part of an effort to shrink English vocabulary to a size convenient for Ingsoc’s ideology, and Winston destroys words printed on paper by burning them. Other sources of pleasure are denied them.
During a break in filming, Radford watched a news item showing the Queen laying a wreath on the tomb of Jomo Kenyatta, who had fought to liberate Kenya from the British in the ’50s. At the time he had been painted as Satan incarnate by the media, but all of a sudden he was rehabilitated and revered. Not that he should or should not have been, but the complete volte-face is rather familiar. The year 1984 also saw the computerisation of much political campaigning, with, for example, the targeting of election leaflets on education at the addresses of parents of school-age children. All the stuff about our data being used to manipulate us is not new at all, although of course it’s become all-pervasive today.

A bit of an aside: there were two annoying body hair incidents in 1984, one connected with Nena’s armpits and the other with Suzanna Hamilton’s pubic hair, which was visible on screen. I didn’t give it a second thought at the time, but apparently more recent audiences have found it quite shocking and worthy of comment. To be honest this reminds me of the fillings in the mouth of the screaming woman at the end of ‘Threads’, who had been born into the post-nuclear world where there was presumably no dentistry, in that it really seems like a distraction from the real point of the film. You can shoehorn it in if you like, in that women in Airstrip One wouldn’t want to squander their paltry wages on razors to remove body hair, but in fact I very much doubt anyone at all in Britain was doing that in 1948. A few other things: Richard Burton’s health was failing at the time, and he took forty-five takes to do one of the scenes because he couldn’t remember his lines, so he really was very old and tired at that point. He actually died two months before the film was released, so I’m guessing it was his last movie. The scenes generally kept pace with the diary dates in the book, so the opening scenes, for instance, were filmed on 4th April. This meant, of course, that it couldn’t be released until late in the year. In connection with both the theme and the insistence on using technology contemporary with Orwell’s life, Radford wanted to film it in black and white, but Virgin refused, so instead the footage was put through a bleach bypass to give it the washed-out appearance it had in theatres. This added to the cost of production because it meant that silver couldn’t be reclaimed from the negative or positive prints.

Then there’s the peculiar issue of the music. The initial plan had been to use David Bowie because of his album ‘Diamond Dogs’, but he was too expensive, so Eurythmics were approached instead, and there is of course an album of their music for the soundtrack. However, all of that was Richard Branson’s idea and he hadn’t told Radford, who had hired Dominic Muldowney – and Muldowney ended up scoring the entire movie. Branson then vetoed Radford’s choice, and the result is that in the initial theatrical cut most of the music is Eurythmics’, although it seems rather quiet and brief most of the time, while some of it, for instance ‘Oceania, ’Tis For Thee’, which plays in the opening scene in the cinema after the Two Minutes’ Hate, is Muldowney’s. Some versions of the film on Blu-ray give viewers the option of choosing between soundtracks, but there’s also a DVD which only uses Muldowney’s, which I guess is much sought after because it’s out of print. Personally I like the Eurythmics soundtrack, but I think it reflects the kind of impression one has before one has read the book; the Muldowney version is much more in keeping with the atmosphere of the film, because Orwell didn’t foresee popular music going in the direction it in fact did.

The other thing about the film is its relationship to other near-contemporary works. In particular, Terry Gilliam’s ‘Brazil’ shares a very similar aesthetic, and Apple’s initial ad for the Mac is self-consciously very similar to the opening scene, although the ad actually aired months before the film was released, so the resemblance presumably comes from the shared source.

To conclude, it probably doesn’t need saying that there’s a lot that did need saying about this novel. When I tried to write an essay about it at school, I ended up just giving a detailed synopsis, because I felt it said what it did so well that it was practically impossible for me to rephrase it in any way which would be helpful, which is, I think, a general problem with literary criticism of sufficiently high-quality works. There may never have been a point when ‘Nineteen Eighty-Four’ couldn’t’ve been taken to describe the world outside the window; that’s equally true now, and that’s the true mark of the universality of a great work of literature.

Nineteen Eighty-Four and 1984

There you go: don’t say I don’t listen to my readers! I don’t want this to seem self-indulgent, so before I start I want to point out that it’s a response to a comment: someone said they’d like me to do something like this, so that’s what I’m doing.

Without tinkering with HTML, it seems difficult to provide links within a document in WordPress, so for now I’ll just give you a table of contents, to stop you being overwhelmed by the length of this post:


1. The Eternal Present

2. The Never-Ending... December?

3. George Orwell Is Better Than War-Warwell

4. My Secret Diary, Aged 16¾

5. A Collision With The Great White Whale

6. Armageddon

7. The Stereophonic Present

8. Harvest For The World

9. The Ending Story

10. Life Off The Fast Lane

11. Green Shoots

1. The Eternal Present

To me, the year 1984 CE is a kind of eternal present. I sometimes joke about this, saying that all the years after that one were clearly made up, and someone pointed out to me that that was highly Orwellian. In fact, though, all years really are made up: we just have an arbitrary numbering scheme based on someone’s incorrect guess about the birthdate of Jesus. And yes, here I’m assuming there was an historical Jesus, which, considering I’m Christian, is hardly surprising.

2. The Never-Ending... December?

There is a fairly easy if absurd way to make it 1984 still, which is just to have a never-ending December. It’s currently Hallowe’en 2025, in which case it’s the 14945th December 1984. This wouldn’t be a completely useless dating system, and I sometimes think we can conceive of time (in the waking sense: see the last entry) differently according to how we choose to parcel it up. Another way of making it 1984 would be to date years from forty years later – and no, that’s not a mistake, as there was no year zero in the Julian or Gregorian calendars. There was one in a certain Cambodian calendar of course, from 17th April 1975, where it was inspired by the French revolutionary Year One, the idea being that history started on that date because everything that happened before was irrelevant, being part of capitalism and imperialism I presume. My insistence that it’s always 1984 is the opposite of that, as I’m affectedly sceptical about anything happening afterwards. Coincidentally, I use a day-based dating system in my diary starting on 17th July 1975, only ninety-one days after the start of Year Zero, and I don’t actually know why I do this (there are other things to be said about Pol Pot which would reveal the over-simplification of this apparent myth). It’s based on the first personal dated entry in any notebook, and on my mother’s suggestion that I keep a diary, which I didn’t follow – strictly it’s the second dated entry, as the first is a series of measurements of a staircase, which isn’t really about anything personal. I’ve also toyed with the idea of Earth’s orbit being a couple of metres wider, which would make the year very slightly longer but would add up over 4.6 aeons (Earth’s age) to quite a difference; then again, if that were so, asteroid impacts and mass extinctions which did happen wouldn’t’ve, and others which didn’t might’ve, so it totally changes the history of the world if you do that. If the year were a week longer, it would now be 1988 dated from the same point, though a lot of other things would also be different, such as the calendar. It’s quite remarkable how finely-tuned some things are.
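In case anyone wants to check the December arithmetic rather than take my word for it, here’s a minimal sketch in Python; the dates are the ones given above, and the function name is just mine for illustration:

    from datetime import date

    def endless_december(today: date) -> int:
        # Day number within a December which began on 1st December 1984
        # and never ended; 1st December 1984 itself counts as day 1.
        return (today - date(1984, 12, 1)).days + 1

    print(endless_december(date(2025, 10, 31)))  # 14945

    # And the diary epoch of 17th July 1975 really is ninety-one days
    # after the Year Zero start of 17th April 1975.
    print((date(1975, 7, 17) - date(1975, 4, 17)).days)  # 91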

3. George Orwell Is Better Than War-Warwell

Although I could go on in this vein, I sense it might irritate some people, so the rest of this is going to be about my feeling of the eternal present, how 1984 actually was for me and my thoughts about George Orwell. I’m just telling you this in case you feel like giving up at this point.

I have habitually said that “George Orwell is better than War-Warwell”, a reference to Harold Macmillan’s paraphrase of Winston Churchill, and I wonder if Churchill is one of those figures who is always having quotes misattributed to him, like Albert Einstein. The trouble is, of course, that this is a practically meaningless phrase which I can’t do anything with, although Sarada has published a story with that title. I’ve read a lot of Orwell, though unlike most people who have, I’ve never read ‘Animal Farm’. It’s been suggested that if he’d lived longer, he would’ve moved to the Right and become a rather embarrassing figure like David Bellamy or Lord Kelvin, but of course we don’t know, and I don’t know what that suggestion is based on. He was known to be quite keen on the idea of patriotism though, so maybe it’s that.

Within the universe of his novel ‘Nineteen Eighty-Four’, we don’t actually know that it is that year. It does seem to be about that time, because Winston Smith was a small boy just after the end of World War II. But the Party is constantly revising history and is by then claiming that Big Brother invented the steam engine, so it seems entirely possible that it isn’t exactly 1984 and that new years have been written into history or removed from it – just maybe it’s always 1984 and has been for many years by that point. Maybe they simply want to save on printing new calendars, or are trying to perfect the year by repeating it over and over again. Maybe ‘Nineteen Eighty-Four’ is like ‘Groundhog Day’, and what we read is merely one iteration among many of that story. I’ve heard, although appropriately enough maybe this can’t be trusted, that Orwell simply arrived at the title by transposing the last two digits of the year in which he wrote it. While it’s possible to play with this, the truth is probably simply that he needed to give Winston enough time to grow up and reach his forties so he could tell the story.

It interests me that there was a somewhat jocular, artsy attempt to claim that a period called the 19A0s existed between the late ’70s and early ’80s and has been edited out of history, which is similar to the Phantom Time Hypothesis. I’ve written about both the 19A0s and the Phantom Time Hypothesis before, so if you want you can read about them there.

A slightly puzzling aspect of ‘Nineteen Eighty-Four’ is why its title is spelt out rather than written as figures, but it seems that this was common practice at the time. It’s one thing that everyone gets wrong about the book, as it’s almost always referred to as ‘1984’. I should point out that one reason I didn’t get any further than A-level with English Literature is that I experience an impenetrable thicket of associations whenever I consider mainstream creative works, which makes it difficult to respond meaningfully to them. In the case of Orwell’s novel, though, since it’s arguably science fiction, that kind of response might be more appropriate than usual, because it’s also how I respond to that genre and it seems more in keeping with that kind of imagination. I’m not alone in this, it seems: Orwell’s novel is analysed in such a manner by the YouTube channel ‘1984 Lore’. I myself used Newspeak to write a short story about a kibbutz-like community on another planet where everyone actually spoke Esperanto, to explore whether language restricts thought, portraying it in terms of the idea that it does.

4. My Secret Diary, Aged 16¾

My personal experience of the year 1984 represents a peak in my life. Note that it’s just one peak, neither the biggest nor the only one. It doesn’t overshadow the year of my wedding, or the births of our children and grandchildren, or anything like that. ’82 and ’83 are also significant in their own ways: ’82 I thought of as the “endless summer”, characterised by the nice pictures of young people in yellow T-shirts and long blond hair on the envelopes you got back from the chemists with the photos in them, and ’83 had been particularly poignant, but the year after those had long been the focus of attention in various circles by many people. 1984 opened for me hiding under a table in a suburban living room in Canterbury, whispering to my friend about when midnight would come. I was wearing a navy blue M&S sweatshirt whose inner flock was worn on the inside of the left elbow, a blue and white striped shirt with a button-down collar which I was only wearing because she liked it, and jeans which annoyed me by not having any bum pockets; she was wearing jeans which did have bum pockets and a white blouse with yellow check lines on it, but it was completely dark so neither of us could see anything. I was sixteen and had, naughtily, had a lot to drink considering my age, as had she. We eventually conjectured that midnight must have passed and I rang my dad, who came to pick me up and whom I immediately told I’d had some alcohol (Martini, Cinzano and a Snowball), which my friend saw as not only typical of my impulsiveness and indiscretion but also liable to get me into trouble, though it didn’t. The street lights looked rather blurry on the way home. Thus opened my 1984. A few days later I was back in the sixth form, and my friend Mark Watts, who was later to go on to found an investigative journalism agency and uncover a number of cases of child sexual abuse, informed me that it was vital that we didn’t fall for whatever spin the media were likely to put on it being the year named after that novel, and that whenever he referred to George Orwell it would be under the name Lionel Wise (Eric Blair – Lionel Blair; Eric Blair – Eric Morecambe – Ernie Wise), which was quite clever if also rather adolescent, which is what we were. We were all very conscious that it was 1984 at last. Annie Nightingale played David Bowie’s ‘1984’ and Van Halen’s ‘1984’ on her request show on the evening of New Year’s Day. I didn’t have a hangover, because I don’t get them. I asked my brother to record something off that show because I was about to go out again to see my friends, and it happened that the next track was Steve Winwood’s ‘While You See A Chance, Take It’, which I’d wanted to get on tape for years, but he cut it off halfway through the first verse. The machine on which that was recorded was a rapidly failing mono Sanyo radio cassette recorder, whose fast deterioration annoyed my mum, seeing as it was less than four years old, having been my thirteenth birthday present. Incidentally, I’m writing all this without reference to diaries or any other kind of record. I just remember it, plainly, clearly, in great detail, and I don’t know how this compares to others’ memories. My memories of much of the ’80s are as clear as flashbulb memories because they occur within my reminiscence bump. There are errors, such as the exact name of the Steve Winwood record, but also a lot of clarity.
Anyway, later that year, on my seventeenth birthday, 30th July, I got a stereo boom box, possibly a Sony, on which I first recorded on 8th August: Tracey Ullman’s ‘Sunglasses’, followed by ‘Smalltown Boy’. In September, I got my first job, as a cashier at the new Safeway, which looked enormous to me at the time but which, on returning to the Waitrose it now is, seems really tiny nowadays. I lost the job after eleven weeks for being too slow on the till, not assertive enough to turn people away from the “Nine Items Or Less” (now “fewer”, apparently) queue, and £2 out on the cashing up on two occasions. Apparently this was a lot stricter than other places, such as Lipton’s, where my sister worked; she was much further out than I was on many occasions when she first worked there. I could say more about her situation there but probably shouldn’t. Anyway, I got £1.41 an hour from Safeway, which I saved up to buy the first big item I’d ever got for myself: a Jupiter Ace microcomputer. Which brings me to computers.

I was very into computers in the early to mid-’80s, but also deeply ambivalent about them. At the start of the year, the family had owned a ZX81 for a year and a bit. I found this annoying because it was such a low-spec machine, but restrictions fuel creativity, so it was in fact not a bad thing. I was spending a lot of my time reading computer magazines and wishing I had a better computer, which I resolved late in the year, and also writing software, mainly graphically-oriented, which was difficult considering that our computer only had a resolution of 64×48, although I was later able to increase this to 192 on the Y-axis by pointing the Z80A’s I register at something other than the character set, so I could make bar graphs which looked quite good. I also wrote a computerised version of Ramon Llull’s ‘Machine That Explains Everything’, a couple of primitive computer viruses and an adventure game. Later on, after I got the Jupiter Ace, I got it to display runes and produce screeds of nonsense words in Finnish. As I said, though, I was ambivalent. I’ve never been comfortable with my interest in IT, for several reasons, and for more reasons at this point. One was that at the time I was communist, and also kind of Stalinist, and felt that the use of IT and automation, as fuelled by the microchip boom, would create massive unemployment and reduce the power of the workers to withdraw their labour – though it isn’t clear to me now why my not having a ZX81 would’ve made any difference to that. In the middle of the year, I decided that communism was over-optimistic, and there was a brief period during which people were very eager for me to adopt their views, but I quickly opted for Green politics. I was not yet anarchist and believed in a Hobbesian state of nature. Besides this perspective, I was also uncomfortable about my interest in computers because it seemed nerdy, something very negative at the time, and unbalanced – obsessive and not “humanities” enough for my taste. It felt too much like my comfort zone and not challenging enough. It did, however, become apparent that I had spent so much time studying computers, with textbooks as well as mags and experimentation, that I could easily have aced the O-level, which was another example of how my formal educational focus was outside educational institutions at the time, and it was also suggested that my aforementioned friend from under the table, who was trying to learn BASIC at the technical college, would’ve welcomed my teaching her. This got to the point where I helped her with her homework. On another occasion, an acquaintance was trying to write a FORTH interpreter in Z80 assembler and I looked through it with interest. One of my other friends later went on to write parts of the major GNU text editor “religion” Emacs, already almost a decade old by ’84, which I still use today. Even so, my interest in computers made me feel embarrassed, self-conscious and somewhat ashamed. I think I found a lot of my interests at the time to be very personal and not something I felt comfortable sharing with others.

It was also the year of perhaps my most significant cultural shift. I entered it enthusiastic about mainstream literature and poetry, although I had been warned by my O-level English teacher that A-level English Lit was likely to spoil my appreciation of reading, and this did in fact happen. Early in the year my enthusiasm continued, and I enjoyed reading poetry and literature. I planned to continue my writing on the works of Samuel Beckett as part of my A-level, and the fact that we were studying Joyce gave me optimism in that regard; we had a fair bit of freedom to do that kind of thing. In the summer exams, my practical criticism of a particular poem was chosen as a model answer for others to emulate, and I was able, for example, to uncover themes in poetry which my teacher hadn’t noticed, mainly due to my insistence on maintaining a wide education. I was applying to university in the later part of the year, having researched universities in the earlier part, and having opted for degrees in English and Psychology or Philosophy and Psychology, I was clearly sufficiently committed to English at the time to consider it as part of a first degree. However, all of that was about to go to shit.

5. A Collision With The Great White Whale

It may be worth analysing what went wrong in some depth, but the simple facts of how it happened were as follows. My A-levels were in English, RE and Biology, which I want to stress is a very popular combination. At the end of the first year, around June, there was a marine biology field trip, in itself quite formative for me because I didn’t relish getting stuck in the stinky, sticky, black tarry mud, the product of anaerobic respiration, in Pegwell Bay, an estuary on the edge of Thanet. It was cold and wet, the water was of course salty, and I thought I’d ruined that sweatshirt I mentioned earlier, which I was once again wearing. My dissatisfaction was palpable. Anyway, the English department assumed that those who were off on the field trip would learn of their summer reading assignments, possibly from their friends. These were to read James Joyce’s ‘Dubliners’ collection and Herman Melville’s ‘Moby Dick’. I didn’t get that information, didn’t talk about the assignments with my friends because it wasn’t a priority for us, and was consequently confronted with reading an absolute doorstep of a book plus much of the Joyce, which was less of a problem because, being short stories, it was easy to catch up with. I was then confronted, on reading Melville’s novel, with a load of American men murdering whales for a living. At that point I wasn’t even vegetarian, but I did, like a lot of other people, believe in saving the whale. Over my childhood I’d read a lot of story books about animals, like ‘Ring Of Bright Water’, ‘All Creatures Great And Small’, ‘The Incredible Journey’, ‘Bambi’, ‘Watership Down’ and ‘A Skunk In The Family’. Of course there was peril in these, and also horrible deaths on occasion, not to mention sad endings, but the focus was on the otter, the bovines, dogs, cats, deer, rabbits and skunk. There’s no problem with depicting animals being treated badly, suffering and so forth. But in ‘Moby Dick’ there is never any sympathy for or focus on the experience of the whales, or acknowledgement of them as victims, rather as with the people who had lived in North America before White colonisers turned up. It was all about something else, and there wasn’t just an elephant in the room but a whale. I was unable to bring myself to step into Ishmael’s or anyone else’s shoes. The only bits I could tolerate were the encyclopaedic sections. I could go into more depth here. I think Melville was probably trying to make a whale-sized book, was using the whale as a metaphor for the intractable and incomprehensible nature of, well, nature and the world in general, and as a tabula rasa, the whale being white like a blank piece of paper; and there’s the angle that the whale is in some way a phallic symbol. Ahab also anthropomorphises the whale, seeing them as a rival in a battle, when in the end the whale is just the whale and doesn’t even realise that the tiny figures above lobbing harpoons at them are conscious beings. From the novel’s perspective, the whale probably isn’t even a conscious being. Hence I was confronted with what I read as a hostile, nasty and animal-hating, or actually animal-indifferent, story in which I couldn’t work out whether any of the characters were supposed to be sympathetic and, moreover, the only chapters I could actually garner any interest in were dismissed as mere padding by my teachers. I also found, for some reason, that the approach I’d been taking to poetry up until the summer no longer seemed to work.
It probably didn’t help that one of my teachers was a frustrated Classics teacher, who later left and taught that subject at the King’s School, although I was interested in the classics she managed to shoehorn into the lessons, such as ‘Oedipus Tyrannus’, the ‘Oresteia’ and ‘Antigone’. I would say, though, that I really didn’t get on with the ‘Oresteia’ because I felt very much that it lacked universality. None of that was in the exams of course, but I was never very oriented towards those. I was more just interested or not.

The autumn of the year was marked mainly by anxious procrastination over my UCCA form, which I handed in a month late due to indecision about what to put in my personal statement. That statement wasn’t up to much, partly because I didn’t want to admit what I was interested in, and partly because the shame I felt about it meant I hadn’t pursued it in any public way. I also got annoyed with universities insisting on being put first, so rather than selecting places I actually wanted to go to, I deliberately listed Nottingham, Reading and Exeter, followed by Sheffield, in which I was in fact fairly interested – though my first choice, Keele, I was very keen on due to the balanced and eclectic nature of their educational approach. I got rejected by all of them except Keele and Sheffield, Exeter apparently by return of post. Among the polys, I applied to Hatfield, Oxford and NELP, and would in fact have got into NELP. I liked the modular nature of the course at Oxford, which appealed to me for the same reason Keele did.

6. Armageddon

Another association which arrived in 1984 and which has been with me ever since is the idea of “proper Britain”. I may have mentioned this before, but the notorious nuclear holocaust drama ‘Threads’ was broadcast on 23rd September 1984, notable for being the first depiction of nuclear winter in the mass media, and I remember being edgelordy about it by saying to my friends that it was over-optimistic. I was ostentatiously and performatively depressive at the time and didn’t actually feel that, but my takeaway from the film was probably unusual. There’s a scene at the start where Ruth and Jimmy are canoodling on Curbar Edge above Hope Valley which really struck me. It was grey, drizzly and clearly quite cold, even though I think the action begins in May. There’s also the heavily built-up large city of Sheffield, where I might be going in a year or so, and it suddenly crystallised my image of what Britain was really like. Not the South with its many villages and small towns densely dotted about and its relatively dry and sunny weather, which I was used to, but the larger block of large post-industrial cities with redbrick terraced houses, back-to-backs, tower blocks and brutalist municipal architecture, set against a background of rain, wind and greyness. I relished that prospect, and it felt like real Britain. This is how the bulk of the British population lives, and it becomes increasingly like that the further north you get, hence my repeated attempts to move to Scotland, which in a way I feel is more British than England because of many of those features. By contrast, if you go from Kent to France it’s basically the same landscape and climate with different furniture. Maybe a strange reaction to a depiction of nuclear war, but there you go.

I did, however, also feel very much that it would be strange and foreign to move away to an area dominated by Victorian redbrick terraced houses. I couldn’t imagine that they’d ever feel like home to me and I couldn’t envisage settling down there. I was still very much a Southerner at that time. I was also, however, fully aware of the privileged bubble I was living in and it made me feel very awkward.

Nor am I ignoring the actual content of the film. The Cold War and the threat of nuclear destruction loomed large in many people’s minds at the time, and destruction almost seemed inevitable. This made even bothering to make plans for the future seem rather pointless, almost like busy work. We all “knew” we were going to die horribly, as was everyone around us, so doing the stuff I’ve mentioned, like applying to university, sometimes seemed, depending on my mood, more like a distraction from that worry than something with an actual aim. This had a number of consequences. One is that I wonder if a lot of Gen-Xers underachieve because, expecting the world to end at any time, they missed out on pushing themselves into things in their youth. Another is that as the ’80s wore on, pop music and other aspects of popular culture began to reflect that anxiety. Ultimately even Squeeze (basically) ended up producing an eerie and haunting post-nuclear song in the shape of ‘Apple Tree’. Alphaville’s ‘Forever Young’ particularly captures the attitude and is widely misunderstood: the reason we’d be forever young is that we’d never get a chance to grow up and live out full lives. That single was released a mere four days after ‘Threads’ was broadcast.

7. The Stereophonic Present

Speaking of music, there were something like four bands in the Sixth Form at that point, the most prominent being The Cosmic Mushroom, clearly influenced by the Canterbury Scene even in the mid-’80s. My own approach was to concentrate on cassettes, because I didn’t trust myself to take proper care of vinyl. The advent of proper stereo in my life came on my birthday at the end of July, and there’s something vivid and recent-sounding about all the stereo music I own for that reason. This is in fact one factor in my feeling that 1984 is current rather than in the past: the present is characterised by clear, stereophonic music, the past by lo-fi mono, and that switch occurred for me in the summer of that year. It’s actually more vivid to me than the earlier shift from black and white to colour TV. Incidentally, CDs were out there for sure, but only for the rich, having first been released two years previously. Like mobile ’phones and jug kettles, they were a “yuppie” thing. Back to music. Effectively the charts, and my perception of them that year, were dominated by ‘Relax’ by Frankie Goes To Hollywood. Released the previous November, it entered the charts in early January, got banned as it climbed them, which boosted its popularity enormously and got it to number 1, and stayed in the Top 100 until April the next year. We played it at the school discos, the other standard being ‘Hi-Ho Silver Lining’, which we all used to sing along and dance to. My personal preferences at the time included The The, Bauhaus and The Damned, although my appreciation of the likes of Kate Bush continued.

8. Harvest For The World

On 24th October, the famous Michael Buerk report on the famine in Ethiopia was broadcast. This led over the next couple of years to Live Aid and Run The World, but from that year’s perspective it had only just begun. There’s been a lot of justified criticism of the media’s framing of the famine, but as a naive teenager I didn’t have much awareness of that and simply saw it as a disaster which required a response from me. That response was initially a sponsored silence for the whole school in the sports hall, then later a sponsored 24- or 36-hour fast supervised by one of my biology teachers, in which I also participated. Although I can’t really mention this without pointing out that the whole thing was dodgy, it did start a ball rolling which continued in much later political activism on my part, and a passionate youthful idealism about making the world a better place, which I felt confident had to come soon and meant action from me. ‘Do They Know It’s Christmas’ was a further effort in that campaign, satirised by Chumbawamba as ‘Pictures Of Starving Children Sell Records’ and roundly criticised by the World Development Movement, though at the time I knew nothing of this. By the way, it’s remarkable how that unpopular Chumbawamba cynicism managed to get from the political fringe into the mainstream in just a few years, with ‘The Simpsons’ parody ‘We’re Sending Our Love Down The Well’ only eight years later, although that was apparently also linked to a Gulf War song, itself in the charity-single tradition I first became aware of, superficially, that year. In fact I can’t overstate the importance of this sequence of events, even with its grubby and cynical connotations, and my support of it had a simplicity and innocence which in a way I wish I still had. I want the world to be one in which something like that works straightforwardly and simply. As I’ve said before, nobody is Whiter or more middle class than I am.

A rather different aspect of this is that I and someone called Louise almost got the giggles during the sponsored silence, and we both spent most of it – I think a whole hour – trying not to laugh. A while after that the same thing happened to the two of us in an English class, though on that occasion we gave in to it, and there was actually nothing provoking it at all. It then spread through the whole class. Once again in an English class shortly after that, the teacher, discussing ‘Moby Dick’ of course, unexpectedly took out a model of a sperm whale on wheels and rolled it up and down the desk, which again led to uncontrollable laughter. This was Thatcher’s Britain, yes, and most of us hated her, but it wasn’t grim or joyless, at least for seventeen-year-olds, and I actually managed to get some pleasure out of Herman Melville’s writing!

CND was very active at the time. I, however, was not, for a couple of reasons. I was slightly uncomfortable with the idea of unilateral disarmament – in fact it was the last of the standard lefty/Green causes I committed to – but I had a feeling they were right and wanted to go on the demos, though I never actually did. This is by contrast with the Miners’ Strike. Kent, like Northern France, was a coalmining area, and the strike was very close to us because several of my friends were from coal miners’ families. I asked what I could do, but nothing really came to mind. I was also aware of hunt sabbing but unable to work out how to find out about it. Had I got involved in that, I might’ve gone vegan years earlier than I did.

9. The Ending Story

Then there was cinema. My aforementioned friend from under the table rang me up one day and just said we should go and watch ‘Champions’ at the ABC. That cinema, incidentally, was managed by someone I later got to know when he and I both coincidentally moved to Leicester. The film, in case you didn’t know, as it may be quite obscure, was based on a true story about a famous jockey who had cancer and survived, and I was surprised that my friend then just spontaneously bet on the horses, something I’d never have dreamt of doing at the time because it was gambling. One impression I got from the film was that the jockey looked like Lionel Blair, which is the second time I’ve mentioned him today. At this time it was still possible to sit in the cinema for as long as you wanted while the same films, yes, films plural, played over and over again. This was actually the last year it was possible: the year after, I’d just finished watching ‘Letter To Brezhnev’ when the ushers chucked us all out. It was a real shock, and you don’t know what you’ve got till it’s gone. It meant that parents could use cinemas as babysitting services, though this may have been somewhat reckless by today’s standards. They did the same with swimming pools: Kingsmead had this going on, although specifically in ’84 I didn’t exercise much apart from walking eight miles, to school and back, every day. This lazy year ended promptly on 1st January 1985 with my New Year’s resolution to go running every morning.

‘Ghostbusters’ was also quite memorable. I took my younger brother to see it and wasn’t expecting the whole audience to shout the song when it came on. It’s a good film, with a memorable scene involving a fridge and an unforgettable line towards the end which is usually cut. It also mentions selenium for no apparent reason, and has Zener cards at the start. At the time, rather surprisingly, it seemed to be generally accepted even in academia that some people were psychic, and I often wonder whether it’s really good-quality research which has led to received opinion on this changing, or whether it’s just a reputational thing that psi is now widely rejected by academic researchers. The other major film I remember watching was ‘Star Trek III’, which is also very good. At the time there was no plan to bring Star Trek back; one of my friends considered the film a sequel too far, so it looked like the show was completely defunct and they were trying to revive it beyond all reason. I also saw ‘2010’, which I liked for incorporating the new findings about Europa, but it definitely lacks the appeal of the original. Incidentally, the long gap between the Voyager visits to Saturn and Uranus was underway, and the remaining probe wouldn’t arrive for another two years. The original ‘Dune’ also came out that year, and although I wanted to see it, I don’t think it came to Canterbury. Having seen it since, I wouldn’t’ve liked it at the time, and oddly I had the impression that it was in a completely different directing style and that it was a 3-D film. It may also have been the most expensive feature film ever made at that point. ‘1984’, of course, also came out then, but that deserves its own treatment. As other people of my age whom I’ve since got to know have commented, ‘The Neverending Story’ marked the first time I perceived a film as definitely too young for me, and in a way that realisation was the twilight before the dawn of adulthood for me.

10. Life Off The Fast Lane

Speaking of marks of adulthood, many of my peers were learning to drive and passing their tests at this point. Although I got a provisional licence that year and my parents strongly suggested I learn, I refused to do so for environmental and anti-materialistic reasons. Although I’ve had lessons since, I’ve never in fact got there, and I’ve also heard that an ADHD diagnosis can bar one from driving if it affects one’s driving ability. I’m not sure mine would, but I do think my dyspraxia is a serious issue there. 1984 is in fact the only year I’ve independently driven any motorised vehicle, namely one friend’s scooter and another’s motorbike. As with the underage drinking, it’s apparent that we didn’t take certain laws particularly seriously at the time, and I wonder whether that was just us, or our age, or whether things have changed since. I was dead set against learning to drive, and this was probably the first thing which marked me out as not destined to live a “normal” adult life. It has on two occasions prevented me from getting paid work.

Television didn’t form a major part of my life at the time. We couldn’t get Channel 4 yet, so the groundbreaking work done there was a closed book to me. ‘Alas Smith And Jones’ started in January and, incredibly, continued to run for fourteen years. I’d stopped watching ‘Doctor Who’ two years previously, when ‘Time-Flight’ was so awful that I decided it was a kids’ show and put it away. Tommy Cooper died on stage. The second and final series of ‘The Young Ones’ was broadcast. ‘Crimewatch UK’, which would eventually become compulsive but guilty viewing for Sarada and me, started. In a somewhat similar vein, ‘The Bill’ started in October; I used to enjoy watching it years later for the handheld camera work, which made it seem very immediate and “real” somehow. ‘NYPD Blue’ is like that too, incidentally, for other reasons. ‘Casualty’ was still two years in the future and ‘Angels’ had just ended, so I was in a wilderness of no medical dramas.

11. Green Shoots

Also, of course, the Brighton hotel bombing took place, and many of my friends felt very conflicted: on the one hand there was general sympathy and empathy for people being attacked, injured and killed, but on the other the victims were deeply hated for what they were doing. I’m sure this was a widespread feeling, and there is of course the band Tebbit Under Rubble, which very much expresses one side of that sentiment. The Greenham Common peace camp was in progress and a major eviction took place in March. Although I was later to become heavily involved in the peace movement, at the time I was still very much on the sidelines, although some of the people I knew were connected. I do remember thinking that computer and human error were major and unavoidable risks which meant that the very existence of nuclear arsenals was too dangerous to be allowed to continue.

Then there was the Bishop of Durham, and since I was doing an A-level in RE at the time, his stance was highly relevant. The Sea Of Faith movement, which promoted a kind of secularised Christianity, largely non-theistic or even atheist in nature, was in full swing, and foundations were being laid in my mind which I’d later build upon, but which, almost inexplicably, I allowed the high-control group I became involved in to demolish. Over that whole period, I was expected to read a newspaper of my choice and take cuttings on relevant religious and moral issues to put in a scrapbook, so my long-term readership of ‘The Guardian’ began a few months before this and persisted through the year. It cost either 25p or 30p at the time, and this was before colour newspapers had come to be. I had also been an avid Radio 4 listener since 1980, but unlike later I also listened to Radio 3 a bit, never really managing to appreciate classical music to the full.

This was also the year I finally decided I wanted to become an academic philosopher, and I still think I could’ve followed that through, though it didn’t happen. It was the end of a kind of winnowing process, probably connected to my dyspraxia, in which I became increasingly aware of practical things I simply couldn’t do; I’d been put off biology by the griminess and unpleasantness of field work, so philosophy seemed the way forward. That said, like many other people I was also very motivated to study psychology in an attempt to understand myself, and as you probably know, a lot of psychology undergraduates begin their degrees concerned about major issues in their own personalities, so in that respect I’m not unusual. I also presented two assemblies, one on existentialism and the other on the sex life of elephants as a parable of romantic love.

I feel like this could go on and on, so I’m going to finish off this reminiscence in a similar way to how I started. My emotional world revolved around the friend I was hiding under the table with at the beginning of the year, and our significance to each other mattered to both of us. About halfway through the year, when I had just visited her, she became concerned that we were going to be found alone in the house together by her parents, who were coming back unexpectedly, so I left by the back door and crept surreptitiously across the front garden, only to be stopped and “citizen’s arrested” by the next-door neighbour. This turned out to make the situation more embarrassing for her and me than it would’ve been if I’d just left when they came back. I don’t know whether any picture can be drawn of who she or I was at the time by putting those two incidents together.

I’m aware that I haven’t talked about Orwell’s book and its adaptations as much as I’d like, so that’s something I’ll need to come back to, and there are huge things I’ve missed out, but I hope I’ve managed to paint a portrait of my 1984 and possibly also yours. I may have portrayed someone who peaked in high school, but I do think tremendous things happened afterwards. 1984 is, though, the first foothill of my life, which makes it significant. It’s sometimes said that the reminiscence bump is only there because fifteen to twenty-five is the most eventful period of one’s time here, but maybe not. It’s hard to say.

Would The Afterlife Be After Life?

Someone – they know who they are, I think – made a stimulating comment on here which I picked up on this morning, and I thought it might be worth responding to, so here it is.

First of all, I should probably point out that when I say “afterlife” it could equally well apply to future reincarnation, and in fact I want to mention what’s on my mind regarding that too. I’ll start with an experience I had shortly after becoming Christian.

The high-control faith organisation I became part of at eighteen was very conventionally evangelically Christian, and people within it set out their own views about a Christian’s fate. A surprise might be in order at this point, because those views weren’t like the conventional picture of Heaven and Hell. In fact, I’ll start with that picture. The “demotic”, culturally Christian understanding of the fate of human beings is something like this: if you have enough good deeds, when you die your soul leaves your body and goes immediately to another realm which we call Heaven, and if you’re bad enough, your soul leaves your body and goes immediately to another realm which we call Hell. Heaven is an eternal place of reward and Hell an eternal place of punishment. Human experience continues after death in this form.

The above is basically never what reflective evangelical Protestants believe. There may be recent converts who believe it, or perhaps people who don’t particularly involve themselves in Bible study, small groups, quiet times and the like, though it seems likely to me that people in their church would pick up on that and encourage them. It’s also possible that, being of a more philosophical, and therefore perhaps more theological, bent than my born-again Christian peers at the time, I was led in that direction by discussion, and that it’s actually very common for believers never to have reconsidered the demotic idea; but so far as I know it is seriously not found anywhere in evangelical fundamentalist Protestantism. Nor is the rather silly idea that Heaven is above the sky and Hell below the ground.

It’s more like this, as I understand it. Humans are widely considered to be new creations at conception and to persist until death as a combined living soul and body unit. They are once again new creations if they make a commitment to Christ, i.e. become Christian, and some believe that humans are soul and body before conversion and soul, spirit and body afterward. On death, there is an interval during which individuals have no experience and are effectively asleep, a period referred to as “soul sleep”. At the Day of Judgement, humans receive a resurrection body which is perfect and incorruptible, again accompanied by their soul. They have memories of their life on Earth and proceed to be judged by God. If they have been saved, or would’ve been saved had they heard the Good News rather than not hearing it or having it distorted in some way, God conveys them to a non-Earthly realm where they live forever in bliss. If not, they are conveyed to another realm where they suffer forever. In either case, the soul is a new creation at conception which continues to be conscious, except when asleep, comatose or temporarily dead, experiencing time sequentially with a past, present and future whose quality does not change after death. In other words, they believe in an afterlife.

I can’t guarantee that I’ve got this right, and there’s likely to be a fair bit of variation in views among evangelical, Biblically literalist Protestant Christians: some of them probably believe exactly this, others don’t. Another set of beliefs about this is arguably more Biblical, and it’s what the Jehovah’s Witnesses hold. This is that humans are created, possibly from conception, as physical conscious beings who continue in consciousness until death. After death they cease to be conscious until God chooses to resurrect them, if they have died before the Day of Judgement, whereupon they are recreated as physical conscious beings in perfected bodies. After being judged, if they are not saved in the JW sense, and I’m not sure what that is incidentally, they simply cease to exist. If they are saved, they inherit the Earth as an earthly paradise, as physical conscious beings. There are some other complications, but that’s basically it as I understand it, and again it involves sequential consciousness: after being resurrected, we will recall our former lives as having happened already, and our experience will continue after the gap which began with our deaths, provided we’re saved of course. Jehovah’s Witnesses give the impression of being fundamentalist and conservative nowadays, but back in the day they were, as conservative evangelicals in the late 20th century CE have pointed out, actually liberal Christians, or rather descended from them. They don’t believe in a place of eternal conscious torment, in Jesus being divine, or in a separate heavenly realm; the Kingdom of God is on Earth for them. This is also reflected in their cosmopolitanism: JW Kingdom Halls are notable for their very representative congregations in ethnic terms – you can expect the same proportion of Blacks, Whites, and South and East Asians among them as in the communities they’re in – and they are also truly global rather than being restricted to the English-speaking developed world. They are of course also wrong, and a high-control spiritual organisation; many would call them a cult, and they’re sexist and homophobic. In my years-long discussion with JWs, the longer I conversed with them, the more convinced I became that they were wrong, both in how they interpreted the Bible and in more general terms. There is also much to admire in them, though, for instance their pacifism.

Getting back to my involvement in the high-control religious group when I was eighteen, I found myself encountering recurrent major problems with their beliefs. I may write about this elsewhere, but it’s not important here. With the way justice was supposed to be served, I had a couple of major problems. One is that I felt, and still feel, that saving souls for Jesus becomes a substitute for actually doing good in the world. Another is that too much emphasis was placed on repentance, to the extent that Hitler could repent and be saved while some paragon of virtue could go to Hell for not being Christian. Consequently, I decided to revive my old belief in reincarnation. I had a model of the spiritual universe like this: space-time extends infinitely, or at least vastly, in all dimensions, here meaning the three of space and the one of time. Outside that realm are souls, which for the purposes of the model are points. From these points radiate lines to every incarnation each soul ever experiences, a bit like a spider with a colossal number of legs. From each of these lives they learn important lessons, and their position outside space and time is informed by the sum total of their experiences. That’s how I saw spiritual reality at the start of my adulthood.

There are problems with this model. The most important one I perceived at the time was why we don’t seem to remember previous lives as aliens, or our future lives. If the enlightened oversoul to which we are connected in our incarnate lives doesn’t experience time the way we do, and if the Universe is so vast that the chances of being reincarnated in the immediate future or past as a human on this planet are extremely close to zero, we would expect to remember lives spent on different worlds, yet we don’t seem to. Moreover, since our eternal oversouls are not within time and reincarnation is not consecutive, there doesn’t seem to be anything stopping us from remembering future lives, unless we are blocked in general from remembering other lives. Although there are said to be cases of people remembering former lives, I’m not aware of anyone claiming to remember their own future lives, although there do seem to be cases of premonition.

I stopped believing in that model fairly early on. It was mainly an attempt to make sense of life and the world spiritually in a hostile environment, so when I left that environment I was able to let go of the belief. For a while I was dualist, i.e. I believed in a soul and a body which existed in the same sense: two concrete, equally real entities which interact. The problem with that view may be that it’s “not even wrong” – it can’t be discussed rigorously because it falls apart under the most cursory examination. I don’t object to the idea of a soul, but I don’t think it’s a ghost in the machine, and it’s worth digressing here into what I find a fascinating set of views held by some Christians.

Some Christians are physicalist, and I’d venture to say that some of them don’t realise they are, but would if they thought about it. The problem with soul sleep after death, followed by resurrection and consciousness with memory of a former life, is that there’s apparently nothing connecting the resurrected person to the historical figure they are supposed to be, and therefore no justice in either rewarding or punishing them, or saving or damning them, which is unfortunately not the same thing. God creates someone and they live out their life, either being a good or a bad person, or alternatively becoming Christian or refusing to do so. Then they die, and eventually nothing physical remains of them. At some point in the future, God recreates a seemingly identical person with a perfected body, the same personality as before and accurate memories of a former life. But this is, in a way, just God playing a game. This new, identical creation has not committed the sins or done the good deeds of the previous person, because there’s nothing linking them: they’re not the same person. They don’t deserve either good or bad treatment based on that previous person’s life, and no justice is served. Without a soul of some kind there can be no justice, because it means death is the end. Therefore most Christians would probably say there is such a thing as a soul, and they’d probably tend to think of it as a kind of phantom reflecting the person as they are in life, or perhaps a brilliant point of light or something. To his credit, my main interlocutor in the high-control group would not be drawn on defining the soul despite some suggestions I gave him, and with hindsight that could be the right attitude, although it might also mean he was worried that close examination would disintegrate his ideas. But as I said, Christian physicalism exists. Such Christians argue that Christian anthropology, i.e. its view of the nature of humans, has been inordinately influenced by Plato and his idea of the separation of soul and body. They further see the Bible itself as supporting the view that we are living souls, i.e. that the references to us being “living souls” in the Bible actually refer to our embodied, living selves rather than to something our bodies contain or are in some way connected to while we’re alive. Many would also claim that at no point is a disembodied human soul depicted in the Bible. Demons are, of course, and I’d also raise the question of Saul attempting to talk to Samuel via a medium, that soul being identified as Samuel rather than as a deceptive demon pretending to be him. They also see all this as more aligned with the findings of modern science and medicine. I don’t personally think they’ve succeeded in making any connection between the original body and the resurrection body. If I were to try that myself, I’d probably say the resurrected person is the same person created from something like a Platonic form: there’s the number 2, the word “two”, the digit “2” and the Roman numeral “II”, all of which refer to the one objectively existing and unique number 2. But it’s not really up to me to defend.

I do not believe in the human experience of sequential time except in waking life. I see our experience of time as one moment following another as confined to the sequence of days we live through awake, starting with our birth or perhaps before, and ending with our death or an irreversible loss of any kind of consciousness at the end of our lives. However, it isn’t that simple, and you’ve probably noticed that I’m obliquely referring to other states of consciousness, where matters are entirely different. The anti-theistic philosopher Daniel Dennett, of whom I’m not generally much of a fan, did make an interesting observation regarding sleep, which is that we don’t know that we experience dreams. It could just be that dreams are messes in our sleeping brains which our waking brains try to make sense of, although I don’t think that can be true because of the existence of lucid dreams, and of things like people talking in their sleep, sleepwalking and so forth, apparently acting out their dreams as they occur. Nonetheless, I have had an experience which suggested to me that dreams are not as they seem: I dozed off with the radio on and woke up a few minutes later, and the dream’s narrative began with the radio sounds from when I woke up and ended with the sounds from when I dropped off. The only way I can make sense of this in conventional terms is that my dream consisted of assembled and confused information present in my brain when I woke up, resulting from sleep, and my brain assembled it in the wrong order.

However, I don’t think it’s either/or, and I’m not the only person to believe this. Dennett’s belief that lucid dreams, i.e. dreams where the dreamer becomes aware they’re dreaming and takes control, are not experiences strikes me as his dogma about the nature of consciousness forcing him to absurd conclusions, and probably also reflects how he accounts for all consciousness, i.e. very badly. All that said, I think you can have it both ways, and here’s why: wakefulness has one attitude to reality and dreaming has another. It’s also feasible that all states of consciousness have their own unique attitudes. In particular, time doesn’t operate the same way in dreaming as it does in everyday life. I don’t want to go into too much depth here, but I once had an extremely detailed dream in which I see places and people I had no idea existed at the time, and that’s just one particularly notable incident among many. Dreams, I think, actually do sometimes foretell the future, and the only way I can make sense of this is to understand them as presenting temporal events in a different way to how they occur to the waking mind. This is certainly true in the case of past events, but my more extraordinary claim is that they also present events which haven’t yet occurred. Judged by how our thinking and consciousness operate when we’re awake, dreams are indeed not temporal events at all but just arbitrary patterns in our minds which we make sense of on waking; but that presentation and understanding is that of a wakeful, living brain, and is no more true or valid than the experience one has in another state of consciousness such as dreaming. It’s more like a three-dimensional cube being projected onto a flat surface and looking like a square or a hexagon: our minds when awake simply can’t do anything else with the experience. For that reason, I also think that dreams don’t occur while we’re asleep, which is one reason I narrate them in the present tense. What actually happens is that a conduit opens, at a particular point in our waking lives, to experiences which are no less valid or real in their own terms. There was never a time when the dream someone has at the age of forty wasn’t there: it exists outside sequential time.

J W Dunne took this approach, and it went on to influence J B Priestley and Olaf Stapledon among others. In ‘An Experiment With Time’, published in 1927, Dunne claimed on the basis of prophetic dreams that there are two time dimensions, only one of which governs our lives. Another level of consciousness occupies the other time dimension, and there is an infinite regress into higher and higher time dimensions. This is interesting but not quite how I see things. I think that when we’re both alive and awake we experience time sequentially, but that only makes sense within that state. Beyond it, time is different, and possibly indescribable and incomprehensible to us as we are now. Dreams are clues to this, but there’s a lot more to reality which they only hint at. Hence the question “what happens after we die?” is based on false assumptions about time. Death only occurs to our waking selves, and in fact it doesn’t even do that, because as far as that mode of our consciousness is concerned we always have a past, present or future. Death is not something we experience. I also find it entertaining, though maybe meaningless, to think of my life as an endless loop, operating within a general sea of consciousness but not limited to it, so maybe we live through our lives and go on to experience amnesia combined with death and rebirth into the same life, repeated infinitely. As well as the other people I’ve mentioned, the author Ian Watson has expressed the idea that the “afterlife” is a dream state in which Hell is the inability to dream lucidly and Heaven is lucid dreaming, which can however be induced in the damned, thereby liberating them from Hell.

Now for reincarnation. There seem to be two views of this. In one, we progress or regress in each life and are reincarnated accordingly. In the other, we simply reincarnate without any particular plan or direction. The former is the southern and eastern Asian view of the matter, and it’s possible that that view of reincarnation is more valid because of the Valeriepieris Circle, a circle drawn on the map around South East Asia:

This circle represents half the population of the world: more people live inside it than outside it. Interestingly, to me anyway, it includes the main area where people take the existence of reincarnation for granted. The reason this is interesting is that this area is also the one where people would be most likely to be reincarnated if reincarnation is real, so if there’s any evidence that people have lived before, for instance memories of former lives, that’s the area where it could be most easily verified or supported. If reincarnation is real, the most likely place for religions or other belief systems which accept it to arise is within that circle, and that is in fact what’s happened. It doesn’t prove anything of course. People would be less likely to experience it in large areas of tundra, desert or on oceanic islands, and of course the Abrahamic religions did arise in desert areas. It doesn’t mean people wouldn’t believe in it elsewhere, but it could be seen as evidence for it.

I’m not going to question the reality of people being able to remember things they “couldn’t”, because the memories appear to belong to someone else’s life. I’m prepared to accept that as at least a theoretical possibility, and I’m more interested in what it might imply. The most common interpretation is that someone’s soul lived out a life in one body, which then died, and the soul is now in another body, often that of a small child, who can remember some events from the previous life. However, that isn’t the only explanation, and it depends on the existence of a soul or persistent self, which may not be real. David Hume, some other Western philosophers and of course Buddhists hold that there is nothing you can point to which is “I”. Instead, there are simply experiences in a stream, linked by memories and anticipation. I don’t agree with this, for two reasons. One is that I believe a total loss of memory which didn’t otherwise injure a person, or if you like cloning or duplication, would still be followed by a person with a very similar personality. There are cases of identical twins separated at birth who have ended up almost duplicating each other’s lives unwittingly, even to the extent of getting a dog of the same breed and giving him the same name. The other is that you are the person others relate to or see you as: their parent, sibling, boss, mentor or favourite musician. These kinds of identity are real. However, they’re not the same as having a soul, and for that reason I think it makes as much sense to suppose that it isn’t a soul which is reincarnated, but that various memories and experiences are reassembled, probably as a collage from many lives, in a new person. There is one proviso here, though: those experiences might only exist as part of someone’s whole life, and if that life is lived with integrity, a larger chunk of the person would be reincarnated, perhaps ultimately the whole person. This is odd, because it kind of means that the better a life one leads, the more likely one is to be reincarnated, rather than the other way round.

So to conclude, there have been two themes in this post. One is the nature of identity and time, and the other is what can be said to happen beyond this life. In discussing that, I’ve confined myself to religious views, but it’s also possible that these thoughts can be adapted to more non-religious ones. Some of them are inspired by Heidegger and existentialism, after all. Let me know what you think. It really isn’t that deep.

My Writing Style

I’m fully aware that I’m too wordy, don’t stick to the point and talk about arcane topics a lot, not just on here but in face-to-face conversations. This is partly just what I do, whether because I’m unable to do otherwise or because I employ it as a bad habit. In a world of shortening attention spans and loss of focus, though, I feel that, however ineptly I do it, this is still worth doing.

In the process of doing this, I continued this blog post in a fairly lightweight word processor called AbiWord, which we stopped using because it had a tendency to crash without warning and without leaving a salvageable document, and it proceeded to do exactly that, so this is in a way a second draft. One of the many features AbiWord lacks, and this is not a criticism because its whole philosophy is to avoid software bloat, is a way of assessing reading age. Word, and possibly LibreOffice and OpenOffice, does have such a facility, which I think uses Flesch-Kincaid. Sarada drew a blank when I mentioned this, so it’s likely not widely known, and in any case I’ve looked into it and want to share.

There are a number of ways of assessing reading age, and as I’ve said many times, it’s alleged that every equation halves the readership. When I was using AbiWord just now, I decided to write these in a “pseudocode” manner, but now that I’m on the desktop PC with Gimp and stuff, I no longer have that problem, although of course MathML exists. Does it exist on WordPress though? No idea. Anyway, the list is:

  • Flesch-Kincaid – grade and score versions.
  • Gunning Fog
  • SMOG
  • Coleman-Liau
  • ARI – Automated Readability Index
  • Dale-Chall Readability Formula

Flesch-Kincaid comes in two varieties, one designed to rank readability on a scale of zero to one hundred. It works like this:

206.835−1.015(average sentence length)−84.6(average syllables per word)

It interests me that there are constants in this and I wonder where they come from. It also seems that subordinate clauses don’t matter here and there’s no distinction between coordinating and subordinating conjunctions, which seems weird.

The grade version is:

0.39(average sentence length)+11.8(average syllables per word)−15.59

This has a cultural bias because it’s based on school grades in the US. I don’t know how this maps onto other systems, because children start school at different ages in different places and officially learn to read at different stages depending on the country. Some, but not all, of the others do the same.
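Since I was drafting these in a “pseudocode” manner anyway, here’s a minimal sketch of both varieties in Python. The counting function is my own naive assumption: it treats each run of vowels as one syllable, which is a crude stand-in for proper syllabification (real implementations use a pronunciation dictionary such as CMUdict), so treat its output as approximate.

    import re

    def counts(text):
        # Words are runs of letters (apostrophes allowed); sentences are
        # runs of terminal punctuation. Both counts are deliberately naive.
        words = re.findall(r"[A-Za-z']+", text)
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        # Crude syllable estimate: each group of consecutive vowels counts
        # as one syllable, with a minimum of one per word.
        syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                        for w in words)
        return len(words), sentences, syllables

    def flesch_reading_ease(text):
        # 206.835 - 1.015(average sentence length) - 84.6(average syllables per word)
        w, s, syl = counts(text)
        return 206.835 - 1.015 * (w / s) - 84.6 * (syl / w)

    def flesch_kincaid_grade(text):
        # 0.39(average sentence length) + 11.8(average syllables per word) - 15.59
        w, s, syl = counts(text)
        return 0.39 * (w / s) + 11.8 * (syl / w) - 15.59

The same caveat about counting applies to all the measures below, so I’ll write those as functions of the counts rather than of the raw text.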

Gunning Fog sounds like something you do to increase clarity, and I wondered whether that’s one reason it’s called that or whether there are two people out there called Gunning and Fog. In fact it’s named after one person, Robert Gunning; the “fog” is the metaphor. It goes like this:

0.4((words/sentences)+100(complex words/words))

“Complex words” are those with more than two syllables. This is said to yield a number corresponding to the years of formal education needed to understand the text, which makes me wonder about unschooling to be honest, but it’s less culturally bound than Flesch-Kincaid’s grade version.
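As a sketch, taking the counts as given, with complex_words being the number of words of three or more syllables, however you choose to count syllables:

    def gunning_fog(words, sentences, complex_words):
        # 0.4((words/sentences) + 100(complex words/words))
        return 0.4 * ((words / sentences) + 100 * (complex_words / words))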

SMOG rather entertainingly stands for “Simple Measure Of Gobbledygook”! Rather surprisingly for something described as simple, it includes a square root:

1.0430√(30(polysyllabic words/sentences))+3.1291

This is used in health communication, so it was presumably the measure that led to diabetes leaflets being re-written for a nine-year-old’s level of literacy. The formula assumes a thirty-sentence sample, and I don’t know what you do if your passage is fewer than thirty sentences long, unless you just start repeating it. Again, it gives a grade level.
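A sketch using the published constants; note that the 30/sentences factor rescales samples which aren’t exactly thirty sentences long, which is at least one answer to the short-passage problem:

    import math

    def smog_grade(polysyllables, sentences):
        # polysyllables is the number of words of three or more syllables.
        return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291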

Coleman-Liau really is nice and simple:

0.0588L−0.296S−15.8

L is the mean number of letters per one hundred words and S is the mean number of sentences per one hundred words. This again yields a grade level, although it looks like it could be altered quite easily by changing the final term. It seems to have a similar problem to SMOG with short passages, although I suppose in both cases it might objectively just be unclear how easily read brief passages are.
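As a sketch from raw counts:

    def coleman_liau(letters, words, sentences):
        # L is letters per hundred words, S is sentences per hundred words.
        L = 100 * letters / words
        S = 100 * sentences / words
        return 0.0588 * L - 0.296 * S - 15.8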

The ARI uses word and sentence length and gives rise to grade level again:

4.71(characters per word)+0.5(words per sentence)−21.43

Presumably it says “characters” because of things like hyphens, which would make hyphenation contribute to difficulty in reading. I’m not sure this is so.
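Again as a sketch, where characters means everything making up a word, hyphens included:

    def ari(characters, words, sentences):
        # 4.71(characters per word) + 0.5(words per sentence) - 21.43
        return 4.71 * (characters / words) + 0.5 * (words / sentences) - 21.43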

The final measure is the Dale-Chall Readability Formula, which again produces a grade level. It uses a list of about three thousand words which fourth-grade American students generally understood in a survey, any word not on that list being considered “difficult”:

0.1579(100(difficult words/words))+0.0496(words/sentences)

with 3.6365 added to the score if more than five percent of the words are difficult.
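A sketch, with a deliberately tiny placeholder set standing in for the real list of roughly three thousand familiar words, so the numbers it produces are meaningless until the real list is substituted:

    # EASY_WORDS is a ten-word placeholder for the full Dale-Chall list.
    EASY_WORDS = {"the", "a", "and", "to", "of", "in", "it", "you", "that", "was"}

    def dale_chall(words, sentences):
        # words is the list of words in the passage; sentences is a count.
        difficult = sum(1 for w in words if w.lower() not in EASY_WORDS)
        pdw = 100 * difficult / len(words)  # percentage of difficult words
        score = 0.1579 * pdw + 0.0496 * (len(words) / sentences)
        # The published adjustment when difficult words exceed five percent:
        if pdw > 5:
            score += 3.6365
        return score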

There are different ways to apply each of these, and they’re designed for different purposes. I don’t know whether there are European versions or how they vary between languages. The final one, for example, takes familiarity into account as well as length.

When I’ve applied these measures to the usual content of my blog, the reading age usually comes out at university graduate level, which might seem high, but it leads me to wonder about rather a lot of things. For instance, something like sixty percent of young Britons today go to university, so producing text at that level, if the measures are accurate, could be expected to reach more than half the adult population. However, the average reading age in Britain is supposed to be between nine and eleven, some say nine, which explains why health leaflets need to be pitched at that level. All that said, I do also wonder how nuanced this take is. I think, for example, that Scotland and England (I don’t know about Wales I’m afraid, sorry) have different attitudes towards learning and education, and that in England education is often frowned upon as making someone an outsider to a much greater extent than here, which would of course drag down the average reading age. That’s not, however, reflected in the statistics: Scottish reading age is said to be the same as the British one. I want to emphasise very strongly here that I am not in any way trying to claim that literacy goes hand in hand with intelligence. I have issues with the very concept of intelligence, to be honest, but besides that, no, there is not a hereditary upper class of more intelligent people by any means. Send a working class child to Eton and Oxbridge and they will come out the same way. I don’t know how to reconcile my perception with the statistics.

But I do also wonder about the nature of tertiary education in this respect. Different degree subjects involve different skills, varying time spent reading and different reading matter, and I’d be surprised if this led to a homogeneous increase in reading age. There’s a joke: “Yesterday I couldn’t even spell ‘engineer’. Today I are one.” Maybe a Swede? Seriously though, although that’s most unfair, it still seems to me that someone with an English degree can probably read more fluently than someone with a maths degree, and the opposite holds for being good at maths. This seems to make sense. The 1979 book ‘Scientists Must Write’ by Robert Barrass tries to address the impenetrability of scientific texts, and Albert Einstein is supposed to have said a lot of things he actually didn’t, so maybe he didn’t say this either, but the sentiment has been expressed that if you can’t explain something to a small child you don’t understand it yourself.

I should point out that I haven’t always been like this. I used to edit a newsletter for brevity, for example, and up until I started my Masters I used to express myself very clearly. I also once did an experiment, and I can’t remember how this opportunity arose, where I submitted an essay in plain English and then carefully re-wrote it using near-synonyms and longer sentences and ended up getting a better grade for the “enhanced” version, and it wasn’t an English essay where I might’ve gotten marks for vocabulary. On another occasion I was doing a chemistry exam (I may have mentioned this elsewhere) and there was a question on what an ion exchange column did, and I had no idea at the time, so I reworded the question in the answer as something like “an ion exchange column swaps charged atoms using a vertical cylindrical arrangement of material”, i.e. “an ion exchange column is an ion exchange column”, and got full marks for it without understanding anything at all. This later led me to consider the question of how much learning is really just about using language in a particular way.

So there is the question of whether a particular style of writing puts people off unnecessarily and is a flaw in the writer which might be addressed, and to an extent this is all true. Even so, I don’t think it would always be possible to express things that simply, and it’s also a bit sad to be forced to do so rather than delighting in the expressiveness of our language. Are all those words just going to sit around in the OED, never to be used again? But it can be taken too far in the other direction. Jacques Lacan, for example, tried to make a virtue of writing in an obscurantist style in order to mimic the experience of a psychoanalyst not grasping what a patient is saying, creating reading without understanding, and in particular was concerned to avoid over-simplifying his concepts. Now I’ve just mentioned Lacan, and I don’t know who reading this will know about him. Nor do I know how I would find that out.

I’m not trying to do what he did. Primarily, I am trying to avoid talking down to people and to buck the trend I perceive of shrinking attention spans and a growing tendency to dumb things down and just not think clearly and hard. Maybe that isn’t happening; perhaps it’s my time of life. Nonetheless, this is what I’m trying to do, for two reasons. One is that talking down to people is disrespectful. I’m not going to use short and simple words and sentence structures, because to me that bears the subtext that my readers are “stupid”. The other is that people generally don’t benefit from avoiding deep thought or from being poorly informed. It’s in order here to talk about the issue of “stupidity”. I actually have considerable doubt that the majority of people differ in how easily they can learn across the board, for various reasons. One is that in intellectual terms, as opposed to practical ones, the kind of resistance found in the physical world doesn’t exist at all. This may of course reflect my dyspraxia, which also colours which things I consider valuable. Another is that the idea of variation in general intelligence is just a little too convenient for sorting people into jobs considered more or less valuable or having higher or lower status, and as I’ve doubtless said before, the ability to cope with boredom is a strength. I also think that the idea of a single scale of intelligence, which I know is a straw man but bear with me, has overtones of the great chain of being, i.e. the idea that there are superior and inferior species, with the inferior ones being of less value.

There are, though, two completely different takes on stupidity.

As I’ve said before, I try not to call people stupid, for two reasons. One is that if it’s used as an insult, it portrays learning disability as a character flaw, which it truly is not. It’s equally erroneous to deify the learning disabled. Learning disability is simply a fact about some people which should be taken into consideration; other things could be said about it, but they may not be relevant to the matter in hand. The other reason is that the idea of stupidity implies an unchangeable quality of the person in question, and this is usually inaccurate. An allegedly stupid person usually has as much control over their depth and sophistication of thinking as anyone else. Therefore, I call them “intellectually lazy”. For many people, it’s actually a choice to be stupid. As noted earlier, there are whole sections of society where deep thought is frowned upon and marks one out as an outsider, and it’s difficult for most people to go against the grain. This is not, incidentally, a classist thing; it exists right from top to bottom of society. Peer pressure is a powerful stupefier.

There is another take on stupidity which sees it as a moral failing, i.e. as a choice which has negative consequences for others and for the “stupid” person themselves. This view was prominently promoted by the dissident pastor and theologian Dietrich Bonhoeffer in the 1930s, after Hitler’s rise to power and in connection with it. The idea was later developed by others. This form of stupidity might need another name, and in fact when I say “intellectual laziness”, this may be what I mean. It could also go hand in hand with anti-intellectualism.

Malice, i.e. evil, is seen as less harmful than intellectual laziness, since evil carries some sense of unease with it. In fact it makes me think of Friedrich Schiller’s play ‘Die Jungfrau von Orleans’ with its line „Mit der Dummheit kämpfen Götter selbst vergebens“ – “Against stupidity the gods themselves contend in vain” – part of a longer passage:

Unsinn, du siegst und ich muß untergehn!
Mit der Dummheit kämpfen Götter selbst vergebens.
Erhabene Vernunft, lichthelle Tochter
Des göttlichen Hauptes, weise Gründerin
Des Weltgebäudes, Führerin der Sterne,
Wer bist du denn, wenn du dem tollen Roß
Des Aberwitzes an den Schweif gebunden,
Ohnmächtig rufend, mit dem Trunkenen
Dich sehend in den Abgrund stürzen mußt!
Verflucht sei, wer sein Leben an das Große
Und Würd’ge wendet und bedachte Plane
Mit weisem Geist entwirft! Dem Narrenkönig
Gehört die Welt–

Translated, this could read:

Folly, thou conquerest, and I must yield!
Against stupidity the very gods
Themselves contend in vain. Exalted reason,
Resplendent daughter of the head divine,
Wise foundress of the system of the world,
Guide of the stars, who art thou then if thou,
Bound to the tail of folly’s uncurbed steed,
Must, vainly shrieking with the drunken crowd,
Eyes open, plunge down headlong in the abyss.
Accursed, who striveth after noble ends,
And with deliberate wisdom forms his plans!
To the fool-king belongs the world.

Now I could simply have quoted the line in English of course, but as I’ve said, I don’t believe in talking down to people, and to my mind it’s a form of disrespect to do that, so you get the full version. This is spoken by the English general Talbot, who is dismayed that his carefully laid battle plans have been ruined by the behaviour of his men, who are gullible, panicking and superstitious, in spite of his experience and wisdom, which they ignore. The kind of “stupidity” Schiller had in mind was probably different, perhaps less voluntary, but this very much reflects the mood of these times.

Getting back to Bonhoeffer, he notes that intellectual laziness pushes aside, or simply doesn’t listen to, anything which contradicts one’s views, facts becoming inconsequential. It’s been said elsewhere that you can’t reason a person out of an opinion they didn’t reason themselves into in the first place. People who are generally quite willing to think diligently and carefully in other areas often refuse to do so in specific ones. People can of course be encouraged to be lazy in certain areas, or even all of them, because it doesn’t benefit the powers that be for them to think things through, and this can occur through schooling and propaganda, and nowadays through the almighty algorithms of social media; or they may choose it for themselves. Evil can be fought, but not stupidity. Incidentally, I’m being a little lazy right now by writing “stupidity” and not “intellectual laziness”. The power of certain political or religious movements depends on the stupidity of those who go along with them. This is also where thought-terminating clichés come in, because Bonhoeffer says that conversation with a person who has chosen to be stupid often doesn’t feel like talking to a person so much as eliciting slogans and stereotypical habits of thought which come from somewhere else. It isn’t coming from them, even if they think it is. Hence the use of the word “sheeple”; and telling people to “do your own research”, which in fact often means “watch the same YouTube videos as I have”, is particularly ironic, because it’s the people telling you to do that who are thinking less independently and originally than the people being told. I’m thinking of Flat Earthers in particular right now, and I’m going to use them as an example because the position is almost universally considered absurd and is less contentious than a more obviously political example. There are a small number of grifters who are just trying to make money out of the easily manipulated, a few sincere leaders, and a host of “true believers” who are either gullible or motivated by other factors, such as wanting to be part of something bigger or having special beliefs hidden from τους πολλους. I’m hesitant to venture into overtly political areas here because of their divisive nature, but I hope Flat Earth can be agreed to be incorrect, and almost deliberately and ostentatiously so.

He goes on to say that rather than reasoning changing people’s minds, the only way to defeat this kind of stupidity is liberation: external liberation, which can then lead to an internal liberation from the stupidity itself. These people are being used, and their opinions have become convenient instruments of those in power.

This is roughly what Bonhoeffer’s letter said, and it can be found here if you want to read it without some other person trying to persuade you of what he said. In fact you should read it, because that’s what refusing to be stupid is about. Also, he writes much better than I do. That document continues with a more recent development called ‘The Five Laws Of Stupidity’, written in the 1970s by the economic historian Carlo Cipolla. The word “stupidity” in his usage refers not to learning disability but to social irresponsibility. I’ve recently been grudgingly impressed by the selfless cruelty of certain voters, who have voted to disadvantage others with no benefit to themselves. A few years ago, when the Brexit campaign was happening, I was of course myself in favour of leaving the EU and expected it to do a lot of damage to the economy, which was one reason I wanted it to happen, although I would’ve preferred a third option where the “U”K both left the EU and opened all borders, abolishing all immigration restrictions. My own position was therefore somewhat similar to that of the others who voted for Brexit, but in many people’s case they were sufficiently worried about immigration and its imagined consequences to vote for a situation which they were fully aware would result in their own financial loss. In a way this is admirable, and it illustrates the weird selflessness and altruism of their position, although obviously not towards immigrants. Cipolla’s target was this kind of stupidity: harm to both self and others, with the focus on the latter. This quality operates independently of anything else, including education, wealth, gender, ethnicity or nationality, and people tend to underestimate how common it is, according to Cipolla. The attitude is dangerous because it’s hard to empathise with, which is incidentally why I mentioned my urge to vote for Brexit; I voted to remain in the end, needless to say. Maliciousness can be understood and its reasoning conjectured, often quite accurately, but with intellectual laziness (I feel very uncomfortable calling it “stupidity”) the process of reasoning has been opted out of, or possibly been replaced by someone else’s spurious argument. This makes such people unpredictable, and means they don’t even have a plan to their own benefit in attacking someone. There may of course be people who do seek an advantage, but they are not the main people. Those are the manipulators: the grifters.

I sometimes take the attitude that a person with a certain hostility is more a force of nature than a person. This is of course not true, but it’s more that one can’t have a dialogue with them or do anything to break through their image of you, so all you can do is recognise that they’re a threat and do what you can to de-escalate, or preferably avoid them. This is a great pity, because it means no discussion is likely to take place between you and they’re not going to be persuaded otherwise. They may not even be aware of the threatening nature of their behaviour or views.

Cipolla thought that associating with stupid people at all was dangerous, but of course nowadays this feeds into the reality tunnel problem. That’s the name I’ve known it by, although these days it tends to be thought of in terms of echo chambers and bubbles: we surround ourselves, aided by algorithms, with people who agree with us, and this fragments society. Cipolla seems to be recommending exactly that, and with over half a century of hindsight we seem to have demonstrated to ourselves that the impulse shouldn’t be followed.

Casting my mind back, a similar motive may have been part of what led to my involvement in a high-control religious organisation. I have A-level RE. This in my case involved studying Dietrich Bonhoeffer, and the approach generally was quite progressive and liberal, including dialogue between faiths, higher criticism and the like. On reaching university, I found that the self-identifying Christians with whom I came into contact were far more fundamentalist and conservative, but because I regarded this as demotic, the faith of the common people as it were, I committed myself to that kind of faith. This is not stupidity in a general sense, as most of the people there could be considered conventionally intelligent, some of them pursuing doctorates for instance. However, they did restrict their critical faculties when it came to matters of faith, and in that respect I was, I think, emotionally harmed by these people, though I don’t blame them for it. This is the kind of selective and deliberate “stupidity” which is best avoided.

I’m aware that I’ve described all this rather unsympathetically and perhaps with a patronising tone. That is not my intention at all, and it may have more to do with the approach taken by the writers and thinkers I’ve drawn on here. I’ve also said nothing about James Joyce and little about Jacques Lacan, which may be a bit of an omission. What I’m attempting to show is respect, and what I’m requesting from the reader is focus (and I have an ADHD diagnosis, remember), a long attention span, and complex, nuanced thought. I’m not asking for agreement, but I would like those who disagree with me to have thought their positions through originally, self-critically and with respect for their opponents. I write the way I do because I know people are generally not stupid and can choose not to be.

Goddities

This is going to be me going at it like a bull at a gate rather than sitting down and composing my mind and thoughts about the issues at hand. My basic idea is to explore the common ground, or lack of it, between atheism and theism, because I sometimes wonder whether we’re talking about the same thing or just using the same words. There are certain arguments atheists have been known to make which I feel were designed for the specific occasion rather than having any wider respectability, and there are other things which are just interesting for everyone, or at least might be, and I want to plonk all these things together today and talk about them.

The first one is something I’ve mentioned before, which is the question of active and passive atheism. I insist on a definition of atheism as the presence of a belief that no deities exist rather than the absence of a belief that a deity exists. I’ve been over this, so I’ll be brief. The motivation for defining atheism passively is to set it up as the default belief, but in doing so one is forced to accept peculiar implications. We assume all sorts of things, which is in itself interesting and complicated, because we seem to have uncountably many assumptions but only a finite number of active beliefs, and no finite mind could actively entertain uncountably many of anything. Therefore an assumption is not something which is happening in anyone’s mind; it’s something one has not done. This seems messy and excessive to me, and is actually more or less the exact issue which many philosophers have with the nineteenth century philosopher Gottlob Frege’s view of concepts, so it’s something which has been flogged to death in philosophy already, and to produce this definition at this stage, I think, reflects a lack of philosophical training. It comes across to me as naive, reflecting a kind of thinking on the spot which hasn’t had its rough edges knocked off. On the other hand, perhaps it reflects some kind of demographic shift. As I understand it, analytical philosophers have had very little interest in the concept of God since the start of the tradition, which was probably Frege’s own thought back in the 1870s CE, but they may also have been enjoying this lack of interest in a more overtly theistic and religious society than today’s, or perhaps a less confrontational one in this area, so the definition of atheism as the absence of a belief may have become more accepted simply because more atheists, as opposed to the apatheists who probably characterise most philosophers, are now in academia. Nonetheless, there is no word for someone who doesn’t believe in Russell’s teapot, or for someone who doesn’t believe there’s an invisible gorilla in every room; if atheism were a mere absence of belief of that kind, there might as well be no word for it either, but clearly there should be and it does mean something. But I won’t go on.

Second issue: small-g “god”. There are atheists who insist on using a small g for the name God. I think they do this because they want to equate God conceptually with what they think of as other deities. This, I think, is also erroneous, and an example of an over-reaction to a situation they have half imagined. Look at it this way: atheists claim God is a fictional character. It’s possible to go further and claim that God is an incoherent concept, but that isn’t atheism, although it’s an interesting position and one I have more than a little sympathy with. Fictional characters are given names. We know who Gandalf is, who Bridget Jones is, and unfortunately we know who Bella Swan is (actually I forgot and had to look that up!), and they all have names beginning with capital letters. Is god supposed to be someone like ee cummings or archy the cockroach? Someone once told me I was confusing myself by capitalising God. They didn’t explain, but I think it was along the lines that God is just one deity among many. It is, though, a little bit interesting that we generally just call God “God” and don’t say, for instance, Metod any more, a word once used for God which seems to mean “measurer” (i.e. “mete-er”) and “arranger”. That could be a euphemism or a kind of title, but it is in any case a name for God.

This is of course related to “I only believe in one fewer deity than you do,” which involves the supposition that theistic Christians believe the likes of Ba`al and Zeus don’t exist. This too, I think, is seriously misconceived and fairly thoughtless. My view of the other deities is not that they don’t exist but that they’re God under different names. They do of course have other attributes, but then if God exists, God is beyond human understanding, so we have no better idea of what attributes are true of God than of any other deities, who are, in any case, God by other names. So yes, I do believe in all those deities, because they’re all the same deity. Another rather unsettling consequence of saying I’m atheist about all the other deities is that it’s very like the Islamophobic belief that Allah is not God and that Muslims are not worshipping the same god as Christians. It has disturbingly racist overtones, to my mind, which is of course a feature of “New Atheism”, and this is where it gets interesting. Many Christians claim Muslims worship a different, false god and not the God of the New Testament, or presumably of the Hebrew scriptures, where they see continuity, and among Christian nationalists I would expect a very strong denial that Muslims worship God. This unifies some theists and atheists, although the details of the denial may differ. For instance, Christian nationalists might want to distinguish between the Christian trinitarian God and the Islamic indivisible divine unity, whereas the New Atheist approach is more likely to be along the lines of imaginary beings being given different attributes, the trinity included or otherwise.

Emphasising that New Atheism is not the whole of anti-theistic atheism is vital. It’s also possibly a movement whose time has passed. Nor would I want to say that anyone within it is overtly racist. It is characterised, and was perhaps led, by Richard Dawkins, Daniel Dennett, Sam Harris and Christopher Hitchens, notably all White men, meaning they will all have unconscious biases, some, though not all, of which I inevitably share by virtue of my whiteness and to some extent other aspects of my social conditioning. This by no means makes anti-theistic atheism unsalvageable, but equally it’s important to note that atheism is not monolithic. I always think of South Asia in this respect, with the separate Jain, Samkhya and Carvaka beliefs that God cannot or does not exist, among others, in one case because the force of karma is a sufficient explanation for the Cosmos, and more recently the Marxist anti-theistic movement there, though that is clearly influenced by the West. Some New Atheists see the development of European culture under Christian influence as a necessary precondition for the emergence of what might be termed a more liberal or progressive approach which includes atheistic approaches to reality, possibly including those of South Asian Marxist activists.

One major problem, I think, with anti-theist approaches in general is that they make a major assumption which really doesn’t seem warranted, which is odd for a group that tends to see itself as rational: the assumption that the urge to be religious can be removed from human psychology, even granting that it should be. It seems to me that there are several reasons why this is unlikely. We have cognitive biases which involve finding patterns in things, we engage in magical thinking, which may be the basis of rationality, and large communities tend to drift away from their constituted foundations after a while. We also have ego defences. The idea that a non-religious mindset could be adopted by the general population may not be realistic. There don’t seem to be any societies which are entirely non-religious, and where irreligion does occur officially, religion creeps back in somewhere, for example as superstitious beliefs about luck and fate. There are of course very large numbers of non-religious people whose lives are entirely healthy and well-adjusted, but they’re not an entire society, and there’s too much diversity between people’s personalities and influences to conclude that everyone could live their lives that way. This has nothing to do with whether religious claims to truth are correct. It also seems to be an article of faith among, for example, humanists that society can exist without religion, whether or not that would be a good thing. I really want to stress that I’m not saying religion is needed, just that we don’t know whether it could even be eliminated. Ironically, this belief is almost religious in itself, although I would also insist on defining religion in a different way which doesn’t emphasise belief.

I feel like I’ve spent several paragraphs low-key slagging off atheism. This isn’t what I want to do at all. I want it to be the way things are in my own life most of the time, and probably increasingly so in these isles with the possible exception of Ireland: that whether one is theist, atheist or agnostic is a private matter one would prefer not to talk about with people outside one’s possibly religious community, and maybe not even there. What I’m trying to do is establish common ground; I’m not looking for a fight. There are more important things to engage in conflict over, and it can be divisive even to bring this up, but at the same time some of this still feels messy and naive to me, so I’m going to carry on.

Something which is not so divisive is the rather more nuanced approach, found in both religious and non-religious circles, which is not firmly atheist, theist, deist or agnostic, and which is present both in some forms of mysticism and in Western philosophy. Many religious mystics, and in fact a lot of just ordinary religious people like me, would say God is beyond human understanding, and in particular there’s the via negativa, the idea that you can best suggest what God is by saying what God is not. God is also said to be unlike any created thing, and it’s a very familiar experience to find that one can’t express a religious experience in language. Similarly, there’s ignosticism and theological non-cognitivism, which I’ve talked about before on here. In the mid-twentieth century, there was a movement within analytical philosophy called logical positivism which attempted to establish that meaning, i.e. either truth or falsehood, only inheres in statements which are axiomatic, express necessary truths or can be empirically verified. Along with this claim went the claim that religious statements fall into none of these categories and are therefore meaningless. This is not the same thing as being false, and in a way it corresponds quite well to the mystical position. Logical positivism is now considered passé, but other areas of Western philosophy have adopted somewhat reminiscent positions. My ex is of course German and, among other things, a philosopher in the continental tradition. When we got together, I was worried they might be Christian, but it turned out that they saw religious claims very much as not having truth values, in a manner I found reminiscent of logical positivism but which has much more in common with the postmodern condition, which sees philosophy as a branch of literature and everything as up for deconstruction. Statements about God make sense in their own communities, and theology is a poetic or narrative truth, but on this view those truth claims are no more or less valid than those of maths and science. Postmodern theology has been adopted by people in religious communities. There is, however, no truth outside language according to this.

I mean, I have certain reservations of course, since the claim that there is no truth outside language strikes me as both ableist and speciesist, but it is nevertheless interesting that there is a kind of agreement in this area between, of all things, postmodernity, religious mysticism and logical positivism. These are not all there is to philosophy of course, but it strikes me that this shows a way forward for us all. There are of course other non-theistic religions, and non-theistic traditions within Christianity and Judaism.

Getting back to gripes though, there’s another cluster of beliefs which tends to be treated as a package deal. This is not a definitive list, but I hope I’ve captured most of them:

  • Theism
  • An afterlife
  • Souls and bodies as separate items which coexist in the same sense
  • Varying fates according to actions in this life
  • Subjectively sequential time extending beyond death
  • Theological voluntarism/divine command theory
  • Literal and unironic belief

The first three in particular seem to be closely associated with each other. For instance, it’s often said that people want to believe in God because they don’t want to die; in other words, they take the prospect of an afterlife, or possibly reincarnation, to follow from the idea that God exists. There’s also an implicit assumption in theism that God is good and/or loving, which, unless you accept the ontological argument, alone among the best-known “proofs” of God in building perfection and therefore goodness into the very definition of God, has no connection with whether God exists or not. In fact I strongly suspect a lot of fundamentalist evangelical Protestants don’t, deep down, believe God is good at all, but are afraid to admit it even to themselves because God would be telepathic and know they believe this. Nonetheless their public position is that God is good and just.

In each case you can uncouple the bullet-pointed belief from theism. It’s entirely feasible to believe in an afterlife in isolation, with no God. There are also Christian physicalists, who believe God will re-create us all in superior physical form at the end of time, with no separate entity bearing our consciousness; Jehovah’s Witnesses may fall into this category. Alternatively, there are religions which are strongly atheist but believe in souls, such as the Jains. So far as I can tell, even faithful Judaism, as opposed to the reconstructionist form, is pretty much agnostic on what happens after death, and as a Christian I think it’s important for ethical reasons to ignore any claims about what happens beyond this life, if anything happens at all. My views on the nature of time make it too involved for me to go into this just now without it taking over the post. Theological voluntarism and divine command theory are the idea that God alone makes ethics meaningful, a belief which can only sincerely be held by a psychopath. Finally, literal and unironic belief relies on Biblical literalism, which is seriously compromised by Biblical criticism, and there is even a project to imagine history as proceeding exactly as young Earth creationists and otherwise Biblically literalist people suppose, but with no God. Incredibly, there really are people who believe that and are atheist.

I very much get the impression that some anti-theistic atheists really would prefer theistic Christians to be conservative evangelicals, and I seem to remember Richard Dawkins saying that liberal and progressive Christianity is dangerous because it represents a kind of gateway drug to extremism. It also seems to me that some anti-theists simply think that’s what Christians are like as a bloc, and I think this is our fault, because of those of us who are particularly strident and emphatic about our bigotry. In fact churches can be excellent factories for anti-theistic atheists, and we’re responsible for creating them in many cases. But on both sides there is a tendency, which I’ve probably exhibited here, to caricature the other side, whereas in fact there could be said to be no sides at all, just people dedicated to the truth.

Will We Get Fooled Again?

Most people see Karl Marx primarily as a communist thinker, and this of course makes a lot of sense. What is perhaps much less appreciated is that although he wasn’t a fan of capitalism, he still saw it as an improvement over its predecessor, feudalism, and a necessary stage in progress towards a communist utopia. This is also very important in how we view societies which have labelled themselves, or been labelled, communist. In this post, I intend to go into what Marx saw as positive about capitalism, and also the suggestion by many people today that we are no longer actually living in a capitalist society but have returned to feudalism. In doing so, it might seem like I’m praising capitalism. I’m not. I’m just stating what Marx saw as positive about it.

Marx’s view of history, and this is actually substantially Friedrich Engels’ view, is that it shifts over to one side, then there’s a reaction, and finally a synthesis arises out of both: thesis, antithesis, synthesis. This view is also linked to a fairly radical nineteenth century view of physics which later changed due to the advent of relativity and quantum mechanics and, in biology, the Modern Synthesis. Unfortunately, the Soviet Union continued to prize dialectical materialism, the metaphysics behind Marxism, well into the twentieth century CE, and as a result adopted Lysenko’s ideas about crop production and development, leading to famine. I have various thoughts about this which are not worth sharing here right now, as I’m trying to stay focussed.

Before I go on, I want to outline the very clear line of progress Marx described in history. Economics and society evolve from feudalism into capitalism and then into communism, but the important thing about this with hindsight is that he saw the capitalist phase as a necessary social order before communism could emerge from it. Communism cannot, therefore, arise out of a feudal society. Russia and China were not really capitalist but feudal, and consequently the changes they underwent were not into communism, regardless of what they might say, but into capitalism. Communist rhetoric was used, and they may have attempted to impose something resembling communism on the countries concerned, but there was no large industrial working class to organise, and Marx would’ve predicted that the next stage in those countries was not communism but capitalism. The other countries which became “communist” had the system spread to them from those pre-existing state capitalist countries. Marx probably envisaged the earliest communist revolutions occurring in the UK, Germany, the US and so forth, i.e. countries which were already fully capitalist, not in China or Russia. Finally on this point, the current “People’s Republic of China” has a stock market, which completely rules out the possibility of it being a communist country, because it means goods are commodified, and it’s impossible to have stocks without that. This is a definitional thing, not a political point. You can have various takes on this, including the idea that communism can never work, but the fact remains that China is necessarily not communist, and Russia may have been communist for a short period ending by the close of the 1920s, but Marx would’ve said it was impossible for it to be sustainably so, if it was communist at all. That is what Marxist theory actually says, and it said so before the Russian Revolution too. It isn’t a case of changing the theory to fit the facts.

Nonetheless, counter-intuitive though it may seem, Marx did see capitalism as a good thing in relative terms, and also as temporarily necessary, mainly because it’s better than feudalism. Marx could largely be seen as a moral sceptic, although he also contradicts himself about that. More specifically, he believed that one’s values are determined by one’s economic circumstances. Many people say they hate landlords but still want to be one, and would behave exactly the same way if they became one; that’s Marx’s view. Therefore, in a way, he could be said to be saying that capitalism isn’t evil at all: it’s just a phase society passes through as it progresses. I would also say that Marx does actually seem to care from time to time. It’s fair to say that Marx saw ethical values as determined by material conditions. For instance, he gets around the Kantian problem of universalism, where “what if everyone did the same?” is what determines right and wrong, and which leads to shoplifting food from a small greengrocer being describable either as “depriving someone of their means of livelihood” or as “providing food for one’s family”, by saying it’s actually both, but means something apparently irreconcilably different to the bourgeoisie and to the proletariat. However, he also clearly showed sympathy for the plight of the poorest in society, which looks to me like morality.

Specifically, capitalism is better than feudalism. Under feudalism, peasants are tied to the land and sold with it. A worker under capitalism is, legally speaking, free to leave their employer and do paid work for themselves or someone else, although, at least back in the day, they weren’t property owners in any substantial sense.

Due to everyone being tied down, including lords and ladies, royalty and so forth, feudalism is static. Nobody is socially mobile. You’re born into your station, live in it and die without changing it. For most people it’s about subsistence, and also tradition. Capitalism isn’t like that, at least in theory. Under capitalism, production has been freed from this system and is able to produce things way beyond what anyone needs. It makes industrialisation possible, technological change is promoted and there’s large-scale global trade. If someone has an idea in a capitalist society, it’s much more likely to be realised than under feudalism, since in the latter case the chances are the person with the idea is working from dawn to dusk and won’t be able to communicate it to others who could act on it.

Feudalism largely operates locally, on a small scale, and is fragmented. You can raise and grow food on the local farm, but there’s no way you can take it to a distant market, and produce can’t usually be brought to you from across the globe. By contrast, a global economy means that people become more cosmopolitan and accepting of others, to some extent unifying the world; people are less parochial and the basis for a global community emerges.

Relationship-wise, feudalism is personal, and that’s open to corruption. There’s the lord and the vassal, the master and the apprentice. There was also religious pressure to keep these in place, and an unchanging hierarchy. More was going on between people than just money – it was all personal, like the relationship between children and parents. Under capitalism this changed to contractual obligations which were written down and quantifiable, which also made it possible for exploitation to be tracked by following the money. Prior to this there might have been a vague sense of injustice, but it was probably diverted by religious justifications and by the kind of deference one might feel towards a father figure, and which many people in British society today still feel towards royalty. That is very different from noticing how hard you’ve worked and how long your hours are while holding an actually countable sum of money in your pay packet each week or month. It focusses the mind on the inequality, which has now largely been converted into numbers.

Once all of these things are in place, and this is where it becomes clear Marx is talking about capitalism as a transitional state, the machinery exists to bring socialism about. There has to be a proper infrastructure to enable workers to travel to and from their workplaces. Workers mainly live in cities, where they can become more aware of their conditions and act together to improve them. Also, the existence of extreme wealth becomes more visible and therefore imaginable to the working class. They’re more likely to ask why other people have got their money, i.e. the money taken off them as profit by the owners, and it becomes possible to imagine a society where conditions are much better and more equal. This last is of course a grudging acknowledgement, since in Marx’s opinion it’s just a phase, but the point is that all of these things become possible when they weren’t under feudalism.

To be honest, a lot of this seems quite questionable to me, and I want to do two things with it. One is to point out ways in which feudalism actually scored over capitalism. Firstly, local production and consumption are very much what we need right now considering the environmental problems caused by global trade, and it would also be good to have a more personal, though also equal, relationship with the people one works for. Secondly, I’ve long imagined that I might do better in a society where I have a set place and role, and I haven’t been ambitious for a long time. However, that’s all very well to imagine, but it would depend on the role actually suiting the person. Otherwise it would mean being trapped in a job one hates, and that job, incidentally, could be Queen or King and one could still hate it.

However, I also have a couple of other questions about this. One concerns the spice trade, and it also applies to the likes of precious metals. Spices were grown in the Far East and South Asia for centuries and then traded with Europe, even with the British Isles; there were monasteries and castles where ginger, cinnamon, cloves and nutmeg were all commonly used in cooking. This doesn’t correspond to the picture of local production and consumption painted above. The other is that peasants probably had more leisure time, or at least time off work, than factory workers, with the church imposing holidays and feast days. Work was also seasonal, and there would’ve been times when it would’ve served no function and therefore wouldn’t’ve happened. Both objections are answered by Marx and Marxists. The spice trade is seen as the beginning of capitalism, operating externally to the feudal system, and it would still have been unfeasible for a peasant to buy a nutmeg at a local market. As for leisure time, the switch was from task orientation to time orientation, something which also connects to the infrastructure necessitated by capitalism and world trade in the form of railway timetables and navigation across oceans. Although peasants might have been freer in that sense, they would not have been able to improve their lot collectively, or at least they wouldn’t’ve done so because of other constraints, including religious and cultural ones. So it’s said, anyway.

So to reiterate, whereas Marx wasn’t exactly a fan of capitalism, he did see it as progressive compared to feudalism. He saw progress as moving forward inevitably through feudalism and capitalism to communism, and his views also reflect the dynamism dialectical materialism sees as embedded in reality. Incidentally, it’s tempting to go on and on about dialectical materialism here, but I’ll resist that. I’ve actually long thought that the existence of the US Constitution, the fact that the US is a republic and so forth, were signs that it could’ve been expected to evolve into a communist society, and that in the late nineteenth century it was probably the prime candidate for doing so. When people are genuinely patriotic about the freedom and democracy of the US, that anticipates its progress into communism, which is why it’s so bizarre that they have such a phobia of it.

Everyone reading this by now stands a good chance of being aware of Frank Herbert’s ‘Dune’ series, so forgive me if you’ve heard this in depth. My own experience is that I’ve read ‘Dune’ itself and the Dune Encyclopedia, but had my doubts about the quality of the sequels, so I didn’t read them. The David Lynch film is disappointing; there was also Jodorowsky’s fever dream of an attempted version, which failed partly because he was stoned all the time; and finally there’s the current cinematic adaptation, whose first installment I saw and found disappointing, although that’s more me. The novel is also a reaction to Asimov’s Foundation Trilogy and the main influence on the Star Wars franchise. Its relevance here is that it describes a future feudal society spanning the Galaxy. It’s fairly complex but breaks down as follows. The Landsraad is a conglomeration of great houses, families which run the Universe and jostle for position. They’re given fiefdoms over various worlds, such as House Atreides and Caladan. Each great house has a share in CHOAM, the overriding megacorporation that does everything except transport. There’s also the Spacing Guild, which operates a monopoly over space travel based on its Navigators, humans mutated by the spice melange, who use Holtzman engines to enable travelling without moving. (The Navigators shouldn’t be confused with mentats, the human computers; both exist because of the Butlerian Jihad, a war fought over whether AIs should exist which ended in thinking machines being banned.) The Spacing Guild is in partnership with CHOAM and moves everything – or rather, ensures that things and people that start out in one place end up in another. The great houses have various degrees of power. The imperial house itself, House Corrino, attempts to maintain a monopoly on violence through highly trained soldiers called the Sardaukar, originating from the prison planet Salusa Secundus, and keeps everyone under control, or at least apparently does. House Atreides, however, has troops of its own which may compete successfully with the Sardaukar. The Atreides are also feuding with House Harkonnen. Behind all this is a quasi-religious order called the Bene Gesserit, an all-female group expert in manipulation, who have secretly spent centuries breeding the great houses towards a messiah-figure, the male Kwisatz Haderach. The spice melange is found only on the planet Arrakis and has a similar role to fossil fuels in the real world. As well as enabling Navigators to fold space and therefore “shorten the way”, melange is extremely addictive – stopping it kills you – enables certain people to see the future, and extends lifespan. Many of the more powerful people in the great houses are on spice. Incidentally, this brings the book ‘Cyclonopedia‘ to mind, so maybe there’s a link there too.

All this is easy to translate into the real world, and seems to represent Herbert’s attempt to explore the possibility that the default state of human society is neither fully automated luxury gay space communism nor capitalism but in fact feudalism. Shorn of many of its more implausible elements, the ‘Dune’ universe does seem to reflect one view of how society and economics actually operate today. For a long time, people have been talking about “late capitalism”, though I’ve preferred to call it “mature capitalism”, in the sense that it’s a permanent economic and social order which will change only with our extinction. But maybe the former term is accurate and capitalism is in fact coming to an end – not replaced by communism, though, but by what Yanis Varoufakis calls “technofeudalism”. Maybe Marx was right in predicting the end of capitalism but wrong about what it would become, and rather than being progressive, the world is actually slumping back into something resembling feudalism.

His idea goes like this:

In feudal times, land ownership and rent were the chief sources of wealth. Profit existed but was less important; it existed in the spice trade, for instance, but wasn’t something the peasants could avail themselves of. Under capitalism, capitalists dominate the media, parties and banks, and in particular are expert at manufacturing public opinion and values, thereby bypassing democracy. Our current system is based not on profit but on rent, i.e. on charging for limited access to resources rather than selling ownership of them. I’ve mentioned this before, but examples of how this happens are found in vehicle lease agreements and things like having to pay extra to turn on seat heating in BMWs – you already have the vehicle, but you have to subscribe for the right to warm your seats even though the facility to do so is already installed. The same is found in the switch to subscription models, such as the rental of various software packages. There’s also the cloud. A lot of the time we don’t own ebooks, TV programmes or music, and the companies controlling them can simply withdraw the right to access them on a whim. If you make an app, Google Play, Apple and the Microsoft Store control access to it most of the time, and you have to pay a subscription when in theory you could spread knowledge of it via other channels – but this is now difficult because of the next development, social media. The domination of the internet by social media is also akin to the peasants belonging to the landed gentry. It’s said that if you aren’t paying for something, it’s you who are being bought and sold; an older example is women being let into clubs for free or given free drinks, because they’re the draw for the men who do pay to get in. The same kind of situation exists today on Twitter, Facebook and so on. People spend most of their online time on these and on streaming sites. Companies are no longer oriented towards making a profit, but make their money through subscription charges, also known as rent, and market dominance matters more than profit. Practically, this means money is constantly siphoned away from us to billionaires. Because corporation tax is levied on profit and Amazon declared none, it paid no such tax in Ireland, and producers have to make deals with Amazon for the right to sell on it, since it’s the main marketplace nowadays. Amazon of course gets its state-sponsored infrastructure, such as roads, for free, because we pay for it. The same thing happens with Etsy. All this is why Varoufakis sees us as being in a new feudalism. In short, for Varoufakis there are no more open markets; money is made instead by renting out closed digital estates. The Web used to be the Wild West, but it has now undergone enclosure like mediaeval land.

It isn’t just him either. There has also, for quite some time, been a view that we are entering the “New Middle Ages”, also known as Neo-Mediaevalism. This was an idea from the 1970s onward which reached its zenith in the ‘noughties. One distinctive thing about the actual Middle Ages is that it only applied to Europe after the collapse of the Roman Empire, an area which has also been referred to as Christendom, i.e. the Christian part of the world. It didn’t apply, for example, to the Mayan civilisation, the Songhai Empire or the Arab and Islamic world. Unlike that, though, the new Middle Ages applies to the whole human world, which may be important as it means there are no accessible geographical forces or assets outside it which might render it susceptible to change – no spice trade, for example. In today’s world, Christendom is replaced by the New World Order, which shouldn’t be confused with the conspiracy theory; the rest of the human world now feels much more of the influence of the US, for example. There are overlapping authorities as there were in Mediaeval Europe, but in this case they include multinationals, NGOs, those pursuing political ends violently without overt governmental permission, global trade and international organisations. This leads to a situation where one owes fealty to various authorities at once, which might manifest itself, for example, in not having one’s employment rights honoured because one’s employer has more money and power than the government of one’s country. Instead of knights, we have military drones, and instead of the Pope we have Elon Musk, but the power relations are the same. Instead of the Church and the great families having the power and money, it’s the likes of Jeff Bezos and social media. The lords own the platforms, such as Twitter, Amazon, Instagram, Facebook and Netflix; the vassals are the content creators (even this blog would count as one if it had more readers, so maybe I don’t want more readers); and the users are the peasants. People also settle back into what they idealise as a simpler place and time, when of course it really wasn’t.

Smartphones have replaced pitchforks and our new coats of arms are made by Nike and Disney, but underneath it all we seem to have gone back to the Middle Ages, which this time encompasses the whole world. Capitalism has indeed been superseded, but instead of moving forward, the world is going back. I don’t know what we’re supposed to do about this, and ultimately it probably depends on whether we’re flexible enough as a species to change our ways. Olaf Stapledon once said of the fall of Western civilisation that one could no more expect the world’s population to change in time to avert collapse than expect ants to assume the habits of water beetles if their nest was flooded. I hope he’s wrong.

Pacperson Fever

(Well, it couldn’t really be anything else, could it?)

There’s this thing called displacement activity and I may be engaging in it now. Nonetheless this is, as they say, “important government work” and will help me achieve my end result.

The above image is of course a still of the starting screen of the Pac-Man arcade game by Namco, which looks badly cropped even though it came off the Pac-Man Wiki. On further thought, this is probably because the character patterns at the top of the screen begin right at the edge of where they’re stored, rather than the alternative approach of putting a blank row of pixels all the way around, but seen like this it looks messy. Incidentally, if you think this is just going to be a nerdy analysis of Pac-Man in computery terms, you’d only be partly right.

To coin a phrase, Pac-Man is rather like life, and in fact may be a little more like life than most other computer games, with the obvious exception of yer akshual Life by John Conway in 1970, and even then it’s still probably more like human life as we live it here and now. The player character, i.e. the munching head which looks like a pizza with a slice missing, makes his (I’ll go into that later) way through the maze, eating, occasionally finding an opportunity which turns his life around for a short while in the form of power pills, pursued by a number of threats. Also, if he goes off one side he comes in on the other, which is possibly one way of understanding how the passage of time might work across the entirety of waking life, and which was alluded to by Friedrich Nietzsche in his myth of the Ewige Wiederkehr, or eternal recurrence. See? Only nine lines into talking about Pac-Man and I’ve already reached existentialism.
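Whimsy aside, the wraparound itself is nothing mystical: horizontal positions live on a ring, so it’s just modular arithmetic. A minimal sketch in Python (the names here are mine, not Namco’s):

```python
# The tunnel wrap: positions live on a ring, so stepping off one edge
# re-enters on the other. MAZE_WIDTH matches the arcade's 28-tile-wide maze.
MAZE_WIDTH = 28

def step(x: int, dx: int) -> int:
    """Move one tile left or right, wrapping around the tunnel."""
    return (x + dx) % MAZE_WIDTH

print(step(27, 1))   # 0  -- off the right edge, back in on the left
print(step(0, -1))   # 27 -- off the left edge, back in on the right
```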

This may seem like a trite and whimsical musing, but actually you can develop it considerably, which is of course what I’m doing. If Pac-Man can mirror life, maybe he can also go through the same tribulations as the rest of us are wont to. Maybe he can have mid-life crises and mental health issues. Maybe he can get together with Ms Pac-Man (“-man”?) and have little pacbabies. Yes, he can do all that, and pursued far enough Pac-Man would turn into The Sims, but in my current project I want to focus on the mental health side of things – although I have to say that if he did buy a bright red sports car and start zooming around the maze at 100 kph, it would probably be a better strategy than what he’s been doing for almost four dozen years, and he’s clearly overdue for one at his time of life.

I don’t want to reveal too much yet, but you can weave mental health issues into Pac-Man remarkably easily, yet for some reason nobody ever seems to have done it. I thought of it almost sixteen years ago but didn’t do anything with it either. Just one example: depression causes a monochrome maze to load with few apparent escape routes, walls made of ghosts and no power pills. Pac-Man himself can only move very slowly, and there are extra ghosts, some of which actually turn out to be fake, but you can’t tell which ones. I have a few other scenarios in mind and plan to develop more, so there’s already plenty to go on. The details of how all this is going to work would bore most people, but basically my plan is to use Python 3 on Windows with Pygame, roughly along the lines sketched below. That’s not what I’m interested in writing about just now, though, and it really would be displacement activity, because it’s writing about it instead of writing it for real.
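Still, here’s the flavour of what I mean, purely as a sketch: each scenario becomes a bundle of tweakable parameters applied to the standard level, which the Pygame side would read when building it. Everything here is hypothetical – the names and numbers are placeholders of mine, not anything from the actual project:

```python
from dataclasses import dataclass
import random

# Hypothetical per-scenario settings: a mental-health scenario is just a
# bundle of tweaks applied to the standard level. All names and values
# are illustrative placeholders.
@dataclass
class LevelModifiers:
    palette: str = "colour"         # "mono" drains the maze of colour
    player_speed: float = 1.0       # fraction of normal Pac-Man speed
    power_pills: int = 4            # the arcade original has four
    extra_ghosts: int = 0           # ghosts beyond the usual four
    fake_ghost_chance: float = 0.0  # probability an extra ghost is a decoy

DEPRESSION = LevelModifiers(palette="mono", player_speed=0.4,
                            power_pills=0, extra_ghosts=4,
                            fake_ghost_chance=0.5)

def spawn_extra_ghosts(mods: LevelModifiers) -> list:
    """One flag per extra ghost: True means real, False means a decoy.
    The player, of course, can't tell which is which."""
    return [random.random() >= mods.fake_ghost_chance
            for _ in range(mods.extra_ghosts)]

print(spawn_extra_ghosts(DEPRESSION))  # e.g. [True, False, False, True]
```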

It was an early example of an extremely popular video game. There was even a contemporary song about it. It happened around the same time as the Rubik’s cube craze first took off (that came in two waves), and there used to be a lot of this kind of thing. I personally didn’t play it at the time for two reasons: I had no money and I considered it a frivolous use of computing power, which actually was how I saw most video games back then. I was a strange child. I may still be a strange child, come to think of it. Anyway, so far as I know, genuine colour arcade video games started with Galaxian in 1979. Although Space Invaders looked like it was in colour, this was achieved by strips of cellophane across the monitor. My first encounter with Pac-Man was in the lobby of the ABC Cinema in Canterbury, and I remember it was raining outside at the time. Having no money, I could only watch the attract mode go through its motions, but it did occur to me that it might be quite exciting to have an actual computer with that kind of graphics. This may have preceded us having a colour TV at home, although in fact it was probably a couple of months after that rather underwhelming event. Underwhelming because guess what? I was also disdainful of colour. I don’t know what was wrong with me, to be honest. Earlier in my life, teachers had thought I was completely colourblind because I never used colour in drawings at school.

Pac-Man was originally called Puck-Man, after the Japanese onomatopoeia “paku paku” (as in “paku paku taberu”), which describes a mouth repeatedly opening and closing as someone munches their way through everything on the table. Unfortunately it was found that vandals could easily modify the first word of the title, and it was changed to Pac-Man to prevent that. So far as I can tell, the monitor, which is of course a CRT like an old-fashioned TV, is mounted on its side compared to a TV, because the display is higher than it is wide if I recall correctly. The resolution is 224 x 288 and there are, it seems, sixteen colours. This is the kind of thing you wouldn’t see on an affordable home computer for another couple of years, although the BBC Micro could do something similar somewhat earlier, as could the VIC-20 and TI-99/4A. It saddened me, though, that the hardware was imprisoned in only doing that, because it really looked quite capable if it had been allowed to be a bit more flexible. Basically I wanted a home computer that could do what Pac-Man did and also let the user exploit the hardware properly. The actual memory is quite limited because, like much graphics hardware at the time, it was based on tiles. That is, each 8 x 8 pattern was stored separately, meaning that repeated items on the screen only needed to be represented by single bytes: each item such as a wall piece, ordinary pill, power pill or blank space only had to be stored once, then displayed as a token (the ghosts and the player character are the most complex part of the game graphically, being animated, and I gather they’re drawn as sprites moving over the tile layer, which is how they slide smoothly between tile positions). In fact I’ve worked out that there could be fewer than sixteen variations, although there are probably more, meaning that only four bits are needed to represent each, except that alphanumeric text can also be displayed, which bumps the numbers up, although possibly only to 32 or 64. I presume there was another area of memory storing the colour data, though this is just a guess. The sound is clearly more sophisticated than single square-wave beeps as well, so I presume there’s some kind of hardware for that too. The screen is therefore really only 36 x 28 tiles of various fixed shapes, plus there must be a colour attribute memory of the same size with one colour for the foreground and one for the background. This makes the whole video RAM around 2K. Crucially for my understanding, it used a Z80A microprocessor, meaning that I would be able to follow the code easily, though it’s odd because it somehow feels like it ought to be running on a 6502. It seems to have used 16K of ROM to hold the actual program, with 1K of memory left over. 16K may not sound like much today, but in fact it seems like quite a lot to me.
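To make that arithmetic concrete, here’s the back-of-envelope version of the video RAM estimate in Python. The grid figures come from the discussion above; the one-byte-per-tile colour layout is, as I said, a guess:

```python
# Back-of-envelope check of the tile-based video RAM described above.
TILE = 8                  # pixels along each side of a tile
COLS, ROWS = 28, 36       # 28 x 36 tiles = 224 x 288 pixels

tile_ram = COLS * ROWS    # one byte per tile, naming which 8x8 pattern to show
colour_ram = COLS * ROWS  # one byte per tile for colour (my guess at the layout)

print(COLS * TILE, "x", ROWS * TILE, "pixels")      # 224 x 288
print(tile_ram + colour_ram, "bytes of video RAM")  # 2016, i.e. roughly 2K
```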

I will get back to the more human side in a minute but I need to write this down first.

My main experience of Pac-Man is on the Jupiter Ace. There were at least two commercially available versions of the game, one of which ran on the 3K machine (which in practice leaves you with more like 1K), and it was a bit ropy but nonetheless a full game. The 19K version was better, with a splash screen for example, and was called Gobbledegook, shortened to “gobblegook” as a filename. There was also a 3K type-in version from ‘Your Computer’, February 1983, and since it was a type-in, all the code and graphics are available and I might take a look at it. It’s graphically better than the commercial version, whose maze is quite small, except that it has no power pills and only two ghosts; also, although the maze wraps round, it does it weirdly, with one side higher than the other. I’ll show you. This is the type-in version maze:

It’s fair enough, although it looks quite different to the original. Gobbledegook looks like this:

The commercial 3K game, Greedy Gobbler, looked like this:

Hence it’s clearly possible to fit it into less than 16K with a Z80 processor.

Anyway. . .

One of the things which made Pac-Man so successful is that it wasn’t a shoot-’em-up and therefore had cross-gender appeal. It also didn’t have much of a learning curve. It’s therefore a bit odd that Ms Pac-Man got introduced a while later, which at first glance seems identical to the original except for a bow on the player character’s head, although it did in fact add new mazes and less predictable ghost behaviour. I presume the bow was aimed at making it more appealing to women, although I also doubt very much that that worked. It’s an example of pointless gendering.

It also escaped my attention almost entirely until recently that the different ghosts have names and different characters, and that one is regarded as female and the others male. They are: Inky, Pinky, Blinky and Clyde. Pinky is supposed to be female, but this is not Namco’s doing, it’s a fan thing, because she’s pink. Another aspect of this is that they have different personalities. Blinky chases Pac-Man directly; Pinky ambushes, aiming a few tiles ahead of Pac-Man, which apparently is seen as feminine; Clyde appears just to wander around arbitrarily (in fact he chases like Blinky when he’s far away and heads for his home corner when he gets close, which is what produces the wandering look); and Inky is complicated. He takes the point two tiles in front of Pac-Man, draws a line from Blinky to that point, and then doubles it, aiming for where it lands. This leads to him being interpreted as shy or cautious by players.
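For anyone who wants the rules in one place, here they are as usually documented (for instance in Jamey Pittman’s ‘Pac-Man Dossier’), sketched in Python with coordinates as maze tiles. The helper names are mine; the key point is that ghosts steer towards a target tile rather than pathfinding, which is exactly why the behaviour reads as personality:

```python
# Chase-mode targeting rules for the four ghosts, as usually documented.
# Positions and facings are (column, row) tuples in maze tiles.

def blinky_target(pac):
    # Blinky chases Pac-Man's own tile directly
    return pac

def pinky_target(pac, facing):
    # Pinky ambushes: aim four tiles ahead of where Pac-Man is facing
    return (pac[0] + 4 * facing[0], pac[1] + 4 * facing[1])

def inky_target(pac, facing, blinky):
    # Inky: take the tile two ahead of Pac-Man, then double the vector
    # from Blinky to that tile -- not a point between the two
    ahead = (pac[0] + 2 * facing[0], pac[1] + 2 * facing[1])
    return (2 * ahead[0] - blinky[0], 2 * ahead[1] - blinky[1])

def clyde_target(pac, clyde, home_corner=(0, 34)):
    # Clyde chases like Blinky when far away but retreats to his corner
    # when within eight tiles, which is what looks like wandering
    far = (pac[0] - clyde[0]) ** 2 + (pac[1] - clyde[1]) ** 2 > 8 ** 2
    return pac if far else home_corner

# Pac-Man at (13, 20) facing up, Blinky below him at (13, 26):
print(inky_target((13, 20), (0, -1), (13, 26)))  # (13, 10), ahead of Pac-Man
```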

All this is quite surprising because it seems to be quite sophisticated for a fairly small program and I’m not sure players would’ve cared if this hadn’t been included. It seems that four ghosts wandering around randomly would’ve worked just as well, but seeing as this is in the original it probably is worth including. It’s also interesting how people can project personality onto such things.

Right, that’s enough for now. I need to get on with it. I’m going to use something like the Gobbledegook layout, as it’s close to the original and I’m familiar with the maths. Enough displacement activity.

Did David Bowie Ask The Wrong Question?

I often do bait and switch on here, and I should honour the title to some extent, so here it is. There’s much to admire about David Bowie, and the world lost a genius a few years ago. I’ve blogged about him before, and whereas I can’t be bothered to fish those bits out, I do remember tracing his reference to superhumans in ‘Oh! You Pretty Things’ back via Arthur C Clarke’s ‘Childhood’s End’ to Olaf Stapledon’s writing, particularly ‘Odd John’. But that isn’t what I’m thinking of right now. I’ll never forget the first time I heard ‘Space Oddity’. It was an oddity to hear that accent on the radio. I don’t know why exactly, because there were lots of southern English rock stars at the time, but somehow it seemed really, I don’t know, groundbreaking. Of course he was groundbreaking in other ways too. My ex is a big Bowie fan and has found him a gateway into sci-fi, a genre she previously despised, but as I said to her, and for some reason I think this applies to him in particular, you can’t push his lyrics too far without ruining them. For instance, on the album ‘Ziggy Stardust’ and in ‘All The Young Dudes’, the single he wrote for Mott The Hoople, we seem to be expected to believe that there will be a mains supply for electric guitars, organs and amplifiers during the apocalypse. In fact, maybe there will be and that’s a mark of his visionary nature, but it bothers me. They should’ve been acoustic.

What I have in mind today, though, is ‘Life On Mars?’. This has famously been used as the basis for the excellent time travel police procedural series, which to me felt like Sam Tyler travelling back to a time when things were “normal”. Unsettlingly, that series itself is now almost twenty years old, and the same gap separates us from Windows 3.1 as separated Sam Tyler from Gene Hunt. Well, sort of – no spoilers! My take on the track itself, though, is that it’s about someone despairing of how life is here on Earth and hoping there’s life on Mars instead, because at least then there’d be something better out there. I’ve said before that my greatest fear is that there is no life elsewhere in the Universe, mysterious encounters in Sussex chalkpits notwithstanding. This is also why I’m so peed off with the scarcity of phosphorus, and it’s a major reason why I’m so focussed on the possibility of alien life. I may have just written a nine-thousand-word rambling blog post about silicon-based life, but the subtext is the same as Bowie’s song’s. As Monty Python put it, “pray to God there’s intelligent life somewhere up in space, ‘cos there’s bugger all down here on Earth”. I don’t entirely agree with that, by the way. I think many of us choose not to think complexly, which is one reason we’re in this mess.

Okay, have I done enough of that now? Enough relatable stuff? Seriously though, I’ll try not to go off on one.

Because as you must know by now, it really looks like there might at some point have been life on Mars, according to recent discoveries there. I have to admit that at this point in the proceedings I have little idea what they’ve found, but I seem to remember it’s an iron compound, possibly involving sulphur, which is found as a product of terrestrial life, needs quite a lot of energy input to form but is then stable, and has no known non-biological routes to its formation – other than, I suppose, a technological species which can do sums deliberately synthesising it, which I’m sure nobody sensible is suggesting. This is the latest stage in a long history of claims about Martians. At one point it was considered so certain that there was intelligent life there that a competition for ways to communicate with aliens specifically excluded Mars because it was thought too easy. There have been claims of canals, then lichens, and later a scaled-down set of claims regarding something like bacteria. In particular there was the Labeled Release (American spelling) Experiment on the Viking lander, which appeared to show positive results, i.e. the results which NASA had pre-decided would be best explained by life; the problem was that the other two experiments were negative. Until recently it frustrated me that they treated it this way, but right now it seems more like how the scientific method works: come up with an idea, test it, and then do everything you can possibly think of to prove a positive result wrong. On the other hand, looking at it non-scientifically at the time, it felt like they were in denial about the existence of life, possibly because it’s an audacious and potentially career-ending claim if it ends up being refuted, but also because it’s such an Earth-shattering one. This puzzles me a bit, though, because for a long time, from at least 1877 up until 1965, it was basically considered a dead cert that there was life there, and often on Venus as well, and it didn’t seem to make much difference to the human race that we thought it was out there. Maybe this is to do with most people not being very focussed on space, but at least in the ’50s and ’60s that was definitely not so, and in fact this was probably one source of inspiration for Bowie’s track. Getting back to Viking, it’s now thought that the results of the experiment were caused by perchlorate in the soil, a bleach-like substance. It’s also been claimed that the perchlorate originated from the sterilisation process in the reaction chamber before the lander left Earth, although I think it’s now established that perchlorate levels are high in Martian soil anyway. In fact I seem to remember (look at me failing to check my sources – sorry) that this makes Andy Weir’s ‘The Martian’ unfeasible, though maybe ingenuity would’ve got him out of his predicament some other way. Weir has since said that Watney could’ve washed the soil thoroughly first, so maybe, although wouldn’t he then have ended up with most of his water full of bleach? Maybe not. I’m not a chemist. There’s also been a view that the dendritic appearance of some terrain close to the poles is due to the action of microbes, something I went into in depth when I put the Martian calendar for 214 TE (telescopic era) together, if anyone remembers that – it involved me throwing an inkjet printer into the larder with considerable force at one point.

What’s happened is that the Perseverance rover in Jezero Crater has found what they call “leopard spots” on rock samples. Mudstones containing organic carbon have been found to contain nodules and reaction fronts rich in ferrous iron phosphate and sulphide minerals. Vivianite is one possible candidate, which, probably coincidentally, is also found in bivalve shells; another is greigite, a ferrimagnetic mineral regarded as a biosignature, in other words a sign of life. Other processes which could have produced these minerals involve heating, which doesn’t seem to have happened to the rocks in question, as they would show other signs of it, for instance in their crystal structure. It seems that redox reactions have occurred there, that is, reactions involving the transfer of electrons between substances, examples of which include burning and internal respiration. These rocks are around three thousand million years old, and at that time the same chemical reactions were occurring on Earth, mediated by microorganisms. So there are these two neighbouring planets on both of which chemical reactions usually associated with life have taken place. On Earth, it’s known that this is due to life, but what about Mars? The paper in question has eliminated the other possibilities as likely explanations. Further investigation by NASA is of course unlikely to occur due to funding cuts, but China might end up doing a sample return mission, that is, bringing samples back to Earth, in the next decade.

For me there are a couple of takeaways from this. One is that space exploration moves agonisingly slowly. This is probably an artifact of being born in the 1960s CE, but I was under the impression that there would be a human mission to the Red Planet some time between 1979 and 1981. This then got repeatedly postponed. The other is that science tends to do the same thing, although it’s also punctuated by revolutionary bursts of activity, according to the philosopher Thomas Kuhn anyway. It’s very cautious and tries hard to be boring. We seem to be edging very gradually into a position of accepting that there has been life elsewhere in the Universe, and that it was found elsewhere in this solar system in a similar condition to its state on Earth at the time. Whether it exists on Mars now is another question, although of course “life finds a way”. Whereas that’s a bit of pop-culture tat, there is an element of truth in it, and to be fair it’s quite a good line. You only have to look at a seedling growing between two paving stones to see that, but living on a practically airless, arid rock bathed in ultraviolet and dropping daily below the temperature of Antarctica is a considerably taller order. Maybe.

There are several possible worlds in this solar system other than Earth which may be hospitable to life as we know it. These include Venus, Mars, Ceres, Ganymede, Callisto, Europa, possibly Jupiter, Enceladus, Titan and maybe even Triton, Pluto and Charon. Several of these are quite a bit friendlier to it than Mars, although there’s still the question of whether life could have arisen in those places in the first place. Maybe it didn’t arise on those worlds, though, but seeded them having arisen in space. If that’s so, maybe it was the cloud that formed this solar system which gave rise to life, which then arrived on various planets, moons, asteroids, comets, wherever, and either died or, metaphorically, took root there. If so, with reference to the previous post on here, it would probably show up as having the same chirality of molecules as us, i.e. left-handed proteins and right-handed carbs. It’s been suggested that life here must have pre-dated the Earth for two reasons: it seems to have arrived almost before it was possible for it to form, and extrapolating mutation rates in DNA takes it back to a point before the formation of this planet. To clarify, there’s a fairly steady mutation rate in DNA and RNA which enables scientists to date roughly when diverse organisms had a common ancestor, and incidentally this date usually falls before the first definite members of two groups turn up separately as fossils, which could mean a couple of things. The complexity of many genomes has also increased over time, and this too can be measured from the genes which organisms still share. If you extrapolate these rates back to the point where the genome holds only the minimum information for an organism to function, you get a date of about nine or ten thousand million years ago, roughly twice the age of the Earth. This isn’t generally regarded as solid evidence, though. What it would suggest, interestingly, is that not only does life here descend from organisms present in the solar nebula, but that it comes from a source which existed before this solar system had even begun to form.
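As a toy version of that extrapolation (this is the argument associated with Sharov and Gordon, who put the doubling time of functional genome complexity at roughly 376 million years): pick an assumed modern complexity and an assumed minimum, and run the clock backwards. The numbers below are illustrative assumptions of mine, not the paper’s dataset:

```python
import math

# Toy backwards extrapolation of genome complexity, assuming a constant
# doubling time. All three numbers are illustrative assumptions.
DOUBLING_MYR = 376   # assumed doubling time in millions of years
C_NOW = 5e8          # assumed functional base pairs in a complex modern genome
C_MIN = 5e2          # assumed minimum for anything that can self-replicate

doublings = math.log2(C_NOW / C_MIN)   # about 20 doublings
origin_mya = doublings * DOUBLING_MYR  # about 7,500 million years

print(f"{doublings:.1f} doublings, origin roughly "
      f"{origin_mya / 1000:.1f} thousand million years ago")
```

Which lands in the same territory as the nine-or-ten-thousand-million-year figure above, i.e. comfortably before the Earth existed, though obviously the answer is only as good as the assumptions fed in.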

I’m not going to base anything firmly on that possibility, but others have been suggested, one of which is that life arrived here from Mars aeons ago, which is supported by the likelihood that Mars was probably actually friendlier to life back then than Earth was. These redox reactions may be from the exact same taxon of organisms on both planets. And this is where it gets difficult.

David Bowie asked “is there life on Mars?”, but was this the right question? Many people have said that if life can be found there, or in or on any other world in this solar system, it guarantees that there’s life elsewhere in the Universe. Well, it really, really does not. Suppose we do find incontrovertible evidence that there is, right now, life on Mars, and also on several other worlds in this solar system, and moreover that it’s remarkably similar in some ways to life on Earth, for instance possibly sharing some genes with us, and having the same chiralities in proteins and carbs as us. That would mean that all of that life has a common ancestor. That common ancestor might have arisen in this solar system, or at least locally, before this solar system formed. In terms of chirality, maybe there’s something about the processes of the Universe which leads right- and left-handed molecules of the respective types to form and persist while their mirror images don’t, or maybe there’s something about mirror life which means it won’t function, in which case all life of the kind we know would have those chiralities for some very fundamental reason – but we’re still drawing conclusions from a very small sample. Maybe there’s just something about this solar system which makes it more likely that life would emerge here, such as the relative abundance of phosphorus, or maybe it just did emerge here against all odds, because we live in a very large Universe, many of whose planets are covered in a reddish-brown tarry goo instead of life.

For all we know, planets and moons here could be rich in life forms, and that would be a cheering thought, but it doesn’t of itself guarantee that the rest of the Cosmos is not utterly barren. For all we know, there could be endless lifeless worlds filling the Universe, with nothing whatsoever wrong with them, barren simply because the chances of life arising are vanishingly small. I’m sometimes haunted by the thought of some very, very Earth-like planet orbiting, I dunno, Delta Pavonis or whatever, with a perfectly comfortable surface temperature, oceans, continents, rain, thunderstorms, rainbows, mud, puddles children would love to splash in, sunsets over idyllic beaches lovers could walk along, or other phenomena alien beings could appreciate in their own way if they existed, but which will never, ever see even a single bacterium before their stars overheat and destroy them. Trillions of them, all without life. And this solar system being full of life would be of no significance, no consequence to that situation, because life would just have arisen this one time. And this is why I say that if it could be proven that life existed nowhere else in the Universe, I would stop worshipping God. It’s like a deal-breaker in a relationship for me. I would be terminally angry with such a Creator for sustaining in existence such a vast and uninhabited Cosmos. It would be really bad.

This, then, is why I say David Bowie is asking the wrong question. It’s the right one if understood in terms broader than just Mars, that is, if Mars is just a stand-in for another planet or other location where life could persist. Mars is just our next door neighbour, and we already know our bushes might end up growing over the fence or our aphids might end up infesting next door’s roses. Big deal. The Universe is so big that the size of this solar system is nothing to it.