How Real Is Maths?

As you may know, I was involved in a high-control parachurch organisation in the mid-1980s CE when I went to university for the first time. Over the first few months, I didn’t resist them much, at least externally, because I wanted to give them a chance and see whether God and evangelical Protestantism really did have all the answers, as they claimed. I then went back to Canterbury for Xmas and bought my dad a book about mathematics, something he was very keen on and had a good grasp of at the time, which I also ended up reading myself. In this book, which I think may have been Martin Gardner’s ‘Mathematical Circus’, there was an interesting chapter on different degrees of infinity. In maths, there are countable and uncountable infinities. Countable infinities do take forever to count, but given an infinite period of time it could be done. Uncountable infinities are not countable even then. So for example, there are infinitely many whole numbers and infinitely many points in space, but those two infinities are different sizes. It can be proven that this is so as follows. Suppose you have an endless supply of cards with a one on one side and a zero on the other, and you lay out an infinite list of infinitely long rows of these cards, each row representing one sequence of ones and zeros. Have you then produced all possible infinite sequences of ones and zeros? No. Start in the top left hand corner of this array and turn that card over, then go on diagonally forever, moving one row down and one column along each time and turning the cards over as you go. The sequence you have then generated, running diagonally down the arrangement, is not anywhere in the list, because its nth digit always differs from the nth digit of row n. Since this works for any such list, no list can contain every infinite sequence of ones and zeros, and hence there must be a larger infinity. This leads to peculiar consequences. For instance, it means you can in theory take a sphere of a given size, remove an infinite number of points from it and construct another equally-sized sphere from them without reducing the size or integrity of the first one. Georg Cantor, who first thought of this way of understanding infinity, spent the later part of his life going in and out of mental hospitals, partly due to the hostility of other mathematicians to this concept and its implications and possibly also because the concept he came up with was a cognitohazard. To some extent, thinking of this may have broken his brain.
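Stated symbolically, the diagonal trick is compact; this is the standard presentation rather than Gardner’s exact wording. Given any infinite list s₁, s₂, s₃, … of infinite sequences of ones and zeros, define a new sequence d by flipping the diagonal:

d(n) = 1 − sₙ(n)

Then d differs from every sₙ in at least its nth digit, so d appears nowhere in the list, however the list was chosen.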

With steely determination, I returned to university and immediately confronted a member of the cult, not on this issue but on other, more practical ones such as intolerance of other spiritual paths and homophobia. However, because we were discussing an infinite being, namely God, I mentioned this concept in passing, and his interesting response has often given me pause for thought since. He regarded this view of infinity, and by extension much of pure mathematics, as a symptom of the flawed nature of the limited and fallen human mind. I can’t remember exactly how he put it but that’s what it entailed. At a later point he tried to explain what I’d said to someone else as “infinity times infinity”, which is not what this is, and advised them not to think about it, which in a way is fair enough. He was a medical student, and it may not be worthwhile to waste your brain cells on such a thing in that situation, except that it might be useful for psychiatric purposes, because, well, what are cognitohazards? Are they actually significant threats to mental health, and are there enough of them encountered in daily life, or even occasionally, for them to be proper objects of study?

Something which definitely would be a cognitohazard is Graham’s Number. Until fairly recently, Graham’s Number, hereinafter referred to as G, was the largest number ever to have been used in a serious mathematical proof. Obviously you could talk about G+1 and so on, but that’s not entirely sensible. G is an upper bound for the solution to a particular problem involving bichromatic hypercubes. Take a hypercube of a certain number of dimensions n and join all the vertices together to form a complete graph on 2^n vertices. Colour each edge in one of two colours. What’s the smallest number of dimensions such a hypercube must have to guarantee that every such colouring contains at least one single-coloured complete subgraph on four vertices lying in a plane? The answer might actually be quite small: the best known lower bound is thirteen. However, it might be, well, extremely large doesn’t really cut it to describe how big it could be, so let me just say it might not be that small at all. It might be as big as G.

G can actually be expressed precisely, but in order to do so a special form called Knuth’s up-arrow notation has to be used. There’s an operation called exponentiation which is expressed very easily on computers and other such devices as “^”. Hence 2^2 is two squared, 2^3 two cubed and so on. Although it would probably be fine to use the caret to express this, the up-arrow “↑” has historically been used for the same operation, and Knuth’s notation builds on it. In his scheme, 2↑4 is 2 x 2 x 2 x 2, which is of course sixteen. However, more arrows can be added, so 2↑↑4 is “tetration”, 2↑(2↑(2↑2)), which is 65536 (or ten less than three dozen and two zagiers in duodecimal). Then there’s “pentation”, with three arrows: 2↑↑↑4, for example, expands further as 2↑↑(2↑↑(2↑↑2)), which is a tower of 65536 twos; for scale, even a tower of just five twos, 2^65536, already has 19729 digits. This can be continued as long as necessary of course. G, though, is not simply a three followed by sixty-four arrows and another three, as it’s often misstated. It’s built in sixty-four layers: the first layer is g₁ = 3↑↑↑↑3, and each subsequent layer uses the previous layer’s value as its number of arrows, so g₂ is 3 followed by g₁ arrows followed by 3, and so on up to G = g₆₄. That definition, perhaps surprisingly, pins down the exact value of the number. If every Planck volume in the observable Universe were to represent a digit, there still wouldn’t be enough space to write it out longhand. It’s often said that if a human were somehow to hold every digit of G in their head, the head would collapse into a black hole, because a head-sized volume simply cannot store that much information. That’s the popular framing, anyway. So Graham’s Number is also a cognitohazard.
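As a worked example of how fast each extra arrow accelerates things (my own arithmetic, easily checked):

3↑3 = 3^3 = 27
3↑↑3 = 3↑(3↑3) = 3^27 = 7625597484987
3↑↑↑3 = 3↑↑(3↑↑3) = a tower of 7625597484987 threes
3↑↑↑↑3 = 3↑↑↑(3↑↑↑3) = g₁, the first of Graham’s sixty-four layers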

Nowadays, larger finite numbers have been used. TREE(3), which I’ve mentioned before, also involves graphs, as does Simple Subcubic Graph Number 3, SSCG(3), which renders TREE(3) insignificant. There’s an even larger finite integer, Rayo’s number, which resulted from the Big Number Duel held at MIT in 2007, and which I could represent here, though I’d probably be talking to myself. Actually, I will:

This is too hard to type out without fiddling about with LaTeX, so here’s the first bit written out longhand, unfortunately with a Bic. The next bit is based on this definition, and reads: “The smallest number bigger than every finite number m with the following property: there is a formula φ(x₁) in the language of first-order set theory (as presented in the definition of Sat) with less than a googol symbols and x₁ as its only free variable, such that: (a) there is a variable assignment s assigning m to x₁ such that Sat([φ(x₁)], s), and (b) for any variable assignment t, if Sat([φ(x₁)], t), then t assigns m to x₁.”

Here [φ(x₁)] is a Gödelisation of the formula, and s and t are variable assignments.

It wouldn’t be difficult to understand this but I haven’t entirely bothered to pursue it. I showed you the actual notation to introduce a new point: mathematical formalism. The fact that this might look like gibberish also illustrates an important feature of mathematics on which formalists capitalise: maybe it’s just a game based on symbols.

When I first read ‘Beginning Logic’, at about the same time as I was resisting the cult, I was rather surprised when the author defined the logical symbols in terms of their physical appearance as marks on paper rather than in more mathematical-type terms, and the fact that I’ve written that out might tempt one to think that ultimately that’s all they are and this form is nothing more than a kind of game which we give meaning to. This appears to be formalism, an approach, found in various disciplines, which emphasises form over content. The possible connection to the modernist slogan “form follows function”, coined by Louis Sullivan and later associated with the Bauhaus, is not lost on me, but rather than pursue that right now I should probably talk about formalism itself. Formalism as applied to literature, for example, yields Russian formalism, an early twentieth century and substantially Soviet-era movement linked to New Criticism which held that literary criticism could be objective by letting the text stand by itself and ignoring influences and authorship, focussing on autonomy (what I just said), unity, which is that every part of a work should contribute towards the whole, and defamiliarisation, that is, making the familiar seem unfamiliar. Martian poetry springs to mind here.

Translating this to maths, formalism is the view that maths consists of statements about the manipulation of sequences of symbols using established rules. Like formalism in literary criticism, it ignores everything outside that realm, so among other things it kind of turns everything into pure mathematics. This is what I was confronted with when I first learnt formal logic, hence that photo: a series of symbols on a piece of paper, which there are rules for manipulating, and which, given the comment underneath it, expresses a very large number.

Now the reason this interests me in the context of my acquaintance (friend? I don’t know) is that there is another philosophical position about maths called Platonism, which is the belief that maths is discovered and already exists. This is similar to believing in the existence of God, so my friend (why not) held an unusual position in that he thought at least one area of maths, and I think by implication much of the rest of it, wasn’t “out there” but was invented by human beings, yet he also believed in God, i.e. something which is “out there” in just the sense in which mathematical Platonism sees maths. There doesn’t seem to be anything essentially wrong with this position but it is a bit odd and feels inconsistent. He also probably thought that the “plain reading” of Biblical values referred to objective principles such as not stealing, honouring the Sabbath and so on, which would stand to ethics much as maths stands to reality for many people, theistic or otherwise. But he didn’t view maths like that. I don’t know if he was aware of the apparent contradiction.

On the other hand, I can totally get on board with the idea that whatever we might think about how reality works is completely wrong because the Universe is beyond our comprehension. If we consider certain animals, we perceive their understanding as being limited in various ways. For instance, they might be blind cave fish or they might be sessile filter-feeders living in burrows below the high tide mark, and we suppose that they don’t understand the world as much as we do. Although I think this is accurate, and I should mention that we’re also limited in various ways, particularly in lacking a sense of smell as good as most other mammals, there’s no reason to suppose that the way we think is any more adequate or discerning about reality. All we might have is a system that works most of the time regardless of all the stuff we don’t know about. That said, it still feels like various things must exist, such as current experience and a physical world. In view of that possibility, I do have some sympathy with my friend’s take on this although it felt somewhat unconsidered in his case.

In fact I’d take it further into his world and say that as humans we do indeed have limited understanding, this time comparing ourselves with God rather than with other animals. We’re fallible and certain things are beyond us. Moreover, there’s the question of the Fall, and I have to be careful here. Our understanding is also strongly constrained by the kinds of cultures and societies we live in, which to some extent is what the Fall really is. So like him, I do in fact link it to my spirituality and feel that a little bit of humility is in order. In that way, both constructivism and Platonism could be true. There could be mathematical truths known only to God, or for an atheist mathematical truths which could in theory be discovered by a sufficiently powerful mind, and other mathematical activities and forms which are merely games played by our own finite minds.

I’ve done a bit of a bait-and-switch here, by swapping formalism for constructivism, and they’re not the same thing. Constructivism, of which intuitionism is the best-known variety, sees mathematics as built by mathematicians. Hence maths does have a meaning beyond the mere manipulation of symbols through rules, but the meaning is given by the mathematicians. In other words, maths is invented, but it is real.

To illustrate the difference between formalism and constructivism, I’d like to go back to the diagonal proof of an uncountable infinity mentioned above (often loosely called aleph one, ℵ₁, though strictly speaking identifying the two is the continuum hypothesis). According to formalism, ℵ₁ is a validly defined symbol and the system is internally consistent, so there’s no problem. Constructivism, though, would reject the proof and even its premises. The set of all natural numbers, according to this view, is only ever potentially infinite as it can never be completed. Even real numbers, i.e. the set of numbers including all decimal fractions between the integers, are only valid insofar as they can be constructed in a finite way. That infinitely long sequence of zeros and ones, and all the ones under it, only exist up to the point where the process of producing them has actually been carried out at some point in the history of the Universe. In other words, infinity of either kind is only a potential, and really not even that, since the Universe won’t exist forever in a form hospitable to minds capable of performing maths. I would say that this has to be a non-theistic view, since given theism there is an eternal and infinite mind which can and maybe does do all that, which makes Platonism true, although of course God might have better things to do or never get round to it.

An extreme form of constructivism is ultrafinitism. I think of this metaphorically as some mathematical objects being in focus and others being to a greater or lesser extent blurred. So for example, the lower positive integers are in perfect focus, sharp and truly instantiated by virtue of the extensive construction they’ve undergone through continual use. Less well-focussed are the non-integral rational numbers, zero and the negative numbers, and as one moves further from zero, away from numbers which can be reduced to fractions and into imaginary, complex and hypercomplex numbers, things become less and less sharply focussed, until something like Graham’s number or an octonion is just a meaningless blur and the infinities are grey blobs. This is just an image of course, so here goes.

To an ultrafinitist there is no infinite set of natural numbers, because it can by definition never be completed. It goes beyond that though. For instance, a relatively mildly high number, Skewes’s Number, is about 10^10^10^34. It’s an upper bound for the first point at which one formula used to estimate the number of primes below a given value, the logarithmic integral, switches from an overestimate to an underestimate. There are also higher Skewes’s-type numbers for where the estimate switches back again. It can be proven that this switching happens, but the exact value of the first crossing is unknown, and it may be impossible to calculate, putting it in a different position from G, which can be precisely known. Peculiarly, this could mean that Skewes’s Number doesn’t exist in these terms but Graham’s does.

This gives rise to a vague set known as the “feasible numbers”, which are numbers which can be realistically worked upon using computers and the like. The question arises of how to account for such things as π, because it seems like it goes on forever, but ultrafinitists apparently view it as a procedure in calculation rather than an actual number. Incidentally, it’s difficult to refer to numbers in this setting because words like “real” and “imaginary” have long since been nabbed by mathematicians for specific meanings which don’t refer to the obvious interpretation of those terms. I suppose I could say “existing” or “instantiated”.

Some mathematicians also view maths as essentially granular. That is, they reject the idea that there are two equally valid ways to do maths, one involving continuous functions as with infinitesimal calculus, the other, exemplified by the group of integers under addition, involving discrete entities. For them, only the discrete kind is real, and therefore there are no such things as irrational numbers.

Although he didn’t get as far as ultrafinitism itself, Wittgenstein’s thought does provide a useful basis for it. He viewed infinity as a procedural convenience, only ever potential rather than actual, and maths as an activity involving the construction of novel concepts which didn’t pre-exist to be discovered. In general, he’s a very concrete philosopher. I’m actually not that keen on a lot of his thought, although some of it’s good, such as the family resemblance account of definition, which could be applied here. Logical positivism also wouldn’t allow for such concepts, but I don’t consider that a respectable school of philosophy so much as an interesting footnote in the history of ideas.

Ultrafinitism has major consequences for physics. Singularities arise in various places in physics and cosmology. A rather prosaic example is that the stress at the tip of an ideal crack in a continuous material comes out as infinite. This can be resolved by dropping the idealisation that the material is a continuous substance rather than being made up of atoms or other particles. Some other areas where singularities arise are more exciting, but this one can operate as an illustration of how the problem might be addressed. Specifically, there was a singularity at the Big Bang, there’s one at the centre of a black hole, and the formulae for mass, time and length also blow up at the speed of light. This has a remarkable consequence, at least as I see it: for an ultrafinitist, the speed of light can be exceeded. Ultrafinitism strongly suggests that faster than light travel is possible and that in some sense the Big Bang never happened. The first of these in turn implies that backwards time travel is possible too. At this point, ultrafinitism begins to feel too good to be true, but then a light bulb would probably have seemed like that to a mediaeval European, so that would be argument from incredulity.
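To make the light-speed singularity concrete, here is the standard Lorentz factor from textbook relativity, added by way of illustration rather than taken from any ultrafinitist source. It scales time dilation, length contraction and relativistic mass, and it diverges as v approaches c:

γ = 1/√(1 − v²/c²), so γ → ∞ as v → c

An ultrafinitist who denies that γ can genuinely become infinite is in effect denying that the formula holds exactly all the way up to v = c.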

There’s also a problem for the theist with ultrafinitism and finitism, in that they imply that any deity would not be eternal or infinite. That said, it’s important not to allow a “God of the gaps” in at any point: God should never be used as an explanation for a physical phenomenon. In any case, the concept of God may be moribund for the ultrafinitist because of the need to posit octonions as variables in Bell’s Theorem.

What all of this seems to mean is that quantum physics makes more sense than relativity for the ultrafinitist, because it makes reality granular. The difficulties ultrafinitism poses for relativity and cosmology could be a sign that relativity is only an approximation of the real world in some respect we don’t yet understand. After all, we don’t accept that the stress at a crack is really infinite, because it’s too outlandish that a genuine infinity should turn up every time we drop a piece of porcelain onto a stone floor, so maybe we should be just as suspicious of lightspeed being a limit or the Big Bang being a beginning. The fact remains that relativity is very well tested and used in daily life, for instance in satnav. It isn’t just an abstract theory about a realm of reality few people venture into, and it does seem odd to say that despite all the evidence in its favour it will simply fail at a certain point. Moreover, although I’m at peace with the concept of time travel, many people would object to that implication.

To conclude, I’m aware that I’ve wandered all over the place with this, and my response to this impression is as follows: yesterday I heard someone on the radio comment that as one’s age advances it’s as if different parts of one’s brain want to break up the band and follow solo careers, so maybe this blog post is evidence of my melting brain.

The Way In

Backing the losers

I’ve a tendency to back losers. For instance, Prefab Sprout and The The were my favourite bands when I was in my late teens and both were markedly unsuccessful. In keeping with this trend, I used to have a Jupiter Ace computer and I’ve alluded to this a few times on this blog. Jupiter Cantab, the company which designed and made it, had a total of I think five employees, seemed to work out of a garage and a suburban house in Cambridgeshire and had a turnover of around five thousand pounds. They went bust maybe a year or two after releasing the Ace in October 1982 CE and had to sell off all their old stock, a happy thing for me as it meant I could acquire a new computer. Its hardware was very basic even for late ’82, but its firmware decidedly was not. Unlike practically every other home computer of the time, the Ace used not BASIC but FORTH as its native programming language. Perversely, I considered writing a BASIC interpreter for it for a while but it seemed a bit silly so I didn’t.

FORTH, unlike BASIC as it was at the time, was considered a “proper” programming language. It has two distinctive features. One is that it uses a data structure known as the stack, which is a list of numbers in consecutive locations in memory presented to the user as having a top and a bottom. Words in the FORTH language usually take data off the top of the stack, operate on them and may leave one or more results on it. This makes the syntax like Latin, Turkish or Sanskrit rather than English, since instead of writing “2+2”, you write “2 2 +”, which leaves 4 on the stack. The other feature is that rather than writing single programs the user defines words, so for example to print out the character set one writes:
: CHARSET ( This is the defined word and can be whatever the user wants except for control characters ) 255 32 DO I EMIT LOOP ;
If one then types in CHARSET and presses return (or ENTER in the Ace’s case), it will print out every character the Ace can display except for the graphics characters, whose codes are below 32.

What you see above is the output from the Ace when you type in VLIST, i.e. list every word in its vocabulary. I think there are a total of about 140 words. All of them fit in 8K and show that FORTH is a marvellously compact language compared to BASIC or in fact most other programming languages. For instance, the ZX81’s BASIC has around forty-one words. FORTH on the Ace, and in general, was so fast that the cheapest computer faster than it, the IBM PC, cost more than a hundred times as much. For instance, in order to produce sound it was possible, as well as using the word BEEP, to define a word that counted from 0 to 32767 between vibrations and still produce a respectable note by moving the speaker in and out. By contrast, the ZX81 would take nearly ten minutes to count that high and had no proper sound anyway. This is a somewhat unfair comparison but illustrates the gulf between the speed of this high level language and the other.
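The delay idea is easy to sketch in a couple of lines. This is a generic timing word rather than the Ace’s actual sound routine, and DELAY is my own name for it:

: DELAY ( n -- ) 0 DO LOOP ; ( an empty loop run n times, purely to burn time )

Typing 32767 DELAY then counts to 32767, the figure the ZX81 would take minutes over.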

Whittling Down The Vocab

As I’ve said, FORTH consists of words defined in terms of other words and therefore some people object to calling code written in it “programs”, preferring “words”. The fact that this definitional process was core to the language immediately made me wonder what would constitute a minimal FORTH system. There are quite a few words easy to dispense with in the vocabulary listed above. For instance, the word SPACES prints whatever number of spaces is indicated by the number on top of the stack, so 32 SPACES prints 32 spaces. However, this word could’ve been defined by the user, thus:
: SPACES 0 DO SPACE LOOP ;

The DO-LOOP control structure takes a limit and a starting value from the top of the stack and executes the code between DO and LOOP once for each count from the start up to one below the limit. It can be taken a lot further than that though. SPACE and CR are two words with a similar structure: each prints out a single character. SPACE unsurprisingly prints out a space. CR does a carriage return. Both are part of the standard ASCII character set, and the word for printing the ASCII character indicated by the number on top of the stack is EMIT, so they can be defined thus:
: SPACE 32 EMIT ;

: CR 13 EMIT ;

Hence three words are already shown to be unnecessary to the most minimal FORTH system, and the question arises of what, then, would constitute such a system. What’s the smallest set of words needed to do this?

The Ace had already added quite a lot of words which are not part of standard FORTH-79, examples being all the floating point words, PLOT, BEEP, CLS, VIS and INVIS, and omitted others which are easily defined. Some are trivial to define, such as 2+, 1- and 1+. Others are a bit less obvious: PICK can be used to replace the copying words DUP and OVER, while SWAP and ROT are special cases of ROLL and so can be defined in those terms, as shown below. . , that full stop, which prints a number, can be replaced by the number formatting words <#, # and #> . You can continue to whittle it down until you have a very small number of words along with the software which accepts input and definitions, and you’re done. In fact, if you know the hardware well enough you can make it even smaller because, with the Jupiter Ace for example, you know where the display is stored in RAM, how the stacks work (there are actually two, because practically all computers implicitly use a stack for subroutines) and, when it comes down to it, it’s even possible to define words which accept machine code, the numbers computers actually use, which represent simple instructions like adding two numbers together or storing one somewhere.
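Here are those definitions written out, using FORTH-79’s one-indexed PICK and ROLL (the later ANS standard counts from zero instead). The primed names are my own, to avoid clobbering the built-in words while experimenting:

: DUP' 1 PICK ; ( copy the top stack item )
: OVER' 2 PICK ; ( copy the second item to the top )
: SWAP' 2 ROLL ; ( move the second item to the top )
: ROT' 3 ROLL ; ( move the third item to the top )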

This is about as far as I’d got until recently, when I managed to join together two ideas that I’d never previously connected.

Logic To Numbers

As you probably know, my degrees are in philosophy and my first degree is in the analytical tradition, which is the dominant one in the English-speaking world. It’s very common for philosophy degrees to be rubbished by the general public, and even within philosophy, continental and analytical philosophers are often hostile to each other. What may not be appreciated is that much of philosophy actually closely resembles mathematics and, by extension, computer science. When the department at my first university was closed down, some of it merged with computing. It also turns out, a little surprisingly, that one of my tutors, Nicholas Measor, was a significant influence on the theory of computing, having helped develop the modal mu calculus, which is used in reasoning about the correctness of computer systems and is related to temporal logic. He co-wrote a paper called “Duality and the Completeness of the Modal mu-Calculus” in the ’90s. This has kind of caused things to fall into place for me.

The Dedekind-Peano axioms for the set of natural numbers are central to the theoretical basis of arithmetic. They go as follows:

  1. 0 is a natural number.
  2. For every natural number x, x=x.
  3. For every natural number x equal to y, y is equal to x.
  4. For all natural numbers x, y, z, if x=y and y=z then x=z.
  5. For all a and b, if b is a natural number and a=b, then a is also a natural number.
  6. Let S(n) be “the successor of n”. Then for every natural number n, S(n) is a natural number.
  7. For every natural number S(m) and S(n), if S(m) = S(n) then m=n.
  8. For every natural number n, S(n)=0 is false.
  9. If K is a set such that 0 is in K, and for every natural number n, n being in K implies that S(n) is in K, then K contains every natural number.

You can then go on to define addition, subtraction, multiplication and inequalities. Division is harder to define because this is about integers, and dividing one integer by another may lead to fractions, decimals and so forth. I’ve known about all this since I was an undergraduate but hadn’t given it much thought. It is, incidentally, possible to take this further and define negative, real and presumably imaginary, complex and hypercomplex numbers this way, but the principle of knowing that that’s possible is enough really.
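As an illustration, and this is standard fare rather than part of the axioms themselves, addition can be defined from the successor function alone by two equations: a + 0 = a and a + S(b) = S(a + b). Then 2 + 2 unwinds mechanically:

S(S(0)) + S(S(0)) = S(S(S(0)) + S(0)) = S(S(S(S(0)) + 0)) = S(S(S(S(0))))

which is 4.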

Dyscalculic Programming

If you have a language which can express all of these axioms, you have a system which can do most arithmetic. And this is where I had my epiphany, just last week: you could have a programming language which didn’t initially use numbers at all, because numbers could be defined in terms of these axioms instead. It would be difficult to apply this to FORTH, because it uses sixteen-bit signed binary integers as its only data type, but I don’t think it’s impossible, and it would mean there could be a whole earlier and more primitive programming language which doesn’t initially even use numbers. This is still difficult and peculiar, because so far as I know all binary digital computers use sequences of zeros and ones, making it rather theoretical. It’s particularly hard to see how to marry it with FORTH.

Proof Assistants

Well, it turns out that such programming languages do exist and that they occupy a kind of nebulous territory between what are apparently called “proof assistants” and programming languages. Some can be used as both, others just as the former. A proof assistant is a language somewhat similar to a programming language, but it helps the user and computer together arrive at proofs. I have actually used one of these without realising that that was what I was doing, back in the ’80s, when the aforementioned Nicholas Measor wrote an application for the VAX called “Citrus” (after the philosopher E. J. Lemmon, who incidentally died 366 days before I was born), whose purpose was to assist the user in proving sequents in symbolic logic. My approach to this was to prove them myself, then just go along to the VAX in the computer basement and type in what I’d proven, although it was admittedly helpful on more than one occasion. While using this, I mused that it was somewhat like a programming language except that it wasn’t imperative but declarative, and wondered how one might go about writing something like that. I also considered the concept of expressive adequacy, also known as functional completeness, in this setting, once again in connection with FORTH, realising that if the Sheffer stroke, NAND, were included as a word in FORTH, a whole host of definitions could easily provide any bitwise function. It was also borne in upon me that all the logics I’d come across so far were entirely extensional, and that it might even be a distinctive feature of logic and mathematics per se that they are completely extensional in form. However, I understand that there are such things as intensional logics, and I suppose modality might be seen in that way, although I always conceive of it in terms of possible world semantics and multivalent truth values.
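The Sheffer stroke point is easy to demonstrate in FORTH itself. Here’s a sketch with primed names of my own so as not to clobber the built-ins; note that NAND is bootstrapped here from the existing AND and XOR, whereas a genuinely minimal system would supply it as its one primitive:

: NAND ( a b -- c ) AND -1 XOR ; ( bitwise NAND: AND, then invert every bit )
: NOT' ( a -- b ) DUP NAND ; ( x NAND x gives NOT x )
: AND' ( a b -- c ) NAND NOT' ; ( NOT of NAND gives AND back )
: OR' ( a b -- c ) NOT' SWAP NOT' NAND ; ( De Morgan: NOT a NAND NOT b gives OR )

XOR and all the other bitwise functions follow in the same way.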

It goes further than that though. I remember noticing that ALGOL 60 lacks input/output facilities in its standard, which made me wonder how the heck it was supposed to be used. However, it turns out that if you are sufficiently strict with yourself you can absolutely minimise I/O and do everything inside the compiler except for some teensy little bit of interaction with the user, and this instinct, if you follow it, is akin to functional programming, a much later idea which enables you to separate the gubbins from how it looks to the user. And there are purely functional languages out there, and at this point I should probably try to express what I mean.

From Metaphysics To Haskell

Functional programming does something rather familiar. Considering the possibility that programming can dispense with numbers as basic features, the emphasis shifts to operators and functions and they become “first-class citizens”. This, weirdly but then again not so weirdly, is exactly what happens in category theory. Haskell is the absolutely paradigmatic functional language, and it’s been said that when you think you’re programming in it, it feels like you’re actually just doing maths. This approach lacks what you’d think would be a crucial feature of operating a computer just as ALGOL 60 can’t actually print or read input, and such things are known as “side-effects” in functional programming. If a function does anything other than take the values, performing an operation on them and returning the result, that’s a side-effect. This makes it easier to formally verify a program, so it’s linked to the mu calculus.

I’ve now mentioned Haskell, which brings me a bit closer to the title of this post, and now I’m going to have to talk about monads. Monads are actually something like three different things, and it’s now occurred to me that if you put an I at the start rather than an M you get “ionad”, which gives me pause, but this is all quite arcane enough. Firstly, Leibniz’s metaphysics prominently features the concept of monads. In 1714, he brought out a ninety-paragraph book called ‘The Monadology’ setting out his beliefs. It wasn’t originally his idea but he developed it more than others. Leibniz’s monads are indivisible units within reality which have no smaller parts and are entirely self-contained, though not physical like atoms. Anything that changes within a monad has to arise within itself – “it has to really want to change”. Since monads don’t interact, there’s an arrangement called “pre-established harmony” whereby things in each monad are destined to coincide appropriately. I mean, I think this is all very silly, and it arises from Descartes and the problem of the interaction of soul and body, but it’s still a useful concept and got adopted into maths, specifically into category theory. In that field it’s notoriously and slightly humorously defined thus: “in concise terms, a monad is a monoid in the category of endofunctors of some fixed category”, and this at least brings us to the functor. A functor is a mapping between categories. Hence two different fields of maths might turn out to have identical relationships between their elements. It’s a little like intersectionality in sociopolitical terms, in that for example racism and sexism are different in detail but are both forms of marginalisation, the difference being that intersectionality is, well, intersectional, meaning that different kinds of oppression do interact, so it isn’t quite the same as either a monad or a functor. Finally, in Haskell a monad is – er. Okay, well at this point I don’t really know what a monad is in Haskell, but the general idea behind Haskell was originally that it was safe and also useless, because you could never get anything into or out of a program written in it. This isn’t entirely true, because it does do work in a thermodynamic sense: if you take a computer which is switched on but doing nothing and run a Haskell program on it, the machine does get at least slightly warmer. That is, it does stuff to the data already inside it which you can never find out about, but it’s entirely self-contained and does its own thing. So that’s all very nice for it, but rather frustrating, and just now I don’t know how to proceed with this exactly, except that I can recognise that the kind of discipline one places oneself under by not knowing how one is going to get anything onto the screen, out of the speakers or off the keyboard, trackball, mouse or joystick has the potential of making one’s programming extremely pure, if that’s the word: operating in an extremely abstract manner.
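For completeness, the definition that joke compresses can be spelt out, and this is the standard category theory textbook version rather than anything specific to Haskell. A monad on a category C is an endofunctor T : C → C together with two natural transformations, a unit η : Id ⇒ T and a multiplication μ : T∘T ⇒ T, satisfying the monoid-like laws

μ ∘ Tμ = μ ∘ μT and μ ∘ Tη = μ ∘ ηT = id

which say that flattening nested layers of T is associative and that the unit layer flattens away harmlessly.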

Home Computing In The 1960s

I do, however, know how to proceed with what I was thinking about earlier. There is some tiny vocabulary of FORTH, perhaps involving a manner of using a language which defines numbers in the terms outlined above, which would be simple enough to run on a very simple computer, and this is where things get theoretical, because according to Alan Turing any general-purpose computer, no matter how simple, can do anything any other computer can do, given enough time and resources. This is the principle of the universal Turing machine. Moreover, the Turing machine is exactly equivalent in power to a language known as the Lambda Calculus, in which everything, numbers included, is built out of functions.
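The Lambda Calculus also makes the earlier idea of numbers-from-no-numbers vivid. These are the standard Church numerals, quoted here as an illustration:

0 := λf.λx.x
1 := λf.λx.f x
2 := λf.λx.f (f x)
succ := λn.λf.λx.f (n f x)
plus := λm.λn.λf.λx.m f (n f x)

A numeral n is simply “apply a function n times”, and arithmetic falls out of composition, with no digits anywhere in sight.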

Underneath the user interface of the Jupiter Ace operates the Z80A microprocessor. This has 694 instructions, which is of course quite a bit more than the 140 words of the Jupiter Ace’s primitive vocabulary. Other processors have fewer instructions, but all are “Turing-complete”, meaning that given enough time and memory they can solve any computing problem. In theory a ZX81 could run ChatGPT, just very, very, very slowly and with a very big RAM pack. So the question arises of how far down you can strip a processor before it stops working, and this is the nub of where I’m going, because actually you can do it with a single instruction, and there are even choices as to which instruction’s best.

The easiest one to conceive of is “subtract and branch if negative”. Each instruction in such a machine has two operands in memory. The first is the address of a number, which the machine subtracts from the number it already has in mind, its accumulator. If the result turns out to be negative, it jumps to the address given by the second operand; otherwise it just moves on to the next instruction in memory and repeats the operation. It would also save space on the chip if the machine’s own values were stored in memory rather than on the chip itself, so I propose that the program counter and accumulator, i.e. where its working data are kept, live in main memory too.

Moreover, I’ve been talking about a single instruction but in fact that actual instruction can be implied. It doesn’t need to exist explicitly in the object code of the memory. Instead it can be assumed and the processor will do what that instruction demands anyway, so in a way this is a zero instruction set CPU.
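Here’s a sketch of that machine as a handful of FORTH words, in generic FORTH-79 style; I haven’t checked it against the Ace’s exact vocabulary, and the word names and the halt-on-negative-address convention are my own inventions:

CREATE MEM 64 ALLOT ( 32 two-byte cells of main memory )
: CELL@ ( i -- n ) DUP + MEM + @ ; ( fetch cell i )
: CELL! ( n i -- ) DUP + MEM + ! ; ( store into cell i )
: STEP ( p -- p' ) ( one instruction: a pair of cells, operand address A then jump target J )
0 CELL@ OVER CELL@ CELL@ - ( subtract mem[A] from the accumulator, kept in cell 0 )
DUP 0 CELL! ( write the result back to the accumulator )
0 < IF 1+ CELL@ ELSE 2+ THEN ; ( negative: jump to J; otherwise fall through )
: RUN ( p -- ) BEGIN DUP 0 < 0= WHILE STEP REPEAT DROP ; ( a jump to address -1 halts )

Keeping the accumulator in cell 0 of MEM also honours the idea above of storing the machine’s registers in main memory.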

What this very simple computer does is run a program that emulates a somewhat more complex machine which runs the stripped down FORTH natively. This is where it gets interesting, because the very earliest microprocessors, the 4004 and 4040, needed more transistors than this machine would, and it would’ve been entirely feasible to put this on a single silicon wafer in 1970. This is a microcomputer like those found in the early ’80s for which the technology and concepts existed before the Beatles split up.

This is of course a bit of a mind game, though not an entirely useless one. What I’ve basically discovered is that I already have a lot of what I need to know to do this task, but it’s on a level which is hard to apply to the problem in hand. But it is there. This is the way in.

Is Revelation A Source Of Knowledge?

This is not about the Book of Revelation, though as I typed it I realised it sounded like I was about to do some exegesis on the last book of the Bible. No, it means revelation in the sense of an experience of divine origin. The other thing is, this is something which I’ve been trying to sort out in my own mind for about fifteen years.

This may actually be quite a short post as it merely aims to pose a question, not to answer it.

I’ll start with a popular analytic definition of knowledge as justified true belief. A stronger re-statement of this is that knowledge is belief which cannot rationally be doubted. There seem to be two sources of knowledge meeting this standard. One is direct experience. That is, although one might be dreaming, one cannot deny that one is currently experiencing a particular sensory quality when it’s happening. These are known as qualia: qualities or properties as experienced or perceived by a person. The singular is “quale”. Although the ringing in one’s ears may not reflect an actual sound and the odour of burning may be the result of an imminent stroke, the fact remains that one does have the relevant experience. This is not in doubt and cannot in fact be doubted rationally.

The other source of knowledge is logic and mathematics, or at least it seems to be. For instance, 2+3=5. This can be known. It can also be known that if it’s raining then it’s raining. One might also go on to claim that two parallel lines never meet by definition, but this is where a possible flaw in this source of certainty emerges, because it famously turned out not to be so. Euclid’s Fifth Postulate, which asserts what amounts to this, is oddly wordy and unwieldy, and centuries of failed attempts to derive it from the other axioms eventually revealed that the parallel claim was not a logical truth but an assumption resting on observation. It further turned out that in actual physical space, “parallel” lines don’t always stay the same distance apart and can in fact converge and meet over enormous distances. Likewise, logic’s reliance on bivalent truth values may be a similar flaw, as two values may not be enough. There might be meaninglessness, for example, or tense-based truth: something might be true now but false in the future. All that said, logic and mathematics seem to be a good basis for certainty independent of experience: multivalent logic exists and so does non-Euclidean geometry. Incidentally, it’s worth noting that the number of things which can be known from this source alone is infinite, so it isn’t true that a fairly extreme form of scepticism leaves one with knowledge of almost nothing.

Suppose, though, that you believe in an omnipotent source of reliable knowledge such as God. It doesn’t have to be God, but I am of course a theist myself. If you’re not, this will probably sound highly arcane and theoretical to you, but you could look at it as a thought experiment, or perhaps as something that might apply to some other force acting on consciousness, in which case what I’m about to suggest remains at least logically possible. Anyway, here it comes:

If an omnipotent and omniscient entity exists, that entity would be able to create knowledge in the human mind. Henceforth I’m going to call that entity, theoretical or otherwise, God. Putting it simply, God can do anything, so God can make people know things. That means that God can remove doubt when something is true, and if there is a God, revelation can be a source of knowledge.

However, there’s a caveat here. God doesn’t do everything God can do. When I was a child, I saw a graffito on a fence post saying “I hate you”, and for some reason interpreted it as God’s message to me. Don’t ask me why. I rushed home rather distressed and came into the kitchen, where my mother was listening to a song on cassette called ‘Our God Reigns’. In my perturbed state I heard this as “Our God hates”. I asked her if God hated me and she laughed, replying, “No! God is incapable of hate!”. This didn’t reassure me much because I was aware that the concept of God included omnipotence, meaning that if God so chose, God could indeed hate. This is the prototype of a belief about God I hold today: that God is capable of anything but doesn’t invariably act on that capacity. Hence God can hate but doesn’t, or at least God chooses not to hate humans. Applying this to the matter at hand, it would mean that God might be able to force us to know things but does not choose to do so. Hence we are left with confident belief at most, rather than actual knowledge in the sense of God providing us with anything it’s rationally impossible to doubt.

To me, it seems quite invasive and controlling for God to cause this to happen in one’s consciousness. It seems to violate the principle of free will. However, it could be that God would respond to one giving consent to bring this about in some way. “God I believe: help my unbelief.” Would it happen then? Prayers are not always answered the way one might expect. It’s undoubtedly also true that omnipotence means God could create a feeling of complete confidence in something which isn’t so, which is not knowledge.

I think that’s the issue stated as clearly as I can, but there’s another approach to this based on the general use of language. In many cases, if we were to insist on exact meanings for words, they’d end up not referring to anything. Nothing physical is perfectly spherical, perfectly flat or perfectly smooth. Hence if I were to say something like “Here is a smooth one metre sphere resting on the flat upper face of a two metre cube”, it would fail to refer to any real situation because the “sphere” wouldn’t be perfectly spherical, exactly a metre in diameter or perfectly smooth, and it wouldn’t be resting on a perfectly flat perfect cube exactly two metres on an edge. Nonetheless I might seem to have referred to a situation correctly and usefully, and to be that nitpicky about language and reference is plainly silly. Now for the situation with God causing me to know something. Maybe my standard of what constitutes knowledge is too high with justified true belief. Maybe knowledge is just belief that is near enough to certainty that it would make no odds. Otherwise we’d be stuck with a concept of knowledge useless for a wide variety of practical situations.

So that is basically the question I’m asking and a few considerations related to it. It’s also something I asked a few times on Yahoo Answers of all places in the vain hope of getting a sensible answer. All I got in the long run was some legalistic moderator saying I shouldn’t ask the same question more than once, even though I asked it several years after failing to get a helpful answer. Ah well.

Lateral Thinking


I have to admit I’m somewhat out of my depth on this one, although a kind of family and cultural osmosis has led to considerable familiarity with the movement in the past. In personal terms, I can relate to the topic in two ways. One is to think of myself as someone who is only able to think laterally. The other, which I haven’t been able to understand, was a comment by a client who said I never think laterally, but always vertically. I don’t know what to make of this apparent contradiction.

Edward de Bono died on 9th June 2021. That pattern recognition device went the way of all flesh, and flesh it was – he was often keen to emphasise that the mind was no computer, although I do wonder whether he included quantum computers in this. Or rather, did his concept of a computer as a vertical thinking machine, as it were, still apply to quantum computers?

This is not going to be a complete survey or review of De Bono’s work. He published seven dozen and one books and also had other rôles, so it’s unlikely I can do justice to him today; instead I’ve decided to focus on two aspects of his earlier work on lateral thinking. Before I do that, I want to bring something to your attention about this. De Bono for me is not just someone who is “out there”, because my father was very interested in him in the late ’60s and into the ’70s, when I was born and through my childhood, and he does seem to have applied some of his principles and thought to my upbringing, so it’s quite likely that the way I approach things now is related to that way of thinking. I remember some exercises I did at my father’s behest on the matter. In his case, he was attempting to apply it to his paid work in operational research and management, where it seems to have been quite popular, but as De Bono himself says, it shouldn’t be learned through its application because that places a restrictive filter on it, but should be considered a subject in its own right. If my thinking is linked to lateral thinking in this way, it’s also likely that I can’t perceive that it is.

Lateral thinking is contrasted with vertical thinking, and he is keen to emphasise that the former is not to be considered better than the latter. Both are appropriate but apply in different circumstances. Vertical thinking is less creative and proceeds from given unquestioned premises step by step in a manner which is preferably not open to flexibility. This is often necessary, and is the kind of thinking we probably come across most of the time in technically-oriented walks of life such as mathematics, logic, computing and possibly science. I hesitate to commit myself entirely to this idea though, because whenever human beings are involved, creativity has a rôle, and this has always been so. De Bono would relate lateral thinking to insight, creativity and humour, and he almost has a theory of humour, but I’ll come to that. In vertical thinking, the only available method for changing ideas is conflict. Either a new idea is introduced and kind of enters into battle with the old idea in someone’s head, which it either wins or loses, or new information confronts one and conflicts with the old, leading to its hopefully dispassionate acceptance or rejection, which is said to be how science works. Thomas Kuhn would point out at this point that the war of ideas in science is heavily influenced by the career positions and choices of scientists and can’t be considered as occurring in an abstract realm where a new hypothesis or theory is mechanically accepted or rejected, but this is an idealised way of looking at science and I think we can probably agree that it’s how it should work.

An example of how it might work differently, and I don’t know if I can dignify this with the label of “lateral thinking” but here it is anyway, is my approach to the composition of Saturn’s rings. In the early 1970s, no space probe had been sent past the asteroid belt and there was conflict between astronomers who believed the particles making up the rings were icy and those who thought they were rocky. I chose to conclude that they were ice-covered rock. It turned out they were mainly made of frozen water, but this is probably an early example of lateral thinking and I know I applied it elsewhere. Bear in mind that this was a six year old, so it isn’t going to have the sophistication of an adult professional astronomer. Bear in mind also that I’m not commenting here on whether it’s right or wrong, which is another feature of lateral thinking.

Conflict between ideas only works where objective evaluation is possible. Very often, in vertical thinking new information is examined through the filter of preëxisting information and structures, which can cause the old idea to become more entrenched. I personally think the idea of non-baryonic dark matter is a good example of this. Another example might be found in religious fundamentalism, as with sexism and homophobia, where rather than attempting to moderate the prejudice in the light of new attitudes and even scientific research, people just dig in deeper, sometimes to the extent that it seems, at least to an outsider, that some churches are primarily concerned with hatred.

A good way of changing ideas is to rearrange the available information by use of insight. As recognised patterns are used, they become more firmly established. This reminds me of a friend who became delusional, or rather a friend whose delusions were unusual and began to affect her life adversely, and it seemed to me that one element in their reinforcement was that backtracking would involve acknowledging that she was wrong and that she’d used a lot of time and energy in maintaining them, which had come to have major adverse consequences for her life. Nonetheless there’s a need to attempt to deal with the manifold, the mass of incoming sense data, and the main way of doing it seems to be to convert established patterns into a kind of code for dealing with the world. The mitzvot of the Torah would seem to be one example, and vocabulary is another, and to me this raises the question of how much learning is really linguistic rather than some other kind. All of these are filters which leave out a lot of information, of necessity, but it’s possible that this information, were it acknowledged, would end up forming a new pattern not noticed before.

Crucially, De Bono tends to deprecate the notion of the mind as computer, or any kind of machine (although this becomes more contentious when one considers the possibility of what can be simulated). Rather, the mind is an environment rather like a landscape in certain ways. Now I’m conscious that I’ve already used the metaphor of a landscape to describe neurodiversity, and wish to dispel the notion that there’s a connection here. The mind for him is a specialised environment which allows information to organise itself into patterns. This reminded me rather of Gestalt psychology, which rejected empiricism and structuralism and is largely based on the idea that the mind tends to impose higher order phenomena such as movement and patterns on lower order sense impressions. I would call these higher order phenomena supervenient. Since Gestalt psychology now largely survives as therapy, this also suggests that if lateral thinking is helpful, it too could be used as a form of therapy, where people are trying to break out of maladaptive patterns in their emotional lives. In fact, right now I see lateral thinking as particularly useful in this area.

Restructuring is hard because existing structures grab the attention. Nonetheless there are times when restructuring occurs spontaneously in the human mind, and de Bono mentions three: insight, humour and lateral thinking. I would perhaps add revelation and epiphany to those. I once asked a non-religious psychologist friend of mine if he had had anything corresponding to religious experience and he mentioned insight as being somewhat akin. The experience of insight, in fact, was so difficult for thinkers to explain in the European Middle Ages that they posited the idea that God illuminated the contents of the mind. Humour constitutes a brief and reversible restructuring, which I found interesting, but couldn’t tell if he was proposing a complete theory of humour or not. Insight, on the other hand, is a long-term restructuring, or rather the beginning of one. De Bono appears to offer something like a definition of lateral thinking at this point, as “restructuring, escape and the provocation of new patterns”, and as such this reminded me rather of the somewhat later but also highly seminal ‘Gödel, Escher, Bach – An Eternal Golden Braid’ by Douglas Hofstadter.

Lateral thinking is in a way an attempt to generate creativity consciously. However, in the normal case the creative process itself may be hidden, often from the creator themselves. I’m reminded somewhat of Dalí and his paranoiac critical method and of the suggestion that one overcome writer’s block (not a problem for me so am I naturally a lateral thinker?) by cutting up and rearranging text, which is almost the same thing as one of the exercises he proposes. Lateral thinking generates its own direction by placing ideas next to each other as a form of progress, whereas vertical thinking is led by principles and is passive. Vertical thinking is also constrained by the choice of premises. It also tends to create sharp divisions and uses extreme polarisation, and this is particularly interesting since these may be the major problems with today’s society and were far less severe in the late ’60s. Is there a way of applying lateral thinking to this issue? One of its functions is to temper the arrogance of rigid conclusions. However, as he says, de Bono is not fundamentally opposed to vertical thinking, and believes that lateral thinking can support and help it in the long run. “You cannot dig a hole in a different place by digging the same hole deeper”, as he says, but digging a hole for the purposes of this metaphor is still a vertical process, so you think laterally to transport yourself to a better location and might then proceed to use vertical thought. This mode of thinking is not new either. There are also people who naturally gravitate towards it, and this is where it gets personal again. I would certainly say that many people on the Halfbakery are constitutionally lateral thinkers, and would include myself in that number, but as I’ve said, one of my clients has said that I always think vertically and am incapable of thinking laterally. I’m not sure what this means. It clearly is how he perceives me, but why is it so much at odds with my self-image, which is the opposite?

De Bono wrote another book called ‘The Mechanism Of Mind’, to which he makes extensive reference. I haven’t actually read it, but it again uses the metaphor of a landscape to describe the mind. A flat limestone plain might gradually develop watercourses and channels as it gets rained upon, and these will eventually cut permanent ponds and lakes, and also streams and rivers. This is the memory of the land. It may also be influenced by the differing composition of that land, such as granite as opposed to limestone. If there are instinctive schemata applied by the mind to the world, they might be seen as similar to the varying composition of the land, and the entrenchment of memory and learning is akin to the erosion and formation of bodies of water of particular forms. The land remembers where the rain and snow fall. Likewise, the mind remembers things. I like this metaphor because it’s very un-computerised.

Even so, he sometimes seems to have a rather IT-oriented approach to thinking. For instance, in a later book he introduced a series of two-number codes to sum up entire phrases. The predecessor of this idea is also present in his early work, where he proposes that communication can be abbreviated into trigger words. This is not “trigger” in today’s sense, where it refers to features which may cause anxiety to certain groups of people or to people who have suffered particular traumatic experiences, but more like words which trigger a series of associations, like a computer subroutine. It seems ironic that this very un-computer-like device, the mind, can, according to de Bono, also undergo something rather akin to programming in this way, although there’s no imperative element, so maybe it’s object-oriented.

At the top of this post I described him as a “pattern recognition device”. This is more or less how he sees the mind: as a pattern-recognition system. On his account, most or all of the patterns the mind comes to recognise are not built in, although I’m not so sure about that. For instance, our sense of hearing is attuned in development to recognise voices, and we tend to see faces everywhere even when they’re absent, such as the Badlands Guardian and the faces on Mars. The cognitive psychological view that the brain consists substantially of modules would also tend to contradict this, although it’s conceivable that modules could arise from a non-modular infant brain through learning. This is of course the nature-nurture debate, or in epistemological terms rationalism vs empiricism. In any case, this pattern-making tendency allows the mind to communicate, or perhaps a better word is “interact”, with its external environment. The patterns are always artificial, which seems to mean that de Bono doesn’t believe in natural kinds, i.e. types of things which exist objectively. He goes on to say that in a sense the mind is a mistake-making system, in that it mistakes one thing for another. Although an obvious example is our tendency to imagine faces in inanimate objects, it applies more broadly in that one must reject some of the features of an item one apprehends in order to conceive of it as like another. One has to generalise. Those patterns which promote survival are then selected, which is why one doesn’t end up poisoning oneself. For example, one may have noticed a pattern that clear colourless liquid tends to quench thirst, but if one is surrounded by vessels containing water, acetone, turpentine and isopropanol one might wish to modify the pattern to include odourlessness, although I suspect even that doesn’t eliminate everything.

A further claim is that the mind doesn’t actively sort information but information sorts itself out. I’m not sure what he means by this, but I think the idea is that the mind constitutes a hospitable environment for the sorting of information and is a self-organising system. Such things can easily be seen in the living world, such as with shoals of fish all turning at once or ants’ nests working apparently purposefully when the individual worker ants each have only a very limited range of responses, and the brain is a similar system, with each neurone being little more than a logic gate with a modest ability to store information from previous inputs. He then made a claim about attention span, which didn’t seem to use the term in the way it’s generally understood now, and I was unfortunately lost, I hope temporarily, on this point.

A much clearer feature is that the order in which information is encountered changes the pattern perceived, and can make it harder to reorder the information later. For instance, if one is playing a game of Hangman and chooses letters in one sequence, a word might quickly become obvious, but if one had started with a different set it might be considerably less so. “-A-A-A-A-A-A” probably suggests “taramasalata” to a lot of people, but “T---------T-” probably doesn’t. If the same information is deliberately presented in a different order, it may suggest a different solution, or a solution where none was apparent before. Jokes often rely on such things, though there is always a switch back to a serious mode, or there ought to be. Unfortunately, this doesn’t always happen. I once told someone the pyramids were supposed to have been built with the points at the bottom and the base at the top because the builders got the plans upside down, then discovered several years later that she had taken me seriously until she started an archæology degree. In another example, someone learnt the wrong physical examination technique due to a joke by their tutor, which could’ve had quite serious consequences. Poe’s law also applies here – one often can’t tell whether people are joking or being satirical on the internet, and shifts in pattern recognition can occur as a result, but not necessarily positive ones.
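To make the Hangman point concrete, here’s a toy sketch in Python; the word list is my own invention, and a real game would of course use a full dictionary:

```python
# Toy sketch: the same word, two different reveal orders, different
# degrees of ambiguity. '-' marks a letter not yet revealed.
words = ["taramasalata", "tranquillity", "tessellation"]  # made-up list

def matches(pattern, word):
    if len(pattern) != len(word):
        return False
    revealed = set(pattern) - {"-"}
    for p, w in zip(pattern, word):
        if p == "-" and w in revealed:
            return False        # a revealed letter can't still be hidden
        if p != "-" and p != w:
            return False
    return True

print([w for w in words if matches("-a-a-a-a-a-a", w)])
# ['taramasalata'] -- the As give it away at once
print([w for w in words if matches("t---------t-", w)])
# ['taramasalata', 'tranquillity'] -- the Ts leave it ambiguous
```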

Pattern recognition speeds up identification and reaction, but also has a number of disadvantages. Patterns tend to fixate and cannot easily be altered, and new patterns can’t be as easily perceived. Change is difficult – to use a psychotherapy cliché, you “have to want to change”. Conversely there is also the paradox of change, where it takes place when one pays less attention to it. There are also butterfly effects, although these can be positive. Anything resembling a standard pattern will be perceived as such. For instance, it turns out to be notoriously easy to be misdiagnosed with schizophrenia, and psychological researchers – famously in the Rosenhan experiment – have exploited this to get admitted to mental hospitals even though the symptoms they described didn’t fit the diagnosis. Established patterns grow. I would see mission creep as a manifestation of this. When the only tool you have is a hammer, everything looks like a nail. Patterns also shift suddenly rather than smoothly: the rings of Saturn are either made of ice or rock but “can’t” be made of both.

Contrasts between vertical and lateral thinking are then outlined. Vertical thinking is selective, lateral generative. In vertical thinking, some of what one perceives or might perceive without preconceptions has to be rejected. Not so with lateral thinking. Vertical thinking is about being right or wrong, but lateral thinking is about richness of thought content. The imagination is more engaged. Vertical thinking proceeds along a path it has discovered or arrived at, whereas lateral thinking attempts to find many paths. Even when a solution has been found, lateral thinking can continue to look for more options, and once again I’m reminded of the Halfbakery. Vertical thinking is analytical, lateral provocative. Vertical thinking has to be right at every step to be valid, but lateral thinking recognises that being wrong may lead to a better solution in the end. Lateral thinking explores the less likely paths. It hears hooves and imagines zebras rather than horses.

Po

At this point it became clear that I wasn’t easily going to outline the entire corpus of the guy’s thought, so I’ve decided to focus on the Teletubby at the top of this post, so to speak: “po”. In logic, there is truth and falsehood, “yes” and “no”. There is negation. This involves rejection of the alternative deemed incorrect. De Bono introduces a third option: “po”. I couldn’t help but be reminded of the Greek interjection “ποπο”, which is an expression of surprise or dismay, rather like “yikes”, and this may be where he got it. It also called to mind how the Samoan language negates statements: it turns them into yes/no questions, which seems to me to be a form of etiquette. The negation of “it is raining” (“ua timu”) seems to be “ua timu?” – “is it raining?” – although I may have got that wrong. Whether or not that’s so, the approach taken by such an utterance is less arrogant and more polite than simply saying “no”, because the word “no” often has power, as was shown by Danny Wallace’s book ‘Yes Man’, and although it needn’t be, that power can be quite aggressive. After all, it involves rejection. “Po” is to lateral thinking as “no” is to vertical thinking. He describes it as a “laxative” rather than a “negative”. It might seem at first that it allows for a third truth value, but I don’t think this is the intention, and it doesn’t fit neatly into multivalent logic simply because it doesn’t fit into logic at all. It’s a laxative in the sense that it can get thought moving rather than stop it. It withholds judgement. It can also be used as several parts of speech.

“Po” can be a conjunction. De Bono gives the example of “computers po omelettes”, which places two apparently unrelated things together to allow them or their associations to interact. That conjunction might bring to mind a recipe app which takes as its input the contents of your larder or fridge and gives possible meal ideas as output. It can also introduce a random word. Here the example is “po raisin”. The concept of a raisin is introduced to a discussion to stimulate ideas, perhaps of data compression by “dehydration”, e.g. reversibly removing a major but unimportant constituent of a picture which can be added back in later, or which leaves behind more concentrated information that can be used differently. For instance, an image of a mainly black night sky could have the completely black areas replaced by information telling a viewer or program that certain polygons in the image are devoid of content, and consequently asterisms, constellations or star clusters might become more evident. “Po” can be used to signal that what follows doesn’t in fact “follow”, saving time and confusion by admitting that a particular point was not arrived at by a conscious train of thought, thereby encouraging serendipity. It lets someone be wrong without judgement, because by being wrong one may find a better way of doing something than how it’s always been done. It can protect an idea from judgement: it’s short for “this is probably not true but let’s just pursue it and see where it goes.” It can also alter the problem to see if there’s a solution. Dividing eleven items fairly between three people can be achieved by adding an item of your own to make the division come out even, then negotiating with the person who ends up with it to have it back or share it on a regular basis, and spacing four trees an equal distance apart can be managed by planting one of them on a hillock or in a depression in the middle of the other three, thereby forming a tetrahedron.
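As a concrete sketch of the “dehydration” idea – my own minimal made-up scheme, nothing de Bono proposed – the empty stretches of a mostly-black sky can be squeezed out losslessly and added back in later:

```python
# "Dehydrate" a row of pixels by replacing runs of empty (zero) pixels
# with a count, keeping the stars; "rehydrate" restores it exactly.
def dehydrate(row):
    out, run = [], 0
    for pixel in row:
        if pixel == 0:
            run += 1
        else:
            if run:
                out.append(("dark", run))   # "this stretch is empty"
                run = 0
            out.append(("star", pixel))
    if run:
        out.append(("dark", run))
    return out

def rehydrate(encoded):
    row = []
    for kind, value in encoded:
        row.extend([0] * value if kind == "dark" else [value])
    return row

sky = [0, 0, 0, 9, 0, 0, 0, 0, 7, 0]
packed = dehydrate(sky)
# [('dark', 3), ('star', 9), ('dark', 4), ('star', 7), ('dark', 1)]
assert rehydrate(packed) == sky             # the water goes back in
```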

“Po” has other functions. It can challenge the arrogance of established patterns. It is not po-faced. It might do the same with their validity. It can liberate information to allow it to come together and give new patterns. It can rescue information from pigeonholes. There’s a real-life example of this for me, because my surname is unusually short and begins with a rare letter, so my pigeonhole for internal mail used to fill up with rejected missives intended for other people, because the sorters assumed nobody’s name began with that letter, and I often had to rescue information from that pigeonhole due to the assumptions of others. Having experienced one possible alternative arrangement, it can encourage one to search for more. It is never judgmental. It can sometimes be translated as “that may be the best or only way but let’s look for others”. Hence it can have unintended consequences, and although those may be disruptive, sometimes they’re precisely what one needs.

All of this leads me to wonder what a “po man” would be like. Danny Wallace’s book ‘Yes Man’ tells of his experiment with his life when he became persuaded that he had got into the habit of saying “no” too often. He therefore committed himself to saying “yes” to everything and everyone for a period of time to see what might happen. This included questions like “are you looking at my girlfriend?”, which had interesting consequences. It was later adapted into the Jim Carrey film ‘Yes Man’, although there it was fictionalised – I don’t know how accurate the book is either, of course. It is of course easier for a man to pursue this than a woman – I just want to drop that in. Stopping saying “no” is giving up power, and you might have to start from a position of greater power to do that and have it not devastate your life. The question therefore arises: what would the life of a “po person” be like? What would it be like if every response you gave to a yes/no question aimed to juxtapose apparently unrelated things or opened up possibilities? I don’t know the answer, and I would also want any answer to explore other possible meanings of the word.

Po punctures pomposity. It reminds us that apparently inevitable information arrangements may in fact be arbitrary. It counteracts “no” and it heals divisions, and we really need that today. It diverts from the obvious, may provide a tension-relieving laugh or smile like the use of humour to defuse a tense situation, and it prevents overreaction and the swing towards polar opposites.

It occurs to me that “po” cannot work on a digital computer, whereas binary truth values can, although I’m not sure that’s true of computers which are not digital or binary. I think this might indicate that the way we use our minds is almost a deliberate imitation of how we imagine computers work. Maybe we’re making ourselves in their image? What if we’re more like quantum computers or analogue ones?

Criticism

To an extent, it’s probably healthy to treat criticism of de Bono’s ideas with suspicion, as he seems to be something of an outsider and may not have many people supporting his positions within academia. There is also a heady sense of power in judgement and rejection. Even so, it has been claimed that there’s little evidence to support Edward de Bono’s claims. Their style, if not their content, brings Neuro-Linguistic Programming to mind, and there is said to be sparse evidence that his methods are broadly successful. Early studies showed benefits to children with learning difficulties, but the methods were also tried with Australian Aboriginal children and didn’t help them beyond the area of creative thinking. This seems like a strange criticism to me, since creative thinking would seem to be his main focus, and it would be difficult to find an area which wouldn’t sometimes benefit from improved creativity. It’s also been suggested that suspending judgement would slow down or reverse progress.

De Bono didn’t use experiment to produce his body of thought, and he relies heavily on anecdotal evidence. However, sometimes that’s exactly what’s needed. As well as pattern recognition devices, we are a species telling stories to ourselves and needing to hear them. Even if lateral thinking is propped up by myth, it still benefits people by enabling us to believe in ourselves more, and it seems worthwhile to protect people from corrosive rejection and criticism. We need permission to fail and be wrong without that ruining our reputations or lives.

He may also place too much emphasis on an individual’s “aha” experience rather than the communal testing of the idea that follows. That doesn’t always matter though, because sometimes the detail of the content matters less than the fact of its existence. Art is art. A particular mural may evoke one set of feelings, but they’re no less or more valid or valuable than those another might have kindled, and a particular piece of music can still be “our tune” as much as another one can be, but any of these could become more memorable or thought-provoking because they were arrived at through lateral thinking. The problem may come when they make contact with a particular kind of reality.

He’s also not as much of a pioneer as he makes himself out to be, at least in terms of addressing the question of creative thinking. One earlier example would be William James, about whom I’m afraid I know practically nothing.

Last Words

In conclusion, I would say that I do currently find the idea of lateral thinking interesting and helpful, particularly as a way of inventing new means of relating to and approaching my thoughts and feelings, although it may also work in other areas. Even if it’s a myth, myths are important and we need them in our lives, and there are many areas where it doesn’t matter whether a provocative idea is true or false, and where entertaining it may have positive real-world consequences. So I think Edward de Bono made a valuable contribution to the world, and I wonder if the nay-sayers would benefit from the po-sayers.

Ethical Intuition And Homophobia

Back in the ’70s, when I was a child, my mother used to read the Bible to me. This was how I discovered that the written Torah appeared to condemn male homosexual acts. There are other takes on this apparently, but they’ve always seemed to me to go against the grain – perverse interpretations of what was pretty clearly extreme homophobia. At the time, though, I didn’t have an issue with it, and it seemed entirely logical that if sex was for reproduction, any form of sex which couldn’t lead to pregnancy was morally wrong. This was the simplistic understanding of a nine-year-old.

When I was twelve, my English teacher compared homophobia to racism and asked us, if we were opposed to racism, why we would be homophobic. It was the same kind of issue as far as he was concerned. This seemed an eminently consistent and sensible view to me, partly because at the time I considered racism to be a particularly terrible evil. One influence on my acceptance of this opinion was probably my own queerness, although I had yet to admit that to anyone. Certainly my White friend, who was in the same English class as me and was similarly passionately anti-racist, persisted in his homophobia for as long as I was aware of his opinions on the matter, which would’ve been another few years. In my case, I remember another pupil calling me “gay” in September 1981 and replying to him that it was terrible that he even considered it an insult. He too was still openly and strongly homophobic four years later. The one person who was aware of my sexuality and identity issues, which I used to call my “Problem”, once said of my opposition to homophobia that homosexuality was “not your Problem,” so clearly both she and I made a connection between the two.

But this post is not just about queerness and homophobia.

A few years later I went to university and became a Christian. Before making a commitment, I expressed concern that I would have lots of questions about the issues the Christian faith raised for me, which were multiple. I was assured that this would not be a problem and that they encouraged questions. So I converted, and after a few months began to ask my questions, which were not all about homosexuality, though that was one major concern for me. So I brought it up, and the replies were varied. One was that it might currently be “fashionable” to tolerate homosexual activity but that God’s standards were unchanging and humans were not designed for that purpose. This was from a medical student, by the way. Another homophobic Christian said, and this was more sympathetic, that he couldn’t imagine how bad it would be to find out you were gay and felt very sorry for them, but he was nonetheless still homophobic. But to me, this was just not an option, because by that point it seemed intuitively obvious that homosexual activity was not wrong and that homophobia was. As I’ve expressed it more recently: if the Bible told you that 2+2=5, you would either reject that part of the Bible (and possibly the whole thing) or try to work out why it seemed to be saying that, because it would clearly have to mean something else. Since the Bible at least appears to condemn homosexual acts, and that claim is equally absurd, one could be expected to feel a similar motivation to resolve the problem.

This equation between the idea that 2+2=5 and the idea that homosexual acts are always sinful attempts, I think, to draw a parallel between the certainties of mathematics and the hope that ethics can be equally certain. There are positions in both ethics and mathematics which are called “intuitionism”. In maths, intuitionism is the position that maths has no external basis and is simply a creation of the mind. This is a more recent usage of the term in the philosophy of mathematics, preceded by Kant’s and his successors’ belief that intuition reveals the principles of maths as true a priori – knowable independently of observation. This seems counterintuitive (ha!) because to us the Cosmos seems to run on maths and logic, and it’s also problematic for an externalist such as myself, because we see concepts and ideas as external to the mind and having their own independent existence. It doesn’t seem to me that intuitionism and externalism could both be true, but since intuitionism can involve denial of the law of excluded middle (either P or not-P), maybe they could be. But at that point logic seems to have become what Arthur Norman Prior once called a “runabout inference ticket”, where you can just conclude what you like from any premises. It doesn’t seem to be ultimately useful. This hope could, however, be psychoanalysed as a need for a feeling of certainty and solidity of foundations. It may not be mere logic.

Geometry is a notorious example of something which used to seem purely logical and valid without the need for observation to verify it. Euclidean geometry generally needs to be based on axioms which are intuitively true, such as “a straight line segment is the shortest distance between two points” and “a straight line segment can be extended infinitely as a straight line”. However, the fifth postulate is difficult to state simply. It can be stated thus: “If two lines are drawn which intersect a third in such a way that the sum of the inner angles on one side is less than two right angles, then the two lines inevitably must intersect each other on that side if extended far enough.” This amounts to the idea that parallel lines never meet, or meet at an infinite distance, and while it certainly seems true, the complexity of stating it rigorously makes it suspicious. In fact, it turns out that the fifth postulate is the result of observation rather than deduction, and other geometries are possible, based on assuming either that parallel lines diverge or that they converge. The former, known as hyperbolic geometry, can be locally true in this Universe and would be most noticeable near the event horizon of a black hole. The latter, known as Riemannian or elliptic geometry, is also physically real: space is positively curved around massive bodies, and it would describe the Universe as a whole if the overall curvature turned out to be positive. Possibly counterintuitive truths which would hold in such a world are, for example, that if you imagined the Earth wrapped in bandages, and you kept wrapping it in ever deeper layers, you would eventually find that you were surrounded by bandages and were inside the ball rather than outside it, and that there is at any one moment a finite maximum distance between two points, after which the direction between them reverses. These facts are easy to see on a spherical surface: our antipodes are at the maximum possible distance from us on the surface of this bandageless orb, after which the direction reverses – go far enough east and you find yourself west of your starting point – and a large enough circle on the Earth’s surface will start to shrink if it “grows” any further.
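A compact, standard way to put the three possibilities is the angle sum of a geodesic triangle in a space of constant curvature K – the Gauss-Bonnet relation, which is textbook differential geometry rather than anything specific to the argument here:

```latex
% Angle sum of a geodesic triangle of area A, constant curvature K:
\alpha + \beta + \gamma = \pi + K A
% Hyperbolic (K < 0): \alpha + \beta + \gamma < \pi  (parallels diverge)
% Euclidean  (K = 0): \alpha + \beta + \gamma = \pi
% Elliptic   (K > 0): \alpha + \beta + \gamma > \pi  (parallels converge)
```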

If it’s possible for that postulate to be cast into doubt, and in fact to turn out to be false, what else in mathematics could be? One possibility is that logic is also like this. For instance, truth and falsehood could simply be poles between which other truth values exist, or there could be truth values situated beyond truth, i.e. the step from falsehood to truth could simply be the first step towards a “supertruth” infinitely more true than mere truth itself. If there’s that much play in both geometry and logic, perhaps all mathematics is merely an intuitionistic game. Even so, we do tend to operate on the principle that maths is set in stone and reliable most of the time.

Ethical intuitionism is in a sense the opposite kind of view to mathematical intuitionism. Formulated in response to the perceived failure of utilitarianism, it later ran into problems of its own, which are thought to have shaped subsequent ethical thought. As I’ve mentioned before, the Utilitarians attempted to prove the utility principle’s desirability by saying that everyone desires to be happy, which is in any case not true, but the argument also suffers from the problem that one needs to prove that the greatest happiness of the greatest number is worthy of being desired, not merely capable of being desired. This is partly a problem with the English language: “desirable” can mean either, and we lack a word – “desirandous”, say – for the former, being stuck with the “-able” at the end of “desir(e)”. Consequently, in Edwardian times the philosopher G E Moore sought to establish ethics on the idea that goodness was a simple, non-natural property which could be intuited by people – “simple” in the sense that it could not be analysed into anything more basic. There is a big problem with this: cultural and interpersonal relativism. The later philosopher Alasdair MacIntyre suggested that this step led to subsequent problems in discussing ethical issues, which were then picked up on and influenced newer theories.

As the twentieth century wore on, logical positivism and behaviourism became important. Both of these attempted to tidy up pesky things like religious language and psychological states by reducing them to what could be observed by the senses. According to MacIntyre, because ethics had come to be discussed in terms of what could be intuited and was considered essentially impossible to analyse further, conversations about right and wrong in academic circles tended to get reduced to mere emotional expressions. This was known as emotivism, and it more or less amounts to ethical scepticism, although there were two versions of it. One actually attempted to reduce expressions of right and wrong to emotional expressions akin to screaming and laughter, just expressed verbally. The other claimed that ethical statements were simply expressions of approval and disapproval, implying exhortations to others to do the same. Later still, prescriptivism emerged, a revival of Kantian ethics which claimed that to say something was good or right meant that it was universalisable (what if everyone did the same?) and entailed an imperative. The problem with this position is well known: it depends on how the action is described. “Everyone needs to eat” could be given as a reason for poor people to shoplift food, but “everyone needs to make a living” is a reason shoplifting might be wrong. Again, we could be reduced to merely emotional arguments. An oddity about this period of what’s called “non-cognitivist ethics” – the view that ethical statements are not statements of fact at all – is that such a position was held by Bertrand Russell, yet he was a strong campaigner on ethical issues such as free love and nuclear disarmament. He himself commented that he couldn’t reconcile the apparent contradiction.

Some theists, not including me incidentally, see ethics as that which is commanded by God – theological voluntarism. There is no right or wrong except what God chooses to tell us to do or forbids us from doing. This crops up, for example, in some Jewish views of the kosher diet. Although explanations have been offered, such as the idea that pork is forbidden to avoid parasites or that the forms of certain species, such as cloven hooves, allow special access to the divine, another explanation is that the rules are completely arbitrary and only exist to ensure that God’s people obey without asking why. Belief in theological voluntarism sometimes leads to the peculiar claim that atheists cannot have a moral compass, when it is in fact a pretty weak form of metaethics. It also gives rise to the moral argument for the existence of God, which is that the awareness of morality as a real thing, as opposed to mere custom with no real basis, means there must be an ultimate moral authority to back it up. I don’t see things this way at all. I see God as merely reporting on what the right thing to do is, from a position of infinite wisdom and knowledge. God might sanction something, for instance, due to positive consequences which we can’t perceive ourselves. In the case of kosher food, for example, it might be that there is a very good reason for it but we cannot understand that reason. I would say that veganism is the “new kosher”, and indeed the “new halal”, so I do use my own reasoning to avoid the negative consequences and associations of deliberately eating animal products. Surprisingly, there are atheist theological voluntarists, who claim that ethics would make sense if there were a God, but there isn’t, so it doesn’t!

It certainly seems that any God would be bound by logic and mathematics, although this isn’t always held to be the case. By the same token, God to my mind would be aware of right and wrong, and this means that there is a fact of the matter about these things, rather than ethics being non-cognitive. Alasdair MacIntyre sought to replace previous metaethical theories with ideas of vice and virtue, but I would reject that on the grounds that it seems to lead to judging people directly as essentially good or evil, which seems intuitively wrong to me. And there’s that concept again: intuition.

The essential problem with the idea of a moral sense is cultural relativism, and similarly, circumstances altering cases. Take the campaign against sex robots. Those who oppose them argue that it’s wrong to consume bodies as goods and that sex robots and sex workers have the same undesirable status: humans (let’s face it, probably men) would be using sex workers, and therefore also sex robots, as means rather than ends. Others disagree, claiming that condemning sex robots is transferring concerns about sexual objectification to actual objects. This is an example of how moral intuition could be questioned. The situation could also be tweaked: what’s the morality of allowing paedophiles to have robot children? These two examples also bring up the issue of “the wisdom of disgust” (more usually called “the wisdom of repugnance”), something which is often evoked to justify homophobia and which might also explain kashrut. Disgust, culturally mediated in this case, is the reason sanitary towels are advertised using blue rather than red liquid. Presumably on another planet where the humanoids all have bright blue blood, red liquid is used in the adverts instead. We have an instinctive abhorrence of excrement, which protects us from danger. A teleological view would say that God has made us disgusted by excrement in order to keep us healthy, and likewise has made people disgusted by homosexual activity, thereby justifying homophobia. I would say this is an excellent reason for rejecting the idea of the wisdom of disgust. Research has apparently shown that right-wing people are more likely to equate disgust and immorality, which means rhetorically it might be more persuasive to appeal to disgust if your interlocutor is right wing. To me, the idea of a strong connection between disgust and ethical judgement is never going to gain any ground, because I used to have a button phobia, and it’s clearly absurd for a person disgusted by one specific feature of the world to expect it to be banned or controlled simply because of that disgust. That’s clearly not a good guide to morals.

To return to the history of my opposition to homophobia as an intuition, it does seem to be informed by some kind of reasoning. I have a kind of tangential stake in it, some might say a direct one, but it’s also influenced by the fact that disgust as a guide to ethics is manifestly absurd to me due to the button phobia, and also by a kind of inductive inference from racism. But it’s also very deep-seated, to the extent that the very fact that fundamentalist Christians tend to be recalcitrantly homophobic is sufficient reason to reject their world view, and it’s disappointing that they don’t themselves perceive things that way.

I have to say that in spite of difficulties with it, I find intuitionism the most appealing metaethical theory. Although the biggest problem with it is that it seems to make it impossible to resolve disagreements about right and wrong, moral codes tend to agree broadly across cultures, even between cultures whose last contact must’ve been in the Palaeolithic, and to me this suggests that there does seem to be something like a moral sense. This is metaphorical: I don’t imagine there is a sensory organ of some kind in the brain which responds to “conscience radiation” or something. However, I do think we have a moral instinct, and it makes sense to have an innate conscience which enables society to hold together and operate without individuals being taken advantage of too much, although sadly this seems to fail very often. There’s also the problem that if you actually do try to extract widespread moral principles from the religious and social codes of the world, many of them are homophobic, sexist and so forth. This is why a deeper set of principles must be used. This was the subject of my first degree dissertation, which wasn’t actually very good. I’m not going into it again here.

An ethical sense would seem to be identical with the conscience and distinct from disgust and charm, both of which are often misleading. For an example away from sexual ethics, disgust could prevent one from treating an illness, performing life-saving surgery or working in sanitation, but all of these are very positive things to do ethically. Conscience has been called “the voice of God”. In a situation where a theist has difficulty with conservative religion because of its homophobia or sexism, their conscience cannot allow them to concede or tolerate that prejudice, and if conscience is the word of God, God would themselves be convicting them to rebel or do something else to act against it.

Although the moral argument for the existence of God doesn’t work for a separate divine being “out there” in the Universe or beyond it, there’s another possible take on this based on Ultimate Concern. The philosopher Paul Tillich manages to separate the issue of theism from religion with this concept, which makes the idea of religion less Westernised, as it allows for non-theistic religions, which of course also exist in the West, for example Spiritualism and the Free Zone. Tillich calls faith “the state of being ultimately concerned”. By this he means whatever one holds sacred. This, I think, is present in most people’s psyches, including those of non-religious people. It needn’t be God. It could be love, altruism, rationality, compassion, perhaps even one’s own ego for narcissists, but it’s just as real for most non-religious people as it is for religious people and theists. For a Quaker, it might be the spark of the divine in us all, and for atheist Quakers there may be no need to alter that. Conscience could be an Ultimate Concern, in other words one’s God, and because this closes off the concept from arguments and questions about whether an external deity exists, it could be quite a good one. It’s even ineffable in some ways, because of the inscrutability of ethical intuition.

It is of course problematic to have a set of inaccessible moral principles, due to the difficulty of seeing them collectively in the same way. Coming back to sexual orientation, though, this is something which can actually be known, because it isn’t so much observed as immediately present to the consciousness, when, for example, one feels attraction to someone of the same gender. One possible response would be to deny it because it clashes with one’s religious values. Clearly this is a fairly common phenomenon, given the large number of people involved in reparative “therapy” who are either openly gay already or admit to it, and the pastors who have been stridently homophobic and again turn out to be gay, but this shouldn’t be taken as the rule for homophobia among the religious. There really are people who struggle with the homophobia of the Abrahamic religions and only very reluctantly concede to it. On the other hand, I used to know a man who said he wished he could be as disgusted by other kinds of sin as he was by what he saw as the sin of homosexually expressed love. There is an internal process going on here. In one situation, one is divided against oneself because one knows oneself to be queer but struggles against it. In the other, which rather self-righteously I would claim for myself, one’s awareness of one’s queerness and its incontrovertible nature leads one to reject any understanding of religion which is homophobic, and to be honest, if it turned out that homophobia were central to any faith, the voice of God, as it were, would surely lead me to reject that faith.

Subtract One And Branch

In case you’re wondering why I’m not talking about the murder of Sarah Everard and the fallout from that, I try to avoid covering gender politics on this blog because I have another blog devoted to that alone, and I also tend to avoid saying anything (see Why I’ve Gone Quiet) because cis women need to be heard more than I do on these issues and I don’t want to shout them down. In fact the other blog is quite infrequent for the same reason. It doesn’t mean I don’t care or don’t consider it important.

You are in a living room, circa 1950. In one corner sits a Bakelite box, lit up from within behind a hessian grille with a design like rays from a rising Sun at the front, and a rectangular panel carrying names like “HILVERSUM” and “LUXEMBOURG”. In the other corner sits another, somewhat similar Bakelite box with a series of lamps at the top and a slot at the bottom, into which cards with rectangular windows punched out of them can be inserted. There is a row of push buttons above it. This is the domestic computer. It didn’t exist, but could it have?

In this post I mentioned that some say the last computer which could be “completely understood” was the BBC Micro, released in 1981, and expressed my doubt that this was true, because apart from memory size, even an upgraded IBM PC would probably be about as sophisticated. However, this got me thinking about a tendency I have towards minimalism in IT, and wondering how far it could be taken and still leave useful hardware, which also brings up the question of what counts as useful.

In this other post, I described a fictional situation where, instead of the Apple ][ being one of the first mass-market microcomputers, Sinclair ended up bringing out the ZX Spectrum six years early, that is, just after its Z80 CPU was released. That isn’t quite what I said: read the post to see what I mean. The point is that the specifications of the two computers are very similar, and if the ULA in the Speccy is realised via discrete logic (smaller and simpler integrated circuits), all the hardware necessary to construct a functional equivalent to it, except for the slightly faster microprocessor, was already available, and if someone had had the idea, they could’ve made one. Then a similar mind game arises: how much further back is it possible to go and still manufacture a reasonably small but workable computer? Could you, for example, even push it back to the valve era?

Apologies for already having said things which sound off-putting and obscurantist. As an antidote to this, I want to explain, just in case you don’t know, how digital computers work. Basically there’s a chip which fetches codes and data from memory. The codes tell the computer what to do with the data, and the chip can make decisions about where to look next based on the results of what it’s done with those data. It’s kind of like a calculator on wheels which can read instructions from the path it’s travelling on.
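That calculator on wheels fits in a few lines of Python. The three-instruction code below is made up purely for illustration – real CPUs differ in every detail – but the fetch-decode-execute shape is the genuine article:

```python
# A toy fetch-decode-execute loop: the "calculator on wheels".
# The instruction code (0 = HALT, 1 = ADD, 2 = JUMP-IF-SMALL) is invented.
memory = [1, 7, 2, 0, 0, 0]    # program and data share one memory
acc, pc, running = 0, 0, True

while running:
    opcode = memory[pc]        # fetch the code...
    operand = memory[pc + 1]   # ...and its operand
    pc += 2
    if opcode == 0:            # HALT
        running = False
    elif opcode == 1:          # ADD the operand to the accumulator
        acc += operand
    elif opcode == 2:          # JUMP back while the total is still small:
        if acc < 50:           # the chip deciding where to look next
            pc = operand

print(acc)                     # 56: it added 7 repeatedly until it passed 50
```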

There are two basic design philosophies taken with microprocessors, which for the sake of brevity I’m going to call Central Processing Units, or CPUs. One is to get the chip to do lots of sophisticated things with a large number of instructions. This is called the CISC approach – Complex Instruction Set Computer. The CPUs in most Windows computers are CISCs. The other is to get it to do just a few things with a small number of instructions, and this is called a RISC – Reduced Instruction Set Computer. Chips in tablets and mobile phones are usually RISCs; in particular they’re probably going to be using one of the CPUs in the ARM series, originally the Acorn RISC Machine. The Pentium and the like are kind of descended from the very early Intel 8080, which had a large number of instructions, but the ARMs are the spiritual successors of the 6502.

The 6502, the CPU used in the Apple ][ and the BBC Micro, and the similar 6510 used in the Commodore 64, were designed with simplicity in mind, and this involved using far fewer transistors than the Z80(A) found in the Spectrum. The latter had 694 instructions, many of which could only be accessed via prefix bytes acting a bit like the shift key on a typewriter. By contrast, not taking addressing modes into consideration, the 6502 only had fifty-six. Processors usually have single or double cells of memory, called registers, used to store numbers to work on or otherwise use internally. The Z80 had nineteen, I think, but the 6502 only had six. To be honest, I was always a lot more comfortable using the Z80 than the 6502, and as you may be aware this was a constant source of debate between nerds in the early ’80s, and to some extent still is. The 6502 always seemed really fiddly to me. However, it was also faster than its competitor, and comprised fewer transistors because of the corners which had been cut. Unlike the Z80, it used pipelining, interpreting the next instruction while executing the previous one, and it needed less time to work out what to do because the list of instructions was much shorter.

Research undertaken in the 1980s counted the proportions of instructions used in software, and it was found, unsurprisingly, that for all processors a small part of the instruction set was used the majority of the time. This is of course the Pareto principle, the 80/20 rule which applies to a wide range of phenomena, such as the fact that eighty percent of the income from my clients used to come from twenty percent of them, and that one herb out of five is used in four out of five prescriptions, and so on. In the case of CPUs, this meant that if the jobs done by the majority of instructions could be performed using the minority of often-used instructions, the others could be dispensed with at the cost of taking up more memory. An artificial example of this is that multiplying six by seven could be achieved by adding seven to itself six times, and therefore that an integer multiply instruction isn’t strictly necessary. Of course, this process of performing six additions could take longer than the multiplication would have in the first place, but choosing the instructions carefully would lead to an optimal set, all of which would be decoded faster due to the smaller variety.
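To make the “dispense with multiply” point concrete, here it is in Python rather than assembly; the function names are mine, and the second version is the shift-and-add routine a 6502 programmer would typically have hand-coded:

```python
# The multiply instruction dispensed with: 6 x 7 as repeated addition,
# which is how a CPU lacking MUL (like the 6502) can get by.
def multiply(a, b):
    total = 0
    for _ in range(b):
        total += a
    return total

# In practice the faster shift-and-add method would be used:
def multiply_shift_add(a, b):
    total = 0
    while b:
        if b & 1:          # if the lowest bit of b is set...
            total += a     # ...add the current shifted a
        a <<= 1            # double a
        b >>= 1            # halve b
    return total

assert multiply(7, 6) == multiply_shift_add(7, 6) == 42
```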

The question therefore arises of the lowest possible number of instructions any CPU could have and still be Turing-complete. A Turing-complete machine is a computer which can do anything the theoretical machine Turing thought of in 1936 could do; that machine consisted of an infinitely long strip of tape and a read-write head whose behaviour could be influenced by the symbol underneath it. It more or less amounts to a machine which, given enough time, can do anything any digital computer could do. The calculator on wheels I mentioned above is effectively a Turing machine. What it means, for example, is that you could take a ZX80, rewire it a bit, give it unlimited storage and have it do anything today’s most expensive and up-to-date PC could do, but of course very, very slowly. But it can’t be just any digital computer-like machine. For instance, a non-programmable calculator can’t do it, nor can a digital watch. The question is, as I’ve said, which instructions would be needed to make this possible.

There used to be a very simple programming language used in schools called CESIL (I created and wrote most of that article, incidentally – there are various deeply obscure and useless articles on Wikipedia which were originally mine). As you can see from the link, there are a total of fourteen instructions, several of which are just there for convenience, to enable input and output, such as LINE and PRINT. It’s a much smaller number than the 6502’s, but also rather artificial, since in reality it would be necessary to come up with code for proper input and output. Another aspect of redundancy is the fact that there are three jump instructions: JUMP, JINEG and JIZERO – unconditional jump, jump if negative and jump if zero. All that’s really needed is JINEG. The accumulator can be made negative in advance, in which case JINEG performs the same function as an unconditional jump, and since subtracting one turns zero into a negative number, SUBTRACT 1 followed by JINEG does the job of JIZERO, provided the accumulator is known not to be negative already. Hence the number of instructions is already down to six, since multiplication and division are achievable by other means, although as constituted CESIL would then be too limited to allow for input and output, because it only uses named variables. It would be possible, though, to ensure that one variable was a dummy which was in fact an interface with the outside world via some peripherals.
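The jump reduction can be sketched as follows; the helper names are my own, and note the caveat that the zero test only works when the accumulator can’t already be negative:

```python
# JINEG is the only conditional branch we keep: go to 'target' if the
# accumulator is negative, otherwise fall through to the next instruction.
def jineg(acc, target, pc):
    return target if acc < 0 else pc + 1

# JUMP: load a negative value first, then JINEG is always taken.
def jump(target, pc):
    acc = -1                       # LOAD -1
    return jineg(acc, target, pc)  # branch unconditionally

# JIZERO: subtract one, then JINEG. Valid only if acc >= 0 beforehand,
# since 0 - 1 = -1 (branch taken) while any positive acc stays >= 0.
def jizero(acc, target, pc):
    return jineg(acc - 1, target, pc)

assert jump(10, 5) == 10
assert jizero(0, 10, 5) == 10      # accumulator was zero: branch taken
assert jizero(3, 10, 5) == 6       # positive: fall through to pc + 1
```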

It has in fact been determined that a machine can be Turing-complete even if it has only one instruction, namely “Subtract One And Branch If Negative”. Incidentally, with special arrangements it can even be done with Intel’s “MOV” instruction, but that needs extensive look-up tables and special addressing modes. Hence there can be a SISC – a Single Instruction Set Computer, more often called an OISC, for One Instruction Set Computer. It isn’t terribly practical because, for example, adding two numbers in the hundreds would need this instruction to be executed hundreds of times. It depends, of course, on the nature of the data in the memory, and this has an interesting consequence. In a sense, this is a zero instruction set computer (which is officially something different) because it can be assumed that every location pointed to by the program counter has an implicit “SOBIN” instruction. The flow and activities of the program are actually determined by an address field which tells the CPU where to go next, which means the initial contents of the accumulator need to be fixed in advance, probably as an entirely set word of bits.
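For completeness, the best-known one-instruction machine is usually formulated slightly differently, as subleq – subtract and branch if less than or equal to zero – rather than SOBIN, and an interpreter for it fits in a few lines. The little demonstration program is my own:

```python
# A minimal subleq interpreter: each "instruction" is three addresses
# a, b, c meaning mem[b] -= mem[a]; if the result is <= 0, jump to c.
# The machine halts when the program counter goes negative.
def run_subleq(mem, pc=0):
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Demo: add S into T (T += S) using a scratch cell Z, then halt.
Z, S, T = 11, 9, 10                # addresses of scratch, source, target
program = [S, Z, 3,                # Z -= S, so Z becomes -5
           Z, T, 6,                # T -= Z, i.e. T += 5
           0, 0, -1,               # mem[0] -= mem[0] gives 0: jump to -1
           5, 10, 0]               # the data: S = 5, T = 10, Z = 0
print(run_subleq(program)[T])      # 15
```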

This would all make for a very difficult machine to work with, but it is possible. It would be very inefficient and slow, but it would also reduce the number of logic gates needed to a bare minimum. It would simply consist of an arithmetic unit, which could in fact be just a binary adder, because of two’s complement: negative integers can be represented simply by treating the second half of the interval between zero and two raised to the power of the word length as negative.
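A quick worked example of that trick, in Python for convenience – eight bits, so the “second half”, 128 to 255, is read as -128 to -1:

```python
# Two's complement in eight bits: one adder does subtraction too.
BITS = 8
MOD = 1 << BITS                  # 256

def negate(n):
    return (MOD - n) % MOD       # -7 is represented as 249

def add(a, b):
    return (a + b) % MOD         # keep the low eight bits, drop the carry

print(add(42, negate(7)))        # 35, i.e. 42 - 7 done purely by adding
print(format(negate(7), "08b"))  # 11111001
```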

This is a one-bit binary adder:

[Diagram: a one-bit binary adder, with an XOR gate at the top, an AND gate in the middle and an OR gate at bottom right.]

It works like this: 0+0=0 with no carry, 0+1=1 with no carry, 1+0=1 with no carry and 1+1=0 with carry. The symbols in the diagram are XOR (exclusive or, “either-or”) at the top, AND in the middle and OR (inclusive or, i.e. “and/or”) at bottom right. These conjunctions describe the inputs and outputs, where one, i.e. a signal, is truth and zero, i.e. no signal, is falsehood. Incidentally, you probably realise this, but this logic is central to analytical philosophy, meaning that there are close links between philosophy, mathematics and computer science, and if you can understand logic you can also understand the basis of much of the other two.
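Expressed with Python’s bitwise operators rather than gates, and extended to the standard full adder (two half adders with their carries ORed together), that table looks like this:

```python
# One-bit adders built from the three gates in the diagram.
def half_adder(a, b):
    return a ^ b, a & b              # XOR gives the sum, AND the carry

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2               # OR combines the two possible carries

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))  # reproduces the table above
```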

Most processors would do all this in parallel – an eight-bit processor would have eight of these devices lined up, enabling any two integers between zero and two hundred and fifty-five, or in two’s complement between -128 and 127, to be added or subtracted in one go. But it could also be done in series, if the carry is stored and the serial format is converted to and from parallel to communicate outside the processor. This reduces the transistor count further. All logic gates can be implemented by NAND gates, or if preferable by NOR gates. A NAND gate can be implemented with two transistors and three resistors, and a NOR gate with the same components connected differently. Transistors could also be replaced by valves or relays, and it would also be possible to reuse some of the components in a similar manner to the serial arrangement with the adder, although I suspect a point would come when the multiplication of supporting components would mean this was no longer worthwhile.
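And as a quick check of the claim that NAND alone suffices – a standard result, with helper names of my own choosing:

```python
# Every gate used above, rebuilt from NAND alone.
def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor_(a, b):
    return and_(or_(a, b), nand(a, b))

for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor_(a, b) == (a ^ b)
```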

I can’t really be precise as to the exact number of components required, but clearly the number is very low. Hence some kind of computer of a reasonable size could have been implemented using valves or relays, or discrete transistors. ROM could be realised via fuses, with blown fuses as zeros and working fuses as ones, and RAM by capacitors, by Williams tubes (cathode ray tubes with plates to feed the static charge back into the cathode), or by ferrite rings threaded onto wires – magnetic core memory – all methods which have been used in the past. Extra parts would of course be needed, but it is nonetheless feasible to build such a computer.

I feel like I’m on the brink of being able to draw a logic diagram of this device, but in practical terms it’s beyond me. I haven’t decided on input and output here, but those could be achieved via arrays of switches and flashing lights. And if this could be done with 1950s technology, who knows what the limit would be?