Diagonal And Vertical Time Travel

This is a bit of a thought dump where I try to work out the details of something I’m planning to use in a story, along with other bits I’ve adapted from elsewhere. It’s about time (travel).

I want to talk first of all about metaphors for the passage of time in language and, well, gestures I suppose, and it’s possible that this also comes into sign language. We tend to talk about going forwards and backwards in time, about “past” events and events “to come” and so forth. We also link the direction of time with the direction of writing and with the conventions that follow from it. Our clocks move clockwise and progress bars on audio and video place the “early” direction on the left and “late” on the right. This has been going on for longer than the existence of easy video and audio playback on digital devices, because on tapes, for example, we have fast forward and play, with arrows pointing to the right, and rewind, with arrows pointing to the left. One thing I don’t know is whether Arabic and Hebrew speakers use the same convention despite their writing proceeding from right to left, but I would expect the dominance of Western culture to have dictated this. A less well-known aspect of this is somewhat more jarring for us as humans, as opposed to us as humans literate in Latin script. Languages are often described as having features to the right or the left, for example as having prefixes to the left and suffixes to the right, focussing on spoken language, but written language doesn’t always go that way, so Hebrew, for example, might be described as having the definite article to the “left” of a noun even though as written that would be on the right. Incidentally, when people pause to think during a conversation, they will look to their right and up in cultures where writing is left-to-right and to their left and up where it’s right-to-left, and since this is learnt from other literate individuals and diffuses through culture that way, it’s even true of illiterate individuals such as small children. This illustrates how pervasive our time flow metaphors and conventions are. It should also be mentioned, for the sake of completeness as this probably won’t come up again here, that time can be thought of as linear or cyclical, and in theory cyclical time could be moving in any direction, although again this would depend on clockwise or anti-clockwise convention.

I could be wrong about this but I seem to remember that Chinese culture, inasmuch as it’s a single culture, agrees with my own view that the passage of time is vertical, falling from top to bottom. I presume the reason for this is the vertical direction of Chinese and related scripts, which brings me to wonder how it worked in Mongolia when it too had a vertical script. Another set of options for script direction exists in boustrophedon, where direction alternates and characters can be either inverted or mirrored, and I have no idea if that has any bearing on temporal metaphors. However, I want to put a case for a vertical time flow metaphor besides script direction, which seems fairly arbitrary, and then I can get on with this.

X, Y and Z axes are usually organised in order of left-to-right, front-to-back and low-to-high. On a graph, the Z axis usually points into the paper or screen and away from the viewer. The time axis is often referred to as W, because they ran out of letters and had to go backwards. As H.G. Wells pointed out in ‘The Time Machine’, gravity conventionally restricts our movement in the third dimension, i.e. height, in that if we consider the centre of, in our case, Earth as below us, we are liable to fall in that direction without support and have difficulty increasing our height above a reference level because it pulls us down. This difficulty is paralleled by our perception of the passage of time, because we are relentlessly propelled into later time from earlier, a process which is usually described as “forward” in time. Because of this similarity, it makes more sense to me to think of the passage of time like a waterfall, falling from top to bottom, and also as facing upwards since we have greater difficulty with accurately perceiving the future than the immediate past. This also has the advantage of working better in gestures and diagrams, because a vertical picture of the passage of time is the same for all observers, unlike time itself, whereas a time flow metaphor of forwards and backwards, or left and right, is reversed for someone opposite. That said, I’m pretty sure I don’t consistently gesture in this way. This vertical metaphor also works for the likes of family trees, and may also be used for timelines although this contradicts the order in which, for example, strata are laid down. A chronicle written in Latin script would, however, proceed vertically down the page, in a scroll, and in a codex (spined book) this would also be true when it was closed – the Book of Genesis is at the “top” of the Bible and Exodus is underneath it, in an English Bible, though in the Tanakh it’s the other way round. Family trees, incidentally, are actually the other way up compared to real trees.

Thinking of spacetime as a block, and simplifying space to two dimensions to make it possible to visualise it, it can be thought of as a kind of transparent cube with events embedded in it like flies in amber. The longer something goes on for, the longer its world-line is, from top to bottom, and at ordinary speeds, something moves around horizontally but is always earlier at the top than at the bottom. Then there’s the light-cone, which I regard as a crucial concept for time travel. The Sun is eight light minutes, Alpha Centauri four light years and Betelgeuse six hundred light years away. The upside-down, but still vertical, version of the light cone can be illustrated thus:

Flipping this over, events occurring above, such as the emission of light from a star, have only impinged upon an observer at the peaks of the cones if they happen within the top cone, and the observer can only influence events occurring below, i.e. in the future, if they lie within the light cone below the point where the peaks meet. The surface of the cone is defined by the speed of light because nothing can exceed it, and therefore there can be no causal connection between an event and anything outside its light cones. Every location at a particular moment has these light cones.
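As a rough illustration of the same relationship, here is a minimal Python sketch in units where c = 1, so that years and light years are interchangeable; the distances are the ones quoted above and are approximate:

```python
def causal_relation(dt, dx):
    """dt: separation in time (years); dx: separation in space (light years)."""
    s2 = dt ** 2 - dx ** 2        # the invariant interval, with c = 1
    if s2 > 0:
        return "timelike: inside the light cone, so causal influence is possible"
    if s2 == 0:
        return "lightlike: on the surface of the cone itself"
    return "spacelike: outside the light cone, so no causal connection"

print(causal_relation(dt=600, dx=600))  # light leaving Betelgeuse ~600 years ago, arriving now
print(causal_relation(dt=1, dx=4.4))    # Alpha Centauri as it was a year ago: outside our cones
```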

This is where time travel comes in. The classical objection to travelling backwards in time is that it can create two types of paradoxes, or rather involves two types: the grandmother paradox and the ontological paradox. The grandmother paradox is illustrated most starkly by the possibility of becoming one’s own grandparent, leading to a loop in time with no cause and also a contradiction with recorded or otherwise “known” events. I know I’m not my own grandparent, or have a high degree of confidence that I’m not, because their identity and history aren’t mine. A slightly different paradox is the ontological one, exemplified by such events in fiction as the apparently eighteenth century CE glasses given to Captain Kirk by Bones in ‘Star Trek’, which were then pawned in 1986 in San Francisco, possibly to be acquired by McCoy at some point and given to Kirk all over again. This means that certain complex objects have no origin at all, and raises questions such as what happens if someone sends a 21st century signed copy of the sheet music for Beethoven’s Fifth Symphony back to Beethoven, which he then copies instead of composing it for himself. It might be less disturbing if all that happens is that a single electron or photon is transported in time.

My long-standing answer to this is to invoke what is called either the Novikov Self-Consistency Principle or the Blinovitch Limitation Effect, depending on whether you go with the real-world or the science-fiction version of the effect. The Novikov Self-Consistency Principle holds that if an object travelling in time would cause a paradox of this nature, the probability of that occurring is zero. The Blinovitch Limitation Effect is similar and originated in ‘Doctor Who’, and means that either timelines cannot cross themselves or that if they do, some kind of catastrophe occurs. This happens for example in ‘Father’s Day’, where Rose encounters herself as a baby and it leads to damage to the time stream which can only be resolved by her father meeting his fate, that is, death under the wheels of a car.

My own solution to this has been to suppose that time travel “upwards” is only possible outside the light cone. That is, you can move upwards in time, but if you do you will find yourself displaced in space from your point of origin by at least the distance light can travel in the interval you have jumped back, so that nothing you do can reach your point of origin before the moment of your departure. Hence if you travel three years upwards in time, say from 2023 to 2020, you might find yourself near Alpha Centauri, because it then follows that you can’t have any influence on where you came from. You might send a radio message warning people about Covid-19, but it wouldn’t get there until after the outbreak, for instance.
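To make the rule concrete, here is a minimal sketch of it under the assumptions above, again in units of years and light years with c = 1; the figure of 4.4 light years for Alpha Centauri is approximate:

```python
C = 1.0  # speed of light in light years per year

def minimum_displacement(years_back):
    """Smallest spatial displacement consistent with arriving outside the
    light cone of the departure event."""
    return C * years_back

def signal_arrival_year(departure_year, years_back, distance_ly):
    """Earliest year a light-speed message sent immediately on arrival in the
    past could reach the point of departure."""
    return (departure_year - years_back) + distance_ly / C

print(minimum_displacement(3))            # a three-year jump needs at least 3 light years
print(signal_arrival_year(2023, 3, 4.4))  # a warning radioed from near Alpha Centauri in 2020
                                          # arrives around 2024.4, after the departure
```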

For this reason, I have tended to portray time travel as a “diagonal” process. You can’t stay in the same place and travel into the past, but will be displaced by the same distance in time and space: four light years for four years and so on. I also applied this to the idea of parallel universes, which I see as separated from us by extra dimensions, so your creation of a different history would simply mean that you had been transported to a parallel universe where that would always have been the case. The problem with this is that it kind of makes it look like any significant historical event was caused by time travel, or that there would be duplicate universes different only because in one a time traveller had turned up and made the same change as had happened without such intervention in another parallel universe. Thus in one universe, JFK isn’t shot because the guns jammed but in another someone came along and secretly did something to the guns in advance which caused them to jam. This seems suspicious.

There is a further problem with intersecting light cones, because events which can’t impinge on one light cone can do so on another, but I have to admit to being rather hazy about that, so I won’t go further. That, then, is diagonal time travel. There’s also vertical time travel.

Vertical time travel is what any object or person does when they’re sitting still, although of course real sitting still (or standing still) is not a trivial, or maybe not even a possible, thing because of things like orbits, rotation of planets and so forth, and the probable complete absence of an absolute frame of reference. Hence a vertical path through time is not a straight line, although it might look straight to observers who are stationary with respect to the object or event in question. Simplifying this (we generally think of ourselves as moving or stationary with respect to the scenery or neighbourhood, rather than as standing on a rotating, orbiting planet in a solar system orbiting through the Galaxy, and so forth), time travel from the past to the future for most of us is almost perfectly vertical in nature. We are moving straight “down” in time, not diagonally. Sunlight is diagonal. In fact, all light in vacuo is diagonal because its velocity is the same for all observers.

Back to vertical time travel. Ignoring the complications mentioned just now, vertical time travel into the past should be impossible. Into the future it’s the norm. However, the conventional depiction of time travel in most popular culture is that of the protagonist moving through time, either into the future or the past. The latter clearly brings up the question of paradoxes, and conjures up the idea of a wise, intervening Universe stepping in to stop such things from happening, for which there is no evidence. This is not, though, the only conceivable way an object or entity might move through time, and this is where my new ideas come in, and where this becomes a kind of world-building thought dump.

There is a second, less often used, version of time travel in fiction which, so far as I know, is not used to explore paradoxes, and it’s this which I’m going to call “vertical time travel”. It’s illustrated only occasionally. The first example which comes to mind is found in Isaac Asimov’s ‘The Ugly Little Boy’. In this story (SPOILERS!) a Neanderthal child is brought into the twentieth century, although he eventually returns to his time with his H. sap. nurse. Energy is required to hold objects in the modern world and budgetary constraints lead to them being returned to the past, including ultimately the Neanderthal boy, so this is not a one-way street and paradoxes are still possible. The TV series ‘Primeval’, which I haven’t seen, seems to have the premise of a similar effect where animals arrive in the present through temporal anomalies, but again this seems bidirectional. You also see the same kind of thing in «Les Visiteurs» and ‘Catweazle’, where a similar process brings characters from the Middle Ages into the twentieth century. Most of the time, perhaps to provide resolution in plot terms, the protagonists or objects are returned to the past, and I can’t offhand think of any unidirectional time travel stories. They probably do exist, as they allow for “fish out of water” stories. In fact most of the time this niche is filled by the idea of someone going into a state of suspended animation and being revived, as with ‘Rip Van Winkle’, ‘Idiocracy’ (and the story it was based on) and Woody Allen’s ‘Sleeper’. This is a device allowing for time travel in a much more realistic manner than something like ‘Doctor Who’. However, actual vertical time travel allows for other possibilities which are quite intriguing and not available to such tales.

Time travel into the past can never be invented, only discovered. Look at it this way: the first time in history someone devises a time machine that goes upward in time, it will already have gone into the past. Hence it will have come into existence before it was invented. It might not be discovered in the sense that someone comes across it on its trip furthest back into the past, but the “inventor” has come across the machine at a later time than when it first sprang into existence. Alternatively, time travel might exist due to a peculiar combination of events and circumstances arising without the intentional use of technology, perhaps, for example, where one end of a wormhole has orbited a black hole at relativistic speeds close to its event horizon, so that time has passed more slowly at that end than the other, resulting in an object able to capture masses and transport them to an earlier time in its history. Incidentally, I’m not saying this is feasible, but such a situation could be discovered and wouldn’t have been invented. Hence there is a sense in which it’s impossible to invent a time machine completely aside from any technical reason why they would be impossible in themselves. Even possible time machines cannot be invented.

Given that, there is also a sense in which instances of time travel from the past might be discovered rather than deliberately or accidentally perpetrated, but I’ll come back to this. For now, imagine the following scenario. Objects move from the past to the present without ageing. By “ageing”, I mean any change due to the passage of time, so for example if a watch reading 4:45 pm moves an hour down the line, it still reads 4:45 pm on arrival, at 5:45 pm, and it still works. This amounts, I suppose, to the suspension of entropy, although it would seem inevitable that the object in question would not have a location, and in a sense would not exist, between the two occasions. It isn’t frozen in time: it has discontinuous existence. Otherwise any object on the surface of this planet, assuming it follows Earth, would remain in the location where it succumbed to time travel, and would seem to be a frozen item whose location could be discovered and which would be observed. But maybe it wouldn’t, because this would involve interaction with the rest of the universe, and this would seem to imply change. Being subjected to millions of years of ionising radiation in an instant, and millions of years of thermal energy in the same time, would be likely to destroy such an object. Therefore suspension of entropy would mean the apparent absence of the object, perhaps existing in a merely latent state for geological epochs. After all, if time isn’t passing for an object, its electrical charges cannot attract or repel, its particles can’t change energy state, preventing their ability to emit or absorb photons, no weak interactions take place in its nuclei and so forth. It literally does nothing and does not interact with the Universe, although it has to be assumed that it retains momentum or it would end up buried inside the planet or in deep space.

This, then, is the fictional scenario I wish to present. There are objects which enter a kind of dormant state of existence for various intervals, or perhaps simply remain dormant until disturbed in some way. Then, after a period of time, a device comes along which can revive them back into existence. This does not occur causally, i.e. this is not a device which can actively retrieve objects from the past, although that is what it appears to do, because sub specie aeternitatis this is what would always have happened to such objects. Because of this, there can have been no interaction between such objects and the operator of the “discoverer” machine, or a paradox could be created. Even so, the operator of the discoverer subjectively experiences the decision to focus the machine on a selected target in the past which they would then bring into the present. That decision, though, was always causally determined, perhaps like all apparent instances of free will. Therefore, as far as the operator is concerned, they are making a decision to retrieve an object from the past, operating the machine and being presented with the intended object. So far, so good.

From this description, there seems to be no possibility of paradox, which makes it rather boring. Consider this though. If you have already interacted with an object in some way, you cannot bring it into the present. You can’t target last night’s dinner just after you’ve cooked it and eat it tonight, because if you did that you wouldn’t have had last night’s dinner. That said, what if you targeted someone else’s dinner from last night? You’ve had no causal interaction with that dinner. It could be in a kitchen you’ve never visited and be eaten by a complete stranger. What if that led to them developing a false memory that they had eaten the meal, because in one timeline they had but in yours they hadn’t?

To be highly specific, just suppose you went back to the set of ‘Moonraker’ and removed Blanche Ravalec’s braces. You would then know what had happened, but everyone else would have the false memory of her having braces when they watched the film. In other words, and I would stress that this is fictional, this is an explanation for the Mandela Effect.

And it has legs! Now imagine a sealed box containing a cat. You have no way of knowing whether the cat is dead or alive, because a canister of poison gas inside is either broken or intact depending on a radioactive decay event. Using the discoverer, you save the cat by removing her from her predicament at a point before the canister was at all likely to have been broken. If you’d opened the box first, you would never have succeeded in bringing the cat through time. Schrödinger’s Cat, of course, but with a twist involving time travel.

Now imagine a young woman shopping for groceries – yes, this is a heteronormative example. Her bag falls apart and her orange rolls along the pavement, to be picked up by a handsome young man. Their eyes meet and the rest is history. There are wedding bells and children. Then, one day a third party focusses their discoverer on the aforesaid orange and removes it to their present day. For that individual, this was always going to happen because they don’t know these people and it has no consequences for them. Meanwhile, the erstwhile spouses wake up in their own beds, perhaps at opposite ends of the country, married to different people and with different children, with no memory of how this happened, but with distinct memories of meeting each other, getting married, settling down and having children who will now never exist. On the other hand, assuming they can do anything about this (and I suspect they can’t, unless they can trace who did it and stop them without causally interacting with them, and how could that happen?), they would end up back together, but then their own children conceived with their other partners would never have existed.

I would maintain that all these possibilities are highly fruitful, and intend to write a story based on them. In the meantime, this post has served as a means of working out this detail of such a story for me. Thanks for your patience.

Theory

Let me get one thing out of the way before anything else. I would be the first to claim that human thought has biasses which prevent us from being neutral or objective, and that the specifics of how natural science is practiced create other biasses within it. Robert A Heinlein once said “man (sic) is not a rational animal but a rationalizing animal,” and I agree passionately with and have tried to live my life in accordance with that. I would also say that there’s a difference between scientific and non-scientific usages of the word “theory”, and the second tends to be unfairly deprecated, but confusion between these two often leads to misconceptions. This doesn’t mean that the scientific, more rigorous, use of the word is more valid, but the distinction is there and it should be known.

Looking at some other uses of the word, there are:

  • The colloquial, conversational use.
  • The scientific use.
  • Music theory.
  • Colour theory.
  • Political theory.
  • Driving theory.
  • Cultural theory.
  • Gender theory.
  • Literary theory.
  • Mathematical and logical theory.

I actually make some effort not to use the word “theory” when that isn’t what I mean, and to replace it with “hypothesis”. A hypothesis is a conjecture which has not been tested, and doesn’t become a theory until it gets through a test which could prove it wrong, and perhaps until it has got through that test several times, carried out by several people. In order for it to reach even that stage, it needs to be definite enough for one to be able to articulate clearly beforehand what it would take to refute it. It’s particularly bad to modify an hypothesis to explain away apparent refutations, although that does allow one to come up with better hypotheses if one gets lucky. It has to be specific, and it has independent and dependent variables. The independent variable is the part of an experiment which can be deliberately changed, so for example you might have a hypothesis that water boils at a lower temperature under lower pressure, use a vacuum pump to pump some of the air out of a sealed chamber, then use a barometer to measure the pressure in the chamber and a thermometer to measure the temperature at which the water in a heated vessel inside it boils. The independent variable is the pressure inside the chamber, and the dependent variable is the boiling point of the water. This is what it takes to qualify something as an hypothesis.
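As a concrete version of that example, here is a minimal sketch of the predicted relationship between the two variables, assuming the Clausius–Clapeyron relation with a roughly constant heat of vaporisation; the figures are textbook approximations rather than readings from any actual run of the experiment:

```python
import math

R = 8.314                  # gas constant, J/(mol*K)
H_VAP = 40700.0            # approximate heat of vaporisation of water, J/mol
T0, P0 = 373.15, 101.325   # boiling point (K) at standard pressure (kPa)

def boiling_point(pressure_kpa):
    """Estimated boiling point of water (in deg C) at the given pressure."""
    inv_t = 1.0 / T0 - (R / H_VAP) * math.log(pressure_kpa / P0)
    return 1.0 / inv_t - 273.15

# Independent variable: pressure. Dependent variable: the boiling point that results.
for p in (101.325, 70.0, 50.0, 30.0):
    print(f"{p:7.1f} kPa -> boils at about {boiling_point(p):5.1f} deg C")
```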

A theory, in scientific terms, is what, if anything, comes out of the other end of this process, which must be practiced more than once by different people in different places while attempting to duplicate the conditions as closely as possible. For this to happen, experiments need to be written up carefully and in detail. Ideally, they should also be carried out by people who are dissimilar to those who originally did the experiments but are still competent. Without taking up a position on the efficacy or otherwise of homœopathy, for example, many studies lack ecological validity because they don’t involve the kind of consultation homœopaths undertake before they prescribe remedies, if that’s what they are, and consequently the fact that there is absolutely no trace of the substance left in the preparation must continue to be taken as irrelevant until there can be proper dialogue between homœopaths and skeptics. For the record, I find it exceedingly difficult to believe that homœopathy can work, but I cannot definitively say that it doesn’t unless a well-designed, ecologically valid scientific procedure which refutes it has been reproduced, and it is not a scientific position to assert that it doesn’t or does work until that’s been done, and as far as I know it never has. Hence my suspicion that it doesn’t work, like that of other people, including medical scientists, is not scientific and not based on a specific theory. However, in case you’re interested, that’s why I’m not a homœopath, among other reasons (e.g. that it isn’t vegan), so I just avoid it and withhold judgement.

A scientific theory is an explanation for a phenomenon which has been tested repeatedly in the manner described above and has not been refuted. There has been some controversy in the history of science regarding exactly what the process is, partly resulting from the problem that inductive inference is not strictly logical. Just because something has always happened doesn’t mean it always will, and deductive logic is structured such that it’s impossible to draw false conclusions from true premises, a standard which inductive inference can never meet; this means that there is no strictly logical link between cause and effect, which appears to destroy the scientific endeavour entirely. Of course, few people actually follow this through in their everyday lives because of the high degree of uncertainty it would bring. One of the breakthroughs Karl Popper made was to come up with an account of the scientific method which didn’t use inductive inference.

According to Popper, theories are simply held until they’re refuted. I’ve possibly gone a bit too far in accepting a kind of caricature of his beliefs in this area because I hold that all scientific theories are wrong. He did once make an off-the-cuff remark about the probability of arriving at a correct scientific theory being zero, but I think this was probably meant to be a joke. The basic idea is that scientific theories are simply used as practical ways of engaging with the world, such as building TV sets and internal combustion engines to mention two technological applications of theories, until something else comes along and proves them false. That doesn’t actually mean anything will ever do that, but then the question arises of whether the reason it doesn’t is because it’s literally impossible or is just because nobody happens to have come across a way of refuting it. To me, Popper is a somewhat questionable person because he threw his lot in with the likes of Hayek, although he didn’t seem to be as right wing, and although I don’t mean to be ad hominem about it, if his thought forms a consistent whole, this would for me cast doubt upon his views on the scientific method. His actually articulated political philosophy is akin to that of many right wing thinkers, that the existence of ideology is inherently oppressive, and I think this leads to being in denial about implicit ideologies and is a factor in the persistence of mature capitalism. Relating this back to the idea of theory, it seems mainly to amount to the idea that there can be rigorous scientific theories but no social or political theories of the same kind, and therefore that attempting to apply a political “theory” will always lead to oppression because it will be seriously wrong and not even a practical means of running a society. This is of course mainly aimed at Marxism.

Speaking of Marxism, another philosopher of science, Thomas Kuhn, has always struck me as a closet Marxist. Incidentally, I’m apparently two degrees of separation from him – we have (I presume he’s dead) a mutual acquaintance. This might mean that I’ve been subject to some kind of groupthink with respect to his beliefs. I vaguely recollect that I’ve already talked about him on this blog, so I won’t go into too much detail again, but his basic view is that science normally proceeds with entrenched theories held by people with experience and reputation, and those theories tend only to be replaced when the people themselves are. When this replacement happens, it’s called a scientific revolution and science does then operate according to Popper’s view, but it’s the exception rather than the rule. Hence belief in the luminiferous æther persisted with more and more absurd properties being assigned to it until its existence was disproven by the Michelson-Morley experiment. This reminds me of when I used to use a twin tub to do the washing and various things went wrong with it until I was having to unscrew the central column of the washing machine bit, wheel it out to the back yard, upend it to drain out the water and put the column back in. It used to take me four hours of constant attention. Eventually a housemate pointed out that it would be easier to take it to the laundrette, so that’s what I did, but the point is that the difficulties had steadily accumulated without me really thinking of what the alternative might be until I reached a stage of considerable silliness. This happens in science as well. My personal view is that non-baryonic dark matter is an example of this.

On a somewhat related matter, science can sometimes get itself into a position where it becomes difficult to test its propositions using current technology. This happens with particle physics and accelerators, for example, in that it seems to have become impossible to build a sufficiently powerful particle accelerator to test certain hypotheses about the nature of matter. Another example is string theory, which seems to be untestable. However, in such circumstances ways are sometimes eventually found to test these theories, either through ingenuity or better devices for doing so. I’ve mentioned this before as well.

The colloquial looseness of the word “theory” is particularly prone to being misunderstood in the area of biology, where evolutionary theory is often described as “just a theory” and sometimes accused of being untestable. I want to address this by using the idea of “cell theory”. Cell theory is a genuine theory which is much less questioned by anyone than the theory of evolution, and is really just the idea that all living things are composed of cells, which are the basic units of life. As stated, this is actually wrong, and there are other ways in which it could be questioned, but it is basically true. Specifically, viruses, if they’re considered to be alive, are not made up of cells, there are syncytia, which are continuous bodies of cytoplasm with multiple nuclei and other organelles through them, fungi being an example, and what we think of as single-celled organisms could alternatively be thought of as organisms whose bodies are not divided up into cells and it’s a kind of useful fiction to consider an amœba and a white blood cell to be the same kind of thing because the former is a whole organism whereas the latter is a small part of a much larger one. It’s also not known if any shadow biosphere which might exist on or in Earth is made up of cells, and then of course there’s the possibility of life elsewhere in the Universe, if it exists, being very differently constituted. However, all of these things are details and they don’t really contradict the general truth of the theory. They don’t mean that if you come across a tree, say, or a jellyfish they won’t turn out to be made of cells. What happens is that theories become refined with scientific change. Cell theory is a theory which is also an approximate fact. The fact that most large plants and animals are made up of cells has been established and remains the case.

Applying this to evolution, yes evolution is a theory, but it’s a theory which is also a fact. There are refinements and controversies within it. For instance, Richard Dawkins and others are very keen on individual gene selection, where they see genes as the basic unit competing for survival, and tend to reject group selection, where the survival of individual genes is influenced by the evolution of groups of organisms. Another example is punctuated equilibrium, which is similar in a way to Kuhn’s idea, that a species stays stable and similar to its ancestors for a while, then suddenly undergoes rapid evolution in response to changing circumstances. There are also the details of how genes are represented, in the form of nucleic acids, and how they’re switched on or off, epigenetics, none of which was known in the early decades of evolutionary theory, and there are clearly exceptions to evolution in the form of planned breeding, genetic modification and the horizontal transmission of genes via viruses between unrelated organisms, but again, none of that contradicts the general theory of evolution by natural selection.

The refinement of theories can also be seen in the progress from Kepler through Newton to Einstein. Kepler was able to work out that the planets in this Solar System obeyed certain physical laws in that they moved in elliptical orbits with the Sun at one focus, faster when they were closer to the Sun and slower when they were further away, and that the time taken to orbit the Sun is proportionate to the square root of the cube of the mean distance from it. From this, Newton was able to generalise the laws of motion and gravity, which are considerably counterintuitive because we’re so used to air resistance and friction and don’t realise that the way things move on a planet with a substantial atmosphere is a special case. For instance, it may not be obvious that a moving object will travel in a straight line without changing its velocity unless other forces are acting upon it because they nearly always are in our experience. The preceding view is Aristotelian, and effectively applies to driving theory because of braking distances, for example. However, Newtonian physics also applies to driving theory because of the difficulty of using non-anti-lock brakes on ice or a wet road, at which point Aristotelian physics ceases to be applicable. A motorist trying to brake on ice has entered a Newtonian paradigm, and road traffic collisions and other mishaps illustrate very clearly that Newton’s theory is also a fact.
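Kepler’s third law is easiest to see in convenient units: with distances in astronomical units and times in years, the square of the orbital period equals the cube of the mean distance, so a quick sketch (rounded textbook figures, nothing more) is:

```python
# Kepler's third law for bodies orbiting the Sun: T^2 = a^3 in AU and years,
# so T = a ** 1.5. Semi-major axes below are approximate.
planets = {"Mercury": 0.387, "Venus": 0.723, "Earth": 1.0, "Mars": 1.524, "Jupiter": 5.203}

for name, a in planets.items():
    period_years = a ** 1.5
    print(f"{name:8s} a = {a:5.3f} AU -> orbital period ~ {period_years:6.2f} years")
```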

Moving beyond Newton to Einstein, it became clear that in some circumstances the laws of motion didn’t work. In particular, Mercury’s orbit precesses in such a way that it appears to imply that there’s another planet further in whose gravity pulls it about, and the Michelson-Morley Experiment shows that light travels at the same speed when it’s moving with the orbital motion of Earth, against that motion or at right angles to it. Hence further refinement was needed, and it came in the form of the general and special theories of relativity. There are various ways to demonstrate that relativity is true, some more arcane than others. For instance, subatomic particles are often unstable and have a half-life in the same way as radioisotopes. In a particle accelerator, these lives are longer according to how fast they’re moving. One of the starkest examples of why relativity is true is found in satellite navigation systems, which again apply to driving in the form of GPS and satnav, although interestingly they’ve been used by the military since the early 1960s CE. GPS satellites orbit at 14 000 kph and are in orbits where Earth’s gravity is weaker than on the surface. Both of these influence how fast the atomic clocks on board work, to the extent that they run around 38 microseconds faster per day than a clock stationary relative to Earth’s surface at sea level. Light travels more than eleven kilometres in that time. Therefore, the clocks in the satellites have to be adjusted to take this into consideration, or the error in locating a GPS receiver would accumulate by several miles every day. This also helps planes land safely in bad weather as it enables them to locate the runway in fog. Hence again it’s theory which is also factual. Some of us live in hope that a loophole will one day be found in the details of special relativity which will enable spaceships to reach the stars within a reasonable amount of time. If that happens, the fact will remain that most of the time relativity works fine. An exception needs to be found, and this may be present, for instance, in the space between two cosmic strings, which is however fine for moving between cosmic strings but not much else. If relativity was wrong, light would move more slowly if an observer was moving, and a moving torch would add its speed to the speed of its light, but this doesn’t happen. Also, Mercury’s orbit would either be different or there would be another planet orbiting closer to the Sun.
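The 38-microseconds-per-day figure can be checked on the back of an envelope. The following sketch uses round values for the GPS orbit and the usual first-order approximations for the two relativistic effects, so it is illustrative rather than exact:

```python
C = 2.998e8          # speed of light, m/s
G = 6.674e-11        # gravitational constant
M_EARTH = 5.972e24   # mass of Earth, kg
R_EARTH = 6.371e6    # radius of Earth, m
R_ORBIT = 2.656e7    # GPS orbital radius, m (about 20 200 km altitude)
V_ORBIT = (G * M_EARTH / R_ORBIT) ** 0.5   # ~3.9 km/s, i.e. roughly 14 000 kph
SECONDS_PER_DAY = 86400

# Special relativity: the moving clock runs slow by roughly v^2 / (2 c^2).
sr_per_day = -(V_ORBIT ** 2 / (2 * C ** 2)) * SECONDS_PER_DAY

# General relativity: the clock higher in Earth's gravity well runs fast by
# roughly (GM/c^2) * (1/R_earth - 1/R_orbit).
gr_per_day = (G * M_EARTH / C ** 2) * (1 / R_EARTH - 1 / R_ORBIT) * SECONDS_PER_DAY

net = sr_per_day + gr_per_day
print(f"velocity effect : {sr_per_day * 1e6:6.1f} microseconds/day")
print(f"gravity effect  : {gr_per_day * 1e6:6.1f} microseconds/day")
print(f"net             : {net * 1e6:6.1f} microseconds/day, in which light travels {net * C / 1000:.1f} km")
```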

There are, however, wider usages of the word “theory” than just in science, as listed above. In this broader sense, a theory is a rationally-held abstract model of a phenomenon. It’s probably this usage which leads to confusion. In mathematics and logic, theory has to have a different meaning than in the other sciences because it can’t really depend on observation and testing in the same way. There are conjectures in mathematics, for example, that every even number greater than two is the sum of two primes (Goldbach’s conjecture), which has turned out to be the case for every example known but may at some point turn out not to be, and Fermat’s Last Theorem, mentioned here, that a^n + b^n = c^n has no whole-number solutions for integer n > 2. Entertaining 2109 as a real thing momentarily, and who knows, it may be, revealing the proof of Fermat’s Last Theorem to the psychical researchers in Cheshire would’ve disrupted our timeline, where Andrew Wiles proved it in 1994. The difference between mathematical theories and those of empirical science is that the former can be completely proved whereas the latter can only continue to be corroborated until proof of the contrary. Having said that, geometry in particular has turned out to be empirical rather than mathematical because of the claim in Euclidean geometry that parallel lines stay the same distance apart. They don’t. If they did, one consequence would be that GPS systems wouldn’t be prone to the same kind of error. Hence geometry does depend on observation and testing even though it went on for thousands of years before anyone realised it. This could also be true of logic. For instance, logic has the Law Of Excluded Middle and Non-Contradiction, which assert respectively that either-or always applies, such as something either being true or false, and that something cannot simultaneously both be and not be so. Quantum physics and some eastern philosophies suggest otherwise, and there are other kinds of logic which allow more than two truth-values, which may be in different orders incidentally, suggesting that there is more like a figurative hyperspace for truth, falsehood and others than a simple line along which truth and falsehood vary. Again, this has partly been refuted by observation. Hence mathematics and logic may not be as safe from refutation as they seem.
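The first of those conjectures, Goldbach’s, is easy to check by brute force for small cases, which is exactly the sense in which it is corroborated but never proved; a minimal sketch:

```python
# Check that every even number greater than two up to a limit is the sum of
# two primes. Passing the check corroborates the conjecture but proves nothing
# about larger numbers.
def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n):
    """Return a pair of primes summing to the even number n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

for n in range(4, 10001, 2):
    assert goldbach_pair(n) is not None, f"counterexample at {n}!"
print("holds for every even number from 4 to 10 000, e.g.", goldbach_pair(10000))
```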

“Gender theory” is a polemical term rather than one actually applied by those who are said to practice it. Although there are such things as queer theory and feminist theory, this term actually refers to a purported conspiracy, sometimes seen as part of cultural Marxism when that is used as a label for a conspiracy theory. Phrases translatable as “gender theory” arose outside the Anglosphere and purport to refer to an idea that gender can be chosen at will, is being forced on children and society in general, and aims to erode the idea of gender. This is a use of the word “theory” in a colloquial sense, because of course political and religious conservatives would assert that the reality is the biologically-based gender binary, so gender “theory” is not intended to refer to a set of beliefs which have been arrived at scientifically, unless the person involved distrusts science more generally. In fact any scientifically supported theories associated with gender contradict the straw man created by this conspiracy theory.

And that’s another thing. In order to be scientific, and maybe they needn’t be, conspiracy theories would have to be able to make predictions and be open to falsification. There would need to be a test which would, if failed, demonstrate them to be false. Since there are so many of them, it’s hard to know where to start and also unfair to generalise. Moreover, this is a colloquial use of the word and it may be unfair to hold them to scientific standards. Some conspiracy theories turn out to be true. For instance, there really was a consortium of filament light bulb manufacturers which deliberately made them less durable than they were originally, and some of the longest lasting bulbs date from before this time, such as the Centennial Light, which has been on continuously since the first year of the twentieth Christian century. Although investigation has revealed that this really did happen, there is a bit of a problem in relying on the existence of really old working light bulbs to attempt to confirm it, as there’s a built-in survivorship bias: the only bulbs that can be seen to have lasted an exceptionally long time are necessarily the older ones. For all we know there are filament bulbs manufactured in 2011, the last year they were produced, sitting in new old stock and likely to last another century. Except that we basically do know there aren’t, because the conspiracy was real.

Something like colour or music theory is different again. They seem to refer to a structure of concepts placed on top of the particular ways we are able to perceive, so they’re anchored in an unchangeable realm and represent various networks of ideas built on top of it. For instance, complementary colours and the colour wheel make sense to us because of the nature of our sense of vision. Most humans have three types of cone cell, each with peak sensitivity to a different part of the spectrum, plus one type of rod cell. Although the cones are thought of as red, green and blue, and the rods as monochrome, the peak sensitivity of the “red” cone is actually in the yellow range. Red cone cells are less common than the others among mammals. To most mammals, red berries such as tomatoes and holly are black, so their complementary “colour” would be white, but not for us. Likewise, if an animal can see ultraviolet as an additive primary colour, i.e. they have an ultraviolet cone cell but no red, the complementary colour to ultraviolet would be aqua, which would be equivalent to white. It isn’t clear to me how a colour wheel would work in these circumstances because for us, violet and purple are similar, but violet is close to being a spectral primary whereas purple is a mixture. I suspect that this is because violet wavelengths are half that of red, so our red cone cells are triggered by alternate wavefronts of violet light, raising the question of whether ultraviolet would look yellowish or greenish to us, and on top of that whether we could see yellow-blue, which to us is impossible but is possibly what ultraviolet “ought” to look like. Hence colour theory depends on our physicality and can be thought of scientifically, but something else has been built on top of our nature which is not, strictly speaking, universal.

Concepts are also theory-laden, as the phrase has it, and this erodes the distinction between what we perceive as factual and what we theorise about. We bring assumptions with us because it’s impossible to function otherwise. In a professional capacity, a psychiatrist of the old school might have been trained in Freudian analysis and look at a client’s interactions in terms of, say, cigarette-smoking being a phallic symbol rather than a physical addiction, so the behaviour they see in front of them might be interpreted completely differently. It affects all of us though. There are two kinds of theory-ladenness: semantic and perceptual. For the former, the words we use are based on pre-existing assumptions, hypotheses and theories. For instance, Brownian motion is the tendency of small particles in fluids to be battered asymmetrically by molecules and atoms, making them jiggle. This was first observed in pollen grains and thought to be something to do with them being alive, so there could be a confirmation bias there that everything which shows this kind of motion is living. The way I’ve explained Brownian motion, however, depends on atomic theory in a similar way. A related problem is that the theory to be tested can be assumed beforehand. Semantically, there is another kind of issue. For instance, temperature and heat are sometimes seen as interchangeable, and in an experimental write-up, temperature might be misreported or even misread as a result. In fact, temperature is a measure of the mean kinetic energy of the molecules or atoms of a substance, whereas heat, or more strictly the thermal energy involved, depends on the total kinetic energy. This comes into play with the upper atmosphere, which reaches a temperature of 2 500°C, but a thermometer of the kind we’re used to employing would read well below freezing there because of the sparseness of the particles in that region. In some senses it’s almost meaningless to say that the atmosphere has that high a temperature, but in others it is important.
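The contrast can be made concrete with a rough calculation of thermal energy per unit volume, treating both gases as ideal and monatomic; the particle densities below are assumed, order-of-magnitude figures rather than measurements:

```python
K_B = 1.381e-23   # Boltzmann constant, J/K

def thermal_energy_density(number_density, temperature_k):
    """Approximate kinetic energy per cubic metre of an ideal monatomic gas."""
    return 1.5 * number_density * K_B * temperature_k

room_air     = thermal_energy_density(2.5e25, 293)   # sea-level air at ~20 deg C
thermosphere = thermal_energy_density(1e15, 2773)    # upper atmosphere at ~2 500 deg C (assumed density)

print(f"room air     : ~{room_air:9.2e} J per cubic metre")
print(f"thermosphere : ~{thermosphere:9.2e} J per cubic metre")
# The "hotter" gas holds billions of times less energy per unit volume,
# which is why it would not warm an ordinary thermometer.
```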

There’s a well-known psychological experiment where some psychologists had themselves admitted to a psychiatric hospital by faking symptoms which did not correspond to any particular diagnosis. Once there, it took a long time for the staff to recognise that they were not mentally ill, even though they ceased to exhibit these symptoms immediately after admission, and there was also a hierarchical order to the people who realised they weren’t “ill”, starting with the lowest-paid and least professional workers and ending with the consultants. I would call that a good example of theory-ladenness, and it’s also interesting that education in a particular speciality actually conceals the apparent reality from those who ought to be experts. However, there is another possible interpretation of this experiment that it actually means that some psychiatric patients are not as they seem. This doesn’t mean they don’t correspond to the definition of mental illness, but it’s possible that society forces them to act in a certain way consciously because they lack a coherent rôle in it. This tallies with the social model of disability, that society disables people rather than disability being an inherent organic property of the individual.

There’s a tendency to think of theory as in opposition to practice. This is indeed sometimes the case. However, it’s equally true that we can’t avoid forming pre-conceptions, which are theories in a looser, non-scientific, sense, before we do things. Another problem, though, is that although there are perfectly valid non-scientific uses of the word “theory”, these can lead to misunderstandings as with the idea that evolutionary theory is “just a theory”, when it’s actually a fact as well-established as cell theory. At the same time, I wish the other senses of the word were more respected, because they are not in some sloppy realm where things are not thought through much of the time but constitute a firm basis. If I want to create a harmonious arrangement of clothing by dressing in complementary colours, the fact that that depends on most of us having only three types of cone cell doesn’t help anyone. I could insist, for example, that I’m wearing ultraviolet tights which look black to humans and pair them with a “complementary” teal skirt, but that’s not the same as wearing purple tights and a green skirt. In a way, it’d be good if we had more than one word for theory, but on the whole it’s futile for a sole individual to attempt to change language. Therefore we should really just be careful to think through how we are using that word and take steps to signal the distinction in other ways.

Could Science End?

Yesterday I considered the question of what civilisation would be like if nobody could do mathematics “as we know it”, which is one fairly minor suggestion for an answer to the Fermi Paradox of “where are all the aliens?”. Of course the simplest answer to this is that there aren’t any and probably haven’t ever been any, but there are also multitudinous other possibilities, many of which have interesting implications for us even if we never make contact with any. Yesterday, the fault was in ourselves, but what if the fault were not in ourselves, but in our stars? What if the issue is not that other intelligent life forms lack a capacity we do have, but that there is a realistic, external but still conceptual problem which prevents anyone from getting out there into interstellar space in a reasonable period of time? What if, so to speak, science “runs out”?

Even if there are no aliens, this possibility is still important. It’s entirely possible that they are in fact completely absent but science will still stop, and that would be a major issue. It would be rather like the way Moore’s Law has apparently run up against the buffers due to thermal noise and electron tunnelling. Ever since the first integrated circuits were made at the end of the 1950s, there’s been an approximate doubling of transistors per unit area of silicon (or germanium of course) every two years or so, which may be partly driven by commercial considerations. However, as they get smaller, the probability of an electron on one side of a barrier tunnelling to the other and thereby interfering with the operation of transistors increases. In 2002, it was theorised that the law would break down by the end of the decade due to Johnson-Nyquist noise, which is the disturbance of electrical signals due to the vibration of atoms and molecules tending to drown out weak signals, which is what nanoscale computing processes amount to. It isn’t clear whether Moore’s Law has stopped operating or not, because if it has, that has consequences for IT companies and therefore for their profitability and share values, so the difficulty in ascertaining whether it has is a good example of how capitalism distorts processes and research which would ideally operate in a more neutral environment. There’s also a tendency for people to suppose that scientific change will not persist indefinitely because they are “set in their ways”, as it were, so it’s hard to tell whether it actually has stopped happening. It’s been forecast, in a possibly rather sensationalist way, that once Moore’s Law does stop, there will be a major economic recession or depression and complete social chaos resulting from the inability of IT companies to make enough money to continue, but I don’t really know about that. It seems like catastrophising.
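For a sense of what the doubling implies, here is a minimal sketch of an unbroken two-year doubling, starting from the roughly 2 300 transistors commonly quoted for the first commercial microprocessor in 1971; the starting point is a convenient anchor rather than a claim about any particular product line:

```python
def projected_transistors(start_count, start_year, year, doubling_years=2.0):
    """Transistor count implied by a clean exponential doubling every doubling_years."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

for year in (1971, 1991, 2011, 2031):
    print(year, f"~{projected_transistors(2300, 1971, year):,.0f} transistors")
```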

More widely, there are areas of “crisis”, to be sensationalist myself for a moment, in science, particularly in physics but as I’ve mentioned previously also perhaps in chemistry. The Moore’s Law analogy is imperfect because it isn’t pure scientific discovery but the application of science to technology where it can be established that a particular technique for manufacturing transistors has a lower size limit. This is actually a successful prediction made by physics rather than the end of a scientific road. However, the consequences may be similar in some ways because it means, for example, that technological solutions relying on microminiaturisation of digital electronics would have to change or be solved in a different way, which is of course what quantum computers are for. The end of science is somewhat different, and can be considered in two ways.

The first of these is that the means of testing hypotheses may outgrow human ability to do so. For instance, one possible time travel technique involves an infinitely long cylinder of black holes but there is no way to build such a cylinder as far as can be seen, particularly if the Universe is spatially finite. Another example is the increasing size and energy required to build particle colliders. The point may come when the only way to test an hypothesis of this kind would be to construct a collider in space, and right now we can’t do this and probably never will be able to. There would be an extra special “gotcha” if it turned out that in order to test a particular hypothesis involving space travel it would be necessary to have the engines built on those principles in the first place to get to a place where it could be falsified.

Another way it might happen is that there could be two or more equally valid theories which fit all the data and are equally parsimonious and there is no way of choosing among them. It kind of makes sense to choose a simpler theory, but on this level it becomes an æsthetic choice rather than a rational one because nothing will happen as a result of one theory being true but not the other. If all the data means all the observable data, this is the impasse in which science will find itself.

It also seems to be very difficult to arrive at a theory of quantum gravity. Relativity and quantum physics are at loggerheads with each other and there seems to be no sign of resolution. There “ought to be” some kind of underlying explanation for the two both being true, but it doesn’t seem to be happening. Every force except gravity is explained using the idea that particles carry the message of that force, such as photons for electromagnetism and gluons for the strong nuclear force, but gravity is explained using the idea that mass distorts space instead, meaning that gravity isn’t really a force at all. I’ve often wondered why they don’t try to go the other way and use the concept of higher dimensions to explain the other forces instead of using particles, but they didn’t and I presume there’s a good reason for that. It wouldn’t explain the weak force I suppose. However, there does seem to be a geometrical element in the weak force because it can only convert between up and down quarks if their spin does not align with their direction of motion, so maybe. But so far as I know it’s never been tried this way round, which puzzles me. There’s something I don’t know.

There may also be a difference between science running out and our ability to understand it being exceeded. Already, quantum mechanics is said to be incomprehensible on some level, but is that due to merely human limitations or is it fundamentally mysterious? This is also an issue evoked with the mind-body problem, in that perhaps the reason we can’t seem to reconcile the existence of consciousness with anything we can observe is that the problem is just too hard for humans to grasp.

People often imagine the ability to build a space elevator, which is a cable reaching thousands of kilometres into space to geostationary orbit up and down which lifts can move, making it far easier to reach space, but there doesn’t appear to be a substance strong enough to support that on Earth, although it would be feasible on many other planets, moons and asteroids using existing technology. We might imagine it’s just round the corner, but maybe it isn’t. Likewise, another common idea is the Dyson sphere, actually acknowledged by Freeman Dyson himself as having originally been thought of by Olaf Stapledon, which encloses a sun in a solid sphere of extremely strong matter to exploit all of its energy, which again may not exist. And the obvious third idea is faster than light travel, which is generally taken to be impossible in any useful way. One way the search for extraterrestrial intelligence (SETI) could be conducted is to look for evidence of megastructures like Dyson spheres around stars, and in one case a few people believed they’d actually found one, but what if they turn out to be impossible? Dyson’s original idea was a swarm of space stations orbiting the Sun rather than a rigid body, which seems feasible, but an actual solid sphere seems much less so. Our plans of people in suspended animation or generation ships crossing the void, or spacecraft accelerated to almost the speed of light may all just be pipe dreams. Our lazy teenage boasts will be high precision ghosts, to quote Prefab Sprout. Something isn’t known to be possible until it’s actually done.

If non-baryonic dark matter exists, the beautiful symmetries of elementary particles which the Standard Model of physics has constructed do not include it. And despite my doubts, it may exist, and even if it doesn’t there’s an issue with explaining how galaxies rotate at the rate they do. However, at any point in the history of science there were probably gaps in knowledge which seemed unlikely to be filled, so I’m not sure things are any different today. It reminds me of the story, apparently apocryphal, that the US patent office was to be closed in 1899 CE because everything had already been invented. However, there is also the claim that technological progress is slowing down rather than accelerating, because the changes wrought in society by the immediate aftermath of the Industrial Revolution were much larger than what has happened more recently. At the end of the nineteenth century, there seemed to be just two unresolved problems in physics: the ultraviolet catastrophe and the detection of the luminiferous æther. These two problems ended up turning physics completely upside down. Now it may be possible to explain almost any kind of observation, with the rather major exceptions which Constructor Theory tries to address, though these seem to be qualitatively different. The incompleteness of these theories, such as the Uncertainty Principle and the apparent impossibility of reconciling relativity with quantum mechanics, could still be permanent because of the difficulty of testing them. Dark matter would also fall under this heading, or rather, the discrepancy in the speed of galactic movement and rotation does.
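That rotation discrepancy can be sketched very roughly: if essentially all of a galaxy’s mass were the visible matter concentrated towards its centre, orbital speeds should fall off with distance, whereas measured rotation curves stay roughly flat. The mass and the comparison speed below are illustrative round numbers, not data:

```python
import math

G = 6.674e-11             # gravitational constant
M_VISIBLE = 2e41          # ~1e11 solar masses of visible matter, kg (assumed)
KPC = 3.086e19            # one kiloparsec in metres

def keplerian_speed(radius_m, enclosed_mass=M_VISIBLE):
    """Circular orbital speed if essentially all the mass lies inside the orbit."""
    return math.sqrt(G * enclosed_mass / radius_m)

for r_kpc in (5, 10, 20, 40):
    v = keplerian_speed(r_kpc * KPC) / 1000
    print(f"{r_kpc:3d} kpc: expected ~{v:5.0f} km/s (observed curves stay near ~200 km/s)")
```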

This is primarily about physics of course, because there’s a strong tendency to think everything can be reduced to it, but biocentrism is another possible approach, although how far that can be taken is another question. Also, this is the “trajectory and particles” version of physics rather than something like constructor theory, and I’m not sure what bearing that has on things. Cosmology faces a crisis right now as well, the so-called Hubble tension, because two precise and apparently reliable methods of measuring the rate of expansion of the Universe, one based on the cosmic microwave background and the other on the distances of relatively nearby stars and supernovae, give two different results. Though I could go on finding holes, which may well end up being plugged, I want to move on to the question of what happens if science “stops”.

The Singularity is a well-known idea, described as “the Rapture of the nerds”. It’s based on the perceived trend that scientific and technological progress accelerate exponentially until they are practically a vertical line, usually understood to be the point at which artificial intelligence goes off the IQ scale through being able to redesign itself. Things like that have happened to some extent. For instance, AlphaGo played the board game Go (AKA Weichi, 围棋) and became the best 围棋 player in the world shortly after, and was followed by AlphaGo Zero, which only played games against itself to start with and still became better than any human player of the game. This was a game previously considered impossible to computerise due to the fact that each move had hundreds of possible options, unlike chess with its few dozen at most, meaning that the game tree would branch vastly very early on. But the Singularity was first named as such by Vernor Vinge around three decades ago, and later popularised by Ray Kurzweil; before that, the SF writer Murray Leinster based a story on the idea in 1946, and it hasn’t happened. Of course a lot of other things have been predicted far in advance which have in fact come to pass in the end, but many are sceptical. The usual scenario involves transhumanism or AI, so to an extent it seems to depend on Moore’s Law in the latter case although quantum computing may far exceed that, but for it to happen regardless of the nature of the intelligence which drove it, genuine limits to science might still be expected to prevent it from happening in the way people imagine. For this reason, the perceived unending exponential growth in scientific progress and associated technological change could be more like a sigmoid graph:

I can’t relabel this graph, so I should explain that this is supposed to represent technological and scientific progress up to the Singularity, which occurs where the Y-axis reads zero.
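Since I can’t show the graph properly, here’s a minimal sketch in Python of the shape I have in mind, with entirely made-up growth rates and timescales: an exponential and a logistic (sigmoid) curve track each other closely at first, but the exponential runs away while the logistic levels off at a ceiling.

import math

# Toy comparison of exponential and logistic ("sigmoid") growth.
# Every constant here is an arbitrary illustration, not a measurement.
K = 100.0     # ceiling the logistic curve levels off at
r = 0.8       # growth rate shared by both curves
t0 = 10.0     # time at which the logistic curve reaches half its ceiling

def exponential(t):
    return math.exp(r * (t - t0))

def logistic(t):
    return K / (1.0 + math.exp(-r * (t - t0)))

for t in range(0, 21, 2):
    print(f"t={t:2d}  exponential={exponential(t):12.2f}  logistic={logistic(t):8.2f}")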

There’s a difference between science and technology of course. It’s notable, for example, that the development of new drugs usually seems to involve tinkering with the molecular structure of old drugs to alter their function rather than using novel compounds, and there seems to be excessive reliance in digital electronics on a wide variety of relatively scarce elements rather than the use of easily obtained common ones in new ways. And the thing is, in both those cases we do know it’s often possible to do things in other ways. For instance, antibacterial compounds and anti-inflammatories are potentially very varied, meaning for example that antibiotic resistance need not develop anything like as quickly as it does, even if antibiotics continue to be used irresponsibly in animal husbandry, and there are plenty of steps in the inflammatory process which can be modified without the use of either steroids or the so-called non-steroidal anti-inflammatories, which are in fact all cycloöxygenase inhibitors. There are also biological solutions to problems such as touchscreen displays and information processing – flatfish and cuttlefish camouflage, for example – which imply that these problems can be solved without using rare earths or relatively uncommon transition metals. So the solutions are out there, unexploited, possibly because of capitalism. This would therefore mean that if the Singularity did take place, it might end up accelerating technological progress for quite a while through the replacement of current technology by something more sustainable and appropriate to the needs of the human race. Such areas of scientific research are somewhat neglected, meaning that in those particular directions the chances are we really have not run out of science. They could still, in fact, have implications for the likes of space travel and robotics, but it’s a very different kind of singularity from what Kurzweil and his friends seem to be imagining. It’s more like the Isley Brothers:

Having said that, I don’t want to come across as a Luddite or anti-intellectual. I appreciate the beauty of the likes of the Standard Model and other aspects of cutting edge physics and cosmology. I’m not sure they’re fundamental though, for various reasons. The advent of constructor theory, for example, shows that there may be other ways of thinking about physics than how it has been considered in recent centuries, whether or not it’s just a passing trend. Biocentrism is another way, although it has its own limits. This is the practice of considering biology as fundamental rather than physics. The issue of chemistry in this respect is more complex.

Returning to the initial reason this was mentioned, as a solution to the Fermi Paradox, it’s hard to imagine that this would actually make visiting other star systems technologically unfeasible. If we’re actually talking about human beings travelling to other star systems and either settling worlds or constructing artificial habitats to live in there, that doesn’t seem like it would be ruled out using existing tech. The Dædalus Project, for example, proposed a starship engine based on the rapid, repeated detonation of small fusion charges to accelerate a craft to around twelve percent of the speed of light, though not with humans on board, and another option is a solar sail, either using sunlight alone or driven by a laser. Besides that, there is the possibility of using low doses of hydrogen sulphide to induce suspended animation, or keeping a well-sealed cyclical ecosystem going for generations while people travel the distances between the stars. There are plenty of reasons why these things won’t happen, but technology doesn’t seem to be a barrier at all here because methods of doing so have been on the drawing board since the 1970s. Something might come up of course, such as the maximum possible intensity of a laser beam or the possibility of causing brain damage in suspended animation, but it seems far-fetched that every possible technique for spreading through the Galaxy is ruled out unless somewhere out there in that other space of scientific theory there is some kind of perpetual motion-like or cosmic speed limit physical law which prevents intelligent life forms or machines from doing so.

All that said, the idea that science might run out is intriguing. It means that there could be a whole class of phenomena which are literally inexplicable. It also means humans, and for that matter any intelligent life form, are not so powerful as to be able to “conquer” the Cosmos, which is a salutary lesson in humility. It also solves another peculiarity: that somehow we, who evolved on the savannah running away from predators, parenting and gathering nuts and berries for food and having the evolutionary adaptations to do so, have developed the capacity to understand the Universe – because in this scenario we actually haven’t.

Vulcan And Vulcan

If you say “Vulcan” to most people nowadays in an Outer Space context, the chances are they’ll think of Spock, and that’s an entirely valid thing to do. However, if you were to say it to anyone with much knowledge of astronomy in the nineteenth century, it would’ve called something completely different to mind: a planet which orbits the Sun even more closely than Mercury. I’m going to cover both in this post.

Firstly, the ‘Star Trek’ Vulcan, whose Vulcan name is Ni’Var. This is reputed to orbit the star 40 Eridani A, a member of a trinary star system also known as ο2 Eridani (Omicron-2 Eridani – that isn’t an “O”) sixteen and one quarter light years from here, and therefore also quite close to 82 Eridani, which is said to be one of the most suitable nearby stars for life, around which a possibly habitable planet orbits in real life. Of the stars, A is an orange dwarf, B a white dwarf and C a red dwarf which is also a flare star. Because B would previously have been a red giant, which then shed its outer layers before collapsing into a white dwarf, the chances are that any habitable planets orbiting A would have been sterilised during that phase, and since C is a flare star, this is also unsuitable, although there would be nothing to stop an interstellar civilisation settling a planet in A’s habitable zone, which would of course be Vulcan.

As I’ve mentioned, I don’t pay much attention to either ‘Discovery’ or the new ‘Star Trek’ films, but I’m aware that Vulcan has been destroyed in revenge for the destruction of Romulus. I find this a bit annoying and I’m not sure what the point of it was plot-wise, but it doesn’t alter the in-universe fact that Vulcan was the homeworld of the first species to make open contact with humans when Zefram Cochrane first activated the warp drive. I’m also aware that that is inconsistent with the depiction of Cochrane in TOS. It is interesting, though, that any real planet in the habitable zone of 40 Eridani A would have been severely damaged when 40 Eridani B swelled into a red giant and shed its outer layers.

I understand Vulcan to have no moons, higher gravity than Earth and no surface oceans. I’m also aware that Romulans and Vulcans are the same species. It irritates me that they’re humanoid but also interests me that some of their anatomy and physiology is known, such as their copper-based respiratory pigment. Then again, although the in-universe explanation of widespread humanoid aliens is that we are all descended from humanoid ancestors who existed around the time our own Solar System formed, it’s also conceivable that convergent evolution would lead to similar body forms among sentient tool-using species. Back to Vulcan itself though. It has a thinner atmosphere than Earth’s, which I think justifies the copper-based blood pigment, and the sky and much of the surface is red. There are seas, i.e. large landlocked lakes, rather than oceans continuous with each other. Depending on the total surface coverage of bodies of water, I think this would probably make the planet uninhabitable for humans although clearly not for native life. 40 Eridani A is a K-type star with a longer lifetime than the Sun’s in terms of being able to support a habitable planet. A planet orbiting it at the distance necessary to receive the same quantity of light and heat from its primary as we do from our Sun would have a mean orbital radius of about 0.68 AU, i.e. sixty-eight percent of Earth’s distance from the Sun, and a 223-day year. However, Vulcan is supposed to be hotter than Earth and might therefore be closer to its sun or have more greenhouse gases in its atmosphere, or it could just reflect less heat back into space, and in fact it probably would due to less ice on its surface. The difficult thing to account for with Vulcan is the combined higher gravity and thinner atmosphere, but there is another reason than gravity why a body might lose some of the gas surrounding it, which is consistent with what we “know” about Vulcan. Earth’s strong magnetic field is generated by a dynamo, namely convection in our liquid iron-nickel outer core, perhaps stirred along by the tides our own large moon, Cynthia, raises within it, and that field gives us our magnetosphere, which deflects ionising radiation from the solar wind which might otherwise reach Earth’s surface and strip away our atmosphere. Hence Vulcan, with no pre-existing satellites, might enjoy less of this protection but would on the other hand still be able to hold on to some atmosphere because of its higher gravity, so maybe that is in fact realistic. Venus has no magnetic field but an extremely dense atmosphere, although not one hospitable to life at the solid surface, sustained by volcanic outgassing of carbon dioxide with no oceans left to lock the carbon back into rock – its water having long ago been broken apart by photolysis, the splitting of molecules by light, and lost. However, we’re basically aware that Vulcan’s atmosphere has enough oxygen to support human life without their own oxygen supply, and not enough carbon dioxide to poison us, which is 0.5% at our own atmospheric pressure. 170 millibars partial pressure of oxygen is required for this and CO2 cannot be making a significant contribution to the pressure, so we can surmise that the rest of Vulcan’s atmosphere substantially consists of other gases. It isn’t pure oxygen. In fact, it’s quite likely to be nitrogen if Vulcan physiology is anything like ours and their bodies consist partly of protein, as the nitrogen has to come from somewhere, so I’m going to say the mean surface air pressure is about 0.25 bars. I’ve plucked this figure out of the air, so to speak.
There probably is no such thing as sea level there because of the various lakes with different presumed depths and heights, so this would be defined as some kind of mean distance from the centre of the planet or a level at which gravitational pull is close to a particular standard. The boiling point of water on Vulcan at that pressure is therefore about 65°C, but we know from McCoy’s mouth that Vulcan is very hot compared to Earth, so this puts an upper limit on its surface temperature unless it’s so hot at the equator that the water there simply boils away.
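Those figures can be sanity-checked in a few lines of Python. The stellar luminosity and mass below (0.46 and 0.84 times the Sun’s for 40 Eridani A) are rounded values I’ve assumed rather than anything canonical, and the boiling point comes from the Clausius-Clapeyron relation, so treat the output as ballpark only:

import math

# Assumed, rounded parameters for 40 Eridani A -- not canonical figures
L_star = 0.46        # luminosity in solar luminosities
M_star = 0.84        # mass in solar masses

# Orbital radius giving the same stellar flux as Earth receives: flux ~ L / r^2
a = math.sqrt(L_star)                          # in AU
# Kepler's third law in solar units: P^2 = a^3 / M, with P in years
P_days = math.sqrt(a ** 3 / M_star) * 365.25

# Boiling point of water at 0.25 bar via Clausius-Clapeyron, anchored at 100 C and 1 atm
L_vap = 40.7e3       # latent heat of vaporisation of water, J/mol
R = 8.314            # gas constant, J/(mol K)
P0, T0 = 1.013e5, 373.15
P = 0.25e5
T_boil = 1.0 / (1.0 / T0 - (R / L_vap) * math.log(P / P0))

print(f"equal-insolation orbit: {a:.2f} AU, year of about {P_days:.0f} days")
print(f"water boils at about {T_boil - 273.15:.0f} degrees C under 0.25 bar")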

40 Eridani A is orange. The sky is likely to be close to a complementary colour, such as teal, given that, but because of the dusty surface it’s entirely feasible that it would in fact be pinkish due to small particles high in the atmosphere. Also, the general ruddiness of the planet as shown on screen gives the impression of heat and dryness, so artistically that does seem to be a good decision. The same features make some people think of Mars as a hot planet when in fact it’s often colder than Antarctica. Regarding sparse water cover, a thin atmosphere might make sense here too, particularly if water is regularly evaporating from the surface at the equator, since some might then be lost into space.

Vulcan would also lack plate tectonics if it’s like this, since that process is lubricated by water. The planet has no continents as such, but it does have active volcanoes and lava fields, which is to some extent to be expected as it corresponds to the “hot spot” situation in the centre of the Pacific plate on Earth, where magma seems to need to vent. Here, this results in Hawaiʻi, but on Vulcan, with no oceans to cover them, such volcanoes would simply pile up into a mountain range. There would be nothing like the Pacific Ring Of Fire, and also no fold mountains because those are caused by the collision of continental plates.

Vulcan’s colour is depicted differently in different manifestations of the series. In TOS and Enterprise, it’s red. In TAS it’s yellower, and in TNG brownish. However, on Mars there is variation in colour from space due to a dust storm season, and this can be imagined on Vulcan too. Maybe one way to think of Vulcan is as a larger, hotter version of Mars.

The real 40 Eridani A does have a planet. This is, as usual, called “b”, and orbits much closer to the star than the inner edge of the habitable zone. It has a roughly circular orbit 0.22 AU from the star and a mass estimated at 8.5 times Earth’s (both those figures are rounded off). At Earth’s density, this would give it a diameter of around 26 000 kilometres, a type of planet unknown in our own solar system at any distance from the Sun, and it’s classed as a “Super-Earth”, but it has a period of 43 days and its dayside surface would be like Mercury’s, if it rotates at all. It’s also one of the closest known Super-Earths. Its orbit differs considerably from Mercury’s, which will become relevant later in this post, in being much less elliptical, which to me, in my probable naïveté, suggests there are no planets larger than it in at least the inner part of its system.
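Those rounded-off figures can be cross-checked the same way; the stellar mass is again an assumed round number, and the size estimate just uses the fact that, at a fixed density, mass goes as the cube of the diameter:

import math

M_star = 0.84          # assumed mass of 40 Eridani A, solar masses
a = 0.22               # semi-major axis of 40 Eridani b, AU (rounded)
m_planet = 8.5         # mass in Earth masses (rounded)
d_earth = 12742.0      # Earth's mean diameter, km

P_days = math.sqrt(a ** 3 / M_star) * 365.25       # Kepler's third law, solar units
d_planet = d_earth * m_planet ** (1.0 / 3.0)       # diameter at Earth's mean density

print(f"period: about {P_days:.0f} days, diameter at Earth density: about {d_planet:.0f} km")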

This brings me to the other Vulcan. In the nineteenth Christian century, the French astronomer Urbain Le Verrier came up with a particularly accurate model of planetary motion within the Solar System. It had been noted that the most recently discovered planet, Uranus, tended to drift slightly behind and ahead of its predicted position given its distance from the Sun and shape of its orbit. From this, Le Verrier calculated mathematically that there was likely to be another planet further out pulling at it, and predicted its position, which turned out to be correct. In fact he almost had it named after him, but they eventually decided to call it Neptune. This established his reputation and consequently, when he turned his attention to the orbit of Mercury, people paid attention and took his views seriously.

Mercury’s orbit is quite unusual compared to the other planets, particularly if you ignore the period of time when Pluto was regarded as one. It’s the most eccentric orbit by a long way compared to the others, with a variation in distance from the Sun of around twenty percent. Le Verrier also noted that the “points” of the orbit – the perihelion and aphelion – precessed around the Sun much faster than those of the other planets, even allowing for Mercury’s short year of eighty-eight days. Just as he had with Neptune, Le Verrier proposed that there was either an as-yet undiscovered planet even closer to the Sun or a number of smaller bodies like asteroids within the orbit of Mercury, and since it would’ve been so close and so hot, he called it Vulcan after the Roman god of fire, Vulcanus. The planet’s existence could be confirmed in two ways. Either it could be detected in transit, as most exoplanets are detected at the moment, or it could possibly be glimpsed during a total solar eclipse. A number of astronomers then reported that they had indeed seen this planet transiting the Sun. For instance, Edmond Lescarbault, a doctor, described a tiny black spot moving across the Sun faster than a sunspot carried along by the Sun’s rotation would, and also lacking a sunspot’s penumbra. The observations even seemed to confirm Le Verrier’s prediction of Vulcan’s size and orbit. However, it was difficult to predict when these transits would occur because that depended on the tilt of Vulcan’s orbit compared to ours. Mercury, for example, can only be seen to transit the Sun in May or November because only then is the tilt of both its and our orbits aligned such that it can get between us and the Sun. The observations did seem to occur fairly randomly, but at first glance Mercury’s would too, if you didn’t know anything about its movements already.

There was a total eclipse of the Sun in 1883, shortly after Le Verrier’s death in 1877, during which Vulcan was not observed. It was still possible that the planet was either behind or transiting the Sun at the time, but six further such observations, the last in 1908, also failed to turn it up, making it increasingly improbable that the planet existed. However since that time astronomers have claimed that close ups of the Sun’s surface do sometimes show small black dots which are not sunspots, although these may be imperfections of photographic plates, and there are asteroids which approach the Sun more closely than Mercury does, such as Icarus. It strikes me that it’s not only possible but probable that there are asteroids which orbit entirely within the orbit of Mercury, although they would have to be very small and would be difficult to observe or confirm. These are known as Vulcanoids, and would have to be between six kilometres and a couple of hundred metres in diameter. Every region of the Solar System which is not severely perturbed by the gravity of known objects has been found to contain objects like asteroids or comets, so if the innermost region of the system doesn’t have any this must be due to a non-gravitational effect. It is in fact possible that the light from the Sun is so strong at that distance that it would push smaller bodies away from it over a long period of time, so this may be the explanation. This might sound far-fetched, but it’s been proposed that this effect could be used to divert asteroids which would otherwise crash into Earth by painting them white in order that the pressure of light from the Sun would change their orbits, and this is also the principle used in a solar sail. The MESSENGER probe took photographs of the region but this was limited because damage from sunlight needed to be avoided. Much closer in than Mercury, asteroids are likely to vaporise of course.

Vulcan was considered to orbit 26 million kilometres from the Sun, giving it a sidereal period (“year”) of twenty-six days. At another point, observations appeared to show it had a year of 38.5 days. I think it was also supposed to be very small but I can’t track this down: possibly about a thirtieth the mass of Mercury, which with the same density would’ve given it a diameter of around 1 600 kilometres, probably meaning that if it had been found to exist it would’ve been demoted from planethood by now in the same way as Pluto was. In fact, if it did exist, it would indeed have perturbed the orbit of Mercury but the other factors which turned out to be the explanation for this phenomenon would still be in play, meaning that there would’ve been an even greater anomaly unless the planet happened to be exactly the right mass and in exactly the right place, and possibly retrograde. Some kind of pointless immense astroengineering project could probably achieve that to some extent, but why? Possibly to prevent us from being aware of relativity?

The fact is that the planets don’t simply orbit the Sun alone without influencing one another and the Sun. This is the famous three-body problem: there’s no general way to work out how three bodies will orbit each other, let alone the much larger number of massive bodies in the Solar System. It’s possible to work out how much gravitational influence two planets would have on each other if they were the only two bodies in the Universe though, and if initial conditions are known. For instance, Venus and Earth approach each other to within fifty million kilometres and have roughly the same mass, so left to themselves they would orbit each other at roughly twenty-five million kilometres from their centre of gravity once every eighty years or so, if I’ve calculated that correctly, and at the closest approach, which would be during a transit of Venus, that’s the gravitational pull we’re exerting on each other – about forty-five thousand times less than the Sun’s. Mercury is the least massive planet, being just over half the mass of Mars, the next smallest. Pluto is of course far lower in mass, and if Cynthia is considered a planet in its own right, that would be considerably less massive. Anyway, this means that Mercury is pulled around a lot by the other planets. Venus approaches it to within about 38 million kilometres, but without doing the maths it isn’t clear whether that’s the biggest gravitational influence, because Jupiter is so much more massive than the other planets even though it’s far further away. Jupiter is over three hundred times the mass of Earth but would get within 4.8 AU of Mercury, which actually gives it roughly the same influence as Venus. But this is not the only reason Mercury’s orbit precesses as much as it does.
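Here’s a quick back-of-the-envelope check of those comparisons in Python, using rounded textbook masses and distances; it also gives the orbital period of an isolated Earth–Venus pair, which is where the figure of eighty-odd years above comes from:

import math

G = 6.674e-11                              # gravitational constant, SI units

# Rounded masses (kg) and distances (m)
m_sun, m_earth, m_venus, m_jup = 1.989e30, 5.97e24, 4.87e24, 1.90e27
au = 1.496e11
d_earth_sun   = 1.0 * au
d_venus_earth = 5.0e10                     # closest approach, ~50 million km
d_venus_merc  = 3.8e10                     # closest approach, ~38 million km
d_jup_merc    = 4.8 * au

def accel(mass, distance):
    """Gravitational acceleration produced by `mass` at `distance`."""
    return G * mass / distance ** 2

print("Sun's pull on Earth / Venus's pull on Earth:",
      round(accel(m_sun, d_earth_sun) / accel(m_venus, d_venus_earth)))
print("Venus on Mercury:  ", accel(m_venus, d_venus_merc), "m/s^2")
print("Jupiter on Mercury:", accel(m_jup, d_jup_merc), "m/s^2")

# Period of an isolated Earth-Venus pair orbiting their common centre of gravity
T = 2 * math.pi * math.sqrt(d_venus_earth ** 3 / (G * (m_earth + m_venus)))
print("mutual orbital period:", round(T / 3.156e7), "years")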

Albert Einstein listed a number of ways to test his theory of general relativity, one of which was the orbit of Mercury. The pull of the other planets is insufficient to explain its precession in Newtonian terms: there’s still a bit left over however carefully you do the sums, about seven percent more than there “should” be. The explanation for this was instrumental in getting general relativity accepted. Einstein made three suggestions about how general relativity could be corroborated. One was that light would be red shifted as it climbed out of a gravity well. Remarkably, although it took something like four decades, the observation of 40 Eridani B eventually showed that this was so, presumably because the other stars in its system allow the white dwarf’s mass to be worked out. Gravity stretches light because it distorts space and time. The second proposition was that stars observed near the Sun during a total solar eclipse (Again! They’re useful things) would appear to be in a different position because their light would be bent by the solar gravity, and this was indeed found to be so a few years later. However, the world had to wait for these two findings. The other one was that Mercury’s orbit would precess at the rate it did having taken into account the perturbations of all the other planets, and again this was found to be so, but in this case it was already known that this happened, because Le Verrier had observed it in the previous century and the existence of Vulcan had been refuted. The reason this happens, I have to admit, I don’t really understand, but I can provide a kind of visual model which could show it.
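Before getting to that visual model, it’s worth saying that the relativistic correction itself is a one-line formula: the extra advance of the perihelion per orbit is 6πGM/(c²a(1−e²)), and plugging in Mercury’s rounded orbital elements gives the famous forty-three arc-seconds per century, the missing seven percent or so:

import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
c = 2.998e8            # m/s
a = 5.79e10            # Mercury's semi-major axis, m
e = 0.2056             # orbital eccentricity
year_days = 87.97      # Mercury's orbital period in days

# Extra perihelion advance per orbit predicted by general relativity, in radians
dphi = 6 * math.pi * G * M_sun / (c ** 2 * a * (1 - e ** 2))

orbits_per_century = 36525.0 / year_days
arcsec = dphi * orbits_per_century * (180.0 / math.pi) * 3600.0
print(f"relativistic precession: {arcsec:.1f} arc-seconds per century")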

The Rubber Sheet model treats space as a two-dimensional sheet, flat when left to itself, on which weights representing stars and planets create dents. Obviously this is not an adequate explanation as such of general relativity for several reasons, one of which is that it uses gravity to explain gravity – that’s what’s pulling down the weights. It also makes space appear to be a substance, something which physicists had worked heavily against when they disproved the existence of the luminiferous æther, which since it was supposed to be extremely rigid wouldn’t work in this situation anyway. It shouldn’t be mistaken for Einstein’s theory itself, but it is a useful way of looking at it. In any case, if you imagine the kind of dent which shows up in the title sequence of Disney’s ‘The Black Hole’:

. . . which is like one of those charity coin collection things, space around the Sun is distorted to a limited extent like that, and attempting to do a “wall of death”-style orbit around it, which would in any case be elliptical rather than perfectly circular because the Universe is imperfect like that, would lead to your bike describing a series of ellipses which were not perfectly congruent with each other but were more like a spirograph pattern. Having written that paragraph with its references to a number of very ’70s things makes me wonder if it’s going to make any sense to someone born after Generation X.

Now I can see that this does happen, but I am also puzzled by it. Whereas I’m sure that I couldn’t aim a coin at one of those charity collection things in such a way that it would just circle around at the same level until friction interfered, and that at best, if I could make it describe an elliptical path for a few revolutions, the bits of the ellipse furthest from and closest to the hole would precess, I would put that down to the fact that I, and anyone else to a lesser extent, can’t aim perfectly, rather than to the geometry of the hole itself. Nevertheless, this appears to be what I’m being asked to believe: that it isn’t only one’s inability to aim perfectly, or for that matter the friction acting on the coin (or ball bearing – let’s take the instability of the coin out of the picture), that leads to this precession. Apparently, if you were to have too much time on your hands and designed some kind of precision ball-bearing-throwing machine for charity coin collectors, which wouldn’t be popular because they want coins, not ball bearings, it would do the wobble thing even if it stayed circular enough not to fall down the hole immediately, and it would wobble more the closer it was to the hole. So they say, and this is what got general relativity accepted.

There have been other Vulcans. For instance, one of the many hypothetical planets in Western astrology is the intramercurial Vulcan, seen as the soul ruler of Taurus and orbiting once every twenty days. This Vulcan would go retrograde more often than Mercury. It’s fiery and urges the individual to look for non-physical knowledge, which makes sense given its history in astronomy. It was also suggested in a poll as the name for one of the moons of Pluto, and actually won the most votes, but that moon was instead named Kerberos, after the Hadean dog, which had been the runner-up in the poll. Vulcan actually doesn’t seem like a very good name for a moon of an icy planet way out in the outer reaches of the Solar System, but I don’t know the reasons it wasn’t used. Maybe the IAU just didn’t want to be reminded of what they might regard as an embarrassing phase in the recent history of their science. In the Second Doctor story ‘The Power Of The Daleks’, there’s a planet called Vulcan which is settled by humans and highly volcanic with pools of fuming mercury on its surface. Doesn’t sound very nice at all really. There does not, however, seem to be an asteroid named Vulcan, which is quite surprising.

I’ve sometimes wondered if there’s a story behind the naming of the ‘Star Trek’ Vulcan and if it’s in any way connected to the hypothetical planet, but I don’t know. How about you?

The Machine That Explains Everything

Compare and contrast:

with:

. . . and I’m now wondering if anyone has ever put those two songs next to each other before, on a playlist or otherwise. While I’m at it, here’s another:

(not quite the same). I’ve probably done it now.

Then there’s this:

That’s quite enough of that. Or is it?

Like Franco Battiato, Chiara Marletto is Italian, although she was born at the opposite end of the country. She’s Torinese while he’s Sicilian, although he did move to Milan(o). However, this is not that important unless it says something about the nature of northern Italian culture or education and that’s another story. The germane issue is that there are two distinct approaches to science – if science is seen as based on physics, which is not the only option; biocentrism is another, possibly relevant to where I’m about to go with this – one of which is much more prominent and formally developed than the other. I’m not talking about classical versus quantum or the issue of quantum gravity and the reconciliation of relativity with quantum physics, although those are important and this is relevant to the latter. ‘To Faust A Footnote’ is a musically-accompanied recitation of Newton’s laws of motion, or at least something like that, and describes the likes of trajectories and objects in motion. Such descriptions are found in Johannes Kepler’s laws of planetary motion, and although relativity and quantum mechanics are radically different in some ways from this classical approach, this aspect remains the same.

Around a century ago the world of physics saw the climax of a revolution. Triggered, I’m going to say, by two thoughts and perhaps experiments, it was realised that the idea of particles as little snooker balls which could ping about at any speed and were pulled towards each other and pushed apart by various broadly similar forces such as magnetism and gravity didn’t really describe the world as it actually is. The first clue had been known for millennia: hot objects glow red rather than white. All objects emit electromagnetic radiation across a range of frequencies, glowing orange and then white as they get hotter, but the classical theory predicted that the highest frequencies should always dominate, so that any warm object ought to radiate all of its heat away almost immediately as ultraviolet light and beyond and drop to absolute zero – the so-called ultraviolet catastrophe. The solution is that light energy comes in minimum packets called quanta, from the Latin for “how much”, whose size increases with frequency, and this chokes off the high frequencies and prevents the quantity of heat radiated by any object from being infinite. The other was the increasing difficulty of maintaining the idea that light waves had a medium, the luminiferous æther, which would have to combine various unusual properties to work in this way, culminating in the Michelson-Morley experiment, which showed that light travels at the same speed regardless of the speed of the observer, meaning for example that, 299 792 km/s being the speed of light, if you were travelling at 299 791 km/s, you would still measure the speed of light at 299 792 km/s. You can’t catch up with it. In the Michelson-Morley experiment, a beam of light is split and sent in two directions at right angles to each other, bounced off mirrors, recombined, and the interference pattern observed. Because Earth is orbiting the Sun at around 30 km/s, if light is moving through a medium which is not being dragged along with us – and it had previously been shown that it couldn’t be – the light “ought” to take slightly longer on one of the two round trips than the other, which would show up as a shift in the interference fringes, but this doesn’t happen.
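“You can’t catch up with it” can be put in terms of the relativistic velocity-addition rule, w = (u + v)/(1 + uv/c²): however fast you travel, light still passes at the full 299 792 km/s, and no two sub-light speeds ever combine to exceed it. A minimal sketch:

c = 299_792.0  # speed of light, km/s

def add_velocities(u, v):
    """Relativistic velocity addition: speeds never combine to exceed c."""
    return (u + v) / (1 + u * v / c ** 2)

ship = 299_791.0                      # your speed relative to a "stationary" observer, km/s
print(add_velocities(ship, c))        # a light beam still comes out at exactly 299792.0
print(add_velocities(ship, 0.9 * c))  # even two enormous sub-light speeds stay below c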

These two lines of thought led to two major new breakthroughs in theoretical physics. One is general and special relativity, the idea that moving observers find themselves dividing space and time up differently than stationary ones and that gravity is not a force but a distortion of space and time. The other is quantum mechanics, which is that there are inherent limits to accuracy and that probability is fundamentally involved in physical phenomena on a small scale, so there is no certain location or direction to a sufficiently small particle, but it’s more likely to be in one place or be going in a particular direction than another, and it’s these probabilities which constitute the waves, which are like a graph depicting how likely something is to be in a certain place at a certain time. These are both “big ideas”. Since then, particle and other physics has tended to involve tinkering with the details and working out the consequences of these theories, notably trying to relate them to each other, which is difficult. Related to quantum mechanics is the Standard Model, really a big set of related ideas which classifies elementary particles and describes electromagnetism and the strong and weak nuclear forces. Gravity is missing from this model. If gravity were suddenly to become non-existent inside a sealed space station able to control its own temperature and pressure, its trajectory would change but there wouldn’t be a fundamental change in anyone’s lifestyle aboard the space station, and this illustrates how big the rift is between the two sides of physics. There are also problems with the Standard Model itself: there are too many free parameters involved for it to be considered elegant (nineteen), and for some reason the weak nuclear force is a quadrillion times (long scale) stronger than gravity and nobody knows why. In order to account more fully for neutrinos, another seven apparently arbitrary constants will be needed, so the whole thing is a bit of a mess, although it does work well. It’s also become difficult to test because of the high energy levels involved. Another issue is that there’s a lot more matter than antimatter.

There are also a number of straightforward, everyday phenomena which the kind of physics involving particles and trajectories can’t account for. For instance, a drop of ink in a jar of water starts off as a small, self-contained blob which then spreads out and leaves the jar with a more homogeneous tint. This is the usual operation of entropy, but although physics can account for individual molecules of pigment colliding with water molecules and moving in all sorts of different directions, it can’t explain, for instance, why it happens that way round. Well, I say “physics”. In fact there is a perfectly good branch of physics which does at least assert that this kind of thing will happen, and it’s the one referred to by Flanders and Swann: thermodynamics. The Second Law of Thermodynamics asserts that the entropy of a closed system tends towards a maximum. Another maxim from thermodynamics is the counterfactual: a perpetual motion machine is impossible.

Everything must change. This is Paul Young next to an overbalanced wheel, which might be thought to spin permanently once set in motion, but it doesn’t. The idea of an overbalanced wheel is to shift mass outwards from the hub on one side and back in towards it on the other so that one side always seems to pull harder, but it doesn’t work because, averaged over a full turn, the torque is actually the same on either side. I find objections to perpetual motion machines odd because at first reading they generally appear to be critical of a minor flaw in the machine which might be easily remedied, such as friction, but in fact resolving that problem would introduce another, and it’s a limitation of the entire system rather than that minor issue. All you’re doing is moving the limitation to a different aspect of the device. It will always be there somewhere.

And now it’s time to introduce Constructor Theory (CT), to which Chiara Marletto is a substantial contributor. The idea of a machine being impossible is called a “counterfactual” in CT, and a perpetual motion machine is a good example of that. It needn’t be a machine though, just a situation such as a block of lead spontaneously transmuting to gold, which cannot happen, or rather, is almost impossible. Thermodynamics has enough mathematics in it as it is, but not the same kind as quantum mechanics or relativity. Some physicists seem to feel thermodynamics isn’t rigorous enough for that reason, but it can be made more so without straying into the kind of trajectory and particles paradigm used elsewhere, and the laws of thermodynamics themselves could be restated in more precise, less natural language.

Marletto uses transistors as an example. A transistor is, functionally speaking, a switch which can be operated electrically to turn it on or off. This means it has two states. Many other things are functionally identical to transistors in binary digital computers, such as valves and relays, but their physical details can be removed from the situation when making a computer. A 6502 CPU, as found in the Apple ][ and BBC Micro among many other computers, is a microprocessor whose chip comprises microlithographed NMOS transistors, but has also been made occupying an entire board from discrete TTL integrated circuits and even covering the walls of a living room or bedroom with lower-scale integration components, and it could even be made from valves or relays, but it would be slower. In all these cases, the computationally important aspect of the logic network is the ones and zeros and the logic functions applied to them. There are physical realisations of these but there’s a level of abstraction which means they don’t matter. Constructor theory appears to aim to generalise this, not necessarily in terms of computing but with the same kind of detachment. That said, it still recognises information as a neglected and important aspect of physics.
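That substrate-independence can be made concrete with a toy sketch: the same XOR, built entirely out of NANDs, behaves identically whichever “physical realisation” of NAND sits underneath. The two backends here are, of course, hypothetical stand-ins rather than real NMOS or relay logic:

import time

def nand_fast(a: int, b: int) -> int:      # stands in for, say, microlithographed NMOS
    return 0 if (a and b) else 1

def nand_slow(a: int, b: int) -> int:      # stands in for relays or valves
    time.sleep(0.001)                      # slower, but logically identical
    return 0 if (a and b) else 1

def make_xor(nand):
    """Build XOR purely out of whichever NAND we are given."""
    def xor(a, b):
        n1 = nand(a, b)
        return nand(nand(a, n1), nand(b, n1))
    return xor

for backend in (nand_fast, nand_slow):
    xor = make_xor(backend)
    print([xor(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))])   # [0, 1, 1, 0] both times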

A second phenomenon which physics as it stands can’t really make sense of is life. When an animal, including a human, dies, most or all of its cells can still be alive but the animal as a whole is dead. The corpse obeys the same laws of physics as currently understood as the animal did when it was alive. The Krebs Cycle tends to run for a while until the oxygen runs out and there’s no longer a way for carbon dioxide to be transported away, so the acidity increases, and enzymes within the cells begin to break them down. Genes can also be expressed for days after death. Moreover, bacterial decomposition or cremation converts the body to an inorganic form, all within the purview of physics and, in the former case, biology, and yet the transformation of life to death is profound and meaningful, and can be as completely described by physics and chemistry as any other classical process, but is in another way not described at all. The counterfactual here would be resurrection, but time’s arrow only points one way.

Information, heat and the properties of living things cannot be accommodated as trajectories because they’re about what can or cannot be made to happen to a physical system rather than what happens to it given initial conditions and the laws of motion. Constructor theory’s fundamental statements are about which tasks are possible, which impossible and why that’s so. The fundamental items are not particles or trajectories but “tasks”: the precise specification of a physical transformation of a system. The transformed object is known as the substrate, so in a sense the duality of trajectories and particles is replaced by that of tasks and substrates.

It might be worth introducing another metaphor here. Suppose you have a machine which includes a cup on the end of an arm. If you put a six-sided die in that cup and it reliably throws a six every time and is in the same state at the end as it was before you put the die in, that machine can be called a “constructor”. If it isn’t in the same state at the end as at the start, it may not be able to repeat the task reliably, which means it isn’t a constructor. Now for me, and for all I know this has been addressed in the theory because once again I’m somewhat out of my depth here, this seems to ignore entropy. All machines wear out. Why would a constructor not? Clearly the machine is a metaphor, but how literally can it be taken?

Although laws of physics in this framework are characterised by counterfactuals and constructors, and the language of tasks and substrates is used, it’s often possible to get from such a description to traditional statements – and here “traditional” includes quantum physics – couched in trajectory/particle terms. In this way constructor theory includes traditional physics and can be used everywhere traditional physics can be used, but it can also cover so much more than that, including information, life, probability and thermodynamics, thereby bringing all of these things into the fold in a unified way. For instance, the position and velocity of a particle cannot both be measured precisely at the same time, which is tantamount to saying that there cannot be a machine which measures both position and velocity. In that context, it’s fairly trivial – the “machine” bit seems tacked on unnecessarily – but in others, such as life and information, it wouldn’t be so.

Information is a little difficult to describe formally and this is one of those situations where although the word does correspond to how it’s used colloquially, particularly in the phrase “information technology”, it isn’t quite the same thing as that. There are mathematical ways of describing the concept, but before covering that it’s important to point out that simply because the word “information” is used in this technical way, that doesn’t confer any special authority or greater right to use it thus. It’s like the way “nut” and “berry” are used botanically, in that a peanut is not a nut but a hazelnut is, and a banana is a berry but a blackberry isn’t, but that doesn’t mean the way we use “nut” and “berry” is in any way inferior. Nonetheless, this is how I’m going to be using it here.

The more ordered something is, the less information it takes to describe. Glass is a haphazard arrangement of, for example, sodium silicate molecules, and to describe precisely where each is would take a long list of coördinates which couldn’t be compressed much, but a crystal of sodium chloride is relatively easy to describe as a cube-shaped lattice comprising sodium and chloride ions a certain distance apart, and once you’ve done that, you’ve described the whole crystal. Hence the more entropic something is, the more information is needed to describe it. If a crystal is disturbed, perhaps by the inclusion of a few atoms of other elements, it will be more complicated, and need more information to describe it. Likewise, mercury is a solid crystal below about −39°C, and melting that mercury complicates its structure, so in a sense melting something and increasing its entropy is adding information to it. Strangely, it follows that one way of freezing things is to remove information from them, which is where Maxwell’s Demon comes in.
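Before getting to the demon, one crude but concrete way to see the link between order and description length is to point an off-the-shelf compressor at it: a perfectly repetitive “crystal” of characters squashes down to almost nothing, while a haphazard “glass” of the same length barely compresses at all. A sketch using Python’s zlib:

import random
import zlib

n = 10_000
crystal = "NaCl" * (n // 4)                                    # perfectly ordered, repeating unit
glass = "".join(random.choice("NaClSiO") for _ in range(n))    # haphazard arrangement

for name, s in (("crystal", crystal), ("glass", glass)):
    compressed = len(zlib.compress(s.encode()))
    print(f"{name}: {len(s)} characters -> {compressed} bytes compressed")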

Maxwell’s Demon has been invoked repeatedly as a provocative thought experiment in physics. It can be described thus. Imagine an aquarium of water divided into two halves. There is a door in the middle of the partition where a demon stands who inspects the individual water molecules. If they’re moving faster than a certain speed, the demon lets them through into compartment B, or leaves them there if that’s where they already are, but if they’re moving more slowly, it does the same with compartment A. As time goes by, the temperature of compartment A falls and that of compartment B rises, until compartment A has frozen. This appears to violate the laws of thermodynamics. If you’re uncomfortable with a demon, you can imagine a membrane between the two which is permeable one way to faster molecules and the other way to slower ones, but the issue remains. One counter-claim to this is that the membrane or demon has to have information-processing power to do this, and that would involve at least the initial input of energy if not its continuous use. The membrane is very “clever” and organised: it’s a technologically advanced bit of kit, or alternatively it’s a highly evolved organism or living system, all of which involved the input of a lot of energy at some point, perhaps in a factory that makes these things. If it’s actually a demon, it has a brain or something like it: it has to be able to think about what it’s doing, and that takes energy. This is why zombies would probably be nuclear-powered, incidentally, but that’s another story.
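Here’s a toy version of the demon in Python. Molecules start with a shared spread of velocities; the demon sorts the fast ones into B and the slow ones into A, and the average kinetic energy, standing in for temperature, ends up lower on one side and higher on the other. It deliberately ignores the cost of the demon’s decisions – at least one bit per molecule, each of which costs at least kT·ln 2 to erase, which is where the catch lies:

import random

random.seed(1)

# 10,000 molecules as (velocity, compartment) pairs, starting well mixed
molecules = [(random.gauss(0.0, 1.0), random.choice("AB")) for _ in range(10_000)]
threshold = 1.0   # the demon's cut-off speed, arbitrary units

def mean_energy(side):
    vs = [v for v, s in molecules if s == side]
    return sum(v * v for v in vs) / len(vs)   # average kinetic energy ~ temperature

print("before:", round(mean_energy("A"), 2), round(mean_energy("B"), 2))

# The demon at the trapdoor: fast molecules end up in B, slow ones in A.
# It needs one bit of information per molecule to do this.
molecules = [(v, "B" if abs(v) > threshold else "A") for v, _ in molecules]

print("after: ", round(mean_energy("A"), 2), round(mean_energy("B"), 2))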

Leaving aside the question of whether this inevitably breaks the laws of thermodynamics by reducing entropy without greater energy input than output, Maxwell’s Demon is relevant to Constructor theory and has on a very small scale been used to do something which would be useful on a larger scale. This is effectively a fridge which runs by moving information around. The information needed to describe the frozen side of the aquarium is probably less than that required to describe the liquid, or possibly steam, side, because the frozen side consists of ice crystals, which take less information to describe than water or steam. This membrane works by taking information away from hot things. This has been done with a transistor. A device has been invented which separates high- and low-energy electrons and only allows the latter to reach the transistor, which therefore cools it. This is actually useful because it could be employed in cooling computers. A somewhat similar device is the Szilard Engine, which detects the location of a gas molecule in a chamber, places a barrier across it and closes a piston on the vacuum side before releasing the gas molecule. This, too, releases a tiny bit of energy via information, namely the information about where in the chamber the molecule is. It’s also subject to the Uncertainty Principle because if the chamber were sufficiently small, and in this case subatomic particles would have to be used, the point would come when there was only a probability that the piston would move, which would create different timelines, but this isn’t the point under consideration. Hence there is a relationship between energy, information and entropy with real consequences. This is no longer just a thought experiment.

At this point, I seem to have missing information, because I’m aware of all this and on the other side I’m also aware of Universal Constructors, but I can’t make a firm connection between them. The link may become clear as I describe them. If not, I might try to find some information, so to speak, to link them. It is connected to quantum computing. I know that much. Also, that Universal Constructors are based on cellular automata, and that I really can explain.

By Lucas Vieira – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=101736

In Conway’s game of Life, a grid of square cells, each of which is either occupied or empty, goes through generations in which an occupied cell with two or three occupied neighbours survives, an empty cell with exactly three occupied neighbours becomes occupied, and any other occupied cell – one with fewer than two or more than three occupied neighbours – dies. If you watch the above GIF carefully, you can glean these rules. Conway’s Life is the best-known example of a cellular automaton, but there are others such as Highlife, with different rules, where cells with three or six neighbours become occupied and continue if they have two or three. Another one is Wireworld, which is useful for illustrating the way into one of the most important things about cellular automata:
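(As an aside before the Wireworld example: those rules fit into a dozen lines of Python. This is a minimal Life, storing only the coordinates of occupied cells and run here on a glider for a few generations.)

from itertools import product

def step(live):
    """One Life generation: survive with 2-3 occupied neighbours, birth on exactly 3."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # a glider
for generation in range(5):
    print(sorted(cells))
    cells = step(cells)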

By Thomas Schoch – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1863034

This is an XOR (exclusive or) logic gate from Wireworld, which works particularly well as a means of building structures which behave as logic gates or transistors. It’s probably obvious that any binary digital computer can be built in Wireworld, because if logic gates can be made, anything made from them can be. It’s less obvious that many other cellular automata also have this feature, including Life. This means that many cellular automata are Turing-complete. Turing-completeness is the ability to simulate any Turing machine, which runs on a tape from which it reads symbols and on which it writes and erases them, moving the tape one way or the other and switching between the instructions in a finite table, one of which may tell it to halt. Perhaps surprisingly, this machine can emulate any computer, and there’s a layer on top of this which means that it can simulate anything a classical computer can simulate. Turing-completeness can be applied not only to any digital binary computer but also to programming languages and other things. There is, for example, a computer design with a single instruction, usually given as “subtract and branch if the result is negative or zero”. This can do anything, but might take an extremely long time to do it. No practical computer would be designed like this, and memory limitations are also being ignored, but it means, for example, that with the right peripherals a ZX81 could simulate a supercomputer or just a modern-day high end laptop, really, really slowly, assuming that the extra couple of dozen bits needed could be added to the address bus! Maybe this is what happened with the Acorn System 1 in ‘Blake’s 7’ series D:
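Back to the single-instruction idea for a moment: the textbook example is usually “subleq”, subtract and branch if the result is less than or equal to zero, and an interpreter for it takes only a few lines. The little program at the bottom is a hypothetical example which adds two numbers held in memory, just to show that useful work really can be wrung out of one instruction:

def run_subleq(mem):
    """subleq a, b, c: mem[b] -= mem[a]; jump to c if the result is <= 0.
    A negative jump target halts the machine."""
    pc = 0
    while 0 <= pc < len(mem):
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Three instructions which add the numbers at addresses 12 and 13 (7 and 5),
# using the scratch cell at address 14, then halt by jumping to -1.
memory = [12, 14, 3,     # scratch -= A   (scratch becomes -A, so the branch is taken)
          14, 13, 6,     # B -= scratch   (i.e. B += A)
          14, 14, -1,    # scratch -= scratch, always <= 0, so halt
          0,  0,  0,     # padding so the data sits at addresses 12-14
          7,  5,  0]     # A, B, scratch

print(run_subleq(memory)[13])   # prints 12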

One way to extend the cellular automaton concept is to make it quantum, which can then have the same relationship to quantum computers as Turing-complete classical cellular automata have to classic digital binary computers. If built, quantum cellular automata (QCAs) would have very low power consumption and, if they are Turing complete, would also be a possible replacement for classical computers, and they can also be made extremely small. However, there are two distinct types of QCAs which happen to have been given the same name by separate teams. The QCAs I referred to as having low power consumption are quantum dot based. Quantum dots are important to nanotechnology. They have to be very small to work, and consist of particles a few nanometres across which switch electrons from their orbital state to the delocalised state found in metals which renders them conductors. This means they can act as switches just like transistors. If they’re linked in the right way, they can be used to build cellular automata. This, however, is not what Deutsch and Marletto mean by a QCA, David Deutsch being the other main proponent of Constructor Theory, because although quantum dot computers arranged as cellular automata could indeed be used as computers, through being lattices of devices which can do Highlife, Life or Wireworld, or some other kind of automaton, the electron transition can be treated as a classical bit rather than a qubit, and the fact that it happens to be a quantum phenomenon doesn’t alter the basic design of the computer. Real quantum dot computing has been around since 1997 CE. Qubits can be actualised through such phenomena as spin or the polarisation of light, where there are two possible states, but they differ from bits in that they can be both zero and one until measured or observed, and if they are observed, this can influence the chain of cause and effect which led up to that point. This means, for example, that the factors of large integers can be found far faster than by checking candidates one by one, using Shor’s algorithm, which exploits superposition and interference across many qubits. Since much internet cryptographic security depends on the difficulty of finding prime factors, this also means that quantum computing might make currently secure financial transactions over the internet insecure.

In the Marletto-Deutsch sense, a QCA is to a classical CA (cellular automaton) as a quantum logic gate is to a classical logic gate. A classical logic gate may have two inputs and one output, and the output depends on those inputs. A quantum logic gate is bidirectional, or more precisely reversible: no information is discarded along the way, so measuring or observing the output also tells you about, and arguably determines, what the input was. Hence one rule for Life, for example, is:

¬(A ∧ B ∧ C ∧ D) ⇒ ¬E

where A, B, C and D are four of E’s neighbours (a real Life cell has eight, so this is only a schematic rule). This is a one-way process. You could build an array of LEDs behaving as a Life game, with logic gates like the one above linking the cells they represent, set the initial conditions and just let it run, but there would be only one outcome and there’s no going back unless the pattern involved cycles. If quantum gates were involved instead, the outcome would, when observed, determine what had happened before it was observed, and this could be done by building a grid out of quantum gate devices rather than classical TTL integrated circuits.
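The “bidirectional” point is really about reversibility: a classical AND throws information away, since three different input pairs all map to 0, whereas the quantum-friendly Toffoli gate (controlled-controlled-NOT) keeps every bit, so the input can always be reconstructed from the output. A classical sketch of the difference:

from itertools import product

def and_gate(a, b):
    return a & b                        # irreversible: you can't tell which inputs gave you 0

def toffoli(a, b, c):
    """Reversible CCNOT: flips c only when both a and b are 1; it is its own inverse."""
    return a, b, c ^ (a & b)

# AND loses information...
lost = [(a, b) for a, b in product((0, 1), repeat=2) if and_gate(a, b) == 0]
print("input pairs indistinguishable after AND:", lost)

# ...whereas applying the Toffoli gate twice restores every one of the eight inputs exactly.
for triple in product((0, 1), repeat=3):
    assert toffoli(*toffoli(*triple)) == triple
print("Toffoli is reversible: running it twice gives back all eight inputs")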

A Universal Constructor can be built in Life. This is a pattern which can build any other pattern. In fact, patterns can be built which can copy themselves and they can be coupled to Turing machines in Life which can be programmed to get them to make the pattern desired. This is the first successful Universal Constructor:

This shows three Universal Constructors, each made by its predecessor. The lines are lists of instructions which tell the machines how to make copies of themselves. Mutations can occur in these which are then passed on. Perhaps unsurprisingly, these were thought up by John von Neumann, and are therefore basically Von Neumann probes as in this. These are potentially the devices which will become our descendants as intelligent machines colonising the Galaxy, and possibly turning it into grey goo, but this is not what we’re talking about here. Here’s a video of one in action.

These are machines which do tasks on substrates, and this is where I lose track. Deutsch (whom I haven’t introduced properly) and Marletto seem to think that physics can be rewritten from the bottom up by starting with the concept of Universal Constructors running in quantum cellular automata. I haven’t read much of their work yet, but I presume this Universal Constructor is an abstract concept rather than something which actually exists to their minds, or at least only exists in the same way as mathematical platonists believe mathematics exists. A mathematical platonist believes maths exists independently of cognition, so for example somewhere “out there” is the real number π. It’s certainly hard to believe that if there were suddenly no living things in the world with no other changes, there wouldn’t still be three peaks in the Trinidadian Trinity Hills for example. Another option would be formalism, where mathematics is seen as a mere game played with symbols and rules. If this is true, it’s possible that aliens would have different mathematics, but this fails to explain why mathematics seems to fit the world so neatly. These same thoughts apply to Universal Constructors. These things may exist platonically speaking, or they may be formal. It’s also difficult to tell, given its recent advent, whether Constructor Theory is going to stand the test of time or whether it’s just fashionable right now, and that raises another issue: if platonism is true, do bad theories or unpopular ones exist anyway? Also, even if Constructor Theory did turn out to be unpopular that wouldn’t be the same as it being wrong. We might well stumble upon something which was correct and then abandon it without knowing.

The reason these questions are occupying my mind right now is that the idea that physics is based on a Universal Constructor, which I presume is doing the job of standing for a Theory of Everything but again I don’t know enough, would seem to have two interesting correlations with popular ways of looking at the world. One is that the Universe is a simulation, which I don’t personally believe for various reasons, one of which is the Three-Body Problem (not the book). This is the impossibility of calculating the movements of three massive bodies relatively close to each other. It is possible to calculate both the movements of two such bodies and to find special cases of three bodies which are calculable, but it isn’t possible to calculate most such cases, and given that the Universe consists of many more than three bodies of relatively significant size, the calculations necessary would need a computer more complex by many orders of magnitude than the Universe. There are many cases where the third body is too small or distant compared to the others where a good approximation can be calculated, but if the orbit of Mercury differs even by a millimetre, it can completely throw the other planets in our Solar System out of kilter to the extent that Earth would end up much closer to the Sun or Mercury and Venus would collide before the Sun becomes a red giant. Therefore, if the Universe is a simulation it would need to be run by a computer far more powerful than seems possible. Nonetheless, it’s possible to shrink the real world down so that, for example, everything outside the Solar System is simply a projection, and this would help. If it did turn out to be one, though, it seems that Constructor Theory and the Universal Constructor would be a major useful component in running it. The second question is a really obvious one: Is the Universal Constructor God? Like the Cosmological Argument, the Universal Constructor is very different from the traditional view of God in many religions, because it seems to be a deist God who sets the world in motion and retires, or at least leaves her Universal Constructor to run things, and “proof” of a Creator is not proof of God as she’s generally understood because there’s nothing in there about answering prayers or revealing herself to prophets, among many other things. Also, this would be a “God Of The Gaps”, as in, you insert the idea of a God whenever you can’t explain something. Nonetheless it is at least amusingly or quaintly God-like in some ways.
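That sensitivity can be illustrated with a toy integrator: two copies of the same three-body system, identical except that one body starts a billionth of a unit out of place, and the separation between the copies printed as they evolve. Everything here is in arbitrary units with a deliberately softened force law, so it’s an illustration of chaotic divergence, not a model of the Solar System:

import math

G = 1.0   # arbitrary units throughout

def accelerations(bodies, soft=1e-6):
    acc = []
    for i, (mi, xi, yi, vxi, vyi) in enumerate(bodies):
        ax = ay = 0.0
        for j, (mj, xj, yj, vxj, vyj) in enumerate(bodies):
            if i != j:
                dx, dy = xj - xi, yj - yi
                r3 = (dx * dx + dy * dy + soft) ** 1.5   # softened to avoid numerical blow-ups
                ax += G * mj * dx / r3
                ay += G * mj * dy / r3
        acc.append((ax, ay))
    return acc

def step(bodies, dt=0.001):
    # Semi-implicit Euler: update velocities from accelerations, then positions
    new = []
    for (m, x, y, vx, vy), (ax, ay) in zip(bodies, accelerations(bodies)):
        vx, vy = vx + ax * dt, vy + ay * dt
        new.append((m, x + vx * dt, y + vy * dt, vx, vy))
    return new

def make_system(nudge=0.0):
    # (mass, x, y, vx, vy): three comparable masses thrown in together
    return [(1.0, -1.0, 0.0, 0.0, -0.5),
            (1.0,  1.0, 0.0, 0.0,  0.5),
            (0.5, nudge, 1.0, 0.7, 0.0)]

a, b = make_system(), make_system(nudge=1e-9)
for n in range(1, 20001):
    a, b = step(a), step(b)
    if n % 5000 == 0:
        gap = math.hypot(a[2][1] - b[2][1], a[2][2] - b[2][2])
        print(f"t = {n * 0.001:4.0f}   third bodies of the two copies differ by {gap:.2e}")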

To summarise then, Constructor Theory is motivated by the problem of using conventional physics to describe and account for such things as the difference between life and death, the direction in which entropy operates and the nature of the way things are without supposing initial conditions. It seeks to explain this by proposing the idea of a Universal Constructor, which is a machine which can do anything, and more specifically performs tasks on the substrate that is the Universe, and also local cases of the Universe such as a melting ice cube, exploding galaxy or dying sparrow. This Universal Constructor can be composed of quantum cellular automata and is a kind of quantum computer, which it has to be because the Universe is quantum. This reminds me a bit of God. Have I got it? I dunno. But I want to finish with an anecdote.

Back in 1990, the future hadn’t arrived yet, so ‘Tomorrow’s World’ was still on. Nowadays it would just be called ‘Today’s World’ of course. At the start of one of the episodes, Kieran Prendiville, Judith Hann or someone said that CERN were building a “Machine That Explains Everything”, and they then went on to talk about a new design of inline roller skate. I’ve never forgotten that incident, mainly because of the bathos, but I suppose it was the Large Electron-Positron Collider. Of course, incidentally at the same time in the same organisation Tim Berners-Lee was inventing a different kind of “machine that explains everything”, but it seems to me now that this is also what Constructor Theorists are trying to do, because a Universal Constructor is definitely a machine, and it’s definitely supposed to explain everything. So that’s good. Apparently the game Deus Ex also has something with the same name, which I know thanks to Tim Berners-Lee’s invention, but I can’t find an explanation for it.

Oh, and I may have got all of this completely wrong.