Could Science End?

Yesterday I considered the question of what civilisation would be like if nobody could do mathematics “as we know it”, which is one fairly minor suggestion for an answer to the Fermi Paradox of “where are all the aliens?”. Of course the simplest answer to this is that there aren’t any and probably never have been, but there are also multitudinous other possibilities, many of which have interesting implications for us even if we never make contact with anyone. Yesterday, the fault was in ourselves, but what if the fault is not in ourselves but in our stars? What if the issue is not that other intelligent life forms lack a capacity we have, but that there is a real, external yet still conceptual problem which prevents anyone from getting out into interstellar space in a reasonable period of time? What if, so to speak, science “runs out”?

Even if there are no aliens, this possibility is still important. It’s entirely possible that they are in fact completely absent but that science will still stop, and that would be a major issue. It would be rather like the way Moore’s Law has apparently run up against the buffers due to thermal noise and electron tunnelling. Ever since the early 1960s, when the first integrated circuits appeared, there’s been an approximate doubling of transistors per unit area of silicon (or germanium of course) every two years or so, which may be partly driven by commercial considerations. However, as transistors get smaller, the probability of an electron on one side of a barrier tunnelling through to the other, and thereby interfering with their operation, increases. In 2002, it was theorised that the law would break down by the end of that decade due to Johnson-Nyquist noise, which is the disturbance of electrical signals by the thermal vibration of atoms and molecules, tending to drown out weak signals; and weak signals are what nanoscale computing processes amount to. It isn’t clear whether Moore’s Law has stopped operating or not, because if it has, that has consequences for IT companies and therefore their profitability and share values. The difficulty in ascertaining whether it has is a good example of how capitalism distorts processes and research which would ideally operate in a more neutral environment. There’s also a tendency for people to suppose that scientific change will not persist indefinitely because they’re “set in their ways”, as it were, so it’s hard to tell if it actually has stopped happening. It’s been forecast, in a possibly rather sensationalist way, that once Moore’s Law does stop, there will be a major economic recession or depression and complete social chaos resulting from the inability of IT companies to make enough money to continue, but I don’t really know about that. It seems like catastrophising.
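Just to make the numbers concrete, the law itself is simple arithmetic: a doubling every two years is exponential growth in calendar time. Here’s a toy Python sketch of my own (the 1971 starting figure is roughly that of the earliest microprocessors, and is only there for illustration, not as real chip data):

```python
# Toy illustration of Moore's-Law-style doubling (assumed figures, not real chip data):
# if transistor counts double every two years, growth is exponential in calendar time.

def projected_count(initial_count: int, start_year: int, year: int, doubling_period: float = 2.0) -> float:
    """Transistor count projected forward from an assumed starting point."""
    return initial_count * 2 ** ((year - start_year) / doubling_period)

# Hypothetical example: ~2,300 transistors in 1971 (roughly the first microprocessors)
for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(year, f"{projected_count(2300, 1971, year):,.0f}")
```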

More widely, there are areas of “crisis”, to be sensationalist myself for a moment, in science, particularly in physics but, as I’ve mentioned previously, also perhaps in chemistry. The Moore’s Law analogy is imperfect because it isn’t pure scientific discovery but the application of science to technology, where it can be established that a particular technique for manufacturing transistors has a lower size limit. This is actually a successful prediction made by physics rather than the end of a scientific road. However, the consequences may be similar in some ways, because it means, for example, that technological problems currently addressed by the microminiaturisation of digital electronics would have to be solved in a different way, which is of course part of what quantum computers are for. The end of science is somewhat different, and can be considered in two ways.

The first of these is that the means of testing hypotheses may outgrow human ability to deploy them. For instance, one possible time travel technique involves an infinitely long, rapidly rotating cylinder of ultra-dense matter, but there is no way to build such a cylinder as far as can be seen, particularly if the Universe is spatially finite. Another example is the increasing size and energy required to build particle colliders. The point may come when the only way to test an hypothesis of this kind would be to construct a collider in space, and right now we can’t do this and probably never will be able to. There would be an extra special “gotcha” if it turned out that, in order to test a particular hypothesis involving space travel, it would be necessary to have engines built on those very principles in the first place just to get to a place where it could be falsified.

Another way it might happen is that there could be two or more equally valid theories which fit all the data, are equally parsimonious, and between which there is no way of choosing. It kind of makes sense to choose the simpler theory, but at this level it becomes an æsthetic choice rather than a rational one, because no observable consequence follows from one theory being true rather than the other. If “all the data” means all the observable data, this is the impasse in which science would find itself.

It also seems to be very difficult to arrive at a theory of quantum gravity. Relativity and quantum physics are at loggerheads with each other and there seems to be no sign of resolution. There “ought to be” some kind of underlying explanation for the two both being true, but it doesn’t seem to be forthcoming. Every force except gravity is explained using the idea that particles carry the message of that force, such as photons for electromagnetism and gluons for the strong nuclear force, but gravity is explained using the idea that mass distorts space instead, meaning that gravity isn’t really a force at all. I’ve often wondered why they don’t try to go the other way and use the concept of higher dimensions to explain the other forces instead of using particles, and I presume there’s a good reason why that approach hasn’t stuck. It wouldn’t explain the weak force, I suppose. However, there does seem to be a geometrical element in the weak force, because it can only convert between up and down quarks if their spin does not align with their direction of motion, so maybe. But so far as I know it’s never really been made to work this way round, which puzzles me. There’s something I don’t know.

There may also be a difference between science running out and our ability to understand it being exceeded. Already, quantum mechanics is said to be incomprehensible on some level, but is that due merely to human limitations, or is it fundamentally mysterious? The same issue comes up with the mind-body problem: perhaps the reason we can’t seem to reconcile the existence of consciousness with anything we can observe is that the problem is just too hard for humans to grasp.

People often imagine the ability to build a space elevator, a cable reaching tens of thousands of kilometres into space to geostationary orbit, up and down which lifts can move, making it far easier to reach space; but no material we can actually produce appears to be strong enough to support one on Earth, although it would be feasible on many other planets, moons and asteroids using existing materials. We might imagine it’s just round the corner, but maybe it isn’t. Likewise, another common idea is the Dyson sphere, acknowledged by Freeman Dyson himself as having originally been thought of by Olaf Stapledon, which encloses a sun in a solid sphere of extremely strong matter to exploit all of its energy, and again that matter may not exist. And the obvious third idea is faster-than-light travel, which is generally taken to be impossible in any useful way. One way the search for extraterrestrial intelligence (SETI) could be conducted is to look for evidence of megastructures like Dyson spheres around stars, and in one case a few people believed they’d actually found one, but what if they turn out to be impossible? Dyson’s original idea was a swarm of space stations orbiting the Sun rather than a rigid body, which seems feasible, but an actual solid sphere seems much less so. Our plans of people in suspended animation or generation ships crossing the void, or spacecraft accelerated to almost the speed of light, may all just be pipe dreams. Our lazy teenage boasts will be high precision ghosts, to quote Prefab Sprout. Something isn’t known to be possible until it’s actually done.

If non-baryonic dark matter exists, the beautiful symmetries of elementary particles which the Standard Model of physics has constructed do not include it. And despite my doubts, it may exist, and even if it doesn’t there’s an issue with explaining how galaxies rotate at the rate they do. However, at any point in the history of science there were probably gaps in knowledge which seemed unlikely to be filled, so I’m not sure things are any different today. It reminds me of the apparently apocryphal story that the US patent office was to be closed in 1899 CE because everything had already been invented. However, there is also the claim that technological progress is slowing down rather than accelerating, because the changes wrought in society by the immediate aftermath of the Industrial Revolution were much larger than what has happened more recently. At the end of the nineteenth century, there seemed to be just two unresolved problems in physics: the ultraviolet catastrophe and the failure to detect the luminiferous æther. These two problems ended up turning physics completely upside down. Now it may be possible to explain almost any kind of observation, with the rather major exceptions which Constructor Theory tries to address, though these seem to be qualitatively different. The incompleteness of these theories, such as the Uncertainty Principle and the apparent impossibility of reconciling relativity with quantum mechanics, could still be permanent because of the difficulty of testing them. Dark matter would also fall under this heading, or rather, the discrepancy in the speed of galactic movement and rotation does.

This is primarily about physics of course, because there’s a strong tendency to think everything can be reduced to it, but biocentrism is another possible approach, although how far that can be taken is another question. Also, this is the “trajectory and particles” version of physics rather than something like constructor theory, and I’m not sure what bearing that has on things. Cosmology faces a crisis right now as well because two different precise and apparently reliable methods of measuring the rate of expansion of the Universe give two different results. Though I could go on finding holes, which may well end up being plugged, I want to move on to the question of what happens if science “stops”.

The Singularity is a well-known idea, sometimes described as “the Rapture for nerds”. It’s based on the perceived trend that scientific and technological progress accelerate exponentially until the curve is practically a vertical line, usually understood to be the point at which artificial intelligence goes off the IQ scale through being able to redesign itself. Things like that have happened to some extent. For instance, AlphaGo played the board game Go (AKA Weichi, 围棋) and became the best 围棋 player in the world shortly afterwards, and was followed by AlphaGo Zero, which only ever played games against itself and still became better than any human player of the game. This was a game previously considered beyond computers because each position offers hundreds of possible moves, unlike chess with its few dozen at most, meaning that the game tree branches vastly very early on. But the Singularity was first named as such, by Vernor Vinge, around two and a half dozen years ago now, with Ray Kurzweil popularising it later, and before that the SF writer Murray Leinster based a story on the idea in 1946, and it hasn’t happened. Of course a lot of other things have been predicted far in advance which have in fact come to pass in the end, but many are sceptical. The usual scenario involves transhumanism or AI, so in the latter case it seems to depend to an extent on Moore’s Law, although quantum computing may far exceed that; but whatever the nature of the intelligence driving it, genuine limits to science might still be expected to prevent it from happening in the way people imagine. For this reason, the perceived unending exponential growth in scientific progress and associated technological change could be more like a sigmoid graph:

I can’t relabel this graph, so I should explain that this is supposed to represent technological and scientific progress up to the Singularity, which occurs where the Y-axis reads zero.
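To make the contrast between those two shapes concrete, here’s a small sketch of my own with entirely made-up parameters: an exponential just keeps doubling, whereas a logistic (sigmoid) curve looks exponential early on and then flattens out towards a ceiling.

```python
# Exponential versus sigmoid ("logistic") growth, with made-up parameters just to
# show the shapes: the two are nearly indistinguishable early on, then part company.
import math

def exponential(t, rate=0.5):
    return math.exp(rate * t)

def logistic(t, rate=0.5, ceiling=100.0):
    return ceiling / (1 + (ceiling - 1) * math.exp(-rate * t))

for t in range(0, 31, 5):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):6.1f}")
```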

There’s a difference between science and technology of course. It’s notable, for example, that the development of new drugs usually seems to involve tinkering with the molecular structure of old drugs to alter their function rather than using novel compounds, and there seems to be excessive reliance in digital electronics on a wide variety of relatively scarce elements rather than the use of easily obtained common ones in new ways. And the thing is, in both those cases we do know it’s often possible to do things in other ways. For instance, antibacterial compounds and anti-inflammatories are potentially very varied, meaning for example that antibiotic resistance need not develop anything like as quickly as it does, even if antibiotics continue to be used irresponsibly in animal husbandry. There are plenty of steps in the inflammatory process which can be modified without the use of either steroids or the so-called non-steroidal anti-inflammatories, which are in fact all cycloöxygenase inhibitors. And there are biological solutions to problems such as touchscreen displays and information processing, such as flatfish and cuttlefish camouflage, which imply that there are ways to solve those problems without using rare earths or relatively uncommon transition metals. So the solutions are out there, unexploited, possibly because of capitalism. This would therefore mean that if the Singularity did take place, it might end up accelerating technological progress for quite a while through the replacement of current technology by something more sustainable and appropriate to the needs of the human race. Such areas of scientific research are somewhat neglected, meaning that in those particular directions the chances are we really have not run out of science. They could still, in fact, have implications for the likes of space travel and robotics, but it’s a very different kind of singularity than the one Kurzweil and his friends seem to be imagining. It’s more like the Isley Brothers:

Having said that, I don’t want to come across as a Luddite or anti-intellectual. I appreciate the beauty of the likes of the Standard Model and other aspects of cutting edge physics and cosmology. I’m not sure they’re fundamental though, for various reasons. The advent of constructor theory, for example, shows that there may be other ways of thinking about physics than how it has been considered in recent centuries, whether or not it’s just a passing trend. Biocentrism is another way, although it has its own limits. This is the practice of considering biology as fundamental rather than physics. The issue of chemistry in this respect is more complex.

Returning to the initial reason this was mentioned, as a solution to the Fermi Paradox, it’s hard to imagine that this would actually make visiting other star systems technologically unfeasible. If we’re actually talking about human beings travelling to other star systems and either settling worlds or constructing artificial habitats to live in there, that doesn’t seem like it would be ruled out using existing tech. Project Dædalus, for example, designed a starship engine based on the rapid detonation of small fusion pellets (the earlier Project Orion design did use full-scale nuclear bombs) to accelerate a craft to around a tenth of the speed of light, though not with humans on board, and another option is a solar sail, either using sunlight alone or driven by a laser. Besides that, there is the possibility of using low doses of hydrogen sulphide to induce suspended animation, or keeping a well-sealed cyclical ecosystem going for generations while people travel the distances between the stars. There are plenty of reasons why these things won’t happen, but technology doesn’t seem to be a barrier at all here because methods of doing so have been on the drawing board since the 1970s. Something might come up of course, such as the maximum possible intensity of a laser beam or the possibility of causing brain damage in suspended animation, but it seems far-fetched that every possible technique for spreading through the Galaxy is ruled out, unless somewhere out there in that other space of scientific theory there is some kind of perpetual motion-like or cosmic speed limit physical law which prevents intelligent life forms or machines from doing so.

All that said, the idea that science might run out is intriguing. It means that there could be a whole class of phenomena which are literally inexplicable. It also means humans, and for that matter any intelligent life form, are not so powerful as to be able to “conquer” the Cosmos, which is a salutary lesson in humility. It also dissolves another peculiarity: that somehow we, who evolved on the savannah running away from predators, parenting and gathering nuts and berries for food, and having the evolutionary adaptations to do so, have developed the capacity to understand the Universe; because in this scenario we actually haven’t.

The Machine That Explains Everything

Compare and contrast:

with:

. . . and I’m now wondering if anyone has ever put those two songs next to each other before, on a playlist or otherwise. While I’m at it, here’s another:

(not quite the same). I’ve probably done it now.

Then there’s this:

That’s quite enough of that. Or is it?

Like Franco Battiato, Chiara Marletto is Italian, although she was born at the opposite end of the country. She’s Torinese while he’s Sicilian, although he did move to Milan(o). However, this is not that important unless it says something about the nature of northern Italian culture or education, and that’s another story. The germane issue is that there are two distinct approaches to science, if science is seen as based on physics, and that is not the only option – biocentrism is another, possibly relevant to where I’m about to go with this – one of which is much more prominent and formally developed than the other. I’m not talking about classical versus quantum, or the issue of quantum gravity and the reconciliation of relativity with quantum physics, although those are important and this is relevant to the latter. ‘To Faust A Footnote’ is a musically-accompanied recitation of Newton’s laws of motion, or at least something like that, and describes the likes of trajectories and objects in motion. Such descriptions are found in Johannes Kepler’s laws of planetary motion, and although relativity and quantum mechanics are radically different in some ways from this classical approach, this aspect remains the same.

Around a century ago the world of physics saw the climax of a revolution. Triggered, I’m going to say, by two thoughts and perhaps experiments, it was realised that the idea of particles as little snooker balls which could ping about at any speed and were pulled towards each other and pushed apart by various forces of a similar nature, such as magnetism and gravity, didn’t really describe the world as it actually is. The first clue had been known for millennia: hot objects glow red rather than white. Classical physics implied that because all objects emit electromagnetic radiation across a whole range of frequencies, and should radiate ever more strongly at ever higher frequencies, they “ought” to shed all of their heat almost immediately and drop to near absolute zero, which obviously doesn’t happen. The solution is that the radiation isn’t continuous but comes in steps: there are minimum packets of light energy called quanta, from the Latin for “how much”, and at any given temperature the high-frequency packets are too costly to emit, which keeps the quantity of heat being radiated by any object finite. The other was the increasing difficulty of maintaining the idea that light waves had a medium, the luminiferous æther, which would have had to combine various unusual properties to work in this way, culminating in the Michelson-Morley experiment, which showed that light travels at the same speed regardless of the speed of the observer, meaning for example that, 299 792 kps being the speed of light, if you were travelling at 299 791 kps you would still measure the speed of light at 299 792 kps. You can’t catch up with it. In the Michelson-Morley experiment, light is sent in two directions at right angles to each other towards mirrors, and the interference patterns are observed. Because Earth is orbiting the Sun at around 30 kps, if light were moving through a medium which is not being dragged along with us, and it had previously been shown that it couldn’t be, it “ought” to be moving about 30 kps more slowly in one direction than the other, which would show up in the wave fronts lining up differently, but this doesn’t happen.
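The “you can’t catch up with it” point can be put numerically using the relativistic velocity-addition formula, w = (u + v)/(1 + uv/c²); here’s a quick sketch of my own:

```python
# A small numerical illustration (my own sketch, not from the post) of why you
# "can't catch up" with light: relativistic velocity addition of any speed u
# with c still gives exactly c.

C = 299_792.458  # speed of light in km/s

def relativistic_sum(u: float, v: float) -> float:
    """Combine two velocities using special relativity: (u + v) / (1 + u*v/c^2)."""
    return (u + v) / (1 + u * v / C**2)

print(relativistic_sum(299_791.0, C))      # chasing a light beam at 299,791 km/s: still c
print(relativistic_sum(0.5 * C, 0.5 * C))  # two half-light-speeds combine to 0.8 c, not c
```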

These two lines of thought led to two major new breakthroughs in theoretical physics. One is special and general relativity, the idea that moving observers find themselves dividing space and time up differently than stationary ones and that gravity is not a force but a distortion of space. The other is quantum mechanics, which says that there are inherent limits to accuracy and that probability is fundamentally involved in physical phenomena on a small scale, so there is no certain location or direction for a sufficiently small particle, only a greater likelihood of it being in one place or going in one direction than another, and it’s these likelihoods which constitute the waves, which are like a graph depicting how likely something is to be in a certain place at a certain time. These are both “big ideas”. Since then, particle and other physics has tended to involve tinkering with the details and working out the consequences of these theories, notably trying to relate them to each other, which is difficult. Related to quantum mechanics is the Standard Model, really a big set of related ideas which classifies elementary particles and describes electromagnetism and the strong and weak nuclear forces. Gravity is missing from this model. If gravity were suddenly to become non-existent inside a sealed space station able to control its own temperature and pressure, its trajectory would change but there wouldn’t be a fundamental change in anyone’s lifestyle aboard, and this illustrates how big the rift is between the two sides of physics. There are also problems with the model itself: there are too many parameters involved for it to be considered elegant (nineteen), and for some reason the weak nuclear force is a quadrillion times (long scale) stronger than gravity and nobody knows why. In order to account more fully for neutrinos, another seven apparently arbitrary constants will be needed, so the whole thing is a bit of a mess, although it does work well. It’s also become difficult to test because of the high energy levels involved. Another issue is that there’s a lot more matter than antimatter.

There are also a number of straightforward, everyday phenomena which the kind of physics involving particles and trajectories can’t account for. For instance, a drop of ink in a jar of water starts off as a small, self-contained blob which then spreads out and leaves the jar with a more homogeneous tint. This is the usual operation of entropy, but although physics can account for individual molecules of pigment colliding with water molecules and moving in all sorts of different directions, it can’t explain, for instance, why it happens that way round. Well, I say “physics”. In fact there is a perfectly good branch of physics which does at least assert that this kind of thing will happen, and it’s the one referred to by Flanders and Swann: thermodynamics. The Second Law of Thermodynamics asserts that the entropy of a closed system tends towards a maximum. Another maxim from thermodynamics is a counterfactual: a perpetual motion machine is impossible.
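The ink example can be caricatured in a few lines: treat each pigment particle as an unbiased random walker and the blob spreads out anyway, simply because there are vastly more spread-out arrangements than concentrated ones. A toy sketch of my own:

```python
# A toy sketch (my own, not from the post) of why the ink spreads: each pigment
# particle takes an unbiased random walk, yet the ensemble reliably spreads out,
# because there are vastly more spread-out arrangements than concentrated ones.
import random

particles = [0] * 1000           # all pigment starts at position 0 in a 1-D "jar"
for step in range(500):
    particles = [p + random.choice((-1, 1)) for p in particles]

spread = sum(abs(p) for p in particles) / len(particles)
print(f"mean distance from the starting blob after 500 steps: {spread:.1f}")
```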

Everything must change. This is Paul Young next to an overbalanced wheel, which might be thought to spin forever once set in motion, but it doesn’t. The idea of an overbalanced wheel is to shift mass between the rim and the hub as it turns, so that one side always seems to have more leverage, but it doesn’t work because, averaged over a full turn, the torque on either side is the same. I find objections to perpetual motion machines odd because at first reading they generally appear to be criticising a minor flaw in the machine which might easily be remedied, such as friction, but in fact resolving that problem would introduce another, because the limitation belongs to the entire system rather than to that minor issue. All you’re doing is moving the limitation to a different aspect of the device. It will always be there somewhere.

And now it’s time to introduce Constructor Theory (CT), to which Chiara Marletto is a substantial contributor. A statement that a machine or transformation is impossible is called a “counterfactual” in CT, and the impossibility of a perpetual motion machine is a good example. It needn’t be a machine though, just a situation, such as a block of lead spontaneously transmuting into gold, which cannot happen, or rather, is almost impossible. Thermodynamics has enough mathematics in it as it is, but not the same kind as quantum mechanics or relativity. Some physicists seem to feel thermodynamics isn’t rigorous enough for that reason, but it can be made more so without straying into the kind of trajectories-and-particles paradigm used elsewhere, and the wording of the laws of thermodynamics could also be restated in terms more precise and less like natural language.

Marletto uses transistors as an example. A transistor is, functionally speaking, a switch which can be operated electrically to turn it on or off. This means it has two states. Many other things are functionally identical to transistors in binary digital computers, such as valves and relays, and their physical details can be abstracted away when making a computer. A 6502 CPU, as found in the Apple ][ and BBC Micro among many other computers, is a microprocessor whose chip comprises microlithographed NMOS transistors, but functionally equivalent versions have been built at much larger scales, from boards of discrete components to walls covered in lower-scale integration parts, and one could even be made from valves or relays, though it would be slower. In all these cases, the computationally important aspect of the logic network is the ones and zeros and the logic functions applied to them. There are physical realisations of these, but there’s a level of abstraction at which they don’t matter. Constructor theory appears to aim to generalise this, not necessarily in terms of computing but with the same kind of detachment. That said, it still recognises information as a neglected and important aspect of physics.
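The abstraction being pointed at here can be shown in a few lines: given any physical realisation of one switch-like function (NAND in this sketch), every other logic function follows, and the truth tables don’t care whether the substrate is valves, relays or NMOS transistors. A minimal sketch of my own:

```python
# A minimal sketch (mine, not Marletto's) of the abstraction the paragraph describes:
# once you have any physical realisation of a NAND switch - valve, relay, NMOS
# transistor - every other logic function follows, regardless of the substrate.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a):        return nand(a, a)
def and_(a, b):     return not_(nand(a, b))
def or_(a, b):      return nand(not_(a), not_(b))
def xor(a, b):      return and_(or_(a, b), nand(a, b))

# The truth table is the same whatever the gates are physically made of.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```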

A second phenomenon which physics as it stands can’t really make sense of is life. When an animal, including a human, dies, most or all of its cells can still be alive but the animal as a whole is dead. The corpse obeys the same laws of physics as currently understood as the animal did when it was alive. The Krebs Cycle tends to keep running for a while until the oxygen runs out and there’s no longer a way for carbon dioxide to be transported away, so the acidity increases and enzymes within the cells begin to break them down. Genes can also be expressed for days after death. Moreover, bacteria decompose the body, or cremation converts it to an inorganic form, all within the purview of physics and, in the former case, biology, and yet the transformation of life to death is profound and meaningful, and can be as completely described by physics and chemistry as any other classical process, yet in another way is not described at all. The counterfactual here would be resurrection, but time’s arrow only points one way.

Information, heat and the properties of living things cannot be accommodated in terms of trajectories, because they’re about what can or cannot be made to happen to a physical system rather than what happens to it given initial conditions and the laws of motion. Constructor theory’s fundamental statements are about which tasks are possible, which are impossible, and why that is so. The fundamental items are not particles or trajectories but “tasks”: the precise specification of a physical transformation of a system. The object transformed is known as the substrate, so in a sense the duality of trajectories and particles is replaced by that of tasks and substrates.

It might be worth introducing another metaphor here. Suppose you have a machine which includes a cup on the end of an arm. If you put a six-sided die in that cup and it reliably throws a six every time and is in the same state at the end as it was before you put the die in, that machine can be called a “constructor”. If it isn’t in the same state at the end as at the start, it may not be able to repeat the task reliably, which means it isn’t a constructor. Now for me, and for all I know this has been addressed in the theory because once again I’m somewhat out of my depth here, this seems to ignore entropy. All machines wear out. Why would a constructor not? Clearly the machine is a metaphor, but how literally can it be taken?
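For my own benefit, here’s a deliberately naive toy model in Python of that vocabulary, and emphatically not constructor theory’s actual formalism: a task as a specified transformation of a substrate, and a constructor as something which performs it and ends each run in the same state it started in.

```python
# A very loose sketch (my own toy model, not constructor theory's formalism) of the
# task/substrate/constructor vocabulary: a task is a specified transformation of a
# substrate, and a constructor is something that can perform it repeatedly while
# ending each run in the same state it started in.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str
    transform: Callable[[str], str]   # input substrate state -> output substrate state

class Constructor:
    def __init__(self, task: Task):
        self.task = task
        self.internal_state = "ready"   # an ideal constructor returns to this every time

    def perform(self, substrate: str) -> str:
        result = self.task.transform(substrate)
        self.internal_state = "ready"   # unchanged after the task: the defining property
        return result

throw_a_six = Task("throw a six", lambda die: "six")
machine = Constructor(throw_a_six)
print(machine.perform("die in cup"), machine.internal_state)   # -> six ready
```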

Although laws of physics in this framework are characterised by counterfactuals and constructors, and the language of tasks and substrates is used, it’s often possible to get from such a description to traditional statements couched in trajectory/particle terms (“traditional” here includes quantum physics). In this way constructor theory includes traditional physics and can be used everywhere traditional physics can be used, but it can also cover much more than that, including information, life, probability and thermodynamics, thereby bringing all of these things into the fold in a unified way. For instance, the position and velocity of a particle cannot both be measured precisely at the same time, which is tantamount to saying that there cannot be a machine which measures both position and velocity. In that context it’s fairly trivial – the “machine” bit seems tacked on unnecessarily – but in others, such as life and information, it wouldn’t be so.

Information is a little difficult to describe formally, and this is one of those situations where, although the word does correspond to how it’s used colloquially, particularly in the phrase “information technology”, it isn’t quite the same thing as that. There are mathematical ways of describing the concept, but before covering that it’s important to point out that the fact that the word “information” is used in this technical way doesn’t give that usage any special authority or greater right to the word. It’s like the way “nut” and “berry” are used botanically, in that a peanut is not a nut but a hazelnut is, and a banana is a berry but a blackberry isn’t, but that doesn’t mean the everyday way we use “nut” and “berry” is in any way inferior. Nonetheless, this is how I’m going to be using it here.

The more ordered something is, the less information it takes to describe. Glass is a haphazard arrangement of, for example, silicate units and sodium ions, and to describe precisely where each one is would take a long list of coördinates which couldn’t be compressed much, but a crystal of sodium chloride is relatively easy to describe as a cube-shaped lattice of chlorine and sodium a certain distance apart, and once you’ve done that, you’ve described the whole crystal. Hence the more entropic something is, the more information is needed to describe it. If a crystal is disturbed, perhaps by the inclusion of a few atoms of other elements, it will be more complicated and need more information to describe. Likewise, mercury is a solid crystal below about -39°C, and melting that mercury complicates its structure, so in a sense melting something and increasing its entropy is adding information to it. Strangely, it follows that one way of freezing things is to remove information from them, which is the idea behind Maxwell’s Demon.
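A crude way to see the point is to use compression as a stand-in for description length: a regular, crystal-like arrangement compresses far better than a disordered, glass-like one. A quick sketch of my own:

```python
# A quick illustration (my own, using compression as a crude proxy for description
# length): a regular, crystal-like arrangement compresses far better than a
# disordered, glass-like one, so the disordered one "contains" more information.
import random, zlib

crystal = b"NaCl" * 10_000                                   # perfectly repeating lattice
glass = bytes(random.randrange(256) for _ in range(40_000))  # haphazard arrangement

print("crystal:", len(zlib.compress(crystal)), "bytes compressed")
print("glass:  ", len(zlib.compress(glass)), "bytes compressed")
```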

Maxwell’s Demon has been invoked repeatedly as a provocative thought experiment in physics. It can be described thus. Imagine an aquarium of water divided into two halves. There is a door in the middle of the partition where a demon stands, inspecting the individual water molecules. If a molecule approaching the door is moving faster than a certain speed, the demon lets it through into compartment B, or leaves it there if it’s already in B, but if it’s moving more slowly, the demon lets it through into compartment A, or leaves it in A. As time goes by, the temperature of compartment A falls and that of compartment B rises, until compartment A has frozen. This appears to violate the laws of thermodynamics. If you’re uncomfortable with a demon, you can imagine a membrane between the two which is permeable one way to faster molecules and the other way to slower ones, but the issue remains. One counter-claim is that the membrane or demon has to have information-processing power to do this, and that would involve at least an initial input of energy if not its continuous use. The membrane is very “clever” and organised: it’s a technologically advanced bit of kit, or alternatively a highly evolved organism or living system, all of which involved the input of a lot of energy at some point, perhaps in a factory that makes these things. If it’s actually a demon, it has a brain or something like it: it has to be able to think about what it’s doing, and that takes energy. This is why zombies would probably be nuclear-powered, incidentally, but that’s another story.
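The demon is easy to caricature in code. In this toy sketch of mine, molecules get random speeds, the “demon” sorts slow ones into compartment A and fast ones into compartment B, and the two average speeds (standing in for temperatures) drift apart, at the cost of the demon having to measure, that is to say process information about, every single molecule:

```python
# A toy Maxwell's Demon (my own sketch): molecules get random speeds, the "demon"
# sorts slow ones into compartment A and fast ones into compartment B, and the two
# average speeds (a stand-in for temperature) drift apart - at the price of the
# demon having to measure, i.e. process information about, every molecule.
import random

molecules = [random.gauss(500, 150) for _ in range(10_000)]  # speeds in arbitrary units
threshold = 500

compartment_a = [v for v in molecules if v < threshold]   # slow -> A, which cools
compartment_b = [v for v in molecules if v >= threshold]  # fast -> B, which warms

print("A:", sum(compartment_a) / len(compartment_a))
print("B:", sum(compartment_b) / len(compartment_b))
```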

Leaving aside the question of whether this inevitably breaks the laws of thermodynamics by reducing entropy without a greater energy input than output, Maxwell’s Demon is relevant to constructor theory and has, on a very small scale, been used to do something which would be useful on a larger scale. This is effectively a fridge which runs by moving information around. The information needed to describe the frozen side of the aquarium is probably less than that required to describe the liquid, or possibly steam, side, because the frozen side consists of ice crystals, which take less information to describe than water or steam. The membrane works by taking information away from hot things. This has been done with a transistor: a device has been built which separates high- and low-energy electrons and only allows the latter to reach the transistor, which therefore cools it. This is actually useful because it could be employed in cooling computers. A somewhat similar device is the Szilard Engine, which detects which half of a chamber a single gas molecule is in, places a barrier across the middle and closes a piston into the empty half before releasing the molecule to push the piston back out. This, too, extracts a tiny bit of energy via information, namely the information about where in the chamber the molecule is. It’s also subject to the Uncertainty Principle, because if the chamber were sufficiently small, and in this case subatomic particles would have to be used, the point would come when there was only a probability that the piston would move, which would create different timelines, but this isn’t the point under consideration. Hence there is a relationship between energy, information and entropy with real consequences. This is no longer just a thought experiment.
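The going exchange rate between information and energy is tiny but definite: at most k·T·ln 2 of work per bit, which is also Landauer’s limit for erasing a bit. A quick back-of-envelope calculation of my own, at room temperature:

```python
# The Szilard engine links one bit of information to a definite (tiny) amount of
# work: at most k_B * T * ln(2) per bit, the same quantity as Landauer's limit.
# A quick back-of-envelope calculation at room temperature (my own arithmetic):
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300              # room temperature, K

energy_per_bit = k_B * T * math.log(2)
print(f"{energy_per_bit:.2e} J per bit")   # roughly 2.9e-21 J
```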

At this point, I seem to have missing information, because I’m aware of all this and on the other side I’m also aware of Universal Constructors, but I can’t make a firm connection between them. The link may become clear as I describe them. If not, I might try to find some information, so to speak, to link them. It is connected to quantum computing. I know that much. Also, that Universal Constructors are based on cellular automata, and that I really can explain.

By Lucas Vieira – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=101736

In Conway’s game of Life, a grid of square cells, each of which is either occupied or empty, goes through generations in which an occupied cell with two or three occupied neighbours stays occupied, an empty cell with exactly three occupied neighbours becomes occupied, and any occupied cell with fewer than two or more than three occupied neighbours empties. If you watch the above GIF carefully, you can glean these rules. Conway’s Life is the best-known example of a cellular automaton, but there are others with different rules, such as Highlife, where cells with three or six neighbours become occupied and continue if they have two or three. Another is Wireworld, which is a useful way into one of the most important things about cellular automata, shown in the picture below; before getting to that, the Life rules above are compact enough to write out directly.
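Here’s a minimal sketch of my own in Python, storing the live cells as a set of coördinates and applying exactly the born-on-three, survive-on-two-or-three rule:

```python
# One generation of Conway's Life (a minimal sketch of my own; live cells are
# stored as a set of (x, y) pairs).
from itertools import product

def neighbours(cell):
    x, y = cell
    return {(x + dx, y + dy) for dx, dy in product((-1, 0, 1), repeat=2) if (dx, dy) != (0, 0)}

def life_step(live):
    """One generation: survive on 2 or 3 live neighbours, be born on exactly 3."""
    candidates = live | {n for c in live for n in neighbours(c)}
    return {c for c in candidates
            if len(neighbours(c) & live) == 3
            or (c in live and len(neighbours(c) & live) == 2)}

# A "glider", the little spaceship visible in GIFs like the one above:
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))   # after four steps the same shape reappears, shifted diagonally
```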

By Thomas Schoch – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1863034

This is an XOR (exclusive or) logic gate in Wireworld, a cellular automaton which works particularly well as a means of building structures that behave as logic gates or transistors. It’s probably obvious that any binary digital computer can be built in Wireworld, because if logic gates can be made, anything made from them can be. It’s less obvious that many other cellular automata also have this feature, including Life. This means that many cellular automata are Turing-complete. Turing-completeness is the ability to simulate any Turing machine, a device which runs on a tape on which it writes and erases symbols according to instructions which either tell it to halt or to move the tape one way or the other and advance to another instruction, with the symbols also acting as instructions. Perhaps surprisingly, this machine can emulate any classical computer and simulate anything a classical computer can simulate. Turing-completeness can be applied not only to digital binary computers but also to programming languages and other things. There is, for example, a computer design with a single instruction, along the lines of “subtract and branch if the result is negative”. This can do anything a computer can do, but no practical computer would be designed like this, and both the extremely long running times and the memory limitations are being ignored here. Still, it means, for example, that with the right peripherals a ZX81 could simulate a supercomputer, or just a modern-day high-end laptop, really, really slowly, assuming that the extra couple of dozen bits needed could be added to the address bus! Maybe this is what happened with the Acorn System 1 in ‘Blake’s 7’ series D:

One way to extend the cellular automaton concept is to make it quantum, which can then have the same relationship to quantum computers as Turing-complete classical cellular automata have to classical digital binary computers. If built, quantum cellular automata (QCAs) would have very low power consumption and, if Turing-complete, would also be a possible replacement for classical computers, and they can be made extremely small. However, there are two distinct types of QCA which happen to have been given the same name by separate teams. The QCAs I referred to as having low power consumption are based on quantum dots. Quantum dots are important to nanotechnology. They have to be very small to work, consisting of particles a few nanometres across which can switch electrons between a bound state and the delocalised, conductive state found in metals. This means they can act as switches just like transistors, and if they’re linked in the right way they can be used to build cellular automata. This, however, is not what Deutsch and Marletto mean by a QCA, David Deutsch being the other main proponent of Constructor Theory, because although quantum dot computers arranged as cellular automata could indeed be used as computers, being lattices of devices which can run Highlife, Life, Wireworld or some other automaton, the electron transition can be treated as a classical bit rather than a qubit, and the fact that it happens to be a quantum phenomenon doesn’t alter the basic design of the computer. Real quantum dot devices of this kind have been around since 1997 CE. Qubits can be realised through such phenomena as spin or the polarisation of light, where there are two basis states, but they differ from bits in that they can be in a superposition of zero and one until measured or observed, and observing them in a sense constrains the chain of events which led up to that point. This means, for example, that a quantum computer can in principle find the factors of large integers far faster than any known classical method, rather than grinding through candidate divisors one by one. Since much cryptographic security depends on the difficulty of finding prime factors, this also means that quantum computing might make currently secure financial transactions over the internet insecure.

In the Marletto-Deutsch sense, a QCA is to a classical CA (cellular automaton) as a quantum logic gate is to a classical logic gate. A classical logic gate may have two inputs and one output, and the output depends on those inputs. A quantum logic gate, by contrast, is reversible: knowing its outputs determines what its inputs were, so in a sense it can be run backwards. Hence one rule for Life, for example, might be written:

¬(A ∧ B ∧ C ∧ D) ⇒ ¬E

where A, B, C and D stand for E’s neighbours (a simplified version of the real rule, which involves counting all eight of them). This is a one-way process. You could build an array of LEDs behaving as a Life game, with logic gates such as the one above linking the cells they represent, set the original conditions and just let it run, but there would be only one outcome and there’s no going back unless the pattern happened to cycle. If quantum gates were involved instead, the outcome would, when observed, determine what had happened before it was observed, and this could be done by building a grid out of quantum gate devices rather than classical TTL integrated circuits.
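The difference can be seen even classically by comparing an irreversible gate with a reversible one. An AND gate throws information away, so its inputs can’t be recovered from its output; a reversible gate such as CNOT is a bijection on its input pairs, so knowing the outputs pins down exactly what the inputs were, and that reversibility is the property quantum gates share. A small sketch of my own:

```python
# A classical sketch (mine) of the reversibility point: an AND gate throws
# information away, so the inputs can't be recovered from the output, whereas a
# reversible gate such as CNOT is a bijection on its input pairs, so knowing the
# outputs pins down exactly what the inputs were - the property quantum gates share.

def and_gate(a, b):
    return a & b

def cnot(a, b):
    return a, a ^ b   # control passes through, target flips if control is 1

outputs_of_and = {}
for a in (0, 1):
    for b in (0, 1):
        outputs_of_and.setdefault(and_gate(a, b), []).append((a, b))
print("AND output 0 could have come from:", outputs_of_and[0])   # three possibilities

inverse_of_cnot = {cnot(a, b): (a, b) for a in (0, 1) for b in (0, 1)}
print("CNOT outputs map one-to-one back to inputs:", inverse_of_cnot)
```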

A Universal Constructor can be built in Life. This is a pattern which, given the right instructions, can build an enormous class of other patterns, including copies of itself. In fact, patterns can be built which copy themselves, and they can be coupled to Turing machines built in Life which can be programmed to make whatever pattern is desired. This is the first successful Universal Constructor:

This shows three Universal Constructors, each made by its predecessor. The lines are lists of instructions which tell the machines how to make copies of themselves. Mutations can occur in these which are then passed on. Perhaps unsurprisingly, these were thought up by John von Neumann, and are therefore basically Von Neumann probes as in this. These are potentially the devices which will become our descendants as intelligent machines colonising the Galaxy, and possibly turning it into grey goo, but this is not what we’re talking about here. Here’s a video of one in action.

These are machines which do tasks on substrates, and this is where I lose track. Deutsch (whom I haven’t introduced properly) and Marletto seem to think that physics can be rewritten from the bottom up by starting with the concept of Universal Constructors running in quantum cellular automata. I haven’t read much of their work yet, but I presume this Universal Constructor is an abstract concept rather than something which actually exists to their minds, or at least only exists in the same way as mathematical platonists believe mathematics exists. A mathematical platonist believes maths exists independently of cognition, so for example somewhere “out there” is the real number π. It’s certainly hard to believe that if there were suddenly no living things in the world with no other changes, there wouldn’t still be three peaks in the Trinidadian Trinity Hills for example. Another option would be formalism, where mathematics is seen as a mere game played with symbols and rules. If this is true, it’s possible that aliens would have different mathematics, but this fails to explain why mathematics seems to fit the world so neatly. These same thoughts apply to Universal Constructors. These things may exist platonically speaking, or they may be formal. It’s also difficult to tell, given its recent advent, whether Constructor Theory is going to stand the test of time or whether it’s just fashionable right now, and that raises another issue: if platonism is true, do bad theories or unpopular ones exist anyway? Also, even if Constructor Theory did turn out to be unpopular that wouldn’t be the same as it being wrong. We might well stumble upon something which was correct and then abandon it without knowing.

The reason these questions are occupying my mind right now is that the idea that physics is based on a Universal Constructor, which I presume is doing the job of standing for a Theory of Everything but again I don’t know enough, would seem to have two interesting correlations with popular ways of looking at the world. One is that the Universe is a simulation, which I don’t personally believe for various reasons, one of which is the Three-Body Problem (not the book). This is the fact that the movements of three comparable, mutually gravitating bodies can’t be solved exactly in general: the two-body problem can be solved, and there are special three-body cases which can, but most cases can only be approximated numerically, and the approximations are exquisitely sensitive to their starting conditions. Given that the Universe consists of many more than three bodies of relatively significant size, the calculations necessary would need a computer more complex by many orders of magnitude than the Universe itself. There are many cases where the third body is small or distant enough compared to the others that a good approximation can be calculated, but if the starting position of Mercury differs by even a millimetre, the other planets in our Solar System can be thrown out of kilter to the extent that Earth ends up much closer to the Sun, or Mercury and Venus collide, before the Sun becomes a red giant. Therefore, if the Universe is a simulation it would need to be run by a computer far more powerful than seems possible. Nonetheless, it’s possible to shrink the real world down so that, for example, everything outside the Solar System is simply a projection, and this would help. If it did turn out to be one, though, it seems that Constructor Theory and the Universal Constructor would be a major useful component in running it. The second question is a really obvious one: is the Universal Constructor God? Like the Cosmological Argument, the Universal Constructor is very different from the traditional view of God in many religions, because it seems to imply a deist God who sets the world in motion and retires, or at least leaves her Universal Constructor to run things, and “proof” of a Creator is not proof of God as she’s generally understood, because there’s nothing in there about answering prayers or revealing herself to prophets, among many other things. Also, this would be a “God Of The Gaps”, as in, you insert the idea of a God whenever you can’t explain something. Nonetheless it is at least amusingly or quaintly God-like in some ways.

To summarise then, Constructor Theory is motivated by the problem of using conventional physics to describe and account for such things as the difference between life and death, the direction in which entropy operates and the nature of the way things are without supposing initial conditions. It seeks to explain this by proposing the idea of a Universal Constructor, which is a machine which can do anything, and more specifically performs tasks on the substrate that is the Universe, and also local cases of the Universe such as a melting ice cube, exploding galaxy or dying sparrow. This Universal Constructor can be composed of quantum cellular automata and is a kind of quantum computer, which it has to be because the Universe is quantum. This reminds me a bit of God. Have I got it? I dunno. But I want to finish with an anecdote.

Back in 1990, the future hadn’t arrived yet, so ‘Tomorrow’s World’ was still on. Nowadays it would just be called ‘Today’s World’ of course. At the start of one of the episodes, Kieran Prendiville, Judith Hann or someone said that CERN were building a “Machine That Explains Everything”, and they then went on to talk about a new design of inline roller skate. I’ve never forgotten that incident, mainly because of the bathos, but I suppose it was the Large Electron-Positron Collider. Of course, incidentally, at the same time in the same organisation Tim Berners-Lee was inventing a different kind of “machine that explains everything”, but it seems to me now that this is also what Constructor Theorists are trying to do, because a Universal Constructor is definitely a machine, and it’s definitely supposed to explain everything. So that’s good. Apparently the game Deus Ex also has something with the same name, which I know thanks to Tim Berners-Lee’s invention, but I can’t find an explanation for it.

Oh, and I may have got all of this completely wrong.