Those two words up there are commonly bandied about in popular culture and discourse and may or may not be anything to be afraid of. The first is entirely Latin in origin. The other is a mixture of Greek and Latin via French. Words of such origins in English reflect the elitism of mediaeval times when you could be burnt at the stake for reading the Bible in English, as it would challenge the authority of the establishment and gatekeepers belonging to the higher echelons of Church and state, as depicted recently on ‘Wolf Hall’. Any phobia you might have of Latinate and Greek words is the result of ancient rulers trying to keep you in what they think is your divinely preordained place at the bottom of society. Ignore this aversion.
Having said all that, neither word is particularly high-falutin’ in terms of intellectual braininess-signalling, because both have cropped up a lot in pop culture by now. Typing them into a popular lyrics website yields several pop songs called ‘Multiverse’ or ‘Hyperspace’. Those words bring disco or acid house to mind, and The Shamen’s ‘Destination Eschaton’ in particular, though I haven’t bothered to explore further, so maybe not. But this is not some arcane mathematical treatise. I am a member of no academic communities and don’t pursue such things with rigour. Always bear in mind that I am never more than a pseudo-intellectual and that insofar as I have any strengths, the main one is being able to sound clever even though I’m not.
First of all, what are they? The Multiverse is the collection of all possible worlds. Gottfried Wilhelm Leibniz, in his attempt to justify the ways of God to us, seems to have come up with the idea of possible worlds, reasoning that God, being good, would ensure that creatures reside in “the best of all possible worlds”, defined as the most varied world with the greatest simplicity. That doesn’t seem to follow at all of course – it’s easy to imagine a very simple ultimate torture chamber for instance, where every mote of dust experiences eternal infinite pain in a unique manner. Voltaire satirised the whole idea in ‘Candide’. Nonetheless it’s possible to imagine that if the Universe were a simulation, it could be procedurally generated from a numerical seed to allow the greatest variation from the simplest principles. I should probably dilate on this with an example.
One genre of computer game is the Roguelike. This consists of a series of either isometric or two-dimensional depictions of floors in a practically infinite descending series of dungeons, caverns, cellars or whatever, containing various hazards and powerups with exits. The original Rogue is descended from a game called pedit5, found on mainframes in 1975 and intended to mimic Colossal Cave Adventure without the relatively massive memory overhead that game needed. I’m confident that I could implement a Roguelike in well under 16K. They’re very simple to write, but I’ll compare and contrast first.
Suppose you want to implement a map for a series of dungeons in 32K. Each dungeon consists of a thirty-two by thirty-two grid and there are thirty-two floors. Each square of that grid has one of two hundred and fifty-six possibilities in it, including the position of the player, an exit downstairs, a hundred or so special potions or other powerups, a hundred or so monsters or hazards, a wall, blank floor and so on. Now you could just make a three-dimensional 32K array and plonk all that stuff into it, and that would work fine except that it would take up half of a sixteen-bit complement of memory, which in older eight-bit computers is rather too much. Another approach would be to start with that array and work out how to compress it. For instance, instead of storing every square of a blank expanse of flooring, you could store just its width and height as two five-bit values plus two five-bit coördinates for its location, and you’ve stored up to an entire kilobyte of data in less than three bytes. You would of course have to write code to “decompress” this, at least as a display on the screen, which would slow things down a bit, but in theory you’d be able to ensure that the way the data were actually stored, and even interacted with by the player, remained compressed. At this point I could start going off on a tangent about generative adversarial networks but I won’t.
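I’m not going to write the actual eight-bit version here, but the sparse idea can be sketched in Python instead of assembly. The tile codes and the example floor below are invented purely for illustration:

```python
# Sketch: storing a 32x32 dungeon floor sparsely instead of as a full grid.
# Invented tile codes: 0 = blank floor, 1 = wall, 2 = stairs down,
# 3..255 = the various monsters, potions and other powerups.

FLOOR_W = FLOOR_H = 32

def full_grid_bytes():
    # The naive approach: one byte per square, so 1024 bytes per floor.
    return FLOOR_W * FLOOR_H

def sparse_encoding(tiles):
    # tiles maps (x, y) -> tile code for every NON-blank square only.
    # Each entry needs two five-bit coördinates plus an eight-bit code,
    # i.e. 18 bits; three bytes per entry is a comfortable upper bound.
    records = []
    for (x, y), code in sorted(tiles.items()):
        records.append(bytes([x, y, code]))
    return b"".join(records)

# A floor containing only a downstairs exit, one monster and one potion:
floor = {(3, 4): 2, (10, 20): 77, (31, 0): 130}
encoded = sparse_encoding(floor)
print(full_grid_bytes(), "bytes naively,", len(encoded), "bytes sparsely")
```

Everything not listed is implicitly blank floor, which is exactly the trick of not storing the empty expanses at all.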
A much more compact alternative to this (am I assuming it’s the only other option?) is to generate the hazards, powerups and exits randomly. That way the actual map data are compressed practically to nothing, except that if it’s truly random you get a different set of maps every time and you can never play the same game twice. This is fine for most purposes, but in fact because classical computers are so deterministic, true randomness is practically impossible to achieve using purely digital circuits. It does exist out there in the messy world, but inside the computer it’s really difficult to do. I used to do it by reading the refresh register of the Z80 CPU, which was responsible for maintaining the contents of dynamic RAM and changed from moment to moment, but that wasn’t truly random, and in any case only seven of its eight bits ever changed, so it needed to be masked using some kind of Boolean function. Another quasi-random approach I took was to take consecutive values from the code of the system software, which presumably can’t be done any more because of the tendency of modern computers to hide their internal gubbins from potential crackers for security reasons.
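The distinction can be shown in a couple of lines of Python, which I offer as a loose modern analogue rather than anything an eight-bit machine could do: the operating system’s entropy pool harvests messy real-world noise, while a deterministic generator given the same seed repeats itself exactly.

```python
import os
import random

# Bytes from the OS entropy pool, fed by messy real-world events.
# Two draws will (almost certainly) differ, and can't be reproduced later.
a = os.urandom(8)
b = os.urandom(8)
print(a.hex(), b.hex())

# A deterministic generator seeded identically repeats exactly:
r1 = random.Random(170)
r2 = random.Random(170)
seq1 = [r1.randrange(256) for _ in range(5)]
seq2 = [r2.randrange(256) for _ in range(5)]
assert seq1 == seq2  # same seed, same "random" numbers, every time
```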
However, it may not be desirable to have true randomness. A pseudorandom sequence of numbers can be generated using a straightforward deterministic algorithm. For instance – and this is off the top of my head – it could start with the eight-bit unsigned number 170, rotate it right three times, reverse the bits and XOR the result with 22, then use that value to generate the next number and so on. In fact I shall try this: 170, 188, 255, 233…and then straight back to 170, so it works, but ends up in a short cycle of four. 170 was the seed. If the seed was something different, a different sequence of “random” numbers would result, and this is key, because it amounts to the entire Roguelike map being compressed to that single seed number, which in this case is a whole number from zero to 255 inclusive. Hence the seed for a certain algorithm amounts to a compression of its results to a remarkable degree.
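Here is that generator worked through in Python, under one interpretation of “rotate right three times, reverse it and XOR with 22” – rotation by three bit positions and “reverse” taken as mirroring the bit order:

```python
def rotate_right(x, n):
    # 8-bit rotation: bits falling off the right re-enter on the left.
    return ((x >> n) | (x << (8 - n))) & 0xFF

def reverse_bits(x):
    # Mirror the eight bits, so 0b00000001 becomes 0b10000000.
    return int(f"{x:08b}"[::-1], 2)

def step(x):
    # One step of the made-up generator described in the text.
    return reverse_bits(rotate_right(x, 3)) ^ 22

seed = 170
seq = [seed]
while True:
    nxt = step(seq[-1])
    if nxt in seq:
        break  # we've looped; the cycle is complete
    seq.append(nxt)

print(seq)  # [170, 188, 255, 233] - then back to 170, a cycle of four
```

A cycle of four out of a possible 256 states is a hopeless period for a real game, which is why practical pseudorandom generators use recurrences chosen so that the cycle runs through most or all of the state space before repeating.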
Imagine, therefore, a computer which generates a whole universe in this way, and that you want the best of all possible universes generated from such a seed. What would that seed be? Might it be forty-two?
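On a far humbler scale than a universe, the seed-as-compression idea looks like this – a hedged sketch using Python’s library generator rather than the home-made one above, with entirely arbitrary counts of walls and monsters:

```python
import random

def generate_floor(seed, width=32, height=32):
    # The whole map "decompresses" from a single seed number: the same
    # seed always yields the same floor, so only the seed need be stored.
    rng = random.Random(seed)
    grid = [[0] * width for _ in range(height)]        # 0 = blank floor
    for _ in range(40):                                 # scatter walls
        grid[rng.randrange(height)][rng.randrange(width)] = 1
    for _ in range(10):                                 # monsters and powerups
        grid[rng.randrange(height)][rng.randrange(width)] = rng.randrange(3, 256)
    grid[rng.randrange(height)][rng.randrange(width)] = 2  # stairs down
    return grid

# Determinism: regenerating from the same seed rebuilds the identical floor.
assert generate_floor(42) == generate_floor(42)
```

Each distinct seed picks out one world from the space of possible worlds the algorithm can produce, which is the sense in which the question “what would the best seed be?” makes sense at all.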
Taking this back to the Big Bang Burger Bar, clearly a process does occur whereby the Universe unfolds from initial conditions, and these could easily have been different. However, in a deterministic universe things could only ever turn out one way according to those conditions. This is not in fact how things proceed though. Take the bomb dropped on Hiroshima. This consisted of uranium-235, each atom of which had a set probability of decaying. The chain reaction may have been set off by a cosmic ray or other external ionising radiation – I don’t know, but I suspect it wasn’t, because the core was sealed in a thick metal shell. Therefore the initial decay would’ve been random, i.e. it could’ve been any atom in the uranium used. Which atom split first is of no consequence to the ending of the Second World War, but what if the bomb hadn’t gone off at all? This is, perhaps surprisingly, entirely possible. There is a certain absolutely minute probability that the atom bomb could’ve been built exactly as it was and then not worked at all. The same applies to nuclear reactors, radioactive tracers in medicine and other situations. Such outcomes are fantastically improbable, but the probability is not zero. The result of an apparently dud bomb dropping on Hiroshima would not only have, perhaps temporarily, saved the lives of over a hundred thousand people immediately, plus however many are going to die as a result of the irradiation, which has no limit – people born thirty years from now could still die as a result of Hiroshima – but possibly have prevented the Cold War and everything which followed from that. This particular kind of “what if” is not the same as “what if Hitler had died in the First World War?”, because most of the processes which led to his survival were set in place at the Big Bang, ignoring the probabilistic nature of nucleosynthesis. The Cold War is not a deterministic event at all, from the viewpoint of the events of 6th August 1945.
The important question, though, is whether the Big Bang itself was deterministic, and to the extent to which I believe in the Big Bang, I don’t believe it was, although I have had a problem with this which is now resolved. I’ll come to that.
At an early point in the Universe’s history, a seemingly random distribution of matter and energy came to be from a previously homogeneous state. Hence there were regions where matter was more concentrated, from which more galaxies and stars formed, and others where it was less so, which may have become the voids. Within those galaxies and stars, other variations in concentration and other conditions occurred, ultimately leading to Adolf Hitler and to Rubik’s cubes becoming popular in the early 1980s. However, given that these conditions were not determined, or don’t appear to have been, ab initio, I think a probabilistic element exists at the Planck Epoch. The Planck Epoch is the first tenth of a trillionth (long scale) of a yoctosecond, during which nothing was determined, because at that scale of space and time cause and effect couldn’t operate. I suspect that the initial conditions of the Universe were probabilistic at this stage, which means that in fact there wasn’t a Universe at all at this point but a multiverse, because all the parallel universes related to changes in initial conditions started then and there, which at the time also happened to be everywhere.
It’s possible to think of the timeline of each universe as a literal line running through time, and to think of this particular universe as a line with other lines either side of it, but there’s a problem with seeing it in this way. In this model, the distance of a timeline from one’s own is related to its probability compared to the actual state of affairs. This would mean that two simultaneous 50% probable events could sit either side of the “real” timeline at the same distance from it. However, this can’t account for more than two events of exactly the same probability, and for this reason the probability part of the model has to be at least two-dimensional. After this point too, various branches occur which need to be arrayed in the two-dimensional space of the diagram, although the whole multiverse is not a tree but a forest. An example would be Hiroshima, with the branch where the bomb didn’t work being way off to one side somewhere. There’s a branch even further off where radioactivity hasn’t been discovered, because no atomic decay has ever been recognised for what it is or occurred anywhere it could be so detected – a world diverging from ours in 1896.
Two questions arise at this point. Is the model in which the universes branch or start off rooted in the Big Bang literally a space consisting of at least two dimensions? And, if it is, is it coherent to see the Big Bang as occurring at the edge of time? I’ll start with the first one. If the two dimensions of probability are literally dimensions, all the space between timelines has to be “filled in”, and probability is quantised like everything else in time and space because it is itself a property of space-time. The timelines aren’t separated by anything but form a continuum like the more familiar space and time. This is also one candidate for conceiving of probability even if it needn’t be like spatiotemporal dimensions. In fact the option for gaps does not exist, because a timeline which cannot happen, which is what those gaps would amount to, would have a probability of zero and therefore be at the edge of the probability dimensions. There are other kinds of space which are not literally “space”. One example is isotopic spin space, the abstract space in which, for instance, the proton and the neutron can be treated as two states of the same particle pointing in different “directions”. It doesn’t seem to be literally space in the geometrical sense. To be honest I can’t tell whether probability space is literally two or more dimensions of space-time or not, but I always think of it that way. Thus in my ‘Here Be Dragons’, the Yates-Leason device swaps two masses diagonally not only through space and time but also through probability, meaning that they both enter each other’s timelines while being lost to their own – it’s a plot device which enables travel to parallel universes, in other words.
It could be that something about the properties of probability rules out its being a kind of space, but if it is one, that makes the multiverse hyperspatial – it has more than just the dimensions of space and time. I’m not unusual in thinking of it in this way. The title of Murray Leinster’s story ‘Sidewise In Time’ refers to this, and it’s clear that Douglas Adams’ Hitch-Hiker universe also works like this. Whether it’s true or not is another matter.
The other question regards the edge of such a space. Space itself has no edge but is not infinite, which is also an answer to the irritating question “what is space expanding into?”, provided you don’t believe in ‘Brane Theory. Space is not expanding into anything because it isn’t a “thing” but a relationship between physical items, and what the idea of space expanding means is that the maximum possible distance between two locations is constantly increasing. The maximum possible distance, incidentally, is the distance at which the directions in which the two objects lie relative to each other swap over. This combination of distance and direction is what space is. Time is apparently simpler than that, at least in classical terms, because it’s conventionally conceived of as being just one-dimensional and therefore just what stops everything happening at once, with the extra feature of having a direction. This assumes one thing and ignores another: respectively, that there is only one time dimension (there might be more than one) and that things don’t really happen simultaneously. It also ignores the fact that how space-time divides up into space and time differs from observer to observer, according to how fast each is moving and how strong gravity is, but I don’t want to over-complicate things.
Nonetheless, there is a problem with the idea of space-time having an edge, which would be the beginning and end of time, space itself being non-Euclidean and finite without an edge as described above. It might be easier to explain using space. Suppose space is like a piece of fabric and there’s a rip in it. Where is that rip? If an object moves into that rip, it either skips over it and carries on on the other side or disappears completely. I can’t say “disappears into it” because there is no “into”. “Into” implies a destination, and destinations are in space. Hence there can’t be a rip in space, although there can be a pit or possibly a bump. If there were a bump as opposed to a pit, that would make faster-than-light travel possible, and it isn’t, because that would be too good to be true, and various other things would then also be possible, such as gravity control, tractor beams and practically limitless energy. The same problem emerges with space having an edge. There can’t literally be a “wall” off which things “bounce” or on which they get stuck just because of space as such. There can be a maximum distance, partly due to gravity, but if there were a limit as described there would simply be empty, inaccessible space beyond it, and a relationship of direction (“beyond”) and distance would still exist – in other words space would still be there with nothing in it. Such a situation would be more like the Jain cosmos, which is infinite except for a region six hundred light years high in the middle filled with a subtle substance allowing movement. That’s a conceivable set of circumstances, but probably not one which actually exists.
This raises a problem with time. The model of the multiverse I’ve just outlined does seem to have an edge, namely the beginning of time. The solution to this issue with space is that space kind of “loops round”, although I prefer to express that in terms of space being non-Euclidean and simply having a maximum distance at any one time, if indeed there is such a set of circumstances as “one time”. In temporal terms that would make time cyclical of course, which is quite popular and has even been respectable in cosmology, when it was thought by some that the Universe would ultimately contract to a point and start “again”. However, there isn’t enough mass in the Universe for that to happen, and in fact it will continue to expand forever if the Big Bang is all there is to the truth about its origins. This also seems to place a limit on the nature of other universes, because it means none of them can both expand at the same rate as this one and have more mass, or they would in fact be looping and the Big Bang wouldn’t be the start of every universe, plus it would lead to the same “ragged border” which would be involved in discontinuous parallel universes. However, the recent evidence suggesting that time flows in two directions from the Big Bang, making the Universe symmetrical in terms of parity, time and matter-antimatter, does away with the problem of a beginning to time, because the time before the Big Bang is an identical sequence of events happening backwards. Hence this does away with the idea of an edge to time, because there is no end to it, and there was no beginning – the Big Bang is in the “middle”, though not literally, because of eternity.
There is another way in which parallel universes can be made sense of which can coëxist with this model, but I won’t go into this now.
So it’s quite simple really isn’t it?