I Just Wanted To Know The Word For “Centre”!

Gàidhlig is, as you know, a language I find phenomenally hard. I’ve said in the past that the best way of learning it is to know it already. It’s a bit like when you stop and ask for directions to somewhere and the reply is “Oh, I wouldn’t start from here if I were you”. Nonetheless it’s got to be done.

The above picture is of the Burns Centre in Dumfries. The obvious joke will not be made here as Doonhamers are thoroughly sick of it and have heard it a thousand times. It occurred to me the other day, though, that although I know the Welsh words for “centre” – they’re “canol” and “canolfan”, which I picked up from almost getting a job at the Canolfan y Dechnoleg Amgen in Wales, though not speaking Welsh I have no idea what “-fan” does – I had no idea at all what the Gàidhlig word was. I do know the word for “middle” – meadhan – but “middle” is not “centre”. I’m also aware that in English the word is used figuratively as well as literally, a usage “canolfan” shares in Welsh, but I wasn’t cognisant of whether any such usage exists in Gàidhlig.

Well, it turns out, unsurprisingly, that it isn’t that simple, although the reasons it isn’t aren’t quite linguistic. It starts out fairly straightforwardly. The Robert Burns Centre is probably called something like An Ionad Raibeart Burns, assuming “Raibeart Burns” doesn’t need to be put into the genitive, and it’s even true that “ionad” means “centre” in figurative and literal terms, as well as meaning “location” and “situation”. I’m still confused as to how it’s pronounced, because there’s clearly a rule about whether the I or the O is sounded: I initially thought it was “yonnat” but apparently it’s “innet”. (I’m not bothering with IPA at the moment because I’m on the wrong device for it, and in any case it’s been said that the IPA is inadequate for transcribing this language, which is a whole other conversation.) So you might think you’ve got it sorted and everything’s very very good, but actually it isn’t, at least from about 2008 CE onward, because at that point someone did something subversive.

Technical language is often perceived as a barrier to understanding which maintains an in-group and an out-group. This is certainly sometimes true, but at other times not using it makes it almost impossible to talk about something. Cults – sorry, new religious movements – often seem to use language this way in order to exclude outsiders from understanding what they’re talking about, and also often to make it seem to their followers that the leaders know what they’re on about. When everyday words are pressed into technical service instead, as notably happens in botany with “nut” and “berry”, people often object, because it leads to bananas being called berries and peanuts not being nuts. In fact hardly anything is a nut. To hide this quandary away, scientists and mathematicians often draw on Greek or Latin as a kind of nice neat cover for the messy box of what to call things. Hebrew and Sanskrit are also sometimes used – Sanskrit terms are in fact formally used to refer to phenomena in Gàidhlig, such as the svarabhakti vowel. Rather refreshingly, “ionad” is used thus, presumably as part of some kind of statement against the Latinisation or Hellenisation of technical terms.

Understanding this usage is possibly one of the steepest learning curves I’ve ever encountered. This is how it’s described when you type something related into Google:

As a Grothendieck topos is a categorified locale, so an ionad is a categorified topological space. While the opens are primary in topoi and locales, the points are primary in ionads and topological spaces.

Clear? Didn’t think so. It isn’t even as straightforward as being about topology or group theory. It sounds like a concept related to topological space, but that’s only tangentially true, because apparently this is category theory. The idea seems to be to take various branches of maths and generalise the concepts and processes which exist and occur within them. It feels like a theory of everything but it isn’t. It’s kind of metamathematics, although I’d prefer to reserve that idea for something like number theory. It involves three types of thing, one made up of the other two: categories, which consist of objects and morphisms. To my rather naive brain, this sounds a bit like group theory and a bit like topology, and probably a bit like linear algebra if I knew what that was, which I don’t, so I’m going to wrestle with this here and try to understand it.

My first thought was the Canterbury Cross, which was used as the emblem for my secondary school and looks like this:

Back when I’d just started at that school, we were supposed to make an ashtray, because in those days tobacco smoking lacked the stigma it has now been allowed to acquire. This involved taking a square sheet of aluminium and clipping the corners inward to make a shape somewhat like this, then folding them inward. Being dyspraxic, my attempt to do this was catastrophic. I was shockingly bad at practical subjects, or rather the ones I was actually allowed to do, which is again another story. On one occasion I was simply sawing a piece of perspex into the right shape and it literally exploded very loudly in a puff of acrid smoke, to which my plastics teacher’s response was to ask, wearily, “What have you done now?”. I could go into the gender politics of all this but anyway, we’re talking about the Canterbury Cross. My initial attempt at understanding an ionad is that it’s like the middle portion of this cross: you can trace a line from it to each of the arms, but not from one arm to another. This is not quite what I mean, of course, because it never is. In fact it’s dead easy to draw a line from one arm to another, but the centre still seems to be connected to the arms in a way they aren’t connected to each other. This is probably not it though.

Category theory is apparently difficult because it’s an abstraction of an abstraction. Group theory and topology are already quite abstract, though still applicable quite easily. Category theory takes it a step further. I’m going to have another go.

Maths generally consists of objects and operations on those objects. 2+2=4. Addition is the operation there and the numbers are the objects. Likewise, the top slice of a Rubik’s cube can be turned clockwise through a right angle and then turned back, and there are twelve possible sets of arrangements of a Rubik’s cube which it’s impossible to reach from any of the other sets. These operations of turning are within these sets of arrangements and this is a typical application of group theory. The sets of arrangements are the objects. I’m currently trying to imagine a species of intelligent extraterrestrials who grasp group theory intuitively but can’t count, because they have five sexes. More on that another time. Anyway, geometry has this too. A shape can be reflected, magnified, rotated and so on. In each of these cases and many others, there are the operations and the elements. Category theory summarises branches of mathematics by turning them into a series of items joined together in various ways by arrows, so it aims to do to maths what maths aims to do to the world, and it does it with things like this:

Presumably – and this is just me – if you can find two branches of maths which can be summarised using the same diagrams, they’re really the same branch. And if one branch has a diagram the other lacks, while all their other diagrams match, it’s worth looking into whatever that extra diagram represents, as it might well work in the other branch too.
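To make the items-joined-by-arrows picture slightly more concrete, here’s a minimal sketch in Python of a tiny made-up category: three objects, a few labelled arrows, and a composition rule. All the names are invented for illustration; the point is just that composing arrows gives arrows, and identities do nothing, which is what the diagrams demand.

```python
# A toy category with objects A, B, C. Morphisms are labelled arrows,
# stored as (source, target) pairs. The names are hypothetical.
objects = {"A", "B", "C"}

morphisms = {
    "id_A": ("A", "A"), "id_B": ("B", "B"), "id_C": ("C", "C"),
    "f": ("A", "B"), "g": ("B", "C"), "g.f": ("A", "C"),
}

def compose(second, first):
    """Compose first then second, provided the arrows line up."""
    src1, tgt1 = morphisms[first]
    src2, tgt2 = morphisms[second]
    assert tgt1 == src2, "arrows don't line up"
    if first.startswith("id_"):
        return second      # identity on the source does nothing
    if second.startswith("id_"):
        return first       # identity on the target does nothing
    return f"{second}.{first}"

# Composing f: A -> B with g: B -> C gives the arrow g.f: A -> C,
# and composing with an identity changes nothing:
print(compose("g", "f"))      # g.f
print(compose("f", "id_A"))   # f
```

That really is all a small category is: dots, arrows, and a rule for chaining arrows that respects identities.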

I seem to have gone rather far from the Canterbury Cross here, and that might well be due to there being no connection between the two topics. In fact I think there’s bound to be a connection because of the nature of the shape, but it might not be what I think it is. For instance, you can reflect a Canterbury Cross horizontally, vertically or diagonally without changing the shape, and you can also rotate it through a right angle, so there are clearly symmetry groups which apply to it which don’t apply to, for example, the conventional long cross used as a symbol of the Christian faith. So things can be done to this shape which are relevant to group theory. The thing is, though, that you could do the same kind of thing with a Star of David – different in detail, because that shape can be rotated to fit onto itself in ways a Canterbury Cross can’t – and all of that could be represented very generally in a category theory diagram, but there’s nothing special about it. So it seems I haven’t got anywhere near understanding what an ionad actually is, except that it’s something to do with category theory.
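The symmetry-group point about the cross can be made concrete in a few lines of Python. This is a sketch under my own assumptions about the shape: the four rotations of a square-symmetric figure like the Canterbury Cross, represented as permutations of its arms, form a group – composing any two gives another rotation, and each one can be undone.

```python
# Label the four arms of the cross 0, 1, 2, 3. A rotation is a tuple
# where position i holds the arm that moves into slot i's place.

def compose(f, g):
    """Apply g first, then f."""
    return tuple(f[g[i]] for i in range(len(g)))

identity = (0, 1, 2, 3)          # leave the arms alone
quarter  = (1, 2, 3, 0)          # rotate a quarter turn
half     = compose(quarter, quarter)
three_q  = compose(half, quarter)

# A quarter turn followed by three more quarters gets you home:
assert compose(three_q, quarter) == identity

# Closure: composing any two rotations is still one of the four rotations.
rotations = {identity, quarter, half, three_q}
assert all(compose(a, b) in rotations for a in rotations for b in rotations)

print(half)  # (2, 3, 0, 1)
```

A Star of David would get a bigger set of tuples (sixfold rotation plus reflections), and the long Christian cross a smaller one, which is exactly the group-theoretic difference between the shapes.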

So my next guess, which might well be wrong for all I know, is that a Grothendieck topology is a way of looking at those diagrams which compares them, so that one can generalise from them and make useful advances by comparing different mathematical fields. Is it that? I don’t know! And it seems I have to work out what that is in order to work out what “ionad” actually means in this sense.

So I seem to have arrived in some sort of state of conceptual splodge and confusion. It almost feels like I can’t bridge the gap between incomprehension and the holy grail that is the concept of “ionad”. I feel the same way about calculus, one flavour of which I think of as the ability to tell which way a wiggly line will go next, and I wonder vaguely whether astrologers use it to locate planets or whether they just use ephemerides; that’s as far as I can get. With calculus, by the way, I’m aware of there being two mutually inverse types. With category theory, who knows? How do you get to the point where you can confidently say you understand something? How do you know you haven’t got it completely wrong? Well, usually I suppose you can test it in the real world. If I wire a three-pin plug wrongly I will briefly know when the electric shock throws me across the room and kills me, and if I make a (vegan) soufflé wrongly I will become aware of that when it collapses as soon as I take it out of the oven. But in this case, how will I know when I’ve got it wrong? It seems too abstract to test. I want to savour this state of personal bafflement and adumbrate its characteristics.

(Can you even make vegan soufflés?)

So to survey my mathematical knowledge, I can manage the following:

  • I scraped an O-level in maths. This probably doesn’t indicate much about how well I understand it though, because I’m fluent in French even though I failed the O-level but not in Spanish even though I have a B at GCSE.
  • At first degree level, I’ve studied statistics to the extent that I can see through deceptive practices which purport to employ it, use it in my own quantitative research and assess the quality of other quantitative research. However, stats is arguably not maths.
  • Also at first degree level, I’m very confident in the use of formal logic and have extended my knowledge beyond the mere understanding of sequents, truth-tables and well-formed formulae, and I also have a firm grasp of the foundations of mathematics, which extends into number theory.
  • I’ve pratted about a bit with stuff like fractals, non-Euclidean geometry and things which take my fancy on the lower levels of the kind of fun maths which crops up in the likes of Martin Gardner’s and Douglas Hofstadter’s writing.
  • Not sure if it’s maths but I’m kind of okay at coding provided OOP isn’t involved and it follows an imperative paradigm.

I’m also not scared of maths. I’m not wonderfully good at it but in the same way as someone who feels almost alien to me might enjoy a kick-about with a football of a Saturday afternoon as opposed to playing in the FA Cup, I dabble a little bit. For instance, I’m motivated to find a non-iterative algorithm for calculating square roots although I haven’t got round to it yet. I also find it incomprehensible how people can say that they’ve never applied most of the maths they learnt at school and wonder how hard their lives must be as a result, unless they don’t realise they’re applying it. Last night I used E=mc² and 4πr² along with a bit of trig to work out how much energy our solar panels are likely to get from the Sun today, and to me that seems useful although somewhat inaccurate owing to the fact that the planet inconveniently has an atmosphere, furthermore with clouds in it, and that really is not that hard although it takes quite a long time if you don’t use a calculator, and where’s the fun in that? I suppose that has the same role in my life as football does in someone else’s. But I still can’t understand this. I also wish I knew how close I was getting.
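For what it’s worth, the solar-panel estimate goes something like the following sketch. The numbers – panel area and the Sun’s elevation – are invented stand-ins, the atmosphere and its clouds are ignored as admitted above, and only the inverse-square and trig parts of the calculation are shown.

```python
import math

# The Sun radiates mass-energy (E = mc^2) at a known rate; spreading that
# over a sphere of radius one astronomical unit (the 4*pi*r^2 part) gives
# the flux arriving at Earth.
L_SUN = 3.846e26        # solar luminosity, watts
AU    = 1.496e11        # Earth-Sun distance, metres

flux = L_SUN / (4 * math.pi * AU**2)   # watts per square metre, above the atmosphere

# The trig part: a flat panel only catches the component of the flux
# perpendicular to it. Elevation and area below are assumptions.
sun_elevation_deg = 30                 # an assumed, vaguely Scottish, sun height
panel_area_m2     = 10                 # an assumed array size

power_w = flux * math.sin(math.radians(sun_elevation_deg)) * panel_area_m2

print(round(flux))     # close to the "solar constant" of about 1361 W/m^2
print(round(power_w))  # watts reaching the panels, pre-atmosphere
```

Clouds, air mass and panel efficiency would all knock this down considerably, which is the inaccuracy conceded above.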

Let’s have another go.

There are these things called topoi, and other things called pre-sheaves and sheaves, and they relate to this situation. Topoi appear to be places set up to do particular kinds of maths comfortably. Is that what they are? Well, I just asked an AI and it may have been trying to please me because that’s what they do, but it agreed that that’s what a topos is. It also started talking about sheaves, so yikes.

Okay, so what’s a sheaf and why are there pre-sheaves? My initial thought here is that we have conceptual ring binders, we’re wandering all over a large warehouse covered in mathematical papers from all sorts of fields, and we’re collecting them together in the ring binders according to what category (there’s that word again) they’re in, and that the pre-sheaves are the empty binders and the sheaves are the full binders. Is that it? Plug that metaphor into an AI and see what it says…

Right, done that with two different AI chatbots and I’m wary that they may be eager to please, but both of them said that I wasn’t too far off although open sets are involved. I think of open sets as akin to the Bedeutungen of family resemblance definitions as opposed to those of definitions based solely on necessary and sufficient conditions, and to be honest I think I’m right about that. I could be confidently incorrect of course. And once again, leaving the sycophancy problem aside, although I’m not completely correct, I’m not one hundred percent wrong either. There also seems to be something about them sharing a corner.
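The ring-binder picture can be pushed a tiny bit closer to the real thing. Here’s a hypothetical miniature in Python: a space with just two points, its open sets, and an assignment of “sections” (here, functions into {0, 1}) to each open set, with restriction maps that forget part of a section. The sheaf condition – compatible local sections glue uniquely into a global one – can then actually be checked. This is my own toy, not anything from a textbook.

```python
# A two-point space {1, 2}. Its open sets, in this toy, are all subsets.
opens = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]

def sections(u):
    """All functions from the open set u to {0, 1}, as dicts."""
    result = [{}]
    for point in sorted(u):
        result = [{**s, point: v} for s in result for v in (0, 1)]
    return result

def restrict(section, u):
    """Restrict a section to a smaller open set u (forget the rest)."""
    return {p: section[p] for p in u}

# Gluing: a section on {1} and a section on {2} agree on their (empty)
# overlap, so they glue to exactly one section on the whole space.
s1, s2 = {1: 0}, {2: 1}
glued = {**s1, **s2}
assert glued in sections(frozenset({1, 2}))
assert restrict(glued, frozenset({1})) == s1
assert restrict(glued, frozenset({2})) == s2
print(glued)  # {1: 0, 2: 1}
```

In the warehouse metaphor, `sections(u)` is what a binder for region `u` could contain, `restrict` is tearing out the pages about a sub-region, and the gluing check is the rule that makes a mere pre-sheaf count as a sheaf.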

As I’ve said, there was this guy called Alexander Grothendieck who was unlucky enough to be born in Germany in 1928. After a traumatic childhood, he became a mathematician, some say the most important of the twentieth century CE. At some point he actually left mathematical academia and became a political activist and a religious recluse, and he gave lectures in Vietnam while being bombed. I know very little about him but I wonder, given that limited information, whether his life indicates the potential role of maths in people’s lives as a source of inner peace, and also the affinity between mathematical beauty and the spiritual realm. I am actually trying to do that right now in writing this. I’m trying to escape, and I hope to provide others a temporary respite, from the vexing nature of current political developments. All that said, I also wonder if it is in fact germane to the current situation in some way. For instance, while I’m writing this I’m not worrying about Gaza, the rise of global fascism or the toilet problem. It may however be the source of a potential argument against the supreme court ruling on “single sex” spaces, but it doesn’t have to be to serve a therapeutic purpose.

And I’ll carry on. I’d say that Grothendieck was responsible for innumerably many mathematical ideas except that because he was a mathematician one must pick one’s words carefully and note that in fact the cardinality of his ideas is not the same as the power of the continuum and that, depending on how you count ideas, he probably had a finite number of them. On the other hand, it might depend on what counts, so to speak, as an idea. In any case, one of the many things he came up with is the aforementioned Grothendieck Topology. I’m abandoning this for now due to sheer bafflement and lack of mental energy.

Here’s a thought. England’s surface southeast of the Tees–Exe line, and the English coastline from the Tees to the Exe, are very different in character from Scotland’s surface and coastline. Is it possible that the concept of the ionad is more useful or applicable to those aspects of Scotland than to that part of England? Of course I’d like that, because the concept itself comes from Gàidhlig, or rather from Q-Celtic. The big difference between the two coastlines, to start with, is that Scotland’s is more fractal than lowland England’s – more fractal, actually, than any part of England’s, but the contrast is most striking defined this way. Something similar applies to mainland Scotland together with its islands, to Scotland with its lochs and sea lochs, and by extension to Scotland including its mountains. And this has practical applications: it’s harder to get around here than in lowland England, and you get situations where the Mull of Kintyre is seventy kilometres from Kilmarnock as the crow flies but 272 kilometres by road, mainly due to Loch Fyne. Here there can be steep slopes, lochs in the way and a very fractal coastline, or islands at varying distances from each other which may even exist intermittently according to the tide. Southeast England is much smoother and less complicated on the whole. At the same time, it’s worth remembering that the ionad is a concept from an abstraction of abstractions, which may therefore still not apply very well to the physical geography of Scotland.

Except that I think it does. There are several aspects to this place resulting from its geology, which has consequences for its terrain, coastline, transport network, biomes, other aspects of ecology, dialects and presumably other cultural aspects. For instance, here’s the Scottish rail network:



. . .and this is the Central Belt’s rail network, found in the rectangle within the other map:

Due to the population distribution and engineering difficulties, the complexity of the rail network is inversely related to the complexity of the Scottish terrain. It seems feasible that some kind of table of ratios could be constructed between the fractal dimension of the surface in a particular area and the number of train stations or connections there, and there might also be some mileage, so to speak, in working out how long it takes to get between two places by rail and comparing that to how long it takes by road – separated into walking, cycling, driving and taking the bus, or for that matter a ferry or plane. All this analysis could reveal things about transport policy and the decisions made by the Westminster or Scottish governments on these matters. Considering the fractal nature of the terrain and coastline together with the topology of the various transport networks also suggests that it would be useful to find some way of unifying these two different mathematical ways of considering the country.
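The fractal-dimension half of that comparison can be sketched with box counting, the usual cheap way of estimating such dimensions. The “coastlines” below are invented: a straight line standing in for East Anglia, and a Koch curve standing in, very loosely, for the west Highlands; no real map data is involved.

```python
import math

def box_count(points, eps):
    """Count how many eps-sized grid boxes the points touch."""
    return len({(int(x // eps), int(y // eps)) for x, y in points})

def box_dimension(points, eps_big=1/9, eps_small=1/81):
    """Estimate fractal dimension from counts at two box sizes."""
    n_big, n_small = box_count(points, eps_big), box_count(points, eps_small)
    return math.log(n_small / n_big) / math.log(eps_big / eps_small)

def koch(p, q, depth):
    """Points along a Koch curve from p to q (one point per sub-segment)."""
    if depth == 0:
        return [p]
    (x0, y0), (x1, y1) = p, q
    dx, dy = (x1 - x0) / 3, (y1 - y0) / 3
    a, b = (x0 + dx, y0 + dy), (x0 + 2 * dx, y0 + 2 * dy)
    peak = (x0 + 1.5 * dx - math.sqrt(3) / 2 * dy,
            y0 + 1.5 * dy + math.sqrt(3) / 2 * dx)
    pts = []
    for s, e in ((p, a), (a, peak), (peak, b), (b, q)):
        pts += koch(s, e, depth - 1)
    return pts

smooth = [(i / 10000, 0.0) for i in range(10000)]   # flat "East Anglia"
wiggly = koch((0.0, 0.0), (1.0, 0.0), 6)            # jagged "west Highlands"

print(round(box_dimension(smooth), 2))   # close to 1
print(round(box_dimension(wiggly), 2))   # noticeably higher, near the Koch value of about 1.26
```

The station-counting half would then just be a matter of dividing numbers like these into counts from the rail map, which is exactly the table of ratios imagined above.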

It goes beyond that too. The Gàidhlig language is, at least from the outside, characterised by remarkable variation in accent. Moreover, the distribution, both today and historically, of different dialects and languages in Scotland is likely to be connected to the terrain and the accessibility of different parts of the country. In England, at least historically, there has been notable variation in accent in Lancashire in particular, and similar variation seems to occur in the Gàidhealtachd, to the extent that if your Gàidhlig is poor, people might just perceive you as being from a different island rather than as not very good at the language. This is because of the divisions caused by multiple islands, and by glens separated by peaks – a situation similar to the one that obtains in New Guinea, and interestingly also in the sea around New Guinea, causing great linguistic and biological diversity respectively. It’s been said that Scotland is able to masquerade as all sorts of other countries, such as Norway, the Caribbean and maybe Austria. All of this variation is linked to the terrain, and I’m sure it could be usefully modelled mathematically. I’d also be very surprised if it were irrelevant to ecology and biomes.

Therefore, there are several different fields of maths which could be used to capture and express the complexity of this country in various useful ways. For instance, anyone who’s played Britannia will be aware that it usually takes ages for the Picts to disappear, something reflected in real-world history, and I guess this is because they were hunkered down in remote areas which couldn’t easily be accessed by other peoples; maybe the living was also so hard there that the others didn’t bother. This hypothesis could, I think, be tested using some kind of mathematical approach. There is also a natural tree line in the Cairngorms, though only over a very small area, and there seem to have been glaciers there, again in a small area, until something like the seventeenth century. It took longer for wolves to become extinct here than it did in England. There are all sorts of things like this which result from the distinctive characteristics of the northwestern part of Great Britain and its associated smaller islands, which can be modelled mathematically in different ways, and they’re practically very important. The logistics of moving things or oneself around the country, for example, or of understanding the locals in different places, are connected to this.

Here, then, are various mathematical ways of approaching the question of Scotland.

Firstly, the inverse correlation between rail network complexity and terrain complexity lends itself to graph theory, operations research and algebraic topology. In the last, islands and mountains constitute holes. The problem of finding the most efficient routes between places belongs to operations research. So with this there’s:

  • Graph theory
  • Algebraic topology (I hold my hands up here to say I only have a vague grasp of what this is).
  • Operations research (which was actually my dad’s job).
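The route-finding side of this – where operations research meets graph theory – can be sketched with Dijkstra’s algorithm on a made-up network. The town names are real but the distances are invented for illustration, loosely echoing the Kilmarnock-to-Kintyre detour mentioned earlier.

```python
import heapq

# A toy weighted road graph. Distances are hypothetical, not real mileages.
roads = {
    "Kilmarnock": {"Glasgow": 40, "Ayr": 20},
    "Glasgow": {"Kilmarnock": 40, "Tarbert": 160},
    "Ayr": {"Kilmarnock": 20},
    "Tarbert": {"Glasgow": 160, "Campbeltown": 60},
    "Campbeltown": {"Tarbert": 60},
}

def shortest(graph, start, goal):
    """Length of the shortest path from start to goal (Dijkstra)."""
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue        # stale queue entry
        for nxt, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(queue, (nd, nxt))
    return None

print(shortest(roads, "Kilmarnock", "Campbeltown"))  # 40 + 160 + 60 = 260
```

Loch Fyne shows up in the graph as the sheer absence of any direct edge: the algorithm has no choice but to go the long way round, which is the whole crow-flies-versus-road point.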

Secondly, the isogloss patterns in Gàidhlig accent variation could involve:

  • Graph theory again, regarding communities as nodes and communication links as edges of various weights.
  • Topological spaces, where dialect regions are open sets with isoglosses as boundaries between them.
  • Sheaf theory, apparently. Goodness knows how. I haven’t got to the point where I understand this much except to imagine lots of people wandering around with ring binders in a warehouse with scattered random maths papers all over the floor. I’m getting there.

Thirdly (this is stylistically frowned upon isn’t it?), biome variation:

  • Cellular automata, of all things! The idea being that in a particular area there may be more or fewer of the resources required by particular species, which determines whether they flourish or something else does – or perhaps something else flourishes on the remains of whatever didn’t.
  • Statistics, to pick up the patterns of biomes. In particular I strongly suspect that there’s more biodiversity at the boundaries between biomes than deep within large homogeneous ones, and Scotland of all places has those boundaries in spades; I’d like to look into that.

Fourthly, climate:

  • Fluid dynamics (some of these things are just words to me, but not this one).
  • Differential equations (these definitely are).

Fifthly, the legendary fractal nature of the coastline:

  • Fractal geometry (who’d’ve thought?).
  • Chaos theory.
  • Something called Measure Theory.

Sixthly, the power law regarding the sizes and distribution of lochs and islands:

  • This is again fractal geometry, as it’s essentially a vertical version of the coastline issue.
  • Statistical distributions along the lines of Zipf’s Law and, I’m guessing, the Pareto distribution, alias the 80:20 rule.
  • The phenomenon of clustering in random and pseudorandom distributions, manifested here on a plane.

In this case, deviations from these tendencies are themselves interesting. For instance, it might turn out that loch areas buck the heavy-tailed pattern, with the many small lochs together making up more than half of Scotland’s water area instead of a few giants dominating. As things stand, though, the giants dominate: Loch Ness alone is said to hold more fresh water than all the lakes of England and Wales combined.
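The heavy-tail point can be illustrated with synthetic data. The sketch below draws a thousand made-up “loch areas” from a Pareto (power-law) distribution – the shape parameter is an arbitrary choice, not fitted to any real lochs – and checks how much of the total the biggest tenth accounts for.

```python
import random

random.seed(42)  # make the illustration repeatable

# A thousand synthetic "loch areas" drawn from a Pareto distribution.
# The shape parameter 1.2 is an invented choice; smaller values mean
# heavier tails and more dominance by the giants.
areas = sorted((random.paretovariate(1.2) for _ in range(1000)), reverse=True)

total = sum(areas)
top_share = sum(areas[:100]) / total   # share held by the biggest 10%

print(f"biggest 10% of synthetic lochs hold {top_share:.0%} of the water")
```

With a tail this heavy, the largest hundred pseudo-lochs end up with well over half the total area; a real survey that came out otherwise would be exactly the sort of interesting deviation meant above.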

The fields which come up repeatedly here are fractal geometry, topology and actually measure theory, which I mainly left out because I don’t know what it is. It seems to be bound up with the Banach–Tarski paradox, which includes such oddities as being able to disassemble a single ball into a finite number of components and then build two balls of the same size as the first out of them, or taking a ball bearing apart mathematically and reassembling it into an object the size of the Earth. Clearly these things can’t actually be done, but they start to seem feasible when you look at the details, because a ball, considered as a set of points, is infinitely large, and you can take an infinite number of items out of an infinite set and still be left with an infinitely large set. Measure theory addresses this problem by providing a way of deciding exactly how big sets are – and which sets can meaningfully be assigned a size at all. I can’t take this any further.

So, there are these three areas of maths, along with certain others, which come up at least a couple of times: measure theory, topology and fractal geometry. Just in passing, Scotland is not unique in this respect, because there are other countries and regions in the world to which these same features apply. These include the aforementioned New Guinea, the South Island of Aotearoa/New Zealand, Japan (maybe Hokkaido even more than the whole of Japan), Switzerland, Norway and of course Nova Scotia. Not all of these have the full set, and it should also be borne in mind that there are “anti-Scotlands”, including the Netherlands, countries which include bits of the Sahara Desert, most of Antarctica and Kansas. I’d also be very interested to know how North Carolina fits in. It isn’t that these countries and regions are boring, or even that the same mathematical fields fail to apply to them at all; it’s just that the fields don’t apply to them as usefully or interestingly. In British terms, the opposite of Scotland in these respects is probably East Anglia. Hence this comparison has already become meaningful and productive and hasn’t just been a waste of time. Also, seriously, no disrespect to the places which are “boring” in this respect; for all I know there are different aspects of those countries to which exactly the same mathematical fields could become relevant, such as the distribution of sizes of grains of sand in the Sahara.

All of these fields include concepts of dimension, open sets, functions and spaces. The concepts of sheaves and ionadan also come up, so at long last I might finally be able to declare myself ready to understand what “ionad” actually means.

An ionad – the word is actually taken in its Irish sense rather than the Gàidhlig one, although barring accent and pronunciation it’s the same word – means “place” or “locale”. In that way it’s a little similar to the concept of a locus in geometry, and it aims to mix topology and category theory in such a way as to allow one to reason spatially in a point-free yet structured manner. An ionad seems to be like a topological space whose open sets are the starting blocks, with the points derivable from those open sets – although the definition quoted earlier says the points are primary in ionads, so I may well have this backwards. If topology, category theory and sheaf theory are each thought of as circles in a Venn diagram, like red, green and blue in additive colour or cyan, magenta and yellow in subtractive colour, an ionad is the bit in the middle, which is white in the former case and the infamous “brown splodge” in the latter. Of course I’m nowhere near understanding sheaf theory at this point and still have the Filofax people wandering all over the explosion in the maths warehouse in my head, but I’m closer. But apparently an ionad is useful in the following ways (and others):

  • It explains how different parts of the ScotRail network interact without assuming the points are primary, so presumably it could work as a way of explaining train delays and replacement bus services.
  • It helps to describe when native speakers of Gàidhlig are likely to perceive each other as speaking with different accents and when they’re likely to hear them as familiar, even when there are some differences in those accents.
  • It enables you to model what happens on the borders of two biomes such as peatland and Caledonian rain forest rather than having to think of the border as merely a line between two more easily understood biomes. There, it allows smooth models rather than sudden jumps.
  • You can spot scaling rules about the coastline of Scotland and understand its geometry without having to think of it as a series of straight lines or curves.
  • It does the same thing with the size distribution of lochs, which is hardly surprising considering the Scottish terrain is just a plane-based version of the line which is the coastline.

The idea underlying all of this is that you don’t start with the points but with open ideas about what categories might be needed, so you might think in terms of Highland and coastal towns, towns with active train stations, and the Gàidhealtachd.

So, to finish: while I still don’t have a confident understanding of what an ionad is, I do very much feel that as a mathematical concept it seems particularly apt as applied to Scotland. My general feeling is that it’s like oil floating on water, or an air bubble rising through a burn, but with the boundary between them – the skin of the bubble – treated as a thing in its own right, and as primary. That, I think, is what an ionad is!

And I’m perfectly happy for someone to come along and explain why I’m completely wrong.

Neutrinos & Neŭtrinoj

Okay, let’s just go hell for leather on this. Most of it is just going to come out of my head.

When I was seven, I was really interested in nuclear physics. I had this naive idea that if I knew everything all matter and forces were made of down to the smallest level, I would effectively know everything, full stop. The error of this thought was borne in upon me when I realised I had no idea what the scientific name of the freshwater shrimp was, anything about that animal’s anatomy and so on, and more broadly that just because you knew everything about the building blocks of everything in the Universe didn’t mean you knew much about the things that were made of ’em. Seven year olds, eh?

However, one thing that did impinge upon my learning was that atoms were made up of electrons orbiting a nucleus made of neutrons and protons, that nuclei were held together by the strong nuclear force carried by pions, that light was made up of photons, and that protons and neutrons were part of a larger class of fairly massive particles called hadrons, which were generally composed of smaller particles called quarks held together by gluons. This is now such a long time ago that nuclear physics has changed considerably since then and some of the ideas are very outdated. For instance, at one point it was thought that quarks were made up of smaller particles called “rishons”, the idea being that the standard model could be simplified by seeing quarks as combinations of just two kinds of rishon. Also back then, the top and bottom quarks had yet to be detected, so people mainly talked about up, down, strange and charm, and it hasn’t been lost on me that Quark, Strangeness and Charm is the title of a Hawkwind album. This was, however, about three years before the album came out, and at the time I would’ve been entirely contemptuous and condescending about this piece of music with pretensions to be high art. I should hastily add that I’m not like that any more.

Now it was quite easy for my undeveloped child’s mind to understand all this. Basically, each particle had mass, charge (or no charge) and spin, and these properties defined what that particle was. They were divided into bosons and fermions, and also into leptons, mesons and baryons according to their mass, from light to heavy. I was for some reason particularly excited about mesons. Many of these particles are unstable, and when they break up their mass, charge and spin need to be conserved. To take a completely made-up example, say there’s a meson with a mass five hundred times that of an electron. If it broke up into two smaller particles each with a mass 250 times that of an electron, that would conserve mass. However, this is not quite right, because some of a particle’s mass is bound up in binding energy, so each of the pair would actually have a mass more than half of the meson’s. That’s not in any way a real example, but because I’m ploughing on at a rate of knots, I’m not looking any details up and can’t remember the properties of a pion, for example. Leptons, incidentally, are not like that: they’re lighter, not made of anything smaller and, in the electron’s case at least, stable. This is about mesons and baryons. If you’d asked my seven-year-old self, I’d have been able to give you the properties, but my late middle-aged and addled brain can no longer so do.

But, more simply, if a neutral particle emits an electron it will become positively charged, because electrons are negatively charged and charge needs to be conserved.

Not all particles have mass. If an object has a mass of zero, it has no choice but to move at the speed of light, which is actually not specifically the speed of light but just the fastest speed there is, to quote Monty Python. Nor must every object have charge. Most objects we come across in everyday life, such as combs and loo rolls, tend to have no charge most of the time, although if you comb hair with a comb it will become charged and be able to pick up small bits of loo roll, so it does happen. On a subatomic scale, though, there are essentially charged particles, namely the electrons and protons making up atoms, but there also have to be the uncharged neutrons which stabilise the nucleus and prevent it exploding. Both protons and neutrons are made up of charged quarks: a proton is two ups and a down, whose charges of +⅔, +⅔ and −⅓ add up to one unit of positive charge, and a neutron is an up and two downs, +⅔, −⅓ and −⅓, which cancel out to no charge at all. So that’s also simple.
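
The quark arithmetic is easy to check. Assuming the standard assignments of +⅔ for the up quark and −⅓ for the down (figures I’m supplying, not spelt out above), exact fractions do the job:

```python
from fractions import Fraction

# Standard Model charge assignments, in units of the electron's charge
up = Fraction(2, 3)
down = Fraction(-1, 3)

proton = up + up + down     # a proton is two ups and a down
neutron = up + down + down  # a neutron is an up and two downs

print(proton)   # 1: one unit of positive charge
print(neutron)  # 0: no charge at all
```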

Spin is however really not simple. Although this makes no sense to our macroscopic brains, spin is quantised and a property like charge and mass. Also, even more strangely, a particle can have non-integral spin, and if it does, it has to be rotated through two full turns, 720°, before it’s back in the state it started in. Look at it this way: there’s a gyroscope spinning in a clockwise direction when viewed from above. Usually, turning it through one full circle brings it back to exactly where it began. If, however, it had non-integral spin, after one full turn it still wouldn’t quite be the gyroscope you started with, and you’d have to turn it through a second full circle to restore it. This is very weird and as I’m typing this I wonder if I’ve got it wrong, but it’s been said that if you think you understand quantum mechanics, you don’t understand it, so presumably I do understand this because I don’t.
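
For what it’s worth, the standard statement is that a spin-½ particle has to be rotated through 720°, not 360°, to come back to the state it started in, and that drops out of the arithmetic. The rotation of a spin-½ state about one axis multiplies its two components by e^(∓iθ/2), so one full turn gives −1 rather than +1 (the details of the matrix are my sketch, not from the text above):

```python
import cmath

def rotate_z(theta):
    """Diagonal of the 2x2 spin-1/2 rotation matrix about the z axis."""
    return [cmath.exp(-1j * theta / 2), cmath.exp(1j * theta / 2)]

tau = 2 * cmath.pi
once = rotate_z(tau)       # one full turn: the state comes back multiplied by -1
twice = rotate_z(2 * tau)  # two full turns: the state is back exactly

print(once)   # both entries roughly -1
print(twice)  # both entries roughly +1
```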

So why am I bothering to mention this little detail? Well, it was discovered some time ago that the way subatomic particles break down must conserve mass (taking into account that some of their mass is converted to energy when they’re stuck together), electrical charge and also this other property, spin. In order for this to work properly, there must be massless and chargeless particles. These particles seem to be nothing, but they aren’t, because they have spin. These are neutrinos, and they come in various types because they’re leptons with corresponding more tangible particles, so there are for example muon neutrinos and electron neutrinos. They are of course bloody weird. The way they’re detected is by filling enormous underground tanks with dry-cleaning fluid and trying to detect the tiny number of atoms which are changed by interaction with them. Since they’re produced in nuclear reactions, the Sun emits them. It’s been said that if a million neutrinos passed through a wall of lead a light year thick, almost all of them would come out the other side. They virtually do not interact with matter. About forty-odd years ago, there was a problem understanding the Sun because it didn’t seem to be producing enough neutrinos. In other words, the massive tanks of dry-cleaning fluid under the Alps or wherever they were kept were not yielding detectable numbers of changed atoms. I mean, I don’t know how you detect a couple of altered atoms in six hundred tonnes of dry-cleaning fluid, but apparently you can. I’ve probably missed the point. Incidentally, I don’t want to go off on too much of a tangent but I find it kind of annoying that there is a thing called “dry-cleaning fluid”. How is it dry-cleaning then, eh? And while I’m at it, it used to be used to decaffeinate coffee, or something similar did, and it always used to give me a stomach ache when I drank it. Tangent alert!

Okay, so my point is that, for whatever reason, at the age of seven I didn’t have any problem accepting that neutrinos existed even though they were supposedly massless and practically didn’t interact with matter. In fact, it later turned out that neutrinos do have a tiny mass. In 1987, light arrived from a star which had exploded in one of the Milky Way’s satellite galaxies a couple of hundred thousand years earlier, and its neutrinos actually turned up a few hours before the light, because neutrinos escape a collapsing star promptly while the light takes a while to fight its way out. The narrow spread in their arrival times put a limit on how heavy they could be, and later experiments showing neutrinos changing type in flight established that their mass, though tiny, isn’t zero.

Now for the famous problem.

Centuries ago, Johannes Kepler worked out that the time it takes a planet to orbit the Sun can be worked out straightforwardly from its distance. To quote Kepler’s Third Law, “the squares of the orbital periods of the planets are directly proportional to the cubes of the semi-major axes of their orbits”. The semi-major axis is, in effect, the mean distance of a planet’s orbit from the Sun. Isaac Newton generalised this law to come up with the law of gravity, and it’s supposed to apply to everything in the Universe. There’s a lot more, but this is the important bit. It means that if you look, for example, at the triple star system next to the Sun, you can work out from the masses of Proxima Centauri and α Centauri A and B, and their distances from each other, how long it takes them all to orbit each other. The much closer A and B are close in mass to each other and take eighty years to orbit one another, and Proxima, which is eleven thousand times the distance of Earth from the Sun, takes around 550 thousand years to orbit the other two, whose total mass is what matters. However, it doesn’t stay this simple.

If you look at a galaxy, you might think you can calculate its mass from the number of stars in it and their sizes. Galaxies rotate very, very slowly: our solar system takes something like 225 million years to orbit the Galaxy’s centre. Treating the Galaxy’s mass as if it were all concentrated at that centre, which is itself a simplification, Kepler’s law says a star twice as far out should take the square root of the cube of two – a little under three times as long as 225 million years – to go round. However, it actually only takes about twice as long. Why?
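
The “square root of the cube” arithmetic can be checked in a couple of lines, using the 225-million-year figure above:

```python
# Kepler's third law: period squared is proportional to distance cubed,
# so doubling the distance multiplies the period by 2**1.5.
inner_period = 225e6  # years, roughly the Sun's orbit round the Galaxy
ratio = 2 ** 1.5      # a little under three
outer_period = inner_period * ratio

print(round(ratio, 2))              # 2.83
print(round(outer_period / 1e6))    # about 636 million years predicted
```

The observed figure, about twice as long rather than 2.83 times, is the flat rotation curve that the rest of the argument is about.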

Obviously you can’t see everything in the Galaxy. If you were looking at this solar system from α Centauri, even with a fantastically powerful telescope, you wouldn’t be able to see Jupiter, any of the other planets or moons, any of the dwarf planets or any of the asteroids. Between the stars there are also rogue planets wandering in interstellar space, dust, maybe black holes and brown dwarf “stars” too dim to see, as well as potential comets very slowly orbiting whole light months from the Sun. So there definitely is missing mass, which means galaxies would seem to rotate faster than just counting up the stars would predict, and, coming back to neutrinos, they also increase the mass of galaxies somewhat. However, for this to work as a way of accounting for how fast galaxies spin, and also how quickly galaxies “near” each other affect each others’ motion, well over three-quarters of the mass of a galaxy would have to consist of this sort of stuff. Maybe it does, but it probably doesn’t, because, for example, 99.8% of the mass of our solar system is the Sun. Therefore it probably isn’t lots of ordinary invisible matter doing this.

Therefore, scientists decided that there must be something called WIMPs – Weakly Interacting Massive Particles. This is by contrast with MACHOs – MAssive Compact Halo Objects – which is the idea that there’s a roughly spherical cloud of massive ordinary matter also orbiting galaxies outside the plane of the arms. I’m about to come to the point, by the way. To me, and to a lot of other people, WIMPs seem to be invented just to explain the problem. They’re most of the matter in the Universe, but they conveniently do not interact with the matter we’re familiar with. I paused there because I almost wrote “do not interact with ordinary matter”, but in fact if this is true they actually are “ordinary” matter, and it’s atoms, molecules and light which aren’t – in other words, all the stuff which can be detected. Hence I think this is silly and go with a different hypothesis, which is MOND – MOdified Newtonian Dynamics. This is the idea that Newton’s laws of gravity only work on relatively small scales and break down if you consider them over thousands of light years or further.

All this said, I am perfectly well aware that I’m no physicist. I did have my doubts about the cosmological constant as a teenager but just thought it was because I didn’t understand physics, then it turned out Albert Einstein had the same doubts and found it embarrassing, so maybe I should listen to my intuition more. Maybe people can be too embedded in their area of expertise to realise the flaws in their thoughts. Or, maybe an outsider just doesn’t understand properly and only thinks she does.

But my point is not about this but about ageing, and how people accept and reject things as they get older. As a child, I liked the idea of neutrinos because they were absurd and weird, and therefore fascinating. As a fifty-seven-year-old, and in fact even when I was quite a bit younger, I find the idea of dark matter, which is what I’ve just described, hard to accept even though it’s quite similar in a lot of ways to neutrinos. And that’s the process of becoming more conservative as you get older, and therefore this now becomes not an abstruse argument about physics or cosmology but one about personality and maturity.

There has been a pattern where young people start off left wing and become right wing old people. This is apparently less true than it used to be. Why this happens is another question. It may be that as one acquires wealth and possessions, one realises that one’s position of poverty and the apparent need to depend on the state for financial support was temporary and would also be for other people, and therefore things will get better or easier if one takes responsibility for things. Or, it could be that as one’s career advances, one makes moral compromises and therefore descends into self-deception and rationalisation for them. Or, again, maybe it’s cognitive decline and an inevitable process of being more easily duped by politicians and media-based propaganda. Conservative ideas are more appealing because they’re about the “good old days”. If this last one is so, the answer to a drift to the right may be just to decide that one was more likely to have been correct before one started losing one’s marbles and freeze one’s political opinions at that stage, but when new situations arise it can be harder to apply those principles than it used to be.

Why has this image appeared at this point on this blog post? Well. . .

I accept that neutrinos exist. Once upon a time, in 1887 CE, a guy called Lazarus Zamenhof living in Poland invented a language called Esperanto, which I’m sure you’ve heard of. It was designed to be simple and logical, and was specifically constructed in a Europe where the various powers had been at each others’ throats for centuries, so the vocabulary was mainly based on Romance, Germanic, Greek and Slavic roots, plus a few completely invented words. In order to make learning it easier, Zamenhof introduced the idea that words could be plugged together to change their meaning in a completely regular way, so for example the prefix “mal-” would turn a word into its opposite: “bona” means “good”, “malbona” means “bad”, and more controversially, “knabo” means “boy” and “knabino” means “girl” – in fact in the original version of the language all the gendered terms are unmarked when masculine and marked when feminine in this way. This is sexist, but there are practical reasons for it, and as in natural languages which mark this kind of gender, there are ways of working round it.

One of the possible flaws in the language is what this approach to word-building does to words which are similar in many languages. For instance, take the English word “school”. In French this is «école», in German ,,Schule”, in Indonesian “sekolah” and so on – all words which look and sound similar. In Esperanto, the word for school is “lernejo” – “learn-place”. This is easy to form and break down, and it reduces the need to learn a more opaque word, but the chances are that in many cases the word will already exist in the learner’s native language and there may at least initially be some puzzlement.

Germaine Greer’s famous book is called ‘The Female Eunuch’ in English, the language it was written in. The Esperanto word for “eunuch” is the rather logical word “neŭtro”, related to the adjective “neŭtra”, meaning “neuter”. However, since unmarked nouns in Esperanto usually refer to males or inanimate items, “neŭtro” means “male eunuch”. “-Ino” is the feminising ending. Hence the Esperanto word for “female eunuch” is “neŭtrino”! I don’t know whether Greer’s book has been translated into Esperanto or what its title is if it has, and I also don’t know what the Esperanto word for the elementary particle is, but logic suggests that neutrinos are called something else in Esperanto and the word “neŭtrino” does in fact mean “female eunuch”. If not, the chances are that when Esperantists talk about fundamental particles they say that there are vast tanks of dry-cleaning fluid under the Alps intended to detect female eunuchs and that when scientists detected Supernova 1987A, they noticed that female eunuchs don’t move at the speed of light. Well I could’ve told ’em that.

So what’s my point? Do I have one? Surprisingly, yes. As I’ve got older, like many other people my acceptance of novelty has become less flexible, and although I was fine with neutrinos I wasn’t fine with dark matter. Neutrinos were detected in 1956, though they were theorised earlier. ‘The Female Eunuch’ was published in 1970. Esperanto had its rules laid down in 1887, and although better-designed versions of the language have been proposed since, it’s difficult to accept them because the whole point of Esperanto was that it was supposed to be a single language which everyone could use. I actually think Esperantido is loads better, but I wouldn’t use it because it isn’t the original. This rigidity shows in the fact that these two concepts, the female eunuch and the neutrino, arrived getting on for a century after the language was devised, and it isn’t open to accepting new ideas in that way. As such, Esperanto represents exactly what happens to us as we get older – not because we compromise or become more conservative, but simply because we were designed for an earlier age: in Esperanto’s case, one where neutrinos were unknown and second-wave feminism was unthought of.

It also occurs to me that second-wave feminism itself has been superseded and may be in the same position, but that’s another story.

The Machine That Explains Everything

Compare and contrast:

with:

. . . and I’m now wondering if anyone has ever put those two songs next to each other before, on a playlist or otherwise. While I’m at it, here’s another:

(not quite the same). I’ve probably done it now.

Then there’s this:

That’s quite enough of that. Or is it?

Like Franco Battiato, Chiara Marletto is Italian, although she was born at the opposite end of the country: she’s Torinese while he’s Sicilian, although he did move to Milan(o). However, this is not that important unless it says something about the nature of northern Italian culture or education, and that’s another story. The germane issue is that there are two distinct approaches to science, if science is seen as based on physics – and that is not the only option; biocentrism is another, possibly relevant to where I’m about to go with this – one of which is much more prominent and formally developed than the other. I’m not talking about classical versus quantum, or the issue of quantum gravity and the reconciliation of relativity with quantum physics, although those are important and this is relevant to the latter. ‘To Faust A Footnote’ is a musically-accompanied recitation of Newton’s laws of motion, or at least something like that, and describes the likes of trajectories and objects in motion. Such descriptions are also found in Johannes Kepler’s laws of planetary motion, and although relativity and quantum mechanics are radically different in some ways from this classical approach, this aspect remains the same.

Around a century ago the world of physics saw the climax of a revolution. Triggered, I’m going to say, by two thoughts and perhaps experiments, it was realised that the idea of particles as little snooker balls which could ping about at any speed and were pulled towards each other and pushed apart by various forces which were similar in nature, such as magnetism and gravity, didn’t really describe the world as it actually is. The first clue had been known for millennia, which is that hot objects glow red rather than white. Classically, because all objects emit electromagnetic radiation at a range of frequencies – meaning that they would glow orange if they were slightly more than red hot – they would be expected to radiate all of their heat away immediately and drop to absolute zero. The solution is that the range of frequencies is discrete. It has steps, and there are minimum packets of light energy called quanta, from the Latin for “how much”, and this prevents the quantity of heat radiated by any object from being infinite. The other was the increasing difficulty of maintaining the idea that light waves had a medium, the luminiferous æther, which would have to combine various unusual properties to work, culminating in the Michelson–Morley experiment, which showed that light travels at the same speed regardless of the speed of the observer – meaning, for example, that, 299,792 km/s being the speed of light, if you were travelling at 299,791 km/s you would still measure the speed of light as 299,792 km/s. You can’t catch up with it. In the Michelson–Morley experiment, light is sent in two directions at right angles to each other towards mirrors, and the interference patterns are observed. Because Earth is orbiting the Sun at around 30 km/s, if light is moving through a medium which is not being dragged along with us – and it had previously been shown that it couldn’t be – it “ought” to be moving 30 km/s more slowly in one direction than the other, which would show up in the wave fronts lining up differently, but this doesn’t happen.
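
The size of the effect Michelson and Morley were looking for is easy to estimate. Taking an effective arm length of about 11 metres and light of about 500 nm – both figures of mine, not from the account above – the æther theory predicted a fringe shift of roughly 2Lv²/(λc²):

```python
L = 11.0             # effective arm length in metres (assumed)
wavelength = 500e-9  # wavelength of the light in metres (assumed)
v = 30e3             # Earth's orbital speed, m/s
c = 3e8              # speed of light, m/s

# Classical aether prediction for the shift in the interference fringes
shift = 2 * L * (v / c) ** 2 / wavelength
print(round(shift, 2))  # about 0.44 of a fringe; they saw essentially none
```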

These two lines of thought led to two major breakthroughs in theoretical physics. One is special and general relativity, the idea that moving observers find themselves dividing space and time up differently than stationary ones and that gravity is not a force but a distortion of space. The other is quantum mechanics, which holds that there are inherent limits to accuracy and that probability is fundamentally involved in physical phenomena on a small scale, so there is no certain location or direction for a sufficiently small particle, only a greater likelihood of it being in one place or going in one direction than another, and it’s these likelihoods which constitute the waves, which are like a graph depicting how likely something is to be in a certain place at a certain time. These are both “big ideas”. Since then, particle and other physics has tended to involve tinkering with the details and working out the consequences of these theories, notably trying to relate them to each other, which is difficult. Related to quantum mechanics is the Standard Model, really a big set of related ideas which classifies elementary particles and describes electromagnetism and the strong and weak nuclear forces. Gravity is missing from this model. If gravity were suddenly to become non-existent inside a sealed space station able to control its own temperature and pressure, its trajectory would change but there wouldn’t be a fundamental change in anyone’s lifestyle aboard, and this illustrates how big the rift is between the two sides of physics. There are also problems with the model itself: there are too many parameters involved for it to be considered elegant (nineteen of them), and for some reason the weak nuclear force is a quadrillion times (long scale) stronger than gravity and nobody knows why. In order to account more fully for neutrinos, another seven apparently arbitrary constants are needed, so the whole thing is a bit of a mess, although it does work well.
It’s also become difficult to test because of the high energy levels involved. Another issue is that there’s a lot more matter than antimatter, and nothing in the model explains why.

There are also a number of straightforward, everyday phenomena which the kind of physics involving particles and trajectories can’t account for. For instance, a drop of ink in a jar of water starts off as a small, self-contained blob which then spreads out and leaves the jar with a more homogeneous tint. This is the usual operation of entropy, but although physics can account for individual molecules of pigment colliding with water molecules and moving in all sorts of different directions, it can’t explain, for instance, why it happens that way round. Well, I say “physics”. In fact there is a perfectly good branch of physics which does at least assert that this kind of thing will happen, and it’s the one referred to by Flanders and Swann: thermodynamics. The Second Law of Thermodynamics asserts that the entropy of a closed system tends towards a maximum. Another maxim from thermodynamics is a counterfactual: a perpetual motion machine is impossible.
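
The one-way character of the ink-in-water business shows up even in a toy model of my own devising: pigment “molecules” jostled at random spread out, and on average the spreading never reverses itself:

```python
import random

random.seed(1)

# A thousand pigment "molecules" all start at position 0 in a 1-D jar.
positions = [0] * 1000

def spread(ps):
    """A crude measure of how spread out the blob is (the variance)."""
    mean = sum(ps) / len(ps)
    return sum((p - mean) ** 2 for p in ps) / len(ps)

before = spread(positions)
for _ in range(500):  # each step, every molecule is knocked left or right
    positions = [p + random.choice((-1, 1)) for p in positions]
after = spread(positions)

print(before < after)  # True: the blob only ever gets wider on average
```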

Everything must change. This is Paul Young next to an overbalanced wheel, which might be thought to spin permanently once set in motion, but it doesn’t. The idea of an overbalanced wheel is to keep the weights further from the axle on the descending side than on the ascending side so that that side always wins, but it doesn’t work, because although the descending weights sit further out, there are correspondingly fewer of them on that side, and the total torque is actually the same on either side. I find objections to perpetual motion machines odd, because at first read they generally appear to be criticising a minor flaw in the machine which might be easily remedied, such as friction, but in fact resolving that problem would introduce another; the limitation belongs to the entire system rather than to that minor issue. All you’re doing is moving the limitation to a different aspect of the device. It will always be there somewhere.

And now it’s time to introduce Constructor Theory (CT), to which Chiara Marletto is a substantial contributor. The statement that a certain machine is impossible is called a “counterfactual” in CT, and a perpetual motion machine is a good example. It needn’t be a machine, though – just a situation, such as a block of lead spontaneously transmuting to gold, which cannot happen, or rather, is almost impossible. Thermodynamics has enough mathematics in it as it is, but not the same kind as quantum mechanics or relativity. Some physicists seem to feel thermodynamics isn’t rigorous enough for that reason, but it can be made more so without straying into the kind of trajectories-and-particles paradigm used elsewhere, and the wording of the laws of thermodynamics could be restated in terms which are more precise and less like natural language.

Marletto uses transistors as an example. A transistor is, functionally speaking, a switch which can be operated electrically to turn it on or off. This means it has two states. Many other things are functionally identical to transistors in binary digital computers, such as valves and relays, and their physical details can be abstracted away when making a computer. A 6502 CPU, as found in the Apple ][ and BBC Micro among many other computers, is a microprocessor whose chip comprises microlithographed NMOS transistors, but it has also been built to occupy an entire board of discrete TTL integrated circuits, and even to cover the walls of a living room or bedroom with lower-scale integration components, and it could even be made from valves or relays, though it would be slower. In all these cases, the computationally important aspect of the logic network is the ones and zeros and the logic functions applied to them. There are physical realisations of these, but there’s a level of abstraction at which they don’t matter. Constructor theory appears to aim to generalise this, not necessarily in terms of computing but with the same kind of detachment. That said, it still recognises information as a neglected and important aspect of physics.
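
The substrate-independence point can be sketched by defining a logic function once and noting that any two-state device realises it equally well; the names and voltage figures here are mine, purely for illustration:

```python
# The computationally important thing is the truth table, not the substrate.
def nand(a, b):
    return 0 if (a and b) else 1

# Any two-state device will do as a bit: a transistor's voltage levels,
# a relay's contacts, a valve conducting or not. These stand-ins never
# appear in the logic below, which is the point.
substrates = {
    "transistor": {0: 0.0, 1: 5.0},        # volts
    "relay":      {0: "open", 1: "closed"},
}

# XOR built purely out of NANDs, identical on every substrate.
def xor(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```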

A second phenomenon which physics as it stands can’t really make sense of is life. When an animal, including a human, dies, most or all of its cells can still be alive but the animal as a whole is dead. The corpse obeys the same laws of physics, as currently understood, as the animal did when it was alive. The Krebs Cycle tends to run for a while until the oxygen runs out and there’s no longer a way for carbon dioxide to be transported away, so the acidity increases and enzymes within the cells begin to break them down. Genes can also be expressed for days after death. Moreover, bacteria decompose the body, or cremation converts it, to an inorganic form, all within the purview of physics and, in the former case, biology. And yet the transformation of life to death is profound and meaningful, and can be as completely described by physics and chemistry as any other classical process, while in another way not being described at all. The counterfactual here would be resurrection, but time’s arrow only points one way.

Information, heat and the properties of living things cannot be accommodated as trajectories because they’re about what can or cannot be made to happen to a physical system rather than what happens to it given initial conditions and the laws of motion. Constructor theory’s fundamental statements are about which tasks are possible, which impossible and why that’s so. The fundamental items are not particles or trajectories but “tasks”: the precise specification of a physical transformation of a system. The transformed object is known as the substrate, so in a sense the duality of trajectories and particles is replaced by that of tasks and substrates.

It might be worth introducing another metaphor here. Suppose you have a machine which includes a cup on the end of an arm. If you put a six-sided die in that cup and it reliably throws a six every time and is in the same state at the end as it was before you put the die in, that machine can be called a “constructor”. If it isn’t in the same state at the end as at the start, it may not be able to repeat the task reliably, which means it isn’t a constructor. Now for me, and for all I know this has been addressed in the theory because once again I’m somewhat out of my depth here, this seems to ignore entropy. All machines wear out. Why would a constructor not? Clearly the machine is a metaphor, but how literally can it be taken?

Although laws of physics in this framework are characterised by counterfactuals and constructors, and the language of tasks and substrates is used, it’s often possible to get from such a description to traditional statements couched in trajectory/particle terms – and here “traditional” includes quantum physics. In this way constructor theory includes traditional physics and can be used everywhere traditional physics can be used, but it can also cover much more than that, including information, life, probability and thermodynamics, thereby bringing all of these things into the fold in a unified way. For instance, the position and velocity of a particle cannot both be measured precisely at the same time, which is tantamount to saying that there cannot be a machine which measures both position and velocity. In that context it’s fairly trivial – the “machine” bit seems tacked on unnecessarily – but in others, such as life and information, it wouldn’t be so.

Information is a little difficult to describe formally, and this is one of those situations where, although the word does correspond to how it’s used colloquially, particularly in the phrase “information technology”, it isn’t quite the same thing as that. There are mathematical ways of describing the concept, but before covering that it’s important to point out that simply because the word “information” is used in this technical way, it doesn’t follow that this usage carries authority or a greater right to the word. It’s like the way “nut” and “berry” are used botanically, in that a peanut is not a nut but a hazelnut is, and a banana is a berry but a blackberry isn’t, but that doesn’t mean the way we use “nut” and “berry” is in any way inferior. Nonetheless, this is how I’m going to be using it here.

The more ordered something is, the less information it takes to describe. Glass is a haphazard arrangement of, for example, sodium silicate molecules, and to describe precisely where each one is would take a long list of coördinates which couldn’t be compressed much, but a crystal of sodium chloride is relatively easy to describe as a cube-shaped lattice of chlorine and sodium atoms a certain distance apart, and once you’ve done that, you’ve described the whole crystal. Hence the more entropic something is, the more information is needed to describe it. If a crystal is disturbed, perhaps by the inclusion of a few atoms of other elements, it will be more complicated and need more information to describe. Likewise, mercury is a solid crystal below about −39°C, and melting that mercury complicates its structure, so in a sense, melting something and increasing its entropy is adding information to it. Strangely, it follows that one way of freezing things is to remove information from them, which is where Maxwell’s Demon comes in.
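
Kolmogorov complexity is the formal version of this idea, but a cheap stand-in, and my own illustration rather than anything from the text above, is to see how well a general-purpose compressor does on an ordered versus a disordered arrangement:

```python
import random
import zlib

# An "ordered crystal": one repeating unit cell describes the whole thing.
crystal = b"NaCl" * 1000

# "Glass": a haphazard jumble, faked here with fixed-seed random bytes.
random.seed(0)
glass = bytes(random.randrange(256) for _ in range(4000))

print(len(zlib.compress(crystal)))  # tiny: the order compresses away
print(len(zlib.compress(glass)))   # nearly 4000: little to compress
```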

Maxwell’s Demon has been invoked repeatedly as a provocative thought experiment in physics. It can be described thus. Imagine an aquarium of water divided into two halves. There is a door in the middle of the partition where a demon stands who inspects the individual water molecules. If they’re moving faster than a certain speed, the demon lets them through to compartment B or leaves them in there, but if they’re moving more slowly, the demon lets them through to compartment A or leaves them there. As time goes by, the temperature of compartment A falls and that of compartment B rises, until compartment A has frozen. This appears to violate the laws of thermodynamics. If you’re uncomfortable with a demon, you can imagine a membrane between the two which is permeable one way to faster molecules and the other way to slower ones, but the issue remains. One counter-claim is that the membrane or demon has to have information-processing power to do this, and that would involve at least the initial input of energy if not its continuous use. The membrane is very “clever” and organised: it’s a technologically advanced bit of kit, or alternatively a highly evolved organism or living system, all of which involved the input of a lot of energy at some point, perhaps in a factory that makes these things. If it’s actually a demon, it has a brain or something like it: it has to be able to think about what it’s doing, and that takes energy. This is why zombies would probably be nuclear-powered, incidentally, but that’s another story.
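
The demon’s trick itself is easy to mimic, which is what makes the thought experiment so sharp: sorting purely on information about each molecule’s speed manufactures a temperature difference out of nothing. A toy version, with numbers and a threshold of my own choosing:

```python
import random

random.seed(42)

# Ten thousand molecules with random speeds, all mixed together.
molecules = [random.uniform(0.0, 2.0) for _ in range(10000)]
threshold = 1.0

# The demon sorts on nothing but information about each molecule's speed.
slow_side = [v for v in molecules if v < threshold]   # compartment A
fast_side = [v for v in molecules if v >= threshold]  # compartment B

def temperature(vs):
    """Temperature as mean kinetic energy (mass and constants dropped)."""
    return sum(v * v for v in vs) / len(vs)

print(temperature(slow_side) < temperature(fast_side))  # True: A cools, B heats
```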

Leaving aside the question of whether this inevitably breaks the laws of thermodynamics by reducing entropy without greater energy input than output, Maxwell’s Demon is relevant to Constructor Theory and has on a very small scale been used to do something which would be useful on a larger scale. This is effectively a fridge which runs by moving information around. The information needed to describe the frozen side of the aquarium is less than that required to describe the liquid, or possibly steam, side, because the frozen side consists of ice crystals, which take less information to describe than water or steam. The membrane works by taking information away from hot things. This has been done with a transistor: a device has been invented which separates high- and low-energy electrons and only allows the latter to reach the transistor, which therefore cools it. This is actually useful because it could be employed in cooling computers. A somewhat similar device is the Szilard Engine, which detects which half of a chamber a single gas molecule is in, places a barrier across the middle, and lets the molecule push a piston inserted on the empty side, extracting a tiny amount of work from the information about where the molecule was. It’s also subject to the Uncertainty Principle, because if the chamber were sufficiently small, and in this case subatomic particles would have to be used, the point would come when there was only a probability that the piston would move, which would create different timelines, but this isn’t the point under consideration. Hence there is a relationship between energy, information and entropy with real consequences. This is no longer just a thought experiment.

At this point, I seem to have missing information, because I’m aware of all this and on the other side I’m also aware of Universal Constructors, but I can’t make a firm connection between them. The link may become clear as I describe them. If not, I might try to find some information, so to speak, to link them. It is connected to quantum computing. I know that much. Also, that Universal Constructors are based on cellular automata, and that I really can explain.

By Lucas Vieira – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=101736

In Conway’s Game of Life, a grid of square cells, each of which is either occupied or empty, goes through generations according to simple rules: an occupied cell with two or three occupied neighbours stays occupied, an empty cell with exactly three occupied neighbours becomes occupied, and every other cell dies or stays empty, whether from isolation (fewer than two neighbours) or overcrowding (more than three). If you watch the above GIF carefully, you can glean these rules. Conway’s Life is the best-known example of a cellular automaton, but there are others with different rules, such as Highlife, where cells become occupied with three or six neighbours and continue if they have two or three. Another one is Wireworld, which is useful for illustrating the way into one of the most important things about cellular automata:
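Before moving on to Wireworld, the Life rules above are compact enough to state in a few lines of Python. This sketch represents a pattern as a set of live-cell coördinates and applies one generation:

```python
from collections import Counter

def step(live):
    """One generation of Conway's Life; `live` is a set of (x, y) cells."""
    # Count how many live neighbours every relevant cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3; death otherwise.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate with period two.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))        # the row flips to a column
print(step(step(blinker)))  # and back to the original row
```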

By Thomas Schoch – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1863034

This is an XOR (exclusive-or: one or the other but not both) logic gate from Wireworld, a cellular automaton which works particularly well as a means of building structures which act as logic gates or transistors. It’s probably obvious that any binary digital computer can be built in Wireworld, because if logic gates can be made, anything made from them can be. It’s less obvious that many other cellular automata also have this feature, including Life. This means that many cellular automata are Turing-complete. Turing-completeness is the ability to simulate any Turing machine: a device which runs along a tape, reading, writing and erasing symbols according to a table of instructions which tell it what to write, which way to move the tape, and which instruction to follow next or whether to halt. Perhaps surprisingly, this machine can emulate any computer, which means a Turing-complete system can simulate anything a classical computer can simulate. Turing-completeness applies not only to digital binary computers but also to programming languages and other things. There is, for example, a computer design with a single instruction: subtract and branch if the result is negative. This can do anything, although it might take an extremely long time to do it, no practical computer would be designed like this, and memory limitations are being ignored, but it means, for example, that with the right peripherals a ZX81 could simulate a supercomputer or just a modern-day high end laptop, really, really slowly, assuming that the extra couple of dozen bits needed could be added to the address bus! Maybe this is what happened with the Acorn System 1 in ‘Blake’s 7’ series D:
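The single-instruction machine is worth seeing concretely. The classic formulation is “subleq” (subtract and branch if the result is less than or equal to zero), and an interpreter for it fits in a dozen lines; the little program below is purely illustrative:

```python
def run_subleq(mem):
    """Interpret subleq: each instruction [a, b, c] does mem[b] -= mem[a],
    then jumps to address c if the result is <= 0, else advances by 3.
    A negative jump target halts the machine."""
    pc = 0
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# One instruction: subtract cell 6 from itself (clearing it), then halt (-1).
memory = [6, 6, -1, 0, 0, 0, 42]
run_subleq(memory)
print(memory[6])  # 0: the value 42 has been cleared
```

Everything else, addition, copying, branching, loops, can be built out of chains of this one instruction, which is what makes the design Turing-complete given unlimited memory.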

One way to extend the cellular automaton concept is to make it quantum, which can then have the same relationship to quantum computers as Turing-complete classical cellular automata have to classical digital binary computers. If built, quantum cellular automata (QCAs) would have very low power consumption, could be made extremely small, and if Turing-complete would be a possible replacement for classical computers. However, there are two distinct types of QCA which happen to have been given the same name by separate teams. The QCAs I referred to as having low power consumption are based on quantum dots, which are important to nanotechnology: particles a few nanometres across, which have to be that small to work, and which switch electrons between their bound orbital state and the delocalised state found in metals which renders them conductors. This means they can act as switches just like transistors, and if they’re linked in the right way, they can be used to build cellular automata. This, however, is not what Deutsch and Marletto mean by a QCA, David Deutsch being the other main proponent of Constructor Theory. Although quantum dot devices arranged as cellular automata could indeed be used as computers, running Life, Highlife, Wireworld or some other automaton, the electron transition can be treated as a classical bit rather than a qubit, and the fact that it happens to be a quantum phenomenon doesn’t alter the basic design of the computer. Real quantum dot computing has been around since 1997 CE. Qubits can be actualised through such phenomena as spin or the polarisation of light, where there are two possible states, but they differ from bits in that they can be in a superposition of zero and one until measured or observed, and if they are observed, this can influence the chain of cause and effect which led up to that point.
This means, for example, that the factors of large integers can be found far faster than by classical trial division, using Shor’s algorithm, which exploits superposition to find the period of a function related to the number being factorised. Since cryptographic security depends on the difficulty of finding prime factors, this also means that quantum computing might make supposedly secure financial transactions over the internet insecure.

In the Marletto-Deutsch sense, a QCA is to a classical CA (cellular automaton) as a quantum logic gate is to a classical logic gate. A classical logic gate may have two inputs and one output, and the output depends on those inputs. A quantum logic gate is bidirectional. Measuring or observing the output influences what the input was. Hence one rule for Life, for example, is:

¬(A ∧ B ∧ C ∧ D) ⇒ ¬E

where A, B, C and D are E’s neighbours. This is a one-way process. You could build an array of LEDs behaving as a Life game where logic gates such as the one above linked the cells represented by them together and just let it run, having set the original conditions, but there would be only one outcome and there’s no going back unless the pattern involved cycles. If quantum gates were involved instead, the outcome would, when observed, determine what had happened before it was observed, and this could be done by building a grid out of quantum gate devices rather than classical TTL integrated circuits.

A Universal Constructor can be built in Life. This is a pattern which can build any other pattern. In fact, patterns can be built which can copy themselves and they can be coupled to Turing machines in Life which can be programmed to get them to make the pattern desired. This is the first successful Universal Constructor:

This shows three Universal Constructors, each made by its predecessor. The lines are lists of instructions which tell the machines how to make copies of themselves. Mutations can occur in these which are then passed on. Perhaps unsurprisingly, these were thought up by John von Neumann, and are therefore basically Von Neumann probes as in this. These are potentially the devices which will become our descendants as intelligent machines colonising the Galaxy, and possibly turning it into grey goo, but this is not what we’re talking about here. Here’s a video of one in action.

These are machines which do tasks on substrates, and this is where I lose track. Deutsch (whom I haven’t introduced properly) and Marletto seem to think that physics can be rewritten from the bottom up by starting with the concept of Universal Constructors running in quantum cellular automata. I haven’t read much of their work yet, but I presume this Universal Constructor is an abstract concept rather than something which actually exists to their minds, or at least only exists in the same way as mathematical platonists believe mathematics exists. A mathematical platonist believes maths exists independently of cognition, so for example somewhere “out there” is the real number π. It’s certainly hard to believe that if there were suddenly no living things in the world with no other changes, there wouldn’t still be three peaks in the Trinidadian Trinity Hills for example. Another option would be formalism, where mathematics is seen as a mere game played with symbols and rules. If this is true, it’s possible that aliens would have different mathematics, but this fails to explain why mathematics seems to fit the world so neatly. These same thoughts apply to Universal Constructors. These things may exist platonically speaking, or they may be formal. It’s also difficult to tell, given its recent advent, whether Constructor Theory is going to stand the test of time or whether it’s just fashionable right now, and that raises another issue: if platonism is true, do bad theories or unpopular ones exist anyway? Also, even if Constructor Theory did turn out to be unpopular that wouldn’t be the same as it being wrong. We might well stumble upon something which was correct and then abandon it without knowing.

The reason these questions are occupying my mind right now is that the idea that physics is based on a Universal Constructor, which I presume is doing the job of standing for a Theory of Everything, but again I don’t know enough, would seem to have two interesting correlations with popular ways of looking at the world. One is that the Universe is a simulation, which I don’t personally believe for various reasons, one of which is the Three-Body Problem (not the book). This is the fact that there is no general solution for the movements of three massive bodies relatively close to each other. The movements of two such bodies can be calculated exactly, and there are special cases of three bodies which can be, but most cases can only be approximated numerically, and because the system is chaotic the errors in any approximation grow over time. Given that the Universe consists of many more than three bodies of relatively significant size, the calculations necessary would need a computer more complex by many orders of magnitude than the Universe. There are many cases where the third body is small or distant enough compared to the others that a good approximation can be calculated, but if the position of Mercury differs even by a millimetre, it can completely throw the other planets in our Solar System out of kilter, to the extent that Earth could end up much closer to the Sun, or Mercury and Venus could collide, before the Sun becomes a red giant. Therefore, if the Universe is a simulation it would need to be run by a computer far more powerful than seems possible. Nonetheless, it’s possible to shrink the real world down so that, for example, everything outside the Solar System is simply a projection, and this would help. If it did turn out to be one, though, it seems that Constructor Theory and the Universal Constructor would be a major useful component in running it. The second question is a really obvious one: Is the Universal Constructor God?
Like the Cosmological Argument, the Universal Constructor is very different from the traditional view of God in many religions, because it seems to be a deist God who sets the world in motion and retires, or at least leaves her Universal Constructor to run things, and “proof” of a Creator is not proof of God as she’s generally understood because there’s nothing in there about answering prayers or revealing herself to prophets, among many other things. Also, this would be a “God Of The Gaps”, as in, you insert the idea of a God whenever you can’t explain something. Nonetheless it is at least amusingly or quaintly God-like in some ways.

To summarise then, Constructor Theory is motivated by the problem of using conventional physics to describe and account for such things as the difference between life and death, the direction in which entropy operates and the nature of the way things are without supposing initial conditions. It seeks to explain this by proposing the idea of a Universal Constructor, which is a machine which can do anything, and more specifically performs tasks on the substrate that is the Universe, and also local cases of the Universe such as a melting ice cube, exploding galaxy or dying sparrow. This Universal Constructor can be composed of quantum cellular automata and is a kind of quantum computer, which it has to be because the Universe is quantum. This reminds me a bit of God. Have I got it? I dunno. But I want to finish with an anecdote.

Back in 1990, the future hadn’t arrived yet, so ‘Tomorrow’s World’ was still on. Nowadays it would just be called ‘Today’s World’ of course. At the start of one of the episodes, Kieran Prendiville, Judith Hann or someone said that CERN were building a “Machine That Explains Everything”, and they then went on to talk about a new design of inline roller skate. I’ve never forgotten that incident, mainly because of the bathos, but I suppose it was the Large Electron-Positron Collider. Of course, incidentally at the same time in the same organisation Tim Berners-Lee was inventing a different kind of “machine that explains everything”, but it seems to me now that this is also what Constructor Theorists are trying to do, because a Universal Constructor is definitely a machine, and it’s definitely supposed to explain everything. So that’s good. Apparently the game Deus Ex also has something with the same name, which I know thanks to Tim Berners-Lee’s invention, but I can’t find an explanation for it.

Oh, and I may have got all of this completely wrong.

Spin Is Not What It Seems

Nor is isospin, but then that’s less well-known.

Most of what people say about quantum physics focusses on things like entanglement, acausality and uncertainty, with a kind of mystical bent, but there’s also something else which most people ignore which is equally weird, and on top of that is something else again which is as weird too, if not even weirder. These two things are spin and its oddly- but appropriately-named sibling isospin.

It’s been said that if you think you understand quantum mechanics, it means you don’t. This may or may not be true and there are different opinions about what it actually means, but I would say this is also true of spin and isospin. I’ll deal with spin first.

If you hold a spinning gyroscope, you can feel the difficulty of shifting it from the direction it’s pointing. If it’s one of those small toy ones, it won’t wrench you off your feet, but its rotation will be transferred into your body if you’re standing. In a swivel chair, a sufficiently large and massive rotating object will rotate the chair if you try to move the object to a different angle. This principle is useful, and is for example employed with rifle bullets, spacecraft and compasses to stabilise them. Whereas magnetic compasses are useless near the poles, gyrocompasses can float around as they move and will therefore continue to point north if they’ve been set up that way in the first place. A spacecraft will tumble unpredictably in space unless it’s stabilised in some way, and one way of doing so is to make it spin as it launches, which keeps it pointing in the same direction. This particular spin is often counteracted later by transferring it to weights which are then released, carrying the angular momentum away, to ensure the spacecraft’s instruments or devices can face the requisite direction.

These are all illustrations of angular momentum. Momentum in general is the tendency of an object to keep moving in the same way unless something stops it, that is, unless another force acts upon it. This is true of masses moving in straight lines, and of spinning masses: they will continue to spin in the same way around the same axis unless something acts on them to shift them or slow them down, and when this happens the momentum has to go somewhere. Momentum of rotation, as opposed to momentum in a straight line, is called angular momentum.
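Conservation of angular momentum reduces to one line of arithmetic, L = Iω: if the moment of inertia I changes and nothing external acts, the rotation rate ω must change to compensate. The numbers here are purely illustrative:

```python
# A spinning body: moment of inertia I (kg·m²) and angular velocity omega (rad/s).
I1, omega1 = 4.0, 2.0
L = I1 * omega1  # angular momentum, which is conserved

# The body contracts (think of a skater pulling her arms in), reducing I fourfold.
I2 = 1.0
omega2 = L / I2

print(omega2)  # 8.0: the spin rate goes up fourfold to keep L the same
```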

We tend to think of atoms as consisting of nuclei surrounded by electrons in orbit around them: that is, rotating. Ferromagnetism happens when the atoms in a material are all lined up spinning in the same direction, and only applies to very few materials, notably iron but also cobalt and nickel. If you think of atoms as gyroscopes, which they are, what you’re doing when you magnetise something is shifting the axes of rotation of a load of gyroscopes, and that angular momentum shift has to go somewhere. And it does. If you suspend a piece of unmagnetised iron in space in zero gravity conditions, or more accessibly hang it from a thread, then apply a magnetic field to it, this will to some extent magnetise the block and shift the atoms, and it will start spinning. This is known as the Einstein-De Haas Effect. Yes, that Einstein.

This change in angular momentum can be measured quite easily because the mass of the iron is known and its rotation can be timed and observed. However, even if you take into account all the angular momentum involved in the shift of the orbitals, it doesn’t account for all of the spin. This is because the electrons themselves are tiny magnets pointing in a particular direction, and the magnetic field aligns not only the atoms but also the electrons. Now here’s the crucial question: how can an electron point in a particular direction? The answer is that it has an axis of rotation, and this accounts for the discrepancy in the rate of spin the lump of iron has. The difference between the angular momentum predicted from the orbitals alone and the actual figure allows the spin of the electron to be found.

And this is where things get really weird.

If you calculate the spin of an electron, either assuming the smallest probable size of the particle or the much more likely scenario of thinking of an electron as a point in space, there is an imponderable problem. If the electron has a size at all, in order to generate the amount of angular momentum it has, it would have to be moving faster than light. If, on the other hand, an electron is a point, it’s featureless except for location, so how can it be spinning at all? A point in space doesn’t have a direction or an axis of rotation in the conventional sense, so – huh?

This is not some abstract thing happening due to the vagaries of scientific theory either. A lump of iron really does start spinning if magnetised, and taking into account all of the rotational movement of the electrons in their shifting orbitals is not enough to account for the exact rate of rotation. In the end, then, there seems to be only one possibility: spin is a fundamental property of matter. From our usual perspective, it definitely looks as if there are just objects which we can set rotating, stop, speed up and slow down, as if it’s just another thing going on in the world, but that isn’t actually what’s happening. On a tiny scale, spin is an intrinsic property of matter like electrical charge or its absence. Moreover, it’s quantised: there are jumps between values rather than an infinitely smooth transition. Electrical charge works the same way: a particle’s charge is zero, equal to an electron’s, the reverse of an electron’s, or, for a quark, either −⅓ or +⅔ (the opposite for the antiquarks), and the quark charges add up to exactly zero or one electron’s worth of charge when they form a nucleon, which is just as well because otherwise neutral atomic matter couldn’t exist. This will become relevant.

Spin has been described as “classically non-describable two-valuedness”: indescribable in the sense that it can’t really be pictured, but it must exist. Subatomic particles don’t literally spin in the same sense as a wheel or planet, but behave as if they do. All subatomic particles have a spin which is either a whole number or a half-integer (½, 3⁄2 and so on). The former are bosons, the latter fermions. Fermions are “stuff” and bosons forces, so for example quarks and electrons are fermions and gluons and photons are bosons. Half-integer spin particles have a peculiar property which doesn’t seem to make sense, which is that to return to their original state they need to be turned through not one but two full circles. How is this possible? Well, imagine a Möbius strip, which is a joined ribbon with an odd number of half-twists in it, usually simplified to just one. Following the edge around with your finger pointing to the right will reach the point where it points to the left after 360°. In order to get back to pointing the finger to the right, a further 360° of the strip have to be traversed. It’s easier to do this with a strip of paper or ribbon than to try to imagine it, for me anyway. This is a good model for how half-integer spin particles work and how it’s possible for them to have to be turned right round twice before they’re back to their initial state. Incidentally, there’s a short story called ‘A Subway Named Möbius’ where a complicated underground train system has one more tunnel added to it which causes a train to disappear, and it doesn’t come back again until the tunnel gets blocked off. I’m not by any means saying that’s anything more than a fanciful story, but if a topological analogy of that kind can be made regarding such a fundamental feature of physical reality, albeit on a quantum level, it does make me wonder what’s possible.
For instance, it’s possible to imagine that space as a whole is “twisted”, such that any journey round the Universe ends up with one finding one’s home planet apparently mirrored, or rather oneself reversed, because the topology of three-dimensional space could in theory be analogous to a Möbius strip. A ribbon with an even number of half-twists, by the way, is effectively not a Möbius strip at all: it still has two sides and two edges.
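The Möbius-style double turn can be checked numerically. Rotating a spin-½ state about the z-axis by an angle θ multiplies its two components by e^(−iθ/2) and e^(+iθ/2); here is a sketch using only the standard library:

```python
import cmath

def rotate_z(theta):
    """Spin-1/2 rotation about z as a pair of phase factors (a diagonal matrix)."""
    return (cmath.exp(-1j * theta / 2), cmath.exp(1j * theta / 2))

def apply(u, psi):
    return (u[0] * psi[0], u[1] * psi[1])

up = (1, 0)  # a "spin up" state

one_turn = apply(rotate_z(2 * cmath.pi), up)   # 360°: the state picks up a minus sign
two_turns = apply(rotate_z(4 * cmath.pi), up)  # 720°: back where it started

print(one_turn)   # approximately (-1, 0)
print(two_turns)  # approximately (1, 0)
```

The minus sign after one full turn isn’t observable on its own, but it shows up in interference experiments; only after two full turns is the state literally the same, just like the finger on the edge of the strip.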

This property of fermions, for complicated reasons I don’t understand, means that no two fermions can occupy the same quantum state. This is not the case with bosons. For instance, a laser consists of innumerable photons in the same state because they’re bosons, but it also means that light cannot form structured matter. It can do things like form caustics and be focussed to a point, and the like, but fermions can build themselves into atoms. Nuclei have to consist of nucleons in different energy states, although they are less obviously in shells than electrons, and neutron stars also have to be in this state: every single neutron in a neutron star is in its own distinct state. The electrons in an atom organise themselves much more clearly into different levels, in the form of the shells which enable the periodic table to exist, with heavier elements having more shells and different properties. Without that, there would be no chemistry and no materials as we know them. The fact that I don’t understand this is a source of discomfort to me which I feel very driven to remedy, but right now that’s how things are. It also makes me wonder about Bose-Einstein condensates. These are an unusual state of matter which happens when a low-density gas consisting of bosonic atoms is cooled to almost absolute zero and the atoms become overlapping waves and ultimately a single, collective wave comprising all the atoms, because their wavelengths become larger than the distances between them. Although atoms are made of fermions, each atom as a whole can be a boson if the total number of protons, neutrons and electrons is even, so the possibility of attaining this state depends on the isotope as well as what kind of element the gas is, in a similar way to how helium-4 becomes a superfluid at a higher temperature than helium-3. If the atoms were fermions, this would be impossible because they wouldn’t be able to occupy the same state.
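That isotope-dependence reduces to a parity check: a neutral atom is a boson when its total count of fermions (protons, neutrons and electrons) is even. A two-line sketch:

```python
def atom_is_boson(protons, neutrons):
    """A neutral atom has as many electrons as protons, so the total fermion
    count is 2 * protons + neutrons; even means boson, odd means fermion."""
    return (2 * protons + neutrons) % 2 == 0

print(atom_is_boson(2, 2))  # helium-4: True, a boson
print(atom_is_boson(2, 1))  # helium-3: False, a fermion
```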

For us to exist, spin must also, and there also have to be integral and non-integral varieties. It’s a sine qua non of our reality. The Multiverse presumably means there are other universes where there are, for example, only fermions or only bosons, or perhaps universes where there is no spin, but these are all very boring places. A universe with just bosons would have no structured matter but instead consist of rays of energy, and one with just fermions would have no structured matter either, because the forces which bind particles together are carried by bosons.

A particle is supposed to have mass, charge and spin. Charge can be positive, negative or neutral, and spin integral or half-integral as described above, depending on whether quarks, leptons or both make the particle up, with the values of the constituents adding together. Neutral particles clearly do exist, for instance neutrons, whose existence can be deduced fairly easily with precise enough measurements: chlorine has two common stable isotopes, and anyone doing careful quantitative chemistry with sodium chloride soon finds that the relative atomic mass of chlorine is about 35.5, not a whole number, because all normal chlorine is a mixture of two kinds of atoms with different numbers of neutral particles, and these are neutrons. Charge, mass-energy and angular momentum all have to be conserved in nuclear and other processes, so for example if a potassium-40 atom emits a positron, one of its protons must become a neutron and it becomes argon-40, and unstable particles decay into various different “fragments”, but these must all add up to the same charge and spin. Hence a negatively charged muon may become an electron with the same charge, but since an electron is so much less massive than a muon, the remaining energy and angular momentum still have to go somewhere, which is into a muon neutrino and an electron antineutrino. Likewise, when a neutron leaves the safe confines of an atomic nucleus it only has about a quarter of an hour to “live”, and will decay into a proton and an electron, conserving charge, and also an electron antineutrino. Neutrinos have extremely low mass, but observation of the 1987 CE supernova 168 000 light years away revealed that they do have some, because of the timing of their arrival here compared to light.
Supernovæ produce bursts of neutrinos because the protons and electrons in the collapsing core are squeezed together into neutrons, forming a neutron star and emitting neutrinos in the process. There are three types of neutrino, associated with tauons, muons and electrons.
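The “accounts” can be sketched as bookkeeping. Each particle below carries a (charge, lepton number) pair; this toy table ignores mass, energy and the separate electron and muon lepton numbers, and the names are just labels for this example. The neutral lepton accompanying the electron in these decays is an antineutrino, which is what keeps the lepton-number column balanced:

```python
# (electric charge, lepton number); antiparticles flip both signs.
PARTICLES = {
    "mu-":       (-1, +1),
    "e-":        (-1, +1),
    "nu_mu":     ( 0, +1),
    "anti-nu_e": ( 0, -1),
    "n":         ( 0,  0),
    "p":         (+1,  0),
}

def conserved(before, after):
    def totals(names):
        return tuple(sum(PARTICLES[n][i] for n in names) for i in (0, 1))
    return totals(before) == totals(after)

# Muon decay: mu- -> e- + anti-nu_e + nu_mu
print(conserved(["mu-"], ["e-", "anti-nu_e", "nu_mu"]))  # True
# Free neutron decay: n -> p + e- + anti-nu_e
print(conserved(["n"], ["p", "e-", "anti-nu_e"]))        # True
# Leave the antineutrino out and the books no longer balance:
print(conserved(["n"], ["p", "e-"]))                     # False
```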

Neutrinos are a bit mind-boggling because they have almost no mass and no charge, just spin, but they must exist because otherwise the accounts wouldn’t balance, as it were. However, there was a problem with solar neutrinos detected in the 1960s, when it turned out the Sun was producing fewer of them than current physics said it should. Until this was resolved, and the resolution turned out to be that the neutrinos were changing from one type to another on the way here while the detectors could only see one type, it was possible, though of course extremely unlikely, that for some reason the Sun had stopped working and that the light and heat we were getting from it was simply the last blast of a defunct star, so in a way it was quite worrying, but it’s okay now.

Before I get to the next bit, I want to mention a much older form of philosophy than nuclear physics. Back in the day, there were supposed to be four elements opposed to each other: earth, air, fire and water. Each had two qualities drawn from two opposed pairs, namely dry and wet, and cold and hot. Their atoms were also supposed to correspond to the Platonic polyhedra, and since there are five of those, a fifth element, the aether, was added to the four. All of this makes mathematical sense and you can imagine flipping the eight-pointed star round, turning it through 90° and so on: it’s symmetrical. It could even have predictive power in that if one of the elements was missing, its qualities could be determined, and it has correspondences in alchemy, psychology, astrology and humoral medicine, the last of which is actually useful in herbalism. However, it isn’t applicable to science as it’s usually practised today, and someone claiming to use it, as I just did, might be seen as undermining their ethos. Nonetheless, the symmetry is real.

By Mike Gonzalez (TheCoffee) – Work by Mike Gonzalez (TheCoffee), CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=284321

There’s also a symmetry in the physics of elementary particles, which allows one to anticipate where gaps may exist, implying other particles yet to be discovered. Symmetries can be analysed by group theory in mathematics. One of the most obvious places where this crops up is with Rubik’s cubes, where certain turns may be performed in a particular order to return the cube to its original state. With Rubik’s cubes there are also “orbits”: if you take one to pieces and put it back together arbitrarily, the chances are you will have placed it in another orbit, in which there is no arrangement with all the faces the same colour. There are twelve of these. Groups also apply to arithmetic, so it makes sense to introduce them with that familiar subject. A group is a set together with an operation which combines any two of its elements into another element of the set; there is an identity element, every element has an inverse, and the operation is associative. For instance, the integers with addition form a group because adding zero doesn’t change a number, adding a positive number can be undone by adding the same negative number, and it doesn’t matter where you put the brackets: (2+1)+3 = 2+(1+3). Likewise with a Rubik’s cube, keeping it in the same position and turning the top row one twist to the right and then the right hand side one twist downward can be undone by turning the right hand side one twist upward and the top row one twist to the left, and there’s also an identity element in that if you leave the cube alone, it stays the same, which sounds a bit silly, but these are just two examples of groups which can be easily understood. Group theory is relevant to crystallography and cryptography. Take this sentence for example. ROT13 turns it into “Gnxr guvf fragrapr sbe rknzcyr”, and applying ROT13 to it again turns it back into “Take this sentence for example”.
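ROT13 makes a tidy executable example of a group element of order two: applying it twice is the identity, so it is its own inverse. Python’s standard library includes it as a codec:

```python
import codecs

sentence = "Take this sentence for example"
once = codecs.encode(sentence, "rot13")
twice = codecs.encode(once, "rot13")

print(once)   # Gnxr guvf fragrapr sbe rknzcyr
print(twice)  # Take this sentence for example
```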

Physics has various symmetries. For instance, there’s a symmetry between matter and antimatter, and there are other symmetries such as the correspondence between leptons and quarks: the electron, muon and tauon pair up with up and down, charm and strange, and top and bottom respectively. The names of up/down and top/bottom are not accidental, although there were moves to name top and bottom truth and beauty instead.

Group theory can be used to classify different forms of symmetry. Spin is described by the group SU(2). This is one of the Lie groups, which are groups which also behave like spaces. SU groups are “special unitary” groups, and I should point out here that I have never knowingly understood matrices; they were a significant hole in my mathematical knowledge at school, because I could never understand how to multiply them, so I’m just going to have to let this pass and say: this is a thing, it’s out there, and that’s it. I can safely assume that anyone with at least a CSE in maths will get them and understand this better than I can, because it’s just my personal blind spot. Having said that, I will kind of give it a go.
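For anyone who shares my blind spot, here is a minimal sketch of what “special unitary” means for a 2×2 matrix, using plain Python complex numbers rather than a matrix library. Any matrix of the form [[a, −b̄], [b, ā]] with |a|² + |b|² = 1 is an element of SU(2) (the particular values of a and b below are arbitrary choices for illustration):

```python
import cmath

# An arbitrary element of SU(2): [[a, -conj(b)], [b, conj(a)]] with |a|^2 + |b|^2 = 1.
a = 0.6 * cmath.exp(0.3j)   # modulus 0.6
b = 0.8j                    # modulus 0.8, so 0.36 + 0.64 = 1
U = [[a, -b.conjugate()], [b, a.conjugate()]]

def mat_mult(X, Y):
    # Ordinary 2x2 matrix multiplication: rows of X against columns of Y.
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def conj_transpose(X):
    # Swap rows and columns and conjugate every entry.
    return [[X[j][i].conjugate() for j in range(2)] for i in range(2)]

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

# "Unitary": U times its conjugate transpose is the identity matrix...
product = mat_mult(U, conj_transpose(U))
# ..."special": the determinant is exactly 1.
determinant = det(U)
```

Multiplying U by its conjugate transpose gives the identity, and its determinant is 1, which is all that “special unitary” means; the physics lies in which of these matrices gets applied to what.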

There are six flavours of quark: up, down, strange, charm, top and bottom. These can be arranged in a hexagon and can be swapped to some extent. A neutron is two down quarks and one up, and a proton two up and one down. The names seem to relate to these properties, because if up and down were swapped in an atomic nucleus it would swap the neutrons and protons. Swapping up and down is described by isospin, which is another property of matter with the same kind of symmetry as spin but is not spin; extending the swaps to the strange quark gives an SU(3) flavour symmetry. Then again, spin in the subatomic sense is really quite far from our intuitive understanding of rotating objects, so calling this “spin” too is, relatively speaking, not a big leap from the other kind. It’s also why the words “top”, “bottom”, “up” and “down” are used. Just as an electron can be thought of as having an arrow pointing up which can be flipped to become an arrow pointing down, although it has no link with the gravity which determines up and down in everyday parlance, so can some quarks be thought of as “up”, flippable conceptually to “down”, and “top”, flippable to “bottom”. If SU(3) is applied to hadrons (mesons or baryons), they can be flipped to other hadrons with similar properties. Another application of group theory revealed a gap in the pattern which turned out to be the omega-minus particle, consisting of three strange quarks; its detection confirmed that group theory could be fruitfully applied to flavour symmetry.

Why is it called “isospin” or “isotopic spin”? Well, nuclei are isotopes of various kinds, so for example there’s helium-3, made of two protons and one neutron, as well as helium-4, consisting of two protons and two neutrons, and tritium, an isotope of hydrogen comprising two neutrons and one proton. If the nucleons in these were swapped, they would respectively be tritium, helium-4 and helium-3. This is a form of symmetry pertaining to isotopes, and it influences their stability because there are certain isotopes of elements which would be stable whether or not the neutrons in them were protons and vice versa, and these are particularly stable isotopes. Extending this into the transuranic realm of synthetic elements, it’s possible to predict which isotopes of heavy elements are likely to be the most stable.

It’s also a system of classification, because at one point in the mid-twentieth century a large number of hadrons were known, almost all of which seemed to have no prospect of being part of ordinary matter or having any special relevance to it, which was very puzzling. Another, more recent, puzzle is whether this is just a case of making pretty patterns, albeit useful ones, out of elementary particles, or whether it reflects something profound about the nature of physical reality. Murray Gell-Mann, who thought this up, called it the Eightfold Way, alluding to Buddhism’s Noble Eightfold Path, and Fritjof Capra has written extensively on the idea of links between subatomic physics and Eastern spiritual concepts such as Daoism. Western philosophers tend to think of this as jejune and crass.

There is an issue regarding what appears to be the appropriation of quantum physics ideas by the New Age movement in such films as ‘What The Bleep Do We Know?’ and ‘The Secret’. On the other hand, there is also the question of whether this is an excessively proprietorial attitude on the part of some nuclear physicists and academics. But that’s a topic for another post.

The Ring Earth


Not to be confused with Ringworld or yesterday’s post, this is about whether a doughnut-shaped planet could exist. To clear that up first: Ringworld was a concept thought up by Larry Niven for his ‘Known Space’ series, a megastructure consisting of a ring of terrain orbiting a star, given day and night by rectangular shades orbiting further in. It would require as yet undiscovered materials (unobtainium, in other words) to be built, although a more diffuse ring of habitats or indeed planets sharing a single orbit is entirely feasible. This is not that.

Relatively small bodies in this Solar System illustrate the range of shapes achievable by given quantities of solids, liquids, gases and composite matter made thereof. The fourth state of matter is entirely different, but planets are not made of plasma, practically by definition. The smallest object gravitationally obliged to be approximately round is of course the Death Star moon Mimas:

Even this isn’t that close to being perfectly round, because of the relatively huge crater Herschel, and it can also be seen to have a noticeably rough outline in this picture. It’s allowed to have fairly large mountains and deep craters. Mimas is around four hundred kilometres in diameter, although it deviates by about twenty from this, but that’s still pretty round when considering that the deepest trench in our ocean is almost twenty kilometres lower than the highest of our mountains, compared to a sea level which is not perfectly spherical itself. In absolute terms, then, Mimas is about as close to being round as Earth is: its bumps and hollows are comparable in size to our mountains, valleys, trenches, continents and abyssal plains, which would of course be scaled down on a Mimas-sized model of our planet.

The next Saturnian moon down from that size appears to be Hyperion:

So far as I can tell, just as Mimas is the smallest roughly round world in our Solar System, so is Hyperion the largest object which is a long way from being round at 360 x 266 x 205 kilometres. It’s within about thirty kilometres of Mimas’s smallest diameter, yet it manages to be rather irregular. It looks more like a pebble of pumice than a planet, and of course it’s neither. Its mean density is only just over half that of water, which is actually lower than pumice, and also lower than that of Mimas, which is about 1.15 times water’s. There must be a complicated relationship between strength, rigidity and density which decides the shape of objects of about Hyperion’s and Mimas’s size.

This house has many Escher prints owing to my family’s joint enthusiasm for the artist. One of my favourites, which I’m sad to say is not on any of our walls, is ‘Double Planetoid’. This shows two intersecting tetrahedra, one completely unaltered by technology, the other completely covered in building. Of this, Escher says:

‘Two regular tetrahedrons that penetrate one another, float through space like a planetoid. The light-coloured one is inhabited by human beings who have completely transformed their region into a complex of houses, trees and roads. The darker tetrahedron has, of course, remained in its natural state, with rocks on which plants and prehistoric animals are living. The two bodies fit together to make a whole but they have no knowledge of each other.’

M C Escher

It would be possible to make a real copy of ‘Double Planetoid’ somewhere in space, at its approximate scale. In fact it would also be possible to scale it up to at least twenty-seven kilometres on a side if it were carved from granite, even if it had Earth’s gravity. However, it probably wouldn’t arise without intelligent manipulation being involved somewhere, except maybe in an infinite Universe or a parallel world somewhere. It would also not generally be possible for ordinary matter to exert sufficient gravity to make a real version of this rather than a model.

I bring this up because an object like Hyperion could clearly be sculpted into a particular shape, although in its case the result would probably already be constrained by gravity and might end up quite rounded off, whereas one like Mimas couldn’t. Again, when bodies are fairly small there are likely forms they can take other than round, usually just irregular and lumpy, and many of these are seen in asteroids and small moons. 433 Eros, for example, is often described as “sausage-shaped”:

(I’m not sure that’s how I’d describe that). The asteroid Cleopatra is one of several described as dumbbell-shaped:

Once an object is the size of a planet, though, the options for possible shapes close down a lot. Considering this as an Earthling: the tallest possible cylindrical column of granite on this planet is said to be about the height of the Matterhorn, 4 478 metres, around half that of Everest, and the tallest possible pyramid of granite about 13 400 metres. However, this can’t be strictly true, because if the height of Everest and the depth of the Marianas Trench are added together the total comes to 19 882 metres, so given a wide enough base the limit can be exceeded. The diameter of the geoid (the shape of this planet defined by the level water would reach given only the influence of gravity and rotation, which approximately means sea level) varies by about forty kilometres between the poles and the equator, which is again more than twice 13.4 kilometres. Twice is fine because we’re talking diameter rather than radius, but more than twice suggests there are other influences, such as rotation. Steel can, if I recall correctly, form a cylindrical column up to thirty kilometres high, and there are a few specialised substances which could be used to build a tower officially reaching into space, but they’re exotic and would have to be specially synthesised.
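Those figures can be roughly reproduced from first principles. A uniform column crushes when the pressure at its base, ρgh, exceeds the rock’s compressive strength, while a pyramid of the same height puts only a third of that load on its base and so can stand three times taller. The material properties below are ballpark figures for granite, assumed for the sake of the sketch rather than looked up precisely:

```python
# Assumed material properties; ballpark figures for granite, not precise values.
STRENGTH = 130e6   # compressive strength in pascals
DENSITY = 2700.0   # density in kilograms per cubic metre
G = 9.81           # Earth's surface gravity in m/s^2

# A uniform column fails when the base pressure rho * g * h reaches the strength.
max_column = STRENGTH / (DENSITY * G)

# A pyramid's base carries a third of the weight an equal-height column would,
# so it can be three times taller before crushing.
max_pyramid = 3 * max_column
```

With these assumptions the column comes out at roughly 4.9 kilometres and the pyramid at roughly 14.7, which is at least in the same neighbourhood as the Matterhorn and 13 400-metre figures quoted above.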

This full-disc image of Jupiter was taken on 21 April 2014 with Hubble’s Wide Field Camera 3 (WFC3).

The planet with the most obvious deviation from spherical in this Solar System is Jupiter, which has a polar diameter of 133 708 kilometres but an equatorial one of 142 984, a flattening of about six and a half percent. This is because Jupiter is a substantially fluid body consisting of liquids and gases, and because it spins very fast: in terms of velocity the planet’s equator is moving nearly thirty times faster than Earth’s, and it’s also over three hundred times our mass. However, Jupiter happens not to be the least spherical planet in our neighbourhood. That honour goes to Saturn, whose rings may disguise the fact. Saturn has an equatorial diameter of 120 536 kilometres and a polar one of 108 728, a flattening of nearly ten percent. This may be connected to the fact that it’s also the least dense planet.
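Flattening is just the shortfall of the polar diameter as a fraction of the equatorial one, so the comparison is a couple of lines of arithmetic. The diameters below are the commonly quoted equatorial and polar values in kilometres:

```python
# Commonly quoted equatorial and polar diameters, in kilometres.
jupiter_eq, jupiter_pol = 142_984, 133_708
saturn_eq, saturn_pol = 120_536, 108_728

def flattening(equatorial, polar):
    # Standard definition: how far the polar diameter falls short,
    # as a fraction of the equatorial diameter.
    return (equatorial - polar) / equatorial

jupiter_f = flattening(jupiter_eq, jupiter_pol)
saturn_f = flattening(saturn_eq, saturn_pol)
```

Saturn’s flattening works out noticeably larger than Jupiter’s despite Jupiter’s faster equator, which is where the density comes in.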

It’s established, then, that a planet can be tangerine-shaped rather than spherical if it’s sufficiently fluid. These two examples are also large and rotate fast. Earth is not like that, but it’s theorised that there are planets out there in the Universe which are mainly made of water, or which have extremely deep oceans. These could presumably assume such a shape, which on a planet the size of Earth would be like having a stationary wave almost a thousand kilometres high circling the equator, and in fact this would even be noticeable from the surface as it’s close to a gradient of one in ten. This could be described as a tangerine-shaped planet, but I have to say I don’t find that idea very interesting. The shape is officially called an “oblate spheroid”. There are stars which are markèdly flattened in this way, such as VFTS 102 in the Large Magellanic Cloud’s Tarantula Nebula, said to be three times wider at the equator than at the poles, but stars are not made of solids, liquids or gases.

Another well-known variation of a spheroid is the rugby-ball shape, or prolate spheroid, and there are also stars of this shape. Some binary stars orbit each other so closely that they are mutually distorted into elongated shapes of this kind, and I don’t know this but it seems possible to me that this is also because of how fast they’re whizzing round. The question arises of whether a planet could have such a shape. Larry Niven, again, imagined such a planet in the Sirius system he called Jinx, whose “poles”, so to speak, were effectively vast plateaux rising out of the atmosphere at each end, and the “equator” was a high gravity area. The humans living near the equator needed to be very strong and muscular to cope. I don’t feel convinced that this is possible for a largely solid planet, but just as Saturn and Jupiter can get squished by their rotation I can see that if there is a system somewhere with a double gas giant, this might be what shape those planets would assume. The same might even apply to double deep ocean planets.

Other possibilities are very limited. For instance, an egg-shaped planet flat at one end and pointed at the other is difficult to envisage. However, there is one possibility which, oddly, is very far from being spherical but is still possible.

I’ve mentioned the periodical ‘Manifold’ on here a couple of times. This was a mathematical magazine published by the University of Warwick, one of my almæ matres, from 1968 to 1980, one of whose claims to fame is that it invented the game ‘Mornington Crescent’. I used to read it back then, and one of its many whimsies was a fictional toroidal planet, whose name escapes me, with six cities all joined together by an underground railway. This is a reference to a well-known mathematical puzzle involving three houses, each of which needs a water, gas and electricity supply, where none of the pipes may cross each other. This is impossible to arrange on a flat surface but works fine on a torus.

When I read this, I mused that it was a shame that such a planet could never exist, and I started working out things like I did above, with the likes of pyramids as very high mountains and various irregularities in its surface ruling it out. I then realised that I couldn’t actually find a reason for such a planet not to exist, and just assumed I didn’t have the mathematical prowess to work out why it couldn’t.

Well, it turns out that it can, in the sense that if a toroidal object of Earth’s volume, mass and composition could be formed in the first place, it wouldn’t be susceptible to collapsing into a spheroidal form. The above shape, surprisingly, is gravitationally stable. Incidentally this would also apply to water in free fall, so a spinning doughnut-shaped swimming pool in space made entirely of water is completely feasible, under a pressurised atmosphere of course. There’s a fairly easy way of understanding this. It’s already been shown that tangerine-shaped planets exist, which are largely fluid and flattened by spinning, and this is a kind of limiting case of that situation: if a fluid planet ended up spinning fast enough, not only would it become flattened but its matter would be pulled completely away from its axis of rotation. Most planets can be considered to start out as fluid, in the sense that they are either actually liquid, such as being made of magma, or made of lumps of matter small enough that on a planetary scale they behave as if they were, just as an actual liquid consists of molecules, and a heap of sand or dust can flow like water, have surface waves and even drown people. The difficulty is in imagining a scenario where this would actually happen on its own. The alternative is simply to say it’s being done by intelligent life, but then the imagination falters a little as well, because how powerful would a civilisation have to be to have the resources to make its own planets? Also, why?

Nonetheless, however it came into being, once it was there it could continue as long as any other planet in its current shape, and this is a little surprising because the deviation from a sphere in this case is extreme. I also have to admit to a little confusion and have to insert an explanatory note. I can’t honestly tell whether this shape is sustainable simply due to the planet’s gravity or whether it would also need to be rotating fast with the axis passing “vertically” through the hole. I suspect the former is the case, but even if it isn’t, the second case would guarantee that it’s possible, although it’s not clear how fast it would have to be spinning. That would also depend on the proportions of the torus. Now for the explanatory note. Thus far I’ve assiduously avoided using the words “centrifugal force” because that doesn’t exist as such, as is well-known, but it can be quite awkward to express oneself without using those words. What is in fact happening in this situation is that the mass of the planet is constantly “trying” to move in an infinite number of straight lines, all tangent to its surface at the outer equator, but is pulled away from that path owing to the electromagnetic and gravitational forces holding it together.

It’s also very unclear how big this planet would be. According to the second picture at the top of this post, the north-south distance across Afrika is roughly equivalent to the width of the “tube” of the torus, the same distance for Australia is its thickness, and the hole is about the same size as Australia again. Hence that version of a toroidal Earth is 3 000 kilometres thick, 7 000 kilometres wide on either side and has a hole 3 000 kilometres in diameter. This raises two questions for me: how to calculate the volume and surface area of a torus, and what to call the different features of the shape. Strictly speaking, the shape in the picture is not a torus because its cross-section is not circular but elliptical. The distance from the centre of the hole to the centre of the tube is called the “major radius”, R, and the radius of the tube itself is the “minor radius”, r. There’s also the aspect ratio, which is R/r. Strictly speaking, not only is the above not a torus (although the blue image is), but even if it were, it would only be a particular kind, namely the ring torus. There are also horn and spindle tori. A horn torus has its circular cross-sections touching at the centre, so strictly speaking it has no hole, and a spindle torus has the circles overlapping. Both of these shapes are slightly more achievable for a planet than the ring, in terms of events happening without intelligent intervention.

The formulæ for surface area and volume are respectively A = 4π²Rr and V = 2π²Rr². This suggests several “equivalences”. One is the size of a torus with the same surface area as Earth’s, another the volume of such a torus, another the size of a torus with the same volume as Earth’s and another the surface area of that torus. All of these depend on R and r, and thus on the aspect ratio. I’m not going to address these immediately.
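As a sketch of those equivalences, fixing the aspect ratio at R/r = 3 (an arbitrary choice) and taking Earth’s surface area as about 5.1 × 10⁸ square kilometres and its volume as about 1.08 × 10¹² cubic kilometres, the two “equivalent” tori can be worked out directly from the standard formulæ A = 4π²Rr and V = 2π²Rr²:

```python
import math

ASPECT = 3.0              # arbitrary choice of R/r for this sketch
EARTH_AREA = 5.1e8        # Earth's surface area, km^2
EARTH_VOLUME = 1.083e12   # Earth's volume, km^3

def torus_area(R, r):
    return 4 * math.pi**2 * R * r

def torus_volume(R, r):
    return 2 * math.pi**2 * R * r**2

# Torus with Earth's surface area: substitute R = ASPECT * r and solve for r.
r_area = math.sqrt(EARTH_AREA / (4 * math.pi**2 * ASPECT))
R_area = ASPECT * r_area

# Torus with Earth's volume: again substitute R = ASPECT * r and solve for r.
r_vol = (EARTH_VOLUME / (2 * math.pi**2 * ASPECT)) ** (1 / 3)
R_vol = ASPECT * r_vol
```

Under these assumptions the equal-area torus has a tube radius of roughly 2 100 kilometres and the equal-volume one roughly 2 600; the equal-volume torus then has rather more surface area than the real Earth, which is one of the attractions of a toroidal world in the first place.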

The torus in the second picture has basically the same continents as the real world, but Antarctica seems to be missing. In fact it can be concluded that the polar regions are missing altogether. However, there are two circles corresponding to the poles and of course two further circles corresponding to the Equator. Assuming the planet is held together by its rotation and doesn’t have constant daylight anywhere on its surface, i.e. is not rotating with the hole axis facing the Sun, one “polar” circle in that image passes close to lands which are equatorial on the real Earth, while the other passes through the Arctic. Meanwhile there are inner and outer equators, and the outer one passes through the Mediterranean. Assuming no axial tilt, the inner equator is in eternal darkness and therefore colder than Antarctica, which would take a lot of water out of circulation and probably cool the whole planet. If it’s tilted at the same angle as we are, on the other hand, it would be exposed to sunlight some of the time, and the “polar” circles would also have seasons, half a year of night and half of day and so forth, as they have here.

If this planet maintains its shape through rotation, there will probably be strong winds and ocean currents everywhere. There’s also an important topological difference between a spheroidal and a toroidal Earth. Topologically, considering the troposphere (the bit with the weather in it) as a single layer, the “hairy ball theorem” means there must always be at least one location on a spherical Earth where there is no wind. This is not so on a toroidal planet, because a torus can be combed flat: the still spots aren’t forced to exist at all. Ocean currents escape the theorem even in the real world, because the land punches holes in the ocean in which the potential still points can be located. If you go high enough in Earth’s atmosphere, the air is no longer dragged along by our rotation, so perhaps a toroidal Earth could have a relatively calm troposphere like ours.

Apparent gravity would also vary across its surface. The rapid spin would act against gravity at the outer equator and in favour of it at the inner one. Some time ago, Alfred Wegener attributed continental drift to a centrifugal effect he called Polflucht, “flight from the poles”. Depending on how rigid the planet is, Polflucht could arguably be a reality on this world, and perhaps the continents actually would cluster around the outer equator. If they did, though, they would have to be quite mountainous to prevail over the water, which would be pulled into a belt in the same region. This might actually be so, because the lower gravity there would favour higher mountains. It seems to be shaping up into a situation where the inner region is a cold, flat desert, there are two strips of land either side of the outer equator, and continents tend to move towards the outer equator, where they form fold mountains which are nevertheless submerged under a deep ocean resembling the Tethys of our prehistoric past.

There would probably need to be a moon of some kind, perhaps to keep the interior stirred and so maintain a protective magnetic field. This could orbit at the outer equator. The toroidal magnetosphere thus formed would be a different shape from the real one.

From the surface, there are conventional horizons to the north and south, but to the east and west the vista depends on where you are. On the outer equator, the situation is pretty much as it is here. Near the “polar” circles, the planet is effectively flat along one circumference, and the landscape or seascape (snowscape more likely) disappears into the haze of the atmosphere. The inner equator offers the most spectacular view. During the day, the sides climb upwards into curved, hornlike shapes which gradually plunge into night, forming an overhead arc. At night, the situation is the same but there would be a visible daylit sector which would first recede up the horn, travel across the sky and then descend towards the observer until dawn. On this inner surface, gravity would be high, so the view might be nice but it would also be quite uncomfortable or even uninhabitable. I’m assuming here that there would be an axial tilt.

There’s a limit to the relative size of the hole. The narrower the ring, the less stable the planet. Both of the first two illustrations are viable, but a more traditional banded ring shape would be highly volcanic because it would tend to flex and crack under the forces maintaining its shape. Hence a doughnut shape is best. Even then the day would only last about three hours. A moon might move in a straight line in and out of the hole, or it could follow an ∞-shaped orbit.

The remarkable thing about this scenario is, of course, that it isn’t impossible. There could never be a tetrahedral or cube-shaped planet and the largest conceivable regular polyhedral planet would probably be something like a dodecahedron perhaps somewhat larger than Mimas but still much smaller than Earth or even Mercury, because the vertices would effectively be high mountains. In fact planets are in a sense polyhedral because they aren’t perfectly smooth spheroids on one scale, although on a smaller scale the jagged peaks and steep valleys would be rounded – this is a fractal issue, because on a smaller scale still they’d be jagged again, and so on. However, they are also very close to being smooth. As far as I can tell, the only possible shape a planet could be which is radically different from a sphere is a torus. What isn’t clear is whether it could ever happen on its own. I can easily believe that there are occasionally asteroids which have holes all the way through the middle, although as far as I know there are no known examples in this Solar System. A very rapidly spinning protoplanet could form into a torus, and the question then arises of what could cause it to spin so rapidly. Perhaps if it were high in iron and close to a neutron star this could happen, but it would be unlikely to be habitable. A non-habitable toroidal planet is unsurprisingly much easier to devise than a habitable one. However, given the will, the technology and the access to resources, nothing at all seems to stop an intelligent technological culture from making such a planet on a whim, or perhaps as a work of art. Isn’t that amazing?

Middle-Sized?

I don’t know if you’ve ever seen the short film “Powers Of Ten”. It starts with a photo of a picnic and zooms out to one hundred million light years, then zooms in to a hundred attometres. It can be seen here:

I have a distinct memory of a different film and wonder if it’s been remade. Despite the date on this, I think this is the 1968 version; ‘The Voices Of Time’ was published in 1966. The maximum zoom out is to 10²⁴ metres and the maximum zoom in to 10⁻¹⁶ metres, neither of which is an absolute limit. Nor does the upper bound correspond to the limits of knowledge at the time, so far as I can tell, and a metre is not in the middle of that range: the middle would be 10⁴ metres, i.e. ten kilometres, which is of the order of the width of Chicago, probably somewhat smaller. The idea of it being in the middle is a bit nebulous-sounding. What I mean to ask is: how big are we, in terms of powers of ten, or for that matter any other number, in the scheme of things? Are we as much bigger than the smallest possible length as we are smaller than the largest, or are we off to one side, and if so, which?

The smallest possible length is the Planck Length, which is 1.616255(18) × 10⁻³⁵ metres. Strictly speaking there is no upper limit, because it appears that space will continue to expand for ever, and even if it doesn’t, that isn’t because there’s a geometrically ordained maximum size, but the diameter of the Universe is said to be 28 gigaparsecs, which is 8.6 × 10²⁶ metres (the oft-quoted 8.635317 × 10²⁶ has spurious accuracy). While we’re “out here”, I may as well work out the volume of the Universe, and I may have this wrong. The Universe is not spherical but hyperspherical, and its volume corresponds to the surface area of a sphere in the same way as that corresponds to the circumference of a circle. The circumference of a circle is of course 2πr and the surface area of a sphere 4πr², but the pattern continues less obviously than you might guess: the three-dimensional “surface” of a hypersphere has a volume of 2π²r³. It’s a bit difficult to work out what the “diameter” of the Universe means because it isn’t spherical, but assuming it means the diameter of the hypersphere which in practical terms constitutes space, this gives it a volume of about 1.6 × 10⁸¹ cubic metres. It’s also worth comparing this with the volume of an ordinary sphere of the same size, that formula being (4/3)πr³, which would give the Universe a volume of “only” about 3.37 × 10⁸⁰ cubic metres, barely a fifth of the size. This illustrates the significance of the fact that Euclidean geometry doesn’t apply at this scale, and it also means the Universe holds nearly five times more volume than its apparent diameter suggests. In Whovian terms, it’s dimensionally transcendental. It’s also possible to stick these two big figures together and work out one in terms of the other: how many Planck volumes are there right now?
The answer is a figure with a hundred and eighty-six digits, which puts an upper limit on the useful precision of π, although as time goes by it would drift out of kilter, so many more places may in fact be necessary. In the unlikely event that you need this figure, go here, which gives it to a million decimal places. I find this quite reassuring because it suggests that memorising the number in question isn’t entirely pointless, or maybe that’s disappointing.
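That digit count can be checked directly, taking the volume of a hypersphere’s three-dimensional surface to be 2π²r³ (the standard result, assumed here) and the 28-gigaparsec diameter at face value, so the answer inherits all the caveats above:

```python
import math

PLANCK_LENGTH = 1.616255e-35       # metres
UNIVERSE_DIAMETER = 8.635317e26    # metres, i.e. 28 gigaparsecs

# Treat a Planck volume as a cube one Planck length on a side.
planck_volume = PLANCK_LENGTH ** 3

# Hypersurface volume of a hypersphere: 2 * pi^2 * r^3.
universe_volume = 2 * math.pi**2 * (UNIVERSE_DIAMETER / 2) ** 3

ratio = universe_volume / planck_volume
digits = math.floor(math.log10(ratio)) + 1   # number of digits in the whole-number part
```

Under these assumptions the count comes out with 186 digits; changing the hypersphere formula or the diameter shifts it by a digit or so either way, which is why the precise figure isn’t worth memorising even if π is.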

Why is a Planck Length the shortest possible length? The reason originates in the “ultraviolet catastrophe”. It’s been known for thousands of years that when an object gets hot, it glows red, then orange, then yellow, then white, but nobody knew why for most of our history. Given classical physics, why don’t hot objects simply glow white and get brighter as they get hotter? There would be a problem with them doing this: if they glowed across the entire range of frequencies, including all the frequencies higher than visible light, and any frequency at all between any two others were allowed, the total energy radiated would be infinite. Obviously a hot object is not infinitely bright, but why?

The answer is that there is a minimum difference between the energies of the light emitted by a hot object. This means that physical reality has a granularity to it. It has, in terms of computer graphics and video, a frame rate and a resolution, determined by Planck’s constant, h, and the speed of light, c. Light can only be emitted in discrete quantities: below a certain fineness there are no intermediate energy levels, and energy leaps between levels without taking any value in between. The minimum quantity is known as a quantum, and the energy of a photon is its frequency multiplied by h. This solves a lot of problems. For instance, if electrons in orbitals constantly radiated energy over a continuous range, they would spiral into the nucleus and the atom would collapse. Instead, an electron can only have certain clearly defined energy levels. The Planck Length is given by the formula:

ℓP = √(ℏG/c³), where G is the gravitational constant and ℏ is h divided by 2π. The Planck Time is then the time taken for light to travel this distance.
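The formula can be checked against the published value using the standard constants (the values below are the CODATA figures as I have them; worth verifying for yourself):

```python
import math

H_BAR = 1.054571817e-34   # reduced Planck constant, J s
G = 6.67430e-11           # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458           # speed of light, m/s (exact by definition)

# Planck length: sqrt(h-bar * G / c^3).
planck_length = math.sqrt(H_BAR * G / C**3)

# Planck time: how long light takes to cross one Planck length.
planck_time = planck_length / C
```

This reproduces the 1.616 × 10⁻³⁵ metre figure quoted earlier, and a Planck Time of about 5.39 × 10⁻⁴⁴ seconds.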

The thing about the Planck Length in terms of scale is that it’s so much smaller than anything significant which seems to be happening, such as the size of the “smallest” subatomic particles. A zoom into the Planck Length would mainly be very boring because it’s nineteen orders of magnitude smaller than the limit in ‘Powers Of Ten’, which is equivalent to a speck of dust compared to something like ten dozen times the diameter of the orbit of Neptune. However, assuming that the film was made in 1968, certain fundamental particles such as quarks had not been established to exist yet, so nowadays it would be possible to go further. At this scale, it’s conceivable that “quantum foam” exists. Spacetime may be fluctuating in nature at these dimensions like a stormy sea, which also suggests that there is energy present in a pure vacuum. How this might be extracted, and whether it would be desirable to do so, is another question. It’s sometimes thought that the Universe is not at its lowest energy level and if that level were to be reduced to zero, for instance by “mining” the energy of quantum foam, that true vacuum would spread out at the speed of light from where it was formed and destroy everything.

Getting back to the question in hand, the smallest possible scale is the Planck Length, of the order of 10⁻³⁵ metres, and the largest possible scale is the Universe itself, whose current diameter is of the order of 10²⁶ metres. This means we are on the large side. Of the sixty-one orders of magnitude possible at the moment, we sit about thirty-five above the smallest and twenty-six below the largest. Middle-sized is around the thirtieth from either end, which is around ten microns, or somewhere between the size of a white blood cell and a red blood corpuscle. Organisms of this size include protists and single-celled algæ. They are to the Universe as the Planck Length is to them. Even so, we are close to being middle-sized in the grand order of things, in that a factor of a million is not hugely significant when the number considered is around ten decillion. A hundred thousand times bigger than us is the size of a region of England such as the Midlands, and that’s not terrifyingly and incomprehensibly enormous. Therefore we are, very roughly, in the middle.
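The arithmetic of “middle-sized” can be sketched like this, taking the two extreme scales from the text (a rough illustration; the exact midpoint depends on how you count the orders):

```python
import math

planck_length = 1.6e-35   # metres: the smallest meaningful scale
universe_diameter = 1e26  # metres: order of magnitude of the observable Universe

# Total span in orders of magnitude between the two extremes
span = math.log10(universe_diameter) - math.log10(planck_length)
print(f"span: about {span:.0f} orders of magnitude")  # about 61

# The geometric midpoint of the two extremes
middle = math.sqrt(planck_length * universe_diameter)
print(f"middle scale: about {middle:.1e} m")  # a few tens of microns
```

The geometric mean lands in the tens-of-microns range, squarely in protist territory, which is the point being made above.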

A More Literary Bit

I don’t know what pretensions I have to dare describe anything I write as appropriate for the above heading, but there it is. Yesterday I made this YouTube video:

Incidentally, I’m thinking of going back to making YouTube videos, but in future they’re likely to include no speaking and I won’t be showing my face on them, if I bother at all.

I found this rather unsatisfactory. I was going for the impression that the rather overgrown back garden was like a jungle at a smaller scale, but there were a couple of issues. One was that most of this wasn’t truly at ground level, and the other was that there seemed to be precious few animals in that video. I may give it another go at a later date. What I wanted was a lush forest-like appearance teeming with animal life, such as spiders, ants, beetles and flies. Something like this but with animals:

We do, to Sarada’s chagrin, have plenty of horsetails in our garden but they’re not forty metres tall. It’s really a testament to them that they’re still around after 300 million years, and to me it raises the question: when you get smaller, is it like going back in time? After all, on a sufficiently tiny level there are no vertebrates, or rather the vertebrates who do exist are great hulking monsters. There’s a frog who is less than eight millimetres long, and in Britain the minimum size seems to be a few centimetres. Mammals and birds as they’re now constituted can’t be smaller than a certain size because they would be physically incapable of eating enough food to keep their body temperatures at the right level to survive, so getting smaller is a journey into the past in terms of the animals all being “cold-blooded”, except of course that as discussed previously a flying insect isn’t really cold-blooded at all if it has to put much effort into flying. However, also at this scale animals don’t so much need to put effort into flying as into not flying, because for them the air is a fairly thick, buoyant fluid which they don’t so much fly through as swim in.

J G Ballard’s short story ‘The Enormous Space’ tells the story of a man who resolves never to leave his house again. As the days go by, his house seems to expand until even the room he’s in is too vast to traverse. It’s been adapted into a TV play by the BBC:

Because of lockdown (I almost gave that a capital letter), some of us have found our homes becoming our worlds like the character in this piece, but to the various denizens of our dwellings they already are. The longest line segment (actually a geodesic) which can be drawn in the area I have lived my entire life within is about two thousand kilometres long, from Inverness to Rome, so that’s my world, in a way. Reducing this by a factor of a thousand gives an area the size of a small town, so for an ant, say, this is their world. The vegetated area of the garden is about twelve metres long, so magnifying that by a thousand makes it twelve kilometres, like a large forest in terms of England today. But this is mainly a bamboo forest with prodigiously high “trees”, since it’s largely grass. The tallest bamboo species is Dendrocalamus giganteus, which is up to thirty-five metres high, and at a scale of one to a thousand this is equivalent to a fairly well-manicured lawn, which we don’t currently have. To an ant, the moderately tall grass in the back garden is something like ten times the height of the tallest bamboo, making it more like a redwood forest, though of course not woody, because of the relatively weaker pull of gravity at that scale.

This is truly a different world. Gravitational acceleration matters less there because the masses involved are a thousand million times lower. An insect could easily fall from a skyscraper without being harmed, even though the gravity operating on a two millimetre long organism is, in a sense, relatively a thousand times as strong. The atmosphere becomes a much more important factor, even the dominant one. Water becomes, if anything, more dangerous: its surface tension not only allows it to be walked on, but can also capture an insect permanently, even though it wouldn’t sink, and this opens up a whole ecological niche of predators on the victims of surface tension, such as raft spiders and pond skaters. At the same time there are still the more familiar predators and prey in the form of ladybirds, wolf spiders and aphids.

It’s easy to think of oneself as trapped in one’s home, and since I’m a carer that is a particular hazard for me. However, not only do I continue to have communication with the outside world, but I also have access to the microcosm. Even without a microscope I can observe the relatively large animals living in the house and garden, and when I get down to the middle-sized animals, such as the hundred-micron Colpoda, which will be present in the soil here as it is almost everywhere, and the crinoid-like Vorticella, whose stalks are around the same length and which is likely to be present in the guttering, the garden is relatively the size of that good old colloquial unit, Wales. How could I want for any more? I can also go the other way, though since I live in England with its grey skies, not quite so far. But on a clear night, like anyone else I can realistically see individual stars thousands of light years away. The whole observable Universe is around me and half of it is in principle accessible, though this presumes I have my own observatory, and in practical terms it’s far less, because I’ve only got a pair of binoculars. But even so, I can see the Orion Nebula, 1 300 light years away, and the Pleiades open star cluster, 440 light years from here, and so on.

In the end, then, although it’s important to get out of the house, to some extent it’s what one makes of it, and the scope for what I might call adventure but is probably better called observation, even just from this one small house and garden in an English Midlands town, is vast. Just because the slightly larger than medium scale at which we happen to live lacks, in the East Midlands anyway, rainforests, elephants, lions and whales, doesn’t mean it doesn’t contain an equally fascinating array of wildlife on another level, and just because we’re confined to Earth doesn’t mean we can’t observe a fascinating wider Galaxy. What more could anyone want? Isn’t it great to be middle-sized?

The Anti-Universe

A prominent mythological theme is that of time being cyclical. For instance, in Hinduism there is a detailed chronology which repeats endlessly. Bearing in mind that the numbers used in mythological contexts are often mainly there to indicate enormity or tininess, there is the kalpa, which lasts 4 320 million years and is equivalent to a day in Brahma’s life. There are three hundred and sixty of these days in a Brahman year, and a hundred Brahman years in a Brahman lifetime, after which the cycle repeats. Within a Brahman day, human history also repeats a cycle known as the Yuga Cycle, which consists of four ages: Satya, Treta, Dvapara and Kali. The names refer to the proportion of virtue and vice characterising each age. Satya, meaning “truth” or “sincerity”, is perfect: life is long and everyone is kind to each other, wise, healthy and so on. Treta is “third” in the sense of being three quarters virtue and one quarter vice, Dvapara is two quarters of each, and Kali, unsurprisingly the current age, is the age of evil and destruction. Humans start off as giants and end as dwarfs. Then the cycle repeats. Thus there are cycles within cycles in Hindu cosmology.

The Maya also have a cyclical chronology, including the Long Count, whose largest named unit spans around sixty-three million years. Probably the most important cycle in Mesoamerican calendars is the fifty-two year one, during which the two different calendars cycle in and out of sync with each other. The Aztecs used to give away all their possessions at the end of that period in the expectation that the world might come to an end.

The Jewish tradition has a few similar features as well. Firstly, it appears to use the ages of people to indicate their health and the decline of virtue. The patriarchs named in the Book of Genesis tend to have shorter and shorter lives leading up to the Flood, which ends the lives of the last few generations before it, including the 969-year-old Methuselah. Giants are also mentioned in the form of the Nephilim, although they are seen as evil. I wonder if this reflects the inversion of good and evil which took place when Zoroastrianism began, where previously lauded deities were demonised. There is also a cycle in the practice of the Jubilee, which follows seven seven-year Sabbatical cycles, making forty-nine years, and obviously there are the seven-day weeks, which we still have in the West.

The Hindu series of Yugas also reflects the Greek tradition of Golden, Silver, Bronze and Iron Ages, which was ultimately adopted into modern archæology in modified form as the Three-Age System of Stone, Bronze and Iron. The crucial difference between the Hindu and Greek age systems and our own ideas of history is that they both believed in steady decline, whereas we tend to be more mixed. We tend to believe in progress, although our ideas of what constitutes that do vary quite a lot. In a way, because of the operation of entropy, it makes more sense to suppose that everything will get worse, although since history is meant to be cyclical it can also be expected to get better. Things age, wear out, run down, burn out and so on, and this is the regular experience for everyone, no matter when in history they’re living, and it makes sense that the world might be going in the same direction. On the longest timescale of course it is, because the Sun will burn out, followed by all other stars and so on.

Twentieth century cosmology included a similar theory, that of the oscillating Universe. It was considered possible that the quantity of mass in the Universe was sufficient that once it got past a certain age, gravity acting between all the masses in existence would start to pull everything back together again until it collapsed into the same hot, dense state which started the Universe in the first place. There then emerge a couple of issues. Would the Universe then bounce back and be reborn, only to do it again in an endless cycle? If each cycle is an exact repetition, does it even mean anything to say it’s a different Universe, or is it just the same Universe with time passing in a loop?

This is not currently a popular idea, because it turns out that there isn’t enough mass in the Universe to cause it to collapse against the Dark Energy which is pushing everything apart, so ultimately the objects in the Universe are expected to become increasingly isolated until there is only one galaxy visible in each region of the Universe within which recession speeds remain below the speed of light. This has a significant consequence. A species living in a galaxy at that time would be unaware that things had ever been different; there would be no evidence available to suggest otherwise. We can currently see the galaxies receding, and therefore we can know that things will be like that one day, but they would have no way to discover that things hadn’t always been as they found them. This raises the question of what we might have lost. We reconstruct the history of the Universe based on the data available to us, and we’re aware that we’re surrounded by galaxies which, on the very large scale, are receding from each other, so we can imagine the film rewinding and all the stars and galaxies, or what would become them, starting off in the same place. But how do we know there wasn’t once evidence, now unrecoverable, of something crucial to our own understanding of the Universe?

Physics has been in a bit of a strange state in recent decades. Because the levels of energy required cannot be achieved using current technology, the likes of the Large Hadron Collider are not powerful enough to provide more than a glimpse of the fundamental nature of physical reality. Consequently, physicists are having to engage in guesswork without much feedback, and this applies also to their conception of the entire Universe. I’ve long been very suspicious about the very existence of non-baryonic dark matter. Dark matter was originally proposed as a way to explain why galaxies rotate as if they have much more gravity than their visible matter, i.e. stars, is exerting. In fact, if gravity operates over long ranges in the same way as it does over short distances, such as within this solar system or between binary stars, something like nine-tenths of the mass is invisible. To some extent this can be explained by ordinary matter such as dust, planets or very dim stars, and there are also known subatomic particles such as neutrinos which are very common but virtually undetectable. The issue I have with non-baryonic dark matter, and I’ve been into this before on here, is that it seems to be a specially invented kind of matter designed to fill the gap in the model while being practically undetectable. There’s another possible solution: modifying how gravity is supposed to behave over very long distances. What makes this worse is that dark matter is now being used to argue for flaws in the general theory of relativity, when it seems very clear that the problem is actually that physicists have proposed the existence of a kind of substance which is basically magic.

If you go back to the first moment of the Universe, there is a similar issue. Just after the grand unification epoch, a sextillionth (long scale) of a second after the Big Bang, an event is supposed to have taken place which increased each of the three spatial dimensions of the Universe by a factor of the order of one hundred quintillion in a millionth of a yoctosecond. If you don’t recognise these words, the reason is that these are unusually large and small quantities, so their exact values aren’t that important. Some physicists think this is fishy, because again something seems to have been simply invented to account for what happened in those circumstances without there being other reasons for supposing it to be so. They therefore decided to see what would happen if they used established principles to recreate the early Universe, and in particular they focussed on CPT symmetry.

CPT symmetry is Charge, Parity and Temporal symmetry, and can be explained thus, starting with time. Imagine a video of two billiard balls hitting and bouncing off each other, shown out of context. It would be difficult to tell whether that video was being played forwards or backwards. This works well on a small scale, perhaps with two neutrons colliding at about the speed of sound at an angle to each other, or a laser beam reflecting off a mirror. Charge symmetry means that if you observe two equally positively and negatively charged objects interacting, you could swap the charges and still observe the same thing, and for that matter two objects with the same charge could both have the opposite charge and still do the same thing. Finally, parity symmetry means that you can’t tell whether what you’re seeing is the right way up, upside down or reflected. None of these symmetries is obvious in the complicated situations we tend to observe, because of pesky things like gravity, accidentally burning things out by sticking batteries in the wrong way round, and miswired plugs, but in sufficiently simple situations all of them hold.
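Two of these symmetries can be illustrated with a toy model (entirely my own sketch, not anything from the physicists discussed here): an exact, event-driven elastic collision between two equal balls on a line, where reversing all the velocities and running for the same time returns the system to its starting point, and a Coulomb-style force that is unchanged when both charges are flipped.

```python
def evolve(x1, v1, x2, v2, t):
    """Advance two equal-mass balls on a line for time t, allowing one
    perfectly elastic collision (equal masses simply swap velocities)."""
    if v1 > v2 and x1 < x2:
        t_col = (x2 - x1) / (v1 - v2)  # time until the balls meet
        if t_col <= t:
            # advance to the collision, swap velocities, use up that time
            x1, x2 = x1 + v1 * t_col, x2 + v2 * t_col
            v1, v2 = v2, v1
            t -= t_col
    return x1 + v1 * t, v1, x2 + v2 * t, v2

# T symmetry: run forwards, reverse all velocities, run the same time again
x1, v1, x2, v2 = evolve(0.0, 1.0, 10.0, 0.0, 15.0)
x1, v1, x2, v2 = evolve(x1, -v1, x2, -v2, 15.0)
print(x1, x2)  # back at 0.0 and 10.0: the film played backwards is also valid

# C symmetry: the Coulomb force is identical if both charges change sign
def coulomb(q1, q2, r, k=8.9875e9):
    return k * q1 * q2 / r**2

print(coulomb(1e-6, -1e-6, 0.1) == coulomb(-1e-6, 1e-6, 0.1))  # True
```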

But there is a problem. The Universe as a whole doesn’t seem to obey these laws of symmetry. For instance, almost everything we come across seems to be made of matter, even though there doesn’t seem to be any reason why there should be more matter than antimatter or the other way round, and time tends to go forwards rather than backwards on the whole. One attempt to explain why matter seems to dominate the Universe is that for some reason in the early Universe more matter was created than antimatter, and since matter meeting antimatter annihilates both, matter is all that’s left. Of course antimatter does crop up from time to time, for instance in bananas and thunderstorms, but it doesn’t last long, because a positron, say, pretty soon comes across its ordinary-matter counterpart, an electron, and the two wipe each other off the map in a burst of energy.

These physicists proposed a solution which does respect this symmetry and allows time to move both forwards and backwards. They propose that the Big Bang created not one but two universes: one where time runs forwards, made mainly of matter, and one where time runs backwards, made mainly of antimatter; each of these universes is also, geometrically speaking, a reflection of the other, so that all the left-handed people in one are right-handed in the other. This explains away the supposèd excess of matter. There’s actually just as much antimatter as matter, but it swapped over at the Big Bang. Before the Big Bang, time was running backwards and the Universe was collapsing.

In a manner rather similar to the thought that an oscillating Universe could be practically the same as time running in a circle, because each cycle might be identical and there’s no outside to see it from, the reversed, mirror-image antimatter Universe is simply this one running backwards, again with nothing on the outside to observe it from. Therefore, for all intents and purposes, there just is this one Universe running forwards after the Big Bang, because it’s indistinguishable from the antimatter one running backwards. On the other hand, the time dimension involved is the same as this one, and therefore it could just be seen as the distant past, which answers the question of what there was before the Big Bang: there was another universe, or rather there was this universe. It also means everything has already happened.

But a further question arises in my head too, and this is by no means what these physicists are claiming. As mentioned above, one model of the Universe is that it repeats itself in a cycle. What we may have here is theoretical support for the idea of a Universe collapsing in on itself before expanding again. That’s the bit we can see or deduce given currently available evidence. However, in the future, certain evidence will be lost because there will only be one visible galaxy observable, and the idea of space expanding will be impossible to support even though it is. What if one of the bits of evidence we’ve already lost is of time looping? Or, what if time just does loop anyway? What if time runs forwards until the Universe reaches a maximum size and then runs backwards again as it contracts? There is an issue with this. There isn’t enough mass in the Universe for it to collapse given the strength of dark energy pushing it apart, but of course elsewhere in the Multiverse there could be looping universes due to different physical constants such as the strength of dark energy or the increased quantity of matter in them, because in fact as has been mentioned before there are possible worlds where this does take place. Another question then arises: how does time work between universes? Are these looping universes doing so now in endless cycles, or are they repeating the same stretch of time? Does time even work that way in the Multiverse, or is it like in Narnia, where time runs at different speeds relative to our world?

It may seem like I’ve become highly speculative. In my defence, I’d say this. I have taken pains to ignore my intuition in the past because I believed it was misleading. However, there appears to be an intuition among many cultures that time runs in a cycle, and the numbers these cultures produce are oddly close to modern scientific figures. The Mayan calendar’s longest time period is the Alautun, which lasts 63 081 429 years, close to the number of years it’s been since the Chicxulub Impact, which coincidentally was nearby and wiped out the non-avian dinosaurs. The Indian kalpa is 4 320 million years in length, which is again quite close to the age of this planet. Earth is 4 543 million years old and the Cretaceous ended 66 million years ago, so these figures are about 4.6% out in the case of the Maya and 5% for the kalpa. Of course it may be coincidence, and the idea of time being cyclical may simply be based on something like the cycle of day and night or the seasons through the year, but since I believe intuitive truths are available in Torah and the rest of the Tanakh, I don’t necessarily have a problem with other sources. Parallels have of course been drawn between ancient philosophies and today’s physics before, for example by Fritjof Capra in his ‘The Tao [sic] Of Physics’. Although much of what he says has been rubbished by physicists since, there is a statue of Dancing Shiva in the lobby at CERN and one quote from Capra is widely accepted:
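The closeness claimed here is easy to check, taking the figures exactly as given in the text:

```python
alautun_years = 63_081_429    # longest Mayan period, as given above
chicxulub_years = 66_000_000  # end of the Cretaceous

kalpa_years = 4_320_000_000   # one day of Brahma
earth_age_years = 4_543_000_000

# Relative differences, expressed against the mythological figures
maya_off = (chicxulub_years - alautun_years) / alautun_years * 100
kalpa_off = (earth_age_years - kalpa_years) / kalpa_years * 100

print(f"Maya figure off by {maya_off:.1f}%")    # about 4.6%
print(f"kalpa figure off by {kalpa_off:.1f}%")  # about 5.2%
```

Measured against the modern figures instead of the mythological ones, the percentages come out very slightly smaller, which is where a round “5%” for the kalpa comes from.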

“Science does not need mysticism and mysticism does not need science. But man needs both.”

Stuff That Nearly Is

Most of what happens is fairly predictable, and they say that in an infinite Universe, which this probably isn’t, everything is not only possible but inevitable. That is, everything that is possible must exist. This is not, however, so. For instance, at first glance it seems possible that the whole of space could be filled with diamonds one millimetre in diameter, each separated from eight other diamonds of the same size by one metre. Certainly this seems to be a stable arrangement, particularly if they’re all slowly orbiting each other and the gravitational forces balance each other, but we can easily establish by observation that this is not so. Therefore there is an infinite set of things which are entirely possible but don’t exist, at least in this universe. Likewise, there could be quite a few things which we only believe are possible because we don’t know what rules them out, and it might be imagined that in fact there is really only one true way that things can be. However, this is not so.

Now there is this:
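I can’t be certain which equation stood here, but given the description that follows (abbreviated, would fill a sheet of A4 expanded, covering everything except gravity), it is presumably the famous compact form of the Standard Model Lagrangian, often quoted as:

```latex
\mathcal{L} = -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}
            + i \bar{\psi} \slashed{D} \psi
            + \bar{\psi}_i \, y_{ij} \, \psi_j \, \phi + \mathrm{h.c.}
            + \left| D_\mu \phi \right|^2 - V(\phi)
```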

I’m not going to pretend that I can remember what any of the symbols in this equation mean, and without that information this is just an impressive-looking piece of technobabble. It’s also abbreviated, and I may have written some of it down wrongly. Expanded, it would fill a sheet of A4. But this is in fact almost that famous Holy Grail: a formula which, written down fully, explains everything physical in the Universe. What’s missing is gravitation, because gravity may not be a force at all, unlike the three forces covered by the equation above. But right now, that may as well say “abracadabra”, because it means nothing to me.

There seems, however, to be a problem with the idea that that thing up there, along with a similar account of gravity, really would be a theory of everything. I mentioned previously (somewhere on here, can’t find it right now) that fine tuning is wanting of an explanation. As far as anyone can tell, there is no link between the relative strengths of the different forces, including gravitation, which make heavier atomic matter, and through it chemistry, biochemistry, life as we know it and human beings, possible; nothing means things have to be the way they are. Consequently we may live in a multiverse almost entirely consisting of very simple and boring universes incompatible with chemistry. There seems to be no cause for these proportions, and consequently there are a number of things which nearly are, but aren’t in this universe. I’ve mentioned these before, but not in as much detail.

Be warned, because I’m about to talk about nuclear physics, and I’m going to do so from a position of considerable ignorance, but not total ignorance.

It should go without saying that a convenient way of looking at atoms is that they consist of orbitals of various shapes, such as a four-leaved clover or a dumb-bell, associated with electrons, balanced by a nucleus at the centre which “wants” to have the same positive charge as the total negative charge of the electrons. Hence helium has two positively charged particles, protons, and two negatively charged ones, electrons, in its orbital. Hence chemistry. Atomic nuclei, with the exception of the most common isotope of hydrogen, also contain a similar number of neutrons, which are uncharged. The heavier the atom is, the more neutrons are needed proportionately to balance the protons in order that it remain as stable as possible. Hence with carbon and oxygen, with six and eight protons respectively, there’s a stable isotope with the same number of neutrons. By the time uranium is reached, there are no stable forms at all, but the most stable has ninety-two protons and 146 neutrons (a gross plus two). Even that isn’t enough. The issue is that like charges repel, so atomic nuclei struggle to stay in one piece, since they consist of clusters of positive and neutral charges.
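The drift towards proportionately more neutrons shows up clearly in the examples just given (the figures are from the text, plus bismuth-209):

```python
# (isotope, protons, neutrons) for the most stable or common isotopes mentioned
nuclei = [
    ("carbon-12",    6,   6),
    ("oxygen-16",    8,   8),
    ("bismuth-209", 83, 126),
    ("uranium-238", 92, 146),
]

for name, protons, neutrons in nuclei:
    ratio = neutrons / protons
    print(f"{name}: N/Z = {ratio:.2f}")
# The neutron-to-proton ratio climbs from 1.00 towards roughly 1.6
# as nuclei get heavier and need more "glue" against proton repulsion
```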

There are various factors and forces involved in atomic nuclei, not all of which I can easily call to mind, but two ways of understanding how they work are the shell model and the liquid drop model. No two otherwise identical particles in close proximity can occupy the same quantum state. In atoms this manifests in the form of electrons having “shells” – they have different energy levels into which they slot, so there are for example two possible electrons at the lowest state and eight in the second, and then it gets complicated. This arrangement also applies to protons and neutrons in atomic nuclei, which can be thought of as having a kind of condensed version of the electron shells which make chemistry possible. The liquid drop model sees atomic nuclei as akin to droplets floating in space, and as such they have cohesive and adhesive forces and surface tension. Just as a drop of water has a skin round it which is difficult to penetrate and tends to hold the water together, so have atomic nuclei, and just as the interior of a drop of water consists of water molecules which stick together, so do the nucleons inside atomic nuclei. Drops can coalesce and separate, and so can atomic nuclei.

The mass of an atomic nucleus is never the exact sum of the masses of its protons and neutrons, because in order for it to hold together, some of their mass has to become binding energy. Exactly how much can be calculated by considering the nucleus in terms of the forces mentioned above, along with some others. This is known as the “semi-empirical mass formula” and takes account of the nuclear volume, the surface area of the nucleus, the repulsion between the protons, the fact that nucleons need to be paired according to their spin (which I hadn’t mentioned before for simplicity’s sake), and the energy cost of an imbalance between the numbers of protons and neutrons, known as the asymmetry term, which follows from the fact that they cannot all occupy the lowest energy levels. All of these taken together, and there is a formula but I won’t bother you with it, explain much of what happens between and inside atoms, but there is a second property known as “magic numbers”. Certain isotopes and elements are more stable than they would be expected to be given this model and the associated formulæ, and consequently others are less so if you take these “magic numbers” to be “normal”. Therefore, the shell model is also needed. Both models apply to real atomic nuclei and don’t contradict each other. If an atomic nucleus has a magic number of either neutrons or protons, it will be unusually stable. These numbers include 2, 8, 20, 28, 50, 82 and 126. Of these, element 126 has yet to be recognised, but the others correspond to helium, oxygen, calcium, nickel, tin and lead, as far as protons are concerned, and as I mentioned before, oxygen-16, which is doubly magic, is particularly stable.
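A sketch of the liquid-drop part of that formula, with textbook-style coefficients (the exact coefficient values vary between sources and are assumptions here):

```python
def binding_energy(Z, A):
    """Semi-empirical (Bethe-Weizsaecker) binding energy in MeV for a
    nucleus with Z protons and A nucleons. Coefficients in MeV are
    typical textbook values."""
    a_vol, a_surf, a_coul, a_asym, a_pair = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z
    B = (a_vol * A                              # volume term: bulk cohesion
         - a_surf * A ** (2 / 3)                # surface term: "surface tension"
         - a_coul * Z * (Z - 1) / A ** (1 / 3)  # proton-proton repulsion
         - a_asym * (A - 2 * Z) ** 2 / A)       # neutron-proton imbalance
    if Z % 2 == 0 and N % 2 == 0:               # pairing: even-even nuclei gain
        B += a_pair / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:             # odd-odd nuclei lose
        B -= a_pair / A ** 0.5
    return B

# Iron-56, near the peak of nuclear stability
b = binding_energy(26, 56)
print(f"binding energy per nucleon: {b / 56:.2f} MeV")  # about 8.8 MeV
```

The formula reproduces the measured curve of binding energy per nucleon reasonably well for medium and heavy nuclei; the magic-number deviations it misses are exactly what the shell model supplies.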

Incidentally, it’s fairly easy to demonstrate that elements have different isotopes, particularly chlorine, which is unusual in having two common stable forms, 35 and 37, of which the former comprises roughly three quarters of stable chlorine atoms and the latter one quarter. Careful measurement of the weights of ordinary table salt in its reactions in solution with other substances such as sodium and potassium hydroxide reveals that the proportions involved don’t correspond to whole numbers. This can be demonstrated with ordinary household chemicals if you use large enough amounts and measure them precisely enough. It doesn’t require sophisticated equipment or hard-to-understand calculations. It could even come into making soap, particularly if the fatty acids have relatively short chains, such as the ones abundant in coconut or palm oil. The very easiest would be from certain substances in goat’s milk, though that wouldn’t be vegan, and coconut and palm are also ethically questionable.
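The chlorine figures work out like this (the isotopic masses and natural abundances are standard reference values, assumed here):

```python
# chlorine-35 and chlorine-37: (atomic mass in u, natural abundance)
isotopes = [(34.969, 0.7577), (36.966, 0.2423)]

# Weighted average over the two stable isotopes
average_mass = sum(mass * abundance for mass, abundance in isotopes)
print(f"average atomic mass of chlorine: {average_mass:.2f} u")
# Conspicuously not a whole number, which careful weighing can reveal
```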

One of the consequences of these forces and factors is the pattern of stability and instability in the periodic table. There’s at least one stable isotope of each of the first forty-two elements, then technetium appears to throw a spanner in the works. When the periodic table was first compiled, a number of gaps became apparent. Sometimes this was just because the element concerned hadn’t been discovered but wasn’t particularly unusual, as with gallium, although gallium is quite an unusual metal. However, in the case of element 43, it just seemed to be missing, and wasn’t officially discovered until the 1930s. By contrast, gallium had been discovered in 1875 and the actual metal was obtained a year later. It turns out that there is no stable atomic nucleus with forty-three protons; the element now known as technetium is universally, and usefully, radioactive. The same is true of the rare earth metal promethium, whose atomic number is sixty-one but which has stable neighbours either side of it; in fact all the other rare earth metals have stable isotopes. The heaviest element with a stable isotope was long thought to be bismuth, whose atomic number is eighty-three, although its one “stable” isotope, bismuth-209, has since been found to be very faintly radioactive, with a half-life vastly longer than the age of the Universe. Bismuth and gallium share the unusual property of expanding on freezing, a very rare property crucially also true of water, which is another factor in making life of our kind possible in the Universe. Above bismuth lie polonium, astatine, radon and francium, all of which are exotic in various ways. Polonium is one of the most toxic elements known, entirely down to its radioactivity. Astatine, along with tennessine, is a radioactive halogen, like chlorine and iodine. At any given time there are only about twenty-five grammes of astatine in this planet’s crust. Radon is fairly abundant and the heaviest known gaseous element. Finally, francium is a radioactive alkali metal, slightly more common than astatine at about thirty grammes.
Francium and astatine don’t really have meaningful chemistry, partly because they’re so rare and partly because they’re too radioactive to be cool enough to have predictable reactions. It should probably also be mentioned that there’s another series of elements like the rare earths which are all too familiar and are again quite radioactive. These are the actinides, which include uranium and plutonium. One other element is worth mentioning here, although its situation is kind of the reverse of the others’. This is tungsten, which has five stable isotopes plus a further one, tungsten-180, with a half-life of two thousand million æons. The others can also decay but are around a thousand times more stable and have never been observed doing so. Tungsten is also the only third-row transition element which has a biological role, although it is also usually slightly toxic to animal life, so it’s conceivable that life’s existence depends on its near-stability.

The difficulty is in quantifying exactly how much the relative strengths of the strong nuclear force and electromagnetism could change before this arrangement of stability and instability would also change. The shell model would be unaltered by this, but the liquid-drop model would have different parameters if these were different. It’s tempting just to base everything around the inverse square law and the volume of the nucleus, but this wouldn’t take the surface tension factor into account, for instance. However, if the inverse square law were all that was required, the calculations look like this. For there to be a stable isotope of astatine, the strong nuclear force would only need to be relatively stronger by just enough to hold together a nucleus slightly larger than bismuth’s. The most stable isotope of bismuth is 209, and the most stable isotope of astatine is 210. If these atomic nuclei are assumed to be spherical, which they aren’t because spherical nucleons don’t pack into a perfect sphere, and assuming the inverse square law is the only significant factor, a bismuth nucleus could be thought of as consisting of 209 units of volume each corresponding to a nucleon, and an astatine one of 210, which is about 0.5% larger by volume, or about 0.16% larger in radius. This would suggest the strong nuclear force would only need to be around 0.2% stronger for this to happen, but in fact this figure is inaccurate. To be honest I can’t even work out in which direction. It certainly seems as though there would only need to be a small tweak for there to be stable isotopes of both astatine and francium, and since there are plenty of alkali metal-halogen compounds such as sodium chloride, there nearly is a salt called francium astatide. This would presumably be a white, translucent substance with cubic crystals.
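As a sanity check on that arithmetic, here is the same calculation in a few lines of Python. The one-nucleon-one-unit-of-volume assumption is, as said above, a gross simplification, not real nuclear physics:

```python
# Rough check of the bismuth-209 vs astatine-210 comparison.
# Assumption (from the text): each nucleon contributes one fixed
# unit of volume, so nuclear volume is proportional to mass number A.
A_bi = 209  # most stable isotope of bismuth
A_at = 210  # most stable isotope of astatine

volume_ratio = A_at / A_bi                # relative nuclear volume
radius_ratio = volume_ratio ** (1 / 3)    # radius scales as the cube root

print(f"volume larger by {100 * (volume_ratio - 1):.2f}%")  # ~0.48%
print(f"radius larger by {100 * (radius_ratio - 1):.2f}%")  # ~0.16%
```

The 0.2% figure in the text presumably comes from applying the inverse square law to that radius difference, which is exactly the kind of step the surrounding caveats apply to.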

Technetium is more complicated. Both it and promethium have odd atomic numbers, at forty-three and sixty-one, and odd-numbered elements are both rarer and less stable than even-numbered ones of similar mass. Even-numbered elements often form from the collision of α particles, or of other elements previously formed in that way, and α particles, being helium nuclei, have an atomic number of two, so the even-numbered elements are the more common. This also allows particles to pair off inside the nucleus, again making them stabler. This is only a partial explanation though, because clearly an element like nitrogen or gold has stable isotopes in spite of being odd-numbered. Technetium decays to molybdenum (element forty-two) or ruthenium (element forty-four), both of which are more stable, so the issue is really why it’s less stable than either of those. All elements have radioactive isotopes, but technetium and promethium only have those. Technetium-99 has an odd number of protons but an even number of neutrons, which makes it more stable because of pairing: it has forty-three protons and fifty-six neutrons, making it the most stable isotope of that element. Less stable elements are also supposed to be furthest from the “magic numbers”, which would make the “anti-magic” numbers 5, 6, 14, 24, 25, 39 and 67 (and 103, which would be unstable anyway). Thirty-nine is close to forty-three, but that’s yttrium, which is not notably unstable. A light element without stable isotopes could, however, be expected to have a total number of nucleons close to 103 (which technetium-99 nearly has), an atomic number close to thirty-nine (also true), be odd-numbered (which it is) and have about sixty-seven neutrons, which is somewhat higher than technetium-99’s fifty-six. However, the liquid-drop model only includes approximations for quantum factors, and there’s a more sophisticated method called the Strutinsky Smoothing Method, invented by Vilen Strutinsky, which is more accurate, and can be expressed by this equation:
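For what it’s worth, the usual textbook statement of the Strutinsky shell correction, which may differ in notation from the version originally reproduced here, is:

```latex
E = E_{\mathrm{LDM}} + \delta E_{\mathrm{shell}}, \qquad
\delta E_{\mathrm{shell}} = \sum_{i\,\mathrm{occ}} \varepsilon_i
  \;-\; \int_{-\infty}^{\tilde{\lambda}} \varepsilon\, \tilde{g}(\varepsilon)\, \mathrm{d}\varepsilon
```

Here \(E_{\mathrm{LDM}}\) is the liquid-drop energy, the \(\varepsilon_i\) are the occupied single-particle energy levels, \(\tilde{g}\) is the smoothed level density and \(\tilde{\lambda}\) the corresponding smoothed Fermi level.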

And yes, I know I haven’t bothered to specify what any of that means, but the point is, there is something out there which does seem to predict that there are no stable isotopes of technetium or promethium. Just considering promethium for a moment, this has an atomic number of sixty-one and its most stable isotope has a mass close to one hundred and forty-six, with eighty-five neutrons. At this point, it’s probably worth digressing slightly into my habit of using duodecimal numbers, because this is a good illustration of why it can be a good idea. Stating this in base twelve, promethium has an atomic number of five dozen and one and its most stable isotope has a gross and two nucleons with seven dozen and one neutrons. This shows much more clearly how these numbers are “unbalanced” than the relative mess of the decimal system can, because the occurrence of sloppy-sounding numbers like these is much rarer in the duodecimal, and focussing on these figures immediately suggests there is something up with them. They aren’t neat enough. The same can be done with technetium-99’s three and a half dozen and one protons, four dozen and eight neutrons and eight dozen and three nucleons, although it’s less glaring. Promethium is in fact the least stable of any of the first seven dozen elements, even less so than the notorious polonium.
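Those conversions are easy to verify with a short script. The use of X and E for the digits ten and eleven is one common duodecimal convention, not anything from the text above:

```python
def to_duodecimal(n: int) -> str:
    """Convert a non-negative integer to a base-twelve string,
    using X and E for the digits ten and eleven."""
    digits = "0123456789XE"
    if n == 0:
        return "0"
    out = ""
    while n:
        n, r = divmod(n, 12)
        out = digits[r] + out
    return out

# Promethium: atomic number 61 -> "51" (five dozen and one),
# mass number 146 -> "102" (a gross and two), 85 neutrons -> "71".
# Technetium-99: 43 -> "37", 56 -> "48", 99 -> "83".
for n in (61, 146, 85, 43, 56, 99):
    print(n, "->", to_duodecimal(n))
```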

The question is, then, could a different strength ratio of the strong and electromagnetic forces cause technetium and/or promethium to have at least one stable or metastable isotope? It occurs to me that both of them are in slightly the “wrong” place with regard to nucleon, neutron and proton numbers, which is apparently down to quantum physics in a way I can’t understand, but the classic “bad” numbers would be thirty-nine and sixty-seven, which belong to yttrium and holmium. Holmium has the distinction of being the most boring and useless element in the periodic table according to some chemists, and it’s notable that mentioning its name tends not to ring bells in most people’s minds. It’s just kind of “there”. It is a rare earth metal but not a particularly remarkable one, although it can be used to generate the most powerful artificial magnetic fields possible and can also damp down nuclear reactions, and even that somehow sounds like an example of how boring it is. If holmium were the unstable element instead, there would be technical differences in the design of nuclear power stations and possibly MRI magnets, but that would be about it, though it might no longer be possible to achieve the strongest artificial magnetic fields currently attainable.

Stable promethium is a similar prospect. Most of the uses of promethium depend on its radioactivity, for instance luminous paint and atomic batteries, but again there are alternatives to this, particularly considering that a different rare earth metal would be the radioactive one instead. It would probably have ended up in a phosphor on CRT TV sets and monitors. It would not, however, have the same name.

Yttrium is not a lanthanide, though it is usually counted among the rare earth metals, which it closely resembles. If it turned out not to be stable, it would not have been detected in the sample of rock from the Swedish village of Ytterby, and at least four different elements would have different names: yttrium, terbium, erbium and ytterbium. Since a relatively high-temperature superconductor was discovered as a result of a typo in a formula which confused yttrium and ytterbium, this would have significant scientific and technological consequences. It might also have delayed the discovery of the rare earth metals themselves. The newly discovered blue pigment Oregon Blue (YInMn Blue), which contains yttrium, could not exist.

I’ve mentioned the possible Mandela Effect regarding technetium before. If technetium had not been radioactive, it would be called masurium, because the irreproducible result in the 1920s which appeared to detect it would in that case have been confirmed. The name “technetium” refers to the fact that it’s a manufactured element. Again, the uses of technetium in the actual world are based on its radioactivity. For instance, it’s used as a radioactive tracer in medicine. Stable technetium would be very different. Here’s a logarithmic graph of the relative abundance of the chemical elements in our planet’s crust:

A couple of things are obvious from this graph. One is that the odd-numbered elements are considerably rarer than their even-numbered neighbours, hence the zig-zag. Another is that there is a sudden plunge in abundance after element forty-two, molybdenum, and a gap where technetium, or masurium, ought to be. This seems to predict that technetium would be about as abundant as silver, although it would probably not occur native like that metal, meaning that it would probably be a minor precious metal like rhodium. It’s also significant that the plunge in abundance would be less severe, because there would still be radioactive isotopes of technetium which would break down into ruthenium, an iron-like metal which would therefore also be more common. Technetium has a very high melting point, putting it into the category of metals like tantalum and tungsten, and is a powerful catalyst in the dehydrogenation of isopropanol; I can imagine it being used as an antidote to isopropanol poisoning. It would also be useful in the manufacture of stainless steel. Moreover, since it’s a relatively light element it might have a biological function, as molybdenum does. There are a large number of molybdenum-containing enzymes, and molybdenum may have been essential to the evolution of cells with nuclei, so it’s conceivable that technetium would have a similarly significant rôle. Maybe life has actually struggled with the absence of technetium and would have evolved more quickly if it existed in a stable form, perhaps making complex life more common in the Universe. However, all this is speculation.

Astatine is a case which chemistry examiners are very keen for candidates to consider. The halogens have a regular and particularly predictable set of properties which change as they occur further down the periodic table. Fluorine is a highly reactive and dangerous pale yellow gas, the most reactive element of all. Chlorine is still reactive enough to support combustion and is a green gas with a relatively high boiling point. Bromine is a fuming red liquid and iodine a dark purple shiny non-metallic solid. The colour changes are due to the absorption bands of the atoms moving across the spectrum as the electron shells increase in number. Astatine would therefore probably be a relatively inert black solid which looks like anthracite. Every other halogen has a biological rôle, so it’s possible that we’d require astatine instead of iodine for thyroid hormones. Astatine would be a unique halogen because it would be a metalloid. For example, it would be a fairly good conductor of electricity and might be a semiconductor, making it useful in electronics.

There appears to be a similarly predictable trend down the column for the alkali metals, and of course cæsium is the most reactive of them all. This suggests that francium is even more reactive, but it has also been suggested that it isn’t, apparently because relativistic effects stabilise its outermost electron. The low melting points of all the alkali metals also mean francium could be liquid at room temperature, with a melting point of perhaps 8°C, and it would be about two and a half times as dense as water. Francium has never been observed as an actual lump of metal, or pool for that matter, since its radioactivity generates enough heat to vaporise it instantly.

Between astatine and francium lies the famously radioactive gas radon. This has a boiling point higher than that of any other elemental gas except chlorine, but unlike chlorine it is not very reactive, although by noble gas standards it is. For instance, radon difluoride is entirely feasible, and in fact it does exist, as does radon trioxide, and there are thought to be higher radon fluorides. If it weren’t radioactive, it’s conceivable that it could collect as pockets of liquid in the ice near the south pole. It would be over four times as dense as water.

Polonium poses an interesting issue. If it were stable it wouldn’t be poisonous, and it’s also alleged that tobacco contains trace amounts of polonium which make it more carcinogenic than it otherwise would be, because of the radioactivity rather than any chemical toxicity. If this is true, a world with stable polonium wouldn’t just differ in terms of Aleksandr Litvinenko not being assassinated that way or Marie Curie not dying quite as young, but possibly in altering the fates of millions of people, since tobacco would be marginally less carcinogenic, and this could mean various people lived longer and got to influence the world in untold ways. It would also not be called polonium, as it wouldn’t have been Marie Curie, who named it after her native Poland, who discovered it (and this also means francium would have another name).

The question arises more broadly of whether there would be any other differences. I suspect that a different profile of radioactive elements would influence continental drift. Continental drift occurs because of convection currents in the mantle, heated by radioactive decay. The density of both the mantle and the continents would be slightly different if these stable elements existed, as they would be present in both, and the heating effect would be lower. I think this all adds up to slower continental drift, meaning that the Atlantic Ocean could be narrower, the Mediterranean wider and Australia slightly further south. The differences in world maps reported by those who believe the Mandela Effect is something other than confabulation tend to be in the directions of continental drift, and I find that slightly suspicious. Why would we misremember the locations of land masses consistently with their direction of movement rather than at right angles to it, or in some other way? This is one reason I suspect the Mandela Effect is more than just psychological, eccentric though it makes me. Of course it could also be that a slight difference in the relative strength of electromagnetism and the strong nuclear force would have a chaotic effect and end up producing a universe like this one, full of stars and galaxies, but not the familiar ones, because the early tiny irregularities in the distribution of density in space could be different and later become voids and superclusters which are not the ones we know.

While I’m talking about the sky, I want to make one final observation. There are stars whose spectra show they’re high in technetium even though the element is unstable. It’s been suggested that this might be a signal sent by aliens, or at least evidence of alien technology tampering with stellar evolution. It occurs to me that although we generally believe the laws of physics are uniform throughout the Universe, this might mean they aren’t. Maybe there are, after all, planets out there in this universe where these elements are stable, but perhaps also whose laws of physics are incompatible with the survival of the human body, for instance because iodine could be highly radioactive. But the simplest and most boring, and therefore true, explanation is that these stars are high in technetium for some other reason. Nonetheless, there are alternate universes where the elements I mention are stable.

Our Shadow Twins

There more or less have to be parallel universes because this Universe is “fine-tuned”. The alternative would seem to be to require a Creator, and although there is a Creator, or rather a Sustainer because God is not within time, nothing in the Universe should be allowed to imply or suggest that there is one as that would be a “God Of The Gaps”.

I should probably explain fine tuning. There are certain constants governing the relative strengths of the four known forces in the Universe which, if they varied even slightly, would make rocky planets and life as we know it impossible. Examples are as follows:

  • Electromagnetism is a sextillion (long scale) times stronger than gravity. If it were much smaller, the Universe would have collapsed in on itself before the stars could have formed.
  • When hydrogen nuclei fuse, via deuterium, to form stable helium-4, the nucleus loses 0.7% of its mass. If the figure were 0.6%, only hydrogen would exist, and if it were 0.8%, all the nuclei in the Universe would’ve fused together within a fraction of a second of the Big Bang and there would be no atomic matter at all. That said, that is quite a large range, determined by the strong nuclear force.
  • If dark energy were slightly stronger compared to gravity, stars would not be able to form because they’d be ripped apart by the expansion of space. If it were slightly weaker, the Universe would’ve collapsed by now.
  • If other than three spatial dimensions were extensive (there are others, which are however very small and so don’t influence this), there would be problems with the weakening of gravity at a given distance which would again either cause collapse or make it impossible for stars to form.
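The helium figure in the list above, often quoted as Martin Rees’s ε = 0.007, can be checked directly from standard atomic masses; the arithmetic here is mine:

```python
# Fraction of mass released when four hydrogen atoms end up as one
# helium-4 atom.  Atomic masses in unified atomic mass units, from
# standard tables.
m_h1 = 1.007825   # hydrogen-1
m_he4 = 4.002602  # helium-4

deficit = 4 * m_h1 - m_he4
fraction = deficit / (4 * m_h1)
print(f"{100 * fraction:.2f}% of the mass is converted to energy")  # ~0.71%
```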

There are several other examples, but taking these together is enough to illustrate the issue, because the improbabilities multiply, and some of them even seem to be part of an infinite range of possibilities, usually very boring ones, because they either involve the Universe collapsing in on itself almost immediately after the Big Bang or merely consisting of hydrogen atoms thinly spread throughout space. The situation we actually find is fantastically improbable because of this. It’s also been suggested that the specific existence of the element carbon is suspiciously unlikely, and water is also such an unusual compound that it too is unlikely, but the details of these once again involve the strength of the strong nuclear force in the case of carbon and that of electromagnetism in the case of water. There is presumably a version of water in a parallel universe which is still H2O but is a gas at well below its current freezing point, contracts when it freezes, is not a good solvent and so forth. In fact probably most versions of water are like that. Likewise, and I admit this is a very sloppy calculation because several different forces are involved in holding atomic nuclei together and they don’t obey the inverse square law, just as bismuth is currently the last element to have stable isotopes, a strong nuclear force forty percent weaker would move that limit below carbon: stable carbon atoms could not exist and carbon-based life would be impossible.

Because of all this stuff, some theistic religious people believe that there must be a God. However, there’s a problem, or rather several problems, with that argument. Firstly, even if it does entail a creator, it fails to entail a God like the one in the Bible, Qur’an or whatever. Secondly, the Universe which actually exists is almost completely empty and life seems to be a mere detail, possibly on only one planet, and even if widespread it would still only have come into existence on tiny grains in a vast void. In fact, this almost completely empty void may be a clue to the nature of reality. What we’re confronted with when we look into the night sky is unimaginably enormous distances between stars, whose visible examples are unsuitable for life as we know it, organised into galaxies which are separated by relatively much smaller distances and organised into clusters forming a kind of “foamy” arrangement around enormous voids like bubbles. Only occasionally are the conditions suitable for the concentration of nuclear matter, and even more seldom do rocky globes form. When we consider Earth, we realise how special she is, but that exceptional nature is contingent on the fact that we are here in the first place to do the considering. The anthropic principle says the same is true of the Universe: there are plenty of other universes but they don’t have any life or observers in them. Ergo, there are parallel universes. The alternatives seem to be enforced belief in a Creator with a capital C or a multiverse, and that multiverse would likewise consist almost entirely of empty universes which have either already ceased to exist or contain only widely spaced hydrogen atoms and perhaps molecules floating in otherwise empty space. Although I’m a theist, I choose the latter.

The question then arises of how inevitable anything is. Alternate history usually depends on PODs, Points of Divergence, such as Hitler dying before coming to power or JFK not being assassinated, and from a macroscopic level it seems entirely plausible that Henry Tandey could have decided to shoot Hitler on 28th September 1918 or that Lee Harvey Oswald missed his target on 22nd November 1963. But in fact these PODs are only apparent. Free will is probably illusory, there’s a whole chain of unknown events influencing those moments and for all we know that chain of cause and effect stretches all the way back to the beginning of the Universe. It will undoubtedly be the case that slight variations in physical constants do indeed lead to differences in the universe, but what we imagine is easily possible could turn out to be completely impossible. The question of whether this is true depends on chaos theory and quantum physics.

I’ll take chaos theory first. This is the whole butterfly effect thing. Edward Lorenz famously found in the early 1960s that computer programs written to forecast the weather gave completely different results depending on how many decimal places the input data were rounded to. Given the very many decimal places involved before one hits the Planck length, Planck time and so forth, which amount to the fixed “resolution” of the Universe, and which couldn’t be measured anyway because the sheer number of perfect instruments required would themselves nudge the weather in a particular direction, there seems to be only a weak connection between cause and effect, and for all we know, as David Hume asserted, none at all. If science is supposed to be based only on what can be observed, cause and effect can’t be, and therefore it’s problematic to include it in science at all, which rather undermines the whole of science. That said, it does still seem that in principle cause and effect often operate deterministically: you can’t usually expect to jump off the roof of a skyscraper and not fall to your very probable death. Maybe the improbabilities are smoothed out by the arbitrary nature of the universe on a small scale. I don’t think chaos theory is a very promising reason to posit that things could have been different.
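The same sensitivity can be demonstrated without a weather model at all, for instance with the chaotic logistic map. This is my toy example, not Lorenz’s actual system:

```python
# Two trajectories of the chaotic logistic map x -> 4x(1-x),
# started a hair's breadth apart, soon bear no resemblance to
# each other: the essence of the butterfly effect.
def trajectory(x, steps):
    out = []
    for _ in range(steps):
        x = 4 * x * (1 - x)
        out.append(x)
    return out

a = trajectory(0.2, 60)
b = trajectory(0.2 + 1e-10, 60)  # differs only in the tenth decimal place

for step in (10, 30, 50):
    print(step, abs(a[step] - b[step]))
```

The tiny initial difference roughly doubles with every iteration, so within a few dozen steps the two runs are completely decorrelated.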

Quantum mechanics is another matter entirely. There are no hidden variables: if a radioactive atom is observed, there is no way to predict when it will decay into an atom of another element, and that isn’t just because we’re unable to observe processes going on at a sufficiently small scale, but because there simply is no causal chain involved at all. All that can be done is to predict that half of a sample of carbon-14, for example, will have decayed in 5 730 years, give or take forty years, and that prediction only becomes accurate as the size of the sample increases. These are acausal processes. There is absolutely no chain of events, other than the formation of the atom, leading up to its destruction. It just happens.
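That statistical character is easy to simulate. The sketch below uses a pseudorandom number generator, which real decay of course does not; the point is only that the half-life prediction gets sharper as the sample grows:

```python
import random

HALF_LIFE = 5730  # carbon-14, in years

def surviving(n_atoms, years, rng):
    """Simulate how many of n_atoms survive after the given time.
    Each atom decays (or not) independently of all the others."""
    p_survive = 0.5 ** (years / HALF_LIFE)
    return sum(rng.random() < p_survive for _ in range(n_atoms))

rng = random.Random(42)
for n in (10, 1000, 100_000):
    left = surviving(n, HALF_LIFE, rng)
    print(f"{n} atoms: {left} left after one half-life ({left / n:.1%})")
```

With ten atoms the surviving fraction bounces around wildly from run to run; with a hundred thousand it is pinned very close to fifty percent.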

Hence there are two contrary factors involved in the nature of parallel universes. On the one hand, there is the causal chain stretching back to the Big Bang, and on the other there are acausal events associated with quantum events. The question then arises of whether the Big Bang itself, or its immediate aftermath, was strongly associated with such events. It could be that things have always been different or that all significant events in our own history can be traced back to quantum events after the beginning of the Universe. All that can be said confidently is that if a known chain of events can be traced back to a quantum event, there are parallel universes where this turned out differently.

A fairly trivial example is the issue of the discoveries of technetium/masurium and astatine/alabamine. The actual names of these elements are technetium and astatine; neither has stable isotopes. The reason their names could have been different is that they weren’t discovered for sure when they were first apparently identified. In 1925, German chemists bombarded a mineral called columbite with an electron beam and appeared to detect a faint X-ray signature of what would be element 43. Later researchers, however, could not replicate this experiment, and consequently, although it was named masurium, it was still considered undiscovered. If, by chance, more atoms of the element in the sample had survived undecayed when the result was being reproduced, element 43 would have been confirmed and would have kept the name masurium. Likewise with alabamine: scientists at Alabama Polytechnic believed in 1931 that they had discovered the missing halogen which belonged under iodine in the periodic table, but their method was found to be invalid. The case of alabamine is slightly different, which I’ll go into in a moment. But because of the method of its discovery, there undoubtedly is a parallel universe in which technetium is called “masurium”, after Masuria, which is a real place.

The case of astatine is slightly different. Astatine is only a couple of nucleons too heavy to be a stable element. Using the same rough and ready calculations as I did with carbon, for there to be a stable isotope of astatine the strong nuclear force would only have to be 0.08% stronger than it is. This may be the wrong figure but the principle is the same: it would only have to be a hairsbreadth stronger than it is “here” in our timeline for stable astatine to exist. In such a situation, polonium would also have a stable isotope and therefore would be less dangerous and would not have been used to poison Aleksandr Litvinenko. This, however, is a minor detail because probably it would just mean francium would’ve been used instead.

The two scenarios therefore represent two different ways alternate histories could happen. In one, the Universe has been different since the Big Bang, astatine is a stable element and Litvinenko was poisoned using francium instead of polonium. In the other, the timeline forked from ours in 1925, and that world is probably practically identical to our own except that technetium is called masurium.

This brings me to the Mandela Effect. Nowadays, most people seem to have reached the conclusion that the Mandela Effect is only accepted by cranks, and I would agree that there’s a lot of noise around the signal, but in the masurium/technetium example we have a real live Mandela Effect, present in the scientific community, which pivots on an acausal process. This is inside the establishment, although it looks very different from a typical ME. For this reason, I will continue to maintain that parallel timelines are a valid explanation for some MEs. That’s it: that’s all I’m going to say about this for now, because I know it’s generally considered crazy and you’re going to think I’ve gone to Nubicuculia if I go on.

There have been attempts to set up quantum lotteries. Although the technology works, as far as I know there are no serious lotteries using this principle. This is a pity, because if there were, they’d amount to real forks in history set off by quantum events. As it stands, the only examples I can think of which involve genuine quantum forks other than masurium/technetium are very improbable, although there are guaranteed to be timelines where they happened. For instance, radioactivity was first discovered when Henri Becquerel left some uranium salts next to a photographic plate in a drawer and discovered the plate had been fogged. If this hadn’t happened, radioactivity would have taken longer to be discovered. However, the only way in which that could have happened is if the number of atoms decaying was so small that it wasn’t enough to influence the emulsion on the plate, and considering the amount of substance involved, that’s very improbable. That said, somewhere out there such a timeline does exist. There’s presumably a timeline where radioactivity has yet to be discovered, which would leave a lot of mysteries about the Universe, such as how stars work or how old this planet is. There would be no radiotherapy, the Second World War would not have ended in the way it did, there would be no atomic batteries or nuclear power stations, no Cold War and so on. It is a fantastically improbable universe, but it does exist out there somewhere, and is a very different world. Even the people who live in it don’t understand it, because a big piece of the puzzle is missing. However, radioactivity can be discovered at any time: history is teetering on a knife edge in that world.
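To put a number on how improbable an unfogged plate would be: natural uranium has a specific activity of very roughly 25,000 decays per second per gramme, and even the extreme case of literally zero decays over a day follows from the Poisson distribution. The one-gramme sample and the single day in the drawer are my assumptions, not anything recorded about Becquerel’s actual experiment:

```python
import math

ACTIVITY = 25_000   # decays per second per gramme of natural uranium (approx.)
grams = 1.0         # assumed sample size
seconds = 86_400    # one day in the drawer

expected_decays = ACTIVITY * grams * seconds
# Poisson probability of zero decays is exp(-lambda), which is far
# too small for a float, so report its base-ten logarithm instead.
log10_p_zero = -expected_decays * math.log10(math.e)
print(f"expected decays: {expected_decays:.2e}")
print(f"P(no decays at all) ~ 10^{log10_p_zero:.0f}")
```

“Fantastically improbable” is, if anything, an understatement: the exponent comes out in the hundreds of millions.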

The question now arises of who we are. If a POD has occurred after our conception in a parallel universe, are we the same people there? My ME explanation requires transworld identity, because I believe memories are transferred between universes when the brain is in an unusual state such as a stroke, seizure or coma. Transworld identity is the belief that an object can exist in more than one possible world, including the actual world (and here the word “actual” really just means “this”, so “actual world” means “here”). The alternative theory is that counterparts exist in other possible worlds but that they’re not the same object; David Lewis held this view, for example. It’s feasible that most people would hold that one is the same person if a POD takes place after conception, or perhaps birth, rather than before it. Those who believe in the transmigration of souls would almost certainly hold that it doesn’t require a POD to take place that late, because they already claim that someone can be the same person living a life in another time and place. If they also accepted that karma existed, different circumstances surrounding conception might lead to that soul entering a different body, and this could mean that the “same” person could be different in many ways in another possible world, being born in the Congo rather than Canada, in the rainforest rather than Vancouver, and so forth. This is someone else’s belief system rather than mine.

Even so, I do have something in common with people who believe in reincarnation: I don’t actually believe personal identity depends on karyotype. Here’s why. If it turns out that someone has a genetic disorder, they and the people close to them would tend to wish that they had never acquired that disorder rather than wishing they were someone else. These are two different things. Therefore, we don’t identify with our genes and our identity doesn’t depend on having been conceived in a particular way. Nor does it depend on the specific substance of our bodies, because if our parents, particularly our pregnant mothers, had eaten a different diet (such as the potatoes on one side of the field rather than the other, not miso instead of yeast extract or something), it wouldn’t make us different people unless it had a major influence on our development, and possibly not even then. What does that leave? There is no soul, so it isn’t that. Nor is it our genes. Nor is it the substance of our bodies. The answer, I think, is that we are socially defined, both passively and actively. In one sense we are the “software” running on the “hardware” of our bodies, although the metaphor of the brain as computer shouldn’t be pushed too far and it’s important to be aware that other parts of our bodies, such as the endocrine system and the nerves in our digestive system, also form a supervenience base for our psyches. It’s difficult to know how close our brains are to computers and how relevant this is to our identities. In another sense, we are externally defined. For instance, we have the legal concept of “next of kin”, which formalises a custom which already exists in social life: we are siblings, offspring, parents and so forth. 
Therefore, in a parallel universe, a child whose genetic makeup is rather different from this one’s, who has a different temperament and so on, could still be the eldest daughter, have the same name, the same birthdate and so forth, and is arguably the same person. In particular, she might not have the leukæmia which killed her in another universe, because at no point was that leukæmia something anyone in the family owned psychologically: it was a disease attacking her, an outside enemy. I presume this is how many people with cancer approach their illness, but maybe I’m wrong. But that disease could be in her genome.

I don’t know enough detail about how ionising radiation interacts with DNA to be sure about this, and I should probably know more, but I would expect cosmic rays, which are nuclei and protons raining down onto Earth’s surface at near-light speed, to be to some extent the product of nuclear decay and to some degree interact with the molecules in question in such a way as to change the isotope of specific atoms. The existence of radiation in the environment on this planet, whether or not it results from human activity, would certainly be non-deterministic in nature, although the actual presence of that radiation is only technically not so. That is, it’s possible for a scenario as described above with Rutherford’s pitchblende failing to be sufficiently radioactive to influence his photographic plate to occur, but its probability is infinitesimal. Hence there is an element of pure luck involved in mutation which means that it is possible for minor phenotypical differences between members of the same species in parallel worlds to occur, though only to the extent that this doesn’t influence their fitness to survive, although this does also mean there are extinctions which occurred in one world but not another. However, there is another aspect to identity which suggests the “shadow people” I referred to in the title.

It’s widely known that ordinary human body cells carry their chromosomes in pairs, and that this double set is reduced to a single set in gametes via meiosis:

[Diagram: “Overview of Meiosis” by Rdbickel, own work, 20 June 2016 – slightly cropped]

It should be noted that the four daughter nuclei in this process are complementary to each other. The one at the top is a perfect counterpart to the one at the bottom and the two in the middle are counterparts of each other. Therefore, for either of the gametes which led to the cell line associated with who we are, there is a complementary alternative. This means there are at least four possible versions of each of us, even assuming the copying process goes without a hitch, which incidentally it never does. For instance, for a White blue-eyed fair-haired child whose mother is White with brown eyes and dark hair and whose father is White with blue eyes and fair hair, there is another potential version who is perfectly complementary, and two more versions who are partly complementary, because different gametes united. These gametes will have existed at some point, and they might even produce a viable child in the case of fraternal twins. These complementary people probably do occasionally exist in the same world. I would estimate that this occurs in about one pair of fraternal twins in every 500 million. Since in a population of eight thousand million there are around 350 million twins, there’s an even chance that somewhere out there today, this situation exists, and there have probably been about six or seven such pairs in the whole of human history, which by the way emphasises the fact that there are a lot of people around today. But in any case, we all have these shadow people, which brings me to the illustration at the top of the post.
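The back-of-envelope arithmetic here can be sketched in a few lines. Bear in mind that the one-in-500-million rate is my own guess, and the hundred thousand million humans ever to have lived is a commonly quoted rough figure, so this is an order-of-magnitude exercise rather than a calculation:

```python
# Back-of-envelope check of the shadow-twin estimates above.
# The 1-in-500-million rate and the twin counts are the rough
# figures from the text, not measured values.

rate = 1 / 500e6            # chance a fraternal twin pair is complementary
twins_alive = 350e6         # twins alive today (rough figure from the text)
pairs_alive = twins_alive / 2

expected_alive = pairs_alive * rate
print(f"expected complementary pairs alive now: {expected_alive:.2f}")

# Roughly 100 thousand million humans have ever lived; if twinning
# rates were broadly similar throughout, scale up proportionally
# (a very crude assumption).
humans_ever = 100e9
population_now = 8e9
expected_ever = expected_alive * humans_ever / population_now
print(f"expected pairs in all of history: {expected_ever:.1f}")
```

This comes out at roughly a one-in-three chance of a pair existing today and around four pairs in all of history, which is in the same ballpark as my guesses above, and that’s as much as can be hoped for from numbers this rough.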

This is a fairly famous gender-swapped version of post-war Prime Ministers of the United Kingdom, which notably has only two men because there have only been two female PMs. The counterparts in question here would usually have different karyotypes. That is, if you are yourself XX, your shadow twin would be XY and therefore usually male. The main situation where they wouldn’t be, incidentally, is complete androgen insensitivity – this is not about trans issues at all right now. However, although we do tend to focus quite strongly on gender as part of identity, there would also be lots of other traits which would differ. We have two children, one of whom resembles one parent quite closely and the other of whom resembles the other. I presume this is because dominant traits from one gamete are more strongly expressed in one than the other. Their shadow twins would be the other way round, which means that they would look very like their siblings, just in a different birth order. Their eyes would also be a different colour. My own shadow twin would still have blue eyes, but also straighter hair. I say that, but the traits popularly said to be inherited via single alleles, such as eye colour, often aren’t. There’s also another sex-related issue. Two of the intersex conditions are referred to as Klinefelter’s and Turner Syndrome. The former is XXY and the latter just one X chromosome with no counterpart. These two conditions are therefore complementary, and a Turner person’s shadow twin would be XXY and vice versa. There’s also chimerism. Some people would be reverse chimeras of their twins: for instance, they would be largely cell line A with some of cell line B, but their shadow twin would be largely cell line B with some of cell line A.

It’s also true that each generation of a lineage realises only one in four of these potential individuals. This means that there are also sixteen possible pairs of parents involved, and the number rapidly becomes extremely large. This brings home how unlikely it is that any of us were ever born. Just focussing on the perfect complements, the probability that every person in the world today was their shadow twin is of the order of four to the power of eight thousand million to one. Although this is very improbable, it’s far more likely than the situation I described with the discovery or otherwise of radioactivity.
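To get a feel for how large four to the power of eight thousand million actually is, logarithms help, since the number itself is far too big to write out:

```python
import math

# "Four to the power of eight thousand million to one":
# 4**n has about n * log10(4) decimal digits, so the order of
# magnitude follows directly from a logarithm.

n = 8e9                      # eight thousand million people
digits = n * math.log10(4)
print(f"4**(8e9) is roughly 10 to the power {digits:.3g}")
```

So the odds against are around one in ten to the power of nearly five thousand million – unimaginably improbable, yet, as I say, still likelier than the radioactivity scenario.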

At this point it becomes clear that there is an issue with the nature of probability. Rutherford’s discovery is genuinely probabilistic and acausal. It could “just happen”, and there’s no need for an explanation. It isn’t so clear that the shadow twin situation could simply happen, because there definitely seems to be a deterministic thread running through the whole of meiosis and fertilisation. This raises the question of the nature of probability. Probability is sometimes seen as simply a measure of the frequency of occurrences: for example, a coin comes up heads half the time and tails half the time, so it has a 50/50 probability of coming up either way. This is an empirical approach, as it’s simply based on observation. The other approach is based on rational degree of belief. For all we know, a coin tossed on a particular occasion might come down heads or it might come down tails, and there is no known reason to prefer one outcome over the other. However, there is in fact a cause, each time, for it landing the way it does, presumably to do with how forcefully it was flipped, the angle, air currents and tiny differences between individual coins which make them slightly unfair. For instance, I believe it’s slightly more likely that a coin will land heads up because I think the tails side is slightly heavier and will tend to weigh the coin down, and I tested this once and found the coin I was tossing came up heads sixty-four times out of a hundred. This helps confirm the hypothesis but doesn’t prove it. Ultimately, there may be two kinds of probability, one deterministic and one not, but the deterministic version of probability could stem from the initial conditions of the Big Bang and therefore not be ultimately so. Incidentally, using possible worlds semantics makes it difficult to use certain terminology. For instance, the word “probably” then comes to mean “in most possible worlds”, in other words something like “usually”. 
This gets confusing when referring to the theory itself. For instance, I can’t say “most parallel universes have always been separate” because I would then be effectively saying “in most possible worlds, most possible worlds have always been separate”. It could even be that this leads to a contradiction which refutes the theory of parallel universes, and that’s pretty serious because it starts to look like proof for the existence of some kind of First Cause and supports theism or deism to a limited extent.
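Going back to my coin for a moment, the sixty-four-heads result can be weighed against the fair-coin hypothesis with an exact binomial tail probability, using nothing beyond Python’s standard library:

```python
from math import comb

# Probability of seeing at least 64 heads in 100 tosses of a FAIR coin.
# If this tail probability is small, the observation counts as evidence
# against fairness -- evidence, not proof.

n, k = 100, 64
p_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"P(at least {k} heads | fair coin) = {p_tail:.4f}")
```

The tail probability comes out well under one percent, which is decent evidence that the coin was biased, though it could still just have been a fluke.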

I am now going to make one of these odd-sounding statements. Namely, “it’s possible that shadow twins exist in other universes”. This could be expanded as saying “there are some possible worlds in which there are some possible worlds where there are shadow twins.” This sounds peculiar, and makes it sound like there are two levels of possible worlds, the higher level being a vast number of arrays, each containing a further vast number of parallel universes. Using the “rational degree of belief” view of probability, this can be restated as “for all I know, there exist possible worlds where shadow twins exist”. If this is so, it’s possible to imagine the following situation. There is a parallel universe where every representative of a final generation of humans is their shadow twin. In fact there would be several. This uses the criterion of childlessness to select the set of people involved. There’s also the question of whichever cohort includes you. You have a shadow twin, and depending on whether you have descendants you are either in the final generation or one of its recent predecessors.

Getting back to the prime minister picture, these are not photographs of a common type of parallel universe. Not only would the individuals concerned look different besides their gender, and also probably have different personalities, but also these are photographs from a matriarchal society, and quite an odd one at that because the political system of the United Kingdom is otherwise very similar, with Eton, Oxbridge and so forth putting these people in the same positions. In reality, most of the people depicted in the picture would not have become Prime Minister at all because they would have different histories based on their gender. This picture asks us to believe that a woman, Winston Churchill’s shadow twin, would have become PM in 1940, only twenty-two years after Constance Markievicz, which is hard to imagine. Their lives would probably have been much more like those of their sisters, assuming they had any, than their lives in this world. The idea of shadow twins constitutes an interesting thought experiment regarding the nature of gender roles and the patriarchy.

Finally, I’m going to revisit the fringe theory of the Mandela Effect. If there really are shadow twins who are to some extent a sex- or gender-swapped version of oneself in parallel universes, this could sometimes have an interesting consequence which is similar to the idea of a soul of one gender in the body of another as an explanation for gender identity issues. My explanation for hardcore MEs is that individual experiences and memories occasionally get transferred into brains in parallel universes when the brain enters an unusual state. If this happened often enough with a shadow twin, the person concerned could conceivably end up with a different gender identity. However, this suggests that we all go around constantly thinking to ourselves something like “I am a man opening this door” or “I am a woman picking this apple”, when of course we do nothing of the sort. Also, it’s quite an outlandish explanation compared to something much simpler and more easily testable such as chimerism or CAG repeat sequences on the AR gene. Hence I’m going to put that out there, note its similarity to the dubious idea that there are not only souls but also that those souls are gendered, and acknowledge that believing in non-psychological explanations of MEs at all is widely considered dubious. But I do wonder sometimes.

“What Is The Universe Expanding Into?”

Steve, I wrote this with you in mind.

Yahoo Answers is, as I mentioned previously, about to die, although it’s a death by a thousand cuts. In the past I’ve used this blog to put more thoroughly thought-out answers to frequently-asked questions on the site, so I’ve probably addressed this before, but right now I have a different and perhaps less dogmatic take on this question than I usually adopt. Before I go on, I should probably insert the standard diagram people put in nowadays when talking about the Big Bang:

Strictly speaking, this diagram is inaccurate because it shows a two-dimensional projection of a three-dimensional model of a four-dimensional set of circumstances. Take the barred spiral galaxy at top right. If the X-axis is supposed to be time, we should be concluding that the left hand arm of that galaxy happens first, then the end of the right hand arm and the nucleus, and finally the middle of the right hand arm. Also, space is two-dimensional in this picture when for most practical large-scale purposes it really has three dimensions. In other words, this isn’t so much a diagram as an illustration intended to communicate the history of the Universe since the Big Bang. You can’t take it too seriously. It has an artistic, creative aspect.

One feature of this diagram which is possibly inaccurate – though it isn’t really intended to be that accurate – is the way it shows space. It’s a black rectangle into which the Universe is expanding. There is an outside to this Universe, and at that point you’d be forgiven for asking, if the Universe is everything, what’s the blackness outside it supposed to be? Why is that not also the Universe? The Jains, of all people, had an answer to this. They believed that the Universe as we know it was suffused with a substance which made movement possible, but was surrounded by infinite space from which this was absent. Nowadays, maybe we could do something similar with the idea of dark energy, the apparent force which causes the Universe’s expansion to accelerate. The above picture has a literal “bell end”. It flares out rather than widening steadily or perhaps slowing down from left to right. This is the influence of dark energy, as it represents accelerating expansion. I suppose it’s possible to think of the Universe as infinite space with at least one region where dark energy is active. However, this is neither how I think of it nor, as far as I know, the way scientists do.

Before I go on, I want to make a point about the nature of science at this scale. In certain circumstances, rational thought is “bigger” than science. Maths is one example of that. There’s plenty of pure mathematics which seems to have no practical application, and even applicable maths doesn’t need to be tested by observation if the proof is sound. For instance, it’s a mathematical truth – the “hairy ball theorem” – that any roughly spherical planet covered by an atmosphere must have at least one point on its surface where there’s no wind at any moment, although that point may move. However, our oceans needn’t have any points where there’s no current because there’s land on this planet. Likewise, a doughnut-shaped planet needn’t have any such locations, nor need any planet with at least two mountains high enough to stick up into the stratosphere. There’s no need to observe any planets to prove this because it’s a mathematical fact. I’m not entirely sure about this, but I suspect that cosmology may also have aspects of this: it may not be possible to approach the nature of the Universe entirely scientifically because there’s by definition only one example of the Universe and it can’t be compared to others. This is a particular view of the nature of the Universe which either includes the Multiverse as part of the Universe or in some way demonstrates that this Universe is all there is. There are a number of conceivable ways in which there could be other universes, but some of the arguments for them not only rely on logic and maths but also require that they cannot be observed even in principle. For this reason, without disrespecting the field, there’s a way in which cosmology cannot be scientific. James Muirden once said:

The Universe is a dangerous place – a sort of abstract wilderness embracing the worlds of physics, astronomy, metaphysics, biology and theology. These all subscribe to the super-world of cosmology, to which students of these various sciences can contribute. Strictly speaking there is no such person as a ‘cosmologist,’ for the simple reason that nobody can be physicist, astronomer, metaphysicist, biologist and theologian at the same time.

James Muirden, ‘The Handbook Of Astronomy’, 1964.

It isn’t clear though whether something which is outside the realm of science will always remain there, and in this view, it may be that there’s not in principle something imponderable about cosmology if the mind pondering it is sufficiently powerful, but simply that the span of disciplines is too broad for anyone to grasp. There certainly seem to be cosmologists nowadays, but maybe they’re cosmologians.

Although I don’t want to dwell on that, I do want to point out that it isn’t immediately obvious what space and time are. The nature of space in particular seems to depend on observation. It’s possible to doubt the existence of space but not the passage of time, since as far as we know we are disembodied viewpoints imagining the world but we can only do that imagining if time passes. This is in spite of the fact that spacetime is unified, so it isn’t clear how we’re immediately confronted with time but not space. Maybe there are more advanced minds in the Universe who experience both with the same immediacy. But there are, in any case, at least two different ways of thinking of space and this is what I usually based my answer on.

Space can be thought of as a thing or a relationship. That is, it could be understood as a container, as it were, in which objects are located, but also an object in itself. The Universe clearly is an object, but that doesn’t mean it’s made of space and studded with galaxies like spotted dick. There is a famous “balloon” analogy applied to space, which views the galaxies as spots on the surface which move apart from each other as the balloon inflates. This makes it sound like there’s a hyperspace into which the Universe is expanding, but this may not be the case.

In maths and physics, the concept of space is often used to make arcane ideas simpler. For instance, up, down, top and bottom quarks seem to refer to direction and location, but of course they don’t. They’re just called that to indicate that they are related to each other more closely than they are to other quarks. Likewise, we might talk about the temperature rising and falling, but that doesn’t mean there’s a spatial dimension called temperature. This can even be taken into the realm of space itself. We impose the idea of several dimensions on the idea of direction and temporal precedence, but there are reasons to suppose that this is mere convenience.

Suppose space is an actual thing. What would happen if there was a tear in it? It would surely mean that one could go into that tear, wouldn’t it? But how could that happen if there was no space there, since it’s torn? Does it mean anything to say that you can take a one metre sphere out of space? What happens when you move “into” it? How would it be different from a point? This suggests that there’s a flaw in thinking of space as the fabric of the Universe.

Consequently, space can be thought of as a combination of direction and location. Location can be described, more or less, using three numbers, although since there are higher dimensions this doesn’t work perfectly. It is, however, true that relative to one’s current position a list of numbers is sufficient to describe where something else is. This tells you how far away something else is and in what direction. However, there is no absolute position. The Universe has no centre, or its centre is everywhere. This would also be true if space were infinite, but it isn’t. However, as I’ve just said, space cannot have an outside, so how can this be?

The answer is that there is a maximum distance between two points, after which the direction between them reverses. This follows from the fact that the parallel postulate is incorrect: parallel lines do in fact meet at an enormous distance in most circumstances, and nearer than that in special circumstances to do with extremely high gravity. These are just properties of that group of qualities we refer to as space or spacetime, in a similar sense to addition working the same way either way round and subtraction not. When it’s said that space is expanding, all that means is that the maximum possible distance between two locations is increasing. That doesn’t imply that any actual object is expanding. A further clue to this being so is that although it’s impossible to travel faster than light, sufficiently distant objects do recede from each other at superluminal speeds. This would be impossible if space was an object unless the mass of such an object could only be expressed by a number on the complex number plane, but the distance between nearby locations increases at less than the speed of light, at a specific distance at the speed of light and at a greater distance greater than the speed of light. This is impossible for a single object because it would have to have real mass in small quantities, zero mass at the volume of the observable Universe and imaginary mass at greater than that volume. I have to say that’s an interesting set of properties and I’m not sure if it really is impossible.
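The superluminal recession point can be illustrated with Hubble’s law, v = H₀d: recession speed is proportional to distance, so beyond the Hubble distance c/H₀ it exceeds the speed of light without anything moving through space faster than light. The value of H₀ below is a round approximation, since measurements disagree at the few-percent level, so treat this as a sketch:

```python
# Recession speed under Hubble's law, v = H0 * d.  Beyond the Hubble
# distance c/H0 the recession speed exceeds c -- without any object
# moving *through* space faster than light.
# H0 ~ 70 km/s/Mpc is a round approximate value, not a precise one.

H0 = 70.0                 # km/s per megaparsec (approximate)
c = 299792.458            # speed of light, km/s

hubble_distance = c / H0  # Mpc; recession reaches c at this distance
print(f"Hubble distance: about {hubble_distance:.0f} Mpc")

for d in (1000, 4000, 8000):          # distances in megaparsecs
    v = H0 * d
    print(f"d = {d:5d} Mpc -> v = {v / c:.2f} c")
```

Nearby galaxies recede at a small fraction of c, galaxies at around four thousand megaparsecs at roughly c, and more distant ones faster than c, exactly the pattern described above.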

The point is that in this view the Universe has no outside or, in terms of hyperspace, no interior. It clearly does have a three-dimensional interior, but not an interior in terms of a larger set of large dimensions. This account is slightly complicated by the fact that as well as time there are tiny further dimensions, but it usually makes more sense to measure the length of a pencil line than its area.

That’s an expanded version of my usual answer to the question “what is the Universe expanding into?” but it could be wrong. The reason it might be wrong is fascinating, and therefore probably not valid, but here it is anyway: ‘Brane Theory.

You might think at first that Brane Theory is just “Brain Theory” spelt wrong. That would be funny, but sadly it’s not so. Brane Theory is an extension of string theory and although I’m not afraid of maths, I can’t understand it fully. I’ve already mentioned the issue of extra dimensions which are, however, tiny. Brane theory uses this idea to explain why gravity is so much weaker than the other forces, if indeed it is a force. It isn’t immediately clear to observation, but there seem to be three major forces in the Universe plus gravity: electromagnetism, the strong force and the weak force. Of these, electromagnetism is obvious except that it may not be realised that light is part of electromagnetism. The strong force prevents atoms other than hydrogen from exploding as soon as they form, since their nuclei are made up of positively charged particles which repel each other. The weak force is a bit more obscure, and might be better described as the weak interaction because it doesn’t involve attraction or repulsion. It amounts to a tiny force field which occurs when radioactive decay involves atoms emitting beta particles, which are fast electrons. When a nucleus releases an electron, because it’s negatively charged and there are no negatively charged particles in the nucleus, a neutron becomes a proton, or the nucleus emits a positron and a proton becomes a neutron. In the former case it means the element moves one place up the periodic table. But nothing is pushing or pulling, which makes it confusing. The strong and weak nuclear forces are very small scale in their range, only operating within atomic nuclei, and for some reason the strong nuclear force is 128 times weaker at double the distance. Electromagnetism is more straightforward, probably because we experience it ourselves directly and obviously in the form of light, current, magnets, compasses, lightning and so on, and it diminishes like gravity, following the inverse square law. 
That is, for example, a light source emitting light all around it such as the Sun will do so in a sphere, and because a sphere of twice the radius has four times the surface area, the same light is spread over four times the area and the source will be a quarter as bright from twice as far away. Gravity may not even be a force at all, but the distortion of spacetime by mass, and is anomalously weak. A magnet can pick up a piece of iron against gravity even if the magnet only has a mass of one gramme, yet Earth outweighs the magnet by a factor of around six times ten to the twenty-seventh. That’s ridiculously weak.
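The inverse square law itself is easy to check numerically. Here’s a minimal sketch; the 100-watt source is arbitrary and purely illustrative:

```python
import math

# Inverse-square law: a source radiating power P spreads it over a
# sphere of surface area 4*pi*r**2, so intensity falls as 1/r**2.

def intensity(power_watts: float, r_metres: float) -> float:
    """Power per unit area at distance r from an isotropic source."""
    return power_watts / (4 * math.pi * r_metres**2)

P = 100.0  # an arbitrary 100 W source
i1 = intensity(P, 1.0)
i2 = intensity(P, 2.0)
print(f"ratio at double the distance: {i1 / i2:.1f}")  # -> 4.0
```

Doubling the distance quarters the intensity, because the same power is spread over four times the area.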

Brane theory, at least sometimes, attempts to solve the problem of gravity being as weak as it is by using extra dimensions. Instead of exerting a force in three-dimensional space, gravity may be doing so in hyperspace, which means that instead of weakening due to the geometry of a sphere, it does so due to the geometry of a higher, multidimensional cousin of a sphere, but the other forces are confined to three-dimensional space, in a thin membrane, hence the name “Brane Theory”, which is of course expanding in hyperspace. It’s also theorised that just after the Big Bang, in the part of the above diagram labelled “inflation”, this Universe collided with another one, causing this inflation.
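The intuition about gravity being diluted by extra dimensions can be caricatured with a toy force law: a force spreading through n extra dimensions falls off as 1/r^(2+n) at the scales where those dimensions matter, by the same flux-spreading logic as the inverse square law. This is only a scaling sketch, nothing like the actual brane-theory maths:

```python
# Toy comparison: in ordinary 3D space a force falls off as 1/r**2;
# if it also spreads through n extra dimensions, flux conservation
# gives 1/r**(2 + n) instead -- one intuition for why gravity could
# look so dilute.  A caricature, not real brane theory.

def falloff(r: float, extra_dims: int = 0) -> float:
    """Relative force at distance r, normalised so falloff(1, n) == 1."""
    return 1.0 / r ** (2 + extra_dims)

for n in (0, 1, 2):
    print(f"{n} extra dims: force at r=10 is {falloff(10, n):.6f} of r=1")
```

With even one or two extra dimensions in play, the force at ten units’ distance is ten or a hundred times weaker again than the plain inverse square law predicts.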

So in other words, perhaps it isn’t a silly question to ask what the Universe is expanding into. This still doesn’t require space to be a thing, but makes the galaxies and stars into a thin, three-dimensional skin on a four-dimensional or multidimensional bubble. The answer is therefore possibly that the Universe is expanding in hyperspace, which is also not a thing but a way of describing distances and directions which need more than three numbers relative to where you are.

A few bits and pieces I want to clear up. This might all be thrown up in the air by the recent discovery of the way muons precess, because that suggests that the standard model of particle physics is wrong. And finally, I may have got this wrong myself. That is, what I just said might turn out to be nothing like what Brane Theory actually is. But note this: it’s maths and I’m not afraid of it. Lots of people are afraid of maths, and think they’re no good at it. I may well also be no good at maths, but I’m not afraid of it. This is a tangential point but very important, and probably has more bearing on everyday life than Calabi-Yau manifolds and stuff have anyway.