Space 1999


It’s weird to think that 1999 CE is over two decades in the past now. There are certain years in the last century and this one which have a kind of cachet of being futuristic about them, including 1984, 1990, 1999, 2000 and 2001. The PET 2001 computer, released in 1977, clearly exploited this, as of course did the film ‘2001’. Wordstar 2000, the word processor, was published in 1984. Then, when the years in question come and go, many things labelled with those years acquire an instant patina of zeerust, with the unfortunate exception of ‘Nineteen Eighty-Four’, of which the Burkiss Way once said “if you missed any of that, there’s another chance to see it by looking out of the window”.

‘Space 1999’ is of course one of these. Some people take the view that SF set in what used to be the future but is now the past is, so to speak, actually alternate history. Like ‘UFO’, this is another Gerry Anderson offering, which in its early stages of planning was originally going to be the second series of that programme. I have only rather vague memories of its first transmission, and missed a lot of episodes because it seems to have been scheduled against ‘Doctor Who’. It’s notable that the serials of the latter I can’t recall are accompanied by episodes of the former which I can. I don’t know if Gerry Anderson himself planned it as a competitor, but it does appear that ITC and ITV both treated it as such, and for me it evidently did compete. This was a pretty normal thing for the BBC and ITV to do at the time. We were primarily a BBC household despite my father’s big-C Conservatism, and my general impression of ITV as a child was that it seemed to consist of brash, cheap knock-offs of BBC TV programmes, with some exceptions such as ‘Coronation Street’, which at the time had no licence-funded equivalent. In fact I still feel this to some extent even today, and I can’t tell if it’s deep-seated atavistic prejudice or actually true.

I think it was Lew Grade who objected to ‘UFO’ on the grounds that it seemed to be about people “taking tea in the Midlands”. I would disagree with that. I think they probably did it in the Thames Valley, and in one case in County Galway. I’ve posted a long screed about ‘UFO’ already so I won’t repeat myself too much, but to me, one of the appeals of the earlier series was that it focussed to some extent on the human relations involved in Ed Straker in particular having to run a secret organisation which was saving Earth from alien invasion but was never allowed to tell anyone about it, which for example broke up his marriage. It’s also been described as “adventures in Human Resources”, which is spot-on for many of the episodes, and that’s part of the appeal: it makes it more universal and relatable. However, apparently Lew Grade, or whoever it was, didn’t like this at all and wanted the second series to be more about space, avoiding any scenes set on Earth. Gerry Anderson responded rather startlingly by offering to blow up the world in the pilot. This is, well, two things. Firstly, it fits in quite well with his general keenness on making things explode, which is seen all the way through ‘Thunderbirds’, for example. Secondly, it sounds like he was annoyed with whoever suggested this to him and was being sarcastic. Of course, at the time there was a certain other writer who used to keep blowing up the world and went on to make a hugely successful SF series out of it, but that was still in the future at the time. However, Douglas Adams’s motivation for doing so was more to do with getting rid of the threat of aliens invading Earth or ending the world, something which tended to hang over stories at the time. It does seem likely that ‘Space 1999’ was trying to do something similar: get the threat to Earth out of the picture and concentrate on space.

‘UFO’’s ratings unexpectedly fell later in the series, leading to the idea of a second season being shelved, but by that time so much work had been done on it that it would’ve been a waste to abandon the project, so it was re-designed and turned into an entirely new series which made no attempt to maintain continuity with the previous one. Having said that, many fans and others have made the minimal effort required to link the two, and there’s nothing on-screen to contradict the idea that it is a continuation of sorts. One published version of this is that SHADO succeeded in ending the alien threat and extended Moonbase, using it as a place to dump nuclear waste. This also explains why the Alphans are never surprised at the fact of alien existence: they presumably learned that aliens existed around twenty years previously, during the SHADO era. Moreover, there are similarities between the two Moonbases, in that both are established for a pressing need but then used for other purposes. There has to be a financial justification for them being there in the first place.

The series is kind of delicately poised between being complete rubbish and really good in a peculiar way. At the time, it was the most expensive British TV series ever made, or rather the first season was. It’s also the first Gerry Anderson series to have more than one season, although ‘UFO’ I think was sometimes transmitted in two blocks. It definitely shows visually that a lot of money was thrown at it, and in many respects it just looks so nice that I’m tempted just to bask in the futuristic spaciness of the whole thing and be done with it. However, before the advent of reality TV and the landscape channel, fictional telly was expected to have plots, and those – hmm, not so much. The general high quality of the special effects and the sets, and to a lesser extent the costumes, which I’ll be coming back to, is to some extent also quite frustrating because of the low production values elsewhere in TV SF, such as ‘Doctor Who’, and if you wanted spacy stuff, the choice at that time was exceedingly limited. Apart from this and Who there was ‘The Tomorrow People’, and that may have been it. Nothing else springs to mind in that specific sub-genre until a couple of years later when, boosted by the popularity of the astronomically-budgeted ‘Star Wars’ (another quote from ‘The Burkiss Way’: “special effects so expensive it would’ve been cheaper to build real spaceships!”) ‘Battlestar Galactica’ and ‘Buck Rogers In The Twenty-Fifth Century’ came along. Another issue with this is that it seems very much to be style over substance, and it’s a sad waste that the lavish production wasn’t accompanied by good storylines. If ‘Star Trek Phase II’ had happened, it probably would’ve looked like this but with better writing.

Taking a bird’s eye view of the proceedings, the general tone of the series was clearly to emphasise the sheer mysteriousness of space. The denizens of Moonbase Alpha don’t know what they’re going to come across next, and when they do come across it they frequently finish the story hardly any the wiser. As such, this is fine. However, it’s difficult to push it far without it becoming absurd and silly. A particular issue with the entire series is that it is unusually far, even for television science fiction, from scientific plausibility or accuracy, probably even more so than ‘Doctor Who’. It’s hard even to know where to begin with this; there are so many problems. Neutrinos travel thousands of times faster than light, all the aliens speak English, most of the aliens are humanoid except that, unlike ‘Star Trek’, it’s their hair rather than their noses which is funny, and for some reason black holes are referred to as “black suns”. “Constellation” is used in a way which makes zero sense, as is the word “galaxy”. I suspect you cannot watch this if you have even a CSE in physics without seeing massive problems with the so-called “science”, and in the end you just have to ignore it. However, I don’t want to let it off quite that easily. The main “scientist” in Year One (and that’s another thing I need to discuss) is Victor Bergman, who comes across as more of a mystic. There is some emphasis on the idea that faith trumps science most of the way through, whereas there are other examples of TV space opera which are more about the importance of both acting together. There is another way of looking at it which accords quite well with ‘UFO’: that it isn’t so much science fiction as space horror. I’ve mentioned Alan Frank’s ‘Galactic Aliens’ many times in this blog as a good example of that, and one episode in particular, ‘Dragon’s Domain’, is an especially good space horror story. Gerry Anderson’s work has had a valuable rôle in introducing children to engineering and science, and many of them have been inspired to follow careers in it as a result, so although its primary purpose is of course entertainment, and not particularly cerebral entertainment at that, it’s still quite worrying when a programme of this type pays so little attention to scientific accuracy, because impressionable young minds are receiving those data and a lot of them do not compute. I did, though, learn the cloud top temperature of Uranus from ‘Death’s Other Dominion’, and that was accurate based on the astronomical information available at the time, so it isn’t a total loss. Isaac Asimov famously criticised the whole premise of the show, which is that exploding nuclear waste pushed Cynthia, which they of course call the Moon because they’re relatively sane in that respect, out of orbit, when in reality an explosion of such violence would have destroyed it without even shifting its path much. Gerry Anderson felt it was unfairly maligned, because other implausible space shows and films didn’t bear the brunt of as much criticism for their inaccuracy. It is notable that the most prolific writers, Christopher Penfold and Johnny Byrne, were not primarily SF writers, although in the case of television it’s entirely normal for individual writers to write for a wide variety of series. Hence the issue might be that they didn’t, for whatever reason, have a scientific advisor.

Victor Bergman, a central character in Year One, is completely absent from Year Two and no explanation is offered. He’s never mentioned, so far as I’ve noticed. This is fairly typical of the difference between the two seasons. It’s almost as if Year Two is a reboot, except that both depend on the premise established in the first episode, ‘Breakaway’. This is due to behind-the-scenes meddling, if that’s the right word. In a way, Year Two is to Year One as Year One is to ‘UFO’. There’s also major time inconsistency between the two. Year Two episodes begin with Helena Russell’s log, like the captain’s log in ‘Star Trek’, with the number of days since ‘Breakaway’ replacing star dates. However, many of the episodes in Year One appear to be happening at the same subjective time as those in Year Two. As might be expected from a series of this period, there is very little continuity within seasons either, and nothing much like an arc. However, I’m not convinced this can simply be put down to the period: serials were a normal part of television, such as ‘The Duchess Of Duke Street’, and ‘Doctor Who’ showed considerable continuity of course. Moreover, both ‘Space 1999’ and ‘UFO’ have no official running order, and the times in Year Two in particular jump back and forth, with some episodes apparently happening simultaneously. Hence you can only really watch most of it as isolated episodes which all press the reset button at the end. It’s kind of in the spirit of the other Anderson series that it does this, because up until that point each series had only had a single season by design. There were never any plans to make a second season of anything, not because they were unpopular – they were often real hits, such as ‘Thunderbirds’ – but because they were trying out various ideas to sell to the American market. This is also why, although all the series are British, many of them have an American feel to them, and also American actors and characters. Year Two, however, does come across as significantly more like slightly later series such as ‘Battlestar Galactica’, which is partly because it was being run by Fred Freiberger rather than Sylvia Anderson. He was involved in the final season of ‘Star Trek’ TOS. The first season aspires to be more like ‘2001: A Space Odyssey’, to the extent that it even has a year just one removed from 2000 in the title, plus the word “space”, and it also seems to be aspiring to be intellectual and “deep” while missing the mark. Year Two puts it out of its misery by becoming a mainly action-based series, but that’s fine because previously it came across as having ideas above its station.

One of the implicit themes in the series, which is quite subtle but has been acknowledged to be present by those involved in it, is that from the start the Alphans (Moonbase Alpha inhabitants) are being guided in their adventures by what’s been referred to as a “Space God”. The explicit theme of a deity comes up twice. In Year One, they fall into a black hole and encounter an entity who seems to be a female God. In Year Two they encounter a fake God who seems to know everything about them and their history but turns out to be solar powered. This motif mitigates to some extent the apparently arbitrary amble of the Alphans across the Universe with a planet of the week, because Space God can be seen as guiding them. An oddity, which is necessary for the series to continue of course, is that they continually search for and encounter habitable planets with one flaw which they proceed to resolve over the course of the episode, making them ideal for settlement, and then abandon them. Thus it comes across as a parable against perfectionism.

The passage of time for the Alphans appears to be different from that for those left back on Earth, which makes sense if they’re moving near the speed of light. In the early episode ‘Death’s Other Dominion’, which like one other episode has that actor whose name must always be written in capitals, BRIAN BLESSED, it’s been eight hundred years since the Uranus (pronounced “your anus” here) mission of 1986, but in the later ‘Journey To Where’, only ten dozen years have passed on Earth. Given the premise of having fallen through wormholes and the influence of the time dilation effect, the passage of eight hundred years in what may have been a few subjective months is not a problem, but it’s harder to square with only just over a century having passed by a considerably later episode. Incidentally, whereas you might expect the tradition of a Scottish episode in many Anderson series to be dropped for one apparently set outside the Galaxy, ‘Journey To Where’ is in fact just such a story!
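
Out of curiosity, the arithmetic is easy to check. Here’s a minimal Python sketch of the special-relativistic requirement, assuming (purely my invention, as the series never says) that six subjective months pass aboard the runaway Moon:

```python
import math

def speed_for_dilation(earth_years: float, ship_years: float) -> float:
    """Fraction of light speed needed for ship_years of proper time
    to span earth_years of Earth time (pure special relativity)."""
    gamma = earth_years / ship_years        # required time dilation factor
    return math.sqrt(1.0 - 1.0 / gamma**2)  # invert gamma = 1/sqrt(1 - (v/c)^2)

# 800 Earth years passing in six subjective months (an assumed figure):
print(speed_for_dilation(800, 0.5))  # ~0.9999998, i.e. 99.99998% of c
```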

Given the relative dreadfulness of much of the acting (so I’m told), the large number of plot holes and the allowances a twenty-first century viewer must make for a series which is “of its time”, as the phrase goes, the continuing outstanding quality of the model scenes can be something of a relief. Although they tend to be quite short, they are at least as good as anything in the 1960s series. The Eagle spacecraft hold the distinction of being the only spacecraft I’ve ever dreamt about, and that’s decades after the series ended. Their design is extremely impressive, and the fact that they’re part of a range also including Swifts and Hawks is likewise well thought-through and a nice bit of world-building. I found myself not wanting the model bits to end because I knew I’d get plunged back into the general mediocrity of the rest of the programme, though some of this is a mindset problem caused by the general prejudice against it. One issue with the models, however, is that they occasionally include human figures in spacesuits which are rather unconvincing. On one occasion this was actually exploited positively, when an apparently spacesuited person turned out to be empty and on an automatic rover aiming to blow up the enemy’s vessel, and it feels to me like the idea was suggested by that very shortcoming. I found the scenes of the Eagles more convincing than any preceding Anderson production, with the possible exception of the rocket launch in ‘Doppelgänger’.

It often felt to me like certain issues could’ve been easily resolved if the Alphans had shown less enmity, which was apparently there to drive the drama. Maybe this is true of most dramatic situations and it certainly would’ve made for a more boring series. Even so, there are many situations I felt were unexplored. The Alphans were previously on tours of duty and therefore none of them were initially invested for the long run. Nor were they trained space explorers in most cases. The strains became apparent in some episodes, which was well-done. For instance, one episode covers a mental health issue called Green Sickness, where people in a close-knit group develop a kind of collective delusion based on wishful thinking that the commander, John Koenig, is hiding hospitable planets from them and rebel against him. I found this particular idea very psychologically convincing, and the way the leader of the group was portrayed seemed quite realistic too. But one thing which was missed was depicting what happens when Earthly authority disappears and only three hundred people are left. Presumably money has also disappeared in any meaningful sense, for example, and Koenig is, as his name suggests, effectively King because there is no higher authority available to anyone except when aliens interfere. Incidentally, Koenig’s rôle is strongly influenced by the fact that Martin Landau, his actor, basically had something in his contract which meant he always had to “win”, although there were also a couple of “Koenig-lite” episodes as they’d be called nowadays, which helped to be honest. This has a couple of undesirable consequences for the plots. It means, for example, that he seems to micro-manage, because it always has to be him who does crucial things, and other characters tend to be shown as incompetent so that he can shine. This is all rather clumsy. However, it may also be realistic because, as already observed, these people have been thrust out into deep space against their will and were never cut out to do these jobs. This is quite refreshing compared to many other TV series, SF or not, where too many characters are shown to be unrealistically omnicompetent, diligent and dedicated. Doctor Helena Russell, Koenig’s love interest, is a particularly incompetent physician, which is rather unfortunate, but luckily there is another one who is more adept, but not the head of department. The Peter Principle?

The introduction of Maya, played by Catherine Schell, at the start of Year Two as a replacement for Bergman, since she’s a scientist, supposèdly, is clearly “something for the dads”. She has the same rôle as Leela in ‘Doctor Who’, and was similar enough to a Vulcan physically that at the age of nine I used to mix the two up. Her function is not, however, just eye candy. She’s also a bit of a deus ex machina in that she’s a shape-shifter – she can become anything organic, apparently including an organic mineral. This raises certain issues because her mass varies as appropriate for her size and clothes tend to appear and disappear even if they weren’t created by her during the metamorphosis. A couple of positive things about her appearance, though, are that she’s able to introduce life forms who can breathe chlorine or survive in a vacuum. She’s also a lot more personable and relatable than most of the other actors, and has a warmth to her which is lacking elsewhere. Her father is also played by BRIAN BLESSED. There’s also a fake BRIAN BLESSED later on whom she morphs into when she has a fever.

Doctor Helena Russell is unfortunately not played very well by Barbara Bain, whose performance is rather wooden. A relationship is developed between her and John Koenig which is far from convincing, which is weird because the people who played them were married in real life. Her use of medical terminology reminds me of a certain Mitchell And Webb sketch.

At one point she calls an EEG a “brain pulse”!

I get the impression that the writers forgot the basis of the show in Year Two. This may be down to the influence of the ‘Star Trek’ guy, because it started to be about visiting the planet of the week, with only a little lip service paid to the idea of settling anywhere. Many years later, a video which I haven’t seen was produced to tie up the series: the Alphans found a hospitable planet they called Terra Alpha and settled on it. Although this was never on the cards, it would’ve been nice to see them settle and experience the problems of trying to survive and thrive on a new planet. It’s also not resolved how Earth managed without a satellite. ‘Journey To Where’ shows a devastated planet with frequent earthquakes and everyone living in domed cities to protect themselves from the environment, but through much of the series it isn’t clear that the planet itself hasn’t succumbed to the disaster, which would mean the Alphans are the only humans left, which makes it rather imperative that they decide whether to have children, and how. I understand that in the Big Finish audio version of ‘Breakaway’, children are trapped on Moonbase Alpha as well as adults, which would at least bear on that question.

As well as models, there are impressive sets and interesting costumes. The sets are somewhat reminiscent of ‘2001’ and also ‘Silent Running’. They pre-date the dark period which seems to start with ‘Alien’ a few years later and was emphasised further by ‘Blade Runner’. At this point space bases and travel are in bright, antiseptic-looking environments, all white and pastel with geometric shapes. Computers are still mainly of the blinkenlights persuasion. There are lots of doors which go “shwish”. In Year One there are windows in Main Mission which in one episode where an atmosphere is temporarily acquired turn out to have catches on them for some reason, but in Year Two the base is underground (“sublunar”?) and has no windows in the control room, and we also see caves which are used for various purposes, referred to as catacombs. The Anderson tradition of moving people around in sliding contraptions continues with the horizontal tubes which act as a kind of railway around the base and deliver people to the Eagles without the need to build extra sets. Many of the actors, props and sets were reused for the intermediate film and attempted pilot ‘Into Infinity – The Day After Tomorrow’, a personal favourite of mine at the time which redeems the scientific illiteracy of the series itself by being thoroughly educational and science-based. It also has BRIAN BLESSED in it.

The costumes, by contrast with ‘UFO’, where Sylvia Anderson was responsible, were created by a gay male fashion designer called Rudi Gernreich, who attempted to make them fairly unisex. Their most noticeable detail is the zips on their left sleeves and trouser legs, and yes, they are wearing flares. This reminded me at the time of those jumpers little children used to wear with buttons on the shoulders, and I found them disconcerting, partly because I can’t stand asymmetrical clothing. The zips are also oversized so that they would show up on television – they’re Talon rather than YKK. Another feature is that the colour of the left sleeve indicates rank and function, which makes sense but is rather made a nonsense of by the introduction of long-sleeved jackets on top in Year Two. Aliens are usually stereotypically “roby”. In one rather questionable episode there’s a prison planet with all-female prison guards in scarlet catsuits carrying whips, which all seems rather unnecessary, but I expect they were quite comfortable at least. The spacesuits are orange with white zips and usually metal helmets, whose visors have an unfortunate tendency to flip up at revealing moments. Gernreich was firmly opposed to the sexualisation of the human body, and to the idea that nakedness was shameful – this was the basis of his philosophy – and he saw his task as healing society of its screwed-upness about sex. Letting him loose on a futuristic show gave him full rein here, and regardless of its ropiness in other respects, the viewer gets to appreciate this.

Finally, a few famous faces and names turn up. Pamela Stephenson, as in ‘Not The Nine O’Clock News’, is a guest star in one episode, and Bernard Cribbins kind of reprises his spoon purveyor character Mr Hutchinson as a robot in ‘Brian The Brain’, a shade reminiscent of that awful Twiki from ‘Buck Rogers In The Twenty-Fifth Century’. There seems to have been a period in the 1970s and early ’80s when comedy robots were compulsory, perhaps due to R2D2, except that in this case it precedes ‘Star Wars’. Patrick Troughton also appears in the final episode, playing an emperor who is having immortality foisted on him, and one episode is written by Terrance Dicks, who also wrote some ‘Doctor Who’ and numerous Target novelisations of the same.

I’d like to close by considering whether this offers a window onto how science fiction in popular media was approached just before ‘Star Wars’ and disco came on the scene, and whether it’s typical of attitudes towards science at the time. You certainly get the impression that TV companies just thought you could put any old crap on the telly provided it was pretty and had lots of explosions and action in it, particularly if it was supposed to be science fiction or space opera. Although it’s quite shockingly ignorant, I’m not sure how much this has changed. I’m hard-pressed to think of any TV show ever which is hard science fiction. This also shares with ‘UFO’ the idea of psychic powers being accepted by mainstream science, and it’s also vitalist rather than mechanistic: the belief that there is a distinct life force not present in non-living things. My personal opinion on this particular issue is that there are at least emergent properties, but I don’t believe there could be such a thing as a “life sign” detector, which crops up here just as it does in ‘Star Trek’. I have the impression that you can put together various popular “science fictiony” things from the 1970s and arrive at a sloppy kind of vague, faulty and incomplete understanding of the nature of science, including the scientific method. There’s this, ‘Doctor Who’, ‘Star Wars’ and perhaps also ‘Alien’, and in my experience Alan Frank’s work too, although that’s not well known. What worries me about this is a little like the way Mary Whitehouse and the National Viewers’ And Listeners’ Association were bothered about what they saw as too much sex and violence on television, but in a different direction (and for the record I had very little sympathy with her or her organisation). It concerns me that a large number of young, impressionable children sat down in front of this and other shows of their ilk and proceeded to receive a very inaccurate and misleading impression of the Universe and science, and in this show in particular you don’t really see “scientists” behaving like scientists at all. Of course people do know it’s all just pretend, but do they get a good impression of what science is actually about from it? For instance, would there be as much anti-vaxx and climate change denial around today were it not for the possibly insidious influence of this kind of thing? I don’t know. Maybe I’m taking it all more seriously than even children did at the time. Also, it might not have changed all that much.

All that said, it is true that the Universe is stranger than we can imagine, as Einstein said and as was quoted in ‘Into Infinity’, and one thing this series did manage to do was to portray a “WTF” Universe which was essentially weird and mysterious, and I commend it for that. It’s also stylistically very impressive, with the usual exception of monster costumes, and it also has Sanskrit in it!

Möbius Mass-Transit

In 1950, Armin Deutsch wrote a story about the Boston subway called ‘A Subway Named Möbius’, for which spoilers will follow almost immediately.

Ready?

OK.

In this story, a tunnel is added to the MBTA subway, which currently looks something like this:

[Map: Boston Rapid Transit Map by Michael Kvrivishvili on Flickr, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=28660767]

Sounds like a story which would at best only appeal to train spotters, right? Not a bit of it. The tunnel in question involves Boylston and changes the topology of the network in such a way that one of the trains vanishes along with everyone on it, and only reappears several weeks later. When it does reappear, another train disappears. The story was adapted into the 1996 Argentine film ‘Moebius’, set on the Buenos Aires metro, Subte.

There is a small amount of technical mathematical terminology in the original story. I’m not sure if this is more than the use of jargon to impress the reader, but it is genuine, valid mathematical vocabulary relating to graph theory. Graphs in this sense are not the likes of line graphs, histograms or pie charts, but more like the maps shown above, in which pairs of items are related to each other in some way. In general, such a set of items with their relationships can be represented by dots (nodes) and lines (edges) on a piece of paper – or rather, some of them can: a graph which needs the surface of a torus to be drawn without its edges crossing, for instance, can’t be laid out on a flat sheet. In the story, the addition of the Boylston shuttle has caused the graph to have “infinite connectivity”, and it’s this which I find most dubious.

Connectivity has a couple of meanings in mathematics. In graph theory, connectivity is the minimum number of elements (nodes or edges) which need to be removed to separate the network into at least two isolated networks. If this is the meaning used here, it sounds very doubtful that it could really become infinite, because there is not an infinite number of either nodes or edges, and there couldn’t be in a mass-transit system. There are at least two other meanings, both topological. One is that it’s possible to move between any two points in the space, which is of course true in many mass-transit systems, but presumably not all, because I would expect there to be ones which either have yet to be linked up or can’t practically be connected. The physical correspondent of any such graph will be a different shape, and the constraints on movement include the fact that the trains move along rails in tunnels of impenetrable brick, concrete or stone. There are other ways of considering this network – for example, flooding would effectively block off the lower levels. It also works as a template: model railways could be constructed from these maps. They are also, famously, topological rather than geographic – the famous Tube map bears very little resemblance to the actual geography of the lines it depicts.
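
To make the graph-theoretic meaning concrete, here’s a minimal Python sketch using the networkx library on a made-up toy network – the station names are loosely Boston-flavoured but invented, not the real MBTA – computing the connectivity just defined and also checking planarity, i.e. whether the dots and lines can be drawn on flat paper without crossings:

```python
import networkx as nx

# A toy transit graph (invented stations, not the real network):
# a central loop with two branch lines hanging off it.
G = nx.Graph()
G.add_edges_from([
    ("Park", "Downtown"), ("Downtown", "South"), ("South", "Back Bay"),
    ("Back Bay", "Park"),                      # the loop
    ("Park", "North"), ("North", "Airport"),   # branch 1
    ("Back Bay", "Riverside"),                 # branch 2
])

# Connectivity: the minimum removals which disconnect the network.
print(nx.node_connectivity(G))   # 1 -- removing "Park" strands the Airport branch
print(nx.edge_connectivity(G))   # 1 -- likewise a single edge suffices

# Planarity: can it be drawn on flat paper with no crossing lines?
is_planar, _ = nx.check_planarity(G)
print(is_planar)                 # True -- transit maps almost always are
```

Any finite network gives finite numbers here, which is the nub of my doubt about “infinite connectivity”.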


However, for the purposes of the story and the film, the topology is the relevant aspect, and it’s also the relevant aspect for passengers. Presumably for the people running the system, geography is more important for various reasons, such as power consumption, timetabling and location of depots.

The Glasgow Underground is something like the second or third oldest of its kind and, despite protests, has never deviated from its original plan: a single loop of fifteen stations.

As it stands, the layout is a clockwise and an anti-clockwise circle which happens to correspond closely to the geographic layout, or at least more so than the Tube map does. However, in all these cases, the layout would be topologically the same if it were folded in half, or stretched in the middle and compressed at the edges. If Leicester Square and Covent Garden were six kilometres apart and Chesham and Chalfont & Latimer were only 260 metres apart, provided the other relations remained the same, the Tube map would still be accurate. They could also, in theory, be upended and become a system of lifts linked by horizontal trains and still be topologically the same. There could be a high-rise building doubled over on itself whose lifts and, well, trains correspond to the layout of the London Underground. This is what topology is about. Of course, as soon as an extra tunnel or lift shaft was constructed, the topology would change, and depending on the geographical shape of the system, this would be more or less convenient. If the Glasgow Underground were folded over on itself it would be relatively easy, depending on its orientation, to put a lift between Partick and St Enoch, and as it stands a tunnel under the Clyde between those two stations would produce a topologically identical system.

I presume that most people know what a Möbius strip, also known as an Afghan Band, is, but in case you don’t: take a strip of paper, put an odd number (such as one!) of twists in it and glue the ends together. Pretty simple stuff of course, but unlike a similar strip with no twists or an even number of them, such a band has only one edge and one side.

It might seem perverse to claim that this shape has only one side and one edge, because when it comes down to it it’s just a strip of material with a twist in it, but it really is so. If you trace the edge with your finger, you will have to go round twice to return to the original spot, and if you colour in one “side”, you’ll find you have to do the same. As an actual physical object, which has a thickness and is made from a sheet of paper with an interior, it isn’t quite that, but even then it’s a three-dimensional shape with only two faces – two fewer than a tetrahedron, which we usually assume to have the minimum possible number of four – and, as far as I can tell, either one or two edges.
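
This can even be checked numerically. Using the standard parametrisation of the strip, in which one coordinate runs round the loop and the other across the width, a quick Python sketch shows that one circuit delivers each point to the opposite side of the centre line, and only a second circuit brings it home:

```python
import math

def mobius(t: float, s: float):
    """Point on a Möbius band: t runs around the loop, s across the strip."""
    return ((1 + s * math.cos(t / 2)) * math.cos(t),
            (1 + s * math.cos(t / 2)) * math.sin(t),
            s * math.sin(t / 2))

# After one full circuit (t = 2*pi) the strip has flipped over:
print(mobius(0, 0.3))             # (1.3, 0.0, 0.0)
print(mobius(2 * math.pi, 0.3))   # (0.7, ~0, ~0) -- where s = -0.3 started
# Only after two circuits does a point come back to itself:
print(mobius(4 * math.pi, 0.3))   # (1.3, ~0, ~0) again
```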

One of the peculiar features of Möbius strips is that if you cut them in half down the middle, you get a single longer, narrower strip, but if you cut them a third of the way from the edge instead you get two linked strips. Hence there are three “lines of interest”, as it were. A line near the edge describes a single edge, which however runs along both the top and the bottom of the strip. A line near the centre appears to loop round twice. Lines around a third of the way in start near the top, plunge down near the bottom and return to near the top. All of these lines, apart from the exact centre line, make two complete circuits of the strip before meeting themselves; the central line closes after a single circuit and doesn’t change level, assuming that the topology of the strip corresponds to minimised distances between the lines.

Now imagine each one of those lines is an underground rail tunnel, like the Glasgow underground or the London tube circle line. A cross-section of the strip at any point would appear to show five tunnels, which would be vertically arranged at one point and horizontally arranged 180° away from that point. However, there are in fact only three tunnels, even though every transverse section seems to pass through five. A relatively simple case of this system involves them vertically oriented on one side, gradually rotating to an horizontal arrangement on the other, but this is only one version of the “geographical” arrangement. They could all be rotated through right angles so the tunnels pass near the surface along half of their route and then all plunge deep underground along the other half, they could be inside a narrow tower, or each of them could be imperfect circles and meander around. However, whatever else happens, to conform to the topology of a Möbius strip, they must at some point “twist” around to the opposite side an odd number of times, the simplest case being once.
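
The bookkeeping of five apparent tunnels versus three actual ones is easily verified: one circuit of the loop carries a tunnel at offset s from the centre of the strip to offset −s, so the tunnels pair up. A toy Python sketch, with the five slot positions chosen arbitrarily:

```python
# Cross-section "slots" seen at any point around the loop, as offsets
# across the strip (a hypothetical five-slot layout, as in the text):
slots = [-2/3, -1/3, 0.0, 1/3, 2/3]

# Following a tunnel once around the Möbius loop sends offset s to -s,
# so each tunnel occupies the slot pair {s, -s}.
tunnels = {frozenset((s, -s)) for s in slots}

print(len(slots))    # 5 apparent tunnels in any cross-section
print(len(tunnels))  # 3 actual tunnels (edge pair, third-way pair, centre)
```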

In order to make this simple, I want to imagine this system to consist of apparently five tunnels arranged vertically on one side and horizontally a semicircle away along their route. I also want there to be a system of lift shafts linking them all together and to the surface, perhaps ten of them, plus an extra set of five vertical shafts where they reach their horizontal orientation. Also at that point, the so-called “lift shaft” is horizontal, so it’s more a walkway or travelator, or perhaps a supplementary train tunnel since it is running horizontally. If we were talking about Glasgow here, which is the closest to this arrangement but still not very close, and if the clockwise and anticlockwise routes were in tunnels at different levels, only one more tunnel would be needed to complete the arrangement, and then each of the fifteen stations could have five different levels joined by stairs, escalators or lifts. If it were the Tube, the Circle Line is the most obvious candidate, or it would be if it were actually even topologically a circle, which it isn’t.

Ignoring the western branch though, which I’m sure has an official name but I have no idea what it is, the “Circle” Line has two and a third dozen stations, by contrast with the mere fifteen of Glasgow’s. However, both systems score over Subte and the MBTA in actually having circular routes, topologically speaking.

The question arises of what the point of any of this is. Why would anyone bother to design an underground with three different levels of completely superimposed tracks? However, I kind of want to look past this. Thinking of the shops and other concessions associated with the pedestrian subways in London near the Tube stations, maybe this is an entire underground city, or rather village, with a couple of hundred subterranean chambers, some residential, some business and some administrative, and the like. But for whatever reason, this subterranean railroad exists. In my mind, anyway.

Although it would be possible for the tracks to maintain the trains at the same angle, i.e. upright, it would also be possible to make them monorails, both suspended and on a rail, with overlapping rails and beams, with the carriages rolling around in circular frames to keep them in the same position and avoid moving the passengers around in ways they might not like. Alternatively, they could all just be strapped in very securely, or perhaps in padded pods, with four doors, two in the walls, one in the ceiling and one in the floor. Magnetic levitation is another option. For safety reasons, only the doors facing the station platforms should open.

Although this does all seem rather pointless, there would be advantages to such a system. For instance, it would save wear and tear on any given stretch of track and other equipment, since twice the length of tunnel has to be traversed before a train returns to its original point. It could also allow train arrivals and departures to be staggered so that five times as many trains were operating on three times as many tracks, although this would work against the wear and tear advantage. Trains on the same track would have more mean spacing between them, reducing the probability of collisions. However, there are also more significant electrical advantages. A resistor made in such a shape carries current back over itself in opposite directions, so the magnetic fields cancel and it causes no magnetic interference. Nikola Tesla patented a similar electromagnetic device in 1894 for the wireless transmission of electricity. It’s also possible that this kind of geometry will be useful for high-temperature superconductors, meaning that if the tracks are indeed linear induction motors, such a shape would be ideal for them.

There is a connection between superconductivity and Möbius strips in any case, because of the nature of quantum spin as mentioned previously. The way spin behaves for fermions, the particles of which matter, as opposed to forces, is made, can be envisaged using a strip of this kind because, as previously mentioned, the spin state of a fermion has to be turned through 720° to return to its original polarity, just like an arrow carried around the strip pointing towards the edge. At low temperatures, fermions such as electrons can pair up, but the members of these pairs can be thousands of atoms apart, and because each electron is a fermion with half-integral spin, the two together act as a boson, with integral spin. Also, because of the distance, many such bosons can occupy the same space. Because they are bosons rather than fermions in such a condition, they can have the same energy states, and this makes superconductivity possible, although not all superconductors work like this.
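
The 720° behaviour drops straight out of the standard mathematics of spin. A minimal sketch with numpy, applying the usual SU(2) rotation operator about the z axis to a spin-up state: a 360° turn negates it, and only a 720° turn restores it, just like the strip’s edge needing two circuits:

```python
import numpy as np

def spin_rotation(theta: float) -> np.ndarray:
    """SU(2) operator rotating a spin-1/2 state by theta about the z axis."""
    return np.array([[np.exp(-1j * theta / 2), 0],
                     [0, np.exp(1j * theta / 2)]])

up = np.array([1, 0], dtype=complex)  # a spin-up state

print(spin_rotation(2 * np.pi) @ up)  # ~[-1, 0]: one full turn negates the state
print(spin_rotation(4 * np.pi) @ up)  # ~[ 1, 0]: two full turns restore it
```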

It also occurs to me – and this is probably nothing – that a plasma of protium (ordinary hydrogen) could be suspended electromagnetically to form such a strip, and that this might “do something”, though this is all very vague. The one suggestive point is that such a plasma would consist entirely of fermions.

A suggestion made by a mathematician on a rather related subject concerned the minimum number of edges a shape could have. I only remember this very vaguely, but just as a Möbius strip has only one edge despite appearing to have two, this topologist stated that there was no known reason why a shape shouldn’t have zero edges. This is true in any case of certain shapes such as spheres and tori, but the idea was that there could be no sides or edges, and that this could be constructed from the theoretical infinitely elastic “plasticine” which topological shapes are made of. It’s easy to imagine, say, taking a triangular shape with three strips, twisting each a different odd number of times (1, 3 and 5 for example), gluing them together and ending up with something weird. But intuition tells us it’s impossible, as do the laws of physics, because if you imagine folding something in the right way causing it to disappear in, to quote Douglas Adams, “a puff of unsmoke”, that clearly wouldn’t happen, because matter has to go somewhere. Another way of thinking of it is to imagine it entering hyperspace, and therefore only seeming to vanish, but prima facie that doesn’t seem any more feasible. However, if such a shape started off with more than three dimensions, or it was a distortion of space itself rather than being made of matter as such, as might be achieved by negative mass or actual mass, maybe something else could arise. It’s possible that the Universe itself has a “twist” in it somewhere which makes it effectively into a multidimensional analogue of a Möbius strip, and if that is the case, a trip round the Universe through such a spatial anomaly would bring objects back as mirror images of themselves.

There are macroscopic consequences of fermionic spin converting to bosonic spin, such as the aforementioned superconductivity but also superfluidity. What isn’t clear to me is whether a direct macroscopic manifestation of something like a Möbius strip could happen. Maybe it could: quantum computers exist, for example, as do superfluids, superconductors and Bose-Einstein condensates. A magnification of non-integral spin would involve something like a gyroscope which needed to be inverted twice before it was spinning the other way, or indeed a tunnel which would cause a magnetically levitated train to turn upside down only if it went through it twice. It’s also rather imponderable what state such a train would be in if it had only gone through it once. Would it then be potentially inverted, so that it would only need to go through once more to turn upside down? Would it mean that a passenger who changed trains or got off after one circuit would then carry the potential of turning upside down themselves if they travelled through the tunnel again? It’s very difficult to contemplate even as a thought experiment.

Armin Deutsch was primarily an astronomer. His story could itself be seen as a loop, as could the film ‘Moebius’, since another train is discovered to have vanished just as the original one reappears. The film, of course, could literally have been made as a strip with a twist in it if it were on one reel, and become a never-ending story. Alternatively, there is a way to make a story a Möbius strip by having the characters swap identities as the plot proceeds. I feel that the possibilities of the Möbius strip with respect to story writing have yet to be explored. That would be a real twist in the tale.

The Metric System And Decimalisation

I am a child of the 1960s CE, and as such my formative years were dominated by various events in the British national life. These included the Three-Day Week, power cuts, the paper shortage, the summer of ’76, the Winter Of Discontent and the breakdown of the Post-War consensus. More globally, I can remember the Vietnam War, some of the Apollo missions and Watergate. All of those refer to US-dominated events of course. In terms of popular culture, I can recall a couple of Beatles songs, notably ‘The Long And Winding Road’, glam rock and the gradual rise in popularity of colour TV.

Most notably, though, for the purposes of this post, I can recall decimalisation and the drive to metrication in this country. Decimalisation was a very early memory for me. Decimal Day was on 15th February 1971. I remember the Max Bygraves song, for example, which nobody else seems to. I don’t recall the likes of sixpences or threepenny bits in circulation, although like anyone else of my age I clearly recall the use of shillings and florins, because they were only withdrawn in 1993. Although there were many complaints, decimalisation has clearly been entirely successful, although the existence of the florin is evidence of an earlier effort which didn’t work as well. It was part of a proposal made in 1847 by one Sir John Bowring to replace shillings and pence with units of a tenth and a hundredth of a pound, and it was introduced to test public opinion. The words “dime” and “decade” were suggested as names, and it’s quite surprising that it ended up being called a florin, which was the name of a Dutch coin of about the same value at the time. A royal commission had been set up earlier in the 1840s to investigate the possibility of decimalisation. However, the idea had been floating around since the late seventeenth century. The first European decimal currency was the Russian ruble (рубль), introduced in 1704, and over the next couple of centuries many other countries decimalised. China had been using decimal currency for far longer. The UK was, unsurprisingly, a late adopter of a completely decimal currency, and I personally wonder if that’s connected to decimalisation being perceived as French, because Napoleon had made many other countries go decimal.

I’m actually not a fan of decimal currency as such. Neither am I a fan of currency with arbitrary divisions of different values. I recognise the value of having to convert such numbers mentally or on paper as a form of intellectual exercise, but also think this is outweighed by the slow pace of such conversions even in the most practised minds. But it makes a lot of sense that a crown can be divided fairly between two, three, four, six, a dozen, fifteen, thirty or sixty people in the old money – even more if ha’pennies and farthings are involved – whereas the modern decimal crown, should it be used as currency, can only be divided by five. There’s a sense of equity there which I regard as very positive. Hence the “old money” had its benefits, but having twenty shillings to the pound at the next step up, while not exactly a mistake, as the decision was probably never consciously made, is at least inconvenient and inconsistent. Only nostalgia can really be evoked as a reason to stick with it, or possibly a sense of national identity. However, it could’ve been the other way round, as the first ideas of decimalisation in England arose in 1682, and we could have ended up as the first country with decimal currency and spent centuries looking down our noses at those silly furriners with their unwieldy money. So very un-British, don’tcher know?
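
The divisibility point is easily made concrete. A quick Python comparison of how many ways the old and new units split exactly into whole numbers of pence:

```python
def divisors(n: int) -> list[int]:
    """All the ways n pence can be split into equal whole-pence shares."""
    return [d for d in range(1, n + 1) if n % d == 0]

# Old money: 240 pence to the pound, 60 to the crown.
# New money: 100 pence to the pound, 25 to a would-be crown.
for name, pence in [("old pound", 240), ("new pound", 100),
                    ("old crown", 60), ("new crown", 25)]:
    print(f"{name}: {len(divisors(pence))} ways -> {divisors(pence)}")

# old pound: 20 ways; new pound: 9
# old crown: 12 ways; new crown: 3 (only 1, 5 and 25)
```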

It was suggested that instead of having pounds and pence, we should have shillings and pence. Pounds seem so iconic that I can’t imagine that ever being taken seriously, but who knows? Maybe that could’ve happened after all and we’d be thinking of pounds as the antiquated version. I remember English tourists in Austria talking about their Schilling seeming old-fashioned, like it was still the War, but in fact I found the Austrian use of that currency, which at the time was roughly equivalent to what our shilling would’ve been, made it a no-brainer to keep track of how much I was spending, by contrast with Italy and its lira, which seemed basically like Monopoly money to me (no offence to Italians meant). All that’s gone now of course, and without commenting on the economics and politics of the situation, the actual name of the Euro seems a bit ill-thought-through, although presumably it’s supposed to emphasise a common European identity. I wonder if potential Leave voters would have been subliminally made more sympathetic to the EU if it had adopted a British-sounding unit such as the shilling instead. Then again, I know I wouldn’t’ve been swayed, having only very reluctantly voted Remain, and I don’t want to belittle their reasoning or dismiss their altruistic motives for wanting to leave.

The ultimate ease with which we adjusted to decimal currency was not at all reflected in what happened with British metrication. Like the currency issue, this could have been the other way round too, because before the French Revolution plans were afoot in Britain to devise a system of units of this kind. It was frustrated partly by the Revolution itself and partly because the US, UK and France couldn’t agree on which country the quadrant defining the length of the metre should pass through. This issue would later have been superseded by the redefinition of units using physical constants such as wavelengths of light, although by then the damage had been done and all sorts of encrustations of nationalism had reduced the streamlining of global adoption. Incidentally, something which has long intrigued me is the position of Myanmar, which has expressed its intention of adopting the metric system but seems not to have done so, and still uses its own national measurements which are not easily converted to either imperial or metric. However, its currency is close to being decimal, although it has its own units for five and two and a half mu. It uses both imperial and metric units in its interactions with the outside world. I don’t understand whether this is a political statement of nationalism or something else, because to my shame I lack a firm grasp of the nature of the régime in the country. The other four larger exceptions (not for the time being acknowledging what’s going on in Polynesia, as that’s new to me) are the “U”K, US, Canada and Liberia. The US and Liberia form a unit because the latter was founded as, in effect, an independent Black-majority version of the US, so it makes sense that it would use Imperial. The US use of Imperial is harder to explain. As with Britain, there were moves to metricate the country in the 1960s and ’70s which didn’t succeed. Canada has presumably been dragged along by American ties, so although it’s largely metric it probably has little choice but to use Imperial in some circumstances. The “U”K is going to be something I know a lot more about than the others, because I live here.

Britain is stuck between the two systems now because Thatcher halted metrication in 1981. Whereas I’m no fan of the Imperial system, and not that big a fan of metric for reasons which are probably already apparent but upon which I shall dilate forthwith, I would expect most people to agree that the worst of all situations in this respect is to be forever stuck between the two. To be frank, I’d actually prefer us to have stayed Imperial than the current situation, which is more an indictment of the status quo than a recommendation. My father was the metrication officer for his workplace, so I’m a little closer than most people are to the programme. The British Standards Institution, again something my father was very involved in along with the ISO, began to raise the matter in 1962, culminating in the establishment of the Metrication Board at the end of ’68. Because my family was rather central in the drive to metrication in this country, I can’t really tell how others perceived it, and my own experience involved the use of metric rather than imperial units from an early age. State schooling also went over to teaching the metric system from the late ’60s, meaning that I am of the cohort which expects units to be metric and tends to think in those terms, at least more than most other people in this country do. There was very little contrary pressure for me. Imperial units were quaint, antiquated things, like putting units before the tens or the “old money”. The metric system was the FUTURE! It was as if the units we were learning to use as children would be the ones a few of us might use on our missions to Mars or when designing sentient robots. But there was a surprising degree of resistance to them, for no apparent reason other than the imperial system’s association with this country, possibly the empire, and the past. People don’t like change, I suppose. ‘Nationwide’ (there it is again) held a competition where they had two cars, one called Long Live The Mile and the other Vive Le Kilomètre, which would each go a certain distance depending on votes for the imperial or the metric system, and the discrepancy between the two was enormous, something like ten to one in favour of the imperial system if I recall correctly. This was a publicity stunt of course, and it’s like a telephone poll, which tends to distort scores in favour of one option rather than the other, but it remains the case that there very probably was a big inequality of that kind in the country’s opinion. It’s also notable that rather than just calling the metric car Long Live The Kilometre, they gave it a French name, making it seem like the metric system was foreign rather than international, and moreover associating it with our traditional enemies the French, and also the EEC. A stunt to be sure, but probably one which reflected the national mood fairly well.

There is currently a British single-issue pressure group called the UK Metric Association which continues to promote the metric system in this country, and which among other things aims to address certain myths about metrication. One of these, which is currently a fairly hot topic, is that the EU forced metrication upon us. Well, the UK joined the EEC on the 1st January 1973, several years after metrication began in 1965, and metrication and decimalisation really proceeded hand in hand. I’m too young to remember, but there seems to have been no suggestion that decimalisation was some kind of foreign plot. The metric system is also no more associated with the EU than it is with most of the rest of the world, with only three countries seriously diverging. There is also a fair bit of anti-American xenophobia in this country which has, however, not been harnessed in the same way. Ordnance Survey maps have been using a metric grid system since the 1940s, meaning that our roads and other civil engineering projects have inherited that system on their own maps, and surveying is also done in metric, so we have soft metrication on our roads and the like, not hard, but they aren’t imperial. Incidentally, soft metrication is where pounds and inches are still used ostensibly but the quantities are actually quoted in metric, as with the labels on groceries: you still get a pound of sugar, for example, but it’s labelled as 453 grammes. It was also claimed that phrases such as “give them an inch and they’ll take a mile” would be outlawed, which is just not true. I can remember that when the Sex Discrimination Act 1975 was introduced, it was similarly stated that it would become illegal to say words like “hostess”, which was a complete lie. I presume these rumours arose without planning or instigation, though. A related falsehood concerned placenames such as the Mile End Road having to change, which is obviously absurd.

The problem with having two systems is not trivial. For instance, it can mean there need to be two production lines, one manufacturing to metric dimensions and one to imperial. There are legal problems when planning documents, property boundaries and restraining orders use metric measurements but are quoted or thought of in imperial. Oddly, because the rest of the Commonwealth is metric, this puts Britain at odds with them as well as the EU. The Thatcher government is responsible for this mess. A few months after being elected, the Conservative government of 1979 abolished the Metrication Board, which it regarded as an unnecessary waste of public money, but the cost to business of maintaining two systems can be considerable, for instance through factory machinery having to be duplicated to work in both systems when manufacturing products for export, and this isn’t even confined to the smaller businesses which tend to be ignored by government, but extends to multinationals. The excuse was made that the Board’s work was done, but it clearly wasn’t. And now we’re stuck.

It’s odd that a government headed by a scientist such as Thatcher should have decided to halt metrication. This, in fact, is part of the general oddness of the Thatcher years, where someone who might be expected to make policy decisions based on evidence due to their background appears not to have done so. However, there’s also the largely spurious association between the system and the Common Market, which might go some way to explaining it, and it’s also a crowd-pleaser. In that respect, in a way she didn’t go far enough, because returning completely to the imperial system would have been even more popular and would have had a nationalistic flavour to it. Thus I’m not sure why she made that particular decision. It’s also interesting that by 1979 there was absolutely no interest or mileage (!) to be made from going back to the old money, and in fact the new money could be said to have become a rallying point later in the 1980s, with sterling in opposition to the EMU/ECU.

One important point in favour of certain imperial units should be made. A yard is three feet and a foot is twelve inches. This means a yard can be divided exactly by three and a foot by two, three, four and six. The same cannot be done using the metric system, which within an order of magnitude only allows exact integer division by two and five. However, this is more an inadequacy of the metric system than an advantage of the imperial, as the imperial units are inconsistent in most cases. Although I can’t track it down, there is some kind of decimal system within the imperial which is obscured by the existence of intermediate units, but this is not to recommend it, because ten is only chosen due to most European languages using decimal counting.
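
To make the divisibility contrast concrete, here’s a trivial illustrative sketch in Python listing the divisors of ten against those of the foot and the yard expressed in inches:

```python
# Exact divisors (excluding 1 and the number itself) of a metric step
# of ten, a foot of twelve inches and a yard of thirty-six inches.
def divisors(n):
    return [d for d in range(2, n) if n % d == 0]

for n in (10, 12, 36):
    print(n, divisors(n))
# 10 [2, 5]
# 12 [2, 3, 4, 6]
# 36 [2, 3, 4, 6, 9, 12, 18]
```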

I’ve been referring to this system as “metric” throughout, but of course the official name is the Système International d’unités, reflecting its largely French origins. It’s also been known as the extended MKSA system, and this contrasts with another version of the metric system referred to as CGS. MKSA stands for Metre, Kilogramme, Second, Ampere, the base units from which most of the others can be mathematically derived, and when I first learned of this I found it peculiar that the kilogramme rather than the gramme was in there because it’s a multiple, but if grammes had been used, many of the other units would’ve ended up the wrong size. CGS was an older system, based on centimetres, grammes and seconds, which has a similar discrepancy in using the centimetre rather than the metre. Einstein’s famous E=mc² uses this system when it measures the speed of light in centimetres per second, and consequently calculates the energy released in ergs rather than joules, which are ten million times larger than ergs. There is a separate set of derived units, including the aforementioned erg but also the dyne rather than the newton, and in electromagnetism there have been several different versions of units, the most familiar survivor of which is the gauss, quite a small quantity roughly equivalent to the strength of Earth’s magnetic field at the surface.
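
The factor between the two energy units is easy to check with the defined value of the speed of light; a minimal sketch:

```python
# E = mc^2 for one gramme of matter, evaluated in SI and in CGS.
c_si = 2.99792458e8     # speed of light in metres per second
c_cgs = 2.99792458e10   # the same speed in centimetres per second

E_joules = 1e-3 * c_si ** 2   # one gramme is 10^-3 kg: ~8.99e13 J
E_ergs = 1.0 * c_cgs ** 2     # one gramme in CGS: ~8.99e20 erg

print(E_ergs / E_joules)      # 1e7: a joule is ten million ergs
```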

There have been two generations of units in another way too. The original units are based on the metre, kilogramme and second, and consequently more derived units can be made regarding area, volume and velocity, but the improvement of scientific knowledge (if that’s the right way of looking at it) in physics and chemistry led to new units being invented, such as the mole (which is actually really a number) and the electromagnetic units, which are however associated with the “more basic” units. There are also at least two units which are mathematically derived: the radian and the steradian, which measure plane angle and solid angle respectively. These make sense from the perspective of trigonometry but don’t seem very useful or easy to work with. That said, they have a “naturalness” to them which most other units lack. For instance, kelvin and what I call centigrade but is also known as celsius were both historically based on dividing the interval from the melting point of ice to the boiling point of water at sea level into a hundred degrees, and water turns up in other units too, such as the kilogramme, originally defined as the mass of a cubic decimetre of water at 4°C, but in fact it might make more sense to make the temperature scale more like decibels, halving indefinitely with cooling in order to keep absolute zero at an infinite distance. More prosaically, the use of ares, which are a hundred square metres, and the more commonly employed hectare, is a bit peculiar, as are litres and tonnes.
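
For what it’s worth, here is a minimal sketch of such a decibel-like scale, with the purely arbitrary assumption that the melting point of ice marks its zero point; each downward unit halves the absolute temperature, so zero kelvin is infinitely far down the scale:

```python
import math

T_REF = 273.15  # melting point of ice in kelvin (arbitrary reference)

def log_degrees(kelvin):
    # One unit per doubling or halving of absolute temperature.
    return math.log2(kelvin / T_REF)

for t in (373.15, 273.15, 136.575, 2.7):
    print(f"{t:>8.3f} K -> {log_degrees(t):+.2f} log-degrees")
# Boiling water comes out at about +0.45, half of 273.15 K at
# exactly -1, the cosmic microwave background at about -6.66, and
# absolute zero would be minus infinity.
```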

Several units in the imperial system were based on the human body, which I suppose makes them more “human”, although of course this body is that of an able-bodied white adult male, which isn’t ideal. The original units of the metric system, by contrast, are based on the distance between the North Pole and Equator divided by ten million, the earlier definition of the metre. Later on it was defined as the distance travelled by a light beam in vacuo in an appropriate fraction of a second. But during the twentieth century, various discoveries were made regarding such things as the charge of the electron and quantum mechanics which revealed that the Universe effectively already has a system of weights and measures, and that the one we devised from the eighteenth Christian century onward was not easily converted to it. The most obvious one of these units would of course be velocity: the speed of light in vacuo is much more fundamental than metres per second. The others are based on quantum mechanics and measure length, time, mass, energy, force and temperature. They figuratively represent the resolution and frame rate of reality, and are extremely unwieldy as such, as the length and time are very short indeed, far smaller than the effective size of subatomic particles or any currently established familiar feature of the world, and the force and temperature are extremely high. As the metric system has demonstrated, though, one can multiply or divide one’s way out of the predicament.

The Planck length is 1.616255(18)×10⁻³⁵ metres, which is as much smaller than a proton as a grain of sand is smaller than the distance to τ Ceti. However, it’s also very close to being an exact power of ten smaller than a mile, which ironically means that if we multiplied it up using a decimal system like the metric one, one of our units would be only 0.4% longer than a mile was in the first place. The Planck time is about 5.39×10⁻⁴⁴ seconds, and is the time taken for light to cross the Planck length. This isn’t as close as the multiple of a Planck length is to a mile, but a suitably multiplied time unit in a kind of metric system does come out reasonably close to a minute, at 53.9 seconds. The exception to the inconvenient sizes of the units is the Planck energy, which at about two gigajoules is roughly the chemical energy of a full tank of petrol in a car (petrol yields about 34.2 megajoules per litre). The other two are far too large but can of course be scaled down. The Planck force is said to be the gravitational attraction between two objects of Planck mass one Planck length apart, and puzzlingly for me also the same with electromagnetic attraction or repulsion, and is 1.210295×10⁴⁴ newtons. A Planck mass is 21.8 microgrammes, so like the Planck energy it’s within reasonable distance of being useful as it is. It’s defined as the mass of a particle whose reduced Compton wavelength is one Planck length. Finally, the Planck temperature is 1.416784(16)×10³² K, which is said to be the highest possible temperature. I don’t know why.
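
All of these drop out of a handful of physical constants, so they’re easy to verify; a minimal sketch using CODATA values:

```python
import math

G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
k_B = 1.380649e-23      # Boltzmann constant, J/K

l_P = math.sqrt(hbar * G / c**3)  # Planck length, ~1.616e-35 m
t_P = l_P / c                     # Planck time, ~5.39e-44 s
m_P = math.sqrt(hbar * c / G)     # Planck mass, ~2.18e-8 kg
E_P = m_P * c**2                  # Planck energy, ~1.96e9 J
F_P = c**4 / G                    # Planck force, ~1.21e44 N
T_P = E_P / k_B                   # Planck temperature, ~1.42e32 K

# The mile coincidence: 10^38 Planck lengths versus a statute mile.
print(1e38 * l_P / 1609.344)      # ~1.004, i.e. about 0.4% longer
```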

All of this is expressed in decimal. The units themselves are one choice of more fundamental measurements than what we have now, but simply to use them in multiples of ten like the metric system is questionable. As you probably know, I advocate for the duodecimal system although there are other options which have their own merits, but they would involve changing the very words we use for counting in some cases, perhaps to an international standard. Hexadecimal has its own plusses, being easy to halve the quantities and so forth. But a system of weights and measures based on universal constants, assuming they really are constant, would be shared by other sentiences throughout the Universe in the sense that they would at least be able to understand it if their physics is the same, and would also make various calculations much simpler, just as BMI is based on the metric system for example, or calculations of drug doses are easier in metric.
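
Here is a quick illustrative sketch of how tidily repeated halving behaves in base twelve and base sixteen, using a simple base converter:

```python
# Convert a non-negative integer to a given base, with digits beyond
# nine shown as letters, to compare how numbers look in each base.
DIGITS = "0123456789ABCDEF"

def to_base(n, base):
    if n == 0:
        return "0"
    out = ""
    while n:
        n, r = divmod(n, base)
        out = DIGITS[r] + out
    return out

for n in (144, 72, 36, 18, 9):  # repeatedly halving a gross
    print(f"{n:>3} = {to_base(n, 12):>3} (base 12) = {to_base(n, 16):>2} (base 16)")
# 144 = 100 (base 12) = 90 (base 16), 72 = 60 = 48, and so on:
# the halves keep simple representations in both bases.
```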

Maybe that would be the best system, and it’s this that stops me from throwing in my lot with the UK Metric Association, but as a well-known phrase has it, “don’t let the best be the enemy of the good”. The metric system is a compromise, but one that’s accepted unconsciously by most of the human world insofar as it’s involved in the global economy. Presumably uncontacted tribes do not use the metric system. British rejection of metric was a precursor of Brexit. The public perceived the system as foreign and associated with mainland Europe despite enormous evidence to the contrary, such as the Commonwealth’s adoption of metric. The imperial system may also have an appeal precisely because it’s less rational and thought-through than metric, if one has a distrust of human reasoning and planning, which seems to be a major motivation for political conservatism. Awareness of the flaws in the system does in fact lead me to some sympathy with the idea that metric is not ideal, but it doesn’t make me want to replace it with the far inferior imperial system.

Given all that, though, one thing still puzzles me. The decimalisation and metrication projects began to be pursued in their most recent phase in the 1960s. Decimalisation succeeded so well that the old money is forgotten now, but metrication failed. Why is this?

Space Camels

Photo by Shukhrat Umarov on Pexels.com

Some time around 1975-77, the early evening news and magazine programme ‘Nationwide’ did a Christmas special about life in the year 2000. I can remember a few details. The cod was considered an endangered species or extinct, there was a test tube with an embryo in it and women were no longer familiar with the idea of skirts or dresses. It’s seemingly impossible to track down; since Richard Stilgoe was involved maybe that’s just as well, though then again so was Valerie Singleton. Anyway, one of the things it featured was a TV schedule including ‘The Universe About Us’, a parody of the well-known natural history programme ‘The World About Us’, about life native to asteroids and how it coped without an atmosphere, and it was this that really piqued my interest.

At the time, I used to exercise my imagination in rather a limited way by a kind of analogical method. For instance, I used to think that what was happening with audio at the time would happen with video two decades later, so the ubiquity of cassette recorders in 1977 I imagined to extend to video recorders with built in screens and cameras in 1997. I also used to extend two dimensions to three and replace rotary motion with linear, so if I’d done a session on two-dimensional tessellation I would try to imagine how that would work in three, and try to think of ways of replacing wheels with the likes of linear induction motors. I was actually a little concerned that this process of analogising was a bit lazy and wanted to come up with another way of imagining things which was a bit more flexible and original, but of course it did bear a limited amount of fruit.

I did this with the idea of organisms who didn’t breathe oxygen by imagining an airless planet or moon to be like a desert on Earth, except that the environment in question was effectively an oxygen desert, where not only water but also oxygen was scarce. I don’t remember too much about it, but one thing I do recall was the idea of trees with deep roots to reach subterranean water deposits as a basis for life forms who sought out oxygen deposits deep underground in a similar way. There will be a notebook somewhere with further details in it. I also eventually came up with the idea of a Martian whose body was based on similar principles. It had a large dome on top of its body covered in holes which it used to inhale air, which it then compressed to breathable density using piston organs. The problems with this are that there is practically no oxygen in the Martian atmosphere and that breathing this way would have to be “cost-effective”: that is, in an atmosphere with a usable amount of oxygen in it, the energy expended in compression would have to be lower than the energy released by respiration. This is actually a practical problem with respiratory diseases. If your lungs are unable to function without a lot of respiratory effort, you can actually end up losing weight because you burn so many calories by the energy spent on breathing, and ultimately you could go so far from breaking even that it would actually be fatal.
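
The cost-effectiveness problem can be roughed out numerically. Here is a minimal sketch with assumed round figures (a Martian surface temperature of about 210 K, total pressure of about 600 Pa, an oxygen mole fraction of roughly 0.13% and idealised isothermal compression), comparing the work of compressing raw Martian air to breathable density with the energy ærobic respiration yields per mole of oxygen:

```python
import math

R = 8.314   # gas constant, J/(mol K)
T = 210.0   # rough Martian surface temperature, K

p_mars = 600.0   # rough Martian surface pressure, Pa
x_o2 = 0.0013    # rough O2 mole fraction: ~770 mol of air per mol of O2

# Isothermal compression work per mole of gas: W = R T ln(p2/p1),
# taking ~100 kPa as breathable density.
w_per_mol_air = R * T * math.log(1e5 / p_mars)   # ~8.9 kJ per mol of air
w_per_mol_o2 = w_per_mol_air / x_o2              # ~6.9 MJ per mol of O2

# Aerobic respiration yields ~2870 kJ per mole of glucose, which
# consumes six moles of O2: ~478 kJ per mole of O2.
yield_per_mol_o2 = 2870e3 / 6

print(f"compression cost: {w_per_mol_o2 / 1e6:.1f} MJ per mol O2")
print(f"respiratory yield: {yield_per_mol_o2 / 1e3:.0f} kJ per mol O2")
# On these numbers the piston organs would spend over ten times what
# they gained, which is the problem described above.
```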

This assumes, of course, that life requires oxygen, which is by no means so. It so happens that our own metabolism liberates energy from glucose by carefully controlled oxidation via the famous Krebs cycle, preceded by a stage called glycolysis which releases a small amount of energy without needing oxygen, and there are plenty of completely anærobic organisms – ones who do not require oxygen – and even ones for whom oxygen is toxic. However, for a living thing to rely only on anærobic respiration would be much less efficient than using oxygen, and they would be unable to compete well with species occupying similar niches which could avail themselves thereof.
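
The difference in yield is stark even in round textbook figures; a minimal sketch (estimates of the ærobic figure vary):

```python
# Approximate net ATP per molecule of glucose, textbook values.
glycolysis_only = 2   # anaerobic glycolysis alone
aerobic = 30          # glycolysis + Krebs cycle + oxidative phosphorylation

print(f"aerobic respiration yields ~{aerobic // glycolysis_only}x more ATP per glucose")
```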

The only reason there is much free molecular oxygen in our atmosphere is that æons ago, cyanobacteria evolved which were able to combine carbon dioxide and water to store energy and produced oxygen as a by-product. This actually ended up poisoning most of the life around on this planet which had thriven up until then and plunged it into a global ice age where there were even glaciers at the equator due to the lack of a greenhouse effect from the carbon dioxide which had been removed from the atmosphere. It was actually a bit of a disaster, and it demonstrates very clearly that oxygen can be a liability for life rather than essential to it. It is in fact implicated in the kind of damage associated with aging, and if life like us could survive without respiring in an oxygen-rich environment we might end up living a lot longer, barring accidents. However, it remains to be seen how we would manage to derive energy to do all that living, and perhaps if we were only able to use anærobic respiration we would take a lot longer to get things done and life might end up seeming about the same length anyway.

Photosynthesis is not the only way free oxygen can arise in an atmosphere. The Jovian moon Europa and Saturn’s moon Enceladus both have extremely thin oxygen atmospheres because of the breakdown of ice and in the latter case water vapour, and in the case of Enceladus this oxygen is actually transported to Titan’s much thicker atmosphere. It’s thought that a very common type of planet would be the “water world”, which could form in several different ways but consist of an ocean hundreds of kilometres deep over a layer of ice which is only there due to compression and not cold. Such a world would start off with a water vapour atmosphere but ultraviolet radiation from its sun would split up the molecules and the hydrogen would escape into space, leaving the oxygen behind, probably at breathable levels. However, life as we know it on such a planet is another question because depending on how thick the ice is, volcanic activity and rock could be deeply buried under the ocean bed and heavier elements wouldn’t be available, so it’s likely that larger such planets would be lifeless due to lack of material resources. On smaller worlds, the oddity may be that even though photosynthesis might never have evolved, heterotrophs such as fungi and animals might, without needing plants, but there would still need to be producers for them to eat.

It’s also been suggested that although organisms benefit from oxidation or other chemical processes to release energy, other forms of carbon-based biochemistry might use other elements than oxygen to do it. In fact it isn’t necessary to go as far as another planet to see this happening, because even here there are sulphur bacteria which use that element instead, and sulphur is used metabolically in a number of different organisms in various ways. There are two opposite chemical processes referred to as oxidation and reduction: oxidation is the loss of electrons whereas reduction is their gain.

Sulphur bacteria are also a big personal reason why I didn’t go into marine biology. As a teenager, I did field work on a mud flat in Kent which was rich in anærobic bacteria releasing stinky hydrogen sulphide, living in a black, tarry layer under the mud in which I got completely covered, which seriously put me off doing any more of that kind of thing. I wonder, in fact, whether this was part of the point of the activity. Anyway, from the comfort of this urban East Midlands sofa, I am able to pontificate on the matter in a more detached manner.

Sulphur bacteria occur in several different types and use sulphur for various purposes. The element was present in quantity on this planet before oxygen respiration evolved and would have been an ample source of energy. Some archæa do the same. They may actually “breathe” sulphate rather than sulphur as such, and whereas when oxygen is breathed it’s reduced to water, sulphur produces hydrogen sulphide. However, both elemental sulphur and various sulphur compounds are used. Sulphur, being in the same column of the periodic table as oxygen, has certain similar properties, although its valency varies, unlike oxygen’s, which is always two. Further down that column are selenium, tellurium and polonium, and all but the last perform useful functions in some living things, the function of polonium being of course to kill things and be extremely dangerous, but none of them are abundant enough to be used for respiration. Sulphur is a solid at room temperature which at sea level pressure melts at about 115°C and boils at about 445°C, so it’s unlikely to be a respiratory gas. An ecosystem based on sulphur would therefore probably be completely aquatic. However, sulphur is the fifth most common element making up this planet and the tenth most common cosmically, and it crops up all over the Solar System, such as in the clouds of Venus, as sulphuric acid oceans on early Mars and all over Io, both as an element and as frozen sulphur dioxide. All of this suggests that there are many worlds out there in the Universe with sulphuric acid cycling through the atmosphere in the same way as water does on Earth, and depending on its concentration it could be very hostile to the development of life, which sadly could also apply to Mars and Venus. Nonetheless, the worlds themselves could be quite interesting geologically and chemically.

A popular science fictional choice of another option to oxygen is chlorine, which I’m pretty sure I’ve mentioned before on here. The potential for marine organisms to produce elemental chlorine gas is considerable because of the salt content of the oceans, and it may be that whereas we on this planet have gone down the oxygen route, others will have a large amount of chlorine in their atmospheres. If this is so, and their oceans are like ours in other ways, they will also contain a lot of caustic soda, so from our perspective if there’s any life there at all it will be in some way extremophile. Such oceans might also be high in elemental iron, as were Earth’s before the oxygen catastrophe, as it’s known. For me, the issue with chlorine is that it’s liable to produce “dead ends” in molecules. Oxygen, being bivalent, can participate in groups which join both to the main part of an organic molecule and to other elements such as hydrogen, and can also occur in rings, but chlorine only has a valency of one and therefore terminates a group and can form part of neither a carbon chain nor a ring. This would give chlorine a different function in such biochemistry and there might still be a rôle for oxygen in it anyway, though not as a breathing gas. If the parallel to oxygen were close, photosynthesis would involve the combination of tetrachloromethane with hydrochloric acid, or rather hydrogen chloride, to form a partially substituted chlorinated hydrocarbon as an energy store, and respiration would involve the production of tetrachloromethane. At our atmospheric pressures, tetrachloromethane is only gaseous above 77°C although it melts at -22, but it is a powerful greenhouse gas, so it’s feasible that a planet with a high-chlorine atmosphere would be quite warm and have water on its surface above our own boiling point, or again the possibility exists of aquatic life only. Incidentally, it hasn’t escaped my attention that in the above word equation I assumed hydrochloric acid or hydrogen chloride to be the main constituent of the oceans rather than water, which may be incompatible with life. This, however, is just a straight naïve substitution of chlorine for oxygen, which might not parallel a genuine viable set of processes upon which biochemistry could be based. For instance, and again this is tinkering, retaining water in that equation still leads to free chlorine and tetrachloromethane in the atmosphere but also a kind of chlorinated “sugar”. The real processes of photosynthesis and ærobic respiration are a lot more complex than that famous equation suggests, and there may be flexibility in there somewhere.
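
For book-keeping only, here is the real summary equation for photosynthesis alongside one way the naïve chlorine-for-oxygen substitution can be made to balance; the “chlorosugar” C₆H₁₂Cl₆ is a purely formal analogue of glucose, and no claim is made that this is viable chemistry:

```latex
% Real summary equation for photosynthesis:
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \longrightarrow \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
% A naive chlorine analogue, balanced formally only:
6\,\mathrm{CCl_4} + 12\,\mathrm{HCl} \longrightarrow \mathrm{C_6H_{12}Cl_6} + 15\,\mathrm{Cl_2}
```

Run in reverse, the second equation gives the corresponding respiration step, regenerating tetrachloromethane as described above.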

The collaborative science fiction project Orion’s Arm has had a go at creating a chlorine-based planet class, claiming that the process is unlikely to take place easily and that such worlds are likely to be either rare or the result of something like a terraforming process by intelligent aliens. However, they do turn up in science fiction quite often. John Christopher’s ‘Tripods’ trilogy depicts aliens who aim to convert our atmosphere to one high in chlorine so they can settle our planet. Isaac Asimov’s ‘C-Chute’ describes a human spacecraft which gets taken over by chlorine breathers during a war and the human attempt to reclaim it in a toxic atmosphere. Getting back to the Orion’s Arm article, I agree that weathering would be more pronounced on such a planet and that photosynthetic pigments are likely to be purple because of the greenness of chlorine gas, but in fact it’s also theorised that chlorophyll is a second-generation pigment on this planet, necessitated by prior purple microörganisms using up the rest of the spectrum, so it might well be the case that even most habitable planets would have purple vegetation and that Earth is unusual in having green plants.

Another option I’ve wondered about but am almost sure is not viable is fluorine. This is the element after oxygen in the periodic table and also the most chemically reactive of all elements. Physically, it has similarities with oxygen, with a similar boiling point, although it’s yellow. This is by contrast with chlorine, whose boiling point at our sea level pressure is only -34.1°C, meaning incidentally that chlorine planets would have to be hotter than Earth to be viable unless they had something like lakes of pure liquid chlorine at the poles. However, fluorine is so reactive that it would be difficult to dislodge from its bonding. For a long time it seemed entirely unfeasible to me that any planet could have free fluorine in its atmosphere, but in fact it is possible, though in small amounts and probably only locally. Fluorite mineral is locally common here in the English East Midlands. This is calcium fluoride, which releases hydrogen fluoride, or hydrofluoric acid, when sulphuric acid acts on it. This leads to the disturbing situation of a planet with pools of hydrofluoric acid at least briefly on its surface, before it eats through the rocks and makes its way towards the mantle. Once it encounters heat, however, it would dissociate into hydrogen and fluorine, or when struck by lightning it might also separate. The fluorine would then combine very easily with other elements, to the extent that it could even form xenon fluoride in small amounts. Hence I think a planet with a little free fluorine in its atmosphere is possible, but it would be quite short-lived and incompatible with life. That said, fluorine does exist in terrestrial biochemistry, in teeth and bones where the fluoride content of the water is high, and also in krill for some reason I don’t understand.

At the top of this post, I gave the impression it was going to be about space camels, and it is. That is, it’s about the possibility of alien animals who can thrive in an atmosphere rich in their respiratory gas for long periods of time, and I am still going to do that. The point here is that such animals may not breathe oxygen in the first place.

Among the simplest and most plausible situations is an ecosystem like ours but with no oxygen respiration, just glycolysis. There are animals who don’t breathe on our own planet. There is a cnidarian parasitic on salmon who doesn’t breathe. In our cells, like those of most other animals, there are symbiotic organelles descended from bacteria called mitochondria which are largely responsible for processing glucose to release energy in combination with oxygen. Henneguya salminicola is a microscopic relative of jellyfish whose mitochondria don’t do this. There’s also a whole phylum of animals, the Loricifera, which includes species who never come in contact with free oxygen, living in Mediterranean sediment, and may also lack mitochondria. The famous Cryptosporidium, a pathogenic alveolate which I unfortunately have considerable personal experience with due to its presence in water in Leeds in the 1990s, does not respire using oxygen. There are also innumerable species of anærobic bacteria and archæa. On this planet, all of the larger such organisms live in special and restricted environments, and although they are larger, they’re still pretty small compared to us. It does, however, at least prove that there can be animals who don’t breathe oxygen and are fine, and that would be one option for evolution, or indeed a path that the whole of evolution could take on a planet with no oxygen in its atmosphere, perhaps using a different energy source than light to power its biosphere. Very many aspects of our anatomy and physiology do depend on our need for oxygen, such as a circulatory system including a heart, and of course lungs, but it isn’t clear that an animal who doesn’t breathe at all wouldn’t still need a circulatory system if larger than a certain size, because there would still be a need to move nutrition and waste products around, and there might even be lungs because of the need to vocalise for communication, or perhaps to exhale nitrogenous waste such as ammonia. Presumably organisms evolving in an oxygen-free environment right from the start would also have many bodily compounds which would react, perhaps even violently, with oxygen if they were to come in contact with it, possibly even being highly inflammable.

Another very common and straightforward technique for surviving without breathing is found among whales, dolphins, seals, sirenians and possibly early humans. These are simply good at holding their breath, and are in that sense “oxygen camels”. Sperm whales, for example, can hold their breath for up to an hour and a half, and a lower metabolic rate could cause this to increase to several hours, so it’s interesting to speculate whether the likes of ichthyosaurs and plesiosaurs might have gone for hours without breathing. In a way, then, oxygen camels not only exist but we may even be them ourselves. We have the diving reflex, where our heart rate slows down when we are immersed in water. All vertebrates, as far as I know, can also store oxygen using a hæmoglobin-like pigment, myoglobin, in their muscles so that it can be readily available for use when needed rather than having to rely instantly on blood oxygen.

Another possibility, which I’ve explored elsewhere in collaboration with someone else, is of an animal consisting largely of a thin “skin” which performs many different biological functions but is bladder-like, containing sacs of air like a lilo. Such an animal takes a similar approach to oxygen as a succulent plant does to water, storing it when plentiful and calling on reserves when needed. However, the volume of gas could make this rather ungainly. Perhaps there could be airship-like animals on some planets who do this though. Sky whales, as it were.

A more elegant approach would involve storing oxygen, sulphur or chlorine chemically and releasing it when needed, and if space camels exist this is, I suspect, the most widely adopted solution, probably in combination with greater than usual reliance on anærobic respiration, or perhaps “achloric” respiration in some cases. This would involve relatively dense solid compounds which could be induced to release oxygen or chlorine at manageably slow rates, rather like fat deposits can be called upon to release energy for metabolism. Camels rely on the water content of their humps only in the sense that metabolising the adipose tissue yields water, rather than the humps actually being “water tanks”, and this is not the most important store of water in their bodies, which is a combination of the bloodstream and one of their stomachs, conserved by dry fæces and viscous, low-water urine. However, it isn’t clear how much this could be extended compared to breathing. Another possibility is something like hibernation when oxygen or chlorine levels are low, or perhaps the ability to switch over to another respiratory element such as the much more compact sulphur by changing the respiratory pathway and storing sulphur compounds.

Why, though, would a situation arise where a respiratory element varied in availability? This happens on our own planet because we have air-breathing animals who have returned to the water. Perhaps on another planet with plateaux above the level of breathable oxygen it would be necessary for animals venturing onto them, perhaps to exploit an ecological niche too extreme for their lowland colleagues, to have such adaptations. A similar situation might emerge in the upper atmosphere, with the airship-like animals, although it should be borne in mind here that they would need to employ a lighter-than-air gas such as hydrogen to maintain their altitude, perhaps consuming aerial flora. Or, bird-like animals might fly into the upper atmosphere and glide, becoming dormant for a while perhaps to avoid predators or harsh environmental conditions, although what could be harsher than the upper atmosphere? Incidentally, this is still in the troposphere, so in a sense it would not be the “upper atmosphere” as lift and drag would still have to apply.

Applying camel physiology to a low-oxygen (assuming it is oxygen) environment, there’s the efficient use of oxygen in the body, akin to the low level of water in the urine, the storage of oxygen in special corpuscles which are somewhat like red blood corpuscles but hold onto their oxygen for longer, and the chemical conversion of compounds in storage to release molecular dioxygen. On the subject of dioxygen, ozone would be a slightly more efficient way of packaging oxygen and hydrogen peroxide a considerably more efficient one, although the peroxide would have to be protected from catalase and the body would have to be protected from it; something similar already occurs in white blood cells, which use hydrogen peroxide to kill microbes. The human body is 65% oxygen by mass, although little of this could be usefully released without causing fatal chemical reactions. A space camel could also have an extra lung used solely for storage, which could exhale into the other lungs when needed. As it stands, most of the oxygen inhaled into human lungs emerges from them unused. This could be remedied by compression and the removal of carbon dioxide.
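
Here is a rough comparison of how much oxygen each of these options packs per litre, as a minimal sketch with idealised assumptions (body temperature, ideal gas behaviour and complete release of the peroxide’s available oxygen):

```python
R, T, M_O2 = 8.314, 310.0, 32.0   # gas constant, body temperature (K), g/mol

def gas_o2_grams_per_litre(pressure_pa, fraction_o2=1.0):
    # Ideal gas: moles per litre = P / (R T) / 1000
    return pressure_pa / (R * T) / 1000 * fraction_o2 * M_O2

air = gas_o2_grams_per_litre(101325, fraction_o2=0.21)  # ~0.26 g/L
pure_o2 = gas_o2_grams_per_litre(101325)                # ~1.26 g/L
ozone = pure_o2 * 1.5   # O3 carries three O atoms per molecule, not two

# Liquid hydrogen peroxide, density ~1.45 kg/L; 2 H2O2 -> 2 H2O + O2
# releases 32 g of O2 for every 68 g of peroxide.
h2o2 = 1450 * 32.0 / 68.0   # ~680 g of releasable O2 per litre

for name, grams in (("air", air), ("pure O2", pure_o2),
                    ("ozone", ozone), ("liquid H2O2", h2o2)):
    print(f"{name:>12}: {grams:8.2f} g O2 per litre")
```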

Therefore, I think there could be space camels, and environments in which that would be a useful adaptation, if there are aliens at all, but they might not be able to breathe oxygen and might even burst into flames if they landed here unprotected. Or, they could be like enormous inflatable camel balloons floating through the stratosphere. Burning giraffe anyone?

Overthought Mirror Universe Head Canon

It may not be obvious to more recent readers of this blog, but there used to be substantial ‘Star Trek’ content here. I reviewed every episode of TOS and gave a more general overview of the Animated Series and TNG. You can probably find them if you search for episode titles. I think there are around fifty of them. However, I am not a Trekkie or a Trekker. I don’t have a problem with Trekkers. It’s just that I think TV and cinema are not ideal media for science fiction because they rely more on the visual than the cerebral, and often have no choice but to appeal to a wider audience, which can lead to watered down content and in particular scientific implausibility, which I find really grating and distracting.

Spoilers for ‘Star Trek’, The Iliad and ‘Buffy The Vampire Slayer’ follow.

That said, I do have a particular interest in the Mirror Universe concept of ‘Star Trek’ and have given it considerable thought. Just in case you don’t know, the ‘Star Trek’ “universe” is in fact more of a duoverse, if that’s the word. Whereas it does have various parallel timelines, one of its biggest distinctive contributions to popular culture is the idea of “dark” and “light” versions of its universe, although the emphasis is of course very much on the light one. This idea has been adapted to other franchises, in the case of ‘Buffy’ in at least two different ways.

The idea is introduced in ‘Mirror, Mirror’. The away team are on the Halkan homeworld, having failed to negotiate for dilithium mining rights, and beam up during an ion storm. This leads to them teleporting aboard an Enterprise in a universe very unlike their own, in the sense that all the worst parts of human behaviour have come to the fore, the best parts are repressed and the Terran Empire holds sway. Meanwhile, their counterparts from that universe have arrived aboard the Enterprise we know and love. The Terran Empire is basically fascist. “Behaviour and discipline has become brutal, savage”, as Kirk puts it in his log in his much-imitated style. This mirror universe concept was developed in later works, both canonical and non-canonical, such as ‘In A Mirror, Darkly’, a number of DS9 episodes and notably in ‘Star Trek Discovery’, which however I haven’t seen because I dislike the general ethos of the series.

People have had various thoughts about the nature of the Mirror Universe which often involve the common idea of a point of divergence (POD), used to explain alternate timelines in general. That is, a particular event in the past turned out differently, leading to a fork in history. This is a common science fiction trope and can be seen in ‘It’s A Wonderful Life’ and ‘SS-GB’ for example. One claim, made in non-canonical writing, is that the POD occurred during the Trojan War, when Achilles kills Priam rather than showing him mercy, but even accepting that this occurred, it could also be seen as symptomatic of the general atmosphere of the universe rather than a specific turning point. I admit to not having read the book in question because, as I’ve said, I’m not really a Trekker.

What looks at first glance to be a very fruitful possibility here is Harlan Ellison’s ‘The City On The Edge Of Forever’, which I reviewed here. Dr McCoy goes back to the 1930s CE and rescues a peace campaigner from being killed in a car accident, which leads to the US becoming non-aggressive in the Second World War and the triumph of Nazism, and Kirk and Spock have to follow him to put history back on course. This might be expected to lead to a scenario where there’s a fascist interstellar empire dominated by humans, but in fact there is apparently no Enterprise at all, and quite possibly no interstellar human presence. This would not have happened in the Mirror Universe. Instead we would’ve seen the hostile, aggressive version present in ‘Mirror, Mirror’.

There are a few other suggestions. One is that the Terran Empire is a continuation of the Roman Empire, which I imagine would accord quite well with the Trojan War turning out differently. Another is that it simply represents the triumph of fascism in the mid-twentieth century, and a third suggestion is that it means the Age of Enlightenment emphasised opposition to democracy more strongly. However, the problem with all of these is that if it were as simple as a mere POD, or even several, we wouldn’t see what we do on screen. From a real-world perspective, it isn’t possible to show a completely alternative dramatis personæ from the majority of the episodes in a given series, so instead the same characters exist with different personalities. One impressive thing about ‘Star Trek’ is that it manages to make a virtue out of the necessity of working within the constraints of being a popular TV series, walking a tightrope between being liberal-progressive and still acceptable for mainstream American TV, and that kind of constraint can be very stimulating to creativity. The presence of the same cast and props, scenery and the like is a different kind of restriction, but one which has been used very cleverly in these episodes.

Like some other people, I would go a different way with the idea. One possibility, and I think the answer, is to be found in a surprising place: phasers.

There is another episode of the original series which I think goes some way towards explaining what’s going on if you choose to accept it. In ‘The Tholian Web‘, the Enterprise discovers a “ghost ship”, the USS Defiant, which Spock establishes is trapped in an “interphase”, and humans affected by it become aggressive and murderous because the fracture in space “damages” the human nervous system. Kirk vanishes but appears in a mirror in Uhura’s quarters. It turns out he’s appearing at regular intervals and is beamed aboard, leading to him becoming permanently physically manifested.

In the mirror universe in 2155 CE (‘In A Mirror, Darkly’) the Tholians detonate a tricobalt warhead inside the gravity well of a dead star, creating an interphasic rift to 2268 in the “Prime” universe. This is retconned as the cause of the deaths of the Defiant’s crew in a mass murderous rampage, and allows the Terran Empire to access twenty-third century technology.

Phasers and disruptors work by producing artificial particles called nadions. They can also be used to close subspace fractures, similar to the fractured space encountered by the Enterprise in ‘The Tholian Web’. In some TNG episode I can’t track down, Geordi La Forge and one other character find themselves on an empty version of the Enterprise while having apparently disappeared from the prime version.

This is what I think nadion particles do. In the real world, and presumably in the Star Trek duoverse, particles manifest as waves of probability. If the likelihood of them being in a particular position in space is plotted on a graph, this will show up as peaks and troughs like a wave form. These waves have a particular phase. When a quantum goes out of phase, if it’s a boson it can cancel out another boson and there can instead just be nothing in that position. Fermions are different due to their spin and cannot cancel each other out. Nadions, in my opinion, change the phase of particles in general, such that they cannot interact with particles in the prime universe. It’s also known from Star Trek canon that there is a void between the prime and mirror universes. When a phaser or disruptor is fired at a life form or object, it doesn’t destroy the object or kill the life form, but shifts its phase so that it is no longer in ordinary space but in the interphase void. This is what happened to Kirk in ‘The Tholian Web’, although in his case the particles making up his body hadn’t been fully shifted out of phase and therefore periodically came back into phase before slipping back out, like an interference pattern. This is nightmare fuel, because it means that when a phaser or disruptor is fired at someone, rather than killing them, it shifts them into a void where they may, depending on how well they’re protected, suffocate or die of thirst slowly over a period of days in black nothingness.

Now back to the mirror universe. The mirror universe is out of phase with the prime universe. People in the mirror universe have the same disruption to their nervous systems as was seen in ‘The Tholian Web’, making them more aggressive and violent. However, their societies and biology have evolved to cope with this. In the meantime, in the prime universe we tend to see people behaving in a much more peaceful and calm manner than they do in our own world, which we generally tend to put down to the fact that they’re living in a post-scarcity utopia. This, in my head canon, is not the case, or rather it is, but there’s a cause for it. I would claim that the prime universe comprises matter in an optimal phase. Hence the mirror and prime universes are not separate timelines but two versions of the same timeline. Moreover, they depend on a third, more fundamental universe which is intermediate. Events in both of them are dragged along by this third universe and don’t follow exact cause and effect, because if they did there would be very rapid and radical divergence between the two other universes. There must be a common controlling factor between them. The “prime” universe is in fact not prime at all, but as divergent as the mirror one.

Finally, I would also claim that this third fundamental universe is our own reality, not literally of course because ‘Star Trek’ is fiction, but in the sense that our future is neither dystopian nor utopian but something in between. We can glean certain things about our future from the nature of both universes, such as the fact that there are other intelligent life forms in the Universe, that the protagonists we encounter in them also exist in our own future and that there is some space-faring organisation involving humans, but it’s a kind of average place.

To conclude, I do think it’s worthwhile as well as entertaining to speculate in this way because applying real world physics to ‘Star Trek’ to see how it would be difficult to make work helps one to understand how the actual Universe works. For instance, if what I’ve just suggested is coherent it would mean that there are no fermions in the ‘Star Trek’ universe, which is true in a sense because it consists only of images on screens and the photons which impinge upon our retinæ. This also connects to the Holodeck, Emergency Medical Hologram and Captain Proton threads, since in ‘Star Trek’ it seems that light does resemble the matter composing the likes of Picard, Janeway and the Enterprise much more closely than it does in reality. Also, it provides two fruitful sources of fan fiction: an intermediate, morally neutral future involving the same characters and setting, and a horrific void into which the victims of phasers are ejected to die slowly and horribly. So it’s all good.

Pathological Science And The Procession Of The Damned

It’s easy to get the impression that science is very different than what it ideally is, and also to confuse science with more broadly-based rational thought. There is a lot of good science out there, of which I’m a fan, but we tend to think of it as a collection of facts about the nature of things rather than something provisional and founded in a specific method. Crucially, we also forget that it’s practised by people who, although they try hard to exclude bias from their research, are part of a larger social system which has its own influences, and that we all have cognitive biases which stop us from clearly perceiving evident truth. Also, there is sometimes a built-in bias which is there for good reasons but which, for all we know, means that science is unable to venture in certain directions which may nonetheless be fruitful if only we could reach them. Scientists themselves generally agree that they’re fallible.

In this post, I want to talk about the phenomena of pathological science and Forteana, which are opposites. Pathological science happens when researchers are deceived by the likes of personal bias or marginal results into accepting certain concepts as findings. Forteana are phenomena which others, often non-scientists, believe exist but which are rejected by science, where that rejection looks from the outside like prejudice. This includes things like frog falls, the Loch Ness monster, Bigfoot, UFOs as alien spacecraft and spontaneous human combustion. Now I’m not at all going to say I believe personally in all of these things. True Forteans, who may be Scotsmen, have a slogan: “As a Fortean, I have no opinion”. I’m not sure everyone describing themselves as Fortean subscribes honestly to this refusal to commit, but it’s what we’re supposed to do (if I can say “we”: I’m not sure I’m actually Fortean myself).

I’ll start with some examples of pathological science.

X-rays definitely seem to be real. They were discovered by Wilhelm Röntgen in the late nineteenth century. At the time, they must have seemed almost magical. Here was a kind of invisible ray which could shine through cardboard and excite a fluorescent screen, and his wife became the first person to receive an X-ray, of her hand, of which she said “I have seen my death”. In fact, as well as being dangerous to living tissue, which was not fully appreciated by physicists at the time, they are quite spooky in this way because we associate skeletons with death.

A few years after the discovery of X-rays, another scientist appeared to discover an analogous form of radiation he referred to as N-rays. Other kinds of “rays” had been found recently, such as electrons and ionising radiation from unstable atoms, so rays were in the air, as it were. Prosper-René Blondlot was a respected French physicist, the first to measure the velocity of radio waves in 1891 by adapting the rotating mirror method used for visible light. This is where a rapidly rotating polygon of mirrors reflects a beam of light falling on it in a wide circle; a related method interrupts the beam with a spinning cogwheel instead. I’ve tried this with a telescope and a spinning flat mirror several kilometres from a high-rise building, set on a hill, but it was too hazy to work. His work contributed to his positive, and fairly earned, reputation as a respectable scientist. In 1903, he made a famous and influential mistake when he noted the fluctuation of a spark when exposed to X-rays. He went on to claim that there was another form of radiation emitted by everything except green wood and certain metals, which could be refracted using an aluminium prism. Various people, notably Lord Kelvin, attempted to replicate his results without success, but more than ten dozen other scientists said they’d managed to do so. Their existence was only refuted when a visiting scientist, Robert Wood, surreptitiously removed the prism and the same results were reported, which could not have happened if the prism had really been refracting N-rays. It’s said that Blondlot went mad as a result of this, but this is disputed and seems to be about as valid as his concept.
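
The cogwheel variant just mentioned is simple enough to work through: the returning beam is first eclipsed when the wheel advances half a tooth-period during the light’s round trip, so 2d/c = 1/(2Nf) and c = 4dNf. A minimal sketch using Fizeau’s well-documented 1849 figures:

```python
d = 8633.0   # one-way distance of Fizeau's beam, metres
N = 720      # teeth on the wheel
f = 12.6     # revolutions per second at the first eclipse

c_estimate = 4 * d * N * f
print(f"{c_estimate:.3e} m/s")   # ~3.13e8 m/s, within ~5% of the true value
```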

It’s fair to note that the scientific method in this case led to self-correction after only a short period of time, but it’s also fair to note that the initial acceptance and self-deception were related to his reputation, which is an example of ad hominem reasoning, and in another way the influence of ethos. The ad hominem “fallacy” is usually invoked for the likes of “they would say that, wouldn’t they?” and is complicated by the fact that it is in fact a form of inductive inference, and scepticism regarding that is entirely valid but most people do accept its utility. In this case, ad hominem works in the opposite direction. It refers to a situation where someone’s reliable scientific research and breakthroughs have led to them being in a position where their opinion, which was genuinely held, was accepted less critically than usual. That said, it did end up being corrected.

Phlogiston and luminiferous æther are further examples of this which, unlike N-rays, were much more insidious and persistent. It used to be believed that combustion involved the removal of a substance from the burning material which led to its degenerate state. When burnt materials were found to gain weight (strictly speaking they all do, but they also give off fumes and smoke so it isn’t obvious), it was concluded that phlogiston must have negative weight. I often wonder if science could have somehow sustained that into later times by simply accepting the idea, which would’ve given oxygen a negative atomic mass. It’s possible to extend it slightly by classifying reactions into constructive and destructive and then asserting that combinations which are the latter are with elements whose mass is negative, but that probably wouldn’t allow for the periodic table and there would be difficult facts to reconcile, such as chlorine having a positive mass or methanol being denser than methane. One version of a warp drive involves a shell of partly negative mass surrounding the spacecraft, so maybe if the idea had been more persistent we might have ended up with physics including the concept, which for physics as it is turns out to be very hard to justify. It’s the road not taken, and certainly from here it looks impossible, but the countless minds applied to make the system we have now work have not been applied to the alternative.

Luminiferous æther is more recent. Æther was originally the fifth element, alongside earth, air, fire and water, also known as quintessence. Once it was realised that light propagates as waves, as does radio, it was concluded that the waves had to be in a medium of some kind, but apparently not one made of atomic matter. As time went by, more and more elaboration had to be applied to this medium, which made it increasingly contrived. To carry vibrations of as high a frequency as gamma rays it would have to be stronger than steel, and yet somehow it applies no friction to heavenly bodies, since their orbits don’t constantly shrink. Also, the Michelson-Morley experiment, which involves firing beams of light in a cross shape and observing their interference patterns, shows that light travels at the same speed relative to the observer regardless of whether it’s moving in the same direction as Earth’s orbital motion, at right angles to it or in the opposite direction, meaning that there is no rigid medium in which light waves move. It was however noted that for this to be possible, objects had to contract in the direction of motion, which perhaps surprisingly they do.
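
The expected effect was not small by the standards of interferometry; a minimal sketch using roughly the 1887 figures gives the fringe shift a stationary æther should have produced:

```python
# Expected fringe shift if light moved through a stationary aether:
# delta = (2 L / lambda) * (v / c)^2
L = 11.0       # effective arm length of the 1887 apparatus, metres
lam = 5.9e-7   # wavelength of the sodium light used, metres
v = 3.0e4      # Earth's orbital speed, m/s
c = 3.0e8      # speed of light, m/s

shift = (2 * L / lam) * (v / c) ** 2
print(f"expected shift: {shift:.2f} fringes")   # ~0.4; observed: essentially none
```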

Comparisons have been made between the æther and dark matter. Of course, mainstream modern-day physicists deny this because they believe dark matter was a conclusion forced upon them. I should probably be more specific about the kind of dark matter I don’t accept. The reason astrophysicists believe in dark matter is that the rate at which galaxies rotate and move towards and away from each other strongly suggests they’re much more massive than they look. It’s as if Earth only took a few weeks to orbit the Sun, so it’s understandable that people might believe in it. I believe in it in the sense that there are probably large numbers of rogue planets, brown dwarfs, free particles and a lot of dust and rocks in galaxies which can’t be observed, but not in the sense that there’s a whole category of matter which is in principle only detectable by its gravitational pull. Another possibility is that dynamics work differently over long distances, and in fact there’s a precedent for that in the fact that geometry becomes less and less Euclidean over long distances. It isn’t that outlandish. And I do believe that non-baryonic dark matter is pathological science.
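
The rotation-curve discrepancy is easy to sketch. Assuming, purely for illustration, a large spiral whose luminous mass of about 10¹¹ solar masses sits mostly near its centre, Keplerian orbital speeds should fall off with distance, whereas measured curves stay roughly flat:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
KPC = 3.086e19     # metres per kiloparsec

M_luminous = 1e11 * M_SUN   # assumed luminous mass, mostly central

for r_kpc in (5, 10, 20, 40):
    v = math.sqrt(G * M_luminous / (r_kpc * KPC))   # circular-orbit speed
    print(f"r = {r_kpc:>2} kpc: predicted v = {v / 1000:.0f} km/s")
# Predicted speeds fall as 1/sqrt(r); observed curves stay flat at
# ~200 km/s or more, which is the gap dark matter is meant to fill.
```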

Polywater is an example of pathological science which happened to get enshrined in ‘Star Trek’, a bit like the idea that quasars are objects within our own galaxy. It was found that water repeatedly sent through quartz capillary tubing ended up much more viscous than ordinary water, with different boiling and freezing points at the same pressure. It was thought that the molecules were joining together in chains, making them harder to disrupt, and there was also concern that it might convert all the water it came into contact with into polywater, like ice-nine in Vonnegut’s ‘Cat’s Cradle’ or strange matter as mentioned a couple of days ago. However, it turned out just to be water with other stuff dissolved or suspended in it, such as small particles from the sides of the capillary tubes. It took a dozen years to reach this conclusion, and in the intervening period it was even considered important enough that there was concern that the Soviet Union was ahead of NATO in researching it.

I could go on producing examples of pathological science, but it’s important to note that all of these except non-baryonic dark matter, which I might be wrong about of course, were eventually shown not to exist. The trouble is that we don’t know whether there’s any pathological science going on right now, my suspicions about dark matter notwithstanding. This isn’t about malice or misconduct either. It’s about what people do in groups and the difficulty of maintaining a unique or unpopular opinion under pressure, particularly when that has consequences for your career. Even here, I’m not claiming dishonesty but peer pressure having psychological effects, and I’m aware that scientists make assiduous attempts to avoid this, but they’re human beings like the rest of us, and it can be very difficult to detect one’s own cognitive biasses, particularly when everyone else around you has the same biasses about the same concerns. It happens in all sorts of groups such as companies, countries, families, religious sects, political parties, charities and professions. Why would it not also happen in the scientific community? Moreover, given the ideally neutral and objective nature of the scientific method, is it not better to be aware of that bias and assert that it exists than to deny it without examination? That said, there is apparently no foolproof method for avoiding this kind of thing, although having a research community around one helps. Maybe what works best is lack of isolation. Then again, Mendel, Wegener and even Einstein were working in isolation and came up with now well-accepted theories, so sometimes isolation is fruitful.

The other side of this is Forteana. These are things which are said to be unfairly excluded from mainstream science, so in a sense, if they’re real, they are the opposite of pathological science in that they are phenomena which could be studied but aren’t. I should point out at this stage that there’s a possible third category of things which could exist but can’t be studied. Ball lightning is close to falling into the last one because it occurs so sporadically.

There are a couple of good examples of now scientifically established things which were long considered fantasies. Meteorites used to be thought impossible because they are pieces of rock or metal falling from the sky, which was previously considered to be an impenetrable barrier, with only Earth below being made up of rocks. Will O’ The Wisp, a glow seen above graves and in marshes, was similarly considered superstitious and directly connected to ghosts, and was therefore completely rejected by scientists, but is nowadays put down to natural causes such as luminescent organisms or the ignition of gases given off by decaying organic matter. However, there are many other marginal things in which people believe which don’t seem to fit into the established scientific network of theories, and other things which do at first glance but become problematic in detail. For instance, I’ve always thought it was odd that there seemed to be more evidence for the Sasquatch than the Yeti, because in prehistoric times there actually were apes almost three metres tall near where the latter are reported, but what’s known about North American prehistory seems to rule the former out there. Likewise, the Loch Ness Monster is practically guaranteed not to exist because of the lack of food in the loch and because a relict of Mesozoic fauna would have repopulated much of the rest of the world, but like the Sasquatch and Yeti those are theory-laden conclusions. I don’t personally believe in any of those, but cryptids nevertheless have sometimes been found to exist.

It’s interesting to look at the situation of people who support, and to some extent research, Forteana professionally. These would include ufologists, parapsychologists, ghost hunters and those who look into the existence of cryptids. Studies have shown that such people are often outsiders who feel excluded from or distrust formal academia. This may be due to the kind of paid work they’ve been able to find, lack of educational attainment or personality factors, but it often means they are isolated from an academic community of their peers, and there’s another way of looking at this. Each one of us is potentially such a person, including university researchers. What stops us from following our own blind alleys is having other people to give us reality checks and discuss our work with us, along with training in research methods. It’s easy to be supercilious about such people, but really anyone could be in their position. All of this, of course, is based on the idea that they’re wrong, and this isn’t always so.

There are also exceptions to this stereotype. For instance, Sir Peter Scott, the ornithologist and son of the Antarctic explorer, was a firm believer in and researcher into the Loch Ness monster. He was a founder of the WWF and set up the Wildfowl and Wetlands Trust. Scott had a Cambridge education in natural sciences but actually graduated in Art History. Another example is Charles Berlitz, from the family which founded the language courses. He was a Yale graduate and polyglot, and also a believer in Atlantis, the Bermuda Triangle and the ancient astronaut hypothesis. He did a fair bit of his own research into words from widely separated, unrelated languages which were the same or similar, which he used to argue for the existence of an original Atlantean tongue from which they were all descended, and this was his actual area of expertise. Hence it does kind of happen that people who are respectable, i.e. part of the establishment for what that’s worth, and also adept academically, can take Forteana seriously. However, although I believe Scott was in earnest about the Loch Ness monster, I get the impression that Berlitz may have taken a different attitude, and may not have taken the content of his writing that seriously even though he purported to be a believer in the various different subjects he was promoting.

Charles Fort himself was self-taught and a journalist, and like Berlitz and Scott was independently wealthy. His attitude was somewhat different and broader. His writing style is idiosyncratic, unlike that of most journalists or any other people who write for a living, but what can be gleaned from his books seems to be that he was concerned that certain things he believed to be real, and for which he thought there was a lot of evidence, were rejected out of hand by scientists, and crucially, although he did offer explanations, they tended to have a facetious flavour. For instance, he claimed that there was a planet called Genesistrine near Earth which was having a conversation with this planet in the form of objects ejected from its surface, as if they were love letters. It’s hard to tell, but it feels like he didn’t really mean it. Rather, his hypotheses come across as surreal brainstorming which served as an antidote to the conservatism he perceived to be innate and harmful to the scientific approach, or rather, the approach taken to discover things. However, he was also apparently a geocentric flat Earther. Is there any other kind? This would suggest he was also zetetic.

Zetetic cosmology is the basis of most serious non-religious flat Eartherism. The foundational concept is that one trusts what one perceives directly as an accurate representation of reality. Thus Earth appears to be flat, the Sun appears to orbit it, and the stars appear to be fundamentally different from the Sun, fixed and set in a dome above the flat Earth, with the planets orbiting this version of Earth between us and them. The word “zetetic” is used here in a more restricted sense than its wider, and quite rare, usage. The trouble is that to truly trust the evidence of the senses, one would have to enquire more deeply into the discrepancies in that experience. We now live in a different world from Fort’s, with regular air travel and global telecommunications, but even in his time he would have had train timetables available to him, which can be used straightforwardly to refute the idea that Earth is flat, even if one believes that ships disappearing over the horizon are due to refraction of light.

It would be ad hominem to distrust the whole corpus of Fort’s beliefs just on the basis that he was a flat earther, not least because it isn’t clear that he took any of his own beliefs seriously. He seems to have considered most of his conjectures to be highly provisional to the extent that he wouldn’t commit himself to them, and that is close to how science would ideally operate, so although he might seem to be a flat Earther, this doesn’t accord with his idea that Earth and Genesistrine are both equal planets conversing with each other. Was Genesistrine also supposed to be flat? Having said that, he also suggested there was a “Super-Sargasso Sea” out of which frogs and other organisms and objects fell onto us.

The zetetic position sounds like scepticism, but it occupies an awkward intermediate position, because it does trust the evidence of the senses, which the Cartesian method of doubt does not. Hard scepticism would take a much stronger position, such as doubting the existence of subjectivity or of the external world, to start from the two opposite ends. Zetetics, in a way, don’t doubt enough, but they do distrust authority. The trouble is really that they confuse distrust of authority with distrust of logic or rationality. That said, there are often vested interests and cognitive biases.

Probably the two most important philosophers of science in the twentieth century were Karl Popper and Thomas Kuhn. Popper’s view of scientific progress was that it cannot rely on induction, because induction is fallacious reasoning. That is, constant observation of two phenomena occurring together with no exceptions is not enough to arrive at a theory, because there’s no logical way to conclude that this is invariably so: nothing rules out the very next observation contradicting the pattern. Therefore, Popper sees theories as propositions which are to be falsified. That is, theories must be susceptible to being shown to be false by experiment or observation, and only one counterexample is needed to refute a conjecture. Actual objective truth is at an infinite distance from science regardless of its current form, although Popper does not emphasise this particular angle. His view that science does not aim at truth but at verisimilitude is often criticised, but it’s the aspect of his philosophy which most appeals to me. I believe he has other problematic views, notably in politics, but his philosophy of science, though it isn’t quite the same as mine, is actually pretty good.

My views are different because I’m influenced by Thomas Kuhn. For Kuhn, science does indeed operate according to Popper’s views some of the time, but on the whole it doesn’t. He emphasises the social interplay between scientists: his view of science is kind of like “office politics”. Science for Kuhn changes through revolutions. People new to a field experiment and come up with testable hypotheses, which then become theories if they are not falsified, as in Popper, but what actually determines their acceptance is the reputation of the scientists concerned, meaning that a new paradigm only becomes entrenched, or established to put it more politely, when there are enough scientists holding it in positions of influence. It’s when this paradigm shift takes place that science works the way Popper says it does. Kuhn is responsible for the term “paradigm shift”. I personally think Kuhn’s views are close to Marxism, and therefore correct, although incomplete.

Applying Kuhnian epistemology to Forteana, it seems feasible that there is an outer darkness in which phenomena occur without being taken seriously by the scientific community, but it’s very difficult to distinguish between them and “nonsense”. They can, however, have an emotional or figurative truth to them in the same way as scientific propositions can, and it’s always important to take the psychological meaning of both accepted and rejected hypotheses and theories into consideration.

A couple of further distinctions need to be stressed. There doesn’t seem to be a reason why essentially non-investigable things wouldn’t exist. If we come up with a complex network of theories and concepts which account for all observable reality, that doesn’t logically rule out unobservable reality of which we can never know anything. There could also be things which can’t be investigated for other reasons, for example because they’re too influenced by observers, or, in the case of elementary particles, because they lie beyond any energy level human technology could ever produce. Finally, there could be underdetermination. What if there were two incompatible theories of everything, both of which took all observable data into consideration and were equally parsimonious? How could we choose between them?

Spin Is Not What It Seems

Nor is isospin, but then that’s less well-known.

Most of what people say about quantum physics focusses on things like entanglement, acausality and uncertainty, with a kind of mystical bent, but there’s also something else which most people ignore which is equally weird, and on top of that something else again which is just as weird, if not weirder. These two things are spin and its oddly- but appropriately-named sibling isospin.

It’s been said that if you think you understand quantum mechanics, it means you don’t. This may or may not be true and there are different opinions about what it actually means, but I would say this is also true of spin and isospin. I’ll deal with spin first.

If you hold a spinning gyroscope, you can feel the difficulty of shifting it from the direction it’s pointing in. If it’s one of those small toy ones, it won’t wrench you off your feet, but its rotation will be transferred into your body if you’re standing. In a swivel chair, a sufficiently large and massive rotating object will rotate the chair if you try to tilt the object to a different angle. This principle is useful, and is for example employed to stabilise rifle bullets, spacecraft and compasses. Whereas magnetic compasses are useless near the poles, gyrocompasses, mounted so they can swing freely as they move, will continue to point north if they’ve been set up that way in the first place. A spacecraft will tumble unpredictably in space unless it’s stabilised in some way, and one way of doing so is to make it spin as it launches, which keeps it pointing in the same direction. This spin is often counteracted later by ejecting something spinning in the opposite direction, to ensure the spacecraft’s instruments or devices can face the requisite direction.

These are all illustrations of angular momentum. Momentum in general is the tendency of an object to keep moving in the same way unless something stops it, that is, unless another force acts upon it. This is true of masses moving in straight lines, and of spinning masses. They will continue to spin in the same way around the same axis unless something acts on them to shift them or slow them down, and when this happens, the momentum has to go somewhere, as rotation rather than as motion in a straight line. This rotational version is called angular momentum.

We tend to think of atoms as consisting of nuclei surrounded by electrons in orbit around them: that is, rotating. Ferromagnetism happens when the atoms in a material are all lined up spinning in the same direction, and only applies to very few materials, notably iron but also cobalt and nickel. If you think of atoms as gyroscopes, which they are, what you’re doing when you magnetise something is shifting the axes of rotation of a load of gyroscopes, and that angular momentum shift has to go somewhere. And it does. If you suspend a piece of unmagnetised iron in space in zero gravity conditions, or more accessibly hang it from a thread, then apply a magnetic field to it, this will to some extent magnetise the block and shift the atoms, and it will start spinning. This is known as the Einstein-De Haas Effect. Yes, that Einstein.

This change in angular momentum can be measured quite easily, because the mass of the iron is known and its rotation can be timed and observed. However, even if you take into account all the angular momentum involved in the shift of the orbits, it doesn’t account for all of the spin. This is because the electrons themselves are tiny magnets pointing in a particular direction, and the magnetic field aligns not only the atoms but also the electrons. Now here’s the crucial question. How can an electron point in a particular direction? The answer is that it behaves as if it has an axis of rotation, and this accounts for the discrepancy. The difference between the angular momentum calculated from the orbital motion alone and the rotation actually observed allows the spin of the electron to be found.
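
To get a feel for the scale involved, here’s a back-of-envelope sketch in Python. Everything in it is an assumption for illustration – a ten-gram rod, one flipped spin-½ per atom – so it’s not a model of the real experiment, which measured tiny torsional oscillations of a rod hanging from a fibre rather than free rotation.

```python
# Rough Einstein-de Haas estimate: how fast would a small iron rod turn
# if every atom contributed one flipped electron spin?
# (Illustrative numbers only; the real experiment used resonance methods.)

HBAR = 1.055e-34      # reduced Planck constant, J s
N_A  = 6.022e23       # Avogadro's number
M_FE = 55.85e-3       # molar mass of iron, kg/mol

m = 0.010             # mass of the rod, kg (assumed)
r = 0.002             # radius of the rod, m (assumed)

n_atoms = m / M_FE * N_A
# Flipping a spin-1/2 from "down" to "up" changes its angular momentum
# by hbar (two halves), so aligning all of them transfers roughly:
L = n_atoms * HBAR    # total angular momentum, J s

I = 0.5 * m * r**2    # moment of inertia of a solid cylinder about its axis
omega = L / I         # resulting angular velocity, rad/s

print(f"{n_atoms:.2e} atoms, L = {L:.2e} J s, omega = {omega:.2e} rad/s")
```

The answer comes out at well under a thousandth of a radian per second, which is why the effect is so subtle and took such delicate apparatus to measure.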

And this is where things get really weird.

If you calculate the spin of an electron, either assuming some finite size for the particle or the much more likely scenario of the electron being a point in space, there is an imponderable problem. If the electron has any size at all consistent with experiment, then in order to generate the amount of angular momentum it has, its surface would have to be moving faster than light. If, on the other hand, an electron is a point, it’s featureless except for location, so how can it be spinning at all? A point in space doesn’t have a direction or an axis of rotation in the conventional sense, so – huh?

This is not some abstract thing happening due to the vagaries of scientific theory either. A lump of iron really does start turning when magnetised, and taking into account all of the rotational movement of the electrons shifting in their orbitals is not enough to account for the exact rate of rotation. In the end, then, there seems to be only one possibility: spin is a fundamental property of matter. From our usual perspective, it definitely looks as if there are just objects which are not spinning, which we can rotate or which might start or stop rotating, speed up and slow down, and so on, as if it’s just another thing going on in the world, but that isn’t actually what’s happening. On a tiny scale, spin is an intrinsic property of matter, like electrical charge or its absence. Moreover, it’s quantised: on a small scale there are jumps between values rather than an infinitely smooth transition. Electrical charge works the same way: a particle is either neutral, carries the charge of an electron or the reverse of it, or, if it’s a quark, carries either -⅓ or +⅔ of that charge (or the opposite for the antiquarks), and the quark charges add up to a whole-number charge when they form a nucleon, which is just as well because otherwise atomic matter couldn’t exist. This will become relevant.
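
A quick check of that arithmetic, using Python’s exact fractions; the quark charge assignments are standard, and the code is just a sanity check:

```python
from fractions import Fraction

up, down = Fraction(2, 3), Fraction(-1, 3)  # quark charges in units of e

proton  = up + up + down    # uud
neutron = up + down + down  # udd

print(proton)   # 1 -> exactly the electron's charge, reversed
print(neutron)  # 0 -> neutral
```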

Spin has been described as “classically non-describable two-valuedness”: indescribable in the sense that it can’t actually be properly understood classically, but must exist. Subatomic particles don’t literally spin in the same sense as a wheel or a planet, but behave as if they do. All subatomic particles have a spin which is either a whole number or an odd multiple of ½. The former are bosons, the latter fermions. Fermions are “stuff” and bosons forces, so for example quarks and electrons are fermions and gluons and photons are bosons. Half-integer spin particles have a peculiar property which doesn’t seem to make sense, which is that to get back to their original state they need to be turned through not one but two full circles. How is this possible? Well, imagine a Möbius strip, which is a joined ribbon with an odd number of half-twists in it, usually simplified to just one. Following the edge around with your finger pointing to the right will reach the point where it points to the left after 360°. In order to get back to pointing the finger to the right, a further 360° of the strip have to be traversed. It’s easier to do this with a strip of paper or ribbon than to imagine it, for me anyway. This is a good model for how half-integer spin particles work and how it’s possible for them to have to be turned right round twice before they’re back in their initial state. Incidentally, there’s a short story called ‘A Subway Named Möbius’ where a complicated underground train system has one more tunnel added to it, which causes a train to disappear, and it doesn’t come back again until the tunnel gets blocked off. I’m not by any means saying that’s anything more than a fanciful story, but if a topological analogy of that kind can be made regarding such a fundamental feature of physical reality, albeit on a quantum level, it does make me wonder what’s possible. For instance, it’s possible to imagine that space as a whole is “twisted”, such that any journey round the Universe ends with one finding one’s home planet apparently mirrored, because the topology of three-dimensional space could in theory be analogous to a Möbius strip. A strip with an even number of half-twists, incidentally, is effectively not a Möbius strip at all.
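
This double-turn behaviour can actually be verified with a couple of lines of linear algebra. Here’s a minimal sketch using numpy and the standard rotation operator for a spin-½ state – nothing here is mine, it’s textbook quantum mechanics:

```python
import numpy as np

# Rotation of a spin-1/2 state about the z axis: U(theta) = exp(-i*theta*sigma_z/2)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def rotate(theta):
    # sigma_z is diagonal, so its matrix exponential is just the
    # exponential of the diagonal entries
    return np.diag(np.exp(-1j * theta / 2 * np.diag(sigma_z)))

I2 = np.eye(2)
print(np.allclose(rotate(2 * np.pi), -I2))  # True: one full turn flips the sign
print(np.allclose(rotate(4 * np.pi),  I2))  # True: two full turns restore the state
```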

This property of fermions, for complicated reasons I don’t understand, means that no two fermions can occupy the same quantum state. This is not the case with bosons. For instance, a laser consists of innumerable photons in the same state, which is only possible because they’re bosons, but it also effectively means that light cannot form structured matter. It can do things like form caustics and be focussed to a point, and the like, but fermions can build themselves into atoms. Nuclei have to consist of nucleons in different energy states, although they are less obviously in shells than electrons, and neutron stars also have to be in this condition – every single neutron in a neutron star is in its own distinct energy state. The electrons in an atom organise themselves much more clearly into different levels, in the form of the shells which enable the periodic table to exist, with heavier elements having more shells and different properties. Without that, there would be no chemistry and no materials as we know them. The fact that I don’t understand this is a source of discomfort to me which I feel very driven to remedy, but right now that’s how things are. It also makes me wonder about Bose-Einstein condensates. These are an unusual state of matter which happens when a low-density gas consisting of bosons is cooled to almost absolute zero and the atoms become overlapping waves, and ultimately a single, collective wave comprising all the atoms, because their wavelengths are larger than the distances between them. Although atoms are made of fermions, each atom as a whole can be a boson if the total number of protons, neutrons and electrons is even, so the possibility of attaining this state depends on the isotope as well as on which element the gas is, in a similar way to how helium-4 becomes a superfluid at a much higher temperature than helium-3. If the atoms were fermions, this would be impossible, because they wouldn’t be able to occupy the same state.
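
The exclusion principle drops out of the requirement that a multi-fermion wavefunction change sign when two particles are swapped. Here’s a toy sketch of that, with two particles and a crudely discretised “orbital” represented as a vector; the numbers are made up purely for illustration:

```python
import numpy as np

# Two single-particle states, represented as arbitrary (made-up) vectors
phi_a = np.array([1.0, 2.0, 0.5])
phi_b = np.array([0.3, -1.0, 2.0])

def antisymmetric_pair(f, g):
    # psi(x1, x2) = f(x1)g(x2) - g(x1)f(x2), laid out as an outer-product matrix
    return np.outer(f, g) - np.outer(g, f)

print(np.abs(antisymmetric_pair(phi_a, phi_b)).max())  # nonzero: two different states coexist
print(np.abs(antisymmetric_pair(phi_a, phi_a)).max())  # 0.0: the same state twice vanishes
```

Putting both fermions in the same state makes the whole wavefunction identically zero, i.e. there is no such state, which is the exclusion principle in miniature.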

For us to exist, spin must exist too, and there also have to be integral and non-integral varieties. It’s a sine qua non of our reality. The Multiverse, if there is one, presumably means there are other universes where there are, for example, only fermions or only bosons, or perhaps universes where there is no spin, but these are all very boring places. A universe with just bosons would have no structured matter but instead consist of rays of energy, and one with just fermions would have no structured matter either, because without bosons there would be nothing to carry the forces which bind particles together.

A particle is supposed to have mass, charge and spin. Charge can be positive, negative or neutral, and spin comes in the integral and non-integral varieties mentioned above, depending on whether quarks, leptons or both make the particle up, and like charge, spins add up. Neutral particles clearly do exist, for instance neutrons, whose existence can be deduced fairly easily with precise enough measurements. Chlorine has two common stable isotopes, and if one does something like react salt with something else in distilled water, or tries to make a saturated solution of pure sodium chloride in it, one is soon confronted with the fact that the ratio between the weights of the same amount of salt and of other substances involves an awkward fraction. This is because all normal chlorine is a mixture of two types of atom with different numbers of neutral particles, and these are neutrons. Mass-energy, charge and spin all have to be conserved in nuclear and other processes, so for example if a potassium-40 atom emits a positron, one of its protons must become a neutron and it becomes argon-40, and unstable particles decay into various different “fragments”, but these must all add up to the same charge and spin. Hence a negatively charged muon may become an electron with the same charge, but since an electron is so much less massive than a muon, the angular momentum still has to go somewhere, which is into a muon neutrino and an electron antineutrino. Likewise, when a neutron leaves the safe confines of an atomic nucleus it only has about a quarter of an hour to “live”, and will decay into a proton and an electron, conserving charge, and also an electron antineutrino. Neutrinos have extremely low mass, but observation of the 1987 CE supernova, 168 000 light years away, revealed that they do have some, because of the timing of their arrival here compared to light. Supernovæ produce bursts of neutrinos because the core collapses into a neutron star, the protons converting themselves to neutrons and emitting neutrinos in the process. There are three types of neutrinos, associated with tauons, muons and electrons.
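
Here’s a bookkeeping sketch in Python of the sort of accountancy involved, checking charge and lepton number in the two decays just mentioned. The particle properties are standard; the representation is made up for illustration:

```python
# Each particle: (charge in units of e, electron number, muon number)
PARTICLES = {
    "mu-":       (-1, 0, 1),
    "e-":        (-1, 1, 0),
    "nu_mu":     ( 0, 0, 1),
    "anti_nu_e": ( 0, -1, 0),
    "n":         ( 0, 0, 0),
    "p":         ( 1, 0, 0),
}

def conserved(before, after):
    # Sum each quantum number over a list of particle names
    total = lambda names: tuple(map(sum, zip(*(PARTICLES[n] for n in names))))
    return total(before) == total(after)

# Muon decay: mu- -> e- + anti_nu_e + nu_mu
print(conserved(["mu-"], ["e-", "anti_nu_e", "nu_mu"]))  # True

# Free neutron decay: n -> p + e- + anti_nu_e
print(conserved(["n"], ["p", "e-", "anti_nu_e"]))        # True
```

Leave the antineutrino out of either decay and the function returns False, which is essentially how the neutrino was predicted in the first place: the books didn’t balance without it.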

Neutrinos are a bit mind-boggling because they have almost no mass, no charge, and only spin, but they must exist because otherwise the accounts wouldn’t balance, as it were. However, there was a problem with solar neutrinos detected in the 1960s, when it turned out the Sun was producing fewer of them than current physics said it should. Until this was resolved, it was possible, though of course extremely unlikely, that for some reason the Sun had stopped working and that the light and heat we were getting from it was simply the last blast of a defunct star, so in a way it was quite worrying, but it’s okay now: the missing neutrinos turned out to be oscillating between the three types on their way here, which incidentally is itself evidence that they have mass.

Before I get to the next bit, I want to mention a much older form of philosophy than nuclear physics. Back in the day, there were supposed to be four elements opposed to each other: earth, air, fire and water. Each had two qualities, drawn from the opposed pairs dry and wet and cold and hot. Their atoms were also supposed to correspond to the Platonic polyhedra, of which there are five, which is how a fifth element came to be added to the four. All of this makes mathematical sense, and you can imagine flipping the eight-pointed star round, turning it through 90° and so on – it’s symmetrical. It could even have predictive power, in that if one of the elements were missing, its qualities could be determined, and it has correspondences in alchemy, psychology, astrology and humoral medicine, the last of which is actually useful in herbalism. However, it isn’t applicable to science as it’s usually practised today, and someone claiming to use it, as I just did, might be seen as undermining their ethos. Nonetheless, the symmetry is real.

By Mike Gonzalez (TheCoffee) – Work by Mike Gonzalez (TheCoffee), CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=284321

There’s also a symmetry in the physics of elementary particles, which allows one to anticipate where gaps may exist, implying other particles yet to be discovered. Symmetries can be analysed using group theory in mathematics. One of the most obvious places where this crops up is with Rubik’s cubes, where certain turns may be performed in a particular order to return the cube to its original state. With Rubik’s cubes there are also “orbits”. If you take one to pieces and put it back together arbitrarily, the chances are you will have placed it in another orbit, in which there is no arrangement with all the faces the same colour. There are in fact twelve of these. Groups also apply to arithmetic, so it makes sense to introduce them with that familiar subject. A group is a set together with an operation for combining its elements which is associative, has an identity element and gives every element an inverse. For instance, the integers with addition form a group, because adding zero doesn’t change a number, adding a positive number can be undone by adding the corresponding negative number, and it doesn’t matter where you put the brackets: (2+1)+3 = 2+(1+3). Likewise with a Rubik’s cube, turning the top row one twist to the right and then the right hand side one twist downward can be undone by turning the right hand side one twist upward and then the top row one twist to the left, and there’s also an identity element, in that if you leave the cube alone it stays the same, which sounds a bit silly, but these are just two examples of groups which can be easily understood. Group theory is relevant to crystallography and cryptography. Take this sentence, for example. ROT13 turns “Take this sentence, for example” into “Gnxr guvf fragrapr, sbe rknzcyr”, and applying ROT13 to that again turns it back.
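
Since ROT13 came up, here’s a tiny Python demonstration of it behaving as a group element of order two, i.e. one which is its own inverse (Python’s codecs module has a built-in rot13 codec, but this spells the idea out):

```python
import string

def rot13(text):
    # Shift each letter 13 places round the 26-letter alphabet
    def shift(c):
        for alphabet in (string.ascii_lowercase, string.ascii_uppercase):
            if c in alphabet:
                return alphabet[(alphabet.index(c) + 13) % 26]
        return c  # leave punctuation and spaces alone
    return "".join(shift(c) for c in text)

s = "Take this sentence, for example"
print(rot13(s))          # Gnxr guvf fragrapr, sbe rknzcyr
print(rot13(rot13(s)))   # back to the original: ROT13 is its own inverse
```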

Physics has various symmetries. For instance, there’s a symmetry between matter and antimatter, and there are others such as the correspondence between leptons and quarks: the electron, muon and tauon generations accord with the up/down, charm/strange and top/bottom quark pairs respectively. The names of up/down and top/bottom are not accidental, although there were moves to name top and bottom “truth” and “beauty” instead.

Group theory can be used to classify different forms of symmetry. Spin falls under the group SU(2). This is one of the Lie groups, which are groups which also behave like spaces. SU groups are “Special Unitary” groups, and I should point out here that I have never knowingly understood matrices, which were a significant hole in my mathematical knowledge at school because I could never work out how to multiply them, so I’m just going to have to let this pass and say: this is a thing, it’s out there, and that’s it. I can safely assume that anyone with at least a CSE in maths will get them and understand this better than I can, because it’s just my personal blind spot. Having said that, I will kind of give it a go.
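
For anyone who does want to peek inside, “special unitary” just means matrices which preserve lengths (unitary) and have determinant 1 (special). A small numpy sketch, constructing an SU(2) element from the standard two-complex-number parametrisation; the specific numbers are arbitrary:

```python
import numpy as np

# A general SU(2) matrix: [[a, -conj(b)], [b, conj(a)]] with |a|^2 + |b|^2 = 1
a, b = 0.6 + 0.0j, 0.0 + 0.8j          # arbitrary, chosen so 0.36 + 0.64 = 1
U = np.array([[a, -np.conj(b)],
              [b,  np.conj(a)]])

print(np.allclose(U.conj().T @ U, np.eye(2)))  # unitary: U-dagger U = I
print(np.isclose(np.linalg.det(U), 1))         # special: det U = 1
```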

There are six flavours of quark: up, down, strange, charm, top and bottom. These can be arranged in a hexagon and can be swapped to some extent. A neutron is two down quarks and one up, and a proton two up and one down. The names seem to relate to these properties, because swapping the up and down quarks in an atomic nucleus would swap its neutrons and protons. The symmetry between up and down alone is an SU(2) symmetry, like spin, and extending it to include the strange quark gives an SU(3) symmetry, and this is the other area in which the word “spin” has been used: isospin. Isospin is another property of matter which has the same kind of symmetry as spin but is not spin. Then again, spin in the subatomic sense is really quite far from our intuitive understanding of rotating objects, so calling this spin as well is, relatively speaking, not a big leap from the other kind. It’s also why the words “top”, “bottom”, “up” and “down” are used. Just as an electron can be thought of as having an arrow pointing up which can be flipped to an arrow pointing down, although it has no link with the gravity which determines up and down in everyday parlance, so can some quarks be thought of as “up”, flippable conceptually to “down”, and “top”, flippable to “bottom”. If this symmetry is applied to hadrons (mesons or baryons), they can be flipped to other hadrons with similar properties. An application of group theory revealed a gap in the pattern which turned out to be the omega-minus particle, consisting of three strange quarks; its detection confirmed that group theory could be fruitfully applied to isotopic spin.
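
A toy illustration of the up/down flip at the nucleon level – the quark content is standard, and the code is pure bookkeeping:

```python
# Nucleons identified by their quark content
NAMES = {("u", "u", "d"): "proton", ("u", "d", "d"): "neutron"}

def isospin_flip(quarks):
    # Swap every up quark for a down quark and vice versa
    swap = {"u": "d", "d": "u"}
    return tuple(sorted((swap[q] for q in quarks), reverse=True))

for quarks, name in NAMES.items():
    flipped = isospin_flip(quarks)
    print(f"{name} {quarks} -> {NAMES[flipped]} {flipped}")
# proton ('u', 'u', 'd') -> neutron ('u', 'd', 'd')
# neutron ('u', 'd', 'd') -> proton ('u', 'u', 'd')
```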

Why is it called “isospin” or “isotopic spin”? Well, nuclei are isotopes of various kinds, so for example there’s helium-3, made of two protons and one neutron, as well as helium-4, consisting of two protons and two neutrons, and tritium, an isotope of hydrogen comprising two neutrons and one proton. If the nucleons in these were swapped, they would respectively be tritium, helium-4 and helium-3. This is a form of symmetry pertaining to isotopes, and it influences their stability because there are certain isotopes of elements which would be stable whether or not the neutrons in them were protons and vice versa, and these are particularly stable isotopes. Extending this into the transuranic realm of synthetic elements, it’s possible to predict which isotopes of heavy elements are likely to be the most stable.

It’s also a system of classification, because at one point in the mid-twentieth century a large number of hadrons were known, almost all of which seemed to have no prospect of being part of ordinary matter or having any special relevance to it, which was very puzzling. Another, more recent, puzzle is whether this is just a case of making pretty patterns, albeit useful ones, out of elementary particles, or whether it reflects something profound about the nature of physical reality. Murray Gell-Mann, who thought this up, called it the Eightfold Way, after Buddhism’s Noble Eightfold Path, and Fritjof Capra has written extensively on the idea of links between subatomic physics and Eastern spiritual concepts such as Daoism. Western philosophers tend to think of this as jejune and crass.

There is an issue regarding what appears to be the appropriation of quantum physics ideas by the New Age movement in such films as ‘What The Bleep Do We Know?’ and ‘The Secret’. On the other hand, there is also the question of whether this is an excessively proprietorial attitude on the part of some nuclear physicists and academics. But that’s a topic for another post.

Mercury and Bepicolombo

Boy Mercury shooting through every degree

The B52s, ‘Roam’, (c) 1989 CE

Most people, if they wanted to associate music with the planet Mercury, would probably think either of Freddie Mercury or of Gustav Holst’s ‘The Planets’. Not me of course, because I can’t think of the obvious. It seems that this song has erotic innuendo which totally whooshed over my head, but that still doesn’t exempt it from being associated with Bepicolombo today. So far as I can tell, there’s nothing particularly special about today’s encounter compared to the series of other encounters which the probe will undergo over the next few years, but it’s also true that Bepicolombo is only the third spacecraft ever sent to the planet in question. The first, Mariner 10, flew past in 1974 and erroneously reported the presence of a moon, and I think it was also the one which established that Mercury didn’t simply show one face to the Sun all the time. Certainly this is what was reported in the popular science books and articles I read at the time. It also detected a significant magnetic field: apart from Earth, Mercury is the only planet in the inner Solar System with one.

MESSENGER was the next probe, whose mission took place in the first half of the 2010s. The problem is that Mercury is difficult to reach, because spacecraft bound for it end up moving very fast, and because it’s so near the Sun, the star’s gravity overwhelms the planet’s own at quite a low altitude above the surface, Mercury also being the smallest and least massive major planet. This would, incidentally, make the presence of any moons hard to maintain. But Mercury is not just a clone of Cynthia, even though the two look quite similar, and to some extent actually are.

Mercury and Earth are the densest planets in this Solar System. They also both have global magnetic fields, although Mercury’s is much the weaker. Its surface gravity is close to that of Mars, but because the planet is physically smaller and a lot hotter, it has much more difficulty holding onto an atmosphere, which is extremely thin and consists of what to us would seem like a bizarre array of gases: calcium, sodium, potassium, hydrogen, atomic oxygen, helium and molecular oxygen, along with water vapour. The hydrogen and helium are captured from the solar wind by the magnetic field, and I presume the water vapour is from ice in polar craters. Because the planet has hardly any axial tilt, there are craters near the poles, such as Chao Meng-Fu, which are in permanent darkness at their bottoms, and that’s where the ice resides. The metals are forced away from the surface by the Sun and form a tail so many millions of kilometres long along the orbit that they are something like an eighth of the way round before they become undetectable. This feature is shared with Jupiter’s moon Io, which also has a sodium tail. However, it seems a bit of an exaggeration to dignify the sparse atoms and molecules of gas hanging around near the surface as an atmosphere, since they never collide with each other like they would in an ordinary gas, but they do the same kinds of things as their counterparts on Cynthia, ricocheting off the surface, bouncing up and down and so forth.

So far as I know, Bepicolombo has no colour cameras. It was also originally going to carry a lander, which was cancelled in the end because it would’ve been too expensive. Both of these decisions, if the first is true, strike me as bad PR. Colour photos of Mercury, and data, and hopefully images, from the surface would surely be really impressive, and worth doing just to engage the public, but apparently not.

Just a quick infodump to get all this out of the way. Mercury is intermediate between Cynthia and Mars in size, is the densest planet in the Solar System other than Earth, and has a lemon-shaped orbit which is the most eccentric of any planet in the Solar System. It rotates once every fifty-nine days with a negligible axial tilt and orbits once every eighty-eight. It isn’t as hot as the solid surface of Venus during the day, at around 400°C, but at night the surface falls to something like -180°C, giving it one of the most extreme temperature ranges in the Solar System. It has a fairly heavily cratered surface, and it can be difficult to tell whether a small portion of the surface is Cynthia or Mercury. It was instrumental in corroborating the General Theory of Relativity, which accounted for the anomalous precession of its orbit, about forty-three arcseconds per century, which Newtonian gravity couldn’t explain; before Einstein it was thought that there was an even closer planet to the Sun, named Vulcan, causing this orbital perturbation. There are Mercury-crossing asteroids, including the relatively famous Icarus. Astrologers make much of Mercury going retrograde, meaning that it appears to reverse its direction in the sky, which it does often because it orbits inside our orbit and inevitably dips towards or falls away from the Sun from our perspective. There are even some professional astronomers who have never seen it, because it stays so close to the Sun and is smaller and further away than Venus, as well as reflecting less light. It can, however, be observed in broad daylight with the right telescope if you know where you’re looking, though this would be risky to the eyesight. I think that’s it as far as what I assume “everybody” knows about the planet.
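
The forty-three arcseconds figure is easy to check against the standard first-order formula for relativistic perihelion advance, which is 24π³a²/(T²c²(1−e²)) radians per orbit. A quick sketch, with orbital constants from memory, so treat them as approximate:

```python
import math

# Mercury's orbit (approximate values)
a = 5.79e10        # semi-major axis, m
T = 87.97 * 86400  # orbital period, s
e = 0.2056         # eccentricity
c = 2.998e8        # speed of light, m/s

# First-order GR perihelion advance per orbit, in radians
eps = 24 * math.pi**3 * a**2 / (T**2 * c**2 * (1 - e**2))

orbits_per_century = 36525 / 87.97
arcsec = math.degrees(eps) * 3600 * orbits_per_century
print(f"{arcsec:.1f} arcseconds per century")  # ~43.0
```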

The reason it used to be thought that Mercury always faced the Sun is that it rotates three times for every two of its years, and its synodic period (the time between successive closest approaches to us) is almost exactly two Mercurian rotations, so observers kept seeing the same face at the most favourable moments.
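
The near-coincidence is easy to verify; the orbital periods are standard, and the synodic period follows from the usual 1/P_syn = 1/P_inner − 1/P_outer relation:

```python
# The 3:2 spin-orbit resonance and the observational coincidence it caused
year_mercury = 87.97            # days
rotation = year_mercury * 2/3   # 3 rotations per 2 orbits -> 58.65 days

# Synodic period relative to Earth: 1/P_syn = 1/P_mercury - 1/P_earth
p_syn = 1 / (1/87.97 - 1/365.25)

print(f"rotation period: {rotation:.2f} d")      # 58.65
print(f"synodic period:  {p_syn:.2f} d")         # 115.88
print(f"two rotations:   {2 * rotation:.2f} d")  # 117.30 - close enough to fool everyone
```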

The above map was made by the Greek-French astronomer Antoniadi in the mid-twentieth century. Although like many such maps it’s pretty inaccurate, it does at least record the presence of the Caloris Basin, in the southeast, as Solitudo Hermæ Trismegisti. Some of the features on Mercury have quite odd names. For instance, there’s a series of cliffs called Pourquoi Pas Rupes, and the crater which defines the twentieth degree of longitude is called Hun Kal, after the Mayan for twenty. Caloris Basin is somewhat similar to the Mare Orientale on Cynthia, or Asgard and Valhalla on Callisto, being a vast impact crater, but Mercury doesn’t really have maria like Cynthia’s.

There should as far as possible be a link between people’s everyday experience and scientific phenomena. This is difficult with Mercury because it’s almost invisible to most people. If you believe in Western astrology, you’re probably aware of Mercury retrograde at least, and Mercury also transits the Sun more often than Venus does from where we are: it can be seen to cross the Sun’s disc, meaning that it can be projected using a telescope. Its transits happen in May or November, about once every seven years on average, and sometimes Mercury only passes over the edge of the disc. The planet is both smaller and further away than Venus when it transits. I have observed Venus doing it, and it gave me a major impression of the sheer size of the Solar System: even the nearest planet, practically Earth-sized, looked that tiny when it was closest to us. Mercury in transit is more like a lone Cynthia. One significant issue with Mercury’s transits compared to those of Venus is whether the black drop effect would be visible. When Venus, with her thick atmosphere, crosses the limb of the Sun, a kind of fuzzy dark bridge appears to join her silhouette to the edge of the disc, and this is often attributed to that atmosphere, but in fact Mercury, with no significant atmosphere, exhibits the same effect even when observed from space, thereby eliminating the factor of our own air. Hence it doesn’t seem to be due to a planet having an atmosphere. This is also significant for detecting planets orbiting other stars and whether they have atmospheres. Incidentally, it’s also possible to observe the changes in light level caused by transits in moonlight, although of course the effect is very subtle. There will be a simultaneous transit of the two planets in the year 69163, and before that Mercury will transit the Sun during a partial solar eclipse in 6757.

Meteorites very occasionally reach us from Mercury. One was found in the northwestern Sahara containing chromium diopside crystals, which are green, but may not be Mercurian given known facts about the composition of the surface. Although this is not the meteorite, this is what chromium diopside looks like:

By Rob Lavinsky, iRocks.com – CC-BY-SA-3.0, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=10151574

It reminds me of lunar olivine, and like it, is a semi-precious stone. This is an actual chunk of the meteorite itself:

It’s a calcium magnesium silicate, and can occur in a fibrous, asbestos-like form. There are faults on the surface of the planet, meaning that, like Earth and arguably no other planet in the system, it’s tectonically active, which in turn means that if this mineral is indeed from Mercury, it could have been transformed into such a fibrous state on its surface. However, this may not make it intrinsically more hazardous to potential astronauts (and there will be none) than moondust, which is also potentially quite harmful, being jagged and unoxidised until it comes into contact with a terrestrial organism. And this rock may not be from Mercury anyway. I would imagine that the extremes of temperature there have considerably weathered the terrain.

The interior is largely taken up by a core rich in iron, and the magnetic field may be caused by the same dynamo effect as here, perhaps helped along by the Sun’s tidal forces, which are much stronger there than here; alternatively, it may be residual from the planet’s formation, or the result of being directly magnetised by the solar magnetic field. I don’t know if this is true, but I would expect the crust to be higher in heavy elements than here, and for them to be more exposed due to the lack of weather and oxygen, although I would also suppose that their distribution would tend not to be in the form of specific ores, due to the lack of liquid water. There are no Van Allen belts, because the magnetic field is too weak in comparison with the solar wind. Heat could also be expected to weaken the magnetosphere.

Hun Kal is at 20° because the prime meridian had been defined, approximately, as the subsolar point at the first perihelion of 1950, and when Mariner 10 got there, that location was on the night side, so the crater was used as the reference point instead. At the time the meridian was defined, presumably it had been thought that that point was permanently at noon, with the Sun directly overhead.

Caloris Basin is so called because it’s directly under the Sun at closest approach, and is therefore the hottest area on Mercury. It has about the same area as Mexico, which in scale is similar to Antarctica compared to Earth, and is surrounded by a ring of fairly small mountains. It’s many times the size of Mare Orientale. Around the exactly opposite point is an area of so-called “weird terrain”, which is hilly and thought to result from the seismic vibrations of the impact being conducted around the planet and brought to a focus there. Just as on Earth the type and deflection of quake waves acts like an X-ray of the planet, revealing where the core is, so the terrain on the opposite side from Caloris Basin reveals something of Mercury’s internal structure, since much of it was formed by those shock waves. Superimposed on that are the ejecta splashed up by the impact, which also travelled all round the planet.

Unique to Mercury are the “blue hollows”. Although these are somewhat mysterious, they seem to be linked to the evaporation of solid material, and they resemble craters to a limited extent, except that they show none of the usual signs beyond being dents in the surface: no rim, central peak or ejecta. They are, at least in the enhanced-colour images, light blue.

The planet seems to have shrunk by seven kilometres in radius since its formation, which has led to ridges appearing on the surface. I wonder if this is to do with volatile substances such as potassium and sodium, with their low sublimation points, being lost to space during the day, which I also think might explain the hollows.

There’s something about craters, though, which I find somehow tedious and deadening. I could go on and on about the craters there at this point but it would probably bore you stiff. And the question there is why? Mountains are not boring after all, are they? This links into my post about whether Cynthia is boring. I suppose the thing about mountains is that you can imagine climbing or exploring them. But a crater such as Arizona Meteor Crater seems very interesting to me, as does the Chicxulub Impact which wiped out the non-avian dinosaurs. Maybe it’s just me. So that concludes this bit of the post.

Only three spacecraft have ever been sent to Mercury. The first of these, in 1974, was Mariner 10. For over three decades this was the only source of images of the planet, and only just over a third of the surface had been photographed. By a stroke of luck, Caloris Basin was at the terminator at the time, meaning that the weird terrain was too, but this also meant that the full extent of the basin was unknown. It also flew by Venus. Mariner 10 was the first spacecraft to use the gravity of another planet to aid its trajectory, and also the first to send back live TV pictures of another planet, although I would expect “live” to be a fairly misleading description of something whose bandwidth was 117.6 kilobits per second. This is actually pretty impressive when you consider that modems at the turn of the millennium were only half that fast. Because it used the gravity of Venus, it didn’t need to carry much fuel, as that was only needed to make fine course corrections, which it did with attitude adjustment nozzles firing nitrogen along the edges of its two solar panels. It used a sunshade to protect its instruments against the intense radiation at the orbit of Mercury. Like many other space probes, it was designed to orient itself using the Sun and the bright star Canopus. It carried a TV camera connected to a Cassegrain telescope, which gave it a long focal length in a short tube, able to image things in ultraviolet as well as visible light. The resolution was a total of 640 000 pixels, which would be 800×800 if square, each pixel being represented by a byte. There was also a radiometer able to measure temperatures to within 0.5°C, a plasma detector, a magnetometer, which discovered Mercury’s magnetic field, a second telescope to detect charged particles, which also registered the field, and an airglow spectrometer which was able to detect the glow of sodium in the atmosphere and beyond. This is actually bright enough to be seen by the human eye, so looking into the sky on Mercury, an astronaut would perceive a faint orange tinge. Another instrument was able to detect gases by the absorption of light.
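
Out of curiosity, the picture arithmetic – assuming one byte per pixel and ignoring overheads such as error coding, so this is a lower bound:

```python
pixels = 800 * 800   # one Mariner 10 frame, assuming a square image
bits = pixels * 8    # one byte per pixel
rate = 117_600       # downlink, bits per second

seconds_per_image = bits / rate
print(f"{seconds_per_image:.1f} s per image")  # ~43.5 s, so not exactly live TV
```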

When I look at something like that, it always makes me think that technology even that long ago was a lot more advanced than we give it credit for. Although it obviously wasn’t using internet protocols, this probe was able to transmit wireless data over millions of kilometres at twice the rate of a dial-up modem two dozen years later, and 800×800 resolution at eight bits per pixel, which is what this and many other spacecraft had in conjunction with Mission Control, wasn’t commonplace in affordable PCs until the 1990s either. On the other hand, the processing power of these machines was very limited. Although I can’t track down the details, Mariner 10 cannot possibly have been using a microprocessor to do its stuff, and even the Mars rovers only used CPUs of designs which went out of date in about 1980. This isn’t so much a criticism of those machines as of the hardware which exists now. If you can build a spacecraft which goes to Mercury and does all that stuff without even using a microchip, and if, later on, very modest processors indeed can be used to achieve even more, why are we now using so much more advanced computers to do much less impressive stuff?

We had to wait until the next century for the next probe, MESSENGER. This is named after the messenger of the gods, Mercury, but it officially stands for “MErcury Surface, Space ENvironment, GEochemistry, and Ranging”, clearly a backronym. MESSENGER managed to image the entire globe, as it was designed to go into orbit around the planet. It captured the first clear images of the blue hollows, which Mariner 10 had only managed to get rather blurry pictures of. It imaged the whole of Caloris Basin, measured the concentration of calcium over the planet and discovered that the magnetosphere sits at twenty degrees to the axis of rotation. It also imaged a “family portrait” of the whole Solar System, and was eventually crashed into the planet. It described one of those recent and to me baffling trajectories which seem to involve a large number of orbits around the Sun while the spacecraft gradually approaches its destination, therefore taking several years to reach it. If you think about it, no destination within our orbit ought to take more than half a year. I’m sure there’s an answer.
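
The answer, I believe, is fuel: the problem isn’t getting to Mercury but slowing down on arrival, since falling towards the Sun speeds a spacecraft up. A rough sketch of the arithmetic for a direct minimum-energy (Hohmann) transfer from Earth, treating both orbits as circular, shows how fast a probe would whizz past Mercury without years of flybys to shed speed:

```python
import math

MU_SUN = 1.327e20   # gravitational parameter of the Sun, m^3/s^2
R_EARTH = 1.496e11  # orbital radius of Earth, m (circular approximation)
R_MERC = 5.79e10    # orbital radius of Mercury, m

# Hohmann transfer ellipse touching both orbits
a_transfer = (R_EARTH + R_MERC) / 2

# Vis-viva equation: speed on the transfer orbit at Mercury's distance
v_arrival = math.sqrt(MU_SUN * (2 / R_MERC - 1 / a_transfer))
v_mercury = math.sqrt(MU_SUN / R_MERC)  # Mercury's own orbital speed

print(f"arrival speed:   {v_arrival / 1000:.1f} km/s")
print(f"Mercury's speed: {v_mercury / 1000:.1f} km/s")
print(f"excess to kill:  {(v_arrival - v_mercury) / 1000:.1f} km/s")  # ~9-10 km/s
```

Killing nearly ten kilometres a second is more braking than any practical rocket can carry fuel for, especially with tiny Mercury’s gravity offering little help, hence the repeated planetary flybys.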

Bepicolombo is of course the current mission. It’s a joint European-Japanese venture, and I’d expect it to be a lot more sophisticated again, although I’m not sure what that would mean. Like MESSENGER, and presumably all contemporary probes, it’s following the same kind of weird trajectory which takes a very long time to get anywhere, presumably for the fuel reasons sketched above. It comprises a photographic orbiter and a magnetosphere-investigating satellite – two separate spacecraft – and is named after the scientist who came up with the slingshot idea for Mariner 10. Since its name is in the title, I’d better go on about it.

It’s flown by Venus twice, the second time on 10th August this year (2021), and has just flown past Mercury for the first time, only seven and a half weeks after Venus. I think this may be the fastest journey between planets ever. It will fly by the planet a further five times, then go into orbit round it on 5th December 2025. It has an ion drive, using xenon. It’s about time really – the concept behind these, which is to ionise atoms and accelerate them out of the engine with electric fields, theoretically allowing a craft to work up to speeds of over 160 000 kph, is decades old. I originally had my doubts about whether it was really being used in earnest or was just there to be fancy, but its thrust is simply very low, so it’s used steadily over long periods during the cruise, with chemical rockets for other manoeuvres. One of the instruments was built in Leicester, which makes me happy. It isn’t the first time either.

Mercury is seen as breaking a pattern, because among the other terrestrial planets there is a relationship between density and size, such that the smaller a planet is, the lighter the materials it’s made from. This applies to Cynthia as well. However, Mercury is an exception. It’s also particularly close to its (and our) sun, and this is a possible clue as to what happened in other solar systems, which also often have very close-in planets. There was a hypothesis that Mercury had previously been a gas giant which lost its entire atmosphere by falling so close to the Sun, but I think this idea is deprecated now. It also has an unusual orbit, which is strongly influenced by the other planets in the Solar System. For all these reasons, the European Space Agency and the Japan Aerospace Exploration Agency are quite interested in it. The mission is also searching for asteroids inside Earth’s orbit. Its bandwidth is lower than Mariner 10’s was, I presume because data compression is better nowadays.

The Mercury Surface Element, which was cancelled, would’ve been a 44 kg disc ninety centimetres across, landing at 85° from the equator near the terminator. It would have been battery-powered, because around 40% of the landscape there was likely to be in shadow, and it would’ve had a camera. I just think it’s really sad they didn’t do this.

Right: that’s it. This is unfortunately later than I’d hoped as I missed the actual first rendezvous, but it is what it is.

Exotic Matter

Yesterday I talked about the way soft SF treats the concept of antimatter, which is mildly irritating from a scientific perspective but interesting in terms of what’s projected onto it. I also briefly mentioned why the depiction of antimatter is implausible. You cannot have a mineral ore which contains just some antimatter, and matter cannot gradually transform itself into antimatter in more than minute amounts without exploding or otherwise spectacularly destroying itself. That said, antimatter is not actually that exotic. For instance, even bananas emit positrons due to their radioactive potassium content, and I’d be interested to know if fly ash also does so. However, positrons are not all there is out there in the peculiar matter stakes.

It’s probably widely known that protons and neutrons are made of quarks, the former being two up and one down quark and the latter two down and one up, glued together in the nucleus by pions and orbited by electrons, which are leptons. All of these except electrons are hadrons, being made of two or three quarks. Leptons, though, are different. They are not analysable into smaller parts and are truly fundamental, and there are various kinds, the most familiar being the electron. There are a dozen types of lepton, grouped into particles and antiparticles, meaning there are six matter leptons: electrons, muons, tauons and their three corresponding neutrinos. Muons used to be classed as mesons (as the “mu meson”), but mesons are now understood to be quark-antiquark pairs, intermediate in mass between electrons and nucleons. Incidentally, a nucleus weighs slightly less than the sum of its separated nucleons, because some mass is given up as the binding energy holding it together; most of a hadron’s own mass, conversely, is not in its quarks at all but in the energy binding them.

Muons and tauons, though, are basically heavier versions of electrons. They aren’t “made of” anything, but are just “there”. Each lepton corresponds to a quark, which are also just there, although it used to be thought they were made of smaller particles called “rishons”, after the Hebrew word for “first”. Hence there are also six quarks, which can be paired off as heavier and lighter types. All of these particles taken together are called fermions, which are what “stuff” is made of. The other type of particle is the boson, whose rôle is to bear forces, including such things as photons, W and Z particles and of course the Higgs particle, but these are not part of matter as such.

Neutrinos were theorised to exist in 1930 CE, to account for the energy apparently going missing when a subatomic particle decays. They were detected in 1956, eleven years before my birth, and this gives me pause for thought. Neutrinos have almost no mass, no charge, and are nearly undetectable. They only have spin, more or less. Nowadays, when I hear about non-baryonic dark matter, which is supposed to make up most of the stuff in the Universe, I feel suspicious, because it’s rather too convenient that there just happens to be all this stuff which can’t be detected by any conceivable instrument except through its gravitational influence, and yet I have no problem accepting that neutrinos exist, possibly because they were established before I was born and so are part of the general background of things. What to make of this?

However, neutrinos are detectable, by huge tanks of dry-cleaning fluid buried underground. This is tetrachloroethylene. Very occasionally, a neutrino converts an atom of chlorine-37 into argon-37, which is then detected after the tanks are purged with helium and the argon separated out. Something similar can be done with gallium-71, which neutrinos occasionally convert into the radioactive germanium-71, and since gallium is denser than tetrachloroethylene, I presume this works better because the chances of interaction are higher when the atoms are more crowded. There are other ways of doing this, but for me this is sufficient, since it corroborates the existence of neutrinos, which can’t be done for non-baryonic dark matter.

If it existed, non-baryonic dark matter would count as exotic, and it’s divided into hot and cold types. Cold dark matter is the most speculative, and to my mind the most ridiculous, because it’s supposed to behave like ordinary matter to some extent, moving slowly and clumping under gravity, possibly even forming atom-like structures and organised matter like planets and living organisms, although those last ideas are way out on a limb and not widely accepted scientific opinions. Hot dark matter would be fast-moving, and in fact quite similar to neutrinos, in that it would constantly stream through the Universe at close to the speed of light, influenced by other mass but too quick to clump.

But it doesn’t exist. If it did, it would count as exotic matter. I have my own solutions to the problem but I won’t be going into them here.

Another kind of exotic matter which is merely speculative and probably doesn’t exist is the magnetic monopole. This arises from the thought that just as there are electrically negative and positive particles, so there ought to be isolated north and south magnetic poles with no local counterparts. If magnetic monopoles did exist, they could form matter which was extremely dense by atomic standards, though similar to atomic matter in that the monopoles would form the nucleus and electrons would orbit them, but in much smaller orbitals. For this reason it’s been speculated that magnetic monopoles may have sunk to Earth’s core and would therefore not be detectable on the surface. There is in fact no observational or experimental evidence that they exist at all. However, one does sometimes hear news that they’ve been detected or used. This seems to be hype, because these are quasi-particles, like the electron holes I mentioned yesterday: emergent properties of larger bulks of atomic matter which behave like magnetic monopoles would if they existed, but which can be explained in terms of physics which doesn’t involve these apparently mythical beasts. They occur in spin ice, which is not ice but named by analogy with spin glass, which is not glass. Particles have an intrinsic spin which can line up or be haphazard, and which is connected to magnetism. Spin ice is a crystal composed of tetrahedra with magnetic atoms at the corners, two of whose poles point into the shape and two out. If this is heated, single atoms out of the four begin to flip over, so that their magnetic poles face the opposite way, creating pairs of apparent north and south poles isolated within the tetrahedra. These can then move across the crystal separately, increasing in distance from each other as if they were isolated particles, when in fact they’re the two ends of very long, thin magnets known as Dirac strings. This kind of monopole can be moved around, meaning that magnetic currents can exist in the same way as electric ones, except that they will always be alternating rather than direct.

Quasi-particles turn up quite a lot in condensed matter physics. As well as magnetic monopoles and electron holes, there are phonons. These are to sound as photons are to light: particles of sound, as it were. Phonons are important in superconductivity: in conventional superconductors it’s phonons which bind electrons into the Cooper pairs that then flow without resistance. Other examples are rotons, which are quanta of rotational excitation in superfluids, and excitons, which are combined electrons and electron holes. These are not exotic matter, but that doesn’t mean they can’t behave like it. For instance, if an electron can orbit a magnetic monopole, maybe it can orbit this kind of fictitious magnetic monopole too. Just a thought: it probably can’t.

Positrons are probably the most familiar form of antimatter, turning up in fairly familiar settings. For instance, there are electrical processes taking place above thunderclouds as well as below them which can involve the generation of positrons: fast electrons deflected by air molecules emit gamma rays, and those gamma rays, passing close to atomic nuclei, convert into pairs of positrons and electrons, which stream up into space. Positrons are also generated in the form of radioactive decay in which protons, which are positively charged, become neutrons. This happens with potassium-40, carbon-11, aluminium-26 and oxygen-15. This form of radioactive decay is employed in positron emission tomography, where a radioactive tracer such as oxygen-15 is injected to image things like blood circulation and tumours. Potassium-40’s rare positron-emitting branch is also, unsurprisingly, how bananas produce positrons.
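The underlying process, written out for the oxygen-15 case, is:

$$p \rightarrow n + e^+ + \nu_e, \qquad {}^{15}_{8}\mathrm{O} \rightarrow {}^{15}_{7}\mathrm{N} + e^+ + \nu_e$$

The emitted positron promptly meets an electron and annihilates into two gamma rays travelling in opposite directions, which is what the PET scanner actually detects.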

I mentioned muons near the start of this post. A muon, along with a tauon, is essentially a very heavy electron, with the same charge as an electron but slightly under 207 times its mass. It has a mean lifetime of around 2.2 microseconds, which is unusually long for an unstable elementary particle; among the charged elementary particles only the electron is stable, and particles travelling at light speed obviously would be too, because time doesn’t pass for them. Muons penetrate much further into matter than electrons because of their mass, losing far less energy to braking radiation, and can therefore be used to image the insides of objects which are relatively deeply buried or embedded. They apparently aren’t used for medical imaging, but muons can enable room-temperature nuclear fusion by acting as catalysts. Muons can be generated by accelerating ionised hydrogen, in other words protons, into fairly light nuclei such as carbon, releasing pions which then decay into muons. They do need to be generated on demand, because their short lifetime makes them impossible to store.
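The production and decay chain, for the positively charged case, runs:

$$\pi^+ \rightarrow \mu^+ + \nu_\mu, \qquad \mu^+ \rightarrow e^+ + \nu_e + \bar{\nu}_\mu$$

The 2.2 microseconds sits between those two steps, which is the window in which anything useful has to be done with them.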

Muons can orbit nuclei in the same way as electrons can, and this is the first kind of real exotic matter. Like hypothetical monopole matter, this kind is much denser than ordinary matter, because muons are heavier than electrons and orbit far closer to atomic nuclei. This is what makes room-temperature nuclear fusion possible: with its muon orbiting roughly two hundred times closer, a muonic hydrogen atom is effectively neutral down to a far smaller radius, so two nuclei can approach closely enough to fuse far more readily. However, it takes more energy to produce the muons than the fusion would liberate, so it’s useless, at least at the moment. Since the muon sits roughly 186 times closer to the proton than an electron would (its reduced mass with the proton is about 186 electron masses), condensed muonic hydrogen – the general term is a muonic atom – would be millions of times denser than ordinary condensed hydrogen, though as a gas its density at a given temperature and pressure would only reflect its slightly greater molar mass. It would also be ridiculously radioactive, in the sense that its muons decay within microseconds. Muons can also replace individual electrons in heavier atoms, as with hydrogen-4.1. Hydrogen-4.1 is actually helium in that it has a helium nucleus, but hydrogen in the sense that it only has one electron and is unionised (or at least it was before Thatcher – goodness only knows what happens now!). In a sufficiently heavy atom, the muon sits far inside the electron cloud, close to the nucleus and moving at relativistic speed, and time dilation would stretch its lifetime somewhat, although a negative muon that close to a heavy nucleus tends to be captured by a proton, which cuts its life short instead. As for hydrogen-4.1, I’m not sure about this, but I think it would be a superfluid, because this depends on whether a substance, nearly always helium of course, is composed of bosons or fermions. So this is hydrogen which can be a superfluid and is denser than helium. Superfluids do weird things like flow uphill and pour better through small holes than large ones. If hydrogen-4.1 is thought of as helium, this also means this is reactive helium.
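Here’s the back-of-envelope version of that density scaling, assuming nothing beyond the simple Bohr model (the constants are the standard mass ratios):

```python
# Bohr-model sketch of muonic hydrogen: the orbital radius scales
# inversely with the reduced mass of the orbiting particle.
M_E = 1.0         # electron mass, in electron masses
M_MU = 206.77     # muon mass
M_P = 1836.15     # proton mass

def reduced_mass(m, M):
    return m * M / (m + M)

shrink = reduced_mass(M_MU, M_P) / reduced_mass(M_E, M_P)
print(f"muon orbit is ~{shrink:.0f} times smaller")       # ~186

# The atom's mass barely changes while its volume falls by shrink**3,
# so the condensed-phase density goes up by millions:
mass_ratio = (M_P + M_MU) / (M_P + M_E)
print(f"density ratio ~{mass_ratio * shrink**3:,.0f}")    # ~7,000,000
```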

The other way muons can form exotic matter is by becoming the nucleus of a hydrogen-like atom. Because muons, like electrons, are negatively charged, it’s actually the antimuon which forms the nucleus, with an ordinary electron orbiting it (alternatively, a muon could be orbited by a positron, giving the antimatter version). The electron-round-an-antimuon atom is known as muonium, and it’s stable enough to have chemistry. There are known isotopes of elements which are less stable than muonium, although its own lifetime is just that of the antimuon itself, around 2.2 microseconds. The size of the atom is close to that of hydrogen itself, and considered as an element it can be thought of as a particularly light isotope of hydrogen. In fact it would be the lightest known “element”, at something like a ninth the mass of protium, which is ordinary hydrogen. There is a compound called muonium chloride, which does very little because it’s so short-lived, decaying when its muon does.

A number of other atom-like things can be made from subatomic particles. There’s true muonium, where a muon and antimuon orbit each other; positronium, where a positron and electron do the same; and pionium, where two oppositely charged pions are briefly bound, which is useful for studying the strong nuclear force and has, as far as I can tell, now been produced in accelerator experiments.
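In symbols, treating each as a simple two-body pairing (the symbols other than Mu and Ps vary in the literature):

$$\mathrm{Mu} = \mu^+ e^-, \qquad \mu^+\mu^-, \qquad \mathrm{Ps} = e^+ e^-, \qquad A_{2\pi} = \pi^+\pi^-$$

In each case the pair orbits a common centre of mass, just as an electron and proton really do in hydrogen.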

The only trouble is, all of these, and there are others as well, are very unstable and break down in microseconds or even less. But there are other forms of exotic matter which are likely to be more stable, sometimes in a very unfortunate way. One of these is strange matter.

Strange matter has a misleading name. Strangeness is just the name of a property of certain subatomic particles which form easily but decay surprisingly slowly, and it’s carried by the third quark, the strange quark. The terminology used in nuclear physics tends to be very divorced from the same words used in everyday English. Strange matter may occur in the centres of neutron stars, which are themselves made of exotic matter in the form of neutronium, which I’ll come to later. It’s thought that under sufficient pressure the very distinction between individual nucleons is lost, giving a state of matter consisting of quarks without their being bound into separate, larger particles, and strange matter is an example of this, containing a large admixture of strange quarks alongside the ups and downs. However, mere smooshing together doesn’t necessarily make quark matter, which may consist only of up and down quarks, into strange matter. If the density is high enough, though, the strange version becomes the more stable state, and quark matter would be strange in those circumstances. Hence it’s likely that some neutron stars have strange cores, but nobody is ever going to be able to encounter the stuff. However, there is a second, rather worrying alternative to this view.

Sufficiently massive neutron stars, on the verge of becoming black holes, could consist of masses of strange matter several kilometres across with a thin outer layer of neutronium. Also, in some cases when a nucleus decays it may become a small piece of strange matter, and when strange stars collide, larger but still fairly small pieces may “chip off”. These small pieces are called “strangelets”. Strangelets are mixtures of up, down and strange quarks, and once they’re over about a metre in diameter they’re referred to as strange stars. Atomic matter contains no strangeness because protons and neutrons are more stable than the neutral lambda and sigma baryons, but in bulk, strange matter may be more stable even when not under pressure, meaning that any atomic matter it encountered, such as the planet Earth, would become a strange star, which is incompatible with biochemistry, or much else for that matter. This was the worry people had about the LHC: that it could produce a strangelet which would convert the whole planet. The scenario is very like false vacuum decay. However, it’s been pointed out that in all likelihood strangelets rain down on this planet the whole time anyway, and if it was going to happen it would’ve done so by now.

Neutronium is a less extreme form of exotic matter which just consists of neutrons packed together, with a density of around a hundred thousand tonnes per cubic millimetre. Where neutronium exists, it amounts to an atomic nucleus of enormous size composed solely of neutrons, which when free would decay after about a quarter of an hour, but in the form of neutronium are stable, just as they are in atomic nuclei. Below a certain size the strong nuclear force isn’t sufficient to hold pure neutrons together – it’s really gravity that binds a neutron star – so there can’t be ordinary-sized atomic nuclei made solely of neutrons, for example. A possible use of neutronium would be generating a gravitational field, but there are problems with this, because for the field to be at a level humans can survive, a neutron star would have to be millions of kilometres away from them, unless they were in orbit around it, in which case they’d be weightless apart from tidal forces pulling them away from the centre of gravity. It would be far less manageable than faking gravity centrifugally, and impractical for a spacecraft, since it would involve lugging around more than a solar mass.
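The “millions of kilometres” figure is easy to sanity-check, assuming a typical 1.4-solar-mass neutron star and ordinary Newtonian gravity:

```python
# How far from a neutron star do you have to be for gravity of 1 g?
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
g = 9.81            # one Earth gravity, m/s^2

M = 1.4 * M_SUN                       # a typical neutron star
r = (G * M / g) ** 0.5                # solve g = GM/r^2 for r
print(f"~{r / 1e9:.1f} million km")   # ~4.4 million km
```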

Quark-gluon plasma was in the news a few years ago when it was achieved in a particle accelerator. It was also the composition of the early Universe. Gluons are the bosons which hold quarks together in nucleons. “Plasma” might suggest a rarefied state, but this is by no means the case here. It can be thought of as matter in conditions so hot that not only can atoms not hold together, but the very particles making up atomic nuclei can’t either; and when it last existed naturally, the Universe was so small that it was also very dense. It’s like a sample of the early Universe, before protons and neutrons had even formed.

Near the start of this post, I mentioned tauons. These are even more massive and short-lived leptons, and like muons they could in principle form exotic atoms in various ways: orbiting their own antiparticles, for instance, or forming the nucleus of a hydrogen-like atom. However, tauons have lifetimes measured in femtoseconds, so the possibility of any chemistry is non-existent.

I was originally provoked into writing this by trying to imagine how a stable mixture of matter and antimatter could exist. There can never be covalent or ionic bonds between atomic matter and antimatter, because the electrons and positrons would annihilate each other, but those are not the only ways atoms and subatomic particles can associate, as the existence of exotic atoms illustrates. The problem with those is that most of them are highly unstable. However, matter clearly can accommodate electron holes as quasi-particles, although these don’t annihilate electrons when they come into contact with them. There are clathrate compounds which consist of cages of atomic bonds containing atoms or molecules not bonded to them, so storing antimatter in the form of positrons might involve something like that. The positron would need to be equally repelled on all sides by positive ions, and these would have to be in a stable configuration, so a tetrahedral crystal like that of a spin ice might be an option, but it’s hard to imagine a situation where there could be such a suspension.

It would, however, be possible to suspend antimatter in the form of plasma magnetically, so scaling that down, a substance containing tiny holes, consisting of a kind of foam of atoms with spin directed towards those holes, could possibly store positrons in the cavities. They’d have to get in there in the first place, though, so the prison would have to be built around them. The energy could then be released by removing atoms gradually, letting the positrons reach and annihilate electrons, either within the substance or beyond it. Once this process had started, there would probably be a chain reaction and everything would rush out. It would be a minor form of matter-energy conversion, resulting in a plasma plus liberated energy corresponding to around a thousandth of the mass. This is still a lot of energy: annihilating the milligramme of positrons in a gramme of such a substance, along with the milligramme of electrons they’d take with them, releases about 180 gigajoules, several tonnes of jet fuel’s worth. Hence the energy density would still be extremely high, and the question is then how much energy would be expended making the arrangement in the first place, assuming it worked. But it means, for example, that rather than providing a car with fuel or recharging it, it could simply contain a small amount of this matter, which would last the length of the car’s existence. It would, however, need to be resistant to corrosion and have a very high melting point. Maybe it should be made of platiniridium.
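For the record, here’s the arithmetic behind the 180-gigajoule figure above, assuming a gramme of material storing a milligramme of positrons:

```python
# E = m c^2 for the positron-foam thought experiment above.
c = 2.998e8                       # speed of light, m/s
m_positrons = 1e-6                # a milligramme of positrons, kg
m_annihilated = 2 * m_positrons   # each positron takes an electron with it

E = m_annihilated * c**2
print(f"{E:.2e} J")               # ~1.8e11 J, i.e. ~180 gigajoules

JET_FUEL = 43e6                   # J per kg, roughly
print(f"~{E / JET_FUEL:.0f} kg of jet fuel")   # ~4200 kg
```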

Mind Over Antimatter

Illustrative purposes only – will be removed on request

Spoilers for Doctor Who’s ‘Planet Of Evil’, Buffy The Vampire Slayer’s ‘Normal Again’ and Space 1999’s ‘Matter Of Life And Death’ follow.

I’ve been watching a lot of old SF TV and films recently, and have now reached the mid-’70s. Well, I say that. What I’m actually doing is following Anderson productions through from ‘The Dark Side Of The Sun’ down towards the present, but that isn’t exactly my focus today because I’ve noticed two interestingly similar uses of a science fiction motif which don’t seem to make a lot of sense, one in ‘Space: 1999’ and one in ‘Doctor Who’: antimatter.

Antimatter is definitely not what it’s shown to be in either of these. Starting with ‘Doctor Who’, there’s the serial ‘Planet Of Evil’, whose air dates are 27th September to 18th October 1975, and from ‘Space 1999’ (is there a colon there?) there’s the episode ‘Matter Of Life And Death’, broadcast on 27th November 1975. Hence these two are very close together, and this post could almost be titled ‘The Depiction Of Antimatter In British SF Shows Of Autumn 1975’. The weird thing is that both of them make antimatter into something it absolutely is not.

I’m going to start by describing what antimatter really is, how it was discovered and so forth. The first hint that antimatter was possible was Paul Dirac’s 1928 CE paper ‘The Quantum Theory Of The Electron’, whose equation turned out to have a second set of solutions identical to the electron except for positive charge; in other words, there didn’t seem to be any reason why electrons should have negative charge. They just did. Now there’s a device called a cloud chamber, which contains vapour almost at the point where it starts to condense into droplets of fog, and this is used to detect subatomic particles, which leave vapour trails behind them by upsetting the delicate balance of the conditions. Other, similar devices are bubble and spark chambers. If a magnetic field is applied through a cloud chamber, it unsurprisingly causes charged particles to curve in a direction corresponding to their charge, so for example α particles, which are doubly positively charged helium nuclei, will go one way and electrons, which are negatively charged, will go the other. At any time, cosmic rays are passing through the atmosphere, through objects on Earth and, in the case of neutrinos, through Earth itself, so any cloud chamber will detect various particles from those, although many are filtered out by Earth’s own magnetic field. Thus you get a wide “zoo” of different kinds of particles constantly raining down from space, including β particles, which are just fast electrons and can be bent by a magnetic field. In the late 1920s CE, at a vague and disputed date, scientists began to notice that as well as electrons there were other particles which seemed to be exactly the same except for one thing: they bent the other way in a magnetic field. In other words, they were positively rather than negatively charged. These particles were dubbed “positrons”, and Carl Anderson’s definitive identification of them came in 1932.
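The track-bending rule itself is simple: a particle of charge q and momentum p moving across a magnetic field B curls into a circle of radius

$$r = \frac{p}{|q|B},$$

with the direction of curl set by the sign of q, which is exactly how a positron betrays itself as a backwards-bending electron.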

Since I’m primarily talking about fiction here, I’m going to talk about Isaac Asimov’s use of positrons in his “positronic robots”. Asimov’s robot stories are primarily about the ethics practised by said robots, but there’s a blurry technical background to them in that they all have positronic brains. This is essentially technobabble, but the idea is that robots’ heads contain something rather like a computer (and Asimov’s first stories in this vein more or less predate the invention of the digital computer) made of platinum-iridium alloy which operates by the creation and destruction of positrons. On one occasion, Asimov comments “no, I don’t know how this is done”. Since his focus is on the Three Laws, this is just off happening to one side and is rarely the focus of his fiction, but one thing he does say is that a positronic brain cannot be made without conforming to those laws. However, the reason for this seems to be that the Laws have been such a central part of US Robots’ designs that building a brain without them would mean reinventing the wheel, not that some fundamental physical principle makes it impossible. That said, in one of his stories a human character is captured by an alien robot which also obeys the Three Laws, to the extent that it, too, “cannot harm a human being or through inaction allow a human being to come to harm”. So whereas there’s no physical reason why using positrons prevents a robot from acting unethically, it seems more that the utility and function of such a machine is fundamentally ethical, in the same way as, for instance, any light source is going to have to emit visible light to be worthy of the name. There is a reason why they’re like that which is as immutable as the principle of using positrons, but it works on a different, more social basis, perhaps related to Asimov’s other big concept, psychohistory.

Although all of this is very vague, it’s still possible to discern a limited amount of nebulous creativity around the concept, if it’s worthy of that name. Platiniridium, as the alloy is known in reality, has some real-world features which communicate something about the situation. Its use signals that the positronic brain is of extremely high value, since platinum is dearer than gold. The two metals are among the heaviest, that is the densest, of the chemical elements, communicating that positronic brains are very weighty, i.e. important. Platinum also has the reputation of being shiny, so it’s bright, an attribute which can be used metaphorically for intelligence, and it also conveys a sense of high technology – a gleaming, bright, ultra-scientific future. I can’t say that all of these things were operating in Asimov’s mind when he thought of it, but they are all in there for a reader. Another less obvious aspect, bearing in mind that he was originally a chemist, is that the alloy is particularly unreactive and has a very high melting point, so it’s resistant to physical assaults, which is what constant bombardment with positrons would be. However, this can’t be taken far beyond the figurative realm, because in fact there’s no reason to suppose, nor was there in the 1930s, that platiniridium would be any more resistant to damage from positrons than any other kind of atomic matter. If significant numbers of positrons were moving through platiniridium, they would increasingly ionise both elements, and the alloy would probably be oxidised and melt from the extreme heat generated.

There does in fact seem to be a way of building a valve-based positronic computer, and it would have certain advantages, one of which is that it wouldn’t need an external power source, but any such device would also be extremely radioactive and dangerous, so it could only really operate safely in deep space, and there’s no particular reason for doing so. Another area in which positrons could be said to have sort of come up is in the electron holes which allow transistors, and therefore microchips, to operate. These are absences of particles behaving as if they were real but oppositely charged particles, so if there could be a form of matter allowing electrons and positrons to co-exist, this would be a genuine aspect of computing where positrons had a rôle. However, Asimov was writing at a time before electronic digital computers existed as such: Colossus, the first programmable electronic digital computer, was built in 1943, three years after ‘Robbie’ was published. Also, although the possibility of antimatter had first been thought of in 1898, at the time he was writing positrons were in vogue but other antiparticles had yet to be detected, and were probably absent from even the scientifically-educated public consciousness, though of course not from that of actual physicists.

The key feature of positrons in this usage was probably their ephemeral nature, like that of thoughts in the conscious mind, and in general there is no complex set of ideas in his fiction to back this particular one up. In fact it’s rather unusual in that respect, as he was a professional scientist and often provided a lot of technical detail regarding such things. For instance, at around the same time he wrote a story about a spoon made of ammonium which looked exactly as if it were made of metal but turned out to stink horribly and was therefore unusable, and this is based on the common observation that the ammonium ion, NH4+, behaves rather similarly to the ion of an alkali metal such as sodium or potassium and could perhaps be made to form a bulk metal in some way. This is speculative, to be sure, and doubtless impractical, but the scientific detail involved is considerable and important. Compared to that, his positronic brain is very vague. In fact, whereas Asimov is generally a hard science fiction writer, the only major exception being the usual one of allowing faster-than-light travel when he’s actually writing SF as opposed to fantasy, the positronic brain is more a soft sci-fi idea, more like a light sabre or a food pill than a robot (ironically) or an alien.

The concept was borrowed from his work into a number of others, including ‘Doctor Who’ and ‘Star Trek’. The earliest mention in the former seems to be in 1966, in the Second Doctor story ‘The Power Of The Daleks’, where a character erroneously speculates that the Daleks might be controlled by one. In ‘The Evil Of The Daleks’, broadcast the following year, the same incarnation attempts to implant the “human factor” into such a device, to be placed in a Dalek. Later, in the Fourth Doctor serial ‘The Horns Of Nimon’, a robot is understood to be controlled by a “positronic circuit”. In ‘Star Trek’, the android known as Data has a positronic brain, and the phrase “Asimov’s dream of a positronic brain” is used at one point as if it were a well thought-out concept with firm theoretical underpinnings, and also some sort of technological Holy Grail. In the ‘Star Trek’ universe, positronic brains are supposed to have the ability to configure and program themselves in a way which would be impossible with electronic circuitry. What the concept does, insofar as it is one rather than just a vague idea, is create a non-existent type of technology onto which all sorts of things can be projected without annoying, plausible scientific facts getting in the way. I’d go so far as to say ‘Doctor Who’ does the same thing, particularly where the human factor is being induced into the Daleks. When asked whether his robots were conscious, Asimov replied that they were, and ‘Reason’ certainly suggests that they are, through QT-1’s deployment of the Cartesian method of doubt. If you believe that some objects are conscious and others not, as most adults in the West probably do right now, you are stuck with the problem of what could make something like a computer conscious, and his solution to that, and even more so that of ‘Star Trek’ and ‘Doctor Who’, is to posit the positron as a potentially perceiving particle. This is possible because it’s outside everyday experience.

Positrons are simply one example of antimatter, and moreover one which managed to escape from the general science-fictional concept, possibly because although they are anti-electrons they’re only rarely called that. The wider concept of antimatter turns up particularly in the matter-antimatter generators which release energy to power star drives in all sorts of stories, and this, assuming antimatter can be manufactured in bulk, is an entirely feasible use, because the total energy locked up in matter and antimatter would be released if they came into contact with each other, usually creating an almighty explosion. This is what the equation E=mc² expresses, or rather, it expresses the quantity of energy present in matter. There’s enough energy locked up in a single gramme of sugar, if it could be totally converted, to feed the entire population of Melton Mowbray for the best part of a year, and from this it can be seen that chemical energy is ridiculously inefficient. However, such a prodigious release of energy is potentially very dangerous, and this has been used in science fiction as well, in the form of the Total Conversion Bomb.
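Checking that, assuming total conversion, a dietary requirement of roughly 10 MJ per person per day, and a population of about 27,000 for Melton Mowbray (both round figures of mine):

```python
# E = m c^2 for a gramme of sugar, versus a town's dietary needs.
c = 2.998e8
E = 1e-3 * c**2            # J in one gramme: ~9e13
PEOPLE = 27_000            # rough population of Melton Mowbray
DIET = 1e7                 # ~10 MJ per person per day

print(f"{E / (PEOPLE * DIET):.0f} days")   # ~333 days
```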

These are both relatively plausible ideas scientifically, and provided that enough antimatter could be found or produced, both would be entirely feasible. They blow fusion power and fusion bombs out of the water, of course, and given that existing weapons of mass destruction are worrying enough, they may not be desirable, but the fact remains that they could probably exist. But for some reason, in autumn 1975, two science fiction TV series ended up using the concept of antimatter in a really weird way which is completely alien to scientific theory and shows no signs of ever being realistic.

The first of these is ‘Planet Of Evil’, a Doctor Who adventure with the classic Fourth Doctor and Sarah Jane Smith lineup at the start of the Hinchcliffe era. I read the Terrance Dicks novelisation rather than watching the TV version, probably because I was watching ‘Space 1999’ on the Other Side! The Tardis picks up a distress signal from Zeta Minor, a planet on the edge of the Universe, over thirty thousand years in the future from whenever Sarah Jane comes from (see the UNIT Dating Controversy), in the year 37 166 CE. It turns out there’s an antimatter monster on the planet which is killing everyone, and which is able to pass between this Universe and the antimatter Universe via a pool of antimatter, which is black and has no reflections. The Morestrans are a species or race whose sun is going out, and they’ve arrived on the planet to mine antimatter ore, which will provide energy for their planet for generations to come. However, the antimatter is prevented from leaving the planet by the planet itself, and it also acts like the potion in ‘Strange Case Of Doctor Jekyll And Mister Hyde’ by gradually bringing out a primal, evil side in people.

To analyse this: the antimatter here does share one property with real antimatter, sort of, in that it provides a prodigious source of energy. However, it isn’t clear that this is because it interacts with matter, and matter is potentially just as good a source; it isn’t a property of antimatter specifically. Antimatter also seems to be “evil”. It opposes matter in the sense that it’s its enemy. In a sense this is also true, because matter and antimatter are each other’s enemies in that they annihilate each other, but here it’s more like matter being good and antimatter evil. I haven’t read Robert Louis Stevenson’s novella, so I don’t know if he goes into what’s in the potion, but I suspect that antimatter here is largely a plot device to represent that potion in an updated way. The idea of antimatter being present in an ore of ordinary matter probably doesn’t make much sense, because if there were actual atoms of antimatter, there’d also have to be a way to prevent them from coming into contact with matter, or the two would immediately wipe each other out. The idea that such a thing could exist somewhere “out there” depends on the famous dictum, usually attributed to J. B. S. Haldane rather than Einstein, that the Universe is not only queerer than we suppose, but queerer than we can suppose. This is clearly true, but there’s no reason to suppose that antimatter ore made largely of matter is possible at all. To me, it suggests some kind of electromagnetic suspension of particles in a cage-like crystal structure, and it might happen that positrons could be captured by positively charged ions in a rock. This raises the question of how close bits of matter and antimatter could get before they interact destructively, which is an important issue because of the quantum mechanical probability of a particular particle being in a specific location: two pieces of matter and antimatter approaching each other would increase their probability of annihilation as they got closer, which also raises an issue to do with the speed of light. But all of this is beside the point, because it isn’t about the properties of real matter and antimatter but about what antimatter means in this ‘Doctor Who’ story, which, being based on the novella, will presumably be to do with the potential for good and evil coexisting in all of us, and in Victorian terms the hypocrisy of private actions and public appearances, which was likely still valid in 1974, when I presume it was written, and of course today. Given our current hindsight, and the likes of Savile at the BBC doing what he did and this being kept quiet or just rationalised away, ‘Planet Of Evil’ comes across in a more sinister way, almost as a commentary on child abuse happening at the time. In this context, antimatter becomes the inner, evil, secret, hidden side, and there’s also a sense of greed in wanting the power from antimatter ore, and of that power corrupting.

The location of the planet, at the edge of the Universe, is probably also relevant, and in fact this is what I mainly took from reading it. The pool, mysterious and bottomless, is like a portal into a neighbouring universe where antimatter dominates. I get the impression of a kind of “Duoverse” with a plane down the middle, matter on one side and antimatter on the other, the two sides in an uneasy truce. Zeta Minor is like a border checkpoint between two mutually hostile territories. There’s also the influence of ‘Forbidden Planet’, and therefore also ‘The Tempest’, and the Doctor does in fact quote Shakespeare in the story. The famous jungle set is clearly linked to the isle which is “full of noises”. The monster is thus very obviously Caliban, although the story is directly based on the film rather than the play, and there are differences. The semi-visible monster closely resembles that in the film, and in the Doctor Who story the semi-visibility is to do with it only being partly in our Universe, i.e. world, and incapable of reaching all the way into it, and therefore being essentially other-worldly. But the trouble is that I can’t go into much depth about ‘Planet Of Evil’ because of my unfamiliarity with it, and also with Shakespeare and Robert Louis Stevenson.

The other example is much fresher in my mind, as I only watched it yesterday: ‘Space 1999: Matter Of Life And Death’, and I think there’s no article in this title either, so it refers directly to antimatter having those fundamental qualities, or perhaps to matter being life and antimatter death. So far, the entire series of ‘Space 1999’ has seemed quite odd to me, being closer to space horror like ‘Alien’ and ‘Event Horizon’, and of course the children’s book ‘Galactic Aliens’, than to science fiction or space opera. Then again, ‘Doctor Who’, particularly of the Hinchcliffe era, has strong elements of that genre too, but because it wasn’t on the Other Side, I might judge it less harshly. Even so, ‘Matter Of Life And Death’ is a problematic episode among many of the same in the series, which I’ll largely leave aside for a future date. If the viewer takes the idea that Helena Russell is simply being allowed to see things less apocalyptically after the calamity at the climax, it makes the whole of the rest of the series take place in her imagination. It’s very like the Buffy episode ‘Normal Again’, and if a series of such high quality is allowed to do that, ‘Space 1999’ should in fairness be allowed it too. In any event, I’m not here to discuss the whole of that series in depth, although it is worth remembering that it’s very far indeed from hard SF at this point.

Here’s the plot: an Eagle reconnaissance mission has discovered an apparently perfect planet for human life, which is named Terra Nova. During the visit, the craft is struck by lightning, knocking both crew members senseless, and returns automatically to Moonbase Alpha. When it lands, Dr Helena Russell goes aboard to find a third person present: her missing, presumed dead, husband, mysteriously revived and present on a distant planet he never went anywhere near, as far as she knows. When he’s taken to Alpha’s medical bay, the equipment is unable to detect a heartbeat or any other vital signs, and it also turns out that he only has a normal pattern of body heat when he’s in her presence. There is pressure to discover more about the planet and considerable enthusiasm to settle on it, so he’s injected with a dangerous stimulant drug. He’s monosyllabic and largely unresponsive to everyone after this except his wife, with whom he has a more involved conversation, and others conclude that he is using her life force to sustain his own. He’s taken to be questioned and says he can’t tell them where he came from but can tell them the planet is dangerous to them. He also says that Terra Nova is inhabited, “but not in the way you think”, then dies when he hears they will go there anyway. After his death, his body begins to “reverse polarity” (it actually says that!), which is a sign that it’s going to become antimatter, and this is dangerous because of the release of energy, which will probably destroy Moonbase Alpha when the change is complete. The corpse then vanishes, after shocking someone with a burst of energy. They land on the planet. All seems well at first, and in fact this scene of their arrival is one of the few in the series I clearly remember: parrots, edible fruit, breathable air and potable water. Then the Moonbase fails and the entire satellite explodes, a landslide kills Koenig and Sandra goes blind. After all that, Helena’s husband appears again and tells her it’s all about perception, and that she can choose to see things the way they were.

This is a largely unsatisfactory story, of course, partly because it’s in the “it was all a dream” category, which at least one other ‘Space 1999’ episode and an episode of ‘UFO’ also fall into, and this is really scraping the bottom of the barrel. It’s been seriously suggested that the writers were on acid when they came up with the idea, but leaving all that aside, it’s still interesting to consider how it portrays antimatter. First of all, apparently a gradual transition from matter to antimatter is possible. Professor Bergman refers to “reversed polarity”, which I think is probably also a nod to ‘Doctor Who’, but which also presumably means there’s an intermediate stage during which the subatomic particles making up the corpse have only some of their properties reversed, such as spin or charge, without being fully-fledged antiparticles. To be honest, I have some sympathy with the idea of particles preserving symmetry in other ways, but I get the feeling this is a very naïve view of physics, so I’m going to stick with the idea that it’s all or nothing: something is either a specific particle or its antiparticle, with nothing in between. Otherwise it would be like saying something is slightly reflected. Alternatively, maybe it means that some of the particles have converted but others haven’t, which is again unfeasible, as this would cause a huge surge of energy fuelled by mutual annihilation.

This episode is clearly inspired by ‘Solaris’, originally a novel by Stanisław Lem and later made into two films (and an operating system). However, for some reason both films and the novel are so much better than this. ‘Solaris’ is extremely thought-provoking and lends itself to many interpretations. Its sentient ocean is replaced here by antimatter, which has a protean nature and is utterly alien. The idea seems to be that antimatter does not belong in this Universe but is able to mix with it to a limited extent, and is essentially mysterious and incomprehensible to us. The statement that the planet both is and is not inhabited is part of this. It corresponds to a wider sense of mystery and alienness found throughout the series. And of course, antimatter is once again metaphorical.

I can only presume that the concept of antimatter was topical at the time due to some kind of scientific breakthrough, which led to it being included in these scripts. Having said that, I do think the perception of antimatter is significant for both. The particle I think of as “gypsy”, the J/ψ or psi meson, was first detected in 1974, and, whether it was valid or not, there was also the idea around that atomic matter included a small admixture of charmed matter, in which one of the quarks of a nucleon was replaced by a charm quark. This is not the same as antimatter, because there’s no fundamental incompatibility involved, but I don’t know if it’s actually the case or possible. My own impression of charm at the time was that it made some nucleons slightly more massive, causing matter to clump together in the form of galaxies rather than being spread smoothly throughout the Universe, but please remember I was only seven at the time and didn’t know much about nuclear physics. In any event, if this kind of mixture was a current idea in science at the time, the popular understanding of it might allow for the notion of a metastable mixture of matter and antimatter which lasted more than a tiny fraction of a nanosecond but was still unstable over the short term compared to a human lifespan, and this mixture idea occurs in both works – the corpse in ‘Space 1999’ and the ore in ‘Doctor Who’. Both of them include a strong component of otherness in their idea of antimatter. In ‘Planet Of Evil’ it’s linked to ideas of horror and another universe at war with this one, which is kind of metaphorically true of matter and antimatter. In ‘Matter Of Life And Death’, antimatter is dangerous but also just utterly alien and beyond our understanding, and may also be linked to the idea of the Other Side in the sense of a spirit world beyond death. There’s an occult flavour to both of these.

On one level I find it quite annoying when scientific concepts are used like this. There doesn’t seem to be a good reason for using those specific ideas rather than something more fantastic and obviously made up, with no pretensions to a scientific basis. On another level, I do have sympathy with it, because it attempts to express the essential mystery of what I might call “The Beyond”. There’s a very human projection here of fear of the unknown, but also a sense of wonder, which is essential to science fiction. I’m not sure whether I’d describe either of these series as science fiction, though.

One of the factors in play here is having to put series on screen for popular consumption. ‘Star Trek’ has this issue too, as do probably all TV series aiming for more than a niche audience. It’s like the limeflower tea sold in supermarkets which also has lime peel in it because that’s what some consumers expect. On the other hand, a character in ‘Space 1999’ itself makes an interesting point in another episode, that as time goes by a mythology for the modern age will be created, and it’s possible that this is what’s happening here. But we also have to live in a scientifically literate civilisation.

I’ve also noticed that I’m a lot more forgiving of technobabble and its consequences on ‘Doctor Who’ than I am on ‘Space 1999’, and I can’t help thinking that this is simply because the latter is on the Other Side. Maybe to me, BBC TV matters, and ITV antimatters.