A Passing Phase

Trigger warning: cancer, infertility.

We humans have long tended to think of ourselves as the pinnacle of creation or evolution. Aristotle, a better biologist than he was a physicist, organised everything into a “great chain of being”, starting at the bottom with materia prima, unformed matter, and rising through minerals, plants, invertebrates and the various kinds of vertebrates to its peak in “man” – and yes, I do mean man, since he was supposed to be better than woman. Although ideas of evolution were around at the time, with natural historians wondering if humans had emerged from the water, the chain wasn’t supposed to be something up which beings ascended: they were set statically in their positions. Christians later added God to this scale, above humans, although it’s possible Aristotle had already done that. I don’t remember it that clearly.

Thousands of years later, along came Linnaeus, properly Carl von Linné, a botanist who invented Latin binomials, aiming to describe all life in neat categories called genera and species in a work entitled Systema Naturae. Homo sapiens is a good example; another, probably not coined by Linnaeus himself, is Boa constrictor. There’s a sense of security in his system, which has been much modified since he invented it although the principles remain the same. I don’t know whether hierarchy was built into his system in general, but he certainly courted controversy by including humans in it. Later still, Wallace and Erasmus and Charles Darwin, along with Lamarck, came up with the theory of evolution, leading to a strange set of misconceptions summed up by the question “if we came from monkeys, why are there still monkeys?”. A couple of things are wrong with this question, along with the idea that things move upwards when they evolve, and they’re worth mentioning now. First, the more recent approach of cladistics groups related organisms as everything more closely related to a particular species than to another. This means there’s a clade called simians which includes New and Old World monkeys along with the apes, humans among them, but there isn’t really a clade just for monkeys. Moreover, nothing ever evolves out of its clade, so insofar as there are monkeys, we are still monkeys: nothing ever stops being one. Second, there’s the mistaken idea that evolution is advance and everything moves “up”.
Just looking at the great apes, one species, the orangutan, has evolved less than the others, and because it has changed less it retains features in common with the rest. More significantly, human hands are more primitive, in the sense of having changed less, than those of chimps or gorillas, whose hands have evolved for knuckle-walking as well as handling things. The famous “march of progress” graphic is completely spurious, and dodgy in various other ways, because we didn’t evolve from chimp-like ancestors except insofar as we are chimp-like ourselves, and it’s as true to say that the other apes are descended from us as that we are descended from them. I think I’ve already talked about orangutans on here though.

In other words, in a sense there is no progress. That said, things do sometimes get more efficient. Modern predatory carnivores can run faster than their ancestors, for example, and replaced an earlier group of predatory mammals who couldn’t capture prey with their paws but had to use their jaws instead. But even where intelligence is concerned, because humans can use language our short-term memories are much worse than those of chimps and of our common ancestors. This is particularly interesting because the recent concern about social media, and the internet more generally, reducing attention span and concentration is actually only the latest phase in a process which began with the appearance of language, continued with the invention of writing and the growth of literacy, and has reached a more advanced stage with our current “goldfish” brains (actually goldfish have good memories, of course).

Intelligence of the kind we have has been thrown up as something which appears to have been useful to us and our ancestors in recent geological time but, to refer to the title of this blog, it could be a passing phase. Being able to learn a lot brings problems which animals that don’t need to learn don’t have. Humans have to learn to do many things which other species can do instinctively, such as walking. Quite often an animal has a single “party trick”, such as spinning an orb web in the case of some spiders, which is not reflected in the rest of its accomplishments, but of course a human could learn to weave a net for a similar purpose. Termites can build arches, but humans can invent arches and learn how to make them from others, by word of mouth, observation, study or muscle memory.

All this comes at a cost. We have a long childhood and in order to reproduce physically (we’re social and cultural beings who also reproduce in the noösphere), ideally we need to get through puberty. We then need to find a partner and wait for pregnancy to produce one or occasionally two or more offspring at a time, who then take up much of our time and most of our energy. I’ve made this a heteronormative account for the sake of simplicity, and there are other possible narratives regarding lifetimes, but whatever they are, we are cultural, we depend on each other and what we do takes a long time, so the same principles still stand.

At the same time, we’re developing goldfish brains in several ways, mainly in connection with digital ICT. We’re outsourcing a lot of our thought. Nowadays people even use AI chatbots to talk to potential romantic partners. We’re – I hardly need to say this, as it feels like a string of platitudes – dominated by social media, fake news, fake images generated, again, by AI, and who knows what else?

In the meantime, we interfere with the biosphere without even thinking about what we’re doing, although the fact that we think and have the kind of intelligence we have leads to the damage we do, even unwittingly. That intelligence, such as it is, is a potential liability to the planet’s life.

While all that is going on, something else carries on upon the sea bed and elsewhere. There are, to take a particular example, animals called placozoa, which are simply irregular, lichen-like layers of cells clinging to rocks and consuming algae and other microörganisms in their vicinity. And then elsewhere there are certain tumours which can be passed from animal to animal. One of these is canine transmissible venereal tumour, a sarcoma usually transmitted by mating among several canine species, including dogs, wolves and coyotes, which grows on the genitals. Another is devil facial tumour disease, a similarly-spread tumour affecting the faces of Tasmanian devils, transmitted when they bite each other during fights. These tumours and the placozoa spread without needing to find mates, have practically no gestation or maturation period, and they don’t need no education. There are also transmissible cancers among bivalves such as cockles and mussels. At the same time, all of these are rare.

Henrietta Lacks is a well-known woman whose cervical squamous cell carcinoma is notorious for still thriving seventy years after her death. The cell line, known as HeLa, is effectively immortal and has taken over other carcinoma cell lines growing in labs, to the extent that certain lines have been unwittingly lost to contamination by her cancer. I have to mention too that Ms Lacks’s heirs have never seen a cent of the millions of dollars of profit resulting from the research done on her cells, and that her name was for a long time completely unknown to the general public.

I know I’ve said all this before, and I’m reiterating it because it occurs to me that this train of thought can develop in a direction I haven’t previously considered. I’m sorry about the repetition, but I have something new.

To repeat what I’ve said previously, another interesting phenomenon is that of organoids. Sometimes, the cells we shed into sewers from our bodies begin new lives briefly by starting to divide and form structures in sewage works. And of course we know that untreated sewage is often discharged into the sea.

Transmissible cancers are admittedly rare, but bear with me.

Putting these bits together, suppose HPV, which is partly responsible for HeLa cells, were to produce just the right mutation in a cervical squamous cell carcinoma to make it transmissible in the same way as canine transmissible venereal tumour. This is improbable, but not impossible. Such a cancer is malignant, able to invade and destroy tissues, including those of the reproductive system, and can cause infertility; it’s also passed on during childbirth, although not usually to the genitals, and is terminal if not treated. It could be expected to spread somewhat like AIDS. At the same time, its cells would be shed into the sewers and reach the sea when untreated sewage is discharged there. There, they continue to divide and attach themselves to the bodies of marine mammals with naked skin, such as whales and seals, spreading malignantly into their skin and, in the case of seals and the smaller cetaceans, killing them, while shedding into the water to infect other individuals. Some settle on the sea bed and feed on microbes, similarly to placozoa.

The second ingredient is linked to Covid, but extended. One of the long-term effects of Covid on some people is cognitive impairment, reported here, although the effects are relatively mild. I’m tempted to measure it in terms of IQ, but that would just give a spurious sense of precision and quantity. Covid is likely to be only an early example of many pandemics, because deforestation and climate change are pushing viral vectors such as bats into new environments where they’re more likely to come into contact with people. AIDS probably arose this way decades ago, more specifically through the human consumption of bushmeat. It doesn’t stretch credulity to expect the after-effects of viral pandemics to cause a reduction in intelligence, although describing it as a “reduction” assumes some kind of scale, and I’ve already said that scales are somewhat odious – not in all cases – so it gets a bit difficult to express what I mean. What I mean is that people will be less able to solve intellectually demanding problems and to think critically.

Now imagine that in this world of attritional cognitive decline, caused by a series of pandemics stemming from deforestation and climate change, we continue to be confronted with various problems, another of which is antibiotic resistance. We not only lack the mental capacity to address them as a species, but the very institutions meant to address them are starved of resources and of the ability to operate together in a global research community, as we’re currently seeing in the US. At this point it might even be necessary for AI to take over, and if it isn’t, bad decision-making could lead to that happening anyway.

This leads me to the third consideration in this mess: AI misalignment. It isn’t that AI is malevolent. The idea was once suggested that an AI might be instructed to make as many paperclips as possible and go on to convert the whole planet into paperclips, then send out space probes to turn everything else possible in the Universe into paperclips. It’s a somewhat silly example, but it’s like the monkey’s paw story, where wishes are granted ironically and malignly. Imagine, therefore, that in this human world of cognitive decline, AIs are instructed to “ensure biological humans survive for as long as possible”, the idea being to guard against something like mind uploading into the cloud or the manufacture of robots with human cognition and our memories copied into them. So they obey the instruction. They locate the currently rare transmissible tumours, place them in vats or perhaps coastal lagoons, guard them effectively, redirect all agricultural food supplies to them, and reason that this decision encourages the mindless, unintelligent variety of human cell line, which is less harmful to the environment than human intelligence and technology. Humans as we understand them are then left sterile, dying of viral infections, less intelligent than before by gradual degrees and unable to take care of themselves. Intelligence wanes and dies.

So that ^^^ basically.

And we’re all dead, but on the bright side there are massive vats of cancer tumours all over the world which also leak into the sea where they kill all the dolphins and seals.

Of course, this is a perfect storm of a prospect, and the transmissible tumour angle in particular is quite improbable. But there is a biological argument here. This world of human survival only in the form of cancer is supposed to illustrate that intelligence, which we prize and think of as the pinnacle of some kind of progress, could actually be a passing phase and a liability to the survival of our genes. In our civilisation, education and good critical thinking skills tend to exclude the people who have them from contributing to a society dominated by people without them, so although this passing phase of liberalism and tolerance would promote the long-term survival of the species, it can’t have a long-term influence unless people are flexible enough to move beyond scarcity-based economics. Ironically, so-called eugenics is also harmful to our long-term survival because it reduces diversity. To give a strictly physical example, a species which varies in its heat and cold tolerance, with some individuals thriving in hot weather and others in cold, would be able to survive fluctuations in temperature over a long period. A world of blond-haired, fair-skinned and blue-eyed people is incestuous. And whereas Musk, for example, might prefer to spread his genes, preferring his own traits, he doesn’t have the broad perspective of what may be adaptive and selected for in the long run.

The short-term benefits of language and shared memory, along with the capacity to act upon them, become brittle not because we’re intelligent but because we’re not intelligent enough. If we were able to anticipate and work through the probable consequences of our actions in detail, and be vividly aware of them, we might be more resilient in the circumstances we’ve created for ourselves. Maybe it’s the crows next time, or maybe there won’t be another turn. Earth’s story is long and indifferent, and the Medea Hypothesis – the Gaia Hypothesis’s evil twin – captures what this might be about. According to the Medea Hypothesis, far from ushering the planet into a more habitable condition, multicellular life is self-destructive and tends to push the planet into a situation where only simple single-celled organisms can survive. I’m not sure this is illustrated by this specific trajectory though. It may be more that intelligence is just one of countless possible survival strategies life can manifest, and that simple, undirected, arbitrary processes just lead us to blunder into the next phase, which won’t favour intelligence at all. If this is true, it may or may not have an analogue in the state of the human world. What would an intelligent society look like? Or is the relevant quality intelligence, or wisdom? Have we lived through the period of history where intelligence had much influence on politics or world events? If so, what does that mean for progressive and conservative views? I can’t help but be tempted by the idea that liberal democracy, good though it was, was a brief phase in a few countries which is long since gone. And my reaction to that is not to adopt conservatism, as that clearly doesn’t work and is in any case morally reprehensible. So what is to be done?

‘The Book Of Predictions’ – A Review of Successful and Failed Predictions

Forty years ago now, having published two Books of Lists, the same authors compiled ‘The Book of Predictions’, presumably prompted by the fact that the 1980s had just begun. The preceding two titles are creatures of their time, in that their content is now the kind of thing you might find on Buzzfeed or elsewhere on the internet, but since such things were entirely new at the time they were very popular, and there were two further editions after ‘Predictions’. I was personally fascinated by them. They only ended when the internet began to be more available to the general public, the last book being published a year before Windows 95 came out.

‘Predictions’ is, as is often acknowledged within its pages, very hit and miss, and many of the predictions seem ludicrous today. It also isn’t just about predictions beyond 1981: it has, for example, a whole chapter on bad predictions, another on well-known predictors such as Nostradamus, and pieces on matters such as nuclear weapons, agriculture and music. However, the bulk of the book consists of predictions from various people, broken up into periods starting in 1982 and ending in 2030, though some reach much further ahead, the last date being the year 3000.

One of the interesting things which can be done with this book today is not simply to look back on it and laugh, or to see what it got right and wrong, but to ascertain which predictors were the most successful, and to look at detectable themes: what was wrong but widely expected, what was right and widely expected, and what was correct but predicted by only one or two people. From that, it’s possible to work out what kind of person tends to be correct about the future.

Predictors can be placed in several categories: bookies, experts, science fiction writers and a fourth category I might describe as “psychics”, although that isn’t entirely accurate because, for example, it includes astrologers. Not everyone asked took it seriously, and quite a few people pointed out, perhaps partly as a way of protecting their reputations, that they intended their contributions to be considered not so much predictions as forecasts. Arthur C Clarke, for example, did this, and he’s an interesting case because of his own ‘Profiles Of The Future’, published two decades previously, which sought to do something quite similar in the realm of science and technology. ‘Predictions’ has a wider remit, covering the likes of geopolitics, population, climate change and warfare.

Conceivably, the book could suffer either from vagueness plus confirmation bias or from the “right twice a day” effect. If one makes a large enough number of predictions, some of them will come true simply due to their variety and the laws of probability. Moreover, with hindsight, the words written can be crammed into the moulds subsequent events provide, particularly if they’re quite general. This is often said of Nostradamus’s work, for example, but as I’ve covered elsewhere, it’s notable that interpretations of his ‘Centuries’ do sometimes seem remarkably accurate. Nostradamus is subject to a kind of “hyper-skepticism” which is not really scepticism at all, involving people making up their minds in advance that he’s wrong, and in fact he was widely interpreted as predicting 9/11 several decades before it happened: Erica Cheetham’s books, for example, published in the late 1970s, describe the Twin Towers attack quite accurately. Nonetheless, it’s important to bear such cognitive biases in mind.

The content is in some places adversely affected by people choosing to propagandise and grind axes. A particularly notable example is David S Sullivan of the CIA, who described the USSR taking over the world. His “predictions” now read as either paranoid or nakedly doctrinaire. Other people seem to have described what they would like to happen, without paying much attention to the trends of history or the Zeitgeist, such as one author who portrayed extensive neighbourhood food-gardening programs, for example on high-rise roofs, which is a nice idea and might even be practical, but seems quite fanciful and is clearly wishful thinking.

Notably, a number of predictions were made multiple times and turned out to be wrong, and it’s instructive to contemplate why they haven’t happened. These include: nuclear war, nuclear blackmail, orbital solar power stations (probably the most common of all), the Jupiter Effect, widespread hydroponics, high inflation, controlled fusion, 3-D television, widespread use of holograms, a lunar base, a new Ice Age and underground cities. One of these probably needs further explanation if you weren’t old enough to remember it at the time. The Jupiter Effect was a prediction made in 1974, based on the fact that on 10th March 1982 all the planets, including Pluto, would be on the same side of the Sun, within 95° of each other – which was true, and easily predictable from the known movements of bodies within the Solar System. The idea was that the tidal forces raised by all the planets on this one would lead to quakes and volcanic eruptions, with the San Andreas Fault a particular focus, and there was also a retroactive prophecy that the effect had occurred two years earlier and caused the eruption of Mount St Helens. Some of the predictors in the book not only included this prediction but went on to describe its probable political, economic and social consequences. To be fair, if there had been such a cluster of natural disasters, the predictions conditional upon them would probably have been quite accurate, and although it wasn’t widely discussed at the time, it’s also conceivable that the Butterfly Effect could have had a hand in bringing it about. But it didn’t happen, and John Gribbin, who wrote the book and its Mount St Helens sequel along with Stephen Plagemann, later said he was embarrassed about the forecast and retracted his claims.

Controlled fusion, of course, comes up over and over again in predictions and is permanently “thirty years away”. If a similar compilation were made today, it would almost certainly be included. It hasn’t happened, of course, and back then many people would have been just as dismissive of it as I’m being now. That doesn’t mean it will never happen. In Brian Stableford’s ‘History of the Third Millennium’, I think its achievement was placed in the 2070s, when it depended on supercomputers being able to predict and vary the configuration of the magnetic containment bottle sufficiently fast, so maybe. I don’t know. I put controlled fusion in ‘1934’. However, there’s another, more peculiar and mysterious, incorrect prediction which was made over and over again: orbital solar power. For instance, the July 1976 National Geographic includes an illustration of an enormous solar power station several kilometres across being constructed at the L5 point near Cynthia (“the Moon”). Several reasons why this hasn’t happened suggest themselves: we’ve stayed in low Earth orbit since the Apollo missions; the microwave beams needed to transmit power back to this planet are perceived as effectively death rays; and the fossil fuel lobby. It would, however, have been a glaringly different future in that respect, because if this had been done safely, the Arab countries relying on oil wealth would no longer have been able to do so and would probably have become more liberal, and of course there would be less anthropogenic climate change.

This brings me to the fourth widely made error: the expectation of a new Ice Age. This was famously held by Nigel Calder, a former editor of ‘New Scientist’, but it was in any case uncertain. A 1977 edition of ‘National Geographic’ includes an article called ‘What’s Happening To Our Climate?’, which expresses uncertainty about whether it would get hotter or colder, and which also includes the first reference I ever read to the Butterfly Effect. I remember reading it and concluding that it would get hotter, which disappointed me because I found the idea of a new Ice Age quite exciting. There’s even a list of which countries would be worst affected, including this one and Bhutan. Nigel Calder did contribute to the book, and predicted that by 2000 anthropogenic climate change would have been refuted, and also that SETI would have been abandoned due to negative results. It’s interesting that the second of these hasn’t happened, and I’ll return to that.

Hydroponics were a “futuristic” staple of science fiction a few decades ago, and are still practised sometimes. They have the advantages of not needing suitable soil and of being free from competitive species, but they haven’t caught on, and I don’t know why. I actually almost tried them myself this year, but Sarada vetoed it due to lack of room in the house. Somewhere on this blog is a breakdown of land use taken from this book, and it’s far more efficient to grow many crops hydroponically than in soil; it also preserves the soil from erosion. Hydroponic plants give a greater yield, grow twice as fast as plants in soil and use a tenth of the water, and harvesting is simpler. However, there can be waterborne harmful organisms; the plants need a lot of monitoring, so they’re labour-intensive; they rely on human intervention rather than soil for nutrition; and the root systems are small, so I imagine you can’t easily grow carrots or potatoes this way. There also needs to be a reliable power supply, and the initial investment is higher than for soil-grown crops. Even so, the disadvantages don’t seem major enough to explain why they haven’t been adopted, so I don’t really know why they aren’t more popular, although given the power and capital requirements I can see the issue with hydroponics in developing countries.

High inflation can be seen as an extrapolation of “more of the same”. People at the time widely expected inflation to continue as it had, and even the predictions of falling inflation in the book are quite modest, one being eight percent. This didn’t happen, of course, because increasing unemployment came to be used as a policy to keep wages down and decrease the cost of production. Presumably at the time this was only theoretical or considered beyond the pale, or it may simply be that inflation was then considered a fact of life. It’s very common in fiction for the inflation of the day to be assumed to continue indefinitely, and TVTropes even has a page on it. There’s also some disguised inflation, particularly in property prices and therefore the cost of accommodation, which tends to be excluded from quoted figures, so the true inflation rate is in fact higher than it seems.

Three-dimensional displays and holograms are another anomaly. The latter were popular at the time the book was written, and ‘The History Of The Third Millennium’, published in 1986, used two as cover images. There was also experimental holographic cinema. It appears that actual holographic displays, analogous to televisions and monitors, would need very small, rapidly moving parts, which may be the problem. Again, I’m not sure about this one. Still holograms in particular ought to have caught on more than they have, but in fact they seem to have been a limited fad. Lunar bases also haven’t happened, due to the curtailment of human space travel – a result of perceived high costs without much return, and the fact that it has tended to be a public sector undertaking.

Finally in the list of unfulfilled popular predictions is the reduction in average working hours. This hasn’t happened because of what’s been called the “bullshit jobs” phenomenon, whereby useless paid work increases, and because universal basic income hasn’t happened either. There’s also the usual prediction that school hours would shrink, or dwindle to nothing and be replaced by home education, which I won’t discuss here because I have a whole blog devoted to the matter. In a way, of course, this has now happened, though not in a very positive manner.

There are also a number of popular accurate predictions. Top among these is the internet. The internet as a popular tool was predicted from as long ago as the mid-twentieth century, and is quite possibly the most predictable thing ever to have happened from that era’s perspective. Several other developments were also widely predicted, some connected to this, including mainstream domestic waste recycling, Electronic Funds Transfer by consumers, video calling, ebooks and ebook readers, print on demand, online shopping, including for groceries, computers beating chess champions, pharmaceutical cognitive enhancement, the Hubble Space Telescope, the detection of exoplanets and a major nuclear power station disaster in the mid-’80s resulting in many deaths. Another, half-right and half-wrong, is the widespread use of mobile ’phones, which were of course expected to be worn on the wrist. Eventual Black majority rule in South Africa was also popularly predicted, although, like the collapse of the Soviet Union, also predicted by many, it was expected to happen much later than it in fact did. There isn’t much more to say about these predictions other than that they were correct, except that they share a feature with the more sporadic correct predictions.

This feature is that although there are many unequivocally correct predictions in the book, they tend to be dated much earlier than they in fact happened. A relevant observation is made about this in the book itself: one contributor noted that predictions for less than a decade ahead are usually too radical, and those for more than a decade ahead too conservative. This is borne out by the pattern in the book. Many things predicted for the ’80s did eventually happen, but long after the end of that decade. On the other hand, the end of Apartheid and of the Soviet Union are often either missed completely or dated much later than they actually occurred. Applying this idea to today’s scientific and technological predictions: the dramatic, perhaps more hyped, ones predicted to occur by 2031 will be dated wrong but may well happen eventually, while those expected after 2031 (writing in 2021) will sometimes be correct but occur earlier than expected. This raises the question of what happens with events predicted towards the beginning of that window, and it also suggests, if it could be treated that mathematically, that events predicted to be around a decade away are, if correct, the most likely to be accurately dated. That may not be so, but it does suggest that an optimum period for accuracy could be calculated, given enough predictions. It would also be surprising if predictions made for the following year were the least accurate. In fact, a popular date for correct but chronologically inaccurate predictions in the book is 1987, just over half a decade after they were made. The accuracy graph has a hump about five or six years out, declining towards ten years and trailing off to almost nothing after that.
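For what it’s worth, the hump could in principle be tested rather than eyeballed: bin each prediction by how far ahead it was made and compute the hit rate per bin. Here’s a minimal sketch in Python, using entirely invented placeholder data rather than figures transcribed from the book:

```python
# Toy sketch of testing the "accuracy hump": bin predictions by forecast
# horizon (years ahead) and compute the proportion that came true on time.
# The sample data below are invented placeholders, not from the book.

from collections import defaultdict

def hit_rate_by_horizon(predictions, bin_width=2):
    """predictions: list of (horizon_years, came_true) pairs.
    Returns {bin_start: hit_rate} for each occupied horizon bin."""
    bins = defaultdict(lambda: [0, 0])  # bin_start -> [hits, total]
    for horizon, hit in predictions:
        b = (horizon // bin_width) * bin_width
        bins[b][1] += 1
        if hit:
            bins[b][0] += 1
    return {b: hits / total for b, (hits, total) in sorted(bins.items())}

# Hypothetical sample: (years ahead when made, did it happen on schedule?)
sample = [(1, False), (2, False), (5, True), (6, True), (5, True),
          (6, False), (9, False), (10, False), (15, False), (20, False)]

print(hit_rate_by_horizon(sample))
# With this invented sample, the 4-6 year bins score highest.
```

With real data transcribed from the book, the bin with the highest hit rate would show whether the five-to-six-year hump is real or just an impression.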

A further pattern is that there are two types of people predicting correctly. One, rather small, set of people makes hit after hit, as if they’ve seen documentary footage of the history of the next fifty years. A much larger group has one or two hits. I’ll list some of the sporadically accurate predictions (by content rather than date) first:

  • Devolved local government, i.e. small “town halls” distributed throughout a district rather than in a single location.
  • New diseases due to environmental destruction. This is around the time AIDS was discovered but at that point it hadn’t been connected to deforestation.
  • The discovery of a new phylum of animals in the deep ocean.
  • Brain implants for neurological problems.
  • An insulin pump.
  • Drones.
  • Self-driving cars.
  • Computers more common than cars and used more than driving.
  • “Communism” ends in the USSR, which then breaks up.
  • Public-private partnerships and ’80s focus on the rôle of government in the economy.
  • Biosphere II.
  • Principles of social order become science-based rather than ethics-based.
  • Video evidence and testimony admissible in court rooms (the first of these may change due to deepfakes).
  • An explosion in CGI animation in cinema.
  • Online music libraries with agents recommending tracks and artists according to the consumer’s personal tastes.
  • Reduction in inflation. Amazingly, only one person out of dozens made this correct prediction, and even the date is accurate.
  • Gated communities.
  • A viral pandemic starting in the Far East which spreads throughout the world, predicted to occur in 2025.

There are also a few “super-predictors” – people who got almost everything right, though usually not the year it would happen. The crucial thing is to try to work out what factors made these people so good at it, but before I get to them, I want to mention a few others who stand out. Two of them, for me, are Steve Wolfe and Roy Wysack, but there’s a personal connection and I get the impression they’d prefer me not to discuss their predictions, so I’ll leave them at that mention. There’s a list, rather sad in hindsight, made by Jim Fixx, who predicts among other things that he will compete in the Boston Marathon in 2030 at the age of ninety-eight. Rather oddly, the SF writer A E van Vogt appears to mention Jim Fixx’s death in the same volume, although it didn’t happen until 1984. Isaac Asimov, whom one might expect to be amazingly accurate, actually only got one thing right, and that was the internet, which is such a widespread prediction as not to be remarkable at all. The overpopulation gurus Anne and Paul Ehrlich didn’t make a single accurate prediction. The last honourable mention goes to Timothy Leary, who was laughably, ludicrously wrong about absolutely everything, and I can’t help wondering if there’s a link to drug-taking there.

On to the super-predictors, then.

David Pearce Snyder is still active today and describes himself as a consulting futurist. He predicted nationwide EFT, software piracy, email scams, internet shopping, smart meters, the decline of small businesses due to online shopping, Chernobyl, the growth of small extremist political parties in the US, 9/11 (not in detail) and online courses. However, he also predicted that all of these things would come to pass by 1989. Even Chernobyl is predicted two years early.

Joseph Martino, who seems to have died recently, made a number of predictions about consumer ownership in terms of the percentage of households owning particular products. These included 90% videodisc or equivalent ownership by 2006, which is probably accurate – he’s unwittingly talking about DVDs; 90% of correspondence by email by 2005; 90% professional use of internet journals by 2004 and 90% school access to the internet by 2001; 90% ownership of video games consoles by 1992 (in the event, probably the Sega Mega Drive); and 90% of consumer commerce by EFT by 1995 (the date is wrong here but it did happen). That’s a lot of “ninety percents”. These are all roughly correct, and it’s notable that he was able to predict accurately because he was approximate. He didn’t name the games consoles involved, was aware of the likelihood of a successor to the videodisc but didn’t know what it would be, and so forth, and this may be what helped his accuracy. However, it isn’t just vagueness which allows a prediction to be fitted to the facts post hoc, but a kind of flexibility of imagination and openness to possibilities, which may be key to his success.

Andrei Sakharov is in there too, and was quite accurate. He successfully predicted the use of computers for accurate weather forecasting and protein folding, the invention of smart materials and the detection of exoplanets, which is a common prediction.

Bell Labs is not an individual but has a history of accurate prediction. They managed to predict personal ‘phone numbers (these exist but are also realised through mobile ‘phone use), the redundancy of professional telephone installers, the internet and voicemail services as opposed to answerphones.

Professor Garry Hunt is also still with us. He predicted the detection of a planet beyond Pluto (actually dwarf planets, given their redefinition, but from a 1981 perspective this is correct), the discovery of rings around Neptune (I also expected this, and by the time Voyager had found Jupiter’s rings it seemed a bit of a no-brainer, though it wouldn’t have been scientific to say so at the time), problems caused by space débris, Mars rovers (oddly very late on, in the late 2020s), non-attributable anthropogenic climate change, i.e. the stochastic increase in weather-related disasters which can be attributed as a group to climate change but in no individual case, a decline in coal production and the rise of Zoom-style videoconferencing, obviously not under that name.

Trudy E Bell and the SF author Ben Bova were both editors of the science magazine OMNI, and the former also edited ‘Scientific American’. Between them, they predicted Spacelab, non-scientists on space shuttle missions (nobody predicted the Challenger disaster, incidentally), the absence of SETI results, space tourism by the rich and a US Space Force, i.e. Trump’s idea, although I’m not clear what that is.

I’m going to take a break from this list to talk about the SETI predictions. Both Nigel Calder and Trudy E Bell predicted that SETI would be abandoned due to lack of positive results, but this has not happened. To be ad hominem for a second, Calder’s prediction was coupled with his dramatically incorrect, and also politically incorrect, prediction that anthropogenic climate change leading to overall warming would be soundly refuted. Whether or not you accept anthropogenic climate change (and it’s anti-scientific to reject it), it’s very clear that the majority of scientists and governments do, and if you like you can compare Calder’s position to my own rejection of non-baryonic dark matter. I’m aware that the consensus is in favour of the existence of non-baryonic dark matter, but I might want to assert my belief that it doesn’t exist by claiming that it will be rejected within ten years. However, I very much doubt that it will be rejected, so it seems to me that Calder’s claim has an emotive element to it. Regarding SETI, do people know what that is? Just in case you don’t, SETI is the Search for ExtraTerrestrial Intelligence, mainly via attempts to detect radio signals, although there are other approaches, such as looking for megastructures being built around Sun-like stars. The most positive result is the “Wow!” signal, which predates the book. It’s been noted since 1981 that the interval during which detectable significant radio signals would be transmitted would be very short compared to the length of a civilisation’s history. We no longer use analogue signals much in the developed world (I don’t know how things are elsewhere), and this was not appreciated at the time of publication. That said, it’s equally possible that we’re all just in denial about it. Also, accidentally transmitted signals from our planet don’t reach as far as Proxima Centauri, and it’s been suggested that, for all we know, other civilisations use zeta rays, which we have yet to discover. It’s hard to say what’s really going on.

It was G. Harry Stine, now long-since deceased, who contrasted short- and long-term predictions and also said he made forecasts rather than predictions. It’s therefore ironic that he managed to be a super-predictor. He predicted the internet, new light-weight structural materials (such as carbon and boron nitride nanotubes), electronic picture frames, landscape channels, large flat-screen displays, cognition-enhancing drugs, transgenic babies and the electronic alteration of brain function. Stine was involved in model rocketry, a SF author, libertarian activist and was instrumental in the development of the Strategic Defense Initiative. Hence he was in a position close to government policy and also technology and speculative fiction, and maybe it was this combination which made him such a good predictor.

Roy Mason, as an architect an example of nominative determinism, was quite successful. He predicted domestic internet access, working from home and large flat display screens with scenic displays (and it occurs to me now that this is actually what we later called “desktop wallpaper”). Marvin Adelson, a name which may be incorrect, predicted computer simulation and mockups of planned buildings, poor people living as caretakers in largely empty buildings and innovative spectacular architecture in oil states.

Then there’s the sexologist and psychotherapist Albert Ellis, who is generally spot-on about everything in his own area of expertise. In his case I get the impression that he was one of the movers who helped create the world as it is today in sexual terms, which is why he’s so accurate. It’s sort of a case of him doing it himself, almost as if he just published his agenda to the world and said, “this is what I’ll be doing for the rest of my life” – a life which ended in 2007. He predicted no guilt about masturbation by 1992, along with fewer than ten percent of people feeling guilt about premarital sex, the routine use of condoms to prevent STDs and millions of people leaving orthodox religion because of its prudishness and prejudices about sex. By 2000, most couples would cohabit before marriage. In 2010, most faith groups would be more liberal about sex, and by 2020, 85% of married women and 90% of married men would have had pre-marital sex. He didn’t make a single incorrect prediction.

Charlie Gillett, a music journalist, predicted the rise of independent record labels in the ’80s, increased popularity of world music in the West and online music libraries from the 1990s. Again, he was correct, but he also made a prediction about music tracks missing a single instrument which the listener could play along to with theirs, which is similar to karaoke but if it happened, didn’t become popular. Gillett would’ve got to see these things come to pass as he died in 2010.

After this lot come the psychics, and you might expect them to have posted a load of rubbish, but this is actually not so. A disparate group of people have been lumped together here, and it’s notable that while the astrologer Andrew Reiss didn’t get anything right, the people actually claiming to be psychic didn’t do too badly. Bertie Catchings had quite a few misses, but also followed the usual pattern of getting things correct while placing them too early. These included satnav, GPS use by the public, Google Earth, a data TV channel, ubiquitous mobile ‘phones by 1990 (actual handheld ‘phones this time rather than the usual wristwatch-device prediction), computerised address and ‘phone directories and tags for lost children (which are really mobile ‘phones again). Ann Fisher got gated communities correct, along with similar fortress-like protection for corporate headquarters. One Francine Steiger successfully predicted domestic biomass fuel. Beverly Jaegers is an interesting one. She was an ex-sceptic who described herself as having received “no fall on the head”, having “no special powers”, but being “just a person who tried and found she could do it”. Consequently it isn’t clear exactly what’s going on with her, and it probably merits further exposition. She predicted laser surgery, insulin pumps and liver transplants. Jaegers claimed that psychic abilities are in everyone and can be brought out via training and hard work, which is a very appealing line. One would want to believe it was true, but it’s still interesting that she had moved from scepticism to belief. It seems plausible to me that there might be a way of working subconsciously on data received via the senses to form some kind of Gestalt which turns out to be right. Psi in the sense of extrasensory perception might not be required; the result could still be more than guessing – detailed and uncannily accurate.

Here are a couple more super-predictors. Ian Miles predicted Madonna-style eroticism in fashion, dyed hair, ebook readers (using cassettes, though) linkable to home micros, questions over the intrusion of video into privacy, and the existence of special interest groups around niche porn and also child sexual abuse rings. He also foresaw multimedia PCs with internet access.

The last person I want to consider is one Arnold Brown, co-chair of an “invisible college” of corporate futurists and planners. Again, this person might be a bit like Albert Ellis, in that he had his finger on the pulse of actual future planning, which raises the question of whether he was in fact reporting on some kind of plan which has since been successfully realised. He predicted the idea of retro charm and the collector’s-item status of any manufactured item pre-dating 1945, interactive TV, pay TV, electronic games, video discs, minicameras, home computers, the internet, online retail catalogues, online grocery shopping, internet banking and the end of print encyclopædias. He specifically mentions ‘The Book Of Predictions’ as something which would be superseded by online sources, and may well have had the books of lists in mind as well. Besides all that, he also envisaged the rise of extreme sports, the decline of paper book, newspaper and magazine publishing, and self-driving cars.

There’s also an article on a Central Premonitions Registry, which is of considerable interest. The founder, Robert Nelson, had to sift through huge quantities of rubbish and religious rants against what he was doing, and found his work quite depressing as it involved endless doom-mongering from the general public, but his aim was to find reliable predictors. One of them is included in the book, but only got one prediction correct – inflation falling to eight percent. However, he does mention one person who was quite remarkable. Arlene Handy was a poorly educated and barely literate woman whose letters were almost illegible. She claimed to be visited regularly in her dreams by two spirits who showed her the future. In the twelve years she wrote to Nelson, every prediction was wrong or impossible to understand except one. On 16th February 1973, she dreamt that two figures in white turbans climbed over a two-metre-high fence in Khartoum and killed an American ambassador. On 2nd March that year, Cleo A Noel Jr, US ambassador to Sudan, was assassinated by two members of the Black September Movement after being kidnapped, with the details as predicted. So the question arises: was this just pareidolia after the fact, a hit due to sheer volume of material?

In 1982, OMNI reported that the most successful predictors were those who used both hemispheres of their brains to do so. Whereas this partakes of the whole hemisphericity myth, it’s possible to salvage something from it without positing precognitive abilities. What it means is that if you want to make a high proportion of successful predictions, your best bet is to use both intuition, imagination and hunches, and analysis, logic and extrapolation – i.e. the supposèd right- and left-brain functions. This sounds like good advice. The best strategies for successful prediction, or rather perceived successful prediction, are not the same as the best approach to real success in this area. It’s possible to make a large number of vague predictions, some of which will then turn out to sound as if they’re correct. On the other hand, a degree of vagueness of the right kind is not the same as dishonesty, but recognises that the future is substantially unknown. For example, the prediction of video on demand tailored to the user, made in this book, is essentially YouTube, and it’s an achievement to abstract it sufficiently from what was possible or thought of at the time to predict what would actually happen. Similarly with agents which choose content based on one’s previous preferences. Neither of these depends on knowing the details of internet browsers or web servers, but they are nonetheless significantly accurate almost because they’re vague, and that vagueness is not the kind of cheating which allows anyone to read something into the prediction with hindsight. If you are in some way embedded in a particular field, it seems that you’re likely to be good at making predictions in that field. This is true, for example, of Albert Ellis and Arnold Brown.
On the other hand, if the CIA contributor can be taken at his word, he was heavily involved in intelligence work, and it almost seems that this was what left him unable to see the wood for the trees, making phenomenally incorrect forecasts about the state of the world. It’s also notable that some of the things everyone expects to happen really do happen, and nobody is surprised, but maybe about half of the events most people would agree are bound to happen actually don’t, and it’s worth asking what the differences are between those two categories. It’s also important to be detached from one’s prejudices. Several of the most accurate predictors here have backgrounds which I find completely incompatible with my values, and on the other side there appear to be, for whatever reason, accurate predictors who say they’re psychic, which would be hard to swallow for someone of a sceptical bent.

For what it’s worth, I do believe in precognition as a psionic ability, as opposed to a mere talent for guessing the future accurately based on rational processes applied to information received through the scientifically recognised senses. The reason I believe this is that there are various events for which it is the simplest explanation, and up until fairly recently the possibility that psionics exists was taken seriously in academia. The fact that it isn’t currently doesn’t make it any less valid. I’ve noticed that other reviews of this book tend not to accept its accuracy in some areas. For instance, there’s a review on Goodreads which says only two percent of the predictions are correct, which is a major underestimate. The book is also useful as a pointer to the super-predictors, and it’s worth listening to those who are still around, reading what else those who have passed on had to say, and examining their lives, to work out what makes a good predictor. Hence the book is still worthwhile, and will doubtless still be so in 2031, by which time its predictions will mainly have run their course.

The Anti-Universe

A prominent mythological theme is that of time being cyclical. For instance, in Hinduism there is a detailed chronology which repeats endlessly. Bearing in mind that the numbers used in mythological contexts are often mainly there to indicate enormity or tininess, there is the kalpa, which lasts 4 320 million years and is equivalent to a day in Brahma’s life. There are three hundred and sixty of these days in a Brahman year, and a hundred Brahman years in a Brahman lifetime, after which the cycle repeats. Within a Brahman Day, human history also repeats a cycle known as the Yuga Cycle, which consists of four ages, Satya, Treta, Dvapara and Kali. The names refer to the proportion of virtue and vice characterising each age, so Satya is perfect, life is long, everyone is kind to each other, wise, healthy and so on, satya meaning “truth” or “sincerity”, Treta is “third” in the sense of being three quarters virtue and one quarter vice, Dvapara is two quarters of each and Kali, unsurprisingly the current age, is the age of evil and destruction. Humans start off as giants and end as dwarfs. Then the cycle repeats. Thus there are cycles within cycles in Hindu cosmology.
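Taking the figures in the text at face value (and counting only Brahma’s days, not the nights of equal length which the tradition also includes), the chronology multiplies out as follows:

```python
kalpa = 4_320_000_000              # years in one day of Brahma
brahman_year = 360 * kalpa         # three hundred and sixty such days to a Brahman year
brahman_lifetime = 100 * brahman_year  # a hundred Brahman years, then the cycle repeats
print(f"{brahman_lifetime:,} years")   # → 155,520,000,000,000 years
```

So a full cycle on this counting is over a hundred trillion years – more than ten thousand times the current age of the Universe.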

The Maya also have a cyclical chronology, including the Long Count, in a cycle lasting 63 million years. Probably the most important cycle in Mesoamerican calendars is the fifty-two year one, during which the two different calendars cycle in and out of sync with each other. The Aztecs used to give away all their possessions at the end of that period in the expectation that the world might come to an end.

The Jewish tradition has a few similar features as well. Firstly, it appears to use people’s ages to indicate their health and the decline of virtue. The patriarchs named in the Book of Genesis tend to have shorter and shorter lives leading up to the Flood, which ends the lives of the last few generations before it, including the 969-year-old Methuselah. Giants are also mentioned in the form of the Nephilim, although they are seen as evil. I wonder if this reflects the inversion of good and evil which took place when Zoroastrianism began, where previously lauded deities were demonised. There is also a cycle in the practice of the Jubilee, consisting of a forty-nine-year Golden Jubilee and a shorter seven-year Jubilee, and obviously there are the seven-day weeks, which we still have in the West.

The Hindu series of Yugas also parallels the Greek tradition of Golden, Silver, Bronze and Iron Ages, which was ultimately adopted into modern archæology in modified form as the Three-Age System of Stone, Bronze and Iron. The crucial difference between the Hindu and Greek age systems and our own ideas of history is that they both believed in steady decline, whereas we tend to be more mixed: we tend to believe in progress, although our ideas of what constitutes progress vary quite a lot. In a way, because of the operation of entropy, it makes more sense to suppose that everything will get worse, although since history is meant to be cyclical it can also be expected to get better. Things age, wear out, run down, burn out and so on, and this is the regular experience of everyone, no matter when in history they live, so it makes sense that the world might be going in the same direction. On the longest timescale of course it is, because the Sun will burn out, followed by all other stars and so on.

Twentieth century cosmology included a similar theory, that of the oscillating Universe. It was considered possible that the quantity of mass in the Universe was sufficient that once it got past a certain age, gravity acting between all the masses in existence would start to pull everything back together again until it collapsed into the same hot, dense state which started the Universe in the first place. There then emerge a couple of issues. Would the Universe then bounce back and be reborn, only to do it again in an endless cycle? If each cycle is an exact repetition, does it even mean anything to say it’s a different Universe, or is it just the same Universe with time passing in a loop?

This is not currently a popular idea, because it turns out that there isn’t enough mass in the Universe to make it collapse against the dark energy which is pushing everything apart, so ultimately the objects in the Universe are expected to become increasingly isolated, until only one galaxy is visible in each region of the Universe within which space is expanding more slowly than the speed of light. This has a significant consequence. A species living in a galaxy at that time would be unaware that things had ever been different; there would be no evidence available to suggest it. We can currently see the galaxies receding, and therefore we can know that things will be like that one day, but they would have no way to discover that things hadn’t always been like this. This raises the question of what we might already have lost. We reconstruct the history of the Universe based on the data available to us, and we’re aware that we’re surrounded by galaxies which, on the very large scale, are receding from each other, so we can imagine the film rewinding and all the stars and galaxies, or what will become them, starting off in the same place. But how do we know there wasn’t once evidence, no longer recoverable, of something crucial to our own understanding of the Universe?

Physics has been in a bit of a strange state in recent decades. Because the levels of energy required cannot be achieved using current technology, the likes of the Large Hadron Collider are not powerful enough to provide more than a glimpse of the fundamental nature of physical reality. Consequently, physicists are having to engage in guesswork without much feedback, and this applies also to their conception of the entire Universe. I’ve long been very suspicious about the very existence of non-baryonic dark matter. Dark matter was originally proposed to explain why galaxies rotate as if they have much more gravity than their visible matter, i.e. stars, can exert. In fact, if gravity operates over long ranges in the same way as it does over short distances, such as within this solar system or between binary stars, something like nine-tenths of the mass is invisible. To some extent this can be explained by ordinary matter such as dust, planets or very dim stars, and there are also known subatomic particles, such as neutrinos, which are very common but virtually undetectable. The issue I have with non-baryonic dark matter, and I’ve been into this before on here, is that it seems to be a specially invented kind of matter which fills the gap in the model but is itself practically undetectable. There’s another possible solution: that gravity itself behaves differently over very long distances. What makes this worse is that dark matter is now being used to argue for flaws in the general theory of relativity, when it seems very clear that the problem is actually that physicists have proposed the existence of a kind of substance which is basically magic.

If you go back to the first moment of the Universe, there is a similar issue. Just after the grand unification epoch, a sextillionth (long scale) of a second after the Big Bang, an event is supposed to have taken place which increased each of the three spatial dimensions of the Universe by a factor of the order of one hundred quintillion within a millionth of a yoctosecond. If you don’t recognise these words, that’s because they denote unusually large and small quantities, and their exact values aren’t that important. Some physicists think this is fishy, because again something seems to have been simply invented to account for what happened in those circumstances, without there being other reasons for supposing it to be so. They therefore decided to see what would happen if they used established principles to recreate the early Universe, and in particular they focussed on CPT symmetry.
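For concreteness, here are those number words as powers of ten. I’m assuming the long-scale reading throughout, although the text only flags “sextillionth” as long scale, and these are the figures as quoted above rather than canonical values from the cosmology literature:

```python
# Long-scale readings of the number words in the paragraph above,
# kept as exponents to avoid floating-point noise.
sextillionth_exp = -36                         # a sextillionth of a second: 10^-36 s
yoctosecond_exp = -24                          # a yoctosecond: 10^-24 s
inflation_duration_exp = yoctosecond_exp - 6   # a millionth of a yoctosecond: 10^-30 s
expansion_factor_exp = 2 + 30                  # one hundred quintillion: 10^32
print(f"epoch ends at 10^{sextillionth_exp} s; "
      f"inflation lasts 10^{inflation_duration_exp} s; "
      f"space grows by 10^{expansion_factor_exp}")
```

The point of writing them out is just to show the absurdity of the scales involved: a thirty-two-order-of-magnitude expansion in a sliver of time far smaller than anything measurable.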

CPT symmetry stands for Charge, Parity and Time-reversal symmetry, and can be explained as follows, starting with time. Imagine a video, shown out of context, of two billiard balls hitting and bouncing off each other. It would be difficult to tell whether that video was being played forwards or backwards. This works well on a small scale, perhaps with two neutrons colliding at about the speed of sound at an angle to each other, or a laser beam reflecting off a mirror. Charge symmetry means that if you observe two equally but oppositely charged objects interacting, you could swap the charges and still observe the same behaviour; likewise, two objects with the same charge could both have the opposite charge and still do the same thing. Finally, parity symmetry is the fact that you can’t tell whether what you’re seeing is the right way up or upside down, or reflected. None of these symmetries is apparent in the complicated situations we usually observe, because of pesky things like gravity, or burning components out by sticking batteries in the wrong way round or miswiring plugs, but in sufficiently simple situations all of them hold.
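For the record, the standard compact statement of the CPT theorem (my wording, not the book’s) is that for any local, Lorentz-invariant quantum field theory, the combined operation of all three reversals is an exact symmetry, even where charge, parity or time-reversal symmetry fails individually, as it does in the weak interaction:

```latex
% CPT theorem: the combined operation commutes with the dynamics,
% even when C, P or T is individually violated.
\Theta = \mathcal{C}\,\mathcal{P}\,\mathcal{T},
\qquad
\Theta\,\mathcal{H}\,\Theta^{-1} = \mathcal{H}
```

This is why a universe with charge conjugated, space mirrored and time reversed is physically indistinguishable from our own, which is the loophole the proposal below exploits.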

But there is a problem. The Universe as a whole doesn’t seem to obey these symmetries. For instance, almost everything we come across seems to be made of matter, even though there doesn’t seem to be any reason why there should be more matter than antimatter or the other way round, and time tends, on the whole, to go forwards rather than backwards. One attempt to explain why matter dominates the Universe is that for some reason more matter than antimatter was created in the early Universe, and since matter meeting antimatter annihilates both, matter is all that’s left. Antimatter does of course crop up from time to time, for instance in bananas and thunderstorms, but it doesn’t last long, because an antiparticle such as a positron pretty soon comes across its ordinary counterpart, an electron, and the two wipe each other off the map in a burst of energy.

These physicists proposed a solution which does respect this symmetry and allows time to move both forwards and backwards. They propose that the Big Bang created not one but two universes: one in which time runs forwards, made mainly of matter, and another in which time runs backwards, made mainly of antimatter. Each universe is also, geometrically speaking, a reflection of the other, so that all the left-handed people in one are right-handed in the other. This explains away the supposèd excess of matter. There’s actually just as much antimatter as matter, but it swapped over at the Big Bang. Before the Big Bang, time was running backwards and the Universe was collapsing.

In a manner rather similar to the thought that an oscillating Universe could be practically the same as time running in a circle, because each cycle might be identical and there’s no outside to view it from, the reversed, mirror-image antimatter universe is simply this one running backwards, with, again, nothing on the outside from which to observe it. For all intents and purposes, then, there is just this one Universe running forwards after the Big Bang, because it’s indistinguishable from the antimatter one running backwards. On the other hand, the time dimension involved is the same as ours, so the antimatter universe could simply be seen as the distant past, which answers the question of what there was before the Big Bang: there was another universe, or rather there was this universe. It also means everything has already happened.

But a further question arises in my head too, and this is by no means what these physicists are claiming. As mentioned above, one model of the Universe is that it repeats itself in a cycle. What we may have here is theoretical support for the idea of a Universe collapsing in on itself before expanding again. That’s the bit we can see or deduce from currently available evidence. However, in the future certain evidence will be lost, because there will be only one galaxy observable, and the idea of space expanding will be impossible to support even though it is expanding. What if one of the bits of evidence we’ve already lost is of time looping? Or what if time just does loop anyway? What if time runs forwards until the Universe reaches a maximum size and then runs backwards again as it contracts? There is an issue with this: there isn’t enough mass in the Universe for it to collapse, given the strength of the dark energy pushing it apart. But elsewhere in the Multiverse there could be looping universes, due to different physical constants such as a weaker dark energy or a greater quantity of matter, because, as has been mentioned before, there are possible worlds where this does take place. Another question then arises: how does time work between universes? Are these looping universes looping now, in endless cycles, or are they repeating the same stretch of time? Does time even work that way in the Multiverse, or is it like Narnia, where time runs at different speeds relative to our world?

It may seem like I’ve become highly speculative. In my defence, I’d say this. I have taken pains to ignore my intuition in the past because I believed it was misleading. However, there appears to be an intuition among many cultures that time does run in a cycle, and the numbers these cultures produce are oddly similar. The Mayan calendar’s longest time period is the Alautun, which lasts 63 081 429 years, close to the number of years it’s been since the Chicxulub Impact, which coincidentally was nearby and wiped out the non-avian dinosaurs. The Indian kalpa is 4 320 million years in length, which is again quite close to the age of this planet. Earth is 4 543 million years old and the Cretaceous ended 66 million years ago, so these figures are about 4.6% out in the case of the Maya and about 5% for the kalpa. Of course it may be coincidence, and the idea of time being cyclical may simply be based on something like the cycle of day and night or the seasons through the year, but since I believe intuitive truths are available in Torah and the rest of the Tanakh, I don’t necessarily have a problem with other sources. Parallels have of course been drawn between ancient philosophies and today’s physics before, for example by Fritjof Capra in his ‘The Tao [sic] Of Physics’. Although much of what he says has been rubbished by physicists since, there is a statue of Dancing Shiva in the lobby at CERN and one quote from Capra is widely accepted:

“Science does not need mysticism and mysticism does not need science. But man needs both.”
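As an aside, those closeness claims are easy to check. This little sketch just recomputes the percentages from the figures quoted above; the function name is mine:

```python
# How close are the calendar units to the geological dates?
# Figures as quoted above: Alautun 63 081 429 years vs the Chicxulub Impact
# 66 million years ago; kalpa 4 320 million years vs Earth at 4 543 million.
def percent_off(calendar_unit, geological_date):
    """Relative difference of the geological date from the calendar unit."""
    return abs(geological_date - calendar_unit) / calendar_unit * 100

maya = percent_off(63_081_429, 66_000_000)
kalpa = percent_off(4_320, 4_543)
print(f"Alautun vs Chicxulub: {maya:.1f}% out")   # ≈ 4.6%
print(f"Kalpa vs age of Earth: {kalpa:.1f}% out")  # ≈ 5.2%
```

Measured against the geological dates instead, the discrepancies come out fractionally smaller, which is why “about 5%” is the honest way to put it.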

UFOs

I don’t always give myself much time to think up ideas for these posts because I’m trying to produce one a day. Today it looked for a moment as though it was Flying Saucer Week, but in fact that was in March. It’s probably worth writing about them anyway, so here it is.

It’s trivially true that Unidentified Flying Objects exist, since a UFO is nothing other than an object in the sky whose nature is unknown to an observer. More specifically, if one person doesn’t know what something is, someone else might, so presumably the stricter definition, based on the original use of the term, would be an aerial object whose identity is unknown to any observer. As a philosopher, I have to inject a note of doubt here, since I see knowledge as belief which it’s rationally impossible to doubt, making absolutely everything one believes to be in the sky a UFO, but that’s not a useful definition. Google Ngrams, a search engine for historical textual references on paper as well as, presumably, online text in more recent decades, shows the word UFO first appearing in the late 1930s, climbing rapidly in incidence from 1950, peaking in 1978 (around the time of ‘Close Encounters’, but is that cause or effect?), then again in 2000 (‘The X-Files’? Same question) and finally in 2012, which I suppose could’ve been connected to the 2012 phenomenon. As for ‘flying saucer’, it starts around 1940 (the term itself was popularised in 1947, after a sighting reported by the pilot Kenneth Arnold), peaking in 1955 and then again in 2012. The curves are rather similar. I could do this all day, but finally, the term “little green men” begins in Georgian times, takes off in the 1930s and climbs slowly but steadily up to the present day. This illustrates the fact that “little green men” used to mean leprechauns, then got transferred over to presumed humanoid aliens.

Historically, the perception of UFOs predates the invention of either term by millennia. They’re referred to in ancient sacred texts and by Roman philosophers and others, including astronomers, and there was the “airship flap” of the 1890s, but reports of their observation peaked in 1957, which is of course when the Soviet Union sent Sputnik 1 into space. It is not in fact the case, as is often asserted, that they are never seen by trained observers such as astronomers and military personnel, although such observers unsurprisingly report them less often.

Before you jump to the conclusion that I believe our planet is regularly visited by visible alien spacecraft, I want to emphasise that I absolutely do not. There are multitudinous problems with this idea. Firstly, if UFOs were alien spacecraft, the question arises of how a technology capable of sending spacecraft across interstellar space with intelligent life on board wouldn’t also have technology which enabled them to avoid being detected. Secondly, the beings associated with UFOs are humanoid, which practically guarantees that they wouldn’t be alien, although there’s another possibility there which I used to consider as a child but have since rejected. Thirdly, there wouldn’t be a government cover-up of UFOs because there’s no reason to suppose that aliens would respect human hierarchies or governments, so if anyone knows about them, “anyone” would, not just people in the higher echelons. This is predicated on my belief that hierarchical societies can’t last long enough without destroying themselves to achieve interstellar travel, but it’s possible that they would respect our system I suppose.

I’ve talked about the idea of humanoid aliens before. There are three possibilities here. There could be few or no humanoid aliens because of the manifold vagaries of evolution, there could be many due to convergent evolution, or humanoid aliens could be manufactured or genetically engineered as ambassadors to the human species. If there are few rather than no humanoid aliens, those could also be deployed as ambassadors. However, since it isn’t even clear there are any other life forms at all in the Universe, I think it’s safe to say that UFOs with “aliens” of that kind on board are not spaceships from elsewhere. Consequently, back when I did believe in UFOs of this kind, and Barney and Betty Hill’s experience is an example of remarkable evidence for them although still not enough to make me believe any more, I thought they were time machines. A famous representation of this idea is found in the 1980 Play For Today ‘The Flipside Of Dominick Hide’, whose main character travels back to 1980 and becomes his own ancestor, as one does. However, Dominick Hide is from the fairly near future. My own version of this hypothesis was that they were from many millennia into the future, because they were significantly physically different from us.

I’m not sure exactly where to fit this bit in, so I’ll put it here. It’s been noted that UFOs tend to take one of three forms: “cigars”, “saucers” and spheres. A four-dimensional hyperspheroid analogous to an oblate spheroid (a squashed ball, long in two axes of symmetry and short in the third, like a slim discus) would intersect our own three-dimensional space in these three ways. Imagine a discus being sliced in various directions: the slice could look like a sausage shape, a circle or an oval, and a four-dimensional analogue could manifest the same way in our space. Then again, a glimpsed object of an approximately discoid shape could also be mistaken for any of those three shapes, so there may be no need to jump into hyperspace for this one.
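The discus analogy can be made concrete in three dimensions, and the same linear algebra works unchanged one dimension up. This is only a sketch with made-up proportions (wide radius 5, thickness 1): restricting the spheroid’s quadratic form to a plane through the centre gives the outline of the slice, and the eigenvalues of the restricted form give its semi-axes.

```python
import numpy as np

# A "discus": oblate spheroid x²/25 + y²/25 + z²/1 = 1 (semi-axes 5, 5, 1).
Q = np.diag([1/25, 1/25, 1.0])

def slice_semi_axes(basis):
    """Semi-axes of the ellipse cut from the spheroid by the plane
    spanned by two orthonormal vectors through the centre."""
    B = np.array(basis, dtype=float).T   # 3x2 matrix of basis columns
    M = B.T @ Q @ B                      # quadratic form restricted to the plane
    eigvals = np.linalg.eigvalsh(M)
    return sorted(1 / np.sqrt(eigvals), reverse=True)

# Equatorial slice: a circle of radius 5.
print(slice_semi_axes([[1, 0, 0], [0, 1, 0]]))
# Vertical slice through the rim: a long thin ellipse, the "sausage" outline.
print(slice_semi_axes([[1, 0, 0], [0, 0, 1]]))
# A 45° tilted slice: an intermediate oval.
s = 2 ** -0.5
print(slice_semi_axes([[1, 0, 0], [0, s, s]]))
```

Every central slice is an ellipse somewhere between the extreme “sausage” (5 by 1) and the full circle (5 by 5), which is exactly the cigar/oval/circle family of outlines.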

Avro Canada VZ-9 Avrocar

This is a VTOL aircraft called an Avrocar, introduced in 1958 as part of a secret Cold War aircraft programme. It uses the Coandă Effect, the tendency of a jet to remain attached to a convex surface. This is undoubtedly a real aircraft which is also undoubtedly saucer-shaped. It is almost literally a “flying saucer”, and this is where I think an explanation for the more convincing kind of UFO lies. I’m afraid this is going to be a bit of a conspiracy hypothesis on my part.

Back in the late 1970s, I saw a “flying banana”. Actually, not a flying banana, which is a heavy helicopter with rotors at front and back, but a Mil V-12, which looks like this:

I don’t understand how I can have seen this helicopter, because it’s a Soviet aircraft and the Cold War was seriously on at the time. Nonetheless I did, and it flew over my house in Kent. I remember the red, white, blue and silver colour scheme, and it took me a long time, several years ago, to track this down, but this is undoubtedly what I saw. However, I drastically overestimated the distance and thought it was three miles long, and genuinely believed it to be an alien spacecraft for quite some time. I still don’t know what it was doing there, but presumably the British government knew about it, because there are no reports of it attacking anything or being attacked, and it seems unlikely that that could’ve been hushed up. It happens to be the largest helicopter ever built, but it is not three miles long. As far as I know no NATO helicopter has a similar colour scheme, although there are transverse-rotor ’copters aplenty.

I have built up a hypothesis around this single data point which I think accounts for many UFOs. I believe UFOs are sometimes secret military aircraft. Governments are at peace with the idea that a lot of people think they’re alien spacecraft and make a big fuss about there being a cover-up, because the people concerned are, not wishing to insult anyone, “useful idiots”, as the phrase has it. That is, people supporting a cause without realising the full implications intended for that cause. The LGB Alliance comes to mind here, for example. Incidentally, although this phrase is attributed to Lenin, he doesn’t seem to have said it. Anyway, reports of UFOs by untrained observers when they are in fact secret military aircraft combined with public perception of the people concerned as delusional would be a convenient way of hiding such a programme. I’m not saying that’s definitely what happens, but as far as we know there’s only one species able to discover this kind of technology, it’s a fact that NATO used flying saucer-like craft in the 1950s and it is at least a more plausible explanation than the idea that they’re alien spacecraft. I am, however, not particularly attached to this idea. I think probably most of the time people see Venus and misidentify it.

One of my favourite little details about British post-war history is that British Rail once designed a flying saucer. The way this happened is rather convoluted and the craft involved would’ve been pretty environmentally unfriendly. A diagram is shown above, appearing in their patent application, GB1310990, made in March 1973. Here’s an extract of their text:

A space vehicle includes a platform under which is provided a thermonuclear fusion zone to which liquid fuel is supplied under pressure to be ignited by beams from lasers. The platform mounts electromagnets, possibly superconducting magnets, to deflect charged particles produced by the fusion reaction; some particles are deflected so as to be received on insulated electrodes for generation of electric power. Excess thermal energy produced in the reaction is removed by cooling tubes to a radiating surface. The lasers may be energized by an homopolar generator. The latter may also be used as a reference for stabilizing the vehicle by varying the electrostatic voltages on sections of the electrodes to apply a correcting couple to the vehicle. By controlling voltages on sections and also the fields from magnets, the thrust on the vehicle can be directed to control the attitude and direction of the craft. A passenger cabin is included.

This seems to involve the expulsion of large amounts of ions. Apparently, and I’m speaking from memory here, there was a department within British Rail which was given free rein to come up with projects like this, and it seems that this evolved out of an idea for a train station platform which could raise itself to ease access to carriages by passengers. It employs two pieces of technology which have yet to be cracked properly. One is fusion power, which is always thirty years away from being realised, and has been since the War, a bit like a human mission to Mars really. The other is superconducting magnets, which exist and are even used in MRI scanners, suitably cooled, but superconductivity at room temperature may never be achieved, although there has been some progress in recent decades. It was designed by someone called Charles Osmond Frederick. The patent lapsed in 1976 and is quite famous, and also taps into the optimism following the Space Race, and really the question is not how far out of touch with reality this is, but why something like it didn’t happen. I don’t think technology could have advanced fast enough for it to have happened in the twentieth century, but maybe if the will was there it could have. Fusion reactors, though, have various problems. One is that the radiation inside them is so intense that it would make the casing dangerously radioactive and also brittle, meaning that although there may not be radioactive waste from the fuel itself they would still produce it and be remarkably unsustainable. It’s possible to trigger a fusion reaction, but only recently has more energy been gained from one than had to be put in to start it. The two main approaches are inertial confinement, using laser-compressed fuel pellets, and magnetically confined plasma, the problem with the latter being turbulence in the plasma making it hard to contain. Fusion also uses tritium, which is vanishingly rare in nature and has to be manufactured, for example by breeding it from lithium.
This tritium also tends to be lost into the casing and into the coolant, water or otherwise, and tritium is radioactive. As for superconductivity, this can be achieved using liquid helium cooling, but both of these technologies involve a lot of heavy equipment, so it seems unlikely that the flying saucer could ever take off.
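On the “more energy out than in” point: that refers to scientific breakeven, first achieved at the National Ignition Facility in December 2022. Using the widely reported figures from that shot, the gain factor is a one-line calculation; note that the roughly 300 MJ wall-plug figure for the lasers is approximate:

```python
laser_in_mj = 2.05      # laser energy delivered to the target, MJ (NIF, Dec 2022)
fusion_out_mj = 3.15    # fusion energy released, MJ
q = fusion_out_mj / laser_in_mj
print(f"Q = {q:.2f}")   # Q > 1, hence "scientific breakeven"

# The electricity used to fire the lasers was around 300 MJ (approximate),
# so as a power station this shot was still nowhere near breaking even.
print(f"Overall gain ≈ {fusion_out_mj / 300:.3f}")
```

In other words, the reaction beat the laser energy, not the electricity bill, which is why fusion power remains “thirty years away”.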

The picture at the top of this post shows the Futuro Pod, a mainly fibreglass structure which was mass-produced in the late ’60s and early ’70s. It probably counts as a “tiny house”. About four dozen of them survive. They were designed by the Finnish architect Matti Suuronen as a mountain cabin and were killed off by the Oil Crisis, being made of plastic. Today they’re very expensive. They aren’t really suitable as permanent accommodation, being more like log cabins or bothies, so by analogy it might be presumed of “real” flying saucers that the people on them are either very small, and they are after all little green people, or on a day trip, perhaps from a mother ship, in which case they’d probably have to travel much faster than light. Having said that, a Futuro is eight metres across and four metres high, which is a lot of space for a being of half human stature, so maybe.

Finally, there’s the obvious religious aspect of the whole thing. If someone lacks the option of believing in some of the old-time religion, having either rationalised it away or never been exposed to it in the first place, it makes sense to project spiritual beliefs onto the sky, so these craft and their occupants could be seen as angels or deities, or even perhaps demons. I personally find this idea a little difficult to dispel even though I’m theistic, so it’s understandable that belief in UFOs would fill the vacuum. But I’ve discussed this in depth elsewhere so I shan’t bother with it now.

To The End Of The Earth

It used to be thought that we were about halfway through our planet’s history, and that conditions would continue in the way they have in the last few hundred million years until the Sun becomes a red giant in something like five thousand million years’ time. Sadly, this is not now considered likely, but that’s not really because of us or any damage we might be doing to the planet’s long term prospects. It turns out that our Sun has something more hostile in store for us in less than an æon. And at this point I should probably explain my words.

Firstly, I still use the long scale with large numbers, so for me a billion is 10¹² – 1 followed by twelve 0’s – and so on. The short scale, where a billion is 10⁹, 1 000 000 000, is American, and when I say “American” I mean both continents. It’s fairly wasteful to use up the words for numbers on lower values, so I don’t do it. That said, ironically from an English-usage perspective, the short scale does line up better with metric multiple prefixes such as giga- for “billion” and tera- for “trillion” and so on. There’s also already a perfectly good word, “milliard”, for a thousand million anyway.

Secondly, the word æon, from the Greek αἰών meaning “age” or “generation”, and sometimes translated in the Bible as “world” in a fairly pejorative way, is a unit of time lasting a thousand million, or milliard, years. From the same root stems the word “eon”, which is a division of time above “era”, so I’ll talk about that too. Earth’s past history is divided on the longest temporal scale into eons, namely the Hadean, Archean, Proterozoic and Phanerozoic, this last being our current eon. From the Archean onwards, these are divided into eras (the well-known Palæozoic, Mesozoic and Cenozoic in the past 540 million years or so), periods (for example the Triassic, Jurassic and Cretaceous), epochs (in our case the Pleistocene, Holocene and probably the Anthropocene), ages, for example the Meghalayan, which began some time in the Bronze Age and might be considered to have finished in the 1950s, and finally chrons, the current Sub-Atlantic having started around the time Rome was beginning to expand. It gets a bit confusing because the archæological Three Age System of Stone, Bronze and Iron – and incidentally we are still in the Iron Age – collides with the chrons.

With a couple of exceptions, Earth’s future is as yet unmapped as far as actual names for intervals of time are concerned, but it certainly isn’t unmapped according to scientific understanding, which of course could change easily. In fact it did just that in the past few years, with the realisation that we haven’t got as long as we thought. I’ve already gone into a fictionalised history of the next two hundred million years, which mainly amounts to Dougal Dixon’s work in ‘After Man’, ‘Man After Man’ and ‘The Future Is Wild’. This is somewhat feasible and somewhat based on science, though forty-year-old science, and has some degree of validity, but there is a firmer understanding of the probable near future, and also of the time well beyond that until the Sun dies. Thus I’ll start with the next few million years.

It’s been proposed that we’re currently in the Anthropocene Epoch, but it isn’t clear when it started. The previous epoch, the Holocene, covers the time since the end of the last Ice Age, but in recent years there’s been a popular movement to divide off the most recent stretch of the Holocene because of the major effect our own species is having on Gaia, hence Anthropocene – ἄνθρωπος + καινός, “human” + “new”. All the epochs in the Cenozoic end in “-cene” because they’re relatively recent. The geological dating system uses “BP”, “Before Present”, to name particular fairly recent times, usually within the history of our genus Homo, with the “present” defined as the year 1950. Consequently one suggestion is to date the Anthropocene from 1950. Another, rather similar, proposal is that it begin with the earliest nuclear weapons tests, since these have left a long-lasting change in the geological record by irradiating the world and changing its radionuclide signature. A third suggestion is that it begin with the Industrial Revolution, and finally Heather Davis has proposed that it start in 1492, since this is when Europeans began to conquer the rest of the world. James Lovelock, who articulated the Gaia Hypothesis, recently proposed that the Novacene will follow the Anthropocene in the near future; this basically coincides with the Singularity and marks the point where machines will sort out the environmental problems we’ve created. It would make the Anthropocene ridiculously short, possibly less than a century, but Lovelock embraces that, linking it to the acceleration of change, which may have started nearly an æon ago with the appearance of multicellular life.
The future is of course unknown, and our existence may have vast consequences of which we’re currently unaware and can’t anticipate, but there’s also what might be called the “geological future”, that is, the future as it will proceed assuming that human activity lacks major long-term consequences for the planet, which is probably less hubristic and more Copernican, as it were.

Naming things doesn’t necessarily give you any control over them though.

The most obvious issue in the relatively near future is anthropogenic climate change. It isn’t clear whether what we do to the climate is far-reaching enough to end the recent spate of ice ages, of which there have been five from the Pleistocene onwards so far. It might even trigger one, because if Antarctic icebergs spread far enough they may reflect more heat into space and cool the planet. There are various ideas about the next ice age. The most popular seems to be that it will happen anyway, in about fifty millennia, which is when it’s “scheduled”. More recently this has been questioned, and some climatologists believe there will still be another ice age but that it will come in about a hundred millennia, because by then the climate will have returned to where it would have been without our technology. Of course it may also be that we or our machine successors will just “re-wild” most or all of the planet and things will get back to “normal”. This degree of uncertainty regarding even the relatively near geological future might suggest that all this is just idle speculation, but certain things, such as entropy, are well-established scientific facts which it seems unlikely we’ll be able to avoid, and those can be predicted fairly confidently.

A lot of this is covered in the popular video ‘Timelapse Of The Future’:

I’ve covered this before here, and there are similarities between this post and that one and its successor, but I hope I’m saying something fresh here too.

Fifty thousand years from now, the day will be one second longer. This is because lunar tidal action gradually slows Earth’s rotation. I’ve previously been curious about how long it would take before the year has exactly three hundred and sixty-five days. If this change is linear, leap years will become unnecessary by the time each day is about fifty-seven seconds longer, almost three million years from now, and before that date they could become rarer, say every five years by six hundred millennia from today. To be honest, I find the idea that the Gregorian calendar would still be in use by then absurd, but similar assumptions are made about the likes of long-term contracts and economic planning, so maybe it will; Y2K is an example of a problem caused by assuming such systems would not be in place for longer than a few years.
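That estimate can be reproduced in a few lines, assuming the slowdown stays linear at one second per fifty millennia and the year keeps its current length in seconds – both idealisations:

```python
TROPICAL_YEAR_S = 365.2422 * 86_400   # length of the year in SI seconds
DAY_S = 86_400                        # current day length in seconds
SLOWDOWN = 1 / 50_000                 # seconds gained per day per year, assumed linear

# Day length needed for the year to contain exactly 365 days:
target_day = TROPICAL_YEAR_S / 365
extra_seconds = target_day - DAY_S
years_until = extra_seconds / SLOWDOWN
print(f"Day must lengthen by {extra_seconds:.1f} s")            # ~57.3 s
print(f"That takes about {years_until / 1e6:.1f} million years")  # ~2.9
```

The same arithmetic gives the intermediate figure too: after six hundred millennia the day is twelve seconds longer, the year contains about 365.19 days, and a leap day is needed only every five years or so.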

A quarter of a million years hence, Lō’ihi will break the surface of the Pacific Ocean, although it may of course be either deeper or shallower by then depending on which way sea levels go. This is the next Hawaiian island, to the southeast of Hawai’i itself. This will continue as the Pacific plate and the hotspot shift over many millions of years and the islands to the northwest erode away. By six hundred millennia from now, the chances are that an asteroid one kilometre in diameter will have hit us, although this could happen at any time. The energy released by this would be equivalent to around sixty times the detonation of every nuclear weapon in the world. There’s a modelling tool for asteroid impacts here.
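The energy claim can be sanity-checked with schoolbook kinetic energy. The density and speed below are typical assumed values for a stony asteroid rather than quoted figures:

```python
import math

radius_m = 500                  # a one-kilometre-diameter asteroid
density = 3000                  # kg/m³, assumed stony composition
speed = 20_000                  # m/s, a typical impact velocity

mass = density * (4 / 3) * math.pi * radius_m ** 3
energy_j = 0.5 * mass * speed ** 2
megatons = energy_j / 4.184e15  # 1 megaton of TNT = 4.184e15 J
print(f"{megatons:,.0f} Mt TNT equivalent")   # ~75,000 Mt
```

Global nuclear arsenals total very roughly a thousand megatons or so of yield, which puts the “around sixty times” figure in the right ballpark, though both numbers are obviously sensitive to the assumptions.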

Around a million and a quarter years from now, an orange dwarf star called Gliese 710 will pass very close to the Solar System, less than a quarter of a light year away. By two million years hence, judging by previous occasions when this has happened, the ocean will once again be alkaline enough for coral to recover; the current acidification is due to the increase in atmospheric CO2. Ten million years from now will be around the time the Afrikan Rift Valley is flooded and the new continent, which Dougal Dixon named Lemuria, starts to move across the Indian Ocean. Also by this time, even without a mass extinction most species around today will have died out and, I hope, been replaced. Fifty million years from now the map of the world will look roughly like this:

(I actually think this is exaggerated in the sense that it assumes the rate of continental drift to be faster than it in fact is).

Around 200 million years from now, there will be a new supercontinent, whose exact shape is hard to predict because nobody knows much about which way Antarctica will move. This will restore the planet to the situation before the dinosaurs evolved, and make for a large amount of desert with extreme temperatures near the centre of the continent, very hot during the day and very cold at night. It will also increase the amount of oxygen in the atmosphere, and mean a single world ocean and a single landmass covering 29% of Earth’s surface. While this continent is in place, the Hadley cells either side of the Equator will move out to 40° either side of it, increasing the already high percentage of desert land by a further 25%. This supercontinent will have broken up by about 450 million years from now, leading to the kind of climate found here during the Age of Dinosaurs. Also, by about this time a mass extinction caused by a gamma ray burst, which would make it rain concentrated nitric acid, is likely to have happened.

There may just be time for another supercontinent to form about 600 million years from now, by which time there will be no more total solar eclipses because of our satellite’s widening orbit, but there will still be annular eclipses where some of the Sun’s surface remains visible.

Then, unfortunately, a major catastrophe will ensue. Up until this point, a process referred to as the carbonate-silicate cycle has kept considerable amounts of carbon dioxide in the atmosphere. Rain dissolves this gas, becomes mildly acidic, lands on rocks and gradually dissolves them. Calcium and bicarbonate ions are washed into the ocean, where they’re incorporated into the hard parts of organisms such as plankton, molluscs and coral. These sink to the ocean bed, where they’re buried and end up in the magma under the crust. Volcanic eruptions then return this carbon to the atmosphere as carbon dioxide. But the Sun is gradually getting brighter, and by this time the extra warmth will be enough to weather the rocks faster than their carbon can be returned to the air; it will also start to dry the land, reducing rainfall and therefore the carbon reaching the sea. The rocks will also harden, slowing continental drift, and since that’s responsible for throwing up new volcanoes along the edges of the plates, these will erupt less often. At a certain point, around 600 million years from now, the form of photosynthesis known as C3 will cease to operate due to insufficient carbon dioxide in the atmosphere. This will lead to a gradual decline and eventual extinction, first of green herbs such as annuals, then deciduous trees, then broad-leaved evergreens and finally conifers. I would expect that during this time evolution would lead to other plants occupying the vacant niches. That said, there’s still C4 photosynthesis, which can function at a lower level of carbon dioxide, and many plants use this type, particularly those in the spurge family, which already look quite alien and futuristic:

Photograph of Euphorbia helioscopia, taken in Machida, Tokyo, Japan (photo: Sphl, 2006).

Water vapour is a much more powerful greenhouse gas than carbon dioxide, and consequently the evaporation of water from the oceans and elsewhere will start to raise surface temperatures. With less photosynthesis, oxygen will also fall, the ozone layer will break down and more ultraviolet light will penetrate to ground level, increasing oxidation at the surface and removing even more oxygen from the atmosphere. By 850 million years or so in the future, C4 photosynthesis will become impossible and the solar-powered cycle running through plants will cease to function. This means that only animals which don’t breathe oxygen or rely on plants for food, directly or indirectly, could survive – for example, worms living at geothermal vents at the bottom of the ocean and feeding on bacteria. However, the ocean will also be disappearing, and once the average surface temperature exceeds 47°C, 1.1 æons from today, the water vapour in the atmosphere will run a feedback loop through the greenhouse effect, causing runaway evaporation from the oceans and a slide into a situation where the only life which can survive will be in places like lakes and caves at the tops of mountains or near the poles, and finally not even that. 1.6 æons from now only bacterial and archæal life will remain, and 2.8 æons hence even the poles will be at 150°C.

I find this all rather claustrophobic and suffocating, which is a bit of a weird reaction. I look around at the trees in the park, the people, badgers and spiders in this household, note that I can breathe the air and that there is evidence of human activity all around in the form of houses, roads, vehicles, furniture, whatever, and it really saddens me that it will come to an end so soon, but I also find it weird because we’ve got 800 million years to go. However, they used to think that Earth would stay in about the same state for about as long as it had already existed, so theory has robbed us of three or four æons of life. There’s only enough time for another two supercontinents, by contrast with maybe ten which have happened before on this world. But the future is in fact unknown and may not be like the past, or continue trends which began then. We have intelligent tool-using life now, and those tools may find a way to lengthen our stay, or alternatively hasten our demise. Also, if some of us were to leave this planet permanently and entirely to settle elsewhere, that gives us more hope, if hope is the word. But a Doomsday Argument-like scenario makes that unlikely. Then again, maybe it isn’t up to us. Maybe another species of animal will start to invent more advanced tools and technology before the carbonate-silicate cycle breaks down. Maybe there will even be such beings around as it starts to happen. Who knows? The future is unknown.

Why The End Might Not Be Nigh

Yesterday’s post, as well as being mistitled, was probably quite depressing, although that depends on your view of human extinction since many people don’t consider that to be a bad thing. As a kind of antidote, I’ve decided today to offer a more encouraging view of our future, assuming that you consider the continued existence of the human race as positive. I’ve covered the Doomsday Argument before, but did it in quite an idiosyncratic manner, concentrating on my own thoughts about its possible flaws. This post is more an outline and survey of the Doomsday Argument and its rebuttals.

The Doomsday Argument has its origins in the astrophysicist Richard Gott’s visit to the Berlin Wall in 1969. The Wall began to be built in 1961 and Gott visited it eight years later. After speculating about how long it would be there, he did a quick calculation – I get the impression this was mental arithmetic – and reached the conclusion, at 50% confidence, that it would be demolished some time between 2⅔ and two dozen years after that date in 1969. In fact it came down in 1989, twenty years later. This prompted him to publish his calculation in a scientific paper in 1993, where he applied the same reasoning to the history of the human race, concluding with 95% confidence that we would cease to exist somewhere between about five millennia and 7.8 million years from 1993. This is of course quite a big range, but it’s notable to me that the Berlin Wall came down towards the end of his predicted period.

The Berlin Wall version of the argument is the original, and it successfully predicted the Wall's demise, so it's worth looking at closely. A random visitor to the Berlin Wall arrives at some unknown point in its history. With 50% probability, that visit falls somewhere between 25% and 75% of the way through the Wall's total lifespan, simply because that interval covers half of it, so a steady stream of visitors would land in it half of the time. A visitor reasoning this way can then predict, with the same 50% confidence, that the Wall will stand for between a third and three times as long again as it already has: a third as long again if they happen to be at the 75% point, three times as long again if they're at the 25% point.
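Gott's delta-t arithmetic is simple enough to sketch in a few lines. This is only an illustration, with the function name and the tidy general form my own:

```python
# A minimal sketch of the delta-t calculation (the function name is
# mine, not Gott's). If, with probability `confidence`, your visit
# falls in the middle `confidence` fraction of an object's lifespan,
# then its remaining life lies between age*f/(1-f) and age*(1-f)/f,
# where f is the excluded tail on each side.
def gott_interval(age, confidence=0.5):
    f = (1 - confidence) / 2
    return age * f / (1 - f), age * (1 - f) / f

# The Wall in 1969, eight years old, at 50% confidence:
low, high = gott_interval(8, 0.5)   # ≈ (2.67, 24.0): a third to three times its age
```

At 95% confidence the excluded tails shrink to 2.5% each, which is where the multipliers of 1/39 and 39 behind the bigger claims come from.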

Now apply that to human history. It might at first seem to predict that if anatomically modern humans came into existence around 300 000 years ago, we will continue to exist for between a hundred millennia and getting on for a million years, again at 50% confidence, which should be taken as read from now on. It doesn't work quite the same way, though. Visitors to the Berlin Wall could fairly reasonably be assumed to arrive at a roughly constant frequency, but the same doesn't apply to the human population, which increases exponentially. Therefore it isn't about where in history you are chronologically so much as the order of your birth among all the human births that will ever be. The figures I use for my version of the argument are the population of the planet in about 1970, my own birth in 1967, the figure for all human lives up until 1970 quoted at the time, and a thirty-year doubling time. The population then was around 3 000 million and the estimate of all lives so far was 75 000 million. Given that figure of 3 000 million, 6 000 million would be the population by 2000, 12 000 million by 2030, 24 000 million by 2060 and 48 000 million by 2090. It reaches 96 000 million by 2120 at this rate of doubling, meaning that the last birth could be said to occur by that time at 50% probability assuming that everyone born in 1970 was still alive, but earlier than that otherwise, because there would have been more human lives in the meantime. We can assume, for example, that almost everyone born in 2000 would be dead by 2120, and the running total only needs to reach 150 000 million anyway. The figures work out as between 25 000 million and 225 000 million further births after 1967, given these rather inaccurate numbers, which place the earliest date around 2060 and the latest before the end of the next century. You will gather from my vagueness that I can't do calculus.
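No calculus is needed to redo this, in fact; a few lines cover it, using the same round figures as above and the same crude shortcut of treating the population reached by a given date as the running total of births since 1970:

```python
from math import log2

# All figures are the rough ones quoted above: ~75 000 million births
# before 1970, population 3 000 million, doubling every thirty years,
# and (crudely) treating the population at a future date as a stand-in
# for the cumulative births since 1970, everyone surviving.
past_births = 75_000      # millions
pop_1970 = 3_000          # millions
doubling = 30             # years

# 50% confidence: between past/3 and 3*past further births to come.
low, high = past_births / 3, past_births * 3

def year_when_population_reaches(target_millions):
    return 1970 + doubling * log2(target_millions / pop_1970)

print(round(year_when_population_reaches(low)))    # around 2062
print(round(year_when_population_reaches(high)))   # around 2157
```

This is as sloppy as the prose version, but it does land in the same ballpark: the earliest plausible last birth around 2060, the latest well before 2200.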
Or look at it this way: if everyone who ever lived guessed whether they were in the first or the second half of all the human births there will ever be, almost exactly half of them would be correct. (There may be an exact "middle" birth, if the total number of people who will ever live turns out to be odd rather than even.)

Most people agree that this argument is flawed, and I've previously mentioned my own objections to it, but the flaws have both superficial and deeper causes. The superficial problem with the above figures is that they're sloppy and inaccurate. Population doubling time was quoted at between twenty-eight and thirty-five years during the period it was widely considered a major concern, and adjusting for those moves the dates to between 2054 and 2225. It also turned out that growth has slowed recently and that economic development reduces the size of families, so it's been estimated, and again this is an old figure, that the world population will stabilise at eighteen milliard (thousand million) in the mid-twenty-second century, which gives us centuries to go. A rather less superficial objection concerns my selecting my own birthdate, because the argument can be made by anyone who has ever had this thought, and therefore there could easily have been a prediction thousands of years ago whose latest 95% confidence limit we are already way beyond. The argument is equally valid no matter whose life you use as an example, and the prediction changes as time passes: if someone had made the Berlin Wall prediction in 1990, the lower bound of their confidence interval would've been in 2000.

But there are other problems with the argument which are not to do with these details or even applying it to human extinction. Before I go into them, I want to make two observations. Firstly, there’s a tendency for people who do believe in its validity to dismiss other’s (go on, ask me about that apostrophe, I dare you!) arguments as indicating that they haven’t understood it properly. Secondly, although it’s widely agreed that it’s invalid, the reasons are multiple, and people who believe it’s invalid for one reason often don’t agree with the other reasons given. This complicates things.

One objection to the argument is that it assumes nothing is known about where one is in human history. It seems to make sense to flip a coin if one is asked the question “was your birth in the first or second half of the total number of human lives?” and go with that answer only if one believes the coin to be fair. If one knows it isn’t fair and will always come up heads, it’s no longer rational to choose tails. If anything relevant can be known about our place in history, it changes the odds. For instance, it could be discovered that there was a correlation between the prevalence, lethality and spread of pandemics on the one hand and the level of population on the other which would make it very likely that a population above ten billion would lead to human extinction within an average human lifespan, in which case as soon as it hit that number and stayed there for seventy years or so, our demise was guaranteed. I don’t personally like this argument because I can’t think of anything which is that reliable which is relevant to human survival. I believe that we are in fact in ignorance, partly because measures might be taken to prevent the apocalypse once its likelihood had been calculated. On the other hand, that might be optimistic given how keen everyone seemed to be, for example, on ignoring the finding that pandemics were in fact much more likely to happen in current circumstances.

There’s also a converse argument which goes like this. The more intelligent life forms which will ever exist, the more likely it is that I exist. There are various ways in which my existence, like everyone else’s, is improbable, and the combination of traits which lead to someone like me existing becomes increasingly probable the more people there will ever be. If there are going to be 200 thousand million people, the chances of someone like me existing might be ten percent – nine out of ten possible worlds with 200 thousand million people in their history don’t have me in them. But if there are going to be 200 billion in that scenario, a thousandfold greater, each world would end up having around a hundred examples of someone like me in its history somewhere.

Here’s another argument, and I may have got this wrong. The Doomsday Argument is an early example of other similar arguments. One of these is the argument that the human species will never substantially settle anywhere off Earth because if there were, for instance, fifty million habitable worlds in the Galaxy and each had a population of a million with a life expectancy of a century for a millennium, all of which are very conservative assumptions, the probability of living before that era is only 0.015%. There are other similar arguments. Therefore there is a sense in which those who are aware of this argument are early adopters. They’re like the people who bought the bug-prone version of a new gadget who were used as guinea pigs by the manufacturer, and therefore the argument they accept is likely to be less sophisticated and more flawed than its successors. We could be working towards a more successful predictor of the future than this argument, and since we’re aware that it only has a short history, we probably have the wrong one. This sounds peculiar to me, which is why I think I might have got it wrong.

We could also be early humans. It might be that the fact that we’re human-basic rather than transhuman is an argument for us not being very advanced in history. We don’t currently augment our bodies much internally, but the technology to make that possible is already in its infancy and will become more advanced. The fact that we don’t download music directly to our brains yet, unlike practically everyone who will be born more than two centuries from now, is evidence that we are unusual.

The rarity of mass extinctions has also been used as an argument. This is again probabilistic, and can be modified to refer to individual dominant biological taxa. There seem to have been six mass extinction events in the past 540 million years, so the chances of living through one should be small. The reasoning may be valid, but the conclusion is plainly false, because we are in fact in the middle of one right now, probably caused by our own activity. Dominant species, though, are said to go extinct only about once in a million years, so that's another odds-based argument for extinction not being an imminent threat.

Another objection is based on the St Petersburg Paradox. Suppose the pot doubles every time a flipped coin fails to come up heads, and you are paid the pot when heads finally appears. The expected winnings are infinite, even though intuition says the payout will usually be small compared with your stake, because each doubling of the prize is exactly offset by the halving of its probability. The rational choice would therefore appear to be to put all your money on the game. I may not be following the objection correctly, but it seems to treat each generation of human existence as a toss-up between being the last and not being the last, so that, in the same way, the expected number of future human beings is infinite. To be honest this makes no sense to me and I'm not sure I've expressed it correctly.
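A short simulation of the game as just described shows the flavour of the paradox: every term of the expectation contributes a half, so the theoretical total is infinite, yet simulated averages remain modest. This is only an illustrative sketch:

```python
import random

# The game as described: the pot doubles every time the coin fails to
# come up heads, and pays out when heads finally appears. Round k
# contributes (1/2)**k * 2**(k-1) = 1/2 to the expectation, so the
# theoretical expected payout is infinite.
def play(rng):
    pot = 1
    while rng.random() >= 0.5:   # not heads: pot doubles, play on
        pot *= 2
    return pot

rng = random.Random(0)
trials = 100_000
mean_payout = sum(play(rng) for _ in range(trials)) / trials
print(mean_payout)   # modest, despite the infinite expectation
```

The sample mean creeps upward only logarithmically with the number of trials, which is exactly the mismatch between expectation and intuition that the paradox trades on.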

Carlton Caves has offered this example as a rebuttal. Imagine you encounter someone whose fiftieth birthday is today. By the logic of the Doomsday Argument, they have a one in three chance of living to one hundred and fifty, since there's a one in three chance that today falls within the first third of their lifespan. I see this as referring to the idea of having special knowledge, because we know that nobody seems to have lived more than about ten dozen years.
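That one-in-three figure drops straight out of the uniform-position assumption, and a quick Monte Carlo sketch (my own illustration, not Caves') confirms it: if the fiftieth birthday falls at a uniform fraction f of the total lifespan, the implied total is 50/f, which reaches 150 exactly when f is at most a third.

```python
import random

# Under the uniform-position assumption, a fiftieth birthday falls at
# a uniform fraction f of the total lifespan, so the implied total is
# 50/f, and P(total >= 150) = P(f <= 1/3) = 1/3.
rng = random.Random(1)
trials = 100_000
reach_150 = sum(50 / rng.random() >= 150 for _ in range(trials))
print(reach_150 / trials)   # close to 1/3
```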

A little like the early adopter argument, there is a self-referential counter-argument. The Doomsday Argument was thought of fairly recently. Including the Berlin Wall calculation, it’s currently four dozen and two years old. Therefore, it is likely to be refuted some time between sixteen and one gross and a half dozen years from now, in other words 2037 and 2171. However, if this argument for its refutation works, it means the Doomsday Argument is valid, which is a paradox. This, though, is problematic because it assumes that the argument can be disproven, which may not be so.

I haven’t found this to be a particularly satisfactory post because I’m not feeling on top of the arguments. Attempts to disprove the Doomsday Argument are very popular and the whole field is rather confusing to a non-mathematician such as myself. That said, if you look at my other post on this topic, you’ll see my own reasons for doubting it. Unfortunately though, or perhaps unfortunately, merely disproving the argument itself doesn’t prevent the possibility that we will soon be extinct. Tomorrow I plan to talk about that.