The Progress Plateau

There’s a popular idea in nerd circles that at some point there will be a technological singularity. The idea is that rates of technological and scientific progress are accelerating, so that if such change could be plotted on a graph, the curve would be exponential and eventually become almost vertical. This is the singularity. In graphical form, it looks like this:

This is a bit abstract, so it can be illustrated with a familiar example. From the start of the 1960s CE, the number of transistors which could be fitted in a given area doubled about once every two years. This manifested in various ways: it meant that every couple of years the RAM available on a computer costing, say, £1000 doubled, the speed at which it worked doubled, and so on. This is called Moore’s Law, and it might no longer apply. The problem with telling whether it still does is that it isn’t in manufacturers’ commercial interests to admit it has stopped, and in fact commercial interests may always have driven it. There are other areas where acceleration of this kind can be seen, such as the sequencing of DNA. The Human Genome Project was likened to the Apollo missions when it started in 1990. It finished in 2003, but today a genome can be sequenced in a few weeks for a couple of hundred quid or less, hence 23andme and the others. This kind of acceleration could be expected across the board, with different areas of science and technology helping each other along.
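
To make the doubling concrete, here is a minimal sketch in Python of what a clean two-year doubling looks like. The 1971 starting point of 2 300 transistors is roughly the first commercial microprocessor, but the tidy doubling itself is an idealisation rather than real chip data, and the function name is mine.

```python
# A clean Moore's Law doubling: transistor count doubling every two years.
# The 1971 starting figure is roughly the first commercial microprocessor;
# everything else is just the doubling rule applied naively.

def transistor_count(year, base_year=1971, base_count=2_300, doubling_years=2):
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in range(1971, 2022, 10):
    print(year, f"{transistor_count(year):,.0f}")
# Fifty years of doubling every two years is a factor of 2**25, about 33 million.
```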

The usual scenario envisaged for this happening is via artificial intelligence, and looking at the likes of Midjourney and ChatGPT, one could be forgiven for thinking it’s about to happen, but Ray Kurzweil first published his book predicting the singularity in 1999, and predictions of something similar were made before that. Murray Leinster’s 1946 story ‘A Logic Named Joe’ told of a point when computers on the internet, thanks to the information available to them online, would achieve sapience and be able to solve any problem, including giving sex advice to small children, planning perfect murders and curing drunkenness instantly (one of these things is not like the other but I’m in a hurry). This story is yet another example, incidentally, of how the internet is one of the least surprising things ever to happen. In 1946, the most advanced computers were – well, you know how the routine goes: massive great room-sized devices less powerful than a digital watch, or whatever.

But the future is not like the past. That’s what makes it the future. That said, things have happened in the past which might be clues as to what will happen. At the moment, one common assumption is that because scientific and technological progress has accelerated steadily, it will continue to do so. There are even deep time-based views which see the current acceleration as a continuation of an acceleration of biological evolution over æons, and this does make some sense. Over most of Earth’s history, life consisted of single-celled organisms, which appeared very soon after this planet formed, and living things didn’t develop hard parts until about 600 million years ago, almost nine-tenths of the way through this planet’s history. When modern humans first appeared, we spent hundreds of thousands of years not doing anything like cave paintings and so forth until the last tenth or so of our own history; then there were the twenty-five millennia or more between that and the emergence of agriculture, the thousands of years between that and the Industrial Revolution and so on. However, much of this is very centred on the way we live now being the focus of progress, and it’s a platitude to say that that may not actually be anything to be proud of.

There is another suggestion: that progress is slowing down. Neil Armstrong stepped onto the surface of another world less than six dozen years after the Wright brothers achieved the first powered flight. At the time, there was a plan to put people on Mars just over a decade later, but the last time humans left Low Earth Orbit is now more than fifty years ago. Project that same interval backwards from the early 1970s and you land around the time of the first trans-Atlantic flights. Now imagine the same stagnation applied to those fifty-odd years instead: it would mean no flights from Britain to Australia until at least the early 1970s, no communication satellites, no Skylab, no Shuttle. In that particular area at least, progress has ground to a halt. Admittedly, this is partly because of advances in automatic space probes, but it isn’t the only way in which progress has decelerated. For instance, by 1970 the developed world had motorways, good sewers, commercial air travel, mechanised farming, long-distance ‘phone calls and all sorts of other things, and these are still seen as the features of modern life over fifty years later. Although there have been disruptive changes since, the difference between life in 1920 and 1970 here in the Western world was surely far bigger than that between 1970 and today in 2023.

That said, there are indeed still new disruptive technologies such as social media, smartphones, 3-D printing and video calls. Many of them, however, are either tightly focussed on ICT or rely on it in some way. Another relatively disruptive piece of tech which arose recently is outpainting, which takes a photograph or other image and imagines its surroundings. Applying that to the graph above would lead to a steepening curve and a singularity.

But what if it’s like this?

We don’t track most technological and scientific trends precisely, although we arguably do in digital electronics. Hence even a subtle deviation from an exponential curve wouldn’t be easy to spot. This is an outpainting of the left half of this curve using the prompt “A curve on a line graph with axes”:

This starts to deviate from the actual curve at about 1 on the X axis, and the prompt contains no assumptions which would suggest it should.

My curve without cropping was a sigmoid. Sigmoid means “S-shaped”, which is a bit peculiar because an actual sigma is often not S-shaped; the word is really being used to refer to the Roman letter S rather than the Greek one. I can recall, when I was eleven years old, plotting the temperature rise of ice over a Bunsen burner and finding it had this shape: it takes more heating to change ice into cold water than to heat cold water to hot, and more again to boil hot water than to heat warm water by the same number of degrees. The same shape crops up as the logistic function, which expresses population growth. A few individuals in a closed habitat will initially increase their numbers exponentially, assuming the gene pool is large enough, but the population will eventually level out as resources are used up.

This exact example may in fact be relevant to progress, if progress depends on people having ideas and being able to act upon them, because the world’s human population has expanded and education has increased, leading to more scientists and engineers and more people able to put their ideas into production. Now that population growth is decelerating, perhaps progress will as well. The availability of resources is particularly relevant here, since we artificially restrict ourselves to non-renewables, and failing to follow a plant-based diet also makes our resources more limited than they strictly need to be on the dietary front. There are many other examples. A game between two players is unlikely to be won in the first few moves, and a player who is losing is unlikely to turn that around in the last few. Likewise, a tumour is likely to grow exponentially until it is killing the patient, at which point it no longer has a hospitable environment to grow in, because the body keeping it alive can no longer function properly.
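
For what it’s worth, the logistic function itself is only a line or two of code. This is a minimal sketch, with the carrying capacity, starting population and growth rate chosen purely for illustration:

```python
import math

def logistic(t, carrying_capacity=10_000, initial=10, growth_rate=0.5):
    # Logistic (sigmoid) growth: indistinguishable from exponential growth at first,
    # then levelling off as the population approaches the carrying capacity.
    a = (carrying_capacity - initial) / initial
    return carrying_capacity / (1 + a * math.exp(-growth_rate * t))

for t in range(0, 41, 5):
    print(t, round(logistic(t)))
```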

Another rather salient curve of this shape is the learning curve. It takes a long time to start to learn something, then there’s a smooth increase in skill and experience which levels off again as one completes the task. On the other hand, the more one learns about something, the more one realises there is to learn, so that looks more like an exponential curve. The question is whether there is a limit to human knowledge and ability in general. Are we learning and becoming more capable in a finite space of possible knowledge and skills or are we discovering new vistas all the time? On top of that, are we going to cease to be capable of understanding more or being able to do more after a certain point because of our own limitations? Can we overcome those limitations through what we learn, for instance with cognitive enhancing drugs or AI?

Actual technological product lines do mature in a sigmoid way. Pocket calculators, for example, still exist, presumably for use in exams. They took a long time to evolve from abaci and mechanical adding machines, but in the 1970s and 1980s they became increasingly sophisticated very quickly. Nowadays they’ve levelled off. Mobile ‘phones seem to be similar. Early on, they were brick-like devices which could only be used for voice calls, and it took a long time for them to emerge from that stage. Then there was a period between the early ’90s and the early ‘noughties, also known as a decade I suppose, where they made rapid progress. Once the smartphone became popular, this changed to incremental progress on such things as resolution, camera quality and battery life. Making them a different size would make them less user-friendly and some of the facilities, such as video calls, are not actually that popular. Resolution on any device is a case in point, because the theoretical useful limit might be reached when the pixels become smaller than the angular resolution of the eye, which is determined by the cone cells in the retina. In fact the limit is a little further than that because of the Nyquist-Shannon sampling theorem, which says that a signal has to be sampled at at least twice its highest frequency to avoid undesirable artifacts; applied here, the pixels need to be at least twice as fine as the finest detail the eye can resolve, so a pixel ends up having to be pretty close to microscopic to work properly. This means that any increase in resolution beyond a certain point becomes mere hype, and also wasteful, because doubling an already visually perfect resolution needs four times the storage.
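
As a back-of-envelope version of that limit, assuming a visual acuity of about one arcminute and a hand-held viewing distance of about thirty centimetres (both assumptions rather than measurements), the useful pixel density works out at something like six hundred pixels per inch:

```python
import math

# Rough pixel-density limit for a hand-held screen.
# Assumptions, not measurements: visual acuity ~1 arcminute, viewing distance ~30 cm.
acuity = math.radians(1 / 60)               # one arcminute, in radians
viewing_distance_mm = 300

finest_detail_mm = viewing_distance_mm * math.tan(acuity)   # smallest feature the eye resolves
pixel_pitch_mm = finest_detail_mm / 2                        # Nyquist: sample at twice that fineness
pixels_per_inch = 25.4 / pixel_pitch_mm

print(f"finest resolvable detail ≈ {finest_detail_mm:.3f} mm")
print(f"useful limit ≈ {pixels_per_inch:.0f} pixels per inch")
# Beyond this, extra resolution is invisible, and doubling it quadruples the storage needed.
```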

Hence there really is a sigmoid function in the improvement of certain devices. Toothbrushes used to go through a cycle where they returned to their well-known “default” form as other things were tried and rejected. Presumably the other versions are “nice tries” or possibly ways to get people to buy expensive new-style trendy toothbrushes. Razor heads are a notorious example of this, as they simply seem to involve adding another blade every few years, although there are now bidirectional razors which really do seem to be an improvement. There is a conflict between the needs of capitalism and technological progress in certain directions. For instance, under capitalism there is never going to come a time when cars or mobile ‘phones can reproduce themselves in some way, and durable products which stay useful because they don’t wear out are not profitable. A couple of years ago, I got interested in a company which sold more sustainable ‘phones, so I went to their website and was asked if I already had a mobile. On answering “yes”, I was sent away again, because the greenest thing to do in those circumstances is to hang on to the one you’ve got! This sounds like a terrible business model, because the first thing they do, and try quite hard at it, is to turn away customers. I have no idea if they’re still in business or if they have another way of surviving and making a profit.

The bigger question is whether, just as progress in specific areas of technology follows a sigmoid curve, technological and scientific progress in general does too. If it didn’t, it would be because radically new forms of science and technology come along to fill the gap left by mature older theories and devices, or because there is tech which can simply keep improving drastically for centuries. Arthur C. Clarke once observed that when a distinguished but elderly scientist states that something is impossible, they are very probably wrong. And this does happen. An example which sticks in my head is Lord Kelvin, who in his old age insisted that Earth couldn’t be more than a couple of hundred million years old because of how it would cool over that time, not realising that radioactivity continues to heat the planet from within. David Bellamy’s climate change denial might also be an example. The question arising in my mind as I write this is: have I just got old? Am I saying further progress is impossible just because that’s what old age makes me think? But I’m not that old. I’m four dozen and eight. And in spite of that possibility, or perhaps because of it, it still seems very likely that just as there’s a limit on individual technologies, there’s also a limit of a similar kind on technology in general.

This would mean we are currently living through an era of rapid progress which will slow down. If so, is there an easy way to estimate where we are in that process and when it will reach a plateau? If progress really has slowed since the 1960s, that decade might mark some kind of inflection point, where the curve stopped accelerating and began to decelerate, and if a measure could be found for when progress really took off, that might give an estimate of how long we have until it levels out. The Industrial Revolution started around 1760 and Apollo 11 was in 1969. If history obeyed these kinds of laws, the levelling out could therefore be expected around 2178.

Another way of looking at this is similar to the way the Doomsday Argument works. The astrophysicist Richard Gott, from Louisville, Kentucky, visited the Berlin Wall in 1969 and predicted that the Wall would stand for at least 2⅔ years but no more than two dozen years after his visit. This was based not on any special understanding of international relations or politics, but on statistics. At the time, the Wall had been in existence for eight years, and on this basis he estimated that it would continue to stand for between a third of and three times its then age, with 50% confidence. The reasoning rests on the principle of mediocrity, i.e. that there was nothing special about his particular visit, and the principle of indifference, that in the absence of information all possibilities are equally probable, which is true if probability is a statement of rational degree of belief. Given the second principle, half of all visits to the Berlin Wall can be expected to occur over the middle half of its lifetime, between a quarter and three-quarters of the way through; for a visit in that window, the Wall will go on to stand for between a third of and three times as long as it already has. In fact it fell in November 1989. This principle has also been used to conclude, probably wrongly as the linked post argues, that one’s own birth is about half way through the total number of human births. Measuring from 200 000 BP, using an estimate made in 1976 that there had been 75 milliard human births by then, and assuming the population would continue to double every twenty-eight years, I calculated that the last human birth would occur some time around 2134. These estimates, though, are egocentric: someone born thousands of years ago could have estimated that the human race should have ended by now, and it obviously hasn’t. Also, anyone visiting or being born outside that middle zone, i.e. near the beginning or the end of the Wall or of the human race, will be very wrong. It’s just that the chances are that we are inside that zone.
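
Gott’s reasoning can be written down in a couple of lines. A minimal sketch, using the usual 50% confidence level and the Berlin Wall figures quoted above (the function name is mine):

```python
def gott_bounds(age_so_far, confidence=0.5):
    # Gott's delta-t argument: with the given confidence, the remaining lifetime lies
    # between age*(1-c)/(1+c) and age*(1+c)/(1-c).
    c = confidence
    return age_so_far * (1 - c) / (1 + c), age_so_far * (1 + c) / (1 - c)

low, high = gott_bounds(8)        # the Wall was eight years old at Gott's 1969 visit
print(f"between {low:.2f} and {high:.0f} more years")   # 2.67 to 24; it fell twenty years later
```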

As already mentioned, human population growth is likely to be sigmoid, both because of loss of resources and because species in a given habitat show sigmoid population growth for exactly that reason. It would be interesting and relevant to know whether this applies to omnivores, since one option we have which wouldn’t be available to, say, dolphins or cats would be to modify our diet and start using other resources. Maybe this is what omnivorous species do. This kind of growth also scuppers the Doomsday Argument, and in fact population growth is already slowing, so for the purposes of that particular graph the curve is already past its inflection point. For technological and scientific progress’s sake, though, what are the results? The earlier limit of the take-off point is vague, and possibly also in different places for technological progress and scientific progress. The spinning jenny, invented in 1764, is often mentioned as the start of the former. Steam engines are a bit of a weird jumping-off point because they have existed for a surprisingly long time, having been invented in China around a thousand years ago and in Greece about twice that long back; it seems to have been James Watt’s improvements which made them practical as a source of power. This enabled iron to be refined more efficiently, and machine tools were the final piece of the jigsaw. This all points to around 1760. Neil Armstrong stepped off the Eagle 209 years later, when I was almost two. Hence I was born 207 years after the onset of the Industrial Revolution, at a time when the global human population was doubling every twenty-eight years.

There were around eight hundred million people in 1750 and 3 610 million in 1970. Very approximately, this means that about six thousand million people were born between 1750 and 1970, meaning that by Gott’s argument there ought to be somewhere between two thousand million and eighteen thousand million additional people born before progress flattens out. The lower estimate means that would’ve happened already, but we know it hasn’t, so maybe the half-way version works better in this case. That would put the flattening out after the births of six thousand million more people from 1967, the year of my birth, onwards, and we could already be more than half way there. Current world population is around 7 888 million, and about half the 3 610 million alive in 1970 have since died, so adding those deaths to the net increase gives something over six thousand million births since then, with spurious accuracy. That already takes us to around the six-thousand-million point, so we could expect significant technological progress to end by about 2080, and quite possibly much sooner.
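
Laying the arithmetic out explicitly, using the round figures quoted above and treating the guess that about half of the 1970 population has since died as exactly that, a guess:

```python
# Back-of-envelope version of the estimate above; all figures are in millions of people
# and are the rough ones quoted in the text, not precise demographic data.
pop_1970 = 3_610
pop_2023 = 7_888
births_1750_to_1970 = 6_000               # the rough figure used above

# Gott-style bounds: a third to three times as many further births before progress flattens out.
lower, upper = births_1750_to_1970 / 3, 3 * births_1750_to_1970

deaths_since_1970 = pop_1970 / 2          # assumption: about half the 1970 population has died
births_since_1970 = (pop_2023 - pop_1970) + deaths_since_1970   # net increase plus deaths

print(f"Gott bounds: {lower:,.0f} to {upper:,.0f} million further births")
print(f"births since 1970 ≈ {births_since_1970:,.0f} million")
```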

All that said, there are a couple of ways in which very obvious progress could be made but hasn’t been. It’s been noted that technology is biassed towards able-bodied White cis men of a certain age range, isn’t particularly suitable for marginalised people, and in fact can even kill them. In this respect, we’ve not made much progress. The other way is linked to this: we are not living in an age of progressive politics. Quite the reverse. If a graph could be drawn for progressive politics, it would have peaked in about 1978 and would currently be back at 1960s levels or earlier. Things are going backwards in that respect and show no signs of reversing. This influences technological and scientific progress. The increase in belief in Young Earth Creationism, to pick a fairly clear example, will have a knock-on effect on cancer research, because cancer is effectively independent evolution happening inside the body. The oppression of female, queer and Black people deprives the world of their talents and skills, not only because of their particular perspectives but simply because they’re human beings who would otherwise be able to exercise and develop those talents. Perversely, though, this could mean that progress continues for longer, because the curve we currently experience is shallower and more drawn out due to the squandered talent. Simply emancipating women to the same degree as men would telescope the curve to half its length.

Why might we want progress to end, though? There are a couple of reasons, linked to each other. One is that although humans probably evolved during a time of relatively rapid change, we throve during the flat period extending through the Palæolithic. We got used to a process whereby wisdom gained by elders could be usefully passed on to future generations. If someone discovered that onions were edible, something which has long mystified me, that could be passed on to grandchildren, and we still have that knowledge today. By contrast, if someone in the early twentieth century learnt to write cursive with a fountain pen, and that it wasn’t a good idea to share one because the nib bends to suit the individual writer, that information is now almost useless because people don’t even write much with Bics nowadays, let alone pens with proper nibs. This means that older people are no longer such founts of useful knowledge and are probably less respected as a result. I can probably put on an LP at the right speed without scratching it and use a rotary dial telephone, and other people can drive using manual transmission, but the first two of these are already useless and the third will be too once cars are all electric. I might sound like an old fogey saying all this, but it means that a corpus of a particular kind of skill is constantly lost rather than built up, precisely because we are building on our predecessors’ achievements so quickly.

The other, which is again linked, is future shock. Heidi and Alvin Toffler famously dealt with this in 1970, although the term dates from 1963. Many aspects of their work are outdated, but the continuing existence of future shock as a general experience is indisputable. The concept is based on culture shock. I can’t use chopsticks or sign language and I walk most places; these three things would make it hard or impossible for me to adjust to life in East Asia, the deaf community or most of urban America. The same kinds of difficulty emerge for us all due to rapid technological change, which according to the Tofflers goes hand in hand with social change, and it involves confusion, anxiety, isolation and depression. Disposability, built-in obsolescence, the end of tradition and a new kind of nomadic existence, provoked by the need to change careers often as old industries end, new ones start and skills become outdated, are all features of contemporary life. There have also been changing social norms, some of which seem quite positive, such as the greater acceptance of homosexual relationships. However, it may be that this kind of change is temporary. We don’t know what will come out the other side, of course, but a new set of traditions could be built up, and in fact that’s nothing new, because much of what we think of as tradition was actually invented in the nineteenth Christian century.

Both of these problems might end at some point, always assuming we last long enough as a species, as we return to a slower pace of change in a world which is far more high-tech and scientifically advanced than today’s but which doesn’t change as rapidly.

Finally, I want to point out how useful this might be to an SF writer. This post was inspired by an observation someone made about Asimov’s stories: the kind of robots who exist thousands of years in the future are not in fact very different to the ones which exist in his fictional twenty-first century. Another aspect of this in his writing is how oddly similar the culture and technology of his late Galactic Empire, some thirty thousand years after Hiroshima, are to the time he was writing. Books are on microfilm, people still smoke tobacco, there are apparently no robots, and there are voice-operated machines to be sure, but they’re typewriters. Computers don’t seem to have a significant role at all. This looks very dated by today’s standards, but maybe in a way it’s a more accurate view of the future than one in which enormous change is ongoing. It makes the future easier to write and imagine, and while it does become increasingly dated, it avoids zeerust, Douglas Adams’s term for the particular kind of datedness which afflicts things originally designed to look futuristic.

If we survive, we don’t know what the world will be like centuries from now, but it’s also possible that the world in two hundred years won’t be that different technologically from the world in five hundred. Maybe it’s progress itself which will become dated, though hopefully not before environmental and social progress have made their marks.