A Look Back At The Third Millennium

Back in 1987 CE, I finally got round to joining the Leicestershire public library. In a way this was entirely superfluous, as I was also a member of Leicester University library (and still am, because that’s how it works, although I lost my card a long time ago and last used it in the late 1990s), but the kinds of books were different. I used it to get a quick overview of subjects I needed to study in more depth as part of my degree, and also for novels and art books. One of the first books I took out, after Hugh Cook’s ‘The Shift’ (which incidentally I highly recommend), was Brian Stableford and David Langford’s ‘The Third Millennium: A History of the World AD 2000–3000’, a work of fictional non-fiction (a future history presented as fact) whose cover image I shall now try to retrieve from the dark recesses of the web:

(actually that’s just Wikipedia). The illustration of the nautilus shell you see on that cover is in fact one of several options (another, with acorns, is the one I’ve seen on my copy and the library’s), and is a hologram rather than a two-dimensional photograph. There was also a paperback version, which I used to own:

The big, hardback version (whereof there was also a large-format paperback I think) scored over the small paperback in the lavish, full-colour illustrations and of course the hologram on the front cover. I don’t know if anyone reading this remembers UB40’s 1982 album UB44:

This was the earlier, limited edition, bearing a hologram, replaced soon after by this:

I actually quite like the second cover as well.

So the thing is, if you were living in Britain in the ’80s, you might have got the impression that the future would have lots of holograms in it. Oddly, the only holograms we seem to see regularly now are on bank cards. I do not know why this is. It seems to me that they’re still pretty groovy (geddit?) and that they ought to be all over the place, but they aren’t. They do have their drawbacks. For instance, this form of hologram doesn’t display true colours but shows a spectrum across the image as the viewing angle changes. There are ways around this, but not with printed still images. Nonetheless, representational holograms at least were a fad which went out of fashion, and I don’t know why. They were probably replaced by Magic Eye images, also known as random-dot stereograms:

I’ve made a few of these, on a Jupiter Ace. They’re quite easy. Another possible visual replacement is the Mandelbrot Set, in a sense.
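Out of interest, the core trick behind these stereograms is simple enough to sketch. Here is a minimal ASCII-art version in Python; it is not the Jupiter Ace code, and the function names are my own invention:

```python
import random

def stereogram(depth, width=79, height=24, sep=12):
    # depth(x, y) returns 0 for background, a few units for nearer points.
    # Each dot is forced to match the dot (sep - depth) places to its left,
    # so nearer points repeat at a shorter interval; when the eyes fuse
    # the repeats, that difference in interval reads as depth.
    rows = []
    for y in range(height):
        row = []
        for x in range(width):
            link = x - (sep - depth(x, y))  # the dot this one must copy
            row.append(row[link] if link >= 0 else random.choice('.#'))
        rows.append(''.join(row))
    return '\n'.join(rows)

# A raised square floating above the background:
print(stereogram(lambda x, y: 2 if 25 <= x < 55 and 8 <= y < 16 else 0))
```

Viewed wall-eyed, the square should float above the page; the same linking logic, done with coloured pixels instead of characters, is essentially all a Magic Eye image is.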

Just as holograms seemed like the future at the time but have gone out of fashion, some of Langford and Stableford’s book also, unsurprisingly, proved highly inaccurate, projecting the trends of the time unrealistically into the future, while other aspects were bang on. There’s also something about the illustrations not being CGI which forges a connection with the reader now: accustomed as we are to computer graphics, the knowledge that some of these images, although obviously manipulated in an analogue way, had to be based on real models at some point gives them a vividness which computer graphics lack. I almost feel sad to say this, because I was very into CGI as a teenager and my main motivation for interest in computers was their possibilities in that direction, but the idea has a kind of soullessness to it which is quite saddening. It isn’t about whether they’re convincing but about the need to feel an anchor in the physical world. There’s also artistry in how the fake images must have been created. It’s a little like the ingenuity of helical scanning on video cassettes, which made home video possible at all.

The most glaring anachronism is that the world depicted has the Soviet Union and the Warsaw Pact persisting for centuries, although it also sees Stalinism as dwindling to nothing very quickly. It was published around the time Gorbachev came to power and after a short period it became obvious that he was going to take the USSR in a very different direction. However, it also predicted that entrepreneurial capitalism would come to an end and that planned economies would become the norm, and this is really what’s happening, particularly in the wake of Covid. What we have now, quite possibly, is a situation where small businesses go to the wall and are replaced by corporations. For instance, a small takeaway could close down due to lack of footfall but its facilities would be bought up by a fast food franchise, siphoning off the income to where it does no good for anyone significant and effectively taking it out of the economy, at least locally. In the book, this process is envisaged as being driven by technological change, where manufacturing becomes more specialised and the division of labour becomes more sophisticated, and this does happen to some extent and may be responsible for the impression I get, at least as an outsider, that individual jobs are often incomprehensible to the people holding them. However, governments are also seen as having to exercise more control over the owners of large enterprises, which I don’t see happening, and I’m also not sure what the writers mean when they say “owners” because of the nature of shares. One thing which does seem realistic to me is the purchase of small nations by multinationals. I can absolutely see this happening and wouldn’t expect it to be confined to small nations either. The description of the real interests of multinationals also seems entirely accurate. 
They are described as constituting great cartels with no interest in competition, but more in avoiding taxation, protecting their markets and maintaining stability. On the other hand, governments are seen as being in opposition to them because the corporations try to avoid taxation, but this is the opposite of the real situation, which is that governments prefer to tax the poor and leave the rich to enjoy their stolen money, and perhaps find new ways to take money off the poor. It all seems a bit idealistic really, but still.

An interesting chapter covers a series of epidemics, not really pandemics, which break out from 2007 to 2060. The first leads to the overthrow of apartheid: it incapacitates all ethnicities in South Africa, but because the Whites are in the minority this enables the others to mount an uprising against them. South Africa then disintegrates into a number of small self-governing republics. There is a theme of deniability here, and it’s explicitly stated that none of the epidemics are necessarily genetically engineered, although many of them are convenient. Several of them seem to be aimed at particular ethnic groups, and it has been suggested in reality that this might be possible, although I suspect it wouldn’t work very well because of the mixture of genes we all have. One seems to be instigated by the US against Latinx immigrants, and not only succeeds but spreads into Mexico, Central and South America and kills many millions, beginning from Los Angeles. This one is rather poignant. It happens in 2015, is limited to a year and is quickly contained in the US by a vaccination programme which only takes three weeks. This is amazingly different from the real situation with Covid-19. The implication, though plausibly deniable, that the viruses involved in all of these are genetically modified is an interesting parallel to the real-world conspiracy theory that Covid was genetically modified by the Chinese. In fact, the book also depicts a Chinese virus from Wenzhou (温州) causing sudden hepatitis which kills 38 million. In reality it would be possible to identify genetic modification, because entire genes would be spliced in, meaning that long, continuous stretches of genetic code would differ from the wild strains, and at the time of writing genetic fingerprinting was being developed at my alma mater, so in a sense the authors missed a trick.
It is in fact the case that we are likely to be plagued by a series of pandemics due to deforestation over the next few decades, and it’s notable that the predicted death tolls are far smaller than the real numbers of casualties we’re currently experiencing. A new variety of AIDS, arising in Poland, is predicted for 2032; its long incubation period helps it spread, and it also causes sterility. The US “triplet plagues” are three simultaneous viruses, one causing paralysis and neuropathy, a second causing leukæmia and a third solid cancers. These kill ten million within a year. By 2060, the viral plagues have ceased, apparently because they hurt the perpetrating groups as much as the intended victims. This particular chapter is interesting to compare and contrast with the reality of Covid and the probable reality of future plagues, although there’s no need for any conscious instigation for this to happen. Also, they were right about the overthrow of Apartheid, although not about the cause or the timing – it’s a quarter of a century later here. Another thing they got right, sadly, was that pandemics would be better managed in the developed world than the global South.
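The earlier point about detecting engineering, namely that spliced-in genes would show up as long continuous stretches of difference from the wild strain rather than scattered point mutations, can be illustrated with a toy comparison. This assumes pre-aligned sequences of equal length; real analyses use proper alignment tools, and the sequences here are made up:

```python
def longest_mismatch_run(a, b):
    # Longest contiguous stretch over which two aligned, equal-length
    # sequences disagree. Scattered point mutations give short runs;
    # a spliced-in block gives one long one.
    best = run = 0
    for x, y in zip(a, b):
        run = run + 1 if x != y else 0
        best = max(best, run)
    return best

wild  = "ATGCCGTATTAGCCGTAACCGGTT"
point = "ATGCCGTCTTAGCCGTAACCGGTT"  # a single point mutation
block = "ATGCCGACAACTGGTAAACCGGTT"  # ten bases substituted wholesale

print(longest_mismatch_run(wild, point))  # 1
print(longest_mismatch_run(wild, block))  # 10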

The chapters on energy use are interesting. They seem to be based on accurate projections of fossil fuel and nuclear power use, although the likes of COP didn’t exist at the time. Coal and oil use peak in 2025 and 2000 respectively, but the cost of fuels relative to inflation rises three times as fast for the latter. It’s a little hard to understand how a fuel used for transport and manufacture is able to rise in price that fast independently of the prices of other goods, but there might be an explanation somewhere in the text. The reason for the rises in price is that increasingly marginal sources are used, particularly for oil, such as oil shales and sands. It can be assumed that fracking is going on, given the perspective we have. Coal also becomes more expensive because deeper mines have to be dug. Imports of oil also get harder due to countries wanting to hang on to their own supplies. This leads to biofuels, mainly ethanol, being developed in countries without these resources. Fission power is if anything less popular than in reality, mainly due to Green parties, which achieve a modicum of power. There is a meltdown in Vologda in 2004, which is probably close enough to other European countries to be significant, and the issue of enforced internationalism is also mentioned, this case being an example of pollution leading to neighbouring countries becoming concerned about each other’s activities.

The US President Garrity, 2024-32, introduces restrictions on commercial plastic use and fossil-fuelled automobiles, and conspicuous consumption ends. This is unpopular and blamed for a recession, but fuel shortages have already led to a recession by this point which is sufficiently severe that the additional measures make little difference. In fact I wonder if 2024-8 will prove to be Trump’s second term, in which case none of this will happen, and I’m also sure nothing this pronounced was agreed at COP-26. The expense of manufacture and energy leads to the maintenance rather than disposal of equipment, which encourages manual labour again. This, again, is the opposite of what has happened so far. Built-in obsolescence is a major issue, although there is the Right To Repair movement, and if it succeeds, maintenance and repair could become more popular by the end of this decade. This chapter also notes that uranium mining suffers from the same unsustainability problem as fossil fuels, but doesn’t mention thorium reactors.

Stableford and Langford blame the energy austerity measures imposed on consumers in the mid-century on profligate use of energy from the mid-twentieth century onward, and we would probably all agree with this. Energy use for individual consumers is rationed and large-scale energy use concentrates on public utilities. Property taxes are based on heating inefficiency, smart meters monitor consumption and issue on-the-spot fines and long-distance ’phone calls are cut off after a certain period. All of this is intrusively surveilled. Although I can imagine such things becoming necessary, I can’t see them being implemented. Nor can I see steps being taken to prevent us entering this predicament, so there are a lot of questions here about what will actually happen when it comes to the crunch. It is, however, clear that governments are able to exploit xenophobia resulting from this kind of situation, so whatever else happens it seems clear that right wing populism will be fuelled, so to speak, by this kind of crisis. On a side note, it predicts the Roomba in this bit.

Three necessities are mentioned for fusion: an accurate simulation, in advance, of changes in the plasma in order to maintain containment; more powerful and efficient magnetic fields; and a form of shielding which would absorb most of the neutrons and protect the outer casing. All of these are solved in the book, and they do seem, in my rather naïve view, to capture the main issues. The simulation problem is addressed as an outgrowth of what we now call the Human Genome Project, referred to in the book as “Total Genetic Mapping”, since powerful modelling software had to be developed to achieve it. I’m sure that’s true, but I’m not sure how it would be relevant, which isn’t to say that it isn’t. In the book, the efficiency of the magnetic fields is achieved by the invention of room-temperature superconductors. Finally, the alloy which acts as a neutron sink is manufactured in orbit, because only in zero G can metals of very different densities, such as aluminium and lead, form an alloy without separating. I see this bit partly as an attempt to demonstrate the benefits of zero-gravity manufacturing, but it also addresses the problem of the casing becoming so heavily irradiated that it turns into radioactive waste in its own right and needs to be replaced. Room-temperature superconductivity has since been reported, but only under immense pressure, so another problem has been created. Previously, the issue of creating magnets powerful enough to contain plasma at sufficient pressure to cause fusion was addressed by using liquid helium to cool the magnets and circuitry almost to absolute zero, which led to a mind-numbing temperature gradient, because the plasma itself is at around 150 million kelvin. Now the problem is pressure, but there may be a hint at a solution in the realisation that both the plasma and the superconductor need to be under very high pressure. This is too big a subject to talk about in this post, really. Incidentally, fusion reactor efficiencies are misquoted in two ways.
Firstly, the ratio of energy input to the plasma to its energy output is not the whole story, because the total energy input to the plant is greater, and secondly, the conversion of heat to electricity is only about fifty percent efficient at best. There’s also the energy input to the tritium breeding and extraction process, and tritium is vanishingly scarce in nature, so it has to be bred, typically from lithium. The alternative is to use helium-3, which is abundant in lunar regolith, but we are nowhere near managing that at the moment anyway. It’s looking like I’m going to have to blog about this subject separately, but this brings home the interesting topicality and relevance of the book to contemporary events, because all the things mentioned are live issues in fusion research.
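To make those two corrections concrete, here is a back-of-envelope power balance. All the efficiency figures, the 40% wall-plug efficiency of the heating systems and the lumped auxiliary load are illustrative assumptions of mine, not figures from the book:

```python
def net_electric_power(fusion_mw, plasma_q, aux_mw, thermal_eff=0.45):
    # plasma_q is the usually quoted figure: fusion power divided by
    # heating power absorbed by the plasma. The whole plant does worse:
    # heat converts to electricity at ~45-50% at best, the heating
    # systems themselves waste power (assume 40% wall-plug efficiency),
    # and pumps, cryogenics and so on draw a further aux_mw.
    heating = fusion_mw / plasma_q
    gross_electric = (fusion_mw + heating) * thermal_eff
    recirculated = heating / 0.4 + aux_mw
    return gross_electric - recirculated

# ITER-like plasma numbers: 500 MW of fusion at Q = 10, i.e. 50 MW heating.
print(net_electric_power(500, 10, aux_mw=100))  # 22.5
```

On these (invented but not unreasonable) numbers, a machine with a headline “Q of 10” barely breaks even as a power station, which is exactly the point about the quoted ratio not being the whole story.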

By the time fusion power becomes practical, the public and government perceive it as having been dangled in front of them for so long that they’re sceptical, and people have also readjusted to the new energy régime. Biotech is also getting all the money because it’s more glamorous, so it isn’t until the 2090s that fusion generators come online at all. Once they have, there are further delays. It’s realised that neutrons emitted by fusion can be used to make weapons-grade plutonium, and there are squabbles over the sitings of the reactors, because it’s felt that some redress is needed for the global South for the wealth the North previously amassed through fossil fuel use; but given that the reactors are accordingly located there, the cost of building an electricity grid sufficient to carry the power out of those countries considerably offsets the benefits. Deuterium plants also have to be located in or near the sea, so this doesn’t help landlocked territories. There are also teething problems, such as damage to the plant from the intense heat and radiation, meaning that the reactors need to be redesigned and rebuilt.

All of this section feels remarkably grounded in reality and practical considerations. There is nothing waffly in this. I can completely buy the idea that should fusion power ever prove practical, this is very much along the lines of what would happen. We can already see Third World nations objecting to what they see as the North pulling up the ladder after themselves by changing the energy goalposts, and this reluctance is basically the same thing. This accords with the general tone of convincing politicking combined with speculative, but not wildly so, conjectures regarding technological and scientific change. This is definitely hard SF.

Unsurprisingly, an issue following on from this is that of anthropogenic, or otherwise, climate change. The emphasis is on global warming and sea level rise although it is mentioned that changes in ocean currents and rainfall patterns lead to unanticipated results such as a general reduction in crop yields accompanied by sporadic increases in some areas due to shifts making land more suitable for particular crops such as cereals. This can be seen in reality today, for instance with the increasingly friendly English climate for grape-based wine production. It’s also uncertain, in the book, how much fluctuations in solar activity contribute to the situation, but again as in reality, they’re generally thought to mitigate the effects of climate change. Ocean acidification hadn’t been identified as a problem at the time and is therefore ignored, as are the risks from clathrate hydrates releasing methane.

The prediction of sea level fluctuation is that it will rise sixteen metres between 2000 and 2120 at a maximum rate of twenty-four centimetres a year and then drop once humanity gets its act together to a stable level two metres above the 2000 level by 2200. Shanghai is the first city to be affected by the rise, starting in 2015 and being obliterated by 2200. Tokyo and Osaka are similarly threatened but this is overtaken by events because in the late twenty-first century Japan is practically destroyed by quakes and the population disperses throughout the globe. Speaking of quakes, attempts to protect Los Angeles and San Francisco are hampered by seismic activity in California. All of this is quite well thought-through, although I have yet to check the elevation of the relevant cities. More widely in the US, attempts to rescue New York City and Los Angeles are the main focus, leading to resentment in the South, particularly Florida and Texas. The bicentennial of the Civil War in the 2060s leads to civil unrest in the Southern States because of the focus on settlements outside the area. This is a little similar to the Hurricane Katrina situation.
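As a quick sanity check on the book’s figures (my arithmetic, not the authors’):

```python
rise_m, years = 16, 120                 # a 16 m rise from 2000 to 2120
avg_rate_cm_per_yr = rise_m * 100 / years
print(round(avg_rate_cm_per_yr, 1))     # 13.3 cm/yr on average

# The quoted peak is 24 cm/yr, nearly double the average, so the rise
# must slacken off considerably at the start or end of the period;
# at a constant 24 cm/yr the whole 16 m would arrive in about 67 years.
print(round(rise_m * 100 / 24))         # 67
```

So the numbers are internally consistent, if very steep: for comparison, the current real rate of global sea level rise is around 0.4 cm a year.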

Comparing this with real life, Shanghai is indeed very low-lying at 2-4 metres above sea level. China is also disproportionately affected by sea level rise for a continental nation, as is much of East Asia. In Shanghai, there was catastrophic flooding killing seventy-seven people in 2012 and there are attempts to create mangrove swamps to increase resilience. For some reason I don’t understand, sea level is rising faster in East Asia than elsewhere. How is this possible? Clearly there’s something about the oceans I don’t understand (regional differences in ocean circulation and thermal expansion, plus local land subsidence, which is severe in Shanghai, presumably play a part). As for New York City, I don’t know what’s been done yet but there are plans to fortify the shoreline in Manhattan. The devastation of New Orleans also occurs but from flooding due to sea level rise rather than the hurricane, and of course this is still on the cards.

Another successful prediction is made concerning public response to climate change. People take it personally and realise it’s about their children and grandchildren. Having said that, it often seems to me that people are remarkably unconcerned in reality about it and I find this puzzling. But we do have Greta Thunberg and Extinction Rebellion.

The destruction of Honshu occurs in 2084-85 and starts with an earthquake, followed by the eruption of Mount Fuji and the emergence of a new submarine volcano. This leads to a Japanese diaspora and the blurring of cultural and ethnic distinctions. Clearly this is an unpredictable event, although the nations of the Pacific Rim are all at risk. In order to tell a story, the authors have to commit themselves to a particular date and location, but there’s a more general principle here. It’s a bit Butterfly Effect, because it’s equally feasible that it could have happened to California, which would have had different consequences, because California is physically and economically integrated with the rest of the States.

There follows a to me rather depressing chapter on genetically modified food, where the reduction in yields caused by climate change is only mitigated to subsistence levels by the engineering of more suitable varieties for the new climatic conditions. This leads to the production of SCP – Single-Cell Proteins – initially as fodder but illicitly eaten by vegans as a meat substitute until it’s legalised for human consumption later on. Complete foods are also created in the form of grains which contain all essential nutrients, off which the inventor lives for a decade but is accused of cheating. This reminds me of Huel and also breatharianism to some extent. Then there’s a description of all the small-scale subsistence farmers who have been forced off their land by megascale monoculture agriculture growing patented crops, which balances the rather technocratic tone of the previous chapter. These are known as the “Lost Billion”, the number of people affected (short scale), no longer able to farm what used to be their land and reduced to the status of refugees. Some of them resort to armed struggle and others join apocalyptic religious cults as a coping mechanism for the destruction of their way of life. Sea farming also expands greatly, something I personally strongly believe in, in the form of algal and blue-green algal farming, which would serve to satisfy many nutritional needs while redressing the phosphorus imbalance. Seaweeds are also grown, particularly by Australia due to its extensive shallow seas, but also along the entire west coast of South America. This is from the 2060s. In my mind, I envisaged just ordinary seaweed but their version of it is genetically modified seaweed, which is also used for biodiesel. It often isn’t realised how much oil there is in algæ, which I presume is to enable them to float near the surface and photosynthesise. As the authors point out, more than two-thirds of sunlight falls on the sea and it is an underexploited resource. 
Not that it’s ours to exploit necessarily as it would have an impact on the ecosystem there, but it’s a question of minimising that impact elsewhere.

Unsurprisingly, the most predictable thing ever, the internet, is, well, predicted. Amusingly, ebook readers are for some reason only introduced in the 2060s, after false starts from 2005 onward. There are also wall screens. I don’t know if domestic wall screens will ever become popular. In theory we could have them now, as larger screens exist in public places for such purposes as advertising and as whiteboard replacements. All anyone need do is buy one and put it in their home, but people don’t do this. Maybe they will one day, and it’s important to remember that this is supposed to be about what happens over the next 979 years. The book also speaks of financial transactions going through a cycle of security and insecurity, which is entirely feasible if quantum computers develop the capacity to break encryption through fast factorisation.

Then they talk about employment. They see new technology as eliminating white-collar jobs faster than manual labour, because of the need to maintain that technology and repair the damage done by climate change. Hikikomori are also mentioned, though not by name. The phenomenon is described as “TV withdrawal” and as affecting mainly people in poorer countries, who seek to escape from the reality of life into the more idealised version, particularly in advertising, seen on television. There is resistance to home-working, and people continue to commute because they see working at home for pay as unnatural. I can see some of this, to be sure, and in the real world there’s the issue of economic support for ventures which serve commuters and people going to work, such as fast food stands and sandwich shops, among other things. City centres also stay expensive. An interesting phenomenon which as far as I know hasn’t happened is an organisation known as Speedwatch, starting in 2004, which begins as a mutual support group for the victims of dangerous drivers and develops into a vigilante group assassinating motorists who exceed the speed limit or otherwise drive dangerously; although this ends in the perpetrators being imprisoned, it is argued to make roads safer by introducing a deterrent. Restrictions on private vehicles increase while the leaders are in jail, and on being released in 2021, they claim to have won. Public transport is boosted. Now this would be sensible, which probably explains why it hasn’t happened. Electric cars are introduced but are underpowered. The Sinclair C5’s successors, planned in reality, are more successful. The time frame is approximately correct, with petrol cars ceasing to be manufactured in 2030, by which time there is in any case more home-working. Airships come back too, for obvious reasons. I really want this to happen but don’t think it will.

That, then, is the first part of the book. The wider sweep of the worldbuilding, which extends far beyond the third millennium, was used as the basis of much of Brian Stableford’s fiction, such as the Emortality series and his short story ‘And Him Not Busy Being Born’. His earlier novels bear no relation to all of this as far as I can tell. David Langford mainly writes parodies, such as ‘Earthdoom’, which I have read, but he also came up with the idea of the brain-breaking fractal image known as the “basilisk”, which in his stories leads to the display of images online being made seriously illegal. He writes the newssheet ‘Ansible’ and the Ansible Link column in ‘Interzone’, and has won more Hugos than anyone else ever. The rest of the book is also interesting but tends to branch out beyond what’s relevant today. There is a first contact towards the end, but since humans have been so heavily genetically modified by then, it doesn’t really feel like one. They also remove the natural limit on the human lifespan, so there are no longer such things as disease and old age, and this is an important issue in much of Stableford’s work.

It isn’t so much for its accuracy or datedness that this work is interesting as for its focus on Realpolitik and the quality of the research put into it. Yes, it’s dated, and yes, it reflects the time it was written in (these are not the same thing), but it’s also believable and quite frank about the risks we present ourselves with, particularly in the areas of climate change and fossil fuel use. I highly recommend it, even now.

Sinclair

By Prioryman – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=35368168

Clive Sinclair, the home electronics pioneer and entrepreneur, has just died at the age of eighty-one. Although I am not officially a fan of entrepreneurs, being rather left wing, I nevertheless have a soft spot in my heart for his products. The drama-doc ‘Micro Men’ covered his story and rivalry with Acorn in the late ’70s and early ’80s quite nicely, but there’s more to Sinclair and his two major companies than the events and products of that period. Incidentally, the two of us share a birthday.

He was born in 1940, in Richmond, and founded Sinclair Radionics five days before his twenty-first birthday, having raised funds by writing magazine electronics articles. This first company was bought out by the National Enterprise Board and he was paid off in the late ’70s, and he proceeded to start the company which actually made the famous computers and C5, Sinclair Research Ltd. This was later taken over by Amstrad, but he continued with another new company, releasing the Z88 in 1987, and a number of other products. I would say his products are characterised in three ways: they were cheaper than their rivals, they tended to get announced way earlier than they were released and they often had teething problems due to their relatively short development phases.

As far as I can remember, Sinclair’s first product was an amplifier in 1962, followed by a pocket radio the year after. This second product was self-assembly, as were several of his products up until the ZX Spectrum two decades later. This presumably made them cheaper, but this wasn’t unusual at the time. By 1966, he’d designed a pocket television, 405-line if I remember correctly, whose design was unfortunately too complex to get it beyond the prototype stage, but I’d say this is still an achievement for that time:

Copyright status unknown: will be removed on request.

This was at a time when pocket transistor radios were quite a novelty, although I’m aware of much older DIY projects to build crystal sets to fit in wallets, so as a concept it wasn’t actually very new. This design looks very ’sixties; I can imagine it turning up on ‘Star Trek’ TOS. Although this set was advertised but never released, establishing a familiar pattern, the Microvision did eventually come to market in 1977, and I clearly remember it doing so. There was a display model in the window of Barrett’s toy shop in Canterbury for ages. Like its predecessor, it had a two inch screen, and I remember it being advertised as being able to pick up TV transmissions from all over the world, which I found dubious at the time and imagine was either untrue or only true in the sense that if you took it to a country with the same TV system as ours, it would also pick up programmes there.

Looking at those controls, which I remember from the time, it looks like it could switch between PAL and NTSC (the American system) and possibly also 405-line transmissions, so the claim seems misleading but technically true. One of my friends hooked this up to another of Sinclair’s products, the ZX81, and was able to display what amounted to pretty high resolution graphics on it, in the sense of pixels per inch, but I’m getting ahead of myself.

In 1972, they introduced the first slimline pocket calculator, the Executive for £79.95 plus VAT. This tendency to quote prices without VAT was really irritating and seems to have stopped happening nowadays. Amstrad also did it. The use of LEDs on calculators and digital watches at the time made them quite power hungry compared to LCD displays which came in later. Sinclair was for some reason very hostile to the idea of LCDs for a very long time, not including them in any of his products until the late ’80s. You can see from this device the beginning of a tendency to have rather uncomfortable and impractical keyboards, which continued with his computers in the next decade.

By Windshear – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=58073110

Nineteen models of calculator were produced by Sinclair over the years. The one I remember best is the above, the Sinclair Cambridge Programmable, which was advertised in the ‘New Scientist’ in 1976. It had a maximum of three dozen steps in its programs and included a conditional branch instruction. A later model extended this to eighty steps, but both were only accurate to four significant figures. One of the oddities of ’70s programmable calculators is that they didn’t lead smoothly into microcomputers. You might think that the design of these devices would become steadily more advanced until they actually became like home micros in a way, but instead, microcomputer design starts again from scratch across the board. The one exception is Hewlett-Packard’s HP85, but this didn’t lead to anything else in the long run. In the case of Sinclair this may have been because he lost the intellectual property rights to his calculators when the NEB took over his company, but I’m just guessing.
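For a feel of what programming such a machine was like, here is a toy interpreter in the same spirit: a single working register, a short numbered program of keystrokes, and one conditional branch. The instruction names are invented for illustration; they are not Sinclair’s actual keycodes:

```python
def run(program, x=0.0, max_steps=1000):
    # A keystroke program is just a numbered list of operations;
    # the Cambridge Programmable held at most 36 of them.
    assert len(program) <= 36
    pc = steps = 0
    while pc < len(program) and steps < max_steps:
        op, arg = program[pc]
        steps += 1
        if op == 'add':
            x += arg; pc += 1
        elif op == 'mul':
            x *= arg; pc += 1
        elif op == 'jmp':       # unconditional jump to step arg
            pc = arg
        elif op == 'brneg':     # the conditional branch: jump if x < 0
            pc = arg if x < 0 else pc + 1
        else:
            raise ValueError(op)
    return x

# Subtract 1 repeatedly until the register first goes negative:
countdown = [('add', -1), ('brneg', 3), ('jmp', 0)]
print(run(countdown, x=5))  # -1
```

Even with so little, a conditional branch makes loops possible, which is what separated the programmables from ordinary calculators.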

By Prof.Dr. Jens Kirchhoff at German Wikipedia (original text: de:Benutzer:Prof.Dr. Jens Kirchhoff) – Self-photographed, Attribution, https://commons.wikimedia.org/w/index.php?curid=2045759

Another Sinclair innovation was the Black Watch, a self-assembly LED digital watch. A couple of interesting things about this design are that the LED display has the same dark blue tint (lighting up red of course, as red LEDs preceded the other colours) as the calculator above, and the black casing, shared with Sinclair’s first calculator and repeated on his computers later. The Black Watch was not a success: it kept poor time because the quartz crystal ran at different speeds at different temperatures, the batteries lasted only ten days rather than the advertised year and were then difficult to replace, and the circuitry was vulnerable to damage by static electricity. A lot of the features of later products were therefore already discernible, if you allow the word “feature” to include the ideas expressed in the design.

A lot of kits were left over after the watch was withdrawn and in order to use these up they were remarketed as a clock for car dashboards. I actually admire the ingenuity of doing this and the economy of using components which were just lying around appeals to me. In practical terms though, I wonder whether the technical problems were resolved or if they didn’t affect a dashboard clock as much as they would a wristwatch.

If you were to ask most people to name Sinclair computers, the first to come to mind would probably be the ZX Spectrum, followed by the ZX81, with the QL and ZX80 probably sharing a fairly distant third place. However, the ZX80 wasn’t Sinclair’s first computer. That honour goes to the 1977 product, the Mk 14. Now I have to say that I find most mid-’70s hobbyist microcomputers rather confounding, and if you wanted an Acorn equivalent to the Mk 14 it would probably be the System 1. Other similar computers include the KIM-1 (from MOS Technology, which Commodore later absorbed) and the MPF-I, which I imagine ceased to exist when its maker moved on to the rather Apple ][-like MPF-II and Apple decided to sue the heck out of them. Anyway, this is a Mk 14:

Taken from OLD-COMPUTERS.COM – will be removed on request.

These were not user-friendly machines by any stretch of the imagination. Moreover, unlike the System 1 and KIM-1, which both used the 6502 CPU, Sinclair for some reason opted for the SC/MP (“Scamp”), also known as the 8060. I have never understood why the 8060 is designed as weirdly as it is. The name makes it sound like an Intel CPU along the lines of the x86 series or the 8080 and 8085, and maybe that was a marketing ploy, but it was made by National Semiconductor. Although it can access a 64K address space, it does so by changing the function of several of its pins which already have quite important jobs, and it increments the program counter before fetching each instruction, meaning that address 0 cannot contain an op-code unless something branches back to it. I also seem to remember it didn’t have a stack pointer, so subroutines would be very awkward to implement. It was used as a lift (elevator) controller, and some of them are probably still in service, which is fine, but it doesn’t lend itself comfortably to general-purpose programming, which is what Sinclair was using it for in the Mk 14. It was also, probably due to its unpopularity, much more expensive than the mass-market Z80, 6502 and 6809. It seems perverse for such a fiddly piece of hardware to be made even less user-friendly still by the choice of the 8060.
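To give a feel for the subroutine problem, here is a toy model in Python rather than actual SC/MP code, with names of my own invention. If I remember rightly, the SC/MP’s XPPC instruction simply exchanges the program counter with one of its pointer registers, so a “call” leaves the return address sitting in that register rather than on a stack, and any nested call would clobber it unless the programmer saves it somewhere by hand:

```python
# Toy model of SC/MP-style subroutine linkage (hypothetical names).
# There is no stack: XPPC swaps the program counter with a pointer
# register, which then holds the return address.

class ToyScmp:
    def __init__(self):
        self.pc = 0   # program counter
        self.p3 = 0   # pointer register conventionally used for linkage

    def xppc(self):
        # Both "call" and "return" are this single exchange.
        self.pc, self.p3 = self.p3, self.pc

cpu = ToyScmp()
cpu.pc = 0x0100            # main program
cpu.p3 = 0x0200            # subroutine entry point
cpu.xppc()                 # call: now "executing" at 0x0200
assert (cpu.pc, cpu.p3) == (0x0200, 0x0100)

saved = cpu.p3             # a nested call would clobber p3,
                           # so the return address must be saved manually
cpu.xppc()                 # return: back to 0x0100
assert cpu.pc == 0x0100
```

The point being that every level of nesting needs its own manually managed save slot, where a stack-based CPU gets this for free.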

The Mk 14 cost only £39.95, and had half a kilobyte of ROM and 256 bytes of RAM. It would presumably have had a machine code monitor as firmware and been programmable in hex opcodes, using the LED seven-segment display for output. Remarkably, in the light of future developments, Clive Sinclair wasn’t keen on the idea of bothering with computers at all, and didn’t actually use one himself until well into the 1980s, and this seems to have been a factor in the genesis of Acorn. Chris Curry managed the Mk 14 project and soon left to found Acorn with Hermann Hauser in 1978, to build the System 1 and eventually design the ARM chips which now power tablets and mobile ’phones, and I wonder whether this was due to Sinclair’s failure to appreciate the potential of computers. It’s quite strange to think of this now.

One thing the Mk 14 did manage to do was persuade Sinclair that there was a market for home computers, and he went on to design and release the ZX80, in 1980:

By Daniel Ryde, Skövde – Originally from the Swedish Wikipedia., CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=439384

I like the design of the ZX80 case; in 1980 terms it looks very futuristic. It has the whiteness of the Cambridge calculators and of course a flat-panel keyboard, which was very much in vogue at the time in the form of hi-fi and music centre controls. It was also a nightmare to type on. It could be bought either complete or as a kit, and in the former condition it was the first computer ever to be sold for under £100. People tend to think of it as very primitive; my mother considered a friend of mine to be uppity because his parents bought him one when it came out.

It had 1K of RAM and 4K of ROM, the latter including a BASIC interpreter which could only work in integers. Also, a bit like President Ford, who supposedly couldn’t walk and chew gum at the same time, it could either do computing or show things on the TV but not both, so while it was actually running a program the picture would disappear. However, the integer BASIC also made it very fast, and it was able to use full keywords where the later ZX81 had to abbreviate, such as “CONTINUE” and “GO SUB” instead of “CONT” and “GOSUB”. This leaves one with the impression that the ROM is quite spacious, when in fact it only held about as much information as two sides of handwritten foolscap. It was made entirely from readily-available parts rather than commissioned or in-house chips: apart from the CPU (which, thank goodness, was a Z80 rather than the silly 8060 used in its predecessor), the RAM and the ROM, the board was composed mainly of standard discrete logic chips, which was pretty typical for micros of that time. It sold about 50,000 units. It is possible to get it to produce a steady display by careful use of interrupts, but it wouldn’t do that out of the box. My perception of it at the time was that it was still very much a niche product about which I knew practically nothing. In the publicity, Sinclair claimed it was powerful enough to run a nuclear power station, but I’m unaware of any supporting evidence for that.

By Evan-Amos – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=18300824

Sinclair would dedicate the next few years mainly to producing home computers and peripherals, and in 1981 they started to produce what I think is still the cheapest new computer on first introduction ever: the ZX81, at £69.95. In hardware terms the unexpanded ZX81 is functionally equivalent to a ZX80, but internally it made the major innovation of condensing all the discrete logic into a single chip, with the result that the board used only four or five chips in total compared to the ZX80’s twenty-one. There was now 8K of ROM, including an almost full floating-point BASIC lacking only READ, DATA and RESTORE, and the RAM was expandable to 56K. The compromises in the BASIC were due to the inclusion of a number of instructions for interfacing with the new ZX Printer, a thermal printer which required special metal-coated paper; I actually consider this an unfortunate decision, probably connected to marketing the printer. The display was steady, but this was achieved by getting the computer to multitask between running programs and displaying the screen, which made it four times slower than the ZX80 running the same code, and it was slowed further if a RAM pack was used, because that meant dynamic RAM rather than static, which is slower. I’m guessing the initial decision to use a Z80 CPU was made with an eye to such expansion, as it has its own built-in RAM refresh facility, which can double as a kind of quick random number generator. This machine was probably responsible for the microcomputer boom of the ’80s. My perception of it is rather dominated by the fact that it was our first home computer. It was also frustratingly limited even at the time, but this spurred third-party developers to come up with their own expansions for the likes of colour, high-resolution graphics and sound, and some of them went on to produce their own computers.
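That refresh trick deserves a word. The Z80’s R register is a seven-bit counter which the CPU increments on every instruction fetch, so its value at any program-visible moment depends on exactly how many opcodes have executed, which makes it a handy source of junk entropy when sampled at an unpredictable instant such as a keypress. A minimal sketch of the idea in Python (the function name is mine, and the `random` module merely stands in for the unpredictable timing of the user):

```python
import random  # stands in for the unpredictable timing of a keypress

def r_register_after(fetches):
    # The R register is a 7-bit counter, so it wraps modulo 128;
    # on a real Z80, bit 7 is preserved separately and not counted.
    return fetches % 128

# Sampling R at the moment the user happens to press a key is
# effectively a random draw from 0..127:
fetches_before_keypress = random.randrange(10_000_000)
value = r_register_after(fetches_before_keypress)
assert 0 <= value < 128
```

It is crude (the low bits correlate with loop lengths, for one thing), but on a machine with 1K of RAM, a free counter that ticks on every fetch is not to be sniffed at.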

1982 brought the legendary ZX Spectrum:

By Bill Bertram – Own work, CC BY-SA 2.5, https://commons.wikimedia.org/w/index.php?curid=170050

This was once again a huge leap forward. It was at the time the cheapest computer with sound, high-resolution graphics and colour. Although it once again used the Z80A CPU, it shares many of the features of the Apple ][ while improving on them, and for this reason I’ve written about it as part of an alternate history here. It is entirely feasible that a functional equivalent of the ZX Spectrum could have been put together from April 1976 onward, because in the following year a rather similar, though 6502-based, computer arrived on the scene. However, the chances are the world wasn’t ready for it in the mid-’70s, and at that time it would probably have cost about £1500; it’s all discussed at the link. Sinclair wanted his machine to be chosen as the BBC Micro, but the BBC wanted a “real” computer and picked the Acorn Proton project instead. Clive Sinclair objected to the very idea of the BBC endorsing a specific model of computer, because they were a publicly-funded body and he saw it as tantamount to advertising. I wonder whether he was in fact influenced by his experience with the NEB, which seems to have taken advantage of all his hard work and given him an insufficient golden handshake, while apparently denying him the opportunity to capitalise on his ideas.

The ZX Spectrum was not intended to be a games machine, but that was certainly its main use; the same applies, to a lesser extent, to the BBC Micro. Along with the Commodore 64 it was the most popular computer before IBM PC clones came to dominate everything. The original keyboard was not much of an improvement on the previous two products, and single-keystroke keyword entry, intended to circumvent the problem of having a poor-quality keyboard, led to such absurdities as it taking four keystrokes to type the word “INK”. Sinclair promised a stringy floppy called the Microdrive, plus Interfaces 1 and 2, all of which were delayed and built up long waiting lists. The Spectrum persisted for a very long time, undergoing several upgrades, and continued to be manufactured for some time after Sinclair became part of Amstrad, by which time it had a proper multichannel sound chip, an RGB monitor interface, a built-in disc drive and something approaching a typewriter keyboard, and it was possible to opt out of single-keyword entry when powering on. There were many Spectrum clones, notably behind the Iron Curtain and in Latin America, some of which extended its capabilities beyond recognition and were more like early-’90s PCs in their specs, and there were also computers such as the SAM Coupé, which was far more capable than the Speccy but also compatible with it. It was an incredibly persistent computer. Sinclair also had three rather nebulous Spectrum-related projects called the Loki, Janus and Pandora, none of which materialised.

Through the ’80s, Sinclair was aiming to produce a laptop along the lines of the GRiD Compass. They planned to do it with the ZX80, ZX81 and Spectrum, and it almost came to fruition with their next computer. No new computers were announced in 1983, in spite of the huge glut of new micros being released by loads of different manufacturers. Then, in 1984, the QL was announced. Standing for “Quantum Leap”, this was billed as a 32-bit computer, Sinclair seeing itself as having leapt over the 16-bit era and gone straight to 32 bits. However, it was based on the 68008, a version of the 68000 which was internally thirty-two-bit but had an eight-bit data bus. As had often happened before, the hardware was buggy, and the first QLs were released with a lumpy thing hanging off the back, called a “dongle”, which fixed them. The QL was the first Sinclair computer to have something like a proper keyboard, although it still wasn’t up to the standard of many other, more expensive home micros. Storage was in the form of two built-in Microdrives. It formed the basis of ICL’s One Per Desk, a hybrid computer and communications terminal which BT sold as the Merlin Tonto. The QL didn’t seem to catch on, but it’s hard for me to tell, because it coincided with the point when I decided to go cold turkey on IT, not liking the feeling of being addicted to computers and wanting to become a more balanced person.
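The “is it really 32-bit?” quibble comes down to the bus: however wide the registers, each operand has to cross the data bus in bus-width chunks. A back-of-the-envelope sketch in Python (the function is my own, and it counts transfers only, ignoring real-world timing details):

```python
def bus_transfers(operand_bytes, bus_bytes):
    # Number of memory transfers needed to move one operand
    # across the data bus (ceiling division).
    return -(-operand_bytes // bus_bytes)

# Moving one 32-bit (4-byte) operand:
assert bus_transfers(4, 1) == 4  # 68008: 8-bit bus, four transfers
assert bus_transfers(4, 2) == 2  # 68000: 16-bit bus, two transfers
```

So the QL’s CPU thought in 32 bits but talked to memory one byte at a time, needing four transfers per long word where a full 68000 needed only two.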

A much more public failure was the notorious C5 electric vehicle. Sinclair had big plans for this, which would have culminated in the C15, essentially what we would call a Smart car today and looking very similar. None of this happened, of course, as the C5 itself was not a success. The only time I’ve seen a C5 in use was at one of the halls of residence at my university, where it was being pedalled around by one of the students. It’s shown at the top of this post. The C5 is an electric tricycle with a polypropylene body designed partly by Lotus. One of its problems is that it sits too low to be visible from larger vehicles. The battery range was short, the maximum speed was only 24 kph, and it suffered from the usual Sinclair problem of not being delivered on time. Sinclair’s vehicles division went into receivership after less than a year. Even today, though, it has an enthusiastic hobbyist community, which has managed to soup it up to travel at over 200 kph, although I can’t say I fancy the idea of riding in or driving one at that speed. Research into developing the C5 had been going on for five years before it was released, yet there was no effort to develop a more advanced battery than the lead-acid ones used in milk floats, the rationale being that better batteries would eventually come along from third-party manufacturers. That did eventually happen, of course, but not until decades after the C5 had bitten the dust. Reviews from the motoring press were decidedly negative, and it’s considered to be one of the worst marketing failures since World War II.

By Binarysequence – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=29980489

Going back a couple of years, Sinclair made one more attempt at a pocket TV, this time with a flat screen. Oddly, as I’ve said, Sinclair had a fixation on the idea that CRTs would always be superior to LCDs. The idea behind the TV80, illustrated above, is to bend the electron beam round a corner so that the electron gun can be placed beside the screen rather than behind it. In his ongoing dream of producing a laptop computer, Sir Clive planned to incorporate such displays in portable versions of his computers, and was dead set against LCDs; this may have been due to their inadequacy at the time. The TV80’s screen was also magnified by means of a Fresnel lens – those magnifying things you used to see on the backs of buses and in lighthouses, flat, thin lenses which can magnify like ordinary convex lenses. However, it was noted even at the time that LCDs would soon overtake this technology.

Taken from OLD-COMPUTERS.COM . Will be removed on request.

In 1986, Amstrad took over Sinclair. Amstrad continued to use the Sinclair trademark for some of its products, but from that point onward Sinclair himself had no part in developing the products bearing his badge. This leads to the rather anomalous phenomenon of a Sinclair PC, the PC-200, which in 1988 was still using CGA and MDA graphics. It had just two ISA slots for expansion, but the case wasn’t tall enough to accommodate standard cards. It was, however, not really a Sinclair product anyway.


In 1987, no longer able to release products under his own name, Sinclair finally achieved his dream of a portable, battery-powered computer with its own display, albeit a monochrome LCD. This was the Z88, an interesting Z80-based device which included BBC BASIC and adapted versions of Acorn’s own productivity apps. When the Z88 first came out I found it very confusing, because it certainly seemed and looked like a proper Sinclair machine but wasn’t called that; it feels like Sir Clive either couldn’t legally attach his name to it or didn’t think it was good publicity to do so. It’s black, A4-sized and actually seems to have a nice keyboard for once. Compared to what he actually wanted to do, which was to have a large, possibly colour, flat-screen display at an angle to the main unit, this is not it, and in fact from this point on most of his products feel like him making the best of a bad job. I don’t feel that his stuff was actually shoddy as such, but that he was setting his sights lower henceforth. It must have felt like a bit of a comedown to have to use an LCD on this device.

I want to mention three more products which I think illustrate this sense of compromise. The first is an electric motor for a pedal bike called the ZETA – Zero Emission Transport Accessory. This appeared in 1994, was upgraded to the ZETA II in 1997 and then the ZETA III, and was finally retired in, I think, 2002. It’s an electric motor driving a wheel which is fixed to a bicycle frame to boost its speed to about 15 mph (24 kph). Incidentally, this maximum speed is the same as the C5’s, and that is no coincidence, because above it these devices would be officially classed as motor vehicles, with the attendant legal consequences. In fact both the C5 and the ZETA could easily have been designed to go faster, and hobbyist communities circumvent their limiters, but doing so changes their legal status. It rather feels like Sir Clive was limiting himself in more ways than one with these. He also produced the Zike (he seems to have liked the letter Z), an electric bike, once again limited to 15 mph for the same reason and weighing only eleven kilogrammes. This unfortunately failed, probably because it was associated in the public mind with the C5 even though it was a complete rethink; if it had been produced by a different company it would probably have done fine.

The absolute final bit of kit associated with the guy was the SeaScooter, which still exists and can still be bought! It came out in 2001 and is a motorised underwater vehicle which scuba divers can hang onto to be towed through the sea. It travels at 3 kph, operates down to twenty metres, and can be recharged overnight. It’s a bit of a departure for Sinclair, although once again there’s a sense of him adapting something unsuccessful to a new environment where he hoped it would achieve greater success.

Who, then, was Sir Clive Sinclair? Someone who was very much part of British private-sector industry from the 1960s into the twenty-first century, and whose ideas were ahead of both his time and his company’s capacity to manufacture them. Many of his products did have an air of cheapness about them, but they were also very impressive and high-concept, and he seems to have had a tenacity and resourcefulness you don’t see very often. It seems unlikely that we will see his like again.