My Jupiter Ace

Hoarding tends to be frowned upon. Of course, to the hoarder, it seems entirely sensible and “normal” to engage in the practice others describe in this way. Aristotle had something to contribute to this. He was the apparent inventor of the concept of the “happy medium” (which I think turns up in ‘A Wrinkle In Time’, though I may be misremembering). That is, each virtue is the ideal position between a pair of vices. Courage, for example, sits between cowardice and recklessness. However, the happy medium is never exactly halfway between its corresponding vices: courage is more like recklessness than cowardice, for example. Likewise, tidiness is going to be closer to one extreme than the other. Most people seem to see it as closer to obsessive over-neatness, where you can’t do anything for fear of causing a mess, than to slovenliness. To my mind, the happy medium is closer to messiness. The people who write psychiatry textbooks and manuals are likely to normalise their own methodical tendencies, which could manifest as excessive neatness, and therefore to regard untidiness as problematic.

Now don’t get me wrong. It is problematic, and it’s also much easier to become untidy than it is to become tidy. Nonetheless, a couple of observations will be made at this point by that nebulous generic subject which makes them appear objective by using an impersonal construction. One of them is that I collected old copies of the ‘Radio Times’, not to be confused with the ancient Greek philosopher Θεραδιοτιμης, for six years until my dad got annoyed with the clutter and had me throw them out. I doubt it was exactly six years, but at four dozen editions annually over half a dozen years that’s a couple of gross, and since each one sells for £7.50 on eBay, that’s over two thousand quid’s worth of magazines. I also still have a fair number of ‘2000 AD’ comics from 1977, which are worth a fair bit. I do not believe it was the right decision to throw these things out.

This brings me to the subject of this blog post: the Jupiter Ace, which I’m always tempted to call the “Joobrrace” because it’s one of those terms you can use to practise rolling your R’s. I should point out first that the name “Jupiter Ace” has actually been used for two completely separate things. There’s the computer illustrated at the top of this post and there’s a band which had a minor hit in 2005 called ‘A Thousand Years’. Although this is slightly confusing, I’ve long thought that the sleeve design for this single would work as the cover illustration for a computer manual:

Given the appearance of the ZX81 manual, can you not just see how this would work really well?

Leaving the band aside though, once upon a time, there were a lot of home computers, all unique. Each one had a personality of its own and was usually incompatible with all the others. They did, however, tend to have standard interfaces. I first paid close attention to microcomputers in 1981, and up until that point I’d made various assumptions about them which turned out to be untrue and, to me, rather startling. I had assumed that they would all use Pascal or something similar, so I was very surprised to find that they nearly all used BASIC. As far as I was concerned, BASIC was just a language for people starting out in programming and wouldn’t be used on “proper” computers. This was in fact so on mainframes and minicomputers around that time. The languages I was familiar with, such as Algol-60, COBOL and FORTRAN, were a lot more popular on those, so I just assumed that those would be used on microcomputers, in ROM, so that they would boot into something like a development environment which would let you enter lines of FORTRAN, say, and compile and run the program. As I said, I assumed that Pascal would be the favourite because to me that language seemed to have a contemporary vibe at the time. It was being pushed fairly hard, but initially, like BASIC, it was intended as a language for teaching programming rather than for serious use. In particular, the idea behind Pascal was that it should be structured – that the code could be read and planned easily and methodically, with blocks and control structures imposed on the user. By 1981, it had started to fall from grace because this very approach to structure restricted its flexibility.
I’m not going to get all technical on you here because that’s not what I’m getting at, but in general I tended to be confounded by programming languages as they were presented because they didn’t seem to have any facilities for using things like sound and graphics, or even interacting with a CRT-style display, because they were designed for a world of punchcards and teletypes. It was all rather puzzling.

There were a few exceptions. For instance, going way back to 1975, IBM had introduced a desktop computer (not a micro, as its processor occupied an entire board) which ran APL, “A Programming Language”, based on symbols rather than words, of which I happen to be a fan due to its lack of language bias and its terseness. An APL-native micro also existed in the early 1980s, and APL was used to produce the exploding and rotating Channel 4 station ident in 1982. The more expensive business machines also had no programming language in firmware, and the user would have to purchase a programming language as an additional piece of software, so the situation wasn’t simply that BASIC was universal. There were also some home micros, such as the Microtan 65, which could only be programmed in machine code, and others which would boot into a “monitor” – a simple program with single-letter commands for manipulating and viewing memory contents, and for executing machine code programs either loaded or typed in by the user as a series of hexadecimal numbers.

The standard practice of using BASIC in firmware on home micros usually went further than just the unextended form of the language. It was usually Microsoft BASIC, often in an extended form which constituted a de facto standard. There were other versions of BASIC, used particularly in British as opposed to American home computers, such as Sinclair BASIC used in the ZX80, ZX81 and Spectrum, and BBC BASIC, which began on the BBC Micro and Electron but was later adapted for use on IBM PC clones and other machines such as the Tatung Einstein. It was also possible to buy alternative programming languages such as FORTH. And of course the mention of FORTH brings me to the main object of today’s discussion: the Jupiter Ace.

Clive Sinclair was apparently not a particularly easy person to work with. Shortly after the ZX Spectrum had been designed, a small number of employees, possibly just two, left the company to found Jupiter Cantab, apparently retaining their intellectual property in certain aspects of that phenomenally successful computer, and proceeded to design, manufacture and market a radically new machine, the Jupiter Ace, in autumn 1982. The hardware was not particularly special. It comes across as a cross between a ZX81 and a Spectrum, though without colour or true high-resolution graphics. However, the really unusual thing about the Ace was that instead of having BASIC in ROM, it had FORTH. This is a highly idiosyncratic language with two distinctive features. Firstly, it uses Reverse Polish Notation: instead of “2+2” it uses “2 2 +”. Most computers have a structure in memory called the stack, a series of consecutively stored numbers originally used to hold addresses in memory to which a program will return. In FORTH’s case, a number typed in will be placed on the stack, and a “word”, such as “+”, will expect a certain number of values on that stack and operate accordingly, often depositing its own result on the stack for future use. Secondly, instead of writing programs, the user defines new words, each consisting of other words, so, for example, squaring a number could be defined thus:

: SQUARED
DUP * 
;


“DUP” duplicates the number on top of the stack, “:” opens a definition of a new word, in this case “SQUARED”, and “;” closes it. Thenceforth, typing something like “9 SQUARED” would put 81 on top of the stack and so on.
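For the curious, the mechanism is easy to sketch in a modern language. The following Python snippet is purely illustrative – a toy interpreter with the same stack discipline, not a model of Ace FORTH itself:

```python
# A toy FORTH-style interpreter: numbers go on the stack, words pop their
# operands and push their results. Purely illustrative -- not Ace FORTH.
stack = []

# Built-in words; each manipulates the shared stack.
words = {
    "+":   lambda: stack.append(stack.pop() + stack.pop()),
    "*":   lambda: stack.append(stack.pop() * stack.pop()),
    "DUP": lambda: stack.append(stack[-1]),
}

def define(name, body):
    """The equivalent of ': name body ;' -- a new word made of old ones."""
    words[name] = lambda: interpret(body)

def interpret(line):
    for token in line.split():
        if token in words:
            words[token]()            # execute the word
        else:
            stack.append(int(token))  # anything else is a number

define("SQUARED", "DUP *")  # : SQUARED DUP * ;
interpret("9 SQUARED")
print(stack)  # [81]
```

Typing the equivalent of “9 SQUARED” leaves 81 on the stack, just as it would on the Ace.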

Advantages of FORTH include structure and speed. The FORTH standards of the time didn’t include floating point numbers, but the Ace had a few proprietary extensions which allowed them. They could’ve been defined by the user, but since the stack has to contain ordinary floating point values, it makes more sense to extend the user interface to recognise any series of digits with a decimal point as a floating point number. Unlike the BASIC available on most home micros at the time, Ace FORTH didn’t support text strings in an easily-used way, but it did have arrays and a text buffer, and again, it could be modified to allow them.

The Jupiter Ace did very badly. Although it was an interesting device, it was let down by the absence of colour and by poor sound. The keyboard was similar to the Spectrum’s, which was fairly normal for the time, but because the Ace couldn’t use the Sinclair system of producing entire keywords with a single keystroke, the keyboard was in much heavier use, which made its cumbersome nature much more obvious. It comes across very much as the kind of computer which might’ve been produced in the late ’70s, such as the TRS-80 Model I from 1978, though those came in much better cases, with better interfaces and superior keyboards. Consequently, Jupiter Cantab went bust and sold off their remaining stock to Boldfield Computing Limited, which in turn reduced the price from £89.95 to £30. This happened in 1984.

Another thing which happened in 1984 was that Safeway opened a branch in Canterbury for the first time, leading to my first paid job, as a cashier at the age of seventeen. I was paid £1.41 an hour, which was a huge amount for me at the time. This was before minimum wage, but prior to that I’d only had a pound a week. I lost the job after only twelve weeks due to my unassertiveness. For instance, I was on the “9 Items Or Less” (sic) till but couldn’t bring myself to turn customers away if they brought whole trolleys of stuff, and I didn’t want to ask for extra change so I ended up paying people in pennies. However, in that time I succeeded in amassing enough cash to buy a Jupiter Ace, so around October time I received one, and at the same time I bought a 16K RAM pack to upgrade the memory to 19K. I can’t remember how much that cost, but the initial outlay would’ve been about twenty-one hours work.

Unlike most people who bought an Ace, although I found the FORTH interesting, I actually got it as an upgrade. My previous computer, a 16K ZX81 which my father bought for the whole family, was the absolute cheapest computer available at the time. It was ingeniously designed to be as cheap as possible, and that design rendered it rather atypical as a computer. For instance, to this day computers use the ASCII character set, although nowadays this is a subset of the much larger Unicode, which includes most of what you might ever want to type, although I find it inadequate due to things like its lack of Bliss symbols, which I use extensively in writing. The ZX81, though, only used sixty-four symbols, including twenty-two graphics characters used to draw Teletext-style pictures, and it lacked lowercase letters and a lot of the usual graphemes such as “@” and “\”. It also defaulted to black text on a white background and had an unchangeable white border, and in its 1K version it barely had enough memory to display a full screen of text, so it would skip the memory for lines less than thirty-two characters long. The screen also didn’t scroll unless you included an instruction to do so in the program, when it would scroll a single line, and the cursor for input stayed at the bottom of the screen. There was also no sound. However, because Sinclair had a fair bit of financial oomph behind it, the company was able to design a large custom chip which did everything the computer needed apart from processing programs and storing information, and to this day I find this very impressive, because the total chip count is only five:

This is the kind of achievement which is impressive because of the limitations the available technology imposed upon the designers. It’s similar in a way to the helical scan mechanism in a VCR: only that flash of inspiration makes the thing possible at all.

By contrast, the Ace had a full ASCII character set with redesignable characters, single-channel pitch-duration sound, a proper scrolling screen and a white on black display like a “proper” computer. It also had slightly more memory. However, Jupiter Cantab were a tiny and impoverished company, so small in fact that their turnover, not adjusted for inflation, actually overlapped with my own turnover as a herbalist in the ‘noughties, though over that period sterling had halved in value. It’s remarkable to contemplate that the size of the company was less than one order of magnitude greater than our partnership. One practical consequence of this was that they were unable to have the kind of custom chip designed and produced for them which gave Sinclair the advantage with the ZX81 a year earlier and had to resort to discrete logic. I’ll come back to that in a minute, but I want to make the observation that this is a good example of how poverty is expensive. Instead of employing one chip, Jupiter Cantab had to use many:

Those smaller components on the right-hand side of the board are mainly doing similar jobs to the large chip on the left of the ZX81’s, but there are many more of them. They also need to be soldered onto the printed circuit board, and they make the design of the board more complex. This makes the whole computer more expensive to make, and unlike Sinclair, Jupiter Cantab could only buy components in smaller quantities, making them more expensive per unit. On the other hand, unlike the ZX81 and Spectrum, the Jupiter Ace is not really a “pile ’em high and sell ’em cheap” product, because they didn’t have the option to make them cheaply. There are, even so, clear signs of cost-cutting. The sound is produced using a buzzer rather than a speaker, seemingly identical to the Spectrum’s. This is an odd design decision found in a number of British micros: rather than routing the audio via the TV speaker, a separate loudspeaker, or unfortunately a buzzer, was put on the motherboard. I don’t know much about the design, but that seems to me to add to the cost of the hardware while interfering with the quality of the sound.

The chips involved were bought off the shelf and are available to the general public even today. In order to replace a ZX81 ULA, the large chip on the left which does “everything” (it actually does less than the discrete logic on the Ace board because much of the work to put the older computer’s display on a TV is done via system software) has to be replaced by another large chip that does “everything”. With an Ace, there is a “right to repair” as it were, because all that need be done is for the malfunctioning chip to be located and replaced by another, very cheap, integrated circuit. In fact it’s still possible to build an Ace today from scratch with pretty basic equipment. It’s possible also to build a ZX80 in the same way, and since a ZX81 is, functionally speaking, just a ZX80 with different firmware, that can be done too, but not with only five chips and a simple motherboard.

The personal significance of the Ace to me, as a milestone in my life, is that it was the first durable and substantial product I bought with my own money. For many people, this landmark would be followed rather rapidly by increasingly impressive and expensive things, ramping up over less than a decade to the likes of a car and a house. This never happened for me, for reasons I can’t explain, and in fact if I knew why my life, considered in such terms, failed so badly, the chances are it wouldn’t have done. It’s probably connected to neurodiversity and mental health issues, but in any case it means this very cheap product bought nearly forty years ago has more sentimental significance to me than most others. I have now succeeded in buying a second-hand car, although I can’t drive, so it’s for Sarada; most people manage that kind of thing by the time they’re in their early twenties, and they’d be able to drive it themselves. Hence the Ace, as a kind of failed product, reflects my own sense of failure in life.

There’s another, rather similar, aspect to this. I always tend to back the loser. Probably the most obvious example of this is that I’m a Prefab Sprout fan. This band is known mainly for a novelty song, ‘The King Of Rock And Roll’, which is about a band known mainly for a novelty song. It’s unintentionally meta. There are other aspects of their career which are like this. For instance, the lead singer and songwriter Paddy McAloon once penned and sang the lines “Lord just blind me, don’t let her innocent eyes remind me”, and proceeded to go blind suddenly as he drove along a motorway. Fortunately he survived. Anyway, there would have been a point, back in 1982, when Prefab Sprout released ‘Lions In My Own Garden’ and some other bands, maybe Lloyd Cole And The Commotions or Frankie Goes To Hollywood, had their own debut singles released, and somehow I got into the first and only to a limited extent the other two. Granted, most of this is down to the fact that most undertakings are unsuccessful, but for some reason my interest in something seems to be the kiss of death. Prefab Sprout and the Jupiter Ace computer were both critically acclaimed and enthused about with good reason, yet both were unsuccessful. I could name all sorts of other things which have a similar trajectory and about which I was quite keen at the time. What does this mean?

All that said, there is a sense in which the fortunes of the Jupiter Ace have now changed. Like the Radio Times, they are now a lot more valuable than they were when they first came out. They can go for more than a thousand quid each now. The trouble is, mine doesn’t currently work. I suspect it’s fried, but it may not be. This is where something unexpected may come to my rescue.

I am, as you probably know, a philosophy graduate. Most people say that it’s an excellent qualification for flipping burgers, but in fact it isn’t, because, like many other people, I examined arguments for veganism while I was studying and became vegan as a result, so the burgers in question should probably be veggie. However, it is in fact useful in various ways, one of which is that you get to understand symbolic logic and Boolean algebra. There are various reasons for this, such as helping one understand the foundations of mathematics and distinguishing between valid and invalid arguments, but in any case logic is central to philosophy. While I was studying the subject, another student found that applying a particular technique to the design of digital circuits helped him simplify them and use fewer components. In general, there happens to be an enormous overlap between philosophy and computing. After the department was closed down, its logic and scientific method subsection merged with computing, and as far as I know it survives to this day.
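To give a flavour of the digital circuit point: Boolean algebra lets you prove two circuits equivalent before soldering anything, so a redundant gate can simply be dropped. A tiny illustrative check in Python (the expressions here are my own example, not my fellow student’s actual circuits):

```python
from itertools import product

# Boolean algebra lets you prove two circuits equivalent before building
# them. Here the two-gate expression (A AND B) OR (A AND NOT B) is shown,
# by exhaustive truth table, to equal plain A -- so the gates can go.
def equivalent(f, g, n):
    """True if f and g agree on every combination of n boolean inputs."""
    return all(f(*v) == g(*v) for v in product([False, True], repeat=n))

original   = lambda a, b: (a and b) or (a and not b)
simplified = lambda a, b: a

print(equivalent(original, simplified, 2))  # True
```

The same exhaustive-truth-table idea scales (slowly) to any small combinational circuit.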

One practical consequence of this is that I have no problem understanding how computers work, at least simple ones such as this, and one possible consequence of that is that I might even be able to repair it and sell it. I should add, however, that mere knowledge of how the logic circuits, for want of a better word, work still leaves a massive chunk of ignorance about electronics in general. I do know why the machine is broken. It’s because the polarity of the power supply was reversed, meaning that current flowed in the wrong direction through the circuit, thereby damaging at least some of the components beyond repair. What I’m hoping, and I’m not terribly optimistic about this, is that the voltage regulator was destroyed but protected everything else. However, the cost of the components is such that it would still be cost-effective to replace everything on the board, thereby ending up with a working Ace, since they sell for such a high price. This is, however, a philosophical issue, because it amounts to the Ship of Theseus paradox. If everything which makes up the Ace is replaced by something else with the same function, is it still an original Ace? And what does that mean for its value?

There’s something out there called a Minstrel:

This is an updated Ace. It costs £200 but has 49K memory rather than 19K and seems to be able to use USB storage. I don’t know much about it, but I am aware that it works with newer televisions. One of the differences between the two boards, other than the larger memory chips, is the absence of the silver and red Astec modulator, whose function is to interface with a conventional CRT television. Unlike many other cheap computers of the time, the Jupiter Ace had the rudiments of a monitor interface available without modification, although the signal needed to be amplified, and nowadays a modulator just gets in the way because it means you have to have an old-style TV as well.

Although it’s tempting to attempt to upgrade this computer, I am under no illusions regarding my abilities, and I’d be doing well even to end up with a working model at the end. It would be interesting to know how much a non-working Ace would go for, but clearly a working one would be worth more.

This is the plan:

  • Ensure a good connection between the Ace and a CRT TV via a cable.
  • Use a ZX81 power supply to turn it on.
  • If it doesn’t work, replace the voltage regulator.
  • If it still doesn’t work, replace every component until it does.
  • Sell it.

Right, so that’s it for today. I was going to talk about nostalgia a bit but I’ve probably bored you senseless.

Wordle

You must surely know Wordle, but just in case you don’t, it’s a daily game where you have to guess a five-letter word in six goes. I won’t post an example in case it’s one you haven’t done. If you get the right letter in the wrong place, you get a yellow tile. The right letter in the right place gets you green. A completely wrong letter yields a black tile.

This is very similar to the old computer game Moo and the board game Mastermind. I’d even go so far as to say it basically is Moo, but with words rather than numbers or patterns of colours. Mastermind works like this. One player sets up a row of four pegs chosen from several colours (gaps are allowed), hidden from the other player. The other then has to guess by putting down their own sets of pegs along something like twenty rows of holes, depending on the size of the board, and the first player places black or white pegs in a 2×2 grid at the side of the guess row, indicating right pegs in the right place (black) and right colours in the wrong place (white). The colours of the guesses seem to have changed, as indicated by this illustration:

Photo taken by User:ZeroOne

Completely wrong guesses are indicated by blanks. Mastermind came out in 1970, and in my childish mind there was a clear association with the quiz programme; I actually thought it was a tie-in at the time, but the two have little in common and the shared name seems to have been coincidental. I have a tendency to be silly with games, even more so as a child, and I remember one of my “patterns” being completely empty. I won with that one.

Moo is kind of the same game. Invicta, the company which made the “board” game, actually branded a calculator-like device in 1977 on which one could play that game:

By MaltaGC – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=114645559

Moo was written for the TITAN computer system in the 1960s. I first came across it in a BASIC programming language primer in 1981 CE. This version has the computer generate a four-digit positive integer which the user then has to guess. Right guesses in the right place are “bulls” and right ones in the wrong place are “cows”. It’s been proven that any four-digit sequence needs at most seven goes to get it right. Before being computerised, Bulls & Cows was a paper and pencil game.
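The scoring rule is simple enough to state in a few lines. Here is an illustrative Python version (my own sketch, handling repeated digits by pairing them off so they aren’t counted twice):

```python
from collections import Counter

# Bulls & Cows scoring as described above: a "bull" is a right digit in
# the right place, a "cow" a right digit in the wrong place. Counter
# intersection pairs off duplicated digits so nothing is double-counted.
def score(secret, guess):
    bulls = sum(s == g for s, g in zip(secret, guess))
    common = Counter(secret) & Counter(guess)   # digits shared, with multiplicity
    cows = sum(common.values()) - bulls
    return bulls, cows

print(score("1234", "1243"))  # (2, 2): two bulls, two cows
```

Exactly the same function scores Mastermind, with peg colours in place of digits.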

Wordle is a little different, but not very. There is a four-letter version which was apparently also a pencil and paper game and that variant is out there online too. I imagine the issue with that is that a lot of profanities would be used.

Now, I could make myself out to be an expert on Wordle, but I’m really not. I’m sure there are strategies, but so far I haven’t worked much of one out. Because you only get one go a day, it’s hard to practise. Certain combinations of letters are more likely in English generally. For instance, if there’s a Q there will very probably be a U after it, meaning that Q is only likely to be found in the first three positions. A while back I actually studied form for the first hundred or so Wordle answers, tallying which letters were most likely to occur in which positions. This is not completely random, because five-letter English words will inevitably have certain features. For instance, no word in English, so far as I know, begins with “MK” or “TK”, but many begin with “SL” and “TH”. Based on this selection, it came to appear that the most likely word was “SRORE”, which is plainly not a word. A more likely candidate is “SHIRE”. However, there is actually a definitive list of all Wordle answers, and that word was used on 22nd January 2022. No, I haven’t looked closely at that list, although I do know what the last word is.
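That tallying exercise is trivial to mechanise. Here’s a sketch in Python of the method I used by hand, with a made-up six-word sample standing in for the real answer list:

```python
from collections import Counter

# Tally which letter most often occupies each of the five positions, then
# read off the most popular letter per slot. With a real answer list the
# result needn't be a word at all ("SRORE"); this sample is illustrative.
sample = ["SHIRE", "STORE", "SCARE", "SPORE", "SHORE", "CRANE"]

def favourite_word(words):
    return "".join(
        Counter(w[i] for w in words).most_common(1)[0][0]
        for i in range(5)
    )

print(favourite_word(sample))  # SHORE
```

With this particular sample the per-slot favourites happen to spell a real word; the point is that the method offers no guarantee of that.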

Wordle has a finite lifespan. It will end on 20th October 2027, and there are a total of 2,314 answer words. However, a much larger number of words is allowed in the guesses leading up to the answer. I have the number 12,000 in my head, but I may be wrong. This is the biggest difference between Wordle on the one hand and Moo and Mastermind on the other. The other two permit any combination, meaning that nothing needs to be stored. To use modern jargon, the patterns in the other two are procedurally generated: rather than an algorithm determining which symbols occur where, the choice is completely random, or in computerised versions pseudo-random. Not so with Wordle, which needs a series of stored words. Or does it? Is there a way to generate meaningful English five-letter words? It’s already clear that very few if any of them end in Q, but my intuition tells me that at best there would be a substantial number of nonsense words if you tried to do this, which rules out that approach.

It might be thought that the game’s reliance on a set number of words would make it a creature of the age of cheap information storage, but this is only partly true. The ASCII version of an uncompressed list of 2,314 five-letter words is only 11K plus a couple of hundred bytes, and even that’s a lazy way of storing them, because only two dozen and two characters are in use, which needs only five bits per character, reducing it to just over 7K. This is with no real compression algorithm, but of course there can easily be one, because of common sequences of vowels and consonants, or both, and the non-occurrence of letters in particular places. For instance, it’s rare for a Y to occur immediately before another letter in the middle of a word and rare for an I to occur at the end. However, because of the list of permissible guesses this is not the whole story, and if that list runs to 12,000 words the storage needed will be several times larger, at around 37K.
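Those figures are easy to verify, and the five-bit packing is straightforward to demonstrate. A quick Python check (my own sketch; the letter coding A=0 to Z=25 is an arbitrary choice):

```python
# Back-of-envelope check of the storage figures above, plus a demonstration
# of packing a five-letter word into five bits per letter (A=0 .. Z=25).
plain = 2314 * 5                   # one byte per letter
packed = (2314 * 5 * 5 + 7) // 8   # five bits per letter, rounded up to bytes
print(plain, packed)               # 11570 (~11K), 7232 (just over 7K)

def pack(word):
    """Squeeze a five-letter uppercase word into a single 25-bit integer."""
    n = 0
    for ch in word:
        n = (n << 5) | (ord(ch) - ord("A"))
    return n

def unpack(n):
    letters = []
    for _ in range(5):
        letters.append(chr((n & 31) + ord("A")))
        n >>= 5
    return "".join(reversed(letters))

print(unpack(pack("SHIRE")))       # SHIRE -- round-trips losslessly
```

Since 25 bits fit in four bytes, each word can be held in a four-byte slot, which matters for the ZX81 scheme below.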

I will now appear to digress.

In 1987, the big hit computer game was of course Tetris, also known, a little irritatingly, as TETЯIS. At the time, the kind of home computer you might find which was somewhat up-market but nonetheless just about affordable for some people was the Amiga 500, with a 68000 CPU, 512K of RAM and a screen resolution comparable to that of a PAL TV. Nonetheless, there is now a version of Tetris for the ZX81, and it would’ve been feasible to write one for the very first mass-market microcomputers, particularly the Apple II in low-resolution graphics mode. This brings to mind the oddity that whereas some inventions depend heavily on a series of predecessors leading up to shortly before their own appearance, others are just waiting for someone to think of them. The form factors of PC cases are, er, a case in point. It would’ve been entirely feasible for a Georgian cabinet maker to have churned out wooden and metal PC cases, although it would’ve been a while before anything suitable could’ve been put in them, so there would’ve been no market.

End of apparent digression.

Wordle is an example of this low-dependency type of game. In 1755, a couple of people could’ve taken Samuel Johnson’s dictionary and used it to play the game. Even in computer form it could’ve existed quite a long time ago. The limiting factor is the storage space needed for the list of possible intermediate words. There are a total of 12,972 words in its dictionary, whereof only a small fraction are permissible as answers. It’s possible, using modern compression algorithms, to get this down to 43K, but that doesn’t mean an old computer would have had the memory to run such algorithms, which might also be very slow. However, even without working very hard it’s feasible to get this down to 40K, meaning that the entire dictionary could be held in the RAM of an eight-bit computer. That computer would of course have to do other things than just hold words in its memory. Dispensing with that requirement, it would also be possible to take the same approach as early spell-checking algorithms and have the machine check words for feasibility. For instance, it could disallow any instance of five identical letters or unpronounceable consonant clusters, or, as Commodore 64 Scrabble used to do, simply trust the user not to use nonsense words. With the latter approach, only 12K of RAM would be required for the list of right answers.

Here’s a possible 16K ZX81 implementation of Wordle. There are 2314 words in the dictionary, compressed to a packed five-bit per letter form, taking up less than 12K. There is no error-checking for forbidden words. A correct letter in the correct place is shown as white on black, a correct letter in the wrong place as black on white flashing with white on black, which would have to be done through software, and a completely incorrect letter is black on white. The program asks for the date when you start, converts it to a position value for a compressed series of strings (maximum string length on the ZX81 was 4096, so three strings would be needed) and loads that four-byte value into a short string literal, clearing the last seven bits which would belong to the next word. Every word the user inputs is appropriately compressed and compared five bits at a time with the string. This is then displayed on screen with only the flashing letters stored by position in the display file. If all letters are correct, the appropriate response is generated from an array depending on how many turns the user has had.
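The marking logic at the heart of that design is the only mildly fiddly part, because repeated letters mustn’t be over-counted. Here it is sketched in Python rather than ZX81 BASIC, for clarity (2 for right-letter-right-place, 1 for right-letter-wrong-place, 0 for absent):

```python
from collections import Counter

# Wordle-style marking: greens are claimed first, then yellows are handed
# out from the pool of unmatched answer letters, so a letter the user
# repeats isn't flagged more times than it actually occurs in the answer.
def mark(answer, guess):
    result = [0] * 5
    remaining = Counter(a for a, g in zip(answer, guess) if a != g)
    for i, (a, g) in enumerate(zip(answer, guess)):
        if a == g:
            result[i] = 2               # green: right letter, right place
        elif remaining[g] > 0:
            result[i] = 1               # yellow: right letter, wrong place
            remaining[g] -= 1
    return result

print(mark("SHIRE", "RINSE"))  # [1, 1, 0, 1, 2]
```

On the Ace or ZX81 the same comparison would run five bits at a time against the packed word, but the pairing-off logic is identical.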

I think it’s clear that if this is feasible on a 16K ZX81 it would also be feasible on practically any computer (except maybe the MC-10) manufactured since 1982, and in most cases colour could be used. This is not a difficult game to implement, even in BASIC, although it seems to lend itself more to BCPL, C, FORTH or Assembler. It’s just eminently doable, and it even existed in some form on computers back to about 1968.

As to strategy, I have little idea. There’s little opportunity to practise with only one word a day, and in that way it’s a bit of a leveller. I have developed very rudimentary tricks. For instance, I tend to move a letter one place in either direction if it’s the right one in the wrong place, I start with a small list of possible opening words (CRANE, ADIEU, POINT or SOARE) and I avoid impossible consonant clusters, which I have to do anyway because they aren’t in the dictionary. I can’t actually implement this as an algorithm myself because it would involve looking at the word list and therefore cheating, but obviously it could be done quite straightforwardly.
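In effect, any such strategy boils down to filtering the remaining candidates against the feedback so far. A toy Python illustration, with a made-up five-word list standing in for the real dictionary – suppose SOARE has been guessed and S, R and E came up green while O and A were absent:

```python
# A toy candidate filter: keep only words consistent with hypothetical
# feedback from guessing SOARE -- S, R and E green (right place), O and A
# absent. The word list is a tiny stand-in for the real dictionary.
words = ["CRANE", "SHIRE", "SHORE", "STORE", "SLATE"]

def fits(word):
    return (word[0] == "S" and word[3] == "R" and word[4] == "E"
            and "O" not in word and "A" not in word)

print([w for w in words if fits(w)])  # ['SHIRE']
```

A real solver would generate these constraints from the tile colours automatically, but the principle is just this kind of sieve.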

Bye for now.

Sinclair

By Prioryman – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=35368168

Clive Sinclair, the home electronics pioneer and entrepreneur, has just died at the age of eighty-one. Although I am not officially a fan of entrepreneurs, being rather left wing, I nevertheless have a soft spot in my heart for his products. The drama-doc ‘Micro Men’ covered his story and rivalry with Acorn in the late ’70s and early ’80s quite nicely, but there’s more to Sinclair and his two major companies than the events and products of that period. Incidentally, the two of us share a birthday.

He was born in 1940, in Richmond, and founded Sinclair Radionics five days before his twenty-first birthday, having raised funds by writing magazine electronics articles. This first company was bought out by the National Enterprise Board in the late ’70s and he was paid off, whereupon he started the company which actually made the famous computers and the C5, Sinclair Research Ltd. This was later taken over by Amstrad, but he continued with another new company, releasing the Z88 in 1987 and a number of other products. I would say his products were characterised in three ways: they were cheaper than their rivals, they tended to be announced well before they were released, and they often had teething problems due to their relatively short development phases.

As far as I can remember, Sinclair’s first product was an amplifier in 1962, followed by a pocket radio the year after. This second product was self-assembly, as were several of his products up until the ZX Spectrum two decades later. This presumably made them cheaper, but this wasn’t unusual at the time. By 1966, he’d designed a pocket television, 405-line if I remember correctly, whose design was unfortunately too complex to get it beyond the prototype stage, but I’d say this is still an achievement for that time:

Copyright status unknown: will be removed on request.

This was at a time when pocket trannies were quite a novelty, although I’m aware of much older DIY projects to build crystal sets to fit in wallets, so as a concept they weren’t actually very new. This design looks very ’sixties; I can imagine it turning up on ‘Star Trek’ TOS. It was advertised but never released, establishing a familiar pattern, but the Microvision did eventually come to market in 1977, and I clearly remember it doing so. There was a display model in the window of Barrett’s toy shop in Canterbury for ages. Like its predecessor, it had a two-inch screen, and I remember it being advertised as being able to pick up TV transmissions from all over the world, which I found dubious at the time and imagine was either untrue or only true in the sense that if you took it to a country with the same TV system as ours, it would also pick up programmes there.

Looking at those controls, which I remember from the time, it looks like it could switch between PAL and NTSC (the American system) and possibly also 405-line transmissions, so the claim seems misleading but technically true. One of my friends hooked this up to another of Sinclair’s products, the ZX81, and was able to display what amounted to pretty high-resolution graphics on it, in the sense of pixels per inch, but I’m getting ahead of myself.

In 1972, the company introduced the first slimline pocket calculator, the Executive, at £79.95 plus VAT. This tendency to quote prices without VAT was really irritating and seems to have died out since; Amstrad also did it. The use of LEDs on calculators and digital watches at the time made them quite power-hungry compared to the LCD displays which came in later. Sinclair was for some reason very hostile to the idea of LCDs for a very long time, not including them in any of his products until the late ’80s. You can see in this device the beginning of a tendency towards rather uncomfortable and impractical keyboards, which continued with his computers in the next decade.

By Windshear – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=58073110

Nineteen models of calculator were produced by Sinclair over the years. The one I remember best is the above, the Sinclair Cambridge Programmable, which was advertised in the ‘New Scientist’ in 1976. It allowed a maximum of three dozen steps in its programs and included a conditional branch instruction. A later model extended this to eighty steps, but both were only accurate to four significant figures. One of the oddities of ’70s programmable calculators is that they didn’t lead smoothly into microcomputers. You might think that the design of these devices would become steadily more advanced until they actually became like home micros, but instead, across the board, microcomputer design started again from scratch. The one exception is Hewlett-Packard’s HP85, but this didn’t lead to anything else in the long run. In the case of Sinclair this may have been because he lost the intellectual property rights to his calculators when the NEB took over his company, but I’m just guessing.

By The original uploader was Prof.Dr. Jens Kirchhoff at German Wikipedia.(Original text: de:Benutzer:Prof.Dr. Jens Kirchhoff) – Self-photographed, Attribution, https://commons.wikimedia.org/w/index.php?curid=2045759

Another Sinclair innovation was the Black Watch, a self-assembly LED digital watch. A couple of interesting things about this design are that the LED display has the same blue colour when unlit (red when illuminated of course, as red LEDs preceded the other colours) as the calculator above, and the black casing, shared with Sinclair’s first calculator and to be repeated on his computers later. The Black Watch was not successful: it didn’t keep good time, because the quartz crystal ran at different speeds at different temperatures; the batteries lasted ten days as opposed to the advertised year, and were then difficult to replace; and the circuitry was vulnerable to damage by static electricity. A lot of the features of later products were therefore already discernible, if you allow the word “feature” to include the failings as well as the ideas expressed.

A lot of kits were left over after the watch was withdrawn and in order to use these up they were remarketed as a clock for car dashboards. I actually admire the ingenuity of doing this and the economy of using components which were just lying around appeals to me. In practical terms though, I wonder whether the technical problems were resolved or if they didn’t affect a dashboard clock as much as they would a wristwatch.

If you were to ask most people to name Sinclair computers, the first to come to mind would probably be the ZX Spectrum, followed by the ZX81, probably with the QL and ZX80 sharing a fairly distant third place. However, the ZX80 wasn’t Sinclair’s first computer. That honour goes to a 1977 product, the Mk 14. Now I have to say that I find most mid-’70s hobbyist microcomputers rather confounding, and if you wanted an Acorn equivalent to the Mk 14 it would probably be the System 1. Other similar computers include Commodore’s KIM-1 and the MPF-I, which I imagine ceased to exist when Apple decided to sue the heck out of its manufacturer after it moved on to the rather Apple ][-like MPF-II. Anyway, this is a Mk 14:

Taken from OLD-COMPUTERS.COM – will be removed on request.

These were not user-friendly machines by any stretch of the imagination. Moreover, unlike the System 1 and KIM-1, which both used the 6502 CPU, for some reason Sinclair opted to use the “Scamp” (SC/MP), also known as the 8060. I have never understood why the 8060 is designed as weirdly as it is. It sounds like it’s supposed to be an Intel CPU like the x86 series or the 8080 and 8085, and maybe that was a marketing ploy, but it was made by National Semiconductor. Although it can access a 64K address space, it does so by changing the function of several of its pins which are already quite important in what they’re doing, and it increments the program counter before fetching instructions, meaning that address 0 cannot contain an op-code unless something branches back to it. I also seem to remember it didn’t have a stack pointer, so subroutines would be very difficult to implement. It was used as a lift (elevator) controller and some of them are probably still in use, and that’s fine, but it doesn’t lend itself very comfortably to writing general-purpose programs, which is what Sinclair was using it for in the Mk 14. It was also, probably due to its unpopularity, much more expensive than the mass-market Z80, 6502 and 6809. It seems perverse to make such a fiddly piece of hardware in the first place and then render it even less user-friendly by employing the 8060.
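On the subroutine point: as far as I know, the Scamp’s substitute for a call instruction was XPPC, which simply exchanged the program counter with one of the pointer registers. A toy Python model (mine, and greatly simplified) shows why nesting subroutines then becomes a manual chore:

```python
class LinkRegisterCPU:
    """Toy model of call/return with no stack pointer: the only mechanism is
    an instruction that exchanges the program counter with a pointer register
    (assumed here to behave like the Scamp's XPPC)."""

    def __init__(self):
        self.pc = 0     # program counter
        self.p3 = 0     # pointer register conventionally used for linkage
        self.mem = {}   # ordinary RAM; any spill area is managed by hand

    def xppc(self):
        # One instruction doubles as both call and return: the old PC
        # (i.e. the return address) ends up in the pointer register.
        self.pc, self.p3 = self.p3, self.pc

cpu = LinkRegisterCPU()
cpu.p3 = 0x200          # address of a subroutine
cpu.pc = 0x100          # currently executing the caller
cpu.xppc()              # "call": now at 0x200, return address held in p3

# Before this subroutine can call another, it must spill its own return
# address to memory by hand -- there is no stack to do it automatically.
cpu.mem['saved_link'] = cpu.p3
cpu.p3 = 0x300
cpu.xppc()              # nested "call"; returning now takes two manual steps
```

With a stack pointer, all of that bookkeeping happens in one instruction; here every level of nesting adds another hand-managed save and restore.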

The Mk 14 cost only £39.95, had half a kilobyte of ROM and 256 bytes of RAM. It would presumably have had a machine code monitor as firmware and been programmable in hex opcodes, with the LED seven-segment display for output. Remarkably, in the light of future developments, Clive Sinclair wasn’t keen on the idea of bothering with computers at all, and didn’t even use them himself until well into the 1980s, and this seems to have been a factor in the genesis of Acorn. Chris Curry managed the Mk 14 project and soon went on to found Acorn with Hermann Hauser in 1978 to build the System 1 and eventually design the ARM chips which now power tablets and mobile ’phones, and I wonder whether this was due to Sinclair’s failure to appreciate the potential of computers. It’s quite strange to think of this now.

One thing the Mk 14 did manage to do was persuade Sinclair that there was a market for home computers, and he went on to design and release the ZX80 in 1980:

By Daniel Ryde, Skövde – Originally from the Swedish Wikipedia., CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=439384

I like the design of the ZX80 case, as in 1980 terms it looks very futuristic. It has the whiteness of the Cambridge calculators and of course a flat panel keyboard, which was very much in vogue at the time in the form of hi-fi and music centre controls. It was also a nightmare to type on. It could be bought either complete or as a kit, and in the former condition it was the first computer ever to be sold for under £100. People tend to think of it as very primitive. My mother considered a friend of mine to be uppity because his parents bought him one when it came out. It had 1K of RAM and 4K of ROM, which included a BASIC interpreter that could only work in integers. Also, it was a bit like President Ford, who couldn’t walk and chew gum at the same time, because it could either do computing or show things on the TV but not both, so when it was actually running a program the picture would disappear. However, it was also very fast because of the integer BASIC, and it was able to use full keywords where the later ZX81 had to abbreviate, such as “CONTINUE” and “GO SUB” instead of “CONT” and “GOSUB”. This leaves one with the impression that the ROM is quite spacious, when in fact it only held about as much information as two sides of handwritten foolscap. It was made entirely from readily-available parts rather than commissioned or in-house chips, as was usual at the time: apart from the CPU (which, thank goodness, was a Z80 rather than the silly 8060 used in its predecessor), the RAM and the ROM, the board was composed mainly of integrated circuits in the form of discrete logic, which was again pretty standard for micros of that era. It sold about 50 000 units. It is possible to get it to produce a steady display through careful use of interrupts, but it wouldn’t do so out of the box. My perception of it at the time was that it was still very much a niche product about which I knew practically nothing. In the publicity, Sinclair claimed it was powerful enough to run a nuclear power station, but I’m unaware of any supporting evidence for that.

By Evan-Amos – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=18300824

Sinclair would dedicate the next few years almost exclusively to producing home computers and peripherals, and in 1981 they started to produce the computer which I think is still the cheapest new computer on first introduction ever: the ZX81, at £69.95. In hardware terms, the unexpanded ZX81 is functionally equivalent to a ZX80, but internally it made the major innovation of consolidating all the discrete logic onto a single chip, resulting in the board using a total of only five chips compared to the ZX80’s seventy-eight. There was now 8K of ROM, including an almost full floating-point BASIC lacking READ, DATA and RESTORE, and the RAM was expandable to 56K. The compromises in the BASIC were due to the inclusion of a number of instructions for interfacing with the new ZX Printer, a thermal printer which required special metal-coated paper; I actually consider this an unfortunate decision which was probably connected to marketing the printer. The display was steady, but this was achieved by getting the computer to multitask between running programs and displaying the screen, which made it four times slower than the ZX80 running the same code, and it was further slowed if a RAM pack was used, because this meant dynamic RAM rather than static, which is slower. I’m guessing that the initial decision to use a Z80 CPU was made with an eye to such a later expansion, as it has its own built-in RAM refresh facility, which can double as a kind of quick random number generator. This machine was probably responsible for the microcomputer boom of the ’80s. My perception of it is rather dominated by the fact that it was our first home computer. It was also frustratingly limited even at the time, but this spurred third-party developers to come up with their own expansions for the likes of colour, high-resolution graphics and sound, and some of them went on to produce their own computers.

1982 brought the legendary ZX Spectrum:

By Bill Bertram – Own work, CC BY-SA 2.5, https://commons.wikimedia.org/w/index.php?curid=170050

This was once again a huge leap forward. It was at the time the cheapest computer with sound, high-res graphics and colour. Although it once again used the Z80A CPU, it shares many of the features of the Apple ][ while improving on them, and for this reason I’ve written about it as part of an alternate history here. It is entirely feasible that a functional equivalent to the ZX Spectrum could’ve been put together from April 1976 onward, because in the following year a rather similar, though 6502-based, computer arrived on the scene. However, the chances are the world wasn’t ready for it in the mid-’70s, and at that time it would probably have cost about £1500. It’s all discussed at the link. Sinclair wanted his machine to be chosen as the BBC micro, but the BBC wanted a “real” computer, so they chose the Acorn Proton project instead. Clive Sinclair objected to the idea of the BBC endorsing a specific model of computer because they were a publicly-funded body and he saw it as similar to advertising. I wonder if in fact he was influenced by his experiences with the NEB, which seems to have taken advantage of all his hard work and given him an insufficient golden handshake while apparently denying him the opportunity to capitalise on his ideas.

The ZX Spectrum was not intended to be a games machine, but that was certainly its main use; the same applies, to a lesser extent, to the BBC Micro. Along with the Commodore 64, it was the most popular computer before IBM PC clones came to dominate everything. The original keyboard was not much of an improvement on the previous two products, and single-keystroke keyword entry, intended to circumvent the problem of having a poor-quality keyboard, led to such absurdities as it taking four keystrokes to type the word “INK”. Sinclair promised a stringy floppy called the Microdrive, plus Interfaces 1 and 2, all of which were delayed and attracted long waiting lists. The Spectrum persisted for a very long time, undergoing several upgrades, and continued to be manufactured for some time after Sinclair became part of Amstrad, by which time it had a proper multichannel sound chip, an RGB monitor interface, a built-in disc drive and something approaching a typewriter keyboard, and it was possible to opt out of single-keyword entry when powering on. There were many Spectrum clones, notably behind the Iron Curtain and in Latin America, some of which extended its capabilities beyond recognition and were more like early ’90s PCs in their specs, and there were also computers such as the Sam Coupé, which was far more capable than the Speccy but also compatible with it. It was an incredibly persistent computer. Sinclair also had three rather nebulous projects connected to the Spectrum, called Loki, Janus and Pandora, which never materialised.

Through the ’80s, Sinclair was aiming to produce a laptop like the GRiD Compass, planning to do it with the ZX80, ZX81 and Spectrum in turn. It almost came to fruition with their next computer. No new computers were announced in 1983, in spite of the huge glut of new micros being released by loads of different manufacturers. Then, in 1984, the QL was announced. Standing for “Quantum Leap”, this was billed as a 32-bit computer, Sinclair seeing itself as having leapt over the 16-bit era and gone straight to 32 bits. However, it was based on the 68008, a version of the 68000 which was internally thirty-two-bit but had an eight-bit data bus. As had often happened before, the hardware was buggy, and the first QLs were released with a lumpy thing hanging off the back called a “dongle” which fixed them. The QL was the first Sinclair computer to have something like a proper keyboard, although it still wasn’t up to the standards of many other, more expensive home micros. Storage was in the form of two built-in Microdrives. This machine formed the basis of ICL’s One Per Desk, which was a hybrid computer and communications terminal and was used by BT as the Merlin Tonto. The QL didn’t seem to catch on, but it’s hard for me to tell, because it coincided with the point when I decided to go cold turkey on IT, not liking the feeling of being addicted to computers and wanting to become a more balanced person.

A much more public failure was the notorious C5 electric vehicle. Sinclair had big plans for this, which would’ve culminated in the C15, which was essentially what we would call a Smart car today and looked very similar. None of this happened, of course, as the C5 itself was not a success. The only time I’ve seen a C5 in use was at one of the halls of residence at my university, where it was being pedalled around by one of the students. It’s shown at the top of this post. The C5 is an electric tricycle with a polypropylene body designed partly by Lotus. One of the problems with it is that it’s too low to be visible from larger vehicles. The battery range was short, the maximum speed was only 24 kph and it suffered from the usual Sinclair problem of not being delivered on time. Sinclair’s vehicles division went into receivership after less than a year. Even today, though, it has an enthusiastic hobbyist community, which has managed to soup it up to travel at over 200 kph, although I can’t say I fancy the idea of riding in or driving one at that speed. Research into developing the C5 had been going on for five years before it was released, yet there was no effort to develop a more advanced battery than the lead-acid ones used in milk floats, the rationale being that better batteries would eventually come along from third-party manufacturers. That did eventually happen, but not until decades after the C5 had bitten the dust. Reviews from the motoring press were decidedly negative, and it’s considered to be one of the worst marketing failures since World War II.

By Binarysequence – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=29980489

Going back a couple of years, Sinclair made one more attempt at a pocket TV, this time with a flat screen. Oddly, Sinclair had, as I’ve said, a fixation on the idea that CRTs would always be superior to LCDs. The idea behind the TV80, illustrated above, was to bend the electron beams round a corner so that the electron gun could be placed beside the screen rather than behind it. In his ongoing dream of producing a laptop computer, Sir Clive planned to incorporate such displays in portable versions of his computers, and was dead set against LCDs; this may have been due to their inadequacy at the time. The TV80’s screen was also magnified by means of a Fresnel lens, which are those magnifying things you used to see on the backs of buses and in lighthouses: flat, thin lenses which can nonetheless magnify like an ordinary lens. However, it was noted at the time that LCDs would soon overtake this technology.

Taken from OLD-COMPUTERS.COM . Will be removed on request.

In 1986, Amstrad took over Sinclair. Amstrad continued to use the Sinclair trademark for some of its products, but from that point onward, Sinclair had no part in developing the new products bearing his badge. This leads to the rather anomalous phenomenon of a Sinclair PC, the PC-200, which in 1988 was still using CGA and MDA. It had just two ISA slots for expansion, but the case wasn’t high enough to accommodate the cards. However, it was not really a Sinclair product anyway.


In 1987, no longer able to release products under his own name, Sinclair finally achieved his dream of a portable, battery-powered computer with its own display, albeit a monochrome LCD. This was the Z88, an interesting Z80-based device which included BBC BASIC and adapted versions of Acorn’s own productivity apps. When the Z88 first came out, I found it very confusing, because it certainly seemed and looked like a proper Sinclair machine but wasn’t called that. It’s black, A4-sized and actually seems to have a nice keyboard for once. It feels like Sir Clive either couldn’t legally attach his name to it or didn’t think it was good publicity to do so. Compared to what he actually wanted to do, which was to have a large, possibly colour, flat-screen display at an angle to the main unit, this was not it, and in fact from this point on most of his products feel like him making the best of a bad job. I don’t feel that his stuff was actually shoddy as such, but that he was setting his sights lower henceforth. It must have felt like a bit of a comedown to have to use an LCD on this device.

I want to mention three more products which I think illustrate this sense of compromise. The first of these is an electric motor for a pedal bike called the ZETA (Zero Emission Transport Accessory). This appeared in 1994, was upgraded to the ZETA II in 1997 and then the ZETA III, and was finally retired in, I think, 2002. It’s an electric motor driving a wheel which is fixed to a bicycle frame to boost its speed to about 24 kph. Incidentally, the maximum speed here is the same as that of a C5, and this is no coincidence, because above that speed these devices would be officially classed as motor vehicles, with the concomitant legal implications. In fact, both the C5 and the ZETA could easily have been designed to go faster, and hobbyist communities circumvent their limiters, but it changes their legal status. It rather feels like Sir Clive was limiting himself in more ways than one with these. He also produced the Zike (he seems to have liked the letter Z), an electric bike, once again limited to 15 mph for the same reasons as the others, and weighing only eleven kilogrammes. This unfortunately failed, probably because it was associated in the public mind with the C5, even though it was a complete rethink; if it had been produced by a different company it would probably have done fine.

The absolute final bit of kit associated with the guy was the SeaScooter. This still exists and can still be bought! It came out in 2001 and is an underwater motorised vehicle scuba divers can hang onto to transport them through the sea. It goes at 3 kph and operates up to twenty metres depth, and can be recharged overnight. It’s a bit of a departure for Sinclair although once again there’s a sense of him adapting something unsuccessful to a new environment where he hoped it would achieve greater success.

Who, then, was Sir Clive Sinclair? Someone who was very much part of British private-sector industry from the 1960s into the twenty-first century, and whose ideas were ahead of both his time and his companies’ capacity to manufacture them. Many of his products did have an air of cheapness about them, but they were also very impressive and high-concept, and he seems to have had a tenacity and resourcefulness you don’t see very often. It seems unlikely that we will see his like again.

My Hardware History – A Tale of Mass Disempowerment

(c) Gary Cattell

Not gonna lie: this post is inspired by Brian of Brian’s Blog fame, whose blog you should all of course now visit and to which you should subscribe. But his post is about operating systems, whereas mine is about how it’s all gone to Hell in a handcart.

Ethical decision-making in the marketplace often comes down to trust versus self-sufficiency. On the one hand, when you buy something, even from a sole trader, you are in a way out-sourcing your ethical choices to a third party, whom you may not know and in whom your trust may be misplaced. To some extent, it may not even be their fault that they have been forced down negative and harmful paths in what they do, because it’s the system, not the people: the very fact that we live in a monopoly capitalist society forces us to harm others. Leaving the larger political picture out of the equation for a bit, one can take control of one’s life to varying degrees on the ethical front, but ultimately this usually amounts to self-sufficiency. The extent to which one is self-sufficient correlates with the degree of responsibility one is taking for how one’s actions affect others, including the planet. However, even there it’s important to recognise privilege. One may have access to an allotment and be able to grow food, but that depends on various factors: the good fortune of having that access at all, much land formerly devoted to allotments having been sold off to the highest bidder; not living in a food desert; having the time to put into raising that food, and so on. The same applies, to a greater extent, to gardening for food. Not everyone has the luxury of a garden where they can grow crops, and foraging only works even to the degree it does because relatively few people do it. There’s also a knowledge, experience and skills base some people can draw on and others can’t. The fact that I’m a herbalist makes it easier for me to forage, for instance.

In the case of growing food, one usually has at least a fairly good chance of having the facilities and skills needed to manage it. In other areas this may not be so. For instance, we can’t make a gate ourselves, or so I tell myself (I do have very limited carpentry skills, which I never use and which were pretty poor anyway, so in theory I could), but we could buy a gate and fix it onto the wall using the abilities available in this household. This costs about £50. Presumably buying the timber and using the tools we’ve already invested in would cost us less, but when we asked someone else to do it for us, we were quoted a figure of many hundreds of pounds.

The opposite end of this kind of self-sufficiency is found in the likes of computers and allied devices. The degree of skill and automation, and the combined effort of innumerable people, is what makes devices like laptops, tablets and smartphones, and their associated storage media and support items such as chargers, cables and Wi-Fi, possible. That huge morass of people doing things for you is difficult to replace with your own skills, because there is only so much that can be fitted inside a single head. Given a few lengths of wire and some relays, I could probably put together an arithmetic and logic unit which worked on a very short binary word length, and it isn’t often appreciated that one major reason I can do this is that I’m a philosophy graduate. I’m also dyspraxic, though, so it would be a struggle for reasons other than the mere knowledge of how to do it. Consequently I rely on others for the digital electronics I use, and that reliance means that, as usual, I’m handing over responsibility for ethical choices to other people, whose own choices are compromised by working within a capitalist system.
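To give a flavour of what that relay-sized project would involve, here’s a minimal sketch of such a short-word-length ALU in Python: a one-bit full adder built from the basic gates (each of which is realisable as a relay circuit), rippled across a four-bit word, plus the two obvious logic functions. The names and the four-bit width are my own choices for illustration.

```python
def full_adder(a, b, cin):
    """One-bit full adder from basic gates, each buildable as a relay circuit."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def alu4(op, x, y):
    """A four-bit ALU: ripple-carry addition plus the two obvious logic ops."""
    if op == "AND":
        return x & y, 0
    if op == "OR":
        return x | y, 0
    carry, total = 0, 0
    for i in range(4):                  # chain four one-bit adders
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total, carry                 # sum modulo 16, plus the carry-out
```

For example, `alu4("ADD", 9, 7)` gives `(0, 1)`: sixteen overflows a four-bit word, leaving zero with the carry set. The logic really is graspable by one person; it’s the scale and miniaturisation that aren’t.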

I need to get down to specifics.

In the 1970s, the microprocessor was seen as a threat to manual labour. Since I was entering a Marxist phase at the time, it simply seemed wrong to have anything to do with microcomputers back then, since it would mean supporting companies whose products were putting people out of paid employment. Compared to my peers, our family were relatively late adopters of all sorts of technology, such as colour TV, cassette recorders and stereo record players. I knew one early adopter whose family bought a ZX80 when it came out; my family and others were rather disdainful of this and saw it as kind of “uppity”, I suppose. We got a ZX81 in November 1982, some time after the price had come down owing to the introduction of the ZX Spectrum, and after only a week we purchased a 16K RAM pack. However, by the time we had one, I’d known BASIC for about a year, having taught myself from textbooks with no hands-on experience. I was still almost superstitiously suspicious of even home micros at the time. After all, Clive Sinclair had claimed that the ZX80 was powerful enough to control a nuclear power station, so taking that at face value, even that had the potential to replace human workers aplenty. At the time I had no concept of automation creating jobs to replace the ones that had been destroyed, so all I could see in the future was mass unemployment.

The ZX81, and to an extent its predecessor, has a kind of nostalgic charm to it, even for the time. For instance, like older and larger computers it has its own character set, with exclusively upper-case letters and only sixty-four printable characters, and in particular it uses “**” to denote raising to a power rather than “^” (or actually “↑”, which most micros of that vintage used, if I remember correctly). It also displays black characters on a white background, giving the impression that the output is all coming out of a printer onto the old-fangled paper which looked like it had green musical staves on it and holes up the sides, and was folded with perforations separating the sheets. It was also, in practical terms, silent out of the box. My enduring impression of the ZX81 is that it’s an early ’60s minicomputer trapped in a plastic matchbox, and as such it had a flavour of former glories about it.

To me, the most mystifying thing about this computer was that it somehow seemed to be able to produce sufficiently detailed characters as would appear on a much higher resolution display but could not actually address those pixels directly. Why couldn’t it draw in as much detail as it displayed text? There were 2×2 graphics characters but they only afforded a resolution up to 64×44. It also didn’t scroll unless you told it to. I attempted to combine characters by printing them in the same location, expecting a kind of overstrike effect, but that didn’t work. Then there was the question of machine code. At the time, I assumed that computers directly acted upon the BASIC code they were given. When I saw the table at the back of the manual showing the machine code instruction equivalents to the alphanumeric and other characters, I drew the erroneous conclusion that the microprocessor simply read the BASIC line by line off the display file and executed it to achieve the ends, so for example SIN was a series of three instructions which together could achieve a floating point trig function, and so could COS, and so forth. This is kind of how the human brain operates with language, so I would defend this naïve view.
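The 64×44 figure falls out of the screen layout: 32×22 character cells, each quartered into four “pixels”. Here’s a sketch of how plotting has to go through the character cell, using Unicode quadrant blocks to stand in for the sixteen 2×2 patterns; the code and names are mine, not the ROM’s method, though PLOT in ZX81 BASIC did essentially this.

```python
# Quadrant blocks stand in for the sixteen 2x2 patterns, indexed by a 4-bit
# value: bit 0 = top-left, bit 1 = top-right, bit 2 = bottom-left,
# bit 3 = bottom-right.
BLOCKS = " ▘▝▀▖▌▞▛▗▚▐▜▄▙▟█"

COLS, ROWS = 32, 22                     # character cells, giving 64x44 "pixels"

def new_screen():
    return [[0] * COLS for _ in range(ROWS)]

def plot(cells, x, y):
    """Set one pixel by or-ing its quadrant bit into the character cell."""
    bit = (y & 1) * 2 + (x & 1)
    cells[y // 2][x // 2] |= 1 << bit

def render(cells):
    return '\n'.join(''.join(BLOCKS[c] for c in row) for row in cells)
```

Plotting (0,0) and (1,1) both land in cell (0,0), which becomes pattern 9, the ▚ shape: two “pixels” sharing one character position, which is exactly why a cell couldn’t hold fine graphics and a text character at the same time.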

Another unexpected feature of microcomputers was that they almost all used BASIC. I had expected that relatively cheap home computers such as the VIC-20 or TI-99/4A would use that language, but that, since BASIC is Beginner’s All-purpose Symbolic Instruction Code, more expensive computers would have built-in PASCAL, FORTRAN or ALGOL. This was, however, only true to a very limited extent.

It was eventually borne in upon me that programming languages were themselves software, written in a more fundamental language called machine code which controlled the microprocessor directly, so I learnt Z80 machine code and programmed in it in a limited way. I discovered there were ways of getting the ZX81 to produce sound and increase its vertical resolution, and even managed to produce a program which played the drum machine part of ‘Blue Monday’. This suggests that I traversed a very steep learning curve very quickly, since we acquired the computer in late November 1982 and New Order’s single was released less than five months later. I felt uncomfortable with the extent to which I seemed to be fixated on computer programming and tried to maintain an interest in other topics. My attitude to computers has always had this ambivalence to it. It’s also very likely that my pursuit of this hobby adversely affected my O-level results, and it’s a shame that the knowledge I was acquiring couldn’t have been used to get an O-level in computing. I actually used to help pupils who were studying computing with their homework, and I’ve often wondered what the disconnect is here. It reflects a pattern in my life of not being able to integrate my skills and experience with formal accreditation or recognition, and I suspect it’s linked to neurodiversity, though I don’t know how.

Much of the time I spent programming the ZX81 was also spent envying the specifications of more powerful computers, but at the time I think my parents were trying to motivate me to find paid work, which I did in fact do, and I proceeded to buy, of all things, a Jupiter Ace. This is a fairly famous computer designed by the team who did the ZX Spectrum at Sinclair Research Ltd and then left to form their own company, and it is chiefly known for having the programming language FORTH in ROM rather than BASIC. This was almost unique. There was an attempt to design a more advanced FORTH-using micro called the Microkey 4500, basically a wishlist fantasy computer which sounded excellent but hardly even got to the drawing board stage, but to me the main appeal of the Ace was that it behaved like a “proper computer”. It has the complete ASCII character set, displays white text on a black background and has a Spectrum-style keyboard. It is in fact very similar to the Spectrum, even down to the font it uses, but lacks colour and point-addressable high-resolution graphics. However, by the time of its release, October 1982, consumers were expecting computers to have high-resolution colour graphics and proper sound. For some reason I’ve never understood to this day, most British micros at the time had built-in speakers rather than using the speaker of the TV they were plugged into, which sometimes reduced the sound to a barely audible beep, and meant building sound hardware into the computer while the audio capability of the TV just sat there unused. A strange decision, which would probably make more sense if I knew more about electronics. Jupiter Cantab, the company which made the Ace, went bust after less than two years, and it was this which enabled me to buy mine. It has a different special place in my life because it was the first durable product I ever bought with money I’d earned myself, and I still own it today.

FORTH is sufficiently close to the machine that it can largely be implemented in itself: beyond a small core of actual machine code, much of the language as supplied is defined in terms of more primitive words in the language. FORTH’s appeal to me was that it enabled me to work very closely with the hardware of the computer without having to use actual op codes. I attempted to design a prefix-notation version of the language, and although writing a complete description of it came quite naturally to me, I never completed an implementation. I also noticed in myself a tendency to sacrifice user-friendliness for ease of implementation: for instance, I opted to use signed sixteen-bit integers as the only data type and to express them in hexadecimal with leading zeros.
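That layered structure, a small core of primitives supporting words defined purely in terms of other words, can be illustrated with a toy interpreter. This sketch is in Python rather than FORTH, and the word set is invented for the example:

```python
# Toy illustration of FORTH's layering: a data stack, a few
# "machine-level" primitive words, and a dictionary in which new
# words are defined purely in terms of existing ones, the way a
# colon definition like  : SQUARE DUP * ;  works in real FORTH.

stack = []

# Primitive words: the rough equivalent of the machine-code core.
primitives = {
    "DUP": lambda: stack.append(stack[-1]),
    "*":   lambda: stack.append(stack.pop() * stack.pop()),
    "+":   lambda: stack.append(stack.pop() + stack.pop()),
}

# "Colon definitions": words built only out of other words.
dictionary = {
    "SQUARE": ["DUP", "*"],            # : SQUARE DUP * ;
    "CUBE":   ["DUP", "SQUARE", "*"],  # : CUBE DUP SQUARE * ;
}

def interpret(source):
    for token in source.split():
        if token in primitives:
            primitives[token]()                     # run machine-level word
        elif token in dictionary:
            interpret(" ".join(dictionary[token]))  # expand the definition
        else:
            stack.append(int(token))                # anything else is a number

interpret("5 CUBE")
print(stack)   # -> [125]
```

Note that CUBE is defined in terms of SQUARE, which is itself defined in terms of primitives; only DUP, * and + ever touch the “machine”.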

By this time I’d been aware of operating systems for about two years. I was at first only cognisant of CP/M, which had been devised in 1974, and Unix, which I think dates from 1969, although clearly not in its later form, as the Unix epoch begins in 1970. MS-DOS and PC DOS were also out there somewhere, but since IBM PC compatible computers cost upwards of £3000 at the time I regarded them as permanently out of reach. Oddly, we did in fact have a PC clone briefly in our house in 1983, although it was never switched on and was simply being stored there for my father’s workplace. Incidentally, at the time that workplace had been using a confoundingly simple minicomputer they’d bought in the 1960s which appeared to have only four op codes. I found this hard to believe even at the time, but extensive study of the technical manuals showed that it was indeed the case. I have no idea now what it was, but it was very strange and sounds like it would’ve been a challenge to program, though an interesting one.

For me, Unix was associated with minicomputers and wouldn’t even get out of bed for what at the time seemed like a ridiculously vast amount of RAM. However, there were also versions for the 32/16-bit 68000 and I think also the failed Z8000, although oddly CP/M was also rewritten in C for the 68000, a pairing which seemed ridiculous to me at the time. It was at that point that an annoying trend became apparent to me, one which had been going on since at least 1970, when I’m aware it was implemented on the PDP-11 range of minicomputers: privileged and user instructions. On the 6809 CPU, there had been a system and a user stack pointer, both available to the user via machine code and assembler (a direct one-to-one programming language slightly friendlier to the user). On its more powerful successor, the 68000, the system stack pointer was unavailable to the user and only the user stack pointer was accessible. Other tendencies also came into play, such as many of the system flags being alterable only by the operating system and whole areas of memory being locked away from user access. This is done for security and stability purposes, but to me it felt patronising and hand-holding, and also like a built-in class system. There’s hoi polloi, the likes of us, and there’s the Programmer, who wrote the operating system and has total control. We are merely their minions, second-class computer users, and this was the start of a trend to lock people out of controlling their own devices which continues today. It’s the opposite of self-sufficiency, and it means you have to trust whoever wrote, and often sells or licenses, the operating system.

There was also another trend which drives me round the bend: virtual memory. When I first learned about multiuser systems, I was astonished to find that they would sometimes save out the whole program one user was running and switch to another user, loading and running their program instead, continuing in that cycle depending on how many users were on the system. Since hard drive storage is mechanical, it’s many orders of magnitude slower than solid-state RAM or ROM, which makes things very slow, so I assumed the arrangement would soon be superseded by cheaper and larger memories. This didn’t happen. What happened instead was that later operating systems were designed to pretend there was more physical memory than there actually was, with the result that it was all too easy for a computer to get lost in its own internal musings and kind of forget there was some person out there trying to use the bloody thing. Fortunately we now have solid state drives and the situation is somewhat better.

Way into the late 1980s I would still doodle Z80 assembler programs, meant to do various things, in notebooks, though without being interested in implementing them. By that time, GUIs were starting to take over, and thereby hangs another tale.

I liked the Xerox Star and the Apple Lisa, which stole the former’s user interface and preceded the Macintosh. That did seem to me to be the way forward for computers at the time. Later on, Microsoft tried to copy it with Windows, which seemed like a pointless thing bolted onto the front of the operating system, slowing it down so much that it wasn’t worth whatever benefits it might bring. The same applied to GEM, its main competitor. To me, a GUI feels like another way computer design is taking control away from the user.

This was just the beginning. I am used to imperative and procedural programming. As far as I’m concerned, a computer program is a list of orders or instructions, organised into smaller subroutines, which tell the computer what to do. Putting it like that, it seems absurd to me that anything else would ever be so. Object-oriented programming began to become very popular, and at no point have I remotely understood it. Every description of what it is seems to use a metaphor which fails to describe what’s actually going on inside the device. It will say something like: there’s a class of vehicles which have the properties of size, number of wheels and so forth, which can be specialised to create new kinds of vehicle, distinguishing, say, a car, a motorbike and a lorry. That’s all very well, but I can never make a connection between that metaphor and computer programming, which looks like something completely different. It also uses a declarative paradigm, where you just seem to tell the computer what’s there and leave it be, which baffles me, because how can anything actually be happening if you haven’t made it happen? I’ve attempted to describe my understanding in terms of variables and procedures, but people in the know have always told me I haven’t got it right. It’s been said, and I’ve come up against this, that if you’ve learned imperative and procedural programming, it’s so much harder to learn OOP (object-oriented programming). I also can’t shed the impression that a lot of it is obscurantist technobabble hiding a naked emperor. And if you can’t see what’s going on inside, how can you have control?
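For what it’s worth, one way of grounding the vehicle metaphor in variables and procedures, and this is only my own illustration, not a definitive account of OOP, is to treat an “object” as a record of data carrying a slot which points at a procedure, so that a “method call” is just a lookup followed by an ordinary call:

```python
# An "object" as nothing more than variables and procedures: a
# record (here a plain dict) holding data plus a slot pointing at
# a procedure which takes the record as its first argument. A
# "method call" is then just a lookup and an ordinary call; class
# systems mostly add bookkeeping around this same indirection.

def vehicle_describe(v):
    return f"{v['name']} with {v['wheels']} wheels"

def make_vehicle(name, wheels):
    # Build the record; 'describe' is simply a stored procedure.
    return {"name": name, "wheels": wheels, "describe": vehicle_describe}

car = make_vehicle("car", 4)
motorbike = make_vehicle("motorbike", 2)

# "Sending the object a message" is looking up the slot and calling
# it with the record itself; no magic beyond that.
print(car["describe"](car))              # -> car with 4 wheels
print(motorbike["describe"](motorbike))  # -> motorbike with 2 wheels
```

Seen this way, nothing declarative is happening at all: every step is still an instruction, and the “class” is just a convention for which slots a record is expected to have.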

Another annoying trend has been created by the easy availability of memory. Back in the old days, computers needed to be programmed efficiently in terms of memory. For instance, a spell checker would be based on the rules of English spelling and would object to, say, a Q not followed by a U or the use of “AY” in the middle of a word, but it didn’t have a dictionary. Nowadays it does, and that takes up many megabytes in its uncompressed form, although I’m sure it is compressed. Likewise, chess games have tended to store whole lists of possible moves and try to find their way up a tree to the best result, using up huge amounts of memory, whereas previously they would’ve used the rules of chess and, I presume, some cleverly written strategy to win. To me, this seems lazy and disappointing. I want programs to be optimised to the same extent as they were when 64K seemed like impossible luxury. So much processing power is also wasted on running the GUI. We don’t need this, because it isn’t really using the computer. It’s just making the computer easier to use by tying up processing power in unnecessary trinkets.
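A rule-based checker of the kind described might be sketched like this; the two rules are just the examples given above, not a serious rule set:

```python
# Minimal sketch of the old rule-based approach to spell checking:
# no dictionary at all, just a handful of rules of English spelling,
# each of which flags words that break it. Only the two example
# rules from the text are included here.

import re

RULES = [
    (re.compile(r"q(?!u)", re.I), "Q not followed by U"),
    (re.compile(r"\w+ay\w+", re.I), '"AY" in the middle of a word'),
]

def check(word):
    """Return the first complaint a word triggers, or None if it passes."""
    for pattern, complaint in RULES:
        if pattern.search(word):
            return complaint
    return None

print(check("qadi"))     # -> Q not followed by U
print(check("quite"))    # -> None
print(check("mayhem"))   # -> "AY" in the middle of a word
```

The whole thing fits in a few dozen bytes of rules rather than megabytes of word list, at the cost of letting through any nonsense that happens to obey the rules.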

So: you might think I’m a Linux person. I did too for a long time, but I’m not. As far as I’m concerned, you have to take care of your hardware and ensure it remains useful for as long as possible for a number of reasons:

  1. Making a computer takes a lot of energy and resources and does a lot of environmental damage, and the working conditions of the people who mined the minerals, refined the compounds and put the device together are often questionable. Once all of that is done, you should be able to hang onto it for a long time.
  2. We ourselves haven’t necessarily got that much money and we should expect our devices to be as useful as possible. That means access to hardware and no hand-holding.
  3. When we finally discard our hardware, it goes to landfill or to some poor Third World community, where it poisons the rivers and gives the people disassembling it physical disabilities, cancers and birth defects, not to mention what it does to the biosphere.

All of this has got to be the number one priority when we consider computers and other devices, and to be fair Linux does go a considerable way towards addressing it. But there’s a problem. A lot of the people who are involved in designing and coding for Linux are very focussed on supporting up-to-date hardware. It’s a big community and they aren’t all doing that, but many of them are, and it’s often hard to find people who are genuinely concerned about the e-waste and other sustainability aspects of the issue. The other thing, and this may be less of a problem today than it used to be, is that Linux people are often not people people, and in the end this amounts to user interfaces which are not very friendly. I’m reminded of the Dilbert cartoon strip of the computer programmer saying “it’s my philosophy that the computer interface should hurt the user”, and employing samples of birds being killed by cars as the main notification sound. I’ve had to use Linux GUIs which are unwittingly displaying at 320×200 and putting the OK button completely outside the display, or which fail to recognise how the video RAM is laid out, so that everything looks like incomprehensible multicoloured vertical stripes. And yet a perfectly good, low-end 640×480 sixteen-colour display could’ve been used, which somehow is not the default. Why?

Don’t get the impression that I’m not as susceptible as most other people to the appeal of Windows. I was very impressed by Windows 3.1, which I didn’t come across until 1998 due to forcing myself to go cold turkey on computers for a decade or two. As far as I’m concerned, I’d be happy for the GUI to look like that today, and it all seems a bit unnecessary to make it any fancier, particularly because in doing so you’re consigning millions of PCs to landfill for no good reason. I think Program Manager, the main shell for Win 3.1, hung around until at least Windows XP, although it didn’t retain its look. It may be because our 486-based 33 MHz PC with VGA graphics and a non-working sound card was just less messed about than other computers we’ve used since, but it was the most stable version of Windows I’ve ever used in earnest. It crashed once in the whole time we used it, and that was a major and worrying event. Incidentally, there used to be an Explorer-style shell for Win 3.1 which made it practically indistinguishable from classic 32-bit Windows. I used it for a short period, and in fact I’ve even installed it as the default shell on later versions, it being more compact and stable than the standard one.

We then leapfrogged over Windows 95 to use Windows 98 on a computer which was a total mess. It was a 120 MHz Pentium with SVGA, a working sound card and 16 MB of RAM. This is below the recommended specs for Windows 98, but there were other imponderable issues with that PC at the time. It only lasted a few months before we handed it over to a friend who was more au fait with computers, who got it to work. We replaced it with our final desktop-format computer, an AST Bravo MS P/75 with an ATI Mach-16 graphics card, which incidentally has 2-D acceleration but not the 3-D which had become practically universal by that point. This was actually a downgrade, but it was more reliable, and at that time I was really into the themes and skins available on Windows 98. I also struggled endlessly to get Linux to work on it. QNX was fine, but then it always is, and BeOS also worked okay. It was ultimately upgraded to 128 MB of RAM and a piggy-back 133 MHz Pentium, if I remember correctly. This was the first computer to have a reliable dial-up internet connection. It was fine at running Quake, but even upgraded to the nines it was only barely capable of showing DivX videos.

The next stage came in 2002, as the start of our plan to give up television, partly for the children’s sake. This ultimately meant having access to DVDs, and therefore we bought a new computer, though at first only with a CD-ROM drive, mainly in order to get broadband internet. By this time we were accumulating e-waste enormously because I was reluctant to let the old hardware go the way of all silicon. This PC had a 1 GHz Athlon of some description and started off with Windows 98; we upgraded it to Windows XP in 2004. Windows XP was a good operating system on the whole, but it was the first to break compatibility with 16-bit applications and drivers, which necessitated the slinging out of various bits of hardware. This was also our first PC to have USB slots, which I upgraded as well. Its initial specification was 128 MB of RAM, a 40 GB hard drive, on-board graphics and a Silicon Integrated Systems motherboard. One thing I didn’t like about Windows XP was its childish-looking graphical scheme, but it still had Windows Classic, whose appearance was far better. This was also the first computer we used a TFT screen with; the amount of space taken up by a 22″ CRT monitor is something to behold.

In 2007, we got a Windows Vista machine. This was because its predecessor had exploded after I installed a graphics card whose power requirements exceeded what the machine’s power supply could deliver. Apparently it emitted something like ball lightning, although I wasn’t there at the time. We persisted with this one for a further seven years. My chief issue with Windows Vista was that, left to itself, it seemed to spend too much time making the windows look translucent. Interestingly, the system requirements for Windows after a certain point went into reverse, probably because people were no longer impressed with the eye candy. In 2015 we acquired the computer which is currently sitting upstairs and of course runs Windows 10. Ironically, the stability of Windows 10 has made it possible to install Linux properly and use it on that PC. I have a history of using Live distributions of Linux on memory sticks as my preferred operating systems because they’re more secure that way.

Windows PCs have become very much secondary in our lives in recent years, as is probably true for many other people. We mainly use Android and Chromebooks, although Sarada still uses a Windows laptop from time to time. There no longer seems to be the pressure to upgrade, or maybe we’ve just become sidelined and lost interest in using them in other ways. I still program using FORTH, and I have the modern version of the Commodore 64, which I don’t use as much as I should. To be honest, I’m not a big fan of the 64, because its BASIC doesn’t support the hardware properly and the palette is very odd for no apparent reason, but again all these are challenges to which I should rise. I’m vaguely considering writing a few arcade games for it, although I should stress that I’m very much in the Z80 camp rather than the 65-series one. I’ve never found the 6502 (technically the 6510 in the case of the 64) easy to program, because I find its addressing modes and the awkwardness of handling sixteen-bit integers irksome, although again that’s down to possibly admirable minimalist design.

We’ve also owned a few computers I haven’t mentioned. I bought a Tandy Color Computer in 1999, there was a UK101 in our airing cupboard for a while, we had a Windows ME PC which was technically an ACT Apricot of all things, and also a plasma-display 286-based Toshiba laptop on which I used DOSLynx as a browser. That one actually ran Windows 1.0!

Although I’ve never done it, I am curious about the possibility of programming ARM or 64-bit Intel processors in machine code or assembler without an operating system as such. It would need a computer dedicated to that purpose, of course. I would also imagine that attempting to use an ARM in that way would be a bit of a tall order, although I don’t know, because my understanding is that its instruction set is nowadays optimised for compiler-generated code rather than anything a human would write directly, but I presume that Intel-like CPUs are another matter. But just as I’ve never really got into coding in a major way, I doubt I’ll ever get round to it. One thing I do still do with coding is occasionally write FORTH or BASIC programs to work out maths problems, and I’ve long preferred APL to spreadsheets, which quite frankly I can’t get my head round.

I very much doubt I’ll ever make the transition to object-oriented programming.