My Jupiter Ace

Hoarding tends to be frowned upon. Of course, to the hoarder it seems entirely sensible and “normal” to engage in the practice others describe in this way. Aristotle had something to contribute here. He was the apparent inventor of the concept of the “happy medium” (which I think turns up in ‘A Wrinkle In Time’, though I may be misremembering). That is, each virtue is the ideal position between a pair of vices. Courage, for example, lies between cowardice and recklessness. However, the happy medium is never exactly halfway between its corresponding vices: courage is more like recklessness than cowardice, for example. Likewise, tidiness is going to be closer to one extreme than the other. Most people seem to see it as closer to obsessive over-neatness, where you can’t do anything for fear of causing a mess, than to slovenliness. To my mind, the happy medium is closer to messiness. The people who write psychiatry textbooks and manuals are likely to normalise their own methodical tendencies, which could manifest as excessive neatness, and therefore to regard untidiness as problematic.

Now don’t get me wrong. It is problematic, and it’s also much easier to become untidy than tidy. Nonetheless, a couple of observations will be made at this point by that nebulous generic subject which makes them appear objective by using an impersonal construction. One of them is that I collected old copies of the ‘Radio Times’, not to be confused with the ancient Greek philosopher Θεραδιοτιμης, for six years until my dad got annoyed with the clutter and had me throw them out. I doubt it was exactly six years, but at four dozen editions annually over half a dozen years that’s a couple of gross, and since each one now fetches £7.50 on eBay, that’s over two thousand quid’s worth of magazines. I also still have a fair number of ‘2000 AD’ comics from 1977, which are worth a fair bit. I do not believe throwing these things out was the right decision.

This brings me to the subject of this blog post: the Jupiter Ace, which I’m always tempted to call the “Joobrrace” because it’s one of those terms you can use to practise rolling your R’s. I should point out first that the name “Jupiter Ace” has actually been used for two completely separate things. There’s the computer illustrated at the top of this post, and there’s a band which had a minor hit in 2005 called ‘A Thousand Years’. Although this is slightly confusing, I’ve long thought that the sleeve design for that single would work as the cover illustration for a computer manual:

Given the appearance of the ZX81 manual, can you not just see how this would work really well?

Leaving the band aside though, once upon a time, there were a lot of home computers, all unique. Each one had a personality of its own and was usually incompatible with all the others. They did, however, tend to have standard interfaces. I first paid close attention to microcomputers in 1981, and up until that point I’d made various assumptions about them which turned out to be untrue and, to me, rather startling. I had assumed that they would all use the programming language Pascal or something similar. I was very surprised to find that they nearly all used BASIC. As far as I was concerned, BASIC was just a language for people starting out in programming and wouldn’t be used on “proper” computers. This was in fact so on mainframes and minicomputers around that time. The languages I was familiar with, such as ALGOL 60, COBOL and FORTRAN, were a lot more popular on those, so I just assumed those would be used on microcomputers, in ROM, so that they would boot into something like a development environment which would let you type in lines of FORTRAN, say, then compile and run the program. As I said, I assumed Pascal would be the favourite because to me that language seemed to have a kind of contemporary vibe at the time. It was being pushed fairly hard, but initially, like BASIC, it was intended as a language for teaching programming rather than for serious use. In particular, the idea behind Pascal was that it should be structured – that the code could be read and planned easily and methodically, with blocks and control structures imposed on the user. By 1981, it had started to fall from grace because this very approach to structure restricted its flexibility.
I’m not going to get all technical on you here because that’s not what I’m getting at, but in general I tended to be confounded by programming languages as they were presented: they didn’t seem to have any facilities for things like sound and graphics, or even for interacting with a CRT-style display, because they were designed for a world of punched cards and teletypes. It was all rather puzzling.

There were a few exceptions. For instance, going way back to 1975, IBM had introduced a desktop computer (not a micro, as its processor occupied an entire board) which ran APL, “A Programming Language” based on symbols rather than words, of which I happen to be a fan due to its lack of language bias and its terseness. An APL-native micro also existed in the early 1980s, and APL was used to produce the exploding and rotating Channel 4 station ident in 1982. The more expensive business machines often had no programming language in firmware at all, and the user would have to purchase one as an additional piece of software, so the situation wasn’t simply that BASIC was universal. There were also some home micros, such as the Microtan 65, which could only be programmed in machine code, and others which would boot into a “monitor”: a simple program with single-letter commands for viewing and manipulating memory contents and executing machine code programs, either loaded or typed in by the user as a series of hexadecimal numbers.

The standard practice of using BASIC in firmware on home micros usually went further than just the unextended form of the language. It was usually Microsoft BASIC, often in an extended form which constituted a de facto standard. There were other versions of BASIC, used particularly in British as opposed to American home computers, such as Sinclair BASIC used in the ZX80, ZX81 and Spectrum, and BBC BASIC, which began on the BBC Micro and Electron but was later adapted for use on IBM PC clones and other machines such as the Tatung Einstein. It was also possible to buy alternative programming languages such as FORTH. And of course the mention of FORTH brings me to the main object of today’s discussion: the Jupiter Ace.

Clive Sinclair was apparently not a particularly easy person to work with. Shortly after the ZX Spectrum had been designed, a small number of employees, possibly just two, left the company to found Jupiter Cantab, apparently retaining their intellectual property in certain aspects of that phenomenally successful computer, and proceeded to design, manufacture and market a radically different machine, the Jupiter Ace, in autumn 1982. The hardware of the computer in question was not particularly special. It comes across as a cross between a ZX81 and a Spectrum, though without colour or true high-resolution graphics. However, the really unusual thing about the Ace was that instead of having BASIC in ROM, it had FORTH. This is a highly idiosyncratic language with two distinctive features. Firstly, it uses Reverse Polish Notation: instead of “2+2” it uses “2 2 +”. Most computers have a structure in memory called the stack, a series of consecutively stored numbers originally used to hold the addresses to which a program will return. In FORTH’s case, a number typed in is placed on the stack, and a “word”, such as “+”, expects a certain number of values on that stack and operates accordingly, often depositing its own result on the stack for future use. Secondly, instead of writing programs as such, the user defines new words in terms of existing ones, so, for example, squaring a number could be defined thus:

: SQUARED
  DUP *
;


“DUP” duplicates the number on top of the stack, “:” opens a definition of a new word, in this case “SQUARED”, and “;” closes it. Thenceforth, typing something like “9 SQUARED” would put 81 on top of the stack and so on.
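To make the stack discipline concrete, here is a minimal sketch in Python (not Ace FORTH itself, and greatly simplified – no error checking, integers only) of how an interpreter of this kind works: numbers go on the stack, words pop their operands and push their results, and “:” … “;” defines new words in terms of old ones.

```python
# A toy FORTH-like interpreter: a sketch of the mechanism, not the real thing.

stack = []

# Built-in "words": each pops what it needs and pushes its result.
words = {
    "+":   lambda: stack.append(stack.pop() + stack.pop()),
    "*":   lambda: stack.append(stack.pop() * stack.pop()),
    "DUP": lambda: stack.append(stack[-1]),
}

def interpret(line):
    """Numbers are pushed onto the stack; known words are executed."""
    tokens = iter(line.split())
    for token in tokens:
        if token == ":":                      # ":" opens a definition...
            name = next(tokens)
            body = []
            for t in tokens:                  # ...collected until ";"
                if t == ";":
                    break
                body.append(t)
            # The new word simply re-interprets its body when invoked.
            words[name] = lambda b=" ".join(body): interpret(b)
        elif token in words:
            words[token]()
        else:
            stack.append(int(token))          # anything else is a number

interpret(": SQUARED DUP * ;")
interpret("9 SQUARED")
print(stack[-1])  # 81
```

Real FORTH compiles definitions into a dictionary rather than re-reading their text, but the user-visible behaviour is the same: after the definition, “9 SQUARED” leaves 81 on top of the stack.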

Advantages of FORTH include structure and speed. The FORTH standards of the time didn’t include floating point numbers, but the Ace had a few proprietary extensions which allowed them. They could have been defined by the user, but since the stack has to hold the floating point values anyway, it makes more sense to extend the interpreter to recognise any series of digits containing a decimal point as a floating point number. Unlike the BASIC available on most home micros at the time, Ace FORTH didn’t support text strings in an easily-used way, though it did have arrays and a text buffer, and again, the language could be extended to support them.

The Jupiter Ace did very badly. Although it was an interesting device, it was let down by the absence of colour and by poor sound. The keyboard was similar to the Spectrum’s, which was fairly normal for the time, but because the Ace couldn’t use the Sinclair system of producing an entire keyword with a single keystroke, the keyboard saw much heavier use, which made its cumbersome nature much more obvious. It comes across very much as the kind of computer which might’ve been produced in the late ’70s – something like the TRS-80 Model I from 1978, though that machine had a much better case, better interfaces and a superior keyboard. Consequently, Jupiter Cantab went bust and sold off their remaining stock to Boldfield Computing Limited, which in turn reduced the price from £89.95 to £30. This happened in 1984.

Another thing which happened in 1984 was that Safeway opened a branch in Canterbury for the first time, leading to my first paid job, as a cashier at the age of seventeen. I was paid £1.41 an hour, which was a huge amount for me at the time – this was before the minimum wage, and prior to that I’d only had a pound a week. I lost the job after only twelve weeks due to my unassertiveness. For instance, I was on the “9 Items Or Less” (sic) till but couldn’t bring myself to turn customers away if they brought whole trolleys of stuff, and I didn’t want to ask for extra change so I ended up giving people their change in pennies. However, in that time I succeeded in amassing enough cash to buy a Jupiter Ace, so around October I acquired one, and at the same time I bought a 16K RAM pack to bring the memory up to 19K. I can’t remember how much that cost, but the initial outlay alone would’ve been about twenty-one hours’ work.

Unlike most people who bought an Ace, I actually got it as an upgrade, although I did find the FORTH interesting. My previous computer, a 16K ZX81 which my father bought for the whole family, was the absolute cheapest computer available at the time. It was ingeniously designed to be as cheap as possible, and that design rendered it rather atypical as a computer. For instance, to this day computers use the ASCII character set, although nowadays this is a subset of the much larger Unicode, which includes most of what you might ever want to type, although I find it inadequate due to things like its lack of Bliss symbols, which I use extensively in writing. The ZX81, though, only used sixty-four symbols, including twenty-two graphics characters used to draw Teletext-style pictures, and it lacked lower-case letters and a lot of the usual graphemes such as “@” and “\”. It also defaulted to black text on a white background with an unchangeable white border, and in its 1K version it barely had enough memory to display a full screen of text, so it would skip the memory for lines shorter than thirty-two characters. The screen also didn’t scroll unless you included an instruction in the program, and even then only by a single line, and the cursor for input stayed at the bottom of the screen. There was also no sound. However, because Sinclair had a fair bit of financial oomph behind them, they were able to design a large custom chip which did everything the computer needed apart from processing programs and storing information, and to this day I find this very impressive, because the total chip count is only five:

This is the kind of achievement which is impressive because of the limitations the available technology imposed on the designers. It’s similar to the helical scan mechanism in a VCR, in that only a flash of inspiration of that kind makes the thing possible at all.

By contrast, the Ace had a full ASCII character set with redefinable characters, single-channel pitch-and-duration sound, a proper scrolling screen and a white-on-black display like a “proper” computer. It also had slightly more memory. However, Jupiter Cantab were a tiny and impoverished company – so small, in fact, that their turnover, not adjusted for inflation, actually overlapped with my own turnover as a herbalist in the noughties, even though sterling had halved in value over the intervening period. It’s remarkable to contemplate that the size of the company was less than one order of magnitude greater than our partnership. One practical consequence of this was that they were unable to have the kind of custom chip designed and produced for them which had given Sinclair the advantage with the ZX81 a year earlier, and had to resort to discrete logic. I’ll come back to that in a minute, but I want to make the observation that this is a good example of how poverty is expensive. Instead of employing one chip, Jupiter Cantab had to use many:

Those smaller components on the right-hand side of the board are mainly doing similar jobs to the large chip on the left of the ZX81’s, but there are many more of them. They also all need to be soldered onto the printed circuit board, and they make the design of the board more complex. This makes the whole computer more expensive to produce, and unlike Sinclair, Jupiter Cantab could only buy components in smaller quantities, making each one more expensive per unit. In other words, unlike the ZX81 and Spectrum, the Jupiter Ace is not really a “pile ’em high and sell ’em cheap” product, because they didn’t have the option of making them cheaply. There are, even so, clear signs of cost-cutting. The sound is produced using a buzzer rather than a speaker, apparently identical to the Spectrum’s. This was an odd design decision found in a number of British micros: rather than routing the audio through the TV speaker, a separate loudspeaker – or, unfortunately, a buzzer – was placed on the motherboard. I don’t know much about the design, but that seems to me to add to the cost of the hardware while interfering with the quality of the sound.

The chips involved were bought off the shelf and are available to the general public even today. To repair a ZX81, the ULA – the large chip on the left which does “everything” (it actually does less than the discrete logic on the Ace board, because much of the work of putting the older computer’s display on a TV is done in system software) – has to be replaced by another large chip that does “everything”. With an Ace, there is a “right to repair”, as it were, because all that need be done is for the malfunctioning chip to be located and replaced by another, very cheap, integrated circuit. In fact it’s still possible to build an Ace from scratch today with pretty basic equipment. It’s also possible to build a ZX80 in the same way, and since a ZX81 is, functionally speaking, just a ZX80 with different firmware, that can be done too – but not with only five chips and a simple motherboard.

The personal significance of the Ace to me, as a milestone in my life, is that it was the first durable and substantial product I bought with my own money. For many people this landmark would be followed rather rapidly by increasingly impressive and expensive things, ramping up over less than a decade to the likes of a car and a house. This never happened for me, for reasons I can’t explain – and in fact if I knew why my life, considered in such terms, failed so badly, the chances are it wouldn’t have done. It’s probably connected to neurodiversity and mental health issues, but in any case it means this very cheap product bought nearly forty years ago has more sentimental significance to me than most others. I have now succeeded in buying a second-hand car, although I can’t drive, so it’s for Sarada; most people manage that kind of thing by the time they’re in their early twenties, and they’d be able to drive it themselves. Hence the Ace, as a kind of failed product, reflects my own sense of failure in life.

There’s another, rather similar, aspect to this. I always tend to back the loser. Probably the most obvious example of this is that I’m a Prefab Sprout fan. This band is known mainly for a novelty song, ‘The King Of Rock And Roll’, which is about a band known mainly for a novelty song. It’s unintentionally meta. There are other aspects of their career which are like this. For instance, the lead singer and songwriter Paddy McAloon once penned and sang the lines “Lord just blind me, don’t let her innocent eyes reminds me”, and later went blind suddenly as he drove along a motorway. Fortunately he survived. Anyway, there would have been a point, back in 1982, when Prefab Sprout released ‘Lions In My Own Garden’ and some other bands, maybe Lloyd Cole And The Commotions or Frankie Goes To Hollywood, had their own debut singles released, and somehow I got into the first and only to a limited extent the other two. Granted, most of this is down to the fact that most undertakings are unsuccessful, but for some reason my interest in something seems to be the kiss of death. Prefab Sprout and the Jupiter Ace were both critically acclaimed and enthused about with good reason, and yet both were unsuccessful. I could name all sorts of other things with a similar trajectory about which I was quite keen at the time. What does this mean?

All that said, there is a sense in which the fortunes of the Jupiter Ace have now changed. Like the Radio Times back issues, Aces are now a lot more valuable than when they first came out: they can go for more than a thousand quid each. The trouble is, mine doesn’t currently work. I suspect it’s fried, but it may not be. This is where something unexpected may come to my rescue.

I am, as you probably know, a philosophy graduate. Most people say it’s an excellent qualification for flipping burgers, but in fact it isn’t, because like many other people I examined the arguments for veganism while I was studying and became vegan as a result, so the burgers in question should probably be veggie. However, it is in fact useful in various ways, one of which is that you get to understand symbolic logic and Boolean algebra. There are various reasons for this, such as helping one understand the foundations of mathematics and distinguishing valid from invalid arguments, but in any case logic is central to philosophy. While I was studying the subject, another student found that applying these techniques to the design of digital circuits helped him simplify them and use fewer components. In general, there happens to be an enormous overlap between philosophy and computing. After the department was closed down, its logic and scientific method subsection merged with computing, and as far as I know survives to this day.
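To illustrate the kind of simplification I mean (my own toy example, not the actual circuit that student worked on): Boolean algebra lets you prove that two gate arrangements are equivalent, so the simpler one can stand in for the more complex. Here the absorption law shows that (A AND B) OR (A AND NOT B) collapses to just A, and an exhaustive truth table confirms it.

```python
# Proving two gate arrangements equivalent by checking every input combination.
from itertools import product

def original(a, b):
    # Two AND gates, a NOT and an OR: the "expensive" circuit.
    return (a and b) or (a and not b)

def simplified(a, b):
    # Boolean algebra says the whole thing reduces to a plain wire.
    return a

# A truth table: compare the circuits on all four input combinations.
assert all(original(a, b) == simplified(a, b)
           for a, b in product([False, True], repeat=2))
print("equivalent")
```

Four rows is trivial, but the same exhaustive check scales to any circuit small enough to enumerate, which is exactly why the logician's toolkit transfers so directly to digital design.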

One practical consequence of this is that I have no problem understanding how computers work – at least simple ones such as this – and a possible consequence of that is that it might even be possible for me to repair it and sell it. I should add, however, that mere knowledge of how the logic circuits, for want of a better word, work still leaves a massive chunk of ignorance about electronics in general. I do know why the machine is broken: the polarity of the power supply was reversed, meaning that current flowed in the wrong direction through the circuit, damaging at least some of the components beyond repair. What I’m hoping, though I’m not terribly optimistic, is that the voltage regulator was destroyed but protected everything else. However, the components are so cheap that it would still be cost-effective to replace everything on the board and end up with a working Ace, given the price they sell for. This does, however, raise a philosophical issue, because it amounts to the Ship of Theseus paradox: if everything which makes up the Ace is replaced by something else with the same function, is it still an original Ace? And what does that mean for its value?

There’s something out there called a Minstrel:

This is an updated Ace. It costs £200 but has 49K memory rather than 19K and seems to be able to use USB storage. I don’t know much about it, but I am aware that it works with newer televisions. One of the differences between the two boards, other than the larger memory chips, is the absence of the silver and red Astec modulator, whose function is to interface with a conventional CRT television. Unlike many other cheap computers of the time, the Jupiter Ace had the rudiments of a monitor interface available without modification, although the signal needed to be amplified, and nowadays a modulator just gets in the way because it means you have to have an old-style TV as well.

Although it’s tempting to attempt to upgrade this computer, I am under no illusions regarding my abilities, and it would be an achievement if I even ended up with a working model at the end. It would be interesting to know how much a non-working Ace would go for, but clearly a working one would be worth more.

This is the plan:

  • Ensure a good connection between the Ace and a CRT TV via a cable.
  • Use a ZX81 power supply to turn it on.
  • If it doesn’t work, replace the voltage regulator.
  • If it still doesn’t work, replace every component until it does.
  • Sell it.

Right, so that’s it for today. I was going to talk about nostalgia a bit but I’ve probably bored you senseless.

My Hardware History – A Tale of Mass Disempowerment

(c) Gary Cattell

Not gonna lie: this post is inspired by Brian of Brian’s Blog fame, which you should all of course now visit and to which you should subscribe. But his post is about operating systems, whereas mine is about how it’s all gone to Hell in a handcart.

Ethical decision making in the marketplace often comes down to trust versus self-sufficiency. On the one hand, when you buy something, even from a sole trader, you are in a sense outsourcing your ethical choices to a third party, whom you may not know and in whom your trust may be misplaced. To some extent, it may not even be their fault that they have been forced down negative and harmful paths in what they do, because it’s the system, not the people: the very fact that we live in a monopoly capitalist society forces us to harm others. Leaving the larger political picture out of the equation for a bit, one can take control of one’s life on the ethical front to varying degrees, but ultimately this usually amounts to self-sufficiency. The extent to which one is self-sufficient correlates with the degree of responsibility one is taking for how one’s actions affect others, including the planet. However, even there it’s important to recognise privilege. One may have access to an allotment and be able to grow food, but that depends on various factors: the good fortune of having that access at all, given how much land formerly devoted to allotments has been sold off to the highest bidder; not living in a food desert; having the time to put into raising that food; and so on. The same applies, to a greater extent, to gardening for food. Not everyone has the luxury of a garden where they can grow crops, and foraging only works even to the degree it does because relatively few people do it. There’s also a base of knowledge, experience and skills which some people can draw on and others can’t. The fact that I’m a herbalist makes it easier for me to forage, for instance.

In the case of growing food, one usually at least has a fairly good chance of having the facilities and skills needed to manage it. In other areas this may not be so. For instance, we can’t make a gate ourselves – or so I tell myself (I do have very limited carpentry skills, which I never use and which were pretty poor anyway, so in theory I could) – but we can buy a gate and fix it to the wall using the abilities available in this household. This costs about £50. Presumably buying the timber, having already invested in the tools, would cost us less; but when we asked someone else to do it for us, we were quoted a figure of many hundreds of pounds.

The opposite end of this kind of self-sufficiency is found in the likes of computers and allied devices. The degree of skill and automation, and the combined effort of innumerable people, is what makes devices like laptops, tablets and smartphones, and their associated storage media and support items such as chargers, cables and Wi-Fi, possible. That huge morass of people doing things for you is difficult to replace with your own skills, because there is only so much that can fit inside a single head. Given a few lengths of wire and some relays, I could probably put together an arithmetic and logic unit which worked on a very short binary word length, and it isn’t often appreciated that one major reason I can do this is that I’m a philosophy graduate. I’m also dyspraxic, though, so it would be a struggle for reasons other than mere knowledge of how to do it. Consequently I rely on others for the digital electronics I use, and that reliance means that, as usual, I’m handing over responsibility for ethical choices to other people, whose own choices are compromised by working within a capitalist system.
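To give a flavour of the sort of thing I mean by a short-word-length ALU: addition falls out of pure Boolean logic, the way one might wire it from relays. This is a sketch of a four-bit ripple-carry adder, with Python standing in for the wire and solder.

```python
# A ripple-carry adder built purely from Boolean gate operations.

def full_adder(a, b, carry_in):
    """One bit of addition: the sum and carry-out from XOR, AND and OR gates."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

def add_word(x, y, bits=4):
    """Add two integers one bit at a time, as a relay ALU would."""
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result  # wraps modulo 2**bits, like real hardware

print(add_word(5, 9))   # 14
print(add_word(9, 9))   # 2 -- overflow wraps on a 4-bit word
```

Each full adder is a handful of gates, so a four-bit word needs only a few dozen relays – laborious to build, but conceptually nothing beyond the truth tables a logic course teaches.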

I need to get down to specifics.

In the 1970s, the microprocessor was seen as a threat to manual labour. Since I was entering a Marxist phase at the time, it simply seemed wrong to have anything to do with microcomputers back then, since it would mean supporting companies whose products were putting people out of paid employment. Compared to my peers, our family were relatively late adopters of all sorts of technology, such as colour TV, cassette recorders and stereo record players. I knew one early adopter whose family bought a ZX80 when it came out; my family and others were rather disdainful of this and saw it as kind of “uppity”, I suppose. We got a ZX81 in November 1982, some time after the price had come down owing to the introduction of the ZX Spectrum, and after only a week we bought a 16K RAM pack. By the time we had one, though, I’d known BASIC for about a year, having taught myself from textbooks with no hands-on experience. I was still almost superstitiously suspicious of even home micros at the time. After all, Clive Sinclair had claimed that the ZX80 was powerful enough to control a nuclear power station, so taking that at face value, even that had the potential to replace human workers aplenty. At the time I had no concept of automation creating jobs to replace the ones it destroyed, so all I could see in the future was mass unemployment.

The ZX81, and to an extent its predecessor, had a kind of nostalgic charm even for its time. For instance, like older and larger computers it had its own character set – exclusively upper-case letters and only sixty-four printable characters – and in particular it used “**” to denote raising to a power rather than “^”, or actually “↑” on most micros of that vintage, if I remember correctly. It also displayed black characters on a white background, giving the impression that the output was all coming out of a printer onto that old-fangled fan-fold paper which looked like it had green musical staves on it and holes up the sides, folded with perforations separating the sheets. It was also, in practical terms, silent out of the box. My enduring impression of the ZX81 is that it’s an early-’60s minicomputer trapped in a plastic matchbox, and as such it had a flavour of former glories about it.

To me, the most mystifying thing about this computer was that it somehow seemed able to produce characters detailed enough for a much higher-resolution display, yet could not actually address those pixels directly. Why couldn’t it draw in as much detail as it displayed text? There were 2×2 graphics characters, but they only afforded a resolution of up to 64×44. It also didn’t scroll unless you told it to. I attempted to combine characters by printing them in the same location, expecting a kind of overstrike effect, but that didn’t work. Then there was the question of machine code. At the time, I assumed that computers acted directly upon the BASIC code they were given. When I saw the table at the back of the manual showing the machine code instructions alongside the alphanumeric and other characters, I drew the erroneous conclusion that the microprocessor simply read the BASIC line by line off the display file and executed it, so that, for example, SIN was a series of three instructions which together could achieve a floating point trig function, and so could COS, and so forth. This is kind of how the human brain operates with language, so I would defend this naïve view.
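The 64×44 figure falls out of the screen layout: thirty-two columns by twenty-two rows of character cells, each cell divided into 2×2 blocks. As an illustration of the arithmetic (my own sketch, not ZX81 code), a block-graphics “pixel” maps onto a character cell and one of its four quarters like this:

```python
# Mapping a 64x44 block-graphics pixel onto the 32x22 character screen.

def block_pixel(x, y):
    """Return (column, row, quadrant) for a pixel on the 64x44 grid."""
    assert 0 <= x < 64 and 0 <= y < 44
    col, row = x // 2, y // 2            # which of the 32x22 character cells
    quadrant = (y % 2) * 2 + (x % 2)     # 0..3: which quarter of the cell
    return col, row, quadrant

print(block_pixel(0, 0))    # (0, 0, 0): top-left quarter of the first cell
print(block_pixel(63, 43))  # (31, 21, 3): bottom-right quarter of the last
```

Plotting a point then means choosing, from the sixteen possible quarter-block patterns, the graphics character whose set of lit quadrants includes the new one – which is why the "resolution" was really just coarse character selection, not pixel addressing.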

Another unexpected feature of microcomputers was that they almost all used BASIC. I had expected that relatively cheap home computers such as the VIC-20 or TI-99/4A would use that language – it is, after all, Beginner’s All-purpose Symbolic Instruction Code – but that more expensive computers would have built-in Pascal, FORTRAN or ALGOL. This was, however, only true to a very limited extent.

It was eventually borne in upon me that programming languages were themselves software, written in a more fundamental language called machine code which controlled the microprocessor directly, so I learnt Z80 machine code and programmed in it in a limited way. I discovered there were ways of getting the ZX81 to produce sound and increase its vertical resolution, and even managed to produce a program which played the drum machine part of ‘Blue Monday’. This suggests that I climbed a very steep learning curve very quickly, since we acquired the computer in late November 1982 and New Order’s single was released less than five months later. I felt uncomfortable with the extent to which I seemed to be fixated on computer programming and tried to maintain an interest in other topics. My attitude to computers has always had this ambivalence to it. It’s also very likely that my pursuit of this hobby adversely affected my O-level results, and it’s a shame that the knowledge I was acquiring couldn’t have been used to get an O-level in computing. I actually used to help pupils who were studying computing with their homework, and I’ve often wondered what the disconnect is here. It reflects a pattern in my life of not being able to integrate my skills and experience with formal accreditation or recognition, and I suspect it’s linked to neurodiversity, but I don’t know how.

Much of the time I spent programming the ZX81 was also spent envying the specifications of more powerful computers, but at the time I think my parents were trying to motivate me to find paid work, which I did in fact do, and I proceeded to buy, of all things, a Jupiter Ace. This is a fairly famous computer designed by the team who did the ZX Spectrum at Sinclair Research Ltd and then left to form their own company, and it is chiefly known for having the programming language FORTH in ROM rather than BASIC. This was almost unique. There was an attempt to design a more advanced FORTH-based micro called the Microkey 4500, basically a wishlist fantasy computer which sounded excellent but hardly even got to the drawing-board stage, but to me the main appeal of the Ace was that it behaved like a “proper computer”. It has the complete ASCII character set, displays white text on a black background and has a Spectrum-style keyboard. It is in fact very similar to the Spectrum, even down to the font it uses, but lacks colour and point-addressable high-resolution graphics. However, by the time of its release in October 1982, consumers were expecting computers to have high-resolution colour graphics and proper sound. For some reason I’ve never understood to this day, most British micros at the time had built-in speakers for sound rather than using the speaker of the TV they were plugged into, which sometimes reduced the sound to a barely audible beep and necessitated built-in sound hardware while the audio capability of the TV sat there unused – a strange decision which would probably make more sense if I knew more about electronics. Jupiter Cantab, the company which made the Ace, went bust after less than two years, and this enabled me to buy the Ace. It has a special place in my life of a different kind, because it was the first durable product I ever bought with money I’d earned myself, and I still own it today.

FORTH is sufficiently close to the machine that its core can be implemented directly in machine code, and much of the language as supplied is defined in terms of more primitive words of the language itself. FORTH’s appeal to me was that it enabled me to work very closely with the hardware of the computer without having to use actual op codes. I attempted to design a prefix-notation version of the language, and although writing a description of it came quite naturally to me, I never completed the project. I also noticed in myself a tendency to sacrifice user-friendliness to make the implementation easier: for instance, I opted to use signed sixteen-bit integers as the only data type and to express them in hexadecimal with leading zeros.
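That idea of a language built up from its own primitives can be sketched in a few lines of Python (a toy model only, not real FORTH; all the names here are invented for illustration):

```python
# Toy FORTH-like model: a stack, a few "primitive" words implemented
# in the host language, and compound words defined purely in terms
# of existing words, as much of a real FORTH system is.
stack = []

def _add(): b = stack.pop(); a = stack.pop(); stack.append(a + b)
def _dup(): stack.append(stack[-1])
def _mul(): b = stack.pop(); a = stack.pop(); stack.append(a * b)

words = {"+": _add, "DUP": _dup, "*": _mul}

def define(name, body):
    """Define a new word as a sequence of existing words."""
    words[name] = lambda: run(body)

def run(source):
    for token in source.split():
        if token in words:
            words[token]()
        else:
            stack.append(int(token))  # literals go on the stack

# SQUARE is defined using only DUP and *, no new machine code needed.
define("SQUARE", "DUP *")
run("7 SQUARE")  # leaves 49 on the stack
```

On the Ace itself the equivalent would of course be a colon definition, with only the innermost words written in Z80 machine code.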

By this time I’d been aware of operating systems for about two years. I was at first only cognisant of CP/M, which had been devised in 1974, and Unix, which I think dates from 1969 although clearly not in its later form as the dates begin in 1970. MSDOS and PCDOS were also out there somewhere but since IBM PC compatible computers cost upwards of £3000 at the time I regarded them as permanently out of reach. Oddly, we did in fact have a PC clone briefly in our house in 1983, although it was never switched on and was simply being stored there for my father’s workplace. Incidentally, at the time that workplace had been using a confoundingly simple minicomputer they’d bought in the 1960s which appeared to have only four op codes. I found this hard to believe even at the time but extensive study of the technical manuals showed that this was indeed the case. I have no idea what it was now, but it was very strange and sounds like it would’ve been a challenge to program, though an interesting one.

For me, Unix was associated with minicomputers and wouldn’t even get out of bed for what at the time seemed like a ridiculously vast amount of RAM. However, there were also versions for the 32/16-bit 68000 and I think also the failed Z8000, although oddly CP/M was also rewritten in C for the 68000, which seemed ridiculously underpowered to me at the time. It was at that point that an annoying trend became apparent to me, which had been going on since at least the 1960s on mainframes and which I knew had been implemented on the PDP-11 range of minicomputers. There were privileged and user instructions. On the 6809 CPU, there had been a system and a user stack pointer, though both were available to the user via machine code and assembler (a direct, one-to-one programming language slightly friendlier to the user). On its more powerful successor, the 68000, the system stack pointer was unavailable to the user and only the user stack pointer was accessible. Other tendencies also came into play, such as many of the system flags being alterable only by the operating system and whole areas of memory being locked out from user access. This is done for security and stability purposes, but to me it felt patronising and hand-holding, and also like an in-built class system. There’s hoi polloi, the likes of us, and there’s the Programmer, who wrote the operating system and has total control. We are merely their minions, second-class computer users, and this was the start of a trend to lock people out of controlling their own devices which continues today. It’s the opposite of self-sufficiency and it means you have to trust whoever wrote, and often sells or licenses, the operating system.

There was also another trend which drives me round the bend: virtual memory. When I first learned about multiuser systems, I was astonished to find that they would sometimes save the whole program the user was running and switch to another user to load their program and run that, continuing in that cycle depending on how many users were on the system. Since hard drive storage is mechanical, it’s many orders of magnitude slower than solid-state RAM or ROM, and this makes things very slow, so I assumed the practice would soon be superseded by cheaper and larger memories. This didn’t happen. What happened instead was that later operating systems were designed to pretend there was more physical memory than there actually was, with the result that it was all too easy for a computer to get lost in its own internal musings and kind of forget there was some person out there trying to use the bloody thing. Fortunately we now have solid state drives and the situation is somewhat better.

Way into the late 1980s I would still doodle Z80 assembler programs in notebooks meant to do various things, though without being interested in implementing them. By that time, GUIs were starting to take over, and thereby hangs another tale.

I liked the Xerox Star and the Apple Lisa, which stole the former’s user interface and preceded the Macintosh. That to me did seem to be the way forward with computers at the time. Later on, Microsoft tried to copy it, and it seemed like a pointless thing bolted onto the front of the operating system which slowed it down so much that it wasn’t worth whatever benefits it might bring. The same applied to GEM, its main competitor. To me, a GUI feels like another way computer design is taking control away from the user. This was just the beginning. I am used to imperative and procedural programming. As far as I’m concerned, a computer program is a list of orders or instructions organised into smaller subroutines which tell the computer what to do. Putting it like that, it seems absurd to me that anything else would ever be so. Object-oriented programming began to become very popular, and at no point have I remotely understood it. Every description of what it is seems to use a metaphor which fails to describe what’s actually going on inside the device. It will do something like say that there’s a class of vehicles which have the properties of size, number of wheels and so forth which can be applied to create new concepts of vehicles, such as distinguishing a car, motorbike and lorry. That’s all very well, but I can never make a connection between that metaphor and computer programming, which looks like something completely different. It also uses a declarative paradigm, where you just seem to tell the computer what’s there and leave it be, which baffles me because how can anything actually be happening if you haven’t made it happen? I’ve attempted to describe my understanding in terms of variables and procedures, but people in the know have always told me I haven’t got it right. It’s been said, and I’ve come up against this, that if you’ve learned imperative and procedural programming, it makes it so much harder to learn OOP (Object-Oriented Programming). 
I also can’t shed the impression that a lot of it is obscurantist technobabble hiding a naked emperor. And if you can’t see what’s going on inside, how can you have control?
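For what it’s worth, the vehicle metaphor does correspond to something concrete. A class is roughly a record of variables plus the procedures allowed to act on that record, with the object passed as a hidden first argument. The two versions below do exactly the same thing (a hedged Python sketch; the names are invented):

```python
# Procedural version: a record of variables, and procedures which
# take the record as an explicit first argument.
def make_vehicle(wheels, top_speed):
    return {"wheels": wheels, "top_speed": top_speed}

def describe(vehicle):
    return f"{vehicle['wheels']} wheels, {vehicle['top_speed']} mph"

# Object-oriented version of the same thing: the class bundles the
# variables and procedures together, and the language passes the
# record ("self") as an implicit first argument to each method.
class Vehicle:
    def __init__(self, wheels, top_speed):
        self.wheels = wheels
        self.top_speed = top_speed

    def describe(self):
        return f"{self.wheels} wheels, {self.top_speed} mph"

car = make_vehicle(4, 120)   # procedural record
lorry = Vehicle(6, 60)       # object doing the same job
```

Whether that dissolves the mystery or just restates it is another question, but inside the machine there really is nothing more exotic going on than variables and subroutines grouped together.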

Another annoying trend has been created by the easy availability of memory. Back in the old days, computers needed to be programmed efficiently in terms of memory. For instance, a spell checker would be based on the rules of English spelling and object to, say, a Q not followed by a U or the use of “AY” in the middle of a word, but it didn’t have a dictionary. Nowadays, it does, and that takes up many megabytes in its uncompressed form, although I’m sure it is compressed. Likewise, chess games have tended to store whole lists of possible moves and try to find their way up a tree to the best result, using up huge amounts of memory, whereas previously they would’ve used the rules of chess and, I presume, some cleverly written strategy to win. To me, this seems lazy and disappointing. I want programs to be optimised to the same extent as they were when 64K seemed like impossible luxury. So much processing power is also wasted on running the GUI. We don’t need this because it isn’t really using the computer. It’s just making the computer easier to use by tying up processing power in unnecessary trinkets.
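A rule-based checker of the sort described needs no dictionary at all. A minimal sketch, using just the two rules mentioned above and nothing else (a real one would have many more rules, and this one will wrongly reject loanwords like “Iraq”):

```python
import re

def plausible_spelling(word):
    """Apply two classic rules of English spelling, no dictionary:
    a Q should be followed by a U, and "AY" should not appear in
    the middle of a word."""
    w = word.upper()
    if re.search(r"Q(?!U)", w):   # Q not followed by U
        return False
    if re.search(r".AY.", w):     # AY with letters on both sides
        return False
    return True
```

The whole thing fits in a few dozen bytes of logic, which is exactly why the dictionary-free approach suited machines with tiny memories.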

So: you might think I’m a Linux person. I did too for a long time, but I’m not. As far as I’m concerned, you have to take care of your hardware and ensure it remains useful for as long as possible for a number of reasons:

  1. It takes a lot of energy and resources, and does a lot of environmental damage, to make a computer, and the working conditions of the people who mined the minerals, refined the compounds and put the device together are often questionable. Once all of that has been done, you should be able to hang onto it for a long time.
  2. We ourselves haven’t necessarily got that much money and we should expect our devices to be as useful as possible. That means access to hardware and no hand-holding.
  3. When we finally discard our hardware it goes to landfill or to some poor Third World community, where it poisons the rivers and gives the people disassembling it physical disabilities, cancer and birth defects, not to mention what it does to the biosphere.

All of this has got to be the number one priority when we consider computers and other devices, and to be fair Linux does go a considerable way towards addressing it. But there’s a problem. A lot of the people who are involved in designing and coding for Linux are very focussed on supporting up-to-date hardware. It’s a big community and they aren’t all doing that, but many of them are, and it’s often hard to find people who are more genuinely concerned about the e-waste and other sustainability aspects of the issue. The other thing, and this may be less problematic today than it used to be, is that Linux people are often not people people, and in the end this amounts to user interfaces which are not very friendly. I’m reminded of the Dilbert cartoon strip of the computer programmer saying “it’s my philosophy that the computer interface should hurt the user”, and employing samples of birds being killed by cars as the main notification sound. I’ve had to use Linux GUIs which are unwittingly displaying at 320×200 and put the OK button completely outside the display, or which don’t recognise that the memory map for the video RAM is organised in such a way that everything looks like incomprehensible multicoloured vertical stripes. And yet a perfectly good, low-end 640×480 sixteen-colour display could’ve been used, which somehow is not the default. Why?

Don’t get the impression that I’m not as susceptible as most other people to the appeal of Windows. I was very impressed by Windows 3.1, which I didn’t come across until 1998 due to forcing myself to go cold turkey on computers for a decade or two. As far as I’m concerned, I’d be happy for the GUI to look like that today and it all seems a bit unnecessary to make it any fancier, particularly because in doing so you’re consigning millions of PCs to landfill for no good reason. I think Program Manager, the main shell for Win 3.1, hung around until at least Windows XP although it didn’t retain its look. It may be due to the fact that our 486-based 33 MHz PC with VGA graphics and non-working sound card was just less messed-about than other computers we’ve used since, but it was the most stable version of Windows I’ve ever used in earnest. It crashed once the whole time we used it, and that was a major and worrying event. Incidentally, there used to be an Explorer-style shell for Win 3.1, which made it practically indistinguishable from classic 32-bit Windows, which I used for a short period of time, and in fact I’ve even installed it as the default shell on later versions due to it being more compact and stable than the default shell.

We then leapfrogged over Windows 95 to use Windows 98 on a computer which was a total mess. It was a 120 MHz Pentium with SVGA, a working sound card and 16 Mb RAM. This is below the recommended specs for Windows 98, but there were other imponderable issues with that PC at the time. It only lasted a few months before we handed it over to a friend who was more au fait with computers, who got it to work. We replaced it with our final desktop format computer, an AST Bravo MS P/75 with an ATI Mach-16 graphics card, which incidentally has 2-D acceleration but not the 3-D which had become practically universal by that point. This was actually a downgrade, but was more reliable, and at that time I was really into the themes and skins available on Windows 98. I also struggled endlessly to get Linux to work on it. QNX was fine, but then it always is, BeOS also worked okay. It ultimately got upgraded to 128 Mb RAM and a piggy-back 133 MHz Pentium if I remember correctly. This was the first computer to have a reliable dial-up internet connection. It was fine at running Quake but it was only barely capable of showing DivX videos even when upgraded to the nines.

The next stage came in 2002, as the start of our plan to give up television, partly for the children’s sake. This meant ultimately having access to DVDs, and therefore we bought a new computer, though initially only with a CD-ROM drive, mainly in order to get broadband internet. By this time we were accumulating e-waste enormously because I was reluctant to let the old hardware go the way of all silicon. This PC had an Athlon and started off with Windows 98; we upgraded it to Windows XP in 2004. Windows XP was a good operating system on the whole, but it was the first version of Windows we used which broke compatibility with 16-bit drivers and some 16-bit applications, which necessitated the slinging out of various bits of hardware. This was also our first PC to have USB slots, which I upgraded as well. The initial specifications of this machine were 128 Mb RAM, a 40 Gb hard drive, a 1 GHz Athlon of some description, on-board graphics and a Silicon Integrated Systems motherboard. One thing I didn’t like about Windows XP was its childish-looking graphical scheme, but it still had Windows Classic, whose appearance was far better. This was also the first computer we used a TFT screen with, and the amount of space taken up by a 22″ CRT monitor is something to behold.

In 2007, we got a Windows Vista machine. This was because its predecessor had exploded after I installed a graphics card whose power requirements exceeded what the machine’s power supply could deliver. Apparently it emitted something like ball lightning, although I wasn’t there at the time. We persisted with the Vista machine for a further seven years. My chief issue with Windows Vista was that, left to itself, it seemed to spend too much time making the windows look translucent. Interestingly, the system requirements for Windows after a certain point went into reverse, probably because people were no longer impressed with the eye candy. In 2015 we acquired the computer which is currently sitting upstairs and of course runs Windows 10. Ironically, the stability of Windows 10 has made it possible to install Linux properly and use it on that PC. I have a history of using Live distributions of Linux on memory sticks as my preferred operating systems because they’re more secure that way.

Windows PCs have become very much secondary in our lives in recent years, as is probably true for many other people. We mainly use Android and Chromebooks, although Sarada still uses a Windows laptop from time to time. There no longer seems to be the pressure to upgrade, or maybe we’ve just become sidelined and lost interest in using them in other ways. I still program using FORTH and I have the modern version of the Commodore 64, which I don’t use as much as I should. To be honest, I’m not a big fan of the 64 because its BASIC doesn’t support the hardware properly and the palette is very odd for no apparent reason, but again all these are challenges to which I should rise. I’m vaguely considering writing a few arcade games for it, although I should stress that I’m very much in the Z80 camp rather than the 65 series one. I’ve never found the 6502 (technically the 6510 in the case of the 64) easy to program because I find its address modes and the awkwardness of handling sixteen-bit integers irksome, although again that’s down to possibly admirable minimalist design.

We’ve also owned a few computers I haven’t mentioned. I bought a Tandy Color Computer in 1999, there was a UK101 in our airing cupboard for a while, we had a Windows ME PC which was technically an ACT Apricot of all things, and also a plasma-display 286-based Toshiba laptop I used DOSLynx on as a browser. That laptop actually ran Windows 1.0!

Although I’ve never done it, I am curious as to the possibility of programming ARM or 64-bit Intel processors in machine code or assembler without an operating system as such. It would need a computer dedicated to that purpose of course. I would also imagine that attempting to use an ARM in that way would be a bit of a tall order, although I don’t know, because my understanding is that its instruction set is optimised for use with code no human has directly written nowadays, but I presume that Intel-like CPUs are another matter. But just as I’ve never really got into coding in a major way, I doubt I’ll ever get round to it. One thing I do still do with coding is occasionally write FORTH or BASIC programs to work out maths problems, and I’ve long preferred APL to spreadsheets, which quite frankly I can’t get my head round.

I very much doubt I’ll ever make the transition to object-oriented programming.

Subtract One And Branch

In case you’re wondering why I’m not talking about the murder of Sarah Everard and the fallout from that, I try to avoid covering gender politics on this blog because I have another blog devoted to that alone, and I also tend to avoid saying anything (see Why I’ve Gone Quiet) because cis women need to be heard more than I do on these issues and I don’t want to shout them down. In fact the other blog is quite infrequent for the same reason. It doesn’t mean I don’t care or don’t consider it important.

You are in a living room, circa 1950. In one corner sits a Bakelite box, lit up from within behind a hessian grille with a design like rays from a rising Sun at the front, and a rectangular panel carrying names like “HILVERSUM” and “LUXEMBOURG”. In the other corner sits another somewhat similar Bakelite box with a series of lamps at the top and a slot at the bottom, into which cards with rectangular windows punched out of them can be inserted. There is a row of push buttons above it. This is the domestic computer. It didn’t exist, but could it have?

In this post I mentioned that some say the last computer which could be “completely understood” was the BBC Micro, released in 1981, and expressed my doubt that this was true because apart from memory size, even an upgraded IBM PC would probably be about as sophisticated. However, this got me to thinking about a tendency I have to minimalise in terms of IT, and wondering how far it could be taken and still leave useful hardware, and that also brings up the question of what counts as useful.

In this other post, I described a fictional situation where, instead of the Apple ][ being one of the first mass-market microcomputers, Sinclair ended up bringing out the ZX Spectrum six years early, that is, just after its Z80 CPU was released. That isn’t quite what I said: read the post and you’ll see what I mean. The point is that the specifications of the two computers are very similar, and if the ULA in the Speccy is realised via discrete logic (smaller and simpler integrated circuits), all the hardware necessary to construct a functional equivalent to it, except for the slightly faster microprocessor, was available already, and if someone had had the idea, they could’ve made one. Then a similar mind game arises: how much further back is it possible to go and still manufacture a reasonably small but workable computer? Could you, for example, even push it back to the valve era?

Apologies for already having said things which sound off-putting and obscurantist. As an antidote to this, I want to explain, just in case you don’t know, how digital computers work. Basically there’s a chip which fetches codes and data from memory. The codes tell the computer what to do with the data, and the chip can make decisions about where to look next from the results of what it’s done with those data. It’s kind of like a calculator on wheels which can read instructions from the path it’s travelling on.

There are two basic design philosophies taken with microprocessors, which for the sake of brevity I’m going to call Central Processing Units, or CPUs. One is to get it to do lots of sophisticated things with a large number of instructions. This is called the CISC approach – Complex Instruction Set Computer. The CPUs in most Windows computers are CISCs. The other is to get it to do just a few things with a small number of instructions, and this is called a RISC – Reduced Instruction Set Computer. Chips in tablets and mobile phones are usually RISCs. In particular they’re probably going to be using one of the CPUs in the ARM series, the Acorn RISC Machine. The Pentium and the like are kind of descended from the very early Intel 8080, which had a large number of instructions, but the ARMs are the spiritual successors of the 6502.

The 6502, the CPU used in the Apple ][ and the BBC Micro, and the similar 6510 used in the Commodore 64, were designed with simplicity in mind, and this involved using far fewer transistors than the Z80(A) found in the Spectrum. The Z80 had 694 instructions, many of which could only be accessed by prefix bytes acting a bit like shift keys on a typewriter. By contrast, not taking address modes into consideration, the 6502 only had fifty-six. Processors usually have single or double cells of memory, called registers, used to store numbers to work on or otherwise use internally. The Z80 had nineteen, I think, but the 6502 only had six. To be honest, I was always a lot more comfortable using the Z80 than the 6502, and as you may be aware this was a constant source of debate between nerds in the early ’80s, and to some extent still is. The 6502 always seemed really fiddly to me. However, it was also faster than its competitor, and comprised fewer transistors because of the corners which had been cut. Unlike the Z80, it used pipelining, interpreting the next instruction while executing the previous one, and its much shorter instruction list meant it needed less time to decode what to do.

Research undertaken in the 1980s counted the proportions of instructions used in software, and it was found, unsurprisingly, that for all processors a small part of the instruction set was used for the majority of the time. This is of course the Pareto principle, which applies to a wide range of phenomena, such as the fact that eighty percent of the income from my clients used to come from twenty percent of them, and that one herb out of five is used in four out of five prescriptions, and so on. In the case of CPUs this meant that if the jobs done by the majority of instructions could be performed using the minority of often-used instructions, the others could be dispensed with at the cost of taking up more memory. An artificial example of this is that multiplying six by seven could be achieved by adding up six sevens, and therefore that an integer multiply instruction isn’t strictly necessary. Of course, this process of performing six additions could take longer than the multiplication would in the first place, but choosing the instructions carefully would lead to an optimal set, which would all be decoded faster due to the smaller variety.
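The repeated-addition example looks like this (a deliberately naive sketch of the trade-off: more instructions executed, but one fewer kind of instruction needed in the CPU):

```python
def multiply(a, b):
    """Multiply two non-negative integers using only addition,
    the way a CPU with no multiply instruction has to."""
    result = 0
    for _ in range(a):
        result += b   # add b to the running total, a times
    return result
```

Six additions instead of one multiply: slower here, but the transistors saved by dropping the multiplier can be spent making every remaining instruction faster.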

The question therefore arises of the lowest possible number of instructions any CPU could have and still be Turing-complete. A Turing-complete machine is a computer which can do anything the theoretical machine Turing thought of in 1936 could do: a machine consisting of an infinitely long strip of tape and a read-write head, whose behaviour can be influenced by the symbol underneath the head. It more or less amounts to a machine which, given enough time, can do anything any digital computer could do. My description of the calculator on wheels above is effectively a Turing machine. What it means, for example, is that you could get a ZX80, rewire it a bit, give it enough storage, and have it do anything today’s most expensive and up-to-date PC could do, but of course very, very slowly. But it can’t be just any digital computer-like machine. For instance, a non-programmable calculator can’t do it, nor can a digital watch. The question is, as I’ve said, which instructions would be needed to make this possible.
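To make the tape-and-head idea concrete, here is a tiny Turing-style machine (a hedged sketch, not Turing’s own formulation) whose entire rule table increments a binary number: carry leftwards over the 1s, then write a 1 and halt:

```python
def increment_binary(tape):
    """Simulate a miniature Turing-style machine: the tape is a
    string of '0'/'1' symbols, the head starts at the rightmost
    cell, and the machine propagates a carry to the left."""
    tape = list(tape)
    head = len(tape) - 1
    while True:
        if head < 0:             # ran off the left edge: extend the tape
            tape.insert(0, "1")
            break
        if tape[head] == "1":    # 1 plus carry = 0, carry continues left
            tape[head] = "0"
            head -= 1
        else:                    # 0 plus carry = 1, halt
            tape[head] = "1"
            break
    return "".join(tape)
```

Everything is there in miniature: symbols on a tape, a head, and behaviour determined by the symbol under the head.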

There used to be a very simple programming language used in schools called CESIL (I created and wrote most of that article incidentally – there are various deeply obscure and useless articles on Wikipedia which were originally mine). As you can see from the link, there are a total of fourteen instructions, several of which are just there for convenience to enable input and output, such as LINE and PRINT. It’s a much smaller number than the 6502’s but also rather artificial, since in reality it would be necessary to come up with code for proper input and output. Another aspect of redundancy is the fact that there are three jump instructions: JUMP, JINEG and JIZERO – unconditional jump, jump if negative and jump if zero. All that’s really needed there is JINEG. It can be ensured that the content of the accumulator register is negative in advance, in which case JINEG performs the same function as an unconditional jump; and provided the accumulator is known to be non-negative, subtracting one and then performing JINEG is the same as JIZERO, since zero is the only non-negative value which goes negative when one is subtracted from it. Hence the number of instructions is already down to six, since multiplication and division are achievable by other means, although as constituted CESIL would then be too limited to allow for input and output because it only uses named variables. It would be possible, though, to ensure that one variable was a dummy which was in fact an interface with the outside world via some peripherals.
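The trick of replacing JIZERO with “subtract one, then JINEG” rests on a simple identity, which can be checked mechanically (the function names are mine, standing in for the two CESIL instructions):

```python
# For a non-negative accumulator, the tests "acc == 0" and
# "(acc - 1) < 0" always agree, so JIZERO can be emulated by
# subtracting one and doing JINEG instead.
def jizero_would_jump(acc):
    return acc == 0

def sub_one_then_jineg_jumps(acc):
    return (acc - 1) < 0

# The identity holds for every non-negative accumulator value;
# it breaks if acc can already be negative, hence the caveat.
for acc in range(1000):
    assert jizero_would_jump(acc) == sub_one_then_jineg_jumps(acc)
```

Note the one wrinkle: if the accumulator might already be negative, both a JIZERO and the emulated version have to be preceded by code establishing that it isn’t.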

It has in fact been determined that a machine can be Turing-complete even if it has only one instruction, namely “Subtract One And Branch If Negative” (the better-known variant, “subleq”, subtracts one memory cell from another and branches if the result is negative or zero). Incidentally, with special arrangements it can even be done with Intel’s “MOV” instruction, but it needs extensive look-up tables to do this and also requires special address modes. Hence there can be a SISC – a Single Instruction Set Computer, more often called an OISC, or One Instruction Set Computer. It isn’t terribly practical because, for example, to add two numbers in the hundreds this instruction would need to be executed hundreds of times. It depends, of course, on the nature of the data in the memory, and this has an interesting consequence. In a sense, this is a zero instruction set computer (which is officially something different) because it can be assumed that every location pointed to by the program counter has an implicit “SOBIN” instruction. The flow and activities of the program are actually determined by a memory location field which tells the CPU where to go next. This means that the initial value of the data in the accumulator needs to be fixed, probably with every bit of the word set.
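Here is one way such a machine might be simulated (a sketch under assumptions of my own: each program word names a memory cell to decrement and an address to branch to when the result goes negative, so a cell kept permanently negative gives an unconditional jump, and a cell counted down past zero gives a loop):

```python
# Hypothetical "subtract one and branch if negative" machine.
# The instruction is implicit; each program word just says which
# cell to decrement and where to branch if the result is negative.
def run_sobin(program, memory, max_steps=10_000):
    pc, steps = 0, 0
    while 0 <= pc < len(program) and steps < max_steps:
        cell, target = program[pc]
        memory[cell] -= 1
        pc = target if memory[cell] < 0 else pc + 1
        steps += 1
    return steps

memory = {"A": 5, "N": -1}
program = [
    ("A", 2),   # 0: decrement A; once it goes negative, halt (pc = 2)
    ("N", 0),   # 1: N stays negative, so this always jumps back to 0
]
steps = run_sobin(program, memory)
```

This two-word program is a conditional loop built from nothing but the single instruction: it takes eleven executions of SOBIN just to count a cell down from five, which illustrates why the scheme is Turing-complete but not remotely practical.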

This would all make for a very difficult machine to work with, but it is possible. It would be very inefficient and slow, but it would also reduce the number of logic gates needed to a bare minimum. It would simply consist of an arithmetic unit, which thanks to two’s complement need only be a binary adder: negative integers can be represented simply by treating the second half of the interval between zero and two raised to the power of the word length as if those numbers were negative.

This is a one-bit binary adder:

It works like this: 0+0=0 with no carry, 0+1=1 with no carry, 1+0=1 with no carry and 1+1=0 with carry. The symbols above, by the way, are XOR – exclusive or – at the top, AND in the middle and OR (inclusive, i.e. “and/or”) at bottom right. These conjunctions describe the inputs and outputs, where one, i.e. a signal, is truth and zero, i.e. no signal, is falsehood. Incidentally, you probably realise this, but this logic is central to analytical philosophy, meaning that there are close links between philosophy, mathematics and computer science, and if you can understand logic you can also understand the basis of much of the other two.
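The gate arrangement can be written out directly as truth functions. This is a one-bit full adder, with a carry-in so that several of them can be chained (a sketch; the gate layout matches the usual textbook circuit rather than any particular diagram):

```python
# The three gates as truth functions: 1 is "true", 0 is "false".
def XOR(a, b): return a ^ b
def AND(a, b): return a & b
def OR(a, b):  return a | b

def full_adder(a, b, carry_in=0):
    """One-bit adder built only from the gates above:
    two XORs produce the sum, two ANDs and an OR the carry."""
    partial = XOR(a, b)
    total = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return total, carry_out
```

Run through all the input combinations and it reproduces the table above exactly, which is the whole point: addition is nothing but logic.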

Most processors would do all this in parallel – an eight-bit processor would have eight of these devices lined up, enabling any two integers between zero and two hundred and fifty-five to be added, or any two integers between -128 and 127 to be added or subtracted, either way in one go. But this could also be done in series if the carry is stored and the serial format is converted to and from parallel to communicate outside the processor. This reduces the transistor count further. All logic gates can be implemented by NAND gates, or if preferable by NOR gates. A NAND gate can be implemented by two transistors and three resistors, and a NOR gate with the same components connected differently. Transistors could also be replaced by valves or relays, and it would also be possible to reuse some of the components in a similar manner to the serial arrangement with the adder, although I suspect the point would come when the multiplication of other components would mean this was no longer worthwhile.
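The serial arrangement reuses one adder for every bit, storing the carry between steps, and two’s complement gives subtraction for free. A self-contained sketch (the bit widths and cell names are mine):

```python
def full_adder(a, b, carry_in):
    partial = a ^ b
    return partial ^ carry_in, (a & b) | (partial & carry_in)

def serial_add(x, y, bits=8):
    """Add two integers one bit at a time through a single one-bit
    adder, keeping the carry between steps, as a bit-serial ALU
    with only one adder circuit would."""
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

def to_signed(n, bits=8):
    """Two's complement: treat the top half of 0..2**bits as negative."""
    return n - (1 << bits) if n >= (1 << (bits - 1)) else n

# Subtraction is just addition of the two's complement: 5 - 7 = -2.
diff = to_signed(serial_add(5, (-7) & 0xFF))
```

Eight passes through one adder instead of eight adders in parallel: eight times slower, but roughly an eighth of the components, which is exactly the kind of trade a valve or relay machine would want to make.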

I can’t really be precise as to the exact number of components required, but clearly the number is very low. Hence some kind of computer of a reasonable size could have been implemented using valves, relays or discrete transistors. ROM could be realised via fuses, with blown fuses as zeros and working fuses as ones, and RAM by capacitors, by Williams tubes (cathode ray tubes with plates to feed the static charge back into the cathodes) or by magnetic cores, rings threaded onto wires – all methods which have been used in the past. Extra parts would of course be needed, but it is nonetheless feasible to build such a computer.

I feel like I’m on the brink of being able to draw a logic diagram of this device, but in practical terms it’s beyond me. I haven’t decided on input and output here, but that could be achieved via arrays of switches and flashing lights. And if this could be done with 1950s technology, who knows what the limit would be?