Not gonna lie: this post is inspired by Brian of Brian’s Blog fame, which you should all of course now visit and to which you should subscribe. But his post is about operating systems, whereas mine is about how it’s all gone to Hell in a handcart.
Ethical decision-making in the marketplace often comes down to trust versus self-sufficiency. On the one hand, when you buy something, even from a sole trader, you are kind of out-sourcing your ethical choices to a third party, whom you may not know and in whom your trust may be misplaced. To some extent, it may not even be their fault that they have been forced to take negative and harmful paths in what they do, because it’s the system, not the people. The very fact that we live in a monopoly capitalist society forces us to harm others. Leaving the larger political picture out of the equation for a bit, one can take control of one’s life to varying degrees on the ethical front, but ultimately this usually amounts to self-sufficiency. The extent to which one is self-sufficient correlates with the degree of responsibility one is taking for how one’s actions affect others, including the planet. However, even there it’s important to recognise privilege. One may have access to an allotment and be able to grow food, but that depends on various factors: the good fortune of having that access at all, given how much land formerly devoted to allotments has been sold off to the highest bidder; not living in a food desert; having the time to put into raising that food, and so on. The same applies, to an even greater extent, to gardening for food. Not everyone has the luxury of a garden where they can grow crops, and foraging only works even to the degree it does because relatively few people do it. There’s also a knowledge, experience and skills base some people can draw on and others can’t. The fact that I’m a herbalist makes it easier for me to forage, for instance.
In the case of growing food, one usually at least has a fairly good chance of having the facilities and skills needed to manage this. In other areas this may not be so. For instance, we can’t make a gate ourselves, or so I tell myself (I do have very limited carpentry skills which I never use and which were pretty poor anyway, so in theory I could), but we could buy a gate and fix it onto the wall using the abilities available in this household. This costs about £50. Presumably buying the timber and using tools we’d already invested in would cost us less, but when we asked someone else to do it for us we were quoted a figure of many hundreds of pounds.
The opposite end of this kind of self-sufficiency is found in the likes of computers and allied devices. The degree of skill and automation involved, and the combined effort of innumerable people, are what make devices like laptops, tablets and smartphones, and their associated storage media and support items such as chargers, cables and Wi-Fi, possible. That huge morass of people doing things for you is difficult to replace with your own skills because there is only so much that can be fitted inside a single head. Given a few lengths of wire and some relays, I could probably put together an arithmetic and logic unit which worked on a very short binary word length, and it isn’t often appreciated that one major reason I can do this is that I’m a philosophy graduate. I’m also dyspraxic though, so it would be a struggle for reasons other than the mere knowledge of how to do it. Consequently I rely on others for the digital electronics I use, and that reliance means that, as usual, I’m handing over responsibility for ethical choices to other people, whose own choices are compromised by working within a capitalist system.
I need to get down to specifics.
In the 1970s, the microprocessor was seen as a threat to manual labour. Since I was entering a Marxist phase at the time, it simply seemed wrong to have anything to do with microcomputers back then, because it would mean supporting companies whose products were putting people out of paid employment. Compared to my peers, our family were relatively late adopters of all sorts of technology, such as colour TV, cassette recorders and stereo record players. I knew one early adopter whose family bought a ZX80 when it came out. My family and others were rather disdainful of this and saw it as kind of “uppity”, I suppose. We got a ZX81 in November 1982, some time after the price had come down owing to the introduction of the ZX Spectrum. After only a week, we purchased a 16K RAM pack. However, by the time we had one, I’d known BASIC for about a year, having taught myself from textbooks with no hands-on experience. I was still almost superstitiously suspicious of even home micros at the time. After all, Clive Sinclair had claimed that the ZX80 was powerful enough to control a nuclear power station, so, taking that at face value, even it had the potential to replace human workers aplenty. At the time I had no concept of automation creating jobs to replace the ones that had been destroyed, so all I could see for the future was mass unemployment.
The ZX81, and to an extent its predecessor, has a kind of nostalgic charm to it even for the time. For instance, like older and larger computers it has its own character set, exclusively upper case letters and only sixty-four printable characters, and in particular it uses “**” to denote raising to a power rather than “^”, or actually “↑” for most micros of that vintage if I remember correctly. It also displays black characters on a white background, giving the impression that the output is all coming out of a printer onto the old-fangled paper which looked like it had green musical staves on it and holes up the sides, and was folded with perforations separating the sheets. It was also, in practical terms, silent out of the box. My enduring impression of the ZX81 is that it’s an early ’60s minicomputer trapped in a plastic matchbox, and as such it had a flavour of former glories about it.
To me, the most mystifying thing about this computer was that it could somehow produce characters detailed enough to belong on a much higher resolution display, yet could not actually address those pixels directly. Why couldn’t it draw in as much detail as it displayed text? There were 2×2 block graphics characters, but with 32 columns and 22 usable rows they only afforded a resolution of 64×44. It also didn’t scroll unless you told it to. I attempted to combine characters by printing them in the same location, expecting a kind of overstrike effect, but that didn’t work. Then there was the question of machine code. At the time, I assumed that computers directly acted upon the BASIC code they were given. When I saw the table at the back of the manual showing the machine code instruction equivalents to the alphanumeric and other characters, I drew the erroneous conclusion that the microprocessor simply read the BASIC line by line off the display file and executed it, so that, for example, the three characters of SIN were a series of three instructions which together could achieve a floating point trig function, and so could COS, and so forth. This is kind of how the human brain operates with language, so I would defend this naïve view.
Another unexpected feature of microcomputers was that they almost all used BASIC. I had expected that the relatively cheap home computers such as the VIC-20 or TI-99/4A would use that language, but that, because it’s Beginner’s All-purpose Symbolic Instruction Code, the more expensive computers would have built-in PASCAL, FORTRAN or ALGOL. This was, however, only true to a very limited extent.
It was eventually borne in upon me that programming languages were themselves software, written in a more fundamental language called machine code which controlled the microprocessor directly, so I learnt Z80 machine code and programmed in it in a limited way. I discovered there were ways of getting the ZX81 to produce sound and increase its vertical resolution, and even managed to produce a program which played the drum machine bit of ‘Blue Monday’. This suggests that I traversed a very steep learning curve very quickly since we acquired the computer in late November 1982 and New Order’s single was released less than five months later. I felt uncomfortable with the extent to which I seemed to be fixated on computer programming and tried to maintain interest in other topics. My attitude to computers has always had this ambivalence to it. It’s also very likely that my pursuit of this hobby adversely affected my O-level results, and it’s a shame that the knowledge I was acquiring couldn’t have been used to get an O-level in computing. I actually used to help pupils who were studying computing with their homework, and I’ve often wondered what the disconnect is here. It reflects a pattern in my life of not being able to integrate my skills and experience with formal accreditation or recognition, and I suspect it’s linked to neurodiversity but I don’t know how.
Much of the time I spent programming the ZX81 was also spent envying the specifications of more powerful computers, but at the time I think my parents were trying to motivate me to find paid work, which I did in fact do, and I proceeded to buy, of all things, a Jupiter Ace. This is a fairly famous computer designed by the team who did the ZX Spectrum at Sinclair Research Ltd and then left to form their own company, and it is chiefly known for having the programming language FORTH in ROM rather than BASIC. This was almost unique. There was an attempt to design a more advanced FORTH-using micro called the Microkey 4500, which was basically a wishlist fantasy computer which sounded excellent but hardly even got to the drawing board stage, but to me the main appeal of the Ace was that it behaved like a “proper computer”. It has the complete ASCII character set, displays white text on a black background and has a Spectrum-style keyboard. It is in fact very similar to the Spectrum, even down to the font it uses, but lacks colour and point-addressable high resolution graphics. However, by the time of its release, October 1982, consumers were expecting computers to have high resolution colour graphics and proper sound. For some reason I’ve never understood to this day, most British micros at the time had built-in speakers for sound rather than using the speaker of the TV they were plugged into. This sometimes reduced the sound to a barely audible beep and meant adding sound hardware while the audio capability of the TV just sat there unused, a strange decision which would probably make more sense if I knew more about electronics. Jupiter Cantab, the company which made the Ace, went bust after less than two years, which is what enabled me to buy one. The Ace has a special place in my life of a different kind, because it was the first durable product I ever bought with money I’d earned myself, and I still own it today.
FORTH is sufficiently close to the machine that its core can be implemented directly in machine code, and much of the language as supplied is defined in terms of more primitive words in FORTH itself. FORTH’s appeal to me was that it enabled me to work very closely with the hardware of the computer without having to use actual op codes. I attempted to design a prefix notation version of the language, but although I wrote a complete description, which came quite naturally to me, I never completed the implementation. I also noticed in myself a tendency to sacrifice user-friendliness for ease of programming: for instance, I opted to use signed sixteen-bit integers as the only data type and to express them in hexadecimal with leading zeros.
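To give a flavour of what I mean, here’s a minimal sketch in present-day standard FORTH rather than a transcript of anything I actually typed into the Ace, with word names made up for the example. Each colon definition just strings together words which already exist, all the way down to the machine code primitives:

```
\ Two words built out of more primitive ones.
: SQUARE ( n -- n*n )  DUP * ;          \ duplicate the top of the stack, multiply
: SUM-OF-SQUARES ( a b -- a*a+b*b )  SQUARE SWAP SQUARE + ;
3 4 SUM-OF-SQUARES .   \ prints 25
```

That’s the whole trick: the outer layers of a FORTH system are themselves written in FORTH, which is part of why it felt so close to the metal.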
By this time I’d been aware of operating systems for about two years. I was at first only cognisant of CP/M, which had been devised in 1974, and Unix, which I think dates from 1969, although clearly not in its later form, as the Unix epoch only begins in 1970. MS-DOS and PC DOS were also out there somewhere, but since IBM PC compatible computers cost upwards of £3000 at the time I regarded them as permanently out of reach. Oddly, we did in fact have a PC clone briefly in our house in 1983, although it was never switched on and was simply being stored there for my father’s workplace. Incidentally, at the time that workplace had been using a confoundingly simple minicomputer they’d bought in the 1960s which appeared to have only four op codes. I found this hard to believe even at the time, but extensive study of the technical manuals showed that it was indeed the case. I have no idea what it was now, but it was very strange and sounds like it would’ve been a challenge to program, though an interesting one.
For me, Unix was associated with minicomputers and wouldn’t even get out of bed for what at the time seemed like a ridiculously vast amount of RAM. However, there were also versions for the 32/16-bit 68000 and I think also for the failed Z8000, although oddly CP/M was also rewritten in C for the 68000, which seemed to me at the time a ridiculously underpowered system for such a chip. It was at that point that an annoying trend became apparent to me, one which had been going on since at least the 1970s, when I’m aware it was implemented on the PDP-11 range of minicomputers: privileged and user instructions. On the 6809 CPU there had been a system and a user stack pointer, though both were available to the user via machine code and assembler (a direct one-to-one programming language slightly friendlier to the user). On Motorola’s more powerful 68000, the system stack pointer was unavailable to the user and only the user stack pointer was accessible. Other tendencies also came into play, such as many of the system flags being alterable only by the operating system and whole areas of memory being locked out from user access. This is done for security and stability purposes, but to me it felt patronising and hand-holding, and also like an in-built class system. There’s hoi polloi, the likes of us, and there’s the Programmer, who wrote the operating system and has total control. We are merely their minions, second-class computer users, and this was the start of a trend to lock people out of controlling their own devices which continues today. It’s the opposite of self-sufficiency, and it means you have to trust whoever wrote, and often sells or licenses, the operating system.
There was also another trend which drives me round the bend: virtual memory. When I first learned about multiuser systems, I was astonished to find that they would sometimes save out the whole program one user was running, switch to another user, load their program and run that, and continue in that cycle depending on how many users were on the system. Since hard drive storage is mechanical, it’s many orders of magnitude slower than solid-state RAM or ROM, which makes things very slow, so I assumed this approach would soon be superseded by cheaper and larger memories. This didn’t happen. What happened instead was that later operating systems were designed to pretend there was more physical memory than there actually was, with the result that it was all too easy for a computer to get lost in its own internal musings and kind of forget there was some person out there trying to use the bloody thing. Fortunately we now have solid state drives and the situation is somewhat better.
Way into the late 1980s I would still doodle Z80 assembler programs meant to do various things in notebooks, though without being interested in implementing them. By that time, GUIs were starting to take over, and thereby hangs another tale.
I liked the Xerox Star and the Apple Lisa, which stole the former’s user interface and preceded the Macintosh. That did seem to me to be the way forward for computers at the time. Later on, Microsoft tried to copy it with Windows, which seemed like a pointless thing bolted onto the front of the operating system, slowing it down so much that it wasn’t worth whatever benefits it might bring. The same applied to GEM, its main competitor. To me, a GUI feels like another way computer design is taking control away from the user. This was just the beginning. I am used to imperative and procedural programming. As far as I’m concerned, a computer program is a list of orders or instructions, organised into smaller subroutines, which tell the computer what to do. Putting it like that, it seems absurd to me that anything else would ever be so. Object-oriented programming began to become very popular, and at no point have I remotely understood it. Every description of what it is seems to use a metaphor which fails to describe what’s actually going on inside the device. It will do something like say that there’s a class of vehicles which have properties such as size and number of wheels, which can be applied to create new concepts of vehicles, such as distinguishing a car, a motorbike and a lorry. That’s all very well, but I can never make a connection between that metaphor and computer programming, which looks like something completely different. It also uses a declarative paradigm, where you just seem to tell the computer what’s there and leave it be, which baffles me, because how can anything actually be happening if you haven’t made it happen? I’ve attempted to describe my understanding in terms of variables and procedures, but people in the know have always told me I haven’t got it right. It’s been said, and I’ve come up against this, that if you’ve learned imperative and procedural programming, it makes it so much harder to learn OOP (Object-Oriented Programming). I also can’t shed the impression that a lot of it is obscurantist technobabble hiding a naked emperor. And if you can’t see what’s going on inside, how can you have control?
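For what it’s worth, when I try to cash out the vehicle metaphor in the sort of code I can actually follow, it comes out as nothing more exotic than a lump of data plus some words which act on its address. Here’s a rough, hypothetical sketch in standard FORTH, with names invented for the purpose, and possibly this is exactly the point I’m failing to get:

```
\ A "vehicle" as plain data: two cells, wheels then weight in kg.
CREATE CAR    4 ,  1500 ,
CREATE LORRY  6 , 12000 ,

\ "Methods" as ordinary words which take the data's address.
: WHEELS ( addr -- n )  @ ;
: WEIGHT ( addr -- n )  CELL+ @ ;

CAR WHEELS .     \ prints 4
LORRY WEIGHT .   \ prints 12000
```

As far as I can tell, the object-oriented version bundles the data and the words together so that they travel as one unit; whether that deserves a whole new vocabulary is where I part company with it.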
Another annoying trend has been created by the easy availability of memory. Back in the old days, computers needed to be programmed efficiently in terms of memory. For instance, a spell checker would be based on the rules of English spelling and object to, say, a Q not followed by a U or the use of “AY” in the middle of a word, but it didn’t have a dictionary. Nowadays it does, and that takes up many megabytes in its uncompressed form, although I’m sure it is compressed. Likewise, chess games have tended to store whole lists of possible moves and try to find their way up a tree to the best result, using up huge amounts of memory, whereas previously they would’ve used the rules of chess and, I presume, some cleverly-written strategy to win. To me, this seems lazy and disappointing. I want programs to be optimised to the same extent as they were when 64K seemed like impossible luxury. So much processing power is also wasted on running the GUI. We don’t need this because it isn’t really using the computer; it’s just making the computer easier to use by tying up processing power in unnecessary trinkets.
So: you might think I’m a Linux person. I did too for a long time, but I’m not. As far as I’m concerned, you have to take care of your hardware and ensure it remains useful for as long as possible for a number of reasons:
- It takes a lot of energy, resources and environmental damage to make a computer and the working conditions of the people who mined the minerals, refined the compounds and put the device together are often questionable. Once all of that is done, you should be able to hang onto it for a long time.
- We ourselves haven’t necessarily got that much money and we should expect our devices to be as useful as possible. That means access to hardware and no hand-holding.
- When we finally discard our hardware it goes to landfill or to some poor Third World community, where it poisons the rivers and leaves the people disassembling it with physical disabilities, cancers and congenital abnormalities, not to mention what it does to the biosphere.
All of this has got to be the number one priority when we consider computers and other devices, and to be fair Linux does go a considerable way towards addressing it. But there’s a problem. A lot of the people who are involved in designing and coding for Linux are very focussed on supporting up-to-date hardware. It’s a big community and they aren’t all doing that, but many of them are, and it’s often hard to find people who are more genuinely concerned about the e-waste and other sustainability aspects of the issue. The other thing, and this may be less of a problem today than it used to be, is that Linux people are often not people people, and in the end this amounts to user interfaces which are not very friendly. I’m reminded of the Dilbert strip in which a programmer says “it’s my philosophy that the computer interface should hurt the user” and uses samples of birds being killed by cars as the main notification sound. I’ve had to use Linux GUIs which are unwittingly displaying at 320×200 and put the OK button completely outside the visible area, or which don’t recognise how the video RAM is mapped, so that everything looks like incomprehensible multicoloured vertical stripes. And yet a perfectly good, low-end 640×480 sixteen-colour display could’ve been used, which somehow is not the default. Why?
Don’t get the impression that I’m not as susceptible as most other people to the appeal of Windows. I was very impressed by Windows 3.1, which I didn’t come across until 1998 owing to forcing myself to go cold turkey on computers for a decade or two. As far as I’m concerned, I’d be happy for the GUI to look like that today, and it all seems a bit unnecessary to make it any fancier, particularly because in doing so you’re consigning millions of PCs to landfill for no good reason. I think Program Manager, the main shell for Win 3.1, hung around until at least Windows XP, although it didn’t retain its look. It may be because our 486-based 33 MHz PC with VGA graphics and non-working sound card was just less messed-about than other computers we’ve used since, but it was the most stable version of Windows I’ve ever used in earnest. It crashed once in the whole time we used it, and that was a major and worrying event. Incidentally, there used to be an Explorer-style shell for Win 3.1 which made it practically indistinguishable from classic 32-bit Windows; I used it for a short period of time, and in fact I’ve even installed it as the default shell on later versions because it’s more compact and stable than the standard one.
We then leapfrogged over Windows 95 to use Windows 98 on a computer which was a total mess. It was a 120 MHz Pentium with SVGA, a working sound card and 16 MB of RAM. This is below the recommended specs for Windows 98, but there were other imponderable issues with that PC at the time. It only lasted a few months before we handed it over to a friend who was more au fait with computers, who got it to work. We replaced it with our final desktop-format computer, an AST Bravo MS P/75 with an ATI Mach-16 graphics card, which incidentally has 2-D acceleration but not the 3-D which had become practically universal by that point. This was actually a downgrade, but it was more reliable, and at that time I was really into the themes and skins available for Windows 98. I also struggled endlessly to get Linux to work on it. QNX was fine, but then it always is, and BeOS also worked okay. It ultimately got upgraded to 128 MB of RAM and a piggy-back 133 MHz Pentium, if I remember correctly. This was the first computer to have a reliable dial-up internet connection. It was fine at running Quake, but it was only barely capable of showing DivX videos even when upgraded to the nines.
The next stage came in 2002, as the start of our plan to give up television, partly for the children’s sake. This ultimately meant having access to DVDs, and therefore we bought a new computer, though initially with only a CD-ROM drive, mainly in order to get broadband internet. By this time we were accumulating e-waste enormously because I was reluctant to let the old hardware go the way of all silicon. This PC had an Athlon and ended up running Windows XP. Windows XP was a good operating system on the whole, but it was the first version of Windows we used to break compatibility with 16-bit drivers and a fair amount of 16-bit software, which necessitated the slinging out of various bits of hardware. This was also our first PC to have USB ports, which I upgraded as well. The initial specification of this machine was 128 MB RAM, a 40 GB hard drive, a 1 GHz Athlon of some description, on-board graphics and a Silicon Integrated Systems motherboard. It started off with Windows 98 and we upgraded it to XP in 2004. One thing I didn’t like about Windows XP was its childish-looking graphical scheme, but it still had Windows Classic, whose appearance was far better. This was also the first computer we used a TFT screen with, and the amount of space taken up by a 22″ CRT monitor is something to behold.
In 2007, we got a Windows Vista machine. This was because its predecessor had exploded after I installed a graphics card whose power requirements exceeded what the machine’s power supply could deliver. Apparently it emitted something like ball lightning, although I wasn’t there at the time. We persisted with this machine for a further seven years. My chief issue with Windows Vista was that, left to itself, it seemed to spend too much time making the windows look translucent. Interestingly, the system requirements for Windows went into reverse after a certain point, probably because people were no longer impressed by the eye candy. In 2015 we acquired the computer which is currently sitting upstairs and of course runs Windows 10. Ironically, the stability of Windows 10 has made it possible to install Linux properly and use it on that PC. I have a history of using Live distributions of Linux on memory sticks as my preferred operating systems, because they’re more secure that way.
Windows PCs have become very much secondary in our lives in recent years, as is probably true for many other people. We mainly use Android and Chromebooks, although Sarada still uses a Windows laptop from time to time. There no longer seems to be the pressure to upgrade, or maybe we’ve just become sidelined and lost interest in using them in other ways. I still program in FORTH and I have the modern version of the Commodore 64, which I don’t use as much as I should. To be honest, I’m not a big fan of the 64 because its BASIC doesn’t support the hardware properly and the palette is very odd for no apparent reason, but again all these are challenges to which I should rise. I’m vaguely considering writing a few arcade games for it, although I should stress that I’m very much in the Z80 camp rather than the 65 series one. I’ve never found the 6502 (technically the 6510 in the case of the 64) easy to program because I find its addressing modes and the sheer awkwardness of dealing with sixteen-bit integers irksome, although again that’s down to a possibly admirable minimalist design.
We’ve also owned a few computers I haven’t mentioned. I bought a Tandy Color Computer in 1999, there was a UK101 in our airing cupboard for a while, we had a Windows ME PC which was technically an ACT Apricot of all things, and also a 286-based Toshiba laptop with a plasma display on which I used DOSLynx as a browser. That machine actually ran Windows 1.0!
Although I’ve never done it, I am curious about the possibility of programming ARM or 64-bit Intel processors in machine code or assembler without an operating system as such. It would need a computer dedicated to that purpose, of course. I would also imagine that attempting to use an ARM in that way would be a bit of a tall order, although I don’t know, because my understanding is that its instruction set is nowadays optimised for code no human has directly written, but I presume that Intel-like CPUs are another matter. But just as I’ve never really got into coding in a major way, I doubt I’ll ever get round to it. One thing I do still do with coding is occasionally write FORTH or BASIC programs to work out maths problems, and I’ve long preferred APL to spreadsheets, which quite frankly I can’t get my head round.
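The kind of thing I mean is below: a throwaway word in standard FORTH, hypothetical rather than lifted from my own notebooks, which works out greatest common divisors by Euclid’s algorithm:

```
\ Greatest common divisor by Euclid's algorithm.
: GCD ( a b -- gcd )  ?DUP IF TUCK MOD RECURSE THEN ;
1071 462 GCD .   \ prints 21
```

Two lines in the interpreter and the answer drops out, which is exactly why a spreadsheet has never tempted me.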
I very much doubt I’ll ever make the transition to object-oriented programming.
