Is Cyberspace Haunted?

Loab – An explanation may be forthcoming

I may have mentioned this before on here, but there used to be a popular “spooky” non-fiction book called ‘The Ghost Of 29 Megacycles’, about the practice of listening to static on analogue radio and apparently hearing the voices of the dead. A similar technique is known as Electronic Voice Phenomenon, a more general version of the same in which people listen out for voices on audio tape or other recording media. It’s notable that this is a highly analogue process. It’s no longer a trivial task to detune a television or radio and get it to display visual static or produce audio static so that one can do this. Audiovisual media nowadays are generally very clean and don’t lend themselves to it. One saddening thing to me is that we now have a TV set which will display pretend static to communicate to us that we haven’t set it up properly. It isn’t honest. There is no real static: it’s just some video file stored on the hardware somewhere which tries to tell the user there’s an unplugged connection or something. You can tell this because it loops: the same pixels are the same colours in the same places every few frames. I find this unsettling because it implies that the world we live in is kind of a lie, and because we haven’t really got control over the nuts and bolts of much technology any more. There’s a revealing, temporally asymmetric expression in that “any more”, committing me to the belief that in this respect the past and future are qualitatively different. It’s important to acknowledge this sometimes, but the force of that potentially negative belief can also help bring it about. However, the demise of the analogue has not led to the demise of such connections, although it long seemed to have done so.

Most people would probably say that in these cases we are simply hearing, or sometimes seeing, things which aren’t really there. Others might say, of course, that this is a way to access the Beyond, so to speak, and interpret the voices or other experiences in those terms. If that’s so, the question arises as to whether it’s the medium which contains this information or whether the human mind contacts it directly via a random-seeming visual or sonic mess, having been given the opportunity to do so. Other, more organised stimuli pin the attention to specific and definite details too much for this to happen easily. There’s no scope for imagination, or rather for free association.

Well, recently this has turned out no longer to be so. Recently, artificial intelligence has been advancing scarily fast. That’s not hyperbole. It is actually quite frightening how rapidly software has been gaining ground on human cognition. Notable improvements occur within weeks rather than years or decades, and one particular area where this is happening is in image generation. This has consequences for the “ghost of 29 megacycles” kind of approach to, well, I may as well say séances, but this is going to take a bit of explaining first.

Amid considerable concern for human artists and their intellectual property, it’s now possible to go to various websites, type in what you want to see and have a prodigious, furiously cogitating set of servers give you something like that in a couple of minutes. For example, sight unseen I shall now type in “blue plastic box in a bookcase” and show you a result from Stable Diffusion:

That didn’t give me exactly what I wanted but it did show a blue plastic box in a bookcase. Because I didn’t find a way to specify that I only wanted one blue plastic box, it also gave me two others. I’ll give it another try: “A tree on a grassy hill with a deer under it”:

The same system can also respond to images plus text as input. In my case, this has led to an oddity. As you know, I am the world’s whitest woman. However, when I give Stable Diffusion’s sister Diffuse The Rest, which takes photos plus descriptions, a photo of myself along with a description such as “someone in a floral skater dress with curly hair, glasses and hoop earrings”, it will show me that all right, but “I” will be a Black woman more often than not. This is not so with many other inputs which don’t include a photo of me. I get this when I type the same description into Stable Diffusion itself:

This is obviously a White woman. So are all the other examples I’ve tried on this occasion, although over many runs there is a fair distribution of ethnicities. There are worrying biasses, as usual, in the software. For instance, if you ask for a woman in an office, you generally get something like this:

If you ask for a woman on a running track, this is the kind of output that results:

This is, of course, due to the fact that the archive of pictures on which the software was trained carries societal biasses therewith. However, for some reason it’s much more likely to make me Black than White if I provide it with a picture of myself and describe it in neutral terms. This, for example, is supposed to be me:

The question of how it might be addressed arises though. Here is an example of what it does with a photo of me:

You may note that this person has three arms. I have fewer than three, like many other people. There’s also a tendency for the software to give people too many legs and digits. I haven’t tried and I’m not a coder, but it surprises me that there seems to be no way to filter out images with obvious flaws of this kind. Probably the reason for this is that these AI models are “black boxes”: they’re trained on images and arrive at their own rules for how to represent them, and in the case of humans the number of limbs and digits is not part of that. It is in fact sometimes possible to prompt them into giving a body extra limbs by saying something like “hands on hips” or “arms spread out”, in which case they will on occasion produce images of someone with arms in a more neutral position as well as arms in the explicitly requested ones.

In order to address this issue, it would presumably be necessary to train the neural network on images with the wrong and right number of appendages. The problem is, incidentally, the same as the supernumerary blue boxes in the bookcase image, but in most situations we’d be less perturbed by seeing an extra box than an extra leg.
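
As an aside, there is at least a crude workaround in the open-source tooling, although it steers the model away from unwanted concepts rather than filtering out flawed outputs: “negative prompting”. Here’s a minimal sketch using the Hugging Face diffusers library; the model name and parameters are from its public releases, and are my assumption rather than necessarily what the websites I used actually run:

```python
import torch
from diffusers import StableDiffusionPipeline

# A hedged sketch of "negative prompting": the pipeline is steered away
# from the listed concepts. It mitigates, rather than eliminates,
# supernumerary limbs and digits.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="someone in a floral skater dress with curly hair, glasses and hoop earrings",
    negative_prompt="extra arms, extra legs, extra fingers, deformed hands",
).images[0]
image.save("hopefully-two-arms.png")
```

Even so, this only nudges the odds; the number of appendages still isn’t a rule the model actually knows.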

I have yet to go into why the process is reminiscent of pareidolia based on static or visual snow, and therefore potentially a similar process to a séance. The algorithm used is known as a Latent Diffusion Model. This seems to have replaced the slightly older method of Generative Adversarial Networks, which employed two competing neural networks, one generating pictures and the other judging them, to produce better and better results. Latent Diffusion still uses neural networks, which are models of simple brains based on how brains are thought to learn. Humans have no access to what happens internally in these networks, so the way they are actually organised is quite mysterious. Many years ago, a very simple neural network was trained to do simple arithmetic and then explored. It was found to contain a circuit which had no connections to any nodes outside itself and was therefore thought to be redundant, but on being removed, the entire network ceased to function. That network was many orders of magnitude less complex than today’s. In the present case, the network was trained on a database of pictures, associated with descriptions and ranked by humans for beauty, called the LAION-5B dataset. The initial picture, which may be blank, has “snow” added to it in the form of pseudorandom noise (true randomness may be impossible for conventional digital devices to achieve alone). The algorithm then uses an array of GPUs (graphical processing units, as used in self-driving cars, cryptocurrency mining and video games) to remove that noise step by step, in several stages, until the picture becomes more like the target as described textually and/or submitted as an image. Also, just as a JPEG is a compressed version of a bitmap image, relying in that case on small squares described via overlapping trig functions, so the noisy images are compressed in order to fit in the available storage space and to get processed faster. The way I think of it, and I may be wrong here, is that it’s like getting the neural network to “squint” at the image through half-closed eyes and try to imagine and draw what’s really there. This compressed form is described as a “latent space”: the actual pixel space of the image is a decompressed version of what the GPUs work on directly.
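
As far as I can tell, the core of the generation loop is conceptually quite small. Here’s a toy sketch in Python of that reverse-diffusion idea, emphatically not the real Stable Diffusion code: the names are mine and the update rule is drastically simplified, but it shows the shape of the process:

```python
import torch

def generate(denoiser, prompt_embedding, steps=50, latent_shape=(4, 64, 64)):
    """Toy reverse diffusion: `denoiser` stands in for the trained network."""
    latent = torch.randn(latent_shape)   # pure pseudorandom "snow" in latent space
    for t in reversed(range(steps)):     # walk the noise schedule backwards
        # The network guesses which part of the current latent is noise,
        # conditioned on the prompt's embedding...
        predicted_noise = denoiser(latent, t, prompt_embedding)
        # ...and a little of that guessed noise is peeled away each step.
        latent = latent - predicted_noise / steps
    return latent  # a real system would now decode this into pixels with a VAE
```

The “squinting” happens in that loop: at every step the network is asked what it thinks is really there behind the remaining fuzz.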

If you don’t understand that, it isn’t you. It was once said that if you can’t explain something simply, you don’t understand it, and that suggests I don’t. That said, one thing I do understand, I think, is that this is a computer making an image fuzzy like a poorly-tuned television set and then trying to guess what’s behind the fuzz according to suggestions such as an image or a text input. This process is remarkably similar, I think, to a human using audio or visual noise to “see” things which don’t appear to be there, and therefore is itself like a séance.

This seems far-fetched of course, but it’s possible to divorce the algorithm from the nature of the results. The fact is that if a group of people is sitting there with a ouija board, they are ideally sliding the planchette around without their own conscious intervention. There might be a surreptitious living human guide or a spirit might hypothetically be involved, but the technique is the same. The contents of the latent space are genuinely unknown and the details of events within the neural network are likewise mysterious. We, as humans, also tend to project meaning and patterns onto things where none exist.

This brings me to Loab, the person at the top of this post, or rather the figure. The software used to discover this image has not been revealed, but seems to have been Midjourney. The process whereby she (?) was arrived at is rather strange. The initial input was Marlon Brando, the film star. This was followed by an attempt to make the opposite of Marlon Brando. This is a technique where, I think, the location in the latent space furthest from the initial item is found, like the antipodes but in a multidimensional space rather than on the surface of a spheroid. This produced the following image:

The phenomenon of apparently nonsense text in these images is interesting and more significant than you might think. I’ll return to it later.
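
As for what “opposite” means mechanically, I can only offer a toy illustration of the antipode idea, on the assumption that a prompt ends up as a unit vector in some embedding space; the real Midjourney mechanism, reportedly negatively weighted prompts, hasn’t been confirmed:

```python
import numpy as np

# Toy illustration with hypothetical stand-in values: treat a prompt as a
# point on a unit hypersphere and take its "opposite" as the antipode.
rng = np.random.default_rng(0)
embedding = rng.standard_normal(768)      # stand-in for a text embedding
embedding /= np.linalg.norm(embedding)    # normalise onto the unit hypersphere

opposite = -embedding                     # the antipodal point

print(np.dot(embedding, opposite))        # ≈ -1: maximally dissimilar by cosine
print(np.allclose(-opposite, embedding))  # True: the opposite's opposite is the original
```

Note that in this toy version the opposite of the opposite is exactly the original, which is part of what makes what happened next so odd.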

The user, whose username is Supercomposite on Twitter, then tried to find the opposite of this image, expecting to arrive back at Marlon Brando. They didn’t. Instead they got the image shown at the top of this post, in other words this:

(Probably a larger image in fact but this is what’s available).

It was further found that this image tended to “infect” others and make them more horrific to many people’s eyes. There are ways of producing hybrid images via this model, and innocuous images from other sources generally become macabre when combined with this one. Also, there’s a tendency for Loab, as she was named, to “haunt” images in the sense that you can make an image from an image and remove all the references to Loab in the description, and she will unexpectedly recur many generations down the line like a kind of jump scare. Her presence also sometimes makes images so horrendous that they are not safe to post online. For instance, some of them are of screaming children being torn to pieces.

As humans, we are of course genetically programmed to see horror where there is none because if we instead saw no horror where there was some we’d probably have been eaten, burnt to death, poisoned or drowned, and in that context “we” refers to more than just humans. Therefore a fairly straightforward explanation of these images is that we are reading horror into them when they’re just patterns of pixels. We create another class of potentially imaginary entities by unconsciously projecting meaning and agency onto stimuli. Even so, the human mind has been used as a model for this algorithm. The images were selected by humans and humans have described them, and perhaps most significantly, rated them for beauty. Hence if Marlon Brando is widely regarded as handsome, his opposite’s opposite, rather than being himself, could be ugliness and horror. It would seem to make more sense for that to be simply his opposite, or it might not be closely related to him at all. A third possibility is that it’s a consequence of the structure of a complex mind-like entity to have horror and ugliness in it as well as beauty. There are two other intriguing and tempting conclusions to be drawn from this. One is that this is a real being inhabiting the neural network. The other is that the network is in some way a portal to another world in which this horror exists.

Loab is not alone. There’s also Crungus:

These are someone else’s, from Craiyon, which is the renamed Dall-E Mini. Using that, I got these:

Using Stable Diffusion I seem to get two types of image. One is this kind of thing:

The other looks vaguely like breakfast cereal:

Crungus is another “monster”, though he looks quite cartoonish. I can also understand why Crungus might be a breakfast cereal, because the word sounds like “crunch”. In fact I can easily imagine going down the shop, buying a box of Crungus, pouring it out and finding a plastic toy of a Crungus in it. There’s probably a tie-in between the cereal and a TV animation. Crungus, however, has an origin. Apparently a 2002 video game included a Crungus as an easter egg: a monster based on the original DOOM monster, the Cacodemon, itself based on artwork which looked like this:

Hence there is an original out there which the AI probably found, although I have to say it seems very appropriately named, and if someone were to be asked to draw a “Crungus”, they’d probably produce a picture a bit like one of these.

It isn’t difficult to find these monsters. Another one which I happen to have found is “Eadrax”:

Eadrax is the name of a planet in ‘The Hitch-Hiker’s Guide To The Galaxy’ but reliably produces fantastic monsters in Stable Diffusion. This seems to be because Google will correct the name to “Andrax”, an ethical hacking platform which uses a dragon-like monster as its mascot or logo. An “eadrax” seems to be a three-dimensional version of that flat logo. But maybe there’s something else going on as well.

There’s a famous experiment in psychology where people whose spoken languages were Tamil and English were asked which one of these shapes was “bouba” and which “kiki”:

I don’t even need to tell you how that worked out, do I? What happens if you do this with Stable Diffusion? Well, “kiki” gets you this, among many other things:

“Bouba” can generate this:

I don’t know about you, but to me the second one looks a lot more like a “bouba” than the first looks like a “kiki”. What about both? Well, it either gets you two Black people standing together or a dog and a cat. I’m quite surprised by this because it means the program doesn’t know about the experiment. It doesn’t, however, appear to do what the human mind does with these sounds either. “Kiki and Bouba” does this:

Kiki is of course a girl’s name. Maybe Bouba is a popular name for a companion animal?

This brings up the issue of the private vocabulary latent space diffusion models use. You can sometimes provoke such a program into producing text. For instance, you might ask for a scene between two farmers talking about vegetables with subtitles, or a cartoon conversation between whales about food. When you do this, and when you get actual text, something very peculiar happens. If you have typeable dialogue between the whales and use it as a text prompt, it can produce images of seafood. If you do the same with the farmers, you get things like insects attacking crops. This is even though the text seems to be gibberish. In other words, the dialogue the AI is asked to imagine actually seems to make sense to it.

Although this seems freaky at first, what seems to be happening is that the software is taking certain distinctive text fragments out of captions and turning them into words. For instance, the “word” for birds actually consists of a concatenation of the first part, i.e. the more distinctive one, of scientific names for bird families. Some people have also suggested that humans are reading things into the responses by simply selecting the ones which seem more relevant, and another idea is that the concepts associated with the images are just stored nearby. That last suggestion raises other questions for me, because it seems that that might actually be a description of how human language actually works mentally.

Examples of “secret” vocabulary include the words vicootes, poploe vesrreaitas, contarra ccetnxniams luryea tanniouons and placoactin knunfdg. Here are examples of what these words do:

Vicootes
Poploe vesrreaitas
contarra ccetnxniams luryea tanniouons
placoactin knunfdg

The results of these in order tend to be: birds, rural scenes including both plants and buildings, young people in small groups, and cute furry animals, including furry birds. It isn’t, as I’ve said, necessarily that mysterious, because the words are often similar to parts of other words. For instance, the last one produces fish in many cases, though apparently not on Stable Diffusion, where it seems to have produced a dog because the second word ends with “dg”. It produces fish elsewhere because placoderms and actinopterygians are prominent classes of fish.
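
This theory is fairly easy to poke at, because the text encoders involved break words into subword pieces. Here’s a sketch using the openly available CLIP tokeniser, the family to which Stable Diffusion’s text encoder belongs; whether the other models split these words identically is an assumption on my part:

```python
from transformers import CLIPTokenizer

# Illustrative: watch the "secret" words fall apart into subword pieces,
# some of which overlap with fragments of real words and scientific names.
# The exact split depends on the tokeniser's learned vocabulary.
tok = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

for phrase in ["placoactin knunfdg", "poploe vesrreaitas", "vicootes"]:
    print(phrase, "->", tok.tokenize(phrase))
```

If the pieces land near fragments like “placo” and “actin”, the fishiness stops being mysterious and starts looking like etymology.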

It is often clear where the vocabulary comes from, but that doesn’t mean it doesn’t constitute a kind of language because our own languages evolve from others and take words and change them. It can easily be mixed with English:

A flock of vicootes in a poploe vesrreaitas being observed by some contarra ccetnxiams luryea tanniouons who are taking their placoactin knunfg for a walk.

This has managed to preserve the birds and the rural scene with vegetation, but after that it seems to lose the plot. It often concentrates on the earlier part of a text more than the rest. In other words, it has a short attention span. The second part of this text gets me this:

Contarra ccetnxiams luryea tanniouons taking their placoactin knunfg for a walk.

I altered this slightly but the result is unsurprising.

Two questions arise here. One is whether this is genuine intelligence. The other is whether it’s sentience. As to whether it’s intelligent, I think the answer is yes, but perhaps only to the extent that a roundworm is intelligent. This is possibly misleading and raises further questions. Roundworms are adapted to what they do very well but are not going to act intelligently outside of that environment. The AIs here are adapted to do things which people do to some extent, but not particularly generally, meaning that they can look a lot more intelligent than they actually are. We’re used to seeing this happen with human agency more directly involved, so what we experience here is a thin layer of humanoid behaviour particularly focussed on the kind of stuff we do. This also suggests that a lot of what we think of as intelligent human behaviour is actually just a thin, specialised veneer on a vast vapid void. But maybe we already knew that.

The other question is about sentience rather than consciousness. Sentience is the ability to feel. Consciousness is not. In order to feel, at least in the sense of having the ability to respond to external stimuli, there must be sensors. These AIs do have sense organs because we interact with them from outside. I have a strong tendency to affirm consciousness because a false negative is likely to cause suffering. Therefore I believe that matter is conscious and therefore that that which responds to external stimuli is sentient. This is of course a very low bar and it means that I even consider pocket calculators sentient. However, suppose that instead consciousness and sentience are emergent properties of systems which are complex in the right kind of way. If digital machines and their software are advancing, perhaps in a slow and haphazard manner, towards sentience, they may acquire it before being taken seriously by many, and we also have no idea how it would happen, not just because sentience as such is a mystery but largely because we have no experience of that emergence taking place before. Therefore we can look at Loab and the odd language and perhaps consider that these things are just silly and it’s superstitious to regard them as signs of awareness, but is that justified? The words remind me rather of a baby babbling before she acquires true language, and maybe the odd and unreliable associations they make also occur in our own minds before we can fully understand speech or sign.

Who, then, is Loab? Is she just a collaborative construction of the AI and countless human minds, or is she actually conscious? Is she really as creepy as she’s perceived, or is that just our projection onto her, our prejudice perhaps? Is she a herald of other things which might be lurking in latent space or might appear if we make more sophisticated AIs of this kind? I can’t answer any of these questions, except perhaps to say that yes, she is conscious because all matter is. What she’s actually doing is another question. A clockwork device might not be conscious in the way it “wants” to be. For instance, it’s possible to imagine a giant mechanical robot consisting of teams of people keeping it going, but is the consciousness of the individual members of that project separate from any consciousness that automaton might have? It’s conceivable that although what makes up Loab is conscious, she herself is not oriented correctly to express that consciousness.

A more supernaturalistic explanation is that Midjourney (I assume) is a portal and that latent space represents a real Universe or “dimension” of some kind. It would be hard to reconcile this idea with a deterministic system if the neural net is seen as a kind of aerial for picking up signals from such a world. Nonetheless such beliefs do exist, as a ouija board is actually a very simple and easily described physical system which nevertheless is taken as picking up signals from the beyond. If this is so, the board and planchette might be analogous to the neural net and the movement of the hands on the planchette, which is presumably very sensitive to the neuromuscular processes going on in the arms and nervous systems of the human participants, to the human artists, the prompt, the computer programmers and the like, and it’s these which are haunted, in a very roundabout way. I’m not in any way committing myself to this explanation. It’s more an attempt to describe how the situation might be compared to a method of divination.

I’ve mentioned the fact that there are artists involved a few times, and this brings up another, probably unrelated, concern. Artists and photographers, and, where similar AIs have been applied to other creative genres, the likes of poets, authors and musicians, have had their work used to train these systems, and therefore it could be argued that they’re owed something for this use. At the other end, bearing in mind that most of the images in this post have been produced rapidly on a free version of this kind of software and that progress is also extremely fast, there are also images coming out which could replace what artists are currently doing. This is an example of automation destroying jobs in the creative industries, although at the same time the invention of photography was probably thought of in a similar way and reports of the death of the artist were rather exaggerated. Instead it led to fine art moving in different directions, such as towards impressionism, cubism, surrealism and expressionism. Where could human art go, stimulated by this kind of adversity? Or would art become a mere hobby for humans?

Living In The Past One Day At A Time

In this blog, I’ve made occasional references to what I call my “Reënactment Project”, a long-term ongoing thing I’ve been doing since about 2017. The idea is that every day I make an at least cursory examination of the same day thirty-nine years previously. The reason for choosing thirty-nine years is that, for the initial year I planned to do it, all the dates fell on the same days of the week, meaning that the years concerned were substantially similar. The very basic arithmetic involved is of some interest and I’ll be returning to it later in the post. A side-effect of the thirty-nine year difference is that I am thirty-nine years younger than my father, so back then he would’ve been the age I am now, which focusses me on ageing, life stages and how to stay as young as possible by doing things like addressing my balance through Yoga so it doesn’t deteriorate as fast as it has for him. I can see the end result and know some of the things to avoid, which means that if I do reach his current age I’ll probably have a completely different set of health problems, from which my own hopefully not estranged descendants will in turn know what they should avoid. And so on.

My motivation for doing this stems from the disconcerting awareness that we edit our memories, and are also only able to experience things as we are at the time. Also, various media and popular misconceptions lead us to forget and mutate the memories we do believe ourselves to have, and this was particularly important for 1978 as it included the famous Winter Of Discontent, also the Winter Of Discotheque, and I feel we may have been usefully manipulated into seeing this particular season in a particular way to justify everything that came after it. I also want to know how I was as a child and adolescent and pay attention to things which are the seeds of how I am now, and also that which was in me which I didn’t end up expressing. There is of course a bit of a risk here because I’m living in the past and to some extent dwelling upon it, but I do have a life outside this project and find it quite informative and enriching for today’s experiences. However, in general it’s just interesting.

I’ve now reached 1982, and am in the depths of the Falklands War, which was a significant historical event in securing Margaret Thatcher a second parliamentary term. Well, I say “in the depths”. In fact an end to hostilities was announced on 20th June and the Canberra was almost home by 7th July, which is when I’m writing this. I more or less stand by the position I had by the mid-’80s on this subject, which is that Galtieri and Thatcher were both aware that a war would be likely to boost their popularity, although at the time I thought it was an actual conspiracy between them whereas now I just think they were both aware of its expediency. It came as something of a shock to me, a year later, when I realised we didn’t have fixed-term parliaments and therefore the Tories could take advantage of their victory by calling an election whenever they wanted. ‘Shipbuilding’ is redolent of the time:

Although I know Elvis Costello wrote and performed the song, the Robert Wyatt version is the one I associate most closely with the incident. Robert Wyatt was part of the Canterbury Scene and an early member of Soft Machine, so I’m obviously more likely to associate it with him. Just in case you don’t know, Wyatt got drunk and fell out of a window in 1973, paralysing himself from the waist down. Jean Shrimpton, my second cousin once removed, gave him a car and Pink Floyd raised £10 000 for him in a benefit concert. Tommy Vance once described him as “a man who has had more than his share of bad luck in life”.

Another association I make with the Falklands from the time is a play about an Irish barman who was accepted as a member of his community in London until the outbreak of the war. He finds himself sandwiched between Irish Republicans and his customers, with racism growing against him, which culminates in his murder. This was originally a radio play but later appeared on TV. Although the Troubles were significant and also a spur to creativity, there was a long period during which practically every new play was about them, and it became tedious and annoying. This wasn’t yet the case in ’82 though. There’s also the 1988 BBC TV drama ‘Tumbledown’.

1982 was probably the last year there was really any hope that the previous pattern of alternating Conservative and Labour administrations we were used to would continue into the decade. In fact, this had been a relatively recent development. The first Labour government after the Second World War had been followed by thirteen years of Tory rule, and it was only after that that an alternation of parties in power had begun, lasting only fifteen years. Nonetheless, up until 1982 that’s what most people seemed to expect, and that alternation had held policies and the general timbre of the country in the political centre because the next government could be expected to come along and undo much of what the previous one had done, and so on. This was satirised on the Radio 4 comedy programme ‘Week Ending’ which depicted the future of privatisation and nationalisation as permanently oscillating ad infinitum every five years, which was probably one reason I thought we had fixed terms.

I was a communist in ’82, and when I say “communist” I mean Stalinist. I took it seriously enough that I attempted to learn Russian and listened regularly to Radio Moscow, and I was very upset when Leonid Brezhnev died. I was completely convinced that what the Soviet Union was saying about us and themselves was accurate and that the BBC and the like were nothing more than propaganda. I was also very concerned indeed about unemployment, racism and homophobia. I considered being called racist to be the worst insult imaginable, which of course misses the point. I was, however, still a meat eater and was, as you can probably tell, quite naïve. I was also a lovesick teenager in love with the idea of being in love.

However, this isn’t just about 1982 and the events of that year, for me or the world, but also the value of the exercise. It’s often been suggested that I have autistic tendencies and I imagine that this kind of meticulous rerun of the late ’70s and early ’80s is going to come across as confirmatory evidence for that. Clearly people do do things just because they want to and then come up with reasons for doing so to justify themselves to other people. My novel ‘1934’ covers a community where they have chosen to relive the mid-twentieth century over and over again in an endless loop because the leaders think everything has gone to Hell in a handcart ever since, and this would not be a healthy attitude. I made the mistake, a few years ago, of re-reading my diary in a particular way and found myself falling back into the mindset I had at the time in a way which felt distinctly unhealthy. Nonetheless, I consider this activity to be worthwhile because our memories are re-written, and history is written by the winners, in this case the winners of the Falklands War, so our memories are re-written by the winners.

It’s been said that films set in the past usually say more about the time they were made than the period they’re supposed to have happened in. Hence ‘Dazed And Confused’ is really about the 1990s, for example. We generally have a set of preconceptions about a particular period within living memory which turn into a caricature of the time which we find hard to penetrate to reach the reality, and it isn’t the reality in any case because it’s filtered through the preconceptions of the people at the time, even when those people were us. This much is almost too obvious to state. However, there’s also continuity. Time isn’t really neatly parcelled off into years, decades and centuries. People don’t just throw away all their furniture at the end of the decade, or at least they shouldn’t, and buy a whole new lot. We’re all aware of patterns repeating in families down the generations. It isn’t really possible to recapture the past as if it’s preserved in amber. But it is possible to attempt to adopt something like the mindset prevalent at the time, or the Zeitgeist, to think about today, and the older you get the more tempting it is to do so. Since the menopause exists, there must be some value in becoming an elder and sharing the fruits of one’s experience, even when one is in cognitive decline. And of course the clock seems to have been going backwards since 1979, making this year equivalent to 1937. World War II was so 2019.

How, then, does 2021 look from 1982? On a superficial level, it tends to look very slick and well-presented, although airbrushing had a slickness to it too. The graphic at the top of this post is more ’87 than ’82, but it does succeed in capturing the retro-futurism. Progressive politics was losing the fight with conservatism at the time, but the complete rewrite of how we think of ourselves had not yet happened. Nowadays, people are wont to parcel up their identity and activities into marketable units because they have no choice but to do so. The fragmentation there is as significant as the commodification. The kind of unity of experience which existed in terms of the consumption of popular culture back then is gone, although it was gradually disintegrating even then. We were about to get Channel 4, and video recorders were becoming popular among the rich, although manufacturers were still insisting that there was no way to get the price below £400 at the time, which is more like £1 400 today. It’s hard to tell, but it certainly feels like the mass media, government and other less definable forces have got better at manipulating public opinion and attitudes. This feels like an “advance” in the technology of rhetoric. However, we may also be slowly emerging from the shadow of the “greed is good” ethic which was descending at the time, because we’ve reached the point where most public assets have been sold off and workers’ rights have been so eroded that reality tends to intrude a lot more than it used to, and I wonder if people tend to be more aware of the discrepancy between what they’re told and what their experience is. Perhaps the rise in mental health problems is related to this: people are less able to reconcile their reality with the representation of “reality”, and are therefore constantly caught in a double bind.

It isn’t all bad. It’s widely recognised now that homophobia, sexism, racism, ableism and other forms of prejudice are bad for all of us and people seem to be more aware that these are structural problems as well. Veganism is better understood but also very commercialised, taking it away from its meaning. Social ideas which are prevalent among the general public today may have been circulating in academia at the time and their wider influence was yet to be felt. This is probably part of a general trend. There was also a strongly perceived secularisation trend which has in some respects now reversed. The West was in the process of encouraging Afghan fundamentalists and they may also have begun arming Saddam Hussein by this point, although that might’ve come later. CND was in the ascendancy, and the government hadn’t yet got into gear dissing them.

Another distinctive feature of the time was the ascendancy of home microcomputers, although for me this was somewhat in the future. I’ll focus more on my suspicions and distrust here. To me, silicon chips were primarily a way to put people out of work and therefore I didn’t feel able to get wholeheartedly into the IT revolution with a clear conscience. I had, however, learnt BASIC the previous year. I don’t really know what I expected to happen as clearly computers were really getting going and it seemed inevitable. There was also only a rather tenuous connection between a home computer and automation taking place in factories. However, by now the usual cycle of job destruction and creation has indeed ceased to operate, as the work created by automation is nowhere near as much as the work replaced by it, or rather, done by computers or robots in some way. My interest in computers was basically to do with CGI, so the appearance of a ZX81 in my life proved to be rather disappointing.

1982 was also the only year I read OMNI. Although it was interesting, and in fact contained the first publication of ‘Burning Chrome’ that very year, it also came across as very commercialised and quite lightweight to me compared to, for example, ‘New Scientist’. It was also into a fair bit of what would be called “woo” nowadays, and it’s hard to judge but I get the impression that back then psi was more acceptable as a subject of research for science than it is today. This could reflect a number of things, but there are two ways of looking at this trend. One is that a large number of well-designed experiments were conducted which failed to show any significant psi activity. The other is that there is a psychologically-driven tendency towards metaphysical naturalism in the consensus scientific community which has little basis in reason. I would prefer the latter, although the way the subject was presented tended to be anecdotal and far from rigorous. From a neutral perspective, there does seem to be a trend in the West away from belief in the supernatural, and the fact that this was thirty-nine years ago means that trend is discernible on this scale.

Then there’s music, more specifically New Wave. For me, because of my age and generation, New Wave doesn’t even sound like a genre. It’s just “music”. This may not just be me, because it’s so vaguely defined that it seems practically meaningless. It’s certainly easy to point at particular artists and styles as definitely not New Wave though, such as prog rock, ABBA, disco and heavy metal, but I perceive it as having emerged from punk, and in fact American punk just seems to be New Wave to me. It’s also hard for me to distinguish from synth-pop at times. British punk could even be seen as a short-lived offshoot of the genre. By 1982, the apocalyptic atmosphere of pop music around the turn of the decade was practically dead, although I still think there’s a tinge of that in Japan, The Associates and Classix Nouveaux. The New Romantics had been around for a while by then. I disliked them because I perceived them as upper class and vapid. I was of course also into Art Rock, and to some extent world music.

In the visual arts, 1982 saw a resurgence in my interest in Dalí, who had interested me from the mid-’70s onward, but this time I was also interested in other surrealists such as Magritte and Ernst, and also to some extent Dada. As with the New Romantics, Dalí was a bit of a guilty pleasure as I was aware of his associations with fascism. This was all, of course, nothing to do with what was going on in the art scene of the early ’80s, although I was very interested in and felt passionately positive about graffiti. I felt that the destruction of graffiti was tantamount to vandalising a work of art. To be honest, although I’m concerned that people might feel threatened by it and I find a lot of it rather low-effort and unoriginal, I’m still a fan of it, although I wouldn’t engage in it myself.

1982 was close to the beginning of the cyberpunk æsthetic. I’ve already mentioned William Gibson’s ‘Burning Chrome’, which first appeared in OMNI this month in 1982, and there was also ‘Blade Runner’, which was already being written about, again in OMNI, although it wasn’t released until September. The influence of the genre can be seen in the graphic at the top of this post. To a limited extent even ‘TRON’, from October, was a form of bowdlerised cyberpunk, with the idea of a universe inside a computer. Cyberpunk is dystopian, near-future, can involve body modification, does involve VR and has alienated characters and anarcho-capitalism, with a world dominated by multinationals. ‘Johnny Mnemonic’ had been published, also in OMNI, the year before. The question arises of how much today’s world resembles that imagined by cyberpunk, and to be honest I’d say it does to a considerable extent, and will probably do so increasingly as time goes by.

On a different note, although the days and dates match up between 2021 and 1982, this will only continue until 28th February 2023, after which a leap day for 1984 will throw them out of kilter again. It can almost be guaranteed that years twenty-eight years apart will have the same calendar. One thing which can’t be guaranteed is the date of Good Friday and the other days which are influenced by it, which means that there is almost always some difference between calendars even when the days of the week match up. I also said “almost be guaranteed”. Because the Gregorian calendar skips leap days when they occur in a ’00 year whose century is not divisible by four, we are currently in a long run of matching twenty-eight year cycles which began in 1900 and will end in 2100. Hence up until 1928 the years of the twentieth century don’t match up on this pattern, and likewise from 2072 onward there will be another disruption of the pattern down into the future. There are also other periods which match between leap days, such as the thirty-nine year one I’m currently exploring, which began last year and includes two complete years as well. This divides up the years a little oddly, because since I was in full-time school at the time, academic years were also quite important to me, and in fact continued to be so right into the 1990s. This makes a period between 29th February 1980 and the start of September 1980, and will also make a further period between September 1983 and 29th February 1984. Finally, astronomical phenomena don’t line up at all really. Solar and lunar eclipses, and transits of Venus and Mercury, for example, won’t correspond at all.
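
If you want to check this sort of calendar arithmetic for yourself, it’s simple enough to automate; a little Python sketch (the function is my own, not anything standard):

```python
import calendar
import datetime as dt

def same_calendar(year_a, year_b):
    """True when two years share their leap status and the weekday of
    1st January, i.e. their calendars match day for day."""
    return (calendar.isleap(year_a) == calendar.isleap(year_b)
            and dt.date(year_a, 1, 1).weekday() == dt.date(year_b, 1, 1).weekday())

print(same_calendar(1982, 2021))  # True: both common years starting on a Friday
print(same_calendar(1993, 2021))  # True: the twenty-eight year rule
print(same_calendar(1984, 2023))  # False: 1984's leap day breaks the match
```

Good Friday, depending as it does on the Moon, takes rather more code.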

So anyway, that’s one of the possibly pointless things I do with my time at the moment. It does bring home to me how slowly time does in fact go, because to be honest doing this seems to have slowed the pace of the passage of time back to how it was when I was fourteen or fifteen. What other effects it has on my mind I’m not sure, although I think there must be both positive and negative influences.