The Haunted BBC Micro?

I used to have an Acorn Electron. The thing about Electrons is that they think they’re BBC Model B microcomputers. Their system software is nearly, or in places actually, identical. However, when you come to use one, it becomes clear that they aren’t: they lack MODE 7, the Teletext mode, have only one sound channel and have only an edge connector as an interface. The 6502 CPU used in both machines has no dedicated I/O ports, unlike the Z80, so peripherals have to be mapped directly into memory. Because of the Electron’s hardware shortcomings compared to the BBC B, there are unused spaces in its memory map where the interface chips would’ve been.

One day I was looking through the Electron’s ROM (system software) and wrote a program to print out the printable parts of these regions. If you simply output memory contents as characters, you end up changing the graphics mode, the position of the cursor, the colours on the screen and so forth, and while that’s entertaining for a bit it isn’t conducive to actually finding out what’s in the computer. This is because the ASCII control characters don’t print; the BBC/Electron version of the character set uses them extensively to communicate with the display hardware in quite sophisticated ways, probably because the BBC hardware was meant to be adaptable as a terminal for a second processor. This second processor was ultimately to be the famous ARM, whose descendants run today’s mobile phones, so the BBC is very much about telecommunications in that sense as well as many others. Anyway, if you blank out the most significant bit of each byte and only print values above 31 (1F in hexadecimal), every character written to the screen will be printable. If you then look at the area of memory which is used on the BBC for peripherals, you find a list of acknowledgements to the people who designed and built the Acorn Electron. For some reason it isn’t stable: the longer the computer’s been switched on, the less legible it is, so it’s a race to see it, but it’s there. I don’t know why it degrades. A reset doesn’t restore the data either; you have to power-cycle the machine to do that.
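
The filtering I describe is simple enough to sketch. Here it is in Python rather than BBC BASIC, and the sample bytes are invented for illustration rather than taken from a real Electron:

```python
# Mask off the top bit of each byte, then emit only codes above 31
# (0x1F), so the control characters -- which the BBC/Electron OS would
# interpret as screen commands -- never reach the display.

def printable_dump(memory: bytes) -> str:
    out = []
    for b in memory:
        b &= 0x7F          # clear the most significant bit
        if b > 0x1F:       # skip the ASCII control codes (0-31)
            out.append(chr(b))
    return "".join(out)

# Invented sample: a control code, some text with the top bit set
# on one byte, and an escape character mixed in.
sample = bytes([0x07, 0xC8, 0x65, 0x6C, 0x6C, 0x6F, 0x1B, 0x21])
print(printable_dump(sample))  # -> Hello!
```

On the real machine the same thing is a loop over the relevant addresses with AND &7F applied before printing.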

I seem always to back losers. For instance, I was a Prefab Sprout fan. If I like something it’s the kiss of death for it. Therefore, unsurprisingly, as well as the unsuccessful Electron I also had an even more unsuccessful Jupiter Ace. I used to do something similar with the Ace’s memory, dumping it to the screen, which is simpler on an Ace because it has fewer control characters. The 3K of static RAM in the Ace, as opposed to the dynamic RAM in the RAMpack, holds a load of apparently random values when you turn it on, although like any other computer it also has system variables, and like many others it has working areas, video RAM, character shapes, the PAD which FORTH uses for text manipulation and, of course, the parameter stack, it being a FORTH computer. The dynamic RAM of the RAMpack has blocks of zeros and hex FFs (255) in eights, I think, all the way through the unused map, which I assume to be an artefact of the hardware, although presumably the CPU also does that thing of writing bytes every 256 locations or so to work out how much memory the computer has. Every time an unexpanded Ace is turned on, it has the same junk data in its RAM.

This phenomenon of nonsense in RAM and defining a word which displays it on the screen gave me the idea I hold to this day of the nature of dreams. It would be possible to get an Ace to turn those data into words. I’ve got it to produce random words in Finnish, for example, mainly because Finnish is an easier language to get a computer to produce than almost any other. English is a lot harder. I could’ve linked the two things together and got the Ace to turn all its random data into Finnish. I didn’t do this because I decided to go cold turkey on computers in about 1985 because I don’t trust my own interests and they seem a bit obsessive and unhealthy, but if I had, I wonder if it would’ve produced different Finnish for every Ace in existence, or if the random data were the same for all Aces. It didn’t happen on the ZX81 by the way. That just has zeros all the way through its unused memory. Anyway, this is my hypothesis about dreaming. When you wake from a dream, your memory contains random data like an Ace’s memory, and your consciousness is like the Finnish converter. It attempts to make sense of these data and you get the impression that you’ve just had an experience, although you usually know you haven’t. This is one of the reasons I always refer to events in a dream in the present tense, because the events in them did not happen in the past. However, this shouldn’t be taken to mean that they are invalid. Dreams are like tea leaves. They can be interpreted as a way of approaching reality with the added benefit that they’re already partly in this state when we receive them.
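
The “random data into Finnish” idea is easy enough to sketch. The mapping below is my own assumption for illustration, not a reconstruction of whatever my original Ace program did; it works because Finnish phonotactics favour open consonant-vowel syllables, so alternating pairs chosen from the right letter sets look vaguely Finnish:

```python
import random

# Map arbitrary bytes onto alternating consonant-vowel pairs. The
# letter sets and word length are arbitrary choices for the sketch.

CONSONANTS = "hjklmnprstv"
VOWELS = "aeiouäöy"

def bytes_to_word(data: bytes) -> str:
    word = []
    for b in data:
        word.append(CONSONANTS[b % len(CONSONANTS)])  # low bits pick a consonant
        word.append(VOWELS[(b >> 4) % len(VOWELS)])   # high bits pick a vowel
    return "".join(word)

# "RAM" full of junk, as on a freshly powered-up machine:
junk = bytes(random.randrange(256) for _ in range(3))
print(bytes_to_word(junk))  # a different pseudo-Finnish word each run
```

Fed the same junk every time, as an unexpanded Ace apparently is, it would of course dream the same words every time.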

In, I think, February 1984 CE, economics teacher Ken Webster took a BBC Model B Micro home from his school to his seventeenth-century cottage in the Cheshire–Flintshire border village of Dodleston. I’m going to be fairly brief about the details of this case, which is extensively written up elsewhere, including in his book ‘The Vertical Plane’, because I want to concentrate on something else. There were three people in the house: Ken, his girlfriend Debbie and a musician who lived upstairs whose name I can’t remember. That night he left the computer on while they all went to the pub. On their return, a poem had appeared on the screen. Over the next sixteen months a series of messages appeared, to which he and some other people responded. Here are a few screenshots from a dramatic reconstruction:

I shall explain. There was an apparent dialogue between Ken and Debbie and a person appearing to live in the sixteenth or seventeenth century called Tomas Hardeman (living before standardised spelling, so his name is uncertain), who initially claimed to be a graduate of Jesus College, Oxford, and later of Brasenose. There are both historical and grammatical inaccuracies in the messages purporting to be from the past. Tomas is arrested for witchcraft and only released after Ken threatened the sheriff who arrested him, who was apparently also communicating. The messages are then interrupted by a source known only as “2109”, possibly a year, which is more threatening and claims to be made of tachyons. Its spelling is also a little peculiar, with single consonants where we might put double ones and the “-tion” ending spelt “-cion”. At the same time, there was poltergeist activity in the house, particularly the kitchen, where utensils tended to be piled up, and on one occasion Debbie came back to the house to find the cats nervous and all the furniture piled up in one corner of the living room. Brasenose College helped with the research and it emerged that there was indeed such a person, who was expelled from the college for refusing to remove the Pope’s name from certain books in the library, confirming what had appeared in the messages. The Society for Psychical Research (SPR) then got involved: they typed a number of questions into the computer without disclosing them to anyone, sealed it in the room for an hour, deleted the messages, and got a reply which implied that “2109” was aware of their content. David of the SPR proceeded to ask the “entity”, if that’s what it was, for the proof of Fermat’s Last Theorem, which was only found in 1994. It replied that the answer would only be given if the questioner was prepared to lose soul, mind and body, so they didn’t proceed.
“Harman” then said that he would write a book about the events to prove that they had happened and hide it somewhere, so that when it was found it would be demonstrated that this was not a hoax. 2109 mainly seems concerned not to cause a temporal paradox. Oh, and the house was on a ley line, but then so was mine, so that’s not unusual. Harman also mentioned that his house was made of red stone, and foundations of a red stone building were later found in the garden, so the house which stood there before was indeed like that, and Harman complained about the alterations made to the house in the intervening four centuries.

The mistakes made in the grammar and history were attributed by “Harman” to interference by “2109”. Both the SPR and more general sceptics agree that it was a hoax, but Ken and Debbie, particularly Debbie, insist that it wasn’t, and it’s still unclear how it was done. Debbie has been very up in arms about it and has expressed her annoyance at being accused of faking, saying she couldn’t understand why people thought so because she had no motive to do such a thing. There are, however, textual similarities between Ken’s own writing and Harman’s. For instance, 26% of nouns are preceded by adjectives in both sets of text, compared to an average of 32% taken from contemporary texts composed by other people, and in Ken’s case the sample is very large as it consists of his entire published book of 374 pages. Although this seems like more than a coincidence, it doesn’t rule out the possibility either that he was doing it unconsciously or that the poltergeist was associated with him in some way, but I’m still basically convinced it was a hoax. Nevertheless there are some enormous difficulties in explaining how it was done.

I’ve seen some annoyingly naïve descriptions of how this was done, so I’ll go into the situation as it was then. Both the internet and email existed at the time. However, although it would have been possible to connect a BBC Micro to the internet (not the web, of course) or to a Bulletin Board System, this computer was not connected in this way. BBC Micros do have local area network connectors in the form of Econet, but again this one was not connected, at least while it was in the cottage. The SPR suggested that signals were being sent along the earth line of the plug and socket through the wiring of the house. Other than ROM, this BBC had no persistent memory. As it happens, this particular machine was running EDWORD, a word-processing sideways ROM, at the time. It was linked to a green-screen monochrome monitor, presumably without a Faraday cage, and there was a 5¼” floppy disc drive with discs available.

The frustrating thing about the investigation is that, as far as I know, nobody seems to have examined the hardware involved. The fact that the monitor was presumably unshielded means that it would’ve been possible to detect its signal and read what was on the screen from nearby using a scanner of some kind, so the content of the questions the SPR investigator typed wouldn’t have been secure even by the standards of the time. There was a dialogue, or at least it appeared to be interactive, and although the BBC Micro could easily run a program like the Rogerian psychotherapist simulation ELIZA or the paranoid “patient” PARRY, the sophistication of the responses would mean it had passed the Turing Test, which would be quite an achievement for a 2 MHz 6502-based micro with 32K of RAM and the same again of ROM.

I regard all this as a puzzle to be solved by naturalistic means, because of the grammatical and historical errors. For instance, in the screengrab at the top, “BEHALTHE” is a spelling mistake which would never have been made by an English speaker of that era, and “WOT” is also incorrect because Midland English at the time strongly distinguished “WH” and “W” in speech, although Southern English didn’t. These would’ve been easy to fake, and they seem to be poorly faked. There is, however, a claim that 2109 had a hand in the apparently older messages, which would explain the historical and linguistic inaccuracies. It’s also plausibly a valid excuse that telling the SPR the proof of Fermat’s Last Theorem would cause a temporal paradox, although it could presumably have been stored in a sealed envelope with the people sworn to secrecy. But I think strong corroboration of backwards time travel would lead to a paradox anyway, meaning that there could only ever be vague references, easily refutable or impossible to corroborate, so this is exactly what one would expect from a responsible message from the future.

The idea of the earth pin is interesting. Although it seems to have been suggested ignorantly by someone who didn’t know much about computers, it would in fact be possible with some hardware modification. The back of the BBC Micro looks like this:

Power is on the right, and likely to carry an earth line; even if it doesn’t, one could be added. One of the other interfaces could then be connected to the earth, although I’m not sure which would be best. The cassette port can transmit data at 1200 baud along a single line, so wiring its in and out to the earth internally, with a way to switch remotely between the two, is possible. Alternatively, a faster connection could be made between the Tube and the earth, and depending on how Econet works that might be another option. The RS423 is, however, the obvious choice as it’s a communications interface. There would then need to be something connected to the wiring of the house, possibly something like a radio mike, which could transmit to and receive from another computer or terminal fairly nearby. But all of this would obviously involve modifying the hardware inside the case. The presence of sideways ROM makes it feasible, although EDWORD would then have to take up less than 16K to leave room for the extra software. Having said all that, I think the comment about signals entering and leaving via the earth is probably just a sign of being uneducated about computers.

The reason for this explanation is of course the need to look for a cause other than communication with someone living several centuries ago and an entity apparently 124 years in the future. The other options seem to be that there was communication with an entity in the future, communication with a timeless entity, communication with someone living in the past and someone else living in the future, or just talking to a ghost. “Harman” mentions a “boyste” of “leems”, I think in his fireplace or chimney. “Leem” means a glimmer of light and “boyste” appears to mean box, which could refer to a CRT monitor, or to the computer itself, or something else. From the way his end of things is described, it also sounds as though voice dictation was supposed to have been involved, factually or not. It feels rather away with the fairies to say this, but it was possible to dictate to microcomputers at the time, by which I mean the 1980s, although I suspect it didn’t work very well.

It really does seem like a hoax, and the biggest issue is really how it was done. Although I’ve mentioned one feasible way, there could be others, and it makes more sense to seek an explanation in hardware hacks than the supernatural or time travel. But that doesn’t mean that there is no paranormal or time travel, and the poltergeist isn’t explained by any of it.

A Look Back At The Third Millennium

Back in 1987 CE, I finally got round to joining Leicestershire public library. In a way this was entirely superfluous as I was also a member of Leicester University library (and still am, because that’s how it works, although I lost my card a long time ago and last used it in the late 1990s), but the kinds of books were different. I used it to get a quick overview of subjects I needed to study in more depth as part of my degree, and also for novels and art books. One of the first books I took out, after Hugh Cook’s ‘The Shift’, which incidentally I highly recommend, was Brian Stableford and David Langford’s ‘The Third Millennium: A History Of The World AD 2000–3000’, which is an unfiction book whose cover image I shall now try to retrieve from the dark recesses of the web:

(actually that’s just Wikipedia). The illustration of the nautilus shell you see on that cover is in fact one of several options, including the acorns which I’ve seen on mine and the library copy, and is a hologram rather than a two-dimensional photograph. There was also a paperback version which I used to own:

The big, hardback version (whereof there was also a large-format paperback I think) scored over the small paperback in the lavish, full-colour illustrations and of course the hologram on the front cover. I don’t know if anyone reading this remembers UB40’s 1982 album UB44:

This was the earlier, limited edition, bearing a hologram, replaced soon after by this:

I actually quite like the second cover as well.

So the thing is, if you were living in Britain in the ’80s, you might have got the impression that the future would have lots of holograms in it. Oddly, the only holograms we seem to see regularly now are on bank cards. I do not know why this is. It seems to me that they’re still pretty groovy (geddit?) and that they ought to be all over the place, but they aren’t. They do have their drawbacks. For instance, this form of hologram doesn’t display true colours but spreads a spectrum across the image. There are ways around this, but not with printed still images. Nonetheless, representational holograms at least were a fad which went out of fashion, and I don’t know why. They were probably replaced by Magic Eye images, also known as random dot stereograms:

I’ve made a few of these, on a Jupiter Ace. They’re quite easy. Another possible visual replacement is the Mandelbrot Set, in a sense.
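
They are indeed quite easy. Here’s a minimal sketch of the single-image random-dot version in Python, printing ASCII dots rather than plotting pixels; the dimensions, dot separation and the hidden rectangle are all arbitrary choices for illustration, not what I did on the Ace:

```python
import random

# For each pixel, copy the dot from SEP columns to the left, reducing
# the separation where the hidden depth map is raised. The eyes fuse
# the repeated pattern, and the shorter period reads as nearer depth.

WIDTH, HEIGHT, SEP = 60, 20, 10

def depth(x: int, y: int) -> int:
    # hidden shape: a raised rectangle in the middle of the image
    return 3 if (20 <= x < 40 and 6 <= y < 14) else 0

def stereogram() -> list[str]:
    rows = []
    for y in range(HEIGHT):
        row = []
        for x in range(WIDTH):
            shift = SEP - depth(x, y)
            if x < shift:
                row.append(random.choice(".#"))  # seed with random dots
            else:
                row.append(row[x - shift])       # repeat with local period
        rows.append("".join(row))
    return rows

for line in stereogram():
    print(line)
```

Viewed with the eyes diverged so that dots ten columns apart fuse, the central rectangle should float above the background.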

Just as holograms seemed like the future at the time but have gone out of fashion, some of Langford and Stableford’s book also, unsurprisingly, proved highly inaccurate, projecting the trends of the time unrealistically into the future, but other aspects were bang on. There’s also something about the illustrations not being CGI: now that we’re thoroughly accustomed to computer graphics, the knowledge that some of these images, although obviously manipulated in an analogue way, had to be based on real models at some point gives them a vividness which CGI lacks. I almost feel sad to say this, because I was very into CGI as a teenager and my main motivation for being interested in computers was their possibilities in that direction, but the idea has a kind of soullessness to it which is quite saddening. It isn’t about whether they’re convincing but about the need to feel an anchor in the physical world. There’s also artistry in how the images must’ve been created where they’re faked. It’s a little like the ingenuity of helical scanning on video cassettes, which made them possible at all.

The most glaring anachronism is that the world depicted has the Soviet Union and the Warsaw Pact persisting for centuries, although it also sees Stalinism as dwindling to nothing very quickly. It was published around the time Gorbachev came to power and after a short period it became obvious that he was going to take the USSR in a very different direction. However, it also predicted that entrepreneurial capitalism would come to an end and that planned economies would become the norm, and this is really what’s happening, particularly in the wake of Covid. What we have now, quite possibly, is a situation where small businesses go to the wall and are replaced by corporations. For instance, a small takeaway could close down due to lack of footfall but its facilities would be bought up by a fast food franchise, siphoning off the income to where it does no good for anyone significant and effectively taking it out of the economy, at least locally. In the book, this process is envisaged as being driven by technological change, where manufacturing becomes more specialised and the division of labour becomes more sophisticated, and this does happen to some extent and may be responsible for the impression I get, at least as an outsider, that individual jobs are often incomprehensible to the people holding them. However, governments are also seen as having to exercise more control over the owners of large enterprises, which I don’t see happening, and I’m also not sure what the writers mean when they say “owners” because of the nature of shares. One thing which does seem realistic to me is the purchase of small nations by multinationals. I can absolutely see this happening and wouldn’t expect it to be confined to small nations either. The description of the real interests of multinationals also seems entirely accurate. 
They are described as constituting great cartels with no interest in competition, concerned instead with avoiding taxation, protecting their markets and maintaining stability. Governments, on the other hand, are seen as opposing them because the corporations try to avoid taxation, but this is the opposite of the real situation, which is that governments prefer to tax the poor, leave the rich to enjoy their stolen money and perhaps find new ways to take money off the poor. It all seems a bit idealistic really, but still.

An interesting chapter covers a series of epidemics, not really pandemics, which break out from 2007 to 2060. The first leads to the overthrow of apartheid: it incapacitates all ethnicities in South Africa, but because the Whites are in the minority this enables the others to mount an uprising against them. South Africa then disintegrates into a number of small self-governing republics. There is a theme of deniability here, and it’s explicitly stated that none of the epidemics is necessarily genetically engineered, although many of them are convenient. Several of them seem to be aimed at particular ethnic groups, and it has been suggested that this might be possible, although I suspect it wouldn’t work very well because of the mixture of genes we all have. One seems to be instigated by the US against Latinx immigrants, and not only succeeds but spreads into Mexico, Central and South America, beginning from Los Angeles, and kills many millions. This one is rather poignant: it happens in 2015, is limited to a year and is quickly contained in the US by a vaccination programme which takes only three weeks, which is amazingly different from the real situation with Covid-19. The attribution, though plausibly deniable, that the viruses involved are genetically modified is an interesting parallel to the real-world conspiracy theory that Covid was genetically modified by the Chinese. The book also depicts a Chinese virus from Wenzhou (温州) causing sudden hepatitis which kills 38 million. In fact it would be possible to identify genetic modification, because entire genes would be spliced in, meaning that long, continuous stretches of genetic code would differ from the wild strains, and at the time of writing genetic fingerprinting was being developed at my alma mater, so in a sense the authors missed a trick.
It is in fact the case that we are likely to be plagued by a series of pandemics due to deforestation over the next few decades, and it’s notable that the predicted death tolls are far smaller than the real numbers of casualties we’re currently experiencing. A new variety of AIDS is predicted for 2032, arising in Poland, whose long incubation period helps it spread and which also causes sterility. The US “triplet plagues” are three simultaneous viruses, one causing paralysis and neuropathy, a second causing leukæmia and a third solid cancers. These kill ten million within a year. By 2060, the viral plagues have ceased, apparently because they hurt the perpetrating groups as much as the intended victims. This chapter is interesting to compare and contrast with the reality of Covid and the probable reality of future plagues, although there’s no need for any conscious instigation for this to happen. Also, they were right about the overthrow of apartheid, although not about the cause or the timing – it’s a quarter of a century later here. Another thing they got right, sadly, was that pandemics would be better managed in the developed world than in the global South.

The chapters on energy use are interesting. They seem to be based on accurate projections of fossil fuel and nuclear power use, although the likes of COP didn’t exist at the time. Coal and oil use peak in 2025 and 2000 respectively, but the cost of oil relative to inflation rises three times as high. It’s a little hard to understand how a fuel used for transport and manufacture can rise in price that fast independently of the prices of other goods, but there might be an explanation somewhere in the text. The reason for the rises in price is that increasingly marginal sources are used, particularly for oil, such as oil shales and sands; from our perspective it can be assumed that fracking is going on. Coal also becomes more expensive because deeper mines have to be dug, and imports of oil get harder because countries want to hang on to their own supplies. This leads to biofuels, mainly ethanol, being developed in countries without these resources. Fission power is if anything less popular than in reality, mainly due to Green parties, which achieve a modicum of power. There is a meltdown in Vologda in 2004, which is probably close enough to other European countries to be significant, and the issue of enforced internationalism is also mentioned, this being an example of pollution leading to neighbouring countries being concerned about each other’s activities.

The US President Garrity, 2024–32, introduces restrictions on commercial plastic use and fossil-fuelled cars, and conspicuous consumption ends. This is unpopular and blamed for a recession, but fuel shortages have already caused a recession by this point which is sufficiently severe that the additional measures make little difference. In fact I wonder whether 2024–8 will prove to be Trump’s second term, in which case none of this will happen, and I’m also sure nothing this pronounced was agreed at COP26. The expense of manufacture and energy leads to equipment being maintained rather than disposed of, which encourages manual labour again. This again is the opposite of what has happened so far: built-in obsolescence is a major issue, although there is the Right to Repair movement, and if it succeeds, maintenance and repair could become more popular by the end of this decade. This chapter also notes that uranium mining suffers from the same unsustainability problem as fossil fuels, but doesn’t mention thorium reactors.

Stableford and Langford blame the energy austerity measures imposed on consumers in the mid-century on profligate use of energy from the mid-twentieth century onward, and we would probably all agree with this. Energy use by individual consumers is rationed and large-scale energy use concentrates on public utilities. Property taxes are based on heating inefficiency, smart meters monitor consumption and issue on-the-spot fines, and long-distance ’phone calls are cut off after a certain period. All of this is intrusively surveilled. Although I can imagine such things becoming necessary, I can’t see them being implemented. Nor can I see steps being taken to prevent us entering this predicament, so there are a lot of questions here about what will actually happen when it comes to the crunch. It is, however, clear that governments are able to exploit the xenophobia resulting from this kind of situation, so whatever else happens it seems clear that right-wing populism will be fuelled, so to speak, by this kind of crisis. On a side note, this section predicts the Roomba.

Three necessities are mentioned for fusion: an accurate simulation, in advance, of changes in the plasma in order to maintain containment; more powerful and efficient magnetic fields; and a form of shielding which would absorb most of the neutrons and protect the outer casing. All of these problems are solved in the book, and they do seem, in my rather naïve view, to capture all the issues. The simulation problem is addressed as an outgrowth of what we now call the Human Genome Project, referred to as “Total Genetic Mapping”, as software was needed to achieve this; I’m sure that’s true, but I’m not sure how it would be relevant, which isn’t to say that it isn’t. In the book, the efficiency of the magnetic fields is achieved through the invention of room-temperature superconductors. Finally, the alloy which acts as a neutron sink is manufactured in orbit, because only in zero G can metals of very different densities, such as aluminium and lead, be alloyed without gravity separating them. I see this bit as an attempt to demonstrate the benefits of zero-gravity manufacturing, but it also addresses the problem of the casing becoming so heavily irradiated that it turns into radioactive waste in its own right and needs to be replaced. Room-temperature superconductors do now exist, but only under immense pressure, so another problem has been created. Previously, the issue of creating magnets powerful enough to contain plasma at sufficient pressure to cause fusion was addressed by using liquid helium to cool the magnets and circuitry almost to absolute zero, leading to a mind-numbing temperature gradient, since the plasma itself is at around 150 million kelvin. Now the problem is pressure, but there may be a hint at a solution in the observation that both the plasma and the superconductor need to be under very high pressure. This is too big a subject to cover properly in this post. Incidentally, fusion reactor efficiencies are misquoted in two ways.
Firstly, the ratio of energy input to the plasma to its energy output is not the whole story, because the plant’s total energy input is greater, and secondly, the conversion of heat to electricity is only fifty percent efficient at best. There’s also the energy input to the tritium breeding process: tritium barely occurs in nature, so it has to be bred from lithium. The alternative is to use helium-3, which is abundant in lunar regolith, but we are nowhere near managing that at the moment anyway. It’s looking like I’m going to have to blog about this subject separately, but this brings home the topicality and relevance of the book to contemporary events, because all the things mentioned are current issues in fusion research.
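
The two caveats amount to a bit of arithmetic worth spelling out. The often-quoted “plasma Q” divides fusion output only by the heating power delivered to the plasma, while engineering Q must divide electrical output by the plant’s total electrical input. The numbers below are illustrative assumptions, not measurements of any real reactor:

```python
def plasma_q(fusion_out_mw: float, heating_in_mw: float) -> float:
    # the flattering figure usually quoted
    return fusion_out_mw / heating_in_mw

def engineering_q(fusion_out_mw: float, total_electric_in_mw: float,
                  thermal_efficiency: float = 0.5) -> float:
    # at best around half the fusion heat becomes electricity
    return (fusion_out_mw * thermal_efficiency) / total_electric_in_mw

# Suppose 500 MW of fusion from 50 MW of plasma heating, but a plant
# drawing 300 MW of electricity overall:
print(plasma_q(500, 50))        # -> 10.0, sounds like a tenfold gain
print(engineering_q(500, 300))  # roughly 0.83, i.e. a net loss
```

So a headline Q of ten can still describe a plant that consumes more electricity than it produces, which is the misquotation I mean.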

By the time fusion power becomes practical, the public and government perceive it as having been dangled in front of them for so long that they’re sceptical, and people have also adjusted to the new energy régime. Biotech is also getting all the money because it’s more glamorous, so it isn’t until the 2090s that fusion generators come online at all. Once they have, there are further delays. It’s realised that the neutrons emitted by fusion can be used to make weapons-grade plutonium; there are squabbles over the siting of the reactors, because it’s felt that some redress is owed to the global South for the wealth previously amassed by the North through fossil fuels; and, given that the reactors are then located there, the cost of building an electricity grid sufficient to carry the power out of those countries considerably offsets the benefits. Deuterium plants also have to be located in or near the sea, which doesn’t help landlocked territories. There are also teething problems, such as damage to the plant from the intense heat and radiation, meaning that the reactors need to be redesigned and rebuilt.

All of this section feels remarkably grounded in reality and practical considerations. There is nothing waffly in this. I can completely buy the idea that should fusion power ever prove practical, this is very much along the lines of what would happen. We can already see Third World nations objecting to what they see as the North pulling up the ladder after themselves by changing the energy goalposts, and this reluctance is basically the same thing. This accords with the general tone of convincing politicking combined with speculative, but not wildly so, conjectures regarding technological and scientific change. This is definitely hard SF.

Unsurprisingly, an issue following on from this is that of anthropogenic, or otherwise, climate change. The emphasis is on global warming and sea level rise although it is mentioned that changes in ocean currents and rainfall patterns lead to unanticipated results such as a general reduction in crop yields accompanied by sporadic increases in some areas due to shifts making land more suitable for particular crops such as cereals. This can be seen in reality today, for instance with the increasingly friendly English climate for grape-based wine production. It’s also uncertain, in the book, how much fluctuations in solar activity contribute to the situation, but again as in reality, they’re generally thought to mitigate the effects of climate change. Ocean acidification hadn’t been identified as a problem at the time and is therefore ignored, as are the risks from clathrate hydrates releasing methane.

The prediction of sea level fluctuation is that it will rise sixteen metres between 2000 and 2120 at a maximum rate of twenty-four centimetres a year and then drop once humanity gets its act together to a stable level two metres above the 2000 level by 2200. Shanghai is the first city to be affected by the rise, starting in 2015 and being obliterated by 2200. Tokyo and Osaka are similarly threatened but this is overtaken by events because in the late twenty-first century Japan is practically destroyed by quakes and the population disperses throughout the globe. Speaking of quakes, attempts to protect Los Angeles and San Francisco are hampered by seismic activity in California. All of this is quite well thought-through, although I have yet to check the elevation of the relevant cities. More widely in the US, attempts to rescue New York City and Los Angeles are the main focus, leading to resentment in the South, particularly Florida and Texas. The bicentennial of the Civil War in the 2060s leads to civil unrest in the Southern States because of the focus on settlements outside the area. This is a little similar to the Hurricane Katrina situation.

Comparing this with real life, Shanghai is indeed very low-lying at 2-4 metres above sea level. China is also disproportionately affected by sea level rise for a continental nation, as is much of East Asia. In Shanghai, there was catastrophic flooding killing seventy-seven people in 2012, and there are attempts to create mangrove swamps to increase resilience. For some reason I don’t fully understand, sea level is rising faster in East Asia than elsewhere; presumably ocean currents, wind patterns and local land subsidence distribute the rise unevenly, but clearly there’s something about the oceans I don’t understand. As for New York City, I don’t know what’s been done yet, but there are plans to fortify the shoreline in Manhattan. The devastation of New Orleans also occurs, but from flooding due to sea level rise rather than the hurricane, and of course this is still on the cards.

Another successful prediction is made concerning public response to climate change. People take it personally and realise it’s about their children and grandchildren. Having said that, it often seems to me that people are remarkably unconcerned in reality about it and I find this puzzling. But we do have Greta Thunberg and Extinction Rebellion.

The destruction of Honshu occurs in 2084-85 and starts with an earthquake followed by the eruption of Mount Fuji and the emergence of a new undersea volcano. This leads to a Japanese diaspora and the blurring of cultural and ethnic distinctions. Clearly this is an unpredictable event, although the nations of the Pacific Rim are all at risk. In order to tell a story, the authors have to commit themselves to a particular date and location, but there’s a more general principle here. It’s a bit Butterfly Effect, because it’s equally feasible that it could have happened to California, which would have different consequences because California is somewhat more integrated with the rest of the States.

There follows a, to me, rather depressing chapter on genetically modified food, where the reduction in yields caused by climate change is only mitigated to subsistence levels by the engineering of varieties more suitable for the new climatic conditions. This leads to the production of SCP – Single-Cell Proteins – initially as fodder but illicitly eaten by vegans as a meat substitute until it’s legalised for human consumption later on. Complete foods are also created in the form of grains which contain all essential nutrients, off which the inventor lives for a decade, though he is accused of cheating. This reminds me of Huel and also, to some extent, breatharianism. Then there’s a description of all the small-scale subsistence farmers who have been forced off their land by megascale monoculture agriculture growing patented crops, which balances the rather technocratic tone of the previous chapter. These are known as the “Lost Billion”, after the number of people affected (short scale), no longer able to farm what used to be their land and reduced to the status of refugees. Some of them resort to armed struggle and others join apocalyptic religious cults as a coping mechanism for the destruction of their way of life.

Sea farming also expands greatly, something I personally strongly believe in, in the form of algal and blue-green algal farming, which would serve to satisfy many nutritional needs while redressing the phosphorus imbalance. Seaweeds are also grown, particularly by Australia due to its extensive shallow seas, but also along the entire west coast of South America. This is from the 2060s. In my mind I envisaged just ordinary seaweed, but their version of it is genetically modified, and also used for biodiesel. It often isn’t realised how much oil there is in algæ, which I presume is to enable them to float near the surface and photosynthesise. As the authors point out, more than two-thirds of sunlight falls on the sea and it is an underexploited resource. Not that it’s ours to exploit necessarily, as it would have an impact on the ecosystem there, but it’s a question of minimising that impact elsewhere.

Unsurprisingly, the most predictable thing ever, the internet, is, well, predicted. Amusingly, ebook readers are for some reason only introduced in the 2060s after false starts from 2005 onward. There are also wall screens. I don’t know if domestic wall screens will ever become popular. In theory we could have them now, as larger screens exist in public places for such purposes as advertising and as whiteboard replacements. All anyone need do is buy one and put it in their home, but people don’t do this. Maybe they will one day, and it’s important to remember that this is supposed to be about what happens in the next 979 years. Speaking of which, it also speaks of financial transactions going through a cycle of security and insecurity, which is entirely feasible if quantum computers develop the capacity to hack encryption through fast factorisations.
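The connection between quantum computing and payment security comes down to factorisation: RSA-style encryption publishes the product of two secret primes, and anyone who can factor that product quickly can reconstruct the private key. A toy sketch in Python, with absurdly small primes purely for illustration, not real cryptography:

```python
# Toy RSA: the public modulus n is the product of two secret primes.
# Recovering those primes is all an attacker needs to break the key.
p, q = 61, 53            # secret primes (tiny, for demonstration only)
n = p * q                # public modulus: 3233
e = 17                   # public exponent
phi = (p - 1) * (q - 1)  # only computable if you can factor n
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

message = 42
cipher = pow(message, e, n)          # anyone can encrypt with (e, n)
assert pow(cipher, d, n) == message  # only the factor-knower can decrypt
```

With real key sizes the primes are hundreds of digits long, which is what makes classical factorisation infeasible and a fast quantum factoring algorithm such a threat to that security cycle.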

Then they talk about employment. They see new technology as eliminating white-collar jobs faster than manual labour, because of the need to maintain that technology and repair the damage done by climate change. Hikikomori are also mentioned, though not by name. The phenomenon is described as “TV withdrawal” and as affecting mainly people in poorer countries, who seek to escape from the reality of life into the more idealised version, particularly in advertising, seen on television. There is resistance to home-working, and people continue to commute because they see working at home for pay as unnatural. I can see some of this to be sure, and in the real world there’s the issue of economic support for ventures which serve commuters and people going to work, such as fast food stands and sandwich shops, among other things. City centres also stay expensive. An interesting phenomenon which as far as I know hasn’t happened is an organisation known as Speedwatch, starting in 2004, which begins as a mutual support group for the victims of dangerous drivers and develops into a vigilante group assassinating motorists who exceed the speed limit or otherwise drive dangerously; although this ends in the perpetrators being imprisoned, it is argued to make the roads safer by introducing a deterrent. Restrictions on private vehicles increase while the leaders are in jail, and on being released in 2021, they claim to have won. Public transport is boosted. Now this would be sensible, which probably explains why it hasn’t happened. Electric cars are introduced but are underpowered. The Sinclair C5’s successors, planned in reality, are more successful. The time frame is approximately correct, with petrol cars ceasing to be manufactured in 2030, by which time there is in any case more home-working. Airships come back too, for obvious reasons. I really want this to happen but don’t think it will.

That, then, is the first part of the book. The wider sweep of the worldbuilding, which extends far beyond the third millennium, was used as the basis of much of Brian Stableford’s fiction, such as the Emortality series and his short story ‘And Him Not Busy Being Born’. His earlier novels bear no relation to all of this as far as I can tell. David Langford mainly writes parodies, such as ‘Earthdoom’, which I have read, but also came up with the idea of the brain-breaking fractal image known as the “basilisk”, which in those stories leads to online images being made seriously illegal. He writes the newssheet ‘Ansible’ and the Ansible Link column in ‘Interzone’, and has won more Hugos than anyone else ever. The rest of the book is also interesting but tends to branch out beyond what’s relevant today. There is a first contact towards the end, but since humans have been so genetically modified by then, it doesn’t really feel like one. They also remove the natural limit on the human lifespan, so that there are no longer such things as disease and old age, and this is an important issue in much of Stableford’s work.

It isn’t so much for its accuracy or datedness that this work is interesting as for its focus on Realpolitik and the quality of the research put into it. Yes, it’s dated, and yes, it reflects the time it was written in (these are not the same thing), but it’s also believable and quite frank about the risks we present ourselves with, particularly in the areas of climate change and fossil fuel use. I highly recommend it, even now.

Two Forthcoming Projects

Shamelessly nicked from here, and will be removed on request, but I regard this as an ad for the OU course this is taken from

I generally resist medicalisation, and I’ve previously written on ADHD, so this isn’t primarily going to be about that issue in spite of the illustration. Nonetheless it’s there, and it means that like many other people, perhaps even everyone, the cog that represents me doesn’t fit well into the social machine, which is a problem for both society and myself. I would also say that my ADHD is just something which came to the attention of educational psychologists and medical professionals in the ’70s, when it was called hyperactivity, and is an aspect of my personality among several which entails a poor fit with society. In the Diagnostic and Statistical Manual of Mental Disorders, a problematic work per se but maybe somewhat salvageable, there’s often a category at the end of each set of disorders labelled “not otherwise specified”, which is the wastebasket taxon as it were, a “diagnosis of elimination”. As a healthcare professional, I’m aware that the textbook cases are the exception, and most of the time people have an array of signs and symptoms which can’t be easily pigeonholed, and the real puzzle is why anyone at all actually has the same condition. Leaving that aside, it’s also unclear if it’s appropriate to view mental health analogously to physical health at all, and there’s the social model of disability. Hence I will assert myself, controversially, as being “neurodiverse, not otherwise specified” and leave it at that. Strictly speaking this is a neurodevelopmental condition rather than a mental health one, but let’s not get even more bogged down.

All that notwithstanding, a few days ago someone asked me what my plans were. I misunderstood the question, thinking I was being asked how I planned to generate an income in the long term when it was really about our relationship, which of course I won’t go into here, except to say that a plan to generate an income can be very important to a relationship: it’s nice to be in a position to take care of someone well and to have enough money to help others, and there is of course the psychological benefit of being gainfully employed, such as it is, and occupied in something which connects to the common good in some way. It’s partly about good mental health and social obligation. That said, I completely reject the work ethic, because most paid work is probably harmful in the long run to society and to the person doing it; the problem is finding work that doesn’t do more harm than good, and that’s rare. Even so, I do sometimes succeed in getting people to give me money for what I do. In particular, I currently have a couple of ideas for medium-term projects, which I’m going to outline here. In doing so, I’m going to yank this blog post in the direction of another blog of mine (which I hardly ever write), but these things happen.

I’ll use headings again, I think. At some point I might even work out how to do hyperlinks within the post, but that’ll probably involve tinkering with the HTML. I don’t think it can be done with the WordPress block editor (grr).

1. The Ethical Periodic Table

Right now I’m not sure what form this will take, but it seems to lend itself much more to something online, or perhaps an app, than a physical book. Like my second idea, this has been kicking around a while, and this is the thing. I’m pathologically procrastinative. In case you’re wondering about the wording of that last sentence, I’m trying to avoid using a noun to describe myself because I think that fixes one’s identity mentally in an unhelpful way. Anyway, it goes like this. The Periodic Table may be the most iconic symbol of science. Right now I’m hard pressed to think of another one, although the spurious “evolution” parade purporting to show constant progress and the chart of the “nine” planets come to mind, these however being very much popularisations. As well as having chemical and physical profiles, each element also has an ethical, social and political profile connected to how it interacts with human society. For instance, arsenic is very high in drinking water in Bangladesh, tantalum has been associated with civil war in the Congo and there is an issue with phosphorus and algal blooms, among many other things per element. My “vision” is to provide a clickable periodic table with links to information, which I hope will be regularly updated, to balanced social profiles of each element, and I’m also curious as to whether there’s a pattern here: do some groups of elements present bigger problems than others and are there possible substitutions? This clearly lends itself much more to a computer device treatment than a book of pages, although one of those books with tabs might work. This suggests it could be an app as well as a website.
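As a sketch of what the underlying data might look like (the structure and names here are purely my own illustration, not a settled design), each element could carry a short machine-readable social profile alongside links out to fuller articles:

```python
from dataclasses import dataclass, field

@dataclass
class ElementProfile:
    symbol: str
    name: str
    issues: list[str] = field(default_factory=list)  # ethical/social notes

# Example entries drawn from the post; a real table would cover all 118,
# keyed by atomic number so it can be laid out as a periodic table.
profiles = {
    15: ElementProfile("P", "Phosphorus", ["runoff feeding algal blooms"]),
    33: ElementProfile("As", "Arsenic", ["contaminated drinking water in Bangladesh"]),
    73: ElementProfile("Ta", "Tantalum", ["coltan mining linked to conflict in the Congo"]),
}

def profile_text(atomic_number: int) -> str:
    """Render one element's social profile, or a placeholder if unwritten."""
    p = profiles.get(atomic_number)
    if p is None:
        return "(no social profile written yet)"
    return f"{p.name} ({p.symbol}): " + "; ".join(p.issues)

print(profile_text(33))
```

Keying the profiles by atomic number also makes the “do some groups present bigger problems than others?” question answerable with a simple query over the table, since group and period follow from the atomic number.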

2. Corner Shop Herbalism

I detest the tendency for certain exotic herbs to be pushed and regarded as miracle cures and the answer to everything. I think this distorts research and is often environmentally unsustainable. I also think there’s a lot of gatekeeping in my profession which does not serve the public interest, but at the same time I’m aware that many people lack the knowledge needed to deal with their own health problems easily, particularly in the realm of diagnosis. Consequently, for decades now I’ve had the idea of producing a book called ‘Corner Shop Herbalism’, about using herbal remedies which can easily be obtained over the counter, or foraged as invasive weeds and other common species, while maintaining their sustainable use. I’ve already planned this book to some extent and it covers a surprisingly large number of species, probably totalling more than a gross. This would be accompanied by various other chapters about when to seek professional help and details of why herbal medicine is a rational, vegan and useful approach to health. This could also be a website, but it lends itself to being a physical book because that makes it a field guide useable with no electronic adjunct, and who knows when that might become necessary? We all know of the Carrington Event, after all.

Publicity And Marketing

This is the difficult, possibly insurmountable, obstacle. Self-publishing nowadays is easy. You just organise your manuscript into printable form, get a cover together and have people order it. People have different sets of skills, and the ability to publish without approaching a publisher replaces the problem of getting yourself published with the problem of publicity and marketing. This works fine for some people if they also have an aptitude in those areas, but it usually fails. I have a Kindle Fire, and I do recognise the considerable ethical issues with Amazon of course, but one thing I see a lot is a very large number of ebook adverts and recommendations. I have never followed up on any of these. Although I’ve advertised my business profusely myself, my usual response to an advert is to wonder what’s wrong with the product that it needs to be pushed. You don’t see ads for potatoes or petrol because people recognise the importance of those in their lives and they sell themselves.

Advertising is ethically and practically complicated. The German “Anzeige” translates both as “advertisement” and “announcement”, and I find this enlightening as to the nature of advertising. At its best, if you believe the fruits of your labour will enhance potential customers’ quality of life, you still need to make them known to the public, and this is absolutely fine. However, the quality of goods and services often seems to be in inverse proportion to how heavily they are advertised, which supports my tendency to become suspicious of a product. There was a fairly prominent advert for the British Oxygen Company in the 1980s CE which depicted a lake full of flamingos which they claimed had previously been lifeless and which they had managed to restore to a healthy state. This immediately provoked in me the question of whether they had done something dodgy more generally and were trying to boost their image. It isn’t relevant whether they actually had, but if this kind of suspicion is often raised, it can make publicity counter-productive. On the other hand, maybe few people think like this. Regardless, there’s a tension between the contrariness of people generally and getting your product out there, and I don’t know how to resolve it.

I never pay for advertising now because of my history with it. The only advertising which ever worked was the Yellow Pages and by that I mean that no other form of paid advertising got me a single client. With the Yellow Pages, it worked to a limited extent and then, oddly, about half way through one year of advertising it suddenly cut off completely and I never got another customer (for want of a better word). I am still mystified by this. It’s clear that online advertising and other such activity killed the Yellow Pages, but there was no gradual decline in my case. It just stopped dead with no period of tapering off. After that, I cancelled the advertising and relied on word of mouth, which is of course very useful.

How to apply this to books, though? Is the kind of marketing and publicity applicable to a herbal practice (and apparently not much is) comparable to that for a book? It would seem to involve other forms of publicity such as talks, walks, courses and signings, the first two of which I’ve done often and fairly successfully in terms of raising the general profile of herbalism, though not in gaining clients. Would this work for a book? Is it possible to put together a course based on the ethical periodic table idea?

Many people worry about their image on the internet and their data being used for nefarious purposes. While these are legitimate concerns, mine don’t lie in this area. From the start, I’ve thought of behaviour online as like sending postcards: everyone can see what you’re doing, but there are so many of them that the chances of being noticed are minute. It’s like the lottery – the odds of winning are insignificant. In some places the odds are stacked against you, as for example with YouTube. As far as reading is concerned, there’s the issue of what might compete for the time which could be spent reading your own writing, and it’s notable that many people don’t even venture forth from social media to read the content at all. I am guilty of that to some extent myself, but I also watch myself so that I do it as little as possible. There’s much to be said about social media and personal data, but I won’t say it here, because most of it is only relevant to my writing in terms of constituting a distraction from it. Consequently, I will do some promotion of the work on Facebook and Twitter, but I don’t anticipate much response. How one would actually succeed in getting a response is another question, and I have no answers. I do know that my own efforts at search engine optimisation haven’t yielded much.

It’s easy to imagine a conspiracy or malice here, but in fact the answer is far more likely to be the impersonality and volume of the internet which causes this. Therefore, anything one does in this respect needs to be done for its own sake, and not to get an income or make a living. What one actually does to make a living is unknown, and as far as I can tell impossible. I’m always overawed by people who manage to have a full-time paid job because it is so far beyond my capabilities and I have no insight into how people do it. Consequently, I just do things which I consider worthwhile, and I definitely consider these two projects to be valuable, so I’ll be doing them with no expectation of a significant response. This is galling, but I’m used to it. I still don’t know how I’m going to survive though.

That’s all for today.

11A0 – 11B0

One of the drawbacks of the Unicode system is that its support for duodecimal is thin: it has the turned digits ↊ and ↋, but not the various other proposed dozenal symbols, and I can’t easily type any of them here. Hence I’ve resorted to using A and B to represent ten and eleven. When I first thought about writing this post, it was going to be about the 1990s CE, but since I am fairly committed to duodecimal it’s instead about the years 1992 to 2004. At the start of this cycle (which is what I call the analogue of the decade in duodecimal, after “A Cycle Of Cathay”), I was two dozen years old, and at the end, obviously, three dozen, so it covers what might be regarded as the first cycle of my adult life – almost the length of a Jovian year, in fact. The brain is said to stop growing at the age of two dozen, so that could be said to mark the beginning of adulthood. It’s sometimes informative to shake up the way we measure space and time to see if it brings any new insights.

One insight this brings concerns the tendency of most of the world to think in terms of decades, centuries and millennia: the rhetoric, the marketing and the psychological divisions created by nice, neat round numbers in our lives and history will tend to be at odds with this method of reckoning ages and dates. There appears to be a sudden flurry of activity around 11A8, the year of Y2K and the turn of the Millennium, which looks quite distinct and perhaps a bit odd from a duodecimal perspective. Had we been working to a different base, and let’s face it, it probably would’ve been octal or hexadecimal rather than duodecimal because of how digital computers represent integers, the year 2000 would’ve been 3720 or 7D0 respectively, both round numbers to be sure but not epoch-making ones.
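These conversions are easy to check mechanically. A small Python sketch of positional-base conversion, using A and B for ten and eleven as in this post:

```python
def to_base(n: int, base: int, digits: str = "0123456789ABCDEF") -> str:
    """Render a non-negative integer in the given base (2 to 16)."""
    if n == 0:
        return "0"
    out = ""
    while n:
        n, r = divmod(n, base)   # peel off the least significant digit
        out = digits[r] + out
    return out

print(to_base(1992, 12))  # 11A0 - the start of the cycle
print(to_base(2004, 12))  # 11B0 - its end
print(to_base(2000, 12))  # 11A8 - "Y2K" in duodecimal
print(to_base(2000, 8), to_base(2000, 16))  # 3720 and 7D0
```

The same function covers every base used in this post, which is handy for checking the duodecimal years against their decimal equivalents.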

While I’m on the subject of Y2K, this was one significant concern during the 11A0s. However, in some ways it was also a decidedly odd one. Whereas it made sense that various mainframes would be grinding through two-digit representations of the year, programmers of yore having opted to save storage space back in the 1170s and 1180s because they expected the year 11A8 to be the realm of science fiction, hover cars and holidays on Cynthia, Microsoft didn’t have the same excuse, because DOS had stored the year as a value counting from 4th January 1980 (in decimal), which would not have rolled over on 1st January 11A8 at all, and for some reason it was a problem they actually introduced with Windows, when it became an operating system rather than a front end, quite a bit closer to the crucial date. I have no idea why they did this but it seems irrational.

There is an æsthetic based on this period, or the latter half of it at least, characterised by futurism, optimism and shiny, liquid and spherical 3-D CGI. It was the cycle the internet went mainstream, and up until 9/11 there seemed to be a distinct atmosphere of optimism about the future. It may have been ephemeral and vapid, but it was there. And this is where I have some sympathy, though not agreement, with the conspiracy theories built up around the Twin Towers. I can’t remember the minutiæ of their content and it may have been rather dissimilar to my view, but the parsimonious, Ockham’s Razor-style approach to be taken to this is to assert that building up the War On Terror around the incident made it very convenient for the military-industrial complex. It would be going too far to assert anything else, or to insert “suspiciously” into that, and in fact to do so would distract from the situation we need to confront: that it led to the situation where the idea of making life better for people was discarded for a fatuous agenda of protecting the public from violence committed by non-state actors, without regard for the cause of these acts or how to prevent them by changing social conditions, or comparing the number of people killed with the number killed in the countries concerned by the NATO powers. Subjectively, it was like they just couldn’t let us be hopeful or look forward to a better future. Oh no. They had to crap on our dreams instead.

But the dreams were in any case nebulous. In this country they were, for me, associated with the fairly mournful and small expectation that New Labour had been lying about being right wing extremists. That government also entered into an illegal war on the back of 9/11. Even so, on the day after the election in May 11A5 people were smiling at each other in the street, because we thought the dozen-and-a-half-year-long nightmare was finally over. For me, much of the time was very positive, because in that period we got married and had our two children, but this isn’t meant to be personal. Contradicting that, it was also when I got heavily involved in home ed, and when I trained, qualified and started to practise as a herbalist.

This was also the cycle when the internet became the Web. That actually started with the WorldWideWeb browser in 119A, Tim Berners-Lee’s invention of course, but even when I started using the internet at home in 11A7 the older services still had quite a presence, in the form of the likes of Usenet, FTP sites and so on. At the time, this seemed like an entirely positive resource, although I had reservations about inequality of access in the global South which led me to doubt the wisdom of allowing myself the privilege. It was also very expensive in terms of bytes per pound compared to today. What was definitely absent at the time was the strong influence of social media. There’s a sense in which social media have existed since the 1170s in the form of PLATO at the University of Illinois, and behaviour on bulletin boards was quite like today’s, but the scale on which it happened was very small compared to the world’s population. Classmates is a possible instance of the earliest social media website, although there are various contenders: it dates from 11A3. In another area of IT telecommunications, mobile ’phones started to take off and, as an afterthought, texting was included. This became very significant during the 11A0s and mobiles moved from being yuppie devices to must-haves. I actually still haven’t adjusted to this, to the annoyance of my immediate family, so in a sense the revolution afforded by mobile devices hasn’t happened for me in the same way. On the whole, I don’t think this is a bad thing.

Things were a lot more analogue back then. Video cassettes and laser discs, the latter very obscure to most people, were the only way to watch things on TV other than actual live-broadcast television itself. However, digital optical discs had existed since before the beginning of the cycle. This is a pattern, not particularly distinctive of the ‘A0s, that the technologies which were later to transform society already existed but had not been widely adopted. However, I don’t want this to turn into a mere consumerist survey of high-tech products, so I’ll go all the way back to the “End Of History”.

In 11A0, Francis Fukuyama claimed in his book of the same name that history had ended. What he meant by this was not that events would cease to occur, but that liberal democracy had proved itself the best form of government and would in the long term become increasingly prevalent. This is an overwhelmingly depressing and perhaps smug position, and in fact I don’t think it even makes sense. The problem with the idea that liberal democracy will triumph is that the parties involved in such governments would ideally aim for something other than liberal democracy, such as fascism or socialism or something less extreme, and proper politics without such aims is impossible. Fukuyama’s view of “democracy” would be anything but, because it would involve bland, practically identical political parties which did nothing to change the status quo, and that isn’t democracy, whether you’re right wing, left wing or something else. It has also proved not to be so in any case, since nationalism, conservative religion and various forms of authoritarianism have become more influential since then. Now, I have to admit that I haven’t read his book, but the ideas are around in public discourse. This is related to the blandification of the Labour Party during this period. People didn’t seem to want to vote for something which was actually good.

One of the most shocking things for British progressives in this period was the Conservative victory in 11A0. It was widely believed that Labour would win the election that year, and even the exit polls strongly suggested a Labour majority. Instead, the Tories received a record-breaking number of votes. Following on from my experience the previous year, when I became utterly disgusted with popular support for the first Gulf War, I just got really angry at English people in general for their dishonesty and cowardice. They hadn’t admitted that they were voting for the “nasty party” because they were ashamed, so on some level they either recognised it was wrong or knew they wouldn’t be able to convince people it was the right thing to do. This was probably the first time I experienced the peculiar nightmarish quality of a traumatically negative electoral or referendum result coming in on the radio overnight, which was to be repeated several times up to the Trump and Brexit results. It also made the relatively progressive years between 1161 and 118B look like a blip in history, a time when things were getting better for the common people, an idea which was itself now consigned to history.

All of that sounds quite depressing. However, it isn’t the whole story. The beginning of the cycle had been a time of awakening consciousness for many people, with Acid House and Ecstasy becoming important. I didn’t partake myself although the end of the previous cycle had involved a lot of dancing and clubbing. It felt like there was going to be some kind of conceptual breakthrough, although it had also been observed that the use of psychedelic drugs like LSD at that time was more like wanting a picture show than a fundamental shift in consciousness. I can’t comment from an informed position on that, but it seems to me that they have such a profound influence on the mind that even if people went into it with that in mind, they would still come out profoundly changed. Of course, the government either didn’t like this or decided to capitalise on some mythical “Middle England” by introducing the Criminal Justice Bill with its notorious “succession of repetitive beats” clause, and a number of other measures such as the end to the right to silence. This was in 11A2. It also clamped down on squatters, hunt sabbers and anti-roads protests. Another quote from the government at about this time was something like “we don’t want to go down in history as the government which allowed any kind of alternative society to survive”, which had a flavour of genocide about it. Also, in order for that to work, society as it was would need to have some kind of appeal to it and not be bent on the destruction of the planet.

In many ways, then, this period was one of contradictions. The establishment was heavily asserting itself in academia, which made me wonder about complacency in that area. This was just after I’d dropped out of an academic career in disgust at Nick Land’s and other people’s response to neoliberalism as almost something to be enjoyed, and feminist hostility to animal liberation. It occurs to me now that I might have stayed to defend progressive opinion and movements, and after that disillusionment I became rather aimless and cynical. But on the other hand, it was also a cycle of hope and optimism, with the expectation that progress could be made in other ways. And it wasn’t all negative. Nelson Mandela became president of South Africa, Germany reunified (this is a mixed bag of course but it meant the reunification of communities too), there was the Good Friday Agreement (again a mix because it seemed to mean giving up hope of a reunified Ireland), the re-establishment of the Scottish Parliament and the establishment of the Welsh Assembly, and on reflection the real flavour of that period was a strange mix of hope and despair. Hope seemed to be sustained through lack of political analysis and despair emerged on close examination of events, but that doesn’t invalidate the more positive side. I suppose the real question is, how can we extend the principle of hope, as Ernst Bloch put it, from this superficial shiny façade into something more profound which transmutes political action into something valuable?

Blogging And Politeness

This one’s a bit navel-gazy because I have something else coming up which needs a lot of attention.

If you have a WordPress blog, you’re presumably aware that it gives you a Mercator projection map of the world with a kind of heat map on it showing which countries get views of your posts. I’ve pondered this map a lot, and it troubles me on one level that it’s Mercator at all for all the usual reasons which I’ll just go into briefly here.

The Mercator Projection aims to produce maps which preserve compass direction and is about four and a half centuries old. It’s notorious for making the northernmost areas look much larger than the equatorial ones, and although it does the same in the Southern Hemisphere this only really affects Antarctica, because apart from that continent the southern land lies closer to the equator than in the North. It also has the remarkable property of being infinite: it has to be cut off at the top and the bottom because it stretches distances ever further and never reaches the poles. I sometimes imagine it showing individual snowflakes at the top and bottom. There are also some other choices made about the Mercator Projection as I usually see it here in these isles: it puts London in the middle and the North Pole at the top. Hence it’s responsible, for example, for the phrase “Sub-Saharan Africa”, which I dislike because it makes it sound like the force of gravity acts in a north-south direction and that Afrika south of the Sahara is somehow inferior, literally so in fact. However, all map projections but one distort areas or compass directions. You can project a globe onto a dodecahedron or icosahedron whose faces intersect the surface, which produces very little distortion (and may be familiar to GURPS roleplayers), but this messes up directions. The other thing you can do is create a spiral whose spacing is infinitesimally small and unravel it, producing a one-dimensional strip of the surface which distorts nothing and is infinitely long, but that’s a mathematical curiosity with little practical use on the global scale, and in any case it sacrifices the whole idea of compass directions.
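The runaway stretching can be seen directly in the projection’s defining formula: the vertical coordinate is y = ln tan(π/4 + φ/2), which heads off to infinity as the latitude φ approaches a pole, and the east-west scale is inflated by a factor of 1/cos φ. A quick sketch (Python; the sample latitudes are arbitrary):

```python
import math

def mercator_y(lat_deg):
    """Mercator northing for a latitude in degrees, on a unit-radius
    sphere: y = ln(tan(pi/4 + phi/2))."""
    phi = math.radians(lat_deg)
    return math.log(math.tan(math.pi / 4 + phi / 2))

# y grows without bound as latitude approaches 90 degrees, which is
# why the map has to be cut off before it reaches the poles.
for lat in (0, 45, 60, 80, 89, 89.99):
    print(f"{lat:6.2f} deg  ->  y = {mercator_y(lat):8.3f}")

# The east-west scale factor is 1/cos(latitude), so land at 70 degrees
# north (roughly Greenland's latitude) is drawn nearly three times too
# wide relative to the equator.
print(round(1 / math.cos(math.radians(70)), 2))  # -> 2.92
```

The pay-off for all that stretching is that a line of constant compass bearing comes out straight, which is what made the projection so useful to navigators in the first place.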

Map projections are in a sense a question of etiquette, particularly if you’re trying to interact with the whole global population, or at least an evenly-distributed self-selected sample thereof, which is, I hope, what I’m trying to do. If you have a map which unnaturally shrinks certain areas and enlarges others, you are in a sense shrinking and enlarging the inhabitants. There are some other problems with this map too, and with any map which isn’t zoomable as far as I can tell. Micronations and smaller island nations are not really visible on it. There’s a list of countries accompanying the map, which helps, but you can’t see San Marino, Malta or Vatican City, and while there are pop-ups as your cursor hovers over the map, trying to find a small country in the Caribbean or Polynesia, for example, is like playing darts. It’s also complicated by the way states claim territories. This is in fact a political map of the world, omitting, for instance, Antarctica because it isn’t a country, which is fine and a practical solution to some degree, but choices are always made with these things and they’re always political, because everything is political.

When I was about ten, there were 225 countries in the world. There are now 193. At this rate, we should have world unity by 2280, assuming the reduction is linear. I don’t know why this decline has taken place. If you include Vatican City and Palestine, the number rises to 195, and of course that’s a political decision too. In fact, because everything is a political decision, whatever claim you make about even this number is going to tread on people’s toes. It’s all very touchy. For instance, I’ve mentioned Palestine now, which will probably offend some of my Israeli audience. And this is etiquette as well as politics. I remember a conversation I had in the early 1990s where I didn’t know how to refer to the northeastern part of the island of Ireland, and my interlocutor clearly regarded it as incontrovertibly part of the United Kingdom, as if the matter weren’t even controversial, when to me it really is very controversial indeed. I mention the date because of the Good Friday Agreement. But then maybe it’s important to clear the air sometimes and just be provocative. This relates to the universal polarisation problem, which seems to have been worsened by the way people interact online but was always at least potential if not actual, and was a lot worse in some parts of the globe than others, some of them rather close to the English Midlands and Home Counties.

Consequently, as I sit here gaily typing away on this keyboard, always at the back of my mind is the awareness that the retinæ whereupon my words will be projected will have originated from zygotes of various genomes, karyotypes and locations, onto which social construction will have projected ethnic and national characteristics, and I am bound to mess up from time to time, probably obliviously, and quite possibly never even find out what I’ve done wrong. It’s an adage of running a business, which this isn’t of course but the point still applies, that the majority of potential customers don’t give you feedback when they decide not to go with you. They just drift off, never to be seen again. Certainly my own interaction with, say, a blog answers to this description. But it means that whatever it was that put someone off is harder to discover, particularly when what one has done causes offence.

The above map is just for 2021 so far. The all-time map, dating back to I think about 2015 or so, still shows a similar picture and of course one of the things both maps incidentally show is that not many people read this blog. This is fine because I’m not really interested in getting a bigger audience, and its function is substantially somewhere to dump my thoughts and, I hope, improve my writing style. It doesn’t have an internally coherent set of topics either, hence the name. From the outside, there’s probably a pattern, but that’s going to arise from my personality, life history and identity. If it had a coherent theme, it might get more readers but that isn’t really my aim here. It is, however, mainly in English, and at a guess I’d say the second language on here is Ancient Greek, and that restricts the readership. It means many people will be reading it in a second language most of the time, and my readership will mainly be first-language English.

The biggest difference between the 2021 map and the all-time map is probably that the latter has more Afrikan countries represented, mainly on the Mediterranean coast. I wonder about this. I did blog quite a bit about North Afrikan concerns in the fairly recent past because of my feeling that North Afrika tends to be erased in the global consciousness to preserve an ethnic distinction between Black and non-Black people, and also due to the dominance of Arabic culture in the Maghreb, which tends to mask what’s going on in smaller communities there such as the Tuareg and Berbers. I haven’t done that so much recently because my own focus has moved somewhat southward in connection with the issues relating to BLM. Either of those topics I will be talking about mainly, though not entirely, as an outsider. For instance, the fact that what happened to some of my fairly recent ancestors is largely unknown is linked to the Atlantic slave trade, and there are a few other minor issues, but they’re trivial compared to proper full-scale racism, in which, as a White person, I am obviously implicated.

There are something like four dozen sovereign territories where nothing I’ve written on here has been read at all. This could sound a bit imperialist and egoistic – “I want to be heard all over the world” – but the real question is what are the factors, positive or negative, that lead to the distribution I see on this map. Actually, I am going to include the all-time map because this is getting silly:

Unsurprisingly, the darkest areas of this map are mainly English-speaking, namely these isles and the United States. In fact, the largest readership of all is in this country, demonstrating that the local connection is at least as important as the language I use. The US is a close second, followed by Canada and Australia. The distribution of views is close to log-normal, a heavy-tailed distribution which produces the kind of concentration popularly summarised as the 80:20 rule (strictly speaking, the Pareto principle). Here’s a plot of the log-normal distribution:

(and here the limitations of the Chromebook I’m doing this on become apparent because I didn’t plot this myself, just copy-pasted it from a free source).

The darker blue line is the most germane. My blog has been viewed in just over a gross of countries. The thirtieth country on the list is Norway, with three dozen views, which is where a fifth of the list is reached. The notable absences are in southern Afrika, Outer Mongolia, Bolivia, Papua and Kalaallit Nunaat (Greenland), and there are also no views from North Korea, Cuba or Suriname. The complete list of countries and territories which haven’t seen my blog is: Papua, North Korea, Kalaallit Nunaat, Outer Mongolia, Bolivia, Madagascar, Lesotho, Eswatini, Western Sahara, Senegal, The Gambia, Svalbard, Jordan, Syria, Iran, Mozambique, Cabo Verde, Zimbabwe, Somalia, Laos, Mauritania, Guinea, Guinea-Bissau, Equatorial Guinea, Angola, Congo, Democratic Republic of the Congo, Niger, Malawi, Sudan, South Sudan, Benin, Tchad, the Central African Republic, Afghanistan, Suriname, French Guiana, Turkmenistan, Uzbekistan, Tajikistan, Kyrgyzstan, Bhutan, Cuba, Haïti, Liechtenstein, San Marino, Vatican City, Andorra, and some small island nations in Polynesia, the Caribbean and possibly elsewhere. There are some outliers which I don’t fully understand, notably Romania, which I think resulted from me entitling one post ‘Caveat Procrastinator’, and unsurprisingly there are also hits from Romania for Transylvanian English. What these stats fail to capture is how much of a blog post is read. I imagine most of them are just briefly glanced at.
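That kind of concentration can be sketched numerically. A toy simulation (Python; the per-country view counts are randomly generated stand-ins, not my actual statistics) draws “views” from a log-normal distribution and checks how much of the total the top fifth of countries accounts for:

```python
import random

random.seed(42)  # fixed seed so the toy result is reproducible

# Made-up per-country view counts drawn from a log-normal distribution;
# the parameters (3, 1.5) are arbitrary, chosen only to give a heavy tail.
views = sorted((random.lognormvariate(3, 1.5) for _ in range(150)),
               reverse=True)

top_fifth = views[: len(views) // 5]   # the 30 most-viewed "countries"
share = sum(top_fifth) / sum(views)

# With a heavy tail, the top ~20% of countries end up with the large
# majority of all views, echoing the 80:20 rule.
print(f"Top fifth of countries receive {share:.0%} of all views")
```

The exact percentage shifts with the seed and parameters, but the qualitative result is robust: a small head of the list dominates and a long tail of countries contributes a handful of views each.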

Some of the gaps in the list probably reflect political and development issues. For instance, it isn’t that surprising that central Afrikans don’t read my blog. Bolivia may be an example of this. There’s also the question of censorship, which can be summarised by this map:

The green countries on this map have the least censorship and the fuchsia the most. It’s probably worthwhile combining this with a global internet access map:

(Chromebook limitations are again apparent here). This second map is based on a composite statistic known as the Web Index, which has “no information” in the places where I have tended not to get any views. It attempts to combine ease of access, freedom of information and empowerment, so to some extent it includes the data on the previous map. It’s also notable that a number of the countries involved which are freest on that map also seem to have poor internet access, so it’s more like the governments concerned don’t consider the internet to be sufficiently influential in their countries to bother to do anything about it.

Besides all this though, I’m often concerned about a clash of values between what I write and those of people reading it, and perhaps between their values and my identity, in various ways, such as ethnicity and the fact that I’m quite left wing and vegan. I sometimes feel like there are whole swathes of the planet where I could not exist and might as well be underwater as far as I’m concerned, not because I have any enmity with the people there but because they wouldn’t tolerate various things about me. When I see that someone has read a blog post of mine from there, it gives me pause for thought. In particular, I tend to get quite bothered by clashes in political opinion. I’m aware that I’m to the left of practically everyone. This is my chart according to Political Compass:

I’m aware of the inadequacies of that site incidentally. But the thing is, I care about people and I’m interested in politics. The mere fact that I’m libertarian socialist does the opposite of stopping me from caring about people, whatever their political views are. Likewise, being a religious theist doesn’t stop me from caring about non-religious people for their own sake, but does the opposite. Same with being vegan. I am all these things because I care about you all, whoever you are who may be reading this.

It’s just very difficult to be polite to everyone, particularly when one knows very little about their country, background and life. Consequently, it’s incumbent upon me to learn as much as I can about the human world, so as to be able to empathise with all of humanity. It’s not achievable, but surely it’s a worthy goal.

Astronauts Vs Computers

‘Rocket To The Renaissance’, written by Arthur C Clarke in about 1960 and expanded upon in his epilogue to ‘First On The Moon’, a book by Apollo astronauts, sets out many of his thoughts regarding the positive impact of human space travel on the human race. Since it was written in the mid-twentieth century by a White Englishman, though apparently a queer one, it unsurprisingly has its colonial biasses, though not fatally so. He focusses initially on White expansion across the globe, although he does also mention the views of non-White thinkers such as 胡適. That said, his point stands, and is paralleled by Arnold Toynbee, who once said:

Affiliated civilisations . . . produce their most striking early manifestations in places outside the area occupied by the “parent” civilisation, and even more when separated by sea.

I honestly can’t read this without thinking of the genocides committed by European powers, but there is a way of defusing it to some extent. There was a time when humans only lived in Afrika, then slowly radiated out from that continent into the rest of the world, a process only completed in the twentieth century CE when we reached the South Pole, and that’s not including the bottom of the ocean, which is of course most of the planet’s surface. Something I haven’t been able to track down is the claim that there is a genetic marker for the people who have spread furthest from East Afrika, which I presume means it’s found in Patagonia, Polynesia and Australia, although I suspect it actually refers to Aryans, because there is indeed such a concentration in the so-called “Celtic Fringe”. Even this expansion may be problematic. It’s not clear what happened when Afrikan Homo sapiens left that continent and encountered other species of humans. Our genes are mixed with theirs, but they’re extinct, and we don’t know how either of those things happened. It seems depressingly probable that we are all the descendants of children conceived by rape, and that this may have been the norm as we would understand it today, both between and within species. It seems more likely, though, that we simply outcompeted our relatives on the whole, and maybe the small portion of DNA from Neanderthals and Denisovans reflects their relatively smaller populations.

Leaving all this aside, the imperial winners of this million-year long onslaught on the planet benefitted culturally and technologically from it. 胡適 said:

Contact with foreign civilisations brings new standards of value.

And:

It is only through contact and comparison that the relative value or worthlessness of the various cultural elements can be clearly and critically seen and understood. What is sacred among one people may be ridiculous in another; and what is despised or rejected by one cultural group, may in a different environment become the cornerstone for a great edifice of strange grandeur and beauty.

Since I don’t want this to descend into some kind of patronising Orientalism, I’ll come back to Arnold Toynbee and his law of Challenge and Response. When difficult conditions are encountered, a minority of creative people respond by coming up with far-reaching solutions which transform their society. For instance, the Sumerians responded to the swamps in their area with irrigation and ended up more or less inventing civilisation as such, and the Church, having promulgated a belief system which caused the collapse of civilisation, went on to organise Christendom and invent Europe. We can of course still see the consequences of Sumer all around us today, and as I’ve mentioned before, the very human geography of these isles reflects their location through the “diagonal” arrangement of cultural and economic differences we see locally, due to the radial spread of change from the Fertile Crescent.

Even human expansion from East Afrika is problematic. There are clear signs that whatever it was we did led to enormous forest fires and the extinction of charismatic megafauna such as the nine metre long lizards who used to predate in Australia and the giant tortoises and birds of oceanic islands, not to mention the possibility that we helped wipe out the mammoths and woolly rhinos. Animals today tend to be nocturnal, smaller and to run away from humans because of what we’ve done in the prehistoric past. Nonetheless, there is an environment which is not problematic in this way. Actually, I should turn this round. The environments which are problematic from the viewpoint of being easily damaged and containing other sentient beings are largely confined to the thin film of air on this tiny blue speck we call Earth.

In his ‘Spaceships Of The Mind’, Nigel Calder pointed out that if we want to develop heavy industry, there’s always an environmental cost on this planet, whereas if we were to do it in space, that problem goes away almost completely. Nothing we can do in space is ever going to make even the slightest scratch on the Cosmos in the foreseeable future. It’s worth injecting a note of caution here, though, because that very attitude is what led to the damage to our own planet, and locally, even in space, the claim may not hold true. Nonetheless, I do believe that one response to the energy crisis is orbiting solar power stations which beam their power back to remote receiving facilities on Earth, which can then relay electricity globally, obviating the need for any fossil fuels or terrestrial nuclear power stations, or for that matter wind turbines or Earthbound solar arrays.

Space exploration has already yielded very positive results. These include the discovery of the possibility of nuclear winter, the Gaia Hypothesis, the Overview Effect and technological fallout. I’ll just briefly go into three of these.

  • Nuclear winter. When Mariner 9 reached Mars in 1971, there were problems imaging the surface due to a global dust storm. This was studied, and it was noted that the fine particles in the atmosphere were blocking solar radiation and cooling the surface. The Soviet Mars 2 probe arrived at about the same time and sent a lander down into the dust, where it was destroyed. Carl Sagan then sent a telegram to the Soviet team asking them to consider the global implications of this event. This led to a 1982 paper which modelled the effect of nuclear firestorms and the consequent carbon particles in our own atmosphere, and which appeared to show that there would be a drastic cooling effect on this planet: the nuclear winter. Even now, with more sophisticated models, scientists recommend that global nuclear arsenals be kept below the level where this is a significant risk during a nuclear exchange, and it’s also possible that the prospect of nuclear winter was a factor in ending the Cold War.
  • The Gaia Hypothesis. This is the belief that Earth is a homoeostatic system governed by its life. It’s still a hypothesis because many scientists still reject it or see it as only weakly supported, and it also coëxists with the Medea Hypothesis, that multicellular life will inevitably destroy itself. The roots of the hypothesis lie in Spaceship Earth and the observation that the other planets in the inner solar system, which didn’t appear to have life on them, were much less like Earth than might be expected. Up until the 1960s, life was more or less regarded as a dead cert on Mars because of the changes in appearance caused by the dust storms, which at the time were interpreted as seasonal changes in vegetation, and of course it had become popular to suppose there were canals there. On Venus, many people expected to find a swampy tropical world or a planet-wide water ocean teeming with life. When this didn’t happen, some scientists started to wonder if life had actually influenced this planet to keep it habitable rather than there already having been a hospitable environment for life which maintained itself. Viewing our whole Earth as alive is a way to engender compassion for all life, and is of course an example of hylozoism.
  • The Overview Effect. This is substantially related to the inspiration for the Gaia Hypothesis. When astronauts have seen Earth hanging in space, they have tended to gather a powerful impression of the fragility of life and the unity of the planet which has constituted a life-changing experience. The Apollo astronaut Edgar Mitchell set up the Institute of Noetic Sciences in response to his personal reaction, which was part of the human potential movement, and there are plans to make views of Earth from space available via virtual reality.

These are just three examples of how space exploration changes human consciousness for the better, and two out of the three only happened because there were people in space, beyond low Earth orbit. Considering that even today only a tiny proportion of our species has ever been in space, and an even tinier proportion has travelled beyond low Earth orbit into cislunar space, this is an enormous influence relative to their number. It’s evident that the more astronauts there are, and perhaps the more people living permanently off Earth, the more positive the effect on the human race would be.

But instead, we’ve gone the other way.

The biggest recent notable change in technology from a cultural perspective is of course information technology, mainly the internet and easy access to it via relatively cheap devices. This has led to the creation of cyberspace (I was there at the birth) and a generally inward-looking culture. I would contend that up until 1972, the human race had a spatial growing point, and that this had feedback into the rest of our cultures. And yes, it absolutely was the preserve of the rich and powerful countries, and yes, Whitey was on the “Moon”:

A rat done bit my sister Nell.
(with Whitey on the moon)
Her face and arms began to swell.
(and Whitey’s on the moon)
I can’t pay no doctor bill.
(but Whitey’s on the moon)
Ten years from now I’ll be payin’ still.
(while Whitey’s on the moon)
The man jus’ upped my rent las’ night.
(’cause Whitey’s on the moon)
No hot water, no toilets, no lights.
(but Whitey’s on the moon)
I wonder why he’s uppin’ me?
(’cause Whitey’s on the moon?)
I was already payin’ ‘im fifty a week.
(with Whitey on the moon)
Taxes takin’ my whole damn check,
Junkies makin’ me a nervous wreck,
The price of food is goin’ up,
An’ as if all that shit wasn’t enough

A rat done bit my sister Nell.
(with Whitey on the moon)
Her face an’ arm began to swell.
(but Whitey’s on the moon)
Was all that money I made las’ year
(for Whitey on the moon?)
How come there ain’t no money here?
(Hm! Whitey’s on the moon)
Y’know I jus’ ’bout had my fill
(of Whitey on the moon)
I think I’ll sen’ these doctor bills,
Airmail special
(to Whitey on the moon)

Gil Scott-Heron

The question here is of course which America got the moon landing, and possibly which humankind. However, is there a reason to suppose that if enough people went into space, it wouldn’t alter their consciousness enough for them to become, for instance, anti-racist and to recognise that we really are all in it together? To a Brit reading this, the reference to doctor’s bills brings the NHS to mind, and that kind of large-scale government-sponsored undertaking is pretty similar to NASA in many ways.

Apollo was also, of course, a propaganda coup, demonstrating what the so-called Free World could do that the “Communist” countries couldn’t. However, it wasn’t done via private enterprise or competition. It is at most an illustration of what a mixed economy can achieve, not capitalism. On the other hand, it could also be seen as an example of competition between the two power blocs dominating the world at the time, but is that capitalism?

As it stands, space probes even today have relatively low specifications, possibly due to long development times. The Sojourner rover, which Pathfinder delivered to Mars in 1997, was controlled by an 8085-class CPU running at 0.1 MHz. The Voyager probes are often said to run on a COSMAC 1802, although that chip actually flew on Galileo; Voyager used custom-built computers. The Space Shuttle program eventually ran into the problem that the 8086 processors used in its support equipment became hard to find and had to be scavenged from antique PCs. The space program is startlingly primitive in this respect. As far as I know, there has only ever been one microcomputer based on the 1802 processor, the COMX-35, which came out in 1983. The Intel 8085 came out in March 1976, was a slightly upgraded version of the 8080, and was almost immediately eclipsed by the legendary Zilog Z80, released a few months later. It had a longer life in control applications, which is presumably how it ended up on a Mars rover. The Shuttle program ended in 2011, thirty-three years after the 8086 (a pretty conservative design in any case compared to the 68000 and Z8000) went into mass production. Given all that primitive IT, the achievements of space probes are astonishing, and they serve to illustrate the inefficiency of popular software used on modern devices on this planet. We have our priorities wrong.
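To put those clock speeds in perspective, some rough arithmetic (clock rate is only a crude proxy for real throughput, since cycles per instruction, word size and memory all differ enormously, so treat this as an order-of-magnitude illustration):

```python
# Crude clock-rate comparison between a Sojourner-class CPU and an
# assumed typical ~2 GHz core in a modern phone or laptop. This ignores
# cycles per instruction, caches and word size, so it only conveys the
# order of magnitude.
rover_hz = 100_000               # 8085-class CPU at 0.1 MHz
modern_core_hz = 2_000_000_000   # assumed ~2 GHz modern core

ratio = modern_core_hz / rover_hz
print(f"One modern core runs at {ratio:,.0f} times the rover's clock rate")
# -> 20,000 times
```

And yet the slow chip drove a rover on another planet, while much of that twenty-thousandfold advantage on Earth is absorbed by software overhead.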

I needn’t say much about the effect of social media on society. We all know it’s there, and it’s basically an ingrowing toenail, albeit one which has ingrown so far it’s started to pierce our brains. But we could’ve had a rocket to the renaissance, and instead we got Facebook and Trump. History has gone horribly wrong.

Neanderthal Pinhead Brains And The Sentient Internet

Stereotypically, Neanderthals tend to be presented as the classic “cave man” caricature, usually male, clubbing their female partners over the head and dragging them off by their hair, somewhat hairy themselves and of course notably unintelligent, oh, and living in caves. I’ve had a go at this stereotype and the other one about dinosaurs previously, but before I get down to things I may as well go through it briefly again.

First of all, dinosaurs are often used as a metaphor for something which is clumsy, overgrown and unable to adapt to a changing world. This really owes more to the Victorian image of dinosaurs as giant lizards than what’s known about them nowadays. Dinosaurs really got lucky, then got unlucky. The mass extinction at the start of their reign helped them take advantage of their various ecological niches, then the mass extinction at its end killed them off because many of them were very large. Many of the smaller ones survived as birds. If humans had been around at the end of the Cretaceous, we too would’ve bitten the dust.

Neanderthals are a kind of blank slate onto which many people project various things, and I may well be doing the same. Their brains were often larger than ours, but that doesn’t mean they were more intelligent. Their brain size probably had to do with a bulkier body and the need for more neural pathways to control and perceive that body. Whales have larger brains than we do for similar reasons, although in their case that isn’t all there is to it. Nonetheless, when one considers that orang utans, gorillas, bonobos and chimpanzees are all capable of sign language, and chimps have learned to speak a few words but lack the vocal apparatus to master human speech effectively, this automatically places their “IQ” above that of the severely learning disabled. Note that I’m extremely sceptical of IQ as a concept. If orang utan intelligence is sufficiently similar to human intelligence to be assessed and rates above thirty on an IQ scale, Neanderthals are bound to have been at least that intelligent. It’s also thought that human short term memory has suffered at the expense of developing language, as that of chimpanzees is far better than ours. Hence when Neanderthals come into the picture, it can safely be assumed that they would also have been capable of language and perhaps actually used it. The crucial final step in the physical capacity for phonation – producing speech sounds with the vocal tract – is the position of the hyoid bone in the throat, which provides attachment for the larynx, glottis and tongue and needs to be in a particular position to enable its owner to speak. The problem is that the hyoid is perhaps unique in having no articulation with any other bone in the body, and therefore tends to get lost in fossils. Consequently Neanderthal hyoids are often missing, and it took until 1989 for it to be established that theirs were like ours.

A couple of issues are going to come up in this post which are probably going to be considered idiosyncratic on my part. Here’s the first. Although I am aware that the FOXP2 gene is considered important in the human capacity to use language, and Noam Chomsky believes in an innate capacity for language as a distinctive feature of the human species, I have issues with this as potentially speciesist, and am disappointed that such a clearly politically radical figure as he would promote this view. I believe humans stumbled upon language before we had a special ability to use it. There are examples of other species being able to use spoken and signed language as language, as opposed to merely imitating it, notably Psittacus erithacus, the Afrikan Grey Parrot, who presumably had no predisposition in their genes for using it beyond the ability to produce speech sounds and so forth. Clearly a certain kind of cognition is necessary for this to happen, along with the ability to produce the sounds physically, and once spoken language exists it’s going to be selected for compared with individuals who don’t speak, and this will lead to some kind of marker in the genes – perhaps we are better at producing or hearing a wider range of speech sounds than other species, for example – but the initial moment when the first baby made a sound like “mama” which a parent then interpreted as a reference to herself, which was perhaps the beginning of language, did not in my opinion depend on very specific physical traits and could have occurred in another species.

The genomes of living humans include a few genes from the Neanderthals, and it’s thought there was hybridisation tens of millennia ago in our history. To a very limited extent, we are therefore Neanderthals ourselves, unless we’re Afrikan. The highest percentage of Neanderthal genes is found in East Asians, and they’re usually absent from people all of whose heritage is from Afrika south of the Sahara. Neanderthals would probably have been fair-skinned, maybe also blue-eyed, and would have had straight hair. I personally wonder if they had epicanthic folds, which of course have a higher incidence among East Asians but are also found in Caucasians without any Asian ancestry, and I’m guessing that those people might also have inherited that trait from Neanderthals. Recently the Neanderthal genome has been in the news for conferring greater resistance to SARS-CoV-2.

Now for the reason I’m writing this today.

In recent years it has become possible to culture brain cells in Petri dishes. This isn’t the same as growing an entire human brain in a vat: it involves producing pinhead-sized agglomerations of cells. Recently, a gene linked to brain development in Neanderthals has been spliced into human cells and grown in such a dish. For many people this has a high yuck factor. The specific gene involved is NOVA1, on the long arm of chromosome 14, which is associated with various cancers but also with nervous system development. There’s an indirect connection between the NOVA1 gene and familial dysautonomia, a condition which primarily involves the autonomic nervous system and includes insensitivity to pain and to sweet tastes, among other things, but which as far as I know doesn’t influence cognition, so that doesn’t necessarily give us a clue – although it’s possible, I suppose, that the inability to taste sweetness might be related to Neanderthal diet in some way. That’s a bit of a reach. Whatever else is so, mini-brains with the archaic NOVA1 variant look rougher to the naked eye than the smoother versions which have the variant common in today’s population. The archaic version also developed more quickly than the unaltered one and started to show electrical activity sooner. In write-ups of this experiment, we’re assured that these mini-brains are not conscious.

I have a major issue with that assertion.

The question of the existence of consciousness is sometimes referred to as the “hard problem”. It’s been suggested that it may even be so hard that it’s beyond the capacity of the human mind to account for it. At the same time, there’s a recent strand in philosophical thought, exemplified by Daniel Dennett, which is sceptical about the very idea of consciousness as an irreducible property. I can’t take Dennett’s views here seriously, for the following reason. He has made a very good argument for the idea that dreams are not experiences but false memories present in the brain on awakening, onto which the mind then projects the impression of previous events. I take this idea fairly seriously, although I don’t do the same thing with it as he does; it’s one reason why I recount dreams in the present tense. However, a good counter-argument to his position is the existence of lucid dreams – dreams in which one knows one is dreaming and is able to control the dream world – which certainly seem to be experiences as they happen. Dennett’s response is to claim that lucid dreams aren’t experiences either. Although he does produce an argument for this, I believe his reason for making the assertion is somewhat ideological, because for all practical purposes we know that lucid dreams are experiences. They might not be dreams in the same sense as non-lucid ones, but they are experiences to my mind, and claiming they aren’t seems to be part of his attempt to shore up his view of the nature of consciousness.

Dennett is also sceptical about qualia. These are things like the “sweetness” of sweetness, the “purpleness” of purple and so on; they’re what people are talking about when they say “my red could be your blue”. His doubt about their existence is based on the idea that they are not a definable concept. This seems to me a silly denial of subjectivity which makes no sense in itself. Dennett’s motivation for believing that dreams are not experiences, that qualia don’t exist and that even lucid dreams are not experiences is a more general psychological view in which consciousness is a specific faculty within the brain, one which may have evolved and has selective advantages. This thought leads one into seriously murky ethical waters, because it seems to be a rationalisation of the idea that some other animal species are not conscious, which is suspiciously convenient for non-vegans. It just so happens that the voiceless don’t suffer because they don’t have a voice. How very useful this is for someone who eats meat – kind of as useful as believing Black people are not conscious would be for a racist.

My own view of consciousness, panpsychism, tends to be seen as equally silly by some people. It’s my belief that consciousness is an essential property of matter, rather as magnetism is. A ferromagnet is a lump of material, such as iron, in which the magnetic domains – regions of aligned electron spins – all point the same way, enabling it to attract ferrous metals such as steel. There are other, similar magnets, such as rare earth magnets, which are magnetic in the same way but contain no iron at all. On a subatomic scale, magnetism arises from elementary particles whose spin makes them behave like tiny electrical circuits – I have to admit that my understanding of actual, fundamental magnetism is not very good – but there are clearly non-magnetic substances too, such as granite and most blood (unless it’s infected with malaria). Even these non-magnetic substances, though, consist of particles which are individually magnetic.

Consciousness is the same, to my mind. Everything material is conscious, but in order for that consciousness to become manifest, matter needs to be arranged in a particular way, such as a human nervous system. However, just as there are magnets which are not made of iron, so there could be sentient beings who are not made of the same stuff as we are. Objects which have nothing like sense organs or motor functions are in a sense severely disabled entities, but they’re still conscious. This is my panpsychism.

I should point out too that panpsychism is, unsurprisingly, quite controversial and often ridiculed in philosophical circles, although good reasons for the ridicule are sometimes lacking. Even so, there are other accounts of consciousness, one of which involves the idea that it’s generated by a network of “black boxes” interacting with each other – which in the case of the human brain would be nerve cells. You don’t have to believe in panpsychism to assert that a tissue culture is conscious, and to me it’s entirely clear that the assertion that any particular piece of matter is not conscious is based not on any kind of evidence but on a bias towards the kind of view of the mind-body problem asserted by Dennett and others.

Consequently, it definitely isn’t safe to say that these “Neanderthal” mini-brains are not conscious, or that the ones based on unaltered Homo sapiens cells are not conscious. Before I go on to talk about the internet as potentially sentient, I feel a strong urge to go off on a tangent about my experience of the Mandela Effect.

I have several more detailed posts on this issue on this blog, here, here and here for example, but in the meantime I will sum up what it is before going on. The Mandela Effect is the situation where a number of people agree on a memory which is markèdly different from the consensus or establishment version of events. Most of the time this is about minor details, such as the spelling of brand names or the appearance of brand logos, but occasionally the discrepancy is more significant. It’s named after the impression many people had that Nelson Mandela had died in prison in the 1980s, and sometimes that this led to a revolution which overthrew apartheid in South Africa. History clearly appears to record a very different chain of events, involving Nelson Mandela being released from prison in 1990 and becoming president of South Africa in 1994. I think that’s it anyway. There are various unusual reasons why I take this seriously, largely based on Humean scepticism about cause and effect and on the existence of possible worlds, which means I tend to deprecate accounts which merely refer to confabulation – the construction of false memories due to misconceptions – as an explanation. There is also some evidence against confabulation being the whole story, such as the fact that when the position of landmasses on maps varies, it always does so along the direction of continental drift and never at an angle to it.

I have a few personal Mandela Effects (MEs) which are rare but shared with at least two other people, and they tend to have things in common with each other. One of these is that in the mid-1970s a science museum had a planetarium-like robot which responded to heat, light and movement and was run by a mini-brain grown from cultured mole nerve cells. A second, similar ME of mine is that in the late ’70s a process was devised to measure intelligence via brain scans, which was used in selective education by the DoE in England to replace the 11+ and was later exposed as unreliable and discontinued – a scandal, because it adversely affected the lives of many people who were children at the time. A third was to do with some guy who designed and built a domestic robot which was able to read aloud by 1975. These are three of many, and they are conceptually connected by being about intelligent-seeming neural processes. If they had happened, they would’ve required an understanding of neurology which was absent at the time – in the case of the domestic robot, presumably acquired via some kind of reverse engineering. I accept that hardly anyone else has these memories, but it’s still odd that two other people who had no strong connection with me at the time do have them. And the thing about these memories, particularly the museum robot, is that they could potentially be realised by this kind of culture of brain cells in a Petri dish.

Now for the idea that the internet is sentient.

It was once asserted that the last computer a single individual could fully understand was the BBC Model B, a microcomputer which came out in 1981. There are a couple of problems with this statement. One is: what is meant by “fully understand”? It’s certainly possible, for example, for someone to hold in their head the network of logic gates which constitutes the BBC Micro’s 6502 microprocessor, alongside the same level of structure in the 6845 chip responsible for its graphics and the SN76489 chip responsible for its audio, and then extrapolate from that to the machine code of the system software in its interaction with the motherboard and the memory mapping of these various bits of hardware, although it would take some doing for most people. However, if I did that I would still have only a vague understanding of how the transistors themselves work, involving electron holes and their relay-like switching behaviour, and to be honest my understanding of, say, silicon doping is pretty limited. When one says that the BBC Micro can be completely understood by one person, is that supposed to include the aspects of materials science which make the production of its hardware possible, or the mechanical properties of the springs in its keyboard? What does it mean to “fully understand” something? The other problem with this assertion is that the original IBM PC, as I understand it, isn’t essentially more complex than the BBC Micro. The PC has more memory and a more complex, faster processor, and its system software – usually PC-DOS or CP/M-86 – is more advanced than the BBC’s MOS 1.2 and Acorn DFS, but it can still be understood, and it lacked the built-in graphics and sound hardware of the eight-bit computer which ended up on the desks of so many British secondary schools. Later on, with sound and graphics cards added – the latter including the very same 6845 as used in the BBC – it still wouldn’t have been as complex as later machines, and would still have been comprehensible.
It seems to me that the ability to comprehend these devices fully in that sense probably ended around the time Windows 3.0 was released in 1990. But whatever else is the case, the point at which any one person could be said to fully understand a device – both its hardware and its system software – is now decades in the past.
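Incidentally, the memory-dump trick I described at the start of this post – masking off the most significant bit and only printing values above 31 so that control codes never reach the screen – can be sketched like this in Python. This is an illustrative reconstruction, not the program I actually ran: the original would have been a few lines of BBC BASIC peeking real memory, and the sample bytes below are made up.

```python
def printable_dump(memory):
    """Return only the printable characters hidden in a run of bytes.

    Clears bit 7 of each byte, then keeps only values above 31 (0x1F),
    so ASCII control codes never reach the screen. DEL (127) is also
    skipped, since it isn't printable either.
    """
    out = []
    for byte in memory:
        value = byte & 0x7F              # mask off the most significant bit
        if value > 31 and value != 127:  # skip control codes and DEL
            out.append(chr(value))
    return "".join(out)

# Invented sample: bytes with the high bit set, mixed with control codes.
sample = [0xC8, 0x65, 0x6C, 0x07, 0xEC, 0x6F, 0x1B]
print(printable_dump(sample))  # prints "Hello"
```

On the Electron itself you’d be iterating over the real address range where the BBC’s interface chips would have sat, rather than a made-up list.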

Now take these two facts together. Firstly, we really don’t know what makes consciousness possible. Secondly, the internet – a network of billions of devices, hardly any of which are understood to a significant extent by any one person – is extremely complex and processes information it gathers from its inputs. And yet it’s often asserted that the internet is not sentient, as if we knew what causes sentience. At the same time, there are many internet mysteries, such as Unfavorable Semicircle and Markovian Parallax Denigrate, which can often be tracked down to some set of human agents – but nobody has a sufficient overview to be confident that every single one of these mysteries has a direct human cause, or even that most of them do.

Hence I would say that while we might suppose the internet is neither conscious nor sentient, we don’t really have sufficient evidence that it isn’t. It has quite a lot in common with a brain; in any case, we don’t know why anything is conscious, and it’s even possible that everything is. Therefore, just maybe, the internet is sentient, and nobody can confidently say it isn’t.