Future Englishes

English is currently the most successful human language and, in terms of users, the most successful there has ever been. This thought raises another: was there a time when everyone used the same language, i.e. a common ancestor of all spoken languages? Or did spoken language appear several times in different places, so that there are, or have been, languages completely isolated from each other?

That isn’t what I’m going to go on about today, although it’s interesting to consider the related matter of how English will disappear. This may come up. I’ve written a novel on that very subject, after all, so you’d think it might. One day English must die out, perhaps because it develops into something else, or perhaps because the community using it disappears. It could, for example, fall from grace for political reasons, although if that happens it’s likely to become detached from the original Anglosphere and be carried on by a more global community instead. Or it could simply become incomprehensible to present-day speakers and other users, as earlier forms of it have.

Linguistic communities are defined by mutual comprehensibility. That is, if two people with no knowledge of other languages are guaranteed to understand each other, they are using the same language. One-way comprehension isn’t enough. This is a 1989 translation of the Lord’s Prayer into Dutch:

Onze Vader in de hemel,

uw naam worde geheiligd,

uw koninkrijk kome,

uw wil geschiede,

op aarde zoals in de hemel.

Geef ons heden ons dagelijks brood

en vergeef ons onze schulden

zoals ook wij anderen hun schulden hebben vergeven,

en stel ons niet op de proef

maar verlos ons van de duivel.

…and this is a translation of the same into Afrikaans:

Onse Vader wat in die hemele is,

laat u naam geheilig word.

Laat u koninkryk kom,

laat u wil geskied,

soos in die hemel net so ook op die aarde.

Gee ons vandag ons daaglikse brood,

en vergeef ons ons skulde,

soos ons ook ons skuldenaars vergewe.

En lei ons nie in versoeking nie,

maar verlos ons van die bose.

A Dutch speaker won’t have any problem understanding either, but an Afrikaans speaker might well struggle with the Dutch version, although both are in fairly simple language because of the nature of the prayer. Getting back to English, this asymmetry means that speakers (I’m going to use that word for now, although in some contexts there are good reasons not to) of certain registers would find it easier to understand the English of other eras, and that speakers of the past would be more likely to understand later forms of the language than the other way round.

If I approach English naïvely (and this has to be a guess, since I’ve long since reached the “unconscious competence” stage in German and in earlier phases of English itself), I would guess that the cut-off point for spoken English comprehensible to someone who learned it in the late 1960s would be about 1500. The written language is misleading, because our spelling is notoriously conservative, so earlier writings are easier to follow, although they also contain many false friends which give the reader the illusion of understanding. For me, the iconic feature of English pronunciation which would obscure the language is the Middle English long A. There are still accents which pronounce the long A as /ɛ:/, close to the vowel in RP “air”, but in 1500 that pronunciation would’ve been /æ:/, and to my mind that’s too big a difference to be readily understood by twentieth-century ears, let alone today’s. Nor would it be the only difference. However, someone in 1500 probably wouldn’t have the same difficulty understanding how we speak today, particularly away from the English home counties and the Southern Hemisphere, although the loss of “thou”, the use of the present continuous and the incessant use of “do” would be hard to handle. A speaker in 1400, though, wouldn’t be able to understand how we speak today at all. That takes us practically back to Chaucer’s time, when we’d have to handle something like this:

Whan that Aprille with his shoures soote,

The droghte of March hath perced to the roote,

And bathed every veyne in swich licóur

Of which vertú engendred is the flour;

Whan Zephirus eek with his swete breeth

Inspired hath in every holt and heeth

The tendre croppes, and the yonge sonne

Hath in the Ram his halfe cours y-ronne,

And smale foweles maken melodye,

That slepen al the nyght with open ye,

So priketh hem Natúre in hir corages,

Thanne longen folk to goon on pilgrimages,

And palmeres for to seken straunge strondes,

To ferne halwes, kowthe in sondry londes;

And specially, from every shires ende

Of Engelond, to Caunterbury they wende,

The hooly blisful martir for to seke,

That hem hath holpen whan that they were seeke.

That looks pretty close to present-day English when written down, but reading it aloud reveals how much the language has changed. Thus, if the average rate of change in English over the next half-millennium is the same as over the past one, we could expect people to be speaking a different language by about 2600. But it’s quite an assumption to suppose that the rate of change will stay the same.

The history of English, like that of many other languages, is divided into three periods: Old, Middle and Modern. However, since languages have different lifespans and their rates of change vary over their histories, the divisions fall at different times for different languages. For our tongue, the boundaries are placed at the fairly arbitrary dates of 449, when the West Germanic tribes arrived in Britain; 1066, when the Norman invasion led to the suppression of English; and 1485, the Battle of Bosworth Field, which is often used to mark the end of the Middle Ages in this country. Given that each period lasts about five centuries, we’re due for another phase in the history of our language.

Three different processes can be identified in the change of the English language. The first can be attributed to language change in general: for example, we can expect initial H to be dropped, because that happens often in other languages. A Spanish dictionary I had as a child described older Spanish speakers as pronouncing the H weakly and younger speakers as having dropped it, and the initial H of Latin was dropped many centuries before that. The second concerns distinctively English trends; the Great Vowel Shift, which I’ve already alluded to, comes to mind. Finally, there are external influences on the language, such as the fact that it ceased to be the official language of the Crown for several centuries after 1066.

The first trend is easy to anticipate. The grammar of languages tends towards isolation. That is, they go from complex inflections – amo, amas, amat; amamus, amatis, amant – to simpler ones – j’aime, tu aimes, elle aime; nous aimons, vous aimez, elles aiment. In this case the spelling lags somewhat behind the pronunciation, but even in writing the number of distinct forms has fallen from six to five, at the cost of introducing an obligatory subject pronoun. Likewise in English, forms which used to be distinct have levelled to fewer, notably among the strong verbs, so that for example the past participles of “help” and “climb” are now “helped” and “climbed” rather than “holpen” and “clomben”, and the process continues today with, for example, “thrived” rather than “throve” and “thriven”. Afrikaans has taken this trend further and now has no strong verbs at all. It also has only one present-tense form of “to be”, namely “is”, a development I can easily imagine happening in English too. Most English verbs in the present indicative have little left to lose, since they vary only in the third person singular, but that last distinction could also be levelled, giving “she take” rather than “she takes”.

As I’ve said, the Great Vowel Shift is the most obvious distinctively English trend, although similar processes have occurred in other languages. It has been blamed on the Black Death and the subsequent movement of people from Northern to Southern England. For this reason, vowels in English accents are often described in terms of their Middle English ancestors. The biggest changes are in the long vowels: A, open E, closed E, I, open O, closed O and U. Of these, long I and U (now spelt “ou” or “ow”) have changed the most, now being pronounced “eye” and “ow” as opposed to “ee” and “oo”. Their changes left a vacuum into which the other vowels could then move without creating ambiguity and confusion: A became “ay”, closed E and open E merged as “ee” in most accents (Irish being the main exception), closed O became “oo” and open O “owe”. The shift is most pronounced in Australia and New Zealand, and less so in North America than in RP. The short vowels are less affected. These changes are somewhat paralleled in High German, where long I became “ei” and long U became “au”, although the other vowels weren’t affected as much as in English; in French, in the opposite direction, where “an/am” and “en/em” merged as the more open nasal vowel and “in/im” then moved to roughly the old position of “en/em” (though with a consonant); and in Greek, with the change of eta to an “ee” sound.
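The chain shift described above amounts to a lookup table from Middle English long vowels to their usual present-day reflexes. Here is a minimal sketch in Python; the IPA values are broad approximations chosen for illustration, not precise reconstructions:

```python
# Rough sketch of the Great Vowel Shift: Middle English long vowels
# mapped to their usual present-day RP reflexes. The IPA values are
# broad approximations for illustration, not precise reconstructions.
GREAT_VOWEL_SHIFT = {
    "aː": "eɪ",  # "name": roughly /naːmə/ -> /neɪm/
    "ɛː": "iː",  # open E: "meat", merged with "meet" in most accents
    "eː": "iː",  # closed E: "meet"
    "iː": "aɪ",  # "time": roughly /tiːmə/ -> /taɪm/
    "ɔː": "əʊ",  # open O: "boat"
    "oː": "uː",  # closed O: "moon"
    "uː": "aʊ",  # "house": roughly /huːs/ -> /haʊs/
}

def shift(middle_english_vowel: str) -> str:
    """Return the usual modern reflex of a Middle English long vowel,
    or the vowel unchanged if it isn't part of the shift."""
    return GREAT_VOWEL_SHIFT.get(middle_english_vowel, middle_english_vowel)

# The shape of the table reflects the chain: long I and U vacated the
# close positions first, letting the other vowels move up behind them
# without mergers, apart from the two Es.
```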

Initial consonant clusters have also tended to simplify, as with “kn-” and “wr-”, and a similar process has occurred with “wh-” merging into “w-”, which has happened within my lifetime.

The third process is external influence on English. One example is the spread of literacy, which has led to pronunciations falling somewhat more into line with spelling: for instance, “often” no longer has a silent T, and “again” tends to be pronounced with a diphthong rather than a short E. Another possible change could be wrought by the use of English as an international auxiliary language between people neither of whose mother tongues is English. In the Far East, it’s possible to encounter a form such as “I hear a smell”, which I imagine is influenced by a language which uses a more general verb for certain sensations than our separate “smell”, “taste” and “hear”, although for some reason I’d still expect “see” to be different. This might come to alter two particular idiosyncrasies of English, namely its two separate verbs “do” and “make” but its single verb for “know”, which tend to be the other way round in many European languages at least, if not others.

A couple of things going on right now could influence the future of this language. One is Brexit. English, in its Commonwealth variant, is used as a lingua franca in the rest of Europe. If we leave the EU and stay outside it, it’s possible that we will fall more under the influence of American English, which has been making itself felt since at least the War, and end up using an American idiom if not the actual General American accent, while the rest of Europe maintains “British” English, perhaps more influenced by Irish English and maybe also Scottish English than before. Another is the rise of powers other than the English-speaking United States, which may cost English its prestige as an international language. A third issue, which we’ve only encountered for rather over a century, is that since about 1877 it’s been possible to record the voice, meaning that we hear older ways of speaking more than we used to. I would expect this to slow the change in English pronunciation, although of course even the pronunciation of English English speakers in 1960 and before can sound quaint to us today.

There have been numerous fictional attempts to imagine the future of English, and I plan here to focus on six: Russell Hoban’s ‘Riddley Walker’, Will Self’s ‘The Book Of Dave’, Anthony Burgess’s Nadsat, George Orwell’s ‘Newspeak’, David Mitchell’s ‘Cloud Atlas’ and the attempt made in ‘The Dune Encyclopedia’. There will obviously be spoilers for all of these, although it could be said that it’s impossible to spoil a literary work because the plot is subservient to other aspects of the novel.

Russell Hoban imagines Kent two thousand years after a nuclear holocaust, and includes a map of the county as he imagines it then. I expect a lot of this book goes over my head, although it interests me because it’s set near my birthplace. One of the language’s distinctive features is its use of folk etymologies: it tends to analyse and reconstruct words based on presumed popular ideas of their origin, and these etymologies are often bawdy or risqué, such as “Dunk Your Arse” for Dungeness and “Sam’s Itch” for Sandwich. This mirrors the characters’ way of life, which now consists of salvaging the remnants of advanced technology for new purposes. There are some surprising grammatical developments, such as “et”, a surviving strong past form of “eat”, where the general levelling of the language might lead one to expect “eated”; this is part of a general inconsistency. Viewed realistically, the vernacular of two millennia in the future couldn’t be expected to be even remotely comprehensible to us today, particularly if all records have been destroyed.

Will Self’s ‘The Book Of Dave’ has much in common with Hoban’s novel, but Self clearly has an axe to grind about the unforeseen consequences of sacred texts, and his novel is largely satirical. An embittered cabbie called Dave has angrily written down his feelings about women in the wake of a messy divorce, printed them on metal and buried them. After a flood destroys much of Southern England they are discovered, and five hundred years later they have become a sacred text, written in a language now known as “Mokni”, largely based on Cockney and text speak. The religion is of course highly misogynistic. The language includes quite inventive terms such as “befansemis” (Elizabethan semis) for “houses”, “childsupport” for “dowry”, “cloakyfing” for “burqa” and “dashboard” for “Milky Way”. The general idea seems to be of a restricted view of the world, in which the unwitting founder of the religion can’t look beyond his own narrow life, such as the dashboard of his own car, to see the wider world or the possibility of other perspectives.

‘The Book Of Dave’ and ‘Riddley Walker’ form a kind of pair, and although I haven’t heard this from Will Self, I’d expect him to acknowledge openly that Hoban was a major influence. In both, the form of the language is linked to a general idea which extends beyond the usual changes one might expect in English, although the use of F and V for “TH” is to be expected, and both are clearly spoken in future worlds which are no longer globalised, so the influence of other countries is absent. The same is not true of Anthony Burgess’s “Nadsat”, the youth argot used in ‘A Clockwork Orange’.

I’m not sure when the events of Burgess’s novel are supposed to take place, but it’s clearly in the near future of its 1962 publication date. The most distinctive feature of Nadsat is its use of Russian vocabulary, such as “horrorshow” for “good”, from the Russian “хорошо”. The name of the slang itself comes from the Russian equivalent of the “-teen” morpheme. “Droog” for “friend” is another instance, but there are also other techniques of word formation, such as eggy-peggy-type codes and rhyming slang. The idea was partly to create futuristic-seeming slang which wouldn’t quickly become dated, a common problem with coinages and the use of slang in fiction. I can only suppose that Burgess chose Russian as a reference to the Cold War, and I wonder if it was meant as a sign of naïveté on the part of the youth subculture, a kind of mindless rebellion against the establishment, as later seen in the real world in the quasi-punk adoption of Nazi symbolism.

By far the most famous example of a future version of English is George Orwell’s Newspeak, as used in ‘1984’. Although the language was created mainly as political polemic, Orwell also chose to include features he disliked in the English spoken at the time of writing. The role of Newspeak is to restrict thought by reducing the flexibility and variety of language. I used a Newspeak-derived version of English in my short story ‘Kibuco’, partly because the narrator’s first language was Esperanto, which had been used for a similar purpose in the story; Newspeak and Esperanto share many features, although I don’t know whether the resemblance is intentional. It’s said that an Esperanto dictionary need only be about a tenth the size of a similarly comprehensive dictionary of another European language, because the language relies so heavily on affixes. The idea behind this feature of Zamenhof’s language is to make it easier to learn and to acquire a useful vocabulary, but if Orwell is right about the thought-restricting capacity of such an approach, it could also be used for that purpose. Each new edition of the Newspeak dictionary is smaller than the last because words are being destroyed. This is all so well known it’s probably not worth mentioning.

‘Cloud Atlas’ depicts two different stages in the development of English. The earlier, in ‘An Orison Of Sonmi-451’, is set in a dystopian mid-twenty-second century and is the more sterile and restricted, perhaps like Newspeak although not deliberately constructed, using many genericised trademarks such as “nikon” for “camera”. It forms part of the drift from the high-flown language of the chronologically earlier sections of the novel into the impoverished and commercialised vocabulary of the penultimate setting. The second, later version, used after the fall in ‘Sloosha’s Crossin’ An’ Ev’rythin’ After’, could be seen as a still more degenerate form of the language, but it manages to be much more vibrant and expressive than its predecessor, and the commercialised elements are gone. Unfortunately, I found it impenetrable and quite trying, all the more so because it was the longest section of the book.

All of the above examples could be said to be distorted by their authors’ purposes rather than being attempts to portray the actual future of English. The same doesn’t apply, at least to the same extent, to the languages of Frank Herbert’s ‘Dune’ series, of which there are two: Fremen and Galach. Most of the neologisms in the books are Fremen, a conservative descendant of Arabic, but Galach is a development of English and Slavic, mainly the former, although in the narrative parts of the novels it’s hardly explored at all. Fortunately, there is a de-canonised encyclopedia associated with the books which does explore it, and in this case professional linguists have had a go at it in considerable detail. Five stages are distinguished, dating from about the year 9000 CE up until seventeen thousand years after that. The first stage involves the change of TH to F and V, as in several other examples above, and of “-ing” in gerunds to “-in”, along with the Second Vowel Cycle, a process similar to the Great Vowel Shift but applied to the long pure vowels of present-day English. One of the crucial grammatical changes is the evolution of “of X” into “əX”, replacing the Saxon genitive “-’s”, the idea being that it’s similar to “man o’ war”. This sets a precedent for the attachment of other prepositions to nouns and the development of an extensive system of inflectional case prefixes. In the meantime, pleonastic pronouns begin to be used and are similarly appended to nouns. An example given of the language is the Galach for “a bird in the hand is worth two in the bush”: “baradit nehiidit beed gwarp tau aubukt”. However, it’s still quite sketchy.

In conclusion, all of this could be seen as rather optimistic, because it isn’t at all guaranteed that there will even be people around to speak the language from the mid-twenty-second century onward, or, if there are, what kind of world they’ll be living in; some of these visions are indeed post-apocalyptic. On the other hand, there’s a strong theme of using changes in language to illustrate certain points, which is to be expected and not problematic. My own guess is that future English, if there’s anyone left to speak it, will have been influenced by non-European languages and will sound rather like an exaggerated version of the New Zealand/Aotearoan accent, with a strongly creolised flavour to the grammar, like Jamaican patois. That is, if it exists at all.


Writing And Depression

There’s said to be a correlation between writing and depression, in that people with a diagnosis of clinical depression are more likely to keep a journal. One response to this finding is to advise depressive people not to keep diaries, on the assumption that the correlation reflects causation. As far as I know, although the connection has been established, it’s unclear whether journalling, if that’s the word, and depression have a common cause, whether depressed people write diaries as therapy, or whether, as seems to be the assumption, writing a diary helps make you depressed.

All three could be true to some extent. Just because you think something is therapeutic, it doesn’t mean it is. One thing I learnt from being a herbalist is that in terms of health, people tend to be their own worst enemies, and in particular that some people have a dynamic where they seem prepared to change absolutely anything about their lifestyle except the one simple thing which would make the most difference. People are driven to self-harm. But self-harm itself isn’t simple, and can be a coping strategy and a form of therapy for some. It can be motivated by numbness or the need to express outwardly pain one feels inwardly, and it can also be very subtle, as with Lesch-Nyhan Syndrome. This is the inherited X-linked absence or deficiency of an enzyme, resulting in a build-up of uric acid in the tissues, leading to physical damage and, in most cases, death before the age of thirty. It also involves self-harm: severe lip- and finger-biting, head-banging, eye-gouging and face-scratching, and people with it are often physically restrained to prevent this. From a psychiatric point of view, though, one of the interesting things about Lesch-Nyhan is that in people with two X chromosomes, who are therefore less affected, the condition, although usually asymptomatic, sometimes shows itself in emotionally self-sabotaging behaviour instead of physical self-harm: they tend to exclude themselves from socialising in spite of wanting to, and to push people away emotionally. I say “interesting”, but I could equally well have said “tragic”. It’s fair to say that those who are wont to self-harm physically are familiar with more subtle self-sabotage, and this could be carried out through writing.

Some people troll themselves. They make up sock-puppet accounts on social media and fora and comment on their own stuff in a negative way, in order to make themselves feel bad. This is very obviously self-harm, in this case in public. They tell themselves that they’re ugly and stupid, that nobody will ever love them and so forth. This is a kind of writing, of course, which is not therapeutic as far as I can see, although I shouldn’t make assumptions from the outside. All I can say is that it seems unlikely that it helps people feel better about themselves, although they may sometimes be trying to elicit sympathy from others.

But this is not necessarily particularly novel. What’s new is the easy publicity of doing it this way. A rather less public and much older form of self-trolling might occur in diary-writing, and this wouldn’t seem to be therapeutic. A diary could consist of a series of passages in which the diarist is trying to make themselves feel bad, but entries could also be harmful in less overt ways, because they may brood over things and pull the writer down into the abyss.

I’m portraying this as if there’s a choice about it, but there may not be. I don’t wish to label and pathologise everything I do, but along with being practically certain I’m diagnosable as depressed, I’m also pretty sure I exhibit hypergraphia, the compulsion to write. There’s a case on record of a neurologist who wrote a textbook based on her compulsive note-taking and whose writing went into overdrive when she lost twin babies shortly after they were born. Hypergraphia is associated with temporal lobe epilepsy and bipolar disorder, and it responds to antidepressants. It’s said to be rare, but I’m extremely doubtful about that. Maybe it’s unusual for someone to write all over their walls and ceiling and then proceed to cover every blank piece of paper they can find, but that’s an extreme case; judging by my own internal state, and I suspect Sarada’s too, the urge to write is constantly there and needs to be sat upon to stop it from happening, although the existence of writer’s block suggests this is not always so. Isaac Asimov, Sylvia Plath and Danielle Steel are examples of hypergraphic writers whose output has turned out to be publishable and popular. However, even if writing is helpful, it brings problems, because it can stop you from doing other things like earning a living, so if you are hypergraphic, you’d better hope you’re also lucky, because your stuff may well not be publishable, or not noticed as such. It just spools out with no inner critic. The inner critic is something to do with the temporal and frontal lobes, or maybe their interaction, and we don’t always have the luxury of the right degree of connection between the two.

Not everything is hypergraphia. Sometimes there’s just diary-writing, but even therapeutic self-examination can turn out to be problematic later. When you write a journal, you may well be helping yourself work through stuff, but it may be stuff you need to work through at the time rather than later. If you do a good enough job of putting your feelings meaningfully down on paper, re-reading them can pull you back into that state of mind after you’ve got past it, and if, like me, you tend to dwell on the past and have difficulty letting go of things, that’s quite a hazard. The same could apply to more creative writing such as short stories and novels.

There are reasons why writing itself, as a profession or pastime, could predispose one to depression. It’s a solitary activity, and it can take a long time, if it ever happens, to receive validation for one’s work. Your success depends more than usual on the approval of others, possibly lots of them. It can also be very hard to deal with rejection, although to be honest I can’t personally see a distinction between that and failed job applications. You might be writing indoors and depriving yourself of daylight, or you might be writing late into the night, or find that your sleep is interrupted by ideas or conversations which you have to get down. If that happens, and they won’t leave you be unless you do something about them, you start to lose sleep, which seriously raises the risk of depression and other mental health problems. You might also not exercise much, although I’ve found that exercise stimulates creativity, for better or worse.

Finally, back to the issue of writing as self-harm. This is where it gets complicated. If one is given to self-sabotage, it can be hard to tell whether pathologising one’s output is a sign of self-sabotage, or whether the output itself is the self-sabotage. This is the kind of thing one might want to write about, but then again, should I?

. . .

Nothing here today because I felt I was pressurising myself too much to churn out material and it was reducing the quality. But I have blogged on transwaffle.

Get Unknotted!

I don’t really hate many people at all. Michael Howard was one person I genuinely did hate, and on the whole the people I’ve hated have been distant from me rather than in my social circle. It’s probably telling to consider who the people I hate actually are. They tend to be either politicians or philosophers, which probably means the things which touch me most personally are political and philosophical issues.

Surely I’m not alone in hating politicians. That’s probably entirely normal; the difference lies in which politicians I hate. I know, for example, that a lot of people hated Tony Benn. The only animosity I felt towards him concerned his continued loyalty to the Labour Party long after it had apparently become a force for evil, and even then I didn’t hate him; I just didn’t respect him any more. As it turns out, I was provisionally wrong, because it’s clearly possible for the party to become left wing, also known as rational, again. But there’s something about politics which makes people hated. Looking back at Michael Howard, he accepted what became Section 28, which made it compulsory for teachers to condone hatred, and he was responsible for introducing the Criminal Justice Act 1994, which among many other things removed the right to silence. But is a politician to be judged on what they do in office? Jeremy Corbyn, for example, is not going to cancel Trident even though he believes it should be cancelled.

My hatred for philosophers is probably more like a sports fan’s hatred of a rival team, but it’s based on the idea that philosophers have a responsibility to the world. This could be opposed quite easily from a philosophical position, and sometimes is, but to me the point of everything is to make the world a better place, or at least to encourage people to do good, and an important role and duty of a philosopher is to criticise the way things are and come up with a different way of seeing and doing things. Many philosophers abdicate this responsibility, either because they’re in denial about it or because they lack the determination to stay on the side of the angels. Since coherence and integrity are important parts of philosophy, a position such as Heidegger’s support for the Nazis is untenable and throws all of his thought into doubt. It’s not enough to claim that he was living in fear under a totalitarian regime, because he actually supported it with considerable enthusiasm, and even if he hadn’t, his whole system of thought would still have been placed in doubt if it failed to give him the courage of his convictions to stand against the Nazis regardless of reprisals. Not that giving in wouldn’t have been understandable, but it would mean something was wrong. Philosophy needs martyrs.

But I don’t hate Heidegger. My unease with his writing is more to do with what seem to be its implications: that it doesn’t impel one to oppose evil, or at least didn’t impel him to, which leads me to find his philosophy hard to trust. Is there something about its implications which makes the Holocaust seem acceptable? Then again, I wouldn’t think this about the medical research done by Josef Mengele if it turned out to be useful, so I have to ask which kinds of study are damned by their associations and which aren’t. Maybe it’s just that Heidegger’s work is in the second category.

In other words, I tie myself up in knots about philosophy and philosophers.

There are, though, philosophers I hate, two of whom are Jean Baudrillard and Jacques Lacan, the latter also a psychoanalyst. It hasn’t escaped my attention that both are French, and I wonder about the significance of that. I generally reject the idea that language shapes thought, but I do believe that culture taken as a whole influences it, so Baudrillard’s and Lacan’s dodginess is interesting from this perspective. Regarding Baudrillard, the main problem is that he’s playful with serious ideas and seems completely callous about real suffering and death, as with his ‘The Gulf War Did Not Take Place’. Denying atrocities that really happened is the role of the likes of neo-Nazis and child molesters, not philosophers. This callousness extends to other continental philosophers’ denial that other species suffer or are even conscious, which is suspiciously convenient.

When I say I hate Lacan, I don’t just mean I disagree violently with his thought and work. I mean I actually hate him as a person. His writing style is deliberately and self-consciously opaque. Choosing a random paragraph, translated into English:

The notion of an incessant sliding of the signified under the signifier thus comes to the fore – which Ferdinand de Saussure illustrates with an image resembling the wavy lines of the upper and lower Waters in miniatures from manuscripts of Genesis. It is a twofold flood in which the landmarks – fine streaks of rain traced by vertical dotted lines that supposedly delimit corresponding segments – seem unnatural.

Jacques Lacan, Écrits. Translated by Bruce Fink.

This is out of context and might therefore be thought to lead on from and into other things, but in fact wherever I quoted from, it would leave the reader with the same impression. There is an ongoing crisis of style in continental philosophy, one which includes his writing but started much earlier, perhaps with Hegel, and it leads me to wonder whether the reason they write like that is that they have nothing important to say. I also wonder if the point is for the text to act like a random pattern for the reader to project meanings into. But when I read David Hume, whose work is over two centuries old by this point, I can’t help but be impressed by his clarity of style:

There are some philosophers, who imagine that we are at any moment conscious of what we call our self ; we feel its existence and its continuing to exist, and are certain—more even than any demonstration could make us—both of its perfect identity and of its simplicity. The strongest sensations and most violent emotions, instead of distracting us from this view ·of our self·, only focus it all the more intensely, making us think about how these sensations and emotions affect our self by bringing it pain or pleasure. To offer further evidence of the existence of one’s self would make it less evident, not more, because no fact we could use as evidence is as intimately present to our consciousness as is the existence of our self. If we doubt the latter, we can’t be certain of anything.

Although the passage from Lacan is a translation and the one from Hume isn’t, I still think Hume’s style is way less knotty than Lacan’s. There’s much more about Lacan to hate than that, though: for instance his contempt for his clients and his uncaring attitude towards their suicides. It should also be borne in mind that Freudian psychoanalysis generally involved fleecing unhappy people for years on end at exorbitant rates, while being sexist and homophobic and using a florid style of theory which probably made their problems worse.

By Matemateca (IME USP) / name of the photographer when stated, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=68828200

Sometimes you can straighten out a tangle and end up with a straight length of whatever it was that was tangled. Sometimes this even applies to a closed loop, and we don’t always realise the simplicity underlying these tangles. We might find that a tangle consists of one loop, two loops, three or more, joined or not joined. Knot theory classifies these in various ways, such as in the above exhibit. Although they aren’t included there, the rings at the top of this post, the Borromean Rings, obsessed Lacan in his later years. I should probably put another picture of them here:

The crucial thing about Borromean rings is that although they’re linked, removing any one of them leaves the other two unconnected. Lacan used this to illustrate the nature of the mind, namely the relationships between the real, the imaginary and the symbolic. None of these is more important than the other two, and where one is missing, various forms of abnormal psychology result. For instance, psychosis identifies the real with the imaginary without recognising the symbolic, so the imagination just is real to a psychotic person, and the possibility that it’s symbolic of something else is not present in their mind. My immediate problem with this is that it treats psychosis purely as a defect, with no motivation to reclaim madness as something positive, although it is often negative and I don’t think that should be ignored either. One reason Lacan shifted to using the knot is that he no longer considered language adequate to describe the human mind, which in fact is fair enough. He also considered adding an extra ring, the sinthome, to the knot, and saw James Joyce’s writing as an example of one. But there’s another problem with Lacan and the Borromean Rings (which, being three separate loops, are strictly a link rather than a knot in the mathematical sense).

This is one of those occasions when I agree with Roger Scruton. It does happen sometimes. Scruton sees Lacan as a fool. Chomsky is also hostile to him, seeing him as a charlatan, and he’s also been seen as leaving behind a wasteland of damaged people who were unfortunate enough to come into contact with him. Just on the subject of the Borromean knot, though, the issue is that although it might superficially look like Lacan is trying to link mathematical knot theory and psychoanalysis, he doesn’t actually seem to know what he’s talking about and is merely using it as an extended metaphor, or perhaps trying to give his work a patina of wisdom and respectability without really understanding what he was doing. In the end he just seems to be performing pretentiously, and has taken psychoanalysis so far from the actual emotional problems people have that it’s useless.

Even so, knots are used by R D Laing to describe complicated psychological games in a way which really rings true to me. To my surprise, his ‘Knots’ is sometimes described as a series of poems, which given my previous hang-ups presumably means I can’t understand them – odd, because I thought I did up until now. Here’s a quote which illustrates the kind of thing Laing considers to be a knot:

There must be something the matter with him

because he would not be acting as he does

unless there was

therefore he is acting as he is

because there is something the matter with him

He does not think there is anything the matter with him


one of the things that is

the matter with him

is that he does not think that there is anything

the matter with him


we have to help him realize that,

the fact that he does not think there is anything

the matter with him

is one of the things that is

the matter with him

there is something the matter with him

because he thinks

there must be something the matter with us

for trying to help him to see

that there must be something the matter with him

to think that there is something the matter with us

for trying to help him to see that

we are helping him

to see that

we are not persecuting him

by helping him

to see we are not persecuting him

by helping him

to see that

he is refusing to see

that there is something the matter with


for not seeing there is something the matter

with him

for not being grateful to us

for at least trying to help him

to see that there is something the matter with


for not seeing that there must be something the

matter with him

for not seeing that there must be something the

matter with him

for not seeing that there is something the

matter with him

for not seeing that there is something the

matter with him

for not being grateful

that we never tried to make him

feel grateful

This kind of text seems to make sense to me in a way Lacan’s never could, and it’s an example of a psychological knot of a different kind from Lacan’s Borromean knot. It also seems to be helpful in a way none of Lacan’s stuff is.

We do get ourselves tied in knots, and it might even be possible to draw lines between stages in that process which indicate exactly how we’ve got tangled. Given that, it does make sense to me to link knot theory with how we understand our own psyche even though it’s defiled by Lacan’s attention. It also seems to me that looking at my own prose style as a tangle might lead me to straighten it out and make it clearer in a way Lacan took pleasure in not doing with his own.

Mathematical knots are not the same as knots in string with loose ends. A mathematical knot is made of one or more tangled closed loops which cross over and under themselves and each other, or tangled loops which are fused at certain points. There are practical applications. One is in molecular biology. A DNA strand is coiled into coils of coils, and as it’s read to produce proteins or replicated in a process such as cell division, those coils would bunch up until the strand became too tangled to do anything with. This is solved by enzymes called topoisomerases. Much DNA is coiled in the opposite direction to the coils of the double helix itself, and topoisomerases adjust this coiling by cutting the strands and gluing them back together again. Organisms living at very high temperatures have DNA coiled in the same direction as the turns of the “staircase”, because this reinforces the strands and stops them from melting apart. Some antibiotics work by stopping topoisomerases from working, which means pathogens can no longer reproduce or produce toxins because their DNA gets too tangled. There’s also a disease called scleroderma, which involves the loss of elasticity in skin and other epithelial tissue, can be fatal, and ironically can be caused by exposure to chemicals used to manufacture PVC; it can involve the production of antibodies to a topoisomerase. Understanding knot theory better might open up approaches to dealing with scleroderma and to the production of new antibiotics. It also occurs to me – and this is just me, so I may well be wrong – that Alzheimer’s disease involves tangles of tau protein in the brain (tau normally stabilises the microtubules inside neurons), and tau tangles can also be produced by physical trauma to the brain. I don’t know for sure, but I wonder if a drug could be produced which dealt with these tangles and could be used to prevent dementia and the damage caused by head injury; since one of the consequences of head injury in childhood can be the development of paedophilia, this might turn out to be extremely practical. But that’s just my personal guess and I could be totally wrong.
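Incidentally, the supercoiling which topoisomerases manage has a tidy mathematical statement. This is the standard Călugăreanu–White–Fuller relation from knot theory, not something original to this post: for a closed double-stranded DNA loop, the linking number Lk splits into twist and writhe,

```latex
% Linking number = twist + writhe for a closed ribbon such as circular DNA.
% Lk is a topological invariant: it can only change if a strand is cut,
% which is precisely what topoisomerases do (type I enzymes change Lk in
% steps of 1, type II enzymes in steps of 2).
Lk = Tw + Wr
```

So an enzyme can trade twist for writhe freely without cutting anything, but relaxing a supercoiled loop outright requires the cut-and-reseal trick described above.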

You can also use it to tie bows faster and untangle things. One interesting question for me right now is whether I can use it to clarify my writing, although some of that is easier to address through the likes of turning nominalisations back into verbs, the passive voice into the active, and analysing the logical structure of what I’m saying: for example, I could say “the fact that I think is incompatible with it not being the case that I exist”, or I could say “I think, therefore I am”. This scene from ‘Blackadder Goes Forth’ is an excellent example of what I do wrong, and to be honest, if I could get that sorted I would myself be sorted. One of the reasons I write like this is that I’ve done a Masters in continental philosophy, so in a way it’s Lacan’s fault.

The other important possible application for me, though, would be to help sort out my own head and work out if I myself have any unnecessary tangles in how I think and feel, like R D Laing’s version of knots. Whether knot theory can be applied to that is quite imponderable, but wouldn’t it be great if it could? And if it couldn’t, you could always use it as an obfuscating theory to start a cult, like what seems to have happened with Lacan.

Hoverhome, Sweet Hoverhome

Life is a bridge: build therefore no house upon it

– Fake Buddha (also attributed to Jesus and others)

This is a painting of what I’ve generally thought of as the “In-Between House” in the Lake District town of Ambleside, previously in Westmorland, though that county, along with the Furness peninsula of Lancashire, merged with Cumberland into Cumbria in 1974. This fact makes it a little harder to establish what I thought I knew about this edifice: that it was deliberately built on a bridge between two counties in order to avoid paying dues to either side. I now doubt very much that this is true, because I don’t think it was on a county border at all.

London Bridge also springs to mind in this respect – almost an entire town built on a bridge, with its own gates and certain rights of its own compared to the rest of London. It was started in 1176 at Henry II’s behest, in memory of Thomas Becket, who had recently been murdered, allegedly because of an overheard remark – well, I’m sure you know the story, so I won’t go on. The point is that London Bridge actually did have buildings on it, up to two hundred of them, some dangerously overhanging the river and in any case a major fire hazard, with waterwheels in the arches which supposedly slowed the water enough for it to freeze over more often than it otherwise would’ve done, thereby enabling frost fairs on the Thames itself. Notoriously, it was also used to display the heads of executed people pour encourager les autres, as can be seen at Traitor’s Gate, bottom right in the above picture, and it’s said to be responsible for traffic driving on the left in these isles, with the single exception of Savoy Court:

(Note the position of the van).

Arthur C. Clarke once imagined that where we were going we wouldn’t need roads, because by the 1990s wheeled vehicles would be a thing of the past and hovercraft would’ve taken over. This is quite possibly the wrongest thing anyone has ever said, except for all the other things people have said like that, such as “guitar bands are on their way out, Mr Epstein”. It didn’t happen. It’s like airships but the other way round, because unlike those, which are highly fuel-efficient, Ground Effect Machines (GEMs) or Air Cushion Vehicles (ACVs) are extremely fuel-hungry, noisy and don’t turn corners easily. They seem to be intrinsically noisy, which is very saddening, partly because noise, being energy, suggests they could be made more efficient by making them quieter. Active noise cancellation could at least make them somewhat quieter for bystanders: you play back the noise something usually makes but “turn the waves upside down”, as it were, cancelling it out. The difficulty in steering makes them hard to use as routine modes of transport, and the terrible fuel economy is offset a little by the fact that they would render roads obsolete, but not enough. They’re chiefly useful on surfaces which are not firm, such as marshland and water.
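The “turn the waves upside down” idea can be sketched in a few lines. This is an idealised toy – a pure tone with a perfectly synchronised inverted copy – ignoring all the timing and measurement problems that make real active noise cancellation hard:

```python
import math

# A pure 50 Hz hum sampled at 8 kHz, standing in for fan noise.
rate = 8000
noise = [math.sin(2 * math.pi * 50 * n / rate) for n in range(rate)]

# Idealised active noise cancellation: emit the same waveform with its
# phase inverted ("turned upside down") so the two waves cancel by
# destructive interference.
anti_noise = [-s for s in noise]
residual = [s + a for s, a in zip(noise, anti_noise)]

print(max(abs(r) for r in residual))  # 0.0 in this perfectly synchronised case
```

In practice the anti-noise arrives slightly late and slightly wrong, so cancellation is only partial, which is why the technique works best on steady low-frequency drones like fan and rotor noise.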

The ground effect – the increased lift and reduced drag experienced just above a solid or liquid surface in an atmosphere – is also exploited by another type of vehicle, one even rarer than the hovercraft: the ekranoplan. This is basically a heavily-built plane which never gets above the level at which the ground effect is significant. It has wings but doesn’t really fly. Ekranoplans are more efficient than aeroplanes or ships, avoiding drag from the water while benefitting from the lift near the surface, and they travel faster than ships, but they’re practically unknown outside the former Warsaw Pact countries.

Back to hovercraft though. I used to be so keen on them that as a child I had a blueprint of the SR.N4 on my bedroom wall on an enormous piece of tracing paper. Even as an adult, Sarada once gave me a toy working hovercraft, which unfortunately I broke almost immediately by getting my hair wrapped round the fan. I also once had a crush on someone whose father was a hovercraft pilot, which demonstrates what it was to live in the one part of Britain where hovercraft were actually used in earnest – there used to be a significant number of hovercraft pilots in Kent. Later on, it came to me that there was a problem with living on either the land or the water: in the former case you have to buy land or pay rent which ultimately ends up in the pocket of the person or organisation owning the land, and in the latter you have to pay mooring fees or keep moving. In other words, beyond the need for food, clothes and shelter, you have to pay simply to exist somewhere, which is problematic because you may not approve of what’s then done with the money. Dropping out of the rat race won’t work either, because in order to get there you have to have derived an income from somewhere to buy the land you need to build your house on or live off, so it could very well be “dirty money” acquired by doing harm to the world and society. Living on a boat usually means paying to stay in a particular place unless you stay on the high seas, which brings its own problems.

The solution would therefore seem to be to live on a hovercraft. Since you’re not actually on a surface, you don’t need to pay anyone rent or mooring fees. The drawback is that you constantly have to burn fuel, or use whatever energy source you have, because as soon as you let your craft sink to the ground or water you’re going to start incurring debt. Arthur C Clarke’s solution was efficient electrical storage or nuclear batteries, and the latter would work, but at the cost of being very hazardous and polluting. Fusion power would also work, but apart from the slight snag that it doesn’t yet exist as a practical power source, it’s actually quite wasteful, as the onslaught of neutron radiation would make the housing brittle and mean it needed to be replaced regularly.

This is not, however, a solution even if the energy problem is solved (and personally I wonder if the answer is to go into international waters and float for a bit, but it probably isn’t). This is because a hovercraft doesn’t count legally as a ship or road vehicle but as an aircraft, and is therefore subject to civil aviation authorities. It’s also illegal to drive them “over” roads, which is a little unfair because the wear and tear would be minimal compared to wheeled vehicles. They’re subject to road vehicle regulations under the Hovercraft Act 1968, but the DVLA won’t tax them and car insurance firms won’t insure them, so although they would otherwise be legal, they can’t actually be piloted over the public highway. In view of the noise and the cornering problem, this probably isn’t a bad thing, but in themselves they’re very safe: fatal hovercraft accidents have been extremely rare since they were invented, although it should be borne in mind that they’re not very common, so this may be misleading.

There are no wheeled animals. Rotifers seem to have wheels, but in fact these are halos of cilia which beat in a rotary fashion, and they only work because rotifers are small; larger animals wouldn’t be able to use them for locomotion. A true wheel would have to be an organ not directly connected to the rest of the body, and that’s the problem a living thing would encounter: muscles, nerves and blood and lymph vessels would all get twisted unless the entire organ were “dead”. Bacteria do have genuinely rotary flagella, but these are driven by a flow of ions across the cell membrane between the rotor and its housing, powered by the usual reaction pathways which liberate energy from food. They don’t need anything like a blood supply or nerves because bacteria are so small. It doesn’t scale.

By User:Fir0002 – own work of Fir0002, GFDL 1.2, https://commons.wikimedia.org/w/index.php?curid=1391928

Hovering, though, is something animals can do, hoverflies being an obvious example, although what a hoverfly does has little to do with the ground effect. Ekranoplan-type hovering, close to the ground with wings, is presumably how the earliest birds started to fly, unless they began as gliders. Closer to hovercraft are the gastropods – slugs and snails. Even closer are the turbellarian flatworms such as the planarian, which secrete a layer of slime against a surface, usually underwater (there are also land-living flatworms), through which they glide using beating cilia on their undersides. Neither gastropods nor planaria, though, use an air cushion.

I have a vague memory of an idea for a non-sentient alien called a “hoverduck”. This was a duck-shaped animal which did move around on an air cushion; the idea isn’t mine, although I’ve recently elaborated it rather. Hoverducks have duck-like bills on their heads but lack jaws. Instead, they use the wide slit at the front of these rigid bills to suck in air containing small organisms and other living matter, such as pollen, spores, tiny aerial plants and flying or floating animals. These get stuck to filters in their pharynges, where they’re digested, and the air and waste products are then expelled through their undersides to provide lift. They move around by aiming jets of air in various directions. Like ducks on this planet, they live on water, breathe air and can in a sense fly. In fact they superficially resemble ducks. I’ve been haunted by this image for over forty years and have no idea where it originates.

In conclusion then, there are a lot of nice things we can’t have. We can’t have airships and we can’t have hovercraft. We can’t even have hoverhomes, although we can live in airships, perhaps on Venus (but that’s another story). But one thing we could have, or rather encounter, is hoverducks, so all is not lost.

The One Without The Zeppelins?

Airship-filled skies are a cliché of alternate history fiction, to the extent that were we to take them as literal samples of possible worlds, our own timeline could accurately be characterised as “The One Without The Zeppelins”. In the Sherlock Holmes story ‘The Adventure of Silver Blaze’, the famous detective draws attention to the dog that didn’t bark in the night – “the curious incident of the dog in the night-time”, which of course inspired the title of Mark Haddon’s novel. Before I get too scattered, I want to ask the crucial questions: is the general non-use of airships in our timeline relatively improbable? Does it depend on an event which could easily have gone differently? In the jargon of alternate history fiction, what’s our POD – Point Of Divergence?

One reason we don’t really have airships nowadays, apart from the occasional advertising blimp such as the Goodyear Airship, is that because they were filled with hydrogen, they got a reputation for exploding. There were two reasons for using hydrogen rather than helium. One was that molecular hydrogen – at least protium, i.e. hydrogen-1 – has only half the mass of helium and is therefore somewhat more buoyant; the other is that helium is much more expensive, because on this planet it’s much rarer, even though it’s the second most abundant element in the Universe, making up something like ten percent of all atoms. This rarity is because helium does not readily combine with other elements, and therefore rises through the atmosphere after its formation and ends up in space. However, unlike most other non-radioactive elements, helium is constantly being produced by the rocks of this planet: a helium atom is a neutralised alpha particle – two protons and two neutrons bound together – and alpha particles are a common emission of unstable elements. Helium is occasionally trapped in deposits of natural gas when the rocks above don’t let it through, and this is our main source of it.

By contrast, hydrogen is readily available despite being lighter than helium, because it combines easily with other elements, forming compounds such as water and the constituents of crude oil; in fact hydrogen is present in more compounds than any other element, even carbon. It can fairly easily be extracted by passing a current through acidified water, or by the action of strong acids on metals. It’s also the most common element of all, comprising around nine-tenths of all atoms. That said, its eagerness to combine with, for example, oxygen – also known as combustion – does make it potentially very hazardous.
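It’s worth noting that hydrogen’s buoyancy advantage is smaller than “half the mass” suggests, because lift depends on the difference in density between the lifting gas and air, not on the gas’s own density. A rough check, using standard molar masses:

```python
# Lift per unit volume is proportional to (density of air - density of gas),
# and at equal temperature and pressure density is proportional to molar
# mass, so molar masses in g/mol are enough for a comparison.
AIR, H2, HE = 28.97, 2.016, 4.003

advantage = (AIR - H2) / (AIR - HE) - 1
print(f"hydrogen gives about {advantage:.0%} more lift than helium")  # ~8%
```

So swapping hydrogen for helium costs surprisingly little lift; the real obstacles to helium were price and supply.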

There are currently only about two dozen blimps in the world, of which only half are operational. The Hindenburg disaster put paid to their use as passenger craft, and it was not the only calamity. The R101 above, the publicly-funded rival of the private R100, was on its way from Bedfordshire to Karachi when it crashed in France, the immediate cause being a sudden downdraught, though the ensuing fire killed most of the people on board. There were other factors involved, such as the possible inexperience of the crew, and the crash put paid to any further attempts by the British to develop passenger airships. That was in 1930; the Hindenburg disaster was in 1937. Although the R100 was more successful, it made no more flights after the R101 crash and was decommissioned. However, by the time of the Hindenburg disaster, heavier-than-air flight had improved so much that the chances are airships would’ve been superseded anyway.

Airships were the largest flying machines ever built, often over two hundred metres long – the same kind of scale as ocean liners and supertankers. To me at least they have an immense romantic magnetism, and it really saddens me that they didn’t, er, take off. It must have been an amazing experience to see a vehicle of that size fly over, and an even more remarkable one to fly in it. I have to confess that I don’t know exactly which improvements in heavier-than-air craft would’ve enabled them to take over anyway, so right now I can’t answer the questions I posed myself above. Clearly, though, if heavier-than-air flight had been developed later, this would have extended the career of airships.

Airships are slower than planes and need to be much larger to carry the same weight, because the entire mass of the airship with passengers and cargo has to be lighter than the equivalent volume of air, which has roughly 1/800 the density of water. The largest airships ever built were over two hundred metres long. Taking one two hundred metres long and thirty-three metres in diameter and treating it as a cylinder, the equivalent weight of air is about 205 tonnes. This is the maximum weight an object of those dimensions can have in this planet’s atmosphere at sea level and still be passively buoyant, as opposed to having to use some other force to hold or push it up. A cylinder of that size filled entirely with hydrogen would weigh about fourteen tonnes, and one filled with helium about twice that, giving the designer a fully laden weight budget of roughly 190 tonnes in the first case and 175 in the second. There are other possible lifting gases, the most obvious being ammonia, methane and neon. Neon weighs about two-thirds as much as air and is very rare on Earth despite being the fifth most abundant element in the Universe, because like helium it doesn’t combine, and it’s light enough to leave the atmosphere without being easily replenished. Methane and ammonia weigh a little less than neon, but methane shares hydrogen’s drawback of being highly flammable. Ammonia is quite toxic but only slightly flammable and, like methane, can be produced biochemically, although industrially it’s made by the Haber process. Even so, an ammonia-filled airship of that size has something like eighty spare tonnes for the structure, cargo, fuel and passengers, and to me looks like the best option. It might also be feasible to improve an airship’s performance by shaping it as a giant wing providing some aerodynamic lift, or by heating the lifting gas to reduce its density, in which case it would need to be kept well away from oxygen.
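As a back-of-envelope check on this kind of arithmetic: the densities below are assumed round sea-level values rather than measured ones, and the craft is idealised as a plain cylinder.

```python
import math

# Idealise a large airship as a cylinder 200 m long and 33 m across.
length = 200.0
radius = 33.0 / 2
volume = math.pi * radius ** 2 * length        # about 171,000 cubic metres

# Assumed round sea-level densities in kg per cubic metre.
density = {"air": 1.20, "hydrogen": 0.084, "helium": 0.167, "ammonia": 0.71}

print(f"displaced air: {volume * density['air'] / 1000:.0f} tonnes")   # ~205
for gas in ("hydrogen", "helium", "ammonia"):
    lift = volume * (density["air"] - density[gas]) / 1000
    print(f"{gas}: ~{lift:.0f} tonnes gross lift")   # ~191, ~177, ~84
```

Gross lift here means everything available for structure, engines, fuel, cargo and passengers combined, which is why the figure shrinks so quickly for the heavier gases.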

Airships have one major advantage over planes: fuel economy, and everything which follows from that, namely much reduced pollution. There are even pedal-powered airships, although these are much smaller than the Hindenburg or R101. This could lead to their adoption for air travel over planes at some point, although the fact that they’re much slower is a problem. Another advantage is that they can transport people and goods to inaccessible locations without the need to build roads or rail, and without disrupting the points between with noise and other pollution to the same extent.

Zeppelins were used for air raids in the Great War. This photo shows the aftermath of the accidental bombing of Loughborough on the night of 31st January 1916, which killed ten people and injured 150. It happened because the blackout had been enforced successfully in Leicester but not here, making us an easy target. The bombs were dropped on the Rushes. Apparently the crew thought they’d hit Liverpool and Sheffield, which strikes me as very odd, because it’s hard to imagine that their ability to navigate was that primitive at that point. I’m probably missing something.

Air raids of that sort were ruled out in the Second World War due to advances in aeronautics and the fact that slow airships filled with hydrogen were ridiculously vulnerable to attack. War also tends to give technological change, and to some extent progress, a boost, meaning that the War dealt the death blow to the already ailing dirigible. Hence, with a little tinkering with the Treaty of Versailles, World War II could have been prevented, and this might have slightly extended airships’ lifetime, although things were already looking grim for them at that juncture. Without the War, perhaps they could have lasted into the 1950s, though a rather different 1950s than in OTL (Our Time Line).

There are niche applications for airships, such as the advertising function of the Goodyear Blimp, and cruises and safaris, but Zeppelin World has widespread airships all over the place in the present day, so that doesn’t cut it. There is a slight advantage in the availability of lighter materials in recent times, and another is that airships can moor over urban areas, unlike planes, which for a long time needed to land away from cities – although this requires them not to explode. If they were used for tours, it might make sense for tourist hotspots like London to have dirigible-filled skies, so the scene near the start of ‘Rise Of The Cybermen’ is realisable – but not necessarily Lumic travelling there by airship, which suggests that they’re a routine, though possibly luxury, mode of transport. This is cheating, though, because we’re clearly supposed to imagine widespread use.

Maybe the situation could arise by slowing the development of planes rather than accelerating that of airships. This would probably need a serendipitous discovery or two to go the other way; again, I don’t know enough about the history of aviation to suggest such a change. Or maybe plane development could slow for social or political reasons. For instance, just as a major disaster was a minor factor in the end of airships, something like the early development of passenger planes resulting in multiple crashes, or maybe even an early 9/11-type scenario, could result in a taboo surrounding planes. A third possibility is lobbying, or the failure to discover fossil fuel resources. On the whole, as with my Caroline Timeline, an apparently simple change in the present day can only happen due to multiple changes in the past, and vice versa.

Hence a series of PODs could lead to airships remaining routine, at least in niche uses, until steadily later dates. The Treaty of Versailles being altered to be more sympathetic to the losers might have prevented Nazism, the Second World War and the Cold War, and as a side-effect slowed the advance of heavier-than-air aviation, probably allowing a dirigible-dominated 1950s. The use of helium rather than hydrogen could also be a factor here, though only a minor one, because the Hindenburg disaster was really only the final nail in the coffin. Conversely, early development of heavier-than-air passenger flights with a series of crashes might have put the public off planes, although it might just have made them generally more fearful of flying. A “helium lobby” active in government, fewer fossil fuels being found, environmentalism taking off earlier – all of these things could have propelled airships further forward.

Or, it could just be as simple as Thomas Cook deciding to invest in airship cruises and tours and successfully marketing them, meaning that the more picturesque, touristy parts of the world would just have them without general adoption for more routine purposes. This, in the end, is probably the simplest scenario.

When it comes down to it then, we aren’t “The One Without The Zeppelins” at all. The ones with them are the odd ones out, not us.


The real Zeerust is a town in South Africa, but like “derby” and being “sent to Coventry”, the name now has a life of its own, although perhaps only in a select circle: it’s one of the words in Douglas Adams’s ‘The Meaning Of Liff’ which was adopted and given a new meaning, namely the oddly dated feel which a “futuristic” style acquires after a few years. It would be retrofuturism, except that it isn’t doing it on purpose. Analysing the word, “rust” clearly represents the corrosion or patina forming on something over time, like verdigris, while “zee” has the futuristic or alien Z at the beginning – perhaps trying too hard, in a way that was once thought a good method of demonstrating the shape of things to come. Once.

During my childhood in the late ’60s and throughout the ’70s, Art Deco seemed absolutely dreadful. It seemed to be trying to be ultramodern, but from before a time when anyone really knew what modern was yet. It had a kind of distinctive uncoolness, like the one mysteriously acquired by parents just as their children enter adolescence, and it hadn’t yet made its peace with the present day as it now has. Somehow, Art Deco has managed to reinvent itself as something quaint rather than irritating. This is in contrast with the products of the ’60s:

This is the SUMPAC – the Southampton University Man-Powered Aircraft, dating from 1961, hence the sexism. Of the various human-powered aircraft, this was the first which, to me, looked “modern”. Earlier craft looked to me like bat-waterlily hybrids, but this one, and even more so its contemporary, the HMPAC Puffin (whose royalty-free image I can’t find), actually looked contemporary to me. The SUMPAC suffers a little in this respect due to the wooden frame. Later still came the Gossamer Condor and Gossamer Albatross, which to me looked beyond contemporary:

This photo doesn’t do the aircraft justice, and looking at it now, it does actually look a little old-fashioned, but I wonder if this is to do with the ageing of the materials from which it was made rather than the vehicle itself. This effect leads to a misleading impression of the past – that it looked shabby and worn out, when in fact it didn’t. The past was bright and colourful, often streamlined and, well, youthful is one word which springs to mind. Machines, buildings and other artefacts do wear out of course, so we now have the grainy, scratched, dust-covered black-and-white cinematography, but it hasn’t always been that way, and all of this was fresh not only in terms of technological development but also in its general appearance and performance. Even allowing for all this, I can convince myself that it is in fact rather old-fashioned.

A particularly vivid personal example of how media ages intrinsically is the Tornados’ ‘Telstar’, alleged to be Margaret Thatcher’s favourite record. This is the real Telstar 1:

This was the first active communications satellite able to relay television, launched in 1962. The single, to me, had a kind of utopian, “ages of plenty” feel as satirised by Donald Fagen in his ‘I.G.Y.’, and yes, even in the early 1970s and beyond it did feel futuristic to me. Then at some point my brother told me, “you were born in 1967”, and I realised that the only reason I liked it was that its future was already my past.

James Laver, a curator at the Victoria & Albert Museum who died in 1975, made an interesting observation regarding this process, referred to today as “Laver’s Law”. Although he applied it to fashion in clothing, it can probably be applied more widely with a bit of modification. He claimed that the process of acceptance and rejection went through the following stages over the timescales mentioned:

  • Ten years before coming into fashion, something is considered indecent. One example would be the wearing of apparent underwear on top of outerwear which occurred in the 1980s but would definitely have been considered indecent in 1975 (notably before punk took off)
  • One year before coming into vogue, a sartorial decision would be considered shameless. It’s attention-grabbing in a way which shocks people, though not quite enough to get you arrested. Surprisingly from our perspective, there’s a possibly apocryphal tale that when the haberdasher John Hetherington first wore a top hat in public in 1797, he was arrested for causing a breach of the peace. If this is true, it suggests that the top hat hadn’t even reached the point of shamelessness by that stage.
  • When in fashion, styles are considered smart. I’m not sure this is true, because punk, for example, or the fashion of wearing denim with holes in it, isn’t supposed to look smart; I think this part is either a rule of thumb or simply inaccurate. It doesn’t necessarily work even for Laver’s own area of focus, because during the Restoration, for example, men’s fashion was about exuberance and to some extent shock. Nonetheless, I can see that it is often true, although it often takes a while for clothing to be adopted into the formal mainstream.
  • One year after being in fashion, it’s considered dowdy. This raises further questions in my mind because it isn’t clear how long something stays in fashion and why. Nonetheless this kind of makes sense, although “dowdy” doesn’t feel like quite the right word. It’s more that the cognoscenti would see someone as out of touch with the latest trends.
  • A decade after, previously fashionable items become hideous. The classic example of this is of course flared trousers as considered in the ’80s, and it took a pretty long time for them to become rehabilitated. These along with wide collars, sideburns and kipper ties were absolutely iconic of hideous fashion as considered in about 1985. Likewise, the same applies to padded shoulders in the mid-1990s – they were a decade out of fashion and considered ugly. Personally I still think they’re ugly but that’s probably just me.
  • Another decade later, the fashions have become ridiculous. Once again I would cite kipper ties in the ’90s and shoulder pads in the ’noughties. Presumably it also means that in the post-war period, the “flapper” silhouette would have seemed that way, but I wasn’t there.
  • A decade on from that, they become amusing, and certainly that can be seen with early 1990s trends nowadays. However, it isn’t clear whether this prevents things from coming back into vogue in an ironic way, like T-shirts with heavy metal band names on them for example.
  • Two decades further still, they have become quaint. With just over half a century of life behind me, this places what was “in” at the time of my birth into the quaintness zone, so this would mean the military jackets and miniskirts of the late ’60s. Again, I’m not sure that those are now quaint.
  • At the century mark, romanticism is evoked. This would be 1920s fashions, more or less, at this point, but to me this would seem to range further, into the War years, possibly due to the separation of lovers and the danger involved giving lots of opportunities for that kind of thing.
  • Finally, a century and a half after something is in style it’s considered beautiful.
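Laver’s Law, at least as rendered above, is essentially a lookup from time offsets to perceptions, so it can be sketched in a few lines of code. This is only an illustration of the essay’s version of the timeline – the offsets, the stage names and the function name are taken or adapted from the list above, not from Laver’s original formulation:

```python
# Laver's Law as rendered in the essay: years relative to a style being
# "in fashion" (negative = before, positive = after), mapped to how the
# style is perceived at that point.
LAVER_STAGES = [
    (-10, "indecent"),
    (-1, "shameless"),
    (0, "smart"),
    (1, "dowdy"),
    (10, "hideous"),
    (20, "ridiculous"),
    (30, "amusing"),
    (50, "quaint"),
    (100, "romantic"),
    (150, "beautiful"),
]

def perception(years_after_fashion: float) -> str:
    """Return how a style is perceived, given years since it was in fashion."""
    current = "not yet on the radar"
    for offset, label in LAVER_STAGES:
        if years_after_fashion >= offset:
            current = label  # the most recently passed stage applies
        else:
            break
    return current
```

So flared trousers around 1985, a decade or so out of fashion, come back as “hideous”, and 1920s fashions viewed today come back as “romantic”, matching the examples in the list.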

That’s fashion, perhaps substantially different to other aspects of daily life such as popular culture, architectural styles, gadgets and so on. This also illustrates that there’s a link between our feelings about this stuff and events of that era, and at times our perception of those events can strike chords in contemporary experience. For instance, someone living in the ’60s or ’70s would probably find the pessimistic and cynical tone of ’30s literature, such as ‘Down And Out In Paris And London’, harder to relate to than they would a decade later, assuming they had themselves moved with the times.

Attempting to apply this to design styles of the past, Art Deco, popular in the 1920s and 1930s, ought to look romantic to us now, and it probably does, but the imagery I have in mind right now is Fritz Lang’s ‘Metropolis’, which in fact seems quaint, although it seems to have done so for at least four decades now.

Much of this nowadays is presumably manufactured and manipulated by capitalism and its need to get people to throw things away and buy new products all the time. Fast fashion might also be expected to accelerate that, and it clearly does apply to retro tech and the like. Here’s my current mobile phone:

(Excuse the fingernails). This is the updated Nokia 3310. Around two decades ago, we owned the original 3310:

This is of course considered a design classic, and although there are similarities between the current “3310” and the original, the general view seems to be that nowadays people wouldn’t be able to put up with the original’s lack of features. This is true to an extent even for me, although I find the inclusion of a camera very irritating, because the only time I use it is accidentally, when I’ve pressed the wrong button. The main point, though, is that there are only seventeen years between the announcement of the first version and the revival. This is quite concerning in view of the ongoing ecological catastrophe. It’s notable, though, that our daughter’s car and my mobile have a similar aesthetic:

By Vauxford – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=69773718

This similarity, which is accidental, is nonetheless interesting because it suggests that our daughter and I have similar tastes, although commercialism circumscribes the pool of choices.

The extent to which fashion has no progressive context is another point. One criticism of Kuhnian views of scientific change is that they don’t seem to allow for the idea of advancement: a scientific theory becomes entrenched and is then demolished when a new generation reaches a position of influence, without there having to be much to choose between the two theories, and this has been compared to the vagaries of fashion. In fact, I’m not convinced that fashions just happen, because influences such as fast fashion mean that the terribly unsustainable and environmentally hazardous polyester is currently extremely popular, and the fact that so many clothes are now bought online means that consumers don’t get to see and touch them before buying, so how they can be made to look in online images matters more than how they actually look, and satisfy the other senses, when worn. Then again, because so much of our interaction is online nowadays anyway, that might matter less personally, because clothes really do primarily have to look good in selfies and on YouTube and Facebook, perhaps “appropriately” filtered. Nonetheless there is progress in fashion – for example, the late nineteenth-century Rational Dress Movement aimed to make clothing more practical and comfortable. And sadly, in the realm of architecture, appearance can become more prized than safety and people’s lives, as the Grenfell travesty demonstrates. We may not be crushing the life out of our viscera with corsets nowadays, but we’re still choking people with flammable cladding for aesthetic purposes.

By J-P Kärnä – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=28007325

I’m conscious of a shift in what counts as futuristic. As a child in the 1970s, my initial understanding of the futuristic involved white and silver streamlining with round portholes, rather like the Futuro house above, designed in the late ’60s and made of fibreglass, or Cousteau’s Conshelf experimental underwater dwelling. Then I became aware of a second style of “futurism”, which I thought of as “blue and silver”, with a kind of rectilinear approach similar to Chris Foss’s illustrations of spacecraft. For some time I thought of these as two visions of the futuristic rather than one being more futuristic than the other, although eventually “blue and silver” won out. This style is also associated with airbrushing – precise control of the quantity and direction of pigment – which was popular in the 1980s.

I want to finish by returning to Douglas Adams. I think I’ve quoted exactly this before on here, but unfortunately I don’t tag anything, so I’ll just do it again. You may wish to join me in applying it to your own life:

Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works. Anything that’s invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it. Anything invented after you’re thirty-five is against the natural order of things.