My Writing Style

I’m fully aware that I’m too wordy, don’t stick to the point and talk about arcane topics a lot, not just on here but in face-to-face conversations. This is partly just how I am, in the sense that I’m either unable to do otherwise or have adopted it as a bad habit. In a world of shortening attention spans and loss of focus, though, I feel that, however ineptly I do it, this is still worth doing.

In the process of doing this, I continued this blog post in a fairly lightweight word processor called AbiWord, which we stopped using because it had a tendency to crash without warning, leaving no salvageable document behind, and it proceeded to do exactly that, so this is in a way a second draft. One of the many features AbiWord lacks, and this is not a criticism because its whole philosophy is to avoid software bloat, is a way of assessing reading age. Word, and possibly LibreOffice and OpenOffice, does have such a facility, which I think uses Flesch-Kincaid. When I mentioned this to Sarada she drew a blank, so it’s probably not widely known, and in any case I looked into it and want to share what I found.

There are a number of ways of assessing reading age, and as I’ve said many times, it’s alleged that every equation halves the readership. When I was using AbiWord just now, I decided to write these in a “pseudocode” manner, but now that I’m on the desktop PC with Gimp and the like, I no longer have that problem, although of course MathML exists. Does it work on WordPress though? No idea. Anyway, the list is:

  • Flesch-Kincaid – grade and score versions.
  • Gunning Fog
  • SMOG
  • Coleman-Liau
  • ARI – Automated Readability Index
  • Dale-Chall Readability Formula

Flesch-Kincaid comes in two varieties, one designed to rank readability on a scale of zero to one hundred. It works like this:

206.835 − 1.015 × (average sentence length) − 84.6 × (average syllables per word)

It interests me that there are constants in this and I wonder where they come from. It also seems that subordinate clauses don’t matter here and there’s no distinction between coordinating and subordinating conjunctions, which seems weird.

The grade version is:

0.39 × (average sentence length) + 11.8 × (average syllables per word) − 15.59

This has a cultural bias because of school grades in the US. I don’t know how this maps onto other systems, because children start school at different ages in different places and officially learn to read at different stages depending on the country. Some, but not all, of the others do the same.
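Since I threatened pseudocode earlier, here’s a rough sketch of both versions in Python. It’s only a sketch: the syllable counter just counts vowel groups, whereas proper tools use pronunciation dictionaries, so treat the numbers as approximate.

```python
import re

def count_syllables(word):
    # Very crude heuristic: count groups of vowels, with a small
    # correction for a silent final "e". Real tools use dictionaries.
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def text_stats(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return len(sentences), len(words), syllables

def flesch_reading_ease(text):
    n_sent, n_words, n_syll = text_stats(text)
    return 206.835 - 1.015 * (n_words / n_sent) - 84.6 * (n_syll / n_words)

def flesch_kincaid_grade(text):
    n_sent, n_words, n_syll = text_stats(text)
    return 0.39 * (n_words / n_sent) + 11.8 * (n_syll / n_words) - 15.59

sample = "The cat sat on the mat. Nevertheless, contemplating readability is complicated."
print(round(flesch_reading_ease(sample), 1))
print(round(flesch_kincaid_grade(sample), 1))
```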

Gunning Fog sounds like something you do to increase clarity and I wonder if that’s one reason it’s called that or whether there are two people out there called Gunning and Fog. It goes like this:

0.4 × ((words / sentences) + 100 × (complex words / words))

“Complex words” are those with more than two syllables. This is said to yield a number corresponding to the years of formal education, which makes me wonder about unschooling to be honest, but it’s less culturally bound than Flesch-Kincaid’s grade version.
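The same kind of sketch works here, taking “complex” to mean three or more syllables and using the same crude vowel-group count:

```python
import re

def gunning_fog(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    # "Complex" words: three or more vowel groups as a stand-in for syllables.
    complex_words = [w for w in words
                     if len(re.findall(r"[aeiouy]+", w.lower())) >= 3]
    return 0.4 * ((len(words) / len(sentences))
                  + 100 * (len(complex_words) / len(words)))

print(round(gunning_fog("The cat sat on the mat. Contemplating readability is complicated."), 1))
```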

SMOG rather entertainingly stands for “Simple Measure Of Gobbledygook”! Rather surprisingly for something described as simple, it includes a square root:
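As far as I can make out, the usual published form is:

1.043 × √(30 × (polysyllabic words ÷ sentences)) + 3.1291

where polysyllabic words are those of three or more syllables. A rough sketch, with the same caveat about crude syllable counting:

```python
import math
import re

def smog_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    # Polysyllabic words: three or more vowel groups as a stand-in for syllables.
    poly = sum(1 for w in words
               if len(re.findall(r"[aeiouy]+", w.lower())) >= 3)
    return 1.043 * math.sqrt(poly * (30 / len(sentences))) + 3.1291
```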

This is used in health communication, so it was presumably the measure that led to diabetes leaflets being rewritten for a nine-year-old’s level of literacy. I don’t know what you do if your passage is fewer than thirty sentences long, unless you just start repeating it. Again, it gives a grade level.

Coleman-Liau really is nice and simple:

0.0588 × L − 0.296 × S − 15.8

L is the mean number of letters per hundred words and S is the mean number of sentences per hundred words. This again yields a grade level, although it looks like it could be altered quite easily by changing the final term. It seems to have a similar problem to SMOG with short passages, although I suppose in both cases it might objectively just not be clear how easily read brief passages are.
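This one needs no syllable counting at all, only letters, words and sentences, so a sketch is short:

```python
import re

def coleman_liau(text):
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    letters = sum(len(re.findall(r"[A-Za-z]", w)) for w in words)
    L = letters / len(words) * 100         # letters per hundred words
    S = len(sentences) / len(words) * 100  # sentences per hundred words
    return 0.0588 * L - 0.296 * S - 15.8
```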

The ARI uses word and sentence length and gives rise to grade level again:

4.71 × (characters per word) + 0.5 × (words per sentence) − 21.43

Presumably it says “characters” because of things like hyphens, which would make hyphenation contribute to difficulty in reading. I’m not sure this is so.
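In the versions I’ve seen, “characters” is usually taken to mean letters and digits, which sidesteps the hyphen question; a sketch along those lines:

```python
import re

def automated_readability_index(text):
    words = re.findall(r"\S+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Count only letters and digits, so hyphens and other punctuation don't add difficulty.
    chars = sum(len(re.findall(r"[A-Za-z0-9]", w)) for w in words)
    return (4.71 * (chars / len(words))
            + 0.5 * (len(words) / len(sentences)) - 21.43)
```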

The final measure is the Dale-Chall Readability Formula, which again produces a grade level. It uses a list of three thousand words which fourth-grade American students generally understood in a survey, with any word not on that list being considered “difficult”:
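The published constants, as far as I can tell, are 0.1579 for the percentage of difficult words and 0.0496 for the average sentence length, with 3.6365 added if more than five percent of the words are difficult. A sketch, with a tiny stand-in word list because the real three-thousand-word list obviously has to be supplied:

```python
import re

# Stand-in for the published list of roughly three thousand familiar words.
FAMILIAR = {"the", "cat", "sat", "on", "a", "mat", "it", "was", "sunny", "day"}

def dale_chall(text, familiar=FAMILIAR):
    words = [w.lower() for w in re.findall(r"[A-Za-z']+", text)]
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    difficult = sum(1 for w in words if w not in familiar)
    pct_difficult = 100 * difficult / len(words)
    score = 0.1579 * pct_difficult + 0.0496 * (len(words) / len(sentences))
    if pct_difficult > 5:
        score += 3.6365  # adjustment in the published formula
    return score
```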

There are different ways to apply each of these and they’re designed for different purposes. I don’t know if there are European versions of these or how they vary for language. The final one, for example, takes familiarity into account as well as length.

When I’ve applied these to the usual content of my blog, the reading age usually comes out at university graduate level, which might seem high, but it leads me to wonder about rather a lot of things. For instance, something like sixty percent of young Britons today go to university, so producing text at that level, if accurate, could be expected to reach more than half the adult population. However, the average reading age in Britain is supposed to be between nine and eleven, some say nine, which explains why health leaflets need to be pitched at that level.

All that said, I do also wonder how nuanced this take is. I think, for example, that Scotland and England (I don’t know about Wales I’m afraid, sorry) have different attitudes towards learning and education, and that in England education is often frowned upon as making someone an outsider to a much greater extent than here, which would of course drag down the average reading age. That isn’t, however, reflected in the statistics, and the Scottish reading age is said to be the same as the British one. I want to emphasise very strongly here that I am not in any way trying to claim that literacy goes hand in hand with intelligence. I have issues with the very concept of intelligence, to be honest, but besides that, no, there is not a hereditary upper class of more intelligent people by any means. Send a working-class child to Eton and Oxbridge and they will come out the same way. I don’t know how to reconcile my perception with the statistics.

But I do also wonder about the nature of tertiary education in this respect. Different degree subjects involve different skills, varying amounts of time spent reading and different reading matter, and I’d be surprised if this led to a homogeneous increase in reading age. There’s a joke: “Yesterday I couldn’t even spell ‘engineer’. Today I are one”. Maybe a Swede? Seriously though, although that’s most unfair, it still seems to me that someone with an English degree can probably read more fluently than someone with a maths one, and the converse holds for, well, being good at maths! This seems to make sense. The 1979 book ‘Scientists Must Write’ by Robert Barrass tries to address the issue of impenetrability in scientific texts, and Albert Einstein once said, well, he is supposed to have said a lot of things he actually didn’t, so maybe he didn’t say this either, but the sentiment has been expressed that if you can’t explain something to a small child you don’t understand it yourself.

I should point out that I haven’t always been like this. I used to edit a newsletter for brevity, for example, and up until I started my Masters I used to express myself very clearly. I also once did an experiment, and I can’t remember how this opportunity arose, where I submitted an essay in plain English and then carefully re-wrote it using near-synonyms and longer sentences and ended up getting a better grade for the “enhanced” version, and it wasn’t an English essay where I might’ve gotten marks for vocabulary. On another occasion I was doing a chemistry exam (I may have mentioned this elsewhere) and there was a question on what an ion exchange column did, and I had no idea at the time, so I reworded the question in the answer as something like “an ion exchange column swaps charged atoms using a vertical cylindrical arrangement of material”, i.e. “an ion exchange column is an ion exchange column”, and got full marks for it without understanding anything at all. This later led me to consider the question of how much learning is really just about using language in a particular way.

So there is the question of whether a particular style of writing puts people off unnecessarily and is therefore a flaw in the writer which might be addressed. This is all true. Even so, I don’t think it would always be possible to express things that simply, and it’s also a bit sad to be forced to do so rather than delighting in the expressiveness of our language. Are all those words just going to sit around in the OED, never to be used again? But it can be taken too far. Jacques Lacan, for example, tried to make a virtue of writing in an obscurantist style, creating reading without understanding in order to mimic the experience of a psychoanalyst not grasping what a patient is saying, and was particularly concerned to avoid over-simplifying his concepts. Now I’ve just mentioned Lacan, and I don’t know who reading this will know about him. Nor do I know how I would find that out.

I’m not trying to do what he did. Primarily, I am trying to avoid talking down to people and to buck the trend I perceive of shrinking attention spans and growing tendencies to dumb things down and just not think clearly and hard. Maybe that isn’t happening. Perhaps it’s my time of life. Nonetheless, this is what I’m trying to do, for two reasons. One is that talking down to people is disrespectful. I’m not going to use short and simple words and sentence structures because to me that carries a subtext that my readers are “stupid”. The other is that people generally don’t benefit from avoiding thinking deeply about things and being poorly informed. It’s in order here to talk about the issue of “stupidity”. I actually have considerable doubt that the majority of people differ much in how easily they can learn across the board, for various reasons. One is that in intellectual terms, as opposed to practical ones, the kind of resistance found in the physical world doesn’t exist at all. This may of course reflect my dyspraxia, which also says something about which abilities are considered valuable. Another is that the idea of variation in general intelligence is just a little too convenient for sorting people into different jobs which are considered more or less valuable or of higher or lower status, and as I’ve doubtless said before, the ability to cope with boredom is a strength. I also think that the idea of a single scale of intelligence, which I know is a straw man but bear with me, has overtones of the great chain of being, i.e. the idea that there are superior and inferior species, with the inferior ones being of less value.

There are, though, two completely different takes on intelligence.

As I’ve said before, I try not to call people stupid, for two reasons. One is that if it’s used as an insult, it portrays learning disability as a character flaw, which it truly is not. It is equally erroneous to deify the learning disabled. It’s simply a fact about some people which should be taken into consideration. Other things could be said about it, but they may not be relevant to the matter in hand. The other is that the idea of stupidity implies an unchangeable quality of the person in question, and this is usually inaccurate. An allegedly stupid person usually has as much control over the depth and sophistication of their thinking as anyone else. Therefore, I call them “intellectually lazy” instead. For so many people, it’s actually a choice to be stupid. As noted earlier, there are whole sections of society where deep thought is frowned upon and marks one out as an outsider, and it’s difficult for most people to go against the grain. This is not, incidentally, a classist thing. It exists right from top to bottom in society. Peer pressure is a powerful stupefier.

There is another take on stupidity which sees it as a moral failing, i.e. as a choice with negative consequences both for others and for the “stupid” person themselves. This view was promoted prominently by the dissident pastor and theologian Dietrich Bonhoeffer in the early 1940s, after Hitler’s rise to power and in connection with it. The idea was later developed by others. This form of stupidity might need another name, and in fact when I say “intellectual laziness”, this may be what I mean. It could also go hand in hand with anti-intellectualism.

Malice, i.e. evil, is seen as less harmful than intellectual laziness because evil carries some sense of unease with it. In fact it makes me think of Friedrich Schiller’s play „Die Jungfrau von Orleans“ with its line „Mit der Dummheit kämpfen Götter selbst vergebens“ – “Against stupidity the gods themselves contend in vain”, part of a longer quote here:

Unsinn, du siegst und ich muß untergehn!
Mit der Dummheit kämpfen Götter selbst vergebens.
Erhabene Vernunft, lichthelle Tochter
Des göttlichen Hauptes, weise Gründerin
Des Weltgebäudes, Führerin der Sterne,
Wer bist du denn, wenn du dem tollen Roß
Des Aberwitzes an den Schweif gebunden,
Ohnmächtig rufend, mit dem Trunkenen
Dich sehend in den Abgrund stürzen mußt!
Verflucht sei, wer sein Leben an das Große
Und Würdge wendet und bedachte Plane
Mit weisem Geist entwirft! Dem Narrenkönig
Gehört die Welt–

Translated, this could read:

Folly, thou conquerest, and I must yield!
Against stupidity the very gods
Themselves contend in vain. Exalted reason,
Resplendent daughter of the head divine,
Wise foundress of the system of the world,
Guide of the stars, who art thou then if thou,
Bound to the tail of folly’s uncurbed steed,
Must, vainly shrieking with the drunken crowd,
Eyes open, plunge down headlong in the abyss.
Accursed, who striveth after noble ends,
And with deliberate wisdom forms his plans!
To the fool-king belongs the world.

Now I could of course simply have quoted the line in English, but as I’ve said, I don’t believe in talking down to people, and to my mind that would be a form of disrespect, so you get the full version. This is spoken by the general Talbot, who is dismayed that his carefully laid battle plans have been ruined by the behaviour of his men, who are gullible, panicking and superstitious, in spite of his experience and wisdom, which they ignore. I think the kind of “stupidity” Schiller had in mind was probably different, perhaps less voluntary, but it very much reflects the mood of these times.

Getting back to Bonhoeffer, he notes that intellectual laziness pushes aside, or simply doesn’t listen to, anything which contradicts one’s views, facts becoming inconsequential. It’s been said elsewhere that you can’t reason a person out of an opinion they didn’t reason themselves into in the first place. People who are generally quite willing to think diligently and carefully in other areas often refuse to do so in specific ones. People can of course be encouraged to be lazy in certain, or even all, areas, because it doesn’t benefit the powers that be for them to think things through, and this can occur through schooling and propaganda, and nowadays through the almighty algorithms of social media, or they may choose to take it on themselves. Evil can be fought, but not stupidity. Incidentally, I’m being a little lazy right now by writing “stupidity” rather than “intellectual laziness”. The power of certain political or religious movements depends on the stupidity of those who go along with them. This is also where thought-terminating clichés come in, because Bonhoeffer says that conversation with a person who has chosen to be stupid often doesn’t feel like talking to a person so much as eliciting slogans and stereotypical habits of thought from somewhere else. It isn’t coming from them, in a way, even if they think it is. Hence the word “sheeple”; and telling people to “do your own research”, which in fact often means “watch the same YouTube videos as I have”, is particularly ironic because it’s the people telling you to do that who are thinking less independently or originally than the people being told.

I’m thinking of Flat Earthers in particular right now, and I’m going to use them as an example because Flat Earthism is almost universally considered absurd and is less contentious than a more obviously political example. There are a small number of grifters who are just trying to make money out of the easily manipulated, a few sincere leaders and a host of “true believers” who are either gullible or motivated by other factors, such as wanting to be part of something bigger or having special beliefs hidden from τους πολλους, the many. I’m hesitant to venture into overtly political areas here because of their divisive nature, but I hope the example of Flat Earthers can be agreed to be incorrect, and almost deliberately and ostentatiously so.

He goes on to say that rather than reasoning changing people’s minds here, their liberation is the only way to defeat this. This external liberation can then lead to an internal liberation from that stupidity. These people are being used, and their opinions have become convenient instruments for those imagined to be in power.

This is roughly what Bonhoeffer’s letter said, and it can be found here if you want to read it without some other person trying to persuade you of what he said. In fact you should read it, because that’s what refusing to be stupid is about. Also, he writes much better than I do. That document continues with a more recent development, ‘The Five Laws Of Stupidity’, written in 1976 by the economic historian Carlo Cipolla. The word “stupidity” in his usage refers not to learning disability but to social irresponsibility.

I’ve recently been grudgingly impressed by the selfless cruelty of certain voters who have voted to disadvantage others with no benefit to themselves. A few years ago, when the Brexit campaign was happening, I was of course myself in favour of leaving the EU and expected it to do a lot of damage to the economy, which was one reason I wanted it to happen, but I would’ve preferred a third option where the “U”K both left the EU and opened all borders, abolishing all immigration restrictions. My own position was therefore somewhat similar to that of the others who voted for Brexit, but many people were sufficiently worried about immigration and its imagined consequences to vote for a situation which they were fully aware would result in their own financial loss. In a way this is admirable, and it illustrates the weird selflessness and altruism of their position, although obviously not towards immigrants. Cipolla’s target was this kind of stupidity: harm to both self and others, arising from a focus on the latter. This quality operates independently of anything else, including education, wealth, gender, ethnicity or nationality, and people tend to underestimate how common it is, according to Cipolla. This attitude is dangerous because it’s hard to empathise with, which is incidentally why I mentioned my urge to vote for Brexit. I voted to remain in the end, needless to say. Maliciousness can be understood and the reasoning conjectured, often quite accurately, but with intellectual laziness (I feel very uncomfortable calling it “stupidity”) the process of reasoning has been opted out of, or possibly replaced by someone else’s spurious argument. This makes such people unpredictable, and means that even when they attack someone they have no plan which benefits themselves. There may of course be people who do seek an advantage, but those are not the main people. Those are the manipulators: the grifters.

I take an attitude sometimes that a person with a certain hostility is more a force of nature than a person. This is of course not true, but it’s more that one can’t have a dialogue with them, do anything to break through their image of you and so on, so all you can do is appreciate they’re a threat and do what you can to de-escalate or preferably avoid them. This is a great pity because it means no discussion is likely to take place between you, and they’re not going to be persuaded otherwise. They may not even be aware of the threatening nature of their behaviour or views.

Cipolla thought that associating with stupid people at all was dangerous, but of course this feeds into what is nowadays the reality tunnel problem. That’s the term I’ve known it by, although nowadays it tends to be thought of in terms of echo chambers and bubbles. We surround ourselves, aided by algorithms, with people who agree with us, and this fragments society. Cipolla seems to be recommending exactly that, and with over half a century of hindsight we seem to have demonstrated to ourselves that that impulse shouldn’t be followed.

Casting my mind back, a similar motive may have been part of what led to my involvement in a high-control religious organisation. I have A-level RE. This in my case involved studying Dietrich Bonhoeffer, and the approach generally was quite progressive and liberal, including dialogue between faiths, higher criticism and the like. On reaching university, I found that the self-identifying Christians with whom I came into contact were far more fundamentalist and conservative, but because I regarded this as demotic, the faith of the common people as it were, I committed myself to that kind of faith. This is not stupidity in a general sense, as most of the people there could be considered conventionally intelligent, some of them pursuing doctorates for instance. However, they did restrict their critical faculties when it came to matters of faith, and in that respect I was, I think, emotionally harmed by these people, though I don’t blame them for it. This is the kind of selective and deliberate “stupidity” which is best avoided.

I’m aware that I’ve described all this rather unsympathetically and perhaps with a patronising tone. That is not my intention at all, and it may have more to do with the approach taken by the writers and thinkers I’ve drawn on here. I’ve also failed to mention James Joyce at all, and Jacques Lacan only in passing, which may be a bit of an omission. What I’m attempting to show is respect, and what I’m requesting from the reader is focus (and I have an ADHD diagnosis, remember), a long attention span and complex, nuanced thought. I’m not asking for agreement, but I would like those who disagree with me to have thought their positions through originally, self-critically and with respect for their opponents. I write the way I do because I know people are generally not stupid, and can choose not to be.

“They Wouldn’t Easily Let Themselves Become Greenlanders”

In the last post I mentioned the Sumerian sexagesimal system. Quite remarkably, the Sumerian language used base 60 to count. Although not all their number words survive, many of their names for numbers up to five dozen are simple. That is, they don’t have a smaller scale structure to their words like English, for example, has. We have a slight tendency towards vigesimal in the fact that the teens are named differently than the twenties, thirties and so forth, so we have secondary structure in our own numerical vocabulary. Sumerian also has this, but it doesn’t break up sexagesimal. The numbers 7 and 9 translate literally as “five-two” and “five-four”, but this is sporadic and doesn’t reflect a system, although it may have done so in prehistoric times.

It’s hard to imagine a widespread modern language which used that many basic numeral words. These traces suggest that the Sumerian system used to be smaller, but in some ways this complexity is typical of what might be called “primitive” languages. The trend in most languages is from complexity to simplicity, but this leads to a quandary.

If you assume the Whig conception of history, which is of general progress towards the current social order, you’re presented with a depressing view of the past if progress is synonymous with improvement. We can look back on a decline of overt racism, sexism, homophobia and other identity-based prejudice, better conditions for workers, more tolerance, increasing care for the vulnerable and the like, and will be confronted with the idea that the past was utterly appalling in all sorts of ways. This is not actually how things happened. For instance, whereas Georgian Britain had the slave trade, the death penalty for homosexual acts and a general contempt for the needs of the poor, it was also less puritanical than the Victorian era, and in some ways that made it a better place to live. This is a major oversimplification of course, but it definitely isn’t a case of a terrible past trending towards a good present across the board.

A similar illusion afflicts the conception of language change. A lot of the time it really feels like languages are all becoming simpler and easier with the passage of time. Taking English as an example, we now have fewer strong verbs, we use “have” rather than “be” as the auxiliary for all past participles used in the perfect tense, we don’t use “thou” any more and many of our consonants in clusters have become silent, as in “knight” and “know”. The same process seems to take place in almost any familiar, widely spoken language you can think of. Latin is generally much more complex than any of the modern Romance languages, North Indian languages and Romani are far simpler than Sanskrit, and Greek has become much simpler grammatically than it was in the Bronze Age. Sometimes this trend seems to be completely across the board, and it leads to a very odd apparent conclusion: that prehistoric languages were so complicated that it’s hard to imagine children being able to learn them at all.

We can only trace most languages back a little way into the Neolithic. Before that, the nature of the languages spoken is highly mysterious. The oldest traceable languages are probably the Afro-Asiatic ones, which may be descended from a parent language spoken around eighteen thousand years ago, in the Upper Palæolithic. Further back lies the Nostratic hypothesis, which attempts to link a large number of language families together, but this is not accepted by mainstream linguists. It is very tempting though, and it leads to a reconstructed language which looks very much like some Caucasian languages in form. It should be noted that the Caucasian languages do not form a single family, but they are nonetheless characterised by extremely complex grammar and many consonant clusters and types of consonant, sometimes with very few vowels. The extinct Ubykh, for example, had seven dozen and two consonants but only two vowels, a number of consonants exceeded only by click languages.

Types of languages can be classified in various ways. One is word order, so for instance English is SVO, Subject-Verb-Object, Hebrew, Arabic and the surviving Celtic languages are VSO and Latin, Sanskrit and Turkish are SOV. However, a more relevant way of addressing types of language is in the complexity of their grammar. Languages can be analytic, fusional, agglutinative or polysynthetic. English is very close to being analytic. Its words vary very little and it often expresses cases, tenses and other inflections with prepositions and auxiliary verbs, which are approaching particles as with “should of” instead of “should have” and “gotta” for “must” or “have to”. However, English is not completely analytic (also known as isolating), because for example it still has mutation plurals such as “teeth” and participles formed by adding suffixes. Mandarin Chinese is closer to this state, with even plural pronouns being expressed using a separate “word”, which is in fact a bound morpheme but is very regular, being the same for their words for “we”, “you” and “they”. Chinese tends to be thought of as a purely monosyllabic language but it’s also been stated that the mean number of syllables in Mandarin is “almost exactly two”. This is because of things like words for “insect” which can only be used together, the plural marker for pronouns being considered a separate word and the tendency to think of separate vowels as diphthongs. Nonetheless Mandarin and the other Chinese “dialects”, which are of course really separate languages, are particularly good examples of analytic languages.

Fusional languages have affixes and other changes in word form which tend to express more than one idea with a single change. Most Indo-European languages are fusional. English, for example, expresses both plurality and possession by adding an S on the end of nouns. It’s easier to illustrate this in other Indo-European languages. German “der” and “die” are fair examples of this. The former is used both for the feminine and plural genitive and the latter for the feminine nominative and accusative and the plural nominative and accusative. As well as being fairly characteristic of Indo-European languages, there’s also a tendency for non-Indo-European languages not to be fusional. The trend from fusional to analytic is evident in English in its current state, with relatively few but some strong verbs. Fusionality probably makes languages harder to learn as second languages whether or not one’s first language is fusional.

Somewhat similar to fusional languages are agglutinative ones. Turkish and Finnish are agglutinative, and so is Esperanto. Agglutinative languages have separate morphemes for each expressed idea whose forms change little when they’re added to words. Finnish is quite agglutinative but tends to weaken some of its suffixes, changing double consonants to single. Agglutinative languages may be able to express entire phrases in single words, as with the Finnish “tottelemattomuudestansa” – “because of your disobedience” (that may be misspelt). Agglutination and fusion are both features found in languages which are generally considered to be in the other class, so for example the Indo-European Armenian uses agglutination with nouns but not elsewhere and general word-building in English and many other fusional languages is also agglutinative, with something as simple as “everyday” being an example. Agglutinative languages can be seen as descended from analytical languages but tending to run their words together in a way which has become enshrined in grammar.

The final class of language is known as polysynthetic, and these are what I’m mainly going to talk about today. Polysynthetic languages have entire sentences as single words. There are other languages which are able to do this, and clearly one-word sentences exist in English, for example “go”, “yes” and “hello”, but in polysynthetic languages they are the norm. The title of this post, “they wouldn’t easily let themselves become Greenlanders”, is a single word in Iñupiaq, one of the languages of the Inuit. Incidentally, I’m not going into the politics of why they’re sometimes called “Esquimaux” here because it’s more complex than might at first appear. For reasons which might be connected to the sociolinguistic features of polysynthetic languages, most first-language speakers of English, Castilian, German and the like are more likely to quote references to such words than to be able to form them themselves, because these are not common second languages. The above phrase is from an edition of the Encyclopædia Britannica and its original Iñupiaq is lost to me. One that I can retrieve, by copying unfortunately, is the Mohawk example of “tkanuhstasrihsranuhwe’tsraaksahsrakaratattsrayeri” – “the praising of the evil of the liking of the finding of the house is right”. This fifty-letter word is said to be one of any number of Mohawk words of unlimited length, because Mohawk is a polysynthetic language. This, by the way, is why it cannot be true that the Inuit have many words for “snow”. It’s more that they have many words, some of which mention snow, but they could equally well be said to have lots of words for anteaters, animals for which the Arctic is not renowned. In fact I’m not sure they have a straightforward way of referring to anteaters, but I hope you take my point. And the problem here is that knowledge of polysynthetic languages outside their communities is usually sparse.

There’s some controversy as to what constitutes a polysynthetic language. One important aspect is polypersonal agreement in verbs. Swahili and other Bantu languages have this. The Swahili verb inflects for the object as well as the subject, so “nimekuosha” means “I washed you” and “sikukuosha” means “I didn’t wash you”, four or five words in English and a complete sentence with subject and object. However, what Swahili does not do is incorporate nouns into the verb phrase, and it’s probably this which makes a language truly polysynthetic. It’s easy to understand how it could happen. Just as Latin has “amo, amas, amat” and the like, where “-at” refers to “she/he/it/this/that” and probably a lot else, so could a language, instead of in a sense incorporating pronouns, actually use nouns as part of the verbal inflection, and that’s the point when it counts as a polysynthetic language. Incidentally, although I contrast “polysynthetic” with agglutinative and fusional here, using the last two to refer only to non-polysynthetic languages, polysynthetic languages can in fact be fusional or agglutinative themselves, and will actually be, or tend towards, one or the other.

Now, one of the surprising things about polysynthetic languages is that whereas there are globalised and industrialised nations with official languages of all sorts of typology, there are absolutely no countries at all with main official polysynthetic languages. Examples of the others are easy to find. Malaysia, Indonesia and China have isolating languages. European nations mainly have fusional languages. Finland, Turkey, Georgia and Hungary have agglutinative languages. Many of these nations have no indigenous polysynthetic languages in any case, but some have. There are in fact no polysynthetic languages at all which are widely spoken in terms of area or numbers, although there have been in the past. The exception to all this is Nahuatl, the language of the Aztec Empire, which currently has 1.5 million speakers. Apart from that, only Navajo and Cree are spoken by more than a hundred thousand people, and in fact “spoken” may be the operative word here because most polysynthetic languages have few literate speakers. It’s also notable that those three examples are all spoken in North (and Central) America, and at one point in the nineteenth Christian century it was thought that polysynthesis was a distinctive characteristic of American languages.

This last point might conceivably be why there are so few widely-spoken examples. If it really was a feature of the Americas, the genocide visited by White people on Native Americans could explain this distribution. There are also many Australian Aboriginal languages of this type, but again a similar process took place on that continent. Many Papuan languages are polysynthetic, but in this case it could be simply due to the wide variety of languages spoken in Papua. In Eurasia, most such languages are spoken in Eastern Siberia, and this includes Ainu, which is a special case and also a potentially informative case study. The only European examples are the Northwest Caucasian languages. They also seem to be absent from Afrika, at least insofar as Bantu languages are not considered under this heading due to not incorporating nouns. What is going on here? The situation often seems to be that marginalised, low-population indigenous peoples such as the Ainu, Iroquois, Inuit, Australian Aborigines and the peoples of the Amazon and Siberia tend to speak polysynthetic languages in small groups isolated from the rest of the world, and tend to be conquered by larger powers, particularly Westerners but not always: the Japanese, Chinese and Arabs have also done this. By contrast, all the powerful nations speak other types of language. Why?

Linguistic complexity is associated with small, isolated and stable communities with dense social networks, i.e. where everyone knows everyone else. The density of a social network can be measured by dividing the number of links between people in a community by the number of possible links. Where the result is high, the network is dense. Such groups are socially cohesive, stable and have few external contacts. Languages associated with these are relatively rarely learnt by outsiders, but before I go further on that, what exactly constitutes an outsider?
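As a toy illustration of that calculation:

```python
from itertools import combinations

def network_density(people, links):
    # links is a set of unordered pairs of people who know each other.
    pairs = list(combinations(people, 2))  # n(n-1)/2 possible links
    actual = sum(1 for pair in pairs if frozenset(pair) in links)
    return actual / len(pairs)

people = ["A", "B", "C", "D"]
links = {frozenset(p) for p in [("A", "B"), ("B", "C"), ("A", "C")]}
print(network_density(people, links))  # 3 of 6 possible links = 0.5
```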

There are three orders of social network involvement. The first order consists of people linked to each other. This is the core. The second order comprises people who are linked to the first order but not the central members. The third order is of people with no direct connection to the first order zone. This seems to contradict the idea of six degrees of separation, but in fact there could be an exponential growth in possible zones four to six.

As stated above, I’ve had to raid reference works to come up with examples of words from polysynthetic languages because I’m not familiar with even one of them. I’m conscious of the occasional word from Nahuatl used in English, and Inuit words like igloo, anorak, umiak and kayak, but no serious words from the languages themselves which aren’t used internationally. This is not just me. It’s because their speakers just don’t speak much to outsiders. In the case of the Inuit, this may be due to living in a very hostile environment in small groups, and the same applies in Siberia and the Amazon. Elsewhere, it isn’t as obvious what’s going on.

One suggestion is that second-language learners lead to a language becoming simpler for first-language learners too, because there are certain things they just can’t manage. If that’s so, it means that the use of second languages is more normal than it seems to be for most first-language English speakers today. It’s possible that intermarriage outside the community would lead to something a little like creolisation, only not to the extent of borrowed vocabulary or mixture of languages.

Social networks often have “hub” people, who link together people who don’t otherwise know each other. Soon after we met, Sarada and I became aware of a mutual acquaintance who is definitely one of these, and I think I’ve experienced the influence they have over language in their own way. A few years ago, I used to edit a newsletter and used the word “planet” to refer to Earth, in order to give a sense of unity and to vary my use of language. They found this usage peculiar, probably due to their world view, which separated the mundane from the celestial and reflected their negative view of science. It became very difficult to insist on retaining the word, even though it was used that way outside our community, for instance in associated pressure groups such as Greenpeace and Friends of the Earth and also in the Green Party. I think this was probably a minor example of how hub people, sometimes inadvertently, exert pressure to keep language use in a particular form, which can be both innovative and conservative, and I suspect similar forces are at play here.

The lack of literacy, and its possible imposition from without, may also be a factor. If a whole community doesn’t read or write at all, it may not divide utterances into words in the same way a literate one would. English has been strongly influenced by widespread literacy, which has changed the pronunciation of certain words to be more in line with their spelling, such as “again” and “often”. If foreigners come into a linguistic group and decide where the words are divided, they may make different decisions than the native speakers would. In the case of Nahuatl, the very nature of its writing was not close to that of other literate cultures. In fact, pre-Columbian Nahuatl has often been considered not to have had a writing system at all, in spite of the fact that its speakers had paper and books with pages. It used ideograms and was also partly based on wordplay (as “bee leaf” for “belief” would be in English), similar to how proto-writing worked in Bronze Age Mesopotamia. Aztec codices are reminiscent in some ways of graphic novels. In parts of Siberia, letters were written in pictures representing the situation, such as a crossed-out vertical spear to express not being able to see someone because they were beyond the horizon. In a culture using this kind of graphical communication, it seems possible that people didn’t particularly think of themselves as using words.

The same situation is likely to be familiar to a hearing parent using spoken language with a hearing child. Children’s early language use is unlike mature speech in various ways, including using phrases which they don’t analyse into words, and only later does this emerge. For instance, one of our children used to refer to an untidy scene as a “what a mess”, and to a ball as a “make a ball”, and a child of one of our friends used to say “hat on” and even “hat on on” for “hat”. Many English speakers will be aware of young children saying “wassat” as if it were one word. If the influence of the idea of short, separate words were entirely absent, it’s easy to imagine a whole culture continuing to do this. Another example is our daughter saying “Llater” for “see you later”, with a voiceless “Welsh” LL at the start. This can extend into adulthood even in English. One quoted example is “azeniayuenionya?” – “has any of you any on you?” as a request for loose tobacco. In a way, maybe it’s we who misunderstand the nature of spoken language, imagining we’re saying the latter rather than the former. Another is my text-speak-like “cuinabit” – “see you in a bit”.

Celtic languages and Sanskrit are both known for their tendency to merge spoken words into each other, such that the unit of speech for a speaker of those languages may not actually be the word so much as the phrase. It’s also been suggested that French is on its way to becoming a polysynthetic language. It contains clitics, which are word-like morphemes which depend on full-fledged utterances and therefore cannot occur on their own. There has long been a major discrepancy between the spelling and pronunciation of French, and it shares with the modern Celtic languages a connection between consecutive words in speech. Phrases such as «je te l’ai dit» and «je ne sais pas» come across differently in speech than in writing, and considered as words are “zheteledi” and “zhensepa” in an orthography I’ve just made up on the spur of the moment. Coming across French anew as if it were an unknown language, one might regard “di” as the stem of the verb, “-e-” as the sign of the perfect tense, “-te-” as a second person object prefix and “zhe-” as the first person subject prefix. “Zhensepa” also has the “n-pa” circumfix which indicates the negative, and there are others such as “n-zhame” for “never” and so forth. This makes French a much more exotic-seeming language than the boring old so-called “Standard Average European” paradigm into which it tends to get forced.

French has liaison between “words”, which links them together, as with «mes amis», where the only indication of the plural in speech is outside what we would think of as the noun. It also has obligatory elision. In fact, many of the structures of French, once one ignores the written language, are quite similar to those of Bantu languages such as Swahili. There is some overlap in the areas where French and Bantu languages are spoken, and it’s interesting to speculate how first-language Bantu speakers, such as those in the Congo, perceive their French when they learn it. It’s possible that it tends to be refracted unnecessarily through a lens of European-ness. Conversely, is there a way of looking at, say, Swahili which makes it seem more like written French? However, neither French nor Swahili has noun incorporation so far as I can tell, though it’s very difficult to view French without the filter of its written form.

The Ainu language is spoken in Northern Japan and previously in the Russian territory north of it called Sakhalin. It’s a moribund language which occupies an interesting position in linguistic typology. Ainu is completely unrelated to Japanese, and probably to any other language, but it does resemble other languages spoken in the area, such as Chukchi, in that it was previously polysynthetic. Ainu has gone from being polysynthetic to agglutinative. The yukar, Ainu sagas, are written in the former form which could be seen as the classical form of the language. Modern Ainu has similar syntax to Japanese, but it’s difficult to tell how strongly it was influenced by it because the two languages are both isolates and have been spoken in close proximity for centuries. It has only two native speakers now, although some Ainu have learnt it as a second language. Around three hundred people can understand it to some extent. This extreme endangerment means that it no longer occupies the usual position of a polysynthetic language of having an inner circle which doesn’t communicate much with the outside world or have much contact with other languages, and it means Ainu has been re-learnt by a lot of natively Japanese speakers. It probably goes without saying that like many other minority languages it’s been subject to persecution and attempts at eradication, and it was only recognised by the Japanese government in 2019.

After all that, the question arises of whether prehistoric languages, when everyone was a hunter-gatherer, would have been polysynthetic. The trend from complexity to simplicity would certainly seem to suggest so, but it also appears to be a cycle. If that’s so, it’s possible to imagine prehistoric languages going through such a cycle, so that at any one time there would’ve been speakers of polysynthetic, agglutinative, fusional and analytic languages, perhaps coming into contact with each other. However, polysynthetic languages are definitely most common among hunter-gatherers in Western recorded history. There does seem to be a kind of Turkish-like typology which crops up repeatedly in human spoken language, which suggests to me that left to ourselves we’d all end up speaking Turkish, though not literally – Quechua and Aymara are also similar in this way, for example.

Perhaps this question can be answered by looking at the kind of societies people lived in during the last Ice Age. The term “Ice Age” (see what I did there?) might suggest a lifestyle like that of the Inuit and the indigenous peoples of Siberia, but simply because people live that way in those conditions now doesn’t mean they did twenty thousand years ago. The question of behavioural modernity arises here, but ignoring that for the sake of not veering wildly off-topic, at the time we became a separate species, which oddly is much earlier than the time we stopped being able to interbreed with Neanderthals and Denisovans, there seem to have been between one and three hundred thousand of us. Early in the last Ice Age, a volcanic eruption seems to have caused a global famine which reduced our population to somewhere between one and ten thousand. By the end of the Ice Age it had grown to somewhere between one and ten million. These low numbers suggest to me that language change would’ve been slower, because, for example, the three hundred thousand people who speak Icelandic would still be able to understand and be understood by their ancestors of over a millennium ago, and if Australia is anything to go by, languages were spoken in extremely small groups – the people on the north side of Sydney Harbour used to speak a different language from the people on the south side. However, it may be misleading to compare the situations of hunter-gatherers in recent times to those of the Old Stone Age, because their cultures have existed for as long as ours and they are now living in areas which are harsher than, for example, the Mediterranean was a millennium or so after the last Ice Age. This is all assuming that people did live in small, isolated groups, when they may well not have done. Presumably there is archæological and palæontological evidence relevant to all this.

Finally, there is a rather depressing connotation to the phrase “they wouldn’t easily let themselves become Greenlanders”. Greenland, more properly known as Kalallit Nunaat, has by far the highest rate of people killing themselves in the world at eighty-three in a hundred thousand per annum. The next highest is Lesotho at seventy-two, followed by Guyana at 40.3 and Eswatini at 29.4. It’s the leading cause of death there among young men and eight percent of the population die by their own hands. Several factors are likely to be involved, such as insomnia caused by twenty-four hour daylight and perhaps also twenty-four hour darkness, and depression and alcoholism are also more common in the Arctic generally. However, a major contributory cause is likely to be culture shock between Inuit and Danish lifestyle. When you consider that the end of a lifestyle involving close-knit relationships and isolation from Western influence is likely to lead to a lot of stress, dysfunctional home environments and something resembling unemployment, it’s hardly surprising that these polysynthetic language speakers wouldn’t easily let themselves become Greenlanders. Maybe they shouldn’t have to.

Screen Reading

It’s commonly believed that there is a significant difference between how we approach ebooks and printed books. In fact, Sarada never reads books off any kind of screen for this reason, and I’m sure she’s not alone. Rather gratifyingly for us old codgers, it turns out that this is backed up by research, not only for us ancient ones but also for digital natives. At the same time, ebooks do seem to offer some pluses.

(c) Activision, 1986. Will be removed on request.

I’m not sure when the idea of an ebook first arose. Certainly the ‘Hitch-Hiker’s Guide To The Galaxy’ is a fictional ebook, although a dedicated one. However, there’s an older design called the Dynabook, dating from 1968, which was intended to look something like this:

An illustration of the Dynabook educational computer as envisioned by Alan C. Kay. Will be removed on request.

Alan Kay’s idea was basically a GUI-powered laptop, which went on to inspire Xerox’s computers in the 1970s, was later adopted by Apple when they produced the Lisa and Mac, and eventually became the tablet computers and laptops of today. The display here is supposed to be one megapixel and touch-sensitive, there’s a stylus and removable memory, and it’s aimed at children. Around the same time, Arthur C Clarke described a flat-screen newspaper reader in his 1968 novel ‘2001, A Space Odyssey’, which went on to be depicted in the film itself. Before this, so far as I can tell, the concept of the ebook was preceded by microfiche and microfilm readers; the basic idea for these dates from 1851 and was used for pigeon ring messages in the 1870s. Hence people were actually reading text on screens, projected from a sheet of photographic film, way back in the 1920s, when the Library of Congress photographed a large number of books in the British Library for archival purposes, before the advent of public television. To be picky, silent movies would also have involved reading text off screens, though only very brief passages, and there were also credits. There was also the “Readie”, an idea inspired by the “Talkie”, i.e. a movie with audio, which was:

A simple reading machine which I can carry or move around, attach to any old electric light plug and read hundred-thousand-word novels in 10 minutes if I want to, and I want to. [This machine would] allow readers to adjust the type size and avoid paper cuts.

Bob Brown, 1930.

This is about the same age as the oldest paperback books, and a couple of thoughts occur on reading it. One is that it actually predates mains power sockets and needed to be plugged into a light socket instead, and the other is that it’s reminiscent of Futurism with its un-ironic emphasis on speed. The perception was that cinema was outpacing the written word and therefore that there should be ereaders to give it a boost. It’s notable that there’s not a lot of contemplation or concentration implied by this, and it isn’t clear whether it’s to do with optimism about accelerating the human brain or something else. Like many other ideas of the time, there’s a disturbing air to it, at least for someone such as myself, born in the mid-twentieth century, because the speed seems to reflect a reaction against a slower, more meditative way of life. In a way, the attitude expressed here could even be seen as outdated, because this is from the age of Dada rather than Futurism. Futurism is so called because it’s supposed to outstrip even Modernism. It rejects tradition and the kind of comforts we’re used to, and is notoriously anti-feminist. This kind of idea is also akin to eugenics: that we don’t need many of the people who hold us back, such as the disabled and those who have learning difficulties, and by extension the idea that there should be an Aryan master race because it had achieved so much more than everyone else, supposèdly. It’s pretty scary, and if ideas are presented sufficiently fast, maybe people won’t think about them so much. This applies also to novels and poetry, because that kind of literature read at speed probably won’t work as well as a way of developing empathy and emotional wisdom, if at all. Brown’s idea is reminiscent of Speedwords, a system which attempted to compress information into less space and time, because he wanted to include special punctuation and abbreviations to accelerate reading speed further.

In the post-war era, the Spanish schoolteacher Ángela Ruiz Robles became concerned about her pupils having to lug heavy textbooks around all the time and invented a pneumatic-mechanical system called the Enciclopedia Mecánica. This came to include electric illumination and a magnifying glass, and it’s interesting to contrast the vast arena the men’s inventions seem to attempt to encompass with the far more practical and personal approach this woman took.

Videotex originated in the UK in the late 1960s. It turned into what we thought of as Teletext and Prestel, which when the specifications were first issued were in fact ahead of what could practically be achieved on a domestic terminal at the time. The screen was forty columns wide and, I think, twenty-five lines deep, and guidelines for composition included the stipulation that paragraphs should be no more than four lines long, which is equivalent to a hundred and sixty characters, similar to Twitter in the pre-Trump era (oh, those halcyon days!). Since there was a blank line between paragraphs, room was needed on the screen for graphics, and a character space had to be skipped every time a text effect such as a colour change or double height was used, this didn’t leave much room for information. Even so, it’s notable that the nature of the content had to be altered to make it more manageable. I haven’t attempted to read an entire book on a 40×25 text screen. This format was later adopted by the BBC Micro to enable it to access both teletext services and software downloads via the TV aerial. Later still, Tangerine developed the Oric-1, which was supposed to be a Spectrum killer, but rather than catching on as such it ended up in the unexpected position of forming the basis of a French machine called the Telestrat, oriented around online communication via the telephone system, thanks to its very teletext-like text screen.

At around the same time as the beginning of Videotex, in 1971, Michael Hart started Project Gutenberg by typing in the US Declaration of Independence, with the Constitution, the Bill of Rights and the Bible following. This was motivated by a desire to give back, having been given some storage space on a computer at his university. There’s so much in the commons, motivated by altruism, which built the more positive aspects of the world of today. These books would have been accessed via CRT terminals with a wider screen, but green on black and monochrome. It’s difficult to read from such a screen, leading to fatigue and eye strain, which means that people probably would’ve read only short passages at a time even though the number of characters per screen had reached about two thousand, translating to around five hundred words, by this point. The limitations of having to use an actual television tube would not have applied in that respect at least. But it’s still difficult to read off an old-style CRT. I used to find it made me irritable and I imagine it triggered seizures in some people. However, one big advantage of a CRT was that it was equally bright from any viewing angle, which was not the case with LCD flat screens, as they relied on polarised light and needed non-transparent electronics to support the pixels, making them appear like a grid rather than a smooth display.

I just want to mention one more prehistoric ebook system, this time from 1980, because it illustrates something significant about the physicality of the devices involved. This is the US Defense Department’s Personal Electronic Aid to Maintenance:

The US Department of Defense’s “Personal Electronic Aid to Maintenance” (image: Wikimedia Commons).

This is clunky and rather large, and it has a kind of physical presence to it which modern ebook readers lack. A physical book has substantial weight and size. Church Bibles and the Encyclopædia Britannica come to mind here, although the early editions of the latter were quite small per tome. They feel like Serious Business in a way a Kobo or a Kindle doesn’t. The above device was just for manuals for military equipment as far as I can tell, but considering the seriousness and weight military-grade stuff has, it has a similar kind of aura. If you took one of these out and read ‘War And Peace’ on it, it would seem strangely appropriate, although its capabilities are largely obscure to me.

The Rocketbook (copyright status unknown, illustrative purposes only, will be removed on request).

I wasn’t planning to turn this into a history of ebooks here, so I’m going to skip forward. In the late 1990s, it became feasible to produce paper-like displays which use reflected light, and this led to the first ereader as we would recognise one today, the Rocketbook, seen above. Something about the frame and the minimalism of the controls appeals to me here and makes it feel more like a book, or perhaps even a painting – a work of art. I can imagine it also had a fair amount of heft to it. I have to say also that from an ecological perspective the idea of electronic paper is very appealing because of the low power required to maintain it, and there are later ereaders which will even continue to display the final page viewed after being turned off. Practically zero power consumption, in other words. Today’s ereaders still have that sometimes, but like many other bits of kit today they tend to gravitate towards being mobile phones, which in this case means they’re really tablets. I have a Kindle Fire, and although I read ebooks on it, I think of it as a tablet. I also wish I wasn’t supporting Amazon, and I’ll go into that in a bit.

It’s significant that there is now a division between the “software”, in this case the content of the ebooks on the device, and the devices themselves. I only ever thought of the Guide as an “electronic book”, in the words of Ford Prefect. With “DON’T PANIC” written in large friendly letters on the cover, it was only ever going to be a display device for the stored content inside it, although this was somewhat modified as the series continued. There was no distinction between the book and its text, as it were. You couldn’t use it for anything else, except perhaps for eating your sandwiches off it. It was iconic enough to form the centrepiece of the entire epic adventure in time and space, as an integrated product. However, looking at the Infocom version (Infocom later became part of Activision) above, it can also be used as a calendar, clock, calculator, tan guide and “salad slasher”. I feel this takes it away from the original vision to some extent, but it’s clearly a satire on creeping featurism, at a time before most people seemed to be aware of the issue. In the original, it was the towel which was more useful, even being modded sometimes to increase its utility. The Guide is not a towel.

This raises an issue with ereaders used as apps rather than dedicated devices, and there’s a further issue that a whole library is nowadays stored on an ereader, or perhaps online. I use my Kindle as a radio, music player, TV set, calculator, star chart and all sorts of other things, and of course I also use the web browser and social media apps. It has the same kind of clutteredness to it as much of our experience via devices has these days, and that interferes with focus, attention span and concentration. I don’t think you even have to use the other apps for this to happen. You’re aware that they’re there, and that in principle you could close the book and “go” elsewhere at any time. The fact that I watch TV programmes and films, and perhaps even more YouTube with its own ephemeral tendencies and speed of delivery, through the very same display as I read ebooks is probably not conducive to taking them seriously. I currently have eight and a half dozen ebooks on the Kindle, and although I have read lots of them, I’ve also abandoned quite a few in mid-flow and gone on to buy more before continuing, although to be honest I’ve also tended to do that with paper books.

Research has compared physical and electronic books. Back in 1989 I myself considered writing a dissertation comparing word-processed, typed and handwritten documents, though it came to nothing – I abandoned it because the department required everything to be typed and preferred word-processed documents. As that suggests, this is nothing new. Before I go there, I want to ask you some questions. You have now read a little over two thousand words, a zagier and a third if we’re going to stick to duodecimal, on, I presume, a screen of some description. How do you feel about it? How is it different from reading it in a magazine or a book? What would it be like if this had been handwritten, on a scroll, in a codex (a spined book) or on sheets of paper? I once wrote an essay using till receipts. How would that be? And is what you’re doing now similar to reading an ebook?

Paper books have corners, thickness, the ability to have bookmarks shoved in them, pages, page numbers and so forth. They’re also bound, sometimes in boards or, unfortunately, leather, and have spines, and they can have appreciable weight. I don’t know, but I imagine that when paperbacks first came out they were not taken as seriously as hardbacks. Nonetheless we did adjust. However, it’s been found that even for people to whom electronic forms of communication and presenting text are familiar, ebooks don’t work as well as paper ones. It definitely isn’t just a generational thing. Ebooks are harder to navigate due to their lack of physicality, and their text can reflow easily. It may be a feature rather than a bug, but when I “flick” back to a previous page on the Kindle, I often find it’s been presented in a different format so that, say, a chapter which previously started at the top of a page now starts halfway down, and this takes me out of the immersive experience because it makes it ontic – the ebook reader becomes a tool whose encumbrance draws my attention negatively to its existence, like a wet or smeary pair of spectacles. It is genuinely harder to navigate around in an ebook because of this absence of physical cues. As you work your way through a codex, you’re gradually assembling a sketchy map of the book in a way which either can’t be done with an ebook or is dependent on different cues such as the scroll bar. It’s even been suggested that ereaders should open like a codex and have sides which inflate and deflate according to where you are in the text, which calls Robles to mind.

Humans didn’t evolve in situations where actual reading was necessary for survival, so the skills we use to do so are cobbled together from other abilities which were. Consequently, when we look at text we’re engaging with what we perceive as objects arranged in a particular way. Studies show that when reading cursive or ideographic script, we engage our motor cortices and subliminally imagine our hands writing out the characters concerned. Sarada would probably confirm that I unconsciously tend to write my thoughts in the air with my fingers at times. This is less true of printed text, but does highlight the physical element of reading. We mirror the act of reading with subliminal writing, probably because we physically engage with the text, and it’s harder to do so when it’s on a screen.

All of this brings to mind the possibility of entering into virtual reality to read an ebook. The popular astronomy program Celestia is generally a kind of graphics engine for three-dimensional exploration of the Cosmos, but it doesn’t only lend itself to that: even though it was designed for those purposes, it includes a secret add-on for diary-writing. This creates a codex containing text from a file which is hidden at the centre of the Sun. It would be interesting to engage with that book and compare it to reading the same text purely off a screen, sans accoutrements.

It’s been shown that students reading PDFs, which are more like physical books than ebooks are, tend to have more difficulty finding information, and particularly returning to it, within a text than they do with codices. It’s also common for people to print out PDFs if they want to read them in more depth, but there’s still a problem there if you then have a stack of single-sided paper to read. Not only does it seem wasteful, but it also gives it a kind of disposable quality which binding removes. There’s also no recto/verso arrangement to remember, which would help you find the right bits. Ereaders, so far as I know, don’t even allow you to print the text out, or to copy and paste it so that it could be, which would in any case be laborious and time-consuming.

The separation of device from content confers the same kind of disposable quality on a text as printing out a PDF. The ereader in front of you is not the book. It’s only pretending to be the book for the time being. This is why the Guide would be a better ebook than one stored in an ereader. Another approach, which I saw suggested in the series ‘The Mighty Micro’, broadcast in the early 1980s, envisaged the texts being stored on ROM cartridges like games or software, which could be slotted into the ereader and which carried the branding of the books themselves. This, I think, would’ve worked well, although it’s anachronistic in today’s online world. My tendency not to read through a whole ebook is probably partly due to this ephemeral nature and partly down to not constantly knowing where I am in the text and how much longer I need to persist. On the other hand, as David Lodge pointed out when contrasting cinema and codices, this makes it easier to surprise the reader and reflects our own lives, where we generally don’t know how long we’ve got left to live.

Reading a physical book also builds stronger associations externally than an ebook. I can remember the spine of C S Lewis’s ‘Voyage Of The Dawn Tr?der’ cracking and the pages falling out, and the fact that it had a typo on either the cover or the running heading, meaning that I still don’t know if it’s ‘Dawn Trader’ or ‘Dawn Treader’. I recall that the last page of ‘Alice In Wonderland’ was missing and I didn’t read it until years later, and that the coupon for ordering the LP of ‘Don’t Panic’ was at the back of the first Hitch-Hiker’s book, overlapping with the last two lines of the page and necessitating my mother writing those two lines at the top of that page when I cut it out and posted it. I can remember that A E van Vogt’s story ‘The Sound’ is spelt ‘The Soond’ in the running heading on the penultimate page of my copy. All of these help to make the stories more memorable to me. There’s also when and where I read something, and how I got hold of it. I recall the incessant stamping of library books I pored over as a child, and the location on the shelves of the mysterious third Alice book. I don’t think any of this carries through to the ebook experience. I can reorganise the order of books in my Kindle library with a single tap of my finger; reorganising a bookcase is a considerably more engaging and time-consuming activity. ‘A Woman In Your Own Right’ has a reflective cover so that if a woman picks it up, she becomes the cover illustration. Another book has glasspaper covers so that it can’t be put on the shelf without damaging its neighbours or the bookcase. Brian Stableford’s ‘A History Of The Third Millennium’ hardback edition has a hologram of either a sprig of acorns or an ammonite fossil on the front. None of these things are currently possible with ebooks.

But I’m not here to bury ebooks. I also want to praise them. As I’ve said, I do have a Kindle Fire, and although I have no Kobo I do have Kobo ebooks, which I read via a laptop app. I am both aware of and extremely bothered by Amazon’s ethical record, and I do generally avoid buying anything physical from them, but I do download Kindle ebooks from them. I realise this too swells their coffers, and I’m not offering any more excuse for that than the usual one that the system has to change, but using ebooks means they haven’t been transported and fewer physical resources have been consumed to produce them. Not none, and there’s the embodied energy in the device itself, and the various no doubt dodgy things Amazon does extend to their ebooks in one way or another, but then I myself have an ebook or two on Amazon, so I’m exploited by them too, to a very limited degree. I’m tempted to go off on one here about Amazon’s exceedingly dubious politics and ethics, particularly regarding workers’ rights, but instead I will resist the temptation and make this point: I do not believe in the system which has enabled any such organisation to reach such power and wealth, and there’s an aspect of boycotting, which I do very avidly, that is a kind of guilt-tripping distraction from the actual unethical practices of the company itself. However, it still feels like Amazon owns all the ebooks in my library, because it could cut me off at any point, and it’s notable that there doesn’t seem to be a Kobo app for the Kindle, which is probably the least surprising fact in history.

Ebooks appeal to minimalism. I’m currently sitting in a room which, like several others in this house, contains hundreds of books. They’re a fait accompli of course, and they have an intrinsic value absent from ebooks, but in order to achieve a degree of compactness in one’s life it would help if, in future, I acquired as many books as possible in electronic form. I would, though, like to have those books physically stored on a device. To an extent I’ve achieved this, because a while ago I took a large number of books which are out of copyright but widely available, such as ‘Gulliver’s Travels’ and the complete works of Shakespeare, to the charity shop, and I have them as text files, PDFs or HTML documents stored locally; on the other hand, I have also bought a number of life sciences textbooks in physical form recently. I also have some illustrated Kindle ebooks which I view by plugging the laptop into the 70 cm TV screen rather than viewing them on the laptop or tablet, and this helps.

What would help even more would be for the spirit that led Michael Hart to found Project Gutenberg still to be alive, well and dominant on the internet, so that we could actually own ebooks, perhaps paid for, rather than having them kind of lent to us by Amazon and others. That said, even in that situation, there are identifiable and persistent drawbacks to ebooks, still experienced by the current generation, which do not depend at all on unfamiliarity with technology.