Aquatic Apes

I’ve decided to try writing more spontaneously rather than delving a lot into sources of information like I have been recently. It’s good exercise for the memory and makes for a livelier style. Maybe it’ll also end up being less accurate, as I’m drawing on stuff from the 1980s here.

The other day someone posted a meme about humans being cute for various reasons. In general it was a good meme, but one probable inaccuracy jumped out at me. It was something like “although they’re not aquatic or amphibious, humans flock to be near water just for the pleasure of splashing about in it”. Fair enough as a meme I suppose, but probably wrong, because some people, notably Elaine Morgan and the manwatcher Desmond Morris, think we were once “aquatic apes”, as the phrase has it. That’s in quotes because the idea isn’t that we used to be like dolphins, living in the sea full-time, but that we were amphibious, living on beaches and in the sea, perhaps foraging in both and escaping from predators by wading into the water. It’s also suggested that the surviving species of elephant have a similar history. This is in contrast to the more usual savannah theory, which claims that we are descended from an ape which had to adapt to the veldt when the African rainforests dwindled as the world dried out. I’m going to talk about this bit too.

During the Miocene there were a huge number of different ape species. This has led to the human evolutionary “tree” being described as more like a bush, because some of them also show parallel evolution, becoming steadily more like hominins while in fact being our sister groups. The world was wetter at the time because the Tethys Ocean encircled the equator, allowing a warm current to flow all the way round, which meant there was no permanent ice in the Arctic and therefore more water available to the planet’s weather systems. This in turn meant larger rainforests. Then North and South America joined at the Isthmus of Panama, closing the seaway, so the warm current swirled round the Gulf of Mexico and headed north, where precipitation increased and snow and ice built up, increasing the planet’s reflectivity and cooling it in a vicious circle which also dried it.  Hence the rainforests shrank and some apes were forced onto the savannah, where according to Elaine Morgan they then died out, but according to other people they evolved into humans.  Morgan resolved this problem to her own satisfaction by suggesting that our ancestors survived by becoming amphibious and living on beaches and in the sea.

This is the evidence cited to support this claim:

  • We have a diving reflex.  If we are for some reason submerged, our hearts slow down.
  • We are largely hairless.  The body hair we have follows a streamlining pattern.
  • We have more breath control than other apes have.  Think of the hooting made by chimps.  They do that because they can’t control their respiration.
  • We have a hymen, which would have protected the reproductive system from sand before penis-in-vagina sex took place.
  • Penis-in-vagina sex usually occurs face to face, as in other aquatic mammals.
  • The female orgasm.  I can’t remember the argument for this.
  • Large amounts of adipose tissue in breasts, enabling them to float and suckle young more easily in water.
  • Long scalp hair to which babies can cling in water.
  • Downward-facing nostrils protecting us from accidentally inhaling water.
  • Bipedalism is easier in water and is adopted by other apes when wading, so it may first have evolved due to this lifestyle.

There may be other reasons, but those are the ones I can remember, and as I stated earlier, I’m trying to research less and type more spontaneously.  There are also a number of other observations which don’t pertain directly to the human body:

  • There’s a gap in the hominin fossil record of several million years.  I can’t remember where this gap is supposed to be or whether it’s still there, since Morgan’s ‘The Descent Of Woman’ was published in 1972 CE.
  • All Afrikan primates except humans have the code of a baboon-derived retrovirus written into their genomes.  No non-Afrikan primates do.  This suggests that our ancestors were, for whatever reason, isolated from other primates when this happened.
  • The oldest hominin remains, including tools, are found in Ethiopia and move south into the Rift Valley with time, suggesting that we spread from the Gulf of Aden southwards rather than from the Congo.
  • The first human stone tools are made from pebbles, suggesting that the technology arose first on beaches.

There’s also a side argument that succeeds or fails separately from the aquatic ape hypothesis, that elephants also had an amphibious phase in their evolution due to several features they have in common with humans but not mammoths.

One reason Morgan made this claim was that she believed palaeoanthropology focussed too much on male bodies and that if female bodies became the focus a number of traits would be easier to explain, namely the ones listed above.  Humans considered as female make much more sense as amphibious life forms than humans considered as male savannah-dwellers.  There is, in other words, a strong feminist motivation in her acceptance of this hypothesis, or conversely, a strong patriarchal motivation in the establishment’s rejection of it.  Now to me the interesting aspect of all this is not directly whether the hypothesis is well-corroborated but what it says about the scientific establishment and academic thought and research, particularly from a pro-feminist perspective.  It’s also interesting to contemplate how I perceive it.

The hypothesis is generally viewed as pseudoscientific and thoroughly refuted, but it’s recognised that it still surfaces from time to time, and there is some endorsement from celebrity science popularisers such as Desmond Morris and David Attenborough.  One issue with it is that because none of it seems to refer to bones and teeth, fossilised hominin remains are hard to assess on this basis.  It can be asserted that we have a hymen and breasts, approach hairlessness and so forth, but none of that has to do with the skeleton.  Against this, and this is just my opinion, is the fact that adaptations to bipedalism are reflected in bones and joints.  However, it is true that the fossil record is difficult to use to back this up, and this highlights a general problem with the reconstruction of vertebrates from most fossils:  soft parts are rarely preserved compared to hard parts.  This applies particularly to non-avian dinosaurs, which, being closely related to birds, might be expected to have had structures like wattles and combs, but it’s unlikely we’ll ever know unless we find alien video recordings of them or something.  Pebbles, on the other hand, are clearly preserved, and these are again hard “parts”, so the question is, does this hypothesis really depend only on soft parts?  It seems the evidence is not all soft after all.

I’m not a scientist.  I have a fair bit of scientific knowledge and am aware of the scientific method, but I’ve done little research of my own since I finished A-level biology.  Not none, because some of my herbalism-related CPD involved original quantitative research, but I’m not a palaeoanthropologist by any means.  Gutsick Gibbon, however, is, and it seems fair to bow to her superior knowledge and experience.  The issue, though, is with the source.  Elaine Morgan’s perspective on the issue was informed by her gender and her allegiance to feminism:  another of her books is ‘The Descent Of Woman’, which emphasises the increased explanatory power of a model of evolution which sets female bodies as the default rather than male.  There’s a strong emphasis on “Man The Hunter” in traditional palaeoanthropology, which portrays men as going out to hunt dangerous prey and bringing it home to the cave while women stay in it, do a bit of foraging and take care of the children, and which also holds that most of the nutritional value of the food eaten was in animals rather than plants.  Apparently, though, this is not reflected in hunter-gatherer societies as observed by Western anthropologists.  The trouble is that we tend to project our own ideas onto the past, and that hunter-gatherer societies today, rather than being remnants of the Stone Age, have just as long a history as Western civilisation and its predecessors.  The other aspect of this is that Morgan was probably surrounded by men in her profession and field, and therefore she and her opinions were likely to be at a disadvantage, which led to more people working to refute her hypothesis unsympathetically.  This is why I find Gutsick Gibbon’s rejection of it interesting, as she doesn’t seem to be motivated in such a way.  However, it may also be that she’s influenced by the general dismissal of the idea by her colleagues and mentors.  All of this brings up the question of how scientific theories change.

All of this is therefore about bowing to the opinions of experts who are fairly imagined not to be biased in unhelpful ways.  There’s a degree of trust in professionals there which may have been eroded in recent years, leading to various beliefs being accepted which would previously have been ignored.  To my mind, it goes hand in hand with a lack of deference, which is often a good thing.  For instance, nowadays there seems to be either more awareness of corruption in authority or more actual corruption, and where it’s detected accurately, this must surely be a good thing.  However, this stance of doubt may itself be dubious.  An opinion is not correct or worth considering in itself when set against other, more learnèd opinions.  Experience from outside the field may not be valid within it.  OFSTED comes to mind here.  Why should outsiders be listened to or taken seriously by educationalists and teachers with years or decades of experience?

Also, sometimes a particular characteristic can give rise to excessive sympathy.  For instance, there is a Black supremacist group which maintains, among other things, that melanin alone is the seat of consciousness and therefore that only Black people are conscious.  As a White person, I know this isn’t true.  They also believe that a Black scientist working thousands of years ago invented the White race through genetic manipulation.  There is certainly a sense of empowerment in these claims, but the first occupies a special epistemological position because White people actually know it is not the case.  Regarding the origin of fair skin, this has happened several times in hominin evolution, notably among the Neanderthals, but the most recent appearance is apparently among the Eastern Hunter-Gatherers of the future Russian steppes about ten thousand years BP (BP = before 1950).  Another, similar example is the spelling, which I’ve adopted, of Afrika with a K.  The reason given for this doesn’t seem to be very soundly based.  The claim is that the spelling “Africa” is entirely colonial and should therefore be rejected.  That said, I also have the impression that the K spelling is primarily promoted by Afrikan Americans rather than by actual Afrikans, and the K is also used in the Afrikaans spelling of the word, Afrikaans often being seen as a language of conquest.  Another big issue with this spelling is that Afrikan languages which don’t use Latin script would use neither C nor K, and in transliteration the former Roman province of Ifriqiya was written with the Arabic letter Qaf in mediaeval times (and of course the word “mediaeval” is Eurocentric in any case).  It is, however, spelt “Afrika” in Maltese and Cape Verdean Creole, and also in Swahili.  In Wolof, it’s actually spelt “Afrig”!
So the issue here seems to be that the K spelling, though it does exist in many Afrikan languages, may reflect a mistaken claim made by Afrikan Americans about the culture of an entire continent about which it’s impossible to generalise.  That mistaken claim may in turn arise from the people concerned lacking the opportunity or the information to recognise that it’s dubious, and therefore I’m still going to spell it with a K.  Maybe there’s something I don’t know, but the truth seems to be that the spelling varies, and does sometimes include a C in languages which the people concerned own emotionally and consider to be Afrikan languages, such as English, French and Portuguese, whereas the claim to the contrary seems to be pressed from outside the continent.  Maybe I’m wrong, and I’m very open to that possibility.

Elaine Morgan, who sadly died in 2013 CE, was a somewhat surprising person.  Her degree was in English and she was a TV scriptwriter, so she was an outsider with respect to palaeoanthropology.  However, the aquatic ape hypothesis was not originally hers, but was first promoted by the marine biologist Alister Hardy.  Of course, a marine biologist is not an anthropologist, but he was a life scientist.  Her motivation for adopting the hypothesis was, as I said, that the idea of “Man The Hunter” is androcentric but leaves a gap if it’s rejected, as it’s then necessary to explain the differences between humans and other apes.  It has also been claimed that she didn’t realise Hardy’s suggestion was a glib, off-the-cuff remark never intended to be taken seriously.  This is not so, and he actually wrote the foreword to the second edition of her book.

Most naked animals with subcutaneous fat are aquatic mammals.  This is the basis of Morgan’s claim.  Philip Tobias, co-discoverer of Homo habilis and a shaper of the savannah hypothesis, eventually came to reject the latter.  David Attenborough and Desmond Morris, whose earlier promotion of “Man The Hunter” had irked Morgan and persuaded her to think otherwise, both appear to support the hypothesis and her.  One startling claim of hers is that early hominins were already relatively hairless.  I’ve already mentioned that the idea that our ancestors were as hairy as chimps and gorillas may be mistaken, because the orangutan, the most conservative living great ape, is considerably less hairy than either, and it’s already established that gorillas’ and chimpanzees’ knuckle-walking evolved separately after they diverged from their common ancestors, so the same kind of convergent evolution could equally apply to hairiness.  Looking at it as a through-line from the common ancestors of orangutans and humans to humans ourselves, their predecessors, related to gibbons, would’ve been hairier, and their descendants may have gradually lost their hair until we reach today’s situation with humans.  This doesn’t mean, though, that hominins didn’t habitually enter the water, because that very lack of hair could’ve made it easier.  Inherited characteristics appear before they’re tested.  Moreover, our hair follows the lines of water currents across our bodies as if we were swimming forward in the water, with axillary and pubic hair, for example, in regions facing away from the flow, and also with tracks of lanugo or terminal hair in the same direction.

An example of the kind of criticism Morgan received was that her ideas were “thought up by a Welsh housewife”.  Not only is there nothing wrong with being either Welsh or a housewife, but that also fails to take into account that she was a scriptwriter for ‘Doctor Finlay’s Casebook’ and later ‘The Life And Times Of David Lloyd George’ and the TV adaptation of ‘Testament Of Youth’.  It might be a valid criticism of her writing that her degree was in English Literature rather than a science, but this wasn’t the focus.  Instead, her academic credentials and career success were ignored completely, she was apparently assumed to be primarily a homemaker, and her Celtic heritage was associated with ignorance and low intelligence, so the jibe was both racist and sexist.  Her response to this, perhaps typically for a woman of her time, was to point out that it was an eminent male Sassenach biologist, knighted for services to science and a Fellow of the Royal Society, who had previously proposed the idea.  In this case, though, Hardy’s ethnicity and gender didn’t protect him either, as his ideas were equally pooh-poohed by the scientific establishment.  It doesn’t mean he was right, of course, but it’s telling that the response to the same ideas being proposed by a Welsh woman focussed not on their validity or otherwise but on her identity.  All that said, it doesn’t mean she was right either, and her position doesn’t confer infallibility.  She could also be expected to have some kind of academic rigour, but the fact is that she was not a scientist.  Creative writing, however, does benefit from thorough research, and I’m guessing that her work on ‘Doctor Finlay’ increased her knowledge of human biology and of the process whereby diagnoses are made on the basis of evidence.  Perhaps another main issue with her is that she was to some extent an autodidact.

Here comes another bullet list:

  • The only mammals with descended larynxes are humans, a species of North American deer and several species of aquatic mammals.
  • The only mammals which are born covered in vernix are humans and harp seals.
  • Baby humans have five times as much fat proportionately as baby baboons.  When immersed in water they float face up due to the distribution of that fat.
  • Not only is our sense of smell weak because we’re apes, but it’s actually even weaker than that of other apes.  The only other mammals with such a poor sense of smell are aquatic, notably whales.  This is because breath control makes it less functional.  Sperm whales, for instance, regularly hold their breath for up to ninety minutes.
  • We sweat more than any other species of mammal.  On the arid savannah, this would be a major liability.
  • The brain needs high levels of both ω-3 and ω-6 fatty acids, which are most common in the marine food chain.  Just as a side note, although these fatty acids are generally used as an argument for eating fish, organ meat and wild animals, they’re plentiful in marine algae and are not made by the animal seafood sources themselves, so this is not an argument against being vegan.

Incidentally, it’s notable that the points about vernix and baby fat are likely to be more evident to people who have given birth than those who haven’t.

It was also recently found that the “savannah” sites where hominin fossils are found contain pollen from plants only found in forests, even including lianas, which grow only in very dense rainforest.  Hence the theory that humans evolved on the savannah, sweating profusely and becoming dehydrated, now seems to have been refuted:  humans did evolve in those places to some extent, but the areas which are savannah now don’t seem to have been savannah back then.  Although the savannah hypothesis seems to have been refuted, it hasn’t been replaced by the aquatic ape hypothesis.

Even so, a wide-ranging comparison of humans with aquatic mammals, even beavers and otters, shows little similarity.  Clearly human swimmers and divers have health problems arising from their activities, such as decompression sickness and “swimmers’ nodes”, bony growths in the external auditory meatus caused by repeated exposure of the ears to cold water.  Diving animals do get the bends, and there are even fossils of marine reptiles showing evidence of it, so the mere fact of decompression sickness may not be adequate evidence either way, but it isn’t at all clear why swimmers’ nodes would develop if we were adapted to immersing our ears regularly.

What I take away from all this is a feeling of uncertainty.  Although I can clearly see how Morgan’s ideas were rejected for ad hominem reasons, or at least that this was a factor in their rejection to a greater extent than with the ideas of others, there are clearly people out there with far more knowledge and experience in the field than I have who continue to reject them, presumably with good reason.  It helps that a famous female palaeoanthropologist rejects them too.  I wonder if this is connected with the wave of feminism each is associated with.  The fact that they’re also endorsed by respectable science popularisers with backgrounds in relevant fields also seems to back them up, but in saying that I seem to be committing the same fallacy I’ve just accused others of committing against her.  But one thing is for sure:  Morgan may be wrong, but the objections made to her were primarily sexist and to some extent racist, and we’re now left with no hypothesis at all regarding the circumstances of our evolution, which seems most unfortunate.

Pink

There’s a joke in Fit The Fifth of ‘The Hitch-Hiker’s Guide To The Galaxy’ where Ford and Zaphod are looking for a spaceship to twoc in the “car” park at Milliways, and there’s “one with the infrapink lizard emblem on the neutrino cowling”. Now I’m pretty sure I’ve already blogged about infrapink, but it was a while ago so before I get going on this one I’ll start with that. I realised recently that I’d missed a trick.

I will, then, start with the concept of infrapink. First of all, pink itself is not a spectral colour. It doesn’t occur in the rainbow, similarly to various other colours which don’t, such as brown and purple (as opposed to violet). It might be thought of as desaturated red, but it isn’t. This is desaturated red:

I would actually call that salmon pink, but that colour is less pink than other versions of the colour. Pink itself varies, but one of the things I think it is, is a mixture of red and violet. Violet is usually impossible to reproduce on standard devices because they’re oriented around either cyan, magenta, yellow and black or red, green and blue, so they usually replace violet with either magenta or red and blue, making it difficult to show. Baby pink is apparently FFB7CE and hot pink (which is the one at the top) FF69B4, whereas salmon pink is FF91A4. “Infrapink” is an example of a joke which works well on the radio, and it isn’t in the TV version. On fiddling with the concept of infrapink years ago, I concluded that it looked like teal, but wasn’t, which is a little surprising. This is because the “infra-” prefix pulls everything to the left, and since one way to think of pink is to see it as partly blue in the RGB colour space, this can mean that the red component gets yanked out of human visibility while the blue bit gets moved over to teal, cyan or aqua. However, there’s another, somewhat weirder, way to look at infrapink, which is that it’s just as much ultrapink! If pink is seen as a mixture of red and violet, perhaps desaturated, it looks something like this:

I’d say this is somewhat close to baby pink but slightly darker. Anyway, if it is that, perhaps infrapink isn’t so much off the red end of the spectrum as off both ends. It could be a mixture of infrared and ultraviolet. This would mean it’s for someone “whose eyes respond to different wavelengths”, as Trillian once said. This version of infrapink would be invisible to most humans.
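Since I keep treating pink as a mixture rather than a spectral colour, the hex values above can be played with directly. Here’s a minimal sketch in Python; the triple (138, 43, 226) for violet is an assumption on my part, being the CSS approximation of a colour sRGB can’t really show, and the helper names are my own:

```python
def mix(c1, c2, t=0.5):
    """Linearly blend two RGB triples; t=0 gives c1, t=1 gives c2."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))

def to_hex(rgb):
    """Render an RGB triple as an RRGGBB hex string."""
    return "".join(f"{v:02X}" for v in rgb)

red = (0xFF, 0x00, 0x00)
white = (0xFF, 0xFF, 0xFF)
violet = (138, 43, 226)  # assumed stand-in: sRGB can't reproduce spectral violet

print(to_hex(mix(red, white)))   # desaturated red: a salmon-ish tint
print(to_hex(mix(red, violet)))  # red pulled towards violet: a purplish pink
```

Blending red half-way to white gives FF8080, a salmon-ish desaturated red, while blending red half-way to the violet stand-in gives C41671, a purplish pink much closer to what I mean by pink itself.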

Now I’ve looked at the last two pinks, the first one looks less pink to me than the others. I’m guessing that this is because we mainly see colours by contrast with each other, and it’s possible to overthink over pink. Maybe ultrapink is overpink. Before I descend into clanging, I’d better abandon this line of so-called thought for the big thing which people always bring up regarding the colour pink in today’s Western culture, which is illustrated by means of a completely different colour from the nineteenth century CE, which this image isn’t:

Right, so let’s get this out of the way. The 1951 Disney animated Alice put her in a blue dress, but it isn’t clear she was originally supposed to have that, and some early colour illustrations show her in yellow. Surprisingly though, the corporation which notoriously shoved lemmings into water and presumably drowned them did actually get this more or less right, because that powder blue colour above was indeed considered a girls’ colour when Alice was written, whereas the much bolder pink was considered a masculine colour due to its brightness and assertiveness. I think everyone already knows this, but just in case they don’t, what apparently happened was that the Sears-Roebuck catalogue had a surfeit of pink items one year in the late 1920s and decided to market them to the mothers of girls as feminine, and from that point onwards it snowballed. Now we have the situation where pink things are more expensive than blue ones because of the pink tax, i.e. there’s an attempt to con women into buying dearer but otherwise identical items in pink, such as razors, and by extension presumably also other items marketed at women and men which have nothing to do with the colour. I once proposed on the demipatisserie that this could be unified by simply manufacturing a universal pink pigment sold at a premium by an undertaking with a monopoly on producing it, hence unifying all the ridiculousness. Having said all that, it isn’t true that a straightforward reversal took place, because the earlier association of particular colours with particular genders was never that specific or fixed.

Right, so we’ve done that now, haven’t we? I’ll move on.

The word “pink” is itself somewhat anomalous. This is a pink:

More precisely, this is a threepenny bit with a thrift plant on it, but thrift is in fact sea pink:

By Kernow Skies – The Photographer, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=150902384

This is in fact not the original pink either, but this is:

By 阿橋 HQ – 常夏石竹(地被石竹) Dianthus plumarius [上海植物園 Shanghai Botanical Garden], CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=91905964

From this flower, which is a Dianthus plumarius, the name’s origin becomes clearer, because it looks as if it’s been pinked, i.e. had its edges cut with a pair of pinking shears. The word “pink” seems to be from the Dutch “pink”, which was connected with notching or cutting to make that kind of serrated edge, although it also seems to be linked to kissing. It seems to denote a kind of dainty and delicate action, but then the question arises of whether I’m just imagining it that way due to its current association with stereotypical femininity. The psychology of the colour pink does not seem simple.

Many other European languages use a word cognate with our “rose” to refer to the colour, and this is entirely sensible because roses are often pink. They’re rose-coloured. Roses are herbs of Venus, i.e. they are about prominent non-male reproductive parts and the health and appearance of the skin, so the feminine association is preserved. However, any such feminine association with complexion is, I’m guessing, a White thing. It remains the case that roses are useful for the skin regardless of its tone. My own primary association with roses is the taste of rose lassi, which I’m hoping can easily be made vegan, as I haven’t had it in a while. The flavour and scent of roses is also associated in my mind with the colour, and I wouldn’t say that’s synaesthetic, as it comes from out there in the world, or rather from sense-data, rather than my own mind, if that makes sense. So those languages have “rose” or a cognate of it, but we have “pink”. Moreover, in many such languages the word for “rose” calls their cognate for “red” to mind.

There is something else which calls the Sears-Roebuck story into doubt: the pink triangle. The queer community has reclaimed the pink triangle symbol from the Nazis, but the use of pink in this setting, apparently to mean effeminacy, seems very early if the use of “pink for a girl” started only shortly before, so it’s worth looking at the dates of the two developments, although the meanings are not identical. The colour initially used for this purpose was actually green. I can’t pass over this casually, incidentally. Queers were torn to pieces by dogs in front of their lovers in the concentration camps and persecuted even by the other inmates, and unlike victims in other groups who were murdered for existing, they were not released after the War or pardoned, even posthumously. I just want to make that clear. It mustn’t be forgotten.

Nonetheless, this blog post is not about that, so in a suitably solemn manner I shall now move on, simply observing that I don’t understand how the connection was made, but noting that the green triangle was used previously. It seems either that the practice initiated by Sears had somehow filtered through to the Third Reich over a short period of time, or that it was coincidence, or that both had a more distant common origin. Or possibly the colour signifies something more general about stereotypical femininity or some such. I don’t know, and unfortunately the attempt to make the connection is too painful to pursue right now.

By 草花写真館 – Transferred from ja.wikipedia, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=6056884

There are also pink carnations (whereof this isn’t one), the carnation itself, Dianthus caryophyllus, being in the same genus as the pink. It’s long struck me that carnations look remarkably similar to cultivated roses, but unlike roses I don’t think they’re edible, although their scent is spicy, so maybe they are. Green carnations do symbolise homosexuality, so maybe that’s the source of the apparent coincidence.

But why is pink? More precisely, why are these pink things pink? I don’t understand the physics or chemistry of colour at all. I’ll set out what I do know, or think. In inorganic situations, some ions are coloured because their electrons absorb particular wavelengths of light as they change energy level, leaving the complementary colour to be reflected by the material. Then there are things called chromophores, which as I understand it are parts of molecules which do something to light in order to make the substance they help comprise look a certain colour. Beyond this, I absolutely do not understand how colour works. I suppose a tiny bit of extra information is in the form of pH changing the colour of certain substances, such as litmus, and structural colour, such as the blue of some irises, veins and the sky, along with the iridescence of the likes of bird feathers and almost all biological materials which are blue, but beyond that I do not get colour at all, not in that sense. So I look at pink and am aware that some plant pigments change colour according to pH, and wonder if that’s why pinks and some carnations are pink, but to be honest I don’t know whether that’s true. I’m also aware of the dye fuchsin, which is magenta and named after the colour of fuchsia petals, though whether the petals of fuchsias and rosebay willowherb owe their colour to a related pigment I don’t know. Another mystery to me is the colour of minerals. I simply don’t understand why amethyst is violet and rose quartz pink. I do know it’s due to impurities of metal in the crystal, but I have no idea why that would make the crystals in question colourful. I have a vague idea that the crystal lattice is warped out of shape and that this somehow changes its colour, but a much stronger idea that this is complete b0ll0x.
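The one part of the complementary-colour idea that I can pin down is the arithmetic: if a substance absorbs one band of light out of white, what reaches the eye is roughly the RGB complement of what was absorbed. A toy sketch of just that relationship, nothing more, with the naive 255-minus model being my own simplification rather than real spectroscopy:

```python
def reflected(absorbed):
    """Naive complement: the colour left over when `absorbed` is removed from white."""
    return tuple(255 - v for v in absorbed)

# A substance absorbing green (0, 255, 0) out of white light reflects magenta:
print(reflected((0, 255, 0)))  # (255, 0, 255)
```

It at least fits the familiar cases: chlorophyll absorbs strongly in the red and blue and leaves green, and magenta is sometimes called “minus green” in printing for exactly this reason.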
It’s a recalcitrant thing for me, like poetry, knitting or sign language: I get to a certain point, very early in the understanding, and I’m able to sign symmetrically for example, enjoy a limerick or suspect that knitting is a complicated form of interlocking knots, but then my brain just dies and I can’t go forward. I won’t get it any more after that. In particular, two things puzzle me: lake pigments and bleach. The former is slightly less puzzling but involves the difficulty of understanding how a particular substance can “take on” the colour of a dye without changing it. The latter is utterly confounding, as it seems to mean that no matter why something is a particular colour, a chemical change can take that colour away. How‽ What do these different kinds of colour have in common that can be altered by a single substance or process?

Rose quartz, then: a pink form of quartz which gets its colour from titanium, iron or manganese. There are other forms of rose quartz whose colour arises from aluminium or phosphate instead, and that kind fades after exposure to light. Leaving that last complication aside, titanium, in titanium dioxide, is notoriously and exceedingly white. Manganese compounds, though, are often pink, and manganese is interestingly present in emotional tears but not the tears of physical irritation, meaning that it can apparently stain contact lenses pink. Iron compounds are red or green in some circumstances. All of this is confusing because I don’t understand transition metals.

This is just going to carry on with me saying I don’t understand things if I’m not careful, so I’ll draw a line over this huge tangle of personal confusion and start talking about other stuff.

Pink is actually one of my favourite colours, and always has been. I have a distinct memory of reading a children’s book when I was about six in which a character dressed in bright pink appeared, then went off to climb a mountain, and from that point on I decided that pink was a particularly nice colour. I don’t have the emotional baggage some people have with it, though I do feel the same overdose effect as anyone when I see an aisle in a toy shop stuffed with pink toys aimed at girls. I haven’t seen the Barbie movie, but I understand that it involved an attempt to reclaim pinkness from negative feminine connotations, which probably sounds too serious a way to describe it. I don’t know if it’s possible to inhabit a mental world shorn of social connotations of this kind. I used to have a habit of doodling shapes which looked a little like swastikas, and whereas there’s a story to be told about how the Nazis ruined that symbol, the fact remains that they did, so I don’t do it any more. Likewise, it’s difficult to stare pink steadily in the eye without seeing it as representing a negative feminine stereotype. However, something like a fuchsia plant or rose quartz does not maliciously arise in the non-human environment just to enforce a gender stereotype, and the more pink is seen in other contexts, the less it’s likely to be perceived in this way. I personally find it a very nice colour, partly because it has overtones of softness and sensitivity to me and partly because it can be extremely bold and vivid. It’s also substantially a matter of personal taste, and people should not be judged by their favourite colours.

In fact, pink isn’t my favourite colour. That’s violet, and to a lesser extent purple. I would say, in fact, that pink and purple are quite similar, and this leads me to analyse the network of colours, which casts doubt upon the idea that “my red could be your blue”, as the thought experiment has it. Starting with black, white, grey and the spectrum of colours including indigo, the brightest colour is yellow, the darkest indigo and the closest to grey is blue. Hence there’s a range of colours across the spectrum beginning with the relatively dull red, brightening to yellow, dimming to indigo and then acquiring some redness again with violet, which I think happens because violet is getting on for twice the frequency of red. This also seems to be a closed system, since a colour wheel can be formed linking red to purple. Because the eye’s lens is slightly yellow and acts as a filter against ultraviolet, the retinal cells, although they respond to ultraviolet, rarely get the chance to do so, but it’s said that a cataract operation can lead to the ability to perceive such wavelengths. What that looks like, I have no idea. Pink on this “chart” would be a mixture of red and violet, slightly desaturated in some cases. In a way, pink is similar to brown in the sense that although it’s a perfectly viable colour, it has no analogue among the pure spectral colours, but the actual complementary colour to pink is more like olive, or sometimes a more saturated green. Brown’s complementary colour is blue, mainly because brown is “dark orange”, but there is another sense in which pink and brown are similar, in that they’re desaturated versions of other colours. This suggests another possible system of colours which arranges them as pale and dark. Olive and lime have a similar relationship, except by hue, and I’d say that olive and steel blue are to each other as brown and pink are. The point of looking at colours this way is to show that the idea that qualia can simply be swapped doesn’t hold up.
There are also other associations with the colours, such as the flavours and odours with rose and the possible femininity stereotypes with pink. Pink is also in opposition to blue in that last respect, though in a very contingent way, and the question of how contingent all of these relationships are also arises.
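The idea that pink and brown are lightened or darkened versions of other colours can be sketched in RGB arithmetic. This is a rough illustration only – the particular tuples and blend amounts are my own choices, not canonical definitions of either colour:

```python
# Sketch: pink as red lightened toward white, brown as orange darkened
# toward black. The RGB values are illustrative, not official definitions.

def lighten(rgb, amount):
    """Blend a colour toward white by `amount` (0 = unchanged, 1 = white)."""
    return tuple(round(c + (255 - c) * amount) for c in rgb)

def darken(rgb, amount):
    """Scale a colour toward black by `amount` (0 = unchanged, 1 = black)."""
    return tuple(round(c * (1 - amount)) for c in rgb)

red = (255, 0, 0)
orange = (255, 165, 0)

pink = lighten(red, 0.5)     # a light, desaturated red
brown = darken(orange, 0.5)  # a dark orange

print(pink)   # (255, 128, 128)
print(brown)  # (128, 82, 0)
```

The same two operations applied to olive’s hue would give something like lime going one way and a murky near-black going the other, which is the parallel drawn above.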

One interesting aspect of pinkness which may or may not be contingent in this way is what’s known as “Baker-Miller Pink”. The psychologist Alexander Schauss believed, on the basis of research, that different colours had hormonal, neural and cardiovascular effects, and this particular colour has been used to paint drunk tanks to calm people down. One question with this finding, however, is whether there’s a connection with gender stereotyping, and also whether such stereotyping would lead to dysphoria and acting out among men. When used in one jail, it correlated with an initial calming down followed by an increase above the usual rate of incidents. I also can’t help feeling this is pseudoscience, but maybe I’m wrong. After all, it makes perfect sense that red means danger because of blood and poisonous berries, green calm and lushness to some degree, yellow cheerfulness due to sunlight and so forth, so maybe there is something in it after all.

So, where does all this leave us with pink? It isn’t spectral, tends to be named after flowers and has fluctuated in its significance. It can be what you make it, but it will be perceived by others in particular ways, and it may not be possible to head that off. Marcus du Sautoy, the mathematician and science populariser, is known for wearing pink, and this is considered noteworthy simply because he’s a man. That shouldn’t be controversial – it hardly merits “judging by his outlandish attire, he’s some kind of free-thinking anarchist”. In general, one characteristic of pink is its uncategorisable position – it doesn’t belong in the spectrum and hasn’t stayed put historically within effeminacy or femininity. It can calm, agitate, attract or repel, represent the delicacy of a petal or the fierce passion of a protest. There is another sense in which my pink is not your pink. But what is your pink? What does it mean to you, and why?

The Other Pronouns

There’s been a lot of focus recently, including in my last post, on the question of pronouns, leading to peculiar responses such as people saying they “don’t have pronouns” or “don’t want to use pronouns”. This is weirdly ignorant, but possibly reflects too strong a focus on a particular aspect of pronouns. Because of the way English works, most of this focus is on the third person, and on singular versus plural number, because English pronouns only explicitly express what we think of as gender in the third person. It could be said that there is some gendering of other pronouns – for instance, I wouldn’t be surprised if married women are more likely than married men to say “we” when referring to themselves – but the fact remains that we don’t perceive this variation much.

Today, though, I want to focus on the other pronouns, both personal and otherwise, because they tend to be lost in the heat of battle but are nonetheless interesting. These other pronouns, though not gendered in English, often are in other languages, but their gender is not the main thing I want to mention, because, to quote myself, “it’s the least interesting thing about them”. Well, usually. I will actually start with that though, and with the personal pronouns.

The basic system many languages have of personal pronouns is that they have singular and plural each of the first, second and third person: I, we, thou, you, it and they. English is unusual in this respect in that it lacks “thou”, more or less. Many languages have distinct polite and informal versions of “you” and distinct singular and plural versions of “you”, which can overlap such that plural “you” is often also polite “you”.

Urgh. I said I wasn’t going to discuss gender, but I will, because it’s worth getting it out of the way. In English we’re used to the third person singular pronouns being kind of gendered, but actually not really. I’ll demonstrate. If for some reason we wanted to use “she”, “it” and “he” with an adjective, the adjective wouldn’t vary according to the pronoun. Attributive adjectives attached directly to pronouns are ungrammatical in English, but predicatives are very common, and in some Western European languages, such as German, predicatives don’t vary for gender either. So, we say “she is tall” and “he is short”, but in French we’d add an “-e” to the adjective for the feminine. This happens in English, so far as I can tell, only with hair colour: a woman is blonde or brunette but a man is blond or brunet, and to be honest if I ever see the word “brunet” written down I shall be very surprised indeed. It doesn’t even extend to other hair colours such as “white”, “red”, “auburn” or whatever. It’s also annoying because it defines women by hair colour. This is basically the only time we can be even remotely said to use grammatical gender. Oh, actually I’m wrong: it crops up in the fossilised phrase “lady chapel”, because that actually means the chapel belonging to The Lady, and is not “lady’s chapel”, so that’s a gender distinction. That’s it though, and in fact most people would probably perceive “brunette” and “brunet” as different words rather than the same word with a different ending.

One distinction we lack in English is gender in the first and second person pronouns. This occurs in Spanish, but there it almost feels like an afterthought, and doesn’t occur in all cases. In other languages, gendered first and second person pronouns generally seem to be fully-fledged, as simple and short as each other, as opposed to having adjectives appended to them as in Spanish. As I’ve said, I won’t dwell on this.

In Old English, and even into early Middle English, there were dual personal pronouns, to refer to two people: “wit”/”unc” and “git” (pronounced “yit”)/”inc”, the possessives being “uncer” and “incer”. Dual pronouns reappear in a different form in Tok Pisin, the Pidgin English of Papua New Guinea, and Bislama, spoken in Vanuatu, also has a trial form, but there’s a further complication with these so I won’t go into them yet. The Gothic dual personal pronouns were “wit” and “jut”. In modern Icelandic, although the forms of the dual pronouns survive, their sense doesn’t: they’re now plural, and I think the original plural forms are now the polite forms of the pronouns, but I might have misremembered. I personally think dual personal pronouns are very useful, and I still use them in my diary, as it often seems weird to use a plural when there are only two of something. Dual third person pronouns have not been found in any recorded Germanic language, or in Latin. In Gothic, the verbs also have a dual conjugation for the first and second person, but this has never been found in English.

Something I meant to say last time but didn’t is that the operation of plural nouns sometimes looks quite like gender in English. “Scissors”, “glasses” and “trousers”, for example, refer to singular objects but use the plural. It seems slightly odd that “bra” is singular, since it could be “brum”, with “bra” as a plural, if the etymology is ignored. These uses of the plural for singular objects involve “they”, and only seem to happen when there is a duality to the item in question, so it looks like a trace of the dual, although in fact it isn’t, because there are genuine traces of the dual in English, even in modern pronouns, which again I’ll come to.

English has separate personal and demonstrative third person pronouns. Some other languages combine the two. The demonstratives are “this”, “that”, “these” and “those”, and also correspond in English to the older words for “the” in the non-instrumental sense. Latin, Spanish and many other languages have three demonstratives, corresponding to the three persons, as in “hoc” (this by me), “iste” (that by you) and “ille” (that over there). English gets by with just two, “this” and “that”. Remarkably – I think anyway, although I’ve never come across anyone else mentioning it – Gothic has only one demonstrative in spite of generally making the finest distinctions of all Germanic languages: “þata”, which means both “this” and “that”. It’s the only language I’m aware of which has only one. Spanish does the same as Latin, with “esto”, “eso” and “aquello”, though as with some other Romance languages the referents have changed somewhat. There’s a language in Papua whose demonstrative pronouns refer to things like “towards the mountain”, “towards the lake” and so forth, and there are very many of them. I thought this was Alamblak, but apparently that just does the same as Spanish and Latin, although its counting is peculiar, being based on 1, 2, 5 and 20 multiplied and added in various ways.
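Counting based on 1, 2, 5 and 20, multiplied and added, can be illustrated with a small arithmetical sketch. To be clear, this shows only the arithmetic, not the actual Alamblak numeral grammar, and the greedy decomposition strategy is my own assumption about how such a system might combine its bases:

```python
# Sketch of a 1-2-5-20 counting system: decompose a number greedily into
# multiples of 20 and 5, then 2s and a final 1. Illustrative only - not
# the real Alamblak numeral grammar, just the arithmetic behind the idea.

def decompose(n):
    parts = []
    for base in (20, 5):
        count, n = divmod(n, base)
        parts.extend([base] * count)
    parts.extend([2] * (n // 2))  # remaining amount in twos
    if n % 2:
        parts.append(1)           # and a final one if odd
    return parts

print(decompose(27))  # [20, 5, 2]
print(decompose(14))  # [5, 5, 2, 2]
```

So twenty-seven would come out as something like “twenty, five and two”, which is roughly the flavour of number word such systems build.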

The real ‘Flowers For Algernon‘ personal pronouns for me are the inclusive and exclusive first person plurals. I found out about this distinction when I was about eleven, and ever since it’s felt like a niggling but major problem for the English language. Remarkably, the vast majority of Indo-European languages get along without the distinction. It’s very simple, although I should warn you, you can’t unsee it once you know: many languages distinguish between “we but not you” and “you and I”. Austronesian languages such as Indonesian make this distinction, as do Dravidian languages. Some of the former also have an extra dual number, meaning that there can be five first person pronouns as opposed to the English two. I honestly can’t understand why we haven’t got these. It has been noted, though, that Indo-European words for “we” fall into two categories: ones of the “we” form and ones of the “nous” form. This suggests that clusivity did once exist in Proto-Indo-European but didn’t survive even into the earliest forms of the separate branches. Under the influence of Dravidian, some Indian languages do have this distinction, although it isn’t expressed with those inherited forms. I would say Papuan languages have it too, but the thing about them is that there are hundreds of them and they vary a lot, so it’s possible to find in them many features which occur only sporadically elsewhere in the world.

This next bit is a bit mind-boggling in a peculiar way. It is technically possible for “you” to be inclusive or exclusive! This is interestingly difficult to think about. When one talks to someone, one says “thou” or “you”, but the “you”, being dual or plural, could refer to both or all the people one is addressing, or it could refer to the people present and also to people not present. For instance, one could talk to a couple together, or one could talk to one member of the couple about both of them when the other person isn’t present. This apparently never happens though, even though it’s possible, and it’s thought by some linguists that this category of personal pronoun is impossible for the human mind to conceive of sufficiently clearly for it to exist. This raises further questions as to the nature of language. One linguist, however, claims that the distinction is present in a critically endangered language of Vanuatu called Southeast Ambrym, spoken by about three thousand people. I don’t think it’s that one can’t conceive of it – to me it seems easier to get my head round than split ergativity. It’s more that someone who knows such a language would have to keep making the distinction, and as such it might inform issues around pronouns and gender identity, particularly xenopronouns.

Another confounding fact is that there are sometimes exclusive and inclusive words for “I”, kind of. Where there’s a regular way in which duals and plurals are related to singular pronouns, it’s possible for the two versions of “wit” and “we” to be extrapolated back to “I” and continue to give two forms. That probably isn’t very clear, so I’ll illustrate with fake English pronouns. Suppose we had exclusive and inclusive versions of “we”, such as “wee” and “nee”, and then “yee” for “you”, and the vowel changed in all of them to “oo” in the singular, so “yoo” could be the singular word for “you”. There could then be “woo” and “noo” as the exclusive and inclusive words for “I”. Samoan does this. Its plural exclusive “we” is “mātou” and the inclusive “tātou”, the duals are “mā‘ua” and “tā‘ua” and the singulars “a‘u” and “tā”, although some of these pronouns have variants. Interestingly, you might think that if one number were to collapse into another to simplify a language, the dual and the plural would merge, but in Samoan the dual and the singular merge instead: “tā” can mean “I” or “we two”. The exclusive “I” is the usual word, and the inclusive “I” indicates emotional involvement. In English we might signal that involvement with a word like “then”, as in “am I going to get one then?”, and translating that into Samoan would use the word “tā” and omit the “then”. In the closely related Tongan, it’s more connected with modesty, and is similar to how posh people use the word “one” in English for “I”.
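The way clusivity and number cross-cut in the Samoan first person can be laid out as a small lookup table. This is a simplified sketch: the diacritics are reduced to plain letters and variant forms are omitted:

```python
# Samoan first-person pronouns by clusivity and number (simplified sketch;
# macrons dropped, glottal stops written as apostrophes, variants omitted).
samoan_we = {
    ("exclusive", "singular"): "a'u",    # the plain, everyday "I"
    ("inclusive", "singular"): "ta",     # "I", marking emotional involvement
    ("exclusive", "dual"):     "ma'ua",  # "we two, not you"
    ("inclusive", "dual"):     "ta'ua",  # "you and I"
    ("exclusive", "plural"):   "matou",  # "we, not you"
    ("inclusive", "plural"):   "tatou",  # "we, including you"
}

print(samoan_we[("inclusive", "dual")])  # ta'ua
```

Laid out like this, the regularity is visible: the m-forms are exclusive and the t-forms inclusive, with the number marked by the rest of the word.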

There’s also a fourth person in some languages, known as the obviative. This is actually not only obviative but obvious, and I used to wonder why it doesn’t happen in English in particular. The obviative contrasts with the proximate, which is the only option in most European languages. It crosses over with the idea of topic prominence, and involves a distinction between more and less important items. For example, “did you put the food on the table?”, where the food is the focus, becomes “did you put it (PROX) on it (OBV)?”, but if the point were that it was a table as opposed to a kitchen counter, “table” would be the proximate and the food obviative. Presumably this matters less in languages with gender, but since English lacks gender it’s always seemed odd to me that we don’t have it. This feeling, unlike the clusivity issue, actually pre-dates any knowledge I had of languages other than English, rather like my daughter’s abortive use of numerical classifiers when she was a toddler, and it makes me wonder how often young children stumble upon features of language absent from their native ones and then reject them as their ability in their first language improves.

English has distinct reflexive pronouns ending in “-self” and “-selves”. These vary in the third person with dialect, such that “hisself” and “theirselves” are sometimes used in non-standard English, consistent with the other reflexives being built on possessives rather than objective forms. In many other languages these either don’t exist or the objective forms are used, and in some there is a dedicated pronoun for this purpose. In the Scandinavian languages this pronoun has become part of the verb and provides them with a mediopassive voice, replacing an older mediopassive found in Gothic. The fact that our own reflexive pronouns are so long means we’re unlikely to develop a way of expressing this through our verbs, although the potential used to exist.

The other pronouns also exist. In particular, the English distinction between “who” and “what” is peculiar for our language in that it’s similar to a common vs neuter distinction rather than there being three gender-like forms here. We tend to get confused about “whom”, and there seems to be an incipient reluctance to say “whose” rather than “of which” or something similar when the referent is inanimate. I said I was going to return to the dual number. In fact it does have some traces in English, and one of these is found in the word “whether”. Nowadays this is used as a conjunction, but it clearly looks a bit like “either” and a bit like “which”, “either” and “neither” being other traces of the dual. “Whether” was previously the dual version of “which”, which has since taken over its meaning. There are other remnants too, such as “both” alongside “all”, the slightly vague “couple” and less vague “pair”, and the more contentious “alternative”, which strictly can only refer to one of two rather than one of many. Trial pronouns exist in Bislama and Tok Pisin, and also in some Austronesian languages. Lihir also has a paucal number for small numbers of items above three. To me, paucal seems more obviously useful than either trial or just plain having a plural, but it’s rare in reality. Sursurunga was thought to have a quadral number, but in fact it simply starts to use a different kind of paucal at four and has a “lesser” paucal for two or three, so there seem to be no languages at all, at least right now, with a quadral number.

Vietnamese, I recently discovered through a relative, has a large number of personal pronouns which relate to status and familial relationship, making distinctions which English doesn’t even make with kinship terms. These are all in the second person. Like Japanese and Indonesian/Malay, Vietnamese has formal and informal versions of the first person singular pronoun. The Malay/Indonesian polite word for “I” originally meant “slave”.

There are no languages which don’t have at least two numbers for one or more pronouns. That’s a linguistic universal. However, there are many which manage without what most languages seem to consider vital, including English with its single word for “you”. German shows that it’s possible to get away with what in speech is a single word for polite “you”, “she” and “they” – “Sie” and “sie” – although it doesn’t actually do this most of the time. It probably manages because the verb is inflected differently for each. Spanish and Portuguese have both adopted noun phrases for polite second person pronouns, namely “Vuestra Merced” (since contracted to “usted”) and “a senhora”/”o senhor” respectively, and American Spanish rarely uses “vosotros”/”vosotras”. Mandarin Chinese substitutes the word for “humble” for the first person singular and “honour” for the second person in polite speech. This is paralleled in the considerably more elaborate Japanese system. In fact it’s been argued that Japanese doesn’t actually have pronouns as such. It’s a topic-prominent language which tends to drop pronouns, and nouns often start being used as pronouns which weren’t before. Several other East Asian languages are like this. Indonesian tends to use the suffix “-nya” or a demonstrative for “it” when “it” refers to something, and simply omits it in a phrase like “it’s raining”. There’s also the very common phenomenon of pro-drop, which occurs in many European languages whose verbs are sufficiently inflected for the person to be indicated by them. Outside Europe, for instance in Swahili, verbs can be inflected for object, and even indirect object, as well as subject, meaning that a single word such as “nitaiosha” means “I will wash it”. (In Japanese, incidentally, this same word means “similar company”; there’s a group of unrelated languages whose words can sometimes be identical, Finnish being another.) Pronoun dropping is foreign to all Germanic languages as far as I know, except possibly Gothic, although we do have pronoun avoidance. For instance, we sometimes consider “she” and “it” to be rude when referring to human adults.
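That Swahili verb can be pulled apart into its morphemes, which shows how a single inflected word carries subject, tense and object all at once (the glosses are the standard segmentation; the layout here is just my own illustration):

```python
# Swahili agglutinative verb structure: subject - tense - object - root.
# ni-ta-i-osha = I-FUTURE-it-wash, i.e. "I will wash it".
morphemes = [
    ("ni",   "I (subject prefix)"),
    ("ta",   "future tense marker"),
    ("i",    "it (object prefix)"),
    ("osha", "wash (verb root)"),
]

word = "".join(m for m, _ in morphemes)
print(word)  # nitaiosha
for m, gloss in morphemes:
    print(f"{m:>4}  {gloss}")
```

Swapping the prefixes swaps the meaning, which is exactly why the free-standing pronouns become optional.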

This brings up the issue of how to avoid pronouns for political reasons, i.e. anti-sexism. English has difficulty with this in regular speech and writing, although note form manages it. For instance, I might write “Went to the shop” in my diary, although only if the day was particularly boring or I wanted to indicate I was breaking the Sabbath or something. There’s also “Would Madam like some wine?”, or “Your Majesty”, “Your Grace”, “Your Worship” and so forth, although these last three include possessive adjectives similar in form to pronouns. The situation in Vietnamese, previously mentioned, seems to be that kinship terms are used instead of second person pronouns.

What’s the minimum number of personal pronouns a language could get away with? Although the answer is obviously “zero”, because it could just use nouns, or each pronoun could have a use as a noun – which happens, for example, with “Ich” in German – I think the sensible answer is probably two, similar to “this” and “that”: “this person”/”these people” and “that person or thing”, meaning “you” or “she”/”he”/”it”/”they”. The fact that Gothic doesn’t distinguish between “this” and “that” might indicate that only one is needed, but that language had plenty of other pronouns which might indicate how it coped with that odd deficit. At the other end of the scale, there could be singular, dual, trial, paucal and plural numbers, inclusive and exclusive “we” and “you”, a three-way demonstrative-style base for the third and fourth persons, and polite forms for all of them, in five genders per person: feminine, indefinite, neuter, common and masculine (or the virile, a gender for male persons used in Polish, in place of the plain masculine). This technically yields five hundred pronouns, although some of them might make no sense. It can in fact be taken a lot further than that, but five hundred might be enough.
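The arithmetic behind that maximal system can be spelt out. This assumes one particular reading of the categories above – ten person slots, five numbers, two politeness levels and five genders – which is the reading that makes the total come out at five hundred:

```python
# One reading of the maximal pronoun system sketched above.
numbers = 5          # singular, dual, trial, paucal, plural
clusive_persons = 4  # 1st and 2nd person, each inclusive and exclusive
distal_persons = 6   # 3rd and 4th person, each with a three-way
                     # (by me / by you / over there) base
politeness = 2       # plain and polite
genders = 5          # feminine, indefinite, neuter, common, masculine

total = numbers * (clusive_persons + distal_persons) * politeness * genders
print(total)  # 500
```

Adding a sixth gender, or a quadral number, would multiply the total up again, which is presumably what “taken a lot further” would look like.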

Grammatical And Other Gender

This post might look like it belongs on the other blog. The reason it doesn’t will, I hope, become clear as I go on, unless of course I indulge in my usual obscurantist verbosity. I hope I won’t.

Grammatically speaking, English is now a genderless language. This fact leads to confusion, because we do seem to have gendered third person singular personal pronouns, and don’t we know it? However, we lack a full grammatical gender system as found in other related European languages. I’m trying to do this post entirely from memory, and unlike the previous one, this one’s supposed to be true, though it may not be, because I may not be remembering things correctly. From memory, then, I seem to recall that Armenian lacks gender, which if true is probably because of interaction with the genderless Caucasian languages spoken nearby. Apart from that, Bengali, I’ve heard, has a gender system practically identical to that of English, i.e. gendered third person pronouns which refer mainly to women and men separately, plus a neuter pronoun which refers to everything else. Farsi, sometimes called “the English of Asia” due to its simple grammar, lacks gendered pronouns of any kind.

Confusion may result from the assertion that English lacks grammatical gender, so, in order not to distract you from the rest of what I’m going to say here, I’ll clarify that. In English, we talk about “she”, “it” and “he”, we have the indefinite-gender singular pronouns “they” and “one”, and genderless pronouns such as “this”, “that”, “these” and “those”. We also have a common vs neuter pair in “who” vs “what”. Other things are also going on. We have one apparent exception to using “it” for inanimate objects when we talk about ships and boats, which is sometimes extended in various ways, for instance to countries and other vehicles. However, we do not have grammatical gender, because a complete grammatical gender system extends way beyond pronouns. The simplest form of gender system in Western European languages which most British English speakers are familiar with is the French one, where nouns are feminine or masculine with no neuter, and this affects not only which pronouns are used with them but also articles and adjectives, including present and past participles. This is about the minimum which can be expected from a real grammatical gender system. English and the Anglo-Norman French it merged with both used to have actual grammatical gender, but it didn’t survive the merger. Anglo-Norman was almost entirely just feminine and masculine, with the lone neuter pronoun «ço» meaning “this”, whereas English had a three-gender system similar to German’s, with feminine, neuter and masculine. The confusion of referring to nouns under two different gender systems probably led to gender’s demise.

Gender systems can be a lot more pervasive than that, though. For instance, the second and first person pronouns can also be gendered. This happens in the Spanish familiar plural “vosotras” and “vosotros” for “you”, and even in the first person “nosotras”/”nosotros”. The Tocharian languages, I think, even have feminine and masculine first person pronouns. Arabic and Hebrew are part of the Afro-Asiatic language family, which regularly has a feminine and masculine two-gender grammatical system, and they have second person feminine and masculine pronouns. In some languages, gender also influences verbs, in the sense that there are separate feminine and masculine forms of finite verb conjugations rather than just of participles. This is about as far as it goes, I think.

Because we live in Europe and are surrounded by languages with gender systems, we tend to assume this is normal for foreign languages, but actually it isn’t. Grammatical gender does occur in hundreds of languages, including most Indo-European and Afro-Asiatic languages, but probably the majority of languages have no gender. The distribution is patchy. Dravidian languages in South India have gender. It’s also sporadically present in Australia, New Guinea (which is not surprising as there are six hundred languages there) and the Americas, with a few in Afrika south of the Sahara. However, the Afrikan gender systems are not related to sex.

Gender as we tend to come across it consists either of feminine and masculine, or feminine, neuter and masculine. The Celtic languages have the first system, as do Western Romance with a few exceptions in Italian, whereas German and the Slavic languages have the second, although Slavic is a bit more complicated than that. However, several European languages have a different kind of gender system which is more like our “who”/”what” system, apart from being grammatical rather than semantic. These are various other Germanic languages: Dutch, Danish, Swedish and some varieties of Norwegian. I don’t actually know if Frisian has this system, but as it’s the closest continental European language to English it would be interesting to find out. These systems have the common and neuter genders, and there is no feminine or masculine in grammatical terms. In Danish in particular this is very useful, because Danish is a mumbled language whose words can be hard to distinguish. Hence the Danish word for “frog” and the word for “seed” are the same, “frø”, but have common or neuter gender depending on which is meant, as does the word “øre”, which refers either to the currency unit or to the ear, again depending on gender. Danish suffers from lacking the tones used in Norwegian and most Swedish, which make it easier to distinguish between otherwise similar words, although it does use a glottal stop for a similar purpose. These gender systems, however, have nothing to do with sexes.

Within Europe, several languages completely lack gender. These include Turkish, Hungarian, Finnish, Sami and Basque. They may distinguish by pronouns between animate and inanimate objects, but this has no influence on the rest of the grammar. The Kartvelian languages of the Caucasus, which are also European although we tend not to remember them here in Britain, are likewise genderless, although some of their Northeast Caucasian neighbours do have noun class systems.

There is another phenomenon where gender distinctions are made between different classes of referent but all humans, or all animate objects, are of the same gender. These tend to be called “noun classes” rather than genders, but they are actually the same kind of system. Swahili has something like nine or ten noun classes. Unlike European genders, they form a very clear system, where for instance artifacts such as knives all have the same class, animate objects all share a class but groups of animates have a different one, and there is another class for abstractions. I had a surprising conversation with a Swahili speaker once who didn’t realise this was how Swahili worked, which presumably means either that it’s subconscious or that they had learnt separately, for each word, how it was pluralised, how it influenced the words around it and so forth. Swahili pronouns are not gendered, so it makes no difference to the one thing English speakers would expect it to.

Mandarin Chinese and Malay/Indonesian (the latter two being more or less the same language) have no gender as such, but they do have a system of counting which classifies the objects counted in a similar way. In the same way as we say “fifteen head of cattle”, Indonesian counts animals by tails, “ekor”, and both it and Mandarin have obligatory classifier words inserted between the numeral and the noun. Our daughter actually used to do this in English when she was first learning to speak in full sentences, referring to certain objects as “loaves” when she counted them but not others. Japanese in a sense has gendered first person singular pronouns, because when Japanese speakers do use their words for “I” and “me”, women tend to use different words than men. This is strange because there is no other gender at all in Japanese grammar.

Turning now to the history of grammatical gender in English, I will begin with prehistory. Proto-Indo-European itself was spoken before the invention of writing, but can be reconstructed. Hittite and the other Anatolian languages, first recorded in the Bronze Age, are a kind of sister group to all later recorded languages in the family and are therefore a kind of fossilised stage in the development of the original language which indicates the process whereby gender systems emerged. It basically works like this. In Hittite there are two genders, animate and inanimate. This seems quite logical to me, and is similar to the “common”/“neuter” system found in Scandinavian languages and Dutch. These did function as genders, changing the form of adjectives associated with them and being inflected differently in the language’s eight or nine cases, but they are only somewhat like the systems found in the likes of Sanskrit, Greek and Latin. What is thought to have happened is that there were three classes of referent approached differently in terms of grammar. There were most inanimate objects, and we would refer to each of them as “it”. Then there were two other classes of object, one consisting of agents and the other of attributes. Hence a spear might be in the agent class, because it does something, whereas a flower might be in the attributive class because it’s beautiful, i.e. mainly conceived of by an attribute. Since men were perceived as the “doers”, they were masculine, and because women were perceived as being rather than doing, e.g. being beautiful, they were feminine. It’s important to note, though, that it isn’t a case of a whole class of items being considered feminine in the human sense, or another whole class being considered masculine. It’s completely the other way round. There is a class of objects thought of as what they are, including women, and another class of objects thought of as what they do, and this includes men. Sex is secondary. 
Then there’s a third class which falls into neither category. That’s how it happened.

This was also only the initial situation. Words change their meaning over time, people assume words which rhyme have the same gender and so on. Most Latin names for trees are second-declension feminine nouns even though they end in “-us” like the second-declension masculine nouns. “Manus” is also feminine. Due to the similarity, in Italian and related languages, these nouns have now generally become masculine. There has also been confusion between neuter plurals ending in “-a” and first-declension feminine singular nouns, so for example the plural of the neuter “opus” is “opera”, but the latter is now a singular feminine noun.

Back to English though. Proto-Germanic had three genders: feminine, neuter and masculine. Originally, the words for “they” were also gendered. Gothic has “ijos”/“eis”/“ija” for “they” in the plural, with separate forms for feminine, masculine and neuter. Icelandic has the same distinction: “þær”, “þau”, “þeir”. Icelandic is important to English pronouns because our own third person plural pronoun, “they”, is inherited not from West Germanic but Old Norse, which is practically the same language. Old English, by the time it was written down at least, made no distinction between its plural words for “they”. The only survival from this word is today’s “’em” for “them”, which is colloquial, but the word was “heo”/“hie” in the nominative and accusative, “him”/“heom” in the dative and instrumental and “hira” in the genitive. This compares to the singular feminine “heo”/“hie” in nominative and accusative, “hire” in the genitive, dative and instrumental. The neuter and masculine are the same as each other in the dative and instrumental and in the genitive, i.e. “him” and “his”, but otherwise differ, being “hit” and “he” in the nominative and “hit” and “hine” in the accusative. This mixture of somewhat confused forms is typical of pronouns in West Germanic: it’s seen today in the German “sie” and “Sie” for “she”, “they” and the polite form of “you”, and in “ihr” and “Ihr” for the informal plural “you” in the nominative, “her” in the genitive and “their” and polite “your”, again in the genitive. I haven’t conversed in Old English as much as in German, but I’ve never encountered any problems with the fact that these are homonyms. Notably, the German word for “they” is often the same as the word for “you” and it doesn’t cause confusion.

The reason I’ve written this out rather than used a table is to emphasise the detail of what’s going on. In some circumstances, the word for “they” is the same as the one for “she” and in others it’s the same as the words for “it” and “he”. There is therefore, just as in German today, a lot of ambiguity here. As with German, this is often resolved by the form of the verb, which is plural when the pronoun is plural and singular when it is singular. In the meantime, “his” and “him” could be neuter or masculine. The ambiguity might, of course, be the reason we ended up with an Icelandic word for “they”, but it’s also interesting that we don’t say “him” or “‘im” even informally to replace “them”, but instead use a form derived from the Old English dative and instrumental, in both objective forms.

These, though, are just the pronouns, and in Old English there was a complete grammatical gender system. Each noun was feminine, neuter or masculine. Bear in mind that this was grammatical gender, not gender as the word is often used in English. The gender of a noun, as in many other languages, was determined by its ending, including endings which are themselves complete words. For instance, there’s a street in Canterbury called Burgate, from a feminine word for “city”, “burg”, and the neuter word for “gate”, “geat”. The whole word “burg-geat” is neuter, because the last word in the compound is neuter. Every noun that ends with “-a” is masculine, compared to the Latin tendency for such nouns to be feminine (but not always, e.g. “agricola”, “nauta”). There were various suffixes which conferred gender reliably such as “-dom”, as in “wisdom”, “-had”, as in “cildhad” (“childhood”) and “-scipe”, as in “freondscipe” (“friendship”), all masculine, and the feminine “-nes”, “-o”, “-ræde” and “-ung”: “rihtwisnes” – “righteousness”; “bieldo” – “boldness”; “hatræde” – “hatred” (that might be a spelling mistake on my part); “scotung” – “shooting”. One word for “man”, “mann”, is masculine, as is “wer”, also meaning “man” as in “werewolf”. However, the main word for “woman” is “wif”, i.e. “wife”, and this word is neuter. Moreover, the ancestor of our current “woman” was “wifmann”, which is masculine. “Cild” and “bearn”, both words for children, are neuter, as is “mægden” (“maiden”). Hence if you started a sentence talking about a “wifmann”, you would then refer to that person as “he” throughout it. Similar things happen today in Gàidhlig incidentally, so it still happens in Britain. I am personally accustomed to saying “it” when referring to a child or baby. The takeaway from this is that the whole time Old English was spoken there was only a loose anchor between pronouns and what we might call biological sex, or gender in the social sense, and likewise between grammatical and social gender.

Going back to the Old English pronouns, it’s notable that the forms of the singular third person have changed a fair bit and the plural has practically disappeared. The situation is complicated by the fact that the dominant dialect of English over most of the period before the Norman Conquest was West Saxon, spoken in Wessex, whereas the English in which this is written is descended from the Mercian dialect, spoken in the English Midlands. In particular, this means that the diphthongs written as “eo” and “ea” are not the direct ancestors of any modern English sounds. Hence the “heo” and “heom” above might be said not to survive, although related words in other dialects did. The distinctions between the likes of final A, E and U were lost, replaced by the schwa “murmur” vowel. This meant in turn that any grammatical distinction made through these vowels was also lost, and since these, among other things, were responsible for gender distinctions, these went too. Hence although different pronouns might’ve been used to refer to different nouns, they weren’t accompanied by as complex a gender system as previously. All that was left was an occasional “-e” appended to adjectives in the plural and after “þe”, as in “þe olde worlde”. Gender would still have shown up with the various demonstrative pronouns, although even here they were more inflected in the South. This left the personal pronouns, which were also changing.

Of course “he” still existed, as it does today. “It” emerged due to a dropped H. “Hie” survived at least into Chaucer’s time as “hi” and the genitive “hir”. “Heo”, the former feminine pronoun, turned into a confusing plethora of different words, including “he” and “ho”, and “hi” was accompanied by “he” as well as the new Scandinavian forms. In other words, it became entirely feasible for the word “he” to mean what we mean by “she” or “they”. This is the origin of generic “he”. There was pressure to resolve this situation, which was achieved by using the Scandinavian pronoun in the plural most of the time and by adapting the feminine “that”, which was “seo”, except of course that that diphthong had been lost, leading finally to “she”. So there was an apparent felt need to use a feminine pronoun at this point. Interestingly, and probably coincidentally, everyone was basically wearing dresses at this point too, the only gender difference being the heights of the waists. It’s hard to imagine how this situation would have arisen if the Bible had been interpreted the way it often is today, but that’s another conversation. “She” prevailed in about 1300, but the use of generic “he” persisted all the way into my lifetime and is even sometimes the source of contention today in 2023. The use of generic “he” is a good illustration of how the history of a word may not have any bearing on how it’s taken today. Nowadays, using generic “he” just is sexist, regardless of its history of originally including the feminine. I may be very attached to the history of language but I still recognise that it has its place. It does, for example, illustrate that the ability to refer to social gender at that point was considered vital, and this is not trivial as there were languages in similar situations where it wasn’t, so it says something about English society that this happened.

A more distinct animate third person singular pronoun was “ha”. This was also gender-neutral, but resembles none of “she”, “he” or “they” (“hi”) very closely. It persisted into the last century in some West Country speech and was adopted by Ursula Le Guin for the screenplay of her novel ‘The Left Hand Of Darkness’. It’s used sparingly today as a neutral pronoun, but is probably too exotic to catch on. However, it does date back to the Middle Ages.

Now for the question of apparently plural pronouns being used in the singular.

This occurs, for example, in Urdu, where married women tend to say “ham” (we) rather than “main” (I) because they’re used to referring to their whole household, and there’s the “editorial ‘we'” and the “royal ‘we'”. The former is when someone acts as a spokesperson and the latter is used by royalty in official proclamations. It also appears to crop up in the Bible but this may be a disguise for the possibility of these passages being polytheistic.

English is unusual in not using the singular “thou”, and in this respect it follows Anglo-Norman, which stopped using «tu» in favour of «vos», so I wonder if there’s a connection. The former English use of “thou” and “you” followed the French usage of «tu» and «vous», in that the plural form served both as a true plural and as a formal singular, whereas “thou” was informal and only ever singular (the plural itself was originally split into “ye” in the nominative and “you” in the oblique cases). They’re used slightly differently in the King James Version of the Bible, simply to translate the singular and plural pronouns, making it look like “thou” is formal due to God being referred to in that way. This may have led to the erosion of the distinction in English, but I’m just guessing. “Thou” is clearly in common use in 1611 CE, and also in the works of Shakespeare up to 1613, which doesn’t get us (“me”?) much further. What seems to have happened is that during the seventeenth century, using “you” was thought polite and people were expected to be polite all the time. “Y’all” and “youse” are clearly attempts to address the resulting loss of a plural, as is “you lot”, although that is definitely not polite. Quakers were well known to thou well past the time when standard English stopped, although they don’t do it any more and so it’s now only used dialectally. It’s also true that you’d probably have to know someone pretty well before you called them “thee”.

Bearing in mind the time scale, this finally brings me to “they”, which is enjoying a moment. “They” hadn’t planted itself in our language at the time of Chaucer, or at least not in the London dialect thereof. Being a Scandinavian word, it unsurprisingly moved from North to South. It’s used once in the text ‘Genesis And Exodus’, written in 1250 in Norfolk. Before that, the ‘Ormulum’, written in the previous century in the East Midlands, uses it, but since this area, where I happen to be sitting right now, had fairly recently been Danish, it’s not surprising it was present. The language of the Ormulum is idiosyncratic, using a spelling system seen nowhere else, and it’s also quite badly written. Also in ‘Genesis And Exodus’, “it” is used as a plural, and the word “his”, presumably because it has an S at the end, is too. All of this was going on while the English language was eclipsed by Norman French, so it was kind of decaying at the time. Chaucer does sometimes use “they” but not “their” or “them”. He died in 1400. There are also the reflexive pronouns, which nowadays are the likes of “herself”, “myself” and so on. Back then, they were used interchangeably with object pronouns, and the plural “-selves” forms didn’t exist. Therefore, “themself” crops up quite early, though not necessarily with a singular meaning.

The plural “they” appeared about a century before singular “they”. Grammatical gender was lost during the thirteenth century, just as the word “they” was being adopted into English. This could benefit from some grammatical context. American English uses generic “he” after “one”, as in “one does what he pleases”, whereas British English doesn’t: “one does what one pleases”. Singular “they” has tended to crop up in rather similar circumstances: “Had the Doctor been contented to take my dining tables as any body in their senses would have done …” – Jane Austen, ‘Mansfield Park’, 1814. The use of the English language before the eighteenth century was not strongly prescribed, but from that time on, such uses of “they” tended to be frowned upon on the grounds that it came across as plural, and “he or she” on the grounds that it was clumsy. Three important points come to mind here. Firstly, singular “they” is actually three centuries older than singular “you”. Secondly, the use of generic “he” rather than “they” seems to be more American than British. Now I’m quite a fan of American English. For instance, I like the precision of “gotten” as a past participle better than “got”. Thirdly, Jane Austen, whose language is quite classical, focussed on sentence structure and balanced, used it. It is of course possible that she was deliberately avoiding generic “he” because she mainly writes about female characters.

It is entirely standard usage to say something like “when the interview candidate comes in, make sure they have the right chair”, and I think in general that would go without comment or even being consciously noticed by most people. That usage occurs before meeting or otherwise encountering the person in question, and it may be the other singular usage, which seems to be newer, that bothers people. However, although I do think there are problems with singular “they” which I’m going to outline later, I do think there’s a new usage, which is difficult to define. It’s along the lines of assumed gender. Singular “they” has more often been used in advance, when someone’s gender is not yet known to be feminine or masculine. It’s now also used in arrears in this situation: that is, people now either claim it as their reported pronoun, or it could be used because appearances are sometimes deceptive. If a fool’s tiger and a real tiger were to have names with different grammatical genders, it might be advisable to use singular “they” if one didn’t know what one was seeing. It is quite possibly a question of evidence for gender: what was previously considered a sufficient condition for being gendered in a particular way no longer is, and only honest reporting of one’s pronoun can settle it. That’s probably the idea anyway.

It may be partly a question of semantic drift. I’m fond of saying that the words “silly”, “nice” and “gay” have changed their meanings dramatically, and in fact they overlap each other sometimes. It used to be common for people to object to the use of the word “gay” to mean “male homosexual”, but it sometimes seemed that they weren’t really deprived of a word and they could’ve said “gaudy”, “merry”, “happy” or many other words in its place. However, I don’t think this is all.

The English language is not very inflected. The present tense of regular verbs has only one variant, and sometimes not even that, namely “-s” for the third person singular. Hence if one does use singular “they”, maybe one could say “they is” or “they was”, but this is grating; keeping plural agreement, on the other hand, can lose the number information the verb would otherwise carry. Other approaches can be taken though, such as wording things in the passive or rephrasing so that a genuinely plural “they” fits.

In conclusion then, grammatical gender is not gender, although it has been said that, for example, speakers of a language whose word for “bridge” is feminine tend to use adjectives such as “elegant” to describe it, whereas speakers of a language whose word is masculine tend to describe bridges as “sturdy” and so forth, so it does seem to have a psychological effect. Singular “they” is older than singular “you”, and it may lose information to use it, but we’ve survived singular “you”. I actually prefer “it” because it’s a great leveller, but I’m also sure it’s very unpopular.

The Platinum Jubilee

Well, it was either that or a portrait of the Queen wasn’t it?

You probably know, because I’ve said it on here before, that I’m kind of technically republican but really don’t feel that strongly about it. I’ve read and watched lots of pro-republican propaganda and to be honest the emphasis on the monarchy being expensive calls to mind a lot of other things which are a much bigger waste, and I find it hard to motivate myself to care. I’ve said before that arguing about whether these nations should have an elected head of state or a hereditary one is like arguing about what colour the handle of the executioner’s axe should be. Having said that, there are many reasons for abolishing the monarchy. For instance, right now it means the monarch is almost certainly going to be White and until recent changes in the law probably also male, and if they aren’t heterosexual they’re probably going to have to be in the closet because of the succession, and none of those things are good. Looking further into the millennium, assuming a persistent monarchy, we’ll probably have three kings, assuming regnal names are the same as birth names: Charles III, William V and George VII, and there probably won’t be another queen until at least the 22nd Christian century. But one excellent reason for abolishing the monarchy is for the sake of the people subjected to it, the Royals themselves, because psychologically it takes its toll on them. George VI’s health seems to have been quite seriously damaged by his being king for example. Knowing that you will only ever have one job in the long term and are unable to do various things with your life must feel like a gilded cage to them, and it probably feels like much of what you do before you become monarch is just dabbling with life in full knowledge that it actually doesn’t amount to much. 
I can see the value of the likes of the Prince’s Trust and the Duke of Edinburgh Award, and yes, I’m talking about the family rather than the men born to be King here, but still, they must have to work very hard to infuse their lives with meaning.

Having said all that, there are other aspects to the Queen’s life and rôles. As well as being monarch, she’s head of state, not only of this country but also many others, such as Canada, and in a way it’s just like having a president, in that she fulfils a similar position. As a child, I noticed that foreign banknotes often had an ornamental frame on them which appeared to be blank, and being from a monarchy I thought these windows, which are in fact there to display the watermark clearly, were supposed to symbolise the fact that the country issuing them was a republic. People from republics disabused me of this notion and said they didn’t generally think of their countries as lacking a monarch or feel the need to indicate their absence. Now we have windows on our own banknotes of course, but not because we’re a republic.

The Queen has a long list of rôles, including head of the armed forces, and also head of the Church of England. Some would see these two as contradictory. However, having been an active member of the Anglican church in the past, I did genuinely feel that whatever else might be true, and whatever other political views I might have had, the Queen was the head of my denomination, and this was significant. She seems to live her life in a Christian way and her faith seems to be important to her. Although it’s important not to fall into the trap of thinking the Royals are just like us in some ways, although of course we all share humanity, it does create a connection between us in the sense that she had this rôle thrust upon her, probably in a way which she perceives to be the hand of God, and has constantly been labouring under the responsibility since 1952, with the help of her Maker. And I can relate to that! I don’t feel she is merely in an unearned position of privilege or has a cushy life. In a theoretical situation where we became a republic, it’s still possible that she would’ve retained her position as head of a church, and, as a woman, she took on that function four decades before there were any women priests. That’s not insignificant.

If you do the calculations, it looks like the Queen and Charles will die in the same year. If she lives as long as her mother, she’s likely to die in 2027 at the age of 101. Her four predecessors, Charles’s male ancestors, died at the ages of 68, 70, 78 and 56. Her heir, born 1948, would die in 2026 if he lives to the same age as the former Edward VIII, and to be honest that particular “king” may have lived longer because he was able to go off and do what he wanted rather than stay as head of state. That said, life expectancy is longer in this country than it used to be, and there are alleged to be connections between tobacco smoking and each of these men’s deaths. Charles gave up smoking when he was eleven. Consequently, just on these bare stats, which fail to take much into consideration, it very much looks like he will never be King.

There’s a pattern in the way monarchs go in England. Long reigns are often followed by a flurry of short ones due to the fact that successors tend to be older by the time they get there. Also, unsurprisingly there are many more kings than queens, but proportionately the average length of a queen’s reign is longer than that of a king’s. Since William the Conqueror there have been three dozen kings and eight queens, if Lady Jane Grey and Matilda are included. The average length of queens’ reigns is bumped up by the two outliers, Victoria and Elizabeth II. Monarchs who stopped reigning, in practice or in law, without being executed, namely George III under the Regency and Edward VIII after his abdication, tended to live longer. I think we should bear this in mind because it shows the strain being monarch puts on people. It really isn’t a bed of roses.

At this point, provided Sumerian king lists are not taken seriously, very few people in recorded history have been monarchs anywhere in the world longer than the current Queen, Louis XIV of France being the best-known exception. Although she is a figurehead, she probably also acts as a source of wisdom and experience for governments and would be able to do this to a greater extent than almost anyone else in history. She’s seen fourteen British prime ministers for example, and is not entirely hands-off in her rôle, but of course we don’t really know what’s going on with her. Eventually one may get to don the mantle of respectability simply by virtue of one’s age and length of time in office, but presumably she has reflected on the nature of successive governments. I do wonder how seriously some of her prime ministers have taken her though.

Another aspect of this is the nature of anniversary naming. On the whole the sequence could be expected to be something like: iron, bronze, silver, gold, platinum, with other interspersed “substances” in between. Sarada and I have had our silver wedding anniversary already, which makes me feel old. There are two diamonds, one at five dozen and one at seventy-five, so Queen Victoria was able to have a diamond jubilee but that was that. They have latterly been modernised, and are mainly seen to apply to marriages so they tend to have things like “electrical appliances” in them. The original is the golden jubilee, which was instituted in the Bible, consisting of seven times seven years plus one, due to the ancient Hebrews having no concept of zero. The Golden Jubilee was honoured more in the breach than the observance, but it’s a brilliant idea. All debts were forgiven and slaves and prisoners freed. I think there was also redistribution of land, in order to prevent the concentration of land ownership in the hands of the wealthy few. We could definitely do with something like that.

That’s it really. The official anniversary of the accession is today, but the celebrations will be in June.

How Free Are We Really?

The title of this post could be interpreted politically, and maybe it will be by the time I reach the end. As usual, this is not planned. I’m just setting down my thoughts as they come to me and splurging them out on the screen. But yesterday I made an interesting discovery which I feel impacts on my personal life and to some extent my identity.

As you doubtless know, I’m a herbalist. I have a whole blog devoted to that along with home ed. I qualified in 1999 CE at the age of thirty-two, having previously read for a humanities (philosophy) degree at Leicester University, and there were six years between graduating from my first degree and starting the herbalism course via the College Of Phytotherapy, during which I gained an MA in Continental Philosophy, got married and we had a child. This probably sounds like a fairly circuitous and unusual route for someone’s career path, such as it is, to take, and there are indeed not that many herbalists (there are probably still too many but that’s another story for another blog), so if you met a woman humanities graduate from Leicester Uni who is a herbalist, the chances are you’d think that was unusual. And there are also coincidences to be taken into consideration of course, and I’ve widened the scope of some of this to make it sound more plausible.

Now I don’t want to doxx anyone, so I won’t, and therefore I’m going to have to be vague about this, but I can’t really pass over it without saying something, because I’ve found something very interesting about which I previously had no idea: I am not the only woman who graduated from Leicester University in the late ’80s to early ’90s with a humanities degree and became a herbalist. I have somehow managed to avoid finding this out until yesterday. And we even knew each other at the time. We were acquaintances. Not friends, although we got on all right. We just didn’t know each other that well and didn’t have much to do with each other. There were five thousand undergraduates at that institution at that time, so it’s not that intimate.

It’s possible to do some stats on this, but before that it’s worthwhile reducing some of this down to some kind of testable hypothesis, along these lines. How probable is it that a woman graduating from Leicester University in the period 1988-91 with a humanities degree would later qualify as a herbalist? That’s five thousand students. Divide that by two and you have two and a half thousand. Divide that by the five faculties of the university and you get five hundred. Finally, there are currently four hundred and five medical herbalists of the kind I and this other person are in Great Britain plus the Isles. The population of Great Britain and the Isles is currently around 65 million, but was lower in the late 1960s when she and I were both born, so I’m going to go with the population in 1970, which was 55 million or less. The probability of being a woman humanities graduate of Leicester University in the period 1988-91 is therefore around 1 in 110 000. The probability of being a herbalist now from the population of Great Britain and the Isles (by the way, that last bit includes only three people and none of them are Orcadian or in Na h-Eileanan an Iar, which is a bit ominous), having been born by 1970, is less than one in 135 000. So far, this is just playing with numbers and not really statistics. There are probably quite a few variables involved, and the ones we can spot – gender, age, degree subject area, location – are quite possibly not the most important ones, but simply correlate with these.
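The back-of-envelope arithmetic above can be sketched in a few lines of Python. All the figures are the paragraph’s own rough assumptions (5,000 undergraduates, half of them women, five faculties, 405 herbalists, a 1970 population of roughly 55 million), not verified statistics.

```python
# Reproduce the post's rough odds. Every figure here is a stated
# assumption from the paragraph, not an official statistic.
undergrads = 5000                      # Leicester undergraduates, 1988-91
women = undergrads / 2                 # assume half were women -> 2500
faculties = 5
women_humanities = women / faculties   # one faculty's share -> 500

population_1970 = 55_000_000           # Great Britain and the Isles, ~1970

# Odds of being a woman humanities graduate of Leicester, 1988-91
odds_graduate = population_1970 / women_humanities
print(f"1 in {odds_graduate:,.0f}")    # 1 in 110,000

# Odds of being one of the 405 medical herbalists
herbalists = 405
odds_herbalist = population_1970 / herbalists
print(f"1 in {odds_herbalist:,.0f}")   # 1 in 135,802
```

As the paragraph says, this is playing with numbers rather than statistics: multiplying the two odds together into a joint probability would only be valid if the variables were independent, which they almost certainly aren’t.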

I don’t want to turn this into a discussion of who becomes a herbalist because that belongs on the other, long abandoned, blog here, but a few things are worth mentioning. Herbalism, for most people, is what one now calls a “side hustle” rather than a main source of income. Some people can for whatever reason only find side hustles, which is Sarada’s and my position as a couple, and is rather unfortunate for our economic situation. I don’t know if this applies to the other herbalist, but there’s a tendency for this to happen more to women than men. Offhand, I’m not aware of any men at all who have one. I am aware of men who are dismissive and disrespectful of them and say they’re pretend businesses. These people should probably consider why people are in the position of not having work that pays better. Perhaps they’ve spent a lot more time doing unpaid work which enabled their male partners to go out and derive higher incomes? Or they may be socialised away from high wages.

When I was training in the 1990s, nine out of ten herbalists were women. It’s alleged that back in the ’80s, every single herbal student was female. I’m a little sceptical of this. Over this period, most herbal students were mature and already qualified in something else, such as pharmacy or nursing. Their attraction to herbalism was often characterised by frustration with healthcare as usually practised, whether from personal experience or through seeing it in their paid work. Another factor may also be involved. In the case of myself and this other herbalist, we have moved from the humanities to STEM, although this particular kind of STEM field is perhaps a little unusual for such a profession. I suspect that this is related to a bias which steers women with aptitude in STEM onto humanities degrees, leaving them needing to “course correct” later. Another thing she and I have in common is that we chose to qualify in science subjects in order to meet the entry requirements for the course (which was the same one at the same institution), so there was clearly a reason, not necessarily the same one, for us not doing those qualifications at school age. It should also be noted that herbalism as practised is unusual for a STEM field because it requires a high degree of empathy and interpersonal skills, and these are also expected by the clients, unlike some other fields such as allopathic medicine where the same would be equally useful but are not necessarily expected, particularly by people with a learned pessimistic view of healthcare. A possible major difference here is that I got the highest mark in chemistry for my year when I was thirteen and proceeded to give it up, which is an unusual thing to do. Therefore it might be worth considering this not so much in gender terms as in terms of what happens when someone with talents in science and technology ends up studying language or philosophy and needs to change direction later.

Another factor here is location. Leicester University was described at the time as “middle-class, middle of the road, middle of the country”, which is a fair portrayal and is probably still true. Overall it was right in the middle of British universities for academic achievement and long-term graduate earning potential. It’s decidedly not Nottingham. Nottingham is a kind of high-flying arts specialist university, in a way a bit like Kent at Canterbury but not to the same extent. A person in the “wrong” field is more likely to end up at Leicester than certain other places because their academic achievement in that field is likely to be lower. Hence these two variables, humanities changing to science and being in the “wrong” academic subject, may in fact be related. This might be reflected in a higher than usual number of women students in the humanities, but to be honest I don’t think that was so.

A further factor is that we both stayed in the English East Midlands. This could be to do with the characteristics of this region or a tendency not to go far geographically after graduation. The first thing that comes to mind when considering the East Midlands as part of Great Britain is its shockingly low biodiversity compared to other regions, which might be expected to make it harder to practise as a herbalist. In practice, it does seem quite straightforward to grow and obtain the necessary remedies here, and this would in any case vary according to one’s choice of herbs. I’m unusual in trying to stick to herbs which are either native or grow well here, and am in fact quite focussed on invasive species, which is rarer still. I don’t think there’s anything particularly wonderful about the East Midlands that makes it suitable for practising Western herbalism, and in fact I think that for England it’s a particularly unsuitable area. Therefore our presence in this region probably reflects something about us. I can think of another CAM practitioner who is also a female humanities graduate from Leicester in the same year as me, and she also lives in the East Midlands, but in her case there may be less connection to the ecological situation since she isn’t a herbalist. That, I think, eliminates herbalism per se as a factor.

The thing to consider here would be how likely someone was to move away from their university location after graduation, and why they might or might not do so. Having children is one factor: if you haven’t moved by the time you become a parent, you’re less likely to do so. I don’t know if this other herbalist is a mother, but it makes some kind of sense that if her career changed direction after a few years, it might lead to her becoming fixed in one area. It would also be useful from the viewpoint of social capital – you would build up more contacts in the immediate area which you could then use later in your practice. The other ways I think it could be made to work are not practising directly but mainly teaching or writing books, or joining or taking over an established practice.

All of this, though, is a case study of a specific pair of people I happen to have noticed because I happen to be one of them. It helps me put my biography in context because neither of us was aware of the other’s life, yet we have ended up pursuing similar paths, even though to us they probably seemed like rational and free choices most of the time. The specifics of our situation are not that interesting to others. What is interesting is the question it raises about the nature of freedom. Many people do feel restricted in their choices, or simply are. Human trafficking and homelessness come to mind here, but it also happens in wealthier situations, such as children entering the family business or being expected to follow what their parents see as an illustrious profession. Those people are probably aware of their lack of freedom. As for the rest of us, maybe all we really have is the illusion of freedom. This is not quite the same as the idea that we are individually determined. I am pretty much convinced that on an ontic level there simply is no freedom, and that it is an illusion, substantially because determinism and acausal events both mean one has no influence over what happens. However, the kind of freedom I’ve been discussing here is rather different, because it’s a little more like the idea of political liberty than the idea of free will. In fact, it’s intermediate. Whether or not you believe in free will, you will tend to have an opinion about the rôle of such things as free speech and the political franchise, and that has no bearing on the issue. However, governments also successfully manipulate the electorate in various ways, including but not limited to propaganda. That said, there’s no way they would be concerned about manipulating women into changing career direction from the humanities and becoming herbalists near their university towns. This is not only ridiculously specific but also only really doable in broad strokes in a “free” society like this one.
It’s unlikely that anyone, anywhere, in government or the civil service is attempting to socially engineer an increase in herbalists. They’re more likely to be doing the opposite. But this means that there are social forces operating on all of us anonymously, without purpose or intent, outside anyone’s consciousness, which are nonetheless just as deterministic as a centrally-planned political economy, and as detailed as those famous cases of identical twins separated at birth who end up living with the same breed of dog with the same name.

What are we to make of all this? I have no idea.

English English

What if the Norman Conquest hadn’t happened? How would people on this island be speaking now? What would’ve happened to the nation of England?

Although English is technically a Germanic language, it can sometimes be very hard to detect that aspect of it. This is less true of Scots. The causes of this are ultimately the influence of Norman French and the Great Vowel Shift. The first created a precedent for the adoption of other words into English and also eroded some of the inflections. The other took the pronunciation of long vowels, later diphthongs, far, far away from its origins compared to most other European languages, with the possible exceptions of French and Portuguese. I previously dashed over the five or six centuries of history between the departure of the Latin speakers and the arrival of the new Latin speakers. An early discernible cause of the Norman invasion was that Edward the Confessor was raised in Normandy, because his mother Emma of Normandy fled England and Sweyn Forkbeard, and the reason she’d married into the Saxon royal family in the first place was to pacify Normandy, and so it goes back and back as usual. The real problem with proposing these scenarios is whether anything other than the current state of affairs is feasible. Of course it is, but maybe a lot of what we imagine to be real counterfactual timelines only seem to be so because we don’t know enough, maybe even can’t know enough, about the ultimate causes of events. When truly acausal events have a major influence on history, the situation is different. For example it’s possible, though amazingly improbable (and that measures the distance that universe is from this one), that Becquerel’s photographic plate didn’t become clouded by the radioactivity of his uranium salts, and consequently there are timelines where radioactivity wasn’t discovered until much later or not at all, along with ones where Chernobyl didn’t happen, Hiroshima didn’t happen and so on, all independently of Becquerel’s discovery; but in fact all these events are practically certain.
This raises the question of what improbable event of this nature has occurred in our timeline, and that may be the existence of the natural nuclear reactor in Gabon two thousand million years ago, or perhaps its failure to become a runaway nuclear explosion.

Nonetheless, I shall imagine a scenario resulting from some nebulous tenth century event in the English monarchy, or perhaps something else such as conflict between the Danes and their brethren the Normans, which prevented the Norman Conquest or any other successful invasion of this island by Romance-speaking nations. What would English be like today?

It’s sometimes claimed that English is the richer for the Normans. Whereas I think it’s true that it has led to greater flexibility which allowed the language to acquire loan words more easily during the imperial era, and also gave it a particular character, this is to malign other Germanic languages unfairly. Old Norse in its modern form as Icelandic has a fine literary tradition, as has German, and they certainly didn’t need to be propped up by another language. Hamlet’s “To be or not to be? That is the question. . .” was recast as “To be or not to be? That is what is mooted. . .” by someone like David Starkey, and seen as clumsy and impoverished, but this assumes that no other changes would’ve taken place in English as a result of the absence of Norman French influence and is therefore quite artificial. English sans French is not English with one arm tied behind its back, because a language is unlikely to remain restricted in this way but will develop into the space left by the non-existent French influence. For instance, the Germans call a printing press a Druckpresse and the Icelandic name is Prentvél, so they did adopt a Latinate term but we could’ve ended up calling it something like a “throngtram”, and we’d be fine. Nobody would be disadvantaged by that and we wouldn’t know the difference. Hamlet’s speech, and of course there would’ve been no Shakespeare but let’s ignore that for now because there would’ve been someone else, could’ve started as something like “To be or not to be, that is the fray”, from the Old English “frignan” – to ask, or perhaps “. . . that is the asking”. Something would’ve come along to fill the gap.

Henceforth I shall rid this writing of words from other tongues, although I know some will slip through. However, although this may well show that English can get along without those other words, it shouldn’t be taken as the way it would’ve been without Norman French on the grounds I went into above. In truth, I have been writing and speaking like this, on and off, for years since I hated French so at school that I sought to take out all of the words that stemmed from French in my speech, and also Latin. Nowadays it comes straightforwardly to me and has done for years, although the hatred I once felt is now gone as I’m now aware that it’s widely spoken in the Third World, such as Afrika. Though I know some French words will leak through, even Icelandic and German, while unkeen on words thence, do have some, as can be seen in the above “Druckpresse” and “Prentvél”.

Anoðer þing French did to English was to write it in its own spelling and ðis meant þrowing out some of þe alphabet. Þorn, eð, æsc and ƿynn all went, and sundry methods, often with H, arose instead. Moreover, French spelling was also foisted upon the vowels (ðere goes an un-English word!) as wið “OU” for “U”. So, from now on I scal be writing Englisc wið þe older spelling too, at þe risk of becoming hard to follow. Ðis also means getting back to my small “i” for “I”, since Englisc did ðat once too, before þe Normans.

Ðen ðere’s þe vowel scift. Ðis cannot be seen in writing on þe whole, but it means ðat the way words are said is no longer hu we have been saying ðem in þe last few hundred years. Yu can also take it as read ðat spellings like “know” and “ðoght” will have everything spoken raðer ðan just being a series of scapes which are most unlike þe way ðey are spoken. I am beginning to find it hard to write ðis nu wið ðese new meþods and I þink I will be making some mistakes.

Alðoh only Englisc underwent þe Great Vowel Scift, two oðer tongues had þe same þings happen to ðeir vowels in oðer ways. Englisc spelling was once marked by making boþ Y and I do þe same work. Þis arises from ðem having been unlike each oðer at first but becoming more alike later. Þe same happened in Icelandic. Both nu make what we wuld call a short I. In Englisc ðis has gone furðer owing to our vowel scift, so we nu have an “eye” for it too. Ðerefore we can believe ðat ðis melding, which happened for us about nine hundred years after Christ, would have happened anyway. Moreover, German has had þe same þings happen to its long U and I as we have to ours, and nu spells ðem “AU” and “EI”. Hence anoðer set of spellings comes to liht: Y is only written in words from oðer tongues such as Greek, and þe long I and U are spelt “AI” and “AU”, which i scal do here fortþwiþ.

Our speech was overshadowed for hundreds of years by French, and in ðat time it became somewhat rotten. Ðere was no highflown kind of Englisc – it was spoken by þews and þe loest of þe lo. Ðerefore its grammar was not given heed, and it grew downfallen. Once it was raised again into þe liht, it had taken on a niu scape. No more did it have “she”, “it” and “he” for words which named things, and no more did words betokening þe marks of a named þing end in vauels scowing which of þese holes þey belonged in. Had þat not happened, we wuld in all laiklihood stil have such þings to þink abaut when we spoke.

It has become hard to go on writing þis owing to what i have taken on board and i still feel þat þere is a bit to go into, so instead of grinding awai at it, i scal scow iu where we mai have ended up wið a sketch of our speech as it wuld be spoken todai. Bi þe wai, þe awkwardness of the wording here is not laike hau þe true speech wuld come over.

It sculd bee born in mind ðat ðee French swai upon Englisc writing no wuld haaven happened. In his stead weere ðer ongoinde spellings from ðee Olde Englisce taimes. On ðee whole, ðee tunge weere laik unto Middle Englisc mid oone oðer two oddnesses. Ic no can undertaaken ðat ic write ðis wel.

Ic scal beginnen aniu:

Ðat alphabet is sumhwat laik unto aur oȝen but for twein stafs ðat sinden offwesend: ðer sinden no Q oðer Y. Hwen one wuld wraiten ðo laudes, moate one “KW” and “I” forwenden, and one mote eek munen, ðat ðer be no laud “Y” auttaaken “I”. Ðer sinen eek sume more stafs not faunden in todais Englisc:

A, B, C, D, Ð, E, F, G, Ȝ, H, I, J, K, L, M, N, O, P, R, S, T, Þ, U, V, W, X, Z.

“Ð”, hwilc as an smale staf is “ð”, is laik unto aur unwhisperede “TH”. “Þ” oðer “þ” is ðat ilk whisperede oan. So firn so ic woat, “Ȝ” is onli in Englisch founden, hwer it is nau “GH” writen, and her it haþ þree lauds: als “CH” in “loch”, “H” in “humour” oðer ðee unwhisperede aforesaide “CH”. R is als in Italian oðer Scots. Ðer is no “Q” forðai most of ðee words mid “QU” in hem sinden fro French oðer Latin and ðee Olde Englisce words ðermid weren mid “CW” spelt. Forðai C is onli said als “CH” als in “curc” nauadais, ðis is nau spelt “KW”. “WH” is spelt “HW” als it was in Olde Englisc.
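As an aside in ordinary spelling and ordinary Python: the respelling rules above are mechanical enough to automate, so here is a toy sketch that applies a few of them. The voiced/unvoiced TH split can’t be recovered from modern spelling alone, so a small word list stands in for real phonological knowledge; that word list is my own hack, not part of the scheme.

```python
# Toy transliterator for the alternate orthography sketched above:
# QU -> KW, WH -> HW, GH -> Ȝ, TH -> Þ or Ð, and no letter Y (use I).
# Spelling doesn't encode whether TH is voiced, so a stand-in word
# list of common voiced-TH words decides between ð and þ.

VOICED_TH = {"the", "this", "that", "then", "they", "there", "though"}

def respell(word: str) -> str:
    w = word.lower()
    th = "ð" if w in VOICED_TH else "þ"
    w = w.replace("qu", "kw").replace("wh", "hw")
    w = w.replace("gh", "ȝ")   # do GH before TH so "night" isn't mangled
    w = w.replace("th", th)
    w = w.replace("y", "i")    # no Y in the alphabet
    return w

# e.g. respell("when") gives "hwen", respell("night") gives "niȝt"
```

This is only word-by-word and ignores vowels entirely; a faithful version would need the vowel table below and genuine etymological knowledge.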

Ðee vowels worken ðus:

A – als in Norðern “man”

E – hwen laudli spooken, als in “when”. Hwen not, als in “mother”.

EA – als above but lenger.

I – als in “this”.

O – als in Scots “pot”.

OA – als above but lenger.

U – als in “put”.

AA – als in “barn”.

EE – als in Norðern “they”.

II – als in “machine”.

OO – als in Norðern “gnome”.

UU – als in “soon”.

Nouns

Ðis is hwer ðis Englisc straieþ fro true Englisc moste. Amung ðee Teutonike tunges spooken in Europe, Anglik aloan haþ but grammatical kin akin to ðee sexes of ðee þing oðer folk at hand. Of ðee oaðer, German, Aislandisc and Norn aloan haaven þree kin. Ðee oather al haaven twein: neuter and amainscap. Ðerfor we cunen taaken it ðat Englisc doo laikwaise. Herfor haave ic taken Englisc twein kin to haaven. Ðee formere waiflie and mannlie kin sinden becumen oan amainscap kin and neuter is jet neuter. Ðus alðoȝ a waif bi name klept scal “hee” klept be, ðat word “waif” itself is “it” klept. Ðer sinden but two þride pronomina personale. Laikwaise, “stoan” is amain – “ðee stoan”, not “ðat stoan”. “Stoan” is “hee”, not “it”.

Ðee ofteste kind of noun by fere is ðat hwilch has “-(e)s” for ðat manikind. Ðo eek forwenden ðat ilk for ðee genitivum. Ðer sinden ðoȝ sume nouns ðat haaven zero manikinds, swic als folk, þing, jear, swain, hors, sceep, deer, neat, weapen, faul and fisc. Sum herof sinden eek ðus in aur oȝene Englisc. Ðer sinden eek “-en” kinds, bilaiend oxen, eyen, breðren, cildern, lambren, kain, koalen, treen, meaten, steaden, sunen. Ðen ðer sinden sundrie nouns ðat haaven manikinds hwer ðee vowel is unlaik unto ðe oankind: foot – feet, man – men, goos – gees, maus – mais, laus – lais, kau – kain. Oaðere zero manikinds sinden “freend”, “feend”, “niȝt”, “faðer” and “breec”. Words borroȝed from Latin and Greek haaven ðee Latine nominative ending but not ðee Greek.

Adjektiva

Jee mauen haaven merked ðat sumhwat befalleþ ðee adjektivs in sume settings, hwer ðai oan “-E” after ðee ende of sume words but not al. Ðis is laik unto ðee oaðere Teutonik speeces, ðat maaken hem unalaik jif ðai twix ðeir word for “ðee” oðer “dat” and a noun sinden, oðer oaðere tookens of bestimmedness swich als “main” oðer “ðain”, and eke befor ðe manikind, swich als “an hiȝ cild”, “ðat hiȝe cild” and “sume hiȝe cildren”. Als in tru Englisch, ðer sinden sume adjektiva ðat haaven autlandisce “-er” and “-est” kinds, laik unto aure “better” and “best”, hwilc ðai eek haaven, but ðai haaven also “laite” – “lesse” – “least” and “far” – “fore” – “first” so wel so “long” – “lenger” – “lengest” and “strong” – “strenger” – “strengest”, and “elder” and “eldest” sinden spoken midaut sister and breðren. So was it hwilom in true Englisc.

Of ðee tallis, “oan” and “two” haaven kinds beyond ðee nominativa. Oan haþ “oans” for ðat genitivum and “two” “twein” for ðat objektivum and “tweir” for ðat genitivum. Hens ðee tallis for reckoning sinden:

oan, two, þree, fower, faiv, six, seven, eȝt, nain, teen, enleven, twelf (becumeþ “twelve” jif bestimmed oðer manikind), þriteen, fowerteen, fifteen, sixteen, seventeen, eȝteen, ninteen, twenti, oan and twenti . . . hundred . . . þausend. Ðen we haaven eek ðee words: first, oaðer, þrid, ferþ, fift, sixt, seveþ, eȝteþ, niȝende, tenþe.

Artikula

Ðisse sinden liȝt. “An” and “a” sinden forwent als in tru Englisc. “The” overseteþ als “ðat” for oankind neutrum and “ðee” for al els. Oaðerwaise ðer is no token of kin oaðer ðan ðe pronomina. Ic am aware ðat ic overloade ðat word “kin” bai ðe wei.

Pronomina

Ðee firste persona pronomina sinden “ic” and “wee”. Ic kan maaken a grid herabaut:

Nominativum   ic     wee    ðau    jee
Genitivum     main   aur    ðain   juur
Objektivum    mee    us     ðee    ju

Ðee þridde personae sinden:

              Amain   Neutrum   Manikind
Nominativum   hee     it        ðei
Genitivum     his     his       ðeir
Akkusativum   hin     it        hem, ðem
Dativum       him     him       hem, ðem

Ðis scoweþ ðee startlinde þing abaut ðat pronominum “he” als in Middle Englisc. In West Saxon, ðer weren þree þridde personale pronomina: “heo”, “hit” and “he”. In Middle Englisc, ðee laud “EO” bekam “E”, and ðerfor boþe ðee waiflie and manlie pronomina weren ðat ilk. Ðis led to ðe so-callede “generic he” but ðee need was felt for a niu waifli pronomen, hens “she”. Ðis meaneþ ðat menisce sinden “he” klept, even jif ðei waifs sinden. Ðee pronomina sinden also aloan in havind ðeir oȝene akkusative and dative kinds, mid “hin” and “him”, and ðis is moreover tru of ðee pronomina for askings forwent:

              Aloan   Mani
Nominativum   hwoo    hwat
Genitivum     hwos    hwos
Akkusativum   hwon    hwat
Dativum       hwom    hwom

Ðat dativum his oȝene kind havind is kind of weak forþai non-livinde þing sinden seldom þing “given”, and ðat is tru of ale pronomina. Hawever, ðat dativum in Englisc foldeþ ðat instrumentale in, and ðerfor more waideli forwent is.

Ðer sinden eek bits of ðe twofolde kin left, swic als “hweðer” hwer wee “hwilc” sayen jif ðer sinden but two þing.

Verba

It haþ oȝenscip ðat ðee stronge verba sinden waidspreader ðan in tru Englisc, and even sume words ðat weren in Olde Englisc weak sinden strong bekumen. But befor ic doo ðo, mote one þinke of þee greatere kind of verbe:

walken, to walken:

ic walke

ðau walkest

hee walkeþ

wee, jee, ðei walken.

ic walkede

ðau walkedst

hee walkedeþ

wee, jee, ðei walkeden

walkind

walked
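Back in ordinary spelling for a moment: the weak-verb paradigm above is regular enough to generate mechanically, which this little Python sketch does. The endings are read straight off the “walken” table; the function and its layout are just my own illustration.

```python
# Weak-verb endings taken from the "walken" paradigm above.
PRESENT = {"ic": "e", "ðau": "est", "hee": "eþ", "wee/jee/ðei": "en"}
PAST    = {"ic": "ede", "ðau": "edst", "hee": "edeþ", "wee/jee/ðei": "eden"}

def conjugate(stem: str) -> dict:
    """Build the full weak paradigm for a verb stem such as 'walk'."""
    return {
        "infinitive": stem + "en",
        "present": {person: stem + end for person, end in PRESENT.items()},
        "past": {person: stem + end for person, end in PAST.items()},
        "participles": {"present": stem + "ind", "past": stem + "ed"},
    }

# conjugate("walk")["present"]["hee"] gives "walkeþ"
```

The strong verbs described next wouldn’t yield to anything this simple, since their stem vowels change by class.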

Ðe stronge verba sinden in seven bits cloven, and mor ðerof sinden in al ðan in tru Englisc. Also, ðei haven al of ðo kinds in Middle Englisc faunden, and niu stronge verba haven arisen hwen ðat stem raimeþ.

Ic feele nau ðat ic haave ȝenuȝ said, and ðee speec made is most akin to Middle Englisc. Ðer sinden but fiwe wendings from ðee tru Middle Englisc speec herin. Oan hardness is makind niwe words for þing ðat weren not back in ðee oldene dais. Ic haave curen to forwenden words from Latin itself for ðee grammaticale words, forþai ðee laiks of ðee Germans and Aislanders haven alaik doon.

If you’ve been patient enough to get this far, thank you for indulging me. This has proven quite a struggle to write and I suspect there are many inconsistencies in this post, which in fact replaces a different post on the idea of a generic Germanic language. However, now it’s seen the light of day I hope it’s not too boring.

E*N*E*M*I*E*S

Aerial view of WTC in March of 2001

The usual current take on ‘Friends’ is that it has dated poorly in “woke” terms, and I hesitate to use that word because it sounds faintly mocking. This is fair enough, but there is more to it than that, and there always has been. Insofar as the series is, obliviously or not, out of step with today’s sensibilities, that very fact is a route into a whole new interpretation, one which capitalises on it (to the extent that it’s true in the first place) and harmonises with some of the darker aspects of the comedy. I’ve long felt that trapped inside the series there’s a serious drama trying to get out, so, just like Ursula, I present to you the evil twin of ‘Friends’: ‘Enemies’.

In fact the title doesn’t quite work because the characters would still be friends. Even so, there are ways in which they are bad for each other, in the sense that they are their own worst enemies. But think about it. They’re a group of six White friends living in a bubble in an exceedingly multi-ethnic city who don’t even react when the Twin Towers come down. Why not capitalise on this? It would be possible to emphasise it, in a ‘Rosencrantz And Guildenstern Are Dead’ kind of way, with lots of questionable stuff going on in the background, such as rough sleepers and a high turnover of ethnic minority staff at Central Perk. They could get stranded in London after Emily and Ross’s wedding because all the planes are grounded after the 9/11 attack, and react to it in a self-absorbed manner. We could actually see some of the stuff which happens off-set, such as Rachel’s “honeymoon” in Greece with “Why you cry, Mrs Geller?” spelt out in great detail, and Chandler’s sudden trip to Yemen. They could return to a devastated and grief-stricken NYC and be completely oblivious of it.

However, it’s also true that even privileged people suffer, and there’s a major element of lack of privilege there anyway. There’s an episode of ‘MacGyver’ which does this really well with the Mayim Bialik character Lisa Woodman suffering from alcoholism in a setting of extreme wealth, with her parents in an abusive relationship. Chandler’s wealth doesn’t protect him from the crisis of masculinity, for example, and Phoebe is distanced from the other friends by her experiences as an adolescent. There’s this for example:

Ross gets sacked from work due to anger management issues, has to take medication and has a breakdown. Also, Ross’s son Ben is given a bottle of anti-depressants to use as a rattle. This is actually the moment which first made me think about this. Although many would say nothing is off-limits to comedy, the question here is why Rachel’s mother was taking them in the first place. There are plenty of external, situational reasons why Sandra might be doing this, such as her divorce. Speaking of that divorce, the writers do a very relatable job of showing an adult child’s experience of their parents’ separation, which I’ve been through myself.

The success of ‘Friends’ is to some extent paradoxical. From the writers’ perspective it doesn’t seem to be intended as a nicey-nicey feel-good sitcom by any means. Rather, it’s about the likes of the quarter-life crisis, being exploited as a young adult and having dysfunctional relationships. It isn’t a TV version of Ibiza or an 18-30 holiday. In the first episode, Ross’s marriage breaks up because his wife has come out as a lesbian, but she’s pregnant with their child, so there’s an immediate “broken home”, to use a rather old-fashioned term. In the rest of that season, Phoebe finds a severed body part in her drink and Ross goes to the ER, and in fact any involvement with the dystopian American healthcare system is fuel for serious drama. This is explored to some extent when Joey has a hernia and can’t afford to have it treated, and when Rachel and Monica commit insurance fraud. This brings to mind the idea of a rather short, boring British version of ‘Breaking Bad’, where Walter White is diagnosed with cancer and just gets it treated and that’s it. Then there’s this line from ‘The One With The Boobies’:

Roger: Actually it’s, it’s quite, y’know, typical behaviour when you have this kind of dysfunctional group dynamic. Y’know, this kind of co-dependant, emotionally stunted, sitting in your stupid coffee house with your stupid big cups which, I’m sorry, might as well have nipples on them, and you’re like all ‘Oh, define me! Define me! Love me, I need love!’.

This is actually true, and a lampshade-hanging, and it also emphasises the closed nature of the friends’ relationships. Few people can break into the group, and they’re clinging together out of fear of the outside world and, for some of them (Monica, Chandler and Phoebe), a lack of family support.

There’s also the aspect of several of the friends exemplifying a personality disorder. Phoebe is borderline (ugh), Ross dependent, Chandler avoidant, Monica obsessive-compulsive and Rachel’s and Joey’s personalities are, er, left as an exercise for the reader. There’s a lot of comedy potential in the interaction of people diagnosable with personality disorders by the conventional approach to them, but also a lot of potential for the portrayal of strife, anxiety and depression.

Another aspect, which I wish there was more of, is the occasional political commentary. ‘The One With The Dollhouse’ has this exchange:

Phoebe: Well, nobody wants a ghost. But you’ve got one, because the house is sitting on an ancient Indian burial ground.

Ross: Wait a minute, the house was built on radioactive waste, and an ancient Indian burial ground? That would never happen.

Phoebe: Okay, obviously you don’t know much about the U.S. government.

This can’t be pushed too far in its current format, but were it to be freed from the constraints of comedy and prime-time US television, more of this could be done.

Here are a few other situations in chronological order:

Monica falls victim to credit card fraud.

Chandler and Joey leave a baby on a bus.

Phoebe marries a gay man so he can avoid being deported.

Monica tests a potentially dangerous synthetic chocolate substitute on her friends.

Joey gets stalked.

A loner dies in his flat due to the friends’ abuse of him.

Chandler gets a mentally ill flatmate who is made homeless by Chandler’s and Joey’s deception.

Phoebe has her intellectual property stolen and exploited for commercial gain.

Ross and Rachel have a co-dependent relationship which will clearly never work even though they repeat the pattern in the last episode.

There’s plenty more of course.

Then there are the central characters:

Rachel is a spoilt child who can’t initially cope on her own but gradually learns to do so, having been thrown in the deep end.

Monica is increasingly obsessive-compulsive, discovers that she’s sterile and is neglected in favour of her older brother by her parents, and has had an eating disorder.

Ross is unable to get over his schoolboy crush on his sister’s best friend and is still fixated on his childhood hobby of dinosaurs.

Chandler is insecure in his masculinity due to his father being in the closet through much of his childhood and his narcissistic mother.

Joey is only interested in shallow relationships and treats women really badly.

Phoebe is close to being schizophrenic and was homeless as a teenager due to her mother killing herself.

A lot can be done with all of this just on its own. It would be possible to set the situation up with these characters as a serious drama and just let things happen, and it’s also possible to incorporate some of the situations which arose in the actual sitcom as serious incidents. Hence, as I said, there’s a potential drama series here, with no laughs at all, and it would (still) be good.

Woody Allen, a name I hesitate to mention due to his current reputation, was nonetheless an adequate film maker. He had a go at a similar idea in his 2004 movie ‘Melinda And Melinda’, which purported to tell the same story once as a comedy and once as a tragedy. It didn’t really work, not least because the plots of the two versions were considerably different, and it was considered one of his worst works. Neither version is a pure comedy or a pure drama, because the dramatic one has humorous elements and the comic one makes serious points. Nonetheless, the idea as such is strong, and what better choice would there be than to apply it to the überweiß sitcom?

Lateral Thinking


I have to admit I’m somewhat out of my depth on this one, although a kind of family and cultural osmosis has led to considerable familiarity with the movement in the past. In personal terms, I can relate to the topic in two ways. One is to think of myself as someone who is only able to think laterally. The other, which I haven’t been able to understand, was a comment by a client who said I never think laterally, but always vertically. I don’t know what to make of this apparent contradiction.

Edward de Bono died on 9th June 2021. That pattern recognition device went the way of all flesh, and flesh it was – he was often keen to emphasise that the mind was no computer, although I do wonder whether he included quantum computers in this. Or rather, did his concept of a computer as a vertical thinking machine, as it were, still apply to quantum computers?

This is not going to be a complete survey or review of De Bono’s work. He published seven dozen and one books and also had other rôles, so it’s unlikely I can do justice to him today, so I’ve decided to focus on two aspects of his earlier work on lateral thinking. Before I do that, I want to bring something to your attention about this. De Bono for me is not just someone who is “out there” because my father was very interested in him in the late ’60s and into the ’70s when I was born and through my childhood, and he does seem to have applied some of his principles and thought to my upbringing, so it’s quite likely that the way I approach things now is related to that way of thinking. I remember some exercises I’ve done at my father’s behest on the matter. In his case, he was attempting to apply it to his paid work in operational research and management, where it seems to have been quite popular, but as De Bono himself says, it shouldn’t be learned through its application because that places a restrictive filter on it, but should be considered a subject in its own right. If my thinking is linked to lateral thinking in this way, it’s also likely that I can’t perceive that it is.

Lateral thinking is contrasted with vertical thinking, and he is keen to emphasise that the former is not to be considered better than the latter. Both are appropriate but apply in different circumstances. Vertical thinking is less creative and proceeds from given unquestioned premises step by step in a manner which is preferably not open to flexibility. This is often necessary, and is the kind of thinking we probably come across most of the time in technically-oriented walks of life such as mathematics, logic, computing and possibly science. I hesitate to commit myself entirely to this idea though, because whenever human beings are involved, creativity has a rôle, and this has always been so. De Bono would relate lateral thinking to insight, creativity and humour, and he almost has a theory of humour, but I’ll come to that. In vertical thinking, the only available method for changing ideas is conflict. Either a new idea is introduced and kind of enters into battle with the old idea in someone’s head, which it either wins or loses, or new information confronts one and conflicts with the old, leading to its hopefully dispassionate acceptance or rejection, which is said to be how science works. Thomas Kuhn would point out at this point that the war of ideas in science is heavily influenced by the career positions and choices of scientists and can’t be considered as occurring in an abstract realm where a new hypothesis or theory is mechanically accepted or rejected, but this is an idealised way of looking at science and I think we can probably agree that it’s how it should work.

An example of how it might work differently, and I don’t know if I can dignify this with the label of “lateral thinking” but here it is anyway, is my approach to the composition of Saturn’s rings. In the early 1970s, no space probe had been sent past the asteroid belt and there was conflict between astronomers who believed the particles making up the rings were icy and those who thought they were rocky. I chose to conclude that they were ice-covered rock. It turned out they were mainly made of frozen water, but this is probably an early example of lateral thinking and I know I applied it elsewhere. Bear in mind that this was a six-year-old, so it isn’t going to have the sophistication of an adult professional astronomer’s reasoning. Bear in mind also that I’m not commenting here on whether it’s right or wrong, which is another feature of lateral thinking.

Conflict between ideas only works where objective evaluation is possible. Very often, in vertical thinking new information is examined through the filter of preëxisting information and structures, which can cause the old idea to become more entrenched. I personally think the idea of non-baryonic dark matter is a good example of this. Another example might be found in religious fundamentalism, as with sexism and homophobia, where rather than attempting to moderate the prejudice in the light of new attitudes and even scientific research, people just dig in deeper, sometimes to the extent that it seems, at least to an outsider, that some churches are primarily concerned with hatred.

A good way of changing ideas is to rearrange the available information by use of insight. As recognised patterns are used, they become more firmly established. This reminds me of a friend who became delusional, or rather a friend whose delusions were unusual and began to affect her life adversely, and it seemed to me that one element in their reinforcement was that backtracking would involve acknowledging that she was wrong and that she’d used a lot of time and energy in maintaining them which had come to have major adverse consequences for her life. Nonetheless there’s a need to attempt to deal with the manifold, and the main way of doing it seems to be to convert established patterns into a kind of code for dealing with the world. The mitzvot of the Torah would seem to be one example, and vocabulary is another, and to me this raises the question of how much learning is really linguistic rather than some other kind. All of these are filters which leave out a lot of information, of necessity, but it’s possible that this information, were it acknowledged, would end up forming a new pattern not noticed before.

Crucially, De Bono tends to deprecate the notion of the mind as a computer, or any kind of machine (although this becomes more contentious when one considers the possibility of what can be simulated). Instead, the mind is an environment, rather like a landscape in certain ways. Now I’m conscious that I’ve already used the metaphor of a landscape to describe neurodiversity, and wish to dispel the notion that there’s a connection here. The mind for him is a specialised environment which allows information to organise itself into patterns. This reminded me rather of Gestalt psychology, which rejected empiricism and structuralism and is largely based on the idea that the mind tends to impose higher-order phenomena such as movement and patterns on lower-order sense impressions. I would call these higher-order phenomena supervenient. Since Gestalt psychology now largely survives as therapy, this also suggests that if lateral thinking is helpful, it too could be used as a form of therapy, where people are trying to break out of maladaptive patterns in their emotional lives. In fact, right now I see lateral thinking as particularly useful in this area.

Restructuring is hard because existing structures grab the attention. Nonetheless there are times when restructuring occurs spontaneously in the human mind, and de Bono mentions three: insight, humour and lateral thinking. I would perhaps add revelation and epiphany to those. I once asked a non-religious psychologist friend of mine if he had had anything corresponding to religious experience and he mentioned insight as being somewhat akin. The experience of insight, in fact, was so difficult for thinkers to explain in the European Middle Ages that they posited the idea that God illuminated the contents of the mind. Humour constitutes a brief and reversible restructuring, which I found interesting, but I couldn’t tell if he was proposing a complete theory of humour or not. Insight, on the other hand, is a long-term restructuring, or rather the beginning of one. De Bono appears to offer something like a definition of lateral thinking at this point, as “restructuring, escape and the provocation of new patterns”, and as such this reminded me rather of the somewhat later but also highly seminal ‘Gödel, Escher, Bach: An Eternal Golden Braid’ by Douglas Hofstadter.

Lateral thinking is in a way an attempt to generate creativity consciously. However, the creative process itself is often hidden, even from the creator themselves. I’m reminded somewhat of Dalí and his paranoiac-critical method, and of the suggestion that one overcome writer’s block (not a problem for me, so am I naturally a lateral thinker?) by cutting up and rearranging text, which is almost the same thing as one of the exercises he proposes. Lateral thinking generates its own direction by placing ideas next to each other as a form of progress, whereas vertical thinking is led by principles and is passive. Vertical thinking is also constrained by the choice of premises. It also tends to create sharp divisions and uses extreme polarisation, and this is particularly interesting since these may be the major problems with today’s society and were far less severe in the late ’60s. Is there a way of applying lateral thinking to this issue? One of its functions is to temper the arrogance of rigid conclusions. However, as he says, de Bono is not fundamentally opposed to vertical thinking, and believes that lateral thinking can support and help it in the long run. “You cannot dig a hole in a different place by digging the same hole deeper”, as he says; but digging a hole, for the purposes of this metaphor, is still a vertical process, so you think laterally to transport yourself to a better location and might then proceed to use vertical thought. This mode of thinking is not new either. There are also people who naturally gravitate towards it, and this is where it gets personal again. I would certainly say that many people on the Halfbakery are constitutionally lateral thinkers, and would include myself in that number, but as I’ve said, one of my clients has said that I always think vertically and am incapable of thinking laterally. I’m not sure what this means. It clearly is how he perceives me, but why is it so much at odds with my self-image, which is the opposite?

De Bono wrote another book called ‘The Mechanism Of Mind’, to which he makes extensive reference. I haven’t actually read it, but it again uses the metaphor of a landscape to describe the mind. A flat limestone plain might gradually develop watercourses and channels as it is rained upon, and these will eventually cut permanent ponds and lakes, and also streams and rivers. This is the memory of the land. It may also be influenced by the differing composition of that land, such as granite as opposed to limestone. If there are instinctive schemata applied by the mind to the world, they might be seen as similar to the varying composition of the land, and the entrenchment of memory and learning is akin to the erosion and formation of bodies of water of particular forms. The land remembers where the rain and snow fall, and the mind remembers things in the same way. I like this metaphor because it’s very un-computerised.

Even so, he sometimes seems to have a rather IT-oriented approach to thinking. For instance, in a later book he introduced a series of short numeric codes to sum up entire phrases. The predecessor of this idea is also present in his early work, where he proposes that communication can be abbreviated into trigger words. This is not “trigger” in today’s sense, where it refers to features which may cause anxiety to certain groups of people or to people who have suffered particular traumatic experiences, but more like words which trigger a series of associations like a computer subroutine. It seems ironic that this very un-computer-like device, the mind, according to de Bono, can also undergo something rather akin to programming in this way, although there’s no imperative element, so maybe it’s object-oriented.

At the top of this post I described him as a “pattern recognition device”. This is more or less how he sees the mind, more precisely a pattern-recognition system. Most or all of the patterns the mind comes to recognise are not built in, although I’m not so sure about that. For instance, our sense of hearing is attuned in development to recognise voices and we tend to see faces everywhere even when they’re absent, such as the Badlands Guardian and the faces on Mars. The cognitive psychological view that the brain consists substantially of modules would also tend to contradict this, although it’s conceivable that modules could arise from a non-modular infant brain through learning. This is of course the nature-nurture debate, or in epistemological terms rationalism vs empiricism. In any case, this pattern-making tendency allows the mind to communicate, or perhaps a better word is “interact”, with its external environment. The patterns are always artificial, which seems to mean that de Bono doesn’t believe in natural kinds, i.e. types of things which exist objectively. He goes on to say that in a sense the mind is a mistake-making system, in that it mistakes one thing for another. Although an obvious example is our tendency to imagine faces in inanimate objects, it applies more broadly in that one must reject some of the features of an item one apprehends in order to conceive of it as like another. One has to generalise. Those patterns which promote survival are then selected. For example, one may have noticed a pattern that clear colourless liquid tends to quench thirst, but if one is surrounded by vessels containing water, acetone, turpentine and isopropanol one might wish to modify the pattern to include odourlessness, although I suspect that doesn’t eliminate everything. Hence one doesn’t end up poisoning oneself.

A further claim is that the mind doesn’t actively sort information but information sorts itself out. I’m not sure what he means by this, but I think the idea is that the mind constitutes a hospitable environment for the sorting of information and is a self-organising system. Such things can easily be seen in the living world, such as with shoals of fish all turning at once or ants’ nests working apparently purposefully when the individual worker ants each have only a very limited range of responses, and the brain is a similar system, with each neurone being little more than a logic gate with a modest ability to store information from previous inputs. He then made a claim about attention span, which didn’t seem to use the term in the way it’s generally understood now, and I was unfortunately lost, I hope temporarily, on this point.

A much clearer feature is that the order in which information is encountered changes the pattern perceived, and can make it harder to reorder the information later. For instance, if one is playing a game of Hangman and chooses letters in one sequence, a word might quickly become obvious, but if one had started with a different set it might be considerably less so. “-A-A-A-A-A-A” probably suggests “taramasalata” to a lot of people, but “T---------T-” probably doesn’t. If the same information is deliberately presented in a different order, it may suggest a different solution, or a solution where none was visible before. Jokes often rely on such things, though there is always a switch back to a serious mode, or there ought to be. Unfortunately, this doesn’t always happen. I once told someone the pyramids were supposed to have been built with the points at the bottom and the base at the top because the builders got the plans upside down, then discovered several years later that she had taken me seriously until she started an archæology degree. In another example, someone learnt the wrong physical examination technique due to a joke by their tutor, which could’ve had quite serious consequences. Poe’s law also applies here – one often can’t tell if people are joking or being satirical on the internet, and shifts in pattern recognition can occur as a result, but not necessarily positive ones.
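The Hangman example can be made concrete with a small sketch. This is my own toy illustration, not anything from the book, and the four-word “dictionary” is a hypothetical stand-in for a real word list: the point is that one pattern of revealed letters pins the word down uniquely, while a different choice of revealed letters leaves it ambiguous.

```python
# Toy illustration of how the choice of revealed letters constrains a
# Hangman word. The word list is a hypothetical stand-in for a dictionary.
WORDS = ["taramasalata", "thunderbolts", "abracadabras", "championship"]

def matches(pattern: str, word: str) -> bool:
    """True if `word` fits the Hangman pattern, where '-' marks an
    unknown letter. Revealed letters must match their positions, and an
    unknown slot must not hold a letter that is already revealed
    elsewhere (it would have been revealed there too)."""
    if len(pattern) != len(word):
        return False
    revealed = set(pattern) - {"-"}
    for p, w in zip(pattern, word):
        if p == "-":
            if w in revealed:
                return False
        elif p != w:
            return False
    return True

def candidates(pattern: str) -> list[str]:
    """All dictionary words consistent with the pattern."""
    return [w for w in WORDS if matches(pattern, w)]

print(candidates("-a-a-a-a-a-a"))  # ['taramasalata'] – pinned down
print(candidates("t---------t-"))  # ['taramasalata', 'thunderbolts'] – ambiguous
```

Revealing the As identifies the word immediately; revealing the Ts, in this little dictionary, still leaves two candidates.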

Pattern recognition speeds up identification and reaction, but also has a number of disadvantages. Patterns tend to fixate and cannot easily be altered, or new patterns can’t be as easily perceived. Change is difficult – to use a psychotherapy cliché, you “have to want to change”. Conversely there is also the paradox of change, where it takes place when one pays less attention to it. There are also butterfly effects, although these can be positive. Anything resembling a standard pattern will be perceived as such. For instance, it turns out to be notoriously easy to be misdiagnosed with schizophrenia and psychological researchers have done this to get admitted to mental hospitals even though the symptoms they described didn’t fit the diagnosis. Established patterns grow. I would see mission creep as a manifestation of this. When the only tool you have is a hammer, everything looks like a nail. Patterns also shift suddenly rather than smoothly: the rings of Saturn are either made of ice or rock but “can’t” be made of both.

Contrasts between vertical and lateral thinking are then outlined. Vertical thinking is selective, lateral generative. In vertical thinking, some of what one perceives or might perceive without preconceptions has to be rejected. Not so with lateral thinking. Vertical thinking is about being right or wrong, but lateral thinking is about richness of thought content. The imagination is more engaged. Vertical thinking proceeds along a path it has discovered or arrived at, whereas lateral thinking attempts to find many paths. Even when a solution has been found, lateral thinking can continue to look for more options, and once again I’m reminded of the Halfbakery. Vertical thinking is analytical, lateral provocative. Vertical thinking has to be right at every step to be valid, but lateral thinking recognises that being wrong may lead to a better solution in the end. Lateral thinking explores the less likely paths. It hears hooves and imagines zebras rather than horses.

Po

At this point it became clear that I wasn’t easily going to outline the entire corpus of the guy’s thought, so I’ve decided to focus on the Teletubby at the top of this post, so to speak: “po”. In logic, there is truth and falsehood, “yes” and “no”. There is negation. This involves rejection of the alternative deemed incorrect. De Bono introduces a third option: “po”. I couldn’t help but be reminded of the Greek interjection “ποπο”, which is an expression of surprise or dismay, rather like “yikes”, and this may be where he got it. It also called to mind how the Samoan language negates statements: it turns them into yes/no questions, which seems to me to be a form of etiquette. The negation of “it’s raining” (“ua timu”) seems to be “ua timu?” – “is it raining?” – although I may have got that wrong. Whether or not that’s so, the approach taken by such an utterance is less arrogant and more polite than simply saying “no”, because the word “no” often has power, as was shown by Danny Wallace’s book ‘Yes Man’, and although it needn’t be, that power can be quite aggressive. After all, it involves rejection. “Po” is to lateral thinking as “no” is to vertical thinking. He describes it as a “laxative” rather than a “negative”. It might seem at first that it allows for a third truth value, but I don’t think this is the intention, and it doesn’t fit neatly into multivalent logic simply because it doesn’t fit into logic at all. It’s a laxative in the sense that it can get thought moving rather than stop it. It withholds judgement. It can also be used as several parts of speech.

“Po” can be a conjunction. De Bono gives the example of “computers po omelettes”, which places two apparently unrelated things together to allow them or their associations to interact. That conjunction might bring to mind a recipe app which takes as its input the contents of your larder or fridge and gives possible meal ideas as output. It can also introduce a random word. Here the example is “po raisin”. The concept of a raisin is introduced to a discussion to stimulate ideas, perhaps of data compression by “dehydration”, e.g. reversibly removing a major but unimportant constituent of a picture which can be added back in later, or which then becomes more concentrated information that can be used differently. For instance, an image of a mainly black night sky could have the completely black areas replaced by information telling a viewer or program that certain polygons in the image are devoid of content, and consequently asterisms, constellations or star clusters might become more evident. “Po” can be used to signal that what follows doesn’t in fact “follow”, and saves time and confusion by admitting that a particular point was not arrived at by a conscious train of thought, thereby encouraging serendipity. “Po” lets someone be wrong without judgement, because by being wrong one may find a better way of doing something than how it’s always been done. It can protect an idea from judgement: it’s short for “this is probably not true but let’s just pursue it and see where it goes.” It can also alter the problem to see if there’s a solution. Dividing eleven items fairly between three people can be achieved by adding a twelfth item of your own, so that everyone gets four, and then negotiating with whoever receives your item to have it back or to share it on a regular basis; and spacing four trees an equal distance apart can be managed by planting one on a hillock, or in a depression, at the centre of the other three, so that the four form a tetrahedron.
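The “po raisin” dehydration idea can be sketched in a few lines. This is my own toy illustration, not anything de Bono wrote, and the tiny 4×4 “sky” grid is a hypothetical example: remove the dominant but uninformative component (the pure black), keep only the concentrated remainder (the stars), and add the black back in later.

```python
# Toy "dehydration" of a mostly-black night-sky image: store only the
# non-black pixels, from which the full image can be rebuilt losslessly.

def dehydrate(image):
    """Return (width, height, stars), where stars lists the coordinates
    and brightness of every non-black pixel."""
    h, w = len(image), len(image[0])
    stars = [(x, y, image[y][x])
             for y in range(h) for x in range(w) if image[y][x] != 0]
    return w, h, stars

def rehydrate(w, h, stars):
    """Add the black back in: rebuild the full image from the stars."""
    image = [[0] * w for _ in range(h)]
    for x, y, v in stars:
        image[y][x] = v
    return image

sky = [[0, 0, 0, 0],
       [0, 9, 0, 0],
       [0, 0, 0, 7],
       [0, 0, 0, 0]]

w, h, stars = dehydrate(sky)
assert rehydrate(w, h, stars) == sky  # lossless round trip
print(stars)                          # the "concentrated" data: [(1, 1, 9), (3, 2, 7)]
```

Sixteen pixels shrink to two (x, y, brightness) triples; a real scheme would of course use polygons or run lengths rather than single pixels, but the principle of reversibly removing the bulk constituent is the same.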

“Po” has other functions. It can challenge the arrogance of established patterns. It is not po-faced. It might do the same with their validity. It can liberate information to allow it to come together and give new patterns. It can rescue information from pigeonholes. There’s a real-life example of this for me: my surname is unusually short and begins with a rare letter, so my pigeonhole for internal mail used to fill up with rejected missives intended for other people, because everyone assumed nobody’s name began with that letter, and I often had to rescue information from it. Once one has experienced one possible alternative arrangement, “po” can encourage one to search for more. It is never judgemental. It can sometimes be translated as “that may be the best or only way, but let’s look for others”. Hence it can have unintended consequences, and although those may be disruptive, sometimes they’re precisely what one needs.

All of this leads me to wonder what a “po man” would be like. Danny Wallace’s book ‘Yes Man’ tells of his experiment with his life when he became persuaded that he had got into the habit of saying “no” too often. He therefore committed himself to saying “yes” to everything and everyone for a period of time to see what might happen. This included questions like “are you looking at my girlfriend?”, which had interesting consequences. It was later adapted into the Jim Carrey film ‘Yes Man’, although there it was fictionalised – I don’t know how accurate the book is either, of course. It is of course easier for a man to pursue this than a woman – I just want to drop that in. Stopping saying “no” is giving up power, and you might have to start from a position of greater power to do that and have it not devastate your life. The question therefore arises of what the life of a “po person” would be like. What would it be like if every response you gave to a yes/no question aimed to juxtapose apparently unrelated things or opened up possibilities? I don’t know the answer to this, and I would also want any answer to explore other possible meanings of the word.

Po punctures pomposity. It reminds us that apparently inevitable information arrangements may in fact be arbitrary. It counteracts “no” and it heals divisions, and we really need that today. It diverts from the obvious, may provide a tension-relieving laugh or smile like the use of humour to defuse a tense situation, and it prevents overreaction and the swing towards polar opposites.

Although it occurs to me that “po” cannot work on a computer, whereas binary truth values can, I’m not sure that’s true of computers that are not digital or binary. I think this might indicate that the way we use our minds is almost a deliberate imitation of how we imagine computers work. Maybe we’re making ourselves in their image? What if we’re more like quantum computers or analogue ones?

Criticism

To an extent, it’s probably healthy to treat criticism of de Bono’s ideas with suspicion, as he seems to be something of an outsider and may not have too many people supporting his positions within academia. There is also a heady sense of power in judgement and rejection. Even so, it has been claimed that there’s little evidence to support Edward de Bono’s claims. The style of those claims, if not their content, brings Neuro-Linguistic Programming to mind. There is said to be sparse evidence that lateral thinking training is broadly successful. Early studies showed benefits to children with learning difficulties, but it was also tried with Australian Aboriginal children and didn’t help them beyond the area of creative thinking. This seems like a strange criticism to me, since that would seem to be his main focus, and it would be difficult to find an area which wouldn’t sometimes benefit from improved creativity. It’s also been suggested that suspending judgement would slow down or reverse progress.

De Bono didn’t use experiments to produce his body of thought, relying heavily on anecdotal evidence instead. However, sometimes it’s important to do exactly that. As well as pattern recognition devices, we are a species telling stories to ourselves and needing to hear them. Even if lateral thinking is propped up by myth, it still benefits people by enabling us to believe in ourselves more, and it seems worthwhile to protect people from acid rejection and criticism. We need permission to fail and be wrong without that ruining our reputations or lives.

He may also place too much emphasis on an individual’s “aha” experience rather than the communal testing of the idea that follows. That doesn’t always matter though, because sometimes the details of the content matter less than the fact that it exists. Art is art. A particular mural may evoke one set of feelings, but they’re no less or more valid or valuable than those another might have kindled, and a particular piece of music can still be “our tune” as much as another one can be, but any of these could become more memorable or thought-provoking because they were arrived at through lateral thinking. The problem may be when they come into contact with a particular kind of reality.

He’s also not as much of a pioneer as he makes himself out to be, at least in terms of addressing the question of creative thinking. An earlier example would be William James, about whom I’m afraid I know practically nothing.

Last Words

In conclusion, I would say that I do actually currently find the idea of lateral thinking interesting and helpful, particularly as a way of inventing a new means of relating and approaching my thoughts and feelings, although it may also work in other areas. Even if it’s a myth, myths are important and we need them in our lives, and there are many areas where it doesn’t matter if a provocative idea is true or false, and such areas may have positive real world consequences. So I think Edward de Bono made a valuable contribution to the world and wonder if the nay-sayers would benefit from the po-sayers.

Sprot

. . . or “sport” as it’s more commonly known.

Photo by Lukas on Pexels.com

Pexels has just presented me with a number of thought-provoking images regarding this subject, one of which is used above. It also included a rather appealing 2-D aerial view of a soccer pitch, someone doing an asana and a woman in a hijab holding a football. This is probably some kind of salutary lesson to me.

Although it probably isn’t strictly necessary, it helps to be born into a family that’s interested in sport if you’re to grow up interested in it yourself. I never really have been, if sport means an active outdoor competitive team game involving scoring. I’m certainly interested in activity and keeping fit, although more in the breach than in the observance, sadly. I also get little flickers of interest, which are not maintained for long, if I feel a personal connection. Maybe it’s that personal connection which does it for people, so it could be the people you hang out with who achieve this.

As I say, it isn’t entirely absent. I’m not as interested as Sarada is in Wimbledon, and she by extension is also apparently somewhat interested in other tennis, but the experience of coming back home after my O-level and A-level exams and sitting in front of the TV wanting something to take my mind off the worry of how I’d done in them, and therefore watching Wimbledon, did leave a lasting impression on me. To the extent that I have some interest, it probably provides a welcome distraction from the troubles of my life and I’m not entirely detached from it.

As a child, I felt like an outsider for not being interested in it, and since I’m from Kent, that mainly meant cricket. It was just about possible to kindle a tiny, fitful, smouldering flame of interest in cricket for a couple of years back in the ’70s. This was when Kerry Packer was starting to organise World Series Cricket, which was a really crap idea, and I did feel quite passionately opposed to that as a nine-year-old. From this distance, and after decades of neglecting the subject, my recollection of what that was about is quite faint, but it seems to have been about the over-commercialisation of the game. I do vividly remember Ian Botham becoming the first person to score a century and take eight wickets in one innings, today in 1978, and Geoff Boycott scoring the hundredth century of his career at Headingley on 11th August 1977. In fact it sometimes seems like there’s another version of me who could have existed, who didn’t decide this was not part of my identity, which is one reason why the dog has the name he has in this video:

The names Geoff Boycott, Mike Brearley and Ian Botham do in fact mean something to me, but I can’t push it that far. Were I to pretend to be into cricket, it would be an affectation, and it wouldn’t really be me. I wonder if that would be so if my enthusiasm for it had continued past 1978. Maybe it would’ve become part of a completely false persona.

The really big sport is of course soccer, and like many others I felt pressurised into the idea of supporting a particular team back in the day. In fact there were two, at different times. The first was Arsenal because my friend supported them, but everyone saw through my completely abstract and half-hearted affectation so I abandoned it. The other, though, was Celtic, and I was rather more serious about that, although the scope for being a lot more serious and still not being serious at all is considerable. I suspect that my family was on the Other Side as far as the Celtic/Rangers rivalry was concerned, so in a way it was quite subversive for me to be interested in Celtic although our general lack of interest in any kind of sport made that quite arcane.

So far I’ve only talked about being a spectator, and of course I’ve participated unwillingly in a lot of sport at school. My attitude to sport is similar to other people’s attitude to maths: it was something I did at school which I would never be able to apply in adult life, so what was the point? And I have to be honest: I find the idea of supporting a particular team as a spectator alone without also participating in that sport oneself, assuming one is able to, quite odd. I don’t know how big the overlap between playing a sport and watching it is. I presume the pleasures are very different. Anyway, I have played, and with one exception I never really enjoyed any of it.

Like most other Brits, I call soccer “football” in conversation, but I think this may confuse people not from round these parts, which is why I use the more specific term here. In England, calling it “soccer” is a kind of class thing, and when I call it that it also feels like an affectation, like calling a lounge a “living room”, though I do in fact call it that. It was notable that when one of my teachers went into hospital to have an operation on his knee, every other pupil who wrote to him said they missed him supervising their football on the playing field except me. It just wasn’t on my radar.

The other sports were rugby, tennis, rounders, hockey and basketball. I don’t recall playing any other kind of competitive team sport at school. Rugby suffered from an immediate problem caused by the teacher asking the wrong question in my first lesson, when he asked the whole class “hands up who knows how to play rugby”, and I, being in the centre of the group, was invisible when I was the only person not to raise my hand. Consequently I didn’t learn then, still haven’t learnt and spent the whole time walking and running around completely bewildered, but since I lacked any interest in it anyway I didn’t bother to do anything about his misconception. I don’t understand what scrums are about, for example, except that they’re unpleasant and sometimes painful.

Playing soccer generally involved getting freezing cold on the pitch because of there being no apparent way to get at the ball. Again, there was a lack of motivation to do so anyway. I did score one goal though. Actually I scored two but the other one was set up for me by friendly people on the other team. Rounders made zero sense to me. Its chief interest is its similarity to baseball, because of how seriously that game is taken in the US and Japan, but not at all here, and rounders itself is just a kid’s game which doesn’t go anywhere much in adult life so far as I can tell. As to why that might be, I do not know. Nor do I know what the difference between rounders and baseball is.

Basketball I disliked because it involved getting your hands dirty. Also, slightly like the rugby situation, in my first basketball lesson the teacher told us to do “lay-up shots” and none of us had any idea what he meant, and I still don’t know. I did have a slight interest in its predecessor Pok Ta Pok, and remember attempting to play that with some friends, but since I’m still alive I obviously didn’t follow all the rules and the appeal there is exoticism. It’s possible that if the sports hall had been cleaner I wouldn’t’ve hated touching the ball so much, and I presume that others weren’t as bothered by getting dirty. In one rugby session, the teacher had us all roll in the mud to ensure that we weren’t put off by the prospect of getting dirty, which seemed very unfair on the people who were going to end up washing our sports kit. It was also disgusting. Tennis was marginally better than the sports I’ve mentioned so far but I didn’t really understand the rules at the time and when we played doubles I seemed to end up being completely uninvolved. As soon as something starts to become a team, it stops being playable. Other people have got the ball and I can’t get a foothold.

Cricket was mainly just boring. Fielding was less unpleasant because there was little involvement in the game for most people most of the time anyway. Batting and bowling were practically impossible. In spite of knowing how to bowl, every ball I’ve bowled has been wide. However, I did exercise some sporting behaviour when playing cricket, because I was once asked to lose deliberately by accidentally on purpose knocking the bails off the stumps and refused to do so. This was because the next person to bat wanted to show off, and it didn’t seem like teamwork to me to do that. I personally couldn’t care less if my team won or lost but other people did. It was literally not cricket. And it seems strange to me now that my attitude to cricket had changed so much from what seemed like honest involvement in the game as a spectator to not giving a fig in so few years.

The one exception to all this was hockey, or “field hockey” as it would be known to many people outside Britain. I was actually, bafflingly, both really good at and enthusiastic about this sport. This came as a great surprise to everyone, not least myself. However, I was unable to indulge this enjoyment: hockey was deemed a “girls’ game”, boys at my school played it for literally a few weeks out of the entire time I was there, there was no boys’ hockey team and so forth, so it went nowhere. I mean, I couldn’t care less now, but it still seems that this was an example of the problems caused by gender or sex segregation in competitive sport, and this is where I’m going to have to break with the usual demarcation I exercise between blogs and talk about trans issues in sport.

Trigger warning: gender identity issues – skip to below if this isn’t your cup of tea.

Okay. Throughout my time at school, I was one of those weedy people who was the last to be picked on any team. Rather than seeing this as unfair, I embraced my role as a nebbish. In short distance races I literally always came last except for one occasion when I came second to last. In athletics, I couldn’t compete with anyone: long jump, high jump (and I’m quite tall by the way), shot put, discus, javelin, you name it, I was invariably the last in my group. And I wasn’t physically inactive. I walked many times the distance most of my peers did on a regular basis and the one thing I was good at which I was actually allowed to do was long distance running. It just seemed to be impossible to gain the strength or muscle bulk any of my male contemporaries managed.

I don’t have a complete answer for the issue of trans sportspeople, but I am aware that trans women in sport, considered as a group, perform significantly less well than cis men. As far as my own performance is concerned, two factors were involved. One was that I really could not care less about competitive sport, which seemed an alien world to me. I still don’t understand, most of the time, why I should care, except to the extent that it could create common ground with other people who are more emotionally involved. I would expect the majority of gender dysphoric people assigned male at birth to take a similar attitude. The other was that I was simply physically weaker than practically any boy, so attempting to compete would be futile, even given an attempt to build up muscle bulk and the like, which in any case felt very “wrong” for my body and interfered with suppleness, which was important to me. If it turns out I am typical in this respect, I can’t really see why trans women in sport would even be an issue. I also struggle and fail to care about the entire realm of sport, which seems to me a tiny niche interest with no consequences or point, so what’s the relevance of this whole controversy going on in a realm which really doesn’t matter anyway?

Rant over

What I’m about to say could easily be taken the wrong way. Nonetheless, there is a tendency for various policies to be grouped together and rejected out of hand because of their associations. This happens with Gaddafi’s régime in Libya, which I would be the first to condemn. That said, in one particular area he did seem to get something right, and that’s sport. Gaddafi regarded sport as pointless and uninteresting, and football players were referred to in his media by number rather than name so that they did not achieve celebrity status (and I’m sure this failed, by the way). He also disapproved of football supporters because he didn’t see them as participants in the game itself but as mere spectators. His view was that people should participate in sport directly rather than watching it, and I may have misremembered, but I think he handed over many professional sports facilities to the general public to this end. I am not interested in trying to defend the man’s political record, and I consider cults of personality in particular to be idolatrous and a distraction from real issues in a similar way, in fact, to Muammar al-Gaddafi’s own view of competitive sport, but he happens, I think, to have got this particular point 100% right. That said, there’s no need to impose this view on anyone else or adopt it as part of a national governmental policy.

Then there’s the question of football leagues. For some unknown reason, the league divisions have been renamed, so that what is now called League One is actually the third tier, which must be thoroughly confusing to an outsider. Now here’s the thing. I do, to my surprise, care a bit about Leicester City winning the FA Cup. I am actually glad that my local team broke the trend of massive impersonal football clubs winning the Premier League in 2016 and that they won the FA Cup this year. I’m also glad Leicester Tigers, the local rugby club, is successful, because my grandchild exists because of that. I can’t bring myself to care about football or other sports in general, but I recognise that it gives millions pleasure. I do have a couple of issues with the nature of the support, though. I don’t understand how fans can maintain a personal connection with a local team which has non-local players in it, because I would imagine that the local connection is the most important thing about supporting a team. For similar reasons, I don’t get why people support distant teams unless those teams perhaps have members with some personal or local connection to them. These are rudimentary opinions which I could possibly get worked up about if I had any interest in sport, but of course I don’t.

In conclusion then, I don’t in any way despise people for either playing or supporting competitive sport, but I definitely don’t get it most of the time, and I usually have to force myself to care. That said, I did very much care when my local football team won, although I don’t quite understand what it won, because there seems to be a distinction between the league championship and the FA Cup. As far as Wimbledon is concerned, I can see the appeal of escaping into a realm which is separate from one’s own, a place where things matter precisely because they don’t matter, and I can also see the value of international competitions of that nature as an alternative to war. Finally, I can see that there could’ve been a different path in my life where I really did care about sport, but it probably would’ve been quite inauthentic and I’m glad I didn’t take it. And finally finally, I don’t think it was remotely sensible that I was discouraged from participating in the one sport about which I was actually enthusiastic and encouraged to play those I couldn’t care less about.