Baby Cavies

This post is about guinea pigs.  Partly.  It’s also being written in the teeny box of the app because it’s kind of a spontaneous thang.

There seem to be two major reproductive strategies, sometimes called r-selection and K-selection, although the neatness of this division is apparently disputed. At one extreme, organisms produce millions of offspring in one go and die immediately afterwards. Almost all of those offspring die before being able to reproduce and there’s no parental care, but a few make it through and do the same thing. At the other extreme, organisms produce very few young and devote a lot of time to parenting them, so reproduction isn’t the end of their lives. They may produce litters or single young, although on the whole I think most animals who usually have one baby at a time also occasionally have twins or multiple births. From the viewpoint of replacement, though, animals who reproduce sexually must on average produce at least two surviving offspring, or the population shrinks. In extreme cases, the reproductive period may even end long before the animal dies of old age, as happens in humans.

Okay, now you’re gonna get your guinea pigs.

Apparently nobody knows why they’re called guinea pigs, because they have no association with either Guinea or New Guinea. I wonder whether at some point they cost a guinea each, or whether they’re like turkeys: considered to come from somewhere exotic by Western Europeans and therefore associated with an arbitrary distant land. The German name is Meerschweinchen, literally “little sea-pig”, and I don’t know why they’re called that either. The other name, cavy, is from the Tupi saujá, which means “spiny rodent”, which they definitely aren’t – and in fact even the form of the name isn’t that close, for reasons unknown (to me). I’m going to call them cavies because the name “guinea pig” freaks me out a bit, since I don’t know where it comes from, although “cavy” is almost equally weird.

There are, as far as I know, three main suborders of rodent: the mouse-like muriomorphs, the squirrel-like sciuromorphs and the cavy-like caviomorphs. This has probably changed since I learned it, due to the revolution in taxonomy which resulted from advances in genetic sequencing, but it remains the case that in terms of number of species and population, rodents are the most successful order of mammals. If mammals survive the current mass extinction at all, they will, and they could therefore end up being the last mammals of all. The first mammals were not rodents at all, but they were very rodent-like in form, because in a way the default body plan for mammals is to be rodent-like, as seen in many marsupials and also in shrews, golden moles and others. Multituberculates, arguably the most successful mammals ever, were also rodent-like, and it’s theorised that they became extinct partly because their litter sizes were smaller than rodents’.

Cavies as we know them never existed in the wild. They are closely related to a wild cavy found in the Andes, and I presume they have rapidly evolved to become reproductively incompatible with it. They can reproduce at the age of five weeks and produce two to four precocial pups, but the gestation period is fairly long at around two months. I should probably explain “precocial” versus “altricial” at this point. Cavies clearly do practice parental care, and when an animal does this the evolutionary option exists for young to be born before they are anything like fully developed. Such young are known as “altricial”. Humans do this, and it’s also common among birds, although ratites such as ostriches and tinamous don’t, so probably the first real birds didn’t either. Egg-laying mammals, however, do produce altricial young, and so do most rodents. Guinea pigs are unusual, compared at least to muriomorphs, in that they produce “precocial” young who are born furry and a little more independent than, say, baby mice or hamsters. They cannot, however, churn out massive litters over and over again like muriomorphs, and probably for this reason, although they do sometimes eat their pups, it’s relatively rare. There is a stark disposability to many glires offspring. Okay, I’ll explain glires too.

The glires are the superorder comprising lagomorphs (rabbits, hares and pikas) and rodents; together with the treeshrews (Scandentia), colugos and primates they form the Euarchontoglires. I have a thing about the insistence that lagomorphs are technically not rodents, because there is no definition of what constitutes any clade above the species (a breeding population): all those families, orders, classes and the like are individually defined, but there is no criterion at all which determines which rank any of them belongs at. Therefore, either get rid of the idea of rodents or plonk lagomorphs together with ’em. I have slightly more sympathy with the idea that tupaias (treeshrews) are separate from rodents.

But anyway, rabbits breed like rabbits, and consequently it makes biological sense for them to end up eating their young. Not all archontoglires (including primates) do this. To quote Willy Wonka, “But that is called cannibalism, children, and is in fact frowned upon in most societies.” Humans generally have a taboo about eating babies. Not so most other archontoglires.

Caviomorphs, who I understand we’re now supposed to call “hystricomorphs”, were originally from Afrika like a lot of other animals, and got to South America by floating across on vegetation back when the Atlantic was narrower, during the Eocene. At the time, they were the only placental mammals in South America other than bats and xenarthrans (sloths, anteaters and armadillos, who on some analyses are the sister clade of all other surviving placental mammals and differ from the rest of us in interesting ways: they tend to be bulletproof, have a lower metabolic rate and an unusually large number of ribs), so they were able to radiate into all sorts of forms which would have been unfeasible in the rest of the world, such as becoming capybaras, who are basically rodent hippos. The largest caviomorphs, relatives of the capybaras, were the size of small cars, but those died out a long time ago, probably when the Isthmus of Panama formed (that’s a guess).

Like all placental mammals, cavies lack epipubic bones, don’t lay eggs and suckle their young through nipples. In their case they have only a single pair of nipples, and if a sow doesn’t become pregnant while fairly young, her pubic symphysis can fuse, making it impossible for her to give birth vaginally. This is actually quite similar to the situation in non-eutherian mammals generally, whose epipubic bones constrain the expansion of the abdomen and make it impossible for them to carry well-developed young to term. And so we ask ourselves, how did we get here? How did we get into a situation where humans give birth vaginally to mainly singleton altricial babies and suckle them from a pair of pectoral nipples? What does it mean about our society? I haven’t filled in all of the second bit yet.

Humans are synapsids. We are descended from animals in the late Carboniferous who arose more or less directly from amphibians, or rather from a vague group of vertebrates which included forms ancestral to the animals now called amphibians. We are not sauropsids – “reptiles” or birds. Synapsids have tended to go to considerable lengths to regulate their temperature, for instance by having large sail-like fins on their backs to absorb the sun’s heat and radiate it away again by letting the wind blow past them. The descendants of these animals began to use chemical reactions which created more heat than they absorbed, sometimes even running apparently pointless cycles, because doing so raised their temperature above the ambient. It’s worth bearing in mind, incidentally, that over most of the history of the synapsids the struggle would often have been to keep cool rather than warm, because of the climate, so in a very real sense it’s the mammals and their ancestors who were cold-blooded and the “reptiles” who were warm-blooded.

There are various ways in which synapsids regulate their temperature, one of which is sweating, and sweat carries antibodies, which are made of protein. Small babies hatching from eggs, needing to generate their own heat or keep themselves cool, would need to be curled around by their parents. One of the distinctive things about synapsid spines is that they can roll up – they can bend backwards and forwards as well as sideways. Hence parents could keep themselves warm and their young too, at a point when the small size of the young’s bodies meant they gained and lost heat very easily.

Imagine, then, sweaty things down burrows in the Permian, at a time when practically all the land formed a single continent stretching almost from pole to pole, three times the size of today’s Eurasia – Pangaea. Such a vast continent would be mainly desert, simply because so much of it would be so far from the single ocean, Panthalassa. Deserts away from the poles are hot during the day and cold at night, due to the lack of cloud cover. These sweaty things down their burrows would have had to huddle very close, and in doing so the young would lick their parents’ sweat for salt, and to educate their immature immune systems with antibodies against the infections their parents had previously acquired immunity to. Later they’d derive protein and other nutrition from the perspiration as well, and so was suckling born. Young duck-billed platypodes and echidnas still suckle by licking milk secreted onto their mothers’ skin.

It’s easy to think of those last two mammals as primitive, but in fact monotremes, for such are they dubbed, have unusually large brains for their size. They are, however, unusual among the mammals living today, and they did split off from the other mammals very early. From today’s perspective, it makes sense to look at monotremes as one group, and marsupials and placental mammals as another, since the last two share much more recent common ancestors with each other than either does with the first.

One of the unusual things about monotremes is that they have venomous spurs on their hind feet – they are venomous mammals. There is a sense in which other mammals are venomous too, because our saliva can infect and kill other animals we bite. This is actually quite like the venom of snakes and even more like that of Komodo dragons, both of which have been said to get their toxins from bacteria living in their mouths, although Komodo dragons at least are now known to have true venom glands. Human bites are, after dog and cat bites, the most common bites leading to medical emergencies, and are usually inflicted by children. One hand infection in three is caused by a human bite, and cases of limb amputation, necrotising fasciitis and even death from infection have been reported. Platypus venom, however, is another matter, unlike snake venom or the toxins produced by salivary bacteria. It derives from modified immune-system genes and causes a drop in blood pressure with no necrotic effects. It also contains a right-handed (D-) amino acid, a rarity in mammalian proteins. Females have the spurs when young, as do echidnas, but in both they’re vestigial. The venom is also secreted in the tears; it can kill smaller animals and causes pain lasting months in humans. Since similar spurs have been found in Mesozoic mammals, it seems reasonable to assume that they too were venomous. Questions of gender roles arise in my mind here. Like mammary glands, venom glands are related to the immune system, but whereas milk is nurturing, venom is destructive and defensive. I’m imagining male animals going out hunting, or fighting over females, using their venom.
Some multituberculates had spurs, as did Zhangheotherium quinquecuspidens and Gobiconodon, and although so far it hasn’t been possible to show that these spurs were venomous (some mammals today have spurs which are not), they are thought to share their origin with those of monotremes. Just as a small animal today, such as a wasp or a weever fish, may need venomous defences, it’s easy to imagine that mammals needed them too at a time when they tended to be underdogs.

Looking at their genomes, the common ancestors of monotremes and therian mammals seem to have lived around 210 million years ago, meaning that early mammal fossils such as Morganucodon and Megazostrodon are about twenty million years younger than the group itself. Prior to that, synapsids were non-mammalian. Growth rings in the teeth of the earliest fossil mammals reveal another surprise: they probably weren’t “warm-blooded”. Today a mammal the size of Morganucodon could be expected to live a year or two, but they seem to have had a much longer life expectancy, of up to about fourteen years, which is similar to that of a living reptile of the same size. This is all the more surprising given that they were already producing sweat and apparently regulating their temperature that way, so maybe some mammals actually lost the ability to generate their own heat internally. This of course contradicts the “whig prehistory” assumption that everything is trying to turn into a human. In fact endothermy requires small animals to work like anything to get enough food: there are shrews who need to eat their own body weight in insects and other protostomes more than once a day. Ectothermy, by contrast, is a very efficient way to run a body, particularly where there are easy external sources of heat, as there would have been in many parts of the planet in the late Triassic.

To monkey sensibilities such as our own, particularly in the richest parts of that same planet in the early twenty-first century of the Common Era, the disposability of muriomorph rodents seems most disturbing. A house mouse can be expected to live well under a year in the wild, although a genetically modified mouse living in captivity can live up to five years, and a mouse whose genes have not been directly tinkered with up to four, given a sufficiently friendly environment. They produce up to fourteen litters a year of up to ten young, although the average of both is much lower. At those maximum rates a house mouse could produce a hundred and forty children in a single year, and assuming half of those had themselves been reproducing for a year, that could mean almost ten thousand descendants by its second birthday. In practice, of course, most of them would have died by that point, which in turn means that life is cheap, and they may well have died because their parents ate them. As humans we find that harder to handle, but even for us conditions used to be a lot harsher. I am one of seven children, three of whom died and two of whom were adopted in; this is unusual for a mid-twentieth century British family, but not so much a few generations ago, or in another part of the world today, and it can lead to a certain lack of emotional engagement with the youngest children for one’s own emotional protection. Nonetheless, the need for that emotional protection implies that children matter a lot to us. They also matter to other mammals and birds, and it’s important neither to anthropomorphise nor floccinaucinihilipilificate that in other species.
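Since I’m throwing numbers around, here’s a back-of-envelope sketch of that mouse arithmetic. The litter figures are the theoretical maxima quoted above, not averages, and the “half of the children breeding for a year” assumption is mine, purely for illustration:

```python
# Back-of-envelope mouse arithmetic, using the theoretical maxima:
# up to 14 litters a year of up to 10 pups each.

LITTERS_PER_YEAR = 14
PUPS_PER_LITTER = 10

# Children one mouse could have in a single year at maximum rates.
children_per_year = LITTERS_PER_YEAR * PUPS_PER_LITTER  # 140

# Suppose half of those children have themselves been breeding
# for a year at the same maximum rates.
breeding_children = children_per_year // 2              # 70
grandchildren = breeding_children * children_per_year   # 9800

# Children plus grandchildren: "almost ten thousand descendants".
descendants = children_per_year + grandchildren         # 9940

print(children_per_year, grandchildren, descendants)
```

Real mice, needless to say, fall far short of these ceilings, which is rather the point: the gap between the theoretical maximum and the actual population is made up of dead mice.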

Another notable aspect of most mammals alive today is that we are born surrounded by at most a thin set of membranes and, in the case of placentals, are retained within our mothers’ bodies for a relatively long time. In humans that retention is actually shorter than it might otherwise be, because of our proportionately large heads, which necessitate fontanelles even so and are one reason we’re altricial at birth. But the question arises of how most mammalian embryos came to be retained rather than being laid in eggs – how viviparity arose.

Viviparity has evolved independently many times among vertebrates, for instance in sharks, bony fish, amphibians and reptiles. Birds always lay eggs – I presume because of flight, although it also applies to flightless birds. The closest to an exception are the kiwis, who lay the largest eggs in proportion to their size and whose chicks hatch in an unusually well-developed state. In the case of humans and other placental mammals, the origin of the placenta seems to be viral.

As viruses work by using their host cells to help them reproduce, they sometimes write their own genes into host DNA to do so. If this happens to a spermatozoön, ovum or zygote and the cell concerned survives, those viral genes will be present in the genome of most or all of the nucleated cells (or their mitochondria, I imagine) of the organism concerned. Around eight percent of the human genome consists of viral genes, and in fact it’s theorised that ultimately the entire genome of most living organisms may originate from viruses, though it will clearly have evolved since. Leaving this possibility aside, most viral DNA in our genomes has stopped working completely, but the way placentae work is different and resembles an infection. The ball of cells all mammals start off as implants itself in the wall of an internal organ and bathes itself in the mother’s blood. This would normally provoke an immune response, but doesn’t, because the fetal cells fuse, preventing white blood cells from getting a purchase on them. Such a fused mass is known as a syncytium, also found for example in respiratory syncytial virus infections, where it protects dodgy cells from being attacked and enables them to produce more viruses. The fusion is made possible by a viral protein called syncytin, previously used by viruses to bind with cells.

Given that both marsupials and placental mammals have placentae – in the former case a rudimentary one formed from the yolk sac, and in some marsupials such as bandicoots even a more sophisticated placenta shortly before giving birth – this must have happened before they split from each other about 160 million years ago, towards the end of the mid-Jurassic. Hence at some point, as if it wasn’t bad enough to scratch a living dodging the feet of thirty-ton Diplodoci, some ovum or zygote was infected with a virus. Instead of developing a shell and being pushed out, this egg started to invade the wall of the mother’s reproductive system and got stuck, only later being ejected.
It probably happened to a lot of mammals, and a lot of them probably died in childbirth or underwent retention of dead fetuses which prevented them from reproducing, or maybe they just died straightforwardly of viral infections, but after this pandemic, a new kind of reproduction had begun to evolve and those mammals are our direct ancestors.

Bringing this back to the present, it’s interesting to note that we owe our existence as placental mammals who bond intergenerationally and invest our time in parenting to the extent we do, to a viral pandemic, and the hope remains that the current viral pandemic will lead to a similar leap in social evolution in the near future. A costly leap, but considering the price we are paying, we may as well get value for money out of it.

Cavy Babies

This is not about guinea pigs.

Sarada and I are now grandparents, and as such we’re watching our grandchild pass the usual milestones. This is of course a blessing not everyone has because not all children survive infancy and of those who do, some are learning disabled. I try not to forget our tremendous luck that so far, all of our descendants are still alive and have the kind of abilities associated with humans. It also makes me wonder about the past of the human race.

We generally can’t remember much about our infancy, and what we do seem to remember is often confabulated, because human memory is not so much a store-and-recall process as a recall-and-rewrite process. It seems to be more important that we seem to remember things than that we actually do, and the further back you go, the more your repeatedly recalled memories are likely to have been distorted by the creative, active revision process that is our apparent remembrance. Fortunately, we often have parents and siblings around us who can help us gain accurate information about our early lives, although they too can be influenced by family mythology. Since the invention of writing it has become easier to make accurate records of the past which don’t depend on particular human beings being around to recount the events, although writing was probably just an early stage in the steady outsourcing of our memories, of which the likes of Google constitute a more recent one. Left to our own devices, though, we don’t remember much about our early lives, and much of what we do remember has been rewritten. We create our own mythologies about our infancy.

This is similar to human prehistory. Thinking of the Greeks, never far from my mind right now, the likes of Hesiod set down the general idea that before the current Iron Age – which is incidentally still current – was the Age of Heroes, corresponding to the Greek Dark Ages, preceded by the Bronze Age, itself preceded by the Silver Age and, before that, the Golden Age. This is partly accurate, and confirmed to some extent by archaeological findings. I’ve talked about this elsewhere.

When our granddaughter, watched by us over videophone, lifts herself up into an upright position, having also recently begun to crawl, it makes me wonder about the past of the human race. At some point in the Pliocene our ancestors began to walk upright, possibly after having hung down from the branches of trees or, according to Elaine Morgan, having needed to step into deeper water to escape predators or find food – the situation, in fact, in which other apes also stand upright. There are three changes involved in human bipedalism, one of which is the angle of the pelvis and the other two of which I’ve forgotten, each contributing about thirty degrees of the shift away from quadrupedalism. Is our granddaughter recapitulating that early human history, or rather prehistory? We see her reach up for a toy rather than a branch carrying fruit, but that’s fruit for her, and for all we know, maybe there was just something really interesting on that branch which our great-great-….great-grandmother seriously wanted to take a look at. Before walking, babies often go through a crawling phase, which seems rather odd to me for various reasons. One is that they take longer to learn to crawl than they actually spend crawling. It’s been said that one reason they take so long is that they don’t see as many examples of people crawling around them as they do walking, so they have to invent it for themselves, but it still seems rather odd that they bother to do it at all. I suppose it keeps them busy, although they get really frustrated by it too. When they do crawl, they always seem to do it on hands and knees, which makes sense but also makes me wonder, because I’m not aware of any other quadruped who moves around like that, including those whose hind legs are longer than their forelimbs; even animals whose normal gait is bipedal don’t walk on their knees.
Neither does that family in the Middle East who walk on all fours for neurological reasons, whose gait may be connected to a former evolutionary stage in apes generally. Consequently, what babies do may not be an accurate representation of our past.

One thing hearing babies are said to do is produce a very wide range of speech sounds; it’s even been claimed that a hearing baby with a typical human vocal tract will produce every sound in every language during the babbling stage. Having listened carefully to our own children, I’m fairly sure this isn’t true. They did produce a wider range of sounds than are present in German and English, but there were plenty of sounds I never heard either of them utter. More evidence that the phonological inventory of a babbling baby is smaller than the sum of those of spoken human languages comes from research showing that babies adopted at birth still pick up the spoken language of their birth mothers more quickly – that is, if adopted into a family speaking that same language – than a language foreign to them. This strongly suggests that the fetus listens in utero and has already started to divide up the speech sounds it hears in terms of a particular spoken language, although whether this is reflected in babbling is another question entirely.

I recently commented on this blog on the slightly disturbing tendency for spoken languages to simplify in inflection and phonology as time goes by. For instance, Greek lost its distinctions between η, ι and υ, and between ω and ο, and started to drop its aitches ages before Cockneys were even thought of. English likewise merged I and Y before the Norman Conquest, leading to the familiar mediaeval tendency to spell words with interchangeable Y’s and I’s, and within my own lifetime the distinction between WH and W has been lost and dark L has become a vowel or semivowel. This phonological simplification influences inflection. For instance, because schwa has taken over practically every unstressed short vowel in English nowadays, the older distinction between final -u, -a and -e in noun endings has been eroded and lost, and in any case there’s a general tendency towards levelling in inflection anyway. Again, as I’ve said before, if you wind the clock back far enough you end up discovering languages which are much harder to pronounce and which have much more complicated grammar – on the whole, though not exclusively: Greek’s ancestor didn’t distinguish the passive and middle voices, but Ancient Greek did.

Due to this general trend, one is left imagining some prehistoric stage when people just babbled at each other and hoped for the best. If you regularly read this blog, you probably wonder whether that stage ever really ended! However, people other than me can express themselves to each other a lot more clearly than I seem to manage. It’s still confusing though, because one would expect more complex languages to take longer to learn and since people didn’t live as long back then, having to pick up a really complicated language would probably have taken up a relatively much bigger part of their lives.

One feature in particular never seems to have survived into any official national language of the past few centuries: polysynthesis. There are certainly languages, such as Swahili, which inflect their verbs according to subject and object, meaning for example that it’s entirely possible to express something like “she used to visit me” in a single word, but there’s a more complex stage beyond that, where something like “they wouldn’t easily let themselves become Greenlanders” or “the praising of the evil of the liking of the finding of the house is right” can be expressed in a single word. Ainu, spoken in northern Japan and previously Sakhalin and now practically extinct, evolved from a polysynthetic stage into a simpler form during recorded history. I know practically nothing about this, but I do wonder why polysynthesis never appears in the widely spoken or national languages of the twenty-first century.

The Caucasian languages are characterised by complex grammar and very large numbers of consonants compared to most other languages. Linguists who have attempted to reconstruct prehistoric languages tend to produce something rather similar in that respect to one of them, Georgian, although such reconstruction is now widely thought to be a futile exercise. That said, it does seem likely that the complexity of Georgian’s grammar and phonological inventory is fairly representative of a spoken late Palaeolithic language.

It used to be thought that the Caucasian languages held the record for the number of consonants in any spoken language, but this has turned out not to be so. That record seems to belong to the click languages of Southern Afrika. Western linguists realised that what they had previously heard as a small number of clicks were in fact each several distinct sounds, and that realisation, along with a number of other subtle distinctions between speech sounds of other kinds, has meant that the language with the most consonants is in fact !Xóõ, with seventy-seven. I’ve tended to call this language !Xo, so I’ll carry on doing that, mainly because it’s easier to type on an English keyboard. Click consonants do occur outside Afrika: there is a ceremonial register of an Australian language which uses them, and they’re also used to express the negative in the Balkans and irritation or affection in English, though not as parts of words. Only in Afrika are they found as ordinary phonemes.

It used to be thought that there was a “Khoisan” language family to which most click languages belonged and whose members were somewhat related to each other. Even then, though, it was acknowledged that several Bantu languages such as Xhosa and isiZulu had their own click consonants and were not related to !Xo, !Kung, the gloriously named //au//’e and the rest. More recently it’s been recognised that most of the click languages are unrelated to each other except insofar as they all have clicks, and that they seem instead to form a Sprachbund like the Balkan languages, sharing features because they’re spoken in the same area and have become more like each other. I suspect that the languages currently spoken in these isles also form a Sprachbund, because for example they all have a circumlocutory way of expressing verbs, but maybe not.

The click languages are probably one of three examples of “language families” which are merely convenient groupings based on geography rather than genuine relatedness. The other two are the Papuan languages of New Guinea and the Australian Aboriginal languages, the latter of which incidentally also share various features, such as the absence of fricatives (e.g. S, F, TH, V, Z) and a special form of the noun expressing fear of the thing referred to. In both of those cases, the languages have been spoken in their regions for such a long time, in relative isolation from each other, that if they were ever all descended from a single ancestor it’s no longer possible to trace them back that far. I would suggest that the situation with the click languages is somewhat different. As I’ve said before, the human population of Afrika is genetically the most diverse, so it’s not a huge exaggeration to say that the world consists of a number of ethnicities in Afrika plus another one inhabiting the rest of the planet. In fact this isn’t quite true, because the genetics of North Afrikans tends to be quite close to that of various peoples elsewhere around the Mediterranean and in Western Asia, and there’s quite a bit of variation in Central Asia too. Even so, there’s a lot of variation in Afrika, particularly south of the Sahara.

This variation, I think, is probably a clue to the nature of the click languages. The reason Afrikan populations vary more than the rest of the human race is probably that the species has spent much longer in that continent than elsewhere, so they’ve had longer to evolve, and there are also no geographical bottlenecks like Sinai and Gibraltar. Incidentally, Afrikans are also genetically the purest examples of Homo sapiens, the rest of us having Denisovan and/or Neanderthal ancestors as well as other H. saps. I think the same is likely to apply to Afrikan languages.

What I think is happening with Khoisan languages is that far from being a family or Sprachbund, they are in fact relatively conservative descendants of Palaeolithic languages which retain the wide range of different consonants which earlier spoken languages had because they were closer to babbling. This is not in any way a negative comment on the languages concerned. On the contrary, the difference is that Khoisan languages are more phonetically sophisticated than most other adult speech in that respect. They are probably ultimately related, but they’re also basal.

To illustrate what I mean, and the mistake which I think has been made here, I want to point out three other examples of this happening, two from biology and one from comparative linguistics. Flowering plants used to be thought of as divided into two main taxa: monocots and dicots. Monocotyledons have parallel venation, no tap roots and are never true trees, among other things. Dicotyledons have tree-like branching veins, tap roots and are sometimes trees. It turns out that the monocots are a genuine clade but the “dicots” are not: several dicot-like lineages branched off before the monocots did, so the dicots form a scattering of separate branches rather than a single group – in other words, the very course of their evolution resembles the nature of their veins. Those early-branching lineages are basal, descended from the earliest forms of flowering plants. Vertebrates have done something similar. There are synapsids (mammals), sauropsids (“reptiles” and birds), amphibians and various kinds of fish, including eel-like jawless fish such as hagfish and lampreys. On some analyses, the hagfish and lampreys are only about as closely related to each other as they are to the rest, with the hagfish in particular only distantly related to all other vertebrates, who form a more closely related bunch. Finally, the linguistic example is found in our own Indo-European language family, where it used to be thought that there was a deep split between SATEM languages such as Polish, Bengali, Armenian and Albanian on one side and KENTUM languages such as English, Greek, Tocharian and French on the other. Once again, it turned out that one side is not a real group: the SATEM languages share a relatively recent innovation, whereas the KENTUM languages merely retain the older state of affairs and are only distantly related to them and to each other.

This is what I think click languages are evolutionarily. Before humans left Afrika, we spoke a large range of languages with lots of different sounds in them, probably often including clicks. These were the putative “what the heck are you talking about?” languages which were, in a good way, closer to babbling than most of the languages spoken today. The rest of the world’s languages became simplified and easier to learn and pronounce, but some of the languages of Southern Afrika, although they diverged enormously from one another, retained their large phonological inventories, and these are the click languages. Interestingly, the highest incidence of albino humans is found in the same area as the click languages are spoken, and I think this reflects the genetic and linguistic diversity of the human population of Southern Afrika.

In closing, I want to stress very strongly that this doesn’t mean at all that Khoisan speakers are in any way backward, or their languages primitive, just because those languages are phonologically conservative. I suspect in fact that the grammar of Khoisan languages is not that complex compared, for example, to Georgian or some northern Native American languages. What I do think is that they have retained the phonological complexity which the rest of us have lost. Moreover, I don’t think clicks were all there was to it. I think the first spoken languages of our species probably had sounds in them we can hardly imagine and subtle distinctions which nobody would be able to hear or express nowadays. Click languages are a globally valuable legacy of a glorious linguistic past.

Norman Is In Ireland

This is possibly not going to be one of my more coherent posts, although you could be forgiven for not noticing much difference between it and any others. It is partly about Dominic Cummings and this John Donne poem:

No man is an Iland, intire of itselfe; every man
is a peece of the Continent, a part of the maine;
if a Clod bee washed away by the Sea, Europe
is the lesse, as well as if a Promontorie were, as
well as if a Manor of thy friends or of thine
owne were; any mans death diminishes me,
because I am involved in Mankinde;
And therefore never send to know for whom
the bell tolls; It tolls for thee.

It isn’t difficult to think of examples of this, one of which is the life of the typical human being. We are born, having formed from matter in the biosphere, spend our lives exchanging much of the substance of our bodies with that biosphere and, on dying, become one with it once again. On the whole. Occasionally the carbon in our ashes is converted to diamond along with some of the nitrogen, or something might happen to preserve our bodies, such as falling into a tar pit, but hominin fossils are rare because we tend to be able to protect ourselves more effectively from physical threats, to the extent that one of the most dangerous animals in our environment is actually Homo sapiens. Leaving aside our ultimate fate, we are both physically dependent on the outside world and dependent on society, which there is such a thing as. All of this is pretty bleedin’ obvious. The Spartan ἀγωγή involved a number of stringent measures, including an examination by the Γερουσία soon after birth and the abandonment on a mountainside of any baby deemed unfit to become a soldier, either to die or to survive for several days. This was actually admired by other Greek states. Regardless of the ethics of the situation, it does illustrate very well that we are fundamentally social and cultural beings. There is not currently such a thing as a solitary human. There are hermits and feral children, to be sure, but feral children ally themselves with other social species and hermits have been social.

Heideggers (note the lack of apostrophe) insistence that all being is being towards death, and that death is a solitary event which cannot be experienced within one’s own being and always lies in the future, has been criticised for ignoring the natal aspect of our existence. There is a kind of solitariness in existentialism, and the introduction of others is perceived as a threat or an onerous responsibility. And of course in a sense it is. What we owe to other people can be perceived as a great burden, but it’s so much more than that. We also have an origin, although we can’t perceive that origin because our minds are insufficiently organised at the start of our existence for that to happen. Heideggers view seems in a sense to be that of a sole individual standing alone before an abyss, who had no parents or family, who made everything himself, and who is male. Sartre’s view of the Look portrays the awareness of oneself as an object for the other’s subjectivity as irredeemably negative, and of course objectification, for instance sexual objectification by the male gaze, is indeed pretty negative, but away from that objectifying context there is the responsibility one has and what one owes to others, because without them one would not exist.

This is where I get to Dominic Cummings. He has been much vilified recently, and I’ll come back to that attitude in a minute, for taking his family to Durham rather than observing the lockdown rules. In a way, he can’t be blamed for this because he’s a product of the isolating attitude engendered by Thatcher’s “there’s no such thing as society: there are only individuals and their families”, i.e. the view that we’re all separate even though we’re using a language invented by a whole community, are eating food produced by an army of farmers and other workers and emerged from a person’s body who themselves emerged from another’s, all the way back to when a retrovirus, whose envelope gene still builds the placental syncytium, infected early eutherians back in the Cretaceous. And we rely on that virus too. We owe our existence as placental mammals to that event, and to the many other viruses which have written their genes into our DNA over aeons.

This ecological idea of interdependence between people, and also between us and our planet or Universe more widely, doesn’t seem to be controversial from a scientific perspective. Ecology, linguistics, archaeology and the rest are all entirely respectable academic disciplines, and the fact of interdependence was applied to human relationships during the nineteenth century by Marx and Engels.

As well as being a theory of socio-economic relationships, Marxism arose out of Hegels dialectical idealism as expressed in his Phänomenologie des Geistes of 1807. In the form understood by Marx and Engels, it became dialectical materialism, which is based on several metaphysical principles: contradictions exist in the real world; entities are dynamic rather than static; and entities exist in relationships with other entities. All of these things are necessary conditions of entities, and incidentally when I say “entity” I mean “a thing with distinct and independent existence”, except of course that nothing is independent at all. Given that Marxism is said to arise out of this ontology, one might consider it to be uncontroversial. The problem, though, is that in fact Marxism does not seem to rest on these principles as firmly as one might hope. Marx is said to have “toyed with” the idea of using dialectical materialism as a means of explaining commodification, but given that he doesn’t seem to express this explicitly and it doesn’t seem to detract from understanding the idea, maybe it isn’t as clearly built on these foundations as might be thought. In fact there have been other attempts to approach Marxism, notably the late twentieth-century effort to forge a different kind of political philosophy referred to as “Analytical Marxism”. This involved the view that the dialectical apparatus behind the Marxist theory of history – the “thesis–antithesis–synthesis” scheme and the idea of dynamism, that nothing can realistically be considered static and frozen in a particular instant – was in fact obscurantist and entirely unnecessary. As far as I know, analytical Marxism is now dead and discredited, although I don’t know the details. However, if it could be shown that a political theory could emerge logically from a rational and evidence-based view of reality, it would be good.

A rather similar phenomenon, I’ve long thought, is found in Jean-Paul Sartre’s adherence to Marxism. In his preface to Frantz Fanon’s Les Damnés de la Terre, Sartre states it to be a work in which “The Third World finds itself and speaks to itself through his voice”. All that is fine and good, since Fanon was in fact from the Caribbean and had a right to do that without false consciousness. What I do object to is the feeling that Sartre’s Marxism is a kind of “bolt-on” appendix which is not organically part of his own philosophy, and in fact I think that holds throughout his work. When you look at existentialism as a whole, it’s really about the individual and not about their relations and unity with the world, and it’s hard to see how this could be compatible with Marxism except through some kind of fancy word play which actually signals some kind of privilege and knowledge-hoarding on the part of the writers concerned.

One of the questions which arises for me in connection with politics is whether the individual is important. There is, first of all, a sense in which one’s duties towards each individual are supreme, and we also need to recognise that each individual has a unique perspective informed by her experiences of life. From a left-wing perspective, this can be linked to the idea that the working class know what works for them in production, but that information may not be communicated back to the bosses, to use a rather outmoded model of how a business might work. Thus information is lost and production is less efficient than it might be. But this is also potentially a right-wing idea: an industry ought to be able to run itself rather than be regulated, because it understands better than the political class how to manage its affairs most effectively. There are also more atomised needs, for instance in terms of unique aspects of personality. I appreciate this because, as far as I can understand it, I am very atypical neurologically in a way which doesn’t seem to fit particularly well into any particular diagnosis – I’m neurodiverse, and there are aspects of my personality which can be classified as, for example, gender incongruence, ADHD and possibly being on the autistic spectrum, but none of these things is “textbook”. And it never is, for anyone. One of the more startling experiences of taking a consultation is that the very occasional client turns out to be a textbook case of a particular condition. In that sense we are all individuals and it’s a moral imperative to take that into consideration. You have to consider, for example, that some people have peanut allergies or button phobias. Those look like flat, straightforward medical facts, but they are still individual needs.

But on another level, individuals are not important. Here in England, Henry VIII started the Reformation. In the Holy Roman Empire it was Martin Luther who did that. If by some artifice we went back in time and ensured that the Battle of Bosworth Field was won by Richard III and his cronies rather than the Tudors, maybe the Reformation would’ve started later and been started by someone other than Henry VIII, with his perceived need for a male heir and therefore an annulment or divorce, but it would still have happened. A king like that is a chesspiece. It doesn’t matter to a game of chess that the king has a particular design, other than that it’s distinguishable from a queen or pawn and moves in a particular way, and being in check or achieving checkmate is possible in innumerable ways, but the game is still won or lost, as is the establishment of Protestantism. The individual does not matter in politics. Our Margaret Thatcher is the British version of Ronald Reagan.

Likewise Dominic Cummings is not important. The fact that he broke the lockdown, probably as a result of neoliberalism convincing him that he’s an isolated individual who doesn’t owe anything to anyone else, is not the point. He doesn’t matter as himself. The way in which he does matter is that he represents a particular irresponsible attitude which has been encouraged by a particular social environment and set of attitudes which this society has been encouraging for decades. But that doesn’t mean we should blame him for that, because he is, like all of us, the product of his environment. If we accuse him personally, we’re playing a game which allows Jeremy Corbyn, for example, to be criticised on the basis of his personality rather than his ideas, and that’s a double-edged sword.

I haven’t met Dominic Cummings in person, but the impression given by the representations I’ve seen on this screen and others is of a non-conformist “weirdo”. When we feel tempted to use that word, we should check ourselves, because maybe for other people we too are weirdos. I don’t know why he doesn’t conform to the usual standards of grooming and dress set by the generally blandly besuited individuals to whom we are used, but I don’t hold that against him. I know that I don’t even know how to conform, or what conformity is, except for a vague collection of things such as the tendency for male establishment figures to wear suits. There may even be some kind of fellow-feeling there, unless of course Cummings’ image is carefully constructed, as it very well may be.

So to conclude this bizarre pell-mell rant, I would say this about Cummings. He is a chesspiece in the impersonal game of politics who has fallen, by historical determinism, into the position of representing an unsustainable attitude, and who has set a bad example typical of the current political environment. As such, it doesn’t matter who he is, and in another world, and perhaps another country in this world, there’s another person in the same position. His personality put him where he is, but that personality is the end of a long chain of events which simply means he’s the wrong person in the wrong place and time doing something wrong. I don’t care who he is, how he looks, anything like that. What’s important, as always, is that the impersonal forces of history have pushed us all into a situation where it’s particularly clear that we can’t carry on as we have. In a more real sense than usual, Cummings is a dinosaur: perfectly adapted to his environment, but that environment has just been hit by an asteroid, and his kind must cease to be influential. The future is birds and larger mammals. But I don’t care who he is, and this shouldn’t be made personal.

Alien Cows and Cookie Dough

Most people know what a pufferfish is, and of course they’re interesting. I only recently managed to find out whether they inflate themselves with water or air: it’s water, as long as they’re in it. Since inflation is a stress response, it might be fun to get one to do it, but it isn’t good for them. This means, of course, that the dolphins who seem to use them to get high, by stimulating that stress response and the release of tetrodotoxin, are not being very nice if they have any sense of empathy, which raises all sorts of interesting questions I’m going to ignore for now.

Fugu are in fact pufferfish in the narrow sense, and the porcupine fish are their close relatives; the two groups are in the same order, the Tetraodontiformes. Another family in this order is the Ostraciidae, or boxfish. Just as pufferfish and porcupine fish are covered in spines to defend themselves against predators, boxfish have their own sort of armour in the form of hexagonal plates on their skin adapted from scales, which leads to their body shape being kind of “boxy”, and a very rigid body, unlike the clearly highly pliable skin of pufferfish. They’ve taken the same “idea” in the opposite direction, and protect themselves just as well as their relatives, but instead of doing so by making themselves physically too spiky and occasionally enormous to be swallowed, as well as poisonous, they’ve instead turned themselves into tanks. Boxfish are apparently kept as aquarium fish, which is a little surprising to me, and again this could take me into the interesting but twiddly area of the ethics of aquaria. Okay, just a bit: I am interested in the idea of keeping ornamental seaweed or other marine or freshwater plants in an aquarium in a kind of abstract way, and I suspect that there would be little in the way of ethical problems in keeping a caecilian in one provided she’d been bred in captivity, but besides those, it’s probably not ideal. I’ll never get round to it anyway, and it feels like too big a responsibility to have to take care of fish, unlike human children, who are a cinch.

Anyway, boxfish. Regular hexagons are unlike most other regular polygons with relatively few sides. They tile a flat surface excellently, like squares and equilateral triangles, but they can’t close up into a convex solid: each corner is 120°, so three meeting at a point already use up the full 360° and leave no angular deficit to let the surface fold round. There is one highly abstract exception which kind of extends the series of Platonic solids (tetrahedron, cube, octahedron, dodecahedron and icosahedron): infinite hexagonally-faced regular polyhedra do exist, the so-called Petrie–Coxeter regular skew apeirohedra. This does, however, raise the question of what happens to the plates on a boxfish at the edges and corners.
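For the numerically-minded, the flat-hexagon point can be checked in a couple of lines. A sketch, using the fact that a convex corner needs the face angles meeting there to sum to less than 360° (Descartes’ angular defect), and that three faces at a vertex is the minimum for a solid corner:

```python
# Why regular hexagons tile a plane but can't wrap into a convex solid:
# at each vertex of a convex polyhedron the face angles must sum to < 360°.

def interior_angle(n_sides):
    """Interior angle of a regular n-gon, in degrees."""
    return 180.0 * (n_sides - 2) / n_sides

for n, name in [(3, "triangle"), (4, "square"), (5, "pentagon"), (6, "hexagon")]:
    angle = interior_angle(n)
    # Three faces meeting at a vertex is the minimum for a solid corner.
    defect = 360.0 - 3 * angle
    verdict = "can fold into a corner" if defect > 0 else "lies flat"
    print(f"{name}: {angle:.0f}° each, defect with 3 at a vertex = {defect:.0f}° -> {verdict}")
```

Triangles, squares and pentagons leave a positive defect, which is why the five Platonic solids use them; the hexagon’s defect is exactly zero, so three of them lie flat forever.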

On the whole, fish tend to swim by undulating their bodies in a moving horizontal S shape, unlike aquatic mammals such as whales who do so vertically. Boxfish can’t do this because their trunks are rigid. Instead, they slowly row through the water with their fins alone. They don’t need speed because they can secrete poison into the surrounding water and are difficult to get into. Consequently they have warning colouration.

Two of the boxfish are called cowfish. This brings to mind an outdated, pre-Darwinian idea that everything on land had to have a marine counterpart, so for example there are dragons and dragonfish, elephants and elephant seals and so on, which led to the oddity of the insistence that there were such entities as sea bishops, who as far as I can tell don’t exist, and it’s quite odd in fact to think of an office like that as having a biological form. Is there an accountant fish? It’s long seemed to me that there are in fact two sets of these. On the one hand there are “sea” versions, as in sea cow, sea lion and the like, and on the other the “fish” versions such as lionfish and cowfish. I also wonder if there are aerial versions too. In any case, cowfish do exist. They’re up to about fifteen centimetres long and they have horns, hence the name, although they’re also slow like bovines. At this point I should probably allow myself a brief digression to point out that the weird English language has no everyday singular word for these animals – “cattle” has no singular – but only words for varieties of them, which I think is probably because we’re so close to them conceptually that we can’t see the wood for the trees. That’s not true in many other languages, by the way. It would be interesting to see what Frisian does.

Cowfish have horns for the same reason porcupine and pufferfish have spikes – to make them hard to eat. So far as I know they don’t charge and attempt to impale predators with them. But it’s the fact that they have horns which makes it possible to start to depart into a bit of worldbuilding.

We’re so familiar with life on this planet that a lot of the time it seems there are no other options but to have organisms of that form, and in fact there is some justification for this. Ant-eating mammals are very similar in form to each other though they’re not closely related, there have been marsupial wolves and big cats, and South America used to be home to very horse-like animals. In the sea there are dolphins now, but previously the very similar ichthyosaurs and also a load of large fish and sharks which were again quite similar, their forms strongly dictated by the laws of physics. A second line of possibility emerges from the idea of simple or easily generated shapes such as trees and spirals, which turn up repeatedly in all sorts of life forms. Spheres are a good example, turning up in such organisms as sea urchins and coronaviruses, and this means that if complex organic life large enough to see exists at all, tribbles almost certainly do. Another very common form on the submicroscopic scale, which oddly isn’t found at all in organisms or organs (such as fruit or eggs) visible to the naked eye, is the regular icosahedron. Virions are so often icosahedral as to seem the rule rather than the exception, and it’s quite odd that there doesn’t seem to be, for instance, a crunchy icosahedral fruit or a cactus-like icosahedral plant with spines at its vertices.

Animal symmetry has a strong tendency to be bilateral, like a butterfly for example, where one side is a mirror image of the other. The major exception to this is found among the echinoderms, including starfish, sea urchins and sea cucumbers, all of which have pentaradiate symmetry like that of pentagons and pentagrams. This is thought to be because, with an odd number of plates, each weak suture between two plates faces a solid plate on the other side rather than another suture. Very early in the history of animal life there were even triplanar animals, based on triangular symmetry, and the ancestors of all vertebrates are thought to have been completely asymmetrical. Hence simple geometrical shapes are influential in the structure of living organisms. In the case of boxfish, this shape is the hexahedron, more specifically cubes and rectilinear cuboids. There is, however, a weakness in this shape suggested by the purported reason for pentaradiate symmetry: the vertices and edges sit directly opposite other vertices and edges, so the weak lines line up. Moreover, a box can be distorted without altering the lengths of its edges, unlike a tetrahedron. Due to this weakness, there is selective pressure for a cuboid animal to develop reinforcements at the corners, and this opens up an interesting possibility: alien cows.

Imagine a rectilinear cuboid body of a mobile terrestrial animal, perhaps a grazer. The vertices of this animal could be strengthened with projections, namely horns above and limbs below. As the Norn riddle has it, “føre honga, føre gonga, føre stad apo skø” – “four hang, four go (walk), four stand up to (the) sky”, except this has no udders. At this point, the aphis can be used as inspiration. This is a giant aphis, in a sense, although not a sap-drinker. The trunk is entirely inflexible, so the “head”, although it has a mouth, two eyes and the horns, has no neck. Therefore the mouth is on the end of a kind of tube and the “grass” is nibbled at that end before being swallowed upwards. The other end of the body contains a cloaca out of which calves are born. These have to be quite small due to the fact that the body has no room for expansion, but this is also true of aphids, so it doesn’t require egg-laying. The pair of projections at the back are in a way the analogue of udders, although these are not mammals and don’t secrete milk for their young in the mammalian sense. Like those of aphids, they secrete a nutrient liquid, maybe like cookie dough.

As I mentioned yesterday, aphids and waterfleas share a practically identical reproductive strategy. Over much of the active period of the year they’re almost exclusively female and give birth to pregnant offspring without mating. Later in the season, males appear and mate with the females, producing eggs which then preserve the species over the winter. This can be made to work for the alien cows.

These cows don’t “know” they’re cows, and therefore it’s important not to boumorphise them. They’re alien animals on an alien world, and not vertebrates. Their bodies have armoured exoskeletons like lobsters, though not jointed, and they grow like arthropods, shedding these boxes and hiding away in caves while their new boxes harden. This leads to plains strewn with abandoned cow skeletons which are used by another species for shelter, in other words as huts. This other species is centaur-like, six-limbed and the front limbs have hands with opposable thumbs. They are in fact tool users, but they have an ancient symbiotic relationship with the cows, just as ants have with aphids on Earth. They “milk” these cows for their dough, on which they feed. This benefit is repaid by the centaurs keeping the cows safe from potential predators who also live on the plain.

A couple of things to note about these sentient centaurs. Their housing needs are satisfied by the cows rather than by technology, and are instinctive. Perhaps more importantly, although they aren’t vegan, their relationship with the cows is truly symbiotic and they have no choice about it. They’re farmers, but instinctive farmers. It isn’t part of their culture, because culture is everything you don’t have to do. A centaur without access to cow dough is as doomed as a human without gut flora. The ethics of this situation are interestingly different from dairy farming. Nonetheless, as well as having these two givens in their life, the centaurs do have technology and culture. They take the dough home with them and bake it into cookies, which they eat. If they ever decide to go into space and settle there, they’re going to have to take their cows with them, or rather, it probably wouldn’t occur to them not to, which makes the eggs rather convenient for them.

There is a fairly obvious problem with this scenario. Waterfleas and aphids live less than a year, but these alien cows (and they are mainly cows because they’re almost all female), being larger and needing to grow, have to live longer because they need to be able to eat enough. The trouble is, in order to be habitable, a planet would seem to need a year no more than about two “of our Earth years” long. The hotter and larger the star, the further out its Goldilocks zone, but also the shorter its lifetime and the less time available for these organisms to evolve. If a star is more than about 40% more massive than the Sun, it would have rendered Earth and most of its solar system uninhabitable by the time human-type life was able to evolve, assuming Earth is typical. This seems to rule out an ecosystem with cows of this kind, because to have a year long enough for them to mature, the world would have to be in an orbit too far out to support life, out around Jupiter or Saturn. Fortunately there’s an answer to this, found in our own system, and it starts by demolishing one assumption we tend to make about life outside this solar system: that it’s found on planets alone. This world is not a planet. It’s a moon of a planet bigger than Jupiter.
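Those two figures – the roughly two-Earth-year ceiling on a habitable year and the roughly 40% ceiling on extra stellar mass – hang together under the usual rule-of-thumb scalings for Sun-like main-sequence stars. A sketch with approximate textbook exponents, not a climate model:

```python
# Rule-of-thumb scalings for Sun-like main-sequence stars (solar units):
#   luminosity        L ~ M**3.5
#   lifetime          t ~ M / L ~ M**-2.5   (times ~10 billion years)
#   habitable zone    a ~ sqrt(L)           (distance for Earth-like warmth, AU)
#   orbital period    P ~ sqrt(a**3 / M)    (Kepler's third law, years)

def star_numbers(mass):
    """Given stellar mass in solar masses, return (lifetime in Gyr,
    habitable-zone distance in AU, orbital period there in Earth years)."""
    lum = mass ** 3.5
    lifetime_gyr = 10.0 * mass ** -2.5
    hz_au = lum ** 0.5
    period_yr = (hz_au ** 3 / mass) ** 0.5
    return lifetime_gyr, hz_au, period_yr

for m in (1.0, 1.4):
    t, a, p = star_numbers(m)
    print(f"{m} solar masses: lifetime ~{t:.1f} Gyr, "
          f"habitable zone ~{a:.2f} AU, year there ~{p:.2f} Earth years")
```

At 1.4 solar masses the lifetime comes out at roughly 4.3 billion years – about the current age of the Earth, so barely enough time for us – and the year in the habitable zone comes out at almost exactly two Earth years, which is where both limits in the paragraph above come from.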

The moon Io is hot and covered in active volcanoes, to the extent that mapping it is a Forth Bridge-type task due to the constant remodelling its surface undergoes from the eruptions. This is despite the fact that at Jupiter’s distance from the Sun, the average temperature at the planet’s cloud tops is -145°C. Io manages to be so hot because the other large satellites in the Jovian system raise violent tides within it, heating it via friction. Hence some kind of “Super-Jupiter” could have an Earth-sized satellite heated by tidal forces from its neighbours, not to the degree Io is, but enough to give it a habitable climate. But this is a gargantuan planetary system even compared to Jupiter. Excluding Earth, which seems to be a special case, the largest moon relative to its primary’s size is Triton, at around a five-thousandth of Neptune’s mass, and moreover constituting about 99.5% of the mass of all Neptune’s moons put together. In order to have a similarly proportioned satellite, Earth would have to be orbiting a planet around fifteen times Jupiter’s mass, and even then it looks like the other moons would be too small and far away to exert strong enough forces. But it isn’t quite that bad. A planet with only forty percent of Earth’s mass could be habitable for us, and life which evolved there could manage with less than that. At a pinch, it would only need to be slightly larger than Mars to have liquid water on its surface, but that’s probably going too far.

Assume, therefore, that a planet with forty percent the mass of Earth is the largest moon in a Jovian-style system, proportionate to Ganymede but, as it were, in Io’s position, though orbiting outside the primary’s Van Allen belts to avoid ionising radiation. A proportionately larger “Jupiter” would be about sixteen times the real planet’s mass. That is right at the usual dividing line: above roughly thirteen Jupiter masses an object can fuse deuterium and is conventionally counted as a brown dwarf, which is arguably a star, would produce its own radiation and might therefore be too hot to allow an otherwise Earth-like moon to be habitable. One of the largest known exoplanets, CT Chamaeleontis b, has perhaps seventeen times Jupiter’s mass and may for that reason be a brown dwarf. So our primary sits right at the upper limit for a planetary object. But it is possible.
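The proportions above can be checked against rough catalogue masses. This is a sketch with round numbers rather than precise ephemeris values:

```python
# Rough body masses in kilograms (round catalogue values).
M_EARTH    = 5.97e24
M_JUPITER  = 1.90e27
M_NEPTUNE  = 1.02e26
M_TRITON   = 2.14e22
M_GANYMEDE = 1.48e23

# Triton is about a five-thousandth of Neptune's mass.
print(f"Neptune/Triton mass ratio: {M_NEPTUNE / M_TRITON:.0f}")

# A primary with Earth as its "Triton" would therefore need roughly
# 5000 Earth masses, i.e. about fifteen Jupiters.
primary_for_earth = (M_NEPTUNE / M_TRITON) * M_EARTH
print(f"Primary for an Earth-Triton: {primary_for_earth / M_JUPITER:.1f} Jupiters")

# A 0.4-Earth-mass moon proportioned like Ganymede implies a primary of:
primary = 0.4 * M_EARTH / (M_GANYMEDE / M_JUPITER)
print(f"Primary for a 0.4-Earth Ganymede-analogue: {primary / M_JUPITER:.0f} Jupiters")
```

The Ganymede-proportioned case comes out at about sixteen Jupiter masses, matching the figure in the text and landing just above the conventional thirteen-Jupiter-mass deuterium-burning line.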

A cow’s gestation period is close to a human’s, and they reach puberty at about a year, on average. Given that they are born pregnant, these alien cows would, assuming a similar time scale, be less than nine months old when they first give birth, and would therefore probably need to be fully grown by then unless the first calves are smaller than subsequent ones. This gives scope for several generations of cows in a Jovian year of 11.86 of ours. Jupiter, though, has practically no seasons because it has almost no axial tilt, and in any case this moon would be constantly heated. The other gas giants do have seasons, because they are tilted and their daylight hours vary, with Uranus as the extreme case, its axis practically at right angles to its orbit. This would influence day length and therefore the ability of plants to grow, so in spite of the fairly stable climate, there would still be seasons due to the proportion of time during which light is available. The light is also weaker than on Earth, due to the planet being further from the Sun. Consequently, overwintering still makes sense, and a situation can be imagined where the eggs lie dormant for several years before hatching out in the spring, whereupon several generations of cows would ensue. Late in the summer, bulls would appear, mate with the cows, and eggs would once again be laid before all the cows and bulls die off for the winter, less food being available. In the meantime, the centaurs could hibernate, waiting for the new cows to appear in the spring.
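The “several generations” claim is trivial arithmetic, using the hypothetical figures from this scenario: born pregnant, one generation roughly every nine months, a local year of 11.86 Earth years:

```python
# Back-of-envelope: how many cow generations fit into one long "Jovian" year?
# Hypothetical figures from the scenario: ~9-month back-to-back generations,
# a local year of 11.86 Earth years.

GESTATION_MONTHS = 9
LOCAL_YEAR_EARTH_YEARS = 11.86

months = LOCAL_YEAR_EARTH_YEARS * 12
generations = int(months // GESTATION_MONTHS)
print(f"~{generations} back-to-back generations per local year")
```

That gives about fifteen generations as the theoretical maximum, which is plenty even if the cows are only active for part of the long year.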

This scenario allows for some interesting explorations. For instance, there’s a species of intelligent, technological, instinctive farmers who, if they wished to explore space and settle elsewhere, would have to take their cows with them; who spend several years of each long year sleeping, which is in fact quite useful for travelling between the stars even with only sublight spacecraft; and who would probably be surprised to find life on actual planets as opposed to moons. Their ethics and values could be different from ours because of their necessity to farm animals, as opposed to our omnivory, which allows us to be vegan. This is all very fruitful, even though they just eat cookie dough.

My Glistening Secretions

This is going to sound quite pretentious, but I can assure you it isn’t: I wish I could have writer’s block. I don’t, of course, but this doesn’t mean I can produce good stuff. In the past I’ve pointed out that there is a sign called hypergraphia – compulsive writing. There are various manifestations of this, including people who write all over their walls and probably ceilings in a manner which somehow reminds me of hoarding, another feature of my personality. Another practice described as hypergraphia is the compulsive keeping of a highly detailed journal, rather along the lines of the “serious injury” journal in Rain Man – “squeezed and pulled and hurt my neck”. People can find themselves writing on toilet paper, and I once wrote an essay entirely on till receipts, as a medium rather than a topic, although I did later put it on A4 paper. Such behaviour is a sign rather than a condition as such: there is no diagnostic label “hypergraphia”. It can be part of Geschwind Syndrome, a manifestation of temporal lobe epilepsy involving hypergraphia, hyperreligiosity, atypical sexuality and circumstantial conversation. Anyone reading my stuff will have noticed the circumstantiality of my writing. Digression is my norm.

I have started to blog. It is not difficult enough. It’s always verbose rather than pithy and it flows, in a sense. It’s a secretion of my brain. Perhaps a glistening secretion. But is that glistening aesthetically pleasing or disgusting? I have no idea, and that’s the problem. Not having any insight into other people – I only know how to be myself – I can only conjecture that they have an inner critic which slows their flow: an encumbrance as herbalists might call it. I’m not sure if there’s one around here to ask, so I have to guess. This stuff, it flows out of me in typically verbose and circumstantial style, from day to day, like an exudate, as it were, and I feel the urge to show it to people on this blog and elsewhere.

There’s an apparently non-veganisable joke about someone who goes to a doctor and says, “I think my brother is mad because he thinks he’s a chicken”. The doctor replies, “Should I arrange for him to see a psychiatrist?”, and she answers, “Well, I would, but we need the eggs”. This is, I hope, my situation. I hope people need my eggs. Incidentally it has to be a man who thinks he’s a chicken because it adds a certain transphobic je ne sais quoi to the joke. It also brings up the issue of gestation and reproduction, because we seem to be assuming here that the eggs are sterile, whereas this may be unwarranted. Maybe it’s parthenogenesis. This leads me to aphids, of course.

Aphids are known to be “farmed” by ants. They suck the sap from plants and – well, what happens next? The ants milk the aphids for a sweet, sticky substance called honeydew, on which the ants feed. An obvious comparison to dairy farming can be made here although unlike that, this is an instinctive arrangement and form of symbiosis between ants and aphids. What isn’t clear to me is whether this stuff they produce is a secretion or an excretion. Are the aphids simply “overflowing” with plant juice and squeezing it out of their rear ends like dung or are they specifically producing it from something like glands on their backs? This is of course the problem with my own writing. Is it a secretion or an excretion? The aphis doesn’t know, so why would I?

And at this point I shall permit myself a digression. Aphids and waterfleas have a lot in common in terms of reproduction. For most of their active season they’re all female. Their young are born live and pregnant rather than hatching from eggs, and they can colonise an area very quickly because they can reproduce without mating. At a certain time of year, a few male individuals appear – in aphids these are the winged ones, because they can fly around and sow their oats more easily, thereby increasing genetic diversity in the population. After mating, the females lay tough eggs which are resistant to winter conditions, and then the whole population dies off, leaving the eggs to hatch out in the spring and start the process all over again. The parallels between the two are fascinating, and I’d be prepared to bet that if complex life exists on other planets (and moons), many species will turn out to use the same strategy. But aphids are something else. They give birth, at the age of twenty minutes, to pregnant daughters.
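It’s easy to underestimate how fast that mating-free colonisation runs away, so here is a toy model. The assumptions are purely illustrative, not real aphid demography: every individual is female and born already pregnant, nobody dies within the season, and the five-daughters-per-female figure is invented for the sake of the sketch:

```python
# Toy model of telescoped parthenogenetic growth: every aphid is
# female, born live and already pregnant, and each newly born female
# produces a fixed number of daughters in the next generation.
# No males, no mating, no deaths -- deliberately crude, just to
# show the explosion.

DAUGHTERS_PER_FEMALE = 5   # assumed figure, not a measured one

def population_after(generations: int, founders: int = 1) -> int:
    """Total aphids alive after the given number of generations,
    counting all earlier generations (mothers persist in this model)."""
    population = founders
    newborns = founders
    for _ in range(generations):
        newborns *= DAUGHTERS_PER_FEMALE  # each newborn female reproduces
        population += newborns
    return population

# One founding female, six generations later:
print(population_after(6))  # 19531
```

With generations measured in days rather than weeks, thanks to the daughters being born pregnant, a single founder swamps a plant in no time – which is presumably the point of the strategy.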

Aphids are also somewhat bovine. They’re herbivores who live in herds and are milked by another species. The human practice of milking cows is not instinctive, but like other human activities does parallel those of other species which are instinctive: social insects also farm fungi, so that’s kind of arable farming, though for humans it’s an invention whereas for insects it’s instinctive. But what I don’t know about aphids, getting back to the point, is whether what I’m inaccurately describing as their exudate is an excretion or a secretion. It seems that if it’s a mere excretion, it isn’t very energy-efficient because they’re more than satisfying their metabolic needs and chucking away the rest, which takes extra energy. However, it may also be that the ants protect their herds from ladybirds and other predators, so it may not be wasted energy. If it’s a secretion, somehow that seems to make more sense, although the aphids are still not deriving nutrition from it and it might be energetically cheaper just to poo the stuff out without taking it into their internal environments, processing it and pushing it out again via glands on their backs. At this point I’m tempted to look this up but I’m not going to. I’m going to leave it at that because it raises a similar issue in my mind about my own secretions, or are they excretions?

I produce writing. I churn it out. I am an organism who writes. To me, this writing might just flow out without touching the sides, like regurgitating a textbook, or it might be getting processed and cast into a new form before being secreted. It occurs to me that strictly speaking, an aphis is not excreting because if she isn’t secreting, her honeydew is coming out without touching the sides, and that’s defaecation rather than excretion. Excretion involves processing followed by the expulsion of the “ashes” such as urea or carbon dioxide depending on which end you’re talking about (to some extent – sweating is almost identical to urination). Therefore I am secreting, because I don’t just regurgitate text but think about it before setting digit to digital device. A lot of that thinking isn’t under my own volition and much of it isn’t even conscious. Secretion then.

There used to be a book in my university library rejoicing in the highly memorable title ‘Sex And Internal Secretions’. I don’t think I ever picked it up and read it, and oddly I think the allure of the title was probably accidental, or could at least have been passed off as such to the publisher. Perhaps disappointingly, it merely refers to reproductive hormones, and as such clearly demonstrates one way of looking at glands. There are glands with ducts, producing content which then exits the gland by a tube leading to an external surface – bearing in mind that that’s topologically external rather than on the outside of the body; I’ll come back to that. Then there are the so-called “ductless glands”, organs which produce signalling chemicals, i.e. hormones, and release them into the bloodstream. Obvious examples are the pituitary, thyroid, parathyroids and adrenals. Some glands do both, such as the gonads and pancreas, and other types of organ also produce hormones, such as the lungs, kidneys and heart. The fact that the lungs and kidneys produce hormones regulating blood pressure is, incidentally, supremely relevant right now, as an enzyme which converts one of those hormones is currently being used as a receptor by a certain virus to get into cells and reproduce, which is why it tends to cause viral pneumonia but also makes blood pressure fluctuate wildly and injures some people’s kidneys. I too produce internal literary secretions. I overthink, and like anyone else, many of my thoughts never reach the surface – not because they’re any less worthy than the thoughts which do get expressed, but because they may not be well-received. In fact I have a history of this. As a schoolchild, I used to write long essays and destroy them rather than handing them in, giving the impression I was lazy and unproductive. The main change today is that I no longer destroy them, because I’m also a hoarder.
I did, however, write two thirty-thousand-word biology essays and hand them in at some point, in order to prove to another pupil that a certain teacher didn’t have 6/10 as a ceiling for his marks. It didn’t really have anything to do with the subject – my motivation was that someone was wrong in the classroom, and since he was homophobic I wanted to retaliate in some way.

As I’ve said, topologically the question arises of what really counts as external and internal. To a topologist, a doughnut and a piece of paper with a pinprick in it are the same shape. Imagining that all solid objects are made of infinitely pliable plasticine which cannot, however, be torn, a doughnut, a ring, a sheet of paper with a tiny hole in the middle and even a teacup are all the same shape. Applying this to the human body, without being too fussy about the finer features of our anatomy, we are doughnuts – kind of, not quite. We have tubes running through us from our mouths to our anuses, whose contents are therefore in a sense external. Likewise, our lungs are open to the air, fortunately, and therefore their surfaces are also outside our bodies. Even the uterus and uterine tubes are external in that sense, which means that a fetus is in a sense merely adhering to an external surface. That sounds a bit precarious, but it may help emphasise certain issues in reproductive ethics – pregnancies are in a sense occurring outside bodies. Even the abdominal cavity is external in a sense, because the uterine tubes are open at both ends. But there is, more or less, an internal environment, consisting of the bloodstream, the bones, the muscles, the walls of the viscera, and the brain and spinal cord, along with many other organs and whole systems. These are often the parts of the body formed from the mesoderm, the filling in the sandwich of many animals’ embryos, which consist of three layers rolling up to form a three-layered tube. This is referred to as the internal environment. Of course, on a finer scale even these parts are external, and there is no interior at all: there’s simply a cloud of subatomic particles whirling around in a void.
The Greek philosopher Δημόκριτος (Democritus) once said, νόμωι (γάρ φησι) γλυκὺ καὶ νόμωι πικρόν, νόμωι θερμόν, νόμωι ψυχρόν, νόμωι χροιή, ἐτεῆι δὲ ἄτομα καὶ κενόν – “by convention sweetness, by convention bitterness, by convention heat, by convention cold, by convention colour: in reality, only atoms and the void”. That’s one possible way of viewing reality, which has its problems, but it does express a truth, and it means that in a sense there is no exterior or interior.

At this point I’m hoping to be able to climb out of this metaphysical hole I’ve dug myself with this runaway metaphor and apply it once again to creativity. I suppose all of my thoughts and writings do occur in the world. I presume that after my death, if my writings survive they might end up getting read by some poor benighted victim, or maybe they too will be destroyed, like Kafka wanted done with his. In that sense, the stuff I’ve actually put to paper and not eaten, discarded or ignited is merely internal temporarily, like my physical body: it will eventually become part of the world and be publicly read, or rather, the possibility exists that it will emerge into the sunlight. Most of it won’t, as it’s only ever existed in my thoughts, which by the way may also be external. Gottlob Frege, when he wasn’t busy being a proto-Nazi, made the interesting observation that thoughts are not so much the contents of the mind as things which exist “out there”, rather like Platonic forms, waiting to be discovered. Although he didn’t mean the same thing by “Begriff” (concept) as most people, the general concept is more widely applicable. Maybe works of art do exist out there waiting to be discovered. Maybe nothing is ever invented. Instead, the human mind is merely a device which opens tunnels into a vast multidimensional bladder from which innumerable creative works shoot under pressure into the physical world and get splurged onto paper, canvas, TV screens, pianos and websites. There are certainly examples of very similar works and tropes which are not, however, the result of plagiarism. At least two short stories about mathematicians involve a character holding blackboard chalk in his mouth like a cigarette. ‘The Time Traveler’s Wife’ is to me annoyingly similar to ‘Slaughterhouse Five’ and I’m prepared to believe that isn’t plagiarism. 
Although it’s a cliché today, 1977’s ‘Lucifer’s Hammer’ and 1979’s ‘The Hermes Fall’ are both about a massive astronomical object threatening to hit Earth and I believe there was a court case about the plot similarities. It seems, however, to be entirely accidental, except that ’twas the season for asteroid impact novels. But there are stories out there waiting to be written, and they already exist even though nobody has ever thought of them, and maybe never will.

Getting back to the question of blockage: a ducted gland with a blocked passage – what herbalists call encumbrance, but which could also be called obstruction – is generally not a good thing. The contents will build up and a cyst will form. The pancreas is the body’s main gland for secreting digestive enzymes, and normally produces and discharges them into the duodenum, where they break most food down into absorbable states. It’s right next to the gall bladder, which makes sense because the gall bladder’s main function is to release bile, which emulsifies the fat in the food, such as it is at that point, into microscopic droplets and helps the enzymes secreted by the pancreas get at it. Sometimes, of course, gallstones form, and these can slide out into the common duct shared by the gall bladder and the pancreas. When this happens, the digestive enzymes can back up in the pancreas rather than being released into the external environment and, most unfortunately, proceed to digest the pancreas itself, and having done that go on to digest much of the rest of the abdominal organs. It’s probably one of the worst ways to die, and it’s caused by the failure of pancreatic secretions to get out there into the world of the digestive system. If I don’t write this stuff down somewhere, something similar happens to my mind. It’s another example of having to express one’s “insanity” in order to maximise mental health. If I didn’t do this – and I suspect this applies to many other people – I would be lost in my internal musings and it would become increasingly difficult to engage with the world, even in relatively normal ways like going shopping or cooking dinner. The external world recedes from me if I don’t do this.

Most of the time, the word secretion seems to bring to mind nasty fluids to which we have evolved an instinctive revulsion in order to protect us. This applies mainly to our own bodies. We’re not keen on mucus, pus or sweat on the whole, and even substances which remind us of them can be hard to engage with. However, not all secretions are nasty. The labiates (I’m supposed to call them “lamiates” nowadays, but I prefer a system which actually describes the organisms to one which just names them after one genus) attract pollinating insects with glands secreting volatile substances, very wont to evaporate and diffuse through the air, which also give the plants their scent and flavour for human beings. Most culinary and fragrant herbs are labiates, such as mint, rosemary, marjoram, thyme, sage and lavender. These are aesthetically pleasing to many humans (other fragrant herbs are available, such as the umbellifers – “Apiaceae” for heaven’s sake!). Hence some secretions are aesthetically pleasing, so it might be worth airing them.

I hate pearls because they’re a response to a foreign body in a living animal, which to me makes them like pus. In fact to me they even look a bit purulent, like blobs of discharge from a boil. Therefore somewhat aside from the fact that they aren’t vegan, they’re distasteful to me. However, many other people seem to like them, to the extent of making jewellery from them. I would hope that my own glistening secretions lead to similar impressions, regardless of how much to me they seem to have festered and gone septic, and maybe I’m in with a chance.

My other problem is that I’m not good at endings.

A Language Written By The Victors

Learning another language is generally supposed to be good for your brain and mental health, whether or not you have anyone to communicate with. Hence Sarada is currently learning Classical Greek and finding it very stimulating. In the meantime, although I’m not formally learning it, it has piqued my interest in classical culture. For instance, I didn’t previously fully appreciate that many of the tropes we take for granted in drama had to be thought up by someone, such as having more than one character in a piece of drama. Certainly we have the likes of Alan Bennett’s ‘Talking Heads’, and isolation due to Covid-19 has led to the production of more monologue-type drama, or something close, such as ITV’s Isolation Stories series, but on the whole we expect more. Also, a lot of Greek drama seems to have been effectively musical theatre, comedies weren’t taken seriously for a long time because they weren’t serious, and so on. Politics might seem to be outside the realm of drama, or maybe not, but they too are affected by what TV Tropes calls “Early Installment Weirdness”: the idea of tyranny, far from being pejorative, was actually considered a viable way to run a state, and democracy was quite openly unpopular compared with aristocracy, for clearly expressed reasons – bearing in mind that Greek democracy wasn’t anything like democracy as we know it today, since hardly anyone was considered a citizen. Of course there is a pale ghost behind all this, for that kind of reason – Classics is generally about dead white males, with some interesting exceptions such as Sappho and the Black Roman emperors Septimius Severus and Caracalla. Interestingly, Classical Mediterranean culture was basically ethnically colour-blind, and homophobia wasn’t a thing either, but that’s not to say there aren’t major problems with the likes of patriarchy and slavery.

Greek has generally just been Greek. Unlike Latin it hasn’t given rise to a whole family of languages spoken today, although it has been a major influence on other languages, chiefly nowadays for the use of its script and vocabulary in scientific, mathematical and other technical realms. So extensive is this, in fact, that I can recognise the vocabulary of most texts written in Classical Greek, although my grammar isn’t so hot. Greek also lent its script all over the place, crucially of course to Latin itself but also to Old Church Slavonic and through that many Slavic languages and written languages in Soviet-influenced countries, and also to Gothic, Etruscan and Coptic, although in each case modified somewhat. But it has one living descendant: Modern Greek. In a sense Modern Greek is to Ancient Greek as Italian is to Latin, which provokes me into wondering what European languages would’ve existed if Greece had managed to maintain its European political ascendancy and take over the Roman Empire rather than the other way round. But it does mean Greek isn’t particularly useful compared to Latin as a gateway to other languages. Gothic and Coptic both borrowed a lot from it but they are no longer enormously useful, although the latter is at least still used seriously and serves as a counter to Arab dominance in North Afrika.

There are of course all sorts of reasons for learning languages other than being able to communicate in or understand them. They introduce a new way of conceiving of the world for instance. English is unusual compared to many other languages in having a single word for “know” and two separate words for “do” and “make”. It also avoids using “thou” and has only one modern pronoun for “we”, sharing the latter with most other Western languages. Many other languages have a dual first person pronoun and separate inclusive and exclusive dual and plural personal pronouns, and in fact English used to have dual first and second personal pronouns, namely “wit” and “git” (pronounced “yit”). Another reason for learning a dead language is the help it can give you learning related languages, and this is particularly true of Latin, ancestral to the current spoken languages of possibly most living humans. The same applies to Sanskrit with the languages of Northern India and other parts of the Subcontinent. Other branches of the Indo-European language family may not have good written records going back millennia or they may simply not have been very productive. Greek, Albanian and Armenian are each the only representatives of their branch and Tocharian and Hittite have no living descendants. There are also branches of the family which have left hardly any trace: there’s a group near Mongolia separate from the Tocharians (who lived in today’s Turkestan) who can be confidently asserted to have spoken an Indo-European language but it was never written down and completely disappeared without even leaving any loanwords in other languages, so it’s gone forever. There may even be a branch of Indoeuropean spoken in the Pacific Northwest of North America by ancient non-European settlers, although this is highly controversial and seems pretty doubtful.

Due to the cultural biases of Western academia, the best-reconstructed ancient language not based on records of some kind is Proto-Indo-European itself. Something like half the languages currently spoken are descended from it, including English, Bengali and Serbo-Croat – in other words, most of the languages of the Indian Subcontinent, many Central Asian languages and most widely spoken languages in Europe. It isn’t entirely clear when and where this language was spoken, its own name is unknown and the people and culture involved are unclear, but it’s possible to track down some details. For instance, if the words for a particular tree or animal are related across a wide range of scattered languages, the chances are that it was native to the region in which the language was spoken, and if the name for a piece of technology or other cultural feature is similarly common and its date of invention is known from archaeology or other records, this helps to determine the era during which the language was spoken. Because Hittite and its relatives are themselves the oldest written records of Indo-European which have come down to us, it can be known that Proto-Indo-European must have preceded that civilisation, which began around 1600 BCE. Likewise, the earliest Greek records, which are incidentally pre-alphabetic, using a syllabic script called Linear B, date from 1450 BCE or so. Another possible clue is genetics, which I’ll come back to. There are shared words for “wheel” and for shorter edged weapons but not swords, so the original people must’ve post-dated the invention of the wheel but weren’t familiar with swords.

There are four significant theories as to where the original speakers lived. One is the discredited and Nazi-adopted Northern European theory. This is based on the erroneous idea that the Aryans – and that is who we’re talking about – were fair-haired, blue-eyed white people. It even went as far as the claim that they were originally from the Arctic ice caps, which I suppose enables people to say they were the original Europeans. Like some other ideas the Nazis adopted, it has a somewhat less dishonourable history and was merely convenient for their propaganda. It is, moreover, true that the original Germanic homeland is in the southern part of what’s now Sweden. One reason for supposing the Aryans came from Scandinavia, or perhaps from what became Poland in the twentieth century, is the word “lachs” for salmon, found in all sorts of languages in various forms. Salmon of the European kind are only found in the rivers emptying into the Atlantic and associated seas, so if the Aryans lived anywhere else there seems to be a problem with them having this word at all. This is in fact known as the “Salmon Problem”. However, it’s now thought that the word which became “lachs” originally meant “trout” and didn’t refer to the leaping fish. There are of course other salmon, such as the sockeye of Pacific North America, but these are not strictly relevant, although it would be interesting if the Tsimshian people had a word like “lax” for them, because it would support the idea that they speak an Indo-European language. In fact the related Haida language calls one kind of salmon “chíin tluwáa”, which is nowhere near and is in any case a compound noun.

There’s a second nationalistically influenced theory of Aryan origin which claims that they arose from Northern India and spread across Eurasia. This one, like the Arctic theory, doesn’t really work at all and seems to be highly politically motivated. There used to be a bias in Indo-European philology towards Sanskrit which made reconstructions of the original language look a lot more like Sanskrit than it’s at all likely to have been, partly because Sanskrit, particularly the language of the Vedas, is one of the most conservative languages in the family which is still decipherable. There are political issues raised by the idea of an Aryan invasion of India. For instance, it’s been seen as justification for the British Raj. However, it requires the Indus Valley civilisation to be Aryan and doesn’t account for the language having words for many things which didn’t exist there at the time. It’s used by Hindu nationalists to justify racism and Islamophobia, which doesn’t make it untrue, but it is nonetheless untrue.

The other two, more respectable theories are the Anatolian and the Kurgan. The Anatolian theory involves the claim that since Hittite and its close relatives are the oldest recorded Indo-European languages, the family probably started in Asia Minor – modern Turkey. It’s also noted that language variation tends to be greatest close to a language’s point of origin. This can be seen with English. Americans often perceive British English as having a myriad of accents and dialects which vary a lot over a small area. What’s happened is that the places across the world where English is now spoken were settled by English speakers fairly rapidly, not allowing much time for language change to occur, so whereas there are indeed variations in North American English, they’re nothing like as big as they are in England. This is seen as applying to the Anatolian theory because, for example, the Hittites spoke one language, the Luwians another, then there’s Greek, Armenian, Illyrian (the branch from which only Albanian survives) and Slavic, all in quite a small area, plus Indo-Iranian further to the east. This is associated with Nostratic, a probably non-existent language supposedly spoken during the last Ice Age and ancestral to Indo-European and a wide range of other languages such as Finnish and Tamil, partly due to the considerable linguistic diversity in the Caucasus. However, there are again a number of items for which Proto-Indo-European had words but which hadn’t yet been invented at the early date the Anatolian theory requires, and the method used to date the language, Bayesian analysis, really works on individual words rather than whole languages – and those words could have been loans.

The most popular current theory, then, is the Kurgan Hypothesis. I realise I’ve used the word “theory” all the way through, so I should probably come clean and start calling these hypotheses, although it’s a bit unfortunate that I’ve now referred to two ideas popular with quasi-fascist groups as “theories” and this as a mere hypothesis even though it’s better supported. The Kurgan Hypothesis holds that the Aryans were a nomadic people living during the Chalcolithic – the “Copper Age”, which comes immediately after the Neolithic or New Stone Age – in the area north of the Black Sea. The culture is named after its burial mounds, which are called kurgans in Russian. These were the Yamnaya people, who lived in that area from about 3300 BCE, and they are the strongest candidates for the original Aryans. Genetic studies also support this: the type of Y chromosome they had is now found all across the area where Indo-European languages were spoken up until about 1500 CE, with its strongest concentration along the Atlantic coast, including the West of Scotland, Ireland and Brittany – in other words, the Celtic fringe. It is in fact the Y chromosome type carried down on the father’s side of my own family. I’ve heard it said that there’s a stretch of DNA which is found most often in the areas most remote from the place where Homo sapiens originated, and which is therefore seen as conferring adventurousness and curiosity, and it’s possible that this is what they had in mind, although it’s very Eurasia-centric. If this is true, it might be expected that even if we expand into the Galaxy, the shock front of our settlement will also be marked by this same piece of DNA. It also reminds me of Enya’s song ‘Aldebaran’, which imagines a Celtic spacecraft reaching that star. Maybe.

Unfortunately, “adventurousness” may be a bit of a euphemism. What it may in fact mean is that the Yamnaya, assuming that’s who they were, managed to spread their genes, languages and culture across much of the planet – and let’s face it, now most of it. These people were and are basically “The West” in cultural terms, although with some qualifications, because the North Indians are them too and their religion was decidedly non-Abrahamic. What “adventurous” might mean in this context, sadly, may well be “belligerent and plundering”. It’s thought that their warlike, aggressive ways led to them subjugating the more peaceful, less patriarchal cultures which had prevailed in those areas before they got there. And although you can put a different spin on them, the Bhagavad Gita, the Eddas and the Iliad all come across as pretty murderous and violent. As a child, I mainly skipped over the Iliad and went on to the much more interesting Odyssey, although apparently the former can be seen as Achilles avenging his same-sex lover, so all may not be lost. ‘The Silence Of The Girls’ also apparently takes a different view, telling it from a female perspective, and it hasn’t escaped my attention that I’m talking about the spread of a Y chromosome here rather than mitochondria.

Consequently there are a couple of disappointments and concerns about Proto-Indo-European. As I said, when you learn a language, you also pick up a whole world view. It may be similar to your own in some respects, because on the whole the language is known to other humans, or at least invented by one or more of them, but it’s bound to differ in several ways. This is one reason why language loss is so tragic. It’s a bit like losing, to rainforest devastation, a species of plant which would’ve cured cancer – I realise that’s a crude example, but you know what I mean. Just to take a random example, an Australian language was lost a few decades ago which had different words for different kinds of hole, a distinction English lacks. The Q-Celtic languages express possession as being “on” someone and needed and desired items as being “from” them, which gives one a new perspective on needs, wants and property. Many languages distinguish between alienable and inalienable possession: “my leg” and “my body odour” are not the same as “my Rubik’s cube” or “my Ford Cortina”, unless one is a Cortina/Rubik’s Cube/human hybrid (just realised that refers to something I wrote elsewhere on another blog, but never mind, I’ll leave it in). So learning Proto-Indo-European would provide insight into a prehistoric perspective on the world. If one subscribes to any extent to the Noble Savage myth, this might be expected to provide some kind of holistic, peaceful, unity-with-the-Cosmos-type take on things. Unfortunately, while for all anyone knows that might really have existed, it isn’t how the Yamnaya saw the world at all, because they were the conquerors. The people they plundered, crushed, murdered and exploited would’ve had interesting languages too, but those are mainly gone – the only survivor is Basque so far as I can remember, although there are other candidates such as Burushaski, and older examples like Elamite and Etruscan.
The other disappointment is that although this is a prehistoric language, just about, it isn’t actually a Stone Age one. These people had copper weapons and knew how to smelt lead. They had just domesticated the horse, which in fact is probably one reason why they conquered the world.

Another reason for learning some Proto-Indo-European is a bit more promising. Just as Sanskrit, being their ancestor, helps you pick up Sinhala, Hindi and Bengali, among many others, and Latin helps you with Ladino and Dalmatian (which is sadly extinct, the last speaker having been blown up during road works in 1898), so Proto-Indo-European should help you with “everything”! Not literally everything, of course, but most languages originating in Europe and many of those from Asia, which of course spread during the colonial era to much of the rest of the globe. It also confronts you with the issue of complexity.

One thing which really bothers me about language change is that it tends to go from complex to simple. Languages generally become easier to pronounce, lose complex grammar and so forth as time goes by. Among Indo-European languages, English is an extreme case of this. Most of our verbs have only four forms: walk – walks – walked – walking. Even the most complex verb in the English language, “be”, has only eight forms in present-day English prestige dialects. Almost all noun plurals end in “-s” or “-es”, and as for cases, most people will just look at you blankly if you even mention them. In the past, of course, English was much more complex. You only need to look at Shakespeare or the King James Bible to see all the “thees” and “thous” and their appropriate verbal forms. There also used to be more strong verbs – verbs like drive – drives – drove – driven – driving. “Help” and “climb” used to be strong verbs too, along with many others. As I’ve already mentioned, we used to have dual pronouns. Go back a bit further and we had five cases, three numbers (singular, dual and plural) and appropriate verb forms for all those numbers. English is now such a simple language in terms of inflections that if its history and connections weren’t known it wouldn’t even be considered Indo-European. Ancient Greek and Sanskrit, of course, have much more complex grammar. The former has five cases and originally had a dual number. Sanskrit is notoriously complex, even more so than Greek, because it’s more conservative. If you then reconstruct the ancestor of all these, Proto-Indo-European, it’s on another plane of complexity, although still simpler than a lot of languages which survive today, such as the Inuit languages and Navajo. The same process leads to the loss of difficult consonant clusters – nobody pronounces the K in “know” or “knight” any more, and all those “ough” spellings just look confusing to people who don’t know their history.
Hence Proto-Indo-European is a fortress of complexity and difficulty in pronunciation spoken by a warlike culture which devastated whole continents in late prehistory.

The reason this bothers me is a bit like the way history and past ways of life bothered Sarada when she was a child. She lived through a time of perceived increasing fairness, mercy and the decline of various kinds of prejudice. She was aware that in Victorian times things were not so good, and extrapolating from that led her to the conclusion that the distant past must’ve been unimaginably awful. I would tend to agree with her, although there also seems to be something of a cycle in these things: Georgian England, for instance, was doubtless an awful place to live because of the Bloody Code, which got people executed for stealing a handkerchief, but it was also less prudish about sex, though in a very misogynistic and patriarchal way. I have a similar problem with the decline in complexity in language, which seems to be universal. It strongly suggests that there was a time when languages were so complicated that nobody could ever have learned them properly in a human lifetime, which was in any case a lot shorter than it is in the richer parts of the world today, even taking infant mortality out of the equation. The almost extinct northern Japanese language Ainu, for example, used single words for entire sentences until fairly recently, and a lot of other languages still do. More than ninety percent of Inuit words are only ever spoken once, which is one reason why the myth of the words for snow can’t be true, or at least misses the point. It makes me think of cave people babbling gibberish at each other which occasionally made just a little bit of sense.

I have no intention of seriously plunging into Proto-Indo-European and learning it as if it’s a going concern. It is in any case rather hard to do so, because much of it has disappeared without trace. There are three very important sounds in the language – the so-called laryngeals – whose presence can only be seen in their influence on pronunciation in its descendants, and which had long since vanished by the time writing was invented. Nonetheless it does have a draw to it, and I will be learning some. It’s just a great pity that one of the few scraps of prehistoric culture that survives is the property of such an aggressive and destructive culture, although it might explain a lot about the nature of today’s world. This is not to say that there wasn’t a lot of violence and oppression elsewhere, so much as that all the older cultures which may have been more peaceful and laid back, and nicer to live in for most of their members, just got slaughtered and raped. Pretty depressing really.

Whig Prehistory

“Whig History” is the idea that history has been inexorably leading us up to this point. If we consider the world today to consist largely of liberal democracies, for example, Whig History looks back into the past and interprets various battles, incidents at royal courts, riots and the like as all pushing the world towards a system of one person, one vote to elect governments which respect the rule of law and make laws based on the will of the people as measured fairly, a free press and so forth. To the people living through those events it wouldn’t have seemed that way, and they might be horrified at the way the rabble are just allowed to have their way nowadays, or something. You can see this in Ancient Greece, where “aristocracy” refers to rule by the best and the idea of tyranny is not pejorative but seen as a viable option. The past is a foreign country, and all that.

This can also be applied to prehistory. It’s tempting to think of the history of life on this planet as starting with primitive forms which didn’t work properly and gradually got better and better, culminating in humans and the like today. However, it’s easily possible that we could all be wiped out by a virus, or that some human tumour cells could end up living a happy, successful existence independently after we’ve all died of cancer. On the whole, evolution doesn’t care about that sort of thing, and in fact what we think of as signs of how advanced we are may be anything but. We rely a lot on our society and cognitive abilities, for example, but this means that an abandoned baby will die rather than thrive, we only have a few children at once, and those children then take ages to become relatively independent, and even then we need each other all our lives. And that is a good thing. However, it may not be the best evolutionary approach for all circumstances and doesn’t always work as well as other strategies, such as producing huge clouds of millions of offspring and abandoning them, but having them able to defend themselves from day zero. Most of them die, but they’re immediately independent and they pass on the genome.

Looking back into the past, we see fish climbing out of the water (note that “climbing” – upwards? Towards the light?) and developing eggs with hard shells, turning into larger animals who then become dinosaurs while a load of tiny mouse-like animals scurry around their feet; the dinosaurs then succumb to the Chicxulub Impactor and are replaced because, we might imagine, they’re old hat and can’t survive in the new world, the mammals having some kind of inherent superiority, and so on. This is a crude caricature of course, and it isn’t what happened. Every moment in the past used to be now, and it usually made sense and hung together. There were usually working ecosystems, and although forms often did become extinct because more efficient competitors came along, that efficiency could be dependent on the conditions at the time, and had the others been transported into the future, when conditions had changed again, they might have done absolutely fine. It’s easy to imagine a near future where time travel has been invented and we are overrun with swarms of early rodent-like mammals, the land is covered by the Palaeozoic equivalent of Japanese knotweed and “dragonflies” the size of herons are driving sparrows into extinction. Realistically, this probably wouldn’t happen because the whole ecosystem wouldn’t be transported into the future, although this might be exactly the problem: no predators or infectious diseases to kill the griffinflies (which is what those giant “dragonflies” are sometimes called).

This applies, of course, to mammals. We have this idea that there was an “Age Of Reptiles” followed by an “Age Of Mammals”. As I pointed out the other day, the very idea that there are such things as reptiles is a bit suspect, because they don’t form a clade, and this is particularly relevant here because of the way vertebrate evolution actually seems to have gone. Mammals didn’t really evolve from reptiles. The earliest identifiable ancestors of mammals and the earliest identifiable ancestors of, say, lizards and crocodiles both evolved quite early on, almost directly from amphibians – which incidentally were not amphibians as we know them today, and it even used to be questioned whether they were related to modern amphibians at all, although it turns out they are. Rather, early amphibians gave rise to fully terrestrial animals laying hard-shelled eggs on land, who persisted for some time but then split into two different classes of animal, one known as the synapsids and leading to mammals, the other leading to birds and “reptiles”. This happened in the Carboniferous, and there were then two parallel classes of land vertebrates, plus amphibians, living side by side. After the biggest mass extinction of all at the end of the Permian, when only four percent of land life survived, there ensued what’s sometimes referred to as the “World Of Pigs”, when mammal-like species to some extent resembling pigs took over Pangaea, the single supercontinent which existed at the time, and there were, to be sure, reptiles, such as there ever are reptiles, in a subordinate position. Looking at such a world, one could be forgiven for supposing that these mammal-like forms would continue to dominate and become even more mammalian until maybe, and this really is quite Whiggish, humanoids evolved some time in the Cretaceous or something.

That didn’t happen. What actually happened was that small dinosaurs, a bit like tiny flightless birds, evolved towards the end of the Triassic, and there was another mass extinction, setting the early mammal-like forms back and letting the dinosaurs get the upper hand for an amazingly long period of time, more than one hundred and thirty million years in fact. But mammals themselves evolved in the early Jurassic, pretty soon after dinosaurs first appeared. Similarly, after the impact at Chicxulub, referred to hereinafter as the K-T Event, reptiles persisted, including even quite large apex predators, and the modern dinosaurs we refer to as birds prevailed. “This is a world where birds eat horses”, as Kenneth Branagh once memorably put it. There were even large reptiles coexisting with humans – the nine-metre-long monitor lizard encountered by Australian Aboriginals and the giant tortoises of the Pacific islands. Then there’s the old idea of dinosaurs being cold-blooded, which has now long been retired. It also seems that the earliest mammals did not in fact generate their own heat, as suggested by the fact that they seem to have lived several times as long as today’s mammals of the same size.

Mentioning size brings me to my main point. In the previous post I focussed on our own ancestors, notably the omomyids and, going back further to the end of the Cretaceous, the common ancestors of rodents and primates. These are the survivors, but looking at the world at that time they would’ve been quite insignificant, not only compared to dinosaurs and reptiles but also to other mammals and mammal-like animals. There have been primates for sixty-odd million years, and in fact most groups of mammals around today have been around for about that long, having been part of the original expansion after the K-T event. The oldest currently surviving order of mammals is probably the monotremes, including the echidnas and the duck-billed platypus, but they’re not particularly representative of what mammals were like at the time and are more like tarsiers in that they’re unusually specialised and have managed to carve out a niche for themselves in which they have survived, which is a bit strange considering that specialised organisms frequently do really badly when things change and on the whole do in fact die out. But anyway, size. There was a whole group of successful and sometimes fairly large mammals who were around for more than twice as long as any of the orders surviving today apart from the monotremes, and who could therefore be seen as the most successful mammals in the history of the planet. These were the multituberculates.

The multituberculates, to be fair, were often quite small and seemingly insignificant, and were superficially somewhat like rodents. The earliest known representative was Rugosodon, who lived 160 million years ago, around thirty million years after the earliest known probable true mammals, the morganucodontids. At this point it probably bears repeating that one heck of a lot of extinct mammals are referred to as “-odon”, “-odontid” and so forth because their teeth are all that survive, being the hardest and therefore most durable part of the mammalian body, and the palaeontology of mammals could be described as a lot of sets of teeth mating with each other to produce exciting new forms of teeth. The last multituberculates are hard to identify, but they survived the K-T event and persisted into the so-called “Age Of Mammals”, possibly into the Miocene, the early part of the Neogene period, but more conservatively probably only into the Eocene. They were not therian. Therian mammals include marsupials and placental mammals like ourselves, and are descended from small mammals traceable into the early Jurassic, with fossils from the late Jurassic. They exclude monotremes. Multituberculates were, along with the gondwanatheres and probably some others, “Allotheria”, closer to Theria than the monotremes are but not closely related to either marsupials or placental mammals. They have no living descendants at all, in spite of the fact that they were the most successful mammals ever. “Look on my works, ye Mighty, and despair!” In fact, if you look at the diversity of mammals since the dinosaurs, it peaked many millions of years ago and started to decline long before the evolution of humans, so that can’t be blamed on us, although of course the current mass extinction can be. Possibly the most disconcerting thing about multituberculates is that they may have lacked external ears.
I think of visible ears as very distinctive of mammals, but monotremes lack them, as do many marine mammals. External ears seem in fact to have been a therian innovation, so multituberculates may well not have had them.

Much of the dentition was like that of rodents, with gnawing incisors at the front and a diastema (gap) behind them, but there are a couple of big differences in their jaws and teeth. One of the premolars is much larger than the other teeth and is sharp at the front rather than the top, and instead of chewing their food by moving their jaws up and down or side to side, they bit front to back, which is why the “blade” of that tooth is at the front rather than on top. This way of chewing is found in no living animals, with the possible exception of dugongs, but was quite common in Mesozoic times. One consequence was that their jaw muscles were different from those of any living mammal, because they had to be able to pull back and forth as well as up and down. Elsewhere in the head are other indications of their differences. Crania in general usually give some indication of the shape of the brain, although this can be misleading. Mammals today have the familiar brain lobes found in humans. A dog’s brain, for example, has frontal, temporal, parietal and occipital lobes just like a human’s, although placental mammals generally also have the corpus callosum linking the hemispheres, which marsupials lack. Casts of crania give a rough idea of the external shape of an animal’s brain, and in the case of multituberculates the gross anatomy of the brain seems to have been completely different: their brains evolved from those of the first mammals in a completely different direction from any living mammal’s. It’s probably going to be very hard to work out what was actually going on in those brains, of course, but they were often quite large.

Later on in the Cretaceous, some of them evolved hypsodont teeth. These are high-crowned teeth with deep layers of enamel which can resist the grinding effect of constantly chewing gritty food, and they’re found in horses, some rhinos, deer and other grazing mammals, suggesting that these multituberculates were the very first grazers. Grass has a slightly odd history. Nowadays there are vast prairies, savannah and other huge areas of grassland, and of course lots of mammalian grazers, and this has been so for something like thirty million years, but grass did exist long before that, far back into the Mesozoic. However, it was by no means dominant. Rice- and bamboo-like grasses existed in India, which was not part of Asia at the time but an island continent, suggesting that grass evolved before India split from Antarctica. Grass pollen is also found in the Cretaceous, and dinosaur dung contains the distinctive types of phytoliths found only in grasses. But at the time, grass was just another plant. It wasn’t found in huge swathes like today, and the main plants on “grasslands” would’ve been ferns, horsetails and the now rare gnetales, which are kind of herbaceous relatives of conifers, like ma huang. At the time, a grass-eating mammal would have been highly specialised and wouldn’t have been able to roam over an extensive plain of the stuff.

This brings home a startling contrast with mammals today. Many mammals, ourselves included, evolved in step with particular plants. A human diet excluding all flowering plants would have no fruit, no bread or pasta, none of the cooking oils, hardly any herbs or spices and so on. As primates we developed in the broad-leaved trees eating their fruit. The very reason we can see the colour red is probably to do with being able to spot berries and ripening or poisonous fruit. Likewise ungulates very often eat only grass, including rhinos, horses and donkeys, and the predators who feed on them therefore also depend on grass. None of those plants was dominant in the Mesozoic. Although broad-leaved, fruiting trees did appear towards the end of the era, most of the trees present through most of the career of the multituberculates were conifers, including monkey puzzle trees, and that name indicates exactly how poor a fit primates are with such organisms. Nonetheless, many multituberculates were arboreal, including the original Rugosodon. This, like some of its relatives, was able to swivel its feet around 180°, apparently to get a good grip on tree trunks, and others had something like opposable thumbs – thumbs and big toes which could be moved away from and towards the other digits on those limbs, rather like the arrangement found in New World monkeys. This early example was also omnivorous, unlike most mammals at the time, who were insectivores, and the ability to thrive on a range of foodstuffs could be a clue to their enormous success. This brings to mind the human situation, as we too are originally omnivorous. Unlike the multituberculates, as far as anyone can tell, our very omnivory makes it easier for us to make a conscious choice to eat only plants and make up any nutritional shortfall with nutrients synthesised through technology.

These were the first mammalian tree-dwellers. Many were chipmunk- or squirrel-like, like our own ancestors millions of years later. The larger ones during the Mesozoic were up to about the size of badgers and foxes, and, again unlike most other mammals of the time, were herbivorous. These were the taeniolabids. The largest of all lived after the K-T event and may have weighed up to about a hundred kilogrammes, considerably heavier than an average-sized adult human and about the size of a wart hog.

What went wrong, then? Why are there no multituberculates today? They survived the K-T event and even got large quite quickly afterwards, within a couple of million years. One of the most noticeable consequences of K-T was that it wiped out all animals with a mass above about two dozen kilos, which is the size of a beaver or a gazelle. Clearly many larger mammals would not have survived, but there were still the squirrel-like forms and the smaller taeniolabids, which existed on either side of the event. What seems to have happened is that in Asia in particular they did very badly and never recovered, unlike in other parts of the world, where they bounced back pretty fast. This enabled Asian glires to take advantage. Glires are the clade including rodents and lagomorphs (e.g. hares); tree shrews are close relatives just outside it, and certainly tree shrews and squirrels are pretty similar to ptilodonts, the squirrel-like multituberculates. Unlike the ever-growing incisors of rodents, multituberculate incisors, like human and most other mammalian teeth, grew to a fixed size, so they didn’t need to gnaw constantly to stop their teeth from outgrowing their mouths. Rodents may also have had larger litters. Multituberculates, like all non-placental mammals, had rib-like bones (epipubic bones) in their abdomens and very small birth canals, meaning that they could only either have laid eggs or given birth to very immature young like marsupials. What seems to have happened is that rodents became successful in Asia and outcompeted them, then spread in a series of waves to most of the rest of the world, similarly driving them into extinction, and even today the rodents are the most successful group of mammals. In fact rodents evolved incredibly early, in the Danian Age, within four million years of the K-T event, and began to diversify in the Eocene.
In the meantime, their cousins the lagomorphs seem to have evolved in India while it was still isolated, were originally insectivorous, and have declined considerably in the past few million years. It’s also possible that the multituberculates were already doing badly before the rodents replaced them outside Asia.

Now I’ve said that this is not supposed to be Whig prehistory, and by no means are things supposed to be evolving towards intelligent, tool-using vertebrates, but an intriguing possibility remains. What if K-T hadn’t happened? The multituberculates would not have lost the upper hand anywhere, and there would have been large-brained animals living in trees with opposable thumbs, perhaps moving into the niche made available by the evolution of broad-leaved, soft-fruit-bearing trees. It doesn’t seem to stretch credulity too far to imagine today’s world with non-avian dinosaurs still in it, and with intelligent, tool-using multituberculates instead of humans, still chewing their food in that peculiar back-and-forth manner and laying eggs. Maybe they’d even be humanoid and have the same kind of colour vision. But for all anyone knows, humans and intelligent life are a complete fluke and quite unlikely, so who knows? Even so, we shouldn’t assume things would have turned out anything like they have, and we shouldn’t lose sight of the fact that in a very real sense the multituberculates were so much more successful than primates, rodents or any other surviving order of mammals. And they walked with the dinosaurs.