Why I’ve Gone Quiet

A week or so ago, I posted what was supposed to be the first two of a series of entries about climate change, logical fallacies in arguments, the psychology of climate change denial, other areas of politics where similar denials and obfuscations take place and so on.  It was actually proving to be quite therapeutic to do this, because it’s what I’m currently studying and it proved to be a distraction from my usual internal musings and probably quite negative behaviour elsewhere.  Why not continue in this vein then?

This could be seen as yet another example of something which belongs on transwaffle, which if I’m honest with myself I consider moribund by this point but which would allow me to witter on unobserved, as would a notepad.  The reason it’s not on there is that it can be re-stated in a gender-neutral way, but before I do that I’ll state it in terms of gendered relationships, because it serves as a good illustration of stereotypically feminine and masculine language use.

I usually apply a rule to myself that in a meeting I will only start to contribute when at least six cis women, as I perceive them, have spoken.  A common result of this is that I never actually get to say anything, which is fine by me.  However, this tends to get messed up if I actually give an introductory address of some kind, as I did last week.  It’s then difficult to know what my position is, as I have already contributed to the discussion and people are then likely to ask me questions, to which of course I should respond.  This makes me feel I’m building up a deficit which I need to remedy later.  However, there is a problem when I don’t contribute: what often ensues feels to me like a baffling and uncomfortable silence, with people failing to make contributions to the conversation.  I don’t want to speak just to fill the silence, but there are often a load of things I’m burning to say, which I imagine are in other people’s minds but which they are unaccountably not saying, assuming their reasons are unlike mine.

Often my contributions are further delayed by men contributing to the discussion, because this means I have to wait even longer for six women to have said something.  The thing about this is that to some extent the onus is on the other people either to speak or refrain from speaking in order that I can speak.  At first this looks like a gender-based issue, but there are reasons for supposing that it isn’t.  Two women I know pretty well have themselves said that they often find themselves in situations where other people are quiet and not contributing noticeably to a conversation, and they speak and end up feeling that they’ve said too much compared to others.  The fact that this may not even be a gender issue is one reason this is here.

Also, in practising this rule I have to presume the gender identity of the other people present.  Among the six women and the various numbers of men who say something, not all of them may be cis, and not all of them may be gender “euphoric”, as it were.  There might be trans men, trans women who pass well and closeted gender dysphoric people.  Given the current incidence of gender and sexual minority people coming out of the closet, and the nature of the meetings I go to, this is a bigger problem than it might otherwise be.  Consequently I feel even less like contributing.

Getting back to this blog, this post is more or less the text of my introduction to the subject.  Putting this through Gender Guesser yields the following result:


(This will probably turn out to be the featured image, which is a bit annoying).

Gender Guesser is not marvellously accurate of course.  However, its algorithm is based on the features of language which I focus on myself, and in fact my dysphoria is substantially focussed on psychological aspects of my assigned gender rather than aspects of physical appearance.  Consequently, claims that I am on the autistic spectrum depress me because of Baron-Cohen’s “extreme male brain” view of the nature of autism, for instance.

Analyses of female and male authorship of texts and speech have tended to throw up the same kinds of differences in use of language, but different interpretations have been made of these differences.  For instance, Otto Jespersen, the male Danish linguist who flourished at the turn of the nineteenth and twentieth centuries and might therefore be expected to have attitudes typical of the time regarding sex and gender, claimed that the differences betrayed women’s inferior intelligence.  One example of this was that women use coördinating conjunctions such as “and”, “or” and “but” more than men do, and use subordinating conjunctions such as “therefore”, “although” and “because” less than men do, which Jespersen interpreted as meaning that women use language in a less intelligent way.  Later feminist commentators acknowledged these differences but attributed the cause to men being conditioned to be more confident in their use of language than women.

Deborah Tannen is probably the most important influence on me in this respect.  My understanding of stereotypically feminine and masculine use of language is that women are more emotionally involved and engaging, try to build empathy, share and put the reader at ease, and that their language is closer to the language of fiction even when they are writing factually, whereas men use language to hoard information, establish their superiority, write to inform and draw attention to themselves, and that their language is closer to factual text even when writing fiction.  Of course these are stereotypes and there is a big overlap, and other factors come into consideration, such as copy-editing by people whose gender differs from the writer’s, attempts to adopt personas in fictional writing and the adoption of a particular style considered appropriate for a given passage.  However, the description I’ve just offered is of course very much wedded to traditional gender roles and caricatures of femininity and masculinity.  Therefore, at this point I choose to remove gender-based labels from these features and simply call them different styles of language use.

My claim is that a style of language that doesn’t hoard information but shares it is more positive and progressive than one which does the opposite.  This means, among other things, clarity and brevity, and efforts to avoid jargon, which is anti-language.  Anti-language is language created to exclude outsiders from understanding.  It can have positive uses, for instance Polari, the GSM argot, was effective in preventing homophobic persecution in Britain in the mid-twentieth century and possibly before.  As someone who is very interested in language, I have a strong affection for jargon and anti-language which is possibly unhelpful if I actually mean to communicate.  Knowledge is power.  That means that power-sharing involves effective communication of practically applicable information.  There are limits to this, for instance avoiding jargon can interfere with memory and strain attention span because more words may be needed to get the same point across and a jargon word which aims to lasso a bit of reality and refer to it can be a very helpful shortcut here.  I realise I don’t communicate well, and that if I was signing, I’d be signing towards myself a lot of the time.  It’s less obvious that I’m doing this in writing, but it’s still going on.  This is in a sense all mirror-writing, because I’m holding the page up and writing for my own sake, and it’s reversed for the reader.  Nonetheless I do want to communicate effectively.

People who didn’t know me before the 1990s probably don’t realise this, but it wasn’t always this bad.  It was mainly postgraduate work which led me to use language in this way.  Before that, I was even able to edit other people’s work for clarity and brevity.  The kind of language used in particular by French-language philosophers amounts to an anti-language.  Jacques Lacan is a notorious example of this.  “The less you understand, the more you listen” was one of his dicta.  His aim was apparently to generate a mystical-like meaning in the listener or reader rather than to be understood.  I wouldn’t say this attitude is unconnected to his emotionally abusive approach to “therapy”, and it’s not to be emulated.  It’s a disease, an attempt to fiddle while Rome burns which is rife in critical theory, and it needs to stop.  Anyway, this is the source of my own obscurantism, and I wish it would go away.

When I’ve written on such subjects as Fear, Uncertainty and Doubt, climate change denial or climate change itself, it’s an attempt at factual writing, but the question arises in my mind as to whether this is in fact the best approach.  Although knowledge is indeed power, it feels like there’s a sense in which using information in this way is playing the same game as one’s opponents.  I still feel there’s a place for it.  The question is whether it’s my place.  I’m expressing all this stuff, communicating all this information, but that’s informative rather than engaging writing, something I struggle endlessly but unsuccessfully against doing.  I would much prefer to engage my readers’ feelings than tell them about stuff, or show them stuff.

This brings me to the thorny issue of women-only shortlists.  I am now a member of the Labour Party, and of course I’m trans.  I believe it’s vital that Labour win the next election, and also that politics become “feminised”.  The quotes around that word are about the fact that it needn’t be labelled that way.  I want political debate, such as goes on to decide law in Parliament and elsewhere, to be a different kind of conversation, using a different approach to language, and I believe that in order for this to happen, trans women need to be excluded from women-only shortlists unless they’re using language in a less typically masculine way.  But they’re not.  I’ve taken ten extensive passages authored by trans women and put them through the Gender Guesser algorithm.  Every one showed up as having male authorship.  Likewise, I’ve taken ten passages by non-gender conformist cis women such as Germaine Greer and done the same.  They all came out as female.  Although the accuracy is only around sixty percent, the forty percent chance of a misclassification has to be multiplied in afresh for each passage.  By the time it’s happened ten times in a row, the probability of the whole run being down to chance is 0.4 to the tenth power, roughly one in ten thousand, on each side.  Multiplied together, the chance of both runs being coincidence is around one in a hundred million.
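The arithmetic behind that estimate can be sketched quickly. This is a minimal check under two assumptions not stated by Gender Guesser itself: that the classifier is right 60% of the time, and that the passages count as independent trials.

```python
# A sketch of the probability reasoning above: if the classifier is right
# 60% of the time, each individual misclassification has probability 0.4.
p_error = 0.4
n = 10  # ten passages per group

# Chance that one whole group of ten is misclassified purely by chance
p_one_side = p_error ** n
print(f"one side: about 1 in {round(1 / p_one_side):,}")   # ~1 in 9,537

# Chance that both groups of ten come out wrong simultaneously
p_both = p_one_side ** 2
print(f"both sides: about 1 in {round(1 / p_both):,}")     # ~1 in 90,949,470
```

So even with a weak classifier, ten consistent results per side make coincidence an extremely poor explanation, provided the independence assumption holds.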

To defuse this from a debate about trans issues, look at it this way.  We desperately need politics to be based on a different kind of interaction and communication than it is, and to have people with diverse experience in Parliament.  Assuming that trans women on women-only shortlists got elected, which is an unwarranted assumption, the fact that we tend to use language in the same way as a typical man strongly suggests to me that this would lead to politics as usual.  For this reason, unless you’re right wing, in which case it serves your needs well that non-Labour candidates are likely to win in elections with trans women as Labour candidates, it’s a politically conservative move to allow people who use language in that way onto women-only shortlists.  There are several other good reasons for excluding us too.  However, rather than basing the exclusion on gender identity, it could be based on analysis of language use.  That would be a politically expedient move to dodge the gender issue, but it would still have the same result.

Getting back to the subject of this blog, the reason I’ve gone quiet is similar to the incident, which you may remember, when I started a home ed wiki, put a lot of effort into producing material, then realised nobody else had contributed and deleted everything.  There needs to be diversity in such things.  I say stuff, but other people are remaining silent.  Maybe they should start saying things, because then I will have permission to express myself, but as it stands I’m very reluctant to do so, which is a shame because I still believe that what I want to communicate is important.  Recent experience has shown me that a heck of a lot of people are poorly-informed and have opinions which are quite clearly spurious, but continue under the impression that they know stuff, in a manner which endangers the survival of the species and the well-being of themselves and others.  But I don’t feel right about communicating this unless other people are also making a contribution.  So for now, I’m saying nothing.



I’m taking a break from the series of posts around the politics of denial, fallacies and so forth today, because it’s the seventy-third anniversary of the bombing of Hiroshima, and this shouldn’t be forgotten. It killed something like 100 000 people in one go, and after all this time it might be thought that nothing remains to be said about it, but if we’re going to remember the First World War we should also remember this.

One unfortunate person was involved in both Hiroshima and Nagasaki. 山口 彊, Yamaguchi Tsutomu, Nagasaki native and Mitsubishi employee, visited Hiroshima and was affected by the explosion. He then returned to Nagasaki, returned to work despite his injuries and was affected by the second explosion. Although he alone was recognised by the Japanese government as having been affected by both, there were around seventy others. He died of stomach cancer in 2010. As such, he was one of the 被爆者, hibakusha, people affected by the radiation of Hiroshima or Nagasaki. The hibakusha get a monthly allowance and free medical care, and there have been 650 000 of them, some not born at the time of the attacks. Around 170 000 survive today. There were also 20 000 Korean slaves in Hiroshima at the time and 2 000 in Nagasaki, and also prisoners of war, one of whom I’ve met.

At the time, the bombing of Hiroshima was considered epoch-making, and Asimov, for example, used it as a zero point for a dating system in some of his SF (works set further in the future use Galactic Era and Foundation Era instead). Likewise, Salvador Dali was inspired to begin his Nuclear Mysticism phase by these events. I feel quite strongly that Dali’s playfulness is in fairly poor taste in this context, rather similar to his use of Hitler a few years earlier.

It has been maintained, and personally I believe this, that the Japanese government was suing for peace shortly before the bombs were dropped, but that the US wanted to use them as a demonstration to the Soviets that they had a dangerous weapon which could be used against the USSR and the later Warsaw Pact countries. There is also controversy over the translation of the Japanese term 黙殺, mokusatsu, “kill by silence”, in a response to the diplomatic overtures from America demanding surrender. This could have been translated as “no comment” but was taken to mean “treat with silent contempt” by the Allies, and it’s suggested that Truman’s decision to have the cities bombed was swung by this interpretation. It may also have been an attempt by the Japanese government to appease its military. At this point it would be easy to get into a discussion about Japanese customs and culture which plays into the Japanese exceptionalism agenda, but although it’s not clear when cultural considerations should be taken into account and when they should be ignored, Orientalism doesn’t seem appropriate here. Just as Aokigahara is in a strong sense just the Japanese Beachy Head, so speculation about Japanese etiquette in this situation is in danger of being racist.

Speaking of racism, the response of Gaijin (外人) to the bombings often shocks me. One person told me that he thought the problem was not that we had bombed the Japanese twice but that we hadn’t bombed the entire country with nuclear weapons. Another person, a friend of mine, was most unkeen on my attempt to introduce aspects of Japanese culture to our home ed group because of the way the nation had behaved towards us during the War. There is of course no excuse for any of that behaviour. However, I note that the mainly white people of Nazi Germany are not now, nor should they be, conflated with the Nazis, and I’m more than a little suspicious of the fact that the Japanese still have this stigma in people’s minds after all this time and just happen to be non-white and decidedly non-European.

A similar racism also affects the claim that the Japanese are the only nation to have suffered a nuclear attack. This is simply untrue. The tests on Bikini Atoll in the 1940s and ’50s, carried out by the US, involved the removal of the Marshallese people from their homeland to a place which could not sustain their lives, causing starvation. Incidentally, the tests at Bikini are tangentially related to yesterday’s post: just as the carbon 13 and absent carbon 14 of fossil fuel carbon skew radiocarbon dating for artefacts from the eighteenth century onwards, so the radiation from Castle Bravo has likewise changed the accuracy of radiocarbon dating for more recent objects. The tests also led to fallout on the people living on Rongelap, Rongerik and Utrik Atolls, many of whom came down with acute radiation sickness afterwards.

Better known is the plight of the Western Shoshone whose land in Nevada has also been used to test nuclear weapons. This has led to many cases of cancer among them. The water table was, unsurprisingly, poisoned. The Western Shoshone claim that the US government is illegally trespassing on their land, and there is also a nuclear waste storage site at Yucca Mountain which was built without their permission.

I haven’t looked into it, but I presume the Soviet Union and China also carried out similar tests and the fact that I’ve only mentioned the US, and by extension NATO, in this post doesn’t mean I consider that any better. There’s no excuse for any of it.

To conclude then, Hiroshima and Nagasaki probably didn’t bring the Second World War to an end but slightly extended it, the reasons for bombing Hiroshima and Nagasaki were pretty dubious even given that there could be an excuse for doing so, and Japan is not the only nation to have been affected by nuclear weapons. It’s just that the other ones are indigenous people and therefore “don’t count”.

The Smoking Gun Of Climate Change

Photo by Pixabay on Pexels.com

Suppose there were two types of gun.  One of them is automatic.  Not just automatic automatic, but actually automatic.  It’s a roving gun “robot” which works on its own, aiming at and shooting people without human intervention, and it evolved without humans.  It also mimics human guns, so at first glance it’s indistinguishable from a gun manufactured by human beings.  The other type is a hand-held gun manufactured by humans.  Maybe the natural guns are too dangerous to approach, so humans ended up making their own.  The only difference is that human guns emit smoke with less nitrogen 15 in it than the natural guns.  Now suppose you arrive on a scene with a dead body which has recently been shot in the head, with a smoking gun lying next to it.  By measuring the smoke, it would be straightforward to work out whether it was homicide or an attack by a natural gun.

There are several lines of evidence which indicate that climate change is caused by human activity, mainly the release of carbon dioxide from fossil fuel combustion.  One of these is the isotopic composition of the carbon in the environment, and another is the pattern of warming in the atmosphere.  There is also the claim that fluctuations in solar activity drive climate change.  In today’s entry I’m going to look at all of these and possibly some others, starting with solar activity.


Early in the history of solar observation, astronomers discovered that the visible surface of the Sun wasn’t immaculate but suffered from occasional spots.  I presume that at the time certain religious people would have regarded this as a problem since it suggested that the Sun wasn’t perfect and that it had features invisible to the naked eye which therefore served no human purpose.  This kind of thinking is also sometimes associated with climate change denial, but I won’t veer off-topic.

It’s clearly not safe to look directly at the Sun through a telescope with no protection, but it so happens that I have done this through wads of overexposed photographic negatives and by projecting the image onto paper, only the latter whereof I could recommend, but I do recommend it because if you have a pair of binoculars, this is one of the easiest pieces of astronomy you can ever do.  I can therefore confirm that sunspots do exist.  I’ve personally seen them.

Sunspots are planet-sized vortices on the photosphere (visible surface) of the Sun.  They are somewhat cooler than their surroundings and therefore look dark, but in isolation they would glow pretty brightly.  They follow an eleven-year cycle in which they first appear at mid-latitudes, about halfway between the equator and the poles, and gradually migrate towards the equator.  I can also confirm that this happens because I’ve seen it when observing the Sun in the ways I’ve described over that period of time.  Sunspots are associated with intense magnetic activity.

Sunspot_Numbers (CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=969067)

One of the oddities found during the centuries over which sunspots have been observed is the Maunder Minimum.  Between 1645 and 1715, very few sunspots were observed.  This was also the peak of the Little Ice Age, and consequently an association is often made between the two: this was the cold period in Europe during which frost fairs were held on the Thames, and which later gave rise to the phrase “eighteen hundred and froze to death”.  The Little Ice Age lasted from about 1200 to 1850 with a respite in the sixteenth century, and was at its worst during the same period as the Maunder Minimum.

Other stars also have spots, called in this case “starspots”.  In some cases they have so many that they make a substantial observable difference to the brightness of those stars.  The maximum coverage is about 30%, which is of course far more than the Sun’s.  If thirty percent of our star’s surface were covered in sunspots, its luminosity would drop by around 15%, which, if this planet relied simply on heat and light from the Sun to maintain its temperature, would give it an average surface temperature of around -21 degrees C.  That temperature would quickly drop further as the ice covering the planet reflected sunlight back into space, and we’d be looking at a Snowball Earth scenario, as occurred back in the Cryogenian period before complex life evolved.  Hence a planet orbiting a star with wild starspot fluctuations would be difficult for us to live on, although the Cryogenian is thought by some to have kick-started the evolution of more advanced organisms here.
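As a rough illustration of the sums involved, here is a minimal Stefan-Boltzmann sketch. It assumes a bare-rock Earth with an albedo of 0.3 and no greenhouse effect; the exact figure depends heavily on those assumptions, which is why it comes out several degrees colder than the one quoted above.

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0              # present-day solar constant, W m^-2
ALBEDO = 0.3             # assumed planetary albedo

def equilibrium_temp(solar_constant: float) -> float:
    """Radiative equilibrium temperature (K), no greenhouse effect."""
    absorbed = solar_constant * (1 - ALBEDO) / 4   # averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

t_now = equilibrium_temp(S0)              # ~255 K, about -18 C
t_spotted = equilibrium_temp(0.85 * S0)   # Sun dimmed 15% by starspots
print(f"{t_now - 273.15:.0f} C -> {t_spotted - 273.15:.0f} C")
```

The point survives the choice of albedo: knocking 15% off the Sun’s output pushes the no-greenhouse equilibrium roughly ten degrees colder, well into ice-albedo runaway territory.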

Such stars are not sun-like.  However, extensive observations of more apparently sun-like stars have been made, which led astronomers to the conclusion that intervals like the Maunder Minimum were occurring in a substantial fraction of stars like ours, meaning that it could be expected to happen every few centuries.  Climate change scientists therefore previously thought it was possible that fluctuations in solar activity were a significant factor in climate change, and therefore that the cycles in this planet’s orbit mentioned yesterday could be offset by a more active Sun, thereby warming the planet.

The Hipparcos mission changed all that in the late ’80s and early ’90s.  This satellite measured more precisely the distances to a large number of stars, thereby messing up a number of SF stories.  I don’t know how they can forgive themselves.  Seriously though, the mission is a good example of how apparently purely scientific missions in space can prove to be of enormous practical use.  It emerged that a large number of the stars with very low starspot activity were further away than previously calculated and therefore much more luminous than once thought, meaning that they were also much older than the Sun.  The magnetic field of stars is mainly generated by their rotation, which acts as a dynamo, and as they age, stars spin more slowly and therefore have weaker magnetic fields and therefore fewer sunspots.  These were not, as it turned out, sun-like stars at all, or rather, they were on the main sequence like the Sun, but considerably closer to becoming red giants, and now had permanently lower activity than it.  Starspots were a sign of stellar adolescence.  This means that the current rise in the mean temperature of this planet could not be accounted for by fluctuations in the activity of our Sun, and that the Milankovitch cycles I mentioned yesterday are in fact mitigating the warming.

65_Myr_Climate_Change (CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=466265)

Looking back over the history of this planet reveals substantial fluctuations in global climate, such as the Cryogenian period mentioned earlier.  In the early Eocene, for example, the planet was so warm that the Arctic Ocean would’ve felt tepid and it would’ve been fine to swim in it naked.  Such major changes in climate are often used to argue that since Earth has been a lot hotter in the past than it is today, there won’t be a problem with it getting hotter again.  It is true that all the carbon stored in fossil fuels was previously in the atmosphere, oceans and living things, and that the carbon released on two occasions during the Eocene led to much warmer temperatures than the change today.  There are, though, a number of things wrong with this line of reasoning.

Whereas life did survive previous global warming events, they were usually associated with mass extinctions.  Moreover, humans are particularly sensitive to sea level rises because most of us live on low-lying ground near bodies of water.  Additionally, the speed of change is not on a geological time scale but an historical one, and the previous global warming events took many millennia to happen.  Finally, there’s the problem of induction.  That life survived is a given, but that doesn’t mean it will survive another such event, particularly an unusually rapid one like today’s.

It’s been established, then, that today’s climate change is occurring and is not due to Milankovitch cycles or fluctuations in the energy of the Sun, but why blame human beings for it?  Previous global warming events associated with the release of carbon dioxide have been linked to volcanism, and the amount of carbon released by volcanoes and other natural processes is indeed greater than the release from fossil fuels.  Annual fossil fuel use emits 29 gigatonnes of carbon dioxide, whereas land vegetation releases 439 gigatonnes and the ocean is responsible for an additional 332 gigatonnes.  However, the non-human emissions are more than balanced by absorption.  Volcanism is balanced by the reaction of acidified rain and other water with rock; although some of that acid comes from sulphur compounds in the atmosphere and is responsible for damage to forests and the like, it is also formed from carbon dioxide in the atmosphere.  Likewise, the land and the oceans absorb carbon, largely via photosynthesis, with reabsorption of 450 gigatonnes by the land and 338 gigatonnes by the ocean each year.  This has in the past generally achieved a balance with the considerable carbon dioxide releases by non-human processes.  Moreover, these absorption processes are linked to the emission processes: more photosynthesis goes along with more carbon dioxide exhalation by living things, and the production of new rock by volcanic eruptions helps to absorb the carbon dioxide they produce.  If this weren’t so, there would’ve been snowball earth scenarios before humans evolved, because the complete absence of carbon dioxide in the atmosphere would lead to an average temperature below freezing.  The balance also takes place over a very long period of time, as does the carbon dioxide emission.  The change which has occurred in the last couple of centuries would usually have taken at least five thousand years, and the last time atmospheric carbon dioxide was as high as it is now was during the Miocene Epoch.
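Putting those figures side by side shows why the comparatively small fossil-fuel term matters. This sketch uses only the numbers quoted above; real carbon inventories vary somewhat from source to source.

```python
# The yearly carbon dioxide budget sketched above, in gigatonnes of CO2.
emissions = {
    "fossil fuels": 29,
    "land vegetation and soils": 439,
    "ocean outgassing": 332,
}
absorption = {
    "land (photosynthesis, weathering)": 450,
    "ocean uptake": 338,
}

total_out = sum(emissions.values())   # 800 Gt emitted per year
total_in = sum(absorption.values())   # 788 Gt reabsorbed per year
net = total_out - total_in            # 12 Gt left in the atmosphere

# Without the fossil term, the natural system is a net sink:
natural_net = (total_out - emissions["fossil fuels"]) - total_in  # -17 Gt
print(f"net annual addition: {net} Gt; without fossil fuels: {natural_net} Gt")
```

The natural fluxes dwarf the human one, but they roughly cancel each other out; remove the fossil term and the budget flips from a net addition to a net removal, which is the whole argument in miniature.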

It’s also possible to establish that the carbon present in atmospheric carbon dioxide and the oceans is from human sources because of the ratio of carbon 13 to carbon 12 in it.  Carbon from living sources has less carbon 13 than carbon from non-living sources, for example igneous rocks, because photosynthesis absorbs proportionately more carbon 12 than carbon 13.  Carbon 14, which is used for dating fairly recent formerly living matter, has more or less completely decayed in fossil fuels, because their age is hundreds of times its half-life of about 5,730 years, whereas carbon 12 and 13 are stable.  Consequently, material which includes substantial amounts of carbon from fossil fuels appears to be a different age according to carbon dating.  This fact was discovered without reference to climate science and cannot therefore be part of a conspiracy.  There is also less carbon 13 in the ocean and living matter than there would be if the extra carbon came from volcanism, which dredges carbon up from the mantle, where there is no organic life or photosynthesis.  It can therefore be shown that the extra carbon in the oceans and the atmosphere is from fossil fuels, not volcanism.
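How completely the carbon 14 disappears is just exponential decay, and can be sketched as follows; the sample ages are illustrative, with coal and oil typically tens to hundreds of millions of years old.

```python
# Fraction of carbon 14 remaining after a given time, given its
# half-life of about 5,730 years.
HALF_LIFE = 5730.0  # years

def c14_fraction(age_years: float) -> float:
    """Fraction of the original carbon 14 remaining after age_years."""
    return 0.5 ** (age_years / HALF_LIFE)

for age in (5_730, 57_300, 1_000_000):
    print(f"after {age:>9,} years: {c14_fraction(age):.3g} remaining")
```

After ten half-lives barely a thousandth remains, and after a million years the fraction is so vanishingly small that fossil carbon is effectively carbon-14-free, which is exactly why burning it leaves a detectable isotopic fingerprint.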

A couple more claims:

  • Good climate records don’t go back very far because thermometers used to be less accurate.

Whereas it’s true that thermometers were less accurate, climate records are based on non-technological processes such as the ratio of oxygen 16 to oxygen 18 in ice cores from glaciers, tree rings and the direction of coiling of the shells of certain protozoa.  Bristlecone pine debris, for example, goes back around 9000 years.  These provide a more accurate measure of climate change than old thermometers, so there is no need to rely upon them.

  • The urban heat island effect.  Weather stations in urban areas are warmer than those in rural areas and climate change reflects urbanisation.

This is so.  The presence of more industrial activity leads to a local greenhouse effect and the absence of biological material on the ground, with the likes of concrete and asphalt replacing it, does mean that towns are warmer than the country.  Heat is also generated more in cities by industrial processes and heating.  However, the fastest-warming areas are places like Mongolia and the Amazon, and there is also rapid warming in ocean areas where there are no people or cities.  Meanwhile, places where there’s a lot of urban development sometimes warm more slowly, such as parts of the US and China.  There is also an upward trend in temperature in rural and urban weather stations.

  • Water vapour is a stronger greenhouse gas than carbon dioxide.

Whereas this is true, the release of carbon dioxide increases evaporation of water and leads to more being stored in the atmosphere.  There are only 400 parts per million of carbon dioxide in the atmosphere whereas water vapour can be as high as one percent.  The rising level of water vapour is initially triggered by the rising level of carbon dioxide, resulting in a feedback effect where the higher level of water vapour increases atmospheric temperature further and increases humidity, thereby trapping more heat.

Incidentally, ozone is also a greenhouse gas, but the loss of ozone from the upper atmosphere, which is now thankfully reversing, is not the cause of global warming.  On the contrary, the stratosphere is cooling, which is itself a signature of warming via the greenhouse effect and further evidence that it’s nothing to do with the Sun.

The greenhouse effect works in detail as follows.  The Sun’s radiation passes through the atmosphere and warms the surface, which re-emits the energy as infrared.  Greenhouse gases absorb this infrared and re-emit it in all directions, including up, but since the air gets thinner higher up, there are fewer molecules at altitude to re-absorb and re-emit it.  Therefore, heat gets trapped in the troposphere, the lowest level of the atmosphere, and the stratosphere cools.  In the nineteenth century it was predicted that carbon dioxide would lead to greater increases in temperature at night and during winter, because less heat would be lost, and this is in fact what is happening.  Hence it’s not so much heat waves and hot summers which indicate global warming as unusually warm spells in the winter, and unusually warm mean winter temperatures.
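A common classroom idealisation of this trapping is the one-layer atmosphere model: treat the atmosphere as a single slab which absorbs a fraction ε of the surface’s infrared and radiates half of it back down. This is a sketch rather than a real radiative-transfer calculation, and the absorptivity value below is tuned by hand to land near Earth’s observed surface temperature.

```python
def surface_temp(t_effective: float, epsilon: float) -> float:
    """Surface temperature (K) in the one-layer greenhouse model.

    Energy balance gives T_surface^4 = T_effective^4 / (1 - epsilon/2),
    where epsilon is the slab's infrared absorptivity (0 = no greenhouse).
    """
    return t_effective / (1 - epsilon / 2) ** 0.25

T_EFF = 255.0  # Earth's radiative equilibrium temperature, K (about -18 C)

print(f"no greenhouse: {surface_temp(T_EFF, 0.0) - 273.15:.0f} C")
print(f"epsilon = 0.78: {surface_temp(T_EFF, 0.78) - 273.15:.0f} C")  # ~15 C
```

Even this toy model reproduces the roughly 33-degree difference between the planet’s radiative temperature and its actual surface temperature, and makes it obvious why increasing the slab’s absorptivity warms the surface while the layers above radiate more freely to space.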

To sum up:

  • Global warming is taking place.  This can be seen from oxygen isotope ratios in ice cores, which provide a measure of average global temperature going back thousands of years, from a general upward trend in the frequency of record high temperatures, and from melting glaciers in places where the moisture of the air does not result in heavy snowfall.
  • Global warming is not due to Milankovitch cycles or fluctuations in solar activity.
  • The greenhouse effect is not the result of volcanism because the ratios of carbon isotopes mean the carbon is from formerly living sources such as fossil fuels.
  • Climate change is a hazard: warming events in the past were slower than today’s and still resulted in mass extinctions.
  • Human beings are particularly sensitive to climate change because we tend to live near bodies of water.

Consequently, something should be done about it, don’t you think?

Human-Caused Climate Change

This may just be the first of several posts about climate change. My post on Fear, Uncertainty and Doubt referred to it several times and since I’m studying it right now, it seemed apt to talk about the actual climate science involved rather than the processes involved in denial, although at some point I’ll also go into various prominent areas in which this strategy is taken as well as talking about its details. There also seems to be room for other related matters such as the question of logical fallacies in arguments. But for now, I want to talk about climate change and why it is almost certainly human activity which is currently driving it.

The immediate impetus for this arises from two sources. One is this photograph:

Copyright unknown, will be removed on request.

This is Rovaniemi on the Arctic Circle a couple of weeks ago, when the temperature was a record-breaking 32 degrees C. It’s a dramatic illustration, but also perhaps one which suggests we should all be using text-only browsers on the web, because it may not be as powerful evidence for global warming as it appears to be. It was this picture which persuaded me I ought to look into this more. Certain pressure groups have a history of bad science behind them, and if they’re supporting what I might think of as the good guys, it’s unhelpful to use bad science to argue for the position, even if the position is true. Having said that, I’ve talked about logos, ethos and pathos before, and when I campaigned for the Peace Movement I was always careful to use pathos rather than logos to attempt to persuade people. There is a place for pathos and ethos, but there can also be a place for logos, and in this case the place is this post.

32 degrees C is indeed a record-breaking temperature for Rovaniemi, and is proportionately higher than all other monthly record highs for the city.  It’s also further from the mean temperature on the hot side than any cold record is on the cold side, whether measured in kelvins, degrees Celsius or Fahrenheit.  Such an observation might form the beginning of a stronger case for the validity of this temperature as evidence, but it still isn’t enough.  I can recall seeing a video of Tromsø in June 1989 where it was clearly warm and sunny.  Tromsø is several hundred kilometres northwest of Rovaniemi, although again comparisons are not necessarily useful because unlike the latter, it’s on the coast.

The image works well as rhetoric, but should it fail in some way it risks discrediting the very opinion it most readily supports, making it comparable to the cherry-picking practised by climate change denialists.  Climatology deals in probabilities, and weather is not climate.  The probability of heat waves may well increase over the years as humans burn more fossil fuels and release more carbon dioxide into the atmosphere, but specific events such as this heat wave are not helpful as evidence.  What might be helpful is to look at the frequency of heat waves decade by decade in a particular place.  The problem is that “heat wave” is not a scientific term.  It’s more at home in journalism or everyday conversation, and because it isn’t rigorously defined it’s hard to point to specific events, call them heat waves, count them up and make a graph out of them.  I have to admit I find it frustrating that climate science deals in probabilities, because the inability to make a definite statement connecting, say, conditions in Rovaniemi to climate change makes it harder to persuade people that it’s real.  Nonetheless we’re stuck with this, and there are some definite things which can be said about the situation.

Although there is no firm definition of a heat wave (droughts suffer from the opposite problem of having too many definitions), record weather conditions are what they are, and these can be usefully employed in arguments for climate change.  Industrialisation correlates with the practice of keeping accurate quantitative climate records, so at first it may seem difficult to track links between industrialisation, the use of fossil fuels and climate change: instruments have become more accurate over time, and records taken before the Industrial Revolution were less reliable.  There are, though, ways round this problem.  If the climate is not changing, record high and low temperatures should keep pace with one another.  Imagine temperature records started to be kept in 1850.  The summer of 1850 is then guaranteed to include the hottest weather since records began, presumably followed by a winter with the coldest.  If there is no overall change in climate, the next year has roughly a 50% chance of setting a new hot record and an equal chance of a new cold one, the year after a one in three chance of each, and so on: in general, year n has about a one in n chance of breaking each record.  As time goes by the likelihood of either record being broken declines, and one would occasionally pull slightly ahead of the other before the situation reversed.  This, though, is not what happens.  As the years roll on, record high temperatures pull ahead of record lows consistently, and the ratio between the two sets of records is now about two hot records to each cold record.  Hence this year in Rovaniemi is still useful as a data point, but needs to be seen in historical context.  I don’t know what it did last summer.
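
This record-breaking argument can be checked with a toy simulation: generate many runs of annual temperatures with and without a small warming trend, and count how often new hot and cold records are set.  The trend and noise values below are arbitrary illustrative choices, not real climate data:

```python
# Toy check of the record-breaking argument.  Under a stable climate the
# chance that year n sets a new record is about 1/n, so hot and cold
# records stay in step; add a warming trend and hot records pull ahead.
# The trend and noise sizes are arbitrary illustrative values.
import random

random.seed(42)

def count_records(years=150, trend=0.0, trials=2000):
    """Count record-hot and record-cold years across many simulated runs."""
    hot = cold = 0
    for _ in range(trials):
        hi = lo = None
        for n in range(years):
            t = random.gauss(0, 1) + trend * n   # simulated annual mean
            if hi is None or t > hi:
                hi = t
                if n > 0:
                    hot += 1
            if lo is None or t < lo:
                lo = t
                if n > 0:
                    cold += 1
    return hot, cold

stable = count_records(trend=0.0)
warming = count_records(trend=0.02)
print("stable  hot/cold ratio:", round(stable[0] / stable[1], 2))   # near 1
print("warming hot/cold ratio:", round(warming[0] / warming[1], 2)) # well above 1
```

With no trend the two counts stay roughly equal, as the 1/n argument predicts; even a small trend makes hot records dominate, which is the pattern actually observed.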

There are other aspects of heat waves which can be observed and recorded in this way despite the vagueness of the notion, in particular how early in the summer they begin, and again there is a trend here.  It’s also possible to hypothesise about what causes heat waves in the northern temperate zone.  Rovaniemi is six kilometres south of the Arctic Circle and is therefore technically in a temperate rather than a polar region.  As such, it falls under the jet stream hypothesis of heat wave causation, and this is where things get more recognisably “sciency”, although a list of historical statistics is no less scientific.

The jet stream is a meandering current of fast-moving air eight to eleven kilometres above sea level, where winds typically blow at 100 to 200 kilometres an hour and can exceed 300.  Eastbound jet aircraft use it to travel more efficiently between locations in the northern hemisphere.  Other planets also have jet streams: Jupiter’s banded appearance, for instance, is due to a large number of them, although in its case they operate between a warmer lower level and a colder upper one.  A temperature gradient also drives our own jet stream.  Polar air is cold, tropical air warm.  Since warm fluids such as air expand and cold ones contract, with a few exceptions, tropical air expands towards the North Pole, displacing polar air towards the equator, which gives the jet stream a wavy course fluctuating between north and south.  However, the difference between the temperatures of polar and tropical air is shrinking, because the Arctic is warming faster than the tropics: the ice cover, which previously reflected light and heat back into space, is retreating, exposing darker surfaces which absorb more heat and so warm the air.  This reduced temperature gradient makes the meanderings wider, because there is less contrast.  According to the hypothesis, this means that warm conditions will tend to linger longer over larger areas, and so will cold conditions.  This seems to have been partly borne out by conditions in the Lower 48 US states in the winter of 2014: on the cold side of the jet stream, in the eastern states, the winter was unusually icy and harsh, while on the warm side, in the Southwest, it was one of the mildest on record.  Hence, if this hypothesis is corroborated, the fact of global warming, however caused, has produced a cold snap as well as an unusually mild winter.
It occurs to me to wonder what consequences those locations might have for decisions to vote Democrat or Republican, and therefore for the acceptance or rejection of anthropogenic climate change.

So far, nothing I’ve said is a clinching argument for climate change being caused by human activity, and if you’re expecting that today I’m afraid you’ll be disappointed: although there is strong evidence, all I’ve managed to describe so far is a correlation between the use of fossil fuels and the rise in mean global temperature.  This could conceivably be mere coincidence, and unfortunately the other source of the impetus for my post doesn’t provide the clinching argument either.  It’s certainly interesting that the Arctic ice is melting just as we’re adding carbon dioxide to the atmosphere, but based on what I’ve said so far there could easily be another explanation.  One thing it does illustrate, though, is the fickleness of opinion.  A single cold spell can be enough to persuade some people that global warming is a myth, and likewise a single heat wave will often convince the same people that it’s real.  The issue is therefore how to make a convincing connection to the larger-scale and longer-term processes involved.  What’s been said already, though, should be enough to demolish the myth that global warming isn’t taking place.

The second source was a conversation about why Earth is furthest from the Sun in early July and closest to it in early January.  This is more puzzling for someone living in the northern temperate belt than for someone in the south, because for the latter summer happens when the Sun is closer.  The answer is suggested by Geoffrey Chaucer’s reference to Aries in the Prologue to the Canterbury Tales:

the yonge sonne hath in the Ram his halve cours yronne.

Written in the late fourteenth century, this refers to the Vernal Equinox, which then occurred halfway through Aries, around 2nd April, rather than on 20th or 21st March as it currently does.  It’s a little confusing for us nowadays that the solstices occur so close to our closest and furthest points from the Sun, the perihelion and aphelion respectively, because this is in fact a coincidence.  As Chaucer’s line indicates, the equinox fell a fortnight later six hundred years ago.  This is because the elliptical orbit of our planet gradually shuffles round, one of several slow changes in its movement over moderately long periods of time.

Assume Earth to be a simple ocean-covered planet with no dry land at all, which is in fact a model used by climate simulation software to simplify things, though not in earnest to predict the actual climate of the real Earth.  On such a planet there would be an alternating pattern of milder seasons in one hemisphere and more extreme ones in the other, swapping over every ten thousand years or so.  For a while, summer north of the equator would occur when Earth was closest to the Sun, making the summers hotter and the winters colder; later it would occur when Earth was furthest away, giving cooler summers and milder winters.  In fact, because Earth’s orbit is nearly circular, this currently makes little difference: there is only about a 6% difference between the light and heat we receive from the Sun at the closest and furthest points, and the biggest seasonal driver is the angle of the surface to the sunlight, which varies far more than that.

The courses through which Earth goes are known as Milankovitch cycles, and there are three of them.  One has been hinted at already: the orbit around the Sun becomes more and less elliptical, over a period of roughly a hundred thousand years.  It’s currently near its most circular, with an eccentricity of under 2%, which gives the variation of about 6–7% in sunlight over the year.  At its maximum the eccentricity is nearly 6%, which leads to an annual variation of about a quarter.  The variation in brightness is roughly double the variation in distance because of the inverse square law: a light source at twice the distance is only a quarter as bright, its light being spread over four times the area.  Only in a two-dimensional universe would the two figures match.
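
The inverse square arithmetic can be checked directly: for an orbital eccentricity e, the distance from the Sun varies between (1 - e) and (1 + e) times the mean, so the ratio of sunlight received at the closest and furthest points is ((1 + e) / (1 - e))^2.  The eccentricity values below, about 0.017 today and about 0.058 at maximum, are the commonly quoted textbook figures:

```python
# Inverse-square check: for an orbit of eccentricity e, the Sun-Earth
# distance varies between a*(1 - e) and a*(1 + e), so the flux received
# at perihelion versus aphelion differs by ((1 + e) / (1 - e)) ** 2.

def flux_variation(e):
    """Fractional spread in solar flux over one orbit of eccentricity e."""
    return ((1 + e) / (1 - e)) ** 2 - 1

print(f"today   (e = 0.017): {flux_variation(0.017):.1%}")
print(f"maximum (e = 0.058): {flux_variation(0.058):.1%}")
```

This gives roughly 7% today and roughly 26% at maximum eccentricity, i.e. close to double the spread in distance in each case, as the inverse square law suggests for small e.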

The second and third variations concern the axial tilt.  Planets are in effect giant gyroscopes: the enormous angular momentum of their spin keeps their poles pointing in the same direction, but like smaller gyroscopes they wobble, a precession which takes Earth some twenty-six thousand years to complete.  The steepness of the tilt also varies, between about 22 and 24.5 degrees, over a period of 41 000 years.  A less pronounced tilt gives milder seasons in both hemispheres: cooler summers and warmer winters.  The warmer winter air can hold more moisture, which near the poles falls as snow, and the cooler summers then fail to melt it.  The snow reflects more sunlight back into space and cools the planet, and a feedback loop arises as the snow cover spreads.  The interaction of these three cycles produces climate variation over millennia, and is probably responsible for the alternating ice ages and interglacials of the past 800 000 years or so.

Given that the planet is currently in a state where these cycles are combining to provide relatively cold winters, it would normally be expected that a new ice age would start in about the thirty-fifth century. This also means that the explanation suggested by some climate change deniers that the warmer climate is due to these cycles is incorrect. If anything, they’re making climate change less severe than it would otherwise be. There is a further point regarding sunspot activity, but I’ll leave that for another post.

Therefore, although the information I’ve given so far today isn’t enough to say for sure that human activity is responsible for climate change, it is enough to rule out the possibility that variations in orbit and axial tilt are the cause. They are in fact mitigating factors.

Again, this is unfortunately confusing to explain, because it looks at first as though milder winters result from these orbital and axial shifts, but there is a bonus fact which also works against the claims of climate change deniers: the growth and shrinkage of glaciers.

It’s sometimes claimed that Earth is not warming because, whereas many glaciers are shrinking, some are growing.  This is in fact true, although the growing glaciers are a small minority.  The reason is the same as the reason milder winters can lead to ice buildup and long-term cooling.  Some glaciers grow because they’re in parts of the world where moisture in the air falls as snow rather than rain.  There is more evaporation from the sea, and the warmer air can hold more water, so when it does precipitate in these places the snowfalls are heavier and the glaciers grow.  Unlike the more general buildup of ice which leads to ice ages, though, there’s still an overall decrease in ice cover, which means less sunlight reflected back into space.  Hence although some glaciers are indeed growing, it isn’t enough to prevent global warming.

Furthermore, most glaciers are now smaller than they have been since before the last glacial period.  Frozen plants such as mosses and lichens, which can be dated to about 80 000 years ago, are being exposed as the ice melts.  This means those glaciers are smaller than at any time since hippos lived in the Thames, when the climate was that much warmer.

The mention of dating brings me to the question of isotopes, which can be used to construct a pretty clinching argument that climate change is human in origin. I’ll come to that in the next post.

Fear, Uncertainty And Doubt

No, this is not the adult version of Rice Krispies’ “Snap, Crackle and Pop” as a friend suggested. Incidentally, snap, crackle and pop are respectively the third, fourth and fifth derivatives of velocity, which I’ve mentioned before. Now you might think I have a smooth transition in mind between that fact(oid) and the subject of this post, but in fact I haven’t, unless it’s the question of whether that’s a real fact, because this is about fear, uncertainty and doubt as a political strategy or process.

The concept of FUD was named in the computer industry of the 1970s, although as a marketing strategy I’m sure it can be traced back much further.  The idea behind it is to sow the seeds of doubt in potential customers’ minds about goods and services you don’t want them to buy.  These needn’t only be competitors’ products, although it is most often applied to those: it can also be applied to one’s own, in which case it’s a form of conceptual built-in obsolescence.  It’s not that you won’t support a product any longer, or that it isn’t built to last physically, so much as that you hype your newest version and suggest there are, for example, unspecified security holes in the old one.  However, it’s easier to illustrate FUD using the notion of competition.

IBM used to be a regular practitioner of FUD.  It would use its size and success to assert its stability compared to smaller startups, even when those companies were more innovative thanks to a less stultifying, less entrenched culture.  This enabled it, for example, to build the IBM 5150, nowadays known as the PC, with an inferior CPU clocked slower than other computers’, less memory, later on unusually poor graphics and single-channel beeper sound, and still charge several times the price, because competitors such as Apricot were less well-known, less resistant to the vagaries of the market and more vulnerable to collapse.  Microsoft, who worked closely with IBM for a while, later took up this approach for their own products.  For instance, Windows 3.1, which ran perfectly well as a GUI on Digital Research’s DR-DOS, contained encrypted code designed to throw up a spurious error message if it detected it was being installed on anything other than Microsoft’s MS-DOS or IBM’s PC-DOS.  Later, when the open source operating system Linux was being given away for free, Microsoft came up with the concept of TCO – Total Cost of Ownership – the idea being that users accustomed to Windows would take a long time to learn Linux and be liable to make costly mistakes in the meantime.  Later still, Fox News joined in, presenting a news item about a student who had to drop out of college because her laptop had Linux and open-source software such as OpenOffice installed on it and she couldn’t keep up with her college work.  I also strongly suspect that the anti-virus software industry is almost entirely based on FUD.

The mention of Fox News brings the subject into the realm of more overt politics. Just as a media company might support market leaders, the people who pay to advertise on them or perhaps the companies preferred by the party supported by their proprietors or simply an economic environment which they believe gives them the upper hand, so might a political party, pressure or lobby group use FUD against its competitors, though this time with respect to policies, ideas and other political parties. It probably should be recognised also that this is not something the Left or other progressive groups are immune to, and the question is in a sense more ethical than political: is it honest to do this, and if not, is it okay to tell lies for the sake of achieving a better end? More profoundly, does this kind of approach to political issues turn political discourse into something else which is unhelpful and less attached to reality because, for example, of the alienation of use and exchange value? Do we end up with a kind of floaty, detached politics because everything is regarded as a product, or is this merely a recognition of a fact of political life?

One particularly clear example of FUD in politics operates between the Labour and Conservative Parties, or more broadly between centrally planned economies and what’s portrayed as laissez-faire capitalism.  Voters sometimes appear to choose the Tories because they know them and expect them to be the nasty party, a case of better the Devil you know.  Labour might seem more egalitarian and to care more about the vulnerable, but for such a voter this is not a selling point, because it can be painted as well-intentioned but misguided.  Who knows what chaos might ensue if a party not based on the tried and tested neoliberal policies of the last forty years were elected?  At least you know that the Conservatives will screw the poor, act callously towards the vulnerable and only care about the wealthy.  This is of course a caricature.

One of the consequences of privatisation of the public sector is the emergence of confusopolies, a concept which comes from the surprising quarter of Scott Adams, author of Dilbert and no friend of the Left.  A confusopoly is a group of companies offering similar products which get people to buy their stuff by presenting a bewildering range of options that are in fact all very much alike.  A particularly clear example has been the mobile phone service market, but it also applies to the former public utilities, such as the energy companies.  It maintains a situation where the services offered needn’t be high-quality or cost-effective, because the consumer is uncertain and feels life is too short to compare them.  It’s a kind of cooperative FUD, and it can even apply to major political parties.  In the Blair era, unless one had a particular interest in specific benefits which would accrue to one’s employer, or a political career of some kind, there was little reason to choose between Conservative and Labour, because their policies constituted just such a confusopoly: two parties offering infinitesimally different sets of policies which could in a sense have been chosen between by flipping a coin.  Other factors did become relevant, such as the connections a particular party might have, but the manifestos themselves didn’t constitute a reason for opting for one or the other.

The Greek for “fear” is of course “phobos”, the root of “phobia”, which brings up another use of fear in politics.  Xenophobia, homophobia and other forms of prejudice which aren’t actually called phobias, such as sexism and ableism, are good ways of providing an unknown external enemy as a distraction from the real causes of people’s problems.  Here too fear, uncertainty and doubt operate, and it’s notable that the semantic drift in the suffix “-phobia” also gives it the scope of hatred and aversion.  It’s a useful mental exercise in any case to overcome aversion, for example by facing one’s fears or defying pointless taboos, and it serves the interests of the powerful to keep these groups “other”.  For instance, with Islamophobia, Muslims themselves are sometimes told that they don’t know enough about their religion to judge it positively, whereas in fact if a large enough number of them reject extremism, that effectively makes the faith moderate.  Some non-Muslims invoke the idea of taqiyya, the permitted practice whereby Muslims facing persecution may hide their faith.  This leads to a situation where nothing short of a mutually respectful relationship will allay the Islamophobia, because the concept of taqiyya can always be used to plant uncertainty in the minds of people who might otherwise recognise Muslims as allies.

There’s a kind of genealogy of FUD relating to particular topics which have become politically significant over the years, with the same approaches used by groups wishing to plant doubt in the public mind.  Doubt has in a sense become a marketable product in itself.  Wherever financial or other interests are involved (any kind of entrenched power relationship, in fact), the deployment of doubt can be useful, and just as ISO 9000 compliance or equal opportunities training can be packaged and sold, so can doubt, with the same approaches and ruses adapted to different situations.  An early example is the influence of tobacco on health, and it continues today in such areas as climate change denial, the promotion of creationism and opposition to abortion.  This resembles the confusopoly in some ways: rather than flat-out denial, opponents of what I’m going to call the truth create a confusing array of sources of doubt.  For instance, because correlation between tobacco smoking and lung cancer is not the same as smoking being a major cause of lung cancer, it could be claimed that taking up smoking, or exposure to tobacco smoke, was merely linked to other carcinogenic lifestyle factors.  There may in fact be a half-truth here: smoking paralyses the cilia clearing the respiratory tract, and the irritation causes some ciliated epithelium to be replaced by mucous epithelium, meaning pollutants penetrate further into the lungs and are harder to clear, which is backed up by studies comparing lung cancer rates among smokers in urban and rural areas.  But the fact remains that on the whole it’s better to live in an environment with cleaner air and to be a non-smoker.

A notable approach taken by tobacco ads in the mid-twentieth century was the use of doctors to recommend particular brands, such as menthol cigarettes, so that consumers would see apparent health experts promoting smoking.  The same approach is taken today with climate change denial, and is one of five identifiable features of the use of FUD: fake experts.  I would claim that, for whatever reason, David Bellamy became a fake expert for the climate change denial lobby, making for example the claim that carbon dioxide is not a poisonous gas.  In fact, though this isn’t strictly relevant to climate change, carbon dioxide is dangerous not merely because it displaces oxygen: the body uses its concentration in the blood as the signal to breathe, and at high concentrations it is directly toxic, as practically anything is when pushed far enough.

A second feature of this approach is the use of logical fallacies, for example affirming the consequent.  In the past, releases of large quantities of carbon dioxide by volcanism have led to global warming, as have variations in Earth’s orbit such as the precession of the equinoxes shifting the solstices closer to perihelion and making seasons more extreme.  Therefore, the denialist argument goes, global warming today is the result of processes which would have happened anyway regardless of human activity.  Logically this is “P implies Q; Q; therefore P”, which doesn’t work.  An implication is only false when a true antecedent entails a false consequent.  A false consequent proves that the antecedent is false, but a true consequent does not prove that the antecedent is true.  Hence the fact that global warming in the past can be explained by non-human factors can’t be used as evidence that it isn’t caused by human activity today.
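
The invalidity of this form can be shown mechanically by enumerating the truth table, a minimal sketch:

```python
# Brute-force truth table showing why "P implies Q; Q; therefore P" fails:
# a valid argument has no row where all premises are true and the
# conclusion false, but (P = False, Q = True) is exactly such a row.
from itertools import product

def implies(p, q):
    # Material implication: false only when p is true and q is false.
    return (not p) or q

counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and q and not p      # premises true, conclusion false
]
print(counterexamples)  # [(False, True)] -> affirming the consequent is invalid

# Denying the consequent (modus tollens), by contrast, has no such row:
mt = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and not q and p      # premises true, conclusion false
]
print(mt)  # [] -> modus tollens is valid
```

In the climate example, P is “non-human processes are at work” and Q is “the planet warms”: the surviving counterexample row is precisely the case where the planet warms for some other reason.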

Impossible expectations are the next approach taken in FUD.  Although there has been a general upward trend in average global temperature since the early twentieth century, it occasionally reverses, as it did around 2009, because a higher probability of warmer weather is not the same as a certainty of warmer weather.  The 2009 kink in the graph is therefore pointed to as a failure of climatology, when in fact a phenomenon as complex as this planet’s climate cannot be expected to alter smoothly and perfectly predictably over time.

The 2009 kink is also an example of cherry-picking. Another one of these is the lowering of sea level in 2010. This happened because there was so much flooding in South America and Australia that the water which would normally be in the oceans was on the land. Cherry-picking is the selection of information which goes against the general trend or impression to prove a point. In fact, the lowering of sea level in 2010 is good evidence for climate change.

A final approach is the conspiracy theory.  Departing from the climate change examples, there is a claim made by some Southern European pressure groups that liberal intellectuals are conspiring to eradicate the concept of gender in order to break up families and render people more easily manipulated by the state.  This goes against the actual proliferation of gender categories which is taking place: for better or worse, there are more gender categories than before, not fewer.  Another example is cultural Marxism.  This is slightly confusing because cultural Marxism is a real intellectual movement, though a minor and mainly historical one.  Real cultural Marxism would, for example, analyse the phenomenon of K-Pop, with its stylised and mass-produced videos, artists, music and choreography, pittance wages for musicians, and contracts requiring cosmetic surgery and dietary restrictions, in terms of mature capitalism.  By contrast, the conspiracy theory of Cultural Marxism sees such things as an attack on Western society and values, so that the very same example of K-Pop could be cited by believers in the conspiracy as part of that attack, through, for example, the promotion of oikophobia – aversion to one’s own cultural norms.

In closing, I want to look briefly at the concept of fake news.  It’s claimed that inaccuracies and outright falsehoods in reporting are used to persuade the public of particular perspectives, and to some extent this does of course occur, though the question of which angle is the most accurate arises.  The irony of Trump using this concept is that the rational response to uncertainty about the world and about one’s own present or prospective place in society is to seek the most just principle one could adopt in ignorance.  This is very close to Rawls’s veil of ignorance and original position, which is used to justify a more liberal or socialist social order: if one really doesn’t know what will happen to oneself, a reasonable person would want a society where people are taken care of and able to avail themselves of opportunities regardless of circumstance.  The only trouble with this, although it seems to me a rational response, is that it resembles economic rationalism, a position used to justify neoliberal economics.  People do not in fact behave in economically rational ways, for various reasons, and likewise, confronted with an information blackout caused by fear, uncertainty and doubt, they may fail to make the decision to adopt an egalitarian approach.  I don’t know what to do about this.

Day Of The Butterfly

“Nobody ever suspects the butterfly” – to become the size of Concorde and start eating people, for example. To be frank, that will probably never happen. Nonetheless there are flesh-eating butterflies, such as the Purple Emperor:

Apatura iris. By Kristian Peters – Fabelfroh – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=1324134

Purple emperor butterflies do in fact eat decomposing flesh, and are often lured into position, for photographic purposes only of course, using the likes of stinky cheese and dirty nappies. The picture shows the more brightly-coloured male; it’s one of the largest native British butterflies, the female being bigger, with a wingspan of up to 92 mm. They have slug-like caterpillars, and in Victorian times they were common in England up to about the Humber, and in Monmouthshire. There are also a number of non-British purple emperor species, such as the Indian one, but however they behave, English ones are pretty fierce as well as pretty, and have been known to attack birds of prey. They very much go against the usual lepidopteran stereotype.

The largest butterflies of all are of course birdwings, which used to be shot rather than poisoned. The largest of these is the Southern Birdwing, native to South India with a wingspan of up to 190 mm:

Troides minos. By Vengolis – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=27591872

Insects have been around a very long time. However, there are two main categories of insect, and butterflies belong to the newer one. The older one, the hemimetabola, are characterised by “incomplete metamorphosis”, where the insect begins life as a miniature version of the adult, or imago, often minus wings, and grows in stages into the imago without pupating. The really ancient insects were like this, and the group includes dragonflies, termites, cockroaches, headlice, grasshoppers and many others, not all of which are as old as their group by any means. The newer type of insect is the holometabola, which pupate and tend to have markedly different larval forms compared to adults. These include butterflies and moths, bees, ants and wasps, beetles (one in four species of animal is a beetle, so it seems a bit odd not to mention this in passing) and flies. Although they are newer, they’re not necessarily that new, dating at least from the late Carboniferous. Which brings me, of course, to the Carboniferous!

Of the eleven major geological periods which have passed since the massive proliferation of complex animals with hard parts which marks the start of the Cambrian, about 540 million years ago, the Carboniferous is one of the longest. It lasted about 60 million years, roughly as long as the time between the death of the giant dinosaurs and the present day, and for this reason is sometimes divided into two shorter periods, the Mississippian and the Pennsylvanian, which presumably have the longest names. It’s called the Carboniferous because of the carbon, i.e. coal, which was laid down at the time, although that mineral was also laid down much later. At this point, as I so often do, I wonder how much other people know about this, because for all I know I’m stating the obvious. Anyway, coal was, as you surely know, originally rainforest, although modern-style trees didn’t evolve until later in the Carboniferous, so these forests were made up of giant horsetails, tree ferns, clubmosses and the like, although conifers did exist by the end. Also at the end, an ice age occurred.

The atmosphere during the Carboniferous was very different from today’s. It was around 32% oxygen, with the result that lightning strikes are thought to have been enough to start forest fires even in wet vegetation. There was a lot of wet vegetation, because sea level was about 120 metres higher than it now is, dropping to present day levels and then rising again to eighty metres above current levels by the end. This was partly because carbon dioxide was at three times the level of the Middle Ages, although there was an ice cap at the south pole the whole time. This is actually fairly typical of this planet, as for most of the time since the Cambrian it has been warmer than today. The difference is that the life on the planet had evolved to cope with that at the time, so this is not a get out of jail free card for climate change floccinaucinihilipilificators, but it is in fact true that most of the carbon which was until recently stored in coal deposits was in the atmosphere as carbon dioxide during the early part of that period.

The level of oxygen is significant for insects because of the way they breathe. The insects that are around right now are all fairly small. The bulkiest living insect is the Goliath Beetle:

Although this animal is the size of an adult human fist, no part of its body is more than a couple of centimetres from the air. Likewise, there are long insects nowadays, such as the walking stick insects, and other species with large wings, but what they can’t do is be both big and bulky. This is because they breathe through a system of tubes and don’t use their blood to carry oxygen and carbon dioxide, or rather, they do, but it has no specialisations for doing so. The bigger insects are, the more tubes their bodies need, until the point comes where an insect body would have to be a spongy, floppy mass which couldn’t work as an animal. This applies less to their wings, because those are thin and flat. Another limiting factor is their hard exoskeletons, whose weight would cause a giant insect to grind to a halt, although this problem would only arise if the problem of respiration were somehow solved first. This is also probably why they so often live in swarms, nests, hives and colonies of various kinds. They are perfectly capable of taking up a lot of mass, but in the form of lots of bodies: for example, the total weight of termites on this planet is more than 400 million tons, compared to the 300 million tons of humans. The comparison is a little misleading, because the termite figure covers thousands of species while the human figure refers to just one, Homo sapiens, although adding in all the other apes wouldn’t make much difference in historical times. Incidentally, the total weight of those nameless animals with horns whose milk humans steal a lot is greater than that of humans, meaning that just stopping doing that, or eating them, would make a huge positive difference to life on this planet. But anyway, this is about insects.

Because the oxygen content of the atmosphere in Carboniferous times was so much higher, insects could be a lot bigger. A famous example is Meganeura, often referred to as a dragonfly and certainly related to them, though less closely than damselflies are. It was the size of a heron, with a wingspan of up to seventy centimetres, making it one of the largest insects ever. Having said that, most of that span is wing and the body is quite thin, so it’s not totally impossible that such an insect could survive today, although it would probably suffocate.

Although the insects of that time were enormous, it wasn’t until the end of the period that the holometabola turned up. Consequently, whereas cockroaches and these dragonfly-like insects had their “day”, the same doesn’t apply to beetles, wasps and butterflies. In fact, butterflies in particular are relative newcomers, not appearing in the fossil record until the Eocene, around 50 million years ago, although moths had existed since before the dinosaurs. Butterflies in general tend to rely on flowering plants, so until those became widespread butterflies were unlikely to be successful – like bees, their ecological niche didn’t yet exist. The Purple Emperor is one of the few exceptions to this, since it eats carrion. There are presumably natural limits on the size of butterflies, since they could end up as prey if they got beyond a certain wingspan, although this doesn’t stop other animals, such as birds, from getting a lot bigger, and the size of a flying animal is somewhat limited in any case by its weight. Nonetheless, butterflies have never had their “day”. There has never been a time when there were truly giant butterflies. However, given the right conditions, there could be. Had butterflies existed at a time in geological history when the climate and atmospheric conditions were similar to the Carboniferous, butterflies the size of Meganeura could have existed. Is it possible, then, that at some point millions of years from now there will be heron-sized butterflies? Will there ever be a Day Of The Butterfly?

One fly, as it were, in the ointment is the trouble bees are currently running into, which will reduce the number of insect-pollinated flowering plants. I’m not sure how this works out ecologically, because for all I know butterflies might be able to take over the pollinating role, although I suspect it isn’t that simple. We might be looking at a future without many insect-pollinated flowering plants, meaning that grasses and the various wind-pollinated trees might do okay but there would no longer be a role for butterflies; and in fact, if it is the neonics which are killing bees off, they probably aren’t exactly wonderful for lepidoptera either. Another issue is birds. Insects are restricted in size as it is, but some of them are the size of small birds; while flying birds exist, though, they presumably occupy many niches insects could otherwise fill, or fill just as well, as hummingbirds do.

A world in which there are giant butterflies, then, would have to be hot, have more oxygen in its atmosphere than today, and it would help for it to have insect-pollinated flowering plants, though that isn’t essential. It would have to be hot because this would reduce the advantage “warm-bloodedness” confers, although technically flying insects can’t help but generate some body heat simply because they have to burn a lot of sugar to keep flapping their wings. In a hot world, though, the enormous appetite which birds have would become a liability, enabling insects to compete more effectively with them. This is therefore a greenhouse world like that of the Eocene, when butterflies first evolved. There are various ways in which this could happen, one of which would be the movement of Antarctica away from the South Pole. This would make the whole planet less reflective of sunlight due to the disappearance of the southern ice cap, which would also raise sea level by up to a hundred metres. There are in fact three scenarios as to the position of the continents 200 million years in the future, all involving a supercontinent. The Amasia scenario is partly based on the observation that each supercontinent forms about 90 degrees away from its predecessor. It’s widely agreed that before any supercontinent forms, Alaska and Siberia will collide, Africa will ram into Europe and Australia into South East Asia, while South America will continue to move west and once again detach itself from North America, and Antarctica will stay put for quite a long time. After that, things become less clear. Amasia is the theoretical continent which would form if South America, having rotated, also collided with North America; the lack of information about Antarctica means it’s unclear what would happen there.

The situation depicted above is referred to as Pangaea Ultima. This has Britain near the North Pole and North America in the tropics, and there is also a mountain range higher than the Himalayas in the former Afro-Eurasia.

Finally, there is Novopangaea. This assumes that Antarctica will move north, meaning that sea level will rise and that there will in fact be a truly united supercontinent.

As all of these scenarios involve giant continents, they can also be expected to be substantially covered in vast deserts regardless of the details. In the case of Novopangaea there’s also a single ocean, probably somewhat larger than the total water cover of the planet today because of the higher sea level. Most of the oxygen comes from the sea, so it’s possible that there would be more oxygen in that scenario than in the other two. However, with less land vegetation, there would be a smaller contribution from that quarter. In each scenario, then, this stage is unlikely to involve giant butterflies.

However, all is not lost. Before the continents collide, there will be long periods during which there are substantial shallow seas, due to the gradual uplift which will eventually become mountain ranges, and therefore perhaps vegetation on the sea beds, producing extra oxygen. If this happens, we can have our giant butterflies, as the situation would then be similar to the Carboniferous as Pangaea was forming. As I said, a warmer planet would also be one where mammals and birds are at a disadvantage compared to insects, although that depends mainly on carbon dioxide content. The warming, however, could also happen due to the release of methane from clathrate hydrates on the ocean bed, which could occur if ocean temperatures rose.

It’s also possible, however, that these butterflies, with wingspans more than three times the width of the largest of today’s butterflies, wouldn’t be the innocuous nectar drinking pollinators of today, but predatory carnivores. If flowers become extinct because of the neonics and never evolve again from wind-pollinated plants, maybe the butterflies will move into new niches, and with the possible extinction of birds these niches include those of eagles and vultures. And the purple emperor provides a precedent showing that this can happen.

Therefore, if you don’t want there to be giant predatory butterflies in future, it’s probably quite important to take care of the bees.

Korean Britain

I haven’t blogged for a few days now and I found that I’d started posting walls of text on Facebook, so it looks like I’m going to have to come back here and do this. Because I wasn’t that keen on repeating my thoughts about Korea, since I covered that subject recently, I’ve been trying to avoid the topic here but as usual, my writing seems to have a will and insistence of its own, so it looks like I’m going to have to set this down somewhere. So this post is going to be partly about writing and partly about what I’m writing about and why.

Back in about 2011, I wrote a story called ‘Five Stages’ which was set in a police state and provoked the response “egad!” from someone at the sheer depth of nastiness I managed to depict in the dystopia. It was probably this story which prompted people to suggest there should be a special burning screaming skull to mark stories I’d written, so nobody would accidentally read them and be unable to get the images out of their heads. I think the main reason this happens is that I both elicit empathy for my characters and proceed to put them through hell. Anyway, the salient point of this story for the purposes of this blog is the totalitarian society I set it in. Initially, this was rather sketchy, as all of the action of the story takes place in a prison cell, and in fact inside the head of one of the protagonists. Later on, I played with the idea of extending the setting by telling the story of a geography teacher who had been ordered to deny there was any land on Earth’s surface other than Great Britain, refused to do so after attempting to listen to French radio transmissions, and was executed for it by the same government. In this setting, the idea was that Great Britain had closed borders and the smaller islands around it were seeded with anthrax, with the visible shore of France being explained away as sand banks or uninhabitable islands.

It then occurred to me that this scenario was quite similar to the image of North Korea we have, and I’m not about to become an apologist for the North Korean regime by any means, but it’s important to bear in mind that it has no real defenders or allies and therefore that what we actually think about the place exists in isolation from contrary opinions. That kind of situation tempts me to leap to the beleaguered party’s defence even if it’s indefensible.

As you know, I looked at the Korean language on here a few days ago, and that led to a preoccupation with things Korean. In particular, I looked into K-Pop, and found that the realities of South Korean life it reflected were really quite disturbing, and probably meant that although South Korea is surely way easier to live in than North Korea, it isn’t actually very nice when it comes down to it, and has a fair bit in common with the North in some ways. For instance, South Korea has one of the highest rates of people killing themselves in the world. This is possibly partly due to the Korean schooling system, and although again this is not homeedandherbs, I am at least briefly going to have to go into that.

North Korea claims to have the world’s highest literacy rate. South Korea’s is 97.9%, which is high but hard to assess. One anecdote of interest to me is about 김웅용, Kim Ung-yong, someone who was a source of great worry to me as a child, who learnt to read and write at the age of one, was fluent in four languages and understood calculus before he was five, and started a physics degree at the age of eight. Just briefly, I felt threatened by the idea that there might be someone out there more intelligent than me, but nowadays that seems much less bothersome. It might be worth mentioning another child prodigy, a near contemporary whose name escapes me for now, who learnt to read and write at nineteen months and whose first language was English. The point is, of course, that Hangul, the Korean script, is remarkably easy to learn, and that will give people a head start in book smarts, so it’s possible that had 김웅용 grown up in an English-speaking country, or for that matter China, he might have been so far behind that he wouldn’t have learnt to read until he was nearly two, and that would’ve slowed down the other skills he acquired later.

Recent British governments seem to have felt the pressure to make our country more like the Far East in order to be economically competitive. We seem collectively to be afraid of being left behind by them, but at the same time the rates of people killing themselves in Japan and South Korea are exceedingly high (as they are in Eastern Europe) compared to most of the rest of the world. There’s also the Japanese hikikomori phenomenon of adults shutting themselves in and not engaging with the expectations of wider society, which also exists in South Korea under the name of 은둔형 외톨이 – eundunhyeong oetori (my transliteration which may therefore be wrong), and in fact also exists in the West, though possibly partly because of the educational and social system in the West seeking to mimic that of the Far East. In fact much of what I’ve recently heard about South Korea didn’t so much make me think “thank God it’s not like that over here” in a patronising kind of way as think “that’s the way we’re going too”. There’s a tendency, referred to as Orientalism, for Westerners to exoticise aspects of life in the Far East in a romantic or exceptionalising way. The Aokigahara “Suicide Forest”, for instance, was recently in the news due to a certain YouTuber behaving towards it in a disrespectful manner, and there’s a kind of sensationalising approach to this where we look at, in this case, Japan, with contrived wonder and horror that they have an entire forest where people go and kill themselves, and use that as if it’s a symptom of what’s wrong with Japanese society compared to the West. Well, guess what? We have Beachy Head. There’s nothing much remarkable about Aokigahara other than the fact that it’s yet another spot where people tend to kill themselves, just like Beachy Head or the Golden Gate Bridge in the West. A less serious example, perhaps, is the Japanese custom of taking your shoes off in the house. 
I have no connection with Japan and I do that and most people I know probably do it too, and you might fallaciously connect it to some kind of Shinto ritual or something but the real fact is that tramping across a pavement and treading in the remains of dog oomskah and proceeding to walk it into the house is just icky.

Getting back to Korea, here we have a country with a tragic history which still marks the people of both Koreas today. Having been annexed by Japan in 1910, Korea was liberated by the Soviet Union and the US at the end of the War in the East, who, as with Germany, divided the territory between them, this time at the Thirty-Eighth Parallel. Japanese imperialism was of course rather akin to fascism, and the US found it convenient to install a leader who later massacred many of the left-wingers who were rife in the country. There ensued of course the Korean War, in which the North invaded the South, and as is well-known, although an armistice was signed in 1953 the war between the states has never officially ended. The North pursues an ideology known as 주체 – Juche – which is based on the idea of self-sufficiency, and of course the usual caricature of communism which, to be fair, many people believe socialism and communism must always become. It has also come to focus on militarism to a considerable extent. For instance, its roads tend to be very wide in order to be usable as runways for fighter jets. There’s no need, really, for me to go into much depth about North Korea, or the Democratic People’s Republic of Korea as it’s officially known, because it’s well-known.

The Republic of Korea, as the South is known, is perhaps less well-known. However, it had a history of cycles of attempted democracy collapsing into dictatorship, which only ended in the late 1980s, if it has, with the beginnings of liberal democracy. The 재벌, chaebol, or conglomerates, were established more or less top-down in the 1960s, and the likes of entrepreneurship and start-ups are quite foreign to the South Korean ethos. The schooling system is dominated by commercial pressures. On the whole, a typical adult might live in housing built by a chaebol, drive to work in a car made by the same chaebol, work for it, and socialise in bars and restaurants it also owns. This situation does exist in other countries of course, but not to the same extent.

K-Pop, the distinctively South Korean style of pop music, is an interesting illustration of how this works. The entertainment industry in South Korea is dominated by three large chaebol, who select their artists at a very early age, train them intensively for months or years, form them into groups and impose 노예 계약 – slave contracts – on them. They’re sometimes expected to undergo cosmetic surgery as a condition of employment, their food intake is strictly controlled, and they have no control over the creative process at all. They are also hardly paid anything, and get dropped unceremoniously after a few years, sometimes for attempting to take some control over their work. The music, incidentally, has always been associated with videos and dance routines along with saturated colours, because the country only began to accept modern pop music in the late 1980s, when video and colour television were already popular. There’s also a stark contrast between the apparently casual and carefree scenes in the music videos and the reality of the musicians’ lives, which are a constant slog. K-Pop is the epitome of mature capitalism.

Except that in a way it isn’t really capitalism. As mentioned previously, there’s no entrepreneurship and the chaebol control the education system and provide much of the goods and services their employees need. Consequently, to some extent South Korea is quite similar to North Korea, although the sheer extent of the oppression and penury of the “People’s Republic” would probably have to be seen to be believed and doesn’t bear comparison. However, given the similarities between the systems, it’s quite inconsistent to refer to one as communist and the other capitalist. Furthermore, just as some may point at North Korea as an example of where socialism and communism inevitably end up, it would seem to be just as justifiable to cite South Korea as an instance of end-stage capitalism.

David Mitchell, author of ‘Cloud Atlas’, depicted a unified 22nd century Korea where it’s hard to work out whether the system is Northern or Southern. Juche is used to describe the ideology but chaebol are clearly also in firm control. The mass-production of employees via cloning seems like the logical conclusion of both countries’ systems. My head canon for ‘Cloud Atlas’ is that the North and South Korean governments eventually acknowledged that they were essentially running the same system and decided there was no reason not to unify, since the world had become homogenous in that respect. The North Korean government is in a sense the country’s single chaebol.

As with other aspects of life in the Far East, the situation with K-Pop can be dramatised as dystopian, but the situation isn’t hugely different in the West and in fact pop has trended in that direction after an interlude when independent record labels had more control. On the whole, in fact, it’s like the West is trying to copy South Korea and other Far Eastern nations, but the UK has its pesky tradition of democracy to interfere with that project.

What follows, then, is a work in progress. I’m trying to prepare a fertile soil for stories using an alternate history. And yes, I could be accused of playing a mind game here but I hope you can see that this particular mind game isn’t mere whimsy but serves a purpose. As it’s a work in progress and I’m presenting it here without having done a huge amount of research, it will undoubtedly have issues, particularly for anyone with a stronger grasp of history than I. Right now it’s very sketchy indeed.

Therefore I present to you:

Korean Britain

The point of divergence is early in World War II, possibly shortly before. In our reality, I understand, the inventor of the Spitfire overheard a drunken German officer boasting that Germany was secretly building up its military forces, and being forewarned, Britain proceeded to plough resources into the RAF. In the “Korean Britain” timeline, this didn’t happen, and consequently the UK was less prepared for the War. We lost the Battle Of Britain, suffered the same kind of heavy losses as Germany suffered in reality, and were invaded and occupied by the Nazis, who installed Edward VIII as king. Under Nazi occupation, much of the population, including the resistance, developed Communist sympathies, and there was a Holocaust in Britain. Churchill was of course executed. In 1945, Britain was liberated by the US and the Soviet Union, and the War came to an end.

The post-war agreement handed Northern Ireland over to the Republic, and since Great Britain was technically an enemy power, having had its own military forces used against the Allies, it was occupied jointly by the USSR north of Nottingham and Stoke-on-Trent (the fifty-third parallel) and the US south of it. Both countries helped rebuild the infrastructure, but most of the picturesque ancient landmarks were now gone, having been bombed in the Baedeker Blitz. Due to American opposition to Communism, Edward VIII stayed on the throne and the US organised a strongly anti-Communist government, partly drawn from the Nazi sympathisers in the government put in place by the Third Reich. Meanwhile in the North, the Soviet Union set up a capital at Edinburgh and declared the People’s Republic of Britain, which laid claim to the South, while the government based at Westminster similarly laid claim to the North and called itself the United Kingdom of England and Wales. In the UK, England and Wales are separate nations, but the People’s Republic is monolithic and doesn’t recognise Scotland, England and Wales as separate nations. The continuity in the North is essentially with Scotland rather than England, so it can be thought of as a massively expanded Caledonia. This is partly due to the greater Communist sympathies in Glasgow, although similar feelings also exist in the large industrial cities of the former Yorkshire and Lancashire.

In 1950, the People’s Republic of Britain (PRB) invaded the United Kingdom and war broke out. By the way, much of this is directly stolen from Korean history. In 1953, an armistice was declared, although the war never officially ended. The Westminster government had communist and socialist sympathisers interned and executed, and the Labour Party was declared illegal along with other left-wing parties. Meanwhile in the PRB, a one-party state came into being, but the NHS exists there, unlike in the UK, where there has never been anything but private healthcare. In 1960, the PRB adopted the Shavian script for writing English – the alphabet designed to write English phonetically, established with funding George Bernard Shaw left for the purpose in his will:


The purpose of adopting this script is partly to aid literacy, which succeeds, and also to make it harder for citizens of the PRB to communicate with the outside world (script changes were also Stalin’s policy in some parts of the real-world USSR). There are public executions, religion is illegal, and Welsh and Gàidhlig are banned – English, written in the Shavian alphabet, is the only language allowed. Since Edinburgh is now the capital, English there drifts towards Scottish usage, but is also influenced by the RP tendency in Shavian, so a new form of English emerges. Much Newspeak-style terminology also exists within it. It’s also illegal to listen to foreign radio broadcasts, punishable by death.

The UK in the meantime, dominated as it is by the US, adopts largely American language and spelling. The two nations also use separate measuring systems, as the North goes metric and the UK doesn’t. Both countries decimalise their money but the PRB adopts the shilling and the UK continues with the Pound Sterling.

The official PRB line on its ideology, which is self-sufficient and state capitalist, i.e. nominally communist, is that it’s a rational system which is therefore incompatible with religion, and that those who disagree with it are often delusional. Therefore it has state psychiatric hospitals to accommodate dissidents, but it also has prison camps. Several famines have taken place.

In the UK, there have been coups, notably in 1972 when Edward VIII died and was replaced by Elizabeth. The country finally became officially democratic in 1987. There is now a thriving Britpop industry which is as plastic and manufactured as K-Pop and relies heavily on video promotions and dance routines. Prior to that, pop music was dominated by American influences and there were of course no Beatles or punk.

The self-inflicted death rate is very high in the UK.

The major population centres in the UK are London, Birmingham and Bristol. The population of London is around 20 million, and it’s larger and more high-tech looking than our London. The PRB includes the cities of Lancashire and Yorkshire, plus Glasgow and Edinburgh. The total population of the PRB is about 15 million, due to mass starvation. It’s grim up North.

A De-Militarised Zone separates the two countries at around the location of Nottingham and Stoke-on-Trent. Those two settlements are abandoned, but the PRB maintains the pretence that they are occupied and has made them look impressive by painting them and building various large, modern-looking structures as a propaganda exercise. There are organised tourist trips to Edinburgh, which are strictly controlled for foreign visitors.

In other words, then, this is the Korean situation transplanted into Europe. The worrying thing, though, is not so much the PRB, which doesn’t bear much resemblance to anything which happens in the real UK, as the fantasy version of the UK. That, unfortunately, is a nation which the real UK is coming ever closer to being, because apparently that’s how every government from 1979 onwards has wanted it to be. And although it’s nowhere near as bad as North Korea or the made-up People’s Republic of Britain, it’s so close to how things really are that there hardly seems to be much point in using it as a setting. And that is not a good thing, because the schooling is rubbish, deaths by the victims’ own hands are rife, and there’s precious little innovation. Also, if you don’t conform, the prospects for you are dire.

But apparently this is how we’re supposed to be.