I presented this poster at the World Congress on Pain, held by the International Association for the Study of Pain in Boston, USA, in September 2018.
Glyphosate is guilty of causing cancer! Or is it? A jury in California decided that it caused the terminal cancer (non-Hodgkin’s lymphoma, or NHL – a type of white-blood-cell cancer) of Dewayne Johnson, a school groundskeeper who used Roundup, a glyphosate-containing herbicide, in his work. However, what a jury decides ain’t necessarily so.
Is glyphosate carcinogenic? Not according to the US Environmental Protection Agency (EPA) and the European Food Safety Agency (EFSA). However, the World Health Organization’s International Agency for Research on Cancer (IARC) states that glyphosate is “probably carcinogenic,” Category 2A in its classification of substances and circumstances according to their likelihood of causing cancer (see table below). But so, according to IARC, are red meat, wood smoke, drinks hotter than 65°C, working as a hairdresser, and mobile phones.
Now at least some of these may be carcinogenic, but what is needed is evidence. In the case of mobiles, which emit non-ionising radiation and might be thought unlikely to harm anyone, IARC cited the results of experiments in which rats were exposed to heroic doses of mobile-type radiation, way above those experienced by people, throughout their 2-year lives. The IARC overstated the case in labelling mobiles “probably carcinogenic,” when most likely the risk is zero.
The situation is similar with glyphosate. The IARC actually found little evidence from studies of humans exposed to glyphosate, and equivocal evidence from rats and mice given high doses. IARC says that glyphosate may cause NHL in humans; nothing of the sort was seen in animals, where a few completely different tumours appeared.
I looked at some of the studies for myself. There were few of them, and the sample sizes were small. This means the results can be counted only as weak evidence for an association between glyphosate and NHL, particularly since other studies show no link. Some studies passed the test of “statistical significance,” but it is not generally understood that such results may still be false – “false positives.”
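The false-positive point can be made concrete with a toy simulation (purely illustrative, not based on any of the studies discussed here): if many small null studies are run, in which cases and controls have exactly the same exposure rate, about 1 in 20 will still pass the conventional p < 0.05 significance threshold by chance alone.

```python
import random, math

random.seed(1)

def simulated_study(n=50, p_exposed=0.3):
    """One null study: cases and controls have the SAME true exposure rate."""
    cases    = sum(random.random() < p_exposed for _ in range(n))
    controls = sum(random.random() < p_exposed for _ in range(n))
    p1, p2 = cases / n, controls / n
    pooled = (cases + controls) / (2 * n)
    se = math.sqrt(2 * pooled * (1 - pooled) / n)
    return abs(p1 - p2) / se if se else 0.0   # z statistic for the difference

trials = 10_000
false_positives = sum(simulated_study() > 1.96 for _ in range(trials))
print(f"{false_positives / trials:.1%} of null studies were 'significant'")
```

With many small studies being run, and positive results being easier to publish than negative ones, some “significant” associations are expected even if glyphosate has no effect at all.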
The IARC also claims that studies show evidence of genotoxicity (damage to chromosomes that might lead to cancer): if true, this would provide a mechanism by which glyphosate might cause NHL. However, its strongest evidence (from five regions in Colombia where glyphosate was sprayed onto illegal coca plantations) was contradictory. The authors concluded that “genotoxic damage associated with glyphosate spraying for control of illicit crops … is small and appears to be transient.”
Another study took white blood cells from three (3!) healthy volunteers and grew them with glyphosate. There was some evidence of changes to chromosomes and to the metabolism of the cells but at the very least such studies need repeating in realistic settings with larger numbers and longer time scales. And it is not clear that such changes could result in NHL.
Environmentalist Mark Lynas accuses the IARC of ignoring key evidence and mysteriously deciding to raise its estimate of the danger posed by glyphosate. In evidence given in another case against Monsanto, brought by people alleging that their NHL was caused by glyphosate, epidemiologist Dr Aaron Blair, a leading member of the IARC working group, seems to accept that unpublished data from two big studies (the Agricultural Health Study (AHS) and the North American Pooled Project (NAPP)) did not establish a link between NHL and glyphosate, and that animal experiments show only a possible link with some cancers (but not NHL).
The IARC’s terms preclude it from considering unpublished data, but this introduces a problem: it is more difficult to get research published which shows no effect than research which shows some effect. This is called publication bias. Dr Blair accepted in court that the IARC did not have that information when it made its decision… and neither did any other regulatory agency. Ironically, Dr Blair himself was aware of the unpublished data (but was precluded from telling IARC about it).
Other experts and interested parties, by no means supporters of Monsanto/Bayer, are sceptical of the verdict. Cancer Research UK says that the highest levels of glyphosate might increase the cancer risk but not the low levels experienced during normal use. Cancer epidemiologist Professor Paul Pharoah (University of Cambridge) says that some cited studies have serious design flaws and that, if there were an effect, it would be very small. Others point out that glyphosate targets biochemical pathways only found in plants, and that it is quickly eliminated from the body if ingested or absorbed. The EPA says (December 2017) that “glyphosate is not likely to be carcinogenic to humans” and poses no other risks to humans if used according to the instructions. The European Chemicals Agency and the EFSA agree, leaving the IARC on its own. At most, glyphosate should be classified Group 2B, “possibly carcinogenic.”
Dewayne Johnson’s terminal illness is a tragedy for him and his family but it has not been proved to have resulted from glyphosate. As Mark Lynas says, the evidence shows that glyphosate is extraordinarily safe and he points out that using glyphosate to kill weeds before crop-sowing reduces the need for tilling the soil, preventing soil erosion and loss of plant nutrients. He sees the attacks on Monsanto as linked to the unscientific demonising of genetic modification. Otherwise, campaigners against cancer would be targeting IARC Group 1 substances and circumstances which are carcinogens and trying to ban…bacon?!
| Group | Classification |
|---|---|
| Group 1 | Carcinogenic to humans |
| Group 2A | Probably carcinogenic to humans |
| Group 2B | Possibly carcinogenic to humans |
| Group 3 | Not classifiable as to its carcinogenicity to humans |
| Group 4 | Probably not carcinogenic to humans |
A recent article by Gavin Evans in The Guardian has drawn attention to a resurgence in the idea that race and intelligence are linked.1 These terms, though commonly used, are quite difficult to define… and for good reason (see separate boxes below).
In the 19th century, despite the religious tradition that “God…hath made of one blood all nations of men,” it was axiomatic that there were different races with different abilities. Since European powers were dominant, “Caucasoid” (white) peoples were held superior. Other races were divided into Mongoloid (yellow), Malay (brown), American (red), and Negroid (black), in a hierarchy linked with the darkness of their skin.
For Darwin, the races were too similar not to have descended from a common ancestor but others held that the races had evolved separately. White slave-owners held their African slaves to be of a different, inferior, species which justified their enslavement. This idea that inferior races were liable to enslavement was unaltered by the fact of millions of white slaves being held by the Ottoman Empire from the 16th to 19th centuries, captured by Barbary pirates from as far north as Iceland.
Despite Darwin, as late as 1939, the prominent anthropologist Carleton S Coon divided people into Caucasoid, Congoid/Capoid, Mongoloid (including native Americans), and Australoid, believing them to be descended from different populations of Homo erectus, a view which some hold even now. Coon’s views were certainly of interest to those who believed in a hierarchy of races. However, another prominent anthropologist, Alfred Kroeber (father of Ursula K Le Guin), actively opposed racist interpretations of human differences throughout his long career.
It is now accepted that Homo sapiens is one species with superficial differences in facial features, hair, eye and skin colours, and so on. Genes2 for these are distributed according to environmental factors such as temperature or sunlight. Other genes seem equally distributed and equally variable, with some exceptions, such as genes for lactose tolerance in dairy farming societies, genes for sickle cell trait in areas where malaria is prevalent, and genes for cystic fibrosis where tuberculosis is common. These genes have survival value in these environments.
However, this doesn’t stop some people asserting that there are genetic differences between ethnic groups which affect characteristics such as intelligence and tendencies to violence (e.g. alt-right hero Steve Bannon, fan of the Front National). DNA structure discoverer James Watson also strayed out of his field to assert that melanin was linked to libido.
There are appreciable differences in habitats occupied by different groups of humans, and in their cultures, but there is no evidence that humans of particular groups are genetically any more or less able to adapt to, act on, or alter their environments. The differences in the state of “advancement” of various human cultures already have an adequate explanation, as Jared Diamond says.3
Diamond, an expert on the birds of Papua New Guinea, was talking to a local politician who asked him “Why is it that you white people developed so much cargo and brought it to New Guinea, but we black people had little cargo of our own?” Diamond rejected the simplistic explanation that different “races” had different levels of ability and looked instead at their different environments. He argues that indigenous New Guineans and Australians are probably more intelligent than the white colonists, despite their “stone age” technology, since they easily master advanced industrial technology when given the opportunity. Caucasians were simply luckier: their civilisation arose in an area where metals could be obtained, plants and animals suitable for domestication existed, and the resulting denser populations encouraged the development of resistance to disease.
All this raises the question of what intelligence is (see box). It is often assumed that the complex abilities humans have to share ideas and work with each other to gain their living can be measured, not in real life tasks, but with pencil and paper! This has given rise to IQ testing, which inevitably reflects middle class Caucasian culture. Diamond speaks of how “stupid” he felt in the company of New Guineans who could follow faint jungle trails or erect a shelter but who would fail dismally in an IQ test!
Early IQ testing led to theories about the intelligence of immigrants to the USA. Robert Yerkes’ tests, used to evaluate draftees in WW1, showed that southern and eastern European immigrants had lower IQs than native-born Americans; that Americans from the northern states scored higher than those from southern states; and that African Americans scored lower than White Americans. Some began to talk about a “Nordic” race as being the most intelligent.
Partly driven by revulsion at the Nazis’ racist policies, scientists began to recognise the unscientific nature of IQ testing, ignoring as it did environmental and cultural factors. However, anti-immigration, eugenics, and segregation lobbies continued to use IQ tests to support their theories. Modern racist theories of intelligence emerged some 60 years ago with arguments that genetic differences made it necessary to segregate black and white children in school. In the 1960s, transistor inventor (!) William Shockley claimed that black children were innately unable to learn as well as white ones and psychologist Arthur Jensen argued that it was pointless trying to improve education for black children as their genes were to blame for their poor attainment (rather than poverty, discrimination, racist violence, unemployment, poor housing, and worse schools).
Murray and Herrnstein’s The Bell Curve (1994) refined the race and intelligence theory to argue that poor, especially poor black, people were inherently less intelligent than White or Asian Americans. They argued for reducing immigration, against welfare policies that “encouraged” poor people to have babies and against affirmative action. More recent opponents of affirmative action include Jordan B Peterson and James Damore (author of the Google memo opposing inclusion and diversity policies).4 Damore’s is an interesting case. He argues that women are inherently less likely to excel in software engineering for biological (i.e. genetic) reasons but then argues for dropping all diversity and inclusion initiatives, including those for Black and Hispanic people. Logically, he must feel that they are also genetically unfitted for software engineering…
Intelligence is not what intelligence tests measure. Practising intelligence tests can improve one’s attainment (as can having a good breakfast!), but that doesn’t necessarily mean that one is more “intelligent.” And even if intelligence were simply determined by genes, it would still be the case that people should be encouraged to fulfil their potential. I don’t normally agree with the CBI but, when they said recently that thoughts, questions, creativity and team-working were just as desirable outcomes of education as academic achievement, they referenced a wider and more humanly relevant concept of intelligence.
What is intelligence?
In Latin, intelligens means understanding and comes from inter (between, among) and legere (to choose, select or pick out, and later to read). An excellent definition of intelligence is “the ability to use what you have got to get what you want.”5 Modern dictionaries have subtly changed this: “The ability to learn or understand or to deal with new or trying situations; the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (such as tests).”6 [my emphasis]
Thus, a general ability to understand one’s environment and manipulate it has become reduced to skill with abstract tests of certain abilities which produce a number. Other tests that produce numbers are to be found in the educational system but, as the CBI recently complained, success in “exam factories” (i.e. schools) does not necessarily lead to success in work and life.
Are there genes for it?
Yes – human genes! We all share the vast majority of our genes and those genes give us our large (but not so large as Neanderthal) brains and they give us the ability to learn, which is key to mastery of our environments. But are there genes for the narrowly-defined intelligence which is measured by intelligence tests? No doubt! A study7 published in 2017 analysed the genomes of 78,000 people of European descent and identified up to 52 genes associated with a general intelligence factor, g (a measure that various IQ tests seem to share).
What this means is that these genes, which all humans possess, occur as two or more slightly different alleles:2 some alleles are associated with higher values of g, others with lower. Most of these genes seem to be involved in brain development or nerve functioning. There is a massive correlation between educational attainment and certain alleles but this is hardly surprising since intelligence tests measure the sort of knowledge and abilities taught in schools and tested in exams.
There are also moderate positive associations with brain volume, autism spectrum disorder, giving up smoking(!?), longevity… and moderate negative associations with Alzheimer’s disease, depressive symptoms, ever having smoked, schizophrenia, “neuroticism”… Other factors, such as BMI, insomnia, ADHD, have weaker negative links. These are modest conclusions, given the size of the study.
It would seem that knowledge of an individual’s genes would allow little to be predicted apart from educational attainment… but this can be found out anyway through the education process. It is difficult to see why this research has been done and what lessons it holds.
Is there such a thing as race?
According to scientists, no.8 Neither of the biological concepts of race, genetically distinct or geographically isolated groups of a species, apply to humans. Svante Pääbo, an eminent evolutionary anthropologist, says “What the study of complete genomes … has shown is that even between Africa and Europe … there is not a single absolute genetic difference, meaning no single variant where all Africans have one variant and all Europeans another one, even when recent migration is disregarded.”
1Gavin Evans The unwelcome revival of race science. https://www.theguardian.com/news/2018/mar/02/the-unwelcome-revival-of-race-science
2Genes occur in different forms called alleles. All humans have the same genes but the different forms (alleles) are present in differing proportions in different populations. However, there is no general pattern to these differing proportions that would support the idea of separate races.
3Jared Diamond, Guns, Germs and Steel (1997)
5David Adam, The Genius Within (2018)
7Sniekers et al. Nature Genetics 2017;49(7):1107-12
8See https://www.scientificamerican.com/article/race-is-a-social-construct-scientists-argue/ and Biological races in humans https://www.ncbi.nlm.nih.gov/pubmed/23684745
I wrote this review in 1989 for the left-wing newspaper, Socialist Organiser. Unlike most other left journals of the time (and indeed today), SO felt it was important to be aware of scientific developments, as did our inspirers Marx, Engels, Lenin and Trotsky. SO’s successor Solidarity maintains this aim.
In 1963, when he was a student, Stephen Hawking was told he had motor neurone disease and had possibly two years to live. Now, confined to a wheelchair, unable to move, breathing through a hole in his windpipe, communicating by computer and voice synthesiser, he is one of the world’s leading theoretical physicists.
It cannot have been easy for Hawking to build his career, even with the devoted help of his family, colleagues and students. Luckily, theoretical physics requires little equipment and much thought. Like Newton before him, Hawking is Lucasian Professor of Mathematics at Cambridge. His major work has been to describe the appearance and behaviour of black holes.
And – a rare achievement for any scientist – Hawking has written a readable book about the origin of the universe, tackling the age-old questions: “Why is the universe the way it is?” And “Why are we here?”
Over the last 300 years, science has banished humanity from the centre of the universe to the sidelines. We live on a speck of dust orbiting round an average star near the edge of a galaxy of a hundred thousand million stars, surrounded by a hundred thousand million other galaxies. Was all this created just so we could exist?
Through the 20th Century, reality has become more and more weird. Light can only travel at one speed, which nothing else can reach; absolute time and speed do not exist; there are no simultaneous events; space-time is distorted by gravity so that straight lines do not exist; gravity and acceleration make clocks run slower and let radioactive particles live longer; matter and energy can be converted into each other; the universe is expanding and has a definite age; it started when all matter was concentrated at one point (a singularity) and then exploded in a ‘big bang.’
The list of strange truths does not end there. Energy comes in little packets called quanta, rather as matter does as particles; but both energy and matter can behave as waves; and we can never predict exactly how something will behave because we can never accurately know both its position and momentum.
Bizarre and disturbing though these facts are, they have all been identified as true many times, even down to the discovery of the echo of the Big Bang still reverberating round the universe as microwaves.
Hawking takes his readers through all these discoveries, including his own work on black holes. These are formed by the collapse of a large dying star under its own gravity. An astronaut on the surface of the star would be stretched like spaghetti by the colossal gravitational pull of the new black hole. Luckily, time would stand still at that moment.
Hawking has calculated that black holes are not really black. Though they crush matter out of existence, black holes radiate energy and are really a sort of cosmic recycling plant. The only equation included in the book, E = mc^2, exemplifies this conversion.
The story is leavened by humorous anecdotes or scenes from Hawking’s life. For instance, he describes how he met the Pope in 1981 at a Jesuit conference on the origin of the universe.
The Catholic Church had already, some 30 years earlier, accepted the Big Bang as being the same as the biblical moment of creation. The Pope sanctioned research into the evolution of the universe but not into the Big Bang itself since that was God’s work! Hawking had just given a talk denying the idea of a precise moment when the Big Bang had occurred.
This is Hawking’s particular contribution. He argues that the universe has a finite size but no boundaries, just like the surface of a ball but including time. But with no start to space-time there is no creation.
Some other physicists are eager to see the hand of God in determining the fundamental values of things, like the strength of gravity, so that intelligent life could evolve. If things like the charge and size of the electron, or the rate of expansion of the universe, had been even slightly different, life would not have been able to develop. Hawking argues, however, that things are as they are because, given the number of possible universes, one like this was most likely to result. Even less role for a creator!
Hawking ends by saying that a complete theory of everything would be the ultimate triumph of human reason for “then we would know the mind of God.” Since, up to there in the book, he had argued that there was little or no place for a creator, I can only assume he put the phrase in to sound good to reviewers.
That apart, I can’t praise the book highly enough. Read it!
This book review was written in 2010 for the paper Solidarity (for Workers’ Liberty). With the recent death of Stephen Hawking, I thought it was worth reminding readers of some of his popular books that explain difficult topics in physics. It was previously published in this blog as M-theory and “The Grand Design.”
Stephen Hawking’s latest popular work (The Grand Design, written with physicist and author Leonard Mlodinow) seeks to answer questions that many have asked:
• Why is there something, rather than nothing?
• Why do we exist?
Hawking and Mlodinow (H&M) also pose a question which potentially answers the first two:
• Why this particular set of laws and not some other?
The answer, say H&M, is to be found in M-theory.
The trivial answer to the last question is that, if the laws were different, we would not exist and would not be asking any questions. But the observed laws seem to be very finely tuned to allow matter to exist in extended forms, like atoms, molecules and us. This has been called the anthropic principle and, in its strongest form, has often been given as circumstantial evidence in favour of design, allowing god to slip back in after being excluded from all other observed processes.
H&M controversially argue for a strong anthropic principle: “The fact that we exist imposes constraints not just on our environment but on the possible form and content of the laws of nature themselves”. However, their argument does not rely on a grand designer but on the possibilities inherent in M-theory.
M-theory (where M stands for membrane) is an attempt to unify all of the forces of nature into one overarching explanation, encompassing the very large and the very small. The reason for trying to do this is not just a love of orderly explanations. Previous unifying theories led to enormous benefits: that which unified the electric and magnetic forces in the 19th century; that which included quantum mechanics (quantum electrodynamics, or QED); and that which unified the weak force with the electromagnetic (EM) force in the Standard Model of the 20th century. Promising attempts to unify the strong force with the EM and weak forces have been made (Grand Unified Theories, or GUTs). M-theory is an example of a Theory of Everything (ToE), which aims to include the gravitational force.
Why the urge to unify or to build more inclusive theories? This sounds like the sort of “blue skies” research that politicians scorn, in favour of research with commercial benefits. However, the work of James Clerk Maxwell in the 19th century to uncover the relation between electric and magnetic fields, curiosity-driven, showed that electromagnetic fields spread through space at the speed of… light! Thus, light was an electromagnetic wave, which led to the discovery of radio waves, microwaves, X-rays, gamma rays, and to untold benefits in medicine and communication. It is quite reasonable (though not guaranteed!) that future unifying theories will lead to useful outcomes.
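Maxwell’s result can be checked in a couple of lines (a quick numerical sketch using the standard SI values of the electric and magnetic constants): the wave speed 1/√(μ₀ε₀) implied by his equations comes out as the measured speed of light.

```python
import math

# Vacuum permittivity and permeability (standard SI values)
EPSILON_0 = 8.8541878128e-12    # F/m
MU_0      = 4 * math.pi * 1e-7  # H/m (the classical defined value)

# Maxwell's equations predict electromagnetic waves travelling at 1/sqrt(mu0 * eps0)
c = 1 / math.sqrt(MU_0 * EPSILON_0)
print(f"Predicted wave speed: {c:.4e} m/s")  # ~2.9979e8 m/s, the speed of light
```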
H&M’s approach leans heavily on the work of my favourite scientist, Richard Feynman, a profound thinker but also an engaging and playful character. You would be rewarded if you looked into his life (and perhaps watched clips of interviews with him on the BBC website).
Feynman worked on the science of the very small, where quantum effects rule. One example concerns the behaviour of light when it shines on two vertical narrow slits very close together. This gives rise, not to two vertical bars on a screen, but to a wide horizontal band of dark and light bars.
This has classically been explained by Thomas “Phenomenon” Young (1773-1829), another fascinating character, as the interference of the peaks and troughs of waves, sometimes reinforcing, sometimes cancelling each other, much as ripples in water do. This fatally wounded the particle theory of light held by Newton.
This commonsense explanation was however shown to be inadequate, not least by Einstein’s proof that light could act as particles, photons, in the photoelectric effect. Newton’s theory rose again Lazarus-like. More oddly (and contrary to Newton and indeed to common sense), faint beams of light consisting of single photons when shone on a double slit gradually reproduced, spot by spot, the interference pattern supposedly explained by wave behaviour.
The “solution” was to associate a probability wave with each photon so that where it ended up was essentially random but over time a distinct pattern emerged. It was as if each photon passed through both slits and the probabilities interfered with each other resulting in the detection of the photon at a particular place.
Theory predicted that matter particles would also have a probability wave associated with them and, sure enough, electrons (and larger particles) behave in a similar way with a double slit — even single electrons interfere with themselves (this experiment was voted the most beautiful experiment in physics in 2002)!
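The wave picture can be sketched numerically (an illustrative toy model, not taken from the book; the slit separation and wavelength below are arbitrary example values): treat each slit as contributing a unit complex amplitude, add the two, and square the magnitude. The phase difference between the two paths then produces the bright and dark fringes.

```python
import cmath, math

def intensity(theta, d=1e-6, lam=500e-9):
    """Far-field detection probability at angle theta for two slits
    separated by d, illuminated with wavelength lam (example values)."""
    phase = 2 * math.pi * d * math.sin(theta) / lam  # path difference in radians
    amplitude = 1 + cmath.exp(1j * phase)            # superpose the two paths
    return abs(amplitude) ** 2                       # probability ∝ |amplitude|²

print(intensity(0.0))  # central bright fringe: amplitudes in phase -> 4
# First dark fringe, where the two paths exactly cancel: sin(theta) = lam/(2d)
print(intensity(math.asin(500e-9 / (2 * 1e-6))))  # -> ~0
```

Where the paths arrive in phase the probabilities reinforce (bright bars); where they arrive half a wavelength apart they cancel (dark bars), just as the single-photon and single-electron experiments show.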
Feynman’s explanation is that the system, in this case the single electron/double slit/screen system, has not just one but every history. The particles take every possible path on their way from the source to the screen — simultaneously! Furthermore, our observations of the particles go back into their past and influence the paths they take.
If, like me, you’re going “What?”, you’re in distinguished company: Feynman himself said “I think I can safely say that nobody understands quantum mechanics”. Nevertheless, the theory has passed every test.
Lots of people are unhappy with the implication that someone has to be looking before a quantum process is “forced” to arrive at a particular outcome — and yet this has been confirmed by many experiments. It actually is the case that the outcome is influenced by the process of measurement or detection (though this need not be a conscious process).
This sort of crazy quantum behaviour obeys strict laws. Laws of nature are not like human laws which seek to encourage certain preferred behaviours. They explain how things behave and how they can behave. The laws of modern physics, including the modern understanding of gravity, explain an incredible range of observations to incredible precision and have made amazing predictions which have almost entirely been borne out. H&M pose more fundamental questions, including “Is there only one set of possible laws?”
The laws are, needless to say, not entirely known. While three of the four forces of nature, the electromagnetic, weak and strong forces, have provisionally been united in the “standard model”, crucially gravity still needs to be integrated into the picture. This is what M-theory, incorporating string theory and supergravity, seeks to do. One of its startling predictions is that there are 10 space dimensions and one time dimension, in contrast with our everyday experience of three space dimensions and one of time. The unobserved dimensions are rolled up very small, so that particles are actually vibrating strings or membranes.
M-theory does not predict the exact laws observed. These depend on how the extra dimensions are “rolled up”. A great many universes are possible, some 10^500, or 1 followed by 500 zeroes, each with a different combination of fundamental constants, and it is not surprising that we exist in one where the constants are compatible with the evolution of life. The “apparent miracle” is explained.
H&M point out that the law of gravity is not incompatible with the emergence of a universe “from nothing”. In particular, the principle of conservation of energy is not violated (because, while matter energy is positive, gravitational energy is negative) and, at least in quantum mechanics, what is not forbidden is compulsory. Furthermore, with a wide range of possible sets of constants, some (at least one!) universes must come into existence in which life can evolve.
And here, without the need for a creator, we are!
In the discussions prompted by centenary of the first workers’ government, little has been said about the Bolsheviks and their science policies. This series of blogs about Marxism, the Bolsheviks, Stalin, and science draws, amongst other sources, on Simon Ings’ recent book Stalin and the Scientists,1 Douglas R Weiner’s book Models of Nature,2 and Loren R Graham’s Lysenko’s Ghost.3
“No previous government in history was so openly and energetically in favor of science. …[it] saw the natural sciences as the answer to both the spiritual and physical problems of Russia” (Graham quoted).1
“An individual scientist may not at all be concerned with the practical application of his research. The wider his scope, the bolder his flight, the greater his freedom from practical daily necessity in his mental operations, all the better” (Trotsky).4
Russia before the Bolshevik revolution was an unpromising prospect for the anti-capitalist movement. Atop the underdeveloped mainly agrarian base, lately emerged from feudalism, and a small urban working class, sat a tiny superstructure of art and science. This included people of world renown (composers such as Tchaikovsky, Borodin, Stravinsky, Prokofiev; authors such as Pushkin, Chekhov, Dostoyevsky, Tolstoy, Gorky, Mayakovsky; artists such as Repin, Chagall, Kandinsky, Malevich; other creatives such as Diaghilev, Fokine, Nijinsky, Pavlova) but relatively few scientists (such as Borodin (the same!), Mendeleev, Pavlov, Tsiolkovsky, Kovalevskaya, Kropotkin). Quite a few of these were as avant garde as any foreign contemporaries: for example, Pavlov and Metchnikoff received Nobel Prizes for Medicine in 1904 and 1908 (and Mendeleev should have got it for Chemistry).
The problem facing the Bolsheviks was an economically and socially backward country: a tiny working class; a multitudinous peasantry; a legacy of Tsarist repression; colossal war losses (3 million deaths from all causes; 4 million wounded). Isolated, the Soviet state fought against the White counter-revolutionaries, aided by 170,000-plus foreign soldiers; agricultural and industrial production collapsed, as did civil society (millions of orphans left wandering); famine and disease were rife (5 million died in the Volga famine of 1921-2, after crop failures; 3 million died of typhus in 1920 alone).
This is the background for Ings’ history of post-revolution science,1 Weiner’s book about the conservation movement in the USSR,2 and Graham’s book about the notorious Lysenko chapter in genetics.3
As Marxists, the Bolsheviks were very pro-science.5 Looking back in 1925, Trotsky summed up the best aspects of the Bolsheviks’ attitude: “The new state, a new society based on the laws of the October Revolution, takes possession triumphantly – before the eyes of the whole world – of the cultural heritage of the past.” On the independence of science from imposed political goals, he said “Only classes that have outlived themselves need to give science a goal incompatible with its intrinsic nature. Toiling classes do not need an adaptation of scientific laws to previously formulated theses.”4 He had in mind capitalist societies but his words apply equally to the Stalinist reaction soon to destroy the gains of 1917.
Trotsky explicitly accepted the heritage of the natural sciences: “The need to know nature is imposed upon men by their need to subordinate nature to themselves. Any digressions in this sphere from objective relationships, which are determined by the properties of matter itself, are corrected by practical experience. This alone seriously guarantees natural sciences, chemical research in particular, from intentional, unintentional, or semi-deliberate distortions, misinterpretations, and falsifications.”4 Trotsky had not counted on the fraudulent exaggerations or falsifications of “practical experience” by such as Lysenko, whose theories had the endorsement of Stalin himself, and the persecution even unto death of those who stood for scientific knowledge.
The Bolsheviks acted quickly to protect the environment as an important resource to be used to build socialism, rather than to be squandered for short term needs.6 This approach was followed in other fields but the nature of the government changed with the privations of the civil war, the early death of Lenin, and increasing bureaucratisation, culminating in Stalin’s domination. As Ings observes, “Leaders, politicians and bureaucrats have their hobby horses, of course. The problems start only when these people assume for themselves an expertise they do not possess, when they impose their hobby horses on the state by fiat. The Bolshevik tragedy was that, in donning the mantle of scientific government, the Party’s leaders felt entitled [even obliged] to do this.”
Ultimately, it was Stalin alone who was in a position to impose his hobby horses, or rather those of the scientists he favoured. This was most egregious in the area of agriculture and genetics.7 Immediately after the revolution, however, the Bolsheviks found that much of the existing scientific establishment was willing to work with them, exemplified by the (Imperial) Academy of Sciences which, as early as the end of 1917, offered to aid “state construction.” However, organised scientific work was all but impossible until the civil war and the ensuing famine caused by drought and crop failures in 1921-2 were over.
Gradually, scientists began to organise and reorganise. Scientific supplies, and even food and fuel, were scarce and scientists used cunning and ingenuity to collect equipment. Pavlov, for example, grew his own vegetables but lacked food for his experimental dogs.
The All-Russian Society for Nature Conservation was founded in 1924 and the movement had much success in setting up and running nature reserves with the scientific goal of understanding the ecology of the Soviet Union. With Stalin’s “Great Break,” the 1929 turn towards building “socialism in one country,” the attitude towards science and nature began to change. By the early ‘30s, and the claimed completion of the first Five-Year Plan, the author Gorky, an enthusiast for Stalin’s rapid industrialisation, could describe nature not as something to be understood but as an “enemy standing in our way…our main foe.” This meant that nature, in particular the nature reserves, had to yield to the exploitation and pollution that accompanied canal and dam building, steel-plant construction, and the expansion of agricultural land.6
The Russian Association of Physicists was set up in 1925, later to produce Nobel Prize winners such as Landau and Kapitsa. Many physicists were mentored by Sergei I Vavilov, whose brother Nikolai would become the most prominent victim of Stalin’s meddling in genetics.7 The émigré Cambridge physicist Peter/Pyotr Kapitsa, who was virtually kidnapped during a family visit to Russia, was another leading mentor. Despite Stalin’s doctrinaire rejection of Einstein’s theories, Russian physicists were successful in catching up with the USA in developing first an atom bomb and then a hydrogen bomb from 1944 on.8
Stalin’s purges affected science greatly, particularly when scientists defended science against Stalin’s mistaken theories. Many dedicated scientists were imprisoned or shot (or died of maltreatment) as “wreckers”, “terrorists”, or “foreign agents”. Ideological commitment to socialism was not a defence. Three out of eight Soviet delegates to the Second International Congress of the History of Science, Bukharin, Hessen and Vavilov, were shot or died in prison, while a fourth, Ernst Kol’man, though a Stalin supporter, was imprisoned for non-science reasons.9
After the death of Stalin, the worst ideological influences were relaxed or removed, but the attitude towards science and nature as something to be directed did not entirely change. This led to the ecological disaster of the Aral Sea and nuclear contamination in the Urals, while Lysenko gained the ear of Khrushchov, suggesting one unsuccessful agricultural venture after another. The top-down approach essentially continued until the end of the USSR.
Stalin’s worst errors were also repeated in Mao’s China in the so-called Great Leap Forward (1958-62). One particular episode epitomises the contempt of the Chinese Stalinists for science. The Four Pests Campaign focused on killing sparrows, which the bureaucrats blamed for eating grain. In fact, as any ecologist could have testified, sparrows also eat a lot of insects, and with them largely eradicated, locust populations burgeoned. The “backyard steel furnaces” fiasco resulted in deforestation for fuel and the production of worthless low-grade pig iron. Mao lacked any knowledge of metallurgy and the experts who might have advised him were either in labour camps or cowed by the experience of the “Hundred Flowers Campaign.” The environmental damage and disruption of rural life caused by the Great Leap resulted in upwards of 30 million deaths from famine and other causes.
This series of articles will cover Marxists’ attitudes to the natural sciences, physics in Russia, nature conservation, and Stalin’s deformation of genetics.
1Stalin and the Scientists: A History of Triumph and Tragedy, 2016.
2Models of Nature: Ecology, Conservation, and Cultural Revolution in Soviet Russia, 1988.
3Lysenko’s Ghost: Epigenetics and Russia, 2016.
4Trotsky, Dialectical Materialism and Science, in Problems of Everyday Life (1925).
5See forthcoming article on the attitudes of Marxists to science.
6See forthcoming article on nature and environment.
7See forthcoming article about agriculture and genetics.
8See forthcoming article about Soviet physics.
9Science at the Cross Roads (1931/1971) comprises the contributions of the Soviet delegates.
As I was perusing Physics World1 earlier this year, I revisited an article by physicist John Powell2 (author of How Music Works and Why We Love Music) in which, in view of recent triumphs of populism, he proposed replacing standard units of measurement with populist ones.
Of course, in the UK we could simply reinstate feet, pounds and hours (instead of the horrid European metres, kilograms and seconds), while in the US they have never gone away.
For Powell this would be too simple. He proposes furlongs, hundredweights and fortnights, on the rather contrived grounds that horse-racing is popular (measured in furlongs), as are holidays lasting a fortnight. He glosses over the choice of the hundredweight but, of course, this would reduce fat-shaming since nearly everyone’s weight would fall into the range of 1 to 3 cwt.
Elsewhere, the firkin (90 lb) has been proposed as a unit of mass, leading to the furlong-firkin-fortnight (FFF) system. Following the French Revolution, units of time based on the day were proposed: the centi-jour would have been about 14 minutes.
Various constants of nature would have to be converted: Powell points out that the acceleration due to gravity, 9.8 metres per second squared, would be 71 gigafurlongs per fortnight squared. The speed of light in vacuo would be 1.8 terafurlongs per fortnight. Buying food would be awkward in hundredweights but I think this could be sorted with the division of the hundredweight into a hundred … weights! A weight of potatoes would be a bit over a pound or half a kilo.
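These conversions are easy to verify. The sketch below is ordinary arithmetic using the standard definitions of the furlong and fortnight (the rounding choices are mine); it reproduces Powell’s figures and checks the centi-jour for good measure:

```python
# Checking the FFF conversions from the standard unit definitions.
FURLONG_M = 201.168            # 1 furlong = 220 yards = 201.168 m exactly
FORTNIGHT_S = 14 * 24 * 3600   # 1 fortnight = 1,209,600 s

g = 9.8        # acceleration due to gravity, m/s^2
c = 2.998e8    # speed of light in vacuo, m/s

g_fff = g / FURLONG_M * FORTNIGHT_S ** 2   # furlongs per fortnight squared
c_fff = c / FURLONG_M * FORTNIGHT_S        # furlongs per fortnight
centijour_min = 24 * 60 / 100              # one hundredth of a day, in minutes

print(f"g = {g_fff / 1e9:.0f} gigafurlongs per fortnight squared")  # 71
print(f"c = {c_fff / 1e12:.1f} terafurlongs per fortnight")         # 1.8
print(f"1 centi-jour = {centijour_min} minutes")                    # 14.4
```

The squared fortnight in the gravity conversion is what inflates the number into the giga range: dividing once by the furlong shrinks the figure, but multiplying twice by 1.2 million seconds dwarfs that.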
Powell remarks that it would be popular for pi to have an exact value of, say, 3 as this would greatly simplify calculations of circular areas and so on. This reminds me that this value is implied in the Bible: I Kings 7:23-26 refers to a circular cauldron in Solomon’s temple with a diameter of 10 cubits and a circumference of 30 cubits. Now, as any fule kno, the ratio of the circumference to the diameter of a circle is pi (3.14 approx.) while 30/10 = 3. I am shocked (SHOCKED!) to find that the Bible literalists have almost entirely disregarded the word of God in this matter (though at least one person has addressed this problem and explained it away with a lot of assumptions that would have been unnecessary if the word “approximately” had been in the vocabulary of God).3
This reminds me of the sadly apocryphal stories of attempts to legislate more convenient values for pi in, of course, the USA. In one of these, set in 19th-century Iowa, a legislator suggested that pi be defined as 3 to make things easier, but the suggestion was quickly quashed in committee.
A more serious proposal originated with Edwin J Goodwin, an Indianan physician and amateur mathematician. By 1894, he believed that he had solved three ancient unsolved problems in mathematics, namely squaring the circle, doubling the cube and trisecting the angle, using only a straightedge and compasses. His belief was not shaken by the proof in 1882 that squaring the circle is impossible, which confirmed its proverbial meaning, dating back at least to 414 BCE and The Birds of Aristophanes, of attempting the impossible.
Goodwin persuaded the Indiana legislature to adopt his ideas in Engrossed Bill No. 246,4 generously allowing them to use his methods in state textbooks without charge, and it sailed through committee and the lower house before attracting criticism from a passing mathematics professor, who persuaded members of the Senate not to pass the bill. Section 2 of the bill states “the ratio of the diameter and circumference [of a circle] is as five-fourths to four.” This means that pi = 4/1.25 = 3.2 exactly, which it most definitely doesn’t (it’s about 2% less).
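The bill’s arithmetic can be checked in two lines; nothing here goes beyond the numbers quoted above:

```python
import math

# Section 2: "the ratio of the diameter and circumference is as five-fourths to four",
# i.e. circumference / diameter = 4 / (5/4)
indiana_pi = 4 / (5 / 4)                   # = 3.2 exactly
error = (indiana_pi - math.pi) / math.pi   # how far off is the legislated value?
print(f"Goodwin's value: {indiana_pi}; overstates pi by {error:.1%}")
```

This reports an overstatement of about 1.9 per cent, consistent with the figure given above.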
1 Monthly journal of the Institute of Physics and, together with Chemistry World (ditto of the Royal Society of Chemistry), my favourite reading.
2 Lateral Thoughts: Hail to the new, popular, units. (April 2017, p52)
While he may, or may not, be right in some areas, his chosen example, bus-driving, fails to support his argument. As a child in World War 2, he might have noticed that women drivers of buses, ambulances, fire engines, vans, lorries, and tractors were often to be seen.
It didn’t take the advent of power steering to offset women’s lesser upper-body strength, attributable to their lower levels of testosterone. Different gearing, steering-wheel sizes, and driving techniques had already made it possible for anyone to drive heavy vehicles. It was merely the supposed “unsuitability” of women for such work, suspended during wartime, that kept them out of these jobs. As Cordelia Fine says.
The number of insect species known is about a million and the number of individual insects alive at any one time is a mind-boggling 10 billion billion (10^19), with about 300 times the mass of the human population; estimates of the total number of insect species waiting to be discovered go up to 30 million.*1,2
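As a sanity check, these headline figures are at least mutually consistent. The sketch below derives the average insect mass they imply; the world population and average human body mass are my own assumptions for illustration, not figures from the cited sources:

```python
# Back-of-envelope check: what average insect mass do the figures imply?
N_INSECTS = 1e19        # individual insects alive at once (from the text)
MASS_RATIO = 300        # insect biomass relative to human biomass (from the text)
POPULATION = 7.6e9      # assumed world population, c. 2017
HUMAN_KG = 62           # assumed average human body mass, kg

implied_mg = MASS_RATIO * POPULATION * HUMAN_KG / N_INSECTS * 1e6  # kg -> mg
print(f"implied average insect mass = {implied_mg:.0f} mg")        # about 14 mg
```

An answer of roughly 14 milligrams sits comfortably between a midge and a beetle, so the figures hang together.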
It was therefore concerning when it was recently reported that populations of flying insects in Germany had declined by between 76% and 82% over just 27 years.3 The study was carried out at 63 sites in nature reserves between 1989 and 2016. The technique was a simple one: tent traps were set up and the insects they caught in a given time were weighed. The decline affected all kinds of insect.
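Assuming, purely for illustration, that the decline was a steady compound process over those 27 years, the reported totals translate into annual rates as follows:

```python
# Equivalent constant annual decline for the reported 27-year totals.
years = 27
for total in (0.76, 0.82):
    annual = 1 - (1 - total) ** (1 / years)
    print(f"{total:.0%} over {years} years = {annual:.1%} per year")
```

On that assumption the reserves were losing roughly 5 to 6 per cent of their flying-insect biomass every single year.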
A long-running study in another German nature reserve showed a decline of 40% in moths and butterflies over 150 years. More recently, the European Environment Agency reported that 50% of grassland butterflies had been lost in 20 years in 19 European countries. It suggests loss of managed grasslands, either to scrub or to crop growing, and pesticides on neighbouring farmland as potential causes.4 And a worldwide study of invertebrate species (of which about 80% are insects) showed a 45% decline over the past 40 years.5
Anecdotally, the British media have commented on the disappearance of the “moth snowstorm”, whereby night-time drivers in the countryside would have to clean their windscreens of the corpses of splattered moths which had mistaken their headlights for the Moon.2
This decline is serious for two main reasons. First, the wealth of insect species supports a large number of food chains, with birds most obviously, but also fish, amphibians, reptiles and mammals (especially bats), at or near the top. Birds affected in Britain include the grey partridge and spotted flycatcher, both having declined by 95%, and the red-backed shrike, extinct since the 1990s, while the house sparrow has also shown a 50% decline since the 1970s.2 Second, a great many plants, including many food crops, rely on insects to pollinate their flowers. These insects include not only bees but also moths, butterflies, beetles and hoverflies. A further concern is that if predatory insects decline, populations of prey species that eat food crops could explode, leading to economic losses either from reduced yields or increased use of pesticides.
What is causing the decline and what should be done?
Agricultural practices such as monocultures (great swathes of just one crop) reduce biodiversity. The removal of hedges, ponds and other refuges for wildlife also reduces the niches available to insects and the other members of their food webs.
Pesticides are also a factor, especially when they affect other insects as well as crop pests. Many of the most harmful, such as DDT, have been banned, but modern, supposedly safer ones are not entirely harmless. This may be the case with neonicotinoids (see below). The evidence about these is contradictory but seems to be coming down on the harmful side.6
Climate change does not seem to be a factor in the decline at present but as warming accelerates it may become one. If anything, increased temperatures should increase insect biomass. For example, the warmer winters of recent years may have allowed pest species to overwinter more successfully, leading to more crop damage. However, species that rely on particular plants for food may suffer if those plants cannot cope with climate change and become more scarce. Also, increased extreme weather events such as droughts would negatively affect insects.
Initiatives that could help include changing agricultural incentives to favour greater crop diversity, keeping or restoring hedges, and reducing pesticide use, for example by applying pesticides directly rather than spraying them into the atmosphere. In a rare example of evidence-based policy-making, Environment Secretary Michael Gove says the UK will support a Europe-wide ban on neonicotinoids following the German study.7
What are neonicotinoid insecticides?
These insecticides, developed from the 1970s onwards, have rapidly become popular because, unlike the organophosphate and carbamate insecticides, they have low toxicity to mammals, including humans, while being very toxic to insects. They now amount to about a quarter of the global insecticide market, with one of them, imidacloprid (patented in 1985 by Bayer), being the most widely used insecticide in the world.
Neonicotinoids (“new nicotinoids”) are similar to nicotine, an alkaloid produced by the tobacco plant (Nicotiana tabacum) and other members of the Solanaceae family (which includes deadly nightshade, potato and aubergine). Presumably it is produced as a defence against insects that would otherwise eat the leaves of the tobacco plant.
In humans, nicotine stimulates the brain’s nicotinic acetylcholine receptors (NAChRs), a class of receptors that promotes the release of dopamine and endorphins, stimulating the brain’s reward system. Nicotine is said to cause feelings of calmness and relaxation, while also making the user more alert. It can reach the brain some 15 seconds after inhalation of tobacco smoke. These effects are often desirable and even useful, so it is unfortunate that nicotine intake is usually accompanied by a cocktail of carcinogens. It is even more unfortunate that it produces tolerance, where the user requires more and more to achieve the desired effect, and that it is highly addictive. Nicotine was previously widely used as an insecticide: it overstimulates insects’ central nervous systems, rather than making them feel relaxed, and kills them. It was phased out because of its harmfulness to mammals, including the people applying it, their children, and their animals. However, it should be noted that it is impossible to get a fatal dose of nicotine from smoking.
Neonicotinoids are chemically different to nicotine: they cannot cross the blood-brain barrier in mammals (and so do not mimic the effects of nicotine), and bind much more strongly to insect NAChRs than to mammalian ones. They are also thought to be less harmful to fish, an important consideration when rain can run off fields into rivers. While nerve gases such as sarin prevent the breakdown of the nerve transmitter acetylcholine (ACh) in humans, neonicotinoids mimic the action of ACh in insects and also cannot be broken down. The result is the same: nerves are stimulated to fire continuously, causing paralysis and death.
Neonicotinoids were introduced after many insects had developed resistance to organophosphate, carbamate, and pyrethroid insecticides. Predictably, resistance has started to develop to them as well (travellers may wish to note that bed bugs in New Jersey are now resistant).
Neonicotinoids are absorbed by plant roots and leaves and travel to all parts of the plant, where they are taken in by herbivorous insects. They are also more persistent than nicotine, that is, they break down more slowly, offering longer-term protection to crops. They are active against a wide range of pests, such as aphids, whitefly, wireworms and leafhoppers. However, their wide range includes many non-target insects, some beneficial, such as bees.8 In theory, it should be possible to minimise the exposure of other insects by applying the insecticide carefully and directly to the roots, rather than spraying. It is common to treat seeds before sowing, which is a less hazardous method of application.
Neonicotinoids and bees
It was reported last year that the use of neonicotinoids on oilseed rape in England since 2002 is linked to an average decline across all bee species of 7%, with the worst affected being those that collect nectar from rape flowers. This is serious news not only for the natural world but specifically for that substantial section of agriculture that relies heavily on pollination by bees and other insects. This is especially so since bee numbers have already suffered greatly from the parasitic mite Varroa and the mysterious Colony Collapse Disorder. It seems that neonicotinoids can get into pollen and nectar and thence into the bees. The amounts involved are not lethal but it is suggested that they may cause behavioural changes that make bee colonies less viable. One study shows that affected bumblebee colonies put on less weight before winter and are less able to survive.1 In any case, neonicotinoids are found in bees and at least some bees seem to be adversely affected so, on the precautionary principle, neonicotinoid use should be restricted.
*Some 40% of insects are beetles. The great socialist scientist JBS Haldane, when asked what he deduced about God from contemplating the living world, replied “God has an inordinate fondness for beetles.”
2https://www.theguardian.com/environment/2017/oct/21/insects-giant-ecosystem-collapsing-human-activity-catastrophe (author Michael McCarthy, originator of the term “moth snowstorm”).
Good news! The ozone hole is shrinking at last, a rare success for collective action in response to scientific evidence.1 Unfortunately, it will take until 2050 to return to its 1980 levels. This is because the chemicals largely responsible for its depletion are very stable and those already released will persist in the atmosphere until then, even if no more emissions take place.
It’s 30 years since the signing of the Montreal Protocol which aimed to tackle the problem of the accelerating destruction of the ozone layer by chlorofluorocarbons (CFCs). Ozone in the stratosphere absorbs most of the Sun’s ultraviolet radiation (UVR) and without it life would be difficult or impossible except several metres below the surface of the oceans.
Ozone (O3) is made from oxygen (O2) by the action of UVR in the stratosphere. But for there to be oxygen in the stratosphere there first had to be oxygen in the lower atmosphere and this only appeared when Earth was about half the age it is now, with the evolution of photosynthesis by bacteria in the oceans. These produced oxygen as a waste product which gradually began to accumulate in the atmosphere. Ozone started to accumulate also and by half a billion years ago was absorbing enough UVR for the land to become habitable.
Scientists only became aware of these facts with:
A the prediction and then discovery of different types of light (radiation) with different wavelengths;
B the development of spectroscopy, the study of how matter absorbs and emits light; and
C the understanding of how hot objects emit energy in the form of light.
These were mostly the result of curiosity-driven research.
It was realised that the Sun should emit radiation of different wavelengths in the proportions predicted for the spectrum of a “black body” of the same temperature (about 5500 degrees Celsius). Spectroscopy showed that it did, with the puzzling exception of the region of wavelengths shorter than 310 nanometres, just beyond the violet: in this ultraviolet region the measured intensity was only about 1% of that predicted. This meant that about 99% of the UVR was being absorbed by something, and an exhaustive search of likely chemical substances found that ozone was largely responsible.
The amount of ozone differs in different parts of the world and at different times of year, as does the intensity of UVR, so the amount of UVR reaching the ground is variable. In general, UVR is highest when the Sun is higher in the sky, i.e. in equatorial regions and during summer in northern and southern regions.
The UVR that gets through can be damaging to life, including humans in whom it causes sunburn, cataracts, and potentially fatal skin cancers. Many humans have melanin pigment in their skin which can absorb UVR before damage can occur but lighter-skinned people in high-UVR regions are at risk. Australia and New Zealand have the highest rates of melanoma in the world. It was therefore alarming to learn in 1985 that there was a great hole in the ozone layer above Antarctica. However, the story started earlier.
Refrigerators use the evaporation and condensation of liquids to transfer heat from the contents to the outside (you may have noticed warmth from the back of a fridge). Early fridges used easily liquefied gases such as methyl chloride, ammonia or sulfur dioxide, but these were toxic if released. Chemist Thomas Midgley2 developed the efficient synthesis of chlorofluorocarbons (CFCs) around 1930 and proposed their use as safe refrigerants. CFCs are very unreactive which is excellent for a refrigerant. Midgley demonstrated their safety by inhaling some and blowing out a candle. However, if released when a fridge is damaged or scrapped, their very stability means that CFCs persist in the atmosphere, eventually reaching the stratosphere.
Here the problem starts: a CFC molecule such as Freon (CCl2F2) is hit by a UV photon and a chlorine atom (Cl) is knocked out. If this collides with an ozone molecule, it grabs an oxygen atom to make a ClO molecule, leaving an ordinary oxygen molecule that doesn’t absorb UVR. The ClO collides with another ozone molecule, making more O2 and regenerating the original Cl atom…which can now repeat the process with more ozone. The Cl is thus a catalyst for the breakdown of ozone. Each cycle removes two ozone molecules and there can be thousands of cycles before the Cl atom collides with something else and the process stops.3
This was realised in the ‘70s but no-one knew if the effect was significant until the late Joe Farman and colleagues found a massive hole in the ozone layer above Antarctica. The levels had dropped by some 40% in about ten years. Farman had been measuring the levels for about five years, first fearing that his instruments were faulty. NASA had failed to detect the drop as its computer software was programmed to ignore “unusual” readings.
The clear threat was that, as thinning of the layer spread, organisms would be affected by the increased UVR, particularly UVB. This would affect plant growth, harm populations of plankton in the upper levels of the oceans, and cause increased skin cancers and cataracts. Australia would be the first to be affected, with potential epidemic levels of skin cancer.
Due to different weather patterns, the Arctic had not yet developed an ozone hole but would eventually if nothing changed as the amount had also declined. Farman published his results in 1985 and, despite the opposition of the chemicals industry, the Montreal Protocol phasing out CFCs was signed in 1987. Readers may be surprised to learn that Margaret Thatcher played a positive role in this.4
It will take a long time for the ozone layer to return to its original thickness. In the meantime, we must make sure that governments and businesses adhere to the Montreal Protocol. But there is another problem: CFCs are actually more potent “greenhouse” gases than carbon dioxide and some of their ozone-friendly replacements, such as hydrofluorocarbons (HFCs), are even worse. Phasing out CFCs has already reduced the rate of global warming. One option is to amend the Montreal Protocol to include HFCs (they are already in the Kyoto Protocol) but the alternatives also have their own problems. Propane/methylpropane mixtures are very effective refrigerants but are flammable (but then so is methane, piped to most houses in the UK).
2 Thomas Midgley had “form.” In 1921, he showed that tetraethyl lead when added to petrol prevented the damaging phenomenon of engine “knock.” Despite knowing of its toxicity (and taking a year off to recover from lead poisoning), Midgley insisted that it was safe. It was marketed as “Ethyl” with no mention of lead. Having initiated the poisoning of young brains for decades, Midgley then inadvertently initiated the destruction of the ozone layer through CFCs. Later he contracted polio and was partially paralysed. He invented a contraption to get him out of bed but became entangled in its ropes, dying from strangulation. It has been said that he “had more impact on the atmosphere than any other single organism in Earth’s history.”
3 Step 1: Cl + O3 → ClO + O2
Step 2: ClO + O3 → Cl + 2O2
Step 1 is now repeated with the Cl atom regenerated in Step 2, and so on thousands of times.
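The catalytic amplification is easy to tally: each pass through Steps 1 and 2 consumes two ozone molecules and hands the Cl atom back for another round. The cycle count below is an assumed round number standing in for the “thousands” in the text, not a measured figure:

```python
def ozone_destroyed(cycles):
    """O3 molecules removed by one Cl atom over the given number of cycles."""
    destroyed = 0
    for _ in range(cycles):
        destroyed += 1   # Step 1: Cl + O3 -> ClO + O2
        destroyed += 1   # Step 2: ClO + O3 -> Cl + 2O2 (Cl regenerated)
    return destroyed

print(ozone_destroyed(50_000))   # one Cl atom: 100000 ozone molecules gone
```

The point of the tally is that the chlorine itself is never used up: a single atom released from one CFC molecule can go on destroying ozone until it happens to be scavenged by something else.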
4 You won’t often hear a good word from me about Margaret Thatcher but arguably she was instrumental in the discovery of the ozone hole and in the subsequent Montreal protocol. Hardline monetarist and privatiser though she was, when it came to science she was not so dogmatically in favour of the free market. With a Chemistry degree and PhD, she understood the need for “blue skies” (curiosity-driven) research.5 This may have partly explained why she protected the funding of the British Antarctic Survey (for which Joe Farman was working when he detected the ozone hole) where her colleagues saw only wasteful public expenditure. She could also understand the scientific evidence about CFCs and supported the Montreal Protocol. She also supported UK’s membership of CERN and the establishment of the IPCC to research climate change.
5 See Margaret Thatcher’s influence on British science, by George Guise.