Intelligence and race: an example of racist science?

 

A recent article by Gavin Evans in The Guardian has drawn attention to a resurgence in the idea that race and intelligence are linked.1 These terms, though commonly used, are quite difficult to define, and for good reason (see separate boxes below).

In the 19th century, despite the religious tradition that “God…hath made of one blood all nations of men,” it was axiomatic that there were different races with different abilities. Since European powers were dominant, “Caucasoid” (white) peoples were held superior. Other races were divided into Mongoloid (yellow), Malay (brown), American (red), and Negroid (black), in a hierarchy linked with the darkness of their skin.

For Darwin, the races were too similar not to have descended from a common ancestor, but others held that the races had evolved separately. White slave-owners held their African slaves to be of a different, inferior species, which justified their enslavement. This idea that inferior races were liable to enslavement was unaffected by the fact that millions of white slaves were held by the Ottoman Empire from the 16th to 19th centuries, captured by Barbary pirates from as far north as Iceland.

Despite Darwin, as late as 1939, the prominent anthropologist Carleton S Coon divided people into Caucasoid, Congoid/Capoid, Mongoloid (including native Americans), and Australoid, believing them to be descended from different populations of Homo erectus, a view which some hold even now. Coon’s views were certainly of interest to those who believed in a hierarchy of races. However, another prominent anthropologist Alfred Kroeber (father of Ursula K Le Guin) actively opposed racist interpretations of human differences throughout his long career.

It is now accepted that Homo sapiens is one species with superficial differences in facial features, hair, eye and skin colours, and so on. Genes2 for these are distributed according to environmental factors such as temperature or sunlight. Other genes seem equally distributed and equally variable, with some exceptions, such as genes for lactose tolerance in dairy farming societies, genes for sickle cell trait in areas where malaria is prevalent, and genes for cystic fibrosis where tuberculosis is common. These genes have survival value in these environments.

However, this doesn’t stop some people asserting that there are genetic differences between ethnic groups which affect characteristics such as intelligence and tendencies to violence (e.g. alt-right hero Steve Bannon, fan of the Front National). DNA structure discoverer James Watson also strayed out of his field to assert that melanin was linked to libido.

There are appreciable differences in habitats occupied by different groups of humans, and in their cultures, but there is no evidence that humans of particular groups are genetically any more or less able to adapt to, act on, or alter their environments. The differences in the state of “advancement” of various human cultures already have an adequate explanation, as Jared Diamond says.3

Diamond, an expert on the birds of Papua New Guinea, was talking to a local politician who asked him “Why is it that you white people developed so much cargo and brought it to New Guinea, but we black people had little cargo of our own?” Diamond rejected the simplistic explanation that different “races” had different levels of ability and looked instead at their different environments. He argues that indigenous New Guineans and Australians are probably more intelligent than the white colonists, despite their “stone age” technology, since they easily master advanced industrial technology when given the opportunity. Caucasians were simply luckier: their civilisation arose in an area where metals could be obtained, plants and animals suitable for domestication existed, and the resulting denser populations encouraged the development of resistance to disease.

All this raises the question of what intelligence is (see box). It is often assumed that the complex abilities humans have to share ideas and work with each other to gain their living can be measured, not in real-life tasks, but with pencil and paper! This has given rise to IQ testing, which inevitably reflects middle-class Caucasian culture. Diamond speaks of how “stupid” he felt in the company of New Guineans who could follow faint jungle trails or erect a shelter but who would fail dismally in an IQ test!

Early IQ testing led to theories about the intelligence of immigrants to the USA. Robert Yerkes’ tests, used to evaluate draftees in WW1, showed that southern and eastern European immigrants had lower IQs than native-born Americans; that Americans from the northern states scored higher than those from southern states; and that African Americans scored lower than White Americans. Some began to talk about a “Nordic” race as being the most intelligent.

Partly driven by revulsion at the Nazis’ racist policies, scientists began to recognise the unscientific nature of IQ testing, ignoring as it did environmental and cultural factors. However, anti-immigration, eugenics, and segregation lobbies continued to use IQ tests to support their theories. Modern racist theories of intelligence emerged some 60 years ago with arguments that genetic differences made it necessary to segregate black and white children in school. In the 1960s, transistor inventor (!) William Shockley claimed that black children were innately unable to learn as well as white ones and psychologist Arthur Jensen argued that it was pointless trying to improve education for black children as their genes were to blame for their poor attainment (rather than poverty, discrimination, racist violence, unemployment, poor housing, and worse schools).

Murray and Herrnstein’s The Bell Curve (1994) refined the race and intelligence theory to argue that poor, especially poor black, people were inherently less intelligent than White or Asian Americans. They argued for reducing immigration, against welfare policies that “encouraged” poor people to have babies and against affirmative action. More recent opponents of affirmative action include Jordan B Peterson and James Damore (author of the Google memo opposing inclusion and diversity policies).4 Damore’s is an interesting case. He argues that women are inherently less likely to excel in software engineering for biological (i.e. genetic) reasons but then argues for dropping all diversity and inclusion initiatives, including those for Black and Hispanic people. Logically, he must feel that they are also genetically unfitted for software engineering…

Intelligence is not what intelligence tests measure. Practising intelligence tests can improve one’s attainment (as can having a good breakfast!) but doesn’t necessarily mean that one is more “intelligent.” But even if intelligence were simply determined by genes, it would still be the case that people should be encouraged to fulfil their potential. I don’t normally agree with the CBI but, when they said recently that thoughts, questions, creativity and team-working were outcomes of education just as desirable as academic achievement, they referenced a wider and more humanly relevant concept of intelligence.

_________________________________________________
What is intelligence?
In Latin, intelligens means understanding and comes from inter (between, among) and legere (to choose, select or pick out, and later to read). An excellent definition of intelligence is “the ability to use what you have got to get what you want.”5 Modern dictionaries have subtly changed this: “The ability to learn or understand or to deal with new or trying situations; the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (such as tests).”6 [my emphasis]

Thus, a general ability to understand one’s environment and manipulate it has been reduced to skill with abstract tests of certain abilities which produce a number. Other tests that produce numbers are to be found in the educational system but, as the CBI recently complained, success in “exam factories” (i.e. schools) does not necessarily lead to success in work and life.

Are there genes for it?
Yes – human genes! We all share the vast majority of our genes and those genes give us our large (but not so large as Neanderthals’) brains and they give us the ability to learn, which is key to mastery of our environments. But are there genes for the narrowly-defined intelligence which is measured by intelligence tests? No doubt! A study7 published in 2017 analysed the genomes of 78,000 people of European descent and identified up to 52 genes associated with a general intelligence factor, g (a measure that various IQ tests seem to share).

What this means is that these genes, which all humans possess, occur as two or more slightly different alleles:2 some alleles are associated with higher values of g, others with lower. Most of these genes seem to be involved in brain development or nerve functioning. There is a strong correlation between educational attainment and certain alleles, but this is hardly surprising since intelligence tests measure the sort of knowledge and abilities taught in schools and tested in exams.

There are also moderate positive associations with brain volume, autism spectrum disorder, giving up smoking (!?), longevity… and moderate negative associations with Alzheimer’s disease, depressive symptoms, ever having smoked, schizophrenia, “neuroticism”… Other factors, such as BMI, insomnia and ADHD, have weaker negative links. These are modest conclusions, given the size of the study.

It would seem that knowledge of an individual’s genes would allow little to be predicted apart from educational attainment… but this can be found out anyway through the education process. It is difficult to see why this research has been done and what lessons it offers.

Is there such a thing as race?
According to scientists, no.8 Neither of the biological concepts of race – genetically distinct or geographically isolated groups of a species – applies to humans. Svante Pääbo, an eminent evolutionary anthropologist, says “What the study of complete genomes … has shown is that even between Africa and Europe … there is not a single absolute genetic difference, meaning no single variant where all Africans have one variant and all Europeans another one, even when recent migration is disregarded.”
———————————————————————————————————-

1Gavin Evans The unwelcome revival of race science. https://www.theguardian.com/news/2018/mar/02/the-unwelcome-revival-of-race-science

2Genes occur in different forms called alleles. All humans have the same genes but the different forms (alleles) are present in differing proportions in different populations. However, there is no general pattern to these differing proportions that would support the idea of separate races.

3Guns, Germs and Steel, Jared Diamond (1997)

4See http://www.workersliberty.org/story/2017-09-21/google-memo-and-real-bias

5David Adam in The Genius Within (2018)

6Merriam-Webster online

7Sniekers et al. Nature Genetics 2017;49(7):1107-12

8See https://www.scientificamerican.com/article/race-is-a-social-construct-scientists-argue/ and Biological races in humans https://www.ncbi.nlm.nih.gov/pubmed/23684745


We’re here because we’re here: A Brief History of Time

I wrote this review in 1989 for the left-wing newspaper Socialist Organiser. Unlike most other left journals of the time (and indeed today), SO felt it was important to be aware of scientific developments, as did our inspirers Marx, Engels, Lenin and Trotsky. SO’s successor Solidarity maintains this aim.

In 1963, when he was a student, Stephen Hawking was told he had motor neurone disease and had possibly two years to live. Now, confined to a wheelchair, unable to move, breathing through a hole in his windpipe, communicating by computer and voice synthesiser, he is one of the world’s leading theoretical physicists.

It cannot have been easy for Hawking to build his career, even with the devoted help of his family, colleagues and students. Luckily, theoretical physics requires little equipment and much thought. Like Newton before him, Hawking is Lucasian Professor of Mathematics at Cambridge. His major work has been to describe the appearance and behaviour of black holes.

And – a rare achievement for any scientist – Hawking has written a readable book about the origin of the universe, tackling the age-old questions: “Why is the universe the way it is?” and “Why are we here?”

Over the last 300 years, science has banished humanity from the centre of the universe to the sidelines. We live on a speck of dust orbiting round an average star near the edge of a galaxy of a hundred thousand million stars, surrounded by a hundred thousand million other galaxies. Was all this created just so we could exist?

Through the 20th century, reality has become more and more weird. Light can only travel at one speed, which nothing else can reach; absolute time and speed do not exist; there are no simultaneous events; space-time is distorted by gravity so that straight lines do not exist; gravity and acceleration make clocks run slower and let radioactive particles live longer; matter and energy can be converted into each other; the universe is expanding and has a definite age; it started when all matter was concentrated at one point (a singularity) and then exploded in a ‘big bang.’

The list of strange truths does not end there. Energy comes in little packets called quanta, rather as matter does as particles; but both energy and matter can behave as waves; and we can never predict exactly how something will behave because we can never accurately know both its position and momentum.

Bizarre and disturbing though these facts are, they have all been confirmed many times, even down to the discovery of the echo of the Big Bang still reverberating round the universe as microwaves.

Hawking takes his readers through all these discoveries, including his own work on black holes. These are formed by the collapse of a large dying star under its own gravity. An astronaut on the surface of the star would be stretched like spaghetti by the colossal gravitational pull of the new black hole. Luckily, time would stand still at that moment.

Hawking has calculated that black holes are not really black. Though they crush matter out of existence, black holes radiate energy and are really a sort of cosmic recycling plant. The only equation included in the book, E = mc^2, exemplifies this conversion.

The story is leavened by humorous anecdotes or scenes from Hawking’s life. For instance, he describes how he met the Pope in 1981 at a Jesuit conference on the origin of the universe.
The Catholic Church had already, some 30 years earlier, accepted the Big Bang as being the same as the biblical moment of creation. The Pope sanctioned research into the evolution of the universe but not into the Big Bang itself since that was God’s work! Hawking had just given a talk denying the idea of a precise moment when the Big Bang had occurred.

This is Hawking’s particular contribution. He argues that the universe has a finite size but no boundaries, just like the surface of a ball but including time. But with no start to space-time there is no creation.

Some other physicists are eager to see the hand of God in determining the fundamental values of things, like the strength of gravity, so that intelligent life could evolve. If things like the charge and size of the electron, or the rate of expansion of the universe, had been even slightly different, life would not have been able to develop. Hawking argues, however, that things are as they are because, given the number of possible universes, one like this was most likely to result. Even less role for a creator!

Hawking ends by saying that a complete theory of everything would be the ultimate triumph of human reason for “then we would know the mind of God.” Since, up to there in the book, he had argued that there was little or no place for a creator, I can only assume he put the phrase in to sound good to reviewers.

That apart, I can’t praise the book highly enough. Read it!

Why we are here – Stephen Hawking’s take

This book review was written in 2010 for the paper Solidarity (for Workers’ Liberty). With the recent death of Stephen Hawking, I thought it was worth reminding readers of some of his popular books that explain difficult topics in physics. It was previously published in this blog as M-theory and “The Grand Design.”

Stephen Hawking’s latest [2010] popular work (The Grand Design, written with physicist and author Leonard Mlodinow) seeks to answer questions that many have asked:

• Why is there something, rather than nothing?

• Why do we exist?

Hawking and Mlodinow (H&M) also pose a question which potentially answers the first two:

• Why this particular set of laws and not some other?

The answer, say H&M, is to be found in M-theory.

The trivial answer to the last question is that, if the laws were different, we would not exist and would not be asking any questions. But the observed laws seem to be very finely tuned to allow matter to exist in extended forms, like atoms, molecules and us. This has been called the anthropic principle and, in its strongest form, has often been given as circumstantial evidence in favour of design, allowing god to slip back in after being excluded from all other observed processes.

H&M controversially argue for a strong anthropic principle: “The fact that we exist imposes constraints not just on our environment but on the possible form and content of the laws of nature themselves”. However, their argument does not rely on a grand designer but on the possibilities inherent in M-theory.

M-theory (where M stands for membrane) is an attempt to unify all of the forces of nature into one overarching explanation, encompassing the very large and the very small. The reason for trying to do this is not just a love of orderly explanations: previous unifying theories – that which unified the electric and magnetic forces in the 19th century, that which included quantum mechanics (quantum electrodynamics, QED), and that which unified the weak force with the electromagnetic (EM) force (the Standard Model) in the 20th century – led to enormous benefits. Promising attempts to unify the strong force with the EM and weak forces have been made (Grand Unified Theories, GUTs). M-theory is an example of a Theory of Everything (ToE), which aims to include the gravitational force as well.

Why the urge to unify or to build more inclusive theories? This sounds like the sort of “blue skies” research that politicians scorn in favour of research with commercial benefits. However, the curiosity-driven work of James Clerk Maxwell in the 19th century on the relation between electric and magnetic fields showed that electromagnetic fields spread through space at the speed of… light! Thus, light was an electromagnetic wave, a discovery which led on to radio waves, microwaves, X-rays and gamma rays, and to untold benefits in medicine and communication. It is quite reasonable (though not guaranteed!) to expect that future unifying theories will lead to useful outcomes.

H&M’s approach leans heavily on the work of my favourite scientist, Richard Feynman, a profound thinker but also an engaging and playful character. You would be rewarded if you looked into his life (and perhaps watched clips of interviews with him on the BBC website).

Feynman worked on the science of the very small, where quantum effects rule. One example concerns the behaviour of light when it shines on two vertical narrow slits very close together. This gives rise, not to two vertical bars on a screen, but to a wide horizontal band of dark and light bars.

This has classically been explained by Thomas “Phenomenon” Young (1773-1829), another fascinating character, as the interference of the peaks and troughs of waves, sometimes reinforcing, sometimes cancelling each other, much as ripples in water do. This fatally wounded the particle theory of light held by Newton.

This commonsense explanation was however shown to be inadequate, not least by Einstein’s proof that light could act as particles, photons, in the photoelectric effect. Newton’s theory rose again Lazarus-like. More oddly (and contrary to Newton and indeed to common sense), faint beams of light consisting of single photons when shone on a double slit gradually reproduced, spot by spot, the interference pattern supposedly explained by wave behaviour.

The “solution” was to associate a probability wave with each photon so that where it ended up was essentially random but over time a distinct pattern emerged. It was as if each photon passed through both slits and the probabilities interfered with each other resulting in the detection of the photon at a particular place.

Theory predicted that matter particles would also have a probability wave associated with them and, sure enough, electrons (and larger particles) behave in a similar way with a double slit — even single electrons interfere with themselves (this experiment was voted the most beautiful experiment in physics in 2002)!
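
If you want to see how random single-particle detections can build up fringes, here is a minimal sketch in Python (my own illustration, not from the book). It samples photon positions from the idealised two-slit probability distribution, proportional to cos² of the phase difference, using made-up values for wavelength, slit separation and screen distance, and ignoring the single-slit envelope.

```python
import numpy as np

# Illustrative (assumed) parameters: 650 nm light, slits 50 micrometres apart,
# screen 1 m away. Ideal far-field two-slit probability density:
# P(x) proportional to cos^2(pi * d * x / (wavelength * L)); single-slit envelope ignored.
wavelength = 650e-9   # m
d = 50e-6             # slit separation, m
L = 1.0               # slit-to-screen distance, m

x = np.linspace(-0.02, 0.02, 2001)                      # screen positions, m
intensity = np.cos(np.pi * d * x / (wavelength * L))**2
prob = intensity / intensity.sum()                      # normalise to a probability distribution

rng = np.random.default_rng(0)
for n_photons in (10, 100, 10_000):
    hits = rng.choice(x, size=n_photons, p=prob)        # each photon lands at a random position
    counts, _ = np.histogram(hits, bins=40, range=(-0.02, 0.02))
    print(f"{n_photons:>6} photons:", " ".join(f"{c:4d}" for c in counts))
```

With a handful of photons the hits look random; with thousands, the familiar bright and dark fringes emerge from the histogram – which is just the point the probability-wave picture makes.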

Feynman’s explanation is that the system, in this case the single electron/double slit/screen system, has not just one but every history. The particles take every possible path on their way from the source to the screen — simultaneously! Furthermore, our observations of the particles go back into their past and influence the paths they take.

If, like me, you’re going “What?”, you’re in distinguished company: Feynman himself said “I think I can safely say that nobody understands quantum mechanics”. Nevertheless, the theory has passed every test.

Lots of people are unhappy with the implication that someone has to be looking before a quantum process is “forced” to arrive at a particular outcome — and yet this has been confirmed by many experiments. It actually is the case that the outcome is influenced by the process of measurement or detection (though this need not be a conscious process).

This sort of crazy quantum behaviour obeys strict laws. Laws of nature are not like human laws which seek to encourage certain preferred behaviours. They explain how things behave and how they can behave. The laws of modern physics, including the modern understanding of gravity, explain an incredible range of observations to incredible precision and have made amazing predictions which have almost entirely been borne out. H&M pose more fundamental questions, including “Is there only one set of possible laws?”

The laws are, needless to say, not entirely known. While three of the four forces of nature, the electromagnetic, weak and strong forces, have provisionally been united in the “standard model”, crucially gravity still needs to be integrated into the picture. This is what M-theory, incorporating string theory and supergravity, seeks to do. One of its startling predictions is that there are 10 space dimensions and one time dimension, in contrast with our everyday experience of three space dimensions and one of time. The unobserved dimensions are rolled up very small, so that particles are actually vibrating strings or membranes.

M-theory does not predict the exact laws observed. These depend on how the extra dimensions are “rolled up”. A great many universes are possible, some 10^500 or 1 followed by 500 zeroes, each with a different combination of fundamental constants, and it is not surprising that we exist in one where the constants are compatible with the evolution of life. The “apparent miracle” is explained.

H&M point out that the law of gravity is not incompatible with the emergence of a universe “from nothing”. In particular, the principle of conservation of energy is not violated (because, while matter energy is positive, gravitational energy is negative) and, at least in quantum mechanics, what is not forbidden is compulsory. Furthermore, with a wide range of possible sets of constants, some (at least one!) universes must come into existence in which life can evolve.

And here, without the need for a creator, we are!

The Bolsheviks, Stalin and Science

In the discussions prompted by the centenary of the first workers’ government, little has been said about the Bolsheviks and their science policies. This series of blogs about Marxism, the Bolsheviks, Stalin, and science draws, amongst other sources, on Simon Ings’ recent book Stalin and the Scientists,1 Douglas R Weiner’s book Models of Nature,2 and Loren R Graham’s Lysenko’s Ghost.3

“No previous government in history was so openly and energetically in favor of science. …[it] saw the natural sciences as the answer to both the spiritual and physical problems of Russia” (Graham, quoted in Ings).1

“An individual scientist may not at all be concerned with the practical application of his research. The wider his scope, the bolder his flight, the greater his freedom from practical daily necessity in his mental operations, all the better” (Trotsky).4

Russia before the Bolshevik revolution was an unpromising prospect for the anti-capitalist movement. Atop the underdeveloped mainly agrarian base, lately emerged from feudalism, and a small urban working class, sat a tiny superstructure of art and science. This included people of world renown (composers such as Tchaikovsky, Borodin, Stravinsky, Prokofiev; authors such as Pushkin, Chekhov, Dostoyevsky, Tolstoy, Gorky, Mayakovsky; artists such as Repin, Chagall, Kandinsky, Malevich; other creatives such as Diaghilev, Fokine, Nijinsky, Pavlova) but relatively few scientists (such as Borodin (the same!), Mendeleev, Pavlov, Tsiolkovsky, Kovalevskaya, Kropotkin). Quite a few of these were as avant garde as any foreign contemporaries: for example, Pavlov and Metchnikoff received Nobel Prizes for Medicine in 1904 and 1908 (and Mendeleev should have got it for Chemistry).

The problem facing the Bolsheviks was an economically and socially backward country: a tiny working class; a multitudinous peasantry; a legacy of Tsarist repression; colossal war losses (3 million deaths from all causes; 4 million wounded). Isolated, the Soviet state fought against the White counter-revolutionaries, aided by 170,000-plus foreign soldiers; agricultural and industrial production collapsed, as did civil society (millions of orphans left wandering); famine and disease were rife (5 million died in the Volga famine of 1921-2, after crop failures; 3 million died of typhus in 1920 alone).

This is the background for Ings’ history of post-revolution science,1 Weiner’s book about the conservation movement in the USSR,2 and Graham’s book about the notorious Lysenko chapter in genetics.3

As Marxists, the Bolsheviks were very pro-science.5 Looking back in 1925, Trotsky summed up the best aspects of the Bolsheviks’ attitude: “The new state, a new society based on the laws of the October Revolution, takes possession triumphantly – before the eyes of the whole world – of the cultural heritage of the past.” On the independence of science from imposed political goals, he said “Only classes that have outlived themselves need to give science a goal incompatible with its intrinsic nature. Toiling classes do not need an adaptation of scientific laws to previously formulated theses.”4 He had in mind capitalist societies but his words apply equally to the Stalinist reaction soon to destroy the gains of 1917.

Trotsky explicitly accepted the heritage of the natural sciences: “The need to know nature is imposed upon men by their need to subordinate nature to themselves. Any digressions in this sphere from objective relationships, which are determined by the properties of matter itself, are corrected by practical experience. This alone seriously guarantees natural sciences, chemical research in particular, from intentional, unintentional, or semi-deliberate distortions, misinterpretations, and falsifications.”4 Trotsky had not counted on the fraudulent exaggerations or falsifications of “practical experience” by such as Lysenko, whose theories had the endorsement of Stalin himself, and the persecution even unto death of those who stood for scientific knowledge.

The Bolsheviks acted quickly to protect the environment as an important resource to be used to build socialism, rather than to be squandered for short term needs.6 This approach was followed in other fields but the nature of the government changed with the privations of the civil war, the early death of Lenin, and increasing bureaucratisation, culminating in Stalin’s domination. As Ings observes, “Leaders, politicians and bureaucrats have their hobby horses, of course. The problems start only when these people assume for themselves an expertise they do not possess, when they impose their hobby horses on the state by fiat. The Bolshevik tragedy was that, in donning the mantle of scientific government, the Party’s leaders felt entitled [even obliged] to do this.”

Ultimately, it was Stalin alone who was in a position to impose his hobby horses, or rather those of the scientists he favoured. This was most egregious in the area of agriculture and genetics.7 Immediately after the revolution, however, the Bolsheviks found that many of the existing scientific establishment were willing to work with them, exemplified by the (Imperial) Academy of Sciences which, as early as the end of 1917, offered to aid “state construction.” However, organised scientific work was all but impossible until the civil war and the ensuing famine caused by drought and crop failures in 1921-2 were over.

Gradually, scientists began to organise and reorganise. Scientific supplies, and even food and fuel, were scarce and scientists used cunning and ingenuity to collect equipment. Pavlov, for example, grew his own vegetables but lacked food for his experimental dogs.

The All-Russian Society for Nature Conservation was founded in 1924 and the movement had much success in setting up and running nature reserves with scientific goals of understanding the ecology of the Soviet Union. With Stalin’s “Great Break,” the 1929 turn towards building “socialism in one country,” the attitude towards science and nature began to change. By the early ‘30s, and the claimed completion of the first Five-Year Plan, the author Gorky, an enthusiast for Stalin’s rapid industrialisation, could describe nature not as something to be understood but as an “enemy standing in our way…our main foe.” This meant that nature, in particular the nature reserves, had to yield to the exploitation and pollution that accompanied canal and dam building, the steel plant construction, and the expansion of agricultural land.6

The Russian Association of Physicists was set up in 1925, later to produce Nobel Prizewinners such as Landau and Kapitsa. Many physicists were mentored by Sergei I Vavilov, whose brother Nikolai would become the most prominent victim of Stalin’s meddling in genetics.7 The emigré Cambridge physicist Peter/Pyotr Kapitsa, who was virtually kidnapped during a family visit to Russia, was another leading mentor. Despite Stalin’s doctrinaire rejection of Einstein’s theories, Russian physicists were successful in catching up with the USA in developing first an atom bomb and then a hydrogen bomb from 1944 on.8

Stalin’s purges affected science greatly, particularly when scientists defended science against Stalin’s mistaken theories. Many dedicated scientists were imprisoned or shot (or died of maltreatment) as “wreckers”, “terrorists”, or “foreign agents”. Ideological commitment to socialism was not a defence. Three out of eight Soviet delegates to the Second International Congress of the History of Science, Bukharin, Hessen and Vavilov, were shot or died in prison, while a fourth, Ernst Kol’man, though a Stalin supporter, was imprisoned for non-science reasons.9

After the death of Stalin, the worst ideological influences were relaxed or removed, but the attitude towards science and nature as something to be directed did not entirely change. This led to the ecological disaster of the Aral Sea and nuclear contamination in the Urals, while Lysenko gained the ear of Khrushchov, suggesting one unsuccessful agricultural venture after another. The top-down approach essentially continued until the end of the USSR.

Stalin’s worst errors were also repeated in Mao’s China in the so-called Great Leap Forward (1958-62). One particular episode epitomises the contempt of the Chinese Stalinists for science. The Four Pests Campaign focused on killing sparrows which the bureaucrats blamed for eating grain. In fact, as any ecologist could have testified, they also ate a lot of insects. With the sparrows largely eradicated, locust populations burgeoned. The “backyard steel furnaces” fiasco resulted in deforestation for fuel with the production of worthless low-grade pig iron. Mao lacked any knowledge of metallurgy and the experts who might have advised him were either in labour camps or cowed by the experience of the “Hundred Flowers Campaign.” The environmental damage and disruption of rural life caused by the Great Leap resulted in upwards of 30 million famine and other deaths.

This series of articles will cover Marxists’ attitudes to the natural sciences, physics in Russia, nature conservation, and Stalin’s deformation of genetics.

1Stalin and the Scientists: A History of Triumph and Tragedy, 2016.
2Models of Nature: Ecology, Conservation, and Cultural Revolution in Soviet Russia, 1988.
3Lysenko’s Ghost: Epigenetics and Russia, 2016
4Trotsky, Dialectical Materialism and Science, in Problems of Everyday Life (1925)
5See forthcoming article on the attitudes of Marxists to science
6See forthcoming article on nature and environment
7See forthcoming article about agriculture and genetics
8See forthcoming article about Soviet physics
9Science at the Cross Roads (1931/1971) comprises the contributions of the Soviet delegates.

Taking back control (of our units)

As I was perusing Physics World1 earlier this year, I revisited an article by physicist John Powell2 (author of How Music Works and Why We Love Music) in which he proposed, in view of recent triumphs of populism, replacement populist units of measurement.

Of course, in the UK we could simply reinstate feet, pounds and hours (instead of the horrid European metres, kilograms and seconds), while in the US they have never gone away.
For Powell this would be too simple. He proposes furlongs, hundredweights and fortnights, on the rather contrived grounds that horse-racing is popular (measured in furlongs), as are holidays lasting a fortnight. He glosses over the choice of the hundredweight but, of course, this would reduce fat-shaming since nearly everyone’s weight would fall into the range of 1 to 3 cwt.

Elsewhere, a unit of the firkin (90lb) has been proposed, leading to the FFF system. Following the French revolution, times based on the day were proposed: the centi-jour would have been about 14 minutes.
Various constants of nature would have to be converted: Powell points out that the acceleration due to gravity, 9.8 metres per second squared, would be 71 gigafurlongs per fortnight squared. The speed of light in vacuo would be 1.8 terafurlongs per fortnight. Buying food would be awkward in hundredweights but I think this could be sorted with the division of the hundredweight into a hundred … weights! A weight of potatoes would be a bit over a pound or half a kilo.
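
As a quick check on these figures, here is a back-of-the-envelope sketch in Python (my own, assuming the standard furlong of 201.168 m, a fortnight of 14 days, and the long hundredweight of 112 lb):

```python
# Rough check of the converted constants (assumed standard definitions).
FURLONG = 201.168              # metres
FORTNIGHT = 14 * 24 * 3600     # seconds
CWT = 50.80                    # kilograms (long hundredweight, 112 lb)

g = 9.8                        # acceleration due to gravity, m/s^2
c = 299_792_458                # speed of light, m/s

g_fff = g / FURLONG * FORTNIGHT**2   # furlongs per fortnight squared
c_fff = c / FURLONG * FORTNIGHT      # furlongs per fortnight

print(f"g = {g_fff:.2e} furlongs/fortnight^2")       # ~7.1e10, i.e. about 71 gigafurlongs
print(f"c = {c_fff:.2e} furlongs/fortnight")         # ~1.8e12, i.e. about 1.8 terafurlongs
print(f"one 'weight' (cwt/100) = {CWT/100:.2f} kg")  # ~0.51 kg, a bit over a pound
```

The output agrees with Powell’s 71 gigafurlongs per fortnight squared and 1.8 terafurlongs per fortnight.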
Powell remarks that it would be popular for pi to have an exact value of, say, 3 as this would greatly simplify calculations of circular areas and so on. This reminds me that this value is implied in the Bible: I Kings 7:23-26 refers to a circular cauldron in Solomon’s temple with a diameter of 10 cubits and a circumference of 30 cubits. Now, as any fule kno, the ratio of the circumference to the diameter of a circle is pi (3.14 approx.) while 30/10 = 3. I am shocked (SHOCKED!) to find that the Bible literalists have almost entirely disregarded the word of God in this matter (though at least one person has addressed this problem and explained it away with a lot of assumptions that would have been unnecessary if the word “approximately” had been in the vocabulary of God).3

This reminds me of the sadly apocryphal stories of attempts to legislate more convenient values for pi in, of course, the USA. In one of these, in 19th Century Iowa, a legislator suggested that pi be defined as 3 to make things easier but the suggestion was quickly quashed in committee.
A more serious proposal originated with Edwin J Goodwin, an Indianan physician and amateur mathematician. In 1894, he believed that he had solved three ancient and unsolved problems in mathematics, namely squaring the circle, doubling the cube and trisecting the angle, using only a straightedge and compasses. His belief was not affected by the proof in 1882 that squaring the circle was impossible, which had confirmed its proverbial meaning of attempting the impossible, a usage stretching back at least to Aristophanes’ The Birds of 414 BCE.
Goodwin persuaded the Indiana legislature to adopt his ideas in Engrossed Bill No. 246,4 generously allowing them to use his methods in state textbooks without charge, and it sailed through committee and the lower house before attracting criticism from a passing mathematics professor, who persuaded members of the Senate not to pass the bill. Section 2 of the bill states “the ratio of the diameter and circumference [of a circle] is as five-fourths to four.” This means that pi = 4/1.25 = 3.2 exactly, which it most definitely doesn’t (it’s about 2% less).

1Monthly journal of the Institute of Physics and, together with Chemistry World (ditto of the Royal Society of Chemistry), my favourite reading.

2Lateral Thoughts: Hail to the new, popular, units (April 2017, p52).

3http://www.icr.org/article/524

4http://www.agecon.purdue.edu/crd/localgov/second%20level%20pages/indiana_pi_bill.htm

Letter to The Psychologist

(published December 2017, p6)
Dear editor,
 
In the discussion between Cordelia Fine and Joe Herbert (Is testosterone the key to sex differences in human behaviour? October issue, p44-8*), both agree that sexist attitudes can account for much gender imbalance in employment. However, Professor Herbert insists that biological differences, testosterone-related, account for part of the imbalance.

While he may, or may not, be right in some areas, his chosen example, bus-driving, fails to support his argument. As a child in World War 2, he might have noticed that women drivers of buses, ambulances, fire engines, vans, lorries, and tractors were often to be seen. 

It didn’t take the advent of power steering to nullify the upper-body weakness of women, attributable to their lack of testosterone. Different gearing, steering wheel sizes, and driving techniques had already made it possible for anyone to drive heavy vehicles. It was merely the supposed “unsuitability” of women for such jobs, suspended during wartime, that kept them out. As Cordelia Fine says.

Les Hearn
*https://thepsychologist.bps.org.uk/volume-30/october-2017/testosterone-key-sex-differences-human-behaviour

Insect Armageddon

The number of insect species known is about a million and the number of individual insects alive at any one time is a mind-boggling 10 billion billion (10^19), with about 300 times the mass of the human population; estimates of the total number of insect species waiting to be discovered go up to 30 million.*1,2

It was therefore concerning to read the recent report that populations of flying insects had declined by 76% (82% in midsummer) in Germany over just 27 years.3 The study was carried out at 63 sites in nature reserves between 1989 and 2016. The technique was a simple one: tent traps were set up and the insects caught in a given time were weighed. The decline affected all kinds of insect.

A long-running study in another German nature reserve showed a decline of 40% in moths and butterflies over 150 years. More recently, the European Environment Agency reported that 50% of grassland butterflies had been lost in 20 years in 19 European countries. It suggests loss of managed grasslands, either to scrub or to crop growing, and pesticides on neighbouring farmland as potential causes.4 And a worldwide study of invertebrate species (of which about 80% are insects) showed a 45% decline over the past 40 years.5

Anecdotally, the British media have commented on the disappearance of the “moth snowstorm”, which once obliged night-time drivers in the countryside to clean their windscreens of the corpses of splattered moths that had mistaken their headlights for the Moon.2

This decline is serious for two main reasons. The wealth of insect species supports a large number of food chains, with birds most obviously, but also fish, amphibians, reptiles and mammals (especially bats), at or near the top. Birds affected in Britain include the grey partridge and spotted flycatcher, both having declined by 95%, and the red-backed shrike, extinct since the 1990s, while the house sparrow has also shown a 50% decline since the 1970s.2 Furthermore, a great many plants, including many food crops, rely on insects to pollinate their flowers. These insects include not only bees but also moths, butterflies, beetles and hoverflies. A further reason is that if predatory insects decline, populations of prey species that eat food crops could explode, leading to economic losses either from reduced yields or increased use of pesticides.
What is causing the decline and what should be done?
Agricultural practices such as monocultures – great swathes of just one crop – reduce biodiversity. The removal of hedges, ponds and other refuges for wildlife also reduces the niches available to insects and the other members of their food webs.

Pesticides are also a factor, especially when they affect other insects as well as crop pests. Many of the most harmful, such as DDT, have been banned but modern less harmful ones seem not to be entirely harmless. This may be the case with neonicotinoids (see below). The evidence about these is contradictory but seems to be coming down on the harmful side.6

Climate change does not seem to be a factor in the decline at present but as warming accelerates it may become one. If anything, increased temperatures should increase insect biomass. For example, the warmer winters of recent years may have allowed pest species to overwinter more successfully, leading to more crop damage. However, species that rely on particular plants for food may suffer if those plants cannot cope with climate change and become more scarce. Also, increased extreme weather events such as droughts would negatively affect insects.

Initiatives that could help include changing agricultural incentives to favour greater crop diversity, keeping or restoring hedges, and reducing pesticide use, for example by applying pesticides directly rather than spraying them into the atmosphere. In a rare example of evidence-based policy-making, Environment Secretary Michael Gove says the UK will support a Europe-wide ban on neonicotinoids after the German study.7

What are neonicotinoid insecticides?
These insecticides, developed from the 1970s onwards, have rapidly become popular because, unlike the organophosphate and carbamate insecticides, they have low toxicity to mammals, including humans, while being very toxic to insects. They now amount to about a quarter of the global insecticide market, with one of them, imidacloprid (patented in 1985 by Bayer), being the most widely used insecticide in the world.

Neonicotinoids (“new nicotinoids”) are similar to nicotine, an alkaloid produced by the tobacco plant (Nicotiana tabacum) and other members of the Solanaceae family (which includes deadly nightshade, potato and aubergine). Presumably it is produced as a defence against insects that would otherwise eat the leaves of the tobacco plant.

In humans, nicotine stimulates the brain’s nicotinic acetylcholine receptors (NAChRs), a class of receptors that promotes the release of dopamine and endorphins, stimulating the brain’s reward system. Nicotine is said to cause feelings of calmness and relaxation, while also making the user more alert. It can reach the brain some 15 seconds after inhalation of tobacco smoke. These effects are often desirable and even useful, so it is unfortunate that nicotine intake is usually accompanied by a cocktail of carcinogens. It is even more unfortunate that it produces tolerance, where the user requires more and more to achieve the desired effect, and that it is highly addictive. Nicotine was previously widely used as an insecticide: it overstimulates insects’ central nervous systems, rather than making them feel relaxed, and kills them. It was phased out because of its harmfulness to mammals, including the people using it, their children and their animals. However, it should be noted that it is impossible to get a fatal dose of nicotine from smoking.

Neonicotinoids are chemically different to nicotine: they cannot cross the blood-brain barrier in mammals (and so do not mimic the effects of nicotine), and bind much more strongly to insect NAChRs than to mammalian ones. They are also thought to be less harmful to fish, an important consideration when rain can run off fields into rivers. While nerve gases such as sarin prevent the breakdown of the nerve transmitter acetylcholine (ACh) in humans, neonicotinoids mimic the action of ACh in insects and also cannot be broken down. The result is the same: nerves are stimulated to fire continuously, causing paralysis and death.

Neonicotinoids were introduced after many insects had developed resistance to organophosphate, carbamate, and pyrethroid insecticides. Predictably, resistance has started to develop to them as well (travellers may wish to note that bed bugs in New Jersey are now resistant).

Neonicotinoids are absorbed by plant roots and leaves and travel to all parts of the plant, where they are taken in by herbivorous insects. They are also more persistent – that is, they break down more slowly – than nicotine, offering more long-term protection to crops. They are active against a wide range of pests, such as aphids, whitefly, wireworms and leafhoppers. However, their wide range includes many non-target insects, some beneficial, such as bees.8 In theory, it should be possible to minimise exposure of other insects by applying the insecticide carefully, directly to the roots, rather than spraying. It is common to treat seeds before sowing, which is a less dangerous process.

Neonicotinoids and bees
It was reported last year that the use of neonicotinoids on oilseed rape in England from 2002 is linked to an average decline in all bee species of 7%, with the worst affected being those that collected nectar from rape flowers. This is serious news not only for the natural world but specifically for that substantial section of agriculture that relies heavily on pollination by bees and other insects. This is especially so since bee numbers have already suffered a lot from the parasitic mite Varroa and the mysterious Colony Collapse Disorder. It seems that neonicotinoids can get into pollen and nectar and thence into the bees. The amounts involved are not lethal but it is suggested that they may cause behavioural changes that make bee colonies less viable. One study shows that bumblebee colonies affected put on less weight before winter and are less able to survive.1 In any case, neonicotinoids are found in bees and at least some bees seem to be adversely affected so, on the precautionary principle, neonicotinoid use should be restricted.
………..
*Some 40% of insects are beetles. The great socialist scientist JBS Haldane, when asked what he deduced about God from contemplating the living world, replied “God has an inordinate fondness for beetles.”
1https://www.si.edu/spotlight/buginfo/bugnos
2https://www.theguardian.com/environment/2017/oct/21/insects-giant-ecosystem-collapsing-human-activity-catastrophe (author Michael McCarthy, originator of the term “moth snowstorm”).
3http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0185809
4https://www.eea.europa.eu/highlights/populations-of-grassland-butterflies-decline
5https://e360.yale.edu/features/insect_numbers_declining_why_it_matters
6https://www.nature.com/news/the-bitter-battle-over-the-world-s-most-popular-insecticides-1.22972?WT.mc_id=TWT_NatureNews&sf159501690=1
7https://www.theguardian.com/environment/2017/nov/09/the-evidence-points-in-one-direction-we-must-ban-neonicotinoids
8https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4284386/

Return of the ozone!

Good news! The ozone hole is shrinking at last, a rare success for collective action in response to scientific evidence.1 Unfortunately, it will take until 2050 to return to its 1980 levels. This is because the chemicals largely responsible for its depletion are very stable and those already released will persist in the atmosphere until then, even if no more emissions take place.

It’s 30 years since the signing of the Montreal Protocol which aimed to tackle the problem of the accelerating destruction of the ozone layer by chlorofluorocarbons (CFCs). Ozone in the stratosphere absorbs most of the Sun’s ultraviolet radiation (UVR) and without it life would be difficult or impossible except several metres below the surface of the oceans.

Ozone (O3) is made from oxygen (O2) by the action of UVR in the stratosphere. But for there to be oxygen in the stratosphere there first had to be oxygen in the lower atmosphere and this only appeared when Earth was about half the age it is now, with the evolution of photosynthesis by bacteria in the oceans. These produced oxygen as a waste product which gradually began to accumulate in the atmosphere. Ozone started to accumulate also and by half a billion years ago was absorbing enough UVR for the land to become habitable.

Scientists only became aware of these facts with:
A the prediction and then discovery of different types of light (radiation) with different wavelengths;
B the development of spectroscopy, the study of how matter absorbs and emits light; and
C the understanding of how hot objects emit energy in the form of light.
These were mostly the result of curiosity-driven research.

It was realised that the Sun should emit radiation of different wavelengths in the proportions predicted for the spectrum of a “black body” of the same temperature (about 5500 degrees Celsius). Spectroscopy showed that it did, with the puzzling exception of a region of wavelengths shorter than 310 nanometres, just beyond the violet region. This, the UV region, was about 1% of the predicted intensity. This meant that about 99% of UVR was being absorbed by something and an exhaustive search of likely chemical substances found that ozone was largely responsible.
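
For the curious, here is a rough sketch (my own illustration, using Planck’s law for an ideal black body at about 5500 °C) showing that the predicted radiance just below 310 nm should be a sizeable fraction of that in the visible region; measuring only ~1% of that prediction is what pointed to a strong absorber.

```python
import numpy as np

# Planck's law for spectral radiance:
# B(lambda, T) = (2 h c^2 / lambda^5) / (exp(h c / (lambda k T)) - 1)
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann constant, J/K

def planck(wavelength_m, T):
    return (2 * h * c**2 / wavelength_m**5) / np.expm1(h * c / (wavelength_m * k * T))

T_sun = 5500 + 273.15   # about 5500 degrees Celsius, in kelvin
for nm in (300, 310, 400, 500):
    ratio = planck(nm * 1e-9, T_sun) / planck(500e-9, T_sun)
    print(f"{nm} nm: {ratio:.2f} of the radiance at 500 nm")
```

The prediction at 300-310 nm comes out at roughly half the visible-light value, far more than the ~1% actually measured at ground level – hence the search for whatever was absorbing it.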

The amount of ozone differs in different parts of the world and at different times of year, as does the intensity of UVR, so the amount of UVR reaching the ground is variable. In general, UVR is highest when the Sun is higher in the sky, i.e. in equatorial regions and during summer in northern and southern regions.

The UVR that gets through can be damaging to life, including humans in whom it causes sunburn, cataracts, and potentially fatal skin cancers. Many humans have melanin pigment in their skin which can absorb UVR before damage can occur but lighter-skinned people in high-UVR regions are at risk. Australia and New Zealand have the highest rates of melanoma in the world. It was therefore alarming to learn in 1985 that there was a great hole in the ozone layer above Antarctica. However, the story started earlier.

Refrigerators use the evaporation and condensation of liquids to transfer heat from the contents to the outside (you may have noticed warmth from the back of a fridge). Early fridges used easily liquefied gases such as methyl chloride, ammonia or sulfur dioxide, but these were toxic if released. Chemist Thomas Midgley2 developed the efficient synthesis of chlorofluorocarbons (CFCs) around 1930 and proposed their use as safe refrigerants. CFCs are very unreactive which is excellent for a refrigerant. Midgley demonstrated their safety by inhaling some and blowing out a candle. However, if released when a fridge is damaged or scrapped, their very stability means that CFCs persist in the atmosphere, eventually reaching the stratosphere.

Here the problem starts: a CFC molecule such as Freon-12 (CCl2F2) is hit by a UV photon and a chlorine atom (Cl) is knocked out. If this collides with an ozone molecule, it grabs an oxygen atom to make a ClO molecule, leaving an ordinary oxygen molecule that doesn’t absorb UVR. The ClO collides with another ozone molecule, making more O2 and regenerating the original Cl atom… which can now repeat the process with more ozone. The Cl is thus a catalyst for the breakdown of ozone. Each cycle removes two ozone molecules and there can be thousands of cycles before the Cl atom collides with something else and the process stops.3
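
To get a feel for how one chlorine atom can destroy thousands of ozone molecules, here is a toy simulation (my own, with a purely assumed one-in-a-thousand chance per cycle that the chain is terminated, for example by the Cl atom reacting with methane to form HCl):

```python
import random

random.seed(1)

def ozone_destroyed(p_stop=0.001):
    """Toy catalytic chain: each full cycle (steps 1 and 2 in note 3) removes two O3
    molecules; with the assumed probability p_stop the Cl atom is removed and the chain ends."""
    destroyed = 0
    while random.random() > p_stop:
        destroyed += 2
    return destroyed

runs = [ozone_destroyed() for _ in range(1000)]
print("average O3 molecules destroyed per Cl atom:", sum(runs) / len(runs))
```

With the assumed termination probability of 0.001 the chain runs for about a thousand cycles on average, destroying of the order of 2,000 ozone molecules per chlorine atom; the real number depends on conditions in the stratosphere.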

This was realised in the ‘70s but no-one knew if the effect was significant until the late Joe Farman and colleagues found a massive hole in the ozone layer above Antarctica. The levels had dropped by some 40% in about ten years. Farman had been measuring the levels for about five years, first fearing that his instruments were faulty. NASA had failed to detect the drop as its computer software was programmed to ignore “unusual” readings.

The clear threat was that, as thinning of the layer spread, organisms would be affected by the increased UVR, particularly UVB. This would affect plant growth, harm populations of plankton in the upper levels of the oceans, and cause increased skin cancers and cataracts. Australia would be the first to be affected, with potential epidemic levels of skin cancer.

Due to different weather patterns, the Arctic had not yet developed an ozone hole but would eventually if nothing changed as the amount had also declined. Farman published his results in 1985 and, despite the opposition of the chemicals industry, the Montreal Protocol phasing out CFCs was signed in 1987. Readers may be surprised to learn that Margaret Thatcher played a positive role in this.4

It will take a long time for the ozone layer to return to its original thickness. In the meantime, we must make sure that governments and businesses adhere to the Montreal Protocol. But there is another problem: CFCs are actually more potent “greenhouse” gases than carbon dioxide and some of their ozone-friendly replacements, such as hydrofluorocarbons (HFCs), are even worse. Phasing out CFCs has already reduced the rate of global warming. One option is to amend the Montreal Protocol to include HFCs (they are already in the Kyoto Protocol) but the alternatives also have their own problems. Propane/methylpropane mixtures are very effective refrigerants but are flammable (but then so is methane, piped to most houses in the UK).

Notes:

1 http://www.unep.org/stories/story/still-fresh-30-ozone-hole-healing-montreal-protocol-takes-climate-change

2 Thomas Midgley had “form.” In 1921, he showed that tetraethyl lead when added to petrol prevented the damaging phenomenon of engine “knock.” Despite knowing of its toxicity (and taking a year off to recover from lead poisoning), Midgley insisted that it was safe. It was marketed as “Ethyl” with no mention of lead. Having initiated the poisoning of young brains for decades, Midgley then inadvertently initiated the destruction of the ozone layer through CFCs. Later he contracted polio and was partially paralysed. He invented a contraption to get him out of bed but became entangled in its ropes, dying from strangulation. It has been said that he “had more impact on the atmosphere than any other single organism in Earth’s history.”

3 Step 1: Cl + O3 —> ClO + O2
Step 2: ClO + O3 —> Cl + 2O2
Step 1 is now repeated with the Cl atom regenerated in Step 2, and so on thousands of times.

4 You won’t often hear a good word from me about Margaret Thatcher but arguably she was instrumental in the discovery of the ozone hole and in the subsequent Montreal Protocol. Hardline monetarist and privatiser though she was, when it came to science she was not so dogmatically in favour of the free market. With a Chemistry degree and PhD, she understood the need for “blue skies” (curiosity-driven) research.5 This may have partly explained why she protected the funding of the British Antarctic Survey (for which Joe Farman was working when he detected the ozone hole) where her colleagues saw only wasteful public expenditure. She could also understand the scientific evidence about CFCs and supported the Montreal Protocol. She also supported the UK’s membership of CERN and the establishment of the IPCC to research climate change.

5 See Margaret Thatcher’s influence on British science, by George Guise
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4123659/

The Google memo: there’s bias and then there’s bias

Google’s Ideological Echo Chamber: How bias clouds our thinking about diversity and inclusion, by James Damore1

James Damore, the recently (and perhaps unjustly) fired Google employee, criticises what he sees as the “left bias” of Google, which he says has created a “politically correct monoculture” that “shames dissenters into silence.” This left bias translates as “Compassion for the weak; disparities are due to injustices; humans are inherently cooperative; change is good (unstable); open; idealist.” A right bias would hold views such as “Respect for the strong/authority; disparities are natural and just; humans are inherently competitive; change is dangerous; closed; pragmatic.”

Like all stereotypes, these caricatures contain elements of truth. Damore is keen to distance himself from both, but in reality he comes down on one side.

Put simply, Google’s stated policy is to encourage groups which are under-represented in its current workforce to apply for jobs or promotion.2 These include:

  • Women: around 50% of the population, but 31% of Google overall, 20% of technical posts, 48% of non-technical posts (doubtless lower-paid), and 25% of leadership positions.
  • Blacks (undefined, but presumably African Americans): 13.3% of the US population, but 2% of Google overall, 1% of technical posts, 5% of non-technical posts (lower-paid), and 2% of leadership positions.
  • Hispanics: 17.6% of the population, but 4% of Google overall, 3% of technical posts, 5% of non-technical posts, and 2% of leadership positions.

To reiterate, there is a marked imbalance in the employment of Blacks and Hispanics in all areas of Google and of women in all but non-technical posts, relative to the US population. Damore chooses to focus his arguments on Google’s attempts to redress the balance for women. His arguments do not deal with ethnic or other minorities (except, curiously, conservatives) but his concluding suggestions do!3

He then produces a series of truisms and half-truths about male-female differences which he proposes as “possible non-bias causes of the gender gap in tech [i.e. software engineering].” He himself accepts that he is talking about averages and that there is substantial overlap between the sexes, so nothing can be deduced about any individual. He therefore sets himself a high bar if he expects these differences to account for a 20:80 split in tech jobs.2

Damore refers to biological differences that he claims are universal across cultures, highly heritable, linked to prenatal testosterone, and “exactly what we would predict from an evolutionary psychology perspective.” These include, he says, “openness directed towards feelings and aesthetics rather than ideas…a stronger interest in people rather than things…empathizing versus systemizing.” This, he suggests, may direct women towards social or artistic areas (why, then, are there more male composers and painters?). It is not clear how any of this makes women less suited (on average) to writing software, or men more suited to being managers.

There is also “extraversion expressed as gregariousness rather than assertiveness.” Damore says this results in women being less likely to ask for raises, to speak up, or to lead. Google has tried to counter the reticence of women to put themselves forward for promotion. It sent an email to all engineers quoting studies showing that (1) girls tend not to raise their hands to answer maths problems, though they are more often right than boys; and (2) women tend not to volunteer ideas in business meetings, though their thoughts are often better than those of male colleagues. The email also reminded recipients that it was time to apply for promotion. Applications from women soared, and with greater success than for male engineers. It is not clear why Damore would object to this.4

Damore points to evidence that women show more “neuroticism” than men but his source (Wikipedia) points out that this concept is not well-defined. He also says that higher status is more likely to be a male goal, using the lack of women in top jobs as evidence (thus assuming what he set out to prove). Curiously, he sees the preponderance of men in dangerous jobs such as coal-mining, fire-fighting and garbage collection(!) as part of their drive for status.

What Damore does not mention is that cultural and individual sexism and misogyny discourage some (many?) girls and women from pursuing studies and careers in areas that have historically been closed to them, or away from which they have been steered by peers, family or advisers. If girls and women were encouraged to see software development as something open to them, where they would be welcomed, and they still didn’t apply in equal numbers, then we could perhaps start looking for other explanations. The question of welcome is crucial: if male employees disrespect or sexually harass them, women may not wish to stay.5 It is likely that, with encouragement at school and college and a non-discriminatory working environment, something approaching balance would be achieved instead of 20:80. It might not be 50:50 – it might conceivably be 60:40 – who knows?

According to Wendy Hall, a computer science professor, there is no such imbalance in several Asian countries, indicating cultural rather than biological influences on the gender imbalance in US information technology companies.6 Professor Hall also points to a fall in the proportion of women on computer science courses in UK universities from 25% in 1978 to 10% in 1987. In the US, women’s participation in historically male-dominated fields such as medicine, law and the physical sciences rose from about 10% in 1970 to between 40 and 50% in 2010; computer science followed the same trajectory, from about 12% in 1970 to about 37% in 1985, but thereafter declined to around 18% in 2010 (figures from blogger Faruk Ateş).7 We have to look for explanations other than biology for these changes.

Ateş points out that many pioneers of computing and programming were women but that, from the late 1960s, women were actively discouraged from going into computing by professional organisations, by ad campaigns, and by aptitude tests that favoured men. Stereotypes of computer programmers as awkward male nerds appeared in films in the 1980s. Ateş and Hall also refer to the marketing of video games on home computers, such as the Sinclair and Amstrad machines, preferentially to boys in the 1980s, giving the impression that “technology is for boys, not girls.” Other scientists have also argued against Damore, including Angela Saini8 and Erin Giglio.9

A number of scientists have weighed in on Damore’s side, claiming that his views are in line with research findings on sex differences: males tend to be “thing-oriented” and females “people-oriented,” and people’s interests tend to match their job preferences, so we should expect imbalances in gender ratios across jobs. (The fact that “women’s” jobs tend to be paid less is just a massive coincidence.) One study asks subjects about their preferences for these jobs: “car mechanic, costume designer, builder, dance teacher, carpenter, school teacher, electrical engineer, florist, inventor(!), and social worker.” No doctor, lawyer, bus-driver, paramedic, politician, accountant…

A closer look at many jobs shows that their duties do not split easily into “thing-oriented” or “people-oriented,” being more of a mixture. Further, the proportions of men and women in some occupations have varied enormously over history: examples include physical labour during wartime, or in other countries, and the medical profession, from the 19th century, when women were banned, to now, when a majority of entrants to medical school are women.

What is disturbing is that these scientists choose to investigate sex differences to explain observed gender imbalances in occupations when we already have a perfectly good explanation – the different experiences of boys and girls. Boy and girl children are treated differently by their mothers and significant others right from birth and, even in the supposedly egalitarian societies of the West, sex roles and expectations are reinforced throughout childhood and beyond. It may be that the “natural” ratio in software engineering is not 50:50 but we will never know since we don’t have a Planet B for comparison.

It is also disturbing that the research itself does not clearly show many statistically significant differences between the sexes that are relevant to suitability for software engineering. For every study showing some effect (such as higher general intelligence (“g”) scores) in men, there is another not showing this. Further, where there are well-documented differences, for example in visuo-spatial skills such as mental rotation, these can be reduced or removed with training.

To Damore’s credit, he suggests ways to make software engineering more woman-friendly (making programming more people-oriented and collaborative, fostering cooperation, making tech and leadership jobs less stressful, offering more part-time work, and, intriguingly, freeing men from their current inflexible gender role, allowing them to become more “feminine”).

However, Damore incorrectly sees Google’s encouragement of applications from historically under-represented groups as discriminatory, failing to recognise that, even if women would not necessarily take up tech jobs in equal proportion to men, there is no reason other than discrimination (not just at Google) for Blacks and Hispanics to be seriously under-represented in Google as a whole and especially in tech and leadership jobs.3 In the absence of any better policies, his proposals would perpetuate the present unfair treatment of African Americans and other oppressed minorities.

There is bias in Google and in the job world in general but it’s against women and minorities, not against white men like James Damore.

 

Notes:
1 https://medium.com/@Cernovich/full-james-damore-memo-uncensored-memo-with-charts-and-cites-339f3d2d05f

2 https://www.google.com/diversity/

3 Damore’s suggestions include “Stop restricting programs and classes to certain genders or races.” One programme cited is BOLD. Google states that “The BOLD Immersion program is open to all higher education students, and is committed to addressing diversity in our company and the technology industry. Students who are members of a group that is historically underrepresented in this field are encouraged to apply.” Another is CSSI. Google describes this as being for “graduating high school seniors with a passion for technology — especially students from historically underrepresented groups in the field.” It is odd that Damore interprets this as “restricting … to certain genders or races.” He also mentions Google’s Engineering Practicum intern programme which states that it is for “undergraduate students with a passion for technology—especially students from historically underrepresented groups including women, Native American, Black, Latino, Veteran and students with disabilities.” I suppose it is an occasion for rejoicing that Damore doesn’t oppose Google’s encouragement of veterans and people with disabilities to apply. To reiterate, this is in the context of only 2% of Google’s employees being Black (population average 13%) and 4% Hispanic (18% of population). [all emphases mine]

4 https://www.washingtonpost.com/news/the-switch/wp/2014/04/02/google-data-mines-its-women-problem/?utm_term=.87a62e59d73d

5 The Elephant in the Valley survey found that 87% of the female tech staff responding had experienced demeaning comments from colleagues and 60% had received unwanted sexual advances. Individual stories range from the infuriating to the sick-making: https://www.elephantinthevalley.com

6 https://theconversation.com/growing-role-of-artificial-intelligence-in-our-lives-is-too-important-to-leave-to-men-82708

7 https://hackernoon.com/a-brief-history-of-women-in-computing-e7253ac24306

8 Angela Saini, author of Inferior: How Science Got Women Wrong, deals with some of Damore’s points in The Guardian: https://www.theguardian.com/commentisfree/2017/aug/07/silicon-valley-weapon-choice-women-google-manifesto-gender-difference-eugenics

9 Erin Giglio, a PhD student in evolutionary biology and behaviour and a graduate in psychology and genetics (and blogger), cites peer-reviewed evidence contradicting Damore’s arguments: https://medium.com/@tweetingmouse/the-truth-has-got-its-boots-on-what-the-evidence-says-about-mr-damores-google-memo-bc93c8b2fdb9

Hunt debunked: there is no “weekend effect” in the NHS

The Tory Health Secretary, Jeremy Hunt, provoked the first ever strike by doctors in NHS England last year when he tried to force through a new contract for junior doctors that would have significantly worsened pay and conditions. He justified this on the spurious grounds that:

  • There was a weekend effect whereby patients admitted to hospital at weekends had a significantly higher risk of dying (the Department of Health (DH) published references to eight studies which were claimed to prove this);
  • Rectifying this effect required more junior doctors to work longer at weekends. This was supposed to be part of the government’s promise to introduce a “seven-day” NHS without any extra staff; and
  • This had to be achieved without costing any more.

Hunt’s use of the Tories’ supposed mandate to introduce a seven-day NHS is in itself thoroughly misleading. Hospitals have always operated throughout the week, and both junior and senior doctors work at weekends. It is in primary care, GPs’ surgeries, that a five-day NHS operates, and experimental weekend GP services tend not to be much used by patients. But, even granting Hunt’s seven-day claim, is there actually a weekend effect, and are junior doctors’ hours a factor?

Previously, I showed that the DH’s eight studies on the weekend effect included only two independent pieces of work.1 Those studies showing a weekend effect did not try to explain it but suggested that a lack of senior doctors at weekends might be one factor: none referred to a role for junior doctors.

Since the publication of Hunt’s evidence, the DH has itself admitted that it has no evidence that a seven-day NHS would have any effect on deaths or on time spent in hospital. And since the DH’s evidence also, curiously, showed a decreased rate of deaths at weekends, it is conceivable that a seven-day NHS would make things worse!

Using Hunt’s cited papers, I showed that greater illness among weekend admissions could completely account for increased mortality. Now Professor Sir Nick Black, an adviser to DH and NHS England, has blown Hunt’s case out of the water with more objections to the whole idea of a weekend effect.2

Black shows, first of all, referring to his own work in 2010, that the methods used to calculate hospital death rates (Hospital Standardised Mortality Ratios – HSMRs) were flawed.3 The HSMR is the ratio of observed deaths to expected deaths. The observed deaths are not easy to get wrong: it is fairly obvious when a patient has died. The estimate of expected deaths, however, can be more or less accurate, depending on the completeness of the information available about the patients. Ideally the ratio will be 1:1, i.e. expected deaths will equal actual deaths, but an underestimate of expected deaths will produce an apparent excess of observed deaths, and questions will be asked.
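A worked illustration, with invented numbers of my own rather than Black’s: suppose 120 patients admitted at the weekend die. If their conditions are fully and accurately recorded, the risk model might also predict 120 deaths:

HSMR = observed deaths / expected deaths = 120/120 = 1.0 (no excess)

If incomplete coding leads the model to expect only 100 deaths, then:

HSMR = 120/100 = 1.2 (an apparent 20% “excess” of deaths)

Nothing about the care has changed; the “excess” exists only because the expected deaths were underestimated.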

The obvious question, “Did we get our estimates right?”, does not seem to have occurred to Hunt and his advisers. Black describes three problems with the expected deaths calculation.

  • First, some patients’ conditions (morbidities) are miscoded. Black illustrates this with a study of stroke patients, published in May 2016 in the British Medical Journal but inexplicably missed by Hunt and his top medical adviser, Professor Sir Bruce Keogh.4 This study found that stroke patients admitted as non-emergencies on weekdays (who have a lower risk of death) were frequently miscoded as new stroke patients (who have a higher risk). Their lower actual death rate dragged down the apparent mortality of weekday stroke admissions, making weekend emergency stroke admissions look as if they carried an increased risk of death. When the coding was corrected, the weekend effect disappeared!
  • Second, the particular characteristics of each case are not always accurately recorded, as a result of delays in doing tests, and this can affect estimates of survival as well as actual survival! According to Hunt’s arguments, it might be expected that this would be a problem at weekends. Black refers to another study of stroke patients, again published in 2016 in another top medical journal, The Lancet, and again inexplicably missed by Hunt and his advisers.5 This study found no weekend effect when comparing the quality of care associated with different days and times of admission. For the record, the worst time to be admitted was overnight on a weekday.
  • Third, patients often have co-morbidities (more than one thing wrong with them) and may not die of the condition for which they were admitted. These other conditions are less likely to be noted, or rated for seriousness, for weekend admissions, which tend to be emergencies. This matters because each condition should contribute to the estimated probability of an individual’s death: if some conditions are not recorded, expected deaths are underestimated, producing an apparent excess of observed deaths. Black here refers to another 2016 study6 that examined attendances and admissions at all English A&E departments over an 11-month period. Similar numbers attended on weekdays and weekends, but significantly fewer attenders were admitted to hospital at weekends (27.5% versus 30%). Weekend admissions tended to come directly from the community, rather than via GPs, and were significantly sicker than weekday admissions. This means that a greater proportion of that smaller number admitted at weekends died within 30 days, not because of poorer care but because they were sicker (see the illustration below).
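To make the selection effect concrete, here is an invented example (my numbers, not the study’s): imagine 1,000 patients attend A&E on a Wednesday and 1,000 on a Saturday. On Wednesday, 300 are admitted, including a few dozen milder cases referred in by GPs who are very unlikely to die. On Saturday, only 275 are admitted, nearly all of them emergencies. Even if every patient receives identical care, the Saturday cohort contains a higher proportion of seriously ill people, so a higher percentage of Saturday admissions will die within 30 days. Compare crude death rates per admission and Saturday looks dangerous; compare equally sick patients and the “effect” vanishes.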

The last point has been confirmed by another 2016 study7 using a new scale of risk of dying based on seven physiological variables. Its authors found that patients admitted from A&E departments at weekends were sicker on average; after adjusting for this, they had no greater risk of dying than equally sick people admitted on weekdays.

So the weekend effect does not exist and nor do Hunt’s “11,000 extra deaths per year.” But how many extra deaths occur because of the government’s refusal to fund the NHS and social care adequately?

References

1 https://randomwalkinscience.wordpress.com/2016/02/16/lies-damned-lies-and-jeremy-hunts-statistics/ and http://www.workersliberty.org/node/26281

2 Black N. Higher Mortality in Weekend Admissions to the Hospital: True, False, or Uncertain? JAMA 2016;316(24):2593-4

3 http://www.bmj.com/content/340/bmj.c2066

4 https://www.ncbi.nlm.nih.gov/pubmed/27185754

5 https://www.ncbi.nlm.nih.gov/pubmed/27178477

6 http://journals.sagepub.com/doi/10.1177/1355819616649630

7 http://qjmed.oxfordjournals.org/content/early/2016/07/12/qjmed.hcw104