Return of the ozone!

Good news! The ozone hole is shrinking at last, a rare success for collective action in response to scientific evidence.1 Unfortunately, it will take until 2050 to return to its 1980 levels. This is because the chemicals largely responsible for its depletion are very stable and those already released will persist in the atmosphere until then, even if no more emissions take place.

It’s 30 years since the signing of the Montreal Protocol, which aimed to tackle the accelerating destruction of the ozone layer by chlorofluorocarbons (CFCs). Ozone in the stratosphere absorbs most of the Sun’s ultraviolet radiation (UVR); without it, life would be difficult or impossible except several metres below the surface of the oceans.

Ozone (O3) is made from oxygen (O2) by the action of UVR in the stratosphere. But for there to be oxygen in the stratosphere there first had to be oxygen in the lower atmosphere, and this only appeared when Earth was about half its present age, with the evolution of photosynthesis by bacteria in the oceans. These produced oxygen as a waste product, which gradually began to accumulate in the atmosphere. Ozone also began to accumulate, and by half a billion years ago it was absorbing enough UVR for the land to become habitable.

Scientists only became aware of these facts with:
  • the prediction and then discovery of different types of light (radiation) with different wavelengths;
  • the development of spectroscopy, the study of how matter absorbs and emits light; and
  • the understanding of how hot objects emit energy in the form of light.
These were mostly the result of curiosity-driven research.

It was realised that the Sun should emit radiation of different wavelengths in the proportions predicted for the spectrum of a “black body” of the same temperature (about 5500 degrees Celsius). Spectroscopy showed that it did, with the puzzling exception of the region of wavelengths shorter than 310 nanometres, just beyond the violet. In this region, the UV, the measured intensity was only about 1% of that predicted. This meant that about 99% of the Sun’s UVR was being absorbed by something, and an exhaustive search of likely chemical substances found that ozone was largely responsible.
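The black-body reasoning can be checked numerically. The sketch below (illustrative only; it uses a round 5800 kelvin for the Sun’s surface, roughly the 5500 degrees Celsius quoted above) integrates Planck’s law to estimate what fraction of the Sun’s output falls below 310 nanometres:

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck(wavelength_m, temp_k):
    """Spectral radiance of a black body (Planck's law)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    x = H * C / (wavelength_m * K * temp_k)
    return a / math.expm1(x)

def band_power(lo_nm, hi_nm, temp_k, steps=20000):
    """Trapezoidal integral of Planck's law between two wavelengths (nm)."""
    lo, hi = lo_nm * 1e-9, hi_nm * 1e-9
    dx = (hi - lo) / steps
    total = 0.5 * (planck(lo, temp_k) + planck(hi, temp_k))
    for i in range(1, steps):
        total += planck(lo + i * dx, temp_k)
    return total * dx

T_SUN = 5800.0  # ~5500 degrees Celsius, as in the text
uv = band_power(100, 310, T_SUN)          # the band absorbed by ozone
total = band_power(100, 100000, T_SUN)    # essentially the whole spectrum
print(f"Fraction of solar output below 310 nm: {uv / total:.1%}")
```

This comes out at a few per cent of the Sun’s total output: that is the intensity *predicted* for the UV band, of which, as the text notes, only about 1% was actually observed at ground level.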

The amount of ozone differs in different parts of the world and at different times of year, as does the intensity of UVR, so the amount of UVR reaching the ground is variable. In general, UVR is highest when the Sun is higher in the sky, i.e. in equatorial regions and during summer at higher latitudes.

The UVR that gets through can be damaging to life, including humans in whom it causes sunburn, cataracts, and potentially fatal skin cancers. Many humans have melanin pigment in their skin which can absorb UVR before damage can occur but lighter-skinned people in high-UVR regions are at risk. Australia and New Zealand have the highest rates of melanoma in the world. It was therefore alarming to learn in 1985 that there was a great hole in the ozone layer above Antarctica. However, the story started earlier.

Refrigerators use the evaporation and condensation of liquids to transfer heat from the contents to the outside (you may have noticed warmth from the back of a fridge). Early fridges used easily liquefied gases such as methyl chloride, ammonia or sulfur dioxide, but these were toxic if released. Chemist Thomas Midgley2 developed the efficient synthesis of chlorofluorocarbons (CFCs) around 1930 and proposed their use as safe refrigerants. CFCs are very unreactive which is excellent for a refrigerant. Midgley demonstrated their safety by inhaling some and blowing out a candle. However, if released when a fridge is damaged or scrapped, their very stability means that CFCs persist in the atmosphere, eventually reaching the stratosphere.

Here the problem starts: a CFC molecule such as Freon-12 (CCl2F2) is hit by a UV photon and a chlorine atom (Cl) is knocked out. If this collides with an ozone molecule, it grabs an oxygen atom to make a ClO molecule, leaving an ordinary oxygen molecule that doesn’t absorb UVR. The ClO collides with another ozone molecule, making more O2 and regenerating the original Cl atom…which can now repeat the process with more ozone. The Cl is thus a catalyst for the breakdown of ozone. Each cycle removes two ozone molecules and there can be thousands of cycles before the Cl atom collides with something else and the process stops.3
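The catalytic arithmetic can be sketched in code. In this toy simulation (the removal probability is an invented, illustrative figure), each full cycle destroys two ozone molecules, and the Cl atom has a small chance after each cycle of colliding with something else and leaving the cycle:

```python
import random

def ozone_destroyed_by_one_cl(p_removal=0.001, rng=random.Random(42)):
    """Toy model of the catalytic cycle:
         Step 1: Cl + O3 -> ClO + O2
         Step 2: ClO + O3 -> Cl + 2 O2
    Each full cycle destroys two ozone molecules; after each cycle the
    Cl atom has a small (illustrative) chance of being removed."""
    destroyed = 0
    while True:
        destroyed += 2  # one full cycle removes two O3 molecules
        if rng.random() < p_removal:
            return destroyed

# Averaged over many Cl atoms, the expected destruction is roughly
# 2 / p_removal -- i.e. thousands of ozone molecules per chlorine atom.
trials = [ozone_destroyed_by_one_cl() for _ in range(2000)]
print(round(sum(trials) / len(trials)))
```

The point of the sketch is that the total damage is set by how long the catalyst survives, not by the one-off release of a single chlorine atom.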

This was realised in the ‘70s but no-one knew if the effect was significant until the late Joe Farman and colleagues found a massive hole in the ozone layer above Antarctica. The levels had dropped by some 40% in about ten years. Farman had been measuring the levels for about five years, first fearing that his instruments were faulty. NASA had failed to detect the drop as its computer software was programmed to ignore “unusual” readings.

The clear threat was that, as thinning of the layer spread, organisms would be affected by the increased UVR, particularly UVB. This would affect plant growth, harm populations of plankton in the upper levels of the oceans, and cause increased skin cancers and cataracts. Australia would be the first to be affected, with potential epidemic levels of skin cancer.

Due to different weather patterns, the Arctic had not yet developed an ozone hole but would eventually if nothing changed as the amount had also declined. Farman published his results in 1985 and, despite the opposition of the chemicals industry, the Montreal Protocol phasing out CFCs was signed in 1987. Readers may be surprised to learn that Margaret Thatcher played a positive role in this.4

It will take a long time for the ozone layer to return to its original thickness. In the meantime, we must make sure that governments and businesses adhere to the Montreal Protocol. But there is another problem: CFCs are actually more potent “greenhouse” gases than carbon dioxide and some of their ozone-friendly replacements, such as hydrofluorocarbons (HFCs), are even worse. Phasing out CFCs has already reduced the rate of global warming. One option is to amend the Montreal Protocol to include HFCs (they are already in the Kyoto Protocol) but the alternatives also have their own problems. Propane/methylpropane mixtures are very effective refrigerants but are flammable (but then so is methane, piped to most houses in the UK).



2 Thomas Midgley had “form.” In 1921, he showed that tetraethyl lead when added to petrol prevented the damaging phenomenon of engine “knock.” Despite knowing of its toxicity (and taking a year off to recover from lead poisoning), Midgley insisted that it was safe. It was marketed as “Ethyl” with no mention of lead. Having initiated the poisoning of young brains for decades, Midgley then inadvertently initiated the destruction of the ozone layer through CFCs. Later he contracted polio and was partially paralysed. He invented a contraption to get him out of bed but became entangled in its ropes, dying from strangulation. It has been said that he “had more impact on the atmosphere than any other single organism in Earth’s history.”

3 Step 1: Cl + O3 → ClO + O2
Step 2: ClO + O3 → Cl + 2O2
Step 1 is now repeated with the Cl atom regenerated in Step 2, and so on thousands of times.

4 You won’t often hear a good word from me about Margaret Thatcher but arguably she was instrumental in the discovery of the ozone hole and in the subsequent Montreal protocol. Hardline monetarist and privatiser though she was, when it came to science she was not so dogmatically in favour of the free market. With a Chemistry degree and PhD, she understood the need for “blue skies” (curiosity-driven) research.5 This may have partly explained why she protected the funding of the British Antarctic Survey (for which Joe Farman was working when he detected the ozone hole) where her colleagues saw only wasteful public expenditure. She could also understand the scientific evidence about CFCs and supported the Montreal Protocol. She also supported UK’s membership of CERN and the establishment of the IPCC to research climate change.

5 See Margaret Thatcher’s influence on British science, by George Guise


The Google memo: there’s bias and then there’s bias

Google’s Ideological Echo Chamber: How bias clouds our thinking about diversity and inclusion, by James Damore1

James Damore, the recently (and perhaps unjustly) fired Google employee, criticises what he sees as the “left bias” of Google which has created a “politically correct monoculture” which “shames dissenters into silence.” This left bias translates as “Compassion for the weak; disparities are due to injustices; humans are inherently cooperative; change is good (unstable); open; idealist.” A right bias would hold views such as “Respect for the strong/authority; disparities are natural and just; humans are inherently competitive; change is dangerous; closed; pragmatic.”

Like all stereotypes, these caricatures have some elements of truth and Damore is keen to distance himself from both but in reality he comes down on one side.

Put simply, Google’s stated policy is to encourage groups which are under-represented in its current workforce to apply for jobs or promotion. These include:
  • women, around 50% of the population: 31% of Google overall; 20% in technical posts; 48% in non-technical posts (doubtless lower-paid); 25% in leadership positions;
  • Blacks (undefined, but presumably African Americans), 13.3% of the US population: 2% overall; 1% technical; 5% non-technical (lower-paid); 2% leadership; and
  • Hispanics, 17.6% of the US population: 4% overall; 3% technical; 5% non-technical; 2% leadership.2

To reiterate, there is a marked imbalance in the employment of Blacks and Hispanics in all areas of Google and of women in all but non-technical posts, relative to the US population. Damore chooses to focus his arguments on Google’s attempts to redress the balance for women. His arguments do not deal with ethnic or other minorities (except, curiously, conservatives) but his concluding suggestions do!3

He then produces a series of truisms and half-truths about male-female differences which he proposes as “possible non-bias causes of the gender gap in tech [i.e. software engineering].” He himself accepts that he is talking about averages and that there is a substantial overlap between the sexes so nothing can be deduced about any individual. He therefore sets a high bar if he expects these differences to account for a 20:80 split in tech jobs.2

Damore refers to biological differences that he claims are universal across cultures, highly heritable, linked to prenatal testosterone, and “exactly what we would predict from an evolutionary psychology perspective.” These include, he says, “openness directed towards feelings and aesthetics rather than ideas…a stronger interest in people rather than things…empathizing versus systemizing.” This may direct them towards social or artistic areas (why then are there more male composers and painters?). It is not clear how this makes women less suitable (on average) to code software programs (or men to be more suitable to be managers).

There is also “extraversion expressed as gregariousness rather than assertiveness.” Damore says this results in women being less likely to ask for raises, speak up, or lead. Google has tried to counter the reticence of women to put themselves forward for promotion. They sent an email to all engineers quoting studies showing that (1) girls tend not to raise their hands to answer maths problems, though they are more often right than boys; and (2) women tend not to volunteer ideas in business meetings, though their thoughts are often better than those of male colleagues; the email also reminded recipients that it was time to apply for promotion. Applications from women soared, and with greater success than for male engineers. It is not clear why Damore would object to this.4

Damore points to evidence that women show more “neuroticism” than men but his source (Wikipedia) points out that this concept is not well-defined. He also says that higher status is more likely to be a male goal, using the lack of women in top jobs as evidence (thus assuming what he set out to prove). Curiously, he sees the preponderance of men in dangerous jobs such as coal-mining, fire-fighting and garbage collection(!) as part of their drive for status.

What Damore does not reference is that cultural and individual sexism and misogyny discourage some (many?) girls and women from pursuing studies and careers in areas that have historically been denied to them or away from which they have been directed by peers, family or advisers. If girls and women were encouraged to see software development as something that was open to them, where they would be welcomed, but they still didn’t apply in equal numbers, then we could perhaps start looking for other explanations. The question of welcoming is crucial. If male employees disrespect or sexually harass them, women may not wish to stay.5 It is likely that, with encouragement at school and college, and with a non-discriminatory working environment, instead of 20:80, something approaching balance would be achieved: it might not be 50:50 – it might conceivably be 60:40 – who knows?

According to Wendy Hall, a computer science professor, there isn’t such an imbalance in several Asian countries, indicating cultural rather than biological influences on gender imbalance in US information technology companies.6 Professor Hall refers to a decrease in women on computer science courses in UK universities from 25% in 1978 to 10% in 1987. In the US, women’s participation in historically male-dominated fields such as medicine, law, physical sciences rose from about 10% in 1970 to between 40 and 50% in 2010; computer science followed the same trajectory from about 12% in 1970 to about 37% in 1985 but thereafter declined to around 18% in 2010 (from blogger Faruk Ateş).7 We have to look for other than biological explanations for these changes.

Ateş points out that many pioneers of computing and programming were women but that, from the late 1960s, women were actively discouraged from going into computing by professional organisations, ad campaigns, and by aptitude tests that favoured men. Stereotypes of computer programmers as awkward male nerds appeared in films in the 1980s. Ateş and Hall also refer to the marketing of video games on home computers, such as Sinclair and Amstrad, preferentially to boys in the 1980s, giving an impression that “technology is for boys, not girls.” Other scientists have also argued against Damore, including Angela Saini,8 and Erin Giglio.9

A number of scientists have weighed in on Damore’s side, claiming that his views are in line with research findings on sex differences. Thus males tend to be “thing-oriented” and females to be “people-oriented” and women’s and men’s interests tend to match job preferences. Therefore, we should expect imbalances in gender ratios for jobs. (The fact that “women’s” jobs tend to be paid less is just a massive coincidence.) One study asks subjects about their preferences for these jobs: “car mechanic, costume designer, builder, dance teacher, carpenter, school teacher, electrical engineer, florist, inventor(!), and social worker.” No doctor, lawyer, bus-driver, para-medic, politician, accountant…

A closer look at many jobs shows that their duties do not split easily into “thing-oriented” or “people-oriented,” being more of a mixture. Further, the proportions of men and women in some occupations have varied enormously over history: examples include physical labour occupations during wartime, or in other countries, and the medical profession from the 19th century, when women were banned, to now, when a majority of entrants to medical school are women.

What is disturbing is that these scientists choose to investigate sex differences to explain observed gender imbalances in occupations when we already have a perfectly good explanation – the different experiences of boys and girls. Boy and girl children are treated differently by their mothers and significant others right from birth and, even in the supposedly egalitarian societies of the West, sex roles and expectations are reinforced throughout childhood and beyond. It may be that the “natural” ratio in software engineering is not 50:50 but we will never know since we don’t have a Planet B for comparison.

It is also disturbing that the research itself does not clearly show many statistically significant differences between the sexes that are relevant to suitability for software engineering. For every study showing some effect (such as higher general intelligence (“g”) scores) in men, there is another not showing this. Further, where there are well-documented differences, for example in visuo-spatial skills such as mental rotation, these can be reduced or removed with training.

To Damore’s credit, he suggests ways to make software engineering more woman-friendly (making programming more people-oriented and collaborative, fostering cooperation, making tech and leadership jobs less stressful, offering more part-time work, and, intriguingly, freeing men from their current inflexible gender role, allowing them to become more “feminine”).

However, Damore incorrectly sees Google’s encouragement of applications from historically under-represented groups as discriminatory, failing to recognise that, even if women would not necessarily take up tech jobs in equal proportion to men, there is no reason other than discrimination (not just at Google) for Blacks and Hispanics to be seriously under-represented in Google as a whole and especially in tech and leadership jobs.3 In the absence of any better policies, his proposals would perpetuate the present unfair treatment of African Americans and other oppressed minorities.

There is bias in Google and in the job world in general but it’s against women and minorities, not against white men like James Damore.



3 Damore’s suggestions include “Stop restricting programs and classes to certain genders or races.” One programme cited is BOLD. Google states that “The BOLD Immersion program is open to all higher education students, and is committed to addressing diversity in our company and the technology industry. Students who are members of a group that is historically underrepresented in this field are encouraged to apply.” Another is CSSI. Google describes this as being for “graduating high school seniors with a passion for technology — especially students from historically underrepresented groups in the field.” It is odd that Damore interprets this as “restricting … to certain genders or races.” He also mentions Google’s Engineering Practicum intern programme which states that it is for “undergraduate students with a passion for technology—especially students from historically underrepresented groups including women, Native American, Black, Latino, Veteran and students with disabilities.” I suppose it is an occasion for rejoicing that Damore doesn’t oppose Google’s encouragement of veterans and people with disabilities to apply. To reiterate, this is in the context of only 2% of Google’s employees being Black (population average 13%) and 4% Hispanic (18% of population). [all emphases mine]

5 This survey reveals that 87% of female tech staff responding had experienced demeaning comments from colleagues and 60% had received unwanted sexual advances. Individual stories range from infuriating to sick-making.

8 Angela Saini, author of Inferior: How Science Got Women Wrong, deals with some of Damore’s points in The Guardian.

9 Erin Giglio, a PhD student in evolutionary biology and behaviour and a graduate in psychology and genetics (and blogger), cites peer-reviewed evidence contradicting Damore’s arguments.

Hunt debunked: there is no “weekend effect” in the NHS

The Tory Health Secretary, Jeremy Hunt, provoked the first ever strike by doctors in NHS England last year when he tried to force through a new contract for junior doctors that would have significantly worsened pay and conditions. He justified this on the spurious grounds that:

  • There was a weekend effect whereby patients admitted to hospital at weekends had a significantly higher risk of dying (the Department of Health (DH) published references to eight studies which were claimed to prove this);
  • Rectifying this effect required more junior doctors to work longer at weekends. This was supposed to be part of the government’s promise to introduce a “seven-day” NHS without any extra staff; and
  • This had to be achieved without costing any more.

Hunt’s use of the Tories’ supposed mandate to introduce a seven-day NHS is in itself thoroughly misleading. Hospitals have always operated throughout the week and both junior and senior doctors work at weekends. It is in primary care, GPs’ surgeries, that a five-day NHS operates, and experimental weekend GP services tend not to be much used by patients. But, even granting Hunt’s seven-day claim, is there actually a weekend effect, and are junior doctors’ hours a factor?

Previously, I showed that the DH’s eight studies on the weekend effect included only two independent pieces of work.1 Those studies showing a weekend effect did not try to explain it but suggested that a lack of senior doctors at weekends might be one factor: none referred to a role for junior doctors.

Since the DH’s publication of Hunt’s evidence, the DH itself admitted that it had no evidence that a seven-day NHS would have any effect on deaths or on time spent in hospital. Since the DH’s evidence also, curiously, showed a decreased rate of deaths at weekends, it is conceivable that things might get worse!

Using Hunt’s cited papers, I showed that greater illness among weekend admissions could completely account for increased mortality. Now Professor Sir Nick Black, an adviser to DH and NHS England, has blown Hunt’s case out of the water with more objections to the whole idea of a weekend effect.2

Black shows first of all, referring to his own work in 2010, that methods of calculating hospital death rates (Hospital Standardised Mortality Rates – HSMRs) were flawed.3 HSMR is the ratio of Observed Deaths to Expected Deaths. The observed deaths are not so easy to get wrong as it’s fairly obvious when a patient has died. However, the estimate of expected deaths can be more or less accurate, depending on the completeness of the information available about patients. Ideally, the ratio will be 1:1, i.e. expected deaths will be the same as actual deaths. But, an underestimate of expected deaths will produce an apparent excess of observed deaths, and questions will be asked.
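The sensitivity of the HSMR to the expected-deaths estimate can be shown with a toy calculation (the numbers below are invented purely for illustration):

```python
def hsmr(observed_deaths, expected_deaths):
    """Hospital Standardised Mortality Ratio: observed / expected."""
    return observed_deaths / expected_deaths

# Suppose a hospital records 300 deaths in a year.
# With complete case-mix data, the model might also expect ~300:
print(hsmr(300, 300))   # ratio 1.0 -> no apparent excess

# If co-morbidities are under-recorded (as Black argues happens for
# weekend admissions), the model underestimates expected deaths, and
# the very same 300 deaths now look like a 20% excess:
print(hsmr(300, 250))   # ratio 1.2 -> an apparent "weekend effect"
```

Nothing about the care changed between the two lines; only the denominator did, which is precisely Black’s objection.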

The obvious question, “Did we get our estimates right?”, does not seem to have occurred to Hunt and his advisers. Black describes three problems with the expected deaths calculation.

  • First, some patients’ conditions (morbidities) are miscoded. Black illustrates this with a study on stroke patients, published in May 2016 in the British Medical Journal but inexplicably missed by Hunt and his top medical adviser Professor Sir Bruce Keogh.4 This study found that stroke patients admitted as non-emergencies on weekdays (with lower risk of death) were frequently miscoded as new stroke patients (with a higher risk). Their lower actual rate of death resulted in weekend emergency stroke admissions having an apparently increased risk of death. When the coding was correct, the weekend effect disappeared!
  • Second, the particular characteristics of each case are not always accurately recorded as a result of delays in doing tests and this can affect estimates of survival, as well as actual survival! It might be expected, according to Hunt’s arguments, that this would be a problem at weekends. Black refers to another study of stroke patients, again published in 2016 in another top medical journal, The Lancet, and again inexplicably missed by Hunt and advisers.5 This study found no weekend effect when comparing the quality of health care associated with different days and times of admission. For your information, the worst time to be admitted was overnight on weekdays.
  • Third, patients often have co-morbidities (more than one thing wrong with them) and may not die of the condition for which they were admitted. Other conditions are less likely to be noted or rated for seriousness for weekend admissions, which tend to be emergencies. This is important since each condition should contribute to the estimated probability of an individual’s death. If some conditions are not recorded, the expected deaths are underestimated, producing an apparent excess of observed deaths. Black here refers to another 2016 study6 that examined attendances and admissions from all English A&E departments over an 11-month period. Similar numbers attended on weekdays and weekends but significantly fewer were admitted to hospital on weekends (27.5% versus 30%). Weekend admissions tended to be direct from the community, rather than via GPs, and were significantly sicker than weekday admissions. This means that a greater proportion of that smaller number admitted at weekends died within 30 days, not because of poorer care but because they were sicker.

The last point has been confirmed by another 2016 study7 using a new scale of risk of dying based on seven physiological variables. They found that patients admitted from A&E departments at weekends were sicker on average. After adjusting for this, they did not have a greater risk of dying than equally sick people admitted on weekdays.

So the weekend effect does not exist and nor do Hunt’s “11,000 extra deaths per year.” But how many extra deaths occur because of the government’s refusal to fund the NHS and social care adequately?



2 Black N. Higher Mortality in Weekend Admissions to the Hospital: True, False, or Uncertain? JAMA 2016;316(24):2593-4.






Don’t rule out nuclear power: a debate on the Left

These articles were published in Solidarity in 2011: an opinion piece by me (Don’t rule out nuclear power), a reply by Theo Simon (Don’t rule out workers’ power), and an answer by me (Why I support nuclear power (as one of a range of alternatives to fossil fuels)).

The intervening five years have not seen any great change in the rising trend of “greenhouse gas” emissions, though a treaty has recently been ratified that aims to limit warming to no more than 2 degrees Celsius, a rise still likely to cause major disruption.

Don’t rule out nuclear power

Our society is powered largely by burning fossil fuels. This is equivalent to living on our savings. Fossil fuels — oil, coal and gas — were laid down over a period of a hundred or so million years and we are using about a million years’ worth every year. Even if there were not the risk of climate change, we should be looking for alternatives.

Ultimately, we need to be aiming for complete renewability, but this will require some massive changes in human societies, and some enormous leaps forward in technology. Humans have never used any resources renewably (apart from a few insignificant exceptions).

The immediate alternatives to fossil fuels include wave, tide, wind, hydroelectric, geothermal, biomass, solar and nuclear power. All have their up and down sides but all can make some contribution, and it would be foolish to rule any out without strong reasons. That is just what many environmentalists do when they rule out nuclear power from the future energy mix. Can other sources suffice?

Recently, New Scientist looked at one scientist’s efforts to “do the math” (2 April 2011). Axel Kleidon, a physicist from the Max Planck Institute for Biogeochemistry in Germany, has calculated that building enough wind farms to replace fossil fuel-derived energy would actually remove a significant amount of energy from the atmosphere and alter rainfall, turbulence and the amount of solar radiation reaching the Earth’s surface.

Humans at present use some 47 terawatts (TW; a terawatt is a trillion watts, i.e. a trillion joules per second) of energy, of which 17 TW come from fossil fuels. The rest is made up of renewable sources, mainly harvesting farmed plants. This is only about one twenty-thousandth of the energy coming from the sun.

But the useful energy available to us is restricted by the laws of thermodynamics to what is termed the “free” energy, the rest being unusable heat. Kleidon calculates that we are using some 5-10% of the free energy, more than is used by all geological processes, such as earthquakes, volcanoes and tectonic plate movements! If we were to set up wind and wave farms with a theoretical output of 17 TW, we would find, first, that a lot of waste heat would be produced, contributing to global warming. We would also deplete the available energy in the atmosphere: Kleidon calculates that this could reduce the energy to be harnessed from the wind by a factor of 100.

There are other sources of energy but these have their drawbacks. Geothermal power stations rely on pumping water into hot rocks fractured by explosions, but experimental plants are losing unacceptable amounts of water underground so the outputs are lower than expected.

Solar electricity relies on rare elements such as indium and tellurium, which are projected to run out within decades. Cheaper versions of solar cells still require another rare element, selenium.

Solar heating, using large mirrors to focus the Sun’s rays to boil water and drive turbines, is a very promising technology but it is not clear that this could fill more than part of the gap. For one thing, the Sun does not shine so strongly (or at all) on many parts of the Earth or during many times in the year.

Is it wise to rule out nuclear power? Many eminent environmentalists are coming round to the view that it isn’t.

Mark Lynas, writing in the New Statesman shortly after the disaster at the Fukushima nuclear plant in Japan (21 March 2011), warned that a panicky abandonment of nuclear power would lead to catastrophic global warming, a far greater problem. He argues that renewable sources are not going to be able to fill the gap in energy for countries like Japan, certainly in the short to medium term, and they will simply increase their use of fossil fuels.

And long-time environmentalist George Monbiot (Guardian, 22 March 2011) called for a sense of perspective over Fukushima, with no deaths (apart from two killed at the plant by the tsunami), and over the enormous disruption of the landscape which would be necessary if renewables were to supply all of our energy needs. Not only would there be enormous areas devoted to onshore windfarms, but also increased networks of grid connections to get the electricity to where it was needed. Pumped storage facilities would also be required to hold the energy until it was needed.

Other options favoured by some involve reversing the pattern of industrialisation and moving people back into rural communities where power could be produced locally. Except, according to Monbiot, it couldn’t. In the UK, he says, generating solar power involves a “spectacular waste of scarce resources”, while wind power in populated areas is largely worthless, since we build in sheltered spots. And direct use of energy by damming rivers or harvesting wood would wreck the countryside.

One of the UK’s oldest environmentalist groups, Friends of the Earth (FoE), consistently opposes nuclear power. Its five-year-old report, Nuclear power, climate change and the Energy Review, raises the following objections.

Nuclear power is error-prone and likely to fail in ways dangerous to lots of people; it assists in the proliferation of nuclear weapons; it is vulnerable to terrorist attack; and it is in any case unnecessary for the complete replacement of fossil fuels in power generation and transport which FoE also calls for.

The claim is repeated that, though nuclear power generates electricity without releasing CO2, the extraction of uranium and the building of plant result in carbon emissions — as though this were a significant objection. Every current and proposed energy technology will result in carbon emissions, as the concrete, steel, etcetera, will have to be made using current fossil fuel resources. The point is that nuclear power will produce far lower emissions overall than the fossil fuel burning it replaces.

The Green Party uses many of the same arguments. Both the Greens and FoE give expense as an argument against new nuclear power, and yet the report the Greens cite states that the increased nuclear option would be the cheapest, while the no nuclear/all renewable option would be the most expensive (necessitating energy imports as well!). FoE’s own figures show nuclear power’s costs sitting right in the middle of all other energy sources.

Another problem identified is that of disposal of waste, including dangerous high-level waste. This has a solution — burial in geologically stable strata deep underground. The waste has to be inaccessible for about 100,000 years, but there are plenty of rock layers where movements of chemicals are measured in a few metres per million years (for example, the Oklo “natural” reactor in Gabon).

The problem of nuclear accidents was perhaps the most prominent criticism raised by FoE five years ago, and the accident at Fukushima would not diminish the shrillness of their alarms. Nowhere do FoE or the Greens even mention the possibility of improved safety features in current reactor designs — for instance, ones that rely on gravity to flood overheating reactor cores with water, rather than, as at Fukushima, on pumps whose electricity supply could be cut off by an earthquake.

Nowhere do they raise the need for new designs using thorium which are “fail-safe” and could be adapted to burn up the high level waste which is such a problem and has to be dealt with, whether we have nuclear power or not. And nuclear reactors even now are burning up “surplus” nuclear weapons.

The Labour Party’s “green wing”, the Socialist Environmental and Resources Association (SERA), does not differ from FoE and the Greens in opposing nuclear power, though they concentrate on problems of time and money. They ignore the fact that the delays are due to the political cowardice of Labour governments and refusals to support research into new reactor designs.

It is notable that the environmentalists seem to have stopped blaming nuclear power stations for clusters of childhood leukaemias (no link with any other form of illness has been found). Such clusters are in fact found in many places where workers and their families have moved from elsewhere and may be due to lack of resistance to locally occurring viruses.

If one hoped for an independent voice from the SWP, one would be disappointed. In a slightly revised update of a 2006 pamphlet, Martin Empson refers blithely to the cancers and other illnesses coming to the Fukushima clean-up workers “as with the Chernobyl disaster”. He is clearly unaware of the massive differences in the two cases and the absence of evidence of long-term harm in the unfortunate but brave Chernobyl workers who survived initial exposure to radiation.

He sets up the straw person who argues that nuclear power is “the only way that we can produce low carbon electricity” and repeats the irrelevant fact that some CO2 will be released in setting up reactors. He insists that “Fukushima shows that nuclear power is extremely dangerous”. He doesn’t recognise that the reactors survived one of the most powerful earthquakes and tsunamis recorded with minimal damage and would have been virtually problem-free had a fail-safe cooling system been installed — as should and could have happened.

He repeats the discredited allegations of clusters of leukaemias around nuclear plants. He rubbishes suggestions of as few as 4,000 excess deaths due to Chernobyl which came from a United Nations report in 2005, preferring another “independent” report which suggested some half a million deaths already(!). He seems unaware of the latest UN report which drastically reduces estimates of illness and death from Chernobyl. It states that 28 of 134 “liquidators” died of acute radiation sickness at the time and a further 19 have died since, but not of radiation-linked diseases. Fifteen of some 6,000 people who developed thyroid cancer have died (a problem which arose only because of the criminal negligence of the USSR authorities). No other deaths have definitely been attributed to radiation from Chernobyl. Professor Wade Allison, a radiation expert from Oxford University, argues that people’s natural defence mechanisms against radiation damage have been greatly under-estimated.

The environmentalists and the SWP appear to be unaware of the fact that fossil fuel extraction and use is thousands of times more dangerous than nuclear power.


Nuclear power, climate change and the Energy Review, Friends of the Earth 2005

Meeting the UK’s 2020 energy challenge: Do we need new nuclear?, Alan Whitehead MP, SERA January 2008

Climate Change: Why Nuclear Power is Not the Answer, Martin Empson, SWP 2006 (“updated” 2011)

Health effects due to radiation from the Chernobyl accident, UNSCEAR 2011

Radiation and Reason: The Impact of Science on a Culture of Fear, Wade Allison (ISBN 0-9562756-1-3, pub. 2009)

Don’t rule out workers’ power (by “Theo”)

Les, Your article seems to be based mainly on the arguments now being put forward by George Monbiot and Mark Lynas – both deservedly respected thinkers and researchers on climate change. Though you list the objections to Nuclear Power, you don’t even attempt to answer many of them, and on the issues of waste disposal, plant safety and cost, you repeat the fundamental mistake of Lynas, and (to a lesser extent) Monbiot, which is that you fail to see the reality of Nuclear Power within the context of a global capitalist economy. Astonishingly, you don’t even call for public ownership and democratic workers’ control of nuclear production.

Critically, you also fail to question the projected “energy gap” which is being used to justify Nuclear Power expansion as a necessary stop-gap to maintain our energy supply without catastrophically increasing CO2 emissions. And you don’t ask what is the best way forward for energy in the interests of the working class.

Capitalism is immensely – criminally – wasteful of the fossil fuel energy we are currently burning up as if there was no tomorrow. Insulation and energy conservation at every level could slash by a third our current consumption in Britain. Vast amounts are burned globally to power totally unnecessary production for manufactured consumerist needs, in order to generate private profit. The stuff that is produced is purposefully designed to break rather than to last or be easily repairable, leading to more energy being burned for repeat production. Personal travel and the transport of goods occur in a totally irrational way because of the demands of the competitive capitalist economy. Advertising, marketing, commercial lighting, stuff on standby out of work-hours etc, are also inherently massively wasteful. I dare say that they have occurred precisely because fossil fuels have been such a cheap source of energy in the past, but none of these aspects of Capitalism’s energy wasting are in the interests of working people, except in the immediate interest of providing jobs.

So far as work is concerned, how many jobs could be created in the insulation and conservation industries, premises conversion, public transport expansion etc that energy efficiency demands? And at what saving to working-class people?

It is true that people in the rest of the world will have a growing energy need over the next decades, but while they need and have a right to expect more, we in the western capitalist world could actually use a lot less with no drop in social well-being and an actual improvement in the living conditions of most working people. Monbiot and Lynas make the mistake of equating the energy needs of competitive capitalism with the rational needs of humanity, and I think this article does the same.

Monbiot and Lynas are ultra-aware (as are most other environmentalists, though you seem to question it) of the urgent need to cut CO2 emissions. It is so urgent that it raises a problem for revolutionaries as, with or without a socialist transformation, we need to deal with it now if the human species is going to survive. This is why the desperate measure of proliferating a hazardous technology seems necessary and acceptable to some climate-change analysts. But I think it betrays a class attitude which is not acceptable for socialists.

It concentrates more power and wealth, with massive public subsidies, into the hands – and behind the fences of – corporations with an appalling track-record. Nuclear Power by its very nature demands high security and centralised control, and in the present world that means also an inherent lack of the transparency and democratic accountability which are absolutely essential where hazardous industries are concerned.

Monbiot and Lynas play fast and loose with the safety of working people in their calculations – one nuclear accident has the potential to destroy the lives of hundreds if not thousands of workers and working-class communities – even if, as it appears, some radiation dangers have been miscalculated in the past. As Fukushima showed, the unthinkable can still be avoided, but only at an inconceivable public expense for the containment and clean-up, at the cost of wholesale evacuation and land contamination, and with the long-term health fears for everyone exposed. Even then it was touch and go.

Japanese Nuclear Power used to be heralded as the safest in the world, before the unthinkable happened. You say Fukushima “would have been virtually problem-free had a fail-safe cooling system been installed — as should and could have happened”. Ah, yes – If only capitalism hadn’t cut corners and disregarded safety, it would have been virtually (only virtually?) problem-free! Elsewhere you talk about “the possibility of improved safety features” as if a capitalist industry will go for the best and safest method rather than the cheapest it can get away with. But this is one industry where shoddy workmanship means potentially mass disaster.

You don’t deal with the proliferation argument at all. But you can’t advocate nuclear power expansion in one country without it being for all countries, however unstable or tyrannical they are.
And again, you then have to take responsibility for how it will actually be developed in those countries and how much that increases the hazard of nuclear accidents occurring which are a threat to people everywhere, and particularly to the workers living near them.

You seem to have single-handedly dealt with the waste problem, so my great grandchildren will thank you for that! You also play down renewables alarmingly. If you are so confident that designers can improve nuclear design, why don’t you have the same faith in workers in the renewables sector to devise better ways of harnessing the sun’s power directly and indirectly? What’s wrong with the Europe-wide supergrid idea, using new conducting technologies, integrated renewable energy generation on a continental and local scale, and massive solar harvesting in Northern Africa etc?
Monbiot and Lynas have no faith in the ability of the international working class to take control of the situation and transform production. But most of the international working class will have little faith in our new-nuclear saviours, especially with Fukushima still steaming away. We need to come forward with a strong and uncompromising socialist programme for energy and cutting emissions, not give any more energy to this divisive and hazardous distraction.

Why I support nuclear power as one of a range of alternatives to fossil fuels

Back in the 70s, like many on the left, I was alarmed by what seemed to be the cover-up of the risks of nuclear power in the 50s and 60s. The indiscriminate power of nuclear weapons to kill in large numbers also marked many on the left with a fear of nuclear energy. But, as Maynard Keynes put it, “when the facts change, I change my mind”.

We only have one planet and it is overwhelmingly likely that “we” (or greedy capitalists, if you like) are altering its climate for the worse by returning carbon dioxide to the atmosphere a million times faster than it was originally locked away in fossil fuels. And, despite attempts to reduce carbon emissions, these are actually rising … by over 5% last year, from 29.0 to 30.6 gigatonnes (Gt or billion tonnes).
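As a quick arithmetic check, the figures quoted above do indeed give a rise of just over 5%; a trivial sketch:

```python
# Check the emissions figures quoted above (29.0 -> 30.6 Gt of CO2).
prev, curr = 29.0, 30.6           # gigatonnes of CO2
rise = (curr - prev) / prev       # fractional increase year on year
print(f"rise: {rise:.1%}")        # comes out at about 5.5%, i.e. "over 5%"
```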

And, of the 13.7 Gt released by electricity generation, 11.2 Gt is “fixed” for the foreseeable future, since it will come from existing or planned fossil fuel power stations that will be operating in 2020.

The closure or cancellation of nuclear power stations makes this much worse, since these are the main proven alternative source of electricity. Countries which have reacted to recent scares, rather than evidence, include Japan, Germany, Malaysia, Thailand, Italy and Switzerland.

Truthfully, the potential risks of radiation are massively exaggerated by anti-nuclear groups in comparison with the actual risks of the fossil fuel industry to workers and the public. In particular, the environmental risks of radiation are minimal — wildlife is flourishing in the exclusion zone round Chernobyl and, as James Lovelock has pointed out, in the atom bomb test sites in the Pacific.

Furthermore, the difficulties of replacing nuclear power, let alone the whole fossil fuel industry, with renewables are minimised (see my article in Solidarity 203, 11 May 2011).

It is said by Theo Simon (Letters, Solidarity 204, 18 May) that “nuclear power demands high security and central control”, as if these were necessarily bad.

Central control would anyway be needed to construct tens of thousands of wind turbines, on- and offshore, and the new supergrid of thousands of kilometres which would be needed to get the electricity to the cities. Already, proposals to introduce new systems of pylons have provoked mass protests in Wales, Scotland, Somerset and the West Midlands. And putting cables underground would be ten times more expensive.

Apparently, I fail “to question the projected ‘energy gap’ which is being used to justify nuclear power expansion”. The argument goes that, if the most wide-ranging programme of insulation and energy conservation is undertaken world-wide (the like of which has never been seen), then the electricity generated by nuclear power would not be needed. As the Spartans once said in a different context, if!

Once again, let’s look at the reality of nuclear power. The worst accident of all time, Chernobyl, has killed 43 people. This was due to the criminal negligence of the USSR police state. 28 workers were fatally irradiated while bringing the reactor under control. 15 young people died of thyroid cancer, entirely avoidable had the bureaucrats issued potassium iodide tablets (as was done promptly in Japan recently). Other estimates of potential deaths range from 9,000 to 900,000 but even the lowest of these seems to be way too high. So far, no other deaths have been proved to be due to the Chernobyl disaster.

As Wade Allison (author of Radiation and Reason) states, the ability of living tissue to repair radiation damage has been wildly underestimated. In radiation treatment of cancers, healthy tissues receive up to five times the fatal dose of radiation but spread over several weeks, during which time they efficiently repair the damage.

Many accidents have occurred in nuclear power plants. In those resulting in radiation leaks, there have been … no deaths or even injuries among the public. A few workers have died, usually because they were close to the incident. Otherwise, nuclear workers are healthier than the general population. A 2% increased risk of cancers linked to radiation is dwarfed by a 24% decreased risk of death from other cancers, according to a Canadian study. It also found that nuclear workers lived longer than average. And this under capitalism!

I am accused of listing the objections to nuclear power but not attempting to answer many of them. In particular, in the areas of waste disposal, plant safety and cost, I fail to “see the reality of nuclear power within the context of a global capitalist economy”. Trading content-free accusations, I might accuse others of failing to see the reality of renewable energy within the context etc. etc.

Of course, I did deal with plant safety and waste disposal. A recent Physics World (May 2011) shows that more modern designs would have survived both the Japanese earthquake and tsunami. These include better back-up generators and containment for molten fuel in case of a meltdown, and passive (i.e. not depending on a power supply) emergency cooling, operated by gas pressure or gravity. In fact, modifications to the Fukushima model to reduce radiation leaks in case of an accident were proposed by scientists 30 years ago but rejected as too expensive. Meanwhile, other similar power plants survived the earthquake and tsunami undamaged.

On radioactive waste, I said that deep storage in stable strata was perfectly plausible. Reprocessing would reduce the amount and feed back fuel to nuclear plants. The relevance of the “global capitalist economy” to this is not clear, except that they won’t pay for it. In any case, the danger of waste has been greatly overstated. Five metres of concrete would absorb all the radiation from anything. Wade Allison “would be perfectly happy” to have high-level waste buried 100 metres below his house, while James Lovelock has “offered to take the full output of a nuclear power station in my back yard.”

Alternatives to fossil fuels consist of two proven technologies, nuclear and hydroelectric power (HEP), a host of promising but unproven ones, and the mirage (at present) of a vast reduction in energy demand.

All have environmental and/or health implications. HEP requires vast dams flooding arable land and wildlife habitats, disrupting river ecosystems, destroying estuarine fisheries, reducing the fertility of flood plains, and endangering lives in case of collapse.

The Three Gorges dam in China necessitated flooding 1000 towns and villages, and “removing” 1.4 million people. Since completion in 2006, the reservoir has been plagued by pollution and algae. The dam is silting up, while the extra weight of water is causing geological problems. Downstream, the reduction in flow has led to a drought affecting 300,000 people, with drinking water reservoirs containing only “dead water.” Shipping can no longer use large stretches of the river. It is worrying that Switzerland is phasing out the nuclear power that provides 40% of its electricity, replacing it with HEP.

It is also worrying that Germany, the sixth biggest emitter of carbon dioxide, is phasing out nuclear power, increasing carbon emissions by 3%. If it can afford to do without the electricity from its nuclear plants, it would be better to keep them open while closing down an equivalent number of fossil fuel plants, cutting CO2 emissions proportionately.

In Japan, phasing out nuclear power will cause massive shortfalls in energy. The optimistic scenarios of Energy-Rich Japan (ERJ) all involve substantial reductions in demand (so far untested), while some involve reductions in population — by up to 20%! Since an increase will be needed in order to care for the ageing population, this seems particularly unrealistic.

In particular, ERJ claims that transport energy can be reduced by 70% with hydrogen-powered vehicles. They don’t mention the following problems.

1 Hydrogen is inefficiently produced from fossil fuels; solar-powered electrolysis of water is even more expensive.

2 Highly flammable hydrogen must be stored in pressurised tanks, no doubt to be released in traffic accidents.

3 A new infra-structure for hydrogen supply would have to be built, “a matter for policy decisions and market forces” (ERJ) (!?).

4 Fuel cells to “burn” the hydrogen use costly platinum catalysts which can be poisoned by impurities in the hydrogen or in the air supply, which is also needed; their reliability over long periods is unknown; they would easily freeze in cold weather; and they would be a magnet for thieves.

5 Incidentally, ERJ assumes that much of the hydrogen would be imported (from where?).

Other aspects of ERJ’s schemes are equally vague. Much geothermal energy would be needed, though this technology is notoriously unreliable. Curiously, nowhere in 250-plus pages is there a mention of earthquakes or tsunamis!

It is difficult to avoid James Lovelock’s conclusion that “only nuclear power can now [my emphasis] halt global warming” — but this is not to accept nuclear power as it is. The possibility of fail-safe thorium-powered reactors is ignored not only by the (capitalist) industry which will not or cannot afford the research costs but by the Left and environmentalists. Supported by eminent scientists such as Carlo Rubbia (former head of CERN), thorium reactors do not have a chain reaction to go out of control. They rely on a stream of neutrons from a particle accelerator which could be instantly switched off. Using plentiful thorium, they can also “burn” other radioactive materials, including surplus bombs … and high level radioactive waste. Radioactive material decays into stable isotopes, usually lead. Plutonium takes about 100,000 years to reduce to 1/20 of its original amount. Thorium reactors accelerate this process greatly (Accelerated Transmutation of Waste), reducing the volume of waste and the time for which it would have to be kept safe.
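The 100,000-year figure can be checked against the standard half-life formula. A sketch, assuming the isotope meant is plutonium-239 with a half-life of roughly 24,100 years (a figure supplied here, not given in the text); on that assumption the “about 1/20” claim holds to within rounding:

```python
# Exponential decay: N(t) = N0 * 0.5 ** (t / half_life).
# Assumption: the isotope is Pu-239, half-life ~24,100 years.
half_life = 24_100                          # years
t = 100_000                                 # years
fraction = 0.5 ** (t / half_life)           # fraction of Pu remaining
print(f"after {t} years: {fraction:.3f} remaining (~1/{round(1 / fraction)})")
```

The result, roughly 1/18 remaining, is consistent with the text’s “about 1/20”.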

A final point: Theo accuses me of ignoring the “proliferation argument”, which he seems to equate with the simple possession of nuclear power. There are many difficult steps to building nuclear weapons and it is clear that these have not proliferated anything like as fast as civil nuclear power. More of a problem is terrorism and here too it is not clear that nuclear power plants are uniquely vulnerable and dangerous targets. More importantly, many conflicts are, and will be increasingly, over resources, particularly as the climate changes. Nuclear bombs won’t be much use in these!

Yet more deaths in the UK fossil fuel industry (four workers killed in a Welsh oil refinery explosion in March 2011; five coal miners killed in Wales and Yorkshire in September) should help put the supposed dangers of nuclear power in perspective. Multiply these figures by at least 1,000 worldwide. According to Environmentalists for Nuclear Energy, environmental opposition to nuclear energy is the “greatest misunderstanding and mistake of the century”. We should be demanding that nuclear power be expanded and improved, rather than phased out.

But let’s demand the safest forms of nuclear power, as well as support for renewable energy research.

Prescription opioids are the opium of the people

The 2016 World Congress on Pain (WCP), meeting in Yokohama in late September, held a packed Special Session on Opioids. The theme was their role in pain medicine. This might seem fairly settled since the analgesic properties of opium have been known for at least 3000 years. Not so!

The scene was set by eminent pain specialist Jane Ballantyne, president of Physicians for Responsible Opioid Prescribing and adviser to the US Centers for Disease Control and Prevention (CDC). She described how over the last 25 years sales of prescription opioids have soared, as have emergency admissions and deaths. In the US, some 1 in 5 patients with chronic non-cancer pain (CNCP) are prescribed opioids; since 1999, sales of prescription opioids have quadrupled; between 1999 and 2014, over 165,000 people died from overdose related to prescription opioids; more than 14,000 died this way in 2014, at least half of all opioid overdose deaths; nearly 2 million Americans abused or were dependent on prescription opioids in 2014, a quarter of those taking prescription opioids; over 1,000 people are treated in emergency departments for misusing prescription opioids every day.1

Eighty per cent of opioid prescriptions worldwide are in the US, with just 5% of the population.2 This is not because Americans are suffering more pain: it is the product of drug companies “educating” physicians and patients, together with a production line model of health care. How has it come to this and will the problem spread? Drug companies would no doubt like to increase their opioid sales. This is a gigantic problem without an obvious solution. The new CDC Guidelines on Prescription Opioids1 may prevent the worsening of the situation but rolling back such a tide of addiction to legal drugs will not be easy.
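To see what those shares imply per head of population, a small worked calculation using only the figures given above (80% of prescriptions, 5% of population):

```python
# What the 80%/5% shares imply for per-capita prescription rates.
us_share_rx, us_share_pop = 0.80, 0.05      # US share of prescriptions, population
rest_rx = 1 - us_share_rx                   # rest of world's share of prescriptions
rest_pop = 1 - us_share_pop                 # rest of world's share of population
ratio = (us_share_rx / us_share_pop) / (rest_rx / rest_pop)
print(f"US per-capita rate is ~{ratio:.0f}x the rest of the world")
```

On these figures the US per-capita prescription rate comes out at roughly 76 times that of the rest of the world combined.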

The history of medical opioid use

The opium poppy, Papaver somniferum, has been known at least since Neolithic times (and perhaps even by Neanderthal people) and was widely cultivated and used in ancient Egypt, Sumer, Greece and so on. Morphine was isolated from opium in the 19th Century and this allowed safer dosing, since the amount being dispensed could be accurately measured. Later, derivatives of morphine or compounds with similar actions, such as heroin, methadone, pethidine, oxycodone, hydrocodone and fentanyl, were developed. These, the opioids, are mainly used for anaesthesia in operations (pethidine, fentanyl), for pain relief during childbirth (pethidine), and post-operative pain (often morphine). Morphine is supplied to US and British soldiers for use if injured on the field of battle. Opioids were also used as a cough suppressant (e.g. Codeine Linctus) and to treat diarrhoea (e.g. Collis Browne’s or Kaolin & Morphine).

Since the opioids efficiently suppress the acute pain of injury or operation, a wholly desirable outcome, one might wonder why they are so tightly controlled or even banned around the world. One reason is that the therapeutic dose is fairly close to the toxic dose: they suppress the breathing reflex and an overdose stops the victim breathing. As Paracelsus said, “The dose makes the poison,” and for heroin the Therapeutic Index (TI: the ratio of the toxic dose to the effective dose) is 25:1. This is a problem for recreational heroin users who don’t know the purity of the drug they are taking.
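A rough sketch of why unknown purity matters, given that 25:1 ratio. The purity figures below are hypothetical, purely for illustration:

```python
# Why variable street purity is dangerous with a therapeutic index of 25:1.
# The purity values here are hypothetical illustrations, not data.
TI = 25                         # toxic dose / effective dose, as quoted above
usual_purity = 0.05             # user has calibrated their dose to 5% purity
strong_batch = 0.90             # an unusually pure batch appears
multiple = strong_batch / usual_purity      # effective dose vs. intended dose
print(f"dose is ~{multiple:.0f}x the intended amount; toxic threshold is {TI}x")
```

A user accustomed to a weak supply who encounters a much purer batch can approach the toxic multiple without taking any “extra” dose at all.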

Another reason is that, if the patient takes opioids over a long period, they develop a tolerance to the drugs: the amount needed to achieve the desired effect slowly increases and can reach levels that would be instantly fatal to a new patient.

The main reason for controlling or banning opioids is that they are very addictive. This is less of a problem for those taking them, as they should, for short periods to deal with acute pain or to deal with pain associated with some terminal cancers. But, for those taking them for chronic pain or to experience the euphoric effect found with larger non-therapeutic doses, dependence or addiction can result, as well as side effects such as constipation, breathing problems in sleep, heart problems, suppressed immune systems, more bone fractures (perhaps because of dizziness and slower reactions), and disruption of hormone systems (including sex hormones). There is also, paradoxically, increased sensitivity to pain in a significant proportion of chronic opioid users.

For most of the time that opium has been known, it has been legal in most of the world, if rather frowned upon when used recreationally. Indeed, the British authorities allowed opium sale in India and imposed it by force in China in the Opium Wars. Sales of laudanum (tincture of opium and alcohol) in Britain were legal though regulated from 1868. Gradually, particularly in the first half of the 20th Century, opium and its derivatives became illegal unless prescribed by a doctor. Following the International Opium Convention in 1912, drug control was incorporated into the Treaty of Versailles in 1919, and the League of Nations signatories agreed to prohibit trade in narcotics except for medical uses. Laws have become stricter and the “war on drugs” has escalated so that many countries now impose stiff penalties, up to execution, for possession and sale of opioids. Except in terms of job creation, this war has not succeeded.

The problem of too few and of too many opioids

Like other wars, this one has caused collateral damage with the legitimate medical use of opiates, especially in palliative care of cancer patients, being restricted unnecessarily. The WCP Special Session on Opioids3 heard from an Indian pain specialist that in half of the world opioids were not available to alleviate unbearable suffering. In her own country, opioids were theoretically available but legal restrictions made doctors afraid to prescribe them for fear of falling foul of the criminal law.

It was in the USA, however, that the situation was the most bizarre. Alongside serious jail terms for mere possession of opioids, the drug companies had successfully argued from about 1980 that opioid prescriptions should be allowed for patients with chronic (long-term) non-cancer pain. It was argued that this would not result in dependence problems since only a small percentage of patients had hitherto become addicted to prescription opioids. This went against medical advice that they be used only for acute pain or for cancer pain, especially in those with a terminal diagnosis.

The epidemic started in 1995 when the US Food and Drug Administration (FDA) approved the opioid painkiller OxyContin (oxycodone). Its manufacturer Purdue Pharma sold $45 million’s worth of OxyContin in 1996, $1.1 billion in 2000, $3.1 billion in 2010, some 30% of the painkiller market. It achieved this by aggressive advertising and targeting doctors already prescribing a lot of painkillers. The result has been a large number of people addicted to OxyContin and as many deaths as occur with illegal use of opioids. The opioid-paracetamol mixture Vicodin (containing hydrocodone) is involved in opioid dependence but also in deaths from paracetamol overdose.

This is at present almost entirely a US problem, with 80% of the world’s opioid consumption, legal and illegal, taking place in the USA. Most of the users are poor whites in areas like the Appalachians: hence its nickname “hillbilly heroin.” The historic pattern of under-treatment of pain in Afro-Americans due to racist assumptions has ironically largely spared them from the opioid epidemic.

A related problem is deaths from heroin overdose which have nearly tripled in 12 years, exceeding 10,500 in 2014. The number of addicts has doubled in that time, with the vast majority of new users being people who had previously misused prescription opioids. What to do? In a “shutting the stable door” move, the CDC have issued a new guideline for prescribing opioids for chronic pain, emphasising non-opioid treatments, low dosages, and following up patients to check that opioids are having the desired effect or to help them taper off the drugs. This sounds a very labour-intensive policy and one wonders how this would work in the US health system.

Other private health systems will be prone to the problems of prescription opioids but so too may public health systems: already it is reported that prescription opioid use has increased four-fold in 10 years in Australia, while Canada, Germany, Austria, Switzerland, Belgium, Netherlands and Denmark are starting to catch up with the USA. Other European countries and New Zealand also seem to be increasing prescription opioid use. With the USA, these countries account for 96% of prescription opioids used worldwide with just 15% of the population. We should remember, however, that this is also a problem for the 85% who may need access to prescription opioids but can’t get them.

Information: How opioids work

Endogenous opioids (e.g. endorphin and enkephalins) have been found in all animals where they have been looked for, such as the very simple flatworms, as well as nematode worms, annelids, molluscs, crustaceans and insects, and all vertebrates. One major purpose may be to suppress pain when the priority is escape but endogenous opioids are involved in many other systems, such as the gut, and in social behaviours and in reward systems in the brain. They work by binding to opioid receptors,* of which there are at least four types, found in different tissues and causing different effects.

Morphine (and to a lesser extent codeine) is produced by the opium poppy as part of its defence mechanism against damage. Entirely fortuitously, morphine binds strongly to opioid receptors and activates them, resulting in relief of pain, euphoria (in the reward systems), inhibition of gut movement (resulting in constipation), suppression of the cough reflex, and depression of the breathing reflex (risking cessation of breathing). Codeine itself has little direct effect but is broken down by liver enzymes into morphine and other metabolites. People lacking these enzymes get no benefit from codeine.

Repeated use of morphine, or its derivatives such as heroin, reduces the body’s natural production of endogenous opioids, encouraging increased doses and resulting in withdrawal (abstinence) syndrome, an exaggeration of the opposite effects to those caused by morphine. This makes the original problem worse, which is why opioids should only be used for short periods. Interestingly, there are some compounds, such as naloxone, which bind to opioid receptors even more strongly than morphine but do not activate them. These are opioid antagonists and can be used to reverse opioid poisoning since they rapidly displace opioids from the receptors without activating them.

Some opioids do not activate all receptor types. These partial agonists, such as Tramadol and buprenorphine, have been suggested as safer alternatives, with the latter being used to treat opioid dependence. When I was on a placement with Reckitt’s in the late 1970s, we were told that healthy volunteers taking buprenorphine for long periods had withdrawal symptoms when the drug was stopped but that they preferred these to the side effects of taking the drug.** Nevertheless, buprenorphine is abused by some people, as is Tramadol.


*Hans Kosterlitz developed the first bioassay for the opioids (allegedly after a dream!). This consisted of a length of guinea-pig intestine whose electrically-stimulated contractions were inhibited by certain concentrations of morphine. Other potential opioids could be checked against this to assess their potency. Kosterlitz and his colleagues predicted the existence of a naturally-occurring opioid in mammals and this was confirmed when some mashed-up pig brain was added to the saline solution bathing the intestine and its contractions were duly inhibited. Kosterlitz was a refugee from Nazi Germany who settled in Aberdeen; his son Michael, now based in the USA, has just jointly won the 2016 Nobel Prize in Physics: both are marvellous advertisements for the benefits of migration. Michael has said that he is considering renouncing British citizenship if Brexit goes ahead.

**We were also told by a senior scientist that there were no serious health effects from long-term use of (prescription) opioids, apart from addiction. We now know that there are health effects and that long-term use of opioids does not solve the problem for which they are prescribed. It is still better to legally supply addicts than to criminalise them but the task of weaning them from their drugs is a difficult one.


Pain in dinosaurs: what’s the evidence?

I recently presented this poster at the International Association for the Study of Pain’s World Congress in Yokohama, 26-30 Sep 2016.

I am pleased to say that it generated a lot of interest. I believe that it helps push home the message that pain behaviour has evolved as animal life has evolved and many pain behaviours are conserved. This realisation may help to understand such seemingly inexplicable and harmful phenomena as chronic pain.

My poster partner, Amanda Williams, is developing a theoretical understanding of chronic pain: see her recent topical review, “What can evolutionary theory tell us about chronic pain?”, in the IASP journal Pain (April 2016; 157(4): 788–90; doi: 10.1097/j.pain.0000000000000464).


“Against stupidity, the gods themselves struggle in vain” (Goethe): The story of banning “legal highs”

Towards the end of January, “mostly supine” MPs passed a bill after a “clueless debate.” The Psychoactive Substances Act which is intended to ban “legal highs” (novel psychoactive substances – NPSs) is “one of the stupidest, most dangerous and unscientific pieces of drugs legislation ever conceived.” “Watching MPs debate…it was clear most didn’t have a clue. They misunderstood medical evidence, mispronounced drug names, and generally floundered. It would have been funny except lives and liberty were on the line.”

Not my words but those of an editorial in New Scientist (30 Jan 2016) and a report by Clare Wilson. The act came into force on 26 May, meaning that previously legal “head shops” must cease selling NPSs. The banned drugs will only be available from illegal drug dealers.

The story starts with the panic about “legal highs,” chemicals with similar effects on mood to banned drugs such as ecstasy, cocaine or speed, hence the term “psychoactive.” They were not covered by existing drug laws, which banned named compounds but not new ones with similar effects.

If history tells us anything, it is that humans take drugs. Sometimes, these drugs cause harm to those who take them or to society in general. Banning specific drugs makes their use more dangerous. A logical approach would be to reduce the harm by controlling purity, taxing their sale, and educating users instead of criminalising them. Drug users would prefer not to break the law, providing a considerable incentive to synthesise new drugs that mimic banned drugs but aren’t on the banned list. But these new drugs will have unknown side effects and there is no control on dose and purity. In contrast, the effects of many “traditional” drugs are known.

The rationale for banning NPSs was that they were dangerous. Legal highs were mentioned in coroners’ reports for only 76 deaths from 2004 to 2013 (Office for National Statistics). Despite the government’s banning of NPSs as fast as it could, the number of mentions was increasing (23 in 2013). Reliable data are extremely difficult to obtain and mere mention of a drug in a coroner’s report is not evidence that the drug caused the death.

As each NPS was banned, more were synthesised. There were 24 NPSs in 2009 and 81 in 2013, making the government’s actions futile, so some bright spark came up with the idea of banning the production and supply of all substances which produce “a psychoactive effect in a person … by stimulating or depressing the person’s central nervous system [thus affecting] the person’s mental functioning or emotional state.” A bill was proposed by the new Conservative government and specified that anyone producing or supplying (but not merely possessing for personal use) the previously legal NPSs could be sent to prison for up to seven years.

The proposal soon ran into problems. Firstly, what is meant by stimulating or depressing the central nervous system? Secondly, what constitutes an effect on a person’s mental function or emotional state? Thirdly, how could it be proved that any suspected substance was psychoactive? After all, placebos can be psychoactive. Fourthly, what about alcohol, nicotine, caffeine, many medicines, and foodstuffs such as nutmeg and betel nut (or, in my case, cake)? Finally, would bona fide scientific research on psychoactive substances be outlawed?

Criticism poured in from scientists. Respected medical researchers said the bill was “poorly drafted, unethical in principle, unenforceable in practice, and likely to constitute a real danger to the freedom and well-being of the nation” (letter to The Times). The Royal Society, the Academy of Medical Sciences, the Wellcome Trust, and others wrote to Home Secretary Theresa May that “Many types of important research could potentially be affected by the Bill, particularly in the field of neuroscience, where substances with psychoactive properties are important tools in helping scientists to understand a variety of phenomena, including consciousness, memory, addiction and mental illness.”

Even the government’s Advisory Council on the Misuse of Drugs (ACMD), more in line with politicians’ wishes since the shameful “firing” of Professor David Nutt (see box), produced a list of objections. The government’s omission of the word “novel” made the bill apply to a vast number of other substances in addition to legal highs. It would be impossible to list all exemptions, so benign substances, such as some herbal remedies, might be inadvertently included. Also, proving that a substance was psychoactive would require unethical human testing, since laboratory tests might not stand up in court.

The government changed the bill to exempt scientific research but otherwise remained obdurate. An example of the inevitable confusion concerns alkyl nitrites (poppers). Known since 1844 and used to treat heart problems, they have a short-acting psychoactive effect and are generally safe. However, the government referred to several non-specific risks and claimed that poppers had been “mentioned” in 20 death certificates since 1993 (far fewer than for lightning). After a Conservative MP, himself a user, appealed for poppers not to be included, the government said it would consider the arguments later.

Another example concerns nitrous oxide (laughing gas), included in the ban despite its long history of use in medicine and recreationally. Discovered in 1772, laughing gas was greatly enjoyed by Sir Humphry Davy and friends, including the poet Shelley. It has an impressive safety record and has been used in dental and childbirth anaesthesia and sedation since 1844.* Nevertheless, the government referred to “the harms” of recreational laughing gas and included it in the bill. In fact, the deaths “caused” by nitrous oxide result from incorrect methods of inhalation which could be eliminated by education.

The Act was finally implemented on 26 May. Independent expert David Nutt described the government’s policy as “pathologically negative and thoughtless.” He predicts that deaths from drugs will increase as people turn to illegal drug dealers in the absence of legal “head shops.” Einstein defined insanity as “doing the same thing over and over again and expecting different results.” This just about sums up successive governments’ policies towards drugs.**


**But not all drugs. Nicotine and alcohol are legal, despite their addiction potential, toxicity, and role in causing accidents. See, for example, …

Labour’s problems with scientific evidence

Tories don’t have a monopoly on cluelessness. Expert neuroscientist Professor David Nutt was “sacked” from his position as chair of the Advisory Council on the Misuse of Drugs by the right-wing press’s favourite Labour politician, former Home Secretary Alan Johnson. This was after Nutt showed that cannabis, then being upgraded to Class B (the same as codeine, ketamine, mephedrone or speed), was less harmful than alcohol or tobacco. This wasn’t an ordinary sacking since Prof Nutt gave his time and expertise freely, believing that it was important to present the evidence to improve the quality of the debate. Three members of the ACMD resigned in protest.

Nutt stated in a lecture to fellow academics that the evidence showed that cannabis was less harmful than alcohol and tobacco. Johnson called this “campaigning against government policy” and “starting a debate in the national media without prior notification to my department.” Johnson was then accused of misleading MPs since Prof Nutt had given prior notice of the content of his lecture and no journalists were invited. Further, as an unpaid advisor, Nutt was not subject to the same rules as civil servants. Other ACMD members who resigned said that they “did not have trust” in the way the government would use the ACMD’s advice and that Johnson’s decision was “unduly based on media and political pressure.”

Shamefully, PM Gordon Brown backed Nutt’s removal, saying that the government could not afford to send “mixed messages” on drugs. Both Brown and Johnson (some people’s favourite to replace Jeremy Corbyn) were quite happy to send the wrong message.

Supported by other scientists, Nutt was awarded the John Maddox Prize for standing up for science by the pro-evidence charity Sense About Science. The government subsequently accepted a new ministerial code allowing for academic freedom and independence for advisers, with proper consideration of their advice. Under this, Nutt would not have been dismissed.

Nutt now works with DrugScience.

Body by Darwin

How evolution shapes our health and transforms medicine*

Book review

What has the theory of evolution to offer to modern medicine? Evolutionary insights are rarely used by medical practitioners when treating our cancers, fertility problems, allergies, dementias and so on. Jeremy Taylor’s book gives many examples of where evolution helps explain our modern patterns of disease and suggests new strategies for treatment.

Taylor starts with some encouraging facts. Since our hunter-gatherer past, mortality has decreased enormously so that life expectancy in several countries exceeds 80 years. Painless and sterile surgery, effective drugs, public health measures, vaccines, and organ transplants, among others, have transformed medicine from a fairly futile and even harmful practice into something approaching a science. So why, asks Taylor, do so many people suffer from autoimmune diseases (rheumatoid arthritis, multiple sclerosis, Type 1 diabetes, inflammatory bowel disease etc.), allergies (like eczema and asthma), heart disease, eye problems, bad backs, appendicitis, reproductive problems, cancers, mental illness, and dementia?

How can evolution have allowed this to happen? The problems are obvious to us but evolution is “blind and witless.” Like a politician, it focuses on the immediate problem: how genes are to be passed on. The problem for evolution to solve is not health but reproduction. Evolution selects for traits that in principle can lead to immortality – for genes! To genes, bodies are vehicles to get them safely into the next generation. It’s no surprise, then, that evolution has not eradicated disease from our bodies, particularly after reproductive age, but evolution also has something to do with the types of disease we get.

Taylor looks first, in “Absent friends”, at allergies and autoimmune diseases, which are becoming more common in richer countries. For instance, childhood diabetes is 200 times more common in the UK than in China. For most of our evolutionary history, humans have lived with parasites and micro-organisms (ticks, worms, protozoa, bacteria and viruses), in close proximity to domestic animals, with polluted water supplies, and in dirty dusty dwellings. This caused much poor health but our immune systems evolved with this background. In advanced societies, these parasites are no longer present and our immune systems are not challenged at an early age. According to this hygiene (or “old friends”) hypothesis, the immune system is “damped down” by early and frequent exposure to antigens: in these more hygienic times, it responds with inappropriate strength to harmless stimuli such as peanut proteins, grass pollen, or gluten. Autoimmune diseases, like Type 1 diabetes, coeliac disease and multiple sclerosis, seem linked to reduced exposure to bacteria in childhood.

There is evidence that deliberately infecting MS sufferers with parasitic worms alleviates their symptoms. Intriguingly, a case of severe autism seems to have been ameliorated by infection with parasitic worms or attacks by biting mites. Taylor quotes the example of an American boy, Lawrence, with a severe form of autism which led him to become very agitated and violently harm himself. His parents noticed that his symptoms seemed to go away when he had a fever. When Lawrence was older, his parents reluctantly agreed to put him into permanent care but, when he was attending a specialised summer camp, they were called by the staff to say that he was behaving … normally! It seemed that he had been severely bitten by chiggers, a type of mite found in grasslands and forests, and the powerful immune response this provoked had led to a total remission of his symptoms, which returned when the reaction had subsided. To cut a long story short, Lawrence was eventually deliberately infected with an intestinal parasite, the pig whipworm, and his symptoms completely vanished. This type of treatment has also been used on some sufferers from Crohn’s disease, an autoimmune disease of the bowel.

The make-up of one’s gut bacteria (microbiota) seems to be an important factor in some autoimmune conditions. Babies delivered by Caesarean section or not breast-fed do not acquire normal gut microbiota, while treatment with antibiotics can upset it. One answer lies in administering the correct bacteria after birth or via a “faecal transplant” later in life. Infection with parasites seems to work for several conditions but doesn’t seem a very nice idea, so perhaps people could be treated with an extract of parasite antigens. It has also been found that early exposure to peanut proteins drastically cuts the chance of peanut allergy in children.

In “A fine romance,” Taylor attacks infertility and diseases of pregnancy from the standpoint of evolution. Why do some women have many miscarriages, and why do some pregnant women get life-threatening pre-eclampsia, often leading to premature birth? Reproduction is rather inefficient in humans: only about a fifth of ovulations (with unprotected intercourse) result in a successful pregnancy. Some 30% of fertilised eggs fail to implant and another 30% are lost during the first six weeks. About 10% miscarry before 12 weeks. Of pregnant women, some 10% develop diabetes and another 10% very high blood pressure. This can lead to kidney and liver damage (pre-eclampsia) and then to seizures and convulsions (eclampsia). In 2013, 29,000 women died worldwide from pre-eclampsia. The treatment is induction of birth or Caesarean section.
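If the loss figures above are read as successive, independent stages (an assumption on my part; the book quotes them as separate figures), they compound as a simple product. A minimal sketch:

```python
# Hedged arithmetic sketch: reading the quoted losses as successive stages.
implant   = 1 - 0.30   # 30% of fertilised eggs fail to implant
six_weeks = 1 - 0.30   # another 30% lost during the first six weeks
twelve_wk = 1 - 0.10   # about 10% miscarry before 12 weeks

surviving = implant * six_weeks * twelve_wk
print(round(surviving, 3))  # -> 0.441
```

On this reading, well under half of fertilised eggs survive to 12 weeks, consistent with the book's point that the mother's body weeds out all but the most viable embryos.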

The evolutionary setting for all this is that the foetus carries not only the mother’s genes but those of the father. The investment of the mother’s resources in one baby is set off against the investment necessary in all the future babies she can have during her reproductive life. On the other hand, the foetus, carrying the genes of the father as well, is vitally interested in getting the maximum investment of resources from the mother, even if that detracts from the interests of the mother’s future offspring. The mother’s body will tend to weed out any but the most viable embryos (hence the massive loss of embryos in the first twelve weeks).

The occurrence of pre-eclampsia is increased when pregnancy occurs quickly in a new relationship, and this relates to another puzzle: how and why does the mother’s body tolerate the presence of a foetus with a substantial proportion of “foreign” antigens that would normally lead to rejection? It seems that, in the course of a longer relationship, the mother’s immune system becomes habituated to and tolerant of the father’s antigens. This points towards a version of the “old friends” hypothesis in which pre-eclampsia is a sort of inappropriately strong immune response to the father’s antigens.

It is a commonplace that our upright bipedal stance, while freeing our hands for a variety of tasks, puts strain on our backs resulting in an increasing prevalence of back pain as we age. In fact, the Global Burden of Disease 2010, looking at 291 conditions, ranked low back pain worst for years lived with disability (approaching 10% of the world’s population). We also suffer from bunions, varicose veins, haemorrhoids, hernias, hip and knee problems.

Our bipedality, unique among mammals (including our closest relatives), has allowed our brains to expand but, since evolution doesn’t have a plan, this could not have been the reason for its evolution. Taylor points out, in “The downside of upright,” that there must have been an overriding reason for bipedality which outweighed the downside. Early fossils of hominids close to the split with chimpanzees’ ancestors are adapted for bipedalism and climbing trees, with opposable big toes. True bipedalism allows for a more energy-efficient two-legged gait and would have enabled our ancestors to expand their foraging territory from forests to savanna. Their hands would have been free to make and use tools and carry food. We are also adapted for running long distances, unlike our closest relatives, allowing our ancestors to run down prey by simply exhausting it. Taylor’s explanation for the biped’s health problems involves our different lifestyles – standing for long periods, sitting on chairs (rather than squatting or sitting on the ground), less physical activity, even our use of footwear when we do run, protecting our feet but jarring our joints.

The eye is often cited by creationists as so complex that it could only have been created by a supernatural being. In fact, all variations between light-sensitive patches and the primate eye with colour vision are found in nature, Taylor shows in “DIY eye.” Even some bacteria can focus light and use this to direct their movements. Calculations show that eyes can evolve in a virtual “blink of the eye” in the time since life evolved. Genetics shows that the eye only evolved once and that all subsequent eyes are modifications of this. It has been claimed that its layout, with the nerve cells in front of the retina, is a fault that evolution was unable to avoid, given the way the eye evolved. Taylor cites evidence that this layout is actually more advantageous to smaller animals since it maximises the distance from the lens to the retina, allowing more precise vision.

Our acute eyesight requires a high concentration of photoreceptor cells in the fovea, the centre of the retina where light is focused. Taylor thinks that this puts a lot of pressure on the blood supply and that this results in some people losing vision through macular degeneration in later life. This is an evolutionary trade-off for the benefit of having acute eyesight when young.

Next, in “Hopeful monsters,” Taylor explains to us “why cancer is almost impossible to cure.” Part of the problem is that, having mutated to become cancerous, the cells keep mutating. They form clones that compete with each other (and with normal cells) for food and oxygen. Unlike normal cells, cancer cells are “immortal,” achieving this through six steps, ending up as spreading or metastasising cancers. This is evolution in miniature: the first step is mutating to produce their own growth signals and the last is developing the ability to break away from a tumour and travel around the body. The other aspect of evolution and cancer cells is the development of drug-resistant cancer cells (as with antibiotic-resistant bacteria).

Heart disease is the major killer in the West: in “A problem with the plumbing,” Taylor explains how the evolution of the coronary arteries makes heart attacks more likely. Heart muscle needs oxygen but how is it to be supplied? Paradoxically, the oxygen-rich blood pumped by the heart passes through too quickly. Most vertebrates have coronary arteries to supply the heart with oxygen. These are very narrow and are prone to become narrowed even further by atherosclerosis, the formation of layers of plaque. When the muscle contracts, the branches of the coronary arteries are squeezed shut: they can only fill during relaxation. With increased exercise, the more rapid contractions reduce the time for the arteries to refill. If these are obstructed by plaque, the muscle is starved of oxygen causing pain (angina) and long-term damage. Pieces of plaque can break off, causing a blockage: the muscle supplied by that branch dies – a heart attack. Heart disease is usually attributed to lifestyle and diet but Taylor draws attention to another factor, the immune system. People who have had tonsils or appendixes removed in childhood are much more prone to heart attacks. This is due to disrupted development of the immune system (and shows that the appendix has some purpose). Also, people with autoimmune diseases are more prone to atherosclerosis (the “old friends” hypothesis).

Finally, in “Three score years – and then?” Taylor tackles Alzheimer’s disease. He convincingly argues that the focus on amyloid protein tangles in the brain cells of sufferers is misplaced. These are actually a symptom, rather than the cause, the underlying problem being inflammation. The genetic component of Alzheimer’s involves genes connected with the immune system. It seems that regularly taking anti-inflammatory drugs like aspirin or ibuprofen can delay onset. It may also be that a viral brain infection can cause the problematic inflammation. One candidate is Herpes simplex (cold sore) virus (HSV), found in 90% of people. Infecting the lips, it enters the trigeminal nerve and shelters from the immune system in the nerve ganglion, inside the skull next to the brain. Triggered by immune decline or by another infection, reactivated HSV can migrate back to the lips to cause more cold sores…or perhaps into the brain to start causing the changes in Alzheimer’s disease. Though not proven, this may indicate that Alzheimer’s is a consequence late in life, after genes have been passed on to offspring, of the early development of the nervous system which enables our success.

Jeremy Taylor has produced a meticulously detailed account of part of the growing field of evolutionary medicine which is going to affect treatments more and more.

Note: Taylor has produced science programmes for television, including The Blind Watchmaker and Nice Guys Finish First with Richard Dawkins for the BBC. His first book was Not a Chimp: the hunt to find the genes that make us human.

*University of Chicago Press, London (2015). £21.00 Hbk. ISBN 978-0-226-05988-4 (also e-book).

Headstrong – 52 Women Who Changed Science and the World*


Women are notoriously under-represented in science but the situation seems worse because such women scientists as there are tend to be misunderstood, misinterpreted, under-rated or ignored. Out of the 52 in Rachel Swaby’s book, the general reader might only have heard of Mary Anning (fossil hunter), Rachel Carson (author of Silent Spring), Rosalind Franklin (the “dark lady of DNA,” played by Nicole Kidman in the West End play, Photograph 51), Ada Lovelace (Byron’s daughter and pioneer of computing), Florence Nightingale (famed for nursing in the Crimean war), and Hedy Lamarr (celebrated actress, less known as an inventor). Swaby deliberately omits Marie Curie who has received substantial coverage (though there can never be enough about this double Nobel prizewinner, in my opinion).

In the past, it was difficult for women to gain an education or to carry on with their studies or work when they married. I will just mention a few of the 52, choosing those of earlier times or who are known for other activities.

Maria Sibylla Merian (1647-1717) became interested in insects as a child in Frankfurt. At thirteen, she was bringing up a colony of silkworms, taking notes and painting the stages in their life cycle. At a time when the metamorphosis from caterpillar to moth was not understood, Merian observed and painted insects throughout their lives, showing them in their habitats. These illustrations were published in her groundbreaking book Der Raupen wunderbarer Verwandlung (The Wondrous Transformation of Caterpillars) in 1679.

At 52, she set off for Surinam with her children on a very early example of a purely scientific expedition to collect and study the insects of plantations and jungle alike. The result was Metamorphosis insectorum Surinamensium, with 60 exquisite copperplate engravings of insects and other animals on leaves and branches, crawling, flying, eating, unfurling proboscises, attacking each other…

Her work was admired by Goethe and used by Linnaeus in developing his classification of living things.

Mary Anning (1799-1847) was a child of a poor family which gained extra income by selling fossils from the cliffs of Lyme Regis to tourists. Mary learnt her father’s fossil-hunting trade at ten and, after his death, carried on with her brother Joseph. The family usually found fossil shellfish, but one day Joseph noticed part of a skull protruding from the rock. This was the head of an ichthyosaur and Mary unearthed the rest of it. This, the first example of its kind, was sold for £23, a considerable sum. In her early 20s, Mary took over the business, going out in winter (the best time for the cliff falls that exposed new fossils) with just her dog. She discovered the first plesiosaur skeleton and the first pterosaur found in Britain.

Her discoveries were evidence for extinction of species, which contradicted the notion that God’s creation was perfect. Furthermore, there seemed to have been an age when the dominant animals were reptiles. Her knowledge of fossils and geology was extensive and yet, because she was a working-class woman, gentleman geologists tended to gain the credit by writing about her discoveries. In time she came to be treated as a fellow scientist, gaining the respect of geologists William Buckland, Charles Lyell and Roderick Murchison, and of the Swiss palaeontologist Louis Agassiz.

Never well off, she was helped by her scientific colleagues selling specimens and drawings on her behalf. Eventually she was awarded a civil list pension by the government. When she became ill with breast cancer, the Geological Society (which had earlier refused her membership as a woman) raised money for her and, after her death aged 47, paid for a stained glass window in her local church. Charles Dickens wrote of her life in 1865, ending his article with “The carpenter’s daughter has won a name for herself, and has deserved to win it.” In 2010, the Royal Society included Anning in a list of the ten British women who have most influenced the history of science.

Émilie du Châtelet (1706-49) is largely known as a lover and intellectual companion of Voltaire but she was instrumental in introducing Newton’s ideas to France. Born rich (which always helps) but mainly self-taught, she followed a conventional path for the time until, aged 27 and expecting her second child, she became interested in mathematics, studying Descartes’s geometry and engaging talented tutors who introduced her to Newton’s work. At 32, she entered the French Royal Academy of Sciences essay competition on the nature of fire (i.e., heat), in which she predicted what we now know as infra-red radiation: her entry was highly praised and published by the academy.

She then published Institutions de Physique (Foundations of Physics), a state-of-the-art textbook in which she not only put forward Newton’s theories but improved on them. When this was attacked by the secretary of the academy as being the unsound ideas of a fickle and weak-minded woman, she refuted each of his criticisms and sent her response to all members of the academy. The secretary resigned soon after.

Her experimental work confirmed that the kinetic energy of an object was proportional to its speed squared (Newton had not discussed this, focusing rather on momentum). Her greatest achievement was her translation (from Latin) of and commentary on Newton’s Principia. It remains the standard French translation. Days after completing it, she died, aged 42, after giving birth to her fourth child.
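In modern notation (my gloss, not the book's), the contrast is between Newton's momentum and the quantity du Châtelet's experiments supported, Leibniz's vis viva; the factor of one half in today's definition of kinetic energy came later:

```latex
p = mv \quad\text{(momentum)}
\qquad\qquad
E_k \propto mv^2 \quad\text{(vis viva; today } E_k = \tfrac{1}{2}mv^2\text{)}
```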

Florence Nightingale (1820-1910) is famous for her innovations in nursing but is arguably one of the founders of evidence-based medicine. Gathering data on causes of death among British soldiers in Scutari, she devised a method of displaying her statistics in a visual form, the polar-area diagram (essentially a circular bar-chart or histogram). The diagram is composed of wedges, one for each month, whose area is proportional to the total deaths. The wedges were subdivided in proportion to causes of death – wounds, infections, or other. She was able to show that death rates declined as sanitary methods improved. The government soon established a Statistical Branch of the Army Medical Department.
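The geometry of the polar-area diagram can be sketched in a few lines. This is an illustrative sketch only: `wedge_radius` is a name I have invented, and the death counts are made up.

```python
import math

def wedge_radius(deaths, n_wedges=12):
    """Radius of a polar-area wedge whose AREA is proportional to the count.

    Each of n_wedges wedges spans an angle of 2*pi/n_wedges, so its area
    is (pi/n_wedges) * r**2. Setting that area equal to the death count
    gives r = sqrt(n_wedges * deaths / pi).
    """
    return math.sqrt(n_wedges * deaths / math.pi)

# Two months with a 4:1 ratio of deaths differ in radius by only 2:1 --
# the point of the design: area, not radius, encodes the data.
r_high = wedge_radius(400)
r_low = wedge_radius(100)
print(round(r_high / r_low, 2))  # -> 2.0
```

This is why the diagram reads more honestly than one in which the radius itself were proportional to deaths, which would exaggerate large counts.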

Later, she devised statistical forms for hospitals to gather data on their patients’ progress. She became the first woman member of the Royal Statistical Society in 1858.

Emmy Noether (1882-1935) was a mathematical genius who succeeded despite the active obstruction of the authorities, whether of universities, the Prussian state or the Nazis. For eight years, she worked at the University of Erlangen, unpaid, developing the theory of invariants, supervising PhD students, publishing several papers and lecturing on behalf of her professor father, whose health was deteriorating. In 1915, she was invited by two of the world’s greatest mathematicians, David Hilbert and Felix Klein, to work on General Relativity at the University of Göttingen, but she was refused employment after protests by those who thought it inappropriate to have men taught by a woman. With Hilbert’s support, she worked for several years, until 1923, unpaid. Here she proved her first theorem, Noether’s Theorem, which states that for every continuous symmetry of a physical system there is a corresponding conservation law. This solved a problem with General Relativity, which had seemed to violate the Law of Conservation of Energy. It has been said that this theorem is on a par with Pythagoras’ Theorem in importance.

Despite her brilliant achievements in pure mathematics and physics, she was the first professor at Göttingen to be sacked under the Nazis’ anti-Jewish laws. She carried on tutoring illegally, even to Nazi students, but soon was found a job at Bryn Mawr College in the US. She died two years later after surgery for an ovarian cyst. Shortly before her death, Norbert Wiener described her as “the greatest woman mathematician who has ever lived; and the greatest woman scientist of any sort now living,” while Einstein said after her death “Fräulein Noether was the most significant creative mathematical genius thus far produced since the higher education of women began.”

Hedy Lamarr (1914-2000), better known as an Austrian-American film actor, was in the US when war broke out. Incensed by her erstwhile compatriots’ torpedoing of ships carrying civilians, she wanted to help the Allied effort. US torpedoes in 1942 had a 60% failure rate, largely due to the inability to guide them. This could be improved by radio transmissions from ship to torpedo, but these could easily be jammed by the enemy. Interested since childhood in machines and how they worked, Lamarr and a composer friend, George Antheil, worked on the idea of frequency hopping: synchronised frequency changes programmed into both transmitter and receiver, making the signal effectively impossible to jam before the torpedo struck. They patented their idea and reported it to the US government, who immediately classified it as secret. Unfortunately, for various reasons, the idea was not used in wartime. However, it had a wider applicability and is used in such areas as wireless cash registers, bar code readers, Wi-Fi, Bluetooth, and GPS. Hedy Lamarr was awarded the Electronic Frontier Foundation’s Pioneer Award in 1997.

This is a very readable book and all the women chosen are fascinating characters. Each is worth following up in more detail. I include a few illustrations of the work of the women mentioned in this review. They are:

1 From Merian’s Metamorphosis insectorum Surinamensium (1705)

2 Drawing of plesiosaur found by Anning (from Transactions of the Geological Society of London).

3 A version of one of Nightingale’s polar-area diagrams.

4 Du Châtelet’s essay on the nature of fire.

5 Emmy Noether sometimes discussed abstract algebra by postcard. This is addressed to Ernst Fischer at her former university of Erlangen.

6 Hedy Lamarr’s secret communication system patent.

*By Rachel Swaby. Broadway Books, 2015: ISBN 9780553446791


Where were you when gravitational waves were detected?

About a billion years ago, a billion light-years away, two black holes collided. Out of their total mass of some 60 times that of our Sun, about three solar masses were converted into energy. The amount of energy thus released can be calculated with Albert Einstein’s famous 1905 equation, E = mc² (energy = mass × speed of light squared).1

The resulting disturbance in space-time was detected on 14 September 2015 as gravitational waves, whose existence Einstein had predicted 100 years previously; the detection was announced (after rigorous checking) on 11 February 2016. This is yet another confirmation of Einstein’s General Theory of Relativity.

Following on from his 1905 Special Theory of Relativity, Einstein extended his theory to include gravity, publishing his findings in his General Theory in 1915. The Special Theory stated that: the speed of light in vacuum was a constant and could not be exceeded; space and time were aspects of each other and should be called space-time; for fast-moving objects, time passes more slowly, lengths are decreased, and mass is increased; and light from approaching or receding objects is blue-shifted or red-shifted respectively. All of these are contrary to “common sense” and yet have all been verified.

In his General Theory, Einstein says that: mass distorts (curves) space-time; light heading towards or away from massive objects is blue-shifted or red-shifted respectively; time passes more slowly close to massive bodies; and moving masses produce ripples or waves in space-time. It also led to the prediction of black holes.

The first success of the theory was that it explained the anomalous orbit of Mercury, an ellipse whose major axis moves round gradually by a greater amount than predicted by Newton’s theory. This results from the curvature of space-time by the mass of the Sun, altering the geometry of the orbit from that in a perfectly flat space-time. This ‘precession’ of Mercury’s orbit had been a long-standing anomaly in Newtonian physics. The theory also predicted the bending of light from distant stars passing close by the Sun; when this was observed during the total solar eclipse of 1919, it was a major triumph for the theory and made Einstein famous.

The gravitational red- or blue-shift was demonstrated about 50 years ago; gravitational time dilation was shown over 40 years ago and is corrected for continually in GPS systems and other orbiting satellites to keep them synchronised with Earth-based clocks. Black holes have been adequately demonstrated more than once.
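The size of the GPS correction is easy to estimate from the weak-field formulas. Here is a back-of-envelope sketch of my own (using textbook values for Earth’s gravitational parameter and the GPS orbital radius; these figures are not from the article):

```python
# Estimate of the net clock drift for a GPS satellite. Clocks in orbit run
# fast relative to ground clocks: the gravitational gain (weaker gravity
# aloft) outweighs the special-relativistic slowing (orbital speed).
GM = 3.986004e14       # Earth's gravitational parameter, m^3/s^2
c = 2.998e8            # speed of light, m/s
r_earth = 6.371e6      # Earth's mean radius, m
r_gps = 2.6571e7       # GPS orbital radius, m (about 20,200 km altitude)
day = 86400            # seconds per day

# Weak-field general relativity: fractional rate gain of the higher clock.
gr_gain = GM / c**2 * (1 / r_earth - 1 / r_gps) * day
# Special relativity: fractional slowing from orbital speed, v^2 / 2c^2.
v = (GM / r_gps) ** 0.5
sr_loss = v**2 / (2 * c**2) * day

net_us = (gr_gain - sr_loss) * 1e6   # net drift, microseconds per day
print(f"GPS clocks gain about {net_us:.1f} microseconds per day")
```

The result is a gain of roughly 38 microseconds per day, which is exactly the correction built into the GPS system.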

In 1916, Einstein realised that if a mass moves, the distortion in space-time should also move, spreading out like ripples on a pond: these ripples in space-time are gravitational waves. These should also be detectable because they would cause changes in length of objects in their path, a rhythmic squashing and stretching at right angles to the direction of the waves. The first gravitational wave detecting system was built nearly 50 years ago but was unsuccessful. This is because the effect of the waves is so small that the apparatus was nowhere near sensitive enough. Indirect evidence of gravitational waves was obtained some 40 years ago by observing binary pulsars. These were found to be spiralling in towards each other as predicted if they were radiating away energy in the form of gravitational waves.2

The recent detection of gravitational waves was by LIGO (Laser Interferometer Gravitational-wave Observatory) detectors in Hanford, Washington, and Livingston, Louisiana, in the US. These consist of two 4-km-long vacuum tubes at right angles. Laser beams are split, sent down each branch, reflected back and forth 400 times, recombined, and detected. The beams interfere3 with each other and, if gravitational waves arrive and change the length of one arm more than the other, the interference pattern will change.

Unfortunately, the change in length is predicted to be less than a millionth of the width of an atom, even if the mass involved is large. The waves detected came from the movement of a very large amount of mass, the collision of two black holes of total mass about 60 times that of our Sun. In this, about three solar masses were converted to gravitational wave energy in about a fifth of a second.1 In the signals detected, this is seen as rapidly increasing oscillations which then cease as the two black holes merge into one larger one. This is the same “shape” as the waves of a ‘chirp’ sound.4 The observations also showed that gravitational waves travel at the speed of light and that the graviton (the particle associated with gravitational waves) must be massless, like photons of light.
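To get a feel for the scales involved, here is a rough sketch (illustrative round numbers of my own, not the observatory’s published figures): a typical strain of about one part in 10²¹ acting on 4-km arms, with the light folded back and forth about 400 times.

```python
# Rough scale of the LIGO measurement with illustrative numbers.
h = 1e-21        # typical gravitational-wave strain amplitude
arm = 4000.0     # arm length, metres
bounces = 400    # round trips made by the laser light

delta_L = h * arm                # raw change in arm length, metres
effective = delta_L * bounces    # effective optical path change, metres
atom = 1e-10                     # rough diameter of an atom, metres

print(f"arm length change: {delta_L:.1e} m "
      f"(about {delta_L / atom:.0e} of an atom's width)")
```

With these numbers the raw arm-length change is a few billionths of a billionth of a metre, well under a millionth of an atom’s width; the 400 reflections multiply the effective path change the laser actually sees.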

Now this has been achieved, more and better gravitational wave detectors will be designed. Some will be space-based, with much longer distances so that they will be more sensitive. Other LIGO detectors in different countries will enable us to pinpoint more accurately where the waves are coming from. It may be possible to use pulsars (very regularly pulsing stars) as gravitational wave detectors by observing slight delays in their signals caused by the passage of waves. Since gravitational waves can pass through everything, while light can’t, we will be able to “see” hitherto invisible regions of the universe (like the centre of our galaxy). Different types of gravitational waves are produced by different events, such as stars being swallowed by black holes, neutron stars spiralling into each other, or even relic waves from the ‘big bang,’ giving us a source of information about that in addition to the cosmic microwave background.

Government bean-counters (and not just them) constantly question the value of basic, curiosity-driven, ‘blue skies’ research. Why can’t the money be spent on more practical things or given back to taxpayers? This ignores the importance and interest of knowledge about us and our universe. Why should we only know what is of value to our employers? It also ignores the spin-offs of basic research, some of which have transformed our lives. These include transistors, lasers, LEDs, nuclear power, computers, microwave ovens, accurate GPS, X-ray machines, the structure of DNA, MRI scanners, proton beam cancer therapy, genetic engineering, DNA fingerprinting…and the internet!

1 This sounds like a lot and it is (6 × 10³⁰ kg × (3 × 10⁸ m/s)²). But remember The Hitch-hiker’s Guide to the Galaxy: “Space is big. Really big. You just won’t believe how vastly, hugely, mindbogglingly big it is.” My back-of-an-envelope calculations show that, if this energy is spread out across a sphere of radius 1.3 billion light-years, the energy reaching us is about 1 milliwatt per metre squared. I don’t know how reliable this calculation is as the first time I did it I got a number 10 billion times smaller!

2 The Earth is spiralling in towards the Sun through gravitational wave radiation but will take 10 trillion times the current age of the universe to hit the Sun. Don’t panic!

3 The light waves meet and recombine. The apparatus is designed so that they arrive out of step and destructively interfere, leaving darkness! If a gravitational wave arrives, altering the length of the arms differently, the light beams will start to interfere constructively and some light will appear.
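The arithmetic in footnote 1 can be checked directly. This short sketch of my own reproduces the back-of-envelope numbers: three solar masses converted to energy via E = mc², radiated in about a fifth of a second, and spread over a sphere of radius 1.3 billion light-years.

```python
import math

# Reproducing the footnote's back-of-envelope calculation.
c = 3.0e8            # speed of light, m/s
m = 6.0e30           # ~3 solar masses, kg
ly = 9.46e15         # metres per light-year
r = 1.3e9 * ly       # radius of the sphere, m
duration = 0.2       # emission time, s

E = m * c**2                           # ~5.4e47 joules
power = E / duration                   # watts at the source
flux = power / (4 * math.pi * r**2)    # watts per square metre at Earth

print(f"energy: {E:.1e} J, "
      f"flux at Earth: {flux * 1e3:.1f} mW per square metre")
```

The result comes out at roughly 1.4 milliwatts per square metre, consistent with the footnote’s “about 1 milliwatt per metre squared.”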


Excellent explanations of gravitational waves and their discovery: