Climate change

June 23, 2011

What price of Indian independence? Greenpeace under the spotlight

Filed under: Emissions Reduction, Energy Demand, Global Warming — buildeco @ 1:56 pm
Two PWRs under construction in Kudamkulam, India

Guest Post by Geoff Russell. Geoff is a mathematician and computer programmer and is a member of Animal Liberation SA. His recently published book is CSIRO Perfidy. To see a list of other BNC posts by Geoff, click here.


India declared itself a republic in 1950 after more than a century of struggle against British Imperialism. Greenpeace India however, is still locked firmly under the yoke of its parent. Let me explain.

Like many Australians, I only caught up with Bombay’s 1995 change of name to Mumbai some time after it happened. Mumbai is India’s city of finance and film, of banks and Bollywood: a huge, seething coastal metropolis on the north-western side of India. It’s also the capital of the state of Maharashtra, which is about 20 percent bigger than the Australian state of Victoria but has 112 million people to Victoria’s 5.5 million. Mumbai alone has more than double Victoria’s entire population. Despite that population, the electricity served up by Maharashtra’s fossil fuel power stations plus one big hydro scheme is just 11.3 GW (giga watts, see Note 3), not much more than the 8 or so GW of Victoria’s coal and gas fumers. So despite Mumbai’s dazzling glass and concrete skyline, many Indians in both rural and urban areas of the state still cook with biomass: things like wood, charcoal and cattle dung.

The modern Mumbai skyline at night

Mumbai’s wealth is a magnet for terrorism. The attacks in 2008, which killed 173 people, followed bombings in 2003 and 1993 which took 209 and 257 lives respectively. Such events are international news, unlike the daily death and illness, particularly of children, caused by cooking with biomass. Each year, cooking smoke kills about 256,000 Indian children between 1 and 5 years of age through acute lower respiratory infections (ALRI). Those who don’t die can suffer long-term consequences to their physical and mental health. A rough pro-rata estimate would see about 23,000 children under 5 die in Maharashtra every year from cooking smoke.

The image is from a presentation by medical Professor Kirk Smith, who has been studying cooking smoke and its implications for 30 years.

Medical Prof. Kirk Smith’s summary of health impacts from cooking fires

The gizmo under the woman’s right arm measures the noxious fumes she is exposed to while cooking. Kirk doesn’t just study these illnesses; he has also spun off development projects which develop and distribute cleaner cooking stoves to serve as an interim measure until electricity arrives.

The disconnect between what matters about Mumbai and India generally to an Australian or European audience and what matters locally is extreme. But a visit to the Greenpeace India website shows it is simply a western clone. In a country where real matters of life and death are ubiquitous, the mock panic infecting the front page of the Greenpeace India website at the death-less problems of the Fukushima nuclear plant seems weird at best, and obscene at worst. “Two months since Fukushima, the Jaitapur project has not been stopped”, shouts the text over one front page graphic, in reference to the nuclear plant proposed for construction at Jaitapur. In those two months, nobody has died of radiation at Fukushima, but 58,000 Indian children have died from cooking smoke. They have died because of a lack of electricity. Some thousands in Maharashtra alone.

Greenpeace, now an obstructive dinosaur

The whole world loved Greenpeace back in its halcyon days of protesting whaling and the atmospheric testing of nuclear weapons. Taking on whalers and the French Navy in the open sea in little rubber boats was indeed worthy of Mahatma Gandhi. But the legacy of those days is now an obstacle to Greenpeace helping to fight the much bigger environmental battles now underway. As Greenpeace campaigns to throw out the nuclear-powered baby with the weapons-testing bathwater, it seems to have forgotten the 2010 floods which displaced 20 million people on the subcontinent. The Australian Council for International Development reported in May 2011 that millions are still displaced, with 913,000 homes completely destroyed. Millions also have ongoing health issues, with rising levels of tuberculosis, dengue fever and the impacts of extended periods of malnutrition. The economic structure of large areas has been devastated along with food and seed stocks. Areas in southern Pakistan are still under water.

This foreshadows the scale of devastation which will be delivered more frequently as global warming bites.

Brown clouds, cooking and climate change

Regardless of what you think about nuclear power, you’d think breathable air would be an environmental issue worthy of Greenpeace’s attention, but biomass cooking is missing from Greenpeace India’s campaign headings.

Biomass cooking isn’t just a consequence of poverty, it feeds into a vicious feedback loop. People, usually women and children, spend long periods collecting wood or cattle dung (see image or full study). This reduces educational opportunities, while pressure on forests for wood and charcoal degrades biodiversity. Infections from smoke, even if not fatal, combine with the marginal nutrition produced by intermittent grain shortages to yield short and sickly lifespans, while burning cattle dung wastes a resource far more valuable as fertiliser.

In 2004, a World Health Organisation Report estimated that, globally, 50 percent of all households and 90 percent of rural households cook with biomass. In India, they estimated that 81 percent of Indian households cook with biomass. That figure will have dropped somewhat with significant growth in Indian power generation over the past decade but will still be high.

Biomass cooking isn’t only a health issue, but a significant player in climate change. Globally, the black carbon in the smoke from over 3 billion people cooking and boiling water daily with wood, charcoal or cattle dung forms large brown clouds with regional and global impacts.

Maharashtra’s nuclear plans

Apart from a reliable food supply, the innovation that most easily distinguishes the developed and developing world is electricity. It’s the shortage of this basic commodity that kills those 256,000 Indian children annually. Electric cooking is clean and slices through the poverty inducing feedback loop outlined above. Refrigeration reduces not just food wastage but also food poisoning.

If you want to protect forests and biodiversity as well as children in India (and the rest of the developing world), then electricity is fundamental. Higher childhood survival is not only a worthy goal in itself, but it is also critical in reducing birthrates.

Apart from a Victorian-sized coal-fired power supply, the 112 million people of Maharashtra also have the biggest nuclear power station in India. This is a cluster of two older reactors and two newer ones opened in 2005 and 2006. The newer reactors were constructed by Indian companies and were completed on time and under budget. The two old reactors are relatively small, but the combined power of the two newer reactors is nearly a gigawatt. India has a rich mathematical heritage going back a thousand years which underpins a sophisticated nuclear program. Some high-level analytic techniques were known in India hundreds of years before being discovered in Europe.

India has another nuclear power station planned for Maharashtra, and a much bigger one: half a dozen huge 1.7 GW French EPR reactors at Jaitapur, south of Mumbai. On its own, this cluster will surpass the entire current output of the state’s coal-fired power stations. The project will occupy 968 hectares and displace 2,335 villagers (Wikipedia). How much land would solar collectors occupy for an Andasol-like concentrating solar thermal system of similar output? About 40 times more land, and it would either displace something like 80,000 people or eat into India’s few wildlife habitats.

If Greenpeace succeeds in delaying the Jaitapur nuclear plant, biomass cooking in the area it would have serviced will continue together with the associated suffering and death of children. It’s that simple. Greenpeace will have direct responsibility no less than if it had bombed a shipment of medical supplies or prevented the decontamination of a polluted drinking well.

Jaitapur and earthquakes

In the wake of the reactor failures at Fukushima which killed nobody, Greenpeace globally and Greenpeace India are redoubling their efforts to derail the new Jaitapur nuclear plant. The Greenpeace India website (Accessed 9th May) carries a graphic of the Fukushima station with covering text:

The Jaitapur nuclear plant in India is also in an earthquake prone zone. Do we want to take the risk? The people of Jaitapur don’t.

The Greenpeace site claims that the chosen location for the Jaitapur power plant is in Seismic Zone 4, with a maximum recorded quake of 6.3 on the Richter scale. Accepting this as true (Wikipedia says it’s Zone 3), should anybody be afraid?

“Confident” and “relaxed” are far more appropriate responses for anybody who understands the Richter scale. It’s logarithmic. Base 10.

Still confused? A quake of Richter magnitude 7 is 10 times more powerful than one of magnitude 6. A quake of magnitude 8 is 100 times more powerful than one of magnitude 6. And a magnitude 9 quake, like Japan’s monster on March the 11th, is a thousand times more powerful than a quake of magnitude 6. The 40-year-old Fukushima reactors came through this massive quake with damage but no deaths. The reactors shut down as they were designed to, and the subsequent problems, still fatality-free and caused primarily by the tsunami, would not have occurred with a more modern reactor. We haven’t stopped building large buildings in earthquake zones because older designs failed.
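The tenfold-per-step arithmetic can be checked directly. A small sketch (magnitudes chosen for illustration): each whole-number step on the scale is a tenfold increase in measured shaking amplitude, and roughly a 10^1.5 ≈ 32-fold increase in radiated energy, which only strengthens the point being made here.

```python
# Richter-style magnitude comparisons.
# Amplitude ratio between magnitudes m1 and m2 is 10**(m1 - m2);
# radiated energy scales roughly as 10**(1.5 * (m1 - m2)).

def amplitude_ratio(m1, m2):
    return 10 ** (m1 - m2)

def energy_ratio(m1, m2):
    return 10 ** (1.5 * (m1 - m2))

# A magnitude 9 quake (like Japan's on March 11) versus a magnitude 6 quake:
print(amplitude_ratio(9, 6))        # 1000-fold shaking amplitude
print(round(energy_ratio(9, 6)))    # ~31,623-fold energy release
```

So a magnitude 9 event is not "half again as bad" as a 6; it is three orders of magnitude stronger, which is the author's point about the scale being logarithmic.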

Steep cliffs and modern reactor designs at Jaitapur will mean that tsunamis won’t be a problem. All over the world people build skyscrapers in major earthquake zones. The success of the elderly Fukushima reactors in the face of a monster quake is cause for relief and confidence, not blind panic. After all, compared to a skyscraper like Taipei 101, designing a low profile building like a nuclear reactor which can handle earthquakes is a relative doddle.

Despite being a 10 on the media’s self-proclaimed Richter scale, the subsequent radiation leaks and releases at Fukushima will cause few if any cancers. It’s unlikely that a single worker will get cancer, let alone any of the surrounding population. This is not even a molehill next to the mountain of cancers caused by cigarettes, alcohol and red meat. The Fukushima evacuations are terrible for the individuals involved, but even 170,000 evacuees pale beside the millions of evacuations caused by increasing climate-based cataclysms.

Greenpeace India haunted by a pallid European ghost

Each year that the electricity supply in Maharashtra is inadequate, some 23,000 children under the age of 5 will die. They will die this year. They will die next year. They will keep dying while the electricity supply in Maharashtra is inadequate. While the children die, their parents will mourn and continue to deplete forests for wood and charcoal. They will continue to burn cattle dung and they will have more children.

A search of the Greenpeace India web pages finds no mention of biomass cooking. No mention of its general, environmental, climate or health impacts. But there are 118 pages referencing Chernobyl.

At Chernobyl, 237 people suffered acute radiation sickness with 28 dying within 4 months and another 19 dying between 1987 and 2006. As a result of the radiation plume and people who were children at the time drinking contaminated milk, there were 6,848 cases of thyroid cancer between 1991 and 2005. These were treated with a success rate of about 98% (implying about 140 deaths). Over the past 25 years there have also been some thousands of other cancers that might, or might not, have been caused by Chernobyl amongst the millions of cancers caused by factors that Greenpeace doesn’t seem the least worried by, things like cigarettes, alcohol and red meat.

On the other hand, each year that India’s electricity supply is inadequate will see about 256,000 childhood deaths. As an exercise, readers may wish to calculate the number of Indian children who have died due to inadequate cooking fuels over the past 25 years and compare it with the 140 children who died due to the Chernobyl accident. Every one of those Indian deaths was every bit as tragic as every one of those Chernobyl deaths.
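The exercise suggested above is one line of arithmetic; a sketch using only the figures already quoted in this post (256,000 deaths per year, 140 Chernobyl thyroid-cancer deaths):

```python
annual_cooking_smoke_deaths = 256_000  # Indian children under 5, per year (figure from the post)
years = 25                             # the 25 years since Chernobyl
chernobyl_child_deaths = 140           # ~2% of the 6,848 thyroid cancer cases

cooking_deaths = annual_cooking_smoke_deaths * years
print(cooking_deaths)                               # 6,400,000 deaths
print(round(cooking_deaths / chernobyl_child_deaths))  # ~45,714 times as many
```

The pro-rata figure is rough, but the ratio is so large that no plausible refinement changes the conclusion.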

Greenpeace India is dominated by the nuclear obsession of its parent organisation. On the day the Greenpeace India blog ran a piece about 3 Japanese workers with burned feet, nearly a thousand Indian children under 5 died from cooking stove smoke. Those children didn’t get a mention that day, or any other.

Why is Greenpeace India haunted by this pallid European ghost of an explosion 25 years ago in an obsolete model of reactor in Ukraine? Why is Greenpeace India haunted by the failure of a 40 year old Fukushima reactor without a single fatality? This is a tail wagging not just a dog, but the entire sled team.

Extreme scenarios

It’s time Greenpeace India looked rationally at Indian choices.

Should they perhaps copy the Germans, whose 15-year flirtation with solar power hasn’t made the slightest dent in their fossil fuel use? (Note 2) It may simply be that the Germans are technologically incompetent and that things will go better in India. Perhaps the heirs of Ramanujan will succeed where the heirs of Gauss have failed. Alternatively, should India copy the Danes, whose wind farms can’t supply even half the power of a tiny country of 5.4 million?

India’s current electricity sources. Cooking stoves not included! ‘Renewables’ are predominantly biomass thermal power plants and wind energy, with some solar PV.

India is well aware that she has only four or five decades of coal left, but seems less aware, like other governments, that atmospheric CO2 must be stabilised at 350 ppm, together with strict reductions in short-lived forcings like black carbon and methane, and that these constraints require her, like Australia and everybody else, to leave most of that coal in the ground. But regardless of motivation, India needs both a rebuild and an expansion of her energy infrastructure over the next 50 years.

Let’s consider a couple of thumbnail sketches of two very different extreme scenarios that India may consider.

The first scenario is to phase out all of India’s coal, oil and gas electricity generation facilities and replace them with nuclear. Currently these fossil fuel facilities generate about 900,000 GWh (gigawatt-hours) of electricity. Replacing them with 1,000 nuclear reactors of 1.7 GW each would generate about 14 million GWh annually. This is about 15 times the current electricity supply and roughly similar to Victoria’s per capita electricity supply. It’s a fairly modest target because electricity will be required to replace oil and gas in the future. I also haven’t factored in population growth, in the hope that energy efficiency gains will compensate for it, and with confidence that electrification will reduce population growth. Nevertheless, this amount of electricity should be enough to catapult India into the realms of the developed world.
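The generation figure can be reproduced using the 90 percent capacity factor from Note 2 (carrying that assumption over is mine, but it matches the note's own reactor arithmetic):

```python
reactors = 1000
power_gw = 1.7           # GW per reactor
capacity_factor = 0.9    # as in Note 2
hours_per_year = 24 * 365

annual_gwh = reactors * power_gw * hours_per_year * capacity_factor
print(round(annual_gwh))            # ~13.4 million GWh, i.e. "about 14 million"
print(round(annual_gwh / 900_000))  # ~15 times the current fossil-fuelled supply
```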

These reactors should last at least 60 years and the electricity they produce will prevent 256,000 children under 5 dying every year. Over the lifetime of the reactors this is about 15.4 million childhood deaths. But this isn’t so much about specific savings as a total transformation of India which will see life expectancy rise to developed world levels if dangerous climate change impacts can be averted and a stable global food supply is attained.

Build the reactors in groups of 6, as is proposed at Jaitapur, and you will need to find 166 sites of about 1000 hectares. The average density of people in India is about 3 per hectare, so you may need to relocate half a million people (3000 per site). This per-site figure is close to the actual figure for Jaitapur.
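The site and relocation arithmetic, using the post's round numbers:

```python
reactors = 1000
reactors_per_site = 6        # grouped as proposed at Jaitapur
site_hectares = 1000         # roughly 1,000 ha per site
people_per_hectare = 3       # India's average density, per the post

sites = reactors / reactors_per_site                      # ~166 sites
relocated_per_site = site_hectares * people_per_hectare   # 3,000 people per site
total_relocated = sites * relocated_per_site
print(round(sites))             # ~167
print(round(total_relocated))   # ~500,000 people
```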

There are currently over 400 nuclear reactors operating world wide and there has been one Chernobyl and one Fukushima in 25 years. Nobody would build a Chernobyl style reactor again, but let’s be really silly and presume that over 60 years we had 2 Chernobyls and 2 Fukushimas in India. Over a 60 year period this might cost 20,000 childhood cancers with a 98% successful treatment rate … so about 400 children might die. There may also be a few thousand adult leukemias easily counterbalanced by a vast amount of adult health savings I haven’t considered.

The accidents would also result in 2 exclusion zones of about 30 kilometres in radius. Effectively this is 2 new modestly sized wildlife parks. We know from Chernobyl that wildlife will thrive in the absence of humans. With a 30 km radius, each exclusion zone wildlife park would occupy about 282,743 hectares.
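The exclusion-zone area is just πr², remembering that 1 km² is 100 hectares:

```python
import math

radius_km = 30
area_km2 = math.pi * radius_km ** 2   # area of one circular exclusion zone
area_hectares = area_km2 * 100        # 1 km^2 = 100 ha
print(round(area_hectares))           # 282,743 hectares per zone
```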

If you are anti-nuclear, this is a worst case scenario. The total transformation of India into a country where children don’t die before their time in vast numbers.

This is a vision for India that Greenpeace India is fighting tooth and nail to avoid.

As our alternative extreme scenario, suppose India opted for concentrating solar thermal power stations similar to the Spanish Andasol system to supply 14 million GWh annually. Each such unit supplies about 180 GWh per year, so you would need at least 78,000 units with a solar collector area of 3.9 million hectares, equivalent to 13 of our hypothesized exclusion zone wildlife parks from the accidents. But, of course, these 3.9 million hectares are not wildlife parks. I say “at least 78,000” units because the precise number will depend on matching the demand for power with the availability of sunshine. Renewable sources of energy like wind and solar need overbuilding to make up for the variability and unpredictability of wind and cloud cover. The 78,000 Andasol plants each come with 28,000 tonnes of molten salt (a mix of sodium nitrate and potassium nitrate) at 400 degrees centigrade, which acts as a huge battery, storing energy when the sun is shining for use when it isn’t. Local conditions will determine how much storage is required. The current global production of ordinary sodium chloride is about 210 million tonnes annually. Producing the 2.1 billion tonnes of special salt required for 78,000 Andasols will be difficult, as will the production of steel and concrete. Compared to the nuclear reactors, you will need about 15 times more concrete and 75 times more steel.

Build the 78,000 Andasols in groups of 78 and you have to find 1,000 sites of about 4,000 hectares each. Alternatively you could use 200 sites of 20,000 hectares. The average density of people in India is over 3 per hectare, so you may need to relocate perhaps 12 million people. If you were to use solar photovoltaics in power stations (as opposed to rooftops), you would need more than double the land (Note 4) and have to relocate even more people.
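The Andasol arithmetic in the last two paragraphs, collected in one place (all inputs are the post's own figures):

```python
target_gwh = 14_000_000        # annual GWh, matching the nuclear scenario
gwh_per_andasol = 180          # output per unit per year
hectares_per_unit = 50         # 3.9 million ha across 78,000 units
salt_tonnes_per_unit = 28_000  # molten nitrate salt storage per plant
people_per_hectare = 3

units = target_gwh / gwh_per_andasol           # before any overbuilding
total_hectares = 78_000 * hectares_per_unit
total_salt = 78_000 * salt_tonnes_per_unit
relocated = 1000 * 4000 * people_per_hectare   # 1,000 sites of 4,000 ha each

print(round(units))       # 77,778 -> "at least 78,000" once overbuilt
print(total_hectares)     # 3,900,000 ha of collectors
print(total_salt)         # 2,184,000,000 tonnes of salt (~2.1 billion)
print(relocated)          # 12,000,000 people
```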


In a previous post, I cited an estimate of 1 tonne of CO2 per person per year as a sustainable greenhouse gas emissions limit for a global population of 8.9 billion. How do our two scenarios measure up?

A current estimate of full life cycle emissions from nuclear power is 65 g/kWh (grams of CO2 per kilowatt-hour), so 14 million GWh of electricity shared between 1.4 billion Indians is about 0.65 tonnes per person per annum, which allows 0.35 tonnes for food and other non-energy greenhouse gas emissions. So not only is it sustainable, it’s in the ball park as a figure we will all have to live within.
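The per-person figure is straightforward unit conversion:

```python
emissions_g_per_kwh = 65     # life-cycle estimate for nuclear
annual_gwh = 14_000_000
population = 1.4e9           # roughly, India

kwh = annual_gwh * 1e6                            # 1 GWh = 1,000,000 kWh
total_tonnes = kwh * emissions_g_per_kwh / 1e6    # grams -> tonnes
per_person = total_tonnes / population
print(round(per_person, 2))  # 0.65 tonnes CO2 per person per year
```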

The calculations required to check whether this amount of electricity is sustainable from either solar thermal or solar PV are too complex to run through here, but neither will be within budget if any additional fossil fuel backup is required. Solar PV currently generates about 100 g/kWh (p.102) under Australian conditions, so barring technical breakthroughs it is unsustainable, unless you are happy not to eat at all. Solar thermal is similar to nuclear in g-CO2/kWh, except that the required overbuilding will probably blow the one tonne budget.

The human cost of construction time

The relative financial costs of the two scenarios could well have a human cost. For example, more money on energy usually means less on ensuring clean water. But this post is already too long. However, one last point needs to be made about construction time. I strongly suspect that while building 1000 nuclear reactors will be a vast undertaking, it is small compared to 78,000 Andasols. Compare the German and French experiences of solar PV and nuclear, or simply think about the sheer number and size of the sites required. The logistics and organisational time could end up dominating the engineering build time. We know from various experiences, including those of France and Germany, that rapid nuclear builds are physically plausible and India has demonstrated this with its own reactor program.

If I’m right and a solar (or other renewable) build is slower than a nuclear build, then the cost in human suffering will easily dwarf anything from any reasonable hypotheses on the number of accidents. Can we put a number on this? If we arbitrarily assume a pro-rata reduction in childhood deaths in proportion to the displacement of biomass cooking with electricity, then we can compare a phase-out over 10 five-year plans with one taking say 11. So at the end of each 5 year plan a chunk of electricity comes on line and the number of cooking smoke deaths drops. At the end of the process the number of deaths from cooking smoke is 0. It’s a decline in a series of 10 large or 11 slightly smaller steps. Plug in the numbers and add up the total over the two time periods and the difference is … 640,000 deaths in children under 5. Construction speed matters.
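The post doesn't show the working behind the 640,000 figure, but the stepped-decline description can be reconstructed directly; this sketch of my reading of the calculation reproduces it exactly:

```python
def total_deaths(plans, annual_deaths=256_000, plan_years=5):
    # During plan k (k = 1..plans), the death rate has already been cut by
    # (k - 1)/plans of its original value; after the final plan it is zero.
    return sum(annual_deaths * (1 - (k - 1) / plans) * plan_years
               for k in range(1, plans + 1))

fast = total_deaths(10)   # phase-out over 10 five-year plans
slow = total_deaths(11)   # the same phase-out taking 11 plans
print(round(slow - fast)) # 640,000 extra deaths from the slower build
```

A one-plan (five-year) slip in an otherwise identical build-out costs well over half a million young lives under this pro-rata assumption, which is the sense in which construction speed matters.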

In conclusion

How do my back-of-an-envelope scenarios compare with India’s stated electricity development goals? According to India’s French partner in the Jaitapur project, Areva, India envisages about half my hypothesized electrical capacity being available by 2030, so a 50 year nuclear build plan isn’t ridiculous provided floods or failed monsoons don’t interfere unduly.

As for the safety issues and my hypothesised accidents, it doesn’t matter much what kind of numbers you plug in as a consequence of the silly assumption of a couple of Chernobyls. They are all well and truly trumped: firstly, by the increase in health for Indian children, secondly by the reforestation and biodiversity gains as biomass cooking declines, thirdly by the reduction in birth rates as people get used to not having their children die, and lastly, by helping us all have a fighting chance of avoiding the worst that climate change might deliver.

It’s time Greenpeace India told its parent organisation to shove off. It’s time Greenpeace India set its own agenda and put the fate of Indian children, the Indian environment and the planet ahead of the ideological prejudices of a parent organisation which has quite simply lost the plot.

Note 1: Nuclear Waste: What about the nuclear waste from a thousand reactors? This is far less dangerous than current levels of biomass cooking smoke and is much more easily managed. India has some of the best nuclear engineers in the business. They are planning thorium breeder reactors which will result in quite small amounts of waste, far smaller and more manageable than the waste from present reactors. Many newer reactor designs can run on waste from the present generation of reactors. These newer reactors are called IFR (Integral Fast Reactor) and details can be found on

Note 2: German Solar PV: Germany installed 17 GW of solar photovoltaic (PV) power cells between 2000 and 2010, and in 2010 those 17 GW of cells delivered 12,000 GWh of energy. If those cells had been running in 24×7 sunshine, they would have delivered 17 x 24 x 365 = 148,920 GWh of energy, so their efficiency is about 8 percent (this is usually called their capacity factor). A single 1.7 GW nuclear reactor can produce about 1.7 x 24 x 365 x 0.9 = 13,402 GWh in a year (the 0.9 is a reasonable capacity factor for nuclear … 90 percent). Fossil fuel use for electricity production in Germany hasn’t changed much in the past 30 years, with most of the growth in the energy supply being due to the development of nuclear power in Germany during the late 70s and 80s.
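Note 2's arithmetic as code, for anyone who wants to rerun it with different inputs:

```python
def capacity_factor(delivered_gwh, capacity_gw):
    # Fraction of the energy the fleet would deliver running flat out all year
    max_gwh = capacity_gw * 24 * 365
    return delivered_gwh / max_gwh

german_pv = capacity_factor(12_000, 17)   # German PV fleet, 2010
nuclear_gwh = 1.7 * 24 * 365 * 0.9        # one 1.7 GW reactor at 90% CF
print(round(german_pv * 100, 1))          # ~8.1 percent
print(int(nuclear_gwh))                   # 13,402 GWh per year
```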

Note 3: Giga watts, for non-technical readers: The word billion means different things in different countries, but “giga” always means a thousand million, so a giga watt (GW for short) is a useful unit for large amounts of power. A 100-watt globe takes 100 watts of power to run. Run it for an hour and you have used 100 watt-hours of energy. Similarly, a GWh is a giga watt of power used for an hour, a useful unit for large amounts of energy. If you want to know all about energy units for a better understanding of BNC discussions, here’s Barry’s primer.

Note 4: Area for Solar PV: German company JUWI provides large-scale PV systems. Their 2 MW (megawatt) system can supply about 3.1 GWh per year and occupies 2 hectares. To supply a similar amount of energy to an Andasol unit would need 180/3.1 = 58 units occupying some 116 hectares.


March 11, 2011

A toy model for forecasting global temperatures – 2011 redux, part 1

Filed under: Climate Change, Global Warming — buildeco @ 1:14 pm

by Barry Brook

A little over two years ago, I wrote the following post : How hot should it have really been over the last 5 years? In it, I did some simple statistical tinkering to examine the (correlative) relationship between global temperatures and a few key factors, namely greenhouse gases, solar irradiance, and ENSO. In the next couple of posts, I’ll update the model, add a few different predictors, and correct for temporal autocorrelation. I’ll also make a prediction on how global temperatures might pan out over the coming few years.
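To give a flavour of the kind of toy model being described, here is a minimal sketch: an ordinary least squares fit of an annual temperature anomaly on a greenhouse-gas forcing trend, a solar-cycle term and an ENSO index. The data below are synthetic numbers invented purely so the example runs; the actual posts use observational series for these predictors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30  # thirty synthetic "years"

# Synthetic predictors (invented, for illustration only)
ghg = np.linspace(0.0, 1.0, n)                          # rising greenhouse forcing
tsi = 0.1 * np.sin(np.linspace(0, 2 * np.pi * 3, n))    # ~11-year solar cycle
soi = rng.normal(0, 1, n)                               # ENSO index, noise-like

# Synthetic temperature built from known coefficients plus noise
temp = 0.8 * ghg + 0.5 * tsi - 0.05 * soi + rng.normal(0, 0.02, n)

# Fit temp ~ intercept + ghg + tsi + soi by ordinary least squares
X = np.column_stack([np.ones(n), ghg, tsi, soi])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
print(coef)  # estimates should land near [0, 0.8, 0.5, -0.05]
```

With real data the residuals are autocorrelated, which is exactly why the later parts of this series turn to bootstrapping rather than trusting the naive OLS standard errors.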

In the 2008 post, I concluded with the following:

To cap off this little venture into what-if land, I’ll have a bit of fun and predict what we might expect for 2009. My guess is that the SOI will be neutral (neither El Niño nor La Niña), solar cycle 24 will be at about 20% of its expected 2013 peak, and there will be no large volcanic eruptions. On this basis, 2009 should be about +0.75C, or between the 3rd and 5th hottest on record. Should we get a moderate El Niño (not probable, based on current SOI) it could be as high as +0.85C and could then become the hottest on record. I think that’s less likely.

By 2013, however, we’ll be at the top of the solar cycle again, and have added about another +0.1C worth of greenhouse gas temperature forcing and +0.24 of solar forcing compared to 2008. So even if 2013 is a La Niña year, it might still be +0.85C, making it hotter than any year we’ve yet experienced. If it’s a strong El Niño in 2013, it could be +1.2C, putting it way out ahead of 1998 on any metric. Such is the difference between the short-term effect of non-trending forcings (SOI and TSI) and that inexorable warming push the climate system is getting from ongoing accumulation of heat-trapping greenhouse gases.

So, now that we have data for 2009 and 2010, how did I do? Not too bad actually. Let’s see:

1. After bottoming out for a long period, the 11-year sunspot cycle has restarted. So much for those predicting a new Maunder Minimum. By the end of 2010, we had indeed reached about 20% of the new forecast maximum for cycle-24 (which is anticipated to be about half the peak value of cycle-23).

2. We had a mild El Niño in 2009 and early 2010, before dipping back into a strong La Niña. See here.

3. There were no large equatorial volcanic eruptions. The best we got was Iceland’s Eyjafjallajökull (don’t ask me to pronounce it), which actually helped the climate a little by stopping flights over Europe for a week.

4. 2009 was ranked as the 5th warmest on record. I had ‘forecast’, based on my toy model, that it would be somewhere between 3rd and 5th. I said that 2009 would be about +0.25C hotter than 2008; the real difference was ~ +0.15C (based on the WTI average index data). This was followed by 2010 equalling 2005 as the hottest year on record. Pretty much right in line with my guesstimate.

5. Anthropogenic GHGs continue to accumulate; atmospheric CO2 concentrations built up by 1.9 ppm in 2009 and 2.4 ppm in 2010. That forcing ain’t going away!

I still stand by my 2008 prediction that we will witness a record-smashing year in 2013… but I’ll have to wait another couple of years to confirm my prognostication. However, I’m not going to leave it at this. There are a couple of simple ways I can improve my toy model, I think — without a lot of extra effort. Doing so will also give me a chance to show off a few resampling methods that can be used in time-series analysis, and to probe some questions that I skipped over in the 2008 post.

In short, I think I can do better.

In Part 2, I’ll describe the new and old data streams, do some basic cross-correlation analysis and plotting, bootstrap the data to deal with autocorrelation, and look briefly at a method for assessing a model’s structural goodness-of-fit.

In Part 3 I’ll do some multi-term fitting to data going up to 2005, and use this model to project the 2006 — 2010 period as a way of validating against independent data (i.e., that not used in the statistical fitting), then re-fit the best predictive model(s) to all the data, and make a forecast for 2011 to 2013 inclusive. The big catch here will be getting the non-CO2 forcings right.

Stay tuned

February 24, 2011

The cost of ending global warming – a calculation

Filed under: Economic issues, Global Warming — buildeco @ 12:04 am

Guest Post by Chris Uhlik. Dr Uhlik did a BS, MS, and PhD in Electrical Engineering at Stanford 1979–1990. He worked at Toyota in Japan, built robot controllers, cellular telephone systems, internet routers, and now does engineering management at Google. Among his 8 years of projects as an engineering director at Google, he counts engineering recruiting, Toolbar, Software QA, Software Security, GMail, Video, BookSearch, StreetView, AerialImaging, and research activities in Artificial Intelligence and Education. He has directly managed about 500 engineers at Google and indirectly over 2000 employees. His interests include nuclear power, photosynthesis, technology evolution, artificial intelligence, ecosystems, and education.

(Ed Note: Chris is a member of the IFRG [a private integral fast reactor discussion forum] as well as being a strong supporter of the LFTR reactor design)

An average American directly and indirectly uses about 10.8 kW of primary energy of which about 1.3 kW is electricity. Here I consider the cost of providing this energy as coming from 3 main sources:

1. The fuel costs (coal, oil, uranium, sunlight, wind, etc)

2. The capital costs of the infrastructure required to capture and distribute the energy in usable form (power plants, tanker trucks, etc)

3. The operating costs of the infrastructure (powerline maintenance, plant security, watching the dials, etc)

The average wholesale electricity price across the US is about 5c/kWh, so the all-in costs of providing the electrical component is currently ~$570/person/year or 1.2% of GDP. The electric power industry including all distribution, billing, residential services, etc is $1,120/person/year or 2.4% of GDP. So you can see there is about a factor of two between marginal costs of electricity production (wholesale prices) and retail prices.
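The wholesale figure can be reproduced directly (the GDP shares quoted above imply a GDP per person of roughly $47,000, which is my inference, not a number stated in the post):

```python
avg_electric_kw = 1.3        # average electricity use per American
hours_per_year = 24 * 365
wholesale_per_kwh = 0.05     # 5 cents/kWh average wholesale price

annual_kwh = avg_electric_kw * hours_per_year    # ~11,388 kWh/person/year
wholesale_cost = annual_kwh * wholesale_per_kwh
print(round(wholesale_cost))                     # ~$569/person/year, i.e. ~$570
```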

The rest of this energy comes from natural gas, coal, petroleum and biomass, to the tune of 6.36 kW/person.

I’m going to make the following assumptions to calculate how much it would cost to convert to an all-nuclear powered, fossil-carbon-free future.

Assumptions (*see numbers summary at foot of this post)

  • I’ll neglect all renewable sources such as hydro. They amount to only about 20% of electricity and don’t do anything about the larger fuel energy demand, so they won’t affect the answer very much.
  • Some energy sources are fuel-price intensive (e.g. natural gas) and some have zero fuel prices, but are capital intensive (e.g. wind). I’ll assume that nuclear is almost all capital intensive with only 35% of the cost coming from O&M and all the rest going to purchase costs plus debt service.
  • I’ll use 8% for cost of capital. Many utilities operate with a higher guaranteed return than this (e.g. 10.4%) but the economy historically provides more like 2–5% overall, so 8% seems quite generous.
  • I’ll assume a 50-year life for nuclear power plants. They seem to be lasting longer than this, but building for more than 50 years seems wasteful as technologies advance and you probably want to replace them with better stuff sooner than that.
  • Back in the 1970′s we built nuclear power plants for about $0.80–0.90/watt (2009 dollars). In the 1980′s and 90′s we saw that price inflate to $2.09–3.39/watt (Palo Verde and Catawba) with a worst-case disaster of $15/watt (Shoreham). Current project costs are estimated at about $2.95/watt (Areva EPR). Current projects in China are ~$1.70/watt. If regulatory risks were controlled and incentives were aligned, we could probably build plants today for lower than the 1970′s prices, but I’ll pessimistically assume the current estimates of $3/watt.
  • Electricity vs Combustion: In an all nuclear, electricity-intensive, fossil-carbon-free future, many things would be done differently. For example, you won’t heat your house by burning natural gas. Instead you’ll use electricity-powered heat pumps. This will transfer energy away from primary source fuels like natural gas to electricity. Plug-in-hybrid cars will do the same for petroleum. In some cases, the total energy will go down (cars and heat pumps). In some cases, the total energy will go up (synthesizing fuel to run jet transport aircraft). I’ll assume the total energy demand in this future scenario is our current electricity demand plus an unchanged amount of energy in the fuel sector, but provided instead by electricity. I.e. 1.3 kW (today’s electricity) + 6.4 kW (today’s fuels, but provided by electricity with a mix of efficiencies that remains equivalent). This is almost certainly pessimistic, as electricity is often a much more efficient way to deliver energy to a  process than combustion. (Ed Note: I discuss similar issues in these two SNE2060 posts).
  • Zero GDP growth rate

Result: In this future, we need 7.7 kW per person, provided by $3/watt capitalized sources with 8% cost of capital and 35% surcharge for O&M. The cost of this infrastructure: $2,550/person/year or 5% of GDP.

Alternate assumptions:

  • Chinese nuclear plant costs of $1.70/watt
  • Higher efficiency in an electric future where most processes take about half as much energy from electricity as they used to take from combustion: 1.3 kW from old electricity demands (unchanged) + 3.2 kW from new electricity demands (half of 6.4 kW). And fuels (where still needed) are produced using nuclear heat-driven synthesis approaches.

Alternative result: $844/person/year or 2% of GDP.
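Using the standard capital recovery factor to annualize the capital (my assumption; the post doesn't spell out the exact annuity formula used), both results can be reproduced:

```python
# Sketch: annualized per-person cost of an all-nuclear supply under the
# post's assumptions (8% cost of capital, 50-year life, 35% O&M surcharge).

def annual_cost(demand_kw, dollars_per_watt, rate=0.08, life_years=50, om_surcharge=0.35):
    capital = demand_kw * 1000 * dollars_per_watt      # $ of plant per person
    # capital recovery factor: constant annual payment that retires
    # the capital over life_years at the given interest rate
    crf = rate / (1 - (1 + rate) ** -life_years)
    return capital * crf * (1 + om_surcharge)          # $/person/year

# Main scenario: 7.7 kW per person at $3/watt
print(round(annual_cost(7.7, 3.0)))   # ~2550

# Alternate: 4.5 kW (1.3 + 3.2) per person at Chinese costs of $1.70/watt
print(round(annual_cost(4.5, 1.7)))   # ~844
```

Both headline figures ($2,550 and $844 per person per year) drop out of the same formula, so the factor-of-three difference is driven entirely by the cheaper plants and halved demand.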

Conclusion: Saving the environment using nuclear power could be cheap and worth doing.


*Numbers summary:

Year: 2008
GDP: $14.59T
Population 306M
Electricity: 12.68 quads
Non-electricity fuels: 58.25 quads
Natural gas: 16.33 quads
Coal: 1.79 quads
Biomass: 3.46 quads
Petroleum: 36.67 quads
Average retail electricity price: 9.14 c/kwh
Electric power industry: $343B/yr
Electricity transmission industry: $7.8B/yr

Per person statistics:
GDP: $47,680
Electricity: 1.29 kW (average power)
Fuels: 6.36 kW

February 9, 2011

A remediation perspective on preventing future flood disasters

Filed under: Climate Change, Global Warming, Water Resources — buildeco @ 11:01 am

Guest Post by Michal Kravčík

The year 2011 started strangely. First came tragic floods in Australia, Brazil and South Africa, then unprecedented drought in China. The media routinely say that the extremes of climate will only get worse. Why is this happening? Is it possible to prevent future catastrophic flooding, and even to moderate the other extreme, drought? One possible solution for Australia is to make use of the unique hydrological and plant processes that exist in its mostly flat landscape. The country once had a unique natural irrigation system contained in its vast floodplain systems, and this enabled the country to slow the thermoregulatory process of dramatic warming and cooling in the atmosphere and to mitigate the risks of extreme weather. I am convinced that recovery of the small water cycles from plant biodiversity can cool the dry country of Australia, prevent flooding on the east coast, and restore water to streams and rivers; life in the soil is critically important for climate.

The coastline of eastern Australia extends for over 4000 km and forms the boundary between the ocean and the dry inland of the continent. These two fundamentally differing ‘worlds’ each deliver a different kind of energy into the atmosphere. The sea surface water evaporates into the atmosphere (see diagram below) whilst


the dry and desiccated landscape of the interior produces sensible heat, which enhances hot output streams of air into the atmosphere (see diagram below).


At the interface of these two “worlds” there is intensive development of heavy storm clouds, and as the moisture-laden clouds from over the ocean try to enter the interior, they are blocked by the hot output current of the dry country (see diagram below).


This phenomenon causes the extreme tropical storms and heavy rainfall (that we have recently witnessed in eastern Australia, especially in SE Queensland) resulting in


catastrophic flooding. The red arrows in the diagram below show exactly where all that heavy rainfall went: straight out to sea.


So what is the solution?

The answer is easy! The process is a little more difficult.

If the hot output streams of the dry interior of the continent are preventing the entry of clouds further inland, what needs to be done is to cool the dried country using plants and water as the cooling agent. Plants act as solar operated air conditioners that cool the landscape during the day via the transpiration of water.

I say this process is not so easy, because, normally there is so little water inland. When there is too much water on the east coast, as we have seen in the recent time of extreme rainfall, it causes disastrous flooding. The solution is to understand the unique water holding systems that once functioned in the Australian landscape, west of the mountain range. It was plant life that enabled the land to collect and hold the rain, which in turn would cool and moderate climate.


Australia’s unique floodplain systems were once in-ground water storage systems, covered by a wealth of transpiring plant biodiversity which cooled the land and gave rise to rain-bearing clouds. If this natural hydrological cycle were reinstated, rainfall could be redirected inland to reduce flood conditions on the east coast. This would potentially solve the flood problem all over the east coast of Australia and reduce excess rainwater run-off to the sea. This water could then be used for the rehabilitation and restoration of ecosystems within the continent.


Rainwater intercepted in the country will evaporate into the atmosphere, thereby inhibiting the production of sensible heat, and create clouds which, after condensation, will fall again as gentle rain in the inland of the country.


This approach simply restores the ‘small water’ cycles, and life in the landscape will spread further inland.


At the same time, clouds made by the small, biodiversity-based water cycles link with clouds of the large water cycle created by evaporation from the sea. This relationship restores the function of the biotic pump, as described by the Russian scientists Gorshkov and Makarieva. They argue that originally, with the vegetation cover on the east coast intact, total rainfall was higher and more evenly balanced across the inland.


The restoration of water in the small water cycles of plant transpiration and recovery remediates climate change; mitigates temperature extremes and extreme weather events; and results in a healthy environment with plenty of water for people, food, nature, and continental security and prosperity.


Globally we have experienced not only the degradation of our aquatic ecosystems and the loss of rainfall, but also greater water imbalances, with more heavy, intense rain in place of cooler, gentler rain.

We concentrate only on water that we can see. Water that evaporates cannot be seen and is thus considered lost to the system. Yet evaporated water in the atmosphere, after condensing as rain, is returned to the surface of the landscape and restores life. So it is not a loss, but part of the eternal cycle of water in a landscape. Water is only lost to the land system when it drains straight into the sea or is lost to hot air above the warm landscape!

Re-coupling Australia’s unique plant/water/climate systems will allow the return of rainwater into the interior of the dry inland country and Australia can start a new era of sustainable prosperity.

A working model of the restoration of water in the small water cycles can be seen in Australia at Tarwyn Park in the Bylong Valley.

In pioneering this approach, Australia could be an example to the rest of the world.

Michal Kravčík, Tuesday 25 January 2011, 09:24

January 11, 2011

No (statistical) warming since 1995? Wrong

Filed under: Global Warming — buildeco @ 1:59 pm

by Barry Brook

Yes, I’m still on vacation. But I couldn’t resist a quick response to this comment (and the subsequent debate):

BB: Do you agree that from 1995 to the present there has been no statistically-significant global warming?

Phil Jones: Yes, but only just.

Here is the global temperature data from 1995 to 2010, for NASA GISS and Hadley CRU. The plot comes from the Wood for Trees website. A linear trend is fitted to each series.

Both trends are clearly upwards.

Phil Jones was referring to the CRU data, so let’s start with that. If you fit a linear least-squares regression (or a generalised linear model with a gaussian distribution and identity link function, using maximum likelihood), you get the following results (from Program R):

glm(formula = as.formula(mod.vec[2]), family =
                       gaussian(link = "identity"),
    data = dat.2009)

Deviance Residuals:
      Min         1Q     Median         3Q        Max
-0.175952  -0.040652   0.001190   0.051519   0.192276  

              Estimate Std. Error t value Pr(>|t|)
(Intercept) -21.412933  11.079377  -1.933   0.0754 .
Year          0.010886   0.005534   1.967   0.0709 .
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for gaussian family taken to be 0.008575483)

    Null deviance: 0.14466  on 14  degrees of freedom
Residual deviance: 0.11148  on 13  degrees of freedom
AIC: -24.961

Two particularly relevant things to note here. First, the Year estimate is 0.010886. This means that the regression slope is +0.011 degrees C per year (or 0.11 C/decade or 1.1 C/century). The second is that the “Pr” or p-value is 0.0709, which, according to the codes, is “not significant” at Fisher’s alpha = 0.05.

What does this mean? Well, in essence it says that if there was NO trend in the data (and it met the other assumptions of this test), you would expect to observe a slope at least that large in 7.1% of replicated samples. That is, if you could replay the temperature series on Earth, or replicate Earths, say 1,000 times, you would, by chance, see that trend or larger in 71 of them. According to classical ‘frequentist’ statistical convention (which is rather silly, IMHO), that’s not significant. However, if you had only observed this in 50 of 1,000 replicate Earths, that WOULD be significant.

Crazy stuff, eh? Yeah, many people agree.

Alternatively, and more sensibly, we can fit two models: a ‘null’ with no slope, and a TREND model with a slope, and then compare how well they fit the data (after bias corrections). A useful way to do this comparison is via the Akaike Information Criterion – in particular, the AICc evidence ratio (ER). The ER is the model probability of the slope model divided by that of the intercept-only model, and is, in concept, akin to Bayesian odds ratios. The ER is preferable to a classic null-hypothesis significance test because the likelihood of the alternative model is explicitly evaluated (not just the null). Read more about it in this free chapter that Corey Bradshaw and I wrote.

Here is what we get:

           k    -LogL      AICc     dAICc      wAIC    pcdev
CRU ~ Year 3 15.48054 -22.77926 0.0000000 0.5897932 22.93616
CRU ~ 1    2 13.52652 -22.05304 0.7262213 0.4102068  0.00000

The key thing to look at here is the wAIC values. The ER in this case is 0.5897932/0.4102068 = 1.44. So, under this test, the model that says there IS a trend in the data is 1.44 times better supported by the data than the model that says there isn’t. The best supported model is the TREND model, but really, it’s too hard with this data to separate the alternative hypotheses.
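The table's AICc and weight values can be reproduced directly from the log-likelihoods it reports (a sketch; I'm assuming k counts the slope, intercept and residual variance parameters, and n = 15 years, 1995-2009):

```python
# Reproduce the AICc table and evidence ratio from the reported log-likelihoods.
import math

def aicc(log_lik, k, n):
    # small-sample corrected Akaike Information Criterion
    return 2 * k - 2 * log_lik + 2 * k * (k + 1) / (n - k - 1)

n = 15  # annual anomalies, 1995-2009
aicc_trend = aicc(15.48054, k=3, n=n)   # CRU ~ Year: slope, intercept, variance
aicc_null  = aicc(13.52652, k=2, n=n)   # CRU ~ 1:   intercept, variance

# Akaike weights: exp(-dAICc/2), normalized across the candidate models
best = min(aicc_trend, aicc_null)
w_trend = math.exp(-(aicc_trend - best) / 2)
w_null  = math.exp(-(aicc_null - best) / 2)
total = w_trend + w_null
er = (w_trend / total) / (w_null / total)   # evidence ratio

print(round(aicc_trend, 5), round(aicc_null, 5))  # -22.77926, -22.05304
print(round(er, 2))                               # 1.44
```

The same function with n = 16 reproduces the strengthened result after 2010 is added.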

With more data comes more statistical power, however. Say we add the results of 2010 to the mix (well, the average anomaly so far). Then, for the null hypothesis test, we get:

glm(formula = as.formula(mod.vec[2]), family =
                        gaussian(link = "identity"),
    data = dat)

Deviance Residuals:
      Min         1Q     Median         3Q        Max
-0.174040  -0.041956   0.008072   0.044350   0.193146  

              Estimate Std. Error t value Pr(>|t|)
(Intercept) -22.456037   9.709805  -2.313   0.0365 *
Year          0.011407   0.004849   2.353   0.0338 *
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for gaussian family taken to be 0.007993787)

    Null deviance: 0.15616  on 15  degrees of freedom
Residual deviance: 0.11191  on 14  degrees of freedom
AIC: -27.996

For the ER test, we get:

           k    -LogL      AICc    dAICc      wAIC    pcdev
CRU ~ Year 3 16.99796 -25.99592 0.000000 0.7552163 28.33275
CRU ~ 1    2 14.33287 -23.74266 2.253259 0.2447837  0.00000

The ER = 3.1.

So, the “significance test” suddenly (almost magically…) goes from being non-significant to significant at p = 0.05 (because Pr is now 0.0338), or about 34 times out of 1,000 by chance.

Although the ER test result is strengthened, the previous conclusion, that the TREND is the best model (of these two alternatives), doesn’t change. This test is a little more robust, and certainly less arbitrary, because no matter what the data, we are always evaluating the strength of our evidence rather than whether some pre-defined threshold is crossed.

You can do the same exercise with GISTEMP, but it’s less ‘interesting’, because GISTEMP shows a stronger trend, due largely to its inclusion of Arctic areas.

For GISTEMP, the 1995-2009 data yield a slope of 0.0163 C/year, a p-value = 0.0082, and an ER = 13.4 (that is, the TREND model is >10 times better supported by this data). The 1995-2010 (December-November averages) for GISTEMP gives a slope of 0.0174 C/year, a p-value = 0.0021, and an ER = 57.8 (TREND now >50 times better supported!).

You can see that for relatively short time series like this, adding extra years can make a reasonable difference, so longer series are preferable in noisy systems like this.

Okay, does that answer the top quote? I think so, but I’m happy to answer questions on details. Otherwise, I’ve got a guest post from DV82XL that I’ll put up shortly to re-invigorate our debate about carbon prices…

December 21, 2010

The effect of cutting CO2 emissions to zero by 2050

Filed under: Climate Change, Emissions Reduction, Global Warming — buildeco @ 10:59 am

Guest post for Barry Brook by Dr Tom M. L. Wigley


Tom is a senior scientist in the Climate and Global Dynamics Division of the US National Center for Atmospheric Research and a former Director of the CRU. He is an adjunct Professor at the University of Adelaide. For his list of papers and citations, click here (his h-index is 70!). Tom is also a good friend of mine and a strong supporter of the IFR.

What would happen to CO2 concentrations, global-mean temperature and sea level if we could reduce total CO2 emissions (both fossil and net land-use change) to zero by 2050? Based on the literature that examines possible policy scenarios, this is a virtually impossible goal. The results presented here are given only as a sensitivity study.

To examine this idealized scenario one must make a number of assumptions. For CO2 emissions I assume that these follow the CCSP MiniCAM Level 1 stabilization scenario to 2020 and then drop linearly to zero by 2050. For the emissions of non-CO2 gases (including aerosols and aerosol precursors) I assume that these follow the extended MiniCAM Level 1 scenario (Wigley et al., 2009). The extended Level 1 scenario provides emissions data out to 2300. Note that the Level 1 scenario is the most stringent of the CCSP stabilization scenarios, one that would almost certainly be very costly to follow using traditional mitigation strategies. Dropping CO2 emissions to zero is a much more stringent assumption than the original Level 1 scenario, in which total CO2 emissions are 5.54GtC/yr in 2050 and 2.40GtC/yr in 2100.
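The shape of this assumed emissions pathway (scenario values to 2020, then a straight-line decline to zero at 2050) is simple to sketch; the 2020 starting value below is a placeholder for illustration, not the actual Level 1 number:

```python
# Sketch of the assumed CO2 emissions pathway: scenario emissions to 2020,
# then a linear ramp to zero by 2050. The 2020 level (9 GtC/yr) is a
# placeholder assumption, not a value taken from the Level 1 scenario.

E_2020 = 9.0  # GtC/yr, hypothetical starting level

def emissions(year):
    if year <= 2020:
        return E_2020                       # scenario values would go here
    if year >= 2050:
        return 0.0                          # zero emissions from 2050 onward
    return E_2020 * (2050 - year) / 30      # linear decline, 2020 to 2050

for y in (2020, 2035, 2050):
    print(y, emissions(y))   # 9.0, 4.5, 0.0
```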

For modeling the effects of this new scenario one must make assumptions about the climate sensitivity and various other model parameters. I assume that the sensitivity (equilibrium warming for 2xCO2) is 3.0C, the central estimate from the IPCC AR4. (Note that the 90% confidence interval for the sensitivity is about 1.5C to 6.0C – Wigley et al., 2009.)

For sea level rise I follow the AR4 and ignore the possible effects of accelerated melt of the Greenland and Antarctic ice sheets, so the projections here are almost certainly optimistic. All calculations have been carried out using version 5.3 of the MAGICC coupled gas-cycle/climate model. Earlier versions of MAGICC have been used in all IPCC reports to date. Version 5.3 is consistent with information on gas cycles and radiative forcing given in the IPCC AR4.

The assumed CO2 emissions are shown in Figure 1.

The corresponding CO2 concentration projection is shown in Figure 2. Note that the MAGICC carbon cycle includes climate feedbacks on the carbon cycle, which lead to somewhat higher CO2 concentrations than would be obtained if these feedbacks were ignored.

Global-mean temperature projections are shown in Figure 3. These assume a central climate sensitivity of 3.0C. Temperatures are, of course, affected by all radiatively active species. The most important of these, other than CO2, are methane (CH4) and aerosols. In the Level 1 scenario used here both CH4 and aerosol precursor (mainly SO2) emissions are assumed to drop substantially in the future. CH4 concentrations are shown in Figure 4. The decline has a noticeable cooling effect. SO2 emissions drop to near zero (not shown), which has a net warming effect.

The peak warming is about 0.9C relative to 2000, which is about 1.7C relative to pre-industrial times. This is below the Copenhagen target of 2.0C – but it clearly requires a massive reduction in CO2 emissions. Furthermore, the warming peak could be significantly higher if the climate sensitivity were higher than 3.0C. For a 3.0C sensitivity, stabilizing temperatures at 2.0C relative to the pre-industrial level could be achieved with much less stringent CO2 emissions reductions than assumed here. The standard Level 1 stabilization scenario, for example, gives a 50% probability of keeping below the 2.0C target.

Figure 5 gives the sea level projection for the assumed scenario. This is a central projection. Future sea level is subject to wide uncertainties arising from uncertainties in the climate sensitivity and in parameters that determine ice melt. As noted above, the projection given here is likely to be an optimistic projection. Note that sea level roughly stabilizes here, at a CO2 concentration of about 320ppm. Less optimistic assumptions regarding the emissions of non-CO2 gases would require a lower CO2 concentration level. Given the unrealistic nature of the assumption of zero CO2 emissions by 2050, this is a graphic illustration of how difficult it would be to stabilize sea level – even at a level more than 20cm above the present level.

Key reference:
T. M. L. Wigley, L. E. Clarke, J. A. Edmonds, H. D. Jacoby, S. Paltsev, H. Pitcher, J. M. Reilly, R. Richels, M. C. Sarofim and S. J. Smith (2009) Uncertainties in climate stabilization. Climatic Change 97, 85-121, DOI: 10.1007/s10584-009-9585-3.

November 25, 2010

Of brains, biceps and baloney

Filed under: Climate Change, Global Warming, Livestock's long shadow — buildeco @ 9:51 pm

Guest Post by Geoff Russell. Geoff is a mathematician and computer programmer and is a member of Animal Liberation SA. His recently published book is CSIRO Perfidy.

NASA climate scientist James Hansen’s recent book Storms of my Grandchildren makes accessible the evidence behind the judgement of many climate scientists that we need to get atmospheric carbon dioxide back to 350 ppm (or perhaps 300-325 to be really safe) to avoid dangerous climate tipping points. But he also makes it clear that merely redesigning the global energy infrastructure isn’t enough, other important climate forcings like methane, nitrous oxide and black carbon must also be reduced.

What do we need to do?

Here’s Hansen’s todo list. Stick it on the fridge.

  1. Phase out all coal fired power stations by 2030. Of course, you can still use coal if you sequester all the emissions, … good luck with that.
  2. Undo 200 years of deforestation. We need to start this now, but it will take over 100 years and contribute a reduction of about 50ppm by 2150.
  3. Reduce non-carbon dioxide forcings. Hansen is a little vague here, but the argument implies that pre-industrial levels are required.

Now, if the next sentence doesn’t hit like a shattering ice-shelf, then reread until it does. All three items are mandatory. This isn’t a smorgasbord where you pick what you want and ignore the rest. With countries around the world still building new coal power plants, the first todo is looking shaky. Fortunately the second and third are technically easier. We don’t need any new science or technologies but the politics are diabolical.

You can’t tackle reforestation without a global food system rethink. People who’ve read my previous posts on BNC understand this, but be patient while I race through a little background for new readers.

As with reforestation, steep reductions of methane, black carbon and nitrous oxide forcings also require a rethink of the global food system. This is because 96 megatonnes of the 350 megatonnes of anthropogenic methane emitted annually are due to livestock. It’s also livestock production which is responsible for the bulk of the annual global conflagrations that prevent plenty of natural reforestation while also contributing rather a lot of black carbon. This is covered in Boverty I. The good news is that 38 megatonnes of methane emissions will go when we stop mining coal, and another 73 megatonnes are tied up with oil and gas production and can be relatively easily dealt with when there is a will to do so.
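A quick tally of these methane figures (all numbers from this post) puts the shares side by side:

```python
# Shares of the ~350 Mt/yr of anthropogenic methane, using this post's figures.
total = 350        # Mt/yr anthropogenic methane
livestock = 96
coal_mining = 38
oil_and_gas = 73

for name, mt in [("livestock", livestock), ("coal mining", coal_mining),
                 ("oil and gas", oil_and_gas)]:
    print(f"{name}: {mt} Mt/yr ({100 * mt / total:.0f}%)")
# livestock is ~27% of the total; fossil fuel production (38 + 73 = 111 Mt)
# is ~32%, which is the "good news" slice tied to the energy transition
```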

The livestock reforestation impediment

Currently, a major sticking point on reforestation is the attitude to animal product consumption of the UN FAO, which is summed up in the just-released report on the greenhouse gases associated with the dairy sector: “Without concerted action, emissions [from livestock] are unlikely to fall. On the contrary, they are rising, as global demand for meat, milk and eggs continues to grow rapidly. Projected population growth and rising incomes are expected to drive total consumption higher–with meat and milk consumption doubling by 2050 compared to 2000 (FAO, 2006b).”

The cognitive dissonance at the UN FAO in understanding that livestock is currently the largest driver of deforestation, but also planning for a doubling of meat and milk consumption by 2050 while trying to reform a frontier cowboy culture is extreme. Any growth in the global livestock industry will make ending deforestation difficult and the required massive global reforestation impossible.

A huge part of the problem is that decades of meat and dairy industry propaganda have left people with a cult-like certainty that these are some kind of wonder food. This sentiment is echoed in a recent special edition of Science on Food Security, in a paper by H. Charles Godfray et al:

… in developing countries, meat represents the most concentrated source of some vitamins and minerals, which is important for some individuals such as young children.

Note that Godfray felt no need to justify his claim. Henry Thoreau wrote about a similar prejudice in 1852:

One farmer says to me, You cannot live on vegetable food solely, for it furnishes nothing to make bones with; and so he religiously devotes a part of his day to supplying his system with the raw material of bones; walking all the while he talks behind his oxen, which, with vegetable-made bones, jerk him and his lumbering plow along in spite of every obstacle.

How can you argue with the likes of Godfray when Science, one of the world’s top peer-reviewed science journals, allows him to get away with unsubstantiated assertions? There is not even any science to debate if you don’t justify your claims. Thankfully, the 2006 UN Livestock’s Long Shadow (LLS) report provides a hint of science in its justification for pushing livestock products in the developing world:

Children in particular have been shown to benefit greatly in terms of physical and mental health when modest amounts of milk, meat, or eggs are added to their diets, as shown by long-term research carried out in Kenya (Neumann 2003)

The above two quotes go to the heart of the international stranglehold of the livestock industry on the only organisations with enough political clout to have a chance of driving a major global reforestation effort. The players like the UN, the EU and major national Governments.

This post examines the second of these quotes in detail.

But what about all the starving children?

The LLS quote is some serious blackmail. It implies that nobody who cares about starving children could possibly suggest any reductions in global meat production, particularly in the developing world. There have been plenty of calls for a decrease in global meat production via a transfer of meat to the developing world. The most important of these was back in 2007 from an Australian team writing in one of the world’s top medical journals, The Lancet. They proposed the world’s average 100 gram per day meat intake be reduced to 90 grams per day with a hefty redistribution to even out global consumption. High income countries would drop from 200-250 grams per day to 90 grams per day and low income countries would increase their meat from 25-50 grams per day to 90 grams per day. Such a move might halt deforestation, depending on if and when the global population levels off, but it will clearly not be enough to allow the necessary massive reforestation.

The Lancet paper also contains a nice little qualitative table claiming that the proposed increase in meat in the developing world would heavily decrease childhood stunting. Such a claim is in line with the LLS quote, but no reference was given. Again as with Godfray, it seems nobody thought any evidence was needed.

LLS had 6 authors with the lead being the coordinator of the UN FAO’s Livestock Environment and Development Initiative (LEAD) … Henning Steinfeld. Is the quote just a demonstration that LEAD is a livestock industry pawn, or is it simply good science? Let’s look at the Neumann studies.

Show me the data!

The Neumann paper cited by LLS is part of a set that appeared in a 2003 supplement in the Journal of Nutrition.

The papers describe a study involving 554 children in Kenya provided supplemental food on a daily basis for 12 months. This is solid, careful, painstaking clinical research involving a supervised team of over 100 locals taking blood samples, preparing food, measuring biceps, administering IQ tests, freezing and transferring blood samples to the US for processing, and dealing with a maze of logistical difficulties. All the meat was shipped into the rural area from Nairobi. Many of the children involved came from families with cattle, but they rarely ate the cattle or milked them. A Control group got no extra food at all. Why did they agree to take part? They were given a milking goat at the end of the research. Great PR for the dairy biz.


The randomisation to different extra-food groups was done by school. All the children of the selected age group in one school got more meat, those in another school got more milk and another got just plain food. So 12 schools were allocated to one of 4 groups … 3 schools per group. It’s easy to understand why this randomisation procedure was used, but equally easy to see how something unusual in even a single school might cause problems.

Hang in there while I describe accurately the extra food the children got. This kind of detail is unusual in a blog, but BNC is different and the details matter.

Who got fed what

The children had a median age of 7.4 years, and food intake before the study was highly variable, with a quarter of the children stunted at the start of the study. In addition to a Control group getting no supplementary food, there were 3 types of daily supplement, denoted Meat, Milk and Energy by the study authors. I’ll call them Meat, Milk, and Plants. Calling the Plant food group Energy makes it sound like that’s the only thing plants can provide … a revealingly silly choice of terminology.

The three supplementary food groups were built around a local stew made of maize, beans and greens and all balanced to contain about 240 Calories. So the children either got stew with meat (60 grams of beef mince), stew with milk (200 ml), stew only, or nothing.

Comparing the LLS quote with the actual research

Now, how does LLS’s description of the results compare with what actually happened? Note that all the key studies are publicly available thanks to the enlightened policies of the Journal of Nutrition. You can read them yourself.

First, there was no egg in any of the supplementary feeding. Oops, strike one for LLS. Second, is 60 grams per day for a 7 year old modest? It’s almost double the Australian National Health and Medical Research Council recommended meat intake for a 7 year old. It’s double the per capita daily production of beef in Kenya. It’s close to double the average red meat intake of Australian 7 year olds (subscription required for this link). After a few months in the study, the meat supplement was increased to 85 grams/day and the milk increased to 250 ml. To describe this intake as modest seems a poor choice of adjective.

The daily food supplement was called a snack by the researchers, and had about the same caloric value as a standard McDonald’s hamburger which has 90 grams of beef mince, similar to the 85 grams in the Meat snack. This is also close to the 90 grams of meat recommended by the Lancet authors (although they recommended more of that meat be chicken or pig meat). The beans and greens however would have made the Meat snack rather more nutritious than a hamburger.

Did the children benefit greatly in physical or mental health as LLS claimed?

The title of the paper describing the physical impacts seems clear: Food Supplements Have a Positive Impact on Weight Gain and the Addition of Animal Source Foods Increases Lean Body Mass of Kenyan School children. But as with everything else about this research, you have to actually read the damn papers, not just the titles and not just the abstracts to find out what actually happened.

All the intervention groups gained an average of 10% (0.4 kilograms) more weight than the Control group, but there were essentially no changes in height for age … sorry about the jargon, what does this mean? No change in stunting.

There were no statistically significant differences in height gain, body fat (as measured by skin fold tests) or a few other measures, but the Meat group (and not the Milk group) got statistically significantly bigger biceps … how much bigger? … after all, this is where the Increases Lean Body Mass of the title comes from. So, are we talking little Kenyan Schwarzeneggers? Not quite. The biceps were bigger in circumference by less than 1 millimetre: the Meat group’s bicep increased by 7.1 mm compared with 6.5 mm in the Plants group. Also, as usual, the paper’s title is misleading, because it wasn’t all animal source foods which achieved this mighty sub-millimetre muscular gain, only meat. Milk produced the same gains as Plants. The area of the biceps was also bigger in the Meat group (but not in the Milk group), but whether either of these changes was due to nutrition, or to the possibility that one or more schools did more physical education, wasn’t discussed.
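It’s worth seeing how a difference of 0.6 mm in mean bicep gain can still clear the bar of statistical significance. The sketch below is a rough illustration, not the study’s actual analysis: the two group means come from the paper, but the standard deviation and group sizes are assumptions of mine.

```python
import math

def two_sample_z(mean_a, mean_b, sd, n_a, n_b):
    """Two-sample z statistic, assuming a common standard deviation."""
    se = sd * math.sqrt(1 / n_a + 1 / n_b)
    return (mean_a - mean_b) / se

def p_value(z):
    """Two-sided p-value from the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Group means (mm) are from the study; the sd and group sizes are assumed.
z = two_sample_z(7.1, 6.5, sd=2.0, n_a=130, n_b=130)
print(f"z = {z:.2f}, p = {p_value(z):.3f}")  # p < 0.05 despite a 0.6 mm effect
```

With a hundred-odd children per arm, even a sub-millimetre difference in means produces a small p-value: significance testing answers “is the difference probably real?”, not “does it matter?”.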

The other aspect of physical health that could be construed as important is micronutrient status. We will deal with that below.

Oh yes, I almost forgot. The researchers were alert to all manner of possible confounding problems. They even measured the food given at home to see if it changed as a result of knowing that the children were getting extra at school. It did. The Control children and the Meat children both got an increase of food at home (about 150 Calories), while the Milk and Plants children received a similar sized decrease in home nutrition. The quantity of extra or reduced home feeding wasn’t uniform over time but the direction of the changes persisted throughout the study.

The researchers went to great lengths to measure physical activity, but didn’t report any details except to claim an improvement. The activity results were published in 2007. I could describe them, but this post will be quite long enough as it is. Suffice to say that, regardless of the supplementation, many of the children would still not have been getting enough food to support high activity. I base this claim on the variance in the reported energy intake of the children and the Australian National Health and Medical Research Council (NHMRC) recommended energy intakes for children, suitably adjusted because Australian children are a little bigger at the same age. The reported extra 150 Calories per day over the 12 months for the meat group didn’t rate a mention in the 2007 paper which duly reported more high activity among the Meat group compared to any others.

That’s strike two for LLS, there were no great physical benefits for the meat children over and above the benefits of extra food.

Is taller better?

It’s worth noting here that while stunting is a definite indicator of undernutrition, it doesn’t follow that maximising growth rates in children is good. There is a really good reason not to maximise growth rates. The big “C”. Your height as an adult is a good cancer predictor … greater height equals greater cancer risk. The 2007 World Cancer Research Fund report explains how it works and gives “adult attained height” the rare accolade of having been convincingly shown to be a cause of bowel and breast cancers, with a probable role in other cancers. So if additional animal source foods do maximise growth, then this is evidence against them, not for them.

Great improvements in mental health?

Now for the last of the LLS claims. Recall, LLS also told the world that animal foods, all of them, produce great benefits for mental health. I call this LLS’s meat head claim.

I’ll begin with a quote from the abstract of the relevant study before revealing the actual results. The abstract is beautifully crafted to mislead people who don’t read the entire study while being somewhat defensible in the face of a claim of fraudulent misrepresentation. First, comes the quotable quote, the take home message, the thing that will survive in the annals of meat industry propaganda:

Results suggest that supplementation with animal source food has positive effects on Kenyan children. …

There you have the guts of the LLS claim. But LLS inflated “positive effects” into “benefitted greatly”. The abstract follows up on this general claim with a semblance of the truth, expressed as abstractly and vaguely as possible:

However, these effects are not equivalent across all domains of cognitive functioning, nor did different forms of animal source foods produce the same beneficial effects.

Now let’s see what happened. The researchers measured 3 things: arithmetic, verbal skills and Raven’s Progressive Matrices (RPM). There were virtually no differences on the first two. Even the Control group, living on their normal diet and the promise of a goat, made almost identical gains to the children with the burger-equivalent snacks.

But the Raven’s results showed a statistically significant impact. Keep in mind that the bicep increase (of less than 1 mm) was also “statistically significant” … which is a rather different concept from “important”. An RPM test consists of a matrix of geometrical patterns with one missing; usually, the matrix is accompanied by a set of candidate patterns to choose from.

The Meat group did statistically significantly better than the Plants and Control groups, while the Milk group did statistically significantly worse. The sizes of the effects were similar. What we are talking about here is a relative change in the slope of the increase in RPM scores as the children aged.

Despite the lower RPM rise rate in the milk group, neither the study authors nor the LLS authors are recommending less milk to prevent a decline in mental health and there was no mention of rethinking the Control group’s free milking goat and perhaps delivering it in sliced and diced form.

By now, you will understand that the research cited by LLS doesn’t show what it was supposed to. Not even close. It was funded by the USAID Global Livestock Nutrition Collaborative Research Support Program and was a substantial study carried out by well qualified people with a financial and professional interest in showing that animal foods are a god-send to poor children in developing countries. But apart from the occasional misrepresented and tiny result, they found nothing. This must have disappointed another of their funding sources: the National Cattlemen’s Beef Association.

The sloppy, inaccurate and uncritical citation of these non-results by otherwise careful LLS authors just reflects what happens when people have been brainwashed by the tunnel vision of the dominant meat-obsessed, cuisine-challenged culture.

Summing up

Remember that we began with a study cited by LLS which had UCLA Professor Charlotte Neumann as the lead author. Here is its full title.

Animal Source Foods Improve Dietary Quality, Micronutrient Status, Growth and Cognitive Function in Kenyan School Children: Background, Study Design and Baseline Findings

The title makes four claims and we can now summarise their accuracy:

  1. Animal source foods increase dietary quality. Vacuously true by Neumann’s definition of quality.
  2. Animal source foods increase growth … trivially true, but did it increase growth more than plant source foods? No.
  3. Cognitive functions … if the meat RPM increase is considered important, then the milk decrease should be similarly considered important … I’d judge both to be trivial and confounded.
  4. Micronutrient status … with the exception of B12, this is false. Again we need to read a subsidiary study. This paper says that none of the supplementary feeding had any impact on any biochemical nutrient measures except B12. Even with B12, the results will surprise some people. The rate of serious B12 deficiency dropped in the Meat and Milk supplemented groups, but the rate of moderate deficiency actually increased in the Meat group.

The status paper has various results tables. Let me just cherry pick a few results, not because they prove anything, but because they will surprise normal meat eaters. Serum zinc levels fell in all the groups, but fell most in the Meat group. Oops, not good. Ditto copper. Plasma folate fell more in the Meat and Milk groups than in the Plants group. Hemoglobin levels rose more in the Plants group than in the Meat and Milk groups. Serum iron increases in the Plants group were double those in the Milk group. The researchers defined anemia as having hemoglobin levels below 115 g/L. The group which had the biggest fall in anemia rates was … wait for it … the Control group!

There’s an old saying that when all you have is a hammer, everything looks like a nail. There are clearly some complex interactions happening between many factors in these children, some of which are probably not on anybody’s radar, let alone that of researchers who see animal foods as the ultimate hammer.

The issue of B12 is important and came up in the blog responses to Boverty II. The children given Meat or Milk in the Kenyan study didn’t all end up with good B12 blood levels. Judging by the rise in moderate deficiency, some went backwards. How could this happen? The B12 in animal foods is bound to protein and not well absorbed. The B12 in supplements is easier to absorb, doesn’t come with saturated fat, bowel cancer causing heme iron and other carcinogens, and can be supplied to 9 billion people without the deforestation that animal products on the required scale would entail. Older people (over 50) frequently have sub-optimal absorption anyway, which is why the US Institute of Medicine advises all people over 50 to use B12 supplements … whether or not they eat meat. B12-fortified foods are common in developed countries; they need to become globally ubiquitous … much as iodine is in salt.

So you can stop reading now … if your only concern is the possible deleterious impact that reforestation and a consequent reduction in global livestock could have on global health, particularly to vulnerable children in developing countries. In my previous BNC post, Boverty Blues, I explained the mechanics of the livestock anchor chain depressing reforestation and agricultural productivity in many parts of Africa.

But the rest of the story about this research is fascinating and should be told.

RPM scores have risen in Kenya without animal foods!

Was the size of the RPM improvement as cognitively significant as the mighty increase in bicep circumference? The researchers don’t hazard an opinion. Most of them are also co-authors on another paper involving Kenyan children and RPM scores. This paper shows that there are ways of getting genuinely large increases in RPM scores without adding any animal source foods. The paper reports on a 1984 cohort of Kenyan children of similar age who also underwent RPM testing. The difference between the average RPM scores in the 1984 and 1998 cohorts was 4.5 points. This result held even when the 1984 cohort was carefully filtered to make it closely match the characteristics of the 1998 cohort. As we shall see, this isn’t a one-off. RPM scores have been rising globally for decades and the increases are the subject of much research.

What do we know about the possible causes of this particular RPM increase over time? We know with a fair degree of certainty that it wasn’t caused by any increase in animal source foods … because while the 1998 children were better fed than the 1984 group, all of the additional food was plant food. Interestingly, I didn’t find any articles by this group with a title like: Increase in plant foods drives large IQ increase in Kenyan children.

But wait … there’s still more.

In search of the vanishing cohort

The Kenyan research actually involved not one but two different groups of children. My account above described only one, but there is mention of two cohorts in the main Neumann paper. The second cohort had 500 children and was enrolled 12 months after the project start, after a drought and a teachers’ strike caused local food and logistic problems. This extra cohort could be used, the researchers said, either as a replication or to increase the statistical power of the research.

But something happened to Cohort II. The cognitive functions paper just ignores it as does the micronutrient/dietary quality paper and the physical growth response to supplementation paper.

But Cohort II springs to life in a 2007 paper by Neumann et al … where it has shrunk from 500 to 375 without explanation. Cohort II appears in some end-of-term school test scores where Meat did best, followed by Plants, then Milk, with the un-supplemented Control group bringing up the rear. It also appears in a figure describing bicep size changes.

I have emailed Professor Neumann asking what happened to Cohort II, but have so far not had a response.

RPM increases

Last but not least, RPM is a very interesting type of test. We have already noted the Kenyan increase over time without any animal food increments. RPM is a component of most IQ test batteries and children have been getting steep improvements on it (as well as some other IQ tests) for decades, prompting some to speculate that just as improved nutrition is responsible for height increases over the last 80 or so years, it is also responsible for IQ increases. Except that it isn’t.

A recent paper by James Flynn, one of the discoverers of the effect which now bears his name, the Flynn Effect, demolishes the theory. The paper is called: Requiem for nutrition as the cause of IQ gains: Raven’s gains in Britain 1938-2008. The name nearly says it all, except that the paper also considers data far beyond Britain. The point that concerns us is that you can train for RPM and improve. This is fine, except that it doesn’t necessarily bring arithmetic improvements.

The Flynn paper shows that once above some basic threshold level, it isn’t nutrition that drives performance on RPM nor improvements on RPM. It’s easy to dream up simplistic theories about what is driving these increases, but a paper by John Raven (a son of the RPM designer), demolishes more than a few such one-factor theories. Flynn’s own hypothesis about the cause of the increase, presented in his book “What is intelligence?”, is considerably more subtle.

Concluding Remarks

This post began with climate change.

Dealing with climate change requires a global reforestation effort, but that can’t happen without a dietary change, and a dietary change won’t happen while people in positions of authority in developed countries sincerely believe that their own meat-based diet should be the goal of developing countries. The chain is clear.

The false nutritional beliefs are based on decades of advertising lies and plenty of sloppy reporting of scientific results. This has been a longish post untangling a tiny part of the tangled web of nutritional misinformation that must be dispelled as part of our efforts to avoid dangerous climate change.

September 28, 2010

Kakadu – a climate change impacts hotspot

Filed under: Climate Change, Global Warming — buildeco @ 11:37 am

When ecologists, policy makers, or the public, think about the visceral impacts of climate change on Australia’s natural systems, World Heritage listed Kakadu National Park (KNP), located in the seasonal tropics of the Northern Territory, is high on the at-risk list. But looking deeper into the human-driven processes now threatening KNP, there is actually a synergy of interrelated problems requiring simultaneous management – a situation common to most biomes threatened with global warming (Brook et al. 2008).

The big issues for KNP are changed fire regimes (impacting savanna and rain forest communities), rising sea levels (affecting the floodplain wetlands), and a suite of invasive weed and feral animal species, operating across all three major ecosystems. All three threats have a climate change component, although for fire and ferals, not wholly.

The savannas, which by area make up the largest part of KNP, are at first glance apparently intact. There has been relatively little clearance of the woody component (dominated by Eucalyptus tetrodonta and E. miniata); indeed analysis of historical aerial photography has documented vegetation thickening linked to elevated atmospheric CO2, which favours the growth of woody C3 species (Banfai & Bowman 2005). However, an emphasis by Park managers on avoiding hot late dry season fires has meant that a large proportion of KNP is burnt during the dry season, with a return time of 1 to 5 years (Williams et al. 1999).

The impact of regular early season burning on the Park’s biota and on the structuring of understory vegetation is a topic of ongoing debate and research. Nevertheless, some long-term studies have explicitly linked high fire frequencies to species declines (Pardon et al. 2003; Andersen et al. 2005). Climate change, via increased temperatures or shifts in the timing and intensity of monsoonal rainfall, will likely enhance future fire risk (Parry et al. 2007).

The KNP wetlands support a rich and spectacular biota, including vast flocks of magpie geese (Anseranas semipalmata), which congregate in millions to feed on Eleocharis chestnuts growing on the floodplains of the Alligator Rivers system. These wetlands formed ~6,000 years ago after sea level stabilisation, following a post-glacial rise of 120 metres. Ironically, additional sea level change associated with anthropogenic global warming threatens their future viability (Brook & Whitehead 2006).

About 20 cm of sea level rise occurred during the 20th century. At least double that amount – and potentially >1 m due to accelerated polar ice sheet melt – is predicted by 2100. Rising sea levels, in combination with intense tropical storm surges, increase the regularity and severity with which saline flows penetrate the low-lying freshwater wetlands. At the mouth of the Mary River, to the west of KNP, extensive earthen barrages have already been built in an attempt to alleviate the damage caused by salt water intrusion.

A complex network of low-lying natural drainage channels, enlarged or cross-connected by movement of feral animals such as Asian water buffalo (Bubalus bubalis) and pigs (Sus scrofa), means that even a few tens of centimetres of additional sea level rise may be sufficient to degrade or eliminate a large fraction of the floodplain communities (Traill et al. 2010). What will remain are isolated patches of freshwater wetlands within a mire of brackish swamps and saltwater mangroves.

Beyond their impact in facilitating saline intrusion of the wetlands, feral ungulates help spread weed species such as introduced pasture grasses and Mimosa (Bradshaw et al. 2007), which compete with native plants for space and nutrients. Climate change will also cause shifts in the relative ability of invasives to compete with indigenous species, especially if natives are also under stress from herbivore grazing, changing habitat quality and altered fire regimes (Rossiter et al. 2003; Brook 2008).

KNP inevitably faces a tangled web of mutually amplifying processes associated with global change. Given the global failure to achieve significant carbon emissions mitigation to date, it seems that ‘adaptation’ and ‘building resilience’ will be the buzz words for KNP conservation managers for many years to come.

September 13, 2010

Do the recent floods prove man-made climate change is real?

Filed under: Climate Change, Global Warming — buildeco @ 12:32 pm
by Barry Brook

I was asked by the Adelaide Advertiser newspaper to write a short piece last week which addressed the question “Does all the recent rain across the country prove man made climate change is real?“, in less than 500 words. My response, given below, appeared in the print edition on Thursday 9 September 2010:


Does all the recent rain across the country prove man made climate change is real? No.

As Dorothea Mackellar wrote over a century ago, Australia is naturally “A land… Of droughts and flooding rains”.

Putting the impossible issue of ‘proof’ aside, scientists certainly do expect climate change to lead to an increase in the frequency and intensity of extreme weather events. After all, a warmer planet holds extra energy, making today’s climate system more dynamic than when Mackellar penned her poem.

In short, as the Earth’s atmosphere traps more heat due to an increase in greenhouse gases, it triggers more evaporation of water from the oceans. Average global humidity and precipitation rise in response.

As such, climate scientists predict increasingly energetic storms, heavier bursts of rain, and more intense flooding. In many parts of the world, deeper droughts and longer, hotter heat waves are also forecast.

So, while it is impossible to attribute any one event solely to human-caused warming, a useful analogy is that “weather throws the punches, but climate trains the boxer”. Another way to look at it is that human impacts are “loading the climate dice” towards more unfavourable (and previously unlikely) outcomes.

We have probably witnessed this in the unprecedented heat wave in Russia and record floods in Pakistan. These impacts cause great human misery and severe economic and environmental damage.

Earlier this year in Australia, the Bureau of Meteorology released a Special Climate Statement on the recent exceptional rain and flooding events in central Australia and Queensland. February 28th 2010 was the wettest day on record for the Northern Territory, and March 2nd set a new record for Queensland. Over the 10-day period ending March 3rd, an estimated 403 cubic kilometres (403,000 gigalitres) of rain fell across the NT and QLD. Extreme, indeed.
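The Bureau’s volume figure is easy to sanity-check: a cubic kilometre is 10^12 litres, and a gigalitre is 10^9 litres, so 1 km³ = 1,000 GL. A two-line conversion (the function name is mine):

```python
# 1 km^3 = (1000 m)^3 = 1e9 m^3 = 1e12 L; a gigalitre is 1e9 L.
def km3_to_gigalitres(km3: float) -> float:
    return km3 * 1e12 / 1e9

print(km3_to_gigalitres(403))  # 403000.0, matching the quoted 403,000 GL
```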

It’s clear that if such ‘unusual’ climatic events are visited upon us ever more regularly, then there will be practical limits to adaptation, or at least exponentially rising costs involved in coping.

The need for action on eliminating our dependence on fossil fuels is urgent and our window of opportunity for avoiding severe impacts of climate change is rapidly closing. Yet the obstacles to change are not principally technical or economic, they are political and social. But that’s another story.

August 25, 2010

Climate change basics III – environmental impacts and tipping points

Filed under: Climate Change, Emissions Reduction, Global Warming — buildeco @ 6:48 pm

by Barry Brook

The world’s climate is inherently dynamic and changeable. Past aeons have borne witness to a planet choked by intense volcanic activity, dried out in vast circumglobal deserts, heated to a point where polar oceans were as warm as subtropical seas, and frozen in successive ice ages that entombed northern Eurasia and America under miles of ice. These changes to the Earth’s environment imposed great stresses upon ecosystems and often led to mass extinctions of species. Life always went on, but the world was inevitably a very different place.

We, a single species, are now the agent of global change. We are undertaking an unplanned and unprecedented experiment in planetary engineering, which has the potential to unleash physical and biological transformations on a scale never before witnessed by civilization. Our actions are causing a massive loss and fragmentation of habitats (e.g., deforestation of the tropical rain forests), over-exploitation of species (e.g., collapse of major fisheries), and severe environmental degradation (e.g., pollution and excessive draw-down of rivers, lakes and groundwater). These patently unsustainable human impacts are operating worldwide, and accelerating. They foreshadow a grim future. And then, on top of all of this, there is the looming spectre of climate change.

When climate change is discussed in the modern context, it is usually with reference to global warming, caused by anthropogenic pollution from the burning of fossil fuels. Since the furnaces of the industrial revolution were first ignited a few centuries ago, we have treated the atmosphere as an open sewer, dumping into it more than a trillion tonnes of heat-trapping carbon dioxide (CO2), as well as methane, nitrous oxide and ozone-destroying CFCs. The atmospheric concentration of CO2 is now nearly 40% higher than at any time over the past million years (and perhaps 40 million years – our data predating the ice core record is too sketchy to draw strong conclusions). Average global temperature rose 0.74°C in the hundred years since 1906, with almost two thirds of that warming having occurred in just the last 50 years.

What of the future? There is no doubt that climate predictions carry a fair burden of scientific ambiguity, especially regarding feedbacks in climatic and biological systems. Yet what is not widely appreciated among non-scientists is that more than half of the uncertainty, captured in the scenarios of the Intergovernmental Panel on Climate Change, is actually related to our inability to forecast the probable economic and technological development pathway global societies will take during the twenty-first century. As a forward-thinking and risk averse species, it is certainly within our power to anticipate the manifold impacts of anthropogenic climate change, and so make the key economic and technological choices required to substantially mitigate our carbon emissions. But will we act in time, and will it be with sufficient gusto? And can nature adapt?

The choice of on-going deferment of action is potentially dire. If we do not commit to deep emission cuts (up to 80% by 2050 is required for developed nations), our descendants will likely suffer from a globally averaged temperature rise of 4–7°C by 2100, an eventual (and perhaps rapid) collapse of the Greenland and the West Antarctic ice sheets (with an attendant 12–14 metres of sea level rise), more frequent and severe droughts, more intense flooding, a major loss of biodiversity, and the possibility of a permanent El Niño. This includes frequent failures of the tropical monsoon, which provides the water required to feed the billions of people in Asia.

Indeed, the European Union has judged that a warming of just 2°C above pre-industrial levels constitutes ‘dangerous anthropogenic interference with the climate system’, as codified in the 1992 United Nations Framework Convention on Climate Change. Worryingly, even if we can manage to stabilise greenhouse gas concentrations at 450 parts per million (it is currently 383 ppm CO2, and rising at 3 parts per million per year), we would still only have a roughly 50:50 chance of averting dangerous climate change. Beyond about 2°C of warming, the likelihood of crossing irreversible physical, biological and, ultimately, economic thresholds (such as rapid sea level rise associated with the disintegration of the polar ice sheets, a shutdown of major heat-distributing oceanic currents, a mass extinction of species, and a collapse of the natural hazards insurance industry) becomes unacceptably high.

Unfortunately, there is no evidence to date that we are taking meaningful action to decarbonise the global economy. In fact, it is just the reverse, with recent work showing that the carbon intensity of energy generation in developed nations such as the US and Australia has actually increased over the last decade. Over the same period, the world’s rate of emissions growth has tripled, and total CO2 emissions now exceed 30 billion tonnes a year. China overtook the US in 2006 as the single biggest greenhouse polluter, and within a decade it will be producing twice as much CO2. This remarkable rate of growth, if sustained, means that over just the next 25 years, humans will spew into the atmosphere an additional volume of CO2 greater than the total amount emitted during the 150-year industrial period of 1750 to 2000! Of particular concern is that long-lived greenhouse gases, like CO2, will continue to amplify global warming for centuries to come. For every four tonnes added during a year in which we prevaricate about reducing emissions, one tonne will still be trapping heat in 500 years. It is a bleak endowment to future generations.
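The four-tonnes-to-one-tonne figure corresponds to roughly a quarter of an emitted CO2 pulse remaining airborne after 500 years. A common way to approximate this decay is a sum of exponentials in the style of the Bern carbon-cycle impulse response; the coefficients below are illustrative round numbers of my own, not the model’s official parameter set.

```python
import math

# (fraction, time constant in years); an infinite time constant represents
# the fraction that is effectively permanent on human timescales.
# Illustrative values loosely patterned on Bern-style impulse-response fits.
TERMS = [(0.22, math.inf), (0.26, 173.0), (0.28, 18.5), (0.24, 1.2)]

def airborne_fraction(t_years: float) -> float:
    """Fraction of an emitted CO2 pulse still in the atmosphere after t years."""
    return sum(a * math.exp(-t_years / tau) for a, tau in TERMS)

print(f"{airborne_fraction(500):.2f}")  # about a quarter of the pulse remains
```

The fast terms decay within decades; what dominates at 500 years is the slow and effectively permanent fractions, which is why prevaricating for a year leaves a residue that outlives everyone making the decision.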

Nature’s response to twentieth-century warming has been surprisingly pronounced. For instance, ecologists have documented numerous instances of shifts in the timing of biological events, such as flowering, emergence of insects, and bird migration occurring progressively earlier in the season. Similarly, many species, including insects, frogs, birds and mammals, have shifted their geographical range towards colder realms – towards higher latitudes, upwards in elevation, or both. Careful investigations have also revealed some new evolutionary adaptations to cope with changed climatic conditions, such as desiccation-tolerant fruit flies, and butterflies with longer wings that allow them to more readily disperse to new suitable habitats. On the other hand, some sensitive species have already been eliminated by recent climate change. For instance, the harlequin frog and golden toad were once found in abundance in the montane cloud forests of Costa Rica. But in the 1980s they were completely wiped out by a fungal disease, which flourished as the moist forests began to dry out: a drying caused by a rising cloud layer that was directly linked to global warming.

These changes are just the beginning. Under the current business-as-usual scenario of carbon emissions, the planet is predicted to experience five to nine times the rate of twentieth-century warming over the next hundred years. An obvious question is, will natural systems be able to continue to keep pace? There are a number of reasons to suspect that the majority will not.

Past global climate change characteristically unfolded over many millennia, whereas current anthropogenic global warming is now occurring at a greatly accelerated rate. If emissions are not checked, a level of planetary heating comparable to the difference between the present day and the height of the last ice age, or between now and the age of the dinosaurs (when Antarctica was ice free), is expected to unfold over a period of less than a century! When such catastrophically rapid changes in climate did occur, very occasionally, in the deep past – associated, for instance, with a large asteroid strike from space – a mass extinction event inevitably ensued. Most life just could not cope, and it took millions of years after this shock for biodiversity to recover. It has been estimated that 20 to 60 per cent of species might become extinct in the next few centuries, if global warming of more than a few degrees occurs. Many thousands (perhaps millions) will be from tropical areas, about which we know very little. A clear lesson from the past is that the faster and more severe the rate of global change, the more devastating the biological consequences.

Compounding the issue of the rate of recent climate change is the fact that plant and animal species trying to move to keep pace with the warming must now contend with landscapes dominated by farms, roads, towns and cities. Species will gradually lose suitable living space, as rising temperatures force them to retreat away from the relative safety of existing reserves, national parks and remnant habitat, in search of suitable climatic conditions. The new conditions may also facilitate invasions by non-indigenous or alien species, which will further stress resident species as novel competitors or predators. Naturally mobile species, such as flying insects, plants with wind-dispersed seeds, or wide-ranging birds, may be able to continue to adjust their geographical ranges, and so flee to distant refugia. Many others will not.

A substantial mitigation of carbon emissions is urgently needed, to stave off the worst of this environmental damage. But irrespective of what we do now, we are committed to some adaptation. If all pollution was shut off immediately, the planet would still warm by at least a further 0.7°C.

For natural resource management, some innovative thinking will be required, to build long-term resilience into ecosystems and so stem the tide of species extinctions. Large-scale afforestation of previously cleared landscapes will serve to provide corridors, re-connecting isolated habitat patches. Reserves will need to be extended towards cooler climatic zones by the acquisition of new land, and perhaps abandoned and sold off along their opposite margins. Our national parks may need to be substantially reconfigured. We must also not shirk from taking a direct and active role in manipulating species distributions. For instance, we will need to establish suitable mixes of plant species which cannot themselves disperse, and translocate a variety of animal species. It may be that the new ecological communities we construct will be unlike anything that currently exists.

Such are the ‘unnatural’ choices we are likely to be forced to make, to offset the unintended impacts of our atmospheric engineering. Active and adaptive management of the Earth’s biological and physical systems will be the mantra in this brave new world. Truly, the century of consequences.
