Climate change

September 22, 2011

Why population policy will not solve climate change – Part 1

Filed under: Climate Change, Population growth — buildeco @ 3:23 pm

by Barry Brook

I have given lots of talks on climate change over the last few years. In these presentations, I typically focus on explaining the basis of the anthropogenic climate change problem, how it sits in the context of other human and natural changes, and then, how greenhouse gas emissions could be mitigated with the elimination of fossil fuels and substitution with low-carbon replacement technologies such as nuclear fission, renewables of various flavours, energy efficiency, and so on. When question time follows, I regularly get people standing up and saying something along the following lines:

It is all very well to focus on energy technology, and even to  mention behavioural changes, but the real problem — the elephant in the room that you’ve ignored — is the size of the human population. No one seems to want to talk about that! About population policy. If we concentrated seriously on ways to reduce population pressure, many other issues would be far easier to solve.

On the face of it, it is hard to disagree with such statements. The human population has grown exponentially from ~650 million in the year 1700 AD to almost 7 billion today. When coupled with our expanding economies and concomitant rising demand for natural resources, this rapid growth of the human enterprise has put a huge burden on the environment and driven an accelerating depletion of fossil fuels and various high-grade ores (the Anthropocene Epoch). Obviously, to avoid exhaustion of accessible natural resources and degradation of ecosystems, and to counter the need to exploit increasingly low-grade mineral resources, large-scale recycling and sustainable use of biotic systems will need to be widely adopted. Of this there is little room for doubt.

So, the huge size of the present-day human population is clearly a major reason why we face so many mounting environmental problems and are now pushing hard against planetary boundaries (see diagram above). But does it also follow that population control, via various policies, is the answer — the best solution — to these global problems? It might surprise you to learn that I say NO (at least over meaningful time scales). But it will take some time to explain why — to work through the nuances, assumptions, sensitivities and the global versus regional story. So, in a series of posts, I’ll explain why I’ve reached this conclusion, and, as always, invite feedback!

In part 1, below, I outline some of the basic tools required to come up with some reasonable answers. A huge amount of relevant data on this topic (human demography) is available from the United Nations Population Division, the Human Life-Table Database, the Human Mortality Database, and the U.S. Census Bureau. The data and statistics I cite in these posts come from these sources.

First, let’s look at the global situation. As of 1 July 2011, the human population numbered approximately 6.96 billion people (that’s 6,960 million, give or take a few tens of millions), and is expected to cross 7 billion in March 2012. For historical context, in 1900 it was 1.6 billion, in 1954 it was 3 billion, in 1980 it was 4.5 billion and in 1999 it was 6 billion.

The mid-range forecast is 7.65 billion by 2020, 8.61 billion by 2035, 9.31 billion by 2050 and 10.12 billion by 2100. So, population globally is projected to continue to rise throughout the 21st century, but at a decelerating rate. A summary of a range of scenarios is given in the following table:

The annual growth rate can be calculated from the ratio of one 5-year population total to the previous one, e.g., for the medium variant in 2015: (7,284,296/6,895,889)^(1/5) = 1.011, i.e. 1.1% per annum. This compares to the peak growth rate of 2.2% in the early 1960s — so it’s clear that population growth is already slowing, but only gradually.
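For anyone who wants to reproduce that arithmetic, here is the calculation in R (the two totals are the UN medium-variant figures, in thousands, quoted above; the variable names are mine):

# Annual growth rate implied by two successive 5-year population totals
pop_2010 <- 6895889
pop_2015 <- 7284296

# Fifth root of the 5-year ratio gives the average annual multiplier
annual_rate <- (pop_2015 / pop_2010)^(1/5) - 1
round(100 * annual_rate, 1)   # ~1.1 (% per annum)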

The medium and high variants in the table above indicate no stabilisation of population size until after 2100, while the low variant hits a peak in 2045 with a gradual decline thereafter, reaching the 2001 level once again by the year 2100. The low variant involves assumptions about declining birth rates that are beyond the expectations of most demographers.

In this first post, I want to go beyond the standard UN assumptions to look, in brief, at some more extreme scenarios. I should note here that the model behind these projections is reasonably complex, being based on age-specific mortality and fertility schedules, current cohort-by-cohort inventories (in 5-year-class stages), and the forecast trends in these vital rates over time. In the second post, I’ll explain some of the detail behind this demographic projection model, and explore its sensitivity to various assumptions and parameter estimates. In the third post, I’ll look at the country specific forecasts, from both the developed and developing world. But first, here are some alternative global scenarios, which are not meant to be realistic… merely illustrative.
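To give a flavour of how such a projection works before Part 2, here is a deliberately minimal cohort-component (Leslie matrix) sketch in R. The age classes, survival and fertility values below are invented purely for illustration; the real projections use observed age-specific schedules and time-varying vital rates.

# Toy cohort-component (Leslie matrix) projection in 5-year steps.
# All vital rates here are made up for illustration only.
ages      <- seq(0, 95, by = 5)                        # 5-year age classes
n_classes <- length(ages)

survival  <- rep(0.99, n_classes - 1)                  # chance of surviving each 5-year step
fertility <- ifelse(ages >= 15 & ages <= 45, 0.25, 0)  # births per person per 5-year step

# Build the Leslie projection matrix
L <- matrix(0, n_classes, n_classes)
L[1, ] <- fertility                                    # births enter the first age class
L[cbind(2:n_classes, 1:(n_classes - 1))] <- survival   # survivors move up one class

pop <- rep(100, n_classes)                             # arbitrary starting age structure
for (step in 1:20) pop <- L %*% pop                    # project 20 x 5 = 100 years ahead
sum(pop)                                               # total population after a century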

In Scenario 1, birth and death rates are locked at those observed over the last 5 years. The projected population is slightly larger than the UN medium variant, at 9.4 billion in 2050 and 11.3 billion in 2100. The intrinsic population growth rate (GR) in this model is 0.36% per annum, but due to an unstable initial age structure, the lower equilibrium rate is not reached until 2070. Projecting forward many centuries, to the year 2500, the population size is 47 billion. This is obviously ridiculous — such projections with unchanging vital rates cannot hold over the long term.

Scenario 2 is the UN medium variant, listed in the table above. This assumes that total fertility (TF: number of children a woman would produce over her lifetime if she survived through to menopause) declines from today’s level of 2.5 to 2.03 by the year 2100. There is also a slight decrease in death rates (DR), which I will explain more in the next post.

Scenario 3, illustrated below, is the same as Scenario 2, except total fertility is assumed to decline (linearly) to 1.0 by 2100, rather than to 2.03.

In this case, the 2050 population size is 9.03 billion and the 2100 size is 7.96 billion. This is still higher than the UN low variant, showing that the low variant requires some fairly heroic fertility assumptions. Looking further into the future, if we assume that TF thereafter stabilises at 1, the global population would eventually decline to less than 1 billion by the year 2210, below 100 million in 2300, and below 1 million in 2460! The underlying GR in this forecast, after 2100, is a decline of 2.73% per annum. So, for effective extinction of the human population to be avoided, either birth rates would once again have to rise at some point after 2100, or else (more likely) death rates would decline substantially due to medical improvements. More on this in the second post on this topic.

Right, let’s go even more extreme, Scenario 4. What if global TF declines to 1 by the year 2030 — within the next 20 years — something which could only be achieved by implementation of a global 1-child-per-couple policy within a decade or two (I’ll let you judge the likelihood of this…). Here are the results:

In this case, the 2050 population size is 7.03 billion, and 3.79 billion by 2100. The 1 billion mark is passed in 2160, and 100 million in 2245.

For Scenario 5, let’s assume that some virus or hazardous chemical causes global sterilization by 2015. Here’s what the trajectory looks like.

Population is 4.90 billion in 2050 and crosses 1 billion in about 2090. Virtually everyone is dead by 2120, as you might expect. Now to be fair, the reality of a scenario like this would almost certainly be much worse, because as the population aged with no children, society would quickly fall apart. Most people would probably be dead due to societal collapse by mid-century.

Finally, let’s wind back the TF assumption a bit, but ramp up the death rates. Assume for instance that climate change has caused more famines, disease etc. such that death rates double over the course of the 21st century, rather than decline (as expected) due to improved medical treatments, and TF declines to 1 by 2100.

In this case, Scenario 6, the 2050 global population is 6.89 billion, and it has declined to 2.26 billion by 2100. The 1 billion mark is crossed in 2120, and 100 million in 2160. This seems to be the most plausible of the extremely low variant scenarios that I can possibly justify (I don’t say this is probable). But even in this grim outlook, global population is, in 2050, about the same as today!

The TF = 1 by 2030 (Scenario 4) just does not seem in any way achievable or desirable, and anyway, the total population size in 2050 is still larger than today’s. The conclusion is clear — even if the human collective were to pull as hard as possible on the ‘total fertility’ policy lever, the result would NOT constitute an effective policy for addressing climate change, for which we need to have major solutions well under way by 2050 and essentially wrapped up by 2100.

In summary (for part #1), I support policies to encourage global society to achieve the low growth variant UN scenario. (More on that in the next post). But I must underscore the point that population control policy is patently not the ‘elephant in the room’ that many claim — it’s more like a herd of goats that’s eaten down your garden, and is still there, munching away…


June 23, 2011

What price of Indian independence? Greenpeace under the spotlight

Filed under: Emissions Reduction, Energy Demand, Global Warming — buildeco @ 1:56 pm
Two PWRs under construction in Kudankulam, India

Guest Post by Geoff Russell. Geoff is a mathematician and computer programmer, and is a member of Animal Liberation SA. His recently published book is CSIRO Perfidy. To see a list of other BNC posts by Geoff, click here.

——————

India declared itself a republic in 1950 after more than a century of struggle against British Imperialism. Greenpeace India however, is still locked firmly under the yoke of its parent. Let me explain.

Like many Australians, I only caught up with Bombay’s 1995 change of name to Mumbai some time after it happened. Mumbai is India’s city of finance and film, of banks and Bollywood — a huge, seething coastal metropolis on the north-western side of India. It’s also the capital of the state of Maharashtra, which is about 20 percent bigger than the Australian state of Victoria but has 112 million people compared to Victoria’s 5.5 million. Mumbai alone has over double Victoria’s entire population. Despite its population, the electricity served up by Maharashtra’s fossil fuel power stations plus one big hydro scheme is just 11.3 GW (gigawatts, see Note 3), not much more than the 8 or so GW of Victoria’s coal and gas fumers. So despite Mumbai’s dazzling glass and concrete skyline, many Indians in both rural and urban areas of the state still cook with biomass … things like wood, charcoal and cattle dung.

The modern Mumbai skyline at night

Mumbai’s wealth is a magnet for terrorism. The recent attacks in 2008, which killed 173, follow bombings in 2003 and 1993 which took 209 and 257 lives respectively. Such events are international news, unlike the daily death and illness, particularly to children, from cooking with biomass. Each year, cooking smoke kills about 256,000 Indian children between 1 and 5 years of age with acute lower respiratory infections (ALRI). Those who don’t die can suffer long-term consequences to their physical and mental health. A rough pro-rata estimate would see about 23,000 children under 5 die in Maharashtra every year from cooking smoke.

The image is from a presentation by medical Professor Kirk Smith, who has been studying cooking smoke and its implications for 30 years.

Medical Prof. Kirk Smith’s summary of health impacts from cooking fires

The gizmo under the woman’s right arm measures the noxious fumes she is exposed to while cooking. Kirk doesn’t just study these illnesses but has been spinning off development projects which develop and distribute cleaner cooking stoves, to serve as an interim measure until electricity arrives.

The disconnect between what matters about Mumbai and India generally to an Australian or European audience and what matters locally is extreme. But a visit to the Greenpeace India website shows it is simply a western clone. In a country where real matters of life and death are ubiquitous, the mock panic infecting the front page of the Greenpeace India website at the death-less problems of the Fukushima nuclear plant seems weird at best, and obscene at worst. “Two months since Fukushima, the Jaitapur project has not been stopped”, shouts the text over one front-page graphic, in reference to the nuclear plant proposed for construction at Jaitapur. In those two months, nobody has died of radiation at Fukushima, but 58,000 Indian children have died from cooking smoke. They have died because of a lack of electricity. Some thousands in Maharashtra alone.

Greenpeace, now an obstructive dinosaur

The whole world loved Greenpeace back in its halcyon days of protesting whaling and the atmospheric testing of nuclear weapons. Taking on whalers and the French Navy in the open sea in little rubber boats was indeed worthy of Mahatma Gandhi. But the legacy of those days is now an obstacle to Greenpeace helping to fight the much bigger environmental battles now being fought. As Greenpeace campaigns to throw out the nuclear-powered baby with the weapons-testing bathwater, it seems to have forgotten the 2010 floods which displaced 20 million people on the sub-continent. The Australian Council for International Development reported in May 2011 that millions are still displaced, with 913,000 homes completely destroyed. Millions also have ongoing health issues, with rising levels of tuberculosis, dengue fever and the impacts of extended periods of malnutrition. The economic structure of large areas has been devastated, along with food and seed stocks. Areas in southern Pakistan are still under water.

This foreshadows the scale of devastation which will be delivered more frequently as global warming bites.

Brown clouds, cooking and climate change

Regardless of what you think about nuclear power, you’d think breathable air would be an environmental issue worthy of Greenpeace’s attention, but biomass cooking is missing from Greenpeace India’s campaign headings.

Biomass cooking isn’t just a consequence of poverty, it feeds into a vicious feedback loop. People, usually women and children, spend long periods collecting wood or cattle dung (see image or full study). This reduces educational opportunities, while pressure on forests for wood and charcoal degrades biodiversity. Infections from smoke, even if not fatal, combine with the marginal nutrition produced by intermittent grain shortages to yield short and sickly lifespans, while burning cattle dung wastes a resource far more valuable as fertiliser.

In 2004, a World Health Organisation Report estimated that, globally, 50 percent of all households and 90 percent of rural households cook with biomass. In India, they estimated that 81 percent of Indian households cook with biomass. That figure will have dropped somewhat with significant growth in Indian power generation over the past decade but will still be high.

Biomass cooking isn’t only a health issue, but a significant player in climate change. Globally, the black carbon in the smoke from over 3 billion people cooking and boiling water daily with wood, charcoal or cattle dung forms large brown clouds with regional and global impacts.

Maharashtra’s nuclear plans

Apart from a reliable food supply, the innovation that most easily distinguishes the developed from the developing world is electricity. It’s the shortage of this basic commodity that kills those 256,000 Indian children annually. Electric cooking is clean and slices through the poverty-inducing feedback loop outlined above. Refrigeration reduces not just food wastage but also food poisoning.

If you want to protect forests and biodiversity as well as children in India (and the rest of the developing world), then electricity is fundamental. Higher childhood survival is not only a worthy goal in itself, but it is also critical in reducing birthrates.

Apart from a Victorian-sized coal-fired power supply, the 112 million people of Maharashtra also have the biggest nuclear power station in India. This is a cluster of two older reactors and two newer ones, opened in 2005 and 2006. The newer reactors were constructed by Indian companies and were completed on schedule and below budget. The two old reactors are relatively small, but the combined power of the two newer reactors is nearly a gigawatt. India has a rich mathematical heritage going back a thousand years, which underpins a sophisticated nuclear program. Some high-level analytic techniques were known in India hundreds of years before being discovered in Europe.

India has another, much bigger, nuclear power station planned for Maharashtra. This will be half a dozen huge 1.7 GW French EPR reactors at Jaitapur, south of Mumbai. On its own, this cluster will surpass the entire current output of the state’s coal-fired power stations. The project will occupy 968 hectares and displace 2,335 villagers (Wikipedia). How much land would solar collectors occupy for an Andasol-like concentrating solar thermal system of equivalent output? About 40 times more, and it would either displace something like 80,000 people or eat into India’s few wildlife habitats.

If Greenpeace succeeds in delaying the Jaitapur nuclear plant, biomass cooking in the area it would have serviced will continue together with the associated suffering and death of children. It’s that simple. Greenpeace will have direct responsibility no less than if it had bombed a shipment of medical supplies or prevented the decontamination of a polluted drinking well.

Jaitapur and earthquakes

In the wake of the reactor failures at Fukushima which killed nobody, Greenpeace globally and Greenpeace India are redoubling their efforts to derail the new Jaitapur nuclear plant. The Greenpeace India website (Accessed 9th May) carries a graphic of the Fukushima station with covering text:

The Jaitapur nuclear plant in India is also in an earthquake prone zone. Do we want to take the risk? The people of Jaitapur don’t.

The Greenpeace site claims that the chosen location for the Jaitapur power plant is in Seismic Zone 4, with a maximum recorded quake of 6.3 on the Richter scale. Accepting this as true (Wikipedia says it’s Zone 3), should anybody be afraid?

“Confident” and “relaxed” are far more appropriate responses for anybody who understands the Richter scale. It’s logarithmic. Base 10.

Still confused? A quake of Richter scale size 7 is 10 times more powerful than one of size 6. A quake of size 8 is 100 times more powerful than one of size 6. And a scale 9 quake, like Japan’s monster on March the 11th, is a thousand times more powerful than a quake of size 6. The 40-year-old Fukushima reactors came through this massive quake with damage but no deaths. The reactors shut down as they were designed to, and the subsequent problems, still fatality-free and caused primarily by the tsunami, would not have occurred with a more modern reactor. We haven’t stopped building large buildings in earthquake zones because older designs failed.
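If you want to check those ratios, they fall straight out of the base-10 definition. A quick sketch in R (the function name is mine):

# Relative size of quakes on a base-10 logarithmic (Richter-style) scale:
# each whole unit corresponds to a 10-fold increase.
ratio <- function(m_big, m_small) 10^(m_big - m_small)
ratio(7, 6)   #   10  -> a magnitude 7 versus a magnitude 6
ratio(8, 6)   #  100
ratio(9, 6)   # 1000  -> Japan's March 11 quake versus a magnitude 6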

Steep cliffs and modern reactor designs at Jaitapur will mean that tsunamis won’t be a problem. All over the world people build skyscrapers in major earthquake zones. The success of the elderly Fukushima reactors in the face of a monster quake is cause for relief and confidence, not blind panic. After all, compared to a skyscraper like Taipei 101, designing a low profile building like a nuclear reactor which can handle earthquakes is a relative doddle.

Despite being a 10 on the media’s self-proclaimed Richter scale, subsequent radiation leaks and releases at Fukushima will cause few if any cancers. It’s unlikely that a single worker will get cancer, let alone any of the surrounding population. This is not even a molehill next to the mountain of cancers caused by cigarettes, alcohol and red meat. The Fukushima evacuations are terrible for the individuals involved but even 170,000 evacuees pales beside the millions of evacuations caused by increasing climate based cataclysms.

Greenpeace India haunted by a pallid European ghost

Each year that the electricity supply in Maharashtra is inadequate, some 23,000 children under the age of 5 will die. They will die this year. They will die next year. They will keep dying while the electricity supply in Maharashtra is inadequate. While the children die, their parents will mourn and continue to deplete forests for wood and charcoal. They will continue to burn cattle dung and they will have more children.

A search of the Greenpeace India web pages finds no mention of biomass cooking. No mention of its general, environmental, climate or health impacts. But there are 118 pages referencing Chernobyl.

At Chernobyl, 237 people suffered acute radiation sickness with 28 dying within 4 months and another 19 dying between 1987 and 2006. As a result of the radiation plume and people who were children at the time drinking contaminated milk, there were 6,848 cases of thyroid cancer between 1991 and 2005. These were treated with a success rate of about 98% (implying about 140 deaths). Over the past 25 years there have also been some thousands of other cancers that might, or might not, have been caused by Chernobyl amongst the millions of cancers caused by factors that Greenpeace doesn’t seem the least worried by, things like cigarettes, alcohol and red meat.

On the other hand, each year that India’s electricity supply is inadequate will see about 256,000 childhood deaths. As an exercise, readers may wish to calculate the number of Indian children who have died due to inadequate cooking fuels over the past 25 years and compare it with the 140 children who died due to the Chernobyl accident. Every one of those Indian deaths was every bit as tragic as every one of those Chernobyl deaths.

Greenpeace India is dominated by the nuclear obsession of its parent organisation. On the day the Greenpeace India blog ran a piece about 3 Japanese workers with burned feet, nearly a thousand Indian children under 5 died from cooking stove smoke. They didn’t get a mention that day, or any other.

Why is Greenpeace India haunted by this pallid European ghost of an explosion 25 years ago in an obsolete model of reactor in Ukraine? Why is Greenpeace India haunted by the failure of a 40 year old Fukushima reactor without a single fatality? This is a tail wagging not just a dog, but the entire sled team.

Extreme scenarios

It’s time Greenpeace India looked rationally at Indian choices.

Should they perhaps copy the Germans, whose 15-year flirtation with solar power hasn’t made the slightest dent in their fossil fuel use? (Note 2) It may simply be that the Germans are technologically incompetent and that things will go better in India. Perhaps the heirs of Ramanujan will succeed where the heirs of Gauss have failed. Alternatively, should India copy the Danes, whose wind farms can’t even half power a tiny country of 5.4 million?

India’s current electricity sources. Cooking stoves not included! ‘Renewables’ are predominantly biomass thermal power plants and wind energy, with some solar PV.

India is well aware that she has only four or five decades of coal left, but seems less aware, like other governments, that atmospheric CO2 stabilisation must be at 350 ppm, together with strict reductions in short-lived forcings like black carbon and methane, and that these constraints require her, like Australia and everybody else, to leave most of that coal in the ground. But regardless of motivation, India needs both a rebuild and an expansion of her energy infrastructure over the next 50 years.

Let’s consider a couple of thumbnail sketches of two very different extreme scenarios that India may consider.

The first scenario is to phase out all India’s coal, oil and gas electricity generation facilities and replace them with nuclear. Currently these fossil fuel facilities generate about 900,000 GWh (giga watt hours) of electricity. Replacing them with 1,000 nuclear reactors at 1.7 GW each will generate about 14 million GWh annually. This is about 15 times the current electricity supply and roughly similar to Victoria’s per capita electricity supply. It’s a fairly modest target because electricity will be required to replace oil and gas in the future. I also haven’t factored in population growth in the hope that energy efficiency gains will compensate for population growth and also with confidence that electrification will reduce population growth. Nevertheless, this amount of electricity should be enough to catapult India into the realms of the developed world.
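As a rough check of that 14 million GWh figure, here is the arithmetic in R, assuming the 90% capacity factor used in Note 2 (the variable names are mine):

# Annual output of 1,000 reactors at 1.7 GW each, 90% capacity factor (assumed)
n_reactors      <- 1000
power_gw        <- 1.7
capacity_factor <- 0.9
hours_per_year  <- 24 * 365

annual_gwh <- n_reactors * power_gw * hours_per_year * capacity_factor
annual_gwh             # ~13.4 million GWh, i.e. roughly 14 million
annual_gwh / 900000    # ~15 times the current ~900,000 GWh fossil-fuelled supply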

These reactors should last at least 60 years and the electricity they produce will prevent 256,000 children under 5 dying every year. Over the lifetime of the reactors this is about 15.4 million childhood deaths. But this isn’t so much about specific savings as a total transformation of India which will see life expectancy rise to developed world levels if dangerous climate change impacts can be averted and a stable global food supply is attained.

Build the reactors in groups of 6, as is proposed at Jaitapur, and you will need to find 166 sites of about 1000 hectares. The average density of people in India is about 3 per hectare, so you may need to relocate half a million people (3000 per site). This per-site figure is close to the actual figure for Jaitapur.

There are currently over 400 nuclear reactors operating world wide and there has been one Chernobyl and one Fukushima in 25 years. Nobody would build a Chernobyl style reactor again, but let’s be really silly and presume that over 60 years we had 2 Chernobyls and 2 Fukushimas in India. Over a 60 year period this might cost 20,000 childhood cancers with a 98% successful treatment rate … so about 400 children might die. There may also be a few thousand adult leukemias easily counterbalanced by a vast amount of adult health savings I haven’t considered.

The accidents would also result in 2 exclusion zones of about 30 kilometres in radius. Effectively this is 2 new modestly sized wildlife parks. We know from Chernobyl that wildlife will thrive in the absence of humans. With a 30 km radius, each exclusion zone wildlife park would occupy about 282,743 hectares.

If you are anti-nuclear, this is a worst case scenario. The total transformation of India into a country where children don’t die before their time in vast numbers.

This is a vision for India that Greenpeace India is fighting tooth and nail to avoid.

As our alternative extreme scenario, suppose India opted for concentrating solar thermal power stations similar to the Spanish Andasol system to supply 14 million GWh annually. Each such unit supplies about 180 GWh per year, so you would need at least 78,000 units with a solar collector area of 3.9 million hectares, equivalent to 13 of our hypothesized exclusion zone wildlife parks from the accidents. But, of course, these 3.9 million hectares are not wildlife parks. I say “at least 78,000” units because the precise number will depend on matching the demand for power with the availability of sunshine. Renewable sources of energy like wind and solar need overbuilding to make up for the variability and unpredictability of wind and cloud cover. The 78,000 Andasol plants each come with 28,000 tonnes of molten salt (a mix of sodium nitrate and potassium nitrate) at 400 degrees centigrade, which acts as a huge battery, storing energy when the sun is shining for use when it isn’t. Local conditions will determine how much storage is required. The current global production of ordinary sodium chloride is about 210 million tonnes annually. Producing the 2.1 billion tonnes of special salt required for 78,000 Andasols will be difficult, as will the production of steel and concrete. Compared to the nuclear reactors, you will need about 15 times more concrete and 75 times more steel.
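The arithmetic behind those Andasol numbers, using the per-plant figures assumed in this paragraph (the ~50 ha of collectors per plant is simply 3.9 million hectares divided by 78,000):

# Number of Andasol-class plants and the associated land and salt
target_gwh    <- 14e6       # annual electricity target for the scenario
gwh_per_unit  <- 180        # output of one Andasol-class plant per year
ha_per_unit   <- 50         # collector area per plant (assumed, see above)
salt_per_unit <- 28000      # tonnes of molten salt per plant

target_gwh / gwh_per_unit   # ~77,800 plants, rounded up to "at least 78,000"
78000 * ha_per_unit         # 3.9 million hectares of collectors
78000 * salt_per_unit       # ~2.1-2.2 billion tonnes of nitrate salt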

Build the 78,000 Andasols in groups of 78 and you have to find 1000 sites of about 4,000 hectares. Alternatively you could use 200 sites of 20,000 hectares. The average density of people in India is over 3 per hectare, so you may need to relocate perhaps 12 million people. If you were to use solar photovoltaics in power stations (as opposed to rooftops), you would need more than double the land (Note 4) and have to relocate even more people.

Sustainability

In a previous post, I cited an estimate of 1 tonne of CO2 per person per year as a sustainable greenhouse gas emissions limit for a global population of 8.9 billion. How do our two scenarios measure up?

A current estimate of full life-cycle emissions from nuclear power is 65 g/kWh (grams of CO2 per kilowatt-hour), so 14 million GWh of electricity shared between 1.4 billion Indians is about 0.65 tonnes per person per annum, which allows 0.35 tonnes for food and other non-energy greenhouse gas emissions. So not only is it sustainable, it’s in the ballpark of the figure we will all have to live within.
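The per-person arithmetic, spelt out in R (variable names are mine):

# Life-cycle CO2 from an all-nuclear electricity supply, shared across India
emissions_g_per_kwh <- 65        # full life-cycle estimate quoted above
electricity_gwh     <- 14e6      # annual supply in the nuclear scenario
population          <- 1.4e9

kwh    <- electricity_gwh * 1e6                 # GWh -> kWh
tonnes <- emissions_g_per_kwh * kwh / 1e6       # grams -> tonnes of CO2
tonnes / population                             # ~0.65 t CO2 per person per year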

The calculations required to check if this amount of electricity is sustainable from either solar thermal or solar PV are too complex to run through here, but neither will be within budget if any additional fossil fuel backup is required. Solar PV currently generates about 100 g/kWh (p.102) under Australian conditions, so barring technical breakthroughs, is unsustainable, unless you are happy not to eat at all. Solar thermal is similar to nuclear in g-CO2/kWh, except that the required overbuilding will probably blow the one tonne budget.

The human cost of construction time

The relative financial costs of the two scenarios could well have a human cost. For example, more money spent on energy usually means less spent on ensuring clean water. But this post is already too long, so one last point needs to be made about construction time. I strongly suspect that while building 1000 nuclear reactors would be a vast undertaking, it is small compared to building 78,000 Andasols. Compare the German and French experiences of solar PV and nuclear, or simply think about the sheer number and size of the sites required. The logistics and organisational time could end up dominating the engineering build time. We know from various experiences, including those of France and Germany, that rapid nuclear builds are physically plausible, and India has demonstrated this with its own reactor program.

If I’m right and a solar (or other renewable) build is slower than a nuclear build, then the cost in human suffering will easily dwarf anything from any reasonable hypothesis about the number of accidents. Can we put a number on this? If we arbitrarily assume a pro-rata reduction in childhood deaths in proportion to the displacement of biomass cooking by electricity, then we can compare a phase-out over 10 five-year plans with one taking, say, 11. At the end of each 5-year plan a chunk of electricity comes on line and the number of cooking smoke deaths drops; at the end of the process the number of deaths from cooking smoke is zero. It’s a decline in a series of 10 large or 11 slightly smaller steps. Plug in the numbers, add up the totals over the two time periods, and the difference is … 640,000 deaths in children under 5. Construction speed matters.
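Here is that step-wise calculation written out in R, under the stated (and admittedly arbitrary) pro-rata assumption; the function name is mine:

# Cumulative under-5 deaths during a phase-out of biomass cooking,
# assuming deaths fall in equal steps at the end of each 5-year plan.
deaths_per_year <- 256000

phaseout_deaths <- function(n_plans, plan_years = 5) {
  # fraction of the original death rate still occurring during plan i
  remaining <- 1 - (seq_len(n_plans) - 1) / n_plans
  sum(remaining * deaths_per_year * plan_years)
}

phaseout_deaths(11) - phaseout_deaths(10)   # ~640,000 extra deaths from one extra plan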

In conclusion

How do my back-of-an-envelope scenarios compare with India’s stated electricity development goals? According to India’s French partner in the Jaitapur project, Areva, India envisages about half my hypothesized electrical capacity being available by 2030, so a 50 year nuclear build plan isn’t ridiculous provided floods or failed monsoons don’t interfere unduly.

As for the safety issues and my hypothesised accidents, it doesn’t matter much what kind of numbers you plug in as a consequence of the silly assumption of a couple of Chernobyls. They are all well and truly trumped: firstly, by the increase in health for Indian children, secondly by the reforestation and biodiversity gains as biomass cooking declines, thirdly by the reduction in birth rates as people get used to not having their children die, and lastly, by helping us all have a fighting chance of avoiding the worst that climate change might deliver.

It’s time Greenpeace India told its parent organisation to shove off. It’s time Greenpeace India set its own agenda and put the fate of Indian children, the Indian environment and the planet ahead of the ideological prejudices of a parent organisation which has quite simply lost the plot.


Note 1: Nuclear Waste: What about the nuclear waste from a thousand reactors? This is far less dangerous than current levels of biomass cooking smoke and is much more easily managed. India has some of the best nuclear engineers in the business. They are planning thorium breeder reactors which will result in quite small amounts of waste, far smaller and more manageable than the waste from present reactors. Many newer reactor designs can run on waste from the present generation of reactors. These newer reactors are called IFR (Integral Fast Reactor) and details can be found on bravenewclimate.com.

Note 2: German Solar PV: Germany installed 17 GW of solar photovoltaic (PV) power cells between 2000 and 2010, and in 2010 those 17 GW worth of cells delivered 12,000 GWh of energy. If those cells were running in 24×7 sunshine, they would have delivered 17 x 24 x 365 = 148,920 GWh of energy. So their efficiency is about 8 percent (this is usually called their capacity factor). A single 1.7 GW nuclear reactor can produce about 1.7 x 24 x 365 x 0.9 = 13,402 GWh in a year (the 0.9 is a reasonable capacity factor for nuclear … 90 percent). Fossil fuel use for electricity production in Germany hasn’t changed much in the past 30 years, with most of the growth in the energy supply being due to the development of nuclear power in Germany during the late 70s and 80s.
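The capacity-factor arithmetic in this note, as a quick check in R:

# German solar PV fleet, 2010
pv_gw       <- 17
pv_gwh_2010 <- 12000
pv_max_gwh  <- pv_gw * 24 * 365     # ~149,000 GWh if the sun shone 24x7
pv_gwh_2010 / pv_max_gwh            # ~0.08, i.e. an 8 percent capacity factor

# One modern 1.7 GW nuclear reactor at a 90 percent capacity factor
1.7 * 24 * 365 * 0.9                # ~13,400 GWh per year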

Note 3: Gigawatts, for non-technical readers: The word billion means different things in different countries, but “giga” always means a thousand million, so a gigawatt (GW for short) is a useful unit for large amounts of power. A 100-watt globe takes 100 watts of power to run. Run it for an hour and you have used 100 watt-hours of energy. Similarly, a GWh is a gigawatt of power used for an hour, and this is a useful unit for large amounts of energy. If you want to know all about energy units for a better understanding of BNC discussions, here’s Barry’s primer.

Note 4: Area for Solar PV: German company JUWI provides large-scale PV systems. Their 2 MW (megawatt) system can supply about 3.1 GWh per year and occupies 2 hectares. To supply a similar amount of energy to an Andasol plant would need 180/3.1 = 58 units occupying some 116 hectares.
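And the corresponding check for this note, using the JUWI figures quoted above:

# Ground-mounted PV land take, per the JUWI figures
gwh_per_pv_unit <- 3.1     # annual output of one 2 MW system
ha_per_pv_unit  <- 2       # land occupied by one system

units_needed <- 180 / gwh_per_pv_unit   # ~58 units to match one Andasol plant
units_needed * ha_per_pv_unit           # ~116 hectares, versus ~50 ha of Andasol collectors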

March 11, 2011

A toy model for forecasting global temperatures – 2011 redux, part 1

Filed under: Climate Change, Global Warming — buildeco @ 1:14 pm

by Barry Brook

A little over two years ago, I wrote the following post: How hot should it have really been over the last 5 years? In it, I did some simple statistical tinkering to examine the (correlative) relationship between global temperatures and a few key factors, namely greenhouse gases, solar irradiance, and ENSO. In the next couple of posts, I’ll update the model, add a few different predictors, and correct for temporal autocorrelation. I’ll also make a prediction on how global temperatures might pan out over the coming few years.

In the 2008 post, I concluded with the following:

To cap off this little venture into what-if land, I’ll have a bit of fun and predict what we might expect for 2009. My guess is that the SOI will be neutral (neither El Niño nor La Niña), solar cycle 24 will be at about 20% of its expected 2013 peak, and there will be no large volcanic eruptions. On this basis, 2009 should be about +0.75C, or between the 3rd and 5th hottest on record. Should we get a moderate El Niño (not probable, based on current SOI) it could be as high as +0.85C and could then become the hottest on record. I think that’s less likely.

By 2013, however, we’ll be at the top of the solar cycle again, and have added about another +0.1C worth of greenhouse gas temperature forcing and +0.24 of solar forcing compared to 2008. So even if 2013 is a La Niña year, it might still be +0.85C, making it hotter than any year we’ve yet experienced. If it’s a strong El Niño in 2013, it could be +1.2C, putting it way out ahead of 1998 on any metric. Such is the difference between the short-term effect of non-trending forcings (SOI and TSI) and that inexorable warming push the climate system is getting from ongoing accumulation of heat-trapping greenhouse gases.

So, now that we have data for 2009 and 2010, how did I do? Not too bad actually. Let’s see:

1. After bottoming out for a long period, the 11-year sunspot cycle has restarted. So much for those predicting a new Maunder Minimum. By the end of 2010, we had indeed reached about 20% of the new forecast maximum for cycle-24 (which is anticipated to be about half the peak value of cycle-23).

2. We had a mild El Niño in 2009 and early 2010, before dipping back into a strong La Niña. See here.

3. There were no large equatorial volcanic eruptions. The best we got was Iceland’s Eyjafjallajökull (don’t ask me to pronounce it), which actually helped climate change a little bit by stopping flights over Europe for a week.

4. 2009 was ranked as the 5th warmest on record. I had ‘forecast’, based on my toy model, that it would be somewhere between 3rd and 5th. I said that 2009 would be about +0.25C hotter than 2008; the real difference was ~+0.15C (based on the WTI average index data). This was followed up by 2010 equalling 2005 as the hottest year on record. Pretty much right in line with my guesstimate.

5. Anthropogenic GHGs continue to accumulate; atmospheric CO2 concentrations built up by 1.9 ppm in 2009 and 2.4 ppm in 2010. That forcing ain’t going away!

I still stand by my 2008 prediction that we’ll witness a record-smashing year in 2013… but I’ll have to wait another couple of years to confirm my prognostication. However, I’m not going to leave it at this. There are a couple of simple ways I can improve my toy model, I think — without a lot of extra effort. Doing so will also give me a chance to show off a few resampling methods that can be used in time-series analysis, and to probe some questions that I skipped over in the 2008 post.

In short, I think I can do better.

In Part 2, I’ll describe the new and old data streams, do some basic cross-correlation analysis and plotting, bootstrap the data to deal with autocorrelation, and look briefly at a method for assessing a model’s structural goodness-of-fit.

In Part 3 I’ll do some multi-term fitting to data going up to 2005, and use this model to project the 2006 — 2010 period as a way of validating against independent data (i.e., that not used in the statistical fitting), then re-fit the best predictive model(s) to all the data, and make a forecast for 2011 to 2013 inclusive. The big catch here will be getting the non-CO2 forcings right.

Stay tuned

February 24, 2011

The cost of ending global warming – a calculation

Filed under: Economic issues, Global Warming — buildeco @ 12:04 am

Guest Post by Chris Uhlik. Dr Uhlik did a BS, MS, and PhD in Electrical Engineering at Stanford 1979–1990. He worked at Toyota in Japan, built robot controllers, cellular telephone systems, internet routers, and now does engineering management at Google. Among his 8 years of projects as an engineering director at Google, he counts engineering recruiting, Toolbar, Software QA, Software Security, GMail, Video, BookSearch, StreetView, AerialImaging, and research activities in Artificial Intelligence and Education. He has directly managed about 500 engineers at Google and indirectly over 2000 employees. His interests include nuclear power, photosynthesis, technology evolution, artificial intelligence, ecosystems, and education.

(Ed Note: Chris is a member of the IFRG [a private integral fast reactor discussion forum] as well as being a strong supporter of the LFTR reactor design)

An average American directly and indirectly uses about 10.8 kW of primary energy of which about 1.3 kW is electricity. Here I consider the cost of providing this energy as coming from 3 main sources:

1. The fuel costs (coal, oil, uranium, sunlight, wind, etc)

2. The capital costs of the infrastructure required to capture and distribute the energy in usable form (power plants, tanker trucks, etc)

3. The operating costs of the infrastructure (powerline maintenance, plant security, watching the dials, etc)

The average wholesale electricity price across the US is about 5 c/kWh, so the all-in cost of providing the electrical component is currently ~$570/person/year, or 1.2% of GDP. The electric power industry, including all distribution, billing, residential services, etc., takes in $1,120/person/year, or 2.4% of GDP. So you can see there is about a factor of two between the marginal cost of electricity production (wholesale prices) and retail prices.
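These per-person figures follow directly from the numbers summarised at the foot of this post; here is a quick check in R (variable names are mine):

# US per-person electricity figures (2008 data from the summary below)
avg_power_kw   <- 1.29       # average electrical power per person
wholesale_kwh  <- 0.05       # $/kWh wholesale
gdp_per_person <- 47680
power_industry <- 343e9      # $/yr, whole electric power industry
population     <- 306e6

kwh_per_year <- avg_power_kw * 24 * 365
kwh_per_year * wholesale_kwh                    # ~$570/person/yr at wholesale prices
kwh_per_year * wholesale_kwh / gdp_per_person   # ~1.2% of GDP
power_industry / population                     # ~$1,120/person/yr retail, ~2.4% of GDP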

The rest of this energy comes from Natural gas, Coal, Petroleum, and Biomass, to the tune of 6.36 kW/person.

I’m going to make the following assumptions to calculate how much it would cost to convert to an all-nuclear powered, fossil-carbon-free future.

Assumptions (*see numbers summary at foot of this post)

  • I’ll neglect all renewable sources such as hydro. They amount to only about 20% of electricity and don’t do anything about the larger fuel energy demand, so they won’t affect the answer very much.
  • Some energy sources are fuel-price intensive (e.g. natural gas) and some have zero fuel prices, but are capital intensive (e.g. wind). I’ll assume that nuclear is almost all capital intensive with only 35% of the cost coming from O&M and all the rest going to purchase costs plus debt service.
  • I’ll use 8% for cost of capital. Many utilities operate with a higher guaranteed return than this (e.g. 10.4%) but the economy historically provides more like 2–5% overall, so 8% seems quite generous.
  • I’ll assume 50 year life for nuclear power plants. They seem to be lasting longer than this, but building for more than 50 years seems wasteful as technologies advance and you probably want to replace them with better stuff sooner than that.
  • Back in the 1970s we built nuclear power plants for about $0.80–0.90/watt (2009 dollars). In the 1980s and 90s we saw that price inflate to $2.09–3.39/watt (Palo Verde and Catawba), with a worst-case disaster of $15/watt (Shoreham). Current project costs are estimated at about $2.95/watt (Areva EPR). Current projects in China are ~$1.70/watt. If regulatory risks were controlled and incentives were aligned, we could probably build plants today for lower than the 1970s prices, but I’ll pessimistically assume the current estimates of $3/watt.
  • Electricity vs Combustion: In an all nuclear, electricity-intensive, fossil-carbon-free future, many things would be done differently. For example, you won’t heat your house by burning natural gas. Instead you’ll use electricity-powered heat pumps. This will transfer energy away from primary source fuels like natural gas to electricity. Plug-in-hybrid cars will do the same for petroleum. In some cases, the total energy will go down (cars and heat pumps). In some cases, the total energy will go up (synthesizing fuel to run jet transport aircraft). I’ll assume the total energy demand in this future scenario is our current electricity demand plus an unchanged amount of energy in the fuel sector, but provided instead by electricity. I.e. 1.3 kW (today’s electricity) + 6.4 kW (today’s fuels, but provided by electricity with a mix of efficiencies that remains equivalent). This is almost certainly pessimistic, as electricity is often a much more efficient way to deliver energy to a  process than combustion. (Ed Note: I discuss similar issues in these two SNE2060 posts).
  • Zero GDP growth rate

Result: In this future, we need 7.7 kW per person, provided by $3/watt capitalized sources with 8% cost of capital and 35% surcharge for O&M. The cost of this infrastructure: $2,550/person/year or 5% of GDP.

Alternate assumptions:

  • Chinese nuclear plant costs of $1.70/watt
  • Higher efficiency in an electric future, where most processes take about 1/2 as much energy from electricity as they used to take from combustion. 1.3 kW from old electricity demands (unchanged) + 3.2 kW from new electricity demands (half of 6.4 kW). And fuels (where still needed) are produced using nuclear heat-driven synthesis approaches.

Alternative result: $844/person/year or 2% of GDP.
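For those who want to check both results, here is a sketch of the annualised-cost arithmetic in R, using the standard capital-recovery formula for an 8% discount rate over a 50-year life, with the 35% O&M surcharge applied on top as per the assumptions above (the function name is mine):

# Annualised per-person cost of an all-nuclear energy supply
annual_cost <- function(kw_per_person, dollars_per_watt,
                        rate = 0.08, life = 50, om_surcharge = 0.35) {
  capital <- kw_per_person * 1000 * dollars_per_watt    # $ of plant per person
  crf     <- rate / (1 - (1 + rate)^(-life))            # capital recovery factor
  capital * crf * (1 + om_surcharge)                    # $ per person per year
}

annual_cost(7.7, 3.00)   # ~$2,550/person/yr, ~5% of a $47,680 per-capita GDP
annual_cost(4.5, 1.70)   # ~$844/person/yr with Chinese costs and higher efficiency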

Conclusion: Saving the environment using nuclear power could be cheap and worth doing.

Numbers*:

Year: 2008
GDP: $14.59T
Population 306M
Electricity: 12.68 quads
Non-electricity fuels: 58.25 quads
Natural gas: 16.33 quads
Coal: 1.79 quads
Biomass: 3.46 quads
Petroleum: 36.67 quads
Average retail electricity price: 9.14 c/kwh
Electric power industry: $343B/yr
Electricity transmission industry: $7.8B/yr

Per person statistics:
GDP: $47,680
Electricity: 1.29 kW (average power)
Fuels: 6.36 kW

February 9, 2011

A remediation perspective on preventing future flood disasters

Filed under: Climate Change, Global Warming, Water Resources — buildeco @ 11:01 am

Guest Post by Michal Kravčík

The year 2011 started strangely. Firstly, there were tragic floods in Australia, Brazil and South Africa; secondly, unprecedented drought in China. The media routinely says that the extremes of climate will only get worse. Why is this happening? Is it possible to prevent future catastrophic flooding, and even to moderate the other extreme, droughts? One possible solution for Australia is to make use of the unique hydrological and plant processes that exist in its mostly flat landscape. The country once had a unique natural irrigation system contained in its vast floodplain systems, and this enabled the country to slow the thermoregulatory process of dramatic warming and cooling in the atmosphere and to mitigate the risks of extremes in weather. I am convinced that recovery of the small water cycles driven by plant biodiversity can cool the dry country of Australia, prevent flooding on the east coast and restore water to streams and rivers; life in the soil is critically important for climate.

The coastline of eastern Australia extends for over 4000 km, but it is also the boundary between the ocean and the dry inland of the continent. These two fundamentally differing ‘worlds’ each deliver a different kind of energy into the atmosphere. The sea surface evaporates water into the atmosphere (see diagram below), whilst

[Diagram: 1.jpg]

the dry and desiccated landscape of the interior produces sensible heat, which enhances hot output streams of air into the atmosphere (see diagram below).

[Diagram: 11.jpg]

At the interface of these two “worlds” there is intensive development of heavy storm clouds, and as the moisture-laden clouds from over the ocean try to enter the interior, they are blocked by the hot output current of the dry country (see diagram below).

[Diagram: 16.jpg]

This phenomenon causes the extreme tropical storms and heavy rainfall (that we have recently witnessed in eastern Australia, especially in SE Queensland), resulting in

[Diagram: 17.jpg]

catastrophic flooding. The red arrows in the diagram below show exactly where all that heavy rainfall went… straight out to sea.

[Diagram: 18.jpg]

So what is the solution?

The answer is easy! The process is a little more difficult.

If the hot output streams of the dry interior of the continent are preventing the entry of clouds further inland, what needs to be done is to cool the dried country using plants and water as the cooling agent. Plants act as solar operated air conditioners that cool the landscape during the day via the transpiration of water.

I say this process is not so easy because, normally, there is so little water inland. When there is too much water on the east coast, as we have seen in the recent time of extreme rainfall, it causes disastrous flooding. The solution is to understand the unique water-holding systems that once functioned in the Australian landscape, west of the mountain range. It was plant life that enabled the land to collect and hold the rain, which in turn would cool and moderate the climate.

[Diagram: 4.jpg]

Australia’s unique floodplain systems were once natural groundwater storage systems, covered by a wealth of transpiring plant biodiversity which cooled the land and gave rise to rain-bearing clouds. If this natural hydrological cycle were reinstated, rainfall could be redirected inland to reduce flood conditions on the east coast. This would potentially solve the flood problem all along the east coast of Australia and reduce excess rainwater run-off to the sea. That water could then be used for the rehabilitation and restoration of ecosystems within the continent.

[Diagram: 15.jpg]

Rainwater intercepted in the country will evaporate into the atmosphere, thereby inhibiting the production of sensible heat, and will create clouds which, after condensation, fall again as gentle rain over the inland of the country.

[Diagram: 6.jpg]

This approach simply restores the ‘small water’ cycles, and life in the landscape will spread further inland.

[Diagram: 7.jpg]

At the same time, clouds made by the small, biodiversity-based water cycles link with clouds of the large water cycle created by evaporation from the sea. This relationship restores the function of the biotic pump, as described by the Russian scientists Gorshkov and Makarieva. They argue that originally, with the vegetation cover on the east coast intact, the total rainfall was higher and more balanced across the inland.

[Diagram: 8.jpg]

The restoration of water in the small water cycles of plant transpiration and recovery remediates climate change; it mitigates temperature extremes and extreme weather events, and results in a healthy environment with plenty of water for people, food and nature, and continental security and prosperity.

[Diagram: 9.jpg]

Globally we have experienced not only the degradation of our aquatic ecosystems and the loss of rainfall, but also greater water imbalances, with more heavy, intense rain as opposed to cooler, more gentle rain.

We concentrate only on water that we can see. Water that evaporates cannot be seen, and thus it is considered lost to the system. But evaporated water in the atmosphere is, after condensation as rain, returned to the surface of the landscape, where it restores life. So it is not a loss, but part of the eternal cycle of water in a landscape. Water is only lost to the land system when it drains straight into the sea or is lost to the hot air above a warm landscape!

Re-coupling Australia’s unique plant/water/climate systems will allow the return of rainwater into the interior of the dry inland country and Australia can start a new era of sustainable prosperity.

A working model of the restoration of water in the small water cycles can be seen in Australia at Tarwyn Park in the Bylong Valley.

In pioneering this approach, Australia could be an example to the rest of the world.

Michal Kravčík, Tuesday 25 January 2011, 09:24
Read more: http://kravcik.blog.sme.sk/c/255048/A-remediation-perspective-on-preventing-future-flood-disasters.html#ixzz1DK6i1iSh

January 22, 2011

QLD floods highlight the cost of climate extremes

Filed under: Climate Change — buildeco @ 11:11 pm
by Barry Brook

After a long, hot period of drought in eastern Australia, spanning much of the 1990s and 2000s and referred to as the worst in 1000 years (see also discussion on the drought here and the strange winter of 2009 here), the period 2010–2011 has seen record rainfall and rural flooding events in Australia. This has culminated this week with the third-largest city, Brisbane, being struck by severely damaging and costly urban floods, inundating the central business district and overwhelming many thousands of homes and businesses. To quote:

BRISBANE is besieged by the flood of the century, with more than 30,000 properties to be inundated tomorrow… The Queensland capital is now the scene of a natural disaster unprecedented in contemporary Australia. The Brisbane River is due to reach 5.2m on a 4am high tide, 30cm down on the predicted peak, but approaching the mark set in the devastating 1974 floods that claimed 14 lives.

This all comes on the back of an earlier ‘drought breaking’ flood that struck central Queensland in early 2010, which I described in this post:

Do the recent floods prove man-made climate change is real? In that post, I said:

Earlier this year in Australia, the Bureau of Meteorology released a Special Climate Statement on the recent exceptional rain and flooding events in central Australia and Queensland. February 28th 2010 was the wettest day on record for the Northern Territory, and March 2nd set a new record for Queensland. Over the 10-day period ending March 3rd, an estimated 403 cubic kilometres (403,000 gigalitres) of rainfall fell across the NT and QLD. Extreme, indeed.

For further background on these events, you should read the latest special climate statement, released on 7th January by the Bureau of Meteorology: An extremely wet end to 2010 leads to widespread flooding across eastern Australia. It says:

It was the wettest December on record for Queensland and for eastern Australia as a whole, the second-wettest for the Murray-Darling Basin, the sixth-wettest for Victoria and the eighth-wettest for New South Wales. For Australia as a whole it was the third-wettest December on record. This followed an extremely wet spring, the wettest on record for Queensland, New South Wales, eastern Australia and the Murray-Darling Basin. The heavy late November and December rainfall followed a very wet July to October for Australia, meaning many catchments were already wet before the flooding rain. It was Australia’s wettest July to October on record and also the wettest July to December on record.

The point of this post is not to try to attribute these extreme weather events directly to climate change, although I think there is a real influence at work here. A major factor is one of the strongest La Niñas on record, as detailed in this excellent piece by climatologist Neville Nicholls. Climate scientist Will Steffen from ANU also had this to say:

…there was no direct link between global warming and the tragic flash flooding in Toowoomba which has killed at least nine people in southeast Queensland.

But he told The Australian Online that climate change would lead to heavier, more frequent rain.

“As the climate warms, there is more water vapour in the atmosphere,” he told The Australian Online.

“This means that there is a probability that there will be more intense rainfall events around the world.

“There is some evidence that we can see them now. I think the place where the best data is, is the US.”

My point is this. The recent “Big Dry” was almost certainly the most economically damaging climate event ever to strike Australia — certainly for rural areas. Now, on the back of this extended event, which impacted many sectors of Australian business, comes the latest diluvian disaster. Aside from the direct costs of replacing damaged and destroyed goods and rebuilding infrastructure (the insurance estimate I saw on the news the other day was >$5 billion), there are reports that the cascading effects could wipe 1% off Australia’s GDP — around $13 billion — mainly through export losses.

These are major climate-related costs to the economy, as well as to the welfare of the people caught up in this event and the natural systems that are being damaged.

Then, in Europe and the US, we’ve seen record snowstorms and extremely disruptive cold snaps, which can be linked to extremely unusual events in the Arctic, as described here. And, of course, in 2009 we witnessed the horrific bushfires in Victoria — the worst on record. Again, I repeat, these types of events have happened before, and are difficult to attribute individually to climate change versus random chance — but that doesn’t change the fact that they really hurt, economically and socially.

Climate change, left unabated, will increase the frequency and severity of natural disasters. More and more energy is being trapped within the Earth system (see figure to the right), and it has to be expressed somewhere, sometime. The laws of physical science dictate nothing less. And it will, in turn, hit the Australian and World economy hard. Those economic rationalists among us should heed the reminder that these latest natural events have delivered. Avoided global heating is avoided cost (with the worst-case scenarios being incalculable).

What will the latest events do to the general populace’s opinion on climate change? I can’t be sure, of course, but I suspect that in many people they will awaken a deep-seated horror: “…this could happen to me”. That personal demon, fed by the graphic reporting we now get on such events, might well do more than anything else to catalyse a community consensus for real, effective and urgent action to eliminate fossil fuels.

January 11, 2011

No (statistical) warming since 1995? Wrong

Filed under: Global Warming — buildeco @ 1:59 pm

by Barry Brook

Yes, I’m still on vacation. But I couldn’t resist a quick response to this comment (and the subsequent debate):

BB: Do you agree that from 1995 to the present there has been no statistically-significant global warming?

Phil Jones: Yes, but only just.

Here is the global temperature data from 1995 to 2010, for NASA GISS and Hadley CRU. The plot comes from the Wood for Trees website. A linear trend is fitted to each series.

Both trends are clearly upwards.

Phil Jones was referring to the CRU data, so let’s start with that. If you fit a linear least-squares regression (or a generalised linear model with a Gaussian distribution and identity link function, using maximum likelihood), you get the following results (from Program R):

glm(formula = as.formula(mod.vec[2]), family =
                       gaussian(link = "identity"),
    data = dat.2009)

Deviance Residuals:
      Min         1Q     Median         3Q        Max
-0.175952  -0.040652   0.001190   0.051519   0.192276  

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -21.412933  11.079377  -1.933   0.0754 .
Year          0.010886   0.005534   1.967   0.0709 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for gaussian family taken to be 0.008575483)

    Null deviance: 0.14466  on 14  degrees of freedom
Residual deviance: 0.11148  on 13  degrees of freedom
AIC: -24.961

Two particularly relevant things to note here. First, the Year estimate is 0.010886. This means that the regression slope is +0.011 degrees C per year (or 0.11 C/decade or 1.1 C/century). The second is that the “Pr” or p-value is 0.0709, which, according to the codes, is “not significant” at Fisher’s alpha = 0.05.

What does this mean? Well, in essence it says that if there was NO trend in the data (and it met the other assumptions of this test), you would expect to observe a slope at least that large in 7.1% of replicated samples. That is, if you could replay the temperature series on Earth, or replicate Earths, say 1,000 times, you would, by chance, see that trend or larger in 71 of them. According to classical ‘frequentist’ statistical convention (which is rather silly, IMHO), that’s not significant. However, if you only observed this in 50 of 1,000 replicate Earths, that WOULD be significant.

Crazy stuff, eh? Yeah, many people agree.
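
For anyone who wants to reproduce the regression output above, here is a minimal R sketch (mine, not from the original analysis; it assumes a data frame dat.2009 with numeric columns Year and CRU holding the 1995-2009 HadCRU annual anomalies):

# Minimal sketch (assumed data frame 'dat.2009' with columns Year and CRU)
fit <- glm(CRU ~ Year, family = gaussian(link = "identity"), data = dat.2009)
summary(fit)              # Year slope, its standard error, t value and Pr(>|t|)
coef(fit)["Year"] * 10    # trend per decade, ~0.11 C/decade for these data
# an ordinary least-squares fit, lm(CRU ~ Year, data = dat.2009), gives the same slope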

Alternatively, and more sensibly, we can fit two models: a ‘null’ with no slope, and a TREND model with a slope, and then compare how well they fit the data (after bias corrections). A useful way to do this comparison is via the Akaike Information Criterion – in particular, the AICc evidence ratio (ER). The ER is the model probability of the slope model divided by that of the intercept-only model, and is, in concept, akin to Bayesian odds ratios. The ER is preferable to a classic null-hypothesis significance test because the likelihood of the alternative model is explicitly evaluated (not just the null). Read more about it in this free chapter that Corey Bradshaw and I wrote.

Here is what we get:

           k    -LogL      AICc     dAICc      wAIC    pcdev
CRU ~ Year 3 15.48054 -22.77926 0.0000000 0.5897932 22.93616
CRU ~ 1    2 13.52652 -22.05304 0.7262213 0.4102068  0.00000

The key thing to look at here is the wAIC values. The ER in this case is 0.5897932/0.4102068 = 1.44. So, under this test, the model that says there IS a trend in the data is 1.44 times better supported by the data than the model that says there isn’t. The best supported model is the TREND model, but really, it’s too hard with this data to separate the alternative hypotheses.
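
If you want to reproduce that comparison, here is a minimal sketch (again my own code, using the same assumed dat.2009 data frame; base R’s AIC() applies no small-sample correction, so AICc is computed by hand):

# Minimal sketch of the AICc comparison (my code; data frame is assumed)
m.trend <- glm(CRU ~ Year, family = gaussian, data = dat.2009)   # TREND model
m.null  <- glm(CRU ~ 1,    family = gaussian, data = dat.2009)   # intercept-only model

aicc <- function(fit) {
  k <- attr(logLik(fit), "df")                 # parameters, including the variance
  n <- nobs(fit)
  AIC(fit) + 2 * k * (k + 1) / (n - k - 1)     # small-sample correction term
}

a <- c(trend = aicc(m.trend), null = aicc(m.null))
d <- a - min(a)                                # delta-AICc
w <- exp(-0.5 * d) / sum(exp(-0.5 * d))        # Akaike weights (wAIC)
w["trend"] / w["null"]                         # evidence ratio, ~1.44 for 1995-2009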

With more data comes more statistical power, however. Say we add the results of 2010 to the mix (well, the average anomaly so far). Then, for the null hypothesis test, we get:

glm(formula = as.formula(mod.vec[2]), family =
                        gaussian(link = "identity"),
    data = dat)

Deviance Residuals:
      Min         1Q     Median         3Q        Max
-0.174040  -0.041956   0.008072   0.044350   0.193146  

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -22.456037   9.709805  -2.313   0.0365 *
Year          0.011407   0.004849   2.353   0.0338 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for gaussian family taken to be 0.007993787)

    Null deviance: 0.15616  on 15  degrees of freedom
Residual deviance: 0.11191  on 14  degrees of freedom
AIC: -27.996

For the ER test, we get:

           k    -LogL      AICc    dAICc      wAIC    pcdev
CRU ~ Year 3 16.99796 -25.99592 0.000000 0.7552163 28.33275
CRU ~ 1    2 14.33287 -23.74266 2.253259 0.2447837  0.00000

The ER = 3.1.

So, the “significance test” suddenly (almost magically…) goes from being non-significant to significant at p = 0.05, because Pr is now 0.0338 (that is, about 34 times out of 1,000 by chance).

The ER test is strengthened too, but the earlier result, that TREND is the better supported of the two models, doesn’t change. This test is a little more robust, and certainly less arbitrary, because no matter what the data, we are always evaluating the strength of our evidence rather than whether some pre-defined threshold is crossed.

You can do the same exercise with GISTEMP, but it’s less ‘interesting’, because GISTEMP shows a stronger trend, due largely to its inclusion of Arctic areas.

For GISTEMP, the 1995-2009 data yield a slope of 0.0163 C/year, a p-value = 0.0082, and an ER = 13.4 (that is, the TREND model is >10 times better supported by this data). The 1995-2010 (December-November averages) for GISTEMP gives a slope of 0.0174 C/year, a p-value = 0.0021, and an ER = 57.8 (TREND now >50 times better supported!).

You can see that for relatively short time series like this, adding extra years can make a reasonable difference, so longer series are preferable in noisy systems like this.

Okay, does that answer the top quote? I think so, but I’m happy to answer questions on details. Otherwise, I’ve got a guest post from DV82XL that I’ll put up shortly to re-invigorate our debate about carbon prices…

December 21, 2010

The effect of cutting CO2 emissions to zero by 2050

Filed under: Climate Change, Emissions Reduction, Global Warming — buildeco @ 10:59 am

Guest post for Barry Brook by Dr Tom M. L. Wigley

 

Tom is a senior scientist in the Climate and Global Dynamics Division of the US National Center for Atmospheric Research and former Director of the CRU. He is an adjunct Professor at the University of Adelaide. For his list of papers and citations, click here (his h-index is 70!). Tom is also a good friend of mine and a strong supporter of the IFR.

What would happen to CO2 concentrations, global-mean temperature and sea level if we could reduce total CO2 emissions (both fossil and net land-use change) to zero by 2050? Based on the literature that examines possible policy scenarios, this is a virtually impossible goal. The results presented here are given only as a sensitivity study.

To examine this idealized scenario one must make a number of assumptions. For CO2 emissions I assume that these follow the CCSP MiniCAM Level 1 stabilization scenario to 2020 and then drop linearly to zero by 2050. For the emissions of non-CO2 gases (including aerosols and aerosol precursors) I assume that these follow the extended MiniCAM Level 1 scenario (Wigley et al., 2009). The extended Level 1 scenario provides emissions data out to 2300. Note that the Level 1 scenario is the most stringent of the CCSP stabilization scenarios, one that would almost certainly be very costly to follow using traditional mitigation strategies. Dropping CO2 emissions to zero is a much more stringent assumption than the original Level 1 scenario, in which total CO2 emissions are 5.54GtC/yr in 2050 and 2.40GtC/yr in 2100.
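
To make the shape of this assumed pathway concrete, here is a minimal sketch (my own illustration, not the actual MiniCAM/MAGICC input; the 2020 emission level is left as a placeholder):

# Sketch of the post-2020 part of the idealized pathway: emissions follow the
# Level 1 scenario to 2020, then decline linearly to zero by 2050 and stay there.
# e2020 is a placeholder for the Level 1 total CO2 emissions in 2020 (GtC/yr).
co2_emissions <- function(year, e2020) {
  frac <- pmin(1, pmax(0, (2050 - year) / (2050 - 2020)))  # linear ramp-down fraction
  frac * e2020
}
co2_emissions(c(2025, 2035, 2050, 2100), e2020 = 10)
# 8.33 5.00 0.00 0.00  (GtC/yr, for an assumed 10 GtC/yr in 2020)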

For modeling the effects of this new scenario one must make assumptions about the climate sensitivity and various other model parameters. I assume that the sensitivity (equilibrium warming for 2xCO2) is 3.0C, the central estimate from the IPCC AR4. (Note that the 90% confidence interval for the sensitivity is about 1.5C to 6.0C – Wigley et al., 2009.)

For sea level rise I follow the AR4 and ignore the possible effects of accelerated melt of the Greenland and Antarctic ice sheets, so the projections here are almost certainly optimistic. All calculations have been carried out using version 5.3 of the MAGICC coupled gas-cycle/climate model. Earlier versions of MAGICC have been used in all IPCC reports to date. Version 5.3 is consistent with information on gas cycles and radiative forcing given in the IPCC AR4.

The assumed CO2 emissions are shown in Figure 1.

The corresponding CO2 concentration projection is shown in Figure 2. Note that the MAGICC carbon cycle includes climate feedbacks on the carbon cycle, which lead to somewhat higher CO2 concentrations than would be obtained if these feedbacks were ignored.

Global-mean temperature projections are shown in Figure 3. These assume a central climate sensitivity of 3.0C. Temperatures are, of course, affected by all radiatively active species. The most important of these, other than CO2, are methane (CH4) and aerosols. In the Level 1 scenario used here both CH4 and aerosol precursor (mainly SO2) emissions are assumed to drop substantially in the future. CH4 concentrations are shown in Figure 4. The decline has a noticeable cooling effect. SO2 emissions drop to near zero (not shown), which has a net warming effect.

The peak warming is about 0.9C relative to 2000, which is about 1.7C relative to pre-industrial times. This is below the Copenhagen target of 2.0C – but it clearly requires a massive reduction in CO2 emissions. Furthermore, the warming peak could be significantly higher if the climate sensitivity were higher than 3.0C. For a 3.0C sensitivity, stabilizing temperatures at 2.0C relative to the pre-industrial level could be achieved with much less stringent CO2 emissions reductions than assumed here. The standard Level 1 stabilization scenario, for example, gives a 50% probability of keeping below the 2.0C target.

Figure 5 gives the sea level projection for the assumed scenario. This is a central projection. Future sea level is subject to wide uncertainties arising from uncertainties in the climate sensitivity and in parameters that determine ice melt. As noted above, the projection given here is likely to be an optimistic projection. Note that sea level roughly stabilizes here, at a CO2 concentration of about 320ppm. Less optimistic assumptions regarding the emissions of non-CO2 gases would require a lower CO2 concentration level. Given the unrealistic nature of the assumption of zero CO2 emissions by 2050, this is a graphic illustration of how difficult it would be to stabilize sea level – even at a level more than 20cm above the present level.

Key reference:
T. M. L. Wigley, L. E. Clarke, J. A. Edmonds, H. D. Jacoby, S. Paltsev, H. Pitcher, J. M. Reilly, R. Richels, M. C. Sarofim and S. J. Smith (2009) Uncertainties in climate stabilization. Climatic Change 97, 85-121, DOI: 10.1007/s10584-009-9585-3.

December 15, 2010

SNE 2060 – a multi-source energy supply scenario

Filed under: Climate Change, Emissions Reduction — buildeco @ 8:57 am

by Barry Brook

In this post, I develop a hypothetical multi-energy-supply scenario for global low-emissions electricity in ~2060. The assumed energy mix is 75 % nuclear fission and 25 % non-nuclear sources, with fossil fuel use virtually eliminated except where it is used with carbon capture and storage.

The % annual growth rate (GR) of energy supplied assumes an exponential rate of change from today’s levels over a 50-year period. It is consistent with (actually, better than) the IPCC WG III greenhouse gas emissions reduction targets. World total supply (277 EJ) matches the demand forecast in the previous post.

The future energy mix scenario offered in Table 1 should not be considered a forecast — it is better thought of as a ‘working hypothesis’ (sensu Elliott and Brook, 2007).

Table 1

Nameplate (installed) capacity is approximate, based on average capacity factors of hydro 0.45 (world average for 2006), wind/solar 0.3, other 0.5, biomass 0.85, fossil CCS 0.85, nuclear 0.9. These capacity factors are similar to those generated today, but are only used to estimate the nameplate column of the table above, and don’t affect the EJ supply column.
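
To make the arithmetic behind the nameplate column explicit, here is a back-of-envelope sketch (my own illustration, using the nuclear share as the example):

# Convert annual energy supplied (EJ/yr) to average GW, then divide by an
# assumed capacity factor to get approximate nameplate GW.
ej_to_gw <- function(ej_per_yr) ej_per_yr * 1e18 / (365.25 * 24 * 3600) / 1e9

nameplate_gw <- function(ej_per_yr, capacity_factor) ej_to_gw(ej_per_yr) / capacity_factor

ej_to_gw(0.75 * 277)           # ~6,600 GWe average for a 75% share of 277 EJ
nameplate_gw(0.75 * 277, 0.9)  # ~7,300 GWe nameplate at a 0.9 capacity factor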

In this scenario, all existing non-fossil-fuel energy sources are expected to increase, with the highest rates of growth anticipated for wind/solar and nuclear fission. Comparing and contrasting my 2060 scenario with that of Trainer (2010, his Table 1), I (optimistically in all cases) have:

1. Hydro growing by 50% on today’s energy share (similar to Trainer’s 19 EJ)

2. Fossil fuels with CCS increases to a level similar to that of hydro (this is one third of the maximum 51 EJ allocated by Trainer)

3. Biomass and waste used for direct electricity generation increases by 50%; the majority of crop energy is used to supply 15 EJ of ethanol

4. Wind and solar output collectively expands 40-fold on today’s levels

5. Nuclear fission growth is then used to balance the total demand. This results in a 21-fold increase on today’s share of ~310 GWe average
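
As a quick check on what such ramps imply (a back-of-envelope sketch of my own), the compound annual growth rates behind the 21-fold nuclear and 40-fold wind/solar expansions are:

# Implied compound annual growth rate for an n-fold increase over a 50-year
# exponential ramp.
growth_rate <- function(fold, years = 50) fold^(1 / years) - 1
growth_rate(21)   # ~0.063, i.e. about 6.3% per year for the nuclear expansion
growth_rate(40)   # ~0.077, i.e. about 7.7% per year for the wind/solar expansion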

The final ratio of 75% nuclear fission to 25% non-nuclear energy sources is similar to the national domestic electricity mix of France today (but the French are, of course, still reliant on oil, and haven’t yet taken the synfuel production step that will be necessary as oil/gas supplies run down).

This general forecast is also consistent with the conclusions of Jean-Baptiste and Ducroux (2003). In reality, there may be considerably greater or lesser supply from any of these low-carbon energy sources, but this depends on a broad range of complex factors, including carbon prices, subsidies and tariffs, energy security considerations, fossil fuel supply constraints, and technological, logistical, economic and socio-political circumstances (Hoffert et al., 2002).

The Table 1 scenario is simply offered for evaluation, as one possible option that is able to meet a number of first-order logical, plausibility and sustainability criteria. Note that it is less demanding than the more pessimistic TR2 scenario described here. I will use the nuclear supply value of 7,300 GWe nameplate capacity (6,500 GWe average) for all future projections in the SNE2060 series.

Having arrived at what I consider a scientifically justifiable scenario, I will now turn back to modelling. The next few posts in the SNE2060 series will look at a couple of possible pathways for that 21-fold increase in nuclear power, incorporating a synergy of thermal reactors and Gen IV alternatives. I will also consider some further constraints on this roll-out, and the carbon mitigation implications of all this energy re-tooling.

Footnote: China has once again revised upwards its 2020 target for nuclear energy. It now stands at 112 GWe, up from the earlier target of 70 GWe (which itself was a positive revision of an earlier 40 GWe goal). This is relative to the 11 GWe of nuclear capacity operating today. China is most definitely moving quickly – as fast as they can possibly go — and I suspect they still haven’t shown their full hand. What their 2030 goal might now be is anyone’s guess (mine, for what it is worth, is ~500 GWe).

December 8, 2010

Systems modelling for synergistic ecological-climate dynamics

Filed under: Climate Change — buildeco @ 3:28 pm

by Barry Brook

For those interested in my current science research directions, I’ve just been funded for another 4 years by the Australian Research Council as a Professorial “Future Fellow“, to continue my work on stochastic systems modelling and scenario optimisation. Here are some details on the project:

Title: Systems modelling for synergistic ecological-climate dynamics

Summary of Proposal: The project aims to improve forecasts of the response of biodiversity to future climate change, and develop better on-ground conservation management. A systems modelling framework will be developed and tested against real-world data, to integrate a wide variety of biological and geophysical inputs and so produce more realistic predictions.

Many computer-based tools have been developed to simulate single-species demography and population dynamics and the effect of habitat loss, disease spread, response to harvest, and shifts in geographical ranges due to climate change. These applications can be individually sophisticated, yet they perform necessarily limited roles in isolation. I will implement a set of ‘meta-modelling’ applications to inter-link separate ecological simulators, by allowing sharing of data structures, parameters and outputs. Using a dynamical systems approach, I will develop and test a framework for multidisciplinary forecasting and sensitivity analysis, providing improved predictions of biodiversity response to the many and complex stressors of global change.

Here is the University of Adelaide media release and below are some further details on the aims and methods:

My strategic aims are to:

(a) determine the extent to which climate change might amplify or mitigate existing major anthropogenic threats to biodiversity (e.g. habitat degradation, overexploitation, and invasive species) and

(b) link this real-world data to novel statistical and computational systems models for predictive purposes.

The goal is to use available long-term data to model and forecast population responses (distributional range, fragmentation, viability, community interactions) to multiple stressors, in particular climate change.

The objectives of the proposed research are three-fold:

1. Develop regional ecological response models for threatened (and other important) species, using long-term data to evaluate the synergistic impact of multiple threats on population responses. These models will integrate existing ecosystem attributes and data related to demography, distribution, and autecology.

2. Partition population responses between anthropogenic, environmental and climatic stressors, identify historical population trends relative to contemporary climate change, and analyse emergent results from complex simulators.

3. Use integrated ecological response models to forecast population responses to future climatic conditions based on ecosystem attributes and regional down-scaled Global Climate Model outputs.

Given the scope and rate of climate change and other human impacts, it is now imperative that we find logistically feasible and cost-effective ways to prevent a cascading loss of biodiversity, from the local/regional to national/international scale, and thus maintain relatively healthy ecosystems in perpetuity. The project aims to improve forecasts of the response of biodiversity to future climate change, and so improve on-ground conservation management. A systems modelling framework will be developed, and tested against real-world data, to integrate a wide variety of biological and geophysical inputs and so produce more realistic predictions. Model development will be based on decades of on-ground monitoring and remote sensing data. Linking ecological response models to climate model projections affords an opportunity to project species and ecosystem responses for the next 50+ years, while providing a state-of-the-art forecasting tool for conservation and management agencies.

Meta-modelling is an approach in which computational links are constructed between separately developed discipline-specific models (described in Nyhus et al. 2007). The concept is that individual simulators (existing or new models) can function as arbitrarily powerful stand-alone programs, with a meta-model framework being used to manage the integration of two or more system components, to create a dynamic ‘bottom-up’ simulation. By passing data structures and variables describing the state of the system between programs, this novel approach can be used to develop sophisticated applications without the need to focus on time-consuming (and often specialist) development of all individual processes. It also holds the promise of generating emergent, non-linear behaviour – much like the output produced by Earth Systems models, which include atmosphere, cryosphere and layered ocean models, dynamic vegetation simulators, etc. (IPCC 2007).
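
To make the meta-modelling idea concrete, here is a purely illustrative toy in R (emphatically not project code): two trivially simple ‘simulators’, a climate anomaly generator and a discrete logistic population model, exchange state once per simulated year.

# Toy illustration of the meta-modelling concept: pass the state of one
# simulator to another at each time step. All numbers are invented.
climate_step <- function(year) 0.02 * (year - 2010) + rnorm(1, 0, 0.1)  # warming anomaly (C)

pop_step <- function(N, anomaly, r = 0.10, K = 1000) {
  r_eff <- r - 0.05 * anomaly             # assumed erosion of growth rate with warming
  max(N + r_eff * N * (1 - N / K), 0)     # discrete logistic update
}

N <- 500
for (year in 2011:2060) {
  anomaly <- climate_step(year)           # output of the climate 'simulator'...
  N <- pop_step(N, anomaly)               # ...becomes input to the population model
}
N                                         # population size in 2060 under this toy coupling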

——————–

I also hope to build energy systems case studies into this work, as it’s now obviously a serious research interest of mine. There’s lots more in the formal proposal of course, but I’d be happy to answer any questions here on the details of the approach.
