Climate change

September 22, 2011

Why population policy will not solve climate change – Part 1

Filed under: Climate Change, Population growth — buildeco @ 3:23 pm

by Barry Brook

I have given lots of talks on climate change over the last few years. In these presentations, I typically focus on explaining the basis of the anthropogenic climate change problem, how it sits in the context of other human and natural changes, and then, how greenhouse gas emissions could be mitigated with the elimination of fossil fuels and substitution with low-carbon replacement technologies such as nuclear fission, renewables of various flavours, energy efficiency, and so on. When question time follows, I regularly get people standing up and saying something along the following lines:

It is all very well to focus on energy technology, and even to mention behavioural changes, but the real problem — the elephant in the room that you’ve ignored — is the size of the human population. No one seems to want to talk about that! About population policy. If we concentrated seriously on ways to reduce population pressure, many other issues would be far easier to solve.

On the face of it, it is hard to disagree with such statements. The human population has grown exponentially, from ~650 million in the year 1700 AD to almost 7 billion today. When coupled with our increasing economic expansion and concomitant rising demand for natural resources, this rapid expansion of the human enterprise has put a huge burden on the environment and driven an accelerating depletion of fossil fuels and various high-grade ores, etc. (the Anthropocene Epoch). Obviously, to avoid exhaustion of accessible natural resources, to limit the degradation of ecosystems, and to counter the need to seek increasingly low-grade mineral resources, large-scale recycling and sustainable use of biotic systems will need to be widely adopted. Of this there is little room for doubt.

So, the huge size of the present-day human population is clearly a major reason why we face so many mounting environmental problems and are now pushing hard against planetary boundaries. But does it also follow that population control, via various policies, is the best solution to these global problems? It might surprise you to learn that I say NO (at least over meaningful time scales). But it will take some time to explain why — to work through the nuances, assumptions, sensitivities and the global versus regional story. So, in a series of posts, I’ll explain why I’ve reached this conclusion, and, as always, invite feedback!

In part 1, below, I outline some of the basic tools required to come up with some reasonable answers. A huge amount of relevant data on this topic (human demography) is available from the United Nations Population Division, the Human Life-Table Database, the Human Mortality Database, and the U.S. Census Bureau. The data and statistics I cite in these posts come from these sources.

First, let’s look at the global situation. As of 1 July 2011, the human population numbered approximately 6.96 billion people (that’s 6,960 million, give or take a few tens of millions), and is expected to cross 7 billion in March 2012. For historical context, in 1900 it was 1.6 billion, in 1960 it was 3 billion, in 1980 it was 4.5 billion and in 1999 it was 6 billion.

The mid-range forecast is 7.65 billion by 2020, 8.61 billion by 2035, 9.31 billion by 2050 and 10.12 billion by 2100. So, population globally is projected to continue to rise throughout the 21st century, but at a decelerating rate. A summary of a range of scenarios is given in the following table:

The annual growth rate can be calculated from the ratio of one 5-year population total to the previous one, e.g., for the medium variant in 2015: (7,284,296/6,895,889)^(1/5) − 1 ≈ 0.011, or 1.1% per annum. This compares to the peak growth rate of 2.2% in the early 1960s — so it’s clear that population growth is already slowing, but only gradually.
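
To make the arithmetic explicit, here is the same calculation as a minimal Python sketch (the population totals are the UN medium-variant figures quoted above, in thousands):

```python
# Annual growth rate from two 5-yearly population totals (UN medium variant).
pop_2010 = 6_895_889   # thousands
pop_2015 = 7_284_296   # thousands

ratio = pop_2015 / pop_2010        # growth over the 5-year interval
annual = ratio ** (1 / 5) - 1      # fifth root, minus 1, gives the annual rate
print(f"{annual * 100:.1f}% per annum")   # -> 1.1% per annum
```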

The medium and high variants in the table above indicate no stabilisation of population size until after 2100, while the low variant hits a peak in 2045 with a gradual decline thereafter, reaching the 2001 level once again by the year 2100. The low variant involves assumptions about declining birth rates that are beyond the expectations of most demographers.

In this first post, I want to go beyond the standard UN assumptions to look, in brief, at some more extreme scenarios. I should note here that the model behind these projections is reasonably complex, being based on age-specific mortality and fertility schedules, current cohort-by-cohort inventories (in 5-year age classes), and the forecast trends in these vital rates over time. In the second post, I’ll explain some of the detail behind this demographic projection model, and explore its sensitivity to various assumptions and parameter estimates. In the third post, I’ll look at the country-specific forecasts, from both the developed and developing world. But first, here are some alternative global scenarios, which are not meant to be realistic… merely illustrative.
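
For readers who want to see the mechanics, here is a minimal sketch (in Python, using NumPy) of a cohort-component projection of the kind described above. The survival and fertility schedules below are invented placeholders, not the UN’s actual rates; the point is only the bookkeeping: age each 5-year cohort forward, apply mortality, and add births at the bottom.

```python
import numpy as np

# Toy cohort-component projection in 5-year steps.
# All rates are illustrative placeholders, NOT the UN schedules.
ages = np.arange(0, 101, 5)                 # age classes: 0-4, 5-9, ..., 100+
n = len(ages)
survival = np.linspace(0.99, 0.55, n - 1)   # P(surviving into the next class)
fertility = np.zeros(n)
# Births per woman per 5-year step, ages 15-19 through 45-49:
fertility[3:10] = [0.05, 0.25, 0.30, 0.25, 0.10, 0.04, 0.01]

pop = np.full(n, 100.0)                     # initial cohort sizes (millions, say)

def step(pop):
    """Advance the population one 5-year step."""
    births = 0.5 * (fertility * pop).sum()  # 0.5: only females bear children
    new = np.empty_like(pop)
    new[0] = births
    new[1:] = pop[:-1] * survival           # age cohorts forward; the oldest
    return new                              # class simply dies out

for year in range(2010, 2060, 5):
    pop = step(pop)
print(f"Projected total in 2060: {pop.sum():.0f}")
```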

In Scenario 1, birth and death rates are locked at those observed during the last 5 years. Population size tracks slightly above the UN medium variant, reaching 9.4 billion in 2050 and 11.3 billion in 2100. The intrinsic population growth rate (GR) in this model is 0.36% per annum, but due to an unstable initial age structure, this lower equilibrium rate is not reached until 2070. Projecting forward many centuries, to the year 2500, the population size is 47 billion. This is obviously ridiculous — such projections with unchanging vital rates cannot hold over the long term.

Scenario 2 is the UN medium variant, listed in the table above. This assumes that total fertility (TF: number of children a woman would produce over her lifetime if she survived through to menopause) declines from today’s level of 2.5 to 2.03 by the year 2100. There is also a slight decrease in death rates (DR), which I will explain more in the next post.

Scenario 3, illustrated below, is the same as Scenario 2, except total fertility is assumed to decline (linearly) to 1.0 by 2100, rather than to 2.03.

In this case, the 2050 population size is 9.03 billion, falling to 7.96 billion by 2100. This is still higher than the UN low variant, showing that the low variant requires some fairly heroic fertility assumptions. Looking further into the future, if we assume that TF thereafter stabilises at 1, the global population would eventually decline to less than 1 billion by the year 2210, below 100 million in 2300, and below 1 million in 2460! The underlying GR in this forecast, after 2100, is a decline of 2.73% per annum. So, for effective extinction of the human population to be avoided, either birth rates would once again have to rise at some point after 2100, or else (more likely) death rates would decline substantially due to medical improvements. More on this in the second post on this topic.
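
The long-run arithmetic behind those dates is simple exponential decay. A sketch, assuming the population shrinks smoothly at the asymptotic 2.73% rate (the published trajectory falls more slowly at first because of age-structure momentum):

```python
import math

r = 0.0273                    # asymptotic annual rate of decline after 2100
halving = math.log(2) / r
print(f"Population halves every {halving:.0f} years")   # ~25 years

tenfold = math.log(10) / r    # e.g. 1 billion down to 100 million
print(f"A tenfold decline takes {tenfold:.0f} years")   # ~84 years, roughly
# consistent with the ~2210 (1 billion) to ~2300 (100 million) dates above
```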

Right, let’s go even more extreme with Scenario 4. What if global TF declined to 1 by the year 2030 — within the next 20 years — something that could only be achieved by implementing a global 1-child-per-couple policy within a decade or two (I’ll let you judge the likelihood of this…). Here are the results:

In this case, the 2050 population size is 7.03 billion, and 3.79 billion by 2100. The 1 billion mark is passed in 2160, and 100 million in 2245.

For Scenario 5, let’s assume that some virus or hazardous chemical causes global sterilization by 2015. Here’s what the trajectory looks like.

Population is 4.90 billion in 2050 and crosses 1 billion in about 2090. Virtually everyone is dead by 2120, as you might expect. Now to be fair, the reality of a scenario like this would almost certainly be much worse, because as the population aged with no children, society would quickly fall apart. Most people would probably be dead due to societal collapse by mid-century.

Finally, let’s wind back the TF assumption a bit, but ramp up the death rates. Assume for instance that climate change has caused more famines, disease etc. such that death rates double over the course of the 21st century, rather than decline (as expected) due to improved medical treatments, and TF declines to 1 by 2100.

In this case, Scenario 6, the 2050 global population is 6.89 billion, and it has declined to 2.26 billion by 2100. The 1 billion mark is crossed in 2120, and 100 million in 2160. This seems to be the most plausible of the extremely low variant scenarios that I can possibly justify (I don’t say this is probable). But even in this grim outlook, global population is, in 2050, about the same as today!

The TF = 1 by 2030 pathway (Scenario 4) just does not seem in any way achievable or desirable, and even so, the total population size in 2050 would still be larger than today’s. The conclusion is clear — even if the human collective were to pull as hard as possible on the ‘total fertility’ policy lever, the result would NOT constitute an effective policy for addressing climate change, for which we need to have major solutions well under way by 2050 and essentially wrapped up by 2100.

In summary (for part #1), I support policies to encourage global society to achieve the low growth variant UN scenario. (More on that in the next post). But I must underscore the point that population control policy is patently not the ‘elephant in the room’ that many claim — it’s more like a herd of goats that’s eaten down your garden, and is still there, munching away…

June 23, 2011

What price of Indian independence? Greenpeace under the spotlight

Filed under: Emissions Reduction, Energy Demand, Global Warming — buildeco @ 1:56 pm
Two PWRs under construction at Kudankulam, India

Guest Post by Geoff Russell. Geoff is a mathematician and computer programmer and is a member of Animal Liberation SA. His recently published book is CSIRO Perfidy. To see a list of other BNC posts by Geoff, click here.

——————

India declared itself a republic in 1950 after more than a century of struggle against British Imperialism. Greenpeace India however, is still locked firmly under the yoke of its parent. Let me explain.

Like many Australians, I only caught up with Bombay’s 1995 change of name to Mumbai some time after it happened. Mumbai is India’s city of finance and film, of banks and Bollywood: a huge, seething coastal metropolis on the north-western side of India. It’s also the capital of the state of Maharashtra, which is about 20 percent bigger than the Australian state of Victoria but has 112 million people compared to Victoria’s 5.5 million. Mumbai alone has over double Victoria’s entire population. Despite its population, the electricity served up by Maharashtra’s fossil fuel power stations plus one big hydro scheme is just 11.3 GW (gigawatts; see Note 3), not much more than the 8 or so GW of Victoria’s coal and gas fumers. So despite Mumbai’s dazzling glass and concrete skyline, many Indians in both rural and urban areas of the state still cook with biomass … things like wood, charcoal and cattle dung.

The modern Mumbai skyline at night

Mumbai’s wealth is a magnet for terrorism. The attacks in 2008, which killed 173 people, followed bombings in 2003 and 1993 which took 209 and 257 lives respectively. Such events are international news, unlike the daily death and illness, particularly among children, caused by cooking with biomass. Each year, cooking smoke kills about 256,000 Indian children between 1 and 5 years of age through acute lower respiratory infections (ALRI). Those who don’t die can suffer long-term consequences to their physical and mental health. A rough pro-rata estimate would see about 23,000 children under 5 die in Maharashtra every year from cooking smoke.
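
The Maharashtra figure is a simple pro-rata share of the national toll; here is a sketch of that arithmetic (the 1.2 billion figure for India’s total population is my round-number assumption, not from the article):

```python
# Pro-rata share of national cooking-smoke deaths for Maharashtra.
india_child_deaths = 256_000    # annual ALRI deaths, children aged 1-5
maharashtra_pop = 112e6
india_pop = 1.2e9               # assumed round figure for India's population

estimate = india_child_deaths * maharashtra_pop / india_pop
print(f"~{estimate:,.0f} child deaths per year in Maharashtra")   # ~23,900
```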

The image is from a presentation by medical Professor Kirk Smith, who has been studying cooking smoke and its implications for 30 years.

Medical Prof. Kirk Smith’s summary of health impacts from cooking fires

The gizmo under the woman’s right arm measures the noxious fumes she is exposed to while cooking. Kirk doesn’t just study these illnesses, but has been spinning off development projects which develop and distribute cleaner cooking stoves, to serve as an interim measure until electricity arrives.

The disconnect between what matters about Mumbai and India generally to an Australian or European audience and what matters locally is extreme. But a visit to the Greenpeace India website shows it is simply a western clone. In a country where real matters of life and death are ubiquitous, the mock panic infecting the front page of the Greenpeace India website at the death-less problems of the Fukushima nuclear plant seems weird at best, and obscene at worst. “Two months since Fukushima, the Jaitapur project has not been stopped”, shouts the text over one front page graphic, in reference to the nuclear plant proposed for construction at Jaitapur. In those two months, nobody has died of radiation at Fukushima, but 58,000 Indian children have died from cooking smoke. They have died because of a lack of electricity. Some thousands in Maharashtra alone.

Greenpeace, now an obstructive dinosaur

The whole world loved Greenpeace back in its halcyon days of protesting whaling and the atmospheric testing of nuclear weapons. Taking on whalers and the French Navy in the open sea in little rubber boats was indeed worthy of Mahatma Gandhi. But the legacy of those days is now an obstacle to Greenpeace helping to fight the much bigger environmental battles now under way. As Greenpeace campaigns to throw out the nuclear-powered baby with the weapons-testing bathwater, it seems to have forgotten the 2010 floods which displaced 20 million people on the subcontinent. The Australian Council for International Development reported in May 2011 that millions were still displaced, with 913,000 homes completely destroyed. Millions also have ongoing health issues, with rising levels of tuberculosis, dengue fever and the impacts of extended periods of malnutrition. The economic structure of large areas has been devastated, along with food and seed stocks. Areas in southern Pakistan are still under water.

This foreshadows the scale of devastation which will be delivered more frequently as global warming bites.

Brown clouds, cooking and climate change

Regardless of what you think about nuclear power, you’d think breathable air would be an environmental issue worthy of Greenpeace’s attention, but biomass cooking is missing from Greenpeace India’s campaign headings.

Biomass cooking isn’t just a consequence of poverty, it feeds into a vicious feedback loop. People, usually women and children, spend long periods collecting wood or cattle dung (see image or full study). This reduces educational opportunities, while pressure on forests for wood and charcoal degrades biodiversity. Infections from smoke, even if not fatal, combine with the marginal nutrition produced by intermittent grain shortages to yield short and sickly lifespans, while burning cattle dung wastes a resource far more valuable as fertiliser.

In 2004, a World Health Organisation Report estimated that, globally, 50 percent of all households and 90 percent of rural households cook with biomass. In India, they estimated that 81 percent of Indian households cook with biomass. That figure will have dropped somewhat with significant growth in Indian power generation over the past decade but will still be high.

Biomass cooking isn’t only a health issue, but a significant player in climate change. Globally, the black carbon in the smoke from over 3 billion people cooking and boiling water daily with wood, charcoal or cattle dung forms large brown clouds with regional and global impacts.

Maharashtra’s nuclear plans

Apart from a reliable food supply, the innovation that most easily distinguishes the developed and developing world is electricity. It’s the shortage of this basic commodity that kills those 256,000 Indian children annually. Electric cooking is clean and slices through the poverty-inducing feedback loop outlined above. Refrigeration reduces not just food wastage but also food poisoning.

If you want to protect forests and biodiversity as well as children in India (and the rest of the developing world), then electricity is fundamental. Higher childhood survival is not only a worthy goal in itself, but it is also critical in reducing birthrates.

Apart from a Victorian-sized coal-fired power supply, the 112 million people of Maharashtra also have the biggest nuclear power station in India. This is a cluster of two older reactors and two newer ones, opened in 2005 and 2006. The newer reactors were constructed by Indian companies and were completed within schedule and below budget. The two old reactors are relatively small, but the combined power of the two newer reactors is nearly a gigawatt. India has a rich mathematical heritage going back a thousand years which underpins a sophisticated nuclear program. Some high-level analytic techniques were known in India hundreds of years before being discovered in Europe.

India has another nuclear power station planned for Maharashtra. And a much bigger one. This will be half a dozen huge 1.7 GW French EPR reactors at Jaitapur, south of Mumbai. On its own, this cluster will surpass the entire current output of the state’s coal-fired power stations. The project will occupy 968 hectares and displace 2,335 villagers (Wikipedia). How much land would solar collectors occupy for an Andasol-like concentrating solar thermal system of the same output? About 40 times more, and it would either displace something like 80,000 people or eat into India’s few remaining wildlife habitats.

If Greenpeace succeeds in delaying the Jaitapur nuclear plant, biomass cooking in the area it would have serviced will continue together with the associated suffering and death of children. It’s that simple. Greenpeace will have direct responsibility no less than if it had bombed a shipment of medical supplies or prevented the decontamination of a polluted drinking well.

Jaitapur and earthquakes

In the wake of the reactor failures at Fukushima which killed nobody, Greenpeace globally and Greenpeace India are redoubling their efforts to derail the new Jaitapur nuclear plant. The Greenpeace India website (Accessed 9th May) carries a graphic of the Fukushima station with covering text:

The Jaitapur nuclear plant in India is also in an earthquake prone zone. Do we want to take the risk? The people of Jaitapur don’t.

The Greenpeace site claims that the chosen location for the Jaitapur power plant is in Seismic Zone 4, with a maximum recorded quake of 6.3 on the Richter scale. Accepting this as true (Wikipedia says it’s Zone 3), should anybody be afraid?

“Confident” and “relaxed” are far more appropriate responses for anybody who understands the Richter scale. It’s logarithmic. Base 10.

Still confused? A quake of Richter magnitude 7 is 10 times more powerful than one of magnitude 6. A quake of magnitude 8 is 100 times more powerful than one of magnitude 6. And a magnitude 9 quake, like Japan’s monster on March the 11th, is a thousand times more powerful than a quake of magnitude 6. The 40-year-old Fukushima reactors came through this massive quake with damage but no deaths. The reactors shut down as they were designed to, and the subsequent problems, still fatality-free and caused primarily by the tsunami, would not have occurred with a more modern reactor. We haven’t stopped building large buildings in earthquake zones because older designs failed.
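
The factors above are exact for ground-wave amplitude, which is what the Richter scale measures; under the standard seismological energy relation (energy scales by roughly 31.6 per magnitude unit), the gaps are larger still. A quick sketch:

```python
# Comparing earthquakes on the logarithmic Richter scale.
def amplitude_ratio(m1: float, m2: float) -> float:
    """Ratio of measured ground-wave amplitudes between magnitudes m1 and m2."""
    return 10 ** (m1 - m2)

def energy_ratio(m1: float, m2: float) -> float:
    """Approximate ratio of seismic energy release (~31.6x per magnitude unit)."""
    return 10 ** (1.5 * (m1 - m2))

print(amplitude_ratio(9, 6))          # 1000: the 'thousand times' quoted above
print(f"{energy_ratio(9, 6):,.0f}")   # ~31,623: the energy gap is even larger
```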

Steep cliffs and modern reactor designs at Jaitapur will mean that tsunamis won’t be a problem. All over the world people build skyscrapers in major earthquake zones. The success of the elderly Fukushima reactors in the face of a monster quake is cause for relief and confidence, not blind panic. After all, compared to a skyscraper like Taipei 101, designing a low profile building like a nuclear reactor which can handle earthquakes is a relative doddle.

Despite being a 10 on the media’s self-proclaimed Richter scale, subsequent radiation leaks and releases at Fukushima will cause few if any cancers. It’s unlikely that a single worker will get cancer, let alone any of the surrounding population. This is not even a molehill next to the mountain of cancers caused by cigarettes, alcohol and red meat. The Fukushima evacuations are terrible for the individuals involved but even 170,000 evacuees pales beside the millions of evacuations caused by increasing climate based cataclysms.

Greenpeace India haunted by a pallid European ghost

Each year that the electricity supply in Maharashtra is inadequate, some 23,000 children under the age of 5 will die. They will die this year. They will die next year. They will keep dying while the electricity supply in Maharashtra is inadequate. While the children die, their parents will mourn and continue to deplete forests for wood and charcoal. They will continue to burn cattle dung and they will have more children.

A search of the Greenpeace India web pages finds no mention of biomass cooking. No mention of its general, environmental, climate or health impacts. But there are 118 pages referencing Chernobyl.

At Chernobyl, 237 people suffered acute radiation sickness with 28 dying within 4 months and another 19 dying between 1987 and 2006. As a result of the radiation plume and people who were children at the time drinking contaminated milk, there were 6,848 cases of thyroid cancer between 1991 and 2005. These were treated with a success rate of about 98% (implying about 140 deaths). Over the past 25 years there have also been some thousands of other cancers that might, or might not, have been caused by Chernobyl amongst the millions of cancers caused by factors that Greenpeace doesn’t seem the least worried by, things like cigarettes, alcohol and red meat.

On the other hand, each year that India’s electricity supply is inadequate will see about 256,000 childhood deaths. As an exercise, readers may wish to calculate the number of Indian children who have died due to inadequate cooking fuels over the past 25 years and compare it with the 140 children who died due to the Chernobyl accident. Every one of those Indian deaths was every bit as tragic as every one of those Chernobyl deaths.
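
For those who would like the exercise done for them, a sketch using only the figures already quoted:

```python
# The reader's exercise: 25 years of cooking-smoke deaths vs. Chernobyl.
annual_deaths = 256_000
print(f"{annual_deaths * 25:,} Indian child deaths over 25 years")   # 6,400,000
print("versus ~140 deaths attributed to Chernobyl thyroid cancers")
```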

Greenpeace India is dominated by the nuclear obsession of its parent organisation. On the day the Greenpeace India blog ran a piece about 3 Japanese workers with burned feet, nearly a thousand Indian children under 5 died from cooking-stove smoke. They didn’t get a mention on that day, or any other.

Why is Greenpeace India haunted by this pallid European ghost of an explosion 25 years ago in an obsolete model of reactor in Ukraine? Why is Greenpeace India haunted by the failure of a 40 year old Fukushima reactor without a single fatality? This is a tail wagging not just a dog, but the entire sled team.

Extreme scenarios

It’s time Greenpeace India looked rationally at Indian choices.

Should they perhaps copy the Germans, whose 15-year flirtation with solar power hasn’t made the slightest dent in their fossil fuel use? (Note 2) It may simply be that the Germans are technologically incompetent and that things will go better in India. Perhaps the heirs of Ramanujan will succeed where the heirs of Gauss have failed. Alternatively, should India copy the Danes, whose wind farms can’t even half-power a tiny country of 5.4 million?

India’s current electricity sources. Cooking stoves not included! ‘Renewables’ are predominantly biomass thermal power plants and wind energy, with some solar PV.

India is well aware that she only has four or five decades of coal left, but seems less aware, like other governments, that atmospheric CO2 must be stabilised at 350 ppm, together with strict reductions in short-lived forcings like black carbon and methane, and that these constraints require her, like Australia and everybody else, to leave most of that coal in the ground. But regardless of motivation, India needs both a rebuild and an expansion of her energy infrastructure over the next 50 years.

Let’s consider a couple of thumbnail sketches of two very different extreme scenarios that India may consider.

The first scenario is to phase out all of India’s coal, oil and gas electricity generation facilities and replace them with nuclear. Currently these fossil fuel facilities generate about 900,000 GWh (gigawatt-hours) of electricity. Replacing them with 1,000 nuclear reactors at 1.7 GW each would generate about 14 million GWh annually. This is about 15 times the current electricity supply and roughly similar to Victoria’s per-capita electricity supply. It’s a fairly modest target, because electricity will also be required to replace oil and gas in the future. I haven’t factored in population growth, in the hope that energy efficiency gains will compensate for it, and in the confidence that electrification will reduce population growth. Nevertheless, this amount of electricity should be enough to catapult India into the realms of the developed world.
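
A sketch of that arithmetic (the 90% capacity factor is the same assumption used in Note 2, not an Indian operating statistic):

```python
# Output of 1,000 reactors at 1.7 GW each, assuming a 90% capacity factor.
reactors = 1_000
power_gw = 1.7
cf = 0.9
hours = 24 * 365

annual_gwh = reactors * power_gw * hours * cf
print(f"{annual_gwh / 1e6:.1f} million GWh/yr")             # ~13.4 ('about 14 million')
print(f"{annual_gwh / 900_000:.0f}x current fossil output")  # ~15x
```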

These reactors should last at least 60 years, and the electricity they produce will prevent 256,000 children under 5 dying every year. Over the lifetime of the reactors, that is about 15.4 million childhood deaths averted. But this isn’t so much about specific savings as a total transformation of India, which will see life expectancy rise to developed-world levels if dangerous climate change impacts can be averted and a stable global food supply is attained.

Build the reactors in groups of 6, as is proposed at Jaitapur, and you will need to find 166 sites of about 1000 hectares. The average density of people in India is about 3 per hectare, so you may need to relocate half a million people (3000 per site). This per-site figure is close to the actual figure for Jaitapur.

There are currently over 400 nuclear reactors operating worldwide, and there has been one Chernobyl and one Fukushima in 25 years. Nobody would build a Chernobyl-style reactor again, but let’s be really silly and presume that over 60 years India suffered 2 Chernobyls and 2 Fukushimas. This might cost 20,000 childhood cancers with a 98% successful treatment rate … so about 400 children might die. There may also be a few thousand adult leukemias, easily counterbalanced by the vast amount of adult health savings I haven’t considered.

The accidents would also result in 2 exclusion zones of about 30 kilometres in radius: effectively, 2 new modestly sized wildlife parks. We know from Chernobyl that wildlife will thrive in the absence of humans. With a 30 km radius, each exclusion-zone wildlife park would occupy about 282,743 hectares.

If you are anti-nuclear, this is a worst-case scenario: the total transformation of India into a country where children don’t die before their time in vast numbers.

This is a vision for India that Greenpeace India is fighting tooth and nail to avoid.

As our alternative extreme scenario, suppose India opted for concentrating solar thermal power stations, similar to the Spanish Andasol system, to supply 14 million GWh annually. Each such unit supplies about 180 GWh per year, so you would need at least 78,000 units with a total solar collector area of 3.9 million hectares, equivalent to 13 of our hypothesised exclusion-zone wildlife parks from the accidents. But, of course, these 3.9 million hectares are not wildlife parks. I say “at least 78,000” units because the precise number will depend on matching the demand for power with the availability of sunshine. Renewable sources of energy like wind and solar need overbuilding to make up for the variability and unpredictability of wind and cloud cover. The 78,000 Andasol plants each come with 28,000 tonnes of molten salt (a mix of sodium nitrate and potassium nitrate) at 400 degrees centigrade, which acts as a huge battery, storing energy when the sun is shining for use when it isn’t. Local conditions will determine how much storage is required. The current global production of ordinary sodium chloride is about 210 million tonnes annually. Producing the 2.1 billion tonnes of special salt required for 78,000 Andasols will be difficult, as will the production of the steel and concrete. Compared to the nuclear reactors, you will need about 15 times more concrete and 75 times more steel.
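
The scaling arithmetic can be checked directly; a sketch using the per-unit figures quoted above (the ~50 ha of collector per unit is implied by the totals, not quoted directly):

```python
# Scaling Andasol-class solar thermal up to 14 million GWh per year.
target_gwh = 14_000_000
per_unit_gwh = 180        # annual output of one Andasol-class unit
salt_per_unit = 28_000    # tonnes of molten nitrate salt per unit
total_area_ha = 3.9e6     # quoted collector area for the full build
park_ha = 282_743         # one hypothetical 30 km exclusion-zone park

units = target_gwh / per_unit_gwh
print(f"{units:,.0f} units needed")                            # ~77,778
print(f"{units * salt_per_unit / 1e9:.1f} billion t of salt")  # ~2.2
print(f"{total_area_ha / units:.0f} ha of collector per unit") # ~50
print(f"{total_area_ha / park_ha:.1f} exclusion-zone parks")   # ~13.8
```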

Build the 78,000 Andasols in groups of 78 and you have to find 1,000 sites of about 4,000 hectares each. Alternatively you could use 200 sites of 20,000 hectares. The average density of people in India is over 3 per hectare, so you may need to relocate perhaps 12 million people. If you were to use solar photovoltaics in power stations (as opposed to rooftops), then you would need more than double the land (Note 4) and have to relocate even more people.

Sustainability

In a previous post, I cited an estimate of 1 tonne of CO2 per person per year as a sustainable greenhouse gas emissions limit for a global population of 8.9 billion. How do our two scenarios measure up?

A current estimate of full life-cycle emissions from nuclear power is 65 g/kWh (grams of CO2 per kilowatt-hour), so 14 million GWh of electricity shared between 1.4 billion Indians is about 0.65 tonnes per person per annum, which leaves 0.35 tonnes for food and other non-energy greenhouse gas emissions. So not only is it sustainable, it’s in the ballpark as a figure we will all have to live within.
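
A sketch of that calculation (1.4 billion is the Indian population assumed in the text):

```python
# Per-capita CO2 from 14 million GWh of nuclear electricity at 65 g/kWh.
kwh = 14_000_000 * 1e6            # 14 million GWh expressed in kWh
tonnes_co2 = kwh * 65 / 1e6       # grams -> tonnes
print(f"{tonnes_co2 / 1.4e9:.2f} t CO2 per person per year")   # ~0.65
```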

The calculations required to check whether this amount of electricity is sustainable from either solar thermal or solar PV are too complex to run through here, but neither will be within budget if any additional fossil fuel backup is required. Solar PV currently generates about 100 g/kWh (p.102) under Australian conditions, so barring technical breakthroughs, it is unsustainable, unless you are happy not to eat at all. Solar thermal is similar to nuclear in g-CO2/kWh, except that the required overbuilding will probably blow the one tonne budget.

The human cost of construction time

The relative financial costs of the two scenarios could well have a human cost. For example, more money on energy usually means less on ensuring clean water. But this post is already too long. However, one last point needs to be made about construction time. I strongly suspect that while building 1000 nuclear reactors will be a vast undertaking, it is small compared to 78,000 Andasols. Compare the German and French experiences of solar PV and nuclear, or simply think about the sheer number and size of the sites required. The logistics and organisational time could end up dominating the engineering build time. We know from various experiences, including those of France and Germany, that rapid nuclear builds are physically plausible and India has demonstrated this with its own reactor program.

If I’m right and a solar (or other renewable) build is slower than a nuclear build, then the cost in human suffering will easily dwarf anything from any reasonable hypothesis on the number of accidents. Can we put a number on this? If we arbitrarily assume a pro-rata reduction in childhood deaths in proportion to the displacement of biomass cooking with electricity, then we can compare a phase-out over 10 five-year plans with one taking, say, 11. At the end of each 5-year plan a chunk of electricity comes on line and the number of cooking smoke deaths drops; at the end of the process the number of deaths from cooking smoke is 0. It’s a decline in a series of 10 large or 11 slightly smaller steps. Plug in the numbers, add up the totals over the two time periods, and the difference is … 640,000 deaths in children under 5. Construction speed matters.
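
Here is a minimal sketch reproducing that difference, under the stated assumption that the annual death rate falls pro-rata at the end of each completed five-year plan:

```python
# Extra under-5 deaths if the biomass phase-out takes 11 plans instead of 10.
def cumulative_deaths(n_plans, start_rate=256_000, plan_years=5):
    """Annual deaths fall pro-rata by 1/n_plans at the end of each plan."""
    return sum(start_rate * (1 - k / n_plans) * plan_years
               for k in range(n_plans))

fast = cumulative_deaths(10)   # 50-year build
slow = cumulative_deaths(11)   # 55-year build
print(f"{slow - fast:,.0f} additional deaths")   # 640,000
```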

In conclusion

How do my back-of-an-envelope scenarios compare with India’s stated electricity development goals? According to India’s French partner in the Jaitapur project, Areva, India envisages about half my hypothesized electrical capacity being available by 2030, so a 50 year nuclear build plan isn’t ridiculous provided floods or failed monsoons don’t interfere unduly.

As for the safety issues and my hypothesised accidents, it doesn’t matter much what kind of numbers you plug in as a consequence of the silly assumption of a couple of Chernobyls. They are all well and truly trumped: firstly, by the increase in health for Indian children, secondly by the reforestation and biodiversity gains as biomass cooking declines, thirdly by the reduction in birth rates as people get used to not having their children die, and lastly, by helping us all have a fighting chance of avoiding the worst that climate change might deliver.

It’s time Greenpeace India told its parent organisation to shove off. It’s time Greenpeace India set its own agenda and put the fate of Indian children, the Indian environment and the planet ahead of the ideological prejudices of a parent organisation which has quite simply lost the plot.


Note 1: Nuclear Waste: What about the nuclear waste from a thousand reactors? This is far less dangerous than current levels of biomass cooking smoke and is much more easily managed. India has some of the best nuclear engineers in the business. They are planning thorium breeder reactors which will produce quite small amounts of waste, far smaller and more manageable than the waste from present reactors. Many newer reactor designs can run on the waste from the present generation of reactors; one such design is the IFR (Integral Fast Reactor), and details can be found on bravenewclimate.com.

Note 2: German Solar PV: Germany installed 17 GW of solar photovoltaic (PV) power cells between 2000 and 2010, and in 2010 those 17 GW of cells delivered 12,000 GWh of energy. If those cells had been running in 24×7 sunshine, they would have delivered 17 × 24 × 365 ≈ 149,000 GWh of energy. So they delivered about 8 percent of their rated output (this is usually called their capacity factor). A single 1.7 GW nuclear reactor can produce about 1.7 × 24 × 365 × 0.9 ≈ 13,400 GWh in a year (the 0.9 is a reasonable capacity factor for nuclear … 90 percent). Fossil fuel use for electricity production in Germany hasn’t changed much in the past 30 years, with most of the growth in the energy supply being due to the development of nuclear power in Germany during the late 70s and 80s.
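
The note’s arithmetic, restated as a sketch (figures as quoted above):

```python
# Capacity factors: Germany's 2010 PV fleet vs. one large nuclear reactor.
hours = 24 * 365

pv_gw, pv_gwh = 17, 12_000
print(f"PV capacity factor: {pv_gwh / (pv_gw * hours):.0%}")    # ~8%

nuke_gw, nuke_cf = 1.7, 0.9
print(f"One reactor: {nuke_gw * hours * nuke_cf:,.0f} GWh/yr")  # ~13,403
```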

Note 3: Gigawatts, for non-technical readers: The word billion means different things in different countries, but “giga” always means a thousand million, so a gigawatt (GW for short) is a useful unit for large amounts of power. A 100-watt globe takes 100 watts of power to run. Run it for an hour and you have used 100 watt-hours of energy. Similarly, a GWh is a gigawatt of power used for an hour, and this is a useful unit for large amounts of energy. If you want to know all about energy units for a better understanding of BNC discussions, here’s Barry’s primer.

Note 4: Area for solar PV: German company JUWI provides large-scale PV systems. Their 2 MW (megawatt) system can supply about 3.1 GWh per year and occupies 2 hectares. To supply a similar amount of energy to an Andasol unit would need 180/3.1 ≈ 58 units occupying some 116 hectares.

Disposal of UK plutonium stocks with a climate change focus

Filed under: IFR (Integral Fast Reactor) Nuclear Power, Nuclear Energy — buildeco @ 1:49 pm
by Barry Brook

In the 1950s, following World War II, the United Kingdom and a handful of other nations developed a nuclear weapons arsenal. This required the production of plutonium metal (or highly enriched uranium) in purpose-built facilities. ‘Civil’ plutonium was also produced, since the facilities for separation existed and it was thought that this fissile material would prove useful in further nuclear power development.

Fifty years on, the question of what to do with the UK’s separated plutonium stocks is an important one. Should it, for instance, be downblended with uranium to produce mixed-oxide fuel for thermal reactors, and then disposed of in a geological repository once it has been ‘spiked’ by fission products and higher actinide isotopes? Or is there, perhaps, an alternative which would be of far greater medium- to long-term benefit to the UK, because it treats the plutonium not as waste, but as a major resource to capitalise on?

In the piece below, Tom Blees explores these questions. This was written as a formal submission to a paper “Management of the UK’s Plutonium Stocks: A consultation on the long-term management of UK owned separated civil plutonium”. Click on the picture to the left to read the background paper (which is interesting and not all that long).

This is the final post in a series of three advocating SCGI’s position on the need for the IFR: (i) to provide abundant low-carbon energy and (ii) to serve as a highly effective means of nuclear waste management and fuel extension for sustainable (inexhaustible) nuclear fission.

—————————–

Response to a consultation on the management of the UK’s plutonium stocks

Tom Blees, President of The Science Council for Global Initiatives

Do you agree that it is not realistic for the Government to wait until fast breeder reactor technology is commercially available before taking a decision on how to manage plutonium stocks?

I strongly disagree, and I hope that you’ll take the time to read this and consider the fact that the fast reactor option is far more imminent than you might have heretofore believed. Not only that, but it is arguably the best option by far.

Current Fast Reactor Development

Worldwide there are well over 300 reactor-years of experience with fast reactors. Russia’s BN-600 fast reactor has been producing commercial electricity for over 30 years, and Russia is beginning to build BN-800 reactors both for their own use and for China. India’s first commercial-scale fast reactor is about to be finished within a year or two. South Korea has already built a sizeable pyroprocessing facility to convert their spent LWR fuel into metal fuel for fast reactors, and have only refrained from starting it up because of diplomatic agreements with the USA that are due to be renegotiated in the near future. China is building a copy of the Experimental Breeder Reactor II (EBR-II) that was the mainstay of the Integral Fast Reactor (IFR) development program at Argonne National Laboratory in the USA. Japan has reopened their Monju fast reactor to continue that research, though it should be noted that Toshiba and Hitachi contested the wisdom of that decision, favoring instead the metal-fueled fast reactor design as exemplified by the EBR-II.
The advantages of metal fuel in fast reactors are difficult to overstate. Rather than attempt to explicate the details here, I would refer the reader to the following URL: http://tinyurl.com/cwvn8n. This is a chapter from a book that deals at length with the Integral Fast Reactor (IFR). The advantages of this system in safety, economics, fuel utilization, proliferation resistance and plutonium breeding or burning far outstrip those of any of the other options mentioned in the consultation document.

While fast breeders are mentioned as a future option, the rest of the document seems to have been unduly influenced by those who favor either MOX fabrication or long-term disposal. Both of these are mistakes that the USA has already made to one degree or another, mistakes that I would hope the UK will avoid when presented with the facts.

A Little History

In 1993, Presidents Yeltsin and Clinton signed nuclear disarmament agreements that would result in each country possessing 34 tons of excess weapons-grade plutonium. Since proliferation concerns would warrant safe disposal of this material, each president asked for the advice of one of their prominent scientists as to how to get rid of it. Yeltsin asked Dr. Evgeny Velikhov, one of the most prominent scientists in Russia to this day, who had been intimately involved in Russia’s military and civilian nuclear programs and was, in fact, in charge of the Chernobyl cleanup. Clinton asked Dr. John Holdren, who is now the director of the White House Office of Science & Technology Policy—President Obama’s top science advisor.

In July of 2009 I arranged a meeting with Dr. Velikhov and Dr. Holdren in Washington, D.C. At that meeting we discussed what had happened when those two had met to decide on what advice to give their respective presidents regarding the disposition of 68 tons of weapons-grade plutonium. Velikhov’s position was that it should be burned in fast reactors to generate electricity. Holdren disagreed. He contended that each country should build a MOX plant to dispose of it. That advice led to the MOX plant now being built in South Carolina by Areva, which is expected to cost as much as ten billion dollars by the time all is said and done. And the processing of that plutonium into MOX fuel will take until the year 2030 at the very least.

Dr. Velikhov wasn’t buying it, nor was Yeltsin. But Holdren was in a tough position. Clinton had already signaled his lack of support for the IFR project that had been ongoing for nine years and was now in its final stages. It would be shut down the very next year by a duped Congress that had no idea of its importance and was manipulated into cutting off its funding for purely political reasons. Clinton wanted Russia’s solution for disposal of the excess plutonium to be the same as the USA’s, but Yeltsin said that he wasn’t prepared to spend the money. If Clinton wanted Russia to build a MOX plant, then America could pay for it. Needless to say, that never happened. And after 17 years of indecision, last spring the USA finally agreed that Russia should go ahead and dispose of their 34 tons in fast reactors.

By this time, the USA had contracted with Areva to build the South Carolina MOX plant, now under construction. That boondoggle will be a painfully slow and inefficient method of disposing of the plutonium compared to using fast reactors. Doctor Holdren made it clear at that meeting that he fully comprehends the wisdom of using IFRs to dispose of plutonium.

Salesmanship

Areva has not only talked the USA into building a horrendously expensive MOX plant, but judging by the tone of this consultation document they have apparently convinced some of the policymakers in the UK to do the same. This is as wrong now as it was when Holdren advised Clinton in 1993. Yet the South Carolina MOX plant’s construction is well underway and, like most big government-funded projects, would be about as hard to cancel at this point as turning a supertanker in the Thames. But the UK needn’t go down that road.

Areva touts its MOX technology as the greatest thing since sliced baguettes, yet in reality it only increases the utilization of the energy in uranium from about 0.6% to 0.8%. Metal-fueled fast reactors, on the other hand, can recover virtually 100% of that energy. Ironically, when I discussed the ultimate shortcomings of Areva’s MOX policies with one of their own representatives, those unpleasant details were dismissed with the assurance that all of that will be dealt with when we make the transition to fast reactors. Yet with billions of dollars tied up in MOX technology, Areva is anything but anxious to see that transition happen anytime soon. And the more countries they can convince to adopt MOX technology, the slower that transition will happen, for each of those countries will then have a large investment sunk into the same inferior technology.

A Pox on MOX

MOX is not only expensive, but it results in the separation of plutonium (though of course that’s not the issue in this case, since the plutonium is already separated). That being said, the proliferation risk of reactor-grade plutonium is quite overblown in general, since its isotopic composition makes it nearly impossible to fashion a nuclear weapon from it. But regardless of its actual risk in that regard, its perception by the scientifically uninformed makes it politically radioactive, and international agreements to limit the spread of fissile material treat it as if it were weapons-grade. So any plans for the disposition of any sort of plutonium—whatever its composition—must take the politics into account.

If the UK would decide to spend five billion pounds or so on a MOX plant, it would end up with a lot of overpriced fuel that would have to be given away at a loss, since any utility company would surely choose to buy cheaper fuel from enriched virgin uranium. You would have a horrendously expensive single-purpose facility that would have to operate at a substantial loss for decades to consume the vast supplies of plutonium in question. And you would still end up with vast amounts of long-lived spent fuel that would ultimately, hopefully, be converted and used in fast reactors. Why not skip the MOX step altogether?

Given that the plutonium contains an almost unimaginable amount of energy within it, opting for long-term disposal via vitrification and burial would be unconscionable. The world will surely be in need of vast amounts of clean energy in the 21st century as the burgeoning population will demand not only energy for personal and industrial use, but will require energy-hungry desalination projects on a stunning scale. The deployment of fast reactors using the plutonium that earlier policymakers in the UK wisely decided to stockpile is a realistic solution to the world’s fast-approaching energy crisis.

Sellafield Nuclear Plant, UK

But this consultation report questions whether fast reactors can be deployed in the near future on a commercial scale. They can.

The PRISM Project

While the scientists and engineers were perfecting the many revolutionary features of the IFR at the EBR-II site in the Eighties and early Nineties, a consortium of major American firms collaborated with them to design a commercial-scale fast reactor based on that research. General Electric led that group, which included companies like Bechtel, Raytheon and Westinghouse, among others. The result was a modular reactor design intended for mass production in factories, called the PRISM (Power Reactor Innovative Small Module). A later iteration, the S-PRISM, would be slightly larger at about 300 MWe, while still retaining the features of the somewhat smaller PRISM. For purposes of simplicity I will refer hereinafter to the S-PRISM as simply the PRISM.

After the closure of the IFR project, GE continued to refine the PRISM design and is in a position to pursue the building of these advanced reactors as soon as the necessary political will can be found. Unfortunately for those who would like to see America’s fast reactor be built in America, nuclear politics in the USA is nearly as dysfunctional as it is in Germany. The incident at Fukushima has only made matters worse.

The suggestion in this report that fast reactors are thirty years away is far from accurate. GE-Hitachi plans to submit the PRISM design to the Nuclear Regulatory Commission (NRC) next year for certification. But that time-consuming process, while certainly not taking thirty years, may well be in process even as the first PRISM is built in another country.

This is far from unprecedented. In the early Nineties, GE submitted its Advanced Boiling Water Reactor (ABWR) design to the NRC for certification. GE then approached Toshiba and Hitachi and arranged for each of those companies to build one in Japan. Those two companies proceeded to get the design approved by their own NRC counterpart, built the first two ABWRs in just 36 and 39 months, fueled and tested them, then operated them for a year before the NRC in the US finally certified the design.

International Partners

On March 24th an event was held at the Russian embassy in Washington, D.C., attended by a small number of members of the nuclear industry and its regulatory agencies, both foreign and domestic, as well as representatives of NGOs concerned with nuclear issues. Sergei Kirienko, the director-general of Rosatom, Russia’s nuclear power agency, was joined by Dan Poneman, the deputy secretary of the U.S. Dept. of Energy. This was shortly after the Fukushima earthquake and tsunami, at a time when the nuclear power reactors at Fukushima Daiichi were still in a very uncertain condition.

Mr. Kirienko and Mr. Poneman first spoke about the ways in which the USA and Russia have been cooperating in tightening control over fissile material around the world. Then Mr. Kirienko addressed what was on the minds of all of us: the situation in Japan and what that portends for nuclear power deployment in the USA and around the world.

He rightly pointed out that the Chernobyl accident almost exactly 25 years ago, and the Fukushima problems now, clearly demonstrate that nuclear power transcends national boundaries, for any major accident can quickly become an international problem. For this reason Kirienko proposed that an international body be organized that would oversee nuclear power development around the world, not just in terms of monitoring fissile material for purposes of preventing proliferation (much as the IAEA does today), but to bring international expertise and oversight to bear on the construction and operation of nuclear power plants as these systems begin to be built in ever more countries.

Kirienko also pointed out that the power plants at risk in Japan were old reactor designs. He said that this accident demonstrates the need to move nuclear power into the modern age. For this reason, he said, Russia is committed to the rapid development and deployment of metal-fueled fast neutron reactor systems. His ensuing remarks specifically reiterated not only a fast reactor program (where he might have been expected to speak about Gen III or III+ lightwater reactor systems), but the development of metal fuel for these systems. This is precisely the technology that was developed at Argonne National Laboratory with the Integral Fast Reactor (IFR) program, but then prematurely terminated in 1994 in its final stages.

For the past two years I’ve been working with Dr. Evgeny Velikhov (director of Russia’s Kurchatov Institute and probably Russia’s leading scientist/political advisor) to develop a partnership between the USA and Russia to build metal-fueled fast reactors; or to be more precise, to facilitate a cooperative effort between GE-Hitachi and Rosatom to build the first PRISM reactor in Russia as soon as possible. During those two years there have been several meetings in Washington to put the pieces in place for such a bilateral agreement. The Obama administration, at several levels, seems to be willingly participating in and even encouraging this effort.

Dr Evgeny Velikhov, SCGI member

Dr. Velikhov and I (and other members of the Science Council for Global Initiatives) have also been discussing the idea of including nuclear engineers from other countries in this project, countries which have expressed a desire to obtain or develop this technology, some of which have active R&D programs underway (India, South Korea, China). Japan was very interested in this technology during the years of the IFR project, and although their fast reactor development is currently focused on their oxide-fueled Monju reactor there is little doubt that they would jump at the chance to participate in this project.

Dr. Velikhov has long been an advocate of international cooperation in advanced nuclear power research, having launched the ITER project about a quarter-century ago. He fully comprehends the impact that international standardization and deployment of IFR-type reactors would have on the well-being of humanity at large. Yet if Russia and the USA were to embark upon a project to build the first PRISM reactor(s) in Russia, one might presume that the Russians would prefer to make it a bilateral project that would put them at the cutting edge of this technology and open up golden opportunities to develop an industry to export it.

It was thus somewhat surprising when Mr. Kirienko, in response to a question from one of the attendees, said that Russia would be open to inviting Japan, South Korea and India to participate in the project. One might well question whether his failure to include China in this statement was merely an oversight or whether that nation’s notorious reputation for economic competition often based on reverse-engineering new technologies was the reason.

I took the opportunity, in the short Q&A session, to point out to Mr. Poneman that the Science Council for Global Initiatives includes not just Dr. Velikhov but most of the main players in the development of the IFR, and that our organization would be happy to act as a coordinating body to assure that our Russian friends will have the benefit of our most experienced scientists in the pursuit of this project. Mr. Poneman expressed his gratitude for this information and assured the audience that the USA would certainly want to make sure that our Russian colleagues had access to our best and brightest specialists in this field.

Enter the United Kingdom

Sergei Kirienko was very clear in his emphasis on rapid construction and deployment of fast reactors. If the United States moves ahead with supporting a GE-Rosatom partnership, the first PRISM reactor could well be built within the space of the next five years. The estimated cost of the project will be in the range of three to four billion dollars (USD), since it will be the first of its kind. The more international partners share in this project, the less will be the cost for each, of course. And future copies of the PRISM have been estimated by GE-Hitachi to cost in the range of $1,700/kW.

Work is under way on gram samples of civil plutonium

According to this consultation document, the UK is looking at spending £5-6 billion or more in dealing with its plutonium. Yet if the plutonium were simply secured as it currently is for a short time longer, and the UK involved itself in the USA/Russia project, the cost would be a small fraction of that amount; when the project is completed, the UK would have the technology in hand to begin mass-production of PRISM reactors.

The plutonium stocks of the UK could be converted into metal fuel using the pyroprocessing techniques developed by the IFR project (and which, as noted above, are ready to be utilized by South Korea). The Science Council for Global Initiatives is currently working on arranging for the building of the first commercial-scale facility in the USA for conversion of spent LWR fuel into metal fuel for fast reactors. By the time the first PRISM is finished in Russia, that project will also likely be complete.

What this would mean for the UK would be that its stores of plutonium would become the fast reactor fuel envisioned by earlier policymakers. After a couple years in the reactor the spent fuel would be ready for recycling via pyroprocessing, then either stored for future use or used to start up even more PRISM reactors. In this way not only would the plutonium be used up but the UK would painlessly transition to fast reactors, obviating any need for future mining or enrichment of uranium for centuries, since once the plutonium is used up the current inventories of depleted uranium could be used as fuel.

Conclusion

Far from being decades away, a fully-developed fast reactor design is ready to be built. While I’m quite certain that GE-Hitachi would be happy to sell a PRISM to the UK, the cost and risk could be reduced to an absolute minimum by the happy expedient of joining in the international project with the USA, Russia, and whichever other nations are ultimately involved. The Science Council for Global Initiatives will continue to play a role in this project and would be happy to engage the UK government in initial discussions to further explore this possibility.

There is little doubt that Russia will move forward with fast reactor construction and deployment in the very near future, even if the PRISM project runs into an unforeseen roadblock. It would be in the best interests of all of us to cooperate in this effort. Not only will the deployment of a standardized modular fast reactor design facilitate the disposition of plutonium that is currently the driving force for the UK, but it would enable every nation on the planet to avail itself of virtually unlimited clean energy. Such an international cooperative effort would also provide the rationale for the sort of multinational nuclear power oversight agency envisioned by Mr. Kirienko and others who are concerned not only about providing abundant energy but also in maintaining control over fissile materials.

June 6, 2011

Renewables and efficiency cannot fix the energy and climate crises (part 2)

by Barry Brook

This post continues directly on from Part 1 (please read that if you’ve not already done so!). I also note the flurry of interest in the new IPCC WGIII special report on renewable energy prospects through to 2050. I will have more to say on this in an upcoming post, but in short, it fails to address — with any substance — any of the significant problems I describe below, or in the previous post. What a disappointment!

————————

Renewables and efficiency cannot fix the energy and climate crises (part 2)

Renewable energy cannot provide reliable 24-hour, 7-day-a-week power to meet baseload demand

The minimum amount of power that a city or country demands usually occurs at night (when most people are asleep); this is called the electricity ‘baseload’. Some have claimed that it is a fallacy to argue that all of this demand is really needed, because utilities deliberately charge cheap (‘off-peak’) rates during these low-use periods to encourage more uptake (by everything from factory machinery to hot water systems). They do this because some types of power stations (e.g., coal and nuclear) are quite expensive to build and finance (with long terms to pay off the interest), but fairly cheap to run, so the utility wants to keep them humming away 24 hours a day to maximise returns. There is some truth to this argument, although if that energy is not used at night, extra must instead be supplied during the day.

Some critical demand, however, never goes away – the power required to run hospitals, police stations, street lights, water and sewerage pumping stations, refrigerators and cold storage, transport (if we are to use electric vehicles), and so on. If the power is lost to these services, even for a short while, chaos ensues, and the societal backlash after a few such events is huge. On the other side of the energy coin, there are times when huge power demands arise, such as when everyone gets home from work to cook their meals and watch television, or when we collectively turn on our air conditioners during a heatwave. If the energy to meet this peak demand cannot be found, the result can be anything from a lot of grumpy people through to collapse of the grid as rolling blackouts occur.

Two core limitations of wind, solar and most other renewable systems are that: (i) they are inherently variable and prone to ‘gambler’s ruin’ (in the sense that you cannot know, over any planning period, when long stretches of calm or cloudy days will come, which could bring even a heavily over-compensated system to its knees); and (ii) they are not ‘dispatchable’. They’ll provide a lot of power some of the time, when you may or may not need it, and little or none at other times, when you’ll certainly need some, and may need a lot. In short, they can’t send power out on demand, yet, for better or worse, this is what society demands of an electricity system. Okay, but can these limitations be overcome?
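To make the ‘gambler’s ruin’ point concrete, here is a toy simulation in Python of persistent weather: a two-state calm/windy chain whose parameters are all invented for illustration (they are not drawn from real wind data or from this post). It shows why the worst lull, not the average one, sets the storage or backup requirement, and why that worst case keeps growing with the planning horizon.

    # Toy 'gambler's ruin' illustration: how long is the worst calm spell?
    # All parameters are invented for illustration only.
    import random

    random.seed(1)
    DAYS = 365 * 20          # a 20-year planning horizon, in daily steps
    PERSISTENCE = 0.7        # chance tomorrow's weather matches today's
    calm = False
    lull = longest_lull = 0

    for _ in range(DAYS):
        if random.random() > PERSISTENCE:
            calm = not calm               # weather state flips
        if calm:                          # a near-zero generation day
            lull += 1
            longest_lull = max(longest_lull, lull)
        else:
            lull = 0

    print(f"Longest calm stretch in 20 years: {longest_lull} days")
    # Storage or backup must cover the worst stretch ever seen, not the
    # average one -- and the worst case grows as the horizon lengthens.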

Large-scale renewables require massive ‘overbuilding’ and so are not cost competitive

The three most commonly proposed ways to overcome the problem of intermittency and unscheduled outages are: (i) to store energy during productive times and draw on these stores during periods when little or nothing is being generated; (ii) to have a diverse mix of renewable energy systems, coordinated by a smart electronic grid management system, so that even if the wind is not blowing in one place, it will be in another, or else the sun will be shining or the waves crashing; and (iii) to have fossil fuel or nuclear power stations on standby, to take up the slack when needed.

The reality is that all of these solutions are grossly uneconomic, and even if we were willing and able to pay for them, the result would be an unacceptably unreliable energy supply system. Truly massive amounts of energy would need to be stored to keep a city or country going through long stretches of cloudy winter days (yes, these occur even in deserts) or calm nights with little wind and no sun, yet energy storage (batteries, chemical conversion to hydrogen or ammonia, pumped hydropower, compressed air), even on a small scale, is currently very expensive. A mix of different contributions (solar, wind, wave, geothermal) would help, but then we’d need to pay for each of these systems, each built to a level at which it could compensate for the failure of another.

What’s more, in order to deliver all of our regular power demand whilst also charging up the energy stores, we would have to ‘overbuild’ our system many times over, adding to the already prohibitive costs. As a result, an overbuilt system of wind and solar would, at times, be delivering 5 to 20 times our power demand (leading to problems of ‘dumping’ the excess energy that can’t be used or stored quickly enough, or in sufficient quantity), and at other times it would deliver virtually none of it.
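The arithmetic behind the overbuild claim is simple enough to sketch. The inputs below (a 30% annual capacity factor and a three-fold margin for charging the stores) are illustrative assumptions of mine, not figures from the post:

    # Illustrative overbuild arithmetic (assumed inputs).
    avg_demand_gw = 10        # average demand of a hypothetical grid
    capacity_factor = 0.30    # assumed annual-average output fraction
    storage_margin = 3        # assumed extra build to charge the stores
                              # while still meeting demand

    nameplate_gw = avg_demand_gw / capacity_factor * storage_margin
    print(f"Nameplate required: {nameplate_gw:.0f} GW")           # 100 GW
    print(f"Windy-day delivery: {nameplate_gw / avg_demand_gw:.0f}x demand")
    # ~10x demand on the windiest days (energy to dump or store), and
    # close to zero on the calmest -- the 5-to-20x range described above.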

If you do some modelling to work through the many contingencies, you find that a system which relies on wind and/or solar power, plus large-scale energy storage and a geographically dispersed electricity transmission network to channel power to load centres, would seem to be 10 to 40 times more expensive than an equivalent nuclear-powered system, and still less reliable. The cost to avoid 1 tonne of carbon dioxide would be >$800 with wind power compared with $22 with nuclear power.
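For readers wondering where such dollars-per-tonne figures come from, the standard construction is the cost premium divided by the emissions avoided. The sketch below uses round, assumed inputs of my own to show the mechanics only; the post’s $800 and $22 figures come from its own underlying modelling.

    # Cost per tonne of CO2 avoided = extra cost / emissions avoided.
    # All inputs are round, assumed values for illustration only.
    def abatement_cost(lcoe_alt, lcoe_base, co2_base, co2_alt):
        """$/tCO2 avoided, from $/MWh costs and tCO2/MWh intensities."""
        return (lcoe_alt - lcoe_base) / (co2_base - co2_alt)

    coal_lcoe, coal_co2 = 40, 0.9            # $/MWh, tCO2/MWh (assumed)
    nuclear_lcoe, nuclear_co2 = 60, 0.0
    wind_sys_lcoe, wind_sys_co2 = 250, 0.1   # incl. storage/backup/grid

    print(f"nuclear: ${abatement_cost(nuclear_lcoe, coal_lcoe, coal_co2, nuclear_co2):.0f}/tCO2")
    print(f"wind system: ${abatement_cost(wind_sys_lcoe, coal_lcoe, coal_co2, wind_sys_co2):.0f}/tCO2")
    # nuclear ~ $22/tCO2; the wind-plus-storage system ~ $262/tCO2,
    # climbing steeply as the reliability requirement is tightened.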

The above critiques of renewable energy might strike some readers as narrow-minded or deliberately pessimistic. Surely, isn’t it just a matter of prudent engineering and sufficient integration of geographically and technologically diverse systems to overcome such difficulties? Alas, no! Although I have only limited space for this topic in this short post, let me grimly assure you that ‘scaling up’ renewable energy to the point where it can reliably meet all (or even most) of our power needs involves solving a range of compounding, quite possibly insuperable, problems. We cannot wish these problems away — they are ‘the numbers’, ‘the reality’.

Economic and socio-political realities

Supporters of ‘100% renewable energy’ maintain that sunlight, wind, waves and plant life, combined with vast improvements in energy efficiency and energy conservation leading to a flattening or reduction in total energy demand, are the answer. This is a widespread view among environmentalists, and it would be perfectly acceptable to me if the numbers could be made to work. But I seriously doubt they can.

The high standard of living in the developed world has been based on cheap fossil (and nuclear) energy. While we can clearly cut back on energy wastage, we will still have to replace oil and gas. And that means a surge in demand for electricity, both to replace the energy now drawn from oil and gas and to meet the additional demand for power from that third of the world’s people who currently have no electricity at all.

Critics do not seem to understand – or refuse to acknowledge – the basis of modern economics and the investment culture. Some dream of shifts in the West and the East away from consumerism. There is a quasi-spiritualism which underpins such views. Yet at a time of crisis, societies must be ruthlessly practical in solving their core problems, or risk collapse. Most people will fight tooth-and-nail to avoid a decline in their standard of living. We need to work with this, not against it. We are stuck with the deep-seated human propensity to revel in consumption and to hope for an easier life. We should seek ways to deliver that life sustainably.

A friend of mine, the Californian entrepreneur Steve Kirsch, has put the climate-energy problem succinctly:

The most effective way to deal with climate change is to seriously reduce our carbon emissions. But we’ll never get the enormous emission reductions we need by treaty. Been there, done that – it’s not going to happen. If you want to get emissions reductions, you must make the alternatives for electric power generation cheaper than coal. It’s that simple. If you don’t do that, you lose.

Currently, no non-fossil-fuel energy technology has achieved this. So what is stopping nations replacing coal, oil and gas infrastructure with renewable energy? It is not (yet) because of any strong, society-wide opposition to a switch to renewables. No, it is economic uncertainty, technological immaturity, and good old financial risk management. Despite what ’100% renewables’ advocates would lead you to believe, it is still far from certain in what way the world will pursue a low-carbon future. You have only to look at what’s happening in the real world to verify that.

I’ve already written about fast-growing investment in nuclear energy in Asia. China, for instance, has overcome typical first-of-a-kind engineering cost overruns by building more than 25 reactors at the same time, in a bid to bring costs to, or below, those of coal.

In December 2009, there was a telling announcement from the United Arab Emirates (UAE), which wishes to sell its valuable natural gas to the export market. Within the next few years, the UAE faces a six-gigawatt increase in demand for electricity, which includes additional power required by an upgraded desalination program. Despite being a desert nation with a wealth of solar resources, the UAE decided not to build large-scale solar power plants (or any other renewable technology). In terms of economics and reliability, the numbers just didn’t stack up. Instead, it commissioned a South Korean consortium to build four new Generation III+ APR-1400 reactors, at a cost of $3,500 per kilowatt installed – its first ever nuclear power plants.
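The quoted figures are easy to cross-check; the only numbers used below are those in the paragraph above:

    # Cross-check of the UAE order: 4 x APR-1400 at $3,500/kW installed.
    units, mw_per_unit, usd_per_kw = 4, 1_400, 3_500

    total_gw = units * mw_per_unit / 1_000
    total_usd = units * mw_per_unit * 1_000 * usd_per_kw
    print(f"Capacity: {total_gw:.1f} GW (vs the ~6 GW demand increase)")
    print(f"Contract scale: ${total_usd / 1e9:.1f} billion")   # ~$19.6bn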

Conclusion

Nuclear power, not renewable energy or energy efficiency, will probably end up being the primary global solution to the climate and energy crises. This is the emergent result of trying to be honest, logical and pragmatic about what will and will not work, within real-world physical, economic and social constraints.

If I am wrong, and non-hydro and non-combustible renewables can indeed rise to the challenge and ways can be found to overcome the issues I’ve touched on in these two posts, then I will not complain. After all, my principal goal — to replace fossil fuels with sustainable and low-carbon alternative energy sources — would have been met. But let’s not play dice with the biosphere and humanity’s future on this planet, and bet everything on such wishful thinking. It would be a risky gamble indeed.

Renewables and efficiency cannot fix the energy and climate crises (part 1)

by Barry Brook

We must deal simultaneously with the energy-resource and climate-change pincers

The modern world is caught in an energy-resource and climate-change pincer. As the growing mega-economies of China and India strive to build the prosperity and quality of life enjoyed by citizens of the developed world, the global demand for cheap, convenient energy grows rapidly. If this demand is met by fossil fuels, we are headed for an energy supply and climate disaster. The alternatives, short of a total and brutal deconstruction of the modern world, are nuclear power and renewable energy.

Whilst I support both, I now put most of my efforts into advocating nuclear power, because: (i) few other environmentalists are doing this, whereas there are plenty of renewable enthusiasts (unfortunately, the majority of climate activists seem to be actively anti-nuclear), and (ii) my research work on the energy replacement problem suggests to me that nuclear power will constitute at least 75% of the solution for displacing coal, oil and gas.

Prometheus, who stole fire from the Gods and gave it to mortal man

In my blog, I argue that it’s time to become “Promethean environmentalists”. (Prometheus, in Greek mythology, was the defiantly original and wily Titan who stole fire from Zeus and gave it to mortals, thus improving their lives forever.) Another term, recently used by futurist Stewart Brand, is “Ecopragmatists”. Prometheans are realists who shun romantic notions that modern governments might guide society back to an era when people lived simpler lives, or that a vastly less consumption-oriented world is a possibility. They seek real, high-capacity solutions to environmental challenges – such as nuclear power – which history has shown to be reliable.

But I reiterate — this strong support for nuclear does NOT make me ‘anti-renewables’ (or worse, a ‘renewable energy denier’, a thoroughly unpleasant and wholly inaccurate aspersion). Indeed, under the right circumstances, I think renewables might be able to make an important contribution (e.g., see here). Instead, my reluctance to throw my weight confidently behind a ‘100% renewable energy solution’ is based on my judgement that such an effort would prove grossly insufficient, as well as being plain risky. And given that the stakes are so high (the future of human society, the fates of billions of people, and the integrity of the biosphere), failure is simply not an option.

Below I explain, in very general terms, the underlying basis of my reasoning. This is not a technical post. For those details, please consult the Thinking Critically About Sustainable Energy (TCASE) series — which is up to 12 parts, and will be restarted shortly, with many more examples and calculations.

————————

Renewables and efficiency cannot fix the energy and climate crises (part 1)

Boulton and Watt’s patented steam engine

The development of an 18th century technology that could turn the energy of coal into mechanical work – James Watt’s steam engine – heralded the dawn of the Industrial Age. Our use of fossil fuels – coal, oil and natural gas – has subsequently allowed our modern civilisation to flourish. It is now increasingly apparent, however, that our almost total reliance on these forms of ancient stored sunlight to meet our energy needs has some severe drawbacks, and cannot continue much longer.

For one thing, fossil fuels are a limited resource. Most of the readily available oil, used for transportation, is concentrated in a few geographically favoured hotspots, such as the Middle East. Most credible analysts agree that we are close to, or have passed, the point of maximum oil extraction (often termed ‘peak oil’), thanks to a century of rising demand. We’ve tapped less of the available natural gas (methane), used mostly for heating and electricity production, but globally it too has no more than a few decades of significant production left before supplies really start to tighten and prices skyrocket, especially if we ‘dash for gas’ as the oil wells run dry. Coal is more abundant than oil or gas, but even it has only a few centuries of economically extractable supplies.
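A crude way to see where such lifetime estimates come from is the static reserves-to-production (R/P) ratio, which ignores both demand growth and new discoveries. The figures below are rough, assumed round numbers used purely to illustrate the calculation:

    # Static reserves-to-production (R/P) lifetime, illustrative inputs.
    def rp_years(reserves, annual_production):
        return reserves / annual_production

    # e.g. oil: ~1,300 billion barrels proven reserves (assumed) vs
    # ~30 billion barrels produced per year (assumed)
    print(f"oil: ~{rp_years(1_300, 30):.0f} years")   # ~43 years
    # Demand growth shortens such estimates; new discoveries and higher
    # prices (unlocking lower-grade resources) lengthen them.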

Then there is climate change and air pollution. The mainstream scientific consensus is that emissions caused by the burning of fossil fuels, primarily carbon dioxide (CO2), are the primary cause of recent global warming. We also know that coal soot causes chronic respiratory problems, its sulphur causes acid rain, and its heavy metals (like mercury) induce birth defects and damage ecological food chains. These environmental health issues compound the problem of dwindling fossil fuel reserves.

Clearly, we must unhitch ourselves from the fossil-fuel-based energy bandwagon – and fast.

Meeting the growing demand for energy and clean water in the developing world

In the developed world (US, Europe, Japan, Australia and so on), we’ve enjoyed a high standard of living, linked to a readily available supply of cheap energy, based mostly on fossil fuels. Indeed, it can be argued that this has encouraged energy profligacy, and we really could be more efficient in the mileage we get out of our cars, the power usage of our fridges, lights and electrical appliances, and in the design of our buildings to reduce demands for heating and cooling. There is clearly room for improvement, and sensible energy efficiency measures should be actively pursued.

In the bigger, global picture, however, there is no realistic prospect that we can use less energy in the future. There are three obvious reasons for this:

1) Most of the world’s population is extremely energy poor. More than a third of all humanity, some 2.5 billion people, have little or no access to electricity. For those who do, their long-term aspirations for energy growth, towards something approaching the levels used today by the developed world, are a powerful motivation for development. For a nation like India, with over 1 billion people, that would mean a twenty-fold increase in per capita energy use (a rough back-of-envelope sketch follows this list).

2) As the oil runs out, we will need to replace it if we are to keep our vehicles going. Oil is both a convenient energy carrier and an energy source (we ‘mine’ it). In the future, we’ll have to create our new energy carriers, be they chemical batteries or oil substitutes like methanol or hydrogen. On a grand scale, that’s going to take a lot of extra electrical energy! This applies to all countries.

3) With a growing human population (which we hope will stabilise by mid-century at less than 10 billion) and the burgeoning impacts of climate change and other forms of environmental damage, there will be escalating future demands for clean water (at least in part supplied artificially, through desalination and waste water treatment) and for more intensive agriculture that is not based on the ongoing displacement of natural landscapes like rainforests. There may also be a need for direct geo-engineering to cool the planet, if global warming proceeds at the upper end of current forecasts.
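For point (1), the back-of-envelope arithmetic runs as follows. The per-capita figures are rough, assumed round numbers of mine, used only to show the scale of the multiplier:

    # Rough scale of point (1): India converging on developed-world use.
    population = 1.2e9                 # India, roughly (assumed)
    per_capita_now_kwh = 600           # assumed current annual use
    per_capita_target_kwh = 12_000     # assumed developed-world level

    scale_up = per_capita_target_kwh / per_capita_now_kwh
    extra_twh = population * (per_capita_target_kwh - per_capita_now_kwh) / 1e9
    print(f"Scale-up: {scale_up:.0f}x")                # the 'twenty-fold'
    print(f"Extra demand: ~{extra_twh:,.0f} TWh/yr")   # ~13,680 TWh/yr
    # For context, total world electricity generation is roughly
    # 20,000 TWh per year.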

In short, the energy problem is going to get larger, not smaller, at least for the foreseeable future.

Renewable energy is diffuse, variable, and requires massive storage and backup

Let’s say we aim to have largely replaced fossil fuels with low-carbon substitutes by the year 2060 — in the next 50 years or so. What do we use to meet this enormous demand?

Nuclear power is one possibility, and is discussed in great detail elsewhere on this website. What about the other options? As discussed above, improved efficiency in the way we use energy offers a partial fix, at least in the short term. In the broader context, to imagine that the global human enterprise will somehow manage to get by with less just doesn’t stack up when faced with the reality of a fast developing, energy-starved world.

Put simply, citizens in Western democracies are not going to vote for governments dedicated to lower growth and a concomitant critique of consumerism, and nor is an authoritarian regime such as China’s going to risk social unrest, probably of a profound order, by embracing a low-growth economic strategy. Reality is demanding, and so we must carefully scrutinise the case put by those who believe that renewable energy technologies are the answer.

Solarpark Mühlhausen in Bavaria. It covers 25 ha and generates 0.7 MW of average power (peak 6.3 MW)

The most discussed ‘alternative energy’ technologies (read: alternative to fossil fuels or nuclear) are: harnessing the energy in wind, sunlight (directly via photovoltaic panels or indirectly using mirrors to concentrate sunlight), water held behind large dams (hydropower), ocean waves and tides, plants, and geothermal energy, either from hot surface aquifers (often associated with volcanic geologies) or in deep, dry rocks. These are commonly called ‘renewable’ sources, because they are constantly replenished by incoming sunlight, by gravity (tides) or by radioactive decay (hot rocks). Wind is caused by differences in temperature across the Earth’s surface, and so comes originally from the sun, and ocean waves are whipped up by the wind (wave power).

Technically, there are many challenges in economically harnessing renewable energy to provide a reliable power supply. This is a complex topic – many aspects of which are explored in the TCASE series – but here I’ll touch on a few of the key issues. One is that all of the sources described above are incredibly diffuse: they require huge geographical areas to be exploited in order to capture large amounts of energy.
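The Mühlhausen caption above gives enough to put a number on ‘diffuse’: divide average output by land area, then ask how much land would be needed to match one large conventional plant (a simple extrapolation of mine that ignores siting details and panel improvements):

    # Power density from the Solarpark Muehlhausen caption figures:
    # 0.7 MW average output over 25 ha.
    avg_watts = 0.7e6
    area_m2 = 25 * 10_000                    # 25 ha in square metres

    density = avg_watts / area_m2
    print(f"Density: {density:.1f} W/m^2")   # ~2.8 W/m^2, averaged

    # Land to match one 1 GW-average conventional plant at that density:
    km2 = 1e9 / density / 1e6
    print(f"Land for 1 GW average: ~{km2:.0f} km^2")   # ~357 km^2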

For countries like Australia, with a huge land area and low population density, this is not, in itself, a major problem. But it is a severe constraint for nations with high population density, like Japan or most European nations. Another is that they are variable and intermittent – sometimes they deliver a lot of power, sometimes a little, and at other times none at all (the exception here is geothermal). This means that if you wish to satisfy the needs of an ‘always on’ power demand, you must find ways to store large amounts of energy to cover the non-generating periods, or else you need to keep fossil-fuel or nuclear plants as a backup. That is where the difficulties really begin to magnify… To be continued…

————————

Part 2 will cover the ‘fallacy of the baseload fallacy’, ‘overbuilding’, costs, and evolution of real-world energy systems.

May 10, 2011

Decarbonise SA – regional action for greenhouse gas mitigation

Filed under: IFR (Integral Fast Reactor) Nuclear Power, Nuclear Energy — buildeco @ 3:48 pm

by Barry Brook

Global warming can only be tackled seriously by a massive reduction in anthropogenic greenhouse gas production. It’s that simple. But just hoping for this to gradually happen — locally, regionally or globally — by tinkering at the edge of the problem (carbon prices, alternative energy subsidies, mandated targets and loan guarantees, “100 ways to be more green” lists, etc.), is just not going to get us anywhere near where we need to be, when we need to be. For that, we need to develop and implement a well-thought-out, practical and cost-effective action plan!

Back in early 2009, I offered A sketch plan for a zero-carbon Australia. Overall, I still think this advocates the right sort of path. I elaborated further on the idea in two pieces: Climate debate missing the point and Energy in Australia in 2030; in the latter, I explored a number of potential storylines, along with an estimate of the probability and result of following these different pathways. But the lingering question that arises from thought experiments like these is… how do you turn them into something practical?

Sadly, I can’t think of any liberal-democratic government, anywhere in the world, that actually has a realistic, long-term energy plan. Instead, we have politicians, businesses and other decision makers with their heads in the sand (peak oil is another issue where this is starkly apparent). This must change, and we — the citizenry — must be the agents of that change. That is why the new initiative by Ben Heard, called “Decarbonise SA”, is so exciting. I’ll let Ben explain more, in the guest post below.

But before that, just a small note from me. For my many non-Australian readers: don’t dismiss this as something parochial. Think of it instead as a case study — a working template — for what you can help organise in your particular region (local council, city, state/province, whatever). We need all of you on board, because this is a problem of the global commons. Over to Ben.

——————–

Decarbonise SA

Ben Heard – Ben is Director of Adelaide-based advisory firm ThinkClimate Consulting, a Masters graduate of Monash University in Corporate Environmental Sustainability, and a member of the TIA Environmental and Sustainability Action Committee. He is the founder of Decarbonise SA. His recent post was Think climate when judging nuclear power.

I have been a fan of Barry’s work for some time now. His knack for cutting through the noise to highlight the information we need to consider for making good decisions is remarkable. His reputation and tenure at Adelaide University also give his blog a global reach and relevance, exemplified by the one million hits it received in the week following the Sendai quake and tsunami.

Remarkable though it is, the blog can’t do everything, nor should it try. That’s why I have started Decarbonise SA. The first thing you need to know is that this is more than a blog, it is a mission. The purpose of Decarbonise SA is to form a collective of like-minded people who will drive the most rapid possible decarbonisation of the economy of South Australia, with a primary focus on the electricity supply.

To achieve that goal, South Australia needs to introduce nuclear power into its mix of generating technologies. The primary driver of our support for nuclear power is the recognition that the scientific findings in relation to climate change are now so serious that we require the fastest and deepest cuts in emissions possible. That means attacking the biggest problems first. In Australia, that’s electricity supply: specifically, the coal and gas that provide most of our baseload generation. While climate change may be the catalyst, nuclear power provides many important environmental and safety benefits compared to coal, beyond greenhouse gas, that will give us a cleaner and healthier environment for the future.

Decarbonise SA also supports increasing the use of renewable generation technologies, and becoming more efficient with energy. But the primary focus of Decarbonise SA is the introduction of nuclear power. We are going to work with the government, community and private sectors of South Australia to make this happen.

Why South Australia?

South Australia’s electricity generation sector is in crisis. Aging, inefficient, decrepit infrastructure must be replaced soon, against the backdrop of an urgent global need to cut greenhouse gas emissions. As in any good crisis though, the opportunity is there if you look. South Australia is just a small number of significant infrastructure investments away from having among the world’s cleanest electricity. It is the mission of Decarbonise SA to make that happen, and happen fast. The goal is decidedly immodest. But that’s because climate change is upon us and we must act quickly, firmly and decisively.

But climate change is a global problem. So focussing a whole blog on a relatively small part of Australia may seem an odd strategy. Here’s the thing. There are already a great many resources pushing the cause of climate change (BNC being one). I’m not going to try to compete with that.

At the same time, every grand vision eventually needs implementation to matter, and necessarily, someone needs to downscale the bigger issues to a more manageable level and actually put a plan in place to make it happen.

I am a proud South Australian, and while my work is often national and my ideas and articles have spread around the world, I know where I have the most influence. It’s in the state of 1 million people where I was raised, where I have deep connections and networks, and where I do the bulk of my work. And as I said at the start, I have not started this blog to flap my gums; I very much intend to make this happen. If Decarbonise SA can move 1 million people in a developed nation from a dirty electricity supply to among the world’s very cleanest, well, I’ll be satisfied, the model will have worked, and I can think even bigger. I will be proud if South Australia is first. But I will be even more excited to find ourselves in competition with others around the world who are decisively pursuing the same goal. So hopefully what we do with Decarbonise SA will become a model that has relevance in every state, territory, county and province the world over. Nothing is trademarked at Decarbonise SA: if you like what you read but don’t live in SA, steal my blog idea and everything on it, and start your own Decarbonise movement. I’ll help.

How will we achieve this?

The introduction of nuclear power to South Australia is the foundation of the Decarbonise SA vision. Nuclear power will permit the rapid replacement of South Australia’s decrepit baseload generation facilities. This is to be accompanied by the continued and enhanced expansion of the renewable energy sector in South Australia, which has played a major role in lowering the average emissions of South Australian electricity over the last few years, and by continued efforts to improve our efficient use of energy. So yes, to resort to the labels that will be applied come what may: Decarbonise SA is pro-nuclear power. It is also pro-renewables. It is also pro-energy efficiency. It is decidedly pro nuclear, renewables and energy efficiency working as a trio, each deployed as their respective advantages and disadvantages dictate. But above all, it is pro, pro, pro the rapid decarbonisation of the South Australian economy, focussing on electricity. That makes us completely anti-coal and anti-gas for any new electricity generation capacity.

It is the introduction of nuclear power that is the focus of Decarbonise SA’s work, for some pretty simple reasons. Firstly, in South Australia it’s the missing component of a strategy that would actually get the job done (remember, I’m talking about zero emissions; I’m not interested in mere deep cuts or incremental improvements). Secondly, while renewable technology and energy efficiency both need better support and deeper penetration, they also both have a lot of friends already. Energy efficiency is supported by legislation (like the Energy Efficiency Opportunities Act, mandatory standards for new houses, and Minimum Energy Performance Standards (MEPS) and star ratings for appliances, to name but a few), and by organisations, governmental and otherwise. Renewables have support from organisations like Renewables SA and the Alternative Technology Association, major legislated support from the national Renewable Energy Target (RET), and deep subsidies for solar PV. So the potential of this blog to advance the cause of either energy efficiency or renewables is minimal. To be perfectly clear, do not mistake the focus on nuclear power as an attack on, or belittling of, the role of either energy efficiency or renewables. That is not the case. But I do insist on being decidedly realistic about the potential of either to solve the problem in the absence of nuclear power.

Nuclear power, on the other hand, is roundly treated as the spawn of the devil, with Australia’s Environment Protection and Biodiversity Conservation Act specifically highlighting nuclear actions as requiring referral. Not to mention the opposition of the coal industry, which knows full well that nuclear is the only real threat to its dominance of electricity generation in Australia.

At first approach, you may think this is crazy. Nuclear has never been very popular in Australia, and right now, as I write, the second biggest nuclear incident ever remains unresolved. Decarbonise SA is certainly not naive about the challenge of putting nuclear at the centre of the strategy. But when the options are 1) a tough sell that can work (nuclear and renewables with energy efficiency), and 2) easier sells that are guaranteed to fail (gas generation with still-high levels of greenhouse gas, plus more imports from Victoria, where they burn dirty brown coal in the world’s worst power station, plus a bit more renewables and energy efficiency), there is really no decision to be made.

Besides, nuclear power is hardly a fringe technology. It is used in 30 countries worldwide, including 15 of the world’s 16 largest economies (Australia, at number 13, being the exception). It provides around 15% of global electricity supply from some 440 reactors. It provides 80% of France’s electricity, 30% of Japan’s, and 20% of the United States’. It has been in use for over 50 years, with a remarkable safety record, and a suite of environmental, health and safety advantages over coal that make your head spin. It is embraced by many prominent environmentalists: thoughtful, caring and passionate people. But Decarbonise SA has not based this plan on who else agrees or disagrees, or on what other countries have done; we have based it on facts, evidence and context relating to:

  • The extraordinary challenge of climate change, that requires total and rapid decarbonisation of electricity
  • The need to maintain secure electricity supplies, and to urgently supply clean electricity to the 1 billion people in the world who have none
  • Honest and evidence-based appraisal of the advantages and disadvantages of different energy supply options across all relevant criteria, being:
    • Ability to provide near-zero greenhouse gas electricity across the lifecycle
    • Scalability to meet electricity demand requirements, with a focus on baseload
    • Location requirements
    • Cost
    • Reliability / track record
    • Safety
    • Waste and pollution from energy generation
    • Waste and pollution from mining operations
    • Global security

When these criteria are attended to for all energy supply options with a clear head, and keeping prejudice to a minimum, one thing quickly becomes clear: Anyone who means what they say when they use the expression “climate crisis” needs to move nuclear power front and centre of the strategy, otherwise we will spend the next few decades rearranging the deck chairs on the Titanic.

By the way, this is all coming from someone who was once staunchly anti-nuclear. I supported the organisations who oppose it. I was first to rail against it if it came up over dinner or at a BBQ. But my growing understanding of the climate crisis forced me to take a second look at all of my reasons for opposition. I began that process believing that, in the end, I may find nuclear to be a necessary evil. When I was done, what I found instead is that it’s more than necessary, it’s essential, and it’s not really evil: compared to coal, nuclear power is 99% better in almost every relevant criterion (an assertion I will back with numbers in an upcoming post). I’ve been involved in enough environmental decisions now to know that if you have an option that will improve current conditions by 99%, that’s not a compromise. That’s not a defeatist stance. It’s a massive victory. I’ll be satisfied with the 99% this century, and chase the 1% in the next one if I’m still here.

So I hope you’ll join me on the journey, as I spell out the mission and reasons for Decarbonise SA in upcoming articles. But be warned: I’m not here just for the talking. My children won’t really thank me for a blog. They will thank me for cleaner, healthier air and a stable climate. That’s what Decarbonise SA is here for. And it needs you.

May 5, 2011

Energy debates in Wonderland

by Barry Brook

My position on wind energy is quite ambivalent. I really do want it (and solar) to play an effective role in displacing fossil fuels, because to do this we need every tool at our disposal (witness the Open Science project I kick-started in 2009 [and found funding for] in order to investigate the real potential of renewables, Oz-Energy-Analysis.Org).

However, I think there is far too much wishful thinking wrapped up in the proclamations by the “100% renewables” crowd (most of whom are unfortunately also anti-nuclear advocates) that wind somehow offers both a halcyon choice and an ‘industrial-strength’ solution to our energy dilemma. In contrast, my TCASE series (Thinking Critically About Sustainable Energy) illustrates that, pound for pound, wind certainly does NOT punch above its weight as a clean-energy fighter; indeed, it’s very much a journeyman performer.

The following guest post, by Jon Boone, looks at wind energy with a critical eye and a witty turn of phrase. I don’t offer it as a comprehensive technical critique — rather it’s more a philosophical reflection on past performance and fundamental limits. Whatever your view of wind, I think you’ll find it interesting.

————————

Energy debates in Wonderland

Guest Post by Jon Boone. Jon is a former university administrator and longtime environmentalist who seeks more informed, effective energy policy in ways that expand and enhance modernity, increase civility, and demand stewardship on behalf of biodiversity and sensitive ecosystems. His brand of environmentalism eschews wishful thinking because it is aware of the unintended adverse consequences flowing from uninformed decisions. He produced and directed the documentary, Life Under a Windplant, which has been freely distributed within the United States and many countries throughout the world. He also developed the website Stop Ill Wind as an educational resource, posting there copies of his most salient articles and speeches. He receives no income from his work on wind technology.

March Hare (to Alice): Have some wine.

(Alice looked all round the table, but there was nothing on it but tea.)

Alice: I don’t see any wine.

March Hare: There isn’t any.

Alice: Then it wasn’t very civil of you to offer it.

March Hare: It wasn’t very civil of you to sit down without being invited.

— From Lewis Carroll’s Alice in Wonderland

Energy journalist Robert Bryce, whose latest book, Power Hungry, admirably foretells an electricity future anchored by natural gas from Marcellus Shale that will eventually bridge to pervasive use of nuclear power, has recently been involved in two prominent debates. In the first, conducted by The Economist, Bryce argued for the proposition that “natural gas will do more than renewables to limit the world’s carbon emissions.” In the second, an Intelligence Squared forum sponsored by the Rosenkranz Foundation, he and American Enterprise Institute scholar Steven Hayward argued against the proposition that “Clean Energy can drive America’s economic recovery.”

Since there’s more evidence a friendly bunny brought children multi-colored eggs on Easter Sunday than there is that those renewables darlings, wind and solar, can put much of a dent in CO2 emissions anywhere, despite their massively intrusive industrial presence, the first debate was little more than a curiosity. No one mentioned hydroelectric, which has been the most widely effective “renewable”—ostensibly because it continues to lose marketshare (it now provides the nation with about 7% of its electricity generation), is an environmental pariah to the likes of The Sierra Club, and has little prospect for growth. Nuclear, which provides the nation’s largest grid, the PJM, with about 40% of its electricity, is not considered a renewable, despite producing no carbon emissions; it is also on The Sierra Club’s hit list. Geothermal and biomass, those minor league renewables, were given short shrift, perhaps because no one thought they were sufficiently scalable to achieve the objective.

So it was a wind versus gas scrum played out as if the two contenders were equally matched as producers of power. Bryce pointed out wind’s puny energy density, how its noise harms health and safety, its threat to birds and bats, and how natural gas’s newfound abundance continues to decrease its costs—and its price. His opponent carried the argument that wind and solar would one day be economically competitive with natural gas, such that the former, since they produce no greenhouse gases, would be the preferred choice over the latter, which does emit carbon and, as a non-renewable, will one day become depleted.

Such a discussion is absurd at a number of levels, mirroring Alice’s small talk with the March Hare. One of the troubling things about the way wind is vetted in public discourse is how “debate” is framed to ensure that wind has modern power and economic value. It does not. Should we debate whether the 747 would do more than gliders in transporting large quantities of freight? Bryce could have reframed the discussion to ask whether wind is better than cumquats as a means of emissions reductions. But he didn’t. And the outcome of this debate, according to the vote, was a virtual draw.

Ironically, the American Natural Gas Association is perking up its louche ad slogan: “The success of wind and solar depends on natural gas.” Eureka! To ANGA, wind particularly is not an either to natural gas’s or. Rather, the renewables du jour will join forces with natural gas to reduce carbon emissions in a way that increases marketshare for all. With natural gas, wind would be an additive—not an alternative—energy source. Bryce might have made this clear.

What ANGA and industry trade groups like the Interstate Natural Gas Association of America (see its latest paper) don’t say is that virtually all emissions reductions in a wind/gas tandem would come from natural gas—not wind. But, as Bryce should also be encouraged to say, such a pretension is a swell way for the natural gas industry to shelter income via wind’s tax-avoidance power, and to create a PR slogan based upon the deception of half-truths. Although natural gas can indeed infill wind’s relentless volatility, the costs would be enormous while the benefit would be inconsequential. Ratepayers and taxpayers would ultimately pay the substantial capital expenses of supernumerary generation.

Beyond Wonderland and Through the Looking Glass

The Oxford-style Economist debate, which by all accounts Bryce and Hayward won with ease, nonetheless woozled around in a landscape worthy of Carroll’s Jabberwocky, complete with methodological slips, definitional slides, sloganeering, and commentary that often devolved into meaningless language—utter nonsense. It was as if Pixar had for the occasion magically incarnated the Red Queen, the Mad Hatter, and Humpty Dumpty, who once said in Through the Looking Glass, “When I use a word, it means just what I choose it to mean – neither more nor less.” Dumpty also said, “When I make a word do a lot of work … I always pay it extra.”

Those promoting “clean” were paying that word extra—and over the top, as Hayward frequently reminded by demanding a clear, consistent definition of clean technology.

Proponents frequently defined clean energy differently depending upon what they chose to mean. At times, they meant acts of commission in the form of “clean coal,” wind, solar, biomass (although ethanol was roundly condemned), and increased use of natural gas. Indeed, natural gas in the discussion became reified, in the best Nancy Pelosi/T. Boone Pickens tradition, as a clean source of energy on a par with wind and solar. At one time, clean also referred to nuclear—but the topic quickly changed back to wind and natural gas. At other times, clean referred to acts of omission, such as reducing demand with more efficient appliances, smarter systems of transmission, and more discerning lifestyle choices.

Shifting definitions about what was “clean” made for a target that was hard to hit. Bryce mentioned Jevons’ Paradox. Bullseye. So much for increased efficiency. Hayward demonstrated that the US electricity sector has already cut SO2 and NOx emissions nearly 60% over the last 40 years, and reduced mercury emissions by about 40% over this time, despite tripling coal use from 1970 to 2005. Zap. All this without wind and solar. Green jobs from clean industry? It would have been fruitful to invoke Henry Hazlitt’s Broken Window fallacy, which illustrates the likelihood of few net new jobs because of the opportunities lost for other, more productive investment. Also welcome would have been remarks about how more jobs in the electricity sector must translate into increased costs, making electricity less affordable. Such a development would substantially subvert prospects for economic recovery.

In arguing against the proposition that clean energy could be a force for economic recovery, Bryce and Hayward did clean the opposition’s clock (they had, as everyone agreed, the numbers on their side). But they also let the opposition off the hook by not exposing the worms at the core of the proposition. Yes, the numbers overwhelmingly suggest that coal and natural gas are going to be around for a long time, and that they will continue to be the primary fuels, along with oil, to energize the American economy.** They can be, as they have been, made cleaner by reducing their carbon emissions even more. But they won’t be clean. Outside Wonderland, cleaner is still not clean.

The proposition therefore had to fail. Even in Wonderland.

Example of the twinning between natural gas and renewable energy – unacceptable from a greenhouse gas mitigation perspective

Capacity Matters

These arguments, however, are mere body blows. Bryce should have supplied the knockout punch by reminding that any meaningful discussion of electricity production, which could soon embrace 50% of our overall energy use, must consider the entwined goals of reliability, security, and affordability, since reliable, secure, affordable electricity is the lynchpin of our modernity. Economic recovery must be built upon such a foundation. At the core of this triad, however, resides the idea of effective capacity—the ability of energy suppliers to provide just the right amount of controllable power at any specified time to match demand at all times. It is the fount of modern power applications.
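Because the distinction between average energy and effective capacity is easy to miss, here is a toy numerical contrast in Python, with an invented hourly output profile (no real wind data behind it): a fleet can post a healthy capacity factor while offering almost nothing that a grid operator can count on in every hour.

    # Capacity factor (average energy) vs effective capacity (what can
    # be relied upon every hour). The hourly profile is invented.
    import random

    random.seed(7)
    HOURS = 8760
    output = [min(1.0, max(0.0, random.gauss(0.30, 0.25)))
              for _ in range(HOURS)]           # fraction of nameplate

    capacity_factor = sum(output) / HOURS
    firm = sorted(output)[int(0.05 * HOURS)]   # exceeded 95% of hours
    print(f"Capacity factor: {capacity_factor:.0%}")       # ~31%
    print(f"Output exceeded 95% of the time: {firm:.0%}")  # ~0%
    # Plenty of average energy, yet nearly zero dependable capacity --
    # exactly the gap between production and effective capacity.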

By insisting that any future technology—clean, cleaner, or otherwise, particularly in the electricity sector—must produce effective capacity, Bryce would have come quickly to the central point, moving the debate out of Wonderland and into sensible colloquy.

Comparing—both economically and functionally—wind and solar with conventional generation is spurious work. Saying that the highly subsidized price of wind might, maybe, possibly become, one day, comparable to coal or natural gas may be true. But even if this happens, if, say, wind and coal prices become equivalent, paying anything for resources that yield no or little effective capacity seems deranged as a means of promoting economic recovery for the most dedicatedly modern country on the planet.

Subsidies for conventional fuels—coal, natural gas, nuclear, and hydro—make sense because they promote high-capacity generation. Subsidies for wind and solar, which are, as Bryce stated, many times greater on a unit-of-production basis than those for conventional fuels, promote pretentious power that makes everything else work harder simply to stand still.

Consider the following passage from Part II of my recent paper, which is pertinent in driving this point home:

Since reliable, affordable, secure electricity production has historically required the use of many kinds of generators, each designed to perform different but complementary roles, much like instruments in an orchestra, it is not unreasonable for companies in the power business to diversify their power portfolios. Thus, investment in an ensemble of nuclear and large coal plants to provide for baseload power, along with bringing on board smaller coal and natural gas plants to engage mid and peak load, makes a great deal of sense, providing for better quality and control while achieving economies of scale.

Traditional diversified power portfolios, however, insisted upon a key common denominator: their generating machines, virtually all fueled by coal, natural gas, nuclear, and/or hydro, had high unit availability and capacity value. That is, they all could be relied upon to perform when needed precisely as required.

How does adding wind—a source of energy that cannot of itself be converted to modern power, is rarely predictable, never reliable, always changing, is inimical to demand cycles, and, most importantly, produces no capacity value—make any sense at all? Particularly when placing such a volatile brew in an ensemble that insists upon reliable, controllable, dispatchable modes of operation. As a functional means of diversifying a modern power portfolio, wind is a howler.

Language Matters

All electricity suppliers are subsidized. But conventional generation provides copious capacity, while wind supplies none and solar very little. The central issue is capacity—or its absence. Only capacity generation will drive future economic recovery. And Bryce should say so in future debates. Birds and bats, community protests, health and safety—all pale in contrast to wind technology’s lack of capacity. And Bryce should say so. Ditto for any contraption fueled by dilute energy sources that cannot be converted to modern power capacity—even if it produces no carbon emissions. Clean and green sloganeering should not be conflated with effective production.

Moreover, even if the definition of clean and/or renewable technology is stretched to mean reduced or eliminated carbon emissions caused by less consumption of fossil fuels, then where is the evidence that technologies like wind and solar are responsible for doing this? When, during the debate, former Colorado governor Bill Ritter claimed that the wind projects he helped build in his state were reducing California’s carbon emissions, why didn’t the Bryce/Hayward team demand proof? There is none.

It’s not just wind’s wispy energy density that makes conversion to modern power impossible without fortification by substantial amounts of inefficiently operating fossil-fired power, virtually dedicated transmission lines, and new voltage regulation—the costs of which must collectively be counted as the price of integrating wind into an electricity grid. It is rather wind’s continuous skittering, which destabilizes the required match between supply and demand, and which must be smoothed by all those add-ons. The vast amount of land wind gobbles up therefore hosts a dysfunctional, Rube Goldbergesque mechanism for energy conversion. Bryce and his confreres would do well to aim this bullet right between the eyes.

Robert Bryce remains a champion of reasoned discourse and enlightened energy policy. He is one of the few energy journalists committed to gleaning meaningful knowledge from a haze of data and mere information. His work is a wise undertaking in the best traditions of journalism in a democracy. As he prepares for future debates—although, given the wasteland of contemporary journalism, it is a tribute to his skills that he is even invited to the table—he must cut through the chaff surrounding our politicized energy environment, communicating instead the whole grained wheat of its essentials.

Endnote: You might also enjoy my other relatively recent paper, Oxymoronic Wind (13-page PDF). It covers a lot of ground but dwells on the relationship between wind and companies swaddled in coal and natural gas, which is the case worldwide.

________________________________________________________

** It was fascinating to note Hayward’s brief comment about China’s involvement with wind, no doubt because it seeks to increase its renewables manufacturing base and then export the bulk of the machines back to a gullible West. As journalist Bill Tucker said recently in a panel discussion about the future of nuclear technology on the Charlie Rose show, China (and India), evidently dedicated to achieving high levels of functional modernity, will soon lead the world in nuclear production as they slowly transition from heavy use of coal over the next half-century.

April 27, 2011

Energy debates in Wonderland

Filed under: Renewable Energy — buildeco @ 1:50 pm

by Barry Brook

My position on wind energy is quite ambivalent. I really do want it (and solar) to play an effective role in displacing fossil fuels, because to do this, we need every tool at our disposal (witness the Open Science project I kick started in 2009 [and found funding for], in order to investigate the real potential of renewables, Oz-Energy-Analysis.Org).

However, I think there is far too much wishful thinking wrapped up in the proclamations by the “100% renewables” crowd(most of who are unfortunately also anti-nuclear advocates), that wind somehow offers both a halcyon choice and an ‘industrial-strength’ solution to our energy dilemma. In contrast, my TCASE series (thinking critically about sustainable energy) illustrates that, pound-for-pound, wind certainty does NOT punch above it’s weight as a clean-energy fighter; indeed, it’s very much a journeyman performer.

The following guest post, by Jon Boone, looks at wind energy with a critical eye and a witty turn of phrase. I don’t offer it as a comprehensive technical critique — rather it’s more a philosophical reflection on past performance and fundamental limits. Whatever your view of wind, I think you’ll find it interesting.

————————

Energy debates in Wonderland

Guest Post by Jon Boone. Jon is a former university administrator and longtime environmentalist who seeks more more informed, effective energy policy in ways that expand and enhance modernity, increase civility, and demand stewardship on behalf of biodiversity and sensitive ecosystems. His brand of environmentalism eschews wishful thinking because it is aware of the unintended adverse consequences flowing from uninformed decisions. He produced and directed the documentary, Life Under a Windplant, which has been freely distributed within the United States and many countries throughout the world. He also developed the website Stop Ill Wind as an educational resource, posting there copies of his most salient articles and speeches. He receives no income from his work on wind technology.

March Hare (to Alice): Have some wine.

(Alice looked all round the table, but there was nothing on it but tea.)

Alice: I don’t see any wine.

March Hare: There isn’t any.

Alice: Then it wasn’t very civil of you to offer it.

March Hare: It wasn’t very civil of you to sit down without being invited.

— From Lewis Carroll’s Alice in Wonderland

Energy journalist Robert Bryce, whose latest book, Power Hungry, admirably foretells an electricity future anchored by natural gas from Marcellus Shale that will eventually bridge to pervasive use of nuclear power, has recently been involved in two prominent debates. In the first, conducted by The Economist, Bryce argued for the proposition that “natural gas will do more than renewables to limit the world’s carbon emissions.” In the second, an Intelligence Squared forum sponsored by the Rosenkranz Foundation, he and American Enterprise Institute scholar Steven Hayward argued against the proposition that “Clean Energy can drive America’s economic recovery.”

Since there’s more evidence a friendly bunny brings children multi-colored eggs on Easter Sunday than there is that those renewables darlings, wind and solar, can put much of a dent in CO2 emissions anywhere, despite their massively intrusive industrial presence, the first debate was little more than a curiosity. No one mentioned hydroelectric, which has been the most widely effective “renewable”—ostensibly because it continues to lose marketshare (it now provides the nation with about 7% of its electricity generation), is an environmental pariah to the likes of The Sierra Club, and has little prospect for growth. Nuclear, which provides the nation’s largest grid, the PJM, with about 40% of its electricity, is not considered a renewable, despite producing no carbon emissions; it is also on The Sierra Club’s hit list. Geothermal and biomass, those minor league renewables, were given short shrift, perhaps because no one thought they were sufficiently scalable to achieve the objective.

So it was a wind versus gas scrum played out as if the two contenders were equally matched as producers of power. Bryce pointed out wind’s puny energy density, how its noise harms health and safety, its threat to birds and bats, and how natural gas’s newfound abundance continues to decrease its costs—and its price. His opponent carried the argument that wind and solar would one day be economically competitive with natural gas, such that the former, since they produced no greenhouse gasses, would be the preferred choice over the latter, which does emit carbon and, as a non renewable, will one day become depleted.

Such a discussion is absurd at a number of levels, mirroring Alice’s small talk with the March Hare. One of the troubling things about the way wind is vetted in public discourse is how “debate” is framed to ensure that wind has modern power and economic value. It does not. Should we debate whether the 747 would do more than gliders in transporting large quantities of freight? Bryce could have reframed the discussion to ask whether wind is better than cumquats as a means of emissions reductions. But he didn’t. And the outcome of this debate, according to the vote, was a virtual draw.

Ironically, the American Natural Gas Association is perking up its louche ad slogan: “The success of wind and solar depends on natural gas.” Eureka! To ANGA, wind particularly is not an either to natural gas’s or. Rather, the renewables du jour will join forces with natural gas to reduce carbon emissions in a way that increases marketshare for all. With natural gas, wind would be an additive—not an alternative—energy source. Bryce might have made this clear.

What ANGA and industry trade groups like the Interstate Natural Gas Association of America (see its latest paper) don’t say is that virtually all emissions reductions in a wind/gas tandem would come from natural gas—not wind. But, as Bryce should also be encouraged to say, such a pretension is a swell way for the natural gas industry to shelter income via wind’s tax avoidance power. And to create a PR slogan based upon the deception of half-truths. Although natural gas can indeed infill wind’s relentless volatility, the costs would be enormous while the benefit would be inconsequential. Rate and taxpayers would ultimately pay the substantial capital expenses of supernumerary generation.

Beyond Wonderland and Through the Looking Glass

The Oxford-style Intelligence Squared debate, which by all accounts Bryce and Hayward won with ease, nonetheless woozled around in a landscape worthy of Carroll’s Jabberwocky, complete with methodological slips, definitional slides, sloganeering, and commentary that often devolved into meaningless language—utter nonsense. It was as if Pixar had for the occasion magically incarnated the Red Queen, the Mad Hatter, and Humpty Dumpty, who once said in Through the Looking Glass, “When I use a word, it means just what I choose it to mean – neither more nor less.” Dumpty also said, “When I make a word do a lot of work … I always pay it extra.”

Those promoting “clean” were paying that word extra—and over the top, as Hayward frequently reminded the audience by demanding a clear, consistent definition of clean technology.

Proponents frequently defined clean energy differently depending upon what they chose to mean. At times, they meant acts of commission in the form of “clean coal,” wind, solar, biomass (although ethanol was roundly condemned), and increased use of natural gas. Indeed, natural gas in the discussion became reified, in the best Nancy Pelosi/T. Boone Pickens tradition, as a clean source of energy on a par with wind and solar. At one time, clean also referred to nuclear—but the topic quickly changed back to wind and natural gas. At other times, clean referred to acts of omission, such as reducing demand with more efficient appliances, smarter systems of transmission, and more discerning lifestyle choices.

Shifting definitions about what was “clean” made for a target that was hard to hit. Bryce mentioned the Jevons Paradox. Bullseye. So much for increased efficiency. Hayward demonstrated that the US electricity sector has already cut SO2 and NOx emissions nearly 60% over the last 40 years, and reduced mercury emissions by about 40% over this time, despite tripling coal use from 1970 to 2005. Zap. All this without wind and solar. Green jobs from clean industry? It would have been fruitful to have invoked Henry Hazlitt’s Broken Window fallacy, which illustrates the likelihood of few net new jobs because of the opportunities lost for other, more productive investment. Also welcome would have been remarks about how more jobs in the electricity sector must translate into increased costs, making electricity less affordable. Such a development would substantially subvert prospects for economic recovery.

In arguing against the proposition that clean energy could be a force for economic recovery, Bryce and Hayward did clean the opposition’s clock (they had, as everyone agreed, the numbers on their side). But they also let the opposition off the hook by not exposing the worms at the core of the proposition. Yes, the numbers overwhelmingly suggest that coal and natural gas are going to be around for a long time, and that they will continue to be the primary fuels, along with oil, to energize the American economy.** They can be, as they have been, made cleaner by reducing their carbon emissions even more. But they won’t be clean. Outside Wonderland, cleaner is still not clean.

The proposition therefore had to fail. Even in Wonderland.

[Figure: An example of the twinning between natural gas and renewable energy – unacceptable from a greenhouse gas mitigation perspective.]

Capacity Matters

These arguments, however, are mere body blows. Bryce should have supplied the knockout punch by reminding that any meaningful discussion of electricity production, which could soon embrace 50% of our overall energy use, must consider the entwined goals of reliability, security, and affordability, since reliable, secure, affordable electricity is the lynchpin of our modernity. Economic recovery must be built upon such a foundation. At the core of this triad, however, resides the idea of effective capacity—the ability of energy suppliers to provide just the right amount of controllable power at any specified time to match demand. It is the fount of modern power applications.
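The distinction at work here, between energy occasionally delivered and capacity dependably available, is easy to put into numbers. A minimal Python sketch follows; the capacity factors and capacity credits in it are illustrative assumptions for the sake of the example, not measured data:

```python
# Toy comparison: energy delivered over a year vs. capacity that can
# be counted on at peak demand. All figures are illustrative
# assumptions, not measured data.

HOURS_PER_YEAR = 8760

def annual_energy_twh(nameplate_mw, capacity_factor):
    """Energy actually generated over a year, in TWh."""
    return nameplate_mw * capacity_factor * HOURS_PER_YEAR / 1e6

def dependable_mw(nameplate_mw, capacity_credit):
    """Capacity a grid operator can rely on at peak demand."""
    return nameplate_mw * capacity_credit

# Hypothetical 1000 MW nameplate of each technology:
for name, cf, credit in [("nuclear", 0.90, 0.90),
                         ("gas (CCGT)", 0.50, 0.95),
                         ("wind", 0.30, 0.10)]:
    print(f"{name:>11}: {annual_energy_twh(1000, cf):.2f} TWh/yr, "
          f"{dependable_mw(1000, credit):4.0f} MW dependable at peak")
```

On these assumed figures, a megawatt of wind nameplate delivers a third of the energy and barely a tenth of the dependable peak capacity of its conventional rivals, which is why comparisons of nameplate prices mislead.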

By insisting that any future technology—clean, cleaner, or otherwise, particularly in the electricity sector—must produce effective capacity, Bryce would have come quickly to the central point, moving the debate out of Wonderland and into sensible colloquy.

Comparing—both economically and functionally—wind and solar with conventional generation is spurious work. Saying that the highly subsidized price of wind might, maybe, possibly become, one day, comparable to coal or natural gas may be true. But even if this happens, if, say, wind and coal prices become equivalent, paying anything for resources that yield little or no effective capacity seems deranged as a means of promoting economic recovery for the most dedicatedly modern country on the planet.

Subsidies for conventional fuels—coal, natural gas, nuclear, and hydro—make sense because they promote high capacity generation. Subsidies for wind and solar, which are, as Bryce stated, many times greater on a unit of production basis than for conventional fuels, promote pretentious power that makes everything else work harder simply to stand still.

Consider the following passage from Part II of my recent paper, which is pertinent in driving this point home:

Since reliable, affordable, secure electricity production has historically required the use of many kinds of generators, each designed to perform different but complementary roles, much like instruments in an orchestra, it is not unreasonable for companies in the power business to diversify their power portfolios. Thus, investment in an ensemble of nuclear and large coal plants to provide for baseload power, along with bringing on board smaller coal and natural gas plants to engage mid and peak load, makes a great deal of sense, providing for better quality and control while achieving economies of scale.

Traditional diversified power portfolios, however, insisted upon a key common denominator: their generating machines, virtually all fueled by coal, natural gas, nuclear, and/or hydro, had high unit availability and capacity value. That is, they all could be relied upon to perform when needed precisely as required.

How does adding wind—a source of energy that cannot of itself be converted to modern power, is rarely predictable, never reliable, always changing, is inimical to demand cycles, and, most importantly, produces no capacity value—make any sense at all? Particularly when placing such a volatile brew in an ensemble that insists upon reliable, controllable, dispatchable modes of operation. As a functional means of diversifying a modern power portfolio, wind is a howler.

Language Matters

All electricity suppliers are subsidized. But conventional generation provides copious capacity while wind supplies none and solar, very little. The central issue is capacity—or its absence. Only capacity generation will drive future economic recovery. And Bryce should say so in future debates. Birds and bats, community protests, health and safety: these pale in contrast to wind technology’s lack of capacity. And Bryce should say so. Ditto for any contraption fueled by dilute energy sources that cannot be converted to modern power capacity—even if it produces no carbon emissions. Clean and green sloganeering should not be conflated with effective production.

Moreover, even if the definition of clean and/or renewable technology is stretched to mean reduced or eliminated carbon emissions caused by less consumption of fossil fuels, then where is the evidence that technologies like wind and solar are responsible for doing this? When in the debate former Colorado governor Bill Ritter claimed that the wind projects he helped build in his state were reducing California’s carbon emissions, why didn’t the Bryce/Hayward team demand proof? None exists.

It’s not just wind’s wispy energy density that makes conversion to modern power impossible without fortification by substantial amounts of inefficiently operating fossil-fired power, virtually dedicated transmission lines, and new voltage regulation, the costs of which must collectively be calculated as the price of integrating wind into an electricity grid. It is also wind’s continuous skittering, which destabilizes the required match between supply and demand and must be smoothed by all those add-ons. The vast amount of land wind gobbles up therefore hosts a dysfunctional, Rube Goldbergesque mechanism for energy conversion. Bryce and his confreres would do well to aim this bullet right between the eyes.

Robert Bryce remains a champion of reasoned discourse and enlightened energy policy. He is one of the few energy journalists committed to gleaning meaningful knowledge from a haze of data and mere information. His work is a wise undertaking in the best traditions of journalism in a democracy. As he prepares for future debates—although, given the wasteland of contemporary journalism, it is a tribute to his skills that he is even invited to the table—he must cut through the chaff surrounding our politicized energy environment, communicating instead the whole-grain wheat of its essentials.

Endnote: You might also enjoy my other relatively recent paper, Oxymoronic Wind (13-page PDF). It covers a lot of ground but dwells on the relationship between wind and companies swaddled in coal and natural gas, which is the case worldwide.

________________________________________________________

** It was fascinating to note Hayward’s brief comment about China’s involvement with wind, no doubt because it seeks to increase its renewables manufacturing base and then export the bulk of the machines back to a gullible West. As journalist Bill Tucker said recently in a panel discussion about the future of nuclear technology on the Charlie Rose show, China (and India), evidently dedicated to achieving high levels of functional modernity, will soon lead the world in nuclear production as they slowly transition from heavy use of coal over the next half-century.

April 14, 2011

Fukushima rated at INES Level 7 – what does this mean?

Filed under: Japan Earthquake, Nuclear Energy — buildeco @ 8:19 pm
by Barry Brook

Hot in the news is that the Fukushima Nuclear crisis has been upgraded from INES 5 to INES 7. Note that this is not due to some sudden escalation of events  (aftershocks etc.), but rather it is based on an assessment of the cumulative magnitude of the events that have occurred at the site over the past month.

Below I look briefly at what this INES 7 rating means and why it has happened, and provide a new place to centralise comments on this noteworthy piece of news.

The International Nuclear and Radiological Event Scale (INES) was developed by the International Atomic Energy Agency (IAEA) to rate nuclear accidents. It was formalised in 1990 and then back-dated to events like Chernobyl, Three Mile Island, Windscale and so on. Prior to today, only Chernobyl had been rated at the maximum level of the scale, ‘major accident’. A useful 5-page PDF summary description of the INES, by the IAEA, is available here.

A new assessment of Fukushima Daiichi has put this event at INES 7, upgraded from earlier escalating ratings of 3, 4 and then 5. The original intention of the scale was historical/retrospective, and it was not really designed to track real-time crises, so until the accident is fully resolved, any time-specific rating is naturally preliminary.

The criteria used to rate against the INES scale are (from the IAEA documentation):

(i) People and the Environment: considers the radiation doses to people close to the location of the event and the widespread, unplanned release of radioactive material from an installation.

(ii) Radiological Barriers and Control: covers events without any direct impact on people or the environment and only applies inside major facilities. It covers unplanned high radiation levels and spread of significant quantities of radioactive materials confined within the installation.

(iii) Defence-in-Depth: covers events without any direct impact on people or the environment, but for which the range of measures put in place to prevent accidents did not function as intended.

In terms of severity:

Like the scales that describe earthquakes or major storms, each of the INES scale’s seven levels is designed to be ten times more severe than the one before. After below-scale ‘deviations’ with no safety significance, there are three levels of ‘incident’, then four levels of ‘accident’. The selection of a level for a given event is based on three parameters: whether people or the environment have been affected; whether any of the barriers to the release of radiation have been lost; and whether any of the layers of safety systems are lost.

So, on this definitional basis, one might argue that the collective Fukushima Daiichi event (core damage in three units, hydrogen explosions, problems with spent fuel ponds drying out, etc.) is ~100 times worse than TMI-2, which was a Level 5.
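Because each level is defined as ten times the severity of the level below, the relative severity implied by any two ratings is simply a power of ten. A one-line Python calculation makes the ~100× figure explicit:

```python
def ines_severity_ratio(level_a, level_b):
    """Nominal severity ratio implied by the logarithmic INES scale."""
    return 10 ** (level_a - level_b)

print(ines_severity_ratio(7, 5))  # Fukushima (7) vs. TMI-2 (5) -> 100
```

Bear in mind, though, that this arithmetic breaks down at the top of the scale, as the next paragraph explains.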

However, what about when you hit the top of the INES? Does a rating of 7 mean that Fukushima is as bad as Chernobyl? Well, since you can’t get higher than 7 on the scale, it’s impossible to use the categorical INES rating alone to answer such a question numerically. It just tells you that both events are in the ‘major league’. There is simply no event rating of 8 or 10, nor indeed any capacity within the INES system to rank or discriminate events within categories (this is especially telling for Level 7). For that, you need to look to other diagnostics.

So headlines like ‘Fukushima is now on a par with Chernobyl’ can be classified as semantically correct and yet also (potentially) downright misleading. Still, it sells newspapers.

There is a really useful summary of the actual ‘news’ of this INES upgrade from World Nuclear News, here. It reports:

Japanese authorities notified the International Atomic Energy Agency of their decision to up the rating: “As a result of re-evaluation, total amount of discharged iodine-131 is estimated at 1.3×10¹⁷ becquerels, and caesium-137 is estimated at 6.1×10¹⁵ becquerels. Hence the Nuclear and Industrial Safety Agency has concluded that the rating of the accident would be equivalent of Level 7.”

More here from the IAEA:

The new provisional rating considers the accidents that occurred at Units 1, 2 and 3 as a single event on INES. Previously, separate INES Level 5 ratings had been applied for Units 1, 2 and 3. The provisional INES Level 3 rating assigned for Unit 4 still applies.

The re-evaluation of the Fukushima Daiichi provisional INES rating resulted from an estimate of the total amount of radioactivity released to the environment from the nuclear plant. NISA estimates that the amount of radioactive material released to the atmosphere is approximately 10 percent of the 1986 Chernobyl accident, which is the only other nuclear accident to have been rated a Level 7 event.
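For the numerically curious, converting the becquerel figures quoted above into the iodine-131-equivalent terabecquerels used by the INES criteria shows how the Level 7 conclusion follows. In this sketch the Cs-137 weighting factor of 40 is my reading of the IAEA’s INES user’s manual, and the Level 7 threshold is likewise quoted from memory; treat both as assumptions to be checked against the manual itself:

```python
# Rough I-131-equivalent release from the NISA figures quoted above.

i131_bq  = 1.3e17   # iodine-131 released, Bq (NISA estimate)
cs137_bq = 6.1e15   # caesium-137 released, Bq (NISA estimate)

CS137_WEIGHT = 40   # INES radiological equivalence factor for Cs-137
                    # (assumed; check against the INES user's manual)

total_tbq = (i131_bq + CS137_WEIGHT * cs137_bq) / 1e12
print(f"~{total_tbq:,.0f} TBq of I-131 equivalent")
# -> ~374,000 TBq, well past the Level 7 threshold of "several tens
#    of thousands" of TBq of I-131 equivalent.
```

A few hundred thousand TBq of I-131 equivalent is also broadly consistent with NISA’s “approximately 10 percent” of Chernobyl comparison.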

I also discussed the uprating today on radio, and you can listen to the 12-minute interview here for my extended perspective.

So, what are some of the similarities and differences between Fukushima and Chernobyl?

Both have involved breaches of radiological barriers and controls, overwhelming of defence-in-depth measures, and large-scale release of radioactive isotopes into the environment. The causes and sequence of the two events were, however, very different, in terms of reactor designs, the nature of the triggering events, and time-scale for resolution — this is a topic to be explored in more depth in some future post. The obviously big contrast is in the human toll and nature of the radioactive release.

The Chernobyl event killed 28 people directly, via the initial explosion or severe radiation sickness, and a further ~15 deaths have been directly attributed to radiation-induced cancer (see the summary provided today by Ben Heard on Opinion Online: Giving Green the red light). Further, Chernobyl led to a significant overexposure of members of the public in the local area and region, especially due to iodine-131 that was dispersed by the reactor fire, and insufficient protection measures by authorities. An increase in thyroid cancers resulted from this.

In Fukushima, by contrast, no workers have been killed by radiation (or explosions), and indeed none have been exposed to doses >250 mSv (~1000 mSv is the dose required for people to exhibit signs of radiation sickness, and about 50% of victims die after exposure to >5000 mSv [see chart here]). No member of the public has, as yet, been overexposed at Fukushima. Further, much of the radioactivity released into the environment around Fukushima came from water leakages that were flushed into the ocean, rather than radionuclides attached to carbon and other aerosols from a burning reactor moderator, which in Chernobyl were largely deposited on land, where they had the potential to be inhaled.

So is Fukushima another Chernobyl? No. Is it a serious accident? Yes. Two quite different questions — and answers — which should not be carelessly conflated.

March 26, 2011

Preliminary lessons from Fukushima for future nuclear power plants

Filed under: IFR (Integral Fast Reactor) Nuclear Power, Japan Earthquake — buildeco @ 1:34 pm

by Barry Brook

No strong conclusions can yet be drawn on the Fukushima Nuclear Crisis, because so much detail and hard data remains unclear or unavailable. Indeed, it will probably take years to piece the whole of this story together (as has now been done for accidents like TMI and Chernobyl [read this and this from Prof. Bernard Cohen for an absolutely terrific overview]). Still, it will definitely be worth doing this post-event diagnostic, because of the valuable lessons it can teach us. In this spirit, below an associate of mine from the Science Council for Global Initiatives discusses what lessons we’ve learned so far. This is obviously a huge and evolving topic that I look forward to revisiting many times in the coming months…

——————–

Guest Post by Dr. William Hannum. Bill worked for more than 40 years in nuclear power development, stretching from design and analysis of the Shippingport reactor to the Integral Fast Reactor. He earned his BA in physics at Princeton and his MS and PhD in nuclear physics at Yale. He has held key management positions with the U.S. Department of Energy (DOE), in reactor physics, reactor safety, and as Deputy Manager of the Idaho Operations Office.

He served as Deputy Director General of the OECD Nuclear Energy Agency, Paris, France; Chairman of the TVA Nuclear Safety Review Boards, and Director of the West Valley (high level nuclear waste processing and D&D) Demonstration Project. Dr. Hannum is a fellow of the American Nuclear Society, and has served as a consultant to the National Academy of Engineering on nuclear proliferation issues. He wrote a popular article for Scientific American on smarter use of nuclear waste, which you can download as a PDF here.

——————–

Background

On 11 March 2011, a massive earthquake hit Japan. The six reactors at Fukushima-Dai-ichi suffered ground accelerations somewhat in excess of design specification. It appears that all of the critical plant equipment survived the earthquake without serious damage, and safety systems performed as designed. The following tsunami, however, carried the fuel tanks for the emergency diesels out to sea, and compromised the battery backup systems. All off-site power was lost, and power sufficient to operate the pumps that provide cooling of the reactors and the used-fuel pools remained unavailable for over a week. Heroic efforts by the TEPCO operators limited the radiological release. A massive recovery operation will begin as soon as they succeed in restoring the shutdown cooling systems.

It is important to put the consequences of this event in context. This was not a disaster (the earthquake and tsunami were disasters). This was not an accident; the plant experienced a natural event (“Act of God” in insurance parlance) far beyond what it was designed for. Based on the evidence available today, it can be stated with confidence that no one will have suffered any identifiable radiation-related health effects from this event. A few of the operators may have received a high enough dose of radiation to have a slight statistical increase in their long-term risk of developing cancer, but I would place the number at no more than 10 to 50. None of the reports suggest that any person will have received a dose approaching one sievert, which would imply immediate health effects.

Even ignoring the possibility of hormetic effects, this is approaching the trivial when compared with the impacts of the earthquake and tsunami, where deaths will likely come to well over 20,000.  Health impacts from industrial contamination, refinery fires, lack of sanitation, etc., etc. may reasonably be supposed to be in the millions.  Even the “psychological” impacts of the Fukushima problems must be seen to pale in contrast to those from the earthquake and tsunami.

The radiological impact on workers is also small relative to the non-radiological injuries suffered by them. One TEPCO crane operator died from injuries sustained during the earthquake. Two TEPCO workers who had been in the turbine building of Unit 4 are missing. At least eleven TEPCO workers were taken to hospital because of earthquake-related physical injuries.

TEPCO has suffered a major loss of capital equipment, the value of which is non-trivial even in the context of the earthquake and tsunami devastation. They also face a substantial cost for cleanup of the contamination which has been released from the plants. These are financial costs, not human health and well-being matters.

The Sequence of Events

Following the tsunami, the operators had no power for the pumps that circulate the primary coolant to the heat exchangers.  The only way to remove the decay heat was to boil the water in the core.  After the normal feed water supplies were exhausted, they activated the system to supply sea water to the core, knowing this would render the plant unfit to return to operation.  In this way, the reactors were maintained in a relatively stable condition, allowing the water to boil, and releasing the resulting steam to the containment building. Since this is a Boiling Water Reactor (BWR), it is good at boiling water.  Operating with the water level 1.7 to 2 meters below the top of the core, they  mimicked power operation; the core normally operates at power with the water level well below the top of the core, the top part being cooled by steam.   Cold water in, steam out, is a crude but effective means of cooling.

Before using sea water, according to reports, water levels are thought to have dropped far enough to allow the fuel to overheat, damaging some of the fuel cladding. When overheated, the zirconium cladding reacts with steam, stripping oxygen from the water (Zr + 2H2O → ZrO2 + 2H2) and leaving hydrogen. When vented to the containment and then to the outer building, the hydrogen built up, and eventually exploded, destroying the enclosing building. With compromised fuel, the steam being vented contains radioactive fission products. The design of BWRs is such that this venting goes through a water bath (in the torus), which filters out all but the most volatile fission products.
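That oxidation chemistry can be quantified with schoolbook stoichiometry, and doing so shows why the hydrogen problem was so severe. In the sketch below, the cladding masses are hypothetical round numbers chosen only to indicate scale, not Fukushima data:

```python
# Hydrogen from steam-zirconium oxidation: Zr + 2 H2O -> ZrO2 + 2 H2
# Cladding masses below are hypothetical round numbers.

M_ZR = 91.22       # g/mol, molar mass of zirconium
MOLAR_VOL = 22.4   # L/mol, ideal gas at 0 C and 1 atm

def h2_volume_m3(zr_kg):
    """Hydrogen volume (m^3 at STP) from oxidizing zr_kg of cladding."""
    mol_zr = zr_kg * 1000 / M_ZR
    mol_h2 = 2 * mol_zr            # two H2 molecules per Zr atom
    return mol_h2 * MOLAR_VOL / 1000

print(f"{h2_volume_m3(1):.2f} m^3 of H2 per kg of Zr")       # ~0.49 m^3
print(f"{h2_volume_m3(10_000):,.0f} m^3 if 10 t oxidizes")   # ~4,900 m^3
```

Even partial oxidation of a core’s worth of cladding thus yields thousands of cubic meters of hydrogen, ample fuel for the explosions that destroyed the outer buildings.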

With time, the heat generated in used fuel (both in the core and in the fuel pool) decreases: from about 2% of full power an hour after shutdown (when the coolant pumps lost power) to about 0.2% a week later. As the heat falls, the amount of steam venting decreases, and releases can be controlled and planned for favorable weather conditions.
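The quoted 2%-to-0.2% trajectory follows the familiar shape of textbook decay-heat correlations. The classic Way-Wigner approximation, sketched below with an assumed one year of prior operation, reproduces the same order of magnitude (it is a rough correlation, not plant data):

```python
# Decay-heat fraction via the Way-Wigner correlation:
#   P/P0 = 0.0622 * (t^-0.2 - (t + T)^-0.2)
# t = seconds since shutdown, T = seconds of prior full-power operation.

def decay_heat_fraction(t_s, t_op_s=3.15e7):  # assume ~1 year at power
    return 0.0622 * (t_s ** -0.2 - (t_s + t_op_s) ** -0.2)

for label, t in [("1 hour", 3600), ("1 day", 86400), ("1 week", 604800)]:
    print(f"{label:>7}: {100 * decay_heat_fraction(t):.2f}% of full power")
# -> roughly 1% after an hour and 0.2% after a week, the same order
#    as the figures quoted in the text.
```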

A second concern arose because of the inability to provide cooling for the used-fuel pool in Unit 4, and later Unit 3.  The Unit 4 pool was of concern because, for maintenance, the entire core had been off-loaded into the pool in November (it is believed that two older core loadings were also in this pool, awaiting transfer to the central storage pool).  With only a few months cooling, the residual heat is sufficient to raise the temperature of the water in the pool to boiling within several days or weeks.  There is also some suggestion that the earthquake may have sloshed some water out of the pool.  In any case, the fuel pools for Units 3 and 4 eventually were thought to be losing enough water such that the fuel would no longer be adequately cooled.  Since the fuel pools are outside the primary containment, leakage from these pools can spread contamination more readily than that from the reactor core.  High-power water hoses have been used to maintain water in the fuel pools.
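A simple energy balance shows why “several days or weeks” is the right timescale for a freshly loaded, uncooled pool. In the sketch below, the water inventory and decay-heat figures are assumed round numbers for illustration, not Unit 4 data:

```python
# Time for an uncooled spent-fuel pool to heat from ~30 C to boiling.
# Pool inventory and decay heat are assumed round numbers.

WATER_CP = 4186    # J/(kg*K), specific heat of water

def days_to_boil(water_tonnes, decay_heat_mw, delta_t_k=70):
    """Days to heat the pool by delta_t_k, ignoring heat losses."""
    joules = water_tonnes * 1000 * WATER_CP * delta_t_k
    return joules / (decay_heat_mw * 1e6) / 86400

print(f"{days_to_boil(1500, 2.0):.1f} days")   # ~2.5 days at 2 MW
print(f"{days_to_boil(1500, 0.5):.1f} days")   # ~10 days at 0.5 MW
```

Once the water reaches boiling, the same decay heat begins boiling the inventory away, which is why topping up the pools with hoses was an effective stopgap.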

While many areas within the plant complex itself, and localized areas as far away as 20 km, may require cleanup of the contamination released from the reactors and from the fuel pools, there is no indication that there are any areas that will require long-term isolation or exclusion.

Lessons Learned

It is not the purpose of this paper to anticipate the lessons to be learned from this event, but a few items may be noted.  One lesson will dominate all others:

Prolonged lack of electrical power must be precluded.

While the designers believed their design included sufficient redundancies (diesels, batteries, redundant connections to the electrical grid), the simultaneous extended loss of all sources of power left the operators dependent on creative responses. This lesson is applicable both to the reactor and to fuel pools.

All nuclear installations will probably be required to do a complete review of the security of their access to electrical power.  It may be noted that this lesson is applicable to many more activities than just nuclear power.  Extended loss of electrical power in any major metropolitan area would generate a monstrous crisis.  The loss of power was irrelevant to other activities in the region near the Fukushima plant because they were destroyed by the tsunami.

Other lessons that will be learned that may be expected to impact existing plants include:

Better means of control of hydrogen buildup in the case of fuel damage may be required.

In addition, detailed examinations of the Fukushima plants will provide evidence of the margins available in seismic protection. Detailed reconstruction of the event will give very helpful insights into the manner in which fission products can be released from damaged fuel, and how they are transported.

Applicability of Fukushima Information to MOX-fueled Reactors:

The core of Unit 3 was fueled with plutonium recycled from earlier used reactor fuel. Preliminary information suggests that the release of hazardous radioactive material, for this type of event, is not significantly different from that of non-recycled fuel. More detailed examinations after the damaged cores are recovered, and models developed to reconstruct the events, will be necessary to verify and quantify this conclusion.

Applicability of Fukushima Information to Gen-III Reactors:

In the period since the Fukushima plants were designed, advanced designs for BWRs (and other reactor types) have been developed to further enhance passive safety (systems feedback characteristics that compensate for abnormal events, without reliance on operator actions or on engineered safety systems), simplify designs, and reduce costs.  The results of these design efforts (referred to as Gen-III) are the ones now under construction in Japan, China and elsewhere, and proposed for construction in the U.S.

One of the most evident features of the Gen-III systems is that they are equipped with large gravity-feed water reservoirs that would flood the core in case of major disruption. This will buy additional time in the event of a Fukushima-type situation, but the plants will ultimately rely on restoration of power at some point.

The applicability of the other lessons (hydrogen control, fuel pool) will need to be evaluated, but there are no immediately evident lessons beyond these that will affect these designs in a major way.

Applicability of Fukushima Information to Recycling Reactors:

As noted above, Unit 3 was fueled with recycled plutonium, and there are no preliminary indications that this had any bearing on the performance of the plant during this event.

Advanced recycling, where essentially all of the recyclable material is recovered and used (as opposed to recovery and recycle of plutonium) presents a different picture.  Full recycling is effective only with a fast reactor.  A metal fuel, clad in stainless steel, allows a design of a sodium-cooled fast reactor with astonishing passive safety characteristics.  Because the sodium operates far from its boiling point in an essentially unpressurized system, catastrophic events caused by leakage or pipe failures cannot occur.  The metal fuel gives the system very favorable feedback characteristics, so that even the most extreme disruptions are passively accommodated.  A complete loss of cooling, such as at Fukushima, leads to only a modest temperature rise.  Even if the control system were rendered inoperable, and the system lost cooling but remained at full power (this is a far more serious scenario than Fukushima, where the automatic shutdown system operated as designed) the system would self-stabilize at low power, and be cooled by natural convection to the atmosphere.  Should the metal fuel fail for any reason, internal fission product gases would cause the fuel to foam and disperse, providing the most powerful of all shutdown mechanisms.
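The self-stabilization argument can be illustrated with a toy model. The sketch below assumes feedback strong and prompt enough that reactor power simply tracks core temperature; every parameter is a made-up round number, so read it as a cartoon of the feedback logic, emphatically not a reactor simulation:

```python
# Cartoon of an unprotected loss of cooling in a reactor with strong
# negative temperature feedback. Quasi-static assumption: power tracks
# temperature as P(T) = P0 * max(0, 1 - K_FB * (T - T_OP)).
# All parameters are made-up round numbers for illustration.

P0    = 1e9     # full power, W (assumed)
K_FB  = 0.01    # fractional power lost per kelvin of overheat (assumed)
C     = 1e10    # heat capacity of core + sodium pool, J/K (assumed)
H_NAT = 5e5     # natural convection to surroundings, W/K (assumed)
T_OP, T_AMB = 700.0, 400.0   # operating and ambient temperatures, K

T, dt = T_OP, 1.0
for _ in range(20_000):                      # simulate ~5.5 hours
    P = P0 * max(0.0, 1 - K_FB * (T - T_OP))
    T += (P - H_NAT * (T - T_AMB)) / C * dt  # heat in minus heat out

print(f"temperature rise: {T - T_OP:.0f} K")            # ~80 K
print(f"stabilized power: {100 * P / P0:.0f}% of full")  # ~19%
```

On these assumed numbers, the core settles roughly 80 K above its operating temperature, at the modest power level that natural convection to the surroundings can carry away — the qualitative behaviour described above.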

The only situation that could generate enough energy to disperse material from the reactor is a sodium-water reaction. By using an intermediate sodium system (reactor sodium passes its energy to a non-radioactive sodium system, which then passes its energy to water to generate steam to turn the electrical generator), the possibility of a sodium-water reaction spreading radioactive materials is precluded.

These reactors must accommodate seismic challenges, just as any other reactor type.  While there are many such design features in common with other reactor designs, the problem is simpler for the fast reactor because of the low pressure, and the fact that this type of reactor does not need elaborate water injection systems.

In light of the Fukushima event, one must consider the potential consequences of a massive tsunami accompanying a major challenge to the reactor.  Since it may be difficult to ensure that the sodium systems remain intact under the worst imaginable circumstances, it may be prudent to conclude that a tsunami-prone location may not be the best place to build a sodium facility (whether a nuclear power plant or something else).

Conclusions:

The major lesson to be learned is that for any water-cooled reactor there must be an absolutely secure supply of power sufficient to operate cooling pumps.  Many other lessons are likely to be learned.  At this early point, it appears that design criteria for fuel storage pools may need to be revised, and hydrogen control assessed.

Given the severity of the challenge faced by the operators at Fukushima, and their ability to manage the situation in such a way as to preclude any significant radiation-related health consequences for workers or the public, this event should be a reassurance that properly designed and regulated nuclear power does not pose a catastrophic risk to the public—that, overall, nuclear power remains a safe and clean energy source.

Given the financial impact this event will have on the utility (loss of four major power plants, massive cleanup responsibilities), it will be worthwhile for the designers, constructors, operators, and licensing authorities to support a thorough analysis of what actually transpired during this event.
