Climate change

June 23, 2011

What price of Indian independence? Greenpeace under the spotlight

Filed under: Emissions Reduction, Energy Demand, Global Warming — buildeco @ 1:56 pm
Two PWRs under construction at Kudankulam, India

Guest Post by Geoff Russell. Geoff is a mathematician and computer programmer and a member of Animal Liberation SA. His recently published book is CSIRO Perfidy. To see a list of other BNC posts by Geoff, click here.

——————

India declared itself a republic in 1950 after more than a century of struggle against British Imperialism. Greenpeace India however, is still locked firmly under the yoke of its parent. Let me explain.

Like many Australians, I only caught up with Bombay’s 1995 change of name to Mumbai some time after it happened. Mumbai is India’s city of finance and film, of banks and Bollywood. A huge seething coastal metropolis on the north western side of India. It’s also the capital of the state of Maharashtra which is about 20 percent bigger than the Australian state of Victoria, but has 112 million people compared to Victoria’s 5.5 million. Mumbai alone has over double Victoria’s entire population. Despite its population, the electricity served up by Maharashtra’s fossil fuel power stations plus one big hydro scheme is just 11.3 GW (giga watts, see Note 3), not much more than the 8 or so GW of Victoria’s coal and gas fumers. So despite Mumbai’s dazzling glass and concrete skyline, many Indians in both rural and urban areas of the state still cook with biomass … things like wood, charcoal and cattle dung.

The modern Mumbai skyline at night

Mumbai’s wealth is a magnet for terrorism. The recent attacks in 2008 which killed 173 followed bombings in 2003 and 1993 which took 209 and 257 lives respectively. Such events are international news, unlike the daily death and illness, particularly to children, from cooking with biomass. Each year, cooking smoke kills about 256,000 Indian children between 1 and 5 years of age with acute lower respiratory infections (ALRI). Those who don’t die can suffer long term consequences to their physical and mental health. A rough pro-rata estimate would see about 23,000 children under 5 die in Maharashtra every year from cooking smoke.

The image is from a presentation by medical Professor Kirk Smith, who has been studying cooking smoke and its implications for 30 years.

Medical Prof. Kirk Smith’s summary of health impacts from cooking fires

The gizmo under the woman’s right arm measures the noxious fumes she is exposed to while cooking. Kirk doesn’t just study these illnesses but has been spinning off development projects which develop and distribute cleaner cooking stoves to serve as an interim measure until electricity arrives.

The disconnect between what matters about Mumbai and India generally to an Australian or European audience and what matters locally is extreme. But a visit to the Greenpeace India website shows it is simply a western clone. In a country where real matters of life and death are ubiquitous, the mock panic infecting the front page of the Greenpeace India website over the death-less problems of the Fukushima nuclear plant seems weird at best, and obscene at worst. “Two months since Fukushima, the Jaitapur project has not been stopped”, shouts the text over one front page graphic in reference to the nuclear plant proposed for construction at Jaitapur. In those two months, nobody has died of radiation at Fukushima, but 58,000 Indian children have died from cooking smoke. They have died because of a lack of electricity. Some thousands in Maharashtra alone.

Greenpeace, now an obstructive dinosaur

The whole world loved Greenpeace back in its halcyon days protesting whaling and the atmospheric testing of nuclear weapons. Taking on whalers and the French Navy in the open sea in little rubber boats was indeed worthy of Mahatma Gandhi. But the legacy of those days is now an obstacle to Greenpeace helping to fight the much bigger environmental battles that are being fought. As Greenpeace campaigns to throw out the nuclear powered baby with the weapons testing bathwater, it seems to have forgotten the 2010 floods which displaced 20 million people on the sub-continent. The Australian Council for International Development reported in May 2011 that millions are still displaced, with 913,000 homes completely destroyed. Millions also have ongoing health issues with rising levels of tuberculosis, dengue fever and the impacts of extended periods of malnutrition. The economic structure of large areas has been devastated along with food and seed stocks. Areas in southern Pakistan are still under water.

This foreshadows the scale of devastation which will be delivered more frequently as global warming bites.

Brown clouds, cooking and climate change

Regardless of what you think about nuclear power, you’d think breathable air would be an environmental issue worthy of Greenpeace’s attention, but biomass cooking is missing from Greenpeace India’s campaign headings.

Biomass cooking isn’t just a consequence of poverty, it feeds into a vicious feedback loop. People, usually women and children, spend long periods collecting wood or cattle dung (see image or full study). This reduces educational opportunities, while pressure on forests for wood and charcoal degrades biodiversity. Infections from smoke, even if not fatal, combine with the marginal nutrition produced by intermittent grain shortages to yield short and sickly lifespans, while burning cattle dung wastes a resource far more valuable as fertiliser.

In 2004, a World Health Organisation report estimated that, globally, 50 percent of all households and 90 percent of rural households cook with biomass. It estimated that 81 percent of Indian households cook with biomass. That figure will have dropped somewhat with the significant growth in Indian power generation over the past decade, but will still be high.

Biomass cooking isn’t only a health issue, but a significant player in climate change. Globally, the black carbon in the smoke from over 3 billion people cooking and boiling water daily with wood, charcoal or cattle dung forms large brown clouds with regional and global impacts.

Maharashtra’s nuclear plans

Apart from a reliable food supply, the innovation that most easily distinguishes the developed and developing world is electricity. It’s the shortage of this basic commodity that kills those 256,000 Indian children annually. Electric cooking is clean and slices through the poverty inducing feedback loop outlined above. Refrigeration reduces not just food wastage but also food poisoning.

If you want to protect forests and biodiversity as well as children in India (and the rest of the developing world), then electricity is fundamental. Higher childhood survival is not only a worthy goal in itself, but it is also critical in reducing birthrates.

Apart from a Victorian-sized coal-fired power supply, the 112 million people of Maharashtra also have the biggest nuclear power station in India. This is a cluster of two older reactors and two newer ones opened in 2005 and 2006. The newer reactors were constructed by Indian companies and were completed inside time and below budget. The two old reactors are relatively small, but the combined power of the two newer reactors is nearly a giga watt. India has a rich mathematical heritage going back a thousand years which underpins a sophisticated nuclear program. Some high-level analytic techniques were known in India hundreds of years before being discovered in Europe.

India has another, much bigger, nuclear power station planned for Maharashtra. This will be half a dozen huge 1.7 GW French EPR reactors at Jaitapur, south of Mumbai. On its own, this cluster will surpass the entire current output of the state’s coal fired power stations. The project will occupy 968 hectares and displace 2,335 villagers (Wikipedia). How much land would solar collectors occupy for an Andasol-like concentrating solar thermal system? About 40 times more land, and they would either displace something like 80,000 people or eat into India’s few wildlife habitats.

If Greenpeace succeeds in delaying the Jaitapur nuclear plant, biomass cooking in the area it would have serviced will continue together with the associated suffering and death of children. It’s that simple. Greenpeace will have direct responsibility no less than if it had bombed a shipment of medical supplies or prevented the decontamination of a polluted drinking well.

Jaitapur and earthquakes

In the wake of the reactor failures at Fukushima which killed nobody, Greenpeace globally and Greenpeace India are redoubling their efforts to derail the new Jaitapur nuclear plant. The Greenpeace India website (Accessed 9th May) carries a graphic of the Fukushima station with covering text:

The Jaitapur nuclear plant in India is also in an earthquake prone zone. Do we want to take the risk? The people of Jaitapur don’t.

The Greenpeace site claims that the chosen location for the Jaitapur power plant is in Seismic Zone 4 with a maximum recorded quake of 6.3 on the Richter scale. Accepting this as true (Wikipedia says it’s Zone 3), should anybody be afraid?

“Confident” and “relaxed” are far more appropriate responses for anybody who understands the Richter scale. It’s logarithmic. Base 10.

Still confused? Each whole unit on the scale corresponds to 10 times the ground shaking and roughly 32 times the energy released. So a quake of magnitude 7 shakes the ground 10 times harder than one of magnitude 6; a magnitude 8 shakes it 100 times harder; and a magnitude 9, like Japan’s monster on March the 11th, shakes it a thousand times harder and releases tens of thousands of times more energy than a magnitude 6. The 40 year old Fukushima reactors came through this massive quake with damage but no deaths. The reactors shut down as they were designed to, and the subsequent problems, still fatality free and caused primarily by the tsunami, would not have occurred with a more modern reactor. We haven’t stopped building large buildings in earthquake zones because older designs failed.
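
To make the scaling concrete, here is a minimal sketch (my own illustration in Python, not from the original post) of the relative ground motion and energy release between magnitudes:

```python
def relative_shaking(m1, m2):
    """Ratio of ground-motion amplitude between magnitudes m1 and m2 (Richter is log10)."""
    return 10 ** (m1 - m2)

def relative_energy(m1, m2):
    """Approximate ratio of energy released; energy scales as roughly 10^(1.5 * magnitude)."""
    return 10 ** (1.5 * (m1 - m2))

for big in (7, 8, 9):
    print(f"M{big} vs M6: {relative_shaking(big, 6):,.0f}x shaking, "
          f"{relative_energy(big, 6):,.0f}x energy")
```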

Steep cliffs and modern reactor designs at Jaitapur will mean that tsunamis won’t be a problem. All over the world people build skyscrapers in major earthquake zones. The success of the elderly Fukushima reactors in the face of a monster quake is cause for relief and confidence, not blind panic. After all, compared to a skyscraper like Taipei 101, designing a low profile building like a nuclear reactor which can handle earthquakes is a relative doddle.

Despite being a 10 on the media’s self-proclaimed Richter scale, subsequent radiation leaks and releases at Fukushima will cause few if any cancers. It’s unlikely that a single worker will get cancer, let alone any of the surrounding population. This is not even a molehill next to the mountain of cancers caused by cigarettes, alcohol and red meat. The Fukushima evacuations are terrible for the individuals involved but even 170,000 evacuees pales beside the millions of evacuations caused by increasing climate based cataclysms.

Greenpeace India haunted by a pallid European ghost

Each year that the electricity supply in Maharashtra is inadequate, some 23,000 children under the age of 5 will die. They will die this year. They will die next year. They will keep dying while the electricity supply in Maharashtra is inadequate. While the children die, their parents will mourn and continue to deplete forests for wood and charcoal. They will continue to burn cattle dung and they will have more children.

A search of the Greenpeace India web pages finds no mention of biomass cooking. No mention of its general, environmental, climate or health impacts. But there are 118 pages referencing Chernobyl.

At Chernobyl, 237 people suffered acute radiation sickness with 28 dying within 4 months and another 19 dying between 1987 and 2006. As a result of the radiation plume and people who were children at the time drinking contaminated milk, there were 6,848 cases of thyroid cancer between 1991 and 2005. These were treated with a success rate of about 98% (implying about 140 deaths). Over the past 25 years there have also been some thousands of other cancers that might, or might not, have been caused by Chernobyl amongst the millions of cancers caused by factors that Greenpeace doesn’t seem the least worried by, things like cigarettes, alcohol and red meat.

On the other hand, each year that India’s electricity supply is inadequate will see about 256,000 childhood deaths. As an exercise, readers may wish to calculate the number of Indian children who have died due to inadequate cooking fuels over the past 25 years and compare it with the 140 children who died due to the Chernobyl accident. Every one of those Indian deaths was every bit as tragic as every one of those Chernobyl deaths.
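
For readers wanting to run the suggested exercise, here is a back-of-envelope sketch using the figures quoted above (my own illustration):

```python
annual_cooking_smoke_deaths = 256_000   # Indian children under 5, per year (figure quoted above)
years = 25
chernobyl_child_deaths = 140            # implied by 6,848 thyroid cancers at a ~98% cure rate

cooking_total = annual_cooking_smoke_deaths * years
print(f"Cooking smoke, 25 years: ~{cooking_total:,} child deaths")
print(f"Chernobyl (children):    ~{chernobyl_child_deaths}")
print(f"Ratio: roughly {cooking_total // chernobyl_child_deaths:,} to 1")
```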

Greenpeace India is dominated by the nuclear obsession of its parent organisation. On the day when the Greenpeace India blog ran a piece about 3 Japanese workers with burned feet, about 700 Indian children under 5 died from cooking stove smoke. They didn’t get a mention on that day, or any other.

Why is Greenpeace India haunted by this pallid European ghost of an explosion 25 years ago in an obsolete model of reactor in Ukraine? Why is Greenpeace India haunted by the failure of a 40 year old Fukushima reactor without a single fatality? This is a tail wagging not just a dog, but the entire sled team.

Extreme scenarios

It’s time Greenpeace India looked rationally at Indian choices.

Should they perhaps copy the Germans, whose 15 year flirtation with solar power hasn’t made the slightest dent in their fossil fuel use? (Note 2) It may simply be that the Germans are technologically incompetent and that things will go better in India. Perhaps the heirs of Ramanujan will succeed where the heirs of Gauss have failed. Alternatively, should India copy the Danes, whose wind farms can’t supply even half the electricity of a tiny country of 5.4 million?

India’s current electricity sources. Cooking stoves not included! ‘Renewables’ are predominantly biomass thermal power plants and wind energy, with some solar PV.

India is well aware that she only has four or five decades of coal left, but seems less aware, like other governments, that atmospheric CO2 stabilisation must be at 350 ppm, together with strict reductions in short-lived forcings like black carbon and methane, and that these constraints require her, like Australia and everybody else, to leave most of that coal in the ground. But regardless of motivation, India needs both a rebuild and expansion of her energy infrastructure over the next 50 years.

Let’s consider a couple of thumbnail sketches of two very different extreme scenarios that India may consider.

The first scenario is to phase out all India’s coal, oil and gas electricity generation facilities and replace them with nuclear. Currently these fossil fuel facilities generate about 900,000 GWh (giga watt hours) of electricity. Replacing them with 1,000 nuclear reactors at 1.7 GW each will generate about 14 million GWh annually. This is about 15 times the current electricity supply and roughly similar to Victoria’s per capita electricity supply. It’s a fairly modest target because electricity will be required to replace oil and gas in the future. I also haven’t factored in population growth in the hope that energy efficiency gains will compensate for population growth and also with confidence that electrification will reduce population growth. Nevertheless, this amount of electricity should be enough to catapult India into the realms of the developed world.
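
As a rough check of these numbers, here is a minimal sketch; the 0.9 capacity factor is my assumption, consistent with Note 2:

```python
reactors = 1_000
capacity_gw = 1.7             # per reactor
capacity_factor = 0.9         # assumed, as in Note 2
hours_per_year = 24 * 365

annual_gwh = reactors * capacity_gw * hours_per_year * capacity_factor
current_fossil_gwh = 900_000  # current fossil fuel generation quoted above

print(f"Nuclear scenario output: ~{annual_gwh / 1e6:.1f} million GWh per year")
print(f"Multiple of current supply: ~{annual_gwh / current_fossil_gwh:.0f}x")
```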

These reactors should last at least 60 years and the electricity they produce will prevent 256,000 children under 5 dying every year. Over the lifetime of the reactors, that is about 15.4 million childhood deaths averted. But this isn’t so much about specific savings as a total transformation of India which will see life expectancy rise to developed world levels, if dangerous climate change impacts can be averted and a stable global food supply is attained.

Build the reactors in groups of 6, as is proposed at Jaitapur, and you will need to find 166 sites of about 1000 hectares. The average density of people in India is about 3 per hectare, so you may need to relocate half a million people (3000 per site). This per-site figure is close to the actual figure for Jaitapur.

There are currently over 400 nuclear reactors operating world wide and there has been one Chernobyl and one Fukushima in 25 years. Nobody would build a Chernobyl style reactor again, but let’s be really silly and presume that over 60 years we had 2 Chernobyls and 2 Fukushimas in India. Over a 60 year period this might cost 20,000 childhood cancers with a 98% successful treatment rate … so about 400 children might die. There may also be a few thousand adult leukemias easily counterbalanced by a vast amount of adult health savings I haven’t considered.

The accidents would also result in 2 exclusion zones of about 30 kilometers in radius. Effectively this is 2 new modestly sized wildlife parks. We know from Chernobyl that wildlife will thrive in the absence of humans. With a 30 km radius, each exclusion zone wildlife park would occupy about 282,743 hectares.

If you are anti-nuclear, this is a worst case scenario. The total transformation of India into a country where children don’t die before their time in vast numbers.

This is a vision for India that Greenpeace India is fighting tooth and nail to avoid.

As our alternative extreme scenario, suppose India opted for concentrating solar thermal power stations similar to the Spanish Andasol system to supply 14 million GWh annually. Each such unit supplies about 180 GWh per year, so you would need at least 78,000 units with a solar collector area of 3.9 million hectares, equivalent to 13 of our hypothesized exclusion zone wildlife parks from the accidents. But, of course, these 3.9 million hectares are not wildlife parks. I say “at least 78,000” units because the precise number will depend on matching the demand for power with the availability of sunshine. Renewable sources of energy like wind and solar need overbuilding to make up for the variability and unpredictability of wind and cloud cover. The 78,000 Andasol plants each come with 28,000 tonnes of molten salt (a mix of sodium nitrate and potassium nitrate) at 400 degrees centigrade which acts as a huge battery, storing energy when the sun is shining for use when it isn’t. Local conditions will determine how much storage is required. The current global production of ordinary sodium chloride is about 210 million tonnes annually. Producing the 2.1 billion tonnes of special salt required for 78,000 Andasols will be difficult, as will the production of steel and concrete. Compared to the nuclear reactors, you will need about 15 times more concrete and 75 times more steel.
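
A rough check of the solar land arithmetic, using the Andasol-like figures quoted above (the 50 hectares per plant is implied by the totals, and the exclusion zone comparison assumes a 30 km radius circle):

```python
import math

target_gwh = 14_000_000          # annual supply target from the nuclear scenario
gwh_per_andasol = 180            # per plant per year (figure quoted above)
hectares_per_andasol = 50        # implied by 3.9 million ha across ~78,000 plants

plants = math.ceil(target_gwh / gwh_per_andasol)     # the post rounds this to "at least 78,000"
collector_ha = plants * hectares_per_andasol

# One hypothetical 30 km radius exclusion zone, for comparison (1 km^2 = 100 ha)
exclusion_zone_ha = math.pi * 30 ** 2 * 100

print(f"Plants needed:    ~{plants:,}")
print(f"Collector area:   ~{collector_ha / 1e6:.1f} million hectares")
print(f"Equivalent parks: ~{collector_ha / exclusion_zone_ha:.1f}")
```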

Build the 78,000 Andasols in groups of 78 and you have to find 1,000 sites of about 4,000 hectares. Alternatively you could use 200 sites of 20,000 hectares. The average density of people in India is over 3 per hectare, so you may need to relocate perhaps 12 million people. If you were to use solar photovoltaics in power stations (as opposed to rooftops), then you would need more than double the land (Note 4) and have to relocate even more people.

Sustainability

In a previous post, I cited an estimate of 1 tonne of CO2 per person per year as a sustainable greenhouse gas emissions limit for a global population of 8.9 billion. How do our two scenarios measure up?

A current estimate of full life cycle emissions from nuclear power is 65 g/kWh (grams per kilowatt-hour) of CO2, so 14 million GWh of electricity shared between 1.4 billion Indians is about 0.65 tonnes per person per annum, which allows 0.35 tonnes for food and other non-energy greenhouse gas emissions. So not only is it sustainable, it’s in the ball park of the figure we will all have to live within.
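
The per-capita arithmetic, as a minimal sketch using the figures quoted above:

```python
lifecycle_g_per_kwh = 65          # nuclear, full life cycle (figure quoted above)
annual_gwh = 14_000_000           # scenario electricity supply
population = 1.4e9                # Indians sharing that supply

kwh_per_year = annual_gwh * 1e6                        # 1 GWh = 1,000,000 kWh
tonnes_co2 = kwh_per_year * lifecycle_g_per_kwh / 1e6  # grams -> tonnes

print(f"Total emissions: ~{tonnes_co2 / 1e6:.0f} Mt CO2 per year")
print(f"Per capita:      ~{tonnes_co2 / population:.2f} t CO2 per year")
```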

The calculations required to check if this amount of electricity is sustainable from either solar thermal or solar PV are too complex to run through here, but neither will be within budget if any additional fossil fuel backup is required. Solar PV currently generates about 100 g/kWh (p.102) under Australian conditions, so barring technical breakthroughs, is unsustainable, unless you are happy not to eat at all. Solar thermal is similar to nuclear in g-CO2/kWh, except that the required overbuilding will probably blow the one tonne budget.

The human cost of construction time

The relative financial costs of the two scenarios could well have a human cost. For example, more money on energy usually means less on ensuring clean water. But this post is already too long. However, one last point needs to be made about construction time. I strongly suspect that while building 1000 nuclear reactors will be a vast undertaking, it is small compared to 78,000 Andasols. Compare the German and French experiences of solar PV and nuclear, or simply think about the sheer number and size of the sites required. The logistics and organisational time could end up dominating the engineering build time. We know from various experiences, including those of France and Germany, that rapid nuclear builds are physically plausible and India has demonstrated this with its own reactor program.

If I’m right and a solar (or other renewable) build is slower than a nuclear build, then the cost in human suffering will easily dwarf anything from any reasonable hypotheses on the number of accidents. Can we put a number on this? If we arbitrarily assume a pro-rata reduction in childhood deaths in proportion to the displacement of biomass cooking with electricity, then we can compare a phase-out over 10 five-year plans with one taking say 11. So at the end of each 5 year plan a chunk of electricity comes on line and the number of cooking smoke deaths drops. At the end of the process the number of deaths from cooking smoke is 0. It’s a decline in a series of 10 large or 11 slightly smaller steps. Plug in the numbers and add up the total over the two time periods and the difference is … 640,000 deaths in children under 5. Construction speed matters.
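
Here is a minimal sketch of that stepwise calculation, as I reconstruct it from the description above:

```python
def total_deaths(plans, annual_deaths=256_000, years_per_plan=5):
    """Child deaths during a phase-out where an equal chunk of electricity arrives at the end of each plan."""
    total = 0
    for k in range(plans):
        fraction_remaining = 1 - k / plans   # biomass cooking still in place during plan k+1
        total += fraction_remaining * annual_deaths * years_per_plan
    return total

fast, slow = total_deaths(10), total_deaths(11)
print(f"10 five-year plans: ~{fast:,.0f} deaths")
print(f"11 five-year plans: ~{slow:,.0f} deaths")
print(f"Difference:         ~{slow - fast:,.0f}")
```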

In conclusion

How do my back-of-an-envelope scenarios compare with India’s stated electricity development goals? According to India’s French partner in the Jaitapur project, Areva, India envisages about half my hypothesized electrical capacity being available by 2030, so a 50 year nuclear build plan isn’t ridiculous provided floods or failed monsoons don’t interfere unduly.

As for the safety issues and my hypothesised accidents, it doesn’t matter much what kind of numbers you plug in as a consequence of the silly assumption of a couple of Chernobyls. They are all well and truly trumped: firstly, by the increase in health for Indian children, secondly by the reforestation and biodiversity gains as biomass cooking declines, thirdly by the reduction in birth rates as people get used to not having their children die, and lastly, by helping us all have a fighting chance of avoiding the worst that climate change might deliver.

It’s time Greenpeace India told its parent organisation to shove off. It’s time Greenpeace India set its own agenda and put the fate of Indian children, the Indian environment and the planet ahead of the ideological prejudices of a parent organisation which has quite simply lost the plot.


Note 1: Nuclear Waste: What about the nuclear waste from a thousand reactors? This is far less dangerous than current levels of biomass cooking smoke and is much more easily managed. India has some of the best nuclear engineers in the business. They are planning thorium breeder reactors which will result in quite small amounts of waste, far smaller and more manageable than the waste from present reactors. Many newer reactor designs can run on waste from the present generation of reactors. These newer reactors are called IFR (Integral Fast Reactor) and details can be found on bravenewclimate.com.

Note 2: German Solar PV: Germany installed 17 GW of Solar photo voltaic (PV) power cells between 2000 and 2010, and in 2010 those 17 GW worth of cells delivered 12,000 GWh of energy. If those cells were running in 24×7 sunshine, they would have delivered 17x24x365 = 148,920 GWh of energy. So their efficiency is about 8 percent (this is usually called their capacity factor). A single 1.7 GW nuclear reactor can produce about 1.7x24x365x0.9 = 13,402 GWh in a year (the 0.9 is a reasonable capacity factor for nuclear … 90 percent). Fossil fuel use for electricity production in Germany hasn’t changed much in the past 30 years, with most of the growth in the energy supply being due to the development of nuclear power in Germany during the late 70s and 80s.
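
A minimal sketch of the capacity factor comparison in this note, using the quoted figures:

```python
def capacity_factor(actual_gwh, nameplate_gw):
    """Actual annual output divided by the output at continuous full power."""
    return actual_gwh / (nameplate_gw * 24 * 365)

german_pv_cf = capacity_factor(actual_gwh=12_000, nameplate_gw=17)
nuclear_gwh = 1.7 * 24 * 365 * 0.9   # one 1.7 GW reactor at a 90% capacity factor

print(f"German PV capacity factor: ~{german_pv_cf:.0%}")
print(f"One 1.7 GW reactor:        ~{nuclear_gwh:,.0f} GWh per year")
```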

Note 3: Giga watts, for non-technical readers: The word billion means different things in different countries, but “giga” always means a thousand million, so a giga watt (GW for short) is a useful unit for large amounts of power. A 100-watt globe takes 100 watts of power to run. Run it for an hour and you have used 100 watt-hours of energy. Similarly, a GWh is a giga watt of power used for an hour, and this is a useful unit for large amounts of energy. If you want to know all about energy units for a better understanding of BNC discussions, here’s Barry’s primer.

Note 4: Area for Solar PV: German company JUWI provides large scale PV systems. Their 2 MW (mega watt) system can supply about 3.1 GWh per year and occupies 2 hectares. To supply a similar amount of energy to Andasol would need 180/3.1 = 58 units occupying some 116 hectares.


January 6, 2011

OzEnergy – The second story

Filed under: Emissions Reduction, Energy Demand, Renewable Energy — buildeco @ 10:27 am

By Barry Brook

The Oz-Energy-Analysis.org project continues to hum away in the background, building momentum.

After much necessary background work, including data collation, website construction, preliminary wheel kicking and a lot of hard thinking (!), we are moving onto some serious analysis and modelling. But scenarios need storylines to hang off. Our first story was about scoping the problem. The second story — reproduced below — is about understanding. This is an exploration framework rather than a real-world proposal. To me, with extensive experience in working with biological systems, the evolutionary approach we take here appeals. See what you think.

Francis and I would appreciate your critical feedback, either in the comments below or on the relevant OzEA page. Please consider both sites. And remember, OzEA is an experiment, with the tea room being a portal into developments. We always welcome your feedback, on any aspect of the site and its outputs.

————–

The Second Story – Understanding the Problem

Introduction

In the beginning was The First Story, followed in recent months by round one development through the menu bar (data, analysis, models…). This story ushers in round two.

To briefly reintroduce OzEA: the big picture is a global need for much increased electricity production as we progress through this century. Much increased fossil fuel use to achieve this is problematic, given that the current human impact on the carbon cycle is widely believed to be altering the climate. While nuclear power is an alternative to coal and gas, issues around Nuclear Power, or the science of Climate Change, are not discussed here. OzEA seeks to be a broad church; we put our energies into empirical, high level and open analysis of how a high penetration of renewable electricity might be achieved in the Australian context.

In this Second Story we adopt ’50% by 2030′ renewable electricity as the basis for ongoing work into 2011. Demand management (smart grids) and system evolution are matters that will be central to the integration of renewables, and these are discussed in what follows. Work through to year’s end is to model the power output from large-scale scenarios of geographically distributed wind and solar power plants. This will provide a solid base for further rational analysis of renewable variability.

The fifty percent renewables by 2030 approach

Adopting 50% renewable penetration by 2030 as a baseline gives structure and coherence to our work plans. In reality Australia is scheduled to have around 20% renewable electricity in 2020 (predominantly from wind), driven by the federal government’s LRET scheme. The purpose of a 50% target is to drive analysis and thinking, rather than to serve as an engineering proposal.

While wind is currently the most mature and economical of the large scale renewable technologies, its variability will eventually make further deployment self-limiting; more wind farms = more electricity when the wind blows => depressed prices in the wholesale market. In turn, electricity from solar power can become a more valuable renewable source. The key focus is thus to examine configurations of wind and solar that reduce variability and usefully match with demand. (Note: solar = large-scale concentrated solar thermal (CSP); we hold photovoltaics to the margin for now).

Working at the hour-to-hour level we use historical wind and solar data to model ~10 GW average of electricity supply from these sources. Combined with historical demand data, this allows calculation of a ‘demand remainder’ (demand minus renewable supply). The first, naive, approach is to supply this remainder by conventional generators (with a little support from available pumped storage hydro), and to assume that Smart Grid Demand Management does no more than smooth out sub-hour variability and keep demand peaks from growing above current levels.

The naivety above is to suppose demand data from the past can represent demand in the years to come. While past weather data is a good template for the future, the demand can and will change as the electricity system grows and evolves. Hence, the ‘demand remainder’ that we calculate will require a more thoughtful interpretation than simply power required from fossil fuel generators.
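
As a minimal sketch of the ‘demand remainder’ calculation described above, with hypothetical placeholder series standing in for the historical hourly data (this is not OzEA’s actual code or data):

```python
import numpy as np

# Hypothetical hourly series (GW) standing in for one year of historical data (placeholders only)
hours = 8760
rng = np.random.default_rng(0)
demand = 25 + 5 * rng.random(hours)                       # placeholder demand profile
wind = 12 * rng.random(hours)                             # placeholder wind output, ~6 GW average
solar = np.clip(12 * np.sin(np.linspace(0, 365 * 2 * np.pi, hours)), 0, None)  # crude day/night shape

remainder = demand - (wind + solar)   # power still needed from conventional generators

print(f"Average renewable supply: {(wind + solar).mean():.1f} GW")
print(f"Average demand remainder: {remainder.mean():.1f} GW")
print(f"Peak demand remainder:    {remainder.max():.1f} GW")
```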

Supply and demand; transmission and distribution

These four components provide a template for understanding our electricity system. The transmission network is the backbone that connects region to region, state to state, connecting the power plants that supply electricity. This electricity is taken at substations and fed, at lower voltages, into distribution networks (the poles and wires on our residential streets). Around 25% of overall demand is residential, with the commercial and industrial sectors making up the balance.

The market operators ensure that, with very high probability, the system remains in balance from second to second; i.e., that supply meets demand. While electricity can be stored economically in the form of pumped storage hydro, this capacity is limited and mostly demand is met by ramping supply up and down as needed (see The Electricity System discussion).

Peak loads, especially driven by air-conditioner use, present a particular problem for the electricity system. While residential use is one quarter of demand as a blunt average, it is a much higher portion on hot afternoons. Distribution networks in particular can be pushed to their limits, and system planners are faced with the prospect of costly upgrades to these networks. Peaking loads create a real need for mechanisms that can curtail or shift demand, otherwise expensive upgrades are needed in order to provide a much higher network capacity – to a level that is only needed for a small fraction of the year.

Analysis of large-scale renewable integration is necessarily intertwined with peak demand and network development issues, as these pressures are driving system evolution now. From a renewables perspective, the pressure to manage extremes on the demand side crosses over with managing variation on the supply side. This point bears reading again.

Accounting the variability

Power from wind and solar can be very variable; sometimes these sources produce little if any power at all. This is an enormous impediment to making large investments in these renewable power sources. At the simplest level renewable energy can be accounted as free fuel. That is, the system continues to require the same number of coal and gas power plants as before, to cover the times when the wind isn’t blowing and the sun isn’t shining. The saving is on the fuel (and any associated emissions), however, the cost of the renewable infrastructure is much greater than the fuel saved.

Multiple wind and solar farms at different sites will to some extent smooth out the variability. A more involved reckoning of the supply capacity can be had by engaging in statistical calculations of ‘Capacity Credit’. This can be informative, but is only a rough cut at quantifying what is really of interest.

We explicitly model the electricity supply that given Wind and Solar scenarios would produce. This ‘renewable electricity’ time series can be examined in conjunction with the demand that existed over the same time period, and so give the ‘demand remainder’ on an hour-by-hour basis. Analysis of this demand remainder is superior because it empirically captures relationships between electricity demand and renewable supply (e.g. solar on hot days).

Development of a 50% renewables system can only occur as an evolution, and one that includes the demand patterns. Explicit modelling of renewable supply in the context of today’s demand profile shines light directly on the issues and opportunities that demand side evolution presents.

Smart grids and demand management – a necessary detour

As retail consumers it costs you or me the same to use our air conditioners (or heating) regardless of whether the wholesale price is $10 or $12,500 a MWh. Residential demand is disconnected from the supply market, except as a long-term average. This demand inelasticity is a problem crying out for solutions.

Enter stage left, Smart Grids and Smart Meters.

While these terms encompass various aspects, here we focus briefly on: (i) load control, and (ii) interval meters & Time of Use pricing; see the Demand Management discussion page for more extensive comments.

Through a ‘smart meter’, or perhaps simply via the internet, a control hub in your house can manage some appliances in an intelligent way. A pool pump, for example, would be off when the network was struggling. Water heating is the classic ‘off-peak’ appliance. More complicated, but essential, is a mechanism for the compressors (but not fans) of air-conditioners to be switched out for a few minutes when need be, and for the thermostat to ride modestly and intelligently across demand peaks.

Your motivation for smart operation of such appliances is simple: Time of Use metering. At peak times electricity will be more expensive; on a windy night it will be cheap. So-called “Interval Metering” is a foundational functionality for a smart meter. While residential time-of-use pricing requires careful implementation, it should save you money if your use at peak times is modest. What might be called the “Eco-Saver” electricity plan will allow you, essentially, to withdraw subsidy from those who are punishing the system at peak times by running four, perhaps inefficient, air conditioners flat out.
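
To illustrate the incentive, here is a toy comparison of a flat tariff against a time-of-use tariff; the prices and usage figures are invented for illustration and are not actual tariffs:

```python
# Hypothetical daily usage (kWh) and tariffs (cents/kWh); purely illustrative numbers
usage = {"peak": 4, "shoulder": 8, "off_peak": 10}         # kWh per day in each period
tou_price = {"peak": 45, "shoulder": 22, "off_peak": 12}   # cents/kWh under time-of-use pricing
flat_price = 25                                            # cents/kWh under a flat tariff

flat_bill = sum(usage.values()) * flat_price
tou_bill = sum(kwh * tou_price[period] for period, kwh in usage.items())

print(f"Flat tariff:  ${flat_bill / 100:.2f} per day")
print(f"Time of use:  ${tou_bill / 100:.2f} per day")
print(f"Daily saving with modest peak use: ${(flat_bill - tou_bill) / 100:.2f}")
```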

Smart grids and metering involve a world of detail at both the technical and policy levels. There is discussion and debate. In Victoria interval meters are being rolled out state-wide right now; in South Australia they are resisted. Digging into these issues became a distraction at OzEA, and for now we pull back to a watching brief. The key point is that development of technologies and interfaces for intelligent load control will lay the very foundations for further levels of demand side elasticity.

Big ideas: the ecology of energy and the variability gambit

Large, complex, efficient systems are rarely imposed through a straightforward engineering plan, where the steps required are foreseen at the outset. The scale, efficiency and sophistication of our current fossil fuel based electricity system would seem fantastical to those who hauled coal in primitive mining operations at Ipswich or Collie a hundred years ago.

The variability problem can be engineered away with high levels of supply redundancy and proven but expensive or inefficient storage mechanisms. What can be done, and what responsible politicians, policy makers, board rooms and bankers will do, are two entirely different things. So far there is no ‘killer app’ on either the supply side (e.g. proven geothermal), or the demand side (e.g. cheap storage). But ‘killer apps’ can be weeds in an ecological context; evolution is not a one-step process nor is it fixed on only one possible outcome. Rather, many small steps act in concert to alter the very fabric of the system from which the next batch of little steps proceeds.

Starting with the system we have now, we ask: “What will happen as more renewable energy is included into the system?” (i.e. how might the system evolve, and what are the selective pressures that will induce change?)

With supply rendered less controllable by the addition of large-scale renewables, and with demand made more elastic in response to the cost of supply, the electricity market develops new niches for balancing supply and demand. Attention is too often focused on handling the occasional lean times (when the electricity price becomes high and dispatchable backup is required), but the real evolution will occur in the frequent plentiful times that come with large scale renewables; this presents enormous possibilities. With abundant electricity we can potentially displace more expensive transport fuels, and otherwise have wealth-producing industries and jobs spring up in the niches that a suitable energy ‘ecology’ (market) would provide.

Assuming we become a high penetration renewable country, to what extent will we look back in 30 or 50 years and see that the value of a flexible and frequently abundant system outweighs the costs of maintaining ‘backup’ to cover the gaps? Thinking about this question requires looking past the next immediate roadblock.

The idea here, what we call the Variability Gambit, is to postulate that in time the variability problem is soluble, especially with a deepening of the electricity market and associated integration with the energy sector more generally.

The monster under the bed – how much will it cost?

At the simplest level (straight cost per MWh of electricity produced) the rule of thumb is wind power at twice the cost of coal power, while CSP is around four times as expensive — some forward estimates are more generous. Wind turbines are a mature technology and so the costs here can only be expected to reduce on a modest schedule (maybe a few percent a year), while the less-refined CSP might yet undergo stronger improvements as increased deployment occurs. A tax on carbon emissions would add to the scales, so roughly and at this basic level, costs are seen to be an uphill journey, but a gradual rather than a hopeless one.

The cost and engineering of large-scale renewable plant must include any associated transmission infrastructure. Further, the variability, and consequent need for storage or backup, introduces additional costs that make the task of an economic reconciliation more difficult again. Today’s renewable technologies, placed within today’s systems, are not cost competitive as a fit-for-service means of replacing coal and gas.

Consider, as a thought experiment, imposing large scale renewables on the Australian system NOW, at the same time decommissioning our coal power assets and limiting the use of gas turbines (perhaps through a very high carbon price). Broader economic damage and electoral backlashes aside, lucrative opportunities would arise because of extreme variations in the wholesale electricity price. Storage of electricity using hydrogen or compressed air (as examples) would become profitable. Demand management technologies would develop rapidly. Much innovation would occur. After some decades of expensive electricity the system would again evolve into a form with cheap and plentiful electricity.

The question is, can we achieve much the same ends (more gradually) without draconian impositions and economic carnage? Forging that path is the task at hand, and the supply variation of renewables may itself be our most potent tool.

Open Science and the web-site

Doing Open Science (not just talking about it) is a parallel purpose of the OzEA project. In the beginning we imagined lots of community involvement in doing the Science, and now have more nuanced expectations. Certainly many valuable comments have been made, including a handful of really substantive contributions. We look forward to more of these as we knuckle down into 2011. Yet, this is not a blog, and we do not seek comment for the sake of comment, nor provide an arena for generic argument. Rather, the commenting system is largely a virtual lab-book that is open to all; it is a major part of our record keeping. And of course we continue to welcome critical comment, encouragement, focused questions, and the sharing of knowledge and experience.

Breadth first is the approach to analysis we take here, and so some of our analysis is not as sophisticated or refined as the more specialised work of others. What matters is whether an analysis is sufficient for the purposes it is put to. We welcome comparison and criticism in this regard, and are always grateful for nudges and prods into the issues and complications that careful work needs to take into consideration.

Concluding remarks

To mindfully anticipate the future electricity system is not straightforward. The basic difficulty in looking ahead multiple decades is that while some aspects are reasonably predictable, any number of less likely, and even improbable, technological and sociological developments could have significant impacts – if they come to pass. And some of these unlikely events will occur (play enough hands of poker and you will get a royal flush).

Moving into Round Two of data, analysis and modelling, we focus on the variability of supply that comes with high penetration renewables (wind and solar). While capturing the supply variability is a lot of work, it is also a relatively straightforward number crunching exercise. The real significance will be in the ‘demand remainder’, as so many of us seek to explore the implications, opportunities and consequences of increasing the level of renewable supply into the Australian electricity market.

A derisive term, “The Fake Fire Brigade”, has arisen to describe those seen as too optimistic or woolly in their claims for large-scale renewables. Here at OzEA we take a positivistic view; however, we are nobody’s fire brigade. The wool-free version is simple enough: into the medium term at least, an Australian electricity system with an increasing penetration of renewables will continue to be underpinned by significant (fossil) fuelled supply, while demand side evolution will provide a more elastic response to supply variability. The rate of renewable rollout will be limited by real world costs, and driven by government support.

The importance of demand management to renewable integration is at once tenuous and profound. At the tenuous end, the ability to make modest adjustments to demand, especially in high load situations, provides some assistance with a generation mix that includes renewables. But it does not provide much help when there is little wind for several days in a system reliant on wind power. At the profound end are the pathways opened up for electricity system evolution in the decades to come as devices, houses, industries, suburbs and states interact dynamically with supply.

Implementation of smart grids must be undertaken with due care and forethought. It is easy to speculate about electric (or plug-in hybrid) cars; it is easy to note the long-term sense in houses being intelligently designed for space heating and cooling. It is not hard to see wind and solar power being integrated into the broader energy sector, perhaps via Hydrogen production. While all these points remain vague or speculative, it is simple deduction that building to high penetrations of wind and solar power will involve these sorts of developments.

The question, in the end, is this: can we intelligently and responsibly nurture the necessary evolution in the way our electricity system works? The next step to coherently addressing this question is a solid quantitative grip on the supply variability. As we work this through, it is our goal and commitment to communicate the analysis and its interpretation in as open and useful a way as we can.

December 21, 2010

The effect of cutting CO2 emissions to zero by 2050

Filed under: Climate Change, Emissions Reduction, Global Warming — buildeco @ 10:59 am

Guest post for Barry Brook by Dr Tom M. L. Wigley

 

Tom is a senior scientist in the Climate and Global Dynamics Division of the US National Center for Atmospheric Research and a former Director of the CRU. He is an adjunct Professor at the University of Adelaide. For his list of papers and citations, click here (his h-index is 70!). Tom is also a good friend of mine and a strong supporter of the IFR.

What would happen to CO2 concentrations, global-mean temperature and sea level if we could reduce total CO2 emissions (both fossil and net land-use change) to zero by 2050? Based on the literature that examines possible policy scenarios, this is a virtually impossible goal. The results presented here are given only as a sensitivity study.

To examine this idealized scenario one must make a number of assumptions. For CO2 emissions I assume that these follow the CCSP MiniCAM Level 1 stabilization scenario to 2020 and then drop linearly to zero by 2050. For the emissions of non-CO2 gases (including aerosols and aerosol precursors) I assume that these follow the extended MiniCAM Level 1 scenario (Wigley et al., 2009). The extended Level 1 scenario provides emissions data out to 2300. Note that the Level 1 scenario is the most stringent of the CCSP stabilization scenarios, one that would almost certainly be very costly to follow using traditional mitigation strategies. Dropping CO2 emissions to zero is a much more stringent assumption than the original Level 1 scenario, in which total CO2 emissions are 5.54GtC/yr in 2050 and 2.40GtC/yr in 2100.
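
As a hedged sketch of the assumed CO2 emissions pathway, with a placeholder 2020 value (the actual MiniCAM Level 1 figure for 2020 is not given here; only the linear decline to zero by 2050 is specified):

```python
import numpy as np

# Placeholder 2020 value: ~9 GtC/yr of total CO2 emissions (hypothetical; the actual
# MiniCAM Level 1 figure is not quoted in the post). Linear decline to zero by 2050.
years = np.arange(2020, 2051)
emissions_2020 = 9.0                                     # GtC/yr, assumed for illustration
emissions = emissions_2020 * (2050 - years) / (2050 - 2020)

for y in (2020, 2030, 2040, 2050):
    print(f"{y}: {emissions[y - 2020]:.1f} GtC/yr")
```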

For modeling the effects of this new scenario one must make assumptions about the climate sensitivity and various other model parameters. I assume that the sensitivity (equilibrium warming for 2xCO2) is 3.0C, the central estimate from the IPCC AR4. (Note that the 90% confidence interval for the sensitivity is about 1.5C to 6.0C – Wigley et al., 2009.)

For sea level rise I follow the AR4 and ignore the possible effects of accelerated melt of the Greenland and Antarctic ice sheets, so the projections here are almost certainly optimistic. All calculations have been carried out using version 5.3 of the MAGICC coupled gas-cycle/climate model. Earlier versions of MAGICC have been used in all IPCC reports to date. Version 5.3 is consistent with information on gas cycles and radiative forcing given in the IPCC AR4.

The assumed CO2 emissions are shown in Figure 1.

The corresponding CO2 concentration projection is shown in Figure 2. Note that the MAGICC carbon cycle includes climate feedbacks on the carbon cycle, which lead to somewhat higher CO2 concentrations than would be obtained if these feedbacks were ignored.

Global-mean temperature projections are shown in Figure 3. These assume a central climate sensitivity of 3.0C. Temperatures are, of course, affected by all radiatively active species. The most important of these, other than CO2, are methane (CH4) and aerosols. In the Level 1 scenario used here both CH4 and aerosol precursor (mainly SO2) emissions are assumed to drop substantially in the future. CH4 concentrations are shown in Figure 4. The decline has a noticeable cooling effect. SO2 emissions drop to near zero (not shown), which has a net warming effect.

The peak warming is about 0.9C relative to 2000, which is about 1.7C relative to pre-industrial times. This is below the Copenhagen target of 2.0C – but it clearly requires a massive reduction in CO2 emissions. Furthermore, the warming peak could be significantly higher if the climate sensitivity were higher than 3.0C. For a 3.0C sensitivity, stabilizing temperatures at 2.0C relative to the pre-industrial level could be achieved with much less stringent CO2 emissions reductions than assumed here. The standard Level 1 stabilization scenario, for example, gives a 50% probability of keeping below the 2.0C target.

Figure 5 gives the sea level projection for the assumed scenario. This is a central projection. Future sea level is subject to wide uncertainties arising from uncertainties in the climate sensitivity and in parameters that determine ice melt. As noted above, the projection given here is likely to be an optimistic projection. Note that sea level roughly stabilizes here, at a CO2 concentration of about 320ppm. Less optimistic assumptions regarding the emissions of non-CO2 gases would require a lower CO2 concentration level. Given the unrealistic nature of the assumption of zero CO2 emissions by 2050, this is a graphic illustration of how difficult it would be to stabilize sea level – even at a level more than 20cm above the present level.

Key reference:
T. M. L. Wigley, L. E. Clarke, J. A. Edmonds, H. D. Jacoby, S. Paltsev, H. Pitcher, J. M. Reilly, R. Richels, M. C. Sarofim and S. J. Smith (2009) Uncertainties in climate stabilization. Climatic Change 97, 85-121, DOI: 10.1007/s10584-009-9585-3.

December 15, 2010

SNE 2060 – a multi-source energy supply scenario

Filed under: Climate Change, Emissions Reduction — buildeco @ 8:57 am

by Barry Brook

In this post, I develop a hypothetical multi-energy-supply scenario for global low-emissions electricity in ~2060. The assumed energy mix is 75 % nuclear fission and 25 % non-nuclear sources, with fossil fuel use virtually eliminated except where it is used with carbon capture and storage.

The % annual growth rate (GR) of energy supplied assumes an exponential rate of change from today’s levels over a 50-year period. It is consistent with (actually, better than) the IPCC WG III greenhouse gas emissions reduction targets. World total supply (277 EJ) matches the demand forecast in the previous post.

The future energy mix scenario offered in Table 1 should not be considered a forecast — it is better thought of as a ‘working hypothesis’ (sensu Elliott and Brook, 2007).

Table 1

Nameplate (installed) capacity is approximate, based on average capacity factors of hydro 0.45 (world average for 2006), wind/solar 0.3, other 0.5, biomass 0.85, fossil CCS 0.85, nuclear 0.9. These capacity factors are similar to those generated today, but are only used to estimate the nameplate column of the table above, and don’t affect the EJ supply column.
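
A minimal sketch of how nameplate capacity follows from annual energy and a capacity factor; the nuclear share below (~75% of the 277 EJ total) is my own inference from the text:

```python
SECONDS_PER_YEAR = 8760 * 3600

def nameplate_gw(ej_per_year, capacity_factor):
    """Installed capacity needed to deliver a given annual energy at a given capacity factor."""
    average_gw = ej_per_year * 1e18 / SECONDS_PER_YEAR / 1e9   # J/yr -> average W -> GW
    return average_gw / capacity_factor

nuclear_ej = 0.75 * 277   # ~75% of the 277 EJ world total (my inference from the text)
print(f"Average nuclear capacity: ~{nameplate_gw(nuclear_ej, 1.0):,.0f} GWe")
print(f"Nameplate at a 0.9 CF:    ~{nameplate_gw(nuclear_ej, 0.9):,.0f} GWe")
```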

In this scenario, all existing non-fossil-fuel energy sources are expected to increase, with the highest rates of growth anticipated for wind/solar and nuclear fission. Comparing and contrasting my 2060 scenario with that of Trainer (2010, his Table 1), I (optimistically in all cases) have:

1. Hydro growing by 50% on today’s energy share (similar to Trainer’s 19 EJ)

2. Fossil fuels with CCS increases to a level similar to that of hydro (this is one third of the maximum 51 EJ allocated by Trainer)

3. Biomass and waste used for direct electricity generation increases by 50%; the majority of crop energy is used to supply 15 EJ of ethanol

4. Wind and solar output collectively expands 40-fold on today’s levels

5. Nuclear fission growth is then used to balance the total demand. This results in a 21-fold increase on today’s share of ~310 GWe average

The final ratio of 75% nuclear fission to 25% non-nuclear energy sources is similar to the national domestic electricity mix of France today (but the French are, of course, still reliant on oil, and haven’t yet taken the synfuel production step that will be necessary as oil/gas supplies run down).

This general forecast is also consistent with the conclusions of Jean-Baptiste and Ducroux (2003). In reality, there may be considerably greater or lesser supply from any of these low-carbon energy sources, but this depends on a broad range of complex factors, including carbon prices, subsidies and tariffs, energy security considerations, fossil fuel supply constraints, and technological, logistical, economic and socio-political circumstances (Hoffert et al., 2002).

The Table 1 scenario is simply offered for evaluation, as one possible mix which is able to meet a number of first-order logical, plausibility and sustainability criteria. Note that it is less demanding than the more pessimistic TR2 scenario described here. I will use the nuclear supply value of 7,300 GWe nameplate capacity (6,500 GWe average) for all future projections in the SNE2060 series.

Having arrived at what I consider a scientifically justifiable scenario, I will now turn back to modelling. The next few posts in the SNE2060 series will look at a couple of possible pathways for that 21-fold increase in nuclear power, incorporating a synergy of thermal reactors and Gen IV alternatives. I will also consider some further constraints on this roll-out, and the carbon mitigation implications of all this energy re-tooling.

Footnote: China has once again revised upwards its 2020 target for nuclear energy. It now stands at 112 GWe, up from the earlier target of 70 GWe (which itself was a positive revision of an earlier 40 GWe goal). This is relative to the 11 GWe of nuclear capacity operating today. China is most definitely moving quickly – as fast as they can possibly go — and I suspect they still haven’t shown their full hand. What their 2030 goal might now be is anyone’s guess (mine, for what it is worth, is ~500 GWe).

October 7, 2010

Challicum Hills wind farm and the wettest September on record

Filed under: Emissions Reduction, Renewable Energy — buildeco @ 6:14 pm

I was recently on annual leave and spent a few days on a motoring tour (with my parents and my two boys, Billy and Eddy, aged 11 and 8) around western Victoria — Castlemaine, Ararat, Lake Fyans, the spectacular Grampians National Park. I was touring around Hamilton and surrounds (Merino, Tahara, Branxholme), where I lived 25 years ago, for a few years. Not much has changed! It’s still the beautiful, rolling green country of Australia Felix that I remember from my boyhood.

We were in Ararat on Friday 1 Oct and took the opportunity to visit the 53 MWe (peak) Challicum Hills wind farm. Here is a picture of me out the front of it.

Barry Brook at the edge of the 53 MWe (peak) Challicum Hills wind farm in western Victoria, 1 October 2010

The turbines were spinning gently (well, most of them), but the breeze was very light and that was reflected in the low capacity factor on that day, as reported on Andrew Miskelly’s “Wind Farm Performance” website (which graphically depicts performance of wind farms connected to the electricity grid in south-eastern Australia over a 24-hour period, showing output as a percentage of installed capacity and actual output in megawatts):

I was there at about 10:30 am, during one of those little humps in output. The wind farm, snaking along a ridge line, consists of 35 NEG NM 64 wind turbines, each with a 64 m rotor diameter, 68 m hub height, and a peak output of 1.5 MWe. The CHWF was completed in 2003 at a cost of $76 million. It was fun to see wind farm sites up close that have previously only been names in an analysis data frame! [hint: The Broome to Cooktown Challenge is still looking for input over on Oz-Energy-Analysis.org]

On a climate note, the Australian Bureau of Meteorology has released a new Special Climate Statement. From BoM’s Dr. Karl Braganza (Manager Climate Monitoring, National Climate Centre), it…

details recent high rainfall across Australia in 2010, including record rainfall in northern Australia, and reviews the prolonged dry conditions experienced in south-east Australia and in the south-west of Western Australia.

The end of September 2010 marks 14 years since the start of a very long meteorological drought in south-east Australia. In the south-west of Western Australia, similarly dry conditions have been in place over the past 14 years, while a longer term drying trend has been observed since the 1970s.

The prolonged dry spell has been characterised by a combination of recurrent meteorological drought (short-term dry “spells”), less autumn and winter rainfall in most years, and an absence of very wet periods.

Recent, widespread, above-average rainfall across much of Australia has alleviated short-term (month to seasonal) dry conditions. This rainfall has been associated with the breakdown of the 2009 El Niño and the development of a moderate to strong La Niña event in 2010.

The recent rainfall has not ended the long-term rainfall deficiencies still affecting large parts of southern Australia. While some parts received well above-average rainfall, most notably in the Murray-Darling Basin, drought-affected regions in the far south-east of the continent have experienced near-normal conditions. The south-west has continued its run of very much below-average rainfall, adding further to the long-term drying trend in this region.

You can download the PDF of the full statement here.

Billy, Barry and Eddy, bush walking in the Grampians National Park

This is an interesting addition for me, coming on the back of the recent blog post I wrote, “Do the recent floods prove man-made climate change is real?“.

Whilst offline, I’ve been tinkering further with the SNE2060 modelling and background work (on the assumptions and outcomes), and will put a couple more posts up on this topic during this week.

But for now, I’m back to my holidays. We visited the crater lake of the extinct Mt Eccles volcano (the site of Australia’s most recent mainland eruption, approximately 8,000 years ago), then down to the Great Ocean Road, on the winding way back to Melbourne.

After all, I had to get back to Adelaide in time for Wednesday, where I spoke at the RiAus event “Thinking Critically About Sustainable Energy #4: A Nuclear Future“:

With an urgent need to reduce our reliance on fossil fuels and the global demand for energy rising exponentially, might nuclear energy be the only non-carbon-emitting technology capable of meeting the world’s requirements?

The nuclear industry’s image has been compromised by the threat of nuclear proliferation, reactor malfunctions and the storage of radioactive waste. However, today’s proponents argue that improvements in reactor design have made them safer as well as more fuel-efficient and cost-competitive to build, compared with coal plants.

With renewable energy sources still unable to provide enough baseload power, is nuclear energy our best option for reducing carbon emissions? Will the next generation of reactors make nuclear the clean, green option?

This event is the fourth of six public forums aimed at providing a comprehensive examination of sustainable energy technologies and a critical evaluation of their potential for reducing carbon emissions.

Presented in association with the Centre for Energy Technology, the University of Adelaide’s Environment Institute and the Institute for Mineral and Energy Resources.

 

September 21, 2010

Fast reactor future – the vision of an atomic energy pioneer

Filed under: Emissions Reduction, Nuclear Energy — buildeco @ 1:34 pm

by Barry Brook

REACTOR PIONEERS — Some of those who worked on EBR-I posed in front of the sign chalked on the wall when EBR-I produced the first electricity from atomic power. Koch is front row, second from right.

When I was in Idaho Falls in August 2010, one of the places I visited was the Experimental Breeder Reactor I. It’s now a publicly accessible U.S. National Historic Landmark, and has some incredible experimental X-39 atomic aircraft engines sitting out the front (see little inset photo). I’ll talk more about this visit in a later BNC post, but one thing is relevant here. That is, there is a blackboard (now preserved permanently under glass) which includes the chalked signatures of the original EBR-I research crew. One of the names on that list is that of a young engineer called Leonard Koch (see the photo of him standing at that same board, almost 60 years before I looked at it!).

Well, Len, at 90, is still going strong, and recently sent the IFRG a speech he gave in 2005 in Russia on fast reactors and the future. It’s a terrific essay, and not available anywhere on the internet (until now — I transcribed his scanned copy). Len kindly gave me permission to post it here on BNC. He also said to me:

I am pleased that you visited EBR-I. It is pretty primitive compared to the very sophisticated plants that are being built today, but it got things started. The plane the Wright Brothers built was even more primitive but they got the airplane business started. The key is to get things started and persist.

Enjoy.

—————————–

Brief bio: A retired “pioneer”, Leonard Koch is probably the oldest continuing supporter of, and participant in, the development of the original concept of nuclear power. He joined Argonne National Laboratory in early 1948 and participated in the development, design, construction and early operation of EBR-I as the Associate Project Engineer. He was responsible for the development, design and construction of EBR-II as the Project Manager. He wrote the book, “EBR-II”, published by the American Nuclear Society, which describes that activity. More here.

Nuclear energy can contribute to the solution of global energy problems

Leonard J. Koch, winner of the 2004 Global Energy International Prize.

This paper was originally presented at the Programme of International Symposium “Science and Society”, March 13, 2005, St. Petersburg, Russia, the year after his prize was awarded, in recognition of the 75th birthday of Zhores Alferov, the founder of the Global Energy International Prize. A large number of Nobel Laureates and Global Energy Laureates participated in the symposium.

Energy has become a dominant, if not the dominant, field of science impacting society. In the last century, man’s use of energy increased more than it did in the entire previous history of civilization. It has resulted in the highest standard of living in history, but it has also created a global dependence on energy that may become very difficult to meet. That is the primary global energy problem. More specifically, it is the growing recognition that the increasing global demand for petroleum will exceed the supply.

Science has produced many uses for petroleum, but by far the most demanding of the unique capabilities of petroleum is its use for transportation of people and goods. Science has created a very mobile global society. Petroleum has made this possible because of its unique capability to serve as an energy source and as an energy “carrier”. Excluding natural gas, which I include in a very broad definition of “petroleum”, there is no alternative to petroleum that can serve both functions. There are energy sources and there are energy carriers, but no single alternative that can satisfactorily combine both capabilities.

It is generally agreed that the Earth was endowed with about two trillion barrels of oil and that about one trillion barrels have been extracted and used. It is also rather generally agreed that the present extraction rate of about 82 million barrels a day is at, or near, the peak rate that is achievable. Demand has been increasing and is expected to continue to increase. Although these figures would suggest that there is only about a 35 year supply of petroleum remaining, of course, this is not what will happen, or what should be used for planning purposes. A long, gradual transition period will occur during which a variety of alternatives to petroleum in its various applications must be found and used. The challenge for science and technology is to ensure that sufficient alternatives are acceptable, available and ready when needed.
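As a back-of-envelope cross-check on those figures (a sketch only, using just the quantities quoted in the paragraph above):

```python
# Rough static-depletion check on the petroleum figures quoted above.
remaining_barrels = 1e12        # ~1 trillion barrels estimated still in the ground
extraction_rate = 82e6          # barrels per day, the quoted near-peak rate

years_remaining = remaining_barrels / (extraction_rate * 365)
print(f"Static depletion horizon: {years_remaining:.0f} years")
# Roughly 33 years, consistent with the "about 35 year supply" figure above,
# although, as Koch notes, a long transition rather than a hard cut-off is
# what will actually occur.
```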

Many people and organizations are addressing this matter. They have produced a variety of predictions and conclusions, which are readily and extensively available on the internet. At best, these predictions are disturbing and describe a difficult and, perhaps, an unpleasant transition period. At worst, they predict a catastrophe and the end of life as we now know it.

They generally agree that no single substitute for petroleum will be found, and there is a wide disparity in the predicted acceptability of combinations of energy sources and energy carriers. Electricity and hydrogen are recognized as potential energy carriers. Electricity is well established. Hydrogen possesses superb “combustion” characteristics, but will require much more development and an immense infrastructure; its distribution will be difficult and expensive. If it is to be the eventual substitute for petroleum, a huge energy source with very long term availability will be required to produce the hydrogen.

There is little agreement on energy sources that can fulfill this potential demand. Coal is environmentally unacceptable; wind and solar are unreliable, because they require “the wind to blow or the sun to shine”; while hydro and nuclear are considered inadequate because of available resources.

Nuclear energy is included in this latter category because the estimated reserves of uranium are found to be inadequate. This conclusion is scientifically incorrect! It is based on an immature technology which does not incorporate established scientific knowledge.

The “science” of nuclear energy is very simple and very specific. A pound of uranium contains the energy equivalent of about 5,000 barrels of oil, or about 200,000 gallons of gasoline. In metric terms, one kilogram of uranium contains the energy equivalent of almost two million liters of gasoline.

The United States has an inventory of more than one million tons of uranium in storage in the form of “spent fuel” from reactors, and “depleted uranium” from uranium enrichment plants. This inventory contains the energy equivalent of about ten trillion barrels of oil! The total global inventory of this material must be at least 3 or 4 times as large. These nuclear energy reserves are already mined and refined; the uranium (and thorium) still remaining in the Earth, combined with the existing stockpile, makes this a virtually inexhaustible energy supply.
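A quick consistency check on these equivalences (a sketch; the unit conversions are the only things added to the figures quoted above):

```python
# Consistency check on the uranium energy-equivalence figures quoted above.
barrels_per_lb_U = 5_000        # quoted: a pound of uranium ~ 5,000 barrels of oil
gallons_per_lb_U = 200_000      # quoted: ~200,000 gallons of gasoline per pound

# Litres of gasoline per kilogram of uranium:
litres_per_kg_U = gallons_per_lb_U * 2.2046 * 3.785   # lb -> kg, gallons -> litres
print(f"{litres_per_kg_U / 1e6:.2f} million litres of gasoline per kg of uranium")
# ~1.7 million litres, of the order of the "almost two million liters" quoted.

# Barrels of oil equivalent in a million (short) tons of stored uranium:
inventory_lb = 1e6 * 2_000
print(f"{inventory_lb * barrels_per_lb_U:.1e} barrels of oil equivalent")
# ~1e13, i.e. the "about ten trillion barrels" quoted above.
```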

Clearly, the problem is not that the global uranium reserves are inadequate; it is that the contained energy cannot be extracted using today’s immature technology: only about one percent of the energy is extracted from natural uranium! The balance remains in the inventories described earlier. The scientific requirements for extracting this energy have been understood for more than 50 years. The technology for doing so has not yet been developed.

Nuclear energy is produced by the fission of uranium atoms in a nuclear reactor. Natural uranium, as it occurs in the earth, is composed of two isotopes, uranium-235 which is fissionable, and uranium-238 which is not fissionable, but is “fertile” and when it absorbs a neutron it is transformed into plutonium-239 which is fissionable.

Natural uranium consists of about 0.7% U-235 and about 99.3% U-238. The U-238 can only be fissioned if it is first “transmuted” to Pu-239. Therefore, natural uranium can only produce energy effectively by transmuting U-238 to Pu-239. The combination of fission and transmutation occurs in any nuclear reactor in which the fuel contains U-235 and U-238, or Pu-239 and U-238.

It occurs in all of the power reactors operating in the world today. In most of them, an adjustment is made in the U-235 concentration to enhance operation. The 0.7% U-235 content is “enriched” to about 3.0%. This process produces “depleted uranium” which contains about 99.8% U-238. None of the energy contained in this enormous global inventory of depleted uranium has been extracted.

The current generation of nuclear power reactors convert about 1 atom of U-238 into Pu-239 for each 2 atoms of U-235 fissioned. Some of the Pu-239 atoms are fissioned in situ. Therefore, a very small amount of the energy contained in the U-238 is extracted in today’s nuclear power plants. Virtually all of it remains in the spent fuel. The net result of these operations is that about one percent of the energy contained in the original natural uranium energy source has been extracted. The remaining 99% is contained in the spent fuel and depleted uranium. Virtually all of this energy is contained in U-238 which must be converted to Pu-239 to extract it.

This can be accomplished most efficiently in fast reactors fueled with Pu-239 and U-238. In this system, about 3 atoms of U-238 are converted to Pu-239 for each 2 atoms of Pu-239 fissioned. Because these machines can produce more plutonium than they consume, they are called “breeders”. The current conventional reactors which are about one third as efficient are called “converters”.
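As a simplified illustration of why a converter extracts only about one percent of natural uranium’s energy while a breeder can, in principle, reach nearly all of it (this sketch uses only the ratios quoted above and is not a reactor-physics calculation):

```python
# Illustrative estimate of uranium utilisation, using only the ratios quoted above.
u235_fraction = 0.007      # U-235 share of natural uranium (~0.7%)
conversion_ratio = 0.5     # converter: ~1 Pu-239 bred per 2 U-235 atoms fissioned
breeding_ratio = 1.5       # fast breeder: ~3 U-238 converted per 2 Pu-239 fissioned

# Once-through converter: assume the U-235 is fissioned and (optimistically) the
# bred Pu-239 fissions in situ. Fraction of ALL uranium atoms then used:
once_through = u235_fraction * (1 + conversion_ratio)
print(f"Converter, once-through: ~{once_through:.1%} of natural uranium used")
# About 1%, matching the "about one percent" figure in the text.

# Breeder with full recycle: with a breeding ratio above 1, each fission more than
# replaces its fuel from U-238, so essentially all of the uranium can eventually
# be fissioned if the fuel is recycled indefinitely.
```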

From the very early days of the nuclear age, it was predicted that the energy contained in uranium could be extracted by recycling nuclear fuel in fast reactors. It was recognized also that this could only be accomplished if the following questions were answered favorably. Would the neutronics produce a “breeder” type performance? Could energy be extracted usefully and acceptably from large fast neutron power reactors? Could nuclear fuel be recycled through such reactors in the manner required to extract the energy?

The first two questions have been answered. The plutonium-uranium fuel system in fast reactors will permit energy to be extracted from U-238. It has been shown that large fast power reactors can indeed produce useable energy. This has been done, probably most convincingly, in Russia at the BN-600 power station. In addition, work in other countries corroborates that fast power reactors can be used to produce electricity and for other purposes.

The third question has not been answered adequately. Nuclear fuel has not been recycled to the extent necessary to demonstrate the capability to extract a significant fraction of the energy contained in uranium! This is the remaining challenge for science and technology.

I was deeply involved in a very early attempt to advance this technology. It evolved into the EBR-II project, the Experimental Breeder Reactor No. 2, developed by Argonne National Laboratory in the United States. It was developed to demonstrate, on a small scale, the feasibility of power generation, but much more importantly, to advance fuel recycle technology. It was a relatively small plant, generating only 20,000 kilowatts of electricity, but it incorporated a complete “fuel cycle facility” interconnected to the nuclear reactor plant. Although fast reactor power plant projects were proceeding in the United States and other countries, none of them incorporated provisions for direct on-site fuel recycle. Therefore, the EBR-II experience is unique and important.

The fuel selected for the first phase of operation was an enriched uranium metal alloy which was actually established by the fuel refining process which had been selected. Neither plutonium, nor plutonium-uranium technology, were available at the time (the 1950′s). A relatively simple and imperfect fuel processing system was selected to provide a “starting point” for the development of this technology, with recognition that much additional technology development would be required. The uranium metal fuel was to be processed by melt refining which removed fission products from molten uranium by volatilization and oxidation. This process provided adequate purification for fast reactor fuel recycle, even though all of the fission products were not removed.

It was estimated that at nominal equilibrium conditions, after several fuel cycles, this process would produce an alloy consisting of about 95% uranium and 5% fission products (about 2.5% molybdenum and 2% ruthenium plus a small amount of “others”). This alloy was named “fissium” and it was decided to create this alloy for the initial fuel loading to avoid a constantly changing fuel composition with each fuel recycle. It was not expected that this first phase of operation would demonstrate a true breeder fuel recycle. That was planned for the next phase.

Simultaneously, some very preliminary laboratory-scale experiments indicated that electrorefining of plutonium-uranium metallic alloys might prove to be suitable for recycle of this fuel in fast power reactors. As a result, the EBR-II program plan was to operate initially on an enriched uranium fuel cycle and shift to a plutonium-uranium fuel cycle later when the technology for that fuel cycle was developed. It was thought that valuable power reactor fuel recycle experience could be obtained during the first phase even though it was not a true breeder fuel cycle.

Only the first phase was accomplished, and only on a limited scale. Five total reactor core loadings were recycled through the reactor. About 35,000 individual fuel elements were reprocessed, fabricated and assembled into almost 400 fuel subassemblies. An administrative decision was made that the United States nuclear power program would concentrate on oxide nuclear fuel for all power reactors, including fast reactors. The EBR-II fuel recycle program, based on metal fuel, was terminated. Reactor operation was continued for more than 20 years, but the fuel was not recycled. The reactor continued operation as a “fissium-fueled”, base load, electrical generating station and a fast neutron irradiation facility. The fuel cycle facility was used for examination of irradiated fuel and other materials.

Even though this program was interrupted, it produced and demonstrated some very useful technology that will be applicable to future recycle systems and provides an overall perspective of nuclear fuel recycle requirements. It includes the performance of highly complex operations in a very strong radiation field and the removal of fission product decay heat during fuel fabrication and assembly operations. Even though future systems may be less demanding, this technology and experience will be invaluable.

Each future recycle system will create unique requirements related specifically to the fuel, the fuel form and the design of the individual fuel elements. These will include removing the spent fuel from its container (most probably a cylindrical tube), reprocessing the fuel, and installing it in a new container.

It is this part of the total fuel recycle process that requires much development and demonstration. There are a variety of potential fuels and fuel forms, and a variety of potential purification and fabrication processes, which will produce a variety of fuel recycle characteristics and requirements. The composition of the fuel will change during recycle and an equilibrium, or near equilibrium, composition will eventually result. This scenario has not been produced for any of the potential fuel systems, nor will it be, until the required operational experience has been obtained. Global attention is needed because this will be a very slow, long-term undertaking. There are no quick fixes! A fuel cycle will probably take about three years, and several cycles will be required to establish a reasonable demonstration of the total performance of a specific recycle process. There will be, almost certainly, more than one total fuel recycle system to pursue; possibly several. Each will be unique and produce its own results and create its own requirements.

I have proposed that the United States initiate a program to begin the process by constructing a “fuel recycle reactor” (FRR) designed specifically to provide a facility in which these fuels can be recycled. I do not believe that a single facility of this kind can begin to do the job that is necessary to establish this badly needed technology. I know that it is presumptuous of me to suggest what other countries should do; but, I propose that a vigorous international effort be undertaken to develop and establish the technology required to recycle nuclear fuel in fast power reactors and thus make it possible for the world to use the tremendous capability which exists in the global resources of nuclear fuel.

This is a timely international challenge. I note that Japan is considering the restart of their Monju fast reactor and is exploring international participation in fuel cycle technology. I note also that India is proceeding with their first fast power reactor, with a capacity of 500 megawatts, and plans to build three more by 2020. I find this to be a very interesting development; India has maintained a continuing technical interest in fast reactors since the very early days of nuclear power. I expect this program will bring a new perspective to nuclear power and fuel recycle. India has a strong interest in the U-233 thorium cycle because of their large indigenous supply of thorium.

Th-232, which is not fissionable, is similar to U-238; when it absorbs a neutron, it is transformed into fissionable U-233. This process also can be best accomplished in fast reactors and requires fuel recycle. Therefore, fuel recycle technology also must be developed to extract this source of energy. The vast global thorium reserves should be included in estimates of total global nuclear energy capability.

On a longer range basis, the magnitude of the demand for energy sources will eventually become dominant. In addition to providing an alternative to dwindling petroleum resources, there will be the need to provide for the continuing growth in demand for energy to satisfy the needs of increasing global population and their standard of living.

For nuclear energy to contribute significantly to satisfying this enormous potential demand, it will be necessary to not only develop the technology, but to make it acceptable!

History has established a relationship between nuclear energy and nuclear weapons that is not clearly defined or well understood. Nuclear weapons are produced from fissionable materials, but recycled power reactor fuel is not a suitable source for that material. Even the spent fuel after only one fuel cycle in current generation power reactors is unsuitable for weapons use. After multiple recycles, the fuel is essentially useless for weapons.

It will be necessary to demonstrate that nuclear energy on the vast scale I have suggested will not result in unacceptable nuclear waste. Efficient fuel recycle has the potential capability of virtually eliminating this requirement. The primary problem presented by the long term storage of spent fuel is the long half-life of the actinides produced in the spent fuel. They can be destroyed by fission.

A complete nuclear fuel recycle process will destroy these actinides and produce energy from those that fission. At equilibrium, all of the necessary processes will be operating simultaneously. Pu-239 will be fissioning, the higher isotopes of plutonium will be fissioning, or absorbing neutrons and transmuting into isotopes that fission and are destroyed.

The ideal fuel cycle will recycle all of the uranium, the plutonium isotopes and the other actinides and remove only fission products during each fuel cycle. The nuclear waste will consist primarily of fission products which will be far easier to store and virtually all of the energy will have been extracted from the original energy source, natural uranium. A similar scenario can be developed for thorium. The science is firmly established. The technology is needed. The incentive to do so is enormous. It is to provide an inexhaustible supply of energy for the foreseeable future and beyond.

September 15, 2010

IFR FaD 6 – fast reactors are easy to control


by Barry Brook

There are many topics in the IFR FaD series that I want to develop in sequence — and in some detail. But for the moment, here’s a little diversion. People often complain that sodium-cooled fast reactors are about as easy to control as wild stallions — at least compared to the docile mares that are water-moderated thermal reactors. The experience on the EBR-II (which I’ll describe further in future posts) certainly belies this assertion, but for now, I want to go to another source.

Here are comments from Joël Sarge Guidez, written in 2002, who was Chairman of the International Group on Research Reactors (IGORR), Director of the Phénix fast breeder reactor (a 233 MWe power plant which operated in France for more than 30 years, with an availability factor of 78% in 2004, 85% in 2005 and 78% in 2006), and President of the club of French Research Reactors:

—————————

A reactor that’s easy to live with

Pressurised water reactor specialists are always surprised how easy it is to run a fast reactor: no pressure, no neutron poisons like boron, no xenon effect, no compensatory movements of the rods, etc. Simply, when one raises the rods, there is divergence and the power increases. Regulating the level of the rods stabilises the reactor at the desired power. The very strong thermal inertia of the whole unit allows plenty of time for the corresponding temperature changes. If one does nothing, the power will gradually decrease as the fuel ages, and from time to time one will have to raise the rods again to maintain constant power. It all reminds one of a good honest cart-horse rather than a highly-strung race horse.

Similarly, the supposed drawbacks of sodium often turn out in practice to be advantages. For example, the sodium leaks (about thirty so far since the plant first started up) create electrical contacts and produce smoke, which means they can be detected very quickly. Again, the fact that sodium is solid at ambient temperature simplifies many operations on the circuits. More generally, because of the chemical properties of sodium, the plant is designed to keep it rigorously confined, including during handling. During operation, all this provides a much greater “dosimetric convenience” than conventional reactors. In particular, a very large part of the plant is completely accessible to staff whatever power the reactor is at, and the dose levels are very low.

Because of the very high neutron flux (more than ten times as high as with water reactors), there is great demand for experiments. These experiments are performed using either rigs inside carrier sub-assemblies or using special experimental sub-assemblies with particular characteristics. All experiments are run and monitored in the core like the other subassemblies.

Since its origin, Phénix has irradiated around 1000 sub-assemblies, of which 200 were experimental sub-assemblies. It is true that Phénix is not as flexible as an experimental water reactor, in which targets can easily be handled and moved. But, with a minimum of preparation – which is necessary anyway for reasons of safety and quality – numerous parameters such as flux, spectrum and duration can be adjusted to the needs of each experiment.

Furthermore, the reactor was designed by modest people who thought in advance of everything that would be needed for intervention on the plant: modular steam generators, washing pits, component handling casks etc. All of which has been very useful and has made possible numerous operations and modifications in every domain. All this has meant that a prototype reactor built in the early 1970s is still operational in 2004, and will continue so for several years yet.

—————————

Some further useful information can be had from Guidez’s presentation at the 2008 International Group on Research Reactors conference. Download and read over this 19-page PDF, which is the easy-to-read slides of his presentation, called “THE RENAISSANCE OF SODIUM FAST REACTORS STATUS AND CONTRIBUTION OF PHENIX”.

August 25, 2010

Climate change basics III – environmental impacts and tipping points

Filed under: Climate Change, Emissions Reduction, Global Warming — buildeco @ 6:48 pm


by Barry Brook

The world’s climate is inherently dynamic and changeable. Past aeons have borne witness to a planet choked by intense volcanic activity, dried out in vast circumglobal deserts, heated to a point where polar oceans were as warm as subtropical seas, and frozen in successive ice ages that entombed northern Eurasia and America under miles of ice. These changes to the Earth’s environment imposed great stresses upon ecosystems and often led to mass extinctions of species. Life always went on, but the world was inevitably a very different place.

We, a single species, are now the agent of global change. We are undertaking an unplanned and unprecedented experiment in planetary engineering, which has the potential to unleash physical and biological transformations on a scale never before witnessed by civilization. Our actions are causing a massive loss and fragmentation of habitats (e.g., deforestation of the tropical rain forests), over-exploitation of species (e.g., collapse of major fisheries), and severe environmental degradation (e.g., pollution and excessive draw-down of rivers, lakes and groundwater). These patently unsustainable human impacts are operating worldwide, and accelerating. They foreshadow a grim future. And then, on top of all of this, there is the looming spectre of climate change.

When climate change is discussed in the modern context, it is usually with reference to global warming, caused by anthropogenic pollution from the burning of fossil fuels. Since the furnaces of the industrial revolution were first ignited a few centuries ago, we have treated the atmosphere as an open sewer, dumping into it more than a trillion tonnes of heat-trapping carbon dioxide (CO2), as well as methane, nitrous oxide and ozone-destroying CFCs. The atmospheric concentration of CO2 is now nearly 40% higher than at any time over the past million years (and perhaps 40 million years – our data predating the ice core record is too sketchy to draw strong conclusions). Average global temperature rose 0.74°C in the hundred years since 1906, with almost two thirds of that warming having occurred in just the last 50 years.

What of the future? There is no doubt that climate predictions carry a fair burden of scientific ambiguity, especially regarding feedbacks in climatic and biological systems. Yet what is not widely appreciated among non-scientists is that more than half of the uncertainty, captured in the scenarios of the Intergovernmental Panel on Climate Change, is actually related to our inability to forecast the probable economic and technological development pathway global societies will take during the twenty-first century. As a forward-thinking and risk averse species, it is certainly within our power to anticipate the manifold impacts of anthropogenic climate change, and so make the key economic and technological choices required to substantially mitigate our carbon emissions. But will we act in time, and will it be with sufficient gusto? And can nature adapt?

The choice of on-going deferment of action is potentially dire. If we do not commit to deep emission cuts (up to 80% by 2050 is required for developed nations), our descendents will likely suffer from a globally averaged temperature rise of 4–7°C by 2100, an eventual (and perhaps rapid) collapse of the Greenland and the West Antarctic ice sheets (with an attendant 12–14 metres of sea level rise), more frequent and severe droughts, more intense flooding, a major loss of biodiversity, and the possibility of a permanent El Niño. This includes frequent failures of the tropical monsoon, which provides the water required to feed the billions of people in Asia.

Indeed, the European Union has judged that a warming of just 2°C above pre-industrial levels constitutes ‘dangerous anthropogenic interference with the climate system’, as codified in the 1992 United Nations Framework Convention on Climate Change. Worryingly, even if we can manage to stabilise greenhouse gas concentrations at 450 parts per million (it is currently 383 ppm CO2, and rising at 3 parts per million per year), we would still only have a roughly 50:50 chance of averting dangerous climate change. Beyond about 2°C of warming, the likelihood of crossing irreversible physical, biological and, ultimately, economic thresholds (such as rapid sea level rise associated with the disintegration of the polar ice sheets, a shutdown of major heat-distributing oceanic currents, mass extinctions of species, and a collapse of the natural hazards insurance industry) becomes unacceptably high.

Unfortunately, there is no evidence to date that we are taking meaningful action to decarbonise the global economy. In fact, it is just the reverse, with recent work showing that the carbon intensity of energy generation in developed nations such as the US and Australia has actually increased over the last decade. Over the last decade, the world’s rate of emissions growth has tripled, and total CO2 emissions now exceed 30 billion tonnes a year. China overtook the US in 2006 as the single biggest greenhouse polluter, and within a decade it will be producing twice as much CO2. This remarkable rate of growth, if sustained, means that over just the next 25 years, humans will spew into the atmosphere an additional volume of CO2 greater than the total amount emitted during the entire industrial period of 1750 to 2000! Of particular concern is that long-lived greenhouse gases, like CO2, will continue to amplify global warming for centuries to come. For every four tonnes added during a year in which we prevaricate about reducing emissions, one tonne will still be trapping heat in 500 years. It is a bleak endowment to future generations.

Nature’s response to twentieth-century warming has been surprisingly pronounced. For instance, ecologists have documented numerous instances of shifts in the timing of biological events, such as flowering, emergence of insects, and bird migration occurring progressively earlier in the season. Similarly, many species, including insects, frogs, birds and mammals, have shifted their geographical range towards colder realms – towards higher latitudes, upwards in elevation, or both. Careful investigations have also revealed some new evolutionary adaptations to cope with changed climatic conditions, such as desiccation-tolerant fruit flies, and butterflies with longer wings that allow them to more readily disperse to new suitable habitats. On the other hand, some sensitive species have already been eliminated by recent climate change. For instance, the harlequin frog and golden toad were once found in abundance in the montane cloud forests of Costa Rica. But in the 1980s they were completely wiped out by a fungal disease, which flourished as the moist forests began to dry out: a drying caused by a rising cloud layer that was directly linked to global warming.

These changes are just the beginning. Under the current business-as-usual scenario of carbon emissions, the planet is predicted to experience five to nine times the rate of twentieth-century warming over the next hundred years. An obvious question is, will natural systems be able to continue to keep pace? There are a number of reasons to suspect that the majority will not.

Past global climate change characteristically unfolded over many millennia, whereas current anthropogenic global warming is now occurring at a greatly accelerated rate. If emissions are not checked, a level of planetary heating comparable to the difference between the present day and the height of the last ice age, or between now and the age of the dinosaurs (when Antarctica was ice free), is expected to unfold over a period of less than a century! When such catastrophically rapid changes in climate did occur, very occasionally, in the deep past – associated, for instance, with a large asteroid strike from space – a mass extinction event inevitably ensued. Most life just could not cope, and it took millions of years after this shock for biodiversity to recover. It has been estimated that 20 to 60 per cent of species might become extinct in the next few centuries, if global warming of more than a few degrees occurs. Many thousands (perhaps millions) will be from tropical areas, about which we know very little. A clear lesson from the past is that the faster and more severe the rate of global change, the more devastating the biological consequences.

Compounding the issue of the rate of recent climate change, is that plant and animal species trying to move to keep pace with the warming must now contend with landscapes dominated by farms, roads, towns and cities. Species will gradually lose suitable living space, as rising temperatures force them to retreat away from the relative safety of existing reserves, national parks and remnant habitat, in search of suitable climatic conditions. The new conditions may also facilitate invasions by non-indigenous or alien species, who will act to further stress resident species, as novel competitors or predators. Naturally mobile species, such as flying insects, plants with wind-dispersed seeds, or wide-ranging birds, may be able to continue to adjust their geographical ranges, and so flee to distant refugia. Many others will not.

A substantial mitigation of carbon emissions is urgently needed, to stave off the worst of this environmental damage. But irrespective of what we do now, we are committed to some adaptation. If all pollution was shut off immediately, the planet would still warm by at least a further 0.7°C.

For natural resource management, some innovative thinking will be required, to build long-term resilience into ecosystems and so stem the tide of species extinctions. Large-scale afforestation of previously cleared landscapes will serve to provide corridors, re-connecting isolated habitat patches. Reserves will need to be extended towards cooler climatic zones by the acquisition of new land, and perhaps abandoned and sold off along their opposite margins. Our national parks may need to be substantially reconfigured. We must also not shirk from taking a direct and active role in manipulating species distributions. For instance, we will need to establish suitable mixes of plant species which cannot themselves disperse, and translocate a variety of animal species. It may be that the new ecological communities we construct will be unlike anything that currently exists.

Such are the ‘unnatural’ choices we are likely to be forced to make, to offset the unintended impacts of our atmospheric engineering. Active and adaptive management of the Earth’s biological and physical systems will be the mantra in this brave new world. Truly, the century of consequences.

August 20, 2010

‘Zero Carbon Australia – Stationary Energy Plan’ – Critique

Filed under: Emissions Reduction, Renewable Energy — buildeco @ 12:20 pm


by Barry Brook

‘Zero Carbon Australia – Stationary Energy Plan’ – Critique

Download the printable PDF here

[An addendum on wind farm construction rates, by Dave Burraston, can be downloaded here.]

————————

Edit: Here are some media-suitable ‘sound bites’ from the critique, prepared by Martin. Obviously, please read the whole critique below to understand the context:

  • They assume that in 2020 we will be using less than half the energy we use today, without any damage to the economy. This flies in the face of 200 years of history.
  • They have seriously underestimated the cost and timescale required to implement the plan.
  • For $8 a week extra on your electricity bill, you will give up all domestic plane travel and all your bus trips, and you must take half your journeys by electrified trains.
  • They even suggest that all you two-car families cut back to just one electric car.
  • You had better stock up on candles, because you can certainly expect more blackouts and brownouts.
  • Addressing these drawbacks could add over $50 a week to your power bill, not the $8 promised by BZE. That’s over $2,600 per year for the average household.

By Martin Nicholson and Peter Lang, August 2010

1. Summary

This document provides a critique of the ‘Zero Carbon Australia – Stationary Energy Plan’ [1] (referred to as the Plan in this document) prepared by Beyond Zero Emissions (BZE). We looked at the total electricity demand required, the total electricity generating capacity needed to meet that demand and the total capital cost of installing that generating capacity. We did not review the suitability of the technologies proposed.  We briefly considered the timeline for installing the capacity by 2020 but have not critiqued this part of the Plan in detail.

In reviewing the total energy demand, we referred to the assumptions made in the Plan and compared them to the Australian Bureau of Agricultural and Resource Economics (ABARE) report on Australian energy projections to 2029-30 [2]. The key Plan assumptions we questioned were the use of 2008 energy data as the benchmark for 2020, the transfer of close to half the current road transport to electrified rail, and the transfer of all domestic air travel and shipping to rail, which could have a devastating impact on the economy. In the Plan, total energy demand was reduced to 63% below ABARE’s assessment. We recalculated the energy demand for 2020 without these particular assumptions. Our recalculation increased electricity demand by 38% above the demand proposed in the Plan.

We next turned our minds to the amount of generator capacity needed to meet our recalculated electricity demand. We assumed that the existing electricity network customers would require the same level of network reliability as now. At best, the solar thermal plants would have the same reliability and availability as the existing coal fleet, so the network operators would require at least a similar proportion of reserve margin capacity as in the existing networks. We kept the same proportion of wind energy as in the Plan (40%) and recalculated the total capacity needed to maintain the reserve margin. The total installed capacity needed increased by 65% above the proposed capacity in the Plan.

The Plan misleadingly states that it relies only on existing, proven, commercially available and costed technologies. The proposed products to be used in the Plan fail these tests. So, to assess the total capital cost of installing the generating capacity needed, we reviewed some current costs for both wind farms and solar thermal plants. We also reviewed ABARE’s expectations of future cost reductions. We considered that current costs were the most likely to apply to early installed plants, and that ABARE’s future cost reductions were more likely to apply than the reductions used in the Plan. Applying these costs to the increased installed capacity increases the total capital cost almost five-fold, and increases the wholesale cost of electricity by at least five times and probably ten times. This will have a significant impact on consumer electricity prices.

We consider the Plan’s Implementation Timeline to be unrealistic. We doubt that any solar thermal plants of the size and availability proposed in the Plan will be on line before 2020. We expect only demonstration plants will be built until there is confidence that they can become economically viable. Also, it is common for such long term projections to have high failure rates.

2. 2020 Electricity Demand

BZE make a number of assumptions in assessing the electricity demand used to calculate the generating capacity needed by 2020. In summary these are:

  1. 2008 is used as the benchmark year for the analysis. BZE defend this by saying “ZCA2020 intends to decouple energy use from GDP growth. Energy use per capita is used as a reference, taking into account medium-range population growth.”
  2. Various industrial energy demands in 2020 are reduced including gas used in the export of LNG, energy used in coal mining, parasitic electricity losses, off-grid electricity and coal for smelting.
  3. Nearly all transport is electrified and a substantial proportion of the travel kms are moved from road to electrified rail including 50% of urban passenger and truck kms and all bus kms. All domestic air and shipping is also moved to electric rail.
  4. All fossil fuels energy, both domestic and industrial, is replaced with electricity.
  5. Demand is reduced through energy efficiency and the use of onsite solar energy.

The net effect of these assumptions is to reduce the 2020 total energy by 58% below the 2008 benchmark and 63% below the ABARE estimate for 2020. The total electricity required in 2020 to service demand and achieve these reductions is 325 TWh. This is the equivalent of an average generating capacity of 37 GW over the year.
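The conversion from annual energy to average power is straightforward (a trivial sketch):

```python
# Converting the Plan's annual electricity demand into an average generating rate.
plan_demand_twh = 325            # TWh per year, the ZCA2020 Plan figure
hours_per_year = 8760

average_gw = plan_demand_twh * 1e3 / hours_per_year    # TWh -> GWh, then divide by hours
print(f"Average power: {average_gw:.0f} GW")            # ~37 GW, as stated above
```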

All of these assumptions are challenging and some are probably unrealistic or politically unacceptable. To address these concerns, we have adjusted the assumptions and recalculated the energy estimates shown in Table A1.3 of the Plan.

The revised assumptions are as follows:

  1. Comparing Australia’s energy use per capita with Northern Europe ignores the significant differences in population density and climate between the two regions. To address this, we have used ABARE’s forecast for 2020 as the benchmark year for our analysis. The ABARE forecast assumes energy efficiency improvement of 0.5 per cent a year in non energy-intensive end use sectors and 0.2 per cent a year in energy intensive industries.
  2. The export of LNG will continue. Much of the world may not wish to, or be able to, emulate this plan and the demand for gas as an energy source will continue for several decades. The other demand reductions shown in BZE assumption 2 above are included.
  3. A substantial modal shift in transport to rail is unlikely to be politically acceptable, particularly domestic aviation and bus travel. Domestic aviation and shipping will continue to use fossil fuels or bio-equivalents. In our analysis, nearly all road transport is electrified but without a reduction in distance travelled. Though this transport electrification is unlikely to be achieved by 2020, it is a realistic long term goal so has been included in the revised calculations. ABARE energy data are for final energy consumption so a tank/battery to wheel efficiency comparison should be made. This is considered to be a 3:1 energy reduction [3] not 5:1 as identified in the Plan.
  4. All fossil fuels energy is replaced with electricity as per the Plan.
  5. Demand is reduced through energy efficiency and the use of onsite solar energy as per the Plan but discounted by the energy efficiency already included in the ABARE data identified in 1 above.

These assumptions and recalculations are based on the information and data provided in Appendix 1 of the Plan, where each SET column shown in Table 1 below is defined. ABARE provided data for 2008 and 2030 only, so the 2020 figures are our estimates based on the ABARE data.

The net effect of these revised assumptions is shown in Table 1 which is a rework of Table A1.3 in Appendix 1 of the Plan. The total electricity required in 2020 to service the revised demand and achieve the energy reductions is 449 TWh or 38% more than the ZCA2020 Plan estimate of 325 TWh.

3. Total Capacity Needed

A number of assumptions have been made by BZE in assessing the generating capacity needed to supply the electricity demand in 2020. These can be summarised as follows:

  1. The Plan relies on 50 GW of wind and 42.5 GW of concentrating solar thermal (CST) alone to meet 98% of the projected electricity demand of 325 TWh/yr. In addition, the combination of hydro and biomass generation as backup at the CST sites is expected to meet the remaining 2% of total demand, covering the few occasions where periods of low wind and extended low sun coincide.
  2. In the Plan system design the extra generating capacity needed to meet peak demand is reduced relative to current requirements. The electrification of heating, along with an active load management system, is assumed to defer heating and cooling load to smooth out peaks in demand resulting in a significant reduction in the overall installed capacity required to meet peak demand.
  3. In the Plan, negawatts are achieved through energy efficiency programs which lower both overall energy demand and peak electricity demand as well as by time-shifting loads using active load management. Negawatts can be conceptually understood as real decreases in necessary installed generating capacity, due to real reductions in overall peak electricity demand.
  4. The current annual energy demand in the Plan is taken to be 213 TWh, which converts to an average power of 24 GW. BZE assumes that the current installed capacity to meet maximum demand is 45 GW. The difference (21 GW) is then treated as the capacity required for intermediate and peak loads only. The peak load in 2020 is assumed to equal the 2020 average of 37 GW plus the 21 GW for intermediate and peak loads. This is then reduced by a 3 GW allowance for ‘negawatts’ to give an overall maximum demand of 55 GW (this arithmetic is sketched below, after the list).
  5. In the worst case scenario modelled in the Plan of low wind and low sun, there is a minimum of 55 GW of reliable capacity. This is based on a projected 15%, or 7.5 GW, of wind power always being available and the 42.5 GW of solar thermal turbine capacity also always being available with up to 15 GW of this turbine capacity backed up by biomass heaters. The 5 GW of existing hydro capacity is also always available.
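The arithmetic behind BZE’s 55 GW maximum-demand figure (items 4 and 5 above) can be reproduced as follows; the numbers are all BZE’s, only the calculation is sketched here:

```python
# Reproducing BZE's peak-demand arithmetic from assumption 4 above.
current_demand_twh = 213
current_avg_gw = current_demand_twh * 1e3 / 8760              # ~24 GW average power
assumed_installed_gw = 45                                     # BZE's assumed current capacity
intermediate_peak_gw = assumed_installed_gw - current_avg_gw  # ~21 GW

plan_avg_2020_gw = 37        # average power implied by the Plan's 325 TWh
negawatts_gw = 3             # BZE's 'negawatt' allowance
bze_peak_gw = plan_avg_2020_gw + intermediate_peak_gw - negawatts_gw
print(f"BZE overall maximum demand: {bze_peak_gw:.0f} GW")    # ~55 GW, as in the Plan
```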

The key issues in these assumptions are that the maximum (peak) demand is 55GW and that the proposed installed capacity can deliver a minimum of 55GW at any time. We will deal with each of these issues separately.

3.1. Recalculation of peak demand

The ZCA2020 Plan proposes a single National Grid comprising the existing NEM, SWIS and NWIS grids. The current installed capacity and loads in the three regions are shown in Table 2. An accurate assessment of peak demand – not average demand – is critical for assessing the total installed capacity needed.

Reliability in each network is maintained by additional available capacity over and above the expected peak demand. This is to cover for planned or unexpected loss of generating capacity either through planned maintenance or unplanned breakdown. This additional capacity is often referred to as the ‘reserve margin’.

In each network, the installed capacity is currently about 33% higher than the actual peak load; this excess is the reserve margin. Note also that the actual total installed capacity is 53 GW and average power is 26 GW across the three networks. These are both higher than suggested by BZE in assumption 4 above.

The anticipated electricity demand in 2020 from Table 1 is 449 TWh. Assuming no change in current peak demand we can expect the pro rata peak in 2020 would be 78.7 GW (39.7 x 449/227). If we apply the 3 GW negawatt reduction discussed in assumption 4, peak demand will become 75.7 GW as shown in Table 3.
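The pro rata recalculation is easily reproduced (a sketch; the small difference from the quoted 78.7 GW presumably reflects rounding of the inputs):

```python
# Scaling the current peak demand in proportion to the revised 2020 energy demand.
current_peak_gw = 39.7        # present combined peak across the three networks
current_demand_twh = 227      # present combined annual demand
revised_demand_twh = 449      # revised 2020 demand from Table 1
negawatts_gw = 3              # BZE's 'negawatt' allowance, retained here

peak_2020_gw = current_peak_gw * revised_demand_twh / current_demand_twh
print(f"Pro rata 2020 peak demand: {peak_2020_gw:.1f} GW")
# ~78.5 GW here; the critique quotes 78.7 GW from less-rounded inputs.
print(f"After negawatt allowance: {peak_2020_gw - negawatts_gw:.1f} GW")
# ~75.5 GW here; Table 3 quotes 75.7 GW.
```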

3.2. Recalculation of required capacity to reliably meet demand

The Plan insists that the combination of wind power and solar thermal with storage can deliver continuous supply (baseload). The only way to accurately assess this, and the capacity required to meet the performance demands on the network, is to do a full loss of load probability (LOLP) analysis. This does not appear to have been done in the ZCA2020 Plan, or at least it was not discussed as such in the report.

It is also beyond the scope of this critique to perform an LOLP analysis. A reasonable proxy is to apply the reserve margin requirements currently in the network. To maintain reliability, all three network regions have a reserve margin of 33% above the anticipated peak demand.

The size of the reserve margin is, among other things, related to the reliability of the generators in the network. In the current networks the predominant generators are conventional fossil fuel plants supplying over 90% of the energy.

In the Plan, the predominant plants are solar thermal with biomass backup supplying just under 60% of the energy. The Plan states that “The solar thermal power towers specified in the Plan will be able to operate at 70-75% annual capacity factor, similar to conventional fossil fuel plants.” The remainder of the energy mostly comes from wind powered generators. It would therefore seem likely that the network operators would continue, at a minimum, to require a 33% reserve margin to maintain the current levels of network reliability. The reserve margin may well be higher given the proportion of wind power and the use of relatively new solar thermal/biomass hybrid plants.

Table 3 shows the anticipated peak demand and total capacity needed to meet the 2020 demand calculated in section 2.
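Applying the existing ~33% reserve margin to the recalculated peak demand gives the total capacity requirement carried into section 3.3 (a sketch of the arithmetic):

```python
# Total installed capacity implied by a 33% reserve margin over the revised peak.
peak_demand_gw = 75.7         # recalculated 2020 peak demand from section 3.1
reserve_margin = 0.33         # proxy for a full LOLP analysis, as discussed above

total_capacity_gw = peak_demand_gw * (1 + reserve_margin)
print(f"Installed capacity required: {total_capacity_gw:.0f} GW")   # ~101 GW (Table 3)
```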

3.3. Estimate of the required wind and solar capacity

As far as possible, we have kept the percentage of energy coming from wind and solar the same as in the Plan. This means that roughly 40% of the energy will come from wind and 60% will come from solar thermal plants, with sufficient biomass capacity, and a sufficient fuel supply system, to provide backup when there is insufficient energy in storage.

40% of the 449 TWh demand required by 2020 shown in section 2 will require 68 GW of wind. This is 36% higher than the 50 GW of wind used in the Plan.
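The 68 GW figure can be reproduced if an annual wind capacity factor of roughly 30% is assumed (that capacity factor is an assumption of this sketch; it is not stated explicitly in this section):

```python
# Back-calculating the wind capacity implied by a 40% energy share of 449 TWh/yr.
revised_demand_twh = 449
wind_share = 0.40
assumed_capacity_factor = 0.30     # assumption for this sketch only

wind_energy_twh = wind_share * revised_demand_twh                    # ~180 TWh/yr
wind_capacity_gw = wind_energy_twh * 1e3 / (8760 * assumed_capacity_factor)
print(f"Implied wind capacity: {wind_capacity_gw:.0f} GW")           # ~68 GW, as stated
```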

The Plan assumed that 15% of wind power would always be available (assumption 5 above). This is the capacity credit allocated when assessing network reliability. Dispatchable generators like fossil fuel plants typically have a capacity credit of 99%. [4]

For the purpose of this estimate, we have assumed that the solar plants will have sufficient biomass capacity and reliability to be given a capacity credit of 99%. This may require a higher availability of biomass at the solar sites than has been included in the Plan. Without an LOLP we are not able to make that assessment.

Table 4 shows the amount of wind and solar needed to satisfy the network requirement for a total capacity of 101 GW, calculated in section 3.2 and shown in Table 3. The solar capacity and biomass backup will need to be more than doubled, from the Plan’s 42.5 GW to 87 GW.
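One plausible way to arrive at a solar figure close to the 87 GW in Table 4 is to require that the firm capacity, weighted by the capacity credits discussed above, meets the 101 GW requirement. This is a reconstruction under stated assumptions, not the critique’s published working:

```python
# Plausible reconstruction of the ~87 GW solar thermal figure (assumptions flagged inline).
required_capacity_gw = 101      # total capacity requirement from section 3.2
wind_gw = 68                    # wind capacity from above
wind_capacity_credit = 0.15     # Plan assumption: 15% of wind always available
hydro_gw = 5                    # existing hydro, treated as always available
firm_credit = 0.99              # capacity credit assumed here for solar/biomass and hydro

solar_gw = (required_capacity_gw - wind_gw * wind_capacity_credit) / firm_credit - hydro_gw
print(f"Implied solar thermal capacity: {solar_gw:.0f} GW")          # ~87 GW
```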

4. Capital Costs

The Plan makes an estimate of the capital costs for the generators and the transmission lines. The Plan states that it “relies only on existing, proven, commercially available and costed technologies”. This is misleading. Although it is true that wind and solar thermal generators have been used commercially for a number of years, the particular products and product sizes suggested in the Plan are not yet available, and caution is needed when estimating future costs for these products. Further, the Plan assumes that baseload solar thermal is available today, when the International Energy Agency does not expect competitive baseload CSP before 2025. [5]

In this analysis we have compared the costs proposed in the Plan with known costs for solar and wind plants, together with ABARE’s suggested likely cost reductions over time.

4.1. Wind costs

According to ABARE [6, 7], current costs for wind farms in Australia are around $2.9 million/MW. In 2009 the costs were $2.3 million/MW – see Table 5.

The following assumptions have been made by BZE in estimating the cost of wind farms:

  1. The Plan involves a large-scale roll out of wind turbines that will require a ramp up in production rate, which will help to reduce wind farm capital costs and bring Australian costs into line with the world (European) markets.
  2. The 2010 forecast capital cost of onshore wind is approximately €1,200/kW (2006 prices) or $2,200/kW (current prices). By 2015 the European capital cost of onshore wind is estimated to be around €900/kW (2006 prices) (or $1,650 in current prices).
  3. It is expected that Australian wind turbine costs in 2011 will reduce to the current European costs of $2.2 million/MW. For the first 5 years of the Plan, the capital costs of wind turbines are expected to transition from the current European costs to the forecast 2015 European amount — $1.65 million/MW.
  4. In the final five years the capital costs are expected to drop to approximately $1.25 million/MW in Australia.

Wind turbines are not new technology, and this would not normally suggest such significant falls in future costs. The 7.5 MW Enercon E126 turbine proposed is significantly larger than any currently installed on-shore commercial turbine and is still being developed. No firm costs for such a turbine are yet available. It seems very optimistic to suggest that the cost of these turbines will almost halve over the next decade. That projection is not supported by ABARE, which forecasts [2] a reduction in the cost of wind power of 21% from 2015 to 2030. This is a simple average reduction of 1.5% per year.

Given the current cost of turbines in Australia ($2.9 million/MW), and accepting some economy of scale both in turbine size and volume purchased, it seems more prudent to assume the cost will fall from the current $2.9 million/MW to $2.5 million/MW over the decade, in line with ABARE’s forecast.
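A sketch of that alternative trajectory, applying ABARE’s simple average reduction of 1.5% per year rather than the Plan’s assumed near-halving:

```python
# Wind capital cost after a decade, at ABARE's simple average reduction of 1.5%/yr.
current_cost = 2.9              # $ million/MW, current Australian wind farm cost
annual_reduction = 0.015        # ABARE's simple average reduction per year
years = 10

cost_2020 = current_cost * (1 - annual_reduction * years)
print(f"Cost after a decade: ${cost_2020:.2f} million/MW")
# ~$2.5 million/MW, consistent with the figure adopted above.
```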

4.2. Solar costs

The solar plant proposed by the ZCA2020 Plan is a solar thermal tower with 17 hours molten salt energy storage. The proposed 220 MW plant is 13 times larger than any existing solar tower system. As with the wind proposal, no firm costs for such a large sized plant are yet available.

We have prepared an analysis of two solar thermal tower projects of varying sizes, using molten salt with varying energy storage sizes. These are plants for which the capital cost could be identified; they are shown in Table 6. All costs are converted to 2010 A$.

Part of the variation in cost per MW is related to the hours of storage. The size of the solar field has to be increased to support more hours of storage, as does the size of the storage tanks. According to the Plan (p140), 80% of the cost of a solar tower system using molten salt storage comes from the solar field and the storage system. Scaling up the storage will therefore increase the cost per MW. These costs have been adjusted in Table 6 to 17 hours storage as proposed in the Plan.

The Plan (p61) has applied the following pricing, which falls as more solar plants are installed (a quick tally of the implied totals follows the list):

  1. The first 1,000 MW is priced at a similar price to SolarReserve’s Tonopah project at $10.5 million/MW.
  2. The next 1,600 MW is priced slightly cheaper at $9.0 million/MW.
  3. The next 2,400 MW is priced at Sargent & Lundy's conservative mid-term estimate for the Solar 100 module, which is $6.5 million/MW.
  4. The next 3,700 MW is priced at Sargent & Lundy's Solar 200 module price of $5.3 million/MW.
  5. The remaining 33,800 MW is priced at $115 billion or $3.4 million/MW.
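Taken together, these tranches fix the Plan's total and average solar capital cost. A minimal tally (using only the figures listed above) reproduces the Plan's $175 billion total and the $4.1 million/MW average referred to later in this critique:

    # Weighted-average solar capital cost implied by the Plan's pricing tranches (p61).
    tranches = [          # (capacity in MW, price in $ million/MW)
        (1000, 10.5),
        (1600, 9.0),
        (2400, 6.5),
        (3700, 5.3),
        (33800, 3.4),
    ]
    total_mw = sum(mw for mw, _ in tranches)
    total_cost = sum(mw * price for mw, price in tranches)           # $ million
    print(f"Total capacity: {total_mw / 1000:.1f} GW")               # 42.5 GW
    print(f"Total capital cost: ${total_cost / 1000:.0f} billion")   # ~$175 billion
    print(f"Average cost: ${total_cost / total_mw:.1f} million/MW")  # ~$4.1 million/MW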

The Tonopah project is treated as a First-Of-A-Kind (FOAK) plant. Unfortunately the Tonopah plant has only 10 hours of storage [8], not the 17 hours required by the Plan. Grossing up the $10.5 million/MW from 10 hours to 17 hours, based on the additional materials needed, makes the cost $16.4 million/MW. For comparison, the Gemasolar plant shown in Table 6 has a scaled-up cost of $25.7 million/MW.
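That gross-up follows from the Plan's own observation (p140) that roughly 80% of the cost of such a plant sits in the solar field and storage, both of which scale with hours of storage, while the remaining 20% (power block and the like) does not. A minimal sketch of the calculation, under that assumption:

    # Scaling Tonopah's cost from 10 to 17 hours of storage.
    # Assumes ~80% of cost (field + storage) scales linearly with storage hours
    # and the remaining ~20% is fixed; this is our reading of the Plan (p140).
    tonopah_cost = 10.5                   # $ million/MW at 10 hours storage
    fixed_share, scaling_share = 0.2, 0.8
    scaled_cost = tonopah_cost * (fixed_share + scaling_share * 17 / 10)
    print(f"Cost grossed up to 17 hours: ${scaled_cost:.1f} million/MW")   # ~$16.4 million/MW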

ABARE [2] forecasts a reduction in the cost of solar thermal with storage of 34% from 2015 to 2030, a simple average reduction of 2% per year. It would seem more prudent to assume the price will fall in line with ABARE's assessment, which would lower the price from $16.4 million/MW to $13.7 million/MW over the decade.

4.3. Assessment of generator capital costs based on revised capacity

In 3.3 we estimated the needed capacity to meet reliability standards in the electricity networks. From Table 4 the wind capacity needed was 68 GW and solar thermal plant capacity was 87 GW.

In this section we take the construction timelines suggested in the Plan (p57, p67) and gross them up to meet the capacity figures above. We then apply the prices calculated in 4.1 and 4.2 to calculate the revised total capital cost.

Table 7 and Table 8 apply a construction schedule as close as possible to the schedules provided in Table 3.7 and Table 3.14 of the Plan. The price each year is assumed to fall uniformly over the 10 years. We recognise this is not what would happen in practice but the end result would not vary greatly.

The Plan’s projected capital cost of wind = $72 billion.

The Plan’s projected capital cost of CST = $175 billion.

Because the required capacity for wind is 36% higher in this analysis than in the Plan, and the capacity for solar is 105% higher, there is a significant increase in capital cost over the Plan. This is particularly so for the solar component, as the average cost per MW over the 10 years has increased from the BZE assessment of $4.1 million to $14.6 million, a 3.6-fold increase in average capital cost.

4.4. Assessment of the revised total investment cost

As the total installed capacity has increased then both the transmission system and biomass supply will also need to be increased. For the purpose of this assessment, the biomass is assumed to increase pro rata with the increase in solar thermal capacity. The transmission is assumed to increase pro rata with the total installed capacity. The actual increases could only be properly assessed with a full LOLP analysis.

The Plan assumes that the biomass fuel will be transported from the biomass pelletising plants, which are located in the wheat growing areas, to the solar thermal power plants by electrified railway lines.  It seems the Plan does not include the cost of these.  We have made an allowance of $54 billion for the capital cost of the electrified rail system for the biomass fuel handling logistics.  This assumes 300km average rail line distance per solar power site, for 12 sites at $15 million/km of electrified rail line.  This is included in our revised total investment cost shown in Table 9.
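The allowance is straightforward to reproduce (a minimal sketch using only the assumptions stated above):

    # Capital allowance for electrified rail to the solar sites.
    sites = 12            # solar power sites in the Plan
    km_per_site = 300     # assumed average rail distance per site
    cost_per_km = 15      # $ million per km of electrified rail line
    total = sites * km_per_site * cost_per_km                # $ million
    print(f"Rail allowance: ${total / 1000:.0f} billion")    # $54 billion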

4.5. Uncertainty in the capital cost estimates

Capital costs for this Plan are highly uncertain. None of the proposed generator types has ever been built at the size proposed. Previous estimates for wind power and solar power have often proved to be gross underestimates. Our estimates include projections of cost reductions due to learning rates, as does the Plan. However, there is evidence that real costs have been increasing for decades, so the learning-rate reductions have to be considered uncertain.

The Plan calls for electrified rail lines to run from the pelleting plants in the wheat growing areas to the solar power stations but the capital cost for lines was not included.  We have included an estimate for this as discussed in 4.4.

There is uncertainty on the downside due to potential technological breakthroughs, which might make achievable the learning-curve rates forecast by various sources (Sargent & Lundy, NEEDS, DOE, IEA and ABARE). BZE projects a cost reduction of some 50% for solar and wind over the decade. We take this as the downside uncertainty.

There are several uncertainties on the upside:

  1. A qualified estimator will state that the uncertainty on the upper end is as high as 100% for a conceptual estimate involving a particular design using mature technology for a particular site. The Plan and our estimates are for a concept that does not involve mature technology, without specific site surveys and without a system design for a totally redesigned electricity system.
  2. Previous estimates for solar thermal plants over the past two decades have often underestimated the cost of the actual plants.  For example, the estimated cost of Solar Tres / Gemasolar increased by 260% between 2005 and 2009 (when construction began).
  3. A loss of load probability (LOLP) study would be essential to accurately estimate the generating capacity and transmission network requirements before this Plan was executed.
  4. The wind power contribution to reliability is based on an assumed firm capacity of 15%.  Many consider this highly optimistic.  Should the LOLP study suggest a significantly lower firm wind capacity, then much more solar thermal and biomass capacity would be required, increasing the total capital cost.
  5. Some consider that almost none of our hydro resource could be used in the way assumed in the Plan to back up for low sun and low wind periods.  If this proved to be the case then more solar and biomass capacity would be required.
  6. All existing CST pilot plants have been built in areas that are relatively close to the necessary infrastructure such as road, water, gas mains and a work force. This will not be the case for most of the 12 sites proposed for Australia.

In Table 9, we have used a downside uncertainty of 50% and an upside uncertainty of 260% for solar plants and 200% for the other components.

5. Electricity Costs

The wholesale electricity cost, the price paid to the generator, makes up between 30% and 50% of retail electricity prices, so any significant increase in the wholesale cost will flow through to consumer electricity prices. The Plan claims that wholesale prices will rise from the present $55/MWh to $120/MWh after 2020 (p122).

Table 10 shows estimates for the cost of electricity from solar thermal plants and wind farms for different years. It is clear that the Plan estimate for solar is significantly less than the other estimates. This would suggest a significantly lower capital cost for solar in the Plan than anticipated by these other assessments. The Plan does not offer an electricity cost for wind farms.

Based on the ABARE electricity cost estimates for solar thermal and wind shown in Table 10, if the ratio of energy generated is 60% solar and 40% wind, then the wholesale electricity price would need to be, at a minimum, $270/MWh by 2020 to cover the cost of generation.

However, this is not a total system cost. The wholesale cost of electricity would be about $500/MWh based on the capital cost of $1,709 billion, a supply of 443 TWh/a, a lifetime of 30 years and a real interest rate of 10% per annum.

If the capital cost is at the low end of the range, $885 billion, the electricity cost would be about $270/MWh.  If the capital cost is at the high end of the range, the electricity cost would be about $1200/MWh.
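These figures can be approximated by charging 10% interest on the full capital plus straight-line depreciation over the 30-year life, spread over the 443 TWh/a supplied; this is our inference of the method, not a formula stated above, but it lands close to the quoted $270, $500 and $1,200/MWh. A minimal sketch, assuming that method and ignoring operating and fuel costs:

    # Approximate wholesale electricity cost from capital cost alone.
    # Annual charge assumed to be 10% interest on capital plus straight-line
    # depreciation over 30 years; O&M, fuel and transmission losses are ignored.
    def wholesale_cost(capital_billion, supply_twh=443, rate=0.10, life_years=30):
        annual_charge = capital_billion * 1e9 * (rate + 1.0 / life_years)  # $/year
        return annual_charge / (supply_twh * 1e6)                          # $/MWh

    for capital in (885, 1709, 4191):   # low, central and high capital estimates ($ billion)
        print(f"${capital} billion -> about ${wholesale_cost(capital):.0f}/MWh")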

The $500/MWh cost is over 4 times the cost proposed in the Plan and nearly 10 times the current cost of electricity.  The low end of the estimate, $270/MWh, is more than twice the estimate proposed by the Plan and 5 times the current cost of electricity.  The high end of the range is over 10 times the cost proposed in the Plan and over 20 times the current cost of electricity.

6. Implementation Timeline

The Plan is not economically viable; therefore it will not be built to the timeline envisaged in the Plan. As an example of how unrealistic the timeline is, the Plan assumes 1,000 MW of CST will be under construction in 2011. This is clearly impossible. The first plant with 100 MW peak capacity and just 10 hours of storage won’t be on-line in the USA until 2013 at the earliest. It could be years before Australia can begin building plants with 17 hours of storage.

Trying to schedule the proposed build at this stage is a category error: the project is simply not scoped, and it is unlikely that any project manager would touch it.

We expect only demonstration plants will be built until there is confidence that they can become economically viable. We doubt any solar thermal plants of the size and availability proposed in the Plan will be on line before 2020.

7. Conclusions

We have reviewed the “Zero Carbon Australia – Stationary Energy Plan” by Beyond Zero Emissions.  We have evaluated and revised the assumptions and cost estimates. We conclude:

  • The ZCA2020 Stationary Energy Plan has significantly underestimated the cost and timescale required to implement such a plan.
  • Our revised cost estimate is nearly five times higher than the estimate in the Plan: $1,709 billion compared to $370 billion.  The cost estimates are highly uncertain with a range of $855 billion to $4,191 billion for our estimate.
  • The wholesale electricity costs would increase nearly 10 times above current costs to $500/MWh, not the $120/MWh claimed in the Plan.
  • The total electricity demand in 2020 is expected to be 44% higher than proposed: 449 TWh compared to the 325 TWh presented in the Plan.
  • The Plan has inadequate reserve capacity margin to ensure network reliability remains at current levels. The total installed capacity needs to be increased by 65% above the proposed capacity in the Plan to 160 GW compared to the 97 GW used in the Plan.
  • The Plan’s implementation timeline is unrealistic. We doubt any solar thermal plants of the size and availability proposed in the Plan will be on line before 2020. We expect only demonstration plants will be built until there is confidence that they can be economically viable.
  • The Plan relies on many unsupported assumptions, which we believe are invalid; two of the most important are:
    1. The claim in the Executive Summary that “The Plan relies only on existing, proven, commercially available and costed technologies.”
    2. Solar thermal power stations with the performance characteristics and availability of baseload power stations exist now or will in the near future.

8. References

[1] Australian Sustainable Energy – Zero Carbon Australia – Stationary Energy Plan

http://media.beyondzeroemissions.org/ZCA2020_Stationary_Energy_Report_v1.pdf

[2] ABARE Australian energy projections to 2029-30

http://www.abare.gov.au/publications_html/energy/energy_10/energy_proj.pdf

[3] European Commission – Mobility and Transport

http://ec.europa.eu/transport/urban/vehicles/road/electric_en.htm

[4] Doherty et al – Establishing the Role That Wind Generation May Have in Future Generation Portfolios IEEE TRANSACTIONS ON POWER SYSTEMS, VOL. 21, NO. 3, AUGUST 2006

[5] IEA – Technology Roadmap Concentrating Solar Power

http://www.iea.org/papers/2010/csp_roadmap.pdf

[6] ABARE’s list of major electricity generation projects – April 2009

http://www.abare.gov.au/publications_html/energy/energy_09/EG09_AprListing.xls

[7] ABARE’s list of major electricity generation projects – April 2010

http://www.abare.gov.au/publications_html/energy/energy_10/EG10_AprListing.xls

[8] SolarReserve gets green light on Nevada solar thermal project, July 2010

http://solarreserve.com/news/SolarReservePUCNApprovalAnnouncement072810.pdf


July 27, 2010

Is Cash for Clunkers a great big new mistake?

Filed under: Economic issues, Emissions Reduction — buildeco @ 6:23 pm

Guest post by Alan Davies – The Melbourne Urbanist

Clunker – 1992 Nissan Pulsar

Did Julia Gillard read my post last Thursday arguing that she should take action in the election campaign to improve the fuel efficiency of Australia’s cars? Possibly not, but I wish now I’d left in the sentence saying that whatever happens, please don’t make the same mistake as President Obama and bring in a poorly-designed “cash for clunkers” program!

Now the PM has announced today her own Cash for Clunkers initiative (here and here) with the ostensible purpose of saving one million tonnes in carbon emissions (this is not an annual saving but the total over the life of the scheme).

The scheme will be financed by cutting back other programs, including the solar and carbon capture and storage programs, and the renewable energy bonus scheme (see here).

President Obama at least had the excuse that his scheme was primarily a pump-priming exercise designed to lift consumer spending in the wake of the GFC. In our context, however, Cash for Clunkers looks like seriously bad policy. Even on the skimpy detail released today, it is evident there are clear failings.

First, it would be hard to think of a more expensive way to offset carbon emissions. Even accepting at face value the PM’s claim that the scheme will save one million tonnes of CO2 at a total cost of $396 million, that means it costs $396 to save a tonne of carbon.

The going rate to offset a tonne of carbon is around $20 or less (see here, here, here and here). But even if we play conservatively and assume it costs $50 per tonne, that would give an all-up cost of $50 million – still much less than what Cash for Clunkers will cost.
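The arithmetic behind that comparison is simple (a minimal sketch using the figures above):

    # Implied abatement cost of Cash for Clunkers versus buying offsets.
    scheme_cost = 396e6      # total scheme cost, $
    abatement = 1e6          # claimed lifetime saving, tonnes CO2
    print(f"Cost per tonne: ${scheme_cost / abatement:.0f}")       # $396/t
    for offset_price in (20, 50):    # $/tonne: going rate, and a conservative figure
        print(f"Same abatement at ${offset_price}/t: ${abatement * offset_price / 1e6:.0f} million")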

Second, the scheme takes no account of the embodied energy and associated emissions involved in bringing forward the production of the 200,000 new cars that will replace clunkers.

Third, scrapping the 200,000 trade-ins will very likely increase prices of old, second hand cars and make those who rely on them for transport worse off.

Fourth, the scheme will create additional economic demand at a time when the threat to the economy is excessive demand and a possible further interest rate increase is in the offing. Conversely, we can expect the market for new cars to be depressed for a period once the scheme has finished.

I won’t say it’s a flaw at this stage, but the equity implications of this scheme also warrant closer examination. Who runs a fifteen-year-old car but can afford to buy a new car just because it’s $2,000 cheaper? My guess is many of the beneficiaries will be middle-class households upgrading second and third cars.

Somewhere in all this, consideration also needs to be given to what proportion of clunkers would have “died off” of natural causes and what proportion would in any event have been replaced by more emissions-efficient vehicles.

In fairness, there are other benefits to getting old cars off the road (e.g. newer cars cause less pollution and are safer) that should be acknowledged, but the PM has not mentioned these. This scheme has all the hallmarks of having been pulled together at short notice to offset the poor public reaction to the proposed 150 person Citizens Assembly on how Australia should respond to climate change.

