Climate change

June 23, 2011

Disposal of UK plutonium stocks with a climate change focus

Filed under: IFR (Integral Fast Reactor) Nuclear Power, Nuclear Energy — buildeco @ 1:49 pm
by Barry Brook

In the 1950s, following World War II, the United Kingdom and a handful of other nations developed a nuclear weapons arsenal. This required the production of plutonium metal (or highly enriched uranium) in purpose-built facilities. ‘Civil’ plutonium was also produced, since the facilities for separation existed and it was thought that this fissile material would prove useful in further nuclear power development.

Fifty years on, the question of what to do with the UK’s separated plutonium stocks is an important one. Should it, for instance, be downblended with uranium to produce mixed oxide fuel in thermal reactors, and then disposed of in a geological repository when it has been ‘spiked’ by fission products and higher actinide isotopes? Or is there, perhaps, an alternative that would be of far greater medium- to long-term benefit to the UK, because it treats the plutonium not as waste, but as a major resource to capitalise on?

In the piece below, Tom Blees explores these questions. This was written as a formal submission in response to the paper “Management of the UK’s Plutonium Stocks: A consultation on the long-term management of UK owned separated civil plutonium” (the background paper is interesting and not all that long).

This is the final post in a series of three advocating SCGI’s position on the need for the IFR: (i) to provide abundant low-carbon energy, and (ii) as a highly effective means of nuclear waste management and fuel extension for sustainable (effectively inexhaustible) nuclear fission.

—————————–

Response to a consultation on the management of the UK’s plutonium stocks

Tom Blees, President of The Science Council for Global Initiatives

Do you agree that it is not realistic for the Government to wait until fast breeder reactor technology is commercially available before taking a decision on how to manage plutonium stocks?

I strongly disagree, and I hope that you’ll take the time to read this and consider the fact that the fast reactor option is far more imminent than you might have heretofore believed. Not only that, but it is arguably the best option by far.

Current Fast Reactor Development

Worldwide there are well over 300 reactor-years of experience with fast reactors. Russia’s BN-600 fast reactor has been producing commercial electricity for over 30 years, and Russia is beginning to build BN-800 reactors both for its own use and for China. India’s first commercial-scale fast reactor is due to be finished within a year or two. South Korea has already built a sizeable pyroprocessing facility to convert its spent LWR fuel into metal fuel for fast reactors, and has refrained from starting it up only because of diplomatic agreements with the USA that are due to be renegotiated in the near future. China is building a copy of the Experimental Breeder Reactor II (EBR-II), the mainstay of the Integral Fast Reactor (IFR) development program at Argonne National Laboratory in the USA. Japan has reopened its Monju fast reactor to continue that research, though it should be noted that Toshiba and Hitachi contested the wisdom of that decision, favoring instead the metal-fueled fast reactor design exemplified by the EBR-II.

The advantages of metal fuel in fast reactors are difficult to overstate. Rather than attempt to explicate the details here, I would refer the reader to the following URL: http://tinyurl.com/cwvn8n. This is a chapter from a book that deals at length with the Integral Fast Reactor (IFR). The advantages of this system in safety, economics, fuel utilization, proliferation resistance, and plutonium breeding or burning far outstrip any of the other options mentioned in the consultation document.

While fast breeders are mentioned as a future option, the rest of the document seems to have been unduly influenced by those who favor either MOX fabrication or long-term disposal. Both of these are mistakes that the USA has already made to one degree or another, mistakes that I would hope the UK will avoid when presented with the facts.

A Little History

In 1993, Presidents Yeltsin and Clinton signed nuclear disarmament agreements that would result in each country possessing 34 tons of excess weapons-grade plutonium. Since proliferation concerns would warrant safe disposal of this material, each president asked for the advice of one of their prominent scientists as to how to get rid of it. Yeltsin asked Dr. Evgeny Velikhov, one of the most prominent scientists in Russia to this day, who had been intimately involved in Russia’s military and civilian nuclear programs and was, in fact, in charge of the Chernobyl cleanup. Clinton asked Dr. John Holdren, who is now the director of the White House Office of Science & Technology Policy—President Obama’s top science advisor.

In July of 2009 I arranged for a meeting with Dr. Velikhov and Dr. Holdren in Washington, D.C. At that meeting we discussed what had happened when those two had met to decide on what advice to give to their respective presidents regarding the disposition of 68 tons of weapons-grade plutonium. Velikhov’s position was that it should be burned in fast reactors to generate electricity. Holdren disagreed, contending that each country should build a MOX plant to dispose of it. That advice led to the MOX plant now being built in South Carolina by Areva, which is expected to cost as much as ten billion dollars by the time all is said and done. And the processing of that plutonium into MOX fuel will take until the year 2030 at the very least.

Dr. Velikhov wasn’t buying it, nor was Yeltsin. But Holdren was in a tough position. Clinton had already signaled his lack of support for the IFR project that had been ongoing for nine years and was now in its final stages. It would be shut down the very next year by a duped Congress that had no idea of its importance and was manipulated into cutting off its funding for purely political reasons. Clinton wanted Russia’s solution for disposal of the excess plutonium to be the same as the USA’s, but Yeltsin said that he wasn’t prepared to spend the money. If Clinton wanted Russia to build a MOX plant, then America could pay for it. Needless to say, that never happened. And after 17 years of indecision, last spring the USA finally agreed that Russia should go ahead and dispose of their 34 tons in fast reactors.

By this time, the USA had contracted with Areva to build the South Carolina MOX plant, now under construction. That boondoggle will be a painfully slow and inefficient method of disposing of the plutonium compared to using fast reactors. Dr. Holdren made it clear at that meeting that he fully comprehends the wisdom of using IFRs to dispose of plutonium.

Salesmanship

Areva has not only talked the USA into building a horrendously expensive MOX plant, but judging by the tone of this consultation document they have apparently convinced some of the policymakers in the UK to do the same. This is as wrong now as it was when Holdren advised Clinton in 1993. Yet the South Carolina MOX plant’s construction is well underway and, like most big government-funded projects, would be about as hard to cancel at this point as turning a supertanker in the Thames. But the UK needn’t go down that road.

Areva touts its MOX technology as the greatest thing since sliced baguettes, yet in reality it only increases the utilization of the energy in uranium from about 0.6% to 0.8%. Metal-fueled fast reactors, on the other hand, can recover virtually 100% of that energy. Ironically, when I discussed the ultimate shortcomings of Areva’s MOX policies with one of their own representatives, those unpleasant details were dismissed with the assurance that all that will be dealt with when we make the transition to fast reactors. Yet with billions of dollars tied up in MOX technology, Areva is anything but anxious to see that transition happen anytime soon. And the more countries it can convince to adopt MOX technology, the slower that transition will happen, for each of those countries will then have a large investment sunk into the same inferior technology.
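The gap between these utilization figures is easy to quantify. Here is a rough back-of-envelope sketch; the ~80 TJ/kg fission-energy value is an approximate textbook figure, and the utilization percentages are the ones quoted above:

```python
# Rough comparison of thermal energy recovered per tonne of mined natural
# uranium under the utilization figures quoted above. The ~80 TJ per kg
# of fissioned heavy metal is an approximate textbook value.
FISSION_ENERGY_TJ_PER_KG = 80.0  # thermal energy from fissioning 1 kg

def energy_per_tonne(utilization):
    """Thermal energy (TJ) recovered per tonne of natural uranium."""
    return 1000.0 * utilization * FISSION_ENERGY_TJ_PER_KG

once_through = energy_per_tonne(0.006)  # ~0.6%: once-through LWR cycle
with_mox     = energy_per_tonne(0.008)  # ~0.8%: after one MOX recycle
fast_reactor = energy_per_tonne(1.0)    # ~100%: full-recycle fast reactor

print(f"once-through: {once_through:,.0f} TJ/t")
print(f"with MOX:     {with_mox:,.0f} TJ/t")
print(f"fast reactor: {fast_reactor:,.0f} TJ/t "
      f"(~{fast_reactor / once_through:.0f}x once-through)")
```

However one tweaks the inputs, the conclusion is the same: MOX recycling buys roughly a one-third improvement, while full recycle in fast reactors buys more than a hundredfold.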

A Pox on MOX

MOX is not only expensive, but it results in the separation of plutonium (though of course that’s not the issue in this case, since the plutonium is already separated). That being said, the issue of proliferation from reactor-grade plutonium is quite overblown in general, since its isotopic composition makes it nearly impossible to fashion a nuclear weapon out of it. But regardless of its actual risk in that regard, its perception by the scientifically uninformed makes it politically radioactive, and international agreements to limit the spread of fissile material treat it as if it were weapons-grade. So any plans for the disposition of any sort of plutonium—whatever its composition—must take the politics into account.

If the UK were to spend five billion pounds or so on a MOX plant, it would end up with a lot of overpriced fuel that would have to be given away at a loss, since any utility company would surely choose to buy cheaper fuel from enriched virgin uranium. You would have a horrendously expensive single-purpose facility that would have to operate at a substantial loss for decades to consume the vast supplies of plutonium in question. And you would still end up with vast amounts of long-lived spent fuel that would ultimately, hopefully, be converted and used in fast reactors. Why not skip the MOX step altogether?

Given that the plutonium contains an almost unimaginable amount of energy within it, opting for long-term disposal via vitrification and burial would be unconscionable. The world will surely be in need of vast amounts of clean energy in the 21st century as the burgeoning population will demand not only energy for personal and industrial use, but will require energy-hungry desalination projects on a stunning scale. The deployment of fast reactors using the plutonium that earlier policymakers in the UK wisely decided to stockpile is a realistic solution to the world’s fast-approaching energy crisis.

Sellafield Nuclear Plant, UK

But this consultation report questions whether fast reactors can be deployed in the near future on a commercial scale. They can.

The PRISM Project

While the scientists and engineers were perfecting the many revolutionary features of the IFR at the EBR-II site in the Eighties and early Nineties, a consortium of major American firms collaborated with them to design a commercial-scale fast reactor based on that research. General Electric led that group, which included companies like Bechtel, Raytheon and Westinghouse, among others. The result was a modular reactor design intended for mass production in factories, called the PRISM (Power Reactor Innovative Small Module). A later iteration, the S-PRISM, would be slightly larger at about 300 MWe, while still retaining the features of the somewhat smaller PRISM. For purposes of simplicity I will refer hereinafter to the S-PRISM as simply the PRISM.

After the closure of the IFR project, GE continued to refine the PRISM design and is in a position to pursue the building of these advanced reactors as soon as the necessary political will can be found. Unfortunately for those who would like to see America’s fast reactor be built in America, nuclear politics in the USA is nearly as dysfunctional as it is in Germany. The incident at Fukushima has only made matters worse.

The suggestion in this report that fast reactors are thirty years away is far from accurate. GE-Hitachi plans to submit the PRISM design to the Nuclear Regulatory Commission (NRC) next year for certification. That time-consuming process, while certainly not taking thirty years, may well still be under way even as the first PRISM is built in another country.

This is far from unprecedented. In the early Nineties, GE submitted its Advanced Boiling Water Reactor (ABWR) design to the NRC for certification. GE then approached Toshiba and Hitachi and arranged for each of those companies to build one in Japan. Those two companies proceeded to get the design approved by their own NRC counterpart, built the first two ABWRs in just 36 and 39 months, fueled and tested them, then operated them for a year before the NRC in the US finally certified the design.

International Partners

On March 24th an event was held at the Russian embassy in Washington, D.C., attended by a small number of members of the nuclear industry and its regulatory agencies, both foreign and domestic, as well as representatives of NGOs concerned with nuclear issues. Sergei Kirienko, the director-general of Rosatom, Russia’s nuclear power agency, was joined by Dan Poneman, the deputy secretary of the U.S. Dept. of Energy. This was shortly after the Fukushima earthquake and tsunami, at a time when the nuclear power reactors at Fukushima Daiichi were still in a very uncertain condition.

Mr. Kirienko and Mr. Poneman first spoke about the ways in which the USA and Russia have been cooperating in tightening control over fissile material around the world. Then Mr. Kirienko addressed what was on the minds of all of us: the situation in Japan and what that portends for nuclear power deployment in the USA and around the world.

He rightly pointed out that the Chernobyl accident almost exactly 25 years ago, and the Fukushima problems now, clearly demonstrate that nuclear power transcends national boundaries, for any major accident can quickly become an international problem. For this reason Kirienko proposed that an international body be organized that would oversee nuclear power development around the world, not just in terms of monitoring fissile material for purposes of preventing proliferation (much as the IAEA does today), but to bring international expertise and oversight to bear on the construction and operation of nuclear power plants as these systems begin to be built in ever more countries.

Kirienko also pointed out that the power plants at risk in Japan were old reactor designs. He said that this accident demonstrates the need to move nuclear power into the modern age. For this reason, he said, Russia is committed to the rapid development and deployment of metal-fueled fast neutron reactor systems. His ensuing remarks specifically reiterated not only a fast reactor program (where he might have been expected to speak about Gen III or III+ lightwater reactor systems), but the development of metal fuel for these systems. This is precisely the technology that was developed at Argonne National Laboratory with the Integral Fast Reactor (IFR) program, but then prematurely terminated in 1994 in its final stages.

For the past two years I’ve been working with Dr. Evgeny Velikhov (director of Russia’s Kurchatov Institute and probably Russia’s leading scientist/political advisor) to develop a partnership between the USA and Russia to build metal-fueled fast reactors; or to be more precise, to facilitate a cooperative effort between GE-Hitachi and Rosatom to build the first PRISM reactor in Russia as soon as possible. During those two years there have been several meetings in Washington to put the pieces in place for such a bilateral agreement. The Obama administration, at several levels, seems to be willingly participating in and even encouraging this effort.

Dr Evgeny Velikhov, SCGI member

Dr. Velikhov and I (and other members of the Science Council for Global Initiatives) have also been discussing the idea of including nuclear engineers from other countries in this project, countries which have expressed a desire to obtain or develop this technology, some of which have active R&D programs underway (India, South Korea, China). Japan was very interested in this technology during the years of the IFR project, and although their fast reactor development is currently focused on their oxide-fueled Monju reactor there is little doubt that they would jump at the chance to participate in this project.

Dr. Velikhov has long been an advocate of international cooperation in advanced nuclear power research, having launched the ITER project about a quarter-century ago. He fully comprehends the impact that international standardization and deployment of IFR-type reactors would have on the well-being of humanity at large. Yet if Russia and the USA were to embark upon a project to build the first PRISM reactor(s) in Russia, one might presume that the Russians would prefer to make it a bilateral project that would put them at the cutting edge of this technology and open up golden opportunities to develop an industry to export it.

It was thus somewhat surprising when Mr. Kirienko, in response to a question from one of the attendees, said that Russia would be open to inviting Japan, South Korea and India to participate in the project. One might well question whether his failure to include China in this statement was merely an oversight or whether that nation’s notorious reputation for economic competition often based on reverse-engineering new technologies was the reason.

I took the opportunity, in the short Q&A session, to point out to Mr. Poneman that the Science Council for Global Initiatives includes not just Dr. Velikhov but most of the main players in the development of the IFR, and that our organization would be happy to act as a coordinating body to assure that our Russian friends will have the benefit of our most experienced scientists in the pursuit of this project. Mr. Poneman expressed his gratitude for this information and assured the audience that the USA would certainly want to make sure that our Russian colleagues had access to our best and brightest specialists in this field.

Enter the United Kingdom

Sergei Kirienko was very clear in his emphasis on rapid construction and deployment of fast reactors. If the United States moves ahead with supporting a GE-Rosatom partnership, the first PRISM reactor could well be built within the space of the next five years. The estimated cost of the project will be in the range of three to four billion dollars (USD), since it will be the first of its kind. The more international partners share in this project, the less will be the cost for each, of course. And future copies of the PRISM have been estimated by GE-Hitachi to cost in the range of $1,700/kW.
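The article’s own figures make the scale of the opportunity easy to check. Taking the ~300 MWe module size, the $1,700/kW estimate for later units, and the midpoint of the $3–4 billion first-of-a-kind range (all of them the estimates quoted above, not independent data):

```python
# What the quoted figures imply per PRISM module. All inputs are the
# article's own estimates; this is just the arithmetic.
capacity_kw = 300_000      # ~300 MWe per S-PRISM module
noak_cost_per_kw = 1_700   # GE-Hitachi estimate for later ("nth") units
foak_cost = 3.5e9          # midpoint of the $3-4B first-of-a-kind range

noak_cost = capacity_kw * noak_cost_per_kw
print(f"nth-of-a-kind module: ${noak_cost / 1e9:.2f}B")
print(f"first-of-a-kind premium: ~{foak_cost / noak_cost:.0f}x")
```

In other words, the quoted figures imply a later-unit cost of roughly half a billion dollars per module, with the first unit carrying several times that as a one-off development premium that international partners could share.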

Work is under way on gram samples of civil plutonium

According to this consultation document, the UK is looking at spending £5-6 billion or more in dealing with its plutonium. Yet if the plutonium were to simply be secured as it currently is for a short time longer and the UK involved itself in the USA/Russia project, the cost would be a small fraction of that amount, and when the project is completed the UK will have the technology in hand to begin mass-production of PRISM reactors.

The plutonium stocks of the UK could be converted into metal fuel using the pyroprocessing techniques developed by the IFR project (and which, as noted above, are ready to be utilized by South Korea). The Science Council for Global Initiatives is currently working on arranging for the building of the first commercial-scale facility in the USA for conversion of spent LWR fuel into metal fuel for fast reactors. By the time the first PRISM is finished in Russia, that project will also likely be complete.

What this would mean for the UK would be that its stores of plutonium would become the fast reactor fuel envisioned by earlier policymakers. After a couple of years in the reactor the spent fuel would be ready for recycling via pyroprocessing, then either stored for future use or used to start up even more PRISM reactors. In this way not only would the plutonium be used up but the UK would painlessly transition to fast reactors, obviating any need for future mining or enrichment of uranium for centuries, since once the plutonium is used up the current inventories of depleted uranium could be used as fuel.

Conclusion

Far from being decades away, a fully-developed fast reactor design is ready to be built. While I’m quite certain that GE-Hitachi would be happy to sell a PRISM to the UK, the cost and risk could be reduced to an absolute minimum by the happy expedient of joining in the international project with the USA, Russia, and whichever other nations are ultimately involved. The Science Council for Global Initiatives will continue to play a role in this project and would be happy to engage the UK government in initial discussions to further explore this possibility.

There is little doubt that Russia will move forward with fast reactor construction and deployment in the very near future, even if the PRISM project runs into an unforeseen roadblock. It would be in the best interests of all of us to cooperate in this effort. Not only will the deployment of a standardized modular fast reactor design facilitate the disposition of plutonium that is currently the driving force for the UK, but it would enable every nation on the planet to avail itself of virtually unlimited clean energy. Such an international cooperative effort would also provide the rationale for the sort of multinational nuclear power oversight agency envisioned by Mr. Kirienko and others who are concerned not only about providing abundant energy but also in maintaining control over fissile materials.

June 6, 2011

Renewables and efficiency cannot fix the energy and climate crises (part 2)

by Barry Brook

This post continues directly on from Part 1 (please read that if you’ve not already done so!). I also note the flurry of interest in the new IPCC WGIII special report on renewable energy prospects through to 2050. I will have more to say on this in an upcoming post, but in short, it fails to address — with any substance — any of the significant problems I describe below, or in the previous post. What a disappointment!

————————

Renewables and efficiency cannot fix the energy and climate crises (part 2)

Renewable energy cannot provide reliable 24-hour, 7-day-a-week power to meet baseload demand

The minimum amount of power that a city or country demands usually occurs at night (when most people are asleep); this is called the electricity ‘baseload’. Some have claimed that it is a fallacy to argue that all of this demand is needed, because utilities tend to charge cheap (‘off peak’) rates during these low-use periods, to encourage more uptake (by everything from factory machinery to hot water systems). This is because some types of power stations (e.g., coal and nuclear) are quite expensive to build and finance (with long terms to pay off the interest), but fairly cheap to run, so the utility wants to keep them humming away 24 hours a day to maximise returns. Thus, there is some truth to this argument, although if that energy is not used at night, extra must instead be supplied in the day.
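The economics described above can be sketched with a toy levelized-cost calculation. Every number below is hypothetical, chosen only to show the shape of the argument (why a capital-heavy plant wants to run around the clock), not to represent any real plant:

```python
# Why capital-heavy plants are run around the clock: a toy levelized-cost
# sketch. All numbers are hypothetical illustrations.
def cost_per_mwh(capex, fixed_charge_rate, fuel_om_per_mwh,
                 capacity_mw, capacity_factor):
    """Approximate levelized cost: annualized capital spread over the
    MWh actually generated, plus per-MWh fuel and O&M costs."""
    annual_capital = capex * fixed_charge_rate
    annual_mwh = capacity_mw * 8760 * capacity_factor
    return annual_capital / annual_mwh + fuel_om_per_mwh

# A hypothetical 1,000 MW plant: $5B to build, cheap to run ($10/MWh).
for cf in (0.9, 0.5):
    lcoe = cost_per_mwh(5e9, 0.08, 10.0, 1000, cf)
    print(f"capacity factor {cf:.0%}: ~${lcoe:.0f}/MWh")
```

Halving the capacity factor nearly doubles the cost of each MWh, because the same interest and construction bill is spread over half as much output. That is why utilities discount off-peak power to keep such plants humming.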

Some critical demand, however, never goes away – the power required to run hospitals, police stations, street lights, water and sewerage pumping stations, refrigerators and cold storage, transport (if we are to use electric vehicles), and so on. If the power is lost to these services, even for a short while, chaos ensues, and the societal backlash after a few such events is huge. On the other side of the energy coin, there are times when huge power demands arise, such as when everyone gets home from work to cook their meals and watch television, or when we collectively turn on our air conditioners during a heatwave. If the energy to meet this peak demand cannot be found, the result can be anything from a lot of grumpy people through to collapse of the grid as rolling blackouts occur.

Two core limitations of wind, solar and most other renewable systems are that: (i) they are inherently variable and prone to ‘gambler’s ruin’ (in the sense that you cannot know, over any planning period, when long stretches of calm or cloudy days will come, which could bring even a heavily over-compensated system to its knees), and (ii) they are not ‘dispatchable’. They’ll provide a lot of power some of the time, when you may or may not need it, and little or none at other times, when you’ll certainly need some, and may need a lot. In short, they can’t send power out on demand, yet, for better or worse, this is what society demands of an electricity system. Okay, but can these limitations be overcome?

Large-scale renewables require massive ‘overbuilding’ and so are not cost competitive

The three most commonly proposed ways to overcome the problem of intermittency and unscheduled outages are: (i) to store energy during productive times and draw on these stores during periods when little or nothing is being generated; (ii) to have a diverse mix of renewable energy systems, coordinated by a smart electronic grid management system, so that even if the wind is not blowing in one place, it will be in another, or else the sun will be shining or the waves crashing; and (iii) to have fossil fuel or nuclear power stations on standby, to take up the slack when needed.

The reality is that all of these solutions are grossly uneconomic, and even if we were willing and able to pay for them, the result would be an unacceptably unreliable energy supply system. Truly massive amounts of energy would need to be stored to keep a city or country going through long stretches of cloudy winter days (yes, these even occur in the desert) or calm nights with little wind and no sun, yet energy storage (batteries, chemical conversion to hydrogen or ammonia, pumped hydropower, compressed air), even on a small scale, is currently very expensive. A mix of different contributions (solar, wind, wave, geothermal) would help, but then we’d need to pay for each of these systems, built to a level that they could compensate for the failure of another.
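To see why “truly massive” is not an exaggeration, consider a back-of-envelope estimate of the storage needed to ride through one calm, cloudy stretch. The inputs are hypothetical round numbers for a mid-sized country, not data:

```python
# Scale of storage needed to ride through a windless, cloudy stretch.
# All inputs are hypothetical round numbers chosen for illustration.
average_demand_gw = 30        # hypothetical average electrical demand
outage_days = 5               # stretch with little wind or sun
round_trip_efficiency = 0.75  # assumed storage round-trip efficiency

# Energy that must be banked beforehand, inflated by round-trip losses.
storage_gwh = average_demand_gw * 24 * outage_days / round_trip_efficiency
print(f"storage required: ~{storage_gwh:,.0f} GWh")
```

Thousands of gigawatt-hours is orders of magnitude beyond any storage fleet yet built, and the stores must also be recharged afterwards, which is where the overbuilding problem below comes in.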

What’s more, in order to deliver all of our regular power demand whilst also charging up the energy stores, we would have to ‘overbuild’ our system many times, adding to the already prohibitive costs. As a result, an overbuilt system of wind and solar would, at times, be delivering 5 to 20 times our power demand (leading to problems of ‘dumping’ the excess energy that can’t be used or stored quickly enough or in sufficient quantity), and at other times, it would deliver virtually none of it.

If you do some modelling to work through the many contingencies, you find that a system which relies on wind and/or solar power, plus large-scale energy storage and a geographically dispersed electricity transmission network to channel power to load centres, would seem to be 10 to 40 times more expensive than an equivalent nuclear-powered system, and still less reliable. The cost to avoid 1 tonne of carbon dioxide would be >$800 with wind power compared with $22 with nuclear power.
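The shape of such cost-per-tonne figures is simple to reproduce, even though the modelling behind the quoted $800 and $22 numbers is not shown here. The sketch below uses a rough ~0.9 t CO2/MWh emissions intensity for coal and purely hypothetical cost premiums:

```python
# How a cost-per-tonne-avoided figure is derived: the extra cost of the
# low-carbon option per MWh, divided by the CO2 avoided per MWh. The
# premiums used below are hypothetical placeholders, not the article's
# modelled values.
COAL_EMISSIONS_T_PER_MWH = 0.9  # rough emissions intensity of coal power

def cost_per_tonne_avoided(premium_per_mwh):
    """Cost to avoid one tonne of CO2, given a per-MWh cost premium."""
    return premium_per_mwh / COAL_EMISSIONS_T_PER_MWH

print(f"$20/MWh premium  -> ~${cost_per_tonne_avoided(20):.0f}/t avoided")
print(f"$720/MWh premium -> ~${cost_per_tonne_avoided(720):.0f}/t avoided")
```

The point of the exercise is that the per-tonne figure scales directly with the cost premium over coal, so a technology tens of times more expensive than the alternative is also tens of times more expensive per tonne of CO2 avoided.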

The above critiques of renewable energy might strike some readers as narrow-minded or deliberately pessimistic. Surely, isn’t it just a matter of prudent engineering and sufficient integration of geographically and technologically diverse systems, to overcome such difficulties? Alas, no! Although I only have limited space for this topic in this short post, let me grimly assure you that the problem of ‘scaling up’ renewable energy to the point where it can reliably meet all (or even most) of our power needs involves solving a range of compounding, quite possibly insuperable, problems. We cannot wish these problems away — they are ‘the numbers’, ‘the reality’.

Economic and socio-political realities

Supporters of ‘100% renewable energy’ maintain that sunlight, wind, waves and plant life, combined with vast improvements in energy efficiency and energy conservation leading to a flattening or reduction in total energy demand, are the answer. This is a widespread view among environmentalists and would be perfectly acceptable to me if the numbers could be made to work. But I seriously doubt they can.

The high standard of living in the developed world has been based on cheap fossil (and nuclear) energy. While we can clearly cut back on energy wastage, we will still have to replace oil and gas. And that means a surge in demand for electricity, both to replace the energy now drawn from oil and gas and to meet the additional demand for power from that third of the world’s people who currently have no electricity at all.

Critics do not seem to understand – or refuse to acknowledge – the basis of modern economics and the investment culture. Some dream of shifts in the West and the East away from consumerism. There is a quasi-spiritualism which underpins such views. Yet at a time of crisis, societies must be ruthlessly practical in solving their core problems or risk collapse. Most people will fight tooth-and-nail to avoid a decline in their standard of living. We need to work with this, not against it. We are stuck with the deep-seated human propensity to revel in consuming and to hope for an easier life. We should seek ways to deliver in a sustainable way.

A friend of mine, the Californian entrepreneur Steve Kirsch, has put the climate-energy problem succinctly:

The most effective way to deal with climate change is to seriously reduce our carbon emissions. But we’ll never get the enormous emission reductions we need by treaty. Been there, done that – it’s not going to happen. If you want to get emissions reductions, you must make the alternatives for electric power generation cheaper than coal. It’s that simple. If you don’t do that, you lose.

Currently, no non-fossil-fuel energy technology has achieved this. So what is stopping nations replacing coal, oil and gas infrastructure with renewable energy? It is not (yet) because of any strong, society-wide opposition to a switch to renewables. No, it is economic uncertainty, technological immaturity, and good old financial risk management. Despite what ’100% renewables’ advocates would lead you to believe, it is still far from certain in what way the world will pursue a low-carbon future. You have only to look at what’s happening in the real world to verify that.

I’ve already written about fast-growing investment in nuclear energy in Asia. China, for instance, has overcome typical first-of-a-kind engineering cost overruns by building more than 25 reactors at the same time, in a bid to bring costs to, or below, those of coal.

In December 2009, there was a telling announcement from the United Arab Emirates (UAE), which wishes to sell its valuable natural gas to the export market. Within the next few years, the UAE faces a six-gigawatt increase in demand for electricity, which includes additional power required by an upgraded desalination program. Despite being desert-based with a wealth of solar resources, the UAE decided not to build large-scale solar power plants (or any other renewable technology). In terms of economics and reliability, the numbers just didn’t stack up. Instead, it has commissioned a South Korean consortium to build four new generation III+ APR-1400 reactors, at a cost of $3,500 a kilowatt installed – its first ever nuclear power plants.
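The arithmetic behind that announcement is worth checking against the quoted figures (four APR-1400 units at $3,500/kW installed):

```python
# Checking the arithmetic of the UAE order using the quoted figures.
reactors = 4
capacity_mw_each = 1400  # APR-1400 nameplate capacity
cost_per_kw = 3500       # quoted installed cost, USD

total_gw = reactors * capacity_mw_each / 1000
total_cost = reactors * capacity_mw_each * 1000 * cost_per_kw
print(f"total capacity: {total_gw:.1f} GW (vs ~6 GW demand growth)")
print(f"total cost: ~${total_cost / 1e9:.1f}B")
```

The four units cover most of the projected six-gigawatt demand increase with firm, dispatchable capacity, which is precisely the property the solar alternative could not offer at any comparable price.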

Conclusion

Nuclear power, not renewable energy or energy efficiency, will probably end up being the primary global solution to the climate and energy crises. This is the emergent result of trying to be honest, logical and pragmatic about what will and will not work, within real-world physical, economic and social constraints.

If I am wrong, and non-hydro and non-combustible renewables can indeed rise to the challenge and ways can be found to overcome the issues I’ve touched on in these two posts, then I will not complain. After all, my principal goal — to replace fossil fuels with sustainable and low-carbon alternative energy sources — would have been met. But let’s not play dice with the biosphere and humanity’s future on this planet, and bet everything on such wishful thinking. It would be a risky gamble indeed.

Renewables and efficiency cannot fix the energy and climate crises (part 1)

 by Barry Brook
We must deal simultaneously with the energy-resource and climate-change pincers

The modern world is caught in an energy-resource and climate-change pincer. As the growing mega-economies of China and India strive to build the prosperity and quality of life enjoyed by citizens of the developed world, the global demand for cheap, convenient energy grows rapidly. If this demand is met by fossil fuels, we are headed for an energy supply and climate disaster. The alternatives, short of a total and brutal deconstruction of the modern world, are nuclear power and renewable energy.

Whilst I support both, I now put most of my efforts into advocating nuclear power, because: (i) few other environmentalists are doing this, whereas there are plenty of renewable enthusiasts (unfortunately, the majority of climate activists seem to be actively anti-nuclear), and (ii) my research work on the energy replacement problem suggests to me that nuclear power will constitute at least 75% of the solution for displacing coal, oil and gas.

Prometheus, who stole fire from the Gods and gave it to mortal man

In my blog, I argue that it’s time to become “Promethean environmentalists”. (Prometheus, in Greek mythology, was the defiantly original and wily Titan who stole fire from Zeus and gave it to mortals, thus improving their lives forever.) Another term, recently used by futurist Stewart Brand, is “Ecopragmatists”. Prometheans are realists who shun romantic notions that modern governments might guide society back to an era when people lived simpler lives, or that a vastly less consumption-oriented world is a possibility. They seek real, high-capacity solutions to environmental challenges – such as nuclear power – which history has shown to be reliable.

But I reiterate — this strong support for nuclear does NOT make me ‘anti-renewables’ (or worse, a ‘renewable energy denier‘, a thoroughly unpleasant and wholly inaccurate aspersion). Indeed, under the right circumstances, I think renewables might be able to make an important contribution (e.g., see here). Instead, my reluctance to throw my weight confidently behind a ‘100% renewable energy solution’ is based on my judgement that such an effort would prove grossly insufficient, as well as being plain risky. And given that the stakes are so high (the future of human society, the fates of billions of people, and the integrity of the biosphere), failure is simply not an option.

Below I explain, in very general terms, the underlying basis of my reasoning. This is not a technical post. For those details, please consult the Thinking Critically About Sustainable Energy (TCASE) series — which is up to 12 parts, and will be restarted shortly, with many more examples and calculations.

————————

Renewables and efficiency cannot fix the energy and climate crises (part 1)

Boulton and Watt’s patented steam engine

The development of an 18th century technology that could turn the energy of coal into mechanical work – James Watt’s steam engine – heralded the dawn of the Industrial Age. Our use of fossil fuels – coal, oil and natural gas – has subsequently allowed our modern civilisation to flourish. It is now increasingly apparent, however, that our almost total reliance on these forms of ancient stored sunlight to meet our energy needs, has some severe drawbacks, and cannot continue much longer.

For one thing, fossil fuels are a limited resource. Most of the readily available oil, used for transportation, is concentrated in a few, geographically favoured hotspots, such as the Middle East. Most credible analysts agree that we are close to, or have passed, the point of maximum oil extraction (often termed ‘peak oil’), thanks to a century of rising demand. We’ve tapped less of the available natural gas (methane), used mostly for heating and electricity production, but globally it too has no more than a few decades of significant production left before supplies really start to tighten and prices skyrocket, especially if we ‘dash for gas’ as the oil wells run dry. Coal is more abundant than oil or gas, but even it has only a few centuries of economically extractable supplies.

Then there is climate change and air pollution. The mainstream scientific consensus is that emissions caused by the burning of fossil fuels, primarily carbon dioxide (CO2), are the primary cause of recent global warming. We also know that coal soot causes chronic respiratory problems, its sulphur causes acid rain, and its heavy metals (like mercury) induce birth defects and damage ecological food chains. These environmental health issues compound the problem of dwindling fossil fuel reserves.

Clearly, we must unhitch ourselves from the fossil-fuel-based energy bandwagon – and fast.

Meeting the growing demand for energy and clean water in the developing world

In the developed world (US, Europe, Japan, Australia and so on), we’ve enjoyed a high standard of living, linked to a readily available supply of cheap energy, based mostly on fossil fuels. Indeed, it can be argued that this has encouraged energy profligacy, and we really could be more efficient in the mileage we get out of our cars, the power usage of our fridges, lights and electrical appliances, and in the design of our buildings to reduce demands for heating and cooling. There is clearly room for improvement, and sensible energy efficiency measures should be actively pursued.

In the bigger, global picture, however, there is no realistic prospect that we can use less energy in the future. There are three obvious reasons for this:

1) Most of the world’s population is extremely energy poor. More than a third of all humanity, some 2.5 billion people, have no access to electricity whatsoever. For those that do, the long-term aspiration for energy growth, to achieve something approaching that used today by the developed world, is a powerful motivation for development. For a nation like India, with over 1 billion people, that would mean a twenty-fold increase in per capita energy use.

2) As the oil runs out, we will need to replace it if we are to keep our vehicles going. Oil is both a convenient energy carrier and an energy source (we ‘mine’ it). In the future, we’ll have to create our new energy carriers, be they chemical batteries or oil substitutes like methanol or hydrogen. On a grand scale, that’s going to take a lot of extra electrical energy! This applies to all countries.

3) The human population is still growing (we hope it will stabilise by mid-century at less than 10 billion), and the impacts of climate change and other forms of environmental damage are burgeoning. This will bring escalating demands for clean water (supplied at least in part artificially, through desalination and waste-water treatment); for more intensive agriculture that is not based on the ongoing displacement of natural landscapes like rainforests; and perhaps for direct geo-engineering to cool the planet, which might be needed if global warming proceeds at the upper end of current forecasts.

In short, the energy problem is going to get larger, not smaller, at least for the foreseeable future.
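As a rough illustration of point (1), using electricity as a proxy for total energy: the per-capita figures below are my own approximations for the era, not numbers from the post, and are chosen only to show the order of magnitude involved.

```python
# Rough scale of India's aspiration gap (illustrative figures, ~2010 era;
# per-capita values are approximations, not data from the post).
india_population = 1.2e9
india_per_capita_kwh = 600         # approx. annual electricity use per person
developed_per_capita_kwh = 12_000  # approx. developed-world benchmark

scale_up = developed_per_capita_kwh / india_per_capita_kwh  # the ~20x above
extra_twh = india_population * (developed_per_capita_kwh - india_per_capita_kwh) / 1e9

print(f"Per-capita scale-up: {scale_up:.0f}x")
print(f"Extra annual demand: {extra_twh:,.0f} TWh")  # 13,680 TWh
```

For comparison, total world electricity generation at the time was roughly 20,000 TWh per year, so under these assumptions India's aspiration alone is of that order.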

Renewable energy is diffuse, variable, and requires massive storage and backup

Let’s say we aim to have largely replaced fossil fuels with low-carbon substitutes by the year 2060 — in the next 50 years or so. What do we use to meet this enormous demand?

Nuclear power is one possibility, and is discussed in great detail elsewhere on this website. What about the other options? As discussed above, improved efficiency in the way we use energy offers a partial fix, at least in the short term. In the broader context, to imagine that the global human enterprise will somehow manage to get by with less just doesn’t stack up when faced with the reality of a fast developing, energy-starved world.

Put simply, citizens in Western democracies are not going to vote for governments dedicated to lower growth and some concomitant critique of consumerism, nor is an authoritarian regime such as China’s going to risk social unrest, probably of a profound order, by embracing a low-growth economic strategy. Reality is demanding, and so we must carefully scrutinise the case put by those who believe that renewable energy technologies are the answer.

Solarpark Mühlhausen in Bavaria. It covers 25 ha and generates 0.7 MW of average power (peak 6.3 MW)

The most discussed ‘alternative energy’ technologies (read: alternative to fossil fuels or nuclear) are: harnessing the energy in wind, sunlight (directly via photovoltaic panels or indirectly using mirrors to concentrate sunlight), water held behind large dams (hydropower), ocean waves and tides, plants, and geothermal energy, either from hot surface aquifers (often associated with volcanic geologies) or in deep, dry rocks. These are commonly called ‘renewable’ sources, because they are constantly replenished by incoming sunlight, gravity (tides) or radioactivity (hot rocks). Wind is caused by differences in temperature across the Earth’s surface, and so comes originally from the sun; waves, in turn, are whipped up by the wind.

Technically, there are many challenges in economically harnessing renewable energy to provide a reliable power supply. This is a complex topic – many of the challenges are explored in the TCASE series – but here I’ll touch on a few of the key issues. One is that all of the sources described above are incredibly diffuse – they require huge geographical areas to be exploited in order to capture large amounts of energy.

For countries like Australia, with a huge land area and low population density, this is not, in itself, a major problem. But it is a severe constraint for nations with high population density, like Japan or most European nations. Another is that they are variable and intermittent – sometimes they deliver a lot of power, sometimes a little, and at other times none at all (the exception here is geothermal). This means that if you wish to satisfy the needs of an ‘always on’ power demand, you must find ways to store large amounts of energy to cover the non-generating periods, or else you need to keep fossil-fuel or nuclear plants as a backup. That is where the difficulties really begin to magnify… To be continued…
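The Solarpark Mühlhausen figures in the caption above (25 ha, 0.7 MW average, 6.3 MW peak) make both points concrete. A short sketch of what they imply; the arithmetic is mine, and the one-gigawatt comparison plant is a hypothetical benchmark, not something from the post:

```python
# Power density and capacity factor of Solarpark Muehlhausen,
# using the caption figures above (25 ha, 0.7 MW average, 6.3 MW peak).
area_m2 = 25 * 10_000          # 25 hectares in square metres
avg_power_w = 0.7e6            # average output
peak_power_w = 6.3e6           # nameplate (peak) output

power_density = avg_power_w / area_m2         # watts per square metre
capacity_factor = avg_power_w / peak_power_w  # fraction of nameplate delivered

# Land needed to match the AVERAGE output of a hypothetical 1-GW
# baseload plant at this power density (illustrative only):
land_km2_per_gw = 1e9 / power_density / 1e6

print(f"Power density: {power_density:.1f} W/m^2")         # 2.8 W/m^2
print(f"Capacity factor: {capacity_factor:.1%}")           # 11.1%
print(f"Land per GW average: {land_km2_per_gw:.0f} km^2")  # 357 km^2
```

At under 3 W of average power per square metre, matching a single gigawatt of steady output would take of the order of 350 km² of panels, before any storage to cover the non-generating periods is considered.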

————————

Part 2 will cover the ‘fallacy of the baseload fallacy’, ‘overbuilding’, costs, and evolution of real-world energy systems.

May 10, 2011

Decarbonise SA – regional action for greenhouse gas mitigation

Filed under: IFR (Integral Fast Reactor) Nuclear Power, Nuclear Energy — buildeco @ 3:48 pm

by Barry Brook

Global warming can only be tackled seriously by a massive reduction in anthropogenic greenhouse gas production. It’s that simple. But just hoping for this to gradually happen — locally, regionally or globally — by tinkering at the edge of the problem (carbon prices, alternative energy subsidies, mandated targets and loan guarantees, “100 ways to be more green” lists, etc.), is just not going to get us anywhere near where we need to be, when we need to be. For that, we need to develop and implement a well-thought-out, practical and cost-effective action plan!

Back in early 2009, I offered A sketch plan for a zero-carbon Australia. Overall, I still think this advocates the right sort of path. I elaborated further on this idea in my two pieces: Climate debate missing the point and Energy in Australia in 2030; in the latter, I explored a number of potential storylines, along with an estimate of the probability and result of following these different pathways. But the lingering question that arises from thought experiments like this is… how do you turn it into something practical?

Sadly, I can’t think of any liberal-democratic government, anywhere in the world, that actually has a realistic, long-term energy plan. Instead, we have politicians, businesses and other decision makers with their heads in the sand (peak oil is another issue where this is starkly apparent). This must change, and we — the citizenry — must be the agents of that change. That is why the new initiative by Ben Heard, called “Decarbonise SA“, is so exciting. I’ll let Ben explain more, in the guest post below.

But before that, just a small note from me. For my many non-Australian readers: don’t dismiss this as something parochial. Think of it instead as a case study — a working template — for what you can help organise in your particular region (local council, city, state/province, whatever). We need all of you on board, because this is a problem of the global commons. Over to Ben.

——————–

Decarbonise SA

Ben Heard – Ben is Director of Adelaide-based advisory firm ThinkClimate Consulting, a Masters graduate of Monash University in Corporate Environmental Sustainability, and a member of the TIA Environmental and Sustainability Action Committee. He is the founder of Decarbonise SA. His recent post was Think climate when judging nuclear power.

I have been a fan of Barry’s work for some time now. His knack for cutting through the noise to highlight the information we need to consider for making good decisions is remarkable. His reputation and tenure at Adelaide University also give his blog a global reach and relevance, exemplified by the one million hits it received in the week following the Sendai quake and tsunami.

Remarkable though it is, the blog can’t do everything, nor should it try. That’s why I have started Decarbonise SA. The first thing you need to know is that this is more than a blog, it is a mission. The purpose of Decarbonise SA is to form a collective of like-minded people who will drive the most rapid possible decarbonisation of the economy of South Australia, with a primary focus on the electricity supply.

To achieve that goal, South Australia needs to introduce nuclear power into its mix of generating technologies. The primary driver for our support of nuclear power is recognition that the scientific findings in relation to climate change are now so serious that we require the fastest and deepest cuts in emissions possible. That means attacking the biggest problems first. In Australia, that’s electricity supply, specifically the coal and gas that provide most of our baseload generation. While climate change may be the catalyst, nuclear power provides many important environmental and safety benefits compared to coal, beyond greenhouse gas, that will give us a cleaner and healthier environment for the future.

Decarbonise SA also supports increasing the use of renewable generation technologies, and becoming more efficient with energy. But the primary focus of Decarbonise SA is the introduction of nuclear power. We are going to work with the government, community and private sectors of South Australia to make this happen.

Why South Australia?

South Australia’s electricity generation sector is in crisis. Aging, inefficient, decrepit infrastructure must be replaced soon, against the backdrop of an urgent global need to cut greenhouse gas emissions. As in any good crisis though, the opportunity is there if you look. South Australia is just a small number of significant infrastructure investments away from having among the world’s cleanest electricity. It is the mission of Decarbonise SA to make that happen, and happen fast. The goal is decidedly immodest. But that’s because climate change is upon us and we must act quickly, firmly and decisively.

But climate change is a global problem. So focussing a whole blog on a relatively small part of Australia may seem an odd strategy. Here’s the thing. There are already a great many resources pushing the cause of climate change (BNC being one). I’m not going to try to compete with that.

At the same time, every grand vision eventually needs implementation to matter, and necessarily, someone needs to downscale the bigger issues to a more manageable level and actually put a plan in place to make it happen.

I am a proud South Australian, and while my work is often national and my ideas and articles have spread around the world, I know where I have the most influence. It’s in the state of 1 million people where I was raised, where I have deep connections and networks, and where I do the bulk of my work. And as I said at the start, I have not started this blog to flap my gums; I very much intend to make this happen. If Decarbonise SA can move 1 million people in a developed nation from a dirty electricity supply to among the world’s very cleanest, well, I’ll be satisfied, the model will have worked, and I can think even bigger. I will be proud if South Australia is first. But I will be even more excited to find ourselves in competition with others around the world who are decisively pursuing the same goal. So hopefully what we do with Decarbonise SA will become a model that has relevance in every state, territory, county and province the world over. Nothing is trademarked at Decarbonise SA. If you like what you read, but don’t live in SA, steal my blog idea and everything on it, and start your own Decarbonise movement. I’ll help.

How will we achieve this?

The introduction of nuclear power to South Australia is the foundation of the Decarbonise SA vision. Nuclear power will permit the rapid replacement of South Australia’s decrepit baseload generation facilities. This is to be accompanied by the continued and enhanced expansion of the renewable energy sector in South Australia, which has played a major role in lowering the average emissions of South Australian electricity over the last few years, and by continued efforts to improve our efficient use of energy. So yes, to resort to the labels that will come what may: Decarbonise SA is pro-nuclear power. It is also pro-renewables. It is also pro-energy efficiency. It is decidedly pro-nuclear, renewables and energy efficiency working as a trio, each deployed as their respective advantages and disadvantages dictate. But above all, it is pro, pro, pro the rapid decarbonisation of the South Australian economy, focussing on electricity. That makes us completely anti-coal and anti-gas for any new electricity generation capacity.

It is the introduction of nuclear power that is the focus of Decarbonise SA’s work, for some pretty simple reasons. Firstly, in South Australia it’s the missing component of a strategy that would actually get the job done (remember, I’m talking about zero emissions. I’m not interested in deep cuts or improvements). Secondly, while renewable technology and energy efficiency both need better support and deeper penetration, they also both have a lot of friends already. Energy efficiency is supported by legislation (like the Energy Efficiency Opportunities Act, mandatory standards for new houses, Minimum Energy Performance Standards (MEPS) and star ratings for appliances to name but a few), and organisations, governmental and otherwise. Renewables have support from organisations like Renewables SA, the Alternative Technology Association, and major legislated support from the national Renewable Energy Target (RET), as well as deep subsidies for solar PV. So the potential of this blog to improve the cause of either energy efficiency or renewables is minimal. To be perfectly clear, do not mistake the focus on nuclear power as an attack on, or belittling of, the role of either energy efficiency or renewables. That is not the case. But I do insist on being decidedly realistic about the potential of either to solve the problem in the absence of nuclear power.

Nuclear power, on the other hand, is roundly treated as the spawn of the devil, with Australia’s Environment Protection and Biodiversity Conservation Act specifically highlighting nuclear as requiring referral. Not to mention the opposition of the coal industry, who know full well that nuclear is the only real threat to their dominance of electricity generation in Australia.

At first approach, you may think this is crazy. Nuclear has never been very popular in Australia, and right now, as I write, the second biggest nuclear incident ever remains unresolved. Decarbonise SA is certainly not naive about the challenge of putting nuclear at the centre of the strategy. But when the options are 1) a tough sell that can work (nuclear and renewables with energy efficiency), and 2) easier sells that are guaranteed to fail (gas generation with still-high levels of greenhouse gas, plus more imports from Victoria, where they burn dirty brown coal in the world’s worst power station, plus a bit more renewables and energy efficiency), there is really no decision to be made.

Besides, nuclear power is hardly a fringe technology.  It is used in 30 countries worldwide, including the 16 largest economies (ignoring Australia at number 13).  It provides 15% of global electricity supply from around 440 reactors. It provides 80% of France’s electricity, 30% of Japan’s, and 20% of the United States’. It has been in use for over 50 years, with a remarkable safety record, and a suite of environmental, health and safety advantages over and above coal that make your head spin. It is embraced by many prominent environmentalists, thoughtful, caring and passionate people.  But Decarbonise SA has not based this plan on who else agrees or disagrees or what other countries have done; we based it on facts, evidence and context relating to:

  • The extraordinary challenge of climate change, that requires total and rapid decarbonisation of electricity
  • The need to maintain secure electricity supplies, and to urgently supply clean electricity to the 1 billion people in the world who have none
  • Honest and evidence-based appraisal of the advantages and disadvantages of different energy supply options across all relevant criteria, being:
    • Ability to provide near-zero greenhouse gas electricity across the lifecycle
    • Scalability to meet electricity demand requirements, with a focus on baseload
    • Location requirements
    • Cost
    • Reliability/ track record
    • Safety
    • Waste and pollution from energy generation
    • Waste and pollution from mining operations
    • Global security

When these criteria are attended to for all energy supply options with a clear head, and keeping prejudice to a minimum, one thing quickly becomes clear: Anyone who means what they say when they use the expression “climate crisis” needs to move nuclear power front and centre of the strategy, otherwise we will spend the next few decades rearranging the deck chairs on the Titanic.

By the way, this is all coming from someone who was once staunchly anti-nuclear. I supported the organisations who oppose it. I was first to rail against it if it came up over dinner or at a BBQ. But my growing understanding of the climate crisis forced me to take a second look at all of my reasons for opposition. I began that process believing that, in the end, I may find nuclear to be a necessary evil. When I was done, what I found instead is that it’s more than necessary, it’s essential, and it’s not really evil: compared to coal, nuclear power is 99% better in almost every relevant criterion (an assertion I will back with numbers in an upcoming post). I’ve been involved in enough environmental decisions now to know that if you have an option that will improve current conditions by 99%, that’s not a compromise. That’s not a defeatist stance. It’s a massive victory. I’ll be satisfied with the 99% this century, and chase the 1% in the next one if I’m still here.

So I hope you’ll join me on the journey, as I spell out the mission and reasons for Decarbonise SA in upcoming articles. But be warned: I’m not here for the talking. My children won’t really thank me for a blog. They will thank me for cleaner, healthier air, and a stable climate. That’s what Decarbonise SA is here for. And it needs you.

May 5, 2011

Energy debates in Wonderland

 by Barry Brook

My position on wind energy is quite ambivalent. I really do want it (and solar) to play an effective role in displacing fossil fuels, because to do this, we need every tool at our disposal (witness the Open Science project I kick-started in 2009 [and found funding for], in order to investigate the real potential of renewables, Oz-Energy-Analysis.Org).

However, I think there is far too much wishful thinking wrapped up in the proclamations of the “100% renewables” crowd (most of whom, unfortunately, are also anti-nuclear advocates) that wind somehow offers both a halcyon choice and an ‘industrial-strength’ solution to our energy dilemma. In contrast, my TCASE series (thinking critically about sustainable energy) illustrates that, pound for pound, wind certainly does NOT punch above its weight as a clean-energy fighter; indeed, it’s very much a journeyman performer.

The following guest post, by Jon Boone, looks at wind energy with a critical eye and a witty turn of phrase. I don’t offer it as a comprehensive technical critique — rather it’s more a philosophical reflection on past performance and fundamental limits. Whatever your view of wind, I think you’ll find it interesting.

————————

Energy debates in Wonderland

Guest Post by Jon Boone. Jon is a former university administrator and longtime environmentalist who seeks more informed, effective energy policy in ways that expand and enhance modernity, increase civility, and demand stewardship on behalf of biodiversity and sensitive ecosystems. His brand of environmentalism eschews wishful thinking because it is aware of the unintended adverse consequences flowing from uninformed decisions. He produced and directed the documentary, Life Under a Windplant, which has been freely distributed within the United States and many countries throughout the world. He also developed the website Stop Ill Wind as an educational resource, posting there copies of his most salient articles and speeches. He receives no income from his work on wind technology.

March Hare (to Alice): Have some wine.

(Alice looked all round the table, but there was nothing on it but tea.)

Alice: I don’t see any wine.

March Hare: There isn’t any.

Alice: Then it wasn’t very civil of you to offer it.

March Hare: It wasn’t very civil of you to sit down without being invited.

— From Lewis Carroll’s Alice in Wonderland

Energy journalist Robert Bryce, whose latest book, Power Hungry, admirably foretells an electricity future anchored by natural gas from Marcellus Shale that will eventually bridge to pervasive use of nuclear power, has recently been involved in two prominent debates. In the first, conducted by The Economist, Bryce argued for the proposition that “natural gas will do more than renewables to limit the world’s carbon emissions.” In the second, an Intelligence Squared forum sponsored by the Rosenkranz Foundation, he and American Enterprise Institute scholar Steven Hayward argued against the proposition that “Clean Energy can drive America’s economic recovery.”

Since there’s more evidence a friendly bunny brought children multi-colored eggs on Easter Sunday than there is that those renewables darlings, wind and solar, can put much of a dent in CO2 emissions anywhere, despite their massively intrusive industrial presence, the first debate was little more than a curiosity. No one mentioned hydroelectric, which has been the most widely effective “renewable”—ostensibly because it continues to lose market share (it now provides the nation with about 7% of its electricity generation), is an environmental pariah to the likes of The Sierra Club, and has little prospect for growth. Nuclear, which provides the nation’s largest grid, the PJM, with about 40% of its electricity, is not considered a renewable, despite producing no carbon emissions; it is also on The Sierra Club’s hit list. Geothermal and biomass, those minor league renewables, were given short shrift, perhaps because no one thought they were sufficiently scalable to achieve the objective.

So it was a wind versus gas scrum played out as if the two contenders were equally matched as producers of power. Bryce pointed out wind’s puny energy density, how its noise harms health and safety, its threat to birds and bats, and how natural gas’s newfound abundance continues to decrease its costs—and its price. His opponent carried the argument that wind and solar would one day be economically competitive with natural gas, such that the former, since they produced no greenhouse gasses, would be the preferred choice over the latter, which does emit carbon and, as a non renewable, will one day become depleted.

Such a discussion is absurd at a number of levels, mirroring Alice’s small talk with the March Hare. One of the troubling things about the way wind is vetted in public discourse is how “debate” is framed to ensure that wind has modern power and economic value. It does not. Should we debate whether the 747 would do more than gliders in transporting large quantities of freight? Bryce could have reframed the discussion to ask whether wind is better than cumquats as a means of emissions reductions. But he didn’t. And the outcome of this debate, according to the vote, was a virtual draw.

Ironically, the American Natural Gas Association is perking up its louche ad slogan: “The success of wind and solar depends on natural gas.” Eureka! To ANGA, wind particularly is not an either to natural gas’s or. Rather, the renewables du jour will join forces with natural gas to reduce carbon emissions in a way that increases market share for all. With natural gas, wind would be an additive—not an alternative—energy source. Bryce might have made this clear.

What ANGA and industry trade groups like the Interstate Natural Gas Association of America (see its latest paper) don’t say is that virtually all emissions reductions in a wind/gas tandem would come from natural gas—not wind. But, as Bryce should also be encouraged to say, such a pretension is a swell way for the natural gas industry to shelter income via wind’s tax avoidance power. And to create a PR slogan based upon the deception of half-truths. Although natural gas can indeed infill wind’s relentless volatility, the costs would be enormous while the benefit would be inconsequential. Ratepayers and taxpayers would ultimately pay the substantial capital expenses of supernumerary generation.

Beyond Wonderland and Through the Looking Glass

The Oxford-style Economist debate, which by all accounts Bryce and Hayward won with ease, nonetheless woozled around in a landscape worthy of Carroll’s Jabberwocky, complete with methodological slips, definitional slides, sloganeering, and commentary that often devolved into meaningless language—utter nonsense. It was as if Pixar had for the occasion magically incarnated the Red Queen, the Mad Hatter, and Humpty Dumpty, who once said in Through the Looking Glass, “When I use a word, it means just what I choose it to mean – neither more nor less.” Dumpty also said, “When I make a word do a lot of work … I always pay it extra.”

Those promoting “clean” were paying that word extra—and over the top, as Hayward frequently reminded the audience by demanding a clear, consistent definition of clean technology.

Proponents frequently defined clean energy differently depending upon what they chose to mean. At times, they meant acts of commission in the form of “clean coal,” wind, solar, biomass (although ethanol was roundly condemned), and increased use of natural gas. Indeed, natural gas in the discussion became reified, in the best Nancy Pelosi/T. Boone Pickens tradition, as a clean source of energy on a par with wind and solar. At one time, clean also referred to nuclear—but the topic quickly changed back to wind and natural gas. At other times, clean referred to acts of omission, such as reducing demand with more efficient appliances, smarter systems of transmission, and more discerning lifestyle choices.

Shifting definitions about what was “clean” made for a target that was hard to hit. Bryce mentioned the Jevons Paradox. Bull’s eye. So much for increased efficiency. Hayward demonstrated that the US electricity sector has already cut SO2 and NOx emissions nearly 60% over the last 40 years, and reduced mercury emissions by about 40% over this time, despite tripling coal use from 1970 to 2005. Zap. All this without wind and solar. Green jobs from clean industry? It would have been fruitful to have invoked Henry Hazlitt’s Broken Window fallacy, which illustrates the likelihood of few net new jobs because of the opportunities lost for other, more productive investment. Also welcome would have been remarks about how more jobs in the electricity sector must translate into increased costs, making electricity less affordable. Such a development would substantially subvert prospects for economic recovery.

In arguing against the proposition that clean energy could be a force for economic recovery, Bryce and Hayward did clean the opposition’s clock (they had, as everyone agreed, the numbers on their side). But they also let the opposition off the hook by not exposing the worms at the core of the proposition. Yes, the numbers overwhelmingly suggest that coal and natural gas are going to be around for a long time, and that they will continue to be the primary fuels, along with oil, to energize the American economy.** They can be, as they have been, made cleaner by reducing their carbon emissions even more. But they won’t be clean. Outside Wonderland, cleaner is still not clean.

The proposition therefore had to fail. Even in Wonderland.

Example of the twinning between natural gas and renewable energy – unacceptable from a greenhouse gas mitigation perspective

Capacity Matters

These arguments, however, are mere body blows. Bryce should have supplied the knockout punch by reminding that any meaningful discussion of electricity production, which could soon embrace 50% of our overall energy use, must consider the entwined goals of reliability, security, and affordability, since reliable, secure, affordable electricity is the lynchpin of our modernity. Economic recovery must be built upon such a foundation. At the core of this triad, however, resides the idea of effective capacity—the ability of energy suppliers to provide just the right amount of controllable power at any specified time to match demand at all times. It is the fount of modern power applications.

By insisting that any future technology—clean, cleaner, or otherwise, particularly in the electricity sector—must produce effective capacity, Bryce would have come quickly to the central point, moving the debate out of Wonderland and into sensible colloquy.

Comparing—both economically and functionally—wind and solar with conventional generation is spurious work. Saying that the highly subsidized price of wind might, maybe, possibly become, one day, comparable to coal or natural gas may be true. But even if this happens, if, say, wind and coal prices become equivalent, paying anything for resources that yield no or little effective capacity seems deranged as a means of promoting economic recovery for the most dedicatedly modern country on the planet.

Subsidies for conventional fuels—coal, natural gas, nuclear, and hydro—make sense because they promote high capacity generation. Subsidies for wind and solar, which are, as Bryce stated, many times greater on a unit of production basis than for conventional fuels, promote pretentious power that makes everything else work harder simply to stand still.

Consider the following passage from Part II of my recent paper, which is pertinent in driving this point home:

Since reliable, affordable, secure electricity production has historically required the use of many kinds of generators, each designed to perform different but complementary roles, much like instruments in an orchestra, it is not unreasonable for companies in the power business to diversify their power portfolios. Thus, investment in an ensemble of nuclear and large coal plants to provide for baseload power, along with bringing on board smaller coal and natural gas plants to engage mid and peak load, makes a great deal of sense, providing for better quality and control while achieving economies of scale.

Traditional diversified power portfolios, however, insisted upon a key common denominator: their generating machines, virtually all fueled by coal, natural gas, nuclear, and/or hydro, had high unit availability and capacity value. That is, they all could be relied upon to perform when needed precisely as required.

How does adding wind—a source of energy that cannot of itself be converted to modern power, is rarely predictable, never reliable, always changing, is inimical to demand cycles, and, most importantly, produces no capacity value—make any sense at all? Particularly when placing such a volatile brew in an ensemble that insists upon reliable, controllable, dispatchable modes of operation. As a functional means of diversifying a modern power portfolio, wind is a howler.

Language Matters

All electricity suppliers are subsidized. But conventional generation provides copious capacity while wind supplies none and solar, very little. The central issue is capacity—or its absence. Only capacity generation will drive future economic recovery. And Bryce should say so in future debates. Birds and bats, community protests, health and safety: all pale in contrast to wind technology’s lack of capacity. And Bryce should say so. Ditto for any contraption fueled by dilute energy sources that cannot be converted to modern power capacity—even if they produce no carbon emissions. Clean and green sloganeering should not be conflated with effective production.

Moreover, even if the definition of clean and/or renewable technology is stretched to mean reduced or eliminated carbon emissions caused by less consumption of fossil fuels, then where is the evidence that technologies like wind and solar are responsible for doing this? When in the debate former Colorado governor Bill Ritter claimed that the wind projects he helped build in his state were reducing California’s carbon emissions, why didn’t the Bryce/Hayward team demand proof? Which is non-existent.

It’s not just wind’s wispy energy density that makes conversion to modern power impossible—without having it fortified by substantial amounts of inefficiently operating fossil-fired power, virtually dedicated transmission lines, and new voltage regulation, the costs of which must collectively be calculated as the price for integrating wind into an electricity grid. It is rather wind’s continuous skittering, which destabilizes the required match between supply and demand; it must be smoothed by all those add-ons. The vast amount of land wind gobbles up therefore hosts a dysfunctional, Rube Goldbergesque mechanism for energy conversion. Bryce and his confreres would do well to aim this bullet right between the eyes.

Robert Bryce remains a champion of reasoned discourse and enlightened energy policy. He is one of the few energy journalists committed to gleaning meaningful knowledge from a haze of data and mere information. His work is a wise undertaking in the best traditions of journalism in a democracy. As he prepares for future debates—although, given the wasteland of contemporary journalism, it is a tribute to his skills that he is even invited to the table—he must cut through the chaff surrounding our politicized energy environment, communicating instead the whole-grain wheat of its essentials.

Endnote: You might also enjoy my other relatively recent paper, Oxymoronic Wind (13-page PDF). It covers a lot of ground but dwells on the relationship between wind and companies swaddled in coal and natural gas, which is the case worldwide.

________________________________________________________

** It was fascinating to note Hayward’s brief comment about China’s involvement with wind, no doubt because it seeks to increase its renewables’ manufacturing base and then export the bulk of the machines back to a gullible West. As journalist Bill Tucker said recently in a panel discussion about the future of nuclear technology on the Charlie Rose show, China (and India), evidently dedicated to achieving high levels of functional modernity, will soon lead the world in nuclear production as they slowly transition from heavy use of coal over the next half-century.

April 14, 2011

Fukushima rated at INES Level 7 – what does this mean?

Filed under: Japan Earthquake, Nuclear Energy — buildeco @ 8:19 pm
by Barry Brook

Hot in the news is that the Fukushima nuclear crisis has been upgraded from INES 5 to INES 7. Note that this is not due to some sudden escalation of events (aftershocks etc.), but rather it is based on an assessment of the cumulative magnitude of the events that have occurred at the site over the past month.

Below I look briefly at what this INES 7 rating means and why it has happened, and provide a new place to centralise comments on this noteworthy piece of news.

The International Nuclear and Radiological Event Scale (INES) was developed by the International Atomic Energy Agency (IAEA) to rate nuclear accidents. It was formalised in 1990 and then back-dated to events like Chernobyl, Three Mile Island, Windscale and so on. Prior to today, only Chernobyl had been rated at the maximum level of the scale, ‘major accident’. A useful 5-page PDF summary description of the INES, by the IAEA, is available here.

A new assessment of Fukushima Daiichi has put this event at INES 7, upgraded from earlier escalating ratings of 3, 4 and then 5. The original intention of the scale was historical/retrospective, and it was not really designed to track real-time crises, so until the accident is fully resolved, any time-specific rating is naturally preliminary.

The criteria used to rate against the INES scale are (from the IAEA documentation):

(i) People and the Environment: considers the radiation doses to people close to the location of the event and the widespread, unplanned release of radioactive material from an installation.

(ii) Radiological Barriers and Control: covers events without any direct impact on people or the environment and only applies inside major facilities. It covers unplanned high radiation levels and spread of significant quantities of radioactive materials confined within the installation.

(iii) Defence-in-Depth: covers events without any direct impact on people or the environment, but for which the range of measures put in place to prevent accidents did not function as intended.

In terms of severity:

Like the scales that describe earthquakes or major storms, each of the INES scale’s seven levels is designed to be ten times more severe than the one before. After below-scale ‘deviations’ with no safety significance, there are three levels of ‘incident’, then four levels of ‘accident’. The selection of a level for a given event is based on three parameters: whether people or the environment have been affected; whether any of the barriers to the release of radiation have been lost; and whether any of the layers of safety systems are lost.

So, on this definitional basis, one might argue that the collective Fukushima Daiichi event (core damage in three units, hydrogen explosions, problems with drying spent fuel ponds, etc.) is ~100 times worse than TMI-2, which was a Level 5.
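The logarithmic intent of the scale can be sketched in a couple of lines. This is purely illustrative: the INES is a categorical rating, not a precise measurement, so the factor-of-100 claim is nominal.

```python
# Nominal severity ratio implied by the INES scale's logarithmic design:
# each level is intended to be roughly ten times more severe than the last.
def relative_severity(level_a: int, level_b: int) -> int:
    """Nominal severity ratio between two INES levels."""
    return 10 ** (level_a - level_b)

# Fukushima (Level 7) relative to Three Mile Island (Level 5):
print(relative_severity(7, 5))  # -> 100
```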

However, what about when you hit the top of the INES? Does a rating of 7 mean that Fukushima is as bad as Chernobyl? Well, since you can’t get higher than 7 on the scale, it’s impossible to use this numerically to answer such a question on the basis of their categorical INES rating alone. It just tells you that both events are in the ‘major league’. There is simply no event rating 8, or 10, or whatever, or indeed any capacity within the INES system to rank or discriminate events within categories (this is especially telling for 7). For that, you need to look for other diagnostics.

So headlines like ‘Fukushima is now on a par with Chernobyl’ can be classified as semantically correct and yet also (potentially) downright misleading. Still, it sells newspapers.

There is a really useful summary of the actual ‘news’ of this INES upgrade from World Nuclear News, here. It reports:

Japanese authorities notified the International Atomic Energy Agency of their decision to up the rating: “As a result of re-evaluation, total amount of discharged iodine-131 is estimated at 1.3×10^17 becquerels, and caesium-137 is estimated at 6.1×10^15 becquerels. Hence the Nuclear and Industrial Safety Agency has concluded that the rating of the accident would be equivalent of Level 7.”

More here from the IAEA:

The new provisional rating considers the accidents that occurred at Units 1, 2 and 3 as a single event on INES. Previously, separate INES Level 5 ratings had been applied for Units 1, 2 and 3. The provisional INES Level 3 rating assigned for Unit 4 still applies.

The re-evaluation of the Fukushima Daiichi provisional INES rating resulted from an estimate of the total amount of radioactivity released to the environment from the nuclear plant. NISA estimates that the amount of radioactive material released to the atmosphere is approximately 10 percent of the 1986 Chernobyl accident, which is the only other nuclear accident to have been rated a Level 7 event.
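As a rough cross-check on that rating, the two release estimates can be combined into an iodine-131-equivalent activity. Note the caveat: the Cs-137 weighting factor of 40 is my assumption, taken from the INES user’s manual, and this is a back-of-envelope sketch rather than NISA’s official calculation.

```python
# Rough iodine-131-equivalent release from NISA's estimates (a hedged sketch;
# the Cs-137 weighting factor is assumed from the INES user's manual).
I131_BQ = 1.3e17       # estimated I-131 release, becquerels
CS137_BQ = 6.1e15      # estimated Cs-137 release, becquerels
CS137_WEIGHT = 40      # assumed INES radiological equivalence factor for Cs-137

total_tbq = (I131_BQ + CS137_WEIGHT * CS137_BQ) / 1e12  # convert Bq -> TBq
print(f"{total_tbq:,.0f} TBq I-131 equivalent")  # -> 374,000 TBq
# Level 7 corresponds to a release of more than 'several tens of thousands'
# of TBq of I-131 equivalent, so the rating follows from these figures.
```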

I also discussed the uprating today on radio, and you can listen to the 12-minute interview here for my extended perspective.

So, what are some of the similarities and differences between Fukushima and Chernobyl?

Both have involved breaches of radiological barriers and controls, overwhelming of defence-in-depth measures, and large-scale release of radioactive isotopes into the environment. The causes and sequence of the two events were, however, very different, in terms of reactor designs, the nature of the triggering events, and time-scale for resolution — this is a topic to be explored in more depth in some future post. The obviously big contrast is in the human toll and nature of the radioactive release.

The Chernobyl event killed 28 people directly via the initial explosion or severe radiation sickness, and ~15 others died as a direct result of radiation-induced cancer (see the summary provided today by Ben Heard on Opinion Online: Giving Green the red light). Further, Chernobyl led to a significant overexposure of members of the public in the local area and region, especially due to iodine-131 that was dispersed by the reactor fire, and insufficient protection measures by authorities. An increase in thyroid cancers resulted from this.

In Fukushima, by contrast, no workers have been killed by radiation (or explosions), and indeed none have been exposed to doses >250 mSv (~1000 mSv is the dose required for people to exhibit signs of radiation sickness, and about 50% of victims die after being exposed to >5000 mSv [see chart here]). No member of the public has, as yet, been overexposed at Fukushima. Further, many of the radionuclides released into the environment around Fukushima were carried by water leakages that were flushed into the ocean, rather than attached to carbon and other aerosols from a burning reactor moderator, as at Chernobyl, where they were largely deposited on land and had the potential to be inhaled.

So is Fukushima another Chernobyl? No. Is it a serious accident? Yes. Two quite different questions — and answers — which should not be carelessly conflated.

November 5, 2010

SNE 2060 – can we build nuclear power plants fast enough to meet the 2060 target?


by Barry Brook

The nuclear scenario I describe here requires around 10,000 GWe of nuclear capacity by 2060, to replace most of our current fossil fuel use. (For further justification of this 10 TW target, read this TCASE post.) My next step is to look critically at some of the key underpinning assumptions — uranium supply and build rates. Now, as was the case for the previous question (are uranium resources sufficient?), I’m not the first to try to provide an answer on possible build rates. So, before I add my say on the matter, I’ll quote from two other sources.

———————————————-

First up, we have Tom Blees from Prescription for the Planet (pg 200+)

So what kind of money and timelines are we talking about here? As to the latter, the idea of building hundreds of nuclear plants a year is something I haven’t seen even remotely suggested by anyone, though there are really no compelling reasons, given the political will, that it couldn’t be done. France has been good enough to give us a perfect demonstration.

Once the oil shocks of the early Seventies jolted the world into a new perspective, France more than any other nation took decisive action. Having precious few natural energy sources of its own, the nation embarked on an ambitious plan to convert their energy infrastructure to nuclear power, supplemented by what hydroelectric power they’d already developed. Within the space of about 25 years they succeeded, and today France’s fourth largest export is electricity.

About eighty percent of their electricity is provided by nuclear power, with nearly all the rest comprised of hydroelectric and other renewable sources. It is truly ironic—and more than a little ridiculous—that France is singled out for being so far behind on meeting the EU’s renewable energy target, a system that was put in place to encourage its member nations to reduce their GHG emissions. The fact that nearly all of France’s GHG emissions come from the transportation sector and that they produce far lower emissions from their electrical generation systems than any other EU nation just isn’t recognized under the renewable energy goal system. So if you happen to see France being castigated as a global warming slacker, take it with a large grain of salt. They are, in fact, helping their neighbors reduce their GHG emissions by selling them electricity from France’s nuclear and renewable energy power plants, all the while enjoying the clearest skies in the industrialized world.

France’s nuclear power buildup proceeded at the rate of up to six new power plants a year. As in most other countries, they tend to build them in clusters of three or four, with a total capacity per cluster of 3-4 gigawatts electrical (GWe). Currently the government-owned electrical utility, Electricité de France (EdF), operates 59 nuclear plants with a total capacity of over 63 GWe, exporting over 10% of their electricity every year (France is the world’s largest net electricity exporter). Their electricity cost is among the lowest in Europe at about 3 eurocents (or €ents, if you’ll allow me to coin a new symbol of sorts, since I know of no euro-native symbol akin to the U.S. ¢) per kilowatt-hour.

Just how realistic is it to think we can build 100 nuclear plants per year? Remember that France built up to six per year during their conversion to nuclear, so let’s look at Gross Domestic Product (GDP) as a guide to what a given country can financially bear for such a project, keeping in mind that France proceeded without the sense of urgency that the world today should certainly be ready to muster. There are six countries with higher GDPs than France, all of whom already possess the technology to build fast reactors: USA, China, Japan, India (they’re building one now), Germany, and the United Kingdom. Add Canada and Russia (which already has one running and is planning more), then tally up the GDP of these eight countries. At the rate of 6 plants per year with France’s GDP, these countries alone could afford to build about 117 IFRs per year, even without any greater urgency than the French brought to bear on their road to energy independence. And come on, you know that using “urgency” and “French” in the same sentence is pushing the envelope.

———————————————-

Then we have David Mackay from Sustainable Energy: Without the Hot Air (pg 171):

I heard that nuclear power can’t be built at a sufficient rate to make a useful contribution.

The difficulty of building nuclear power fast has been exaggerated with the help of a misleading presentation technique I call “the magic playing field.” In this technique, two things appear to be compared, but the basis of the comparison is switched halfway through. The Guardian’s environment editor, summarizing a report from the Oxford Research Group, wrote

“For nuclear power to make any significant contribution to a reduction in global carbon emissions in the next two generations, the industry would have to construct nearly 3000 new reactors – or about one a week for 60 years. A civil nuclear construction and supply programme on this scale is a pipe dream, and completely unfeasible. The highest historic rate is 3.4 new reactors a year.”

Graph of the total nuclear power in the world that was built since 1967 and that is still operational today. The world construction rate peaked at 30 GW of nuclear power per year in 1984.

3000 sounds much bigger than 3.4, doesn’t it! In this application of the “magic playing field” technique, there is a switch not only of timescale but also of region. While the first figure (3000 new reactors over 60 years) is the number required for the whole planet, the second figure (3.4 new reactors per year) is the maximum rate of building by a single country (France)!

A more honest presentation would have kept the comparison on a per-planet basis. France has 59 of the world’s 429 operating nuclear reactors, so it’s plausible that the highest rate of reactor building for the whole planet was something like ten times France’s, that is, 34 new reactors per year. And the required rate (3000 new reactors over 60 years) is 50 new reactors per year. So the assertion that “civil nuclear construction on this scale is a pipe dream, and completely unfeasible” is poppycock. Yes, it’s a big construction rate, but it’s in the same ballpark as historical construction rates.

How reasonable is my assertion that the world’s maximum historical construction rate must have been about 34 new nuclear reactors per year? Let’s look at the data. [The figure] shows the power of the world’s nuclear fleet as a function of time, showing only the power stations still operational in 2007. The rate of new build was biggest in 1984, and had a value of (drum-roll please…) about 30 GW per year – about 30 1-GW reactors. So there!

See also: Plan C (PDF)

———————————————-

Barry Brook takes on the matter

Okay, so I think it’s clear from the above two extracts that the deployment of 50 new reactors a year, worldwide (i.e., 1 GWe per week) would be quite achievable, assuming any serious socio-political impediments were overcome, as they were in France in the 1970s–1990s, and are today in places like China, South Korea and India. I crunched some further numbers to back up this assessment.

World GDP in 2009 is $US 58 trillion. Yet the top 30 nations encompass 87.4% of this total (and 22 of those already have commercial nuclear power, with another 4-6 of them actively seeking it), or $US 50.7 trillion, so to simplify, let’s just consider these nations. In 2009, France ($US 2.68 trillion) represented 5.3% of the Top 30 cumulative total. So if France could build at a rate of 3.4 GWe per year (6 reactors with average unit size of 500 to 600 MWe), the Top 30 could do it at 3.4/0.053 = 64 GW/yr. Back in 1980, however, France’s GDP per capita was $12K, versus $32K today, a 2.7-fold increase. If we applied that multiplier to the figures above, we get a possible build rate, on an equal-terms economic basis, of ~170 GWe per year.
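The arithmetic in that paragraph can be laid out explicitly. This is just a sketch of the scaling calculation, using only the figures quoted above:

```python
# GDP-scaling of France's historical build rate (figures as quoted; 2009 US$).
top30_gdp = 50.7e12     # combined GDP of the Top 30 nations
france_gdp = 2.68e12    # France's 2009 GDP
france_rate_gw = 3.4    # France's peak build rate, GWe/yr

france_share = france_gdp / top30_gdp        # ~0.053 of the Top 30 total
scaled_rate = france_rate_gw / france_share  # Top 30 building at France's effort
adjusted_rate = scaled_rate * (32 / 12)      # per-capita GDP grew ~2.7x since 1980

print(round(scaled_rate))    # -> 64 GW/yr
print(round(adjusted_rate))  # -> 172, i.e. the ~170 GWe/yr quoted in the text
```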

To go from 380 GW in 2010 to 10,000 GW in 2060, however, would require an average of 190 GW to be built each year. Actually, as this table from the previous SNE2060 post shows, the maximum rate I calculate from the TR2 scenario is 386 GW per year, but that peak doesn’t occur until 2040, giving plenty of time to ‘tool up’ (the implied rate from my modelling in 2020 is 25 GW/yr, and in 2030 is 130 GW/yr).

So, another take. China’s electricity consumption grew by an average of 360 TWh per year over the last 5 years, or 40 GW of equivalent generation capacity, driven by a national GDP of $US 4.9 trillion. If this rate of build is scaled up to the Top 30 (i.e., assume that all other nations built nothing), this would be like adding 410 GW of electricity generation capacity worldwide. Now, let’s say that in some hypothetical future, where the world’s economic powers urgently wanted to replace coal with low-carbon alternatives (including substituting oil with electricity-derived synfuels), and the goal was to emulate France, such that ~80% of their new build was nuclear power stations, then China’s current pace-setting would allow for 410 × 0.8 = 330 GW of new nuclear capacity per year.
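The same scale-up, step by step, again using only the figures quoted above (the 40 GW equivalence follows from 360 TWh/yr ÷ 8760 h ≈ 41 GW, rounded down to 40 in the text):

```python
# Scaling China's recent build pace to the Top 30 economies (a sketch).
china_new_gw = 40        # China's equivalent new generation capacity per year
china_gdp = 4.9e12       # China's GDP, US$
top30_gdp = 50.7e12      # Top 30 nations' combined GDP, US$
nuclear_fraction = 0.8   # assume a French-style nuclear share of new build

worldwide_gw = china_new_gw * top30_gdp / china_gdp
print(round(worldwide_gw))                     # -> 414, i.e. ~410 GW/yr total
print(round(worldwide_gw * nuclear_fraction))  # -> 331, i.e. ~330 GW/yr nuclear
```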

Bottom Line: Folks, the conclusions are that: (a) it’ll require a massive effort to build 10 TW of replacement nuclear (and renewable etc) capacity by 2060, but (b) it’s certainly doable, based on no more than the level of urgency currently shown by China today (with France as backup).

October 20, 2010

IFR FaD 8 – Two TV documentaries and a new film on the Integral Fast Reactor

Filed under: IFR (Integral Fast Reactor) Nuclear Power, Nuclear Energy — buildeco @ 11:26 am


by Barry Brook

Want to know more about the Integral Fast Reactor technology from the comfort of your lounge room chair? Then these two fascinating videos, recently transcoded and uploaded by Steve Kirsch to the “ifr.blp.tv” website, are for you. You can watch online, or download in .MP4 format (choose the format and then the download link below) for offline viewing.

First, we have: Advanced Liquid Metal Reactor Actinide Recycle System, “Energy for the 21st Century”

It is about 8 minutes long and cost the ALMR team about $40,000 to make in 1990 (according to Chuck Boardman).

This video was also highlighted on the Atomic Insights blog by fellow IFRG member Rod Adams. Rod said:

The Energy Policy Act of 1992 included language directing research and development of the Advanced Liquid Metal Reactor (ALMR) with Actinide Recycle System. The above video is an explanatory (some might use the word “promotional”) production that explains the program and its goals from the perspective of the mid 1990s.

As many nuclear energy insiders know, the Integral Fast Reactor (IFR) demonstration was part of the ALMR program. That program was cancelled by the Clinton Administration when its energy program decision makers decided to zero out all research on advanced nuclear energy systems. The reactor design that the video describes – the PRISM – is still on GE’s drawing board. It still has its advocates. Jack Fuller, Chairman of the Board, GE Hitachi Nuclear Energy presented the reactor design and described its history to the Blue Ribbon Commission on America’s Energy Future.

This video provides more evidence of an energy opportunity that America has not been pursuing. Knowing just how important an abundant, clean, reliable energy source can be to a country’s prosperity, one has to wonder why there was so much opposition to the concept during the 1990s and why that opposition still exists today.

Second, we have “The New Explorers: Atoms for Peace”

This 54 minute TV documentary is a history of nuclear energy in America, broadcast in 1996 on the national PBS network. The show focuses on Argonne’s efforts to develop the Integral Fast Reactor, an inherently safe nuclear power plant killed by Congress and the Clinton Administration. From Argonne NL website:

http://blip.tv/file/4199148

[Film-maker] Bill Kurtis hosts the exploration of nuclear energy from its beginnings under the Stagg Field grandstands at the University of Chicago, through the bombing of Hiroshima and Nagasaki, President Eisenhower’s “Atoms for Peace” program and the development of the Integral Fast Reactor (IFR).

“Argonne National Laboratory holds a very special place in the 50-year-long journey to turn nuclear power into unlimited energy for the world,” according to Tom Olson of the “New Explorers” Chicago Production Center.

Several Argonne researchers will be featured as “new explorers,” including Walter Zinn and Charles Till. Zinn was Argonne’s first director and leader of the project responsible for producing the world’s first nuclear electricity (from Experimental Breeder Reactor-I, in 1951). Charles Till, associate laboratory director for engineering research, will explain the concepts behind the IFR.

You can also read a review of this documentary by Walter Goodman, published in 1996 in the New York Times.

There is also a multi-part YouTube version (in 14-minute chunks) that’s been posted by one of my commenters, Scott, here (thanks for the tip).

After watching these, you’ve really got to ask yourselves — how did we let the last 15 years slip by with no action on this? Still, there’s no point crying over spilt milk. It’s time to get sustainable nuclear energy firmly back on the public agenda. With that motivation in mind, environmental documentary maker Robert Stone is about to embark on a new project, called “Pandora’s Promise“. You can read more about it here, including a multi-page treatment. It’s still in the early stages of development and finalising funding. The synopsis:

PANDORA’S PROMISE will be a feature-length documentary about nuclear power and how mankind’s most feared and controversial technological discovery may ultimately hold the key to its very survival. Built around a number of in-depth interviews with several of the world’s leading environmentalists, scientists and energy experts, many of whom (like me) have undergone a metamorphosis in their thinking about nuclear power, the film will be brought to life through a wealth of incredible archival footage and original filming across the globe. Operating as history, cultural meditation and contemporary exploration, PANDORA’S PROMISE aims to inspire a serious and realistic debate over what is without question the most important issue of our time: how we continue to power modern civilization without destroying it.

I shared a car trip with Robert when travelling from Sacramento to Berkeley the other month, which gave us a good chance to chat about the movie. The previous evening, Robert had joined me, Steve Kirsch and others from SCGI (Ron Gester, Susan von Borstel etc.) for dinner at the Blees’ house, where I was staying. He’s a very nice guy, and makes excellent movies. One of his previous ones was a real love letter to the environmental movement, and includes interviews with Hunter Lovins etc., so if anyone is going to make THE definitive picture on nuclear energy for environmentalists, it’s Robert!

People, we CAN solve the climate and energy crunches of the 21st century, IF we have the will and the knowledge. These old and new video productions could go a long way towards inspiring and educating today’s generation of citizens on the great potential of fission energy as the natural, sustainable successor to fossil fuels. We just have to get people engaged and aware. Help spread the message, push ahead with a ‘can do’ positive attitude, and things may yet change faster than you could ever imagine…

September 21, 2010

Fast reactor future – the vision of an atomic energy pioneer

Filed under: Emissions Reduction, Nuclear Energy — buildeco @ 1:34 pm

by Barry Brook

REACTOR PIONEERS — Some of those who worked on EBR-I posed in front of the sign chalked on the wall when EBR-I produced the first electricity from atomic power. Koch is front row, second from right.

When I was in Idaho Falls in August 2010, one of the places I visited was the Experimental Breeder Reactor I. It’s now a publicly accessible U.S. National Historic Landmark, and has some incredible experimental X-39 atomic aircraft engines sitting out the front (see little inset photo). I’ll talk more about this visit in a later BNC post, but one thing is relevant here. That is, there is a blackboard (now preserved permanently under glass) which includes the chalked signatures of the original EBR-I research crew. One of the names on that list is a young engineer called Leonard Koch — (see photo with him standing there almost 60 years before I looked at the same board!).

Well, Len, at 90, is still going strong, and recently sent the IFRG a speech he gave in 2005 in Russia on fast reactors and the future. It’s a terrific essay, and not available anywhere on the internet (until now — I transcribed his scanned copy). Len kindly gave me permission to post it here on BNC. He also said to me:

I am pleased that you visited EBR-I. It is pretty primitive compared to the very sophisticated plants that are being built today, but it got things started. The plane the Wright Brothers built was even more primitive but they got the airplane business started. The key is to get things started and persist.

Enjoy.

—————————–

Brief bio: A retired “pioneer”, Leonard Koch is probably the oldest continuing supporter of, and participant in, the development of the original concept of nuclear power. He joined Argonne National Laboratory in early 1948 and participated in the development, design, construction and early operation of EBR-I as the Associate Project Engineer. He was responsible for the development, design and construction of EBR-II as the Project Manager. He wrote the book, “EBR-II”, published by the American Nuclear Society, which describes that activity. More here.

Nuclear energy can contribute to the solution of global energy problems

Leonard J. Koch, winner of the 2004 Global Energy International Prize.

This paper was originally presented at the Programme of International Symposium “Science and Society”, March 13, 2005, St. Petersburg, Russia, the year after his prize was awarded, in recognition of the 75th birthday of Zhores Alferov, the founder of the Global Energy International Prize. A large number of Nobel Laureates and Global Energy Laureates participated in the symposium.

Energy has become a dominant, if not the dominant, field of science impacting society. In the last century, man’s use of energy increased more than it did in the entire previous history of civilization. It has resulted in the highest standard of living in history, but it has also created a global dependence on energy that may become very difficult to meet. That is the primary global energy problem. More specifically, it is the growing recognition that the increasing global demand for petroleum will exceed the supply.

Science has produced many uses for petroleum, but by far the most demanding of the unique capabilities of petroleum is its use for transportation of people and goods. Science has created a very mobile global society. Petroleum has made this possible because of its unique capability to serve as an energy source and as an energy “carrier”. Excluding natural gas, which I include in a very broad definition of “petroleum”, there is no alternative to petroleum that can serve both functions. There are energy sources and there are energy carriers, but no single alternative that can satisfactorily combine both capabilities.

It is generally agreed that the Earth was endowed with about two trillion barrels of oil and that about one trillion barrels have been extracted and used. It is also rather generally agreed that the present extraction rate of about 82 million barrels a day is at, or near, the peak rate that is achievable. Demand has been increasing and is expected to continue to increase. Although these figures suggest that only a 35-year supply of petroleum remains, this is of course not what will happen, nor what should be used for planning purposes. A long, gradual transition period will occur, during which a variety of alternatives to petroleum in its various applications must be found and used. The challenge for science and technology is to ensure that sufficient alternatives are acceptable, available and ready when needed.
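Koch’s figures here are easy to sanity-check. As a rough sketch in Python (assuming, purely for the arithmetic, a constant extraction rate, which is exactly the simplification he warns against):

```python
# Naive "years remaining" at a flat extraction rate -- illustrative only.
remaining_barrels = 1e12   # ~1 trillion barrels left, per the text
daily_extraction = 82e6    # ~82 million barrels per day

years_remaining = remaining_barrels / daily_extraction / 365
print(f"{years_remaining:.0f} years")  # ~33 years, close to the ~35 cited
```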

Many people and organizations are addressing this matter. They have produced a variety of predictions and conclusions, which are readily and extensively available on the internet. At best, these predictions are disturbing and describe a difficult and, perhaps, an unpleasant transition period. At worst, they predict a catastrophe and the end of life as we now know it.

They generally agree that no single substitute for petroleum will be found, and there is a wide disparity in the predicted acceptability of combinations of energy sources and energy carriers. Electricity and hydrogen are recognized as potential energy carriers. Electricity is well established. Hydrogen possesses superb “combustion” characteristics, but will require much more development and an immense infrastructure. Its distribution will be difficult and expensive. If it is to be the eventual substitute for petroleum, a huge energy source with very long term availability will be required to produce the hydrogen.

There is little agreement on energy sources that can fulfill this potential demand. Coal is environmentally unacceptable; wind and solar are unreliable, because they require “the wind to blow or the sun to shine”; while hydro and nuclear are considered inadequate because of available resources.

Nuclear energy is included in this latter category because the estimated reserves of uranium are found to be inadequate. This conclusion is scientifically incorrect! It is based on an immature technology which does not incorporate established scientific knowledge.

The “science” of nuclear energy is very simple and very specific. A pound of uranium contains the energy equivalent of about 5,000 barrels of oil, or about 200,000 gallons of gasoline. In scientific terms, one kilogram of uranium contains the energy equivalent of almost two million liters of gasoline.

The United States has an inventory of more than one million tons of uranium in storage in the form of “spent fuel” from reactors, and “depleted uranium” from uranium enrichment plants. This inventory contains the energy equivalent of about ten trillion barrels of oil! The total global inventory of this material must be at least 3 or 4 times as large. These nuclear energy reserves are already mined and refined; the uranium (and thorium) still remaining in the Earth, combined with the existing stockpile, make this a virtually inexhaustible energy supply.
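The “ten trillion barrels” figure follows directly from the per-pound equivalence quoted earlier. A quick check (short tons of 2,000 lb are my assumption here, not stated in the text):

```python
# Energy equivalent of the stored US uranium inventory, using the
# figures as stated: 5,000 barrels of oil per pound of uranium and a
# one-million-ton inventory (short tons assumed).
barrels_per_lb = 5_000
lb_per_ton = 2_000
tons_in_storage = 1_000_000

barrels_equivalent = barrels_per_lb * lb_per_ton * tons_in_storage
print(f"{barrels_equivalent:.0e} barrels")  # 1e+13 -- ten trillion barrels
```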

Clearly, the problem is not that the global uranium reserves are inadequate; it is that the contained energy cannot be extracted using today’s immature technology. Only about one percent of the energy is extracted from natural uranium! The balance remains in the inventories described earlier. The scientific requirements for extracting this energy have been understood for more than 50 years. The technology for doing so has not yet been developed.

Nuclear energy is produced by the fission of uranium atoms in a nuclear reactor. Natural uranium, as it occurs in the earth, is composed of two isotopes, uranium-235 which is fissionable, and uranium-238 which is not fissionable, but is “fertile” and when it absorbs a neutron it is transformed into plutonium-239 which is fissionable.

Natural uranium consists of about 0.7% U-235 and about 99.3% U-238. The U-238 can only be fissioned if it is first “transmuted” to Pu-239. Therefore, natural uranium can only produce energy effectively by transmuting U-238 to Pu-239. The combination of fission and transmutation occurs in any nuclear reactor in which the fuel contains U-235 and U-238 or Pu-239 and U-238.

It occurs in all of the power reactors operating in the world today. In most of them, an adjustment is made in the U-235 concentration to enhance operation. The 0.7% U-235 content is “enriched” to about 3.0%. This process produces “depleted uranium” which contains about 99.8% U-238. None of the energy contained in this enormous global inventory of depleted uranium has been extracted.
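The percentages above imply a simple enrichment mass balance. As a sketch (taking 0.7% U-235 in the feed, 3.0% in the product, and 0.2% in the tails, the last from the “99.8% U-238” depleted-uranium figure):

```python
# Enrichment mass balance: feed = product + tails, with U-235 conserved
# (x_feed * F = x_prod * P + x_tails * W). Percentages from the text.
x_feed, x_prod, x_tails = 0.007, 0.030, 0.002

feed_per_kg_product = (x_prod - x_tails) / (x_feed - x_tails)
tails_per_kg_product = feed_per_kg_product - 1
print(f"{feed_per_kg_product:.1f} kg natural U yields 1 kg enriched fuel "
      f"plus {tails_per_kg_product:.1f} kg depleted uranium")
```

This is why depleted uranium dominates the inventory: every kilogram of enriched fuel leaves several kilograms of tails behind.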

The current generation of nuclear power reactors converts about 1 atom of U-238 into Pu-239 for each 2 atoms of U-235 fissioned. Some of the Pu-239 atoms are fissioned in situ. Therefore, a very small amount of the energy contained in the U-238 is extracted in today’s nuclear power plants. Virtually all of it remains in the spent fuel. The net result of these operations is that about one percent of the energy contained in the original natural uranium energy source has been extracted. The remaining 99% is contained in the spent fuel and depleted uranium. Virtually all of this energy is contained in U-238 which must be converted to Pu-239 to extract it.
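One route to the “about one percent” figure uses typical light-water-reactor burnup numbers. To be clear, all three inputs below are my own assumed round figures, not values from the text: roughly 45 GWd of heat per tonne of enriched fuel achieved in practice, roughly 950 GWd per tonne if every heavy atom fissioned, and about 5.6 kg of natural uranium mined per kg of enriched fuel:

```python
# Fraction of natural uranium's energy extracted by a once-through LWR.
# All three inputs are rough, assumed values (see the note above).
burnup_achieved = 45.0       # GWd per tonne of enriched fuel
burnup_full_fission = 950.0  # GWd per tonne if fully fissioned
feed_ratio = 5.6             # kg natural U mined per kg enriched fuel

fraction_extracted = burnup_achieved / burnup_full_fission / feed_ratio
print(f"{fraction_extracted:.1%}")  # ~0.8%, i.e. "about one percent"
```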

This can be accomplished most efficiently in fast reactors fueled with Pu-239 and U-238. In this system, about 3 atoms of U-238 are converted to Pu-239 for each 2 atoms of Pu-239 fissioned. Because these machines can produce more plutonium than they consume, they are called “breeders”. The current conventional reactors which are about one third as efficient are called “converters”.
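The atom counts in these two paragraphs can be turned into conversion ratios directly; the factor of three between them is where the “about one third as efficient” remark comes from:

```python
# Pu-239 atoms bred per atom fissioned, from the counts in the text.
converter_ratio = 1 / 2   # thermal "converter": 1 bred per 2 fissioned
breeder_ratio = 3 / 2     # fast "breeder": 3 bred per 2 fissioned

print(breeder_ratio / converter_ratio)  # 3.0
```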

From the very early days of the nuclear age, it was predicted that the energy contained in uranium could be extracted by recycling nuclear fuel in fast reactors. It was recognized also that this could only be accomplished if the following questions were answered favorably. Would the neutronics produce a “breeder” type performance? Could energy be extracted usefully and acceptably from large fast neutron power reactors? Could nuclear fuel be recycled through such reactors in the manner required to extract the energy?

The first two questions have been answered. The plutonium – uranium fuel system in fast reactors will permit energy to be extracted from U-238. It has been shown that large fast power reactors can indeed produce useable energy. This has been done, probably most convincingly, in Russia at the BN-600 power station. In addition, work in other countries corroborates that fast power reactors can be used to produce electricity and for other uses.

The third question has not been answered adequately. Nuclear fuel has not been recycled to the extent necessary to demonstrate the capability to extract a significant fraction of the energy contained in uranium! This is the remaining challenge for science and technology.

I was deeply involved in a very early attempt to advance this technology. It evolved into the EBR-II project: the Experimental Breeder Reactor No. 2, developed by Argonne National Laboratory in the United States. It was developed to demonstrate, on a small scale, the feasibility of power generation, but much more importantly, to advance fuel recycle technology. It was a relatively small plant, generating only 20,000 kilowatts of electricity, but it incorporated a complete “fuel cycle facility” interconnected to the nuclear reactor plant. Although fast reactor power plant projects were proceeding in the United States and other countries, none of them incorporated provisions for direct on-site fuel recycle. Therefore, the EBR-II experience is unique and important.

The fuel selected for the first phase of operation was an enriched uranium metal alloy which was actually established by the fuel refining process which had been selected. Neither plutonium, nor plutonium-uranium technology, were available at the time (the 1950s). A relatively simple and imperfect fuel processing system was selected to provide a “starting point” for the development of this technology, with recognition that much additional technology development would be required. The uranium metal fuel was to be processed by melt refining, which removed fission products from molten uranium by volatilization and oxidation. This process provided adequate purification for fast reactor fuel recycle, even though all of the fission products were not removed.

It was estimated that at nominal equilibrium conditions, after several fuel cycles, this process would produce an alloy consisting of about 95% uranium and 5% fission products (about 2.5% molybdenum and 2% ruthenium plus a small amount of “others”). This alloy was named “fissium” and it was decided to create this alloy for the initial fuel loading to avoid a constantly changing fuel composition with each fuel recycle. It was not expected that this first phase of operation would demonstrate a true breeder fuel recycle. That was planned for the next phase.

Simultaneously, some very preliminary laboratory-scale experiments indicated that electrorefining of plutonium-uranium metallic alloys might prove to be suitable for recycle of this fuel in fast power reactors. As a result, the EBR-II program plan was to operate initially on an enriched uranium fuel cycle and shift to a plutonium-uranium fuel cycle later when the technology for that fuel cycle was developed. It was thought that valuable power reactor fuel recycle experience could be obtained during the first phase even though it was not a true breeder fuel cycle.

Only the first phase was accomplished, and only on a limited scale. Five total reactor core loadings were recycled through the reactor. About 35,000 individual fuel elements were reprocessed, fabricated and assembled into almost 400 fuel subassemblies. An administrative decision was made that the United States nuclear power program would concentrate on oxide nuclear fuel for all power reactors, including fast reactors. The EBR-II fuel recycle program, based on metal fuel, was terminated. Reactor operation was continued for more than 20 years, but the fuel was not recycled. The reactor continued operation as a “fissium-fueled”, base load, electrical generating station and a fast neutron irradiation facility. The fuel cycle facility was used for examination of irradiated fuel and other materials.

Even though this program was interrupted, it produced and demonstrated some very useful technology that will be applicable to future recycle systems and provides an overall perspective of nuclear fuel recycle requirements. It includes the performance of highly complex operations in a very strong radiation field and the removal of fission product decay heat during fuel fabrication and assembly operations. Even though future systems may be less demanding, this technology and experience will be invaluable.

Each future recycle system will create unique requirements related specifically to the fuel, the fuel form and the design of the individual fuel elements. They will include removing the spent fuel from its container; (most probably a cylindrical tube), reprocessing the fuel and installing it in a new container.

It is this part of the total fuel recycle process that requires much development and demonstration. There are a variety of potential fuels and fuel forms and a variety of potential purification and fabrication processes which will produce a variety of fuel recycle characteristics and requirements. The composition of the fuel will change during recycle and an equilibrium, or near equilibrium, composition will eventually result. This scenario has not been produced for any of the potential fuel systems, nor will it be, until the required operational experience has been obtained. Global attention is needed because this will be a very slow, long-term undertaking. There are no quick fixes! A fuel cycle will probably take about three years, and several cycles will be required to establish a reasonable demonstration of the total performance of a specific recycle process. There will be, almost certainly, more than one total fuel recycle system to pursue; possibly several. Each will be unique and produce its own results and create its own requirements.

I have proposed that the United States initiate a program to begin the process by constructing a “fuel recycle reactor” (FRR) designed specifically to provide a facility in which these fuels can be recycled. I do not believe that a single facility of this kind can begin to do the job that is necessary to establish this badly needed technology. I know that it is presumptuous of me to suggest what other countries should do; but, I propose that a vigorous international effort be undertaken to develop and establish the technology required to recycle nuclear fuel in fast power reactors and thus make it possible for the world to use the tremendous capability which exists in the global resources of nuclear fuel.

This is a timely international challenge. I note that Japan is considering the restart of their Monju fast reactor and is exploring international participation in fuel cycle technology. I note also that India is proceeding with their first fast power reactor with a capacity of 500 megawatts and plans to build three more by 2020. I find this to be a very interesting development; India has maintained a continuing technical interest in fast reactors since the very early days of nuclear power. I expect this program will bring a new perspective to nuclear power and fuel recycle. India has a strong interest in the U-233 thorium cycle because of their large indigenous supply of thorium.

Th-232, which is not fissionable, is similar to U-238; when it absorbs a neutron, it is transformed into fissionable U-233. This process also can be best accomplished in fast reactors and requires fuel recycle. Therefore, fuel recycle technology also must be developed to extract this source of energy. The vast global thorium reserves should be included in estimates of total global nuclear energy capability.

On a longer range basis, the magnitude of the demand for energy sources will eventually become dominant. In addition to providing an alternative to dwindling petroleum resources, there will be the need to provide for the continuing growth in demand for energy to satisfy the needs of increasing global population and their standard of living.

For nuclear energy to contribute significantly to satisfying this enormous potential demand, it will be necessary to not only develop the technology, but to make it acceptable!

History has established a relationship between nuclear energy and nuclear weapons that is not clearly defined or well understood. Nuclear weapons are produced from fissionable materials, but recycled power reactor fuel is not a suitable source for that material. Even the spent fuel after only one fuel cycle in current generation power reactors is unsuitable for weapons use. After multiple recycles, the fuel is essentially useless for weapons.

It will be necessary to demonstrate that nuclear energy on the vast scale I have suggested will not result in unacceptable nuclear waste. Efficient fuel recycle has the potential capability of virtually eliminating this requirement. The primary problem presented by the long term storage of spent fuel is the long half-life of the actinides produced in the spent fuel. They can be destroyed by fission.

A complete nuclear fuel recycle process will destroy these actinides and produce energy from those that fission. At equilibrium, all of the necessary processes will be operating simultaneously: Pu-239 will be fissioning, and the higher isotopes of plutonium will be either fissioning or absorbing neutrons and transmuting into isotopes that fission and are destroyed.

The ideal fuel cycle will recycle all of the uranium, the plutonium isotopes and the other actinides and remove only fission products during each fuel cycle. The nuclear waste will consist primarily of fission products which will be far easier to store and virtually all of the energy will have been extracted from the original energy source, natural uranium. A similar scenario can be developed for thorium. The science is firmly established. The technology is needed. The incentive to do so is enormous. It is to provide an inexhaustible supply of energy for the foreseeable future and beyond.

September 15, 2010

IFR FaD 6 – fast reactors are easy to control


by Barry Brook

There are many topics in the IFR FaD series that I want to develop in sequence — and in some detail. But for the moment, here’s a little diversion. People often complain that sodium-cooled fast reactors are about as easy to control as wild stallions — at least compared to the docile mares that are water-moderated thermal reactors. The experience on the EBR-II (which I’ll describe further in future posts) certainly belies this assertion, but for now, I want to go to another source.

Here are comments from Joël Sarge Guidez, written in 2002. Guidez was Chairman of the International Group On Research Reactors (IGORR), Director of the Phénix fast breeder reactor (a 233 MWe power plant which operated in France for more than 30 years, with an availability factor of 78% in 2004, 85% in 2005 and 78% in 2006), and President of the club of French Research Reactors:

—————————

A reactor that’s easy to live with

Pressurised water reactor specialists are always surprised how easy it is to run a fast reactor: no pressure, no neutron poisons like boron, no xenon effect, no compensatory movements of the rods, etc. Simply, when one raises the rods, there is divergence and the power increases. Regulating the level of the rods stabilises the reactor at the desired power. The very strong thermal inertia of the whole unit allows plenty of time for the corresponding temperature changes. If one does nothing, the power will gradually decrease as the fuel ages, and from time to time one will have to raise the rods again to maintain constant power. It all reminds one of a good honest cart-horse rather than a highly-strung race horse.

Similarly, the supposed drawbacks of sodium often turn out in practice to be advantages. For example, the sodium leaks (about thirty so far since the plant first started up) create electrical contacts and produce smoke, which means they can be detected very quickly. Again, the fact that sodium is solid at ambient temperature simplifies many operations on the circuits. More generally, because of the chemical properties of sodium, the plant is designed to keep it rigorously confined, including during handling. During operation, all this provides a much greater “dosimetric convenience” than conventional reactors. In particular, a very large part of the plant is completely accessible to staff whatever power the reactor is at, and the dose levels are very low.

Because of the very high neutron flux (more than ten times as high as with water reactors), there is great demand for experiments. These experiments are performed using either rigs inside carrier sub-assemblies or using special experimental sub-assemblies with particular characteristics. All experiments are run and monitored in the core like the other subassemblies.

Since its origin, Phénix has irradiated around 1000 sub-assemblies, of which 200 were experimental sub-assemblies. It is true that Phénix is not as flexible as an experimental water reactor, in which targets can easily be handled and moved. But, with a minimum of preparation – which is necessary anyway for reasons of safety and quality – numerous parameters such as flux, spectrum and duration can be adjusted to the needs of each experiment.

Furthermore, the reactor was designed by modest people who thought in advance of everything that would be needed for intervention on the plant: modular steam generators, washing pits, component handling casks etc. All of which has been very useful and has made possible numerous operations and modifications in every domain. All this has meant that a prototype reactor built in the early 1970s is still operational in 2004, and will continue so for several years yet.

—————————
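Guidez’s point about thermal inertia can be illustrated with a toy lumped-parameter model. To be clear, this is my own sketch with made-up numbers, not Phénix data: a large heat capacity C makes the coolant temperature respond slowly to a step in power P, which is what gives operators “plenty of time”.

```python
# Toy model: dT/dt = (P - k*(T - T_in)) / C, integrated with Euler steps.
# All parameter values below are hypothetical, for illustration only.
P = 500e6     # W   -- power step after raising the rods
k = 2.5e6     # W/K -- effective heat-removal coefficient
C = 5e9       # J/K -- large sodium-pool heat capacity
T_in, dt = 400.0, 1.0

T = T_in
for _ in range(600):  # ten minutes of one-second steps
    T += dt * (P - k * (T - T_in)) / C

T_steady = T_in + P / k  # equilibrium temperature for this power level
print(f"after 10 min: {T:.0f} C (steady state: {T_steady:.0f} C)")
```

With these (hypothetical) numbers the time constant C/k is about 2,000 seconds, so ten minutes after a large power step the coolant has covered only about a quarter of the distance to its new steady state.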

Some further useful information can be had from Guidez’s presentation at the 2008 International Group on Research Reactors conference. Download and read over this 19-page PDF, which is the easy-to-read slides of his presentation, called “THE RENAISSANCE OF SODIUM FAST REACTORS STATUS AND CONTRIBUTION OF PHENIX”.
