Climate change

June 23, 2011

Disposal of UK plutonium stocks with a climate change focus

Filed under: IFR (Integral Fast Reactor) Nuclear Power, Nuclear Energy — buildeco @ 1:49 pm
by Barry Brook

In the 1950s, following World War II, the United Kingdom and a handful of other nations developed a nuclear weapons arsenal. This required the production of plutonium metal (or highly enriched uranium) in purpose-built facilities. ‘Civil’ plutonium was also produced, since the facilities for separation existed and it was thought that this fissile material would prove useful in further nuclear power development.

Fifty years on, the question of what to do with the UK’s separated plutonium stocks is an important one. Should it, for instance, be downblended with uranium to produce mixed oxide fuel for thermal reactors, and then disposed of in a geological repository once it has been ‘spiked’ by fission products and higher actinide isotopes? Or is there, perhaps, an alternative that would be of far greater medium- to long-term benefit to the UK, because it treats the plutonium not as waste, but as a major resource to capitalise on?

In the piece below, Tom Blees explores these questions. This was written as a formal submission in response to the paper “Management of the UK’s Plutonium Stocks: A consultation on the long-term management of UK owned separated civil plutonium” (the background paper is interesting and not all that long).

This is the final post in a series of three advocating SCGI’s position on the need for the IFR: (i) to provide abundant low-carbon energy, and (ii) to serve as a highly effective means of nuclear waste management and fuel extension for sustainable (effectively inexhaustible) nuclear fission.

—————————–

Response to a consultation on the management of the UK’s plutonium stocks

Tom Blees, President of The Science Council for Global Initiatives

Do you agree that it is not realistic for the Government to wait until fast breeder reactor technology is commercially available before taking a decision on how to manage plutonium stocks?

I strongly disagree, and I hope that you’ll take the time to read this and consider the fact that the fast reactor option is far more imminent than you might have heretofore believed. Not only that, but it is arguably the best option by far.

Current Fast Reactor Development

Worldwide there are well over 300 reactor-years of experience with fast reactors. Russia’s BN-600 fast reactor has been producing commercial electricity for over 30 years, and Russia is beginning to build BN-800 reactors both for its own use and for China. India’s first commercial-scale fast reactor is due to be finished within a year or two. South Korea has already built a sizeable pyroprocessing facility to convert its spent LWR fuel into metal fuel for fast reactors, and has refrained from starting it up only because of diplomatic agreements with the USA that are due to be renegotiated in the near future. China is building a copy of the Experimental Breeder Reactor II (EBR-II) that was the mainstay of the Integral Fast Reactor (IFR) development program at Argonne National Laboratory in the USA. Japan has reopened its Monju fast reactor to continue that research, though it should be noted that Toshiba and Hitachi contested the wisdom of that decision, favoring instead the metal-fueled fast reactor design exemplified by the EBR-II.

The advantages of metal fuel in fast reactors are difficult to overstate. Rather than attempt to explicate the details here, I would refer the reader to the following URL: http://tinyurl.com/cwvn8n. This is a chapter from a book that deals at length with the Integral Fast Reactor (IFR). The advantages of this system in safety, economics, fuel utilization, proliferation resistance and plutonium breeding or burning far outstrip those of any of the other options mentioned in the consultation document.

While fast breeders are mentioned as a future option, the rest of the document seems to have been unduly influenced by those who favor either MOX fabrication or long-term disposal. Both of these are mistakes that the USA has already made to one degree or another, mistakes that I would hope the UK will avoid when presented with the facts.

A Little History

In 1993, Presidents Yeltsin and Clinton signed nuclear disarmament agreements that would result in each country possessing 34 tons of excess weapons-grade plutonium. Since proliferation concerns would warrant safe disposal of this material, each president asked for the advice of one of their prominent scientists as to how to get rid of it. Yeltsin asked Dr. Evgeny Velikhov, one of the most prominent scientists in Russia to this day, who had been intimately involved in Russia’s military and civilian nuclear programs and was, in fact, in charge of the Chernobyl cleanup. Clinton asked Dr. John Holdren, who is now the director of the White House Office of Science & Technology Policy—President Obama’s top science advisor.

In July of 2009 I arranged for a meeting with Dr. Velikhov and Dr. Holdren in Washington, D.C. At that meeting we discussed what had happened when those two had met to decide on what advice to give to their respective presidents regarding the disposition of 68 tons of weapons-grade plutonium. Velikhov’s position was that it should be burned in fast reactors to generate electricity. Holdren disagreed. He contended that each country should build a MOX plant to dispose of it. That advice led to the construction that is now being done in South Carolina by Areva of a MOX plant that is expected to cost as much as ten billion dollars by the time all is said and done. And the processing of that plutonium into MOX fuel will take until the year 2030 at the very least.

Dr. Velikhov wasn’t buying it, nor was Yeltsin. But Holdren was in a tough position. Clinton had already signaled his lack of support for the IFR project that had been ongoing for nine years and was now in its final stages. It would be shut down the very next year by a duped Congress that had no idea of its importance and was manipulated into cutting off its funding for purely political reasons. Clinton wanted Russia’s solution for disposal of the excess plutonium to be the same as the USA’s, but Yeltsin said that he wasn’t prepared to spend the money. If Clinton wanted Russia to build a MOX plant, then America could pay for it. Needless to say, that never happened. And after 17 years of indecision, last spring the USA finally agreed that Russia should go ahead and dispose of their 34 tons in fast reactors.

By this time, the USA had contracted with Areva to build the South Carolina MOX plant, now under construction. That boondoggle will be a painfully slow and inefficient method of disposing of the plutonium compared to using fast reactors. Dr. Holdren made it clear at that meeting that he fully comprehends the wisdom of using IFRs to dispose of plutonium.

Salesmanship

Areva has not only talked the USA into building a horrendously expensive MOX plant, but judging by the tone of this consultation document they have apparently convinced some of the policymakers in the UK to do the same. This is as wrong now as it was when Holdren advised Clinton in 1993. Yet the South Carolina MOX plant’s construction is well underway and, like most big government-funded projects, would be about as hard to cancel at this point as turning a supertanker in the Thames. But the UK needn’t go down that road.

Areva touts its MOX technology as the greatest thing since sliced baguettes, yet in reality it only increases the utilization of the energy in uranium from about 0.6% to 0.8%. Metal-fueled fast reactors, on the other hand, can recover virtually 100% of that energy. Ironically, when discussing the ultimate shortcomings of Areva’s MOX policies with one of their own representatives, those unpleasant details were dismissed with the assurance that all that will be dealt with when we make the transition to fast reactors. Yet with billions of dollars tied up in MOX technology, Areva is anything but anxious to see that transition happen anytime soon. And the more countries they can convince to adopt MOX technology, the slower that transition will happen, for each of those countries will then have a large investment sunk into the same inferior technology.
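
To put those utilization figures in perspective, here is a back-of-the-envelope sketch (an editorial illustration, not part of the original submission). The 0.6%, 0.8% and ~100% figures come from the paragraph above; the roughly 80 TJ of thermal energy per kilogram of fully fissioned uranium is an assumed textbook value.

```python
# Back-of-the-envelope: thermal energy recoverable per tonne of mined
# uranium under the three utilization figures quoted above. The ~80 TJ/kg
# for complete fission is an assumed textbook value, not from the text.

ENERGY_PER_KG_FISSIONED_TJ = 80.0  # thermal energy from fissioning 1 kg of uranium
KG_PER_TONNE = 1000

scenarios = [
    ("Once-through LWR",          0.006),  # ~0.6% of the energy used
    ("LWR with MOX recycle",      0.008),  # ~0.8%
    ("Metal-fueled fast reactor", 1.00),   # ~100%
]

for label, utilization in scenarios:
    energy_tj = ENERGY_PER_KG_FISSIONED_TJ * KG_PER_TONNE * utilization
    print(f"{label:27s} {utilization:6.1%} -> {energy_tj:8,.0f} TJ per tonne")
```

On those assumptions, MOX recycling buys roughly a one-third improvement over once-through use, while full recycling in fast reactors raises the energy recovered per tonne of mined uranium by more than two orders of magnitude.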

A Pox on MOX

MOX is not only expensive, but it results in the separation of plutonium (though of course that’s not the issue in this case, since the plutonium is already separated). That being said, the risk of proliferation from reactor-grade plutonium is quite overblown in general, since its isotopic composition makes it nearly impossible to fashion into a nuclear weapon. But regardless of its actual risk in that regard, its perception by the scientifically uninformed makes it politically radioactive, and international agreements to limit the spread of fissile material treat it as if it were weapons-grade. So any plan for the disposition of any sort of plutonium—whatever its composition—must take the politics into account.

If the UK were to spend five billion pounds or so on a MOX plant, it would end up with a lot of overpriced fuel that would have to be given away at a loss, since any utility company would surely choose to buy cheaper fuel made from enriched virgin uranium. You would have a horrendously expensive single-purpose facility that would have to operate at a substantial loss for decades to consume the vast supplies of plutonium in question. And you would still end up with vast amounts of long-lived spent fuel that would ultimately, hopefully, be converted and used in fast reactors. Why not skip the MOX step altogether?

Given that the plutonium contains an almost unimaginable amount of energy within it, opting for long-term disposal via vitrification and burial would be unconscionable. The world will surely be in need of vast amounts of clean energy in the 21st century as the burgeoning population will demand not only energy for personal and industrial use, but will require energy-hungry desalination projects on a stunning scale. The deployment of fast reactors using the plutonium that earlier policymakers in the UK wisely decided to stockpile is a realistic solution to the world’s fast-approaching energy crisis.

Sellafield Nuclear Plant, UK

But this consultation report questions whether fast reactors can be deployed in the near future on a commercial scale. They can.

The PRISM Project

While the scientists and engineers were perfecting the many revolutionary features of the IFR at the EBR-II site in the Eighties and early Nineties, a consortium of major American firms collaborated with them to design a commercial-scale fast reactor based on that research. General Electric led that group, which included companies like Bechtel, Raytheon and Westinghouse, among others. The result was a modular reactor design intended for mass production in factories, called the PRISM (Power Reactor Innovative Small Module). A later iteration, the S-PRISM, would be slightly larger at about 300 MWe, while still retaining the features of the somewhat smaller PRISM. For purposes of simplicity I will refer hereinafter to the S-PRISM as simply the PRISM.

After the closure of the IFR project, GE continued to refine the PRISM design and is in a position to pursue the building of these advanced reactors as soon as the necessary political will can be found. Unfortunately for those who would like to see America’s fast reactor be built in America, nuclear politics in the USA is nearly as dysfunctional as it is in Germany. The incident at Fukushima has only made matters worse.

The suggestion in this report that fast reactors are thirty years away is far from accurate. GE-Hitachi plans to submit the PRISM design to the Nuclear Regulatory Commission (NRC) next year for certification. That time-consuming process, while certainly not taking thirty years, may well still be under way even as the first PRISM is built in another country.

This is far from unprecedented. In the early Nineties, GE submitted its Advanced Boiling Water Reactor (ABWR) design to the NRC for certification. GE then approached Toshiba and Hitachi and arranged for each of those companies to build one in Japan. Those two companies proceeded to get the design approved by their own NRC counterpart, built the first two ABWRs in just 36 and 39 months, fueled and tested them, then operated them for a year before the NRC in the US finally certified the design.

International Partners

On March 24th an event was held at the Russian embassy in Washington, D.C., attended by a small number of members of the nuclear industry and its regulatory agencies, both foreign and domestic, as well as representatives of NGOs concerned with nuclear issues. Sergei Kirienko, the director-general of Rosatom, Russia’s nuclear power agency, was joined by Dan Poneman, the deputy secretary of the U.S. Dept. of Energy. This was shortly after the Fukushima earthquake and tsunami, at a time when the nuclear power reactors at Fukushima Daiichi were still in a very uncertain condition.

Mr. Kirienko and Mr. Poneman first spoke about the ways in which the USA and Russia have been cooperating in tightening control over fissile material around the world. Then Mr. Kirienko addressed what was on the minds of all of us: the situation in Japan and what that portends for nuclear power deployment in the USA and around the world.

He rightly pointed out that the Chernobyl accident almost exactly 25 years ago, and the Fukushima problems now, clearly demonstrate that nuclear power transcends national boundaries, for any major accident can quickly become an international problem. For this reason Kirienko proposed that an international body be organized that would oversee nuclear power development around the world, not just in terms of monitoring fissile material for purposes of preventing proliferation (much as the IAEA does today), but to bring international expertise and oversight to bear on the construction and operation of nuclear power plants as these systems begin to be built in ever more countries.

Kirienko also pointed out that the power plants at risk in Japan were old reactor designs. He said that this accident demonstrates the need to move nuclear power into the modern age. For this reason, he said, Russia is committed to the rapid development and deployment of metal-fueled fast neutron reactor systems. His ensuing remarks specifically reiterated not only a fast reactor program (where he might have been expected to speak about Gen III or III+ light-water reactor systems), but the development of metal fuel for these systems. This is precisely the technology that was developed at Argonne National Laboratory under the Integral Fast Reactor (IFR) program, but then prematurely terminated in 1994 in its final stages.

For the past two years I’ve been working with Dr. Evgeny Velikhov (director of Russia’s Kurchatov Institute and probably Russia’s leading scientist/political advisor) to develop a partnership between the USA and Russia to build metal-fueled fast reactors; or to be more precise, to facilitate a cooperative effort between GE-Hitachi and Rosatom to build the first PRISM reactor in Russia as soon as possible. During those two years there have been several meetings in Washington to put the pieces in place for such a bilateral agreement. The Obama administration, at several levels, seems to be willingly participating in and even encouraging this effort.

Dr Evgeny Velikhov, SCGI member

Dr. Velikhov and I (and other members of the Science Council for Global Initiatives) have also been discussing the idea of including nuclear engineers from other countries in this project, countries which have expressed a desire to obtain or develop this technology, some of which have active R&D programs underway (India, South Korea, China). Japan was very interested in this technology during the years of the IFR project, and although their fast reactor development is currently focused on their oxide-fueled Monju reactor there is little doubt that they would jump at the chance to participate in this project.

Dr. Velikhov has long been an advocate of international cooperation in advanced nuclear power research, having launched the ITER project about a quarter-century ago. He fully comprehends the impact that international standardization and deployment of IFR-type reactors would have on the well-being of humanity at large. Yet if Russia and the USA were to embark upon a project to build the first PRISM reactor(s) in Russia, one might presume that the Russians would prefer to make it a bilateral project that would put them at the cutting edge of this technology and open up golden opportunities to develop an industry to export it.

It was thus somewhat surprising when Mr. Kirienko, in response to a question from one of the attendees, said that Russia would be open to inviting Japan, South Korea and India to participate in the project. One might well question whether his failure to include China in this statement was merely an oversight or whether that nation’s notorious reputation for economic competition often based on reverse-engineering new technologies was the reason.

I took the opportunity, in the short Q&A session, to point out to Mr. Poneman that the Science Council for Global Initiatives includes not just Dr. Velikhov but most of the main players in the development of the IFR, and that our organization would be happy to act as a coordinating body to assure that our Russian friends will have the benefit of our most experienced scientists in the pursuit of this project. Mr. Poneman expressed his gratitude for this information and assured the audience that the USA would certainly want to make sure that our Russian colleagues had access to our best and brightest specialists in this field.

Enter the United Kingdom

Sergei Kirienko was very clear in his emphasis on rapid construction and deployment of fast reactors. If the United States moves ahead with supporting a GE-Rosatom partnership, the first PRISM reactor could well be built within the space of the next five years. The estimated cost of the project will be in the range of three to four billion dollars (USD), since it will be the first of its kind. The more international partners share in this project, the less will be the cost for each, of course. And future copies of the PRISM have been estimated by GE-Hitachi to cost in the range of $1,700/kW.
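
A quick sanity check on those numbers (my arithmetic only; the ~300 MWe module size is taken from the PRISM description earlier in this piece, and both cost figures from the paragraph above):

```python
# Rough comparison of the first-of-a-kind PRISM project cost with
# GE-Hitachi's quoted figure for subsequent units. Module size and cost
# figures are as quoted in the text; the rest is simple arithmetic.

module_kw = 300 * 1_000          # ~300 MWe per S-PRISM module
foak_cost_usd = 3.5e9            # midpoint of the $3-4 billion first-of-a-kind estimate
series_cost_per_kw = 1_700       # GE-Hitachi's estimate for later units

foak_cost_per_kw = foak_cost_usd / module_kw
series_cost_usd = series_cost_per_kw * module_kw

print(f"First-of-a-kind: ~${foak_cost_per_kw:,.0f}/kW "
      f"(${foak_cost_usd / 1e9:.1f} billion total)")
print(f"Series unit:     ~${series_cost_per_kw:,}/kW "
      f"(${series_cost_usd / 1e9:.2f} billion total)")
```

If those estimates held, a production module would come in at around half a billion dollars, roughly a seventh of the first-of-a-kind project cost.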

Work is under way on gram samples of civil plutonium

According to this consultation document, the UK is looking at spending £5-6 billion or more on dealing with its plutonium. Yet if the plutonium were simply secured as it currently is for a short time longer, and the UK involved itself in the USA/Russia project, the cost would be a small fraction of that amount, and when the project was completed the UK would have the technology in hand to begin mass production of PRISM reactors.

The plutonium stocks of the UK could be converted into metal fuel using the pyroprocessing techniques developed by the IFR project (and which, as noted above, are ready to be utilized by South Korea). The Science Council for Global Initiatives is currently working on arranging for the building of the first commercial-scale facility in the USA for conversion of spent LWR fuel into metal fuel for fast reactors. By the time the first PRISM is finished in Russia, that project will also likely be complete.

What this would mean for the UK is that its stores of plutonium would become the fast reactor fuel envisioned by earlier policymakers. After a couple of years in the reactor, the spent fuel would be ready for recycling via pyroprocessing, then either stored for future use or used to start up even more PRISM reactors. In this way not only would the plutonium be used up, but the UK would painlessly transition to fast reactors, obviating any need for future mining or enrichment of uranium for centuries, since once the plutonium is used up the current inventories of depleted uranium could be used as fuel.

Conclusion

Far from being decades away, a fully-developed fast reactor design is ready to be built. While I’m quite certain that GE-Hitachi would be happy to sell a PRISM to the UK, the cost and risk could be reduced to an absolute minimum by the happy expedient of joining in the international project with the USA, Russia, and whichever other nations are ultimately involved. The Science Council for Global Initiatives will continue to play a role in this project and would be happy to engage the UK government in initial discussions to further explore this possibility.

There is little doubt that Russia will move forward with fast reactor construction and deployment in the very near future, even if the PRISM project runs into an unforeseen roadblock. It would be in the best interests of all of us to cooperate in this effort. Not only will the deployment of a standardized modular fast reactor design facilitate the disposition of plutonium that is currently the driving force for the UK, but it would enable every nation on the planet to avail itself of virtually unlimited clean energy. Such an international cooperative effort would also provide the rationale for the sort of multinational nuclear power oversight agency envisioned by Mr. Kirienko and others who are concerned not only about providing abundant energy but also in maintaining control over fissile materials.

June 6, 2011

Renewables and efficiency cannot fix the energy and climate crises (part 2)

by Barry Brook

This post continues directly on from Part 1 (please read that if you’ve not already done so!). I also note the flurry of interest in the new IPCC WGIII special report on renewable energy prospects through to 2050. I will have more to say on this in an upcoming post, but in short, it fails to address — with any substance — any of the significant problems I describe below, or in the previous post. What a disappointment!

————————

Renewables and efficiency cannot fix the energy and climate crises (part 2)

Renewable energy cannot provide reliable 24-hour, 7-day-a-week power to meet baseload demand

The minimum amount of power that a city or country demands usually occurs at night (when most people are asleep); this is called the electricity ‘baseload’. Some have claimed that it is a fallacy to argue that all of this demand is really needed, because utilities deliberately charge cheap (‘off-peak’) rates during these low-use periods to encourage more uptake (by everything from factory machinery to hot water systems). They do this because some types of power stations (e.g., coal and nuclear) are quite expensive to build and finance (with long terms to pay off the interest) but fairly cheap to run, so the utility wants to keep them humming away 24 hours a day to maximise returns. There is some truth to this argument, although if that energy is not used at night, extra must instead be supplied during the day.

Some critical demand, however, never goes away – the power required to run hospitals, police stations, street lights, water and sewerage pumping stations, refrigerators and cold storage, transport (if we are to use electric vehicles), and so on. If the power is lost to these services, even for a short while, chaos ensues, and the societal backlash after a few such events is huge. On the other side of the energy coin, there are times when huge power demands arise, such as when everyone gets home from work to cook their meals and watch television, or when we collectively turn on our air conditioners during a heatwave. If the energy to meet this peak demand cannot be found, the result can be anything from a lot of grumpy people through to collapse of the grid as rolling blackouts occur.

Two core limitations of wind, solar and most other renewable systems are that: (i) they are inherently variable and prone to ‘gambler’s ruin’ (in the sense that you cannot know, over any planning period, when long stretches of calm or cloudy days will come, which could bring even a heavily over-compensated system to its knees); and (ii) they are not ‘dispatchable’. They’ll provide a lot of power some of the time, when you may or may not need it, and little or none at other times, when you’ll certainly need some, and may need a lot. In short, they can’t send power out on demand, yet, for better or worse, this is what society demands of an electricity system. Okay, but can these limitations be overcome?

Large-scale renewables require massive ‘overbuilding’ and so are not cost competitive

The three most commonly proposed ways to overcome the problem of intermittency and unscheduled outages are: (i) to store energy during productive times and draw on these stores during periods when little or nothing is being generated; (ii) to have a diverse mix of renewable energy systems, coordinated by a smart electronic grid management system, so that even if the wind is not blowing in one place, it will be in another, or else the sun will be shining or the waves crashing; and (iii) to have fossil fuel or nuclear power stations on standby, to take up the slack when needed.

The reality is that all of these solutions are grossly uneconomic, and even if we were willing and able to pay for them, the result would be an unacceptably unreliable energy supply system. Truly massive amounts of energy would need to be stored to keep a city or country going through long stretches of cloudy winter days (yes, these occur even in the desert) or calm nights with little wind and no sun, yet energy storage (batteries, chemical conversion to hydrogen or ammonia, pumped hydropower, compressed air), even on a small scale, is currently very expensive. A mix of different contributions (solar, wind, wave, geothermal) would help, but then we’d need to pay for each of these systems, each built to a level where it could compensate for the failure of another.

What’s more, in order to deliver all of our regular power demand whilst also charging up the energy stores, we would have to ‘overbuild’ our system many times over, adding to the already prohibitive costs. As a result, an overbuilt system of wind and solar would, at times, be delivering 5 to 20 times our power demand (leading to problems of ‘dumping’ the excess energy that can’t be used or stored quickly enough or in sufficient quantity), and at other times it would deliver virtually none of it.

If you do some modelling to work through the many contingencies, you find that a system relying on wind and/or solar power, plus large-scale energy storage and a geographically dispersed transmission network to channel power to load centres, would seem to be 10 to 40 times more expensive than an equivalent nuclear-powered system, and still less reliable. The cost to avoid 1 tonne of carbon dioxide would be more than $800 with wind power, compared with $22 with nuclear power.
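
To see where such multiples come from, consider a deliberately oversimplified sizing exercise (my own illustration; the capacity factor, storage share and round-trip efficiency are assumptions, not figures from this post):

```python
# Toy sizing model: wind nameplate capacity needed to serve a constant
# 1 GW demand when part of the energy must be cycled through storage.
# All input values are illustrative assumptions.

demand_gw = 1.0           # constant (baseload) demand to be met
capacity_factor = 0.30    # assumed long-run average output of wind
storage_share = 0.5       # assumed fraction of delivered energy routed via storage
round_trip_eff = 0.75     # assumed storage round-trip efficiency

# Average generation required per unit of delivered energy, counting storage losses
gen_required_gw = demand_gw * ((1 - storage_share) + storage_share / round_trip_eff)
nameplate_gw = gen_required_gw / capacity_factor

print(f"Average generation required: {gen_required_gw:.2f} GW")
print(f"Nameplate wind required:     {nameplate_gw:.1f} GW per GW of firm demand")
```

Even this generous model, which ignores the capital cost of the storage itself and of the extra transmission, implies close to 4 GW of nameplate wind per GW of firm demand; sizing for worst-case lulls rather than long-run averages is what pushes the overbuild toward the 5 to 20 times range mentioned above.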

The above critiques of renewable energy might strike some readers as narrow minded or deliberately pessimistic. Surely, isn’t it just a matter of prudent engineering and sufficient integration of geographically and technologically diverse systems, to overcome such difficulties? Alas, no! Although I only have limited space for this topic in this short post, let me grimly assure you that the problem of ‘scaling up’ renewable energy to the point where it can reliably meet all (or even most) of our power needs, involves solving a range of compounding, quite possibly insuperable, problems. We cannot wish these problems away — they are ‘the numbers’, ‘the reality’.

Economic and socio-political realities

Supporters of ‘100% renewable energy’ maintain that sunlight, wind, waves and plant life, combined with vast improvements in energy efficiency and energy conservation leading to a flattening or reduction in total energy demand, are the answer. This is a widespread view among environmentalists, and it would be perfectly acceptable to me if the numbers could be made to work. But I seriously doubt they can.

The high standard of living in the developed world has been based on cheap fossil (and nuclear) energy. While we can clearly cut back on energy wastage, we will still have to replace oil and gas. And that means a surge in demand for electricity, both to replace the energy now drawn from oil and gas and to meet the additional demand for power from that third of the world’s people who currently have no electricity at all.

Critics do not seem to understand – or refuse to acknowledge – the basis of modern economics and the investment culture. Some dream of shifts in the West and the East away from consumerism. There is a quasi-spiritualism which underpins such views. Yet at a time of crisis, societies must be ruthlessly practical in solving their core problems or risk collapse. Most people will fight tooth-and-nail to avoid a decline in their standard of living. We need to work with this, not against it. We are stuck with the deep-seated human propensity to revel in consuming and to hope for an easier life. We should seek ways to deliver in a sustainable way.

A friend of mine, the Californian entrepreneur Steve Kirsch, has put the climate-energy problem succinctly:

The most effective way to deal with climate change is to seriously reduce our carbon emissions. But we’ll never get the enormous emission reductions we need by treaty. Been there, done that – it’s not going to happen. If you want to get emissions reductions, you must make the alternatives for electric power generation cheaper than coal. It’s that simple. If you don’t do that, you lose.

Currently, no non-fossil-fuel energy technology has achieved this. So what is stopping nations replacing coal, oil and gas infrastructure with renewable energy? It is not (yet) because of any strong, society-wide opposition to a switch to renewables. No, it is economic uncertainty, technological immaturity, and good old financial risk management. Despite what ’100% renewables’ advocates would lead you to believe, it is still far from certain in what way the world will pursue a low-carbon future. You have only to look at what’s happening in the real world to verify that.

I’ve already written about fast-growing investment in nuclear energy in Asia. China, for instance, has overcome typical first-of-a-kind engineering cost overruns by building more than 25 reactors at the same time, in a bid to bring costs to, or below, those of coal.

In December 2009, there was a telling announcement from the United Arab Emirates (UAE), which wishes to sell its valuable natural gas to the export market. Within the next few years, the UAE faces a six-gigawatt increase in demand for electricity, which includes additional power required by an upgraded desalination program. Despite being desert-based with a wealth of solar resources, the UAE decided not to build large-scale solar power plants (or any other renewable technology). In terms of economics and reliability, the numbers just didn’t stack up. Instead, they have commissioned a South Korean consortium to build four new generation III+ APR-1400 reactors, at a cost of $3,500 a kilowatt installed – their first ever nuclear power plants.
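
The headline numbers in that announcement are easy to check (my arithmetic; the 1,400 MWe per unit is implied by the APR-1400 designation):

```python
# Sanity check on the UAE order described above: four APR-1400 units
# at the quoted $3,500 per kilowatt installed.

units = 4
kw_per_unit = 1_400 * 1_000   # APR-1400 nameplate output in kW
cost_per_kw_usd = 3_500       # quoted installed cost

total_gw = units * kw_per_unit / 1e6
total_cost_usd = units * kw_per_unit * cost_per_kw_usd

print(f"Total capacity: {total_gw:.1f} GW")                    # 5.6 GW
print(f"Total cost:     ${total_cost_usd / 1e9:.1f} billion")  # ~$19.6 billion
```

That 5.6 GW of dispatchable capacity lines up almost exactly with the six-gigawatt demand increase the UAE was anticipating.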

Conclusion

Nuclear power, not renewable energy or energy efficiency, will probably end up being the primary global solution to the climate and energy crises. This is the emergent result of trying to be honest, logical and pragmatic about what will and will not work, within real-world physical, economic and social constraints.

If I am wrong, and non-hydro and non-combustible renewables can indeed rise to the challenge and ways can be found to overcome the issues I’ve touched on in these two posts, then I will not complain. After all, my principal goal — to replace fossil fuels with sustainable and low-carbon alternative energy sources — would have been met. But let’s not play dice with the biosphere and humanity’s future on this planet, and bet everything on such wishful thinking. It would be a risky gamble indeed.

Renewables and efficiency cannot fix the energy and climate crises (part 1)

by Barry Brook

We must deal simultaneously with the energy-resource and climate-change pincers

The modern world is caught in an energy-resource and climate-change pincer. As the growing mega-economies of China and India strive to build the prosperity and quality of life enjoyed by citizens of the developed world, the global demand for cheap, convenient energy grows rapidly. If this demand is met by fossil fuels, we are headed for an energy supply and climate disaster. The alternatives, short of a total and brutal deconstruction of the modern world, are nuclear power and renewable energy.

Whilst I support both, I now put most of my efforts into advocating nuclear power, because: (i) few other environmentalists are doing this, whereas there are plenty of renewable enthusiasts (unfortunately, the majority of climate activists seem to be actively anti-nuclear), and (ii) my research work on the energy replacement problem suggests to me that nuclear power will constitute at least 75% of the solution for displacing coal, oil and gas.

Prometheus, who stole fire from the Gods and gave it to mortal man

In my blog, I argue that it’s time to become “Promethean environmentalists”. (Prometheus, in Greek mythology, was the defiantly original and wily Titan who stole fire from Zeus and gave it to mortals, thus improving their lives forever.) Another term, recently used by futurist Stewart Brand, is “Ecopragmatists”. Prometheans are realists who shun romantic notions that modern governments might guide society back to an era when people lived simpler lives, or that a vastly less consumption-oriented world is a possibility. They seek real, high-capacity solutions to environmental challenges – such as nuclear power – which history has shown to be reliable.

But I reiterate — this strong support for nuclear does NOT make me ‘anti-renewables’ (or worse, a ‘renewable energy denier’, a thoroughly unpleasant and wholly inaccurate aspersion). Indeed, under the right circumstances, I think renewables might be able to make an important contribution (e.g., see here). Instead, my reluctance to throw my weight confidently behind a ‘100% renewable energy solution’ is based on my judgement that such an effort would prove grossly insufficient, as well as being plain risky. And given that the stakes we are talking about are so high (the future of human society, the fates of billions of people, and the integrity of the biosphere), failure is simply not an option.

Below I explain, in very general terms, the underlying basis of my reasoning. This is not a technical post. For those details, please consult the Thinking Critically About Sustainable Energy (TCASE) series — which is up to 12 parts, and will be restarted shortly, with many more examples and calculations.

————————

Renewables and efficiency cannot fix the energy and climate crises (part 1)

Boulton and Watt’s patented steam engine

The development of an 18th century technology that could turn the energy of coal into mechanical work – James Watt’s steam engine – heralded the dawn of the Industrial Age. Our use of fossil fuels – coal, oil and natural gas – has subsequently allowed our modern civilisation to flourish. It is now increasingly apparent, however, that our almost total reliance on these forms of ancient stored sunlight to meet our energy needs, has some severe drawbacks, and cannot continue much longer.

For one thing, fossil fuels are a limited resource. Most of the readily available oil, used for transportation, is concentrated in a few geographically favoured hotspots, such as the Middle East. Most credible analysts agree that we are close to, or have passed, the point of maximum oil extraction (often termed ‘peak oil’), thanks to a century of rising demand. We’ve tapped less of the available natural gas (methane), used mostly for heating and electricity production, but globally it too has no more than a few decades of significant production left before supplies really start to tighten and prices skyrocket, especially if we ‘dash for gas’ as the oil wells run dry. Coal is more abundant than oil or gas, but even it has only a few centuries of economically extractable supplies.

Then there is climate change and air pollution. The mainstream scientific consensus is that emissions caused by the burning of fossil fuels, primarily carbon dioxide (CO2), are the primary cause of recent global warming. We also know that coal soot causes chronic respiratory problems, its sulphur causes acid rain, and its heavy metals (like mercury) induce birth defects and damage ecological food chains. These environmental health issues compound the problem of dwindling fossil fuel reserves.

Clearly, we must unhitch ourselves from the fossil-fuel-based energy bandwagon – and fast.

Meeting the growing demand for energy and clean water in the developing world

In the developed world (US, Europe, Japan, Australia and so on), we’ve enjoyed a high standard of living, linked to a readily available supply of cheap energy, based mostly on fossil fuels. Indeed, it can be argued that this has encouraged energy profligacy, and we really could be more efficient in the mileage we get out of our cars, the power usage of our fridges, lights and electrical appliances, and in the design of our buildings to reduce demands for heating and cooling. There is clearly room for improvement, and sensible energy efficiency measures should be actively pursued.

In the bigger, global picture, however, there is no realistic prospect that we can use less energy in the future. There are three obvious reasons for this:

1) Most of the world’s population is extremely energy poor. More than a third of all humanity, some 2.5 billion people, have no access to electricity whatsoever. For those that do, the long-term aspiration for energy growth, to achieve something approaching what the developed world uses today, is a powerful motivation for development. For a nation like India, with over 1 billion people, that would mean a twenty-fold increase in per capita energy use.

2) As the oil runs out, we will need to replace it if we are to keep our vehicles going. Oil is both a convenient energy carrier and an energy source (we ‘mine’ it). In the future, we’ll have to create our new energy carriers, be they chemical batteries or oil substitutes like methanol or hydrogen. On a grand scale, that’s going to take a lot of extra electrical energy! This goes for all countries.

3) With a growing human population (which we hope will stabilise by mid-century at less than 10 billion) and the burgeoning impacts of climate change and other forms of environmental damage, there will be escalating future demands for clean water (supplied at least in part artificially, through desalination and waste-water treatment), for more intensive agriculture that is not based on ongoing displacement of natural landscapes like rainforests, and perhaps for direct geo-engineering to cool the planet, which might be needed if global warming proceeds at the upper end of current forecasts.

In short, the energy problem is going to get larger, not smaller, at least for the foreseeable future.

Renewable energy is diffuse, variable, and requires massive storage and backup

Let’s say we aim to have largely replaced fossil fuels with low-carbon substitutes by the year 2060 — in the next 50 years or so. What do we use to meet this enormous demand?

Nuclear power is one possibility, and is discussed in great detail elsewhere on this website. What about the other options? As discussed above, improved efficiency in the way we use energy offers a partial fix, at least in the short term. In the broader context, to imagine that the global human enterprise will somehow manage to get by with less just doesn’t stack up when faced with the reality of a fast developing, energy-starved world.

Put simply, citizens in Western democracies are simply not going to vote for governments dedicated to lower growth and some concomitant critique of consumerism, and nor is an authoritarian regime such as in China going to risk social unrest, probably of a profound order, by any embrace of a low growth economic strategy. As such, reality is demanding, and we must carefully scrutinise the case put by those who believe that renewable energy technologies are the answer.

Solarpark Mühlhausen in Bavaria. It covers 25 ha and generates 0.7 MW of average power (peak 6.3 MW)

The most discussed ‘alternative energy’ technologies (read: alternative to fossil fuels or nuclear) are: harnessing the energy in wind, sunlight (directly via photovoltaic panels, or indirectly using mirrors to concentrate sunlight), water held behind large dams (hydropower), ocean waves and tides, plants, and geothermal energy, either from hot surface aquifers (often associated with volcanic geologies) or in deep, dry rocks. These are commonly called ‘renewable’ sources, because they are constantly replenished by incoming sunlight, by gravity (tides) or by radioactivity (hot rocks). Wind is caused by differences in temperature across the Earth’s surface, and so comes originally from the sun, and ocean waves are whipped up by the wind (wave power).

Technically, there are many challenges in economically harnessing renewable energy to provide a reliable power supply. This is a complex topic – many aspects of which are explored in the TCASE series – but here I’ll touch on a few of the key issues. One is that all of the sources described above are incredibly diffuse – they require huge geographical areas to be exploited in order to capture large amounts of energy.
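
The Solarpark Mühlhausen caption above gives a concrete feel for this diffuseness. Using only the caption’s figures (the calculation is mine):

```python
# Power density and capacity factor of Solarpark Muehlhausen, computed
# from the caption's figures: 25 ha of land, 0.7 MW average, 6.3 MW peak.

area_m2 = 25 * 10_000      # 25 hectares in square metres
avg_power_w = 0.7e6
peak_power_w = 6.3e6

power_density_w_m2 = avg_power_w / area_m2        # average W per m^2 of land
capacity_factor = avg_power_w / peak_power_w
km2_for_1_gw = 1e9 / power_density_w_m2 / 1e6     # land to average 1 GW

print(f"Power density:   {power_density_w_m2:.1f} W/m^2")   # 2.8 W/m^2
print(f"Capacity factor: {capacity_factor:.0%}")            # ~11%
print(f"Land for 1 GW:   ~{km2_for_1_gw:.0f} km^2")         # ~357 km^2
```

On those numbers, merely matching the average output of one large (roughly 1 GW) conventional power station would take on the order of 350 square kilometres of such solar farms.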

For countries like Australia, with a huge land area and low population density, this is not, in itself, a major problem. But it is a severe constraint for nations with high population density, like Japan or most European nations. Another is that they are variable and intermittent – sometimes they deliver a lot of power, sometimes a little, and at other times none at all (the exception here is geothermal). This means that if you wish to satisfy the needs of an ‘always on’ power demand, you must find ways to store large amounts of energy to cover the non-generating periods, or else you need to keep fossil-fuel or nuclear plants as a backup. That is where the difficulties really begin to magnify… To be continued…

————————

Part 2 will cover the ‘fallacy of the baseload fallacy’, ‘overbuilding’, costs, and evolution of real-world energy systems.

May 10, 2011

Decarbonise SA – regional action for greenhouse gas mitigation

Filed under: IFR (Integral Fast Reactor) Nuclear Power, Nuclear Energy — buildeco @ 3:48 pm

by Barry Brook

Global warming can only be tackled seriously by a massive reduction in anthropogenic greenhouse gas production. It’s that simple. But just hoping for this to gradually happen — locally, regionally or globally — by tinkering at the edge of the problem (carbon prices, alternative energy subsidies, mandated targets and loan guarantees, “100 ways to be more green” lists, etc.), is just not going to get us anywhere near where we need to be, when we need to be. For that, we need to develop and implement a well-thought-out, practical and cost-effective action plan!

Back in early 2009, I offered ‘A sketch plan for a zero-carbon Australia’. Overall, I still think this advocates the right sort of path. I elaborated further on this idea in my two pieces: Climate debate missing the point and Energy in Australia in 2030; in the latter, I explored a number of potential storylines, along with an estimate of the probability and result of following these different pathways. But the lingering question that arises from thought experiments like this is… how do you turn it into something practical?

Sadly, I can’t think of any liberal-democratic government, anywhere in the world, that actually has a realistic, long-term energy plan. Instead, we have politicians, businesses and other decision makers with their heads in the sand (peak oil is another issue where this is starkly apparent). This must change, and we — the citizenry — must be the agents of that change. That is why the new initiative by Ben Heard, called “Decarbonise SA”, is so exciting. I’ll let Ben explain more, in the guest post below.

But before that, just a small note from me. For my many non-Australian readers: don’t dismiss this as something parochial. Think of it instead as a case study — a working template — for what you can help organise in your particular region (local council, city, state/province, whatever). We need all of you on board, because this is a problem of the global commons. Over to Ben.

——————–

Decarbonise SA

Ben Heard – Ben is Director of Adelaide-based advisory firm ThinkClimate Consulting, a Masters graduate of Monash University in Corporate Environmental Sustainability, and a member of the TIA Environmental and Sustainability Action Committee. He is the founder of Decarbonise SA. His recent post was Think climate when judging nuclear power.

I have been a fan of Barry’s work for some time now. His knack for cutting through the noise to highlight the information we need to consider for making good decisions is remarkable. His reputation and tenure at Adelaide University also give his blog a global reach and relevance, exemplified by the one million hits it received in the week following the Sendai quake and tsunami.

Remarkable though it is, the blog can’t do everything, nor should it try. That’s why I have started Decarbonise SA. The first thing you need to know is that this is more than a blog, it is a mission. The purpose of Decarbonise SA is to form a collective of like-minded people who will drive the most rapid possible decarbonisation of the economy of South Australia, with a primary focus on the electricity supply.

To achieve that goal, South Australia needs to introduce nuclear power into the mix of generating technologies. The primary driver for our support of nuclear power is recognition that the scientific findings in relation to climate change are now so serious that we require the fastest and deepest cuts in emissions possible. That means attacking the biggest problems first. In Australia, that’s electricity supply, specifically the coal and gas that provide most of our baseload generation. While climate change may be the catalyst, nuclear power provides many important environmental and safety benefits compared to coal, beyond greenhouse gas, that will give us a cleaner and healthier environment for the future.

Decarbonise SA also supports increasing the use of renewable generation technologies, and becoming more efficient with energy. But the primary focus of Decarbonise SA is the introduction of nuclear power. We are going to work with the government, community and private sectors of South Australia to make this happen.

Why South Australia?

South Australia’s electricity generation sector is in crisis. Aging, inefficient, decrepit infrastructure must be replaced soon, against the backdrop of an urgent global need to cut greenhouse gas emissions. As in any good crisis though, the opportunity is there if you look. South Australia is just a small number of significant infrastructure investments away from having among the world’s cleanest electricity. It is the mission of Decarbonise SA to make that happen, and happen fast. The goal is decidedly immodest. But that’s because climate change is upon us and we must act quickly, firmly and decisively.

But climate change is a global problem. So focussing a whole blog on a relatively small part of Australia may seem an odd strategy. Here’s the thing. There are already a great many resources pushing the cause of climate change (BNC being one). I’m not going to try to compete with that.

At the same time, every grand vision eventually needs implementation to matter, and necessarily, someone needs to downscale the bigger issues to a more manageable level and actually put a plan in place to make it happen.

I am a proud South Australian, and while my work is often national and my ideas and articles have spread around the world, I know where I have the most influence. It’s in the state of 1 million people where I was raised, where I have deep connections and networks, and where I do the bulk of my work. And as I said at the start, I have not started this blog to flap my gums; I very much intend to make this happen. If Decarbonise SA can move 1 million people in a developed nation from a dirty electricity supply to among the world’s very cleanest, well, I’ll be satisfied, the model will have worked, and I can think even bigger. I will be proud if South Australia is first. But I will be even more excited to find ourselves in competition with others around the world who are decisively pursuing the same goal. So hopefully what we do with Decarbonise SA will become a model that has relevance in every state, territory, county and province the world over. Nothing is trademarked at Decarbonise SA. If you like what you read but don’t live in SA, steal my blog idea and everything on it, and start your own Decarbonise movement. I’ll help.

How will we achieve this?

The introduction of nuclear power to South Australia is the foundation of the Decarbonise SA vision. Nuclear power will permit the rapid replacement of South Australia’s decrepit baseload generation facilities. This is to be accompanied by the continued and enhanced expansion of the renewable energy sector in South Australia, which has played a major role in lowering the average emissions of South Australian electricity over the last few years, and by continued efforts to improve our efficient use of energy. So yes, to resort to the labels that will be applied come what may: Decarbonise SA is pro-nuclear power. It is also pro-renewables. It is also pro-energy efficiency. It is decidedly pro-nuclear, renewables and energy efficiency working as a trio, each deployed as their respective advantages and disadvantages dictate. But above all, it is pro, pro, pro the rapid decarbonisation of the South Australian economy, focussing on electricity. That makes us completely anti-coal and anti-gas for any new electricity generation capacity.

It is the introduction of nuclear power that is the focus of Decarbonise SA’s work, for some pretty simple reasons. Firstly, in South Australia it’s the missing component of a strategy that would actually get the job done (remember, I’m talking about zero emissions, not merely deep cuts or incremental improvements). Secondly, while renewable technology and energy efficiency both need better support and deeper penetration, they also both have a lot of friends already. Energy efficiency is supported by legislation (like the Energy Efficiency Opportunities Act, mandatory standards for new houses, and Minimum Energy Performance Standards (MEPS) and star ratings for appliances, to name but a few), and by organisations, governmental and otherwise. Renewables have support from organisations like Renewables SA and the Alternative Technology Association, major legislated support from the national Renewable Energy Target (RET), and deep subsidies for solar PV. So the potential of this blog to advance the cause of either energy efficiency or renewables is minimal. To be perfectly clear, do not mistake the focus on nuclear power for an attack on, or belittling of, the role of either energy efficiency or renewables. That is not the case. But I do insist on being decidedly realistic about the potential of either to solve the problem in the absence of nuclear power.

Nuclear power, on the other hand, is roundly treated as the spawn of the devil, with Australia’s Environment Protection and Biodiversity Conservation Act specifically highlighting nuclear actions as requiring referral. Not to mention the opposition of the coal industry, which knows full well that nuclear is the only real threat to its dominance of electricity generation in Australia.

At first approach, you may think this is crazy. Nuclear has never been very popular in Australia, and right now, as I write, the second biggest nuclear incident ever remains unresolved. Decarbonise SA is certainly not naive about the challenge of putting nuclear at the centre of the strategy. But when the options are (1) a tough sell that can work (nuclear and renewables with energy efficiency), and (2) easier sells that are guaranteed to fail (gas generation with still-high levels of greenhouse gas, plus more imports from Victoria, where they burn dirty brown coal in the world’s worst power station, plus a bit more renewables and energy efficiency), there is really no decision to be made.

Besides, nuclear power is hardly a fringe technology. It is used in 30 countries worldwide, including the 16 largest economies (leaving aside Australia, at number 13). It provides 15% of global electricity supply from around 440 reactors. It provides 80% of France’s electricity, 30% of Japan’s, and 20% of the United States’. It has been in use for over 50 years, with a remarkable safety record, and a suite of environmental, health and safety advantages over coal that make your head spin. It is embraced by many prominent environmentalists: thoughtful, caring and passionate people. But Decarbonise SA has not based this plan on who else agrees or disagrees or what other countries have done; we have based it on facts, evidence and context relating to:

  • The extraordinary challenge of climate change, that requires total and rapid decarbonisation of electricity
  • The need to maintain secure electricity supplies, and to urgently supply clean electricity to the 1 billion people in the world who have none
  • Honest and evidence-based appraisal of the advantages and disadvantages of different energy supply options across all relevant criteria, being:
    • Ability to provide near-zero greenhouse gas electricity across the lifecycle
    • Scalability to meet electricity demand requirements, with a focus on baseload
    • Location requirements
    • Cost
    • Reliability/track record
    • Safety
    • Waste and pollution from energy generation
    • Waste and pollution from mining operations
    • Global security

When these criteria are attended to for all energy supply options with a clear head, and keeping prejudice to a minimum, one thing quickly becomes clear: Anyone who means what they say when they use the expression “climate crisis” needs to move nuclear power front and centre of the strategy, otherwise we will spend the next few decades rearranging the deck chairs on the Titanic.

By the way, this is all coming from someone who was once staunchly anti-nuclear. I supported the organisations that oppose it. I was the first to rail against it if it came up over dinner or at a BBQ. But my growing understanding of the climate crisis forced me to take a second look at all of my reasons for opposition. I began that process believing that, in the end, I might find nuclear to be a necessary evil. When I was done, what I found instead is that it’s more than necessary, it’s essential, and it’s not really evil: compared to coal, nuclear power is 99% better in almost every relevant criterion (an assertion I will back with numbers in an upcoming post). I’ve been involved in enough environmental decisions now to know that if you have an option that will improve current conditions by 99%, that’s not a compromise. That’s not a defeatist stance. It’s a massive victory. I’ll be satisfied with the 99% this century, and chase the 1% in the next one if I’m still here.

So I hope you’ll join me on the journey, as I spell out the mission and reasons for Decarbonise SA in upcoming articles. But be warned: I’m not here just for the talking. My children won’t really thank me for a blog. They will thank me for cleaner, healthier air, and a stable climate. That’s what Decarbonise SA is here for. And it needs you.

May 5, 2011

Energy debates in Wonderland

by Barry Brook

My position on wind energy is quite ambivalent. I really do want it (and solar) to play an effective role in displacing fossil fuels, because to do this, we need every tool at our disposal (witness the Open Science project I kick-started in 2009 [and found funding for] in order to investigate the real potential of renewables, Oz-Energy-Analysis.Org).

However, I think there is far too much wishful thinking wrapped up in the proclamations of the “100% renewables” crowd (most of whom are unfortunately also anti-nuclear advocates) that wind somehow offers both a halcyon choice and an ‘industrial-strength’ solution to our energy dilemma. In contrast, my TCASE series (thinking critically about sustainable energy) illustrates that, pound for pound, wind certainly does NOT punch above its weight as a clean-energy fighter; indeed, it’s very much a journeyman performer.

The following guest post, by Jon Boone, looks at wind energy with a critical eye and a witty turn of phrase. I don’t offer it as a comprehensive technical critique — rather it’s more a philosophical reflection on past performance and fundamental limits. Whatever your view of wind, I think you’ll find it interesting.

————————

Energy debates in Wonderland

Guest Post by Jon Boone. Jon is a former university administrator and longtime environmentalist who seeks more informed, effective energy policy in ways that expand and enhance modernity, increase civility, and demand stewardship on behalf of biodiversity and sensitive ecosystems. His brand of environmentalism eschews wishful thinking because it is aware of the unintended adverse consequences flowing from uninformed decisions. He produced and directed the documentary, Life Under a Windplant, which has been freely distributed within the United States and many countries throughout the world. He also developed the website Stop Ill Wind as an educational resource, posting there copies of his most salient articles and speeches. He receives no income from his work on wind technology.

March Hare (to Alice): Have some wine.

(Alice looked all round the table, but there was nothing on it but tea.)

Alice: I don’t see any wine.

March Hare: There isn’t any.

Alice: Then it wasn’t very civil of you to offer it.

March Hare: It wasn’t very civil of you to sit down without being invited.

— From Lewis Carroll’s Alice in Wonderland

Energy journalist Robert Bryce, whose latest book, Power Hungry, admirably foretells an electricity future anchored by natural gas from Marcellus Shale that will eventually bridge to pervasive use of nuclear power, has recently been involved in two prominent debates. In the first, conducted by The Economist, Bryce argued for the proposition that “natural gas will do more than renewables to limit the world’s carbon emissions.” In the second, an Intelligence Squared forum sponsored by the Rosenkranz Foundation, he and American Enterprise Institute scholar Steven Hayward argued against the proposition that “Clean Energy can drive America’s economic recovery.”

Since there’s more evidence a friendly bunny brought children multi-colored eggs on Easter Sunday than there is that those renewables darlings, wind and solar, can put much of a dent in CO2 emissions anywhere, despite their massively intrusive industrial presence, the first debate was little more than a curiosity. No one mentioned hydroelectric, which has been the most widely effective “renewable”—ostensibly because it continues to lose marketshare (it now provides the nation with about 7% of its electricity generation), is an environmental pariah to the likes of The Sierra Club, and has little prospect for growth. Nuclear, which provides the nation’s largest grid, the PJM, with about 40% of its electricity, is not considered a renewable, despite producing no carbon emissions; it is also on The Sierra Club’s hit list. Geothermal and biomass, those minor league renewables, were given short shrift, perhaps because no one thought they were sufficiently scalable to achieve the objective.

So it was a wind versus gas scrum played out as if the two contenders were equally matched as producers of power. Bryce pointed out wind’s puny energy density, how its noise harms health and safety, its threat to birds and bats, and how natural gas’s newfound abundance continues to decrease its costs—and its price. His opponent carried the argument that wind and solar would one day be economically competitive with natural gas, such that the former, since they produce no greenhouse gases, would be the preferred choice over the latter, which does emit carbon and, as a non-renewable, will one day become depleted.

Such a discussion is absurd at a number of levels, mirroring Alice’s small talk with the March Hare. One of the troubling things about the way wind is vetted in public discourse is how “debate” is framed to ensure that wind has modern power and economic value. It does not. Should we debate whether the 747 would do more than gliders in transporting large quantities of freight? Bryce could have reframed the discussion to ask whether wind is better than cumquats as a means of emissions reductions. But he didn’t. And the outcome of this debate, according to the vote, was a virtual draw.

Ironically, the American Natural Gas Association is perking up its louche ad slogan: “The success of wind and solar depends on natural gas.” Eureka! To ANGA, wind particularly is not an either to natural gas’s or. Rather, the renewables du jour will join forces with natural gas to reduce carbon emissions in a way that increases marketshare for all. With natural gas, wind would be an additive—not an alternative—energy source. Bryce might have made this clear.

What ANGA and industry trade groups like the Interstate Natural Gas Association of America (see its latest paper) don’t say is that virtually all emissions reductions in a wind/gas tandem would come from natural gas—not wind. But, as Bryce should also be encouraged to say, such a pretension is a swell way for the natural gas industry to shelter income via wind’s tax avoidance power. And to create a PR slogan based upon the deception of half-truths. Although natural gas can indeed infill wind’s relentless volatility, the costs would be enormous while the benefit would be inconsequential. Ratepayers and taxpayers would ultimately pay the substantial capital expenses of supernumerary generation.

Beyond Wonderland and Through the Looking Glass

The Oxford-style Economist debate, which by all accounts Bryce and Hayward won with ease, nonetheless woozled around in a landscape worthy of Carroll’s Jabberwocky, complete with methodological slips, definitional slides, sloganeering, and commentary that often devolved into meaningless language—utter nonsense. It was as if Pixar had for the occasion magically incarnated the Red Queen, the Mad Hatter, and Humpty Dumpty, who once said in Through the Looking Glass, “When I use a word, it means just what I choose it to mean – neither more nor less.” Dumpty also said, “When I make a word do a lot of work … I always pay it extra.”

Those promoting “clean” were paying that word extra—and over the top, as Hayward frequently reminded the audience by demanding a clear, consistent definition of clean technology.

Proponents frequently defined clean energy differently depending upon what they chose to mean. At times, they meant acts of commission in the form of “clean coal,” wind, solar, biomass (although ethanol was roundly condemned), and increased use of natural gas. Indeed, natural gas in the discussion became reified, in the best Nancy Pelosi/T. Boone Pickens tradition, as a clean source of energy on a par with wind and solar. At one time, clean also referred to nuclear—but the topic quickly changed back to wind and natural gas. At other times, clean referred to acts of omission, such as reducing demand with more efficient appliances, smarter systems of transmission, and more discerning lifestyle choices.

Shifting definitions about what was “clean” made for a target that was hard to hit. Bryce mentioned the Jevons Paradox. Bullseye. So much for increased efficiency. Hayward demonstrated that the US electricity sector has already cut SO2 and NOx emissions nearly 60% over the last 40 years, and reduced mercury emissions by about 40% over this time, despite tripling coal use from 1970 to 2005. Zap. All this without wind and solar. Green jobs from clean industry? It would have been fruitful to have invoked Henry Hazlitt’s Broken Window fallacy, which illustrates the likelihood of few net new jobs because of the opportunities lost for other, more productive investment. Also welcome would have been remarks about how more jobs in the electricity sector must translate into increased costs, making electricity less affordable. Such a development would substantially subvert prospects for economic recovery.

In arguing against the proposition that clean energy could be a force for economic recovery, Bryce and Hayward did clean the opposition’s clock (they had, as everyone agreed, the numbers on their side). But they also let the opposition off the hook by not exposing the worms at the core of the proposition. Yes, the numbers overwhelmingly suggest that coal and natural gas are going to be around for a long time, and that they will continue to be the primary fuels, along with oil, to energize the American economy.** They can be, as they have been, made cleaner by reducing their carbon emissions even more. But they won’t be clean. Outside Wonderland, cleaner is still not clean.

The proposition therefore had to fail. Even in Wonderland.

Example of the twinning between natural gas and renewable energy – unacceptable from a greenhouse gas mitigation perspective

Capacity Matters

These arguments, however, are mere body blows. Bryce should have supplied the knockout punch by reminding that any meaningful discussion of electricity production, which could soon embrace 50% of our overall energy use, must consider the entwined goals of reliability, security, and affordability, since reliable, secure, affordable electricity is the lynchpin of our modernity. Economic recovery must be built upon such a foundation. At the core of this triad, however, resides the idea of effective capacity—the ability of energy suppliers to provide just the right amount of controllable power at any specified time to match demand at all times. It is the fount of modern power applications.

By insisting that any future technology—clean, cleaner, or otherwise, particularly in the electricity sector—must produce effective capacity, Bryce would have come quickly to the central point, moving the debate out of Wonderland and into sensible colloquy.

Comparing—both economically and functionally—wind and solar with conventional generation is spurious work. Saying that the highly subsidized price of wind might, maybe, possibly become, one day, comparable to coal or natural gas may be true. But even if this happens, if, say, wind and coal prices become equivalent, paying anything for resources that yield no or little effective capacity seems deranged as a means of promoting economic recovery for the most dedicatedly modern country on the planet.

Subsidies for conventional fuels—coal, natural gas, nuclear, and hydro—make sense because they promote high-capacity generation. Subsidies for wind and solar, which are, as Bryce stated, many times greater on a unit-of-production basis than for conventional fuels, promote pretentious power that makes everything else work harder simply to stand still.

Consider the following passage from Part II of my recent paper, which is pertinent in driving this point home:

Since reliable, affordable, secure electricity production has historically required the use of many kinds of generators, each designed to perform different but complementary roles, much like instruments in an orchestra, it is not unreasonable for companies in the power business to diversify their power portfolios. Thus, investment in an ensemble of nuclear and large coal plants to provide for baseload power, along with bringing on board smaller coal and natural gas plants to engage mid and peak load, makes a great deal of sense, providing for better quality and control while achieving economies of scale.

Traditional diversified power portfolios, however, insisted upon a key common denominator: their generating machines, virtually all fueled by coal, natural gas, nuclear, and/or hydro, had high unit availability and capacity value. That is, they all could be relied upon to perform when needed precisely as required.

How does adding wind—a source of energy that cannot of itself be converted to modern power, is rarely predictable, never reliable, always changing, is inimical to demand cycles, and, most importantly, produces no capacity value—make any sense at all? Particularly when placing such a volatile brew in an ensemble that insists upon reliable, controllable, dispatchable modes of operation. As a functional means of diversifying a modern power portfolio, wind is a howler.

Language Matters

All electricity suppliers are subsidized. But conventional generation provides copious capacity while wind supplies none and solar, very little. The central issue is capacity—or its absence. Only capacity generation will drive future economic recovery. And Bryce should say so in future debates. Birds and bats, community protests, health and safety—pale in contrast to wind technology’s lack of capacity. And Bryce should say so. Ditto for any contraption fueled by dilute energy sources that cannot be converted to modern power capacity—even if they produce no carbon emissions. Clean and green sloganeering should not be conflated with effective production.

Moreover, even if the definition of clean and/or renewable technology is stretched to mean reduced or eliminated carbon emissions caused by less consumption of fossil fuels, then where is the evidence that technologies like wind and solar are responsible for doing this? When in the debate former Colorado governor Bill Ritter claimed that the wind projects he helped build in his state were reducing California’s carbon emissions, why didn’t the Bryce/Hayward team demand proof? There is none.

It’s not just wind’s wispy energy density that makes conversion to modern power impossible—without having it fortified by substantial amounts of inefficiently operating fossil-fired power, virtually dedicated transmission lines, and new voltage regulation, the costs of which must collectively be calculated as the price for integrating wind into an electricity grid. It is rather wind’s continuous skittering, which destabilizes the required match between supply and demand; it must be smoothed by all those add-ons. The vast amount of land wind gobbles up therefore hosts a dysfunctional, Rube Goldbergesque mechanism for energy conversion. Bryce and his confreres would do well to aim this bullet right between the eyes.

Robert Bryce remains a champion of reasoned discourse and enlightened energy policy. He is one of the few energy journalists committed to gleaning meaningful knowledge from a haze of data and mere information. His work is a wise undertaking in the best traditions of journalism in a democracy. As he prepares for future debates—although, given the wasteland of contemporary journalism, it is a tribute to his skills that he is even invited to the table—he must cut through the chaff surrounding our politicized energy environment, communicating instead the whole-grained wheat of its essentials.

Endnote: You might also enjoy my other relatively recent paper, Oxymoronic Wind (13-page PDF). It covers a lot of ground but dwells on the relationship between wind and companies swaddled in coal and natural gas, which is the case worldwide.

________________________________________________________

** It was fascinating to note Hayward’s brief comment about China’s involvement with wind, no doubt because it seeks to increase its renewables manufacturing base and then export the bulk of the machines back to a gullible West. As journalist Bill Tucker said recently in a panel discussion about the future of nuclear technology on the Charlie Rose show, China (and India), evidently dedicated to achieving high levels of functional modernity, will soon lead the world in nuclear production as they slowly transition from heavy use of coal over the next half-century.

April 14, 2011

Fukushima rated at INES Level 7 – what does this mean?

Filed under: Japan Earthquake, Nuclear Energy — buildeco @ 8:19 pm
by Barry Brook

Hot in the news is that the Fukushima Nuclear crisis has been upgraded from INES 5 to INES 7. Note that this is not due to some sudden escalation of events  (aftershocks etc.), but rather it is based on an assessment of the cumulative magnitude of the events that have occurred at the site over the past month.

Below I look briefly at what this INES 7 rating means, why it has happened, and provide a new place to centralise comments on this noteworthy piece of news.

The International Nuclear and Radiological Event Scale (INES) was developed by the International Atomic Energy Agency (IAEA) to rate nuclear accidents. It was formalised in 1990 and then back-dated to events like Chernobyl, Three Mile Island, Windscale and so on. Prior to today, only Chernobyl had been rated at the maximum level of the scale ‘major accident’. A useful 5-page PDF summary description of the INES, by the IAEA, is available here.

A new assessment of Fukushima Daiichi has put this event at INES 7, upgraded from earlier escalating ratings of 3, 4 and then 5. The original intention of the scale was historical/retrospective, and it was not really designed to track real-time crises, so until the accident is fully resolved, any time-specific rating is naturally preliminary.

The criteria used to rate against the INES scale are (from the IAEA documentation):

(i) People and the Environment: considers the radiation doses to people close to the location of the event and the widespread, unplanned release of radioactive material from an installation.

(ii) Radiological Barriers and Control: covers events without any direct impact on people or the environment and only applies inside major facilities. It covers unplanned high radiation levels and spread of significant quantities of radioactive materials confined within the installation.

(iii) Defence-in-Depth: covers events without any direct impact on people or the environment, but for which the range of measures put in place to prevent accidents did not function as intended.

In terms of severity:

Like the scales that describe earthquakes or major storms, each of the INES scale’s seven levels is designed to be ten times more severe than the one before. After below-scale ‘deviations’ with no safety significance, there are three levels of ‘incident’, then four levels of ‘accident’. The selection of a level for a given event is based on three parameters: whether people or the environment have been affected; whether any of the barriers to the release of radiation have been lost; and whether any of the layers of safety systems are lost.

So, on this definitional basis, one might argue that the collective Fukushima Daiichi event (core damage in three units, hydrogen explosions, problems with drying spent fuel ponds, etc.) is ~100 times worse than TMI-2, which was a Level 5.
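To see where that “~100 times” comes from, here is a minimal sketch of the arithmetic, assuming a strict ten-times-per-level reading of the scale (the INES is only nominally logarithmic, so this is illustrative rather than definitive):

```python
def relative_severity(level_a, level_b):
    """Nominal severity ratio between two INES levels, under the
    simplifying assumption of exactly 10x per level."""
    return 10 ** (level_a - level_b)

# Fukushima (provisional Level 7) vs Three Mile Island (Level 5):
print(relative_severity(7, 5))  # -> 100
```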

However, what about when you hit the top of the INES? Does a rating of 7 mean that Fukushima is as bad as Chernobyl? Well, since you can’t get higher than 7 on the scale, it’s impossible to use this numerically to answer such a question on the basis of their categorical INES rating alone. It just tells you that both events are in the ‘major league’. There is simply no event rating 8, or 10, or whatever, or indeed any capacity within the INES system to rank or discriminate events within categories (this is especially telling for 7). For that, you need to look for other diagnostics.

So headlines like ‘Fukushima is now on a par with Chernobyl’ can be classified as semantically correct and yet also (potentially) downright misleading. Still, it sells newspapers.

There is a really useful summary of the actual ‘news’ of this INES upgrade from World Nuclear News, here. It reports:

Japanese authorities notified the International Atomic Energy Agency of their decision to up the rating: “As a result of re-evaluation, total amount of discharged iodine-131 is estimated at 1.3×10^17 becquerels, and caesium-137 is estimated at 6.1×10^15 becquerels. Hence the Nuclear and Industrial Safety Agency has concluded that the rating of the accident would be equivalent of Level 7.”

More here from the IAEA:

The new provisional rating considers the accidents that occurred at Units 1, 2 and 3 as a single event on INES. Previously, separate INES Level 5 ratings had been applied for Units 1, 2 and 3. The provisional INES Level 3 rating assigned for Unit 4 still applies.

The re-evaluation of the Fukushima Daiichi provisional INES rating resulted from an estimate of the total amount of radioactivity released to the environment from the nuclear plant. NISA estimates that the amount of radioactive material released to the atmosphere is approximately 10 percent of the 1986 Chernobyl accident, which is the only other nuclear accident to have been rated a Level 7 event.
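As a hedged back-of-envelope check on those NISA figures: the INES methodology expresses releases in iodine-131 equivalent terms, and the factor of 40 used below for caesium-137 is the radiological equivalence multiplier given in the IAEA’s INES user’s manual — treat it as an assumption of this sketch:

```python
# NISA release estimates quoted above (in becquerels)
I131_BQ = 1.3e17
CS137_BQ = 6.1e15

# INES radiological equivalence factor for Cs-137 relative to I-131
# (from the IAEA INES user's manual; an assumption of this sketch)
CS137_TO_I131 = 40

i131_equivalent_tbq = (I131_BQ + CS137_TO_I131 * CS137_BQ) / 1e12
print(f"~{i131_equivalent_tbq:,.0f} TBq iodine-131 equivalent")
# -> ~374,000 TBq, comfortably above the 'several tens of thousands
#    of TBq' that the INES documentation associates with Level 7.
```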

I also discussed the uprating today on radio, and you can listen to the 12-minute interview here for my extended perspective.

So, what are some of the similarities and differences between Fukushima and Chernobyl?

Both have involved breaches of radiological barriers and controls, overwhelming of defence-in-depth measures, and large-scale release of radioactive isotopes into the environment. The causes and sequence of the two events were, however, very different, in terms of reactor designs, the nature of the triggering events, and time-scale for resolution — this is a topic to be explored in more depth in some future post. The obviously big contrast is in the human toll and nature of the radioactive release.

The Chernobyl event killed 28 people directly, via the initial explosion or severe radiation sickness, and a further ~15 deaths are directly attributed to radiation-induced cancer (see the summary provided today by Ben Heard on Opinion Online: Giving Green the red light). Further, Chernobyl led to a significant overexposure of members of the public in the local area and region, especially due to iodine-131 that was dispersed by the reactor fire, and insufficient protection measures by authorities. An increase in thyroid cancers resulted from this.

In Fukushima, by contrast, no workers have been killed by radiation (or explosions), and indeed none have been exposed to doses >250 mSv (~1000 mSv is the dose required for people to exhibit signs of radiation sickness, through to about 50% of victims dying after exposure to >5000 mSv [see chart here]). No member of the public has, as yet, been overexposed at Fukushima. Further, much of the radioactivity released into the environment around Fukushima has been the result of water leakages that were flushed into the ocean, rather than being attached to carbon and other aerosols from a burning reactor moderator, as occurred in Chernobyl, where the radionuclides were largely deposited on land and had the potential to be inhaled.
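For readers who want that dose comparison laid out explicitly, here is a small sketch using only the thresholds quoted above (the band labels are mine, and the cut-offs are the post’s round numbers, not a clinical scale):

```python
# Dose thresholds quoted in the post, in millisieverts (mSv)
DOSE_BANDS = [
    (5000, "roughly 50% of those exposed would die"),
    (1000, "signs of radiation sickness appear"),
    (250, "the maximum dose any Fukushima worker has received"),
]

def describe_dose(dose_msv):
    """Map a dose onto the post's quoted bands (illustrative only)."""
    for threshold, effect in DOSE_BANDS:
        if dose_msv >= threshold:
            return effect
    return "below all of the quoted thresholds"

print(describe_dose(250))   # worker maximum at Fukushima
print(describe_dose(100))   # well below every quoted threshold
```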

So is Fukushima another Chernobyl? No. Is it a serious accident? Yes. Two quite different questions — and answers — which should not be carelessly conflated.

March 26, 2011

Preliminary lessons from Fukushima for future nuclear power plants

Filed under: IFR (Integral Fast Reactor) Nuclear Power, Japan Earthquake — buildeco @ 1:34 pm

by Barry Brook

No strong conclusions can yet be drawn on the Fukushima Nuclear Crisis, because so much detail and hard data remains unclear or unavailable. Indeed, it will probably take years to piece the whole of this story together (as has now been done for accidents like TMI and Chernobyl [read this and this from Prof. Bernard Cohen for an absolutely terrific overview]). Still, it will definitely be worth doing this post-event diagnostic, because of the valuable lessons it can teach us. In this spirit, below an associate of mine from the Science Council for Global Initiatives discusses what lessons we’ve learned so far. This is obviously a huge and evolving topic that I look forward to revisiting many times in the coming months…

——————–

Guest Post by Dr. William Hannum. Bill worked for more than 40 years in nuclear power development, stretching from design and analysis of the Shippingport reactor to the Integral Fast Reactor. He earned his BA in physics at Princeton and his MS and PhD in nuclear physics at Yale. He has held key management positions with the U.S. Department of Energy (DOE), in reactor physics and reactor safety, and as Deputy Manager of the Idaho Operations Office.

He served as Deputy Director General of the OECD Nuclear Energy Agency, Paris, France; Chairman of the TVA Nuclear Safety Review Boards, and Director of the West Valley (high level nuclear waste processing and D&D) Demonstration Project. Dr. Hannum is a fellow of the American Nuclear Society, and has served as a consultant to the National Academy of Engineering on nuclear proliferation issues. He wrote a popular article for Scientific American on smarter use of nuclear waste, which you can download as a PDF here.

——————–

Background

On 11 March 2011, a massive earthquake hit Japan. The six reactors at Fukushima-Daiichi suffered ground accelerations somewhat in excess of design specification. It appears that all of the critical plant equipment survived the earthquake without serious damage, and safety systems performed as designed. The following tsunami, however, carried the fuel tanks for the emergency diesels out to sea, and compromised the battery backup systems. All off-site power was lost, and power sufficient to operate the pumps that provide cooling of the reactors and the used-fuel pools remained unavailable for over a week. Heroic efforts by the TEPCO operators limited the radiological release. A massive recovery operation will begin as soon as they succeed in restoring the shutdown cooling systems.

It is important to put the consequences of this event in context. This was not a disaster (the earthquake and tsunami were disasters). This was not an accident; the plant experienced a natural event (“Act of God” in insurance parlance) far beyond what it was designed for. Based on the evidence available today, it can be stated with confidence that no one will have suffered any identifiable radiation-related health effects from this event. A few of the operators may have received a high enough dose of radiation to have a slight statistical increase in their long-term risk of developing cancer, but I would place the number at no more than 10 to 50. None of the reports suggest that any person will have received a dose approaching one Sievert, which would imply immediate health effects.

Even ignoring the possibility of hormetic effects, this is approaching the trivial when compared with the impacts of the earthquake and tsunami, where deaths will likely come to well over 20,000.  Health impacts from industrial contamination, refinery fires, lack of sanitation, etc., etc. may reasonably be supposed to be in the millions.  Even the “psychological” impacts of the Fukushima problems must be seen to pale in contrast to those from the earthquake and tsunami.

The radiological impact on workers is also small relative to the non-radiological injuries suffered by them. One TEPCO crane operator died from injuries sustained during the earthquake. Two TEPCO workers who had been in the turbine building of Unit 4 are missing. At least eleven TEPCO workers were taken to hospital because of earthquake-related physical injuries.

TEPCO has suffered a major loss of capital equipment, the value of which is non-trivial even in the context of the earthquake and tsunami devastation. They also face a substantial cost for cleanup of the contamination which has been released from the plants. These are financial costs, not human health and well-being matters.

The Sequence of Events

Following the tsunami, the operators had no power for the pumps that circulate the primary coolant to the heat exchangers. The only way to remove the decay heat was to boil the water in the core. After the normal feed water supplies were exhausted, they activated the system to supply sea water to the core, knowing this would render the plant unfit to return to operation. In this way, the reactors were maintained in a relatively stable condition, allowing the water to boil and releasing the resulting steam to the containment building. Since this is a Boiling Water Reactor (BWR), it is good at boiling water. Operating with the water level 1.7 to 2 meters below the top of the core, they mimicked power operation; the core normally operates at power with the water level well below the top of the core, the top part being cooled by steam. Cold water in, steam out, is a crude but effective means of cooling.

Before using sea water, according to reports, water levels are thought to have dropped far enough to allow the fuel to overheat, damaging some of the fuel cladding. When overheated, the cladding (zirconium) reacts, claiming oxygen from the water. Water, less oxygen, is hydrogen. When vented to the containment and then to the outer building, the hydrogen built up, and eventually exploded, destroying the enclosing building. With compromised fuel, the steam being vented contains radioactive fission products. The design of BWRs is such that this venting goes through a water bath (in the torus), which filters out all but the most volatile fission products.

With time, the heat generated in used fuel (both in the core and in the fuel pool) decreases — from about 2% of full power an hour after shutdown (when the coolant pumps lost power) to about 0.2% a week later. As the decay heat falls, the amount of steam venting decreases, and releases can be controlled and planned for favorable weather conditions.
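A rough way to reproduce those decay-heat figures is the classic Way-Wigner approximation. The one-year prior-operation assumption below is mine, and the formula is cruder than the decay-heat curves a plant engineer would use, so expect agreement only to within a factor of about two:

```python
def decay_heat_fraction(t_s, T_s=3.15e7):
    """Decay power as a fraction of full power, via the Way-Wigner
    approximation. t_s: seconds since shutdown; T_s: seconds of prior
    operation (default ~1 year, an assumption of this sketch)."""
    return 0.0622 * (t_s ** -0.2 - (t_s + T_s) ** -0.2)

print(f"{decay_heat_fraction(3600):.1%}")    # 1 hour  -> ~1.0%
print(f"{decay_heat_fraction(604800):.2%}")  # 1 week  -> ~0.24%
# Same order as the ~2% and ~0.2% quoted in the text.
```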

A second concern arose because of the inability to provide cooling for the used-fuel pool in Unit 4, and later Unit 3.  The Unit 4 pool was of concern because, for maintenance, the entire core had been off-loaded into the pool in November (it is believed that two older core loadings were also in this pool, awaiting transfer to the central storage pool).  With only a few months cooling, the residual heat is sufficient to raise the temperature of the water in the pool to boiling within several days or weeks.  There is also some suggestion that the earthquake may have sloshed some water out of the pool.  In any case, the fuel pools for Units 3 and 4 eventually were thought to be losing enough water such that the fuel would no longer be adequately cooled.  Since the fuel pools are outside the primary containment, leakage from these pools can spread contamination more readily than that from the reactor core.  High-power water hoses have been used to maintain water in the fuel pools.
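The “several days or weeks” heat-up estimate can be sanity-checked with simple sensible-heat arithmetic; the pool inventory, decay heat, and starting temperature below are my assumptions for illustration, not TEPCO figures:

```python
POOL_WATER_KG = 1.4e6   # assumed ~1,400 tonnes of water in the pool
DECAY_HEAT_W = 2e6      # assumed ~2 MW from the freshly offloaded core
CP_WATER = 4186.0       # specific heat of water, J/(kg.K)
DELTA_T = 70.0          # assumed heat-up from ~30 C to boiling

seconds_to_boil = POOL_WATER_KG * CP_WATER * DELTA_T / DECAY_HEAT_W
print(f"~{seconds_to_boil / 86400:.1f} days to reach boiling")
# -> ~2.4 days at 2 MW; an older, cooler pool at a few hundred kW
#    would take a few weeks, matching the 'days or weeks' in the text.
```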

While many areas within the plant complex itself, and localized areas as far away as 20 Km may require cleanup of the contamination released from the reactors and from the fuel pools, there is no indication that there are any areas that will require long term isolation or exclusion.

 

Lessons Learned

It is not the purpose of this paper to anticipate the lessons to be learned from this event, but a few items may be noted.  One lesson will dominate all others:

Prolonged lack of electrical power must be precluded.

While the designers believed their design included sufficient redundancies (diesels, batteries, redundant connections to the electrical grid), the simultaneous extended loss of all sources of power left the operators dependent on creative responses. This lesson is applicable both to the reactor and to fuel pools.

All nuclear installations will probably be required to do a complete review of the security of their access to electrical power.  It may be noted that this lesson is applicable to many more activities than just nuclear power.  Extended loss of electrical power in any major metropolitan area would generate a monstrous crisis.  The loss of power was irrelevant to other activities in the region near the Fukushima plant because they were destroyed by the tsunami.

Other lessons that will be learned that may be expected to impact existing plants include:

Better means of control of hydrogen buildup in the case of fuel damage may be required.

In addition, detailed examinations of the Fukushima plants will provide evidence of the margins available in seismic protection. Detailed reconstruction of the event will give very helpful insights into the manner in which fission products can be released from damaged fuel, and into their transport.

Applicability of Fukushima Information to MOX-fueled Reactors:

The core of Unit 3 was fueled with plutonium recycled from earlier used reactor fuel. Preliminary information suggests that the release of hazardous radioactive material, for this type of event, is not significantly different from that of non-recycled fuel. More detailed examinations after the damaged cores are recovered, and models developed to reconstruct the events, will be necessary to verify and quantify this conclusion.

Applicability of Fukushima Information to Gen-III Reactors:

In the period since the Fukushima plants were designed, advanced designs for BWRs (and other reactor types) have been developed to further enhance passive safety (systems feedback characteristics that compensate for abnormal events, without reliance on operator actions or on engineered safety systems), simplify designs, and reduce costs.  The results of these design efforts (referred to as Gen-III) are the ones now under construction in Japan, China and elsewhere, and proposed for construction in the U.S.

One of the most evident features of the Gen-III systems is that they are equipped with large gravity-feed water reservoirs that would flood the core in case of major disruption. This will buy additional time in the event of a Fukushima-type situation, but the plants will ultimately rely on restoration of power at some point in time.

The applicability of the other lessons (hydrogen control, fuel pool) will need to be evaluated, but there are no immediately evident lessons beyond these that will affect these designs in a major way.

Applicability of Fukushima Information to Recycling Reactors:

As noted above, Unit 3 was fueled with recycled plutonium, and there are no preliminary indications that this had any bearing on the performance of this plant during this event.

Advanced recycling, where essentially all of the recyclable material is recovered and used (as opposed to recovery and recycle of plutonium) presents a different picture.  Full recycling is effective only with a fast reactor.  A metal fuel, clad in stainless steel, allows a design of a sodium-cooled fast reactor with astonishing passive safety characteristics.  Because the sodium operates far from its boiling point in an essentially unpressurized system, catastrophic events caused by leakage or pipe failures cannot occur.  The metal fuel gives the system very favorable feedback characteristics, so that even the most extreme disruptions are passively accommodated.  A complete loss of cooling, such as at Fukushima, leads to only a modest temperature rise.  Even if the control system were rendered inoperable, and the system lost cooling but remained at full power (this is a far more serious scenario than Fukushima, where the automatic shutdown system operated as designed) the system would self-stabilize at low power, and be cooled by natural convection to the atmosphere.  Should the metal fuel fail for any reason, internal fission product gases would cause the fuel to foam and disperse, providing the most powerful of all shutdown mechanisms.

The only situation that could generate energy to disperse material from the reactor is the possibility of a sodium-water reaction. By using an intermediate sodium system (reactor sodium passes its energy to a non-radioactive sodium system, which then passes its energy to water to generate steam to turn the electrical generator), the possibility of a sodium-water reaction spreading radioactive materials is precluded.

These reactors must accommodate seismic challenges, just as any other reactor type.  While there are many such design features in common with other reactor designs, the problem is simpler for the fast reactor because of the low pressure, and the fact that this type of reactor does not need elaborate water injection systems.

In light of the Fukushima event, one must consider the potential consequences of a massive tsunami accompanying a major challenge to the reactor.  Since it may be difficult to ensure that the sodium systems remain intact under the worst imaginable circumstances, it may be prudent to conclude that a tsunami-prone location may not be the best place to build a sodium facility (whether a nuclear power plant or something else).

Conclusions:

The major lesson to be learned is that for any water-cooled reactor there must be an absolutely secure supply of power sufficient to operate cooling pumps.  Many other lessons are likely to be learned.  At this early point, it appears that design criteria for fuel storage pools may need to be revised, and hydrogen control assessed.

Given the severity of the challenge faced by the operators at Fukushima, and their ability to manage the situation in such a way as to preclude any significant radiation-related health consequences for workers or the public, this event should be a reassurance that properly designed and regulated nuclear power does not pose a catastrophic risk to the public—that, overall, nuclear power remains a safe and clean energy source.

Given the financial impact this event will have on the utility (loss of four major power plants, massive cleanup responsibilities), it will be worthwhile for the designers, constructors, operators, and licensing authorities to support a thorough analysis of what actually transpired during this event.

March 25, 2011

10+ days of crisis at the Fukushima Daiichi nuclear power plant – 22 March 2011

Filed under: IFR (Integral Fast Reactor) Nuclear Power, Japan Earthquake — buildeco @ 1:54 pm

by Barry Brook

Update: Detailed graphical status report on each reactor unit is available. Here is the picture for Unit 2 — click on the figure to access the PDF for all units.

————————-

Yes, it really has been that long. So what happened during those 10+ days? For a long answer, look back over the daily posts on this blog, which also has plenty of links to more off-site information. For the short-hand version, I offer you this excellent graphic produced by the Wall Street Journal:

Credit: Wall Street Journal: http://goo.gl/E9YuA

Things continue to develop slowly, but I think now towards an inevitable conclusion — barring any sudden turn of events, a cold shutdown (reactor temperature below 100C) should be achieved in units 1 to 3 within the next week (or two?). The other priority is to get the spent fuel storage sufficiently covered with water to make them approachable (and ideally to get AC power systems restored to these ponds, as has been the case already for units 5 and 6). The clean up, diagnostics, and ultimate decommissioning of Fukushima Daiichi, of course, will take months and years to complete.

What is the latest news?

First, there is a new estimate of the tsunami damage. According to the NEI:

TEPCO believes the tsunami that inundated the Fukushima Daiichi site was 14 meters high, the network said. The design basis tsunami for the site was 5.7 meters, and the reactors and backup power sources were located 10 to 13 meters above sea level. The company reported that the maximum earthquake for which the Fukushima Daiichi plants were designed was magnitude 8. The quake that struck March 11 was magnitude 9.

Second, the IAEA reports elevated levels of radioactivity in the sea water off the coast of these reactors. That is hardly surprising, given that contaminated cooling water would gradually drain off the site — and remember, it is very easy with modern instruments to detect radioactivity in even trace amounts. These reported amounts (see table) are clearly significantly elevated around the plant — but the ocean is rather large, and so the principle of disperse and dilute also applies.

I’m reminded of a quote from James Lovelock in “The Vanishing Face of Gaia” (2008):

In July 2007 an earthquake in Japan shook a nuclear power station enough to cause an automatic shutdown; the quake was of sufficient severity (over six on the Richter scale) to cause significant structural damage in an average town. The only “nuclear” consequence was the fall of a barrel from a stack of low-level waste that allowed the leak of about 90,000 becquerels of radioactivity. This made front-page news in Australia, where it was said that the leak posed a radiation threat to the Sea of Japan. The truth is that about 90,000 becquerels is just twice the amount of natural radioactivity, mostly in the form of potassium, which you and I carry in our bodies. In other words, if we accept this hysterical conclusion, two swimmers in the Sea of Japan would make a radiation threat.

For further details on radiation trends in Japan, read this from WNN. In short, levels are hovering at or just above background levels in most surrounding prefectures, but are elevated in some parts of Fukushima. However, the World Health Organisation:

… backed the Japanese authorities, saying “These recommendations are in line with those based on accepted public health expertise.”

Below is a detailed situation summary of the Fukushima Daiichi site, passed to me by a colleague:

(1) Radioactivity was detected in the sea close to Fukushima-Daiichi. On March 21, TEPCO detected radioactivity in the nearby sea at Fukushima-Daiichi nuclear power station (NPS). TEPCO notified NISA and Fukushima prefecture of this measurement result. TEPCO continues its sampling survey at Fukushima-Daiichi NPS, and also at Fukushima-Daini NPS, in order to evaluate diffusion from Fukushima-Daiichi. Though people do not drink seawater directly, TEPCO thinks it important to see how far this radioactivity spreads in the sea in order to assess the impact on the human body.
Normal values of radioactivity are mostly below detection level, except for tritium (the detection level for Co-60 is 0.02 Bq/ml). Also, samples of soil in the station have been sent to JAEA (Japan Atomic Energy Agency).

(2) Seawater injection to the spent fuel pool at Fukushima-Daiichi unit 2. This continues, with seawater injected through the Fuel Pool Cooling and Cleanup System (FPC) piping. A temporary tank filled with seawater was connected to the FPC, a pump truck sent seawater to the tank, and a fire engine pump was then used to inject the seawater into the pool. Although the water level in the pool is not confirmed, judging from the total amount of injected seawater, 40 tons, it is assumed that the level increased about 30 cm after this operation.
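As a quick plausibility check on the “40 tons in, ~30 cm rise” numbers, the implied pool surface area follows directly; the comparison footprint below is an assumption, not a TEPCO figure:

```python
water_added_m3 = 40.0   # ~40 tons of seawater is about 40 m3
level_rise_m = 0.30     # rise inferred in the update

implied_area_m2 = water_added_m3 / level_rise_m
print(f"implied pool surface area: ~{implied_area_m2:.0f} m2")
# -> ~133 m2, i.e. roughly a 12 m x 11 m pool, a plausible footprint
#    for a BWR spent fuel pool (my assumption for comparison).
```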

(3) Brown smoke was observed from the unit 3 reactor building. At around 3:55 pm on March 21, a TEPCO employee confirmed light gray smoke arising from the southeast side of the rooftop of the Unit 3 building. Workers were told to evacuate. The smoke was observed to decrease, and it died out at 6:02 pm. TEPCO continues to monitor the site’s immediate surroundings. There was no work underway, and no explosive sound, at the time of discovery.

(4) Smoke from the unit 2 reactor building (as of 9:00 pm, March 21). TEPCO’s unit operator found new smoke spewing from the mountain side of the unit 2 reactor building around 6:20 pm, distinct from the smoke from the blow-out panel on the sea side. There was no explosive sound heard at the time. At 7:10 pm, TEPCO instructed workers at units 1–4 to evacuate into the building. Evacuation was confirmed at 8:30 pm.

(Note: since more smoke had been found at unit 3 at 1:55 pm, and evacuation was completed at that time, no workers remained at the units when the smoke was found at unit 2.)

TEPCO assumes the smoke is something like vapor, but is still investigating its cause while monitoring plant parameters.

Radiation level near the Gate of Fukushima-Daiichi NPS increased at the time of the smoke, then decreased to the prior level:

  • 5:40 pm: 494 μSv/hr
  • 6:10 pm: 1,256 μSv/hr
  • 6:20 pm: 1,428 μSv/hr
  • 6:30 pm: 1,932 μSv/hr
  • 8:00 pm: 493.5 μSv/hr

As a result of smoke from unit 2 and 3, scheduled water cannon spraying operations for March 21 were postponed.

(5) Power supply restoration at unit 2 (as of 5:00 pm, March 21). Power cables have been connected to the main power center (existing plant equipment) and confirmed as properly functioning. Presently, soundness tests of the equipment are underway. A pump motor, which is used to inject water to spent fuel pool, has been identified as needing to be replaced.

Similar power connections have been made to reactors 5 and 6 and a diesel generator is providing power to a cooling pump for the used fuel pools. Power cable is being laid to reactor 4, and power is expected to be restored to reactors 3 and 4 by Tuesday.

Kyodo News now reports that all 6 units are connected to external power, and control room power and lighting is about to be restored.

The water-spraying mission for the No. 4 reactor, meanwhile, was joined by trucks with a concrete squeeze pump and a 50-meter arm, confirmed after trial runs to be capable of pouring water from a higher point.

With the new pump trucks arriving, the pumping rate for water spraying has increased to 160 tonnes per hour, through a 58-metre flexible boom via remote control.

Here is the latest FEPC status report:

March 21, 2011

Fukushima Nuclear Accident – Why I stay in Tokyo

Filed under: IFR (Integral Fast Reactor) Nuclear Power, Japan Earthquake — buildeco @ 12:22 pm

Posted by Barry Brook

Guest Post by Axel Lieber. Axel is a German national and has been a resident of Tokyo since 1998. He runs a small executive search firm and is married to a Japanese national.

[Editor’s note: This is a personal perspective, not a professional scientific one, but I can verify Axel’s facts]

Why I stay in Tokyo

僕が東京にとどまる理由

[This commentary contains footnotes and links that allow you to verify what I am saying.]

Thousands have left Tokyo recently in a panic about the perceived radiation threat. If you ask any one of them to precisely articulate what the threat consists of, they will be unable to do so. This is because they actually don’t know, and because in fact there is no threat justifying departure, at least not from radioactivity (*). They flee because they have somehow heard that there is a threat – from the media, their embassies, their relatives overseas, friends, etc. These sources of information, too, have never supplied a credible explanation for their advisories.

But they have managed to create a mass panic, leading to thousands of people wasting their money on expensive air fares, disrupting their professional lives, their children’s education, and the many other productive activities they were going about. In some cases, foreign executives have abandoned their posts in Tokyo, guaranteeing a total loss of respect among those who have stayed behind. Some service providers catering to the foreign community have lost almost their entire income overnight. Others, conversely, will lose long-term clientele because they themselves have fled, forcing their remaining customers and clients to find new providers. Domestic helpers (especially from the Philippines) have suddenly lost their livelihoods because their “employers” think it’s alright to run away without paying their helpers another penny. Another result of all the hysteria is that attention has been diverted away from the real disaster: the damage done in north-eastern Japan, where thousands have died, and many tens of thousands live in dreadful conditions right now, waiting for help.

Contrast this with the fact that radioactivity levels in Tokyo are entirely safe and have been since the beginning of the Fukushima incident (*1a, and *1b for continuous updates). Modern instruments to measure radioactivity are extremely sensitive and precise, and report even the smallest deviations with impressive reliability. Nowhere in the Tokyo area have there been any measurements that would imply any sort of health risk. There certainly have been increases in radioactivity but they are tiny and simply irrelevant to anyone’s health. There is also no fear that there could be some kind of cover-up.

Instruments to measure radioactivity are available at many different research institutions that are not controlled by the Japanese government. Nor does the Japanese government control the media: it simply has no laws and no means to do so.

[Editor’s Note: For a contrast, the background level in London is 0.035 to 0.05 µSv per hour, see the pie chart for an average breakdown by source. Also, see this great chart.]

But what about a worst-case scenario, one that is yet to come? For four days now, I have tried to find a serious source of information – a nuclear safety engineer or a public health expert – who would be able to articulate just what exactly the threat to residents of Tokyo is. It has been difficult because there aren’t many who bother to. I could quote several Japanese experts here but won’t do so to avoid a debate over their credibility (which I personally do not have any particular reason to doubt). The most to-the-point assessment I have found from outside of Japan comes from the UK government’s Chief Science Advisor, Sir John Beddington. In a phone call to the British embassy in Tokyo, he says about the worst-case scenario:

In this reasonable worst case you get an explosion. You get some radioactive material going up to about 500m up into the air. Now, that’s really serious, but it’s serious again for the local area….The problems are within 30 km of the reactor. And to give you a flavour for that, when Chernobyl had a massive fire at the graphite core, material was going up not just 500m but to 30,000 feet (9,144 m). It was lasting not for the odd hour or so but lasted months, and that was putting nuclear radioactive material up into the upper atmosphere for a very long period of time. But even in the case of Chernobyl, the exclusion zone that they had was about 30km. And in that exclusion zone, outside that, there is no evidence whatsoever to indicate people had problems from the radiation. The problems with Chernobyl were people were continuing to drink the water, continuing to eat vegetables and so on and that was where the problems came from. That’s not going to be the case here. So what I would really re-emphasise is that this is very problematic for the area and the immediate vicinity and one has to have concerns for the people working there. Beyond that 20-30km, it’s really not an issue for health.(*2)

It is important to note that Beddington, too, uses language such as “really serious”. Most nuclear safety engineers at this moment would describe the Fukushima incident as “very serious” and as having potentially “catastrophic consequences”. But the important point to note here is that these descriptions of the situation do not translate into public health concerns for Tokyo residents! They apply to the local situation at and around the Fukushima plant alone.

As of the time of writing this note (March 19, 2011, 13:00 JST), the status at Fukushima is still precarious but there are now signs that the situation is stabilizing and may be brought under control in the next few days. (*3)

Tokyo, even at this time of crisis, remains one of the best, safest and coolest large cities in the world to live in. All public services operate normally or almost normally. Many areas of central Tokyo have not had any power outages, and when outages occur they are limited to a few hours and certain areas, and are announced well in advance. I have personally not experienced any power outages. Food is available in almost normal quantity and quality. The only food types I have personally seen to be lacking are milk and dairy products, and rice because of panic purchases. Gas (petrol) supply is indeed limited, but just yesterday I was able to get a full tank after “only” a fifteen-minute wait. Public order and safety in Tokyo remain higher than in any other large city in the world, as they have always been over the past few decades.

To really rub this in: if you live in New York, Shanghai, Berlin, London or Sydney or any other metropolis, you are more exposed to public safety threats such as crime or road accidents than I am at this moment, and in most cases considerably so.

Active and passive smoking, driving a car or motorcycle, getting a chest x-ray, jay-walking, or snowboarding down a snowy mountain are all much more risky activities than simply sitting on a sunny roof terrace in Tokyo.

And sunny it is today, in the capital of the country whose name is literally “Origin of the Sun”.

To obtain level-headed information about the status at Fukushima, avoid CNN and read www.mitnse.com or www.bravenewclimate.com

Footnotes

(*) There is, however, a possibility that there will be further strong earthquakes in the next few weeks, especially in the north-east of Japan, but also in other areas, including Tokyo. This was demonstrated in the recent earthquakes in New Zealand and Chile, where powerful quakes followed the original ones, not necessarily in the same spot either. It would be more rational to stay away from Japan for a few weeks because of this. But again, the risk of being harmed by another earthquake, especially in Tokyo with its superb infrastructure, is not very high. And if you consider this reason enough to stay away, then indeed, you should never live in Japan because we will always face this risk here.

(1a) http://e.nikkei.com/e/fr/tnks/Nni20110316D15JFA16.htm

(1b) http://metropolis.co.jp/quake/quake-2011-03/tokyo-atmospheric-radiation-levels/

(2) http://www.bbc.co.uk/blogs/thereporters/ferguswalsh/2011/03/japan_nuclear_leak_-_health_risks_2.html

(3) http://mitnse.com/2011/03/18/news-update-318/

The original post can be read here (or here for 日本).

Fukushima nuclear accident: Saturday 19 March summary

Filed under: IFR (Integral Fast Reactor) Nuclear Power, Japan Earthquake — buildeco @ 12:18 pm

by Barry Brook

Last Saturday the crisis level at the Fukushima Daiichi nuclear power station was rapidly on the rise. Hydrogen explosions, cracks in the wetwell torus and fires in a shutdown unit’s building — it seemed the sequence of new problems would never end. A week later, the situation remains troubling, but, over the last few days, it has not got any worse. Indeed, one could make a reasonable argument that it’s actually got better.

Yes, the IAEA has now formally listed the overall accident at an INES level 5 (see here for a description of the scales), up from the original estimate of 4. This is right and proper — but it doesn’t mean the situation has escalated further, as some have inferred. Here is a summary of the main site activities for today, followed by the latest JAIF and FEPC reports. You also might be interested in the following site map:

Another large cohort of 100 Tokyo fire fighters joined the spraying operation to cool down the reactors and keep the water in the spent fuel ponds. The ‘Hyper Rescue’ team have set up a special vehicle for firing a water cannon from 22 m high (in combination with a super pump truck), and today have been targeting the SNF pond in unit 3. About 60 tons of sea water successfully penetrated the building in the vicinity of the pool, at a flow rate of 3,000 litres per minute. Spraying with standard unmanned vehicles was also undertaken for 7 hours into other parts of the unit 3 building (delivering more than 1,200 tons), to keep the general containment area cool. The temperature around the fuel rods is now reported by TEPCO (via NHK news) to be below 100C.

Conditions in unit 3 are stabilising but will need attention for many days to come. Promisingly, TEPCO has now connected AC cables to the unit 1 and 2 reactor buildings, with hopes that powered systems can be restored to these buildings by as early as tomorrow (including, it is hoped, the AC core cooling systems), once various safety and equipment condition checks are made.

Holes were made in the secondary containment buildings of Units 5 and 6 as a precautionary measure, to vent any hydrogen that might accumulate and so prevent explosions in these otherwise undamaged structures.  The residual heat removal system for these units has now been brought back on line and these pools maintain a tolerable steady temperature of 60C. More here. These buildings were operating on a single emergency diesel generator, but now have a second electricity supply via the external AC power cable.

Why are they concentrating on these activities? Let’s revisit a bit of the history of last week. The spent fuel pools still have decay heat (probably of the order of a few MW in each pool) that requires active cooling. When power went out on Friday, the cooling stopped, and the pool temperature rose slowly over the weekend, probably to the point of boiling off water (a large volume may also have been lost due to ‘sloshing’ during the seismic event); a rough boil-off calculation is sketched below. The pool is located on the 4th floor, above the reactor vessel level. It remains unclear why they could not arrange fire trucks to deliver the sea water before the fuel rods got damaged and started releasing radioactivity. Now the effort is hampered by the high radiation level (primarily penetrating gamma rays). This is the inventory of those spent fuel ponds that have been causing so many headaches:


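How quickly would an uncooled pond boil away? Here is a minimal sketch, assuming a decay heat of 2 MW (the “few MW” figure above is only approximate), pool water starting at 60°C, and standard properties of water; these inputs are assumptions for illustration, not figures from the official reports:

```python
# Rough estimate of evaporative water loss from an uncooled spent fuel pond.
# ASSUMED inputs: 2 MW decay heat, water initially at 60 C.

decay_heat_w = 2e6        # assumed decay heat, watts
c_p = 4186.0              # specific heat of water, J/(kg*K)
delta_t = 100.0 - 60.0    # heating from 60 C to the boiling point, K
latent_heat = 2.26e6      # latent heat of vaporisation of water, J/kg

# Energy needed to bring 1 kg of water to the boil and then evaporate it:
energy_per_kg = c_p * delta_t + latent_heat    # ~2.43 MJ/kg

boil_off_kg_per_s = decay_heat_w / energy_per_kg
tonnes_per_day = boil_off_kg_per_s * 86_400 / 1_000
print(f"~{boil_off_kg_per_s:.2f} kg/s, i.e. ~{tonnes_per_day:.0f} tonnes per day")
```

On these assumptions a pond loses roughly 70 tonnes of water a day, which is consistent with the scale of the spraying operations described above.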
In order to remove the decay heat after reactor shutdown, the cooling system must be operating. Following the loss of offsite power, the on-site diesel generators came on, but the tsunami arrived an hour or so later and wiped them out. Batteries then provided power for 8 hours or so, during which time portable generators were brought in; unfortunately, the connectors proved incompatible. As steam pressure built up inside the pressure vessels, the relief valves were opened, dumping steam into the pressure suppression chambers, from which it was in turn vented into the secondary containment buildings, where the hydrogen it carried exploded and blew out the roof slabs.

Sea water was then pumped in by fire trucks, and the reactor pressure vessels have now cooled down to near atmospheric pressure, but the top quarter to third of the fuel assemblies remains uncovered (the FEPC updates give the actual pressures and water levels). The pressure vessels and the reactor containment structures appear to be intact, except at Unit 2, where the hydrogen explosion took place inside the containment, damaging the lower wetwell torus structure (but almost certainly not the reactor vessel, although its exact status is unclear). The radioactivity releases appear to be coming mostly from the spent fuel storage ponds rather than from the reactor cores.

World Nuclear News has a really excellent extended article here entitled “Insight to Fukushima engineering challenges”. Read it! Further, you should watch this 8-minute reconstruction of the timeline of the accident done by NHK — brilliant, and it really highlights the enormous stresses this poor station faced against a record-breaking force of nature. As I noted earlier, just about everything that could have gone wrong, did. But valuable lessons must also be learned.

The IAEA and the Japanese government have reported potential contamination of food products from the local Fukushima area with radioactive iodine (mostly vented as part of the pressure-relief operations at units 1 to 3). This is a short-term risk, because of radioactive iodine’s 8-day half-life, and a small one, given the trace amounts recorded, but precautions are warranted, as discussed here. What does this mean?

In the case of the milk samples, even if consumed for one year, the radiation dose would be equivalent to that which a person would receive in a single CT scan. The levels found in the spinach were much lower, equivalent to one-fifth of a single CT scan.

… and to further put this in context:

The UK government’s chief independent scientific advisor has told the British Embassy in Tokyo that radiation fears from the stricken Fukushima nuclear power plant are a “sideshow” compared with the general devastation caused by the massive earthquake and tsunami that struck on 11 March. Speaking from London in a teleconference on 15 March to the embassy, chief scientific officer John Beddington said that the only people likely to receive doses of radiation that could damage their health are the on-site workers at the Fukushima Daiichi plant. He said that the general population outside of the 20 kilometre evacuation zone should not be concerned about contamination.
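To see why the iodine contamination is a short-term problem, consider the decay arithmetic implied by the 8-day half-life mentioned above (the time points below are illustrative):

```python
# Fraction of iodine-131 activity remaining after t days (half-life ~8 days).
HALF_LIFE_DAYS = 8.0

def fraction_remaining(t_days: float) -> float:
    return 0.5 ** (t_days / HALF_LIFE_DAYS)

for days in (8, 16, 40, 80):
    print(f"after {days:3d} days: {fraction_remaining(days):.1%} of the activity remains")
# after   8 days: 50.0%
# after  16 days: 25.0%
# after  40 days: 3.1%
# after  80 days: 0.1%
```

Within about three months, essentially none of the original iodine-131 remains, so contaminated produce is a transient, manageable hazard rather than a lasting one.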

As to the possibility of a zirconium fire in the SNF ponds, this seems unlikely. Zirconium has a very high ignition point, as illustrated in a video produced by UC Berkeley nuclear engineers: they applied a blowtorch to a zirconium rod and it did not catch fire, even though the temperature was said to reach 2,000°C. The demonstration is shown about 50 seconds into the video. [Incidentally, I visited that lab last year!]

The Japan Atomic Industrial Forum has provided its 12th reactor-by-reactor status update (16:00, March 19).

Here is the latest FEPC status report:

———————-

  • Radiation Levels
    • At 7:30PM on March 18, radiation level outside main office building (approximately 1,640 feet from Unit 2 reactor building) of Fukushima Daiichi Nuclear Power Station: 3,699 micro Sv/h.
    • Measurement results of ambient dose rate around Fukushima Nuclear Power Station at 4:00PM and 7:00PM on March 18 are shown in the attached two PDF files respectively.
    • At 1:00PM on March 18, MEXT decided to carry out thorough radiation monitoring nationwide.
    • For comparison, a human receives 2,400 micro Sv per year from natural radiation in the form of sunlight, radon, and other sources. One chest CT scan generates 6,900 micro Sv per scan. (A quick arithmetic comparison using these figures follows the status report below.)
  • Fukushima Daiichi Unit 1 reactor
    • Since 10:30AM on March 14, the pressure within the primary containment vessel cannot be measured.
    • At 4:00PM on March 18, pressure inside the reactor core: 0.191 MPa.
    • At 4:00PM on March 18, water level inside the reactor core: 1.7 meters below the top of the fuel rods.
    • As of 3:00PM on March 18, the injection of seawater continues into the reactor core.
    • Activities for connecting the commercial electricity grid are underway.
  • Fukushima Daiichi Unit 2 reactor
    • At 4:00PM on March 18, pressure inside the primary containment vessel: 0.139 MPa (absolute).
    • At 4:00PM on March 18, pressure inside the reactor core: -0.002 MPa.
    • At 4:00PM on March 18, water level inside the reactor core: 1.4 meters below the top of the fuel rods.
    • As of 3:00PM on March 18, the injection of seawater continues into the reactor core.
    • Activities for connecting the commercial electricity grid are underway.
  • Fukushima Daiichi Unit 3 reactor
    • At 2:00PM on March 18, six Self Defense emergency fire vehicles began to shoot water aimed at the spent fuel pool, until 2:38PM (39 tonnes of water in total).
    • At 2:42PM on March 18, TEPCO began to shoot water aimed at the spent fuel pool, until 2:45PM, by one US Army high pressure water cannon.
    • At 3:55PM on March 18, pressure inside the primary containment vessel: 0.160 MPa (absolute).
    • At 3:55PM on March 18, pressure inside the reactor core: -0.016 MPa.
    • At 3:55PM on March 18, water level inside the reactor core: 2.0 meters below the top of the fuel rods.
    • As of 3:00PM on March 18, the injection of seawater continues into the reactor core.
  • Fukushima Daiichi Unit 4 reactor
    • No official updates to the information in our March 18 update have been provided.
  • Fukushima Daiichi Unit 5 reactor
    • At 4:00PM on March 18, the temperature of the spent fuel pool was measured at 152.4 degrees Fahrenheit (66.9°C).
  • Fukushima Daiichi Unit 6 reactor
    • At 4:00PM on March 18, the temperature of the spent fuel pool was measured at 148.1 degrees Fahrenheit (64.5°C).
  • Fukushima Daiichi Common Spent Fuel Pool
    • At 10:00AM on March 18, it was confirmed that water level in the pool was secured.
  • Fukushima Daiichi Dry Cask Storage Building
    • At 10:00AM on March 18, it was confirmed that there was no damage by visual checking of external appearance.

At 5:50PM on March 18, the Japanese safety authority (NISA: Nuclear and Industrial Safety Agency) announced provisional INES (International Nuclear and Radiological Event Scale) ratings for the incidents caused by the earthquake.

Fukushima Daiichi Units 1, 2 and 3 = 5 (Accident with wider consequences)

Fukushima Daiichi Unit 4 = 3 (Serious incident)

Fukushima Daini Units 1, 2 and 4 = 3 (Serious incident)

(No official provisional rating for Fukushima Daini Unit 3 has been provided.)
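Finally, here is the arithmetic comparison promised above, as a sanity check on the dose numbers in the FEPC report. It uses only the three figures given in the report (3,699 micro Sv/h outside the main office building, 2,400 micro Sv per year of natural background, 6,900 micro Sv per chest CT scan); a sketch, not a formal dose assessment:

```python
# Comparing the reported on-site dose rate with the report's own benchmarks.
site_rate_usv_per_h = 3_699    # outside main office building, March 18
natural_usv_per_year = 2_400   # typical annual natural background (per the report)
ct_scan_usv = 6_900            # one chest CT scan (per the report)

minutes_to_annual_background = natural_usv_per_year / site_rate_usv_per_h * 60
hours_to_one_ct = ct_scan_usv / site_rate_usv_per_h
print(f"A year's natural background in ~{minutes_to_annual_background:.0f} minutes")  # ~39 min
print(f"One chest CT equivalent in ~{hours_to_one_ct:.1f} hours")                     # ~1.9 h
```

That is clearly a serious rate for workers stationed near the buildings, which is why their shifts are strictly limited, but it also supports Beddington’s point quoted earlier: the health risk is concentrated on-site, not in the general population.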
