fusion illusion
“There is potential in fusion to revolutionize our world and to change all of the options that are in front of us and provide the world with abundant and clean energy without the harmful emissions of traditional energy sources.”
— John Kerry, Special Presidential Envoy for Climate, 2023
Hopes that fusion power will save us from the looming crises of global warming appear to be even stronger now than they were in 1954, when Lewis Strauss— then chairman of the Atomic Energy Commission, and in apparent reference to fusion power— infamously declared that nuclear power would one day be too cheap to meter. Indeed, nuclear fusion is widely regarded as the best, and some believe the only, hope for achieving widespread, carbon-free electrical power. But that optimism appears to derive more from desperation than from realistic assessment. Even setting aside the enormous and well-known technological problems of trying to maintain a tiny sun in a vacuum vessel, and the all but certain outcome that fusion power will never be economically competitive with other low- or non-carbon energy technologies, the fatal flaw in hopes for fusion power is, unbelievable as it may seem, a shortage of fuel. This essay focuses on the tritium-fueled tokamak reactors used or planned by almost all of the world’s major fusion programs. The fatal flaw of insufficient fuel, and many lesser but highly problematic issues with tritium, do not apply to fusion programs that forgo it; although aneutronic fusion is far more technically challenging, many smaller fusion programs are going in that direction.
As fusion advocates often claim, half of the material required for fusion fuel (deuterium) is readily available from seawater. But what they apparently don’t know– or, far worse, ignore– is that the other half (tritium, the radioactive isotope of hydrogen) can only be created in nuclear reactors. (Very tiny amounts of tritium are also produced in the upper atmosphere when cosmic rays collide with 14N nuclei.) Far from being abundant, tritium is extremely rare– almost all of it has been made by nuclear militaries for hydrogen bombs. Serendipitously for fusion research, however, very small amounts of tritium are also produced in the few nuclear reactors worldwide that use heavy water as a moderator and coolant. Incredibly, the source for half of the fuel required for the world’s major fusion energy programs is a minor and extremely rare waste product from a handful of aging nuclear power plants in Canada: almost all of the world’s commercially available tritium comes from a Canadian facility that processes used heavy water from Canada’s CANDU reactors to remove the tritium that builds up during reactor operation.
Nuclear power can be said to have been born on December 2, 1942, when a team of scientists and engineers led by Enrico Fermi initiated the world’s first controlled chain reaction in a small reactor at the University of Chicago. Thirteen years later nuclear reactors were powering U.S. submarines, and two years after that the first U.S. nuclear generating station started delivering electricity to the grid. By 1992, just fifty years after nuclear power was born, fission reactors were providing 20% of U.S. electricity. There was never any doubt about whether turning atomic fission into electricity could be accomplished. No fundamental engineering challenges needed to be overcome, and no new materials needed to be developed, other than the isotopic enrichment of uranium and the chemical separation of plutonium, both of which were sufficiently well understood that their production processes worked as predicted. Fission reactors were basically just a new way to boil water. The physics had been worked out in top-secret pursuit of weapons with unimaginably destructive power; after that, harnessing nuclear fission for electricity was mostly a matter of straightforward engineering. This is not to diminish the many technical problems encountered and the innovative solutions developed in designing and building fission reactors, but rather to point out that there was never any doubt that a working power reactor could be built.
There were, of course, some problems. Most commercial reactors in the U.S. are scaled-up versions of power systems originally designed for naval warships, particularly submarines: compact reactors with high power density. Pressurized water reactors (PWRs) are the most common type of commercial power reactor in the U.S. and the only type of reactor used in U.S. Navy vessels. But light-water reactors are not inherently safe; the reactor can melt if not constantly and carefully managed. Although nuclear engineers in the 1950s knew how to design inherently safe reactors (reactors that could not melt even if left unattended), the higher-risk naval reactors had a strong head start and were therefore faster to exploit commercially, so that’s what utility companies chose to build. Passively safe reactors that do not need human intervention to prevent meltdown have a much lower energy density— larger size per unit of power— and are therefore not suitable for naval power systems.
Looking back, quickly locking into naval reactor designs for civilian power was a short-term expediency that we should now regret. The civilian nuclear industry was not as tightly managed as the nuclear navy, and a couple of high-profile accidents– along with a concerted anti-nuclear campaign by fossil fuel companies and consequent environmentalist opposition– led to widespread fear of nuclear power and dashed any realistic prospects of nuclear reactors replacing coal-fired power plants. (For specifics, see Atomic Insights.)
With coal-fired power stubbornly holding its share of total world electrical generating capacity (The Guardian, April 2022) and the planet rapidly getting hotter, it’s easy to understand the allure of fusion power as a means of allowing us to continue our electric-intensive lifestyles without the guilt of contributing to global warming. Hopes for a fusion future continue to be promoted by media reports that almost invariably paint a glossy picture of fusion as the future source of energy— clean, safe, and virtually unlimited fuel. In a segment typical of reports on fusion power, for example, Science Magazine recently stated that “Fusion holds the tantalizing promise of plentiful, carbon-free energy, without many of the radioactive headaches of fission-driven nuclear power.”
Fusion advocates emphasize the idea that fusion reactors will be “safe and clean” in contrast to “dangerous and dirty” fission, pointing out that fusion reactors cannot melt down and will not produce extremely long-lived radioactive waste. But those are misleading comparisons. It is true that fusion reactors cannot melt down, and that fusion power reactors will not create large amounts of high-level radioactive waste. Fusion power plants will, however, create very large amounts of low-level radioactive waste that must be kept isolated for 100 years and significant amounts of intermediate-level waste that must be isolated for 1,000 years or more. In contrast, spent fuel from most fission power plants can be safely handled without shielding in about 300 years (plutonium, the primary radio-toxic material remaining after 300 years, is only biologically dangerous if it is ingested). See Integral Management Strategy for Fusion Radwaste: Recycling and Clearance, Avoiding Land-Based Disposal for a brief summary of some of the issues facing waste disposal for fusion power plants. Handling, recycling, and disposal options for fusion waste will also be greatly complicated by the massive amounts of beryllium (a highly toxic metal) that will be used in fusion plants.

Fusion reactors will also face some of the same risks as fission reactors (such as an aircraft crash) that could lead regulatory authorities to require containment structures. Tokamak reactors have unique risks as well. For example, the superconducting (SC) magnets can accidentally “quench,” rapidly releasing their stored energy. If all of the massive SC magnets at ITER were to suddenly quench, they could release energy equivalent to about 10 tons of TNT; the extent of damage would be determined by how fast that energy is released and whether it is controlled by directing the energy outside the building. See “Fusion Research: Time to Set a New Path.”
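To put that figure in perspective, the TNT equivalence is simple arithmetic (one ton of TNT is defined as 4.184 GJ), and the result is roughly in line with the ~41 GJ of stored magnetic energy commonly cited for ITER’s toroidal-field coils:

```latex
E \approx 10\ \mathrm{t\ TNT} \times 4.184\ \tfrac{\mathrm{GJ}}{\mathrm{t\ TNT}} \approx 42\ \mathrm{GJ}
```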
For these and other reasons, the typical narrative presented to the public about fusion— a reliable source of clean energy with virtually unlimited fuel— is at best misleading and at worst outright false. (This criticism only applies to fusion programs that are based on tritium fuel, but that includes all of the world’s largest fusion programs.) Even prominent scientists get it wrong: in a CNBC interview following the December 2022 announcement that the National Ignition Facility had achieved fusion “break-even,” Michio Kaku, a physicist popular for explaining physics on television, said that “hydrogen from seawater could be the basic fuel. So this is too good to be true.” He was right about it being too good to be true, at least in part because he was wrong to imply that all of the fuel for fusion power plants could come from seawater.
The scarcity of reactor fuel is just one of fusion power’s potential show-stoppers (explored in more detail below). All of the world’s major fusion programs are based on “burning” fuel that is 50% tritium. (A few privately-funded fusion programs are pursuing schemes based on fuel cycles that do not require tritium; they are generally considered to be long shots, but if successful they would obviate many of the criticisms of fusion power presented here.) Tritium power plants will be the most complex system of systems ever built, requiring technologies that are now only conceptual and materials that have yet to be developed. The plasma in a fusion reactor, ten times hotter than the core of the sun, must be kept away from the reactor wall by magnets that are super-cooled to almost absolute zero and sit only a meter or so from the plasma. (As unimaginably high as those temperatures seem, tritium-fueled reactors actually run considerably cooler than reactors using other fusion fuel cycles; deuterium-deuterium reactors, for example, require temperatures ten times higher still, more than 100 times hotter than the core of the sun.) The intense heat and radiation from the plasma will damage the plasma-facing materials in the reactor in ways that can only be modeled now, as there is no means of testing them in the conditions they’ll actually encounter. Materials behind that initial wall will be subject to unprecedented degradation from the high-energy neutrons that carry most of fusion’s energy: 80% of the energy in tritium-based fusion reactors will come from the same high-velocity neutrons that are released in a neutron bomb, and models indicate that every atom in the material surrounding the reactor will be displaced many times in a year of operation, with consequences for material degradation that cannot be tested at this time. In addition, systems must be developed to breed and recover tritium at a scale and efficiency far greater than the small test systems developed to date. Difficult compromises must be made between materials and system performance in critical systems and components, not only for system effectiveness but also for human health and environmental protection. And we won’t know the true extent of problems with many of the new materials and systems until a full-scale plant goes online.
If a fusion power plant ever becomes operational, maintenance and repairs on much of the plant will have to be done by remote control, including swapping out assemblies that weigh many tons. And then there is the problem of net power and sustainable operation. Unlike all other power plants, fusion reactors must draw massive amounts of power from the grid to heat the plasma. It is likely that tokamak reactors will only produce more power than they consume in relatively short duration “pulses,” and it remains to be seen how much actual net power they will produce (many experts believe it will be relatively little). But even if they can produce significant net power, tokamak fusion reactors may be offline more often than they are producing power– and they could very well require backup power to maintain a constant flow of electricity to the grid.
Critical as these problems are for the prospects of fusion power, most of them will not be addressed by the world’s most expensive experiment, the International Thermonuclear Experimental Reactor (ITER), currently under construction in southern France. The ITER is a multi-national effort originally intended to demonstrate the viability of fusion as a worldwide source of power. That ambitious objective has been greatly scaled back, however. The current and more modest objectives of the ITER are to test the “availability and integration of the technologies essential for a fusion reactor;” to investigate and demonstrate sustained plasma (although only for less than an hour); and to demonstrate that a fusion reactor can be operated safely. (See the ITER website: “What is ITER.”)
Even if the ITER is successful in achieving these more limited goals– an outcome that some experts believe is highly optimistic– the prospects for fusion power will still be unclear, as the ITER’s objectives address only some of the critical questions facing the potential for widespread fusion power. The task of demonstrating fusion as a viable source of electricity is presently planned for ITER’s successor, the Demonstration Reactor (DEMO). Still in its conceptual infancy, with many fundamental design issues hinging on results from the ITER, DEMO is intended to be the world’s first fully operational fusion reactor, a robust power plant that will demonstrate the technical and economic viability of fusion power. And that is where the big problems with fusion are likely to emerge.
The DEMO power plant, if it is ever completed, will be the most complex, expensive, and temperamental power system ever built. At this very early concept stage, DEMO is expected to produce 800 MW of electrical power, about 80% of a typical commercial fission power plant. To achieve that level of power production the DEMO reactor will have to be much larger than ITER, with huge consequent increases in cost. (Most fusion engineers believe that reactor costs will increase disproportionately to increases in power output.) Many of the critical systems cannot be even conceptually finalized until ITER has completed its mission, now expected to be sometime in the 2040s. All of the systems will be first-of-a-kind in type or scale, and they must work together almost flawlessly from the outset, as tritium supplies will by then be so limited that any major delays in operation could doom long-term prospects for the plant.
If tritium is so rare, it seems reasonable to ask, why are all the world’s major fusion programs planning to use it? Credible answers are no doubt varied and complex, but they include the fact that tritium offers the only practical route to fusion on earth. It is simply not possible to fuse elemental hydrogen, the process that fuels the sun, as the temperatures and, especially, the pressures required to do so are not attainable on earth. It is, however, possible to fuse the heavier isotopes of hydrogen: deuterium (2H) and tritium (3H). Deuterium, also known as heavy hydrogen, is a naturally-occurring and stable isotope that can be extracted from seawater– so the supply of it is indeed virtually unlimited. But fusion using only deuterium (referred to as D-D fusion) requires temperatures and pressures that appear to be unattainable with present or foreseeable technology. Deuterium-tritium (D-T) fusion, on the other hand, occurs at temperatures and pressures that, although much hotter than the core of the sun, are achievable with today’s technology, even if at this point only for very short periods of time. (To achieve productive fusion with tokamak reactors, temperatures must be maintained at about 150 million °C with D-T fuel and around 400-500 million °C with D-D fuel; the temperature at the core of the sun is about 15 million °C. The current record for sustained plasma is about one minute.) A 50/50 mixture of deuterium and tritium (D-T fuel) is therefore the design basis for all of the world’s major fusion power programs. Several fusion projects are pursuing D-D and other non-tritium reactions, but they are relatively small and do not appear to have much prospect for commercial application in the foreseeable future.
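For reference, the D-T reaction itself can be written out explicitly. The energies below are standard textbook values (not figures from this essay), and they show why roughly 80% of the energy leaves the plasma as neutron kinetic energy, as noted earlier:

```latex
{}^{2}_{1}\mathrm{H} \;+\; {}^{3}_{1}\mathrm{H} \;\longrightarrow\; {}^{4}_{2}\mathrm{He}\ (3.5\ \mathrm{MeV}) \;+\; n\ (14.1\ \mathrm{MeV}),
\qquad \frac{14.1}{3.5 + 14.1} \approx 0.80
```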
Unlike deuterium, however, tritium is an extremely rare, human-made isotope produced deliberately only for thermonuclear weapons. Fortunately for the world’s fusion programs, it is also a minor waste product from CANDU (Canadian Deuterium-Uranium) reactors, which use heavy water as a reactor moderator and coolant. (An atom of tritium is formed when a neutron produced during reactor operation collides with a nucleus of deuterium.) Although tritium has relatively low radio-toxicity, it is a beta-emitting carcinogen, so Canada and South Korea extract tritium from used heavy water in order to reduce worker exposure and releases to the environment. Canada is the only country, however, that makes its tritium available for commercial uses such as self-luminous signage and displays, medical tracers, and fusion research. Canadian tritium is therefore the only expected source of start-up fuel for the world’s first fusion power reactor(s), at least for now. (Romania is building a tritium extraction facility for its CANDU reactors– two operational and two planned– but the production rate and availability of tritium from the Romanian facility have not been established; total production with all four reactors operating would probably be in the range of 1,000 to 1,500 grams per year, about the same as the current output of the Darlington facility.) And the total inventory of Canadian tritium is extremely small, only about 25 kilograms– enough to run a full-size fusion power plant for about six months. Moreover, with a half-life of 12.3 years, tritium inventories decrease through decay at about 5% per year. Annual production of tritium from the Darlington plant in Ontario is about 1,500 grams, barely keeping up with annual decay. (South Korea produces about 700 grams of tritium per year but has not made any significant amount of that material available to the world market.)
To date, the low inventory of tritium has not been a problem, as commercial demand for it is only a few hundred grams per year. But the extremely limited supply of tritium is a huge problem for the world’s major fusion programs, all of which are basing their designs on reactors that burn tritium fuel. The ITER, small compared to potential full-scale fusion power plants, is estimated to need 12.3 kg for its planned 10-12 year tritium burn experiments beginning in 2039– a date that has slipped by more than 10 years from the original schedule, most recently following the decision to replace beryllium with tungsten as the plasma-facing material. (From “Tritium resources available for fusion reactors in the long term,” M. Kovari, M. Coleman, et al., Nuclear Fusion, December 21, 2017.) That’s more than half of the tritium expected to be available in 2039. The China Fusion Engineering Test Reactor (CFETR), planned for start-up sometime in the 2030s, is expected to need about 2 kg of tritium, although the CFETR program has not said where it expects to get it. And there is no way to know at this point how much tritium the more than two dozen smaller fusion programs planning to use it will need over the next several decades.
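A back-of-the-envelope model built from the figures above makes the squeeze concrete. This is a sketch under stated assumptions– a 25 kg starting stock, 1.5 kg/yr of combined CANDU production, tritium’s 12.3-year half-life, and ITER’s 12.3 kg spread evenly over an assumed 2039-2049 burn campaign– not a projection from any fusion program:

```python
import math

HALF_LIFE_YEARS = 12.3
DECAY = math.log(2) / HALF_LIFE_YEARS   # ~5.6% loss per year

stock = 25.0          # kg: approximate commercial inventory (assumption)
production = 1.5      # kg/yr: Darlington-scale extraction (assumption)
iter_burn = 12.3 / 11 # kg/yr: ITER's 12.3 kg over ~11 years (assumption)

for year in range(2025, 2056):
    stock = stock * math.exp(-DECAY) + production  # decay, then new supply
    if 2039 <= year <= 2049:                       # assumed ITER D-T campaign
        stock -= iter_burn
    if year % 5 == 0:
        print(year, f"{stock:5.1f} kg")
```

The exact values matter less than the shape of the curve: because production barely offsets decay, the stock saturates in the mid-20s of kilograms, and any sustained draw on the order of a kilogram per year pulls it steadily down– leaving nothing like the start-up charge a DEMO-scale plant would need.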
The real elephant in the room for near-term fusion power, however, is that there may not be (some experts say there won’t be) enough tritium for the start-up of DEMO, the first fusion reactor intended to actually produce electricity– currently but unrealistically planned to begin operation sometime in the 2050s. Solid data from ITER, critical for the conceptual design of all major systems in DEMO, won’t be available until the mid-2040s or later. ITER is currently planning to begin experiments using tritium in 2039 with a slow ramp-up to ensure worker safety. Some results from experiments will be available in the early 2040s, but a full evaluation of material and system performance at ITER probably won’t be available until the late 2040s. How long will it take to complete the evaluation of data from ITER, decide on critical design criteria, produce a complete design for the reactor and balance of plant, obtain necessary regulatory approvals, and finally construct the world’s first fusion power reactor? No one knows. But it seems reasonable to think that it will take a lot longer than it currently takes just to build a (comparatively very simple) conventional fission power plant. Nuclear plants under construction in the U.S. are taking about 15 years to build. The designs and regulatory approvals for those plants were completed before construction began and were based on decades of experience with similar power plants.
Although the lack of fuel has been glossed over in most media accounts of fusion energy, fusion engineers have always known that tritium-fueled plants would have to create, or “breed,” the tritium required to sustain operations. All serious fusion power programs therefore include at least a conceptual scheme for replacing burned and lost tritium by breeding it with the neutrons from fusion. Tritium is created by transmutation when a neutron collides with an atom of lithium, and tokamak reactor programs intend to embed lithium in the material surrounding the reactor wall, where it will be exposed to intense neutron radiation. The physics is unforgiving, however, because each atom of tritium that fuses releases only a single neutron. Just to stay even, tritium breeding therefore requires a neutron capture efficiency of 100%: each neutron released in a fusion event must create another atom of tritium.
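Written out, the breeding reactions (standard nuclear data, not figures from the essay) show both the mechanism and the constraint; self-sufficiency is usually expressed as a tritium breeding ratio (TBR) greater than one:

```latex
{}^{6}\mathrm{Li} + n \;\longrightarrow\; {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + 4.8\ \mathrm{MeV}
\qquad
{}^{7}\mathrm{Li} + n \;\longrightarrow\; {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + n' - 2.5\ \mathrm{MeV}
```

$$\mathrm{TBR} \;=\; \frac{\text{tritium atoms bred}}{\text{tritium atoms burned}} \;>\; 1$$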
But 100% efficiency is not possible, as recovery losses and escaped neutrons are inevitable. So fusion power plants must include a way to create more neutrons than are released during fusion by adding “neutron multiplying” material to the reactor wall surrounding the plasma chamber. Beryllium is the material of choice for neutron multiplication, and the initial design for ITER specified beryllium as the plasma-facing material in the reactor. But beryllium is extremely toxic– so much so that the ITER is now being redesigned to replace beryllium with tungsten as the plasma-facing material and instead embed beryllium in the material behind the plasma-facing layer– a change with broad consequences that will extend schedule delays already decades behind the original plan.
Regardless of the neutron multiplying material and scheme used, breeding and recovering sufficient quantities of tritium will require another new, industrial-scale technology: producing large quantities of lithium enriched in 6Li, an isotope that makes up about 8% of naturally-occurring lithium (the other 92% is the isotope 7Li). Although atoms of both isotopes will transmute into tritium if they absorb a neutron, 6Li has a much higher probability of doing so. (Neutrons do not collide with the nucleus of an atom through a straight-line interaction as in billiards; rather, they have a probability of being “absorbed” that depends on the energy of the neutron and the type of target nucleus.) With the razor-thin breeding margin for tritium, every neutron counts, so the lithium used in fusion reactors must be highly enriched in 6Li. Most schemes for tokamak fusion reactors involve embedding enriched lithium in the blanket wall of the reactor and recovering the tritium– a complex process that is critical for sustained reactor operation but that has not been developed at anything approaching the scale of a fusion power plant.
As with tritium, however, commercially available inventories of enriched 6Li are extremely small (the world’s nuclear militaries hold larger inventories for producing tritium for nuclear weapons). The U.S. ended lithium enrichment over 50 years ago because it was so environmentally destructive: over 6 tons of mercury were unaccounted for, and presumably released to the environment, for every 10 tons of enriched lithium produced for the nuclear weapons program. (See “Mercury Releases from Y-12 Lithium Enrichment.”) The COLEX lithium enrichment process used at the Y-12 plant continues to be used by China and Russia, the only countries currently enriching lithium, albeit in very limited amounts; no other means of enriching lithium has been tested at the scale required for fusion reactors. (See World Nuclear Association, “Lithium.”) But fusion plants will need a lot of 6Li: the DEMO reactor alone is estimated to require some 20 tons of it. Producing enough enriched lithium for even a single fusion power plant will effectively require a new industry, as an environmentally-acceptable method for enriching lithium has not been demonstrated on an industrial scale– another critical issue that is being glossed over or ignored by the world’s major fusion programs. ITER ignores the low worldwide inventory of 6Li and offers a very misleading analysis of expected 6Li requirements for commercial fusion plants. And Bringing Fusion to the U.S. Grid, a recent report on a strategic plan for a U.S. fusion power plant by 2050 prepared by the National Academies of Sciences, Engineering, and Medicine, does not even mention 6Li production as an issue that needs to be addressed.
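A simple two-stream mass balance gives a feel for the scale of such an industry. The assays below are illustrative assumptions (a 90% 6Li product, 7.6% natural abundance, 2% tails), not published DEMO specifications:

```python
def feed_required(product_tons: float, x_product: float,
                  x_feed: float, x_tails: float) -> float:
    """Natural-lithium feed needed to yield a given mass of enriched product.

    Standard two-stream mass balance: feed splits into product and tails,
    and the 6Li mass is conserved across the three streams.
    """
    return product_tons * (x_product - x_tails) / (x_feed - x_tails)

# DEMO's estimated need: ~20 tons of enriched lithium (per the essay)
feed = feed_required(20.0, x_product=0.90, x_feed=0.076, x_tails=0.02)
print(f"~{feed:.0f} tons of natural lithium must be processed")  # ~314 tons
```

Even under these forgiving assumptions, a single DEMO-scale blanket implies processing hundreds of tons of natural lithium through an enrichment process that, today, exists at industrial scale only in its mercury-based COLEX form.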
Although it goes without saying that a power plant has to produce more power than it consumes, that’s never been an issue for conventional thermal power plants (coal, natural gas, nuclear, etc.) as the amount of electricity those plants consume is insignificant compared to the amount they generate. It is, however, a major issue for tokamak reactors. Tokamaks are energy hogs— they require massive amounts of electricity to create the plasma and run all of the machinery and systems necessary to create and contain fusion reactions. The goal, of course, is for the electricity produced by a fusion plant to exceed the power it consumes, ideally by a lot.
For tokamak reactors, however, it remains to be seen how much net power they will be able to produce. The ITER, for example, will consume between 110 and 620 MW of power when it is operating, and about 50 MW even when it is idle or shut down for maintenance and repair. (ITER website: “Power Supply.”) It will consume around 300 MW when the reactor is operating, and for an estimated 10 seconds at the peak of every fusion cycle it will draw about half the output of a large nuclear power plant— enough electricity to service a half-million homes. And the ITER is a relatively small reactor: although it was not designed to produce electricity, if it had been it could only generate about 100 MW(e), or about 10% of a modern nuclear power plant. (Units of MW can be used for either thermal or electrical power, so a parenthetical (t) or (e) is often used to denote thermal or electrical power, respectively.) In other words, the first fusion reactor will constantly consume about half as much electricity as a power reactor of its size could generate, about three times as much during normal operation, and six times as much for brief periods.
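Those multiples follow directly from the figures just given, measured against the hypothetical 100 MW(e) the machine could generate if it were a power plant:

```latex
\frac{50\ \mathrm{MW}}{100\ \mathrm{MW(e)}} = 0.5 \ (\text{idle}),\qquad
\frac{300\ \mathrm{MW}}{100\ \mathrm{MW(e)}} = 3 \ (\text{operating}),\qquad
\frac{620\ \mathrm{MW}}{100\ \mathrm{MW(e)}} \approx 6 \ (\text{peak})
```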
That reality, however, is never mentioned alongside the frequently heard claim that one of ITER’s objectives is to achieve a ten-fold gain in energy, ostensibly releasing ten times more energy than it consumes. The ITER website states, “ITER is designed to yield in its plasma a ten-fold return on power (Q=10).” (“What will ITER Do?”) Although technically accurate in very narrow terms, the media typically ignores the narrow definition and makes sweeping statements, such as a Science News report titled “In a breakthrough experiment, nuclear fusion finally makes more energy than it uses.” Most people could be forgiven for thinking that claim refers to all the energy the experiment used. But it does not.
“Q” is a narrowly-defined parameter used by fusion engineers to judge progress in controlling fusion reactions; it is simply the ratio of the energy produced by a fusion reaction to the energy input required to sustain it. (From the ITER website: “The Tao of Q.”) In other words, Q is the amount of energy produced in fusion divided by the relatively small amount of energy injected into the plasma to boost its temperature from about 20 million °C to the approximately 150 million °C necessary for fusion to occur– about 50 MW(t) for the ITER. (This definition applies to tokamak reactors such as the one being built at the ITER.) But even that narrow definition is misleading, as the 50 MW is thermal energy delivered by RF heaters and neutral beam injectors. To deliver 50 MW(t) of heating, those devices will actually consume about 100 MW(e) of electricity, due to the inherent losses in converting electrical energy to delivered thermal energy.
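Written as formulas, the gap between the physics gain ITER quotes and the engineering gain a utility would care about is stark. (The 500 MW fusion-power figure follows from Q = 10 with 50 MW(t) of injected heating; the electrical figures are those cited in this essay.)

```latex
Q_{\mathrm{plasma}} \;=\; \frac{P_{\mathrm{fusion}}}{P_{\mathrm{heating}}} \;=\; \frac{500\ \mathrm{MW(t)}}{50\ \mathrm{MW(t)}} \;=\; 10
\qquad
Q_{\mathrm{engineering}} \;=\; \frac{P_{\mathrm{electricity\ out}}}{P_{\mathrm{electricity\ in}}} \;=\; \frac{0\ \mathrm{MW(e)}}{300\text{-}620\ \mathrm{MW(e)}} \;=\; 0
```

The engineering gain is zero for ITER simply because the machine will generate no electricity at all; even a DEMO-class successor would need its gross electrical output to exceed all of its recirculating power before that ratio rose above one.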
Although that’s a lot of energy (enough to run more than 70,000 homes), it’s only a portion of the 300-620 MW(e) ITER will draw from the grid when the reactor is running. It takes an enormous amount of electricity to create the plasma and heat it to around 20 million °C, and to power the giant superconducting magnets, the cryogenic system required to cool them to almost absolute zero, and all the other machinery and systems necessary to create and sustain a fusion reaction. (See “ITER is a showcase…for the drawbacks of fusion energy.”) None of that power is included in the calculation of Q. Achieving a Q factor of 10 would indeed be a truly impressive technological accomplishment, but it is technically and intellectually dishonest to leave the impression that a fusion reactor achieving Q=10 is actually producing ten times more power than it consumes.
Much like the widespread misrepresentation of Q, fusion advocates avoid mentioning that the power balance of tokamak fusion reactors– the amount of power they produce vs. the amount of power they consume– bodes rather poorly for the prospects of fusion power. As described above, tokamak reactors require massive amounts of electricity to operate. Although increases in efficiency beyond the experimental stage of the ITER are reasonable to expect (if the ITER or subsequent fusion plants ever actually operate), the amount of net power tokamak reactors can produce is uncertain but likely to be rather low.
Another critical question that will affect the amount of power a tokamak reactor actually delivers to the grid is whether it can run continuously or whether, as many fusion engineers believe, it will have to produce power in cycles or “pulses” of several minutes followed by far longer cooling-down and re-heating periods. The problem is that running a tokamak reactor continuously increases the risk of damage to the plasma-facing components, which could lead to significantly increased repairs or even premature failure. So although it is desirable (and possibly critical, as far as utilities are concerned) for a fusion plant to generate power continuously, it may be necessary to run the reactor in repeated cycles of plasma heating, power production, and cool-down. Because electricity will only flow to the grid during power production, backup power would be required during periods when fusion is not occurring. It isn’t clear what percentage of the time fusion will actually be occurring, but the prospect of needing routine backup power for normal operations is a serious problem for the prospects of fusion power.
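The arithmetic of pulsed operation is unforgiving. With purely hypothetical round numbers (not projections from any fusion program)– a 50% duty fraction, 400 MW(e) of net output while pulsing, and 50 MW(e) of house loads between pulses– the grid-average output is well under half the nameplate figure:

```latex
\bar{P}_{\mathrm{grid}} \;=\; f\,P_{\mathrm{net}} \;-\; (1-f)\,P_{\mathrm{standby}}
\;=\; 0.5 \times 400 \;-\; 0.5 \times 50 \;=\; 175\ \mathrm{MW(e)}
```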
And then there is the matter of fusion plant reliability and economics– and whether there is any reasonable hope that fusion plants will ever be competitive with existing low- or non-carbon energy technologies. Today’s nuclear power plants, for example, almost all of which have been running for decades using technology developed in the 1950s, average full-power generation (referred to as capacity factor) of more than 90%; wind turbines average around 30%. (Capacity factor is the total amount of power generated by a plant in a period of time divided by the power that could have been generated in that period if the plant had run continuously at full output; see statista.com, “Capacity factors for selected energy sources in the United States in 2023.”) Nuclear fission power plants achieved rapid commercial success because the technology was relatively simple and the plants produced power with very few unscheduled interruptions. Many incremental improvements have been made over the seven or so decades of nuclear power, but the basic schemes for commercial nuclear plants have remained the same. Stodgy as they may be, North American nuclear power plants continue to crank out power more reliably than any other source of electricity.
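As a formula, with the essay’s own figures:

```latex
\mathrm{CF} \;=\; \frac{E_{\mathrm{generated}}}{P_{\mathrm{rated}} \times T},
\qquad \mathrm{CF}_{\mathrm{nuclear}} > 0.90,\quad \mathrm{CF}_{\mathrm{wind}} \approx 0.30
```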
It is difficult to imagine fusion power plants achieving anywhere near the same level of reliability. Far from being the relatively simple technology of nuclear fission, fusion power plants will be highly complex systems requiring an array of interdependent, first-of-kind technologies— many of which cannot even be tested until they are completed and deployed in a full-scale power plant. Even if all of the fundamental materials and engineering problems can be solved, if plant operators are able to maintain sufficiently reliable power output, and if enough tritium is somehow available to operate the reactors, fusion power plants will still face the daunting challenge of economic viability.
Although the large number of major questions and unknowns precludes a credible cost estimate for building and operating a fusion power plant, it is all but certain that fusion power will be far more expensive to build and operate than today’s nuclear plants. It is possible that the operating costs alone for a fusion power plant will be so high that the electricity produced would be far too expensive even if the plant didn’t cost anything to build. And it is difficult to imagine that utilities that are no longer investing in comparatively inexpensive conventional fission will even consider buying into more expensive and less reliable tritium-fueled tokamak fusion plants.
It is a matter of conjecture as to why the world’s major fusion programs are proceeding without apparent concern, let alone alarm, about the shortage of tritium, or why so few commentators mention it in their otherwise rosy assessments of fusion energy. But the facts are as they are, and it seems reasonable to conclude that fusion reactors using D-T fuel will never produce any meaningful amount of electricity. For numerous and substantive reasons, we need to face the likelihood that fusion reactors will not be a viable source of power in any time frame that could help stave off catastrophic global warming.
This is not to say, however, that the prospects for fusion energy should be dismissed— only that we should take a longer view, carefully evaluate the options, and invest our resources in technologies that could actually pay off, even if not in this century. Serious advocates of nuclear power typically hold that the current fleet of fission power reactors should be replaced by more robust, fail-safe designs and a fuel cycle that recycles spent fuel— thereby eliminating the problems of reactor failure and eons-long storage of radioactive waste. In the medium term, advanced-design, fail-safe fission reactors are the only credible alternative to fossil fuel power plants, at least over the next few decades and probably much longer. Although a program to build advanced fission reactors could take decades, we know that they will work as planned. Looking several decades down the road, closing the fission fuel cycle (as in the Integral Fast Reactor program) could in theory provide 100% of our electricity for hundreds of years using the waste from today’s reactors as fuel– and greatly reduce the need for mining (and, more importantly, for enriching) uranium. Having wasted several decades fretting about the impossible goal of risk-free nuclear power, it’s time to confront the fact that all sources of power have risks and environmental consequences and to begin responsibly balancing the risks and rewards of all energy options.
For fusion research, the potential of aneutronic fusion should be seriously considered. Aneutronic fusion— in which very little of the energy comes from neutrons— has been explored since the middle of the 20th century. If aneutronic fusion could be accomplished (a questionable outcome, to be sure), the problems associated with today’s fusion schemes— high-energy neutrons, tritium shortage, material degradation, and radioactive waste— would all simply disappear. It is, however, much more difficult to reach energy break-even with aneutronic fusion, as the required temperatures and pressures are far beyond what we can achieve with today’s technology. But that is what research is all about: pushing beyond the limits of what we are currently able to do. The illusions of the currently fashionable D-T fusion program should not diminish our imagination of what we may one day actually be able to do.