
What Have We Accomplished?

A controlled fusion reaction requires holding together, for a long enough time, a plasma that is hot enough and dense enough. These critical conditions can be quantified by the triple product Tnτ, a modification of the Lawson criterion explained in Chap. 5. Here, T is the temperature of the ions, the reacting species; n is the density of either the ions or the electrons, since the plasma is quasineutral; and τ (tau) is the energy confinement time, a measure of how fast (or slowly) energy must be applied to keep T constant. More than 200 tokamaks have been built over the years, and the value of Tnτ achieved in each has been calculated. Some of these values are plotted in Fig. 8.1 as a function of time. This measure of success has increased more than 100,000-fold in four decades, recently doubling every two years.
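As a rough check on that rate (a back-of-the-envelope estimate, not a number taken from the data themselves): a factor of 100,000 is about 17 doublings, since

$$ {2}^{17}\approx 1.3\times {10}^{5},\qquad \frac{40\ \text{years}}{17\ \text{doublings}}\approx 2.4\ \text{years per doubling},$$

so the average doubling time over the whole four decades works out to a little over two years, consistent with the recent rate of about two years.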

Fig. 8.1

Increase of the triple product Tnτ with year. The points are labeled with the names of the tokamaks (Data from http://www.efda.org_fusion_energy/fusion_research_today.htm. The units for Tnτ are 10²⁰ m⁻³ keV s)

Most of this increase has come from the confinement time. The first experimental machines suffered from hydromagnetic instabilities such as the Rayleigh–Taylor and the kink instabilities described in Chap. 5. These can take the plasma to the wall at the speed of a field line wiggle called an “Alfvén wave,” which limits the confinement time τ to microseconds. Once these were controlled, τ increased a thousand-fold to several milliseconds, at which point microinstabilities were the limiting factor. After years of understanding banana orbits, magnetic islands, ballooning modes, and connection lengths, these instabilities were minimized; and τ increased another thousand times to the present value of several seconds.

The rate of progress in fusion can be compared with that in the development of computer chips, the famous Moore’s Law. Gordon Moore predicted that the number of transistors on a chip would double every two years, an unbelievable rate which was actually followed almost exactly. Figure 8.2 shows how this growth compares with a range of doubling times. The fusion figure of merit in Fig. 8.1 keeps pace with Moore’s law, now also doubling every two years. Both of these outstrip Livingston’s law for particle accelerators, for which the energy doubling time is three years.

Fig. 8.2

Moore’s Law for semiconductors compared with doubling rates

Here are pictures of the four large tokamaks which provided the points at the top of these graphs (Figs. 8.3–8.6).Footnote 1

Fig. 8.3

TFTR: Tokamak Fusion Test Reactor at Princeton, NJ

Fig. 8.4

JET: Joint European Torus at Abingdon, UK

Fig. 8.5

DIII-D: Doublet III at General Atomics, La Jolla, CA

Fig. 8.6

JT-60U: Japan Torus at Ibaraki, Japan

As you can see, or cannot see, the tokamak itself is hidden behind a jumble of equipment which includes the neutral-beam injectors, power feeds to the coils, the support structure, and diagnostic instrumentation. To show the size of these machines, Fig. 8.7 is an inside view of the vacuum chamber of DIII-D when it is opened up to air.

Fig. 8.7

Inside the vacuum chamber of DIII-D when it is opened up to air

Fits, Starts, and Milestones

How did we get to this point? The scatter in the points in Fig. 8.1 tells a story. In the short term, progress has been sporadic, with fits and starts caused not only by problems of physics, but also by problems of funding and politics. Glimpses of the history of fusion research can be found in popular books by physicists Amasa Bishop [1], Hans Wilhelmsson [2], McCracken and Stott [3], and Ken Fowler [4]. Less technical coverage of people and politics is given in books by journalists Joan Lisa Bromberg [5] and Robin Herman [6], and in an article by Gary Weisel [7]. Here is a nutshell account.

In the USA, three groups started research on controlled fusion in 1951–1952: one at Livermore, California, headed by Richard F. Post; one at Los Alamos, New Mexico, headed by James Tuck; and one at Princeton, New Jersey, headed by Lyman Spitzer, Jr. It was obvious that the hydrogen bomb reaction was a source of a huge amount of energy, if only it could be released slowly in a controlled way. It was not obvious how to do it. All agreed that trapping and holding a hot plasma would be necessary. Dick Post proposed to use magnetic mirrors, which we shall describe in Chap. 10. Jim Tuck proposed to use pinches (Chap. 7), in which the entire magnetic field is generated by plasma currents. These devices suffered, of course, from the kink instability, which was not known at that time. Tuck had the foresight to name his machine the Perhapsatron. At Princeton, Lyman Spitzer, an astronomer, designed the figure-8 torus, which he named, of course, a Stellarator. A little later, a fourth program started at Oak Ridge, Tennessee, based on another mirror machine, the DCX. This group emphasized experiments which ran continuously (hence DC) rather than in pulses, and eventually included the curiously named ELMO Bumpy Torus. In England, the initial efforts concentrated on pinches, particularly the toroidal pinch, which is a torus like a tokamak, but with a poloidal confining field produced by a large toroidal current. In Russia, research began at the Kurchatov Institute in Moscow with a small torus which they named the Tokamak, invented by Igor Tamm and Andrei Sakharov. Other nations did not join in until after the first milestone, the Geneva conference of 1958, when these secret programs were declassified and revealed.

In the years before that, the US program grew rapidly with the enthusiastic support of Atomic Energy Commission chairman Lewis L. Strauss. The program was named Project Sherwood, a play on James Tuck’s name reminiscent of Friar Tuck of Sherwood Forest. Strauss kept the program classified and well funded with the aim of beating out the UK and the USSR in achieving fusion. Sherwood conferences were held yearly, and there were some memorable occasions. In 1956, the meeting was hosted by Oak Ridge at Gatlinburg, Tennessee, and most attendees found out for the first time the meaning of “dry town.” Even without lubrication, Lyman Spitzer regaled the group with his rendition of songs by Gilbert and Sullivan, which he sang from memory. In 1957, the meeting was in Berkeley, California, and a movie theater had to be taken over in the daytime and secured for the classified meeting. By sheer coincidence, the movie that was playing that week was “Top Secret.” From 1952 to 1954, James van Allen, who discovered his famous radiation belts, built the B-1 stellarator at Princeton, a machine which the newly hired young experimentalists inherited in 1954.

Meanwhile, Spitzer had assembled a strong theoretical group, whose magnum opus was the elegant paper An energy principle for hydromagnetic stability problems, published in 1958 [8]. This paper by Bernstein, Frieman, Kruskal, and Kulsrud did more than anything else to establish plasma physics as a respectable new field in the eyes of all physicists. A calculational method based on minimization of energy was given that could predict the boundaries of stable MHD operation even in toroidal machines with complicated magnetic geometries. This tool allowed experimentalists to build machines that were stable against the Rayleigh–Taylor and kink instabilities, among others, that were discussed in Chaps. 5 and 6.

The 1958 Atoms for Peace conference was organized by the IAEA (International Atomic Energy Agency), formed in 1957 by the United Nations. Based in Vienna, Austria, the IAEA has sponsored the plasma physics and controlled fusion conference every two years since then. A large contingent from Project Sherwood was sent to Geneva, flying across the Atlantic on propeller planes. Preceding the team were tons of display equipment managed by the Oak Ridge experts. Not only were there models such as the figure-8 stellarator shown in Fig.  4.18, but actual operating machines were also transported, including the power supplies and control equipment needed to make them work. No expense was spared. England also put on a large and splendid exhibit, featuring their toroidal pinch, the Zeta. Meanwhile, the USSR exhibit featured the Sputnik, which they had just launched to open the space age. Their fusion machine, the tokamak, was secondary. The tokamak on exhibit looked like a formless, dark, unrecognizable piece of iron and was not made to work. This was how the tokamak age began. But the gauntlet was thrown by the USA, the UK, and the USSR; and the race was on.

At the Geneva conference, the British team announced that neutrons characteristic of fusion reactions had been observed in Zeta. This would have been the first demonstration of fusion created by hot plasma. Unfortunately, it was found that these neutrons came from energetic ions striking the wall, not from the thermal ions in the body of the plasma. As explained in Chap. 3, ion beams cannot produce net energy gain; that requires a thermonuclear reaction. The Brits had been careless and had stumbled. It was an embarrassing moment for their leaders, Peter Thonemann and Sebastian “Bas” Pease, two gentlemen who were the best friends one could have. The idea of a toroidal z-pinch (zed-pinch to Englishmen) has survived, however, as a possible advanced alternative to the tokamak, aided by a brilliant theory by their countryman, Bryan Taylor.

The 1960s saw progress on many fronts. The most important was the announcement in 1968 by Lev Artsimovich, the driving force of the Russian effort, that the confinement time was 30 times longer than the Bohm time and record-breaking electron temperatures had been achieved in their T-3 tokamak. Recall that Bohm diffusion, caused by microinstabilities, was limiting confinement times to the millisecond regime, so this was important progress if it could be believed. The scientific community was skeptical, since Russian instruments were comparatively primitive. In 1969, an English team headed by Derek Robinson flew to Kurchatov with a laser diagnostic tool that the Russians did not have. They measured the plasma in the T-3 and found that the Russian claims were correct. The tokamak had to be taken seriously. Soon thereafter, research tokamaks began appearing at General Atomics and several universities in the USA, as well as in many locations in Western Europe and Japan. Even the venerable Model C stellarator at Princeton was converted to a tokamak in 1970. In retrospect, the invention of the tokamak was a lucky break. Its self-curing feature of sawtooth oscillations was not foreseen, nor were the gifts from Mother Nature listed in Chap. 7. The cures for Bohm diffusion could have been laboriously found in any of a number of magnetic bottles, some of which may turn out to be more suitable for a reactor than a tokamak. It was concentrating on a single concept, the first promising one, that advanced the tokamak to its present status.

Throughout the 1960s, the Princeton group whittled away at the Bohm diffusion problem, clarifying the microinstabilities responsible for that enhanced loss rate. Much of this work was basic experimentation done in linear machines, which did not suffer from the complicated field lines of stellarators and tokamaks. In the USSR, Mikhail Ioffe at his institute in St. Petersburg invented the “Ioffe bars.” These were four bars carrying current to form a magnetic well (“minimum-B”) configuration in a mirror machine, thus stabilizing the most troublesome instability in those confinement devices. Though mirror confinement is outside our scope here, the minimum-B concept is also used in tokamak configurations. These results, as well as the ones from the T-3 tokamak, were presented in the memorable IAEA meeting of 1968. After the technical sessions in Moscow, Artsimovich led the entire conference to a big party in Novosibirsk, the science city deep in Siberia. The party was held at a large artificial lake made by cutting down trees and covering the stumps with water. Long picnic tables were set up on the shores and food served with Russian hospitality. It seemed that the tables for 60-second chess games must have stretched for 100 yards. Here, plasma physicists from many countries got acquainted on a personal level. It was the beginning of international cooperation and competition.

Another milestone was announced at the Novosibirsk meeting when the General Atomics group showed the picture of Fig. 8.8, which completely surprised the Russians. Had the Americans trumped them with the resources to build a torus large enough to hold a person standing up? Actually, it was not a tokamak or stellarator but an “octopole,” spelled “octupole” when another one was built at the University of Wisconsin by Don Kerst. It had four current-carrying rings suspended by thin wires within the plasma, creating a magnetic well. The plasma was absolutely stable in such a magnetic field, and the classical diffusion rate, caused by collisions alone, was observed for the first time [9]. Being a pure physics experiment, the octopole did not require a large, expensive magnetic field, and it was not the advanced fusion machine that the Russians had feared. Internal conductors would not be practical in a real reactor.

Fig. 8.8

Inside the toroidal octopole at General Atomics (courtesy of Tihiro Ohkawa and published in Chen [10])

The 1970s was a period of euphoria, with Artsimovich predicting scientific breakeven by 1978, and Bob Hirsch, then head of fusion research in the Atomic Energy Commission, pushing for an even earlier date. The prospect of an infinite energy source evoked such lyrical epithets as “Prometheus Unbound!”. With the difficulty of magnetic confinement recognized, the importance of controlling fusion was compared with that of inventing fire. Funding started to increase when James R. Schlesinger became AEC chairman on his way to the CIA and Defense. Support for fusion energy was further escalated by the oil crisis of 1973, when a speed limit of 55 miles per hour was mandated throughout the USA. The dramatic increase in the fusion budget is shown in Fig. 8.9, reaching a peak of almost $900M annually in 2008 dollars. Congress passed the Magnetic Fusion Engineering Act of 1980, championed by Representative Mike McCormack (D-WA), which laid out the plans and the budget needed to build a demonstration reactor, DEMO, by the year 2000. The Act was never funded as passed. Tired of promises that fusion would be achieved in 25 years regardless of when the question was asked, Congress began cutting the fusion budget. Ed Kintner took over the fusion office from Hirsch in 1976 and had to reorganize priorities to fit available funds. Many alternative approaches to magnetic confinement still existed at that time,Footnote 2 and the plan was to keep exploring them while the tokamak served as the flagship on which critical engineering tests were made. Nonetheless, several large projects ultimately had to be canceled, including the Fusion Materials Test Facility and MFTF-B, the world’s largest superconducting magnet, built for mirror fusion. That fusion would always be 25 years in the future was made a self-fulfilling prophecy by the decrease in funding.

Fig. 8.9

US fusion research budget in 2008 dollars (adapted from data from Fusion Power Associates, Gaithersburg, VA)

Curiously enough, the funding peak in Fig. 8.9 tracks the price of oil at the time.Footnote 3 Unfortunately, this did not happen in the oil crisis of 2008, since other energy alternatives such as solar and wind power were available, and the USA was at war in Iraq. The dissolution of the Soviet Union in 1991 had a major effect on the willingness of Congress to support fusion. The threat of being outdone by the Russians was no longer there, and the attitude was to let the friendly nations which are more dependent on foreign oil bear the main expense. As a result, the USA, which had been the world leader in fusion development, slowly lost its preeminent position to the UK and Japan.

The peak funding levels of the 1970s nonetheless enabled the start of the billion-dollar machines that set milestones two decades later. The TFTR at PrincetonFootnote 4 began construction in 1976 and ran from 1982 to 1997. This was a big step because it was the first machine made to run with DT rather than helium or deuterium. Once tritium is introduced, the DT reaction would produce 14-MeV neutrons, which would activate the stainless steel walls. Massive shielding would be required, and maintenance could be done only by remote control. By 1986, TFTR had set records in ion temperature (50 keV or 510,000,000°C), plasma density (10¹⁴ cm⁻³), and confinement time (0.21 s), but of course not all at the same time. In 1994, a 50–50% DT mixture was heated to produce 10.7 MW of fusion power. This is only about 1% of what a power plant would give and occurred only in a pulse, but it was the first demonstration of palpable power output. Before it was decommissioned, TFTR also demonstrated bootstrap current and reversed shear, effects described in Chap. 7.

Close on the heels of the TFTR, western Europe built an even larger machine, the Joint European Torus, JET, also capable of using DT fuel. Designed in 1973–1975 and built starting in 1979, it has operated from 1983 until now. It was funded by the countries of Euratom and is now operated under the European Fusion Development Agreement, with participation of over 20 countries.Footnote 5 Currently the world’s largest tokamak, with a major radius of 3 m, it is also powered impressively, with a magnetic field of 3.45 T (34.5 kG), total heating power of 46 MW, and a toroidal current of 7 MA. It set a record with a pulse of 2 MA that lasted 60 s. In 1997, JET announced a new world record with DT fuel, producing 16 MW of fusion power and keeping 4 MW going for 4 s. JET is being modified for experiments in support of ITER, the large international project described at the end of this chapter.

The third large tokamak of this era is Japan’s JT-60, which started operating in 1985. It plays a leading role in researching effects at the forefront of tokamak science, such as reversed shear, H-modes, and bootstrap current. Much of this is too technical for this book, but JT-60 has set some world records which are easy to understand. In 1996, it achieved the highest fusion triple product. Recall that the triple product is, more exactly,

$$ \text{Triple product}=n{T}_{\text{i}}{\tau }_{\text{E}},$$

where τE is the energy confinement time. The value achieved was 1.5 × 10²¹ keV s/m³, close to the value needed for energy breakeven, and only about a factor of seven below what is required for a reactor. Of course, this was in a pulse and not in steady state. In 1998, JT-60 set a record for Q, the ratio of fusion energy to plasma heating energy, at Q = 1.25. However, since JT-60 was not designed to handle tritium, the experiment was done in deuterium and the result extrapolated to DT. The highest ion temperature of 49 keV was also reported in JT-60. The machine excelled in long pulses, running steadily for as long as 15 s, or for 7.4 s while the bootstrap fraction was 75%. Perhaps most impressive was the production in 2000 of a plasma with zero current over 40% of the minor radius. The current in an outer shell held the plasma even though there was no confinement in the current hole. This is exactly the profile that is suitable for operation with a large bootstrap current fraction.

By focusing on these three machines, we have had to omit the great contributions of other large machines such as DIII-D and ASDEX, as well as those of hundreds of smaller tokamaks built to study particular effects. Though not tokamaks, there are also large machines of the stellarator type, such as Wendelstein 7 in Germany and the Large Helical Device in Japan. No large tokamaks had been built since the turn of the century until two Asian machines went online in 2007: the KSTAR in Daejeon, Korea and the EAST (Experimental Advanced Superconducting Tokamak) in Hefei, China. You can guess what KSTAR stands for. Both of these machines use superconducting coils cooled by liquid helium, requiring a second vacuum system to keep the coils cold. The development of large superconductors is an important step toward a fusion reactor.

As can be seen in Fig. 8.9, the US fusion budget steadily declined in the 1980s and 1990s. Construction of large machines had been completed; there was no oil crisis or competition from the USSR; and people were disillusioned about the prospect of ever achieving fusion. In particular, members of Congress were reluctant to support a project that could not be completed in their terms of office. Major sources of funding shifted to countries which have very limited fossil fuel reserves, and the USA slowly lost its lead at the forefront of fusion research. In 1995, a Fusion Review Panel headed by John P. Holdren and Robert W. Conn submitted a reportFootnote 6 to President Clinton’s Commission of Advisors on Science and Technology on a requested evaluation of the fusion situation. The Panel estimated that progress to a demonstration reactor by 2025 would require annual funding levels averaging $645M between 1995 and 2005, with a peak of $860M in 2002. Should budgetary constraints not permit this level, alternate scenarios were also given. At a realistic level of $320M/year, the best that could be done was to maintain the expert community in plasma science and fusion technology while expanding international participation. With this devaluation, the Magnetic Fusion Energy Program was changed to the Fusion Energy Sciences Program. The restructured program was presented to the DOE Office of Energy Research by the Fusion Energy Advisory Committee, chaired by Conn, in 1996 [13]. As seen in Fig. 8.9, the budget has been maintained at the $300M level since that time, partly through the efforts of Undersecretary for Science Raymond Orbach under President Bush. With DIII-D, the largest tokamak extant in the USA, and many intermediate-sized devices in universities, the level of fusion science and innovation nonetheless leapt forward, helped by advances in computation and theory.

It was in this period that burning plasma became the catchword, and planning for a large international tokamak to achieve this, the ITER, began. The success story of the negotiations deserves its own section. This is presently our best chance to move forward in making our own sun. Meanwhile, we need another scientific interlude to clarify the uncertainties that still exist in fusion science.

Computer Simulation

Before describing some effects that are not yet completely understood, we should mention the basis for believing that these problems are not insoluble. That’s the important subject of computer simulation. In the 1970s and 1980s, when unanticipated difficulties with instabilities arose, computers were still in their infancy. To the dismay of both fusion scientists and congressmen, the date for the first demonstration reactor kept being pushed further into the future, by decades. The great progress seen in Fig. 8.1 since the 1980s was in large part aided by the advances in computers, as seen in Fig. 8.2. In a sense, advances in fusion science had to wait for the development of computer science; then the two fields progressed dramatically together. Nowadays, a $300 personal computer has more capability than a room-size computer had 50 years ago when the first principles of magnetic confinement were being formulated.

Computer simulation was spearheaded by the late John Dawson, who worked out the first principles and trained a whole cadre of students who have developed the science to its present advanced level. A computer can be programmed to solve an equation, but equations usually cannot even be written to describe something as complicated as a plasma in a torus. What, for instance, does wavebreaking mean? In Hokusai’s famous painting in Fig. 8.10, we see that the breaking wave doubles over on itself. In mathematical terms, the wave amplitude is double-valued. Ignoring the fractals that Hokusai also put into the picture, we see that the height of the wave after breaking has two values, one at the bottom and one at the top. Equations cannot handle this; Dawson’s first paper showed how to handle this on a computer.

Fig. 8.10

Hokusai’s painting of the Big Wave

So the idea is to ask the computer to track where each plasma particle goes without using equations. For each particle, the computer has to memorize the x, y, z coordinates of its position as well as its three velocity components. Summing over the particles would give the electrical charge at each place, and that leads to the electric fields that the particles generate. Summing over their velocities gives the currents generated, and these specify the magnetic fields generated by the plasma motions. The problem is this. There are as many as 10¹⁴ ions and electrons per cubic centimeter in a plasma. That’s 200,000,000,000,000 particles. No computer in the foreseeable future can handle all that data! Dawson decided that particles near one another will move together, since they will feel about the same electric and magnetic fields at that position. He divided the particles into bunches, so that only, say, 40,000 of these superparticles have to be followed. This is done time step by time step. Depending on the problem, these time steps can be as short as a nanosecond. At each time step, the superparticle positions and velocities are used to solve for the E- and B-fields at each position. These fields then tell how each particle moves and where it will be at the beginning of the next time step. The process is repeated over and over again until the behavior is clear (or the project runs out of money). A major problem is how to treat collisions between superparticles, since, with their large charges, the collisions would be more violent than in reality. How to overcome this is one of the principles worked out by Dawson.
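To make the bookkeeping concrete, here is a deliberately tiny sketch of the particle-in-cell idea in Python. It is one-dimensional and electrostatic, follows electrons only against a fixed ion background, and has no magnetic field and no collision model, so it is a toy illustration of the procedure just described, not one of the production codes used in fusion research; all names and parameter values here are invented for the illustration.

```python
# Minimal 1D electrostatic particle-in-cell sketch (toy illustration only).
# Normalized units: electron charge and mass = 1, plasma frequency = 1.
import numpy as np

n_particles = 40_000          # superparticles, as in the bunching idea above
n_cells = 128                 # grid cells on which the fields are computed
length = 2 * np.pi            # periodic box length
dx = length / n_cells
dt = 0.1                      # time step (normalized units)
n_steps = 200

rng = np.random.default_rng(0)
x = rng.uniform(0.0, length, n_particles)     # particle positions
v = 0.01 * np.sin(2 * np.pi * x / length)     # small initial velocity ripple
weight = length / n_particles                 # "charge" carried by each superparticle

def charge_density(x):
    """Deposit superparticles on the grid (nearest-grid-point weighting);
    return net charge density = fixed ion background minus electrons."""
    cells = (x / dx).astype(int) % n_cells
    n_e = np.bincount(cells, minlength=n_cells) * weight / dx
    return 1.0 - n_e

def electric_field(rho):
    """Solve Gauss's law dE/dx = rho on the periodic grid by Fourier transform."""
    k = 2 * np.pi * np.fft.fftfreq(n_cells, d=dx)
    rho_k = np.fft.fft(rho)
    E_k = np.zeros_like(rho_k)
    E_k[1:] = -1j * rho_k[1:] / k[1:]         # ik * E_k = rho_k
    return np.real(np.fft.ifft(E_k))

# Main loop: deposit charge -> solve for the field -> push every particle -> repeat.
for step in range(n_steps):
    E = electric_field(charge_density(x))
    cells = (x / dx).astype(int) % n_cells
    v -= E[cells] * dt                        # electron acceleration (charge -1)
    x = (x + v * dt) % length                 # move particles; periodic boundary
```

Even this toy version shows the structure: the fields are computed on a grid from the superparticle positions, and those fields are then used to advance every particle to the next time step. A real tokamak code does the same thing in three dimensions, with ions and electrons, magnetic as well as electric fields, and the careful treatment of superparticle collisions mentioned above.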

Before computers, scientists’ bugaboo was nonlinearity. This is nonproportionality, like income taxes, which go up faster than your income. Linear equations could be solved, but nonlinear equations could not, except in special cases. A computer does not care whether a system behaves linearly or not; it just chugs along, time step by time step. A typical result is shown in Fig. 8.11. This shows the pattern of the electric fields generated by an instability that starts as a coherent wave but then goes nonlinear and takes on an irregular form. This turbulent state, however, has a structure that could not have been predicted without computation; namely, there are long “fingers” or “streamers” stretching in the radial direction (left to right). These are the dangerous perturbations that are broken up by the zonal flows of Chap. 7.

Fig. 8.11

Electric field pattern in a turbulent plasma (from ITER Physics Basis 2007 [26], quoted from [14]. The plot is of electric potential contours of electron-temperature-gradient turbulence in a torus)

The simulation techniques developed in fusion research are also useful in other disciplines, like predicting climate change. There is a big difference, however, between 2D and 3D computations. A cylinder is a 2D object, with radial and azimuthal directions and an ignorable axial direction, along which everything stays the same. When you bend a cylinder into a torus, it turns into a 3D object, and a computer has to be much larger to handle that. For many years, theory could explain experimental data after the fact, but it could not predict the plasma behavior. When computers capable of 2D calculations came along, the nonlinear behavior of plasmas could be studied. Computers are now fast enough to do 3D calculations in a tokamak, greatly expanding theorists’ predictive capability. Here is an example of a 3D computation (Fig. 8.12). The lines follow the electric field of an unstable perturbation called an ion-temperature-gradient mode. These lines pretty much follow the magnetic field lines. On the two cross sections, however, you can see how these lines move in time. The intersections trace small eddies, unlike those in the previous illustration. It is this capability to predict how the plasma will move under complex forces in a complicated geometry that gives confidence that the days of conjectural design of magnetic bottles are over.

Fig. 8.12

A 3D computer simulation of turbulence in a D-shaped tokamak (courtesy of W.W. Lee, Princeton Plasma Physics Laboratory)

The science of computer simulation has matured so that it has its own philosophy and terminology, as explained by Martin Greenwald [15]. In the days of Aristotle, physical models were based on indisputable axioms, using pure logic with no input from human senses. In modern times, models are based on empiricism and must agree with observations. However, both the models and the observations are inexact. Measurements always have errors, and models can keep only the essential elements. This is particularly true for plasmas, where one cannot keep track of every single particle. The problem is to know what elements are essential and which are not. Computing introduces an important intermediate step between theory (models) and experiment. Computers can only give exact solutions to inexact equations or approximate solutions to more exact (and complicated) equations. Computer models (codes) have to be introduced. For instance, a plasma can be represented as particles moving in a space divided into cells, or as a continuous fluid with no individual particles. Benchmarking is checking agreement between different codes to solve the same problem. Verification is checking that the computed results agree with the physical model; that is, that the code solves the equations correctly. Validation is checking that the results agree with experiment; that is, that the equations are the right ones to solve. Plasma physics is more complicated than, say, accelerator physics, where only a few particles have to be treated at a time. Because even the models (equations) describing a plasma cannot be exact, the development of fusion could not proceed until the science of computer simulation had been developed.

Unfinished Physics

Edge-Localized Modes

In fusion, ELMs are not trees but edge-localized modes. The name itself suggests that they are not understood, not unlike the term assigned to irritable bowel syndrome. The name has even spawned an adjective, ELMy, and a participle, ELMing, which should give philologists conniptions. ELMs occur at the pedestal in H-mode plasmas (Chap. 7). Recall that in this high-confinement mode, a transport barrier, shown earlier in Fig. 7.25, is formed at the edge of the plasma. This thin layer holds back the plasma because its strong electric-field shear quenches all instabilities. But it can’t do that forever. If the plasma escaped at the classical diffusion rate due to collisions alone, the plasma pressure in the interior would rise so high that the barrier would break down. This breakdown occurs in short bursts, called ELMs, so that there is a steady release of plasma to the outside. Actually, this is a good thing because the “ash” of the DT reaction has to be taken out. This ash is the cleanest ash ever – pure helium – but it has to be removed because otherwise the expensive magnetic field would be used up in confining the ash rather than the fuel.

The H-mode occurs only when the heating power exceeds a certain threshold value. ELMs occur when the power is just above this threshold and are really localized near the plasma edge. Recall that the “edge” of the plasma is defined by the divertor, like the one at the bottom of Fig. 8.13. The plasma edge is defined by the last closed magnetic surface, the one at the X made by the field lines just above the divertor. Plasma venturing beyond that is led into the divertor, where it strikes high-temperature materials with heroic cooling to dissipate the heat. Also shown in the figure is the layer where the H-mode barrier exists and, inside that, the core plasma. The problem with ELMs is that the heat comes in short bursts – less than 1 ms – occurring a few times a second, and divertors cannot handle a heat flow that is not steady. A single ELM, while it lasts, can carry 20 GW of power, a flow of power comparable to the output of the Three Gorges Dam in China [17]. There are thus three tasks: measuring what ELMs do, explaining what causes them, and devising a way to suppress them.

Fig. 8.13

Cross-section of a tokamak with a single-null divertor, showing the scrape-off layer [16]

It’s hard to measure what goes on inside the thin barrier layer during the unpredictable time when a burst occurs, but there is a large data base on the different types of ELMs and the conditions before and after they occur [18]. Three types of ELMs have been observed. As the heating power is increased past the H-mode threshold, Type 3 ELMs occur first. These occur rapidly, each with a small energy release, and are preceded by a detectable magnetic precursor signal. As the power is raised, the ELM frequency decreases until there are no ELMs at all. Then Type 2 ELMs, called “grassy” ELMs, occur; they are very small, rapid bursts whose time traces resemble grass. Further increase in power produces Type 1 ELMs. These occur in most H-mode tokamaks and release energy in rather regular bursts. Each pulse occurs when the density and temperature at the top of the pedestal reach critical values, and these drop when an ELM occurs. Density and temperature then recover slowly until the next burst is triggered. Although ELM-free discharges can be produced, the temperature and density at the top of the pedestal are then rather low, and these control the quality of the fusion plasma in the main volume. It is found that the best fusion conditions can be produced by ELMy H-mode plasmas, in which the plasma is allowed to escape in regular Type 1 ELMs.

Many theorists [19] have worked on the ELM problem, and the consensus is that ELMs are a magnetic instability called a “peeling–ballooning” instability. Computations can predict the temperature and density values in the pedestal that can trigger an ELM, but they are far from explaining all the features that have been observed. And, as usual, there is no guarantee that another theory can’t also explain the ELM threshold. There is, however, good news. The DIII-D team at General Atomics has figured out a way to suppress ELMs without degrading the quality of the core plasma [20]. They apply “resonant magnetic perturbations” with an array of small coils just outside the plasma edge. These produce small magnetic islands in the edge region which work some kind of magic. Experimental results are promising enough that such coils are being considered and designed to be added to ITER.5

Fishbones

The colorful language of plasma physics cannot compete with the charmed and colored quarks of high-energy theory, but we have so far had bananas, sawteeth, and ELMs. We now have fishbones. The name comes from their oscilloscope traces, not from a hunger for better funding. Fishbones were first seen in the PDX tokamak at Princeton during neutral-beam injection [21]. Recall that the most powerful way to heat a plasma is to inject beams of high-energy deuterium atoms. Since the atoms are not charged, they can penetrate the magnetic field and get inside the plasma. Once there, they are rapidly ionized by the electrons and become a beam of deuterium ions of 50-keV energy. Oscillations in the plasma could be seen with several different diagnostics, and they look like those in Fig. 8.14. Fishbones often occur on the q = 1 surface where the sawtooth oscillations (Chap. 7) occur, and sometimes they can excite the sawteeth and appear simultaneously with them. The bad news is that fishbones cause injected ions to be lost before they have transferred their energy to the plasma. As much as 20–40% of the energy can be lost, greatly reducing the efficiency of this primary heating method.

Fig. 8.14

(a) Fishbone oscillations on a sawtooth. (b) An expanded view reveals the origin of the name [21]

Beams are notorious for exciting plasma instabilities. As usual, the plasma finds a way to come to thermal equilibrium rapidly by generating an instability. Theorists had no problem in finding a suitable instability for this. Initially, there were two somewhat different theories [22, 23], each having to do with an internal kink mode. In Chap. 6, we described the kink instability that occurs to the whole plasma when too large a current is driven through it. A localized current can also drive a kink inside a plasma, and this is what happens in the sawtooth region in the presence of a current of fast injected deuterium ions.

The theories could predict the frequency of the oscillations and the conditions when they would occur. Computations of the nonlinear behavior gave traces very much like the experimental ones in Fig. 8.14b. Subsequent work has cleaned up many of the details of the fishbone instability.

The fact that fast ions can be lost via instability is worrisome not only because of the loss of heating power, but even more so because of the fast helium ions (the “ash”) that are generated in fusion. The helium ions have to remain in the plasma long enough to give up their energy to keep the plasma “burning.” Fortunately, the theorists can tell us not to worry. Roscoe White et al. [24] have found that there is a regime in a fusion-quality plasma in which neither sawteeth nor fishbones will occur, and this parameter regime is actually larger at higher temperatures and with more fast particles. This has yet to be tested, but there is another mitigating factor. In the next generation of tokamaks, starting with ITER, the plasma will be much larger than the widths of the banana orbits. Since the fast ions are lost with a step size of the order of the banana width, it will take many steps for them to reach the wall. Though not finished, the physics of fishbone instabilities is far enough advanced to tell us that this is not a big problem.

Disruptions

No picturesque name here, because this is a really serious problem. Tokamak discharges are known to disrupt themselves, suddenly stopping and dumping all the energy put into them onto the containment chamber. Unless we can stop disruptions from occurring, the entire structure of the tokamak, especially the divertors, would have to be beefed up to absorb all that energy. This is not the kind of accident that can happen in fission, because in fusion no energy is released that has not already been put in; it is just that we do not want it to come out all at once and melt or otherwise harm the tokamak structure. The problem is so serious that a large experimental data base has been accumulated on numerous tokamaks, even in the interim between the two ITER planning documents, the ITER Physics Bases of 1999 [25] and 2007 [26].

To get a DT plasma to fuse, we need to heat it to temperatures of the order of a half-billion degrees. The amount of heat in a large experiment like ITER will be about 400 MJ, the energy of 100 pounds of TNT. The poloidal magnetic field created by the tokamak current will hold another 400 MJ of energy. Fortunately, the toroidal magnetic field energy, which is much larger, is not released in a disruption unless the toroidal field coils are damaged. Normally, the plasma energy escapes slowly into the divertors, which are designed to handle that heat load; and when the plasma is turned off, the current decays slowly, and the poloidal field energy goes back into the coils that drove the current. In a disruption, all this energy sprays out in a matter of 10 milliseconds and is hard to handle. What happens to the plasma in a disruption has been caught by the M.I.T.Footnote 7 group working with the intermediate-size Alcator-C tokamak. In a typical elongated D-shaped tokamak, the plasma has to be kept from drifting up or down with specially shaped coils. When an instability causes a disruption, the plasma moves vertically, as shown in Fig. 8.15, shrinking as it loses its energy and current. In this case, it moves downward toward the divertor, but it could as well move upwards. The time scale shows that the whole event took less than 4 ms.

Fig. 8.15

Vertical motion of the plasma in a disruption [27]

The damage caused by a disruption can be divided into three parts: thermal quench, current quench, and runaway electrons. In thermal quench, the plasma’s heat is deposited in the walls, vaporizing them in spots. This influx of impure gas raises the resistivity of the plasma, and the tokamak current decays. Even if most of the plasma outflow is channeled into the divertor, there is no time for the heat to be conducted away, and the refractory materials in the divertor – tungsten and carbon – will be vaporized also. In current quench, the fast decrease of the toroidal current will drive a counter-current, by transformer action, in the conducting parts of the confining vessel. Since this counter-current is located inside the strong DC toroidal magnetic field, it will exert a tremendous force on the vessel, moving or deforming it unless it is made sturdy enough. As the plasma shrinks toward the divertor, it will drive a “halo current,” shown by the dark arrows in Fig. 8.15, flowing through the conducting parts of that structure. The halo current can be as much as 25% of the original tokamak current; and since that current was flowing along helical field lines, the halo current will try to find a helical path through the conducting parts around the divertor.

The third deleterious effect of disruptions is the generation of “runaway” electrons. In Chap. 5, we showed that a hot plasma is almost a superconductor because fast electrons do not make many collisions. The faster the electron, the farther it will go before it collides with an ion. This distance is its free path. If there is a large electric field pushing the electron, its free path can lengthen faster than the electron can cover it, and it never makes a collision! It is a runaway and can get up to MeVs of energy before it loses confinement. Of course, this depends on the number of scattering centers; namely, on the plasma density. Normally, runaway electrons occur during the startup of the plasma. If the electric field is turned up too high before the density is high, runaways can occur. Machine operators know how to prevent this. In a disruption, however, there is no control. If the density falls below a critical value while a strong toroidal electric field is still on, a horde of runaway electrons will be created, carrying 50–70% of the original tokamak current. When these hit the wall, they will certainly cause damage. In ITER, the tokamak current will be 15,000,000 A. By comparison, household circuits carry only 15–20 A.

The obvious questions are then: What causes disruptions? How often do they occur? Can they be eliminated? It turns out that disruptions mostly occur when we try to push the envelope. There are known limits to the plasmas that a tokamak can confine. There is a density limit, called the Greenwald density, which we will describe shortly. There is a pressure limit called the Troyon limit. And there has to be enough shear stabilization, as specified by the quality factor q, which has to be above 2 at the edge. When the plasma is pushed too close to one of these limits, a disruption is likely to occur. Exactly how it occurs is not entirely clear. Sometimes two island chains with different numbers of islands can lock onto each other and merge. If there is a detected precursor, this locking can be avoided by setting the plasma into rotation. Sometimes this change in magnetic geometry brings a bubble of cold gas in from the periphery, disrupting the whole plasma. When the density or pressure limits are approached, known instabilities can occur. These are the ideal MHD instability, called the Rayleigh–Taylor instability in Chap. 5, and the neoclassical tearing mode, which is triggered by finite resistivity, as described in Chap. 6. Here, “ideal” means that no resistivity has to be considered for the instability to occur, and “neoclassical” means that banana orbits are considered in the calculation. Figure 8.16 shows a computer simulation of how an instability can bring cold plasma in from the edge, thus cooling the core.

Fig. 8.16

Computer simulation of a disruption [26]

Up to now, tokamak discharges have been pulsed and not run continuously as in an eventual reactor. An average over all tokamaks shows that 13% of these pulses have suffered a disruption. This would be an unacceptable rate, but these are experiments meant to probe the stability of a plasma. In long pulses, lasting many seconds in the large tokamaks such as TFTR and JET, the disruption rate is less than 1% because the machine is run conservatively. In the experimental stage, much depends on the experience of the machine operator. He learns the settings on various controls that will produce a stable discharge. For instance, the currents in the various magnetic coils have to be turned on at the right time and increased at the right rate, and the heating power from various sources has to come on at the right time. Operator experience is valuable in the use of almost any machine; snow plows, cranes, and ordinary cars, for instance. Even in the use of a toaster, one sets the darkness level intuitively depending on the dryness of the bread. Nonetheless, in a reactor even one disruption would be disastrous, and methods must be found to eliminate them.

This task is being tackled on three fronts: avoidance, prediction, and amelioration. As already shown in experiment, disruptions can be avoided if the plasma parameters are not pushed close to the instability limits. As shown in Fig. 8.17, these limits have been extensively tested, and the occurrence of disruptions from this cause is predictable. The quantity β N is a measure of the plasma pressure, and stable discharges are all below the theoretical limit, with disruptions occurring when the limit is exceeded. Prediction of imminent disruption can be obtained from many sensors, for instance of magnetic precursor signals; and neural networks have been successfully used to integrate these signals to give a definite warning of an oncoming disruption. After many trials, these networks can be trained to suppress false positives. To stop a disruption from occurring, automatic controls can change such parameters as the plasma density, the toroidal current, or the plasma elongation; but this response may be too slow. A faster method would be to drive electron current with electron cyclotron waves in order to change the current profile, and thus the q profile, to a more stable shape. Once an unavoidable disruption starts, there are still ways to ameliorate the damage. For instance, a massive injection of a gas such as neon or argon can reduce the halo currents by 50% and the electromagnetic forces by 75% [26]. Raising the plasma density by about two orders of magnitude this way would also suppress runaway electrons. As tokamaks get larger, the damage from disruptions can be expected to get worse, because the energy released varies as the cube of the radius (i.e., the volume), whereas the energy has to be absorbed by the surface area, which varies only as the square of the radius. On the other hand, the disruptions will evolve more slowly, giving more time to control them.

Fig. 8.17

Data from the TFTR tokamak showing the accuracy of theoretical prediction of instability and disruption [25]

For tokamaks, the problem of disruptions is receiving a great deal of attention because of its importance. However, tokamaks may not be the machines ultimately chosen for fusion reactors. Stellarators, which do not need large currents, do not suffer from disruptions. The reason that tokamaks are now prevalent is that they gave the best initial results, and there has not been enough money to study other toruses to the same extent. The next generation of tokamaks – the ITER – will allow us to study a burning plasma, one in which the helium products can be used to keep the plasma hot. After that, we still have a choice; we are not stuck with the tokamak if disruptions continue to be a problem.

The Tokamak’s Limits

The Greenwald Limit

Ever since the early days of tokamak research, it has been noticed that the plasma density could never be raised above a certain limit. Sometimes this limit was blamed on a loss of confinement via an unspecified instability, sometimes on excessive energy loss by radiation, and sometimes the plasma suffered a disruption. In 1988, Greenwald et al. [28] put together the data from different machines to see what the density limit depended on. They came up with a surprisingly simple answer: roughly speaking, the density limit depended only on the tokamak current per unit area! For those who would rather have a formula, the one for the Greenwald density nG is given in a footnote.Footnote 8 This limit has been found to be obeyed in all tokamaks regardless of what mechanism causes the problem at high densities. No one has yet found a theory that explains this; the Greenwald limit is purely empirical. Figure 8.18 shows how well the Greenwald limit is obeyed in two large tokamaks. In almost all shots, the measured density cannot be raised above the straight line, which is the Greenwald limit. This unexplained law is so universal that it is used in the design of future machines. The design would be to achieve, say, 85% of nG, or 95%, depending on how adventurous one wants to be.
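For readers who would rather not hunt for the footnote, the limit is usually quoted in the literature in the following form (the footnote’s version may differ in notation):

$$ {n}_{\text{G}}\ \left[{10}^{20}\ {\text{m}}^{-3}\right]=\frac{{I}_{\text{p}}\ \left[\text{MA}\right]}{\pi {a}^{2}\ \left[{\text{m}}^{2}\right]},$$

where Ip is the toroidal plasma current and a is the minor radius of the plasma; in other words, the limiting density is just the current per unit cross-sectional area, as stated above.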

Fig. 8.18

Measured density limit nDL vs. the density nG calculated from the Greenwald formula (modified from a figure in ITER Physics Basis 2007, Chap. 2)

The Troyon Limit

This is a limit on the plasma pressure that a tokamak’s magnetic field can hold. Unlike the Greenwald limit, this criterion is rigorously calculated from ideal MHD (MagnetoHydroDynamics) theory. The quantity that measures the balance between the pressure and magnetic forces is called β (beta). Since β is used in many scientific disciplines, especially in medicine, I had refrained from defining it until it was necessary. It is now necessary. Beta is the ratio between plasma pressure and magnetic pressure:

$$ \beta =\frac{\text{Plasma pressure}}{\text{Magnetic pressure}}.$$

The plasma’s pressure is the product of its density and its temperature, and the magnetic pressure is proportional to the square of the field strength B. These quantities are not constant over a cross section of the plasma, so a reasonable definition would be to take the average plasma pressure and divide it by the pressure of the magnetic field as it is before the plasma is created. The last proviso is needed because the plasma is diamagnetic, so its very presence decreases the B-field inside it. Since the B-field is the most expensive component, β is a measure of the cost effectiveness of a tokamak. It has a value below 10%, typically 4–5%.

The value of β has been shown to depend on the toroidal current I divided by the plasma radius a and the magnetic field strength B. Figure 8.19 shows how data from different tokamaks all fall on the same line if plotted against I/aB. It is convenient, then, to introduce a normalized β, called βN, which would apply to all tokamaks, regardless of their values of I, a, and B:

$$ {\beta }_{\text{N}}\equiv \frac{\beta \times a\times B}{I}.$$

Fig. 8.19

Dependence of β on I/aB in various tokamaks [25]

The Troyon limit (Troyon et al. [30])Footnote 9 is when βN is about 3.5. A numerical formula is given in Footnote 10. Figure 8.17 shows how well the experiments in different tokamaks obey the Troyon limit, above which disruptions are likely to occur.
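In the units customarily used (plasma current I in megaamperes, minor radius a in meters, field B in teslas, and β in percent), the limit just described takes the standard form quoted in the literature,

$$ {\beta }_{\max }\ \left(\%\right)\approx {\beta }_{\text{N}}\,\frac{I\ \left[\text{MA}\right]}{a\ \left[\text{m}\right]\,B\ \left[\text{T}\right]},\qquad {\beta }_{\text{N}}\approx 3.5,$$

with Footnote 10 remaining the authority for the exact coefficient. For ITER-like numbers (taking I = 15 MA, a ≈ 2 m, and B ≈ 5.3 T), this allows a β of roughly 5%, in line with the typical values mentioned above.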

Big Q and Little q

As we now turn our attention from fusion physics to fusion energy, we have to introduce Big Q, as distinct from little q. Little q, as you remember, is the “quality” factor in toruses like tokamaks and stellarators. It is the reciprocal of the rotational transform, which is the number of times a helical field line encircles the minor axis each time it goes around the whole torus. The variation of q with radius r, or q(r), is perhaps the most important feature in the design of toroidal magnetic bottles. Big Q, on the other hand, has to do with how much energy a fusion reactor will produce. It is the ratio of the fusion energy produced to the energy required to make the plasma:

$$ Q=\frac{\text{Fusion energy}}{\text{Input energy}}.$$

In Chap. 3, we showed this equation for the DT reaction:

$$ \text{D}+\text{T}\to \alpha +n+17.6\ \text{MeV},$$

where α is an alpha particle (a helium nucleus) and n is a neutron. Most of the 17.6 MeV of energy released is carried by a 14.1 MeV neutron, and the other 3.5 MeV is carried by the alpha particle.Footnote 11 The neutron energy is the part used to produce the electrical output of the power plant, and the alpha energy is used to keep the plasma hot. Since the α’s are charged, they are confined by the magnetic field, and the hope is to hold them long enough that they can transfer their energies to the DT plasma, keeping it at a steady temperature. But since the α’s have only one-fifth of the fusion energy, Q has to be at least 5 for this to happen. This is called ignition. The plasma is “burning” by itself. The reaction cannot run away as in fission because some instability will quench the plasma as soon as the operational limits are exceeded.
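The arithmetic behind “at least 5” is worth a line. The alphas carry 3.5 MeV of the 17.6 MeV, almost exactly one fifth, so the alpha heating power is about one fifth of the total fusion power:

$$ {P}_{\alpha }\approx \frac{{P}_{\text{fusion}}}{5}=\frac{Q\,{P}_{\text{input}}}{5}.$$

Only when Q reaches about 5 does the alpha heating equal the external heating power; beyond that point the plasma supplies most of its own heat.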

The first milestone is to achieve Q = 1, which is called scientific breakeven; this counts the whole 17.6 MeV per reaction as equal to the input energy. The next milestone is to get to ignition at Q = 5. To produce net energy, you have to count also the energy needed to make the magnetic fields and the plasma currents, as well as all the electricity needed to run the power plant (even the lights!) and the energy used to transmit the power to where it is used. This means that Q has to be at least 10. Figure 8.20 is a Lawson diagram (Chap. 5) plotting nτE vs. Ti and showing what different tokamaks have achieved in DD and DT plasmas. The heavy curve is for Q = 1 in DT, and we see that this has been reached in JET. The yellow region is ignition at Q greater than 5. The diagonal dashed lines are for constant values of the triple product. The obvious next significant step is to get to ignition, and that is the story of ITER.

Fig. 8.20

Lawson diagram showing progress toward breakeven and ignition [31]

The Confinement Scaling Law

The triple product plotted in Fig. 8.20 contains the energy confinement time τE, which is how long each amount of energy used to heat the plasma stays in there before it has to be renewed. The plasma energy is lost through three main channels: radiation, mostly in the form of X-rays; the escape of ions to the wall; and the escape of electrons to the wall, each carrying their heat with them. The first two of these, radiation and ion loss, follow theory and can be predicted, but electrons escape faster than can be explained. The energy loss by electrons can be measured, but it cannot be predicted. It would be impossible to design a new machine accurately without knowing what τE would be, but fortunately the over 200 tokamaks that have been built were found to follow an empirical scaling law. This formulaFootnote 12 gives the value of τE in terms of the size and shape of the tokamak, the magnetic field, the plasma current, and other such factors. The result is shown in Fig. 8.21.

Fig. 8.21

Data from 13 tokamaks showing that the energy confinement time as measured follows an empirical scaling law (Footnote 12)

This empirical scaling law is the basis on which new tokamaks are designed. It cannot be derived theoretically, but it is followed in a massive database from a variety of tokamaks. This “law” is given in mathematical form in Footnote 12. Most of the dependences are consistent with our understanding of the physics. For instance, τE increases with the square of the machine size. The strength of the toroidal field does not matter much because the size of the banana orbits depends on the poloidal field. The poloidal field indeed enters in the linear dependence on plasma current. The wonder is that only eight parameters are needed to make all tokamaks fall into line. As seen in Fig. 8.21, the data cover over a factor of 100 in τE. To design ITER, the scaling had to be extrapolated by another factor of 4.
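For the record, the most widely used version of this scaling for H-mode plasmas is the so-called IPB98(y,2) fit, presumably close to, though not necessarily identical with, the form in Footnote 12. It is approximately

$$ {\tau }_{\text{E}}\approx 0.056\,{I}^{0.93}{B}^{0.15}{P}^{-0.69}{\overline{n}}^{0.41}{M}^{0.19}{R}^{1.97}{\epsilon }^{0.58}{\kappa }^{0.78},$$

with the plasma current I in MA, the toroidal field B in T, the heating power P in MW, the line-averaged density n̄ in units of 10¹⁹ m⁻³, the major radius R in meters, M the ion mass in atomic units, ε the inverse aspect ratio, and κ the elongation; τE then comes out in seconds. The eight parameters mentioned above are all here, and the exponents show the dependences just described: size enters roughly as R², the plasma current roughly linearly, and the toroidal field hardly at all.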

ITER: Seven Nations Forge Ahead

The light at the end of the tunnel may be located at the spot marked A in southern France on the map of Fig. 8.22. It is here, at a place called Cadarache near Aix-en-Provence, that ITER is being built. Magnetic confinement of plasma gets better with size, and it has long been clear that a much larger machine has to be built to achieve ignition, a machine so large that no single country can bear the whole cost. Thus was born the International Thermonuclear Experimental Reactor, now known only by its initials, ITER. Coincidentally, iter in Latin means a path, a journey. It may indeed be the best way to get there.

Fig. 8.22
figure 22figure 22

Map of France, showing the location of Cadarache

The reason for the large size is that the amount of power generated is proportional to the volume of the plasma, which increases with the cube of its radius, while the losses are proportional to the surface area of the plasma, which increases only as the square of its radius. To take the next step beyond the four machines shown above therefore requires a much larger machine, one so large that its cost has to be shared among many countries. The idea of an international project to achieve fusion energy was born at the 1985 Geneva Superpower Summit, where Soviet leader Mikhail Gorbachev and President Ronald Reagan of the USA, with advice from President François Mitterrand of France, agreed to initiate a project involving the USSR, the USA, the European Union, and Japan. (It probably helped that Gorbachev’s advisers were Evgeny Velikhov and Roald Sagdeev, both plasma physicists.) More on what ensued will come later, but first let’s see what kind of machine ITER is.
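
In symbols: if a is the linear size of the plasma, fusion power scales roughly with volume and losses with surface area, so the ratio improves simply by building bigger,

\[
\frac{P_{\mathrm{fusion}}}{P_{\mathrm{loss}}} \;\propto\; \frac{a^{3}}{a^{2}} \;=\; a .
\]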

Figure 8.23 is a diagram of the machine being built. Its size is indicated by the small figure at the bottom, representing a standard 2-meter person. The plasma chamber has the standard D-shape, 1.7 times as high as it is wide. The width is 4 m at its widest part, and the major radius (the distance between the center of the chamber and the axis of the whole machine) is 6.2 m. The D-shaped coils that produce the main magnetic field can be seen, but all the other equipment is shown simplified; otherwise, the vacuum chamber would not be visible at all! That equipment includes all the other coils for shaping the plasma, the neutral-beam injectors for heating, the neutron-absorbing blanket, the divertors for catching the plasma, pellet injectors for fueling, and a host of measurement devices. How much bigger ITER is than the current champion, JET, is shown in Fig. 8.24. The clutter surrounding a real machine can be seen in the pictures of existing large tokamaks in Figs. 8.3–8.6.
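
As a rough consistency check on these dimensions (the formula and the numbers below are an approximation added here, not taken from the text), the volume of an elongated toroidal plasma can be estimated from V ≈ 2π²Rκa², with minor radius a, elongation κ, and major radius R:

    import math

    # Rough volume of an elongated torus: V ~ 2 * pi^2 * R * kappa * a^2.
    # Dimensions quoted in the text; the formula is only an approximation.
    R = 6.2        # major radius in meters
    a = 4.0 / 2    # minor radius: half the 4-m width
    kappa = 1.7    # elongation: height / width of the D-shaped cross section

    volume = 2 * math.pi**2 * R * kappa * a**2
    print(f"approximate plasma volume: {volume:.0f} cubic meters")  # about 830 m^3

This is in the same neighborhood as ITER’s often-quoted plasma volume of roughly 840 m³.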

Fig. 8.23
figure 23figure 23

Diagram of ITER (http://www.iter.org)

Fig. 8.24
figure 24figure 24

Comparison of ITER with JET (http://www.iter.org)

What is ITER designed to do? The primary goal is to produce, for the first time, a “burning” plasma: one that will keep itself hot once it has been heated to several hundred million degrees. Remember that 80% of the fusion energy from DT fuel is in the form of neutrons, and only 20% is in alpha particles (helium ions), which can give energy to the plasma because they are magnetically confined. Therefore, a Q value of at least 5 is needed for burning, or ignition. To give a safety margin, ITER is designed to produce a Q of 10, where Q is the ratio of the energy coming out of the plasma to the energy put into the plasma from external sources. Q = 1 is scientific breakeven (energy in equals total energy out), but most of that energy is in the form of neutrons, which produce the power plant’s output but cannot heat the plasma. The best that JET could do was Q = 0.65, below scientific breakeven. The large step from Q = 0.65 to Q = 10 is the reason that ITER has to be so big. The step is not trivial from a physics point of view, either. The 3.5-MeV alphas may cause an instability that drives them out of the plasma. Although the stability conditions have been calculated, they have never been tested. The experiment will be considered a success if enough self-heating occurs for these conditions to be established, even if Q = 10 is not achieved. The self-heating mechanism that powers the sun has never been seen on earth outside of a bomb, and plasma experts are eagerly anticipating this critical test. The term “ignition” may evoke fear that the reaction will run away and cause an explosion. This cannot happen in a fusion reactor because, if the density or temperature gets too high, the plasma will disrupt and fizzle out. This may cause melting of parts of the tokamak, but it would be no worse than leaving a pot on a stove after the water has boiled out. The “pot” here would be an expensive one, though!
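
One way to see why Q = 10 leaves a comfortable margin: if the alphas carry 20% of the fusion power, the share of the total plasma heating that the alphas themselves supply grows with Q. A small illustrative calculation, added here and not part of the original text:

    # Fraction of the total plasma heating supplied by the alpha particles.
    # P_fusion = Q * P_ext; the alphas carry 20% of P_fusion, the neutrons the rest.
    def alpha_heating_fraction(Q, alpha_share=0.2):
        p_alpha = alpha_share * Q          # alpha heating per unit of external power
        return p_alpha / (p_alpha + 1.0)   # alphas / (alphas + external heating)

    for Q in (0.65, 1, 5, 10):
        print(f"Q = {Q:>4}: alphas supply {alpha_heating_fraction(Q):.0%} of the heating")
    # At Q = 5 the alphas match the external heating; at Q = 10 they supply about two-thirds.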

There are other objectives for ITER besides achieving Q = 10. It will produce 500 MW of power, about one-sixth that of a full-size reactor. Many key large components of a fusion reactor have to be designed, manufactured, and tested in operation. These include superconducting magnet coils, wall materials and divertors that can withstand the heat and neutron bombardment, tritium handling, and remote control and maintenance once the walls become radioactive and can no longer be approached by personnel. Instability control has to keep the plasma confined steadily for as long as 8 min, using a large amount of bootstrap current, while the machine generates 500 MW of power. There will be a first test of a neutron-absorbing “blanket” that can breed tritium. Tritium hardly occurs in nature. Most of the time, ITER will use tritium from fission reactors, of which it is a byproduct; but in a fusion power plant the tritium has to be made internally. This is done in a blanket that captures the 14-MeV neutrons from the reaction, slows them down, and generates heat to run a steam plant. A part of this blanket can be used to breed tritium from lithium, an element abundant on earth.
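
The breeding reactions behind that last sentence are the standard lithium reactions; they are well established, though not spelled out in the text, and the energy values quoted here are approximate:

\[
{}^{6}\mathrm{Li} + n \;\rightarrow\; {}^{4}\mathrm{He} + \mathrm{T} + 4.8\ \mathrm{MeV},
\qquad
{}^{7}\mathrm{Li} + n \;\rightarrow\; {}^{4}\mathrm{He} + \mathrm{T} + n - 2.5\ \mathrm{MeV}.
\]

The first works with slow neutrons and releases energy; the second requires fast neutrons but regenerates a neutron, which helps the breeding arithmetic.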

ITER is the logical next step toward fusion power, but it is still primarily a physics experiment. It will lead to DEMO, a demonstration power plant that will run without breakdown and produce a usable amount of power. However, many believe that an intermediate step between ITER and DEMO is necessary to develop engineering concepts that will work in a real reactor. Some of the difficult problems are, for instance, (1) the material to be used in the plasma-facing components (the “first wall”), (2) the handling and breeding of tritium, (3) continuous operation for long periods, (4) maintenance procedures, and (5) plasma exhaust and waste treatment. ITER can provide only a first try at such topics. Engineering will be the topic of the next chapter; this is only an introduction. As an example, the first-wall material has to take the heat of facing a 100,000,000-degree plasma, and it has to allow a large flux of neutrons to pass through without suffering so much damage that it has to be replaced often. It also cannot contaminate the plasma with impurities of high atomic number, which would cool the plasma. Tests of suitable materials can be done without a tokamak; a fission source of neutrons would do. In fact, most of these engineering tests can be done on a much smaller, cheaper machine than ITER, and such a machine can be built and operated simultaneously with ITER to save time. Most large laboratories have proposed such a machine. For instance, the Fusion Development Facility proposed by General Atomics is a tokamak using normal-conducting coils and producing only 100–250 MW of power at Q less than 5, but it is designed to run continuously for weeks at a time, over 30% of a year, and to breed up to 1.3 kg of tritium per year. Such machines and DEMO are still in the talking stage, but the ITER project is up and running.

As can be imagined, a cooperative project among seven nations is an administrative nightmare. It took over 20 years to get to the present stage. After the initial Gorbachev–Reagan agreement, the four partner nations managed to agree to start Conceptual Design Activities in 1988, and that design was finished in 1990. The resulting tokamak was much larger than the present design. In 1992, an agreement was made to start more serious Engineering Design Activities. Each country had its own home team, and a Joint Central Team was stationed in La Jolla, California. The director of ITER during this phase was at first Paul-Henri Rebut and later Robert Aymar, both of France. After six years of work, it was decided that the tokamak was too large and too expensive, and the activity was extended to 2001. The final design, finished in 2001, is half the price but achieves almost the same objectives. The physics basis for ITER, which we discussed in Chap. 7, was worked out in this period and contributed to the efficiency of the new design. Some $650M was spent to design ITER, under the original agreement that the European Union and Japan would each bear one-third of the cost, while the USSR and the USA shared the other third. To everyone’s chagrin, the USA withdrew from the project in 1999, not to return until 2003. The project continued without funding from the US Congress.

Meanwhile, in 1991, the USSR collapsed and was replaced by the Russian Federation. In 2003, the People’s Republic of China and South Korea joined ITER. India joined in 2005, raising the number of partners to seven. Canada was temporarily involved but dropped out when its proposed site was turned down. Kazakhstan, with an area larger than that of Western Europe, has been considering joining despite its large fossil-fuel reserves. The seven nations currently supporting ITER are shown in Fig. 8.25; together they represent more than half the world’s population. Lacking strong public support at home, the USA has been a lukewarm partner in this path-breaking enterprise and again failed to contribute its financial share in 2008.

Fig. 8.25
figure 25figure 25

The seven nations in the ITER organization

By 2003, ITER’s design had been agreed upon, and the project was ready to move ahead. The cost was estimated at five billion euros (about $7B) for ten years of construction and another 5B euros for 20 years of operation.Footnote 13 Then came a totally unexpected delay: a deadlock over the site for ITER. The site had to have sufficient power and accessibility for such a large machine. The finalists were a site in Japan and a site in Europe, at first in Spain but finally in France. The EU, China, and Russia voted for France; Japan, Korea, and the USA voted for Japan. India had not yet joined. The impasse lasted for two years. Finally, in 2005, the deadlock was broken, and France was chosen. As compensation, Japan was to supply 20% of the staff and had the right to choose the Director; furthermore, the EU was required to purchase 20% of its ITER material from Japan. As host, the EU has to bear 5/11ths of the cost of ITER, and the other six partners 1/11th each. Kaname Ikeda was chosen to be Director. The EU’s 45% contribution will, of course, stimulate its own economy.

Once a Joint Implementation Agreement was signed in November 2006 by the seven parties, the ITER Organization sprang into action. Hundreds of scientists, engineers, and administrators began to migrate to Cadarache, settling into temporary offices. Bulldozers began to move two million cubic meters of soil to prepare the flat site for ITER, shown in Fig. 8.26. This amount of dirt would fill the Cheops pyramid, and the area is that of 57 soccer fields. The roads had to be widened to accommodate the nine-meter-wide truck convoys that will carry the major components of the tokamak. Even traffic circles (roundabouts), like the one at the upper left of Fig. 8.26, had to be enlarged. Parts manufactured outside Europe will be shipped to the Mediterranean port of Fos-sur-Mer and then barged and trucked to Cadarache. A three-story office building was built in 2008 to house 300 employees, but this was still temporary and off-site. To accommodate their families, a multilingual school was established in Manosque; by 2009 it had 212 students from 21 nations and 80 teachers. In 2010, the school will have its own building and include a nursery school and a junior high. The first ITER baby was delivered in 2008. A weekly bulletinFootnote 14 covers not only technical and personnel news but also cultural events, and it introduces the international community to the history and traditions of this region of southern France.

Fig. 8.26
figure 26figure 26

Preparation of the ITER site in 2008

ITER is truly an international project. For instance, the vacuum vessel will be made by Europe and Korea, with other parts from Russia and India. The largest components, the magnet coils, will weigh 8,700 tons and will be made of Nb3Sn and NbTi superconductors. Many different types of magnet coils and their feed-ins are required, and the manufacture of the superconducting material and its formation into coils are shared among most of the ITER partners. The USA will supply 40 tons of expensive Nb3Sn conductor for the toroidal field, and the conductors for the poloidal field will be shared among China, Russia, and Europe. Superconducting wire is very complicated, wound in many strands and cooled with liquid helium. That such wire actually works in large coils has been demonstrated in the LHD stellarator in Japan and will be further tested in the new superconducting tokamaks in China, Korea, and Japan.

Domestic Agencies have been established in each country to organize the manufacture of its in-kind contributions to ITER by local industries. Through these agencies, Procurement Agreements have to be drawn up and signed by each member country; as of 2010, 28 PAs have been signed. The site shown in Fig. 8.26 has been completely leveled, and the construction of 38 buildings on it has begun. The first of these is a six-story, 253-m-long building for winding the poloidal-field coils, which are too large to be shipped because each must be wound from a single continuous length of superconducting cable. New office buildings will replace the temporary ones. Off-site in Manosque, a new school will be built for the community.

It is clear that the ITER project is in for the long haul. Figure 8.27 shows the originally agreed schedule for the construction and operation of ITER. The site preparation will not be finished until 2012, but meanwhile the components are being designed, fabricated, and tested in various countries. It will take four years to get all the parts delivered and the tokamak assembled. The first plasma is scheduled to be made near the end of 2016. At first, experiments will be done with hydrogen, which is not radioactive. Remote handling will then be implemented so that deuterium can be used; the D–D reaction creates some neutrons, but not as many as DT does. In 2020, operation with DT will start, first in pulsed (low-duty) operation, to achieve the designed Q value. In the later stages, emphasis will be on quasi-steady-state (high-duty) operation to test whether bootstrap current and noninductive (no transformer) drive can sustain the plasma. At the end of 2026, a decision will be made on whether to decommission the machine or to continue it with modifications. Deactivating, decommissioning, and disposing of the machine is expected to take another 11 years. The ITER machine will have 30,000 components in ten million pieces. Getting these delivered on time and fitting together requires numerous groups and oversight committees. Their acronyms are overwhelming, but that’s the price you pay for organizational efficiency.

Fig. 8.27
figure 27figure 27

The original ITER timeline

At this time, the goal of achieving first plasma in 2016 already seems a long way off, and the worldwide economic downturn of 2008–2009 has pushed it even further away. Both the budget and the schedule had to be revised in 2010, and the project will be delayed two years or more by economic constraints. The new construction schedule will look something like Fig. 8.28. DT plasmas will not be attempted before 2027.

Fig. 8.28
figure 28figure 28

The revised ITER timeline [32]

These estimates notwithstanding, the project is proceeding nicely under new Director Osamu Motojima. The digging and flattening of the ITER site has been finished and is shown in Fig. 8.29. Parts of the machine are coming in from different countries. Figure 8.30 shows the buildings planned for the site. These will be earthquake-proof, and some will have containment for radioactivity. The long coil-winding building mentioned above can be seen at the top for scale. It is exciting to see international teamwork functioning so well.

Fig. 8.29
figure 29figure 29

The ITER site in June 2010Footnote 14

Fig. 8.30
figure 30figure 30

Planned buildings for the ITER site [32]

Contrary to popular perception, fusion is no longer in a guessing stage. The timeline for its development has been set. Each country has its own ITER organization and its own specialized manufacturing capabilities to contribute to the project. At the current level of funding, it will take until 2026 to get the information from this experiment. Concurrently, materials-testing facilities can be built and run to support DEMO. The design, construction, and operation of DEMO will take until 2050; and, if it is successful, commercial reactors can follow soon thereafter. The present plan is to achieve fusion power by 2050, in time for the present generation of children to enjoy it. With increased international ambition, however, that time could be shortened.

There may be some confusion in the public’s mind between ITER and another large experiment, the Large Hadron Collider, or LHC, at CERN near Geneva. Geneva can be seen on the map of Fig. 8.22, north of Cadarache. It is quite a coincidence that the two largest physics experiments in the world should be located only a few hundred kilometers from each other. The LHC is a particle accelerator 27 km (17 miles) in circumference, buried in a circular tunnel under France and Switzerland. It is similar to ITER in its internationality, its cost (6.3B euros), and its extensive use of superconductors; but it is entirely different in technology and purpose. The LHC is a basic physics experiment to explore the subatomic structure of matter and energy: quarks, Higgs bosons, dark matter, and so forth. Protons are accelerated to multi-TeV (trillions of eV) energies and hurled against one another to break them up, one particle at a time. ITER, on the other hand, deals with a gas of many billions of particles at keV (thousands of eV) energies. In the LHC, large magnetic fields are used to bend the protons into circular orbits, their Larmor radius being measured in kilometers. In ITER, large magnetic fields are used to hold a plasma, which exerts a large pressure not because the particles are so energetic but because there are so many of them.
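
To make the contrast in orbit sizes concrete, here is a rough estimate added for illustration; the field values used are approximate and are not taken from the text:

    import math

    # Rough orbit radii for the two machines (illustrative field values only).

    # LHC: ultra-relativistic proton, r [m] ~ p [GeV/c] / (0.3 * B [T]).
    p_GeV = 7000.0    # 7-TeV proton
    B_lhc = 8.3       # tesla, dipole bending field
    r_lhc = p_GeV / (0.3 * B_lhc)
    print(f"LHC proton bending radius: about {r_lhc / 1000:.1f} km")

    # ITER: non-relativistic deuteron at ~10 keV, r = m * v / (q * B).
    m_d = 3.34e-27    # deuteron mass, kg
    q = 1.6e-19       # elementary charge, C
    E_J = 10e3 * q    # 10 keV in joules
    B_iter = 5.3      # tesla, toroidal field
    v = math.sqrt(2 * E_J / m_d)
    r_iter = m_d * v / (q * B_iter)
    print(f"ITER deuteron Larmor radius: about {r_iter * 1000:.0f} mm")

Kilometers in one machine, millimeters in the other: the same physics of charged particles in magnetic fields, applied to utterly different ends.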

The LHC and its predecessors were inspired by man’s urge to understand his place in the universe, not by any practical need. ITER, on the other hand, is being built to develop an energy source that will save mankind and, if done soon enough, may also help solve the current problems of climate change and fossil-fuel depletion. We are living in a golden age in which civilization has advanced to such a point that we can afford to reach for lofty goals. Let us hope that our reach does not exceed our grasp.