Definition of the Subject

We live in a world that abounds in radiation of all types. Many radiations, such as neutrinos or visible light from our sun, present little risk to us. Other radiations, such as medical x-rays or gamma rays emitted by radioactive materials, have the potential to cause us harm. In this entry, only the transport of indirectly ionizing radiation is considered. These radiations consist of chargeless particles, such as neutrons or photons, that, upon interacting with matter, produce energetic secondary charged particles called directly ionizing radiation. It is these secondary charged particles that, through ionization and excitation of ambient atoms along their paths, cause radiation damage to biological tissues or other sensitive materials.

To mitigate radiation damage, a shield is often interposed between a source of ionizing radiation and the object to be protected so that the radiation levels near the object are reduced to tolerable levels. Typically, a shield is composed of matter that effectively diminishes the radiation that is transmitted. (However, there are noncorporeal shields, such as magnetic fields, that deflect moving charged particles. The earth’s magnetic field serves as such a shield to protect us from charged particles reaching earth from outer space.) The term radiation shielding usually refers to a system of shields constructed for a specific radiation protection purpose. The term also refers to the study of shields – the topic of this entry.

Introduction

The origins of shielding go back to the science of optics, in which the exponential attenuation of light was long recognized. Exponential attenuation of rays of radiation is still widely used in neutron and photon shielding. Also, the governing field equation that describes how radiation migrates through matter was introduced in 1872 by Ludwig Boltzmann, who used it to study the kinetic theory of gases. All this occurred before the discovery of ionizing radiation! The radiation transport equation is just a special case of the Boltzmann equation applied to situations in which radiation particles do not interact among themselves.

The study of shielding has many aspects: transport of (deeply penetrating) indirectly ionizing radiation in the shield, the production of very slightly penetrating secondary (directly ionizing) radiation in the shield and its surroundings, the radiation levels in the vicinity of the shield, deposition of heat in the shield, radiation penetration through holes in the shield, radiation scattered around the shield, selection of shielding materials, optimization of the shielding configuration, and the economics of shield design. It also involves understanding of related matters such as radiation source characteristics, radiation protection standards, and the fundamentals of how radiation interacts with matter.

The restriction of this entry to indirectly ionizing radiation is of a practical nature. Sources of charged particles, such as the alpha and beta particles emitted in some types of radioactive decay, can and do cause biological damage, particularly if the radioactive material is ingested. Here, however, it is assumed that the radiation sources are external to the body or the sensitive material of interest. Such external sources also usually emit far more penetrating indirectly ionizing radiation, and any shield that is effective against indirectly ionizing radiation is usually more than adequate to stop the directly ionizing radiation.

History of Shielding

To better appreciate the current state of shielding practice, it is important to understand how the discipline developed and what driving forces caused it to mature. In this section, a brief overview of the history of shielding is presented. (A greatly expanded version of the following synopsis is provided by Shultis and Faw [1].)

Early History

The hazards of x-rays were recognized within months of Roentgen’s 1895 discovery, but dose limitation by time, distance, and shielding was at the discretion of the individual researcher until about 1913. Only then were there organized efforts to create groups to establish guidelines for radiation protection. And it was not until about 1925 that instruments became available to quantify radiation exposure.

In 1925, Mutscheller [2] introduced important concepts in x-ray shielding. He expressed the erythema dose ED (an ED value of unity represents a combination of time, distance, and beam current just leading to a first-degree burn) quantitatively in terms of the beam current i (mA), exposure time t (min), and source-to-receiver distance r (m), namely, \( {{ED}} = 0.00368it/{r^2} \), independent of x-ray energy. Mutscheller also published attenuation factors in lead as a function of lead thickness and x-ray average wavelength.

Evolutionary changes to x-ray shielding were made during the decades preceding World War II. These included consideration of scattered x-rays, refinements in shielding requirements in terms of x-ray tube voltages, recommendations for use of goggles (0.25-mm Pb equivalent) and aprons (0.5-mm Pb equivalent) for fluoroscopy, and specifications for tube-enclosure shielding and structural shielding for control rooms.

The other major source of ionizing radiation before World War II was the medical and industrial use of radioactive radium discovered by Marie and Pierre Curie in 1898. Not until 1927 were lead shielding standards recommended for radium applicators, solutions, and storage containers. For example, the International X-Ray and Radium Protection Committee recommended that tubes and applicators should have at least 5 cm of lead shielding per 100 mg of radium. It was not until 1941 that a tolerance dose for radium, expressed in terms of a maximum permissible body burden of 0.1 \( \mu \)Ci, was established. This was done largely in consideration of the experiences of early “radium-dial” painters and the need for standards on safe handling of radioactive luminous compounds [3].

Manhattan Project and the Early Postwar Period

Early reactor shielding. During World War II, research on nuclear fission, construction of nuclear reactors, production of enriched uranium, generation of plutonium and its separation from fission products, and the design, construction, testing, and deployment of nuclear weapons all were accomplished at breakneck speed in the Manhattan Project. Radiation sources new in type and magnitude demanded not only protective measures such as shielding but also examination of biological effects and establishment of work rules.

The construction of nuclear reactors for research and for plutonium production required shield designs for both gamma rays and neutrons. However, with only sparse empirical data and large uncertainties about how neutrons and gamma rays migrate through shields, shield designers acted very conservatively. For example, shielding for both Fermi’s 1943 graphite pile in Chicago and the 1947 X-10 research reactor at what is now Oak Ridge National Laboratory was adequate for gamma rays and overdesigned for neutrons. Operation of the X-10 reactor, built to provide data for the design of plutonium-production reactors, revealed problems with streaming of gamma rays and neutrons around access holes in the shield. The water-cooled graphite plutonium-production reactors at Hanford, Washington used iron thermal shields and high-density limonite and magnetite concrete as biological shields.

By the 1940s, the importance of scattered gamma rays was certainly known from measurements, and use of the term buildup factor to characterize the relative importance of scattered and unscattered gamma rays had its origin during the days of the Manhattan Project. Neutron diffusion theory and Fermi age theory were established, but shielding requirements for high-energy neutrons were not well understood. Wartime radiation shielding was an empirical, rule-of-thumb craft.

Nuclear reactors for propulsion. The Atomic Energy Act of 1946 transferred control of nuclear matters from the Army to the civilian Atomic Energy Commission (AEC). That same year, working with the AEC, the US Navy began development of a nuclear powered submarine and the US Air Force, a nuclear powered aircraft. Both of these enterprises demanded minimization of space and weight of the nuclear-reactor power source. Such minimization could be accomplished only by reducing design margins, which in turn required knowledge of the mechanical, thermal, and nuclear properties of materials with greater precision than previously known.

Research reactors were constructed at various national laboratories in the USA and Britain to provide the much needed shielding data. The first such research program was begun in 1947 at Oak Ridge National Laboratory with the X-10 graphite reactor. The reactor had a 2-ft square aperture in its shielding from which a neutron beam could be extracted, the intensity being augmented by placement of fuel slugs in front of the aperture. Attenuation of neutrons could then be measured within layers of shielding materials placed against the beam aperture. Early measurements revealed the importance of capture gamma rays produced when neutrons were absorbed. Improved experimental geometry was obtained by using a converter plate containing enriched uranium instead of relying on fission neutrons from fuel slugs. A broadly uniform beam of thermal neutrons incident on the plate generated a well-defined source of fission neutrons. A water tank was adjacent to the fission source, with shielding slabs and instrumentation within the tank. This Lid Tank Shielding Facility, LTSF, was the precursor of the so-called bulk-shielding facilities incorporated into many water-cooled research reactors.

Although a nuclear powered aircraft never flew, the wealth of information gained on the thermal, mechanical, and shielding properties of many special materials is a valuable legacy. To obtain shielding data in the absence of ground reflection of radiation, several specialized facilities were constructed. A test reactor was suspended by crane for tests of ground reflection. Then an aircraft shield test reactor was flown in the bomb-bay of a B-36 aircraft to allow measurements at altitude. The Oak Ridge tower shielding facility (TSF) went into operation in 1954, and remained in operation for almost 40 years. Designed for the aircraft nuclear propulsion program, the facility allowed suspension of a reactor hundreds of feet above grade and separate suspension of aircraft crew compartments. In its long life, the TSF also supported nuclear defense and space nuclear applications.

Streaming of radiation through shield penetrations and heating in concrete shields due to neutron and gamma-ray absorption were early shielding studies conducted in support of gas-cooled reactor design. Additional efforts were undertaken soon thereafter at universities as well as government and industrial laboratories. Shielding material properties, neutron attenuation, the creation of capture and inelastic scattering gamma rays, reflection and streaming of neutrons and gamma rays through ducts and passages, and radiation effects on materials were major research topics.

The Decade of the 1950s

This era saw the passage in the USA of the Atomic Energy Act of 1954, the Atoms for Peace program, and the declassification of nuclear data. During this decade, many simplified shielding methods were developed that were suitable for hand calculations. The first digital computers appeared and were quickly used for radiation transport calculations. The US Air Force also started a short-lived nuclear rocket program.

Advances in neutron shielding methods. These advances resulted from measurements at the LTSF and other bulk-shielding facilities. One advancement was the measurement of point kernels, or Green’s functions, for attenuation of fission neutrons in water and other hydrogenous media. The other was the discovery that the effect of water-bound oxygen, indeed the effect of homogeneous or heterogeneous shielding materials in hydrogenous media, could be modeled by exponential attenuation governed by effective “removal” cross sections for the non-hydrogen components. The LTSF allowed measurement of removal cross sections for many materials.

Advances in gamma-ray shielding methods. As the decade began, researchers at the National Bureau of Standards investigated electron and photon transport. Much of this effort dealt with the moments method for solving the transport equation describing the spatial, energy, and angular distributions of radiation particles emitted from fixed sources. From such calculations, buildup factors to account for scattered photons were determined for various shielding media and shield thicknesses. Various empirical formulas were also developed to aid in the interpolation of the buildup-factor data.

Advances in Monte Carlo computational methods. The Monte Carlo method of simulating radiation transport computationally has its roots in the work of John von Neumann and Stanislaw Ulam at Los Alamos in the 1940s. Neutron-transport calculations were performed in 1948 using the ENIAC digital computer, which had commenced operation in 1945. In this decade, major theoretical advances in Monte Carlo methods were made and many clever algorithms were invented to allow Monte Carlo simulations of radiation transport through matter. Little did the pioneers of this transport approach realize that Monte Carlo techniques would become indispensable in modern shielding practice.

The Decade of the 1960s

The 1960s saw the technology of nuclear-reactor shielding consolidated in several important publications. Blizard and Abbott [4] edited and released a revision of a portion of the 1955 Reactor Handbook as a separate volume on radiation shielding, recognizing that reactor shielding had emerged from nuclear-reactor physics into a discipline of its own. In a similar vein, the first volume of the Engineering Compendium on Radiation Shielding [5] was published. These two volumes brought together contributions from scores of authors and had a great influence on both practice and education in the field of radiation shielding.

This exciting decade also saw the beginning of the Apollo program, the start of the NASA NERVA (Nuclear Engine for Rocket Vehicle Application) program, and the deployments in space of SNAP-3, a radioisotope thermoelectric generator, in 1961 and of the SNAP-10A nuclear-reactor power system in 1965. It also saw the Cuban missile crisis in October 1962 and a major increase in the cold-war apprehension about possible use of nuclear weapons. The Apollo program demanded attention to solar flare and cosmic radiation sources and the shielding of space vehicles. Cold-war concerns demanded attention to nuclear-weapon effects, particularly structure shielding from nuclear-weapon fallout. Reflection of gamma rays and neutrons and their transmission through ducts and passages took on special importance in structure shielding. The rapid growth in access to digital computers allowed introduction of many computer codes for shielding design and fostered advances in solving various approximations to the Boltzmann transport equation for neutrons and gamma rays. Similar advances were made in treating the slowing down and transport of charged particles.

Space shielding. Data gathered over many years revealed a very complicated radiation environment in space. Two trapped-radiation belts had been found to surround the earth, an inner proton belt and an outer electron belt. Energy spectra and spatial distributions in these belts are determined by the earth’s magnetic field and by the solar wind, a plasma of low-energy protons and electrons. The radiations pose a risk to astronauts and to sensitive electronic equipment. Uniform intensities of very-high-energy galactic cosmic rays demand charged-particle shielding for protection of astronauts in long duration missions. The greatest radiation risk faced by Apollo astronauts was from solar flare protons and alpha particles with energies as great as 100 MeV for the former and 400 MeV for the latter. The overall subject of space radiation shielding is treated by Haffner [6].

Structure shielding. Structure shielding from nuclear-weapon fallout required careful examination of the atmospheric transport of gamma rays of a wide range of energies and expression of angular distributions and related data in a manner easily adapted to analysis of structures. There was a need to assess, at points within a structure, the ratio of the interior dose rate to that outside the building, called the reduction factor. These factors were measured experimentally and also calculated with the transport moments method, which had been used so successfully in calculation of buildup factors.

Other shielding advances. Of great importance to structure shielding, but also of interest in reactor and nuclear plant shielding, was the development of simplified methods to quantify neutron and gamma-ray streaming through ducts and voids in shields. This decade saw the development of removal-diffusion methods to describe quite accurately the penetration and slowing down of fast fission neutrons in shields. Finally, a simplified approach was developed to describe how gamma rays or neutrons incident on some material are scattered back. The central concept in this approach is the particle albedo, a function that describes how radiation incident on a thick medium, a concrete wall for example, is reemitted or reflected back from the surface. Measurements, theoretical calculations, and approximating formulas for both neutron and gamma-ray albedos were developed in this decade.

Digital computer applications. Radiation transport calculations are by nature very demanding of computer resources. The community of interest in radiation transport and shielding has been served magnificently for more than 4 decades by the Radiation Safety Information Computational Center (RSICC). Established in 1962 as the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory, RSICC’s mission is to provide in-depth coverage of the radiation transport field to meet the needs of the international shielding community.

The 1960s saw many new “mainframe” computer codes developed and disseminated. Among these codes were gamma-ray “point-kernel” codes such as ISOSHLD and QAD, with versions of both still in use after almost 4 decades. The discrete-ordinates method of solving the Boltzmann transport equation was devised in the 1950s and put into practice in the 1960s in a series of computer codes, such as DTF, DOT, and ANISN. The spherical harmonics method of treating neutron spatial and energy distributions in shields was advanced by Shure [7] in one-dimensional \( {P_3} \) calculations. Progress in Monte Carlo methods advanced in pace with discrete-ordinates methods, and the multigroup Monte Carlo code for neutron and gamma-ray transport, MORSE, was introduced at the end of the decade. The continuous energy Monte Carlo code, now known as MCNP, also began in this decade at Los Alamos National Laboratory. A general-purpose particle-transport code MCS was written in 1963 to be followed by the MCN code for three-dimensional calculations written in 1965.

The Decade of the 1970s

The Nuclear Non-Proliferation Treaty (NPT) of 1968 and the National Environmental Policy Act (NEPA) of 1969 had major impacts on the radiation shielding field in the 1970s and succeeding decades. The NPT precluded nuclear fuel reprocessing and led to ever-increasing needs for on-site storage of spent fuel at nuclear power plants. NEPA required exhaustive studies of off-site radiation doses around nuclear power plants and environmental impacts of plant operations. Early in the 1970s, there were major disruptions in oil supplies caused by the OPEC embargo. The response in the USA was an energy policy that forbade electricity production using oil or natural gas. The result was placement of many orders for nuclear power plants despite NPT and NEPA constraints. In the field of radiation shielding, special attention was given to plant design issues such as streaming of neutrons and gamma rays through voids, passageways, and shield penetrations, and to operational issues such as fission-product inventories in fuels and gamma-ray skyshine, particularly associated with \( ^{16}{\text{N}} \) sources.

Information essential for plant design, fuel management, and waste management is data tracking radionuclide activities in reactor fuel and process streams, and corresponding strengths and energy spectra of sources, including fission products, activation products, and actinides. To accomplish this, the ORIGEN codes were developed at Oak Ridge National Laboratory and the CINDER code was developed at Los Alamos National Laboratory. Radiation doses from airborne beta-particle emitters were also assessed for the first time. Although the ETRAN Monte Carlo code for electron transport was available at the National Bureau of Standards, work began in the mid-1970s at Sandia Laboratory on the TIGER code and at Stanford Linear Accelerator Center on the EGS code, both for coupled photon and electron transport by the Monte Carlo method.

Design needs brought new attention to buildup factors and to attenuation of broad beams of neutrons and gamma rays. Definitive compilations were made of buildup factors and also the attenuation and reflection by shields obliquely illuminated by photons. Detailed results were also obtained for transmission of neutrons and secondary gamma rays through shielding barriers. This decade also saw the publication of two important NCRP reports [8,9] dealing with neutron shielding and dosimetry and with design of medical facilities that protected against effects of gamma rays and high-energy x-rays.

Design and analysis needs also fostered continuing attention to computer codes for criticality and neutron-transport calculations. A series of more robust discrete-ordinates transport codes were developed. Advances in Monte Carlo calculations were also made. The MCN code was merged with the MCG code in 1973 to form the MCNG code for treating coupled neutron–photon transport. Another merger took place with the MCP code in 1977, allowing detailed treatment of photon transport at energies as low as 1 keV. This new code was known, then and now, as MCNP.

The 1980s and 1990s

These years saw the consolidation of resources for design and analysis work. In the 1980s, personal computers allowed methods such as point-kernel calculations to be programmed. In the 1990s, personal computers took over from the mainframe computers in even the most demanding shielding design and analysis. Comprehensive sets of fluence-to-dose conversion factors became available for widespread use. Radionuclide decay data became available in databases easily used for characterizing sources. Gamma-ray buildup factors were computed with precision and a superb method of data fitting was devised. All these carried point-kernel as well as more advanced shielding methodology to a new plateau.

Databases. Kocher [10] published radioisotope decay data for shielding design and analysis that largely supplanted earlier compilations. Then a new MIRD compendium [11] and ICRP-38 database became the norms, with the latter especially useful for characterizing low-energy x-ray and Auger electron emission. Today a wealth of nuclear structure and decay data is available on the web from the National Nuclear Data Center at Brookhaven National Laboratory (http://www.nndc.bnl.gov/index.jsp).

Advances in buildup factors. Refinements in the computation of buildup factors continued to be made over the years. Computer codes now could account for not only Compton scattering and photoelectric absorption, but also positron creation and annihilation, fluorescence, and bremsstrahlung. Calculations of buildup factors incorporating all these sources of secondary photon radiation were made, leading to a comprehensive set of precise buildup factors standardized for use in design and analysis [12]. Also, a new five-parameter buildup-factor formulation, called the geometric progression formula, was introduced. Although difficult to use for hand calculations, it is an extraordinarily precise formula and is today used in most modern point-kernel codes. Both the calculated buildup factors and the coefficients for the geometric progression buildup factors are tabulated in the design standard [12].

Cross sections and dose conversion factors. Authoritative cross-section data are now available in the ENDF/B (evaluated nuclear data file) (http://www.nndc.bnl.gov/exfor/endf00.htm) database containing evaluated cross sections, spectra, angular distributions, fission product yields, photo-atomic and thermal scattering law data, with emphasis on neutron-induced reactions. The National Institute of Standards and Technology (NIST) has long been the repository for gamma-ray interaction coefficients. The Institute also sponsors the XCOM cross-section code, which may be executed on the NIST Internet site (http://physics.nist.gov/PhysRefData/Xcom/Text/XCOM.html) or downloaded for personal use.

Gamma-ray fluence-to-dose conversion factors for local values of exposure or kerma may be computed directly from readily available energy transfer or energy absorption coefficients for air, tissue, etc. Neutron conversion factors for local values of tissue kerma were computed by Caswell et al. [13]. As the second century of radiation protection begins, there are two classes of fluence-to-dose conversion factors in use for neutrons and gamma rays. One very conservative class is to be used for operational purposes at doses well below regulatory limits. This class is based on doses at fixed depths in 30-cm diameter spherical phantoms irradiated in various ways. The other class is to be used for dose assessment purposes, and not for personnel dosimetry. This class is based on the anthropomorphic human phantom and weight factors for effective dose equivalent [14] or effective dose [15].

Computer applications. The 1980s and 1990s were decades of revolution for the computational aspects of radiation shield design and analysis. The advent of inexpensive personal computers with rapidly increasing speeds and memory freed the shielding analyst from dependence on a few supercomputers at national laboratories. Many shielding codes that could previously run only on large mainframe computers were reworked to run on small personal computers, thereby allowing any shielding analyst to perform detailed calculations that only a privileged few were able to do previously.

At the same time, many improvements were made to the transport codes and their algorithms. MCNP has gone through a series of revisions adding new capabilities, such as new variance-reduction methods, tallies, and physics models. It has also spun off a second version, MCNPX, with a capability of treating 34 types of particles with energies up to 150 MeV. Also in these decades, many other Monte Carlo transport codes were developed by researchers in many nations. Each has unique features and capabilities. General-purpose discrete-ordinates codes were also extensively improved, with many novel acceleration schemes introduced to improve their speeds. An excellent review of many such improvements is given by Adams and Larsen [16].

Practice of Radiation Shielding

Shielding design and shielding analysis are complementary activities. In design, the source and maximum target dose are specified, and the task is to determine the type and amount of the shielding required to reduce the target dose to that specified. In analysis, the source and shielding are identified and the task is to determine the dose at some point(s) of interest. Whether one is engaged in a hand calculation or in a most elaborate Monte Carlo simulation, one is faced with the tasks of (1) characterizing the source, (2) characterizing the nature and attenuating properties of the shielding materials, (3) evaluating at a target location the radiation intensity and perhaps its angular and energy distributions, and (4) converting the intensity to a dose or response meaningful in terms of radiation effects.

Source Characterization

Source geometry, energy, and angular distribution are required characteristics. Radionuclide sources, with isotropic emission and unique energies of gamma and x-rays, are relatively easy to characterize. Activity and source strength must be carefully distinguished, as not every decay results in emission of a particular gamma or x-ray. Careful consideration must be given to a low-energy limit below which source particles may be ignored, else computation resources may be wasted. Similarly, when photons of many energies are emitted, as in the case of fission-product sources, one is compelled to use a group structure in source characterization, and much care is needed in establishing efficient and appropriate group energy limits and group average energies. When the source energies are continuously distributed, as is the case with fission neutrons and gamma rays, one option is to use a multigroup approach, as might be used in point-kernel calculations. Another option, useful in Monte Carlo simulations, is to sample source energies from a mathematical representation of the energy spectrum, as sketched below.
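As a hedged illustration of this second option, the following minimal sketch samples source energies by rejection from a Watt fission-spectrum shape; the spectrum parameters, cutoff energy, and function names are illustrative assumptions rather than values endorsed by this entry.

```python
import math
import random

def watt_shape(E, a=0.988, b=2.249):
    """Unnormalized Watt fission-spectrum shape exp(-E/a)*sinh(sqrt(b*E)), E in MeV.
    The parameters a and b are illustrative placeholders."""
    return math.exp(-E / a) * math.sinh(math.sqrt(b * E))

def sample_source_energy(E_max=15.0, a=0.988, b=2.249):
    """Sample one source energy (MeV) by simple rejection against a uniform envelope."""
    # crude upper bound on the shape function over (0, E_max]
    f_max = max(watt_shape(0.001 + k * E_max / 1000.0, a, b) for k in range(1000))
    while True:
        E = random.uniform(0.0, E_max)
        if random.uniform(0.0, f_max) <= watt_shape(E, a, b):
            return E

print([round(sample_source_energy(), 3) for _ in range(5)])
```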

A point source is very often an appropriate approximation of a physical source of small size. It is also appropriate to represent a line, plane, or volume source as a collection of point sources, as is done in the point-kernel method of shielding analysis. Radionuclide and fission sources are isotropic in angular distribution; however, there are cases for which it is efficient to identify a surface and to characterize the surface as a secondary source surface. Such surface sources are very often non-isotropic in angular distribution. For example, consider the radiation emitted into the atmosphere from a large body of water containing a distributed radiation source. The interface may be treated approximately, but very effectively, as a plane source emitting radiation not isotropically, but with an intensity varying with the angle of emission from the surface.

Attenuating Properties

The total microscopic cross section for an element or nuclide, \( \sigma (E) \), multiplied by the atomic density, is the linear interaction coefficient \( \mu (E) \), also called the macroscopic cross section, the probability per unit (differential) path length that a particle of energy E interacts with the medium in some way. Its reciprocal, called the mean free path, is the average distance traveled before interaction. Usually, the ratio \( \mu /\rho \), called the mass interaction coefficient, is tabulated because it is independent of density. Various subscripts may be used to designate particular types of interactions, for example, \( \sigma_a(E) \) for absorption or \( \sigma_f(E) \) for fission. Likewise, additional independent variables may be introduced, with, for example, \( \sigma_s(E,E'){\text{d}}E' \) representing the cross section for scattering from energy E to an energy between \( E' \) and \( E' + {\text{d}}E' \). Information resources for attenuating properties are described in this entry’s historical review, as are resources for radionuclide decay data.
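As a minimal illustration of these definitions, the short sketch below converts a tabulated mass interaction coefficient to a linear interaction coefficient and a mean free path; the numerical coefficient is a hypothetical placeholder rather than vetted data.

```python
def linear_coefficient(mu_over_rho, density):
    """mu = (mu/rho) * rho: interactions per unit path length (1/cm) from the
    mass interaction coefficient (cm^2/g) and the material density (g/cm^3)."""
    return mu_over_rho * density

mu_over_rho = 0.066   # hypothetical mass interaction coefficient for lead, cm^2/g
rho = 11.35           # density of lead, g/cm^3
mu = linear_coefficient(mu_over_rho, rho)
print(f"mu = {mu:.3f} 1/cm, mean free path = {1.0 / mu:.2f} cm")
```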

Intensity Characterization

The intensity of a neutron or photon field is usually described in terms of radiation crossing the surface of a small spherical volume V. The fluence \( \Phi \) is defined, in the limit \( V \to 0 \), as the expected or average sum of the path lengths in V traveled by entering particles divided by the volume V. Equivalently, \( \Phi \) is, again in the limit \( V \to 0 \), the expected number of particles crossing the surface of V divided by the cross-sectional area of the volume. The time derivative of the fluence is the fluence rate or flux density \( \dot{\Phi} \). Note that the fluence, though having units of reciprocal area, has no reference area or orientation. Note too that the fluence and flux are point functions. The fluence, a function of position, may also be a distribution function for particle energies and directions. For example, \( \Phi (\mathbf{{r}},E,\Omega ){\text{d}}E{\text{d}}\Omega \) is the fluence at \( \mathbf{{r}} \) of particles with energies in \( {\text{d}}E \) about E and with directions in solid angle \( {\text{d}}\Omega \) about the direction \( {\Omega} \). When a particular surface, with outward normal \( \mathbf{{n}} \), is used as a reference, it is useful to define radiation intensity in terms of the flow \( {J_n}(\mathbf{{r}},E,\Omega ){\text{d}}E{\text{d}}\Omega \equiv \mathbf{{n}} \cdot \Omega \Phi (\mathbf{{r}},E,\Omega ){\text{d}}E{\text{d}}\Omega \) across the reference surface.

Fluence-to-Dose Conversion Factors

Whether the shield designer uses the simplest of the point-kernel methods or the most comprehensive of the Monte Carlo or discrete-ordinates methods, fluence-to-dose conversion factors generally have to be used. The radiation attenuation calculation deals with the particle fluence, the direct measure of radiation intensity. To convert that intensity into a measure of radiation damage or heating of a material, to a field measurement such as exposure, or to a measure of health risk, conversion factors must be applied.

The shielding analysis ordinarily yields the energy spectrum \( \Phi (\mathbf{{r}},E) \) of the photon or neutron fluence at a point identified by the vector \( \mathbf{{r}} \). Use of a Monte Carlo code normally yields this spectrum as a function of energy, whence the dose or, more generally, response \( R(\mathbf{{r}}) \) is given by the convolution of the fluence with the fluence-to-dose factor, here called the response function \( {\mathcal{R}}(E) \), so that

$$ R(\mathbf{{r}}) = \int\limits_E {\text{d}}E {\mathcal R}\left( {\mathbf{{r}}, E} \right)\Phi \left( {\mathbf{{r}},E} \right). $$
(14.1)

Point-kernel, or other energy-multigroup, methods yield the energy spectrum at discrete energies, or in energy groups, and the dose convolution is a summation rather than an integration (see the sketch below).
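A minimal sketch of this multigroup summation follows; the group fluences and group-averaged response values are hypothetical placeholders.

```python
# Multigroup form of Eq. (14.1): dose = sum over groups of (group fluence) x (group response).
group_fluence = [2.0e6, 5.0e5, 1.0e5]         # particles/cm^2 per group (hypothetical)
group_response = [1.2e-12, 3.5e-12, 6.0e-12]  # dose per unit fluence per group (hypothetical)

dose = sum(phi * R for phi, R in zip(group_fluence, group_response))
print(f"dose = {dose:.3e} (units carried by the response values)")
```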

While the fluence is almost always computed as a point function of position, the response of interest may be a dose at a point (called a local dose) or it may be a much more complicated function such as the average radiation dose in a physical volume such as an anthropomorphic phantom. Local and phantom-related doses are briefly discussed later.

Suppose the local dose of interest is the kerma, defined as the expected sum of the initial kinetic energies of all charged particles produced by the radiation field in a mass m, in the limit as \( m \to 0 \). Then the response function is given by

$$ {\mathcal{R}_K}(E) = \kappa \sum\limits_i \frac{{{N_i}}}{\rho }\sum\limits_j {\sigma_{ji}}(E){\epsilon_{ji}}(E), $$
(14.2)

in which \( \rho \) is the mass density, \( {N_i} \) is the number of atoms of species i per unit volume (proportional to \( \rho \)), \( {\sigma_{ji}}(E) \) is the cross section for the jth interaction with species i, and \( {\epsilon_{ji}}(E) \) is the average energy transferred to secondary charged particles in the jth interaction with species i. A units conversion factor \( \kappa \) is needed to convert from, say, units of MeV cm\(^2\)/g to units of rad cm\(^2\) or Gy cm\(^2\). For neutrons, a quality factor multiplier \( Q(E) \) is needed to convert to units of dose equivalent (rem or Sv). For photons, Eq. (14.2) reduces to

$$ {\mathcal{R}_K}(E)\;({\text{Gy}}\;{\text{c}}{{\text{m}}^2}) = 1.602 \times {10^{ - 10}}E[{\mu_{\text{tr}}}(E)/\rho ], $$
(14.3)

where E is in MeV and \( {\mu_{\text{tr}}}/\rho \) is the mass energy transfer coefficient in units of cm2/g for the material to which energy is transferred.

More related to radiation damage is the local absorbed dose, defined as the expected energy imparted, through ionization, excitation, chemical changes, and heat, to a mass m, in the limit as \( m \to 0 \). Under conditions of charged-particle equilibrium, the absorbed dose equals the neutron or gamma-ray kerma, less the energy radiated away as bremsstrahlung. Such equilibrium is approached in a region of homogeneity in composition and uniformity in neutron or photon intensity. Then the absorbed dose is given by Eq. (14.3) with \( {\mu_{\text{tr}}} \) replaced by the energy absorption coefficient \( {\mu_{\text{en}}} \) to account for any bremsstrahlung losses.
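For a concrete sense of Eq. (14.3), the following minimal sketch evaluates the photon kerma response function at a single energy; the mass energy-transfer coefficient used is an illustrative placeholder, not vetted data.

```python
def photon_kerma_response(E_MeV, mu_tr_over_rho):
    """Eq. (14.3): R_K(E) in Gy cm^2 from E (MeV) and mu_tr/rho (cm^2/g)."""
    return 1.602e-10 * E_MeV * mu_tr_over_rho

E = 1.0                 # photon energy, MeV
mu_tr_over_rho = 0.031  # hypothetical mass energy-transfer coefficient for tissue, cm^2/g
print(f"R_K({E} MeV) = {photon_kerma_response(E, mu_tr_over_rho):.3e} Gy cm^2")
```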

The second type of response function or dose is that related to the local dose within a simple geometric phantom or some sort of average dose within an anthropomorphic phantom. The phantom dose, in fact, is a point function and serves as a standardized reference dose for instrument calibration and radiation protection purposes. Even though the radiation fluence, itself a point function, may have strong spatial and angular variation as well as energy variation, it is still possible to associate with the radiation fluence a phantom-related dose. The procedure is as follows. The fluence is treated, for example, as a very broad parallel beam of the same intensity as the actual radiation field, incident in some fixed way on the phantom. This is the so-called expanded and aligned field. For a geometric phantom, the dose is computed at a fixed depth. For an anthropomorphic phantom, the dose is computed as an average of doses to particular tissues and organs, weighted by the susceptibility of the tissues and organs to radiation carcinogenesis or hereditary illness. Many phantoms have been used with various directions of incident radiation. The calculated response functions are then tabulated as a function of the radiation energy. Additional details of phantom doses and their tabulations are given by Shultis and Faw [17].

Basic Analysis Methods

To say modern shielding practice has been reduced to running large “black-box” codes is very misleading. Randomly varying model parameters, such as shield dimensions, placement, and material, is a very inefficient way to optimize shielding for a given situation. Using the concepts and ideas behind the earlier simplified methods often allows a shield analyst to select materials and geometry for a preliminary design before using large transport codes to refine the design. In this section, fundamental methods for estimating neutron or photon doses are reviewed. Such indirectly ionizing radiation is characterized by straight-line trajectories punctuated by “point” interactions. The basic concepts presented here apply equally to all particles of such radiation.

It should be noted that throughout this entry, calculated doses are the expected or average value of the stochastic measured doses, that is, the mechanistically calculated dose represents the statistical average of a large number of dose measurements which exhibit random fluctuations as a consequence of the stochastic nature of the source emission and interactions in the detector and surrounding material.

Uncollided Radiation Doses

In many situations, the dose at some point of interest is dominated by particles streaming directly from the source without interacting in the surrounding medium. For example, if only air separates a gamma-ray or neutron source from a detector, interactions in the intervening air or in nearby solid objects, such as the ground or building walls, are often negligible, and the radiation field at the detector is due almost entirely to uncollided radiation coming directly from the source.

In an attenuating medium, the uncollided dose at a distance r from a point isotropic source emitting \( S_p \) particles of energy E is

$$ D^{\text{o}}(r) = \frac{S_p \mathcal{R}}{4\pi r^2}\,e^{-l}, $$
(14.4)

where \( l \) is the total number of mean-free-path lengths of material a particle must traverse before reaching the detector, namely, \( l = \int_0^r \mu (s)\,{\text{d}}s \). Here \( \mathcal{R} \) is the appropriate response function. The \( 1/(4\pi {r^2}) \) term in Eq. (14.4) is often referred to as the geometric attenuation and the \( {{\text{e}}^{ - l}} \) term the material attenuation. Equation (14.4) can be extended easily to a source emitting particles with different discrete energies or a continuous spectrum of energies, as in the sketch below.
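The sketch below evaluates Eq. (14.4) for a point isotropic source behind a uniform slab and sums the result over two discrete emission energies; the source strengths, response factors, and attenuation coefficients are hypothetical placeholders, and attenuation in the intervening air is neglected.

```python
import math

def uncollided_dose(S_p, R, mu, t, r):
    """Eq. (14.4): geometric attenuation 1/(4 pi r^2) times material attenuation exp(-mu t)."""
    return S_p * R * math.exp(-mu * t) / (4.0 * math.pi * r**2)

# hypothetical two-line source: (emission rate, response factor, shield attenuation coefficient)
lines = [(1.0e10, 1.8e-12, 0.77), (5.0e9, 2.5e-12, 0.58)]
t, r = 5.0, 200.0  # shield thickness along the ray (cm) and source-detector distance (cm)

total = sum(uncollided_dose(S_p, R, mu, t, r) for (S_p, R, mu) in lines)
print(f"uncollided dose rate ~ {total:.3e} (units set by the response factors)")
```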

Point Kernel for Uncollided Dose

Consider an isotropic point source placed at \( {\mathbf{{r}}_{\text{s}}} \) and an isotropic point detector (or target) placed at \( {\mathbf{{r}}_{\text{t}}} \) in a homogeneous medium. The detector response depends not on \( {\mathbf{{r}}_{\text{s}}} \) and \( {\mathbf{{r}}_{\text{t}}} \) separately, but only on the distance \( |{\mathbf{{r}}_{\text{s}}} - {\mathbf{{r}}_{\text{t}}}| \) between the source and detector. For a unit strength source, the detector response is (cf. Eq. (14.4))

$$ \mathcal{G}^{\text{o}}(\mathbf{r}_{\text{s}},\mathbf{r}_{\text{t}},E) = \frac{\mathcal{R}(E)}{4\pi |\mathbf{r}_{\text{s}} - \mathbf{r}_{\text{t}}|^2}\,e^{-\mu (E)|\mathbf{r}_{\text{s}} - \mathbf{r}_{\text{t}}|}. $$
(14.5)

Here \( {\mathcal{G}^{\text{o}}}({\mathbf{{r}}_{\text{s}}},{\mathbf{{r}}_{\text{t}}},E) \) is the uncollided dose point kernel and equals the dose at \( {\mathbf{{r}}_{\text{t}}} \) per particle of energy E emitted isotropically at \( {\mathbf{{r}}_{\text{s}}} \). This result holds for any geometry or medium provided that the material through which a ray from \( {\mathbf{{r}}_{\text{s}}} \) to \( {\mathbf{{r}}_{\text{t}}} \) passes has a constant interaction coefficient \( \mu \).

With this point kernel, the uncollided dose due to an arbitrarily distributed source can be found by first decomposing (conceptually) the source into a set of contiguous effective point sources and then summing (integrating) the dose produced by each point source.

Applications to Selected Geometries

The results for the uncollided dose from a point source can be used to derive expressions for the uncollided dose arising from a wide variety of distributed sources such as line sources, area sources, and volumetric sources [4, 5, 18, 19]. An example to illustrate the method is as follows:

A disk source of radius a emitting isotropically \( {S_a} \) particles per unit area at energy E is depicted in Fig. 14.1. A detector is positioned at point P a distance h above the center of the disk. Suppose the only material separating the disk source and the receptor at P is a slab of thickness t with a total attenuation coefficient \( \mu \).

Fig. 14.1 An isotropic disk source shielded by a parallel slab shield of thickness t

Consider a differential area \( {\text{d}}A \) between distance \( \rho \) and \( \rho + {\text{d}}\rho \) from the disk center and between \( \psi \) and \( \psi + {\text{d}}\psi \). The source within \( {\text{d}}A \) may be treated as an effective point isotropic source emitting \( {S_a}{\text{d}}A = {S_a}\rho {\text{d}}\rho {\text{d}}\psi \) particles, which produces an uncollided dose at P of \( {\text{d}}{D^{\text{o}}} \). The ray from the source in \( {\text{d}}A \) must pass through a slant distance \( t\sec \theta \) of the shield, so that the dose at P from particles emitted within \( {\text{d}}A \) is

$$ {\text{d}}D^{\text{o}}(P) = \frac{\mathcal{R}{S_a}\,\rho\,{\text{d}}\rho\,{\text{d}}\psi }{4\pi {r^2}}\exp \left[ { - \mu t\sec \theta } \right], $$
(14.6)

where \( \mathcal{R} \) and \( \mu \) generally depend on the particle energy E. To obtain the total dose at P from all differential areas of the disk source, one then must sum, or rather integrate, \( {\text{d}}{D^{\text{o}}} \) over all differential areas. Thus, the total uncollided dose at P is

$$ D^{\text{o}}(P) = \frac{{S_a}\mathcal{R}}{4\pi }\int_0^{2\pi } {\text{d}}\psi \int_0^a {\text{d}}\rho\,\frac{\rho \exp [ - \mu t\sec \theta ]}{{r^2}}. $$
(14.7)

Because h is fixed, \( \rho {\text{d}}\rho = r{\text{d}}r \), and from Fig. 14.1 it is seen that \( r = h{\text{sec}}\theta \). Integration over \( \psi \) and changing variables yields

$$ D^{\text{o}}(P) = \frac{{S_a}\mathcal{R}}{2}\int_h^{h\sec {\theta_{\text{o}}}} {\text{d}}r\,{r^{ - 1}}{{\text{e}}^{ - \mu tr/h}} $$
(14.8)
$$ \quad\quad\;\; = \frac{{S_a}\mathcal{R}}{2}\int_{\mu t}^{\mu t\sec {\theta_{\text{o}}}} {\text{d}}x\,{x^{ - 1}}{{\text{e}}^{ - x}} $$
(14.9)
$$ \quad\quad\;\; = \frac{{S_a}\mathcal{R}}{2}\left[ {{E_1}(\mu t) - {E_1}(\mu t\sec {\theta_{\text{o}}})} \right], $$
(14.10)

where the exponential integral function \( {E_n} \) is defined as \( {E_n}(x) \equiv {x^{n - 1}}\int_x^\infty {\text{d}}u{u^{ - n}}{{\text{e}}^{ - u}} \) and is tabulated in many compilations [4, 5, 17].
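The closed-form result in Eq. (14.10) is easy to evaluate numerically; the minimal sketch below does so with the exponential integral from SciPy, using hypothetical values for the source, shield, and geometry.

```python
import math
from scipy.special import exp1  # exponential integral E1

def shielded_disk_dose(S_a, R, mu, t, a, h):
    """Eq. (14.10): D^o(P) = (S_a R / 2) [E1(mu t) - E1(mu t sec(theta_o))],
    with sec(theta_o) = sqrt(a^2 + h^2) / h from the geometry of Fig. 14.1."""
    sec_theta_o = math.sqrt(a * a + h * h) / h
    return 0.5 * S_a * R * (exp1(mu * t) - exp1(mu * t * sec_theta_o))

# hypothetical inputs: areal source strength, response factor, slab mu and thickness, disk radius, height
print(f"{shielded_disk_dose(S_a=1.0e8, R=1.8e-12, mu=0.20, t=10.0, a=50.0, h=100.0):.3e}")
```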

Intermediate Methods for Photon Shielding

In this section, several special techniques are summarized for the design and analysis of shielding for gamma and x-rays with energies from about 1 keV to about 20 MeV. These techniques are founded on very precise radiation transport calculations for a wide range of carefully prescribed situations. These techniques, which rely on buildup factors, attenuation factors, albedos or reflection factors, and line-beam response functions, then allow estimation of photon doses for many frequently encountered shielding situations without the need of transport calculations.

Buildup-Factor Concept

The total photon fluence \( \Phi (\mathbf{{r}},E) \) at some point of interest \( \mathbf{{r}} \) is the sum of two components: the uncollided fluence \( {\Phi^{\text{o}}}(\mathbf{{r}},E) \) of photons that have streamed to \( \mathbf{{r}} \) directly from the source without interaction, and the fluence of scattered and secondary photons \( {\Phi^{\text{s}}}(\mathbf{{r}},E) \) consisting of source photons scattered one or more times, as well as secondary photons such as x-rays and annihilation gamma rays.

The buildup factor \( B(\mathbf{{r}}) \) is defined as

$$ B(\mathbf{{r}}) \equiv \frac{{D(\mathbf{{r}})}}{{{D^{\text{o}}}(\mathbf{{r}})}} = 1 + \frac{{{D^{\text{s}}}(\mathbf{{r}})}}{{{D^{\text{o}}}(\mathbf{{r}})}}, $$
(14.11)

where \( D(\mathbf{{r}}) \) is the total dose equal to the sum of the uncollided dose \( {D^{\text{o}}}(\mathbf{{r}}) \) and the scattered or secondary photon dose \( {D^{\text{s}}}(\mathbf{{r}}) \). For a monoenergetic source this reduces to

$$ B({E_{\text{o}}},\mathbf{{r}}) = 1 + \frac{1}{{{\Phi^{\text{o}}}(\mathbf{{r}})}}\int_0^{{E_{\text{o}}}} {\text{d}}E\frac{{{\mathcal{R}}(E)}}{{{\mathcal{R}}({E_{\text{o}}})}}{\Phi^{\text{s}}}(\mathbf{{r}},E). $$
(14.12)

In this case, the nature of the dose or response is fully accounted for in the ratio \( {\mathcal{R}}(E)/{\mathcal{R}}({E_{\text{o}}}) \). By far the largest body of buildup-factor data is for point, isotropic, and monoenergetic sources of photons in infinite homogeneous media. Calculation of buildup factors for high-energy photons requires consideration of the paths traveled by positrons from their creation until their annihilation. Such calculations have been performed by Hirayama [20] and by Faw and Shultis [21] for photon energies as great as 100 MeV. Because coherent scattering was neglected in many buildup-factor calculations, coherent scattering should also be neglected in calculating the uncollided dose, a significant consideration only for low-energy photons at deep penetration.
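As a hedged sketch of how a buildup factor is applied in practice, the code below corrects an uncollided point-kernel dose using the geometric-progression form mentioned in the historical review; the five coefficients are hypothetical placeholders that, in a real calculation, would be interpolated from the tabulations of the design standard [12].

```python
import math

def gp_buildup(x, b, c, a, X_K, d):
    """Geometric-progression buildup factor at x mean free paths (a sketch, not a vetted fit)."""
    K = c * x**a + d * (math.tanh(x / X_K - 2.0) - math.tanh(-2.0)) / (1.0 - math.tanh(-2.0))
    if abs(K - 1.0) < 1.0e-6:
        return 1.0 + (b - 1.0) * x
    return 1.0 + (b - 1.0) * (K**x - 1.0) / (K - 1.0)

x = 5.0                 # penetration depth in mean free paths
D_uncollided = 1.0e-6   # hypothetical uncollided dose
B = gp_buildup(x, b=2.0, c=0.5, a=0.2, X_K=15.0, d=0.05)  # hypothetical coefficients
print(f"B = {B:.2f}, total dose ~ {B * D_uncollided:.3e}")
```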

Buildup-Factor Geometry

Generally, buildup factors depend on the source and shield geometries. For a given material thickness between source and detector, buildup factors are slightly different for a point isotropic source (a) in an infinite medium, (b) at the surface of a bare sphere, and (c) with a slab shield between source and detector. However, the use of buildup factors for a point isotropic source is almost always conservative, that is, the estimated dose is greater than that for a finite shield [17]. The adjustment factor for buildup factors at the surface of a finite medium, in terms of the infinite-medium buildup factor, is illustrated in Fig. 14.2.

Fig. 14.2 Adjustment factor for the buildup factor \( {B_x} \) at the boundary of a finite medium in terms of the infinite-medium buildup factor \( {B_\infty } \) for the same depth of penetration (EGS4 calculations courtesy of Sherrill Shue, Nuclear Engineering Department, Kansas State University)

Buildup factors are also available for plane isotropic (PLI) and plane monodirectional (PLM) gamma-ray sources in infinite media. Indeed, Fano et al. [22], Goldstein [23], and Spencer [24], in their moments-method calculations, obtained buildup factors for plane sources first and, from these, buildup factors for point sources were derived. Buildup factors at depth in a half-space shield are also available for the PLM source, that is, normally incident photons [20, 25, 26]. The use of buildup factors for a point isotropic source in an infinite medium is conservative, that is, overpredictive, for the PLI and PLM geometries.

Buildup Factors for Stratified Shields

Sometimes shields are stratified, that is, composed of layers of different materials. The use of the buildup-factor concept for such heterogeneous shields is, for the most part, of dubious merit. Nevertheless, implementation of point-kernel codes for shielding design and analysis demands some way of treating buildup when the ray from source point to dose point is through more than one shielding material. However, certain regularities do exist, which permit approximate use of homogeneous-medium buildup factors for stratified shields. Many approximate buildup methods have been suggested, as described by Shultis and Faw [17]; however, they are of little use in most point-kernel codes and are not needed at all for shielding analysis based on transport methods.

Point-Kernel Computer Codes

There are many codes in wide use that are based on the point-kernel technique. In these codes, a distributed source is decomposed into small but finite elements, and the dose at some receptor point from each element is computed using the uncollided dose kernel and a buildup factor based on the optical thickness of material between the source element and the receptor. The results for all the source elements are then added together to obtain the total dose, as sketched below. Some codes that have been widely used are MicroShield [27], the QAD series [28], QADMOD-GP [29], QAD-CGGP [30], and \( {G^3} \) [31].
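A minimal sketch of the summation such codes perform is given below; the source elements, attenuation coefficient, response factor, and the stand-in buildup function are all hypothetical.

```python
import math

def buildup(mfp):
    """Hypothetical stand-in for an interpolated buildup-factor fit."""
    return 1.0 + mfp  # placeholder only

def point_kernel_dose(source_elements, detector, mu, R):
    """Sum over source elements of B * S * R * exp(-mu * path) / (4 pi r^2)."""
    dose = 0.0
    for (xs, ys, zs, S, shield_path) in source_elements:
        r = math.dist((xs, ys, zs), detector)
        mfp = mu * shield_path
        dose += buildup(mfp) * S * R * math.exp(-mfp) / (4.0 * math.pi * r**2)
    return dose

# two hypothetical source elements: (x, y, z, emission rate, shield path length along the ray, cm)
elements = [(0.0, 0.0, 0.0, 5.0e9, 8.0), (10.0, 0.0, 0.0, 5.0e9, 9.0)]
print(f"{point_kernel_dose(elements, detector=(0.0, 0.0, 100.0), mu=0.5, R=1.8e-12):.3e}")
```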

Broad-Beam Attenuation

Often a point radionuclide or x-ray source in air is located sufficiently far from a wall or shielding slab that the radiation reaches the wall in nearly parallel rays. Further, the attenuation in the air is usually quite negligible in comparison to that provided by the shielding wall. Shielding design and analysis for such broad-beam illumination of a slab shield are addressed by NCRP Report 49 [9], Archer [32], and Simpkin [33]. The dose at the surface of the cold side of the wall can be computed as

$$ D = {D^{\text{o}}}{A_{\text{f}}}. $$
(14.13)

For a radionuclide source of activity \( \mathcal{A} \), the dose \( {D^{\text{o}}} \) without the wall can be expressed in terms of the source energy spectrum, response functions, and distance r from the source to the cold side of the wall. Then,

$$ D = {D^{\text{o}}}{A_{\text{f}}} = \frac{\mathcal{A}}{{{r^2}}}\Gamma {A_{\text{f}}}, $$
(14.14)

where \( \Gamma \), called the specific gamma-ray constant, is the dose rate in vacuum at a unit distance from a source with unit activity, and \( {A_{\text{f}}} \) is an attenuation factor which depends on the nature and thickness of the shielding material, the source energy characteristics, and the angle of incidence \( \theta \) (with respect to the wall normal). Values for \( \Gamma \) and \( {A_{\text{f}}} \) are provided by NCRP [9].
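A minimal numerical sketch of Eq. (14.14) follows; the activity, specific gamma-ray constant, and attenuation factor are hypothetical placeholders rather than values taken from NCRP [9].

```python
A = 3.7e10       # source activity (hypothetical), Bq
Gamma = 1.0e-16  # specific gamma-ray constant (hypothetical), dose rate per unit activity at 1 m
A_f = 0.02       # attenuation factor for this wall, source spectrum, and incidence angle (hypothetical)
r = 3.0          # distance from source to the cold side of the wall, m

D = A * Gamma * A_f / r**2  # Eq. (14.14)
print(f"dose rate behind the wall ~ {D:.3e} (units follow from Gamma)")
```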

Oblique Incidence

Attenuation factors for obliquely incident beams are presented in NCRP Report 49 [9]. For such cases, special three-argument slant-incidence buildup factors should be used [17]. For a shield wall of thickness t mean free paths, slant incidence at angle \( \theta \) with respect to the normal to the wall, and source energy \( {E_{\text{o}}} \), the attenuation factor is in function form \( {A_{\text{f}}}({E_{\text{o}}},t,\theta ) \). However, a common, but erroneous, practice has been to use a two-argument attenuation factor based on an infinite-medium buildup factor for slant penetration distance \( t \text {sec}\theta \), in the form \( {A_{\text{f}}}({E_{\text{o}}},t \text{sec}\theta ) \). This practice can lead to severe underprediction of transmitted radiation doses.

X-ray Beam Attenuation

For x-ray sources, the appropriate measure of source strength is the electron-beam current i, and the appropriate characterization of photon energies, in principle, involves the peak accelerating voltage (kVp), the waveform, and the degree of filtration (e.g., beam half-value thickness). If i is the beam current (mA) and r is the source-detector distance (m), the dose behind a broadly illuminated shield wall is

$$ D(P) = \frac{i}{{{r^2}}}{K_{\text{o}}}{A_{\text{f}}}, $$
(14.15)

in which \( {K_{\text{o}}} \), called the radiation output (factor), is the dose rate in vacuum (or air) per unit beam current at unit distance from the source in the absence of the shield. Empirical formulas for computing \( {A_{\text{f}}} \) are available for shield design [34, 35].
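As a hedged illustration, the sketch below evaluates Eq. (14.15) with a three-parameter attenuation-factor fit of the kind introduced by Archer [32], \( {A_{\text{f}}}(x) = [(1 + \beta /\alpha ){{\text{e}}^{\alpha \gamma x}} - \beta /\alpha ]^{ - 1/\gamma } \); the fitting parameters, beam current, and radiation output used here are hypothetical placeholders, not published values.

```python
import math

def attenuation_factor(x_mm, alpha, beta, gamma):
    """Archer-type empirical broad-beam attenuation factor for a shield of thickness x_mm."""
    return ((1.0 + beta / alpha) * math.exp(alpha * gamma * x_mm) - beta / alpha) ** (-1.0 / gamma)

i, r, K_o = 2.0, 2.5, 5.0e-3  # tube current (mA), distance (m), radiation output (all hypothetical)
A_f = attenuation_factor(x_mm=1.5, alpha=2.4, beta=15.0, gamma=0.75)  # hypothetical fit parameters

D = i * K_o * A_f / r**2  # Eq. (14.15)
print(f"A_f = {A_f:.3e}, dose rate behind the wall ~ {D:.3e}")
```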

Intermediate Methods for Neutron Shielding

Shielding design for fast neutrons is generally far more complex than shielding design for photons. Not only does one have to protect against the neutrons emitted by some source, but one also needs to protect against primary gamma rays emitted by most neutron sources as well as secondary photons produced by inelastic neutron scattering and by radiative capture. There may also be secondary neutrons produced from \( (n,2n) \) and fission reactions. In many instances, secondary photons produce greater radiological risks than do the primary neutrons. Fast-neutron sources include spontaneous and induced fission, fusion, \( (\alpha, n) \) reactions, \( (\gamma, n) \) reactions, and spallation reactions in accelerators, each producing neutrons with a different distribution of energies.

Unlike photon cross sections, neutron cross sections usually vary greatly with neutron energy and among the different isotopes of the same element. Comprehensive cross-section databases are needed. Also, because of the erratic variation of the cross sections with energy, it is difficult to calculate uncollided doses needed in order to use the buildup-factor approach. Moreover, buildup factors are very geometry dependent and sensitive to the energy spectrum of the neutron fluence and, consequently, point-kernel methods can be applied to neutron shielding only in very limited circumstances.

Early work led to kernels for fission sources in aqueous systems and the use of removal cross sections to account for shielding barriers. Over the years, the methodology was stretched to apply to nonaqueous hydrogenous media, then to non-hydrogenous media, then to fast-neutron sources other than fission. Elements of diffusion and age theory were melded with the point kernels. Today, with the availability of massive computer resources, neutron shielding design and analysis is largely done using transport methods. Nevertheless, the earlier methodologies offer insight and allow more critical interpretation of transport calculations.

Also, unlike ratios of different photon response functions, those for neutrons vary, often strongly, with neutron energy. Hence, neutron doses cannot be converted to different dose units by simply multiplying by an appropriate constant. The energy spectrum of the neutron fluence is needed to obtain doses in different units. Consequently, many old measurements or calculations of point kernels, albedo functions, transmission factors, etc., made with obsolete dose units cannot be converted to modern units because the energy spectrum is unknown. In this case, there is no recourse but to repeat the measurements or calculations.

Capture Gamma Photons

A significant, often dominant, component of the total dose at the surface of a shield accrues from capture gamma photons produced deep within the shield and arising from neutron absorption. Of lesser significance are secondary photons produced in the inelastic scattering of fast neutrons. Secondary neutrons are also produced as a result of \( (\gamma, n) \) reactions. Thus, in transport methods, gamma-ray and neutron transport are almost always coupled.

Historically, capture gamma-ray analysis was appended to neutron removal calculations. Most neutrons are absorbed only when they reach thermal energies, and, consequently, only the absorption of thermal neutrons was considered. (Exceptional cases include the strong absorption of epithermal neutrons in fast reactor cores or in thick slabs of low-moderating, high-absorbing material.) For this reason, it is important to calculate accurately the thermal neutron fluence \( {\Phi_{\text{th}}}(\mathbf{{r}}) \) in the shield. The volumetric source strength of capture photons per unit energy about E is then given by

$$ {S_{{\gamma }}}(\mathbf{{r}},E) ={\Phi_{{th}}}(\mathbf{{r}}){\mu_{{\gamma }}}(\mathbf{{r}}){{f}}(\mathbf{{r}},E), $$
(14.16)

where \( {\mu_{{\gamma }}}(\mathbf{{r}}) \) is the absorption coefficient at \( \mathbf{{r}} \) for thermal neutrons and \( {{f}}(\mathbf{{r}},E) \) is the number of photons produced in unit energy about E per thermal neutron absorption at \( \mathbf{{r}} \).

Once the capture gamma-ray source term \( {S_{{\gamma }}}(\mathbf{{r}},E) \) is known throughout the shield, point-kernel techniques using exponential attenuation and buildup factors can be used to calculate the capture gamma-ray dose at the shield surface.
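As a concrete illustration of Eq. 14.16, the following minimal sketch evaluates the capture-gamma source strength on a one-dimensional grid through a slab shield. The thermal-neutron fluence profile, absorption coefficient, and capture-photon yields used here are illustrative placeholders, not data for any real material.

```python
import numpy as np

# Minimal sketch of Eq. (14.16): capture-gamma volumetric source strength
# S_gamma(r, E) = Phi_th(r) * mu_gamma(r) * f(r, E), evaluated on a 1-D slab grid.
# The fluence profile, absorption coefficient, and photon yields are placeholders.

x = np.linspace(0.0, 100.0, 101)                 # depth into the shield (cm)
phi_th = 1.0e8 * np.exp(-x / 10.0)               # assumed thermal-neutron fluence rate (cm^-2 s^-1)
mu_gamma = np.full_like(x, 0.02)                 # assumed thermal-neutron absorption coefficient (cm^-1)

# Assumed capture-photon yields: photons per absorption for two discrete line energies (MeV)
photon_yields = {2.2: 1.0, 7.6: 0.3}

# Volumetric capture-photon source strength (photons cm^-3 s^-1) at each line energy
S_gamma = {E: phi_th * mu_gamma * y for E, y in photon_yields.items()}

for E, S in S_gamma.items():
    print(f"E = {E} MeV: S_gamma ranges from {S.max():.3e} (front face) to {S.min():.3e} (back face)")
```

Each cell of such a grid can then be treated as a distributed photon source in the point-kernel sum for the dose at the shield surface.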

Neutron Shielding with Concrete

Concrete is probably the most widely used shielding material because of its relatively low cost and the ease with which it can be cast into large and variously shaped shields. However, unlike the case for photon attenuation, the composition of the concrete, especially its water content, has a strong influence on its neutron attenuation properties. Other important factors that influence the effectiveness of concrete as a neutron shield include the type of aggregate, the dose–response function, and the angle of incidence of the neutrons.

Because concrete is so widely used as a shield material, its effectiveness for a monoenergetic, broad, parallel beam of incident neutrons has been extensively studied, both for normal and slant incidence, and many tabulated results for shields of various thicknesses are available [36–40]. These results, incorporated into design and manufacturing standards (available from professional societies such as the American Nuclear Society and the American Society of Mechanical Engineers), are extremely useful in the preliminary design of concrete shields.

Gamma-Ray and Neutron Reflection

Until now, only shielding situations have been considered in which the radiation reaching a target contains an uncollided component. For these situations, point-kernel approximations, in principle, may be used and concepts such as particle buildup may be applied. However, in many problems encountered in shielding design and analysis, only scattered radiation reaches the target. Doses due to radiation reflected from a surface, for example, arise in the treatment of radiation streaming through multi-legged ducts and passageways. Treatment of radiation reflection from the surfaces of structures is also a necessary adjunct to precise calibration of nuclear instrumentation. Skyshine, that is, reflection in the atmosphere of radiation from fixed sources to distant points, is another example of this class of reflected-radiation problems. All such reflection problems are impossible to treat using elementary point-kernel methods and are also very difficult and inefficient to treat using transport-based methods. For reflection from a surface of radiation from a point source to a point receiver, the albedo function has come to be very useful in design and analysis; the same can be said for the line-beam response function in the treatment of skyshine. Both are discussed below.

Albedo Methods

There are frequent instances for which the dose at some location from radiation reflected from walls and floors may be comparable to the line-of-sight dose. The term reflection in this context does not imply a surface scattering. Rather, gamma rays or neutrons penetrate the surface of a shielding or structural material, scatter within the material, and then emerge from the material with reduced energy and at some location other than the point of entry.

In many such analyses, a simplified method, called the albedo method, may be used. The albedo method is based on the following approximations. (1) The displacement between points of entry and emergence may be neglected. (2) The reflecting medium is effectively a half-space, a conservative approximation. (3) Scattering in air between a source and the reflecting surface and between the reflecting surface and the detector may be neglected.

Application of the Albedo Method

Radiation reflection may be described in terms of the geometry shown in Fig. 14.3. Suppose that a point isotropic and monoenergetic source is located distance \( {r_1} \) from area \( {\text{d}}A \) along incident direction \( {\Omega_{\text{o}}} \) and that a dose point is located distance \( {r_2} \) from area \( {\text{d}}A \) along emergent direction \( \Omega \). Suppose also the source has an angular distribution such that \( S({\theta_{\text{o}}}) \) is the source intensity per steradian evaluated at the direction from the source to the reflecting area \( {\text{d}}A \). Then the dose \( {\text{d}}{D_{\text{r}}} \) at the detector from particles reflected from \( {\text{d}}A \) can be shown to be [17]

Fig. 14.3 Angular and energy relationships in the albedo formulation

$$ {\text{d}}{D_{\text{r}}} = {D_{\text{o}}}{\alpha_{\text{D}}}({E_{\text{o}}},{\theta_{\text{o}}};\theta, \psi )\frac{{{\text{d}}A {\text {cos}}{\theta_{\text{o}}}}}{{r_2^2}}, $$
(14.17)

in which \( {D_{\text{o}}} \) is the dose at \( {\text{d}}A \) due to incident particles. Here \( {\alpha_{\text{D}}}({E_{\text{o}}},{\theta_{\text{o}}};\theta, \psi ) \) is the dose albedo. Determination of the total reflected dose \( {D_{\text{r}}} \) requires integration over the area of the reflecting surface. In doing so one must be aware that, as the location on the surface changes, all the variables \( {\theta_{\text{o}}} \), \( \theta \), \( \psi \), \( {r_1} \), and \( {r_2} \) change as well. Also, it is necessary to know \( {\alpha_{\text{D}}}({E_{\text{o}}},{\theta_{\text{o}}};\theta, \psi ) \) or, more usefully, to have some analytical approximation for the dose albedo so that numerical integration over all the surface area can be performed efficiently.
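The area integration described above is readily carried out numerically. The following sketch evaluates Eq. 14.17 over a rectangular floor patch for a point isotropic source and a point detector, with the geometry of Fig. 14.3. The dose albedo, source strength, and fluence-to-dose factor are placeholders; in practice, the albedo would come from a fitted approximation such as those in [41–44].

```python
import numpy as np

# Sketch of the albedo integration of Eq. (14.17): dose at a detector from radiation
# reflected off a floor (the z = 0 plane) illuminated by a point isotropic source.
# The dose albedo below is a crude placeholder, not a fitted Chilton-Huddleston form.

def alpha_D(E0, theta0, theta, psi):
    # Placeholder dose albedo; replace with a fitted approximation (e.g., refs [41]-[44]).
    return 0.05 * np.cos(theta)

src = np.array([0.0, 0.0, 1.0])      # source 1 m above the floor (m)
det = np.array([3.0, 0.0, 1.0])      # detector 3 m away, 1 m above the floor (m)
Sp = 1.0e10                          # assumed source strength (photons/s)
Rf = 5.0e-16                         # assumed fluence-to-dose conversion factor (Gy m^2)
E0 = 1.25                            # source energy (MeV)

# Integrate over a patch of floor with a simple rectangular mesh
xs = np.linspace(-10.0, 13.0, 231)
ys = np.linspace(-10.0, 10.0, 201)
dA = (xs[1] - xs[0]) * (ys[1] - ys[0])

D_r = 0.0
for x in xs:
    for y in ys:
        p = np.array([x, y, 0.0])
        v1, v2 = p - src, det - p               # incident and emergent legs
        r1, r2 = np.linalg.norm(v1), np.linalg.norm(v2)
        cos_t0 = -v1[2] / r1                    # cosine of incident polar angle (normal = +z)
        cos_t = v2[2] / r2                      # cosine of emergent polar angle
        psi = abs(np.arctan2(v2[1], v2[0]) - np.arctan2(v1[1], v1[0]))  # azimuthal separation
        D0 = Sp * Rf / (4.0 * np.pi * r1**2)    # dose at dA from the uncollided source
        D_r += D0 * alpha_D(E0, np.arccos(cos_t0), np.arccos(cos_t), psi) * cos_t0 * dA / r2**2

print(f"Reflected dose rate at detector ~ {D_r:.3e} Gy/s (placeholder albedo)")
```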

Gamma-Ray Dose Albedo Approximations

A two-parameter approximation for the photon dose albedo was first devised by Chilton and Huddleston [41] and later extended by Chilton et al. [42]. Chilton [43] later proposed a more accurate seven-parameter albedo formula for concrete. Brockhoff [44] published seven-parameter fit data for albedos from water, concrete, iron, and lead. Two examples of this dose albedo approximation are shown in Fig. 14.4.

Fig. 14.4 Ambient-dose-equivalent albedos for reflection of 1.25-MeV photons from concrete, computed using the seven-term Chilton–Huddleston approximation

Neutron Dose Albedo Approximations

The dose albedo concept is very useful for streaming problems that involve “reflection” of neutrons or photons from some material interface. However, unlike photon albedos, neutron albedos are seldom tabulated or approximated for monoenergetic incident neutrons because of the rapid variation of neutron cross sections with energy. Rather, albedos for neutrons within a specific range of energies (an energy group) are usually considered, thereby averaging over all the cross-section resonances in the group. Also, unlike photon albedos, neutron albedos involve reflected dose from both neutrons and secondary capture gamma rays.

There are many studies of neutron albedos in the literature. Selph [45] published a detailed review. Extensive compilations of neutron albedo data are available, for example, SAIL [46] and BREESE-II [47]. Of more utility are analytic approximations for the albedo based on measured or calculated albedos. Neutron albedos are often divided into three types: (1) fast-neutron albedos (\( E \geq 0.2 \) MeV), (2) intermediate-energy albedos, and (3) thermal-neutron albedos. Selph [45] reviews early approximations for neutron albedos, among which is a 24-parameter approximation developed by Maerker and Muckenthaler [48]. More accurate fast-neutron albedos, based on different 24-parameter approximations, have since been computed by Brockhoff [44] for several shielding materials.

For neutrons with energy less than about 100 keV, the various dose equivalent response functions are very insensitive to neutron energy. Consequently, the dose albedo \( {\alpha_{\text{D}}} \) is very closely approximated by the number albedo \( {\alpha_{\text{N}}} \). Thus, for reflected dose calculations involving intermediate or thermal neutrons, the number albedo is almost always used. Coleman et al. [49] calculated neutron albedos for intermediate-energy neutrons (200 keV to 0.5 eV) incident monodirectionally on reinforced concrete slabs and developed a nine-parameter formula for the albedo.

Thermal neutrons entering a shield undergo isotropic scattering that, on the average, does not change their energies. For one-speed particles incident in an azimuthally symmetric fashion on a half-space of material that isotropically scatters particles, Chandrasekhar [50] derived an exact expression for the differential albedo. A purely empirical and particularly simple formula, based on Monte Carlo data for thermal neutrons, has been proposed by Wells [51] for ordinary concrete, namely,

$$ {\alpha_{\text{N}}}({\theta_{\text{o}}};\theta, \psi ) = 0.21{\text{cos}}\theta {({\text{cos}}{\theta_{\text{o}}})^{ - 1/3}}. $$
(14.18)
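A direct implementation of the Wells fit, Eq. 14.18, is a one-line function; note that the azimuthal angle ψ does not appear in the fit. The short sketch below simply encodes the formula.

```python
import numpy as np

# Sketch of the Wells fit, Eq. (14.18), for the thermal-neutron number albedo of
# ordinary concrete; the azimuthal angle psi does not enter this fit.
def alpha_N_thermal(theta_o, theta, psi=None):
    return 0.21 * np.cos(theta) * np.cos(theta_o) ** (-1.0 / 3.0)

# Example: thermal neutrons incident 60 degrees from the normal, reflected 30 degrees from it
print(alpha_N_thermal(np.radians(60.0), np.radians(30.0)))
```

Such a function could, for instance, replace the placeholder albedo in the reflection sketch given earlier (with a thermal-neutron response used in place of the photon dose factor).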

Radiation Streaming Through Ducts

Except in the simplest cases, the analysis of radiation streaming requires advanced computational procedures. However, even within the framework of Monte Carlo transport calculations, albedo methods are commonly used, and special data sets have been developed for such use [46, 47, 52].

Elementary methods for gamma-ray streaming are limited to straight cylindrical ducts, with incident radiation symmetric about the duct axis and uniform over the duct entrance. Transmitted radiation generally may be subdivided into three components: line-of-sight, lip-penetrated, and wall-scattered. The first two may be treated using point-kernel methodology; the last requires use of albedo methods to account for scattering over the entire surface area of the duct walls. Selph [45] reviews the methodology of duct transmission calculations, and LeDoux and Chilton [53] devised a method of treating two-legged rectangular ducts, important in the analysis of structure shielding.

Streaming through gaps and ducts in a shield is much more serious for neutrons than for gamma photons. Neutron albedos, especially for thermal neutrons, are generally much higher than those for photons, and multiple scattering within the duct is very important. Placing bends in a duct, which is very effective for reducing gamma-ray penetration, is far less effective for neutrons. Fast neutrons entering a duct in a concrete shield become thermalized and thereafter are capable of scattering many times, allowing the neutrons to stream through the duct, even one with several bends. Also, unlike gamma-ray streaming, the duct need not be a void (or gas filled) but can be any part of a heterogeneous shield that is “transparent” to neutrons. For example, the steel walls of a water pipe embedded in a concrete shield (such as the cooling pipes that penetrate the biological shield of a nuclear reactor) act as an annular duct for fast neutrons.

There is much literature on experimental and calculational studies of gamma-ray and neutron streaming through ducts. In many of these studies, empirical formulas, obtained by fits to the data, have been proposed. These formulas are often useful for estimating duct-transmitted doses under similar circumstances. As a starting point for finding such information, the interested reader is referred to Rockwell [18], Selph [45], and NCRP [54].

Gamma-Ray and Neutron Skyshine

For many intense localized sources of radiation, the shielding against radiation that is directed skyward is usually far less than that for the radiation emitted laterally. However, the radiation emitted vertically into the air undergoes scattering interactions and some radiation is reflected back to the ground, often at distances far from the original source. This atmospherically reflected radiation, referred to as skyshine, is of concern both to workers at a facility and to the general population outside the facility site.

As alternatives to rigorous transport-theory treatment of the skyshine problem, several approximate procedures have been developed for both gamma-photon and neutron skyshine sources [54]. This section summarizes one approximate method that has been found useful for bare or shielded skyshine sources. The integral line-beam skyshine method is based on the availability of a line-beam response function \( {\mathcal{R}}(E,\phi, x) \), which gives the dose (air kerma or ambient dose) at a distance x from a point source emitting a photon or neutron of energy E at an angle φ from the source-to-detector axis into an infinite air medium; the air–ground interface is neglected in this method. This response function can be fit over a large range of x, for fixed values of E and φ, to the following three-parameter empirical formula [55]:

$${\mathcal{R}}(E,\phi, x) = \kappa {(\rho /{\rho_{\text{o}}})^2}E{[x(\rho /{\rho_{\text{o}}})]^{\text{b}}} {\text{exp}}[a - cx(\rho /{\rho_{\text{o}}})], $$
(14.19)

in which \( \rho \) is the air density in the same units as the reference density \( {\rho_{\text{o}}} = 0.0012\;{\text{g/c}}{{\text{m}}^3} \). The constant \( \kappa \) depends on the choice of units.

The parameters a, b, and c in Eq. 14.19 depend on the photon or neutron energy and the source emission angle. These parameters have been estimated and tabulated, for fixed values of E and \( \phi \), by fitting Eq. 14.19 to values of the line-beam response function at different distances x, usually obtained by Monte Carlo calculations. Gamma-ray response functions have been published by Lampley [56] and Brockhoff et al. [57]. Neutron and secondary gamma-ray response functions have been published by Lampley [56] and Gui et al. [58]. These data and their method of application are presented by Shultis and Faw [17].
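Evaluating the fitted response function of Eq. 14.19 is a single expression once the parameters for a given E and φ have been looked up. The sketch below uses made-up parameter values purely to show the form of the calculation; real values must be interpolated from the tabulations of [55–58].

```python
import numpy as np

# Sketch of the line-beam response function fit, Eq. (14.19). The parameters
# a, b, c and the constant kappa below are placeholders, not fitted values.
def line_beam_response(x, E, a, b, c, kappa, rho=0.0012, rho0=0.0012):
    """Dose at distance x from a point source emitting one particle of energy E
    at the emission angle phi for which (a, b, c) were fitted."""
    r = rho / rho0
    return kappa * r**2 * E * (x * r) ** b * np.exp(a - c * x * r)

# Illustrative evaluation at x = 500 m for a 1.25-MeV photon (made-up parameters)
print(line_beam_response(x=500.0, E=1.25, a=-1.0, b=-1.5, c=0.004, kappa=1.0e-15))
```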

To obtain the skyshine dose \( D(d) \) at a distance d from a bare collimated source, the line-beam response function, weighted by the energy and angular distribution of the source, is integrated over all source energies and emission directions. Thus, if the collimated source emits \( S(E,\Omega ) \) photons per unit energy and per unit solid angle, the skyshine dose is

$$ D(d) = \int_0^\infty {\text{d}}E\int_{{\mathbf{{\Omega }}_{\text{s}}}} {\text{d}}\Omega S(E,\Omega ){\mathcal{R}}(E,\phi, d), $$
(14.20)

where the angular integration is over all emission directions \( {\Omega_{\text{s}}} \) allowed by the source collimation. Here, \( \phi \) is a function of the emission direction \( \Omega \). To obtain this result, it has been assumed that the presence of an air–ground interface can be neglected by replacing the ground by an infinite air medium. The effect of the ground interface on the skyshine radiation, except at positions very near a broadly collimated source, has been found to be very small.
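A minimal numerical realization of Eq. 14.20 is sketched below for a monoenergetic point source collimated into a vertical cone and assumed to emit uniformly in solid angle. The `response` callable stands in for a fit such as Eq. 14.19 with properly interpolated parameters; the toy response used in the usage line is a placeholder, not fitted data.

```python
import numpy as np

# Sketch of Eq. (14.20) for a monoenergetic point source collimated into a vertical
# cone of half-angle theta_max, assumed to emit uniformly in solid angle. `response`
# is any line-beam response function R(x, E, phi).

def skyshine_dose(d, E, S_total, theta_max, response, n_theta=90, n_az=180):
    """Skyshine dose at ground distance d (same length unit as used in `response`)."""
    omega = 2.0 * np.pi * (1.0 - np.cos(theta_max))     # cone solid angle (sr)
    s_density = S_total / omega                         # emission density (particles/s/sr)
    thetas = np.linspace(1e-3, theta_max, n_theta)      # polar angle from the vertical
    azimuths = np.linspace(0.0, 2.0 * np.pi, n_az, endpoint=False)
    d_theta, d_az = thetas[1] - thetas[0], azimuths[1] - azimuths[0]
    axis = np.array([1.0, 0.0, 0.0])                    # horizontal source-to-detector axis
    dose = 0.0
    for t in thetas:
        for az in azimuths:
            omega_hat = np.array([np.sin(t) * np.cos(az),
                                  np.sin(t) * np.sin(az),
                                  np.cos(t)])
            phi = np.arccos(np.clip(omega_hat @ axis, -1.0, 1.0))  # emission angle from the axis
            dose += s_density * response(d, E, phi) * np.sin(t) * d_theta * d_az
    return dose

# Usage with a crude placeholder response (not fitted data):
toy_response = lambda x, E, phi: 1.0e-15 * E * x**-1.5 * np.exp(-0.004 * x)
print(skyshine_dose(d=500.0, E=1.25, S_total=1.0e12,
                    theta_max=np.radians(30.0), response=toy_response))
```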

The presence of a shield over a skyshine source, for example, a building roof, causes some of the source particles penetrating the shield to be degraded in energy and angularly redirected before being transported through the atmosphere. The effect of an overhead shield on the skyshine dose far from the source can be accurately treated by a two-step hybrid method [59, 60]. First, a transport calculation is performed to determine the energy and angular distribution of the radiation penetrating the shield; then, with this distribution serving as an effective bare point skyshine source, the integral line-beam method is used to evaluate the skyshine dose.

The integral line-beam method for gamma-ray and neutron skyshine calculations has been applied to a variety of source configurations and found to give generally excellent agreement with benchmark calculations and experimental results [59]. It has been used as the basis of the microcomputer code MicroSkyshine [61] for gamma rays. A code package for both neutron and gamma-ray calculations is available from the Radiation Safety Information Computational Center. (Code package CCC-646: SKYSHINE-KSU: Code System to Calculate Neutron and Gamma-Ray Skyshine Doses Using the Integral Line-Beam Method, and data library DLC-188: SKYDATA-KSU: Parameters for Approximate Neutron and Gamma-Ray Skyshine Response Functions and Ground Correction Factors.)

Transport Theory

For difficult shielding problems in which simplified techniques such as point kernels with buildup corrections cannot be applied, calculations based on transport theory must often be used. There are two basic approaches to transport calculations: deterministic calculations, in which the linear Boltzmann equation is solved numerically, and Monte Carlo calculations, in which the stochastic migration of particles through the problem geometry is simulated. Both approaches have their strengths and weaknesses. Because of space limitations, it is not possible to give a detailed review of the vast literature supporting both approaches; what follows is a brief explanation of the basic ideas involved, along with some general references.

Deterministic Transport Theory

The neutron or photon flux density \( \phi (\mathbf{{r}},E,\Omega ) \) for particles with energy E and direction \( \Omega \) is rigorously given by the linear Boltzmann equation or, simply, the transport equation

$$ \Omega \cdot \nabla \phi (\mathbf{r},E,\Omega ) + \mu (\mathbf{r},E)\,\phi (\mathbf{r},E,\Omega ) = S(\mathbf{r},E,\Omega ) + \int_0^\infty {\text{d}}E'\int_{4\pi } {\text{d}}\Omega '\,{\mu_{\text{s}}}(\mathbf{r},E',\Omega ' \to E,\Omega )\,\phi (\mathbf{r},E',\Omega '), $$
(14.21)

where S is the volumetric source strength of particles. This equation can be formally integrated to yield the integral form of the transport equation, namely,

$$ \phi (\mathbf{r},E,\Omega ) = \phi (\mathbf{r} - R\Omega, E,\Omega )\,f(R) + \int_0^R {\text{d}}R'\, q(\mathbf{r} - R'\Omega, E,\Omega )\,f(R'), $$
(14.22)

where \( f(x) \equiv {\text{exp}}\left[ { - \int_0^x \mu (\mathbf{{r}} - R''\Omega, E){\text{d}}R''} \right] \) and q is given by

$$ q(\mathbf{r},E,\Omega ) \equiv S(\mathbf{r},E,\Omega ) + \int_0^\infty {\text{d}}E'\int_{4\pi } {\text{d}}\Omega '\,{\mu_{\text{s}}}(\mathbf{r},E',\Omega ' \to E,\Omega )\,\phi (\mathbf{r},E',\Omega '). $$
(14.23)

Unfortunately, neither of these formulations of the transport equation can be solved analytically except for idealized cases, for example, an infinite medium with monoenergetic particles or a purely absorbing medium. Numerical solutions must be used for all practical shielding analyses. Many approximations of the transport equation, such as diffusion theory, are used to allow easier calculations. Also, the energy region of interest is usually divided into a few or even hundreds of contiguous energy subintervals (groups), and average cross sections are calculated for each group using an assumed energy spectrum of the radiation. In this manner, the transport equation is approximated by a set of coupled equations in which energy is no longer an independent variable. Even with an energy-multigroup approximation, numerical solutions are still computationally formidable.
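The group-averaging step mentioned above can be illustrated with a small sketch that collapses a pointwise cross section to multigroup constants using an assumed weighting spectrum. The pointwise cross section and the 1/E weighting used here are placeholders, not evaluated nuclear data.

```python
import numpy as np

# Sketch of collapsing a pointwise cross section to multigroup constants,
# sigma_g = int_g sigma(E) w(E) dE / int_g w(E) dE, with an assumed weighting
# spectrum w(E). The cross section and the 1/E weighting are placeholders.

E = np.logspace(-3, 7, 5001)                  # pointwise energy grid (eV)
sigma = 5.0 + 2.0 / np.sqrt(E)                # assumed pointwise cross section (b)
w = 1.0 / E                                   # assumed weighting spectrum
dE = np.gradient(E)                           # local energy widths for the quadrature

group_bounds = np.logspace(-3, 7, 11)         # 10 equal-lethargy groups
sigma_g = []
for lo, hi in zip(group_bounds[:-1], group_bounds[1:]):
    g = (E >= lo) & (E < hi)
    sigma_g.append(np.sum(sigma[g] * w[g] * dE[g]) / np.sum(w[g] * dE[g]))

print(np.round(sigma_g, 3))                   # one group-averaged cross section per group
```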

The most widely used deterministic transport approach is the discrete-ordinates method. In this method, a spatial and directional mesh is created for the problem geometry, and the multigroup form of the transport equation is then integrated over each spatial and directional cell. The solution of the approximating algebraic equations is then accomplished by introducing another approximation that relates the cell-centered flux densities to those on the cell boundaries, and an iterative procedure between the source (scattered particles and true source particles) and flux density calculation is then used to calculate the fluxes at the mesh nodes. For details of this method, the reader is referred to Carlson and Lathrop [62], Duderstadt and Martin [63], and Lewis and Miller [64].
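To make the sweep-and-iterate structure concrete, the following is a minimal sketch of a one-group, one-dimensional discrete-ordinates solver with isotropic scattering, diamond differencing, vacuum boundaries, and unaccelerated source iteration. The slab width, cross sections, and flat source are placeholder values; a production code would add multigroup data, anisotropic scattering, convergence acceleration, and multidimensional meshing.

```python
import numpy as np

# Minimal sketch of a one-group, 1-D slab, discrete-ordinates (S_N) solver with
# isotropic scattering, diamond differencing, and unaccelerated source iteration.
# Material data and the flat volumetric source are illustrative, not a real shield.

L, I, N = 10.0, 200, 8                        # slab width (cm), spatial cells, S_N order
dx = L / I
sigma_t, sigma_s = 1.0, 0.5                   # total and scattering cross sections (cm^-1)
q = np.ones(I)                                # flat isotropic volumetric source

mu, w = np.polynomial.legendre.leggauss(N)    # ordinates and weights; sum(w) = 2

phi = np.zeros(I)                             # scalar flux density estimate
for iteration in range(500):
    S = 0.5 * (sigma_s * phi + q)             # isotropic source per unit direction cosine
    phi_new = np.zeros(I)
    for n in range(N):
        m = abs(mu[n])
        psi_in = 0.0                          # vacuum boundary condition
        cells = range(I) if mu[n] > 0 else range(I - 1, -1, -1)
        for i in cells:
            # diamond-difference relation between cell-edge and cell-average fluxes
            psi_out = (S[i] + psi_in * (m / dx - 0.5 * sigma_t)) / (m / dx + 0.5 * sigma_t)
            psi_cell = 0.5 * (psi_in + psi_out)
            phi_new[i] += w[n] * psi_cell
            psi_in = psi_out
    if np.max(np.abs(phi_new - phi)) < 1e-6 * np.max(np.abs(phi_new)):
        phi = phi_new
        break
    phi = phi_new

print("cell-average scalar flux at slab center:", phi[I // 2])
```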

Discrete-ordinates calculations can be computationally expensive because of the usually enormous number of mesh nodes and the fact that convergence of an iterative solution is often very slow. A subject of great interest over the last 30 years has been the development of numerous methods to accelerate convergence of the iterations. Without convergence acceleration schemes, discrete-ordinates solutions would be computationally impractical for many shielding problems. An excellent description of the various acceleration schemes that have been used is provided by Adams and Larsen [16].

Mature computer codes based on the discrete-ordinates method are widely available to treat one-, two-, and three-dimensional problems in the three basic geometries (rectangular, spherical, and cylindrical) with an arbitrary number of energy groups [65,66].

Although discrete-ordinates methods are widely used by shielding analysts, these methods do have their limitations. Most restrictive is the requirement that the problem geometry must be one of the three basic geometries (rectangular, spherical, or cylindrical) with boundaries and material interfaces placed perpendicular to a coordinate axis. Problems with irregular boundaries and material distributions are difficult to solve accurately with the discrete-ordinates method. Also, in multidimensional geometries, the discrete-ordinates method often produces spurious oscillations in the flux densities (the ray effect) as an inherent consequence of the angular discretization. Finally, the discretization of the spatial and angular variables introduces numerical truncation errors, and it is necessary to use sufficiently fine angular and spatial meshes to obtain flux densities that are independent of the mesh size. For multidimensional situations in which the flux density is very anisotropic in direction and in which the medium is many mean-free-path lengths in size, typical of many shielding problems, the computational effort to obtain an accurate discrete-ordinates solution can become very large. However, unlike Monte Carlo calculations, discrete-ordinates methods can treat very-deep-penetration problems, that is, the calculation of fluxes and doses at distances many mean-free-path lengths from a source.

Monte Carlo Transport Theory

In Monte Carlo calculations, particle tracks are generated by simulating the stochastic nature of the particle interactions with the medium. One does not even need to invoke the transport equation; all one needs are complete mathematical expressions of the probability relationships that govern the track length of an individual particle between interaction points, the choice of an interaction type at each such point, the choice of a new energy and a new direction if the interaction is of a scattering type, and the possible production of additional particles. These are all stochastic variables, and in order to make selections of specific values for these variables, one needs a complete understanding of the various processes a particle undergoes in its lifetime from the time it is given birth by the source until it is either absorbed or leaves the system under consideration.

The experience a particle undergoes from the time it leaves its source until it is absorbed or leaves the system is called its history. From such histories, expected or average values of quantities describing the radiation field can be estimated. For example, suppose the expected energy \( \langle E\rangle \) absorbed in some small volume V in the problem geometry is sought. There is a probability \( {{f}}(E){\text{d}}E \) that a particle deposits energy in dE about E, so the expected energy deposited is simply \( \langle E\rangle = \int E{{f}}(E){\text{d}}E \). Unfortunately, \( {{f}}(E) \) is not known a priori and must be obtained from a transport calculation. In a Monte Carlo analysis, \( {{f}}(E) \) is constructed by scoring or tallying the energy \( {E_i} \) deposited in V by the ith particle history. Then, in the limit of a large number of histories N,

$$ \langle E\rangle \equiv \int E{{f}}(E){\text{d}}E \simeq \overline E \equiv \frac{1}{N}\sum\limits_{i = 1}^N {E_i}. $$
(14.24)

The process of using a computer to generate particle histories can be performed in a way completely analogous to the actual physical process of particle transport through a medium. This direct simulation of the physical transport is called an analog Monte Carlo procedure. However, if the tally region is far from the source regions, most analog particle histories will make zero contribution to the tally, and thus a huge number of histories must be generated to obtain a statistically meaningful result. To reduce the number of histories, nonanalog Monte Carlo procedures can be used whereby certain biases are introduced in the generation of particle histories to increase the chances that a particle reaches the tally region. For example, source particles could be emitted preferentially toward the tally region instead of with the usual isotropic emission. Of course, when tallying such biased histories, corrections must be made to undo the bias so that a correct score is obtained. Many biasing schemes have been developed, and are generally called variance reduction methods since, by allowing more histories to score, the statistical uncertainty or variance in the average score is reduced.
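The analog history loop and the tally of Eq. 14.24 can be sketched as follows for one-speed particles normally incident on a homogeneous slab. The cross sections, the isotropic-scattering model, and the assumption that all deposition occurs at the absorption site are simplifications chosen only to show the structure of the calculation.

```python
import numpy as np

# Minimal sketch of an analog Monte Carlo history loop for one-speed particles
# normally incident on a homogeneous slab, tallying the energy deposited in the
# slab per source particle (cf. Eq. (14.24)). The cross sections and isotropic
# scattering are illustrative assumptions, not data for a real shield.

rng = np.random.default_rng(1)
L = 5.0                       # slab thickness (cm)
sigma_t, sigma_a = 1.0, 0.3   # total and absorption cross sections (cm^-1)
E0 = 1.0                      # energy carried by each source particle (arbitrary units)
N = 100_000                   # number of histories

scores = np.zeros(N)          # E_i: energy deposited in the slab by history i
for i in range(N):
    x, mu = 0.0, 1.0          # birth at the left face, travelling toward +x
    while True:
        s = -np.log(rng.random()) / sigma_t      # sampled track length to next collision
        x += mu * s
        if x < 0.0 or x > L:                     # particle escapes the slab
            break
        if rng.random() < sigma_a / sigma_t:     # collision is an absorption
            scores[i] = E0                       # one-speed model: deposition only at absorption
            break
        mu = 2.0 * rng.random() - 1.0            # isotropic scatter: new direction cosine

mean = scores.mean()
std_err = scores.std(ddof=1) / np.sqrt(N)
print(f"mean energy deposited per source particle: {mean:.4f} +/- {std_err:.4f}")
```

A nonanalog version of this sketch would, for example, force collisions or bias the scattering direction and then multiply each score by the corresponding weight correction.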

The great advantage of the Monte Carlo approach over discrete ordinates is that it can treat complex geometries. However, Monte Carlo calculations can be computationally extremely expensive, especially for deep-penetration problems. Because the contribution a single history makes to a particular score is stochastic, a great many histories must be simulated to achieve a good estimate of the expected or average score. If a tally region is many mean-free-path lengths from the source, very few histories reach the tally region and contribute to the score. Even with powerful variance reduction techniques, enormous numbers of histories are often required to obtain a meaningful score in deep-penetration problems.

Those readers interested in more comprehensive treatments of the Monte Carlo method will find rich resources. A number of monographs address Monte Carlo applications in radiation transport. Those designed for specialists in nuclear-reactor computations are Goertzel and Kalos [67], Kalos [68], Kalos et al. [69], and Spanier and Gelbard [70]. More general treatments will be found in the books by Carter and Cashwell [71], Lux and Koblinger [72], and Dunn and Shultis [73]. Coupled photon and electron transport is addressed in the compilation edited by Jenkins et al. [74]. A great deal of practical information can be gleaned from the manuals for Monte Carlo computer codes; especially recommended are those for the EGS4 code [75], the TIGER series of codes [76], and the MCNP code [77].

Future Directions

In many respects, radiation shielding is a mature technological discipline. It is supported by a comprehensive body of literature and a diverse selection of computational resources. Indeed, the present availability of inexpensive computer clusters and of many sophisticated transport codes incorporating detailed physics models, modern data, and the ability to model complex geometries has reduced shielding practice, in many cases, to brute-force calculation. Many shielding problems require such a computational approach; however, many routine shielding problems can still be treated effectively using the simplified techniques developed in the 1940s–1970s, and point-kernel methods remain widely used today. Nevertheless, there are shielding problems for which no simplified approach is effective and transport methods must be employed. These include transmission of radiation through ducts and passages in structures, reflection of gamma rays from shielding walls and other structures, and transmission of beams of radiation obliquely incident on shielding slabs.

Despite the relative maturity of the discipline, one must not become complacent; there will continue to be advances in many areas. Undoubtedly, new computational resources will allow much more detailed 3-D graphical modeling of shielding geometries and their incorporation into transport codes. Likewise, 3-D displays of output will allow much better interpretation of results. New capabilities will be added to Monte Carlo and discrete-ordinates codes, and hybrid codes employing both Monte Carlo and deterministic techniques will also be developed. More nuclear data will become available that will, for example, allow detailed analysis of actinides in spent fuel, and correlation effects in nuclear data will allow better sensitivity analyses of results. Likewise, more information on material properties, especially on radiation resistance, will become known. Advances in microdosimetry will provide better understanding of cellular responses to single radiation particles and of the effects of low-level radiation doses. A better understanding of radiation hormesis may lead to changes in radiation standards that better reflect the health effects of radiation. New sources of radiation in research and medicine will include energetic protons and neutrons. These developments require continuing attention and adoption into the radiation-shielding discipline.