Introduction

The loss-of-coolant accident that occurred at the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) during the Great East Japan earthquake and tsunami on 11 March 2011 (USGS 2011) was the most severe (level 7 on the International Nuclear Event Scale) since the Chernobyl accident (INES 2011). The failure of the cooling systems due to tsunami flooding resulted in overheating, hydrogen and steam explosions, and a probable partial melting of the cores of the three operating reactors at the FDNPP. Large quantities of radioactive material were released to the environment (air, sea) (Chino et al 2011; Ohnishi 2012) over a period of 25 days from 11 March to 5 April 2011. The Japanese government evacuated the population within the first 20-km radius and imposed sheltering of people in the 20–30-km zone to protect them from high radiation doses (http://www.mhlw.go.jp/bunya/kokusaigyomu/asean/2012/dl/Introduction_Dr.Yamauchi.pdf). Measurements from aerial- and ground-based radiation monitoring (DOE 2011; Katata et al. 2012a; MEXT and DOE 2011; Taira et al. 2012) indicate that high deposition occurred in the northwest sector of the FDNPP, with dose rates exceeding 30 μSv/h. Considering the severity of the accident, a number of observational and simulation studies were made on the levels of environmental radioactivity around the FDNPP (Bowyer et al. 2011; Chino et al. 2011; Hirose 2012; Masson et al. 2011; Morino et al. 2011; Sugiyama et al. 2012; Yasunari et al. 2011; among others). It is important to understand the regional- and local-scale meteorological conditions influencing the atmospheric dispersion process during the accident to make a proper assessment of the radiological consequences and the anticipated health risk for the exposed populations. The authors of this work made a regional-scale dispersion study (Srinivas et al. 2012) to simulate the meteorological conditions during the accident that led to high deposited activity in the northwest zone of the FDNPP up to 100 km. Katata et al. (2012) have recently simulated the high-dose-rate zones around the FDNPP using the WSPEEDI system. Hashimoto et al. (2013) have modeled the spatiotemporal dynamics of radiocesium deposited onto forest areas in Japan from the FDNPP releases to facilitate effective countermeasure strategies to mitigate forest contamination.

A few studies reported the health risk impact taking into account the radiation measurements, especially in the near-range accident zone in Japan (e.g., WHO 2012a, b). Though measurements provide the actual ground truth, their limited density means they may not represent the spatial variation of the deposited activity, the hotspots in the radiation dose, and the resulting variation in the impact. Atmospheric dispersion modeling tools help to determine the spatial extent of plume dispersion, the fallout concentration, and the resultant deposited activity, as they are built on the fundamental atmospheric physical and dynamical processes governing the dispersion phenomena. Complex dispersion models can provide an accurate spatial picture of concentration, deposition, and dose quantities when input parameters such as the release quantities, source characteristics, and meteorological variables are realistically specified. Although some observations and earlier modeling studies are available for this accident, they have the limitation of either being completely observation oriented or being driven by coarse-resolution simulation outputs. In view of the release of large quantities of radioactivity into the environment, it is necessary to employ models at sufficiently high resolutions to realistically simulate the atmospheric physical processes as well as the resultant complex dispersion so that hotspots of activity can be identified. The outputs from these models can then be used for radiological risk assessment for the population of the affected region. To our knowledge, there are very few radiological impact studies based on a complete dispersion analysis covering all significant releases with high-resolution modeling.
Hence, this study was undertaken to understand the spatial distribution of radioactivity in the 40-km zone around the Fukushima reactors using a numerically accurate atmospheric dispersion model and then to assess the radiation risk from the simulation outputs using suitable risk models.

The objective of this work is, first, to estimate the yearly distribution of radiation exposure, up to lifetime, to populations distributed in a 40-km zone around the Fukushima reactors and, subsequently, to assess the health risk to members of the public. To meet the first objective, high-resolution flow-field simulations were conducted with the nested Weather Research and Forecasting (WRF)-Advanced Research WRF (ARW) mesoscale model in a 40-km range around the site, followed by dispersion simulations using the flexible particle (FLEXPART) Lagrangian particle dispersion model (LPDM) with available source term inputs. Suitable risk models (e.g., UNSCEAR 2006; WHO 2012b) were then used to estimate the probable long-term health impacts (lifetime risk) due to the spread of the radioactivity. Further, the number of years required before rehabilitation of these areas would be possible is also estimated. The details of the methods and numerical simulations are described first, and then the results of dispersion, dose, and health risk estimates are presented.

Methods

The health risk due to the radioactive fallout around Fukushima was evaluated using the environmental source-to-risk modeling approach provided in Slaper et al. (1994) and WHO (2012a, b). This consists of identification of release quantities, dispersion and deposition calculation, exposure, and risk assessment. The radiological risk assessment was made based on simulated air concentration and deposited activity with validation using measurements in and around the Fukushima prefecture. The details of numerical simulations, exposure, and risk assessment are given below.

Meteorological simulations

First, the meteorological conditions were simulated over the 40-km region with the ARW atmospheric model (Skamarock et al. 2008), followed by dispersion simulation with FLEXPART (Stohl et al. 2005). The ARW simulates the three-dimensional wind, potential temperature, geopotential, surface pressure, turbulent kinetic energy, and moisture (water vapor mixing ratio, cloud water). A multi-scale simulation was performed with ARW using four two-way interactive nested domains with horizontal resolutions of 27, 9, 3, and 1 km. To obtain realistic meteorological data for the dispersion assessment, a sequence of 10 simulations with the mesoscale model ARW, each of 48-h integration, was performed to cover the significant release period of 20 days from 11 to 31 March 2011. The initial and boundary conditions for each simulation were derived from the National Centers for Environmental Prediction (NCEP) 50-km resolution Global Forecast System (GFS) meteorological analyses and forecasts. The model grid configuration and physics options are given in Table 1. The time-varying three-dimensional meteorological fields at 1-h intervals from the fourth domain were used in the dispersion simulations.

Table 1 Details of the grid configuration and physics options used in the WRF model

Atmospheric dispersion simulations

Model description and configuration

Simulations of the atmospheric transport and dispersion of the accidental releases from the FDNPP were made using the FLEXPART-WRF Lagrangian particle dispersion model (Doran et al. 2008; Fast and Easter 2006; Stohl et al. 2005). It simulates the mesoscale transport, diffusion, dry and wet deposition, and radioactive decay of tracers. The computational grid of FLEXPART was configured with 300 × 300 horizontal grid points and 26 vertical levels, with the lowest layer at 0–25 m above ground level (AGL) and the model top at 6,000 m. A total of 300,000 pseudo-particles were released over the 48-h simulation period. The diffusion parameters are computed according to the local surface layer stability at each grid point through Hanna's flux-profile method (Hanna 1982). A single continuous simulation with FLEXPART was performed using hourly ARW outputs (horizontal and vertical winds, temperature, humidity, rainfall, cloud fractions, planetary boundary layer height, friction velocity, surface heat flux) and incorporating the hourly emission variations. The wet deposition was computed using scavenging coefficients in an exponential decay process (McMahon and Denison 1979) considering the cloud and rain mechanisms. The dry deposition of particles and gases was calculated by computing a deposition velocity (v d) with the resistance method (Wesely and Hicks 1977), wherein the crucial parameters, gravitational settling velocity, aerodynamic resistance, quasi-laminar sublayer resistance, and bulk surface resistance, are computed as per atmospheric, surface, and species characteristics. The aerodynamic resistance (r a) is computed with the flux-profile relationship (Businger et al. 1971) based on surface layer stability, taking the roughness parameter from the land use database. The quasi-laminar sublayer resistance (r b) is computed following the relation of Erisman et al. (1994) using the Prandtl number (~0.72), the kinematic viscosity of air (~0.15 cm2 s–1), and the molecular diffusivity of the species (~1.2) in air.
The surface resistance (r c) is computed following the Wesely (1989) relation, considering upper- and lower-canopy effects using the land use database of the study region. The parameters used in the dry deposition of particles (I131, Cs137, Cs134) are the density ρ (2.5 × 103 kg m–3), mean particle diameter d p (0.6 μm), and standard deviation about the mean size σ p (3.0 × 10–1 μm); those used in wet deposition are the scavenging coefficient A (1.0 × 10–4 s–1) for a precipitation rate of 1 mm/h and the rain rate dependency factor B (0.80) for both rain and snow. The gravitational settling velocity (v g) for particles is computed from the particle diameter and density using Stokes' law. Dry deposition is computed as the deposition velocity times the ground-level air concentration. Activity concentration and deposition were output at hourly intervals using a sampling rate of 900 s and a sampling average of 3,600 s.
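The dry and wet removal steps above can be sketched as follows. The particle size, density, and scavenging parameters A and B follow the values quoted above, while the resistance values (r_a, r_b, r_c) and the sample concentration are illustrative assumptions, not the model's internal values:

```python
import math

def settling_velocity(d_p, rho_p, mu_air=1.8e-5):
    """Stokes settling velocity (m/s) for a small particle of
    diameter d_p (m) and density rho_p (kg/m3)."""
    g = 9.81
    return g * rho_p * d_p ** 2 / (18.0 * mu_air)

def deposition_velocity(r_a, r_b, r_c, v_g):
    """Resistance-analogy dry deposition velocity (m/s)."""
    return v_g + 1.0 / (r_a + r_b + r_c)

def scavenging_coefficient(precip_mmh, A=1.0e-4, B=0.80):
    """Wet scavenging coefficient Lambda (1/s) = A * P^B."""
    return A * precip_mmh ** B

# 0.6-um particle of density 2.5e3 kg/m3 (I131/Cs137/Cs134 particulates)
v_g = settling_velocity(d_p=0.6e-6, rho_p=2.5e3)
# assumed resistances (s/m) for one grid point and stability state
v_d = deposition_velocity(r_a=50.0, r_b=300.0, r_c=100.0, v_g=v_g)
dry_flux = v_d * 1.0e3          # Bq m-2 s-1, for a 1e3 Bq/m3 ground conc.
# fraction of a column's activity remaining after 1 h of 1 mm/h rain
wet_fraction = math.exp(-scavenging_coefficient(1.0) * 3600.0)
```

For sub-micron particles the gravitational term is small (~10⁻⁵ m/s here), so the resistance sum dominates the deposition velocity.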

Source term inputs

The Fukushima accident is characterized by large temporal variation in the emission rate due to the explosions in the reactor units. Hence, it is important to consider the variation in emission quantities as well as the variation in wind and rainfall on those days in order to accurately simulate the spatial distribution of activity concentration and deposition with a dispersion model. The source term for the airborne releases from the FDNPP accident has been assessed (e.g., Chino et al. 2011; NSC 2011; Terada et al. 2012; Stohl et al. 2012, among others) using environmental monitoring data and inverse dispersion modeling. The release mainly occurred between 12 and 19 March 2011, with peak emissions between 14 and 15 March 2011. We used the temporal (hourly) emission quantities for Xe133 and Cs137 from Stohl et al. (2012) as they have a smaller error factor. The source term for I131 was taken from Chino et al. (2011). The release quantities for the four isotopes (1.5 × 1017 Bq for I131, 15.3 × 1018 Bq for Xe133, and 3.58 × 1016 Bq each for Cs137 and Cs134) were used in the present study. The iodine isotope I132 was not considered in the present calculation as it is short lived (half-life 2.29 h), and the source term and thyroid dose conversion factor for I132 are about 4 and 2 orders of magnitude less, respectively, than the corresponding values for I131.

Exposure assessment

The exposure assessment comprises calculation of the lifetime committed inhalation dose, the cloud shine due to passage of the radioactive cloud, the cumulative dose due to external exposure from activity deposited on the ground, and the cumulative ingestion dose due to consumption of contaminated food through which radioactivity enters from the deposited activity. The dose received was computed assuming that people remain at the same location for their lifetime.

The cloud shine (D c) for the period of release was computed for noble gases (Xe133) and particulates (I131, Cs134, Cs137) using a point kernel method wherein the gamma dose at each ground receptor was evaluated by dividing the plume into infinitesimal elements and integrating the dose distribution from each element over the entire volume of the cloud. An occupancy factor (fraction of time spent indoors, F oc) of 0.7 and location factor (reduction factor for shielding inside buildings, F loc) of 0.3 (Slaper et al. 1994) are considered in this pathway. The ground shine due to Cs137, Cs134, and I131 was computed by multiplying the time-integrated deposited activity with effective dose conversion factors for the respective isotopes (ICRP 1996) and summing up the dose from all contributing isotopes.

$$ \sum_{i=1}^{n}{\mathrm{DE}}_i=\sum_{i=1}^{n}{\mathrm{TID}}_i\cdot {\mathrm{DCF}}_i\cdot \left(1-{F}_{\mathrm{oc}}\cdot \left(1-{F}_{\mathrm{loc}}\right)\right)\cdot \frac{{e}^{-\lambda {t}_1}-{e}^{-\lambda {t}_2}}{\lambda}\cdot {F}_{\mathrm{penet}}\cdot {F}_{\mathrm{runoff}} $$
(1)

where DE i is the effective dose (in Sv) due to external exposure for a specific radionuclide i, TID i is the time-integrated deposition (in Bq m–2) for the period of the release (11–31 March 2011), and DCF i (Sv h–1 per Bq m–2) is the dose conversion factor for radionuclide i. F penet is the soil shielding factor (1 in the first month; 0.5 up to 1 year; 0.37 after 1 year) due to penetration of nuclides into the ground, and F runoff is a correction factor (~0.5 for urban and rural land) for runoff (Slaper et al. 1994). The external dose due to deposited activity was calculated over three separate time intervals: the first month following deposition, the period between the first month and 1 year after deposition, and beyond 1 year up to the lifetime of 89 years. The above equation was integrated between the first year (t 1) and each succeeding year t 2 (t 2 = 1, 2, 3, …, 89) in steps of years to obtain the cumulative ground shine dose.
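Equation (1) can be sketched numerically as follows; the deposition value and the Cs137 dose conversion factor below are illustrative assumptions, not the study's inputs:

```python
import math

def ground_shine_dose(TID, DCF, lam, t1, t2,
                      F_oc=0.7, F_loc=0.3, F_penet=0.5, F_runoff=0.5):
    """
    External (ground shine) dose in Sv between times t1 and t2 (h),
    following Eq. (1).
    TID: time-integrated deposition (Bq m-2)
    DCF: dose conversion factor (Sv h-1 per Bq m-2)
    lam: radionuclide decay constant (1/h)
    """
    occupancy = 1.0 - F_oc * (1.0 - F_loc)
    decay_integral = (math.exp(-lam * t1) - math.exp(-lam * t2)) / lam
    return TID * DCF * occupancy * decay_integral * F_penet * F_runoff

# Illustrative: 1 MBq/m2 of Cs137 over the first year (DCF assumed)
HOURS_PER_YEAR = 8766.0
lam_cs137 = math.log(2.0) / (30.1 * HOURS_PER_YEAR)   # half-life ~30.1 y
dose_y1 = ground_shine_dose(TID=1.0e6, DCF=2.1e-12,
                            lam=lam_cs137, t1=0.0, t2=HOURS_PER_YEAR)
```

The cumulative dose is then obtained by summing such terms over the contributing nuclides and evaluating t2 = 1, 2, …, 89 years, with F_penet switched according to the elapsed time since deposition.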

The inhalation dose (D inh) for particulates and vapors was computed by multiplying the inhaled activity with the effective dose conversion factor for inhalation and integrating over the contributing radionuclides:

$$ {D}_{\mathrm{inh}}={C}_{\mathrm{a}}\ {V}_{\mathrm{b}}\ {\mathrm{DC}}_{\mathrm{inh}}\ \left(1-{F}_{\mathrm{ind}}\right) $$
(2)

where D inh is the total inhalation dose (in Sv), C a is the time-integrated air concentration (Bq m–3), V b is the breathing rate (~1.2 m3/h for adults), and DCinh is the age-dependent, nuclide-specific effective dose coefficient for inhalation (Sv/h per Bq/m3). F ind is the location factor (~0.7) for the reduction of the dose due to being indoors. The inhalation dose was computed using the air concentrations from the Cs134 and Cs137 releases alone, as the bio-distribution of these isotopes in the body is homogeneous. The above equation was used separately to calculate the thyroid dose from the I131 (particulate form) air concentration, using the inhalation dose conversion factor for the thyroid.
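A minimal sketch of Eq. (2), written with a per-becquerel dose coefficient of the order of the ICRP value for Cs137; the coefficient and air concentration are illustrative assumptions, not the age- and nuclide-specific values used in the study:

```python
def inhalation_dose(C_a, V_b=1.2, DC_inh=4.6e-9, F_ind=0.7):
    """
    Committed inhalation dose (Sv), after Eq. (2).
    C_a: time-integrated air concentration (Bq h m-3)
    V_b: adult breathing rate (m3/h)
    DC_inh: dose coefficient (Sv per inhaled Bq; illustrative value)
    F_ind: location factor for indoor dose reduction
    """
    return C_a * V_b * DC_inh * (1.0 - F_ind)

# e.g., a time-integrated concentration of 1e5 Bq h/m3
dose = inhalation_dose(C_a=1.0e5)
```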

The ingestion dose occurs through intake of radionuclides in food. This can occur either by direct consumption of contaminated crops or indirectly through consumption of milk and meat from animals that consumed contaminated pasture and soil. The details of the food categories consumed in Japan and their consumption rates (g/person/day) were obtained from Japanese survey data reported in WHO (2012a). We considered the five main food categories consumed in Japan, namely vegetables, cereals, fruits, milk, and meat, as they account for 75 % of the total consumption, the rest being eggs, fish, and mushrooms. It was assumed that people consumed locally produced food. No correction factor was applied for the use of imported food, as its proportion relative to domestic food in the Japanese market is not known. The ingestion dose was calculated following the procedure outlined in Slaper et al. (1994). First, the time-integrated soil concentration C s resulting from the deposited activity G a (Bq m–2) was calculated following the procedure in IAEA (1982):

$$ {C}_{\mathrm{s}}=\frac{G_{\mathrm{a}}}{S\left(\lambda +{\lambda}_1\right)}\left(1-{e}^{-\left(\lambda +{\lambda}_1\ \right){T}_{\mathrm{end}}}\right) $$
(3)

where C s is the time-integrated soil concentration (Bq kg–1 day), G a is the total deposited activity density (Bq m–2), S is the mass of soil in the plough layer per unit area (kg m–2), obtained by multiplying the depth of the plough layer (0.30 m) by the soil density (kg m–3), λ is the decay constant of the radionuclide considered (day–1), λ 1 is a constant describing removal from the plough layer (day–1), and T end is the evaluation period in days. Equation (3) was solved over each year in the lifetime of 89 years, taking T end as the period in days up to the year of evaluation, to obtain the cumulative soil concentration up to each year in the evaluation period. The movement of radionuclides in the soil occurs through the hydrological processes of percolation by rainfall or irrigation and upward movement by evaporation, moderated by the soil's affinity for the radionuclide. Hence, instead of assuming a constant removal rate of radionuclides from the soil, it was evaluated considering the water balance and soil affinity using a relation given by Baes and Sharp (1983). The contamination of crops due to direct interception of deposited activity and uptake from the soil was computed using (IAEA 1982):

$$ {C}_p=\frac{G_{\mathrm{a}}\cdot {F}_{\mathrm{ip}}}{Y_p\left(\lambda +{\lambda}_{\mathrm{w}}\right)}\left(1-{e}^{-\left(\lambda +{\lambda}_{\mathrm{w}}\ \right){T}_{\mathrm{a}\mathrm{p}}}\right)+{C}_{\mathrm{s}}\ {B}_{\mathrm{v}} $$
(4)

where C p is the time-integrated radionuclide concentration in crops (Bq kg–1 day), F ip is the direct interception fraction for crop type “p,” λ is the physical decay constant (day–1), λ w is the rate constant for reduction of the deposited activity on the surface of vegetation due to processes other than radiological decay (per day), T ap is the time period during the growing season that crops can be contaminated through direct interception of deposited activity, Y p is the agriculture yield (in kg m–2), and B v is the concentration factor for uptake of the radionuclide from soil by edible parts of the crops (in Bq kg–1 plant tissue per Bq kg–1 of dry soil). Three groups of crops, namely vegetables, fruits, and cereals, were considered. The overall human ingestion of radionuclides was computed using

$$ {A}_i=\sum_{p=1}^{5}\left({I}_p\cdot {F}_{b,p}\cdot {C}_p\cdot {e}^{-\lambda {t}_{\mathrm{d}}}\right) $$
(5)

where A i is the total intake of radionuclide under consideration (Bq), I p is the human intake of food product p (kg day–1), F b,p is the reduction factor for removal of radionuclides due to food preparation processes, and t d is the time between harvesting/milking/slaughtering and consumption (day). λ is a physical decay constant of radionuclide (day–1). The total dose contribution from ingestion was calculated using

$$ \sum_{i=1}^{n}{D}_{\mathrm{ing},i}=\sum_{i=1}^{n}{\mathrm{DC}}_{\mathrm{ing},i}\ {A}_i $$
(6)

where D ing,i is the ingestion dose (Sv) for the radionuclide i under consideration, DCing,i is the dose conversion factor for ingestion (Sv Bq–1) for isotope i, and A i is the total intake of the radionuclide under consideration. The various parametric constants in Eqs. (3) to (6) were adopted from Slaper et al. (1994). The internal and external doses were summed from the first year up to the year under consideration to obtain the cumulative yearly distribution of effective dose up to the lifetime (89 years). The yearly distribution of organ doses for breast, bone marrow, and colon (obtained by multiplying the effective dose by the organ tissue weighting factor of 0.12) and for the thyroid was used in the health risk computation as described below.
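The ingestion chain of Eqs. (3) to (6) can be sketched as a single calculation. All parameter values below (plough-layer mass, removal constant, interception fraction, yield, transfer factor, consumption rate, dose coefficient) are illustrative assumptions, not the Slaper et al. (1994) constants:

```python
import math

def soil_concentration(G_a, lam, lam_1, T_end, S=420.0):
    """Time-integrated soil concentration C_s (Bq kg-1 day), Eq. (3).
    S: plough-layer soil mass per unit area (kg m-2), assumed here as
    0.30 m depth x 1,400 kg/m3."""
    k = lam + lam_1
    return G_a / (S * k) * (1.0 - math.exp(-k * T_end))

def crop_concentration(G_a, F_ip, Y_p, lam, lam_w, T_ap, C_s, B_v):
    """Time-integrated crop concentration C_p (Bq kg-1 day), Eq. (4):
    direct interception plus root uptake from soil."""
    k = lam + lam_w
    direct = G_a * F_ip / (Y_p * k) * (1.0 - math.exp(-k * T_ap))
    return direct + C_s * B_v

def radionuclide_intake(foods, lam):
    """Total intake A_i (Bq) of one radionuclide, Eq. (5).
    foods: iterable of (I_p, F_bp, C_p, t_d) tuples per food category."""
    return sum(I_p * F_bp * C_p * math.exp(-lam * t_d)
               for (I_p, F_bp, C_p, t_d) in foods)

def ingestion_dose(A_i, DC_ing):
    """Ingestion dose (Sv) for one radionuclide, Eq. (6)."""
    return DC_ing * A_i

# Illustrative Cs137 example with assumed parameter values:
lam = math.log(2.0) / (30.1 * 365.25)            # decay constant (1/day)
C_s = soil_concentration(G_a=1.0e6, lam=lam, lam_1=1.0e-4, T_end=365.0)
C_p = crop_concentration(G_a=1.0e6, F_ip=0.3, Y_p=2.0, lam=lam,
                         lam_w=0.05, T_ap=60.0, C_s=C_s, B_v=0.01)
foods = [(0.10, 0.8, C_p, 1.0)]                  # one vegetable category
D_ing = ingestion_dose(radionuclide_intake(foods, lam), DC_ing=1.3e-8)
```

Because the weathering constant λ_w is much larger than the physical decay constant for Cs137, the direct-interception term dominates during the growing season, while the root-uptake term C_s·B_v governs the long-term contribution.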

Assessment of health risk

Lifetime attributable health risk (LAR) in the 40-km range around the FDNPP was computed using generalized risk assessment models (NAS 2006; USEPA 1994, 1999; UNSCEAR 2006) described in WHO (2012b). The LAR represents the probability of a premature incidence of cancer attributable to radiation exposure in a representative member of the population (Kellerer et al. 2001, 2002; Thomas et al. 1992; Vaeth and Pierce 1990). For a given dose, the LAR is the additional cumulative probability of having cancer up to the lifetime (~89 years). The LAR was calculated using excess relative risk models. For a person of sex g, exposed to dose D at age-at-exposure e, the LAR at attained age a was calculated using the relation

$$ \mathrm{LAR}\left(D,e,g\right)={\displaystyle {\int}_{e+L}^{a_{\max }}M\left(D,e,a,g\right)}\frac{S_{\mathrm{aj}}\left(a,g\right)}{S_{\mathrm{aj}}\left(e,g\right)} da $$
(7)

where M(D,e,a,g) is the risk model, and S aj(a,g) is the probability of surviving cancer-free to age a, for the unexposed population. L is the minimum latency period; the ratio S aj(a,g)/S aj(e,g) is the conditional probability of a person alive and cancer-free at age-at-exposure e to reach at least an attained age a. The risk model M(D,e,a,g) is expressed as

$$ M\left(D,e,a,g\right)=\mathrm{ERR}\left(D,e,a,g\right)m\left(a,g\right) $$
(8)

where m(a,g) (also called the lifetime baseline risk (LBR)) is the baseline cancer incidence rate in the population. The LBR for a general population of any given region is the cumulative baseline probability of having a specific cancer over the lifetime and was taken from WHO (2012b). The minimum latency periods used were 5 years for leukemia, 3 years for thyroid cancer, 12 years for breast cancer, and 5 years for all solid cancers. Excess relative risk models (Preston et al. 2007; UNSCEAR 2006) for leukemia and all solid and thyroid cancers and an absolute risk model for breast cancer, as prescribed in WHO (2012b), were used in Eq. (8). The LAR was computed spatially using the distributed organ dose, considering adults (20 years), children (10 years), and infants (1 year) and males and females separately. The LAR for breast cancer was assessed for the female population alone.
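In practice, the integral in Eq. (7) is evaluated as a discrete sum over attained age. A minimal sketch with toy (assumed) risk, baseline, and survival inputs, illustrating the structure only:

```python
def lifetime_attributable_risk(dose, e, err, baseline, survival,
                               L=5, a_max=89):
    """
    Discrete-sum approximation to Eq. (7).
    err(dose, e, a): excess relative risk model (dimensionless)
    baseline[a]: baseline incidence rate m(a) per person-year
    survival[a]: probability of surviving cancer-free to age a
    L: minimum latency (years); a_max: lifetime horizon.
    """
    return sum(err(dose, e, a) * baseline[a] * survival[a] / survival[e]
               for a in range(e + L, a_max + 1))

# Toy inputs (assumed, for illustration only):
toy_err = lambda D, e, a: 0.5 * D                  # linear-in-dose ERR
toy_baseline = [0.001] * 90                        # flat incidence rate
toy_survival = [1.0 - 0.005 * a for a in range(90)]
lar = lifetime_attributable_risk(dose=0.1, e=20, err=toy_err,
                                 baseline=toy_baseline,
                                 survival=toy_survival)
```

The actual calculation substitutes the age-, sex-, and site-specific ERR/EAR models and Japanese baseline incidence and survival statistics from WHO (2012b) for the toy inputs above.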

Results and discussion

Results of atmospheric model simulations

Under the influence of a low-pressure system that formed over the Pacific Ocean and moved towards the east Japan coast, the low-level atmospheric circulation changed from westerly to easterly between 15 and 16 March 2011 (IRSN 2011). It has been reported that wet deposition occurred on land areas due to precipitation (rain/snow) during the dispersion of the radioactive plume under easterly/southeasterly wind flow (IRSN 2011). This has been confirmed from ground and air monitoring data. The outputs of the surface-level meteorological parameters from ARW were compared with the nearest available observations at three stations, Fukushima, Niigata, and Yamagata, located about 40, 90, and 100 km in the west, north, and northwest directions, respectively, from the FDNPP. The comparisons indicated fairly good agreement in simulated temperature, relative humidity, wind speed, and wind direction (Table 2), although slightly stronger winds, lower temperatures, higher humidity, and some wind direction deviation are noted. The surface (10-m level) wind flow pattern, rainfall intensity (mm/h) from the ARW high-resolution domain, and ground-level (0–25-m layer) plume distribution pattern of Cs137 concentration (Bq/m3) in the 40-km range are presented in Fig. 1 for 09 Japan Standard Time (JST) 15 March to 09 JST 16 March 2011. The simulated flow was initially northeasterly over the Pacific Ocean and northerly over the land, under which the plume moved to the south-southeast. As the flow changed gradually to northeasterly, easterly, southeasterly, and northerly, the plume moved in the clockwise direction. The model simulated rainfall of 1 to 5 mm/h on 15 March, coinciding with the onshore flow and the landward plume movement. These results demonstrate that ARW could simulate the local topographic winds, the change in the flow field, and the rainfall associated with the low-pressure front during the accident.
The dispersion simulations could capture the spatiotemporal variation of the plume trajectory resulting from the complex atmospheric flow and the deposition due to precipitation. These results corroborate and supplement earlier modeling studies (e.g., Katata et al. 2012; Srinivas et al. 2012).

Fig. 1

Simulated wind flow pattern (shown in arrows, m/s), rainfall intensity (in contours, mm/h), and Cs137 plume (in shaded contours, Bq/m3) at a 09 JST, b 12 JST, c 15 JST, d 18 JST, and e 21 JST on 15 March 2011 and f 09 JST 16 March 2011 (x axis longitude and y axis latitude)

Table 2 Statistical error metrics (mean bias, mean absolute error, root mean square error, correlation coefficient) for the WRF-simulated surface meteorological parameters
Fig. 2

Spatial pattern of Cs137 deposited activity (Bq/m2) from a simulation and b measurements by MEXT, Japan

Results of dispersion and dose simulation

Simulated deposited activity

Here, the distribution of radionuclides in the 40-km impact zone is analyzed, as it is important for understanding the effect of the accident. The simulated spatial pattern of time-integrated deposition of Cs137 between 11 and 31 March 2011 (Fig. 2) shows that the deposited activity was spread mainly in the northwest, west-northwest, and south-southwest sectors from the accident site according to the wind circulation and the precipitation during the release. The highest deposition is concentrated in a narrow zone in the northwest sector in both the simulation and the observations. The deposition pattern indicates that, of the total deposited activity in the 40-km land area, about 58.9 % is deposited within 12 km of the release point, 19.6 % in the next 10 km, and about 17.3 % in the next 18 km, with the highest values in the northwest direction. The first 20-km evacuation zone thus contains about 75 % of the total activity deposited on the 40-km land area. The Cs137 deposit map prepared by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) using measurements collected during April 2011 (Fig. 2b) indicates nine ranges of surface activity varying from 10 to >3,000 kBq/m2. The spatial measurements show densely distributed activity of 20-km width and 50- to 70-km length in the northwest sector of the FDNPP. The comparison indicates good agreement in the spatial pattern of deposited activity in the present simulations and highlights some asymmetries and hotspots in the activity deposition. While the interpolated observations (Fig. 2b) indicate a nearly uniform and narrow high-deposition (>3 MBq/m2) area up to 32 km in the northwest sector, the simulation shows a hotspot of deposition (>3 MBq/m2) in the first 10 km, followed by progressively decreasing activity levels (1 to 3 MBq/m2 from 10 to 20 km, 0.6 to 1 MBq/m2 from 20 to 30 km, and 0.3 to 0.6 MBq/m2 from 30 to 40 km).
The model slightly underestimated the deposition in the northwest region of the FDNPP, which is attributable to the underestimation of rainfall and the stronger winds simulated by ARW. The simulations also reveal high deposited activity in the south-southwest sector along the coast, which is slightly overestimated due to an overestimation of precipitation in that area. All the other areas adjacent to the dense northwest deposition zone show activity deposition of 0.3 to 0.6 MBq/m2. The most significant results are the first 10-km area, which is marked with the highest deposited activity and where the impact is expected to be the greatest, and another patch of high deposition in the south-southwest wind sector along the coast. Simulated Cs137 activity depositions were compared with measurements from monitoring stations reported by MEXT, Japan (http://www.mext.go.jp, WHO 2012a) for the Iitate, Namie, and Katsurao towns situated within a 30-km range in the northwest sector of the FDNPP. The comparison plot indicates that about 55 % of the simulated values agree with observations within one order of magnitude (Fig. 3).

Fig. 3

Comparison of observed and simulated Cs137 activity deposition at 50 locations in the 20-km fallout zone of Fukushima

The above dispersion results are subject to uncertainty resulting from limitations of (i) the input meteorological data derived from ARW predictions (winds, mixed-layer depth, precipitation, surface layer stability), (ii) the input source term, and (iii) the modeling of dispersion, deposition, etc. The meteorological model predicted slightly stronger winds, lower temperatures, higher humidity, and spatial differences in the precipitation pattern. These errors lead to stronger transport and diffusion and to less deposition in the dispersion calculation; the uncertainty introduced in this way amounts to a factor of about 3. Though the deposition is estimated using the detailed method of Wesely and Hicks (1977), the complex parameterizations use fit coefficients whose uncertainty is not exactly known, and a minimum uncertainty factor of 2 is assumed. The uncertainty factors in the source term estimates are given as 0.9 for Cs137 and 0.4 for Xe133 in Stohl et al. (2012) and 5 for I131 in Chino et al. (2011). Altogether, the uncertainty in the model results falls within a factor of 10 for the concentration and deposition. This is also evident in the comparison of simulated and measured deposition data, which indicated deviations of nearly a factor of 10, with the model underestimating the deposition. Hence, an overall uncertainty factor of 10 applies to the results of activity deposition in all the spatial ranges.

Simulated radiation doses

The simulated cloud shine and inhalation doses (Fig. 4), being dependent on the air concentration, are distributed in the north-northwest and south-southwest directions from the reactor. The time-integrated cloud shine dose (Fig. 4a) over the period 11–31 March 2011 indicates that a significant gamma dose exceeding 10 mSv extends to areas as far as ~5 km from the FDNPP along the coast in the north-northwest and south-southwest directions. The cloud shine dose reduces to 2 mSv in the 2- to 5-km zone, to 0.1 mSv in the 5- to 20-km range, and further to 0.01 mSv from 20 to 40 km. The inhalation dose (Fig. 4b) varies from 100 to 1,000 μSv within about 20 km in the north–south zone, and the dose falls rapidly to ~0.1 μSv in the westward direction. The highest inhalation dose of 1 mSv is found within a range of nearly 3 km from the release point. The simulated external air dose rate (also called ground shine) corresponding to the deposited activity as on 31 March 2011 (Fig. 5a) shows large ground shine dose rates, up to 100 μSv/h, in the first 2-km zone. The ground shine dose rate decreases to 50 μSv/h in the 10-km range, to 5 μSv/h in the 15-km range, and further to 1 μSv/h up to 40 km. Taira et al. (2012) reported ground shine dose rates, estimated using measurements from soil samples at Fukushima city (57 km from the FDNPP), of 1.9–2.9 μSv/h on 22 March 2012. The estimates of ground shine dose rates arrived at in the present study are in good agreement with these reported values. High ground shine dose rates corresponding to large deposited activity are noted in the northwest zone of the FDNPP. The simulated ground shine dose pattern is found to be in good agreement with the spatial external dose map reported by MEXT (Fig. 5b), but with an underestimation of the dose in the 20–30-km area in the northwest sector of the FDNPP. The simulated ground shine dose was compared with the monitoring data on external dose reported by MEXT, Japan, as on 11 April 2011 in the 40-km range from the FDNPP. The cumulative frequency of the ground shine dose values at the different monitoring spots (Fig. 6) indicates that the model values agree with the observed doses at 85 % of the monitoring stations within a factor of 5 and at about 95 % of the stations within a factor of 10. This indicates a reasonable simulation of the external dose, but with slight underestimation. The uncertainty in the estimation of the external dose falls within a factor of 5, arising primarily from the underestimation of the deposited activity.

Fig. 4
figure 4

Simulated a cloud shine dose (in mSv) due to Xe133, I131, Cs134, and Cs137 over the period 11–31 March 2011 and b committed inhalation dose (in μSv) from Cs134 and Cs137

Fig. 5
figure 5

Ground shine dose rate (in μSv/h) from a simulation as of 31 March 2011 and b measurements reported by MEXT, Japan

Fig. 6
figure 6

Comparison of simulated and observed external dose (in μSv/h) in the 40-km zone of Fukushima Dai-ichi NPP

The first year external dose in the 40-km range was calculated (Fig. 7a) from the simulated ground shine dose rates as of 31 March 2011 and compared with the corresponding data reported by MEXT, Japan. The MEXT-projected doses (Fig. 7b) were based on measurements of air dose rates up to September 2011. The air dose values from MEXT for the first year in the 60-km range in the northwest sector from FDNPP vary from 5 to 100 mSv. The reported external dose is about 100 mSv in the 0–5-km range and 20 mSv in the 5–15-km range, peaks again at 100 mSv in the 15–25-km range, and falls from 50 to 20 mSv between 25 and 40 km. Beyond 40 km, it reduces to 5 mSv at about 60 km. The simulated first year external dose varies from 75 to 90 mSv in the first 5-km range, 20 to 60 mSv in the 5–15-km range, 2–20 mSv in the 15–25-km range, and 1–2 mSv in the 25–40-km range in the northwest sector. While the simulation shows a gradual fall of external dose, the MEXT-projected data show a second peak at about 20 km from FDNPP. The highest doses from both simulation and monitoring data (~100 mSv) are distributed very near the reactor zone (<5 km) and correspond to the level associated with a significant excess risk of radiation-induced cancer. IRSN reported the external dose and the size of the population located in different contaminated areas based on the demographic data of the Fukushima prefecture (http://www.stat.go.jp), which is also presented in Fig. 7b. Assuming that evacuation was not implemented, the simulated projected external doses indicate that people living their entire lifetime in the affected areas would receive about 250–750 mSv in the first 2 km around the reactor, 100–250 mSv in the 2–10-km zone, 30–75 mSv in the 10–20-km zone, 10–30 mSv in the 20–30-km zone, and 2–10 mSv in the 30–40-km zone. The number of people receiving a high dose (≥100 mSv), if not evacuated, would be around 30,000 in the 30-km zone (Table 3).
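Projecting a first year dose from a dose rate snapshot amounts to integrating an exponentially decaying rate over one year. A hedged sketch of that integration, assuming the ground shine rate is split between Cs134 (half-life ~2.07 y) and Cs137 (~30.2 y) and ignoring weathering and short-lived nuclides (the study's actual projection method may include further terms):

```python
import math

HALF_LIFE_Y = {"Cs134": 2.065, "Cs137": 30.17}  # half-lives in years

def first_year_dose_mSv(rate_uSv_h, fractions):
    """Integrate an exponentially decaying dose rate over the first year.

    rate_uSv_h: ground shine dose rate at t=0 (μSv/h)
    fractions:  share of the dose rate from each nuclide,
                e.g. {"Cs134": 0.5, "Cs137": 0.5}
    """
    hours_per_year = 8760.0
    total_uSv = 0.0
    for nuclide, frac in fractions.items():
        lam = math.log(2) / HALF_LIFE_Y[nuclide]  # decay constant (1/y)
        # integral of R * f * exp(-lam*t) dt from 0 to 1 year
        total_uSv += rate_uSv_h * frac * hours_per_year * (1 - math.exp(-lam)) / lam
    return total_uSv / 1000.0  # μSv → mSv

# Example: 10 μSv/h as of 31 March 2011, activity split evenly between the isotopes
print(round(first_year_dose_mSv(10.0, {"Cs134": 0.5, "Cs137": 0.5}), 1))
```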

Fig. 7
figure 7

Spatial distribution of first year dose (mSv). a Simulated ground shine. b Ground shine dose from monitoring data (MEXT, Japan). c Simulated ingestion dose. d Simulated effective dose

Table 3 Activity deposition, external dose projected for the first year and lifetime (70 years) from simulation, and approximate number of people who would be affected assuming no evacuation

The ingestion dose that would be delivered to the locally residing population from intake of vegetables, milk, and meat for the first year is presented in Fig. 7c. As the ingestion dose arises indirectly from the deposited activity through soil, vegetation, food, and milk pathways, it closely resembles the pattern of deposited activity. Depositions of all three radionuclides (Cs134, Cs137, and I131) were considered in computing the lifetime ingestion dose. The ingestion dose during the first year is about 60 % of the ground shine dose. Large ingestion doses of 20 to 60 mSv are distributed within about 10 km around the FDNPP, and the dose falls to 5 mSv in the 10–15-km range, 2 mSv in the 15–25-km range, and further to 0.1 mSv at 40 km in the northwest zone. Considerable ingestion dose values of 1–3 mSv are also simulated in the south/southwest zone along the coast. Hotspots in the ingestion dose are noted near the source and at the 10-km range in the northwest sector. The cumulative lifetime ingestion dose varies from 50–100 mSv within a 2-km radius around the reactor, 20–50 mSv in the 2- to 10-km range, 7.5–20 mSv in the 10–20-km range, and 0.5–3 mSv in the range up to 40 km. The main uncertainty in the ingestion dose estimate arises from the input-deposited activity and the food categories used in the calculation. The deposited activity is underestimated by 1 order of magnitude. An additional uncertainty factor of 0.25 arises from the omission of the food categories fish, mushrooms, and eggs, which together account for nearly 25 % of the total food consumption rate in Japan. As per Eq. (5) in the “Exposure assessment” section, an uncertainty factor of 7.5 arises in the ingestion dose calculation. The ingestion dose is almost of the same order of magnitude as the ground shine dose in the first year, but it is about seven times less than the ground shine dose over the lifetime. The rapid fall in the ingestion dose over the lifetime is due to the biological decay of the activity (WHO 2012a).
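The ingestion pathway described above follows the standard form dose = Σ (annual intake × activity concentration in food × ingestion dose coefficient). A minimal sketch using the adult ingestion dose coefficients from ICRP 72 for the two cesium isotopes; all intake and concentration values below are illustrative placeholders, not the study's inputs:

```python
# Adult ingestion dose coefficients (Sv/Bq) from ICRP Publication 72
DOSE_COEFF_SV_PER_BQ = {"Cs134": 1.9e-8, "Cs137": 1.3e-8}

def ingestion_dose_mSv(foods):
    """foods: list of (annual intake in kg, {nuclide: concentration in Bq/kg})."""
    dose_sv = 0.0
    for intake_kg, conc in foods:
        for nuclide, bq_per_kg in conc.items():
            dose_sv += intake_kg * bq_per_kg * DOSE_COEFF_SV_PER_BQ[nuclide]
    return dose_sv * 1000.0  # Sv → mSv

# Placeholder diet: leafy vegetables and milk with assumed Cs contamination
foods = [
    (60.0, {"Cs134": 500.0, "Cs137": 500.0}),   # vegetables: 60 kg/y
    (100.0, {"Cs134": 100.0, "Cs137": 100.0}),  # milk: 100 kg/y
]
print(ingestion_dose_mSv(foods))
```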
The effective dose followed the deposition pattern, and the hotspots are located in the 20-km range (Fig. 7d). The computation shows that the total effective dose for all age groups that would be received as a result of exposure during the first year after the accident is about 150 mSv just near the FDNPP. It varies from 75 to 150 mSv in the first few kilometers from the reactor and falls to 30 mSv at 10 km, 7 mSv at 20 km, and 2 mSv at 40 km from the source. In most of Fukushima prefecture, from 10 to 20 km from the source, the effective dose for all age groups is estimated to be within a dose band of 1 to 20 mSv. In the other neighboring prefectures, the estimated effective dose varies from 0.1 to 2 mSv. The ingestion dose and external exposure are the major contributors to the effective dose. The uncertainty in the effective dose falls within a factor of 9, arising from errors in the deposition dose and the ingestion dose calculation. The lifetime effective dose is about five to six times the effective dose in the first year, and the major contributor is the cumulative ground shine dose. The dose from different pathways of exposure in the 40-km range is presented in Table 4. The computed thyroid dose for 11 to 31 March (Fig. 8) indicates high values (~150 mSv) in the first 5-km range for adults, reducing to 15 mSv at 10 km, 8 mSv at 20 km, and 1 mSv beyond 25 km from FDNPP. The child and infant thyroid doses at the above distance ranges are about 2 and 2.5 times higher, respectively, than the adult thyroid dose.

Table 4 Estimated radiation dose under different pathways and their variation with distance
Fig. 8
figure 8

Simulated thyroid dose pattern around the FDNPP for adults

The simulated yearly distribution of computed total dose in the 0–5-, 5–10-, 10–20-, and 20–40-km areas is presented in Fig. 9, from the first year (2011) over an integration period of 89 years up to 2099. The annual dose limit for the general public is 1 mSv (ICRP 2007). The spatial averages of total dose (Fig. 9a) indicate that maximum average doses of about 34, 15.6, 10.7, and 4.8 mSv occur during the first year in the 0–5-, 5–10-, 10–20-, and 20–40-km areas, respectively. The average dose reduces to the 1-mSv threshold in about 50, 16, 7, and 2 years, respectively, in the above spatial ranges, indicating relatively long reclamation times in the 0–5- and 5–10-km zones for the activity to reduce to acceptable limits, assuming no countermeasures are implemented. A short reclamation time of 7 years is sufficient in the 10–20-km zone for the total dose to reduce to the acceptable 1-mSv limit. The time distribution of the average dose along the footprint of the plume (the area of maximum deposition) in the northwest sector of FDNPP (Fig. 9b) indicates hotspots of risk that require special attention, as they have relatively longer reclamation times (75, 45, and 9 years in the 0–5-, 5–10-, and 10–20-km areas, respectively) than the corresponding areas across the maximum deposition zone. The time distribution of the average dose (Fig. 9a) indicates that the population from the presently evacuated areas can return in about 16 years (i.e., by 2027) in the 5–10-km area and 7 years (i.e., by 2018) in the 10–20-km area for safe habitation with minimum risk. The results indicate that a long period of about 50 years is required for the activity in the 0–5-km area around FDNPP to reduce to a safe limit for rehabilitation. The above analysis shows the priority areas where countermeasures should be implemented to reduce the risk.
It is to be noted that if reclamation measures are implemented in the high-risk area (i.e., along the plume footprint), the safe limit for rehabilitation would be achieved faster.
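The reclamation-time estimate amounts to finding the first year in which the annual dose, declining with the nuclides' radioactive decay plus environmental removal, drops below the 1-mSv public limit. A hedged sketch: the even Cs134/Cs137 split and the 5-year weathering half-life are illustrative assumptions, and the study's computation includes the full nuclide inventory, so the numbers below do not reproduce the Fig. 9 values:

```python
import math

def years_to_limit(first_year_dose_mSv, limit_mSv=1.0, split=(0.5, 0.5),
                   half_lives_y=(2.065, 30.17), weathering_half_life_y=5.0):
    """First year in which the annual dose drops below the limit.

    The annual dose is modeled as the first-year dose with each nuclide
    component decayed by its radioactive half-life combined with an
    assumed environmental weathering half-life.
    """
    lams = [math.log(2) / t + math.log(2) / weathering_half_life_y
            for t in half_lives_y]
    for year in range(1, 200):
        dose = first_year_dose_mSv * sum(
            f * math.exp(-lam * (year - 1)) for f, lam in zip(split, lams))
        if dose < limit_mSv:
            return year
    return None

# Illustrative: 10 mSv in the first year, split evenly between Cs134 and Cs137
print(years_to_limit(10.0))
```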

Fig. 9
figure 9

Simulated yearly distribution of total dose (mSv) in the 0–5-, 5–10-, 10–20-, and 20–40-km areas: a spatial average and b average along the northwest sector of FDNPP

Results of health risk assessment

Radiation-related health risks, derived using risk models from the estimated doses that would be received by members of the general population exposed to radiation around FDNPP, are presented here. No countermeasures such as sheltering, evacuation, and food control are accounted for in the following analysis. For the risk characterization, three age groups, namely infants (1-year-old), children (10-year-old), and adults (20-year-old), were considered in the first method, adopted from WHO (2012b), as they represent the younger and more sensitive members of the population. All the risk estimates were based on simulated organ doses for breast, bone marrow, colon, and thyroid. The average LAR values in two distance ranges, 0–20 km (evacuation zone) and 20–40 km (sheltering zone), representing the anticipated risks are presented in Fig. 10. The spatial variations in the LAR for leukemia, solid, breast, and thyroid cancers are presented, as an example for the male population, in Fig. 11.

Fig. 10
figure 10

Estimated average lifetime attributable risk (LAR) for a leukemia, b solid cancers, c thyroid, and d breast cancers for both male and female populations and different age groups in the 0–20-km range and 20–40-km range from the FDNPP

Fig. 11
figure 11

Estimated lifetime attributable risk (LAR) for a leukemia, b all solid, c thyroid, and d breast cancers for adult age group

The spatial average estimate of LAR for leukemia (Fig. 10a) shows that the near range (0–20 km) is a high-risk zone relative to the 20–40-km zone. In this zone, the average LAR varies from 15 to 30 in 10,000 for males and 10 to 15 in 10,000 for females considering all the age groups. In the far range (20–40 km), the average LAR varies from 3 to 6 in 10,000 for males and 2 to 3 in 10,000 for females for all age groups. In each zone, the average risk increases from adults to infants. The spatial distribution of LAR for leukemia (Fig. 11a) for male adults indicates high values in the northwest sector. In that sector, the leukemia risk is significant in the 1–10-km range of the nuclear power plant (NPP) (adults, 10 to 25; children, 20 to 30; infants, 20 to 40 in 10,000). The risk reduces to 10 in 10,000 from 10 to 40 km in all directions. For the female population, the risk reduces to 1 to 10 in 10,000 outside the 10-km zone in all directions. The leukemia risk for children and infants is roughly two times higher than that for adults in all distance ranges. The higher LAR for leukemia for males relative to females is due to the relatively higher LBR for males.

The risk for solid cancers is estimated to be nearly an order of magnitude higher than that for leukemia. The LAR for solid cancers is nearly the same for males and females because of the similar order of LBR in both. The spatial average estimate of LAR for solid cancers (Fig. 10b) shows that the 20–40-km range has about 50 % lower risk than the 0–20-km range. The average LAR in the 0–20-km range is about 35 to 170 in 10,000 for males and 45 to 210 in 10,000 for females for all age groups. In the 20–40-km range, the risk reduces to 20 to 100 in 10,000 for both males and females. In both zones, the risk is nearly five times higher for infants than for adults. The spatial distribution (Fig. 11b) indicates several asymmetries and hotspots in the anticipated risk. High risk rates for solid cancer incidence are noted in the northwest sector up to the 20-km range and in the south-southwest sector from 20 to 40 km around the FDNPP. For example, the first 1–5-km range in the northwest sector is characterized by the highest risk for solid cancers (roughly 1,000 in 10,000), and the risk reduces to 100 in 10,000 in the 5- to 20-km range. The estimated risk is higher in the northwest and south-southwest zones of the FDNPP, where relatively higher deposited activity was simulated. The spatial average LAR for thyroid cancer is presented in Fig. 10c. It varies from 1 to 13 in 10,000 for males and 2 to 20 in 10,000 for females in the first 0–20-km range for all age groups. In the 20–40-km range, the average LAR for thyroid cancer incidence decreases by about 40 % relative to the 0–20-km range. The thyroid risk is about 20 times higher for infants than for adults and about two times higher for females than for the male population. The spatial distribution of LAR for thyroid cancer (Fig. 11c) is nearly uniform in all directions from the FDNPP, as it results from air concentrations of I131.
The first 10-km range is characterized by the highest risk for thyroid cancer (~5 to 50 in 10,000 for adults), and the risk reduces rapidly to <1 in 10,000 farther away. The average LAR for breast cancer (Fig. 10d) varies from 750 to 2,000 in 10,000 in the 0–20-km range and 400 to 1,000 in 10,000 in the 20–40-km range. The risk for the infant age group is about three times higher than that for adults, and it falls by roughly 50 % from the 0–20-km zone to the 20–40-km zone. As in the case of leukemia and all solid cancers, the spatial distribution (Fig. 11d) indicates that the first 20-km range in the northwest sector of FDNPP is characterized by the highest risk for breast cancer (>1,000 in 10,000).

The risk calculations are influenced by the uncertainty in the risk model fit coefficients. The inherent risk model uncertainty falls within about 11 % for solid cancer, 28 % for thyroid cancer, and 17 % for breast cancer. The doses from ground shine and ingestion are the major inputs to the risk model. As discussed in the “Simulated deposited activity” section, the uncertainty in their estimates is about a factor of 10. This uncertainty in the input dose dominates the risk model uncertainty, so the final risk estimates carry nearly the same factor-of-10 uncertainty.
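The dominance of the input-dose uncertainty over the model uncertainty can be seen by combining independent multiplicative uncertainty factors in log space (a sketch of one standard approach, not necessarily the study's exact procedure; the factor 1.3 for the model is an illustrative placeholder):

```python
import math

def combine_factors(*factors):
    """Combine independent multiplicative uncertainty factors by
    adding their log-uncertainties in quadrature."""
    return math.exp(math.sqrt(sum(math.log(f) ** 2 for f in factors)))

# A factor-of-10 input-dose uncertainty swamps a small model uncertainty (~1.3):
print(round(combine_factors(10.0, 1.3), 2))  # combined factor stays close to 10
```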

Comparison of risk values with other methods/studies

The spatial estimate of LAR was also obtained by an alternative simple method: multiplying the lifetime effective dose by the ICRP (2007) general risk coefficient (5.7 × 10⁻² per Sv) and the tissue weighting factors for organ doses; the result is presented in Fig. 12. The LAR so obtained is about 10 in 10,000 in the first 5 km, 2 to 5 in 10,000 from 5 to 10 km, and 0.5 to 1 in 10,000 beyond 10 km. The average LAR is about 2 in 10,000 in the first 0–20 km and 1 in 10,000 in the 20–40-km range for all age groups and cancers. While the estimates obtained using the risk models provide a similar order of risk variation to the simple method for leukemia and thyroid cancers, they are about 1 order of magnitude higher for all solid and breast cancer risks.
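This alternative method reduces to a one-line calculation. The sketch below uses the 5.7 × 10⁻² per Sv coefficient stated above together with an ICRP 103 tissue weighting factor (0.04 for thyroid); the 750-mSv lifetime dose is an illustrative input, not a value from Fig. 12:

```python
# Simple ICRP-style estimate: LAR ≈ dose (Sv) × nominal risk coefficient × tissue weight
RISK_COEFF_PER_SV = 5.7e-2  # ICRP (2007) general risk coefficient

def lar_per_10000(lifetime_dose_mSv, tissue_weight=1.0):
    """Lifetime attributable risk expressed per 10,000 persons."""
    return lifetime_dose_mSv * 1e-3 * RISK_COEFF_PER_SV * tissue_weight * 1e4

# Illustrative near-field lifetime dose of 750 mSv, thyroid weighting factor 0.04
print(lar_per_10000(750.0, tissue_weight=0.04))
```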

Fig. 12
figure 12

Estimated lifetime attributable risk (LAR) using ICRP method for a leukemia, breast, all solid, and b thyroid cancers for all age groups

A more detailed way of calculating the risk is to adopt the risk models provided by UNSCEAR (2006), as done in this study. The WHO calculation also adopted the same risk models; however, it simplified the method of arriving at the effective body and organ doses, leading to differences in the risk estimates. The WHO health risk estimates were based on measured radiation data for the region within the Fukushima prefecture and on dispersion simulation for the region outside the Fukushima prefecture in Japan and the rest of the world. Those estimates are hence not spatially resolved, particularly for the region in the vicinity of the reactor. Therefore, we used the same WHO method, extended to the interior 0–20-km area using the simulated radioactive plume. The comparison of the average risk for males and females from all three methods is provided in Table 5. In the 20–40-km range, our present estimates are about 1 order of magnitude lower for leukemia and thyroid cancers and nearly 1 order of magnitude higher for all solid and breast cancers than the WHO estimates. The differences between the two sets of estimates arise mainly from the inputs and the approaches used in the lifetime dose and risk evaluation.

Table 5 Comparison of cancer risk estimates obtained from the current study with values obtained using ICRP method and reported estimates by WHO

The main uncertainty in the dose and risk calculation arises from the accidental source term and the organ tissue weighting factors. The present study considered only four significant radionuclides (Xe133, I131, Cs134, Cs137) as against the 16 species considered by WHO. The release quantities for Xe133, I131, Cs134, and Cs137 considered in the present dispersion calculation were about 30, 17, 80, and 60 % higher than the quantities considered by WHO for these isotopes. The projected first year effective doses from the present study (20–150 mSv in 0–30 km; 0.1–20 mSv in the >30-km area) are roughly two to three times higher than the corresponding WHO estimates (10–50 and 0.1–10 mSv). There are also differences in parameter choices, such as the types of food and the approaches for ingestion, inhalation, and cloud shine dose computation, which led to variations in the dose estimates. WHO simplified the inhalation dose by extrapolating air concentration from activity deposition and the cloud shine dose by using a semi-infinite plume method. It also simplified the lifetime effective dose as twice the first year effective dose, by analogy with the rapid temporal decay of doses after the Chernobyl accident. In the present study, after a detailed evaluation of the ingestion dose accounting for all the pathways and the food chain (soil, vegetation, cow, milk, meat, etc.), the total lifetime effective dose is found to be roughly six times the first year effective dose. To reduce the uncertainty in the risk estimates, WHO followed a piecewise integration method in steps of 15 years, as against the whole-lifetime integration used in the current approach. While the present study computes the organ doses using the general tissue weighting factors from ICRP 103, WHO considered radionuclide-specific organ doses from internal and external exposures separately for different age groups, using organ dose coefficients obtained from Jacob (1990).
A difference of one to two orders of magnitude is seen between the organ dose coefficients used by WHO and those used in the current study for breast and colon, especially through the inhalation and ingestion pathways. These differences led to a roughly 1 order of magnitude higher risk for breast and solid cancers in the current study. The modeling results from the present study are well within the range of the WHO report for leukemia and thyroid risks. The difference in the risk estimates for breast cancer and all solid cancer types is attributable to the differences in the source term, the first year effective dose, the lifetime-to-first-year effective dose ratio and its temporal distribution, as well as the differences in organ doses resulting from variations in the tissue weighting factors.

Conclusions

In the present work, simulation of the radiological plume dispersion and a health risk assessment were made for a 40-km impact region around the FDNPP for the accidental releases that occurred from 11 to 31 March 2011 following the east Japan earthquake and tsunami event. The objective was to understand and implement a risk model in a complex natural environment under an accidental release scenario in a realistic way with high spatial resolution. The study brings out the necessary input parameters and their significance in the risk model and explains a modeling approach for plume dose estimation under complex mesoscale atmospheric conditions. The radiological dose and risk assessment were based on outputs from high-resolution (grid size of 300 m × 300 m) atmospheric dispersion simulations using the FLEXPART-WRF LPDM and the available source term for airborne radioactivity. The study indicated large deposited activity and consequent high air dose rates in spatial ranges up to 40 km in the northwest sector from the FDNPP. The two important factors contributing to this high deposition and dose are (i) movement of the accidental plume over land from the Pacific Ocean as a result of a shift in the wind flow and (ii) precipitation from a low-pressure system that crossed the east coast of Japan during 15–17 March 2011. The ARW meteorological simulations agree well with observations, providing the accuracy needed in the flow field and plume trajectory calculations.

The dispersion computations revealed that about 75 % of the activity was deposited in the 0–20-km range, with the highest distributions in the northwest sector. The simulated deposited activity agrees with observations within an order of magnitude, and the simulated external dose rates agree with measurements within a factor of 5, indicating fairly good dispersion estimates. The northwest sector is marked with high external dose rates (5–100 μSv/h) as well as high first year (2–150 mSv) and lifetime effective doses (30 to 750 mSv) in the 0–20-km zone. Assuming no countermeasures such as evacuation and food control were taken, this zone is characterized by high risk (20 to 40 in 10,000 for leukemia; 100 to 1,000 in 10,000 for solid cancer; 1,000 in 10,000 for breast cancer) for all age groups. Further, the estimated average risk in the 0–20-km zone is also high (10 to 20 in 10,000 for leukemia; 35–200 in 10,000 for solid cancer; >750 in 10,000 for breast cancer) and reduces by nearly a factor of five in the 20–40-km zone for all cancer types. The study shows a large spatial asymmetry in the deposition, dose, and anticipated health risk patterns, with several hotspots in the first 20-km range, indicating the necessity of special attention by administrators and emergency responders. The analysis shows that the immediate implementation of countermeasures of evacuation in the 0–20-km zone and sheltering in the 20–40-km zone by the Japanese authorities should have significantly reduced the health risks. The dose and risk estimates arrived at in this work are comparable to the values reported by WHO (2012a, b), and the differences in respect of solid and breast cancer risks may be attributable to the inputs (source term, parametric values) and the modeling approach used in the present study. The main conclusions of the study are as follows:

  • The study brings out a comprehensive picture of the radiological impact of a nuclear emergency, in terms of atmospheric dispersion, dose exposure from different pathways, and the risk to different age groups of the population, by means of numerical simulation alone. Such an assessment could be carried out immediately following an accident so that the results would support rapid management and implementation of countermeasures in affected areas.

  • The simulations made it possible to identify the asymmetries as well as hotspots in the activity deposition, effective dose, and anticipated risk arising from the complex environmental flow characteristics, which is useful for accurate risk assessment by emergency responders.

  • The present estimations do not take any credit for countermeasures such as food control, sheltering, and evacuation, which could reduce the actual risk substantially.

  • The estimates in the 0–5-km area and along the plume footprint in the northwest sector of FDNPP indicate hotspots in risk and a high priority for the implementation of reclamation measures. The areas across the plume show less risk; people could be rehabilitated there in about 16 years in the 5–10-km area and 7 years in the 10–20-km area. Removal of the top soil in the hotspot areas would drastically bring down the time of return in those areas.

  • The risk values estimated here cannot be considered absolute, given the many uncertainties in the source term and other input parameters, for example the omission of the dose from seafood consumption and the assumption that all food consumed is locally produced. While the method is scientifically sound, the magnitude of the risk depends on realistic input data such as the accidental source term, demography, movement of the people, and actual local food intake.

Thus, a generic methodology for radiological risk assessment based on numerical dispersion simulation is demonstrated for the Fukushima accident case.