
1 Introduction

Limitations on the supplies of freshwater and fossil fuels drive research and policy on water and energy. These two resources are closely linked (Fig. 1), and in recent years the feedbacks between them, the water–energy nexus, have received increasing attention; governments and nongovernmental organizations have realized that policy must consider water and energy simultaneously. For example, in 2012, the International Energy Agency's (IEA) World Energy Outlook included a special section on the need for water in the energy sector, noting the importance of water scarcity in energy resource planning and the rapidly increasing use of water for energy [2]. The United Nations Water program will focus World Water Day 2014 on the interactions between water and energy and will devote its first annual themed World Water Development Report to the water–energy nexus [3]. In the United States, the Government Accountability Office (GAO) has written six reports since 2009 exploring the use of water for energy, the use of energy for water supply, and the need for integrated government information. In its most recent report, the GAO recommends that the Department of Energy coordinate a program involving multiple federal agencies to address the water–energy nexus [4]. Nongovernmental organizations such as the Pacific Institute and the River Network have also reported on the water–energy nexus. In June 2013, the World Bank issued its report “Thirsty Energy” detailing the use of water for energy and the need for integrated policy [5]. This proliferation of work indicates both growing awareness of the importance of feedbacks between water and energy and the current lack of integrated approaches. Many of the efforts toward integrated water and energy policy are in early stages, are based in the United States [5], and often focus on the need to supply water for the energy sector (e.g., [4]). However, the water–energy nexus will also create challenges in expanding and maintaining the potable water supply.

Fig. 1

A conceptual landscape illustrating some of the ways in which water and energy are related. Water is used in mining, irrigation of crops for biofuels, generation of electricity through hydropower, and cooling in thermoelectric power plants. Energy is used to treat, pump, and heat the water. Adapted from the US DOE (Figure I-1) [1]

The need to understand these feedbacks will intensify as demand for water and energy increases. The United States Energy Information Administration (US EIA) predicts a 1.5 % annual increase in global total energy consumption from 2010 to 2040, while population is expected to grow only 0.8 % annually ([6], Fig. 2). Cai and Rosegrant [7] predicted a 72 % increase in global domestic freshwater use from 1995 to 2025, with over 90 % of this increase occurring in developing countries, largely due to population growth and a 40 % increase in per capita water use. The energy use for water supply will also be affected by how people obtain water. Accessible and safe drinking water for all people is an important goal for global public health and economic opportunity. In 2010, 89 % of the world’s population obtained water from an improved water source, up from 76 % in 1990 [8]. While a water treatment and distribution system typical of a developed country is not required for a source to count as improved, the number of people obtaining water from piped on-premise systems, bottled water, and public taps has increased during this time [8]. Unimproved water sources can require a large expenditure of human energy, often in the form of walking long distances, while improvements in water supply and sanitation are typically accompanied by an increase in (nonhuman) energy use for water treatment and distribution.
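To make the gap between these growth rates concrete, a short back-of-envelope calculation (a sketch using only the annual rates cited above) shows how they compound over 2010–2040:

```python
# Illustrative compound-growth comparison using the EIA rates cited above
# (1.5 %/yr energy consumption vs. 0.8 %/yr population, 2010-2040).

YEARS = 30  # 2010 to 2040

energy_growth = 1.015 ** YEARS      # total energy growth factor
population_growth = 1.008 ** YEARS  # total population growth factor

# Per capita energy use grows by the ratio of the two factors.
per_capita_factor = energy_growth / population_growth

print(f"Energy consumption grows by a factor of {energy_growth:.2f}")
print(f"Population grows by a factor of {population_growth:.2f}")
print(f"Per capita energy use grows by a factor of {per_capita_factor:.2f}")
```

At these rates, total energy consumption grows by roughly 56 % over the period while population grows by roughly 27 %, so per capita energy use rises by about 23 %.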

Fig. 2

Predicted trends in population and energy consumption. Energy consumption is increasing at a faster rate than population. Data from [6]

This chapter provides an overview of the challenges that the water–energy nexus presents for expanding access to safe drinking water and supplying potable water.

2 Energy Demand for Potable Water Supplies

In countries in which access to safe drinking water is common, supplying water uses a significant amount of electricity. Across a range of scales, water and wastewater services combined account for 2–19 % of electricity use, and water services alone account for 0.5–3 % (Table 1). The variability in these values arises from differences in the energy intensity of water and wastewater services as well as the energy intensity of the overall economy. For example, water and wastewater services in India are expected to be more energy intensive because of poor initial water quality [11], while in China [14] and Texas, United States [15], water services account for a small percentage of total electricity use because of high electricity use across the rest of the economy. In California, the volumetric energy intensity of the water supply (the energy required per volume of water supplied, often expressed as kWh m−3) is high because of long-distance pumping of water to arid or semiarid areas [16].

Table 1 Percentage of total energy use in localities around the world devoted to water services or combined water and wastewater services

In coming years, the energy intensity of water supply will likely increase as overall water quality deteriorates and populations turn to poorer quality and more remote sources of water to meet growing demands. Drinking water supply in the United States requires an average of 0.51 kWh of energy per m3 of drinking water [10], with public water systems using more energy than private wells [17]. This analysis by Arzbaecher et al. [10] indicates that the energy intensity of water supply in the United States has increased since the often-cited study by EPRI [17], which estimated an energy intensity of 0.37–0.48 kWh m−3 for public water supply in the United States. Overall, energy use for water and wastewater services is expected to increase by approximately one-third over the next 20 years, partially due to the use of alternative water sources [18]. In Australia, traditional centralized water supply requires 0.39 kWh m−3, but the energy used for water supply will increase as the country turns to more energy-intensive water sources such as desalination (4.3 kWh m−3) and recycled water (1.7 kWh m−3) [19]. On a volumetric basis, electricity use for water in China increased 17 % (from 0.079 kWh m−3 to 0.094 kWh m−3) from 1997 to 2004 and in total increased 14 % from 2003 to 2005 [14]. The demand for energy in supplying safe drinking water is not limited to developed countries, as boiling is often used in developing countries to make water biologically safe to drink, though a lack of fuel, or of access to fuel, limits the viability of this solution for the poorest areas [20]. Other decentralized water treatment approaches such as ultraviolet light disinfection may also be limited by the availability of energy, though innovative water treatment approaches driven solely by gravity or incoming solar radiation are emerging as viable options [20].
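The energy burden of household boiling can be bounded from the heat capacity of water alone. The sketch below is my illustration rather than a figure from the chapter; it ignores stove losses and time at a rolling boil, so real fuel use is higher still:

```python
# Rough lower bound on the energy to boil water at the household scale,
# using only the specific heat of water (4.186 kJ/kg/K). Stove losses and
# sustained boiling are ignored, so actual fuel use is higher.

SPECIFIC_HEAT_J_PER_KG_K = 4186.0
RHO_KG_PER_M3 = 1000.0
J_PER_KWH = 3.6e6

def boiling_energy_kwh_per_m3(start_temp_c=20.0, boil_temp_c=100.0):
    delta_t = boil_temp_c - start_temp_c
    return RHO_KG_PER_M3 * SPECIFIC_HEAT_J_PER_KG_K * delta_t / J_PER_KWH

print(f"{boiling_energy_kwh_per_m3():.0f} kWh/m^3")  # ~93 kWh/m^3
```

At roughly 93 kWh m−3 before any losses, boiling is two orders of magnitude more energy intensive per volume than the centralized supply figures cited above (0.37–0.51 kWh m−3), which helps explain why fuel access limits this option for the poorest areas.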

Energy is required in a typical potable water system in a developed nation to move and treat water. The minimum energy required to supply safe drinking water in an area will depend on the topography, the location and quality of the source water, and the length of the distribution system. Beyond this, energy use will be affected by the type of treatment used, pumping efficiency, leaks, and other issues largely within the control of the public water utility. The nature and extent of treatment required depend upon both the initial water quality and the desired final water quality. For example, much of the water supplied to the City of New York (United States) does not require initial filtering because of high source water quality, while water supplied to cities such as Los Angeles, California (United States), requires more extensive treatment [21]. In this section, we focus on operational energy use in water supply rather than lifecycle energy use. The energy used for construction of water treatment plants is minimal compared to the lifetime operation of the plants [22–24]. In addition, while the energy used to heat water is very important in lifecycle accounting of the energy use due to potable water, this section covers the system from the water source through delivery to the user, excluding the activities of the user. The use of energy for potable water supply, and opportunities for decreasing this energy use, can be examined in terms of water transport/pumping and water treatment. The expanding use of both alternative water supplies, such as reclaimed water, and alternative energy to power water supply will also affect the future energy use of potable water.

2.1 Water Extraction

To supply potable water, water is extracted and moved from the source to the water treatment plant and then moved from the water treatment plant to the consumer, typically by pumping. These two pumping stages are the most energy-intensive part of the water supply cycle [17, 25] and the volumetric energy intensity varies widely among public water utilities. Carlson and Walburger [26] developed a benchmarking metric for water utilities in the United States. The metric shows that total volume of water, total horsepower, elevation, raw water pump horsepower, and distribution main length all positively correlate with total energy use. When combined with the quantity of purchased water, which negatively correlated with total energy use, these parameters explained 87 % of the variability in energy use of 176 public water utilities in the United States [26]. The high explanatory power of these variables demonstrates the strong influence of pumping on total water utility energy use.

2.1.1 Groundwater Extraction

Many public water systems and private water users extract water by pumping from an underground aquifer. These systems require about 30 % more energy than surface water systems, largely because of the vertical lift required [17, 27]. The energy use depends on pump efficiency and the depth to the water table. Rothausen and Conway [28] estimated that at 100 % efficiency a pump uses 0.0027 kWh of energy for each 1 m it lifts 1 m3 of water. However, additional energy is needed to maintain water pressures suitable for water treatment plants. For example, raising water 46 m requires 0.16 kWh m−3 [29], but supplying that same water at a pressure of 400 kPa requires 0.367 kWh m−3 [30]. The water pressure required will depend on the type of treatment used; for example, reverse osmosis requires higher water pressures than sand filtration. In addition, pumps rarely work at 100 % efficiency. Gay and Sinha [25] compared the minimum energy use for raw water intake, calculated from friction, static head (i.e., the required elevation increase), and pump efficiency, with the actual energy use of water utilities in Virginia. Excluding gravity-fed systems, the actual energy use for pumping was 1.2–27 times higher than the calculated minimum energy required. When compared to a theoretical ideal energy requirement, which does not account for pump efficiency, water loss, and required pressures, the actual energy use was 1.3–226 times higher [25]. Across a range of systems, Plappally and Lienhard [30] found that 0.004 kWh m−3 of energy was needed per meter of lift, almost 50 % higher than the 100 % efficiency estimate of Rothausen and Conway [28]. This gap between the minimum feasible energy and the amount of energy actually used represents an opportunity for energy savings, particularly where groundwater depletion increases the energy needed to pump groundwater.
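The figures above follow from the basic hydraulic relation E = ρgh. A minimal sketch (my illustration of that relation, with the 400 kPa delivery pressure converted to equivalent head) reproduces the ideal-case numbers:

```python
# Sketch of the hydraulic energy relation behind the figures above:
# lifting 1 m^3 of water by 1 m takes rho*g*h = 9,810 J, i.e., ~0.0027 kWh
# at 100 % efficiency, matching Rothausen and Conway [28].

RHO = 1000.0      # density of water, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2
J_PER_KWH = 3.6e6

def pumping_energy_kwh_per_m3(lift_m, delivery_pressure_kpa=0.0, efficiency=1.0):
    """Energy (kWh) to lift 1 m^3 of water by lift_m meters and deliver it
    at delivery_pressure_kpa, for a pump of the given efficiency."""
    pressure_head_m = delivery_pressure_kpa * 1000.0 / (RHO * G)  # Pa -> m of head
    return RHO * G * (lift_m + pressure_head_m) / J_PER_KWH / efficiency

print(pumping_energy_kwh_per_m3(1.0))          # ~0.0027 kWh/m^3 per m of lift
print(pumping_energy_kwh_per_m3(46.0))         # ~0.13 kWh/m^3 (46 m, ideal)
print(pumping_energy_kwh_per_m3(46.0, 400.0))  # ~0.24 kWh/m^3 (46 m + 400 kPa, ideal)
```

The cited values of 0.16 and 0.367 kWh m−3 exceed these ideal results, consistent with the point that real pumps operate below 100 % efficiency.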

2.1.2 Surface Water Extraction

In ideal conditions, water systems in which surface water is the source can rely on gravity to move water from the reservoir to the water treatment plant. However, pumping is often needed to transport raw surface water to the treatment plant, using an average of 0.32 kWh m−3 of energy in the United States [17]. As discussed in Lawson [21], New York City and Los Angeles represent extremes of the energy intensity required to supply public water in the United States. Water from upstate New York is gravity fed to New York City to supply the urban population, while water is supplied to Los Angeles through the California Aqueduct, which requires 2.09–2.62 kWh m−3 of energy depending upon its path [21]. While raw water is transported long distances to both cities, desirable topography creates a much less energy-intensive water system in New York City. As summarized in Plappally and Lienhard [30], long-distance transport of surface water requiring significant energy inputs is not unique to Los Angeles. For example, installed and proposed projects in the United States, Australia, and Spain transport water distances up to 744 km, with energy use per unit distance ranging from 0.002 to 0.007 kWh m−3 km−1 [30]. Topography, particularly the need to pump over mountain ranges, affects the energy intensity of transporting surface water, as seen in the much greater energy requirements of supplying water to Tijuana, Mexico, than to other Mexican cities [31]. In addition to topography and distance, the energy required to transport raw water also increases as corrosion and friction increase in aged pipelines [25].
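Multiplying the per-distance intensities by project length gives a sense of scale; a sketch using only the range cited from [30]:

```python
# Back-of-envelope energy for long-distance raw-water transport, using the
# per-distance intensity range summarized in Plappally and Lienhard [30]
# (0.002-0.007 kWh per m^3 per km, projects up to 744 km).

def transport_energy_kwh_per_m3(distance_km, intensity_kwh_per_m3_km):
    return distance_km * intensity_kwh_per_m3_km

for intensity in (0.002, 0.007):
    total = transport_energy_kwh_per_m3(744, intensity)
    print(f"744 km at {intensity} kWh/m^3/km -> {total:.1f} kWh/m^3")
# 1.5-5.2 kWh/m^3, a range bracketing the 2.09-2.62 kWh/m^3 cited for the
# California Aqueduct [21].
```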

2.2 Water Distribution

After treatment, pumps are again used to deliver water to consumers. Approximately 85 % of the energy used to supply potable water in the United States goes to pumping for distribution [32]. The energy intensity of water distribution varies widely, with reported values from 0.015 to 2.4 kWh m−3 and lower values typical for greater volumes [30]. In some urban areas, such as Oslo, Norway, water distribution energy requirements can be less than water treatment energy requirements [33], though this is not typical. Distance, elevation change, pumping efficiency, required pressure, and pipe characteristics all affect the amount of energy required to transport water. Piratla et al. [34] estimated that the pumping energy required for a potable water distribution pipeline would be 3.5 % higher with a ductile iron pipe than with a PVC-O pipe because of the increased friction due to corrosion in the ductile iron pipe.

While the amount of energy required to withdraw raw water and distribute treated water depends on topography and the location of the water source [35], water utilities can reduce pumping energy requirements through improved system design. For example, in the United States, over 200,000 water main breaks occur per year [36], resulting in significant loss of treated and pressurized water and, therefore, of energy. In Oslo, approximately 20 % of the water in the distribution network is lost to leaks [33]. Pipe replacement and repair will minimize these losses and the total volume of water that must be supplied from the water treatment plant. In addition, pipe replacement and repair can reduce the friction losses due to corrosion and, therefore, significantly reduce the energy required to distribute water [37, 38]. Improvements in pumping efficiency, including appropriate sizing and the use of variable frequency drive pumps, can also greatly reduce the energy demand of water pumping. For example, Arzbaecher et al. estimate that in the United States, improvements in pump and motor systems could save 2,600–7,800 million kWh of energy annually [10]. Pump efficiency is particularly important in regions that rely on groundwater for water supplies. As noted earlier, declining water tables and deeper water wells will increase the energy required to pump water. For example, supplying groundwater from 37 m below the surface requires 143 kWh per 1,000 m3, while supplying groundwater from 120 m below the surface requires 528 kWh per 1,000 m3 [39].
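Comparing these well-depth figures with the ideal hydraulic requirement from Sect. 2.1.1 suggests the overall pumping efficiencies they imply. This is my inference from the cited numbers, not a result reported in [39]:

```python
# Comparing the cited groundwater figures [39] against the ideal hydraulic
# requirement (~0.0027 kWh per m of lift per m^3) to infer the implied
# overall pumping efficiency. An illustration, not a result from [39].

IDEAL_KWH_PER_M_PER_M3 = 0.0027

for depth_m, reported_kwh_per_1000_m3 in [(37, 143.0), (120, 528.0)]:
    reported = reported_kwh_per_1000_m3 / 1000.0    # kWh/m^3
    ideal = IDEAL_KWH_PER_M_PER_M3 * depth_m        # kWh/m^3 at 100 % efficiency
    print(f"{depth_m} m: reported {reported:.3f} kWh/m^3, "
          f"ideal {ideal:.3f} kWh/m^3, implied efficiency {ideal / reported:.0%}")
# ~70 % at 37 m and ~61 % at 120 m, leaving room for the efficiency
# improvements discussed above.
```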

2.3 Water Treatment

Between withdrawal from the raw water source and distribution, water must be treated. For groundwater systems, this treatment is often minimal and may include only disinfection, because in most areas groundwater is considered a relatively clean water source compared to surface water. In some areas, however, groundwater can be chemically contaminated and require treatment similar to that of surface water. Treatment of surface water is typically more extensive and includes filtration, settling, and often multiple forms of disinfection.

The type of treatment used by the water utility will affect the energy intensity of the treatment. Electricity consumption for water treatment can vary from 0.05 to 0.7 kWh m−3, depending on initial water quality and treatment technique [35]. Chlorine has historically been the primary disinfectant used in drinking water treatment. This form of treatment requires a relatively small quantity of energy (Fig. 3), but health concerns and regulations in many locations are leading to the replacement of chlorine with more energy-intensive means of disinfection. Using advanced water treatment (ozone or microfiltration/ultrafiltration) instead of conventional water treatment can increase the energy use of a 10-mgd (3.8 × 104 m3 per day) water treatment plant by over one million kWh per year [41]. Elliot et al. [42] found that using microfiltration would increase energy use by 0.18 kWh m−3, while ozone disinfection would use 0.03–0.15 kWh m−3 [29].
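The annual figure follows from the plant capacity and the incremental intensity; a quick check using the microfiltration increment from [42]:

```python
# Annual-energy check on the advanced-treatment figure above: a 10-mgd
# (3.8e4 m^3/day) plant adopting microfiltration at the incremental
# intensity reported by Elliot et al. [42].

PLANT_M3_PER_DAY = 3.8e4
EXTRA_KWH_PER_M3 = 0.18  # microfiltration increment from [42]

extra_annual_kwh = PLANT_M3_PER_DAY * EXTRA_KWH_PER_M3 * 365
print(f"{extra_annual_kwh / 1e6:.1f} million kWh per year")
# ~2.5 million kWh/yr, consistent with the "over one million kWh per year"
# increase cited from [41].
```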

Fig. 3

Energy requirements for chlorination [29] and advanced freshwater treatment ([40], based on Table 1 from [21]). Disinfection by chlorine requires much less energy per volume of water than other treatment techniques

These more energy-intensive water treatment processes are gaining favor due to deteriorating source water quality and increasingly strict drinking water standards. For example, the increase in the energy intensity of China’s water supply from 1997 to 2004 is attributed to enhancement of the water supply systems to meet new water quality standards [14]. In the United States, two recent drinking water regulations, the Stage 2 Disinfectants and Disinfection Byproducts Rule (Stage 2 DBPR) and the Long Term 2 Enhanced Surface Water Treatment Rule (LT2ESWTR), could each increase the energy used for drinking water supply by over 100 million kWh per year [12, 43]. The effects of these regulations are not cumulative, because a single technology may satisfy the requirements of both, but they still represent a noticeable increase in the energy requirements for water treatment. In addition, in a study comparing the energy intensity of water and wastewater services in India and the United States, Miller et al. [11] found that while wastewater treatment was typically more energy intensive in the United States, water supply and treatment in India were more energy intensive than wastewater treatment, a difference attributed to poorer initial source water quality in India.

Energy use for water treatment will likely increase in the future as source water quality deteriorates. New contaminants of concern, such as pharmaceuticals, may require new treatment techniques. For example, standard reverse osmosis is effective at removing organic micropollutants such as personal care products, but low-energy reverse osmosis is less effective at removing these contaminants [44]. Reverse osmosis requires more energy per volume of water than other treatment techniques such as ozone and ultraviolet disinfection (Fig. 4).

Fig. 4

Energy requirements for conventional freshwater treatment, advanced freshwater treatment, and treatment of alternative water supplies. Data from [35]. Because the energy required for distribution is similar regardless of the water source, freshwater is the less energy-intensive supply when the energy use from source to treatment is relatively low

2.4 Alternative Water Sources

Due to water stress, alternative water supplies such as recycled/reclaimed water (treated wastewater used for direct potable, indirect potable, or non-potable purposes) and desalination are gaining popularity in many countries. The energy required to treat alternative sources is highly dependent upon the initial source water quality [45]. For example, a study of potential alternative water sources for southern California cities found that recycled water and imported water were two to five times less energy intensive than desalination, largely because of the energy-intensive reverse osmosis systems used in desalination [18]. Similarly, Kajenthira et al. [46] showed that using treated wastewater instead of desalination in six inland cities in Saudi Arabia would save 4.0 × 109 kWh of energy annually.

Despite the high energy intensity of desalination, water stress in many areas is leading to an expansion of desalination capacity (Fig. 5). Reverse osmosis (RO) and thermal (evaporation/distillation) desalination are the most commonly practiced desalination technologies. Reverse osmosis has high energy requirements: 3.9 kWh m−3 in one case study [35] and typical energy use between 3.5 and 4.5 kWh m−3 [48]. The energy required for reverse osmosis depends on the level of salinity, with higher salinity water requiring greater energy consumption [49]. With optimal efficiency, RO systems can consume as little as 1.6 kWh m−3 [50], but this is atypical. Even so, RO systems are less energy intensive than thermal desalination, an approach used mostly in fuel-rich countries [48]. Energy recovery provides an opportunity for improved energy efficiency in the desalination process, either through heat recovery or by using the brine discharge to turn a turbine for generating electricity [48]. Raluy et al. [51] showed that when heat is recovered from thermal desalination projects, the environmental impact is similar to that of desalination by reverse osmosis.
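Pulling together the intensities quoted in this section and in Sect. 2 gives a rough ranking of supply options. This is a sketch for comparison only, since local values vary widely:

```python
# Energy intensities of supply options quoted in this chapter, normalized
# to conventional centralized supply in Australia [19]. For comparison
# only; local values vary widely.

INTENSITIES_KWH_PER_M3 = {
    "conventional centralized (Australia) [19]": 0.39,
    "recycled water (Australia) [19]": 1.7,
    "seawater RO, best case [50]": 1.6,
    "seawater RO, typical midpoint [48]": 4.0,  # midpoint of 3.5-4.5
    "desalination (Australia) [19]": 4.3,
}

baseline = INTENSITIES_KWH_PER_M3["conventional centralized (Australia) [19]"]
for source, kwh in INTENSITIES_KWH_PER_M3.items():
    print(f"{source}: {kwh:.2f} kWh/m^3 ({kwh / baseline:.1f}x baseline)")
```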

Fig. 5

Global installed desalination plant capacity 1945–2004. Desalination capacity has increased dramatically. Data from Pacific Institute [47]

Reclaimed wastewater represents another alternative water source. Wastewater reuse can be categorized as direct (treated wastewater is pumped directly for use) or indirect (treated wastewater is discharged into a water body that is used as a source for potable water treatment) and as potable or non-potable use. In most developed countries, primary and secondary wastewater treatment is required before the water is discharged to the environment. The addition of tertiary (advanced) treatment makes this water suitable for non-potable uses. Wastewater treatment is generally energy intensive, so adding tertiary treatment to a plant typically does not significantly increase the plant’s energy use, but pumping energy can be significant if the tertiary treatment is not ideally located, as can happen when tertiary treatment is added after the plant is built [52]. When used for non-potable urban and agricultural purposes, reclaimed water can actually have a lower cumulative energy demand than traditional water sources [53, 54]. Other studies, such as Rygaard et al. [55], show that wastewater reclamation is a more energy-intensive water supply approach than traditional freshwater or groundwater. This difference may stem from the intended use and from the scope of treatment attributed to the wastewater reclamation process. Treating raw sewage to potable water standards is logically far more energy intensive than treating surface water or groundwater to potable water standards, but the additional treatment needed to bring treated wastewater that is ready for discharge up to non-potable standards is likely small and may decrease as standards for wastewater treatment become more stringent. Differences also arise from the type of treatment used, with membrane technologies typically requiring more energy [56]. Most calculations of the energy intensity of reclaimed water (for example, the references reported in [30]) indicate that reclaimed water is more energy intensive than traditional water supplies, but the results are highly variable.

2.5 Alternative Energy Sources

In the face of increasing water demand and increasingly energy-intensive water systems, alternative and renewable energy sources can dramatically reduce the impacts of water production and facilitate the expansion of drinking water services. While renewable energy at wastewater treatment plants, such as biogas generation, has received more attention than renewable energy at potable water treatment plants, renewable energy options for potable water are expanding. Solar energy has perhaps received the most attention for water treatment. For example, solar energy can be used to provide heat for desalination processes or can be used directly to disinfect water [57]. At a household scale, solar energy can be used by simply exposing water to sunlight, preferably concentrated by lenses, mirrors, or aluminum foil, to produce bacteriologically safe water [58]. Solar energy can also power remote water pumping stations at a cost savings over diesel pumps [59] and can be used in distillation processes to provide safe drinking water in areas with arsenic contamination [60]. While solar energy is well suited to direct application in water treatment, other renewable energy sources can also provide electricity to water treatment plants and pumping systems. Lifecycle analysis demonstrates that supplying water treatment plants with renewable energy significantly decreases their environmental impact [35, 61]. In comparing the environmental impact of a real nanofiltration water treatment plant and a virtual conventional water treatment plant with granular activated carbon, Bonton et al. [62] found that the use of hydropower to supply the nanofiltration plant greatly reduced its environmental impact, even though nanofiltration is much more energy intensive than conventional filtration with granular activated carbon.

In summary, creating a clean, reliable water supply requires significant energy use, and these energy requirements appear to be increasing. The relative importance of pumping and treatment varies with the type of system [18], and energy use for both treatment and pumping will increase as populations use water of poorer initial quality and are forced to transport water greater distances. In addition to the energy used to supply drinking water, water use within buildings requires significant amounts of energy, particularly for heating, making water conservation important [1]. Energy use efficiency must be considered when improving existing water systems and developing new ones.

3 Water Use for Electricity Generation

Electricity generation uses large quantities of water and can be in competition with potable water supply while also damaging water quality. In the United States alone, power plants generated close to 4 trillion kWh of electricity in 2010, with 89 % of this electricity generation requiring cooling, typically using freshwater [63]. In the United States, thermoelectric power plants withdraw 7.6 × 108 m3 of water (freshwater and saline) per day, 49 % of all water withdrawals and 40 % of all freshwater withdrawals [64]. Water is also used in the mining and processing of fuels. Based on estimates of water use for mining in the United States (not including transportation and processing of coal), the US Department of Energy [1] calculates that coal mining uses 260,000–940,000 m3 of water per day.

Much of the water used in electricity generation is used in thermoelectric power plants (including coal, natural gas, concentrating solar power, biomass, and nuclear). In thermoelectric power plants, heat from the fuel source is used to produce steam, which turns a turbine to generate electricity. Water is primarily used for cooling the steam in a condenser. In once-through cooling systems, the cooling water is immediately returned to a surface water body, while in recirculating or closed-loop systems, the same water is used multiple times for cooling. Power plants with recirculating systems withdraw (take from the surface or groundwater body) about 2 % of the water volume withdrawn for once-through cooling systems. However, a greater total quantity of water is evaporated in closed-loop cooling systems making the water consumption (removal of the water from the local system) higher in recirculating systems [65, 66]. Historically, virtually all thermoelectric power plants used once-through cooling systems, but these cooling systems are becoming less common due to concerns about thermal pollution caused by the discharge of the heated water. Virtually all new power plants use recirculating cooling systems [67].

The change to recirculating cooling systems has implications for local water resources. Chen et al. [65] found that in the United States alone, freshwater withdrawal for thermoelectric power increased from 1.1 × 108 m3 per day in 1950 to 5.4 × 108 m3 per day in 2005, with most of this increase occurring before 1975. Water withdrawals from 1975 to 2005 remained relatively constant while electricity generation increased dramatically because of the change from once-through cooling systems to recirculating cooling systems [65]. The lower water withdrawal of closed-loop cooling reduces overall withdrawals, or allows them to remain constant as electricity production increases, but increases consumption [65, 68–70]. Shifts in cooling system type alone may result in a 10 % increase in water consumption for electricity generation by the end of this century [70]. This increase in consumption may increase competition between electricity generation and potable water supply.
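The withdrawal/consumption trade-off can be illustrated with hypothetical numbers built around the roughly 2 % withdrawal ratio cited above; the consumption fractions below are assumptions for illustration, not values from the chapter:

```python
# Hypothetical illustration of the withdrawal vs. consumption trade-off
# between cooling types, using the ~2 % withdrawal ratio cited above
# [65, 66]. The consumption fractions are assumed for illustration only.

once_through_withdrawal = 100.0                       # arbitrary volume units
recirc_withdrawal = 0.02 * once_through_withdrawal    # ~2 % of once-through

once_through_consumption = 0.01 * once_through_withdrawal  # assume ~1 % evaporates
recirc_consumption = 0.80 * recirc_withdrawal              # assume most evaporates

print(f"Once-through:  withdraws {once_through_withdrawal:.0f}, "
      f"consumes {once_through_consumption:.1f}")
print(f"Recirculating: withdraws {recirc_withdrawal:.0f}, "
      f"consumes {recirc_consumption:.1f}")
# Recirculating withdraws ~50x less water, but because most of its
# withdrawal evaporates, its total consumption can exceed once-through.
```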

Technology designed to limit carbon emissions also affects water use. Reducing carbon emissions from coal-fired power plants can increase water use because of amine-based carbon capture practices that use water for cooling, increased electric demand from the parasitic load of carbon capture and storage (CCS) technology, and the demand for water in sulfur scrubbers [68]. In Texas (United States), sulfur controls require an additional 2 × 107 m3 of water for electricity generation per year [71]. The National Energy Technology Laboratory (NETL) [72] estimates an increase of 58–91 % in water consumption when CO2 capture is installed in a coal or natural gas power plant. These values are similar to estimates in Texas that carbon capture at pulverized coal power plants will increase water consumption by 95 %, or 2.2 m3/MWh [71], and to a study by Zhai et al. [73] that found that consumptive water use at coal-fired power plants doubled with the addition of amine-based CCS.
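The two Texas figures together imply a baseline consumption rate; a small arithmetic check (my inference from the cited numbers, not a value reported in [71]):

```python
# Arithmetic behind the Texas CCS estimate [71]: a 95 % increase equal to
# +2.2 m^3/MWh implies a pre-CCS consumption of roughly 2.3 m^3/MWh.
# An inference from the cited numbers, not a value reported in [71].

increase_m3_per_mwh = 2.2
increase_fraction = 0.95

baseline = increase_m3_per_mwh / increase_fraction
print(f"Implied baseline: {baseline:.1f} m^3/MWh")
print(f"With CCS:         {baseline + increase_m3_per_mwh:.1f} m^3/MWh")
```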

Finally, as energy use continues to grow, the relative contributions of different energy sources will change (Fig. 6), and the type of energy source affects water use. In many climate change and energy debates, renewable and nonrenewable energy sources are treated as opposite ends of the spectrum; for water use in electricity generation, thermoelectric versus non-thermoelectric is a more useful categorization. Across a broad range of data, thermoelectric power sources generally have the highest water use per megawatt-hour, with most of this water used for cooling regardless of fuel type [68, 74]. For non-thermoelectric power sources such as wind and solar PV, operational water use is generally low, though significant quantities of water may be used in manufacturing equipment [68]. The exception is hydropower: estimates of hydropower water use vary widely, largely based on how much evaporation from the reservoir is attributed to the hydropower plant [68]. Because reservoirs are typically multiuse (water supply, recreation, etc.), attributing all evaporative losses to hydropower likely overestimates its water use [70]. Transitioning to renewable energy sources may increase or decrease total water use, depending on the balance between water-intensive renewable sources (such as biomass) and sources such as wind and solar PV [66]. Studies have consistently shown that wind and photovoltaic power use the least water [66, 70, 74]. However, further research is needed to more accurately estimate the water intensity of electricity sources such as geothermal and biomass.

Fig. 6

Global electricity production by type. Electricity production overall is increasing, with renewables growing at a faster annual percentage rate than other types. Data from [6]

Translating the increased water use of individual technologies into overall future water use requires more than simple multiplication, because installing carbon capture and storage is not the only option for meeting carbon reduction goals. For example, under possible climate policy initiatives in the United States, Chandel et al. [63] found that increases in electricity prices will decrease overall electricity use and that changes in the energy mix will actually decrease water withdrawals but may increase consumption in some regions. In addition, using post-combustion capture technologies or integrated gasification combined cycle plants instead of pulverized coal plants (both with carbon capture) could provide a more water-efficient means of reducing carbon emissions [63]. The National Energy Technology Laboratory (NETL) [72] provides an estimate of the maximum impact of CCS as 1.4 × 107 m3 per day of additional water withdrawal and 8.3 × 106 m3 per day of additional water consumption, almost doubling water consumption by 2035. More realistic estimates of the impact of climate policy on water use indicate that, under policies that attach a financial burden to carbon emissions, water withdrawal for electricity generation will decrease but consumption will increase relative to business-as-usual scenarios [63]. This trend of decreased withdrawals and increased consumption will likely occur globally, with consumption increasing more dramatically if policies favor a renewable energy mix weighted toward concentrating solar power, which uses large quantities of water [75].

Competition between thermoelectric power generation and other water uses is not only a future issue: plants have already had to shut down over water-related problems, for example, in the United States during the drought of 2007 [76]. In the United States, the 2007 drought threatened 24 of 104 nuclear power plants, while a 2003 drought decreased France’s nuclear power capacity by 15 % and hydropower capacity by 20 % [77]. In a county-by-county analysis of predicted population growth, electricity use, and water supply, Sovacool and Sovacool [69] estimate that 22 counties in the United States, housing 20 major metropolitan areas, will face severe water shortages by 2025 due to the expansion of thermoelectric power capacity. While the authors identify some methodological shortcomings of their approach, the study highlights the potential for competition between water use for electricity generation and water use for public water supply.

4 Impact of Electricity Generation on Water Quality

Generation of electricity can impact water quality through extraction of fuel, transport of fuel, conversion to electricity, and storage of wastes, making the water unusable for potable uses or increasing the energy needed to treat the water. Fuel extraction, processing, and electricity generation can lead to metal, nutrient, or radiological contamination which may not be addressed by conventional treatment. In this case, local communities may be required to use more energy-intensive water treatment methods or import water. The types of contamination, risk of contamination, and public perception of the risk all vary for the different electricity sources including coal, nuclear, natural gas, and renewables.

4.1 Coal

Because of coal’s extensive use as an energy source, the environmental impacts of coal have been relatively well documented. Much of the research emphasis has focused on atmospheric pollution from coal-burning power plants, but coal can also present a significant threat to surface and ground water quality. Some of the pathways through which coal can impact water quality include contaminant leaching from mining sites, deposition of combustion by-products in surface waters, and spills of waste materials.

For example, acid mine drainage, with low pH, high sulfate concentrations, and contamination from metals such as iron, can impact water quality for decades after site remediation [78]. Water pollution from mining operations can make freshwater unsuitable for drinking [79], and reclamation efforts may have limited success in addressing concentrations of some dissolved contaminants, such as sulfate, in source water [80, 81]. The quality of water discharged from abandoned mine sites varies widely, but in a study in Pennsylvania (United States), less than 1 % of samples of abandoned coal mine discharge met United States Environmental Protection Agency drinking water standards for inorganic constituents [82]. In some cases, mine discharges, while not suitable for drinking water, are suitable for non-potable uses [83, 84].

Coal can also affect water sources near the power plant. Water percolating through stored coal [85] and waste piles [86, 87] can pick up metals and acidity and contaminate local surface water and groundwater. When coal is burned to produce heat to generate electricity, the by-products of combustion can contaminate local water supplies through wet and dry deposition or leaching through waste piles. For example, Farooqi et al. [88] found extensive groundwater contamination by arsenic (mean concentration of 235 μg/L in shallow groundwater at 24–27 m depth) and fluoride (mean concentration of 11.0 mg/L in shallow groundwater) in Punjab, Pakistan, as well as measurable concentrations of these contaminants in rainfall. The higher concentrations in shallow groundwater than in deeper groundwater, the isotopic signature, and the presence of these contaminants in rainwater indicate that the source of the groundwater contamination is open-air coal burning [88]. In addition, power plant effluents can contain high levels of metals and can contribute to making surface water supplies unsuitable for human consumption [89].

Studies have demonstrated that drainage from abandoned mines can mix with local groundwater and surface water, impacting local water resources [86, 90], though this mixing does not necessarily create water quality problems [91]. In Turkey, groundwater near the Yatagan Thermal Power Plant typically does not meet drinking water standards because of leaching from the coal waste disposal basins [92, 93]. Estimating the risk or decreased availability of freshwater due to contamination from mining activities is complex, but the impact of coal mining on water resources is clear. This impact is regulated in many countries, but universal regulation and enforcement do not exist.

4.2 Nuclear

The publicly perceived threat of water contamination from nuclear power plants is high, particularly after the damage to the Fukushima I nuclear power plant in Japan caused by a tsunami in March 2011. Following the incident, elevated levels of iodine-131 were found in drinking water in Fukushima and some surrounding areas, leading to restrictions on water consumption [94] but minimal health risks [95]. Following the destruction of a single reactor at the Chernobyl nuclear power plant in Ukraine, widely regarded as the worst nuclear power plant disaster in history, a contaminated cooling pond became one of the major sources of radionuclides to the Dnieper River and local groundwater, where concentrations of long half-life radionuclides such as 90Sr remained above drinking water thresholds for many years [96]. While these very rare disasters have an impact on water quality, research shows that under normal operation, contamination from nuclear power plants (other than thermal pollution) is virtually nonexistent. For example, radionuclide concentrations in the Vltava and Elbe Rivers showed no difference before and after the establishment of the Temelin Nuclear Power Plant, though monitoring data indicated a permitted input of tritium to the rivers from the power plant, with concentrations remaining within drinking water standards [97]. Health risks from the operation of nuclear power plants are minuscule, and the improved air quality that would result from replacing fossil fuel electricity with nuclear power would save close to 80,000 lives per year [98].

While normal power plant operation is not likely to affect the availability of local water resources for drinking water supply through contamination, the mining and processing of nuclear fuel can. In Caldas, Brazil, uranium mining caused fluoride, manganese, uranium, and zinc contamination of creek waters that feed a local water supply [99]. Surface water contamination due to uranium mining has also been found in China [100] and Russia [101].

4.3 Natural Gas

Natural gas has received increasing attention as a fuel source in recent years because of its lower carbon dioxide emissions compared to coal. Natural gas is often seen as a “bridging fuel” that will help mitigate global climate change while the capacity for renewable, climate-neutral energy sources is developed. While this strategy is criticized because it does not end reliance on a nonrenewable fossil fuel, the strategy, along with the exploitation of nonconventional supplies, has led to increased use of natural gas. Much of the natural gas now being exploited is extracted through unconventional means, such as hydraulic fracturing. The impact of these unconventional means on water supply is hotly contested in the scientific literature, and more research may be needed to definitively identify the risks.

When natural gas is extracted from coal seams, large quantities of contaminated water can also be produced. This produced water typically contains heavy metals and other contaminants such as arsenic and is stored in on-site ponds or discharged to local waters. If soil conditions are suitable, the produced water can be stored in these ponds without influencing local groundwater [102]. However, if the produced water is introduced to streams, it can negatively affect water quality, including by increasing the salinity of the stream [103]. The use of chemicals to fracture the coal in hydraulic fracturing is another major consideration in the potential effects of natural gas on water quality. To release the natural gas from the coalbed, water, chemicals, and sand are pumped in at high pressure to create fractures. The potential for contamination of water supplies due to this practice is widely debated. In 2004, the USEPA released a report that found no evidence that drinking water wells had been contaminated by hydraulic fracturing [104]. Osborn et al. [105] similarly found no evidence of fracturing fluid in drinking water wells but did find elevated levels of methane in drinking water wells attributable to hydraulic fracturing. This finding has been criticized [106, 107], and consensus has not been reached on the water quality impacts of hydraulic fracturing, though the potential for contamination seems clear.

4.4 Biofuels

Renewable energy sources can also negatively affect water quality. Increased agricultural production of crops such as corn, with accompanying fertilizer and pesticide use, can degrade the quality of local water resources. For example, continuous corn or canola production, as modeled in four watersheds in Michigan (United States), increased pesticide concentrations in surface waters far beyond safe drinking water standards [108]. In addition, nitrogen and phosphorus concentrations in water may increase with increased biofuel production [109, 110]. Intensive agricultural production for biofuels will have the same negative water quality and quantity impacts as any other form of intensive agriculture. In some areas, the demand for water for biofuels may limit water availability for food production [111]. While the impact will vary depending on the crop and the intensity of cultivation, water issues must be considered in assessing the overall sustainability of biofuels.

Relatively little attention has been given to the impact of biofuel production on water treatment. One notable exception is a study of the impact of increased corn production in the United States to support ethanol production. Twomey et al. [112] found that the increased nitrate concentrations in surface water and groundwater resulting from increased corn production would locally cause a substantial increase in the energy needed to treat the polluted waters. The energy implications of this increased demand for water treatment, or of the need to import water to maintain a potable water supply, warrant further research.

5 Impact of Climate Change on Water Supplies

Anthropogenic climate change is tied closely to the energy sector through greenhouse gas emissions and will affect potable water supplies. The Intergovernmental Panel on Climate Change (IPCC) predicts increases in global average temperature, changes in precipitation amount and intensity patterns, and decreases in snow and glacier cover depending on the geographic location [113]. These changes in climate will have a range of effects on drinking water supply. First, changing precipitation patterns, particularly significant decreases in annual rainfall in some areas, will affect surface and groundwater supplies. Second, rising sea levels will change the interface and pressure balance between salt and freshwater for both groundwater and surface water. Third, changes in snow and glacier cover will affect the timing of freshwater delivery to rivers which many communities depend upon for water supplies. Fourth, climate change will affect the demand for water, particularly for irrigation.

Climate change models show slightly different patterns of precipitation change, but most models show that precipitation will increase in some areas and decrease in others. Most arid and semiarid regions will experience decreases in precipitation [114], indicating that changes in precipitation may most dramatically affect areas that already have strained water supplies. For example, using results from global circulation models, de Wit and Stankiewicz predicted that Cape Town, South Africa, will lose over 50 % of its perennial supply of water [115]. Water resources stress will also increase in the Middle East and Mediterranean, with 53–113 million more people living in countries with water stress by 2025 in modeling scenarios with climate change than in scenarios without climate change [116]. In the West Bank, a 16 % decrease in precipitation, a relatively high value in the range of model predictions, would cause a 30 % decrease in groundwater recharge, impacting the primary source of freshwater in the region [117]. Even when overall precipitation increases, changes in the timing of precipitation can seasonally decrease the water yield of river basins [118] and may create water stress [119]. Thus, while climate change will in general increase global precipitation, it will enhance stress on water resources in some regions. Reductions in precipitation will also affect the availability of water for electricity generation, with an expected decrease in summer capacity at power plants in Europe and the United States due to water limitations by 2031–2060 [120].

Sea-level rise is associated with climate change and may cause saltwater contamination of fresh groundwater aquifers and surface water resources. The saltwater/freshwater boundary in both surface water and groundwater is affected by freshwater flow as well as by sea-level rise. In a study of saltwater intrusion in Monterey, California, United States, groundwater withdrawal affected saltwater intrusion more than sea-level rise did [121], but even in areas where withdrawal is the primary driver of saltwater intrusion, the condition can be exacerbated by sea-level rise [122]. In one modeling case study, sea-level rise hastened the saltwater contamination of groundwater wells by 10–21 years compared to withdrawal changes alone [122]. Predicted changes in precipitation can also affect saltwater intrusion; in the Netherlands, the combination of sea-level rise and changes in infiltration will result in salinity changes 5 km inland from the coast [123]. Similarly, saltwater intrusion in estuaries reflects a balance between river flow and sea level, with sea-level rise producing a stronger effect during periods of low river discharge [124]. Even a modest rise in sea level can affect the salinity at drinking water intakes [125], necessitating energy-intensive treatment to meet drinking water standards.

In addition, climate change is affecting water supply in areas that rely on glaciers or on regional snowfall and subsequent snowmelt. For example, much of the western United States relies on snowmelt for summertime water supply, but the amount of precipitation retained in the snowpack has decreased in recent years, largely due to human-induced climate change [126, 127]. This change in snowpack leads to higher streamflow in late winter and early spring and reduced streamflow during the summer months, a trend that may continue in the future [118]. In many of these regions, snowpack serves as a water storage medium from wet to dry months, and earlier melting, along with less precipitation falling as snow, will affect the ability to meet summertime water demands. Few river basins have enough additional storage, such as man-made reservoirs, to buffer the impact of the earlier melting, meaning that much of the water will be lost to the ocean, affecting the water supply of greater than 17 % of the human population [127]. In a modeling study of the effects of climate change on the Columbia River in the northwestern United States, Payne et al. [128] found that changes in streamflow due to changes in the timing of snowmelt will result in competition between water demands for hydropower and for endangered species. Regions that rely on glacial meltwater, such as those surrounding the Himalayas, will be particularly affected as summertime flows initially increase (due to glacial melting) and then abruptly diminish (due to loss of the glaciers) [127].

Climate change may also affect the demand for water for diverse uses. Most notably, climate change may increase demand for irrigation water. A combination of increased plant water demand, due to changes in precipitation and temperature, and extended growing seasons may result in a 395–410 Gm3 (billion m3) increase in global irrigation water demand by 2080 [129]. These increases will not be uniform across the globe, with areas such as South Asia experiencing a much greater percentage increase (15 %) in irrigation demand than the global aggregate (5–8 %) by 2070 [130]. An analysis of the effects of a 6 °C increase in temperature in the West Bank, a relatively high estimate of temperature increase, indicates a 17 % increase in irrigation demand [117]. These increases in irrigation demand may not translate directly into increased water withdrawals, because changes in efficiency may mitigate some of the impacts. Climate change mitigation could also reduce these future irrigation demands by approximately 125–160 Gm3 y−1 [129].

6 Conclusions

Each of the topics covered in this chapter could be examined in much greater depth; this chapter provides only an overview of the ways in which the water–energy nexus creates challenges for supplying potable water. Water treatment and distribution require large quantities of energy, and these requirements will likely increase as poorer quality water is used, water is pumped greater distances, and water quality regulations become more stringent. Electricity generation uses large quantities of water and may be in competition with drinking water supply while also damaging water quality. All of these interactions between water and energy occur against a backdrop of growing population and climate change. However, there are opportunities to dramatically improve the situation. For example, coproduced water from some mining activities can be used for non-potable purposes, decreasing the reliance on potable water sources. Combined water and energy plants, often used in desalination, can use and produce these two resources more efficiently. Most importantly, water and energy policy can be developed from a water–energy nexus perspective, examining feedbacks between the two resources rather than considering them separately. In addition, conservation of both water and energy is vital and should never be overlooked as a strategy for protecting these resources while meeting increased demand.