Abstract
The inverse problem of seawater intrusion (SWI) is reviewed. It represents a challenge because of both conceptual and computational difficulties and because coastal aquifer models display many singularities: (1) head measurements need to be complemented with density information; (2) salinity concentration data are very sensitive to flow within the borehole. Data problems can be reduced by incorporating the measurement process within model calibration; (3) SWI models are extremely sensitive to aquifer bottom topography; (4) the initial conditions may be far from steady state and depend on the location and type of sea-aquifer connection. Problems with aquifer geometry and initial conditions can be addressed by parameterization, which allows for their modification during inversion. The four sets of difficulties can be partly overcome by using tidal response and electrical conductivity data, which are highly informative and provide extensive coverage. Still, SWI inversion is extremely demanding from a computational point of view. Computational improvements are discussed.
Introduction
Protecting coastal aquifers requires not only a good understanding of their dynamics, but also a detailed knowledge of the variability of their parameters. Seawater intrusion (SWI) is especially sensitive to the sea-aquifer connection, usually associated with the presence of preferential flow paths. Management of coastal aquifers and the design of protection and correction actions require identification of such paths. These goals demand that modeling take full advantage of the collected data, which can only be achieved in an inverse modeling framework (e.g., Poeter and Hill 1997).
Coastal aquifers would appear to be ideally suited to inversion, in the sense that highly informative and relatively easy-to-collect data are usually available. Aquifer response to sea-level fluctuations (caused by tides and wind or barometric fluctuations) provides a range of aquifer-scale hydraulic data that cannot be matched by inland aquifers. Pollutants usually affect a small portion of inland aquifers, whereas salinity transport may occur along the whole coastline, bringing in information about large-scale properties. Moreover, salinity should be relatively easy to monitor by means of geophysical methods, so that extensive data can be collected at a moderate cost.
The concurrence of need and availability of informative data should lead to a widespread application of inverse modeling techniques to coastal aquifers. Paradoxically, reports of fully fledged inversion in the literature are extremely scarce. It can be contended that this scarcity reflects conceptual and computational difficulties.
Conceptual difficulties start from the fact that SWI is an essentially three-dimensional (3D) problem and is very sensitive to the heterogeneity in hydraulic conductivity and to the presence of preferential flow paths (e.g., paleochannels, Mulligan et al. 2007). It is also highly sensitive to aquifer bathymetry (Abarca et al. 2007a). Moreover, head measurements are affected by density (Post et al. 2007). Salinity concentration measurements in open wells may not reflect resident aquifer concentrations but flux-averaged concentrations. These difficulties are shared by all transport problems, but are particularly severe in SWI, where vertical fluxes are likely to occur within the borehole. Computational difficulties include the need for solving two coupled non-linear equations. Doing so in a 3D domain, while solving the inverse problem, requires a huge computational effort.
These difficulties often lead to questioning the wisdom of inversion. The opposite can also be contended: modeling difficulties highlight the need for inversion. Ironically, but not surprisingly, the literature on the inverse problem for SWI is scant. A number of reviews are available for conventional groundwater model inversion (Yeh 1986; Carrera 1987; McLaughlin and Townley 1996; Poeter and Hill 1997; de Marsily et al. 1999; Carrera et al. 2005), but none of them devotes any attention to SWI. The objective of this paper is to fill such a gap by analyzing the conceptual and computational aspects of the inverse problem that are specific to SWI modeling.
Basic inversion concepts
The basic issues of the groundwater inverse problem are fairly well established. A summary of them is included here for the sake of completeness and to define the terms that will be used later.
Problem statement: parameterization
An inverse problem can be stated as a process of finding the set of parameters that leads to an optimal fit between computed and measured values of aquifer state variables. These include both direct state variables such as head or concentration, and derived state variables such as electrical conductivity or flow rates. The term “parameter” is more difficult to define. In the context of inversion, parameters are a set of unknown scalars that allow for the definition, without ambiguity, of all aquifer properties (hydraulic conductivity, storativity, recharge, boundary heads and fluxes, porosity, dispersivity, aquifer geometry) at all points in space and, when applicable, time.
The process of expressing all aquifer properties in terms of parameters is termed parameterization. Many parameterization schemes can be used. The most popular ones are zonation, where parameters are associated with properties within a portion (zone) of the aquifer, or pilot points, where properties are obtained by interpolation between parameter values associated with those points (see McLaughlin and Townley 1996, or Alcolea et al. 2006, for discussions on this issue). Strictly speaking, parameterization is not required for the pure geostatistically based formulations of the inverse problem (e.g., Kitanidis and Vomvoris 1983; Rubin and Dagan 1987; Hernández et al. 2006). However, these formulations would be unaffordably expensive for SWI and will not be discussed here.
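As an illustration of pilot-point parameterization, properties at model cells are obtained by interpolating the parameter values defined at the pilot points. The minimal 1D sketch below uses inverse-distance weighting as a stand-in for the kriging normally used in practice; all coordinates and values are hypothetical:

```python
import numpy as np

def idw_field(x_grid, x_pilots, p, power=2.0):
    """Interpolate pilot-point parameter values p (e.g., log-K) onto
    grid locations by inverse-distance weighting (1D sketch)."""
    d = np.abs(x_grid[:, None] - x_pilots[None, :])   # grid-to-pilot distances
    w = 1.0 / np.maximum(d, 1e-12) ** power           # avoid division by zero
    w /= w.sum(axis=1, keepdims=True)                 # normalize weights
    return w @ p

# Two pilot points carrying log-K values, mapped to four model cells
x_grid = np.array([0.0, 1.0, 2.0, 3.0])
x_pilots = np.array([0.0, 3.0])
logK = idw_field(x_grid, x_pilots, np.array([-4.0, -2.0]))
```

The inversion then adjusts only the pilot-point values, while the full property field follows from the fixed interpolation weights.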
Experience dictates that parameterization may be the most difficult conceptual step of inverse modelling. On one hand, it is desirable to keep the number of parameters as small as possible to reduce convergence difficulties and CPU time. On the other hand, it is clear that many parameters may be required for a proper identification of spatial variability patterns. As numerical methods and computer speed advance, there is a clear trend towards densely parameterized models (Alcolea et al. 2006; Hunt et al. 2007).
Objective function
Model calibration is usually performed manually by trial and error. However, the process is tedious and often incomplete (see, e.g., Carrera and Neuman 1986a; Poeter and Hill 1997). Automatic solution overcomes these difficulties. Automatic calibration is normally formulated as the minimization of an objective function. An alternative to this approach is the use of direct methods, which consist of substituting state variables, assumed to be known everywhere, into the governing equations and solving these for aquifer properties (e.g., Nelson 1960; Giudici et al. 2000). However, this approach does not appear feasible for coupled non-linear problems and will not be discussed here.
While a number of objective functions are feasible, the vast majority of authors use variations of

\( F = \sum\nolimits_i {\lambda_i F_i} \)  (1)

where subindex i identifies the type of data (e.g., i = h for head, i = c for concentration, i = p for parameters, etc.), λ_i is the relative weight factor and F_i measures the fit between measurements and computations of type-i data (including model parameters, that is, i = Y for log-K (hydraulic conductivity), i = r for recharge, etc.). A weighted sum of squared errors is usually adopted for F_i. For a generic type of data u (state variable or model parameter):

\( F_u = \left( {\mathbf{u}}({\mathbf{p}}) - {\mathbf{u}}^* \right)^{\text{t}} {\mathbf{V}}_u^{-1} \left( {\mathbf{u}}({\mathbf{p}}) - {\mathbf{u}}^* \right) \)  (2)

where u* is the vector of measurements, u(p) is the vector of values of u computed with parameters p at the same locations and times as the measurements, and \( {\mathbf{V}}_u^{-1} \) is the inverse of the covariance matrix of the u residuals, that is, of (u(p) – u*), which includes both measurement and model errors. This covariance matrix is never known with accuracy. Therefore, following Neuman and Yakowitz (1979), it is common to write it as C_u = τ_u V_u, where C_u is an improved estimate of the covariance matrix and τ_u is an unknown scalar. Note that, when u represents a given type of parameter (e.g., log-K), then u(p) is itself the vector of parameters of that type and u* is the vector of their prior estimates.
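As a concrete illustration, the weighted sum of squared errors of Eq. (2) reduces to a few lines of linear algebra. All numbers below are hypothetical:

```python
import numpy as np

def weighted_ssq(u_computed, u_measured, V_u):
    """Weighted sum of squared residuals, r^t V_u^{-1} r (Eq. 2)."""
    r = u_computed - u_measured
    return float(r @ np.linalg.solve(V_u, r))

# Hypothetical head data with a 0.2 m measurement error (variance 0.04 m^2)
u_star = np.array([10.0, 12.0, 9.5])   # measured heads (m)
u_sim  = np.array([10.2, 11.8, 9.9])   # heads computed with parameters p
V_h    = np.diag([0.04, 0.04, 0.04])   # covariance of the head residuals

F_h = weighted_ssq(u_sim, u_star, V_h)  # contribution of heads to F
```

With a diagonal covariance, each squared residual is simply divided by its variance, so poorly measured data automatically receive less weight in the fit.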
The rationale behind the objective function Eq. (1) is diverse. It was originally proposed by Neuman (1973) with two terms (F h + λ p F p ), in a multiobjective optimization context, to obtain a good fit of heads, while ensuring plausible parameters (i.e., computed parameters p are close to their prior estimates, p*). Stated like this, F p plays the role of a regularization term that stabilizes the solution (see the following section Uniqueness, stability, identifiability). However, this term appears naturally in statistically based objective functions such as the Bayesian function (Neuman and Yakowitz 1979) or maximum likelihood estimation (Carrera and Neuman 1986a) (see also Emselem and de Marsily 1971). These approaches lead to sum of squared errors objective functions, such as Eq. (2), when residuals are multinormal. They also provide optimal means to estimate weight factors λ i (e.g., Kitanidis and Vomvoris 1983; Medina and Carrera 2003). Therefore, the objective function Eq. (1) is indeed optimal when residuals are multinormal. Moreover, minimization is easy when the dependence between observations and parameters is linear. Both requisites (normality and linearity) may be obtained by appropriate transformation of the variables. For example, hydraulic conductivity is known to be log-normally distributed (Davis 1969). Therefore, the objective function for hydraulic conductivity K should be written in terms of Y = log K. As it turns out, this transformation may also help in improving the quadratic component of F (Dagan 1985; Carrera and Neuman 1986b). A careful analysis of concentration errors prompted Knopman and Voss (1989) to also log-transform concentration.
The nature of terms F i , λ i and V i should be understood in a somewhat lax manner (the effect of varying λ i is shown in Fig. 1). Several terms may be used for data of the same type that the modeller may wish to treat separately. For example, Rötting et al. (2006) or Alcolea et al. (2007, 2009) separate terms representing natural head, typically independent at different wells, and head responses to pumping tests or river or sea level fluctuations, which are often autocorrelated in time, thus leading to a non-diagonal V h (e.g., Carrera and Neuman 1986c). By the same token, a careful analysis of model errors is needed to properly define the error structure, which may be achieved either formally (Refsgaard et al. 2006) or subjectively (Sanz and Voss 2006). In short, there is a lot of room in the objective function for modellers to introduce their conceptual views and subjective judgement.
Minimization algorithm
Minimizing F (Eq. 1) requires an iterative process, unless F is exactly quadratic, which is rarely the case. Numerous minimization methods are available. Discrete optimization methods, which rely solely on the computation of F, are the simplest to implement. Many of them are designed to find the global minimum. Examples include simulated annealing, genetic algorithms (e.g., Rao et al. 2003; Tsai et al. 2003), or the shuffled complex evolution method (Duan et al. 1992). They have been used to solve optimization problems in coastal aquifers (e.g., Benhachmi et al. 2003; Katsifarakis and Petala 2006; Yeh and Bray 2006; He et al. 2007). However, the cost of discrete optimization methods grows exponentially with the number of parameters. Moreover, non-uniqueness is much less of an issue than often purported. Therefore, the focus will be set here on continuous methods. Cooley (1985) showed that the most efficient of these are Gauss-Newton methods (the Marquardt method being the favourite). They are used routinely and will be the only ones discussed here. The algorithm proceeds as follows (see Fig. 2):
Step 1. Initialization. Set k = 0 and define initial parameters p^0. Solve the direct problem to compute h(p^0) and other derived state variables. Compute F = F(p^0).

Step 2. Compute the state variables u^k, the Jacobian J^k = ∂u^k/∂p^k, the first-order approximation to the Hessian, \( {\mathbf{H}}^{k} = {\mathbf{J}}^{\text{t}}{\mathbf{V}}_u^{-1}{\mathbf{J}} + {\lambda_p}{\mathbf{V}}_p^{-1} \), and the gradient g^k = ∂F^k/∂p^k.

Step 3. Compute the updating direction d^k from H^k d^k = -2g^k.

Step 4. Update the parameters: \( {\mathbf{p}}^{k+1} = {\mathbf{p}}^{k} + {\mathbf{d}}^{k} \).

Step 5. Solve the direct problem for p^(k+1) and compute F^(k+1) = F(p^(k+1)).

Step 6. If converged (small \( \left\| {{\mathbf{g}}^{k}} \right\| \), small \( \left\| {{\mathbf{d}}^{k}} \right\| \), small \( \left| {F^{k} - F^{k+1}} \right| \), etc.), stop. Otherwise, if \( F^{k+1} < F^{k} \), set k = k + 1 and go to Step 2; if \( F^{k+1} > F^{k} \), either add a positive definite matrix to H^k (and return to Step 3) or perform a line search to find the α that minimizes \( F({\mathbf{p}}^{k} + \alpha {\mathbf{d}}^{k}) \).
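The loop of Steps 1-6 can be sketched in a few lines of Python. This is only an illustrative implementation of the Marquardt-damped Gauss-Newton idea, not the algorithm of any of the cited codes; the prior-information term of the Hessian is omitted for brevity, and the toy linear "aquifer model" and all numerical settings are assumptions:

```python
import numpy as np

def marquardt(simulate, jacobian, p0, u_star, W, max_iter=50, tol=1e-8):
    """Damped Gauss-Newton (Marquardt) loop following Steps 1-6.
    simulate(p): solves the direct problem, returns u(p).
    jacobian(p): returns J = du/dp.  W: inverse residual covariance V_u^{-1}."""
    p = np.asarray(p0, dtype=float)              # Step 1: initialization
    r = simulate(p) - u_star
    F = float(r @ W @ r)
    mu = 1e-3                                    # Marquardt damping parameter
    for _ in range(max_iter):
        J = jacobian(p)                          # Step 2: Jacobian
        g = 2.0 * J.T @ W @ r                    # gradient of F
        H = 2.0 * J.T @ W @ J                    # Gauss-Newton Hessian
        while True:                              # Steps 3-6
            d = np.linalg.solve(H + mu * np.eye(p.size), -g)
            r_new = simulate(p + d) - u_star
            F_new = float(r_new @ W @ r_new)
            if F_new < F:                        # accept step, relax damping
                p, r, F, mu = p + d, r_new, F_new, 0.5 * mu
                break
            mu *= 10.0                           # reject step, increase damping
            if mu > 1e12:                        # cannot improve any further
                return p, F
        if np.linalg.norm(d) < tol:              # converged
            break
    return p, F

# Toy linear model u = A p (hypothetical, for illustration only)
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
u_star = A @ np.array([2.0, 3.0])                # error-free synthetic data
p_hat, F_min = marquardt(lambda p: A @ p, lambda p: A,
                         np.zeros(2), u_star, np.eye(3))
```

Adding mu times the identity to H is the "positive definite matrix" of Step 6: it shortens and rotates the step towards steepest descent whenever a full Gauss-Newton step would increase F.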
There are numerous variations for the basic algorithm (see, e.g., Cooley 1985; Doherty 2002; Medina and Carrera 2003), but they will not be examined here.
Sensitivity, uncertainty and worth of data
In a broad sense, sensitivity refers to the dependence of model output on model input. As such, it can be evaluated globally to quantify the overall dependence of model outputs on input parameters (see, e.g., Saltelli et al. 2005). However, sensitivity is computed locally in the context of inverse modelling. That is, the sensitivity of a state variable u_m with respect to parameter p_j simply expresses the rate of change of u_m per unit change in p_j at the current value of all parameters. That is:

\( \left( {\mathbf{J}}_u \right)_{mj} = \frac{\partial u_m}{\partial p_j} \)  (3)

This definition is not very useful for qualitative analysis, because (J_u)_{mj} depends on the relative magnitude of u_m and p_j. For example, the sensitivity of a concentration expressed in mg/l is 1,000 times larger than the sensitivity of the same concentration expressed in g/l. It is clear that sensitivities need to be scaled. The most natural way to scale sensitivities in an inversion context is to decompose \( {\mathbf{V}}_u^{-1} \) as \( {\mathbf{W}}_u^{\text{t}}{{\mathbf{W}}_u} \) and \( {\mathbf{V}}_p^{-1} \) as \( {\mathbf{W}}_p^{\text{t}}{{\mathbf{W}}_p} \), so that the scaled sensitivity matrix becomes:

\( {\mathbf{SS}}_u = {\mathbf{W}}_u \, {\mathbf{J}}_u \, {\mathbf{W}}_p^{-1} \)  (4)
In the case of diagonal V_u and V_p, the components of SS_u are:

\( ss_{mj} = \left( {\mathbf{J}}_u \right)_{mj} \frac{\sigma_{pj}}{\sigma_{um}} \)  (5)

where σ_pj is the standard deviation of the jth parameter and σ_um is the standard deviation of the mth residual of type-u measurements (\( \sigma_{pj}^2 \) and \( \sigma_{um}^2 \) are diagonal terms of V_p and V_u, respectively). Given the uncertainty in V_u and V_p (recall the need to find the scaling parameter τ or λ), this may still not be sufficient to properly assess the worth of different types of data. This is why Knopman and Voss (1989) substitute σ_um by a subjective magnitude and σ_pj by p_j (this latter choice is equivalent to assuming p_j log-normally distributed with σ_ln pj = 1).
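For the diagonal-covariance case of Eq. (5), scaled sensitivities are a one-line computation. The Jacobian and standard deviations below are hypothetical:

```python
import numpy as np

# Hypothetical Jacobian: 3 observations (rows) x 2 parameters (columns)
J = np.array([[1.0, 0.5],
              [0.2, 2.0],
              [0.0, 1.0]])
sigma_u = np.array([0.1, 0.1, 0.2])  # standard deviation of each residual
sigma_p = np.array([0.5, 1.0])       # prior standard deviation of each parameter

# ss_mj = J_mj * sigma_pj / sigma_um (Eq. 5, diagonal V_u and V_p)
SS = J * sigma_p[np.newaxis, :] / sigma_u[:, np.newaxis]
```

The scaling makes entries dimensionless, so sensitivities of heads, concentrations and other data types become directly comparable.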
Analyzing sensitivities allows one to understand how parameters affect results and to gain insight into model behaviour. Sensitivity is also used to evaluate uncertainty. This can be done either qualitatively or quantitatively. If ss_mj is large, small variations in p_j should lead to large variations in u_m. If the state variable u_m has been measured, then the value of p_j is heavily constrained by the measurement. This is quantified by the covariance matrix of estimated parameters or by Fisher's information matrix. The latter expresses the information that data contain about parameters. It can be approximated by:

\( {\mathbf{I}}_F \approx \sum\nolimits_i {\lambda_i \, {\mathbf{J}}_i^{\text{t}} {\mathbf{V}}_i^{-1} {\mathbf{J}}_i} \)  (6)

\( {\mathbf{I}}_F^{-1} \) gives a lower bound of the a posteriori covariance matrix, Σ_p. Σ_p is expected to be much smaller than the a priori covariance matrix V_p because it includes all the information contained in the observations. Several comments should be made about the covariance and Fisher's matrices. First, the covariance matrix of model parameters quantifies the uncertainty of estimated parameters (as measured by their variances and correlation coefficients). It is often stated that high correlations are undesirable. Actually (see Fig. 3), it is the opposite. Uncertainty on a parameter is quantified by its variance (or standard deviation, Fig. 3). A high correlation with another parameter means that the two parameters are dependent on each other. The correct reading of a high correlation is that one knows something about the two parameters jointly (e.g., their ratio, if log-transformed parameters are used in Fig. 3), although not about each one separately. Since nothing of the kind is known when the parameters are uncorrelated, one is much better off with a high than with a low correlation.
The second remark to be made is that a careful assessment of relative weights (λ i ) is needed to properly evaluate both uncertainty and information (see statistical approaches by Kitanidis and Vomvoris 1983 or Carrera and Neuman 1986a). In practice, at least in the authors' experience, modellers tend to be optimistic about measurement and model errors (i.e., tend to assign low V i ). Only after preliminary inversion runs does one become fully aware of model limitations and assign realistic V i matrices (this is automatically done by the above statistical approaches). Skipping this step will lead to improper weighting of the different types of data.
A third remark is that the covariance thus computed is too optimistic (Fig. 3). It must be viewed as a lower bound of uncertainty (it is exactly the lower bound if model output is a linear function of model parameters). An evaluation of the degree of optimism was carried out by Carrera and Glorioso (1991), who showed that it is very problem dependent. Nonlinear confidence intervals can also be computed (e.g., Vecchia and Cooley 1987; Hill 1998), but they are beyond the scope of this section.
A final remark should be made regarding information. As quantified by Fisher's matrix, information is additive (recall Eq. 6). In fact, the information contained in the data can be quantified by different metrics of I_F (e.g., the determinant, the sum of diagonal terms, etc.; see Carrera and Neuman 1986c). A particularly popular metric of information on model parameters is the cumulative scaled sensitivity (CSS; Knopman and Voss 1989), which is obtained from the diagonal terms of the information matrix, usually divided by the number of measurements and square rooted:

\( css_j = \sqrt{ \frac{1}{N} \sum\nolimits_{m=1}^{N} ss_{mj}^{2} } \)  (7)

where N is the total number of observations. In finely parameterized models, css_j can be mapped to show which parameters can be estimated with a given observation network and which cannot.
Cumulative scaled sensitivities can be used to assess the information content of all measurements about each parameter. However, to evaluate which measurements provide most information about all parameters, the contribution of each measurement to the information matrix should be used. This is obtained by simply adding the diagonal terms of such a contribution (L.J. Slooten, IDAEA-CSIC, unpublished data, 2009). That is,

\( I_m = \sum\nolimits_{j=1}^{N_p} ss_{mj}^{2} \)  (8)

where N_p is the number of parameters and I_m should be read as the information contained in the mth measurement about all parameters. In transient problems, I_m should be integrated over time. I_m can be computed for every node and plotted to identify the areas where measurements are most informative.
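Both diagnostics, the cumulative scaled sensitivity of each parameter and the per-measurement information, follow directly from the scaled-sensitivity matrix; the values below are hypothetical:

```python
import numpy as np

# Hypothetical scaled sensitivities: N = 3 observations, Np = 2 parameters
SS = np.array([[5.0, 5.0],
               [1.0, 20.0],
               [0.0, 5.0]])
N = SS.shape[0]

# Cumulative scaled sensitivity of each parameter (column-wise, Eq. 7)
css = np.sqrt((SS**2).sum(axis=0) / N)

# Information contained in each measurement about all parameters (row-wise)
I_m = (SS**2).sum(axis=1)
```

Here the second measurement carries most of the information (it is by far the most sensitive observation), while the second parameter is far better constrained by the network than the first.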
Uniqueness, stability, identifiability
The inverse problem is often said to be ill-posed because its solution may be non-unique or unstable. Non-identifiability occurs when different parameter sets lead to the same solution of the direct problem. Non-uniqueness occurs when different parameter sets satisfy the minimum condition of the objective function Eq. (1). Instability occurs when small changes in the observations lead to large changes in the estimated parameters. Carrera and Neuman (1986b) discuss these concepts extensively and show that they are closely related. They argue that the most frequent problem is instability. However, its effect (Fig. 3) is identical to that of non-identifiability or non-uniqueness: the solution depends on the initial parameters. The point to stress here is that the presence of this kind of problem can be detected and fixed.
Detection can be achieved by analyzing the covariance matrix of estimated parameters—or the information matrix, I F (Eq. 6). When the problem is restricted to two parameters, instability (or poor identifiability) is associated with a very high correlation, which is why high correlations are viewed as negative. If more than two parameters are involved, poor identifiability is linked to high eigenvalues of the covariance matrix (low eigenvalues of the information matrix). The corresponding eigenvector defines the combination of parameters that cannot be identified (see Fig. 3). Details of the procedure are described by Carrera and Neuman (1986c) and Medina and Carrera (1996).
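The eigenanalysis described above can be sketched in a few lines. The covariance matrix below is a hypothetical example of two highly correlated log-transformed parameters:

```python
import numpy as np

# Hypothetical posterior covariance of two log-parameters with unit
# variance and a correlation of 0.99 (e.g., log-K and log-recharge)
Sigma_p = np.array([[1.00, 0.99],
                    [0.99, 1.00]])

eigval, eigvec = np.linalg.eigh(Sigma_p)  # eigenvalues in ascending order

worst = eigvec[:, -1]  # largest eigenvalue: poorly identified combination
best  = eigvec[:, 0]   # smallest eigenvalue: well-constrained combination
```

Here the poorly identified direction is proportional to (1, 1), the sum of the two log-parameters, while (1, -1), their difference (i.e., the ratio of the untransformed parameters), is known almost exactly; this is the quantitative version of the statement that a high correlation means the ratio is well constrained even though neither parameter is.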
The impact of these problems can be reduced by several means. The traditional option is regularization, which consists of including F p terms in the objective function. These terms tend to smooth the solution and keep it close to the prior estimates. The risk is over-smoothing, which may cause a loss of resolution capacity (recall Fig. 1, where too large a λ p led to a solution without channels). Actually, it is sufficient to increase the weight of the prior estimates only for the parameters associated with large eigenvalues. A second option is to reduce the number of parameters to be estimated. This can be done using subjective judgment, possibly aided by a sensitivity analysis (e.g., fix the values of the most uncertain parameters). Formal techniques have also been developed, such as singular value decomposition (Chang and Yeh 1976; Hill and Østerby 2003), hybrid parameterization (Tonkin and Doherty 2005) or model reduction (Vermeulen et al. 2006). A third option is to increase the number and types of data or to optimize the observation scheme, by designing it to minimize parameter uncertainty and/or to increase the ability of the data to discriminate among alternative models (Knopman and Voss 1989; Usunoff et al. 1992), as discussed earlier.
Conceptual aspects
Model simplifications
The methodology outlined in the previous section has never been reported for a full 3D SWI problem in a strict sense (but see Dausman et al. 2009). Probably the closest to a full calibration is the case reported by Bauer et al. (2006a), who used PEST to solve the inverse problem in a 2D vertical cross section at the Okavango Delta, Botswana (where water density is controlled by salinity, but this is not really a SWI problem!). Excessive computer time prevented them from estimating more than four parameters, not to mention going on to calibrate the full 3D problem. Iribar et al. (1997) used head, chloride concentration and flow rate data to estimate 40 transmissivity values. Abarca et al. (2006) and Vázquez-Suñé et al. (2006) used some 100 transmissivity values, plus storativity values, boundary fluxes, porosity, dispersivity and time evolution data of river recharge at the Llobregat Delta (Spain). However, they had to neglect density effects, which they justified because of the small aquifer thickness and elevation gradients. Thus, they could only settle on a two-layer model. Bray et al. (2007) adopted an intermediate solution. They assumed hydraulic conductivity to be known from abundant point data interpolated by kriging, and calibrated dispersivity against concentration data. Leaving aside the question of whether point measurements of hydraulic conductivity are appropriate (Barlebo et al. 2004, among others, argue the opposite), it is worth noting that only two parameters were estimated.
Automatic calibration is often disregarded because of its excessive CPU time cost (Bauer et al. 2006a; Werner and Gallagher 2006). Sometimes, manual calibration is made in conjunction with a formal sensitivity analysis (Person et al. 1998; Yakirevich et al. 1998). For example, Momii et al. (2005) used a sharp-interface model to manually calibrate heads, tide-induced head fluctuations and concentration data on a 2D plane model.
It is worth mentioning the work of Barazzuoli et al. (2008), who calibrated a 3D model using steady-state head to find hydraulic conductivity in each of the four layers of the model and used transient head data to find transient fluxes. Karahanoglu and Doyuran (2003) also calibrated a 2D vertical section in sequential phases (first steady state, then transient).
These efforts are clearly suboptimal. Sequential calibration efforts are to be commended as practical, but they do not take full advantage of the information contained in the data. For example, if hydraulic conductivity is derived from steady-state head data, the information contained in transient head or concentration data is lost. Moreover, each of the sequential problems is more likely to be uncertain. Therefore, this type of approach must be viewed as a struggle by modellers to cope with the computational and conceptual difficulties discussed in the following.
Worth of data
Fisher's information matrix (Eq. 6) shows that the worth of an observation in an inverse problem context is determined by two main factors: the sensitivity of the (simulated) observations to all the different parameters, and the variance of the associated measurement and model errors. Measurements of different observation types tend to inform about different parameters, and to have different sources of error. This has led several authors to investigate which measurement types contain most information, and which measurement locations are optimal.
Flow related measurements (e.g., head) do not contain information about transport parameters in constant density models but they do in variable density ones. Shoemaker (2004) studied the capacity of observations of different types to constrain model parameters by computing scaled sensitivities (Eq. 5) and parameter correlations when using different data sets and different parameters. He found that using only head observations is not enough to identify flow and transport parameters. By combining head with salinity and flow rate observations, the parameters became much better constrained.
Sanz and Voss (2006) applied an analysis of the a-posteriori parameter covariance matrix (recall the previous section Uniqueness, stability, identifiability), and the correlation matrix to the Henry problem (Henry 1964). The solution depends on two dimensionless numbers, each one a function of the classical flow and transport parameters (permeability, diffusion coefficient, freshwater inflow rate, etc.). This dependence can be found from an eigenanalysis of the covariance matrix (see Medina and Carrera 1996, for the procedure) or from a qualitative analysis of the problem. Sanz and Voss (2006) found that head measurements are most informative deep inland, while concentration measurements are most informative around the toe of the seawater wedge. Their work also illustrates the importance of using an appropriate error structure for state variables and relative weighting of different types of data.
As mentioned at the beginning of this section, the worth of data is increased not only by seeking informative measurements, but also by minimizing the variance of measurement and model errors. Regarding the latter, careful scrutiny of data and large residuals may help in identifying outliers, a frequent cause of trouble during automatic inversion, or deficiencies in the conceptual model. Error filtering and time averaging are especially recommended when long data records are available. This eliminates high-frequency errors and favors Gaussianity.
Use of head data
Using head data for calibration of density-dependent flow models is much more delicate than for constant-density models (Post et al. 2007). For one thing, head is not a state variable in density-dependent flow. SWI models are solved in terms of either pressure or equivalent freshwater head. Yet, head data are often gathered by measuring water elevation in a well. This is only informative if density along the piezometer water column is known (Fig. 4). To address this difficulty, one may either directly measure pressure at depth (e.g., Alcolea et al. 2009), which may imply a slight loss of accuracy, or monitor both water elevations and average salinity, which is costly.
The situation is much more complex if the borehole is open. On the one hand, measured head is an average along the vertical weighted by the hydraulic conductivity. While this problem may affect all types of aquifers, it is relatively easy to deal with in constant density flow models (see, e.g., Martínez-Landa and Carrera 2006). On the other hand, a vertical flux should be expected as a result of the vertical pressure gradient created by the influence of the sea. This effect can be explicitly included in the inversion process. Two alternatives are available. First, the borehole can be explicitly modeled by using a string of one-dimensional elements connected to aquifer nodes. The conductances of these connections depend on the hydraulic conductivity of the node. Density-dependent flow and transport is then solved in the expanded grid, which includes both aquifer and borehole nodes. This option may be expensive because the short-circuit effect of the borehole causes large head and concentration gradients and, if tides are simulated, fast fluctuations. Therefore, this option is only recommended for highly detailed small-scale models. The second alternative consists of assuming that aquifer head and concentration will not be significantly affected by the borehole. Therefore, the model is solved without explicitly simulating the short-circuit effect. This effect needs to be taken into account only for computing head (or pressure) to be compared to measurements.
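The hydraulic-conductivity weighting mentioned above (second alternative) can be sketched as follows: the head an open borehole registers is approximated as a transmissivity-weighted average of the heads at the screened layers, computed as a post-processing step rather than by expanding the grid. All layer values below are hypothetical.

```python
import numpy as np

def borehole_head(heads, K, thickness):
    """Transmissivity-weighted average head over the open interval of a well.

    heads: nodal heads at the screened layers (m)
    K: layer hydraulic conductivities (m/day)
    thickness: layer thicknesses (m)
    """
    T = K * thickness                  # layer transmissivities
    return np.sum(T * heads) / np.sum(T)

# Hypothetical three-layer screen
heads = np.array([2.1, 2.0, 1.6])      # m
K = np.array([50.0, 1.0, 200.0])       # m/day
b = np.array([5.0, 10.0, 5.0])         # m

h_well = borehole_head(heads, K, b)    # dominated by the high-K bottom layer
```

The computed value is pulled toward the head of the most transmissive layer, which is why an open-borehole measurement can differ markedly from the arithmetic mean of the aquifer heads.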
An additional source of uncertainty may be caused by sea level fluctuations. As discussed later in section On the use of tidal data, high-frequency fluctuations (e.g., tides) will be dampened close to the coast in unconfined aquifers and should not be a problem. However, in confined aquifers, the tidal signal may affect measurements deep inland and will cause an additional source of noise if not properly monitored. Addressing this issue requires averaging head over a long period, which is costly, but may be useful.
In summary, head errors may be large. As described earlier, addressing them in detail may be costly. If the measurement process is not modeled explicitly, errors should be acknowledged in the head covariance matrix, \( {\mathbf{V}}_h^{ - 1} \) (recall the previous section Objective function). It must be added that these errors, especially the ones caused by salinity within the borehole, are likely to be highly correlated, which requires a non-diagonal \( {\mathbf{V}}_h^{ - 1} \). A simple way to account for auto-correlated noise is described in detail by Neuman and Carrera (1985).
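A minimal sketch of such a non-diagonal weight matrix, assuming exponentially decaying (AR(1)-like) autocorrelation between consecutive head measurements; the standard deviation and correlation coefficient below are assumed values, not taken from Neuman and Carrera (1985).

```python
import numpy as np

def ar1_covariance(n, sigma=0.05, rho=0.8):
    """Head-error covariance V_h[i, j] = sigma^2 * rho^|i-j|
    for n consecutive measurements (assumed AR(1)-type autocorrelation)."""
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return sigma**2 * rho**lags

V = ar1_covariance(4)
W = np.linalg.inv(V)   # weight matrix V_h^{-1} entering the objective function
```

Using `W` instead of a diagonal matrix down-weights redundant, correlated measurements rather than counting them as independent information.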
Use of concentration data
The use of concentration data is not as simple as it might look. The most immediate difficulty is caused by saltwater circulation within the well (Fig. 4b, case C). Circulation causes measured salinity profiles to be much sharper than the actual width of the mixing zone (Tellam et al. 1986). Using pore water samples, as Tellam et al. (1986) did, can only be justified for a research project. Alternatives such as profiles deduced from induction in closed PVC wells (Lebbe 1999) should be explored further. It is clear, however, that (1) vertical salinity profiles should be used with care, and (2) the issue needs to be studied in much more detail (see, e.g., Shalev et al. 2009).
Concentration at pumping wells also needs close scrutiny. Ideally, mixing at the well can be represented in models, so that measured concentrations are comparable with computed concentrations. In practice, however, model simplifications may make this comparison non-trivial, e.g., when using a sharp interface model (as discussed in Mantoglou 2003).
A third source of concern is the difference between resident and flux concentration. Here again, the issue is related to the type of model adopted. In general, measured concentration will be close to the flowing concentration over the open portion of a pumping well screen. If this portion is long, the difference with resident concentration can be quite large. In periods of intrusion, flowing concentration will be larger than resident concentration. The opposite should occur during periods of retreat. As transport models are usually solved in terms of resident concentration, post-processing is required. Only models based on non-local transport formulations represent the difference between resident and flux concentration explicitly (see discussion by Willmann et al. 2008). To the authors’ knowledge, there has not been any attempt to use this kind of model for SWI problems. These problems can be addressed by explicitly modelling the measurement borehole (as described previously); however, the solution is numerically difficult and computationally costly.
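The required post-processing can be sketched as an inflow-weighted average: the concentration sampled at a pumping well approximates the flux-averaged concentration over the screen, which may differ substantially from a simple average of resident concentrations. Layer inflows and salinities below are hypothetical.

```python
import numpy as np

def flux_weighted_concentration(c_resident, inflow):
    """Flux-averaged concentration at a pumping well from nodal resident
    concentrations and the inflow each screened layer contributes."""
    return np.sum(inflow * c_resident) / np.sum(inflow)

c_res = np.array([1.0, 5.0, 20.0])   # kg/m3, resident salinity by layer
q_in = np.array([8.0, 1.0, 1.0])     # m3/day, inflow to the screen by layer

c_well = flux_weighted_concentration(c_res, q_in)   # dominated by fresh inflow
c_mean = c_res.mean()                # simple resident average, for contrast
```

Here most of the inflow comes from the freshest layer, so the sampled (flux) concentration is far below the average resident concentration along the screen.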
Regarding the worth of concentration data, Fig. 5 shows that the concentration field is heavily dependent on hydraulic conductivity (sharp drop in areas of low transmissivity, saltwater wedge lying below high-permeability zones, etc.). The problem is more severe in aquifers affected by SWI, which salinize primarily along channels well connected to the sea (e.g., Iribar et al. 1997).
Nevertheless, some studies conclude that concentrations are not very informative about hydraulic parameters (e.g., Bray et al. 2007), whereas others conclude that the inclusion of concentration data significantly improves parameter estimation (Shoemaker 2004). A partial explanation may be that steady-state unpumped conditions, such as the ones shown in Fig. 5, may not be comparable to SWI conditions observed during pumping. When pumping drives SWI, hydraulic gradients may override buoyancy forces, so that transport parameters become less important in explaining concentration. Still, buoyancy forces may dominate in portions of the aquifer (see, e.g., Pool and Carrera 2009). In short, while concentration data should clearly be used for calibration whenever possible, the issue deserves further analysis.
Geophysical methods
In view of the difficulties associated with concentration data, it is not surprising that electrical conductivity (EC) measurements, typically derived from geophysics, have been used extensively. In fact, the whole suite of electromagnetic methods has been used in model calibration attempts: electrical resistivity tomography (ERT; Bauer et al. 2006b; Comte and Banton 2007), short- and long-offset transient electromagnetic measurements (SHOTEM and LOTEM; Kafri et al. 2007), and time-domain electromagnetic methods (TDEM; Yechieli et al. 2001). By providing extensive coverage, electrical conductivity measurements should allow a rather complete, albeit often blurry, picture of the interface shape. As already discussed, the interface shape and its time evolution should be sensitive to heterogeneity (Fig. 5) and, especially, to preferential flow paths connecting the aquifer to the coast (Mulligan et al. 2007).
Electrical geophysics is not free of problems. Resistivity maps cannot be compared directly to water salinity; they require a calibration of their own (Comte and Banton 2007). This does not preclude qualitative use, but hinders direct use for inversion. Moreover, connate saltwater in low-permeability areas may mask deeper resistivity measurements. Ironically, this would hinder qualitative use of resistivity maps, but could be overcome by joint inversion of SWI and geoelectric model parameters. In summary, EC mapping is an extremely attractive option, but should be performed in connection with flow and transport inversion.
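The calibration step referred to above can be sketched with Archie's law, which links bulk resistivity to pore-water conductivity, followed by a rough EC-to-salinity conversion. The porosity, cementation exponent and conversion factor below are assumed, site-specific quantities, chosen only for illustration.

```python
def water_conductivity(rho_bulk, porosity=0.3, m=2.0):
    """Pore-water electrical conductivity (S/m) from bulk resistivity (ohm-m)
    via Archie's law, sigma_bulk = sigma_w * phi^m (assumed phi and m)."""
    sigma_bulk = 1.0 / rho_bulk
    return sigma_bulk / porosity ** m

def salinity_tds(sigma_w, k=6.4):
    """Approximate TDS (kg/m3) from water EC (S/m); k ~ 6.4 kg/m3 per S/m is a
    common rule of thumb, but must be calibrated for each site."""
    return k * sigma_w

sw = water_conductivity(2.0)    # ~5.6 S/m: seawater-like pore water
tds = salinity_tds(sw)          # ~36 kg/m3
```

The point of the sketch is that two extra parameters (here `porosity`, `m`) stand between the geophysical image and the salinity field, which is why joint inversion of SWI and geoelectric parameters is attractive.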
On the use of tidal data
Sea level fluctuations, such as astronomical or wind-driven tides, represent a large-scale stress on the system. As such, they yield information about hydraulic parameters. As pointed out earlier, taking advantage of these data should improve parameter identifiability and inverse problem stability. More importantly, Knudby and Carrera (2006) showed that transport connectivity, which controls how fast SWI will contaminate an aquifer, correlates best with hydraulic diffusivity (T/S, T being transmissivity and S the storage coefficient). In fact, Carr and Vanderkamp (1969) showed that the head response in homogeneous aquifers depends solely on the characteristic length:

\( L = \sqrt{\frac{TP}{\pi S}} \)    (9)
where P is the period of fluctuation. Equation (9) is not applicable to heterogeneous aquifers, but the sole dependence on diffusivity remains true. That is, the response to tides is not sufficient to identify T (or K) and S (or specific storage S s), but needs to be complemented by other data such as concentration or hydraulic tests (Alcolea et al. 2007, 2009). Another advantage of tidal response is that it is cheap to measure and to simulate because equivalent freshwater head response is virtually insensitive to density variations (Ataie-Ashtiani et al. 2001; L.J. Slooten, IDAEA-CSIC, unpublished data, 2009). Therefore, computations required for this type of data can be made with a constant density flow model.
Tidal response can provide large-scale information. The characteristic length L can be quite large for confined aquifers. For example, with a tidal period of half a day, L will equal 1,260 m for a confined aquifer (\( S = 10^{-4} \)) of 1,000 m2/day transmissivity. Obviously, this distance is much shorter for unconfined aquifers. Equation (9) is also valid when several fluctuations are superimposed. Typical tides are dominated by a half-day period, but longer components are also present. In fact, wind or barometric pressure fluctuations may contain modes with periods of several days. This implies that L (Eq. 9) can vary quite widely, so that aquifer fluctuations driven by sea level fluctuations may penetrate significantly inland even in unconfined aquifers.
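The confined versus unconfined contrast can be reproduced directly from the characteristic length \( L = \sqrt{TP/\pi S} \); the unconfined storage value below is an assumed specific yield.

```python
import numpy as np

def tidal_length(T, S, P):
    """Characteristic tidal propagation length L = sqrt(T * P / (pi * S)).

    T: transmissivity (m2/day), S: storage coefficient (-), P: period (day).
    """
    return np.sqrt(T * P / (np.pi * S))

L_confined = tidal_length(T=1000.0, S=1e-4, P=0.5)     # ~1,260 m, as in text
L_unconfined = tidal_length(T=1000.0, S=0.15, P=0.5)   # tens of meters only

# Head fluctuation amplitude decays roughly as exp(-x / L) inland,
# so the tidal signal is still appreciable 500 m inland in the confined case.
damping_at_500m = np.exp(-500.0 / L_confined)
```

The confined result matches the 1,260 m quoted in the text, while the assumed specific yield of 0.15 confines the unconfined response to a narrow coastal strip.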
A sensitivity analysis for tidal response data, aimed at identifying optimal observation locations (Fig. 6), was performed using the methodology described in the previous section Sensitivity, uncertainty and worth of data (L.J. Slooten, IDAEA-CSIC, unpublished data, 2009). It was found that if the aquifer is treated as homogeneous, maximum information is obtained at a distance L from the coast. However, if heterogeneity is acknowledged, maximum information is contained in heads measured at a distance of around L/2 from the coast. Yet, assuming a dense observation network, the parameters that can be best estimated are those right at the coast. This finding supports the earlier assertion about the identification of connectivity. Given that connectivity to the sea is important for coastal aquifer management, it is clear that full advantage of aquifer response to sea level fluctuations should be taken whenever possible.
Initial conditions: aquifer bathymetry
Specifying initial conditions is required for simulating any transient problem. When the history of pumping is well known, the best option usually consists of simulating that history while assuming that the aquifer starts from a steady state. Otherwise, the model will generate spurious results until it accommodates the instabilities introduced by the specified non-equilibrium initial condition (e.g., Werner and Gallagher 2006; Doherty 2008).
It is generally believed that the initial steady state must be the result of a sufficiently long transient simulation. As it turns out, the nonlinear density-dependent flow and transport equations can also be solved directly under steady-state conditions, provided that a sufficiently close initial guess is available. Since such a guess is not easy to come by, most codes do not provide the steady-state option. In an inverse modelling context, however, a good initial guess may be the steady-state solution from the previous inverse-problem iteration.
A problem with starting from a steady state is that it may be unrealistic: the time needed to reach the steady state can be longer than the timescales on which changes in external forcing occur (Feseker 2007). The problem may occur in both directions (i.e., initial salinities larger than suggested by a steady-state simulation, and vice versa). On the one hand, connate saltwater is likely to be found in Holocene aquifers poorly connected to the sea (Gámez et al. 2009). It is also likely to be present in low-permeability areas (Custodio et al. 1971; Bridger and Allen 2006). On the other hand, sea level rose during the Holocene. Therefore, low-permeability zones may not yet have been reached by salt water, although they would be under a steady state with current sea levels. In this regard, one should bear in mind that the last glacial maximum occurred “only” some 15,000 years ago. Therefore, it is very likely that initial salinities do not reflect current sea level in poorly connected areas. This problem can be identified by performing two long-term simulations: one with initially salinized conditions, and one with initial freshwater conditions. If they lead to the same solution, then the problem can be ignored and initial steady-state conditions can be adopted.
Difficulties with initial conditions led to the development of an alternative approach, discussed by Doherty (2008), in which the initial conditions are controlled by estimation parameters: “spreading parameters” that describe the width of the mixing zone around the interface, and “elevation parameters” that define the initial height of the interface above the aquifer bottom.
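The idea can be sketched as a parameterized initial salinity profile: an elevation parameter sets the interface position and a spreading parameter sets the mixing-zone width, both of which can then be adjusted during calibration. The functional form (an error-function transition) and all values below are illustrative, not taken from Doherty (2008).

```python
import math

def initial_salinity(z, z_int=-20.0, w=5.0, c_sea=35.0):
    """Initial salinity (kg/m3) at elevation z (m), from two estimation
    parameters: z_int ("elevation", interface position) and w ("spreading",
    mixing-zone width). Assumed erf-shaped transition, seawater below."""
    return 0.5 * c_sea * (1.0 - math.erf((z - z_int) / w))

c_deep = initial_salinity(-40.0)    # well below the interface: ~seawater
c_shallow = initial_salinity(0.0)   # well above the interface: ~freshwater
```

During inversion, `z_int` and `w` would be treated like any other model parameter, so the initial state is estimated jointly with hydraulic properties.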
The issue of initial conditions also makes apparent the need for a careful assessment of aquifer elevations and of the connection to the sea. As illustrated in Fig. 7, initial conditions may be highly sensitive to the elevation of the discharge point (Gámez et al. 2009). Moreover, valleys of the aquifer bottom should coincide with regions of maximum inland penetration of seawater, even under steady-state conditions (Abarca et al. 2007a). Things worsen if these valleys coincide with high-permeability regions, which should be expected if they correspond to paleochannels deposited during periods of low sea level. In such cases, deep portions will represent preferential flow paths, and their initial salinity makes them perfect candidates for fast SWI. The problem is especially severe in karstic regions, where flow along high-permeability channels may be turbulent, so that Darcy’s law is not valid.
The previous discussion points to the importance of characterizing aquifer elevation and connection to the sea. The most immediate option is to extend the parametrization of Doherty (2008) to the aquifer bottom and sea-aquifer connection. Parameters controlling aquifer bottom and sea connection can then be estimated during calibration. The fact that no efforts along this direction have been published in the scientific literature may reflect that either (1) the resulting inversion is too complex (in 3D models, the grid would have to be updated during calibration), (2) the problem is only truly relevant for unusually high variations in aquifer elevation, or (3) modelers are overcome by other difficulties. In any case, it is clear that the issue requires further analysis.
Computational aspects
Inversion of SWI problems is computationally costly. The high cost mainly reflects the need for the sensitivity matrix \( {\mathbf{J}}_u \) (Eq. 3) of Gauss-Newton methods. In the following, a summary of the methods to compute \( {\mathbf{J}}_u \) and some possible improvements of computational performance are discussed.
Computation of sensitivities
Three methods can be used to compute sensitivities: the adjoint state method (Jacquard and Jain 1965; Townley and Wilson 1985), the influence coefficient method (Becker and Yeh 1972) and the sensitivity equation method (Distefano and Rath 1975). The adjoint state method is not well suited for SWI problems because it is most appropriate for linear problems. It can be used for non-linear problems, but it is no longer convenient, especially for transient ones. The influence coefficient method, also known as the incremental ratio or parameter perturbation method, approximates the sensitivity matrix using a finite difference scheme (i.e., the ratio of change in computed state variables per unit change in each component of the parameter set). This approach requires evaluating the direct model at least \( N_P + 1 \) times (once with the original parameter set and \( N_P \) times corresponding to each parameter perturbation). Therefore, the resulting cost is high (see Shoemaker 2004 for an example of the increase in calibration time). Moreover, an adequate choice of the magnitude of each parameter perturbation is required to obtain a good approximation of the sensitivity matrix. Inaccuracies in the sensitivity matrix may affect the computation of the gradient of the objective function, the covariance matrices and the determination of the correlation between parameters (Hill and Østerby 2003). Precision in the computation of the sensitivity matrix can be enhanced using a higher-order finite difference scheme, at the expense of an increase in CPU time. In spite of these disadvantages, the influence coefficient method is the most widely used method in seawater intrusion applications because of its simplicity and the availability of generic calibration tools such as UCODE (Poeter et al. 2005) or PEST (Doherty 2002), which facilitate solving the inverse problem with conventional simulation codes. Van Meir and Lebbe (2005), for example, used the parameter perturbation method to calibrate an axi-symmetric density-dependent flow model.
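The influence coefficient method can be shown in miniature: a forward-difference Jacobian built from \( N_P + 1 \) model runs. A simple algebraic function stands in for the SWI simulator, so the result can be checked against the analytic derivatives.

```python
import numpy as np

def model(p):
    """Stand-in for a direct simulation: two 'observations' as a function of
    two hypothetical parameters (e.g., log-transmissivity, recharge)."""
    return np.array([p[0] ** 2 + p[1], 3.0 * p[0] - p[1] ** 2])

def influence_coefficients(f, p, dp=1e-6):
    """Forward-difference sensitivity matrix: N_P + 1 model runs for N_P
    parameters, as in the influence coefficient (perturbation) method."""
    f0 = f(p)
    J = np.empty((f0.size, p.size))
    for j in range(p.size):
        pj = p.copy()
        pj[j] += dp                    # perturb one parameter at a time
        J[:, j] = (f(pj) - f0) / dp
    return J

J = influence_coefficients(model, np.array([2.0, 1.0]))
# analytic Jacobian at (2, 1): [[2*p0, 1], [3, -2*p1]] = [[4, 1], [3, -2]]
```

Note how the accuracy hinges on the perturbation size `dp`, which is precisely the tuning issue discussed above.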
The sensitivity equation method computes the sensitivity matrix by differentiating the direct problem equations, which leads to

\( \begin{pmatrix} {\partial f_F}/{\partial \mathbf{h}} & {\partial f_F}/{\partial \mathbf{c}} \\ {\partial f_T}/{\partial \mathbf{h}} & {\partial f_T}/{\partial \mathbf{c}} \end{pmatrix} \begin{pmatrix} {\partial \mathbf{h}}/{\partial \mathbf{p}} \\ {\partial \mathbf{c}}/{\partial \mathbf{p}} \end{pmatrix} = - \begin{pmatrix} {\partial f_F}/{\partial \mathbf{p}} \\ {\partial f_T}/{\partial \mathbf{p}} \end{pmatrix} \)    (10)
where \( f_F \) and \( f_T \) are the (discretized) flow and solute transport equations, respectively. Solving this set of linear systems yields the sensitivities. Evaluating the coefficient matrix and right-hand side in Eq. (10) requires tedious programming and verification, which has deterred modelers from implementing it. An alternative is to use autodifferentiation tools (Rall 1981; Griewank 2000) to generate the necessary code automatically. Rath et al. (2006) used the code SHEMAT (Clauser 2003) to do so while calibrating coupled flow and heat transport. However, autodifferentiation requires the original code to follow some coding conventions (e.g., adapting the code to the Fortran 77 standard, avoiding implicit loops), which can make the process as arduous as the actual implementation of the derivatives. Furthermore, if not correctly implemented, it can worsen the performance of the original code.
Still, exact computation of the sensitivity matrix yields benefits in calibration performance. The computational advantages of the sensitivity equation method can be seen by analyzing the cost of calibrating a given model. The costs of a single iteration of the inverse problem for the influence coefficient and sensitivity equation methods are

\( C_{IC} = \left( 1 + aN_P \right) C_{DP} \)
\( C_{SE} = C_{DP} + C_{SM} + N_P\,C_{LSE} \)    (11)

where a is an integer depending on the finite difference scheme used to approximate the derivatives (1 for backward and forward differences and 2 for central differences), \( C_{IC} \) is the cost of the influence coefficient method, \( C_{SE} \) the cost of the sensitivity equation method, \( C_{DP} \) the cost of solving the direct problem, \( C_{SM} \) the cost of computing the sensitivity matrix and \( C_{LSE} \) the cost of solving a linear system of equations of the form of Eq. (10). Equation (11) shows that the calibration cost grows proportionally to the number of estimated parameters, with a slope equal to \( aC_{DP} \) for the influence coefficient method. The sensitivity equation method has an initial overhead because of the computation of the derivatives, but the growth rate of its cost is only \( C_{LSE} \) (\( \ll aC_{DP} \)).
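The trade-off between the two cost curves can be illustrated with a toy cost model; all unit costs below are assumed (solving the nonlinear direct problem is taken as 100 times the cost of one linear solve, and the derivative overhead as 150).

```python
def cost_influence(n_p, c_dp=100.0, a=1):
    """Per-iteration cost of the influence coefficient method:
    (1 + a * N_P) direct-problem solves (assumed unit costs)."""
    return (1 + a * n_p) * c_dp

def cost_sensitivity_eq(n_p, c_dp=100.0, c_sm=150.0, c_lse=1.0):
    """Per-iteration cost of the sensitivity equation method: one direct
    solve, a derivative-assembly overhead, and N_P cheap linear solves."""
    return c_dp + c_sm + n_p * c_lse

# number of parameters at which the sensitivity equation method,
# despite its overhead, becomes cheaper per iteration
crossover = next(n for n in range(1, 1000)
                 if cost_sensitivity_eq(n) < cost_influence(n))
```

With these assumed costs the crossover occurs after only a couple of parameters, which is consistent with the slopes \( aC_{DP} \) versus \( C_{LSE} \) discussed in the text: the more parameters are estimated, the larger the advantage of the sensitivity equation method.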
A comparison of the performance of the influence coefficient and sensitivity equation methods is shown in Fig. 8. Results correspond to the calibration of a Henry problem, but with a random Gaussian transmissivity field. Calibration was done using the pilot point method for an increasing number of parameters. As can be seen, the cost of successful iterations was dramatically reduced with the sensitivity equation method. However, the overall calibration costs did not differ as much as suggested by Eq. (11). In the implementation adopted here, the influence coefficient method detects failed iterations, which do not require computation of the sensitivities, after the first simulation of that iteration, thus avoiding extra computations. The sensitivity equation method, instead, computes sensitivities in all iterations.
Areas of improvement
The results of the previous example point out that there is considerable room for improving computational performance. Inverse modeling codes may profit from the fact that similar problems are simulated repeatedly during calibration. Stored information on the state variables from previous calibration iterations can be used as an initial guess for the resolution of the nonlinear direct problem (Galarza et al. 1999), which can reduce its cost.
Code parallelization can improve the performance of the inversion process. Parallelization can be done at different levels. Adequate division and numbering of the model mesh result in a direct-problem sparse matrix suitable for parallel linear solvers (Canot et al. 2006). This process is straightforward in finite difference and regular finite element meshes, which may not be appropriate for the geometry of real aquifers. Efficiency relies on the storage scheme and the linear solver. Parallelization can be generalized to all the computations in the problem to improve efficiency. It has been successfully applied to CO2 sequestration problems (Lu and Lichtner 2007), although the required technical resources may not be commonly affordable. Regarding the inverse problem, parameter perturbation methods can benefit greatly from parallelization. If the \( N_P + 1 \) direct-problem computations needed for each inverse-problem iteration are distributed among \( N_P + 1 \) processors, the actual time required for computing the sensitivity matrix is comparable to that of a single direct simulation. This functionality is included in the UCODE and PEST suites. In the same manner, genetic algorithms can benefit from parallel processing, as shown by Bray and Yeh (2008).
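The distribution of perturbation runs can be sketched as follows. Threads stand in here for the separate processes or processors a real deployment (e.g., PEST's parallel run manager) would use, and a simple algebraic function again stands in for the SWI simulator.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def simulator(p):
    """Stand-in for one direct-problem run (two outputs, two parameters)."""
    return np.array([p[0] ** 2 + p[1], p[0] - p[1]])

def parallel_jacobian(f, p, dp=1e-6, workers=4):
    """Distribute the N_P + 1 runs of the perturbation method over workers,
    then assemble the forward-difference sensitivity matrix."""
    runs = [p] + [p + dp * np.eye(p.size)[j] for j in range(p.size)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        results = list(ex.map(f, runs))       # N_P + 1 runs in parallel
    f0 = results[0]
    return np.stack([(fj - f0) / dp for fj in results[1:]], axis=1)

J_par = parallel_jacobian(simulator, np.array([2.0, 1.0]))
```

Because the runs are independent, the wall-clock time for the sensitivity matrix approaches that of a single direct simulation when enough workers are available, which is the speed-up claimed above.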
Conclusions
The discussion presented here points out that the full inversion formalism has not yet been applied to seawater intrusion (but see Dausman et al. 2009). Automatic calibration efforts reported so far are based on numerous simplifications: 2D modeling, ignoring density dependence, neglecting mixing, splitting the problem (separate inversion of different data sets), disregarding variations in aquifer elevation, or combinations of these. These simplifications reflect both conceptual and computational difficulties.
From a computational point of view, the inversion of two non-linear coupled equations on a 3D domain is challenging. Computer cost can be significantly reduced by analytical evaluation of sensitivities or by taking advantage of the fact that similar problems have to be solved, varying only model parameters. However, these kinds of improvements require tedious and costly programming. Instead, recent trends appear to point in the direction of generic inversion codes such as PEST or UCODE, whose performance can be greatly enhanced by parallelization.
It can be contended, however, that the main difficulties reflect conceptual shortcomings. Moreover, SWI inversion is complex because SWI models depend on many factors that can be neglected in conventional freshwater aquifers. The use and meaning of measured heads and concentrations is sensitive to borehole construction (length of open interval) and history (whether full of freshwater or saltwater). These problems can be addressed by explicitly modeling the measurement process, which is feasible, but represents an added source of complexity.
Seawater intrusion is sensitive to aquifer bathymetry and initial conditions. The latter can be obtained numerically if a steady state is chosen as initial state. However, the solution may be difficult because it requires a good initial guess. Fortunately, such a guess can be obtained from previous iterations in the context of automatic inversion. Unfortunately, initial conditions may not be at steady state because actual salinization prior to pumping may not reflect current sea level. In such cases, an option is to parameterize initial salinities, which are then estimated during model calibration. In fact, the same can be done regarding aquifer bathymetry (especially the elevation of the discharge point in confined aquifers). Obviously, these options represent a marked increase in model complexity. Further analysis is needed to find out whether and when they are sensible.
These difficulties are partially overcome by the availability of informative extra data sets, notably electromagnetic geophysics and tidal response. These data are highly informative, relatively easy to obtain, and they provide extensive areal coverage. Taking advantage of them increases computational cost and conceptual complexity of inversion, but is likely to be worth the effort.
In all, the time is ripe. The number of publications on the conceptual aspects of SWI has grown exponentially in recent years. Therefore, most of the difficulties addressed here should be overcome soon. As a result, a surge in SWI inversion should be expected.
References
Abarca E, Vázquez-Suñé E, Carrera J, Capino B, Gámez D, Batlle F (2006) Optimal design of measures to correct seawater intrusion. Water Resour Res 42(9), W09415
Abarca E, Carrera J, Sanchez-Vila X, Dentz M (2007a) Anisotropic dispersive Henry problem. Adv Water Resour 30(4):913–926
Abarca E, Carrera J, Sanchez-Vila X, Voss CI (2007b) Quasi-horizontal circulation cells in 3D seawater intrusion. J Hydrol 339(3–4):118–129
Alcolea A, Carrera J, Medina A (2006) Pilot points method incorporating prior information for solving the groundwater flow inverse problem. Adv Water Resour 29(11):1678–1689
Alcolea A, Castro E, Barbieri M, Carrera J, Bea S (2007) Inverse modeling of coastal aquifers using tidal response and hydraulic tests. Ground Water 45(6):711–722
Alcolea A, Renard P, Mariethoz G, Bertone F (2009) Reducing the impact of a desalination plant using stochastic modeling and optimization techniques. J Hydrol 365:275–288
Ataie-Ashtiani B, Volker RE, Lockington DA (2001) Tidal effects on groundwater dynamics in unconfined aquifers. Hydrol Process 15:655–669
Barazzuoli P, Nocchi M, Rigati R, Salleolini M (2008) A conceptual and numerical model for groundwater management: a case study on a coastal aquifer in southern Tuscany, Italy. Hydrogeol J 16(8):1557–1576
Barlebo HC, Hill MC, Rosbjerg D (2004) Investigating the Macrodispersion Experiment (MADE) site in Columbus, Mississippi, using a three-dimensional inverse flow and transport model. Water Resour Res 40(4), W04211
Bauer P, Held RJ, Zimmermann S, Linn F, Kinzelbach W (2006a) Coupled flow and salinity transport modelling in semi-arid environments: the Shashe River Valley, Botswana. J Hydrol 316(1–4):163–183
Bauer P, Supper R, Zimmermann S, Kinzelbach W (2006b) Geoelectrical imaging of groundwater salinization in the Okavango Delta, Botswana. J Appl Geophys 60(2):126–141
Becker L, Yeh WWG (1972) Identification of parameters in unsteady open channel flows. Water Resour Res 8(4):956–965
Benhachmi, MK, Ouazar D, Naji A, Cheng AHD, El Harrouni K (2003) Pumping optimization in saltwater intruded aquifers by simple genetic algorithm-Deterministic model. Proceedings of Coastal Aquifers Intrusion Technology: Mediterranean Countries International Conference (TIAC'03), vol 1, Alicante, Spain, March 2003, pp 291-293
Bray BS, Yeh WWG (2008) Improving seawater barrier operation with simulation optimization in southern California. J Water Res Pl-ASCE 134(2):171–180
Bray BS, Tsai FTC, Sim Y, Yeh WWG (2007) Model development and calibration of a saltwater intrusion model in southern California. JAWRA 43(5):1329–1343
Bridger DW, Allen DM (2006) An investigation into the effects of diffusion on salinity distribution beneath the Fraser River Delta, Canada. Hydrogeol J 14(8):1423–1442
Canot E, de Dieuleveult C, Erhel J (2006) A parallel software for a saltwater intrusion problem. In: Joubert GR, Nagel WE, Peters FJ, Plata O, Tirado P, Zapata E (eds) Parallel computing: current and future issues of high-end computing, Proceedings of the International Conference ParCo 2005, vol 33. NIC Series, John von Neumann Institute for Computing, Jülich, Germany, pp 399–406
Carr PA, Vanderkamp GS (1969) Determining aquifer characteristics by tidal method. Water Resour Res 5:1023–1031
Carrera J (1987) State of the art of the inverse problem applied to the flow and solute transport problems. In: Groundwater flow and quality modeling, NATO ASI Ser:549-585
Carrera J, Glorioso L (1991) On geostatistical formulations of the groundwater flow inverse problem. Adv Water Resour 14(5):273–283
Carrera J, Neuman SP (1986a) Estimation of aquifer parameters under transient and steady-state conditions: 1. maximum likelihood method incorporating prior information. Water Resour Res 22(2):199–210
Carrera J, Neuman SP (1986b) Estimation of aquifer parameters under transient and steady-state conditions: 2. uniqueness, stability and solution algorithms. Water Resour Res 22(2):211–227
Carrera J, Neuman SP (1986c) Estimation of aquifer parameters under transient and steady-state conditions: 3. application to synthetic and field data. Water Resour Res 22(2):228–242
Carrera J, Alcolea A, Medina A, Hidalgo J, Slooten L (2005) Inverse problem in hydrogeology. Hydrogeol J 13:206–222
Chang S, Yeh WWG (1976) Proposed algorithm for solution of large-scale inverse problem in groundwater. Water Resour Res 12(3):365–374
Clauser C (ed) (2003) Numerical simulation of reactive flow in hot aquifers. SHEMAT and PROCESSING SHEMAT. Springer, New York
Comte JC, Banton O (2007) Cross-validation of geo-electrical and hydrogeological models to evaluate seawater intrusion in coastal aquifers. Geophys Res Lett 34(10):L10402
Cooley RL (1985) A comparison of several methods of solving nonlinear-regression groundwater-flow problems. Water Resour Res 21(10):1525–1538
Custodio E, Bayó A, Pelaez MD (1971) Geoquímica y datación de aguas para el estudio del movimiento de las aguas subterráneas en el delta del Llobregat (Barcelona) [Geochemistry and water datation for the study of the movement of groundwater in the Llobregat Delta (Barcelona) ]. In: Primer congreso Hispano-Luso-Americano de Geología Económica, vol 5, Madrid-Lisboa Comunicación E-5-2. IGME, Madrid, pp 2–23
Dagan G (1985) Stochastic modeling of groundwater-flow by unconditional and conditional probabilities: the inverse problem. Water Resour Res 21:65–72
Dausman AM, Doherty J, Langevin CD (2009) Hypothesis testing of buoyant plume migration using a highly parameterized variable-density groundwater model at a site in Florida, USA. Hydrogeol J. doi:10.1007/s10040-009-0511-6
Davis SN (1969) Porosity and permeability of natural materials. In: DeWiest RJM (ed) Flow through porous media. Academic, New York, pp 54–86
de Marsily G, Delhomme JP, Delay F, Buoro A (1999) 40 years of inverse problems in hydrogeology. CR Acad Sci Paris Earth Planet Sci 329(2):73–87
Distefano N, Rath A (1975) An identification approach to subsurface hydrological systems. Water Resour Res 11(6):1005–1012
Doherty J (2002) PEST: Model-independent parameter estimation user manual, 5th edn. Watermark Numerical Computing, Brisbane, Queensland, Australia
Doherty J (2008) Incorporating initial conditions in the model calibration process. Proceedings of the 20th SWIM meeting, Naples, FL, USA, 23–27 June 2008, pp 64–67. http://conference.ifas.ufl.edu/swim/papers.pdf. Accessed September 2009
Duan QY, Sorooshian S, Gupta V (1992) Effective and efficient global optimization for conceptual rainfall-runoff models. Water Resour Res 28(4):1015–1031
Emsellem Y, de Marsily G (1971) An automatic solution for the inverse problem. Water Resour Res 7(5):1264–1283
Feseker T (2007) Numerical studies on saltwater intrusion in a coastal aquifer in northwestern Germany. Hydrogeol J 15:267–279
Galarza GA, Carrera J, Medina A (1999) Computational techniques for optimization of problems involving non-linear transient simulations. Int J Numer Methods Eng 45:319–334
Gámez D, Simó JA, Lobo FJ, Barnolas A, Carrera J, Vázquez-Suñé E (2009) Onshore–offshore correlation of the Llobregat deltaic system, Spain: development of deltaic geometries under different relative sea-level and growth fault influences. Sediment Geol 217(1–4):64–84. doi:10.1016/j.sedgeo.2009.03.007
Giudici M, Ponzini G, Parravicini G (2000) Uniqueness and stability of the determination of aquifer properties with inverse problems. In: Bjerg PL, Engesgaard P, Krom TD (eds) Proceedings of the International Conference on Groundwater Research, Copenhagen, Denmark, 6–8 June 2000, pp 91–92
Griewank A (2000) Evaluating derivatives: principles and techniques of algorithmic differentiation. SIAM, Philadelphia
He B, Takase K, Wang Y (2007) Regional groundwater prediction model using automatic parameter calibration SCE method for a coastal plain of Seto Inland Sea. Water Resour Manage 21(6):947–959
Henry H (1964) Effects of dispersion on salt encroachment in coastal aquifers. US Geol Surv Water Suppl Pap 1613-C
Hernández AF, Neuman SP, Guadagnini A, Carrera J (2006) Inverse stochastic moment analysis of steady state flow in randomly heterogeneous media. Water Resour Res 42(5), W05425
Hill M (1998) Methods and guidelines for effective model calibration. US Geol Surv Water Resour Invest Rep 98-4005
Hill MC, Østerby O (2003) Determining extreme parameter correlation in ground water models. Ground Water 41(4):420–430
Hunt RJ, Doherty J, Tonkin MJ (2007) Are models too simple? Arguments for increased parameterization. Ground Water 45:254–262
Iribar V, Carrera J, Custodio E, Medina A (1997) Inverse modelling of seawater intrusion in the Llobregat delta deep aquifer. J Hydrol 198(1–4):226–244
Jacquard P, Jain C (1965) Permeability distribution from field pressure data. Soc Pet Eng J 5(4):281–294
Kafri U, Goldman M, Lyakhovsky V, Scholl C, Helwig S, Tezkan B (2007) The configuration of the fresh-saline groundwater interface within the regional Judea Group carbonate aquifer in northern Israel between the Mediterranean and the Dead Sea base levels as delineated by deep geoelectromagnetic soundings. J Hydrol 344(1–2):123–134
Karahanoglu N, Doyuran V (2003) Finite element simulation of seawater intrusion into a quarry-site coastal aquifer, Kocaeli-Darica, Turkey. Environ Geol 44(4):456–466
Katsifarakis KL, Petala Z (2006) Combining genetic algorithms and boundary elements to optimize coastal aquifers' management. J Hydrol 327(1–2):200–207. doi:10.1016/j.jhydrol.2005.11.016
Kitanidis PK, Vomvoris EG (1983) A geostatistical approach to the inverse problem in groundwater modelling (steady state) and one dimensional simulations. Water Resour Res 19(3):677–690
Knopman DS, Voss CI (1989) Multiobjective sampling design for parameter-estimation and model discrimination in groundwater solute transport. Water Resour Res 25(10):2245–2258
Knudby C, Carrera J (2006) On the use of apparent hydraulic diffusivity as an indicator of connectivity. J Hydrol 329(3–4):377–389
Lebbe L (1999) Parameter identification in fresh-saltwater flow based on borehole resistivities and freshwater head data. Adv Water Resour 22(8):791–806
Lu C, Lichtner PC (2007) High resolution numerical investigation on the effect of convective instability on long term CO2 storage in saline aquifers. J Phys: Conf Ser 78:012042. doi:10.1088/1742-6596/78/1/012042
Mantoglou A (2003) Pumping management of coastal aquifers using analytical models of saltwater intrusion. Water Resour Res 39(12):1335
Martínez-Landa L, Carrera J (2006) A methodology to interpret cross-hole tests in a granite block. J Hydrol 325:222–240
McLaughlin D, Townley LLR (1996) A reassessment of the groundwater inverse problem. Water Resour Res 32(5):1131–1161
Medina A, Carrera J (1996) Coupled estimation of flow and solute transport parameters. Water Resour Res 32:3063–3076
Medina A, Carrera J (2003) Geostatistical inversion of coupled problems: dealing with computational burden and different types of data. J Hydrol 281:251–264
Momii K, Shoji J, Nakagawa K (2005) Observations and modelling of seawater intrusion for a small limestone island aquifer. Hydrol Process 19(19):3897–3909
Mulligan AE, Evans RL, Lizarralde D (2007) The role of paleochannels in groundwater/seawater exchange. J Hydrol 335(3–4):313–329
Nelson RW (1960) In-place measurement of permeability in heterogeneous media: 1. theory of a proposed method. J Geophys Res 65(6):1753–1758
Neuman SP (1973) Calibration of distributed parameter groundwater flow models viewed as a multiple-objective decision process under uncertainty. Water Resour Res 9(4):1006–1021
Neuman SP, Carrera J (1985) Maximum-likelihood adjoint-state finite-element estimation of groundwater parameters under steady-state and nonsteady-state conditions. Appl Math Comput 17:405–432
Neuman SP, Yakowitz S (1979) Statistical approach to the inverse problem of aquifer hydrology: 1. theory. Water Resour Res 15(4):845–860
Person M, Taylor JZ, Dingman SL (1998) Sharp interface models of salt water intrusion and wellhead delineation on Nantucket Island, Massachusetts. Ground Water 36(5):731–742
Poeter EP, Hill MC (1997) Inverse models: a necessary next step in groundwater modeling. Ground Water 35(2):250–260
Poeter EP, Hill MC, Banta ER, Mehl S, Christensen S (2005) UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation. US Geol Surv Tech Methods 6-A11, 283 pp
Pool M, Carrera J (2009) Dynamics of negative hydraulic barriers to prevent seawater intrusion. Hydrogeol J. doi:10.1007/s10040-009-0516-1
Post V, Kooi H, Simmons C (2007) Using hydraulic head measurements in variable-density ground water flow analyses. Ground Water 45:664–671
Rall LB (1981) Automatic differentiation: techniques and applications. Lecture Notes in Computer Science, vol 120. Springer, Berlin
Rao SVN, Thandaveswara BS, Bhallamudi SM, Srinivasulu V (2003) Optimal groundwater management in deltaic regions using simulated annealing and neural networks. Water Resour Manage 17(6):409–428
Rath V, Wolf A, Bücker HM (2006) Joint three-dimensional inversion of coupled groundwater flow and heat transfer based on automatic differentiation: sensitivity calculation, verification, and synthetic examples. Geophys J Int 167:453–466
Refsgaard JC, van der Sluijs JP, Brown J, van der Keur P (2006) A framework for dealing with uncertainty due to model structure error. Adv Water Resour 29:1586–1597
Rötting TS, Carrera J, Bolzicco J, Salvany JM (2006) Stream-stage response tests and their joint interpretation with pumping tests. Ground Water 44(3):371–385
Rubin Y, Dagan G (1987) Stochastic identification of transmissivity and effective recharge in steady groundwater flow: 1. theory. Water Resour Res 23(7):1185–1192
Saltelli A, Ratto M, Tarantola S, Campolongo F (2005) Sensitivity analysis for chemical models. Chem Rev 105:2811–2827
Sanz E, Voss CI (2006) Inverse modeling for seawater intrusion in coastal aquifers: insights about parameter sensitivities, variances, correlations and estimation procedures derived from the Henry problem. Adv Water Resour 29(3):439–457
Shalev E, Lazar A, Wollman S (2009) Biased monitoring of fresh water-salt water mixing zone in coastal aquifers. Ground Water 47(1):49–56
Shoemaker WB (2004) Important observations and parameters for a salt water intrusion model. Ground Water 42:829–840
Tellam JH, Lloyd JW, Walters M (1986) The morphology of a saline groundwater body: its investigation, description and possible explanation. J Hydrol 83(1–2):1–21
Tonkin MJ, Doherty J (2005) A hybrid regularized inversion methodology for highly parameterized environmental models. Water Resour Res 41, W10412. doi:10.1029/2005WR003995
Townley LR, Wilson JL (1985) Computationally efficient algorithms for parameter estimation and uncertainty propagation in numerical models of groundwater flow. Water Resour Res 21(12):1851–1860
Tsai FTC, Sun NZ, Yeh WWG (2003) Global-local optimization for parameter structure identification in three-dimensional groundwater modeling. Water Resour Res 39(2):1043. doi:10.1029/2001WR001135
Usunoff E, Carrera J, Mousavi SF (1992) An approach to the design of experiments for discriminating among alternative conceptual models. Adv Water Resour 15(3):199–214
Van Meir N, Lebbe L (2005) Parameter identification for axi-symmetric density-dependent groundwater flow based on drawdown and concentration data. J Hydrol 309(1–4):167–177
Vázquez-Suñé E, Abarca E, Carrera J, Capino B, Gámez D, Pool M, Simo A, Batlle F, Niñerola JM, Ibañez X (2006) Groundwater modelling as a tool for the European Water Framework Directive (WFD) application: the Llobregat case. Phys Chem Earth 31(17):1015–1029
Vecchia AV, Cooley RL (1987) Simultaneous confidence and prediction intervals for nonlinear regression models with application to a groundwater flow model. Water Resour Res 23(7):1237–1250
Vermeulen PTM, Stroet CBMT, Heemink AW (2006) Model inversion of transient nonlinear groundwater flow models using model reduction. Water Resour Res 42(9), W09417
Werner AD, Gallagher MR (2006) Characterisation of sea-water intrusion in the Pioneer Valley, Australia, using hydrochemistry and three-dimensional numerical modelling. Hydrogeol J 14(8):1452–1469
Willmann M, Carrera J, Sanchez-Vila X (2008) Transport upscaling in heterogeneous aquifers: What physical parameters control memory functions? Water Resour Res 44, W12437
Yakirevich A, Melloul A, Sorek S, Shaath S, Borisov V (1998) Simulation of seawater intrusion into the Khan Yunis area of the Gaza Strip coastal aquifer. Hydrogeol J 6(4):549–559
Yechieli Y, Kafri U, Goldman M, Voss CI (2001) Factors controlling the configuration of the fresh-saline water interface in the Dead Sea coastal aquifers: synthesis of TDEM surveys and numerical groundwater modelling. Hydrogeol J 9(4):367–377
Yeh WWG (1986) Review of parameter estimation procedures in groundwater hydrology: the inverse problem. Water Resour Res 22:95–108
Yeh WWG, Bray B (2006) Modeling and optimization of seawater intrusion barriers in southern California coastal plain. Technical Completion Reports, Paper 983, University of California Water Resources Center, Riverside, CA
Acknowledgements
The authors are grateful for the comments of three anonymous reviewers and the editors (Elena Abarca and Vincent Post). Much of the experience summarized here was gained through projects funded by ENRESA (Spanish radioactive waste management company), CICYT (Spanish research funding agency), ACA (Catalonian water agency), IGME (Spanish geological survey) and the EU (notably project SALTRANS).
Carrera, J., Hidalgo, J.J., Slooten, L.J. et al. Computational and conceptual issues in the calibration of seawater intrusion models. Hydrogeol J 18, 131–145 (2010). https://doi.org/10.1007/s10040-009-0524-1