Abstract
For a given nonlinear problem discretized by standard finite elements, we propose two iterative schemes to solve the discrete problem. We prove the well-posedness of the corresponding problems and their convergence. Next, we construct error indicators and prove optimal a posteriori estimates where we treat separately the discretization and linearization errors. Some numerical experiments confirm the validity of the schemes and allow us to compare them.
1 Introduction
Let \(\Omega \) be an open polygon in \(\mathbb {R}^{2}\). We consider the problem
where \(\lambda \) and p are two positive real numbers. The right-hand side f belongs to the dual space \(H^{-1}(\Omega )\) of the Sobolev space \(H^{1}_{0}(\Omega )\). The a posteriori error analysis of finite element approximations of the present model problem has been studied by Bernardi et al. [2]. In fact, let \(V_h \subset H^1_0(\Omega )\) be the \({\mathcal {P}}_1\) finite element space associated with a regular family of triangulations of \(\Omega \), denoted by \({\mathcal {T}}_h.\) Using \({\mathcal {P}}_1\) Lagrange finite elements, the discrete variational problem obtained by the Galerkin method amounts to (from now on, we denote by \((\cdot ,\cdot )\) the scalar product of \(L^2(\Omega )\)).
Find \(u_h\in V_h\) such that
In order to solve the discrete nonlinear problem (1.3), we introduced in [2] the following linear numerical scheme, called fixed-point algorithm:
Find \(u_h^{i+1}\in V_h\) such that
This algorithm is only conditionally convergent: its convergence depends on the parameters \(\lambda \), p and f. Furthermore, the a priori estimate for the discrete variational problem is presented in [2]. The a posteriori analysis of the discretization is also performed there, but it requires that the discrete solution belong to a neighborhood of the exact solution u.
As a continuation of our recent work on the a posteriori analysis of the present nonlinear problem, see [2], we introduce in this paper two different convergent numerical schemes to solve it. The main idea is to introduce a parameter \(\alpha \) which can be controlled in order to ensure convergence. Let \(u_h^0\) be an initial guess; for \(i\ge 0\) we introduce the following two algorithms:
First numerical scheme.
Find \(u_h^{i+1}\in V_h\) such that
Second numerical scheme.
Find \(u_h^{i+1}\in V_h\) such that
For a parameter \(\alpha \) larger than a specific constant that depends on \(\lambda \), p and the data f, problems (1.5) and (1.6) always converge. Moreover, our objective is to derive a posteriori error estimates distinguishing the linearization and discretization errors.
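To make the role of \(\alpha \) concrete, here is a minimal 1D finite-difference sketch of the two iterations. It assumes the reading suggested by the proofs of Sect. 3: the nonlinear coefficient \(|u|^{2p}\) is frozen at the previous iterate, and the stabilization term is \(\alpha (u_h^{i+1}-u_h^i, v)\) for (1.5) and \(\alpha (\nabla (u_h^{i+1}-u_h^i), \nabla v)\) for (1.6). The function name, the right-hand side f and all parameter values are illustrative only and are not taken from the paper.

```python
import numpy as np

def solve_fixed_point(n=100, lam=10.0, p=2, alpha=5.0, scheme="mass",
                      tol=1e-12, max_iter=500):
    """1D finite-difference analogue of schemes (1.5)/(1.6).

    Solves -u'' + lam*|u|^(2p)*u = f on (0,1), u(0)=u(1)=0, by a
    stabilized fixed-point iteration (hypothetical reading of the schemes).
    Returns the converged vector and the number of iterations used.
    """
    h = 1.0 / (n + 1)
    f = np.ones(n)                       # illustrative right-hand side
    # 1D stiffness matrix (finite-difference Laplacian)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.zeros(n)                      # initial guess u^0 = 0
    for i in range(max_iter):
        D = lam * np.diag(np.abs(u) ** (2 * p))   # nonlinearity frozen at u^i
        if scheme == "mass":             # analogue of (1.5): + alpha*(u^{i+1}-u^i, v)
            M = A + D + alpha * np.eye(n)
            rhs = f + alpha * u
        else:                            # analogue of (1.6): + alpha*(grad diff, grad v)
            M = (1.0 + alpha) * A + D
            rhs = f + alpha * A @ u
        u_new = np.linalg.solve(M, rhs)
        if np.linalg.norm(u_new - u) <= tol * max(np.linalg.norm(u_new), 1.0):
            return u_new, i + 1
        u = u_new
    return u, max_iter
```

With these toy data both iterations converge for moderate \(\alpha \); the mass-stabilized variant contracts faster here, which is at least consistent with the CPU comparison reported in Sect. 5.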
In practice, the present problem (1.1)–(1.2) is solved using an iterative method involving a linearization process and approximated by the finite element method. Thus, two sources of error appear, namely linearization and discretization. The main result in [2] is a two-sided bound of the error distinguishing linearization and discretization errors in the context of an adaptive procedure. This type of analysis was introduced by Chaillou and Suri [3, 4] for a general class of problems characterized by strongly monotone operators and developed by El Alaoui et al. [5] for a class of second-order monotone quasi-linear diffusion-type problems approximated by piecewise affine, continuous finite elements. We wish to extend these results to the problem that we consider and prove optimal estimates.
In the following, we summarize the differences between the scheme (1.4) (studied in [2]) and the schemes (1.5) and (1.6) studied in this work:
(1)

Scheme (1.4) converges only when the data \((f,\lambda ,p)\) satisfy a smallness condition (see [2], Theorem 4.1), whereas the two schemes presented in this paper, (1.5) and (1.6), introduce a parameter \(\alpha \) which can be calibrated to obtain convergence for any data. Numerical comparisons are given in Sect. 5, especially in Tables 1 and 2.

(2)

In [2], the a posteriori error estimate corresponding to the fixed-point scheme is established only when the discrete iterative solution \(u_h^{i+1}\) lies in a neighborhood of the exact solution u. With the two schemes presented in this work, we derive a posteriori error estimates for any iterative solution \(u_h^{i+1}\), without the neighborhood constraint.
The paper is organized as follows: Sect. 2 introduces the variational formulation and some preliminary results; Sect. 3 describes the finite element discretization and proves the convergence of the two schemes; Sect. 4 is devoted to the a posteriori error analysis; and Sect. 5 presents the numerical results.
2 Preliminaries
In this section, we describe the variational formulation associated with the nonlinear problem (1.1)–(1.2) and introduce and recall some corresponding properties which will be used later.
We denote by \(L^p(\Omega )\) the space of measurable functions summable with power p, and for all \(v\in L^p(\Omega )\), the corresponding norm is defined by
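The norm in question is the standard \(L^p\) norm,

$$\begin{aligned} \Vert v \Vert _{L^p(\Omega )} = \left( \int _{\Omega } |v(\mathbf{x })|^p \, d\mathbf{x } \right) ^{1/p}. \end{aligned}$$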
In the case \(p =2\), we also denote this norm by \(\Vert v \Vert _{0,\Omega } = \Vert v \Vert _{L^2(\Omega )}\). Throughout this paper, we constantly use the classical Sobolev space
which is equipped respectively with the semi-norm and norm
In particular, we consider the following space
and its dual space \(H^{-1}(\Omega )\). We recall the Sobolev imbeddings (see Adams [1], Chapter 3).
Lemma 2.1
For any bounded domain \(\Omega \) in \(\mathbb {R}^2\), for all j, \(1 \le j<\infty \), there exists a positive constant \(S_j\) such that
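In the form used repeatedly in Sects. 3 and 4 (with \(j = 2, 4, 8, \ldots \)), inequality (2.1) is presumably the embedding estimate

$$\begin{aligned} \forall v \in H^1_0(\Omega ),\qquad \Vert v \Vert _{L^j(\Omega )} \le S_j \, |v|_{1,\Omega }. \end{aligned}$$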
Remark 2.2
For domains \(\Omega \) in \(\mathbb {R}^3\), inequality (2.1) with the standard definition of \(H_0^1(\Omega )\) remains valid only for \(j\le 6\), which motivates working in dimension \(d=2\).
Setting \(X= H^{1}_{0}(\Omega )\), the model problem (1.1)–(1.2) admits the equivalent variational formulation:
Find \(u\in X\) such that
Theorem 2.3
[2] Problem (2.2) admits a unique solution \(u \in X.\)
We now introduce the following technical lemmas:
Lemma 2.4
Let a and b be two nonnegative real numbers and let \(p\ge 1\). We have the following relation
Proof
The result follows from applying the mean value theorem to \(f(x)=x^{p}\) with \(x\ge 0\).
\(\square \)
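A plausible reconstruction of the argument: for \(0 \le b \le a\), the mean value theorem applied to \(f(x)=x^p\) gives some \(\xi \in (b,a)\) such that

$$\begin{aligned} a^{p} - b^{p} = p\, \xi ^{p-1} (a-b) \le p \max (a,b)^{p-1}\, |a-b|, \end{aligned}$$

and the case \(a \le b\) follows by symmetry.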
Remark 2.5
For a positive real number \(p<1\) and for any real numbers a and b, the last lemma can be written as follows
Lemma 2.6
For all \(x,y \in \mathbb {R}\) and \(p\in \mathbb {R}^{+}\), we have
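The inequality invoked later (to show \(T_2 \ge 0\)) is presumably the monotonicity of \(t \mapsto |t|^{p} t\):

$$\begin{aligned} \big ( |x|^{p} x - |y|^{p} y \big ) (x-y) \ge 0, \end{aligned}$$

which holds since the function \(t \mapsto |t|^{p} t\) is nondecreasing on \(\mathbb {R}\).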
Remark 2.7
In the sequel, we denote by C, \(C',\ldots \) generic constants that can vary from line to line but are always independent of all discretization parameters.
3 Finite Element Discretization and Convergence
In this section, we begin to collect some useful notation concerning the discrete setting and the a priori estimate. Then, we show the convergence of the schemes (1.5) and (1.6).
Let \(({{\mathcal {T}}}_h)_{h}\) be a regular family of triangulations of \( \Omega \), in the sense that, for each h:
- The union of all elements of \({{\mathcal {T}}}_h\) is equal to \( {{\overline{\Omega }}}\).
- The intersection of two different elements of \({{\mathcal {T}}}_h \), if not empty, is a vertex or a whole edge of both triangles.
- The ratio of the diameter \(h_K\) of any element K of \( {{\mathcal {T}}}_h\) to the diameter \(\delta _K\) of its inscribed circle is smaller than a constant independent of h.
As usual, h stands for the maximum of the diameters \(h_K\) of the elements \(K\in {\mathcal {T}}_h\). Let \(V_h \subset H^1_0(\Omega )\) be the Lagrange \({\mathcal {P}}_\ell \) finite element space associated with \({\mathcal {T}}_h,\) more precisely
$$\begin{aligned} V_h= \bigg \{ v_h \in H^1_0(\Omega );\; \forall K \in {\mathcal {T}}_h, \; v_{h_{|K}} \in {\mathcal {P}}_\ell (K) \bigg \}, \end{aligned}$$where \({\mathcal {P}}_\ell (K)\) stands for the space of restrictions to K of polynomial functions of degree \(\le \ell \) on \(\mathbb {R}^2\).
Remark 3.1
(Inverse inequality) There exists a constant \(S_I>0\) such that for all \(v_h \in V_h\) and \(K\in {{\mathcal {T}}}_h\), we have
Theorem 3.2
[2] Let u be the solution of (2.2). Then, Problem (1.3) has a unique solution \(u_h\). Moreover, if \(u \in H^2(\Omega )\), the following estimate holds
In the following, we investigate the convergence of the schemes (1.5) and (1.6).
Theorem 3.3
Problem (1.5) admits a unique solution. Furthermore, if the initial value \(u_h^0\) satisfies the condition
then the solution of the problem (1.5) satisfies the estimates
Proof
It is readily checked that problem (1.5) has a unique solution as a consequence of the coercivity of the bilinear form.
We take \(v_h = u_h^{i+1}\) in Eq. (1.5) and obtain:
By using the inequality
we deduce the relation
We now prove the first estimate in (3.3) by induction on i. Starting with the relation (3.2), we suppose that we have
We are in one of the following two situations:
- If \( ||u_h^{i+1}||_{0,\Omega } \le ||u_h^i||_{0,\Omega }\), we obviously deduce the bound $$\begin{aligned} ||u_h^{i+1}||_{0,\Omega } \le S_2 ||f||_{-1,\Omega } \end{aligned}$$ from the induction hypothesis.
- If \( ||u_h^{i+1}||_{0,\Omega } \ge ||u_h^i||_{0,\Omega }\), Eq. (3.4) gives $$\begin{aligned} |u_h^{i+1} |_{1,\Omega }^2 \le ||f||_{-1,\Omega }^2 \end{aligned}$$ and we deduce the inequality $$\begin{aligned} \begin{array}{rcl} ||u_h^{i+1} ||_{0,\Omega }^2 &{}\le &{} S_2^2 |u_h^{i+1} |_{1,\Omega }^2\\ &{}\le &{} S_2^2 ||f||_{-1,\Omega }^2. \end{array} \end{aligned}$$
This gives the first part of (3.3). We now check the second part. We have from (3.4)
whence the desired result. \(\square \)
Theorem 3.4
Problem (1.6) admits a unique solution. Furthermore, if the initial value \(u_h^0\) verifies the condition
then the solution of Problem (1.6) satisfies the estimate
Proof
We follow the same proof as for Theorem 3.3. It is readily checked that problem (1.6) has a unique solution as a consequence of the coercivity of the bilinear form.
We take \(v_h = u_h^{i+1}\) in Eq. (1.6) and obtain:
We deduce the relation
We prove the relation (3.6) recursively. Starting with (3.5), we suppose that we have
We are in one of the following two situations:
- If \( |u_h^{i+1}|_{1,\Omega } \le |u_h^i|_{1,\Omega }\), we deduce the bound $$\begin{aligned} |u_h^{i+1}|_{1,\Omega } \le ||f||_{-1,\Omega }. \end{aligned}$$
- If \( |u_h^{i+1}|_{1,\Omega } \ge |u_h^i|_{1,\Omega }\), it follows from (3.7) that $$\begin{aligned} |u_h^{i+1} |_{1,\Omega }^2 \le ||f||_{-1,\Omega }^2 . \end{aligned}$$
We conclude the proof of the theorem. \(\square \)
Unfortunately the proof of the next result is much more technical.
Theorem 3.5
Assume that there exists \(\beta >0\) such that, for every element \(K\in {{\mathcal {T}}}_h\), we have
(which means that the family of triangulations is uniformly regular). Under the assumptions of Theorem 3.3 and for
where
the sequence of solutions \((u_h^i)\) of Problem (1.5) converges in \(H^1_0(\Omega )\) to the solution \(u_h\) of Problem (1.3).
Proof
We take the difference between Eqs. (1.5) and (1.3) with \(v_h = u^{i+1}_h - u_h\) and obtain the equation
The last term in the previous equation, denoted by T, can be decomposed as
We denote by \(T_1\) and \(T_2\), respectively, the first and the second terms in the right-hand side of the last equation. Using Lemma 2.6, we have \(T_2 \ge 0\). Then we derive by using Lemma 2.4 (with p replaced by 2p)
We set \(C= 4 S_4 S_8 S_{8(2p-1)}^{2p-1} \frac{S_I^{2p}}{\beta ^{2p}} S_2^{2p} ||f||_{-1,\Omega }^{2p}\) and use Young's inequality \(ab \le \frac{1}{2\varepsilon } a^2 + \frac{\varepsilon }{2} b^2\) (with \(\varepsilon = \frac{1}{C p\lambda h^{-2p}}\)) to obtain the following bound
We choose \(\alpha > C^2p^2\lambda ^2 h^{-4p}\), set \(C_1 = \frac{\alpha - C^2p^2\lambda ^2 h^{-4p}}{2}\) and obtain
We deduce that, for all \(i\ge 1\), we have (if \(|| u_h^{i} - u_h ||_{0,\Omega } \ne 0\))
and we deduce the convergence of the sequence \((u_h^{i+1} - u_h)\) in \(L^2(\Omega )\), hence the convergence of the sequence \((u_h^{i})\) in \(L^2(\Omega )\). By taking the limit of (3.9), we get
As \(T_2 \ge 0\), we deduce that \(| u_h^{i+1} - u_h |_{1,\Omega }\) converges to 0 and \(u_h^{i+1}\) converges to \(u_h\) in \(H^1_0(\Omega )\). \(\square \)
Theorem 3.6
Under the assumptions of Theorem 3.4 and for
the sequence of solutions \((u_h^i)\) of Problem (1.6) converges in \(H^1_0(\Omega )\) to the solution \(u_h\) of Problem (1.3).
Proof
We take the difference between Eqs. (1.6) and (1.3) with \(v_h = u^{i+1}_h - u_h\) and obtain the equation
The last term in the previous equation, denoted by T, can be decomposed as
We denote by \(T_1\) and \(T_2\) respectively the first and the second terms in the right-hand side of the last equation. Using Lemma 2.6, we have \(T_2 \ge 0\). Then we have by using Lemma 2.4
We set \(C= 4 S_2 S_4 S_8 S_{8(2p-1)}^{2p-1} ||f||_{-1,\Omega }^{2p} \) and use Young's inequality \(ab \le \frac{1}{2\varepsilon } a^2 + \frac{\varepsilon }{2} b^2\) (with \(\varepsilon = \frac{1}{Cp\lambda }\)) to obtain the following bound
We choose \(\alpha > C^2p^2\lambda ^2\), set \(C_1 = \frac{\alpha - C^2p^2\lambda ^2}{2}\) and obtain
We derive that, for all \(i\ge 1\), we have
We obtain the convergence of the sequence \((u_h^{i+1} - u_h)\) in \(H^1(\Omega )\) and then the convergence of \((u_h^{i})\) in \(H^1(\Omega )\). By taking the limit of (3.11), we get
As \(T_2 \ge 0\), we deduce that \(| u_h^{i+1} - u_h |_{1,\Omega }\) converges to 0 and \(u_h^{i+1}\) converges to \(u_h\) in \(H^1_0(\Omega )\). \(\square \)
Remark 3.7
Conditions (3.2) and (3.5) require that the initial values of the algorithms be small relative to the data f. We can always take \(u_h^0=0\).
Remark 3.8
The previous two theorems bring to light a first difference between the two schemes (1.5) and (1.6): in contrast to (1.5), the convergence of (1.6) is proved when \(\alpha \) is larger than a constant independent of h (and does not require the uniform regularity of the family of triangulations).
4 A Posteriori Error Analysis
We start this section by introducing some additional notation which is needed for constructing and analyzing the error indicators in the sequel.
For any triangle \(K \in {\mathcal {T}}_h\) we denote by \({\mathcal {E}}(K)\) and \({\mathcal {N}}(K)\) the set of its edges and vertices, respectively, and we set
With any edge \(e \in {\mathcal {E}}_h\) we associate a unit vector n such that n is orthogonal to e. We split \({\mathcal {E}}_h\) and \({\mathcal {N}}_h\) in the form
where \({\mathcal {E}}_{h, \partial \Omega }\) is the set of edges in \({\mathcal {E}}_{h}\) that lie on \(\partial \Omega \) and \({\mathcal {E}}_{h, \Omega } = {\mathcal {E}}_h {\setminus } {\mathcal {E}}_{h, \partial \Omega }\). The same goes for \({\mathcal {N}}_{h, \partial \Omega }\).
Furthermore, for \(K \in {\mathcal {T}}_h\) and \(e \in {\mathcal {E}}_h\), let \(h_K\) and \(h_e\) be their diameter and length, respectively. An important tool in the construction of an upper bound for the total error is Clément’s interpolation operator \({\mathcal {R}}_h\) with values in \(V_h\). The operator \({\mathcal {R}}_h\) satisfies, for all \(v \in H^1_0(\Omega )\), the following local approximation properties (see Verfürth [7], Chapter 1):
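In their standard form (with C independent of h), these local approximation properties read

$$\begin{aligned} \Vert v - {\mathcal {R}}_h v \Vert _{0,K} \le C\, h_K\, |v|_{1,\Delta _K}, \qquad \Vert v - {\mathcal {R}}_h v \Vert _{0,e} \le C\, h_e^{1/2}\, |v|_{1,\Delta _e}, \end{aligned}$$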
where \(\Delta _K\) and \(\Delta _e\) are the following sets:
We now recall the following properties (see Verfürth [7], Chapter 1):
Proposition 4.1
Let r be a positive integer. For all \(v \in {\mathcal {P}}_r(K)\), the following properties hold
where \(\psi _K\) is the triangle-bubble function (equal to the product of the barycentric coordinates associated with the vertices of K).
We also introduce a lifting operator: For each \(K\in {\mathcal T}_h\) and any edge e of K, \(L_{e,K}\) maps polynomials of fixed degree on e vanishing on \(\partial e\) into polynomials on K vanishing on \(\partial K {\setminus } e\) and is constructed by affine transformation from a fixed lifting operator on the reference triangle. For a positive integer r, we denote by \({\mathcal {P}}_r(e)\) the space of restrictions to e of polynomial functions of degree \(\le r\) on \(\mathbb {R}^2\).
Proposition 4.2
Let r be a positive integer. For all \(v \in {\mathcal {P}}_r(e)\), we have the following property
where \(\psi _e\) is the bubble function on the edge e, and for all \(v \in {\mathcal {P}}_r(e)\) vanishing on \(\partial e\), we have
where \(\kappa \) is a triangle of edge e.
Finally, we denote by \([v_h]\) the jump of \(v_h\) across the common edge e of two adjacent elements \(K, K' \in {\mathcal {T}}_h\). We have now provided all prerequisites to establish upper and lower bounds for the total error. Let \(u_h^{i+1}\) and u be the solution of the iterative problem (1.5) or (1.6) and the continuous problem, respectively. They satisfy the identity
We now start the a posteriori analysis of our algorithms.
4.1 Algorithm (1.5)
In order to prove an upper bound of the error, we introduce an approximation \(f_h\) of the data f which is constant on each element K of \({\mathcal {T}}_h\). We first write the residual equation
By adding and subtracting \(\lambda \int _{\Omega } |u_h^{i+1}|^{2p}u_h^{i+1}v d \mathbf{x }\), we obtain
We now define the local linearization indicator \(\eta _{K,i}^{(L)}\) and the local discretization indicator \(\eta _{K,i}^{(D)}\) at each iteration i by:
We are in a position to state the first result of this section:
Theorem 4.3
Upper bound. Let \(u_{h}^{i+1}\) and u be the solution of the iterative problem (1.5) and the exact problem (2.2) respectively. We have the following a posteriori error estimate
Proof
We consider Eq. (4.7) with \(v = u - u^{i+1}_h\) and we obtain
Then we have by using Lemmas 2.4 and 2.6
We choose \(v_h=R_hv\), the image of v by the Clément operator and we obtain
We begin by bounding the second term of the right-hand side of the last inequality and we obtain by using Theorem 3.3
Let \(S=4\lambda p (1+\alpha S_2^2)^{p} S_2 S_4 S_8 S^{2p-1}_{8(2p-1)} ||f||^{2p}_{-1,\Omega }\), then we have
By using the formula \(ab \le \frac{1}{2\varepsilon } a^2 + \frac{\varepsilon }{2} b^2\), we obtain
We choose \(\varepsilon _1=8 C_1\), \(\varepsilon _2=8 C_2\), \(\varepsilon _3=4 C_3\), \(\varepsilon _4=8 S\) and \(\varepsilon _5=8 \alpha S_2^2\) to obtain
and then
We conclude the proof of the theorem. \(\square \)

We now address the efficiency of the previous indicators.
Theorem 4.4
Lower bound. For each \(K \in {\mathcal {T}}_h\), there holds
where \(\omega _K\) is the union of the triangles sharing at least one edge with K.
Proof
The estimation of the linearization indicator follows easily from the triangle inequality by introducing u in \(\eta _{K,i}^{(L)}.\) We now estimate the discretization indicator \(\eta _{K,i}^{(D)}.\) We proceed in two steps:
(i) We start by adding and subtracting \(\lambda \int _{\Omega } |u_h^{i+1}|^{2p}u_h^{i+1}vd \mathbf{x }\) in (4.6). Taking \(v_h=0\), we derive
We choose \(v=v_K\) such that
where \(\psi _K\) is the triangle-bubble function.
Using Cauchy–Schwarz inequality, (2.1), (4.1) and (4.2) we obtain
Therefore, we derive the following estimate of the first term of the local discretization estimator \(\eta _{K,i}^{(D)}\)
(ii) Now we estimate the second term of \(\eta _{K,i}^{(D)}\). Similarly, using (4.9) we infer
We choose \(v=v_e\) such that
where \(\psi _e\) is the edge-bubble function and \(K'\) denotes the other element of \({\mathcal {T}}_h\) that shares e with K (the operator \(L_{e,K}\) was introduced above Proposition 4.2).
Using Cauchy–Schwarz inequality, (2.1), (4.3) and (4.4) we derive
Collecting the two bounds above leads to the following estimate
These estimates of the local linearization and discretization indicators are fully optimal. \(\square \)
4.2 Algorithm (1.6)
The same calculation is followed as before but in (4.6) and (4.7) we have \(\alpha \sum _{K\in {\mathcal {T}}_h } \int _{K} \nabla (u_h^{i+1}-u_h^i) \nabla v \) instead of \(\alpha \sum _{K\in {\mathcal {T}}_h } \int _{K}(u_h^{i+1}-u_h^i) v\). We are led to define the modified discretization error indicator \({\bar{\eta }}_{K,i}^{(D)}\) by
The rest of the calculation is similar. We skip the proofs since they are exactly the same as for Theorems 4.3 and 4.4.
Theorem 4.5
Upper bound. Let \(u_{h}^{i+1}\) and u be the solution of the iterative problem (1.6) and the exact problem (2.2) respectively. We have the following a posteriori error estimate
Theorem 4.6
Lower bound. For each \(K \in {\mathcal {T}}_h\), there holds
where \(\omega _K\) is the union of the triangles sharing at least one edge with K.
5 Numerical Results
In this section, we present numerical experiments for our nonlinear problem. These simulations have been performed using the code FreeFem++ due to Hecht and Pironneau [6]. For all the numerical investigations and for simplicity, we use the finite element of degree \(\ell =1\).
5.1 A Priori Estimation
We consider the domain \(\Omega =]-1,1[^2\); each edge is divided into N equal segments so that \(\Omega \) is divided into \(N^2\) equal squares and finally into \(2N^2\) equal triangles. We consider the exact solution \(u=\text{ e }^{-5(x^2+y^2)}\) and set \(f=- \Delta u + \lambda |u|^{2p} u\).
For the convergence, we use the classical stopping criterion \(err_L\le 10^{-5}\), where \(err_L\) is defined by
We consider \(\lambda =10\), \(p=50\) and \(N=50\). Table 1 shows the error
which describes the convergence of the algorithms (1.5) and (1.6) with respect to \(\alpha \). We remark that algorithm (1.5) converges for \(\alpha \ge 21.82\) and algorithm (1.6) converges for \(\alpha \ge 0.77\).
In order to compare our algorithms (1.5) and (1.6) with (1.4), Table 2 shows the convergence for \(N=50\) and a fixed \(\alpha =22\) in our algorithms. In fact, for large values of \(\lambda \) and p, algorithm (1.4) diverges. We mention that for values of \(\lambda \) and p for which (1.5) and (1.6) diverge, we must take larger values of \(\alpha \) to obtain convergence. Figure 1 shows, in logarithmic scale, the error Err with respect to h (algorithm (1.5) on the left and algorithm (1.6) on the right). The slopes of the error curves corresponding to (1.5) and (1.6) are respectively 0.92 and 0.96, which validates Theorem 3.2.
5.2 A Posteriori Analysis
In this section, we test our a posteriori error estimates on our model problem. We consider the same domain \(\Omega \) with the theoretical solution now given by \(u=\text{ e }^{-100(x^2+y^2)}\), and we choose \(\lambda =10\) and \(p=50\).
In [2] and for the adaptive strategy, we define the global indicators (introduced in [5]):
and we introduce two kinds of stopping criteria:
and
where \(\gamma \) is a parameter which balances the discretization and linearization errors. In [2] we compared these two types of stopping criteria and showed the efficiency of the second one, which is the one considered in this paper with \(\gamma =0.001\).
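The interplay between the two stopping criteria can be sketched as a generic adaptive loop. The callables below are placeholders (this is not FreeFem++ code and not the paper's exact algorithm, which is described in [2]): `linearize_step` performs one iteration of (1.5) or (1.6), `eta_L` and `eta_D` return the global indicators, and `refine` adapts the mesh.

```python
def adaptive_solve(linearize_step, eta_L, eta_D, refine,
                   gamma=1e-3, tol=1e-3, max_refine=30, max_lin=200):
    """Adaptive loop balancing linearization and discretization errors.

    On each mesh, iterate the linearization until eta^(L) <= gamma * eta^(D)
    (the balancing stopping criterion), then refine the mesh until the
    discretization indicator itself is below the target tolerance.
    Returns the number of refinement levels performed.
    """
    for level in range(max_refine):
        for _ in range(max_lin):
            linearize_step()
            if eta_L() <= gamma * eta_D():   # linearization error is negligible
                break
        if eta_D() <= tol:                   # discretization error small enough
            return level
        refine()                             # adapt the mesh and continue
    return max_refine
```

The point of the balancing criterion is to avoid over-iterating the linearization on coarse meshes, where the discretization error dominates anyway.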
For our numerical investigations, we follow the algorithm described in [2]. The evolution of the meshes with the new stopping criterion is similar to Figures 3 and 4 in [2]. We note that for \(\lambda =10\) and \(p=50\), algorithm (1.4) diverges.
Figure 2 gives a comparison in logarithmic scale of the error between the uniform and adaptive methods using the algorithms (1.5) and (1.6), with respect to the number of vertices. We can easily see that the algorithms (1.5) and (1.6) give comparable results, but the adaptive method is more efficient than the uniform one.
Figure 3 shows, in logarithmic scale, the dependence of the algorithms (1.5) and (1.6) on \(\gamma \). We remark that for algorithm (1.5) the curves are similar for \(\gamma \ge 0.1\) and we obtain approximately the same precision, whereas algorithm (1.6) is much more sensitive to the variation of \(\gamma \).
Table 3 compares, for approximately the same precision, the CPU time of algorithms (1.5) and (1.6) with respect to \(\alpha \). We remark that algorithm (1.5) is faster than (1.6).
In order to get an idea of the constant in the upper bound of Theorem 4.3, Table 4 shows the repartition of the error Err and the sum of the indicators
at each refinement level, after convergence on each mesh. Even if the errors decrease regularly (for instance from 1 to 0.14 for \(err_I\)) with respect to the number of adaptive refinement levels, which is consistent with the adaptive mesh method, the constant remains stable and can be approximated by 2.85.
References
Adams, R.A.: Sobolev Spaces. Academic Press, Inc. (1978)
Bernardi, C., Dakroub, J., Mansour, G., Sayah, T.: A posteriori analysis of iterative algorithms for a nonlinear problem. J. Sci. Comput. 65(2), 672–697 (2015)
Chaillou, A.-L., Suri, M.: Computable error estimators for the approximation of nonlinear problems by linearized models. Comput. Methods Appl. Mech. Eng. 196, 210–224 (2006)
Chaillou, A.-L., Suri, M.: A posteriori estimation of the linearization error for strongly monotone nonlinear operators. Comput. Methods Appl. Mech. Eng. 205, 72–87 (2007)
El Alaoui, L., Ern, A., Vohralík, M.: Guaranteed and robust a posteriori error estimates and balancing discretization and linearization errors for monotone nonlinear problems. Comput. Methods Appl. Mech. Eng. 200, 2782–2795 (2011)
Hecht, F.: New development in FreeFem++. J. Numer. Math. 20, 251–266 (2012)
Verfürth, R.: A Posteriori Error Estimation Techniques For Finite Element Methods. Numerical Mathematics and Scientific Computation, Oxford (2013)
Bernardi, C., Dakroub, J., Mansour, G. et al. Convergence Analysis of Two Numerical Schemes Applied to a Nonlinear Elliptic Problem. J Sci Comput 71, 329–347 (2017). https://doi.org/10.1007/s10915-016-0301-y