Abstract
In this paper, we formulate and study a new class of variational inequalities in a nonconvex setting, called regularized nonconvex mixed variational inequalities. By using the auxiliary principle technique, some new predictor–corrector methods for solving this class of regularized nonconvex mixed variational inequalities are suggested and analyzed. The convergence analysis of the proposed iterative algorithms requires either the pseudomonotonicity or the partially mixed relaxed and strong monotonicity of the operator involved in the regularized nonconvex mixed variational inequalities. As a consequence of our main results, we provide the correct versions of the algorithms and results presented in the literature.
1 Introduction
In [1], Signorini posed a fundamental contact problem. This problem led Fichera to study some important problems of mathematical physics and to coin the term “variational inequality” (in short, \(\mathrm{VI}\)) [2]. At this point, Stampacchia [3] began the development of the theory of variational inequalities by establishing the first existence result. \(\mathrm{VI}\)s have been recognized as suitable mathematical models for dealing with many problems arising in different fields, such as optimization theory, partial differential equations, economic equilibrium, and mechanics; see, for example, Refs. [4,5,6] and the references therein. Because of its importance and active impact in nonlinear analysis and optimization, the \(\mathrm{VI}\) has been intensively extended and studied since the beginning of this theory in the 1960s. It has been used as a tool to study different aspects of optimization problems; see, for example, Refs. [6, 7] and the references therein. A useful and significant generalization of variational inequalities is the mixed variational inequality (or the variational inequality of the second kind) involving a nonlinear term. For applications, formulations, and numerical methods, we refer the reader to Refs. [8,9,10,11,12] and the references therein.
In recent years, the concept of convex set has been generalized in many directions, with potential and important applications in various fields. It is well known that uniformly prox-regular sets are nonconvex and include convex sets as special cases; for more details, see, for example, Refs. [13,14,15,16,17,18,19]. In the recent past, many efforts have been devoted to the development of efficient and implementable numerical methods for solving variational inequalities and their generalizations in the context of a nonconvex set. For instance, Bounkhel et al. [15], Moudafi [20], Ansari and Balooee [21], Ansari et al. [22], Balooee [23] and Balooee and Cho [24] have considered variational inequalities and different types of their generalizations in the context of uniformly prox-regular sets. In order to solve the problems introduced in the above-mentioned papers, they suggested and analyzed some iterative algorithms based on the projection method and the auxiliary principle technique.
Recently, Noor [25] considered and studied a nonconvex variational inequality in the context of uniformly prox-regular sets. He suggested and analyzed some predictor–corrector methods for solving it by using the auxiliary principle technique. Further, he studied the convergence analysis of the proposed iterative algorithms under the conditions of pseudomonotonicity and partially relaxed strong monotonicity of the operator involved in the considered nonconvex variational inequality.
In this paper, we pursue two goals. Our first goal is to consider and study a new class of variational inequalities in the context of uniformly prox-regular sets, named regularized nonconvex mixed variational inequalities \((\mathrm{RNMVI})\). With the help of the auxiliary principle technique, some predictor–corrector algorithms for solving \(\mathrm{RNMVI}\) are proposed and analyzed. The convergence analysis of the suggested iterative algorithms requires only either the pseudomonotonicity (with respect to the bifunction involved) or the partially mixed relaxed and strong monotonicity of type (I) of the operator involved in \(\mathrm{RNMVI}\). Our second goal is to investigate and analyze the nonconvex variational inequality problem considered in [25]. We point out that the results given in [25] are not valid. Meanwhile, as a consequence of our main results, the correct versions of the corresponding results presented in [25] are provided.
2 Preliminaries and Basic Facts
Throughout the paper, unless otherwise specified, we let \(\mathcal {H}\) be a real Hilbert space whose inner product and norm are denoted by \(\langle .,.\rangle \) and \(\Vert .\Vert \), respectively. Let K be a nonempty closed subset of \(\mathcal {H}\). We denote by \(d_K(.)\) or d(., K) the usual distance function from a point to the set K, that is, \(d_K(u)=\inf \limits _{v\in K}\Vert u-v\Vert \).
Definition 2.1
Let \(u\in \mathcal {H}\) be a point not lying in K. A point \(v\in K\) is called a projection of u onto K if \(d_K(u)=\Vert u-v\Vert \). The set of all such points is denoted by \(P_K(u)\), that is,

$$\begin{aligned} P_K(u):=\{v\in K:\ d_K(u)=\Vert u-v\Vert \}. \end{aligned}$$
Definition 2.2
The proximal normal cone of K at a point \(u\in K\) is given by

$$\begin{aligned} N_K^P(u):=\{\xi \in \mathcal {H}:\ \exists \,\alpha >0 \text{ such that } u\in P_K(u+\alpha \xi )\}. \end{aligned}$$
The following lemma gives a characterization of the proximal normal cone.
Lemma 2.1
[17, Proposition 1.1.5] Let K be a nonempty closed subset of \(\mathcal {H}\). Then \(\xi \in N_K^P(u)\) if and only if there exists a constant \(\alpha =\alpha (\xi ,u)>0\) such that \(\langle \xi ,v-u\rangle \le \alpha \Vert v-u\Vert ^2\) for all \(v\in K\).
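As a quick numerical illustration of this characterization (our sketch, not part of the original text), take \(K=[0,1]\cup [2,3]\subset \mathbb {R}\) and \(u=1\). The vector \(\xi =1\) is a proximal normal to K at u, and the inequality \(\langle \xi ,v-u\rangle \le \alpha \Vert v-u\Vert ^2\) can be verified on a sample of K with the constant \(\alpha =1\):

```python
# Sample check (illustrative) of the proximal normal cone characterization:
# for K = [0,1] U [2,3], u = 1 and xi = 1, the inequality
#   <xi, v - u> <= alpha * |v - u|^2
# holds for all v in K with the constant alpha = 1.

xi, u, alpha = 1.0, 1.0, 1.0
grid = ([x / 10 for x in range(0, 11)]          # samples of [0, 1]
        + [2 + x / 10 for x in range(0, 11)])   # samples of [2, 3]
assert all(xi * (v - u) <= alpha * (v - u) ** 2 for v in grid)
print("characterization holds on the sample with alpha = 1")
```

For \(v\in [0,1]\) the left-hand side is nonpositive, while for \(v\in [2,3]\) one has \(v-1\ge 1\), so \(\alpha =1\) suffices.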
Clarke et al. [18] introduced a class of nonconvex sets, called proximally smooth sets. Subsequently, Poliquin et al. [19] investigated these sets under the name of uniformly prox-regular sets. Such sets are used in many nonconvex applications in optimization, economic models, dynamical systems, differential inclusions, etc.
Definition 2.3
[18] For a given \(r \in (0,+\infty ]\), a subset \(K_r\) of \(\mathcal {H}\) is said to be normalized uniformly prox-regular (or uniformly r-prox-regular) if for all \(\bar{x}\in K_r\) and all \(\mathbf{0} \ne \xi \in N^P_{K_r}(\bar{x})\),

$$\begin{aligned} \Big \langle \frac{\xi }{\Vert \xi \Vert },x-\bar{x}\Big \rangle \le \frac{1}{2r}\Vert x-\bar{x}\Vert ^2,\quad \forall x\in K_r. \end{aligned}$$
The class of normalized uniformly prox-regular sets includes the class of convex sets, p-convex sets [26], \(C^{1,1}\) submanifolds (possibly with boundary) of \(\mathcal {H}\), the images under a \(C^{1,1}\) diffeomorphism of convex sets and many other nonconvex sets [18].
Lemma 2.1
[18] A closed set \(K\subseteq \mathcal {H}\) is convex if and only if it is uniformly r-prox-regular for every \(r>0\).
If \(r=+\infty \), then, in view of Definition 2.3 and Lemma 2.1, the uniform r-prox-regularity of \(K_r\) is equivalent to the convexity of \(K_r\). Accordingly, for \(r=+\infty \), we set \(K_r=K\).
The union of two disjoint intervals [a, b] and [c, d], where \(b<c\), is uniformly r-prox-regular with \(r=\frac{c-b}{2}\); see, for example, [17, 19].
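For instance, with \(K=[0,1]\cup [2,3]\) one has \(r=\frac{1}{2}\): points whose distance to K is less than r have a unique nearest point in K, while the gap midpoint has two. A minimal numerical sketch of this (ours, in Python):

```python
# Illustrative sketch: projections onto K = [a,b] U [c,d] = [0,1] U [2,3],
# which is uniformly r-prox-regular with r = (c - b)/2 = 0.5.  Points at
# distance < r from K project uniquely; the gap midpoint 1.5 does not.

def proj_union(x, a, b, c, d):
    """Set of nearest points of x in [a, b] U [c, d]."""
    p1 = min(max(x, a), b)            # projection onto [a, b]
    p2 = min(max(x, c), d)            # projection onto [c, d]
    d1, d2 = abs(x - p1), abs(x - p2)
    if d1 < d2:
        return {p1}
    if d2 < d1:
        return {p2}
    return {p1, p2}

print(proj_union(1.4, 0, 1, 2, 3))    # unique projection: {1}
print(proj_union(1.5, 0, 1, 2, 3))    # two projections: {1, 2}
```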
3 Formulations, Algorithms and Convergence Results
In this section, we introduce a new class of nonconvex variational inequalities in the context of uniformly prox-regular sets. With the help of the auxiliary principle technique, which is mainly due to Glowinski et al. [4], we suggest some predictor–corrector methods for solving them and study the convergence of the proposed iterative algorithms under appropriate conditions.
From now onward, we suppose that K is a uniformly r-prox-regular set in \(\mathcal {H}\), unless otherwise specified. Let \(T:K\rightarrow \mathcal {H}\) be a nonlinear operator. For a given nonlinear proper extended real-valued bifunction \(\varphi :K\times K\rightarrow \mathbb {R}\cup \{+\infty \}\), we consider the problem of finding \(u\in K\) such that

$$\begin{aligned} \langle Tu,v-u\rangle +\frac{\Vert Tu\Vert }{2r}\Vert v-u\Vert ^2+\varphi (v,u)-\varphi (u,u)\ge 0,\quad \forall v\in K, \end{aligned}$$

(1)

which is called the regularized nonconvex mixed variational inequality \((\mathrm{RNMVI})\).
By taking different choices of the operators T and \(\varphi \) in the above problem, one can easily obtain the problems studied in [3, 10, 11, 27] and the references therein.
In the sequel, we denote by \(\mathrm{RNMVI}(T,\varphi ,K)\) the set of solutions of \(\mathrm{RNMVI}\) (1).
Let T and \(\varphi \) be the same as in \(\mathrm{RNMVI}\) (1). For a given \(u\in K\), consider the following auxiliary regularized nonconvex mixed variational inequality problem of finding \(w\in K\) such that
where \(\rho >0\) is a constant. We observe that if \(w=u\), then clearly w is a solution of \(\mathrm{RNMVI}\) (1). This observation enables us to propose the following predictor–corrector method for solving \(\mathrm{RNMVI}\) (1).
Algorithm 3.1
Let T and \(\varphi \) be the same as in \(\mathrm{RNMVI}\) (1). For a given \(u_0\in K\), compute \(u_{n+1}\in K\) by the iterative scheme
where \(\rho >0\) is a constant and \(n=0,1,2,\dots \).
In order to establish the strong convergence of the sequence generated by Algorithm 3.1 to a solution of \(\mathrm{RNMVI}\) (1), we need the following definitions.
Definition 3.1
Let \(T:K\rightarrow \mathcal {H}\) be a nonlinear operator and let \(\varphi :K\times K\rightarrow \mathbb {R}\cup \{+\infty \}\) be a nonlinear bifunction. Then T is said to be
(a)

pseudomonotone iff

$$\begin{aligned} \langle Tu_1,u_2-u_1\rangle +\frac{\Vert Tu_1\Vert }{2r}\Vert u_2-u_1\Vert ^2\ge 0 \end{aligned}$$

implies that

$$\begin{aligned} \langle Tu_2,u_1-u_2\rangle +\frac{\Vert Tu_2\Vert }{2r}\Vert u_2-u_1\Vert ^2\le 0,\quad \forall u_1,u_2\in K; \end{aligned}$$

(b)

pseudomonotone with respect to \(\varphi \) iff

$$\begin{aligned} \langle Tu_1,u_2-u_1\rangle +\frac{\Vert Tu_1\Vert }{2r}\Vert u_2-u_1\Vert ^2+\varphi (u_2,u_1)-\varphi (u_1,u_1)\ge 0 \end{aligned}$$

implies that

$$\begin{aligned} \langle Tu_2,u_1-u_2\rangle +\frac{\Vert Tu_2\Vert }{2r}\Vert u_2-u_1\Vert ^2+\varphi (u_1,u_1)-\varphi (u_2,u_1)\le 0,\quad \forall u_1,u_2\in K. \end{aligned}$$
Definition 3.2
The bifunction \(\varphi :K\times K\rightarrow \mathbb {R}\cup \{+\infty \}\) is said to be skew-symmetric iff

$$\begin{aligned} \varphi (u,u)-\varphi (u,v)-\varphi (v,u)+\varphi (v,v)\ge 0,\quad \forall u,v\in K. \end{aligned}$$
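A simple concrete instance (our illustration, not from the paper): on \(K=\mathbb {R}\), the bifunction \(\varphi (u,v)=uv\) is skew-symmetric, since \(\varphi (u,u)-\varphi (u,v)-\varphi (v,u)+\varphi (v,v)=(u-v)^2\ge 0\). This can be checked numerically:

```python
# Skew-symmetry check (illustrative) for phi(u, v) = u*v:
#   phi(u,u) - phi(u,v) - phi(v,u) + phi(v,v) = (u - v)^2 >= 0.

import itertools

def phi(u, v):
    return u * v

def skew_defect(u, v):
    return phi(u, u) - phi(u, v) - phi(v, u) + phi(v, v)

grid = [x / 4 for x in range(-8, 9)]            # samples of [-2, 2]
assert all(skew_defect(u, v) >= 0 for u, v in itertools.product(grid, grid))
print("phi(u, v) = u*v is skew-symmetric on the sample grid")
```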
The next proposition plays a key role in the study of convergence analysis of the iterative sequence generated by Algorithm 3.1.
Proposition 3.1
Let T and \(\varphi \) be the same as in \(\mathrm{RNMVI}\) (1), and let \(u\in K\) be a solution of \(\mathrm{RNMVI}\) (1). Suppose further that \(\{u_n\}\) is a sequence generated by Algorithm 3.1. If the operator T is pseudomonotone with respect to the bifunction \(\varphi \), and \(\varphi \) is skew-symmetric, then
Proof
Since \(u\in K\) is a solution of \(\mathrm{RNMVI}\) (1), we have
where \(\rho >0\) is an arbitrary constant as in Algorithm 3.1. In light of the fact that the operator T is pseudomonotone with respect to the bifunction \(\varphi \), the above inequality implies that
Taking \(v=u_{n+1}\) in (5) and \(v=u\) in (3), we obtain
and
By combining (6) and (7) and taking into consideration the fact that \(\varphi \) is skew-symmetric, it follows that
On the other hand, letting \(x=u_{n+1}-u_n\) and \(y=u-u_{n+1}\) and utilizing the well-known inner product identity \(2\langle x,y\rangle =\Vert x+y\Vert ^2-\Vert x\Vert ^2-\Vert y\Vert ^2\), we have
By using (8) and (9), we deduce that
which is the required result (4). This completes the proof.\(\square \)
We now prove the convergence of the sequence generated by Algorithm 3.1 to a solution of \(\mathrm{RNMVI}\) (1).
Theorem 3.1
Let \(\mathcal {H}\) be a finite-dimensional real Hilbert space and let \(T:K\rightarrow \mathcal {H}\) be a continuous operator. Suppose that the bifunction \(\varphi :K\times K\rightarrow \mathbb {R}\cup \{+\infty \}\) is continuous in both arguments and let all the conditions of Proposition 3.1 hold. Further, let \(\mathrm{RNMVI}(T,\varphi ,K)\ne \emptyset \). Then, the iterative sequence \(\{u_n\}\) generated by Algorithm 3.1 converges to a solution \(\hat{u}\in K\) of \(\mathrm{RNMVI}\) (1).
Proof
Let \(u\in K\) be a solution of \(\mathrm{RNMVI}\) (1). By using the inequality (4), it follows that the sequence \(\{\Vert u-u_n\Vert \}\) is nonincreasing and so the sequence \(\{u_n\}\) is bounded. Meanwhile, by means of the inequality (4), we get
which implies that \(\Vert u_n-u_{n+1}\Vert \rightarrow 0\) as \(n\rightarrow \infty \). Let \(\hat{u}\) be a cluster point of the sequence \(\{u_n\}\). The boundedness of \(\{u_n\}\) guarantees the existence of a subsequence \(\{u_{n_i}\}\) of \(\{u_n\}\) such that \(u_{n_i}\rightarrow \hat{u}\) as \(i\rightarrow \infty \). By virtue of (3), we obtain
Since \(\lim \limits _{n\rightarrow \infty }\Vert u_n-u_{n+1}\Vert =0\) and the mappings T and \(\varphi \) are continuous, taking the limit in the relation (10) as \(i\rightarrow \infty \) yields
that is, \(\hat{u}\in K\) is a solution of \(\mathrm{RNMVI}\) (1). Hence, Proposition 3.1 implies that
From (11) it follows that \(u_n\rightarrow \hat{u}\), as \(n\rightarrow \infty \). Consequently, the sequence \(\{u_n\}\) has exactly one cluster point \(\hat{u}\in K\). This gives us the desired result. \(\square \)
It is well known that, to implement proximal point methods, one has to calculate the approximate solution implicitly, which is itself a difficult problem. In order to overcome this drawback, we consider another auxiliary nonconvex problem, with the help of which we suggest another iterative algorithm for solving \(\mathrm{RNMVI}\) (1).
Let T and \(\varphi \) be the same as in \(\mathrm{RNMVI}\) (1). For a given \(u\in K\), we consider the following auxiliary regularized nonconvex mixed variational inequality problem of finding \(w\in K\) such that
where \(\rho >0\) is a constant. It should be pointed out that the problems (2) and (12) are quite different. If \(w=u\), then clearly w is a solution of \(\mathrm{RNMVI}\) (1). This observation allows us to suggest the following iterative algorithm for solving \(\mathrm{RNMVI}\) (1).
Algorithm 3.2
Let T and \(\varphi \) be the same as in \(\mathrm{RNMVI}\) (1). For a given \(u_0\in K\), compute \(u_{n+1}\in K\) in the following way:
where \(\rho >0\) is a constant and \(n=0,1,2,\dots \).
Before turning to the convergence result related to Algorithm 3.2, we need to present the following definition.
Definition 3.3
A nonlinear operator \(T:K\rightarrow \mathcal {H}\) is said to be
(a)

monotone iff

$$\begin{aligned} \langle Tu_1-Tu_2,u_1-u_2\rangle \ge 0,\qquad \forall u_1,u_2\in K; \end{aligned}$$

(b)

\(\kappa \)-strongly monotone iff there exists a constant \(\kappa >0\) such that

$$\begin{aligned} \langle Tu_1-Tu_2,u_1-u_2\rangle \ge \kappa \Vert u_1-u_2\Vert ^2,\qquad \forall u_1,u_2\in K; \end{aligned}$$

(c)

partially \(\varrho \)-strongly monotone iff there exists a constant \(\varrho >0\) such that

$$\begin{aligned} \langle Tu_1-Tu_2,z-u_2\rangle \ge \varrho \Vert z-u_2\Vert ^2,\qquad \forall u_1,u_2,z\in K; \end{aligned}$$

(d)

partially \(\varsigma \)-relaxed monotone of type (I) iff there exists a constant \(\varsigma >0\) such that

$$\begin{aligned} \langle Tu_1-Tu_2,z-u_2\rangle \ge -\varsigma \Vert z-u_1\Vert ^2,\qquad \forall u_1,u_2,z\in K; \end{aligned}$$

(e)

partially \((\alpha ,\beta )\)-mixed relaxed and strongly monotone of type (I) iff there exist two constants \(\alpha ,\beta >0\) such that

$$\begin{aligned} \langle Tu_1-Tu_2,z-u_2\rangle \ge -\alpha \Vert z-u_1\Vert ^2+\beta \Vert z-u_2\Vert ^2,\qquad \forall u_1,u_2,z\in K. \end{aligned}$$
If \(z=u_1\), then partially strong monotonicity (c) and partially mixed relaxed and strong monotonicity of type (I) (e) reduce to strong monotonicity, while partially relaxed monotonicity of type (I) (d) reduces to monotonicity.
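To make part (e) concrete, here is a small numerical sketch (ours, not from the paper): the identity operator \(Tu=u\) on \(\mathbb {R}\) is partially \((\frac{1}{2},\frac{1}{2})\)-mixed relaxed and strongly monotone of type (I), because \(\langle u_1-u_2,z-u_2\rangle +\frac{1}{2}\Vert z-u_1\Vert ^2-\frac{1}{2}\Vert z-u_2\Vert ^2=\frac{1}{2}(u_1-u_2)^2\ge 0\).

```python
# Illustrative check: T(u) = u on R satisfies Definition 3.3(e) with
# alpha = beta = 1/2, i.e.
#   <Tu1 - Tu2, z - u2> >= -(1/2)|z - u1|^2 + (1/2)|z - u2|^2,
# since the difference of the two sides equals (1/2)*(u1 - u2)^2.

import itertools

def T(u):
    return u

alpha = beta = 0.5
grid = [x / 2 for x in range(-6, 7)]            # samples of [-3, 3]
for u1, u2, z in itertools.product(grid, repeat=3):
    lhs = (T(u1) - T(u2)) * (z - u2)
    rhs = -alpha * (z - u1) ** 2 + beta * (z - u2) ** 2
    assert lhs >= rhs - 1e-12                   # tolerance for rounding
print("T(u) = u is partially (1/2, 1/2)-mixed relaxed/strongly monotone")
```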
The following assertion plays an important and key role in the study of the convergence analysis of Algorithm 3.2.
Proposition 3.2
Let T and \(\varphi \) be the same as in \(\mathrm{RNMVI}\) (1) and let \(u\in K\) be a solution of \(\mathrm{RNMVI}\) (1). Suppose that \(\{u_n\}\) is a sequence generated by Algorithm 3.2 and assume that the sequence \(\{Tu_n\}\) is bounded. If \(\varphi \) is skew-symmetric and the operator T is partially \((\alpha ,\beta )\)-mixed relaxed and strongly monotone of type (I) with constant \(\beta =\frac{1}{2r}(\Vert Tu\Vert +\sup \limits _{n\ge 0}\Vert Tu_n\Vert )\), then
Proof
Since \(u\in K\) is a solution of \(\mathrm{RNMVI}\) (1), we have
where \(\rho >0\) is the same constant as in Algorithm 3.2.

Taking \(v=u_{n+1}\) in the above inequality, we obtain
Letting \(v=u\) in (13), we obtain
Applying (15) and (16) and taking into consideration the facts that the operator T is partially \((\alpha ,\beta )\)-mixed relaxed and strongly monotone of type (I) with constant \(\beta =\frac{1}{2r}(\Vert Tu\Vert +\sup \limits _{n\ge 0}\Vert Tu_n\Vert )\), and the bifunction \(\varphi \) is skew-symmetric, it follows that
Employing (9) and (17), we deduce that
which is the required result (14). This completes the proof.\(\square \)
In the next theorem, the conditions required for establishing the convergence of the iterative sequence generated by Algorithm 3.2 to a solution of \(\mathrm{RNMVI}\) (1) are stated.
Theorem 3.2
Let \(\mathcal {H}\) be a finite-dimensional real Hilbert space and let \(T:K\rightarrow \mathcal {H}\) be a continuous operator. Assume that the bifunction \(\varphi :K\times K\rightarrow \mathbb {R}\cup \{+\infty \}\) is continuous in both arguments and let all the conditions of Proposition 3.2 hold. Moreover, let \(\mathrm{RNMVI}(T,\varphi ,K)\ne \emptyset \). If \(\rho \in ]0,\frac{1}{2\alpha }[\), then the iterative sequence \(\{u_n\}\) generated by Algorithm 3.2 converges to a solution \(\hat{u}\in K\) of \(\mathrm{RNMVI}\) (1).
Proof
Let \(u\in K\) be a solution of \(\mathrm{RNMVI}\) (1). The inequality (14) implies that the sequence \(\{\Vert u-u_n\Vert \}\) is nonincreasing and hence the sequence \(\{u_n\}\) is bounded. Meanwhile, the inequality (14) yields
whence we deduce that \(\Vert u_n-u_{n+1}\Vert \rightarrow 0\) as \(n\rightarrow \infty \). If \(\hat{u}\) is a cluster point of the sequence \(\{u_n\}\), then, in a way similar to the proof of Theorem 3.1, one can prove that \(\hat{u}\) is a solution of \(\mathrm{RNMVI}\) (1) and that the sequence \(\{u_n\}\) has exactly one cluster point \(\hat{u}\in K\). This completes the proof.\(\square \)
4 Nonconvex Variational Inequalities and Some Further Comments
This section is concerned with the study of the nonconvex variational problem and the regularized nonconvex variational inequality considered in [25]. The iterative algorithms and convergence results given in [25] are examined and, by detecting some fatal errors in them, the invalidity of the assertions presented in [25] is pointed out. As a consequence of our main results in the previous section, we derive the correct versions of the algorithms and results in [25].
Let K be a uniformly r-prox-regular set in \(\mathcal {H}\). For a given nonlinear operator \(T:\mathcal {H}\rightarrow \mathcal {H}\), Noor [25] considered the problem of finding \(u\in K\) such that

$$\begin{aligned} \mathbf{0}\in Tu+N_K^P(u), \end{aligned}$$

(18)

which is called the nonconvex variational problem \((\mathrm{NVP})\).
If \(r=\infty \), that is, K is a convex set in \(\mathcal {H}\), then, by utilizing Lemma 2.1, it is easy to see that \(\mathrm{NVP}\) (18) is equivalent to finding \(u\in K\) such that

$$\begin{aligned} \langle Tu,v-u\rangle \ge 0,\quad \forall v\in K, \end{aligned}$$

(19)

which is known as the classical variational inequality \((\mathrm{VI})\), introduced and studied by Stampacchia [3] in 1964.
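For the convex case, the classical VI (19) admits the well-known fixed-point characterization \(u=P_K(u-\rho Tu)\) for any \(\rho >0\), which underlies projection methods. A minimal sketch of the resulting iteration (ours, with an illustrative operator \(Tu=u-0.3\) on \(K=[0,1]\), not taken from [25]):

```python
# Projection (fixed-point) iteration for the classical VI (19) on the
# convex set K = [0, 1] with the illustrative operator T(u) = u - 0.3.
# The unique solution is u = 0.3, where T vanishes.

def P_K(x):                       # projection onto K = [0, 1]
    return min(max(x, 0.0), 1.0)

def T(u):
    return u - 0.3

u, rho = 1.0, 0.5
for _ in range(200):
    u = P_K(u - rho * T(u))       # u_{n+1} = P_K(u_n - rho * T(u_n))
print(round(u, 6))                # converges to 0.3
```

Here the map \(u\mapsto P_K(u-\rho Tu)\) is a contraction, so the iterates converge geometrically to the solution.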
Noor [25] claimed that even for the case when K is a uniformly r-prox-regular set in \(\mathcal {H}\), the problems (18) and (19) are equivalent. In fact, he derived his claim based on the following lemma.
Lemma 4.1
[25, Lemma 2.2] If K is a uniformly r-prox-regular set, then the nonconvex variational problem (18) is equivalent to finding \(u\in K\) such that

$$\begin{aligned} \langle Tu,v-u\rangle +\frac{\Vert Tu\Vert }{2r}\Vert v-u\Vert ^2\ge 0,\quad \forall v\in K. \end{aligned}$$

(20)
The inequality (20) is known as the regularized nonconvex variational inequality.
By a careful reading, we found that there are two fatal errors in the proof of Lemma 4.1 (that is, [25, Lemma 2.2]). In the first place, Noor [25] asserted that every solution of the regularized nonconvex variational inequality (20) is a solution of \(\mathrm{NVP}\) (18). In fact, by assuming that \(u\in K\) is a solution of the problem (20) and \(Tu\ne 0\), he deduced that
Then, by invoking Lemma 2.1, he claimed that
that is, \(u\in K\) is a solution of \(\mathrm{NVP}\) (18). It is easy to see that, unlike the claim of the author in [25], by assuming that \(u\in K\) is a solution of the problem (20), what follows is the inequality
not the inequality (21). Even setting this fact aside, it should be pointed out that, by using the inequality (21) and in virtue of Lemma 2.1, one cannot deduce the relation (22). This fact is illustrated in the following example.
Example 4.1
Let \(\mathcal {H}=\mathbb {R}\) and let \(K=[0,\alpha ]\cup [\beta ,\gamma ]\) be the union of the two disjoint intervals \([0,\alpha ]\) and \([\beta ,\gamma ]\), where \(0<\alpha<\beta <\gamma \). Then K is a uniformly r-prox-regular set in \(\mathcal {H}\) with \(r=\frac{\beta -\alpha }{2}\), and so \(\frac{1}{2r}=\frac{1}{\beta -\alpha }\). Let the operator \(T:\mathcal {H}\rightarrow \mathcal {H}\) be defined by
where \(m\in \mathbb {R}\), \(n\ge 0\), \(0<\varrho <\frac{1}{\alpha ^n}\) and \(0<\lambda \le \frac{1-\varrho \alpha ^n}{e^{m\alpha }}\) are arbitrary real constants. Taking \(u=\alpha \), for all \(v\in \mathcal {H}\), we have
If \(v\in [0,\alpha ]\), then \(-\alpha \le v-\alpha \le 0\) and
For the case when \(v\in [\beta ,\gamma ]\), we have \(\beta -\alpha \le v-\alpha \le \gamma -\alpha \) and
Applying (24) and (25) and taking into account the facts that \(0<\varrho <\frac{1}{\alpha ^n}\) and \(0<\lambda \le \frac{1-\varrho \alpha ^n}{e^{m\alpha }}\), it follows that
From (23) and (26), we deduce that
Let us now take \(x=\frac{\alpha +\beta }{2}\). Then, we have \(P_K(x)=\{\alpha ,\beta \}\) and
Clearly, \(-Tu=-\lambda e^{m\alpha }-\varrho \alpha ^n\in [-1,0[\), and so \(-Tu\notin N_K^P(u)=N(K;u)\). Relying on this fact, it follows that a solution of the regularized nonconvex variational inequality (20) need not be a solution of \(\mathrm{NVP}\) (18).
In the second place, in addition to the above-mentioned fatal error, there is another fatal error in the proof of Lemma 4.1. In fact, Noor [25] claimed that if \(u\in K\) is a solution of \(\mathrm{NVP}\) (18), then Definition 2.3 guarantees that \(u\in K\) is a solution of the regularized nonconvex variational inequality (20).
However, the following example shows that a solution of \(\mathrm{NVP}\) (18) need not be a solution of the regularized nonconvex variational inequality (20).
Example 4.2
Let \(\mathcal {H}\) and K be the same as in Example 4.1 and let the operator \(T:\mathcal {H}\rightarrow \mathcal {H}\) be defined by
where \(\theta <-1\) and \(\mu >1\) are arbitrary real constants. Taking \(x=\frac{\alpha +\beta }{2}\), we have \(P_K(x)=\{\alpha ,\beta \}\),
and
Let us now take \(u=\alpha \). Then, we have \(-Tu=-\theta \in N_K^P(u)\) and
If \(v=\beta \), we deduce that
By a similar argument, taking \(u=\beta \), we have \(-Tu=-\mu \in N_K^P(u)\) and
Obviously, for \(v=\alpha \), it follows that
Therefore, the inequality
cannot hold for all \(v\in K\). Consequently, a solution of \(\mathrm{NVP}\) (18) is not necessarily a solution of the problem (20).
It should be pointed out that the equivalence between the two problems (18) and (20) plays a crucial and basic role in [25], both in proposing the algorithms and in obtaining the results. Indeed, all the results in [25] have been obtained based on the equivalence between the two problems (18) and (20). But, unfortunately, as shown above, the problems (18) and (20) are not equivalent.
In Section 3 of [25], the author considered the following auxiliary regularized nonconvex variational inequality: For a given \(u\in K\), find \(w\in K\) such that
where \(\rho >0\) is a constant. Noor [25] claimed that if \(w=u\), then w is a solution of the problem (20). Based on this fact, he suggested the following iterative algorithm for solving the problem (20).
Algorithm 4.1
[25, Algorithm 3.2] For a given \(u_0\in K\), compute the approximate solution \(u_{n+1}\) by the iterative scheme
By an easy check, we found that, unlike the claim in [25], if \(w=u\), then w is not necessarily a solution of the problem (20). In fact, if \(w=u\), then the auxiliary regularized nonconvex variational inequality (27) reduces to the regularized nonconvex variational inequality, which consists in finding \(u\in K\) such that
However, the following example illustrates that a solution of the problem (29) need not be a solution of the problem (20).
Example 4.3
Let \(\mathcal {H}=\mathbb {R}\) and let \(K=[\alpha ,\beta ]\cup [\gamma ,\delta ]\) be the union of the two disjoint intervals \([\alpha ,\beta ]\) and \([\gamma ,\delta ]\), where \(0<\alpha<\beta<\gamma <\delta \). Then K is a uniformly r-prox-regular set with \(r=\frac{\gamma -\beta }{2}\), and so \(\frac{1}{2r}=\frac{1}{\gamma -\beta }\). Let us define the operator \(T:\mathcal {H}\rightarrow \mathcal {H}\) by
where \(s\in \mathbb {N}\backslash \{1\}\) and \(t\in \mathbb {N}\) are two arbitrary but fixed natural numbers, and \(q\in \mathbb {R}\), \(a>1\) and \(\varsigma <\frac{\beta -\delta }{(\gamma -\beta )(\root s \of {\beta ^t}+a^{q\beta })}\) are arbitrary real constants. Let \(\rho \in \,]0,-\frac{1}{\varsigma (\root s \of {\beta ^t}+a^{q\beta })}]\) be a positive real constant. Then, taking \(u=\beta \), for all \(v\in \mathcal {H}\), we have
In the case when \(v\in [\alpha ,\beta ]\), the fact that \(\varsigma<0<\rho \) implies that
and so
If \(v\in [\gamma ,\delta ]\), taking into consideration the facts that \(\frac{1}{\gamma -\beta }(v-\beta )\in [1,\frac{\delta -\beta }{\gamma -\beta }]\) for all \(v\in [\gamma ,\delta ]\) and \(0<\rho \le -\frac{1}{\varsigma (\root s \of {\beta ^t}+a^{q\beta })}\), it follows that
whence we deduce that
Therefore,
On the other hand, for all \(v\in \mathcal {H}\), one has
Considering the facts that \(\frac{1}{\gamma -\beta }(v-\beta )\in [1,\frac{\delta -\beta }{\gamma -\beta }]\) for all \(v\in [\gamma ,\delta ]\), and \(\varsigma <\frac{\beta -\delta }{(\gamma -\beta )(\root s \of {\beta ^t}+a^{q\beta })}\), we obtain
which implies that
Hence, the inequality
cannot hold for all \(v\in K\). In virtue of this fact, it follows that a solution of the problem (29) is not necessarily a solution of the regularized nonconvex variational inequality (20). Consequently, for a given \(u\in K\), if \(w=u\) is a solution of the auxiliary regularized nonconvex variational inequality problem (27), then w need not be a solution of the problem (20).
In order to study the convergence analysis of Algorithm 4.1, Noor [25] used the following definition.
Definition 4.1
[25, Definition 3.1] For all \(u,v,z\in \mathcal {H}\), an operator \(T:\mathcal {H}\rightarrow \mathcal {H}\) is said to be
(a)

monotone iff

$$\begin{aligned} \langle Tu-Tv,u-v\rangle \ge 0; \end{aligned}$$

(b)

pseudomonotone iff

$$\begin{aligned} \langle Tu,v-u\rangle \ge 0\quad \text{ implies } \text{ that } \quad \langle Tv,u-v\rangle \le 0; \end{aligned}$$

(c)

partially relaxed strongly monotone iff there exists a constant \(\alpha >0\) such that

$$\begin{aligned} \langle Tu-Tv,z-v\rangle \ge -\alpha \Vert z-u\Vert ^2. \end{aligned}$$
It should be remarked that there are two small mistakes in the statement of Definition 3.1 in [25]. In fact, in parts (a) and (b) of Definition 3.1 in [25], \(\langle Tu-Tv,v-u\rangle \ge 0\) and \(\langle Tu,u-v\rangle \ge 0\) must be replaced by \(\langle Tu-Tv,u-v\rangle \ge 0\) and \(\langle Tu,v-u\rangle \ge 0\), respectively, as we have done in Definition 4.1.
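As a sanity check of part (b) in its corrected form (our illustration, not from [25]): the monotone operator \(Tu=u^3\) on \(\mathbb {R}\) is pseudomonotone, that is, \(\langle Tu,v-u\rangle \ge 0\) implies \(\langle Tv,u-v\rangle \le 0\):

```python
# Numerical check (illustrative): T(u) = u^3 is monotone on R, hence
# pseudomonotone in the sense of Definition 4.1(b):
#   <Tu, v - u> >= 0  implies  <Tv, u - v> <= 0.

import itertools

def T(u):
    return u ** 3

grid = [x / 2 for x in range(-6, 7)]            # samples of [-3, 3]
for u, v in itertools.product(grid, repeat=2):
    if T(u) * (v - u) >= 0:                     # hypothesis of (b)
        assert T(v) * (u - v) <= 0              # conclusion of (b)
print("T(u) = u^3 passes the pseudomonotonicity check on the grid")
```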
The next theorem played a crucial role in the study of the convergence analysis of the iterative sequence generated by Algorithm 4.1.
Theorem 4.1
[25, Theorem 3.1] Let the operator \(T:K\rightarrow \mathcal {H}\) be pseudomonotone. If \(u_{n+1}\) is the approximate solution obtained from Algorithm 4.1 and \(u\in K\) is a solution of (20), then
By a careful reading of the proof of Theorem 4.1 (that is, [25, Theorem 3.1]), we discovered that there are fatal errors in it. By assuming that \(u\in K\) is a solution of (20), Noor [25] deduced the inequality (8) in [25] by using the pseudomonotonicity of the operator T as follows:
In fact, letting \(u\in K\) as a solution of (20), Noor [25] claimed that the inequality (20) implies the inequality
and then by utilizing (32) and the notion of pseudomonotonicity of the operator T, he obtained the following inequality:
Finally, he derived the relation (31) with the help of (33).
However, the following example illustrates that, unlike the claim of the author in [25], assuming that \(u\in K\) is a solution of (20), the inequality (20) does not necessarily imply the inequality (32). It also shows that, by using (20) and the notion of pseudomonotonicity of the operator T, one cannot necessarily deduce the relation (31).
Example 4.4
Let \(\mathcal {H}\) and K be the same as in Example 4.3 and let the operator \(T:K\rightarrow \mathcal {H}\) be defined by
where \(\sigma \in \mathbb {R}\), \(l\ge 0\), \(-\frac{1}{e^{\sigma \beta }+\beta ^l}\le \xi <0\) and \(\tau <\frac{\beta -\delta }{\gamma (\gamma -\beta )}\) are arbitrary real constants. Taking \(u=\beta \), for all \(v\in K\), we have
If \(v\in [\alpha ,\beta ]\), then \(\alpha -\beta \le v-\beta \le 0\), and the fact that \(\xi <0\) implies that
and so
For the case when \(v\in [\gamma ,\delta ]\), considering the facts that \(1\le \frac{v-\beta }{\gamma -\beta }\le \frac{\delta -\beta }{\gamma -\beta }\) and \(\xi \ge -\frac{1}{e^{\sigma \beta }+\beta ^l}\), it follows that
whence we deduce that
Therefore,
Whereas, for all \(v\in [\gamma ,\delta ]\), in light of the fact that \(\xi <0\), we have
that is,
Accordingly, the inequality \(\langle Tu,v-u\rangle \ge 0\) cannot hold for all \(v\in K\).
Suppose that \(u,v\in [\alpha ,\beta ]\) are chosen arbitrarily. If \(\langle Tu,v-u\rangle \ge 0\), then \(\xi (e^{\sigma u}+u^l)(v-u)\ge 0\). Since \(e^{\sigma u}+u^l>0\) for all \(u\in K\) and \(\xi <0\), it follows that \(v-u\le 0\). This guarantees that \(u-v\ge 0\), and so \(\xi (e^{\sigma v}+v^l)(u-v)\le 0\), that is, \(\langle Tv,u-v\rangle \le 0\).
Let us now take \(u\in [\alpha ,\beta ]\) and \(v\in [\gamma ,\delta ]\) arbitrarily. If \(\langle Tu,v-u\rangle \ge 0\), by an argument analogous to the previous one, it follows that \(v-u\le 0\) and so \(u-v\ge 0\). Then the fact that \(\tau <0\) implies that \(\tau v(u-v)\le 0\), that is, \(\langle Tv,u-v\rangle \le 0\).
In the case when u and v are arbitrary points belonging to \([\gamma ,\delta ]\), in a similar fashion to the preceding analysis, one can show that \(\langle Tu,v-u\rangle \ge 0\) implies that \(\langle Tv,u-v\rangle \le 0\).
By virtue of the above-mentioned argument, we deduce that the operator T is pseudomonotone.
On the other hand, for all \(v\in [\gamma ,\delta ]\), one has
Taking into consideration the facts that \(\frac{1}{\gamma -\beta }(v-\beta )\in [1,\frac{\delta -\beta }{\gamma -\beta }]\) for all \(v\in [\gamma ,\delta ]\) and \(\tau <\frac{\beta -\delta }{\gamma (\gamma -\beta )}\), we obtain
which implies that
Consequently, the inequality
cannot hold for all \(v\in K\).
We now go back to analyze the proof of Theorem 4.1 (that is, [25, Theorem 3.1]). Taking \(v=u_{n+1}\) in (31), Noor [25] obtained the relation (9) in [25] as follows:
Setting \(v=u\) in (28) and by utilizing (34), he derived the relation (10) in [25] as follows:
Letting \(x=u_{n+1}-u_n\) and \(y=u-u_{n+1}\) and with the help of the well-known property of the inner product, the author got the relation (9), which is the relation (11) in [25].
In the end, by combining (9) and (35), Noor [25] deduced the relation (30).
However, picking \(v=u\) in (28) and by using (34), what we can obtain is the following inequality:
not the inequality (35).
Relying on the fact that the inequality (36) is the correct version of the inequality (35), it should be remarked that, unlike the claim in [25], the combination of (9) and the inequality (36) does not give us the required result (30); what one can get is the following inequality:
In light of the above-mentioned arguments, it follows that Theorem 4.1 is not true in the present form.
Noor [25] asserted that the sequence \(\{u_n\}\) generated by Algorithm 4.1 is convergent to a solution of the problem (20).
Theorem 4.2
[25, Theorem 3.2] Let \(\mathcal {H}\) be a finite-dimensional space and let \(u_{n+1}\) be the approximate solution obtained from Algorithm 4.1. If \(r>1\) and \(u\in K\) is a solution of (20), then \(\lim \limits _{n\rightarrow \infty }u_n=u\).
Theorem 4.1 plays a key role in the proof of Theorem 4.2 (that is, [25, Theorem 3.2]). However, as we have pointed out, the statement of Theorem 4.1 is not valid in general. Besides this fact, a careful reading also reveals fatal errors in the proof of Theorem 4.2.
Firstly, Noor [25] claimed that the inequality (30), together with the fact that \(r>1\), implies the boundedness of the sequence \(\{u_n\}\). In fact, using \(r>1\) and the inequality (30), the author asserted that
that is, the sequence \(\{\Vert u-u_n\Vert \}\) is nonincreasing, and then he deduced the boundedness of the sequence \(\{u_n\}\) by utilizing the inequality (37). However, by virtue of the inequality (30), what we actually get is the inequality
not the inequality (37). Since \(r>1\), it follows that \(\frac{1}{\sqrt{1-\frac{1}{r}}}>1\) and so the inequality (38) does not imply that the sequence \(\{\Vert u-u_n\Vert \}\) is nonincreasing. Consequently, unlike the claim of the author in [25], under the assumptions of Theorem 4.2, the sequence \(\{u_n\}\) is not necessarily bounded.
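If, as the constant appearing above suggests, the inequality (38) has the form \(\Vert u-u_{n+1}\Vert \le \frac{1}{\sqrt{1-1/r}}\Vert u-u_n\Vert \) (an assumption, since the display is not reproduced here), then a one-line example shows why neither monotonicity nor boundedness of \(\{\Vert u-u_n\Vert \}\) follows from it:

```latex
\|u-u_{n+1}\| \le c\,\|u-u_n\|, \qquad c := \frac{1}{\sqrt{1-\tfrac{1}{r}}} > 1 \quad (r>1),
```

is satisfied, for instance, by the divergent choice \(\Vert u-u_n\Vert =c^{\,n}\Vert u-u_0\Vert \), so an estimate with a factor larger than one yields no contraction.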
Secondly, Noor [25] claimed that by using the inequality (30), one can deduce the following inequality:
which implies that
However, by using the inequality (30), what one can obtain is the following inequality:
not the inequality (39). Obviously, the inequality (41) does not imply the relation (40). Taking into account the fact that the boundedness of the sequence \(\{u_n\}\) and the relation (40) are the main tools used to prove the assertion of Theorem 4.2, it follows that Theorem 4.2 is not correct in its present form.
For a given nonlinear operator \(T:K\rightarrow \mathcal {H}\), we consider the problem of finding \(u\in K\) such that
which is called the regularized nonconvex variational inequality \((\mathrm{RNVI})\). In the rest of the paper, we denote by \(\mathrm{RNVI}(T,K)\) the set of solutions of \(\mathrm{RNVI}\) (42).
Now we present the correct version of Lemma 4.1 in which the equivalence between \(\mathrm{NVP}\) (18) and \(\mathrm{RNVI}\) (42) is stated.
Lemma 4.2
\(\mathrm{NVP}\) (18) and \(\mathrm{RNVI}\) (42) are equivalent.
Proof
Let \(u\in K\) be a solution of \(\mathrm{RNVI}\) (42). If \(Tu=0\), then \(0\in Tu+N_K^P(u)=Tu+N(K;u)\), because the zero vector always belongs to any normal cone. For the case when \(Tu\ne 0\), we have
Lemma 2.1 implies that \(-Tu\in N_K^P(u)\), and so \(0\in Tu+N_K^P(u)\). Thanks to the above-mentioned facts, we conclude that \(Tu\cap \{-N_K^P(u)\}=Tu\cap \{-N(K;u)\}\ne \emptyset \), that is, \(u\in K\) is a solution of \(\mathrm{NVP}\) (18). Conversely, if \(u\in K\) is a solution of \(\mathrm{NVP}\) (18), then it follows that \(0\in Tu+N_K^P(u)\) and so \(-Tu\in N_K^P(u)=N(K;u)\). Now, Definition 2.3 guarantees that \(u\in K\) is a solution of \(\mathrm{RNVI}\) (42).\(\square \)
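The chain of equivalences established in the proof can be displayed compactly (here \(N_K^P(u)\) denotes the proximal normal cone to \(K\) at \(u\)):

```latex
u \text{ solves } \mathrm{NVP}\ (18)
\;\Longleftrightarrow\; 0 \in Tu + N_K^P(u)
\;\Longleftrightarrow\; -Tu \in N_K^P(u) = N(K;u)
\;\Longleftrightarrow\; u \text{ solves } \mathrm{RNVI}\ (42).
```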
Let \(T:K\rightarrow \mathcal {H}\) be a nonlinear operator. For a given \(u\in K\), we consider the auxiliary regularized nonconvex variational inequality, which consists in finding \(w\in K\) such that
where \(\rho >0\) is a constant. If \(w=u\), then clearly w is a solution of \(\mathrm{RNVI}\) (42). This observation allows us to propose the following predictor–corrector algorithm for solving \(\mathrm{RNVI}\) (42).
Algorithm 4.2
Let T be the same as in \(\mathrm{RNVI}\) (42). For a given \(u_0\in K\), compute \(u_{n+1}\in K\) by the iterative scheme
where \(\rho >0\) is a constant and \(n=0,1,2,\dots \).
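Since the display defining the scheme is not reproduced above, the following one-dimensional sketch only illustrates the general shape of such an implicit (proximal-type) step, here assumed to be \(u_{n+1}=P_K(u_n-\rho Tu_{n+1})\); the operator T, the set K (a union of two intervals), the step size, and the inner fixed-point solver are all hypothetical choices. Because K is nonconvex, the nearest-point projection is only locally well behaved, so the starting point is taken in the component containing the solution.

```python
# Illustrative 1-D sketch of an implicit predictor-corrector step of the
# assumed form u_{n+1} = P_K(u_n - rho*T(u_{n+1})); not the paper's exact scheme.

def proj_K(x, K=((0.0, 1.0), (2.0, 3.0))):
    # Nearest-point projection onto a union of closed intervals (a nonconvex set).
    candidates = [min(max(x, a), b) for (a, b) in K]
    return min(candidates, key=lambda c: abs(c - x))

def T(u):
    # Hypothetical monotone operator vanishing at u* = 2.5, so u* solves the VI over K.
    return u - 2.5

def implicit_step(u, rho=0.5, inner_iters=100):
    # Solve w = proj_K(u - rho*T(w)) by an inner fixed-point iteration;
    # the inner map is a contraction here since rho * Lip(T) = 0.5 < 1.
    w = u
    for _ in range(inner_iters):
        w = proj_K(u - rho * T(w))
    return w

u = 2.0   # start in the component of K containing the solution
for _ in range(60):
    u = implicit_step(u)
print(abs(u - 2.5) < 1e-6)  # True: the iterates settle at u* = 2.5
```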
We now present the correct versions of Theorems 4.1 and 4.2, respectively.
Theorem 4.3
Let T be the same as in \(\mathrm{RNVI}\) (42) and let \(u\in K\) be a solution of \(\mathrm{RNVI}\) (42). Suppose further that \(\{u_n\}\) is a sequence generated by Algorithm 4.2. If the operator T is pseudomonotone, then the inequality (4) holds for all \(n\ge 0\).
Proof
Taking \(\varphi \equiv 0\), we get the desired result from Proposition 3.1. \(\square \)
Theorem 4.4
Let \(\mathcal {H}\) be a finite-dimensional real Hilbert space and let \(T:K\rightarrow \mathcal {H}\) be a nonlinear continuous operator. Further, let all the conditions of Theorem 4.3 hold and \(\mathrm{RNVI}(T,K)\ne \emptyset \). Then the iterative sequence \(\{u_n\}\) generated by Algorithm 4.2 converges to a solution \(\hat{u}\in K\) of \(\mathrm{RNVI}\) (42).
Proof
It follows from Theorem 3.1 immediately by taking \(\varphi \equiv 0\). \(\square \)
It is well known that to implement the proximal point method, one has to compute the approximate solution implicitly, which is itself a difficult problem. To overcome this drawback, Noor [25] considered the following auxiliary regularized nonconvex variational inequality problem:
For a given \(u\in K\), find \(w\in K\) such that
where \(\rho >0\) is a constant.
Noor [25] asserted that if \(w=u\), then obviously w is a solution of the problem (20). However, this claim of the author in [25] is also not true. Indeed, if \(w=u\), then the problem (44) reduces to the problem (29). But it was shown in Example 4.3 that a solution of the problem (29) need not be a solution of the problem (20). Hence, for a given \(u\in K\), if \(w=u\) is a solution of the auxiliary regularized nonconvex variational inequality problem (44), then w is not necessarily a solution of the problem (20).
Based on the fact that in the problem (44), if \(w=u\) then clearly w is a solution of the regularized nonconvex variational inequality (20), Noor [25] suggested the following iterative algorithm for solving the problem (20).
Algorithm 4.3
[25, Algorithm 3.4] For a given \(u_0\in K\), compute the approximate solution \(u_{n+1}\) by the iterative scheme
Noor [25] claimed that using essentially the technique of Theorem 4.2, one can study the convergence analysis of Algorithm 4.3. For this purpose, he first asserted that the following statement holds.
Theorem 4.5
[25, Theorem 3.3] Let the operator T be partially relaxed strongly monotone with constant \(\alpha >0\). If \(u_{n+1}\) is the approximate solution obtained from Algorithm 4.3 and \(u\in K\) is a solution of (20), then
Now we analyze the proof of Theorem 4.5 (that is, [25, Theorem 3.3]). By carefully reading the proof of Theorem 3.3 in [25], we found that, under the assumptions of Theorem 4.5, the relation (46) does not necessarily hold. Assuming that \(u\in K\) is a solution of the problem (20) and taking \(v=u_{n+1}\) in (20), the author deduced the inequality (17) in [25] as follows:
Picking \(v=u\) in (45), Noor [25] obtained the following inequality:
Employing (47) and (48), and taking into account the fact that T is partially relaxed strongly monotone with constant \(\alpha \), the author derived the inequality (18) in [25] as follows:
In the end, by combining (9) (which is the relation (11) in [25]) and (49), Noor [25] deduced the required result (46). However, unlike the claim in [25], by using (47) and (48) and the definition of partially relaxed strong monotonicity of the operator T given in part (c) of Definition 4.1, what we obtain is the inequality
not the inequality (49). Considering that the relation (50) is the correct version of the relation (49), combining (9) and (50) yields the inequality
not the inequality (46).
Relying on the above-mentioned arguments, Theorem 4.5 is not true in its present form. In view of the fact that Theorem 4.5 plays a crucial role in the study of the convergence analysis of Algorithm 4.3, by an argument analogous to the one given for Theorem 4.2, one can establish that, contrary to the claim of the author in [25], the technique of Theorem 4.2 (that is, [25, Theorem 3.2]) cannot be used to study the convergence analysis of Algorithm 4.3.
Let \(T:K\rightarrow \mathcal {H}\) be a nonlinear operator. For a given \(u\in K\), we consider the following auxiliary regularized nonconvex variational inequality problem:
Find \(w\in K\) such that
where \(\rho >0\) is a constant.
It should be pointed out that the problems (43) and (51) are quite different. If \(w=u\), then clearly w is a solution of \(\mathrm{RNVI}\) (42). This fact enables us to suggest the following iterative algorithm for solving \(\mathrm{RNVI}\) (42).
Algorithm 4.4
Let T be the same as in \(\mathrm{RNVI}\) (42). For a given \(u_0\in K\), define the iterative sequence \(\{u_n\}\) in K in the following way:
where \(\rho >0\) is a constant and \(n=0,1,2,\dots \).
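By contrast with the implicit scheme, the explicit variant evaluates T at the current iterate, so each step costs only one projection. The following minimal one-dimensional sketch assumes the step has the form \(u_{n+1}=P_K(u_n-\rho Tu_n)\) (the paper's display is not reproduced above) and uses hypothetical choices of T, K, and \(\rho \).

```python
# Illustrative 1-D sketch of an explicit step of the assumed form
# u_{n+1} = P_K(u_n - rho*T(u_n)): T is evaluated at the current iterate,
# so no implicit equation has to be solved.

def proj_K(x, K=((0.0, 1.0), (2.0, 3.0))):
    # Nearest-point projection onto a union of closed intervals.
    candidates = [min(max(x, a), b) for (a, b) in K]
    return min(candidates, key=lambda c: abs(c - x))

def T(u):
    # Hypothetical monotone operator vanishing at u* = 2.5 in K.
    return u - 2.5

rho, u = 0.5, 0.2
for _ in range(60):
    u = proj_K(u - rho * T(u))
print(abs(u - 2.5) < 1e-9)  # True
```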
In the following, the correct version of Theorem 4.5 is given.
Theorem 4.6
Let T be the same as in \(\mathrm{RNVI}\) (42), and let \(u\in K\) be a solution of \(\mathrm{RNVI}\) (42). Assume that \(\{u_n\}\) is a sequence generated by Algorithm 4.4 and that the sequence \(\{Tu_n\}\) is bounded. If the operator T is partially \((\alpha ,\beta )\)-mixed relaxed and strongly monotone of type (I) with constant \(\beta =\frac{1}{2r}(\Vert Tu\Vert +\sup \limits _{n\ge 0}\Vert Tu_n\Vert )\), then the inequality (14) holds for all \(n\ge 0\).
Proof
The desired result follows immediately from Proposition 3.2 by taking \(\varphi \equiv 0\). \(\square \)
The next statement provides the conditions under which the iterative sequence generated by Algorithm 4.4 converges to a solution of \(\mathrm{RNVI}\) (42).
Theorem 4.7
Suppose that \(\mathcal {H}\) is a finite-dimensional real Hilbert space and let \(T:K\rightarrow \mathcal {H}\) be a nonlinear continuous operator. Let all the conditions of Theorem 4.6 hold and let \(\mathrm{RNVI}(T,K)\ne \emptyset \). If \(\rho \in ]0,\frac{1}{2\alpha }[\), then the iterative sequence \(\{u_n\}\) generated by Algorithm 4.4 converges to a solution \(\hat{u}\in K\) of \(\mathrm{RNVI}\) (42).
Proof
Taking \(\varphi \equiv 0\), we get the desired result from Theorem 3.2. \(\square \)
In the beginning of Section 4 from [25], for a given nonlinear operator \(T:K\rightarrow \mathcal {H}\) and a univariate prox-regular function \(\varphi :K\rightarrow \mathbb {R}\cup \{+\infty \}\), the author considered the problem, which consists in finding \(u\in K\) such that
which is known as the regularized mixed variational inequality.
Noor [25] claimed that using essentially the techniques and ideas developed in [25], one can suggest the following iterative algorithms for solving the problem (52).
Algorithm 4.5
[25, Algorithm 4.1] For a given \(u_0\in K\), compute the approximate solution \(u_{n+1}\) by the iterative scheme
Algorithm 4.6
[25, Algorithm 4.2] For a given \(u_0\in K\), compute the approximate solution \(u_{n+1}\) by the iterative scheme
Unfortunately, by an argument analogous to the previous one, we can prove that, contrary to the claim of the author in [25], Algorithms 4.5 and 4.6 cannot be used for solving the problem (52), and likewise one cannot study the convergence analysis of the iterative sequences generated by them. To overcome this drawback, for a given nonlinear operator \(T:K\rightarrow \mathcal {H}\) and a univariate proper extended real-valued function \(\varphi :K\rightarrow \mathbb {R}\cup \{+\infty \}\), we consider, instead of the problem (52), the problem of finding \(u\in K\) such that
which is called the regularized nonconvex mixed variational inequality \((\mathrm{RNMVI})\). In the sequel, we denote by \(\mathrm{RNMVI}(T,\varphi ,K)\) the set of solutions of \(\mathrm{RNMVI}\) (53).
Let T and \(\varphi \) be the same as in \(\mathrm{RNMVI}\) (53). For a given \(u\in K\), we consider the auxiliary regularized nonconvex mixed variational inequality, which consists in finding \(w\in K\) such that
where \(\rho >0\) is a constant. If \(w=u\), then clearly w is a solution of \(\mathrm{RNMVI}\) (53). Based on this observation, we are able to propose a predictor–corrector method for solving \(\mathrm{RNMVI}\) (53) as follows.
Algorithm 4.7
Let T and \(\varphi \) be the same as in \(\mathrm{RNMVI}\) (53). For a given \(u_0\in K\), compute \(u_{n+1}\in K\) by the iterative scheme
where \(\rho >0\) is a constant and \(n=0,1,2,\dots \).
To study the convergence analysis of the iterative sequence generated by Algorithm 4.7, we need the following definition.
Definition 4.2
Let \(T:K\rightarrow \mathcal {H}\) be a nonlinear operator and let \(\varphi :K\rightarrow \mathbb {R}\cup \{+\infty \}\) be a nonlinear proper extended real-valued function. Then T is said to be pseudomonotone with respect to \(\varphi \) iff
implies that
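Since the displayed implication is not reproduced above, the sketch below checks the classical form of the condition from the mixed variational inequality literature, namely \(\langle Tu,v-u\rangle +\varphi (v)-\varphi (u)\ge 0\Rightarrow \langle Tv,v-u\rangle +\varphi (v)-\varphi (u)\ge 0\); this form, together with the concrete T and \(\varphi \), is an assumption for illustration. Monotonicity of T implies the condition, which the grid check confirms numerically.

```python
# Numerical check of pseudomonotonicity with respect to phi, using the
# classical (assumed) form of the condition:
#   <Tu, v-u> + phi(v) - phi(u) >= 0  implies  <Tv, v-u> + phi(v) - phi(u) >= 0.

def T(u):
    return u ** 3          # hypothetical monotone (hence pseudomonotone) operator

def phi(u):
    return abs(u)          # hypothetical proper convex function

grid = [-2 + 0.1 * i for i in range(41)]   # grid over K = [-2, 2]

holds = all(
    T(v) * (v - u) + phi(v) - phi(u) >= -1e-12
    for u in grid for v in grid
    if T(u) * (v - u) + phi(v) - phi(u) >= 0
)
print(holds)  # True
```

The check works because \(\langle Tv,v-u\rangle =\langle Tu,v-u\rangle +\langle Tv-Tu,v-u\rangle \) and the last term is nonnegative for a monotone operator.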
The next proposition plays a key role in the study of convergence analysis of the iterative sequence generated by Algorithm 4.7.
Proposition 4.1
Let T and \(\varphi \) be the same as in \(\mathrm{RNMVI}\) (53) and let \(u\in K\) be a solution of \(\mathrm{RNMVI}\) (53). Suppose further that \(\{u_n\}\) is a sequence generated by Algorithm 4.7. If the operator T is pseudomonotone with respect to \(\varphi \), then the inequality (4) holds for all \(n\ge 0\).
Proof
We obtain the desired result from Proposition 3.1 by assuming that the function \(\varphi \) in Proposition 3.1 is univariate. \(\square \)
In the next theorem, the conditions required for establishing the convergence of the iterative sequence generated by Algorithm 4.7 are provided.
Theorem 4.8
Let \(\mathcal {H}\) be a finite-dimensional real Hilbert space and \(\varphi :K\rightarrow \mathbb {R}\cup \{+\infty \}\) be a continuous function. Suppose that the operator \(T:K\rightarrow \mathcal {H}\) is continuous and pseudomonotone with respect to \(\varphi \). Moreover, let \(\mathrm{RNMVI}(T,\varphi ,K)\ne \emptyset \). Then, the iterative sequence \(\{u_n\}\) generated by Algorithm 4.7 converges to a solution \(\hat{u}\in K\) of \(\mathrm{RNMVI}\) (53).
Proof
Taking \(\varphi \) to be a univariate function in Theorem 3.1, the desired result follows immediately. \(\square \)
As was pointed out above, implementing the proximal point method requires computing the approximate solution implicitly, which is itself a difficult task. To overcome this drawback, we consider another auxiliary problem, with the help of which we suggest another iterative algorithm whose convergence analysis requires only partially mixed relaxed and strong monotonicity of type (I) of the operator involved in \(\mathrm{RNMVI}\) (53).
Let T and \(\varphi \) be the same as in \(\mathrm{RNMVI}\) (53). For a given \(u\in K\), we consider the auxiliary regularized nonconvex mixed variational inequality, which consists in finding \(w\in K\) such that
where \(\rho >0\) is a constant.
It should be remarked that the problems (54) and (55) are quite different. If \(w=u\), then obviously w is a solution of \(\mathrm{RNMVI}\) (53). By using this observation, we suggest the following iterative method for solving \(\mathrm{RNMVI}\) (53).
Algorithm 4.8
Let T and \(\varphi \) be the same as in \(\mathrm{RNMVI}\) (53). For a given \(u_0\in K\), define the iterative sequence \(\{u_n\}\) in K in the following way:
where \(\rho >0\) is a constant and \(n=0,1,2,\dots \).
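In one dimension, an explicit mixed step of the kind used in Algorithm 4.8 can be sketched as a forward-backward iteration: minimize \(\rho (\langle Tu_n,v-u_n\rangle +\varphi (v)-\varphi (u_n))+\frac{1}{2}\Vert v-u_n\Vert ^2\) over \(v\in K\). This form, as well as the concrete T, \(\varphi \), K, and the brute-force grid search used as the inner solver, are assumptions for illustration only.

```python
# Illustrative 1-D sketch of an explicit mixed step: at each iteration minimize
#   rho * (T(u_n)*(v - u_n) + phi(v) - phi(u_n)) + 0.5*(v - u_n)**2
# over v in the nonconvex set K, here by brute-force grid search.

def T(u):
    return u - 2.5         # hypothetical monotone operator

def phi(u):
    return 0.1 * abs(u)    # simple nonsmooth convex term

# Dense grid over K = [0, 1] ∪ [2, 3].
K_grid = [i / 1000 for i in range(1001)] + [2 + i / 1000 for i in range(1001)]

def step(u, rho=0.5):
    return min(K_grid,
               key=lambda v: rho * (T(u) * (v - u) + phi(v) - phi(u))
                             + 0.5 * (v - u) ** 2)

u = 3.0
for _ in range(40):
    u = step(u)
print(abs(u - 2.4) < 1e-2)  # True: iterates approach u* = 2.4,
                            # the minimizer of 0.5*(u-2.5)**2 + 0.1*|u| over K
```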
The next proposition is a main tool for studying the convergence analysis of the iterative sequence generated by Algorithm 4.8.
Proposition 4.2
Let T and \(\varphi \) be the same as in \(\mathrm{RNMVI}\) (53), and let \(u\in K\) be a solution of \(\mathrm{RNMVI}\) (53). Suppose that \(\{u_n\}\) is a sequence generated by Algorithm 4.8 and that the sequence \(\{Tu_n\}\) is bounded. If the operator T is partially \((\alpha ,\beta )\)-mixed relaxed and strongly monotone of type (I) with constant \(\beta =\frac{1}{2r}(\Vert Tu\Vert +\sup \limits _{n\ge 0}\Vert Tu_n\Vert )\), then the inequality (14) holds for all \(n\ge 0\).
Proof
It follows from Proposition 3.2 by taking \(\varphi \) there to be a univariate function. \(\square \)
We now close this section with the following theorem in which the required conditions for studying the convergence analysis of the iterative sequence generated by Algorithm 4.8 are stated.
Theorem 4.9
Let \(\mathcal {H}\) be a finite-dimensional real Hilbert space and \(\varphi :K\rightarrow \mathbb {R}\cup \{+\infty \}\) be a continuous function. Suppose that the nonlinear operator \(T:K\rightarrow \mathcal {H}\) is continuous and let all the conditions of Proposition 4.2 hold. Furthermore, let \(\mathrm{RNMVI}(T,\varphi ,K)\ne \emptyset \). If \(\rho \in ]0,\frac{1}{2\alpha }[\), then the iterative sequence \(\{u_n\}\) generated by Algorithm 4.8 converges to a solution \(\hat{u}\in K\) of \(\mathrm{RNMVI}\) (53).
Proof
Taking \(\varphi \) in Theorem 3.2 to be a univariate function, we obtain the desired result. \(\square \)
5 Conclusions
Glowinski et al. [4] were the first to use the auxiliary principle technique, which plays an efficient and important role in variational inequality theory, to study the existence of a solution of mixed variational inequalities. Most of the results on the existence of solutions and on iterative methods for variational inequality problems and their generalizations have so far been obtained in the case where the underlying set is convex. This paper has been devoted to the construction of iterative methods for solving a new class of variational inequalities in a nonconvex setting, called regularized nonconvex mixed variational inequalities, with the help of the auxiliary principle technique. We have studied the convergence analysis of the suggested iterative algorithms under appropriate conditions. Our second motivation is reflected in Section 4, where the nonconvex variational inequality and the nonconvex variational problem introduced and studied in [25] have been investigated and analyzed. We have verified that, contrary to the claim of the author in [25], the nonconvex variational problem (2) and the nonconvex variational inequality (3) in [25] are not equivalent. It should be pointed out that the equivalence between the problems (2) and (3) plays a key role in constructing the algorithms and obtaining the results in [25]; indeed, all the results in [25] are based on it. In light of this fact, we have pointed out that the main results in [25] are not valid. As a consequence of our main results, we have provided the correct versions of the algorithms and results presented in [25].
References
Signorini, A.: Questioni di elasticità non linearizzata e semilinearizzata. Rend. Mat. Appl. 18, 95–139 (1959)
Fichera, G.: Problemi elettrostatici con vincoli unilaterali: il problema di Signorini con ambigue condizioni al contorno. Atti Acad. Naz. Lincei. Mem. Cl. Sci. Fis. Mat. Nat. Sez. I 7, 91–140 (1964)
Stampacchia, G.: Formes bilineaires coercitives sur les ensembles convexes. C. R. Acad. Sci. Paris 258, 4413–4416 (1964)
Glowinski, R., Lions, J.L., Trémolieres, R.: Numerical Analysis of Variational Inequalities. North-Holland, Amsterdam (1981)
Lions, J.L., Stampacchia, G.: Variational inequalities. Commun. Pure Appl. Math. 20, 493–519 (1967)
Facchinei, F., Pang, J.-S.: Finite-Dimensional Variational Inequalities and Complementarity Problems, Volume I and II. Springer, New York (2003)
Goh, C.J., Yang, X.Q.: Duality in Optimization and Variational Inequalities. Taylor and Francis, London (2002)
Baiocchi, C., Capelo, A.: Variational and Quasi-Variational Inequalities. Wiley, New York (1984)
Balooee, J., Cho, Y.J.: Algorithms for solutions of extended general mixed variational inequalities and fixed points. Optim. Lett. 7, 1929–1955 (2013)
Noor, M.A.: Monotone mixed variational inequalities. Appl. Math. Lett. 14, 231–236 (2001)
Noor, M.A.: Proximal methods for mixed variational inequalities. J. Optim. Theory Appl. 115, 447–452 (2002)
Giannessi, F., Maugeri, A.: Variational Inequalities and Network Equilibrium Problems. Plenum Press, New York (1995)
Bernard, F., Thibault, L., Zlateva, N.: Characterizations of prox-regular sets in uniformly convex Banach spaces. J. Convex Anal. 13, 525–559 (2006)
Bernard, F., Thibault, L., Zlateva, N.: Prox-regular sets and epigraphs in uniformly convex Banach spaces: various regularities and other properties. Trans. Am. Math. Soc. 363, 2211–2247 (2011)
Bounkhel, M., Tadji, L., Hamdi, A.: Iterative schemes to solve nonconvex variational problems. J. Inequal. Pure Appl. Math. 4, 1–14 (2003)
Bounkhel, M., Thibault, L.: Nonconvex sweeping process and prox-regularity in Hilbert space. J. Nonlinear Convex Anal. 6, 359–374 (2005)
Clarke, F.H., Ledyaev, Y.S., Stern, R.J., Wolenski, P.R.: Nonsmooth Analysis and Control Theory. Springer, New York (1998)
Clarke, F.H., Stern, R.J., Wolenski, P.R.: Proximal smoothness and the lower \(C^2\) property. J. Convex Anal. 2, 117–144 (1995)
Poliquin, R.A., Rockafellar, R.T., Thibault, L.: Local differentiability of distance functions. Trans. Am. Math. Soc. 352, 5231–5249 (2000)
Moudafi, A.: An algorithmic approach to prox-regular variational inequalities. Appl. Math. Comput. 155(3), 845–852 (2004)
Ansari, Q.H., Balooee, J.: Predictor–corrector methods for general regularized nonconvex variational inequalities. J. Optim. Theory Appl. 159, 473–488 (2013)
Ansari, Q.H., Balooee, J., Yao, J.-C.: Iterative algorithms for systems of extended regularized nonconvex variational inequalities and fixed point problems. Appl. Anal. 93(5), 972–993 (2014)
Balooee, J.: Projection method approach for general regularized non-convex variational inequalities. J. Optim. Theory Appl. 159, 192–209 (2013)
Balooee, J., Cho, Y.J.: General regularized nonconvex variational inequalities and implicit general nonconvex Wiener–Hopf equations. Pac. J. Optim. 10(2), 255–273 (2014)
Noor, M.A.: Iterative schemes for nonconvex variational inequalities. J. Optim. Theory Appl. 121, 385–395 (2004)
Canino, A.: On \(p\)-convex sets and geodesics. J. Differ. Equ. 75, 118–157 (1988)
Noor, M.A.: Proximal methods for mixed quasivariational inequalities. J. Optim. Theory Appl. 115(2), 453–459 (2002)
Acknowledgements
The author would like to express his gratitude to the anonymous referee and handling editor for their helpful comments on the first version of this article.
Balooee, J. Regularized Nonconvex Mixed Variational Inequalities: Auxiliary Principle Technique and Iterative Methods. J Optim Theory Appl 172, 774–801 (2017). https://doi.org/10.1007/s10957-016-1046-3
Keywords
- Regularized nonconvex mixed variational inequalities
- Auxiliary principle technique
- Prox-regularity
- Nonconvex sets
- Predictor–corrector methods
- Convergence analysis