1 Introduction

Since the origin of the theory of variational inequalities in the early 1960s, several numerical methods have been proposed in the literature for solving various classes of variational inequalities and related optimization problems. One of the most effective is the auxiliary principle technique introduced by Glowinski et al. [21]. This method is based on an auxiliary (supporting) problem linked with the original problem, which allows one to define a mapping relating the two problems. It is worth mentioning that most of the results on the existence of solutions and on iterative methods for variational inequalities have so far been investigated in the setting of a convex set. To deal with many nonconvex applications in optimization, economic models, dynamical systems, differential inclusions, etc., Clarke et al. [19] introduced a class of nonconvex sets, called proximally smooth sets. It was subsequently investigated by Poliquin et al. [26], but under the name of uniformly prox-regular sets. This class of nonconvex sets is well suited to overcoming the difficulties which arise from the nonconvexity assumption. For further details and applications, we refer to [10,11,12,14] and the references therein.

During the last decade, several authors have devoted attention to developing efficient and implementable numerical methods for solving variational inequalities and their generalizations in the setting of a uniformly prox-regular set; see, for example, [1,2,3,4,5,7,8,9,13,23] and the references therein. By using the projection method or the auxiliary principle technique, several iterative algorithms for solving variational inequality problems have been suggested and analyzed in these papers.

The variational inequality problem in which the underlying mapping is set-valued is called a generalized variational inequality problem. It was introduced and studied by Saigal [27] and by Fang and Peterson [20]. It provides a necessary and sufficient condition for a solution of a nondifferentiable but convex minimization problem. For further details on generalized variational inequality problems, we refer to [6] and the references therein. Recently, Pang et al. [25] considered and studied a set-valued variational problem in which the underlying set is nonconvex, and called it the nonconvex generalized variational problem. They claimed that the nonconvex generalized variational problem is equivalent to the nonconvex generalized variational inequality problem in the setting of uniformly prox-regular sets. Using the auxiliary principle technique, they suggested and analyzed a modified predictor-corrector algorithm for solving the nonconvex generalized variational inequality problem, and claimed that the iterative sequences generated by their algorithm converge strongly to a solution of this problem.

In this paper, we introduce a new class of regularized nonconvex set-valued mixed variational inequalities (in short, \(\mathop {\mathrm{RNSVMVI}}\)), where the underlying set is uniformly prox-regular. By using the auxiliary principle technique, a predictor-corrector method for solving \(\mathop {\mathrm{RNSVMVI}}\) is proposed and analyzed, and the convergence of the iterative sequences generated by the suggested algorithm is studied. It is worth mentioning that the convergence analysis requires only the partially mixed relaxed and strong monotonicity of type (I) of the operator involved in \(\mathop {\mathrm{RNSVMVI}}\). The last section investigates and analyzes the results given in [25]: some errors in those results are detected and their invalidity is pointed out. Finally, we present the correct versions of the results given in [25] as special cases of our main results from Sect. 3.

2 Preliminaries and basic facts

Throughout the paper, unless otherwise specified, we use the following notations, terminology and assumptions. Let \(\mathcal {H}\) be a real Hilbert space whose inner product and norm are denoted by \(\langle .,.\rangle \) and \(\Vert .\Vert \), respectively. Let K be a nonempty closed subset of \(\mathcal {H}\). We denote by \(d_K(.)\) or d(., K) the usual distance function from a point to a set K, that is, \(d_K(u)=\inf \nolimits _{v\in K}\Vert u-v\Vert \).

Definition 2.1

Let \(u\in \mathcal {H}\) be a point not lying in K. A point \(v\in K\) is called a closest point or a projection of u onto K if \(d_K(u)=\Vert u-v\Vert \). The set of all such closest points is denoted by \(P_K(u)\), that is,

$$\begin{aligned} P_K(u) := \left\{ v \in K : d_K(u) = \Vert u-v\Vert \right\} . \end{aligned}$$
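As a small numerical illustration of Definition 2.1 (a sketch added here, with a hypothetical set K, not part of the original development), the following snippet computes \(P_K(u)\) for a union of two closed intervals in \(\mathbb {R}\); at the midpoint of the gap the projection is genuinely set-valued.

```python
def proj_interval(u, a, b):
    """Projection of a real number u onto the closed interval [a, b]."""
    return min(max(u, a), b)

def proj_union(u, intervals):
    """The set P_K(u) of all closest points of u in a finite union of
    closed intervals; it may contain more than one point when K is
    nonconvex."""
    candidates = [proj_interval(u, a, b) for a, b in intervals]
    d = min(abs(u - c) for c in candidates)
    return sorted({c for c in candidates if abs(abs(u - c) - d) < 1e-12})

# K = [0, 1] ∪ [3, 4]; the midpoint 2 of the gap has two closest points.
K = [(0.0, 1.0), (3.0, 4.0)]
print(proj_union(1.9, K))  # [1.0]
print(proj_union(2.0, K))  # [1.0, 3.0], so P_K(2) is not a singleton
```

This is exactly the set-valuedness of \(P_K\) that distinguishes the nonconvex case from the convex one.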

Definition 2.2

The proximal normal cone of K at a point \(u\in K\) is given by

$$\begin{aligned} N_K^P(u):=\left\{ \xi \in \mathcal {H}:\exists \alpha >0 \text{ such } \text{ that } u\in P_K(u+\alpha \xi )\right\} . \end{aligned}$$

The following lemmas give the characterization of the proximal normal cone.

Lemma 2.1

[18, Proposition 1.1.5] Let K be a nonempty closed subset of \(\mathcal {H}\). Then \(\xi \in N_K^P(u)\) if and only if there exists a constant \(\alpha =\alpha (\xi ,u)>0\) such that \(\langle \xi ,v-u\rangle \le \alpha \Vert v-u\Vert ^2\) for all \(v\in K\).

Lemma 2.2

[18, Proposition 1.1.10] Let K be a nonempty, closed and convex subset of \(\mathcal {H}\). Then \(\xi \in N_K^P(u)\) if and only if \(\langle \xi ,v-u\rangle \le 0\) for all \(v\in K\).

Definition 2.3

[17] Let \(f : \mathcal {H} \rightarrow \mathbb {R}\) be locally Lipschitz near a point x. The Clarke directional derivative of f at x in the direction v, denoted by \(f^{\circ }(x;v)\), is defined by

$$\begin{aligned} f^{\circ }(x;v) = \limsup _{\begin{array}{c} {y \rightarrow x} \\ { t\downarrow 0} \end{array}} \frac{f(y+tv)-f(y)}{t}, \end{aligned}$$

where y is a vector in \(\mathcal {H}\) and t is a positive scalar.

The tangent cone to K at a point \(x \in K\), denoted by \(T_K(x)\), is defined by

$$\begin{aligned} T_K(x) := \left\{ v \in \mathcal {H} : d^{\circ }_K(x;v) = 0 \right\} . \end{aligned}$$

The normal cone to K at \(x \in K\), denoted by \(N_K(x)\), is defined by

$$\begin{aligned} N_K(x) := \left\{ \xi \in \mathcal {H} : \langle \xi ,v \rangle \le 0 \text{ for } \text{ all } v \in T_K(x) \right\} . \end{aligned}$$

The Clarke normal cone, denoted by \(N^C_K(x)\), is defined by \(N^C_K(x)=\overline{co}[N_K^P(x)]\), where \(\overline{co}[S]\) denotes the closure of the convex hull of S.

Clearly, \(N^P_K(x) \subseteq N^C_K(x)\). Note that \(N^C_K(x)\) is a closed and convex cone, whereas \(N_K^P(x)\) is convex, but may not be closed. For further details on this topic, we refer to [17, 18, 26] and the references therein.

Definition 2.4

[19] For a given \(r \in (0,+\infty ]\), a subset \(K_r\) of \(\mathcal {H}\) is said to be normalized uniformly prox-regular (or uniformly r-prox-regular) if every nonzero proximal normal to \(K_r\) can be realized by an r-ball. This means that for all \(\bar{x}\in K_r\) and all \(\mathbf{0} \ne \xi \in N^P_{K_r}(\bar{x})\),

$$\begin{aligned} \left\langle \dfrac{\xi }{\Vert \xi \Vert }, x - \bar{x} \right\rangle \le \frac{1}{2r} \Vert x-\bar{x}\Vert ^2,\quad \text{ for } \text{ all } x \in K_r. \end{aligned}$$

The class of normalized uniformly prox-regular sets includes the class of convex sets, p-convex sets [15], \(C^{1,1}\) submanifolds (possibly with boundary) of \(\mathcal {H}\), the images under a \(C^{1,1}\) diffeomorphism of convex sets and many other nonconvex sets [19].

Lemma 2.3

[19] A closed set \(K\subseteq \mathcal {H}\) is convex if and only if it is uniformly r-prox-regular for every \(r>0\).

If \(r=+\infty \), then in view of Definition 2.4 and Lemma 2.3, the uniform r-prox-regularity of \(K_r\) is equivalent to the convexity of \(K_r\). That is, for \(r=+\infty \), we set \(K_r=K\).

The union of two disjoint closed intervals [a, b] and [c, d] with \(b<c\) is uniformly r-prox-regular with \(r=\frac{c-b}{2}\) [13, 18, 26]. Any finite union of disjoint closed intervals is also uniformly r-prox-regular, where r depends on the distances between the intervals.
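The r-ball inequality of Definition 2.4 can be checked numerically on this two-interval example (a sketch with endpoints chosen here for illustration):

```python
# Check the r-ball inequality of Definition 2.4 for K = [0, 1] ∪ [3, 4],
# where r = (3 - 1) / 2 = 1 and xi = 1 is a proximal normal at xbar = 1.
a, b, c, d = 0.0, 1.0, 3.0, 4.0
r = (c - b) / 2.0

xbar, xi = b, 1.0
N = 400
pts = ([a + (b - a) * i / N for i in range(N + 1)]
       + [c + (d - c) * i / N for i in range(N + 1)])
ok = all((xi / abs(xi)) * (x - xbar) <= (x - xbar) ** 2 / (2 * r) + 1e-12
         for x in pts)
print(ok)  # True; equality is attained at x = c = 3
```

Note that equality at \(x=c\) shows the constant \(r=\frac{c-b}{2}\) cannot be improved for this set.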

3 Main results

The main motivation of this section is to introduce a new class of regularized nonconvex set-valued mixed variational inequalities and to suggest and analyze an iterative method for solving such variational inequalities by using the auxiliary principle technique. Further, the convergence analysis of the proposed iterative algorithm under some appropriate conditions is studied.

From now onward, unless otherwise specified, we suppose that K is a uniformly r-prox-regular set in \(\mathcal {H}\). We denote by \(CB(\mathcal {H})\) the family of all nonempty closed and bounded subsets of \(\mathcal {H}\). Let \(T:K\rightarrow CB(\mathcal {H})\) be a set-valued operator. For a given extended real-valued function \(\varphi :\mathcal {H}\rightarrow \mathbb {R}\cup \{+\infty \}\), we consider the problem of finding \(u\in K\) and \(u^*\in T(u)\) such that

$$\begin{aligned} \langle u^*,v-u\rangle +\frac{\Vert u^*\Vert }{2r}\Vert v-u\Vert ^ 2+\varphi (v)-\varphi (u)\ge 0,\quad \forall v\in K, \end{aligned}$$
(3.1)

which is called the regularized nonconvex set-valued mixed variational inequality \((\mathop {\mathrm{RNSVMVI}})\).

For appropriate and suitable choices of the mappings T and \(\varphi \) in the above problem, one can easily obtain the problems studied in [16, 20, 22] and the references therein.

In the sequel, we denote by \(\mathop {\mathrm{RNSVMVI}}(T,\varphi ,K)\) the set of solutions of \(\mathop {\mathrm{RNSVMVI}}\) (3.1).

We now mention the following result due to Nadler [24] which will be used in the sequel.

Lemma 3.1

[24] Let X be a complete metric space and \(T:X\rightarrow CB(X)\) be a set-valued mapping. Then for any \(\varepsilon >0\) and for any given \(x,y\in X\), \(u\in T(x)\), there exists \(v\in T(y)\) such that

$$\begin{aligned} d(u,v)\le (1+\varepsilon ) M(T(x),T(y)), \end{aligned}$$

where M(., .) is the Hausdorff metric on CB(X) defined by

$$\begin{aligned} M(A,B)=\max \left\{ \sup \limits _{x\in A}\inf \limits _{y\in B}\Vert x-y\Vert , \sup \limits _{y\in B}\inf \limits _{x\in A}\Vert x-y\Vert \right\} , \quad \forall A,B\in CB(X). \end{aligned}$$
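For finite sets the Hausdorff metric above is straightforward to compute; the following snippet (an illustration added here, not part of the original text) implements M(., .) for finite subsets of \(\mathbb {R}\):

```python
def hausdorff(A, B):
    """Hausdorff metric M(A, B) between two nonempty finite subsets of R:
    max over each set of the distance of its points to the other set."""
    sup_A = max(min(abs(x - y) for y in B) for x in A)
    sup_B = max(min(abs(x - y) for x in A) for y in B)
    return max(sup_A, sup_B)

A, B = [0.0, 1.0], [0.0, 3.0]
print(hausdorff(A, B))  # 2.0, driven by the point 3 in B
print(hausdorff(A, A))  # 0.0
```

Both suprema are needed: the one-sided quantities \(\sup _{x\in A}\inf _{y\in B}\Vert x-y\Vert \) and \(\sup _{y\in B}\inf _{x\in A}\Vert x-y\Vert \) generally differ (here they are 1 and 2).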

Let T and \(\varphi \) be the same as in \(\mathop {\mathrm{RNSVMVI}}\) (3.1). For given \(u\in K\) and \(u^*\in T(u)\), consider the following auxiliary regularized nonconvex set-valued mixed variational inequality problem of finding \(w\in K\) and \(w^*\in T(w)\) such that

$$\begin{aligned} \langle \rho w^*+w-u,v-w\rangle +\frac{\rho \Vert w^*\Vert }{2r}\Vert v-w\Vert ^2 +\rho \varphi (v)-\rho \varphi (w)\ge 0,\quad \forall v\in K, \end{aligned}$$
(3.2)

where \(\rho >0\) is a constant.

We observe that if \(w=u\), then obviously \((w,w^*)\) is a solution of \(\mathop {\mathrm{RNSVMVI}}\) (3.1). This observation and Lemma 3.1 enable us to suggest the following three-step predictor-corrector method for solving \(\mathop {\mathrm{RNSVMVI}}\) (3.1).

Algorithm 3.1

Let T and \(\varphi \) be the same as in \(\mathop {\mathrm{RNSVMVI}}\) (3.1). For given \(u_0,y_0,w_0\in K\) and \(u^*_0\in T(u_0)\), \(y^*_0\in T(y_0)\) and \(w^*_0\in T(w_0)\), define the iterative sequences \(\{u_n\}\), \(\{u^*_n\}\), \(\{y_n\}\), \(\{y^*_n\}\), \(\{w_n\}\) and \(\{w^*_n\}\) by the iterative schemes

$$\begin{aligned}&\langle \rho w^*_n\!+u_{n+1}-w_n,v-u_{n+1}\rangle \!+\frac{\rho \Vert w^*_n\Vert }{2r}\Vert v-u_{n+1}\Vert ^2 +\rho \varphi (v)\!-\!\rho \varphi (u_{n+1})\ge 0,\,\,\forall v\in K,\end{aligned}$$
(3.3)
$$\begin{aligned}&\langle \rho y^*_n+w_n-y_n,v-w_n\rangle +\frac{\rho \Vert y^*_n\Vert }{2r}\Vert v-w_n\Vert ^2 +\rho \varphi (v)-\rho \varphi (w_n)\ge 0,\quad \forall v\in K,\end{aligned}$$
(3.4)
$$\begin{aligned}&\langle \rho u^*_n+y_n-u_n,v-y_n\rangle +\frac{\rho \Vert u^*_n\Vert }{2r}\Vert v-y_n\Vert ^2 +\rho \varphi (v)-\rho \varphi (y_n)\ge 0,\quad \forall v\in K,\end{aligned}$$
(3.5)
$$\begin{aligned}&w^*_{n+1}\in T(w_{n+1}): \Vert w^*_{n+1}-w^*_n\Vert \le (1+(1+n)^{-1})M(T(w_{n+1}),T(w_n)),\end{aligned}$$
(3.6)
$$\begin{aligned}&y^*_{n+1}\in T(y_{n+1}): \Vert y^*_{n+1}-y^*_n\Vert \le (1+(1+n)^{-1})M(T(y_{n+1}),T(y_n)), \end{aligned}$$
(3.7)
$$\begin{aligned}&u^*_{n+1}\in T(u_{n+1}): \Vert u^*_{n+1}-u^*_n\Vert \le (1+(1+n)^{-1})M(T(u_{n+1}),T(u_n)), \end{aligned}$$
(3.8)

where \(\rho >0\) is a constant and \(n=0,1,2,\ldots \).
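The structure of the three-step scheme can be seen in a drastically simplified special case (a numerical sketch only, NOT the set-valued algorithm itself): take \(\varphi \equiv 0\), T single-valued and K a closed convex interval, so that \(r=+\infty \) and each auxiliary inequality (3.3)–(3.5) becomes the variational characterization of a projection.

```python
def proj(x, a, b):
    """Projection onto the convex interval K = [a, b]."""
    return min(max(x, a), b)

def predictor_corrector(T, a, b, u0, rho=0.1, iters=200):
    """Three-step scheme in the special case phi = 0, K = [a, b] convex
    (r = +infinity) and T single-valued: each auxiliary inequality
    reduces to a projection step."""
    u = u0
    for _ in range(iters):
        y = proj(u - rho * T(u), a, b)  # predictor step, cf. (3.5)
        w = proj(y - rho * T(y), a, b)  # intermediate step, cf. (3.4)
        u = proj(w - rho * T(w), a, b)  # corrector step, cf. (3.3)
    return u

# T(u) = u - 2 on K = [0, 5]: the unique solution of the VI is u = 2.
u_star = predictor_corrector(lambda u: u - 2.0, 0.0, 5.0, u0=5.0)
print(round(u_star, 6))  # 2.0
```

The strongly monotone choice of T here makes each substep a contraction, which is the convex-case analogue of the convergence mechanism studied below.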

The following definitions will be used to study the convergence analysis of iterative sequences generated by Algorithm 3.1.

Definition 3.1

A set-valued operator \(T:\mathcal {H}\rightarrow CB(\mathcal {H})\) is said to be M-Lipschitz continuous with constant \(\delta \) if there exists a constant \(\delta >0\) such that

$$\begin{aligned} M(T(u),T(v))\le \delta \Vert u-v\Vert ,\quad \forall u,v\in \mathcal {H}, \end{aligned}$$

where M(., .) is the Hausdorff metric on \(CB(\mathcal {H})\).

Definition 3.2

A set-valued operator \(T:K\rightarrow CB(\mathcal {H})\) is said to be

(a) monotone if

    $$\begin{aligned} \langle u^*-v^*,u-v\rangle \ge 0,\quad \forall u,v\in K, u^*\in T(u), v^*\in T(v); \end{aligned}$$
(b) \(\kappa \)-strongly monotone if there exists a constant \(\kappa >0\) such that

    $$\begin{aligned} \langle u^*-v^*,u-v\rangle \ge \kappa \Vert u-v\Vert ^2,\quad \forall u,v\in K, u^*\in T(u), v^*\in T(v); \end{aligned}$$
(c) partially \(\varsigma \)-strongly monotone if there exists a constant \(\varsigma >0\) such that

    $$\begin{aligned} \langle u^*-v^*,z-v\rangle \ge \varsigma \Vert z-v\Vert ^2,\quad \forall u,v,z\in K, u^*\in T(u), v^*\in T(v); \end{aligned}$$
(d) partially \(\zeta \)-relaxed monotone of type (I) if there exists a constant \(\zeta >0\) such that

    $$\begin{aligned} \langle u^*-v^*,z-v\rangle \ge -\zeta \Vert z-u\Vert ^2,\quad \forall u,v,z\in K, u^*\in T(u), v^*\in T(v); \end{aligned}$$
(e) partially \((\alpha ,\beta )\)-mixed relaxed and strongly monotone of type (I) if there exist constants \(\alpha ,\beta >0\) such that

    $$\begin{aligned} \langle u^*-v^*,z-v\rangle \ge -\alpha \Vert z-u\Vert ^2+\beta \Vert z-v\Vert ^2, \quad \forall u,v,z\in K, u^*\in T(u), v^*\in T(v). \end{aligned}$$

If \(z=u\), then partial strong monotonicity and partially mixed relaxed and strong monotonicity of type (I) both reduce to strong monotonicity, while partially relaxed monotonicity of type (I) reduces to monotonicity.

The following proposition plays a key role in establishing the strong convergence of the iterative sequences generated by Algorithm 3.1 to a solution of \(\mathop {\mathrm{RNSVMVI}}\) (3.1).

Proposition 3.1

Let T and \(\varphi \) be the same as in \(\mathop {\mathrm{RNSVMVI}}\) (3.1) and let \(u\in K\) and \(u^*\in T(u)\) be a solution of \(\mathop {\mathrm{RNSVMVI}}\) (3.1). Suppose further that \(\{u_n\}\), \(\{w_n\}\), \(\{y_n\}\), \(\{u^*_n\}\), \(\{w^*_n\}\) and \(\{y^*_n\}\) are sequences generated by Algorithm 3.1 such that the sequences \(\{u^*_n\}\), \(\{w^*_n\}\) and \(\{y^*_n\}\) are bounded. If the operator T is partially \((\alpha ,\beta )\)-mixed relaxed and strongly monotone of type (I) with \(\beta =\frac{1}{2r}\left( \Vert u^*\Vert +\sup \Big \{\Vert u^*_n\Vert ,\Vert w^*_n\Vert ,\Vert y^*_n\Vert :n\ge 0\Big \}\right) \), then, for all \(n\ge 0\),

$$\begin{aligned} \Vert u-u_{n+1}\Vert ^2&\le \Vert u-w_n\Vert ^2-(1-2\alpha \rho )\Vert u_{n+1}-w_n\Vert ^2,\end{aligned}$$
(3.9)
$$\begin{aligned} \Vert u-w_n\Vert ^2&\le \Vert u-y_n\Vert ^2-(1-2\alpha \rho )\Vert w_n-y_n\Vert ^2,\end{aligned}$$
(3.10)
$$\begin{aligned} \Vert u-y_n\Vert ^2&\le \Vert u-u_n\Vert ^2-(1-2\alpha \rho )\Vert y_n-u_n\Vert ^2. \end{aligned}$$
(3.11)

Proof

Since \((u,u^*)\), with \(u\in K\) and \(u^*\in T(u)\), is a solution of \(\mathop {\mathrm{RNSVMVI}}\) (3.1), it follows that

$$\begin{aligned} \langle \rho u^*,v-u\rangle +\frac{\rho \Vert u^*\Vert }{2r}\Vert v-u\Vert ^2+ \rho \varphi (v)-\rho \varphi (u)\ge 0,\quad \forall v\in K. \end{aligned}$$
(3.12)

Taking \(v=u_{n+1}\) in (3.12) and \(v=u\) in (3.3), we obtain

$$\begin{aligned} \langle \rho u^*,u_{n+1}-u\rangle +\frac{\rho \Vert u^*\Vert }{2r}\Vert u_{n+1} -u\Vert ^2+\rho \varphi (u_{n+1})-\rho \varphi (u)\ge 0, \end{aligned}$$
(3.13)

and

$$\begin{aligned}&\langle \rho w^*_n+u_{n+1}-w_n,u-u_{n+1}\rangle +\frac{\rho \Vert w^*_n\Vert }{2r}\Vert u-u_{n+1}\Vert ^2 +\rho \varphi (u)-\rho \varphi (u_{n+1})\ge 0.\nonumber \\ \end{aligned}$$
(3.14)

By using (3.13) and (3.14), we get

$$\begin{aligned}&\langle u_{n+1}-w_n,u-u_{n+1}\rangle \nonumber \\&\quad \ge \rho \langle w_n^*,u_{n+1}-u\rangle -\frac{\rho \Vert w^*_n\Vert }{2r}\Vert u-u_{n+1}\Vert ^2 +\rho \varphi (u_{n+1})-\rho \varphi (u)\nonumber \\&\quad =\rho \langle w_n^*-u^*,u_{n+1}-u\rangle +\rho \langle u^*,u_{n+1}-u\rangle -\frac{\rho \Vert w^*_n\Vert }{2r} \Vert u-u_{n+1}\Vert ^2+\rho \varphi (u_{n+1})-\rho \varphi (u)\nonumber \\&\quad \ge \rho \langle w_n^*-u^*,u_{n+1}-u\rangle -\frac{\rho \Vert u^*\Vert }{2r}\Vert u_{n+1}-u\Vert ^2 -\frac{\rho \Vert w^*_n\Vert }{2r}\Vert u-u_{n+1}\Vert ^2 \nonumber \\&\quad =\rho \langle w_n^*-u^*,u_{n+1}-u\rangle -\frac{\rho (\Vert u^*\Vert +\Vert w^*_n\Vert )}{2r}\Vert u_{n+1}-u\Vert ^2. \end{aligned}$$
(3.15)

Taking into account that T is partially \((\alpha ,\beta )\)-mixed relaxed and strongly monotone of type (I) with \(\beta =\frac{1}{2r}\left( \Vert u^*\Vert +\sup \Big \{\Vert u^*_n\Vert ,\Vert w^*_n\Vert ,\Vert y^*_n\Vert :n\ge 0\Big \}\right) \), it follows from (3.15) that

$$\begin{aligned}&\langle u_{n+1}-w_{n},u-u_{n+1}\rangle \nonumber \\&\quad \ge -\alpha \rho \Vert u_{n+1}-w_{n}\Vert ^2 +\rho \beta \Vert u_{n+1}-u\Vert ^2-\frac{\rho (\Vert u^*\Vert +\Vert w^*_n\Vert )}{2r}\Vert u_{n+1}-u\Vert ^2\nonumber \\&\quad \ge -\alpha \rho \Vert u_{n+1}-w_n\Vert ^2. \end{aligned}$$
(3.16)

Setting \(x=u-u_{n+1}\) and \(y=u_{n+1}-w_n\), and utilizing a well-known property of the inner product, we have

$$\begin{aligned} 2\langle u_{n+1}-w_n,u-u_{n+1}\rangle =\Vert u-w_n\Vert ^2-\Vert u-u_{n+1}\Vert ^2-\Vert u_{n+1}-w_n\Vert ^2. \end{aligned}$$
(3.17)
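For completeness, identity (3.17) is just the expansion of a squared sum: with \(x=u-u_{n+1}\) and \(y=u_{n+1}-w_n\),

$$\begin{aligned} \Vert u-w_n\Vert ^2=\Vert x+y\Vert ^2=\Vert x\Vert ^2+2\langle y,x\rangle +\Vert y\Vert ^2 =\Vert u-u_{n+1}\Vert ^2+2\langle u_{n+1}-w_n,u-u_{n+1}\rangle +\Vert u_{n+1}-w_n\Vert ^2, \end{aligned}$$

and rearranging the terms yields (3.17).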

Combining (3.16) and (3.17), we deduce

$$\begin{aligned} \Vert u-u_{n+1}\Vert ^2\le \Vert u-w_n\Vert ^2-(1-2\alpha \rho )\Vert u_{n+1}-w_n\Vert ^2, \end{aligned}$$

which is the required result (3.9).

Taking \(v=w_n\) in (3.12) and \(v=u\) in (3.4), we get

$$\begin{aligned} \langle \rho u^*,w_n-u\rangle +\frac{\rho \Vert u^*\Vert }{2r}\Vert w_n-u\Vert ^2+\rho \varphi (w_n)-\rho \varphi (u)\ge 0 \end{aligned}$$
(3.18)

and

$$\begin{aligned} \langle \rho y^*_n+w_n-y_n,u-w_n\rangle +\frac{\rho \Vert y^*_n\Vert }{2r}\Vert u-w_n\Vert ^2 +\rho \varphi (u)-\rho \varphi (w_n)\ge 0. \end{aligned}$$
(3.19)

By the same argument as in the proof of (3.15)–(3.17), and by using (3.18) and (3.19) and partially \((\alpha ,\beta )\)-mixed relaxed and strong monotonicity of type (I) of T, one can conclude that

$$\begin{aligned} \Vert u-w_n\Vert ^2\le \Vert u-y_n\Vert ^2-(1-2\alpha \rho )\Vert w_n-y_n\Vert ^2, \end{aligned}$$

which is the required result (3.10).

Picking \(v=y_n\) in (3.12) and \(v=u\) in (3.5), we obtain

$$\begin{aligned} \langle \rho u^*,y_n-u\rangle +\frac{\rho \Vert u^*\Vert }{2r}\Vert y_n-u\Vert ^2+ \rho \varphi (y_n)-\rho \varphi (u)\ge 0, \end{aligned}$$
(3.20)

and

$$\begin{aligned}&\langle \rho u^*_n+y_n-u_n,u-y_n\rangle +\frac{\rho \Vert u^*_n\Vert }{2r}\Vert u-y_n\Vert ^2 +\rho \varphi (u)-\rho \varphi (y_n)\ge 0. \end{aligned}$$
(3.21)

In a similar fashion to the preceding analysis, by virtue of (3.20) and (3.21) and considering the fact that T is partially \((\alpha ,\beta )\)-mixed relaxed and strongly monotone of type (I), we can show that

$$\begin{aligned} \Vert u-y_n\Vert ^2\le \Vert u-u_n\Vert ^2-(1-2\alpha \rho )\Vert y_n-u_n\Vert ^2, \end{aligned}$$

which gives us the required result (3.11). This completes the proof.

We now conclude this section by presenting the following theorem which provides the strong convergence of sequences generated by Algorithm 3.1.

Theorem 3.1

Let \(\mathcal {H}\) be a finite dimensional real Hilbert space and \(\varphi :\mathcal {H}\rightarrow \mathbb {R}\cup \{+\infty \}\) be a continuous function. Suppose that the set-valued mapping \(T:K\rightarrow CB(\mathcal {H})\) is M-Lipschitz continuous with constant \(\delta \), that all the conditions of Proposition 3.1 hold and that \(\mathop {\mathrm{RNSVMVI}}(T,\varphi ,K)\ne \emptyset \). If \(\rho \in (0,\frac{1}{2\alpha })\), then the sequences \(\{u_n\}\) and \(\{u^*_n\}\) generated by Algorithm 3.1 converge strongly to \(\hat{u}\in K\) and \(\hat{u}^*\in T(\hat{u})\), respectively, and \((\hat{u},\hat{u}^*)\) is a solution of \(\mathop {\mathrm{RNSVMVI}}\) (3.1).

Proof

Let \(u\in K\) and \(u^*\in T(u)\) be a solution of \(\mathop {\mathrm{RNSVMVI}}\) (3.1). Then inequalities (3.9)–(3.11) imply that the sequence \(\{\Vert u_n-u\Vert \}\) is nonincreasing, and so the sequence \(\{u_n\}\) is bounded. Furthermore, by using (3.9)–(3.11), we have

$$\begin{aligned} (1-2\alpha \rho ) \left( \Vert u_{n+1}-w_n\Vert ^2+\Vert w_n-y_n\Vert ^2+\Vert y_n-u_n\Vert ^2 \right) \le \Vert u-u_n\Vert ^2-\Vert u-u_{n+1}\Vert ^2, \end{aligned}$$

which implies that

$$\begin{aligned} \sum \limits _{n=0}^{\infty }(1-2\alpha \rho ) \left( \Vert u_{n+1}-w_n\Vert ^2 +\Vert w_n-y_n\Vert ^2+\Vert y_n-u_n\Vert ^2 \right) \le \Vert u-u_0\Vert ^2. \end{aligned}$$
(3.22)

From (3.22), we deduce that \(\Vert u_{n+1}-w_n\Vert \rightarrow 0\), \(\Vert w_n-y_n\Vert \rightarrow 0\) and \(\Vert y_n-u_n\Vert \rightarrow 0\) as \(n\rightarrow \infty \). Let \(\hat{u}\) be a cluster point of the sequence \(\{u_n\}\); the boundedness of \(\{u_n\}\) guarantees the existence of a subsequence \(\{u_{n_i}\}\) such that \(u_{n_i}\rightarrow \hat{u}\) as \(i\rightarrow \infty \). Since \(\Vert y_n-u_n\Vert \rightarrow 0\) as \(n\rightarrow \infty \), we have

$$\begin{aligned} \Vert y_{n_i}-\hat{u}\Vert \le \Vert y_{n_i}-u_{n_i}\Vert +\Vert u_{n_i}-\hat{u}\Vert . \end{aligned}$$

The right side of the above inequality tends to zero as \(i\rightarrow \infty \); consequently, \(y_{n_i}\rightarrow \hat{u}\) as \(i\rightarrow \infty \). Since the operator T is M-Lipschitz continuous with constant \(\delta >0\), by using (3.8), we have

$$\begin{aligned} \begin{aligned} \Vert u^*_{n_i+1}-u^*_{n_i}\Vert&\le (1+(1+n_i)^{-1})M(T(u_{n_i+1}),T(u_{n_i}))\\&\le (1+(1+n_i)^{-1})\delta \Vert u_{n_i+1}-u_{n_i}\Vert . \end{aligned} \end{aligned}$$
(3.23)

Inequality (3.23) implies that \(\Vert u^*_{n_i+1}-u^*_{n_i}\Vert \rightarrow 0\) as \(i\rightarrow \infty \). Moreover, since the sequence \(\{u^*_n\}\) is bounded and \(\mathcal {H}\) is finite dimensional, passing to a further subsequence if necessary, we may assume that \(u^*_{n_i}\rightarrow \hat{u}^*\) as \(i\rightarrow \infty \) for some \(\hat{u}^*\in \mathcal {H}\). By using (3.5), we obtain

$$\begin{aligned} \langle \rho u^*_{n_i}+y_{n_i}-u_{n_i},v-y_{n_i}\rangle +\frac{\rho \Vert u^*_{n_i}\Vert }{2r}\Vert v-y_{n_i}\Vert ^2 +\rho \varphi (v)-\rho \varphi (y_{n_i})\ge 0,\quad \forall v\in K.\nonumber \\ \end{aligned}$$
(3.24)

Taking the limit in relation (3.24) as \(i\rightarrow \infty \) and using the continuity of \(\varphi \), we obtain

$$\begin{aligned} \langle \hat{u}^*,v-\hat{u}\rangle +\frac{\Vert \hat{u}^*\Vert }{2r}\Vert v-\hat{u}\Vert ^2 +\varphi (v)-\varphi (\hat{u})\ge 0,\quad \forall v\in K. \end{aligned}$$
(3.25)

On the other hand, by M-Lipschitz continuity of T with constant \(\delta >0\), we obtain

$$\begin{aligned} \begin{aligned} d(\hat{u}^*,T(\hat{u}))&=\inf \left\{ \Vert \hat{u}^*-\vartheta \Vert : \vartheta \in T(\hat{u}) \right\} \\&\le \Vert \hat{u}^*-u^*_{n_i}\Vert +d(u^*_{n_i},T(\hat{u}))\\&\le \Vert \hat{u}^*-u^*_{n_i}\Vert +M(T(u_{n_i}),T(\hat{u}))\\&\le \Vert \hat{u}^*-u^*_{n_i}\Vert +\delta \Vert u_{n_i}-\hat{u}\Vert . \end{aligned} \end{aligned}$$

Note that the right side of the above inequality approaches zero as \(i\rightarrow \infty \). Since \(T(\hat{u})\in CB(\mathcal {H})\) is closed, we conclude that \(\hat{u}^*\in T(\hat{u})\). Thus, (3.25) guarantees that \((\hat{u}, \hat{u}^*)\), with \(\hat{u}\in K\) and \(\hat{u}^*\in T(\hat{u})\), is a solution of \(\mathop {\mathrm{RNSVMVI}}\) (3.1). Now, inequalities (3.9)–(3.11) imply that

$$\begin{aligned} \Vert \hat{u}-u_{n+1}\Vert \le \Vert \hat{u}-u_n\Vert ,\quad \forall n\ge 0. \end{aligned}$$
(3.26)

From (3.26), the sequence \(\{\Vert \hat{u}-u_n\Vert \}\) is nonincreasing; since its subsequence \(\{\Vert \hat{u}-u_{n_i}\Vert \}\) tends to zero, the whole sequence tends to zero, that is, \(u_n\rightarrow \hat{u}\) as \(n\rightarrow \infty \). Accordingly, the sequence \(\{u_n\}\) has exactly one cluster point, \(\hat{u}\). This gives us the desired result.

4 Some comments on nonconvex generalized variational inequalities

This section is devoted to the study of the nonconvex generalized variational problem and the regularized nonconvex generalized variational inequality considered in [25]. We examine the iterative algorithm and the convergence result given in [25] and point out some errors. As a consequence of our main results in Sect. 3, we derive the correct versions of the results presented in [25].

Let \(C(\mathcal {H})\) denote the family of all nonempty compact subsets of \(\mathcal {H}\) and let K be a uniformly r-prox-regular set in \(\mathcal {H}\). For a given set-valued mapping \(T:\mathcal {H}\rightarrow C(\mathcal {H})\), Pang et al. [25] considered the problem of finding \(u\in K\) such that

$$\begin{aligned} T(u)\cap (-N(K,u))\ne \emptyset , \end{aligned}$$
(4.1)

and called it the nonconvex generalized variational problem \((\mathop {\mathrm{NGVP}})\).

If \(r=\infty \), that is, K is a convex set in \(\mathcal {H}\), then \(\mathop {\mathrm{NGVP}}\) (4.1) is equivalent to finding \(u\in K\) and \(u^*\in T(u)\) such that

$$\begin{aligned} \langle u^*,v-u\rangle \ge 0,\quad \forall v\in K, \end{aligned}$$
(4.2)

which is known as the generalized variational inequality problem, introduced and studied by Fang and Peterson [20].

Based on the following lemma, Pang et al. [25] claimed that \(\mathop {\mathrm{NGVP}}\) (4.1) is equivalent to a nonconvex variational inequality problem.

Lemma 4.1

[25, Lemma 2.2] If K is a uniformly r-prox-regular set, then the nonconvex generalized variational problem (4.1) is equivalent to finding \(u\in K\) and \(u^*\in T(u)\) such that

$$\begin{aligned} \langle u^*,v-u\rangle +\frac{1}{2r}\Vert v-u\Vert ^2\ge 0,\quad \forall v\in K. \end{aligned}$$
(4.3)

Inequality (4.3) is known as regularized nonconvex generalized variational inequality.

By a careful reading, we found that there is an error in the proof of Lemma 4.1 (that is, [25, Lemma 2.2]). Pang et al. [25] asserted that if \(u\in K\) is a solution of \(\mathop {\mathrm{NGVP}}\) (4.1), then there exists \(u^*\in T(u)\) such that

$$\begin{aligned} -u^*\in N^P(K,u)=N(K,u), \end{aligned}$$

and then in the light of Definition 2.4, they deduced that \(u\in K\) is a solution of regularized nonconvex generalized variational inequality problem (4.3).

However, the following example illustrates that a solution of \(\mathop {\mathrm{NGVP}}\) (4.1) need not be a solution of the regularized nonconvex generalized variational inequality problem (4.3), as claimed in [25].

Example 4.1

Let \(\mathcal {H}=\mathbb {R}\) and \(K=[0,\alpha ]\cup [\beta ,\gamma ]\) be the union of two disjoint intervals \([0,\alpha ]\) and \([\beta ,\gamma ]\), where \(0<\alpha<\beta <\gamma \). Then K is a uniformly r-prox-regular set in \(\mathcal {H}\) with \(r=\frac{\beta -\alpha }{2}\), and so \(\frac{1}{2r}=\frac{1}{\beta -\alpha }\). Let the operator \(T:\mathcal {H}\rightarrow C(\mathcal {H})\) be defined by

$$\begin{aligned} T(x)=\left\{ \begin{array}{ll} [0,e^x], &{}\quad x\in \mathbb {R}\backslash \{\alpha ,\beta \},\\ \{\theta ,\mu \}, &{}\quad x=\alpha ,\\ \{\xi ,\eta \}, &{}\quad x=\beta , \end{array}\right. \end{aligned}$$

where \(\theta ,\xi \in \mathbb {R}\), \(\mu <-1\) and \(\eta >1\) are arbitrary real constants. Taking \(x=\frac{\alpha +\beta }{2}\), we have \(P_K(x)=\{\alpha ,\beta \}\),

$$\begin{aligned} N_K^P(\alpha )=\left\{ t(x-\alpha ):t\ge 0\right\} =[0,+\infty ), \end{aligned}$$

and

$$\begin{aligned} N_K^P(\beta )=\left\{ t(x-\beta ):t\ge 0\right\} =(-\infty ,0]. \end{aligned}$$

Take \(u=\alpha \) and \(u^*=\mu \). Then, we have \(-u^*\in N^P_K(u)\) and

$$\begin{aligned} \begin{aligned} \langle u^*,v-u\rangle +\frac{1}{2r}\Vert v-u\Vert ^2&=\mu (v-\alpha )+\frac{1}{\beta -\alpha }(v-\alpha )^2\\&=(v-\alpha ) \left( \mu +\frac{v-\alpha }{\beta -\alpha } \right) . \end{aligned} \end{aligned}$$

If \(v=\beta \), it follows that

$$\begin{aligned} \langle u^*,v-u\rangle +\frac{1}{2r}\Vert v-u\Vert ^2=(\beta -\alpha )(\mu +1)<0. \end{aligned}$$

Taking \(u=\beta \) and \(u^*=\eta \), we have \(-u^*=-\eta \in N_K^P(u)\) and

$$\begin{aligned} \begin{aligned} \langle u^*,v-u\rangle +\frac{1}{2r}\Vert v-u\Vert ^2&=\eta (v-\beta )+\frac{1}{\beta -\alpha }(v-\beta )^2\\&=(v-\beta ) \left( \eta -\frac{\beta -v}{\beta -\alpha } \right) . \end{aligned} \end{aligned}$$

Obviously, for \(v=\alpha \), we deduce that

$$\begin{aligned} \langle u^*,v-u\rangle +\frac{1}{2r}\Vert v-u\Vert ^2=(v-\beta )(\eta -1)<0. \end{aligned}$$

Hence, the inequality

$$\begin{aligned} \langle u^*,v-u\rangle +\frac{1}{2r}\Vert v-u\Vert ^2\ge 0 \end{aligned}$$

cannot hold for all \(v\in K\). Accordingly, a solution of \(\mathop {\mathrm{NGVP}}\) (4.1) need not be a solution of the regularized nonconvex generalized variational inequality problem (4.3).
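To make Example 4.1 fully concrete, the following snippet evaluates the left-hand side of (4.3) at the two points used above; the specific values \(\alpha =1\), \(\beta =2\), \(\gamma =3\), \(\mu =-2\), \(\eta =2\) are chosen here for illustration and are not fixed in the text.

```python
# Concrete instance of Example 4.1 (illustrative parameter values):
alpha, beta, gamma = 1.0, 2.0, 3.0  # K = [0, 1] ∪ [2, 3]
mu, eta = -2.0, 2.0                 # mu < -1 and eta > 1
r = (beta - alpha) / 2.0            # so 1 / (2 r) = 1 / (beta - alpha)

def lhs(u, u_star, v):
    """Left-hand side of the regularized inequality (4.3)."""
    return u_star * (v - u) + (v - u) ** 2 / (2 * r)

print(lhs(alpha, mu, beta))   # -1.0 < 0: (4.3) fails at v = beta
print(lhs(beta, eta, alpha))  # -1.0 < 0: (4.3) fails at v = alpha
```

Both evaluations agree with the closed forms \((\beta -\alpha )(\mu +1)\) and \((\alpha -\beta )(\eta -1)\) computed above.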

The equivalence between problems (4.1) and (4.3) plays a crucial role in proposing algorithms and in establishing convergence results. Indeed, all the results in [25] have been obtained based on the equivalence between problems (4.1) and (4.3). But, unfortunately, as mentioned above, problems (4.1) and (4.3) are not equivalent.

Lemma 4.2

[24] Let X be a complete metric space, and let \(T:X\rightarrow C(X)\) be a set-valued mapping. Then for any given \(x,y\in X\), \(u\in T(x)\), there exists \(v\in T(y)\) such that

$$\begin{aligned} d(u,v)\le M(T(x),T(y)), \end{aligned}$$
(4.4)

where M(., .) is the Hausdorff metric on C(X).

Pang et al. [25, Section 3] considered the following auxiliary nonconvex variational inequality problem:

For a given \(u\in K\), find \(w\in K\) and \(w^*\in T(w)\) such that

$$\begin{aligned} \langle \rho w^*+w-u,v-w\rangle +\frac{1}{2r}\Vert v-w\Vert ^2\ge 0,\quad \forall v\in K, \end{aligned}$$
(4.5)

where \(\rho >0\) is a constant.

Pang et al. [25] claimed that if \(w=u\), then w is a solution of the generalized variational inequality problem (4.2). Based on this claim and by utilizing Lemma 4.2, they suggested the following modified predictor-corrector algorithm for solving problem (4.2).

Algorithm 4.1

(Modified Predictor-Corrector Algorithm) [25, Algorithm 3.1] For a given \(u_0\in \mathcal {H}\), compute the approximate solution \(u_{n+1}\) by the following iterative scheme:

$$\begin{aligned}&\langle \rho w^*_n+u_{n+1}-w_n,v-u_{n+1}\rangle +\frac{1}{2r}\Vert v-w_n\Vert ^2\ge 0,\quad \forall v\in K,\nonumber \\&w^*_n\in T(w_n): \Vert w^*_{n+1}-w^*_n\Vert \le M(T(w_{n+1}),T(w_n)),\end{aligned}$$
(4.6)
$$\begin{aligned}&\langle \rho y^*_n+w_n-y_n,v-w_n\rangle +\frac{1}{2r}\Vert v-y_n\Vert ^2\ge 0,\quad \forall v\in K,\nonumber \\&y^*_n\in T(y_n): \Vert y^*_{n+1}-y^*_n\Vert \le M(T(y_{n+1}),T(y_n)),\end{aligned}$$
(4.7)
$$\begin{aligned}&\langle \rho u^*_n+y_n-u_n,v-y_n\rangle +\frac{1}{2r}\Vert v-u_n\Vert ^2\ge 0,\quad \forall v\in K,\nonumber \\&u^*_n\in T(u_n): \Vert u^*_{n+1}-u^*_n\Vert \le M(T(u_{n+1}),T(u_n)), \end{aligned}$$
(4.8)

where \(\rho >0\) is a constant and \(n=0,1,2,\ldots \).

We remark that there is a small error in Algorithm 3.1 in [25]: there, \(u^*_n\in T(w_n)\) must be replaced by \(u^*_n\in T(u_n)\), as we have done in Algorithm 4.1.

It can easily be seen that if \(w=u\), then w need not be a solution of the generalized variational inequality problem (4.2), as claimed in [25]. Moreover, even if \(w=u\), w is not necessarily a solution of problem (4.3). In fact, if \(w=u\), then the auxiliary nonconvex variational inequality problem (4.5) reduces to the regularized nonconvex variational inequality problem of finding \(u\in K\) and \(u^*\in T(u)\) such that

$$\begin{aligned} \langle \rho u^*,v-u\rangle +\frac{1}{2r}\Vert v-u\Vert ^2\ge 0,\quad \forall v\in K. \end{aligned}$$
(4.9)

However, the following example shows that a solution of problem (4.9) need not be a solution of problems (4.2) and (4.3).

Example 4.2

Let \(\mathcal {H}\) and K be the same as in Example 4.1. Let the set-valued mapping \(T:\mathcal {H}\rightarrow C(\mathcal {H})\) be defined as follows:

$$\begin{aligned} T(x)=\left\{ \begin{array}{ll} [\xi ,\mu ], &{}\quad x=0,\\ \{\theta e^{sx},\varrho x^l\}, &{}\quad x\ne 0, \end{array}\right. \end{aligned}$$

where \(\xi ,\mu ,s,l\in \mathbb {R}\), \(\xi <\mu \), \(\theta <\frac{\alpha -\gamma }{(\beta -\alpha )e^{s\alpha }}\) and \(\varrho <\frac{\alpha -\gamma }{(\beta -\alpha )\alpha ^l}\) are arbitrary real constants. Let \(\rho \in \left( 0,\min \Big \{-\frac{1}{\theta e^{s\alpha }},-\frac{1}{\varrho \alpha ^l}\Big \}\right] \) be a positive real constant. Then, taking \(u=\alpha \) and \(u^*=\theta e^{s\alpha }\), for all \(v\in \mathcal {H}\), we have

$$\begin{aligned} \begin{aligned} \langle \rho u^*,v-u\rangle +\frac{1}{2r}\Vert v-u\Vert ^2&=\rho \theta e^{s\alpha }(v-\alpha )+\frac{1}{\beta -\alpha }(v-\alpha )^2\\&=(v-\alpha ) \left( \rho \theta e^{s\alpha }+\frac{1}{\beta -\alpha }(v-\alpha ) \right) . \end{aligned} \end{aligned}$$

When \(v\in [0,\alpha ]\), the fact that \(\theta<0<\rho \) implies that

$$\begin{aligned} \rho \theta e^{s\alpha }+\frac{1}{\beta -\alpha }(v-\alpha )<0, \end{aligned}$$

and so,

$$\begin{aligned} (v-\alpha ) \left( \rho \theta e^{s\alpha }+\frac{1}{\beta -\alpha }(v-\alpha ) \right) \ge 0. \end{aligned}$$

If \(v\in [\beta ,\gamma ]\), taking into consideration the facts that \(\frac{1}{\beta -\alpha }(v-\alpha )\in [1,\frac{\gamma -\alpha }{\beta -\alpha }]\) for all \(v\in [\beta ,\gamma ]\) and \(0<\rho \le -\frac{1}{\theta e^{s\alpha }}\), it follows that

$$\begin{aligned} \rho \theta e^{s\alpha }+\frac{1}{\beta -\alpha }(v-\alpha )\ge 0, \end{aligned}$$

whence we deduce that

$$\begin{aligned} (v-\alpha ) \left( \rho \theta e^{s\alpha }+\frac{1}{\beta -\alpha }(v-\alpha ) \right) \ge 0. \end{aligned}$$

If \(u^*=\varrho \alpha ^l\), then for all \(v\in \mathcal {H}\), we have

$$\begin{aligned} \begin{aligned} \langle \rho u^*,v-u\rangle +\frac{1}{2r}\Vert v-u\Vert ^2&=\rho \varrho \alpha ^l(v-\alpha )+\frac{1}{\beta -\alpha }(v-\alpha )^2\\&=(v-\alpha ) \left( \rho \varrho \alpha ^l+\frac{1}{\beta -\alpha }(v-\alpha ) \right) , \quad \forall v\in \mathcal {H}. \end{aligned} \end{aligned}$$

By an argument analogous to the previous one, taking into account that \(\varrho<0<\rho \le -\frac{1}{\varrho \alpha ^l}\), one can deduce that

$$\begin{aligned} (v-\alpha ) \left( \rho \varrho \alpha ^l+\frac{1}{\beta -\alpha }(v-\alpha ) \right) \ge 0,\quad \forall v\in K. \end{aligned}$$

Consequently,

$$\begin{aligned} \langle \rho u^*,v-u\rangle +\frac{1}{2r}\Vert v-u\Vert ^2\ge 0,\quad \forall v\in K, \end{aligned}$$

that is, inequality (4.9) holds for all \(v\in K\). However, taking into account the facts that \(\theta <\frac{\alpha -\gamma }{(\beta -\alpha )e^{s\alpha }}\) and \(\varrho <\frac{\alpha -\gamma }{(\beta -\alpha )\alpha ^l}\), it follows that

$$\begin{aligned} (v-\alpha ) \left( \theta e^{s\alpha }+\frac{1}{\beta -\alpha }(v-\alpha ) \right) <0, \quad \forall v\in [\beta ,\gamma ], \end{aligned}$$

and

$$\begin{aligned} (v-\alpha ) \left( \varrho \alpha ^l+\frac{1}{\beta -\alpha }(v-\alpha ) \right) < 0, \quad \forall v\in [\beta ,\gamma ], \end{aligned}$$

that is, for each \(u^*\in T(u)\)

$$\begin{aligned} \langle u^*,v-u\rangle +\frac{1}{2r}\Vert v-u\Vert ^2<0,\quad \forall v\in [\beta ,\gamma ]. \end{aligned}$$

Therefore, inequality (4.3) cannot hold for all \(v\in K\). Considering the fact that

$$\begin{aligned} \langle u^*,v-u\rangle \le \langle u^*,v-u\rangle +\frac{1}{2r}\Vert v-u\Vert ^2,\quad \forall v\in \mathcal {H}, \end{aligned}$$

it follows that

$$\begin{aligned} \langle u^*,v-u\rangle <0,\quad \forall v\in [\beta ,\gamma ], \end{aligned}$$

that is, inequality (4.2) cannot hold for all \(v\in K\). Hence, a solution of problem (4.9) need not be a solution of problems (4.2) and (4.3). Thus, for a given \(u\in K\), if \(w=u\) is a solution of auxiliary nonconvex variational inequality problem (4.5), then w need not be a solution of problems (4.2) and (4.3).
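
The computations in Example 4.2 can also be checked numerically. The following sketch uses illustrative constants of our own choosing (not from [25]): \(\alpha =1\), \(\beta =2\), \(\gamma =4\), so that \(K=[0,\alpha ]\cup [\beta ,\gamma ]\) and \(\frac{1}{2r}=\frac{1}{\beta -\alpha }=1\), together with \(s=0\), \(l=1\), \(\theta =\varrho =-4<\frac{\alpha -\gamma }{\beta -\alpha }=-3\) and \(\rho =\frac{1}{4}=\min \big \{-\frac{1}{\theta e^{s\alpha }},-\frac{1}{\varrho \alpha ^l}\big \}\).

```python
import numpy as np

# Illustrative constants (our choice): K = [0, alpha] U [beta, gamma], 1/(2r) = 1/(beta - alpha)
alpha, beta, gamma = 1.0, 2.0, 4.0
s, l = 0.0, 1.0
theta = varrho = -4.0   # both < (alpha - gamma)/((beta - alpha) * e^{s alpha}) = -3
rho = 0.25              # rho = min{-1/(theta e^{s alpha}), -1/(varrho alpha^l)} = 1/4

u = alpha
K = np.concatenate([np.linspace(0.0, alpha, 200), np.linspace(beta, gamma, 200)])

for u_star in (theta * np.exp(s * alpha), varrho * alpha**l):  # the two elements of T(alpha)
    # u = alpha solves the regularized problem (4.9) ...
    lhs_49 = rho * u_star * (K - u) + (K - u) ** 2 / (beta - alpha)
    assert (lhs_49 >= -1e-12).all()

    # ... but (4.3), and hence (4.2), fails on [beta, gamma] subset of K
    v = np.linspace(beta, gamma, 200)
    lhs_43 = u_star * (v - u) + (v - u) ** 2 / (beta - alpha)
    assert (lhs_43 < 0).all()
```

For both choices of \(u^*\in T(\alpha )\) the expression in (4.9) equals \((v-1)(v-2)\ge 0\) on K, while the expression in (4.3) equals \((v-1)(v-5)<0\) on \([2,4]\), in line with the example.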

To analyze the convergence of Algorithm 4.1, Pang et al. [25] used the following definition.

Definition 4.1

[25, Definition 2.4] A set-valued mapping \(T:\mathcal {H}\rightarrow C(\mathcal {H})\) is said to be partially relaxed strongly monotone if there exists a constant \(\alpha >0\) such that

$$\begin{aligned} \langle u^*_1-u^*_2,z-u_2\rangle \ge -\alpha \Vert u_1-z\Vert ^2, \quad \forall u_1,u_2,z\in \mathcal {H}, u^*_1\in T(u_1),u^*_2\in T(u_2). \end{aligned}$$

The following lemma plays a crucial role in studying the strong convergence of the iterative sequences generated by Algorithm 4.1 to a solution of problem (4.2).

Lemma 4.3

[25, Lemma 3.1] Let \(\rho \in \left( 0,\min \left\{ r,\frac{1}{2r} \right\} \right) \), \(u\in K\) be the exact solution of (4.2), and let \(u_n\) be the approximate solution obtained by Algorithm 4.1. If the set-valued mapping \(T:\mathcal {H}\rightarrow C(\mathcal {H})\) is partially relaxed strongly monotone with constant \(0\le \alpha \le 1\), then

$$\begin{aligned} c_1\Vert u_{n+1}-u\Vert ^2\le \Vert u-u_n\Vert ^2-c_2\Vert u_{n+1}-u_n\Vert ^2, \end{aligned}$$
(4.10)

where \(c_1\in (0,1)\), \(c_2=(1-\alpha \rho )(1-\frac{\rho }{r})^{-1}>0\).

We remark that the following inequality (inequality (3.5) in [25, Lemma 3.1])

$$\begin{aligned} c_1\Vert u_{n+1}-u\Vert ^2\le \Vert u-u_n\Vert ^2-c_2\Vert u_{n+1}-w_n\Vert ^2 \end{aligned}$$

must be replaced by

$$\begin{aligned} c_1\Vert u_{n+1}-u\Vert ^2\le \Vert u-u_n\Vert ^2-c_2\Vert u_{n+1}-u_n\Vert ^2, \end{aligned}$$

as we have done in Lemma 4.3.

By a careful reading of the proof of Lemma 4.3 (that is, [25, Lemma 3.1]), we have the following observations.

In fact, by assuming that \((u, u^*)\) with \(u\in K\) and \(u^*\in T(u)\) is a solution of (4.2), Pang et al. [25] deduced relations (3.9), (3.12) and (3.13) in [25] by using relations (4.6)–(4.8) and with the help of the partially relaxed strong monotonicity of the operator T as follows:

$$\begin{aligned} \left( \frac{1}{2}-\frac{\rho }{2r} \right) \Vert u_{n+1}-u\Vert ^2\le & {} \left( \frac{1}{2}+\frac{1}{2r} \right) \Vert u-w_n\Vert ^2-\left( \frac{1}{2}-\alpha \rho \right) \Vert u_{n+1}-w_n\Vert ^2, \end{aligned}$$
(4.11)
$$\begin{aligned} \left( \frac{1}{2}-\frac{\rho }{2r} \right) \Vert w_n-u\Vert ^2\le & {} \left( \frac{1}{2}+\frac{1}{2r} \right) \Vert u-y_n\Vert ^2 \end{aligned}$$
(4.12)

and

$$\begin{aligned} \left( \frac{1}{2}-\frac{\rho }{2r} \right) \Vert y_n-u\Vert ^2 \le \left( \frac{1}{2}+\frac{1}{2r} \right) \Vert u-u_n\Vert ^2. \end{aligned}$$
(4.13)

To obtain an estimate of \(\Vert u_{n+1}-w_n\Vert \), they derived relation (3.14) in [25] as follows:

$$\begin{aligned} \Vert u_{n+1}-w_n\Vert ^2=\Vert u_{n+1}-u_n\Vert ^2+\Vert u_n-w_n\Vert ^2+2\langle u_{n+1}-u_n,u_n-w_n\rangle . \end{aligned}$$
(4.14)

By combining relations (4.11)–(4.14), they asserted that relation (4.10) holds. In fact, they deduced the inequality

$$\begin{aligned} \Vert u_{n+1}-u_n\Vert \le \Vert u_{n+1}-w_n\Vert , \end{aligned}$$
(4.15)

by using relation (4.14). By combining (4.11)–(4.13) and (4.15), they deduced relation (4.10). However, relation (4.14) does not imply inequality (4.15). Indeed, applying inequalities (4.12) and (4.13), it follows that

$$\begin{aligned} \Vert u_{n+1}-w_n\Vert&\le \Vert u_{n+1}-u_n\Vert +\Vert u_n-u\Vert +\Vert w_n-u\Vert \nonumber \\&\le \Vert u_{n+1}-u_n\Vert +\Vert u_n-u\Vert + \left( 1+\frac{1}{r} \right) \left( 1-\frac{\rho }{r} \right) ^{-1}\Vert y_n-u\Vert \nonumber \\&\le \Vert u_{n+1}-u_n\Vert +\Vert u_n-u\Vert + \left( 1+\frac{1}{r} \right) ^2 \left( 1-\frac{\rho }{r} \right) ^{-2}\Vert u_n-u\Vert \nonumber \\&=\Vert u_{n+1}-u_n\Vert + \left[ 1+ \left( 1+\frac{1}{r} \right) ^2 \left( 1-\frac{\rho }{r} \right) ^{-2} \right] \Vert u_n-u\Vert . \end{aligned}$$
(4.16)

By using (4.16) and in the light of the fact that \((a+b)^2\le 2(a^2+b^2)\) for all \(a,b\in \mathbb {R}\), we get

$$\begin{aligned} \Vert u_{n+1}-w_n\Vert ^2&\le \left( \Vert u_{n+1}-u_n\Vert + \left[ 1+ \left( 1+\frac{1}{r} \right) ^2 \left( 1-\frac{\rho }{r} \right) ^{-2} \right] \Vert u_n-u\Vert \right) ^2\nonumber \\&\le 2 \left( \Vert u_{n+1}-u_n\Vert ^2+ \left[ 1+ \left( 1+\frac{1}{r} \right) ^2 \left( 1-\frac{\rho }{r} \right) ^{-2} \right] ^2\Vert u_n-u\Vert ^2\right) .\nonumber \\ \end{aligned}$$
(4.17)

In view of the assumptions of Lemma 4.3, relation (4.17) provides an estimate of \(\Vert u_{n+1}-w_n\Vert \), but not inequality (4.15). In the light of the above argument, relations (4.11)–(4.13) and (4.17) do not suffice to derive relation (4.10).
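
That identity (4.14) alone cannot yield inequality (4.15) is already visible in \(\mathbb {R}\): the expansion holds for any three points, while the claimed inequality fails. A minimal numeric check, with three points of our own illustrative choosing:

```python
# In R, take u_{n+1} = 0, u_n = 2, w_n = 1 (our illustrative choice).
u_next, u_n, w_n = 0.0, 2.0, 1.0

# Identity (4.14) holds for any three points:
lhs = (u_next - w_n) ** 2
rhs = (u_next - u_n) ** 2 + (u_n - w_n) ** 2 + 2 * (u_next - u_n) * (u_n - w_n)
assert abs(lhs - rhs) < 1e-12

# ... yet inequality (4.15) fails: ||u_{n+1} - u_n|| = 2 > 1 = ||u_{n+1} - w_n||
assert abs(u_next - u_n) > abs(u_next - w_n)
```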

In the following result, Pang et al. [25] claimed that the sequence \(\{u_n\}\) generated by Algorithm 4.1 (that is, [25, Algorithm 3.1]) converges strongly to a solution of problem (4.2).

Theorem 4.1

[25, Theorem 3.1] Assume that the assumptions of Lemma 4.3 hold. Let \(\mathcal {H}\) be a finite dimensional Hilbert space and \(T:\mathcal {H}\rightarrow C(\mathcal {H})\) be an M-Lipschitz continuous set-valued mapping. Then the sequence \(\{u_n\}\) generated by Algorithm 4.1 converges strongly to a solution u of problem (4.2).

Now we analyze the proof of Theorem 4.1 (that is, [25, Theorem 3.1]).

Lemma 4.3 played a crucial role in the proof of Theorem 4.1. However, as we have pointed out, the statement of Lemma 4.3 is not valid in general. Even setting this fact aside, a careful reading reveals two errors in the proof of Theorem 4.1. Firstly, Pang et al. [25] claimed that by using inequality (4.10), one can deduce the following inequality:

$$\begin{aligned} \sum \limits _{n=0}^{\infty }c_2\Vert u_n-u_{n+1}\Vert ^2\le \Vert u_0-u\Vert ^2, \end{aligned}$$
(4.18)

which implies that

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\Vert u_n-u_{n+1}\Vert =0. \end{aligned}$$
(4.19)

However, by utilizing inequality (4.10), one can obtain the following inequality:

$$\begin{aligned} \sum \limits _{n=0}^{\infty }c_2\Vert u_n-u_{n+1}\Vert ^2\le \sum \limits _{n=0}^{\infty }\Vert u-u_n\Vert ^2 -\sum \limits _{n=0}^{\infty }c_1\Vert u_{n+1}-u\Vert ^2, \end{aligned}$$
(4.20)

but not inequality (4.18). Obviously, inequality (4.20) does not imply relation (4.19).
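
Indeed, when \(c_1<1\), inequality (4.10) alone does not force \(\Vert u_{n+1}-u_n\Vert \rightarrow 0\): in \(\mathbb {R}\), the alternating sequence \(u_n=(-1)^n\) with \(u=0\) satisfies \(c_1\Vert u_{n+1}-u\Vert ^2=c_1\le 1-4c_2=\Vert u-u_n\Vert ^2-c_2\Vert u_{n+1}-u_n\Vert ^2\) whenever \(c_1+4c_2\le 1\), while \(\Vert u_{n+1}-u_n\Vert =2\) for every n. A quick check with the illustrative values \(c_1=0.5\), \(c_2=0.1\) (our choice, used only to exhibit the logical gap):

```python
c1, c2 = 0.5, 0.1   # illustrative constants with c1 in (0,1) and c1 + 4*c2 <= 1
u = 0.0
u_seq = [(-1.0) ** n for n in range(50)]   # u_n = (-1)^n, so ||u_n - u|| = 1 for all n

for n in range(len(u_seq) - 1):
    # inequality (4.10) holds at every step ...
    assert c1 * (u_seq[n + 1] - u) ** 2 <= (u - u_seq[n]) ** 2 - c2 * (u_seq[n + 1] - u_seq[n]) ** 2

# ... but the successive differences do not tend to zero:
assert all(abs(u_seq[n + 1] - u_seq[n]) == 2.0 for n in range(len(u_seq) - 1))
```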

Secondly, on page 324, line 8 from the bottom of [25], Pang et al. claimed that, with the help of the M-Lipschitz continuity of the operator T with constant \(\delta \), one can deduce the following inequality:

$$\begin{aligned} \Vert u^*_n-u^*\Vert \le M(T(u_n),T(u))\le \delta \Vert u_n-u\Vert . \end{aligned}$$
(4.21)

Unfortunately, there is an error in relation (4.21): in deriving it, Pang et al. used the fact that \(u^*\in T(u)\) before proving it. Even setting this fact aside, the following example illustrates that for given \(x,y\in \mathcal {H}\), \(u\in T(x)\) and \(v\in T(y)\), inequality (4.4) need not be true.

Example 4.3

Let \(X=l^{\infty }\) be the real Banach space consisting of all bounded real sequences \(x= \{ x_n \}_{n=1}^{\infty }\) with the supremum norm \(\Vert x\Vert _{\infty }=\sup \nolimits _n|x_n|\) and let the set-valued mapping \(T:l^{\infty }\rightarrow CB(l^{\infty })\) be defined by

$$\begin{aligned} T(x)=\left\{ \begin{array}{ll} \left\{ \left( \frac{\alpha }{\root p \of {n}}\right) _{n=1}^{\infty },\left( \frac{\beta }{\root p \of {n}}\right) _{n=1}^{\infty } \right\} , &{}\quad x= \mathbf{0},\\ \left\{ \left( \frac{\gamma }{\root p \of {n}}\right) _{n=1}^{\infty },\left( \frac{\delta }{\root p \of {n}}\right) _{n=1}^{\infty } \right\} , &{}\quad x\ne \mathbf{0}, \end{array}\right. \end{aligned}$$

for all \(x=(x_n)_{n=1}^{\infty }\in l^{\infty }\), where \(\mathbf{0}\) is the zero vector of the space \(l^{\infty }\), \(p\in \mathbb {N}\backslash \{1\}\) is an arbitrary but fixed natural number, and \(\alpha \), \(\beta \), \(\gamma \) and \(\delta \) are arbitrary real constants such that \(0<\alpha<\beta<\gamma <\delta \) and \(\beta +\gamma >\alpha +\delta \). Let \(x = \mathbf{0}\), \(\mathbf{0} \ne y=(y_n)\in l^{\infty }\) be an arbitrary but fixed nonzero element of \(l^{\infty }\), \(u= \left( \frac{\alpha }{\root p \of {n}} \right) _{n=1}^{\infty }\) and \(v= \left( \frac{\delta }{\root p \of {n}} \right) _{n=1}^{\infty }\). If \(a= \left( \frac{\alpha }{\root p \of {n}} \right) _{n=1}^{\infty }\), then from the fact that \(0<\alpha<\gamma <\delta \), it follows that

$$\begin{aligned} \begin{aligned} d(a,T(y))&=\inf \left\{ d\Bigg ( \left( \frac{\alpha }{\root p \of {n}} \right) _{n=1}^{\infty }, \left( \frac{\gamma }{\root p \of {n}} \right) _{n=1}^{\infty }\Bigg ), d\Bigg ( \left( \frac{\alpha }{\root p \of {n}} \right) _{n=1}^{\infty }, \left( \frac{\delta }{\root p \of {n}} \right) _{n=1}^{\infty }\Bigg )\right\} \\&=\inf \left\{ \Bigg \Vert \left( \frac{\alpha }{\root p \of {n}} \right) _{n=1}^{\infty } - \left( \frac{\gamma }{\root p \of {n}} \right) _{n=1}^{\infty }\Bigg \Vert _{\infty }, \Bigg \Vert \left( \frac{\alpha }{\root p \of {n}} \right) _{n=1}^{\infty } - \left( \frac{\delta }{\root p \of {n}} \right) _{n=1}^{\infty }\Bigg \Vert _{\infty }\right\} \\&=\inf \left\{ \sup \limits _n\Bigg |\frac{\alpha }{\root p \of {n}}-\frac{\gamma }{\root p \of {n}}\Bigg |,\sup \limits _n\Bigg | \frac{\alpha }{\root p \of {n}}-\frac{\delta }{\root p \of {n}}\Bigg |\right\} \\ {}&=\inf \left\{ \sup \limits _n\Bigg |\frac{\alpha -\gamma }{\root p \of {n}}\Bigg |,\sup \limits _n\Bigg | \frac{\alpha -\delta }{\root p \of {n}}\Bigg |\right\} \\ {}&=\inf \Bigg \{\gamma -\alpha ,\delta -\alpha \Bigg \}\\ {}&=\gamma -\alpha . \end{aligned} \end{aligned}$$

For the case when \(a= \left( \frac{\beta }{\root p \of {n}} \right) _{n=1}^{\infty }\), from the fact that \(0<\beta<\gamma <\delta \), we obtain

$$\begin{aligned} \begin{aligned} d(a,T(y))&=\inf \left\{ d\Bigg ( \left( \frac{\beta }{\root p \of {n}} \right) _{n=1}^{\infty }, \left( \frac{\gamma }{\root p \of {n}} \right) _{n=1}^{\infty }\Bigg ), d\Bigg ( \left( \frac{\beta }{\root p \of {n}} \right) _{n=1}^{\infty }, \left( \frac{\delta }{\root p \of {n}} \right) _{n=1}^{\infty }\Bigg )\right\} \\&=\inf \left\{ \Bigg \Vert \left( \frac{\beta }{\root p \of {n}} \right) _{n=1}^{\infty } - \left( \frac{\gamma }{\root p \of {n}} \right) _{n=1}^{\infty }\Bigg \Vert _{\infty }, \Bigg \Vert \left( \frac{\beta }{\root p \of {n}} \right) _{n=1}^{\infty } - \left( \frac{\delta }{\root p \of {n}} \right) _{n=1}^{\infty }\Bigg \Vert _{\infty }\right\} \\&=\inf \left\{ \sup \limits _n\Bigg |\frac{\beta }{\root p \of {n}}-\frac{\gamma }{\root p \of {n}}\Bigg |,\sup \limits _n\Bigg | \frac{\beta }{\root p \of {n}}-\frac{\delta }{\root p \of {n}}\Bigg |\right\} \\ {}&=\inf \left\{ \sup \limits _n\Bigg |\frac{\beta -\gamma }{\root p \of {n}}\Bigg |,\sup \limits _n\Bigg | \frac{\beta -\delta }{\root p \of {n}}\Bigg |\right\} \\ {}&=\inf \Bigg \{\gamma -\beta ,\delta -\beta \Bigg \}\\ {}&=\gamma -\beta . \end{aligned} \end{aligned}$$

Since \(\alpha <\beta \), we have

$$\begin{aligned} \sup \limits _{a\in T(x)}d(a,T(y))=\max \left\{ \gamma -\alpha , \gamma -\beta \right\} = \gamma -\alpha . \end{aligned}$$

Taking \(b=(\frac{\gamma }{\root p \of {n}})_{n=1}^{\infty }\) and in virtue of the fact that \(0<\alpha<\beta <\gamma \), it follows that

$$\begin{aligned} \begin{aligned} d(T(x),b)&=\inf \left\{ d\Bigg (\left( \frac{\alpha }{\root p \of {n}}\right) _{n=1}^{\infty },(\frac{\gamma }{\root p \of {n}})_{n=1}^{\infty }\Bigg ), d\Bigg (\left( \frac{\beta }{\root p \of {n}}\right) _{n=1}^{\infty },\Bigg (\frac{\gamma }{\root p \of {n}}\Bigg )_{n=1}^{\infty }\Bigg )\right\} \\ {}&=\inf \left\{ \Bigg \Vert \Bigg (\frac{\alpha }{\root p \of {n}}\Bigg )_{n=1}^{\infty }-\Bigg (\frac{\gamma }{\root p \of {n}}\Bigg )_{n=1}^{\infty }\Bigg \Vert _{\infty }, \Bigg \Vert \Bigg (\frac{\beta }{\root p \of {n}}\Bigg )_{n=1}^{\infty }-\Bigg (\frac{\gamma }{\root p \of {n}}\Bigg )_{n=1}^{\infty }\Bigg \Vert _{\infty }\right\} \\ {}&=\inf \left\{ \sup \limits _n\Bigg |\frac{\alpha }{\root p \of {n}}-\frac{\gamma }{\root p \of {n}}\Bigg |,\sup \limits _n\Bigg | \frac{\beta }{\root p \of {n}}-\frac{\gamma }{\root p \of {n}}\Bigg |\right\} \\ {}&=\inf \left\{ \sup \limits _n\Bigg |\frac{\alpha -\gamma }{\root p \of {n}}\Bigg |,\sup \limits _n\Bigg | \frac{\beta -\gamma }{\root p \of {n}}\Bigg |\right\} \\ {}&=\inf \Bigg \{\gamma -\alpha ,\gamma -\beta \Bigg \}\\ {}&=\gamma -\beta . \end{aligned} \end{aligned}$$

If \(b= \left( \frac{\delta }{\root p \of {n}} \right) _{n=1}^{\infty }\), in view of the fact that \(0<\alpha<\beta <\delta \), we have

$$\begin{aligned} \begin{aligned} d(T(x),b)&=\inf \left\{ d\Bigg (\Bigg (\frac{\alpha }{\root p \of {n}}\Bigg )_{n=1}^{\infty },\Bigg (\frac{\delta }{\root p \of {n}}\Bigg )_{n=1}^{\infty }\Bigg ), d\Bigg (\Bigg (\frac{\beta }{\root p \of {n}}\Bigg )_{n=1}^{\infty },\Bigg (\frac{\delta }{\root p \of {n}}\Bigg )_{n=1}^{\infty }\Bigg )\right\} \\&=\inf \left\{ \Bigg \Vert \Bigg (\frac{\alpha }{\root p \of {n}}\Bigg )_{n=1}^{\infty } -\Bigg (\frac{\delta }{\root p \of {n}}\Bigg )_{n=1}^{\infty }\Bigg \Vert _{\infty }, \Bigg \Vert \Bigg (\frac{\beta }{\root p \of {n}}\Bigg )_{n=1}^{\infty }-\Bigg (\frac{\delta }{\root p \of {n}}\Bigg )_{n=1}^{\infty }\Bigg \Vert _{\infty }\right\} \\&=\inf \left\{ \sup \limits _n\Bigg |\frac{\alpha }{\root p \of {n}}-\frac{\delta }{\root p \of {n}}\Bigg |,\sup \limits _n\Bigg | \frac{\beta }{\root p \of {n}}-\frac{\delta }{\root p \of {n}}\Bigg |\right\} \\&=\inf \left\{ \sup \limits _n\Bigg |\frac{\alpha -\delta }{\root p \of {n}}\Bigg |,\sup \limits _n\Bigg | \frac{\beta -\delta }{\root p \of {n}}\Bigg |\right\} \\&=\inf \Bigg \{\delta -\alpha ,\delta -\beta \Bigg \}\\&=\delta -\beta . \end{aligned} \end{aligned}$$

Since \(\gamma <\delta \), it follows that

$$\begin{aligned} \sup \limits _{b\in T(y)}d(T(x),b)=\max \left\{ \gamma -\beta ,\delta -\beta \right\} =\delta -\beta . \end{aligned}$$

Taking into consideration the fact that \(\beta +\gamma >\alpha +\delta \), that is, \(\gamma -\alpha >\delta -\beta \), we deduce that

$$\begin{aligned} \begin{aligned} M(T(x),T(y))&=\max \left\{ \sup \limits _{a\in T(x)}d(a,T(y)), \sup \limits _{b\in T(y)}d(T(x),b) \right\} \\&=\max \left\{ \gamma -\alpha ,\delta -\beta \right\} =\gamma -\alpha . \end{aligned} \end{aligned}$$

Finally, from \(0<\alpha<\gamma <\delta \), it follows that

$$\begin{aligned} \begin{aligned} d(u,v)&=d\Bigg (\Bigg (\frac{\alpha }{\root p \of {n}}\Bigg )_{n=1}^{\infty }, \Bigg (\frac{\delta }{\root p \of {n}}\Bigg )_{n=1}^{\infty }\Bigg )= \Bigg \Vert \Bigg (\frac{\alpha }{\root p \of {n}}\Bigg )_{n=1}^{\infty }-\Bigg (\frac{\delta }{\root p \of {n}}\Bigg )_{n=1}^{\infty }\Bigg \Vert _{\infty }\\&=\sup \limits _n\Bigg |\frac{\alpha -\delta }{\root p \of {n}}\Bigg |= \delta -\alpha >\gamma -\alpha =M(T(x),T(y)). \end{aligned} \end{aligned}$$

Hence \(d(u,v)>M(T(x),T(y))\), that is, inequality (4.4) does not hold for this choice of \(u\in T(x)\) and \(v\in T(y)\).

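Since every distance in Example 4.3 reduces to a difference of the leading constants (the supremum over n is attained at \(n=1\)), the computation can be verified with a finite check. The sketch below uses the illustrative values \(\alpha =1\), \(\beta =3\), \(\gamma =4\), \(\delta =5\) (our choice; note \(\beta +\gamma =7>6=\alpha +\delta \)):

```python
# Elements of T(x) and T(y) are determined by their leading constants, since
# sup_n |c| / n**(1/p) = |c| (attained at n = 1) in the sup norm of l^infty.
alpha, beta, gamma, delta = 1.0, 3.0, 4.0, 5.0   # illustrative; beta + gamma > alpha + delta
Tx, Ty = {alpha, beta}, {gamma, delta}

def hausdorff(A, B):
    """Hausdorff distance between two finite subsets of R."""
    d = lambda a, S: min(abs(a - s) for s in S)
    return max(max(d(a, B) for a in A), max(d(b, A) for b in B))

M = hausdorff(Tx, Ty)
assert M == gamma - alpha == 3.0        # M(T(x), T(y)) = gamma - alpha

u, v = alpha, delta                      # u in T(x), v in T(y)
assert abs(u - v) == delta - alpha == 4.0 > M   # d(u, v) > M(T(x), T(y)): (4.4) fails
```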
As pointed out above, \(\mathop {\mathrm{NGVP}}\) (4.1) and problem (4.3) are not necessarily equivalent. In the next lemma, which is the corrected version of Lemma 4.1, the equivalence between \(\mathop {\mathrm{NGVP}}\) (4.1) and a nonconvex variational inequality problem is established.

Lemma 4.4

If K is a uniformly r-prox-regular set in \(\mathcal {H}\) and \(T:K\rightarrow C(\mathcal {H})\) is a set-valued mapping, then \(\mathop {\mathrm{NGVP}}\) (4.1) is equivalent to the problem of finding \(u\in K\) and \(u^*\in T(u)\) such that

$$\begin{aligned} \langle u^*,v-u\rangle +\frac{\Vert u^*\Vert }{2r}\Vert v-u\Vert ^2\ge 0,\quad \forall v\in K. \end{aligned}$$
(4.22)

Proof

Let \(u\in K\) and \(u^*\in T(u)\) be a solution of problem (4.22). If \(u^*=0\), then \(0\in u^*+N_K^P(u)\), because the zero vector always belongs to any normal cone; consequently, \(u^*\in -N_K^P(u)\). If \(u^*\ne 0\), then we have

$$\begin{aligned} \langle -u^*,v-u\rangle \le \frac{\Vert u^*\Vert }{2r}\Vert v-u\Vert ^2,\quad \forall v\in K. \end{aligned}$$

Invoking Lemma 2.1, it follows that \(-u^*\in N_K^P(u)\) which implies that \(u^*\in -N_K^P(u)\). Hence, \(u^*\in T(u)\cap (-N_K^P(u))\), that is, \(u\in K\) is a solution of \(\mathop {\mathrm{NGVP}}\) (4.1). Conversely, if \(u\in K\) is a solution of \(\mathop {\mathrm{NGVP}}\) (4.1), then we deduce that there exists \(u^*\in T(u)\) such that \(-u^*\in N_K^P(u)\). Now, Definition 2.4 guarantees that \((u,u^*)\) with \(u\in K\) and \(u^*\in T(u)\) is a solution of problem (4.22).

Problem (4.22) is called the regularized nonconvex set-valued variational inequality \((\mathop {\mathrm{RNMVI}})\) associated with \(\mathop {\mathrm{NGVP}}\) (4.1). In the sequel, we denote by \(\mathop {\mathrm{RNMVI}}(T,K)\) the set of solutions of \(\mathop {\mathrm{RNMVI}}\) (4.22).

Let \(T:K\rightarrow C(\mathcal {H})\) be a set-valued mapping. For given \(u\in K\) and \(u^*\in T(u)\), we consider the following auxiliary regularized nonconvex set-valued variational inequality problem of finding \(w\in K\) and \(w^*\in T(w)\) such that

$$\begin{aligned} \langle \rho w^*+w-u,v-w\rangle +\frac{\rho \Vert w^*\Vert }{2r}\Vert v-w\Vert ^2\ge 0,\qquad \forall v\in K, \end{aligned}$$

where \(\rho >0\) is a constant. If \(w=u\), then obviously \((w,w^*)\) is a solution of \(\mathop {\mathrm{RNMVI}}\) (4.22). By using this observation and Nadler’s technique [24], we are able to propose a three-step predictor-corrector method for solving \(\mathop {\mathrm{NGVP}}\) (4.1) as follows.

Algorithm 4.2

Let \(T:K\rightarrow C(\mathcal {H})\) be a set-valued mapping. For given \(u_0,y_0,w_0\in K\), \(u^*_0\in T(u_0)\), \(y^*_0\in T(y_0)\) and \(w^*_0\in T(w_0)\), define the iterative sequences \(\{u_n\}\), \(\{u^*_n\}\), \(\{y_n\}\), \(\{y^*_n\}\), \(\{w_n\}\) and \(\{w^*_n\}\) by the following iterative schemes:

$$\begin{aligned} \begin{aligned}&\langle \rho w^*_n+u_{n+1}-w_n,v-u_{n+1}\rangle +\frac{\rho \Vert w^*_n\Vert }{2r}\Vert v-u_{n+1}\Vert ^2\ge 0, \quad \forall v\in K,\\&\langle \rho y^*_n+w_n-y_n,v-w_n\rangle +\frac{\rho \Vert y^*_n\Vert }{2r}\Vert v-w_n\Vert ^2\ge 0,\quad \forall v\in K,\\&\langle \rho u^*_n+y_n-u_n,v-y_n\rangle +\frac{\rho \Vert u^*_n\Vert }{2r}\Vert v-y_n\Vert ^2\ge 0,\quad \forall v\in K,\\&w^*_n\in T(w_n): \Vert w^*_{n+1}-w^*_n\Vert \le M(T(w_{n+1}),T(w_n)),\\&y^*_n\in T(y_n): \Vert y^*_{n+1}-y^*_n\Vert \le M(T(y_{n+1}),T(y_n)),\\&u^*_n\in T(u_n): \Vert u^*_{n+1}-u^*_n\Vert \le M(T(u_{n+1}),T(u_n)), \end{aligned} \end{aligned}$$

where \(\rho >0\) is a constant and \(n=0,1,2,\ldots \).

Now we present the correct version of Lemma 4.3 and Theorem 4.1, respectively.

Lemma 4.5

Let \(T:K\rightarrow C(\mathcal {H})\) be a set-valued operator and \((u,u^*)\) with \(u\in K\) and \(u^*\in T(u)\) be a solution of \(\mathop {\mathrm{RNMVI}}\) (4.22). Assume that \(\{u_n\}\), \(\{w_n\}\), \(\{y_n\}\), \(\{u^*_n\}\), \(\{w^*_n\}\) and \(\{y^*_n\}\) are the sequences generated by Algorithm 4.2 such that the sequences \(\{u^*_n\}\), \(\{w^*_n\}\) and \(\{y^*_n\}\) are bounded. If the operator T is partially \((\alpha ,\beta )\)-mixed relaxed and strongly monotone of type (I) with \(\beta =\frac{1}{2r}\left( \Vert u^*\Vert +\sup \Big \{\Vert u^*_n\Vert ,\Vert w^*_n\Vert ,\Vert y^*_n\Vert :n\ge 0\Big \}\right) \), then inequalities (3.9)–(3.11) hold for all \(n\ge 0\).

Proof

It follows from Proposition 3.1 by taking \(\varphi \equiv 0\).

Theorem 4.2

Let \(\mathcal {H}\) be a finite dimensional real Hilbert space and \(T:K\rightarrow C(\mathcal {H})\) be M-Lipschitz continuous with constant \(\delta >0\). Suppose that all the conditions of Lemma 4.5 hold and \(\mathop {\mathrm{RNMVI}}(T,K)\ne \emptyset \). If \(\rho \in (0,\frac{1}{2\alpha })\), then the iterative sequences \(\{u_n\}\) and \(\{u^*_n\}\) generated by Algorithm 4.2 converge strongly to \(\hat{u}\in K\) and \(\hat{u}^*\in T(\hat{u})\), respectively, and \((\hat{u},\hat{u}^*)\) is a solution of \(\mathop {\mathrm{RNMVI}}\) (4.22).

Proof

Taking \(\varphi \equiv 0\), the desired result follows from Theorem 3.1 immediately.