1 Introduction

The theory of variational inequalities has wide applications in many fields including, for example, mechanics and physics, optimization and control, linear and nonlinear programming, economics and finance, and engineering sciences; see, for instance, [1–3]. In recent years, one of the most important research directions in variational inequalities has been the study of gap functions or merit functions. As we know, gap functions can transform a variational inequality into an equivalent optimization problem; see [4–16]. Then, powerful optimization solution methods and algorithms can be exploited for finding a solution of a variational inequality. On the other hand, gap functions are very useful in developing error bounds, which provide an upper estimate of the distance from an arbitrary feasible point to the solution set of a variational inequality. Error bounds have played an important role in the sensitivity analysis of optimization problems and the convergence analysis of iterative algorithms; see [3, 13–17].

In the last few years, many important concepts and methods of optimization have been extended from linear spaces to Riemannian manifolds (in particular, Hadamard manifolds); see [18–35] and the references therein. On the other hand, numerous problems in applied fields can be transformed into variational inequalities or boundary value problems on Riemannian manifolds. Therefore, the extension of concepts, techniques and methods of the theory of variational inequalities and related topics from linear spaces to Riemannian manifolds is natural, but this work is non-trivial. Actually, in recent years, many researchers have devoted great efforts to this topic; see [22, 36–41]. In particular, Németh [22] and Li et al. [36] studied the existence of solutions for variational inequalities on Hadamard manifolds and Riemannian manifolds, respectively. Tang and Huang [37] proposed Korpelevich’s method, and Tang et al. [38] extended the proximal point algorithm, for solving variational inequalities on Hadamard manifolds.

On the other hand, gap functions have become a successful tool for solving variational inequalities, and many authors have studied gap functions and global error bounds for variational inequalities in linear spaces; for example, Fan et al. [15] investigated gap functions and error bounds for set-valued variational inequalities in \(\mathbb {R}^{n}\), and Tang et al. [16] generalized the results of [15] to the case of generalized mixed variational inequalities. It is noteworthy that most of the known developments in gap functions and error bounds for generalized mixed variational inequalities deeply exploit the usual convexity of the minimization problems, together with the monotonicity of the operators; see, for instance, [14–16]. As we know, the generalization of optimization problems from Euclidean spaces to Riemannian manifolds has some important advantages; for example, non-convex minimization problems can be reduced to convex problems on Riemannian manifolds, and non-monotone vector fields can be transformed into monotone ones by choosing an appropriate Riemannian metric; see [19, 26, 27, 35]. On the other hand, a generalized mixed variational inequality can be reformulated as an optimization problem in linear spaces, and there are some useful algorithms for solving optimization problems on Riemannian manifolds and Hadamard manifolds, for example, Newton’s method [24], the proximal point algorithm [23, 28, 29], and the subgradient algorithm [31, 32]. All this has motivated us to study gap functions and error bounds for generalized mixed variational inequalities on Hadamard manifolds. As we know, in general, a manifold does not have a linear structure; in this setting, the linear space is replaced by a Riemannian manifold, and the line segment is replaced by a geodesic [19, 34], so the extension of concepts and techniques from linear spaces to Hadamard manifolds is non-trivial. 
In this paper, we show how tools of Riemannian geometry, more specifically convex analysis on Riemannian manifolds, can be used to develop gap functions and error bounds for generalized mixed variational inequalities that are non-convex and non-monotone when viewed in Euclidean spaces. Compared with the corresponding results in linear spaces, the problems considered here on Hadamard manifolds are much more complicated.

In this paper, we establish some new gap functions for the generalized mixed variational inequality on Hadamard manifolds. We also derive error bounds, which are based on these gap functions under certain assumptions on Hadamard manifolds. As we know, error bounds are closely related to weak sharp minima and linear regularity. Some characterizations of weak sharp minima for convex optimization problems on Riemannian manifolds have been studied by Li et al. in [42]. However, to the best of our knowledge, this paper is the first work concerning the notion of the gap function and the error bound for the generalized mixed variational inequality on Hadamard manifolds. We would like to point out that the results presented in this paper generalize the corresponding known results in [6, 14–16] from linear spaces to Hadamard manifolds.

The paper is planned as follows. In Sect. 2, we recall some basic definitions and introduce generalized mixed variational inequalities on Hadamard manifolds. In Sect. 3, we introduce some gap functions for generalized mixed variational inequalities on Hadamard manifolds. In Sect. 4, we derive global error bounds for generalized mixed variational inequalities on Hadamard manifolds.

2 Preliminaries

In this section, we introduce some fundamental properties and notations used throughout this paper. These basic facts can be found in any introductory book on Riemannian geometry, for example, in [43–45].

Let M be a simply connected m-dimensional manifold and \(x\in M\). The tangent space of M at x is denoted by \(T_{x}M\), and the tangent bundle of M by \(TM=\bigcup _{x\in M}T_{x}M\), which is naturally a manifold. We always assume that M is endowed with a Riemannian metric to become a Riemannian manifold. We denote by \(\langle \cdot , \cdot \rangle _{x}\) the scalar product on \(T_{x}M\) with the associated norm \(\Vert \cdot \Vert _{x}\), where the subscript x is sometimes omitted. Given a piecewise smooth curve \(\gamma : [a, b]\rightarrow M\) joining x to y, that is, \(\gamma (a)=x\) and \(\gamma (b)=y\), we can define the length of \(\gamma \) by \(l(\gamma )=\int _{a}^{b}\Vert \gamma '(t)\Vert dt\). Then, for any \(x, y\in M\), the Riemannian distance d(x, y), which induces the original topology on M, is defined by minimizing this length over the set of all such curves joining x to y.

Let \(\nabla \) be the Levi-Civita connection associated with the Riemannian metric. If \(\gamma \) is a curve joining x to y in M, then for each \(t\in [a, b]\), \(\nabla \) induces an isometry \(L_{\gamma , x,\gamma (t)}: T_{x}M\rightarrow T_{\gamma (t)}M\), the so-called parallel transport along \(\gamma \) from x to \(\gamma (t)\). When the reference to a curve joining x to y is not necessary, we use the notation \(L_{x,y}\). We say that \(\gamma \) is a geodesic when \(\nabla _{\gamma '}\gamma '=0\); if, in addition, \(\Vert \gamma '\Vert =1\), then \(\gamma \) is said to be normalized. A geodesic joining x to y in M is said to be minimal iff its length equals d(x, y).

A Riemannian manifold M is complete iff, for any \(x\in M\), every geodesic emanating from x is defined for all \(t\in (-\infty , +\infty )\). By the Hopf–Rinow theorem, we know that if M is complete, then any pair of points in M can be joined by a minimal geodesic. Moreover, (M, d) is a complete metric space, and bounded closed subsets are compact.

Assuming that M is complete, the exponential mapping \(\exp _{x}: T_{x}M\rightarrow M\) at x is defined by \(\exp _{x}v:=\gamma _{v}(1, x)\) for each \(v\in T_{x}M\), where \(\gamma (\cdot )=\gamma _{v}(\cdot , x)\) is the geodesic starting at x with velocity v, that is, \(\gamma (0)=x\) and \(\gamma '(0)=v\). It is easy to see that \(\exp _{x}tv=\gamma _{v}(t, x)\) for each real number t.
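To make these objects concrete, the following numerical sketch (ours, not part of the text) implements the exponential map and its inverse on the hyperboloid model of the hyperbolic plane, a standard Hadamard manifold of constant curvature \(-1\); the formulas for exp, log and d are the classical closed-form expressions for this model.

```python
import numpy as np

# Hyperboloid model of the hyperbolic plane: H^2 = {x in R^3 : <x,x>_L = -1, x_0 > 0},
# a Hadamard manifold of constant curvature -1, with the Lorentz inner product below.
def lorentz(u, v):
    return -u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def exp_map(x, v):
    """exp_x(v): point reached at time 1 along the geodesic from x with velocity v."""
    r = np.sqrt(lorentz(v, v))               # Riemannian norm of the tangent vector v
    if r < 1e-15:
        return x
    return np.cosh(r) * x + np.sinh(r) * v / r

def log_map(x, y):
    """exp_x^{-1}(y): initial velocity of the minimal geodesic from x to y."""
    theta = np.arccosh(max(1.0, -lorentz(x, y)))   # Riemannian distance d(x, y)
    if theta < 1e-15:
        return np.zeros_like(x)
    return theta * (y - np.cosh(theta) * x) / np.sinh(theta)

x = np.array([1.0, 0.0, 0.0])                # base point of H^2
v = np.array([0.0, 0.3, -0.4])               # tangent vector at x, Riemannian norm 0.5
y = exp_map(x, v)
assert abs(lorentz(y, y) + 1.0) < 1e-12      # y lies on the hyperboloid
assert np.allclose(log_map(x, y), v)         # log_x inverts exp_x
assert abs(np.arccosh(-lorentz(x, y)) - 0.5) < 1e-12   # d(x, y) = ||v||
```

Here `exp_map` and `log_map` realize, in closed form, the mappings \(\exp _{x}\) and \(\exp _{x}^{-1}\) used throughout the paper.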

Definition 2.1

A Hadamard manifold M is a complete simply connected Riemannian manifold of non-positive sectional curvature.

Proposition 2.1

[45] Let M be a Hadamard manifold. Then, the mapping \(\exp _{x}: T_{x}M\rightarrow M\) is a diffeomorphism, and for any two points \(x, y\in M\), there exists a unique minimal geodesic \(\gamma _{x, y}(t)=\exp _{x}t\exp ^{-1}_{x}y\) for all \(t\in [0, 1]\) joining x to y.

Definition 2.2

Let M be a Hadamard manifold. A subset \(K\subseteq M\) is said to be convex iff, for any points x and y in K, the geodesic joining x to y is contained in K; that is, if \(\gamma : [a, b]\rightarrow M\) is a geodesic such that \(x=\gamma (a)\) and \(y=\gamma (b)\), then \(\gamma _{x, y}(t)=\exp _{x}t\exp ^{-1}_{x}y\in K\) for all \(t\in [0, 1]\).

Definition 2.3

The function \(\phi : M\rightarrow \mathbb {R}\bigcup \{+\infty \}\) is said to be convex iff, for any geodesic \(\gamma \) in M, the composition function \(\phi \circ \gamma \) is convex, that is,

$$\begin{aligned} (\phi \circ \gamma )(ta+(1-t)b)\le t(\phi \circ \gamma )(a)+(1-t)(\phi \circ \gamma )(b), \end{aligned}$$

for any \(a, b\in \mathbb {R}\) and \(t\in [0,1]\).
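A simple illustrative example (ours, not taken from the text): convexity in the above geodesic sense depends on the metric, so a function that is non-convex in the Euclidean sense may be convex on a suitably metrized manifold. Consider

$$\begin{aligned} M=(0,+\infty ),\quad \langle u, v\rangle _{x}=\frac{uv}{x^{2}},\quad \gamma (t)=xe^{tv},\quad \phi (x)=(\ln x)^{2}. \end{aligned}$$

With this metric the geodesics are \(\gamma (t)=xe^{tv}\), so \((\phi \circ \gamma )(t)=(\ln x+tv)^{2}\) is convex in t for every geodesic, and hence \(\phi \) is convex on M; yet \(\phi ''(x)=2(1-\ln x)/x^{2}<0\) for \(x>e\), so \(\phi \) is not convex in the Euclidean sense.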

Let \(\mathcal {X}(M)\) denote the set of all set-valued vector fields F defined on M such that \(F(x)\subseteq T_{x}M\) for each \(x\in \mathcal {D}(F)\), where \(\mathcal {D}(F)\) denotes the domain of F, defined by \(\mathcal {D}(F):=\{x\in M|F(x)\ne \emptyset \}. \)

In this paper, we always assume that M is a Hadamard manifold and \(\mathcal {D}(F)=M\). Let \(\phi : M\rightarrow \mathbb {R}\bigcup \{+\infty \}\) be a lower semicontinuous, proper and convex function, and let \(F\in \mathcal {X}(M)\) be a set-valued vector field with non-empty and compact values. We consider the following generalized mixed variational inequality on Hadamard manifolds, denoted by \(\mathrm {GMVI}(F, \phi )\), which consists in finding \(x^{*}\in M\) such that

$$\begin{aligned} \exists u^{*}\in F(x^{*}): \left\langle u^{*}, \exp ^{-1}_{x^{*}}y\right\rangle +\phi (y)-\phi (x^{*})\ge 0,\quad \forall y\in M. \end{aligned}$$
(1)

For the sake of convenience, the solution set of \(\mathrm {GMVI}(F, \phi )\) is denoted by \(\mathrm {SOL}(F, \phi )\).

If \(M=\mathbb {R}^{n}\) and \(\phi (\cdot )=\delta _{K}(\cdot )\), where \(\delta _{K}(\cdot )\) denotes the indicator function over the non-empty, closed and convex subset K of M, then the problem \(\mathrm {GMVI}(F, \phi )\) reduces to the set-valued variational inequality in [15], which consists in finding \(x^*\in K\) such that

$$\begin{aligned} \exists u^{*}\in F(x^{*}): \langle u^{*}, y-x^*\rangle \ge 0,\quad \forall y\in K. \end{aligned}$$

If M is a Hilbert space, then the problem \(\mathrm {GMVI}(F, \phi )\) reduces to the following problem in [16], which consists in finding \(x^*\in M\) such that

$$\begin{aligned} \exists u^{*}\in F(x^{*}): \langle u^{*}, y-x^*\rangle +\phi (y)-\phi (x^*)\ge 0, \quad \forall y\in M. \end{aligned}$$

If \(\phi (\cdot )=\delta _{K}(\cdot )\) and F is single-valued, then the problem \(\mathrm {GMVI}(F, \phi )\) reduces to the variational inequality on Hadamard manifolds in [22, 37], which consists in finding \(x^*\in K\) such that

$$\begin{aligned} \left\langle F(x^*), \exp ^{-1}_{x^{*}}y\right\rangle \ge 0, \quad \forall y\in K. \end{aligned}$$
(2)

If \(F\equiv 0\), then the problem \(\mathrm {GMVI}(F, \phi )\) reduces to the optimization problem for the convex function on Hadamard manifolds in [19], which consists in finding \(x^*\in M\) such that

$$\begin{aligned} \phi (y)-\phi (x^*)\ge 0, \quad \forall y\in M. \end{aligned}$$

Definition 2.4

[27] A set-valued vector field \(F\in \mathcal {X}(M)\) is said to be

  1. (i)

upper semicontinuous at \(x_{0}\in M\) iff, for any open set \(V\subseteq T_{x_{0}}M\) satisfying \(F(x_{0})\subseteq V\), there exists an open neighborhood \(U(x_{0})\) of \(x_{0}\) such that \(L_{ x,x_{0}}F(x)\subseteq V\) for all \(x\in U(x_{0})\);

  2. (ii)

    upper semicontinuous on M iff F is upper semicontinuous at every point \(x\in M\).

Definition 2.5

[20, 21] A set-valued vector field \(F\in \mathcal {X}(M)\) is said to be

  1. (i)

    strongly monotone iff there exists \(\beta >0\) such that, for any \(x, y\in M\),

    $$\begin{aligned} \left\langle u, \exp ^{-1}_{x}y\right\rangle -\left\langle v, -\exp ^{-1}_{y}x\right\rangle \le -\beta d^{2}(x,y),\quad \forall u\in F(x), v\in F(y); \end{aligned}$$
  2. (ii)

    monotone iff for any \(x, y\in M\),

    $$\begin{aligned} \left\langle u, \exp ^{-1}_{x}y\right\rangle \le \left\langle v, -\exp ^{-1}_{y}x\right\rangle , \quad \forall u\in F(x), v\in F(y); \end{aligned}$$
  3. (iii)

    maximal monotone iff it is monotone, and for any \(x\in M\) and \(u\in T_{x}M\), the condition

    $$\begin{aligned} \left\langle u, \exp ^{-1}_{x}y\right\rangle \le \left\langle v, -\exp ^{-1}_{y}x\right\rangle , \quad \forall y\in M, v\in F(y), \end{aligned}$$

    implies that \(u\in F(x)\);

  4. (iv)

    pseudomonotone iff for any \(x, y\in M\), \(u\in F(x)\) and \(v\in F(y)\),

    $$\begin{aligned} \left\langle u, \exp ^{-1}_{x}y\right\rangle \ge 0\Rightarrow \left\langle v, \exp ^{-1}_{y}x\right\rangle \le 0; \end{aligned}$$
  5. (v)

    strongly pseudomonotone iff there exists \(\beta >0\) such that, for any \(x, y\in M\), \(u\in F(x)\) and \(v\in F(y)\),

    $$\begin{aligned} \left\langle u, \exp ^{-1}_{x}y\right\rangle \ge 0\Rightarrow \left\langle v, \exp ^{-1}_{y}x\right\rangle \le -\beta d^{2}(x,y); \end{aligned}$$
  6. (vi)

    \(\phi \)-pseudomonotone iff for any \(x, y\in M\), \(u\in F(x)\) and \(v\in F(y)\),

    $$\begin{aligned} \left\langle u, \exp ^{-1}_{x}y\right\rangle +\phi (y)-\phi (x)\ge 0\Rightarrow \left\langle v, \exp ^{-1}_{y}x\right\rangle +\phi (x)-\phi (y)\le 0; \end{aligned}$$
  7. (vii)

    \(\phi \)-strongly pseudomonotone iff there exists \(\beta >0\) such that, for any \(x,y\in M\), \(u\in F(x)\) and \(v\in F(y)\),

    $$\begin{aligned} \left\langle u, \exp ^{-1}_{x}y\right\rangle +\phi (y)-\phi (x)\ge 0\Rightarrow \left\langle v, \exp ^{-1}_{y}x\right\rangle +\phi (x)-\phi (y)\,{\le }\,-\beta d^{2}(x,y). \end{aligned}$$
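To connect these notions with the familiar linear-space ones, note that in the Euclidean case \(M=\mathbb {R}^{n}\) we have \(\exp _{x}^{-1}y=y-x\) and \(d(x,y)=\Vert y-x\Vert \), so (i) reduces to the classical strong monotonicity \(\langle u-v, x-y\rangle \ge \beta d^{2}(x,y)\). The following numerical sketch (ours, not part of the text) checks this for the single-valued field \(F(x)=Ax\) with A symmetric positive definite, for which \(\beta =\lambda _{\min }(A)\):

```python
import numpy as np

# In M = R^n one has exp_x^{-1} y = y - x and d(x, y) = ||y - x||, so Definition
# 2.5 (i) reads <u - v, x - y> >= beta ||x - y||^2.  For the single-valued field
# F(x) = A x with A symmetric positive definite, this holds with beta = lambda_min(A).
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
beta = np.linalg.eigvalsh(A).min()           # smallest eigenvalue of A, here > 0

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    u, v = A @ x, A @ y                      # F is single-valued here
    gap = np.dot(u - v, x - y) - beta * np.dot(x - y, x - y)
    assert gap >= -1e-9                      # strong monotonicity with modulus beta
```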

Remark 2.1

If \(\phi \equiv \mathrm {constant}\), then \(\phi \)-pseudomonotone and \(\phi \)-strongly pseudomonotone mappings reduce to pseudomonotone and strongly pseudomonotone mappings, respectively.

Definition 2.6

The subdifferential of a function \(\phi : M\rightarrow \mathbb {R}\bigcup \{+\infty \}\) at x is the set-valued vector field \(\partial \phi \), defined by

$$\begin{aligned} \partial \phi (x):=\left\{ u\in T_{x}M: \phi (y)\ge \phi (x)+ \langle u, \exp ^{-1}_{x}y\rangle ,\;\forall y\in M\right\} . \end{aligned}$$

Definition 2.7

[46] Let \(A\in \mathcal {X}(M)\) and \(\alpha >0\). Then, the resolvent of A is the set-valued mapping \(J_{\alpha }^{A}: M\rightrightarrows M\), defined by

$$\begin{aligned} J_{\alpha }^{A}(x):=\{z\in M| x\in \exp _{z}\alpha A(z)\},\quad \forall x\in M. \end{aligned}$$

Remark 2.2

  1. (i)

    The range of the resolvent \(J_{\alpha }^{A}\) is contained in the domain of A;

  2. (ii)

    The domain of the resolvent \(J_{\alpha }^{A}\) is the range of the vector field, defined by \(x\mapsto \exp _{x}\alpha A(x)\). Then, we know that \(\mathcal {D}(J_{\alpha }^{A})=\mathcal {R}(\exp _{(\cdot )}\alpha A(\cdot ));\)

  3. (iii)

    If \(\mathcal {D}(A)=M\), then the vector field A is maximal monotone iff \(J_{\alpha }^{A}\) is single-valued, firmly non-expansive and the domain \(\mathcal {D}(J_{\alpha }^{A})=M\); see [46].

Proposition 2.2

[27] Let \(x_{0}\in M\) and \(\{x_{n}\}\subseteq M\) be such that \(x_{n}\rightarrow x_{0}\). Then, for any \(y\in M\),

$$\begin{aligned} \exp _{x_{n}}^{-1}y\rightarrow \exp _{x_{0}}^{-1}y \quad \mathrm {and}\quad \exp _{y}^{-1}x_{n}\rightarrow \exp _{y}^{-1}x_{0}. \end{aligned}$$

Proposition 2.3

[47] For any \(x, y, z\in M\), the following inequality holds,

$$\begin{aligned} \left\langle \exp ^{-1}_{x}y, \exp ^{-1}_{x}z\right\rangle +\left\langle \exp ^{-1}_{y}x, \exp ^{-1}_{y}z\right\rangle \ge d^{2}(x,y). \end{aligned}$$
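In the Euclidean case \(M=\mathbb {R}^{n}\), where \(\exp _{a}^{-1}b=b-a\), the inequality of Proposition 2.3 holds with equality, since \(\langle y-x, z-x\rangle +\langle x-y, z-y\rangle =\langle y-x, (z-x)-(z-y)\rangle =\Vert y-x\Vert ^{2}\); it is the non-positive curvature that turns this into an inequality in general. A quick numerical check of the flat case (ours):

```python
import numpy as np

# In M = R^n, exp_a^{-1} b = b - a, so Proposition 2.3 becomes
#   <y - x, z - x> + <x - y, z - y> >= ||x - y||^2,
# and in the flat case it holds with equality:
#   <y - x, (z - x) - (z - y)> = <y - x, y - x>.
rng = np.random.default_rng(1)
for _ in range(100):
    x, y, z = (rng.standard_normal(3) for _ in range(3))
    lhs = np.dot(y - x, z - x) + np.dot(x - y, z - y)
    assert abs(lhs - np.dot(x - y, x - y)) < 1e-9
```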

As noted above, a gap function transforms a generalized mixed variational inequality into an equivalent optimization problem in linear spaces, and many useful algorithms are available for solving optimization problems on Hadamard manifolds. It is therefore of interest to establish gap functions for generalized mixed variational inequalities on Hadamard manifolds, so that optimization methods can be applied to solve them. Next, we introduce the definition of such gap functions.

Definition 2.8

A function \(f: M\rightarrow \mathbb {R}\bigcup \{+\infty \}\) is called a gap function for problem (1) iff

  1. (i)

    \(f(x)\ge 0, \quad \forall x\in M\);

  2. (ii)

    \(f(x_{0})=0\) iff \(x_{0}\) is a solution of (1).

One of the many useful applications of gap functions is in deriving so-called error bounds, i.e., upper estimates of the distance from an arbitrary point to the solution set S of problem (1): \(d(x,S)\le \gamma f(x)^{\lambda },\) where \(\gamma , \lambda >0\) are independent of x.

The following result follows from Theorem 1.4.16 of [48].

Lemma 2.1

Let the set-valued vector field \(F: M\rightrightarrows TM\) and the function \(f: \mathrm {Graph}(F)\rightarrow \mathbb {R}\bigcup \{+\infty \}\) be given. If f and F are upper semicontinuous, and the values of F are compact, then the function \(g: M\rightarrow \mathbb {R}\bigcup \{+\infty \}\), defined by

$$\begin{aligned} g(x):=\sup _{y\in F(x)}f(x,y), \end{aligned}$$

is upper semicontinuous.

Let \(\phi \) be a lower semicontinuous, proper and convex function on M and \(\mathcal {D}(\phi )=M\). Recall that the proximal mapping \(P_{\alpha }^{\phi }: M\rightarrow M\) is given by

$$\begin{aligned} P_{\alpha }^{\phi }(z):=\mathrm {argmin}_{y\in M} \left\{ \phi (y)+\frac{1}{2\alpha }d^{2}(z,y)\right\} ,\quad \forall z\in M, \alpha >0. \end{aligned}$$

In Lemma 4.2 of [23], it was proved that \(P_{\alpha }^{\phi }(\cdot )\) is a single-valued mapping with \(\mathcal {D}(P_{\alpha }^{\phi })=M\); for each \(z\in M\), the unique point \(x=P_{\alpha }^{\phi }(z)\) is characterized by

$$\begin{aligned} \frac{1}{\alpha }\exp ^{-1}_{x}z\in \partial \phi (x). \end{aligned}$$
(3)

On the other hand, by the characterization (3), we have \(P_{\alpha }^{\phi }(z)=J_{\alpha }^{\partial \phi }(z)\) for any \(z\in M\). Indeed, given \(z\in M\), let \(x=P_{\alpha }^{\phi }(z)\). It follows from (3) that \(z\in \exp _{x}\alpha \partial \phi (x)\), and hence \(x\in J_{\alpha }^{\partial \phi }(z)\) by the definition of the resolvent of \(\partial \phi \). Since \(\mathcal {D}(\phi )=M\), by Theorem 5.1 of [27], the set-valued vector field \(\partial \phi \) is maximal monotone with full domain. By Remark 2.2 (iii), we then obtain that \(J_{\alpha }^{\partial \phi }(\cdot )\) is single-valued and \(\mathcal {D}(J_{\alpha }^{\partial \phi })=M\). Thus, \(P_{\alpha }^{\phi }(z)=J_{\alpha }^{\partial \phi }(z). \) Furthermore, by Remark 2.2 (iii), we know that \( J_{\alpha }^{\partial \phi }\) is firmly non-expansive; it is then easy to check, by Definition 1 in [46], that \( J_{\alpha }^{\partial \phi }\) is Lipschitz continuous. Therefore, \(P_{\alpha }^{\phi }\) is a Lipschitz continuous mapping.
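As an illustration (ours, not part of the text): in the Euclidean case \(M=\mathbb {R}^{n}\), where \(\exp _{x}^{-1}z=z-x\), the mapping \(P_{\alpha }^{\phi }\) is the classical Moreau proximal mapping and (3) reads \((z-x)/\alpha \in \partial \phi (x)\). For \(\phi =\Vert \cdot \Vert _{1}\) the proximal mapping is soft-thresholding, and (3) can be verified directly:

```python
import numpy as np

def prox_l1(z, alpha):
    """P_alpha^phi(z) for phi = ||.||_1 on M = R^n (soft-thresholding):
    argmin_y ||y||_1 + ||z - y||^2 / (2 * alpha)."""
    return np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)

alpha = 0.5
z = np.array([2.0, -0.2, -1.5])
x = prox_l1(z, alpha)                        # here x = (1.5, 0.0, -1.0)

# Characterization (3) in R^n: g = (z - x)/alpha must belong to the subdifferential
# of ||.||_1 at x, i.e. g_i = sign(x_i) if x_i != 0 and g_i in [-1, 1] otherwise.
g = (z - x) / alpha
for xi, gi in zip(x, g):
    if xi != 0.0:
        assert abs(gi - np.sign(xi)) < 1e-12
    else:
        assert -1.0 <= gi <= 1.0
```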

Next, we define

$$\begin{aligned} R_{\alpha }(x,u):=\exp ^{-1}_{P_{\alpha }^{\phi }(\exp _{x}(-\alpha u))}x, \quad \forall x\in M, u\in F(x), \alpha >0 \end{aligned}$$

and \(r_{\alpha }(x):=\inf _{u\in F(x)} \Vert R_{\alpha }(x,u)\Vert . \) Then, we have the following result.

Proposition 2.4

Let \(\phi \) be a lower semicontinuous, proper and convex function on M with \(\mathcal {D}(\phi )=M\), and let \(F\in \mathcal {X}(M)\) be a set-valued vector field with non-empty and compact values. Let \(\alpha >0\) be arbitrary. Then, the following statements are equivalent:

  1. (i)

    x is a solution of problem (1);

  2. (ii)

    There exists \(u\in F(x)\) such that \(R_{\alpha }(x,u)=0\);

  3. (iii)

    \(r_{\alpha }(x)=0\).

Proof

\(\mathrm {(i)\Rightarrow (ii)}\). Suppose that \(x\in M\) solves (1). Then, there exists \(u\in F(x)\) such that, for any \(y\in M\), \(\langle u, \exp ^{-1}_{x}y\rangle +\phi (y)-\phi (x)\ge 0.\) This means that \(-u\in \partial \phi (x)\). Furthermore, one has \(\exp _{x}(-\alpha u)\in \exp _{x}(\alpha \partial \phi (x)). \)

By the definition of the resolvent of \(\partial \phi \), we get \(x\in J_{\alpha }^{\partial \phi }(\exp _{x}(-\alpha u)). \) Since \(J_{\alpha }^{\partial \phi }\) is single-valued and \(P_{\alpha }^{\phi }(\cdot )=J_{\alpha }^{\partial \phi }(\cdot )\), we have \(x=P_{\alpha }^{\phi }(\exp _{x}(-\alpha u)). \) Therefore, \(R_{\alpha }(x,u)=\exp ^{-1}_{P_{\alpha }^{\phi }(\exp _{x}(-\alpha u))}x=0.\)

(ii)\(\Rightarrow \)(iii) is obvious.

(iii)\(\Rightarrow \)(i). For any \(x\in M\), since \(\exp _{x}(\cdot )\) and \(P_{\alpha }^{\phi }(\cdot )\) are continuous, the composition \(P_{\alpha }^{\phi }(\exp _{x}(\cdot ))\) is continuous. By Proposition 2.2, \(R_{\alpha }(x,u)\) is continuous in u. From the compactness of F(x) and (iii), there exists \(u\in F(x)\) such that \(R_{\alpha }(x,u)=0. \) This implies that \(x=P_{\alpha }^{\phi }(\exp _{x}(-\alpha u))\). By the definition of \(J_{\alpha }^{\partial \phi }(\cdot )\) and the equality \(P_{\alpha }^{\phi }(\cdot )=J_{\alpha }^{\partial \phi }(\cdot )\), it holds that

$$\begin{aligned} \exp _{x}(-\alpha u)\in \exp _{x}(\alpha \partial \phi (x)). \end{aligned}$$

Then, we have \(-u\in \partial \phi (x), \) which in turn is equivalent to

$$\begin{aligned} \phi (y)\ge \phi (x)-\left\langle u, \exp _{x}^{-1}y\right\rangle ,\quad \forall y\in M. \end{aligned}$$

Therefore x solves (1). This completes the proof. \(\square \)

Let \(\alpha >0\) be arbitrary. By Proposition 2.4, it is easy to obtain the following result.

Corollary 2.1

Let \(K=M\), \(\phi \equiv 0\) and F be single-valued. Then, the following statements are equivalent:

  1. (i)

    x is a solution of problem (2);

  2. (ii)

    \(x=P_{\alpha }^{\phi }(\exp _{x}(-\alpha F(x)))\);

  3. (iii)

    \(r_{\alpha }(x)=0\), where \(r_{\alpha }(\cdot )\) is defined by

    $$\begin{aligned} r_{\alpha }(x):=\left\| \exp ^{-1}_{P_{\alpha }^{\phi }(\exp _{x}(-\alpha F(x)))}x\right\| . \end{aligned}$$
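For orientation (our sketch, not part of the text): if \(M=\mathbb {R}^{n}\) and \(\phi =\delta _{K}\), then \(P_{\alpha }^{\phi }\) is the metric projection \(\Pi _{K}\) and \(R_{\alpha }(x,u)=x-\Pi _{K}(x-\alpha u)\) is the classical natural residual, so Proposition 2.4 recovers the well-known fixed-point characterization \(x=\Pi _{K}(x-\alpha F(x))\) of Euclidean variational inequalities. With K a box, this is easy to test numerically:

```python
import numpy as np

def proj_box(z, lo, hi):
    """Metric projection onto K = [lo, hi]^n; this is P_alpha^phi when
    phi = delta_K is the indicator function of the box (M = R^n)."""
    return np.clip(z, lo, hi)

def natural_residual(F, x, alpha, lo, hi):
    """r_alpha(x) = ||x - proj_K(x - alpha * F(x))|| for single-valued F."""
    return np.linalg.norm(x - proj_box(x - alpha * F(x), lo, hi))

# Affine single-valued field F(x) = x - b over K = [0, 2]^2.  The VI solution is
# x* = clip(b, 0, 2) = (1, 2): there F(x*) = (0, -1) and <F(x*), y - x*> >= 0
# for every y in K, since y_2 <= 2.
b = np.array([1.0, 3.0])
F = lambda x: x - b
x_star = np.array([1.0, 2.0])
assert natural_residual(F, x_star, alpha=0.7, lo=0.0, hi=2.0) < 1e-12   # solution
assert natural_residual(F, np.array([0.5, 0.5]), alpha=0.7, lo=0.0, hi=2.0) > 0.1
```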

Proposition 2.5

Let \(\alpha >0\) be arbitrary and \(F\in \mathcal {X}(M)\) be upper semicontinuous with compact values. Then,

  1. (i)

    \(r_{\alpha }(\cdot )\) is a gap function for \(\mathrm{(1)}\);

  2. (ii)

    There is some \(u\in F(x)\) such that \(r_{\alpha }(x)=\Vert R_{\alpha }(x,u)\Vert \);

  3. (iii)

    \(r_{\alpha }(\cdot )\) is lower semicontinuous on M.

Proof

(i) is obvious.

(ii) For any \(x\in M\), by the continuity of \(P_{\alpha }^{\phi }(\cdot )\) and \(\exp _{x}(\cdot )\), from Proposition 2.2, we obtain that \(\Vert R_{\alpha }(x,u)\Vert \) is continuous in u. Since F(x) is compact, there exists \(u\in F(x)\) such that \(r_{\alpha }(x)=\Vert R_{\alpha }(x,u)\Vert \).

(iii) Since F is upper semicontinuous with compact values, and \(\Vert R_{\alpha }(\cdot ,\cdot )\Vert \) is continuous, from Lemma 2.1, the function

$$\begin{aligned} r_{\alpha }(x)=\inf _{u\in F(x)} \Vert R_{\alpha }(x,u)\Vert =-\sup _{u\in F(x)}(- \Vert R_{\alpha }(x,u)\Vert ), \end{aligned}$$

is lower semicontinuous. This completes the proof. \(\square \)

3 Gap Functions for Generalized Mixed Variational Inequalities

In this section, we will propose some new gap functions for generalized mixed variational inequalities on Hadamard manifolds, which extend the ones introduced in [6, 15, 16].

Let \(\phi \) be a lower semicontinuous, proper and convex function defined on M, and let \(F\in \mathcal {X}(M)\) be upper semicontinuous with compact values. For any \(x\in M\), \(\alpha >0\) and \(u\in F(x)\), we define \(G_{\alpha }\) by

$$\begin{aligned} G_{\alpha }(x; u):=\sup _{y\in M}\left\{ \langle L_{x,y}u, \exp _{y}^{-1}x\rangle +\phi (x)-\phi (y)-\frac{1}{2\alpha }d^{2}(x,y)\right\} , \end{aligned}$$

and \(f_{\alpha }\) by \(f_{\alpha }(x):=\inf _{u\in F(x)}G_{\alpha }(x; u). \)
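In the Euclidean single-valued case (our sketch, not part of the text), where \(L_{x,y}\) is the identity and \(\exp _{y}^{-1}x=x-y\), \(G_{\alpha }\) becomes a regularized gap function of Fukushima type, and the supremum over y is attained at \(y^{*}=P_{\alpha }^{\phi }(x-\alpha u)\). The following sketch evaluates \(G_{\alpha }\) for \(\phi =\Vert \cdot \Vert _{1}\) and checks that it vanishes at a solution and is positive at a non-solution:

```python
import numpy as np

def prox_l1(z, alpha):
    """P_alpha^phi for phi = ||.||_1 on M = R^n (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - alpha, 0.0)

def G_alpha(x, u, phi, alpha):
    """sup_y { <u, x - y> + phi(x) - phi(y) - ||x - y||^2 / (2 alpha) } with
    phi = ||.||_1; the maximizer is y* = P_alpha^phi(x - alpha * u)."""
    y = prox_l1(x - alpha * u, alpha)
    return (np.dot(u, x - y) + phi(x) - phi(y)
            - np.dot(x - y, x - y) / (2.0 * alpha))

phi = lambda x: np.abs(x).sum()
b = np.array([2.0, -0.2])
F = lambda x: x - b                          # single-valued field
# Here the GMVI solution x* satisfies b - x* in the l1-subdifferential at x*,
# i.e. x* = soft-threshold(b, 1) = (1, 0).
x_star = np.array([1.0, 0.0])
assert abs(G_alpha(x_star, F(x_star), phi, alpha=0.8)) < 1e-12   # gap vanishes
assert G_alpha(np.array([0.3, 0.5]), F(np.array([0.3, 0.5])), phi, alpha=0.8) > 0.0
```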

Lemma 3.1

Let \(\alpha >0\) be arbitrary and \(F\in \mathcal {X}(M)\) be upper semicontinuous with compact values. Then,

  1. (i)

    For any \(\alpha >0\), \(f_{\alpha }(\cdot )\) is nonnegative on M;

  2. (ii)

    For any \(x\in M\), there exists \(u\in F(x)\) such that \(f_{\alpha }(x)=G_{\alpha }(x; u)\);

  3. (iii)

    For any \(\alpha >0\), \(f_{\alpha }(\cdot )\) is lower semicontinuous.

Proof

(i) is obvious.

(ii) For any \(x\in M\), since F(x) is compact and \(G_{\alpha }(x; u)\) is continuous in u, there exists \(u\in F(x)\) such that \(f_{\alpha }(x)=G_{\alpha }(x; u)\).

(iii) Let

$$\begin{aligned} \overline{G}_{\alpha }((x,u), y)=\left\langle L_{x,y}u, \exp _{y}^{-1}x\right\rangle +\phi (x)-\phi (y)-\frac{1}{2\alpha }d^{2}(x,y). \end{aligned}$$

Since \(\phi \) is lower semicontinuous, by Proposition 2.2, it is easy to see that \(\overline{G}_{\alpha }(\cdot , y)\) is also lower semicontinuous in the argument (x, u) for each \(y\in M\). Then, \(G_{\alpha }(x; u)=\sup _{y\in M}\overline{G}_{\alpha }((x,u), y)\) is lower semicontinuous, and hence \(-G_{\alpha }(x; u)\) is upper semicontinuous. Combining this with the fact that F is upper semicontinuous with compact values, we obtain that the function \(f_{\alpha }(\cdot )\), defined by

$$\begin{aligned} f_{\alpha }(x):=\inf _{u\in F(x)}G_{\alpha }(x; u)=-\sup _{u\in F(x)}[-G_{\alpha }(x; u)], \end{aligned}$$

is lower semicontinuous. This completes the proof. \(\square \)

Lemma 3.2

Let \(\phi \) be a lower semicontinuous, proper and convex function on M with \(\mathcal {D}(\phi )=M\), and let \(F\in \mathcal {X}(M)\) be a set-valued vector field with non-empty and compact values. Then, for any \(\alpha >0\), it holds that

$$\begin{aligned} f_{\alpha }(x)\ge \frac{1}{2\alpha }r_{\alpha }^{2}(x),\quad \forall x\in M. \end{aligned}$$
(4)

In particular, \(f_{\alpha }(x)=0\) iff x is a solution of \(\mathrm{(1)}\).

Proof

For any fixed \(x\in M\) and \(\alpha >0\), let \(u\in F(x)\) be arbitrary, and set \(z:=P_{\alpha }^{\phi }(\exp _{x}(-\alpha u))=J_{\alpha }^{\partial \phi }(\exp _{x}(-\alpha u))\). By the definition of the resolvent of \(\partial \phi \),

$$\begin{aligned} \exp _{x}(-\alpha u)\in \exp _{z}(\alpha \partial \phi (z)), \end{aligned}$$

which is equivalent to \(\frac{\exp _{z}^{-1}\exp _{x}(-\alpha u)}{\alpha }\in \partial \phi (z).\) It follows from the definition of subdifferential that

$$\begin{aligned} \phi (y)-\phi (z)-\left\langle \frac{\exp _{z}^{-1}\exp _{x}(-\alpha u)}{\alpha }, \exp _{z}^{-1}y\right\rangle \ge 0,\quad \forall y\in M, u\in F(x). \end{aligned}$$

Taking \(y=x\) in the inequality above, it follows that

$$\begin{aligned} \phi (x)-\phi (z)-\left\langle \frac{\exp _{z}^{-1}\exp _{x}(-\alpha u)}{\alpha }, \exp _{z}^{-1}x\right\rangle \ge 0,\quad \forall u\in F(x). \end{aligned}$$

This, together with Proposition 2.3, implies that for any \(u\in F(x)\),

$$\begin{aligned} \phi (x)-\phi (z)\ge & {} \left\langle \frac{\exp _{z}^{-1}\exp _{x}(-\alpha u)}{\alpha }, \exp _{z}^{-1}x\right\rangle \\\ge & {} \frac{1}{\alpha }\left( d^{2}(x,z)-\left\langle \exp _{x}^{-1}z, \exp _{x}^{-1}\exp _{x}(-\alpha u)\right\rangle \right) \\= & {} \frac{1}{\alpha }d^{2}(x,z)+\left\langle u, \exp _{x}^{-1}z\right\rangle , \end{aligned}$$

which, together with the facts that \(L_{x,z}\) is an isometry and \(L_{x,z}\exp _{x}^{-1}z=-\exp _{z}^{-1}x\) (so that \(\langle L_{x,z}u, \exp _{z}^{-1}x\rangle =-\langle u, \exp _{x}^{-1}z\rangle \)), yields that

$$\begin{aligned} \phi (x)-\phi (z)+\left\langle L_{x,z}u, \exp _{z}^{-1}x\right\rangle \ge \frac{1}{\alpha }d^{2}(x,z),\quad \forall u\in F(x). \end{aligned}$$

By the definition of \(G_{\alpha }(x; u)\) and \(P_{\alpha }^{\phi }\), one has

$$\begin{aligned} G_{\alpha }(x; u)= & {} \sup _{y\in M}\left\{ \langle L_{x,y}u, \exp _{y}^{-1}x\rangle +\phi (x)-\phi (y)-\frac{1}{2\alpha }d^{2}(x,y)\right\} \\\ge & {} \left\langle L_{x,z}u, \exp _{z}^{-1}x\right\rangle +\phi (x)-\phi (z)-\frac{1}{2\alpha }d^{2}(x,z)\\\ge & {} \frac{1}{2\alpha }d^{2}(x,z)=\frac{1}{2\alpha }d^{2}(x, P_{\alpha }^{\phi }(\exp _{x}(-\alpha u))),\quad \forall u\in F(x). \end{aligned}$$

Therefore,

$$\begin{aligned} f_{\alpha }(x)= & {} \inf _{u\in F(x)}G_{\alpha }(x; u)\ge \frac{1}{2\alpha }\inf _{u\in F(x)}d^{2}(x, P_{\alpha }^{\phi }(\exp _{x}(-\alpha u)))\\= & {} \frac{1}{2\alpha }\inf _{u\in F(x)}\left\| \exp ^{-1}_{P_{\alpha }^{\phi }(\exp _{x}(-\alpha u))}x\right\| ^{2} =\frac{1}{2\alpha }r_{\alpha }^{2}(x). \end{aligned}$$

Next, we prove that \(f_{\alpha }(x)=0\) iff x solves (1). Suppose that \(f_{\alpha }(x)=0\). Then, we obtain that \(r_{\alpha }(x)=0\) from (4). Hence, by Proposition 2.4, it follows that x solves (1).

Conversely, if x solves (1), then there exists \(u\in F(x)\) such that

$$\begin{aligned} \left\langle u, \exp _{x}^{-1}y\right\rangle +\phi (y)-\phi (x)\ge 0,\quad \forall y\in M. \end{aligned}$$

Clearly,

$$\begin{aligned} \left\langle L_{x,y}u, \exp _{y}^{-1}x\right\rangle +\phi (x)-\phi (y)- \frac{1}{2\alpha }d^{2}(x,y)\le 0,\quad \forall y\in M, \end{aligned}$$

which yields that \(f_{\alpha }(x)\le G_{\alpha }(x; u)\le 0.\) Combined with the nonnegativity of \(f_{\alpha }(\cdot )\), this gives \(f_{\alpha }(x)=0\). This completes the proof. \(\square \)

Remark 3.1

If \(M=\mathbb {R}^{n}\) and \(\phi (\cdot )=\delta _{K}(\cdot )\), then Lemma 3.1 reduces to Lemma 3.1 of [15]. Furthermore, Lemma 3.1 can be regarded as a generalization of Lemma 3.2 in [16] from linear spaces to Hadamard manifolds. If \(\mathcal {D}(\Phi )=\mathbb {H}\) in [16], then Lemma 3.2 can be regarded as a generalization of Lemma 3.3 in [16] from linear spaces to Hadamard manifolds.

Next, we consider the following function, defined by

$$\begin{aligned} \varphi _{f,\alpha ,\lambda }(x):=\inf _{z\in M}\left\{ f_{\alpha }(z)+\lambda d^{2}(x,z)\right\} , \end{aligned}$$

where \(\lambda \) is a positive constant. In fact, from the definition of \(f_{\alpha }(\cdot )\), we know that \(\varphi _{f,\alpha ,\lambda }(\cdot )\) can be rewritten as

$$\begin{aligned} \varphi _{f,\alpha ,\lambda }(x)= & {} \inf _{z\in M, u\in F(z)}\left\{ \sup _{y\in M}\left\{ \langle L_{z,y}u, \exp _{y}^{-1}z\rangle +\phi (z)-\phi (y)-\frac{1}{2\alpha }d^{2}(z,y)\right\} \right. \\&\left. +\,\lambda d^{2}(x,z)\right\} . \end{aligned}$$
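The function \(\varphi _{f,\alpha ,\lambda }\) is a Moreau-type envelope of \(f_{\alpha }\), with the squared geodesic distance in place of the squared norm. In the scalar Euclidean case this inf-regularization can sometimes be computed in closed form; for instance (our sketch, not part of the text), the envelope of \(f=|\cdot |\) is the Huber function, which the following code verifies numerically:

```python
import numpy as np

def envelope(f, x, lam, grid):
    """phi(x) = inf_z { f(z) + lam * (x - z)^2 } on M = R, approximated on a grid."""
    return np.min(f(grid) + lam * (x - grid) ** 2)

def huber(x, lam):
    """Closed-form envelope of f = |.|: quadratic for |x| <= 1/(2 lam),
    linear with a -1/(4 lam) offset beyond."""
    return lam * x * x if abs(x) <= 1.0 / (2.0 * lam) else abs(x) - 1.0 / (4.0 * lam)

lam = 2.0
grid = np.linspace(-5.0, 5.0, 200001)        # grid step 5e-5
for x in (-1.3, -0.1, 0.0, 0.2, 2.5):
    assert abs(envelope(np.abs, x, lam, grid) - huber(x, lam)) < 1e-3
```

Note how the envelope is everywhere finite and smooth even though f is nonsmooth; this is the effect that motivates considering \(\varphi _{f,\alpha ,\lambda }\) in place of \(f_{\alpha }\).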

Theorem 3.1

For any \(\alpha >0\) and \(\lambda >0\), the function \(\varphi _{f,\alpha ,\lambda }(\cdot )\) is nonnegative on M. Moreover, if \(\phi \) is a lower semicontinuous, proper and convex function on M with \(\mathcal {D}(\phi )=M\), and F is upper semicontinuous with compact values, then \(x^*\) is a solution of \(\mathrm{(1)}\) iff \(\varphi _{f,\alpha ,\lambda }(x^*)=0\).

Proof

For any \(\alpha >0\), since \(f_{\alpha }(\cdot )\) is nonnegative on M, we know that \(\varphi _{f,\alpha ,\lambda }(\cdot )\) is nonnegative on M.

If \(x^*\) solves (1), then from Lemma 3.2, it holds that \(f_{\alpha }(x^*)=0\). Thus,

$$\begin{aligned} \varphi _{f,\alpha ,\lambda }(x^*)=\inf _{z\in M}\left\{ f_{\alpha }(z)+\lambda d^{2}(x^*,z)\right\} \le f_{\alpha }(x^*)+\lambda d^{2}(x^*,x^*)=0. \end{aligned}$$

Combined with the nonnegativity of \(\varphi _{f,\alpha ,\lambda }(\cdot )\), this yields \(\varphi _{f,\alpha ,\lambda }(x^*)=0\).

Conversely, suppose that \(\varphi _{f,\alpha ,\lambda }(x^*)=0\). From the definition of \(\varphi _{f,\alpha ,\lambda }(\cdot )\), there exists a minimizing sequence \(\{z_{n}\}\) in M such that

$$\begin{aligned} f_{\alpha }(z_{n})+\lambda d^{2}(x^*,z_{n})<\frac{1}{n}. \end{aligned}$$

Thus, \(f_{\alpha }(z_{n})\rightarrow 0\) and \(d(x^*,z_{n})\rightarrow 0\). Since \(f_{\alpha }(\cdot )\) is lower semicontinuous and nonnegative by Lemma 3.1, one has

$$\begin{aligned} 0\le f_{\alpha }(x^*)\le \liminf _{n\rightarrow \infty }f_{\alpha }(z_{n})=0, \end{aligned}$$

which yields that \(f_{\alpha }(x^*)=0\). Therefore, from Lemma 3.2, we know that \(x^*\) solves (1). This completes the proof. \(\square \)

Theorem 3.1 shows that the unconstrained minimization problem

$$\begin{aligned} \min _{x\in M}\varphi _{f,\alpha ,\lambda }(x) \end{aligned}$$

is equivalent to problem (1) under certain assumptions on \(\phi \) and F. Thus, unconstrained minimization methods can be conveniently exploited to solve (1).

Now, it is desirable that the gap function \(\varphi _{f,\alpha ,\lambda }(\cdot )\) be differentiable everywhere. To this end, we define the function \(\psi _{f,\alpha ,\lambda }(\cdot , \cdot ): M\times M\rightarrow \mathbb {R}\bigcup \{+\infty \}\) by

$$\begin{aligned} \psi _{f,\alpha ,\lambda }(x, z):=f_{\alpha }(z)+\lambda d^{2}(x,z). \end{aligned}$$

By the definition of \(\varphi _{f,\alpha ,\lambda }(\cdot )\), we know that \(\varphi _{f,\alpha ,\lambda }(x)=\inf _{z\in M}\psi _{f,\alpha ,\lambda }(x, z)\).

Theorem 3.2

Let \(\alpha >0\) and \(\lambda >0\). Suppose that, for each \(x\in M\), the function \(\psi _{f,\alpha ,\lambda }(x, \cdot )\) attains its unique minimum \(z_{f,\alpha ,\lambda }(x)\) on M, and that the map \(x\mapsto z_{f,\alpha ,\lambda }(x)\) is continuous. Then, \(\varphi _{f,\alpha ,\lambda }(\cdot )\) is differentiable on M and

$$\begin{aligned} \mathrm {grad}\varphi _{f,\alpha ,\lambda }(x)=-2 \lambda \exp ^{-1}_{x}z_{f,\alpha ,\lambda }(x). \end{aligned}$$

Proof

Suppose that x is fixed. From the definitions of \(\varphi _{f,\alpha ,\lambda }(\cdot ), \psi _{f,\alpha ,\lambda }(\cdot , \cdot )\) and \(z_{f,\alpha ,\lambda }(\cdot )\), for each \(d\in T_{x}M\) and \(t>0\), we have

$$\begin{aligned}&\varphi _{f,\alpha ,\lambda }(\exp _{x}td)-\varphi _{f,\alpha ,\lambda }(x)\nonumber \\&\quad \le \psi _{f,\alpha ,\lambda }(\exp _{x}td, z_{f,\alpha ,\lambda }(x))-\psi _{f,\alpha ,\lambda }(x, z_{f,\alpha ,\lambda }(x))\nonumber \\&\quad =\lambda \left[ d^{2}(\exp _{x}td, z_{f,\alpha ,\lambda }(x))-d^{2}(x, z_{f,\alpha ,\lambda }(x))\right] . \end{aligned}$$
(5)

We use \(\rho _{y}(x)\) to denote \(d^{2}(x, y)\). Then,

$$\begin{aligned} d^{2}(\exp _{x}td, z_{f,\alpha ,\lambda }(x))=\rho _{z_{f,\alpha ,\lambda }(x)}(\exp _{x}td) \end{aligned}$$

and \(d^{2}(x, z_{f,\alpha ,\lambda }(x))=\rho _{z_{f,\alpha ,\lambda }(x)}(x).\) It was proved in [45] that, for any \(d\in T_{x}M\),

$$\begin{aligned} \left\langle \mathrm {grad}\rho _{z_{f,\alpha ,\lambda }(x)}(x), d\right\rangle= & {} \lim _{t\downarrow 0}\frac{\rho _{z_{f,\alpha ,\lambda }(x)}(\exp _{x}td)-\rho _{z_{f,\alpha ,\lambda }(x)}(x)}{t}\\= & {} \left\langle -2\exp ^{-1}_{x}z_{f,\alpha ,\lambda }(x), d\right\rangle . \end{aligned}$$

Dividing the leftmost and rightmost sides of (5) by t and letting \(t\downarrow 0\), we get

$$\begin{aligned}&\limsup _{t\downarrow 0}\frac{\varphi _{f,\alpha ,\lambda }(\exp _{x}td)-\varphi _{f,\alpha ,\lambda }(x)}{t}\\&\quad \le \limsup _{t\downarrow 0}\frac{\lambda \left[ d^{2}(\exp _{x}td, z_{f,\alpha ,\lambda }(x))-d^{2}(x, z_{f,\alpha ,\lambda }(x))\right] }{t}\\&\quad =\lambda \left\langle \mathrm {grad}\rho _{z_{f,\alpha ,\lambda }(x)}(x), d\right\rangle =\left\langle -2\lambda \exp ^{-1}_{x}z_{f,\alpha ,\lambda }(x), d\right\rangle . \end{aligned}$$

On the other hand, for each \(d\in T_{x}M\) and \(t>0\), let \(x_{t}:=\exp _{x}td\). It follows from the definitions of \(\varphi _{f,\alpha ,\lambda }(\cdot ), \psi _{f,\alpha ,\lambda }(\cdot , \cdot )\) and \(z_{f,\alpha ,\lambda }(\cdot )\) that

$$\begin{aligned}&\varphi _{f,\alpha ,\lambda }(\exp _{x}td)-\varphi _{f,\alpha ,\lambda }(x)\\&\quad =\varphi _{f,\alpha ,\lambda }(x_{t})-\varphi _{f,\alpha ,\lambda }(x)\\&\quad \ge \psi _{f,\alpha ,\lambda }(x_{t}, z_{f,\alpha ,\lambda }(x_{t}))-\psi _{f,\alpha ,\lambda }(x, z_{f,\alpha ,\lambda }(x_{t}))\\&\quad =\lambda \left[ d^{2}(\exp _{x}td,z_{f,\alpha ,\lambda }(\exp _{x}td))-d^{2}(x, z_{f,\alpha ,\lambda }(\exp _{x}td))\right] . \end{aligned}$$

Dividing the leftmost and rightmost sides of the inequality above by t and letting \(t\downarrow 0\), we get

$$\begin{aligned}&\liminf _{t\downarrow 0}\frac{\varphi _{f,\alpha ,\lambda }(\exp _{x}td)-\varphi _{f,\alpha ,\lambda }(x)}{t}\nonumber \\&\quad \ge \liminf _{t\downarrow 0}\frac{\lambda \left[ d^{2}(\exp _{x}td, z_{f,\alpha ,\lambda }(\exp _{x}td))-d^{2}(x, z_{f,\alpha ,\lambda }(\exp _{x}td))\right] }{t}\nonumber \\&\quad =\left\langle -2\lambda \exp ^{-1}_{x}z_{f,\alpha ,\lambda }(x), d\right\rangle . \end{aligned}$$
(6)

It follows from (5) and (6) that, for each \(d\in T_{x}M\),

$$\begin{aligned} \left\langle \mathrm {grad}\varphi _{f,\alpha ,\lambda }(x), d\right\rangle= & {} \lim _{t\downarrow 0} \frac{\varphi _{f,\alpha ,\lambda }(\exp _{x}td)-\varphi _{f,\alpha ,\lambda }(x)}{t}\\= & {} \left\langle -2\lambda \exp ^{-1}_{x}z_{f,\alpha ,\lambda }(x), d\right\rangle . \end{aligned}$$

Therefore, \(\mathrm {grad}\varphi _{f,\alpha ,\lambda }(x)=-2\lambda \exp ^{-1}_{x}z_{f,\alpha ,\lambda }(x).\) This completes the proof. \(\square \)

Remark 3.2

  1. (i)

    If \(M=\mathbb {R}^{n}\) and \(\phi (\cdot )=\delta _{K}(\cdot )\), then Theorem 3.2 can be regarded as a generalization of Proposition 3.1 of [15] from linear spaces to Hadamard manifolds;

  2. (ii)

    If \(M=\mathbb {R}^{n}\), \(\phi (\cdot )=\delta _{K}(\cdot )\) and F is single-valued, then Theorem 3.2 can be regarded as a generalization of Proposition 2.5 of [6] from linear spaces to Hadamard manifolds;

  3. (iii)

    Theorem 3.2 can be regarded as a generalization of Theorem 3.2 of [16] from linear spaces to Hadamard manifolds.

Below we provide three examples illustrating that the results of the present paper are applicable. Example 3.1 gives a gap function of a generalized mixed variational inequality on Hadamard manifolds. Examples 3.2 and 3.3 illustrate that all assumptions in Theorems 3.1 and 3.2 can be satisfied, and hence problem (1) can be solved by exploiting its gap function. Note that Examples 3.2 and 3.3 are not valid in Euclidean spaces, because \(\phi \) is non-convex in the usual sense.

Example 3.1

Let \(M=\mathbb {H}^{2}:=\{x=(x_{1},x_{2}, x_{3})\in \mathbb {R}^{3}: \left\langle x, x\right\rangle =-1, x_{3}>0\}\) be the 2-dimensional hyperbolic space, endowed with the Lorentz metric \(\langle \cdot , \cdot \rangle \) of \(\mathbb {R}^{3}\); see, for example, [26]. The sectional curvature of \(\mathbb {H}^{2}\) is \(-1\). The normalized geodesic \(\gamma :\mathbb {R}\rightarrow \mathbb {H}^{2}\), starting from \(x\in \mathbb {H}^{2}\), is given by

$$\begin{aligned} \gamma (t)=(\cosh t)x+(\sinh t)v,\quad \forall t\in \mathbb {R}, \end{aligned}$$

where \(v\in T_{x}\mathbb {H}^{2}\) is a unit vector. The Riemannian distance \(d:M\times M\rightarrow \mathbb {R}\) is given by \(d(x,y)=\mathrm {arcosh} (-\langle x, y\rangle ). \) For any \(x,y\in M\), one can check that the inverse exponential map is given by

$$\begin{aligned} \exp ^{-1}_{x}y=\mathrm {arcosh}(-\langle x, y\rangle )\frac{y+\langle x, y\rangle x}{\sqrt{\langle x, y\rangle ^{2}-1}}. \end{aligned}$$
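The formula above can be spot-checked numerically. The following Python sketch is our own illustration (all function names are ours, and we assume the Lorentz product has signature \((+,+,-)\), i.e., \(\langle x,y\rangle =x_{1}y_{1}+x_{2}y_{2}-x_{3}y_{3}\)); it verifies that \(\exp ^{-1}_{x}y\) is tangent to \(\mathbb {H}^{2}\) at x, has norm \(d(x,y)\), and is mapped back to y along the geodesic \(\gamma (t)=(\cosh t)x+(\sinh t)v\):

```python
import numpy as np

def lorentz(u, v):
    # Lorentz inner product on R^3 with signature (+, +, -)
    return u[0]*v[0] + u[1]*v[1] - u[2]*v[2]

def lift(p1, p2):
    # embed a point of R^2 into H^2 = {x : <x, x> = -1, x3 > 0}
    return np.array([p1, p2, np.sqrt(1.0 + p1**2 + p2**2)])

def dist(x, y):
    return np.arccosh(-lorentz(x, y))

def exp_inv(x, y):
    c = lorentz(x, y)                      # c = -cosh d(x, y)
    return np.arccosh(-c) * (y + c*x) / np.sqrt(c**2 - 1.0)

def geodesic(x, v, t):
    # normalized geodesic starting at x with initial direction v/|v|
    u = v / np.sqrt(lorentz(v, v))         # v is spacelike: <v, v> > 0
    return np.cosh(t)*x + np.sinh(t)*u

x, y = lift(0.3, -0.5), lift(1.0, 2.0)
v = exp_inv(x, y)
assert abs(lorentz(x, v)) < 1e-12                      # v lies in T_x H^2
assert np.isclose(np.sqrt(lorentz(v, v)), dist(x, y))  # |v| = d(x, y)
assert np.allclose(geodesic(x, v, dist(x, y)), y)      # exp_x(v) = y
```

The three assertions express exactly the defining properties of the inverse exponential map used throughout the example.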

Let \(F: M\rightrightarrows TM\) and \(\phi : M\rightarrow \mathbb {R}\) be defined, respectively, by

$$\begin{aligned} F(x_{1},x_{2},x_{3}):=\{(0,0,t(1-x_{3}^{2})):t\in [1,2]\} \end{aligned}$$

and \(\phi (x_{1},x_{2},x_{3}):=4(x^{2}_{1}+x^{2}_{2}).\) Obviously, F is upper semicontinuous with compact values. Observing that \((\phi \circ \gamma )''(t)\ge 0\) for every geodesic \(\gamma \), we know that \(\phi \circ \gamma \) is convex, and hence \(\phi \) is convex on M by Definition 2.3. We consider the generalized mixed variational inequality on M, which consists in finding \(x^{*}\in M\) such that

$$\begin{aligned} ~\exists u^{*}\in F(x^{*}): \left\langle u^{*}, \exp ^{-1}_{x^{*}}y\right\rangle +\phi (y)-\phi (x^{*})\ge 0,\quad \forall y\in M. \end{aligned}$$
(7)

For any \(\alpha >0\) and \(u\in F(x)\), we define \(G_{\alpha }\) by

$$\begin{aligned} G_{\alpha }(x; u):=\sup _{y\in M}\left\{ \langle L_{x,y}u, \exp _{y}^{-1}x\rangle +\phi (x)-\phi (y)-\frac{1}{2\alpha }d^{2}(x,y)\right\} , \end{aligned}$$

and \(f_{\alpha }\) by \(f_{\alpha }(x):=\inf _{u\in F(x)}G_{\alpha }(x; u).\) One can check that the function \(f_{\alpha }(\cdot )\) is proper and lower semicontinuous on M. From Lemma 3.2 and Definition 2.8, it is easy to see that \(f_{\alpha }(\cdot )\) is a gap function of problem (7). Thus, the gap function \(f_{\alpha }(\cdot )\) transforms problem (7) into an unconstrained minimization problem, which can be conveniently solved by unconstrained minimization methods.

Example 3.2

Let \(M:=\{(y_{1}, y_{2})\in \mathbb {R}^{2}|y_{2}>0\}\) be the Poincaré plane, endowed with the Riemannian metric \(g=(g_{ij})\), where

$$\begin{aligned} g_{11}=g_{22}=\frac{1}{y_{2}^{2}}, \quad g_{12}=0,\quad \forall (y_{1},y_{2})\in M. \end{aligned}$$

It is well known that M is a Hadamard manifold with constant sectional curvature \(-1\). The geodesics of the Poincaré plane are the vertical semilines \(C_{a}: x=a, y>0\) and the semicircles \(C_{b,r}: (x-b)^{2}+y^{2}=r^{2}, y>0\). Taking \(x\in M\), we get \(T_{x}M=\mathbb {R}^{2}\). Let \(F: M\rightrightarrows TM\) and \(\phi : M\rightarrow \mathbb {R}\) be defined, respectively, by

$$\begin{aligned} F(x_{1},x_{2}):=\{(tx_{2}\sinh x_{1}, t-t\cosh x_{1} ):t\in [1,2]\} \end{aligned}$$

and \(\phi (x_{1},x_{2}):=\ln ^{2}\frac{x_{1}}{x_{2}}. \) Obviously, F is upper semicontinuous with compact values. One can check that \(\phi \) is convex on M (see page 87 in [19]). Thus, all assumptions in Theorem 3.1 are satisfied. However, Example 3.2 is not valid in Euclidean spaces, because \(\phi \) is non-convex in the usual sense.
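As a numerical aside (our own sketch, not part of the example), one can verify that the semicircles \(C_{b,r}\) are indeed geodesics of the Poincaré metric. With the unit-speed parameterization \(\gamma (s)=(b+r\tanh s,\ r/\cosh s)\), finite differences confirm the geodesic equations \(x''-\frac{2x'y'}{y}=0\) and \(y''+\frac{x'^{2}-y'^{2}}{y}=0\) of the metric \(g=(dx^{2}+dy^{2})/y^{2}\):

```python
import numpy as np

def gamma(s, b, r):
    # unit-speed parameterization of the semicircle (x - b)^2 + y^2 = r^2, y > 0
    return np.array([b + r*np.tanh(s), r/np.cosh(s)])

b, r, h = 2.0, 1.5, 1e-5
for s in np.linspace(-2.0, 2.0, 9):
    g0, g1, g2 = gamma(s - h, b, r), gamma(s, b, r), gamma(s + h, b, r)
    (dx, dy) = (g2 - g0) / (2*h)           # central first differences
    (ddx, ddy) = (g2 - 2*g1 + g0) / h**2   # central second differences
    x, y = g1
    assert abs((dx**2 + dy**2)/y**2 - 1.0) < 1e-6   # unit speed: g(gamma', gamma') = 1
    assert abs(ddx - 2*dx*dy/y) < 1e-4              # first geodesic equation
    assert abs(ddy + (dx**2 - dy**2)/y) < 1e-4      # second geodesic equation
```

The parameterization and tolerances are ours; the check only illustrates the geodesic description quoted in the example.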

Example 3.3

Let \(M=\mathbb {R}_{++}^{2}:=\{(x_{1}, x_{2})\in \mathbb {R}^2: x_{1}, x_{2}>0\}\), and let G be the \(2\times 2\) matrix defined by \(G(x):=(g_{ij}(x))\), where \(g_{ij}(x)=\frac{\delta _{ij}}{x_{i}x_{j}}\). We endow M with the Riemannian metric \(\ll \cdot , \cdot \gg \), defined by \(\ll u, v\gg :=\langle G(x)v, u\rangle \) for \(u,v\in T_{x}M\), so that M becomes a Riemannian manifold. For any \((x_{1}, x_{2}),(y_{1}, y_{2})\in M\), the Riemannian distance \(d: M\times M\rightarrow \mathbb {R}_{+}\) is given by

$$\begin{aligned} d((x_{1}, x_{2}), (y_{1}, y_{2}))=\left\| \left( \ln \frac{x_{1}}{y_{1}} ,\ln \frac{x_{2}}{y_{2}}\right) \right\| . \end{aligned}$$

For more details, see [32]. From Example 3.1 in [41], one has

$$\begin{aligned} \exp _{(x_{1}, x_{2})}^{-1}(y_{1}, y_{2})=(s_{1}, s_{2})=\left( x_{1}\ln \frac{y_{1}}{x_{1}}, x_{2}\ln \frac{y_{2}}{x_{2}}\right) . \end{aligned}$$
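To illustrate (a small numerical check of our own, not from the cited sources), the distance and the inverse exponential map above are consistent: the Riemannian norm of \(\exp _{x}^{-1}y\) at x equals \(d(x,y)\), and the exponential map, which here takes the form \(\exp _{x}(v)=(x_{1}e^{v_{1}/x_{1}}, x_{2}e^{v_{2}/x_{2}})\) (our computation, using the fact that the coordinatewise logarithm is an isometry of this manifold onto \(\mathbb {R}^{2}\)), inverts it:

```python
import numpy as np

def dist(x, y):
    # d((x1, x2), (y1, y2)) = ||(ln(x1/y1), ln(x2/y2))||
    return np.hypot(np.log(x[0]/y[0]), np.log(x[1]/y[1]))

def exp_inv(x, y):
    return np.array([x[0]*np.log(y[0]/x[0]), x[1]*np.log(y[1]/x[1])])

def norm_at(x, v):
    # Riemannian norm sqrt(<G(x)v, v>) with G(x) = diag(1/x1^2, 1/x2^2)
    return np.hypot(v[0]/x[0], v[1]/x[1])

x, y = np.array([0.5, 2.0]), np.array([3.0, 0.25])
v = exp_inv(x, y)
assert np.isclose(norm_at(x, v), dist(x, y))   # |exp_x^{-1} y|_x = d(x, y)
assert np.allclose(x * np.exp(v/x), y)         # exp_x(exp_x^{-1} y) = y
```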

Consider the function \(\phi : M\rightarrow \mathbb {R}\bigcup \{+\infty \}\) and the vector field F, defined by \(\phi (x_{1}, x_{2}):=\ln x_{2}-\ln x_{1}\) and \( F(x_{1}, x_{2}):=\left( -x_{1}\ln \frac{1}{x_{1}}+x_{1}, -x_{2}\ln \frac{1}{x_{2}}-x_{2}\right) , \) respectively. Note that \(\phi \) is convex on M, but not convex in the usual sense. Furthermore, it is easy to check that \(\phi \) is lower semicontinuous. Let \(\alpha >0\), \(\lambda >0\), and let the function \(\psi _{f,\alpha ,\lambda }: M\times M\rightarrow \mathbb {R}\bigcup \{+\infty \}\) be defined by

$$\begin{aligned} \psi _{f,\alpha ,\lambda }(x,z):= & {} \inf _{u\in F(z)}\left\{ \sup _{y\in M}\left\{ \langle L_{z,y}u, \exp _{y}^{-1}z\rangle +\phi (z)-\phi (y)- \frac{1}{2\alpha }d^{2}(z,y)\right\} \right\} \\&+\,\lambda d^{2}(x,z). \end{aligned}$$

For any \(x\in M\), one can check that \(\psi _{f,\alpha ,\lambda }(x,\cdot )\) attains its unique minimum at \((1,1)\in M\). Thus, all assumptions of Theorem 3.2 are satisfied.

4 Global Error Bounds

In this section, we present error bounds for problem (1) based on the gap functions \(f_{\alpha }(\cdot )\) and \(\varphi _{f,\alpha ,\lambda }(\cdot )\). First, we discuss how the gap function \(f_{\alpha }(\cdot )\) provides error bounds for (1) on M.

Lemma 4.1

Suppose \(x^*\in M\) is the unique solution of \(\mathrm{(1)}\), and F is \(\phi \)-strongly pseudomonotone with respect to \(x^*\) with modulus \(\mu >0\). If \(\alpha \) is chosen to satisfy \(\alpha >\frac{1}{2\mu }\), then

$$\begin{aligned} f_{\alpha }(x)\ge \left( \mu -\frac{1}{2\alpha }\right) d^{2}(x,x^*),\quad \forall x\in M. \end{aligned}$$

Proof

By Lemma 3.1(ii), for any \(x\in M\), there exists \(u_{x}\in F(x)\) such that \(f_{\alpha }(x)=G_{\alpha }(x;u_{x})\). Since F is \(\phi \)-strongly pseudomonotone with respect to \(x^*\) with modulus \(\mu >0\), it holds that

$$\begin{aligned} \left\langle u_{x}, \exp _{x}^{-1}x^*\right\rangle +\phi (x^*)-\phi (x)\le -\mu d^{2}(x,x^*). \end{aligned}$$

Therefore,

$$\begin{aligned} f_{\alpha }(x)= & {} G_{\alpha }(x;u_{x})\\\ge & {} \left\langle L_{x,x^*}u_{x}, \exp _{x^*}^{-1}x\right\rangle +\phi (x)-\phi (x^*)-\frac{1}{2\alpha } d^{2}(x,x^*)\\\ge & {} \mu d^{2}(x,x^*)-\frac{1}{2\alpha }d^{2}(x,x^*) =\left( \mu -\frac{1}{2\alpha }\right) d^{2}(x,x^*). \end{aligned}$$

This completes the proof. \(\square \)

Lemma 4.2

[46] For any \(x,y,z,m\in M\) with \(d(x,m)=d(y,m)=\frac{d(x,y)}{2}\), one has

$$\begin{aligned} d^{2}(z,m)\le \frac{1}{2}d^{2}(z,x)+\frac{1}{2}d^{2}(z,y)- \frac{1}{4}d^{2}(x,y). \end{aligned}$$
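Lemma 4.2 is the semi-parallelogram inequality of CAT(0) spaces. As a sanity check (our own sketch, not part of the development), it can be verified numerically on the hyperboloid model \(\mathbb {H}^{2}\) of Example 3.1, where one can check that the geodesic midpoint of x and y is \(m=(x+y)/\sqrt{-\langle x+y, x+y\rangle }\), with \(\langle \cdot ,\cdot \rangle \) the Lorentz product:

```python
import numpy as np

def lorentz(u, v):
    # Lorentz inner product on R^3 with signature (+, +, -)
    return u[0]*v[0] + u[1]*v[1] - u[2]*v[2]

def lift(p):
    # embed a point of R^2 into H^2 = {x : <x, x> = -1, x3 > 0}
    return np.array([p[0], p[1], np.sqrt(1.0 + p[0]**2 + p[1]**2)])

def dist(x, y):
    return np.arccosh(max(-lorentz(x, y), 1.0))  # clamp guards rounding below 1

def midpoint(x, y):
    s = x + y
    return s / np.sqrt(-lorentz(s, s))

rng = np.random.default_rng(0)
for _ in range(200):
    x, y, z = (lift(rng.normal(size=2)) for _ in range(3))
    m = midpoint(x, y)
    assert abs(dist(x, m) - dist(x, y)/2) < 1e-7
    assert abs(dist(y, m) - dist(x, y)/2) < 1e-7
    # semi-parallelogram inequality of Lemma 4.2
    assert dist(z, m)**2 <= 0.5*dist(z, x)**2 + 0.5*dist(z, y)**2 - 0.25*dist(x, y)**2 + 1e-7
```

In Euclidean space the inequality holds with equality (the parallelogram law); the negative curvature of \(\mathbb {H}^{2}\) makes it strict for non-collinear points.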

Theorem 4.1

Suppose that \(x^*\in M\) is the unique solution of problem \(\mathrm{(1)}\), and F is \(\phi \)-strongly pseudomonotone with respect to \(x^*\) with modulus \(\mu >0\). If \(\alpha \) is chosen to satisfy \(\alpha >\frac{1}{2\mu }\), then for any \(\lambda >0\), one has

$$\begin{aligned} \frac{1}{2}\min \left\{ \mu -\frac{1}{2\alpha }, \lambda \right\} d^{2}(x,x^*)\le \varphi _{f,\alpha ,\lambda }(x)\le \lambda d^{2}(x,x^*),\quad \forall x\in M. \end{aligned}$$
(8)

Proof

If \(x^*\) is the unique solution of (1), then there exists \(u\in F(x^*)\) such that

$$\begin{aligned} \left\langle u, \exp _{x^*}^{-1}y\right\rangle +\phi (y)-\phi (x^*)\ge 0,\quad \forall y\in M. \end{aligned}$$

Since \(\left\langle L_{x^*,y}u, \exp _{y}^{-1}x^*\right\rangle =-\left\langle u, \exp _{x^*}^{-1}y\right\rangle \), we have

$$\begin{aligned} \left\langle L_{x^*,y}u, \exp _{y}^{-1}x^*\right\rangle +\phi (x^*)-\phi (y)-\frac{1}{2\alpha }d^{2}(x^*,y)\le 0,\quad \forall y\in M, \end{aligned}$$

which yields that \(f_{\alpha }(x^*)\le 0.\) Combining with the nonnegativity of \(f_{\alpha }(\cdot )\), it follows that \(f_{\alpha }(x^*)=0\). Therefore,

$$\begin{aligned} \varphi _{f,\alpha ,\lambda }(x)= & {} \inf _{z\in M}\left\{ f_{\alpha }(z)+\lambda d^{2}(x,z)\right\} \le f_{\alpha }(x^*)+\lambda d^{2}(x,x^*) =\lambda d^{2}(x,x^*), \end{aligned}$$

which implies the right-hand inequality in (8).

Next, we prove the left-hand inequality in (8). It follows from Lemma 4.1 that

$$\begin{aligned} \varphi _{f,\alpha ,\lambda }(x)= & {} \inf _{z\in M}\left\{ f_{\alpha }(z)+\lambda d^{2}(x,z)\right\} \nonumber \\\ge & {} \inf _{z\in M}\left\{ \left( \mu -\frac{1}{2\alpha }\right) d^{2}(x^*,z)+\lambda d^{2}(x,z)\right\} \nonumber \\\ge & {} \min \left\{ \mu -\frac{1}{2\alpha },\lambda \right\} \inf _{z\in M}\left\{ d^{2}(x^*,z)+d^{2}(x,z)\right\} . \end{aligned}$$
(9)

By Lemma 4.2, we obtain that, for any \(z\in M\),

$$\begin{aligned} \frac{1}{2}d^{2}(x,x^*)\le d^{2}(x^*,z)+d^{2}(x,z), \end{aligned}$$

which in turn is equivalent to

$$\begin{aligned} \frac{1}{2}d^{2}(x,x^*)\le \inf _{z\in M}\left\{ d^{2}(x^*,z)+d^{2}(x,z)\right\} . \end{aligned}$$

This, together with (9), implies that

$$\begin{aligned} \varphi _{f,\alpha ,\lambda }(x) \ge \frac{1}{2}\min \left\{ \mu -\frac{1}{2\alpha },\lambda \right\} d^{2}(x,x^*). \end{aligned}$$

This completes the proof. \(\square \)

Remark 4.1

  1. (i)

    If \(M=\mathbb {R}^{n}\) and \(\phi (\cdot )=\delta _{K}(\cdot )\), then Lemma 4.1 and Theorem 4.1 reduce to Lemma 4.1 and Theorem 4.1 of [15], respectively;

  2. (ii)

    Lemma 4.1 and Theorem 4.1 can be regarded as a generalization of Lemma 4.1 and Theorem 4.1 of [16] from linear spaces to Hadamard manifolds, respectively.

Example 4.1

Let \(M, \phi , F\) be the same as in Example 3.3. Then, the vector field F is strongly monotone on M, and hence F is \(\phi \)-strongly pseudomonotone on M. Indeed, for any \((x_{1}, x_{2})\in M\) and \((y_{1}, y_{2})\in M\), we have

$$\begin{aligned}&\ll F(x_{1}, x_{2}), \exp _{(x_{1}, x_{2})}^{-1}(y_{1}, y_{2})\gg +\ll F(y_{1}, y_{2}), \exp _{(y_{1}, y_{2})}^{-1}(x_{1}, x_{2})\gg \\&\quad =-\left[ \left( \ln \frac{x_{1}}{y_{1}}\right) ^{2}+\left( \ln \frac{x_{2}}{y_{2}}\right) ^{2}\right] =-d^{2}((x_{1}, x_{2}),(y_{1}, y_{2})). \end{aligned}$$

Consequently, F is strongly monotone on M. One can check that \((1,1)\in M\) is the unique solution of problem (1). Let \(\alpha >\frac{1}{2}\). Then, all assumptions of Theorem 4.1 are satisfied.
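The identity displayed above, and the fact that \((1,1)\) solves the problem, can be checked numerically with a short script (our own sketch; the function names are ours, and F is the single-valued vector field of Example 3.3):

```python
import numpy as np

def F(x):
    # F(x1, x2) = (-x1 ln(1/x1) + x1, -x2 ln(1/x2) - x2)
    return np.array([x[0]*np.log(x[0]) + x[0], x[1]*np.log(x[1]) - x[1]])

def exp_inv(x, y):
    return np.array([x[0]*np.log(y[0]/x[0]), x[1]*np.log(y[1]/x[1])])

def inner(x, u, v):
    # << u, v >> at x, with G(x) = diag(1/x1^2, 1/x2^2)
    return u[0]*v[0]/x[0]**2 + u[1]*v[1]/x[1]**2

def d2(x, y):
    return np.log(x[0]/y[0])**2 + np.log(x[1]/y[1])**2

def phi(x):
    return np.log(x[1]) - np.log(x[0])

rng = np.random.default_rng(0)
xs = np.array([1.0, 1.0])                  # candidate solution x* = (1, 1)
for _ in range(100):
    x, y = rng.uniform(0.1, 5.0, 2), rng.uniform(0.1, 5.0, 2)
    # strong monotonicity identity with modulus 1
    lhs = inner(x, F(x), exp_inv(x, y)) + inner(y, F(y), exp_inv(y, x))
    assert np.isclose(lhs, -d2(x, y))
    # at x*: <<F(x*), exp^{-1}_{x*} y>> + phi(y) - phi(x*) = 0 >= 0
    assert np.isclose(inner(xs, F(xs), exp_inv(xs, y)) + phi(y) - phi(xs), 0.0)
```

The second assertion shows that the defining inequality of problem (1) holds with equality at \((1,1)\) for every sampled y, consistent with \((1,1)\) being the solution.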

5 Conclusions

In this paper, we have investigated gap functions and global error bounds for generalized mixed variational inequalities on Hadamard manifolds, which generalizes the results in [6, 14–16] from linear spaces to Hadamard manifolds. To the best of our knowledge, gap functions and global error bounds for generalized mixed variational inequalities on Hadamard manifolds had not been studied before. We remark that the techniques used in this paper do not work on general manifolds; for example, Proposition 2.3 does not hold on Riemannian manifolds with positive curvature. It would be interesting to investigate gap functions and global error bounds for variational inequalities on general Riemannian manifolds in future work.