1 Introduction

Variational principles provide sufficient conditions under which, after a suitable perturbation of the function being minimized, the perturbed minimization problem has a solution. In 1972, Ekeland [3] proved a variational principle in complete metric spaces. This principle is equivalent to the completeness of the metric space, see [10], as well as to Caristi’s fixed point theorem and Takahashi’s minimization principle, see [9]. McLinden [8] used Ekeland’s variational principle to derive some results of minimax type in Banach spaces.

In 2010, Kenderov and Revalski [6] proved a variational principle in completely regular topological spaces. Later, in [7], they gave a sufficient condition for the existence of a perturbation such that the perturbed problem is (Tykhonov) well-posed. Let us recall that the problem to minimize \(f:X \rightarrow \mathbb {R} \cup \{+\infty \}\), where X is a topological space, is called well-posed if it has a unique solution \(x_0 \in X\) and every minimizing sequence \(\{x_n\}_n \subset X\), i.e., every sequence with \(f(x_n) \rightarrow \inf _X f\), converges to \(x_0\); in other words, \(x_0\) is the strong minimum of f on X. In the same paper they obtained a variational principle for supinf problems.
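A minimization problem can have a unique solution and still fail to be well-posed; the following standard one-dimensional illustration (ours, not taken from the cited papers) shows the difference.

```latex
% X = \mathbb{R}; unique minimizer, yet not well-posed:
\[
  f(x) := \frac{x^{2}}{1+x^{4}}, \qquad f(x) > 0 = f(0) \ \text{for } x \ne 0,
\]
% so x_0 = 0 is the unique solution. However, x_n := n is a
% minimizing sequence, since
\[
  f(x_n) = \frac{n^{2}}{1+n^{4}} \longrightarrow 0 = \inf_{\mathbb{R}} f,
\]
% but x_n \not\rightarrow 0, so 0 is not a strong minimum.
```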

We prove in Theorem 2 that if the variational principle of Kenderov and Revalski holds in a topological space then the space is completely regular. Furthermore, in Theorem 4, we show that if the strong variational principle of Kenderov and Revalski holds in a topological space, then the space is completely regular and satisfies the first axiom of countability. We establish a minimax variational principle in completely regular topological spaces in Theorem 7. In the final section we give a sufficient condition for existence of a perturbation such that the perturbed saddle point problem is well-posed. At the end, we give a characterization of well-posed perturbed saddle point problems.

2 Characterization of Completely Regular Topological Spaces

A topological space X is said to be completely regular if it is Hausdorff and for every set \(A \subset X\) and every point \(x \in X \setminus \overline{A}\) (i.e., not belonging to the closure of A) there exists a continuous function \(f: X \rightarrow \mathbb {R}\) such that \(f(x) \notin \overline{f(A)}\), see [2]. The following assertion is often used as a definition of completely regular topological spaces: a Hausdorff topological space X is completely regular if for every point x and every closed set A with \(x \notin A\) there exists a continuous function \(f: X \rightarrow [0,1]\) such that \(f(x)=0\) and \(f(A)\equiv 1\). Let X be a completely regular topological space and \(f: X \rightarrow [-\infty , +\infty ]\) be an extended real-valued function. The domain of f, denoted by \(\textrm{dom}\,f\), consists of all points in X at which f has a finite value. The function f is proper if its domain is not empty. Denote by C(X) the space of all continuous and bounded real-valued functions defined on X. The space C(X) equipped with the supremum norm \(\Vert f\Vert _{\infty }:=\sup \{|f(x)|: x \in X\}\) is a real Banach space. We denote by \(\mathbb {R}_{+}\) the set of all nonnegative real numbers.
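For orientation, the classical metric-space case fits the second formulation directly; the separating function below is a standard construction (our illustration).

```latex
% Let (X,d) be a metric space, A \subset X closed, x_0 \notin A.
% Then \delta := \inf_{a \in A} d(x_0,a) > 0, and the function
\[
  f(x) := \min \Bigl\{ 1, \ \frac{d(x,x_0)}{\delta} \Bigr\}
\]
% is continuous, f(x_0) = 0, and f \equiv 1 on A, because
% d(x,x_0) \ge \delta for every x \in A. Hence every metric
% space is completely regular.
```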

Let us recall the variational principle of Kenderov and Revalski.

Theorem 1

[6] Let X be a completely regular topological space and \(f: X \rightarrow \mathbb {R} \cup \{+\infty \}\) be a proper lower semicontinuous function bounded from below. Let \(x_0 \in \textrm{dom}\,f\) and \(\varepsilon >0\) be such that \(f(x_0)<\inf _{X} f + \varepsilon \). Then, there exists a continuous bounded function \(h: X \rightarrow \mathbb R_+\) with \(h(x_0)=0\) and \(\Vert h\Vert _{\infty } < \varepsilon \) such that the function \(f+h\) attains its minimum on X at \(x_0\). Moreover, h can be chosen such that \(\Vert h\Vert _{\infty }=f(x_0)-\inf _{X}f\).

Assuming that the variational principle of Kenderov and Revalski holds in a Hausdorff topological space X, we prove that X is completely regular.

Theorem 2

Let X be a Hausdorff topological space. If for every function \(f: X \rightarrow \mathbb {R} \cup \{+\infty \}\) which is proper lower semicontinuous and bounded from below and for every \(x_0 \in \textrm{dom}\,f\), there exists a continuous bounded function \(h: X \rightarrow \mathbb R_+\), \(h(x_0)=0\), \(\Vert h\Vert _{\infty } = f(x_0) - \inf _X f\) such that the function \(f+h\) attains its minimum on X at \(x_0\), then X is a completely regular topological space.

Proof

Let \(x_0 \in X\), A be a closed subset of X, \(x_0 \notin A\) and let us consider the function

$$ f(x):=\left\{ \begin{array}{ll} 0 &\quad \text {if } x \in A,\\ 1 &\quad \text {otherwise.} \end{array}\right. $$

Since f is a proper lower semicontinuous function bounded from below, by the assumption there exists a continuous bounded function \(h: X \rightarrow \mathbb R_+\), \(h(x_0)=0\), \(\Vert h\Vert _{\infty } = 1\), such that the function \(f+h\) attains its minimum on X at \(x_0\). Let \(x \in A\) be arbitrary. Then

$$ f(x)+h(x)=h(x) \ge f(x_0)+h(x_0)=f(x_0)=1, $$

so \(h(x) \ge 1\). As \(\Vert h\Vert _{\infty } = 1\), it follows that \(h(A) \equiv 1\). Since also \(h(x_0)=0\), X is a completely regular topological space.\(\square \)

Let us note that the conclusion of Theorem 2 still holds if we make the assumptions only for the characteristic functions of closed sets instead of for all lower semicontinuous functions bounded from below.

Now we recall the strong variational principle of Kenderov and Revalski.

Theorem 3

[7] Let X be a completely regular space and \(f: X \rightarrow \mathbb {R} \cup \{+\infty \}\) be a proper lower semicontinuous function bounded from below. Let \(x_0 \in \textrm{dom}\,f\) have a countable local base in X and let \(\varepsilon >0\) be arbitrary. Then, there exists a continuous bounded function

$$ h: X \rightarrow \mathbb R_+, \quad h(x_0)=0,~ \Vert h\Vert _{\infty } < f(x_0)-\inf _{X}f+\varepsilon , $$

such that the function \(f+h\) attains its strong minimum on X at \(x_0\).

Assuming that the strong variational principle of Kenderov and Revalski holds in a Hausdorff topological space X, we will prove that X is a completely regular topological space that satisfies the first axiom of countability.

Theorem 4

Let X be a Hausdorff topological space. If for every function \(f: X \rightarrow \mathbb R \cup \{+\infty \}\) which is proper lower semicontinuous and bounded from below, for every \(x_0 \in \textrm{dom}\,f\) and for every \(\varepsilon >0\), there exists a continuous bounded function

$$ h: X \rightarrow \mathbb R_+, \quad h(x_0)=0,~ \Vert h\Vert _{\infty } < f(x_0)-\inf _{X}f+\varepsilon , $$

such that the function \(f+h\) attains its strong minimum on X at \(x_0\), then X is a completely regular topological space that satisfies the first axiom of countability.

Proof

Let \(x_0 \in X\), A be a subset of X, \(x_0 \notin \overline{A}\) and let

$$ f(x):=\left\{ \begin{array}{ll} 0 &\quad \text {if } x \in \overline{A},\\ 1 &\quad \text {otherwise.} \end{array}\right. $$

Since f is a proper lower semicontinuous function bounded from below, by the assumption there exists a continuous bounded function \(h: X \rightarrow \mathbb R_+\), \(h(x_0)=0\), such that the function \(f+h\) attains its strong minimum on X at \(x_0\). Let \(x \in \overline{A}\) be arbitrary. Then

$$ h(x)= f(x)+h(x) > f(x_0)+h(x_0)=f(x_0)=1, $$

so \(h(x) > 1\) for all \(x \in \overline{A}\). Therefore, every element of \(\overline{h(\overline{A})}\) is at least 1. Since \(h(x_0)=0\), we get \(h(x_0) \notin \overline{h(\overline{A})}\supset \overline{h(A)}\), and X is a completely regular topological space.

Now we will consider the constant function \(f(x)=0\) for all \(x \in X\) and arbitrary fixed \(x_0\in X\). By the assumption there exists a continuous bounded function h which attains its strong minimum at \(x_0\) and \(\min _X h=h(x_0)=0\). Consider the sets

$$ L_n:=\{x \in X: h(x)<1/n\}, \quad n \ge 1. $$

We will prove that the sets \(L_n\), \(n \ge 1\), form a countable local base at \(x_0\). First, observe that \(\{x_0\} = \cap _{n=1}^{\infty } L_n\): if \(x \in \cap _{n=1}^{\infty } L_n\), then \(h(x)=0=\min _X h\), and since \(x_0\) is the strong (hence unique) minimum of h, \(x=x_0\). Now take an arbitrary neighbourhood U of \(x_0\) and suppose, to the contrary, that for each n there exists a point \(x_n \in L_n\) with \(x_n \notin U\). Then \(\{x_n\}_n\) is a minimizing sequence for h, so it converges to \(x_0\), and hence there exists \(n_0\) such that \(x_n \in U\) for all \(n \ge n_0\), a contradiction. Therefore, some \(L_n\) is contained in U, and X is a completely regular topological space with a countable local base at each point.\(\square \)

3 Variational Principle for Saddle Points

Let X and Y be topological spaces and \(f: X \times Y \rightarrow [-\infty , +\infty ]\) be an extended real-valued function. A solution to the supinf problem

$$ \sup _{x \in X} \inf _{y \in Y} f(x,y) $$

is any point \((x_0,y_0)\) such that

$$ f(x_0,y_0)=\inf _{y\in Y} f(x_0,y)=\sup _{x\in X}\inf _{y\in Y} f(x,y), $$

see, e.g., [7]. Analogously, a solution to the infsup problem

$$ \inf _{y \in Y} \sup _{x \in X} f(x,y), $$

is any point \((x_0,y_0)\) such that

$$ f(x_0,y_0)=\sup _{x\in X} f(x,y_0)=\inf _{y\in Y}\sup _{x\in X} f(x,y). $$

For a given function \(f:X \times Y \rightarrow [-\infty , +\infty ]\), denote

$$\begin{aligned} v_f(x) := \inf _{y \in Y} f(x,y), \end{aligned}$$

and by \(V_f\) denote the optimal value of the supinf problem, i.e., \(V_f:= \sup _{x\in X}v_f(x)\). Also, denote

$$\begin{aligned} w_f(y) := \sup _{x \in X} f(x,y), \end{aligned}$$
(1)

and by \(W_f\) denote the optimal value of the infsup problem, i.e., \(W_f:= \inf _{y\in Y}w_f(y)\).

Let

$$ {\varDelta }_f:= W_f-V_f=\inf _{y \in Y} \sup _{x \in X}f(x,y)-\sup _{x \in X} \inf _{y \in Y}f(x,y). $$

It is clear that \({\varDelta }_f\ge 0\).
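The inequality may be strict, as the following standard example (our illustration) shows.

```latex
\[
  f(x,y) := (x-y)^{2}, \qquad X = Y = [0,1].
\]
% For each x one may take y = x, so
\[
  \sup_{x \in X} \inf_{y \in Y} (x-y)^{2} = 0,
\]
% while for each y the supremum over x is attained at an endpoint:
\[
  \inf_{y \in Y} \sup_{x \in X} (x-y)^{2}
  = \inf_{y \in [0,1]} \max\{ y^{2}, (1-y)^{2} \} = \tfrac{1}{4},
\]
% so \varDelta_f = 1/4 > 0.
```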

A point \((x_0,y_0)\) is said to be a saddle point of f on \(X \times Y\) if

$$ f(x,y_0) \le f(x_0,y_0) \le f(x_0,y),\quad \forall x \in X, \forall y \in Y, $$

which is equivalent to

$$ V_f=v_f(x_0)= f(x_0,y_0)=w_f(y_0)=W_f. $$

The saddle point problem for a given function \(f:X \times Y \rightarrow [-\infty , +\infty ]\) is to find a saddle point of f on \(X\times Y\).

Obviously, if \((x_0,y_0)\) is a saddle point of f on \(X \times Y\), then \((x_0,y_0)\) is a solution to both infsup and supinf problems for f, they have the same optimal values equal to \(f(x_0,y_0)\), and \({\varDelta }_f=0\).

Note that it is possible that \({\varDelta }_f=0\) but the function f has no saddle point on \(X \times Y\). For example, consider the function \(f(x,y):=x-y\) defined on \(X \times Y\), where \(X:=(0,1)\), \(Y:=(0,1]\). It is easy to check that \({\varDelta }_f=0\) but f has no saddle point on \(X \times Y\) since \((1,1) \notin X \times Y\).
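For completeness, here is the computation behind this example.

```latex
\[
  v_f(x) = \inf_{y \in (0,1]} (x-y) = x-1, \qquad
  V_f = \sup_{x \in (0,1)} (x-1) = 0,
\]
\[
  w_f(y) = \sup_{x \in (0,1)} (x-y) = 1-y, \qquad
  W_f = \inf_{y \in (0,1]} (1-y) = 0,
\]
% hence \varDelta_f = W_f - V_f = 0. A saddle point (x_0,y_0) would
% require v_f(x_0) = V_f = 0, i.e., x_0 = 1 \notin X = (0,1).
```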

We will make the following assumptions for the function \(f:X \times Y \rightarrow [-\infty , +\infty ]\):

(A1) for any \(y \in Y\) the function \(f(\cdot , y)\) is upper semicontinuous;

(A2) the function \(v_f\) is bounded from above on X and proper as a function with values in \(\mathbb {R} \cup \{-\infty \}\);

(A3) for any \(x \in X\) the function \(f(x,\cdot )\) is lower semicontinuous;

(A4) the function \(w_f\) is bounded from below on Y and proper as a function with values in \(\mathbb {R} \cup \{+\infty \}\).

It is easy to see that if the function f satisfies the assumptions (A1)–(A4), then \({\varDelta }_f\) is finite.
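The verification of this finiteness is short; we record it here (our completion of the omitted step).

```latex
% By (A2), v_f is bounded above and \mathrm{dom}\, v_f \ne \emptyset, so
\[
  -\infty < v_f(\bar{x}) \le V_f = \sup_{x \in X} v_f(x) < +\infty
  \quad \text{for any } \bar{x} \in \mathrm{dom}\, v_f.
\]
% Symmetrically, (A4) gives
\[
  -\infty < W_f = \inf_{y \in Y} w_f(y) \le w_f(\bar{y}) < +\infty.
\]
% Together with V_f \le W_f this shows that
% \varDelta_f = W_f - V_f \in [0, +\infty) is finite.
```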

Let us recall the supinf variational principle of Kenderov and Revalski.

Theorem 5

[7] Let X and Y be completely regular topological spaces and \(f:X \times Y \rightarrow [-\infty , +\infty ]\) be an extended real-valued function which satisfies the assumptions (A1)–(A2). Let \(\varepsilon >0\) and \(x_0 \in X\) be such that \(v_f(x_0)>\sup _{x \in X} v_f(x) - \varepsilon \), and let \(\delta >0\) and \(y_0 \in Y\) be such that \(f(x_0,y_0)<\inf _{y \in Y} f(x_0,y)+\delta \). Then, there exist continuous bounded functions \(q: X \rightarrow \mathbb R_{+}\) and \(p: Y \rightarrow \mathbb R_{+}\), such that \(q(x_0)=p(y_0)=0\), \(\Vert q\Vert _{\infty } < \varepsilon \), \(\Vert p\Vert _{\infty } < \delta \) and the supinf problem

$$\begin{aligned} \sup _{x \in X} \inf _{y \in Y} \{f(x,y)-q(x)+p(y)\} \end{aligned}$$
(2)

has a solution at \((x_0,y_0)\).

Note that, assuming (A3), the function \(w_f\) defined by (1) is lower semicontinuous, so one can follow the lines of the proof of the above result in [7] to obtain the following result.

Theorem 6

Let X and Y be completely regular topological spaces and \(f:X \times Y \rightarrow [-\infty , +\infty ]\) be an extended real-valued function which satisfies the assumptions (A3)–(A4). Let \(\varepsilon >0\) and \(y_0 \in Y\) be such that \(w_f(y_0)<\inf _{y \in Y} w_f(y) + \varepsilon \), and let \(\delta >0\) and \(x_0 \in X\) be such that \(f(x_0,y_0)>\sup _{x \in X} f(x,y_0)-\delta \). Then, there exist continuous bounded functions \(h: X \rightarrow \mathbb R_{+}\) and \(g: Y \rightarrow \mathbb R_{+}\), such that \(h(x_0)=g(y_0)=0\), \(\Vert h\Vert _{\infty } < \delta \), \(\Vert g\Vert _{\infty } < \varepsilon \) and the infsup problem

$$\begin{aligned} \inf _{y \in Y} \sup _{x \in X} \{f(x,y)-h(x)+g(y)\} \end{aligned}$$
(3)

has a solution at \((x_0,y_0)\).

We will prove the following minimax variational principle in completely regular topological spaces.

Theorem 7

Let X and Y be completely regular topological spaces and \(f:X \times Y \rightarrow [-\infty , +\infty ]\) be an extended real-valued function which satisfies the assumptions (A1)–(A4). Let \(\varepsilon '>0\), \(\varepsilon ''>0\), \(x_0 \in X\) and \(y_0 \in Y\) be such that

$$\begin{aligned} v_f(x_0)&> \sup _{x \in X} v_f(x) - \varepsilon ',\\ w_f(y_0)&< \inf _{y \in Y} w_f(y) + \varepsilon ''. \end{aligned}$$

Then, there exist continuous bounded functions \(k: X \rightarrow \mathbb R_{+}\) and \(r: Y \rightarrow \mathbb R_{+}\), such that \(k(x_0)=r(y_0)=0\), \(\Vert k\Vert _{\infty } < 2\varepsilon '+\varepsilon ''+{\varDelta }_f\), \(\Vert r\Vert _{\infty } < \varepsilon '+2\varepsilon ''+{\varDelta }_f\), and the function \(f(x,y)-k(x)+r(y)\) has a saddle point at \((x_0,y_0)\).

Proof

We begin with two chains of inequalities:

$$\begin{aligned} f(x_0,y_0)-\varepsilon '' \le \sup _{x \in X} f(x,y_0)-\varepsilon ''=w_f(y_0)-\varepsilon '' < \inf _Y w_f= \inf _{y \in Y}\sup _{x \in X}f(x,y) \end{aligned}$$
(4)

and

$$\begin{aligned} f(x_0,y_0)+\varepsilon ' \ge \inf _{y \in Y} f(x_0,y)+\varepsilon '=v_f(x_0)+\varepsilon ' > \sup _X v_f= \sup _{x \in X}\inf _{y \in Y}f(x,y). \end{aligned}$$
(5)

Combining the equality

$$ \inf _{y \in Y} \sup _{x \in X}f(x,y) - {\varDelta }_f=\sup _{x \in X} \inf _{y \in Y}f(x,y), $$

with (4) and (5), we get

$$ f(x_0,y_0)- \varepsilon '' - {\varDelta }_f< \inf _{y \in Y}\sup _{x \in X}f(x,y) - {\varDelta }_f = \sup _{x \in X}\inf _{y \in Y}f(x,y)<\inf _{y \in Y} f(x_0,y)+\varepsilon ', $$

hence

$$ f(x_0,y_0)<\inf _{y \in Y} f(x_0,y)+\varepsilon '+\varepsilon ''+{\varDelta }_f. $$

Analogously, we get

$$ f(x_0,y_0)>\sup _{x \in X} f(x,y_0)-\varepsilon '-\varepsilon ''-{\varDelta }_f. $$

Now we apply Theorem 5 to the point \(x_0\) for \(\varepsilon =\varepsilon '\), and to the point \(y_0\) for \(\delta =\varepsilon '+\varepsilon ''+{\varDelta }_f\). Therefore, there exist continuous bounded functions \(q: X \rightarrow \mathbb R_{+}\), and \(p: Y \rightarrow \mathbb R_{+}\) such that \(q(x_0)=p(y_0)=0\), \(\Vert q\Vert _{\infty } < \varepsilon '\), \(\Vert p\Vert _{\infty } < \varepsilon '+\varepsilon ''+{\varDelta }_f\) and the supinf problem (2) has a solution at \((x_0,y_0)\), i.e.,

$$\begin{aligned} \sup _{x \in X} \inf _{y \in Y} \{f(x,y)-q(x)+p(y)\}=\inf _{y \in Y} \{f(x_0,y)-q(x_0)+p(y)\}=f(x_0,y_0). \end{aligned}$$
(6)

Furthermore, we apply Theorem 6 to the point \(y_0\) for \(\varepsilon =\varepsilon ''\), and to the point \(x_0\) for \(\delta =\varepsilon '+\varepsilon ''+{\varDelta }_f\). Hence, there exist continuous bounded functions \(h: X \rightarrow \mathbb R_{+}\), \(g: Y \rightarrow \mathbb R_{+}\) such that \(h(x_0)=g(y_0)=0\), \(\Vert h\Vert _{\infty } < \varepsilon '+\varepsilon ''+{\varDelta }_f\), \(\Vert g\Vert _{\infty } < \varepsilon '\) and the infsup problem (3) has a solution at \((x_0,y_0)\), i.e.,

$$\begin{aligned} \inf _{y \in Y} \sup _{x \in X} \{f(x,y)-h(x)+g(y)\}=\sup _{x \in X} \{f(x,y_0)-h(x)+g(y_0)\}=f(x_0,y_0). \end{aligned}$$
(7)

Let \(x \in X\), \(y \in Y\) be arbitrary. As \(q(x) \ge 0\), \(g(y) \ge 0\), from (7) and (6) it follows that

$$\begin{aligned} f(x,y_0)-h(x)-q(x)+p(y_0)+g(y_0)&\le f(x,y_0)-h(x)+g(y_0)\\&\le f(x_0,y_0)\le f(x_0,y)+p(y)-q(x_0)\\&\le f(x_0,y)+p(y)+g(y)-q(x_0)-h(x_0). \end{aligned}$$

Setting \(k(x):=h(x)+q(x)\) and \(r(y):=p(y)+g(y)\) we get the conclusion of the theorem.\(\square \)

Note that whenever a function f satisfies the assumptions (A1)–(A4), for any \(\varepsilon '>0\) and \(\varepsilon ''>0\) one can always find \(x_0\) and \(y_0\) satisfying the assumptions of Theorem 7.

Definition 1

For a function \(f:X\times Y\rightarrow [-\infty ,+\infty ]\) with \({\varDelta }_f=0\) and for \(\varepsilon >0\), we say that \((x_0,y_0) \in X \times Y\) is an \(\varepsilon \)-saddle point for f if

$$\begin{aligned} v_f(x_0)&> \sup _{x \in X} v_f(x) - \varepsilon /3,\\ w_f(y_0)&< \inf _{y \in Y} w_f(y) + \varepsilon /3. \end{aligned}$$

If f satisfies (A1)–(A4) and \({\varDelta }_f=0\), then Theorem 7 easily yields a variational principle, which states that we can perturb the function f by functions with arbitrarily small norms in such a way that the perturbed function has a saddle point.

Theorem 8

Let X and Y be completely regular topological spaces, \(f:X\times Y\rightarrow [-\infty ,+\infty ]\) satisfy (A1)–(A4) and \({\varDelta }_f=0\). Let \(\varepsilon >0\), and \((x_0,y_0) \in X \times Y\) be an \(\varepsilon \)-saddle point for f. Then, there exist continuous bounded functions \(k: X \rightarrow \mathbb R_{+}\), and \(r: Y \rightarrow \mathbb R_{+}\), such that \(k(x_0)=r(y_0)=0\), \(\Vert k\Vert _{\infty } < \varepsilon \), \(\Vert r\Vert _{\infty } < \varepsilon \), and the function \(f(x,y)-k(x)+r(y)\) has a saddle point at \((x_0,y_0)\).

If f is an additively separable function, i.e., \(f(x,y)=f_1(x)+f_2(y)\), then \({\varDelta }_{f+k+r}=0\) for every \(k \in C(X)\) and \(r \in C(Y)\). The following result constitutes a dense variational principle for the saddle point problem.
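The claim for separable functions reduces to interchanging two independent optimizations, which we spell out for clarity.

```latex
\[
  \sup_{x \in X} \inf_{y \in Y} \{ f_1(x) + f_2(y) \}
  = \sup_{x \in X} f_1(x) + \inf_{y \in Y} f_2(y)
  = \inf_{y \in Y} \sup_{x \in X} \{ f_1(x) + f_2(y) \},
\]
% and the same identity applied to f_1 + k and f_2 + r gives
% \varDelta_{f+k+r} = 0 for every k \in C(X) and r \in C(Y).
```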

Theorem 9

Let X and Y be completely regular topological spaces, let \(f:X\times Y\rightarrow [-\infty ,+\infty ]\) satisfy the assumptions (A1)–(A4), and \(f(x,y)=f_1(x)+f_2(y)\). Then, the set \(\{(k,r) \in C(X) \times C(Y): \text { the function }f(x,y)+k(x)+r(y), (x,y) \in X \times Y\text { has a saddle point}\}\) is a dense subset of \(C(X) \times C(Y)\).

4 Well-posedness of Saddle Point Problems

Let \(f: X \times Y \rightarrow [-\infty ,+\infty ]\), where X and Y are completely regular topological spaces. In [5] (in the constrained case) a sequence of points \(\{(x_n, y_n)\}_n \subset X \times Y\) is called optimizing for the supinf problem for f if \(v_f(x_n) \rightarrow V_f\) and \(f(x_n,y_n) \rightarrow V_f\). Analogously, \(\{(x_n, y_n)\}_n\) is optimizing for the infsup problem for f if \(w_f(y_n) \rightarrow W_f\) and \(f(x_n,y_n) \rightarrow W_f\).

Here we consider the following

Definition 2

A sequence \(\{(x_n, y_n)\}_n \subset X \times Y\) is called optimizing for the saddle point problem for \(f: X \times Y \rightarrow [-\infty , +\infty ]\) if

1. \(v_f(x_n) \rightarrow V_f\);

2. \(w_f(y_n) \rightarrow W_f\);

3. \({\varDelta }_f=0\).

Let us note that in [1] maximinimizing sequences are considered. A sequence \(\{(x_n, y_n)\}_n \subset X \times Y\) is called maximinimizing for the saddle point problem for f if \(w_f(y_n)-v_f(x_n) \rightarrow 0\) as \(n \rightarrow \infty \). Since \(v_f(x_n)=\inf _{y \in Y} f(x_n,y) \le f(x_n,y_n) \le \sup _{x \in X} f(x,y_n)=w_f(y_n)\), it is clear that every optimizing sequence for the saddle point problem is maximinimizing. Moreover, any optimizing sequence for the saddle point problem for f is optimizing for the supinf and infsup problems for f.

The supinf (resp. infsup) problem for the function f is called well-posed if every sequence which is optimizing for it converges to its unique solution.

Moreover, the supinf problem for the function f is sup-well-posed if the problem \(\sup _{x \in X} v_f(x)\) is well-posed, see [7]. In such a case, the unique point realizing the maximum is called a sup-solution.

Analogously, the infsup problem for the function f is inf-well-posed if the problem \(\inf _{y \in Y} w_f(y)\) is well-posed; the unique point realizing the minimum is called an inf-solution.

Definition 3

The saddle point problem for \(f: X \times Y \rightarrow [-\infty , +\infty ]\) is well-posed if every sequence which is optimizing for it converges to the unique solution of the problem.

The saddle point problem for a function f is well-posed if and only if the supinf and infsup problems for f are sup-well-posed and inf-well-posed, respectively, and \({\varDelta }_f=0\). Well-posedness of the saddle point problem for f does not entail well-posedness of the corresponding supinf and infsup problems, see Remark 3-1 in [1].

If f is such that \({\varDelta }_f=0\), then:

$$\begin{aligned}&\{(x_n,y_n)\}_n\text { is an optimizing sequence for the saddle point problem}\\&\qquad \Updownarrow \\&\{(x_n,y_n)\}_n\text { is an optimizing sequence for both the supinf and infsup problems,} \end{aligned}$$

and, moreover,

$$\begin{aligned}&\text {the supinf and infsup problems for } f \text { are well-posed}\\&\qquad \Downarrow \\&\text {the saddle point problem for } f \text { is well-posed}\\&\qquad \Updownarrow \\&\text {the supinf problem is sup-well-posed and the infsup problem is inf-well-posed.} \end{aligned}$$

The next result is about perturbations for which the perturbed saddle point problem is well-posed in the sense of Definition 3.

Theorem 10

Let X, Y and \(f: X \times Y \rightarrow [-\infty , +\infty ]\) satisfy the assumptions of Theorem 7 and let k and r be the functions from its conclusion. Suppose that \(x_0\) has a countable local base in X and \(y_0\) has a countable local base in Y. Then, for arbitrary \(\delta >0\) there exist continuous bounded functions \(k': X \rightarrow \mathbb R_{+}\) and \(r': Y \rightarrow \mathbb R_{+}\), such that \(k'(x_0)=r'(y_0)=0\), \(\Vert k'\Vert _{\infty } < \delta \), \(\Vert r'\Vert _{\infty } < \delta \), and for the function \(g(x,y):=f(x,y)-k(x)+r(y)-k'(x)+r'(y)\),

(a) the supinf problem is sup-well-posed with unique sup-solution at \(x_0\);

(b) the infsup problem is inf-well-posed with unique inf-solution at \(y_0\);

(c) the saddle point problem is well-posed with unique solution at \((x_0,y_0)\).

Proof

We follow the idea of the proof of Proposition 2.10 in [7]. Consider countable local bases \(\{U_n, n \ge 1\}\) of \(x_0\) and \(\{V_n, n \ge 1\}\) of \(y_0\), consisting of nested open neighbourhoods. For each fixed \(n \ge 1\), let \(h_n : X \rightarrow [0, 1]\) and \(g_n : Y \rightarrow [0, 1]\) be continuous functions such that

$$\begin{aligned} h_n(x_0)&=0\quad \text { and }\quad h_n(X \setminus U_n) \equiv 1,\\ g_n(y_0)&=0 \quad \text { and }\quad g_n(Y \setminus V_n) \equiv 1. \end{aligned}$$

Define the functions

$$\begin{aligned} k'(x)&:= \delta \sum _{n=1}^\infty \frac{1}{2^n}h_n(x), \quad x \in X,\\ r'(y)&:= \delta \sum _{n=1}^\infty \frac{1}{2^n}g_n(y), \quad y \in Y. \end{aligned}$$

The functions \(k'\) and \(r'\) are continuous and bounded with values in \([0,\delta ]\), and \(k'(x)>0\), \(r'(y)>0\) for all \(x \ne x_0\) and \(y \ne y_0\). Furthermore, since \(k(x_0)=k'(x_0)=0\), \(r(y_0)=r'(y_0)=0\) and \((x_0,y_0)\) is a saddle point of \(f(x,y)-k(x)+r(y)\), it holds that for all \(x \in X\) and \(y \in Y\), \(x \ne x_0\) and \(y \ne y_0\),

$$\begin{aligned} f(x,y_0)-k(x)-k'(x)+r(y_0)+r'(y_0)&< f(x_0,y_0)-k(x_0)-k'(x_0)+r(y_0)+r'(y_0)\\&< f(x_0,y)-k(x_0)-k'(x_0)+r(y)+r'(y), \end{aligned}$$

and, therefore, \((x_0,y_0)\) is a solution for the saddle point problem, supinf problem and infsup problem for the function \(g(x,y)\) in \(X \times Y\). Observe that \((x_0,y_0)\) is also a solution for the saddle point problem, supinf problem and infsup problem for the functions \(f(x,y)-k(x)+r(y)+r'(y)\) and \(f(x,y)-k(x)-k'(x)+r(y)\) in \(X \times Y\).

To prove (a) let us consider the functions

$$\begin{aligned} v_{f-k+r+r'}(x)&:= \inf _Y \{f(x,y)-k(x)+r(y)+r'(y)\},\\ v_{g}(x)&:= \inf _Y \{f(x,y)-k(x)-k'(x)+r(y)+r'(y)\}. \end{aligned}$$

Having in mind that \(k(x_0)=k'(x_0)=0\) and \(r(y_0)=r'(y_0)=0\) and that \((x_0,y_0)\) is a solution for the supinf problems for the functions \(f(x,y)-k(x)+r(y)+r'(y)\) and \(g(x,y)\) in \(X \times Y\), we have that

$$ v_{f-k+r+r'}(x_0)=v_{g}(x_0)=f(x_0,y_0). $$

Suppose that \(\{x_n\}_n\) is an optimizing sequence for the problem \(\sup _X v_{g}\) and that \(\{x_n\}_n\) does not converge to \(x_0\). Then, there would be some integer \(n_0 \ge 1\) and a subsequence of \(\{x_n\}_n\) (not renumbered) such that \(x_n \notin U_{n_0}\) for every n. On the other hand, \(\sup _X v_{g} = \sup _X v_{f-k+r+r'}\) and \(v_{g}(x) = v_{f-k+r+r'}(x)-k'(x) \le v_{f-k+r+r'}(x)\), so, as \(k'\) has nonnegative values, \(k'(x_n) \rightarrow 0\). But \(x_n \notin U_{n_0}\) implies \(x_n \notin U_m\) for all \(m \ge n_0\) (the neighbourhoods are nested), whence \(k'(x_n)\ge \delta \sum _{m=n_0}^\infty (1/2^m)>0\) for all n, a contradiction.

The proof of (b) is similar to the proof of (a).

To establish (c) let us observe that from the proofs of (a) and (b) it follows that the supinf and infsup problems for the function g are sup-well-posed and inf-well-posed with unique solutions at \(x_0\) and \(y_0\), respectively. Moreover \({\varDelta }_{g}=0\) and, therefore, the saddle point problem for g is well-posed with unique solution at \((x_0,y_0)\).\(\square \)

For a given function \(f: X \times Y \rightarrow [-\infty , +\infty ]\) denote by \(S_f: C(X) \times C(Y) \rightrightarrows X \times Y\) the correspondence which assigns to every couple of functions \(s \in C(X)\) and \(u \in C(Y)\) the (possibly empty) set of solutions \((x_0,y_0)\) to the saddle point problem for the function \(f(x,y)+s(x)+u(y)\). We follow the proof of Theorem 3 in [4] to get the following result.

Theorem 11

Let X and Y be completely regular topological spaces and \(f:X \times Y \rightarrow [-\infty ,+\infty ]\) be an extended real-valued function which satisfies (A1)–(A4). The mapping \(S_f\) is single-valued and upper semicontinuous at \((s,u) \in C(X) \times C(Y)\) if and only if the saddle-point problem for the function \(f(x,y)+s(x)+u(y)\) is well-posed.

Proof

\(\Rightarrow )\) Let \(\{(x_n,y_n)\}_n \subset X \times Y\) be an optimizing sequence for the saddle point problem for \(f(x,y)+s(x)+u(y)\). According to the definition of such a sequence, it holds that:

1. \(v_{f+s+u}(x_n) \rightarrow V_{f+s+u}\);

2. \(w_{f+s+u}(y_n) \rightarrow W_{f+s+u}\);

3. \({\varDelta }_{f+s+u}=0\).

Since \(S_f(s,u)=\{(x_0,y_0)\}\), the function \(f(x,y)+s(x)+u(y)\) has a saddle point, so \({\varDelta }_{f+s+u}=0\). Suppose that \(\{(x_n,y_n)\}_n\) does not converge to \((x_0,y_0)\). Then, there exist open sets U and V with \(x_0 \in U\), \(y_0 \in V\), and a subsequence (not renumbered) \(\{(x_n,y_n)\}_n\) such that \((x_n,y_n) \notin U \times V\) for all n.

From the upper semicontinuity of \(S_f\) at (s,u) there exists \(\varepsilon >0\), such that \(S_f(s',u') \subset U \times V\) for all \(s' \in C(X)\) with \(\Vert s'-s\Vert _\infty <\varepsilon \) and all \(u' \in C(Y)\) with \(\Vert u'-u\Vert _\infty <\varepsilon \).

Let n be so large that \(V_{f+s+u}-v_{f+s+u}(x_n) < \varepsilon /3\) and \(w_{f+s+u}(y_n)-W_{f+s+u} < \varepsilon /3\), i.e., \((x_n,y_n)\) is an \(\varepsilon \)-saddle point for the function \(f(x,y)+s(x)+u(y)\). Applying Theorem 8, one obtains functions \(s_n \in C(X)\) and \(u_n \in C(Y)\) such that \(\Vert s_n\Vert _{\infty } < \varepsilon \), \(\Vert u_n\Vert _{\infty } < \varepsilon \) and \((x_n,y_n)\) is a saddle point of the function \(f(x,y)+s(x)-s_n(x)+u(y)+u_n(y)\). But \(\Vert (s-s_n)-s\Vert _{\infty }<\varepsilon \), \(\Vert (u+u_n)-u\Vert _{\infty }<\varepsilon \) and \((x_n,y_n) \in S_f(s-s_n,u+u_n)\), which contradicts \((x_n,y_n) \notin U \times V\).

\(\Leftarrow )\) Suppose that the saddle point problem for the function \(f(x,y)+s(x)+u(y)\) is well-posed with unique solution \((x_0,y_0)\). Hence, \({\varDelta }_{f+s+u}=0\) and \(S_f(s,u)\) is single-valued. Suppose that \(S_f\) is not upper semicontinuous at (s,u). Then there would exist open neighbourhoods U of \(x_0\) and V of \(y_0\), respectively, such that for every \(n \ge 1\) there would be \(s_n \in C(X)\) and \(u_n \in C(Y)\) with \(\Vert s_n-s\Vert _{\infty } <1/n\) and \(\Vert u_n-u\Vert _{\infty } <1/n\), such that \(S_f(s_n,u_n)\not \subset U \times V\), i.e., there is \((x_n,y_n) \in S_f(s_n,u_n) \setminus (U \times V)\) for every \(n\in \mathbb {N}\).

Observe that \({\varDelta }_{f+s_n+u_n}=0\), and

$$\begin{aligned} v_{f+s_n+u_n}(x_n)&:= \inf _{y \in Y}\{f(x_n,y)+s_n(x_n)+u_n(y)\}\\&= \sup _{x \in X} \{f(x,y_n)+s_n(x)+u_n(y_n)\}=:w_{f+s_n+u_n}(y_n). \end{aligned}$$

As \(\{s_n\}_n\) and \(\{u_n\}_n\) converge uniformly on X and Y to s and u, respectively, then for every \(\varepsilon >0\) we can find \(n_0\), such that, for every \(n \ge n_0\):

$$\begin{aligned} |v_{f+s_n+u_n}(x_n)-v_{f+s+u}(x_n)|&< \varepsilon ,\\ |w_{f+s_n+u_n}(y_n)-w_{f+s+u}(y_n)|&< \varepsilon . \end{aligned}$$

Therefore, \(w_{f+s+u}(y_n)-v_{f+s+u}(x_n) \rightarrow 0\), and since \(v_{f+s+u}(x_n) \le V_{f+s+u} = W_{f+s+u} \le w_{f+s+u}(y_n)\), both sequences converge to the common optimal value. Hence \(\{(x_n,y_n)\}_n\) is an optimizing sequence for the saddle point problem for \(f(x,y)+s(x)+u(y)\). Since this problem is well-posed with unique solution \((x_0,y_0)\), the sequence \(\{(x_n,y_n)\}_n\) converges to \((x_0,y_0)\). The latter contradicts \((x_n,y_n) \in S_f(s_n,u_n) \setminus (U \times V)\).\(\square \)

If we consider the correspondence \(\widetilde{S}_f: C(X \times Y) \rightrightarrows X \times Y\) which assigns to each function \(z \in C(X \times Y)\) the (possibly empty) set of solutions \((x_0,y_0)\) to the saddle point problem for the function \(f(x,y)+z(x,y)\), and follow the lines of the proof of Theorem 11, we obtain the following result.

Theorem 12

Let X and Y be completely regular topological spaces and \(f:X \times Y \rightarrow [-\infty ,+\infty ]\) be an extended real-valued function which satisfies the assumptions (A1)–(A4). The mapping \(\widetilde{S}_f\) is single-valued and upper semicontinuous at \(z \in C(X \times Y)\) if and only if the saddle-point problem for the function \(f(x,y)+z(x,y)\) is well-posed.