1 Introduction

The theory of maximal monotone operators has undergone great development for over 60 years. The concept of a maximal monotone operator generalizes that of the subdifferential of a lower semi-continuous convex function. Maximal monotone operators rapidly found applications in subgradient theory, optimization, algorithms, financial mathematics, and much more. The closedness of graphs of maximal monotone operators plays a significant role in the theory of variational analysis. It is well-known that the graph of a maximal monotone operator is norm\(\times \)weak\(^*\) sequentially closed. However, the graph of such an operator might not be norm\(\times \)weak\(^*\) closed, due to the fact that a weak\(^*\) convergent net might be unbounded.

In 2003, Borwein et al. [3] gave an example showing that the graph of the subdifferential of a lower semi-continuous convex function on a separable Hilbert space need not be norm\(\times \)weak\(^*\) closed. In 2012, Borwein and Yao [4] provided an explicit structural formula for maximal monotone operators whose domains have nonempty interiors; consequently, the graphs of these operators are norm\(\times \)weak\(^*\) closed. The authors considered a maximal monotone operator on a subset of the interior of its domain on which the operator is locally bounded. The support function of the operator at a given point x equals the support function of the set consisting of the weak\(^*\) convergent limits of the operator at points near x. Recently, Hantoute and Nguyen [11] considered maximal monotone operators in Hilbert spaces whose domains may have empty interiors and established a relation between the faces of the value at a given point x and the set of strong limits of the operator at points near x. The authors then provided an explicit representation for maximal monotone operators. These results were extended by Khanh and Nguyen [13] to maximal monotone operators in reflexive Banach spaces which are locally bounded together with their dual spaces. Furthermore, based on minimal-norm selections, the authors discovered another formula for maximal monotone operators in that class of spaces.

The main approach in [11, 13] is the use of the Minty surjectivity theorem in reflexive spaces. A class of maximal monotone operators in nonreflexive Banach spaces which share several properties with maximal monotone operators in reflexive Banach spaces was introduced by Gossez in 1971 and named ‘type (D)’; see [9]. A typical property of such a maximal monotone operator is that the range of the sum of the operator and the duality mapping is dense in the dual space. In 1992, Fitzpatrick and Phelps [8] defined a class of maximal monotone operators which includes the class of type (D) and was later called ‘type (FP)’. Four years later, Simons [18] introduced the class (NI), which seemed to generalize the class of type (D). However, in 2010, Marques and Svaiter [20] showed that these two classes are identical. One year later, Bauschke et al. [2] discovered that all three types coincide.

In this paper we consider a maximal monotone operator of type (D) in a nonreflexive Banach space whose dual space is strictly convex and extend the results of [11, 13]. We represent the value of the operator at a point x via the set consisting of the weak\(^*\) convergent limits of its bounded nets at points near x. We introduce the concepts of the ‘w-Kadec-Klee property’ and the ‘w\(^*\)-Kadec-Klee property’ for Banach spaces and their dual spaces, respectively; these classes of spaces include locally uniformly convex spaces. Respective results are given for the case that X has the w-Kadec-Klee property and \(X^*\) has the w\(^*\)-Kadec-Klee property.

The paper is organized as follows. In Sect. 2 we introduce the necessary background material, which is well-known in principle [1, 15, 19]; since the field requires a substantial amount of notation and concepts, we nevertheless provide some details. In Sect. 3 we consider a maximal monotone operator A of type (D) in a nonreflexive Banach space whose dual space is strictly convex. In Theorem 3.1 we represent the value Ax via the values of A at points near x. Theorem 3.3 deals with lower semicontinuous convex functions. In the final section we give a representation formula for the support function of the value of a maximal monotone operator in a nonreflexive Banach space whose dual space has the w\(^*\)-Kadec-Klee property, by means of its minimal-norm selections.

2 Preliminaries

Throughout the present paper, \((X,\Vert \cdot \Vert )\) is a real Banach space and \(X^*, X^{**}\) are its dual and bidual spaces, respectively. The spaces X and \(X^*\) are paired by \(\langle \cdot , \cdot \rangle \). We denote by \(\rightarrow \), \(\overset{{\text {w}}}{\rightarrow }\), and \(\overset{{\text {w}}^*}{\rightharpoonup }\) the norm convergence, weak convergence, and weak\(^*\) convergence of nets, respectively.

We first recall some geometric properties of real Banach spaces.

Definition 2.1

Let X be a real Banach space.

  1. (i)

    X is called strictly convex if for all \(x,y\in X\) with \(x \ne y\) and \(\Vert x \Vert = \Vert y \Vert =1\), one has \(\Vert x+y\Vert <2.\)

  2. (ii)

    X is called locally uniformly convex if, for every \(x \in X\) with \(\Vert x\Vert =1\) and every \(\varepsilon >0\), there exists \(\delta (x, \varepsilon )>0\) such that \(\Vert x+y\Vert < 2- \delta \) for any \(y \in X\) with \(\Vert y\Vert =1\) and \(\Vert x-y\Vert > \varepsilon .\)
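For example, every Hilbert space H is locally uniformly convex (indeed uniformly convex, with \(\delta \) independent of x): if \(\Vert x\Vert =\Vert y\Vert =1\) and \(\Vert x-y\Vert >\varepsilon \), the parallelogram law gives

$$\begin{aligned} \Vert x+y\Vert ^2= 2\Vert x\Vert ^2+ 2\Vert y\Vert ^2-\Vert x-y\Vert ^2 < 4-\varepsilon ^2, \end{aligned}$$

so one may take \(\delta := 2-\sqrt{4-\varepsilon ^2}>0\).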

It is clear that a locally uniformly convex space is strictly convex. The duality mappings of X and \(X^*\) are respectively defined by

$$\begin{aligned} J(x)&:= \{x^*\in X^*:\, \frac{1}{2}\Vert y\Vert ^2 \ge \frac{1}{2}\Vert x\Vert ^2 +\langle x^*, y-x \rangle , \forall y \in X\},\end{aligned}$$
(2.1)
$$\begin{aligned} J^*(x^*)&:= \{x^{**} \in X^{**}: \langle x^{**}, x^*\rangle = \Vert x^*\Vert ^2 = \Vert x^{**}\Vert ^2\}. \end{aligned}$$
(2.2)

For \(x \in X\), the set J(x) is nonempty and closed convex. If \(X^*\) is strictly convex, then J(x) is a singleton.
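For instance, in a Hilbert space H identified with its dual via the Riesz representation, one has \(J(x)= \{x\}\). Indeed, for \(x^*= x\) the inequality in (2.1) reduces to

$$\begin{aligned} \frac{1}{2}\Vert y\Vert ^2 -\frac{1}{2}\Vert x\Vert ^2 -\langle x, y-x \rangle = \frac{1}{2}\Vert y-x\Vert ^2 \ge 0, \,\, \forall y \in H, \end{aligned}$$

so \(x \in J(x)\), and J(x) is a singleton since \(H^*= H\) is strictly convex.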

Let C be a nonempty subset of X. We denote by \(\overline{C}, {\text {co}}C, {\text {int}}C\), and \({\text {bd}}C\) the closure, convex hull, interior, and boundary of C, respectively. For \(x \in \overline{C}\), the normal cone of C at x is defined by

$$\begin{aligned} N_C(x):= \{x^* \in X^*: \langle x^*, y-x\rangle \le 0, \forall y \in C\}. \end{aligned}$$

It is well-known that \(N_C(x)\) is closed convex. The support function of C is the function from \(X^*\) to \(\overline{\mathbb {R}}:= \mathbb {R} \cup \{+\infty \}\) defined by

$$\begin{aligned} \sigma _{C}(x^*)= \underset{x \in C}{\sup }\,\langle x^*, x \rangle .\end{aligned}$$

The support function \(\sigma _K: X \rightarrow \overline{\mathbb {R}}\) of a nonempty subset \(K \subset X^*\) is defined analogously.
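As a simple illustration, let \(X= \mathbb {R}\) and \(C= [0,1]\). Then

$$\begin{aligned} N_C(0)= (-\infty , 0], \quad N_C(x)= \{0\} \text { for } x \in (0,1), \quad N_C(1)= [0,+\infty ), \end{aligned}$$

and \(\sigma _C(x^*)= \max \{x^*, 0\}\) for \(x^* \in \mathbb {R}\).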

Let \(f: X \rightarrow \overline{\mathbb {R}}\) be a convex function. The domain of f, denoted by \({\text {dom}}f\), is the set of points \(x \in X\) such that \(f(x) <+\infty \). We call f lower semicontinuous on X if for all \(x\in X\)

$$\begin{aligned} \underset{y\rightarrow x}{\lim \inf }\ f(y) \ge f(x).\end{aligned}$$

Let f be a lower semicontinuous convex function on X, \(x \in {\text {dom}}f\), and \(\varepsilon \ge 0\). A continuous linear functional \(x^*\in X^*\) is called an \(\varepsilon \)-subgradient of f at x if

$$\begin{aligned} f(y) \ge f(x) +\langle x^*, y-x \rangle -\varepsilon , \, \forall y \in X.\end{aligned}$$

The set of all \(\varepsilon \)-subgradients of f at x is denoted by \(\partial _{\varepsilon } f(x)\) and is called the \(\varepsilon \)-subdifferential of f at x. For the case of \(f(x)= +\infty \) we write \(\partial _{\varepsilon } f(x)= \varnothing .\) As usual, \(\partial _{0}f(x)\) is denoted again by \(\partial f(x)\) and is called the \(\textit{subdifferential}\) of f at x. In the case of \(f(x)=\frac{1}{2}\Vert x\Vert ^2\) for \(x \in X\), we denote by \(J_{\varepsilon }(x)\) the \(\varepsilon \)-subdifferential of f at x. From (2.1), \(J(x)=\partial f(x)\).
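As a simple illustration of the gap between J and \(J_{\varepsilon }\), let \(X= \mathbb {R}\) and \(f(x)= \frac{1}{2}x^2\). The defining inequality \(\frac{1}{2}y^2 \ge \frac{1}{2}x^2+ x^*(y-x)-\varepsilon \) holds for all \(y \in \mathbb {R}\) if and only if it holds at the minimizer \(y= x^*\) of \(y \mapsto \frac{1}{2}y^2- x^*y\), which yields \(\frac{1}{2}(x^*-x)^2 \le \varepsilon \) and hence

$$\begin{aligned} J_{\varepsilon }(x)= [x-\sqrt{2\varepsilon }, x+\sqrt{2\varepsilon }], \qquad J(x)= \{x\}. \end{aligned}$$

This is consistent with Lemma 3.2 (ii) below.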

Let \(A: X \rightrightarrows X^{*}\) be a set-valued operator. The domain, graph, and range of A are given, respectively, by

$$\begin{aligned} {D}(A)&:= \{x \in X: Ax \ne \varnothing \},\\ {Gr}(A)&:= \{(x,x^*) \in X \times X^*: x^* \in Ax\},\\ {R}(A)&:= \underset{x \in X}{\bigcup }\ Ax. \end{aligned}$$

Definition 2.2

  1. (i)

    An operator A is called monotone if

    $$\begin{aligned} \langle x^*-y^*, x-y\rangle \ge 0 \text{ for } (x, x^*), (y,y^*) \in {Gr}(A).\end{aligned}$$
  2. (ii)

    A monotone operator A is called maximal monotone if Gr(A) is not properly contained in the graph of any other monotone operator.
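For example, on \(X= \mathbb {R}\) the identity operator \(Ix= \{x\}\) is maximal monotone: if \((a,b) \in \mathbb {R}^2\) satisfies \(\langle b-x, a-x\rangle \ge 0\) for all \(x \in \mathbb {R}\), then any x strictly between a and b gives a negative product, so \(a= b\). In contrast, the restriction B of I with

$$\begin{aligned} {Gr}(B)= \{(x,x): x \ge 0\} \subsetneq {Gr}(I) \end{aligned}$$

is monotone but not maximal.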

If A is a maximal monotone operator, then for every \(x \in X,\) Ax is closed, convex (see [1, Proposition 2.1]) and satisfies the equality

$$\begin{aligned} Ax= Ax+ N_{\overline{{D}(A)}}(x). \end{aligned}$$
(2.3)

Let \(x \in {D}(A)\). The minimal-norm selection of Ax is the set

$$\begin{aligned}A^\circ x:= \{x^*\in Ax: \Vert x^*\Vert = \underset{y^* \in Ax}{\inf }\Vert y^*\Vert \}.\end{aligned}$$

If \(X^*\) is strictly convex, then \(A^{\circ }x\) is a singleton for every \(x \in {D}(A)\).
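For example, let \(A= \partial f\) with \(f(x)= \vert x \vert \) on \(X= \mathbb {R}\). Then

$$\begin{aligned} A0= [-1,1], \quad A^\circ 0= \{0\}, \quad Ax= A^\circ x= \{{\text {sign}}\,x\} \text { for } x \ne 0, \end{aligned}$$

in accordance with the fact that \(X^*= \mathbb {R}\) is strictly convex.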

We call a net \((x_i)_{i \in I}\) in X eventually bounded if there exist \(M>0\) and \(i_0 \in I\) such that

$$\begin{aligned} \Vert x_i\Vert < M,\,\, \forall i\succeq _I i_0.\end{aligned}$$

According to [4, Fact 3.5], if \((x_i, x^*_i)_{i \in I}\) is a net in Gr(A) such that \(x_i \rightarrow x\), \(x^*_i \overset{{\text {w}}^*}{\rightharpoonup } x^*\), and \((x^*_i)_{i \in I}\) is eventually bounded, then \((x, x^*) \in {Gr}(A)\).

The next concept was introduced by Gossez [10].

Definition 2.3

Let X be a Banach space and \(A: X \rightrightarrows X^{*}\) be a maximal monotone operator. The Gossez monotone closure of A is defined by

$$\begin{aligned} \overline{A}^g= \Big \{(x^{**}, x^*)\in X^{**} \times X^*: \langle x^*-y^*, x^{**}-y\rangle \ge 0, \forall (y, y^*) \in {Gr}(A)\Big \}.\end{aligned}$$

The operator A is called of type (D) if for any \((x^{**}, x^*) \in \overline{A}^g\) there exists a bounded net \((x_i, x^*_i)_{i\in I}\) in Gr(A) such that \(x_i\) converges to \(x^{**}\) in \(\tau (X^{**}, X^*)\) and \(x^*_i\) strongly converges to \(x^*\).
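Two standard classes of examples appear below: every maximal monotone operator on a reflexive Banach space is of type (D) (see [16, Proposition 1], used in the proof of Theorem 3.2), and the subdifferential of a lower semicontinuous convex function is of type (D) (see the proof of Theorem 3.3).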

The following theorem is a characterization of maximal monotone operators of type (D).

Theorem 2.1

[20, Theorem 4.4] Let X be a Banach space and \(A: X \rightrightarrows X^{*}\) be a maximal monotone operator. Then A is of type (D) if and only if

$$\begin{aligned} {R}(\lambda J_{\varepsilon }(\cdot -x) +A) =X^*, \forall \lambda>0, \varepsilon >0, \forall x \in X.\end{aligned}$$

Finally, let \(F: X \rightrightarrows X^{*}\) be a set-valued mapping. For \(x\in X\) we define the following types of limits

$$\begin{aligned} \begin{aligned} {\text {w}}^*-\underset{y \rightarrow x, y\ne x}{{\text {Lim}}\sup }\ \,Fy :=\Big \{ x^*&\in X^*: \exists \text { a net } (x_i, x^*_i)_{i\in I} \subset {Gr}(F) \text { with } \\&x_i \ne x ,\forall i \in I, x_i \rightarrow x, \,\, x^*_i \overset{{\text {w}}^*}{\rightharpoonup } x^*\Big \}, \\ { g}{\text {bw}}^*-\underset{y \rightarrow x, y\ne x}{{\text {Lim}}\sup }\ \,Fy :=\Big \{ x^*&\in X^*: \exists \text { a net } (x_i, x^*_i)_{i\in I} \subset {Gr}(F) \text { and } M >0 \text { with } \\&x_i \ne x ,\forall i \in I, x_i \rightarrow x, \Vert x^*_i\Vert \le M ,\, \forall i\in I, \,\, x^*_i \overset{{\text {w}}^*}{\rightharpoonup } x^*\Big \},\\ {{\text {s}}-}\underset{y \rightarrow x, y\ne x}{{\text {Lim}}\sup }\ \,Fy:=\Big \{ x^*&\in X^*: \exists \text { a net } (x_i, x^*_i)_{i\in I} \subset {Gr}(F) \text { with } x_i \ne x ,\forall i \in I,\\&x_i \rightarrow x, \, x^*_i \rightarrow x^*\Big \}. \end{aligned} \end{aligned}$$

3 Representations for Maximal Monotone Operators of Type (D)

In this section we consider a maximal monotone operator A of type (D) in a Banach space X whose dual space is strictly convex. We represent the value Ax via the values of A at points near x. We define new properties for X and \(X^*\), which generalize the Kadec-Klee property, named the w-Kadec-Klee and \({\text {w}}^*\)-Kadec-Klee properties, respectively. A respective representation is obtained in the case that \(X^*\) has the \({\text {w}}^*\)-Kadec-Klee property.

We first observe a property of the minimal-norm selection.

Lemma 3.1

Let \(A: X \rightrightarrows X^*\) be a maximal monotone operator. Then the minimal-norm selection \(A^\circ x\) is nonempty for all \(x \in {D}(A).\)

Proof

Let \(x \in {D}(A)\). Then \(\alpha := \inf _{x^* \in Ax}\Vert x^*\Vert <+\infty \). For every \(n \in \mathbb {N}\), we take \(x^*_n \in Ax\) such that

$$\begin{aligned} \Vert x^*_n\Vert \le \alpha +\frac{1}{n}. \end{aligned}$$
(3.1)

By the Banach–Alaoglu theorem, there exist a subnet \((x^*_i)_{i \in I}\) of \((x^*_n)_{n \in \mathbb {N}}\) and \(x^* \in X^*\) such that \(x^*_i \overset{{\text {w}}^*}{\rightharpoonup } x^*.\) It follows from inequality (3.1) that \(\Vert x^*\Vert \le \displaystyle \liminf _{i }\Vert x^*_i\Vert \le \alpha .\) On the other hand, also from inequality (3.1), the subnet \((x^*_i)_{i \in I}\) is eventually bounded. Owing to [3, Section 2], we have \(x^* \in Ax\). This implies \(\Vert x^* \Vert \ge \alpha \) and hence \(\Vert x^*\Vert = \alpha \), so we get \(x^* \in A^\circ x\), proving the lemma. \(\square \)

The following properties of \(J_{\varepsilon }\) are useful later.

Lemma 3.2

Let \(x \in X\) and \(\varepsilon \ge 0, \lambda >0\). Then

  1. (i)

    \(J_{\varepsilon }(-x) = -J_{\varepsilon }(x), \,\, \lambda J_{\varepsilon }(x)= J_{\lambda ^2\varepsilon }(\lambda x);\)

  2. (ii)

    \(\underset{x^* \in J_{\varepsilon }(x)}{\sup }\vert \Vert x\Vert -\Vert x^*\Vert \vert \le \sqrt{2\varepsilon };\)

  3. (iii)

    \(\underset{x^* \in J_{\varepsilon }(x)}{\sup }\vert \langle x^*, x \rangle -\Vert x\Vert ^2 \vert \le \sqrt{\varepsilon }(1+ \frac{1}{2}\Vert x\Vert ^2).\)

Proof

  1. (i)

    Let \(x \in X, \varepsilon \ge 0, \lambda >0\) and \(x^* \in J_{\varepsilon }(-x).\) It follows from the definition of \(J_{\varepsilon }\) that

$$\begin{aligned} \frac{1}{2}\Vert y\Vert ^2 + \varepsilon \ge \frac{1}{2}\Vert x\Vert ^2 + \langle x^*, y+x \rangle , \, \forall y \in X. \end{aligned}$$

This implies

$$\begin{aligned} \frac{1}{2}\Vert y\Vert ^2 + \varepsilon \ge \frac{1}{2}\Vert x\Vert ^2 + \langle - x^*, y -x \rangle , \forall y \in X,\end{aligned}$$

which gives \(-x^* \in J_{\varepsilon }(x)\). Hence, we have the inclusion

$$\begin{aligned}J_{\varepsilon }(-x) \subset -J_{\varepsilon }(x), \forall x \in X.\end{aligned}$$

Switching the roles of x and \(-x\), we also get \(-J_{\varepsilon }(x)\subset J_{\varepsilon }(-x)\), implying the former. For the latter, letting \(x^*\in J_\varepsilon (x)\) we observe that

$$\begin{aligned} \frac{1}{2}\Vert \lambda y\Vert ^2 +\lambda ^2\varepsilon \ge \frac{1}{2}\Vert \lambda x\Vert ^2 + \langle \lambda x^*, \lambda y - \lambda x\rangle , \forall y \in X. \end{aligned}$$

This yields

$$\begin{aligned} \frac{1}{2}\Vert y\Vert ^2 +\lambda ^2\varepsilon \ge \frac{1}{2}\Vert \lambda x\Vert ^2 + \langle \lambda x^*, y - \lambda x\rangle , \forall y \in X. \end{aligned}$$

Thus \(\lambda x^* \in J_{\lambda ^2\varepsilon }(\lambda x)\), which implies that

$$\begin{aligned} \lambda J_{\varepsilon }(x) \subset J_{\lambda ^2\varepsilon }(\lambda x), \,\, \forall x \in X, \forall \varepsilon \ge 0, \forall \lambda >0. \end{aligned}$$
(3.2)

This leads to

$$\begin{aligned} \frac{1}{\lambda }J_{\lambda ^2\varepsilon }(\lambda x) \subset J_{(\frac{1}{\lambda })^2(\lambda ^2\varepsilon )}(\frac{1}{\lambda }\lambda x)= J_{\varepsilon }(x),\,\, \forall x \in X,\forall \varepsilon \ge 0, \forall \lambda >0. \end{aligned}$$
(3.3)

Combining (3.2) and (3.3), we get the latter.

  2. (ii)

    This follows from

    $$\begin{aligned} \vert \Vert x^*\Vert - \Vert x\Vert \vert \le \sqrt{2\varepsilon }\end{aligned}$$

    due to [21, Proposition 3.1].

  3. (iii)

    This is obvious for \(\varepsilon =0.\) For \(\varepsilon >0\) the definition of \(J_{\varepsilon }\) implies that

    $$\begin{aligned} \frac{1}{2}\Vert (1+t)x\Vert ^2 \ge \frac{1}{2}\Vert x\Vert ^2 + t\langle x^*, x\rangle - \varepsilon ,\,\,\forall t \in \mathbb {R}.\end{aligned}$$

This is equivalent to

$$\begin{aligned} t(\langle x^*,x\rangle - \Vert x\Vert ^2) \le \varepsilon + \frac{t^2}{2}\Vert x \Vert ^2,\,\, \forall t \in \mathbb {R}.\end{aligned}$$

In particular, for \(t= \sqrt{\varepsilon }~\textrm{sign}(\langle x^*,x\rangle -\Vert x\Vert ^2)\) we get

$$\begin{aligned} \vert \langle x^*,x\rangle - \Vert x\Vert ^2 \vert \le \sqrt{\varepsilon }\left( 1+\frac{1}{2}\Vert x \Vert ^2\right) ,\end{aligned}$$

which proves (iii). \(\square \)

The following lemma is an extension of [1, Proposition 2.2] on the Yosida approximation of maximal monotone operators in reflexive Banach spaces.
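For orientation, we recall the classical Hilbert-space statement behind this extension (cf. [1, Proposition 2.2]), where \(J= I\): for \(x \in {D}(A)\) and \(\lambda >0\), the resolvent \(x_{\lambda }:= (I+\lambda A)^{-1}x\) is well-defined, and the Yosida approximation

$$\begin{aligned} A_{\lambda }x:= \lambda ^{-1}(x-x_{\lambda }) \in Ax_{\lambda } \end{aligned}$$

satisfies \(x_{\lambda } \rightarrow x\), \(\Vert A_{\lambda }x\Vert \rightarrow \Vert A^\circ x\Vert \), and \(A_{\lambda }x \rightarrow A^\circ x\) as \(\lambda \downarrow 0\). Lemma 3.3 recovers this behavior up to the \(\varepsilon \)-enlargement \(J_{\lambda ^6}\) and the passage to subnets.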

Lemma 3.3

Let X be a Banach space whose dual space is strictly convex and let \(A: X \rightrightarrows X^*\) be a maximal monotone operator of type (D). For every \(x \in {D}(A)\) and every sequence \(\lambda _n \downarrow 0\) there exist a subnet \((\lambda _i)_{i\in I}\) of \((\lambda _n)_{n \in \mathbb {N}}\), a net \((x_i)_{i \in I} \subset {D}(A)\) converging to x, and a net \((z^*_i)_{i\in I} \subset X^*\) with \(z^*_i \in J_{\lambda ^6_i}(x-x_i)\) such that \((\lambda ^{-1}_iz^*_i)_{i \in I}\) is bounded and

$$\begin{aligned} \lambda ^{-1}_iz^*_i \in Ax_i, \,\,\, \, \Vert \lambda ^{-1}_iz^*_i\Vert \rightarrow \Vert A^\circ x\Vert , \,\,\, \lambda _i^{-1} z^*_i \overset{{\text {w}}^*}{\rightharpoonup } A^\circ x. \end{aligned}$$
(3.4)

Proof

Let \(x \in {D}(A)\) and let \(\lambda _n \downarrow 0\) be a sequence. By Theorem 2.1, for every \(n \in \mathbb {N}\) we can find \(x_n \in X\) such that

$$\begin{aligned} 0 \in \lambda ^{-1}_nJ_{\lambda ^6_n}(x_n-x) + Ax_n. \end{aligned}$$

By Lemma 3.2 (i), there exists a sequence \((z^*_n)_{n\in \mathbb {N}} \subset X^*\) such that \(z^*_n \in J_{\lambda ^6_n}(x-x_n)\) and

$$\begin{aligned} \lambda ^{-1}_nz^*_n \in Ax_n,\,\, \forall n\in \mathbb {N}. \end{aligned}$$
(3.5)

Due to Lemma 3.1, the set \(A^\circ x\) is nonempty, and the monotonicity of A implies

$$\begin{aligned} \langle \lambda ^{-1}_nz^*_n -A^\circ x, x -x_n\rangle \le 0, \end{aligned}$$
(3.6)

which implies that

$$\begin{aligned} \lambda ^{-1}_n\langle z^*_n, x-x_n\rangle \le \langle A^\circ x, x- x_n\rangle \le \Vert A^\circ x\Vert \left\| x_n-x\right\| . \end{aligned}$$
(3.7)

Since \(z^*_n \in J_{\lambda ^6_n}(x-x_n)\), Lemma 3.2 (iii) gives us

$$\begin{aligned} \langle z^*_n, x-x_n \rangle \ge \Vert x-x_n\Vert ^2-\lambda ^3_n\left( 1+\frac{1}{2}\Vert x-x_n\Vert ^2\right) \ge \Vert x-x_n\Vert ^2-\lambda ^3_n(1+\Vert x-x_n\Vert ^2).\end{aligned}$$

It follows from (3.7) that

$$\begin{aligned} \lambda ^{-1}_n\Big (\Vert x-x_n\Vert ^2 -\lambda ^3_n( 1+\Vert x-x_n\Vert ^2) \Big ) \le \Vert A^\circ x\Vert \left\| x-x_n\right\| .\end{aligned}$$

Multiplying both sides by \(\lambda ^{-1}_n\) and then adding \(\lambda _n\), we get

$$\begin{aligned} (1-\lambda ^3_n)(\lambda ^{-1}_n\Vert x-x_n\Vert )^2 \le \Vert A^\circ x\Vert (\lambda ^{-1}_n\Vert x-x_n\Vert ) + \lambda _n, \end{aligned}$$

which yields

$$\begin{aligned} \lambda ^{-1}_n\Vert x-x_n\Vert \le \frac{\Vert A^\circ x \Vert + \sqrt{\Vert A^\circ x \Vert ^2 +4\lambda _n(1-\lambda ^3_n)}}{2(1-\lambda ^3_n)} \end{aligned}$$
(3.8)

for sufficiently large n, and hence

$$\begin{aligned} \underset{n \rightarrow \infty }{\lim \sup }\,\, \lambda ^{-1}_n\Vert x-x_n \Vert \le \Vert A^\circ x\Vert . \end{aligned}$$
(3.9)

Furthermore, by Lemma 3.2 (ii),

$$\begin{aligned} \Vert z^*_n\Vert \le \Vert x-x_n\Vert +\sqrt{2}\lambda ^3_n, \,\,\forall n\in \mathbb {N}. \end{aligned}$$

Together with (3.9), we get

$$\begin{aligned} \underset{n \rightarrow \infty }{\lim \sup }\,\, \Vert \lambda ^{-1}_n z^*_n \Vert \le \underset{n \rightarrow \infty }{\lim \sup }\,\,(\lambda ^{-1}_n\Vert x-x_n\Vert +\sqrt{2}\lambda ^2_n) \le \Vert A^\circ x\Vert . \end{aligned}$$
(3.10)

This shows that \((\lambda ^{-1}_nz^*_n)_{n \in \mathbb {N}}\) is bounded. By the Banach–Alaoglu theorem, there exist \(y^* \in X^*\) and a subnet \((\lambda ^{-1}_i z^*_i)_{i \in I}\) of \((\lambda ^{-1}_n z^*_n)_{n \in \mathbb {N}}\) satisfying

$$\begin{aligned} \lambda ^{-1}_i z^*_i \overset{{\text {w}}^*}{\rightharpoonup } y^*. \end{aligned}$$
(3.11)

It follows from (3.10) that

$$\begin{aligned} \Vert y^*\Vert \le \underset{i}{\lim \inf }\,\Vert \lambda ^{-1}_i z^*_i\Vert \le \underset{i}{\lim \sup }\,\Vert \lambda ^{-1}_i z^*_i\Vert \le \Vert A^\circ x\Vert . \end{aligned}$$
(3.12)

On the other hand, owing to (3.8), \(x_n \rightarrow x \text { as } n \rightarrow \infty \), and hence \( x_i\rightarrow x.\) It follows from (3.10) and (3.11) that \((\lambda ^{-1}_iz^*_i)_{i\in I}\) is bounded and

$$\begin{aligned} \lambda ^{-1}_iz^*_i \in Ax_i, \lambda ^{-1}_iz^*_i \overset{{\text {w}}^*}{\rightharpoonup } y^*, \,\, x_i \rightarrow x. \end{aligned}$$
(3.13)

Then \(y^* \in Ax\) by the maximal monotonicity of A. From (3.12), we get \(y^*= A^\circ x\) since \(A^\circ x\) is a singleton set due to the strict convexity of \(X^*\). Hence, we obtain from (3.12) that \( \Vert \lambda ^{-1}_i z^*_i\Vert \rightarrow \Vert A^\circ x\Vert \), completing the proof. \(\square \)

Recall that if A is a maximal monotone operator, then Ax is convex and closed in \(X^*\) for \(x\in {D}(A)\). In order to represent Ax by means of the values at points near x, we establish a relation between these values and the faces A(x; v) of Ax with respect to vectors \(v \in X {\setminus } \{0\},\) defined by

$$\begin{aligned} A(x;v):= \Big \{y^*\in Ax: \langle y^*, v\rangle = \underset{x^* \in Ax}{\sup }\ \langle x^*, v \rangle \Big \}.\end{aligned}$$
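For instance, for \(A= \partial \vert \cdot \vert \) on \(X= \mathbb {R}\) at \(x= 0\) one has \(A0= [-1,1]\) and

$$\begin{aligned} A(0;v)= \{{\text {sign}}\,v\}, \quad \forall v \ne 0, \end{aligned}$$

since \(\sup _{x^* \in [-1,1]} \langle x^*, v\rangle = \vert v \vert \) is attained only at \(x^*= {\text {sign}}\,v\).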

This relation depends on the properties of X. Next we introduce a new class of Banach spaces.

Definition 3.1

Let X be a Banach space.

  1. (a)

    X is said to have the \({\text {w}}\)-Kadec-Klee property if for any net \((x_i)_{i \in I}\) with \(x_i \overset{{\text {w}}}{\rightarrow } x\) and \(\Vert x_i\Vert \rightarrow \Vert x\Vert \), one has \(x_i \rightarrow x\).

  2. (b)

    \(X^*\) is said to have the \({\text {w}}^*\)-Kadec-Klee property if for any net \((x^*_i)_{i\in I}\) with \(x^*_i \overset{{\text {w}}^*}{\rightharpoonup } x^*\) and \(\Vert x^*_i\Vert \rightarrow \Vert x^*\Vert \), one has \(x^*_i \rightarrow x^*\).

The following lemma shows that the w-Kadec-Klee and \({\text {w}}^*\)-Kadec-Klee properties hold, in particular, in locally uniformly convex spaces.

Lemma 3.4

Let X be a Banach space. Then,

  1. (a)

    If X is locally uniformly convex, then X has the \({\text {w}}\)-Kadec-Klee property;

  2. (b)

    If \(X^*\) is locally uniformly convex, then \(X^*\) has the \({\text {w}}^*\)-Kadec-Klee property.

See [22, Proposition 3.7.6] for a proof of (a). The proof of (b) is similar.

The next result establishes a relation between the faces of Ax and the set consisting of the weak\(^*\) convergent limits of bounded nets of A at points near x. In what follows, \(j_{\lambda }(w)\) denotes an element of \(J_{\lambda }(w)\).

Proposition 3.1

Let X be a Banach space whose dual space is strictly convex and let \(A: X \rightrightarrows X^*\) be a maximal monotone operator of type (D). Then for \(x \in {D}(A)\) and \(v \in X {\setminus } \{0\}\)

$$\begin{aligned} A(x;v) \subset { g}{\text {bw}}^*{-}\underset{\begin{array}{c} \lambda \rightarrow 0 \\ \underset{j_{\lambda }(w) \overset{{\text {w}}^*}{\rightharpoonup } J(v)}{\Vert j_{\lambda }(w)\Vert \rightarrow \Vert J(v)\Vert } \end{array}}{{\text {Lim}}\sup }\,\, A(x+\lambda w). \end{aligned}$$
(3.14)

If, in addition, \(X^*\) has the \({\text {w}}^*\)-Kadec-Klee property, then for \(x \in {D}(A)\) and \(v \in X {\setminus } \{0\}\)

$$\begin{aligned} A(x; v) \subset {\text {s}}-\underset{\begin{array}{c} \lambda \rightarrow 0 \\ j_{\lambda }(w) \rightarrow J(v) \end{array}}{{\text {Lim}}\sup } A(x+\lambda w). \end{aligned}$$
(3.15)

Proof

Let \(x \in {D}(A)\), \(v \in X {\setminus } \{0\}\) and \(x^* \in A(x;v).\) Since \(X^*\) is strictly convex, J(v) is a singleton set. We consider a maximal monotone operator defined by

$$\begin{aligned} By:= Ay-J(v)-x^*, \,\, \forall \, y \in X.\end{aligned}$$

It is obvious that \({D}(B)= {D}(A)\), and hence \(x \in {D}(B)\). We first claim that

$$\begin{aligned} B^\circ x= -J(v). \end{aligned}$$
(3.16)

Indeed, observe that

$$\begin{aligned} -J(v) = x^*-J(v)-x^* \in Ax-J(v)-x^* = Bx. \end{aligned}$$
(3.17)

Moreover, for \(y^* \in Ax\), \(\langle x^*-y^*, v\rangle \ge 0\) since \(x^* \in A(x;v)\). This implies

$$\begin{aligned} {\begin{matrix} \Vert -J(v)\Vert &{} = \Vert v\Vert ^{-1}\langle J(v), v \rangle \\ &{} \le \Vert v\Vert ^{-1}\langle J(v)+x^*-y^*, v \rangle \\ &{} \le \Vert v\Vert ^{-1}\Vert J(v)+x^*-y^*\Vert \Vert v\Vert \\ &{} = \Vert y^*-J(v)-x^*\Vert . \end{matrix}} \end{aligned}$$

Now (3.16) follows from (3.17) and the above estimate.

Next, we apply Lemma 3.3 to B to obtain nets \((\lambda _i)_{i\in I}\) with \(\lambda _i \rightarrow 0\), \((x_i)_{i\in I} \subset {D}(B)\) with \(x_i \rightarrow x\), and \((z^*_i)_{i \in I} \subset X^*\) with \( z^*_i \in J_{\lambda ^6_i}(x-x_i)\) such that \((\lambda ^{-1}_iz^*_i)_{i \in I}\) is bounded and

$$\begin{aligned} \lambda ^{-1}_iz^*_i \in Bx_i, \,\, \,\, \lambda ^{-1}_iz^*_i \overset{{\text {w}}^*}{\rightharpoonup } B^\circ x = -J(v), \Vert \lambda ^{-1}_iz^*_i\Vert \rightarrow \Vert J(v) \Vert . \end{aligned}$$

Define \(w_i:= \lambda ^{-1}_i(x_i-x), i \in I\). We then have

$$\begin{aligned} (\lambda ^{-1}_iz^*_i + J(v)+ x^*)_{i \in I}\,\, \text { is bounded},\end{aligned}$$
$$\begin{aligned} \lambda ^{-1}_iz^*_i+ J(v)+ x^* \in Bx_i+J(v)+x^* = Ax_i= A(x+\lambda _iw_i), \end{aligned}$$
(3.18)
$$\begin{aligned} \lambda ^{-1}_iz^*_i+J(v)+ x^* \overset{{\text {w}}^*}{\rightharpoonup } x^*, \end{aligned}$$
(3.19)
$$\begin{aligned} -\lambda _i^{-1}z^*_i \overset{{\text {w}}^*}{\rightharpoonup } J(v), \,\, \Vert -\lambda _i^{-1}z^*_i\Vert \rightarrow \Vert J(v) \Vert . \end{aligned}$$
(3.20)

It follows from Lemma 3.2 (i) that

$$\begin{aligned} -\lambda ^{-1}_iz^*_i \in J_{\lambda ^4_i}(\lambda ^{-1}_i(x_i-x)) \subset J_{\lambda _i}(\lambda ^{-1}_i(x_i-x)) = J_{\lambda _i}(w_i). \end{aligned}$$
(3.21)

Thus

$$\begin{aligned} x^* \in { g}{\text {bw}}^*-\underset{\begin{array}{c} t \rightarrow 0\\ \underset{j_t(w) \overset{{\text {w}}^*}{\rightharpoonup } J(v)}{\Vert j_t(w)\Vert \rightarrow \Vert J(v)\Vert } \end{array}}{{\text {Lim}}\sup }\,\, A(x+tw),\end{aligned}$$

and hence (3.14) holds.

Now, suppose in addition that \(X^*\) has the \({\text {w}}^*\)-Kadec-Klee property and let \(x^* \in A(x;v)\) with \(x \in {D}(A)\) and \(v \ne 0\). By the above argument, we find nets \((\lambda _i)_{i \in I} \subset \mathbb {R}_{+},\) \((x_i)_{i \in I}\subset {D}(A)\), \((w_i)_{i\in I} \subset X\) and \((z^*_i)_{i\in I} \subset X^*\) satisfying (3.18)–(3.20). It follows from (3.20) and the \({\text {w}}^*\)-Kadec-Klee property that \(-\lambda _i^{-1}z^*_i \rightarrow J(v)\), and hence \(\lambda ^{-1}_iz^*_i+J(v)+ x^* \rightarrow x^*.\) This leads to

$$\begin{aligned} x^* \in {\text {s}}-\underset{\begin{array}{c} t \rightarrow 0 \\ j_{t}(w) \rightarrow J(v) \end{array}}{{\text {Lim}}\sup } A(x+tw),\end{aligned}$$

completing the proof. \(\square \)

Corollary 3.1

Let X be a Banach space whose dual space is strictly convex and let \(A: X \rightrightarrows X^*\) be a maximal monotone operator of type (D). Then, for \(x \in {D}(A)\) and \(v \in X {\setminus } \{0\}\)

$$\begin{aligned} A(x;v) \subset { g}{\text {bw}}^*-\underset{y \rightarrow x, y\ne x}{{\text {Lim}}\sup }\,Ay. \end{aligned}$$
(3.22)

If, in addition, \(X^*\) has the \({\text {w}}^*\)-Kadec-Klee property, then for \(x \in {D}(A)\) and \(v \in X {\setminus } \{0\},\)

$$\begin{aligned} A(x; v) \subset {\text {s}}-\underset{y \rightarrow x, y\ne x}{{\text {Lim}}\sup }\,Ay. \end{aligned}$$
(3.23)

Proof

Let \(x \in {D}(A)\), \(v \in X {\setminus } \{0\}\) and \(x^* \in A(x;v)\). By (3.14), there exist nets \((\lambda _i)_{i \in I} \rightarrow 0,\) \((w_i)_{i \in I} \subset X,\) \((z^*_i)_{i\in I} \subset X^*\) with \(z^*_i \in J_{\lambda _i}(w_i)\), \(z^*_i \overset{{\text {w}}^*}{\rightharpoonup } J(v),\) \(\Vert z^*_i \Vert \rightarrow \Vert J(v)\Vert \), \((x^*_{i})_{i\in I} \subset X^*\) with \(x^*_i \in A(x+\lambda _iw_i)\), such that

$$\begin{aligned}(x^*_i)_{i\in I} \text { is bounded and } x^*_i \overset{{\text {w}}^*}{\rightharpoonup } x^*.\end{aligned}$$

Next we verify that \(x+\lambda _iw_i \rightarrow x\), which would imply

$$\begin{aligned} x^* \in { g}{\text {bw}}^*-\underset{y \rightarrow x, y\ne x}{{\text {Lim}}\sup }\,Ay\end{aligned}$$

and hence (3.22) holds. By Lemma 3.2 (ii) and \(z^*_i \in J_{\lambda _i}(w_i),\) we have \(\Vert w_i\Vert \le \Vert z^*_i \Vert +\sqrt{2\lambda _i}\) for \(i \in I\). Therefore

$$\begin{aligned} \underset{i\in I}{\limsup }\Vert w_i\Vert \le \underset{i\in I}{\limsup }\left( \Vert z^*_i\Vert + \sqrt{2\lambda _i}\right) = \Vert J(v)\Vert = \Vert v\Vert \end{aligned}$$

implies \(x+\lambda _iw_i \rightarrow x.\) The same argument applies to yield (3.23). \(\square \)

Now we are in a position to derive the first representation for the value of a maximal monotone operator of type (D) at a point in its domain.

Theorem 3.1

Let X be a Banach space whose dual space is strictly convex, and let \(A: X \rightrightarrows X^*\) be a maximal monotone operator of type (D). Then for \(x\in X\)

$$\begin{aligned} Ax = {\text {co}}\overline{\{{ g}{\text {bw}}^*-\underset{y \rightarrow x, y\ne x}{{\text {Lim}}\sup }\,Ay\}} + N_{\overline{{D}(A)}}(x). \end{aligned}$$
(3.24)

If, in addition, \(X^*\) has the \({\text {w}}^*\)-Kadec-Klee property, then for \(x\in X\)

$$\begin{aligned} Ax = {\text {co}}\{{\text {s}}-\underset{y \rightarrow x, y\ne x}{{\text {Lim}}\sup }\,Ay\} + N_{\overline{{D}(A)}}(x). \end{aligned}$$
(3.25)

Proof

Let \(x \in X\) and

$$\begin{aligned}K:= { g}{\text {bw}}^*-\underset{y \rightarrow x, y\ne x}{{\text {Lim}}\sup }\,Ay.\end{aligned}$$

Owing to the maximal monotonicity of A, we have \(K \subset Ax\); since Ax is closed convex and satisfies \(Ax= Ax+ N_{\overline{{D}(A)}}(x)\), we deduce the inclusion

$$\begin{aligned} {\text {co}}\overline{\{{ g}{\text {bw}}^*-\underset{y \rightarrow x, y\ne x}{{\text {Lim}}\sup }\,Ay\}} + N_{\overline{{D}(A)}}(x) \subset Ax.\end{aligned}$$

To justify the reverse inclusion, we will show the following two inclusions

$$\begin{aligned} Ax \subset {\text {co}}({\text {bd}}Ax) + N_{\overline{{D}(A)}}(x) \end{aligned}$$
(3.26)

and

$$\begin{aligned} {\text {bd}}Ax \subset \overline{K}. \end{aligned}$$
(3.27)

For the case \(x \notin {D}(A)\), one has \(K= Ax = {\text {bd}}Ax= \varnothing \), so the above inclusions are trivial. Let \(x \in {D}(A).\) To verify (3.26), we pick any \(x^* \in Ax\). If \(x^* \in {\text {bd}}Ax\), then

$$\begin{aligned} x^* \in {\text {bd}}Ax \subset {\text {co}}({\text {bd}}Ax) + N_{\overline{{D}(A)}}(x). \end{aligned}$$
(3.28)

If \(x^* \in {\text {int}}Ax\), we take \(x^*_0 \in {\text {bd}}Ax\) and define

$$\begin{aligned} \bar{\rho }:= \sup \big \{ \rho> 0: x^* +t(x^*-x^*_0) \in Ax, \forall t \in [0, \rho )\big \}>0.\end{aligned}$$

We consider two cases of \(\bar{\rho }\).

Case 1 \(\bar{\rho } = +\infty \). This means that \(x^*+ t(x^*-x^*_0) \in Ax\) for every \(t \ge 0\). Hence, for \((y, y^*) \in {Gr}(A)\), the monotonicity of A gives

$$\begin{aligned} \langle x^*+ t(x^*-x^*_0)-y^*, y-x \rangle \le 0, \forall t \ge 0.\end{aligned}$$

Dividing by t and letting \(t \rightarrow \infty \), we obtain \(\langle x^*-x^*_0, y-x \rangle \le 0\). Since this holds for any \(y \in {D}(A),\) we get \(x^*-x^*_0 \in N_{\overline{{D}(A)}}(x)\), which implies that

$$\begin{aligned} x^* \in x^*_0 + N_{\overline{{D}(A)}}(x) \subset {\text {co}}({\text {bd}}Ax) + N_{\overline{{D}(A)}}(x).\end{aligned}$$

Case 2 \(\bar{\rho } < +\infty .\) Since Ax is closed, \(z^*:= x^* + \bar{\rho }(x^*-x^*_0) \in Ax.\) Furthermore, \(z^* \in {\text {bd}}Ax\) by the definition of \(\bar{\rho }\). Therefore

$$\begin{aligned} x^* = \frac{1}{1+\bar{\rho }}z^*+ \frac{\bar{\rho }}{1+ \bar{\rho }}x^*_0 \in {\text {co}}({\text {bd}}Ax) \subset {\text {co}}({\text {bd}}Ax) + N_{\overline{{D}(A)}}(x).\end{aligned}$$

Combining the results above, we conclude that (3.26) holds.

Finally, let \(x^* \in {\text {bd}}Ax\). According to [14, Theorem 1], since Ax is weak\(^*\) closed convex, there exists a sequence \((x^*_n)_{n \in \mathbb {N}} \subset {\text {bd}}Ax\) such that \(x^*_n \rightarrow x^*\), where \(x^*_n\) are weak\(^*\) support points of Ax, i.e., for each \(x^*_n\), there exists \(v_n \in X {\setminus } \{0\}\) such that

$$\begin{aligned}\langle x^*_n,v_n\rangle = \sigma _{Ax}(v_n).\end{aligned}$$

The equality means that \(x^*_n \in A(x; v_n)\) for \(n \in \mathbb {N}\). By Corollary 3.1, \(x^*_n \in K\), which implies \(x^* \in \overline{K}\). Hence we get (3.27) and the proof of (3.24) is complete.

Now suppose in addition that \(X^*\) has the \({\text {w}}^*\)-Kadec-Klee property. The proof of (3.25) is similar to that of (3.24), where K is replaced by the closed set \( K':= {\text {s}}-\underset{y \rightarrow x, y\ne x}{{\text {Lim}}\sup }\,Ay.\) \(\square \)
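As a simple sanity check of (3.25), take \(X= \mathbb {R}\) (so \(X^*= \mathbb {R}\) is strictly convex and has the \({\text {w}}^*\)-Kadec-Klee property) and \(A= \partial \vert \cdot \vert \), which is of type (D) since X is reflexive (cf. the proof of Theorem 3.2). At \(x= 0\), every net \((y_i, y^*_i)_{i \in I} \subset {Gr}(A)\) with \(y_i \rightarrow 0\) and \(y_i \ne 0\) has \(y^*_i= {\text {sign}}\,y_i \in \{-1,1\}\), so

$$\begin{aligned} {\text {s}}-\underset{y \rightarrow 0, y\ne 0}{{\text {Lim}}\sup }\,Ay= \{-1,1\}, \qquad {\text {co}}\{-1,1\}+ N_{\mathbb {R}}(0)= [-1,1]+ \{0\}= A0, \end{aligned}$$

in accordance with (3.25).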

Remark 3.1

  1. (a)

    In the setting of the previous theorem, for \(x\in X\)

$$\begin{aligned} Ax = {\text {co}}\,\overline{\{{ g}{\text {bw}}^*-\underset{y \rightarrow x, y\ne x}{{\text {Lim}}\sup }\,Ay\}} + N_{\overline{{D}(A)}}(x) \subset {\text {co}}\overline{\{{\text {w}}^*-\underset{y \rightarrow x, y\ne x}{{\text {Lim}}\sup }\,Ay\}} + N_{\overline{{D}(A)}}(x). \end{aligned}$$
(3.29)

It is worth noting that even in the case that X is a Hilbert space, the above inclusion can be strict; see [3, Example 1]. Furthermore, the inclusion becomes an equality if the nets are replaced by sequences, and by [11, Theorem 3.3]

$$\begin{aligned} Ax= {\text {co}}{\{{\text {ss}}-\underset{y \rightarrow x, y \ne x}{{\text {Lim}}\sup }\,Ay\}} + N_{\overline{{D}(A)}}(x),\end{aligned}$$

where

$$\begin{aligned} {{\text {ss}}-}\underset{y \rightarrow x, y\ne x}{{\text {Lim}}\sup }\ \,Ay =\Big \{ x^* \in X^*: \exists \, (x_n, x^*_n)_{n \in \mathbb {N}} \subset {Gr}(A) \text { with } x_n \ne x,\forall n \in \mathbb {N}, x_n \rightarrow x, \, x^*_n \rightarrow x^*\Big \}.\end{aligned}$$
  2. (b)

    Alternative and well-known recession cone descriptions of \(N_{\overline{{D}(A)}}(x)\) were presented in [4] for the sequential version and in [7] for the net version. Define

$$\begin{aligned} ({\text {Rec}} A)x := \{x^* \in X^*: \exists \text { a sequence } t_{n} \rightarrow 0^+, (x_n, x^*_n) \in {Gr}(A) \text { with } x_n \rightarrow x,\,\, t_nx^*_n \overset{{\text {w}}^*}{\rightharpoonup } x^*\}. \end{aligned}$$
(3.30)

According to [4, Proposition 5.3], \(N_{\overline{{D}(A)}}(x)= ({\text {Rec}} A)x\) and hence

$$\begin{aligned} Ax = {\text {co}}\overline{\{{ g}{\text {bw}}^*-\underset{y \rightarrow x, y\ne x}{{\text {Lim}}\sup }\,Ay\}} +({\text {Rec}} A)x \end{aligned}$$

by (3.24). \(\Diamond \)

The next theorem is a representation for maximal monotone operators in reflexive spaces.

Theorem 3.2

Let X be a reflexive Banach space and let \(A: X \rightrightarrows X^*\) be a maximal monotone operator. Then for \(x\in X\)

$$\begin{aligned} Ax= {\text {co}}\{{\text {s}}-\underset{y \rightarrow x, y\ne x}{{\text {Lim}}\sup }\,Ay\} + N_{\overline{{D}(A)}}(x).\end{aligned}$$

Proof

Let \(x \in X\). We first observe that the maximal monotonicity of A and the sets \({\text {co}}\{{\text {s}}-\underset{y \rightarrow x, y\ne x}{{\text {Lim}}\sup }\,Ay\}\) and \(N_{\overline{{D}(A)}}(x)\) are not affected by an equivalent renorming of X and \(X^*.\) By Troyanski's renorming theorem [12, Theorem 5.192], X admits an equivalent norm under which both X and \(X^*\) are locally uniformly convex. Therefore, we may assume that both X and \(X^*\) are locally uniformly convex. Then \(X^*\) has the \({\text {w}}^*\)-Kadec-Klee property due to Lemma 3.4. Since the space X is reflexive, the operator A is of type (D), owing to [16, Proposition 1]. Using the second part of Theorem 3.1, we derive the conclusion of the theorem. \(\square \)

The next result is a representation formula for the subdifferential of a lower semicontinuous convex function.

Theorem 3.3

Let X be a Banach space whose dual space is strictly convex and let \(f: X \rightarrow \overline{\mathbb {R}}\) be a lower semicontinuous convex function. Then for \(x\in X\)

$$\begin{aligned} \partial f(x)= {\text {co}}\overline{\{{ g}{\text {bw}}^*-\,\,\underset{y \rightarrow x, y\ne x}{{\text {Lim}}\sup }\,\partial f(y)\}} +N_{\overline{{\text {dom}}f}}(x). \end{aligned}$$
(3.31)

If, in addition, \(X^*\) has the \({\text {w}}^*\)-Kadec-Klee property, then for \(x\in X\)

$$\begin{aligned} \partial f(x)= {\text {co}}\{{\text {s}}-\,\,\underset{y \rightarrow x, y\ne x}{{\text {Lim}}\sup }\,\partial f(y)\} +N_{\overline{{\text {dom}}f}}(x). \end{aligned}$$
(3.32)

Proof

Using [10, Lemma 1] and [17, Corollary 2] one has

$$\begin{aligned} {R}(\lambda J_{\varepsilon }(\cdot -x) + \partial f) = X^*, \forall x \in X,\forall \lambda , \varepsilon >0.\end{aligned}$$

Since \(\partial f\) is maximal monotone, Theorem 2.1 shows that \(\partial f\) is of type (D). Moreover, \(\overline{{D}(\partial f)}= \overline{{\text {dom}}f}\), so the theorem follows from Theorem 3.1. \(\square \)

Remark 3.2

Let X be a Banach space. If the closed unit ball of \(X^*\) is \(\textrm{weak}^*\)-sequentially compact, then the nets in Lemma 3.3 can be replaced by sequences. As a consequence, all results in Proposition 3.1, Theorems 3.1, 3.2, 3.3 and Corollary 3.1 also hold for sequences. In particular, these results are true for the case that X is a weak Asplund space; see [6, p. 239]. \(\Diamond \)

4 A Representation Formula for the Support Functions of the Values of Maximal Monotone Operators

In this section we consider a maximal monotone operator A of type (D) in a Banach space which has the w-Kadec-Klee property and whose dual space has the w\(^*\)-Kadec-Klee property. We provide an explicit formula for the support function of Ax by means of the minimal-norm selections of Ay at points y near x.

We first present a technical result for the theorem below.

Lemma 4.1

Let X be a Banach space such that \(X^*\) and \(X^{**}\) are strictly convex, X has the \({\text {w}}\)-Kadec-Klee property, and \(X^*\) has the \({\text {w}}^*\)-Kadec-Klee property. Let \(v \in X {\setminus } \{0\}\), \((t_i)_{i \in I} \subset \mathbb {R}_{+}\) with \(t_i \rightarrow 0\), and \((w_i)_{i \in I} \subset X\). If there exists a net \((z^*_i)_{i \in I} \subset X^*\) with \(z^*_i \in J_{t_i}(w_i)\) and \(z^*_i \rightarrow J(v)\), then there exists a subnet of \((w_i)_{i \in I}\) converging to v.

Proof

Let \((z^*_i)_{i \in I}\subset X^*\) be a net with \(z^*_i \in J_{t_i}(w_i)\) converging to J(v). According to [5, p. 608], for every \(i \in I\) there exists \(v_i \in X\) such that

$$\begin{aligned} \Vert w_i-v_i\Vert \le \sqrt{t_i}, \Vert z^*_i- J(v_i) \Vert \le \sqrt{t_i}. \end{aligned}$$
(4.1)

This implies \(J(v_i) \rightarrow J(v),\) and hence \(\Vert v_i\Vert \rightarrow \Vert v\Vert \), which shows that \((v_i)_{i \in I} \subset X \subset X^{**}\) is eventually bounded. Using the Banach–Alaoglu theorem and passing to a subnet if necessary, we may assume that \(v_i \overset{{\text {w}}}{\rightarrow } z^{**}\) for some \(z^{**} \in X^{**}\). Furthermore, since \(X^{**}\) is strictly convex, we have \(v_i= J^*(J(v_i))\) and \(v= J^*(J(v))\). By the maximal monotonicity of \(J^*\), we deduce \(z^{**} = J^*(J(v))= v\) and hence

$$\begin{aligned} \Vert v_i\Vert \rightarrow \Vert v\Vert ,\,\,\text { and }\,\, v_i \overset{{\text {w}}}{\rightarrow } v. \end{aligned}$$

Owing to the assumption that X has the \({\text {w}}\)-Kadec-Klee property, we get \(v_i \rightarrow v\). It follows from (4.1) that \(w_i \rightarrow v\), which completes the proof. \(\square \)

The following theorem represents the support function of the value Ax via its minimal-norm selections.

Theorem 4.1

Let X be a Banach space such that \(X^*\) and \(X^{**}\) are strictly convex, X has the \({\text {w}}\)-Kadec-Klee property, and \(X^*\) has the \({\text {w}}^*\)-Kadec-Klee property. Let \(A: X \rightrightarrows X^*\) be a maximal monotone operator of type (D). Then for \(x \in {D}(A), \, v \in X {\setminus } \{0\}\)

$$\begin{aligned} \sigma _{Ax}(v)= \underset{\begin{array}{c} t \downarrow 0 \\ w \rightarrow v \end{array}}{\lim \inf }\langle A^{\circ }(x+tw),w\rangle . \end{aligned}$$
(4.2)

Proof

Let \(x \in {D}(A)\) and \(v \in X{\setminus }\{0\}\). We first show that

$$\begin{aligned} \sigma _{Ax}(v)\le \underset{\begin{array}{c} t \downarrow 0\\ w\rightarrow v \end{array}}{\lim \inf }\langle A^{\circ }(x+tw),w\rangle . \end{aligned}$$
(4.3)

Indeed, by the monotonicity of A, for \(x^* \in Ax\) we have

$$\begin{aligned}\langle x^*, w\rangle = \frac{1}{t} \langle x^*, (x+tw)-x \rangle \le \langle A^{\circ }(x+tw),w\rangle \end{aligned}$$

for all \(t > 0\) and \(w \in X {\setminus } \{0\}\), with the convention \(\langle A^{\circ }(x+tw),w\rangle = +\infty \) if \(x+tw\notin {D}(A)\). It follows that

$$\begin{aligned} \langle x^*, v \rangle \le \underset{\begin{array}{c} t \downarrow 0\\ w\rightarrow v \end{array}}{\lim \inf }\langle A^{\circ }(x+tw),w\rangle .\end{aligned}$$

Since this holds for any \(x^* \in Ax\), we get inequality (4.3). It remains to show

$$\begin{aligned} \sigma _{Ax}(v) \ge \underset{\begin{array}{c} t \downarrow 0 \\ w \rightarrow v \end{array}}{\lim \inf }\,\langle A^{\circ }(x+tw),w\rangle . \end{aligned}$$
(4.4)

This obviously holds for \(\sigma _{Ax}(v) = +\infty \). If \(\sigma _{Ax}(v) < +\infty \), then \(v \in {\text {dom}}(\sigma _{Ax})\subset \overline{{D}(\partial \sigma _{Ax})}\). We consider two cases of v as follows.

Case 1 \(v \in D(\partial \sigma _{Ax})\). Pick any \(x^* \in \partial \sigma _{Ax}(v).\) From the definition of A(x; v), one has \(\partial \sigma _{Ax}(v) = A(x;v)\), so \(x^*\in A(x;v)\). According to Proposition 3.1, we can find nets \((t_i)_{i \in I} \subset \mathbb {R}_{+}\), \( (w_i)_{i\in I} \subset X\), and \((z^*_i)_{i \in I} \subset X^*, (x^*_i)_{i \in I} \subset X^*\) with \(t_i \downarrow 0,\) \(z^*_i \in J_{t_i}(w_i)\), and \( x^*_i \in A(x+t_iw_i)\) such that

$$\begin{aligned} z^*_i \rightarrow J(v), \,\, \,\,x^*_i \rightarrow x^*. \end{aligned}$$
(4.5)

On the one hand, since \(x^*_i \rightarrow x^*\), the net \((x^*_i)_{i\in I}\) is eventually bounded. Together with \(\Vert A^{\circ }(x+t_iw_i)\Vert \le \Vert x^*_i\Vert \) for all \(i \in I\), it follows that the net \((A^{\circ }(x+t_iw_i))_i\) is also eventually bounded. By the Banach–Alaoglu theorem, after passing to a subnet, one has

$$\begin{aligned} A^{\circ }(x+t_iw_i) \overset{{\text {w}}^*}{\rightharpoonup } z^* \end{aligned}$$
(4.6)

for some \(z^*\in X^*\). Note that \((w_i)_{i}\) is eventually bounded since \( \Vert w_i\Vert \le \Vert z^*_i\Vert + \sqrt{2t_i}, \forall i \in I \) and \(\Vert z^*_i\Vert \rightarrow \Vert J(v)\Vert = \Vert v\Vert \). This gives \(x+t_iw_i \rightarrow x\). It follows from the maximal monotonicity of A, the eventual boundedness of \((A^{\circ }(x+t_iw_i))_i\) and (4.6) that \(z^* \in Ax.\) This yields

$$\begin{aligned} \sigma _{Ax}(v) \ge \langle z^*, v\rangle . \end{aligned}$$
(4.7)

On the other hand, using Lemma 4.1, we may suppose, by passing to a subnet if necessary, that \(w_i \rightarrow v.\) Owing to the fact that \((A^{\circ }(x+t_iw_i))_{i \in I}\) is eventually bounded and (4.6), we get

$$\begin{aligned} \langle A^{\circ }(x+t_iw_i), w_i\rangle \rightarrow \langle z^*, v\rangle . \end{aligned}$$

Together with (4.7), we derive

$$\begin{aligned} \sigma _{Ax}(v) \ge \underset{i}{\lim }\,\langle A^{\circ }(x+t_iw_i), w_i\rangle \ge \underset{\begin{array}{c} t \rightarrow 0 \\ j_{t}(w) \rightarrow J(v) \end{array}}{\lim \inf }\langle A^{\circ }(x+tw),w\rangle . \end{aligned}$$
(4.8)

It follows from Lemma 4.1 that

$$\begin{aligned} \underset{\begin{array}{c} t \rightarrow 0 \\ j_{t}(w) \rightarrow J(v) \end{array}}{\lim \inf }\langle A^{\circ }(x+tw),w\rangle \ge \underset{\begin{array}{c} t \rightarrow 0 \\ w \rightarrow v \end{array}}{\lim \inf }\langle A^{\circ }(x+tw),w\rangle . \end{aligned}$$
(4.9)

Combining (4.8) and (4.9), we obtain (4.4).

Case 2 \( v \in {\text {dom}}\sigma _{Ax} {\setminus } {D}(\partial \sigma _{Ax})\). This means that \(\partial \sigma _{Ax}(v) = \varnothing .\) According to [5, Theorem 2], there exists a sequence \(v_n \rightarrow v\) such that \(\sigma _{Ax}(v_n) \rightarrow \sigma _{Ax}(v)\) and \(\partial \sigma _{Ax}(v_n) \ne \varnothing .\) By Case 1, for every \(n \in \mathbb {N}\)

$$\begin{aligned} \sigma _{Ax}(v_n)= \underset{\begin{array}{c} t \downarrow 0, \\ w \rightarrow v_n \end{array}}{\lim \inf }\langle A^{\circ }(x+tw),w\rangle .\end{aligned}$$

Hence, for every n there exist \(t_n >0, w_n \in X\) such that

$$\begin{aligned} t_n \le \frac{1}{n},\,\, \Vert w_n -v_n\Vert \le \frac{1}{n}, \,\, \vert \langle A^{\circ }(x+t_nw_n), w_n\rangle -\sigma _{Ax}(v_n)\vert \le \frac{1}{n}.\end{aligned}$$

This implies that \(t_n \rightarrow 0, w_n \rightarrow v\) and \(\underset{n \rightarrow \infty }{\lim }\langle A^{\circ }(x+t_nw_n), w_n\rangle = \sigma _{Ax}(v)\). Hence we get (4.4). The proof is complete. \(\square \)
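As a simple sanity check of (4.2), take \(X= \mathbb {R}\), \(A= \partial \vert \cdot \vert \), \(x= 0\) and \(v= 1\). Then \(\sigma _{A0}(1)= \sup _{s \in [-1,1]} s= 1\), while \(A^{\circ }(tw)= 1\) for \(t>0\) and w near 1, so

$$\begin{aligned} \underset{\begin{array}{c} t \downarrow 0 \\ w \rightarrow 1 \end{array}}{\lim \inf }\,\langle A^{\circ }(tw),w\rangle = \underset{w \rightarrow 1}{\lim }\, w= 1= \sigma _{A0}(1). \end{aligned}$$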

The next result follows directly from Lemma 3.4 and the previous theorem.

Corollary 4.1

Let X be a Banach space such that X and \(X^*\) are locally uniformly convex and \(X^{**}\) is strictly convex, and let \(A: X \rightrightarrows X^*\) be a maximal monotone operator of type (D). Then for \(x \in {D}(A)\) and \(v \in X{\setminus }\{0\}\)

$$\begin{aligned} \sigma _{Ax}(v)= \underset{\begin{array}{c} t \downarrow 0, \\ w \rightarrow v \end{array}}{\lim \inf }\,\langle A^{\circ }(x+tw),w\rangle . \end{aligned}$$
(4.10)

Corollary 4.2

Let X be a Banach space such that \(X^*\) and \(X^{**}\) are strictly convex, X has the \({\text {w}}\)-Kadec-Klee property, and \(X^*\) has the \({\text {w}}^*\)-Kadec-Klee property, and let \(A_1, A_2: X \rightrightarrows X^*\) be maximal monotone operators of type (D). If \({D}(A_1)= {D}(A_2)=: D\) and \(A^{\circ }_1x \in A_2x\) for all \(x \in D,\) then \(A_1 = A_2.\)

Proof

We first show that for \(x \in D\) and \(v \in X {\setminus } \{0\}\)

$$\begin{aligned} \sigma _{A_1x}(v) \ge \sigma _{A_2x}(v), \end{aligned}$$
(4.11)

which would imply \(A_2x \subset A_1x\) for \(x \in {D}\). This is trivial for \(\sigma _{A_1x}(v) = +\infty \). We consider the case \(\sigma _{A_1x}(v) < +\infty \). According to Theorem 4.1, there exist nets \((t_i)_{i \in I} \subset \mathbb {R}_{+}\) and \((w_i)_{i \in I} \subset X\) with \(t_i \rightarrow 0\) and \(w_i \rightarrow v\) such that

$$\begin{aligned} \sigma _{A_1x}(v) = \underset{i}{\lim }\langle A^{\circ }_1(x+t_iw_i),w_i\rangle . \end{aligned}$$
(4.12)

By assumption, \(A^{\circ }_1(x+t_iw_i) \in A_2(x+t_iw_i)\) for all \(i \in I\). The monotonicity of \(A_2\) implies that

$$\begin{aligned} \sigma _{A_2x}(w_i) \le \langle A^{\circ }_1(x+t_iw_i), w_i\rangle \ \ \text{ for } \text{ all } i \in I.\end{aligned}$$

Combining this with the lower semicontinuity of \(\sigma _{A_2x}\) and (4.12), we obtain

$$\begin{aligned} \sigma _{A_2x}(v) \le \underset{i}{\lim \inf }\,\sigma _{A_2x}(w_i) \le \underset{i}{\lim }\langle A^{\circ }_1(x+t_iw_i), w_i\rangle \le \sigma _{A_1x}(v),\end{aligned}$$

which means that (4.11) holds. This yields \(A^{\circ }_2x \in A_2x \subset A_1x\) for all \(x \in D.\) Interchanging the roles of \(A_1\) and \(A_2\) in the above argument, we obtain \(A_1x \subset A_2x\) for \(x \in D\). Therefore \(A_1= A_2\), as was to be shown. \(\square \)