1 Introduction and preliminaries

Splitting methods have recently received much attention because many nonlinear problems arising in applied areas such as image recovery, signal processing, and machine learning are mathematically modeled as a nonlinear operator equation in which the operator is decomposed as the sum of two nonlinear operators. Splitting methods for linear equations were introduced by Peaceman and Rachford [1] and Douglas and Rachford [2]. Extensions to nonlinear equations in Hilbert spaces were carried out by Kellogg [3] and Lions and Mercier [4]. The central problem is to iteratively find a zero of the sum of two monotone operators A and B in a Hilbert space H. In this paper, we consider the following problem: find an x in the fixed point set of a mapping S such that

$$x \in (A + B)^{-1}(0),$$

where A and B are two monotone operators. The problem has been addressed by many authors in view of the applications in image recovery and signal processing; see, for example, [5–9] and the references therein.

Throughout this paper, we always assume that H is a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, respectively. Let C be a nonempty closed convex subset of H and $P_C$ the metric projection from H onto C. Let $S: C \to C$ be a mapping. In this paper, we use $F(S)$ to denote the fixed point set of S; that is, $F(S) := \{x \in C : x = Sx\}$.

Recall that S is said to be nonexpansive iff

$$\|Sx - Sy\| \le \|x - y\|, \quad \forall x, y \in C.$$

If C is a bounded, closed, and convex subset of H, then F(S) is nonempty, closed, and convex; see [10].

S is said to be quasi-nonexpansive iff $F(S) \neq \emptyset$ and

$$\|Sx - y\| \le \|x - y\|, \quad \forall x \in C, y \in F(S).$$

It is easy to see that nonexpansive mappings are Lipschitz continuous, and every nonexpansive mapping with a nonempty fixed point set is quasi-nonexpansive; however, a quasi-nonexpansive mapping is in general discontinuous on its domain. Indeed, a quasi-nonexpansive mapping is only guaranteed to be continuous at the points of its fixed point set.

Let $A: C \to H$ be a mapping. Recall that A is said to be monotone iff

$$\langle Ax - Ay, x - y\rangle \ge 0, \quad \forall x, y \in C.$$

A is said to be strongly monotone iff there exists a constant α>0 such that

$$\langle Ax - Ay, x - y\rangle \ge \alpha\|x - y\|^2, \quad \forall x, y \in C.$$

For such a case, A is also said to be α-strongly monotone. A is said to be inverse-strongly monotone iff there exists a constant α>0 such that

$$\langle Ax - Ay, x - y\rangle \ge \alpha\|Ax - Ay\|^2, \quad \forall x, y \in C.$$

For such a case, A is also said to be α-inverse-strongly monotone. Notice that

$$\alpha\|Ax - Ay\|^2 \le \langle Ax - Ay, x - y\rangle \le \|Ax - Ay\|\,\|x - y\|$$

clearly shows that A is $\frac{1}{\alpha}$-Lipschitz continuous.

Recall that the classical variational inequality is to find an $x \in C$ such that

$$\langle Ax, y - x\rangle \ge 0, \quad \forall y \in C.$$
(1.1)

In this paper, we use $VI(C, A)$ to denote the solution set of (1.1). It is known that $x \in C$ is a solution to (1.1) iff x is a fixed point of the mapping $P_C(I - \lambda A)$, where $\lambda > 0$ is a constant, I stands for the identity mapping, and $P_C$ stands for the metric projection from H onto C.
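This equivalence can be seen from the standard characterization of the metric projection, $w = P_C z \iff \langle z - w, y - w\rangle \le 0$ for all $y \in C$; a short derivation:

$$x = P_C(x - \lambda A x) \iff \big\langle (x - \lambda A x) - x, y - x\big\rangle \le 0, \ \forall y \in C \iff \langle Ax, y - x\rangle \ge 0, \ \forall y \in C,$$

where the last step uses $\lambda > 0$.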

A multivalued operator $T: H \to 2^H$ with domain $D(T) = \{x \in H : Tx \neq \emptyset\}$ and range $R(T) = \{Tx : x \in D(T)\}$ is said to be monotone if for $x_1 \in D(T)$, $x_2 \in D(T)$, $y_1 \in Tx_1$, and $y_2 \in Tx_2$, we have $\langle x_1 - x_2, y_1 - y_2\rangle \ge 0$. A monotone operator T is said to be maximal if its graph $G(T) = \{(x, y) : y \in Tx\}$ is not properly contained in the graph of any other monotone operator. Let I denote the identity operator on H and let $T: H \to 2^H$ be a maximal monotone operator. Then we can define, for each $\lambda > 0$, a nonexpansive single-valued mapping $J_\lambda: H \to H$ by $J_\lambda = (I + \lambda T)^{-1}$. It is called the resolvent of T. We know that $T^{-1}(0) = F(J_\lambda)$ for all $\lambda > 0$ and that $J_\lambda$ is firmly nonexpansive.
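As a concrete illustration (not taken from this paper), consider $H = \mathbb{R}$ and $T = \partial|\cdot|$, the subdifferential of the absolute value, which is maximal monotone; its resolvent is the soft-thresholding operator familiar from signal processing. A minimal sketch, under these assumptions:

```python
import numpy as np

def soft_threshold(x, lam):
    """Resolvent J_lam = (I + lam*T)^{-1} for T = subdifferential of |.|;
    this is the classical soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# J_lam is (firmly) nonexpansive, and F(J_lam) = T^{-1}(0) = {0}:
x = np.linspace(-3.0, 3.0, 7)
print(soft_threshold(x, 1.0))        # each point is moved toward 0 by at most 1
print(soft_threshold(0.0, 1.0))      # 0 is a fixed point, i.e., a zero of T
```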

The Mann iterative algorithm is an efficient tool for studying fixed point problems of nonlinear operators. Recently, many authors have studied the common solution problem, that is, the problem of finding a point that lies simultaneously in the solution set of one nonlinear problem and in the fixed point (or zero point) set of another; see, for example, [11–30] and the references therein.

In [11], Kamimura and Takahashi investigated the problem of finding zero points of a maximal monotone operator by considering the following iterative algorithm:

$$x_0 \in H, \qquad x_{n+1} = \alpha_n x_n + (1 - \alpha_n) J_{\lambda_n} x_n, \quad n = 0, 1, 2, \ldots,$$
(1.2)

where $\{\alpha_n\}$ is a sequence in $(0,1)$, $\{\lambda_n\}$ is a positive sequence, $T: H \to 2^H$ is a maximal monotone operator, and $J_{\lambda_n} = (I + \lambda_n T)^{-1}$. They showed that the sequence $\{x_n\}$ generated in (1.2) converges weakly to some $z \in T^{-1}(0)$ provided that the control sequences satisfy some restrictions. Further, using this result, they also investigated the case $T = \partial f$, where $f: H \to (-\infty, \infty]$ is a proper lower semicontinuous convex function. Convergence theorems are established in the framework of real Hilbert spaces.
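A minimal numerical sketch of iteration (1.2), again with the assumed choice $T = \partial|\cdot|$ on $\mathbb{R}$ (so that $J_{\lambda_n}$ is soft-thresholding) and constant control sequences; the iterates should approach the unique zero of $T$, namely $0$:

```python
import numpy as np

def soft_threshold(x, lam):
    # resolvent of T = subdifferential of |.|
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = 5.0                                # x_0, an arbitrary starting point
for n in range(50):
    a_n, lam_n = 0.5, 1.0              # constant control sequences (an assumption)
    # iteration (1.2): x_{n+1} = a_n*x_n + (1 - a_n)*J_{lam_n} x_n
    x = a_n * x + (1 - a_n) * soft_threshold(x, lam_n)
print(x)                               # close to 0, the unique point of T^{-1}(0)
```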

In [12], Takahashi and Toyoda investigated the problem of finding a common solution of the variational inequality problem (1.1) and a fixed point problem involving nonexpansive mappings by considering the following iterative algorithm:

$$x_0 \in C, \qquad x_{n+1} = \alpha_n x_n + (1 - \alpha_n) S P_C(x_n - \lambda_n A x_n), \quad n \ge 0,$$
(1.3)

where $\{\alpha_n\}$ is a sequence in $(0,1)$, $\{\lambda_n\}$ is a positive sequence, $S: C \to C$ is a nonexpansive mapping, and $A: C \to H$ is an inverse-strongly monotone mapping. They showed that the sequence $\{x_n\}$ generated in (1.3) converges weakly to some $z \in VI(C, A) \cap F(S)$ provided that the control sequences satisfy some restrictions.
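A minimal numerical sketch of iteration (1.3) in $H = \mathbb{R}^2$, with assumed illustrative data: $C = [0,1]^2$, $A(x) = x - b$ (which is $1$-inverse-strongly monotone), and $S = P_C$ (nonexpansive with $F(S) = C$), so that $VI(C, A) \cap F(S) = \{P_C(b)\}$:

```python
import numpy as np

b = np.array([2.0, -0.5])
P_C = lambda x: np.clip(x, 0.0, 1.0)   # metric projection onto the box C
A = lambda x: x - b                    # 1-inverse-strongly monotone
S = P_C                                # nonexpansive, F(S) = C

x = np.array([0.3, 0.9])               # x_0 in C
for n in range(200):
    a_n, lam_n = 0.5, 1.0              # constant controls, 0 < lam_n < 2*alpha = 2
    # iteration (1.3): x_{n+1} = a_n*x_n + (1 - a_n)*S P_C(x_n - lam_n*A x_n)
    x = a_n * x + (1 - a_n) * S(P_C(x - lam_n * A(x)))
print(x, P_C(b))                       # both approximately [1.0, 0.0]
```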

The convergence in the results above is only weak. In this paper, motivated by these results, we consider the problem of finding a common solution to zero point problems and fixed point problems based on hybrid iterative methods with errors. Strong convergence theorems are established in the framework of Hilbert spaces.

To obtain our main results in this paper, we need the following lemmas and definitions.

Let C be a nonempty, closed, and convex subset of H and let $S: C \to C$ be a nonexpansive mapping. Then the mapping $I - S$ is demiclosed at zero; that is, if $\{x_n\}$ is a sequence in C such that $x_n \rightharpoonup \bar{x}$ and $x_n - Sx_n \to 0$, then $\bar{x} \in F(S)$.

Lemma [9]

Let C be a nonempty, closed, and convex subset of H, let $A: C \to H$ be a mapping, and let B be a maximal monotone operator on H. Then $F(J_\lambda(I - \lambda A)) = (A + B)^{-1}(0)$ for every $\lambda > 0$, where $J_\lambda = (I + \lambda B)^{-1}$.
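Indeed, the identity can be read off directly from the definition of the resolvent: for $\lambda > 0$,

$$x = J_\lambda(x - \lambda A x) \iff x - \lambda A x \in x + \lambda B x \iff -Ax \in Bx \iff 0 \in (A + B)x.$$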

2 Main results

Theorem 2.1 Let C be a nonempty closed convex subset of a real Hilbert space H, $A: C \to H$ be an α-inverse-strongly monotone mapping, $S: C \to C$ be a quasi-nonexpansive mapping such that $I - S$ is demiclosed at zero, and B be a maximal monotone operator on H such that the domain of B is included in C. Assume that $F = F(S) \cap (A + B)^{-1}(0) \neq \emptyset$. Let $\{\lambda_n\}$ be a positive real number sequence. Let $\{\alpha_n\}$ be a real number sequence in $[0,1]$. Let $\{x_n\}$ be a sequence in C generated in the following iterative process:

$$\begin{cases} x_1 \in C, \quad C_1 = C, \\ y_n = \alpha_n x_n + (1 - \alpha_n) S J_{\lambda_n}(x_n - \lambda_n A x_n), \\ C_{n+1} = \{z \in C_n : \|y_n - z\| \le \|x_n - z\|\}, \\ x_{n+1} = P_{C_{n+1}} x_1, \quad n \ge 1, \end{cases}$$

where $J_{\lambda_n} = (I + \lambda_n B)^{-1}$. Suppose that the sequences $\{\alpha_n\}$ and $\{\lambda_n\}$ satisfy the following restrictions:

  (a) $0 \le \alpha_n \le a < 1$;

  (b) $0 < b \le \lambda_n \le c < 2\alpha$.

Then the sequence $\{x_n\}$ converges strongly to $P_F x_1$.
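Before turning to the proof, here is a minimal numerical sketch of the scheme in $H = \mathbb{R}^2$; all concrete choices are illustrative assumptions, not part of the theorem: $C = [0,1]^2$, $A(x) = x - b$ ($1$-inverse-strongly monotone), $B = N_C$ so that $J_{\lambda_n} = P_C$, and $S = P_C$, whence $F = \{P_C(b)\}$. Each set $C_{n+1}$ adds one linear inequality, and the projection $P_{C_{n+1}} x_1$ is computed as a small quadratic program via SciPy.

```python
import numpy as np
from scipy.optimize import minimize

b = np.array([2.0, -0.5])
P_C = lambda x: np.clip(x, 0.0, 1.0)   # projection onto the box C = [0,1]^2
A = lambda x: x - b                    # 1-inverse-strongly monotone
S = P_C                                # quasi-nonexpansive, F(S) = C

x1 = np.array([0.2, 0.9])              # x_1 in C
x = x1.copy()
halfspaces = []                        # linear inequalities defining C_{n+1} inside C

for n in range(20):
    a_n, lam_n = 0.5, 1.0              # constant control sequences (an assumption)
    y = a_n * x + (1 - a_n) * S(P_C(x - lam_n * A(x)))
    # ||y - z|| <= ||x - z||   <=>   2(x - y)·z <= ||x||^2 - ||y||^2
    halfspaces.append((2.0 * (x - y), x @ x - y @ y))
    # x_{n+1} = P_{C_{n+1}} x1: minimize ||z - x1||^2 over the accumulated half-spaces
    cons = [{'type': 'ineq', 'fun': lambda z, a=a, r=r: r - a @ z}
            for a, r in halfspaces]
    res = minimize(lambda z: np.sum((z - x1) ** 2), x, method='SLSQP',
                   bounds=[(0.0, 1.0)] * 2, constraints=cons)
    x = res.x

print(x, P_C(b))                       # x is close to P_F x_1 = P_C(b) = [1, 0]
```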

Proof First, we show that $C_n$ is closed and convex for every $n \ge 1$. Notice that $C_1 = C$ is closed and convex. Suppose that $C_i$ is closed and convex for some $i \ge 1$. We show that $C_{i+1}$ is closed and convex. Indeed, for any $z \in C_i$, we see that

$$\|y_i - z\| \le \|x_i - z\|$$

is equivalent to

$$\|y_i\|^2 - \|x_i\|^2 - 2\langle z, y_i - x_i\rangle \le 0.$$

Thus $C_{i+1}$ is closed and convex. This shows that $C_n$ is closed and convex for every $n \ge 1$.

Next, we prove that $I - \lambda_n A$ is a nonexpansive mapping. Indeed, for all $x, y \in C$, we have

$$\|(I - \lambda_n A)x - (I - \lambda_n A)y\|^2 = \|x - y\|^2 - 2\lambda_n\langle x - y, Ax - Ay\rangle + \lambda_n^2\|Ax - Ay\|^2 \le \|x - y\|^2 - \lambda_n(2\alpha - \lambda_n)\|Ax - Ay\|^2.$$

In view of the restriction (b), we obtain that $I - \lambda_n A$ is nonexpansive. Next, we show that $F \subset C_n$ for each $n \ge 1$. From the assumption, we see that $F \subset C = C_1$. Assume that $F \subset C_i$ for some $i \ge 1$. For any $z \in F \subset C_i$, we find from the Lemma that

$$z = Sz = J_{\lambda_i}(z - \lambda_i A z).$$

Put $z_n = J_{\lambda_n}(x_n - \lambda_n A x_n)$. Since $J_{\lambda_n}$ and $I - \lambda_n A$ are nonexpansive, we have, for any $p \in F$,

$$\|z_n - p\| \le \|(x_n - \lambda_n A x_n) - (p - \lambda_n A p)\| \le \|x_n - p\|.$$
(2.1)

Taking $n = i$ and $p = z$ in (2.1), we find that

$$\|y_i - z\| = \|\alpha_i x_i + (1 - \alpha_i) S z_i - z\| \le \alpha_i\|x_i - z\| + (1 - \alpha_i)\|S z_i - z\| \le \alpha_i\|x_i - z\| + (1 - \alpha_i)\|z_i - z\| \le \|x_i - z\|.$$

This shows that $z \in C_{i+1}$. This proves that $F \subset C_n$ for every $n \ge 1$. Notice that $x_n = P_{C_n} x_1$. For every $z \in F \subset C_n$, we have

$$\|x_1 - x_n\| \le \|x_1 - z\|.$$

In particular, we have

$$\|x_1 - x_n\| \le \|x_1 - P_F x_1\|.$$

This implies that $\{x_n\}$ is bounded. Since $x_n = P_{C_n} x_1$ and $x_{n+1} = P_{C_{n+1}} x_1 \in C_{n+1} \subset C_n$, we arrive at

$$0 \le \langle x_1 - x_n, x_n - x_{n+1}\rangle \le -\|x_1 - x_n\|^2 + \|x_1 - x_n\|\,\|x_1 - x_{n+1}\|.$$

It follows that

$$\|x_n - x_1\| \le \|x_{n+1} - x_1\|.$$

This implies that $\lim_{n\to\infty}\|x_n - x_1\|$ exists. On the other hand, since $x_{n+1} \in C_n$ and $x_n = P_{C_n} x_1$, we have

$$\|x_{n+1} - x_n\|^2 = \|x_{n+1} - x_1\|^2 - \|x_n - x_1\|^2 - 2\langle x_{n+1} - x_n, x_n - x_1\rangle \le \|x_{n+1} - x_1\|^2 - \|x_n - x_1\|^2.$$

It follows that

$$\lim_{n\to\infty}\|x_n - x_{n+1}\| = 0.$$
(2.2)

Notice that $x_{n+1} = P_{C_{n+1}} x_1 \in C_{n+1}$. By the definition of $C_{n+1}$, it follows that

$$\|y_n - x_{n+1}\| \le \|x_n - x_{n+1}\|.$$

This in turn implies that

$$\|y_n - x_n\| \le \|y_n - x_{n+1}\| + \|x_n - x_{n+1}\| \le 2\|x_n - x_{n+1}\|.$$

In view of (2.2), we obtain that

$$\lim_{n\to\infty}\|x_n - y_n\| = 0.$$
(2.3)

On the other hand, we have

$$\|x_n - y_n\| = (1 - \alpha_n)\|x_n - S z_n\|.$$

In view of the restriction (a), it follows from (2.3) that

$$\lim_{n\to\infty}\|x_n - S z_n\| = 0.$$
(2.4)

For any $p \in F$, we see that

$$\begin{aligned} \|z_n - p\|^2 &= \|J_{\lambda_n}(x_n - \lambda_n A x_n) - J_{\lambda_n}(p - \lambda_n A p)\|^2 \\ &\le \|x_n - p\|^2 - 2\lambda_n\langle x_n - p, A x_n - A p\rangle + \lambda_n^2\|A x_n - A p\|^2 \\ &\le \|x_n - p\|^2 - \lambda_n(2\alpha - \lambda_n)\|A x_n - A p\|^2. \end{aligned}$$
(2.5)

Notice that

$$\|y_n - p\|^2 \le \alpha_n\|x_n - p\|^2 + (1 - \alpha_n)\|S z_n - p\|^2 \le \alpha_n\|x_n - p\|^2 + (1 - \alpha_n)\|z_n - p\|^2.$$
(2.6)

Substituting (2.5) into (2.6), we see that

$$\|y_n - p\|^2 \le \|x_n - p\|^2 - (1 - \alpha_n)\lambda_n(2\alpha - \lambda_n)\|A x_n - A p\|^2.$$

It follows that

$$(1 - \alpha_n)\lambda_n(2\alpha - \lambda_n)\|A x_n - A p\|^2 \le \|x_n - p\|^2 - \|y_n - p\|^2 \le \big(\|x_n - p\| + \|y_n - p\|\big)\|x_n - y_n\|.$$

In view of the restrictions (a) and (b), this together with (2.3) implies that

$$\lim_{n\to\infty}\|A x_n - A p\| = 0.$$
(2.7)

On the other hand, we have

$$\begin{aligned} \|z_n - p\|^2 &= \|J_{\lambda_n}(x_n - \lambda_n A x_n) - J_{\lambda_n}(p - \lambda_n A p)\|^2 \\ &\le \big\langle (x_n - \lambda_n A x_n) - (p - \lambda_n A p), z_n - p\big\rangle \\ &= \frac{1}{2}\Big(\|(x_n - \lambda_n A x_n) - (p - \lambda_n A p)\|^2 + \|z_n - p\|^2 \\ &\qquad - \|(x_n - \lambda_n A x_n) - (p - \lambda_n A p) - (z_n - p)\|^2\Big) \\ &\le \frac{1}{2}\Big(\|x_n - p\|^2 + \|z_n - p\|^2 - \|x_n - z_n - \lambda_n(A x_n - A p)\|^2\Big) \\ &= \frac{1}{2}\Big(\|x_n - p\|^2 + \|z_n - p\|^2 - \|x_n - z_n\|^2 - \lambda_n^2\|A x_n - A p\|^2 + 2\lambda_n\langle x_n - z_n, A x_n - A p\rangle\Big) \\ &\le \frac{1}{2}\Big(\|x_n - p\|^2 + \|z_n - p\|^2 - \|x_n - z_n\|^2 + 2\lambda_n\|x_n - z_n\|\,\|A x_n - A p\|\Big). \end{aligned}$$

It follows that

$$\|z_n - p\|^2 \le \|x_n - p\|^2 - \|x_n - z_n\|^2 + 2\lambda_n\|x_n - z_n\|\,\|A x_n - A p\|.$$
(2.8)

Substituting (2.8) into (2.6), we see that

$$\|y_n - p\|^2 \le \|x_n - p\|^2 - (1 - \alpha_n)\|x_n - z_n\|^2 + 2(1 - \alpha_n)\lambda_n\|x_n - z_n\|\,\|A x_n - A p\|.$$

It follows that

$$(1 - \alpha_n)\|x_n - z_n\|^2 \le \big(\|x_n - p\| + \|y_n - p\|\big)\|x_n - y_n\| + 2(1 - \alpha_n)\lambda_n\|x_n - z_n\|\,\|A x_n - A p\|.$$

In view of the restriction (a), we obtain from (2.3) and (2.7) that

$$\lim_{n\to\infty}\|x_n - z_n\| = 0.$$
(2.9)

Since $\{x_n\}$ is bounded, we may assume that there is a subsequence $\{x_{n_i}\}$ of $\{x_n\}$ converging weakly to some point $x^*$. It follows from (2.9) that $z_{n_i}$ also converges weakly to $x^*$. Notice that

$$\|S z_n - z_n\| \le \|S z_n - x_n\| + \|x_n - z_n\|.$$

It follows from (2.4) and (2.9) that

$$\lim_{n\to\infty}\|S z_n - z_n\| = 0.$$

In view of the assumption that $I - S$ is demiclosed at zero, we see that $x^* \in F(S)$.

Next, we show that $x^* \in (A + B)^{-1}(0)$. Notice that $z_n = J_{\lambda_n}(x_n - \lambda_n A x_n)$. This implies that

$$x_n - \lambda_n A x_n \in (I + \lambda_n B) z_n.$$

That is,

$$\frac{x_n - z_n}{\lambda_n} - A x_n \in B z_n.$$

Since B is monotone, we get, for any $(u, v) \in G(B)$, that

$$\Big\langle z_n - u, \frac{x_n - z_n}{\lambda_n} - A x_n - v\Big\rangle \ge 0.$$
(2.10)

Since A is α-inverse-strongly monotone, (2.7) and the weak convergence $x_{n_i} \rightharpoonup x^*$ yield $A x^* = A p$, so that $A x_{n_i} \to A x^*$. Replacing n by $n_i$ in (2.10) and letting $i \to \infty$, we obtain from (2.9) and the restriction (b) that

$$\langle x^* - u, -A x^* - v\rangle \ge 0.$$

Since B is maximal monotone, this means $-A x^* \in B x^*$, that is, $0 \in (A + B)(x^*)$. Hence, we get $x^* \in (A + B)^{-1}(0)$. This completes the proof that $x^* \in F$.

Since $P_F x_1 \in C_{n+1}$ and $x_{n+1} = P_{C_{n+1}} x_1$, we have

$$\|x_1 - x_{n+1}\| \le \|x_1 - P_F x_1\|.$$

On the other hand, since the norm is weakly lower semicontinuous, we have

$$\|x_1 - P_F x_1\| \le \|x_1 - x^*\| \le \liminf_{i\to\infty}\|x_1 - x_{n_i}\| \le \limsup_{i\to\infty}\|x_1 - x_{n_i}\| \le \|x_1 - P_F x_1\|.$$

We, therefore, obtain that

$$\|x_1 - x^*\| = \lim_{i\to\infty}\|x_1 - x_{n_i}\| = \|x_1 - P_F x_1\|.$$

This implies that $x_{n_i} \to x^*$ and, by the uniqueness of the metric projection, $x^* = P_F x_1$. Since $\{x_{n_i}\}$ is an arbitrary weakly convergent subsequence of $\{x_n\}$, we obtain that $x_n \to P_F x_1$ as $n \to \infty$. This completes the proof. □

From Theorem 2.1, we have the following results immediately.

Corollary 2.2 Let C be a nonempty closed convex subset of a real Hilbert space H, $A: C \to H$ be an α-inverse-strongly monotone mapping, and B be a maximal monotone operator on H such that the domain of B is included in C. Assume that $(A + B)^{-1}(0) \neq \emptyset$. Let $\{\lambda_n\}$ be a positive real number sequence. Let $\{\alpha_n\}$ be a real number sequence in $[0,1]$. Let $\{x_n\}$ be a sequence in C generated in the following iterative process:

$$\begin{cases} x_1 \in C, \quad C_1 = C, \\ y_n = \alpha_n x_n + (1 - \alpha_n) J_{\lambda_n}(x_n - \lambda_n A x_n), \\ C_{n+1} = \{z \in C_n : \|y_n - z\| \le \|x_n - z\|\}, \\ x_{n+1} = P_{C_{n+1}} x_1, \quad n \ge 1, \end{cases}$$

where $J_{\lambda_n} = (I + \lambda_n B)^{-1}$. Suppose that the sequences $\{\alpha_n\}$ and $\{\lambda_n\}$ satisfy the following restrictions:

  (a) $0 \le \alpha_n \le a < 1$;

  (b) $0 < b \le \lambda_n \le c < 2\alpha$.

Then the sequence $\{x_n\}$ converges strongly to $P_{(A+B)^{-1}(0)} x_1$.

Let $f: H \to (-\infty, +\infty]$ be a proper lower semicontinuous convex function. Define the subdifferential

$$\partial f(x) = \big\{z \in H : f(x) + \langle y - x, z\rangle \le f(y), \ \forall y \in H\big\}$$

for all $x \in H$. Then $\partial f$ is a maximal monotone operator on H; see [23] for more details. Let C be a nonempty closed convex subset of H and let $i_C$ be the indicator function of C, that is,

$$i_C(x) = \begin{cases} 0, & x \in C, \\ \infty, & x \notin C. \end{cases}$$

Furthermore, we define the normal cone $N_C(v)$ of C at v as follows:

$$N_C(v) = \big\{z \in H : \langle z, y - v\rangle \le 0, \ \forall y \in C\big\}$$

for any $v \in C$. Then $i_C: H \to (-\infty, +\infty]$ is a proper lower semicontinuous convex function on H and $\partial i_C$ is a maximal monotone operator. Let $J_\lambda x = (I + \lambda\,\partial i_C)^{-1} x$ for any $\lambda > 0$ and $x \in H$. From $\partial i_C(x) = N_C(x)$ for $x \in C$, we get

$$v = J_\lambda x \quad\Longleftrightarrow\quad x \in v + \lambda N_C(v) \quad\Longleftrightarrow\quad \langle x - v, y - v\rangle \le 0, \ \forall y \in C \quad\Longleftrightarrow\quad v = P_C x,$$

where $P_C$ is the metric projection from H onto C. Similarly, we can get that $x \in (A + \partial i_C)^{-1}(0) \iff x \in VI(C, A)$. Putting $B = \partial i_C$ in Theorem 2.1, we see that $J_{\lambda_n} = P_C$; a small numerical check of this identity is sketched below, and the following corollary is then not hard to derive.
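As the forward reference above suggests, the identity $J_\lambda = (I + \lambda\,\partial i_C)^{-1} = P_C$ can be checked numerically: the resolvent of $\partial i_C$ is the minimizer of $i_C(z) + \|z - x\|^2/(2\lambda)$, i.e., the constrained minimizer of $\|z - x\|^2$ over C. A minimal sketch with an assumed box $C = [0,1]^2$ and SciPy:

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([1.7, -0.4])
lam = 0.8

# J_lam x = argmin_{z in C} ( ||z - x||^2 / (2*lam) ), here C = [0,1]^2
res = minimize(lambda z: np.sum((z - x) ** 2) / (2.0 * lam), x,
               bounds=[(0.0, 1.0)] * 2)
print(res.x, np.clip(x, 0.0, 1.0))     # both approximately [1.0, 0.0]
```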

Corollary 2.3 Let C be a nonempty closed convex subset of a real Hilbert space H, $A: C \to H$ be an α-inverse-strongly monotone mapping, and $S: C \to C$ be a quasi-nonexpansive mapping such that $I - S$ is demiclosed at zero. Assume that $F = F(S) \cap VI(C, A) \neq \emptyset$. Let $\{\lambda_n\}$ be a positive real number sequence. Let $\{\alpha_n\}$ be a real number sequence in $[0,1]$. Let $\{x_n\}$ be a sequence in C generated in the following iterative process:

$$\begin{cases} x_1 \in C, \quad C_1 = C, \\ y_n = \alpha_n x_n + (1 - \alpha_n) S P_C(x_n - \lambda_n A x_n), \\ C_{n+1} = \{z \in C_n : \|y_n - z\| \le \|x_n - z\|\}, \\ x_{n+1} = P_{C_{n+1}} x_1, \quad n \ge 1. \end{cases}$$

Suppose that the sequences { α n } and { λ n } satisfy the following restrictions:

  (a) $0 \le \alpha_n \le a < 1$;

  (b) $0 < b \le \lambda_n \le c < 2\alpha$.

Then the sequence $\{x_n\}$ converges strongly to $P_F x_1$.

In view of Corollary 2.3, we have the following corollary on variational inequalities.

Corollary 2.4 Let C be a nonempty closed convex subset of a real Hilbert space H and $A: C \to H$ be an α-inverse-strongly monotone mapping. Assume that $F = VI(C, A) \neq \emptyset$. Let $\{\lambda_n\}$ be a positive real number sequence. Let $\{\alpha_n\}$ be a real number sequence in $[0,1]$. Let $\{x_n\}$ be a sequence in C generated in the following iterative process:

$$\begin{cases} x_1 \in C, \quad C_1 = C, \\ y_n = \alpha_n x_n + (1 - \alpha_n) P_C(x_n - \lambda_n A x_n), \\ C_{n+1} = \{z \in C_n : \|y_n - z\| \le \|x_n - z\|\}, \\ x_{n+1} = P_{C_{n+1}} x_1, \quad n \ge 1. \end{cases}$$

Suppose that the sequences { α n } and { λ n } satisfy the following restrictions:

  (a) $0 \le \alpha_n \le a < 1$;

  (b) $0 < b \le \lambda_n \le c < 2\alpha$.

Then the sequence $\{x_n\}$ converges strongly to $P_{VI(C,A)} x_1$.