# An inertial based forward–backward algorithm for monotone inclusion problems and split mixed equilibrium problems in Hilbert spaces

## Abstract

Iterative algorithms are widely applied to solve convex optimization problems under a suitable set of constraints. In this paper, we develop an iterative algorithm whose architecture comprises a modified version of the forward-backward splitting algorithm and the hybrid shrinking projection algorithm. We provide theoretical results concerning weak and strong convergence of the proposed algorithm to a common solution of the monotone inclusion problem and the split mixed equilibrium problem in Hilbert spaces. Moreover, numerical experiments show that the efficiency of the proposed algorithm compares favorably with that of existing algorithms. As a consequence, our results improve various existing results in the current literature.

## Introduction

Convex optimization is widely and increasingly used as a tool to solve various problems arising in applied mathematics, with applications in engineering, medicine, economics, management, industry, and other branches of science. The subject is not only expanding in all directions of science but also serves as an interdisciplinary bridge between its various branches. To solve a convex optimization problem, one can use optimization algorithms or iterative methods to find a feasible solution. Iterative methods are ubiquitous in the theory of convex optimization, and new iterative and theoretical techniques continue to be proposed and analyzed for the solution of various real-world and theoretical problems that can be modeled in the general framework of convex optimization. Such an algorithm or iterative method deals with selecting the best among many possible decisions in a real-life environment, constructing computational methods to find optimal solutions, exploring the theoretical properties of these methods, and studying the computational performance of the numerical algorithms implemented from them.

Monotone operator theory is a fascinating field of research in nonlinear functional analysis and has found valuable applications in convex optimization, subgradients, partial differential equations, variational inequalities, signal and image processing, evolution equations, and inclusions; see, for instance, [1–3, 6, 8, 19, 21, 26, 27, 30–35, 43, 47, 50, 52, 53] and the references cited therein. It is remarked that a convex optimization problem can be translated into the problem of finding a zero of a maximal monotone operator defined on a Hilbert space. On the other hand, the problem of finding a zero of the sum of two (maximal) monotone operators is of fundamental importance in convex optimization and variational analysis [38, 46, 56, 57]. The forward-backward algorithm is prominent among the various splitting algorithms for finding a zero of the sum of two maximal monotone operators. The class of splitting algorithms admits parallel computing architectures and thus reduces the complexity of the problems under consideration. Moreover, the forward-backward algorithm efficiently tackles problems involving smooth and/or nonsmooth functions.

In 1964, Polyak employed the inertial extrapolation technique, based on the heavy-ball method for a second-order time dynamical system, to equip iterative algorithms with fast convergence characteristics. It is remarked that the inertial term is computed from the difference of the two preceding iterates. The inertial extrapolation technique was originally proposed for minimizing differentiable convex functions, but it has been generalized in different ways. The heavy-ball method has been incorporated in various iterative algorithms to obtain fast convergence characteristics; see, for example, [4, 5, 10–12, 22, 40, 45] and the references cited therein. It is worth mentioning that the forward-backward algorithm has been modified by employing the heavy-ball method for convex optimization problems.
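For concreteness, the heavy-ball iteration described above can be sketched numerically as follows; this is a minimal illustration on a convex quadratic, and the step size and momentum parameter are illustrative choices, not taken from the works cited above.

```python
import numpy as np

def heavy_ball(grad, x0, step=0.1, momentum=0.5, iters=200):
    """Polyak's heavy-ball iteration
        x_{n+1} = x_n - step * grad(x_n) + momentum * (x_n - x_{n-1}),
    where the inertial term momentum * (x_n - x_{n-1}) is computed from the
    difference of the two preceding iterates."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        x_next = x - step * grad(x) + momentum * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Minimize g(x) = 0.5 * x^T Q x; the unique minimizer is the origin.
Q = np.array([[3.0, 0.0], [0.0, 1.0]])
x_star = heavy_ball(lambda x: Q @ x, np.array([5.0, -4.0]))
```

Dropping the momentum term recovers plain gradient descent, which converges more slowly on ill-conditioned quadratics; the inertial term is what yields the fast convergence characteristic discussed above.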

The theory of equilibrium problems is a systematic approach to studying a diverse range of problems arising in physics, optimization, variational inequalities, transportation, economics, networks, and noncooperative games; see, for example, [9, 21, 23] and the references cited therein. The existence theory for equilibrium problems goes back to the seminal work of Blum and Oettli. Moreover, this theory has a computational flavor and has flourished significantly since the influential paper of Combettes and Hirstoaga. The classical equilibrium problem theory has been generalized in several interesting ways to solve real-world problems. In 2012, Censor et al. proposed the split variational inequality problem (SVIP), which aims to solve a pair of variational inequality problems in such a way that the solution of one variational inequality problem, under a given bounded linear operator, solves the other.

Motivated by the work of Censor et al., Moudafi generalized the concept of the SVIP to that of split monotone variational inclusions (SMVIP), which includes, as special cases, the split variational inequality problem, the split common fixed point problem, the split zeros problem, the split equilibrium problem, and the split feasibility problem. These problems have already been studied and successfully employed as models in intensity-modulated radiation therapy treatment planning; see [14, 15]. This formalism is also at the core of the modeling of many inverse problems arising in phase retrieval and other real-world problems, for instance, in sensor networks, computerized tomography, and data compression; see, for example, [18, 21]. Several methods have been proposed and analyzed to solve the split equilibrium problem and the split mixed equilibrium problem in Hilbert spaces; see, for example, [24, 25, 28, 29, 36, 37, 51, 54, 59, 60] and the references cited therein. Inspired and motivated by the above-mentioned results and the ongoing research in this direction, we employ a modified inertial forward-backward algorithm to find a common solution of the monotone inclusion problem and the split mixed equilibrium problem (SMEP) in Hilbert spaces. The proposed algorithm converges weakly to the common solution under a suitable set of control conditions. Strong convergence of the proposed algorithm is also obtained by employing the shrinking effect of half spaces.

The rest of the paper is organized as follows: Sect. 2 contains preliminary concepts and results regarding monotone operator theory and equilibrium problem theory. Section 3 comprises weak and strong convergence results of the proposed algorithm. Section 4 deals with the efficiency of the proposed algorithm and its comparison with existing algorithms by numerical experiments.

## Preliminaries

Throughout this section, we fix some necessary notions and concepts which will be required in the sequel (see [7, 8] for a detailed account). We denote by $$\mathbb{N}$$ the set of all natural numbers and by $$\mathbb{R}$$ the set of all real numbers. Let $$C\subseteq \mathcal{H}_{1}$$ and $$Q\subseteq \mathcal{H}_{2}$$ be two nonempty subsets of real Hilbert spaces $$\mathcal{H}_{1}$$ and $$\mathcal{H}_{2}$$ with inner product $$\langle \cdot,\cdot \rangle$$ and associated norm $$\Vert \cdot \Vert$$. We write $$x_{n}\rightarrow x$$ (resp. $$x_{n}\rightharpoonup x$$) to indicate strong convergence (resp. weak convergence) of a sequence $$\{x_{n}\}_{n=1}^{\infty }$$.

Let $$A:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}$$ be an operator. We denote by $$\operatorname{dom} ( A ) = \{ x\in \mathcal{H}_{1}:Ax\neq \emptyset \}$$ the domain of A, by $$\operatorname{Gr}(A)= \{ ( x,u ) \in \mathcal{H}_{1}\times \mathcal{H}_{1}:u\in Ax \}$$ the graph of A, and by $$\operatorname{zer} ( A ) = \{ x\in \mathcal{H}_{1}:0\in Ax \}$$ the set of zeros of A. The inverse of A, that is, $$A^{-1}$$ is defined as $$( u,x ) \in \operatorname{Gr}(A^{-1})$$ if and only if $$( x,u ) \in \operatorname{Gr}(A)$$ and the resolvent of A is denoted as $$J_{A}= ( \operatorname{Id}+A ) ^{-1}$$, where Id denotes the identity operator. It is remarked that $$J_{A}:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}$$ is a single-valued and maximal monotone operator provided that A is maximal monotone. Recall that A is said to be: (i) monotone if $$\langle x-y,u-v \rangle \geq 0$$ for all $$( x,u ), ( y,v ) \in \operatorname{Gr}(A)$$; (ii) maximally monotone if A is monotone and there exists no monotone operator $$B:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}$$ such that $$\operatorname{Gr}(B)$$ properly contains $$\operatorname{Gr}(A)$$; (iii) strongly monotone with modulus $$\alpha >0$$ if $$\langle x-y,u-v \rangle \geq \alpha \Vert x-y \Vert ^{2}$$ for all $$( x,u ), ( y,v ) \in \operatorname{Gr}(A)$$; and (iv) inverse strongly monotone (co-coercive) with parameter $$\beta >0$$ if A is single-valued and $$\langle x-y,Ax-Ay \rangle \geq \beta \Vert Ax-Ay \Vert ^{2}$$ for all $$x,y\in \mathcal{H}_{1}$$.

Let $$f:\mathcal{H}_{1}\rightarrow \mathbb{R}\cup \{ +\infty \}$$ be a proper convex lower semicontinuous function, and let $$g:\mathcal{H}_{1}\rightarrow \mathbb{R}$$ be a convex differentiable function with Lipschitz continuous gradient; then the convex minimization problem for f and g is defined as follows:

$$\min_{x\in \mathcal{H}_{1}} \bigl\{ f ( x ) +g ( x ) \bigr\} .$$

The subdifferential of a function f is defined and denoted as follows:

$$\partial f ( x ) = \bigl\{ x^{\ast }\in \mathcal{H}_{1}:f ( y ) \geq f ( x ) + \bigl\langle x^{\ast },y-x \bigr\rangle \text{ for all }y\in \mathcal{H}_{1} \bigr\} .$$

It is remarked that the subdifferential of a proper convex lower semicontinuous function is a maximally monotone operator. The proximity operator of a function f is defined as follows:

$$\operatorname{prox}_{f}:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}:x\mapsto \underset{y\in \mathcal{H}_{1}}{ \operatorname{argmin}} \biggl( f ( y ) +\frac{1}{2} \Vert x-y \Vert ^{2} \biggr).$$

Note that the proximity operator is linked with the subdifferential operator in such a way that $$\operatorname{argmin} ( f ) =\operatorname{zer} ( \partial f )$$. Moreover, $$\operatorname{prox}_{f}=J_{ \partial f}$$. Utilizing this connection, we state that the monotone inclusion problem with respect to a maximally monotone operator A and an arbitrary operator B is to find

$$x^{\ast }\in C\quad\text{such that }0\in Ax^{\ast }+Bx^{\ast }.$$
(1)

The solution set of problem (1) is denoted by $$\operatorname{zer} ( A+B )$$.
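As a hedged illustration of problem (1) and the forward-backward iteration mentioned in the introduction, the sketch below takes one operator to be the gradient of the smooth function $$g(x)=\frac{1}{2}\Vert Mx-b\Vert^{2}$$ and the other to be the subdifferential of $$\lambda \Vert \cdot \Vert_{1}$$, whose resolvent has the closed form of soft-thresholding. The data M, b and the parameter λ are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def prox_l1(x, t):
    """Proximity operator of t*||.||_1, i.e. argmin_y { t*||y||_1 + 0.5*||x - y||^2 },
    which is the soft-thresholding map (a closed-form resolvent)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(M, b, lam=0.1, iters=500):
    """Forward-backward iteration for 0 in Ax + Bx with A = grad(0.5*||Mx-b||^2)
    and B = subdifferential of lam*||.||_1:
        x_{n+1} = prox_{s*lam*||.||_1}(x_n - s * M^T (M x_n - b))."""
    s = 1.0 / np.linalg.norm(M.T @ M, 2)   # step size below 2/L, L = Lipschitz constant of A
    x = np.zeros(M.shape[1])
    for _ in range(iters):
        x = prox_l1(x - s * M.T @ (M @ x - b), s * lam)  # forward step, then backward step
    return x

# With M = Id, the minimizer of 0.5*||x - b||^2 + lam*||x||_1 is the
# soft-thresholded vector prox_l1(b, lam).
x_hat = forward_backward(np.eye(2), np.array([1.0, 0.05]), lam=0.1)
```

The forward step applies the single-valued operator explicitly, while the backward step applies the resolvent of the set-valued operator; this split is what makes the method applicable to sums of smooth and nonsmooth terms.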

We now define the concept of split-mixed equilibrium problem (SMEP).

Let $$F:C\times C\rightarrow \mathbb{R}$$ and $$G:Q\times Q\rightarrow \mathbb{R}$$ be two bifunctions. Let $$\phi _{f}:C\rightarrow \mathbb{R}\cup \{ +\infty \}$$ and $$\phi _{g}:Q\rightarrow \mathbb{R}\cup \{ +\infty \}$$ be two proper convex lower semicontinuous functions, and let $$h:\mathcal{H}_{1}\rightarrow \mathcal{H}_{2}$$ be a bounded linear operator. The SMEP is to find

$$x^{\ast }\in C\quad\text{such that }F \bigl( x^{\ast },x \bigr) + \phi _{f}(x)-\phi _{f}\bigl(x^{\ast }\bigr)\geq 0\text{ for all }x\in C$$
(2)

and

$$y^{\ast }=hx^{\ast }\in Q\quad\text{such that }G \bigl( y^{\ast },y \bigr) +\phi _{g}(y)-\phi _{g} \bigl(y^{\ast }\bigr)\geq 0\text{ for all }y\in Q.$$
(3)

It is remarked that inequality (2) represents the mixed equilibrium problem, and its solution set is denoted by $$\operatorname{MEP}(F,\phi _{f})$$. The solution set of the SMEP as defined in (2) and (3) is denoted by

$$\operatorname{SMEP}(F,\phi _{f},G,\phi _{g}):=\bigl\{ x^{\ast }\in C:x^{\ast }\in \operatorname{MEP}(F, \phi _{f})\text{ and } hx^{\ast }\in \operatorname{MEP}(G,\phi _{g})\bigr\} .$$

Let C be a nonempty closed convex subset of a Hilbert space $$\mathcal{H}_{1}$$. For each $$x\in \mathcal{H}_{1}$$, there exists a unique nearest point of C, denoted by $$P_{C}x$$, such that

$$\Vert x-P_{C}x \Vert \leq \Vert x-y \Vert \quad\text{ for all }y \in C.$$

Such a mapping $$P_{C}:\mathcal{H}_{1}\rightarrow C$$ is known as the metric projection or the nearest point projection of $$\mathcal{H}_{1}$$ onto C. Moreover, $$P_{C}$$ is nonexpansive and satisfies $$\langle x-P_{C}x,P_{C}x-y \rangle \geq 0$$ for all $$x\in \mathcal{H}_{1}$$ and $$y\in C$$. It is remarked that $$P_{C}$$ is a firmly nonexpansive mapping from $$\mathcal{H}_{1}$$ onto C, that is,

$$\Vert P_{C}x-P_{C}y \Vert ^{2}\leq \langle x-y,P_{C}x-P_{C}y \rangle \quad\text{for all }x,y\in \mathcal{H}_{1}.$$
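The metric projection and its firm nonexpansiveness can be checked on a concrete set; the closed unit ball below is an illustrative choice of C for which $$P_{C}$$ has a closed form.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Metric projection P_C onto the closed ball C = {y : ||y|| <= radius},
    i.e. the unique nearest point of C to x."""
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

# Firm nonexpansiveness: ||P_C x - P_C y||^2 <= <x - y, P_C x - P_C y>.
x, y = np.array([3.0, 0.0]), np.array([0.0, 2.0])
px, py = project_ball(x), project_ball(y)   # px = [1, 0], py = [0, 1]
lhs = np.linalg.norm(px - py) ** 2          # = 2
rhs = np.dot(x - y, px - py)                # = 5, so the inequality holds
```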

The following lemma collects some well-known results in the context of a real Hilbert space.

### Lemma 2.1


The following properties hold in a real Hilbert space $$\mathcal{H}_{1}$$:

1. 1

$$\Vert x-y\Vert ^{2}=\Vert x\Vert ^{2}-\Vert y\Vert ^{2}-2\langle x-y,y \rangle$$for all $$x,y\in \mathcal{H}_{1}$$;

2. 2

$$\Vert x+y\Vert ^{2}\leq \Vert x\Vert ^{2}+2\langle y,x+y\rangle$$for all $$x,y\in \mathcal{H}_{1}$$;

3. 3

$$\Vert \alpha x+(1-\alpha )y\Vert ^{2}=\alpha \Vert x\Vert ^{2}+(1- \alpha )\Vert y\Vert ^{2}-\alpha (1-\alpha )\Vert x-y\Vert ^{2}$$for every $$x,y\in \mathcal{H}_{1}$$and $$\alpha \in [ 0,1]$$.
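The three identities above are elementary inner-product computations; as a quick numerical sanity check in $$\mathbb{R}^{3}$$ (the random vectors and the value of α below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(3), rng.standard_normal(3)
a = 0.3
sq = lambda v: float(np.dot(v, v))   # squared Hilbert norm ||v||^2 = <v, v>

# (1) ||x - y||^2 = ||x||^2 - ||y||^2 - 2<x - y, y>
lhs1, rhs1 = sq(x - y), sq(x) - sq(y) - 2 * np.dot(x - y, y)

# (2) ||x + y||^2 <= ||x||^2 + 2<y, x + y>
ok2 = sq(x + y) <= sq(x) + 2 * np.dot(y, x + y) + 1e-12

# (3) ||a x + (1-a) y||^2 = a ||x||^2 + (1-a) ||y||^2 - a (1-a) ||x - y||^2
lhs3 = sq(a * x + (1 - a) * y)
rhs3 = a * sq(x) + (1 - a) * sq(y) - a * (1 - a) * sq(x - y)
```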

### Lemma 2.2


Let C be a nonempty closed convex subset of a real Hilbert space $$\mathcal{H}_{1}$$, and let $$T:C\rightarrow C$$be a nonexpansive mapping; then $$(\operatorname{Id}-T)$$is demiclosed at the origin. That is, if $$\{x_{n}\}$$is a sequence in C such that $$x_{n}\rightharpoonup x$$and $$(\operatorname{Id}-T)x_{n}\rightarrow 0$$, then $$(\operatorname{Id}-T)x=0$$.

### Assumption 2.3


Let C be a nonempty closed convex subset of a real Hilbert space $$\mathcal{H}_{1}$$. Let $$F:C\times C\rightarrow \mathbb{R}$$ be a bifunction and $$\phi _{f}: C \rightarrow \mathbb{R}\cup \{+\infty \}$$ be a proper convex lower semicontinuous function, where F satisfies the following conditions:

1. (A1)

$$F(x,x)=0$$ for all $$x\in C$$;

2. (A2)

F is monotone, i.e., $$F(x,y)+F(y,x)\leq 0$$ for all $$x,y\in C$$;

3. (A3)

for each $$x,y,z\in C$$, $$\limsup_{t\downarrow 0}F(tz+(1-t)x,y) \leq F(x,y)$$;

4. (A4)

for each $$x\in C$$, $$y\mapsto F(x,y)$$ is convex and lower semi-continuous.

### Lemma 2.4


Let C be a nonempty closed convex subset of a real Hilbert space $$\mathcal{H}_{1}$$, and let $$F,\phi _{f}$$be as in Assumption 2.3such that $$C\cap \operatorname{dom} ( \phi _{f} ) \neq \emptyset$$. For $$r>0$$and $$x\in \mathcal{H}_{1}$$, there exists $$z\in C$$such that

$$F(z,y)+\phi _{f}(y)-\phi _{f}(z)+\frac{1}{r}\langle y-z,z-x\rangle \geq 0\quad\textit{for all }y\in C.$$

Moreover, define a mapping $$T_{r}^{F}:\mathcal{H}_{1}\rightarrow C$$by

$$T_{r}^{F}(x)= \biggl\{ z\in C:F(z,y)+\phi _{f}(y)-\phi _{f}(z)+ \frac{1}{r}\langle y-z,z-x\rangle \geq 0 \textit{ for all }y\in C \biggr\}$$

for all $$x\in \mathcal{H}_{1}$$. Then the following results hold:

1. (1)

$$T_{r}^{F}\textit{ is single-valued;}$$

2. (2)

$$T_{r}^{F}$$is firmly nonexpansive, i.e., for every $$x,y \in \mathcal{H}_{1}, \Vert T_{r}^{F}x-T_{r}^{F}y \Vert ^{2}\leq \langle T_{r}^{F}x-T_{r}^{F}y,x-y \rangle$$;

3. (3)

$$F(T_{r}^{F})= \{ x\in C:T_{r}^{F}(x)=x \} =\operatorname{MEP}(F, \phi _{f})$$, where $$F(T_{r}^{F})$$denotes the set of fixed points of the mapping $$T_{r}^{F}$$;

4. (4)

$$\operatorname{MEP}(F,\phi _{f})$$is closed and convex.

It is remarked that if $$G:Q\times Q\rightarrow \mathbb{R}$$is a bifunction satisfying conditions (A1)–(A4) and $$\phi _{g}:Q\rightarrow \mathbb{R}\cup \{+\infty \}$$is a proper convex lower semicontinuous function such that $$Q\cap \operatorname{dom} ( \phi _{g} ) \neq \emptyset$$, where Q is a nonempty closed convex subset of a Hilbert space $$\mathcal{H}_{2}$$, then, for each $$s>0$$and $$w\in \mathcal{H}_{2}$$, we can define the following mapping:

$$T_{s}^{G}(w)= \biggl\{ d\in Q:G(d,e)+\phi _{g}(e)-\phi _{g}(d)+\frac{1}{s} \langle e-d,d-w\rangle \geq 0 \textit{ for all }e\in Q \biggr\} ,$$

which satisfies

1. (1)

$$T_{s}^{G}$$is single-valued;

2. (2)

$$T_{s}^{G}$$is firmly nonexpansive;

3. (3)

$$F(T_{s}^{G})=\operatorname{MEP}(G,\phi _{g})$$;

4. (4)

$$\operatorname{MEP}(G,\phi _{g})$$is closed and convex.

### Lemma 2.5


Let E be a Banach space satisfying Opial’s condition, and let $$\{x_{n}\}$$be a sequence in E. Let $$l,m\in E$$be such that $$\lim_{n\rightarrow \infty }\Vert x_{n}-l\Vert$$and $$\lim_{n\rightarrow \infty }\Vert x_{n}-m\Vert$$exist. If $$\{x_{n_{k}}\}$$and $$\{x_{m_{k}}\}$$are subsequences of $$\{x_{n}\}$$which converge weakly to l and m, respectively, then $$l=m$$.

### Lemma 2.6


Let E be a Banach space, and let $$A:E\rightarrow E$$be α-inverse strongly accretive of order q and $$B:E\rightarrow 2^{E}$$be an m-accretive operator. Then we have:

1. (a)

For $$r > 0$$, $$F(T^{A,B}_{r})=(A+B)^{-1}(0)$$, where $$T_{r}^{A,B}:=J_{r}^{B}(\operatorname{Id}-rA)$$;

2. (b)

For $$0 < s \leq r$$and $$x \in E$$, $$\|x-T^{A,B}_{s}x\|\leq 2\|x-T^{A,B}_{r}x \|$$.

### Lemma 2.7


Let E be a uniformly convex and q-uniformly smooth Banach space for some $$q\in (1,2]$$. Assume that $$A:E\rightarrow E$$is a single-valued α-inverse strongly accretive operator of order q. Then, given $$r>0$$, there exists a continuous, strictly increasing, and convex function $$\varphi _{q}:\mathbb{R}^{+}\rightarrow \mathbb{R}^{+}$$with $$\varphi _{q}(0)=0$$such that, for all $$x,y\in B_{r}$$, $$\Vert T_{r}^{A,B}x-T_{r}^{A,B}y\Vert ^{q}\leq \Vert x-y\Vert ^{q}-r( \alpha q-r^{q-1}k_{q})\Vert Ax-Ay\Vert ^{q}- \varphi _{q}( \Vert (\operatorname{Id}-J_{r}^{B})(\operatorname{Id}-rA)x-(\operatorname{Id}-J_{r}^{B})(\operatorname{Id}-rA)y\Vert )$$, where $$k_{q}$$is the q-uniform smoothness coefficient of E.

### Lemma 2.8


Let $$\{\xi _{n}\}$$, $$\{\eta _{n}\}$$, and $$\{\alpha _{n}\}$$be sequences in $$[0,+\infty )$$satisfying $$\xi _{n+1}\leq \xi _{n}+\alpha _{n}(\xi _{n}-\xi _{n-1})+\eta _{n}$$for all $$n\geq 1$$, where $$\sum_{n=1}^{\infty }\eta _{n}<+\infty$$and $$0\leq \alpha _{n}\leq \alpha <1$$for all $$n\geq 1$$. Then the following hold:

1. (a)

$$\sum_{n \geq 1}[\xi _{n} - \xi _{n-1}]_{+} < +\infty$$, where $$[t]_{+}=\max \{t,0\}$$;

2. (b)

there exists $$\xi ^{*} \in [0,+\infty )$$such that $$\lim_{n \rightarrow +\infty } \xi _{n} = \xi ^{*}$$.

### Lemma 2.9


Let C be a nonempty closed convex subset of a real Hilbert space $$\mathcal{H}_{1}$$. For every $$x,y,z\in \mathcal{H}_{1}$$and $$\gamma \in \mathbb{R}$$, the set

$$D=\bigl\{ v\in C: \Vert y-v \Vert ^{2}\leq \Vert x-v \Vert ^{2}+\langle z,v \rangle +\gamma \bigr\}$$

is closed and convex.

### Proposition 2.10


Let $$q>1$$, and let E be a real smooth Banach space with the generalized duality mapping $$j_{q}$$. Let $$m\in \mathbb{N}$$be fixed. Let $$\{x_{i}\}_{i=1}^{m}\subset E$$and $$t_{i}\geq 0$$for all $$i=1,2,3,\ldots,m$$with $$\sum_{i=1}^{m}t_{i}\leq 1$$. Then we have

$$\Biggl\Vert \sum_{i=1}^{m}t_{i}x_{i} \Biggr\Vert ^{q}\leq \frac{\sum_{i=1}^{m}t_{i} \Vert x_{i} \Vert ^{q}}{q-(q-1)(\sum_{i=1}^{m}t_{i})}.$$

## Weak convergence results

In this section, we establish the convergence analysis of the inertial forward-backward splitting method for solving the split mixed equilibrium problem together with the monotone inclusion problem in the framework of Hilbert spaces. We first prove the following weak convergence theorem.

### Theorem 3.1

Let $$\mathcal{H}_{1}$$and $$\mathcal{H}_{2}$$be two real Hilbert spaces, and let $$C\subseteq \mathcal{H}_{1}$$and $$Q\subseteq \mathcal{H}_{2}$$be nonempty closed convex subsets of $$\mathcal{H}_{1}$$and $$\mathcal{H}_{2}$$, respectively. Let $$F:C\times C\rightarrow \mathbb{R}$$and $$G:Q\times Q\rightarrow \mathbb{R}$$be two bifunctions satisfying (A1)–(A4) of Assumption 2.3such that G is upper semicontinuous. Let $$h:\mathcal{H}_{1}\rightarrow \mathcal{H}_{2}$$be a bounded linear operator; let $$\phi _{f}:C\rightarrow \mathbb{R}\cup \{+\infty \}$$and $$\phi _{g}:Q\rightarrow \mathbb{R}\cup \{+\infty \}$$be proper lower semicontinuous convex functions such that $$C\cap \operatorname{dom} ( \phi _{f} ) \neq \emptyset$$and $$Q\cap \operatorname{dom} ( \phi _{g} ) \neq \emptyset$$. Let $$A:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}$$be an α-inverse strongly monotone operator and $$B:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}$$be a maximally monotone operator. Assume that $$\varGamma =(A+B)^{-1}(0)\cap \varOmega \neq \emptyset$$, where $$\varOmega =\{x^{\ast }\in C:x^{\ast }\in \operatorname{MEP} (F,\phi _{f})\ \textit{and}\ hx^{ \ast }\in \operatorname{MEP}(G,\phi _{g})\}$$. For given $$x_{0},x_{1}\in \mathcal{H}_{1}$$, let the iterative sequences $$\{x_{n}\}$$, $$\{y_{n}\}$$, and $$\{u_{n}\}$$be generated by

\begin{aligned} &y_{n} =x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ &u_{n} =\alpha _{n}y_{n}+(1-\alpha _{n})T_{r_{n}}^{F}\bigl(\operatorname{Id}-\gamma h^{ \ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr)h \bigr)y_{n}, \\ &x_{n+1} =\beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n},\quad n\geq 1 , \end{aligned}
(4)

where $$J_{n}=(\operatorname{Id}+s_{n}B)^{-1}(\operatorname{Id}-s_{n}A)$$with $$\{s_{n}\}\subset (0,2\alpha )$$and $$\{\theta _{n}\}\subset [ 0,\theta ]$$for some $$\theta \in [ 0,1)$$. Let $$\gamma \in (0,\frac{1}{L})$$, where L is the spectral radius of $$h^{\ast }h$$and $$h^{\ast }$$is the adjoint of h. Let $$\{r_{n}\}\subset (0,\infty )$$, and let $$\{\alpha _{n}\}$$and $$\{\beta _{n}\}$$be sequences in $$[0,1]$$. Assume that the following conditions hold:

1. C1

$$\sum_{n=1}^{\infty }\theta _{n}\Vert x_{n}-x_{n-1}\Vert <\infty$$;

2. C2

$$0<\liminf_{n\rightarrow \infty }\alpha _{n}\leq \limsup_{n \rightarrow \infty }\alpha _{n}<1$$;

3. C3

$$0<\liminf_{n\rightarrow \infty }\beta _{n}\leq \limsup_{n \rightarrow \infty }\beta _{n}<1$$;

4. C4

$$\liminf_{n\rightarrow \infty }r_{n}>0$$;

5. C5

$$0<\liminf_{n\rightarrow \infty }s_{n}\leq \limsup_{n\rightarrow \infty }s_{n}<2\alpha$$.

Then the sequence $$\{x_{n}\}$$ generated by (4) weakly converges to a point $$\hat{q}\in \varGamma$$.
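To make iteration (4) concrete, the following sketch runs a deliberately degenerate instance: $$F=G\equiv 0$$, $$\phi_{f}=\phi_{g}\equiv 0$$, $$C=\mathcal{H}_{1}=\mathcal{H}_{2}=\mathbb{R}^{n}$$, and $$h=\operatorname{Id}$$, so that $$T_{r_{n}}^{F}$$ and $$T_{r_{n}}^{G}$$ reduce to the identity and $$u_{n}=y_{n}$$; the scheme then becomes an inertial, relaxed forward-backward iteration. The constant choices standing in for $$\theta_{n}$$, $$\beta_{n}$$, $$s_{n}$$ and the problem data are illustrative assumptions and do not by themselves certify conditions (C1)–(C5).

```python
import numpy as np

def soft(x, t):
    """Resolvent of t * (subdifferential of ||.||_1), i.e. soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def inertial_fb(M, b, lam=0.1, theta=0.3, beta=0.5, iters=500):
    """Degenerate instance of iteration (4) with T_{r_n}^F = T_{r_n}^G = Id:
        y_n     = x_n + theta_n * (x_n - x_{n-1})      (inertial step)
        u_n     = y_n                                  (the T-step is trivial here)
        x_{n+1} = beta_n * u_n + (1 - beta_n) * J_n u_n,
    where J_n = (Id + s_n B)^{-1}(Id - s_n A) with A = grad(0.5*||Mx-b||^2)
    and B = subdifferential of lam*||.||_1."""
    s = 1.0 / np.linalg.norm(M.T @ M, 2)   # constant stand-in for s_n in (0, 2*alpha)
    x_prev = x = np.zeros(M.shape[1])
    for _ in range(iters):
        y = x + theta * (x - x_prev)
        u = y
        J_u = soft(u - s * M.T @ (M @ u - b), s * lam)
        x_prev, x = x, beta * u + (1 - beta) * J_u
    return x

# With M = Id the iterates approach soft(b, lam), the zero of A + B.
x_hat = inertial_fb(np.eye(2), np.array([1.0, 0.05]), lam=0.1)
```

Replacing the trivial T-step with an actual resolvent $$T_{r_{n}}^{F}$$ would require solving the auxiliary equilibrium subproblem of Lemma 2.4 at each iteration.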

### Proof

First we show that $$h^{\ast }(\operatorname{Id}-T_{r_{n}}^{G})h$$ is a $$\frac{1}{L}$$-inverse strongly monotone mapping. For this, we utilize the firm nonexpansiveness of $$T_{r_{n}}^{G}$$ which implies that $$(\operatorname{Id}-T_{r_{n}}^{G})$$ is a 1-inverse strongly monotone mapping. Now, observe that

\begin{aligned} &\bigl\Vert h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hx-h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy \bigr\Vert ^{2} \\ &\quad=\bigl\langle h^{\ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr) (hx-hy),h^{\ast } \bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr) (hx-hy) \bigr\rangle \\ &\quad=\bigl\langle \bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr) (hx-hy),h^{\ast }h\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr) (hx-hy) \bigr\rangle \\ &\quad\leq L\bigl\langle \bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr) (hx-hy),\bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr) (hx-hy) \bigr\rangle \\ &\quad=L \bigl\Vert \bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr) (hx-hy) \bigr\Vert ^{2} \\ &\quad\leq L\bigl\langle x-y,h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr) (hx-hy)\bigr\rangle \end{aligned}

for all $$x,y\in \mathcal{H}_{1}$$. So, we observe that $$h^{\ast }(\operatorname{Id}-T_{r_{n}}^{G})h$$ is $$\frac{1}{L}$$-inverse strongly monotone. Moreover, $$\operatorname{Id}-\gamma h^{\ast }(\operatorname{Id}-T_{r_{n}}^{G})h$$ is nonexpansive provided $$\gamma \in (0,\frac{1}{L})$$. Now, we divide the rest of the proof into the following three steps.

Step 1. Show that $$\lim_{n\rightarrow \infty }\Vert x_{n}-\hat{p}\Vert$$ exists for every $$\hat{p}\in \varGamma$$.

In order to proceed, we first set $$T_{n}=T_{r_{n}}^{F}(\operatorname{Id}-\gamma h^{\ast }(\operatorname{Id}-T_{r_{n}}^{G})h)$$ which is quasi-nonexpansive by definition. For any $$\hat{p}\in \varGamma$$, we get

\begin{aligned} \Vert y_{n}-\hat{p} \Vert &= \bigl\Vert x_{n}+ \theta _{n}(x_{n}-x_{n-1})- \hat{p} \bigr\Vert \\ &= \bigl\Vert (x_{n}-\hat{p})+\theta _{n}(x_{n}-x_{n-1}) \bigr\Vert \\ &\leq \Vert x_{n}-\hat{p} \Vert +\theta _{n} \Vert x_{n}-x_{n-1} \Vert . \end{aligned}
(5)

Utilizing (5), we have

\begin{aligned} \Vert u_{n}-\hat{p} \Vert &= \bigl\Vert \alpha _{n}y_{n}+(1-\alpha _{n})T_{n}y_{n}-\hat{p} \bigr\Vert \\ &\leq \alpha _{n} \Vert y_{n}-\hat{p} \Vert +(1- \alpha _{n}) \Vert T_{n}y_{n}-\hat{p} \Vert \\ &\leq \Vert y_{n}-\hat{p} \Vert \\ &\leq \Vert x_{n}-\hat{p} \Vert +\theta _{n} \Vert x_{n}-x_{n-1} \Vert . \end{aligned}
(6)

It follows from (4), (6), and Lemma 2.7 that

\begin{aligned} \Vert x_{n+1}-\hat{p} \Vert &= \bigl\Vert \beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n}-\hat{p} \bigr\Vert \\ &= \bigl\Vert \beta _{n}(u_{n}-\hat{p})+(1-\beta _{n}) ( J_{n}u_{n}- \hat{p} ) \bigr\Vert \\ &\leq \beta _{n} \Vert u_{n}-\hat{p} \Vert +(1-\beta _{n}) \Vert J_{n}u_{n}-\hat{p} \Vert \\ &= \Vert u_{n}-\hat{p} \Vert \\ &\leq \Vert x_{n}-\hat{p} \Vert +\theta _{n} \Vert x_{n}-x_{n-1} \Vert . \end{aligned}
(7)

From Lemma 2.8 and (C1), we conclude from estimate (7) that $$\lim_{n\rightarrow \infty }\Vert x_{n}-\hat{p}\Vert$$ exists; in particular, $$\{x_{n}\}$$, $$\{y_{n}\}$$, and $$\{u_{n}\}$$ are all bounded.

Step 2. Show that $$x_{n}\rightharpoonup \hat{q}\in (A+B)^{-1}(0)$$.

Since $$\hat{p}=J_{n}\hat{p}$$, it follows from Lemma 2.1 and Lemma 2.7 that

\begin{aligned} \Vert x_{n+1}-\hat{p} \Vert ^{2} = {}&\bigl\Vert \beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n}- \hat{p} \bigr\Vert ^{2} \\ = {}&\bigl\Vert \beta _{n}(u_{n}-\hat{p})+(1-\beta _{n}) (J_{n}u_{n}-\hat{p}) \bigr\Vert ^{2} \\ \leq{}& \beta _{n} \Vert u_{n}-\hat{p} \Vert ^{2}+(1-\beta _{n}) \Vert J_{n}u_{n}- \hat{p} \Vert ^{2} \\ \leq {}& \Vert u_{n}-\hat{p} \Vert ^{2}-(1-\beta _{n})s_{n}(2\alpha -s_{n}) \Vert Au_{n}-A\hat{p} \Vert ^{2} \\ &{}-(1-\beta _{n}) \Vert u_{n}-s_{n}Au_{n}-J_{n}u_{n}+s_{n}A \hat{p} \Vert ^{2} \\ \leq {}&\alpha _{n} \Vert y_{n}-\hat{p} \Vert ^{2}+(1-\alpha _{n}) \Vert T_{n}y_{n}- \hat{p} \Vert ^{2}-(1-\beta _{n})s_{n}(2\alpha -s_{n}) \Vert Au_{n}-A \hat{p} \Vert ^{2} \\ &{}-(1-\beta _{n}) \Vert u_{n}-s_{n}Au_{n}-J_{n}u_{n}+s_{n}A \hat{p} \Vert ^{2} \\ \leq{} & \Vert y_{n}-\hat{p} \Vert ^{2}-(1-\beta _{n})s_{n}(2\alpha -s_{n}) \Vert Au_{n}-A\hat{p} \Vert ^{2} \\ &{}-(1-\beta _{n}) \Vert u_{n}-s_{n}Au_{n}-J_{n}u_{n}+s_{n}A \hat{p} \Vert ^{2} \\ \leq{} & \Vert x_{n}-\hat{p} \Vert ^{2}+2\theta _{n}\langle x_{n}-x_{n-1},y_{n}-\hat{p}\rangle -(1-\beta _{n})s_{n}(2\alpha -s_{n}) \Vert Au_{n}-A \hat{p} \Vert ^{2} \\ &{}-(1-\beta _{n}) \Vert u_{n}-s_{n}Au_{n}-J_{n}u_{n}+s_{n}A \hat{p} \Vert ^{2}. \end{aligned}
(8)

As $$\lim_{n\rightarrow \infty }\Vert x_{n}-\hat{p}\Vert$$ exists, estimate (8) together with (C1) implies that $$(1-\beta _{n})s_{n}(2\alpha -s_{n}) \Vert Au_{n}-A\hat{p} \Vert ^{2}\rightarrow 0$$; hence, utilizing (C3) and (C5), we get

$$\lim_{n\rightarrow \infty } \Vert Au_{n}-A\hat{p} \Vert =0.$$
(9)

Also from (8) we get that

$$\lim_{n\rightarrow \infty } \Vert u_{n}-s_{n}Au_{n}-J_{n}u_{n}+s_{n}A \hat{p} \Vert =0.$$
(10)

Using (9), (10) and the following triangle inequality:

$$\Vert u_{n}-J_{n}u_{n} \Vert \leq \Vert u_{n}-s_{n}Au_{n}-J_{n}u_{n}+s_{n}A \hat{p} \Vert +s_{n} \Vert Au_{n}-A\hat{p} \Vert ,$$

we get

$$\lim_{n\rightarrow \infty } \Vert J_{n}u_{n}-u_{n} \Vert =0.$$
(11)

Since $$\liminf_{n\rightarrow \infty }s_{n}>0$$, there exists $$s>0$$ such that $$s_{n}\geq s$$ for all $$n\geq 0$$. It follows from Lemma 2.6(b) that

$$\bigl\Vert T_{s}^{A,B}u_{n}-u_{n} \bigr\Vert \leq 2 \Vert J_{n}u_{n}-u_{n} \Vert .$$

Now, utilizing (11), the above estimate implies that

$$\lim_{n\rightarrow \infty } \bigl\Vert T_{s}^{A,B}u_{n}-u_{n} \bigr\Vert =0.$$
(12)

From (11), we have

$$\lim_{n\rightarrow \infty } \Vert x_{n+1}-u_{n} \Vert = \lim_{n \rightarrow \infty }(1-\beta _{n}) \Vert J_{n}u_{n}-u_{n} \Vert =0.$$
(13)

Again, by Lemma 2.1 and Lemma 2.7, we have

\begin{aligned} \Vert x_{n+1}-\hat{p} \Vert ^{2} &\leq \beta _{n} \Vert u_{n}-\hat{p} \Vert ^{2}+(1-\beta _{n}) \Vert J_{n}u_{n}-\hat{p} \Vert ^{2} \\ &\leq \Vert u_{n}-\hat{p} \Vert ^{2} \\ &\leq \alpha _{n} \Vert y_{n}-\hat{p} \Vert ^{2}+(1-\alpha _{n}) \Vert T_{n}y_{n}- \hat{p} \Vert ^{2}-\alpha _{n}(1-\alpha _{n}) \Vert T_{n}y_{n}-y_{n} \Vert ^{2} \\ &\leq \Vert y_{n}-\hat{p} \Vert ^{2}-\alpha _{n}(1-\alpha _{n}) \Vert T_{n}y_{n}-y_{n} \Vert ^{2} \\ &\leq \Vert x_{n}-\hat{p} \Vert ^{2}+2\theta _{n}\langle x_{n}-x_{n-1},y_{n}-\hat{p}\rangle -\alpha _{n}(1-\alpha _{n}) \Vert T_{n}y_{n}-y_{n} \Vert ^{2}. \end{aligned}

Utilizing (C2), the above estimate implies that

$$\lim_{n\rightarrow \infty } \Vert T_{n}y_{n}-y_{n} \Vert =0.$$
(14)

Note that

$$\Vert u_{n}-y_{n} \Vert =(1-\alpha _{n}) \Vert T_{n}y_{n}-y_{n} \Vert .$$

Using (14), the above estimate implies that

$$\lim_{n\rightarrow \infty } \Vert u_{n}-y_{n} \Vert =0.$$
(15)

By the definition of $$\{y_{n}\}$$ and (C1), we have

$$\lim_{n\rightarrow \infty } \Vert y_{n}-x_{n} \Vert = \lim_{n \rightarrow \infty }\theta _{n} \Vert x_{n}-x_{n-1} \Vert =0.$$
(16)

It follows from (13), (15), and (16) that

$$\Vert x_{n+1}-x_{n} \Vert \leq \Vert x_{n+1}-u_{n} \Vert + \Vert u_{n}-y_{n} \Vert + \Vert y_{n}-x_{n} \Vert \overset{n\rightarrow \infty }{\longrightarrow }0.$$
(17)

Moreover, from (13) and (17), we have

$$\Vert u_{n}-x_{n} \Vert \leq \Vert u_{n}-x_{n+1} \Vert + \Vert x_{n+1}-x_{n} \Vert \overset{n\rightarrow \infty }{\longrightarrow }0.$$
(18)

Since $$\{x_{n}\}$$ is bounded and $$\mathcal{H}_{1}$$ is reflexive, $$\nu _{w}(x_{n})=\{x\in \mathcal{H}_{1}:x_{n_{i}}\rightharpoonup x,\{x_{n_{i}} \}\subset \{x_{n}\}\}$$ is nonempty. Let $$\hat{q}\in \nu _{w}(x_{{n}})$$ be an arbitrary element. Then there exists a subsequence $$\{x_{n_{i}}\}\subset \{x_{n}\}$$ converging weakly to $$\hat{q}$$. Let $$\hat{p}\in \nu _{w}(x_{n})$$ and $$\{x_{n_{m}}\}\subset \{x_{n}\}$$ be such that $$x_{n_{m}}\rightharpoonup \hat{p}$$. From (18), we also have $$u_{n_{i}}\rightharpoonup \hat{q}$$ and $$u_{n_{m}}\rightharpoonup \hat{p}$$. Since $$T_{s}^{A,B}$$ is nonexpansive, from (12) and Lemma 2.2 we have $$\hat{p},\hat{q}\in (A+B)^{-1}(0)$$. By applying Lemma 2.5, we obtain $$\hat{p}=\hat{q}$$.

Step 3. Show that $$\hat{q}\in \varOmega$$.

In order to proceed, we first set $$v_{n}=T_{r_{n}}^{F}(\operatorname{Id}-\gamma h^{\ast }(\operatorname{Id}-T_{r_{n}}^{G})h)y_{n}$$. Hence, for any $$\hat{p}\in \varGamma$$, we calculate the following estimate:

\begin{aligned} \Vert v_{n}-\hat{p} \Vert ^{2} ={}& \bigl\Vert T_{r_{n}}^{F}\bigl(\operatorname{Id}-\gamma h^{ \ast } \bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr)h\bigr)y_{n}- \hat{p} \bigr\Vert ^{2} \\ \leq {}& \bigl\Vert y_{n}-\gamma h^{\ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n}-\hat{p} \bigr\Vert ^{2} \\ \leq{} & \Vert y_{n}-\hat{p} \Vert ^{2}+\gamma ^{2} \bigl\Vert h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy_{n} \bigr\Vert ^{2}+2\gamma \bigl\langle \hat{p}-y_{n},h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy_{n} \bigr\rangle \\ \leq{} & \Vert x_{n}-\hat{p} \Vert ^{2}+2\theta _{n}\langle x_{n}-x_{n-1},y_{n}-\hat{p}\rangle +\gamma ^{2}\bigl\langle hy_{n}-T_{r_{n}}^{G}hy_{n},h^{ \ast }h \bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n}\bigr\rangle \\ &{}+2\gamma \bigl\langle \hat{p}-y_{n},h^{\ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr\rangle \\ \leq{} & \Vert x_{n}-\hat{p} \Vert ^{2}+2\theta _{n}\langle x_{n}-x_{n-1},y_{n}-\hat{p}\rangle +L\gamma ^{2} \bigl\Vert hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\Vert ^{2} \\ &{}+2\gamma \bigl\langle \hat{p}-y_{n},h^{\ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr\rangle . \end{aligned}
(19)

Note that

\begin{aligned} 2\gamma \bigl\langle \hat{p}-y_{n},h^{\ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr\rangle ={}&2\gamma \bigl\langle h(\hat{p}-y_{n}),hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\rangle \\ ={}&2\gamma \bigl\langle \bigl(h\hat{p}-T_{r_{n}}^{G}hy_{n} \bigr)-\bigl(hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr),hy_{n}-T_{r_{n}}^{G}hy_{n}\bigr\rangle \\ ={}&2\gamma \bigl[ \bigl\langle h\hat{p}-T_{r_{n}}^{G}hy_{n},hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\rangle - \bigl\Vert hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\Vert ^{2}\bigr] \\ \leq{} &2\gamma \biggl[ \frac{1}{2} \bigl\Vert hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\Vert ^{2}- \bigl\Vert hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\Vert ^{2}\biggr] \\ ={}&{-}\gamma \bigl\Vert hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\Vert ^{2}. \end{aligned}
(20)

Substituting (20) in (19), we have

$$\Vert v_{n}-\hat{p} \Vert ^{2}\leq \Vert x_{n}-\hat{p} \Vert ^{2}+2 \theta _{n}\langle x_{n}-x_{n-1},y_{n}-\hat{p}\rangle +\gamma (L\gamma -1) \bigl\Vert hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\Vert ^{2}.$$
(21)

Moreover,

\begin{aligned} \Vert u_{n}-\hat{p} \Vert ^{2} &\leq \alpha _{n} \Vert y_{n}-\hat{p} \Vert ^{2}+(1-\alpha _{n}) \Vert v_{n}-\hat{p} \Vert ^{2} \\ &\leq \Vert x_{n}-\hat{p} \Vert ^{2}+2\theta _{n}\langle x_{n}-x_{n-1},y_{n}-\hat{p} \rangle +\gamma (L\gamma -1) \bigl\Vert hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\Vert ^{2}. \end{aligned}

Rearranging the above estimate, we have

\begin{aligned} &\gamma (1-L\gamma ) \bigl\Vert hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\Vert ^{2} \\ &\quad\leq \Vert x_{n}-\hat{p} \Vert ^{2}- \Vert u_{n}-\hat{p} \Vert ^{2}+2 \theta _{n}\langle x_{n}-x_{n-1},y_{n}-\hat{p}\rangle \\ &\quad\leq \bigl( \Vert x_{n}-\hat{p} \Vert + \Vert u_{n}- \hat{p} \Vert \bigr) \Vert x_{n}-u_{n} \Vert +2\theta _{n} \langle x_{n}-x_{n-1},y_{n}-\hat{p} \rangle. \end{aligned}

Since $$\gamma ( 1-L\gamma ) >0$$, utilizing (18) and (C1), the above estimate implies that

$$\lim_{n\rightarrow \infty } \bigl\Vert hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\Vert =0.$$
(22)

Since $$T_{r_{n}}^{F}$$ is firmly nonexpansive and $$\operatorname{Id}-\gamma h^{\ast }(\operatorname{Id}-T_{r_{n}}^{G})h$$ is nonexpansive, it follows that

\begin{aligned} \Vert v_{n}-\hat{p} \Vert ^{2} ={}& \bigl\Vert T_{r_{n}}^{F}\bigl(y_{n}-\gamma h^{ \ast } \bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr)-T_{r_{n}}^{F}\hat{p} \bigr\Vert ^{2} \\ \leq {}&\bigl\langle T_{r_{n}}^{F}\bigl(y_{n}-\gamma h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy_{n}\bigr)-T_{r_{n}}^{F} \hat{p},y_{n}- \gamma h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy_{n}-\hat{p} \bigr\rangle \\ ={}&\bigl\langle v_{n}-\hat{p}, y_{n}-\gamma h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy_{n}-\hat{p}\bigr\rangle \\ ={}&\frac{1}{2}\bigl\{ \Vert v_{n}-\hat{p} \Vert ^{2}+ \bigl\Vert y_{n}-\gamma h^{ \ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n}-\hat{p} \bigr\Vert ^{2} \\ &{}- \bigl\Vert v_{n}-y_{n}+\gamma h^{\ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr\Vert ^{2} \bigr\} \\ \leq{} &\frac{1}{2}\bigl\{ \Vert v_{n}-\hat{p} \Vert ^{2}+ \Vert y_{n}-\hat{p} \Vert ^{2}- \bigl\Vert v_{n}-y_{n}+\gamma h^{\ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr\Vert ^{2}\bigr\} \\ ={}&\frac{1}{2}\bigl\{ \Vert v_{n}-\hat{p} \Vert ^{2}+ \Vert y_{n}-\hat{p} \Vert ^{2}-\bigl( \Vert v_{n}-y_{n} \Vert ^{2}+\gamma ^{2} \bigl\Vert h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy_{n} \bigr\Vert ^{2} \\ &{}-2\gamma \bigl\langle v_{n}-y_{n}, h^{\ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr\rangle \bigr)\bigr\} . \end{aligned}

Simplifying the above estimate, we get

$$\Vert v_{n}-\hat{p} \Vert ^{2}\leq \Vert y_{n}-\hat{p} \Vert ^{2}- \Vert v_{n}-y_{n} \Vert ^{2}+2\gamma \Vert v_{n}-y_{n} \Vert \bigl\Vert h^{ \ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy_{n} \bigr\Vert .$$
(23)

Utilizing a variant of (6) together with (23), we have

\begin{aligned} \Vert u_{n}-\hat{p} \Vert ^{2} \leq{} &\alpha _{n} \Vert y_{n}-\hat{p} \Vert ^{2}+(1-\alpha _{n}) \Vert v_{n}-\hat{p} \Vert ^{2} \\ \leq{} &\alpha _{n} \Vert y_{n}-\hat{p} \Vert ^{2}+(1-\alpha _{n}) \bigl( \Vert y_{n}-\hat{p} \Vert ^{2}- \Vert v_{n}-y_{n} \Vert ^{2} \\ &{}+2\gamma \Vert v_{n}-y_{n} \Vert \bigl\Vert h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy_{n} \bigr\Vert \bigr). \end{aligned}

Rearranging the above estimate, we get

\begin{aligned} (1-\alpha _{n}) \Vert v_{n}-y_{n} \Vert ^{2} \leq {}& \Vert y_{n}-\hat{p} \Vert ^{2}- \Vert u_{n}-\hat{p} \Vert ^{2}+2\gamma \Vert v_{n}-y_{n} \Vert \bigl\Vert h^{\ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr\Vert \\ \leq {}& \bigl( \Vert y_{n}-\hat{p} \Vert + \Vert u_{n}- \hat{p} \Vert \bigr) \Vert y_{n}-u_{n} \Vert \\ &{}+2\gamma \Vert v_{n}-y_{n} \Vert \bigl\Vert h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy_{n} \bigr\Vert . \end{aligned}

Utilizing (15), (22), and (C2), we have

$$\lim_{n\rightarrow \infty } \Vert v_{n}-y_{n} \Vert =0.$$
(24)

From (16), (24), and the following triangle inequality

$$\Vert v_{n}-x_{n} \Vert \leq \Vert v_{n}-y_{n} \Vert + \Vert y_{n}-x_{n} \Vert ,$$

we get

$$\lim_{n\rightarrow \infty } \Vert v_{n}-x_{n} \Vert =0.$$
(25)

It follows from Step 2 that $$x_{n}\rightharpoonup \hat{q}$$. Therefore we conclude from (25) that $$v_{n}\rightharpoonup \hat{q}$$. Next, we show that $$\hat{q}\in \operatorname{MEP}(F,\phi _{f})$$. Since $$v_{n}=T_{r_{n}}^{F}(\operatorname{Id}-\gamma h^{\ast }(\operatorname{Id}-T_{r_{n}}^{G})h)y_{n}$$, we have

$$F(v_{n},y)+\phi _{f}(y)-\phi _{f}(v_{n})+ \frac{1}{r_{n}}\bigl\langle y-v_{n},v_{n}-y_{n}+ \gamma h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n}\bigr\rangle \geq 0\quad\text{for all }y\in C.$$

This implies that

$$F(v_{n},y)+\phi _{f}(y)-\phi _{f}(v_{n})+ \frac{1}{r_{n}}\langle y-v_{n},v_{n}-y_{n} \rangle +\frac{1}{r_{n}}\bigl\langle y-v_{n},\gamma h^{\ast } \bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr\rangle \geq 0.$$

From Assumption 2.3(A2), we have

$$\phi _{f}(y)-\phi _{f}(v_{n})+ \frac{1}{r_{n}}\langle y-v_{n},v_{n}-y_{n} \rangle +\frac{1}{r_{n}}\bigl\langle y-v_{n},\gamma h^{\ast } \bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr\rangle \geq F(y,v_{n})$$

for all $$y\in C$$. Since $$v_{n}\rightharpoonup \hat{q}$$, utilizing (22), (24), and (C4), the above estimate implies that

$$F(y,\hat{q})+\phi _{f}(\hat{q})-\phi _{f}(y)\leq 0 \quad\text{for all } y \in C.$$

Let $$y_{t}=ty+(1-t)\hat{q}$$ for some $$t\in (0,1]$$ and $$y\in C$$. Since $$\hat{q}\in C$$, this implies that $$y_{t}\in C$$ and hence $$F(y_{t},\hat{q})+\phi _{f}(\hat{q})-\phi _{f}(y_{t})\leq 0$$. Using Assumption 2.3(A1) and (A4), it follows that

\begin{aligned} 0 &=F(y_{t},y_{t}) \\ &\leq tF(y_{t},y)+(1-t)F(y_{t},\hat{q}) \\ &\leq tF(y_{t},y)+(1-t) \bigl( \phi _{f}(y_{t})- \phi _{f}(\hat{q}) \bigr) \\ &\leq tF(y_{t},y)+(1-t)t \bigl( \phi _{f}(y)-\phi _{f}(\hat{q}) \bigr) . \end{aligned}

Dividing the above estimate by $$t>0$$, we obtain

$$0\leq F(y_{t},y)+(1-t) \bigl( \phi _{f}(y)-\phi _{f}(\hat{q}) \bigr) .$$

Letting $$t\rightarrow 0$$, we have

$$F(\hat{q},y)+\phi _{f}(y)-\phi _{f}(\hat{q})\geq 0 \quad\text{for all } y \in C.$$

This implies that $$\hat{q}\in \operatorname{MEP}(F,\phi _{f})$$. It remains to show that $$h\hat{q}\in \operatorname{MEP}(G,\phi _{g})$$. Since $$y_{n}\rightharpoonup \hat{q}$$ (utilizing estimate (16) and the fact that $$x_{n}\rightharpoonup \hat{q}$$) and h is a bounded linear operator, we have $$hy_{n}\rightharpoonup h\hat{q}$$. Hence, it follows from (22) that

$$T_{r_{n}}^{G}hy_{n}\rightharpoonup h\hat{q} \quad\text{as } n \rightarrow \infty.$$
(26)

Moreover, Lemma 2.4 implies that

$$G\bigl(T_{r_{n}}^{G}hy_{n},z\bigr)+\phi _{g}(z)-\phi _{g}\bigl(T_{r_{n}}^{G}hy_{n} \bigr)+ \frac{1}{r_{n}}\bigl\langle z-T_{r_{n}}^{G}hy_{n},T_{r_{n}}^{G}hy_{n}-hy_{n} \bigr\rangle \geq 0$$

for all $$z\in Q$$. Since G is upper semicontinuous in the first argument, taking the lim sup of the above estimate as $$n\rightarrow \infty$$ and utilizing (C4) and (26), we have

$$G(h\hat{q},z)+\phi _{g}(z)-\phi _{g}(h\hat{q})\geq 0$$

for all $$z\in Q$$. This implies that $$h\hat{q}\in \operatorname{MEP}(G,\phi _{g})$$ and hence $$\hat{q}\in \operatorname{SMEP}(F,\phi _{f},G,\phi _{g})$$. From this together with the conclusion of Step 2, we have that $$\hat{q}\in \varGamma$$. This completes the proof. □

### Remark 3.2

The split mixed equilibrium problem contains the following problems as special cases:

1. (i)

Split equilibrium problem provided that $$\phi _{f}=\phi _{g}=0$$;

2. (ii)

Mixed equilibrium problem provided that $$G = 0$$ and $$\phi _{g}=0$$;

3. (iii)

Classical equilibrium problem provided that $$G = 0$$ and $$\phi _{f}=\phi _{g}=0$$.

Hence the following results can be obtained from Theorem 3.1 immediately.

### Corollary 3.3

Let $$\mathcal{H}_{1}$$ and $$\mathcal{H}_{2}$$ be two real Hilbert spaces, and let $$C\subseteq \mathcal{H}_{1}$$ and $$Q\subseteq \mathcal{H}_{2}$$ be nonempty closed convex subsets of $$\mathcal{H}_{1}$$ and $$\mathcal{H}_{2}$$, respectively. Let $$F:C\times C\rightarrow \mathbb{R}$$ and $$G:Q\times Q\rightarrow \mathbb{R}$$ be two bifunctions satisfying (A1)–(A4) of Assumption 2.3 such that G is upper semicontinuous. Let $$h:\mathcal{H}_{1}\rightarrow \mathcal{H}_{2}$$ be a bounded linear operator, $$A:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}$$ be an α-inverse strongly monotone operator, and $$B:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}$$ be a maximally monotone operator. Assume that $$\varGamma =(A+B)^{-1}(0)\cap \varOmega \neq \emptyset$$, where $$\varOmega =\{x^{\ast }\in C:x^{\ast }\in \operatorname{EP}(F)\ \textit{and}\ hx^{\ast }\in \operatorname{EP}(G)\}$$. For given $$x_{0},x_{1}\in \mathcal{H}_{1}$$, let the iterative sequences $$\{x_{n}\}$$, $$\{y_{n}\}$$, and $$\{u_{n}\}$$ be generated by

\begin{aligned} &y_{n} =x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ &u_{n} =\alpha _{n}y_{n}+(1-\alpha _{n})T_{r_{n}}^{F}\bigl(\operatorname{Id}-\gamma h^{ \ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr)h \bigr)y_{n} , \\ &x_{n+1} =\beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n},\quad n\geq 1, \end{aligned}
(27)

where $$J_{n}=(\operatorname{Id}+s_{n}B)^{-1}(\operatorname{Id}-s_{n}A)$$ with $$\{s_{n}\}\subset (0,2\alpha )$$ and $$\{\theta _{n}\}\subset [ 0,\theta ]$$ for some $$\theta \in [ 0,1)$$. Let $$\gamma \in (0,\frac{1}{L})$$, where L is the spectral radius of $$h^{\ast }h$$ and $$h^{\ast }$$ is the adjoint of h. Let $$\{r_{n}\}\subset (0,\infty )$$, and let $$\{\alpha _{n}\},\{\beta _{n}\}$$ be in $$[0,1]$$. Assume that the following conditions hold:

1. C1

$$\sum_{n=1}^{\infty }\theta _{n}\Vert x_{n}-x_{n-1}\Vert <\infty$$;

2. C2

$$0<\liminf_{n\rightarrow \infty }\alpha _{n}\leq \limsup_{n \rightarrow \infty }\alpha _{n}<1$$;

3. C3

$$0<\liminf_{n\rightarrow \infty }\beta _{n}\leq \limsup_{n \rightarrow \infty }\beta _{n}<1$$;

4. C4

$$\liminf_{n\rightarrow \infty }r_{n}>0$$;

5. C5

$$0<\liminf_{n\rightarrow \infty }s_{n}\leq \limsup_{n\rightarrow \infty }s_{n}<2\alpha$$.

Then the sequence $$\{x_{n}\}$$ generated by (27) weakly converges to a point $$\hat{q}\in \varGamma$$.

### Corollary 3.4

Let C be a nonempty closed convex subset of a real Hilbert space $$\mathcal{H}_{1}$$. Let $$F:C\times C\rightarrow \mathbb{R}$$ be a bifunction satisfying (A1)–(A4) of Assumption 2.3, and let $$\phi _{f}:C\rightarrow \mathcal{H}_{1}$$ be a proper lower semicontinuous and convex function such that $$C\cap \operatorname{dom} ( \phi _{f} ) \neq \emptyset$$. Let $$A:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}$$ be an α-inverse strongly monotone operator and $$B:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}$$ be a maximally monotone operator. Assume that $$\varGamma =(A+B)^{-1}(0) \cap \operatorname{MEP}(F,\phi _{f})\neq \emptyset$$. For given $$x_{0},x_{1}\in \mathcal{H}_{1}$$, let the iterative sequences $$\{x_{n}\}$$, $$\{y_{n}\}$$, and $$\{u_{n}\}$$ be generated by

\begin{aligned} &y_{n} =x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ &u_{n} =\alpha _{n}y_{n}+(1-\alpha _{n})T_{r_{n}}^{F}y_{n}, \\ &x_{n+1} =\beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n},\quad n\geq 1, \end{aligned}
(28)

where $$J_{n}=(\operatorname{Id}+s_{n}B)^{-1}(\operatorname{Id}-s_{n}A)$$ with $$\{s_{n}\}\subset (0,2\alpha )$$ and $$\{\theta _{n}\}\subset [ 0,\theta ]$$ for some $$\theta \in [ 0,1)$$. Let $$\{r_{n}\}\subset (0,\infty )$$, and let $$\{\alpha _{n}\},\{\beta _{n}\}$$ be in $$[0,1]$$. Assume that the following conditions hold:

1. C1

$$\sum_{n=1}^{\infty }\theta _{n}\Vert x_{n}-x_{n-1}\Vert <\infty$$;

2. C2

$$0<\liminf_{n\rightarrow \infty }\alpha _{n}\leq \limsup_{n \rightarrow \infty }\alpha _{n}<1$$;

3. C3

$$0<\liminf_{n\rightarrow \infty }\beta _{n}\leq \limsup_{n \rightarrow \infty }\beta _{n}<1$$;

4. C4

$$\liminf_{n\rightarrow \infty }r_{n}>0$$;

5. C5

$$0<\liminf_{n\rightarrow \infty }s_{n}\leq \limsup_{n\rightarrow \infty }s_{n}<2\alpha$$.

Then the sequence $$\{x_{n}\}$$ generated by (28) weakly converges to a point $$\hat{q}\in \varGamma$$.
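To see scheme (28) in action, it can be run on a toy instance. The sketch below is illustrative only, and all concrete choices are ours rather than the paper's: on $$\mathcal{H}_{1}=\mathbb{R}$$ we take $$F\equiv 0$$ and $$\phi _{f}\equiv 0$$ on $$C=[0,1]$$ (so that the resolvent $$T_{r}^{F}$$ reduces to the metric projection $$P_{C}$$), $$Ax=x$$ (which is 1-inverse strongly monotone), and $$B=0$$, so that $$\varGamma =(A+B)^{-1}(0)\cap \operatorname{MEP}(F,\phi _{f})=\{0\}$$.

```python
# Toy run of the inertial scheme (28) on the real line.
# Illustrative instance: T_r^F = P_C with C = [0, 1], A x = x, B = 0,
# hence J_n = (Id + s_n B)^{-1}(Id - s_n A) reduces to u -> (1 - s_n) u.

def project_C(x):
    """Metric projection onto C = [0, 1]; plays the role of T_r^F here."""
    return min(max(x, 0.0), 1.0)

def run(x0=0.9, x1=0.8, n_iter=200):
    alpha, beta, s = 0.5, 0.5, 0.5          # constant choices satisfying (C2), (C3), (C5)
    x_prev, x = x0, x1
    for n in range(1, n_iter + 1):
        theta = min(1.0 / n**2, 0.5)        # summable inertia, so (C1) holds
        y = x + theta * (x - x_prev)        # inertial step
        u = alpha * y + (1 - alpha) * project_C(y)
        x_prev, x = x, beta * u + (1 - beta) * (1 - s) * u   # forward-backward step
    return x

print(abs(run()))  # close to the unique solution 0
```

Since $$\theta _{n}$$ is summable and the iterates stay bounded, condition (C1) holds automatically along this run.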

### Corollary 3.5

Let C be a nonempty closed convex subset of a real Hilbert space $$\mathcal{H}_{1}$$. Let $$F:C\times C\rightarrow \mathbb{R}$$ be a bifunction satisfying (A1)–(A4) of Assumption 2.3. Let $$A:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}$$ be an α-inverse strongly monotone operator and $$B:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}$$ be a maximally monotone operator. Assume that $$\varGamma =(A+B)^{-1}(0)\cap \operatorname{EP}(F)\neq \emptyset$$. For given $$x_{0},x_{1}\in \mathcal{H}_{1}$$, let the iterative sequences $$\{x_{n}\}$$, $$\{y_{n}\}$$, and $$\{u_{n}\}$$ be generated by

\begin{aligned} &y_{n} =x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ &u_{n} =\alpha _{n}y_{n}+(1-\alpha _{n})T_{r_{n}}^{F}y_{n}, \\ &x_{n+1} =\beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n},\quad n\geq 1, \end{aligned}
(29)

where $$J_{n}=(\operatorname{Id}+s_{n}B)^{-1}(\operatorname{Id}-s_{n}A)$$ with $$\{s_{n}\}\subset (0,2\alpha )$$ and $$\{\theta _{n}\}\subset [ 0,\theta ]$$ for some $$\theta \in [ 0,1)$$. Let $$\{r_{n}\}\subset (0,\infty )$$, and let $$\{\alpha _{n}\},\{\beta _{n}\}$$ be in $$[0,1]$$. Assume that the following conditions hold:

1. C1

$$\sum_{n=1}^{\infty }\theta _{n}\Vert x_{n}-x_{n-1}\Vert <\infty$$;

2. C2

$$0<\liminf_{n\rightarrow \infty }\alpha _{n}\leq \limsup_{n \rightarrow \infty }\alpha _{n}<1$$;

3. C3

$$0<\liminf_{n\rightarrow \infty }\beta _{n}\leq \limsup_{n \rightarrow \infty }\beta _{n}<1$$;

4. C4

$$\liminf_{n\rightarrow \infty }r_{n}>0$$;

5. C5

$$0<\liminf_{n\rightarrow \infty }s_{n}\leq \limsup_{n\rightarrow \infty }s_{n}<2\alpha$$.

Then the sequence $$\{x_{n}\}$$ generated by (29) weakly converges to a point $$\hat{q}\in \varGamma$$.

## Strong convergence results

This section is devoted to modifying the sequence $$\{x_{n}\}$$ generated by (4) to establish strong convergence results in Hilbert spaces. For this, we equip the proposed sequence with the shrinking projection method.

### Theorem 4.1

Let $$\mathcal{H}_{1}$$ and $$\mathcal{H}_{2}$$ be two real Hilbert spaces, and let $$C\subseteq \mathcal{H}_{1}$$ and $$Q\subseteq \mathcal{H}_{2}$$ be nonempty closed convex subsets of $$\mathcal{H}_{1}$$ and $$\mathcal{H}_{2}$$, respectively. Let $$F:C\times C\rightarrow \mathbb{R}$$ and $$G:Q\times Q\rightarrow \mathbb{R}$$ be two bifunctions satisfying (A1)–(A4) of Assumption 2.3 such that G is upper semicontinuous. Let $$h:\mathcal{H}_{1}\rightarrow \mathcal{H}_{2}$$ be a bounded linear operator; let $$\phi _{f}:C\rightarrow \mathcal{H}_{1}$$ and $$\phi _{g}:Q\rightarrow \mathcal{H}_{2}$$ be proper lower semicontinuous and convex functions such that $$C\cap \operatorname{dom} ( \phi _{f} ) \neq \emptyset$$ and $$Q\cap \operatorname{dom} ( \phi _{g} ) \neq \emptyset$$. Let $$A:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}$$ be an α-inverse strongly monotone operator and $$B:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}$$ be a maximally monotone operator. Assume that $$\varGamma =(A+B)^{-1}(0)\cap \varOmega \neq \emptyset$$, where $$\varOmega =\{x^{\ast }\in C:x^{\ast }\in \operatorname{MEP} (F,\phi _{f})\ \textit{and}\ hx^{ \ast }\in \operatorname{MEP}(G,\phi _{g})\}$$. For given $$x_{0},x_{1}\in C_{1}=C$$, let the iterative sequences $$\{x_{n}\}$$, $$\{y_{n}\}$$, and $$\{u_{n}\}$$ be generated by

\begin{aligned} &y_{n} =x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ &u_{n} =\alpha _{n}y_{n}+(1-\alpha _{n})T_{r_{n}}^{F}\bigl(I-\gamma h^{ \ast } \bigl(I-T_{r_{n}}^{G}\bigr)h\bigr)y_{n}, \\ &z_{n} =\beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n} , \\ &C_{n+1} =\bigl\{ z\in C_{n}: \Vert z_{n}-z \Vert ^{2}\leq \Vert x_{n}-z \Vert ^{2}+2 \theta _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2}-2\theta _{n} \langle x_{n}-z,x_{n-1}-x_{n} \rangle \bigr\} , \\ &x_{n+1} =P_{C_{n+1}}x_{1}, \quad n \geq 1, \end{aligned}
(30)

where $$J_{n}=(\operatorname{Id}+s_{n}B)^{-1}(\operatorname{Id}-s_{n}A)$$ with $$\{s_{n}\}\subset (0,2\alpha )$$ and $$\{\theta _{n}\}\subset [ 0,\theta ]$$ for some $$\theta \in [ 0,1)$$. Let $$\gamma \in (0,\frac{1}{L})$$, where L is the spectral radius of $$h^{\ast }h$$ and $$h^{\ast }$$ is the adjoint of h. Let $$\{r_{n}\}\subset (0,\infty )$$, and let $$\{\alpha _{n}\},\{\beta _{n}\}$$ be in $$[0,1]$$. Assume that the following conditions hold:

1. C1

$$\sum^{\infty }_{n=1}\theta _{n}\|x_{n}-x_{n-1}\| < \infty$$;

2. C2

$$0 < \liminf_{n \rightarrow \infty }\alpha _{n} \leq \limsup_{n \rightarrow \infty }\alpha _{n} < 1$$;

3. C3

$$0 < \liminf_{n \rightarrow \infty }\beta _{n} \leq \limsup_{n \rightarrow \infty }\beta _{n} < 1$$;

4. C4

$$\liminf_{n \rightarrow \infty }r_{n} > 0$$;

5. C5

$$0 < \liminf_{n \rightarrow \infty }s_{n} \leq \limsup_{n \rightarrow \infty }s_{n} < 2\alpha$$.

Then the sequence $$\{x_{n}\}$$ generated by (30) strongly converges to a point $$\hat{q}=P_{\varGamma }x_{1}$$.
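Before the proof, the mechanics of (30) can be illustrated on the real line. In $$\mathbb{R}$$ the constraint defining $$C_{n+1}$$ is affine in z once the squares are expanded, so every $$C_{n}$$ is an interval and the projection $$P_{C_{n+1}}x_{1}$$ is a clamp. The instance below is ours and purely illustrative: $$T_{r}^{F}=P_{C}$$ with $$C=[0,1]$$ (take $$F\equiv 0$$, $$\phi _{f}\equiv 0$$), $$Ax=x$$, $$B=0$$, so $$\varGamma =\{0\}$$.

```python
# 1-D illustration of the shrinking-projection scheme (30).
# Illustrative instance (ours, not the paper's): C = [0, 1], T_r^F = P_C
# (take F = 0 and phi_f = 0), A x = x, B = 0, so J_n u = (1 - s_n) u and
# Gamma = {0}.  In R every C_{n+1} is C_n cut by a halfspace, hence an
# interval [lo, hi], and P_{C_{n+1}} x_1 is a clamp.

def clamp(x, lo, hi):
    return min(max(x, lo), hi)

def run(x0=0.9, x1=0.8, n_iter=100):
    alpha, beta, s = 0.5, 0.5, 0.5
    lo, hi = 0.0, 1.0                       # C_1 = C = [0, 1]
    x_prev, x = x0, x1
    for n in range(1, n_iter + 1):
        theta = min(1.0 / n**2, 0.5)        # summable inertia, (C1)
        y = x + theta * (x - x_prev)        # inertial step
        u = alpha * y + (1 - alpha) * clamp(y, 0.0, 1.0)
        z = beta * u + (1 - beta) * (1 - s) * u            # z_n
        # The constraint defining C_{n+1},
        #   |z_n - w|^2 <= |x_n - w|^2 + 2 th^2 (x_n - x_{n-1})^2
        #                              - 2 th (x_n - w)(x_{n-1} - x_n),
        # is linear in w: a * w <= b with the coefficients below.
        a = 2.0 * (x - z - theta * (x_prev - x))
        b = (x * x - z * z + 2.0 * theta**2 * (x - x_prev) ** 2
             - 2.0 * theta * x * (x_prev - x))
        if a > 0:
            hi = min(hi, b / a)
        elif a < 0:
            lo = max(lo, b / a)
        x_prev, x = x, clamp(x1, lo, hi)    # x_{n+1} = P_{C_{n+1}} x_1
    return x

print(run())  # the intervals shrink onto the solution 0
```

Each step intersects the current interval with one halfspace and clamps $$x_{1}$$ to the result, mirroring $$x_{n+1}=P_{C_{n+1}}x_{1}$$.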

### Proof

The proof is divided into the following steps:

Step 1. Show that the sequence $$\{ x_{n} \}$$ defined in (30) is well defined.

We know that $$(A+B)^{-1}(0)$$ and Ω are closed and convex by Lemmas 2.4 and 2.6, respectively. Moreover, from Lemma 2.9 we have that $$C_{n+1}$$ is closed and convex for each $$n\geq 1$$. Hence the projection $$P_{C_{n+1}}x_{1}$$ is well defined. For any $$\hat{p}\in \varGamma$$, it follows from (30), (5), and (6) that

\begin{aligned} \Vert z_{n}-\hat{p} \Vert ^{2} &\leq \beta _{n} \Vert u_{n}-\hat{p} \Vert ^{2}+(1-\beta _{n}) \Vert J_{n}u_{n}-\hat{p} \Vert ^{2} \\ &\leq \Vert u_{n}-\hat{p} \Vert ^{2} \\ &\leq \alpha _{n} \Vert y_{n}-\hat{p} \Vert ^{2}+(1-\alpha _{n}) \Vert T_{n}y_{n}- \hat{p} \Vert ^{2} \\ &\leq \Vert y_{n}-\hat{p} \Vert ^{2} \\ &\leq \Vert x_{n}-\hat{p} \Vert ^{2}+2\theta _{n}\langle x_{n}-x_{n-1},y_{n}-\hat{p}\rangle \\ &\leq \Vert x_{n}-\hat{p} \Vert ^{2}+2\theta _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2}-2\theta _{n}\langle x_{n}- \hat{p},x_{n-1}-x_{n}\rangle. \end{aligned}

It follows from the above estimate that $$\varGamma \subset C_{n+1}$$. Summing up these facts, we conclude that $$C_{n+1}$$ is nonempty, closed, and convex for all $$n\geq 1$$, and hence the sequence $$\{ x_{n} \}$$ is well defined.
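Incidentally, the closedness and convexity of $$C_{n+1}$$ can also be verified directly: expanding the squares shows that the inequality defining $$C_{n+1}$$ is affine in z, so $$C_{n+1}$$ is the intersection of $$C_{n}$$ with a closed halfspace. Indeed,

\begin{aligned} & \Vert z_{n}-z \Vert ^{2}\leq \Vert x_{n}-z \Vert ^{2}+2\theta _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2}-2\theta _{n}\langle x_{n}-z,x_{n-1}-x_{n}\rangle \\ &\quad\Leftrightarrow\quad 2\bigl\langle x_{n}-z_{n}-\theta _{n}(x_{n-1}-x_{n}),z\bigr\rangle \leq \Vert x_{n} \Vert ^{2}- \Vert z_{n} \Vert ^{2}+2\theta _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2}-2\theta _{n}\langle x_{n},x_{n-1}-x_{n}\rangle, \end{aligned}

and the right-hand side of the latter inequality does not depend on z.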

Step 2. Show that $$\lim_{n\rightarrow \infty }\Vert x_{n}-x_{1}\Vert$$ exists.

Since Γ is a nonempty closed and convex subset of $$\mathcal{H}_{1}$$, there exists a unique $$x^{\ast }\in \varGamma$$ such that $$x^{\ast }=P_{\varGamma }x_{1}$$. From $$x_{n+1}=P_{C_{n+1}}x_{1}$$, we have $$\Vert x_{n+1}-x_{1}\Vert \leq \Vert \hat{p}-x_{1}\Vert$$ for all $$\hat{p}\in \varGamma \subset C_{n+1}$$. In particular, $$\Vert x_{n+1}-x_{1}\Vert \leq \Vert P_{\varGamma }x_{1}-x_{1}\Vert$$. This proves that the sequence $$\{x_{n}\}$$ is bounded. On the other hand, from $$x_{n}=P_{C_{n}}x_{1}$$ and $$x_{n+1}=P_{C_{n+1}}x_{1}\in C_{n+1}\subset C_{n}$$, we get that

$$\Vert x_{n}-x_{1} \Vert \leq \Vert x_{n+1}-x_{1} \Vert .$$

This implies that $$\{ \Vert x_{n}-x_{1} \Vert \}$$ is nondecreasing and hence

$$\lim_{n\rightarrow \infty } \Vert x_{n}-x_{1} \Vert \quad \text{exists.}$$
(31)

Step 3. Show that $$x_{n}\rightharpoonup \hat{q}\in (A+B)^{-1}(0)$$.

In order to proceed, we first calculate the following estimate which is required in the sequel:

\begin{aligned} \Vert x_{n+1}-x_{n} \Vert ^{2} &= \Vert x_{n+1}-x_{1}+x_{1}-x_{n} \Vert ^{2} \\ &= \Vert x_{n+1}-x_{1} \Vert ^{2}+ \Vert x_{n}-x_{1} \Vert ^{2}-2 \langle x_{n}-x_{1},x_{n+1}-x_{1} \rangle \\ &= \Vert x_{n+1}-x_{1} \Vert ^{2}+ \Vert x_{n}-x_{1} \Vert ^{2}-2 \langle x_{n}-x_{1},x_{n+1}-x_{n}+x_{n}-x_{1} \rangle \\ &= \Vert x_{n+1}-x_{1} \Vert ^{2}- \Vert x_{n}-x_{1} \Vert ^{2}-2 \langle x_{n}-x_{1},x_{n+1}-x_{n} \rangle \\ &\leq \Vert x_{n+1}-x_{1} \Vert ^{2}- \Vert x_{n}-x_{1} \Vert ^{2}. \end{aligned}

Here the last inequality holds since $$x_{n}=P_{C_{n}}x_{1}$$ and $$x_{n+1}\in C_{n+1}\subset C_{n}$$ imply $$\langle x_{1}-x_{n},x_{n+1}-x_{n}\rangle \leq 0$$. Taking the lim sup on both sides of the above estimate and utilizing (31), we have $$\limsup_{n\rightarrow \infty } \Vert x_{n+1}-x_{n} \Vert ^{2}=0$$. That is,

$$\lim_{n\rightarrow \infty } \Vert x_{n+1}-x_{n} \Vert =0.$$
(32)

Since $$x_{n+1}\in C_{n+1}$$, we have

$$\Vert z_{n}-x_{n+1} \Vert ^{2}\leq \Vert x_{n}-x_{n+1} \Vert ^{2}+2\theta _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2}-2\theta _{n}\langle x_{n}-x_{n+1},x_{n-1}-x_{n} \rangle.$$

Utilizing (32) and (C1), the above estimate implies that

$$\lim_{n\rightarrow \infty } \Vert z_{n}-x_{n+1} \Vert =0.$$
(33)

From (32), (33), and the following triangle inequality:

$$\Vert z_{n}-x_{n} \Vert \leq \Vert z_{n}-x_{n+1} \Vert + \Vert x_{n+1}-x_{n} \Vert ,$$

we get

$$\lim_{n\rightarrow \infty } \Vert z_{n}-x_{n} \Vert =0.$$
(34)

Also, from Lemma 2.1, we have

\begin{aligned} \Vert z_{n}-\hat{p} \Vert ^{2} &=\beta _{n} \Vert u_{n}-\hat{p} \Vert ^{2}+(1- \beta _{n}) \Vert J_{n}u_{n}-\hat{p} \Vert ^{2}-\beta _{n}(1-\beta _{n}) \Vert J_{n}u_{n}-u_{n} \Vert ^{2} \\ &\leq \Vert u_{n}-\hat{p} \Vert ^{2}-\beta _{n}(1-\beta _{n}) \Vert J_{n}u_{n}-u_{n} \Vert ^{2} \\ &\leq \alpha _{n} \Vert y_{n}-\hat{p} \Vert ^{2}+(1-\alpha _{n}) \Vert T_{n}y_{n}- \hat{p} \Vert ^{2}-\beta _{n}(1-\beta _{n}) \Vert J_{n}u_{n}-u_{n} \Vert ^{2} \\ &\leq \Vert y_{n}-\hat{p} \Vert ^{2}-\beta _{n}(1-\beta _{n}) \Vert J_{n}u_{n}-u_{n} \Vert ^{2} \\ &\leq \Vert x_{n}-\hat{p} \Vert ^{2}+2\theta _{n}\langle x_{n}-x_{n-1},y_{n}-\hat{p}\rangle -\beta _{n}(1-\beta _{n}) \Vert J_{n}u_{n}-u_{n} \Vert ^{2}. \end{aligned}

Rearranging the above estimate, we have

\begin{aligned} &\beta _{n}(1-\beta _{n}) \Vert J_{n}u_{n}-u_{n} \Vert ^{2} \\ &\quad\leq \Vert x_{n}-\hat{p} \Vert ^{2}- \Vert z_{n}-\hat{p} \Vert ^{2}+2 \theta _{n}\langle x_{n}-x_{n-1},y_{n}- \hat{p}\rangle \\ &\quad\leq \bigl( \Vert x_{n}-\hat{p} \Vert + \Vert z_{n}-\hat{p} \Vert \bigr) \Vert x_{n}-z_{n} \Vert +2\theta _{n}\langle x_{n}-x_{n-1},y_{n}- \hat{p}\rangle. \end{aligned}

The above estimate, by using (C1), (C3), and (34), implies that

$$\lim_{n\rightarrow \infty } \Vert J_{n}u_{n}-u_{n} \Vert =0.$$
(35)

Making use of (35), we have the following estimate:

$$\lim_{n\rightarrow \infty } \Vert z_{n}-u_{n} \Vert = \lim_{n \rightarrow \infty }(1-\beta _{n}) \Vert J_{n}u_{n}-u_{n} \Vert =0.$$
(36)

Reasoning as above, we get from (34) and (36) that

$$\lim_{n\rightarrow \infty } \Vert u_{n}-x_{n} \Vert =0.$$
(37)

In a similar fashion, we have

$$\lim_{n\rightarrow \infty } \bigl\Vert T_{s}^{A,B}u_{n}-u_{n} \bigr\Vert =0.$$
(38)

Reasoning as in Step 2 of Theorem 3.1, we have the desired result.

Step 4. Show that $$\hat{q}\in \varOmega$$.

See the proof of Step 3 in Theorem 3.1.

Step 5. Show that $$\hat{q}=P_{\varGamma }x_{1}$$.

Let $$x=P_{\varGamma }x_{1}$$. Since $$\varGamma \subset C_{n+1}$$, we have $$x\in C_{n+1}$$. Since $$x_{n+1}=P_{C_{n+1}}x_{1}$$, we have

$$\Vert x_{n+1}-x_{1} \Vert \leq \Vert x-x_{1} \Vert .$$

On the other hand, we have

\begin{aligned} \Vert x-x_{1} \Vert &\leq \Vert \hat{q}-x_{1} \Vert \\ &\leq \liminf_{n\rightarrow \infty } \Vert x_{n}-x_{1} \Vert \\ &\leq \limsup_{n\rightarrow \infty } \Vert x_{n}-x_{1} \Vert \\ &\leq \Vert x-x_{1} \Vert . \end{aligned}

That is,

$$\Vert \hat{q}-x_{1} \Vert =\lim_{n\rightarrow \infty } \Vert x_{n}-x_{1} \Vert = \Vert x-x_{1} \Vert .$$

Therefore, we conclude that $$\lim_{n\rightarrow \infty }x_{n}=\hat{q}=P_{\varGamma }x_{1}$$. This completes the proof. □

Taking into consideration Remark 3.2, the following results can easily be derived from Theorem 4.1.

### Corollary 4.2

Let $$\mathcal{H}_{1}$$ and $$\mathcal{H}_{2}$$ be two real Hilbert spaces, and let $$C\subseteq \mathcal{H}_{1}$$ and $$Q\subseteq \mathcal{H}_{2}$$ be nonempty closed convex subsets of $$\mathcal{H}_{1}$$ and $$\mathcal{H}_{2}$$, respectively. Let $$F:C\times C\rightarrow \mathbb{R}$$ and $$G:Q\times Q\rightarrow \mathbb{R}$$ be two bifunctions satisfying (A1)–(A4) of Assumption 2.3 such that G is upper semicontinuous. Let $$h:\mathcal{H}_{1}\rightarrow \mathcal{H}_{2}$$ be a bounded linear operator, $$A:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}$$ be an α-inverse strongly monotone operator, and $$B:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}$$ be a maximally monotone operator. Assume that $$\varGamma =(A+B)^{-1}(0)\cap \varOmega \neq \emptyset$$, where $$\varOmega =\{x^{\ast }\in C:x^{\ast }\in \operatorname{EP}(F)\ \textit{and}\ hx^{\ast }\in \operatorname{EP}(G)\}$$. For given $$x_{0},x_{1}\in \mathcal{H}_{1}$$, let the iterative sequences $$\{x_{n}\}$$, $$\{y_{n}\}$$, and $$\{u_{n}\}$$ be generated by

\begin{aligned} &y_{n} =x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ &u_{n} =\alpha _{n}y_{n}+(1-\alpha _{n})T_{r_{n}}^{F}\bigl(I-\gamma h^{ \ast } \bigl(I-T_{r_{n}}^{G}\bigr)h\bigr)y_{n}, \\ &z_{n} =\beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n}, \\ &C_{n+1} =\bigl\{ z\in C_{n}: \Vert z_{n}-z \Vert ^{2}\leq \Vert x_{n}-z \Vert ^{2}+2 \theta _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2}-2\theta _{n} \langle x_{n}-z,x_{n-1}-x_{n} \rangle \bigr\} , \\ &x_{n+1} =P_{C_{n+1}}x_{1}, \quad n \geq 1, \end{aligned}
(39)

where $$J_{n}=(\operatorname{Id}+s_{n}B)^{-1}(\operatorname{Id}-s_{n}A)$$ with $$\{s_{n}\}\subset (0,2\alpha )$$ and $$\{\theta _{n}\}\subset [ 0,\theta ]$$ for some $$\theta \in [ 0,1)$$. Let $$\gamma \in (0,\frac{1}{L})$$, where L is the spectral radius of $$h^{\ast }h$$ and $$h^{\ast }$$ is the adjoint of h. Let $$\{r_{n}\}\subset (0,\infty )$$, and let $$\{\alpha _{n}\},\{\beta _{n}\}$$ be in $$[0,1]$$. Assume that the following conditions hold:

1. C1

$$\sum_{n=1}^{\infty }\theta _{n}\Vert x_{n}-x_{n-1}\Vert <\infty$$;

2. C2

$$0<\liminf_{n\rightarrow \infty }\alpha _{n}\leq \limsup_{n \rightarrow \infty }\alpha _{n}<1$$;

3. C3

$$0<\liminf_{n\rightarrow \infty }\beta _{n}\leq \limsup_{n \rightarrow \infty }\beta _{n}<1$$;

4. C4

$$\liminf_{n\rightarrow \infty }r_{n}>0$$;

5. C5

$$0<\liminf_{n\rightarrow \infty }s_{n}\leq \limsup_{n\rightarrow \infty }s_{n}<2\alpha$$.

Then the sequence $$\{x_{n}\}$$ generated by (39) strongly converges to a point $$\hat{q}=P_{\varGamma }x_{1}$$.

### Corollary 4.3

Let C be a nonempty closed convex subset of a real Hilbert space $$\mathcal{H}_{1}$$. Let $$F:C\times C\rightarrow \mathbb{R}$$ be a bifunction satisfying (A1)–(A4) of Assumption 2.3, and let $$\phi _{f}:C\rightarrow \mathcal{H}_{1}$$ be a proper lower semicontinuous and convex function such that $$C\cap \operatorname{dom} ( \phi _{f} ) \neq \emptyset$$. Let $$A:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}$$ be an α-inverse strongly monotone operator and $$B:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}$$ be a maximally monotone operator. Assume that $$\varGamma =(A+B)^{-1}(0) \cap \operatorname{MEP}(F,\phi _{f})\neq \emptyset$$. For given $$x_{0},x_{1}\in \mathcal{H}_{1}$$, let the iterative sequences $$\{x_{n}\}$$, $$\{y_{n}\}$$, and $$\{u_{n}\}$$ be generated by

\begin{aligned} &y_{n} =x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ &u_{n} =\alpha _{n}y_{n}+(1-\alpha _{n})T_{r_{n}}^{F}y_{n}, \\ &z_{n} =\beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n}, \\ &C_{n+1} =\bigl\{ z\in C_{n}: \Vert z_{n}-z \Vert ^{2}\leq \Vert x_{n}-z \Vert ^{2}+2 \theta _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2}-2\theta _{n} \langle x_{n}-z,x_{n-1}-x_{n} \rangle \bigr\} , \\ &x_{n+1} =P_{C_{n+1}}x_{1},\quad n \geq 1, \end{aligned}
(40)

where $$J_{n}=(\operatorname{Id}+s_{n}B)^{-1}(\operatorname{Id}-s_{n}A)$$ with $$\{s_{n}\}\subset (0,2\alpha )$$ and $$\{\theta _{n}\}\subset [ 0,\theta ]$$ for some $$\theta \in [ 0,1)$$. Let $$\{r_{n}\}\subset (0,\infty )$$, and let $$\{\alpha _{n}\},\{\beta _{n}\}$$ be in $$[0,1]$$. Assume that the following conditions hold:

1. C1

$$\sum_{n=1}^{\infty }\theta _{n}\Vert x_{n}-x_{n-1}\Vert <\infty$$;

2. C2

$$0<\liminf_{n\rightarrow \infty }\alpha _{n}\leq \limsup_{n \rightarrow \infty }\alpha _{n}<1$$;

3. C3

$$0<\liminf_{n\rightarrow \infty }\beta _{n}\leq \limsup_{n \rightarrow \infty }\beta _{n}<1$$;

4. C4

$$\liminf_{n\rightarrow \infty }r_{n}>0$$;

5. C5

$$0<\liminf_{n\rightarrow \infty }s_{n}\leq \limsup_{n\rightarrow \infty }s_{n}<2\alpha$$.

Then the sequence $$\{x_{n}\}$$ generated by (40) strongly converges to a point $$\hat{q}=P_{\varGamma }x_{1}$$.

### Corollary 4.4

Let C be a nonempty closed convex subset of a real Hilbert space $$\mathcal{H}_{1}$$. Let $$F:C\times C\rightarrow \mathbb{R}$$ be a bifunction satisfying (A1)–(A4) of Assumption 2.3. Let $$A:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}$$ be an α-inverse strongly monotone operator and $$B:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}$$ be a maximally monotone operator. Assume that $$\varGamma =(A+B)^{-1}(0)\cap \operatorname{EP}(F)\neq \emptyset$$. For given $$x_{0},x_{1}\in \mathcal{H}_{1}$$, let the iterative sequences $$\{x_{n}\}$$, $$\{y_{n}\}$$, and $$\{u_{n}\}$$ be generated by

\begin{aligned} &y_{n} =x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ &u_{n} =\alpha _{n}y_{n}+(1-\alpha _{n})T_{r_{n}}^{F}y_{n}, \\ &z_{n} =\beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n}, \\ &C_{n+1} =\bigl\{ z\in C_{n}: \Vert z_{n}-z \Vert ^{2}\leq \Vert x_{n}-z \Vert ^{2}+2 \theta _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2}-2\theta _{n} \langle x_{n}-z,x_{n-1}-x_{n} \rangle \bigr\} , \\ &x_{n+1} =P_{C_{n+1}}x_{1}, \quad n \geq 1, \end{aligned}
(41)

where $$J_{n}=(\operatorname{Id}+s_{n}B)^{-1}(\operatorname{Id}-s_{n}A)$$ with $$\{s_{n}\}\subset (0,2\alpha )$$ and $$\{\theta _{n}\}\subset [ 0,\theta ]$$ for some $$\theta \in [ 0,1)$$. Let $$\{r_{n}\}\subset (0,\infty )$$, and let $$\{\alpha _{n}\},\{\beta _{n}\}$$ be in $$[0,1]$$. Assume that the following conditions hold:

1. C1

$$\sum_{n=1}^{\infty }\theta _{n}\Vert x_{n}-x_{n-1}\Vert <\infty$$;

2. C2

$$0<\liminf_{n\rightarrow \infty }\alpha _{n}\leq \limsup_{n \rightarrow \infty }\alpha _{n}<1$$;

3. C3

$$0<\liminf_{n\rightarrow \infty }\beta _{n}\leq \limsup_{n \rightarrow \infty }\beta _{n}<1$$;

4. C4

$$\liminf_{n\rightarrow \infty }r_{n}>0$$;

5. C5

$$0<\liminf_{n\rightarrow \infty }s_{n}\leq \limsup_{n\rightarrow \infty }s_{n}<2\alpha$$.

Then the sequence $$\{x_{n}\}$$ generated by (41) strongly converges to a point $$\hat{q}=P_{\varGamma }x_{1}$$.

### Remark 4.5

We remark here that condition (C1) can easily be implemented in numerical computation since the value of $$\Vert x_{n}-x_{n-1}\Vert$$ is known before choosing $$\theta _{n}$$. Moreover, the parameter $$\theta _{n}$$ can be taken as $$0\leq \theta _{n}\leq \widehat{\theta _{n}}$$,

$$\widehat{\theta _{n}}= \textstyle\begin{cases} \min \{\frac{z_{n}}{ \Vert x_{n}-x_{n-1} \Vert }, \theta \} & \text{if }x_{n} \neq x_{n-1}; \\ \theta& \text{otherwise,} \end{cases}$$

where $$\{z_{n}\}$$ is a positive sequence such that $$\sum_{n=1}^{\infty }z_{n}<\infty$$.
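In computations, $$\widehat{\theta _{n}}$$ is obtained on the fly from the last two iterates. A minimal sketch in $$\mathbb{R}$$, assuming the illustrative summable choice $$z_{n}=1/n^{2}$$:

```python
# Inertial parameter from Remark 4.5 with the (illustrative) summable
# sequence z_n = 1/n^2, so that theta_n * |x_n - x_{n-1}| <= z_n and
# condition (C1) holds by comparison with sum 1/n^2.

def theta_hat(n, x_n, x_prev, theta=0.5):
    z_n = 1.0 / n**2
    diff = abs(x_n - x_prev)            # ||x_n - x_{n-1}|| on the real line
    if diff == 0.0:
        return theta                    # x_n = x_{n-1}: any theta_n in [0, theta] works
    return min(z_n / diff, theta)

print(theta_hat(10, 1.0, 1.0))   # 0.5  (previous step made no move)
print(theta_hat(10, 1.0, 0.5))   # 0.02 = (1/100)/0.5, well below the cap theta
```

Note that $$\widehat{\theta _{n}}$$ uses only already-computed quantities, which is exactly why (C1) is implementable.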

## Examples and numerical results

In this section, we give examples and numerical results to strengthen the theoretical results established in the previous sections.

### Example 5.1

Let $$\mathcal{H}_{1} = \mathcal{H}_{2} = \mathbb{R}^{3}$$ with the inner product defined by $$\langle x, y\rangle = x^{T}y$$ for all $$x, y \in \mathbb{R}^{3}$$ and the induced usual norm $$\Vert \cdot \Vert$$. Let $$C=\{ x \in \mathbb{R}_{+}^{3}:\sqrt{x^{2}_{1}+x^{2}_{2}+x^{2}_{3}} \leq 1 \}$$ and $$Q=\{x \in \mathbb{R}_{-}^{3}:\langle a, x\rangle \geq b\}$$, where $$a=(2,-1,3)$$ and $$b=1$$. Let $$F:C \times C \rightarrow \mathbb{R}$$ be defined as $$F(x,y)=2\max_{x_{i} \in x, {y_{i} \in y}}x(y-x)$$, where $$x =(x_{1},x_{2},x_{3}), y=(y_{1},y_{2},y_{3})\in C$$, and $$G(u,v)=\max_{u_{i}\in u,{v_{i}\in v}}u(v-u)$$, where $$u =(u_{1},u_{2},u_{3}),v=(v_{1},v_{2},v_{3}) \in Q$$. Let the mappings $$\phi _{f}:C \rightarrow \mathcal{H}_{1}$$ and $$\phi _{g}:Q \rightarrow \mathcal{H}_{2}$$ be defined as $$\phi _{f}(x)=0$$ for each $$x \in C$$ and $$\phi _{g}(u)=0$$ for each $$u \in Q$$. For $$r > 0$$, let $$T^{F}_{r}x=P_{C}x$$ and $$T^{G}_{r}x=P_{Q}x$$. Moreover, we define three mappings $$h,A,B:\mathbb{R}^{3}\rightarrow \mathbb{R}^{3}$$ as follows:

$$h(x)= \begin{pmatrix} 1 & -1 & 5 \\ 0 & 1 & 3 \\ 0 & 0 & 2 \end{pmatrix} \begin{pmatrix} x_{1} \\ x_{2} \\ x_{3} \end{pmatrix}, \qquad Ax=3x+(1,2,1)\quad \text{and}\quad Bx=4x$$

for all $$x =(x_{1},x_{2},x_{3}) \in \mathbb{R}^{3}$$.

Choose $$\alpha _{n} =\frac{n}{100n+1}$$, $$\beta _{n} =\frac{n}{100n+1}$$, $$r_{n} = \frac{1}{5}$$, $$L=3$$, and $$s = 0.1$$.

Choosing the inertial parameter

$$\theta _{n}= \textstyle\begin{cases} \min \{\frac{1}{n^{2} \Vert x_{n}-x_{n-1} \Vert },0.5\}& \text{if }x_{n}\neq x_{n-1}; \\ 0.5& \text{otherwise}, \end{cases}$$

we can construct strongly convergent sequences $$\{x_{n}\}$$, $$\{y_{n}\}$$, and $$\{u_{n}\}$$ as defined in Theorem 4.1.
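The inertial parameter above is cheap to evaluate at each iteration. A minimal sketch in Python with NumPy (the function and variable names are ours):

```python
import numpy as np

def theta_n(n, x_n, x_prev, theta=0.5):
    """Inertial parameter: min(1 / (n^2 * ||x_n - x_{n-1}||), theta)
    when the iterates differ, and theta otherwise."""
    diff = np.linalg.norm(x_n - x_prev)
    if diff == 0.0:
        return theta
    return min(1.0 / (n ** 2 * diff), theta)
```

For instance, `theta_n(2, np.array([1.0, 0.0, 0.0]), np.zeros(3))` evaluates `min(1/4, 0.5) = 0.25`.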

### Proof

It is easy to prove that the bifunctions F and G satisfy Assumption 2.3(A1)–(A4) and G is upper semicontinuous. The operator h is a bounded linear operator on $$\mathbb{R}^{3}$$ with adjoint operator $$h^{\ast }$$ and $$\|h\| = \|h^{\ast }\| = 3$$. Moreover, it is clear that A is $$1/3$$-inverse strongly monotone and B is maximal monotone. Furthermore, it is easy to observe that, for $$s > 0$$,

\begin{aligned} J^{B}_{s}(x-sAx)&=(I+sB)^{-1}(x-sAx) \\ &= \frac{1-3s}{1+4s}x-\frac{s}{1+4s}(1,2,1). \end{aligned}

Note that $$\operatorname{Sol}(\operatorname{MEP}(F,\phi _{f}))=\{0\}=\operatorname{Sol}(\operatorname{MEP}(G,\phi _{g}))$$. Hence $$\varGamma = (A + B)^{-1}(0)\cap \varOmega = \{0\}$$. Now, we compute our desired sequences in the following six steps.
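The closed form of the resolvent above can be sanity-checked numerically. The following Python/NumPy sketch (names are ours) compares the forward-backward step $$J^{B}_{s}(x-sAx)$$ with the displayed closed form for the example's data $$Ax=3x+(1,2,1)$$ and $$Bx=4x$$:

```python
import numpy as np

c = np.array([1.0, 2.0, 1.0])
A = lambda x: 3 * x + c                      # 1/3-inverse strongly monotone
resolvent_B = lambda y, s: y / (1 + 4 * s)   # (I + sB)^{-1} y = y/(1+4s) for Bx = 4x

def forward_backward_step(x, s):
    return resolvent_B(x - s * A(x), s)

def closed_form(x, s):
    return (1 - 3 * s) / (1 + 4 * s) * x - s / (1 + 4 * s) * c

x = np.array([0.3, -0.2, 0.5])
s = 0.1
assert np.allclose(forward_backward_step(x, s), closed_form(x, s))
```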

Step 1. Find $$z \in Q$$ such that $$G(z,y)+\phi _{g}(y)-\phi _{g}(z)+\frac{1}{r}\langle y - z,z - hx \rangle \geq 0$$ for all $$y \in Q$$.

Observe that

\begin{aligned} &G(z,y)+\phi _{g}(y)-\phi _{g}(z)+\frac{1}{r}\langle y - z,z - hx \rangle \geq 0 \\ &\quad\Leftrightarrow \quad z(y-z)+\frac{1}{r}\langle y - z,z - hx \rangle \geq 0 \\ &\quad\Leftrightarrow \quad rz(y-z)+(y-z) (z-hx)\geq 0 \\ &\quad\Leftrightarrow \quad (y-z) \bigl((1+r)z-hx\bigr)\geq 0 \end{aligned}

for all $$y \in Q$$. Thus, by Lemma 2.4(2), we know that $$T^{G}_{r}hx$$ is single-valued for each $$x \in C$$. Hence $$z=\frac{hx}{1+r}$$.

Step 2. Find $$m \in C$$ such that $$m = x-\gamma h^{\ast }(I-T_{r}^{G})hx$$.

It follows from Step 1 that

\begin{aligned} m&=x-\gamma h^{\ast }\bigl(I-T_{r}^{G}\bigr)hx \\ &= x - \gamma \biggl(3x-\frac{3(hx)}{1+r}\biggr) \\ &= (1-3\gamma )x+\frac{3\gamma }{1+r}(hx). \end{aligned}

Step 3. Find $$u \in C$$ such that $$F(u,v)+\phi _{f}(v)-\phi _{f}(u)+\frac{1}{r}\langle v-u,u-m\rangle \geq 0$$ for all $$v \in C$$. From Step 2, we have

\begin{aligned} &F(u,v)+\phi _{f}(v)-\phi _{f}(u)+\frac{1}{r}\langle v-u,u-m\rangle \geq 0\\ &\quad\Leftrightarrow \quad (2u) (v - u)+\frac{1}{r}\langle v-u, u-m \rangle \geq 0 \\ &\quad\Leftrightarrow\quad r(2u) (v-u)+(v-u) (u-m)\geq 0 \\ &\quad\Leftrightarrow\quad (v-u) \bigl((1+2r)u-m\bigr)\geq 0 \end{aligned}

for all $$v \in C$$. Similarly, by Lemma 2.4(2), we obtain $$u=\frac{m}{1+2r}=\frac{(1-3\gamma )x}{1+2r}+ \frac{3\gamma hx}{(1+r)(1+2r)}$$.
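Steps 1–3 compose into a single affine map $$x \mapsto u$$, which is what the algorithm evaluates at each iterate. A Python/NumPy sketch following the closed forms above (function and variable names are ours):

```python
import numpy as np

# the matrix defining h in the example
h = np.array([[1.0, -1.0, 5.0],
              [0.0,  1.0, 3.0],
              [0.0,  0.0, 2.0]])

def u_from_x(x, r, gamma):
    # Step 2 closed form: m = (1 - 3*gamma) x + (3*gamma/(1+r)) hx
    m = (1 - 3 * gamma) * x + (3 * gamma / (1 + r)) * (h @ x)
    # Step 3 closed form: u = m / (1 + 2r)
    return m / (1 + 2 * r)
```

For example, with $$r=0.2$$ and $$\gamma =0.05$$, the point $$(1,0,0)$$ (for which $$hx=(1,0,0)$$) is mapped to $$\frac{0.85+0.125}{1.4}(1,0,0)$$.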

Step 4.

$$\textstyle\begin{cases} x_{0}=x \in \mathbb{R}^{3}, \\ y_{n}=x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ u_{n}=\frac{n}{100n+1} y_{n}+ (1-\frac{n}{100n+1} ) (\frac{(1-3\gamma )y_{n}}{1+2r}+\frac{3\gamma hy_{n}}{(1+r)(1+2r)} ), \\ z_{n}=\frac{n}{100n+1}u_{n}+ (1-\frac{n}{100n+1} ) (\frac{1-3s}{1+4s}u_{n}-\frac{s}{1+4s}(1,2,1) ).\end{cases}$$

Step 5. Find

$$C_{n+1}=\{z \in C_{n}:\|z_{n}-z\|^{2}\leq \|x_{n}-z\|^{2}+2\theta ^{2}_{n} \|x_{n}-x_{n-1}\|^{2}-2\theta _{n}\langle x_{n}-z,x_{n-1}-x_{n} \rangle \}$$.

Since $$\|z_{n}-z\|^{2}\leq \|x_{n}-z\|^{2}+2\theta ^{2}_{n}\|x_{n}-x_{n-1} \|^{2}-2\theta _{n}\langle x_{n}-z,x_{n-1}-x_{n}\rangle$$, we have

$$\frac{ \Vert z_{n} \Vert ^{2}- \Vert x_{n} \Vert ( \Vert x_{n} \Vert +2)-2\theta _{n} \Vert x_{n}-x_{n-1} \Vert ^{2}}{2(( \Vert z_{n} \Vert - \Vert x_{n} \Vert )-\theta _{n} \Vert x_{n}-x_{n-1} \Vert )} \leq \Vert z \Vert .$$

Step 6. Compute the numerical results of $$x_{n+1}=P_{C_{n+1}}x_{1}$$.
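Since the quadratic terms in z cancel on both sides of the defining inequality, each $$C_{n+1}$$ is the intersection of $$C_{n}$$ with a halfspace, so $$P_{C_{n+1}}$$ can be built from halfspace projections. A sketch of the elementary building block (Python/NumPy; names are ours), assuming the halfspace is written as $$\{z : \langle w, z\rangle \leq b\}$$:

```python
import numpy as np

def project_halfspace(x, w, b):
    """Projection of x onto the halfspace {z : <w, z> <= b}."""
    excess = np.dot(w, x) - b
    if excess <= 0:
        return x.copy()          # x already lies in the halfspace
    return x - (excess / np.dot(w, w)) * w
```

For instance, `project_halfspace(np.array([2.0, 0.0]), np.array([1.0, 0.0]), 1.0)` returns `[1.0, 0.0]`.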

We provide a numerical comparison between the inertial forward-backward method defined in Theorem 4.1 and the standard forward-backward method (i.e., $$\theta _{n}=0$$). The stopping criterion is $$E_{n}=\|x_{n+1}-x_{n}\|<10^{-9}$$. □
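For intuition about this comparison, the non-inertial forward-backward recursion $$x_{n+1}=J^{B}_{s}(x_{n}-sAx_{n})$$ can be run directly on the example's data: with $$s=0.1$$ it contracts with factor $$\frac{1-3s}{1+4s}=0.5$$ per step toward its unique fixed point $$-\frac{1}{7}(1,2,1)$$. A Python sketch with the same stopping criterion (the inertial and projection steps of Theorem 4.1 are omitted; names are ours):

```python
import numpy as np

c = np.array([1.0, 2.0, 1.0])
s = 0.1

def fb_step(x):
    # x_{n+1} = (I + sB)^{-1}(x - sAx) = ((1-3s) x - s c) / (1+4s)
    return ((1 - 3 * s) * x - s * c) / (1 + 4 * s)

x = np.array([1.0, 1.0, 1.0])
while True:
    x_next = fb_step(x)
    if np.linalg.norm(x_next - x) < 1e-9:   # stopping criterion E_n < 10^{-9}
        break
    x = x_next

# the limit is the fixed point of the map, -c/7
assert np.allclose(x_next, -c / 7, atol=1e-8)
```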

The plots of the error $$E_{n}$$ for each choice in Table 1 are shown in Figs. 1–3, respectively.

## Conclusion

From a mathematical formulation of the monotone inclusion problem together with the split mixed equilibrium problem, we have derived an iterative algorithm comprising a modified version of the forward-backward splitting algorithm and the shrinking projection method in Hilbert spaces. We have shown theoretically that the proposed algorithm exhibits weak and strong convergence towards the common solution under a suitable set of constraints. As demonstrated in Sect. 5, the proposed algorithm is easily computable, and its numerical performance compares favorably with the existing algorithms. We are interested in extending these results to various classes of monotone inclusion problems together with fixed point problems and/or equilibrium problems in Hilbert spaces.

## References

1. Agarwal, P., Agarwal, R.P., Ruzhansky, M.: Special Functions and Analysis of Differential Equations. CRC Press, Boca Raton (2020)
2. Agarwal, P., Dragomir, S.S., Jleli, M., Samet, B.: Advances in Mathematical Inequalities and Applications. Springer, Singapore (2019)
3. Agarwal, P., Jleli, M., Samet, B.: Fixed Point Theory in Metric Spaces, Recent Advances and Applications. Springer, Singapore (2018)
4. Alvarez, F.: Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert spaces. SIAM J. Optim. 14, 773–782 (2004)
5. Alvarez, F., Attouch, H.: An inertial proximal method for monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9, 3–11 (2001)
6. Baleanu, D., Jafari, H., Khan, H., Johnston, S.J.: Results for mild solution of fractional coupled hybrid boundary value problems. Open Math. 13(1), 601–608 (2015)
7. Bauschke, H.H., Combettes, P.L.: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 26, 248–264 (2001)
8. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics. Springer, New York (2011)
9. Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)
10. Bot, R.I., Csetnek, E.R.: An inertial alternating direction method of multipliers. Minimax Theory Appl. 1, 29–49 (2016)
11. Bot, R.I., Csetnek, E.R., Hendrich, C.: Inertial Douglas–Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 256, 472–487 (2015)
12. Bot, R.I., Csetnek, E.R., László, S.C.: An inertial forward–backward algorithm for the minimization of the sum of two nonconvex functions. EURO J. Comput. Optim. 4, 3–25 (2016)
13. Browder, F.E.: Nonexpansive nonlinear operators in a Banach space. Proc. Natl. Acad. Sci. USA 54, 1041–1044 (1965)
14. Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006)
15. Censor, Y., Elfving, T., Kopf, N., Bortfeld, T.: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21, 2071–2084 (2005)
16. Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 59, 301–323 (2012)
17. Cholamjiak, P.: A generalized forward-backward splitting method for solving quasi inclusion problems in Banach spaces. Numer. Algorithms 8, 221–239 (1994)
18. Combettes, P.L.: The convex feasibility problem in image recovery. Adv. Imaging Electron Phys. 95, 155–453 (1996)
19. Combettes, P.L.: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 53, 475–504 (2004)
20. Combettes, P.L., Hirstoaga, S.A.: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117–136 (2005)
21. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)
22. Dang, Y., Sun, J., Xu, H.: Inertial accelerated algorithms for solving a split feasibility problem. J. Ind. Manag. Optim. 13, 1383–1394 (2017)
23. Daniele, P., Giannessi, F., Maugeri, A.: Equilibrium Problems and Variational Models. Nonconvex Optimization and Its Applications, vol. 68. Kluwer Academic, Norwell (2003)
24. Deepho, J., Kumam, W., Kumam, P.: A new hybrid projection algorithm for solving the split generalized equilibrium problems and the system of variational inequality problems. J. Math. Model. Algorithms 13, 405–423 (2014)
25. Deepho, J., Martinez-Moreno, J., Kumam, P.: A viscosity of Cesàro mean approximation method for split generalized equilibrium, variational inequality and fixed point problems. J. Nonlinear Sci. Appl. 9, 1475–1496 (2016)
26. Douglas, J., Rachford, H.H.: On the numerical solution of the heat conduction problem in two and three space variables. Trans. Am. Math. Soc. 82, 421–439 (1956)
27. El-Sayed, A.A., Agarwal, P.: Numerical solution of multiterm variable-order fractional differential equations via shifted Legendre polynomials. Math. Methods Appl. Sci. 42, 3978–3991 (2019)
28. He, Z.: The split equilibrium problem and its convergence algorithms. J. Inequal. Appl. 2012, 162 (2012)
29. Kazmi, K.R., Rizvi, S.H.: Iterative approximation of a common solution of a split equilibrium problem, a variational inequality problem and fixed point problem. J. Egypt. Math. Soc. 21, 44–51 (2013)
30. Khan, A., Abdeljawad, T., Gomez-Aguilar, J.F., Khan, H.: Dynamical study of fractional order mutualism parasitism food web module. Chaos Solitons Fractals 134, 109685 (2020)
31. Khan, H., Abdeljawad, T., Tunc, C., Alkhazzan, A., Khan, A.: Minkowski's inequality for the AB-fractional integral operator. J. Inequal. Appl. 2019, 96 (2019)
32. Khan, H., Khan, A., Abdeljawad, T., Alkhazzan, A.: Existence results in Banach space for a nonlinear impulsive system. Adv. Differ. Equ. 2019, 18 (2019)
33. Khan, H., Khan, A., Chen, W., Shah, K.: Stability analysis and a numerical scheme for fractional Klein–Gordon equations. Math. Methods Appl. Sci. 42(2), 723–732 (2018)
34. Khan, H., Tunc, C., Baleanu, D., Khan, A., Alkhazzan, A.: Inequalities for n-class of functions using the Saigo fractional integral operator. RACSAM 113, 2407–2420 (2019)
35. Khan, H., Tunc, C., Khan, A.: Green function's properties and existence theorems for nonlinear singular-delay-fractional differential equations. Discrete Contin. Dyn. Syst. (2020). https://doi.org/10.3934/dcdss.2020139
36. Khan, M.A.A.: Convergence characteristics of a shrinking projection algorithm in the sense of Mosco for split equilibrium problem and fixed point problem in Hilbert spaces. Linear Nonlinear Anal. 3, 423–435 (2017)
37. Khan, M.A.A., Arfat, Y., Butt, A.R.: A shrinking projection approach to solve split equilibrium problems and fixed point problems in Hilbert spaces. UPB Sci. Bull., Ser. A 80(1), 33–46 (2018)
38. Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)
39. Lopez, G., Martin-Marquez, V., Wang, F., Xu, H.K.: Forward–backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal. 2012, Article ID 109236 (2012)
40. Lorenz, D., Pock, T.: An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51, 311–325 (2015)
41. Ma, Z., Wang, L., Chang, S.-S., Duan, W.: Convergence theorems for split equality mixed equilibrium problems with applications. Fixed Point Theory Appl. 2015, 31 (2015)
42. Martinez-Yanes, C., Xu, H.K.: Strong convergence of the CQ method for fixed point iteration processes. Nonlinear Anal. 64, 2400–2411 (2006)
43. Minty, G.J.: On a monotonicity method for the solution of nonlinear equations in Banach spaces. Proc. Natl. Acad. Sci. USA 50, 1038–1041 (1963)
44. Moudafi, A.: Split monotone variational inclusions. J. Optim. Theory Appl. 150, 275–283 (2011)
45. Moudafi, A., Oliny, M.: Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 155, 447–454 (2003)
46. Nesterov, Y.: A method for solving the convex programming problem with convergence rate $$O(\frac{1}{k^{2}})$$. Dokl. Akad. Nauk SSSR 269, 543–547 (1983)
47. Peaceman, D.H., Rachford, H.H.: The numerical solution of parabolic and elliptic differential equations. J. Soc. Ind. Appl. Math. 3, 28–41 (1955)
48. Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964)
49. Polyak, B.T.: Introduction to Optimization. Optimization Software, New York (1987)
50. Rekhviashvili, S., Pskhu, A., Agarwal, P., Jain, S.: Application of the fractional oscillator model to describe damped vibrations. Turk. J. Phys. 43, 236–242 (2019)
51. Rizvi, S.H.: A strong convergence theorem for split mixed equilibrium and fixed point problems for nonexpansive mappings. J. Fixed Point Theory Appl. 20, 1–22 (2018)
52. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)
53. Ruzhansky, M., Cho, Y.J., Agarwal, P.: Advances in Real and Complex Analysis with Applications. Springer, Singapore (2017)
54. Sitthithakerngkiet, K., Deepho, J., Martinez-Moreno, J., Kumam, P.: An iterative approximation scheme for solving a split generalized equilibrium, variational inequalities and fixed point problems. Int. J. Comput. Math. 94, 2373–2395 (2017)
55. Suantai, S.: Weak and strong convergence criteria of Noor iterations for asymptotically nonexpansive mappings. J. Math. Anal. Appl. 311, 506–517 (2005)
56. Thong, D.V., Vinh, N.T.: Inertial methods for fixed point problems and zero point problems of the sum of two monotone mappings. Optimization 68, 1037–1072 (2019)
57. Tseng, P.: Applications of a splitting algorithm to decomposition in convex programming and variational inequalities. SIAM J. Control Optim. 29, 119–138 (1991)
58. Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)
59. Witthayarat, U., Abdou, A.A., Cho, Y.J.: Shrinking projection method for solving split equilibrium problems and fixed point problems for asymptotically nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2015, 200 (2015)
60. Yuan, H.: Fixed points and split equilibrium problems in Hilbert spaces. J. Nonlinear Convex Anal. 19, 1983–1993 (2018)

## Acknowledgements

The authors wish to thank the anonymous referees for their comments and suggestions.

## Availability of data and materials

Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

## Funding

This project was supported by Theoretical and Computational Science (TaCS-CoE), Faculty of Science, KMUTT. The author Yasir Arfat was supported by the Petchra Pra Jom Klao Ph.D. Research Scholarship from King Mongkut’s University of Technology Thonburi, Thailand (Grant No. 16/2562).

## Author information


### Contributions

All authors contributed equally and significantly in writing this article. All authors read and approved the final manuscript.

### Corresponding author

Correspondence to Poom Kumam.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests.
