
A parallel Tseng’s splitting method for solving common variational inclusion applied to signal recovery problems


In this work we propose an accelerated algorithm that combines several techniques, including inertial proximal algorithms and Tseng’s splitting algorithm, for solving the common variational inclusion problem in real Hilbert spaces. We establish a strong convergence theorem for the algorithm under standard and suitable assumptions and illustrate the applicability and advantages of the new scheme on a signal recovery problem arising in compressed sensing.


Let \(\mathcal{H}\) be a real Hilbert space with inner product \(\langle \cdot,\cdot \rangle \) and induced norm \(\|\cdot \|\). We are interested in the variational inclusion problem (VIP), which is to find \(\bar{u}\in \mathcal{H}\) such that

$$\begin{aligned} 0\in (F+G)\bar{u}, \end{aligned}$$

where \(F:\mathcal{H}\rightarrow \mathcal{H}\) is a single-valued mapping and \(G:\mathcal{H}\rightarrow 2^{\mathcal{H}}\) is a multivalued mapping. The solution set of VIP (1.1) is denoted by \((F+G)^{-1}(0)\). VIP (1.1) includes as particular cases many mathematical problems, such as variational inequalities, the split feasibility problem, the convex minimization problem, and the linear inverse problem, with applications in machine learning, statistical modeling, image processing, and signal recovery, see [5–7, 21]. Many splitting algorithms have been introduced and improved to find a solution of VIP (1.1); one of the most famous is the forward-backward splitting algorithm, see [14] for more details. It is well known that VIP (1.1) is equivalent to the fixed point equation \(\bar{u}=J^{G}_{\gamma }(I-\gamma F)\bar{u}\), where \(J^{G}_{\gamma }\) is the resolvent operator of G defined by \(J^{G}_{\gamma }=(I+\gamma G)^{-1}\) with \(\gamma >0\). This naturally leads to the following forward-backward splitting algorithm, proposed in [1]:

$$\begin{aligned} u_{k+1}=J^{G}_{\gamma _{k}}(I-\gamma _{k} F)u_{k},\quad \gamma _{k}>0, k\geq 0. \end{aligned}$$
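For intuition, iteration (1.2) can be sketched in Python for the ℓ1-regularized least-squares model treated later in Sect. 4, where the resolvent \(J^{G}_{\gamma }\) reduces to componentwise soft thresholding. This is a minimal sketch, not the authors' code: the problem data and the fixed step size \(\gamma \) (which should satisfy \(\gamma < 2/\|H\|_{2}^{2}\)) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    # Resolvent J^G_gamma for G = subdifferential of tau*||.||_1:
    # componentwise soft thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def forward_backward(H, b, eta, gamma, iters=500):
    # Iteration (1.2): u_{k+1} = J^G_gamma (I - gamma*F) u_k
    # with F(u) = H^T (H u - b).
    u = np.zeros(H.shape[1])
    for _ in range(iters):
        u = soft_threshold(u - gamma * H.T @ (H @ u - b), gamma * eta)
    return u
```

For instance, with \(H=I\), \(b=(3, 0.1)\), \(\eta =1\), and \(\gamma =0.5\), the iterates converge to the soft-thresholded vector \((2, 0)\).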

In 2015, O’Donoghue and Candès [19] showed that the forward-backward splitting algorithm (1.2), which reduces to the proximal gradient algorithm for convex optimization problems, may require a large number of iterations when F is the gradient of a convex and differentiable function. Finding ways to speed up the convergence of such algorithms is therefore very important. Earlier, in 1964, Polyak [20] introduced the inertial extrapolation technique, also called the heavy ball method, to speed up the convergence of iterative algorithms. Later on, inertial extrapolation has been applied to VIP (1.1) and improved by many mathematicians, see [2, 15, 18]. The inertial proximal algorithm combines the inertial technique with the forward-backward algorithm. The following inertial proximal algorithm was proposed by Moudafi and Oliny [17]:

$$\begin{aligned} & r_{k}=u_{k}+\xi _{k}(u_{k}-u_{k-1}), \\ &u_{k+1}=J^{G}_{\gamma _{k}} \bigl(r_{k}-\gamma _{k}F(u_{k}) \bigr),\quad k\geq 0, \end{aligned}$$
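The Moudafi–Oliny scheme adds an extrapolation step to the previous iteration; note that the forward step evaluates F at \(u_{k}\), not at \(r_{k}\). Below is a minimal Python sketch under the same illustrative ℓ1-regularized least-squares setting as before; the constant values of ξ and γ are assumptions for the example.

```python
import numpy as np

def soft_threshold(v, tau):
    # prox of tau*||.||_1 (componentwise soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def inertial_forward_backward(H, b, eta, gamma, xi=0.3, iters=500):
    # Moudafi-Oliny: r_k = u_k + xi*(u_k - u_{k-1});
    # u_{k+1} = J^G_gamma(r_k - gamma*F(u_k)), F evaluated at u_k.
    u_prev = u = np.zeros(H.shape[1])
    for _ in range(iters):
        r = u + xi * (u - u_prev)
        u_prev, u = u, soft_threshold(r - gamma * H.T @ (H @ u - b), gamma * eta)
    return u
```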

where \(\{\gamma _{k}\}\) is a positive real sequence. Under a condition relating the sequence \(\{u_{k}\}\) and the parameter \(\xi _{k}\), together with a cocoercivity condition on F with respect to the solution set, the weak convergence of the iterative sequence was established. To obtain strong convergence, Cholamjiak et al. [4] introduced a Halpern-type forward-backward splitting algorithm (HTFBSA) involving the inertial technique in a Hilbert space. This algorithm is generated by a fixed element \(w\in \mathcal{H}\) and

$$\begin{aligned} &r_{k}=u_{k}+\xi _{k}(u_{k}-u_{k-1}), \\ &u_{k+1}=a_{k}w+(1-a_{k}-b_{k})r_{k}+b_{k}J^{G}_{\gamma _{k}} \bigl(r_{k}- \gamma _{k}F(r_{k}) \bigr),\quad k\geq 1, \end{aligned}$$

where \(\{a_{k}\}\) and \(\{b_{k}\}\) are sequences in \([0,1]\). After that, Yambangwai et al. [27] extended the HTFBSA to the following modified viscosity inertial forward-backward splitting algorithm (MVIFBSA):

$$\begin{aligned} & r_{k}=u_{k}+\xi _{k}(u_{k}-u_{k-1}), \\ &u_{k+1}=a_{k}\varphi (r_{k})+(1-a_{k}-b_{k})r_{k}+b_{k}J^{G}_{ \gamma _{k}} \bigl(r_{k}-\gamma _{k}F(r_{k}) \bigr),\quad k\geq 1, \end{aligned}$$

where φ is a ρ-contractive mapping on \(\mathcal{H}\).

Other developments and modifications of the forward-backward splitting algorithm have been introduced to speed up convergence. A well-known modification is Tseng’s splitting algorithm [24]. This algorithm uses an adaptive line-search rule for the parameter \(\gamma _{k}\) and converges weakly in a real Hilbert space. Recently, Gibali and Thong [8] presented two further extensions of the forward-backward splitting algorithm, inspired by Mann and viscosity techniques.

Strong convergence of the above two algorithms is established under Lipschitz continuity and monotonicity of the operator F.
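To make the adaptive rule concrete, the following Python sketch implements a plain Tseng-type iteration with the self-adaptive step size \(\gamma _{k+1}=\min \{\gamma _{k}, \lambda \|r-s\|/\|F(r)-F(s)\|\}\), which mirrors the update analyzed later in Lemma 3.1. The ℓ1-regularized least-squares instance and all parameter values are illustrative assumptions, not the authors' Algorithms 1 or 2.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def tseng_splitting(H, b, eta, gamma0=1.0, lam=0.95, iters=500):
    # s = J^G_gamma(r - gamma*F(r)); next iterate r = s - gamma*(F(s) - F(r));
    # the step size shrinks adaptively, so no Lipschitz constant of F
    # needs to be known in advance.
    F = lambda x: H.T @ (H @ x - b)
    r, gamma = np.zeros(H.shape[1]), gamma0
    for _ in range(iters):
        Fr = F(r)
        s = soft_threshold(r - gamma * Fr, gamma * eta)
        Fs = F(s)
        d = np.linalg.norm(Fr - Fs)
        gamma_next = min(gamma, lam * np.linalg.norm(r - s) / d) if d > 0 else gamma
        r = s - gamma * (Fs - Fr)
        gamma = gamma_next
    return r
```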

While the above discussion focuses on a single variational inclusion problem (1.1), many real-world problems require finding a solution that fulfils several constraints. These constraints can be reformulated via a nonlinear functional model, and thus in this work we focus on the common variational inclusion problem (CVIP). The CVIP consists of finding a point \(\bar{u}\in \mathcal{H}\) such that

$$\begin{aligned} 0\in (F_{i}+G_{i})\bar{u}, \end{aligned}$$

where \(F_{i}:\mathcal{H}\rightarrow \mathcal{H}\) are single-valued mappings and \(G_{i}:\mathcal{H}\rightarrow 2^{\mathcal{H}}\) are multivalued mappings for all \(i =1,2, \dots, K\). We assume that the solution set of system (1.9) is nonempty. Recently, Yambangwai et al. [26] studied an image restoration problem in which several blur filters were considered; the mathematical model used there is the common variational inclusion problem. A parallel inertial forward-backward splitting algorithm for solving this problem was introduced and analyzed. Further results on parallel algorithms for solving the common variational inclusion problem and related issues have been reported, see [3, 9–13, 23].

Inspired by the above works, we focus on the common variational inclusion problem and present a new modified Tseng’s splitting algorithm for solving it with strong convergence in real Hilbert spaces.

The paper is organized as follows. We first recall some basic definitions and results in Sect. 2. The new algorithms and their analysis are introduced in Sect. 3. In Sect. 4 we consider as an application a signal recovery problem with several blurred filters, and compare and illustrate computational advantages of the method. Final remarks and conclusions are given in Sect. 5.


In what follows, recall that \(\mathcal{H}\) is a real Hilbert space. Let C be a nonempty, closed, and convex subset of \(\mathcal{H}\). We denote by ⇀ and → weak and strong convergence, respectively. We next collect some definitions and lemmas needed to prove our main results.

Definition 2.1

Let \(G: \mathcal{H}\rightarrow 2^{\mathcal{H}}\) be a multivalued mapping. Then G is said to be

  (i)

    monotone if for all \((x, u), (y, v)\in \operatorname{graph}(G)\) (the graph of mapping G)

    $$\begin{aligned} \langle u-v, x-y\rangle \geq 0, \end{aligned}$$
  (ii)

    maximal monotone if there is no proper monotone extension of \(\operatorname{graph}(G)\).

Lemma 2.2


Let \(\{a_{k}\}\) and \(\{c_{k}\}\) be nonnegative sequences of real numbers such that \(\sum_{k=1}^{\infty }c_{k}<\infty \), and let \(\{b_{k}\}\) be a sequence of real numbers such that \(\limsup_{k\rightarrow \infty } b_{k}\leq 0\). If there exists \(k_{0}\in \mathbb{N}\) such that, for any \(k\geq k_{0}\),

$$\begin{aligned} a_{k+1}\leq (1-\delta _{k})a_{k}+\delta _{k}b_{k}+c_{k}, \end{aligned}$$

where \(\{\delta _{k}\}\) is a sequence in \((0,1)\) such that \(\sum_{k=1}^{\infty } \delta _{k}=\infty \), then \(\lim_{k\rightarrow \infty } a_{k}=0\).
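Lemma 2.2 can be checked numerically on a toy recursion; the particular sequences \(\delta _{k}\), \(b_{k}\), \(c_{k}\) below are illustrative choices satisfying its hypotheses, not taken from the paper.

```python
def toy_recursion(n=100000):
    # delta_k = 1/(k+1): sum diverges; b_k = -1/(k+1): limsup <= 0;
    # c_k = 1/(k+1)**2: summable.  Lemma 2.2 then forces a_k -> 0.
    a = 1.0
    for k in range(1, n + 1):
        delta = 1.0 / (k + 1)
        a = (1 - delta) * a + delta * (-1.0 / (k + 1)) + 1.0 / (k + 1) ** 2
    return a
```

Running `toy_recursion()` returns a value on the order of \(10^{-5}\), consistent with \(a_{k}\rightarrow 0\).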

Lemma 2.3


Let \(\{\Xi _{k}\}\) be a sequence of real numbers such that there exists a subsequence \(\{\Xi _{k_{j}}\}_{j\geq 0}\) of \(\{\Xi _{k}\}\) satisfying \(\Xi _{k_{j}}<\Xi _{k_{j}+1}\) for all \(j\geq 0\). Define a sequence of integers \(\{\psi (k)\}_{k\geq k^{*}}\) by

$$\begin{aligned} \psi (k):=\max \{n\leq k: \Xi _{n}< \Xi _{n+1} \}. \end{aligned}$$

Then \(\{\psi (k)\}_{k\geq k^{*}}\) is a nondecreasing sequence such that \(\lim_{k\rightarrow \infty }\psi (k)=\infty \), and for all \(k\geq k^{*}\), we have that \(\Xi _{\psi (k)}\leq \Xi _{\psi (k)+1}\) and \(\Xi _{k}\leq \Xi _{\psi (k)+1}\).
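The index map ψ can be illustrated on a short hypothetical sequence; the assertions below check the two properties claimed by the lemma. The sample sequence is our own invention.

```python
def psi(k, Xi):
    # psi(k) = max{ n <= k : Xi[n] < Xi[n+1] }, as in (2.1).
    return max(n for n in range(k + 1) if Xi[n] < Xi[n + 1])

Xi = [5, 3, 4, 2, 6, 1, 7, 0]  # hypothetical non-monotone sequence
for k in range(3, 7):
    p = psi(k, Xi)
    # the two properties from Lemma 2.3:
    assert Xi[p] <= Xi[p + 1] and Xi[k] <= Xi[p + 1]
```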

Main result

In this section we present our new parallel inertial Tseng type algorithm (PITTA) for solving (1.9). For the convergence analysis of the proposed method, we assume the following assumptions for all \(i=1, 2,\dots, K\).

Assumption 1

\(\mathcal{H}\) is a real Hilbert space, \(F_{i}: \mathcal{H}\to \mathcal{H}\) is an \(\mathcal{L}_{i}\)-Lipschitz continuous and monotone mapping and \(G_{i}: \mathcal{H}\to 2^{\mathcal{H}}\) is a maximal monotone operator.

Assumption 2

\(\Phi:= \bigcap_{i=1}^{K}(F_{i} + G_{i})^{-1}(0)\) is nonempty.

Assumption 3

\(\{\xi _{k}\}\subset [0, \xi )\), \(\{b_{k}\}\subset (b^{*}, b^{\prime })\subset (0, 1-a_{k})\) for some \(\xi > 0\), \(b^{*} >0\), \(b^{\prime } >0\), and \(\{a_{k}\}\subset (0, 1)\) satisfies \(\lim_{k\rightarrow \infty }a_{k}=0 \text{ and } \sum_{k=1}^{\infty }a_{k}=\infty \).

Assumption 4

\(\varphi: \mathcal{H}\to \mathcal{H}\) is a ρ-contractive mapping.

Next the algorithm is presented.
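The formal algorithm box is not reproduced here; the following Python sketch is a plausible reconstruction of PITTA's steps, inferred from Lemmas 3.1–3.2 and the proof of Theorem 3.4 (in particular, taking \(\bar{t}_{k}\) as the \(t_{k}^{i}\) farthest from \(r_{k}\) is our reading of the estimates in the proof), specialized to the ℓ1-regularized least-squares model of Sect. 4. The contraction φ and the parameter choices follow Case 6 of the experiments; all of these are assumptions, not the authors' stated algorithm.

```python
import numpy as np

def prox_l1(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def pitta(Hs, bs, eta, lam=0.95, gamma0=0.01, iters=2000):
    # Reconstructed PITTA steps:
    #   r_k   = u_k + xi_k (u_k - u_{k-1})                  (inertial step)
    #   s_k^i = prox_{gamma_i*eta*||.||_1}(r_k - gamma_i F_i(r_k))
    #   t_k^i = s_k^i - gamma_i (F_i(s_k^i) - F_i(r_k))     (Tseng correction)
    #   tbar  = the t_k^i farthest from r_k                 (parallel choice)
    #   u_{k+1} = a_k phi(u_k) + (1 - a_k - b_k) u_k + b_k tbar
    # with gamma_{k+1}^i = min(gamma_k^i, lam*||r-s^i||/||F_i(r)-F_i(s^i)||).
    K, N = len(Hs), Hs[0].shape[1]
    F = [lambda x, H=H, b=b: H.T @ (H @ x - b) for H, b in zip(Hs, bs)]
    phi = lambda x: 0.1 * np.cos(x)          # the contraction used in Case 6
    gamma = [gamma0] * K
    u_prev = u = np.zeros(N)
    for k in range(1, iters + 1):
        a_k = 1.0 / (k + 1)                  # Case 6 choice
        b_k = 0.99 * (1 - a_k)               # Case 6 choice
        diff = np.linalg.norm(u - u_prev)
        xi_k = min(1.0 / ((k + 1) ** 1.1 * diff), 0.25) if diff > 0 else 0.25
        r = u + xi_k * (u - u_prev)
        ts = []
        for i in range(K):
            s = prox_l1(r - gamma[i] * F[i](r), gamma[i] * eta)
            Fr, Fs = F[i](r), F[i](s)
            ts.append(s - gamma[i] * (Fs - Fr))
            d = np.linalg.norm(Fr - Fs)
            if d > 0:
                gamma[i] = min(gamma[i], lam * np.linalg.norm(r - s) / d)
        tbar = max(ts, key=lambda t: np.linalg.norm(t - r))
        u_prev, u = u, a_k * phi(u) + (1 - a_k - b_k) * u + b_k * tbar
    return u
```

Because the viscosity weight \(a_{k}=\frac{1}{k+1}\) vanishes slowly, the sketch converges slowly on toy data; it is meant only to fix the structure of the iteration.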

Lemma 3.1

Assume that Assumptions 1–4 hold. Then any sequence \(\{\gamma _{k}^{i}\}\) in Algorithm PITTA is nonincreasing and converges to some \(\gamma _{i}\) such that \(\min \lbrace \gamma _{1}^{i}, \frac{\lambda _{i}}{\mathcal{L}_{i}} \rbrace \leq \gamma _{i}\) for all \(i=1,2,\dots,K\).


See [8, Lemma 5]. □

Lemma 3.2

Let \(u\in \Phi \). Then, under Assumptions 1–4, we have, for all \(i=1,2,\dots,K\),

$$\begin{aligned} \bigl\Vert t_{k}^{i}-u \bigr\Vert ^{2}\leq \Vert r_{k}-u \Vert ^{2}- \bigl[1- \bigl( \varrho _{k}^{i} \bigr)^{2} \bigr] \bigl\Vert r_{k}-s_{k}^{i} \bigr\Vert ^{2} \end{aligned}$$


$$\begin{aligned} \bigl\Vert t_{k}^{i}-s_{k}^{i} \bigr\Vert \leq \varrho _{k}^{i} \bigl\Vert r_{k}-s_{k}^{i} \bigr\Vert , \end{aligned}$$

where \(\varrho _{k}^{i}=\lambda _{i} \frac{\gamma _{k}^{i}}{\gamma _{k+1}^{i}}\).


In the same manner as [8, Lemma 6], we obtain that inequalities (3.1) and (3.2) hold. □

Lemma 3.3

Suppose that \(\lim_{k\rightarrow \infty }\|r_{k}-s_{k}^{i}\|=0\) for all \(i=1,2,\dots,K\). If there exists a weakly convergent subsequence \(\{r_{k_{j}}\}\) of \(\{r_{k}\}\), then under Assumptions 1–4 the weak limit of \(\{r_{k_{j}}\}\) belongs to Φ.


The proof is similar to the proof of [8, Lemma 7]. □

With the above results we are now ready for the main convergence theorem.

Theorem 3.4

Suppose that \(\lim_{k\rightarrow \infty }\frac{\xi _{k}}{a_{k}}\|u_{k}-u_{k-1}\|=0\). Then, under Assumptions 1–4, we have \(u_{k}\to \mu \) as \(k\to \infty \), where \(\mu = P_{\Phi }\circ \varphi (\mu )\).


First, since \(\lim_{k\to \infty } [1- (\varrho _{k}^{i} )^{2} ]=1-\lambda _{i}^{2}>0\), one can find \(m_{i}\in \mathbb{N}\) such that \(1- (\varrho _{k}^{i} )^{2}>0\) for all \(k\geq m_{i}\). Set \(k_{0} = \max_{i=1,2,\dots,K} m_{i}\). Let \(u\in \Phi \); from (3.1), we get

$$\begin{aligned} \bigl\Vert t_{k}^{i}-u \bigr\Vert \leq \Vert r_{k}-u \Vert \end{aligned}$$

for all \(k\geq k_{0}\). Next, we divide the proof into the following claims.

Claim 1

\(\{u_{k}\}\) is a bounded sequence.

Since the sequence \(\lbrace \frac{\xi _{k}}{a_{k}}\|u_{k}-u_{k-1}\| \rbrace \) converges to 0, there exists a constant \(M_{*}\geq 0\) such that, for all \(k\in \mathbb{N}\),

$$\begin{aligned} \frac{\xi _{k}}{a_{k}} \Vert u_{k}-u_{k-1} \Vert \leq M_{*}. \end{aligned}$$

From the definition of \(r_{k}\) and combining (3.3) and (3.4), we obtain, for all \(k\geq k_{0}\),

$$\begin{aligned} \bigl\Vert t_{k}^{i}-u \bigr\Vert \leq \Vert r_{k}-u \Vert &= \bigl\Vert u_{k}+\xi _{k}(u_{k}-u_{k-1})-u \bigr\Vert \\ &\leq \Vert u_{k}-u \Vert +\frac{\xi _{k}}{a_{k}} \Vert u_{k}-u_{k-1} \Vert a_{k} \\ &\leq \Vert u_{k}-u \Vert +a_{k}M_{*}. \end{aligned}$$

From the definition of \(\bar{t}_{k}\), we get, for all \(k\geq k_{0}\),

$$\begin{aligned} \Vert \bar{t}_{k}-u \Vert \leq \Vert r_{k}-u \Vert \end{aligned}$$


$$\begin{aligned} \Vert \bar{t}_{k}-u \Vert \leq \Vert u_{k}-u \Vert +a_{k}M_{*}. \end{aligned}$$

By Assumption 4 and using (3.6), the following relation is obtained for all \(k\geq k_{0}\):

$$\begin{aligned} \Vert u_{k+1}-u \Vert &= \bigl\Vert a_{k} \bigl( \varphi (u_{k})-u \bigr)+(1-a_{k}-b_{k}) (u_{k}-u)+b_{k}( \bar{t}_{k}-u) \bigr\Vert \\ &\leq a_{k} \bigl\Vert \varphi (u_{k})-u \bigr\Vert +(1-a_{k}-b_{k}) \Vert u_{k}-u \Vert +b_{k} \Vert \bar{t}_{k}-u \Vert \\ &\leq a_{k} \bigl\Vert \varphi (u_{k})-\varphi (u) \bigr\Vert +a_{k} \bigl\Vert \varphi (u)-u \bigr\Vert +(1-a_{k}) \Vert u_{k}-u \Vert +a_{k}b_{k}M_{*} \\ &\leq \bigl[1-a_{k}(1-\rho ) \bigr] \Vert u_{k}-u \Vert +a_{k} \bigl( \bigl\Vert \varphi (u)-u \bigr\Vert +M_{*} \bigr) \\ &= \bigl[1-a_{k}(1-\rho ) \bigr] \Vert u_{k}-u \Vert +a_{k}(1-\rho ) \frac{ \Vert \varphi (u)-u \Vert +M_{*}}{1-\rho } \\ &\leq \max \biggl\lbrace \Vert u_{k}-u \Vert , \frac{ \Vert \varphi (u)-u \Vert +M_{*}}{1-\rho } \biggr\rbrace . \end{aligned}$$

By induction, \(\|u_{k+1}-u\|\leq \max \lbrace \|u_{k_{0}}-u\|, \frac{\|\varphi (u)-u\|+M_{*}}{1-\rho } \rbrace \) for any \(k\geq k_{0}\). Consequently, the sequence \(\{u_{k}\}\) is bounded. In addition, \(\{\varphi (u_{k})\}\) is also bounded. Since Φ is a closed and convex set, \(P_{\Phi }\circ \varphi \) is a ρ-contractive mapping. Hence, by the Banach fixed point theorem, there is a unique \(\mu \in \Phi \) with \(\mu = P_{\Phi }\circ \varphi (\mu )\). We also get that, for any \(u\in \Phi \),

$$\begin{aligned} \bigl\langle \varphi (\mu )-\mu, u-\mu \bigr\rangle \leq 0. \end{aligned}$$

Now, for each \(k\in \mathbb{N}\), set \(\Xi _{k}:=\|u_{k}-\mu \|^{2}\).

Claim 2

There is \(M_{0}>0\) such that

$$\begin{aligned} b_{k}(1-a_{k}-b_{k}) \Vert u_{k}- \bar{t}_{k} \Vert ^{2} &\leq \Xi _{k}-\Xi _{k+1}+a_{k} \bigl( \bigl\Vert \varphi (u_{k})- \mu \bigr\Vert ^{2}+M_{0} \bigr) \end{aligned}$$

for all \(k\geq k_{0}\).

Applying (3.6), we have, for all \(k\geq k_{0}\),

$$\begin{aligned} \Vert \bar{t}_{k}-\mu \Vert ^{2}&\leq \bigl( \Vert u_{k}-\mu \Vert +a_{k}M_{*} \bigr) ^{2} \\ &=\Xi _{k}+a_{k} \bigl( 2M_{*} \Vert u_{k}-\mu \Vert +a_{k}M_{*}^{2} \bigr) \\ &\leq \Xi _{k}+a_{k}M_{0} \end{aligned}$$

for some \(M_{0}>0\). For any \(k\geq k_{0}\), it follows from the assumption on φ and (3.8) that

$$\begin{aligned} \Xi _{k+1}&= \bigl\Vert a_{k} \bigl(\varphi (u_{k})-\mu \bigr)+(1-a_{k}-b_{k}) (u_{k}-\mu )+b_{k}( \bar{t}_{k}-\mu ) \bigr\Vert ^{2} \\ &\leq a_{k} \bigl\Vert \varphi (u_{k})-\mu \bigr\Vert ^{2}+(1-a_{k}-b_{k})\Xi _{k}+b_{k} \Vert \bar{t}_{k}-\mu \Vert ^{2}-b_{k}(1-a_{k}-b_{k}) \Vert u_{k}-\bar{t}_{k} \Vert ^{2} \\ &\leq a_{k} \bigl\Vert \varphi (u_{k})-\mu \bigr\Vert ^{2}+(1-a_{k})\Xi _{k}+a_{k}b_{k}M_{0}-b_{k}(1-a_{k}-b_{k}) \Vert u_{k}-\bar{t}_{k} \Vert ^{2} \\ &\leq \Xi _{k}+a_{k} \bigl( \bigl\Vert \varphi (u_{k})-\mu \bigr\Vert ^{2}+M_{0} \bigr) -b_{k}(1-a_{k}-b_{k}) \Vert u_{k}- \bar{t}_{k} \Vert ^{2} . \end{aligned}$$

Therefore, Claim 2 is obtained.

Claim 3

There is \(\bar{M}>0\) such that

$$\begin{aligned} \Xi _{k+1}\leq {}&\bigl[1-a_{k}(1-\rho ) \bigr]\Xi _{k}+a_{k}(1-\rho ) \biggl[ \frac{3\bar{M}}{1-\rho }\cdot \frac{\xi _{k}}{a_{k}} \Vert u_{k}-u_{k-1} \Vert \biggr] \\ &{} +a_{k}(1-\rho ) \biggl[\frac{2\bar{M}}{1-\rho } \Vert u_{k}- \bar{t}_{k} \Vert +\frac{2}{1-\rho } \bigl\langle \varphi (\mu )- \mu, u_{k+1}-\mu \bigr\rangle \biggr] \end{aligned}$$

for all \(k\geq k_{0}\).

Indeed, set \(c_{k}=(1-b_{k})u_{k} + b_{k}\bar{t}_{k}\). From inequality (3.5) and the definition of \(r_{k}\), we have

$$\begin{aligned} \Vert c_{k}-\mu \Vert &\leq (1-b_{k}) \Vert u_{k}-\mu \Vert +b_{k} \Vert \bar{t}_{k}-\mu \Vert \\ &\leq (1-b_{k}) \Vert u_{k}-\mu \Vert +b_{k} \Vert r_{k}-\mu \Vert \\ &\leq \Vert u_{k}-\mu \Vert +b_{k}\xi _{k} \Vert u_{k}-u_{k-1} \Vert \end{aligned}$$


$$\begin{aligned} \Vert u_{k}-c_{k} \Vert =b_{k} \Vert u_{k}-\bar{t}_{k} \Vert \end{aligned}$$

for all \(k\geq k_{0}\). Hence, from the assumption on φ, and (3.2), (3.9), and (3.10), we obtain, for all \(k\geq k_{0}\),

$$\begin{aligned} \Xi _{k+1}= {}&\bigl\Vert (1-a_{k}) (c_{k}-\mu )+a_{k} \bigl(\varphi (u_{k})-\varphi ( \mu ) \bigr)-a_{k}(u_{k}-c_{k})-a_{k} \bigl( \mu -\varphi (\mu ) \bigr) \bigr\Vert ^{2} \\ \leq {}&\bigl\Vert (1-a_{k}) (c_{k}-\mu )+a_{k} \bigl(\varphi (u_{k})-\varphi (\mu ) \bigr) \bigr\Vert ^{2}-2a_{k} \bigl\langle u_{k}-c_{k}+ \mu -\varphi (\mu ), u_{k+1}-\mu \bigr\rangle \\ \leq{}& (1-a_{k}) \Vert c_{k}-\mu \Vert ^{2}+a_{k} \bigl\Vert \varphi (u_{k})-\varphi ( \mu ) \bigr\Vert ^{2}+2a_{k}\langle c_{k}-u_{k}, u_{k+1}-\mu \rangle \\ &{} +2a_{k} \bigl\langle \varphi (\mu )-\mu, u_{k+1}-\mu \bigr\rangle \\ \leq{}& (1-a_{k}) \bigl( \Vert u_{k}-\mu \Vert +b_{k}\xi _{k} \Vert u_{k}-u_{k-1} \Vert \bigr)^{2} +a_{k}\rho ^{2}\Xi _{k}+2a_{k} \Vert c_{k}-u_{k} \Vert \Vert u_{k+1}- \mu \Vert \\ &{} +2a_{k} \bigl\langle \varphi (\mu )-\mu, u_{k+1}-\mu \bigr\rangle \\ \leq{}& (1-a_{k})\Xi _{k}+2\xi _{k} \Vert u_{k}-\mu \Vert \Vert u_{k}-u_{k-1} \Vert + \xi _{k}^{2} \Vert u_{k}-u_{k-1} \Vert ^{2} +a_{k}\rho \Xi _{k} \\ &{} +2a_{k}b_{k} \Vert u_{k}- \bar{t}_{k} \Vert \Vert u_{k+1}-\mu \Vert +2a_{k} \bigl\langle \varphi (\mu )-\mu, u_{k+1}-\mu \bigr\rangle \\ \leq{}& \bigl[1-a_{k}(1-\rho ) \bigr]\Xi _{k}+\xi _{k} \Vert u_{k}-u_{k-1} \Vert \bigl( 2 \Vert u_{k}- \mu \Vert +\xi \Vert u_{k}-u_{k-1} \Vert \bigr) \\ &{} +2a_{k}b_{k} \Vert u_{k}- \bar{t}_{k} \Vert \Vert u_{k+1}-\mu \Vert +2a_{k} \bigl\langle \varphi (\mu )-\mu, u_{k+1}-\mu \bigr\rangle \\ \leq{}& \bigl[1-a_{k}(1-\rho ) \bigr]\Xi _{k}+3\bar{M}\xi _{k} \Vert u_{k}-u_{k-1} \Vert +2 \bar{M}a_{k}b_{k} \Vert u_{k}- \bar{t}_{k} \Vert \\ &{}+2a_{k} \bigl\langle \varphi (\mu )- \mu, u_{k+1}-\mu \bigr\rangle \\ \leq{}& \bigl[1-a_{k}(1-\rho ) \bigr]\Xi _{k}+a_{k}(1- \rho ) \biggl[ \frac{3\bar{M}}{1-\rho }\cdot \frac{\xi _{k}}{a_{k}} \Vert u_{k}-u_{k-1} \Vert \biggr] \\ &{} +a_{k}(1-\rho ) \biggl[\frac{2\bar{M}}{1-\rho } \Vert u_{k}- \bar{t}_{k} \Vert +\frac{2}{1-\rho } \bigl\langle \varphi (\mu )- \mu, u_{k+1}-\mu \bigr\rangle \biggr] \end{aligned}$$

for \(\bar{M}:= \sup_{k\in \mathbb{N}} \lbrace \|u_{k}- \mu \|, \xi \|u_{k}-u_{k-1}\| \rbrace > 0\). Recall that our task is to show that \(u_{k}\to \mu \), which is equivalent to showing that \(\Xi _{k}\to 0\) as \(k\to \infty \).

Claim 4

\(\Xi _{k}\to 0\) as \(k\to \infty \).

The proof is divided into the following two cases.

Case a. We can find \(N\in \mathbb{N}\) such that the inequality \(\Xi _{k+1}\leq \Xi _{k}\) holds for all \(k\geq N\). Since each term \(\Xi _{k}\) is nonnegative, the sequence \(\{\Xi _{k}\}\) is convergent. Since \(\lim_{k\rightarrow \infty } a_{k}=0\) and \(\{b_{k}\}\subset (b^{*}, b^{\prime })\), Claim 2 yields

$$\begin{aligned} \lim_{k\rightarrow \infty } \Vert u_{k}- \bar{t}_{k} \Vert =0. \end{aligned}$$

Moreover, we immediately get

$$\begin{aligned} \lim_{k\rightarrow \infty } \Vert u_{k}-r_{k} \Vert = \lim_{k\rightarrow \infty }\frac{\xi _{k}}{a_{k}} \Vert u_{k}-u_{k-1} \Vert a_{k}=0. \end{aligned}$$

In addition, from the definition of \(\bar{t}_{k}\) and by using the triangle inequality, the following inequalities are obtained:

$$\begin{aligned} \bigl\Vert t_{k}^{i}-r_{k} \bigr\Vert &\leq \Vert \bar{t}_{k}-r_{k} \Vert \leq \Vert \bar{t}_{k}-u_{k} \Vert + \Vert u_{k}-r_{k} \Vert \end{aligned}$$


$$\begin{aligned} \bigl\Vert r_{k}-s_{k}^{i} \bigr\Vert &\leq \bigl\Vert r_{k}-t_{k}^{i} \bigr\Vert + \bigl\Vert t_{k}^{i}-s_{k}^{i} \bigr\Vert \end{aligned}$$

for all \(i=1,2,\dots,K\). It follows from inequality (3.2) that

$$\begin{aligned} \bigl(1-\varrho _{k}^{i} \bigr) \bigl\Vert r_{k}-s_{k}^{i} \bigr\Vert \leq \Vert \bar{t}_{k}-u_{k} \Vert + \Vert u_{k}-r_{k} \Vert \end{aligned}$$

for all \(i=1,2,\dots,K\). Since \(\lim_{k\to \infty } [1- (\varrho _{k}^{i} )^{2} ]=1-\lambda _{i}^{2}>0\), and by (3.11) and (3.12), we obtain

$$\begin{aligned} \lim_{k\rightarrow \infty } \bigl\Vert r_{k}-s_{k}^{i} \bigr\Vert =0 \end{aligned}$$

for all \(i=1,2,\dots,K\). Note that, for each \(k\in \mathbb{N}\),

$$\begin{aligned} \Vert u_{k+1}-u_{k} \Vert &\leq \Vert u_{k+1}-\bar{t}_{k} \Vert + \Vert \bar{t}_{k}-u_{k} \Vert \\ &\leq a_{k} \bigl\Vert \varphi (u_{k})-u_{k} \bigr\Vert +(2-b_{k}) \Vert u_{k}-\bar{t}_{k} \Vert . \end{aligned}$$

Consequently, since \(\lim_{k\rightarrow \infty }a_{k}=0\) and by (3.11) and (3.14), we have \(\lim_{k\rightarrow \infty }\|u_{k+1}-u_{k}\|=0\). Next, since \(\{u_{k}\}\) is bounded, there exist \(w\in \mathcal{H}\) and a subsequence \(\{u_{k_{j}}\}\) of \(\{u_{k}\}\), chosen without loss of generality to attain the upper limit below, such that \(u_{k_{j}}\rightharpoonup w\) as \(j\to \infty \). By (3.12), we get \(r_{k_{j}}\rightharpoonup w\) as \(j\to \infty \). Then Lemma 3.3 together with (3.13) implies that \(w\in \Phi \). From (3.7), it is straightforward to show that

$$\begin{aligned} \limsup_{k\rightarrow \infty } \bigl\langle \varphi (\mu )-\mu, u_{k}-\mu \bigr\rangle =\lim_{j\rightarrow \infty } \bigl\langle \varphi (\mu )-\mu, u_{k_{j}}- \mu \bigr\rangle = \bigl\langle \varphi (\mu )-\mu, w-\mu \bigr\rangle \leq 0. \end{aligned}$$

Since \(\lim_{k\rightarrow \infty }\|u_{k+1}-u_{k}\|=0\), the following result is obtained:

$$\begin{aligned} \limsup_{k\rightarrow \infty } \bigl\langle \varphi (\mu )-\mu, u_{k+1}- \mu \bigr\rangle &\leq \limsup_{k\rightarrow \infty } \bigl\langle \varphi (\mu )- \mu, u_{k+1}-u_{k} \bigr\rangle + \limsup_{k\rightarrow \infty } \bigl\langle \varphi (\mu )-\mu, u_{k}- \mu \bigr\rangle \\ & \leq 0. \end{aligned}$$

Applying Lemma 2.2 to the inequality from Claim 3, we can conclude that \(\lim_{k\rightarrow \infty }\Xi _{k}= 0\).

Case b. We can find \(k_{n}\in \mathbb{N}\) such that \(k_{n}\geq n\) and \(\Xi _{k_{n}}<\Xi _{k_{n}+1}\) for all \(n\in \mathbb{N}\). According to Lemma 2.3, the inequality \(\Xi _{\psi (k)}\leq \Xi _{\psi (k)+1}\) holds for all \(k\geq k^{*}\), where \(\psi: \mathbb{N}\rightarrow \mathbb{N}\) is defined by (2.1) and \(k^{*}\in \mathbb{N}\). By Claim 2, this implies, for all \(k\geq \max \{k_{0}, k^{*}\}\), that

$$\begin{aligned} &b_{\psi (k)}(1-a_{\psi (k)}-b_{\psi (k)}) \Vert u_{\psi (k)}- \bar{t}_{ \psi (k)} \Vert ^{2} \\ &\quad\leq \Xi _{\psi (k)}-\Xi _{\psi (k)+1}+a_{\psi (k)} \bigl( \bigl\Vert \varphi (u_{\psi (k)})- \mu \bigr\Vert ^{2}+M_{0} \bigr). \end{aligned}$$

Similar to Case a, since \(a_{k}\to 0\) as \(k\to \infty \), we obtain

$$\begin{aligned} \lim_{k\rightarrow \infty } \Vert u_{\psi (k)}-\bar{t}_{\psi (k)} \Vert =0. \end{aligned}$$

Furthermore, an argument similar to the one used in Case a shows that

$$\begin{aligned} \limsup_{k\rightarrow \infty } \bigl\langle \varphi (\mu )-\mu, u_{ \psi (k)+1}-\mu \bigr\rangle \leq 0. \end{aligned}$$

Finally, from the inequality \(\Xi _{\psi (k)}\leq \Xi _{\psi (k)+1}\) and by Claim 3, for all \(k\geq \max \{k_{0}, k^{*}\}\), we obtain

$$\begin{aligned} \Xi _{\psi (k)+1} \leq{}& \bigl[1-a_{\psi (k)}(1-\rho ) \bigr]\Xi _{\psi (k)+1}+a_{ \psi (k)}(1-\rho ) \biggl[ \frac{3\bar{M}}{1-\rho }\cdot \frac{\xi _{\psi (k)}}{a_{\psi (k)}} \Vert u_{\psi (k)}-u_{\psi (k)-1} \Vert \biggr] \\ &{} +a_{\psi (k)}(1-\rho ) \biggl[\frac{2\bar{M}}{1-\rho } \Vert u_{ \psi (k)}- \bar{t}_{\psi (k)} \Vert +\frac{2}{1-\rho } \bigl\langle \varphi (\mu )- \mu, u_{\psi (k)+1}-\mu \bigr\rangle \biggr]. \end{aligned}$$

Some simple calculations yield

$$\begin{aligned} \Xi _{\psi (k)+1} \leq{}& \frac{3\bar{M}}{1-\rho }\cdot \frac{\xi _{\psi (k)}}{a_{\psi (k)}} \Vert u_{\psi (k)}-u_{\psi (k)-1} \Vert + \frac{2\bar{M}}{1-\rho } \Vert u_{\psi (k)}-\bar{t}_{\psi (k)} \Vert \\ &{} +\frac{2}{1-\rho } \bigl\langle \varphi (\mu )-\mu, u_{\psi (k)+1}- \mu \bigr\rangle . \end{aligned}$$

From this it follows that \(\limsup_{k\rightarrow \infty }\Xi _{\psi (k)+1}\leq 0\). Thus, \(\lim_{k\rightarrow \infty }\Xi _{\psi (k)+1}=0\). In addition, by Lemma 2.3,

$$\begin{aligned} \limsup_{k\to \infty }\Xi _{k}\leq \lim_{k\to \infty } \Xi _{\psi (k)+1}=0. \end{aligned}$$

Hence \(\lim_{k\to \infty }\Xi _{k}=0\), and we can conclude that \(u_{k}\) converges strongly to μ. □

Numerical illustrations

In this section we consider a signal recovery problem in compressed sensing that involves several blurring filters. The classical problem involving a single filter is phrased as follows:

$$\begin{aligned} b = Hx+\varepsilon, \end{aligned}$$

where \(x\in \mathbb{R}^{N}\) is the original signal, \(b\in \mathbb{R}^{M}\) is the observed signal with noise ε, and \(H\in \mathbb{R}^{M\times N}\) (\(M < N\)) is a filter matrix. A standard approach to recovering x from system (4.1) is to solve the following regularized least squares problem:

$$\begin{aligned} \min_{x\in \mathbb{R}^{N}}\frac{1}{2} \Vert Hx-b \Vert _{2}^{2}+\eta \Vert x \Vert _{1}, \end{aligned}$$

where \(\eta > 0\) is a regularization parameter. Next, let \(g(x) = \frac{1}{2}\|Hx-b\|_{2}^{2}\) and \(h(x) = \eta \|x\|_{1}\); then \(\nabla g(x) = H^{t}(Hx-b)\) is monotone and \(\|H\|_{2}^{2}\)-Lipschitz continuous. Moreover, \(\partial h(x)\), the subdifferential of h at x, is maximal monotone, see [22]. In addition, from Proposition 3.1(iii) of [5], x is a solution to problem (4.2) if and only if \(0\in \nabla g(x)+\partial h(x)\), which in turn holds if and only if \(x=\operatorname{prox}_{\eta h}(I-\eta \nabla g)(x)\) for any \(\eta >0\), where \(\operatorname{prox}_{\eta h}(x) = \operatorname{arg}\min_{u\in \mathbb{R}^{N}} \lbrace h(u)+\frac{1}{2\eta }\|x-u\|^{2} \rbrace \).
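For \(h(x)=\eta \|x\|_{1}\), the proximal operator has the well-known closed form of componentwise soft thresholding, which is what the algorithms compute at each backward step. A short Python illustration (the test vector is arbitrary):

```python
import numpy as np

def prox_l1(x, eta):
    # prox_{eta*||.||_1}(x)_j = sign(x_j) * max(|x_j| - eta, 0).
    return np.sign(x) * np.maximum(np.abs(x) - eta, 0.0)

out = prox_l1(np.array([2.0, -0.3, 0.5, -1.5]), 1.0)
# each component is shrunk toward 0 by eta = 1 and clipped at 0
```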

Here we consider the following model for the signal recovery problem involving several filters:

$$\begin{aligned} \min_{x\in \mathbb{R}^{N}}\frac{1}{2} \Vert H_{1}x-b_{1} \Vert _{2}^{2}&+\eta _{1} \Vert x \Vert _{1}, \\ \min_{x\in \mathbb{R}^{N}}\frac{1}{2} \Vert H_{2}x-b_{2} \Vert _{2}^{2}&+\eta _{2} \Vert x \Vert _{1}, \\ \min_{x\in \mathbb{R}^{N}}\frac{1}{2} \Vert H_{3}x-b_{3} \Vert _{2}^{2}&+\eta _{3} \Vert x \Vert _{1}, \\ &\vdots \\ \min_{x\in \mathbb{R}^{N}}\frac{1}{2} \Vert H_{K}x-b_{K} \Vert _{2}^{2}&+\eta _{K} \Vert x \Vert _{1} , \end{aligned}$$

where, for all \(i = 1, 2, 3, \ldots, K\), \(H_{i}\) is a filter matrix, \(b_{i}\) is an observed signal, and \(\eta _{i} > 0\). Problem (4.3) can be seen as problem (1.9) through the following settings: \(\mathcal{H}=\mathbb{R}^{N}\), \(F_{i}(\cdot ) = \nabla (\frac{1}{2}\|H_{i}(\cdot )-b_{i}\|_{2}^{2} )\), and \(G_{i}(\cdot ) = \partial ( \eta _{i}\|\cdot \|_{1} )\) for all \(i = 1, 2, 3, \ldots, K\).

For the experiments in this section, we choose the signal size \(N = 1024\) and \(M = 512\), and the original signal x is generated from the uniform distribution on \([-2, 2]\) with m nonzero elements. We use the mean-squared error, \(\operatorname{MSE}_{k} = \frac{1}{N}\|u_{k}-x\|_{2}^{2}\), to measure the restoration accuracy, stopping the iterations when \(\operatorname{MSE}_{k}<5\times 10^{-5}\), and suppose

$$\begin{aligned} \xi _{k}=\textstyle\begin{cases} \min \lbrace \bar{\xi }_{k}, \frac{1}{4} \rbrace &\text{if } u_{k}\neq u_{k-1}, \\ \frac{1}{4} & \text{otherwise} \end{cases}\displaystyle \end{aligned}$$

for all \(k\in \mathbb{N}\). In the first part, we solve problem (4.2) by considering different choices of the components of PITTA (Algorithm 3) with \(K=1\): \(\lambda _{1}, \gamma _{1}^{1}, \varphi (\cdot ), \bar{\xi }_{k}, b_{k}\), and \(a_{k}\). Let H be the Gaussian matrix generated by the MATLAB routine \(\operatorname{randn}(M, N)\), let the observation b be generated by white Gaussian noise with signal-to-noise ratio SNR = 40, and let \(\eta =1\). The initial points \(u_{0}, u_{1}\) are generated by the command \(\operatorname{randn}(N, 1)\).
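The inertial parameter rule above, with the Case 6 choice of \(\bar{\xi }_{k}\), can be sketched in Python as follows; the function names are ours.

```python
import numpy as np

def xi_bar_case6(k, u_k, u_km1):
    # Case 6 choice: bar{xi}_k = 1 / ((k+1)^{1.1} * ||u_k - u_{k-1}||).
    return 1.0 / ((k + 1) ** 1.1 * np.linalg.norm(u_k - u_km1))

def inertial_parameter(k, u_k, u_km1):
    # xi_k = min(bar{xi}_k, 1/4) if u_k != u_{k-1}, and 1/4 otherwise.
    if np.array_equal(u_k, u_km1):
        return 0.25
    return min(xi_bar_case6(k, u_k, u_km1), 0.25)
```

Note that with \(a_{k}=\frac{1}{k+1}\) this choice gives \(\frac{\xi _{k}}{a_{k}}\|u_{k}-u_{k-1}\| \leq (k+1)^{-0.1}\rightarrow 0\), so the hypothesis of Theorem 3.4 is satisfied automatically.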

Case 1. We compare the performance of the algorithm with different parameters \(\lambda _{1}\) by setting \(\gamma _{1}^{1} = 7.55, \varphi (\cdot ) = \frac{1}{2}(\cdot ), \bar{\xi }_{k} = \frac{1}{\|u_{k}-u_{k-1}\|^{4}+(k+1)^{4}}, a_{k} = \frac{1}{10(k+1)}\), and \(b_{k} =\frac{1}{2}(1-a_{k})\). Then the results are presented in Table 1.

Table 1 Numerical results of \(\lambda _{1}\)

Case 2. We compare the performance of the algorithm with different parameters \(\gamma _{1}^{1}\) by setting \(\lambda _{1} = 0.95\), while \(\varphi (\cdot ), \bar{\xi }_{k}, a_{k}\), and \(b_{k}\) are the same as in Case 1. Then the results are presented in Table 2.

Table 2 Numerical results of \(\gamma _{1}^{1}\)

Case 3. We compare the performance of the algorithm with different mappings \(\varphi (\cdot )\) by setting \(\lambda _{1} = 0.95\) and \(\gamma _{1}^{1} = 0.01\), while \(\bar{\xi }_{k}, a_{k}\), and \(b_{k}\) are the same as in Case 1. Then the results are presented in Table 3.

Table 3 Numerical results of \(\varphi (\cdot )\)

Case 4. We compare the performance of the algorithm with different parameters \(\bar{\xi }_{k}\) by setting \(\lambda _{1} = 0.95\), \(\gamma _{1}^{1} = 0.01\), and \(\varphi (\cdot ) = \frac{1}{10}\cos (\cdot )\), while \(a_{k}\) and \(b_{k}\) are the same as in Case 1. Then the results are presented in Table 4.

Table 4 Numerical results of \(\bar{\xi }_{k}\)

Case 5. We compare the performance of the algorithm with different parameters \(b_{k}\) by setting \(\lambda _{1} = 0.95\), \(\gamma _{1}^{1} = 0.01\), \(\varphi (\cdot ) = \frac{1}{10}\cos (\cdot )\), and \(\bar{\xi }_{k} = \frac{1}{(k+1)^{1.1}\|u_{k}-u_{k-1}\|}\), while \(a_{k}\) is the same as in Case 1. Then the results are presented in Table 5.

Table 5 Numerical results of \(b_{k}\)

Case 6. We compare the performance of the algorithm with different parameters \(a_{k}\) by setting \(\lambda _{1} = 0.95\), \(\gamma _{1}^{1} = 0.01\), \(\varphi (\cdot ) = \frac{1}{10}\cos (\cdot )\), \(\bar{\xi }_{k} = \frac{1}{(k+1)^{1.1}\|u_{k}-u_{k-1}\|}\), and \(b_{k} = \frac{99}{100}(1-a_{k})\). Then the results are presented in Table 6.

Table 6 Numerical results of \(a_{k}\)

We notice that, across all six cases above, selecting \(a_{k} = \frac{1}{k+1}\) for all \(k\in \mathbb{N}\) and setting \(b_{k}, \bar{\xi }_{k}, \lambda _{1}, \gamma _{1}^{1}\), and \(\varphi (\cdot )\) as in Case 6 yields the best results.

In the next experiment, we wish to compare the performance of MTTA (Algorithm 1), VTTA (Algorithm 2), HTFBSA, MVIFBSA, and PITTA for solving problem (4.2) with one filter, that is, \(K=1\). We suppose that \(H, b, \eta, u_{0}\), and \(u_{1}\) are the same as in the first part and select \(a_{k} = \frac{1}{k+1}\) for all \(k\in \mathbb{N}\); \(b_{k}, \bar{\xi }_{k}, \lambda _{1}, \gamma _{1}^{1}\), and \(\varphi (\cdot )\) are the same as in Case 6. For MTTA and VTTA, let \(\lambda _{1} = 0.95\) and \(\gamma _{1}^{1} =0.01\). For HTFBSA, w is defined by using \(\operatorname{randn}(N, 1)\). Further, for any \(k\in \mathbb{N}\), we select \(\gamma _{k} = \frac{1}{2\|H\|_{2}^{2}}\) for HTFBSA and MVIFBSA. The results are presented in Table 7 and Figs. 1 and 2.

Algorithm 1

Mann Tseng type algorithm (MTTA)

Algorithm 2

Viscosity Tseng type algorithm (VTTA)

Algorithm 3

Parallel inertial Tseng type algorithm (PITTA)

Figure 1

From top to bottom: the original signal, the measurement, and the reconstructed signals by the five algorithms in Table 7 for \(m=100\)

Figure 2

The mean-squared error versus the number of iterations for \(m=100\)

Table 7 Numerical comparison of five algorithms

Based on the above results, we can see that our proposed algorithm requires less time and fewer iterations than the other four algorithms.

The final experiment considers PITTA for solving (4.3) with multiple inputs \(H_{i}\), and compares it with the parallel monotone hybrid algorithm (PMHA) of Suantai et al. [23]. The Gaussian matrices are generated by the MATLAB routine \(\operatorname{randn}(M, N)\). The observations \(b_{i}\) are generated by white Gaussian noise with signal-to-noise ratio SNR = 40, and \(\eta _{i}=1\), \(\lambda _{i}=0.95\), and \(\gamma _{1}^{i} = 0.01\) for all \(i = 1, 2, 3\). We select \(a_{k} = \frac{1}{k+1}\), and \(u_{0}, u_{1}\), \(\varphi (\cdot )\), \(b_{k}\), and \(\bar{\xi }_{k}\) are the same as in Case 6 for all \(k\in \mathbb{N}\). Further, for any \(k\in \mathbb{N}\) and all \(i = 1, 2, 3\), we select \(\alpha _{k}^{i} = 0.75\) and \(S_{i}(\cdot ) = \operatorname{prox}_{\frac{\|\cdot \|_{1}}{\|H_{i}\|_{2}^{2}}}(I- \frac{1}{\|H_{i}\|_{2}^{2}}F_{i})(\cdot )\) for PMHA. The results are presented in Tables 8, 9 and Figs. 3–8.

Figure 3: From top to bottom: the original signal and the measurement by using \(H_{1}\), \(H_{2}\), and \(H_{3}\), respectively, with \(m=100\)

Figure 4: From top to bottom: the reconstructed signals by using each input for \(m=100\)

Figure 5: The mean-squared error versus the number of iterations for \(m=100\)

Figure 6: From top to bottom: the original signal and the measurement by using \(H_{1}\), \(H_{2}\), and \(H_{3}\), respectively, with \(m=128\)

Figure 7: From top to bottom: the reconstructed signals by the two algorithms in Table 9 for \(m=128\)

Figure 8: The mean-squared error versus the number of iterations for \(m=128\)

Table 8: Numerical results of PITTA

Table 9: Numerical comparison of two algorithms

From the above one can observe that incorporating all three Gaussian matrices (\(H_{1}\), \(H_{2}\), and \(H_{3}\)) into PITTA is more effective, with respect to both time and number of iterations, than involving only one or two of them. PITTA also requires fewer iterations than PMHA.


In this work we study the common variational inclusion problem (CVIP) and propose an inertial Tseng’s splitting algorithm for solving it. A parallel iterative method is presented, and under standard assumptions we establish its strong convergence in real Hilbert spaces. An intensive numerical investigation, with comparisons to several related schemes, is presented for a signal recovery problem involving several filters. Our work extends and generalizes some related works in the literature and also demonstrates great practical potential.

Availability of data and materials

Contact the authors for data requests.


  1. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, vol. 408. Springer, New York (2011)

  2. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)

  3. Cholamjiak, P., Hieu, D.V., Cho, Y.J.: Relaxed forward-backward splitting methods for solving variational inclusions and applications. J. Sci. Comput. (2021)

  4. Cholamjiak, W., Cholamjiak, P., Suantai, S.: An inertial forward-backward splitting method for solving inclusion problems in Hilbert spaces. J. Fixed Point Theory Appl. 20, 42 (2018)

  5. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4(4), 1168–1200 (2005)

  6. Daubechies, I., Defrise, M., De Mol, C.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57, 1413–1457 (2004)

  7. Duchi, J., Singer, Y.: Efficient online and batch learning using forward backward splitting. J. Mach. Learn. Res. 10, 2899–2934 (2009)

  8. Gibali, A., Thong, D.V.: Tseng type methods for solving inclusion problems and its applications. Calcolo 55(4), 49 (2018)

  9. Hieu, D.V., Anh, P.K., Muu, L.D.: Modified forward-backward splitting method for variational inclusions. 4OR 19(1), 127–151 (2021)

  10. Hieu, D.V., Anh, P.K., Muu, L.D., Strodiot, J.J.: Iterative regularization methods with new stepsize rules for solving variational inclusions. J. Appl. Math. Comput. (2021)

  11. Hieu, D.V., Cho, Y.J., Xiao, Y., Kumam, P.: Modified extragradient method for pseudomonotone variational inequalities in infinite dimensional Hilbert spaces. Vietnam J. Math. (2020)

  12. Hieu, D.V., Muu, L.D., Anh, P.K.: Parallel hybrid extragradient methods for pseudomonotone equilibrium problems and nonexpansive mappings. Numer. Algorithms 73(1), 197–217 (2016)

  13. Hieu, D.V., Reich, S., Anh, P.K., Ha, N.H.: A new proximal-like algorithm for solving split variational inclusion problems. Numer. Algorithms (2021)

  14. Lions, P.-L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16(6), 964–979 (1979)

  15. Lorenz, D., Pock, T.: An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51, 311–325 (2015)

  16. Maingé, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16(7–8), 899–912 (2008)

  17. Moudafi, A., Oliny, M.: Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 155, 447–454 (2003)

  18. Nesterov, Y.: A method for solving the convex programming problem with convergence rate \(O(1/k^{2})\). Dokl. Akad. Nauk SSSR 269, 543–547 (1983)

  19. O’Donoghue, B., Candès, E.J.: Adaptive restart for accelerated gradient schemes. Found. Comput. Math. 15(3), 715–732 (2015)

  20. Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4, 1–17 (1964)

  21. Raguet, H., Fadili, J., Peyré, G.: A generalized forward-backward splitting. SIAM J. Imaging Sci. 6, 1199–1226 (2013)

  22. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)

  23. Suantai, S., Kankam, K., Cholamjiak, P., Cholamjiak, W.: A parallel monotone hybrid algorithm for a finite family of G-nonexpansive mappings in Hilbert spaces endowed with a graph applicable in signal recovery. Comput. Appl. Math. (2021)

  24. Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38, 431–446 (2000)

  25. Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66(1), 240–256 (2002)

  26. Yambangwai, D., Khan, S.A., Dutta, H., Cholamjiak, W.: Image restoration by advanced parallel inertial forward-backward splitting methods. Soft Comput. (2021)

  27. Yambangwai, D., Suantai, S., Dutta, H., Cholamjiak, W.: Viscosity modification with inertial forward-backward splitting methods for solving inclusion problems. In: Zeki Sarıkaya, M., Dutta, H., Ocak Akdemir, A., Srivastava, H. (eds.) Mathematical Methods and Modelling in Applied Sciences. ICMRS 2019. Lecture Notes in Networks and Systems, vol. 123. Springer, Cham (2020)



This research was partially supported by Chiang Mai University. W. Cholamjiak would like to thank the University of Phayao, Thailand. T. Mouktonglang would like to thank the Faculty of Science, Chiang Mai University.


Chiang Mai University, Thailand.

Author information




The authors equally conceived of the study, participated in its design and coordination, drafted the manuscript, participated in the sequence alignment, and read and approved the final manuscript.

Corresponding author

Correspondence to Thanasak Mouktonglang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit


Cite this article

Suparatulatorn, R., Cholamjiak, W., Gibali, A. et al. A parallel Tseng’s splitting method for solving common variational inclusion applied to signal recovery problems. Adv Differ Equ 2021, 492 (2021).



Mathematics Subject Classification

  • 65Y05
  • 68W10
  • 47H05
  • 47J25
  • 49M37


Keywords

  • Common variational inclusion problem
  • Inertial proximal algorithm
  • Tseng’s splitting algorithm
  • Compressed sensing
  • Communications technology
  • Information and communication technology