

Solving common nonmonotone equilibrium problems using an inertial parallel hybrid algorithm with Armijo line search with applications to image recovery

Abstract

In this work, we modify the inertial hybrid algorithm with Armijo line search using a parallel method to approximate a common solution of nonmonotone equilibrium problems in Hilbert spaces. A weak convergence theorem is proved under some continuity and convexity assumptions on the bifunction and the nonemptiness of the common solution set of Minty equilibrium problems. Furthermore, we demonstrate the quality of our inertial parallel hybrid algorithm on image restoration problems, as well as its superior efficiency compared with previously considered parallel algorithms.

1 Introduction

Let H be a real Hilbert space with the inner product \(\langle \cdot,\cdot \rangle \) and induced norm \(\|\cdot \|\), and let F be an open convex subset of H. In 1992, Muu and Oettli [25] introduced the equilibrium problem associated with a bifunction ψ, which is to find \(s\in C\) such that

$$\begin{aligned} \psi (s,t) \geq 0 \quad\text{for all } t \in C, \end{aligned}$$
(1.1)

where C is a nonempty closed and convex subset of F and \(\psi: F\times F\rightarrow \mathbb {R}\) is a bifunction with \(\psi (s, s) = 0\) for all \(s \in C\). The set of solutions of problem (1.1) is denoted by \(EP(\psi, C)\). The Minty equilibrium problem (MEP) was introduced by Castellani and Giuli [9] in 2013. This problem is associated with the equilibrium problem (1.1) and consists in finding \(s \in C\) such that

$$\begin{aligned} \psi (t,s) \leq 0 \quad\text{for all } t \in C. \end{aligned}$$
(1.2)

The solution set of the Minty equilibrium problem is denoted by \(S_{M}\).

The equilibrium problem has been widely applied to real-world problems and unifies, as particular cases, many problems in applied mathematics such as variational inequalities, Nash equilibria, fixed point problems, optimization problems, saddle-point problems, and complementarity problems; see, for instance, [1–3, 5, 15, 16]. When solving problems arising in engineering, economics, management science, and other areas, one often needs to formulate them in equilibrium form; see, for example, [4, 6–8, 10, 12, 13, 18, 21, 22, 24, 25, 28].
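For instance, the classical variational inequality is obtained from (1.1) by choosing \(\psi (s,t)=\langle F(s),t-s\rangle \) for a mapping \(F: C\rightarrow H\): in this case, problem (1.1) reduces to finding \(s\in C\) such that

$$\begin{aligned} \bigl\langle F(s), t-s \bigr\rangle \geq 0 \quad\text{for all } t \in C. \end{aligned}$$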

In 2016, Dinh and Kim [11] introduced a projection algorithm with line search for solving equilibrium problems whose bifunction is not required to be pseudomonotone. A weak convergence theorem was proved under continuity and convexity assumptions on the bifunction ψ, without any monotonicity assumption, provided that the solution set of the Minty equilibrium problem (1.2) is nonempty.

In 1964, Polyak [27] introduced the inertial extrapolation technique to speed up the convergence of iterative algorithms. Since then, many mathematicians have improved it in various ways; see [19, 23, 26]. In 2018, Iyiola et al. [14] combined inertial-type extrapolation with the algorithm of Dinh and Kim [11]; they obtained convergence theorems and presented the following inertial-type iterative method with Armijo line search step size, which is faster and more efficient than the algorithm of Dinh and Kim [11].

Algorithm 1.1

Step 1: Choose a sequence \({\{\epsilon _{n}\}}^{\infty }_{n=1} \in l_{1}\) and take \(\sigma \in (0,1),\rho > 0\). Select arbitrary points \(s_{0} \in C_{0}, s_{1} \in C_{1}; C_{0} =C_{1}=C\), and \(\theta \in [0,1)\). Set \(n:=1\).

Step 2: Given the iterates \(s_{n-1}\) and \(s_{n}, n\geq 1\), choose \(\theta _{n}\) such that \(0 \leq \theta _{n}\leq \bar{\theta }_{n}\), where

$$\begin{aligned} \bar{\theta }_{n}= \textstyle\begin{cases} \min \{\theta,\frac{\epsilon _{n}}{ \Vert s_{n} - s_{n-1} \Vert ^{2}}\} &s_{n} \neq s_{n-1}, \\ \theta &\textit{otherwise}. \end{cases}\displaystyle \end{aligned}$$

Step 3: Compute

$$\begin{aligned} t_{n}=s_{n} + \theta _{n}(s_{n}-s_{n-1}). \end{aligned}$$

Step 4: Compute

$$\begin{aligned} u_{n} = \arg \min \biggl\{ \psi (t_{n},v)+\frac{\rho }{2} \Vert v-t_{n} \Vert ^{2}: v \in C\biggr\} , \end{aligned}$$

if \(u_{n}=t_{n}\), then stop. Otherwise go to Step 5.

Step 5: Find \(m_{n}\) as the smallest nonnegative integer m satisfying

$$\begin{aligned} \textstyle\begin{cases} v_{n,m}=(1-\sigma ^{m})t_{n}+\sigma ^{m}u_{n}, \\ w_{n,m}\in \partial _{2}\psi (v_{n,m},v_{n,m}), \\ \langle w_{n,m},t_{n}-u_{n} \rangle \geq \frac{\rho }{2} \Vert u_{n}-t_{n} \Vert ^{2}. \end{cases}\displaystyle \end{aligned}$$
(1.3)

Set \(\sigma _{n}:=\sigma ^{m_{n}}, v_{n}=v_{n,m_{n}}, w_{n}=w_{n,m_{n}}\).

Step 6: Compute

$$\begin{aligned} {s_{n+1}} = P_{C_{n+1}}(t_{n}), \end{aligned}$$
(1.4)

where \(C_{n+1} = C_{n} \cap H_{n}, H_{n} = \{{x \in H: h_{n}(x) \leq 0} \}\), and

$$\begin{aligned} h_{n}(x):= \langle w_{n}, x - u_{n} \rangle. \end{aligned}$$
(1.5)

Step 7: Set \(n \leftarrow n+1\) and go to Step 2.
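To illustrate Steps 4–5, the following is a minimal Python sketch of the Armijo line search. It assumes that \(\psi (s,\cdot )\) is differentiable, so that \(\partial _{2}\psi (v,v)\) reduces to a single gradient supplied by the caller; the function name grad2_psi, the cap max_steps, and the default parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def armijo_step(t_n, u_n, grad2_psi, rho=0.5, sigma=0.5, max_steps=50):
    """Armijo line search of Step 5: find the smallest m such that
    <w_{n,m}, t_n - u_n> >= (rho/2) * ||u_n - t_n||^2, where
    v_{n,m} = (1 - sigma^m) t_n + sigma^m u_n and w_{n,m} = grad2_psi(v_{n,m}).

    grad2_psi(v) is assumed to return an element of the subdifferential
    of psi(v, .) at v (here: its gradient)."""
    gap = 0.5 * rho * np.dot(u_n - t_n, u_n - t_n)
    for m in range(max_steps):
        v = (1.0 - sigma**m) * t_n + sigma**m * u_n
        w = grad2_psi(v)
        if np.dot(w, t_n - u_n) >= gap:
            return sigma**m, v, w   # sigma_n, v_n, w_n of Step 5
    raise RuntimeError("Armijo line search did not terminate within max_steps")
```

For the bifunction used in Sect. 4, grad2_psi(v) would simply return \(A^{T}(Av-b)\).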

In this work, we focus on the common equilibrium problem (CEP), which is to find \(s \in C\) such that

$$\begin{aligned} \psi _{i}(s, v) \geq 0 \quad\text{for all } v \in C \text{ and all } i = 1,2,\dots,N, \end{aligned}$$
(1.6)

where \(\psi _{i}: F \times F \rightarrow \mathbb {R}\) is a bifunction with \(\psi _{i}(s, s) = 0\) for all \(s \in C\) and all \(i = 1,2,\dots,N\). The associated common Minty equilibrium problem is to find \(s \in C\) such that \(\psi _{i}(v, s) \leq 0\) for all \(v \in C\) and all \(i = 1,2,\dots,N\); its solution set is denoted by \(CS_{M}\).

Very recently, the parallel method has been used to solve common problems in many improved algorithms. One such algorithm is the parallel viscosity-type subgradient extragradient algorithm (PVTSE) introduced by Suantai et al. [29] for solving common variational inequalities; in that work, the PVTSE algorithm was applied to image recovery problems with common types of blur effects. A related method is the modified parallel hybrid subgradient extragradient algorithm (MHPSE) constructed by Kitisak et al. [17], which was also used to solve common variational inequalities.

Inspired and encouraged by the previous works, in this paper we propose an inertial-type parallel monotone hybrid algorithm with Armijo line search for solving common nonmonotone equilibrium problems. A weak convergence theorem is established under suitable conditions imposed on the bifunctions \(\psi _{i}\). In the last section, we apply our algorithm to unconstrained image recovery problems and compare it with the PVTSE [29] and MHPSE [17] algorithms. It is remarkable that our method has a better convergence rate.

2 Preliminaries

This section contains some definitions and basic results that will be used in our subsequent analysis. We also recall some properties of the metric projection; see [4] for more details. Let C be a nonempty closed and convex subset of a real Hilbert space H. For a sequence \(\{x_{n}\}\) in H, we denote the weak convergence (strong convergence) of \(\{x_{n}\}\) to a point \(x\in H\) by \(x_{n} \rightharpoonup x\) (\(x_{n}\rightarrow x\)), respectively.

Lemma 2.1

Let \(h:H\rightarrow \mathbb{R}\) be a real-valued function and K be a subset of H defined by \(K:= \{{u \in H: h(u) \leq 0}\}\). If K is nonempty and h is Lipschitz continuous on C with modulus \(\theta > 0 \), then

$$\begin{aligned} \mathrm{d}(u,K) \geq \theta ^{-1} \max \bigl\{ h(u),0\bigr\} \quad\textit{for all } u \in C. \end{aligned}$$

Definition 2.2

Let C be a nonempty closed and convex subset of a Hilbert space H. A function \(\psi: C \rightarrow H\) is called Lipschitz continuous if there exists a real constant \(K \geq 0\) such that

$$\begin{aligned} \bigl\Vert \psi (u)-\psi (v) \bigr\Vert \leq K \Vert u - v \Vert \quad\text{for all } u, v \in C. \end{aligned}$$

Definition 2.3

A bifunction \(\psi: C \times C \rightarrow \mathbb {R}\) is called jointly weakly continuous on \(C \times C \) if for all \(s, t \in C \) and \(\{s_{n}\}, \{t_{n}\} \) being two sequences in C converging weakly to s and t, respectively, \(\psi (s_{n}, t_{n})\) converges to \(\psi (s,t)\).

We now state the following assumptions which will be required in the sequel:

(A1) \(\psi (s,\cdot )\) is convex on H for every \(s \in H \);

(A2) ψ is jointly weakly continuous on \(H \times H\).

For each \(s,t \in H \), by \(\partial _{2}\psi (s,t)\) we denote the subdifferential of the convex function \(\psi (s,\cdot )\) at t, i.e.,

$$\begin{aligned} \partial _{2}\psi (s,t):= \bigl\{ x \in H: \psi (s,v) \geq \psi (s,t) + \langle x, v - t \rangle, \forall v \in H \bigr\} . \end{aligned}$$
(2.1)

In particular,

$$\begin{aligned} \partial _{2}\psi (s, s) = \bigl\{ x \in H: \psi (s, v) \geq \langle x, v - s \rangle, \forall v \in H \bigr\} . \end{aligned}$$
(2.2)
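For example, for a bifunction of the form \(\psi (s,t)=\langle F(s),t-s\rangle \) with \(F:H\rightarrow H\) (the form used in Sect. 4), the function \(\psi (s,\cdot )\) is affine, so \(\partial _{2}\psi (s,t)=\{F(s)\}\) for every \(t\in H\); in particular, \(\partial _{2}\psi (s,s)=\{F(s)\}\).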

In our main theorem, the following lemmas will be used in the convergence analysis.

Lemma 2.4

([30])

Let \(\psi: H \times H \longrightarrow \mathbb {R}\) be a function satisfying conditions (A1) and (A2). Let \(\overline{s}, \overline{t} \in H\) and \(\{ s_{n}\},\{ t_{n}\}\) be two sequences in H converging weakly to \(\overline{s},\overline{t}\), respectively. Then, for any \(\epsilon > 0\), there exist \(\eta > 0\) and \(n_{\epsilon } \in \mathbb {N}\) such that

$$\begin{aligned} \partial _{2}\psi (s_{n}, t_{n}) \subset \partial _{2}\psi ( \overline{s},\overline{t}) + \frac{\epsilon }{\eta }B, \end{aligned}$$

for every \(n \geq n_{\epsilon }\), where B denotes the closed unit ball in H.

Lemma 2.5

([11])

Suppose the bifunction ψ satisfies the assumptions (A1) and (A2). If \(\{s_{n}\} \subset H\) is a sequence which converges strongly to s̅ and a sequence \(\{u_{n}\}\), with \(u_{n} \in \partial _{2}\psi (s_{n},s_{n})\), converges weakly to u̅, then \(\overline{u} \in \partial _{2}\psi (\overline{s},\overline{s})\).

Lemma 2.6

([11])

Suppose the bifunction ψ satisfies the assumptions (A1) and (A2). If \(\{s_{n}\} \subset C\) is bounded, \(\rho > 0\), and \(\{t_{n}\}\) is a sequence such that

$$\begin{aligned} t_{n} = \arg \min \biggl\{ \psi (s_{n},v) + \frac{\rho }{2} \Vert v - s_{n} \Vert ^{2} : v \in C \biggr\} , \end{aligned}$$

then \(\{t_{n}\}\) is bounded.

Lemma 2.7

([20])

Assume \(\phi _{n} \in [ 0, \infty )\) and \(\delta _{n} \in [ 0, \infty )\) satisfy:

  1. \(\phi _{n+1} - \phi _{n} \leq \theta _{n}(\phi _{n} - \phi _{n-1}) + \delta _{n} \),

  2. \(\sum_{n=1}^{\infty }\delta _{n} < \infty \),

  3. \(\{\theta _{n}\} \subset [0,\theta ]\), where \(\theta \in (0,1)\).

Then the sequence \(\{\phi _{n}\}\) is convergent with \(\sum_{n=1}^{\infty }[\phi _{n+1} - \phi _{n}]_{+} < \infty \), where \([t]_{+}:= \max \{t,0\}\) (for any \(t \in \mathbb {R}\)).

3 Main results

In this section, we introduce a modified parallel method with line search rule for solving the common equilibrium problem (1.6) and provide some comments regarding the iteration parameters.

Algorithm 3.1

Step 1: Choose \(\sigma \in (0,1),\rho > 0\). Select arbitrary points \(s_{0} \in C_{0}, s_{1} \in C_{1} ; C_{0} =C_{1}=C\) and \(\{\theta _{n}\}\subset [0,\theta ]\) for some \(\theta \in [0,1)\). Set \(n:=1\).

Step 2: Compute

$$\begin{aligned} t_{n}:=s_{n} + \theta _{n}(s_{n}-s_{n-1}). \end{aligned}$$

Step 3: For each \(i = 1,2,\dots,N\), compute

$$\begin{aligned} u_{n}^{i}:=\arg \min \biggl\{ \psi _{i}(t_{n},v)+ \frac{\rho }{2} \Vert v-t_{n} \Vert ^{2}: v \in C\biggr\} , \end{aligned}$$

if \(u_{n}^{i}=t_{n}\), \(\forall i = 1,2,\dots,N\), then stop. Otherwise go to Step 4.

Step 4: Find \(m_{n}^{i}\) as the smallest nonnegative integer \(m^{i}\) satisfying

$$\begin{aligned} \textstyle\begin{cases} v_{n,m^{i}}^{i}=(1-\sigma ^{m^{i}})t_{n}+\sigma ^{m^{i}}u_{n}^{i}, \\ w_{n,m^{i}}^{i}\in \partial _{2}\psi _{i}(v_{n,m^{i}}^{i},v_{n,m^{i}}^{i}), \\ \langle w_{n,m^{i}}^{i},t_{n}-u_{n}^{i} \rangle \geq \frac{\rho }{2} \Vert u_{n}^{i}-t_{n} \Vert ^{2}. \end{cases}\displaystyle \end{aligned}$$
(3.1)

Set \(\sigma _{n}^{i}=\sigma ^{m_{n}^{i}}\), \(v_{n}^{i}=v_{n,m_{n}^{i}}^{i}\), \(w_{n}^{i}=w_{n,m_{n}^{i}}^{i}\).

Step 5: Compute

$$\begin{aligned} x_{n}^{i} = P_{C^{i}_{n+1}}(t_{n}), \end{aligned}$$
(3.2)

where \(C^{i}_{n+1} = C^{i}_{n} \cap H^{i}_{n}, H^{i}_{n} = \{{x \in H: f^{i}_{n}(x) \leq 0} \}\), and

$$\begin{aligned} f^{i}_{n}(x):= \bigl\langle w^{i}_{n}, x - u^{i}_{n} \bigr\rangle . \end{aligned}$$
(3.3)

Step 6: Compute

$$\begin{aligned} s_{n+1} := x_{n}^{i_{n}}, \quad\text{where } i_{n} \in \operatorname{arg\,max} \bigl\{ \bigl\Vert x_{n}^{i}-s_{n} \bigr\Vert :i=1,\dots, N\bigr\} . \end{aligned}$$

Step 7: Set \(n \leftarrow n+1\) and go to Step 2.

Remark 3.2

(1) It is clear that if \(u^{i}_{n} = t_{n}\) for all \(i = 1,2,\dots,N\), then \(t_{n}\) is a common solution of equilibrium problem (1.6).

(2) If \(N=1\), then Algorithm 3.1 reduces to the algorithm of Iyiola et al. [14] (Algorithm 1.1 above).
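To make the steps of Algorithm 3.1 concrete, the following is a minimal Python sketch of one iteration for the special case \(C=H=\mathbb{R}^{n}\) and \(\psi _{i}(s,t)=\langle F_{i}(s),t-s\rangle \) (the setting of Sect. 4), in which the subproblem of Step 3 has a closed form. Two simplifications that are not part of the algorithm itself are made for illustration: the projection of Step 5 is taken onto the newest half-space \(H^{i}_{n}\) only (instead of onto \(C^{i}_{n+1}=C^{i}_{n}\cap H^{i}_{n}\)), and the line search is capped at max_ls trials; all function names are illustrative.

```python
import numpy as np

def halfspace_proj(t, w, u):
    """Metric projection of t onto the half-space {x : <w, x - u> <= 0}."""
    viol = np.dot(w, t - u)
    return t if viol <= 0 else t - (viol / np.dot(w, w)) * w

def parallel_hybrid_step(s_prev, s_cur, F_list, theta=0.25, rho=0.5,
                         sigma=0.5, max_ls=50):
    """One (simplified) iteration of Algorithm 3.1 for psi_i(s, t) = <F_i(s), t - s>
    on C = H = R^n.  F_list is a list of callables F_i: R^n -> R^n."""
    t = s_cur + theta * (s_cur - s_prev)                # Step 2: inertial point
    candidates = []
    for F in F_list:
        u = t - F(t) / rho                              # Step 3: closed-form arg min
        if np.allclose(u, t):
            candidates.append(t)                        # this subproblem is already solved
            continue
        gap = 0.5 * rho * np.dot(u - t, u - t)
        for m in range(max_ls):                         # Step 4: Armijo line search
            v = (1.0 - sigma**m) * t + sigma**m * u
            w = F(v)                                    # element of d2 psi_i(v, v)
            if np.dot(w, t - u) >= gap:
                break
        candidates.append(halfspace_proj(t, w, u))      # Step 5 (newest half-space only)
    dists = [np.linalg.norm(x - s_cur) for x in candidates]
    return candidates[int(np.argmax(dists))]            # Step 6: farthest candidate
```

In practice, the projection onto the full intersection \(C^{i}_{n+1}\) can be approximated, e.g., by cyclic or Dykstra-type projections onto the accumulated half-spaces.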

Lemma 3.3

Suppose the solution set \(CS_{M}\) of the Minty equilibrium problem is nonempty. Then, the following hold:

  (i) There exists a nonnegative integer \(m_{i}\) satisfying the following inequality:

    $$\begin{aligned} \bigl\langle w^{i}_{n,m},t_{n}-u^{i}_{n} \bigr\rangle \geq \frac{\rho }{2} \bigl\Vert t_{n}-u^{i}_{n} \bigr\Vert ^{2} \quad\textit{for all } w^{i}_{n,m} \in \partial _{2}\psi _{i}\bigl(v^{i}_{n,m},v^{i}_{n,m} \bigr); \end{aligned}$$
  (ii) The sequence \(\{s_{n}\}\) generated by Algorithm 3.1 is well defined and belongs to \(C_{n}^{i}\) for all \(i=1,\dots, N\).

Proof

(i) We argue by contradiction. Suppose that there exists \(i\in \{1,\dots, N\}\) such that, for every nonnegative integer m and \(v^{i}_{n,m} = (1-\sigma ^{m})t_{n} + \sigma ^{m}u^{i}_{n}\), there exists \(w^{i}_{n,m} \in \partial _{2}\psi _{i}(v^{i}_{n,m}, v^{i}_{n,m})\) such that

$$\begin{aligned} \bigl\langle w^{i}_{n,m}, t_{n}-u^{i}_{n} \bigr\rangle < \frac{\rho }{2} \bigl\Vert t_{n} -u^{i}_{n} \bigr\Vert ^{2}. \end{aligned}$$
(3.4)

Observe that \(v^{i}_{n,m} \rightarrow t_{n}\) as \(m \rightarrow \infty \) and therefore, by Lemma 2.4, the sequence \({\{w^{i}_{n,m}\}}^{\infty }_{m=1}\) is bounded. Thus, we may suppose that \(w^{i}_{n,m}\) converges weakly to some \(\bar{w} \in H\). Taking the limit as \(m \rightarrow \infty \) in (3.4) (noting that \(v^{i}_{n,m} \rightarrow t_{n}\) and \(w^{i}_{n,m} \rightharpoonup \bar{w}\)) and using Lemma 2.5, we get \(\bar{w} \in \partial _{2}\psi _{i}(t_{n}, t_{n})\) and

$$\begin{aligned} \bigl\langle \bar{w}, t_{n} - u^{i}_{n} \bigr\rangle \leq \frac{\rho }{2} \bigl\Vert u^{i}_{n} - t_{n} \bigr\Vert ^{2}. \end{aligned}$$
(3.5)

Moreover, since \(\bar{w} \in \partial _{2}\psi _{i}(t_{n}, t_{n})\), we have

$$\begin{aligned} \psi _{i}\bigl(t_{n}, u^{i}_{n}\bigr) \geq \psi _{i}(t_{n}, t_{n}) + \bigl\langle \bar{w},u^{i}_{n} - t_{n} \bigr\rangle = \bigl\langle \bar{w},u^{i}_{n} - t_{n} \bigr\rangle . \end{aligned}$$
(3.6)

Combining with (3.5) yields

$$\begin{aligned} \psi _{i}\bigl(t_{n}, u^{i}_{n}\bigr) + \frac{\rho }{2} \bigl\Vert u^{i}_{n} - t_{n} \bigr\Vert ^{2} \geq 0, \end{aligned}$$

which contradicts the fact that, since \(u^{i}_{n} \neq t_{n}\) minimizes \(v \mapsto \psi _{i}(t_{n},v)+\frac{\rho }{2}\|v-t_{n}\|^{2}\) over C and \(\psi _{i}(t_{n},t_{n})=0\),

$$\begin{aligned} \psi _{i}\bigl(t_{n}, u^{i}_{n}\bigr) + \frac{\rho }{2} \bigl\Vert u^{i}_{n} - t_{n} \bigr\Vert ^{2} < 0. \end{aligned}$$

Thus, the line search is well defined.

(ii) We first show that \(C^{i}_{n}\) is nonempty. Indeed, by the assumption \(CS_{M} \neq \emptyset \), for each \(x^{*} \in CS_{M}\), we get \(\psi _{i}(y,x^{*}) \leq 0,\forall y \in C, \forall i = 1,\dots,N \). So, \(\psi _{i}(v_{n}^{i}, x^{*}) \leq 0, \forall n\). From the convexity of \(\psi _{i}(v_{n}^{i}, \cdot )\), we have

$$\begin{aligned} \psi _{i} \bigl(v_{n}^{i}, y\bigr) \geq \psi _{i}\bigl(u^{i}_{n}, u^{i}_{n} \bigr) + \bigl\langle w_{n}^{i}, y -u^{i}_{n} \bigr\rangle , \quad\forall y \in C. \end{aligned}$$

Therefore,

$$\begin{aligned} 0 \geq \psi _{i}\bigl(v_{n}^{i}, x^{*} \bigr) \geq \bigl\langle w_{n}^{i}, x^{*} - u^{i}_{n} \bigr\rangle . \end{aligned}$$

This implies that \(x^{*}\in H^{i}_{n}\) for each \(i=1,2,\dots,N\), so that \(C^{i}_{n+1}\) is nonempty and the projection \(x_{n}^{i}=P_{C^{i}_{n+1}}(t_{n})\) exists. This means that \(\{s_{n}\}\) is well defined. □

Theorem 3.4

Suppose \(CS_{M}\neq \emptyset \) and let Assumptions (A1) and (A2) hold for each \(\psi _{i}\), \(i=1,\dots,N\). If

$$\begin{aligned} \sum_{n=1}^{\infty }\theta _{n} \Vert s_{n}-s_{n-1} \Vert ^{2}< \infty, \end{aligned}$$

then the sequence {\(s_{n}\)} generated by Algorithm 3.1 converges weakly to a point \(z \in EP(C,\psi _{i})\) for all \(i = 1,\dots,N\).

Proof

We split our proof into four steps below for the sake of clarity.

Step 1. We first show that \(\{s_{n}\}\) is bounded and has a weak cluster point. Let \(x^{*}\in CS_{M}\). Then from Lemma 3.3, we have \(x^{*} \in C^{i}_{n+1}\), and hence

$$\begin{aligned} \bigl\Vert x^{i}_{n} - x^{*} \bigr\Vert ^{2} = \bigl\Vert P_{C^{i}_{n+1}}(t_{n}) - x^{*} \bigr\Vert ^{2} \leq \bigl\Vert t_{n} - x^{*} \bigr\Vert ^{2}. \end{aligned}$$
(3.7)

Also,

$$\begin{aligned} \bigl\Vert t_{n} - x^{*} \bigr\Vert ^{2} &= \bigl\Vert s_{n} + \theta _{n}(s_{n}-s_{n-1})-x^{*} \bigr\Vert ^{2} \\ &= \bigl\Vert s_{n} - x^{*} \bigr\Vert ^{2} + 2 \theta _{n} \bigl\langle s_{n} - x^{*}, s_{n} - s_{n-1} \bigr\rangle + {\theta }^{2}_{n} \Vert s_{n} - s_{n-1} \Vert ^{2}. \end{aligned}$$
(3.8)

Observe that

$$\begin{aligned} 2\bigl\langle s_{n} - x^{*}, s_{n} - s_{n-1} \bigr\rangle = \bigl\Vert s_{n} - x^{*} \bigr\Vert ^{2} - \bigl\Vert s_{n-1} - x^{*} \bigr\Vert ^{2} + \Vert s_{n} - s_{n-1} \Vert ^{2}. \end{aligned}$$
(3.9)

Thus, from (3.8) and (3.9), we have

$$\begin{aligned} \begin{aligned} \bigl\Vert x^{i}_{n} - x^{*} \bigr\Vert ^{2} &\leq \bigl\Vert t_{n} - x^{*} \bigr\Vert ^{2} \\ & = \bigl\Vert s_{n} - x^{*} \bigr\Vert ^{2} + \theta _{n} \bigl( \bigl\Vert s_{n} - x^{*} \bigr\Vert ^{2} - \bigl\Vert s_{n-1} - x^{*} \bigr\Vert ^{2}\bigr) + \bigl(\theta _{n} + {\theta }^{2}_{n}\bigr) \Vert s_{n} - s_{n-1} \Vert ^{2} \\ &\leq \bigl\Vert s_{n} - x^{*} \bigr\Vert ^{2} + \theta _{n}\bigl( \bigl\Vert s_{n} - x^{*} \bigr\Vert ^{2} - \bigl\Vert s_{n-1} - x^{*} \bigr\Vert ^{2}\bigr)+ 2\theta _{n} \Vert s_{n} - s_{n-1} \Vert ^{2}. \end{aligned} \end{aligned}$$
(3.10)

By the definition of \(\{s_{n}\}\), we have

$$\begin{aligned} \bigl\Vert s_{n+1} - x^{*} \bigr\Vert ^{2} \leq \bigl\Vert s_{n} - x^{*} \bigr\Vert ^{2} + \theta _{n}\bigl( \bigl\Vert s_{n} - x^{*} \bigr\Vert ^{2} - \bigl\Vert s_{n-1} - x^{*} \bigr\Vert ^{2}\bigr) + 2\theta _{n} \Vert s_{n} - s_{n-1} \Vert ^{2}. \end{aligned}$$
(3.11)

Since \({\sum }^{\infty }_{n=1} \theta _{n} \|s_{n} - s_{n-1}\|^{2} < \infty \), letting \(\delta _{n} = 2\theta _{n}\|s_{n} - s_{n-1}\|^{2}\) and \(\phi _{n} = \|s_{n} - x^{*}\|^{2}\), we deduce from Lemma 2.7 that the sequence \(\{\|s_{n} - x^{*}\|^{2}\}\) is convergent. Thus, \(\{s_{n}\}\) is bounded and \({\sum }^{\infty }_{n=1}[\|s_{n+1} - x^{*}\|^{2} - \|s_{n} - x^{*}\|^{2}]_{+} < \infty \). Furthermore, since \(\{s_{n}\}\) is bounded, there exists a subsequence \(\{s_{n_{k}}\}\) of \(\{s_{n}\}\) such that \(s_{n_{k}}\rightharpoonup p\in H\).

Step 2. We now show that, for each \(i=1,\dots,N\), any weak accumulation point p of the sequence {\(s_{n}\)} belongs to \(C^{i}_{n}\) for all n. Suppose that \(\{s_{n_{j}}\} \subset \{s_{n}\}\), \(s_{n_{j}} \rightharpoonup p\) as \(j \rightarrow \infty \), and there exists \(n_{0}\) such that \(p \notin C^{i}_{n_{0}}\). By the closedness and convexity of \(C^{i}_{n_{0}}\), the set \(C^{i}_{n_{0}}\) is also weakly closed. Hence, there exists \(n_{j_{0}} > n_{0}\) such that \(s_{n_{j}} \notin C^{i}_{n_{0}}\) for all \(n_{j} \geq n_{j_{0}}\); in particular, \(s_{n_{j_{0}}} \notin C^{i}_{n_{0}}\). This contradicts the fact that \(s_{n_{j_{0}}} \in C^{i}_{n_{j_{0}}-1} \subset \cdots \subset C^{i}_{n_{0}+1} \subset C^{i}_{n_{0}}\). Therefore, \(p \in C^{i}_{n}\) for all n, that is, \(p \in \bigcap^{\infty }_{n=0} C^{i}_{n}\).

Step 3. Show that \(p \in EP(C,\psi _{i})\) for all \(i = 1,\dots,N \).

Using Algorithm 3.1, we have

$$\begin{aligned} \begin{aligned} \bigl\Vert x^{i}_{n}-x^{*} \bigr\Vert ^{2}= {}&\bigl\Vert P_{C^{i}_{n+1}}(t_{n}) - x^{*} \bigr\Vert ^{2} \\ ={}& \bigl\Vert \bigl(P_{C^{i}_{n+1}}(t_{n}) - t_{n} \bigr) + \bigl(t_{n} - x^{*}\bigr) \bigr\Vert ^{2} \\ ={}& \bigl\Vert t_{n} - x^{*} \bigr\Vert ^{2}+ \bigl( \bigl\Vert P_{C^{i}_{n+1}}(t_{n}) - t_{n} \bigr\Vert ^{2} + 2 \bigl\langle P_{C^{i}_{n+1}}(t_{n}) - t_{n}, t_{n} - x^{*} \bigr\rangle \bigr) \\ ={}& \bigl\Vert t_{n} - x^{*} \bigr\Vert ^{2}+ \bigl( \bigl\Vert P_{C^{i}_{n+1}}(t_{n}) - t_{n} \bigr\Vert ^{2} + 2 \bigl\langle P_{C^{i}_{n+1}}(t_{n}) - t_{n}, t_{n} - P_{C^{i}_{n+1}}(t_{n}) \bigr\rangle \\ &{} + 2\bigl\langle P_{C^{i}_{n+1}}(t_{n}) - t_{n}, P_{C^{i}_{n+1}}(t_{n}) - x^{*} \bigr\rangle \bigr). \end{aligned} \end{aligned}$$
(3.12)

Hence,

$$\begin{aligned} \bigl\Vert x^{i}_{n}- x^{*} \bigr\Vert ^{2} = \bigl\Vert t_{n}- x^{*} \bigr\Vert ^{2} - \Vert P_{C^{i}_{n+1}}t_{n} - t_{n} \Vert ^{2} + 2\bigl\langle P_{C^{i}_{n+1}}t_{n} - t_{n}, P_{C^{i}_{n+1}}t_{n} - x^{*}\bigr\rangle . \end{aligned}$$
(3.13)

This implies that

$$\begin{aligned} \bigl\Vert s_{n+1}- x^{*} \bigr\Vert ^{2} = \bigl\Vert t_{n}- x^{*} \bigr\Vert ^{2} - \Vert P_{C^{i}_{n+1}}t_{n} - t_{n} \Vert ^{2} + 2\bigl\langle P_{C^{i}_{n+1}}t_{n} - t_{n}, P_{C^{i}_{n+1}}t_{n} - x^{*}\bigr\rangle . \end{aligned}$$
(3.14)

From (3.10) and (3.14), we have

$$\begin{aligned} \begin{aligned} \Vert P_{C^{i}_{n+1}}t_{n} - t_{n} \Vert ^{2} \leq{}& \bigl\Vert t_{n}- x^{*} \bigr\Vert ^{2} - \bigl\Vert s_{n+1} - x^{*} \bigr\Vert ^{2} +2 \bigl\langle P_{C^{i}_{n+1}}t_{n} - t_{n}, P_{C^{i}_{n+1}}t_{n} - x^{*}\bigr\rangle \\ \leq{}& \bigl\Vert s_{n}- x^{*} \bigr\Vert ^{2} - \bigl\Vert s_{n+1} - x^{*} \bigr\Vert ^{2} +\theta _{n}\bigl( \bigl\Vert s_{n} - x^{*} \bigr\Vert ^{2} - \bigl\Vert s_{n-1} - x^{*} \bigr\Vert ^{2}\bigr) \\ &{} + 2\theta _{n} \Vert s_{n} - s_{n-1} \Vert ^{2}+2 \bigl\langle P_{C^{i}_{n+1}}t_{n} - t_{n}, P_{C^{i}_{n+1}}t_{n} - x^{*}\bigr\rangle . \end{aligned} \end{aligned}$$
(3.15)

Clearly, from \(P_{C}s \in C\) and \(\langle s - P_{C}s, P_{C}s - y \rangle \geq 0, \forall y \in C\), we get \(\langle P_{C^{i}_{n+1}}t_{n} - t_{n}, P_{C^{i}_{n+1}}t_{n} - x^{*} \rangle \leq 0\), and \(\lim_{n\to \infty }\theta _{n } \|s_{n} - s_{n-1} \|^{2}=0\) from the assumption \(\sum_{n=1}^{\infty }\theta _{n}\|s_{n}-s_{n-1}\|^{2}<\infty \). Thus from (3.15), we conclude that

$$\begin{aligned} \lim_{n \rightarrow \infty } \Vert P_{C^{i}_{n+1}}t_{n} - t_{n} \Vert = 0, \quad\forall i=1,\dots,N. \end{aligned}$$
(3.16)

From \(t_{n} = s_{n} + \theta _{n}(s_{n} - s_{n-1})\), we get

$$\begin{aligned} \Vert t_{n} - s_{n} \Vert ^{2} \leq \theta _{n} \Vert s_{n} - s_{n-1} \Vert ^{2} \rightarrow 0, \end{aligned}$$

and hence

$$\begin{aligned} \Vert t_{n} - s_{n} \Vert \rightarrow 0,\quad n\rightarrow \infty. \end{aligned}$$
(3.17)

Since \(s_{n_{j}} \rightharpoonup p\), it follows from (3.17) that \(t_{n_{j}} \rightharpoonup p\). Observe that \(f^{i}_{n}\) is Lipschitz continuous with modulus \(M_{i} > 0\) for all \(i=1,\dots,N\). It follows from Lemmas 2.1 and 3.3 that

$$\begin{aligned} \begin{aligned} \bigl\Vert x^{i}_{n}- x^{*} \bigr\Vert ^{2} &= \bigl\Vert P_{C^{i}_{n+1}} t_{n}-x^{*} \bigr\Vert ^{2} \\ & \leq \bigl\Vert t_{n}-x^{*} \bigr\Vert ^{2} - \Vert P_{C^{i}_{n+1}}t_{n} - t_{n} \Vert ^{2} \\ &= \bigl\Vert t_{n}-x^{*} \bigr\Vert ^{2} - \mathrm{dist}^{2} \bigl(t_{n},C^{i}_{n+1} \bigr) \\ &\leq \bigl\Vert t_{n}-x^{*} \bigr\Vert ^{2} - \biggl(\frac{1}{M_{i}}f^{i}_{n}(t_{n}) \biggr)^{2} \\ &\leq \bigl\Vert t_{n}-x^{*} \bigr\Vert ^{2}- \biggl(\frac{1}{2M_{i}} \rho \sigma _{n}^{i} \bigl\Vert t_{n}-u_{n}^{i} \bigr\Vert ^{2} \biggr)^{2}. \end{aligned} \end{aligned}$$
(3.18)

From (3.17) and (3.18), we obtain that

$$\begin{aligned} \biggl(\frac{1}{2M_{i}} \rho \sigma _{n}^{i} \bigl\Vert t_{n}-u_{n}^{i} \bigr\Vert ^{2} \biggr)^{2} \leq \bigl\Vert t_{n}-x^{*} \bigr\Vert ^{2}- \bigl\Vert x^{i}_{n} - x^{*} \bigr\Vert ^{2}. \end{aligned}$$
(3.19)

It follows from (3.16) that \(\sigma _{n}^{i} \| t_{n}-u_{n}^{i} \|^{2}\rightarrow 0\) as \(n\rightarrow \infty \) for all \(i=1,\dots,N\).

Case I. Suppose that for each \(i = 1,\dots,N\),

$$\begin{aligned} {\liminf_{n \rightarrow \infty }\sigma _{n}^{i}} > 0. \end{aligned}$$

Then

$$\begin{aligned} {0 \leq \bigl\Vert t_{n}-u_{n}^{i} \bigr\Vert ^{2}= \frac{\sigma _{n}^{i} \Vert t_{n}-u_{n}^{i} \Vert ^{2}}{ \sigma _{n}^{i}}}, \end{aligned}$$

which implies that

$$\begin{aligned} \begin{aligned} \limsup_{n \rightarrow \infty } \bigl\Vert t_{n} - u_{n}^{i} \bigr\Vert ^{2} &\leq \limsup_{n \rightarrow \infty }\bigl(\sigma _{n}^{i} \bigl\Vert t_{n} - u_{n}^{i} \bigr\Vert ^{2}\bigr) \biggl( \limsup _{n \rightarrow \infty } \frac{1}{\sigma _{n}^{i}} \biggr) \\ &=\limsup_{n \rightarrow \infty }\bigl(\sigma _{n}^{i} \bigl\Vert t_{n} - u_{n}^{i} \bigr\Vert ^{2}\bigr) \biggl( \frac{1}{ \liminf_{n \rightarrow \infty } \sigma _{n}^{i}} \biggr) = 0. \end{aligned} \end{aligned}$$
(3.20)

Thus,

$$\begin{aligned} \lim_{n \rightarrow \infty } \bigl\Vert t_{n} - u_{n}^{i} \bigr\Vert = 0, \end{aligned}$$
(3.21)

for all \(i=1,\dots,N\). From (3.17) and (3.21), we get

$$\begin{aligned} \bigl\Vert s_{n}-u_{n}^{i} \bigr\Vert \leq \Vert t_{n}-s_{n} \Vert + \bigl\Vert t_{n}-u_{n}^{i} \bigr\Vert \rightarrow 0, \quad n \rightarrow \infty. \end{aligned}$$
(3.22)

Since \(s_{n_{j}}\) ⇀p and due to (3.22), it follows that \(u_{n_{j}}^{i} \rightharpoonup p\) as \(j \rightarrow \infty \) for all \(i = 1,\dots,N\). By the definition of \(u^{i}_{n_{j}}\), we have

$$\begin{aligned} 0 \in \partial _{2} \psi _{i}\bigl(t_{n_{j}}, u_{n_{j}}^{i}\bigr) + \rho \bigl(u_{n_{j}}^{i} - t_{n_{j}}\bigr) + N_{C}\bigl(u_{n_{j}}^{i} \bigr). \end{aligned}$$

So, there exist

$$\begin{aligned} v^{i}_{n_{j}} \in \partial _{2}\psi _{i} \bigl(t_{n_{j}},u^{i}_{n_{j}}\bigr) \quad\text{and}\quad \overline{v}^{i}_{n_{j}} \in N_{C} \bigl(u^{i}_{n_{j}}\bigr) \quad\text{such that}\quad v^{i}_{n_{j}} + \rho \bigl(u^{i}_{n_{j}}-t_{n_{j}}\bigr) + \overline{v}^{i}_{n_{j}} = 0. \end{aligned}$$

This implies that

$$\begin{aligned} \bigl\langle \overline{v}^{i}_{n_{j}},u^{i}_{n_{j}} -y\bigr\rangle &= \bigl\langle v^{i}_{n_{j}},y - u^{i}_{n_{j}} \bigr\rangle + \rho \bigl\langle u^{i}_{n_{j}}-t_{n_{j}}, y - u^{i}_{n_{j}} \bigr\rangle . \end{aligned}$$

Since \(\overline{v}^{i}_{n_{j}} \in N_{C}(u^{i}_{n_{j}})\), the left-hand side is nonnegative for every \(y \in C\). Combining this with the subgradient inequality

$$\begin{aligned} \psi _{i}(t_{n_{j}}, y) - \psi _{i} \bigl(t_{n_{j}},u_{n_{j}}^{i}\bigr) \geq \bigl\langle v^{i}_{n_{j}}, y - u_{n_{j}}^{i} \bigr\rangle ,\quad \forall y \in C, \end{aligned}$$

we have

$$\begin{aligned} \psi _{i}(t_{n_{j}}, y) - \psi _{i} \bigl(t_{n_{j}},u_{n_{j}}^{i}\bigr) + \rho \bigl\langle u^{i}_{n_{j}}-t_{n_{j}}, y - u^{i}_{n_{j}} \bigr\rangle \geq 0,\quad \forall y \in C. \end{aligned}$$
(3.23)

Taking \(j \rightarrow \infty \), by the jointly weak continuity of \(\psi _{i}\), (3.17) and (3.21), we obtain that

$$\begin{aligned} {\psi _{i} (p, y) - \psi _{i} (p, p) \geq 0.} \end{aligned}$$

Hence

$$\begin{aligned} {\psi _{i}(p, y) \geq 0, \quad\forall y \in C,} \end{aligned}$$

which implies that \(p \in {EP} (C,\psi _{i})\) for all \(i=1,\dots,N\).

Case II. Suppose \(\lim_{n \rightarrow \infty } \sigma _{n}^{i}=0\) for all \(i = 1,2,\dots,N\).

Then from the boundedness of \(\{u_{n}^{i}\}\), there exists \(\{u_{n_{k}}^{i}\} \subset \{u_{n}^{i}\}\) such that \(u_{n_{k}}^{i} \rightharpoonup \overline{u}^{i}\) as \(k \rightarrow \infty \). Replacing y by \(t_{n_{k}}\) in (3.23), we have

$$\begin{aligned} \psi _{i}\bigl(t_{n_{k}}, u^{i}_{n_{k}} \bigr) + \rho \bigl\Vert u^{i}_{n_{k}} - t_{n_{k}} \bigr\Vert ^{2} \leq 0. \end{aligned}$$
(3.24)

Moreover, by the Armijo line search rule (3.1), the index \(m^{i}_{n_{k}}-1\) does not satisfy the line search condition (for \(m^{i}_{n_{k}}\geq 1\)); that is, there exists \(w^{i}_{n_{k},m^{i}_{n_{k}}-1} \in \partial _{2}\psi _{i} (v^{i}_{n_{k},m^{i}_{n_{k}}-1} , v^{i}_{n_{k},m^{i}_{n_{k}}-1})\) such that

$$\begin{aligned} \bigl\langle w^{i}_{n_{k},m^{i}_{n_{k}}-1}, t_{n_{k}} - u_{n_{k}}^{i} \bigr\rangle < \frac{\rho }{2} \bigl\Vert u_{n_{k}}^{i} - t_{n_{k}} \bigr\Vert ^{2}. \end{aligned}$$
(3.25)

By the convexity of \(\psi _{i}(v^{i}_{n_{k},m^{i}_{n_{k}}-1}, \cdot )\) and due to (3.25), for

$$\begin{aligned} w^{i}_{n_{k},m^{i}_{n_{k}}-1} \in \partial _{2}\psi _{i} \bigl(v^{i}_{n_{k},m^{i}_{n_{k}}-1} , v^{i}_{n_{k},m^{i}_{n_{k}}-1}\bigr), \end{aligned}$$

we have

$$\begin{aligned} \begin{aligned} \psi _{i}\bigl(v^{i}_{n_{k},m^{i}_{n_{k}}-1}, u^{i}_{n_{k}}\bigr) &\geq \psi _{i} \bigl(v^{i}_{n_{k},m^{i}_{n_{k}}-1}, v^{i}_{n_{k},m^{i}_{n_{k}}-1} \bigr) + \bigl\langle w^{i}_{n_{k},m^{i}_{n_{k}}-1}, u^{i}_{n_{k}}-v^{i}_{n_{k},m^{i}_{n_{k}}-1} \bigr\rangle \\ &=\bigl(1-\sigma ^{m^{i}_{n_{k}}-1}\bigr) \bigl\langle w^{i}_{n_{k},m^{i}_{n_{k}}-1} , u^{i}_{n_{k}} - t_{n_{k}} \bigr\rangle \\ &> -\bigl(1 - \sigma ^{m^{i}_{n_{k}}-1}\bigr) \frac{\rho }{2} \bigl\Vert u^{i}_{n_{k}} - t_{n_{k}} \bigr\Vert ^{2}. \end{aligned} \end{aligned}$$
(3.26)

From (3.24) and (3.26), we obtain

$$\begin{aligned} \begin{aligned} \psi _{i}\bigl( v^{i}_{n_{k},m^{i}_{n_{k}}-1}, u^{i}_{n_{k}} \bigr) &\geq - \bigl(1 - \sigma ^{m^{i}_{n_{k}}-1}\bigr) \frac{\rho }{2} \bigl\Vert u^{i}_{n_{k}} - t_{n_{k}} \bigr\Vert ^{2} \\ &\geq \frac{1}{2} \bigl(1 - \sigma ^{m^{i}_{n_{k}}-1}\bigr) \psi _{i}\bigl(t_{n_{k}}, u^{i}_{n_{k}}\bigr). \end{aligned} \end{aligned}$$
(3.27)

By (3.1), since \(v^{i}_{n_{k},m^{i}_{n_{k}}-1} = (1 - \sigma ^{m^{i}_{n_{k}}-1})t_{n_{k}} + \sigma ^{m^{i}_{n_{k}}-1}u^{i}_{n_{k}}\), \(\sigma ^{m^{i}_{n_{k}}-1} \rightarrow 0\), \(t_{n_{k}}\) converges weakly to p, and \(u^{i}_{n_{k}}\) converges weakly to \(\overline{u}^{i}\) for all \(i = 1,2,\dots,N\), this implies that \(v^{i}_{n_{k},m^{i}_{n_{k}}-1} \rightharpoonup p\) as \(k \rightarrow \infty \). Since \(\{\| u^{i}_{n_{k}} - t_{n_{k}} \|^{2}\}\) is bounded, without loss of generality, we may assume that \(\lim_{k \to \infty } \| u^{i}_{n_{k}} - t_{n_{k}}\|^{2}\) exists for all \(i=1,\dots,N\). Hence, passing to the limit in (3.27), we get

$$\begin{aligned} \psi _{i} \bigl(p,\overline{u}^{i}\bigr) \geq 0 \geq \frac{1}{2}\psi _{i}\bigl(p, \overline{u}^{i}\bigr). \end{aligned}$$
(3.28)

Therefore, \(\psi _{i}(p, \overline{u}^{i}) = 0\), \(\overline{u}^{i} = p, \forall i = 1,\dots,N\) and \(\lim_{k \rightarrow \infty }\| u^{i}_{n_{k}} - t_{n_{k}} \| ^{2} = 0\). By Case I, it is immediate that \(p\in EP (C, \psi _{i})\) for all \(i = 1,\dots,N\).

Step 4. We show that \(\{s_{n}\}\) converges weakly to a point \(p\in EP (C, \psi _{i})\). Now, let \(x^{\ast }\) and p be two accumulation points of \(\{s_{n}\}\). Then there exists \(\{s_{n_{j}}\} \subset \{s_{n}\}\) such that \(s_{n_{j}}\rightharpoonup p\) and \(\{s_{n_{k}}\} \subset \{s_{n}\}\) such that \(s_{n_{k}}\rightharpoonup x^{\ast }\). Using similar arguments as in Step 2 above, we can show that \(x^{\ast }\), \(p\in \bigcap^{\infty }_{n=0}C^{i}_{n}\). Let \(\lim_{n \rightarrow \infty }\| s_{n} - x^{\ast } \|^{2} = \alpha \). Then

$$\begin{aligned} \begin{aligned} \alpha &= \lim_{n \rightarrow \infty } \bigl\Vert s_{n} - x^{ \ast } \bigr\Vert ^{2} = \lim _{j \rightarrow \infty } \bigl\Vert s_{n_{j}} - x^{\ast } \bigr\Vert ^{2} \\ &= \lim_{j \rightarrow \infty } \bigl[ \Vert s_{n_{j}} - p \Vert ^{2} + 2 \bigl\langle s_{n_{j}} - p, p - x^{\ast } \bigr\rangle + \bigl\Vert p - x^{\ast } \bigr\Vert ^{2} \bigr] \\ &= \lim_{j \rightarrow \infty } \bigl[ \Vert s_{n_{j}} - p \Vert ^{2} + \bigl\Vert p - x^{ \ast } \bigr\Vert ^{2} \bigr] \\ &= \lim_{n \rightarrow \infty } \bigl[ \Vert s_{n} - p \Vert ^{2} + \bigl\Vert p - x^{ \ast } \bigr\Vert ^{2} \bigr] \\ &= \lim_{n \rightarrow \infty } \bigl[ \bigl\Vert s_{n} - x^{\ast } \bigr\Vert ^{2} + 2 \bigl\Vert p - x^{\ast } \bigr\Vert ^{2} \bigr]. \end{aligned} \end{aligned}$$

Therefore, \(\| p - x^{\ast } \|= 0\), and so \(\{s_{n}\}\) converges weakly to p. This completes the proof. □

4 Application to image restoration problems

The image restoration problem can be modeled by a linear system of equations, which is equivalent to a matrix equation of the form

$$\begin{aligned} b = As + \upsilon, \end{aligned}$$
(4.1)

where \(s\in \mathbb{R}^{n\times 1}\) is the original image, \(b\in \mathbb{R}^{n\times 1}\) is the observed image, \(\upsilon \in \mathbb{R}^{n\times 1}\) is additive noise, and \(A\in \mathbb{R}^{n \times n}\) is the blurring matrix. In order to solve problem (4.1), we aim to approximate the original image s by minimizing the effect of the additive noise, which leads to the following least squares (LS) problem:

$$\begin{aligned} \min_{s}\frac{1}{2} \Vert b-As \Vert _{2}^{2}, \end{aligned}$$
(4.2)

where \(\|\cdot \|\) is the \(\ell _{2}\)-norm defined by \(\|s\|_{2} = \sqrt{\sum_{i=1}^{n}|s_{i}|^{2}}\). The solution of (4.2) can be estimated by many well-known iteration methods.

Blur is an unsharp image area caused by camera or subject movement, inaccurate focusing, or the use of an aperture that gives a shallow depth of field. Blur effects act as filters that smooth transitions and decrease contrast by averaging the pixels next to hard edges of defined lines and areas with significant color transitions. In a digital image there are many types of blur effects, e.g., Gaussian blur, out-of-focus blur, and motion blur. In image restoration problems, the goal is to deblur an image when the blurring operator is not known exactly. We therefore formulate this goal as the following problem:

$$\begin{aligned} \min_{s\in \mathbb{R}^{n}}\frac{1}{2} \Vert A_{1}s-b_{1} \Vert _{2}^{2}, \qquad\min _{s\in \mathbb{R}^{n}}\frac{1}{2} \Vert A_{2}s-b_{2} \Vert _{2}^{2}, \quad\dots\quad \min_{s\in \mathbb{R}^{n}} \frac{1}{2} \Vert A_{N}s-b_{N} \Vert _{2}^{2}, \end{aligned}$$
(4.3)

where s is the original true image, \(A_{i}\) is a blurring matrix, and \(b_{i}\) is the blurred image obtained by applying the blurring matrix \(A_{i}\), for \(i = 1, 2, \dots, N\). The set of common solutions of problem (4.3) is denoted by Ω and is assumed to be nonempty. We can apply the proposed algorithm (Algorithm 3.1) to solve problem (4.3) by setting \(\psi _{i}(s,t)=\langle A^{T}_{i}(A_{i}s-b_{i}),t-s\rangle \) for all \(s,t\in \mathbb{R}^{n\times 1}\). Both the theoretical and the experimental results in this section demonstrate the convergence properties of the proposed algorithm. To show its effectiveness, the PVTSE [29] and MHPSE [17] algorithms are also applied for comparison.
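With this choice of \(\psi _{i}\) and the unconstrained setting \(C=\mathbb{R}^{n\times 1}\), the subproblem in Step 3 of Algorithm 3.1 has a closed form: since the objective is quadratic in v, its first-order optimality condition gives

$$\begin{aligned} u_{n}^{i}=\arg \min \biggl\{ \bigl\langle A^{T}_{i}(A_{i}t_{n}-b_{i}), v-t_{n}\bigr\rangle +\frac{\rho }{2} \Vert v-t_{n} \Vert ^{2}: v \in \mathbb{R}^{n\times 1}\biggr\} = t_{n}-\frac{1}{\rho }A^{T}_{i}(A_{i}t_{n}-b_{i}), \end{aligned}$$

and, since \(\psi _{i}(v,\cdot )\) is affine, \(\partial _{2}\psi _{i}(v,v)=\{A^{T}_{i}(A_{i}v-b_{i})\}\) for every v, so the line search of Step 4 only requires matrix–vector products.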

The Cauchy error of the PVTSE, MHPSE, and proposed algorithms at iteration n is \(\|s_{n}-s_{n-1}\|_{\infty }\), and the stopping criterion is \(\|s_{n}-s_{n-1}\|_{\infty } < 10^{-8}\). The performance of the compared algorithms at \(s_{n}\) in the image restoring process is measured quantitatively by means of the peak signal-to-noise ratio (PSNR), which is defined by

$$\begin{aligned} \operatorname{PSNR}(s_{n}) = 20\operatorname{log}_{10} \biggl( \frac{255^{2}}{MSE} \biggr), \end{aligned}$$

where \(\operatorname{MSE} = \|s_{n}-s\|_{2}^{2}\).

Next, we will present the restoration of images that have been corrupted by the following blur types:

Type I.:

Gaussian blur of filter size \(9\times 9\) with standard deviation \(\sigma = 4\) (the original image has been degraded by the blurring matrix \(A_{1}\)).

Type II.:

Out of focus blur (disk) with radius \(r = 6\) (the original image has been degraded by the blurring matrix \(A_{2} \)).

Type III.:

Motion blur with motion length of 21 pixels (len = 21) and motion orientation \(11^{\circ}\) (\(\theta = 11\)) (the original image has been degraded by the blurring matrix \(A_{3} \)).
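For reference, the three blur types can be generated as in the following minimal Python sketch using NumPy/SciPy, in which the blurs are applied as operators on a 2-D image array (the paper works with explicit blurring matrices \(A_{i}\)); the Gaussian and motion kernels only approximate the MATLAB-style filters described above.

```python
import numpy as np
from scipy import ndimage

def disk_kernel(radius=6):
    """Uniform out-of-focus (disk) kernel with the given radius (Type II)."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def motion_kernel(length=21, angle_deg=11.0):
    """Rough line kernel for linear motion blur (Type III)."""
    k = np.zeros((length, length))
    c = (length - 1) / 2.0
    ang = np.deg2rad(angle_deg)
    for t in np.linspace(-c, c, 4 * length):
        k[int(round(c - t * np.sin(ang))), int(round(c + t * np.cos(ang)))] = 1.0
    return k / k.sum()

# Blur operators acting on a 2-D array u (one color channel):
A1 = lambda u: ndimage.gaussian_filter(u, sigma=4, truncate=1.0)             # Type I (~9x9 window)
A2 = lambda u: ndimage.convolve(u, disk_kernel(6), mode='reflect')           # Type II
A3 = lambda u: ndimage.convolve(u, motion_kernel(21, 11.0), mode='reflect')  # Type III
```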

The RGB format for the color image shown in Fig. 1 is used to demonstrate the effectiveness and practicality of our algorithm compared with PVTSE and MHPSE algorithms.

Figure 1

The original RGB image of size \(289 \times 448\)

The three different types of the original RGB image degraded by the blurring matrices \(A_{1}, A_{2}\), and \(A_{3}\) are shown in Fig. 2.

Figure 2

The original RGB image degraded by blurring matrices \(A_{1}\), \(A_{2}\), and \(A_{3}\), respectively

We choose \(\eta =0.1\) and \(\rho =0.5\), and \(\theta _{n}\) can be chosen such that \(0\leq \theta _{n}\leq \bar{\theta }_{n}\) and

$$\begin{aligned} \bar{\theta }_{n}= \textstyle\begin{cases} \min \{ \frac{1}{n^{2} \Vert s_{n}-s_{n-1} \Vert ^{2}}, 0.25 \} &\text{if } s_{n}\neq s_{n-1}, \\ 0.25 &\text{otherwise.} \end{cases}\displaystyle \end{aligned}$$
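For instance, the bound \(\bar{\theta }_{n}\) above (with the choice \(\theta _{n}=\bar{\theta }_{n}\)) can be computed as in the following sketch; note that this choice guarantees \(\sum_{n}\theta _{n}\|s_{n}-s_{n-1}\|^{2}\leq \sum_{n}1/n^{2}<\infty \), so the summability assumption of Theorem 3.4 is satisfied.

```python
import numpy as np

def theta_n(n, s_cur, s_prev, cap=0.25):
    """Inertial parameter used in the experiments:
    theta_n = min{ 1 / (n^2 ||s_n - s_{n-1}||^2), cap }  if s_n != s_{n-1},
    theta_n = cap                                        otherwise."""
    diff_sq = float(np.dot(s_cur - s_prev, s_cur - s_prev))
    if diff_sq == 0.0:
        return cap
    return min(1.0 / (n**2 * diff_sq), cap)
```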

After that, we apply PVTSE, MHPSE, and the proposed algorithms to get the solution of the deblurring problem with one of the three blurring matrices \(A_{1}, A_{2}\), and \(A_{3}\). The results of the PVTSE, MHPSE, and the proposed algorithms are demonstrated on the following cases:

Case I.:

Inputting \(A_{1}\) to PVTSE, MHPSE, and the proposed algorithms.

Case II.:

Inputting \(A_{2}\) to PVTSE, MHPSE, and the proposed algorithms.

Case III.:

Inputting \(A_{3}\) to PVTSE, MHPSE, and the proposed algorithms.

Case IV.:

Inputting \(A_{1}\) and \(A_{2}\) to PVTSE, MHPSE, and the proposed algorithms.

Case V.:

Inputting \(A_{1}\) and \(A_{3}\) to PVTSE, MHPSE, and the proposed algorithms.

Case VI.:

Inputting \(A_{2}\) and \(A_{3}\) to PVTSE, MHPSE, and the proposed algorithms.

Case VII.:

Inputting \(A_{1}\), \(A_{2}\), and \(A_{3}\) to the PVTSE, MHPSE, and the proposed algorithms.

Next, the common solutions of the deblurring problem for all cases under the three blurring matrices \(A_{1}, A_{2}\), and \(A_{3}\) by using PVTSE, MHPSE, and the proposed algorithms are presented. The restored images using these three algorithms after 500 iterations for all seven cases are shown in Figs. 3–5.

Figure 3

The reconstructed RGB image with its PSNR (decimal cutting) for all cases with the PVTSE algorithm being used, after 500 iterations

Figure 4

The reconstructed RGB image with its PSNR (decimal cutting) for all cases with the MHPSE algorithm being used, after 500 iterations

Figure 5

The reconstructed RGB image with its PSNR (decimal cutting) for all cases with the proposed algorithm being used, after 500 iterations

From Figs. 3–5, we see that using more than one blurring matrix (\(N > 1\)) in the common deblurring problem improves the quality of the restored image, and using all three blurring matrices gives the best quality of the recovered RGB image. Moreover, the recovered RGB image obtained by the proposed algorithm has the highest PSNR compared with the PVTSE and MHPSE algorithms.

Next, we demonstrate the behavior of the Cauchy error, the peak signal-to-noise ratio (PSNR), and the number of line search steps per iteration for the recovery of the degraded RGB image by the PVTSE, MHPSE, and proposed algorithms over \(20{,}000\) iterations.

The quality improvement of the reconstructed RGB images, measured by PSNR, is also illustrated for these three algorithms in Fig. 6. The PSNR increases as the number of iterations increases. Compared with the PVTSE and MHPSE methods, the proposed method always gives the maximum PSNR value when more than one blurring matrix is used in finding the common solution of the deblurring problem.

Figure 6

PSNR quality plots of PVTSE, MHPSE, and the proposed algorithms in all cases of RGB images

The Cauchy error plots show the validity and confirm the convergence of the PVTSE, MHPSE, and proposed methods. The Cauchy error of MHPSE decreases as the number of iterations increases, whereas the Cauchy error of the PVTSE algorithm oscillates throughout the iterations. For the proposed algorithm, a gentle oscillation occurs at the beginning of the iterations, after which its Cauchy error also decreases as the number of iterations increases. Moreover, the proposed algorithm always requires the smallest number of line search steps per iteration compared with the PVTSE and MHPSE algorithms. These results show that the proposed algorithm is highly efficient compared with the PVTSE and MHPSE algorithms.

5 Conclusion

In this work, we have combined a parallel method with an inertial hybrid algorithm and Armijo line search for solving common nonmonotone equilibrium problems. A weak convergence theorem is established under suitable conditions imposed on the bifunctions \(\psi _{i}\). Moreover, we apply our algorithm to unconstrained image recovery problems and show the superior efficiency of the proposed algorithm when the number of subproblems is increased; see Fig. 5. Finally, we compare our main algorithm with the PVTSE [29] and MHPSE [17] algorithms. It is remarkable that our proposed algorithm has a better convergence rate; see Figs. 6–7.

Figure 7

Cauchy error plots of PVTSE, MHPSE, and the proposed algorithms in all cases of RGB images

Availability of data and materials

Contact the authors for data requests.

References

  1. Attouch, H., Czarnecki, M.O.: Asymptotic control and stabilization of nonlinear oscillators with nonisolated equilibria. J. Differ. Equ. 179(1), 278–310 (2000)

  2. Attouch, H., Goudou, X., Redont, P.: The heavy ball with friction. I. The continuous dynamical system. Commun. Contemp. Math. 2(1), 1–34 (2000)

  3. Attouch, H., Peypouquet, J., Redont, P.: A dynamic approach to an inertial forward–backward algorithm for convex minimization. SIAM J. Optim. 24(1), 232–256 (2014)

  4. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics. Springer, New York (2011)

  5. Beck, A., Teboulle, M.: A fast iterative shrinkage–thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)

  6. Bigi, G., Pappalardo, M., Passacantando, M.: Existence and solution methods for equilibria. Eur. J. Oper. Res. 227(1), 1–11 (2013)

  7. Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63(1–4), 123–145 (1994)

  8. Bot, R.I., Csetnek, E.R.: An inertial forward–backward–forward primal–dual splitting algorithm for solving monotone inclusion problems. Numer. Algorithms 71(3), 519–540 (2016)

  9. Castellani, M., Giuli, M.: Refinements of existence results for relaxed quasi-monotone equilibrium problems. J. Glob. Optim. 57(4), 1213–1227 (2013)

  10. Daniele, P., Giannessi, F., Maugeri, A. (eds.): Equilibrium Problems and Variational Models. Nonconvex Optimization and Its Applications, vol. 68. Kluwer, Norwell (2003)

  11. Dinh, B.V., Kim, D.S.: Projection algorithms for solving nonmonotone equilibrium problems in Hilbert space. J. Comput. Appl. Math. 302, 106–117 (2016)

  12. Giannessi, F., Maugeri, A., Pardalos, P.M. (eds.): Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models. Kluwer, Dordrecht (2001)

  13. Iusem, A.N., Sosa, W.: Iterative algorithm for equilibrium problems. Optimization 52(3), 301–316 (2003)

  14. Iyiola, O.S., Ogbuisi, F.U., Shehu, Y.: An inertial type iterative method with Armijo linesearch for nonmonotone equilibrium problems. Calcolo 55, 52 (2018)

  15. Jleli, M., Karapinar, E., Petruşel, A., Samet, B., Vetro, C.: Optimization problems via best proximity point analysis. Abstr. Appl. Anal. 2014, 178040 (2014)

  16. Karapinar, E., Sintunavarat, W.: The existence of optimal approximate solution theorems for generalized α-proximal contraction non-self-mappings and applications. Fixed Point Theory Appl. 2013, 323 (2013)

  17. Kitisak, P., Cholamjiak, W., Yambangwai, D., Jaidee, R.: A modified parallel hybrid subgradient extragradient method for finding common solutions of variational inequality problems. Thai J. Math. 18(1), 261–274 (2020)

  18. Konnov, I.V.: Equilibrium Models and Variational Inequalities. Elsevier, Amsterdam (2007)

  19. Lorenz, D.A., Pock, T.: An inertial forward–backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51, 311–325 (2015)

  20. Maingé, P.E.: Convergence theorem for inertial KM-type algorithms. J. Comput. Appl. Math. 219(1), 223–236 (2008)

  21. Mastroeni, G.: On auxiliary principle for equilibrium problems. In: Daniele, P., Giannessi, F., Maugeri, A. (eds.) Equilibrium Problems and Variational Models, pp. 289–298. Kluwer, Dordrecht (2003)

  22. Moudafi, A.: Proximal point algorithm extended to equilibrium problems. J. Nat. Geom. 15(1–2), 91–100 (1999)

  23. Moudafi, A., Oliny, M.: Convergence of a splitting inertial proximal method for monotone operators. J. Comput. Appl. Math. 155, 447–454 (2003)

  24. Muu, L.D.: Stability property of a class of variational inequalities. Math. Operationsforsch. Stat., Ser. Optim. 15(3), 347–353 (1984)

  25. Muu, L.D., Oettli, W.: Convergence of an adaptive penalty scheme for finding constrained equilibria. Nonlinear Anal. 18(12), 1159–1166 (1992)

  26. Nesterov, Y.E.: A method for solving the convex programming problem with convergence rate \(O(1/k^{2})\). Dokl. Akad. Nauk SSSR 269, 543–547 (1983) (in Russian)

  27. Polyak, B.T.: Some methods of speeding up the convergence of iteration methods. USSR Comput. Math. Math. Phys. 4(5), 1–17 (1964)

  28. Reich, S., Sabach, S.: Three convergence theorems regarding iterative methods for solving equilibrium problems in reflexive Banach spaces. Contemp. Math. 568, 225–240 (2012)

  29. Suantai, S., Peeyada, P., Yambangwai, D., Cholamjiak, W.: A parallel-viscosity-type subgradient extragradient-line method for finding the common solution of variational inequality problems applied to image restoration problems. Mathematics 8, 248 (2020). https://doi.org/10.3390/math8020248

  30. Vuong, P.T., Strodiot, J.J., Nguyen, V.H.: Extragradient methods and linesearch algorithms for solving Ky Fan inequalities and fixed point problems. J. Optim. Theory Appl. 155(2), 605–627 (2012)

Acknowledgements

S. Suantai would like to thank Chiang Mai University, Thailand. W. Cholamjiak would like to thank Thailand Science Research and Innovation under the project IRN62W0007 and University of Phayao, Thailand. D. Yambangwai would like to thank the Thailand Science Research and Innovation Fund and the University of Phayao (Grant No. FF64-UoE002).

Funding

Chiang Mai University, Thailand.

Author information


Contributions

The authors equally conceived the study, participated in its design and coordination, drafted the manuscript, participated in the sequence alignment, and read and approved the final manuscript.

Corresponding author

Correspondence to Watcharaporn Cholamjiak.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Suantai, S., Yambangwai, D. & Cholamjiak, W. Solving common nonmonotone equilibrium problems using an inertial parallel hybrid algorithm with Armijo line search with applications to image recovery. Adv Differ Equ 2021, 410 (2021). https://doi.org/10.1186/s13662-021-03565-9

