

An inertial parallel algorithm for a finite family of G-nonexpansive mappings with application to the diffusion problem

Abstract

For finding a common fixed point of a finite family of G-nonexpansive mappings, we implement a new parallel algorithm based on the Ishikawa iteration process with the inertial technique. We obtain the weak convergence theorem of this algorithm in Hilbert spaces endowed with a directed graph by assuming certain control conditions. Furthermore, numerical experiments on the diffusion problem demonstrate that the proposed approach outperforms well-known approaches.

1 Introduction

In the literature of metric fixed point theory, the Banach contraction principle is well known. Many mathematicians have improved and generalized this principle in various ways; see [2, 11, 14, 21, 23–25, 27, 30]. Browder [10], using Banach's result, proved a strong convergence theorem for an implicit iteration for nonexpansive mappings in a Hilbert space. Later on, Halpern [18] applied Browder's convergence theorem to establish a strong convergence theorem for an explicit iteration for such mappings in a Hilbert space. In 2008, Jachymski [22] was the first to prove a generalization of the Banach contraction principle in a complete metric space endowed with a directed graph by combining ideas from fixed point theory and graph theory. Then, in 2012, Aleomraninejad et al. [1] proposed several iterative procedures for G-contraction and G-nonexpansive mappings in Banach spaces involving a directed graph. In Hilbert spaces involving a directed graph, analogues of the results of Browder and Halpern were provided by Tiammee et al. [40] in 2015. Next, Tripak [41] in 2016 studied a two-step iteration process, called the Ishikawa iteration process, and used this scheme to prove weak and strong convergence theorems for estimating common fixed points of G-nonexpansive mappings in a uniformly convex Banach space involving a directed graph. Subsequently, numerous research studies have been conducted on two- and three-step iteration processes under conditions similar to those of Tripak [41]; see [32, 36, 38, 42].

On the other hand, inertial extrapolation, initially introduced by Polyak [29] as an acceleration technique based on the heavy ball method for a second-order-in-time dynamical system, has recently been applied to solve a variety of convex minimization problems. Inertial-type processes use two iterative steps, with the new iterate derived from the preceding two iterates. These methods are regarded as an effective technique for accelerating a variety of iterative algorithms, especially projection-based algorithms; see [3, 6, 26, 34, 35, 39, 46]. Within the forward–backward splitting framework, Beck and Teboulle [9] proposed the so-called fast iterative shrinkage-thresholding algorithm (FISTA), which cleverly incorporates the ideas of Polyak [29], Nesterov [28], and Güler [17]. FISTA has become a standard algorithm because it can be used to solve a wide range of practical problems in sparse signal recovery, image processing, and machine learning.
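As a brief illustration of this idea (not taken from the paper; the operator `T`, the parameter `theta`, and the toy contraction below are generic placeholders), a plain inertial iteration can be sketched as follows.

```python
import numpy as np

def inertial_iteration(T, x0, x1, theta=0.5, num_iters=50):
    """Generic inertial iteration: extrapolate from the last two
    iterates, then apply the operator T to the extrapolated point."""
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for _ in range(num_iters):
        w = x + theta * (x - x_prev)   # inertial extrapolation
        x_prev, x = x, T(w)            # apply T to the extrapolated point
    return x

# Toy example: T is a contraction with fixed point 0.
T = lambda v: 0.5 * v
print(inertial_iteration(T, np.array([1.0]), np.array([0.9])))
```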

To approximate common fixed points of a finite family of quasi ϕ-nonexpansive mappings in a Banach space, Anh and Hieu [4, 5] proposed a parallel monotone hybrid method. Recently, Yambangwai et al. [44] applied the parallel monotone hybrid method to construct an algorithm for solving common variational inclusion problems in a Hilbert space. Further findings concerning parallel approaches to fixed point and related problems have been published; see [12, 13, 19, 20, 37].

In this article, we develop a new parallel algorithm based on the Ishikawa iteration process with the inertial technique and prove a weak convergence theorem for estimating common fixed points of a finite family of G-nonexpansive mappings under some control conditions in Hilbert spaces endowed with a directed graph. Moreover, we compare the proposed method with well-known methods for solving the diffusion problem.

2 Preliminaries

In this part, we recall several concepts and results that will be used for our new technique. The set of fixed points of \(\mathcal{M}\) is denoted by \(\operatorname{Fix}(\mathcal{M})\), that is, \(\operatorname{Fix}(\mathcal{M}) = \{x : \mathcal{M}x=x\}\).

Definition 2.1

A metric space \(\mathcal{X}\) is said to be endowed with a transitive directed graph G if \(G = (V (G), E(G))\) is a directed graph such that the following hold:

(i) G is transitive, that is, for any \(u,v,z\in V(G)\),
$$ (u, v), (v, z)\in E(G)\quad \Rightarrow \quad (u, z)\in E(G);$$

(ii) the set of vertices \(V (G)\) coincides with \(\mathcal{X}\);

(iii) the set of edges \(E(G)\) contains the diagonal of \(\mathcal{X}\times \mathcal{X}\), that is, \(\{(x,x) : x \in \mathcal{X}\}\subseteq E(G)\);

(iv) \(E(G)\) contains no parallel edges.

Definition 2.2

Let C be a nonempty subset of a Hilbert space \(\mathcal{H}\) and \(G=(V(G),E(G))\) be a directed graph such that \(V(G)=C\). A mapping \(\mathcal{M}\) on C is said to be G-nonexpansive if for each \(u, v\in C\) the following hold:

(i) \(\mathcal{M}\) is edge-preserving, i.e.,
$$ (u, v) \in E(G) \quad \Rightarrow \quad (\mathcal{M}u, \mathcal{M}v) \in E(G); $$

(ii) \(\mathcal{M}\) does not increase the weights of edges of G, i.e.,
$$ (u, v) \in E(G)\quad \Rightarrow \quad \Vert \mathcal{M}u - \mathcal{M}v \Vert \leq \Vert u - v \Vert . $$
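As a concrete (illustrative) instance of Definitions 2.1 and 2.2, not taken from the paper, one can generate the edge set from the coordinatewise order on a few points of \(\mathbb{R}^{2}\), which is transitive and contains the diagonal, and check both conditions of Definition 2.2 numerically for the map \(\mathcal{M}x = x/2\):

```python
import numpy as np

# Vertices: a few points in R^2; edges: pairs (u, v) with u <= v coordinatewise,
# a relation that is transitive and contains the diagonal (Definition 2.1 (i)-(iii)).
V = [np.array(p, dtype=float) for p in [(0, 0), (1, 0), (1, 1), (2, 2)]]
E = [(u, v) for u in V for v in V if np.all(u <= v)]

# M(x) = x / 2 preserves the order (hence the edges) and halves distances,
# so it satisfies both conditions of Definition 2.2 on this graph.
M = lambda x: x / 2.0

for (u, v) in E:
    assert np.all(M(u) <= M(v))                                   # edge-preserving
    assert np.linalg.norm(M(u) - M(v)) <= np.linalg.norm(u - v)   # nonexpansive on edges
print("checked", len(E), "edges")
```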

Lemma 2.3

([7])

Let \(\{\sigma _{n}\}\) and \(\{\delta _{n}\}\) be nonnegative sequences of real numbers satisfying \(\sum_{n=1}^{\infty } \delta _{n}<\infty \) and \(\sigma _{n+1}\leq \sigma _{n}+\delta _{n}\). Then \(\{\sigma _{n}\}\) is a convergent sequence.

Lemma 2.4

([8, Opial])

Let Ω be a nonempty subset of \(\mathcal{H}\) and \(\{\chi _{n}\}\) be a sequence in \(\mathcal{H}\). Suppose that the following assertions hold:

(i) for every \(\rho \in \Omega \), the sequence \(\lbrace \|\chi _{n}-\rho \| \rbrace \) converges;

(ii) every weak sequential cluster point of \(\{\chi _{n}\}\) belongs to Ω.

Then \(\{\chi _{n}\}\) converges weakly to a point in Ω.

Definition 2.5

([36])

Let \(G=(V(G),E(G))\) be a directed graph and \(A\subseteq V(G)\). For \(v\in V(G)\), we say that

(i) A is dominated by v if \((v,a)\in E(G)\) for all \(a\in A\);

(ii) A dominates v if \((a,v)\in E(G)\) for each \(a\in A\).

Lemma 2.6

([33])

Let C be a nonempty, closed, and convex subset of a Hilbert space \(\mathcal{H}\) and \(G=(V(G),E(G))\) be a directed graph such that \(V(G)=C\). Let \(\mathcal{M} : C\rightarrow C\) be a G-nonexpansive mapping and \(\{u_{n}\}\) be a sequence in C such that \(u_{n}\rightharpoonup u\) for some \(u\in C\). If there exists a subsequence \(\{u_{n_{k}}\}\) of \(\{u_{n}\}\) such that \((u_{n_{k}},u)\in E(G)\) for all \(k\in \mathbb{N}\) and \(\{u_{n}-\mathcal{M}u_{n}\}\rightarrow v\) for some \(v\in \mathcal{H}\), then \((I-\mathcal{M})u=v\).

3 Main results

In this part, we construct a novel parallel scheme to find a common fixed point of a finite family of G-nonexpansive mappings based on the inertial Ishikawa iteration process. For all \(i=1, 2,\dots , N\), the following assumptions are maintained.

Assumption 1

\(\mathcal{H}\) is a real Hilbert space endowed with a transitive directed graph G such that \(E(G)\) is convex.

Assumption 2

\(T^{i}: \mathcal{H} \to \mathcal{H}\) is a G-nonexpansive mapping such that \(\mathbb{F} := \bigcap_{i=1}^{N} \operatorname{Fix}(T^{i}) \neq \emptyset \).

Assumption 3

\(\{\alpha _{n}^{i}\}, \{\beta _{n}^{i}\}\subset [0, 1]\) satisfy \(\liminf_{n\to \infty }\alpha _{n}^{i}>0\) and \(0<\liminf_{n\to \infty }\beta _{n}^{i}\leq \limsup_{n \to \infty }\beta _{n}^{i}<1\), and \(\{\vartheta _{n}\}\subset [0, \vartheta )\) for some \(\vartheta >0\).

Next, the algorithm is presented.

Algorithm \((\star )\)

Given \(\chi _{0}, \chi _{1}\in \mathcal{H}\), for \(n\geq 1\) compute

$$ \begin{aligned} &\omega _{n} = \chi _{n}+\vartheta _{n}(\chi _{n}-\chi _{n-1}), \\ &\psi _{n}^{i} = \bigl(1-\beta _{n}^{i}\bigr)\omega _{n}+\beta _{n}^{i} T^{i}\omega _{n},\quad i=1,2,\ldots,N, \\ &\zeta _{n}^{i} = \bigl(1-\alpha _{n}^{i}\bigr)\omega _{n}+\alpha _{n}^{i} T^{i}\psi _{n}^{i},\quad i=1,2,\ldots,N, \\ &\chi _{n+1} = \zeta _{n}^{i_{n}},\quad \text{where } i_{n}=\operatorname*{arg\,max}_{1\leq i\leq N} \bigl\Vert \zeta _{n}^{i}-\omega _{n} \bigr\Vert . \end{aligned} $$

With Algorithm \((\star )\), we are now ready for the main convergence theorem.
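To make the parallel structure explicit, the following is a minimal NumPy sketch of one reading of Algorithm \((\star )\); the function name, the toy mappings, and the way the control sequences are supplied are illustrative only.

```python
import numpy as np

def inertial_parallel_ishikawa(T, x0, x1, alpha, beta, theta, max_iter=200, tol=1e-10):
    """Sketch of Algorithm (*): T is a list of N mappings R^d -> R^d,
    alpha(n, i), beta(n, i), theta(n) supply the control sequences."""
    chi_prev, chi = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, max_iter + 1):
        omega = chi + theta(n) * (chi - chi_prev)            # inertial step
        zeta = []
        for i, Ti in enumerate(T):                           # parallel over i = 1,...,N
            psi = (1 - beta(n, i)) * omega + beta(n, i) * Ti(omega)
            zeta.append((1 - alpha(n, i)) * omega + alpha(n, i) * Ti(psi))
        i_n = max(range(len(T)), key=lambda i: np.linalg.norm(zeta[i] - omega))
        chi_prev, chi = chi, zeta[i_n]                       # keep the farthest intermediate iterate
        if np.linalg.norm(chi - chi_prev) < tol:
            break
    return chi

# Toy usage: two affine contractions sharing the fixed point 0.
T = [lambda v: 0.5 * v, lambda v: 0.8 * v]
sol = inertial_parallel_ishikawa(T, np.ones(3), 0.9 * np.ones(3),
                                 alpha=lambda n, i: 0.9, beta=lambda n, i: 0.9,
                                 theta=lambda n: min(0.035, 1.0 / n**2))
print(sol)
```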

Theorem 3.1

Assume that Assumptions 1–3 are true and that the following criteria are met:

(i) \(\{\omega _{n}\}\) is dominated by ρ and \(\{\omega _{n}\} \) dominates ρ for all \(\rho \in \mathbb{F}\);

(ii) if there exists a subsequence \(\{\omega _{n_{k}}\} \) of \(\{\omega _{n}\} \) such that \(\omega _{n_{k}}\rightharpoonup \mu \in \mathcal{H}\), then \((\omega _{n_{k}} , \mu ) \in E(G) \);

(iii) \(\sum_{n=1}^{\infty }\vartheta _{n}\|\chi _{n}-\chi _{n-1} \|<\infty \).

Then the sequence \(\{\chi _{n}\} \) generated by Algorithm \((\star )\) weakly converges to an element in \(\mathbb{F}\).

Proof

Let \(\rho \in \mathbb{F}\). From condition (i), we obtain \((\omega _{n}, \rho ), (\rho , \omega _{n}) \in E(G)\). Then \((T^{i} \omega _{n}, \rho )\in E(G)\) because \(T^{i}\) is edge-preserving for all \(i = 1,2,\ldots,N\). By the definition of \(\psi _{n}^{i}\) and the convexity of \(E(G)\), we have \((\psi _{n}^{i},\rho )\in E(G)\) for all \(i = 1,2,\ldots,N\). For all \(i = 1,2,\ldots,N\), since the mapping \(T^{i}\) is G-nonexpansive, we have

$$\begin{aligned} \bigl\Vert \zeta _{n}^{i}-\rho \bigr\Vert &= \bigl\Vert \bigl(1-\alpha _{n}^{i}\bigr) ( \omega _{n}-\rho )+\alpha _{n}^{i} \bigl(T^{i} \psi _{n}^{i}-\rho \bigr) \bigr\Vert \\ &\leq \bigl(1-\alpha _{n}^{i}\bigr) \Vert \omega _{n}-\rho \Vert + \alpha _{n}^{i} \bigl\Vert T^{i}\psi _{n}^{i}-\rho \bigr\Vert \\ &\leq \bigl(1-\alpha _{n}^{i}\bigr) \Vert \omega _{n}-\rho \Vert +\alpha _{n}^{i} \bigl\Vert \psi _{n}^{i}-\rho \bigr\Vert \\ &= \bigl(1-\alpha _{n}^{i}\bigr) \Vert \omega _{n}-\rho \Vert +\alpha _{n}^{i} \bigl\Vert \bigl(1-\beta _{n}^{i}\bigr) (\omega _{n}-\rho )+\beta _{n}^{i} \bigl(T^{i} \omega _{n}-\rho \bigr) \bigr\Vert \\ &\leq \bigl(1-\alpha _{n}^{i}\bigr) \Vert \omega _{n}-\rho \Vert + \alpha _{n}^{i} \bigl\{ \bigl(1-\beta _{n}^{i}\bigr) \Vert \omega _{n}-\rho \Vert +\beta _{n}^{i} \bigl\Vert T^{i} \omega _{n}-\rho \bigr\Vert \bigr\} \\ &\leq \Vert \omega _{n}-\rho \Vert \\ &\leq \Vert \chi _{n}-\rho \Vert + \vartheta _{n} \Vert \chi _{n}- \chi _{n-1} \Vert . \end{aligned}$$

This implies that \(\Vert \chi _{n+1}-\rho \Vert \leq \Vert \chi _{n}-\rho \Vert + \vartheta _{n}\|\chi _{n}-\chi _{n-1}\|\). From Lemma 2.3 and condition (iii), we deduce that \(\lim_{n\rightarrow \infty }\|\chi _{n}-\rho \|\) exists. In particular, \(\{\chi _{n}\}\) is bounded, and so are \(\{\omega _{n}\}\), \(\{\psi _{n}^{i}\}\), and \(\{\zeta _{n}^{i}\}\) for all \(i = 1,2,\ldots,N\). Using the identity \(\|(1-t)a+tb\|^{2}=(1-t)\|a\|^{2}+t\|b\|^{2}-t(1-t)\|a-b\|^{2}\) and the inequality \(\|a+b\|^{2}\leq \|a\|^{2}+2\langle b, a+b\rangle \), valid in \(\mathcal{H}\), we obtain, for all \(i = 1,2,\ldots,N\),

$$\begin{aligned} \bigl\Vert \zeta _{n}^{i}-\rho \bigr\Vert ^{2}&\leq \bigl(1-\alpha _{n}^{i} \bigr) \Vert \omega _{n}- \rho \Vert ^{2}+\alpha _{n}^{i} \bigl\Vert T^{i}\psi _{n}^{i}-\rho \bigr\Vert ^{2} \\ &\leq \bigl(1-\alpha _{n}^{i}\bigr) \Vert \omega _{n}-\rho \Vert ^{2}+\alpha _{n}^{i} \bigl\Vert \psi _{n}^{i}-\rho \bigr\Vert ^{2} \\ &\leq \bigl(1-\alpha _{n}^{i}\bigr) \Vert \omega _{n}-\rho \Vert ^{2} \\ &\quad {}+ \alpha _{n}^{i} \bigl\{ \bigl(1-\beta _{n}^{i}\bigr) \Vert \omega _{n}-\rho \Vert ^{2} +\beta _{n}^{i} \bigl\Vert T^{i} \omega _{n}-\rho \bigr\Vert ^{2}- \beta _{n}^{i}\bigl(1- \beta _{n}^{i} \bigr) \bigl\Vert T^{i} \omega _{n}-\omega _{n} \bigr\Vert ^{2} \bigr\} \\ &\leq \Vert \omega _{n}-\rho \Vert ^{2}-\alpha _{n}^{i}\beta _{n}^{i}\bigl(1- \beta _{n}^{i}\bigr) \bigl\Vert T^{i} \omega _{n}-\omega _{n} \bigr\Vert ^{2} \\ &\leq \Vert \chi _{n}-\rho \Vert ^{2}+2\vartheta _{n}\langle \chi _{n}-\chi _{n-1}, \omega _{n}-\rho \rangle -\alpha _{n}^{i}\beta _{n}^{i}\bigl(1-\beta _{n}^{i} \bigr) \bigl\Vert T^{i} \omega _{n}-\omega _{n} \bigr\Vert ^{2}. \end{aligned}$$
(3.1)

It follows that there are \(i_{n}\in \{1,2,\ldots,N\}\) and \(\bar{W}_{1}>0\) such that

$$ \alpha _{n}^{i_{n}}\beta _{n}^{i_{n}} \bigl(1-\beta _{n}^{i_{n}}\bigr) \bigl\Vert T^{i_{n}} \omega _{n}-\omega _{n} \bigr\Vert ^{2}\leq \Vert \chi _{n}-\rho \Vert ^{2}- \Vert \chi _{n+1}- \rho \Vert ^{2}+ \bar{W}_{1}\vartheta _{n} \Vert \chi _{n}- \chi _{n-1} \Vert . $$

From Assumption 3 and condition (iii), together with the existence of \(\lim_{n\rightarrow \infty }\|\chi _{n}-\rho \|\), we obtain

$$ \lim_{n\rightarrow \infty } \bigl\Vert T^{i_{n}} \omega _{n}-\omega _{n} \bigr\Vert =0. $$
(3.2)

Since \((\psi _{n}^{i_{n}}, \rho )\) and \((\rho , \omega _{n})\) are in \(E(G)\), the transitivity of G gives \((\psi _{n}^{i_{n}}, \omega _{n})\in E(G)\). Applying (3.2) together with the definitions of \(\chi _{n+1}\) and \(\psi _{n}^{i_{n}}\), we obtain

$$\begin{aligned} \Vert \chi _{n+1}-\omega _{n} \Vert &=\alpha _{n}^{i_{n}} \bigl\Vert T^{i_{n}}\psi _{n}^{i_{n}}- \omega _{n} \bigr\Vert \\ &\leq \bigl\Vert T^{i_{n}}\psi _{n}^{i_{n}}-T^{i_{n}} \omega _{n} \bigr\Vert + \bigl\Vert T^{i_{n}} \omega _{n}-\omega _{n} \bigr\Vert \\ &\leq \bigl\Vert \psi _{n}^{i_{n}}-\omega _{n} \bigr\Vert + \bigl\Vert T^{i_{n}}\omega _{n}- \omega _{n} \bigr\Vert \\ &\leq 2 \bigl\Vert T^{i_{n}}\omega _{n}-\omega _{n} \bigr\Vert \to 0 \quad \text{as } n\to \infty . \end{aligned}$$

Again, since \(\Vert \zeta _{n}^{i}-\omega _{n}\Vert \leq \Vert \zeta _{n}^{i_{n}}-\omega _{n}\Vert = \Vert \chi _{n+1}-\omega _{n}\Vert \) by the definition of \(\chi _{n+1}\), we deduce, for all \(i = 1,2,\ldots,N\),

$$ \lim_{n\rightarrow \infty } \bigl\Vert \zeta _{n}^{i}-\omega _{n} \bigr\Vert =0. $$
(3.3)

From inequality (3.1), we have, for all \(i=1,2,\ldots,N\),

$$ \alpha _{n}^{i}\beta _{n}^{i} \bigl(1-\beta _{n}^{i}\bigr) \bigl\Vert T^{i} \omega _{n}- \omega _{n} \bigr\Vert ^{2}\leq \Vert \omega _{n}-\rho \Vert ^{2}- \bigl\Vert \zeta _{n}^{i}- \rho \bigr\Vert ^{2}\leq \bar{W}_{2} \bigl\Vert \zeta _{n}^{i}-\omega _{n} \bigr\Vert $$

for some \(\bar{W}_{2}>0\). This combined with equation (3.3) and Assumption 3 leads to, for all \(i=1,2,\ldots,N\),

$$ \lim_{n\rightarrow \infty } \bigl\Vert T^{i}\omega _{n}-\omega _{n} \bigr\Vert =0. $$
(3.4)

Next, let ρ̄ be a weak sequential cluster point of \(\{\omega _{n}\}\). Applying Lemma 2.6 to equation (3.4) with condition (ii), we deduce that \(\bar{\rho }\in \mathbb{F}\). Finally, since \(\lim_{n\rightarrow \infty }\vartheta _{n}\|\chi _{n}- \chi _{n-1}\|=0\), the sequences \(\{\chi _{n}\}\) and \(\{\omega _{n}\}\) have the same weak sequential cluster points, and by Opial's lemma (Lemma 2.4) we can conclude that \(\{\chi _{n}\}\) weakly converges to an element in \(\mathbb{F}\). □

Additionally, we provide the following theorem for a family of G-nonexpansive mappings in a Hilbert space.

Theorem 3.2

Assume that \(\sum_{n=1}^{\infty }\vartheta _{n}\|\chi _{n}-\chi _{n-1} \|<\infty \) and Assumption 3 is true. Let \(T^{i}\), \(i = 1,2,\ldots,N\), be a family of nonexpansive mappings on a real Hilbert space \(\mathcal{H}\) such that \(\mathbb{F}\neq \emptyset \). Then the sequence \(\{\chi _{n}\} \) generated by Algorithm \((\star )\) weakly converges to an element in \(\mathbb{F}\).

Proof

This proof is analogous to the proof of Theorem 3.1. □

4 Differential problems

Let us consider the following simple and well-known one-dimensional diffusion problem with Dirichlet boundary conditions and initial data:

$$\begin{aligned}& u_{t} = \beta u_{xx} + f(x,t), \quad 0< x< l, t>0, \\& u(x,0)=u_{0}(x),\quad 0< x< l, \\& u(0,t)=\gamma _{1}(t),\qquad u(l,t)=\gamma _{2}(t),\quad t>0, \end{aligned}$$
(4.1)

where β is a constant, \(u(x,t)\) represents the temperature at the point \((x,t)\), and \(f(x,t)\), \(\gamma _{1}(t)\), \(\gamma _{2}(t)\) are sufficiently smooth functions. In what follows, we use the notations \(u^{n}_{i}\) and \((u_{xx})^{n}_{i}\) to represent the numerical approximations of \(u( x_{i}, t^{n})\) and \(u_{xx}(x_{i}, t^{n})\), respectively, where \(t^{n}=n\Delta t\) and Δt denotes the temporal mesh size. A set of schemes for solving problem (4.1) is based on the following well-known Crank–Nicolson type scheme [43, 45]:

$$ \frac{u^{n+1}_{i}-u^{n}_{i}}{\Delta t} = \frac{\beta }{2} \bigl[(u_{xx})^{n+1}_{i}+(u_{xx})^{n}_{i} \bigr] +f^{n+1/2}_{i}, \quad i=2,\ldots ,N-1, $$
(4.2)

with the initial data

$$ u^{0}_{i} = u_{0}(x_{i}), \quad i =1, \ldots ,N, $$
(4.3)

and the Dirichlet boundary conditions

$$ u^{n+1}_{1}=\gamma _{1} \bigl(t^{n+1}\bigr),\qquad u^{n+1}_{N}=\gamma _{2}\bigl(t^{n+1}\bigr). $$
(4.4)

The matrix form of the second-order finite difference scheme (FDS) in solving diffusion problem (4.1) can be written as

$$ A \mathbf{u}^{n+1} = \mathbf{G}^{n}, $$
(4.5)

where \(\mathbf{G}^{n} = B \mathbf{u}^{n} + \mathbf{f}^{n+1/2}\),

$$\begin{aligned}& A = \begin{bmatrix} 1 + \eta &-\frac{\eta }{2} & & & \\ -\frac{\eta }{2} &1 + \eta &-\frac{\eta }{2} & & \\ &\ddots &\ddots &\ddots & \\ & &-\frac{\eta }{2} &1 + \eta &-\frac{\eta }{2} \\ & & &-\frac{\eta }{2} &1 + \eta \end{bmatrix}, \\& B = \begin{bmatrix} 1 - \eta &\frac{\eta }{2} & & & \\ \frac{\eta }{2} &1 - \eta &\frac{\eta }{2} & & \\ &\ddots &\ddots &\ddots & \\ & &\frac{\eta }{2} &1 - \eta &\frac{\eta }{2} \\ & & &\frac{\eta }{2} &1 - \eta \end{bmatrix}, \\& \mathbf{u}^{n} = \begin{bmatrix} u^{n}_{2} \\ u^{n}_{3} \\ \vdots \\ u^{n}_{N-2} \\ u^{n}_{N-1} \end{bmatrix}, \qquad \mathbf{f}^{n+1/2} = \begin{bmatrix} \frac{\eta }{2}\gamma ^{n+1/2}_{1} + \Delta t f^{n+1/2}_{2} \\ \Delta t f^{n+1/2}_{3} \\ \vdots \\ \Delta t f^{n+1/2}_{N-2} \\ \frac{\eta }{2}\gamma ^{n+1/2}_{2} + \Delta t f^{n+1/2}_{N-1} \end{bmatrix}, \end{aligned}$$

\(\eta =\beta \Delta t/ (\Delta x^{2} )\), where Δx is the spatial mesh size, \(\gamma ^{n+1/2}_{i} = \gamma _{i}(t^{n+1/2})\), \(i = 1,2\), and \(f^{n+1/2}_{i}=f_{i}(t^{n+1/2})\), \(i = 2, \ldots , N-1\). From equation (4.5), the matrix A is square and symmetric positive definite. Iterative methods are traditionally used to solve linear system (4.5). The well-known weighted Jacobi (WJ) and successive over-relaxation (SOR) methods [16, 43] are chosen as examples here (see Table 1).

Table 1 The specific name of WJ and SOR in solving linear system (4.5)

Here ω is the weight parameter, D is the diagonal part of the matrix A, and L is the lower triangular part of the matrix \(D - A\). For the implementation of WJ and SOR, the selection rule for the weight parameter ω and the optimal parameter \(\omega _{o}\) requires the smallest and largest eigenvalues of the matrix A; the calculation of these eigenvalues can be found in [15, 31]. Since the stability of the WJ and SOR methods for linear system (4.5) derives from the discretization of the considered problem (4.1), the time step size plays an important role in achieving the required stability. A discussion of the stability of WJ and SOR for linear system (4.5) can be found in [16, 43].
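For reference, a sketch of how the matrices A, B and the right-hand side of (4.5) can be assembled is given below; the function names are illustrative, and the boundary contribution follows the form of \(\mathbf{f}^{n+1/2}\) displayed above.

```python
import numpy as np

def assemble_cn(N, dx, dt, beta):
    """Tridiagonal Crank-Nicolson matrices A and B of (4.5) for the
    N-2 interior nodes, with eta = beta*dt/dx**2."""
    eta, m = beta * dt / dx**2, N - 2
    off = np.full(m - 1, eta / 2)
    A = np.diag(np.full(m, 1 + eta)) - np.diag(off, 1) - np.diag(off, -1)
    B = np.diag(np.full(m, 1 - eta)) + np.diag(off, 1) + np.diag(off, -1)
    return A, B, eta

def rhs(B, u_interior, f_half, gamma1_half, gamma2_half, eta, dt):
    """G^n = B u^n + f^{n+1/2}, with the boundary data folded into the
    first and last entries as in the vector f^{n+1/2} above."""
    g = B @ u_interior + dt * f_half
    g[0] += (eta / 2) * gamma1_half
    g[-1] += (eta / 2) * gamma2_half
    return g
```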

Let us consider the linear system

$$ A \mathbf{u}^{n+1} = \mathbf{G}^{n}, $$
(4.6)

where \(A : \mathbb{R}^{N-2} \rightarrow \mathbb{R}^{N-2}\) is a linear and positive operator. Then linear system (4.6) has a unique solution. To find the solution of linear system (4.6), we manipulate this linear system into the form of a fixed point equation:

$$ T^{i} \mathbf{u}^{n+1} = \mathbf{u}^{n+1}, \quad \forall i = 1,2, \ldots , M. $$
(4.7)

Suppose that the solution of linear system (4.6) is the common solution of the fixed point equations (4.7). We can then apply our new inertial parallel algorithm to find this common solution by using the G-nonexpansive mappings \(T^{i}\), \(i = 1,2, \ldots , M\). The sequence \(\{\mathbf{u}^{(n,s)}\}\), \(s \in \mathbb{N}\), is generated iteratively from two initial data \(\mathbf{u}^{(n,1)}, \mathbf{u}^{(n,2)} \in \mathbb{R}^{N-2}\) by

$$ \begin{aligned} &\mathbf{t}^{(n,s+1)} = \mathbf{u}^{(n,s+1)} + \vartheta _{n}\bigl( \mathbf{u}^{(n,s+1)} - \mathbf{u}^{(n,s)}\bigr), \\ &\mathbf{v}_{i}^{(n,s+1)} = \bigl(1 - \beta ^{i}_{n}\bigr)\mathbf{t}^{(n,s+1)} + \beta ^{i}_{n} T^{i}\mathbf{t}^{(n,s+1)}, \\ &\mathbf{w}_{i}^{(n,s+1)} = \bigl(1 - \alpha ^{i}_{n}\bigr)\mathbf{t}^{(n,s+1)} + \alpha ^{i}_{n} T^{i}\mathbf{v}_{i}^{(n,s+1)}, \\ &\mathbf{u}^{(n,s+2)} = \mathbf{w}_{i_{s}}^{(n,s+1)}, \quad \text{where } i_{s} = \operatorname*{arg\,max}_{1\leq i\leq M} \bigl\Vert \mathbf{w}_{i}^{(n,s+1)} - \mathbf{t}^{(n,s+1)} \bigr\Vert ,\ s\geq 1, \end{aligned} $$
(4.8)

where the second superscript “s” denotes the iteration number, \(s=1,2,\ldots ,\widehat{S}_{n}\), and \(\{\alpha ^{i}_{n}\}\), \(\{\beta ^{i}_{n}\}\) are appropriate real sequences in \([0, 1]\). The following stopping criterion is used:

$$ \bigl\Vert \mathbf{u}^{n+1,\widehat{S}_{n}+1} -\mathbf{u}^{n+1,\widehat{S}_{n}} \bigr\Vert _{\infty }< \epsilon , $$

where “\(\widehat{S}_{n}\)” denotes the number of the last iteration at time \(t^{n}\); after that we set \(\mathbf{u}^{(n,1)} = \mathbf{u}^{(n+1,\widehat{S}_{n})}\), \(\mathbf{u}^{(n,2)} = \mathbf{u}^{(n+1,\widehat{S}_{n}+1)}\).
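A compact sketch of the inner iteration (4.8) at one time level, including the argmax selection and the stopping rule, might look as follows; it assumes the reading of the indices given above, and all names are illustrative.

```python
import numpy as np

def solve_time_level(T, u1, u2, alpha, beta, theta, eps=1e-10, max_iter=500):
    """Inner iteration (4.8): T is a list of M fixed-point operators for
    A u = G^n (e.g. T^WJ, T^SOR); u1, u2 are the two starting vectors."""
    u_prev, u = np.asarray(u1, float), np.asarray(u2, float)
    for s in range(1, max_iter + 1):
        t = u + theta(s, u, u_prev) * (u - u_prev)            # inertial extrapolation
        w = []
        for i, Ti in enumerate(T):                            # parallel over the M operators
            v = (1 - beta(s, i)) * t + beta(s, i) * Ti(t)
            w.append((1 - alpha(s, i)) * t + alpha(s, i) * Ti(v))
        i_s = max(range(len(T)), key=lambda i: np.linalg.norm(w[i] - t))
        u_prev, u = u, w[i_s]
        if np.linalg.norm(u - u_prev, ord=np.inf) < eps:      # stopping criterion
            return u, s
    return u, max_iter
```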

There are many different ways of rearranging linear system (4.6) into the form of fixed point equation (4.7). For example, the well-known weighted Jacobi (WJ), successive over-relaxation (SOR), and Gauss–Seidel (GS, i.e., SOR with \(\omega = 1\)) methods [16, 43, 45] rewrite linear system (4.6) in fixed point form as \(\mathbf{u}^{n+1} = T^{\mathrm{WJ}}\mathbf{u}^{n+1}\), \(\mathbf{u}^{n+1} = T^{\mathrm{SOR}}\mathbf{u}^{n+1}\), and \(\mathbf{u}^{n+1} = T^{\mathrm{GS}}\mathbf{u}^{n+1}\), respectively (see Table 2).

Table 2 Different ways of rearranging linear system (4.5) into the form \(x = T(x)\)

Note that if \(T\mathbf{x} = S\mathbf{x} + \mathbf{c}\), where \(S: \mathbb{R}^{m}\rightarrow \mathbb{R}^{m}\) is linear with \(\|S \|< 1\) and \(\mathbf{c} \in \mathbb{R}^{m}\), then \(\|T\mathbf{x} - T\mathbf{y} \|= \| S\mathbf{x} - S\mathbf{y} \| \leq \| S \| \|\mathbf{x} - \mathbf{y} \| < \| \mathbf{x} - \mathbf{y} \|\) for all \(\mathbf{x}, \mathbf{y}\in \mathbb{R}^{m}\) with \(\mathbf{x}\neq \mathbf{y}\), so T is a G-nonexpansive mapping. Hence, to ensure that the operators \(T^{\mathrm{WJ}}\) and \(T^{\mathrm{SOR}}\), written in the form \(T^{i}\mathbf{x} = S^{i} \mathbf{x} + \mathbf{c}^{i}\), \(i \in \{ \mathrm{WJ}, \mathrm{SOR}\}\), with

$$\begin{aligned}& S^{\mathrm{WJ}} = I - \omega D^{-1} A, \qquad c^{\mathrm{WJ}} = \omega D^{-1} \mathbf{b}, \\& S^{\mathrm{SOR}} = I - \omega ( D - \omega L )^{-1} A, \qquad c^{\mathrm{SOR}} = \omega ( D - \omega L )^{-1} \mathbf{b} \end{aligned}$$

are G-nonexpansive mappings, their weight parameters must be chosen appropriately. The weight parameter ω for the operator S of the WJ and SOR methods is chosen so that its norm is less than one (\(\| S^{i} \| < 1 \)). Moreover, the optimal weight parameter \(\omega _{o}\), which yields the smallest norm for each type of operator S, is indicated in Table 3. These parameters are determined by the largest and smallest eigenvalues of the matrix A.

Table 3 Implemented weight parameter and optimal weight parameter of operator S
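The affine operators of Table 2 and the norm condition \(\| S^{i} \| < 1\) can be sketched as follows; the small matrix at the end is only a toy example, not the matrix of (4.5), and the function names are illustrative.

```python
import numpy as np

def wj_operator(A, b, omega):
    """T^WJ x = S x + c with S = I - omega D^{-1} A, c = omega D^{-1} b."""
    D_inv = np.diag(1.0 / np.diag(A))
    S = np.eye(A.shape[0]) - omega * D_inv @ A
    return (lambda x: S @ x + omega * D_inv @ b), S

def sor_operator(A, b, omega):
    """T^SOR x = S x + c with S = I - omega (D - omega L)^{-1} A, where L is the
    lower triangular part of D - A; omega = 1 gives the GS operator."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)                     # lower triangular part of D - A
    M_inv = np.linalg.inv(D - omega * L)
    S = np.eye(A.shape[0]) - omega * M_inv @ A
    return (lambda x: S @ x + omega * M_inv @ b), S

# The operator is nonexpansive on edges when ||S|| < 1; check the spectral norm
# on a tiny symmetric positive definite example (A, b of (4.5) would be used instead).
A = np.array([[2.0, -0.5], [-0.5, 2.0]])
b = np.array([1.0, 1.0])
for name, (T, S) in {"WJ": wj_operator(A, b, 0.8), "SOR": sor_operator(A, b, 1.2)}.items():
    print(name, "||S||_2 =", np.linalg.norm(S, 2))
```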

Next, the proposed algorithm (4.8) for solving linear system (4.5), generated from the one-dimensional diffusion problem with Dirichlet boundary conditions and initial data (4.1), is compared with the well-known WJ, GS, and SOR methods. For simplicity, the proposed algorithm (4.8) with \(M = 2\) is studied: two G-nonexpansive mappings \(T^{i}\) and \(T^{j}\) are chosen from the three operators \(T^{\mathrm{WJ}}\), \(T^{\mathrm{SOR}}\), and \(T^{\mathrm{GS}}\), and we call the result the proposed algorithm with \(T^{i} {-} T^{j}\).

Let us consider the simple one-dimensional diffusion problem

$$ \begin{aligned} &u_{t} = \beta u_{xx} + 0.4 \beta \bigl(4\pi ^{2} - 1\bigr) e^{-4 \beta t} \cos (4\pi x),\quad 0\leq x\leq 1, 0< t< t_{s}, \\ &u(x,0) = \cos (4\pi x) / 10,\qquad u(0,t)= e^{-4 \beta t}/10,\qquad u(1,t)= e^{-4 \beta t}/10, \\ &\text{with exact solution}\quad u(x,t)= e^{-4 \beta t}\cos (4\pi x) / 10. \end{aligned} $$
(4.9)
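For completeness, the data of test problem (4.9) can be written out as follows (a sketch; β and the grid are parameters, and the last function is the exact solution stated above).

```python
import numpy as np

beta = 25.0

def source(x, t):            # f(x, t) in (4.9)
    return 0.4 * beta * (4 * np.pi**2 - 1) * np.exp(-4 * beta * t) * np.cos(4 * np.pi * x)

def initial(x):              # u(x, 0)
    return np.cos(4 * np.pi * x) / 10

def boundary(t):             # gamma_1(t) = gamma_2(t)
    g = np.exp(-4 * beta * t) / 10
    return g, g

def exact(x, t):             # exact solution used to measure the error
    return np.exp(-4 * beta * t) * np.cos(4 * np.pi * x) / 10

x = np.linspace(0.0, 1.0, 101)                  # uniform grid of 101 nodes
print(np.allclose(initial(x), exact(x, 0.0)))   # consistency check at t = 0
```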

The results of WJ, GS, SOR, and the proposed algorithm with \(M = 2\) are demonstrated and discussed in the following cases:

Case I. WJ method;

Case II. GS method;

Case III. SOR method;

Case IV. the proposed algorithm with \(T^{\mathrm{WJ}} {-} T^{\mathrm{GS}}\);

Case V. the proposed algorithm with \(T^{\mathrm{WJ}} {-} T^{\mathrm{SOR}}\);

Case VI. the proposed algorithm with \(T^{\mathrm{GS}} {-} T^{\mathrm{SOR}}\).

Since we focus on the convergence of the proposed algorithm, the stability analysis for choosing the time step size is not discussed in detail. The time step size for the proposed algorithm is taken as the smaller of the step sizes chosen for the WJ and SOR methods in solving linear system (4.5) generated from the discretization of the considered problem (4.1). All computations are performed on a uniform grid of 101 nodes, which corresponds to linear systems (4.5) of size \(99 \times 99\). The weight parameter ω of the proposed algorithm is set to the optimal weight parameter \(\omega _{o}\) defined in Table 3. We used \(\alpha _{n}^{i}=\beta _{n}^{i}=0.9\), \(\beta = 25\), \(\Delta t=\Delta x^{2}/10\) (time step size), \(\epsilon =10^{-10}\), and

$$ \vartheta _{n}= \textstyle\begin{cases} \min \{ \frac{1}{n^{2} \Vert \mathbf{u}^{(n,s+1)}-\mathbf{u}^{(n,s)} \Vert _{2}}, 0.035 \} &\text{if } \mathbf{u}^{(n,s+1)} \neq \mathbf{u}^{(n,s)} \ \&\ 1\leq n< K, \\ 0.035 &\text{otherwise}, \end{cases} $$

where K is a prescribed number of iterations.
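The rule above can be written as a small function (a sketch; `cap` is the constant 0.035 from the text and `K` is the prescribed iteration number).

```python
import numpy as np

def inertial_parameter(n, u_curr, u_prev, K, cap=0.035):
    """theta_n of the experiments: min{1/(n^2 ||u^(n,s+1) - u^(n,s)||_2), cap}
    while n < K and the iterates differ; otherwise the constant cap."""
    diff = np.linalg.norm(u_curr - u_prev)
    if diff != 0 and 1 <= n < K:
        return min(1.0 / (n**2 * diff), cap)
    return cap
```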

For testing purposes, all computations are performed for \(0 \leq t \leq 0.01\) (note that \(u (x, t) \rightarrow 0\) when \(t \gg 0.05\)). The exact error is measured using \(\|\mathbf{u^{n}} - \mathbf{u}\|_{2}\). Figure 1 shows the approximate solution at \(t = 0.01\) and the approximate error per time step for WJ, GS, SOR, and the proposed algorithm applied to problem (4.9) with \(\beta = 25\).

Figure 1 Approximate solutions and approximate error of GS, WJ, SOR, and all cases of the proposed algorithms for problem (4.1) with \(\beta = 25\) and \(t = 0.01\)

It can be seen from Fig. 1 that all numerical solutions match the analytical solution reasonably well. Figure 2 shows the trend of the number of iterations for WJ, GS, SOR, and the proposed algorithms in solving linear system (4.5) generated from the discretization of the considered problem.

Figure 2 The evolution of the number of iterations for GS, WJ, SOR, and the proposed algorithm for problem (4.1) with \(\beta = 25\) and \(t \in (0,1]\)

Figure 2 shows that the number of iterations for the proposed algorithm with \(T^{\mathrm{WJ}} {-} T^{\mathrm{GS}}\), \(T^{\mathrm{WJ}} {-} T^{\mathrm{SOR}}\), and \(T^{\mathrm{GS}} {-} T^{\mathrm{SOR}}\) is significantly smaller than for the well-known GS, WJ, and SOR methods, and the proposed algorithm with \(T^{\mathrm{GS}} {-} T^{\mathrm{SOR}}\) gives the smallest number of iterations at every time step. However, even though the small number of iterations per time step shows the excellent performance of the proposed method, the stability condition of the proposed algorithm needs to be considered carefully when choosing the time step. Moreover, the effect of the parameter \(\vartheta _{n}\) on the proposed algorithm (4.8) is shown in Fig. 3, where the proposed algorithm (4.8) is run with the following parameter \(\vartheta _{n}\):

$$ \vartheta _{n} = \textstyle\begin{cases} \theta _{n} & \text{if } \mathbf{u}^{(n,s+1)} \neq \mathbf{u}^{(n,s)} \ \& \ 1\leq n< K, \\ \frac{1}{n^{2} \Vert \mathbf{u}^{(n,s+1)}-\mathbf{u}^{(n,s)} \Vert _{2}} & \text{if } \mathbf{u}^{(n,s+1)} \neq \mathbf{u}^{(n,s)} \ \& \ n \geq K, \\ 0.2 & \text{otherwise}, \end{cases} $$

where

Case I. \(\theta _{n} = 0\);

Case II. \(\theta _{n} = \frac{1}{n^{2}}\);

Case III. \(\theta _{n} = \frac{1}{2^{n}}\);

Case IV. \(\theta _{n} = \frac{t_{n}-1}{t_{n+1}}\), where \(t_{1}=1\) and \(t_{n+1}=\frac{1+\sqrt{1+4t_{n}^{2}}}{2}\);

Case V. \(\theta _{n} = 1 - \frac{n}{n+1} \);

Case VI.
$$ \vartheta _{n}= \textstyle\begin{cases} \min \{ \frac{1}{n^{2} \Vert \mathbf{u}^{(n,s+1)}-\mathbf{u}^{(n,s)} \Vert _{2}}, 0.035 \} &\text{if } \mathbf{u}^{(n,s+1)} \neq \mathbf{u}^{(n,s)} \ \& \ 1\leq n< K, \\ 0.035 &\text{otherwise}, \end{cases} $$
where K is a prescribed number of iterations.

Figure 3(a) shows the number of iterations per time step of the proposed algorithm with the parameter \(\vartheta _{n}\) chosen as in Cases I–IV. Figures 3(b) and 3(c) show the number of iterations per time step of the proposed algorithm with the parameter \(\vartheta _{n}\) chosen as in Cases V and VI, respectively.
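The first five choices of \(\theta _{n}\) can be sketched as follows (Case VI is the norm-dependent rule implemented earlier as `inertial_parameter()`; the function name is illustrative).

```python
import numpy as np

def theta_case(case, n):
    """The first five choices of theta_n listed above."""
    if case == 1:
        return 0.0
    if case == 2:
        return 1.0 / n**2
    if case == 3:
        return 1.0 / 2**n
    if case == 4:                      # FISTA-type: t_1 = 1, t_{k+1} = (1 + sqrt(1 + 4 t_k^2))/2
        t = [1.0]
        for _ in range(n):
            t.append((1 + np.sqrt(1 + 4 * t[-1]**2)) / 2)
        return (t[n - 1] - 1) / t[n]
    if case == 5:
        return 1 - n / (n + 1)
    raise ValueError("case must be 1..5")

print([round(theta_case(4, n), 3) for n in range(1, 6)])
```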

Figure 3 The evolution of the number of iterations for GS, WJ, SOR compared with the proposed algorithm using six cases of parameter \(\vartheta _{n}\) for problem (4.1) with \(\beta = 25\) and \(t \in (0,1]\)

The maximum, minimum, and average number of iterations per time step for the proposed algorithm using the six cases of parameter \(\vartheta _{n}\) in solving problem (4.1) with \(\beta = 25\) and \(t \in (0,1]\) (see Fig. 3) are also shown in Table 4.

Table 4 The maximum, minimum, and average number of iterations per time step for the proposed algorithm

From Table 4 and the evolution of the number of iterations in Fig. 3, we see that the proposed algorithm with the parameter \(\vartheta _{n}\) chosen as in Case VI gives the smallest number of iterations at every time step.

5 Conclusion

In summary, we have presented a new parallel algorithm that solves the common fixed point problem for a finite family of G-nonexpansive mappings by combining the Ishikawa iteration process with the inertial technique. In a Hilbert space endowed with a directed graph, our main theorem guarantees that this algorithm converges weakly to an element of the problem's solution set under certain conditions. Additionally, the algorithm is applied to the diffusion problem. In comparison with other well-known methods, such as WJ, GS, and SOR, numerical experiments show that the algorithm reduces the number of iterations.

Availability of data and materials

Contact the authors for data requests.

References

  1. Aleomraninejad, S.M.A., Rezapour, S., Shahzad, N.: Some fixed point results on a metric space with a graph. Topol. Appl. 159(3), 659–663 (2012)


  2. Alghamdi, M.A., Gulyaz-Ozyurt, S., Karapinar, E.: A note on extended Z-contraction. Mathematics 8, 195 (2020)


  3. Alvarez, F., Attouch, H.: An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 9, 3–11 (2001)


  4. Anh, P.K., Hieu, D.V.: Parallel and sequential hybrid methods for a finite family of asymptotically quasi ϕ-nonexpansive mappings. J. Appl. Math. Comput. 48(1), 241–263 (2015)


  5. Anh, P.K., Hieu, D.V.: Parallel hybrid iterative methods for variational inequalities, equilibrium problems, and common fixed point problems. Vietnam J. Math. 44(2), 351–374 (2016)


  6. Attouch, H., Peypouquet, J., Redont, P.: A dynamical approach to an inertial forward–backward algorithm for convex minimization. SIAM J. Optim. 24(1), 232–256 (2014)


  7. Auslender, A., Teboulle, M., Ben-Tiba, S.: A logarithmic–quadratic proximal method for variational inequalities. In: Computational Optimization, vol. 31. Springer, Boston (1999)


  8. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. CMS Books in Mathematics. Springer, New York (2011)


  9. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)


  10. Browder, F.E.: Convergence of approximants to fixed points of nonexpansive nonlinear mappings in Banach spaces. Arch. Ration. Mech. Anal. 24, 82–90 (1967)


  11. Charoensawan, P., Chaobankoh, T.: Best proximity point results for G-proximal Geraghty mappings. Thai J. Math. 18(3), 951–961 (2020)


  12. Cholamjiak, P., Suantai, S., Sunthrayuth, P.: An explicit parallel algorithm for solving variational inclusion problem and fixed point problem in Banach space. Banach J. Math. Anal. 14(1), 20–40 (2020)


  13. Cholamjiak, W., Khan, S.A., Yambangwai, D., Kazmi, K.R.: Strong convergence analysis of common variational inclusion problems involving an inertial parallel monotone hybrid method for a novel application to image restoration. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 114(2), Paper no. 99 (2020). https://doi.org/10.1007/s13398-020-00827-1


  14. Dangskul, S., Suparatulatorn, R.: Global minimization of common best proximity points for generalized cyclic φ-contractions in metric spaces. Thai J. Math. 18(3), 1173–1183 (2020)


  15. El-Mikkawy, M.E.A.: Note on linear systems with positive definite tri-diagonal coefficient matrices. Indian J. Pure Appl. Math. 21(2), 1285–1293 (2002)


  16. Grzegorski, S.M.: On optimal parameter not only for the SOR method. Appl. Comput. Math. 8(5), 82–87 (2019)


  17. Güler, O.: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 29, 403–419 (1991)


  18. Halpern, B.: Fixed points of nonexpanding maps. Proc. Am. Math. Soc. 73, 957–961 (1967)


  19. Hieu, D.V.: Parallel and cyclic hybrid subgradient extragradient methods for variational inequalities. Afr. Math. 28(5), 677–692 (2017)


  20. Hieu, D.V., Muu, L.D., Anh, P.K.: Parallel hybrid extragradient methods for pseudomonotone equilibrium problems and nonexpansive mappings. Numer. Algorithms 73, 197–217 (2016)


  21. Hussain, A., Ali, D., Karapinar, E.: Stability data dependency and errors estimation for a general iteration method. Alex. Eng. J. 60(1), 703–710 (2021)


  22. Jachymski, J.: The contraction principle for mappings on a metric space with a graph. Proc. Am. Math. Soc. 136(4), 1359–1373 (2008)


  23. Karapinar, E.: Couple fixed point theorems for nonlinear contractions in cone metric spaces. Comput. Math. Appl. 59(12), 3656–3668 (2010)


  24. Karapinar, E.: Generalizations of Cariski Kirk’s theorem on partial metric spaces. Fixed Point Theory Appl. 2011, 4 (2011)


  25. Karapinar, E., Erhan, I.M.: Fixed point theorems for operators on partial metric spaces. Appl. Math. Lett. 24, 1894–1899 (2011)


  26. Maingé, P.E.: Regularized and inertial algorithms for common fixed points of nonlinear operators. J. Math. Anal. Appl. 344, 876–887 (2008)


  27. Marino, G., Scardamglia, B., Karapinar, E.: Strong convergence theorem for strict pseudo-contractions in Hilbert spaces. J. Inequal. Appl. 2016, 134 (2016)


  28. Nesterov, Y.: A method for solving the convex programming problem with convergence rate \(O(1/k^{2})\). Dokl. Akad. Nauk SSSR 269(3), 543–547 (1983)


  29. Polyak, B.T.: Some methods of speeding up the convergence of iterative methods. Zh. Vychisl. Mat. Mat. Fiz. 4, 1–17 (1964)


  30. Roldán López de Hierro, A.F., Karapınar, E., Roldán López de Hierro, C., Martínez-Moreno, J.: Coincidence point theorems on metric spaces via simulation functions. J. Comput. Appl. Math. 275, 345–355 (2015)


  31. Sogabe, T.: New algorithms for solving periodic tridiagonal and periodic pentadiagonal linear systems. Appl. Math. Comput. 202, 850–856 (2008)


  32. Sridarat, P., Suparatulatorn, R., Suantai, S., Cho, Y.J.: Convergence analysis of SP-iteration for G-nonexpansive mappings with directed graphs. Bull. Malays. Math. Sci. Soc. 42(5), 2361–2380 (2019)


  33. Suantai, S., Donganont, M., Cholamjiak, W.: Hybrid methods for a countable family of G-nonexpansive mappings in Hilbert spaces endowed with graphs. Mathematics 7, 936 (2019)


  34. Suparatulatorn, R., Charoensawan, P., Poochinapan, K.: Inertial self-adaptive algorithm for solving split feasible problems with applications to image restoration. Math. Methods Appl. Sci. 42(18), 7268–7284 (2019)


  35. Suparatulatorn, R., Charoensawan, P., Poochinapan, K., Dangskul, S.: An algorithm for the split feasible problem and image restoration. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 115, 12 (2021). https://doi.org/10.1007/s13398-020-00942-z


  36. Suparatulatorn, R., Cholamjiak, W., Suantai, S.: A modified S-iteration process for G-nonexpansive mappings in Banach spaces with graphs. Numer. Algorithms 77(2), 479–490 (2018)


  37. Suparatulatorn, R., Suantai, S., Cholamjiak, W.: Hybrid methods for a finite family of G-nonexpansive mappings in Hilbert spaces endowed with graphs. AKCE Int. J. Graphs Comb. 14(2), 101–111 (2017)


  38. Thianwan, T., Yambangwai, D.: Convergence analysis for a new two-step iteration process for G-nonexpansive mappings with directed graphs. J. Fixed Point Theory Appl. 21(44), 1–16 (2019)


  39. Thong, D.V., Hieu, D.V.: Modified subgradient extragradient method for variational inequality problems. Numer. Algorithms 79, 597–610 (2018)


  40. Tiammee, J., Kaewkhao, A., Suantai, S.: On Browder’s convergence theorem and Halpern iteration process for G-nonexpansive mappings in Hilbert spaces endowed with graphs. Fixed Point Theory Appl. 2015, 187 (2015)


  41. Tripak, O.: Common fixed points of G-nonexpansive mappings on Banach spaces, with a graph. Fixed Point Theory Appl. 2016, 87 (2016)


  42. Yambangwai, D., Aunruean, S., Thianwan, T.: A new modified three-step iteration method for G-nonexpansive mappings in Banach spaces with a graph. Numer. Algorithms 20, 1–29 (2019)


  43. Yambangwai, D., Cholamjiak, W., Thianwan, T., Dutta, H.: On a new weight tri-diagonal iterative method and its applications. Soft Comput. 25, 725–740 (2021)


  44. Yambangwai, D., Khan, S.A., Dutta, H., Cholamjiak, W.: Image restoration by advanced parallel inertial forward–backward splitting methods. Soft Comput. (2021). https://doi.org/10.1007/s00500-021-05596-6


  45. Yambangwai, D., Moshkin, N.: Deferred correction technique to construct high-order schemes for the heat equation with Dirichlet and Neumann boundary conditions. Eng. Lett. 21(2), 61–67 (2013)


  46. Zhang, L., Zhao, H., Lv, Y.: A modified inertial projection and contraction algorithms for quasi-variational inequalities. Appl. Set-Valued Anal. Optim. 1, 63–76 (2019)



Acknowledgements

This research was partially supported by Chiang Mai University. P. Charoensawan would like to thank the Faculty of Science, Chiang Mai University. W. Cholamjiak would like to thank Thailand Science Research and Innovation under the project IRN62W0007 and University of Phayao, Thailand. D. Yambangwai would like to thank the Thailand Science Research and Innovation Fund and the University of Phayao (Grant No. FF64-UoE002).

Funding

Chiang Mai University, Thailand.

Author information


Contributions

The authors equally conceived of the study, participated in its design and coordination, drafted the manuscript, participated in the sequence alignment, and read and approved the final manuscript.

Corresponding author

Correspondence to Raweerote Suparatulatorn.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Charoensawan, P., Yambangwai, D., Cholamjiak, W. et al. An inertial parallel algorithm for a finite family of G-nonexpansive mappings with application to the diffusion problem. Adv Differ Equ 2021, 453 (2021). https://doi.org/10.1186/s13662-021-03613-4

