

A parallel algorithm for space-time-fractional partial differential equations

Abstract

This paper is dedicated to the derivation of a simple parallel-in-space-and-time algorithm for space- and time-fractional evolution partial differential equations. We report the stability and the order of the method, and provide some illustrative numerical experiments.

1 Introduction

This paper is devoted to the derivation of a parallel algorithm for solving space- and time-fractional partial differential equations. We first focus on the parallelization in time using a parareal algorithm, which is the most standard parallel-in-time algorithm for solving differential equations [20, 29]. To this end, we focus in the first part of the paper on the derivation and analysis of a parallel-in-time algorithm for fractional ordinary differential equations (FODEs) [40]. More specifically, a parareal-Gorenflo scheme [40] is derived and analyzed for FODEs. We notice that the analysis of convergence of the parareal method for solving ODEs remains partially valid for fractional equations [22, 32]. Then, we provide a parallel-in-space-and-time algorithm for space- and time-fractional partial differential equations. Basically, the spatial approximation is provided by (i) a Fourier-based pseudospectral method, which benefits from the scalability of standard parallel FFT algorithms [18], and (ii) an efficient combination with the parallel-in-time algorithm.

A key element of the proposed overall algorithm is the parareal method. Let us recall its principle for solving differential equations. It consists in

  1.

    first solving sequentially the differential equation over \([0;T]\) using a large time-step \(\Delta T=T/N\) for some \(N \in {\mathbb {N}}^{*}\), that is, on a coarse grid in time;

  2.

    solving in parallel (using the same scheme as, or a more accurate scheme than, the one used on the coarse mesh), on say q processors (typically such that \(N\in q{\mathbb {N}}^{*}\) for an efficient workload), with a time-step \(\Delta t =\Delta T/R\) for some \(R\gg 1\), the equation on q subdomains \(\{[T_{qi};T_{q(i+1)}]\}_{0 \leq i \leq R-1}\), where \(T_{n}=n\Delta T\) for \(0\leq n \leq N\), \(T_{N}=T\), and with initial conditions (at \(\{T_{n}\}_{0\leq n \leq N-1}\)) on each subdomain given by the solution initially computed on the coarse grid (Step 1).

The parareal method allows for an accurate parallel computation of ODEs, and we refer to [16, 21, 29] for details about this celebrated method for solving ordinary/partial differential equations. It is in particular successfully combined with traditional domain decomposition methods in space. Indeed, it allows the use of a very large number of processors, going beyond the usual efficiency limits of domain decomposition methods when the number of spatial subdomains becomes too large. In this paper, we are interested in the extension of the parareal method to FODEs, which have become very popular over the past decade. The recent progress in fractional differential equation solvers [6, 7, 25, 27, 30, 33, 39] is largely motivated by the development of fractional differential models in physics, mechanics, epidemiology, and applied probability, allowing nonlocal-in-space or nonlocal-in-time effects to be taken into account [9–11, 15, 26, 30, 35, 37]. The main purpose of this paper is then to study efficient parallel algorithms for fractional ordinary and partial differential equations. In particular, we propose an original algorithm combining parallelization in space and in time. Very few works actually exist on parallel-in-time methods for FODEs; let us however cite [38], where a parareal method along with collocation and Fourier-based FODE solvers is developed. At this stage, we do not consider realistic models from the literature, but focus on toy scalar linear equations, for which it is possible to provide a relatively precise analysis and to exhibit the strong convergence and efficiency properties. In this paper, we first propose a parareal version of the standard Gorenflo scheme for approximating FODEs [40]. To this end, we consider

$$ D_{t}^{\alpha }y(t) = f \bigl(t,y(t) \bigr),\qquad y(0)=y_{0},\quad t \in [0;T] $$
(1)

for \(0< \alpha <1\), \(T>0\), and where f is a non-highly oscillatory Lipschitz continuous function. Lipschitz continuity allows for the existence of a unique solution, while the non-highly oscillatory condition allows for an efficient and accurate use of standard quadratures. In particular, the existence of unique solutions can be proved in some weighted subsets of \(C^{\alpha }\), see [19] for details. The analysis will be presented for the parareal-Gorenflo scheme approximating a linear FODE:

$$\begin{aligned} D_{t}^{\alpha }y(t) = -\lambda y(t) + g(t),\qquad y(0)=y_{0}, \end{aligned}$$
(2)

where \(\lambda >0\), \(0<\alpha <1\), and \(g \in C^{0}([0;T])\). The operator \(D_{t}^{\alpha } = {}^{\mathrm{C}}{}{D}_{t}^{\alpha }\) is here defined as Caputo’s derivative [40], that is,

$$\begin{aligned} {}^{\mathrm{C}} {} {D}_{t}^{\alpha }y(t) = \frac {1}{ \varGamma (1-\alpha )} \int _{0}^{t}(t-\tau )^{-\alpha }D_{ \tau }y( \tau )\,d\tau, \end{aligned}$$

where the Gamma function Γ, for \(\operatorname{Re} (z)>0\), is defined as

$$\begin{aligned} \varGamma (z) := \int _{0}^{\infty }x^{z-1}e^{-x}\,dx, \end{aligned}$$

and \(\varGamma (p)=(p-1)!\) for \(p\in {\mathbb {N}}^{*}\). We also recall the definition of the fractional integral \({}^{\mathrm{C}}{}{I}_{t}^{\alpha }y(t)\):

$$\begin{aligned} {}^{\mathrm{C}} {} {I}_{t}^{\alpha }y(t) = \frac {1}{\varGamma (\alpha )} \int _{0}^{t}(t-\tau )^{\alpha -1} y(\tau )\,d \tau. \end{aligned}$$
(3)

Hence, using (3) and (say) for \(y(0)=0\), the solution y to (2) satisfies

$$\begin{aligned} y(t) = -\frac {\lambda }{\varGamma (\alpha )} \int _{0}^{t}(t-\tau )^{ \alpha -1}y( \tau )\,d\tau + \frac {1}{\varGamma (\alpha )} \int _{0}^{t}(t-\tau )^{\alpha -1}g(\tau )\,d \tau. \end{aligned}$$

The well-posedness (existence and uniqueness) of this equation is analyzed in [14] for Lipschitz functions f. Let us also refer to [28, 31, 34] for other types of fractional differential equations. The main difficulty in parallelizing a FODE comes from the fact that the fractional derivative is a nonlocal integro-differential operator. As a consequence, it is no longer possible to directly and efficiently compute in parallel the differential equation on fine grids, as usually done in the parareal method. In this paper, we propose a natural strategy which consists in computing the fractional integrals in parallel by decomposing them into a local part (containing the “singularity” and computed with a fine resolution) and a history part (computed with a coarse resolution).

In the second part of the paper, we are then interested in a parallel algorithm for space-time fractional differential equations of the form

$$\begin{aligned} D_{t}^{\alpha }u(t,x) = -\lambda (x) D_{x}^{\beta }u(t,x),\qquad u(0,x)=u_{0} \in L^{2}({\mathbb {R}}), \end{aligned}$$

on \([0;T]\times {\mathbb {R}}\), with λ in the set \(C_{b}^{0}({\mathbb {R}})\) of continuous and bounded real-valued functions, \(0<\alpha <1\) and \(\beta >0\). We recall that, denoting \(m=\lceil \beta \rceil \), Caputo’s derivative \(D_{x}^{\beta } = {}^{\mathrm{C}}{}{D}_{x}^{\beta }\) in this case is given by

$$\begin{aligned} {}^{\mathrm{C}} {} {D}_{x}^{\beta }u(t,x) = \frac {1}{\varGamma (m-\beta )} \int _{-\infty }^{x}(x-y)^{m-1-\beta } \partial ^{m}_{x}u(t,y)\,dy. \end{aligned}$$

Notice that several alternative definitions of fractional derivatives exist [12] such as the Riemann–Liouville (RL) fractional derivative of order β over the interval \(]-\infty;x]\), which is defined by

$$\begin{aligned} {}^{\mathrm{RL}} {} {D}_{x}^{\beta }u(t,x) = \frac {1}{\varGamma (m-\beta )}\frac {\partial ^{m}}{\partial x^{m}} \int _{-\infty }^{x}\frac {u(t,y)}{(x-y)^{\beta -m+1}} \,dy, \end{aligned}$$
(4)

while the right RL fractional derivative is given by

$$\begin{aligned} {}^{\mathrm{RL}}{}_{x}D^{\beta }u(t,x) = \frac {(-1)^{m}}{\varGamma (m-\beta )} \frac {\partial ^{m}}{\partial x^{m}} \int _{x}^{+\infty } \frac {u(t,y)}{(y-x)^{\beta -m+1}} \,dy. \end{aligned}$$

Similarly, we introduce the left fractional Riemann–Liouville integral operator of order β as follows:

$$\begin{aligned} {}^{\mathrm{RL}} {} {I}_{x}^{\beta }u(x) = \frac {1}{ \varGamma (\beta )} \int _{-\infty }^{x} \frac {u(t,y)}{(x-y)^{1-\beta }} \,dy. \end{aligned}$$
(5)

With these notations, we have the relation

$$ {}^{\mathrm{RL}} {} {D}_{x}^{\beta }u(t,x) = \frac{\partial ^{m}}{\partial x^{m}} {}^{\mathrm{RL}} {} {I}_{x}^{m- \beta }u(t,x). $$
(6)

Let us also recall that the two derivatives are linked by the simple relation

$$\begin{aligned} {}^{\mathrm{RL}} {} {D}_{x}^{\alpha }u(x)= {}^{ \mathrm{C}} {} {D}_{x}^{\alpha }u(x) + \frac {1}{\varGamma (1- \alpha )}\times \frac {u(a)}{(x-a)^{\alpha }} \end{aligned}$$

for some real a. These definitions can naturally be used to define spatial fractional derivatives. However, we rather use the Fourier-based definition (Riesz’s derivative, denoted by \({}^{\mathrm{R}}{}{D}_{x}^{\beta }\)), which is more convenient and which can still be implemented on a bounded spatial domain. Denoting by ξ the co-variable to x in Fourier space, and by \(\mathcal{F}\) the corresponding Fourier transform, we define

$$\begin{aligned} {}^{\mathrm{R}} {} {D}_{x}^{\beta }u(t,x) = \mathcal{F}^{-1} \bigl(({ \mathtt{i}}\xi )^{\beta }\mathcal{F} \bigl(u(t,\xi )\bigr) \bigr). \end{aligned}$$

Based on this definition, we will use a pseudospectral method based on the discrete Fourier transform for the spatial approximation. We again refer to [6, 7, 13, 30, 36] for some discussions about fractional operators and fractional differential equations. Finally, we discuss the combination of the parallel-in-time and parallel-in-space algorithms along with some numerical experiments.

This paper is organized as follows. Section 2 is dedicated to the derivation of the parareal method based on Gorenflo’s scheme for solving FODE. Using in particular the parareal method and standard parallel FFTs, we then derive in Sect. 3 a parallel algorithm for space-time fractional differential equations. We finally conclude in Sect. 4.

2 Parareal-Gorenflo algorithm for fractional differential equations

We denote by ΔT a coarse grid time-step and by Δt a fine grid time-step such that \(\Delta T=R\Delta t\). The coefficient \(R \in {\mathbb {N}}\backslash \{0\}\) corresponds to the number of sub-iterations in time for each large time-step ΔT. We denote \(T_{n}=n\Delta T\) and \(t_{n;i}:= n\Delta T+i\Delta t\) for \(n=0,\ldots, N-1\) and \(i=0,\ldots,R\), such that \(t_{0;0}=0\) and \(t_{N-1;R}=T\). For a total of N coarse grid time iterations, we proceed with

– sequential computations on \([0;T_{N}]\);

– parallel computations on each \([T_{n};T_{n+1}]\), requiring computations from \([0;T_{n}]\), on the grids \([T_{n};T_{n+1}]=\bigcup_{i=1}^{R}[t_{n;i-1};t_{n;i}]\), with \(n=0,\ldots,N-1\).

We now derive the algorithm and provide some analytical details.

2.1 Gorenflo’s scheme

We approximate (2) with Gorenflo’s scheme [17], which reads (on the coarse grid, using the notation \(\{y_{\Delta T}^{n}\}_{n}\) approximating \(\{y(T_{n})\}_{n}\)) as follows:

$$\begin{aligned} y_{\Delta T}^{n+1} = -\lambda \Delta T^{\alpha }y_{\Delta T}^{n} - \sum _{i=1}^{n+1}w_{i}^{(\alpha )}y_{\Delta T}^{n+1-i} + \Delta T^{ \alpha }g(T_{n+1}), \end{aligned}$$
(7)

where \(y_{\Delta T}^{n}\) approximates \(y(T_{n})\), and the weights read

$$\begin{aligned} w_{i}^{(\alpha )} = \sum _{l=1}^{i} \frac {\varGamma (l-\alpha )}{\varGamma (-\alpha )\varGamma (l+1)}. \end{aligned}$$
(8)

Gorenflo’s scheme extends Grünwald–Letnikov’s idea of a finite difference approximation of the fractional integral. In theory, any other FODE solver could be combined with the parareal method, such as those presented in [23] or the one used in [38]. The coefficients \(w_{i}^{(\alpha )}\) for \(\alpha =0.25,0.5,0.75\) are reported for \(1\leq i\leq 20\) in Fig. 1. The choice of this method is motivated by its simplicity and the fact that it is possible to easily increase its order of convergence. Other methods, such as the ones used in [23], can naturally be explored.
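To make the scheme concrete, here is a minimal Python sketch of the weights (8) and of the update (7), together with the benchmark of Numerical experiment 1 below; it is only an illustrative sketch (the function names are ours), not the implementation used for the reported experiments.

```python
import numpy as np
from math import gamma

def gorenflo_weights(alpha, n):
    # w_i^(alpha) = sum_{l=1}^{i} Gamma(l - alpha) / (Gamma(-alpha) Gamma(l + 1)), cf. (8)
    terms = [gamma(l - alpha) / (gamma(-alpha) * gamma(l + 1)) for l in range(1, n + 1)]
    return np.cumsum(terms)

def gorenflo_solve(alpha, lam, g, T, N):
    """Explicit scheme (7) for D_t^alpha y = -lam * y + g(t), y(0) = 0, on [0, T],
    with N coarse steps dT = T / N (the scheme assumes y(0) = 0, see Sect. 2.1)."""
    dT = T / N
    w = gorenflo_weights(alpha, N)
    y = np.zeros(N + 1)
    for n in range(N):
        memory = sum(w[i - 1] * y[n + 1 - i] for i in range(1, n + 2))
        y[n + 1] = -lam * dT ** alpha * y[n] - memory + dT ** alpha * g((n + 1) * dT)
    return y

if __name__ == "__main__":
    # Benchmark of Numerical experiment 1: exact solution y(t) = t^2.
    alpha = 0.5
    g = lambda t: t ** 2 + 2 * t ** (2 - alpha) / gamma(3 - alpha)
    for N in (32, 64, 128):
        y = gorenflo_solve(alpha, 1.0, g, 1.0, N)
        t = np.linspace(0.0, 1.0, N + 1)
        print(N, np.max(np.abs(y - t ** 2)))  # error decreases as dT is refined
```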

Figure 1. Numerical experiment 1. (Left) Coefficients \(\{w_{i}^{(\alpha )}\}_{1\leq i \leq 20}\) for \(\alpha =0.25,0.5,0.75\). (Right) Convergence graph for Gorenflo’s scheme

Numerical experiment 1. We present a simple experiment from [17]: for \(0<\alpha <1\),

$$\begin{aligned} D_{t}^{\alpha }y(t) = -y(t) + t^{2} + \frac {2t^{2-\alpha }}{\varGamma (3-\alpha )},\qquad y(0)=0,\quad t \in [0,1], \end{aligned}$$

for which an explicit solution \(y_{\mathrm{exact}}(t)=t^{2}\) exists. We report the convergence graph in Fig. 1 (Right), that is, the \(\ell ^{2}\)-norm error as a function of the time-step; we numerically estimate the order of convergence of the method, which on this specific example (\(\alpha =0.5\)) is \(p\approx 1.5\). We refer to [25] for the derivation and convergence analysis of this scheme. In compact form, (7) reads

$$\begin{aligned} {\mathbf{Y}}_{\Delta T}^{n+1} = \mathcal{C}_{\Delta T} \bigl({ \mathbf{Y}}^{n}_{ \Delta T} \bigr), \end{aligned}$$

where \({\mathbf{Y}}^{n}_{\Delta T}=[y^{0}_{\Delta T},\ldots,y_{\Delta T}^{n}] \in {\mathbb {R}}^{n+1}\), using the convenient notation from [38]. The algebraic operator \(\mathcal{C}_{\Delta T}\) (from \({\mathbb {R}}^{n+1}\) to \({\mathbb {R}}^{n+2}\)) basically constructs the solution at time \(T_{n+1}\) from the solutions at previous times through (7).

We first discuss the absolute stability of Gorenflo’s scheme on a (coarse) grid:

$$\begin{aligned} y_{\Delta T}^{n+1} = -\lambda \Delta T^{\alpha }y_{\Delta T}^{n} - \sum_{i=1}^{n+1}w_{i}^{(\alpha )}y_{\Delta T}^{n+1-i} + \Delta T^{ \alpha }g(T_{n+1}), \end{aligned}$$

which can also be rewritten

$$\begin{aligned} {\mathbf{Y}}^{n+1}_{\Delta T} = \mathcal{M}_{\Delta T}{ \mathbf{Y}}^{n}_{\Delta T} + \mathcal{G}^{n+1}_{\Delta T}, \end{aligned}$$

where \(\mathcal{G}^{n+1}_{\Delta T} = [g(T_{1}),\ldots,g(T_{n+1})]\) and the matrix \(\mathcal{M}_{\Delta T} = \{m_{\Delta T;ij}\}_{i,j}\) is defined as

$$\begin{aligned} \textstyle\begin{cases} m_{\Delta T;ij} = 0, & i \leq j, \\ m_{\Delta T;ij} = -w^{(\alpha )}_{1}-\lambda \Delta T^{\alpha }, & j=i-1, \\ m_{\Delta T;ij} = -w_{i-j}^{(\alpha )}, & j< i-1, \end{cases}\displaystyle \end{aligned}$$

where \(\{w_{i}^{(\alpha )}\}_{i}\) is defined in (8). We are interested in the conditional stability of Gorenflo’s scheme. We then have the following.

Proposition 2.1

For \(0<\alpha <1\), \(\lambda >0\), Gorenflo’s scheme is conditionally absolutely stable in the sense that

$$ \lim_{n\rightarrow \infty } \bigl\vert y^{n}_{\Delta T} \bigr\vert = 0,\quad \textit{for any } \Delta T \leq (\alpha /\lambda )^{1/\alpha }. $$

Proof

We aim to study the behavior of the numerical solution to

$$\begin{aligned} D_{t}^{\alpha }y(t) = -\lambda y(t),\qquad y(0)=1, \end{aligned}$$
(9)

when t goes to infinity. As Gorenflo’s method requires a priori \(y(0)=0\), we reformulate (9) as

$$\begin{aligned} D_{t}^{\alpha }y(t) = -\lambda y(t)-\lambda,\qquad y(0)=0. \end{aligned}$$
(10)

The solution to (9) is hence deduced from that of (10) by adding 1: \(y \leftarrow y+1\). Gorenflo’s scheme then reads, for \(n\geq 0\),

$$ y_{\Delta T}^{n+1} = -\lambda \Delta T^{\alpha }y_{\Delta T}^{n} - \sum_{i=1}^{n+1}w_{i}^{(\alpha )}y_{\Delta T}^{n+1-i} - \lambda \Delta T^{\alpha }. $$

First we notice that \(w_{1}^{(\alpha )}=-\alpha \in (-1,0)\) and more generally \(\alpha <-w_{i}^{(\alpha )}\). As a consequence, we can rewrite the scheme as

$$ y_{\Delta T}^{n+1} = \bigl(\alpha -\lambda \Delta T^{\alpha } \bigr)y_{ \Delta T}^{n} - \sum _{i=2}^{n+1}w^{(\alpha )}_{i}y^{n+1-i}_{\Delta T} -\lambda \Delta T^{\alpha }.$$
(11)

We prove by induction that \(\{y_{\Delta T}^{n}\}_{n}\) is decreasing and bounded from below by −1:

  1.

    We assume that \(y^{n}_{\Delta T}\leq y_{\Delta T}^{n-1}\leq \cdots \leq y^{0}=0\). We have

    $$\begin{aligned} y_{\Delta T}^{n} = \bigl(\alpha -\lambda \Delta T^{\alpha } \bigr)y_{ \Delta T}^{n-1} - \sum _{i=2}^{n}w^{(\alpha )}_{i}y_{\Delta T}^{n-i} - \lambda \Delta T^{\alpha }. \end{aligned}$$
    (12)

    Thus

    $$\begin{aligned} y_{\Delta T}^{n+1} -y_{\Delta T}^{n}& = \bigl( \alpha -\lambda \Delta T^{\alpha } \bigr) \bigl(y_{\Delta T}^{n}-y_{\Delta T}^{n-1} \bigr) - \sum_{i=2}^{n+1}w^{( \alpha )}_{i}y^{n-i+1}_{\Delta T} + \sum_{i=2}^{n}w^{(\alpha )}_{i}y^{n-i}_{ \Delta T} \\ & = \bigl(\alpha -\lambda \Delta T^{\alpha } \bigr) \bigl(y_{\Delta T}^{n}-y_{ \Delta T}^{n-1} \bigr) - \sum_{i=2}^{n}w_{i}^{(\alpha )} \bigl(y^{n-i+1}_{ \Delta T}-y_{\Delta T}^{n-i}\bigr) - w_{n+1}^{(\alpha )} y^{0} \\ & = \bigl(\alpha -\lambda \Delta T^{\alpha } \bigr) \bigl(y_{\Delta T}^{n}-y_{ \Delta T}^{n-1} \bigr) - \sum_{i=2}^{n}w_{i}^{(\alpha )} \bigl(y^{n-i+1}_{ \Delta T}-y_{\Delta T}^{n-i}\bigr). \end{aligned}$$

    Hence, as (i) the coefficients \(w_{i}^{(\alpha )}\) are negative, (ii) less than 1 in absolute value, and (iii) \(y^{0}=0\), we get

    $$\begin{aligned} y_{\Delta T}^{n+1} -y_{\Delta T}^{n} = \bigl(\alpha -\lambda \Delta T^{\alpha } \bigr) \bigl(y_{\Delta T}^{n}-y_{\Delta T}^{n-1} \bigr) + \sum_{i=2}^{n} \bigl\vert w_{i}^{( \alpha )} \bigr\vert \bigl(y^{n-i+1}_{\Delta T}-y_{\Delta T}^{n-i} \bigr) \leq 0, \end{aligned}$$

    and conclude that, at least for \(\Delta T \leq (\alpha /\lambda )^{1/\alpha }\), we get \(y_{\Delta T}^{n+1} \leq y_{\Delta T}^{n}\). Hence, the sequence \(\{y_{\Delta T}^{n}\}_{n}\) is decreasing. Notice that applying a standard discrete Gronwall inequality would lead to a similar conclusion and can be used for proving the convergence of the method:

    $$\begin{aligned} y_{\Delta T}^{n+1} -y_{\Delta T}^{n} \leq \bigl(y_{\Delta T}^{1} -y_{ \Delta T}^{0} \bigr)\exp \Biggl( \bigl\vert \alpha -\lambda \Delta T^{\alpha } \bigr\vert + \sum _{i=2}^{n} \bigl\vert w_{i}^{( \alpha )} \bigr\vert \Biggr). \end{aligned}$$
  2.

    We now assume by induction that \(y_{\Delta T}^{n} \geq -1\) for all \(n\geq 0\). We have

    $$\begin{aligned} y_{\Delta T}^{n+1} = \bigl(\alpha -\lambda \Delta T^{\alpha } \bigr)y_{ \Delta T}^{n} - \sum _{i=1}^{n}w^{(\alpha )}_{i+1}y^{n-i}_{\Delta T} - \lambda \Delta T^{\alpha } \end{aligned}$$
    (13)

    and

    $$ y_{\Delta T}^{n} = \bigl(\alpha -\lambda \Delta T^{\alpha } \bigr)y_{\Delta T}^{n-1} - \sum_{i=2}^{n}w_{i}^{(\alpha )}y_{\Delta T}^{n-i} - \lambda \Delta T^{\alpha }. $$

    Hence, for \(\alpha -\lambda \Delta T^{\alpha }\geq 0\),

    $$ y_{\Delta T}^{n} \leq \alpha y_{\Delta T}^{n-1} - \sum_{i=2}^{n}w_{i}^{( \alpha )}y_{\Delta T}^{n-i} $$

    and \(-\sum_{i=2}^{n}w_{i}^{(\alpha )}y_{\Delta T}^{n-i} \geq (\alpha -1)\). Thus, still under the assumption \(\alpha -\lambda \Delta T^{\alpha }\geq 0\) and using \(y_{\Delta T}^{n}\geq -1\), we have

    $$\begin{aligned} y_{\Delta T}^{n+1} &= \bigl(\alpha -\lambda \Delta T^{\alpha } \bigr)y_{ \Delta T}^{n} - \sum_{i=1}^{n}w^{(\alpha )}_{i+1}y^{n-i}_{\Delta T} - \lambda \Delta T^{\alpha } \\ &\geq \bigl(\alpha -\lambda \Delta T^{\alpha } \bigr)y_{\Delta T}^{n} + (\alpha -1) -\lambda \Delta T^{\alpha } \\ &\geq \bigl(\alpha -\lambda \Delta T^{\alpha }\bigr) \bigl(1+y_{\Delta T}^{n} \bigr)-1 \\ &\geq -1. \end{aligned}$$

Thus \(\{y_{\Delta T}^{n}\}_{n}\) is decreasing and bounded, hence convergent to some \(y^{*}\). Taking the limit in n, we get

$$\begin{aligned} -\lim_{n \rightarrow \infty }\sum _{i=2}^{n}w_{i}^{(\alpha )}y_{ \Delta T}^{n-i} = \bigl(1-\alpha -\lambda \Delta T^{\alpha }\bigr)y^{*}. \end{aligned}$$
(14)

Taking for instance \(\Delta T=(\alpha /\lambda )^{1/\alpha }\) in Gorenflo’s scheme gives

$$ y_{\Delta T}^{n} = - \sum_{i=2}^{n}w_{i}^{(\alpha )}y_{\Delta T}^{n-i} -\alpha, $$

and using (14), we get

$$\begin{aligned} y^{*} = (1-\alpha )y^{*} - \alpha. \end{aligned}$$

We then deduce that the solution is convergent to \(y^{*}=-1\). □
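As a quick numerical illustration of Proposition 2.1 (a sketch reusing gorenflo_solve from the snippet in Sect. 2.1, applied to the shifted problem (10), i.e. with constant source \(g\equiv -\lambda \)), one can check that at the critical step \(\Delta T=(\alpha /\lambda )^{1/\alpha }\) the iterates decrease towards −1, so that the solution of (9) tends to 0; the decay is slow since it is only algebraic.

```python
alpha, lam = 0.5, 1.0
dT = (alpha / lam) ** (1 / alpha)          # critical coarse step of Proposition 2.1
T = 100.0
N = int(T / dT)                            # so that T = N * dT
y = gorenflo_solve(alpha, lam, lambda t: -lam, T, N)   # shifted problem (10)
print(y[-1])   # close to -1; y + 1 approximates the solution of (9) and tends to 0
```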

2.2 Parareal algorithm for fractional differential equations

We now explicitly derive a parallel algorithm for FODEs. Each interval \([T_{n};T_{n+1}]\) for \(0\leq n\leq N-1\) is decomposed into R time-subdomains. For \(0\leq \ell \leq NR-1\) (corresponding to time \(n\Delta T+i\Delta t\) with \(\ell =nR+i\) and \(0\leq i\leq R\)), Gorenflo’s scheme on the fine grid reads

$$\begin{aligned} y_{\Delta t}^{\ell +1} = -\lambda \Delta t^{\alpha }y_{\Delta t}^{ \ell } - \sum_{j=1}^{\ell +1}w_{j}^{(\alpha )}y_{\Delta t}^{\ell +1-j} + \Delta t^{\alpha }g(t_{n;i}). \end{aligned}$$

It would naturally be highly inefficient to solve (even in parallel) the FODE on a “full” fine grid. In order to derive the parareal algorithm, we need to examine whether the solution to (2) can be approximated by a sum of “local” FODEs.

We decompose \([0;T]=\bigcup_{n=0}^{N-1}[T_{n};T_{n+1}]\) and, for \(1\leq n \leq N-1\), denote by \(y_{n}\) the solution to

$$\begin{aligned} \frac {1}{\varGamma (1-\alpha )} \int _{T_{n}}^{t}(t-\tau )^{-\alpha }D_{ \tau }y_{n}( \tau )\,d\tau = -\lambda y_{n}(t) \quad\text{for } t \in [T_{n};T_{n+1}],\qquad y_{n}(T_{n}) = y_{n-1}(T_{n}). \end{aligned}$$

We naturally have the following.

Lemma 1

If \(\alpha \in (0,1)\), for any \(1\leq n \leq N-1\), we a priori have \(y_{ |[T_{n};T_{n+1}]}\neq y_{n}\), where y is a solution to (2).

This is a simple consequence of the fact that fractional derivatives and integrals are nonlocal operators. The integral decomposition must therefore be carefully designed.

Proof

Recall that \(y_{1}\) and \(y_{n}\) are the respective solutions to

$$\begin{aligned} \frac {1}{\varGamma (1-\alpha )} \int _{0}^{t}(t-\tau )^{-\alpha }D_{ \tau }y_{1}( \tau )\,d\tau = -\lambda y_{1}(t) \quad\text{for } t \in [0;T_{1}],\qquad y_{1}(0)= y_{0}, \end{aligned}$$

and for \(n> 1\),

$$\begin{aligned} \frac {1}{\varGamma (1-\alpha )} \int _{T_{n}}^{t}(t-\tau )^{-\alpha }D_{ \tau }y_{n}( \tau )\,d\tau = -\lambda y_{n}(t) \quad\text{for } t \in [T_{n};T_{n+1}],\qquad y_{n}(T_{n})= y_{n-1}(T_{n}). \end{aligned}$$

Trivially we have \(y_{ |[0;T_{1}]}=y_{1}\). Next, on any interval \([T_{n};T_{n+1}]\), we have \(y_{n}(T_{n})=y_{n-1}(T_{n})\). If \(y_{ |[T_{n};T_{n+1}]}=y_{n}\), then for \(t \in [T_{n};T_{n+1}]\)

$$\begin{aligned} -\lambda \bigl(y_{ |[T_{n};T_{n+1}]}(t) - y_{n}(t) \bigr) = {}& \frac {1}{\varGamma (1-\alpha )} \biggl[ \int _{0}^{T_{n}}(t-\tau )^{- \alpha }D_{\tau }y( \tau )\,d\tau \\ & {} + \int _{T_{n}}^{T_{n+1}}(t-\tau )^{-\alpha } \bigl(D_{\tau }y( \tau )- D_{\tau }y_{n}(\tau ) \bigr) \,d\tau \biggr]_{ |[T_{n};T_{n+1}]} \end{aligned}$$

is equivalent to

$$\begin{aligned} \frac {1}{\varGamma (1-\alpha )} \int _{0}^{T_{n}}(t-\tau )^{-\alpha }D_{ \tau }y( \tau )\,d\tau _{ |[T_{n};T_{n+1}]} = 0, \end{aligned}$$

which is in general not true. □

The lemma shows that, in order to derive a consistent parareal method, it is necessary on each \([T_{n};T_{n+1}]\) to include a nonlocal contribution from \([0;T_{n}]\), corresponding to

$$\begin{aligned} \frac {1}{\varGamma (1-\alpha )} \int _{0}^{T_{n}}(t-\tau )^{-\alpha }D_{ \tau }y( \tau )\,d\tau _{ |[T_{n};T_{n+1}]}. \end{aligned}$$

Let us now detail the corresponding parareal method. Recall that different schemes can be used on the coarse and fine levels. Generally speaking, computations on fine grids will only be performed in parallel, while the coarse grid ones will be performed either in parallel or sequentially.

Algorithm

The overall parareal algorithm reads, using the usual compact notations [29, 38],

$$\begin{aligned} {\mathbf{Y}}^{n+1;k}_{\Delta T} = \mathcal{C}_{\Delta T} \bigl({\mathbf{Y}}^{n;k}_{ \Delta T} \bigr) - \mathcal{C}_{\Delta T} \bigl({\mathbf{Y}}^{n;k-1}_{\Delta T} \bigr) + \mathcal{F}_{\Delta T} \bigl({\mathbf{Y}}^{n;k-1}_{\Delta t} \bigr), \end{aligned}$$
(15)

where k denotes the parareal (time domain decomposition) iteration index, and where \({\mathbf{Y}}_{\Delta T}^{n;k} \in {\mathbb {R}}^{n+1}\) represents \([y_{\Delta T}^{0;k},\ldots,y_{\Delta T}^{n;k}]\), that is, the approximate solution on the coarse grid (but possibly computed by combining fine and coarse grid computations) at iteration k using the scheme (7). However, the computation of this quantity differs from the parareal method for ODEs due to the nonlocality of fractional operators. We proceed as follows:

– for \(k \geq 1\), we compute in parallel

$$\begin{aligned} \mathcal{F}_{\Delta T} \bigl({\mathbf{Y}}^{n;k}_{\Delta T} \bigr)= \bigl[ \mathcal{F}_{\Delta T} \bigl(y^{0;k}_{\Delta T} \bigr),\ldots, \mathcal{F}_{\Delta T} \bigl(y^{n;k}_{\Delta T} \bigr) \bigr], \end{aligned}$$

and where \(\mathcal{F}_{\Delta T} (y^{n;k}_{\Delta T} )\) denotes the approximate solution to

$$\begin{aligned} D_{t}^{\alpha }y(t) = -\lambda y(t)+g(t),\quad t \in [T_{n};T_{n+1}], \end{aligned}$$

which is equivalent, thanks to (3), to

$$\begin{aligned} y(t) = y(T_{n}) -\frac {\lambda }{\varGamma (\alpha )} \int _{T_{n}}^{t}(t- \tau )^{\alpha -1}y(\tau )\,d \tau + \frac {1}{\varGamma (\alpha )} \int _{T_{n}}^{t}(t- \tau )^{\alpha -1}g(\tau )\,d \tau, \end{aligned}$$

with initial data given at \(T_{n}\), and computed partially on a fine grid \(\{t_{n;0},\ldots,t_{n;R}\} \in [T_{n};T_{n+1}]^{R+1}\). Unlike the standard ODE case, the nonlocality in time requires special care. Indeed, on any interval \([T_{n};T_{n+1}]\), we still need to include a contribution from former times (\(t < T_{n}\)). In other words, this leads to a potentially very computationally complex method, even if it is solved in parallel. More specifically, for \(t\in [T_{n};T_{n+1}]\), we rewrite

$$\begin{aligned} I_{t}^{\alpha }y(t) &= \frac {1}{\varGamma (\alpha )} \int _{0}^{t}(t- \tau )^{\alpha -1}y(\tau )\,d\tau \\ & = \frac {1}{\varGamma (\alpha )} \int _{0}^{T_{n}}(t-\tau )^{\alpha -1}y( \tau )\,d \tau + \frac {1}{\varGamma (\alpha )} \int _{T_{n}}^{t}(t-\tau )^{ \alpha -1}y(\tau )\,d \tau. \end{aligned}$$

In particular, for any \(T_{n} \leq t \leq T_{n+1}\), we also have

$$\begin{aligned} y(t) = {}&y(0) -\frac {\lambda }{\varGamma (\alpha )} \Biggl[\sum _{j=0}^{n-1} \int _{T_{j}}^{T_{j+1}}(t-\tau )^{\alpha -1}y(\tau ) \,d \tau + \int _{T_{n}}^{t}(t- \tau )^{\alpha -1}y(\tau ) \,d \tau \Biggr] \\ &{}+ \frac {1}{\varGamma (\alpha )} \int _{0}^{t}(t-\tau )^{\alpha -1}g( \tau )\,d \tau. \end{aligned}$$

We propose to approximate the first integral (history part) using a coarse grid approximation at iteration k, and the second one on a fine grid. More specifically, we approximate on the coarse grid

$$\begin{aligned} H_{n}(t) = -\frac {\lambda }{\varGamma (\alpha )} \int _{0}^{T_{n}}(t- \tau )^{\alpha -1}y(\tau )\,d \tau. \end{aligned}$$

Notice that we do not have to deal with singularities in this integral, so any (higher-order) quadrature rule can be utilized. We approximate this term at \(t_{n;i}=T_{n}+i \Delta t\), say by (using a rectangle rule for simplicity)

$$\begin{aligned} \mathrm{H}_{n;i} = -\Delta T^{\alpha }\lambda \sum _{j=1}^{n} \frac {(n-j+i/R)^{\alpha }-(n-j-1+i/R)^{\alpha }}{\varGamma (\alpha +1)}y_{ \Delta T}^{j}, \end{aligned}$$
(16)

while the local part

$$\begin{aligned} L_{n}(t) = \frac {1}{\varGamma (\alpha )} \int _{T_{n}}^{t}(t-\tau )^{ \alpha -1}y(\tau )\,d\tau \end{aligned}$$

is approximated using the fine grid on \([T_{n};T_{n+1}]\). The local part can simply be rewritten as

$$\begin{aligned} L_{n}(t) = \frac {1}{\varGamma (\alpha )} \int _{0}^{t-T_{n}}(t-T_{n}- \tau )^{\alpha -1}y(\tau +T_{n})\,d\tau, \end{aligned}$$

with \(y(T_{n})\) given. For instance,

$$\begin{aligned} L_{n}(T_{n+1}) = \frac {1}{\varGamma (\alpha )} \int _{0}^{\Delta T}( \Delta T-\tau )^{\alpha -1}y(\tau +T_{n})\,d\tau. \end{aligned}$$

Over \([T_{n};T_{n+1}]\), that is, \(Rn \leq Rn+i \leq R(n+1)\) with \(0 \leq n \leq N-1\) and \(0 \leq i \leq R\), rather than computing on the fine grid a complete Gorenflo scheme

$$\begin{aligned} y_{\Delta t}^{nR+i+1;k} = -\lambda \Delta t^{\alpha }y_{\Delta t}^{nR+i;k} - \sum_{j=1}^{nR+i+1}w_{j}^{(\alpha )}y_{\Delta t}^{nR+i+1-j;k} + \Delta t^{\alpha }g(t_{n;i}), \end{aligned}$$

which would be, as already mentioned above, highly inefficient from the computational point of view, we compute in parallel (indexed here by n)

$$\begin{aligned} y_{\Delta t}^{nR+i+1;k} = -\lambda \Delta t^{\alpha }y_{\Delta t}^{nR+i;k} - \sum_{j=1}^{i+1}w_{j}^{(\alpha )}y_{\Delta t}^{nR+i+1-j;k} + \mathrm{H}_{n;i} + \Delta t^{\alpha }g(t_{n;i}), \end{aligned}$$

where \(\mathrm{H}_{n;i}\) is defined in (16), that is, the history contribution (from \([0;T_{n}]\)) is computed on the coarse grid and is consistent with \(D_{t}^{\alpha }y\), see Fig. 2. In other words, the fine-grid contribution \(\sum_{j=1}^{nR+i+1}w_{j}^{(\alpha )}y_{\Delta t}^{nR+i+1-j;k}\) over \([0;T_{n}]\) is replaced by the coarse grid approximation \(H_{n;i} + \sum_{j=1}^{i+1}w_{j}^{(\alpha )}y_{\Delta t}^{nR+i+1-j;k}\) (a small illustrative sketch of this splitting is given after the description of the coarse propagator below). These algebraic equations are explicit and are solved with linear complexity. It is important to notice that, at this stage and say at time \(t_{n}\), each processor should have access to the solution on the coarse grid at any time prior to \(t_{n}\), which is however cheap from the computational and memory usage points of view. The fine grid (local part) contribution could also be approximated using a rectangle rule; recall that the parareal method does not require that the same method be used on the fine and coarse grids.

Figure 2. Parareal algorithm with fine/coarse grid in the \(\mathcal{F}_{\Delta T}\)-step

– for \(k \geq 1\), the coarse grid contribution (prediction) reads formally

$$\begin{aligned} \mathcal{C}_{\Delta T} \bigl({\mathbf{Y}}^{n;k}_{\Delta T} \bigr)= \bigl[ \mathcal{C}_{\Delta T} \bigl(y^{0;k}_{\Delta T} \bigr),\ldots, \mathcal{C}_{\Delta T} \bigl(y^{n;k}_{\Delta T} \bigr) \bigr], \end{aligned}$$

and where \(\mathcal{C}_{\Delta T} (y^{n;k}_{\Delta T} )\) denotes the approximate solution to

$$\begin{aligned} D_{t}^{\alpha }y(t) = -\lambda y(t)+g(t),\quad t \in [T_{n};T_{n+1}], \end{aligned}$$

computed on the coarse grid using Gorenflo’s scheme (7).
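To fix ideas, here is a minimal Python sketch of the fine propagator described above. It is hedged: the function names and interfaces are illustrative, and the history quadrature below integrates the kernel exactly on each coarse cell with y frozen at the left endpoint, so its index convention may differ slightly from the printed formula (16).

```python
import numpy as np
from math import gamma

def gorenflo_weights(alpha, n):
    # cumulative weights w_i^(alpha) of (8), repeated here for self-containedness
    return np.cumsum([gamma(l - alpha) / (gamma(-alpha) * gamma(l + 1))
                      for l in range(1, n + 1)])

def history_term(alpha, lam, dT, y_coarse, n, i, R):
    """Coarse-grid rectangle-rule approximation of the history part
       H_n(t) = -(lam / Gamma(alpha)) * int_0^{T_n} (t - tau)^(alpha - 1) y(tau) dtau
    at t = T_n + i * dT / R; y_coarse[j] approximates y(T_j)."""
    s = n + i / R   # = t / dT
    acc = sum(((s - j) ** alpha - (s - j - 1) ** alpha) * y_coarse[j] for j in range(n))
    return -lam * dT ** alpha * acc / gamma(alpha + 1)

def fine_subinterval(alpha, lam, dt, y_start, H, g_vals):
    """Local Gorenflo sweep on [T_n, T_{n+1}]: the memory sum only covers the
    subinterval, and the coarse-grid history H[i] replaces the missing part, as in
    the modified fine-grid scheme above; len(g_vals) = R fine steps of size dt."""
    R = len(g_vals)
    w = gorenflo_weights(alpha, R)
    y = np.zeros(R + 1)
    y[0] = y_start
    for i in range(R):
        memory = sum(w[j - 1] * y[i + 1 - j] for j in range(1, i + 2))
        y[i + 1] = -lam * dt ** alpha * y[i] - memory + H[i] + dt ** alpha * g_vals[i]
    return y
```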

The convergence criterion is as follows. We repeat the iterations for \(1\leq k \leq k_{\infty }\), until convergence (say in \(\ell ^{\infty }\)-norm on \({\mathbb {R}}^{n+1}\))

$$\begin{aligned} \max_{1\leq n\leq N} \bigl\Vert {\mathbf{Y}}_{\Delta T}^{n;k+1} - {\mathbf{Y}}_{ \Delta T}^{n;k} \bigr\Vert _{\infty } \leq \delta, \end{aligned}$$

where δ is a small fixed parameter, and the corresponding k is denoted by \(k_{\infty } \in {\mathbb {N}}^{*}\). The converged parareal-Gorenflo solution at final time is then \({\mathbf{Y}}_{\Delta T}^{N;k_{\infty }}\).
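The overall iteration (15), together with the stopping criterion above, can be organized as in the following structural sketch. It is written in the usual pointwise form of the parareal correction (whereas (15) propagates the whole vector \({\mathbf{Y}}^{n;k}\)); coarse and fine are user-supplied propagators from \(T_{n}\) to \(T_{n+1}\) (for the FODE, the fine propagator would combine the local sweep and the coarse-grid history of the previous sketch, which is omitted here for brevity), and all names are illustrative.

```python
import numpy as np

def parareal(coarse, fine, y0, N, delta=1e-10, kmax=50):
    """Parareal predictor-corrector iteration, cf. (15):
    Y^{n+1;k} = C(Y^{n;k}) - C(Y^{n;k-1}) + F(Y^{n;k-1}),
    stopped when max_n |Y^{n;k+1} - Y^{n;k}| <= delta."""
    Y = np.zeros(N + 1)
    Y[0] = y0
    for n in range(N):                        # k = 0: sequential coarse sweep
        Y[n + 1] = coarse(n, Y[n])
    for k in range(1, kmax + 1):
        Yold = Y.copy()
        F = [fine(n, Yold[n]) for n in range(N)]     # distributed over processors in practice
        G = [coarse(n, Yold[n]) for n in range(N)]
        for n in range(N):                    # sequential correction sweep
            Y[n + 1] = coarse(n, Y[n]) + F[n] - G[n]
        if np.max(np.abs(Y - Yold)) <= delta:
            break
    return Y, k
```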

We propose an analysis of the order of the parareal method for the fractional equation

$$\begin{aligned} D_{t}^{\alpha }y = -\lambda y, \end{aligned}$$

approximated by the parareal-Gorenflo scheme. We propose to follow and then extend the analysis presented in [29].

Proposition 2.2

Assume that the algorithm on the fine grid is exact. Then, at iteration k, the parareal scheme has order k, that is, there exists \(c_{k}(T)\) such that

$$\begin{aligned} \bigl\vert y_{\Delta T}^{n;k}-y(T_{n}) \bigr\vert + \max_{t\in [T_{n};T_{n+1}]} \bigl\vert y_{ \Delta t}^{n;k}(t)-y(t) \bigr\vert \leq c_{k}(T)\Delta T^{k} \quad\textit{for all } 0\leq n \leq N-1, \end{aligned}$$

where y denotes the exact solution.

Notice that the extension to the case where the fine grid resolution is obtained by an approximate numerical method (as described in this paper) is more technical and is not presented here (see the discussion below), but, as in the classical parareal method for ODEs, a similar conclusion is expected. We also refer to [38], where the authors arrive at the same conclusion with a different method. In order to prove the above proposition, we follow the same steps as in [29].

Proof

The proof relies on the estimate of the jumps defined as follows:

$$\begin{aligned} \delta ^{n+1;k} = -\lambda \Delta T^{\alpha }\delta ^{n;k}-\sum_{i=1}^{n+1}w_{i}^{( \alpha )} \delta ^{n+1-i;k} + S^{n;k}, \end{aligned}$$

and we define

$$\begin{aligned} \Delta ^{n;k}:= \bigl(\delta ^{1;k},\ldots,\delta ^{n;k} \bigr)^{T}, \qquad {\boldsymbol{S}}^{n;k}:= \bigl(0,S^{1;k},\ldots,S^{n-1;k} \bigr)^{T},\qquad { \boldsymbol{Y}}^{n;k}:= \bigl(y_{\Delta T}^{1;k}, \ldots,y_{ \Delta T}^{n;k} \bigr)^{T}, \end{aligned}$$

where

$$\begin{aligned} S^{n;k} = y_{k}^{n-1;k}(T_{n})-y_{\Delta T}^{n;k} = G( \lambda,\Delta T)y_{\Delta T}^{n-1;k} - y_{\Delta T}^{n;k}, \end{aligned}$$

where we have denoted by G the exact semi-group for the FODE. We next introduce the lower triangular matrix \({\boldsymbol{M}}_{n}\) in \({\mathbb {R}}^{n\times n}\)

$$\begin{aligned} {\boldsymbol{M}}_{n}\Delta ^{n;k} = { \boldsymbol{S}}^{n;k}, \end{aligned}$$

where the pth line \({\boldsymbol{M}}^{(p)}_{n}\) of \({\boldsymbol{M}}_{n}\) is defined as

$$\begin{aligned} {\boldsymbol{M}}^{(p)}_{n} = \bigl[-w_{p-1}^{(\alpha )}, \ldots,-w_{1}^{( \alpha )}-\lambda \Delta T^{\alpha },1,0, \ldots,0 \bigr], \end{aligned}$$

and \({\boldsymbol{Y}}^{n;k+1}={\boldsymbol{y}}^{n;k} + \Delta ^{n;k} = { \boldsymbol{y}}^{n;k} + {\boldsymbol{M}}_{n}^{-1} {\boldsymbol{S}}^{n;k}\). In particular

$$\begin{aligned} y_{\Delta T}^{n;k+1} = y_{\Delta T}^{n-1;k}(T_{n}) +\delta ^{n;k} = G(\lambda,\Delta T)y_{\Delta T}^{n-1;k}+ \delta ^{n;k}. \end{aligned}$$

We denote \(\varepsilon ^{n;k+1} = y_{\Delta T}^{n;k+1}-y(T_{n})\), hence

$$\begin{aligned} \varepsilon ^{n;k+1} = {}& G(\lambda,\Delta T)y_{\Delta T}^{n-1;k}-G( \lambda,\Delta T)y(T_{n-1}) \\ & {} + \bigl({\boldsymbol{M}}_{n}^{-1} \bigl(0,G(\lambda, \Delta T)y_{ \Delta T}^{0;0}-y_{\Delta T}^{1;k}, \ldots,G(\lambda,\Delta T)y_{ \Delta T}^{n-2;k}-y_{\Delta T}^{n-1;k} \bigr)^{T} \bigr)_{n}. \end{aligned}$$

Then

$$\begin{aligned} \varepsilon ^{n;k+1} = G(\lambda,\Delta T)\varepsilon ^{n-1;k}+ \bigl({\boldsymbol{M}}_{n}^{-1} \bigl(0,G( \lambda,\Delta T)y_{\Delta T}^{0}-y_{ \Delta T}^{1;k}, \ldots,G(\lambda,\Delta T)y_{\Delta T}^{n-2;k}-y_{ \Delta T}^{n-1;k} \bigr)^{T} \bigr)_{n}, \end{aligned}$$

where we assume by the induction hypothesis that \(|\varepsilon ^{n;k}| \leq c_{k}(T)\Delta T^{k}\). Then we rewrite

$$\begin{aligned} &G(\lambda,\Delta T)y_{\Delta T}^{p-1;k}-y_{\Delta T}^{p;k} \\ &\quad = G( \lambda,\Delta T) \bigl(y_{\Delta T}^{p-1;k}-G \bigl( \lambda,(p-1)\Delta T\bigr)y^{0}_{ \Delta T} \bigr)- \bigl(y_{\Delta T}^{p;k}-G (\lambda,p\Delta T)y^{0}_{ \Delta T} \bigr) \\ &\quad = G(\lambda,\Delta T)\varepsilon ^{p-1;k}-\varepsilon ^{p;k}. \end{aligned}$$

Next, we denote the triangular matrix \({\boldsymbol{N}}_{n}=\{N_{ij}\}_{1\leq i,j\leq n}:={\boldsymbol{M}}_{n}^{-1}\)

$$\begin{aligned} \varepsilon ^{n;k+1} = G(\lambda,\Delta T)\varepsilon ^{n-1;k} + \sum_{j=2}^{n}N_{nj} \bigl(G( \lambda,\Delta T)\varepsilon ^{j-2;k}- \varepsilon ^{j-1;k} \bigr). \end{aligned}$$

As \(N_{nn}=1\) and \(N_{n-1,n}=0\), and using arguments similar to those in [29] (although algebraically somewhat heavier), we get that there exists \(c>0\) such that

$$\begin{aligned} \bigl\vert \varepsilon ^{n;k+1} \bigr\vert \leq c\Delta T^{2} \sum_{j=0}^{n-1} \bigl\vert \varepsilon ^{j;k} \bigr\vert . \end{aligned}$$

That is, as \(n\Delta T\leq T\), there exists \(c_{k+1}(T)>0\) such that

$$\begin{aligned} \varepsilon ^{n;k+1} \leq c_{k+1}(T)\Delta T^{k+1}. \end{aligned}$$

This concludes the proof. □

Denoting by \(y_{\Delta T}^{n;k}\) (resp. \(y_{\Delta t}^{n;k}\)) the solution on the coarse (resp. fine) grid, we get

$$\begin{aligned} \bigl\vert y_{\Delta T}^{n;k}-y(T_{n}) \bigr\vert \leq \bigl\vert y_{\Delta T}^{n;k}-y_{\Delta t}^{n;k} \bigr\vert + \bigl\vert y_{\Delta t}^{n;k}-y(T_{n}) \bigr\vert . \end{aligned}$$

Then, for \(y_{\Delta T}^{1;k}-G(\lambda,\Delta T)y^{0}_{\Delta T}\), we have

$$\begin{aligned} \bigl\vert y_{\Delta T}^{1;k}-G(\lambda,\Delta T)y^{0}_{\Delta T} \bigr\vert \leq \bigl\vert y_{ \Delta T}^{1;k}-y_{\Delta t}^{1;k} \bigr\vert + \bigl\vert y_{\Delta t}^{1;k}-G(\lambda, \Delta T)y_{\Delta T}^{0} \bigr\vert . \end{aligned}$$

On the fine grid \(|y_{\Delta t}^{1;k}-G(\lambda,\Delta T)y_{\Delta T}^{0}| \leq c \Delta t\Delta T\). This comes from the fact that the length of the interval is ΔT and Δt is the fine grid time step. By induction, we have the following.

Corollary 2.1

At iteration k, the parareal algorithm in which Gorenflo’s scheme is used on both the coarse and fine grids is a method of order k, that is,

$$\begin{aligned} \bigl\vert y_{\Delta T}^{n;k}-y(T_{n}) \bigr\vert + \max_{t\in [T_{n};T_{n+1}]} \bigl\vert y_{ \Delta t}^{n;k}(t)-y(t) \bigr\vert \leq c_{k}(T)\Delta T^{k} \quad\textit{for all } 0\leq n \leq N-1. \end{aligned}$$

A detailed proof can actually be deduced from the convergence theorem presented in [38].

Numerical experiment 2. We present some simple experiments to illustrate the method developed above. We still consider a benchmark presented in [17]: for \(0<\alpha <1\),

$$\begin{aligned} D_{t}^{\alpha }y(t) = -y(t) + t^{2} + \frac {2t^{2-\alpha }}{\varGamma (3-\alpha )},\qquad y(0)=0,\quad t \in [0,1], \end{aligned}$$

for which an explicit solution \(y_{\mathrm{exact}}(t)=t^{2}\) exists. We denote by \({\mathbf{Y}}^{n;k}_{\Delta T}\) the parareal/Gorenflo solution projected on the coarse grid at parareal iteration k. We then report the error \(\max_{1\leq n \leq N}\|{\mathbf{Y}}^{n}_{\mathrm{exact}} - {\mathbf{Y}}_{ \Delta T}^{n;k}\|_{2}\) in logscale in Fig. 3 (Left) for different values of α, with \(\Delta t=\Delta T/R\) and \(R=8\), \(\Delta T=2^{-6}\). The test shows in particular the strong dependence of the convergence rate on the fractional derivative order. We also report in Fig. 3 (Right), for \(\alpha =0.5\), \(\Delta T=2^{-3}\), and respectively \(R=2,4,8,16\) subdomains, the \(\ell ^{\infty }\)-norm error as a function of k, that is,

$$\begin{aligned} \max_{1\leq n\leq N} \bigl\Vert {\mathbf{Y}}_{\Delta T}^{n;k+1} - {\mathbf{Y}}_{ \Delta T}^{n;k} \bigr\Vert _{\infty }. \end{aligned}$$
Figure 3. Numerical experiment 2. \(\|{\mathbf{Y}}^{n}_{\mathrm{exact}} - {\mathbf{Y}}_{\Delta T}^{n}\|_{2}\) as a function of k with (Left) \(R=4\), \(\Delta T=2^{-6}\) and \(\alpha =0.2,0.4,0.6,0.8\). (Right) \(\Delta T=2^{-3}\) and \(R=2,4,8,16\)

In Fig. 4, we finally report the graph of convergence, calculated from \(\max_{1\leq n \leq N}\|{\mathbf{Y}}_{\Delta T}^{n;k_{\infty }}-y(T_{n}) \|_{\infty }\) as a function of coarse time-steps \(\Delta T=1/2^{i}\) for \(i=3,4,5,6\) and \(R=4\) subdomains. All these experiments illustrate the convergence and efficiency of the parareal method combined with Gorenflo’s scheme.

Figure 4. Numerical experiment 2. Convergence graph for \(R=4\)

2.3 Computational complexity

We discuss the computational complexity and data storage of Gorenflo’s scheme for N time iterations on a given time grid. At iteration \(1\leq n \leq N\), the number of operations for updating the solution is linear in n, that is, the overall computational complexity is \(O(N^{2})\). Moreover, at iteration N, \(O(N)\) data must be stored. Regarding the computational complexity of the parareal approach, we denote by N the number of iterations on the coarse grid and by NR the number on the fine grid. At each parareal iteration k, (i) \(O(N^{2})\) operations are performed sequentially on the coarse grid, and (ii) \(O(R^{2})\) operations are performed on each fine grid covering \([T_{n};T_{n+1}]\), that is, \(O(NR^{2})\) in total. The total number of operations is hence \(O (k_{\infty }N(N+R^{2}) )\). On p processors, the complexity is \(O (k_{\infty }N(N+R^{2})/p )\) per processor. Notice that a sequential computation on the full fine grid requires \(O(N^{2}R^{2})\) operations.
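As a rough illustration of these estimates, with \(N=64\), \(R=16\), and \(k_{\infty }=5\), the parareal approach requires of the order of \(k_{\infty }N(N+R^{2})=5\times 64\times 320\approx 10^{5}\) operations, against \(N^{2}R^{2}\approx 10^{6}\) for a sequential computation on the full fine grid; distributed over \(p=64\) processors, this amounts to roughly \(1.6\times 10^{3}\) operations per processor (purely indicative orders of magnitude).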

3 Space-time parallel algorithm for linear space-time fractional differential equations

The approach presented above can be extended to space- and time-fractional equations:

$$\begin{aligned} D_{t}^{\alpha }u(t,x) = -\lambda (x) D_{x}^{\beta }u(t,x),\qquad u(0,x)=u_{0} \in L^{2}({\mathbb {R}}), \end{aligned}$$
(17)

with \(0<\alpha <1\), \(\beta >0\), and \(\lambda \in C_{b}^{0}({\mathbb {R}})\) on \([0;T]\times {\mathbb {R}}\), where \(D_{t}^{\alpha }\) denotes Caputo’s derivative and \(D_{x}^{\beta }\) Riesz’s derivative. In this case it is possible to implement a coupled pseudospectral parareal method. Formally, denoting by \(\widehat{u}(t,\cdot )=\mathcal{F}_{x}u(t,\cdot )\) the Fourier transform in space (where ξ denotes the co-variable associated with x), we first assume that λ is a real constant. We directly have

$$\begin{aligned} D_{t}^{\alpha }\widehat{u}(t,\xi ) = -\lambda ({\mathtt{i}} \xi )^{ \beta }\widehat{u}(t,\xi ). \end{aligned}$$

We can then directly apply the parareal-Gorenflo method derived above. For each ξ, we set \(\lambda _{\xi }:=\lambda ({\mathtt{i}} \xi )^{\beta }\) and \(y_{\xi }(t):=\widehat{u}(t,\xi )\), hence

$$\begin{aligned} D_{t}^{\alpha }y_{\xi }(t) = -\lambda _{\xi } y_{\xi }(t). \end{aligned}$$

In this case the Fourier transform can easily be implemented in parallel, while the time-fractional derivative can be parallelized using the parareal method. However, whenever λ is no longer constant, even the sequential algorithm is not that simple anymore.
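As a minimal illustration (a sketch, not the article's implementation), for constant λ the diagonal form above can be advanced in time by applying the scalar Gorenflo scheme (7), with complex per-mode coefficient \(\lambda ({\mathtt{i}}\xi _{p})^{\beta }\), to each discrete Fourier mode; as in Sect. 2.1, the scheme is explicit, so a stability restriction on ΔT analogous to Proposition 2.1 applies mode by mode. All names are illustrative.

```python
import numpy as np
from math import gamma

def gorenflo_weights(alpha, n):
    # cumulative weights w_i^(alpha) of (8)
    return np.cumsum([gamma(l - alpha) / (gamma(-alpha) * gamma(l + 1))
                      for l in range(1, n + 1)])

def constant_lambda_solver(alpha, beta, lam, u0, T, N, Lx):
    """Mode-by-mode Gorenflo scheme for D_t^alpha u_hat = -lam * (i*xi)^beta * u_hat
    on a uniform periodic grid over [-Lx, Lx] (sketch, constant lam only)."""
    Nx = u0.size
    dT = T / N
    xi = 2 * np.pi * np.fft.fftfreq(Nx, d=2 * Lx / Nx)   # xi_p = p * pi / Lx
    mu = np.zeros(Nx, dtype=complex)
    nz = xi != 0
    mu[nz] = lam * (1j * xi[nz]) ** beta                  # per-mode coefficient; zero mode untouched
    w = gorenflo_weights(alpha, N)
    U = np.zeros((N + 1, Nx), dtype=complex)
    U[0] = np.fft.fft(np.asarray(u0, dtype=complex))
    for n in range(N):
        memory = sum(w[i - 1] * U[n + 1 - i] for i in range(1, n + 2))
        U[n + 1] = -mu * dT ** alpha * U[n] - memory
    return np.fft.ifft(U, axis=1)   # row n approximates u(T_n, .) on the grid
```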

3.1 Sequential algorithm

When λ is space-dependent, it is no longer possible to directly and efficiently diagonalize the equation by a Fourier transform in space. In this case, we detail the proposed algorithm and will then focus on the parallelization aspects. We denote the spatial grid points and indices as follows:

$$\begin{aligned} \mathcal{D}_{N_{x}} = \{x_{k}\}_{k\in \mathcal{O}_{N_{x}}},\quad \mathcal{O}_{N_{x}} = \{ k \in {\mathbb {N}}/ k=0,\ldots,N_{x}-1 \}, \end{aligned}$$

and the uniform mesh size by \(\Delta x:=x_{k+1}-x_{k}=2L_{x}/N_{x}\) for the entire domain \(\mathcal{D}:=[-L_{x},L_{x}]\). The corresponding discrete wavenumbers are defined by \(\xi _{p}=p\pi /L_{x}\) for \(p\in \mathcal{P}_{N_{x}}:= \{-N_{x}/2,\ldots,N_{x}/2-1\}\). In practice, (17) is hence solved on a bounded spatial domain \([-L_{x},L_{x}]\) with periodic boundary conditions. Regarding the pseudospectral approximations, we use the following notation:

$$ \widehat{u}_{p}(t) = \sum_{k=0}^{N_{x}-1}u(t,x_{k})e^{-{ \mathtt{i}}\xi _{p}(x_{k}+L_{x})},\qquad \widetilde{u}_{k}(t) = \frac {1}{N_{x}}\sum _{p=-N_{x}/2}^{N_{x}/2-1} \widehat{u}_{p}(t)e^{{\mathtt{i}}\xi _{p}(x_{k}+L_{x})}. $$
(18)

That is, \(\widehat{u}_{p}(t)\) is the (approximate) discrete Fourier transform of the grid values of \(u(t,\cdot )\), and \(\widetilde{u}_{k}(t)\) is an approximation to \(u(t,x_{k})\) obtained through the inverse discrete Fourier transform, with, for some \(c>0\),

$$\begin{aligned} \max_{k \in \mathcal{O}_{N_{x}}} \bigl\vert \widetilde{u}_{k}(t) - \mathcal{F}^{-1}(u) (t,x_{k}) \bigr\vert \leq cN_{x}^{1/2-s} \bigl\Vert u(t,\cdot ) \bigr\Vert _{H^{s}} \, , \end{aligned}$$

for \(s>1/2\) (in 1d) and \(u(t,\cdot )\in L^{1}\cap H^{s}\) periodic. In general, \(u(t,x_{k}) \neq \widetilde{u}_{k}(t)\). Such a pseudospectral projection is for instance studied in [8]. Typically, the high modes which are neglected in the above approximation lead to the following aliasing error estimate for \(u(t,\cdot ) \in H^{r}\): there exists \(c>0\) such that

$$\begin{aligned} \bigl\Vert \widetilde{\widehat{u}}(t,\cdot ) -u(t,\cdot ) \bigr\Vert _{H^{s}} \leq c N_{x}^{s-r} \bigl\Vert u(t,\cdot ) \bigr\Vert _{H^{r}} \, , \end{aligned}$$

for some \(r>s>1/2\) (in 1d) and \(u(t,\cdot )\in L^{1}\cap H^{r}\) periodic. We again refer to [8] for details.

We then introduce a discrete operator \([[ D_{x}^{\beta } ]]\) approximating \(\mathcal{F}^{-1} (({\mathtt{i}}\xi )^{\beta }\mathcal{F} )\), that is, \(D_{x}^{\beta }u(t, x_{k})\) is approximated by

$$\begin{aligned} \bigl[\bigl[D_{x}^{\beta }\bigr] \bigr]u_{k}(t) := \frac {1}{N_{x}}\sum _{p=-N_{x}/2}^{N_{x}/2-1}({ \mathtt{i}}\xi _{p})^{\beta } \widehat{\widetilde{u_{p}}}(t)e^{{\mathtt{i}}\xi _{p}((x_{k}+L_{x}) + \beta \pi )}. \end{aligned}$$
(19)
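A possible numpy sketch of the underlying Fourier-multiplier operation \(\mathcal{F}^{-1} (({\mathtt{i}}\xi )^{\beta }\mathcal{F} u )\) on a uniform periodic grid is given below; the precise phase convention used in (19) (taken from [6]) is not reproduced, and the function name is illustrative.

```python
import numpy as np

def riesz_derivative(u, beta, Lx):
    """Pseudospectral fractional derivative F^{-1}((i*xi)^beta F(u)) of the grid
    function u on the uniform periodic grid x_k = -Lx + k * 2Lx / Nx."""
    Nx = u.size
    xi = 2 * np.pi * np.fft.fftfreq(Nx, d=2 * Lx / Nx)   # xi_p = p * pi / Lx
    symbol = np.zeros(Nx, dtype=complex)
    nz = xi != 0
    symbol[nz] = (1j * xi[nz]) ** beta                   # principal branch; zero mode set to 0
    return np.fft.ifft(symbol * np.fft.fft(u))
```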

The approximation (19) was analyzed in [6] for space-fractional partial differential equations and in [5] for the Dirac equation. Interestingly, when solving the equation as an initial boundary value problem (on a bounded domain \([-L_{x},L_{x}]\)), the above spectral approach is still applicable. Indeed, imposing periodic boundary conditions \(u(t,L_{x})=u(t,-L_{x})\), it is in principle possible to include absorbing layers \([-L_{x},-L^{*}_{x}] \cup [L^{*}_{x},L_{x}]\) in (17) in order (i) to avoid potential artificial wave reflections and (ii) to absorb periodic waves traveling from one boundary to the next. Basically, denoting by S an absorbing function

$$\begin{aligned} S(x) = \textstyle\begin{cases} 1 & \text{if } \vert x \vert < L_{x}^{*}, \\ 1+e^{{\mathtt{i}}\theta }\widetilde{\sigma }(x) & \text{if } L_{x}^{*} \leq \vert x \vert < L_{x}, \end{cases}\displaystyle \end{aligned}$$

where the absorbing function \(\widetilde{\sigma }:\mathcal{D}\rightarrow {\mathbb {R}}\) is defined [3] as (\(\alpha \in {\mathbb {N}}^{*}\))

$$\begin{aligned} \widetilde{\sigma }(x) = \textstyle\begin{cases} \sigma ( \vert x \vert -L_{x}), & L_{x}^{*} \leq \vert x \vert < L_{x}, \\ 0, & \vert x \vert < L_{x}^{*} \end{cases}\displaystyle \end{aligned}$$
(20)

where σ is a traditional absorbing function. The rotation angle θ is usually fixed by the problem under study. Hence the equation is, for instance, transformed into

$$\begin{aligned} D_{t}^{\alpha }u(t,x) = -\frac {\lambda (x)}{S^{\beta }(x)} D_{x}^{ \beta }u(t,x), \qquad u(0,x)=u_{0}. \end{aligned}$$

From a practical point of view, the inclusion of the absorbing function in (17) does not complicate the approximation or the analysis. In the following, we simply consider that S is included in λ.

Hence, the parallel-in-time algorithm is applied, for any \(k \in \mathcal{O}_{N_{x}}\) with \(\lambda _{k} = \lambda (x_{k})\), to

$$\begin{aligned} D_{t}^{\alpha }u_{k}(t) = -\lambda _{k} \bigl[\bigl[D_{x}^{\beta }\bigr]\bigr]u_{k}(t) . \end{aligned}$$

It was shown and numerically observed in [2, 5] that, applied to space-fractional (only) partial differential equations or to standard partial differential equations, this (i) matrix-free, (ii) Fourier-based pseudospectral approximation allows for spectral convergence while avoiding explicit convolution product computations. We then rewrite

$$\begin{aligned} u_{k}(t) = u_{k}(0) -\frac {\lambda _{k}}{ \varGamma (\alpha )} \int _{0}^{t}(t- \tau )^{\alpha -1}\bigl[ \bigl[D_{x}^{\beta }\bigr]\bigr]u_{k}( \tau )\,d\tau. \end{aligned}$$

This pseudospectral-type approach was also used, for instance, for fractional and standard partial differential equations in [1, 3, 4, 6]. The pseudospectral-Gorenflo method reads, on a (coarse) time-grid \(T_{n}=n\Delta T\) with \(n=0,\ldots,N\) and for \(k \in \mathcal{O}_{N_{x}}\),

$$\begin{aligned} u_{k}^{n+1} = -\lambda \Delta T^{\alpha }u_{k}^{n} - \lambda _{k} \sum _{i=1}^{n+1}w_{i}^{(\alpha )} \bigl[\bigl[D_{x}^{\beta }\bigr]\bigr]u_{k}^{n+1-i}, \end{aligned}$$
(21)

where \({\mathbf{u}}^{n} = \{u_{k}^{n} \}_{k \in \mathcal{O}_{N_{x}}}\) approximates \(\{u(T_{n},x_{k}) \}_{k \in \mathcal{O}_{N_{x}}}\) and

$$\begin{aligned} \bigl[\bigl[D_{x}^{\beta }\bigr] \bigr]u_{k}^{n} := \frac {1}{N_{x}} \sum _{p=-N_{x}/2}^{N_{x}/2-1}({\mathtt{i}}\xi _{p})^{\beta } \widehat{\widetilde{u_{p}^{n}}}e^{{\mathtt{i}}\xi _{p}((x_{k}+L_{x}) + \beta \pi )}. \end{aligned}$$
(22)

Denoting \(\varLambda:=[\lambda _{1},\ldots,\lambda _{N_{x}}]^{T}\otimes {\mathbf{1}} \in {\mathbb {R}}^{N_{x}\times N_{x}}\), and 1 the unit vector in \({\mathbb {R}}^{N_{x}}\), the numerical scheme reads overall

$$\begin{aligned} {\mathbf{u}}^{n+1} = - \Delta T^{\alpha } \varLambda {\mathbf{u}}^{n} - \varLambda \sum_{i=1}^{n+1}w_{i}^{(\alpha )} \bigl[\bigl[D_{x}^{\beta }\bigr]\bigr]{\mathbf{u}}^{n+1-i}. \end{aligned}$$
(23)
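For space-dependent λ(x), a sketch of one possible explicit pseudospectral-Gorenflo update is given below; it applies the scalar scheme (7) with right-hand side \(-\lambda (x)[[D_{x}^{\beta }]]u\) on the grid, which may distribute the factors λ and \([[D_{x}^{\beta }]]\) slightly differently from the printed form of (21)/(23), so the exact convention should be checked against the original article. All names are illustrative.

```python
import numpy as np
from math import gamma

def gorenflo_weights(alpha, n):
    # cumulative weights w_i^(alpha) of (8)
    return np.cumsum([gamma(l - alpha) / (gamma(-alpha) * gamma(l + 1))
                      for l in range(1, n + 1)])

def pseudospectral_gorenflo(alpha, beta, lam, u0, T, N, Lx):
    """Sketch: explicit Gorenflo time stepping combined with the Fourier-multiplier
    spatial fractional derivative for D_t^alpha u = -lam(x) D_x^beta u, periodic BCs.
    `lam` and `u0` are arrays of lambda(x_k) and u(0, x_k) on the uniform grid."""
    Nx = u0.size
    dT = T / N
    xi = 2 * np.pi * np.fft.fftfreq(Nx, d=2 * Lx / Nx)       # xi_p = p * pi / Lx
    symbol = np.zeros(Nx, dtype=complex)
    symbol[xi != 0] = (1j * xi[xi != 0]) ** beta             # cf. riesz_derivative above
    frac_dx = lambda v: np.fft.ifft(symbol * np.fft.fft(v))
    w = gorenflo_weights(alpha, N)
    u = np.zeros((N + 1, Nx), dtype=complex)
    u[0] = u0
    for n in range(N):
        memory = sum(w[i - 1] * u[n + 1 - i] for i in range(1, n + 2))
        u[n + 1] = -lam * dT ** alpha * frac_dx(u[n]) - memory
    return u
```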

From traditional numerical analysis (see [8, 10, 11, 24, 26]), we expect that, (i) for \(u_{0}\in H^{s}({\mathbb {R}})\), and (ii) denoting by \({\mathbf{u}}^{n}\) the pseudospectral parareal approximation of \(u(T_{n},\cdot )\), there exists \(c(T;u_{0};k)>0\) such that after k parareal iterations

$$\begin{aligned} \sup_{1\leq n \leq N_{T}} \bigl\Vert {\mathbf{u}}^{n} - u(T_{n},\cdot ) \bigr\Vert _{2} \leq c(k;T;u_{0}) \bigl(\Delta T^{k} + N_{x}^{-s} \bigr), \end{aligned}$$

where \(\| \cdot \|_{2}\) denotes the \(\ell ^{2}(\Delta x{\mathbb {Z}})\)-norm.

Numerical experiment 3. In order to illustrate the spectral convergence in space, we propose a simple test over \([0;T]\times [-L_{x},L_{x}]\) with periodic boundary conditions at \(\pm L_{x}\)

$$\begin{aligned} D_{t}^{1/2}u(t,x) = -\lambda (x) D_{x}^{1/2}u(t,x), \qquad u(0,x)=u_{0} , \end{aligned}$$
(24)

with \(u_{0}(x)=\exp (-x^{2}+{\mathtt{i}}k_{0}x)\cos (\pi x/2)/\mathcal{N}\), where \(k_{0}=-1\), \(T=0.1\), \(L_{x}=25\), \(\lambda (x)=\exp (-x^{2}/100)\), and \(\mathcal{N}\) is a normalization constant such that \(\|u_{0}\|_{L^{2}}=1\). As far as we know, there is no explicit solution to this equation. We choose \(\Delta T=10^{-2}\) and vary Δx. We report in logscale the \(\ell ^{2}\)-norm error with respect to a reference solution at time \(t=T\) (computed on a \(5\times 10^{5}\)-point grid) as a function of the space step, from \(64\times 10^{-4}\) to \(4\times 10^{-4}\), in Fig. 5 (Left), and the solution at final time T (Right). The experiment again shows the nice convergence properties of the proposed parallel methodology.

Figure 5. Numerical experiment 3. (Left) Graph of convergence \(\|{\mathbf{u}}^{N_{T}}_{\Delta T}-u_{\mathrm{exact}}(T,\cdot )\|_{2}\) as a function of ΔT in logscale. (Right) Solution to (24) at \(t=T\)

3.2 Space and time parallelization

In the following, we assume that p nodes/processors are used to solve a fractional space-time differential equation in parallel.

Parallelization in space. The parallelization in space is relatively straightforward and is two-fold. We assume in the following that we have access to p processors.

  1.

    It is first based on the parallelization of the discrete Fourier transform (or fast Fourier transform), namely of \([[D_{x}^{\beta }]]u_{k}^{n}\), which approximates \(\mathcal{F}^{-1} (({\mathtt{i}}\xi )^{\beta }\mathcal{F} )u(t,x_{k})\), that is, on the parallel computation of the finite sums involved in the discrete Fourier transforms. This is typically implemented thanks to FFTW [18] on \(\mathcal{O}_{N_{x}}\) and \(\mathcal{P}_{N_{x}}\).

  2.

    Once \([[D_{x}^{\beta }]]{\mathbf{u}}^{n}\) is computed, in order to update (in time) the approximate solution, we decompose

    $$\begin{aligned} {\mathbf{u}}^{n}_{\ell } = \bigl\{ u_{k+(\ell -1)N_{x}^{(p)}}^{n} \bigr\} _{1\leq k \leq N_{x}^{(p)}}, \qquad {\mathbf{u}}^{n} = \bigl\{ u_{k+(\ell -1)N_{x}^{(p)}}^{n} \bigr\} _{1\leq k\leq N_{x}^{(p)};1\leq \ell \leq p}, \end{aligned}$$

    where \(N_{x}^{(p)}=N_{x}/p\) and \(1\leq \ell \leq p\). The following algorithm is embarrassingly parallel and does not require any communication: for \(\ell =1,\ldots,p\),

    $$\begin{aligned} {\mathbf{u}}_{\ell }^{n+1} = -\Delta T^{\alpha }\varLambda {\mathbf{u}}_{\ell }^{n} - \varLambda \sum_{i=1}^{n+1}w_{i}^{(\alpha )} \bigl[\bigl[D_{x}^{\beta }\bigr]\bigr]{\mathbf{u}}_{ \ell }^{n+1-i}. \end{aligned}$$
    (25)

For a very large number of processors \(p \in {\mathbb {N}}\backslash \{0\}\), the first step of this parallelization in space, namely the parallelization of the discrete (fast) Fourier transform, can become inefficient, in particular of course if \(p =O(N_{x})\). The latter situation can indeed occur on high-performance computers, which can possess hundreds of thousands of processors. A coupling with the parallelization in time then becomes relevant and is discussed in the following paragraph.

Parallelization in space and time. For a given number of processors p, we denote by \(p_{x}\) and \(p_{t}\) two integers such that \(p=p_{x}+p_{t}\). The key element in the simultaneous parallelization in space and time is the commutation of \(D^{\alpha }_{t}\) and \(\lambda (x)D^{\beta }_{x}\), that is, \([D^{\alpha }_{t},\lambda (x)D^{\beta }_{x}]=0\), where \([\cdot,\cdot ]\) is the operator commutator. Whenever p is very large, we propose a parallelization in space and time using \(p_{x}\) (resp. \(p_{t}\)) processors for the parallelization in space (resp. time). More specifically, we decompose \(\mathcal{O}_{N_{x}}\) into \(p_{x}\) disjoint subdomains \(\mathcal{O}_{N_{x}} = \bigcup_{\ell =1,\ldots,p_{x}}\mathcal{O}^{( \ell )}_{N_{x}}\), and \(p_{t}\) processors are used for the parallelization in time. That is, for \(r_{\ell } \in \mathcal{O}^{(\ell )}_{N_{x}}\) with \(\ell \in \{1,\ldots,p_{x}\}\), we perform a parallelization in time, as described in Sect. 2. The parallelization in space requires standard FFT parallelizations (Step 1) to compute \([[D_{x}^{\beta }]]{\mathbf{u}}^{n}\) on \(p_{x}\) processors, and Algorithm (25) (Step 2), which is embarrassingly parallel, with

$$\begin{aligned} {\mathbf{u}}^{n+1;k}_{\Delta T;\ell } = \mathcal{C}_{\Delta T} \bigl({\mathbf{u}}^{n;k}_{ \Delta T;\ell } \bigr) - \mathcal{C}_{\Delta T} \bigl({\mathbf{u}}^{n;k-1}_{ \Delta T;\ell } \bigr) + \mathcal{F}_{\Delta T} \bigl({\mathbf{u}}^{n;k-1}_{ \Delta t;\ell } \bigr), \end{aligned}$$
(26)

where \({\mathbf{u}}^{n;k}_{\Delta T;\ell }\) denotes the coarse grid in-time approximation of u (i) at time \(T_{n}\), (ii) at parareal iteration k, and (iii) at the grid points \(r_{\ell } \in \mathcal{O}^{(\ell )}_{N_{x}}\). The parallelization in time requires communications, see Fig. 6.

Figure 6. Parallelization in space and time

4 Conclusion

We have proposed a simple extension of the parareal method combined with a fractional equation solver, namely Gorenflo’s scheme. The same strategy, based on the parareal method, can naturally be adapted to other FODE solvers and still benefit from the outstanding convergence properties of the parareal method. Some mathematical properties, such as stability and accuracy, were also established along with numerical experiments (Figs. 1, 4), which allowed us to illustrate the analytical properties proven in the paper and to show the feasibility of the approach from a practical point of view. An extension to parallel algorithms for space-time fractional PDE solvers was also proposed. The spatial parallelization relies on (i) the Riesz derivative and (ii) the parallelization of the fast Fourier transform, and was successfully combined with the parareal-based FODE solver, see Fig. 5.

The extension of the proposed strategy to nonlinear scalar equations is currently under investigation, as well as its application to fractional physical models.

References

  1. Antoine, X., Besse, C., Rispoli, V.: High-order IMEX-spectral schemes for computing the dynamics of systems of nonlinear Schrödinger/Gross–Pitaevskii equations. J. Comput. Phys. 327, 252–269 (2016)


  2. Antoine, X., Fillion-Gourdeau, F., Lorin, E., MacLean, S.: Pseudospectral computational methods for the time-dependent Dirac equation in static curved spaces. J. Comput. Phys. 411, 109412 (2020)


  3. Antoine, X., Geuzaine, C., Tang, Q.: Coupling spectral methods and perfectly matched layer for simulating the dynamics of nonlinear Schrödinger equations. Application to rotating Bose–Einstein condensates (2019, submitted)

  4. Antoine, X., Lorin, E.: Computational performance of simple and efficient sequential and parallel Dirac equation solvers. Comput. Phys. Commun. 220, 150–172 (2017)


  5. Antoine, X., Lorin, E.: A simple pseudospectral method for the computation of the time-dependent Dirac equation with perfectly matched layers. J. Comput. Phys. 395, 583–601 (2019)


  6. Antoine, X., Lorin, E.: Towards perfectly matched layers for time-dependent space fractional PDEs. J. Comput. Phys. 391, 59–90 (2019)


  7. Antoine, X., Lorin, E., Zhang, Y.: Derivation and analysis of computational methods for fractional Laplacian equations with absorbing layers (2020, submitted)

  8. Bardos, C., Tadmor, E.: Stability and spectral convergence of Fourier method for nonlinear problems: on the shortcomings of the 2/3 de-aliasing method. Numer. Math. 129(4), 749–782 (2015)


  9. Baudron, A.-M., Lautard, J.-J., Maday, Y., Mula, O.: The parareal in time algorithm applied to the kinetic neutron diffusion equation. Lect. Notes Comput. Sci. Eng. 98, 437–444 (2014)


  10. Chechkin, A.V., Gorenflo, R., Sokolov, I.M.: Retarding subdiffusion and accelerating superdiffusion governed by distributed-order fractional diffusion equations. Phys. Rev. E, Stat. Phys. Plasmas Fluids Relat. Interdiscip. Topics 66(4), 7 (2002)

  11. Chechkin, A.V., Gorenflo, R., Sokolov, I.M.: Fractional diffusion in inhomogeneous media. J. Phys. A, Math. Gen. 38(42), L679–L684 (2005)

  12. De Oliveira, E.C., Tenreiro Machado, J.A.: A review of definitions for fractional derivatives and integral. Math. Probl. Eng. 2014, Article ID 238459 (2014)

  13. Di Nezza, E., Palatucci, G., Valdinoci, E.: Hitchhiker’s guide to the fractional Sobolev spaces. Bull. Sci. Math. 136(5), 521–573 (2012)

  14. Diethelm, K., Ford, N.J.: Analysis of fractional differential equations. J. Math. Anal. Appl. 265(2), 229–248 (2002)

  15. Ezzat, M.A., El-Karamany, A.S., El-Bary, A.A.: Thermo-viscoelastic materials with fractional relaxation operators. Appl. Math. Model. 39(23–24), 7499–7512 (2015)

  16. Fischer, P.F., Hecht, F., Maday, Y.: A parareal in time semi-implicit approximation of the Navier–Stokes equations. Lect. Notes Comput. Sci. Eng. 40, 433–440 (2005)

  17. Ford, N.J., Connolly, J.A.: Comparison of numerical methods for fractional differential equations. Commun. Pure Appl. Anal. 5(2), 289–307 (2006)

  18. Frigo, M., Johnson, S.G.: The design and implementation of FFTW3. Proc. IEEE 93(2), 216–231 (2005)

  19. Gambo, Y.Y., Ameen, R., Jarad, F., Abdeljawad, T.: Existence and uniqueness of solutions to fractional differential equations in the frame of generalized Caputo fractional derivatives. Adv. Differ. Equ. 13, 134 (2018)

  20. Gander, M., Petcu, M.: Analysis of a modified parareal algorithm for second-order ordinary differential equations. In: AIP Conference Proceedings, vol. 936, pp. 233–236 (2007)

  21. Gander, M.J., Jiang, Y.-L., Li, R.-J.: Parareal Schwarz waveform relaxation methods. Lect. Notes Comput. Sci. Eng. 91, 451–458 (2013)

  22. Gander, M.J., Vandewalle, S.: Analysis of the parareal time-parallel time-integration method. SIAM J. Sci. Comput. 29(2), 556–578 (2007)

  23. Garrappa, R.: Numerical solution of fractional differential equations: a survey and a software tutorial. Mathematics 6(2), 16 (2018)

  24. Goodman, J., Hou, T., Tadmor, E.: On the stability of the unsmoothed Fourier method for hyperbolic equations. Numer. Math. 67(1), 93–129 (1994)

  25. Gorenflo, R.: Fractional calculus: some numerical methods. In: CISM Courses and Lectures, vol. 378. Springer, Berlin (1997)

  26. Gorenflo, R., Mainardi, F., Moretti, D., Paradisi, P.: Time fractional diffusion: a discrete random walk approach. Nonlinear Dyn. 29(1–4), 129–143 (2002)

  27. Kumar, S., Kumar, A., Momani, S., Aldhaifallah, M., Nisar, K.S.: Numerical solutions of nonlinear fractional model arising in the appearance of the stripe patterns in two-dimensional systems. Adv. Differ. Equ. 2019, 413 (2019)

  28. Li, X., Xu, C.: Existence and uniqueness of the weak solution of the space-time fractional diffusion equation and a spectral method approximation. Commun. Comput. Phys. 8(5), 1016–1051 (2010)

  29. Lions, J.-L., Maday, Y., Turinici, G.: Résolution d’EDP par un schéma en temps “pararéel”. C. R. Acad. Sci., Sér. 1 Math. 332(7), 661–668 (2001)

  30. Lischke, A., Pang, G., Gulian, M., Song, F., Glusa, C., Zheng, X., Mao, Z., Cai, W., Meerschaert, M., Ainsworth, M., Karniadakis, G.E.: What is the fractional Laplacian? (2018) arXiv:1801.09767v2

  31. Logeswari, K., Ravichandran, C.: A new exploration on existence of fractional neutral integro-differential equations in the concept of Atangana–Baleanu derivative. Phys. A, Stat. Mech. Appl. 544, 123454 (2020)

  32. Maday, Y.: Symposium: Recent Advances on the Parareal in Time Algorithms. In: AIP Conference Proceedings, vol. 1168, pp. 1515–1516 (2009)

  33. Mainardi, F., Gorenflo, R.: On Mittag-Leffler-type functions in fractional evolution processes. J. Comput. Appl. Math. 118(1–2), 283–299 (2000)

  34. Ravichandran, C., Logeswari, K., Jarad, F.: New results on existence in the framework of Atangana–Baleanu derivative for fractional integro-differential equations. Chaos Solitons Fractals 125, 194–200 (2019)

  35. Scalas, E., Gorenflo, R., Mainardi, F.: Fractional calculus and continuous-time finance. Phys. A, Stat. Mech. Appl. 284(1), 376–384 (2000)

  36. Tarasov, V.E.: On chain rule for fractional derivatives. Commun. Nonlinear Sci. Numer. Simul. 30(1–3), 1–4 (2016)

  37. Veeresha, P., Baskonus, H.M., Prakasha, D.G., Gao, W., Yel, G.: Regarding new numerical solution of fractional schistosomiasis disease arising in biological phenomena. Chaos Solitons Fractals 2020, 133 (2020)

  38. Xu, Q., Hesthaven, J.S., Chen, F.: A parareal method for time-fractional differential equations. J. Comput. Phys. 293, 173–183 (2015)

  39. Zeid, S.S.: Approximation methods for solving fractional equations. Chaos Solitons Fractals 125, 171–193 (2019)

  40. Zhou, Y.: Basic Theory of Fractional Differential Equations. World Scientific, Singapore (2014)

Acknowledgements

Not applicable.

Availability of data and materials

Not applicable.

Funding

No funding.

Author information

Contributions

All authors read and approved the final manuscript.

Corresponding author

Correspondence to E. Lorin.

Ethics declarations

Competing interests

The author declares that he has no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Lorin, E. A parallel algorithm for space-time-fractional partial differential equations. Adv Differ Equ 2020, 283 (2020). https://doi.org/10.1186/s13662-020-02744-4
