  • Research
  • Open Access

Alternating-direction implicit finite difference methods for a new two-dimensional two-sided space-fractional diffusion equation

Advances in Difference Equations 2018, 2018:389

https://doi.org/10.1186/s13662-018-1836-z

  • Received: 8 December 2017
  • Accepted: 3 October 2018

Abstract

According to the principle of conservation of mass and the fractional Fick’s law, a new two-sided space-fractional diffusion equation is obtained. In this paper, we present two accurate and efficient numerical methods for solving this equation. First, we discuss an alternating-direction implicit finite difference method with the implicit Euler scheme (ADI–implicit Euler method), which yields an unconditionally stable first-order accurate finite difference method. Second, we combine the ADI approach with a Crank–Nicolson scheme and Richardson extrapolation (ADI–CN method) to obtain an unconditionally stable second-order accurate finite difference method. Finally, numerical solutions of two examples demonstrate the effectiveness of the theoretical analysis.

Keywords

  • Two-dimensional two-sided space-fractional diffusion equations
  • The shifted left Grünwald formula
  • The standard right Grünwald formula
  • ADI methods
  • Richardson extrapolation

1 Introduction

According to the principle of conservation of mass, the continuity equation is given by
$$ \frac{\partial u(x,t)}{\partial t}+\frac{\partial Q(x,t)}{\partial x }=f(x,t), $$
(1.1)
where \(u(x,t)\) is the distribution function of the diffusing quantity, \(Q(x,t)\) is the diffusion flux, and \(f(x,t)\) is the source term. The classical Fick’s law is then modified as
$$ Q(x,t)=-C(x)\frac{\partial}{\partial x } \int_{a}^{x}K_{+}(x,\xi){u(\xi ,t)}\,d \xi-D(x)\frac{\partial}{\partial x } \int_{x}^{b}K_{-}(x,\xi){u(\xi ,t)}\,d\xi, $$
(1.2)
where \(C(x)\) and \(D(x)\) are nonnegative diffusion coefficients, \(K_{+}(x,\xi) \) and \(K_{-}(x,\xi) \) are the kernel functions defined by
$$ \textstyle\begin{cases} K_{+}(x,\xi)=\frac{1}{\Gamma(1-\alpha) }(x-\xi)^{-\alpha}, & a\leq\xi \leq x; \\ K_{-}(x,\xi)=\frac{1}{\Gamma(1-\alpha) }(\xi-x)^{-\alpha}, & x\leq\xi \leq b, \end{cases} $$
(1.3)
where \(0<\alpha<1\). Combining Eqs. (1.1)–(1.3), we get the one-dimensional two-sided space-fractional diffusion equation [1]:
$$ \begin{aligned}[b] &\frac{\partial u(x,t)}{\partial t}=\frac{\partial}{\partial x}\biggl(C(x) \frac{\partial^{\alpha} u(x,t)}{\partial x^{\alpha}}-D(x) \frac{\partial ^{\alpha} u(x,t)}{\partial(-x)^{\alpha}}\biggr)+f(x,t), \\ &\quad a\leq x\leq b,0 < \alpha< 1,t>0. \end{aligned} $$
(1.4)
In this paper, we discuss the following two-dimensional two-sided space-fractional diffusion equation:
$$ \begin{aligned}[b] \frac{\partial u(x,y,t)}{\partial t}&=\frac{\partial}{\partial x} \biggl(C_{x}(x,y) \frac{\partial^{\alpha} u(x,y,t)}{\partial x^{\alpha }}-D_{x}(x,y) \frac{\partial^{\alpha} u(x,y,t)}{\partial(-x)^{\alpha }}\biggr) \\ &\quad{}+ \frac{\partial}{\partial y}\biggl(C_{y}(x,y) \frac{\partial^{\beta} u(x,y,t)}{\partial y^{\beta}}-D_{y}(x,y) \frac{\partial^{\beta }u(x,y,t)}{\partial(-y)^{\beta}}\biggr) \\ &\quad{}+f(x,y,t),\quad(x,y)\in\Omega,t>0, \end{aligned} $$
(1.5)
subject to the initial condition
$$ u(x,y,0)=\phi(x,y),\quad (x,y)\in\bar{\Omega}, $$
(1.6)
and the zero Dirichlet boundary conditions
$$ u(a_{1},y,t)=u(a_{2},y,t)=u(x,b_{1},t)=u(x,b_{2},t)=0,\quad t \geq0, $$
(1.7)
where \(\Omega=(a_{1}, a_{2})\times(b_{1}, b_{2})\) is a rectangular domain, \(0<\alpha,\beta<1\), \(C_{x}(x,y)\), \(D_{x}(x,y)\), \(C_{y}(x,y)\), and \(D_{y}(x,y)\) are nonnegative diffusion coefficients, and \(f(x,y,t)\) is the source term. Here \(\frac{\partial^{\gamma} u(x,y,t)}{\partial x^{\gamma}}\) and \(\frac{\partial^{\gamma} u(x,y,t)}{\partial(-x)^{\gamma}}\) (\(\gamma= \alpha\) or β) are the left and right Riemann–Liouville fractional derivatives [2, 3], respectively, defined by
$$\begin{aligned}& \frac{\partial^{\gamma} u(x,y,t)}{\partial x^{\gamma}}=\frac{1}{\Gamma (1-\gamma)}\frac{\partial}{\partial x} \int_{a_{1}}^{x}\frac {u(s,y,t)}{(x-s)^{\gamma}}\,ds, \end{aligned}$$
(1.8)
$$\begin{aligned}& \frac{\partial^{\gamma} u(x,y,t)}{\partial(-x)^{\gamma}}=\frac {-1}{\Gamma(1-\gamma)}\frac{\partial}{\partial x} \int_{x}^{a_{2}}\frac {u(s,y,t)}{(s-x)^{\gamma}}\,ds. \end{aligned}$$
(1.9)
The definitions of \(\frac{\partial^{\gamma} u(x,y,t)}{\partial y^{\gamma }}\) and \(\frac{\partial^{\gamma} u(x,y,t)}{\partial(-y)^{\gamma}}\) are analogous in the y direction. Since explicit analytical solutions of fractional equations are rarely available, many researchers resort to numerical solutions [4–10].

Moreover, a second-order method which combines the alternating-direction implicit approach with the Crank–Nicolson discretization and the Richardson extrapolation for the two-dimensional fractional diffusion equations was studied in [11]. Chen et al. [12] studied preconditioned iterative methods for the linear system arising in the numerical discretization of a two-dimensional space-fractional diffusion equation. Chen et al. [13] discussed the practical alternating-directions implicit method to solve the two-dimensional two-sided space fractional convection diffusion equation on a finite domain. Liu et al. [14] developed an alternating-direction implicit method for the two-dimensional Riesz space fractional diffusion equations with a nonlinear reaction term. Zeng et al. [15] proposed a Crank–Nicolson alternating-direction implicit Galerkin–Legendre spectral method for the two-dimensional Riesz space fractional nonlinear reaction-diffusion equations. Feng et al. [16] presented a second-order method for the space fractional diffusion equation with variable coefficient. Moroney et al. [17] developed a fast Poisson preconditioner for the efficient numerical solution of a class of two-sided nonlinear space-fractional diffusion equations. Chen et al. [18] proposed a fast finite difference approximation for identifying parameters in a two-dimensional space-fractional nonlocal model.

However, less attention has been paid to variable-coefficient FDEs in conservative form, although the diffusion coefficient is generally space- or time-dependent in practical problems. On the numerical side of these two-sided space-fractional diffusion equations in one dimension, Chen et al. [1] developed a fast semi-implicit difference method for a nonlinear one-dimensional two-sided space-fractional diffusion equation with variable diffusivity coefficients. Feng et al. [19] presented a new finite volume method for a one-dimensional two-sided space-fractional diffusion equation. Feng et al. [20] discussed a fast second-order accurate method for a one-dimensional two-sided space-fractional diffusion equation. To our knowledge, studies of finite difference methods for these two-sided space-fractional diffusion equations in two dimensions are limited. This motivates us to develop alternating-direction finite difference methods for the two-dimensional two-sided space-fractional diffusion equation in this paper.

The rest of the paper is organized as follows. In Sect. 2, we introduce some notations and properties. In Sect. 3, we present the ADI–implicit Euler method for this equation and its theoretical analysis. In Sect. 4, we present the ADI–CN method and its theoretical analysis. In Sect. 5, we present numerical experiments to check the accuracy of these methods.

2 Notations and properties

For the numerical approximation of the implicit difference method, we define a uniform grid of mesh points \((x_{i},y_{j},t_{k} )\), \(x_{i}=a_{1}+ih_{1} \) for \(i=0,1,\ldots, N_{x} \); \(y_{j}=b_{1}+jh_{2} \) for \(j=0,1,\ldots,N_{y} \); \(t_{k}=k\tau \), where \(h_{1}=\frac{a_{2}-a_{1}}{N_{x}} \), \(h_{2}=\frac{b_{2}-b_{1}}{N_{y}} \), and τ are the mesh widths in the x, y, and time directions, respectively. Let \(C_{i,j}=C_{x}(x_{i},y_{j})\), \(D_{i,j}=D_{x}(x_{i},y_{j})\), \(\bar{C}_{i,j}=C_{y}(x_{i},y_{j})\), \(\bar {D}_{i,j}=D_{y}(x_{i},y_{j})\), \(f_{i,j}^{k}=f(x_{i},y_{j},t_{k})\). Denote by \(U_{i,j}^{k}\) and \(u_{i,j}^{k}\) the exact and numerical solutions at the mesh point \((x_{i},y_{j},t_{k}) \), respectively. We use the shifted left Grünwald formula and the standard right Grünwald formula to approximate the left and right Riemann–Liouville fractional derivatives, respectively [21, 22]. We have the following formulae:
$$\begin{aligned}& \frac{\partial^{\gamma} u(x_{i},y_{j},t_{k})}{\partial x^{\gamma }}=\frac{1}{h_{1}^{\gamma}}\sum_{s=0}^{i+1}g_{s}^{({\gamma})} u_{i+1-s,j}^{k}+O(h_{1}), \\& \frac{\partial^{\gamma} u(x_{i},y_{j},t_{k})}{\partial(-x)^{\gamma }}=\frac{1}{h_{1}^{\gamma}}\sum_{s=0}^{N_{x}-i}g_{s}^{({\gamma})} u_{i+s,j}^{k}+O(h_{1}), \end{aligned}$$
where \(g_{s}^{({\mu})}\) (\(\mu=\alpha\) or β) are the normalized Grünwald weights [23]
$$ g_{s}^{({\mu})}=(-1)^{s}{\mu\choose s}. $$
The formulae of the y direction are similar to the formulae of the x direction.
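As an illustration beyond the paper’s text, the weights can be generated by the standard recurrence \(g_{0}^{({\mu})}=1\), \(g_{s}^{({\mu})}=g_{s-1}^{({\mu})}(s-1-\mu)/s\), and the resulting Grünwald sum reproduces the left Riemann–Liouville derivative to first order. The sketch below (function names are ours, not the paper’s) checks this on \(u(x)=x^{2}\), whose left derivative of order γ on \([0,1]\) is \(\Gamma(3)x^{2-\gamma}/\Gamma(3-\gamma)\):

```python
import math

def grunwald_weights(mu, n):
    # Normalized Grunwald weights g_s = (-1)^s * binom(mu, s),
    # generated by the recurrence g_0 = 1, g_s = g_{s-1} * (s - 1 - mu) / s.
    g = [1.0]
    for s in range(1, n + 1):
        g.append(g[-1] * (s - 1 - mu) / s)
    return g

def left_rl_derivative(u, a, x, gamma, n):
    # First-order (unshifted) Grunwald-Letnikov approximation of the left
    # Riemann-Liouville derivative of order gamma, 0 < gamma < 1, at x,
    # on a grid of n subintervals of [a, x]; valid here since u(a) = 0.
    h = (x - a) / n
    g = grunwald_weights(gamma, n)
    return sum(g[s] * u(x - s * h) for s in range(n + 1)) / h ** gamma

gamma = 0.5
exact = math.gamma(3.0) / math.gamma(3.0 - gamma)  # value at x = 1
approx = left_rl_derivative(lambda s: s * s, 0.0, 1.0, gamma, 1024)
```

The weights produced this way also satisfy the properties listed in Lemma 1 below, e.g. \(g_{0}^{({\mu})}=1\), \(g_{j}^{({\mu})}<0\) for \(j\geq1\), and positive partial sums.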

Lemma 1

([23])

The normalized Grünwald weights \(g_{s}^{({\mu})}\) with \(0<\mu<1\) satisfy the following properties:
  1. (i) \(\sum _{j=0}^{\infty}g_{j}^{({\mu})}=0\);
  2. (ii) \(g_{0}^{({\mu})}=1\), \(g_{j}^{({\mu})}<0\) for \(j\geq1\);
  3. (iii) \(\sum _{j=0}^{n}g_{j}^{({\mu})}>0\) for any \(n\geq1\);
  4. (iv) \(g_{j+1}^{({\mu})}-g_{j}^{({\mu})}=g_{j+1}^{({\mu+1})}\) for \(j\geq 1\);
  5. (v) \(\sum _{j=0}^{n}g_{j}^{({\mu+1})}<0\) for any \(n\geq1\).
Define the following finite difference operators:
$$\begin{aligned}& \begin{aligned}[b] \delta_{\alpha,x}{u_{i,j}^{k}} &= \frac{1}{h_{1}^{\alpha+1}}\Biggl[\sum_{s=0}^{i} \bigl(C_{i,j}g_{i+1-s}^{({\alpha})} -C_{i-1,j}g_{i-s}^{({\alpha})} \bigr) u_{s,j}^{k}+C_{i,j}g_{0}^{({\alpha })}u_{i+1,j}^{k} \Biggr] \\ &\quad{}+ \frac{1}{h_{1}^{\alpha+1}}\Biggl[\sum_{s=i}^{N_{x}} \bigl(D_{i-1,j}g_{s-i+1}^{({\alpha})} -D_{i,j}g_{s-i}^{({\alpha})} \bigr) u_{s,j}^{k} \\ &\quad{}+ D_{i-1,j}g_{0}^{({\alpha})}u_{i-1,j}^{k} \Biggr], \end{aligned} \end{aligned}$$
(2.1)
$$\begin{aligned}& \begin{aligned}[b] \delta_{\beta,y}{u_{i,j}^{k}} &= \frac{1}{h_{2}^{\beta+1}}\Biggl[\sum_{s=0}^{j} \bigl(\bar{C}_{i,j}g_{j+1-s}^{({\beta})} - \bar{C}_{i,j-1}g_{j-s}^{({\beta})}\bigr) u_{i,s}^{k}+ \bar {C}_{i,j}g_{0}^{({\beta})}u_{i,j+1}^{k} \Biggr] \\ &\quad{}+ \frac{1}{h_{2}^{\beta+1}}\Biggl[\sum_{s=j}^{N_{y}} \bigl(\bar {D}_{i,j-1}g_{s-j+1}^{({\beta})} - \bar{D}_{i,j}g_{s-j}^{({\beta})}\bigr) u_{i,s}^{k} \\ &\quad{}+\bar{D}_{i,j-1}g_{0}^{({\beta})}u_{i,j-1}^{k} \Biggr]. \end{aligned} \end{aligned}$$
(2.2)

3 ADI–implicit Euler method and its theoretical analysis

In this paper, we use the backward Euler scheme for the first-order time derivative. We use the shifted left Grünwald formulae and the standard right Grünwald formulae to approximate the left and right Riemann–Liouville fractional derivatives, respectively [1, 20]. We get a discrete approximation for Eq. (1.5) at the mesh point \((x_{i},y_{j},t_{k})\):
$$ \begin{aligned}[b] \frac{u_{i,j}^{k}-u_{i,j}^{k-1}}{\tau}&\approx\frac {1}{h_{1}} \biggl(C_{x}(x,y_{j}) \frac{\partial^{\alpha} u(x,y_{j},t_{k})}{ \partial x^{\alpha}}-D_{x}(x,y_{j}) \frac{\partial^{\alpha} u(x,y_{j},t_{k})}{\partial(-x)^{\alpha}}\biggr)\bigg|_{x_{i-1}}^{x_{i}} \\ &\quad{}+\frac{1}{h_{2}}\biggl(C_{y}(x_{i},y) \frac{\partial^{\beta} u(x_{i},y,t_{k})}{ \partial y^{\beta}}-D_{y}(x_{i},y) \frac{\partial^{\beta} u(x_{i},y,t_{k})}{\partial(-y)^{\beta }}\biggr)\bigg|_{y_{j-1}}^{y_{j}}+f_{i,j}^{k}. \end{aligned} $$
We can obtain
$$\begin{aligned} \frac{u_{i,j}^{k}-u_{i,j}^{k-1}}{\tau}&\approx\frac{1}{h_{1}^{\alpha +1}}\Biggl[ \Biggl(C_{i,j}\sum_{s=0}^{i+1}g_{s}^{({\alpha})} u_{i+1-s,j}^{k} -D_{i,j}\sum _{s=0}^{N_{x}-i}g_{s}^{({\alpha})} u_{i+s,j}^{k}\Biggr) \\ &\quad{}-\Biggl(C_{i-1,j}\sum_{s=0}^{i}g_{s}^{({\alpha})} u_{i-s,j}^{k}-D_{i-1,j}\sum _{s=0}^{N_{x}-i+1}g_{s}^{({\alpha})} u_{i+s-1,j}^{k}\Biggr)\Biggr] \\ &\quad{}+\frac{1}{h_{2}^{\beta+1}}\Biggl[\Biggl(\bar{C}_{i,j}\sum _{s=0}^{j+1}g_{s}^{({\beta})} u_{i,j+1-s}^{k} -\bar{D}_{i,j}\sum _{s=0}^{N_{y}-j}g_{s}^{({\beta})} u_{i,j+s}^{k}\Biggr) \\ &\quad{}-\Biggl(\bar{C}_{i,j-1}\sum_{s=0}^{j}g_{s}^{({\beta})} u_{i,j-s}^{k}-\bar{D}_{i,j-1}\sum _{s=0}^{N_{y}-j+1}g_{s}^{({\beta})} u_{i,j+s-1}^{k}\Biggr)\Biggr]+f_{i,j}^{k}. \end{aligned}$$
After some rearrangements, the implicit finite difference equation is given by
$$ \begin{aligned}[b] \frac{u_{i,j}^{k}-u_{i,j}^{k-1}}{\tau}&=\frac{1}{h_{1}^{\alpha +1}}\Biggl[ \Biggl(C_{i,j}\sum_{s=0}^{i+1}g_{i+1-s}^{({\alpha})} u_{s,j}^{k} -D_{i,j}\sum _{s=i}^{N_{x}}g_{s-i}^{({\alpha})} u_{s,j}^{k}\Biggr) \\ &\quad{}-\Biggl(C_{i-1,j}\sum_{s=0}^{i}g_{i-s}^{({\alpha})} u_{s,j}^{k}-D_{i-1,j}\sum _{s=i-1}^{N_{x}}g_{s-i+1}^{({\alpha})} u_{s,j}^{k}\Biggr)\Biggr] \\ &\quad{}+\frac{1}{h_{2}^{\beta+1}}\Biggl[\Biggl(\bar{C}_{i,j}\sum _{s=0}^{j+1}g_{j+1-s}^{({\beta})} u_{i,s}^{k} -\bar{D}_{i,j}\sum _{s=j}^{N_{y}}g_{s-j}^{({\beta})} u_{i,s}^{k}\Biggr) \\ &\quad{}-\Biggl(\bar{C}_{i,j-1}\sum_{s=0}^{j}g_{j-s}^{({\beta})} u_{i,s}^{k}-\bar{D}_{i,j-1}\sum _{s=j-1}^{N_{y}}g_{s-j+1}^{({\beta})} u_{i,s}^{k}\Biggr)\Biggr]+f_{i,j}^{k}. \end{aligned} $$
(3.1)
Equation (3.1) may be written as
$$ \begin{aligned}[b] &{u_{i,j}^{k}}- \frac{\tau}{h_{1}^{\alpha+1}}\Biggl[\sum_{s=0}^{i} \bigl(C_{i,j}g_{i+1-s}^{({\alpha})} -C_{i-1,j}g_{i-s}^{({\alpha})} \bigr) u_{s,j}^{k}+C_{i,j}g_{0}^{({\alpha })}u_{i+1,j}^{k} \Biggr] \\ &\qquad{}-\frac{\tau}{h_{1}^{\alpha+1}}\Biggl[\sum_{s=i}^{N_{x}} \bigl(D_{i-1,j}g_{s-i+1}^{({\alpha})} -D_{i,j}g_{s-i}^{({\alpha})} \bigr) u_{s,j}^{k}+D_{i-1,j}g_{0}^{({\alpha })}u_{i-1,j}^{k} \Biggr] \\ &\qquad{}-\frac{\tau}{h_{2}^{\beta+1}}\Biggl[\sum_{s=0}^{j} \bigl(\bar {C}_{i,j}g_{j+1-s}^{({\beta})} - \bar{C}_{i,j-1}g_{j-s}^{({\beta})}\bigr) u_{i,s}^{k}+ \bar {C}_{i,j}g_{0}^{({\beta})}u_{i,j+1}^{k} \Biggr] \\ &\qquad{}-\frac{\tau}{h_{2}^{\beta+1}}\Biggl[\sum_{s=j}^{N_{y}} \bigl(\bar {D}_{i,j-1}g_{s-j+1}^{({\beta})} - \bar{D}_{i,j}g_{s-j}^{({\beta})}\bigr) u_{i,s}^{k}+ \bar {D}_{i,j-1}g_{0}^{({\beta})}u_{i,j-1}^{k} \Biggr] \\ &\quad =u_{i,j}^{k-1}+\tau f_{i,j}^{k}.\end{aligned} $$
(3.2)
Combining Eqs. (2.1)–(2.2), Eq. (3.2) can be written in the operator form
$$ (1-\tau\delta_{\alpha,x}- \tau\delta_{\beta ,y}){u_{i,j}^{k}}=u_{i,j}^{k-1}+ \tau f_{i,j}^{k}. $$
(3.3)
In the following theorem, we show that the method defined by Eq. (3.2) is consistent with model (1.5) of order \(O(\tau+h_{1}+h_{2})\).

Remark 1

The implicit difference scheme Eq. (3.2) can be rewritten as
$$\begin{aligned} &{u_{i,j}^{k}}- \frac{\tau}{h_{1}^{\alpha+1}}\Biggl[\sum_{s=0}^{i}(C_{i,j} -C_{i-1,j})g_{i-s}^{({\alpha})} u_{s,j}^{k}+ \sum_{s=0}^{i+1}C_{i,j}g_{i+1-s}^{({\alpha+1})}u_{s,j}^{k} \Biggr] \\ &\qquad{}-\frac{\tau}{h_{1}^{\alpha+1}}\Biggl[\sum_{s=i}^{N_{x}}(D_{i-1,j} -D_{i,j})g_{s-i}^{({\alpha})} u_{s,j}^{k}+ \sum_{s=i-1}^{N_{x}}D_{i-1,j}g_{s+1-i}^{({\alpha+1})}u_{s,j}^{k} \Biggr] \\ &\qquad{}-\frac{\tau}{h_{2}^{\beta+1}}\Biggl[\sum_{s=0}^{j}( \bar{C}_{i,j} -\bar{C}_{i,j-1})g_{j-s}^{({\beta})} u_{i,s}^{k}+\sum_{s=0}^{j+1} \bar {C}_{i,j}g_{j+1-s}^{({\beta+1})}u_{i,s}^{k} \Biggr] \\ &\qquad{}-\frac{\tau}{h_{2}^{\beta+1}}\Biggl[\sum_{s=j}^{N_{y}}( \bar{D}_{i,j-1} -\bar{D}_{i,j})g_{s-j}^{({\beta})} u_{i,s}^{k}+\sum_{s=j-1}^{N_{y}} \bar {D}_{i,j-1}g_{s+1-j}^{({\beta+1})}u_{i,s}^{k} \Biggr] \\ &\quad =u_{i,j}^{k-1}+\tau f_{i,j}^{k}. \end{aligned}$$
(3.4)

Theorem 1

The implicit Euler method defined by Eq. (3.2) is consistent with model Eq. (1.5) of the order \(O(\tau+h_{1}+h_{2})\).

Proof

Equation (1.5) may be written as
$$ \begin{aligned}[b] \frac{\partial u(x,y,t)}{\partial t}&=\frac{\partial}{\partial x} \bigl(C_{x}(x,y)\bigr) \frac{\partial^{\alpha} u(x,y,t)}{\partial x^{\alpha}} +C_{x}(x,y) \frac{\partial^{\alpha+1} u(x,y,t)}{\partial x^{\alpha+1}} \\ &\quad{}-\frac{\partial}{\partial x}\bigl(D_{x}(x,y)\bigr) \frac{\partial^{\alpha } u(x,y,t)}{\partial(-x)^{\alpha}} -D_{x}(x,y) \frac{\partial^{\alpha+1} u(x,y,t)}{\partial(-x)^{\alpha +1}} \\ &\quad{}+\frac{\partial}{\partial y}\bigl(C_{y}(x,y)\bigr) \frac{\partial^{\beta } u(x,y,t)}{\partial y^{\beta}} +C_{y}(x,y) \frac{\partial^{\beta+1} u(x,y,t)}{\partial y^{\beta+1}} \\ &\quad{}-\frac{\partial}{\partial y}\bigl(D_{y}(x,y)\bigr) \frac{\partial^{\beta }u(x,y,t)}{\partial(-y)^{\beta}} -D_{y}(x,y) \frac{\partial^{\beta+1}u(x,y,t)}{\partial(-y)^{\beta+1}} \\ &\quad{}+f(x,y,t). \end{aligned} $$
(3.5)
From Eq. (3.4), we obtain the local truncation error term.
$$ \begin{aligned}[b] R_{i,j}^{k}&= \frac{U_{i,j}^{k}-U_{i,j}^{k-1}}{\tau} -\frac{1}{h_{1}^{\alpha+1}}\Biggl[\sum_{s=0}^{i}(C_{i,j} -C_{i-1,j})g_{i-s}^{({\alpha})} U_{s,j}^{k}+ \sum_{s=0}^{i+1}C_{i,j}g_{i+1-s}^{({\alpha+1})}U_{s,j}^{k} \Biggr] \\ &\quad{}-\frac{1}{h_{1}^{\alpha+1}}\Biggl[\sum_{s=i}^{N_{x}}(D_{i-1,j} -D_{i,j})g_{s-i}^{({\alpha})} U_{s,j}^{k}+ \sum_{s=i-1}^{N_{x}}D_{i-1,j}g_{s+1-i}^{({\alpha+1})}U_{s,j}^{k} \Biggr] \\ &\quad{}-\frac{1}{h_{2}^{\beta+1}}\Biggl[\sum_{s=0}^{j} (\bar{C}_{i,j} -\bar{C}_{i,j-1})g_{j-s}^{({\beta})} U_{i,s}^{k}+\sum_{s=0}^{j+1} \bar {C}_{i,j}g_{j+1-s}^{({\beta+1})}U_{i,s}^{k} \Biggr] \\ &\quad{}-\frac{1}{h_{2}^{\beta+1}}\Biggl[\sum_{s=j}^{N_{y}} (\bar{D}_{i,j-1} -\bar{D}_{i,j})g_{s-j}^{({\beta})} U_{i,s}^{k}+\sum_{s=j-1}^{N_{y}} \bar {D}_{i,j-1}g_{s+1-j}^{({\beta+1})}U_{i,s}^{k} \Biggr]- f_{i,j}^{k}. \end{aligned} $$
(3.6)
From Eq. (3.5), we get
$$\begin{aligned} R_{i,j}^{k}&= \frac{U_{i,j}^{k}-U_{i,j}^{k-1}}{\tau}-\frac{\partial u(x,y,t)}{\partial t}\bigg|_{i,j}^{k} \\ &\quad{}-\frac{1}{h_{1}^{\alpha}}\Biggl[\sum_{s=0}^{i} \frac {(C_{i,j}-C_{i-1,j})}{h_{1}}g_{i-s}^{({\alpha})} U_{s,j}^{k} -\frac{\partial}{\partial x}\bigl(C_{x}(x,y)\bigr) \frac{\partial^{\alpha} u(x,y,t)}{\partial x^{\alpha}}\bigg|_{i,j}^{k} \Biggr] \\ &\quad{}-\frac{1}{h_{1}^{\alpha+1}}\Biggl[\sum_{s=0}^{i+1}C_{i,j}g_{i+1-s}^{({\alpha+1})}U_{s,j}^{k} -C_{x}(x,y) \frac{\partial^{\alpha+1} u(x,y,t)}{\partial x^{\alpha +1}}\bigg|_{i,j}^{k} \Biggr] \\ &\quad{}-\frac{1}{h_{1}^{\alpha}}\Biggl[\sum_{s=i}^{N_{x}} \frac{(D_{i-1,j} -D_{i,j})}{h_{1}}g_{s-i}^{({\alpha})} U_{s,j}^{k}+ \frac{\partial }{\partial x}\bigl(D_{x}(x,y)\bigr) \frac{\partial^{\alpha} u(x,y,t)}{\partial (-x)^{\alpha}}\bigg|_{i,j}^{k} \Biggr] \\ &\quad{}-\frac{1}{h_{1}^{\alpha+1}}\Biggl[\sum_{s=i-1}^{N_{x}}D_{i-1,j}g_{s+1-i}^{({\alpha+1})}U_{s,j}^{k}- D_{x}(x,y) \frac{\partial^{\alpha+1} u(x,y,t)}{\partial(-x)^{\alpha +1}}\bigg|_{i,j}^{k} \Biggr] \\ &\quad{}-\frac{1}{h_{2}^{\beta}}\Biggl[\sum_{s=0}^{j} \frac{(\bar{C}_{i,j} -\bar{C}_{i,j-1})}{h_{2}}g_{j-s}^{({\beta})} U_{i,s}^{k}- \frac{\partial}{\partial y}\bigl(C_{y}(x,y)\bigr) \frac{\partial^{\beta} u(x,y,t)}{\partial y^{\beta}}\bigg|_{i,j}^{k} \Biggr] \\ &\quad{}-\frac{1}{h_{2}^{\beta+1}}\Biggl[\sum_{s=0}^{j+1} \bar {C}_{i,j}g_{j+1-s}^{({\beta+1})}U_{i,s}^{k}- C_{y}(x,y) \frac{\partial^{\beta+1} u(x,y,t)}{\partial y^{\beta +1}}\bigg|_{i,j}^{k} \Biggr] \\ &\quad{}-\frac{1}{h_{2}^{\beta}}\Biggl[\sum_{s=j}^{N_{y}} \frac{(\bar{D}_{i,j-1} -\bar{D}_{i,j})}{h_{2}}g_{s-j}^{({\beta})} U_{i,s}^{k} +\frac{\partial}{\partial y}\bigl(D_{y}(x,y)\bigr) \frac{\partial^{\beta }u(x,y,t)}{\partial(-y)^{\beta}}\bigg|_{i,j}^{k} \Biggr] \\ &\quad{}-\frac{1}{h_{2}^{\beta+1}}\Biggl[\sum_{s=j-1}^{N_{y}} \bar {D}_{i,j-1}g_{s+1-j}^{({\beta+1})}U_{i,s}^{k}- D_{y}(x,y) \frac{\partial^{\beta+1}u(x,y,t)}{\partial(-y)^{\beta+1}}\bigg|_{i,j}^{k}\Biggr] \\ &=O(\tau+h_{1}+h_{2}). \end{aligned}$$
Therefore, the implicit Euler method defined by Eq. (3.2) is consistent with model Eq. (1.5) of the order \(O(\tau+h_{1}+h_{2})\). □
A standard approach for multi-dimensional PDEs is the ADI method [11, 24], in which the difference equations are specified and solved in one direction at a time. For the ADI methods, the operator form Eq. (3.3) is written in a directional separation product form
$$ (1-\tau\delta_{\alpha,x}) (1-\tau\delta_{\beta ,y})u_{i,j}^{k} \approx u_{i,j}^{k-1}+\tau f_{i,j}^{k}, $$
(3.7)
which introduces an additional perturbation error equal to \(\tau ^{2}(\delta_{\alpha,x}\delta_{\beta,y})u_{i,j}^{k}\). Using Proposition 4.1 in [11], we can conclude that the ADI–implicit Euler method is also consistent with order \(O(\tau+h_{1}+h_{2})\). Equation (3.7) can be written in the matrix form
$$ \bar{S}\bar{T}U^{k}=U^{k-1}+\tau F^{k}, $$
(3.8)
where the matrices \(\bar{S}\) and \(\bar{T}\) represent the operators \(1-\tau\delta_{\alpha,x}\) and \(1-\tau\delta_{\beta,y}\), and
$$ U^{k}=\bigl(u_{1,1}^{k},u_{2,1}^{k}, \ldots ,u_{N_{x}-1,1}^{k},\ldots,u_{1,N_{y}-1}^{k},u_{2,N_{y}-1}^{k}, \ldots ,u_{N_{x}-1,N_{y}-1}^{k}\bigr), $$
(3.9)
and the vector \(F^{k}\) absorbs the source term and the boundary conditions in Eq. (3.8). Computationally, the ADI method for the above form is then set up and solved by the following iterative scheme at time \(t_{k}\):
(1) First solve the problem in the x-direction (for each fixed \(y_{q}\)) to obtain an intermediate solution \(u_{i,q}^{\ast}\) from
$$ (1-\tau\delta_{\alpha,x})u_{i,q}^{\ast}= u_{i,q}^{k-1}+\tau f_{i,q}^{k}. $$
(3.10)
(2) Then solve in the y-direction (for each fixed \(x_{q}\)) to obtain a solution \(u_{q,j}^{k}\) from
$$ (1-\tau\delta_{\beta,y})u_{q,j}^{k}= u_{q,j}^{\ast}. $$
(3.11)
From Eqs. (3.10)–(3.11), we can compute the boundary values for \(u^{\ast}\) from \(u_{N_{x},j}^{\ast}=(1-\tau\delta_{\beta,y})u_{N_{x},j}^{k}\), and using the zero Dirichlet boundary conditions, we get
$$ u_{N_{x},j}^{\ast}=0. $$
(3.12)
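The two sweeps above can be exercised end to end with a small self-contained sketch. This is our own illustration (function names, grid sizes, and the sample coefficients are ours, not the paper’s): it assembles the one-dimensional sweep matrices, whose entries are written out below in Eqs. (3.13) and (3.16), for monotone sample coefficients, then performs one ADI–implicit Euler step with \(f\equiv0\) using a plain Gaussian-elimination solver.

```python
def grunwald_weights(mu, n):
    # g_s = (-1)^s * binom(mu, s) via the standard recurrence.
    g = [1.0]
    for s in range(1, n + 1):
        g.append(g[-1] * (s - 1 - mu) / s)
    return g

def sweep_matrix(C, D, g, r, n):
    # Dense (n-1) x (n-1) matrix of one 1D implicit sweep; C, D sample the
    # diffusion coefficients at the n+1 grid points of the sweep line.
    A = [[0.0] * (n - 1) for _ in range(n - 1)]
    for i in range(1, n):
        for s in range(1, n):
            if s < i - 1:
                v = -r * (C[i] * g[i + 1 - s] - C[i - 1] * g[i - s])
            elif s == i - 1:
                v = -r * (C[i] * g[2] - C[i - 1] * g[1] + D[i - 1] * g[0])
            elif s == i:
                v = (1.0 - r * (C[i] * g[1] - C[i - 1] * g[0])
                         - r * (D[i - 1] * g[1] - D[i] * g[0]))
            elif s == i + 1:
                v = -r * (C[i] * g[0] + D[i - 1] * g[2] - D[i] * g[1])
            else:
                v = -r * (D[i - 1] * g[s - i + 1] - D[i] * g[s - i])
            A[i - 1][s - 1] = v
    return A

def solve(A, b):
    # Gaussian elimination with partial pivoting (adequate for small demos).
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Sample problem on the unit square with f = 0; alpha = beta for brevity.
alpha = beta = 0.6
nx = ny = 8
h = 1.0 / nx
tau = 0.05
r = tau / h ** (alpha + 1.0)
g = grunwald_weights(alpha, nx)
xs = [i * h for i in range(nx + 1)]

Cx = lambda x, y: 1.5 - 0.5 * x   # decreasing along x
Dx = lambda x, y: 0.5 + 0.5 * x   # increasing along x
Cy = lambda x, y: 1.5 - 0.5 * y   # decreasing along y
Dy = lambda x, y: 0.5 + 0.5 * y   # increasing along y

Ax = [sweep_matrix([Cx(x, yq) for x in xs], [Dx(x, yq) for x in xs], g, r, nx)
      for yq in xs[1:-1]]
Ey = [sweep_matrix([Cy(xq, y) for y in xs], [Dy(xq, y) for y in xs], g, r, ny)
      for xq in xs[1:-1]]

# Interior initial data u0[i-1][j-1] = u(x_i, y_j, 0), zero on the boundary.
u0 = [[x * (1 - x) * y * (1 - y) for y in xs[1:-1]] for x in xs[1:-1]]

# x-sweep (3.10): for each fixed y_q solve for the intermediate solution u*.
ustar = [[0.0] * (ny - 1) for _ in range(nx - 1)]
for q in range(ny - 1):
    col = solve(Ax[q], [u0[i][q] for i in range(nx - 1)])
    for i in range(nx - 1):
        ustar[i][q] = col[i]

# y-sweep (3.11): for each fixed x_q solve for u at the new time level.
u1 = [solve(Ey[q], ustar[q]) for q in range(nx - 1)]
```

Because the sample coefficients satisfy the monotonicity hypotheses of Theorem 2 below, the step cannot increase the maximum norm when \(f\equiv0\), which gives a quick empirical check of stability.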

Theorem 2

If \(C_{x}(x,y)\) and \(C_{y}(x,y)\) decrease monotonically along x and y, respectively, and \(D_{x}(x,y)\) and \(D_{y}(x,y)\) increase monotonically along x and y, respectively, then each one-dimensional implicit system defined by the linear difference Eqs. (3.10)–(3.11) is unconditionally stable for all \(0<\alpha,\beta<1\).

Proof

At each grid point \(y_{q}\), for \(q=1,2,\ldots,N_{y}-1\), consider the linear system of equations defined by Eq. (3.10). Equation (3.10) can be written as \(\bar{A}_{q}\bar{U}_{q}^{\ast}=\bar{U}_{q}^{k-1}+\tau F_{q}^{k}\), incorporating the boundary condition (3.12), where \(\bar{U}_{q}^{\ast}=(u_{1,q}^{\ast},u_{2,q}^{\ast},\ldots ,u_{N_{x}-1,q}^{\ast})\), \(F_{q}^{k}=(f_{1,q}^{k},f_{2,q}^{k},\ldots ,f_{N_{x}-1,q}^{k})\), and for each \(y_{q}\), the matrix \(\bar{A}_{q}=[A_{i,s}]\) for \(i=1,\ldots,N_{x}-1\) and \(s=1,\ldots,N_{x}-1\) of coefficients is defined by
$$ {A_{i,s}} = \textstyle\begin{cases} -r_{1}(C_{i,q}g_{i+1-s}^{(\alpha)}-C_{i-1,q}g_{i-s}^{(\alpha)}),& \text{for }s< i-1; \\ -r_{1}(C_{i,q}g_{2}^{(\alpha)}-C_{i-1,q}g_{1}^{(\alpha )}+D_{i-1,q}g_{0}^{(\alpha)}), & \text{for } s= i-1; \\ 1-r_{1}(C_{i,q}g_{1}^{(\alpha)}-C_{i-1,q}g_{0}^{(\alpha )})-r_{1}(D_{i-1,q}g_{1}^{(\alpha)}-D_{i,q}g_{0}^{(\alpha)}), & \text{for } s= i; \\ -r_{1}(C_{i,q}g_{0}^{(\alpha)}+D_{i-1,q}g_{2}^{(\alpha )}-D_{i,q}g_{1}^{(\alpha)}), & \text{for } s= i+1; \\ -r_{1}(D_{i-1,q}g_{s-i+1}^{(\alpha)}-D_{i,q}g_{s-i}^{(\alpha)}), & \text{for } s\geq i+2, \end{cases} $$
(3.13)
here \(r_{1}=\frac{\tau}{h_{1}^{\alpha+1}}\). Since \(C_{x}(x,y)\) decreases monotonically along x and \(D_{x}(x,y)\) increases monotonically along x, by Lemma 1 we have \(C_{i-1,q}\geq C_{i,q}\geq0\), \(D_{i,q}\geq D_{i-1,q}\geq 0\) (\(i=1,2,\ldots,N_{x}\)), \(C_{i,q}g_{j+1}^{(\alpha)}\geq C_{i,q}g_{j}^{(\alpha)}\geq C_{i-1,q}g_{j}^{(\alpha)}\), and \(D_{i-1,q}g_{j+1}^{(\alpha)}\geq D_{i-1,q}g_{j}^{(\alpha)}\geq D_{i,q}g_{j}^{(\alpha)}\) (\(j\geq2\)). Let \(\bar{r}_{i}\) be the sum of the absolute values of the elements along the ith row excluding the diagonal element \(A_{i,i}\); then
$$\begin{aligned} \bar{r}_{i}&=\sum _{s=1,s\neq i}^{N_{x}-1} \vert A_{i,s} \vert \\ &=r_{1}\Biggl[\sum_{s=1}^{i-2} \bigl(C_{i,q}g_{i+1-s}^{(\alpha )}-C_{i-1,q}g_{i-s}^{(\alpha)} \bigr) \\ &\quad{}+C_{i,q}g_{2}^{(\alpha)}-C_{i-1,q}g_{1}^{(\alpha )}+D_{i-1,q}g_{0}^{(\alpha)}+ C_{i,q}g_{0}^{(\alpha)}+D_{i-1,q}g_{2}^{(\alpha)} \\ &\quad{}-D_{i,q}g_{1}^{(\alpha)}+\sum _{s=i+2}^{N_{x}-1}\bigl(D_{i-1,q}g_{s-i+1}^{(\alpha)}-D_{i,q}g_{s-i}^{(\alpha )} \bigr)\Biggr] \\ &=r_{1}\Biggl[\Biggl(\sum_{s=0}^{i}g_{s}^{(\alpha)}-g_{1}^{(\alpha)} \Biggr)C_{i,q}-\Biggl(\sum_{s=0}^{i-1}g_{s}^{(\alpha)}-g_{0}^{(\alpha)} \Biggr)C_{i-1,q} \\ &\quad{}-\Biggl(\sum_{s=0}^{N_{x}-i}g_{s}^{(\alpha)}-g_{1}^{(\alpha )} \Biggr)D_{i,q}+\Biggl(\sum_{s=0}^{N_{x}-i-1}g_{s}^{(\alpha)}-g_{0}^{(\alpha )} \Biggr)D_{i-1,q}\Biggr] \\ &=r_{1}\Biggl[\sum_{s=0}^{i}g_{s}^{(\alpha )}(C_{i,q}-C_{i-1,q})+g_{i}^{(\alpha)}C_{i,q} +\sum_{s=0}^{N_{x}-i-1}g_{s}^{(\alpha )}(D_{i-1,q}-D_{i,q})+g_{N_{x}-i}^{(\alpha)}D_{i-1,q}\Biggr] \\ &\quad{}-r_{1}\bigl(C_{i,q}g_{1}^{(\alpha)}-C_{i-1,q}g_{0}^{(\alpha )} \bigr)-r_{1}\bigl(D_{i-1,q}g_{1}^{(\alpha)}-D_{i,q}g_{0}^{(\alpha)} \bigr) \\ &< r_{1}(\alpha C_{i,q}+C_{i-1,q}+\alpha D_{i-1,q}+D_{i,q}). \end{aligned}$$
(3.14)
We obtain
$$ \begin{aligned}[b] A_{i,i}&=1-r_{1} \bigl(C_{i,q}g_{1}^{(\alpha )}-C_{i-1,q}g_{0}^{(\alpha)} \bigr)-r_{1}\bigl(D_{i-1,q}g_{1}^{(\alpha )}-D_{i,q}g_{0}^{(\alpha)} \bigr) \\ &=1+r_{1}(\alpha C_{i,q}+C_{i-1,q}+\alpha D_{i-1,q}+D_{i,q}). \end{aligned} $$
(3.15)
As \(\bar{r}_{i}\leq A_{i,i}-1\), the matrix \(\bar{A}_{q}\) is strictly diagonally dominant, which guarantees the invertibility of \(\bar{A}_{q}\), so \(\bar{A}_{q}\bar{U}_{q}^{\ast}=\bar{U}_{q}^{k-1}+\tau F_{q}^{k}\) is uniquely solvable. According to the Gershgorin theorem [23], every eigenvalue λ of the matrix \(\bar{A}_{q}\) has a real part larger than one, so the spectral radius of each matrix \(\bar{A}_{q}^{-1}\) is less than one. This proves that Eq. (3.10) is unconditionally stable. At each grid point \(x_{q}\), for \(q=1,2,\ldots,N_{x}-1\), consider the linear system of equations defined by Eq. (3.11). Equation (3.11) can be written as \(\hat{E}_{q}\hat{U}_{q}^{k}=\hat{U}_{q}^{\ast}\), incorporating the zero Dirichlet boundary conditions (1.7), where \(\hat{U}_{q}^{k}=(u_{q,1}^{k},u_{q,2}^{k},\ldots,u_{q,N_{y}-1}^{k})\), and for each \(x_{q}\), the matrix \(\hat{E}_{q}=[E_{j,s}]\) for \(j=1,\ldots,N_{y}-1\) and \(s=1,\ldots,N_{y}-1\) of coefficients is defined by
$$ {E_{j,s}} = \textstyle\begin{cases} -r_{2}(\bar{C}_{q,j}g_{j+1-s}^{(\beta)}-\bar{C}_{q,j-1}g_{j-s}^{(\beta )}),& \text{for }s< j-1; \\ -r_{2}(\bar{C}_{q,j}g_{2}^{(\beta)}-\bar{C}_{q,j-1}g_{1}^{(\beta)}+\bar {D}_{q,j-1}g_{0}^{(\beta)}), & \text{for } s= j-1; \\ 1-r_{2}(\bar{C}_{q,j}g_{1}^{(\beta)}-\bar{C}_{q,j-1}g_{0}^{(\beta )})-r_{2}(\bar{D}_{q,j-1}g_{1}^{(\beta)}-\bar{D}_{q,j}g_{0}^{(\beta)}), & \text{for } s= j; \\ -r_{2}(\bar{C}_{q,j}g_{0}^{(\beta)}+\bar{D}_{q,j-1}g_{2}^{(\beta )}-\bar{D}_{q,j}g_{1}^{(\beta)}), & \text{for } s= j+1; \\ -r_{2}(\bar{D}_{q,j-1}g_{s-j+1}^{(\beta)}-\bar{D}_{q,j}g_{s-j}^{(\beta )}), & \text{for } s\geq j+2, \end{cases} $$
(3.16)
here \(r_{2}=\frac{\tau}{h_{2}^{\beta+1}}\). Similarly, we can obtain that each eigenvalue λ of the matrix \(\hat{E}_{q}\) has a real part larger than one, so the spectral radius of each matrix \(\hat{E}_{q}^{-1}\) is less than one. This proves that Eq. (3.11) is also unconditionally stable. □
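As a numerical sanity check of the proof (our illustration; the coefficients \(C(x)=2-x\) and \(D(x)=1+x\) are arbitrary monotone samples), one can assemble \(\bar{A}_{q}\) from Eq. (3.13) and confirm row by row that \(\bar{r}_{i}\leq A_{i,i}-1\) with \(A_{i,i}>1\), so every Gershgorin disc lies in the half-plane \(\operatorname{Re}\lambda>1\):

```python
def grunwald_weights(mu, n):
    # g_s = (-1)^s * binom(mu, s) via the standard recurrence.
    g = [1.0]
    for s in range(1, n + 1):
        g.append(g[-1] * (s - 1 - mu) / s)
    return g

def build_Aq(C, D, g, r, n):
    # Matrix entries of Eq. (3.13) for one fixed y_q; C[i], D[i] sample the
    # diffusion coefficients at the grid points x_0, ..., x_n.
    A = [[0.0] * (n - 1) for _ in range(n - 1)]
    for i in range(1, n):
        for s in range(1, n):
            if s < i - 1:
                v = -r * (C[i] * g[i + 1 - s] - C[i - 1] * g[i - s])
            elif s == i - 1:
                v = -r * (C[i] * g[2] - C[i - 1] * g[1] + D[i - 1] * g[0])
            elif s == i:
                v = (1.0 - r * (C[i] * g[1] - C[i - 1] * g[0])
                         - r * (D[i - 1] * g[1] - D[i] * g[0]))
            elif s == i + 1:
                v = -r * (C[i] * g[0] + D[i - 1] * g[2] - D[i] * g[1])
            else:
                v = -r * (D[i - 1] * g[s - i + 1] - D[i] * g[s - i])
            A[i - 1][s - 1] = v
    return A

n, alpha, tau = 16, 0.7, 0.1
h = 1.0 / n
r = tau / h ** (alpha + 1.0)
g = grunwald_weights(alpha, n)
C = [2.0 - i * h for i in range(n + 1)]   # decreasing along x
D = [1.0 + i * h for i in range(n + 1)]   # increasing along x
A = build_Aq(C, D, g, r, n)
```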

From Eqs. (3.8), (3.10), and (3.11), the matrix \(\bar{S}\) is a block diagonal matrix of \((N_{y}-1)\times(N_{y}-1)\) blocks whose blocks are the square \((N_{x}-1)\times(N_{x}-1)\) matrices resulting from Eq. (3.10). We can write \(\bar{S}=\operatorname{diag}(\bar{A}_{1},\bar{A}_{2},\ldots ,\bar{A}_{N_{y}-1})\). Similarly, the matrix \(\bar{T}\) is a block matrix of \((N_{y}-1)\times (N_{y}-1)\) blocks whose blocks are square \((N_{x}-1)\times(N_{x}-1)\) diagonal matrices resulting from Eq. (3.11). In addition, we may write \(\bar{T}=[\bar{T}_{i,j}]\), where each \(\bar{T}_{i,j}\) is \((N_{x}-1)\times(N_{x}-1)\), \(\bar{T}_{i,j}=\operatorname{diag}((\hat{E_{1}})_{i,j},(\hat {E_{2}})_{i,j},\ldots,(\hat{E}_{N_{x}-1})_{i,j})\), where the notation \((\hat{E_{q}})_{i,j}\) refers to the \((i,j)\)th entry of the matrix \(\hat{E_{q}}\) defined previously (see [24]). To prove the stability and convergence of the ADI method, we need the following lemma. Let \(X=[x_{1},x_{2},\ldots,x_{m}]^{T}\), \(\|X\|_{\infty}=\max_{1\leq i \leq m}|x_{i}|\).

Lemma 2

([25])

If the matrix \(D=(d_{i,j})_{m\times m}\) satisfies the condition
$$ \sum_{l=1,l\neq i}^{m} \vert d_{i,l} \vert \leq d_{i,i}-1, $$
then
$$ \Vert X \Vert _{\infty}\leq \Vert DX \Vert _{\infty}. $$
(3.17)
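Lemma 2 is easy to check numerically; the sketch below (our example) uses an arbitrary \(3\times3\) matrix whose rows satisfy \(\sum_{l\neq i}|d_{i,l}|\leq d_{i,i}-1\):

```python
def inf_norm(v):
    # Maximum norm of a vector.
    return max(abs(t) for t in v)

def matvec(M, v):
    # Plain dense matrix-vector product.
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

# Every row satisfies sum_{l != i} |d_il| <= d_ii - 1, so Lemma 2 applies.
D = [[3.0, -1.0, 0.5],
     [0.25, 2.0, -0.5],
     [-0.5, 0.25, 4.0]]
X = [1.0, -2.0, 0.5]
DX = matvec(D, X)   # the lemma guarantees inf_norm(X) <= inf_norm(DX)
```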
To discuss the stability of the numerical method, we denote by \(\tilde{u}_{i,j}^{k}\) (\(1\leq i \leq N_{x}-1\), \(1\leq j \leq N_{y}-1\)) the approximate solution of the difference scheme with the initial condition \(\tilde{u}_{i,j}^{0}\) (\(1\leq i \leq N_{x}-1\), \(1\leq j \leq N_{y}-1\)), and define \(\varepsilon_{i,j}^{k} = {u}_{i,j}^{k} - \tilde {u}_{i,j}^{k}\), \(e_{i,j}^{k} = U_{i,j}^{k}- {u}_{i,j}^{k}\),
$$\begin{aligned}& \varepsilon^{k}=\bigl(\varepsilon_{1,1}^{k}, \varepsilon_{2,1}^{k},\ldots, \varepsilon_{N_{x}-1,1}^{k}, \ldots,\varepsilon _{1,N_{y}-1}^{k},\varepsilon_{2,N_{y}-1}^{k}, \ldots, \varepsilon_{N_{x}-1,N_{y}-1}^{k}\bigr)^{T}, \\& e^{k}=\bigl(e_{1,1}^{k},e_{2,1}^{k}, \ldots, e_{N_{x}-1,1}^{k},\ldots,e_{1,N_{y}-1}^{k},e_{2,N_{y}-1}^{k}, \ldots, e_{N_{x}-1,N_{y}-1}^{k}\bigr)^{T}. \end{aligned}$$

Theorem 3

If \(C_{x}(x,y)\) and \(C_{y}(x,y)\) decrease monotonically along x and y, respectively, and \(D_{x}(x,y)\) and \(D_{y}(x,y)\) increase monotonically along x and y, respectively, then the ADI–implicit Euler method defined by Eq. (3.8) is unconditionally stable and convergent, and there exists a positive constant \(C>0\) such that \(\|e^{k}\|_{\infty}\leq C(\tau+h_{1}+h_{2})\).

Proof

First we consider the stability of the ADI–implicit Euler method. From Eq. (3.8) and the definition of \(\varepsilon^{k}\), we have
$$ \bar{S}\bar{T}\varepsilon^{k}=\varepsilon^{k-1}. $$
(3.18)
By Theorem 2, the matrices \(\bar{A}_{q}\) and \(\hat{E}_{q}\) satisfy the condition of Lemma 2. According to the relationship between \(\bar{S}\) and the \(\bar{A}_{q}\), and between \(\bar{T}\) and the \(\hat{E}_{q}\), the matrices \(\bar{S}\) and \(\bar{T}\) also satisfy the condition of Lemma 2. Hence
$$ \bigl\Vert \varepsilon^{k} \bigr\Vert _{\infty}\leq \bigl\Vert \bar{T}\varepsilon ^{k} \bigr\Vert _{\infty}\leq \bigl\Vert \bar{S}\bar{T}\varepsilon^{k} \bigr\Vert _{\infty} \leq \bigl\Vert \varepsilon^{k-1} \bigr\Vert _{\infty}. $$
(3.19)
Repeating k times, we have
$$ \bigl\Vert \varepsilon^{k} \bigr\Vert _{\infty}\leq \bigl\Vert \varepsilon^{0} \bigr\Vert _{\infty}. $$
(3.20)
Therefore the ADI method defined by Eq. (3.8) for the two-dimensional two-sided space-fractional diffusion equations is unconditionally stable. Then we consider the convergence of the ADI method. According to Eq. (3.2) and the definition of \(e^{k}\), we have \(\bar{S}\bar{T}e^{k}=e^{k-1}+\tau R^{k}\) and \(e^{0}=0\), where \(R^{k}=(R_{1,1}^{k},R_{2,1}^{k},\ldots, R_{N_{x}-1,1}^{k},\ldots,R_{1,N_{y}-1}^{k},R_{2,N_{y}-1}^{k}, \ldots,R_{N_{x}-1,N_{y}-1}^{k})^{T}\) and \(\|R^{k}\|_{\infty}\leq C_{1}(\tau+h_{1}+h_{2})\), where \(C_{1}\) is a positive constant. Using Lemma 2, we obtain
$$ \bigl\Vert e^{k} \bigr\Vert _{\infty}\leq \bigl\Vert \bar{T}e^{k} \bigr\Vert _{\infty}\leq \bigl\Vert \bar{S} \bar{T}e^{k} \bigr\Vert _{\infty} \leq \bigl\Vert e^{k-1}+\tau R^{k} \bigr\Vert _{\infty}\leq \bigl\Vert e^{k-1} \bigr\Vert _{\infty}+\tau \bigl\Vert R^{k} \bigr\Vert _{\infty}. $$
(3.21)
Repeating k times, we have \(\|e^{k}\|_{\infty}\leq k\tau C_{1}(\tau+h_{1}+h_{2})\), so \(\|e^{k}\|_{\infty}\leq C(\tau+h_{1}+h_{2})\) with \(C=k\tau C_{1}\), which is bounded since \(k\tau\leq T\). Therefore the ADI–implicit Euler method defined by Eq. (3.8) is \(O(\tau +h_{1}+h_{2})\) accurate. □

4 ADI–CN method and its theoretical analysis

A CN method for Eq. (1.5) may be obtained by centering the differential equation at time \(t_{k-1/2}=\frac{1}{2}(t_{k}+t_{k-1})\), which gives
$$ \begin{aligned}[b] \frac{u_{i,j}^{k}-u_{i,j}^{k-1}}{\tau}&\approx\frac {1}{h_{1}} \biggl(C_{x}(x,y_{j}) \frac{\partial^{\alpha} u(x,y_{j},t_{k-1/2})}{ \partial x^{\alpha}}-D_{x}(x,y_{j}) \frac{\partial^{\alpha} u(x,y_{j},t_{k-1/2})}{\partial(-x)^{\alpha}}\biggr)\bigg|_{x_{i-1}}^{x_{i}} \\ &\quad{}+\frac{1}{h_{2}}\biggl(C_{y}(x_{i},y) \frac{\partial^{\beta} u(x_{i},y,t_{k-1/2})}{ \partial y^{\beta} }-D_{y}(x_{i},y) \frac{\partial^{\beta} u(x_{i},y,t_{k-1/2})}{\partial(-y)^{\beta }}\biggr)\bigg|_{y_{j-1}}^{y_{j}}+f_{i,j}^{k-1/2} \\ &\approx\frac{1}{h_{1}^{\alpha+1}}\Biggl[\Biggl(C_{i,j}\sum _{s=0}^{i+1}g_{s}^{({\alpha})} u_{i+1-s,j}^{k-1/2} -D_{i,j}\sum _{s=0}^{N_{x}-i}g_{s}^{({\alpha})} u_{i+s,j}^{k-1/2}\Biggr) \\ &\quad{}-\Biggl(C_{i-1,j}\sum_{s=0}^{i}g_{s}^{({\alpha})} u_{i-s,j}^{k-1/2}-D_{i-1,j}\sum _{s=0}^{N_{x}-i+1}g_{s}^{({\alpha})} u_{i+s-1,j}^{k-1/2}\Biggr)\Biggr] \\ &\quad{}+\frac{1}{h_{2}^{\beta+1}}\Biggl[\Biggl(\bar{C}_{i,j}\sum _{s=0}^{j+1}g_{s}^{({\beta})} u_{i,j+1-s}^{k-1/2} -\bar{D}_{i,j}\sum _{s=0}^{N_{y}-j}g_{s}^{({\beta})} u_{i,j+s}^{k-1/2}\Biggr) \\ &\quad{}-\Biggl(\bar{C}_{i,j-1}\sum_{s=0}^{j}g_{s}^{({\beta})} u_{i,j-s}^{k-1/2}-\bar{D}_{i,j-1}\sum _{s=0}^{N_{y}-j+1}g_{s}^{({\beta})} u_{i,j+s-1}^{k-1/2}\Biggr)\Biggr]+f_{i,j}^{k-1/2}. \end{aligned} $$
(4.1)
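The Grünwald weights \(g_{s}^{(\alpha)}\) in Eq. (4.1) are the alternating binomial coefficients \(g_{s}^{(\alpha)}=(-1)^{s}\binom{\alpha}{s}\). A minimal sketch of the standard recurrence used to generate them (the function name is ours, not the paper's):

```python
def grunwald_weights(alpha, n):
    """Return [g_0, g_1, ..., g_n] with g_s = (-1)^s * binom(alpha, s).

    Uses the recurrence g_0 = 1, g_s = g_{s-1} * (s - 1 - alpha) / s,
    obtained from the ratio of consecutive binomial coefficients.
    """
    g = [1.0]
    for s in range(1, n + 1):
        g.append(g[-1] * (s - 1 - alpha) / s)
    return g

w = grunwald_weights(0.5, 4)
assert w[0] == 1.0 and w[1] == -0.5   # g_0 = 1, g_1 = -alpha
assert all(ws < 0 for ws in w[1:])    # for 0 < alpha < 1
```

For \(0<\alpha<1\) one has \(g_{0}^{(\alpha)}=1\), \(g_{1}^{(\alpha)}=-\alpha\), the remaining weights are negative, and \(\sum_{s=0}^{\infty}g_{s}^{(\alpha)}=0\); these sign properties are what the Gershgorin arguments in the stability proofs rely on.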
After some rearrangements, combining Eqs. (2.1)–(2.2), Eq. (4.1) can be written in the operator form
$$ \biggl(1-\frac{\tau}{2}\delta_{\alpha,x}-\frac {\tau}{2} \delta_{\beta,y}\biggr){u_{i,j}^{k}}= \biggl(1+ \frac{\tau}{2}\delta _{\alpha,x}+\frac{\tau}{2}\delta_{\beta,y} \biggr){u_{i,j}^{k-1}}+\tau f_{i,j}^{k-1/2}. $$
(4.2)
For the ADI methods, the operator form Eq. (4.2) is rewritten in the following form:
$$ \biggl(1-\frac{\tau}{2}\delta_{\alpha,x}\biggr) \biggl(1- \frac{\tau}{2} \delta_{\beta,y}\biggr){u_{i,j}^{k}}= \biggl(1+\frac{\tau}{2}\delta_{\alpha ,x}\biggr) \biggl(1+ \frac{\tau}{2}\delta_{\beta,y}\biggr){u_{i,j}^{k-1}}+ \tau f_{i,j}^{k-1/2}, $$
(4.3)
which introduces an additional perturbation error equal to
$$ \frac{\tau^{2}}{4}(\delta_{\alpha,x}\delta_{\beta ,y}) \bigl(u_{i,j}^{k}-u_{i,j}^{k-1}\bigr). $$
Similar to Theorem 1, we can conclude that Eq. (4.3) is also consistent with order \(O(\tau^{2}+h_{1}+h_{2})\). Equation (4.3) can now be solved by the following set of matrix equations defining the ADI method:
$$\begin{aligned}& \biggl(1-\frac{\tau}{2} \delta_{\alpha ,x} \biggr)u_{i,j}^{\ast}= \biggl(1+\frac{\tau}{2} \delta_{\beta ,y}\biggr)u_{i,j}^{k-1}+\frac{\tau}{2}f_{i,j}^{k-\frac{1}{2}}, \end{aligned}$$
(4.4)
$$\begin{aligned}& \biggl(1-\frac{\tau}{2} \delta_{\beta ,y} \biggr)u_{i,j}^{k}= \biggl(1+\frac{\tau}{2} \delta_{\alpha,x}\biggr)u_{i,j}^{\ast }+\frac{\tau}{2}f_{i,j}^{k-\frac{1}{2}}. \end{aligned}$$
(4.5)
The intermediate solution \(u_{i,j}^{\ast}\) must be defined carefully on the boundary before the system of equations defined by Eq. (4.4) and Eq. (4.5) is solved; otherwise, the accuracy of the two-step ADI method outlined above is degraded. Solving Eq. (4.5) for \((1+\frac{\tau}{2}\delta_{\alpha,x})u_{i,j}^{\ast}\) and adding the result to Eq. (4.4) yields the following equation defining \(u_{i,j}^{\ast}\):
$$ 2u_{i,j}^{\ast}=\biggl(1-\frac{\tau}{2} \delta _{\beta,y}\biggr)u_{i,j}^{k}+\biggl(1+ \frac{\tau}{2} \delta_{\beta ,y}\biggr)u_{i,j}^{k-1}. $$
(4.6)
Thus, the boundary values for \(u_{i,j}^{\ast}\) (i.e., \(i=0 \) or \(i=N_{x}\) for \(j=1,\ldots,N_{y}-1 \)) needed to solve each set of equations in Eq. (4.4) are set from Eq. (4.6):
$$\begin{aligned}& u_{0,j}^{\ast}=\biggl(1-\frac{\tau}{2} \delta _{\beta,y}\biggr)u_{0,j}^{k}+\biggl(1+ \frac{\tau}{2} \delta_{\beta,y}\biggr)u_{0,j}^{k-1}=0, \end{aligned}$$
(4.7)
$$\begin{aligned}& u_{N_{x},j}^{\ast}=\biggl(1-\frac{\tau}{2} \delta_{\beta,y}\biggr)u_{N_{x},j}^{k}+\biggl(1+ \frac{\tau}{2} \delta_{\beta ,y}\biggr)u_{N_{x},j}^{k-1}. \end{aligned}$$
(4.8)
The corresponding algorithm is implemented as follows:

First, solve the problem in the x-direction (for each fixed \(y_{l}\)) to obtain an intermediate solution \(u_{i,l}^{\ast}\).

Second, solve it in the y-direction (for each fixed \(x_{l}\)) to obtain the solution \(u_{l,j}^{k}\). Since the first step gives a set of \(N_{x}-1\) linear equations for each fixed \(y_{l}\), the system of equations may be written as
$$ (I-A_{l})U_{l}^{\ast}=Q_{l}^{k-1}+ \frac {\tau}{2}F_{l}^{k-\frac{1}{2}}, $$
(4.9)
where
$$\begin{aligned}& U_{l}^{\ast}=\bigl[u_{1,l}^{\ast},u_{2,l}^{\ast}, \ldots ,u_{N_{x}-1,l}^{\ast}\bigr], \\& Q_{l}^{k-1}=\Biggl[\sum_{v=1}^{N_{y}}B_{1,v}^{k-1}u_{1,v}^{k-1}, \sum_{v=1}^{N_{y}}B_{2,v}^{k-1}u_{2,v}^{k-1}, \ldots,\sum_{v=1}^{N_{y}}B_{N_{x}-1,v}^{k-1}u_{N_{x}-1,v}^{k-1} \Biggr], \\& F_{l}^{k-\frac{1}{2}}=\bigl[f_{1,l}^{k-\frac{1}{2}},f_{2,l}^{k-\frac {1}{2}}, \ldots,f_{N_{x}-2,l}^{k-\frac{1}{2}},f_{N_{x}-1,l}^{k-\frac {1}{2}}-A_{N_{x}-1,N_{x}}u_{N_{x},l}^{\ast} \bigr], \end{aligned}$$
the matrix \(A_{l}=[A_{i,s}]\) for \(i=1,\ldots,N_{x}-1\) and \(s=1,\ldots ,N_{x}-1\) of coefficients is defined by
$$ {A_{i,s}} = \textstyle\begin{cases} r_{1}(C_{i,q}g_{i+1-s}^{(\alpha)}-C_{i-1,q}g_{i-s}^{(\alpha)}),& \text{for }s< i-1; \\ r_{1}(C_{i,q}g_{2}^{(\alpha)}-C_{i-1,q}g_{1}^{(\alpha )}+D_{i-1,q}g_{0}^{(\alpha)}), & \text{for } s= i-1; \\ r_{1}(C_{i,q}g_{1}^{(\alpha)}-C_{i-1,q}g_{0}^{(\alpha )})+r_{1}(D_{i-1,q}g_{1}^{(\alpha)}-D_{i,q}g_{0}^{(\alpha)}), & \text{for } s= i; \\ r_{1}(C_{i,q}g_{0}^{(\alpha)}+D_{i-1,q}g_{2}^{(\alpha )}-D_{i,q}g_{1}^{(\alpha)}), & \text{for } s= i+1; \\ r_{1}(D_{i-1,q}g_{s-i+1}^{(\alpha)}-D_{i,q}g_{s-i}^{(\alpha)}), & \text{for } s\geq i+2, \end{cases} $$
(4.10)
and the coefficients \(B_{i,v}\) for \(i=1,\ldots,N_{x}-1\) are defined by
$$ {B_{i,v}} = \textstyle\begin{cases} -r_{2}(\bar{C}_{q,l}g_{l+1-v}^{(\beta)}-\bar{C}_{q,l-1}g_{l-v}^{(\beta )}),& \text{for }v< l-1; \\ -r_{2}(\bar{C}_{q,l}g_{2}^{(\beta)}-\bar{C}_{q,l-1}g_{1}^{(\beta)}+\bar {D}_{q,l-1}g_{0}^{(\beta)}), & \text{for } v= l-1; \\ 1-r_{2}(\bar{C}_{q,l}g_{1}^{(\beta)}-\bar{C}_{q,l-1}g_{0}^{(\beta )})-r_{2}(\bar{D}_{q,l-1}g_{1}^{(\beta)}-\bar{D}_{q,l}g_{0}^{(\beta)}), & \text{for } v= l; \\ -r_{2}(\bar{C}_{q,l}g_{0}^{(\beta)}+\bar{D}_{q,l-1}g_{2}^{(\beta )}-\bar{D}_{q,l}g_{1}^{(\beta)}), & \text{for } v= l+1; \\ -r_{2}(\bar{D}_{q,l-1}g_{v-l+1}^{(\beta)}-\bar{D}_{q,l}g_{v-l}^{(\beta )}), & \text{for } v\geq l+2. \end{cases} $$
Similarly, according to the fact that the second step gives a set of \(N_{y}-1\) linear equations, the system of the equations may be written as
$$ (I-\hat{B}_{l})U_{l}^{k}=O_{l}^{\ast }+ \frac{\tau}{2}\hat{F}_{l}^{k-\frac{1}{2}}, $$
(4.11)
where
$$\begin{aligned}& U_{l}^{k}=\bigl[u_{l,1}^{k},u_{l,2}^{k}, \ldots,u_{l,N_{y}-1}^{k}\bigr], \\& O_{l}^{\ast}=\Biggl[\sum_{v=1}^{N_{x}} \hat{A}_{1,v}^{k-1}u_{v,1}^{\ast},\sum _{v=1}^{N_{x}}\hat{A}_{2,v}^{k-1}u_{v,2}^{\ast}, \ldots,\sum_{v=1}^{N_{x}}\hat{A}_{N_{y}-1,v}^{k-1}u_{v,N_{y}-1}^{\ast} \Biggr], \\& \hat{F}_{l}^{k-\frac{1}{2}}=\bigl[f_{l,1}^{k-\frac{1}{2}},f_{l,2}^{k-\frac {1}{2}}, \ldots,f_{l,N_{y}-2}^{k-\frac{1}{2}},f_{l,N_{y}-1}^{k- \frac{1}{2}}- \hat{B}_{N_{y}-1,N_{y}}u_{l,N_{y}}^{k}\bigr], \end{aligned}$$
the matrix \(\hat{B}_{l}=[\hat{B}_{j,v}]\) for \(j=1,\ldots,N_{y}-1\) and \(v=1,\ldots,N_{y}-1\) of coefficients is defined by
$$ {\hat{B}_{j,v}} = \textstyle\begin{cases} r_{2}(\bar{C}_{q,j}g_{j+1-v}^{(\beta)}-\bar{C}_{q,j-1}g_{j-v}^{(\beta )}),& \text{for }v< j-1; \\ r_{2}(\bar{C}_{q,j}g_{2}^{(\beta)}-\bar{C}_{q,j-1}g_{1}^{(\beta)}+\bar {D}_{q,j-1}g_{0}^{(\beta)}), & \text{for } v= j-1; \\ r_{2}(\bar{C}_{q,j}g_{1}^{(\beta)}-\bar{C}_{q,j-1}g_{0}^{(\beta )})+r_{2}(\bar{D}_{q,j-1}g_{1}^{(\beta)}-\bar{D}_{q,j}g_{0}^{(\beta)}), & \text{for } v= j; \\ r_{2}(\bar{C}_{q,j}g_{0}^{(\beta)}+\bar{D}_{q,j-1}g_{2}^{(\beta )}-\bar{D}_{q,j}g_{1}^{(\beta)}), & \text{for } v= j+1; \\ r_{2}(\bar{D}_{q,j-1}g_{v-j+1}^{(\beta)}-\bar{D}_{q,j}g_{v-j}^{(\beta )}), & \text{for } v\geq j+2, \end{cases} $$
(4.12)
and the coefficients \(\hat{A}_{j,v}\) for \(j=1,\ldots,N_{y}-1\) are defined by
$$ {\hat{A}_{j,v}} = \textstyle\begin{cases} -r_{1}(C_{l,q}g_{l+1-v}^{(\alpha)}-C_{l-1,q}g_{l-v}^{(\alpha)}),& \text{for }v< l-1; \\ -r_{1}(C_{l,q}g_{2}^{(\alpha)}-C_{l-1,q}g_{1}^{(\alpha )}+D_{l-1,q}g_{0}^{(\alpha)}), & \text{for } v= l-1; \\ 1-r_{1}(C_{l,q}g_{1}^{(\alpha)}-C_{l-1,q}g_{0}^{(\alpha )})-r_{1}(D_{l-1,q}g_{1}^{(\alpha)}-D_{l,q}g_{0}^{(\alpha)}), & \text{for } v= l; \\ -r_{1}(C_{l,q}g_{0}^{(\alpha)}+D_{l-1,q}g_{2}^{(\alpha )}-D_{l,q}g_{1}^{(\alpha)}), & \text{for } v= l+1; \\ -r_{1}(D_{l-1,q}g_{v-l+1}^{(\alpha)}-D_{l,q}g_{v-l}^{(\alpha)}), & \text{for } v\geq l+2. \end{cases} $$
(4.13)
Equation (4.2) can be written in the matrix form
$$ (I-S) (I-T)U^{k}= (I+S) (I+T)U^{k-1}+F^{k-1/2}, $$
(4.14)
where the matrices S and T represent the operators \(\frac{\tau }{2} \delta_{\alpha,x}\) and \(\frac{\tau}{2} \delta_{\beta,y}\), respectively, and are of size \((N_{x}-1)(N_{y}-1)\times(N_{x}-1)(N_{y}-1)\); \(F^{k-1/2}\) absorbs the source terms \(f^{k-1/2}\) and the boundary conditions in the discretized equation, and \(U^{k}=(u_{1,1}^{k},u_{2,1}^{k},\ldots,u_{N_{x}-1,1}^{k},\ldots, u_{1,N_{y}-1}^{k},u_{2,N_{y}-1}^{k},\ldots, u_{N_{x}-1,N_{y}-1}^{k})\). From Eqs. (4.9), (4.11), and (4.14), the matrix S is a block diagonal matrix of \((N_{y}-1)\times(N_{y}-1)\) blocks whose blocks are the square \((N_{x}-1)\times(N_{x}-1)\) matrices resulting from Eq. (4.9), so we can write \(S=\operatorname{diag}(A_{1},A_{2},\ldots,A_{N_{y}-1})\). Similarly, the matrix T is a block matrix of \((N_{y}-1)\times (N_{y}-1)\) blocks whose blocks are the square \((N_{x}-1)\times(N_{x}-1)\) diagonal matrices resulting from Eq. (4.11). In addition, we may write \(T=[T_{i,j}]\), where each \(T_{i,j}\) is \((N_{x}-1)\times(N_{x}-1)\) and \(T_{i,j}=\operatorname{diag}((\hat{B_{1}})_{i,j},(\hat {B_{2}})_{i,j},\ldots,(\hat{B}_{N_{x}-1})_{i,j})\), the notation \((\hat{B_{q}})_{i,j}\) referring to the \((i,j)\)th entry of the matrix \(\hat{B_{q}}\) defined previously.
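When every block \(A_{l}\) equals one matrix A and every \(\hat{B}_{l}\) comes from one matrix B (constant coefficients), S and T have Kronecker-product structure and automatically commute, and one ADI–CN step of Eq. (4.14) is two direction-wise solves. A small numerical sketch under that simplifying assumption (the random matrices are hypothetical stand-ins for the actual Grünwald blocks):

```python
import numpy as np

# Hypothetical constant-coefficient case: S = I (x) A and T = B (x) I,
# which always commute since S @ T = B (x) A = T @ S.
rng = np.random.default_rng(0)
nx, ny = 4, 3                      # interior grid sizes N_x - 1, N_y - 1
A = rng.standard_normal((nx, nx))  # stands in for the x-direction block A_l
B = rng.standard_normal((ny, ny))  # stands in for the y-direction block B_hat_l

S = np.kron(np.eye(ny), A)
T = np.kron(B, np.eye(nx))
assert np.allclose(S @ T, T @ S)   # commuting hypothesis of Theorem 4

# One ADI-CN step, Eq. (4.14): (I - S)(I - T) U^k = (I + S)(I + T) U^{k-1} + F
I = np.eye(nx * ny)
U_prev = rng.standard_normal(nx * ny)
F = rng.standard_normal(nx * ny)
rhs = (I + S) @ (I + T) @ U_prev + F
U_star = np.linalg.solve(I - S, rhs)    # x-direction sweep
U_new = np.linalg.solve(I - T, U_star)  # y-direction sweep
assert np.allclose((I - S) @ (I - T) @ U_new, rhs)
```

In practice each sweep is performed row by row (or column by column) with the small matrices \(A_{l}\) and \(\hat{B}_{l}\), which is exactly what makes the ADI factorization cheaper than solving the full two-dimensional system at once.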

Theorem 4

If \(C_{x}(x,y)\) and \(C_{y}(x,y)\) decrease monotonically along x and y, respectively, \(D_{x}(x,y)\) and \(D_{y}(x,y)\) increase monotonically along x and y, respectively, and the matrices S and T commute, then the ADI–CN method defined by Eq. (4.14) is unconditionally stable and \(O(\tau^{2}+h_{1}+h_{2})\) accurate.

Proof

From Theorem 2, if \(\bar{r}_{i}\) denotes the sum of the elements along the ith row of the matrix \(A_{l}\) excluding the diagonal element \(A_{i,i}\), we have
$$\bar{r}_{i}\leq-A_{i,i}. $$
According to the Gershgorin theorem, the eigenvalues of the matrix \(A_{l} \) lie in the union of the disks centered at \(A_{i,i} \) with radius \(\sum_{v=1,v\neq i}^{N_{x}-1}|A_{i,v}|\); therefore, the eigenvalues of the matrix \(A_{l} \) have negative real parts. Similarly, the eigenvalues of the matrix \(\hat{B}_{l} \) have negative real parts. Since \(S=\operatorname{diag}(A_{1},A_{2},\ldots,A_{N_{y}-1})\), the eigenvalues of the matrix S lie in the union of the Gershgorin disks for the matrices \(A_{l}\); therefore, every eigenvalue of the matrix S has a negative real part. Similarly, every eigenvalue of the matrix T has a negative real part.

Because the matrices S and T commute, if \(\lambda_{1} \) and \(\lambda_{2} \) are eigenvalues of S and T, respectively, then \(\frac{(1+\lambda_{1})(1+\lambda _{2})}{(1-\lambda_{1})(1-\lambda_{2}) }\) is an eigenvalue of the matrix \((I-S)^{-1}(I+S)(I-T)^{-1}(I+T) \). Since \(\lambda_{1}\) and \(\lambda_{2}\) have negative real parts, \(|1+\lambda_{i}|<|1-\lambda_{i}|\) for \(i=1,2\), so the spectral radius of the matrix \((I-T)^{-1}(I-S)^{-1}(I+S)(I+T) \) is less than one, and the ADI–CN method defined by Eq. (4.14) is unconditionally stable. Therefore, according to Lax's equivalence theorem [26], the ADI–CN method defined by Eq. (4.14) is \(O(\tau^{2}+h_{1}+h_{2})\) accurate. □
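The scalar fact underlying the last step is that \(\operatorname{Re}\lambda<0\) implies \(|1+\lambda|<|1-\lambda|\), since \(|1+\lambda|^{2}-|1-\lambda|^{2}=4\operatorname{Re}\lambda\). A quick numerical check with arbitrary sample eigenvalues (the values below are illustrative, not computed from the scheme):

```python
def amplification(l1, l2):
    # modulus of the ADI-CN amplification eigenvalue
    # (1 + l1)(1 + l2) / ((1 - l1)(1 - l2))
    return abs((1 + l1) * (1 + l2) / ((1 - l1) * (1 - l2)))

# arbitrary eigenvalue pairs with negative real parts
samples = [(-0.3 + 2.0j, -5.0 - 1.0j),
           (-1e-4 + 10.0j, -2.0 + 0.0j),
           (-7.5 + 0.0j, -0.1 + 0.9j)]
for l1, l2 in samples:
    assert amplification(l1, l2) < 1.0   # each factor has modulus below one
```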

Remark 2

(Richardson extrapolation)

The extrapolated solution is computed from
$$ u_{t_{k},x,y}=2u_{t_{k},x,h_{1}/2,y,h_{2}/2}-u_{t_{k},x,h_{1},y,h_{2}}, $$
where \((x,y)\) is a common grid point, and \(u_{t_{k},x,h_{1},y,h_{2}}\), \(u_{t_{k},x,h_{1}/2,y,h_{2}/2}\) denote the ADI–CN solutions at the grid point \((x,y)\) on the coarse grid \((h_{1},h_{2})\) and the fine grid \((h_{1}/2,h_{2}/2)\), respectively; the extrapolated solution is then \(O(\tau^{2}+h_{1}^{2}+h_{2}^{2})\) accurate.
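As an illustration with a made-up first-order error model \(A(h)=L+c_{1}h+c_{2}h^{2}\) (a stand-in for the scheme, not the scheme itself), the combination \(2A(h/2)-A(h)\) cancels the \(c_{1}h\) term and leaves an \(O(h^{2})\) error:

```python
def approx(h, L=2.0, c1=0.7, c2=-0.4):
    # toy error model A(h) = L + c1*h + c2*h**2, standing in for a
    # first-order-in-space solution on a grid of size h
    return L + c1 * h + c2 * h**2

def extrapolate(h):
    # Richardson extrapolation as in the remark: 2*A(h/2) - A(h)
    return 2 * approx(h / 2) - approx(h)

h = 0.1
plain_err = abs(approx(h) - 2.0)        # O(h)
extrap_err = abs(extrapolate(h) - 2.0)  # O(h^2): equals |c2| * h**2 / 2
assert extrap_err < plain_err
assert abs(extrap_err - 0.4 * h**2 / 2) < 1e-12
```

Halving h reduces the extrapolated error by a factor of four, i.e., second-order convergence, which is exactly the behavior reported in Table 2.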

5 Numerical examples

In this section, we carry out numerical experiments to demonstrate the effectiveness of the ADI methods.

Example

Consider the following two-dimensional two-sided space-fractional diffusion equation:
$$ \begin{aligned}[b] \frac{\partial u(x,y,t)}{\partial t}&=\frac{\partial}{\partial x} \biggl(C_{x}(x,y) \frac{\partial^{\alpha} u(x,y,t)}{\partial x^{\alpha }}-D_{x}(x,y) \frac{\partial^{\alpha} u(x,y,t)}{\partial(-x)^{\alpha }}\biggr) \\ &\quad{}+ \frac{\partial}{\partial y}\biggl(C_{y}(x,y) \frac{\partial^{\beta} u(x,y,t)}{\partial y^{\beta}}-D_{y}(x,y) \frac{\partial^{\beta }u(x,y,t)}{\partial(-y)^{\beta}}\biggr) +f(x,y,t), \\ & \quad0< x< 1,0 < y< 1,0\leq t\leq T_{\mathrm{end}}, \end{aligned} $$
(5.1)
where \(T_{\mathrm{end}} \) is the end time. The diffusion coefficients are \(C_{x}(x,y)=\frac{\Gamma(4-\alpha)}{\Gamma(3)}\cdot(-x)\), \(C_{y}(x,y)=\frac{\Gamma(3-\beta)}{\Gamma(2)}\cdot(-y)\), \(D_{x}(x,y)=\frac{\Gamma(4-\alpha)}{\Gamma(3)}\cdot(x-1)\), and \(D_{y}(x,y)=\frac{\Gamma(3-\beta)}{\Gamma(2)}\cdot(y-1)\). The source term \(f(x,y,t)\) is given by
$$ \begin{aligned}[b] f(x,y,t)&=-3e^{-3t}\bigl(x^{2}-x^{3} \bigr) \bigl(y-y^{2}\bigr)+\bigl\{ (2-\beta)^{2} \bigl[y^{1-\beta }-(1-y)^{1-\beta}\bigr]-2(3-\beta)\bigl[y^{2-\beta} \\ &\quad{}-(1-y)^{2-\beta}\bigr]\bigr\} e^{-3t}\bigl(x^{2}-x^{3} \bigr) +\biggl\{ (3-\alpha )^{2}\bigl[x^{2-\alpha}-2(1-x)^{2-\alpha} \bigr] \\ &\quad {} -3(4-\alpha)\bigl[x^{3-\alpha }+(1-x)^{3-\alpha}\bigr]- \frac{(3-\alpha)(2-\alpha)^{2}}{2}(1-x)^{1-\alpha}\biggr\} e^{-3t} \bigl(y-y^{2}\bigr), \end{aligned} $$
(5.2)
The initial function is
$$ \phi(x,y)=\bigl(x^{2}-x^{3}\bigr) \bigl(y-y^{2} \bigr), $$
(5.3)
and the zero Dirichlet boundary condition is
$$ u(0,y,t)=u(1,y,t)=u(x,0,t)=u(x,1,t)=0. $$
(5.4)
The exact solution to this problem is
$$ u(x,y,t)=e^{-3t}\bigl(x^{2}-x^{3}\bigr) \bigl(y-y^{2}\bigr). $$
(5.5)
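The claimed exact solution is easy to sanity-check against the initial function (5.3) and the zero Dirichlet boundary condition (5.4):

```python
import math

def u_exact(x, y, t):
    # exact solution, Eq. (5.5)
    return math.exp(-3 * t) * (x**2 - x**3) * (y - y**2)

def phi(x, y):
    # initial function, Eq. (5.3)
    return (x**2 - x**3) * (y - y**2)

# u(x, y, 0) = phi(x, y), and u vanishes on the boundary of the unit square
for x, y in [(0.25, 0.5), (0.7, 0.1)]:
    assert u_exact(x, y, 0.0) == phi(x, y)
for y in (0.3, 0.9):
    assert u_exact(0.0, y, 1.0) == 0.0 and u_exact(1.0, y, 1.0) == 0.0
for x in (0.3, 0.9):
    assert u_exact(x, 0.0, 1.0) == 0.0 and u_exact(x, 1.0, 1.0) == 0.0
```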
Table 1 shows the maximum absolute numerical error and temporal convergence orders for the ADI–implicit Euler method with \(T_{\mathrm{end}}=1\). From this table, we see that the convergence order of the scheme is \(O(\tau+h_{1}+h_{2})\).
Table 1

Maximum errors and temporal convergence orders for the ADI–implicit Euler method with \(T_{\mathrm{end}}=1\)

| \(\Delta t=h_{1}=h_{2}\) | Maximum error (α = 0.6, β = 0.8) | Order | Maximum error (α = 0.7, β = 0.9) | Order |
| --- | --- | --- | --- | --- |
| 1/10 | 4.8003e−3 | – | 1.7678e−3 | – |
| 1/20 | 2.6001e−3 | 0.885 | 8.8503e−4 | 0.999 |
| 1/40 | 1.3000e−3 | 1.000 | 4.4357e−4 | 0.996 |
| 1/80 | 6.7261e−4 | 0.956 | 2.2553e−4 | 0.976 |

Table 2 shows the maximum error and temporal convergence orders for ADI–CN extrapolated solution with \(T_{\mathrm{end}}=1\). From this table, we see that the convergence order of the scheme is \(O(\tau ^{2}+h_{1}^{2}+h_{2}^{2})\).
Table 2

Maximum errors and temporal convergence orders for the ADI–CN extrapolated solution with \(T_{\mathrm{end}}=1\)

| \(\Delta t=h_{1}=h_{2}\) | Maximum error (α = 0.6, β = 0.8) | Order | Maximum error (α = 0.7, β = 0.9) | Order |
| --- | --- | --- | --- | --- |
| 1/5 | 3.7326e−3 | – | 9.4226e−4 | – |
| 1/10 | 9.3601e−4 | 1.996 | 2.3674e−4 | 2.078 |
| 1/20 | 2.3732e−4 | 1.980 | 5.5934e−5 | 2.081 |
| 1/40 | 5.9725e−5 | 1.990 | 1.4838e−5 | 1.914 |
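The tabulated orders come from successive error ratios, \(\text{order}=\log_{2}(e_{2h}/e_{h})\). Recomputing them from the tabulated maximum errors (α = 0.6, β = 0.8 columns; small differences from the printed orders stem from rounding of the table entries) confirms the first- and second-order rates:

```python
import math

def observed_orders(errors):
    # order between successive grid halvings: log2(e_{2h} / e_h)
    return [math.log2(a / b) for a, b in zip(errors, errors[1:])]

# maximum errors from Table 1 (ADI-implicit Euler) and Table 2
# (extrapolated ADI-CN), alpha = 0.6, beta = 0.8
euler_errs = [4.8003e-3, 2.6001e-3, 1.3000e-3, 6.7261e-4]
cn_errs = [3.7326e-3, 9.3601e-4, 2.3732e-4, 5.9725e-5]

assert all(0.8 < p < 1.1 for p in observed_orders(euler_errs))  # ~ 1st order
assert all(1.9 < p < 2.1 for p in observed_orders(cn_errs))     # ~ 2nd order
```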

6 Conclusions

We used the shifted left Grünwald formula and the standard right Grünwald formula to approximate the left and right Riemann–Liouville fractional derivatives, respectively, and presented an implicit Euler method and a CN method for the two-dimensional two-sided space-fractional diffusion equation. Both methods are combined with the ADI technique to obtain unconditionally stable first-order and second-order accurate finite difference methods.

Declarations

Acknowledgements

The authors would like to thank the referees for a careful reading and several constructive comments and making some useful corrections that have improved the presentation of the paper.

Funding

This work was supported by the National Natural Science Foundation of China (No. 11271141) and Chongqing Municipal Science and Technology Commission Fund Project (No. cstc2018jcyjAX0787).

Authors’ contributions

All authors contributed equally to this work. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Mathematics and Computation Science, Hunan University of Science and Technology, Xiangtan, China
(2)
Department of Mathematics, South China Agricultural University, Guangzhou, China
(3)
School of Management, Guangdong University of Technology, Guangzhou, China

References

  1. Chen, S., Liu, F., Jiang, X., Turner, I., Anh, V.: A fast semi-implicit difference method for a nonlinear two-sided space-fractional diffusion equation with variable diffusivity coefficients. Appl. Math. Comput. 257, 591–601 (2015)
  2. Podlubny, I.: Fractional Differential Equations. Academic Press, New York (1999)
  3. Meerschaert, M.M., Sikorskii, A.: Stochastic Models for Fractional Calculus. De Gruyter, Berlin (2012)
  4. Liu, F., Meerschaert, M.M., Mcgough, R.J., Zhuang, P., Liu, Q.: Numerical methods for solving the multi-term time-fractional wave-diffusion equation. Fract. Calc. Appl. Anal. 16(1), 9–25 (2013)
  5. Hao, Z.P., Sun, Z.Z., Cao, W.R.: A fourth-order approximation of fractional derivatives with its applications. J. Comput. Phys. 281, 787–805 (2015)
  6. Liu, F., Zhuang, P., Turner, I., Anh, V., Burrage, K.: A semi-alternating direction method for a 2-D fractional FitzHugh-Nagumo monodomain model on an approximate irregular domain. J. Comput. Phys. 293, 252–263 (2015)
  7. Liu, T.H., Hou, M.Z.: A fast implicit finite difference method for fractional advection-dispersion equations with fractional derivative boundary conditions. Adv. Math. Phys. 2017, Article ID 8716752 (2017)
  8. Zheng, M., Liu, F., Turner, I., Anh, V.: A novel high order space–time spectral method for the time fractional Fokker–Planck equation. SIAM J. Sci. Comput. 37(2), 701–724 (2015)
  9. Guo, B., Xu, Q., Zhu, A.: A second-order finite difference method for two-dimensional fractional percolation equations. Commun. Comput. Phys. 19(3), 733–757 (2016)
  10. Yin, X., Li, L., Fang, S.: Second-order accurate numerical approximations for the fractional percolation equations. J. Nonlinear Sci. Appl. 10(8), 4122–4136 (2017)
  11. Tadjeran, C., Meerschaert, M.M.: A second order accurate numerical method for the two-dimensional fractional diffusion equation. J. Comput. Phys. 220, 813–823 (2007)
  12. Chen, H., Lv, W., Zhang, T.: A Kronecker product splitting preconditioner for two-dimensional space-fractional diffusion equations. J. Comput. Phys. 360, 1–14 (2018)
  13. Chen, M., Deng, W.: A second-order numerical method for two-dimensional two-sided space fractional convection diffusion equation. Appl. Math. Model. 38(13), 3244–3259 (2014)
  14. Liu, F., Chen, S., Turner, T., Burrage, K., Anh, V.: Numerical simulation for two-dimensional Riesz space fractional diffusion equations with a nonlinear reaction term. Cent. Eur. J. Phys. 11(10), 1221–1232 (2013)
  15. Zeng, F., Liu, F., Li, C., Burrage, K., Turner, I., Anh, V.: A Crank–Nicolson ADI spectral method for the two-dimensional Riesz space fractional nonlinear reaction-diffusion equation. SIAM J. Numer. Anal. 52(6), 2599–2622 (2014)
  16. Feng, L., Zhuang, P., Liu, F., Turner, I., Yang, Q.: Second-order approximation for the space fractional diffusion equation with variable coefficient. Prog. Fract. Differ. Appl. 1(1), 23–35 (2015)
  17. Moroney, T., Yang, Q.: Efficient solution of two-sided nonlinear space-fractional diffusion equations using fast Poisson preconditioners. J. Comput. Phys. 246(246), 304–317 (2013)
  18. Chen, S., Liu, F., Jiang, X., Turner, I., Burrage, K.: Fast finite difference approximation for identifying parameters in a two-dimensional space-fractional nonlocal model with variable diffusivity coefficients. SIAM J. Numer. Anal. 54(2), 606–624 (2016)
  19. Feng, L.B., Zhuang, P., Liu, F., Turner, I.: Stability and convergence of a new finite volume method for a two-sided space-fractional diffusion equation. Appl. Math. Comput. 257, 591–601 (2015)
  20. Feng, L.B., Zhuang, P., Liu, F., Turner, I., Anh, V., Li, J.: A fast second-order accurate method for a two-sided space-fractional diffusion equation with variable coefficients. Comput. Math. Appl. 73, 1155–1171 (2017)
  21. Meerschaert, M.M., Tadjeran, C.: Finite difference approximations for fractional advection-dispersion flow equations. J. Comput. Appl. Math. 172, 65–77 (2004)
  22. Tadjeran, C., Meerschaert, M.M., Scheffler, P.: A second order accurate numerical approximation for the fractional diffusion equation. J. Comput. Phys. 213, 205–213 (2006)
  23. Samko, S.G., Kilbas, A.A., Marichev, O.I.: Fractional Integrals and Derivatives: Theory and Applications. Gordon Breach, London (1993)
  24. Meerschaert, M.M., Scheffler, H.P., Tadjeran, C.: Finite difference methods for two-dimensional fractional dispersion equation. J. Comput. Phys. 211, 249–261 (2006)
  25. Chen, S., Liu, F., Turner, I., Anh, V.: An implicit numerical method for the two-dimensional variable-order fractional percolation equation. Appl. Math. Comput. 219, 4322–4331 (2013)
  26. Hentenryck, P., Bent, R., Upfal, E.: An Introduction to the Fractional Calculus and Fractional Differential Equations. Wiley, New York (1993)

Copyright

© The Author(s) 2018
