
Fourth-order finite difference scheme and efficient algorithm for nonlinear fractional Schrödinger equations


To improve computing efficiency, we propose a fourth-order difference scheme and design a fast algorithm to simulate the nonlinear fractional Schrödinger (FNLS) equation originating from fractional quantum mechanics. The numerical analysis and experiments conducted in this article show that the proposed difference scheme attains the optimal second-order and fourth-order convergence rates in time and space, respectively, reduces the computational cost to \(\mathcal{O}(M\log M)\), and accurately preserves physical features of FNLS such as the mass balance.


As is well known, numerous experiments have shown that fractional calculus provides more flexible descriptions of real-world phenomena than its integer-order counterpart in various fields of science and engineering, such as the transmission of malaria disease [1], constrained systems [2], the exothermic reactions model [3], and the spring pendulum [4]. It has therefore attracted a mounting amount of valuable research, both mathematical and numerical, during the last few years; see [5,6,7,8,9,10,11,12,13,14,15,16] and the references therein.

As one of the most significant applications of fractional calculus in quantum mechanics, the fractional nonlinear Schrödinger equation (FNLS) was derived from Lévy path integrals instead of the Brownian path integrals used in the classical Schrödinger equation of Feynman and Hibbs [17]. The mathematical analysis conducted in the literature on FNLS has established the existence and uniqueness of the ground state solution, the existence of the global solution, and the well-posedness of the solution to the Cauchy problem; see [18, 19]. For its computation, implicitly conservative and split-step alternating direction difference methods, as well as Galerkin finite element methods in one- or two-dimensional space, were proposed in [20,21,22,23,24], respectively.

Checking the existing numerical methods carefully, we find that although the difference methods are easy to implement, they possess low accuracy, which motivates us to design a high-order scheme for the numerical solution of the fractional Schrödinger equation. We also find that the nonlocality of the fractional Laplacian operator in FNLS generates a non-sparse coefficient matrix in the discrete system, which raises the computational cost to \(\mathcal{O}(M^{2})\) if CG-like algorithms are used.

The goals of this article are as follows: (1) to construct a fourth-order difference scheme by applying the Crank–Nicolson scheme to discretize the temporal derivative and truncating the weighted and shifted difference formula of [25] to discretize the fractional Laplacian operator; (2) to prove the solvability of the scheme and conduct numerical analysis to verify the convergence rates, as well as to show that the proposed scheme inherits physical features of FNLS such as the mass balance; (3) to design an efficient numerical algorithm by combining the Toeplitz structure of the coefficient matrix with the fast Fourier transform, which reduces the computational cost from \(\mathcal{O}(M^{2})\) to \(\mathcal{O}(M\log M)\); and (4) to conduct numerical experiments to verify the theoretical results and the physical properties.

The main novelties of this article are the following: (1) the scheme is linearized, so that only a linear system needs to be solved at each time step, which significantly reduces the computational cost compared with existing nonlinear schemes; (2) the scheme possesses second-order convergence in time and fourth-order convergence in space, which, combined with the fast stabilized bi-conjugate gradient (FSBiCG) algorithm, greatly reduces the computational cost and thus improves the computational efficiency.

The remainder of this article is arranged as follows. In Sect. 2, we present the mathematical formulation of FNLS and some related lemmas. In Sect. 3, the fourth-order difference scheme is constructed. In Sect. 4, we prove the solvability and convergence of the discrete system. The mass conservation property as well as the stability is discussed in Sect. 5. In Sect. 6, an FSBiCG algorithm is proposed to reduce the computational cost and storage. Several numerical experiments confirming our theoretical analysis are reported in Sect. 7.

Throughout the article we use C to denote a generic constant which may take different values at different places.

Model problem and preliminaries

Consider the following fractional nonlinear Schrödinger equation (FNLS):

$$\begin{aligned} &iu_{t}-(-\Delta )^{\frac{\alpha }{2}}u+\beta \vert u \vert ^{2}u=0,\quad x\in (a,b), t\in [0,T], \end{aligned}$$
$$\begin{aligned} &u(a,t)=u(b,t)=0, \quad t\in [0,T], \end{aligned}$$
$$\begin{aligned} &u(x,0)=u_{0}(x), \quad x\in [a,b]. \end{aligned}$$

Here, \(\alpha \in (1,2]\) and \(i^{2}=-1\); \(u=u(x,t)\) is a complex-valued wave function describing the state of microscopic particles, which reflects their fluctuation; the initial condition \(u_{0}(x)\) is a given smooth function vanishing at the end points \(x=a\) and \(x=b\); and the parameter β is a real constant describing the strength of the local interactions between particles. FNLS is called focusing (attractive) or defocusing (repulsive), depending on whether a minus or a plus sign appears in front of the nonlinearity above, respectively. The Riesz fractional derivative \((-\Delta )^{\frac{ \alpha }{2}}\) is defined as

$$\begin{aligned} (-\Delta )^{\frac{\alpha }{2}}u(x,t)=\frac{1}{2\cos \frac{\alpha \pi }{2}} \bigl(_{-\infty }D_{x}^{\alpha }u(x,t)+{}_{x}D _{\infty }^{\alpha }u(x,t) \bigr), \end{aligned}$$

in which \({}_{-\infty }D_{x}^{\alpha }u(x,t)\) and \({}_{x}D_{\infty }^{ \alpha }u(x,t)\) are given, respectively, by the following weighted and shifted difference formulas [25]:

$$\begin{aligned} &{}_{-\infty }D_{x}^{\alpha }u(x,t) =\lim _{h\to 0}h^{-\alpha }\sum_{k=0} ^{\infty }q_{k}u\bigl(x-(k-p)h,t\bigr), \end{aligned}$$
$$\begin{aligned} &{}_{x}D_{\infty }^{\alpha }u(x,t) =\lim _{h\to 0}h^{-\alpha }\sum_{k=0} ^{\infty }q_{k}u\bigl(x+(k-p)h,t\bigr) , \end{aligned}$$

where p is an integer, and

$$\begin{aligned} q_{k}= \biggl(\frac{3}{2} \biggr)^{\alpha }\sum_{i=0}^{k}3^{-i} \omega _{i}\omega _{k-i} \end{aligned}$$

with

$$ \omega _{i}=\frac{\varGamma (i-\alpha )}{\varGamma (-\alpha )\varGamma (i+1)},\quad i=0,1,\ldots,k. $$

In fact, \(q_{k}\) can be expressed as the coefficients of the power series of the function \((\frac{3}{2}-2z+\frac{1}{2}z^{2} ) ^{\alpha }\),

$$\begin{aligned} \biggl(\frac{3}{2}-2z+\frac{1}{2}z^{2} \biggr)^{\alpha }= \biggl(\frac{3}{2} \biggr) ^{\alpha }(1-z)^{\alpha } \biggl(1-\frac{1}{3}z \biggr)^{\alpha }= \sum _{k=0}^{\infty }q_{k}z^{k}. \end{aligned}$$
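As a quick sanity check, the convolution formula (2.7) for \(q_{k}\) can be compared against the generating-function characterization above. The following Python sketch (not part of the article's experiments; the sample size, radius, and test value \(\alpha =1.5\) are arbitrary choices) computes \(q_{k}\) both ways:

```python
import numpy as np

def omega(alpha, n):
    # omega_i = Gamma(i-alpha)/(Gamma(-alpha)Gamma(i+1)) = (-1)^i C(alpha, i),
    # computed by the stable recurrence omega_i = omega_{i-1} * (i-1-alpha)/i
    w = np.empty(n)
    w[0] = 1.0
    for i in range(1, n):
        w[i] = w[i - 1] * (i - 1 - alpha) / i
    return w

def q_coeffs(alpha, n):
    # q_k = (3/2)^alpha * sum_{i=0}^{k} 3^{-i} omega_i omega_{k-i}  (formula (2.7))
    w = omega(alpha, n)
    return (1.5 ** alpha) * np.array(
        [sum(3.0 ** (-i) * w[i] * w[k - i] for i in range(k + 1)) for k in range(n)])

def q_coeffs_series(alpha, n, N=128, r=0.5):
    # power-series coefficients of (3/2 - 2z + z^2/2)^alpha, recovered by sampling
    # the generating function on the circle |z| = r and taking a discrete Fourier sum
    z = r * np.exp(2j * np.pi * np.arange(N) / N)
    f = (1.5 - 2.0 * z + 0.5 * z ** 2) ** alpha  # principal branch; f avoids (-inf, 0]
    c = np.fft.fft(f) / N                        # c[k] ~ q_k * r^k for k << N
    return (c[:n] / r ** np.arange(n)).real

qa = q_coeffs(1.5, 8)
qb = q_coeffs_series(1.5, 8)
```

Both routes agree to roundoff, and the leading coefficients match the closed forms \(q_{0}=(3/2)^{\alpha }\) and \(q_{1}=-2(3/2)^{\alpha }\).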

In this paper, we adopt the ideas of [25] to truncate the Riesz fractional derivative (2.4). We first introduce several notations.

For positive integers M and N, we define a uniform partition of \(\varOmega =(a,b)\) by \(x_{j}=a+jh\), \(j=0,1,2,\ldots,M\), with \(h:= \frac{b-a}{M}\), and of the time interval \([0,T]\) by \(t_{n}=n\tau \) for \(n=0,1,\ldots,N\) with \(\tau:=\frac{T}{N}\). For a given grid function \(w=\{w^{n}\mid n=0,1,\ldots,N\}\), we introduce the following notations:

$$\begin{aligned} w^{n+\frac{1}{2}}=\frac{w^{n+1}+w^{n}}{2},\qquad \delta _{t}w^{n+\frac{1}{2}}= \frac{w^{n+1}-w^{n}}{\tau }. \end{aligned}$$

Let \(\mathcal{V}_{h}=\{w|w=(w_{1},w_{2},\ldots,w_{M-1})\}\) be the space of the inner grid functions. For any two grid functions \(u,w\in \mathcal{V}_{h}\), we define the discrete inner product and the associated \(l^{2}\)-norm as follows:

$$\begin{aligned} (u,w)=h\sum_{j=1}^{M-1}u_{j} \overline{w_{j}}, \qquad \Vert w \Vert ^{2}=( w,w). \end{aligned}$$
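In code, the discrete inner product and norm translate directly; a minimal Python sketch (the grid values below are arbitrary illustrations):

```python
import numpy as np

h = 0.1
u = np.array([1 + 2j, 3 - 1j, 0.5j])     # a grid function in V_h (M - 1 = 3)
w = np.array([2 - 1j, 1 + 1j, -1 + 0j])

def inner(u, w):
    # (u, w) = h * sum_j u_j * conj(w_j)
    return h * np.sum(u * np.conj(w))

def norm2(w):
    # ||w||^2 = (w, w), always real and nonnegative
    return inner(w, w).real
```

Note the conjugation on the second argument, which makes \((u,w)=\overline{(w,u)}\) and \(\Vert w\Vert ^{2}\ge 0\).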

Using the notations introduced above, the fourth-order discretization of the fractional derivatives is given as follows.

Lemma 2.1


Suppose that \(\alpha \in (1,2)\), \(u\in L^{1}(\mathbb{R})\), and that \({_{-\infty }D_{x}^{\alpha +4}}u\) and its Fourier transform belong to \(L^{1}(\mathbb{R})\). We denote the truncations of the weighted and shifted difference operators (2.5) and (2.6) by

$$\begin{aligned} _{L}D_{p}^{\alpha }u=\sum _{s=1}^{4}\lambda _{s}A_{+,p_{s}} \mathbf{u},\qquad {}_{R}D_{p}^{\alpha }u=\sum _{s=1}^{4}\lambda _{s}A_{-,p_{s}} \mathbf{u}, \end{aligned}$$

in which \(\mathbf{u}=(u_{1},u_{2},\ldots,u_{M-1})^{T}\in \mathcal{V} _{h}\), the \(p_{s}\) are mutually distinct integers that can be chosen flexibly as needed, and the \(\lambda _{s}\) satisfy the following system:

$$ \textstyle\begin{cases} \sum_{s=1}^{4}\lambda _{s}=1,\qquad \sum_{s=1}^{4}\lambda _{s}p_{s}=0,\\ \sum_{s=1}^{4}\lambda _{s}(3p_{s} ^{2}-2\alpha )=0,\\ \sum_{s=1}^{4}\lambda _{s}(2p_{s}^{3}-4\alpha p _{s}+3\alpha )=0, \end{cases} $$

which yields

$$\begin{aligned} \lambda _{s}=\frac{-9\alpha -4\alpha \sum_{i=1,i\ne s}^{4}p_{i}-6 \prod_{i=1,i\neq s}^{4}p_{i}}{6\prod_{i=1,i\neq s}^{4}(p_{s}-p_{i})}\quad \textit{for }s=1,\ldots,4. \end{aligned}$$

The matrices \(A_{\pm,p_{s}}\)can be expressed by

$$ A_{+,p_{s}}=\frac{1}{h^{\alpha }} \begin{bmatrix} q_{p_{s}}&q_{p_{s}-1}&\ldots &q_{0} \\ q_{p_{s}+1}&q_{p_{s}}&q_{p_{s}-1}&\ldots &q_{0} \\ q_{p_{s}+2}&q_{p_{s}+1}&q_{p_{s}}&q_{p_{s}-1}&\ldots &q_{0} \\ \vdots &\ddots &\ddots &\ddots &\ddots &\ldots &\ddots \\ q_{M-2}&\ldots &\ddots &q_{p_{s}+1}&q_{p_{s}}&q_{p_{s}-1}&\ldots &q _{0} \\ \vdots &\ddots &\ldots &\ddots &\ddots &\ddots &\ddots &\vdots \\ q_{p_{s}+M-3}&\ldots &\ddots &\ldots &\ddots &q_{p_{s}+1}&q_{p_{s}}&q _{p_{s}-1} \\ q_{p_{s}+M-2}&q_{p_{s}+M-3}&\ldots &q_{M-2}&\ldots &q_{{p_{s}}+2}&q _{{p_{s}}+1}&q_{p_{s}} \end{bmatrix}, $$

and \(A_{-,p_{s}}=A_{+,p_{s}}^{T}\). Then there holds

$$\begin{aligned} {_{-\infty }D}_{x}^{\alpha }u={_{L}D}_{p}^{\alpha }u+ \mathcal{O}\bigl(h ^{4}\bigr), \qquad {_{x}D}_{\infty }^{\alpha }u={_{R}D}_{p}^{\alpha }u+ \mathcal{O}\bigl(h^{4}\bigr). \end{aligned}$$

Remark 2.2

In actual computation, for \(s=1,\ldots,4\), we often choose \((p_{1},p _{2},p_{3},p_{4})=(1,-1,2, -2)\), conforming to \(p_{s}=(-1)^{s-1} ( [\frac{ {s-1}}{2} ]+1 )\); thus \(\lambda _{s}\) and the entries \(q_{p_{s}+k}\), \(k=-p_{s},-p_{s}+1,\ldots, M-2\), of the Toeplitz matrix can be calculated from (2.10) and (2.7), respectively, that is,

$$\begin{aligned} &\lambda _{1}=\frac{5}{36}\alpha +\frac{2}{3},\qquad \lambda _{2}=- \frac{13}{36}\alpha +\frac{2}{3},\qquad \lambda _{3}=-\frac{1}{72} \alpha -\frac{1}{6},\qquad \lambda _{4}=\frac{17}{72}\alpha - \frac{1}{6}, \\ &q_{p_{s}+k}= \biggl(\frac{3}{2} \biggr)^{\alpha } \sum_{i=0}^{p_{s}+k}3^{-i} \omega _{i}\omega _{{p_{s}+k}-i}. \end{aligned}$$
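These closed-form weights can be checked by solving system (2.9) directly; a short Python sketch (\(\alpha =1.5\) is an arbitrary test value):

```python
import numpy as np

def lambdas(alpha, p=(1, -1, 2, -2)):
    # solve the 4x4 moment system (2.9) for the weights lambda_s
    p = np.asarray(p, dtype=float)
    A = np.vstack([
        np.ones(4),                                   # sum lambda_s              = 1
        p,                                            # sum lambda_s p_s          = 0
        3.0 * p ** 2 - 2.0 * alpha,                   # sum lambda_s (3p^2 - 2a)  = 0
        2.0 * p ** 3 - 4.0 * alpha * p + 3.0 * alpha, # fourth moment condition   = 0
    ])
    return np.linalg.solve(A, np.array([1.0, 0.0, 0.0, 0.0]))

alpha = 1.5
lam = lambdas(alpha)
closed = np.array([5 * alpha / 36 + 2 / 3, -13 * alpha / 36 + 2 / 3,
                   -alpha / 72 - 1 / 6, 17 * alpha / 72 - 1 / 6])
```

The numerical solution of the system reproduces the four closed-form values above.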

It was proved in Lemma 7 of [25] that the linear combinations \(\sum_{s=1}^{4}\lambda _{s}A_{+,p_{s}}\) and \(\sum_{s=1}^{4}\lambda _{s}A _{-,p_{s}}\) are negative definite.
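This negative definiteness can also be observed numerically. The sketch below (h is set to 1, since a positive scaling does not affect definiteness; the size \(m=32\) and \(\alpha =1.5\) are arbitrary test choices) assembles \(\sum_{s=1}^{4}\lambda _{s}A_{+,p_{s}}\) and checks that its symmetric part has only negative eigenvalues:

```python
import numpy as np

alpha, m = 1.5, 32
ps = (1, -1, 2, -2)
lam = (5 * alpha / 36 + 2 / 3, -13 * alpha / 36 + 2 / 3,
       -alpha / 72 - 1 / 6, 17 * alpha / 72 - 1 / 6)

# coefficients q_k from (2.7) via the recurrence for omega_i
n = m + 4
w = np.empty(n); w[0] = 1.0
for i in range(1, n):
    w[i] = w[i - 1] * (i - 1 - alpha) / i
q = (1.5 ** alpha) * np.array([sum(3.0 ** (-i) * w[i] * w[k - i]
                                   for i in range(k + 1)) for k in range(n)])

# assemble sum_s lambda_s A_{+,p_s}, with A_{+,p}[j,k] = q_{p+j-k} (h = 1)
S = np.zeros((m, m))
for l, p in zip(lam, ps):
    off = p + np.subtract.outer(np.arange(m), np.arange(m))   # p + j - k
    S += l * np.where(off >= 0, q[np.clip(off, 0, n - 1)], 0.0)

# negative definite  <=>  largest eigenvalue of the symmetric part is < 0
eigmax = np.linalg.eigvalsh((S + S.T) / 2).max()
```

The transposed combination \(\sum_{s}\lambda _{s}A_{-,p_{s}}\) inherits the same property, since a matrix and its transpose share the same symmetric part up to transposition.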

A fourth-order difference scheme

In this section, we adopt the Crank–Nicolson discretization in the temporal direction as well as the fourth-order difference discretization in the spatial direction to construct a difference scheme for FNLS (2.1).

Denoting \(u_{j}^{n}:=u(x_{j},t_{n})\) at the point \(x_{j}\) and at time \(t_{n}\), and noticing (2.4) as well as the homogeneous boundary condition (2.2), we obtain

$$\begin{aligned} (-\Delta )^{\frac{\alpha }{2}}u_{j}^{n}= \frac{1}{2 \cos \frac{\alpha \pi }{2}}\bigl({_{L}D_{p}^{\alpha }}u^{n}_{j}+{_{R}D_{p}^{\alpha }}u_{j} ^{n}\bigr)+\mathcal{O}\bigl(h^{4}\bigr). \end{aligned}$$

Let \(U_{j}^{n}\) be the numerical approximation to \(u(x_{j},t_{n})\) and

$$\begin{aligned} \Delta _{h}^{\alpha }U_{j}^{n}:= \frac{1}{2 \cos \frac{\alpha \pi }{2}}\bigl( {_{L}D_{p}^{\alpha }}U_{j}^{n}+{_{R}D_{p}^{\alpha }}U_{j}^{n} \bigr),\quad 1\le j\le M-1,0\le n\le N. \end{aligned}$$

Then we discretize FNLS (2.1) as follows:

$$\begin{aligned} &i\delta _{t}U_{j}^{n+\frac{1}{2}}- \Delta _{h}^{\alpha }U_{j}^{n+ \frac{1}{2}}+ \frac{\beta }{2}\bigl(3|U_{j}^{n}|^{2}-|U_{j}^{n-1}|^{2} \bigr)U _{j}^{n+\frac{1}{2}}=0, \\ &\quad 1\le j\le M-1,1\le n\le N-1, \end{aligned}$$
$$\begin{aligned} &U_{j}^{0}=u_{0}(x_{j}),\quad 0 \le j\le M, \end{aligned}$$
$$\begin{aligned} &U_{0}^{n}=U_{M}^{n}=0,\quad 0 \le n\le N. \end{aligned}$$

It is worth noting that (3.2) is not self-starting, and the numerical solution at \(n=1\) must be provided by another scheme. For this purpose, we introduce the following scheme to compute \(U^{1}_{j}\), \(1\le j\le M-1\):

$$\begin{aligned} &i\frac{U_{j}^{(1)}-U_{j}^{0}}{\delta }-\Delta _{h}^{\alpha }U_{j}^{(1)}+ \beta \bigl\vert U_{j}^{0} \bigr\vert ^{2}U_{j}^{(1)}=0, \end{aligned}$$
$$\begin{aligned} &i\delta _{t}U_{j}^{\frac{1}{2}}-\Delta _{h}^{\alpha }U_{j}^{ \frac{1}{2}}+ \frac{\beta }{2} \bigl(3 \bigl\vert U_{j}^{(1)} \bigr\vert ^{2}- \bigl\vert U_{j}^{0} \bigr\vert ^{2} \bigr)U _{j}^{\frac{1}{2}}=0. \end{aligned}$$

For scheme (3.2)–(3.5b), only a linear system needs to be solved at each step.

Numerical analysis

In this section, the solvability and convergence of the fourth-order difference scheme proposed in (3.2)–(3.5b) will be analyzed. To start with, we introduce three lemmas which will be used later.

Three lemmas

Lemma 4.1

Given two grid functions \(U,V\in \mathcal{V}_{h}\), there is a linear operator \(\varLambda _{h}^{\alpha }\) such that

$$\begin{aligned} \bigl(\Delta _{h}^{\alpha }U,V\bigr)=\bigl( \varLambda _{h}^{\alpha }U,\varLambda _{h}^{\alpha }V \bigr). \end{aligned}$$


From Remark 2.2 and the fact that \(\cos \frac{\alpha \pi }{2}<0\) for \(\alpha \in (1,2]\), we know that the matrices \(A=\frac{1}{2 \cos \frac{\alpha \pi }{2}}\sum_{s=1}^{4}\lambda _{s}A_{+,p_{s}}\) and \(B = \frac{1}{2 \cos \frac{\alpha \pi }{2}}\sum_{s=1}^{4}\lambda _{s}A _{-,p_{s}}\) are positive definite. Let \(K=A+B\); then K is a real symmetric positive definite matrix satisfying

$$\begin{aligned} \Delta _{h}^{\alpha }U=KU, \quad \forall U\in \mathcal{V}_{h}. \end{aligned}$$

According to the spectral theorem [26], there is a real orthogonal matrix P and a real diagonal matrix \(D= \operatorname{diag}({\lambda })\) satisfying

$$\begin{aligned} K=PDP^{T}=\bigl(PD^{\frac{1}{2}}P^{T} \bigr)^{T}\bigl(PD^{\frac{1}{2}}P^{T} \bigr)=L^{T}L, \end{aligned}$$

where \(D^{\frac{1}{2}}=\operatorname{diag}(\sqrt{\lambda })\) and \(L=PD^{\frac{1}{2}}P^{T}\). It is easily shown that L is a real symmetric positive definite matrix. Recalling the definition of \(\Delta _{h}^{\alpha }\) (note that the factor \(h^{-\alpha }\) is already contained in the matrices \(A_{\pm,p_{s}}\)), we obtain

$$\begin{aligned} \bigl(\Delta _{h}^{\alpha }U,V\bigr)=(KU,V)=(LU,LV). \end{aligned}$$

If we define the operator \(\varLambda _{h}^{\alpha }\) by \(\varLambda _{h}^{ \alpha }U = LU\), we obtain (4.1). Thus, the proof is completed. □
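The factorization used in this proof is easy to reproduce numerically. The sketch below builds a random symmetric positive definite stand-in for K (not the actual difference matrix) and checks both \(K=L^{T}L\) and identity (4.1):

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 8, 0.1                            # size and mesh width are arbitrary
Bmat = rng.standard_normal((n, n))
K = Bmat @ Bmat.T + n * np.eye(n)        # stand-in real symmetric positive definite K

# spectral factorization K = P D P^T, then L = P D^{1/2} P^T gives K = L^T L
d, P = np.linalg.eigh(K)
L = P @ np.diag(np.sqrt(d)) @ P.T

def inner(u, v):
    # discrete inner product (u, v) = h * sum_j u_j * conj(v_j)
    return h * np.sum(u * np.conj(v))

U = rng.standard_normal(n) + 1j * rng.standard_normal(n)
V = rng.standard_normal(n) + 1j * rng.standard_normal(n)
lhs = inner(K @ U, V)                    # (K U, V)
rhs = inner(L @ U, L @ V)                # (L U, L V)
```

Because L is real and symmetric, \((LU,LV)=hV^{*}L^{T}LU=hV^{*}KU=(KU,V)\) even for complex-valued grid functions.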

Lemma 4.2

Let \(\operatorname{Im}(\cdot )\) and \(\operatorname{Re}(\cdot )\) stand for the imaginary part and the real part of a complex number, respectively. Then, for any grid function \(U^{n}\in \mathcal{V}_{h}\), \(0\le n\le N\), we have

$$\begin{aligned} &\operatorname{Im}\bigl(\Delta _{h}^{\alpha }U^{n+\frac{1}{2}},U^{n+\frac{1}{2}} \bigr)=0, \end{aligned}$$
$$\begin{aligned} &\operatorname{Re}\bigl(\Delta _{h}^{\alpha }U^{n+\frac{1}{2}}, \delta _{t}U^{n+ \frac{1}{2}}\bigr)=\frac{1}{2\tau }\bigl( \bigl\Vert \varLambda _{h}^{\alpha }U^{n+1} \bigr\Vert ^{2}- \bigl\Vert \varLambda _{h}^{\alpha }U^{n} \bigr\Vert ^{2}\bigr). \end{aligned}$$


Lemma 4.1 immediately implies (4.4). Using relation (4.1), we obtain

$$\begin{aligned} &\operatorname{Re}\bigl(\Delta _{h}^{\alpha }U^{n+\frac{1}{2}}, \delta _{t}U^{n+ \frac{1}{2}}\bigr) \\ &\quad =\operatorname{Re}\bigl(\varLambda _{h}^{\alpha }U^{n+\frac{1}{2}}, \varLambda _{h}^{ \alpha }\delta _{t}U^{n+\frac{1}{2}} \bigr) \\ &\quad =\frac{1}{2\tau }{\operatorname{Re}}\bigl(\varLambda _{h}^{\alpha }U^{n+1}+\varLambda _{h}^{\alpha }U^{n}, \varLambda _{h}^{\alpha }U^{n+1}-\varLambda _{h}^{\alpha }U^{n}\bigr) \\ &\quad =\frac{1}{2\tau }\bigl( \bigl\Vert \varLambda _{h}^{\alpha }U^{n+1} \bigr\Vert ^{2}- \bigl\Vert \varLambda _{h} ^{\alpha }U^{n} \bigr\Vert ^{2}\bigr),\quad 0\le n\le N-1. \end{aligned}$$

Thus, (4.5) is valid and the proof is completed. □

Lemma 4.3


For any grid function \(U^{n}\in \mathcal{V}_{h}\), \(0\le n\le N\), the following inverse inequality holds:

$$\begin{aligned} \bigl\Vert U^{n} \bigr\Vert _{\infty }^{2}\le \frac{1}{h} \bigl\Vert U^{n} \bigr\Vert ^{2} \end{aligned}$$


Solvability and convergence

In this subsection, we prove the solvability and convergence of difference scheme (3.2)–(3.5b) by induction.

Theorem 4.4

There exists a unique solution to difference scheme (3.2)–(3.5b).


Noticing that difference scheme (3.2)–(3.5b) is a linear system, it suffices to prove that its homogeneous system admits only the zero solution. For this purpose, we let the solution \(U^{n}=0\) for \(n=0,1,\ldots, m-1\) (together with the intermediate value \(U^{(1)}=0\)), and prove \(U^{m}=0\) by induction.

In fact, the homogeneous system of (3.2) at \(n=m-1\) is given by

$$\begin{aligned} U_{j}^{m}=-i\frac{\tau }{2} \Delta _{h}^{\alpha }U_{j}^{m}. \end{aligned}$$

Computing the discrete inner product of (4.8) with \(U^{m}\) and then taking the real part yield

$$\begin{aligned} \bigl\Vert U^{m} \bigr\Vert ^{2}+ \frac{\tau }{2} {\operatorname{Im}}\bigl(\Delta _{h}^{\alpha }U^{m},U ^{m}\bigr)=0. \end{aligned}$$

Using Lemma 4.2, we obtain

$$\begin{aligned} \bigl\Vert U^{m} \bigr\Vert =0, \end{aligned}$$

which implies \(U^{m}=0\), and hence \(U^{m}\) can be solved uniquely. This completes the proof. □

Before heading for the convergence analysis, we define the local truncation error \(R_{j}^{n+\frac{1}{2}}\) of scheme (3.2) for \(1\le n\le N-1\), \(1\le j\le M-1\), as

$$\begin{aligned} R_{j}^{n+\frac{1}{2}}=i\delta _{t}u_{j}^{n+\frac{1}{2}}-\Delta _{h}^{ \alpha }u_{j}^{n+\frac{1}{2}}+ \frac{\beta }{2}\bigl(3 \bigl\vert u_{j}^{n} \bigr\vert ^{2}- \bigl\vert u _{j}^{n-1} \bigr\vert ^{2}\bigr)u_{j}^{n+\frac{1}{2}}, \end{aligned}$$

and of scheme (3.5a)–(3.5b) for \(n=0\), \(1\le j\le M-1\), as

$$\begin{aligned} &R_{j}^{(1)}=i\frac{U_{j}^{(1)}-u_{j}^{0}}{\delta }-\Delta _{h}^{ \alpha }U_{j}^{(1)}+\beta \bigl\vert u_{j}^{0} \bigr\vert ^{2}U_{j}^{(1)}, \end{aligned}$$
$$\begin{aligned} &R_{j}^{\frac{1}{2}}=i\delta _{t}u_{j}^{\frac{1}{2}}- \Delta _{h}^{ \alpha }u_{j}^{\frac{1}{2}}+ \frac{\beta }{2} \bigl(3 \bigl\vert U_{j}^{(1)} \bigr\vert ^{2}- \bigl\vert u _{j}^{0} \bigr\vert ^{2} \bigr)u_{j}^{\frac{1}{2}}. \end{aligned}$$

From (3.1) and Taylor’s expansion, we have

$$\begin{aligned} \begin{aligned} & \bigl\vert R_{j}^{(1)} \bigr\vert \le C_{\delta }\bigl(\delta +h^{4}\bigr),\quad 1 \le j\le M-1, \\ & \bigl\vert R_{j}^{n+\frac{1}{2}} \bigr\vert \le C_{R}\bigl(\tau ^{2}+h^{4}\bigr),\quad 1\le j\le M-1, 0\le n\le N-1, \end{aligned} \end{aligned}$$

which gives

$$\begin{aligned} & \bigl\Vert R^{(1)} \bigr\Vert ^{2}\le (b-a) \bigl(C_{\delta }\bigl(\delta +h^{4}\bigr) \bigr)^{2}, \end{aligned}$$
$$\begin{aligned} & \bigl\Vert R^{n+\frac{1}{2}} \bigr\Vert ^{2}\le (b-a) \bigl(C_{R}\bigl(\tau ^{2}+h^{4}\bigr) \bigr)^{2}, \quad 0\le n\le N-1. \end{aligned}$$

Based on the truncation error introduced, the convergence results can be discussed. To start with, we define the error function \(e^{n} \in \mathcal{V}_{h}\) for \(0\le n\le N\) as

$$\begin{aligned} e_{j}^{n}=u_{j}^{n}-U_{j}^{n},\quad 1\le j\le M-1. \end{aligned}$$

Then we have the following results.

Theorem 4.5

Suppose that the original problem (2.1)–(2.3) has a smooth solution, and assume that \(\tau \le Ch^{2}\) and \(\delta \le \tau ^{2}\). Then there exist sufficiently small \(\tau _{0}>0\) and \(h_{0}>0\) such that, when \(0<\tau \le \tau _{0}\) and \(0< h\le h_{0}\), we have

$$\begin{aligned} & \bigl\Vert u^{n}-U^{n} \bigr\Vert \le C\bigl(\delta +\tau ^{2}+h^{4}\bigr),\quad 0\le n \le N, \end{aligned}$$

where C is a constant independent of τ, δ, and h.


We prove this theorem by induction. For \(n=0\), combining (2.3) with (3.3) implies the validity of (4.16).

For \(t=\delta \), subtracting (3.5a) from (4.12a), we obtain the error equation

$$\begin{aligned} i\frac{e_{j}^{(1)}}{\delta }-\Delta _{h}^{\alpha }e_{j}^{(1)}+ \beta \bigl\vert U _{j}^{0} \bigr\vert ^{2}e_{j}^{(1)}=R_{j}^{(1)}. \end{aligned}$$

Computing the discrete inner product of (4.17) with \(e^{(1)}\) yields

$$\begin{aligned} i\frac{1}{\delta }\bigl(e^{(1)},e^{(1)} \bigr)_{h}-\bigl(\Delta _{h}^{\alpha }e^{(1)},e ^{(1)}\bigr)_{h}+\beta \bigl( \bigl\vert U_{j}^{0} \bigr\vert ^{2}e^{(1)},e^{(1)} \bigr)_{h}=\bigl(R ^{(1)},e^{(1)} \bigr)_{h}. \end{aligned}$$

Taking the imaginary part of (4.18), combining with the triangle inequalities and Cauchy inequalities, as well as Lemma 4.2, we get

$$\begin{aligned} \bigl\Vert e^{(1)} \bigr\Vert ^{2} &=\delta {\operatorname{Im}}\bigl(R^{(1)},e^{(1)}\bigr)_{h} \\ &\le \delta \bigl( \bigl\Vert e^{(1)} \bigr\Vert ^{2}+ \bigl\Vert R^{(1)} \bigr\Vert ^{2} \bigr) \\ &\le \delta \bigl( \bigl\Vert e^{(1)} \bigr\Vert ^{2}+(b-a) \bigl(C_{\delta }\bigl(\delta +h^{4} \bigr)\bigr)^{2} \bigr). \end{aligned}$$

When \(\delta <\tau <\frac{1}{2}\), we obtain

$$\begin{aligned} \bigl\Vert e^{(1)} \bigr\Vert \le \sqrt{2(b-a) \delta }C_{\delta }\bigl(\delta +h^{4}\bigr)\le C\bigl(\delta +\tau ^{2}+h^{4}\bigr). \end{aligned}$$

Assume that \(\tau \le Ch^{2}\). Then, using Lemma 4.3,

$$\begin{aligned} \bigl\Vert U^{(1)} \bigr\Vert _{\infty }\le \bigl\Vert u^{(1)} \bigr\Vert _{\infty }+ \bigl\Vert e^{(1)} \bigr\Vert _{\infty } \le M_{0}+h^{-\frac{1}{2}} \bigl\Vert e^{(1)} \bigr\Vert \le M_{0}+Ch^{\frac{7}{2}}. \end{aligned}$$

When \(0< h\le h_{0}:=C^{-\frac{2}{7}}\), it holds that

$$\begin{aligned} \bigl\Vert U^{(1)} \bigr\Vert _{\infty }\le M_{0}+1. \end{aligned}$$

It follows from (4.20) and (4.22) that (4.16) is valid at the intermediate step \(t=\delta \). Now we prove that (4.16) is valid for \(n=1\). Subtracting (3.5b) from (4.12b) yields

$$\begin{aligned} i\delta _{t}e_{j}^{\frac{1}{2}}- \Delta _{h}^{\alpha }e_{j}^{\frac{1}{2}}+ \frac{ \beta }{2} \bigl(3 \bigl\vert U_{j}^{(1)} \bigr\vert ^{2}- \bigl\vert U_{j}^{0} \bigr\vert ^{2} \bigr)e_{j}^{\frac{1}{2}}=G_{j}^{\delta }+R_{j}^{\frac{1}{2}}, \end{aligned}$$


$$\begin{aligned} G_{j}^{\delta } &=\frac{\beta }{2} \bigl[ \bigl(3 \bigl\vert U_{j}^{(1)} \bigr\vert ^{2}- \bigl\vert U _{j}^{0} \bigr\vert ^{2} \bigr)- \bigl(3 \bigl\vert U_{j}^{(1)} \bigr\vert ^{2}- \bigl\vert u_{j}^{0} \bigr\vert ^{2} \bigr) \bigr]u _{j}^{\frac{1}{2}} \\ &=\frac{\beta }{2} \bigl[3 \bigl( \bigl\vert U_{j}^{(1)} \bigr\vert ^{2}- \bigl\vert U_{j}^{(1)} \bigr\vert ^{2} \bigr)- \bigl( \bigl\vert U_{j}^{0} \bigr\vert ^{2}- \bigl\vert u_{j}^{0} \bigr\vert ^{2} \bigr) \bigr]u_{j}^{ \frac{1}{2}}. \end{aligned}$$

Noticing (3.3) as well as (2.3), we have

$$\begin{aligned} \bigl\vert G_{j}^{\delta } \bigr\vert \le 3 \vert \beta \vert M_{0}(M_{1}+M_{0}) \bigl\vert e_{j}^{(1)} \bigr\vert , \quad 1 \le j\le M-1. \end{aligned}$$

Set \(C_{M_{0}}=12\beta ^{2}M_{0}^{2}(M_{1}+M_{0})^{2}\), then

$$\begin{aligned} \bigl\Vert G^{\delta } \bigr\Vert ^{2} \le C_{M_{0}} \bigl\Vert e^{(1)} \bigr\Vert ^{2}. \end{aligned}$$

Computing the discrete inner product of (4.23) with \(e^{ \frac{1}{2}}\) and taking the imaginary part, using the triangle inequalities and Cauchy inequalities, noticing (4.4), (4.13), and (4.25), we obtain

$$\begin{aligned} \bigl\Vert e^{1} \bigr\Vert ^{2}- \bigl\Vert e^{0} \bigr\Vert ^{2}&=\tau {\operatorname{Im}} \bigl(G^{\delta }+R^{ \frac{1}{2}},e^{1}\bigr) \\ &\le\tau \bigl( \bigl\Vert e^{1} \bigr\Vert ^{2}+ \bigl\Vert G^{\delta } \bigr\Vert ^{2}+ \bigl\Vert R^{\frac{1}{2}} \bigr\Vert ^{2} \bigr) \\ &\le\tau \bigl( \bigl\Vert e^{1} \bigr\Vert ^{2}+C_{M_{0}} \bigl\Vert e^{(1)} \bigr\Vert ^{2}+(b-a) \bigl(C _{R}\bigl(\tau ^{2}+h^{4} \bigr)\bigr)^{2} \bigr). \end{aligned}$$

When \(\tau \le \frac{1}{2}\), it holds that

$$\begin{aligned} \bigl\Vert e^{1} \bigr\Vert ^{2}\le C\bigl(\bigl( \delta +\tau ^{2}+h^{4}\bigr)^{2}+(b-a) \bigl(C_{R}\bigl(\tau ^{2}+h ^{4}\bigr) \bigr)^{2}\bigr), \end{aligned}$$

where C is a constant. Thus it holds that

$$\begin{aligned} \bigl\Vert e^{1} \bigr\Vert \le C\bigl(\delta +\tau ^{2}+h^{4}\bigr). \end{aligned}$$

Again under the assumption \(\tau \le Ch^{2}\) as well as \(\delta \le \tau ^{2}\), combining the above inequality with (4.7) gives

$$\begin{aligned} \bigl\Vert U^{1} \bigr\Vert _{\infty }\le \bigl\Vert u^{1} \bigr\Vert _{\infty }+ \bigl\Vert e^{1} \bigr\Vert _{\infty }\le M _{0}+h^{-\frac{1}{2}} \bigl\Vert e^{1} \bigr\Vert \le M_{0}+Ch^{\frac{7}{2}}, \end{aligned}$$

and consequently, when \(0< h\le h_{0}\), we have

$$\begin{aligned} \bigl\Vert U^{1} \bigr\Vert _{\infty }\le M_{0}+1. \end{aligned}$$

Now assume that (4.16) is valid for all \(0\le n\le m-1 \le N-1\); we then need to show that it is still valid for \(n=m\). Subtracting (3.2) from (4.11) yields

$$\begin{aligned} i\delta _{t}e_{j}^{n+\frac{1}{2}}- \Delta _{h}^{\alpha }e_{j}^{n+ \frac{1}{2}}+ \frac{\beta }{2} \bigl(3 \bigl\vert U_{j}^{n} \bigr\vert ^{2}- \bigl\vert U_{j}^{n-1} \bigr\vert ^{2} \bigr)e_{j}^{n+\frac{1}{2}}=G_{j}^{n}+R _{j}^{n+\frac{1}{2}}, \end{aligned}$$


$$\begin{aligned} G_{j}^{n} &=\frac{\beta }{2} \bigl[ \bigl(3 \bigl\vert U_{j}^{n} \bigr\vert ^{2}- \bigl\vert U_{j}^{n-1} \bigr\vert ^{2} \bigr)- \bigl(3 \bigl\vert u_{j}^{n} \bigr\vert ^{2}- \bigl\vert u_{j}^{n-1} \bigr\vert ^{2} \bigr) \bigr]u_{j}^{n+ \frac{1}{2}} \\ &=\frac{\beta }{2} \bigl[3 \bigl( \bigl\vert U_{j}^{n} \bigr\vert ^{2}- \bigl\vert u_{j}^{n} \bigr\vert ^{2} \bigr)- \bigl( \bigl\vert U_{j}^{n-1} \bigr\vert ^{2}- \bigl\vert u_{j}^{n-1} \bigr\vert ^{2} \bigr) \bigr]u_{j}^{n+ \frac{1}{2}}. \end{aligned}$$

Since (4.16) is valid for \(n\le m-1\), we have

$$\begin{aligned} \bigl\vert G_{j}^{n} \bigr\vert \le \vert \beta \vert M_{0}(M_{1}+M_{0}) \bigl(3 \bigl\vert e_{j}^{n} \bigr\vert + \bigl\vert e_{j}^{n-1} \bigr\vert \bigr),\quad 1\le j\le M-1,1\le n\le m-1, \end{aligned}$$

which implies

$$\begin{aligned} \bigl\Vert G^{n} \bigr\Vert ^{2} \le C_{M_{0}}\bigl( \bigl\Vert e^{n} \bigr\Vert ^{2}+ \bigl\Vert e^{n-1} \bigr\Vert ^{2} \bigr). \end{aligned}$$

Computing the discrete inner product of (4.28) with \(e^{n+\frac{1}{2}}\) and taking the imaginary part, using the triangle inequalities and Cauchy inequalities, noticing (4.4), (4.13), and (4.31), we have, for \(1\le n\le m-1\),

$$\begin{aligned} & \bigl\Vert e^{n+1} \bigr\Vert ^{2}- \bigl\Vert e^{n} \bigr\Vert ^{2}\\ &\quad=\tau {\operatorname{Im}} \bigl(G^{n}+R^{n+ \frac{1}{2}},e^{n+1}+e^{n} \bigr) \\ &\quad\le\tau \bigl( \bigl\Vert e^{n+1} \bigr\Vert ^{2}+ \bigl\Vert e^{n} \bigr\Vert ^{2}+ \bigl\Vert G^{n} \bigr\Vert ^{2}+ \bigl\Vert R^{n+ \frac{1}{2}} \bigr\Vert ^{2} \bigr) \\ &\quad\le\tau \bigl( \bigl\Vert e^{n+1} \bigr\Vert ^{2}+ \bigl\Vert e^{n} \bigr\Vert ^{2}+C_{M_{0}} \bigl( \bigl\Vert e^{n} \bigr\Vert ^{2}+ \bigl\Vert e^{n-1} \bigr\Vert ^{2}\bigr)+(b-a) \bigl(C_{R}\bigl(\tau ^{2}+h^{4}\bigr) \bigr)^{2} \bigr). \end{aligned}$$

When \(\tau \le \frac{1}{2}\), we obtain

$$\begin{aligned} \bigl\Vert e^{n+1} \bigr\Vert ^{2}\le{}&\bigl(2+2(1+C_{M_{0}}) \tau \bigr) \bigl\Vert e^{n} \bigr\Vert ^{2}+2C_{M_{0}} \tau \bigl\Vert e^{n-1} \bigr\Vert ^{2} \\ &{}+2\tau (b-a) \bigl(C_{R}\bigl(\tau ^{2}+h^{4} \bigr)\bigr)^{2}. \end{aligned}$$

Using the above inequality, noticing (4.16), we get

$$\begin{aligned} \bigl\Vert e^{m} \bigr\Vert ^{2}&\le \bigl(2+2(1+C_{M_{0}})\tau \bigr) \bigl\Vert e^{m-1} \bigr\Vert ^{2}+2C_{M_{0}} \tau \bigl\Vert e^{m-2} \bigr\Vert ^{2}+2\tau (b-a) \bigl(C_{R}\bigl(\tau ^{2}+h^{4}\bigr)\bigr)^{2} \\ &\le C\bigl(\delta +\tau ^{2}+h^{4} \bigr)^{2} \bigl[2+2(1+C_{M_{0}})\tau +2C _{M_{0}} \tau \bigr]+2\tau (b-a) \bigl(C_{R}\bigl(\tau ^{2}+h^{4} \bigr)\bigr)^{2} \\ &\le C\bigl(\delta +\tau ^{2}+h^{4} \bigr)^{2} \bigl[2+2\tau +4C_{M_{0}}\tau +2 \tau (b-a) \bigr]. \end{aligned}$$

Let \(\tau _{0}:=\frac{1}{{2+4C_{M_{0}}+2(b-a)}}\), and choose τ small enough such that \(\tau \le \tau _{0}\), then there is a constant C such that

$$\begin{aligned} \bigl\Vert e^{m} \bigr\Vert ^{2}\le C\bigl(\delta + \tau ^{2}+h^{4}\bigr)^{2}, \end{aligned}$$

which immediately implies

$$\begin{aligned} \bigl\Vert e^{m} \bigr\Vert \le C\bigl(\delta +\tau ^{2}+h^{4}\bigr). \end{aligned}$$

Again under the assumption \(\tau \le Ch^{2}\) as well as \(\delta \le \tau ^{2}\), combining the above inequality with (4.7) gives

$$\begin{aligned} \bigl\Vert U^{m} \bigr\Vert _{\infty }\le \bigl\Vert u^{m} \bigr\Vert _{\infty }+ \bigl\Vert e^{m} \bigr\Vert _{\infty }\le M _{0}+h^{-\frac{1}{2}} \bigl\Vert e^{m} \bigr\Vert \le M_{0}+Ch^{\frac{7}{2}}, \end{aligned}$$

and consequently, when \(0< h\le h_{0}\), we have

$$\begin{aligned} \bigl\Vert U^{m} \bigr\Vert _{\infty }\le M_{0}+1. \end{aligned}$$

This together with (3.5a)–(3.5b) implies (4.16) for \(n=m\). By taking \(\delta =\tau ^{2}\), it holds that \(\|e^{n}\|\le C(\tau ^{2}+h^{4})\), i.e., \(\|u^{n}-U^{n}\|\le C(\tau ^{2}+h^{4})\). Thus the proof is completed by induction. □

Mass conservation

In this section, we demonstrate that the discrete solution preserves the mass conservation, which further ensures the stability of the difference scheme proposed.

Theorem 5.1

Scheme (3.2)–(3.5b) preserves the mass in the following sense:

$$\begin{aligned} Q^{n}=Q^{0},\quad 0\le n\le N, \end{aligned}$$


$$\begin{aligned} Q^{n}:= \bigl\Vert U^{n} \bigr\Vert ^{2} \end{aligned}$$

is the mass in the discrete sense.


Computing the discrete inner product of (3.2) with \(U^{n+ \frac{1}{2}}\), then taking the imaginary part, we obtain

$$\begin{aligned} \bigl\Vert U^{n+1} \bigr\Vert ^{2}= \bigl\Vert U^{n} \bigr\Vert ^{2},\quad 0\le n\le N-1, \end{aligned}$$

where (4.4) is used. This immediately implies (5.1). □

Remark 5.2

It follows from Theorem 5.1 that the numerical solution of (3.2)–(3.4) is long-time bounded, i.e., there exists some constant \(C>0\) such that

$$\begin{aligned} \bigl\Vert U^{n} \bigr\Vert \le C,\quad 0\le n\le N. \end{aligned}$$

Hence, scheme (3.2)–(3.5b) is unconditionally \(L_{2}\)-stable.
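Theorem 5.1 can be observed in a direct computation. The following self-contained Python sketch assembles the scheme from the ingredients above and checks that the discrete mass stays constant up to linear-solver roundoff; all parameter values are arbitrary test choices, not those of the article's experiments:

```python
import numpy as np

alpha, beta = 1.5, 1.0
a, b, M, T, N = -10.0, 10.0, 64, 1.0, 20
h, tau = (b - a) / M, T / N
delta = tau ** 2                           # predictor step size, delta <= tau^2

# coefficients q_k (2.7) and weights lambda_s (Remark 2.2)
nq = M + 4
w = np.empty(nq); w[0] = 1.0
for i in range(1, nq):
    w[i] = w[i - 1] * (i - 1 - alpha) / i
q = (1.5 ** alpha) * np.array([sum(3.0 ** (-i) * w[i] * w[k - i]
                                   for i in range(k + 1)) for k in range(nq)])
ps = (1, -1, 2, -2)
lam = (5 * alpha / 36 + 2 / 3, -13 * alpha / 36 + 2 / 3,
       -alpha / 72 - 1 / 6, 17 * alpha / 72 - 1 / 6)

# discrete fractional Laplacian K (Lemma 2.1 and definition (3.1))
m = M - 1
S = np.zeros((m, m))
for l, p in zip(lam, ps):
    off = p + np.subtract.outer(np.arange(m), np.arange(m))       # p + j - k
    Ap = np.where(off >= 0, q[np.clip(off, 0, nq - 1)], 0.0) / h ** alpha
    S += l * (Ap + Ap.T)                   # A_{-,p} = A_{+,p}^T
K = S / (2.0 * np.cos(alpha * np.pi / 2))  # real symmetric matrix

x = a + h * np.arange(1, M)
U0 = np.exp(-x ** 2) * np.exp(1j * x)      # smooth data, essentially 0 at a and b
I = np.eye(m)

def cn_step(Un, g):
    # one linearized Crank-Nicolson step of (3.2); g = (3|U^n|^2 - |U^{n-1}|^2)/2
    A1 = 1j * I - (tau / 2) * K + (beta * tau / 2) * np.diag(g)
    A2 = 1j * I + (tau / 2) * K - (beta * tau / 2) * np.diag(g)
    return np.linalg.solve(A1, A2 @ Un)

# predictor (3.5a), corrector (3.5b), then time stepping with (3.2)
U1p = np.linalg.solve(1j * I - delta * K + delta * beta * np.diag(np.abs(U0) ** 2),
                      1j * U0)
U = [U0, cn_step(U0, (3 * np.abs(U1p) ** 2 - np.abs(U0) ** 2) / 2)]
for _ in range(1, N):
    U.append(cn_step(U[-1], (3 * np.abs(U[-1]) ** 2 - np.abs(U[-2]) ** 2) / 2))

mass = np.array([h * np.sum(np.abs(u) ** 2) for u in U])
```

Each step is a Cayley transform \((iI-H)^{-1}(iI+H)\) with real symmetric H, hence exactly unitary, which is the matrix-level reflection of the proof above.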

Fast stabilized bi-conjugate gradient algorithm (FSBiCG)

In this section, we develop a fast algorithm to solve (3.2)–(3.5b) numerically. For convenience, we rewrite (3.2)–(3.5b) in the matrix form

$$\begin{aligned} & (iI-\delta K+\delta \beta D_{0} )U^{(1)}=iU^{0}, \end{aligned}$$
$$\begin{aligned} & \biggl(iI-\frac{\tau }{2}K+\frac{\beta \tau }{4}D_{1} \biggr)U^{1}= \biggl(i I+\frac{\tau }{2}K-\frac{\beta \tau }{4}D_{1} \biggr)U^{0}, \end{aligned}$$
$$\begin{aligned} & \biggl(iI-\frac{\tau }{2}K+\frac{\beta \tau }{4}D_{2} \biggr)U^{n+1}= \biggl(iI+\frac{\tau }{2}K-\frac{\beta \tau }{4}D_{2} \biggr)U^{n}, \quad n\ge 1, \end{aligned}$$

where I is the unit matrix of size \((M-1)\times (M-1)\), K is the discrete matrix of the fractional Laplace operator defined in Lemma 4.1, and \(D_{\ell }\) \((\ell =0,1,2)\) are diagonal matrices with diagonal elements \(D_{\ell }(j,j)\), \(j=1,2,\ldots,M-1\), given by

$$ D_{0}(j,j)= \bigl\vert U^{0}_{j} \bigr\vert ^{2}, \qquad D_{1}(j,j)=3 \bigl\vert U^{(1)}_{j} \bigr\vert ^{2}- \bigl\vert U ^{0}_{j} \bigr\vert ^{2}, \qquad D_{2}(j,j)=3 \bigl\vert U^{n}_{j} \bigr\vert ^{2}- \bigl\vert U^{n-1} _{j} \bigr\vert ^{2}. $$

It is easy to see that K is a non-sparse Toeplitz matrix, which requires \(\mathcal{O}(M^{2})\) operations and \(\mathcal{O}(M^{2})\) storage when linear systems (6.1a)–(6.1c) are solved by CG-like iterative methods.

The aim of this section is to reduce the storage and the computational cost to \(\mathcal{O}(M)\) and \(\mathcal{O}(M\operatorname{log}M)\), respectively. To this end, we combine the stabilized bi-conjugate gradient (SBiCG) algorithm with the Toeplitz structure of the coefficient matrices to construct the fast stabilized bi-conjugate gradient (FSBiCG) algorithm [28]. This requires the following three steps:

The decomposition of a circulant matrix. It is known that a circulant matrix \(C_{M-1}\) can be diagonalized as follows [29, 30]:

$$ C_{M-1}=F_{M-1}^{-1}\operatorname{diag}(F_{M-1} \mathbf{c})F_{M-1}, $$

where c is the first column vector of \(C_{M-1}\), and \(F_{M-1}\) and \(F_{M-1}^{-1}\) are the discrete Fourier transform matrix and its inverse, with entries given by

$$\begin{aligned} &F_{M-1}(j,l)=\exp \biggl(-\frac{2\pi ijl}{M-1} \biggr)\quad \text{and} \quad F _{M-1}^{-1}(j,l)=\frac{1}{M-1}\exp \biggl( \frac{2\pi ijl}{M-1} \biggr), \\ &\quad 0\le j,l\le M-2. \end{aligned}$$

It has been shown in [28] that this decomposition of the circulant matrix \(C_{M-1}\) can be carried out at a computational cost of \(\mathcal{O}(M\log M)\).
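As a quick numerical illustration of this decomposition (independent of the paper's code), one can verify with NumPy's FFT that a circulant matrix acts on a vector through pointwise multiplication in Fourier space:

```python
import numpy as np
from scipy.linalg import circulant

# A circulant matrix C is diagonalized by the DFT, so C @ x equals
# ifft(fft(c) * fft(x)), where c is the first column of C.
rng = np.random.default_rng(0)
n = 7                                   # plays the role of M - 1
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

C = circulant(c)                        # dense circulant built from its first column
direct = C @ x                          # O(n^2) matrix-vector product
via_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x))   # O(n log n)

assert np.allclose(direct, via_fft)
```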

Computations for circulant matrix-vector multiplication. According to [30], a circulant matrix-vector product can be computed via the FFT and inverse FFT at a computational cost of \(\mathcal{O}(M\log M)\) with \(\mathcal{O}(M)\) memory.

Computations for Toeplitz matrix-vector multiplication. Notice that an \((M-1)\times (M-1)\) Toeplitz matrix \(T_{M-1}\) is constant along each descending diagonal from left to right, with entries \(t_{i}\), \(i=0,\pm 1,\ldots,\pm (M-2)\), and hence needs only \(\mathcal{O}(M)\) storage. On the other hand, \(T_{M-1}\) can be embedded into a \(2(M-1)\times 2(M-1)\) circulant matrix \(C_{2M-2}\). Hence, for a vector \(\mathbf{x}\in \mathbf{R}^{M-1}\), the product \(T_{M-1}\mathbf{x}\) can be read off from \(C_{2M-2}(\mathbf{x},\mathbf{0})^{T}\) via

$$\begin{aligned} C_{2M-2}\mathbf{X}= \begin{pmatrix} T_{M-1}&D_{M-1} \\ D_{M-1}&T_{M-1} \end{pmatrix} \begin{pmatrix} \mathbf{x} \\ \mathbf{0} \end{pmatrix}= \begin{pmatrix} T_{M-1}\mathbf{x} \\ D_{M-1}\mathbf{x} \end{pmatrix} \end{aligned}$$

at a computational cost of \(\mathcal{O}(M\log M)\). Here \(D_{M-1}\) is the Toeplitz matrix whose descending diagonals from left to right are \(q_{i}\), \(i=1,\ldots,M-2,0,2-M,\ldots,-1\).
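The embedding step above can be sketched as follows; the helper `toeplitz_matvec` and the test matrices are ours, not the paper's:

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(col, row, x):
    """Multiply the Toeplitz matrix with first column `col` and first
    row `row` by `x` in O(n log n) via circulant embedding."""
    n = len(x)
    # first column of the 2n x 2n circulant that embeds T:
    # [t_0, ..., t_{n-1}, 0, t_{-(n-1)}, ..., t_{-1}]
    c = np.concatenate([col, [0.0], row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n]                        # top half recovers T @ x

rng = np.random.default_rng(1)
n = 6
col = rng.standard_normal(n)
row = col.copy()                        # symmetric example, as for the matrix K
x = rng.standard_normal(n)
T = toeplitz(col, row)                  # dense reference, for checking only
assert np.allclose(T @ x, toeplitz_matvec(col, row, x))
```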

Collecting the deductions above, we modify the traditional SBiCG to obtain our fast SBiCG algorithm (FSBiCG), which is presented step by step in Algorithm 1; the conclusion concerning the efficiency of the FSBiCG is given in Theorem 6.1.

Algorithm 1

The FSBiCG method for solving \(Ax=b\)

Theorem 6.1

Compared with the SBiCG method, the FSBiCG algorithm proposed here reduces the computational cost and storage per iteration from \(\mathcal{O}(M^{2})\) and \(\mathcal{O}(M^{2})\) to \(\mathcal{O}(M\log M)\) and \(\mathcal{O}(M)\), respectively.
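A hedged sketch of the FSBiCG idea in Python (SciPy's `bicgstab` stands in for the paper's SBiCG loop, and all names are ours): the Toeplitz coefficient matrix is never formed; only an \(\mathcal{O}(M\log M)\) matvec is exposed to the Krylov iteration.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, bicgstab
from scipy.linalg import toeplitz

def make_fast_op(col):
    """LinearOperator for a symmetric Toeplitz matrix given its first column."""
    n = len(col)
    c = np.concatenate([col, [0.0], col[:0:-1]])   # circulant embedding
    eig = np.fft.fft(c)                            # eigenvalues, precomputed once
    def matvec(x):
        y = np.fft.ifft(eig * np.fft.fft(np.concatenate([x, np.zeros(n)])))
        return y[:n]
    return LinearOperator((n, n), matvec=matvec, dtype=complex)

n = 64
col = np.zeros(n); col[0] = 2.0; col[1] = -1.0     # toy 1D Laplacian stencil
A = make_fast_op(col)                              # O(n) storage, fast matvec
b = np.ones(n, dtype=complex)
x, info = bicgstab(A, b)                           # Krylov solve, no dense matrix
assert info == 0                                   # converged
assert np.linalg.norm(toeplitz(col) @ x - b) < 1e-3
```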

Numerical experiment

In this section, two numerical experiments are performed. In the first, we verify the convergence rates and computational cost, as well as the efficiency of the FSBiCG compared with other algorithms. In the second, we test the ability of scheme (3.2)–(3.5b) to preserve the physical characteristic of mass conservation under different initial values. These experiments are implemented in Matlab on a personal computer with an Intel(R) Core(TM) i5-4590 CPU at 3.3 GHz and 4 GB RAM.

Tests on the efficiency of the finite difference procedure and FSBiCG

Example 7.1

Let \((a,b)=(0,1)\), \(T=1\), \(\beta =1\). The analytic solution is prescribed as

$$\begin{aligned} u(x,t)=x^{5}(1-x)^{5}e^{-t} \end{aligned}$$

subject to the initial condition

$$\begin{aligned} u(x,0)=x^{5}(1-x)^{5} \end{aligned}$$

and the right-hand source term \(f(x,t)\),

$$\begin{aligned} f(x,t) = {}&{-}\frac{ e^{-t}}{2\cos (\frac{\alpha }{2}\pi )} \biggl[\frac{\varGamma (6)}{\varGamma (6-\alpha )} \bigl(x^{5-\alpha }+(1-x)^{5- \alpha } \bigr) \\ &{}-5\frac{\varGamma (7)}{\varGamma (7-\alpha )} \bigl(x^{6-\alpha }+(1-x)^{6- \alpha } \bigr) +10\frac{\varGamma (8)}{\varGamma (8-\alpha )} \bigl(x^{7- \alpha }+(1-x)^{7-\alpha } \bigr) \\ &{}-10\frac{\varGamma (9)}{\varGamma (9-\alpha )} \bigl(x^{8-\alpha }+(1-x)^{8- \alpha } \bigr) +5\frac{\varGamma (10)}{\varGamma (10-\alpha )} \bigl(x ^{9-\alpha }+(1-x)^{9-\alpha } \bigr) \\ &{}-\frac{\varGamma (11)}{\varGamma (11-\alpha )} \bigl(x^{10-\alpha }+(1-x)^{10-\alpha } \bigr) \biggr]-ix^{5}(1-x)^{5}e^{-t} + x^{15}(1-x)^{15}e ^{-3t}. \end{aligned}$$

Here \(\alpha \in (1,2)\).
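For reproducibility, the bracketed Γ-ratio structure of \(f\) can be transcribed directly; the following sketch (the function name `source_f` is ours) evaluates it with `scipy.special.gamma`:

```python
import numpy as np
from scipy.special import gamma

# Direct transcription of the source term f(x, t) above, for alpha in (1, 2).
# The coefficients 1, -5, 10, -10, 5, -1 come from expanding (1 - x)^5 in the
# manufactured solution u = x^5 (1 - x)^5 e^{-t}.
def source_f(x, t, alpha):
    coef = [1, -5, 10, -10, 5, -1]
    s = np.zeros_like(x, dtype=float)
    for k, c in enumerate(coef):
        p = 5 + k                       # powers 5..10 appearing in the bracket
        s += c * gamma(p + 1) / gamma(p + 1 - alpha) * (
            x ** (p - alpha) + (1 - x) ** (p - alpha))
    frac = -np.exp(-t) / (2 * np.cos(alpha * np.pi / 2)) * s
    return (frac
            - 1j * x ** 5 * (1 - x) ** 5 * np.exp(-t)
            + x ** 15 * (1 - x) ** 15 * np.exp(-3 * t))
```

Note that every bracketed term is invariant under \(x\mapsto 1-x\), so \(f(x,t)=f(1-x,t)\), which provides a simple sanity check of the transcription.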

Remark 7.2

Considering that the main aim of this numerical example is to verify the computational efficiency of the FSBiCG, and that exact analytical solutions of (2.1) are hardly available, we construct an exact solution by attaching a nonzero source term to the original system, without weakening the computational difficulties caused by the non-locality of the fractional operators. This equation may be viewed as a transformed version of the corresponding fractional Schrödinger equation (2.1) with non-homogeneous boundary conditions.

In this example, we compute the convergence rates in space and time at \(t=1\) and compare the computation times of different methods. Figure 1 depicts the initial condition, the numerical solution, and the exact solution with \(\alpha =1.9\), \(h=\frac{1}{2^{6}}\), \(\tau =h^{2}\); the curve of the numerical solution agrees well with the exact one. Let \(u_{h}\) be the numerical solution. Tables 1 and 2 report the error \(\|u-u_{h}\|\) as well as the spatial and temporal convergence rates in the \(L_{2}\) and \(L_{\infty }\) sense with the time increment \(\tau =h^{2}\) for \(\alpha =1.3\) and \(\alpha =1.9\), respectively. The numerical results in Tables 1 and 2 show that the spatial and temporal convergence rates are 4 and 2, respectively, in accord with the theoretical expectations of Theorem 4.5.
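The "rate" columns of such tables are obtained in the usual way: with \(\tau =h^{2}\), halving \(h\) and taking \(\log _{2}\) of the ratio of successive errors. A small sketch with hypothetical numbers (not the paper's data):

```python
import numpy as np

# Observed order of accuracy from errors on successively halved meshes:
# rate_k = log2(e_{h} / e_{h/2}); values near 4 indicate fourth order in space.
def observed_orders(errors):
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

errors = [2.1e-4, 1.32e-5, 8.3e-7]      # hypothetical errors as h is halved
print(observed_orders(errors))           # entries close to 4
```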

Figure 1

The initial value, numerical and exact solution at \(t=1\)

Table 1 Spatial convergence rates in \({L_{2}}\) and temporal rates
Table 2 Spatial convergence rates in \({L_{\infty }}\) and temporal rates

Table 3 tests the efficiency of the FSBiCG algorithm. We measure the time for the complete simulation up to \(t=1\) with the time increment \(\tau =h^{2}\) for \(\alpha =1.3\) and \(\alpha =1.9\). As the space step h decreases, the CPU time consumed by the FSBiCG is much less than that of Gaussian elimination and the SBiCG, while its iteration counts coincide with the SBiCG's and do not scale with M. For example, for \(h=\frac{1}{2^{7}}\) and \(\alpha =1.3\), the CPU time of the FSBiCG is \(1\mbox{ min }54\mbox{ s}\), compared to \(23\mbox{ min }47\mbox{ s}\) for the SBiCG and \(54\mbox{ min }5\mbox{ s}\) for Gaussian elimination.

Table 3 The CPU time consumed by Gaussian elimination, the SBiCG method, and the FSBiCG method

Tests on conservation of mass

Example 7.3

We take \(\beta =1\) and \(\alpha =1.5\), 1.7, and 1.9, respectively, in (2.1), select the step lengths \(h=0.25\), \(\tau =h^{2}\), and choose the interval \([-20,20]\). The initial values are given in the following two categories to display the suitability of the numerical scheme for different systems. In the first category, the initial condition is chosen to be

$$\begin{aligned} u(x,0)=\operatorname{sech} (x)\cdot \exp (2ix),\quad x\in [-20,20], \end{aligned}$$
(7.3)

corresponding to a deterministic initial value; in the second category, the initial condition is selected as a random sequence conforming to a Poisson distribution, corresponding to an uncertain initial value.

The numerical results for the two initial values are presented in Figs. 2 and 3, respectively. Figure 2 displays the mass subject to initial condition (7.3). To simulate the randomness of real quantum mechanics, we select the initial conditions as random sequences obeying the Poisson distribution and plot their mass in Fig. 3. In both figures the mass curves are lines parallel to the t-axis, so we conclude that the mass is conserved very well.
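The quantity plotted in these figures is the discrete mass \(Q^{n}=h\sum_{j}|U_{j}^{n}|^{2}\). A minimal sketch of how it is evaluated for both categories of initial data (variable names are ours; for the first category we take the hyperbolic secant soliton profile):

```python
import numpy as np

# Discrete mass Q^n = h * sum_j |U_j^n|^2; under a mass-conservative
# scheme this value stays at its initial level for all n.
def mass(U, h):
    return h * np.sum(np.abs(U) ** 2)

h = 0.25
x = np.arange(-20, 20 + h, h)
U0_certain = 1.0 / np.cosh(x) * np.exp(2j * x)     # sech(x) e^{2ix}
rng = np.random.default_rng(7)
U0_random = rng.poisson(lam=1.0, size=x.size).astype(complex)  # Poisson data

print(mass(U0_certain, h), mass(U0_random, h))     # initial masses
```

Since \(\int \operatorname{sech}^{2}(x)\,dx=2\) and \(|e^{2ix}|=1\), the first printed value is close to 2.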

Figure 2

Evolution of mass \(Q^{n}\) with initial values given in (7.3)

Figure 3

Evolution of mass \(Q^{n}\) with initial values conforming to the Poisson distribution

In addition, it deserves noting that the mass \(Q^{n}\) is an intrinsic property of the material system. Once the initial and boundary conditions are given, the mass is uniquely determined and does not change with the parameter α, as proven in Lemma 5.1. As a result, the mass lines in Figs. 2 and 3 coincide.

Concluding remarks

We have established a well-defined fourth-order difference scheme for the nonlinear fractional Schrödinger equation to approximate the unknown function u. The highlights of our paper are at least the following: (1) it improves the convergence rate over existing methods by developing a fourth-order difference scheme for the fractional Riesz derivative; (2) it proves the solvability of the scheme and the mass balance property inherited by the difference solution; (3) a fast algorithm is designed to reduce the storage to \(\mathcal{O}(M)\) and the computational cost to \(\mathcal{O}(M\log M)\).

We remark that the problem studied in this manuscript can also be solved by other numerical methodologies, such as those discussed in [31, 32], which deserves further investigation.


  1. Kumar, D., Singh, J., Al Qurashi, M., et al.: A new fractional SIRS-SI malaria disease model with application of vaccines, antimalarial drugs, and spraying. Adv. Differ. Equ. 2019(1), 278 (2019)

  2. Baleanu, D.: Fractional Hamiltonian analysis of irregular systems. Signal Process. 86(10), 2632–2636 (2006)

  3. Kumar, D., et al.: A new fractional exothermic reactions model having constant heat source in porous media with power, exponential and Mittag-Leffler laws. Int. J. Heat Mass Transf. 138, 1222–1227 (2019)

  4. Baleanu, D., Asad, J.H., Jajarmi, A.: The fractional model of spring pendulum: new features within different kernels. Proc. Rom. Acad., Ser. A: Math. Phys. Tech. Sci. Inf. Sci. 19(3), 447–454 (2018)

  5. Baleanu, D., Rezapour, S., Mohammadi, H.: Some existence results on nonlinear fractional differential equations. Philos. Trans. R. Soc., Math. Phys. Eng. Sci. 371(1990), 20120144 (2013)

  6. Agarwal, R.P., Baleanu, D., Hedayati, V., Rezapour, S.: Two fractional derivative inclusion problems via integral boundary condition. Appl. Math. Comput. 257, 205–212 (2015)

  7. Baleanu, D., Asad, J.H., Jajarmi, A.: New aspects of the motion of a particle in a circular cavity. Proc. Rom. Acad., Ser. A: Math. Phys. Tech. Sci. Inf. Sci. 19, 361–367 (2018)

  8. Mohammadi, F., Moradi, L., Baleanu, D., et al.: A hybrid functions numerical scheme for fractional optimal control problems: application to nonanalytic dynamic systems. J. Vib. Control 24(21), 5030–5043 (2018)

  9. Baleanu, D., Mousalou, A., Rezapour, S.: A new method for investigating approximate solutions of some fractional integro-differential equations involving the Caputo–Fabrizio derivative. Adv. Differ. Equ. 2017(1), 51 (2017)

  10. Baleanu, D., Mousalou, A., Rezapour, S.: On the existence of solutions for some infinite coefficient-symmetric Caputo–Fabrizio fractional integro-differential equations. Bound. Value Probl. 2017(1), 145 (2017)

  11. Baleanu, D., Mousalou, A., Rezapour, S.: The extended fractional Caputo–Fabrizio derivative of order \(0\leq \sigma < 1\) on \(C_{\mathbb{R}}[0, 1]\) and the existence of solutions for two higher-order series-type differential equations. Adv. Differ. Equ. 2018(1), 255 (2018)

  12. Kojabad, E.A., Rezapour, S.: Approximate solutions of a sum-type fractional integro-differential equation by using Chebyshev and Legendre polynomials. Adv. Differ. Equ. 2017(1), 351 (2017)

  13. Aydogan, S.M., Baleanu, D., Mousalou, A., et al.: On approximate solutions for two higher-order Caputo–Fabrizio fractional integro-differential equations. Adv. Differ. Equ. 2017(1), 221 (2017)

  14. Aydogan, M.S., Baleanu, D., Mousalou, A., Rezapour, S.: On high order fractional integro-differential equations including the Caputo–Fabrizio derivative. Bound. Value Probl. 2018(1), 90 (2018)

  15. Kumar, D., Singh, J., Purohit, S.D., et al.: A hybrid analytical algorithm for nonlinear fractional wave-like equations. Math. Model. Nat. Phenom. 14(3), 304 (2019)

  16. Bhatter, S., Mathur, A., Kumar, D., Singh, J.: A new analysis of fractional Drinfeld–Sokolov–Wilson model with exponential memory. Phys. A, Stat. Mech. Appl. 122578 (2019)

  17. Laskin, N.: Fractional quantum mechanics. Phys. Rev. E, Stat. Phys. Plasmas Fluids Relat. Interdiscip. Topics 62(3), 3135 (2000)

  18. Feng, B.: Ground states for the fractional Schrödinger equation. Electron. J. Differ. Equ. 2013, 127 (2013)

  19. Hu, J., Jie, X., Hong, L.: The global solution for a class of systems of fractional nonlinear Schrödinger equations with periodic boundary condition. Comput. Math. Appl. 62(3), 1510–1521 (2011)

  20. Wang, D., Xiao, A., Yang, W.: Crank–Nicolson difference scheme for the coupled nonlinear Schrödinger equations with the Riesz space fractional derivative. J. Comput. Phys. 242(242), 670–681 (2013)

  21. Wang, D., Xiao, A., Wei, Y.: A linearly implicit conservative difference scheme for the space fractional coupled nonlinear Schrödinger equations. J. Comput. Phys. 272(3), 644–655 (2014)

  22. Wang, P., Huang, C.: An energy conservative difference scheme for the nonlinear fractional Schrödinger equations. J. Comput. Phys. 293(3), 238–251 (2015)

  23. Li, M., Huang, C., Wang, P.: Galerkin finite element method for nonlinear fractional Schrödinger equations. Numer. Algorithms 74(2), 1–27 (2016)

  24. Wang, P., Huang, C.: Split-step alternating direction implicit difference scheme for the fractional Schrödinger equation in two dimensions. Pergamon, Elmsford (2016)

  25. Guo, X., Li, Y., Wang, H.: A fourth-order scheme for space fractional diffusion equations. J. Comput. Phys. 373, 410–424 (2018)

  26. Shubin, M.A.: Pseudodifferential Operators and Spectral Theory. Springer, Berlin (2001)

  27. Xie, S.S., Li, G.X., Yi, S.: Compact finite difference schemes with high accuracy for one-dimensional nonlinear Schrödinger equation. Comput. Methods Appl. Mech. Eng. 198(9), 1052–1060 (2009)

  28. Wang, F., Chen, H., Wang, H.: Finite element simulation and efficient algorithm for fractional Cahn–Hilliard equation. J. Comput. Appl. Math. 356, 248–266 (2019)

  29. Davis, P.J.: Circulant Matrices. Am. Math. Soc., Providence (2013)

  30. Gray, R.M.: Toeplitz and Circulant Matrices: A Review (2005)

  31. Hajipour, M., Jajarmi, A., Baleanu, D.: On the accurate discretization of a highly nonlinear boundary value problem. Numer. Algorithms 79(3), 679–695 (2018)

  32. Hajipour, M., Jajarmi, A., Malek, A., et al.: Positivity-preserving sixth-order implicit finite difference weighted essentially non-oscillatory scheme for the nonlinear heat equation. Appl. Math. Comput. 325, 146–158 (2018)



This work is supported in part by the National Natural Science Foundation of China under Grant nos. 11471196, 10971254, 11471194.

Availability of data and materials

Not applicable.



Author information




All authors contributed equally to this work. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Huanzhen Chen.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Chang, Y., Chen, H. Fourth-order finite difference scheme and efficient algorithm for nonlinear fractional Schrödinger equations. Adv Differ Equ 2020, 4 (2020).



  • Fractional Schrödinger equation
  • Fourth-order difference scheme
  • Fast algorithm
  • Mass balance
  • Numerical analysis