A GPIU method for fractional diffusion equations
Advances in Difference Equations volume 2020, Article number: 398 (2020)
Abstract
The fractional diffusion equations can be discretized by applying the implicit finite difference scheme with the unconditionally stable shifted Grünwald formula. The resulting linear system then has a real Toeplitz structure when the two diffusion coefficients are nonnegative constants. Through a similarity transformation, the Toeplitz linear system can be converted to a generalized saddle point problem. We use a generalization of the parameterized inexact Uzawa (GPIU) method to solve this kind of saddle point problem and give a new algorithm based on the GPIU method. Numerical results show the effectiveness and accuracy of the new algorithm.
Introduction
The fractional differential operator is suitable for describing the memory, genetic, mechanical and electrical properties of various materials. Compared with the classical integer-order differential operator, it can more concisely and accurately describe biological, mechanical and physical processes with historical memory and spatially global correlation, such as anomalous diffusion of particles, the quantization problem of nonlocal field theory, fractional capacitance theory, universal voltage shunts, chaotic circuit analysis, semiconductor physics, dispersion in porous media, physical and engineering problems related to fractal dimensions, and non-Newtonian fluid mechanics.
The fractional diffusion equations can be abstracted from many practical problems, such as a random walk model describing the competition between subdiffusion and superdiffusion. Three different types of fractional diffusion equations can be derived according to the distributions of particle waiting times and jump lengths: when the mean waiting time of each step is infinite and the mean squared jump length is finite, the random walk model describes anomalous subdiffusion, and the time fractional diffusion equation is derived accordingly; when the mean waiting time is finite and the mean squared jump length is infinite, the random walk model describes superdiffusion, and the space fractional diffusion equation is derived accordingly; when the mean waiting time and the mean squared jump length are both infinite, the random walk model describes the competition between subdiffusion and superdiffusion, and the time-space fractional diffusion equation is derived accordingly. Using fractional convection-diffusion equations to simulate Lévy motion is very effective, as it can more accurately model solute transport with long-tail behavior. When studying Eulerian estimation of solute transport in porous media, the fractional-order Fick law can replace the traditional Fick law to obtain the fractional-order convection-diffusion equations. In non-Newtonian fluid mechanics, to obtain the analytical solution of a material model lying between the Hookean solid model and the Newtonian fluid model, a special function related to fractional calculus is needed. Early on, the problem of the broadband dynamic response of the crust induced by earthquakes, which is hardly affected by velocity, could be described by a fractional linear rheological solid model.
It is precisely because of these advantages of fractional derivatives, and the wide application of fractional diffusion equations (FDEs) (1) in groundwater contaminant transport [1, 2], turbulent flow [3], image processing [4], finance [5], physics [6] and other fields, that they have attracted extensive research attention.
Considerable numerical methods have been developed as the major ways for solving FDEs [7–12]. However, some salient features of fractional differential operators lead to the unconditional instability of standard discretizations [13]. In addition, most numerical methods for FDEs generate a full coefficient matrix, which requires a computational cost of \(O(N^{3})\) and storage of \(O(N^{2})\), where N is the number of grid points [14]. In contrast, second-order diffusion equations, which usually lead to sparse coefficient matrices, can be solved efficiently by fast iteration methods with computational complexity \(O(N)\).
To guarantee stability, Meerschaert and Tadjeran [13] proposed a shifted Grünwald discretization to approximate fractional diffusion equations, which has been proved to be unconditionally stable. Then, based on the Meerschaert–Tadjeran method, a special structure of the full coefficient matrices was presented in [14]. The method maintains a Toeplitz-like structure and reduces the storage requirement from \(O(N^{2})\) to \(O(N)\). Furthermore, the matrix-vector multiplication for a Toeplitz matrix can be carried out by the fast Fourier transform (FFT) in \(O(N\log N)\) operations [15]. As a result, fast numerical methods for solving fractional equations with the shifted Grünwald formula have been developed, including the conjugate gradient normal residual (CGNR) method [16, 17], the preconditioned CGNR (PCGNR) method with circulant preconditioners [17–19] and the multigrid method [20]. If the diffusion coefficients are constant, the full coefficient matrix originating from the Meerschaert–Tadjeran method is non-Hermitian positive definite. Therefore, the Hermitian and skew-Hermitian splitting (HSS) iteration and the corresponding preconditioned HSS (PHSS) iteration [21] have also been developed to solve fractional equations.
Since the full coefficient matrix is also a real Toeplitz matrix, the FDE can be transformed into a generalized saddle point problem based on properties of real Toeplitz matrices [22]. Therefore, the generalized parameterized inexact Uzawa (GPIU) method [23, 24] is suitable for solving this new saddle point problem, although it differs somewhat from the traditional GPIU setting. The method must store three matrices of order \(N/2\); because these matrices have symmetric structures (which require less storage space), the storage remains below \(O(N^{2})\). Meanwhile, the computational complexity is \(O(N^{2})\), since the method needs to compute products of several matrices of order \(N/2\).
The framework of this paper is as follows. In Sect. 2, we introduce the background of the discretization for the FDEs. Then the real Toeplitz linear system is converted to a generalized saddle point problem by similarity transformation in Sect. 3. In Sect. 4, the GPIU method is considered to solve this kind of generalized saddle point problem and its convergence is analyzed. Besides, a GPIU algorithm is presented. In Sect. 5, numerical examples are performed to illustrate the effectiveness and power of our new method. Finally, conclusions are given in Sect. 6.
Description of the FDEs and their finite difference discretization
First, an initial-boundary value problem for the FDEs is given by
where \(\alpha \in (1, 2)\) is the order of the fractional derivatives, \(f(x,t)\) is the source term and the diffusion coefficient functions satisfy \(d_{ \pm } \ge 0\) with \(d_{ +} + d_{ -} \ne 0\). The left-sided and right-sided fractional derivatives \(\frac{\partial ^{\alpha } u(x,t)}{\partial _{ +} x^{\alpha }} \) and \(\frac{\partial ^{\alpha } u(x,t)}{\partial _{ -} x^{\alpha }} \) in (1) have the same definition as the Grünwald form [25]
where \(\lfloor x \rfloor \) denotes the floor of x and the Grünwald weights \(g_{k}^{(\alpha )}\) are the alternating fractional binomial coefficient given as
which can be evaluated by the following recursive form:
Besides, the coefficients \(g_{k}^{(\alpha )}\) satisfy the following properties [13, 14, 17, 21] when \(1 < \alpha < 2\):
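The recursion for the Grünwald weights is the standard one, \(g_{0}^{(\alpha )} = 1\), \(g_{k}^{(\alpha )} = (1 - \frac{\alpha + 1}{k})g_{k - 1}^{(\alpha )}\), and the properties above are easy to check numerically. A minimal sketch (the function name `grunwald_weights` is ours):

```python
import numpy as np

def grunwald_weights(alpha, n):
    """Grünwald weights g_k^(alpha) = (-1)^k * binom(alpha, k), computed
    by the standard recursion g_0 = 1, g_k = (1 - (alpha + 1)/k) g_{k-1}."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = (1.0 - (alpha + 1.0) / k) * g[k - 1]
    return g

# For 1 < alpha < 2: g_0 = 1, g_1 = -alpha < 0, g_2 > g_3 > ... > 0,
# and the weights sum to zero.
g = grunwald_weights(1.5, 10000)
```

For \(\alpha = 1.5\) this gives \(g_{1} = -1.5\) and \(g_{2} = 0.375\), matching the alternating binomial formula.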
Let N and M be positive integers; then \(\Delta x = \frac{x_{R} - x_{L}}{N + 1}\) and \(\Delta t = \frac{T}{M}\) are the sizes of the spatial grids and time steps, respectively. Additionally, the spatial and temporal partitions are defined as \(x_{i} = x_{L} + i\Delta x\) for \(i = 0, 1, \ldots , N + 1\) and \(t_{m} = m\Delta t\) for \(m = 0, 1, \ldots , M\). We also let \(u_{i}^{(m)} = u(x_{i},t_{m})\), \(d_{ \pm ,i}^{(m)} = d_{ \pm } (x_{i},t_{m})\) and \(f_{i}^{(m)} = f(x_{i},t_{m})\).
Consider the shifted Grünwald approximations [13, 17, 21]:
where \(g_{k}^{(\alpha )}\) is defined in (2), and the corresponding implicit finite difference scheme for (1) can be written as follows:
Let \(u^{(m)} = [u_{1}^{(m)},u_{2}^{(m)}, \ldots ,u_{N}^{(m)}]^{T}\), \(f^{(m)} = [f_{1}^{(m)},f_{2}^{(m)}, \ldots ,f_{N}^{(m)}]^{T}\) and let \(I \in R^{N \times N}\) be the identity matrix. Then the scheme (3) leads to the following equations:
and
where \(D_{ \pm }^{(m)} = \operatorname{diag}(d_{ \pm ,1}^{(m)}, \ldots ,d_{ \pm ,N}^{(m)})\) and
It is obvious that \(G_{\alpha } \) is a Toeplitz matrix [15] and the coefficient matrix \(A^{(m)}\) is a full Toeplitz-like matrix, which is nonsymmetric when \(d_{ +} \ne d_{ -} \); the matrix-vector multiplication by \(A^{(m)}\) can be carried out in \(O(N\log N)\) operations by the fast Fourier transform (FFT).
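The \(O(N\log N)\) Toeplitz matrix-vector product works by embedding the Toeplitz matrix into a circulant matrix of twice the size, which the FFT diagonalizes [15]. A minimal sketch (function and variable names are ours):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply the N x N Toeplitz matrix with first column c and first
    row r (r[0] == c[0]) by x in O(N log N) via circulant embedding."""
    n = len(x)
    # First column of a 2N x 2N circulant whose leading N x N block is T;
    # the padding entry between c and the reversed row can be anything.
    col = np.concatenate([c, [0.0], r[-1:0:-1]])
    xp = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xp))
    return y[:n].real

# Check against the dense product for a small example.
c = np.array([4.0, 1.0, 0.5, 0.25])    # first column
r = np.array([4.0, -1.0, -0.5, -0.25])  # first row
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(4)]
              for i in range(4)])
x = np.arange(1.0, 5.0)
```

The same embedding underlies the fast solvers cited above; only the first column and first row of the matrix are ever stored.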
We can define \(v_{N,M} = \frac{\Delta x^{\alpha }}{\Delta t} = (x_{R} - x_{L})^{\alpha } T^{ - 1}\frac{M}{(N + 1)^{\alpha }} \), related to the numbers of time steps and grid points. The linear system (4) is represented as follows:
where
and
As the coefficient matrix \(M^{(m)}\) in (8) is a strictly diagonally dominant M-matrix [14], the matrix \(\frac{M^{(m)} + M^{(m)^{T}}}{2}\) is also a nonsingular M-matrix, where \(M^{(m)^{T}}\) denotes the transpose of the matrix \(M^{(m)}\). Since every nonsingular symmetric M-matrix is positive definite, \(\frac{M^{(m)} + M^{(m)^{T}}}{2}\) is symmetric positive definite. Therefore, \(M^{(m)}\) is also positive definite (see [21, 26]).
The transformation for the Toeplitz matrix \(M^{(m)}\)
As mentioned in Sect. 2, the coefficient matrix \(M^{(m)}\) is a real positive definite Toeplitz-like matrix. We assume that the diffusion coefficient functions are two nonnegative constants, i.e., \(d_{ +,i}^{(m)} = d_{ +} \ge 0\), \(d_{ -,i}^{(m)} = d_{ -} \ge 0\), with \(d_{ +} + d_{ -} \ne 0\). Then \(M^{(m)}\) becomes a real nonsymmetric positive definite Toeplitz matrix. In addition, M is chosen according to N so that \(v_{N,M}\) stays bounded away from 0.
We now transform the Toeplitz linear system (7) into a generalized saddle point problem by constructing an orthogonal matrix acting on \(M^{(m)}\). The first row and the first column of the real \(N \times N\) Toeplitz matrix \(M^{(m)}\) can be written as
and
respectively.
We use the Hermitian and skew-Hermitian splitting (HSS) to split \(M^{(m)}\) and get
where \(H^{(m)} = \frac{M^{(m)} + M^{(m)^{T}}}{2}\) is symmetric positive definite and \(S^{(m)} = \frac{M^{(m)} - M^{(m)^{T}}}{2}\) is skew-symmetric. Interestingly, \(H^{(m)}\) is a centrosymmetric Toeplitz matrix and \(S^{(m)}\) is a skew-centrosymmetric Toeplitz matrix. Denote \(J_{N} = [e_{N}, e_{N - 1}, \ldots , e_{1}]\), where \(e_{i} \in R^{N}\) is the column vector whose ith entry is 1, i.e., \(J_{N}\) is the permutation matrix with 1 on the anti-diagonal and 0 elsewhere. Thus the matrices \(H^{(m)}\) and \(S^{(m)}\) can be written as
respectively.
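The centrosymmetry of \(H^{(m)}\) and the skew-centrosymmetry of \(S^{(m)}\) follow from the identity \(J_{N}M^{(m)}J_{N} = M^{(m)^{T}}\), valid for any real Toeplitz matrix, and are easy to verify numerically. A minimal sketch (the generating sequence is our own illustration):

```python
import numpy as np

n = 6
rng = np.random.default_rng(0)
t = rng.standard_normal(2 * n - 1)   # Toeplitz generating sequence
M = np.array([[t[n - 1 + i - j] for j in range(n)] for i in range(n)])

H = (M + M.T) / 2          # symmetric part of the HSS splitting
S = (M - M.T) / 2          # skew-symmetric part
J = np.fliplr(np.eye(n))   # the anti-identity J_N

# H is centrosymmetric and S is skew-centrosymmetric:
#   J H J = H   and   J S J = -S.
```

Both identities hold to machine precision for any real Toeplitz input.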
Considering that N is even or odd, we discuss the structures of \(H^{(m)}\) and \(S^{(m)}\) in detail.
Case I: N is even; let \(N = 2m\). Then \(H^{(m)}\) can be rewritten as
where \(E \in R^{m \times m}\) is a symmetric Toeplitz matrix whose first column can be given as
and \(F \in R^{m \times m}\) is a Toeplitz matrix whose first row and column are as follows:
respectively.
Analogously, \(S^{(m)}\) can be rewritten as
where \(G \in R^{m \times m}\) is a skewsymmetric Toeplitz matrix whose first column can be given as
and \(L \in R^{m \times m}\) is a Toeplitz matrix whose first row and column are as follows:
respectively.
We give an orthogonal matrix P defined as
then, multiplying (9) by \(P^{T}\) from the left and by P from the right, we obtain the new forms of \(\tilde{H}^{(m)}\) and \(\tilde{S}^{(m)}\):
and
Hence, letting \(\tilde{M}^{(m)} = P^{T}M^{(m)}P\), \(\tilde{u}^{(m)} = P^{T}u^{(m)}\), and \(\tilde{b}^{(m - 1)} = P^{T}b^{(m - 1)}\), we can rewrite Eq. (7) as the following linear system:
and we have
For simplicity, let \(B = E + J_{m}F\), \(C = E - J_{m}F\) and \(W = - G + J_{m}L\); then we can find that
The coefficient matrix \(\tilde{M}^{(m)}\) can be expressed as
and \(\tilde{M}^{(m)}\) is real nonsymmetric positive definite because \(M^{(m)}\) is real positive definite.
Moreover, B and C are both symmetric positive definite matrices because \(\tilde{H}^{(m)}\) is a symmetric positive definite matrix. So the linear system (7) is transformed into a generalized saddle point problem under the assumption of case I.
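A standard choice of orthogonal matrix realizing this block structure (stated here as an assumption, since conventions for P vary) consists of identity and anti-identity blocks, \(P = \frac{1}{\sqrt{2}} [I, I; J_{m}, -J_{m}]\). The following sketch verifies numerically that under this P the symmetric part becomes block diagonal while the skew part retains only off-diagonal blocks (data are our own illustration):

```python
import numpy as np

n, m = 6, 3
rng = np.random.default_rng(1)
t = rng.standard_normal(2 * n - 1)   # Toeplitz generating sequence
M = np.array([[t[n - 1 + i - j] for j in range(n)] for i in range(n)])
H, S = (M + M.T) / 2, (M - M.T) / 2  # HSS split of the Toeplitz matrix

I, J = np.eye(m), np.fliplr(np.eye(m))
P = np.block([[I, I], [J, -J]]) / np.sqrt(2)  # orthogonal for even N = 2m

Ht = P.T @ H @ P   # block diagonal, in the spirit of diag(B, C)
St = P.T @ S @ P   # only the off-diagonal blocks survive
```

The same check, with a bordered P, covers the odd case N = 2m + 1.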
Case II: N is odd; let \(N = 2m + 1\). Then the matrix \(H^{(m)}\) can be written as
where E, F and \(J_{m}\) are similar to the above, and the column vector \(z_{1}\) is expressed as follows:
In addition, \(S^{(m)}\) can be expressed as
where G and L are similar to the above, and the column vector \(z_{2}\) is expressed as follows:
Then the orthogonal matrix is defined as
Multiplying (9) by \(P^{T}\) from the left and by P from the right, we obtain the new forms of \(\tilde{H}^{(m)}\) and \(\tilde{S}^{(m)}\) as
and
The Toeplitz linear system (7) can also be constructed as a generalized saddle point problem which has a form similar to (10), where
All in all, whatever positive integer N we choose, the scheme for the FDEs can be transformed into a generalized saddle point problem.
A GPIU method on the transformed saddle point problem
By splitting the new coefficient matrix \(\tilde{M}^{(m)}\) in the linear system (10), we get
where
here the definitions of B, C and W are given in the previous section, \(Q_{1} \in R^{ \lceil N/2 \rceil \times \lceil N/2 \rceil } \) is a symmetric positive definite matrix, \(Q_{2} \in R^{ \lfloor N/2 \rfloor \times \lfloor N/2 \rfloor } \) is a symmetric positive definite matrix, and \(Q_{3} \in R^{ \lfloor N/2 \rfloor \times \lceil N/2 \rceil } \) is arbitrary, where \(\lceil N/2 \rceil \) and \(\lfloor N/2 \rfloor \) represent the ceiling and the floor of \(N/2\), respectively.
Considering the GPIU method [23, 24], we choose \(Q_{2} = C/\delta \) where \(\delta > 0\), so \(Y^{(m)}\) can be rewritten as
Thus, the linear system (10) is redefined by the splitting form of (12) as
The form of the iteration matrix corresponding to (13) is as follows:
Denote by \(\rho (\boldsymbol{W})\) the spectral radius of the iteration matrix W; then the iteration (13) converges if and only if \(\rho (\boldsymbol{W}) < 1\).
Let λ be an eigenvalue of W and \([\tilde{u}_{x}^{H}, \tilde{u}_{y}^{H}]^{H}\) be the corresponding eigenvector with \(\tilde{u}_{x} \in C^{ \lceil N/2 \rceil } \) and \(\tilde{u}_{y} \in C^{ \lfloor N/2 \rfloor } \), then we obtain
Or, equivalently,
Aiming to obtain the convergence condition, the following lemmas and theorem are presented.
Lemma 4.1
Suppose that B, C are symmetric positive definite, \(Q_{1}\) is symmetric positive semidefinite, \(Q_{2} = C/\delta \) is symmetric positive definite with \(\delta > 0\), and \(Q_{3}\) is arbitrary. If λ is an eigenvalue of W defined by (14) and \([\tilde{u}_{x}^{H}, \tilde{u}_{y}^{H}]^{H}\) is the corresponding eigenvector, where \(\tilde{u}_{x} \in C^{ \lceil N/2 \rceil } \) and \(\tilde{u}_{y} \in C^{ \lfloor N/2 \rfloor } \), then

(a) \(\lambda \ne 1\);

(b) when W has full row rank, \(\tilde{u}_{x}\) is a nonzero vector;

(c) when W does not have full row rank:

(i) if \(\lambda \ne 1 - \delta \), \(\tilde{u}_{x}\) is a nonzero vector;

(ii) if \(\lambda = 1 - \delta \), \(\tilde{u}_{x}\) can be a zero vector, and \(\lambda = 1 - \delta \) is an eigenvalue of W of multiplicity at least \(\lfloor N/2 \rfloor - r(W)\), where \(r(W)\) denotes the rank of the matrix W and \(r(W) = r(W^{T})\).
Proof
(a) If \(\lambda = 1\), then we have
We can rewrite the system as
Evidently, the coefficient matrix is nonsingular, so \([\tilde{u}_{x}^{H}, \tilde{u}_{y}^{H}]^{H} = 0\), which is contradictory, so \(\lambda \ne 1\).
(b) If \(\tilde{u}_{x} = 0\), then according to (15) we have
When W has full row rank, the first equation of (16) gives \(\tilde{u}_{y} = 0\). This is a contradiction.

(c) When W does not have full row rank, let \(\tilde{u}_{x} = 0\).

(i) If \(\lambda \ne 1 - \delta \), then by the second equation of (16) and the symmetric positive definiteness of \(Q_{2}\), we obtain \(\tilde{u}_{y} = 0\), which is a contradiction.

(ii) If \(\lambda = 1 - \delta \), a fundamental system of solutions for \(\tilde{u}_{y}\) can be obtained from the first equation of (16); it consists of \(\lfloor N/2 \rfloor - r(W)\) nonzero vector solutions, i.e., \(\lambda = 1 - \delta \) is an eigenvalue of multiplicity at least \(\lfloor N/2 \rfloor - r(W)\), where \(r(W)\) denotes the rank of the matrix W and \(r(W) = r(W^{T})\). □
Lemma 4.2
Assume that B, C are symmetric positive definite, \(Q_{1}\) is symmetric positive semidefinite, \(Q_{2} = C/\delta \) is symmetric positive definite with \(\delta > 0\), and \(Q_{3}\) is arbitrary. Let λ be an eigenvalue of W and \([\tilde{u}_{x}^{H}, \tilde{u}_{y}^{H}]^{H}\) be the corresponding eigenvector with \(\tilde{u}_{x} \in C^{ \lceil N/2 \rceil } \) and \(\tilde{u}_{y} \in C^{ \lfloor N/2 \rfloor } \). If \(\tilde{u}_{y} = 0\), then

(a) when W has full row rank, \(0 \le \lambda < 1\);

(b) when W does not have full row rank, \(\vert \lambda \vert < 1\) if and only if \(0 < \delta < 2\).
Proof
(a) If \(\tilde{u}_{y} = 0\), from (15), we obtain
When W has full row rank, \(\lambda \ne 1\) and \(\tilde{u}_{x} \ne 0\) by Lemma 4.1, and we denote

Multiplying both sides of the first equation in (17) by \(\frac{\tilde{u}_{x}^{H}}{\tilde{u}_{x}^{H}\tilde{u}_{x}}\) from the left, we get

(b) When W does not have full row rank, if \(\lambda \ne 1 - \delta \), then \(\tilde{u}_{x} \ne 0\) by Lemma 4.1. As \(\lambda \ne 1\), in the same way as in (a), we get \(0 \le \lambda < 1\).

If \(\lambda = 1 - \delta \), we rewrite the system (17) as

Thus, \(\tilde{u}_{x} \in \operatorname{null}[\delta Q_{1} - (1 - \delta )B]\) and \(\tilde{u}_{x} \in \operatorname{null}[\delta Q_{3} + (1 - \delta )W]\), where \(\operatorname{null}[ \cdot ]\) denotes the null space of the corresponding matrix. To guarantee \(\vert \lambda \vert < 1\), we need \(\vert 1 - \delta \vert < 1\), i.e., \(0 < \delta < 2\).

Hence \(\vert \lambda \vert < 1\) if and only if \(0 < \delta < 2\). □
Lemma 4.3
If ϕ and φ are real numbers, the necessary and sufficient condition for both roots of the real quadratic equation \(x^{2} + \varphi x + \phi = 0\) to satisfy \(\vert x \vert < 1\) is \(\vert \phi \vert < 1\) and \(\vert \varphi \vert < 1 + \phi \).
To ensure the convergence of the iteration method (13), the following theorem gives necessary and sufficient conditions.
Theorem 4.1
Suppose that B, C are symmetric positive definite, \(Q_{1}\) is symmetric positive semidefinite, \(Q_{2} = C/\delta \) is symmetric positive definite with \(\delta > 0\), and \(Q_{3}\) is such that \(W^{T}Q_{2}^{ - 1}Q_{3}\) is symmetric. Then the iteration method (13) is convergent if and only if
where
and \([\tilde{u}_{x}^{H}, \tilde{u}_{y}^{H}]^{H}\) is a corresponding eigenvector with \(\tilde{u}_{x} \in C^{ \lceil N/2 \rceil } \) and \(\tilde{u}_{y} \in C^{ \lfloor N/2 \rfloor } \).
Proof
Let λ be an eigenvalue of the iteration matrix W and \([\tilde{u}_{x}^{H}, \tilde{u}_{y}^{H}]^{H}\) be its corresponding eigenvector with \(\tilde{u}_{x} \in C^{ \lceil N/2 \rceil } \) and \(\tilde{u}_{y} \in C^{ \lfloor N/2 \rfloor } \). By Lemma 4.1, we know \(\lambda \ne 1\).
If \(\lambda = 1 - \delta \), then (15) results in

Equivalently, \(\tilde{u}_{x} \in \operatorname{null}[(1 - \delta )W + \delta Q_{3}]\) and \(\tilde{u}_{y} = (WW^{T})^{ - 1}[\delta WQ_{1} - (1 - \delta )WB]\tilde{u}_{x}\) when W has full row rank, where \(\operatorname{null}[ \cdot ]\) denotes the null space of the corresponding matrix. Then \(\lambda = 1 - \delta \) is an eigenvalue of the iteration matrix W.

When W does not have full row rank, \(\tilde{u}_{x} \in \operatorname{null}[(1 - \delta )W + \delta Q_{3}]\) and there exist \(\lfloor N/2 \rfloor - r(W)\) nonzero eigenvectors \(\tilde{u}_{y}\) corresponding to \(\lambda = 1 - \delta \), so \(\lambda = 1 - \delta \) is an eigenvalue of the iteration matrix W of multiplicity at least \(\lfloor N/2 \rfloor - r(W)\). In particular, the case where \(\tilde{u}_{x}\) is the zero vector was treated in Lemma 4.1.
If \(\lambda \ne 1 - \delta \), we have \(\tilde{u}_{x} \ne 0\) from Lemma 4.1, and (15) can be rewritten as
Eliminating \(\tilde{u}_{y}\) in (19), we obtain
Then, multiplying both sides of (20) by \(\frac{\tilde{u}_{x}^{H}}{\tilde{u}_{x}^{H}\tilde{u}_{x}}\) from the left, we obtain
where
Then (21) can be modified as
When \(\gamma = 0\), we get \(W\tilde{u}_{x} = 0\); multiplying both sides of the first equation in (19) by \(\tilde{u}_{x}^{H}\) from the left, we have
When \(W\tilde{u}_{x} \ne 0\), we have \(\gamma > 0\). By Lemma 4.3, the roots of the quadratic equation (22) satisfy \(\vert \lambda \vert < 1\) if and only if
Solving the inequalities of (23), we have
In addition, the case (b) of Lemma 4.2 with \(\tilde{u}_{y} = 0\) also satisfies the inequalities (24).

Therefore, we obtain (24), where we have used \(\alpha \ge 0\) and \(\beta > 0\). □
Corollary 4.1
Under the assumptions of Theorem 4.1, the iteration method (13) converges if and only if \(2(2 - \delta )Q_{1} + (2 - \delta )B + 2W^{T}Q_{2}^{ - 1}Q_{3} - W^{T}Q_{2}^{ - 1}W\) and \(\delta Q_{1} + B - W^{T}Q_{2}^{ - 1}Q_{3}\) are positive definite with \(0 < \delta < 2\).
By choosing different matrices \(Q_{1}\), \(Q_{2}\) and \(Q_{3}\) in (12), we obtain a family of iteration algorithms from the GPIU method for solving the generalized saddle point problem (11). Here, let \(\tilde{u}_{n + 1}^{(m)} = [\tilde{u}_{x + 1}^{(m)^{T}}, \tilde{u}_{y + 1}^{(m)^{T}}]^{T}\), \(\tilde{u}_{n}^{(m)} = [\tilde{u}_{x}^{(m)^{T}}, \tilde{u}_{y}^{(m)^{T}}]^{T}\) and \(\tilde{b}^{(m - 1)} = [\tilde{b}_{1}^{(m - 1)^{T}}, \tilde{b}_{2}^{(m - 1)^{T}}]^{T}\), where \(\tilde{u}_{x + 1}^{(m)}\), \(\tilde{u}_{x}^{(m)}\), \(\tilde{b}_{1}^{(m - 1)} \in C^{ \lceil N/2 \rceil } \), and \(\tilde{u}_{y + 1}^{(m)}\), \(\tilde{u}_{y}^{(m)}\), \(\tilde{b}_{2}^{(m - 1)} \in C^{ \lfloor N/2 \rfloor } \). Then an efficient GPIU algorithm can be presented.
The GPIU Algorithm
If \(Q_{1} = \frac{1}{\omega } W^{T}Q_{2}^{ - 1}W\) with \(\omega > 0\), \(Q_{2} = \frac{1}{\delta } C\) with \(0 < \delta < 2\), and \(Q_{3} = sW\), the GPIU method gives the following iteration scheme:
where the matrices B, C and W depend on N and are defined in Sect. 3.
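To illustrate the flavor of Uzawa-type iterations on a saddle point system, here is a sketch of the classical exact Uzawa method on a standard saddle point problem. This is not the authors' GPIU scheme; the test problem, names and step-size rule are our own illustration:

```python
import numpy as np

# Standard saddle point system:  A x + B^T y = f,  B x = g,
# with A symmetric positive definite.  Classical Uzawa iterates
#   x_{k+1} = A^{-1} (f - B^T y_k)
#   y_{k+1} = y_k + tau * (B x_{k+1} - g)
# and converges for 0 < tau < 2 / lambda_max(B A^{-1} B^T).
A = np.diag([2.0, 3.0, 4.0])
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
f = np.array([1.0, 2.0, 3.0])
g = np.array([1.0, -1.0])

S = B @ np.linalg.solve(A, B.T)          # Schur complement
tau = 1.0 / np.linalg.eigvalsh(S).max()  # step size inside the safe range

x, y = np.zeros(3), np.zeros(2)
for _ in range(200):
    x = np.linalg.solve(A, f - B.T @ y)  # exact inner solve
    y = y + tau * (B @ x - g)            # dual (Uzawa) update
```

The GPIU method generalizes this pattern: the exact solves are replaced by splittings with \(Q_{1}\), \(Q_{2}\), \(Q_{3}\), and the lower-right block need not be zero.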
Numerical examples
First, the FDEs problem (1) is transformed by premultiplying the coefficient matrix by \(P^{T}\) and postmultiplying it by P, as given in Sect. 3. The FDEs problem is thus changed into a generalized saddle point problem, for which an algorithm based on the GPIU method was proposed in Sect. 4. In this section, we discuss the GPIU algorithm with different parameters and compare the GPIU algorithm with the CGNR method. The initial value is the zero vector at each time step, and each iteration process terminates when the current residual satisfies \(\Vert r_{k} \Vert _{2} / \Vert r_{0} \Vert _{2} < 10^{ - 7}\), where \(r_{k}\) is the residual vector of the linear system after k iterations. In the tables below, "N" denotes the number of spatial grid points, "M" the number of time steps, "Error" the difference between the exact and approximate solutions in the infinity norm, "CPU(s)" the total CPU time (in seconds) to solve the whole FDEs problem, and "Iter" the average number of iterations needed to solve the FDEs problem, i.e.,
where \(\operatorname{Iter}(m)\) denotes the number of iterations needed to solve (7). Moreover, we obtain the true solution approximately by the program command \(\operatorname{inv}(\tilde{M}^{(m)})\tilde{b}^{(m - 1)}\). All numerical experiments are executed in MATLAB (2016a) on a Windows 8.1 machine with an Intel(R) Core(TM) i5-5257U CPU @ 2.70 GHz.
Example 1
([17])
Consider the initial-boundary value problem (1) on the spatial domain \([x_{L},x_{R}] = [0,2]\) and the time interval \([0,T] = [0,1]\) with diffusion coefficients \(d_{ +} = 0.6\), \(d_{ -} = 0.5\), and
To keep \(v_{N,M}\) bounded away from 0, let \(\Delta t \approx 2\Delta x^{\alpha } \). Table 1 shows the efficiency of the method in terms of CPU time, Iter and Error. As N and M become large, the CPU time grows and the Error accumulates, but the average number of iterations remains stable.
Table 2 demonstrates that both the Iter and the CPU(s) of the CGNR method with optimal parameters are much higher than those of the GPIU method. The iteration steps and CPU time of the new method are also obviously less than those of HSS. Moreover, the Error for the FDEs is obviously improved over CGNR and HSS.
Figure 1 shows the convergence rates of GPIU and HSS. To estimate the least number of steps needed to reduce the error to ε times the initial error, one defines \(- \ln (\rho (M))\) as the asymptotic convergence rate, where M is the iteration matrix of the method used to solve the equations. The minimum number of iteration steps then satisfies \(k \ge \frac{ - \ln \varepsilon }{ - \ln (\rho (M))}\). In Fig. 1, the x-axis stands for the scale of the matrix and the y-axis for the theoretical minimum number of iteration steps. For GPIU, the minimum number of steps increases slowly and almost tends to a stable value as N increases, whereas for HSS it increases sharply.
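The step-count bound just given is easy to evaluate directly; a small sketch (the spectral radius values are illustrative, not the paper's measured ones):

```python
import math

def min_steps(rho, eps):
    """Smallest integer k with rho**k <= eps, i.e.
    k >= (-ln eps) / (-ln rho) for a spectral radius 0 < rho < 1."""
    return math.ceil(-math.log(eps) / -math.log(rho))

# Reducing the error by a factor 1e-7 takes few steps for a small
# spectral radius and many steps as rho approaches 1.
fast = min_steps(0.5, 1e-7)
slow = min_steps(0.95, 1e-7)
```

This makes quantitative the qualitative gap between the two curves in Fig. 1: a method whose spectral radius stays small and stable as N grows keeps its step count nearly constant.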
Example 2
([21])
The spatial domain is \([x_{L},x_{R}] = [0,1]\) and the time interval is \([0,T] = [0,1]\), with \(d_{ +} = 0.6\), \(d_{ -} = 0.2\), initial condition \(u(x,0) = 4x^{2}(2 - x)^{2}\), and source term \(f(x,t) = - 32e^{ - t} [ x^{2} + (2 - x)^{2} + 0.125x^{2}(2 - x)^{2} - 2.5(x^{3} + (2 - x)^{3}) + \frac{25}{22}(x^{4} + (2 - x)^{4}) ]\).
Table 3 illustrates the numerical results for Example 2. Here we deliberately select a nonzero and nonlinear source term \(f(x,t)\). The Iter of the CGNR method is clearly the largest, and the Iter of the new method is smaller than that of HSS. Moreover, the CPU time and the error of the GPIU method are much smaller than those of the CGNR and HSS methods. This indicates that the GPIU algorithm remains highly efficient and accurate in the case of a nonlinear source term, so it extends naturally to this case.
Note. The memory usage for all tests in Tables 1, 2 and 3 is \(O(n)\), where n denotes the order of the coefficient matrix of equations [7].
Concluding remarks
In this paper, a GPIU method is put forward to solve the generalized saddle point problem generated by the fractional diffusion equations with constant coefficients, which is obtained by premultiplying the coefficient matrix by \(P^{T}\) and postmultiplying it by the orthogonal matrix P we constructed. Then the convergence condition of the GPIU method is analyzed, and a new GPIU algorithm is given.
The numerical results show that the proposed method is more effective than the CGNR and HSS methods, and its advantages become more obvious as N increases. Compared with the CGNR and HSS methods, the GPIU method achieves a faster convergence rate in both practice and theory, and obtains a more accurate solution in a shorter time. Unlike the CGNR and HSS methods, as the matrix order N increases, the number of iteration steps of the GPIU method grows slowly and remains almost stable near an extremely low value, and the error is significantly improved. The good convergence behavior of GPIU also benefits from the fact that the storage of the new algorithm is greatly reduced to \(O(n)\) compared with the CGNR and HSS methods, where n denotes the order of the coefficient matrix of the equations.
For further work, more effective stationary methods like the GPIU method will be developed to solve FDE problems with constant or even variable diffusion coefficients.
References
 1.
Benson, D., Wheatcraft, S.W., Meerschaert, M.M.: Application of a fractional advection-dispersion equation. Water Resour. Res. 36, 1403–1413 (2000)
 2.
Benson, D., Wheatcraft, S.W., Meerschaert, M.M.: The fractional-order governing equation of Lévy motion. Water Resour. Res. 36, 1413–1423 (2000)
 3.
Carreras, B.A., Lynch, V.E., Zaslavsky, G.M.: Anomalous diffusion and exit time distribution of particle tracers in plasma turbulence models. Phys. Plasmas 8, 5096–5103 (2001)
 4.
Bai, J., Feng, X.: Fractionalorder anisotropic diffusion for image denoising. IEEE Trans. Image Process. 16, 2492–2502 (2007)
 5.
Raberto, M., Scalas, E., Mainardi, F.: Waitingtimes and returns in highfrequency financial data: an empirical study. Physica A 314, 749–755 (2002)
 6.
Sokolov, I.M., Klafter, J., Blumen, A.: Fractional kinetics. Phys. Today 11, 28–53 (2002)
 7.
Shen, S., Liu, F., Anh, V., Turner, I.: The fundamental solution and numerical solution of the Riesz fractional advection-dispersion equation. IMA J. Appl. Math. 73, 850–872 (2008)
 8.
Gao, G.H., Sun, Z.Z.: A compact difference scheme for the fractional subdiffusion equations. J. Comput. Phys. 230, 586–595 (2011)
 9.
Liu, F., Zhuang, P., Burrage, K.: Numerical methods and analysis for a class of fractional advection-dispersion models. Comput. Math. Appl. 64, 2990–3007 (2012)
 10.
Baeumer, B., Kovács, M., Meerschaert, M.M.: Numerical solutions for fractional reaction-diffusion equations. Comput. Math. Appl. 55, 2212–2226 (2008)
 11.
Deng, W.: Finite element method for the space and time fractional Fokker–Planck equation. SIAM J. Numer. Anal. 47, 204–226 (2008)
 12.
Tadjeran, C., Meerschaert, M.M., Scheffler, H.P.: A secondorder accurate numerical approximation for the fractional diffusion equation. J. Comput. Phys. 213, 205–213 (2006)
 13.
Meerschaert, M.M., Tadjeran, C.: Finite difference approximations for twosided spacefractional partial differential equations. Appl. Numer. Math. 56, 80–90 (2006)
 14.
Wang, H., Wang, K., Sircar, T.: A direct \(O(N\log ^{2}N)\) finite difference method for fractional diffusion equations. J. Comput. Phys. 229, 8095–8104 (2010)
 15.
Chan, R., Ng, M.: Conjugate gradient methods for Toeplitz systems. SIAM Rev. 38, 427–482 (1996)
 16.
Wang, K., Wang, H.: A fast characteristic finite difference method for fractional advection-diffusion equations. Adv. Water Resour. 34, 810–816 (2011)
 17.
Lei, S.L., Sun, H.W.: A circulant preconditioner for fractional diffusion equations. J. Comput. Phys. 242, 715–725 (2013)
 18.
Chan, R., Strang, G.: Toeplitz equations by conjugate gradients with circulant preconditioner. SIAM J. Sci. Stat. Comput. 10, 104–119 (1989)
 19.
Chan, T.: An optimal circulant preconditioner for Toeplitz systems. SIAM J. Sci. Stat. Comput. 9, 766–771 (1988)
 20.
Pang, H., Sun, H.: Multigrid method for fractional diffusion equations. J. Comput. Phys. 231, 693–703 (2012)
 21.
Bai, Y.Q., Huang, T.Z., Gu, X.M.: Circulant preconditioned iterations for fractional diffusion equations based on Hermitian and skew-Hermitian splittings. Appl. Math. Lett. 48, 14–22 (2015)
 22.
Chen, F., Jiang, Y.L.: On HSS and AHSS iteration methods for nonsymmetric positive definite Toeplitz systems. J. Comput. Appl. Math. 234, 2432–2440 (2010)
 23.
Zhou, Y.Y., Zhang, G.F.: A generalization of parameterized inexact Uzawa method for generalized saddle point problems. Appl. Math. Comput. 215, 599–607 (2009)
 24.
Bai, Z.Z., Wang, Z.Q.: On parameterized inexact Uzawa methods for generalized saddle point problems. Linear Algebra Appl. 428, 2900–2932 (2008)
 25.
Podlubny, I.: Fractional Differential Equations. Academic Press, New York (1999)
 26.
Quarteroni, A., Sacco, R., Saleri, F.: Numerical Mathematics, 2nd edn. Texts in Appl. Math., vol. 37. Springer, Berlin (2007)
 27.
Young, D.M.: Iterative Solution of Large Linear Systems. Academic Press, New York (1971)
Acknowledgements
The authors would like to thank the editors and the reviewers for their careful reading.
Availability of data and materials
The data are available from the corresponding author upon request.
Authors’ information
HaiLong Shen is a Lecturer at Northeastern University, Shenyang, China. XinHui Shao is a Professor and Master advisor at Northeastern University. YuHan Li is studying for a Master’s degree at Northeastern University.
Funding
This work was funded by the Natural Science Foundation of Liaoning Province (CN) (No. 20170540323); Central University Basic Scientific Research Business Expenses Special Funds (N2005013).
Contributions
SHL designed, analyzed and wrote the paper. SXH guided the full text. LYH analyzed the data. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Shen, HL., Li, YH. & Shao, XH. A GPIU method for fractional diffusion equations. Adv Differ Equ 2020, 398 (2020). https://doi.org/10.1186/s13662020027319
Keywords
 Fractional diffusion equations
 Generalized saddle point problem
 Stability
 Toeplitz linear system
 The shifted Grünwald formula