
A GPIU method for fractional diffusion equations

Abstract

The fractional diffusion equations can be discretized by applying the implicit finite difference scheme with the unconditionally stable shifted Grünwald formula. The resulting linear system then has a real Toeplitz structure when the two diffusion coefficients are non-negative constants. Through a similarity transformation, the Toeplitz linear system can be converted into a generalized saddle point problem. We use a generalization of the parameterized inexact Uzawa (GPIU) method to solve this kind of saddle point problem and give a new algorithm based on the GPIU method. Numerical results show the effectiveness and accuracy of the new algorithm.

Introduction

The fractional differential operator is suitable for describing the memory, genetic, mechanical and electrical properties of various materials. Compared with the classical integer differential operator, it can more concisely and accurately describe the biological, mechanical and physical processes with historical memory and spatial global correlation characteristics, such as abnormal diffusion of particles, quantization problem of non-local field theory, the fractional capacitance theory, universal voltage shunt, chaotic circuit analysis, physical semiconductors field, dispersion in porous media, physical and engineering issues related to fractal dimensions, and non-Newtonian fluid mechanics.

The fractional diffusion equations can be abstracted from many practical problems, such as a random walk model describing the competition between sub-diffusion and super-diffusion. Three different types of fractional diffusion equations can be derived according to the differences in particle waiting times and jump lengths: when the mean waiting time of each step is infinite and the mean squared jump length is finite, the random walk model describes anomalous sub-diffusion, and the time-fractional diffusion equation is derived accordingly; when the mean waiting time is finite and the mean squared jump length is infinite, the random walk model describes super-diffusion, and the space-fractional diffusion equation is derived accordingly; when the mean waiting time and the mean squared jump length are both infinite, the random walk model describes the competition between sub-diffusion and super-diffusion, and the time-space fractional diffusion equation is derived accordingly. Using fractional convection-diffusion equations to simulate Lévy motion is very effective, since they can more accurately capture solute transport with long-tail behavior. When studying Eulerian estimation of solute transport in porous media, the fractional Fick law can replace the traditional Fick law to obtain fractional convection-diffusion equations. In the application of non-Newtonian fluid mechanics, solving a material model problem lying between the Hooke solid model and the Newtonian fluid model requires a special function related to fractional calculus. Early on, the dynamic response of the broadband crust induced by earthquakes, which is hardly affected by velocity, was described by a fractional linear rheological solid model.
It is precisely because of the advantages of fractional derivatives and the wide application of fractional diffusion equations (FDEs) (1) in groundwater contaminant transport [1, 2], turbulent flow [3], image processing [4], finance [5], physics [6] and other fields that their study has drawn extensive attention.

Numerous numerical methods have been developed as the major ways of solving FDEs [7–12]. However, some salient features of fractional differential operators can lead to unconditional instability of standard discretizations [13]. In addition, most numerical methods for FDEs generate a full coefficient matrix, which requires a computational cost of \(O(N^{3})\) and storage of \(O(N^{2})\), where N is the number of grid points [14]. In contrast, second-order diffusion equations, which usually lead to sparse coefficient matrices, can be solved efficiently by fast iteration methods with computational complexity \(O(N)\).

To guarantee stability, Meerschaert and Tadjeran [13] proposed a shifted Grünwald discretization to approximate fractional diffusion equations, which has been proved to be unconditionally stable. Then, based on the Meerschaert–Tadjeran method, a special structure of the full coefficient matrices was presented in [14]. The method maintains a Toeplitz-like structure and reduces the storage requirement from \(O(N^{2})\) to \(O(N)\). Furthermore, the matrix-vector multiplication for a Toeplitz matrix can be calculated by the fast Fourier transform (FFT) in \(O(N\log N)\) operations [15]. As a result, fast numerical methods for solving fractional equations with the shifted Grünwald formula have been developed, including the conjugate gradient normal residual (CGNR) method [16, 17], the preconditioned CGNR (PCGNR) method with circulant preconditioners [17–19] and the multigrid method [20]. If the diffusion coefficients are constant, the full coefficient matrix originating from the Meerschaert–Tadjeran method is non-Hermitian positive definite. Therefore, the Hermitian and skew-Hermitian splitting (HSS) iteration and the corresponding preconditioned HSS (PHSS) iteration [21] have also been developed to solve fractional equations.

Since the full coefficient matrix is also a real Toeplitz matrix, the FDE can be transformed into a generalized saddle point problem based on some properties of real Toeplitz matrices [22]. Therefore, the generalized parameterized inexact Uzawa (GPIU) method [23, 24] is suitable for solving this new saddle point problem, although it shows some differences from the traditional GPIU setting. The method needs to store three matrices of order \(N/2\); since these matrices have symmetric structures, the storage requirement is less than \(O(N^{2})\). Meanwhile, the computational complexity is \(O(N^{2})\), since the method needs to compute products with several matrices of order \(N/2\).

The framework of this paper is as follows. In Sect. 2, we introduce the background of the discretization for the FDEs. Then the real Toeplitz linear system is converted to a generalized saddle point problem by similarity transformation in Sect. 3. In Sect. 4, the GPIU method is considered to solve this kind of generalized saddle point problem and its convergence is analyzed. Besides, a GPIU algorithm is presented. In Sect. 5, numerical examples are performed to illustrate the effectiveness and power of our new method. Finally, conclusions are given in Sect. 6.

The depiction of FDEs and finite difference discretization

Firstly, an initial-boundary value problem of the FDEs is given by

$$ \textstyle\begin{cases} \frac{\partial u(x,t)}{\partial t} = d_{ +} (x,t)\frac{\partial ^{\alpha } u(x,t)}{\partial _{ +} x^{\alpha }} + d_{ -} (x,t)\frac{\partial ^{\alpha } u(x,t)}{\partial _{ -} x^{\alpha }} + f(x,t), \\ x \in (x_{L},x_{R}),\quad t \in (0,T], \\ u(x_{L},t) = u(x_{R},t) = 0,\quad 0 \le t \le T, \\ u(x, 0) = u_{0}(x),\quad x \in [x_{L},x_{R}], \end{cases} $$
(1)

where \(\alpha \in (1, 2)\) is the order of fractional derivatives, \(f(x,t)\) is the source term and diffusion coefficient functions \(d_{ \pm } \ge 0\) with \(d_{ +} + d_{ -} \ne 0\). The left-sided and the right-sided fractional derivatives \(\frac{\partial ^{\alpha } u(x,t)}{\partial _{ +} x^{\alpha }} \) and \(\frac{\partial ^{\alpha } u(x,t)}{\partial _{ -} x^{\alpha }} \) of (1) have the same definition as the Grünwald form [25]

$$\begin{aligned}& \frac{\partial ^{\alpha } u(x,t)}{\partial _{ +} x^{\alpha }} = \lim_{\Delta x \to 0^{ +}} \frac{1}{\Delta x^{\alpha }} \sum _{k = 0}^{ \lfloor (x - x_{L})/\Delta x \rfloor } g_{k}^{(\alpha )}u(x - k\Delta x,t), \\& \frac{\partial ^{\alpha } u(x,t)}{\partial _{ -} x^{\alpha }} = \lim_{\Delta x \to 0^{ -}} \frac{1}{\Delta x^{\alpha }} \sum _{k = 0}^{ \lfloor (x_{R} - x)/\Delta x \rfloor } g_{k}^{(\alpha )}u(x + k\Delta x,t), \end{aligned}$$

where \(\lfloor x \rfloor \) denotes the floor of x and the Grünwald weights \(g_{k}^{(\alpha )}\) are the alternating fractional binomial coefficient given as

$$ \textstyle\begin{cases} g_{0}^{(\alpha )} = 1, \\ g_{k}^{(\alpha )} = ( - 1)^{k} \binom{\alpha }{k},\quad k \in N_{ +}, \end{cases} $$
(2)

which can be evaluated by the following recursive form:

$$ g_{k + 1}^{(\alpha )} = \biggl(1 - \frac{\alpha + 1}{k + 1} \biggr)g_{k}^{(\alpha )},\quad k \in N_{ +}. $$

Besides, the coefficients \(g_{k}^{(\alpha )}\) satisfy the following properties [13, 14, 17, 21] when \(1 < \alpha < 2\):

$$ \textstyle\begin{cases} g_{0}^{(\alpha )} = 1, \qquad g_{1}^{(\alpha )} = - \alpha < 0,\qquad 1 \ge g_{2}^{(\alpha )} \ge g_{3}^{(\alpha )} \ge \cdots \ge 0, \\ \sum_{k = 0}^{\infty } g_{k}^{(\alpha )} = 0,\qquad \sum_{k = 0}^{m} g_{k}^{(\alpha )} < 0,\quad (m \ge 1 ). \end{cases} $$
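As an illustration (not part of the original derivation), the recursion and the listed properties of the Grünwald weights can be checked numerically. The following sketch uses plain Python with the illustrative choice \(\alpha = 1.5\):

```python
def grunwald_weights(alpha, n):
    """g_0 = 1 and g_{k+1} = (1 - (alpha+1)/(k+1)) g_k, k = 0, 1, ..."""
    g = [1.0]
    for k in range(n):
        g.append((1.0 - (alpha + 1.0) / (k + 1.0)) * g[k])
    return g

alpha = 1.5          # any order in (1, 2); 1.5 is an illustrative choice
g = grunwald_weights(alpha, 200)

# g_1 = -alpha and 1 >= g_2 >= g_3 >= ... >= 0
assert abs(g[1] + alpha) < 1e-14
assert all(1.0 >= g[k] >= g[k + 1] >= 0.0 for k in range(2, 200))

# every partial sum with m >= 1 is negative, approaching 0 from below
partial, sums = 0.0, []
for gk in g:
    partial += gk
    sums.append(partial)
assert all(s < 0.0 for s in sums[1:])
```

The partial sums tend to 0 from below because the full series \(\sum_{k = 0}^{\infty } g_{k}^{(\alpha )}\) vanishes while its tail terms are non-negative.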

Let N and M be positive integers, then \(\Delta x = \frac{x_{R} - x_{L}}{N + 1}\) and \(\Delta t = \frac{T}{M}\) are the sizes of spatial grids and time steps, respectively. Additionally, the spatial and temporal partitions are defined as \(x_{i} = x_{L} + i\Delta x\) for \(i = 0, 1, \ldots , N + 1\) and \(t_{m} = m\Delta t\) for \(m = 0, 1, \ldots , M\). We also let \(u_{i}^{(m)} = u(x_{i},t_{m})\), \(d_{ \pm ,i}^{(m)} = d_{ \pm } (x_{i},t_{m})\) and \(f_{i}^{(m)} = f(x_{i},t_{m})\).

Consider the shifted Grünwald approximations [13, 17, 21]:

$$\begin{aligned}& \frac{\partial ^{\alpha } u(x_{i},t_{m})}{\partial _{ +} x^{\alpha }} = \frac{1}{\Delta x^{\alpha }} \sum_{k = 0}^{i + 1} g_{k}^{(\alpha )}u_{i - k + 1}^{(m)} + O(\Delta x), \\& \frac{\partial ^{\alpha } u(x_{i},t_{m})}{\partial _{ -} x^{\alpha }} = \frac{1}{\Delta x^{\alpha }} \sum_{k = 0}^{N - i + 2} g_{k}^{(\alpha )}u_{i + k - 1}^{(m)} + O(\Delta x), \end{aligned}$$

where \(g_{k}^{(\alpha )}\) is defined in (2). Then the corresponding implicit finite difference scheme for (1) is as follows:

$$ \frac{u_{i}^{(m)} - u_{i}^{(m - 1)}}{\Delta t} = \frac{d_{ +,i}^{(m)}}{\Delta x^{\alpha }} \sum_{k = 0}^{i + 1} g_{k}^{(\alpha )}u_{i - k + 1}^{(m)} + \frac{d_{ -,i}^{(m)}}{\Delta x^{\alpha }} \sum_{k = 0}^{N - i + 2} g_{k}^{(\alpha )}u_{i + k - 1}^{(m)} + f_{i}^{(m)}. $$
(3)

Let \(u^{(m)} = [u_{1}^{(m)},u_{2}^{(m)}, \ldots ,u_{N}^{(m)}]^{T}\), \(f^{(m)} = [f_{1}^{(m)},f_{2}^{(m)}, \ldots ,f_{N}^{(m)}]^{T}\) and \(I \in R^{N \times N}\) be the identity matrix. Then the scheme (3) can be written in matrix form as

$$ \biggl(\frac{\Delta x^{\alpha }}{\Delta t}I + A^{(m)}\biggr)u^{(m)} = \frac{\Delta x^{\alpha }}{\Delta t}u^{(m - 1)} + \Delta x^{\alpha } f^{(m)} $$
(4)

and

$$ A^{(m)} = D_{ +}^{(m)}G_{\alpha } + D_{ -}^{(m)}G_{\alpha }^{T}, $$
(5)

where \(D_{ \pm }^{(m)} = \operatorname{diag}(d_{ \pm ,1}^{(m)}, \ldots ,d_{ \pm ,N}^{(m)})\) and

$$ G_{\alpha } = - \begin{bmatrix} g_{1}^{(\alpha )} & g_{0}^{(\alpha )} & 0 & \cdots & 0 & 0 \\ g_{2}^{(\alpha )} & g_{1}^{(\alpha )} & g_{0}^{(\alpha )} & 0 & \cdots & 0 \\ \vdots & g_{2}^{(\alpha )} & g_{1}^{(\alpha )} & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \ddots & 0 \\ g_{N - 1}^{(\alpha )} & \ddots & \ddots & \ddots & \ddots & g_{0}^{(\alpha )} \\ g_{N}^{(\alpha )} & g_{N - 1}^{(\alpha )} & \cdots & \cdots & g_{2}^{(\alpha )} & g_{1}^{(\alpha )} \end{bmatrix}_{N \times N}. $$
(6)

It is obvious that \(G_{\alpha } \) is a Toeplitz matrix [15] and the coefficient matrix \(A^{(m)}\) is a full Toeplitz-like matrix, which is non-symmetric when \(d_{ +} \ne d_{ -}\). Moreover, the matrix-vector multiplication with \(A^{(m)}\) can be carried out in \(O(N\log N)\) operations by the fast Fourier transform (FFT).
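The \(O(N\log N)\) Toeplitz matrix-vector product works by embedding the Toeplitz matrix in a circulant matrix of twice the size, which the FFT diagonalizes. The following NumPy sketch is illustrative (not from the paper) and uses random test data rather than the actual \(G_{\alpha }\):

```python
import numpy as np

def toeplitz_matvec(col, row, x):
    """Multiply the Toeplitz matrix with first column `col` and first row
    `row` (col[0] == row[0]) by x in O(N log N) via circulant embedding."""
    n = len(x)
    # first column of the 2N-periodic circulant that embeds the Toeplitz matrix
    c = np.concatenate([col, [0.0], row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# compare with a dense Toeplitz product on random data
rng = np.random.default_rng(0)
n = 64
col = rng.standard_normal(n)
row = np.concatenate([[col[0]], rng.standard_normal(n - 1)])
T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)]
              for i in range(n)])
x = rng.standard_normal(n)
assert np.allclose(T @ x, toeplitz_matvec(col, row, x))
```

Only the first column and first row of the Toeplitz matrix are stored, which is the \(O(N)\) storage mentioned above.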

We define \(v_{N,M} = \frac{\Delta x^{\alpha }}{\Delta t} = (x_{R} - x_{L})^{\alpha } T^{ - 1}\frac{M}{(N + 1)^{\alpha }} \), which depends on the numbers of time steps and grid points. The linear system (4) is then represented as follows:

$$ M^{(m)}u^{(m)} = b^{(m - 1)}, $$
(7)

where

$$ M^{(m)} = \frac{\Delta x^{\alpha }}{\Delta t}I + A^{(m)} = v_{N,M}I + D_{ +}^{(m)}G_{\alpha } + D_{ -}^{(m)}G_{\alpha }^{T} $$
(8)

and

$$ b^{(m - 1)} = v_{N,M}\bigl(u^{(m - 1)} + \Delta tf^{(m)}\bigr). $$

As the coefficient matrix \(M^{(m)}\) in (8) is a strictly diagonally dominant M-matrix [14], the matrix \(\frac{M^{(m)} + M^{(m)^{T}}}{2}\) is also a non-singular M-matrix, where \(M^{(m)^{T}}\) denotes the transpose of \(M^{(m)}\). Since every non-singular symmetric M-matrix is positive definite, \(\frac{M^{(m)} + M^{(m)^{T}}}{2}\) is a symmetric positive definite matrix. Therefore, \(M^{(m)}\) is also positive definite (see [21, 26]).
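These properties can be observed numerically. The sketch below (an illustration with assumed parameter values \(\alpha = 1.8\), \(d_{ +} = 1\), \(d_{ -} = 0.5\), \(v_{N,M} = 0.1\); it requires NumPy) builds \(M^{(m)}\) from (6) and (8) and checks strict diagonal dominance and the positive definiteness of the symmetric part:

```python
import numpy as np

def grunwald_weights(alpha, n):
    # g_0 = 1, g_{k+1} = (1 - (alpha+1)/(k+1)) g_k
    g = np.ones(n + 1)
    for k in range(n):
        g[k + 1] = (1.0 - (alpha + 1.0) / (k + 1.0)) * g[k]
    return g

def coefficient_matrix(alpha, N, d_plus, d_minus, v):
    # M = v*I + d_plus*G_alpha + d_minus*G_alpha^T, with G_alpha as in (6)
    g = grunwald_weights(alpha, N)
    G = np.array([[-g[i - j + 1] if i - j >= -1 else 0.0
                   for j in range(N)] for i in range(N)])
    return v * np.eye(N) + d_plus * G + d_minus * G.T

M = coefficient_matrix(alpha=1.8, N=50, d_plus=1.0, d_minus=0.5, v=0.1)

# strict diagonal dominance with a positive diagonal
diag = np.diag(M)
off = np.sum(np.abs(M), axis=1) - np.abs(diag)
assert np.all(diag > 0) and np.all(diag > off)

# hence (M + M^T)/2 is symmetric positive definite: Cholesky succeeds
np.linalg.cholesky((M + M.T) / 2)
```

The strict dominance follows from the property \(\sum_{k = 0}^{m} g_{k}^{(\alpha )} < 0\) for \(m \ge 1\): the absolute off-diagonal row sums stay strictly below the diagonal entries.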

The transformation for the Toeplitz matrix \(M^{(m)}\)

As mentioned in Sect. 2, the coefficient matrix \(M^{(m)}\) is a real positive definite Toeplitz-like matrix. We assume that the diffusion coefficient functions are two non-negative constants, i.e., \(d_{ +,i}^{(m)} = d_{ +} \ge 0\), \(d_{ -,i}^{(m)} = d_{ -} \ge 0\), with \(d_{ +} + d_{ -} \ne 0\). Then \(M^{(m)}\) becomes a real non-symmetric positive definite Toeplitz matrix. In addition, M is chosen appropriately according to N so that \(v_{N,M}\) stays bounded away from 0.

Then we transform the Toeplitz linear system (7) into a generalized saddle point problem by constructing an orthogonal matrix that acts on \(M^{(m)}\). The first row and first column of the real \(N \times N\) Toeplitz matrix \(M^{(m)}\) are

$$ \bigl[v_{N,M} + d_{ +} g_{1}^{(\alpha )} + d_{ -} g_{1}^{(\alpha )}, d_{ +} g_{0}^{(\alpha )} + d_{ -} g_{2}^{(\alpha )}, d_{ -} g_{3}^{(\alpha )}, \ldots , d_{ -} g_{N}^{(\alpha )}\bigr] $$

and

$$ \bigl[v_{N,M} + d_{ +} g_{1}^{(\alpha )} + d_{ -} g_{1}^{(\alpha )}, d_{ +} g_{2}^{(\alpha )} + d_{ -} g_{0}^{(\alpha )}, d_{ +} g_{3}^{(\alpha )}, \ldots , d_{ +} g_{N}^{(\alpha )}\bigr]^{T}, $$

respectively.

We split \(M^{(m)}\) by the Hermitian and skew-Hermitian splitting (HSS) and get

$$ M^{(m)} = H^{(m)} + S^{(m)}, $$
(9)

where \(H^{(m)} = \frac{M^{(m)} + M^{(m)^{T}}}{2}\) is symmetric positive definite and \(S^{(m)} = \frac{M^{(m)} - M^{(m)^{T}}}{2}\) is skew-symmetric. Interestingly, \(H^{(m)}\) is a centrosymmetric Toeplitz matrix and \(S^{(m)}\) is a skew-centrosymmetric Toeplitz matrix. Denote \(J_{N} = [e_{N}, e_{N - 1}, \ldots , e_{1}]\), where \(e_{i} \in R^{N}\) is the column vector whose ith entry is 1, i.e., \(J_{N}\) is the permutation matrix with 1 on the counter-diagonal and 0 elsewhere. Thus the matrices \(H^{(m)}\) and \(S^{(m)}\) satisfy

$$ H^{(m)} = J_{N}H^{(m)}J_{N}\quad \text{and} \quad S^{(m)} = - J_{N}S^{(m)}J_{N}, $$

respectively.

Depending on whether N is even or odd, we discuss the structures of \(H^{(m)}\) and \(S^{(m)}\) in detail.

Case I: N is even; let \(N = 2m\). Then \(H^{(m)}\) can be rewritten as

$$ H^{(m)} = \begin{bmatrix} E & J_{m}FJ_{m} \\ F & J_{m}EJ_{m} \end{bmatrix}, $$

where \(E \in R^{m \times m}\) is a symmetric Toeplitz matrix whose first column can be given as

$$ \biggl[v_{N,M} + (d_{ +} + d_{ -} )g_{1}^{(\alpha )}, \frac{d_{ +} + d_{ -}}{2}\bigl(g_{0}^{(\alpha )} + g_{2}^{(\alpha )}\bigr), \frac{d_{ +} + d_{ -}}{2}g_{3}^{(\alpha )}, \ldots , \frac{d_{ +} + d_{ -}}{2}g_{m}^{(\alpha )}\biggr]^{T}, $$

and \(F \in R^{m \times m}\) is a Toeplitz matrix whose first row and column are as follows:

$$\begin{aligned}& \biggl[\frac{d_{ +} + d_{ -}}{2}g_{m + 1}^{(\alpha )}, \frac{d_{ +} + d_{ -}}{2}g_{m}^{(\alpha )}, \frac{d_{ +} + d_{ -}}{2}g_{m - 1}^{(\alpha )}, \ldots , \frac{d_{ +} + d_{ -}}{2} \bigl(g_{0}^{(\alpha )} + g_{2}^{(\alpha )}\bigr) \biggr], \\& \biggl[\frac{d_{ +} + d_{ -}}{2}g_{m + 1}^{(\alpha )}, \frac{d_{ +} + d_{ -}}{2}g_{m + 2}^{(\alpha )}, \frac{d_{ +} + d_{ -}}{2}g_{m + 3}^{(\alpha )}, \ldots , \frac{d_{ +} + d_{ -}}{2}g_{N}^{(\alpha )} \biggr]^{T}, \end{aligned}$$

respectively.

Analogously, \(S^{(m)}\) can be rewritten as

$$ S^{(m)} = \begin{bmatrix} G & - J_{m}LJ_{m} \\ L & - J_{m}GJ_{m} \end{bmatrix}, $$

where \(G \in R^{m \times m}\) is a skew-symmetric Toeplitz matrix whose first column can be given as

$$ \biggl[0, \frac{d_{ +} - d_{ -}}{2}\bigl(g_{2}^{(\alpha )} - g_{0}^{(\alpha )}\bigr), \frac{d_{ +} - d_{ -}}{2}g_{3}^{(\alpha )}, \ldots , \frac{d_{ +} - d_{ -}}{2}g_{m}^{(\alpha )}\biggr]^{T}, $$

and \(L \in R^{m \times m}\) is a Toeplitz matrix whose first row and column are as follows:

$$\begin{aligned}& \biggl[\frac{d_{ +} - d_{ -}}{2}g_{m + 1}^{(\alpha )}, \frac{d_{ +} - d_{ -}}{2}g_{m}^{(\alpha )}, \frac{d_{ +} - d_{ -}}{2}g_{m - 1}^{(\alpha )}, \ldots , \frac{d_{ +} - d_{ -}}{2} \bigl(g_{2}^{(\alpha )} - g_{0}^{(\alpha )}\bigr) \biggr], \\& \biggl[\frac{d_{ +} - d_{ -}}{2}g_{m + 1}^{(\alpha )}, \frac{d_{ +} - d_{ -}}{2}g_{m + 2}^{(\alpha )}, \frac{d_{ +} - d_{ -}}{2}g_{m + 3}^{(\alpha )}, \ldots , \frac{d_{ +} - d_{ -}}{2}g_{N}^{(\alpha )} \biggr]^{T}, \end{aligned}$$

respectively.

We give an orthogonal matrix P defined as

$$ P = \frac{\sqrt{2}}{2} \begin{bmatrix} I_{m} & I_{m} \\ J_{m} & - J_{m} \end{bmatrix}, $$

then, multiplying both sides of (9) by \(P^{T}\) from the left and by P from the right, we obtain the new forms of \(\tilde{H}^{(m)}\) and \(\tilde{S}^{(m)}\):

$$ \tilde{H}^{(m)} = P^{T}H^{(m)}P = \begin{bmatrix} E + J_{m}F & 0 \\ 0 & E - J_{m}F \end{bmatrix} $$

and

$$ \tilde{S}^{(m)} = P^{T}S^{(m)}P = \begin{bmatrix} 0 & G + J_{m}L \\ G - J_{m}L & 0 \end{bmatrix}. $$

Hence, letting \(\tilde{M}^{(m)} = P^{T}M^{(m)}P\), \(\tilde{u}^{(m)} = P^{T}u^{(m)}\), and \(\tilde{b}^{(m - 1)} = P^{T}b^{(m - 1)}\), we can rewrite Eq. (7) as the following linear system:

$$ \tilde{M}^{(m)}\tilde{u}^{(m)} = \tilde{b}^{(m - 1)} $$
(10)

and we have

$$ \tilde{M}^{(m)} = P^{T}M^{(m)}P = P^{T}H^{(m)}P + P^{T}S^{(m)}P = \tilde{H}^{(m)} + \tilde{S}^{(m)} = \begin{bmatrix} E + J_{m}F & G + J_{m}L \\ G - J_{m}L & E - J_{m}F \end{bmatrix}. $$

For simplicity, let \(B = E + J_{m}F\), \(C = E - J_{m}F\) and \(W = - G + J_{m}L\), then we can find that

$$ W^{T} = ( - G + J_{m}L)^{T} = - G^{T} + L^{T}J_{m} = G + J_{m}L. $$

The coefficient matrix \(\tilde{M}^{(m)}\) can be expressed as

$$ \tilde{M}^{(m)} = \begin{bmatrix} B & W^{T} \\ - W & C \end{bmatrix}, $$
(11)

and \(\tilde{M}^{(m)}\) is real non-symmetric positive definite because \(M^{(m)}\) is real positive definite.

Moreover, B and C are both symmetric positive definite matrices because \(\tilde{H}^{(m)}\) is a symmetric positive definite matrix. So the linear system (7) is transformed into a generalized saddle point problem under the assumption of case I.
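The whole transformation of Case I can be verified numerically. The sketch below (illustrative parameter values, requiring NumPy) builds \(M^{(m)}\), checks the (skew-)centrosymmetry of \(H^{(m)}\) and \(S^{(m)}\), forms the orthogonal matrix P, and confirms that \(P^{T}M^{(m)}P\) has the saddle point structure (11) with symmetric positive definite diagonal blocks:

```python
import numpy as np

def coefficient_matrix(alpha, N, d_plus, d_minus, v):
    # M = v*I + d_plus*G_alpha + d_minus*G_alpha^T as in (8), G_alpha from (6)
    g = np.ones(N + 1)
    for k in range(N):
        g[k + 1] = (1.0 - (alpha + 1.0) / (k + 1.0)) * g[k]
    G = np.array([[-g[i - j + 1] if i - j >= -1 else 0.0
                   for j in range(N)] for i in range(N)])
    return v * np.eye(N) + d_plus * G + d_minus * G.T

N = 40
m = N // 2
M = coefficient_matrix(1.5, N, 1.0, 0.2, 0.5)
H, S = (M + M.T) / 2, (M - M.T) / 2

# centrosymmetry of H and skew-centrosymmetry of S: H = J H J, S = -J S J
JN = np.fliplr(np.eye(N))
assert np.allclose(H, JN @ H @ JN) and np.allclose(S, -JN @ S @ JN)

# the orthogonal matrix P of Case I and the transformed matrix (11)
J = np.fliplr(np.eye(m))
P = np.sqrt(0.5) * np.block([[np.eye(m), np.eye(m)], [J, -J]])
Mt = P.T @ M @ P
B, C, W = Mt[:m, :m], Mt[m:, m:], -Mt[m:, :m]

# B, C are symmetric positive definite; the off-diagonal blocks are W^T, -W
assert np.allclose(B, B.T) and np.allclose(C, C.T)
assert np.all(np.linalg.eigvalsh(B) > 0) and np.all(np.linalg.eigvalsh(C) > 0)
assert np.allclose(Mt[:m, m:], W.T)
```

The centrosymmetry checks rely on the persymmetry of Toeplitz matrices, \(J_{N}T J_{N} = T^{T}\), which is exactly why the two-by-two block structure appears after conjugation by P.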

Case II: N is odd; let \(N = 2m + 1\). Then the matrix \(H^{(m)}\) can be written as

$$ H^{(m)} = \begin{bmatrix} E & J_{m}z_{1} & J_{m}FJ_{m} \\ z_{1}^{T}J_{m} & v_{N,M} + (d_{ +} + d_{ -} )g_{1}^{(\alpha )} & z_{1}^{T} \\ F & z_{1} & J_{m}EJ_{m} \end{bmatrix}, $$

where E, F and \(J_{m}\) are as above, and the column vector \(z_{1}\) is expressed as follows:

$$ \biggl[\frac{d_{ +} + d_{ -}}{2}g_{m + 1}^{(\alpha )}, \frac{d_{ +} + d_{ -}}{2}g_{m}^{(\alpha )}, \ldots , \frac{d_{ +} + d_{ -}}{2}\bigl(g_{0}^{(\alpha )} + g_{2}^{(\alpha )}\bigr)\biggr]^{T}. $$

In addition, \(S^{(m)}\) can be expressed as

$$ S^{(m)} = \begin{bmatrix} G & J_{m}z_{2} & - J_{m}LJ_{m} \\ - z_{2}^{T}J_{m} & 0 & z_{2}^{T} \\ L & - z_{2} & - J_{m}GJ_{m} \end{bmatrix}, $$

where G and L are as above, and the column vector \(z_{2}\) is expressed as follows:

$$ \biggl[\frac{d_{ -} - d_{ +}}{2}g_{m + 1}^{(\alpha )}, \frac{d_{ -} - d_{ +}}{2}g_{m}^{(\alpha )}, \ldots , \frac{d_{ -} - d_{ +}}{2}\bigl(g_{2}^{(\alpha )} - g_{0}^{(\alpha )}\bigr)\biggr]^{T}. $$

Then the orthogonal matrix is defined as

$$ P = \frac{\sqrt{2}}{2} \begin{bmatrix} I_{m} & 0 & I_{m} \\ 0 & \sqrt{2} & 0 \\ J_{m} & 0 & - J_{m} \end{bmatrix}. $$

Multiplying both sides of (9) by \(P^{T}\) from the left and by P from the right, we obtain the new forms of \(\tilde{H}^{(m)}\) and \(\tilde{S}^{(m)}\) as

$$ \tilde{H}^{(m)} = P^{T}H^{(m)}P = \begin{bmatrix} E + J_{m}F & \sqrt{2} J_{m}z_{1} & 0 \\ \sqrt{2} z_{1}^{T}J_{m} & v_{N,M} + (d_{ +} + d_{ -} )g_{1}^{(\alpha )} & 0 \\ 0 & 0 & E - J_{m}F \end{bmatrix} $$

and

$$ \tilde{S}^{(m)} = P^{T}S^{(m)}P = \begin{bmatrix} 0 & 0 & G + J_{m}L \\ 0 & 0 & - \sqrt{2} z_{2}^{T}J_{m} \\ G - J_{m}L & \sqrt{2} J_{m}z_{2} & 0 \end{bmatrix}. $$

The Toeplitz linear system (7) can also be constructed as a generalized saddle point problem which has a form similar to (10), where

$$ B = \begin{bmatrix} E + J_{m}F & \sqrt{2} J_{m}z_{1} \\ \sqrt{2} z_{1}^{T}J_{m} & v_{N,M} + (d_{ +} + d_{ -} )g_{1}^{(\alpha )} \end{bmatrix},\qquad C = E - J_{m}F,\qquad W = \begin{bmatrix} G + J_{m}L \\ - \sqrt{2} z_{2}^{T}J_{m} \end{bmatrix}^{T}. $$

In summary, for any positive integer N, the scheme for the FDEs can be transformed into a generalized saddle point problem.

A GPIU method on the transformed saddle point problem

By splitting the new coefficient matrix \(\tilde{M}^{(m)}\) in the linear system (10), we get

$$ \tilde{M}^{(m)} = X^{(m)} - Y^{(m)}, $$
(12)

where

$$ X^{(m)} = \begin{bmatrix} Q_{1} + B & 0 \\ - W + Q_{3} & Q_{2} \end{bmatrix}\quad \text{and}\quad Y^{(m)} = \begin{bmatrix} Q_{1} & - W^{T} \\ Q_{3} & Q_{2} - C \end{bmatrix}, $$

here the definitions of B, C and W are given in the above section, \(Q_{1} \in R^{ \lceil N/2 \rceil \times \lceil N/2 \rceil } \) is a symmetric positive definite matrix, \(Q_{2} \in R^{ \lfloor N/2 \rfloor \times \lfloor N/2 \rfloor } \) is a symmetric positive definite matrix, and \(Q_{3} \in R^{ \lfloor N/2 \rfloor \times \lceil N/2 \rceil } \) is arbitrary, where \(\lceil N/2 \rceil \) and \(\lfloor N/2 \rfloor \) represent the ceiling and the floor of \(N/2\), respectively.

Considering the GPIU method [23, 24], we choose \(Q_{2} = C/\delta \) where \(\delta > 0\), so \(Y^{(m)}\) can be rewritten as

$$ Y^{(m)} = \begin{bmatrix} Q_{1} & - W^{T} \\ Q_{3} & (1 - \delta )Q_{2} \end{bmatrix}. $$

Thus, by the splitting form of (12), the linear system (10) induces the iteration

$$ \begin{bmatrix} Q_{1} + B & 0 \\ - W + Q_{3} & Q_{2} \end{bmatrix}\tilde{u}_{n + 1}^{(m)} = \begin{bmatrix} Q_{1} & - W^{T} \\ Q_{3} & (1 - \delta )Q_{2} \end{bmatrix}\tilde{u}_{n}^{(m)} + \tilde{b}^{(m - 1)}. $$
(13)

The form of the iteration matrix corresponding to (13) is as follows:

$$ \boldsymbol{W} = \begin{bmatrix} Q_{1} + B & 0 \\ - W + Q_{3} & Q_{2} \end{bmatrix}^{ - 1} \begin{bmatrix} Q_{1} & - W^{T} \\ Q_{3} & (1 - \delta )Q_{2} \end{bmatrix}. $$
(14)

Denote by \(\rho (\boldsymbol{W})\) the spectral radius of the iteration matrix W; then the iteration (13) converges if and only if \(\rho (\boldsymbol{W}) < 1\).
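For a concrete check of this criterion, the sketch below (a small synthetic example, not the FDE matrices; it requires NumPy and uses the illustrative choices \(Q_{1} = I\), \(Q_{2} = C/\delta \), \(Q_{3} = 0\), \(\delta = 1\)) forms the iteration matrix of (14) and verifies \(\rho (\boldsymbol{W}) < 1\):

```python
import numpy as np

# a small synthetic saddle point instance: B, C symmetric positive definite,
# W a mildly scaled arbitrary block; Q1 = I, Q2 = C/delta, Q3 = 0 are
# illustrative choices
rng = np.random.default_rng(7)
p = q = 6
A1 = rng.standard_normal((p, p))
A2 = rng.standard_normal((q, q))
B = A1 @ A1.T + p * np.eye(p)
C = A2 @ A2.T + q * np.eye(q)
W = 0.1 * rng.standard_normal((q, p))
delta = 1.0
Q1, Q2, Q3 = np.eye(p), C / delta, np.zeros((q, p))

X = np.block([[Q1 + B, np.zeros((p, q))], [-W + Q3, Q2]])
Y = np.block([[Q1, -W.T], [Q3, (1.0 - delta) * Q2]])
Wmat = np.linalg.solve(X, Y)           # the iteration matrix of (14)

eigs = np.linalg.eigvals(Wmat)
assert max(abs(eigs)) < 1.0            # rho(W) < 1: the iteration converges
assert all(abs(l - 1.0) > 1e-8 for l in eigs)  # lambda != 1 (Lemma 4.1 below)
```

Here the eigenvalue bounds match the analysis that follows: with these choices, every eigenvalue stays strictly inside the unit disk and 1 is never an eigenvalue.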

Let λ be an eigenvalue of W and \([\tilde{u}_{x}^{H}, \tilde{u}_{y}^{H}]^{H}\) be the corresponding eigenvector with \(\tilde{u}_{x} \in C^{ \lceil N/2 \rceil } \) and \(\tilde{u}_{y} \in C^{ \lfloor N/2 \rfloor } \), then we obtain

$$ \boldsymbol{W} \begin{bmatrix} \tilde{u}_{x} \\ \tilde{u}_{y} \end{bmatrix} = \lambda \begin{bmatrix} \tilde{u}_{x} \\ \tilde{u}_{y} \end{bmatrix}, $$

or, equivalently,

$$ \textstyle\begin{cases} Q_{1}\tilde{u}_{x} - W^{T}\tilde{u}_{y} = \lambda (Q_{1} + B)\tilde{u}_{x}, \\ Q_{3}\tilde{u}_{x} + (1 - \delta )Q_{2}\tilde{u}_{y} = \lambda ( - W + Q_{3})\tilde{u}_{x} + \lambda Q_{2}\tilde{u}_{y}. \end{cases} $$
(15)

To obtain the convergence condition, the following lemmas and theorem are presented.

Lemma 4.1

Suppose that B, C are symmetric positive definite, \(Q_{1}\) is symmetric positive semidefinite, \(Q_{2} = C/\delta \) is symmetric positive definite where \(\delta > 0\), and \(Q_{3}\) is arbitrary. If λ is an eigenvalue of W defined by (14) and \([\tilde{u}_{x}^{H}, \tilde{u}_{y}^{H}]^{H}\) is the corresponding eigenvector, where \(\tilde{u}_{x} \in C^{ \lceil N/2 \rceil } \) and \(\tilde{u}_{y} \in C^{ \lfloor N/2 \rfloor } \), then

  1. (a)

    \(\lambda \ne 1\),

  2. (b)

    when W has full row rank, \(\tilde{u}_{x}\) is a non-zero vector,

  3. (c)

    when W does not have full row rank,

    1. (i)

      if \(\lambda \ne 1 - \delta \), \(\tilde{u}_{x}\) is a non-zero vector;

    2. (ii)

      if \(\lambda = 1 - \delta \), \(\tilde{u}_{x}\) can be a zero vector, and \(\lambda = 1 - \delta \) is an eigenvalue of W of multiplicity at least \(\lfloor N/2 \rfloor - r(W)\), where \(r(W)\) represents the rank of the matrix W and \(r(W) = r(W^{T})\).

Proof

(a) If \(\lambda = 1\), then we have

$$ \textstyle\begin{cases} B\tilde{u}_{x} + W^{T}\tilde{u}_{y} = 0, \\ - W\tilde{u}_{x} + \delta Q_{2}\tilde{u}_{y} = 0. \end{cases} $$

We can rewrite the system as

$$ \begin{bmatrix} B & W^{T} \\ - W & C \end{bmatrix} \begin{bmatrix} \tilde{u}_{x} \\ \tilde{u}_{y} \end{bmatrix} = 0. $$

Evidently, the coefficient matrix is non-singular, so \([\tilde{u}_{x}^{H}, \tilde{u}_{y}^{H}]^{H} = 0\), which is contradictory, so \(\lambda \ne 1\).

(b) If \(\tilde{u}_{x} = 0\), then according to (15) we have

$$ \textstyle\begin{cases} W^{T}\tilde{u}_{y} = 0, \\ {} [\lambda - (1 - \delta )]Q_{2}\tilde{u}_{y} = 0. \end{cases} $$
(16)

When W has full row rank, according to the first equation of (16), we have \(\tilde{u}_{y} = 0\). This is a contradiction.

(c) When W does not have full row rank, let \(\tilde{u}_{x} = 0\).

(i) If \(\lambda \ne 1 - \delta \), according to the second equation of (16) and since \(Q_{2}\) is symmetric positive definite, we obtain \(\tilde{u}_{y} = 0\), which is a contradiction.

(ii) If \(\lambda = 1 - \delta \), a fundamental set of solutions of the first equation of (16) for \(\tilde{u}_{y}\) consists of \(\lfloor N/2 \rfloor - r(W)\) non-zero vectors, i.e., \(\lambda = 1 - \delta \) is an eigenvalue of multiplicity at least \(\lfloor N/2 \rfloor - r(W)\), where \(r(W)\) represents the rank of the matrix W and \(r(W) = r(W^{T})\). □

Lemma 4.2

Assume that B, C are symmetric positive definite, \(Q_{1}\) is symmetric positive semidefinite, \(Q_{2} = C/\delta \) is symmetric positive definite where \(\delta > 0\), and \(Q_{3}\) is arbitrary. Let λ be an eigenvalue of W and \([\tilde{u}_{x}^{H}, \tilde{u}_{y}^{H}]^{H}\) be the corresponding eigenvector with \(\tilde{u}_{x} \in C^{ \lceil N/2 \rceil } \) and \(\tilde{u}_{y} \in C^{ \lfloor N/2 \rfloor } \). If \(\tilde{u}_{y} = 0\),

  1. (a)

    when W has full row rank, then \(0 \le \lambda < 1\),

  2. (b)

    when W does not have full row rank, then \(\vert \lambda \vert < 1\) if and only if \(0 < \delta < 2\).

Proof

(a) If \(\tilde{u}_{y} = 0\), from (15), we obtain

$$ \textstyle\begin{cases} Q_{1}\tilde{u}_{x} - \lambda (Q_{1} + B)\tilde{u}_{x} = 0, \\ Q_{3}\tilde{u}_{x} - \lambda ( - W + Q_{3})\tilde{u}_{x} = 0. \end{cases} $$
(17)

When W has full row rank, since \(\lambda \ne 1\) and \(\tilde{u}_{x} \ne 0\) by Lemma 4.1, we denote

$$ \alpha = \frac{\tilde{u}_{x}^{H}Q_{1}\tilde{u}_{x}}{\tilde{u}_{x}^{H}\tilde{u}_{x}} \ge 0,\qquad \beta = \frac{\tilde{u}_{x}^{H}B\tilde{u}_{x}}{\tilde{u}_{x}^{H}\tilde{u}_{x}} > 0. $$

Multiplying both sides of the first equation in (17) by \(\frac{\tilde{u}_{x}^{H}}{\tilde{u}_{x}^{H}\tilde{u}_{x}}\) from the left, we get

$$ 0 \le \lambda = \frac{\alpha }{\alpha + \beta } < 1. $$

(b) When W does not have full row rank, if \(\lambda \ne 1 - \delta \), then \(\tilde{u}_{x} \ne 0\) from Lemma 4.1. As \(\lambda \ne 1\), in the same way as in (a), we get \(0 \le \lambda < 1\).

If \(\lambda = 1 - \delta \), we rewrite the system (17) as

$$ \textstyle\begin{cases} [\delta Q_{1} - (1 - \delta )B]\tilde{u}_{x} = 0, \\{} [\delta Q_{3} + (1 - \delta )W]\tilde{u}_{x} = 0. \end{cases} $$

Thus, \(\tilde{u}_{x} \in \operatorname{null}[\delta Q_{1} - (1 - \delta )B]\) and \(\tilde{u}_{x} \in \operatorname{null}[\delta Q_{3} + (1 - \delta )W]\), where \(\operatorname{null}[ \cdot ]\) represents the null space of the corresponding matrix. To guarantee that \(\vert \lambda \vert < 1\), we need \(\vert 1 - \delta \vert < 1\), i.e., \(0 < \delta < 2\).

Moreover, we get \(\vert \lambda \vert < 1\) if and only if \(0 < \delta < 2\). □

Lemma 4.3

([23, 27])

If ϕ and φ are real numbers, then the necessary and sufficient condition for the roots of the real quadratic equation \(x^{2} + \varphi x + \phi = 0\) to satisfy \(\vert x \vert < 1\) is that \(\vert \phi \vert < 1\) and \(\vert \varphi \vert < 1 + \phi \).

In order to ensure the convergence of the iteration method (13), the following theorem gives the necessary and sufficient conditions.

Theorem 4.1

Suppose that B, C are symmetric positive definite, \(Q_{1}\) is symmetric positive semidefinite, \(Q_{2} = C/\delta \) is symmetric positive definite where \(\delta > 0\), and \(Q_{3}\) is such that \(W^{T}Q_{2}^{ - 1}Q_{3}\) is symmetric. Then the iteration method (13) is convergent if and only if

$$ 0 < \delta < 2\quad \textit{and}\quad (\delta - 2)\alpha + \frac{\delta - 2}{2}\beta + \frac{1}{2}\gamma < \tau < \delta \alpha + \beta , $$

where

$$ \begin{gathered} \alpha = \frac{\tilde{u}_{x}^{H}Q_{1}\tilde{u}_{x}}{\tilde{u}_{x}^{H}\tilde{u}_{x}} \ge 0,\qquad \beta = \frac{\tilde{u}_{x}^{H}B\tilde{u}_{x}}{\tilde{u}_{x}^{H}\tilde{u}_{x}} > 0,\\ \gamma = \frac{\tilde{u}_{x}^{H}W^{T}Q_{2}^{ - 1}W\tilde{u}_{x}}{\tilde{u}_{x}^{H}\tilde{u}_{x}} \ge 0,\qquad \tau = \frac{\tilde{u}_{x}^{H}W^{T}Q_{2}^{ - 1}Q_{3}\tilde{u}_{x}}{\tilde{u}_{x}^{H}\tilde{u}_{x}}, \end{gathered} $$

and \([\tilde{u}_{x}^{H}, \tilde{u}_{y}^{H}]^{H}\) is the corresponding eigenvector, where \(\tilde{u}_{x} \in C^{ \lceil N/2 \rceil } \) and \(\tilde{u}_{y} \in C^{ \lfloor N/2 \rfloor } \).

Proof

Let λ be an eigenvalue of the iteration matrix W and \([\tilde{u}_{x}^{H}, \tilde{u}_{y}^{H}]^{H}\) be its corresponding eigenvector with \(\tilde{u}_{x} \in C^{ \lceil N/2 \rceil } \) and \(\tilde{u}_{y} \in C^{ \lfloor N/2 \rfloor } \). By Lemma 4.1, we know \(\lambda \ne 1\).

If \(\lambda = 1 - \delta \), then (15) results in

$$ \textstyle\begin{cases} [\delta Q_{1} - (1 - \delta )B]\tilde{u}_{x} = W^{T}\tilde{u}_{y}, \\{} [(1 - \delta )W + \delta Q_{3}]\tilde{u}_{x} = 0. \end{cases} $$
(18)

Equivalently, \(\tilde{u}_{x} \in \operatorname{null}[(1 - \delta )W + \delta Q_{3}]\) and, when W has full row rank, \(\tilde{u}_{y} = (WW^{T})^{ - 1}[\delta WQ_{1} - (1 - \delta )WB]\tilde{u}_{x}\), where \(\operatorname{null}[ \cdot ]\) represents the null space of the corresponding matrix. Then \(\lambda = 1 - \delta \) is an eigenvalue of the iteration matrix W.

When W does not have full row rank, \(\tilde{u}_{x} \in \operatorname{null}[(1 - \delta )W + \delta Q_{3}]\), and there exist \(\lfloor N/2 \rfloor - r(W)\) non-zero vectors \(\tilde{u}_{y}\) corresponding to \(\lambda = 1 - \delta \), so \(\lambda = 1 - \delta \) is an eigenvalue of the iteration matrix W of multiplicity at least \(\lfloor N/2 \rfloor - r(W)\). In particular, the case where \(\tilde{u}_{x}\) is the zero vector has been treated in Lemma 4.1.

If \(\lambda \ne 1 - \delta \), we have \(\tilde{u}_{x} \ne 0\) from Lemma 4.1, and (15) can be rewritten as

$$ \textstyle\begin{cases} Q_{1}\tilde{u}_{x} - \lambda (Q_{1} + B)\tilde{u}_{x} = W^{T}\tilde{u}_{y}, \\ \tilde{u}_{y} = \frac{1}{(1 - \delta ) - \lambda } Q_{2}^{ - 1}[\lambda ( - W + Q_{3}) - Q_{3}]\tilde{u}_{x}. \end{cases} $$
(19)

Eliminating \(\tilde{u}_{y}\) in (19), we obtain

$$ \bigl[(1 - \delta ) - \lambda \bigr] \bigl[Q_{1} - \lambda (Q_{1} + B)\bigr]\tilde{u}_{x} = W^{T}Q_{2}^{ - 1} \bigl[\lambda ( - W + Q_{3}) - Q_{3}\bigr] \tilde{u}_{x}. $$
(20)

Then, multiplying both sides of (20) by \(\frac{\tilde{u}_{x}^{H}}{\tilde{u}_{x}^{H}\tilde{u}_{x}}\) from the left, we obtain

$$ \bigl[(1 - \delta ) - \lambda \bigr] \bigl[\alpha - \lambda (\alpha + \beta ) \bigr] = \lambda ( - \gamma + \tau ) - \tau , $$
(21)

where

$$ \begin{gathered} \alpha = \frac{\tilde{u}_{x}^{H}Q_{1}\tilde{u}_{x}}{\tilde{u}_{x}^{H}\tilde{u}_{x}} \ge 0,\qquad \beta = \frac{\tilde{u}_{x}^{H}B\tilde{u}_{x}}{\tilde{u}_{x}^{H}\tilde{u}_{x}} > 0,\\ \gamma = \frac{\tilde{u}_{x}^{H}W^{T}Q_{2}^{ - 1}W\tilde{u}_{x}}{\tilde{u}_{x}^{H}\tilde{u}_{x}} \ge 0,\qquad \tau = \frac{\tilde{u}_{x}^{H}W^{T}Q_{2}^{ - 1}Q_{3}\tilde{u}_{x}}{\tilde{u}_{x}^{H}\tilde{u}_{x}}. \end{gathered} $$

Then (21) can be modified as

$$ \lambda ^{2} - \frac{(2 - \delta )\alpha + (1 - \delta )\beta - \gamma + \tau }{\alpha + \beta } \lambda + \frac{(1 - \delta )\alpha + \tau }{\alpha + \beta } = 0. $$
(22)

When \(\gamma = 0\), we have \(W\tilde{u}_{x} = 0\). Multiplying the first equation in (19) from the left by \(\frac{\tilde{u}_{x}^{H}}{\tilde{u}_{x}^{H}\tilde{u}_{x}}\) and noting that \(\tilde{u}_{x}^{H}W^{T}\tilde{u}_{y} = (W\tilde{u}_{x})^{H}\tilde{u}_{y} = 0\), we obtain

$$ 0 \le \lambda = \frac{\alpha }{\alpha + \beta } < 1. $$

When \(W\tilde{u}_{x} \ne 0\), we have \(\gamma > 0\). By Lemma 4.3, the roots of the quadratic equation (22) satisfy \(\vert \lambda \vert < 1\) if and only if

$$ \textstyle\begin{cases} \vert 1 - \delta \vert < 1, \\ \vert (1 - \delta )\alpha + \tau \vert < \alpha + \beta , \\ \vert (2 - \delta )\alpha + (1 - \delta )\beta - \gamma + \tau \vert < (2 - \delta )\alpha + \beta + \tau . \end{cases} $$
(23)

Solving the inequalities of (23), we have

$$ \textstyle\begin{cases} 0 < \delta < 2, \\ (\delta - 2)\alpha + \frac{\delta - 2}{2}\beta + \frac{1}{2}\gamma < \tau < \delta \alpha + \beta . \end{cases} $$
(24)

In addition, case (b) of Lemma 4.2, where \(\tilde{u}_{y} = 0\), also satisfies the inequalities (24).

Therefore we obtain (24), using the facts that \(\alpha \ge 0\) and \(\beta > 0\). □
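As a sanity check on this derivation, the equivalence between condition (24) and the root condition \(\vert \lambda \vert < 1\) for the quadratic (22) can be probed numerically. The following sketch is our own hypothetical verification (the sampling ranges and function names are not from the paper); it samples the quantities α ≥ 0, β > 0, γ > 0, τ and 0 < δ < 2 and compares the two criteria:

```python
import numpy as np

def max_abs_root(alpha, beta, gamma, tau, delta):
    """Largest modulus among the roots of the quadratic (22)."""
    b = ((2 - delta) * alpha + (1 - delta) * beta - gamma + tau) / (alpha + beta)
    c = ((1 - delta) * alpha + tau) / (alpha + beta)
    return max(abs(lam) for lam in np.roots([1.0, -b, c]))

def condition_24(alpha, beta, gamma, tau, delta):
    """The convergence condition (24)."""
    lower = (delta - 2) * alpha + 0.5 * (delta - 2) * beta + 0.5 * gamma
    return 0 < delta < 2 and lower < tau < delta * alpha + beta

rng = np.random.default_rng(0)
agree = True
for _ in range(5000):
    a = rng.uniform(0.0, 2.0)       # alpha >= 0
    b_ = rng.uniform(0.1, 2.0)      # beta > 0
    g = rng.uniform(0.1, 2.0)       # gamma > 0 (the case W u_x != 0)
    t = rng.uniform(-3.0, 3.0)
    d = rng.uniform(0.05, 1.95)     # 0 < delta < 2
    m = max_abs_root(a, b_, g, t, d)
    if abs(m - 1.0) < 1e-9:         # skip samples too close to the unit circle
        continue
    agree = agree and ((m < 1.0) == condition_24(a, b_, g, t, d))
print(agree)
```

For instance, with α = β = γ = δ = 1 the condition holds for τ = 0.5 (both roots have modulus 0.5) but fails for τ = 3 (root modulus √1.5 > 1).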

Corollary 4.1

Under the assumptions of Theorem 4.1, the iteration method (13) converges if and only if \(2(2 - \delta )Q_{1} + (2 - \delta )B + 2W^{T}Q_{2}^{ - 1}Q_{3} - W^{T}Q_{2}^{ - 1}W\) and \(\delta Q_{1} + B - W^{T}Q_{2}^{ - 1}Q_{3}\) are positive definite, where \(0 < \delta < 2\).

By choosing different matrices \(Q_{1}\), \(Q_{2}\) and \(Q_{3}\) in (12), we obtain a family of iteration algorithms from the GPIU method for solving the generalized saddle point problem (11). Here, let \(\tilde{u}_{n + 1}^{(m)} = \begin{bmatrix} \tilde{u}_{x + 1}^{(m)} \\ \tilde{u}_{y + 1}^{(m)} \end{bmatrix}\), \(\tilde{u}_{n}^{(m)} = \begin{bmatrix} \tilde{u}_{x}^{(m)} \\ \tilde{u}_{y}^{(m)} \end{bmatrix}\) and \(\tilde{b}^{(m - 1)} = \begin{bmatrix} \tilde{b}_{1}^{(m - 1)} \\ \tilde{b}_{2}^{(m - 1)} \end{bmatrix}\), where \(\tilde{u}_{x + 1}^{(m)}\), \(\tilde{u}_{x}^{(m)}\), \(\tilde{b}_{1}^{(m - 1)} \in C^{ \lceil N/2 \rceil } \), and \(\tilde{u}_{y + 1}^{(m)}\), \(\tilde{u}_{y}^{(m)}\), \(\tilde{b}_{2}^{(m - 1)} \in C^{ \lfloor N/2 \rfloor } \). Then an efficient GPIU algorithm can be presented.

The GPIU Algorithm

If \(Q_{1} = \frac{1}{\omega } W^{T}Q_{2}^{ - 1}W\) (\(\omega > 0\)), \(Q_{2} = \frac{1}{\delta } C\) (\(0 < \delta < 2\)) and \(Q_{3} = sW\), the GPIU method gives the following iteration scheme:

$$ \textstyle\begin{cases} \tilde{u}_{x + 1}^{(m)} = \tilde{u}_{x}^{(m)} + (\frac{1}{\omega } W^{T}Q_{2}^{ - 1}W + B)^{ - 1}(\tilde{b}_{1}^{(m - 1)} - B\tilde{u}_{x}^{(m)} - W^{T}\tilde{u}_{y}^{(m)}), \hfill \\ \tilde{u}_{y + 1}^{(m)} = \tilde{u}_{y}^{(m)} + \delta C^{ - 1}[(1 - s)W\tilde{u}_{x + 1}^{(m)} + sW\tilde{u}_{x}^{(m)} - C\tilde{u}_{y}^{(m)} + \tilde{b}_{2}^{(m - 1)}] \end{cases} $$

where the matrices B, C and W depend on N and are defined in Sect. 4.
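To make the scheme concrete, the following sketch runs the iteration above on a small synthetic generalized saddle point system with blocks \(B\), \(C\) (symmetric positive definite) and \(W\). The matrices here are randomly generated stand-ins, not the Toeplitz-derived blocks of Sect. 4, and the parameter choice ω = 1, δ = 0.99, s = 0.01 is the one used in the experiments below:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 4
# Random stand-ins for the blocks: B and C symmetric positive definite, W small.
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)
C = rng.standard_normal((m, m)); C = C @ C.T + m * np.eye(m)
W = 0.1 * rng.standard_normal((m, n))
b1 = rng.standard_normal(n); b2 = rng.standard_normal(m)

omega, delta, s = 1.0, 0.99, 0.01
Cinv = np.linalg.inv(C)
Q2inv = delta * Cinv                      # Q2 = C / delta
Q1 = (1.0 / omega) * W.T @ Q2inv @ W      # Q1 = (1/omega) W^T Q2^{-1} W
XB = np.linalg.inv(Q1 + B)                # precompute (Q1 + B)^{-1}

x = np.zeros(n); y = np.zeros(m)
for _ in range(500):
    x_new = x + XB @ (b1 - B @ x - W.T @ y)
    y = y + delta * Cinv @ ((1 - s) * (W @ x_new) + s * (W @ x) - C @ y + b2)
    x = x_new

# Residual of the saddle point system [[B, W^T], [-W, C]] [x; y] = [b1; b2]
r1 = b1 - (B @ x + W.T @ y)
r2 = b2 - (C @ y - W @ x)
print(np.linalg.norm(r1), np.linalg.norm(r2))
```

At a fixed point of the two updates, the bracketed corrections vanish, which is exactly the saddle point system; for this well-conditioned toy instance both residual norms fall to machine-precision levels.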

Numerical examples

The FDE problem (1) is first transformed by pre-multiplying the coefficient matrix by \(P^{T}\) and post-multiplying it by P, where P is given in Sect. 3; this converts it into a generalized saddle point problem, which is solved by the algorithm proposed in Sect. 4 based on the GPIU method. In this section, we study the GPIU algorithm with different parameters and compare it with the CGNR method (3). The initial value is the zero vector at each time step, and each iteration process terminates when the current residual satisfies \(\Vert r_{k} \Vert _{2} / \Vert r_{0} \Vert _{2} < 10^{ - 7}\), where \(r_{k}\) is the residual vector of the linear system after k iterations. In the tables below, "N" denotes the number of spatial grid points, "M" the number of time steps, "Error" the difference between the exact solution and the approximate solution in the infinity norm, "CPU(s)" the total CPU time in seconds to solve the whole FDE problem, and "Iter" the average number of iterations needed to solve the FDE problem, i.e.,

$$ \operatorname{Iter} = \frac{1}{M}\sum_{m = 1}^{M} \operatorname{Iter}(m), $$

where \(\operatorname{Iter}(m)\) represents the number of iterations needed to solve (7). Moreover, we approximate the exact solution by the direct computation \(\operatorname{inv}(\tilde{M}^{(m)})\tilde{b}^{(m - 1)}\). All numerical experiments are executed in MATLAB R2016a under Windows 8.1 on a machine with an Intel(R) Core(TM) i5-5257U CPU @ 2.70 GHz.
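The stopping rule and the averaged iteration count can be expressed compactly. The sketch below is a schematic harness of our own: the solver is a crude damped Richardson iteration standing in for the GPIU iteration, and the 2×2 system is a hypothetical placeholder, used only to illustrate the relative-residual criterion and the averaging of Iter(m) over M time steps:

```python
import numpy as np

def relative_residual_solve(A, b, tol=1e-7, max_iter=50000):
    # Zero initial guess at each time step; stop when ||r_k||_2 / ||r_0||_2 < tol.
    x = np.zeros_like(b)
    r = b - A @ x
    r0 = np.linalg.norm(r)
    step = 1.0 / np.linalg.norm(A, 2)   # damped Richardson as a stand-in solver
    for k in range(1, max_iter + 1):
        x = x + step * r
        r = b - A @ x
        if np.linalg.norm(r) / r0 < tol:
            return x, k
    return x, max_iter

# Average iteration count over M time steps: Iter = (1/M) * sum_m Iter(m).
rng = np.random.default_rng(2)
A = np.array([[2.0, 1.0], [1.0, 3.0]])  # small SPD stand-in system
M = 5
counts = [relative_residual_solve(A, rng.standard_normal(2))[1] for _ in range(M)]
iter_avg = sum(counts) / M
print(iter_avg)
```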

Example 1

([17])

Consider the initial-boundary value problem (1) on the spatial domain \([x_{L},x_{R}] = [0,2]\) and time interval \([0,T] = [0,1]\) with the diffusion coefficients \(d_{ +} = 0.6\), \(d_{ -} = 0.5\), and

$$ u(x,0) = \exp \biggl( - \frac{(x - x_{c})^{2}}{2\sigma ^{2}}\biggr),\qquad x_{c} = 1.2, \qquad \sigma = 0.08,\qquad f(x,t) \equiv 0. $$

To ensure that \(v_{N,M}\) is bounded away from 0, let \(\Delta t \approx 2\Delta x^{\alpha } \). Table 1 shows the efficiency of the algorithm in terms of CPU time, Iter and Error. As N and M become large, the computation slows down and the Error degrades, respectively, but the average number of iterations remains stable.

Table 1 The GPIU algorithm for solving Example 1 with different choices of parameters for \(\alpha =1.5\)

Table 2 demonstrates that both the Iter and the CPU(s) of the CGNR method with optimal parameters are much higher than those of the GPIU method. The iteration steps and CPU time of the new method are also clearly less than those of HSS. The table further indicates that the Error for the FDEs is obviously improved over CGNR and HSS.

Table 2 Comparing results to solve Example 1 by the GPIU algorithm with \(\omega =1\), \(\delta =0.99\) and \(s = 0.01\), and the CGNR method for \(\alpha =1.2\), 1.5 and 1.8

Figure 1 shows the convergence rates of GPIU and HSS. To estimate the least number of steps needed to reduce the error to ε times the initial error, we define \(- \ln (\rho (M))\) as the asymptotic convergence rate, where M is the iteration matrix of the method used to solve the equations. The minimum number of iteration steps then satisfies \(k \ge \frac{ - \ln \varepsilon }{ - \ln (\rho (M))}\). In Fig. 1, the X-axis is the size of the matrix, and the Y-axis is the theoretical minimum number of iteration steps. For GPIU, this number increases slowly and almost levels off as N increases, whereas for HSS it increases sharply.
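The bound \(k \ge - \ln \varepsilon / ( - \ln \rho (M))\) is easy to evaluate. The sketch below uses hypothetical spectral radii (not values from the paper) to show how sharply the theoretical minimum step count depends on ρ(M), which is why the flat GPIU curve in Fig. 1 matters:

```python
import math

def min_steps(rho, eps=1e-7):
    """Theoretical minimum iteration count: k >= -ln(eps) / -ln(rho)."""
    return math.ceil(-math.log(eps) / -math.log(rho))

# Hypothetical spectral radii: a nearly constant GPIU-like rho versus a rho
# drifting toward 1, as the HSS curve in Fig. 1 suggests.
for rho in (0.5, 0.9, 0.99, 0.999):
    print(rho, min_steps(rho))
```

For ε = 10⁻⁷, a spectral radius of 0.5 already requires only 24 steps, while 0.99 requires 1604, so a method whose ρ(M) stays bounded well below 1 as N grows keeps its iteration count nearly flat.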

Figure 1 The convergence rate of GPIU and HSS

Example 2

([21])

The spatial domain \([x_{L},x_{R}] = [0,1]\) and the time interval \([0,T] = [0,1]\) with \(d_{ +} = 0.6\), \(d_{ -} = 0.2\), initial condition \(u(x,0) = 4x^{2}(2 - x)^{2}\), and the source term \(f(x,t) = - 32e^{ - t} [ x^{2} + (2 - x)^{2} + 0.125x^{2}(2 - x)^{2} - 2.5(x^{3} + (2 - x)^{3}) + \frac{25}{22}(x^{4} + (2 - x)^{4}) ]\).

Table 3 illustrates the numerical results of Example 2, where the source term \(f(x,t)\) is deliberately chosen to be non-zero and nonlinear. The Iter for the CGNR method is clearly larger, and the Iter of the new method is smaller than that of HSS. Moreover, the CPU time and the error of the GPIU method are much smaller than those of the CGNR and HSS methods. Hence the GPIU algorithm remains highly efficient and accurate in the presence of a nonlinear source term.

Table 3 Comparing results to solve Example 2 by the GPIU algorithm with \(\omega =1\), \(\delta =0.99\) and \(s = 0.01\), and the CGNR method for \(\alpha =1.2\), 1.5 and 1.8

Note. The memory usage for all tests in Tables 1, 2 and 3 is \(O(n)\), where n denotes the order of the coefficient matrix of equations [7].

Concluding remarks

In this paper, a GPIU method is put forward to solve the generalized saddle point problem generated from the fractional diffusion equations with constant coefficients, obtained by pre-multiplying the coefficient matrix by the orthogonal matrix we constructed and post-multiplying it by its transpose. The convergence condition of the GPIU method is then analyzed, and a new GPIU algorithm is given.

The numerical results show that the proposed method is more effective than the CGNR and HSS methods, and its advantages become more obvious as N increases. Compared with these methods, the GPIU method achieves a faster convergence rate both in practice and in theory, and obtains a more accurate solution in a shorter time. Unlike CGNR and HSS, as the matrix order N increases, the number of iteration steps of the GPIU method grows slowly and stays near an extremely low value, while the error improves significantly. The favorable behavior of GPIU also benefits from its storage requirement, which is reduced to \(O(n)\) compared with the CGNR and HSS methods, where n denotes the order of the coefficient matrix of the equations.

For further consideration, more effective stationary methods like the GPIU method will be promoted to solve the FDE problems with constant or even variable diffusion coefficients.

References

  1. Benson, D., Wheatcraft, S.W., Meerschaert, M.M.: Application of a fractional advection-dispersion equation. Water Resour. Res. 36, 1403–1413 (2000)
  2. Benson, D., Wheatcraft, S.W., Meerschaert, M.M.: The fractional-order governing equation of Lévy motion. Water Resour. Res. 36, 1413–1423 (2000)
  3. Carreras, B.A., Lynch, V.E., Zaslavsky, G.M.: Anomalous diffusion and exit time distribution of particle tracers in plasma turbulence models. Phys. Plasmas 8, 5096–5103 (2001)
  4. Bai, J., Feng, X.: Fractional-order anisotropic diffusion for image denoising. IEEE Trans. Image Process. 16, 2492–2502 (2007)
  5. Raberto, M., Scalas, E., Mainardi, F.: Waiting-times and returns in high-frequency financial data: an empirical study. Physica A 314, 749–755 (2002)
  6. Sokolov, I.M., Klafter, J., Blumen, A.: Fractional kinetics. Phys. Today 11, 28–53 (2002)
  7. Shen, S., Liu, F., Anh, V., Turner, I.: The fundamental solution and numerical solution of the Riesz fractional advection-dispersion equation. IMA J. Appl. Math. 73, 850–872 (2008)
  8. Gao, G.H., Sun, Z.Z.: A compact difference scheme for the fractional sub-diffusion equations. J. Comput. Phys. 230, 586–595 (2011)
  9. Liu, F., Zhuang, P., Burrage, K.: Numerical methods and analysis for a class of fractional advection-dispersion models. Comput. Math. Appl. 64, 2990–3007 (2012)
  10. Baeumer, B., Kovács, M., Meerschaert, M.M.: Numerical solutions for fractional reaction-diffusion equations. Comput. Math. Appl. 55, 2212–2226 (2008)
  11. Deng, W.: Finite element method for the space and time fractional Fokker–Planck equation. SIAM J. Numer. Anal. 47, 204–226 (2008)
  12. Tadjeran, C., Meerschaert, M.M., Scheffler, H.P.: A second-order accurate numerical approximation for the fractional diffusion equation. J. Comput. Phys. 213, 205–213 (2006)
  13. Meerschaert, M.M., Tadjeran, C.: Finite difference approximations for two-sided space-fractional partial differential equations. Appl. Numer. Math. 56, 80–90 (2006)
  14. Wang, H., Wang, K., Sircar, T.: A direct \(O(N\log ^{2}N)\) finite difference method for fractional diffusion equations. J. Comput. Phys. 229, 8095–8104 (2010)
  15. Chan, R., Ng, M.: Conjugate gradient methods for Toeplitz systems. SIAM Rev. 38, 427–482 (1996)
  16. Wang, K., Wang, H.: A fast characteristic finite difference method for fractional advection-diffusion equations. Adv. Water Resour. 34, 810–816 (2011)
  17. Lei, S.L., Sun, H.W.: A circulant preconditioner for fractional diffusion equations. J. Comput. Phys. 242, 715–725 (2013)
  18. Chan, R., Strang, G.: Toeplitz equations by conjugate gradients with circulant preconditioner. SIAM J. Sci. Stat. Comput. 10, 104–119 (1989)
  19. Chan, T.: An optimal circulant preconditioner for Toeplitz systems. SIAM J. Sci. Stat. Comput. 9, 766–771 (1988)
  20. Pang, H., Sun, H.: Multigrid method for fractional diffusion equations. J. Comput. Phys. 231, 693–703 (2012)
  21. Bai, Y.Q., Huang, T.Z., Gu, X.M.: Circulant preconditioned iterations for fractional diffusion equations based on Hermitian and skew-Hermitian splittings. Appl. Math. Lett. 48, 14–22 (2015)
  22. Chen, F., Jiang, Y.L.: On HSS and AHSS iteration methods for nonsymmetric positive definite Toeplitz systems. J. Comput. Appl. Math. 234, 2432–2440 (2010)
  23. Zhou, Y.Y., Zhang, G.F.: A generalization of parameterized inexact Uzawa method for generalized saddle point problems. Appl. Math. Comput. 215, 599–607 (2009)
  24. Bai, Z.Z., Wang, Z.Q.: On parameterized inexact Uzawa methods for generalized saddle point problems. Linear Algebra Appl. 428, 2900–2932 (2008)
  25. Podlubny, I.: Fractional Differential Equations. Academic Press, New York (1999)
  26. Quarteroni, A., Sacco, R., Saleri, F.: Numerical Mathematics, 2nd edn. Texts in Applied Mathematics, vol. 37. Springer, Berlin (2007)
  27. Young, D.M.: Iterative Solution of Large Linear Systems. Academic Press, New York (1971)


Acknowledgements

The authors would like to thank the editors and reviewers for their careful reading.

Availability of data and materials

The data are available from the corresponding author upon request.

Authors’ information

Hai-Long Shen is a Lecturer at Northeastern University, Shenyang, China. Xin-Hui Shao is a Professor and Master advisor at Northeastern University. Yu-Han Li is studying for a Master’s degree at Northeastern University.

Funding

This work was funded by the Natural Science Foundation of Liaoning Province (CN) (No. 20170540323); Central University Basic Scientific Research Business Expenses Special Funds (N2005013).

Author information

Affiliations

Authors

Contributions

SHL designed, analyzed and wrote the paper. SXH guided the full text. LYH analyzed the data. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Hai-Long Shen.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Shen, HL., Li, YH. & Shao, XH. A GPIU method for fractional diffusion equations. Adv Differ Equ 2020, 398 (2020). https://doi.org/10.1186/s13662-020-02731-9


Keywords

  • Fractional diffusion equations
  • Generalized saddle point problem
  • Stability
  • Toeplitz linear system
  • The shifted Grünwald formula