

A class of intrinsic parallel difference methods for time-space fractional Black–Scholes equation

Abstract

To solve the fractional Black–Scholes (B–S) equation arising in option pricing problems quickly, in this paper we construct pure alternative segment explicit–implicit (PASE-I) and pure alternative segment implicit–explicit (PASI-E) difference schemes for the time-space fractional B–S equation. These are intrinsic parallel difference schemes built from the classic explicit and classic implicit schemes combined with an alternating segmentation technique. The PASE-I and PASI-E schemes are shown to be unconditionally stable, to converge with second-order spatial accuracy and \((2-\alpha)\)th-order temporal accuracy, and to have unique solutions. The numerical experiments show that the two schemes have obvious parallel computing properties and that the computation time is greatly reduced compared with the Crank–Nicolson (C–N) scheme. The PASE-I and PASI-E intrinsic parallel difference methods are thus efficient for solving the time-space fractional B–S equation.

1 Introduction

As one of the most famous basic equations of financial mathematics, the Black–Scholes (B–S) equation has attracted increasing attention from economists and applied mathematicians, not only because it is the cornerstone of option pricing theory, but also because numerical simulation of the B–S equation has played a significant role in promoting the study of the pricing of many financial derivatives. The traditional B–S model is derived under many restrictive assumptions; to make the theoretical price more consistent with observed market prices, it is necessary to relax these assumptions appropriately, and the modified B–S models are closer to the actual financial market [1, 2].

Based on the fact that stock prices follow fractional stochastic differential equations, some progress has been made in the study of fractional B–S models in recent years. Wyss (2000) derived the time-fractional B–S equation governing European call option [3]. Cartea and Del-Castillo-Negrete (2007) deduced three kinds of space-fractional B–S equations for pricing exotic options in a diffusion market with jumps [4]. With reference to the derivation of the classic B–S model, Jumarie (2008, 2010) used the fractional Taylor formula and Itô lemma to derive a more complete time-fractional B–S equation and time-space-fractional B–S equation [5, 6]. Liang et al. (2010) considered the option price in financial markets as a fractional transmission system and proposed single-parameter and biparameter fractional B–S equations [7]. These fractional B–S models can characterize long-term correlations in behavioral finance, which are more in line with the laws of financial market movement.

At present, the numerical algorithms for solving fractional differential equations mainly include the finite difference method, finite element method, series approximation method, and other methods, such as the spectral method, matrix transformation method, B-spline wavelet operational method, etc. [8, 9].

Fractional calculus has historical dependence and global correlation, so the numerical solution of fractional B–S equations requires a large amount of computation and storage. When simulating actual financial problems, it is difficult to carry out long-time simulations (the exponential increase in the amount of computation over time) or simulations on large computational domains even with high-performance computers [10–13]. Therefore, in the past ten years, fast algorithms for the fractional B–S equation have been a focus of academic research. With the rapid development of multicore and cluster technology, parallel algorithms have become one of the mainstream technologies for improving computing efficiency. The parallel difference method for the fractional B–S equation therefore has basic scientific significance and application value [14, 15].

For integer-order diffusion equations, Evans and Abdullah (1983) proposed the group explicit idea and designed an alternating group explicit (AGE) scheme, which not only ensures the stability of the numerical calculation but also has good parallelism [16]. Implicit schemes generally have good stability, but they are not easy to parallelize. Inspired by the construction of the AGE method, Zhang (1991) proposed using the Saul’yev asymmetric schemes to construct segmentwise implicit schemes and established a class of alternating segment explicit–implicit (ASE-I) parallel difference methods and alternating segment Crank–Nicolson (ASC-N) parallel difference methods, which achieve both stability and parallelism [17]. Zhou (1997) called the mixed explicit–implicit schemes for general parabolic equations difference schemes with intrinsic parallelism; he studied the existence, uniqueness, convergence, and stability of these schemes and established the basic theory of parallel difference methods for parabolic equations. This theory has since been applied to the numerical solution of many integer-order evolution equations [18]. Zhu and Yuan (2003) presented the ASE-I scheme and the ASC-N scheme for the dispersive equation and compared the numerical solutions and accuracy of the schemes in numerical experiments [19]. Wang (2006) proposed an ASC-N difference method for the third-order KdV equation and proved the linear and absolute stability of the scheme [20]. Yuan et al. (2007) constructed a parallel difference scheme with second-order spatial accuracy and unconditional stability for nonlinear parabolic equations [21].

Existing numerical methods for integer-order partial differential equations cannot be applied directly to fractional partial differential equations, and the numerical analysis they require is often completely different. In recent years some progress has been made on parallel algorithms for fractional partial differential equations; most of these parallel algorithms treat the resulting algebraic systems from the point of view of numerical algebra. Wang et al. (2010) proposed a fast algorithm for the space-fractional diffusion equation based on the special structure of the established difference scheme; this was an early attempt to apply parallel computing to fractional difference equations [22]. Diethelm (2011) implemented a parallelized version of the fractional second-order Adams–Bashforth–Moulton method and discussed the accuracy of the parallel algorithm [23]. Gong et al. (2013) carried out parallel computation for the explicit difference scheme of the Riesz-type space-fractional reaction–diffusion equation; the core of the parallelization is the parallel computation of matrix–vector products and vector additions [24]. Sweilam et al. (2014) constructed a class of parallel C–N schemes for time-fractional parabolic equations; the core of the method is the preconditioned conjugate gradient method for solving the algebraic equations [25]. Lu et al. (2015) established a difference scheme for the time-fractional subdiffusion equation and proposed a fast algorithm based on its special structure [26]. Wang et al. (2016) studied a parallel algorithm for the implicit difference scheme of the Caputo fractional reaction–diffusion equation, and the computational efficiency was improved compared with the original scheme [27].

For a long time, most of the parallel schemes that have been constructed are either only conditionally stable or unconditionally stable but with only first-order spatial accuracy. To obtain parallel schemes with higher accuracy and less restrictive stability conditions, one has to go beyond the purely numerical-algebraic approach and explore the parallelization of the traditional difference schemes themselves. At present there are few studies on parallel schemes for the time-space fractional B–S equation. Based on the time-space-fractional B–S equation derived by Jumarie, in this paper we construct a class of intrinsic parallel difference schemes, the pure alternative segment explicit–implicit (PASE-I) scheme and the pure alternative segment implicit–explicit (PASI-E) scheme, and prove that these parallel difference methods solve the time-space-fractional B–S equation efficiently.

2 The intrinsic parallel difference schemes of time-space-fractional B–S equation

2.1 Time-space-fractional B–S equation

We consider the following time-space-fractional B–S equation [6]:

$$ \textstyle\begin{cases} P_{t}^{(\alpha)} = (\frac{r}{\Gamma(2-\alpha)}P-r{{S}^{\alpha }}P_{S}^{(\alpha)}){{t}^{1-\alpha}}-\frac{{{\Gamma}^{3}}(1+\alpha )}{\Gamma(1+2\alpha)}{{\Gamma}^{2}}(2-\alpha){{\sigma }^{2}}{{S}^{2\alpha}}P_{S}^{(2\alpha)}, \\ P(S,T) = \max \{ S-K,0 \}, \end{cases} $$
(1)

where \(t>0\), \(0<\alpha\le1\), \(P(S,t)\) denotes the option price, S is the stock price, r denotes the risk-free rate, σ is the volatility, \(P_{t}^{(\alpha)}(S,t)\), \(P_{S}^{(\alpha)}\), and \(P_{S}^{(2\alpha)}\) are the Riemann–Liouville fractional derivatives, and K is the exercise (strike) price; the terminal condition states that the option value at expiration T equals its payoff.

The corresponding boundary conditions are as follows:

$$P(0,t)=0,\qquad \lim_{S \to\infty}{P(S,t)}=S-K{{e}^{-r(T-t)}}. $$

The boundary conditions mean that once the stock price S reaches zero it never recovers, so the option is worthless there. When S is large enough, exercise of the call option is practically certain, and the option value approaches the stock price minus the discounted strike price \(K{{e}^{-r(T-t)}}\).

The solution region is as follows:

$$\Sigma= \{ 0\le S\le\infty, 0\le t\le T \}. $$

With variable substitution we can get

$$S={{e}^{x}};\qquad t=T-\tau;\qquad P(S,t)={{e}^{-r\tau}}V(x, \tau). $$

Equation (1) can be transformed into

$$ \textstyle\begin{cases} V_{\tau}^{(\alpha)}(x,\tau)-(\gamma(\alpha)\frac{\Gamma(1-\alpha )}{\Gamma(1-2\alpha)}{{\sigma}^{2}}+r{{(T-\tau)}^{1-\alpha }}){{\tau}^{1-\alpha}}{{(T-\tau)}^{\alpha-1}}{{V}_{x}}(x,\tau) \\ \quad {}-\gamma(\alpha){{\sigma}^{2}}{{\tau}^{1-\alpha}}{{(T-\tau )}^{\alpha-1}}{{V}_{xx}}(x,\tau)=0, \\ V(x,0)=\max \{ {{e}^{x}}-K,0 \}, \end{cases} $$
(2)

where \(\gamma(\alpha)=\frac{{{\Gamma}^{3}}(1+\alpha){{\Gamma }^{2}}(1-\alpha)}{\Gamma(1+2\alpha)}\).

The solution region is written as

$${{\Sigma}_{0}}=\{-\infty\le x< +\infty, 0\le\tau\le T\}. $$

Usually, the financial institution specifies a sufficiently small value \(N^{-}\) as the lower bound and a sufficiently large value \(N^{+}<\infty\) as the upper bound. Then the problem can be solved in the following finite region:

$${\Sigma}_{1}=\bigl\{ {{N}^{-}}\le x< {{N}^{+}}, 0 \le\tau\le T\bigr\} . $$

Meanwhile, boundary conditions are transformed into the form

$$V\bigl({{N}^{+}},\tau\bigr)={{e}^{{{N}^{+}}+r\tau}}-K,\qquad V \bigl({{N}^{-}},\tau\bigr)=0. $$

2.2 The construction of PASE-I scheme

Time and space steps are \(k=\frac{T}{N}\) and \(h=\frac {{{N}^{+}}-{{N}^{-}}}{M}\), respectively, where M and N are positive integers.

Let

$$\textstyle\begin{cases} {{x}_{i}}={{N}^{-}}+ih,\quad i=0,1,2,\ldots,M, \\ {{\tau}_{n}}=nk,\quad n=0,1,2,\ldots,N. \end{cases} $$

The corresponding initial–boundary conditions are:

$$V_{i}^{0}=\max \bigl\{ {{e}^{{{x}_{i}}}}-K,0 \bigr\} , \qquad V_{{{N}^{-}}}^{n}=0,\qquad V_{{{N}^{+}}}^{n}={{e}^{{{N}^{+}}+r{{\tau}_{n}}}}-K. $$

We can discretize \(V_{\tau}^{(\alpha)}(x,\tau)\) in the following form:

$$ \frac{{{\partial}^{\alpha}}V({{x}_{i}},{{\tau}_{n+1}})}{\partial {{\tau}^{\alpha}}}=\frac{{{k}^{-\alpha}}}{\Gamma(2-\alpha)}\sum _{j=0}^{n}{{{l}_{j}}\bigl[V({{x}_{i}},{{ \tau}_{n+1-j}})-V}({{x}_{i}},{{\tau }_{n-j}})\bigr]+O \bigl({{k}^{2-\alpha}}\bigr), $$
(3)

where \({{l}_{j}}={{(j+1)}^{1-\alpha}}-{{j}^{1-\alpha}}\).
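For readers who wish to reproduce the discretization, the following minimal Python sketch (our illustration, not the authors' code; the function names are ours) evaluates the weights \(l_{j}\) and the sum in (3) at a single spatial point:

```python
# Minimal sketch of the time discretization (3); the names l1_weights and
# caputo_l1 are ours, and only standard NumPy is assumed.
from math import gamma
import numpy as np

def l1_weights(n, alpha):
    """Return l_0, ..., l_n with l_j = (j+1)^(1-alpha) - j^(1-alpha)."""
    j = np.arange(n + 1)
    return (j + 1.0) ** (1.0 - alpha) - j ** (1.0 - alpha)

def caputo_l1(V_hist, k, alpha):
    """Approximate the alpha-order time derivative at tau_{n+1}, as in (3).

    V_hist holds V(x_i, tau_0), ..., V(x_i, tau_{n+1}); k is the time step.
    """
    V_hist = np.asarray(V_hist, dtype=float)
    n = V_hist.size - 2
    l = l1_weights(n, alpha)
    j = np.arange(n + 1)
    increments = V_hist[n + 1 - j] - V_hist[n - j]   # V^{n+1-j} - V^{n-j}
    return k ** (-alpha) / gamma(2.0 - alpha) * np.dot(l, increments)
```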

The classic explicit and implicit schemes for equation (2) are given as follows:

(1) The classic explicit scheme

$$\begin{aligned}& \frac{{{k}^{-\alpha}}}{\Gamma ( 2-\alpha )}\sum_{j=0}^{n}{{{l}_{j}} \bigl( V_{i}^{n+1-j}-V_{i}^{n-j} \bigr)} \\& \quad =\bigl[ab+r{{(T-nk)}^{1-\alpha}}\bigr]{{(nk)}^{1-\alpha}} {{(T-nk)}^{\alpha -1}}\frac{V_{i+1}^{n}-V_{i-1}^{n}}{2h} \\& \qquad {}+a{{(nk)}^{1-\alpha}} {{(T-nk)}^{\alpha-1}}\frac {V_{i+1}^{n}-2V_{i}^{n}+V_{i-1}^{n}}{{{h}^{2}}}. \end{aligned}$$

The simplified form is

$$ V_{i}^{n+1}=a_{n}'V_{i-1}^{n}+ \bigl({{w}_{1}}-b_{n}'\bigr)V_{i}^{n}+c_{n}'V_{i+1}^{n}+ \sum_{j=1}^{n-1}{({{l}_{j}}-{{l}_{j+1}})}V_{i}^{n-j}+{{l}_{n}}V_{i}^{0}. $$
(4)

(2) The classic implicit scheme

$$\begin{aligned}& \frac{{{k}^{-\alpha}}}{\Gamma ( 2-\alpha )}\sum_{j=0}^{n}{{{l}_{j}} \bigl( V_{i}^{n+1-j}-V_{i}^{n-j} \bigr)} \\& \quad = \bigl[ab+r{{(T-nk-k)}^{1-\alpha}}\bigr]{{(nk+k)}^{1-\alpha}} {{(T-nk-k)}^{\alpha-1}}\frac {V_{i+1}^{n+1}-V_{i-1}^{n+1}}{2h} \\& \qquad {}+a{{(nk+k)}^{1-\alpha }} {{(T-nk-k)}^{\alpha-1}} \frac {V_{i+1}^{n+1}-2V_{i}^{n+1}+V_{i-1}^{n+1}}{{{h}^{2}}}. \end{aligned}$$

The simplified form is

$$ -a_{n}'V_{i-1}^{n+1}+ \bigl(1+b_{n}'\bigr)V_{i}^{n+1}-c_{n}'V_{i+1}^{n+1}=(1-{{l}_{1}})V_{i}^{n}+ \sum_{j=1}^{n-1}{({{l}_{j}}-{{l}_{j+1}})}V_{i}^{n-j}+{{l}_{n}}V_{i}^{0}, $$
(5)

where

$$\begin{aligned}& a=\gamma(\alpha){{\sigma}^{2}},\qquad b=\frac{\Gamma(1-\alpha)}{\Gamma(1-2\alpha)},\qquad {{m}_{1}}=\Gamma(2-\alpha){{k}^{\alpha}}/2h, \\& {{m}_{2}} =\gamma(\alpha){{\sigma}^{2}} \Gamma(2- \alpha){{k}^{\alpha}}/{{h}^{2}}, \qquad {{g}_{n}} ={{(nk)}^{1-\alpha}} {{(T-nk)}^{\alpha-1}}, \\& {{q}_{n}}={{(nk)}^{1-\alpha}},\qquad {{a}'_{n}} =-{{m}_{1}}(ab{{g}_{n}}+r{{q}_{n}})+{{m}_{2}} {{g}_{n}},\qquad {{b}'_{n}}=2{{m}_{2}} {{g}_{n}}, \\& {{c}'_{n}}={{m}_{1}}(ab{{g}_{n}}+r{{q}_{n}})+{{m}_{2}} {{g}_{n}},\qquad {{w}_{j}}={{l}_{j-1}}-{{l}_{j}}\quad (j=1,2,\ldots, N). \end{aligned}$$
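As an illustration (ours, with the symbols taken directly from the definitions above), the coefficients entering (4) and (5) can be evaluated as follows; note that \(\Gamma(1-2\alpha)\) is singular at \(\alpha=1/2\), where b is understood in the limiting sense:

```python
# Sketch (ours) of the coefficient definitions below (5); math.gamma accepts the
# negative non-integer argument 1 - 2*alpha needed for b when alpha > 1/2,
# but raises an error exactly at alpha = 1/2.
from math import gamma

def pase_coefficients(n, k, h, alpha, r, sigma, T):
    gamma_alpha = gamma(1 + alpha) ** 3 * gamma(1 - alpha) ** 2 / gamma(1 + 2 * alpha)
    a = gamma_alpha * sigma ** 2                    # a = gamma(alpha) * sigma^2
    b = gamma(1 - alpha) / gamma(1 - 2 * alpha)     # singular at alpha = 1/2
    m1 = gamma(2 - alpha) * k ** alpha / (2 * h)
    m2 = a * gamma(2 - alpha) * k ** alpha / h ** 2
    g_n = (n * k) ** (1 - alpha) * (T - n * k) ** (alpha - 1)
    q_n = (n * k) ** (1 - alpha)
    a_n = -m1 * (a * b * g_n + r * q_n) + m2 * g_n  # a'_n
    b_n = 2 * m2 * g_n                              # b'_n
    c_n =  m1 * (a * b * g_n + r * q_n) + m2 * g_n  # c'_n
    return a_n, b_n, c_n
```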

According to the serial explicit–implicit (E–I) and implicit–explicit (I–E) difference methods, we design the PASE-I scheme, which not only guarantees the stability of the numerical computation, but also has good parallel properties. The specific approach is as follows.

Let \(M-1=QL\), where Q and L are positive integers, Q is odd, and \(Q\ge{3}\), \(L\ge3\). The grid points at the same time layer are divided into Q segments, denoted in order \({{S}_{{1}}},{{S}_{2}},\ldots,{{S}_{Q}}\). On odd time layers the segments are arranged from left to right in the pattern "classic explicit–classic implicit–classic explicit–…"; on even time layers the rule of calculation becomes "classic implicit–classic explicit–classic implicit–…". See Fig. 1 for details, where the two markers distinguish the points computed with the classic explicit scheme from those (marked □) computed with the classic implicit scheme; a small code sketch of this segmentation pattern is given after Fig. 1. The solution of each implicit segment depends only on the already computed first or last point of the adjacent explicit segments.

Figure 1: Schematic diagram of PASE-I scheme
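The following short sketch (our simplified per-point marking; the blocks in scheme (6) additionally involve one neighbouring point on each side of an implicit segment) shows which interior points are treated implicitly on a given time layer:

```python
# Sketch (ours) of the alternating segment pattern: M-1 = Q*L interior points,
# Q odd; on odd layers the pattern is explicit-implicit-explicit-..., and on
# even layers the roles are swapped.
import numpy as np

def implicit_mask(M, Q, L, odd_layer):
    """Boolean mask over the M-1 interior points: True where the implicit scheme is used."""
    assert M - 1 == Q * L and Q % 2 == 1
    mask = np.zeros(M - 1, dtype=bool)
    for s in range(Q):
        implicit = (s % 2 == 1) if odd_layer else (s % 2 == 0)
        mask[s * L:(s + 1) * L] = implicit
    return mask

# Example: M = 16, Q = 3, L = 5 gives the odd-layer pattern E E E E E  I I I I I  E E E E E.
print(implicit_mask(16, 3, 5, odd_layer=True).astype(int))
```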

Above all, the PASE-I scheme of equation (2) can be constructed as follows:

$$ \textstyle\begin{cases} (I+{{G}_{1}}){{V}^{n+1}}=({{w}_{1}}I-{{G}_{2}}){{V}^{n}}+{{q}^{n}} \\ \hphantom{(I+{{G}_{1}}){{V}^{n+1}}={}}{}+\sum_{j=1}^{n-1}{{{w}_{j+1}}{{V}^{n-j}}}+{{l}_{n}}{{V}^{0}}, \\ (I+{{G}_{2}}){{V}^{n+2}}=({{w}_{1}}I-{{G}_{1}}){{V}^{n+1}}+{{q}^{n+2}} \\ \hphantom{(I+{{G}_{2}}){{V}^{n+2}}={}}{}+\sum_{j=1}^{n}{{{w}_{j+1}}{{V}^{n+1-j}}}+{{l}_{n+1}}{{V}^{0}}, \end{cases}\displaystyle \quad n=0,2,4,\ldots, $$
(6)

where \({{w}_{j}}={{l}_{j-1}}-{{l}_{j}}\) (\(j=1,2,\ldots, N\)), \({{q}^{n}}={{ ( a_{n}'V_{0}^{n},0,\ldots,0,c_{n}'V_{M}^{n} )}^{T}}\) (\(n=0,1,\ldots,N\)), \({{V}^{k}}={{ ( V_{1}^{k},V_{2}^{k},\ldots,V_{M-1}^{k} )}^{T}}\) (\(k=0,1,\ldots,N\)), \({{G}_{1}}\) and \({{G}_{2}}\) are \((M-1)\times(M-1)\) matrices, \({{Q}_{L-1}}\) is the \((L-1)\)th-order zero matrix, and \({{Q}_{L-2}}\) is the \((L-2)\)th-order zero matrix. The details are as follows:

$$\begin{aligned}& {{G}_{1}}=\left ( \begin{matrix} {{Q}_{L-1}} & {} & {} & {} & {} & {} & {} & {} \\ {} & {{P}_{L+2}} & {} & {} & {} & {} & {} & {} \\ {} & {} & {{Q}_{L-2}} & {} & {} & {} & {} & {} \\ {} & {} & {} & {{P}_{L+2}} & {} & {} & {} & {} \\ {} & {} & {} & {} & \ddots & {} & {} & {} \\ {} & {} & {} & {} & {} & {{Q}_{L-2}} & {} & {} \\ {} & {} & {} & {} & {} & {} & {{P}_{L+2}} & {} \\ {} & {} & {} & {} & {} & {} & {} & {{Q}_{L-1}} \end{matrix} \right )_{(M-1)\times(M-1)}, \\& {{G}_{2}}=\left ( \begin{matrix} {{{\bar{P}}}_{L+1}} & {} & {} & {} & {} & {} & {} & {} \\ {} & {{Q}_{L-2}} & {} & {} & {} & {} & {} & {} \\ {} & {} & {{P}_{L+2}} & {} & {} & {} & {} & {} \\ {} & {} & {} & {{Q}_{L-2}} & {} & {} & {} & {} \\ {} & {} & {} & {} & \ddots & {} & {} & {} \\ {} & {} & {} & {} & {} & {{P}_{L+2}} & {} & {} \\ {} & {} & {} & {} & {} & {} & {{Q}_{L-2}} & {} \\ {} & {} & {} & {} & {} & {} & {} & {{{\tilde{P}}}_{L+1}} \end{matrix} \right )_{(M-1)\times(M-1)}, \\& {{P}_{L+2}}=\left ( \begin{matrix} 0 & {} & {} & {} & {} \\ -a_{n}' & b_{n}' & -c_{n}' & {} & {} \\ {} & \ddots & \ddots & \ddots & {} \\ {} & {} & -a_{n}' & b_{n}' & -c_{n}' \\ {} & {} & {} & {} & 0 \end{matrix} \right )_{(L+2)\times(L+2)}, \\& {{\bar{P}}_{L+1}}=\left ( \begin{matrix} b_{n}' & -c_{n}' & {} & {} & {} \\ -a_{n}' & b_{n}' & -c_{n}' & {} & {} \\ {} & \ddots & \ddots & \ddots & {} \\ {} & {} & -a_{n}' & b_{n}' & -c_{n}' \\ {} & {} & {} & {} & 0 \end{matrix} \right )_{(L-1)\times(L-1)}, \\& {{\tilde{P}}_{L+1}}=\left ( \begin{matrix} 0 & {} & {} & {} & {} \\ -a_{n}' & b_{n}' & -c_{n}' & {} & {} \\ {} & \ddots & \ddots & \ddots & {} \\ {} & {} & -a_{n}' & b_{n}' & -c_{n}' \\ {} & {} & {} & -a_{n}' & b_{n}' \end{matrix} \right )_{(L+1)\times(L+1)}. \end{aligned}$$
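To make the structure of scheme (6) concrete, here is a simplified serial sketch (ours) of how the two relations in (6) are advanced for an even level \(n\ge2\); the start-up levels are handled analogously. In an actual parallel implementation the dense solve is replaced by independent small tridiagonal (Thomas-algorithm) solves, one per implicit segment, which is the source of the parallelism:

```python
# Sketch (ours) of PASE-I time stepping. G1 and G2 are assembled elsewhere from the
# block structure above, q[n] is the boundary vector q^n, w[j] holds w_j and l[j]
# holds l_j (index 0 unused for w), and V is the list of solution vectors V^0, V^1, ...
import numpy as np

def history(V, w, l, n):
    """The terms sum_{j=1}^{n-1} w_{j+1} V^{n-j} + l_n V^0 appearing in (6)."""
    acc = l[n] * V[0]
    for j in range(1, n):
        acc = acc + w[j + 1] * V[n - j]
    return acc

def pase_i_step_pair(V, G1, G2, q, w, l, n):
    """Advance from an even level n >= 2 to n + 2 following the two relations of (6)."""
    m = G1.shape[0]
    I = np.eye(m)
    rhs1 = (w[1] * I - G2) @ V[n] + q[n] + history(V, w, l, n)
    V.append(np.linalg.solve(I + G1, rhs1))       # in parallel: one small solve per implicit segment
    rhs2 = (w[1] * I - G1) @ V[n + 1] + q[n + 2] + history(V, w, l, n + 1)
    V.append(np.linalg.solve(I + G2, rhs2))
```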

3 The theoretical analysis of PASE-I difference scheme

3.1 Existence and uniqueness of solutions to PASE-I difference scheme

Lemma 1

(Kellogg [28])

Let \(\rho>0\), and let C be a nonnegative matrix (i.e., \(C + {{C}^{T}}\) is a nonnegative definite matrix). Then \({{ ( I+\rho C )}^{-1}}\) exists, and \({{ \Vert {{ ( I+\rho C )}^{-1}} \Vert }_{2}}\le1\).

Lemma 2

In the PASE-I scheme (6), \({{G}_{1}}\) and \({{G}_{2}}\) are both nonnegative matrices.

Proof

We only need to prove that both \(G_{1}+G_{1}^{T}\) and \(G_{2}+G_{2}^{T}\) are nonnegative definite matrices. We have

$${{P}_{L+2}} + P_{L+2}^{T} = \left [ \begin{matrix} 0 & -a_{n}' & {} & {} & {} \\ -a_{n}' & 2b_{n}' & -a_{n}'-c_{n}' & {} & {} \\ {} & \ddots & \ddots & \ddots & {} \\ {} & {} & -a_{n}'-c_{n}' & 2b_{n}' & -c_{n}' \\ {} & {} & {} & -c_{n}' & 0 \end{matrix} \right ]_{(L+2)\times(L+2)} $$

with \({{a}'_{n}}=-{{m}_{1}}(ab{{g}_{n}}+r{{q}_{n}})+{{m}_{2}}{{g}_{n}}\), \({{b}'_{n}}=2{{m}_{2}}{{g}_{n}}\), \({{c}'_{n}}={{m}_{1}}(ab{{g}_{n}}+r{{q}_{n}})+{{m}_{2}}{{g}_{n}}\), and \(a_n'+c_n' = 2{{m}_{2}}{{g}_{n}}\). Hence \({{P}_{L+2}} + P_{L+2}^{T}\) is a symmetric, diagonally dominant matrix with nonnegative diagonal entries and is therefore nonnegative definite. In the same way, \({{\bar{P}}_{L+1}} + \bar{P}_{L+1}^{T}\) and \({{\tilde{P}}_{L+1}}+\tilde{P}_{L+1}^{T}\) are nonnegative definite, and consequently so are \(G_{1}+G_{1}^{T}\) and \(G_{2}+G_{2}^{T}\). According to Lemma 1, \({{(I+{{G}_{1}})}^{-1}}\) and \({{(I+{{G}_{2}})}^{-1}}\) exist, so the solution of the PASE-I scheme exists and is unique. □

Theorem 1

The PASE-I scheme (6) for time-space-fractional B–S equation has a unique solution.

3.2 Stability of PASE-I difference scheme

Lemma 3

Let D be a nonnegative definite matrix (i.e., \(D + {{{D}}^{T}}\) is a nonnegative definite matrix). Then \({{ \Vert ( \delta I-\theta D ){{ ( I+\theta D )}^{-1}} \Vert }_{2}}\le1\) for all \(\theta>0\) and \(0<\delta\le1\).

Proof

We have

$$\bigl\Vert ( \delta I-\theta D ){{ ( I+\theta D )}^{-1}} \bigr\Vert _{2} = \max_{\substack{\rho\in{{R}^{n}}\\ \rho\ne0}} \frac{ ( ( \delta I-\theta D ){{ ( I+\theta D )}^{-1}}\rho, ( \delta I-\theta D ){{ ( I+\theta D )}^{-1}}\rho )}{ ( \rho, \rho )}. $$

Denoting \(\psi = {{ ( I+\theta D )}^{-1}}\rho\), we get

$$\begin{aligned} \bigl\Vert ( \delta I-\theta D ){{ ( I+\theta D )}^{-1}} \bigr\Vert _{2}& =\max_{\substack{\psi\in{{R}^{n}} \\ \psi\ne0}} \frac{ ( ( \delta I-\theta D )\psi, ( \delta I-\theta D )\psi )}{ ( ( I+\theta D )\psi, ( I+\theta D )\psi )} \\ & = \max_{\substack{\psi\in{{R}^{n}} \\ \psi\ne0}} \frac{{{\delta}^{2}} ( \psi, \psi )-2\theta\delta ( D\psi, \psi ) + {{\theta }^{2}} ( D\psi, D\psi )}{ ( \psi, \psi )+2\theta ( D\psi, \psi ) + {{\theta}^{2}} ( D\psi, D\psi )} \\ & \le1. \end{aligned}$$

The last inequality holds since \(( D\psi, \psi )\ge0\) and \(0<\delta\le1\), so the numerator does not exceed the denominator. □
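As a quick numerical sanity check of Lemma 3 (our check, not part of the original analysis), the norm bound can be verified for a random nonnegative definite matrix D:

```python
# Check (ours) of Lemma 3: ||(delta*I - theta*D)(I + theta*D)^{-1}||_2 <= 1
# for D + D^T nonnegative definite, theta > 0 and 0 < delta <= 1.
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
D = B @ B.T                                  # symmetric positive semidefinite
I = np.eye(6)
for theta, delta in [(0.3, 1.0), (2.0, 0.7), (5.0, 0.1)]:
    T = (delta * I - theta * D) @ np.linalg.inv(I + theta * D)
    assert np.linalg.norm(T, 2) <= 1.0 + 1e-12
```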

Lemma 4

([29])

From the properties of the function \(g(x)={{x}^{1-\alpha}}\) (\(x\ge1\)) we obtain the following relations for the weights \({{w}_{j}}={{l}_{j-1}}-{{l}_{j}}\):

$$0< {{w}_{n}}< \cdots< {{w}_{2}}< {{w}_{1}}< 1\quad \textit{and}\quad \sum_{j=1}^{n}{{{w}_{j}}=1-{{l}_{n}}}, \qquad \sum_{j=1}^{\infty}{{{w}_{j}}=1}. $$
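These weight properties are easy to confirm numerically (our check); for instance, for \(\alpha=0.7\):

```python
# Check (ours) of Lemma 4 for alpha = 0.7: positivity, monotone decay, and the
# partial-sum identity of the weights w_j = l_{j-1} - l_j.
import numpy as np

alpha, n = 0.7, 50
j = np.arange(n + 2)
l = (j + 1.0) ** (1 - alpha) - j ** (1 - alpha)   # l_0, ..., l_{n+1}
w = l[:-1] - l[1:]                                # w[j-1] = w_j for j = 1, 2, ...
assert np.all(w > 0) and np.all(np.diff(w) < 0) and w[0] < 1
assert np.isclose(w[:n].sum(), 1.0 - l[n])        # sum_{j=1}^{n} w_j = 1 - l_n
```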

Rewrite the PASE-I scheme in the following form:

$$ \textstyle\begin{cases} (I+{{W}_{1}}){{V}^{n+1}}=({{w}_{1}}I-{{W}_{2}}){{V}^{n}}+{{q}^{n}} \\ \hphantom{(I+{{W}_{1}}){{V}^{n+1}}={}}{}+\sum_{j=1}^{n-1}{{{w}_{j+1}}{{V}^{n-j}}}+{{l}_{n}}{{V}^{0}}, \\ (I+{{W}_{2}}){{V}^{n+2}}=({{w}_{1}}I-{{W}_{1}}){{V}^{n+1}}+{{q}^{n+2}} \\ \hphantom{(I+{{W}_{2}}){{V}^{n+2}}={}}{}+\sum_{j=1}^{n}{{{w}_{j+1}}{{V}^{n+1-j}}}+{{l}_{n+1}}{{V}^{0}}, \end{cases}\displaystyle \quad n=0,2,4, \ldots, $$
(7)

where

$$\begin{aligned}& {{W}_{1}}=\left ( \begin{matrix} {{Q}_{L}} & {} & {} & {} & {} & {} & {} \\ {} & {{P}_{L}} & {} & {} & {} & {} & {} \\ {} & {} & {{Q}_{L}} & {} & {} & {} & {} \\ {} & {} & {} & \ddots & {} & {} & {} \\ {} & {} & {} & {} & {{Q}_{L}} & {} & {} \\ {} & {} & {} & {} & {} & {{P}_{L}} & {} \\ {} & {} & {} & {} & {} & {} & {{Q}_{L}} \end{matrix} \right )_{ ( M-1 )\times ( M-1 )}, \\& {{W}_{2}}=\left ( \begin{matrix} {{P}_{L}} & {} & {} & {} & {} & {} & {} \\ {} & {{Q}_{L}} & {} & {} & {} & {} & {} \\ {} & {} & {{P}_{L}} & {} & {} & {} & {} \\ {} & {} & {} & \ddots & {} & {} & {} \\ {} & {} & {} & {} & {{P}_{L}} & {} & {} \\ {} & {} & {} & {} & {} & {{Q}_{L}} & {} \\ {} & {} & {} & {} & {} & {} & {{P}_{L}} \end{matrix} \right )_{ ( M-1 )\times ( M-1 )}, \\& {{P}_{L}}=\left ( \begin{matrix} b_{n}' & -c_{n}' & {} & {} & {} \\ -a_{n}' & b_{n}' & -c_{n}' & {} & {} \\ {} & \ddots & \ddots & \ddots & {} \\ {} & {} & -a_{n}' & b_{n}' & -c_{n}' \\ {} & {} & {} & -a_{n}' & b_{n}' \end{matrix} \right )_{L\times L}. \end{aligned}$$

Lemma 5

Let \(V_{i}^{n}\) be the solution of the PASE-I scheme, and let \(\tilde{V}_{i}^{n}\) be the solution of the same scheme computed from a perturbed initial value \(\tilde{V}_{i}^{0}\). Denote \(\varepsilon_{i}^{n}=\tilde{V}_{i}^{n}-V_{i}^{n}\) and \({{\varepsilon }^{n}} = (\varepsilon_{0}^{n},\varepsilon_{1}^{n},\ldots ,\varepsilon_{M}^{n})\) for \(0\le n\le N\). Then \({{ \Vert {{\varepsilon}^{n}} \Vert }_{2}}\le{{ \Vert {{\varepsilon}^{0}} \Vert }_{2}}\).

Proof

Applying \(\varepsilon_{i}^{n}=\tilde{V}_{i}^{n}-V_{i}^{n}\) to equation (7), we have:

$$ \textstyle\begin{cases} (I+{{W}_{1}}){{\varepsilon}^{n+1}}=({{w}_{1}}I-{{W}_{2}}){{\varepsilon }^{n}}+\sum_{j=1}^{n-1}{{{w}_{j+1}}{{\varepsilon }^{n-j}}}+{{l}_{n}}{{\varepsilon}^{0}}, \\ (I+{{W}_{2}}){{\varepsilon}^{n+2}}=({{w}_{1}}I-{{W}_{1}}){{\varepsilon }^{n+1}}+\sum_{j=1}^{n}{{{w}_{j+1}}{{\varepsilon }^{n+1-j}}}+{{l}_{n+1}}{{\varepsilon}^{0}}, \end{cases}\displaystyle \quad n=0,2,4,\ldots. $$
(8)

When \(n\ge3\),

$$\begin{aligned}& \begin{aligned} {{\varepsilon}^{n+2}}&={{ ( I+{{W}_{2}} )}^{-1}} ( {{w}_{1}}I-{{W}_{1}} ){{ ( I+{{W}_{1}} )}^{-1}} ( {{w}_{1}}I-{{W}_{2}} ){{\varepsilon}^{n}} \\ &\quad {} +{{ ( I+{{W}_{2}} )}^{-1}} ( {{w}_{1}}I-{{W}_{1}} ){{ ( I+{{W}_{1}} )}^{-1}} \bigl( {{w}_{2}} {{ \varepsilon }^{n-1}}+\cdots+{{w}_{n-1}} {{\varepsilon}^{1}}+{{l}_{n}} {{\varepsilon }^{0}} \bigr) \\ &\quad {} +{{ ( I+{{W}_{2}} )}^{-1}} \bigl( {{w}_{2}} {{\varepsilon }^{n}}+\cdots+{{w}_{n}} {{\varepsilon}^{1}}+{{l}_{n+1}} {{\varepsilon }^{0}} \bigr), \end{aligned} \\& \begin{aligned} {{\varepsilon}^{n+3}}&={{ ( I+{{W}_{1}} )}^{-1}} ( {{w}_{1}}I-{{W}_{2}} ){{ ( I+{{W}_{2}} )}^{-1}} ( {{w}_{1}}I-{{W}_{1}} ){{\varepsilon}^{n+1}} \\ &\quad {} +{{ ( I+{{W}_{1}} )}^{-1}} ( {{w}_{1}}I-{{W}_{2}} ){{ ( I+{{W}_{2}} )}^{-1}} \bigl( {{w}_{2}} {{ \varepsilon }^{n}}+\cdots+{{w}_{n+1}} {{\varepsilon}^{1}}+{{l}_{n+1}} {{\varepsilon }^{0}} \bigr) \\ &\quad {} +{{ ( I+{{W}_{1}} )}^{-1}} \bigl( {{w}_{2}} {{\varepsilon }^{n+1}}+\cdots+{{w}_{n+2}} {{\varepsilon}^{1}}+{{l}_{n+2}} {{\varepsilon }^{0}} \bigr). \end{aligned} \end{aligned}$$

Define the growth matrix \(Z={{ ( I+{{W}_{2}} )}^{-1}} ( {{w}_{1}}I-{{W}_{1}} ){{ ( I+{{W}_{1}} )}^{-1}} ( {{w}_{1}}I-{{W}_{2}} )\) and let \(\tilde{Z}= ( I+{{W}_{2}} )Z{{ ( I+{{W}_{2}} )}^{-1}}\). The matrices \({{W}_{1}}\) and \({{W}_{2}}\) have the same eigenvalues; let λ denote such an eigenvalue. According to Lemma 3,

$$\begin{aligned} \Vert Z \Vert &= \Vert {\tilde{Z}} \Vert = \bigl\Vert ( {{w}_{1}}I-{{W}_{1}} ){{ ( I+{{W}_{1}} )}^{-1}} ( {{w}_{1}}I-{{W}_{2}} ){{ ( I+{{W}_{2}} )}^{-1}} \bigr\Vert \\ &=\max \biggl\{ \biggl\vert {{ \biggl( \frac{{{w}_{1}}-\lambda}{1+\lambda} \biggr)}^{2}} \biggr\vert \biggr\} \le\max \biggl\{ \biggl\vert \frac{{{\max }^{2}} \{ {{w}_{1}},\lambda \}}{{{ ( 1+\lambda )}^{2}}} \biggr\vert \biggr\} \le1. \end{aligned}$$

Using mathematical induction, let us prove that \({{ \Vert {{\varepsilon}^{n}} \Vert }_{2}}\le{{ \Vert {{\varepsilon}^{0}} \Vert }_{2}}\). When \(n=0\), \(( I+{{W}_{1}} ){{\varepsilon}^{1}}= ( I-{{W}_{2}} ){{\varepsilon}^{0}}\),

$${{ \bigl\Vert {{\varepsilon}^{1}} \bigr\Vert }_{2}}={{ \bigl\Vert {{ ( I+{{W}_{1}} )}^{-1}} ( I-{{W}_{2}} ){{\varepsilon }^{0}} \bigr\Vert }_{2}}\le \biggl\vert \frac{1-\lambda }{1+\lambda} \biggr\vert \cdot{{ \bigl\Vert {{\varepsilon}^{0}} \bigr\Vert }_{2}}\le{{ \bigl\Vert {{\varepsilon}^{0}} \bigr\Vert }_{2}}. $$

When \(n=1\), \((I+{{W}_{2}}){{\varepsilon }^{2}}=({{w}_{1}}I-{{W}_{1}}){{\varepsilon }^{1}}+{{l}_{1}}{{\varepsilon}^{0}}\). When \(\max \{ {{w}_{1}},\lambda \}={{w}_{1}}\),

$$\begin{aligned} {{ \bigl\Vert {{\varepsilon}^{2}} \bigr\Vert }_{2}}& \le{{ \bigl\Vert {{ ( I+{{W}_{2}} )}^{-1}} ( {{w}_{1}}I-{{W}_{1}} ){{ ( I+{{W}_{1}} )}^{-1}} ( I-{{W}_{2}} ) \bigr\Vert }_{2}} {{ \bigl\Vert {{\varepsilon}^{1}} \bigr\Vert }_{2}}+{{ \bigl\Vert {{ ( I+{{W}_{2}} )}^{-1}} \bigr\Vert }_{2}} {{ \bigl\Vert {{l}_{1}} {{\varepsilon}^{0}} \bigr\Vert }_{2}} \\ & \le\max \biggl\{ \biggl\vert \frac{{{w}_{1}}-\lambda}{1+\lambda} \biggr\vert \biggr\} {{ \bigl\Vert {{\varepsilon}^{1}} \bigr\Vert }_{2}}+\max \biggl\{ \frac{{{l}_{1}}}{1+\lambda} \biggr\} {{ \bigl\Vert {{\varepsilon}^{0}} \bigr\Vert }_{2}} \\ & \le\max \biggl\{ \biggl\vert \frac{1-\lambda}{1+\lambda} \biggr\vert \biggr\} {{ \bigl\Vert {{\varepsilon}^{0}} \bigr\Vert }_{2}} \\ & \le{{ \bigl\Vert {{\varepsilon}^{0}} \bigr\Vert }_{2}}, \end{aligned}$$

whereas when \(\max \{ {{w}_{1}},\lambda \}=\lambda\),

$$\begin{aligned} \begin{aligned} {{ \bigl\Vert {{\varepsilon}^{2}} \bigr\Vert }_{2}}&\le {{ \bigl\Vert {{ ( I+{{W}_{2}} )}^{-1}} ( {{w}_{1}}I-{{W}_{1}} ){{ ( I+{{W}_{1}} )}^{-1}} ( I-{{W}_{2}} ) \bigr\Vert }_{2}} {{ \bigl\Vert {{\varepsilon}^{1}} \bigr\Vert }_{2}}+{{ \bigl\Vert {{ ( I+{{W}_{2}} )}^{-1}} \bigr\Vert }_{2}} {{ \bigl\Vert {{l}_{1}} {{\varepsilon}^{0}} \bigr\Vert }_{2}} \\ & \le\max \biggl\{ \biggl\vert \frac{{{w}_{1}}-\lambda}{1+\lambda} \biggr\vert \biggr\} {{ \bigl\Vert {{\varepsilon}^{1}} \bigr\Vert }_{2}}+\max \biggl\{ \frac{{{l}_{1}}}{1+\lambda} \biggr\} {{ \bigl\Vert {{\varepsilon}^{0}} \bigr\Vert }_{2}} \\ & \le\max \biggl\{ \biggl\vert \frac{\lambda + {{l}_{1}} -{{w}_{1}}}{1+\lambda} \biggr\vert \biggr\} {{ \bigl\Vert {{\varepsilon }^{0}} \bigr\Vert }_{2}} \\ & \le{{ \bigl\Vert {{\varepsilon}^{0}} \bigr\Vert }_{2}}. \end{aligned} \end{aligned}$$

Assume that \({{ \Vert {{\varepsilon}^{n}} \Vert }_{2}}\le{{ \Vert {{\varepsilon}^{0}} \Vert }_{2}}\) holds for all \(n\le k+1\); we now show that it also holds for \(n=k+2\).

When \(\max \{ {{w}_{1}},\lambda \}={{w}_{1}}\),

$$\begin{aligned} {{ \bigl\Vert {{\varepsilon}^{k+2}} \bigr\Vert }_{2}} =&{{ \bigl\Vert {{ ( I+{{W}_{2}} )}^{-1}} ( {{w}_{1}}I-{{W}_{1}} ){{ ( I+{{W}_{1}} )}^{-1}} ( {{w}_{1}}I-{{W}_{2}} ) \bigr\Vert }_{2}} {{ \bigl\Vert {{\varepsilon}^{k}} \bigr\Vert }_{2}} \\ &{} +{{ \bigl\Vert {{ ( I+{{W}_{2}} )}^{-1}} ( {{w}_{1}}I-{{W}_{1}} ){{ ( I+{{W}_{1}} )}^{-1}} \bigr\Vert }_{2}} {{ \bigl\Vert {{w}_{2}} {{\varepsilon}^{n-1}}+\cdots +{{w}_{n-1}} {{\varepsilon}^{1}}+{{l}_{n}} {{\varepsilon}^{0}} \bigr\Vert }_{2}} \\ &{} +{{ \bigl\Vert {{ ( I+{{W}_{2}} )}^{-1}} \bigr\Vert }_{2}} {{ \bigl\Vert {{w}_{2}} {{\varepsilon}^{n}}+ \cdots+{{w}_{n}} {{\varepsilon }^{1}}+{{l}_{n+1}} {{\varepsilon}^{0}} \bigr\Vert }_{2}} \\ \le&\max \biggl\{ \biggl\vert {{\biggl(\frac{{{w}_{1}}-\lambda}{1+\lambda } \biggr)}^{2}} \biggr\vert \biggr\} {{ \bigl\Vert {{ \varepsilon}^{k}} \bigr\Vert }_{2}}+\frac{({{w}_{1}}-\lambda)}{{{ ( 1+\lambda )}^{2}}} \bigl\Vert ({{w}_{2}}+\cdots+{{w}_{n-1}}+{{l}_{n}}){{ \varepsilon }^{0}} \bigr\Vert \\ &{} +\frac{1}{1+\lambda}{{ \bigl\Vert ({{w}_{2}}+\cdots +{{w}_{n}}+{{l}_{n+1}}){{\varepsilon}^{0}} \bigr\Vert }_{2}} \\ \le&{{ \biggl( \frac{{{w}_{1}}}{1+\lambda} \biggr)}^{2}} {{ \bigl\Vert {{ \varepsilon}^{0}} \bigr\Vert }_{2}}+\frac{{{w}_{1}} ( 1-{{w}_{1}} )}{{{ ( 1+\lambda )}^{2}}}{{ \bigl\Vert {{\varepsilon }^{0}} \bigr\Vert }_{2}}+ \frac{1-{{w}_{1}}}{1+\lambda}{{ \bigl\Vert {{\varepsilon}^{0}} \bigr\Vert }_{2}} \\ \le&\frac{{{w}_{1}}}{{{ ( 1+\lambda )}^{2}}}{{ \bigl\Vert {{\varepsilon}^{0}} \bigr\Vert }_{2}} \\ \le&{{ \bigl\Vert {{\varepsilon}^{0}} \bigr\Vert }_{2}}; \end{aligned}$$

when \(\max \{ {{w}_{1}},\lambda \}=\lambda\),

$$\begin{aligned} {{ \bigl\Vert {{\varepsilon}^{k+2}} \bigr\Vert }_{2}} =&{{ \bigl\Vert {{ ( I+{{W}_{2}} )}^{-1}} ( {{w}_{1}}I-{{W}_{1}} ){{ ( I+{{W}_{1}} )}^{-1}} ( {{w}_{1}}I-{{W}_{2}} ) \bigr\Vert }_{2}} {{ \bigl\Vert {{\varepsilon}^{k}} \bigr\Vert }_{2}} \\ &{} +{{ \bigl\Vert {{ ( I+{{W}_{2}} )}^{-1}} ( {{w}_{1}}I-{{W}_{1}} ){{ ( I+{{W}_{1}} )}^{-1}} \bigr\Vert }_{2}} {{ \bigl\Vert {{w}_{2}} {{\varepsilon}^{n-1}}+\cdots +{{w}_{n-1}} {{\varepsilon}^{1}}+{{l}_{n}} {{\varepsilon}^{0}} \bigr\Vert }_{2}} \\ &{} +{{ \bigl\Vert {{ ( I+{{W}_{2}} )}^{-1}} \bigr\Vert }_{2}} {{ \bigl\Vert {{w}_{2}} {{\varepsilon}^{n}}+ \cdots+{{w}_{n}} {{\varepsilon }^{1}}+{{l}_{n+1}} {{\varepsilon}^{0}} \bigr\Vert }_{2}} \\ \le&\max \biggl\{ \biggl\vert {{\biggl(\frac{{{w}_{1}}-\lambda}{1+\lambda } \biggr)}^{2}} \biggr\vert \biggr\} {{ \bigl\Vert {{ \varepsilon}^{k}} \bigr\Vert }_{2}}+\frac{({{w}_{1}}-\lambda)}{{{ ( 1+\lambda )}^{2}}} \bigl\Vert ({{w}_{2}}+\cdots+{{w}_{n-1}}+{{l}_{n}}){{ \varepsilon }^{0}} \bigr\Vert \\ &{} +\frac{1}{1+\lambda}{{ \bigl\Vert ({{w}_{2}}+\cdots +{{w}_{n}}+{{l}_{n+1}}){{\varepsilon}^{0}} \bigr\Vert }_{2}} \\ \le&{{ \biggl( \frac{\lambda}{1+\lambda} \biggr)}^{2}} {{ \bigl\Vert {{ \varepsilon}^{0}} \bigr\Vert }_{2}}+\frac{\lambda ( 1-{{w}_{1}} )}{{{ ( 1+\lambda )}^{2}}}{{ \bigl\Vert {{\varepsilon }^{0}} \bigr\Vert }_{2}}+ \frac{1-{{w}_{1}}}{1+\lambda}{{ \bigl\Vert {{\varepsilon}^{0}} \bigr\Vert }_{2}} \\ \le&\frac{\lambda}{1+\lambda}\frac{\lambda}{1+\lambda}+\frac{1-{{w}_{1}}}{1+\lambda}{{ \bigl\Vert {{ \varepsilon}^{0}} \bigr\Vert }_{2}} \\ \le&\frac{\lambda}{1+\lambda}{{ \bigl\Vert {{\varepsilon}^{0}} \bigr\Vert }_{2}} + \frac{1-{{w}_{1}}}{1+\lambda}{{ \bigl\Vert {{\varepsilon }^{0}} \bigr\Vert }_{2}} \\ \le&{{ \bigl\Vert {{\varepsilon}^{0}} \bigr\Vert }_{2}}. \end{aligned}$$

This completes the induction, so \({{ \Vert {{\varepsilon}^{n}} \Vert }_{2}}\le{{ \Vert {{\varepsilon}^{0}} \Vert }_{2}}\) for all n. □

From Lemma 5 we immediately obtain the following:

Theorem 2

The solution of PASE-I scheme (6) for the time-space-fractional B–S equation is unconditionally stable.

3.3 Convergence of PASE-I difference scheme

Lemma 6

Let \(V({{x}_{i}},{{\tau}_{n}})\) be the exact solution of equation (2), and let \(V_{i}^{n}\) be the numerical solution of the PASE-I scheme; in the estimates below τ also denotes the time step of the scheme. Define \(e_{i}^{n}=V({{x}_{i}},{{\tau}_{n}})-V_{i}^{n}\), \({{e}^{n}}=(e_{1}^{n},e_{2}^{n},\ldots,e_{M-1}^{n})\), \({{ \Vert {{e}^{n}} \Vert }_{\infty}} = \vert e_{l}^{n} \vert =\max_{1\le i\le M-1} \vert e_{i}^{n} \vert \), \(n=1,2,\ldots,N\). Then \({{ \Vert {{e}^{n}} \Vert }_{\infty}}\le l_{n}^{-1}C{{\tau }^{\alpha}}({{\tau}^{2-\alpha}}+{{h}^{2}})\), where C is a positive constant.

Proof

Substitute \(V_{i}^{n}=V({{x}_{i}},{{\tau}_{n}})-e_{i}^{n}\) into the difference scheme (7):

$$ \textstyle\begin{cases} (I+{{W}_{1}}){{e}^{n+1}}=({{w}_{1}}I-{{W}_{2}}){{e}^{n}} \\ \hphantom{(I+{{W}_{1}}){{e}^{n+1}}={}}{}+\sum_{j=1}^{n-1}{{{w}_{j+1}}{{e}^{n-j}}}+{{l}_{n}}{{e}^{0}}+{{\tau }^{\alpha}}{{R}^{n}}, \\ (I+{{W}_{2}}){{e}^{n+2}}=({{w}_{1}}I-{{W}_{1}}){{e}^{n+1}} \\ \hphantom{(I+{{W}_{2}}){{e}^{n+2}}={}}{}+\sum_{j=1}^{n}{{{w}_{j+1}}{{e}^{n+1-j}}}+{{l}_{n+1}}{{e}^{0}}+{{\tau }^{\alpha}}{{R}^{n+1}}, \end{cases}\displaystyle \quad n=0,2,4, \ldots, $$
(9)

where \(e_{i}^{0}=0\), \({{R}^{n}}=O({{\tau}^{2-\alpha}}+{{h}^{2}})\), that is, with a positive constant C, we have \(\Vert {{R}^{n}} \Vert \le C({{\tau}^{2-\alpha}}+{{h}^{2}})\).

When \(n=0\), \({{e}^{1}}={{ ( I+{{W}_{1}} )}^{-1}} ( I-{{W}_{2}} ){{e}^{0}}+{{ ( I+{{W}_{1}} )}^{-1}}{{\tau }^{\alpha}}{{R}^{0}}={{ ( I+{{W}_{1}} )}^{-1}}{{\tau }^{\alpha}}{{R}^{0}}\).

By Lemma 3,

$${{ \bigl\Vert {{e}^{1}} \bigr\Vert }_{\infty}}={{ \bigl\Vert {{ ( I+{{W}_{1}} )}^{-1}} {{\tau}^{\alpha}} {{R}^{0}} \bigr\Vert }_{\infty}}\le {{\tau}^{\alpha}} {{ \bigl\Vert {{R}^{0}} \bigr\Vert }_{\infty}}\le l_{0}^{-1}C{{\tau}^{\alpha}} \bigl( {{ \tau}^{2-\alpha}}+{{h}^{2}} \bigr). $$

To verify the convergence of the PASE-I scheme, we consider the E–I and I–E serial schemes. First, consider the classic E–I scheme: when \(n=0\),

$$\begin{aligned}& -a_{n}'e_{i-1}^{2}+\bigl(1+b_{n}'\bigr)e_{i}^{2}-c_{n}'e_{i+1}^{2}=({{w}_{1}} {{l}_{2n}}+{{l}_{2n+1}})e_{i}^{0}+{{\tau }^{\alpha}}R_{i}^{2} = {{\tau}^{\alpha}}R_{i}^{2}, \\& \begin{aligned} {{ \bigl\Vert {{e}^{2}} \bigr\Vert }_{\infty}}&= \bigl\vert e_{{{L}_{2}}}^{2} \bigr\vert \le-a_{n}' \bigl\vert e_{{{L}_{2}}-1}^{2} \bigr\vert +\bigl(1+b_{n}'\bigr) \bigl\vert e_{{{L}_{2}}}^{2} \bigr\vert -c_{n}' \bigl\vert e_{{{L}_{2}}+1}^{2} \bigr\vert \\ &\le \bigl\vert -c_{n}'e_{{{L}_{2}}+1}^{2}+ \bigl(1+b_{n}'\bigr)e_{{{L}_{2}}}^{2}-a_{n}'e_{{{L}_{2}}-1}^{2} \bigr\vert \\ &= \bigl\vert {{\tau}^{\alpha}}R_{{{L}_{2}}}^{2} \bigr\vert \\ &\le C{{\tau}^{\alpha}}\bigl({{\tau}^{2-\alpha}}+{{h}^{2}} \bigr) \\ &\le l_{1}^{-1}C{{\tau}^{\alpha}}\bigl({{ \tau}^{2-\alpha}}+{{h}^{2}}\bigr). \end{aligned} \end{aligned}$$

Assuming that \(n\le2s\), we get \({{ \Vert {{e}^{2s}} \Vert }_{\infty }}\le l_{2s-1}^{-1}C{{\tau}^{\alpha}}({{\tau}^{2-\alpha }}+{{h}^{2}})\). When \(n=2s+2\),

$$\begin{aligned}& {{ \bigl\Vert {{e}^{2s+2}} \bigr\Vert }_{\infty}} \\& \quad = \bigl\vert e_{l}^{2s+2} \bigr\vert \le-c_{n}' \bigl\vert e_{{{L}_{2s+2}}+1}^{2s+2} \bigr\vert +(1+{{b}_{n}}) \bigl\vert e_{{{L}_{2s+2}}}^{2s+2} \bigr\vert -a_{n}' \bigl\vert e_{{{L}_{2s+2}}-1}^{2s+2} \bigr\vert \\& \quad \le \bigl\vert -c_{n}'e_{{{L}_{2s+2}}+1}^{2s+2}+(1+{{b}_{n}})e_{{{L}_{2s+2}}}^{2s+2}-a_{n}'e_{{{L}_{2s+2}}-1}^{2s+2} \bigr\vert \\& \quad = \Biggl\vert {{w}_{1}} {c'_{n}}e_{{{L}_{2s+2}}+1}^{2s}+ \bigl[{{w}_{1}}\bigl({{w}_{1}}-{b'_{n}} \bigr)+{{w}_{2}}\bigr]e_{{{L}_{2s+2}}}^{2s}+{{w}_{1}} {a'_{n}}e_{{{L}_{2s+2}}-1}^{2s} \\& \qquad {}+\sum _{j=1}^{2s-1}{({{w}_{1}} {{w}_{j+1}}+{{w}_{j+2}})e_{{{L}_{2s+2}}}^{2s-j}} +{{\tau}^{\alpha}}R_{{{L}_{2s+2}}}^{2s+2} \Biggr\vert \\& \quad \le{{w}_{1}} {{c}'_{n}} {{ \bigl\Vert {{e}^{2s}} \bigr\Vert }_{\infty }}+\bigl[{{w}_{1}} \bigl({{w}_{1}}-{{b}'_{n}}\bigr)+{{w}_{2}} \bigr]{{ \bigl\Vert {{e}^{2s}} \bigr\Vert }_{\infty}}+{{w}_{1}} {a'_{n}} {{ \bigl\Vert {{e}^{2s}} \bigr\Vert }_{\infty}} \\& \qquad {}+\sum_{j=1}^{2s-1}{({{w}_{1}} {{w}_{j+1}}+{{w}_{j+2}}){{ \bigl\Vert {{e}^{2s-j}} \bigr\Vert }_{\infty}}} +C{{\tau}^{\alpha}}\bigl({{\tau}^{2-\alpha}}+{{h}^{2}} \bigr) \\& \quad =\bigl(w_{1}^{2}+{{w}_{2}}\bigr){{ \bigl\Vert {{e}^{2s}} \bigr\Vert }_{\infty }}+({{w}_{1}} {{w}_{2}}+{{w}_{3}}){{ \bigl\Vert {{e}^{2s-1}} \bigr\Vert }_{\infty}}+\cdots+({{w}_{1}} {{w}_{2s}}+{{w}_{2s+1}}){{ \bigl\Vert {{e}^{1}} \bigr\Vert }_{\infty}} \\& \qquad {}+C{{ \tau}^{\alpha}}\bigl({{\tau}^{2-\alpha }}+{{h}^{2}}\bigr) \\& \quad \le \bigl[\bigl(w_{1}^{2}+{{w}_{2}} \bigr)l_{2s-1}^{-1}+({{w}_{1}} {{w}_{2}}+{{w}_{3}})l_{2s-2}^{-1}+ \cdots +({{w}_{1}} {{w}_{2s}}+{{w}_{2s+1}})l_{0}^{-1}+1 \bigr]C{{\tau}^{\alpha }}\bigl({{\tau}^{2-\alpha}}+{{h}^{2}} \bigr) \\& \quad \le\bigl[\bigl(w_{1}^{2}+{{w}_{2}}+{{w}_{1}} {{w}_{2}}+{{w}_{3}}+\cdots +{{w}_{1}} {{w}_{2s}}+{{w}_{2s+1}}\bigr){{l}_{2s}}+1\bigr]C{{ \tau}^{\alpha }}\bigl({{\tau}^{2-\alpha}}+{{h}^{2}}\bigr) \\& \quad =l_{2s}^{-1}\bigl(w_{1}^{2}+{{w}_{2}}+{{w}_{1}} {{w}_{2}}+{{w}_{3}}+\cdots +{{w}_{1}} {{w}_{2s}}+{{w}_{2s+1}}+{{l}_{2s}}\bigr)C{{ \tau}^{\alpha }}\bigl({{\tau}^{2-\alpha}}+{{h}^{2}}\bigr) \\& \quad =l_{2s}^{-1}({{l}_{1}} {{l}_{2s}}-{{l}_{2s+1}})C{{ \tau}^{\alpha }}\bigl({{\tau}^{2-\alpha}}+{{h}^{2}}\bigr) \\& \quad \le l_{2s-1}^{-1}C{{\tau}^{\alpha}}\bigl({{ \tau}^{2-\alpha }}+{{h}^{2}}\bigr). \end{aligned}$$

Then consider the classic I–E scheme. When \(n=0\),

$$\begin{aligned}& e_{i}^{2}=(1+{{w}_{1}})e_{i}^{1}+({{l}_{1}}-{{l}_{0}})e_{i}^{0}+{{ \tau }^{\alpha}}R_{i}^{2}=(1+{{w}_{1}})e_{i}^{1}+{{ \tau}^{\alpha}}R_{i}^{2}, \\& \begin{aligned} {{ \bigl\Vert {{e}^{2}} \bigr\Vert }_{\infty}}&= \bigl\vert e_{{{L}_{2}}}^{2} \bigr\vert = \bigl\vert (1+{{w}_{1}})e_{{{L}_{2}}}^{1}+{{ \tau}^{\alpha }}R_{{{L}_{2}}}^{2} \bigr\vert \\ & \le ( {{w}_{1}}+1 ){{ \bigl\Vert {{e}^{1}} \bigr\Vert }_{\infty }}+{{\tau}^{\alpha}}\bigl({{\tau}^{2-\alpha}}+{{h}^{2}} \bigr) \\ & \le \bigl\vert ({{w}_{1}}+1)l_{0}^{-1}+1 \bigr\vert \cdot C{{\tau}^{\alpha }}\bigl({{\tau}^{2-\alpha}}+{{h}^{2}} \bigr) \\ & \le({{w}_{1}}+2)C{{\tau}^{\alpha}}\bigl({{ \tau}^{2-\alpha }}+{{h}^{2}}\bigr) \\ & \le l_{1}^{-1}C{{\tau}^{\alpha}}\bigl({{ \tau}^{2-\alpha}}+{{h}^{2}}\bigr). \end{aligned} \end{aligned}$$

Assuming that \(n\le2s+1\), we get \({{ \Vert {{e}^{2s+1}} \Vert }_{\infty}}\le l_{2s}^{-1}C{{\tau}^{\alpha}}({{\tau}^{2-\alpha }}+{{h}^{2}})\). When \(n=2s+2\),

$$\begin{aligned}& {{ \bigl\Vert {{e}^{2s+2}} \bigr\Vert }_{\infty}} \\ & \quad = \bigl\vert e_{{{L}_{2s+2}}}^{2s+2} \bigr\vert \\ & \quad = \bigl\vert (1+{{w}_{1}})e_{{{L}_{2s+2}}}^{2s+1}+({{w}_{2}}-{{w}_{1}})e_{{{L}_{2s+2}}}^{2s}+ \cdots+ ({{w}_{2s+1}}-{{w}_{2s}})e_{{{L}_{2s+2}}}^{1} \\ & \qquad {}+({{l}_{2s +1}}-{{l}_{2s}})e_{{{L}_{2s+2}}}^{0}+{{ \tau}^{\alpha }}R_{{{L}_{2s+2}}}^{2s+2} \bigr\vert \\ & \quad = \Biggl\vert (1+{{w}_{1}})e_{{{L}_{2s+2}}}^{2s+1}+ \sum_{j=1}^{2s}{({{w}_{j+1}}-{{w}_{j}})e_{{{L}_{2s+2}}}^{2s+1-j}+{{ \tau }^{\alpha}}R_{{{L}_{2s+2}}}^{2s+2}} \Biggr\vert \\ & \quad =(1+{{w}_{1}}){{ \bigl\Vert {{e}^{2s+1}} \bigr\Vert }_{\infty }}+({{w}_{2}}-{{w}_{1}}){{ \bigl\Vert {{e}^{2s}} \bigr\Vert }_{\infty }}+\cdots+({{w}_{2s+1}}-{{w}_{2s}}){{ \bigl\Vert {{e}^{1}} \bigr\Vert }_{\infty}}+{{ \tau}^{\alpha}}C \bigl( {{\tau}^{2-\alpha}}+{{h}^{2}} \bigr) \\ & \quad \le(1+{{w}_{1}})l_{2s}^{-1}{{ \tau}^{\alpha}}C \bigl( {{\tau }^{2-\alpha}}+{{h}^{2}} \bigr)+({{w}_{2}}-{{w}_{1}})l_{2s-1}^{-1}{{ \tau}^{\alpha}}C \bigl( {{\tau }^{2-\alpha}}+{{h}^{2}} \bigr)+ \cdots \\ & \qquad {}+ ({{w}_{2s+1}}-{{w}_{2s}})l_{0}^{-1}{{ \tau}^{\alpha}}C \bigl( {{\tau }^{2-\alpha}}+{{h}^{2}} \bigr)+{{\tau}^{\alpha}}C \bigl( {{\tau }^{2-\alpha}}+{{h}^{2}} \bigr) \\ & \quad \le \bigl\{ (1+{{w}_{1}})l_{2s}^{-1}+({{w}_{2}}-{{w}_{1}})l_{2s-1}^{-1}+ \cdots+ ({{w}_{2s+1}}-{{w}_{2s}})l_{0}^{-1}+1 \bigr\} \cdot{{\tau}^{\alpha }}C \bigl( {{\tau}^{2-\alpha}}+{{h}^{2}} \bigr) \\ & \quad \le \bigl\{ (1+{{w}_{1}}+{{w}_{2}}-{{w}_{1}}+ \cdots+ {{w}_{2s+1}}-{{w}_{2s}})l_{2s+1}^{-1}+1 \bigr\} {{\tau}^{\alpha }}C \bigl( {{\tau}^{2-\alpha}}+{{h}^{2}} \bigr) \\ & \quad \le \bigl\{ (1+{{w}_{2s+1}})l_{2s+1}^{-1}+1 \bigr\} {{\tau}^{\alpha }}C \bigl( {{\tau}^{2-\alpha}}+{{h}^{2}} \bigr) \\ & \quad \le l_{2s+1}^{-1} \bigl\{ (1+{{w}_{2s+1}})+{{l}_{2s+1}} \bigr\} {{\tau}^{\alpha}}C \bigl( {{\tau}^{2-\alpha}}+{{h}^{2}} \bigr) \\ & \quad \le l_{2s+1}^{-1}{{\tau}^{\alpha}}C \bigl( {{ \tau}^{2-\alpha }}+{{h}^{2}} \bigr) \end{aligned}$$

because

$$\lim_{n\to\infty} \frac{l_{n}^{-1}}{{{n}^{\alpha }}}=\lim_{n\to\infty} \frac{{{n}^{-\alpha }}}{{{n}^{1-\alpha}}-{{(n-1)}^{1-\alpha}}}= \lim_{n\to\infty} \frac{{{n}^{-1}}}{1-{{(1-{{n}^{-1}})}^{1-\alpha}}}= \frac {1}{1-\alpha}. $$

Since \(n\tau\le T\) and \(l_{n}^{-1}\le m{{n}^{\alpha}}\) for some positive constant m, we can define \(\hat{C}=C{{T}^{\alpha}}m\). Then we get \(\vert V({{x}_{i}},{{\tau}_{n}})-V_{i}^{n} \vert \le m{{n}^{\alpha }}{{\tau}^{\alpha}}C({{\tau}^{2-\alpha}}+{{h}^{2}})=m{{(n\tau )}^{\alpha}}C({{\tau}^{2-\alpha}}+{{h}^{2}})\le\hat{C}({{\tau }^{2-\alpha}}+{{h}^{2}})\), where \(i=1,2,\ldots,M-1\), \(n=1,2,\ldots,N\). □

Theorem 3

The solution of PASE-I scheme (6) for time-space-fractional B–S equation is unconditionally convergent. It satisfies

$$\bigl\vert V({{x}_{i}},{{\tau}_{n}})-V_{i}^{n} \bigr\vert \le\hat{C}\bigl({{\tau }^{2-\alpha}}+{{h}^{2}}\bigr), \quad \hat{C}>0. $$

4 The PASI-E difference scheme for time-space-fractional B–S equation

Similarly, every segment of the odd time layers is arranged from left to right in the order "classic implicit–classic explicit–classic implicit", and on the even time layers the rule of calculation becomes "classic explicit–classic implicit–classic explicit". In this way we obtain the PASI-E scheme of equation (2):

$$ \textstyle\begin{cases} (I+{{G}_{2}}){{V}^{n+1}}=({{w}_{1}}I-{{G}_{1}}){{V}^{n}}+{{q}^{n}} \\ \hphantom{(I+{{G}_{2}}){{V}^{n+1}}={}}{}+\sum_{j=1}^{n-1}{{{w}_{j+1}}{{V}^{n-j}}}+{{l}_{n}}{{V}^{0}}, \\ (I+{{G}_{1}}){{V}^{n+2}}=({{w}_{1}}I-{{G}_{2}}){{V}^{n+1}}+{{q}^{n+2}} \\ \hphantom{(I+{{G}_{1}}){{V}^{n+2}}={}}{}+\sum_{j=1}^{n}{{{w}_{j+1}}{{V}^{n+1-j}}}+{{l}_{n+1}}{{V}^{0}}, \end{cases}\displaystyle \quad n=0,2,4, \ldots, $$
(10)

where \({{G}_{1}}\), \({{G}_{2}}\), and \({{q}^{n}}\) are as before. The theoretical analysis of PASI-E scheme is similar to that of PASE-I scheme, so we get a similar theorem.

Theorem 4

The PASI-E scheme (10) for time-space-fractional B–S equation has a unique solution. The scheme is unconditionally stable and convergent. It also satisfies

$$\bigl\vert V({{x}_{i}},{{\tau}_{n}})-V_{i}^{n} \bigr\vert \le\hat{C}\bigl({{\tau }^{2-\alpha}}+{{h}^{2}}\bigr), \quad \hat{C}>0. $$

The PASE-I and PASI-E schemes not only guarantee the stability of the numerical computation, but also have good parallel properties, which can effectively improve the computational efficiency of solving the time-space-fractional B–S equation.

5 Numerical experiments

The numerical experiments were performed on an Intel Core i3 CPU in the Matlab R2012b environment.

Example 1

([30])

Consider the European call option with expiration dates of 3 months, 6 months, 9 months, and 12 months. The stock price S is 97 dollars, the strike price K is 50 dollars, the risk-free rate is \(r=0.01\) per annum, and the volatility is \(\sigma=0.2\) per year.

We consider the following time-space-fractional B-S equation:

$$ \textstyle\begin{cases} P_{t}^{(\alpha)} = (\frac{r}{\Gamma(2-\alpha)}P-r{{S}^{\alpha }}P_{S}^{(\alpha)}){{t}^{1-\alpha}}-\frac{{{\Gamma}^{3}}(1+\alpha )}{\Gamma(1+2\alpha)}{{\Gamma}^{2}}(2-\alpha){{\sigma }^{2}}{{S}^{2\alpha}}P_{S}^{(2\alpha)}, \\ P(S,T) = \max \{ S-K,0 \}. \end{cases} $$
(11)

To compare with the C–N scheme in [30], we use the PASE-I and PASI-E schemes to calculate the option prices. We take \(M=1001\), \(N=100\), \(L=200\), \(Q=\frac{M-1}{L}=5\), \({{N}^{+}}=\ln100\), \({{N}^{-}}=\ln0.1\), \(\alpha = 0.7\), \(T=12\). The surface plots of the C–N, PASE-I, and PASI-E scheme solutions are shown below. It can be seen from Figs. 2–4 that the solution surfaces of the three schemes are consistent in shape and smooth.

Figure 2: The surface plot of the C–N scheme solution

Figure 3: The surface plot of the PASE-I scheme solution

Figure 4: The surface plot of the PASI-E scheme solution

The numerical solutions of the C–N scheme, PASE-I scheme, and PASI-E scheme for time-space-fractional B-S equation are compared as follows.

It can be seen from Table 1 and Fig. 5 that the numerical solutions obtained by these three schemes for the time-space-fractional B–S equation are very similar.

Figure 5: Numerical solutions of three schemes

Table 1 Numerical solutions of three schemes (\(\alpha=0.7\))

To examine the stability of the PASE-I and PASI-E schemes, the relative errors over time are computed: \(V_{i}^{n}\) is the C–N scheme solution, and \(\tilde{V}_{i}^{n}\) is the PASE-I (PASI-E) scheme solution. The quantity SRET(n) gives the sum of relative errors at time layer n, and DTE(i) gives the distribution of the accumulated squared errors over the spatial grid. We take \(M = 1001\), \(N=1000\), \(\alpha=0.7\). The SRET and DTE for equation (11) are defined as follows:

$$\begin{aligned}& \operatorname{SRET}(n)=\sum_{i=1}^{M}{ \frac{|V_{i}^{n}-\tilde{V}_{i}^{n}|}{V_{i}^{n}}}, \\& \operatorname{DTE} ( i ) = \frac{1}{2}\sum_{j=1}^{N} \bigl( \tilde{V}_{i}^{j}-V_{i}^{j} \bigr)^{2}. \end{aligned}$$
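For reference, these two error measures can be computed directly from the stored solution arrays; the sketch below (ours) assumes the C–N and PASE-I solutions are stored as arrays of shape \((N+1)\times(M+1)\), rows indexing time layers and columns indexing grid points:

```python
# Sketch (ours) of the error measures SRET and DTE defined above; V_cn and V_pase
# hold the C-N and PASE-I solutions on the same grid.
import numpy as np

def sret(V_cn, V_pase, n):
    """Sum of relative errors over the spatial grid at time layer n."""
    return np.sum(np.abs(V_cn[n, 1:] - V_pase[n, 1:]) / np.abs(V_cn[n, 1:]))

def dte(V_cn, V_pase, i):
    """Half the sum over time layers of squared errors at spatial point i."""
    return 0.5 * np.sum((V_pase[1:, i] - V_cn[1:, i]) ** 2)
```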

It can be seen from Fig. 6 that the SRET is larger at the beginning; as the number of time steps increases, the relative errors decrease rapidly and then settle at an essentially constant level. Figure 7 shows that the DTE lies between 0 and 0.045; the solutions of the PASE-I and PASI-E schemes are almost the same as that of the C–N scheme, and they solve the time-space-fractional B–S equation with good accuracy.

Figure 6: The curves of the SRET at time layer

Figure 7: The curves of the DTE at space layer

To verify that the space- and time-convergence orders are consistent with the theoretical analysis, we denote by \(E_2\) the \(L^{2}\) errors, by Order1 the space-convergence order, and by Order2 the time-convergence order [31]. Since the PASI-E scheme behaves similarly to the PASE-I scheme, for brevity we only report the time- and space-convergence orders of the PASE-I scheme. The results are presented in Tables 2 and 3.

$$\begin{aligned}& {{E}_{2}} ( h,k )=\max_{0\le n\le N} {{ \bigl\Vert V_{i}^{n}-\tilde{V}_{i}^{n} \bigr\Vert }_{2}}, \\& \mathit{Order}1={{\log}_{2}} \biggl( \frac{{{E}_{2}} ( 2h,k )}{{{E}_{2}} ( h,k )} \biggr),\qquad \mathit{Order}2={{\log}_{2}} \biggl( \frac{{{E}_{2}} ( h,2k )}{{{E}_{2}} ( h,k )} \biggr). \end{aligned}$$
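The following short helper (ours) shows how these quantities can be evaluated once the solutions on successive grids are available:

```python
# Sketch (ours) of the error norm E_2(h, k) and the convergence orders Order1, Order2.
import numpy as np

def E2(V_ref, V_num):
    """Largest discrete 2-norm error over all time layers (rows)."""
    return np.max(np.linalg.norm(V_ref - V_num, axis=1))

def order1(E_2h, E_h):
    return np.log2(E_2h / E_h)   # spatial order: step halved from 2h to h

def order2(E_2k, E_k):
    return np.log2(E_2k / E_k)   # temporal order: step halved from 2k to k
```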
Table 2 The space-convergent orders and errors of PASE-I scheme solution (\(N=1000\))
Table 3 The time-convergent orders and errors of PASE-I scheme solution (\(M=1001\))

Table 2 shows that the space-convergent order of the PASE-I scheme is 2. Table 3 shows that the time-convergent order of the PASE-I scheme is \(2-\alpha\). The numerical results agree with theoretical analysis.

To better compare the computational efficiency of the C–N and PASE-I schemes, we take \(T=12\), fix the spatial grid at \(M=1001\), and let the number of time grids be \(N=100,200,300,400,500,600,700,800,900,1000\). The computing time and the speedup ratio [15] are given as follows.

As can be seen from Table 4, as the number of time grids increases, the computing time of both schemes grows, while the speedup ratio \(S_p\) is greater than 1 and gradually increases. That is to say, the PASE-I parallel scheme has an obvious advantage over the serial C–N scheme, and its computational efficiency is higher.

Table 4 The computing time and speedup ratio of C–N scheme and PASE-I scheme
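The speedup ratio \(S_p\) in Table 4 is understood here as the usual ratio of the serial C–N computing time to the PASE-I computing time [15]; a minimal timing sketch (ours, with hypothetical solver names solve_cn and solve_pase_i) would be:

```python
# Sketch (ours) of the timing comparison; solve_cn and solve_pase_i are
# hypothetical placeholders for the two solvers.
import time

def timed(solver, *args):
    t0 = time.perf_counter()
    solver(*args)
    return time.perf_counter() - t0

# t_cn   = timed(solve_cn, M, N)
# t_pase = timed(solve_pase_i, M, N)
# S_p    = t_cn / t_pase          # S_p > 1 means PASE-I is faster than C-N
```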

In summary, the C–N and PASE-I schemes produce similar numerical solutions, and both solve the time-space-fractional B–S equation accurately, but the computational efficiency of the PASE-I scheme is much higher than that of the C–N scheme.

Example 2

Let \(\alpha=2/3\), \(\alpha=1/2\), \(\alpha=1/3\), \(M=1001\), \(N=1000\). We use the PASE-I scheme to calculate the price of options and make a comparison with classic B–S model (\(\alpha=1\)).

From Table 5 and Fig. 8 we can see that the option prices calculated by the PASE-I scheme are higher than those obtained from the standard B–S model. Generally speaking, for options with maturities of twelve months, the prices calculated by the classic B–S model are lower than those observed in the actual financial market [32]. This shows that the time-space-fractional B–S model can better describe the evolution of the asset price. Therefore, to be more in line with the actual financial market, the parameter α in the time-space-fractional B–S equation should be selected properly according to actual data.

Figure 8: The option price calculated by PASE-I scheme

Table 5 Comparison of numerical solutions of PASE-I scheme

6 Conclusions

The two intrinsic parallel difference schemes for the time-space-fractional B–S equation, PASE-I and PASI-E, are unconditionally stable, have unique solutions, and converge with second-order spatial accuracy and \((2-\alpha)\)th-order temporal accuracy. The numerical experiments verify the theoretical analysis and show that the PASE-I and PASI-E difference methods have good computational accuracy and obvious parallel computing properties, so they are suitable for various types of parallel computing systems. In particular, when the number of spatial grid points is large, the two schemes exhibit strong locality in both computation and communication, which makes them well suited to large-scale distributed-memory parallel computing systems. Meanwhile, the experiments confirm that the fractional B–S equation is more consistent with the actual financial market.

References

  1. Jiang, L.S., Xu, C.L., Ren, X.M., et al.: Mathematical Model and Case Analysis of the Pricing of Financial Derivatives. Higher Education Press, Beijing (2008) (in Chinese)
  2. Kwok, Y.: Mathematical Models of Financial Derivatives, 2nd edn. Springer, Berlin (2008)
  3. Wyss, W.: The fractional Black–Scholes equations. Fract. Calc. Appl. Anal. 3(1), 51–61 (2000)
  4. Cartea, A., Del-Castillo-Negrete, D.: Fractional diffusion models of option prices in markets with jumps. Phys. A, Stat. Mech. Appl. 374(2), 749–763 (2007)
  5. Jumarie, G.: Stock exchange fractional dynamics defined as fractional exponential growth driven by Gaussian white noise. Application to fractional Black–Scholes equations. Insur. Math. Econ. 42(1), 271–287 (2008)
  6. Jumarie, G.: Derivation and solutions of some fractional Black–Scholes equations in coarse-grained space and time. Application to Merton’s optimal portfolio. Comput. Math. Appl. 59(3), 1142–1164 (2010)
  7. Liang, J.R., Wang, J., Zhang, W.J., Qiu, W.Y., Ren, F.Y.: The solution to a bi-fractional Black–Scholes–Merton differential equation. Int. J. Pure Appl. Math. 58(1), 99–112 (2010)
  8. Sedigheh, Z., Habibollah, S., Mohammad, I.: Fractional integration operator for numerical solution of the integro-partial time fractional diffusion heat equation with weakly singular kernel. Asian-Eur. J. Math. 10(4), 1750071 (2017). https://doi.org/10.1142/S1793557117500711
  9. Zeynab, K., Habibollah, S.: B-spline wavelet operational method for numerical solution of time-space fractional partial differential equations. Int. J. Wavelets Multiresolut. Inf. Process. 15(4), 1750034 (2017). https://doi.org/10.1142/S0219691317500345
  10. Guo, B.L., Pu, X.K., Huang, F.H.: Fractional Partial Differential Equations and Their Numerical Solutions. Science Press, Beijing (2015)
  11. Biorstad, P., Agrawal, O.P., Tenreiro, J.A.: Advances in Fractional Calculus: Theoretical Developments and Applications in Physics and Engineering. World Book Inc. Beijing, Beijing (2014)
  12. Liu, F.W., Zhuang, P.H., Liu, Q.X.: Numerical Methods of Fractional Partial Differential Equations and Applications. Science Press, Beijing (2015) (in Chinese)
  13. Sun, Z.Z., Gao, G.H.: Finite Difference Methods for Fractional Differential Equations. Science Press, Beijing (2015) (in Chinese)
  14. Zhang, B.L., Gu, T.X., Mo, Z.Y.: Principles and Methods of Numerical Parallel Computation. National Defense Industry Press, Beijing (1999) (in Chinese)
  15. Chi, X.B., Wang, Y.W., Wang, Y., Liu, F.: Parallel Computing and Implementation Technology. Science Press, Beijing (2015) (in Chinese)
  16. Evans, D.J., Abdullah, A.R.B.: Group explicit method for parabolic equations. Int. J. Comput. Math. 14(1), 73–105 (1983)
  17. Zhang, B.L.: Alternating segment explicit–implicit method for diffusion equation. J. Numer. Methods Comput. Appl. 14, 245–253 (1991)
  18. Zhou, Y.L.: A finite difference scheme with intrinsic parallelism for quasilinear parabolic systems. Sci. China Ser. A, Math. 40(1), 43–48 (1997) (in Chinese)
  19. Zhu, S.H., Yuan, G.W.: Difference schemes with intrinsic parallelism for dispersive equation. Acta Math. Appl. Sin. 26(3), 495–503 (2003) (in Chinese)
  20. Wang, W.Q.: Difference schemes with intrinsic parallelism for the KdV equation. Acta Math. Appl. Sin. 29(6), 995–1003 (2006) (in Chinese)
  21. Yuan, G.W., Sheng, Z.Q., Hang, X.D.: The unconditional stability of parallel difference schemes with second order convergence for nonlinear parabolic equation. J. Partial Differ. Equ. 20, 45–64 (2007)
  22. Wang, H., Wang, K.X., Sircar, T.: A direct \(O(N \log^{2} N)\) finite difference method for fractional diffusion equations. J. Comput. Phys. 229, 8095–8104 (2010)
  23. Diethelm, K.: An efficient parallel algorithm for the numerical solution of fractional differential equations. Fract. Calc. Appl. Anal. 14(3), 475–490 (2011)
  24. Gong, C.Y., Bao, W.M., Tang, G.J., et al.: A parallel algorithm for the Riesz fraction reaction–diffusion equation with explicit finite difference method. Fract. Calc. Appl. Anal. 16(3), 654–669 (2013)
  25. Sweilam, N.H., Moharram, H., Moniem, N.K.A., Ahmed, S.: A parallel Crank–Nicolson finite difference method for time-fractional parabolic equation. J. Numer. Math. 22(4), 363–382 (2014)
  26. Lu, X., Pang, H.K., Sun, H.W.: Fast approximate inversion of a block triangular Toeplitz matrix with applications to fractional sub-diffusion equations. Numer. Linear Algebra Appl. 22(4), 866–882 (2015)
  27. Wang, Q.L., Liu, J., Gong, C.N., et al.: An efficient parallel algorithm for Caputo fractional reaction–diffusion equation with implicit finite-difference method. Adv. Differ. Equ. 2016(1), 207 (2016). https://doi.org/10.1186/s13662-016-0929-9
  28. Zhang, Q.: Finite Difference Methods for Partial Differential Equations. Science Press, Beijing (2017) (in Chinese)
  29. Shen, S., Liu, F., Anh, V., et al.: Implicit difference approximation for the time fractional diffusion equation. J. Appl. Math. Comput. 22(3), 87–99 (2006)
  30. Yang, X.Z., Wu, L.F., Sun, S.Z., Zhang, X.: A universal difference method for time-space fractional Black–Scholes equation. Adv. Differ. Equ. 2016(1), 71 (2016). https://doi.org/10.1186/s13662-016-0792-8
  31. Vong, S., Lyu, P., Wang, Z.: A compact difference scheme for fractional sub-diffusion equations with the spatially variable coefficient under Neumann boundary conditions. J. Sci. Comput. 66(2), 725–739 (2016)
  32. Carr, P., Wu, L.R.: Time-changed Levy processes and option pricing. J. Financ. Econ. 71(1), 113–141 (2004)


Funding

The work was supported by the National Natural Science Foundation of China (Grant No. 11371135).

Author information


Contributions

All authors read and approved the final version of the manuscript.

Corresponding author

Correspondence to Xiaozhong Yang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.



Cite this article

Li, Y., Yang, X. & Sun, S. A class of intrinsic parallel difference methods for time-space fractional Black–Scholes equation. Adv Differ Equ 2018, 280 (2018). https://doi.org/10.1186/s13662-018-1736-2

