
Numerical solution of fractional differential equations with Caputo derivative by using numerical fractional predict–correct technique

Abstract

Fractional differential equations have recently demonstrated their importance in a variety of fields, including medicine, applied sciences, and engineering. The main objective of this study is to propose an Adams-type multistep method for solving differential equations of fractional order. The method is developed by implementing Lagrange interpolation and adapting the idea of the Adams–Moulton method to the fractional case. The fractional derivative used in this study is the Caputo derivative operator. The analysis of the proposed method is presented in terms of method order, order of accuracy, and convergence, with the proposed method being proved to converge. The stability of the method is also examined, where the stability regions appear to be symmetric about the real axis for various values of α. To validate the competency of the proposed method, several numerical examples of linear and nonlinear fractional differential equations are included. The method is presented as a numerical predict–correct technique for the case \(\alpha \in (0,1)\), where α represents the order of the fractional derivative \(D^{\alpha }y(t)\).

Introduction

For \(\alpha >0\) and under the assumption that the function f is smooth, we have the fractional initial value problem (FIVP) in the form [1]

$$\begin{aligned} {}_{C}\mathrm{D}^{\alpha } _{t_{0}} y(t)=f \bigl(t,y(t)\bigr),\qquad y(t_{0})=y_{0}, \end{aligned}$$
(1)

where α is the order of the fractional differential equation (FDE) with \(0<\alpha <1\), while \({}_{C}\mathrm{D}^{\alpha } _{t_{0}} \) denotes Caputo’s fractional α-derivative operator (in the sequel denoted \(\mathrm{D}^{\alpha }\)), defined as follows [2]:

$$\begin{aligned} \mathrm{D}^{\alpha }y(t)=\frac{1}{\Gamma (m-\alpha )} \int _{t_{0}}^{t} \frac{y^{(m)}(\tau )\,d\tau }{(t-\tau )^{\alpha -m+1}},\quad m-1< \alpha < m \in \mathbb{Z}^{+}. \end{aligned}$$
(2)

As stated in the research study by Garrappa [2], \({}_{C}\mathrm{D}^{\alpha } _{t_{0}} y(t) = {}_{\mathrm{RL}}\mathrm{D}^{\alpha } _{t_{0}}(y(t)-y(t_{0}))\), where \({}_{\mathrm{RL}}\mathrm{D}^{\alpha } _{t_{0}}\) is the Riemann–Liouville differential operator, defined as follows:

$$\begin{aligned} _{\mathrm{RL}}\mathrm{D}^{\alpha } _{t_{0}}y(t)=\frac{1}{\Gamma (m-\alpha )} \biggl( \frac{d}{dt} \biggr)^{m} \int _{t_{0}}^{t} \frac{y(\tau )\,d\tau }{(t-\tau )^{\alpha -m +1}},\quad \alpha >0, m= \lceil \alpha \rceil. \end{aligned}$$
(3)

Many experts choose to use the Caputo definition in their research because, according to Diethelm and Ford [3], under Caputo’s concept the initial conditions typically carry a clear physical meaning that can be measured. As a result, the Caputo definition is used in this study to develop a new fractional multistep method for handling FDE problems. The existence and uniqueness of the solution of Equation (1) are given in [3].

Numerous numerical treatments have been proposed to handle different types of FDE, such as the finite difference method [4], fractional backward differentiation formulae [5], and the block implicit Adams method [1]. Among these studies, [6] compares numerical methods for single-order FDEs and concludes that the predictor–corrector method by Diethelm et al. [7] is recommended in most cases. For both linear and nonlinear FDEs, that approach has the advantage of being simple to implement. However, the number of function evaluations per iteration rises as the number of intervals N increases. In a study by Gnitchogna and Atangana [8], an explicit technique for solving fractional-order partial differential equations, based on the Adams–Bashforth numerical scheme in Laplace space, is proposed.

The main interest of this study is to develop an implicit method of order two for solving fractional differential equations in a predict–correct numerical scheme. The proposed method is derived from the idea of the Adams–Moulton method, with the function interpolated using Lagrange interpolation. The method’s stability properties, order, convergence, and order of accuracy are also discussed. Note that, in order to make a numerical comparison, this paper restricts the analysis to \(0<\alpha <1\). Further, to validate the efficiency of the method, several numerical examples of FDE are included along with a discussion of the numerical results.

Fractional Adams method of predictor–corrector order 2

The proposed multistep method will be presented in the predictor–corrector form known as fractional Adams method of explicit order 2, implicit order 2 (FAM22).

Suppose that \(t \in (a,b)\) is the interval on which the approximate solution \(y(t_{i+1})\) is computed at \(t_{i+1}\). The computation also requires the previous approximate value \(y(t_{i})\) at \(t_{i}\).

Explicit order 2

The explicit method of order 2 can be expressed as follows:

$$\begin{aligned} \begin{aligned} y(t_{i+1})= {}&y(t_{i}) + \frac{h^{\alpha }}{\Gamma (\alpha )} \biggl[ \biggl( \frac{2(i+1)^{\alpha }-(i)^{\alpha }}{\alpha } + \frac{(i)^{\alpha +1}-(i+1)^{\alpha +1}}{\alpha +1} \biggr) F_{i} \\ &{}+ \biggl( \frac{-(i+1)^{\alpha }}{\alpha } + \frac{(i+1)^{\alpha +1}-(i)^{\alpha +1}}{\alpha +1} \biggr) F_{i-1} \biggr]. \end{aligned} \end{aligned}$$
(4)

The numerical scheme obtained in Equation (4) can be found in previous research [8], in which the researchers developed this method and extended it by applying the Laplace transform; the extended method was then applied to problems of partial fractional differential equations (PFDE). Here, the established method (4) is taken as the predictor of FAM22 for solving ordinary FDE in the case \(0<\alpha <1\).

Implicit order 2

As stated in the preceding section, the proposed implicit formula of FAM22 is derived using Lagrange interpolation while taking into consideration the concept of second order Adams–Moulton method.

The derivation begins by considering the FIVP in the form [9]

$$\begin{aligned} \mathrm{D}^{\alpha }y(t) = f\bigl(t,y(t)\bigr),\qquad y^{k}(0)=y_{0}^{k},\quad k=0,1,\dots, \lceil \alpha \rceil -1. \end{aligned}$$
(5)

The FIVP of Equation (5) is known to be equivalent to a Volterra integral equation [9] and can be rewritten in the form

$$\begin{aligned} y(t)= \sum_{k=0}^{ \lceil \alpha \rceil -1} \frac{t^{k}}{k!}y^{k}(0)+\frac{1}{\Gamma (\alpha )} \int _{0}^{t}\bigl[(t- \tau )^{\alpha -1}f\bigl( \tau, y(\tau )\bigr)\bigr]\,d\tau. \end{aligned}$$
(6)

Since \(0<\alpha <1\) gives \(\lceil \alpha \rceil =1\), Equation (5) reduces to the classical first-order initial value form

$$\begin{aligned} Dy(t) = f\bigl(t,y(t)\bigr),\qquad y(0)=y_{0}. \end{aligned}$$
(7)

and Equation (6) then simplifies, as shown in [8], to

$$\begin{aligned} y(t)=y_{0}+\frac{1}{\Gamma (\alpha )} \int _{0}^{t} \bigl[(t-\tau )^{ \alpha -1} f\bigl( \tau, y(\tau )\bigr)\bigr]\,d\tau. \end{aligned}$$
(8)

Equation (8) is then evaluated under the following two conditions:

i. As \(t=t_{i+1}\), we have

$$\begin{aligned} y(t_{i+1})=y_{0}+\frac{1}{\Gamma (\alpha )} \int _{0}^{t_{i+1}} \bigl[(t_{i+1}- \tau )^{\alpha -1} f\bigl(\tau, y(\tau )\bigr)\bigr]\,d\tau. \end{aligned}$$
(9)

ii. As \(t=t_{i}\), we have

$$\begin{aligned} y(t_{i})=y_{0}+\frac{1}{\Gamma (\alpha )} \int _{0}^{t_{i}} \bigl[(t_{i}- \tau )^{\alpha -1} f\bigl(\tau, y(\tau )\bigr)\bigr]\,d\tau. \end{aligned}$$
(10)

Subtracting Equation (10) from Equation (9) will yield

$$\begin{aligned} \begin{aligned} y(t_{i+1})={}& y(t_{i}) + \frac{1}{\Gamma (\alpha )} \biggl[ \int _{0}^{t_{i+1}} (t_{i+1}-\tau )^{\alpha -1} f\bigl(\tau, y(\tau )\bigr)\,d\tau \\ &{}- \int _{0}^{t_{i}} (t_{i}-\tau )^{\alpha -1} f\bigl(\tau, y( \tau )\bigr)\,d\tau \biggr]. \end{aligned} \end{aligned}$$
(11)

The implicit formula of FAM22 is of order 2. Therefore, two interpolating functions of \(F_{i+1}\) and \(F_{i}\) are required in the Lagrange approximation for the proposed method, given by

$$\begin{aligned} P(t)\approx f\bigl(\tau,y(\tau )\bigr)= \frac{t-t_{i}}{t_{i+1}-t_{i}} F_{i+1} + \frac{t-t_{i+1}}{t_{i}-t_{i+1}} F_{i}, \end{aligned}$$
(12)

and we also let

$$\begin{aligned} h=t_{i+1}-t_{i},\quad t=\tau. \end{aligned}$$
(13)

Substituting (12) and (13) into (11) yields

$$\begin{aligned} \begin{aligned} y(t_{i+1})={}& y(t_{i}) + \frac{1}{\Gamma (\alpha )} \biggl[ \int _{0}^{t_{i+1}} (t_{i+1}-t)^{\alpha -1} \biggl( \frac{t-t_{i}}{t_{i+1}-t_{i}} F_{i+1} + \frac{t-t_{i+1}}{t_{i}-t_{i+1}} F_{i} \biggr)\,dt \\ & {}- \int _{0}^{t_{i}} (t_{i}-t)^{ \alpha -1} \biggl( \frac{t-t_{i}}{t_{i+1}-t_{i}} F_{i+1} + \frac{t-t_{i+1}}{t_{i}-t_{i+1}} F_{i} \biggr)\,dt \biggr] \\ ={}& y(t_{i}) + \frac{1}{\Gamma (\alpha )} \Biggl[ \sum _{m=0}^{i} \int _{t_{m}}^{t_{m+1}} (t_{i+1}-t)^{\alpha -1} \biggl( \frac{t-t_{i}}{t_{i+1}-t_{i}} F_{i+1} + \frac{t-t_{i+1}}{t_{i}-t_{i+1}} F_{i} \biggr)\,dt \\ & {}-\sum_{m=0}^{i-1} \int _{t_{m}}^{t_{m+1}} (t_{i}-t)^{\alpha -1} \biggl(\frac{t-t_{i}}{t_{i+1}-t_{i}} F_{i+1} + \frac{t-t_{i+1}}{t_{i}-t_{i+1}} F_{i} \biggr)\,dt \Biggr] \\ ={}& y(t_{i}) + \frac{1}{\Gamma (\alpha )} \Biggl[ \sum _{m=0}^{i} \biggl( \frac{F_{i+1}}{h} \int _{t_{m}}^{t_{m+1}} (t_{i+1}-t)^{ \alpha -1}(t-t_{i}) \,dt \\ &{}- \frac{F_{i}}{h} \int _{t_{m}}^{t_{m+1}} (t_{i+1}-t)^{\alpha -1}(t-t_{i+1}) \,dt \biggr) \\ &{} -\sum_{m=0}^{i-1} \biggl( \frac{F_{i+1}}{h} \int _{t_{m}}^{t_{m+1}} (t_{i}-t)^{\alpha -1}(t-t_{i}) \,dt \\ & {}- \frac{F_{i}}{h} \int _{t_{m}}^{t_{m+1}} (t_{i}-t)^{\alpha -1}(t-t_{i+1}) \,dt \biggr) \Biggr]. \end{aligned} \end{aligned}$$
(14)

Next, we implement the following change of variables, where:

i. For the first summation part,

$$\begin{aligned} dy=-dt, \qquad y=t_{i+1}-t; \end{aligned}$$

ii. For the second summation part,

$$\begin{aligned} dy=-dt,\qquad y=t_{i}-t. \end{aligned}$$

Applying this change of variables to Equation (14) gives the following:

$$\begin{aligned} \begin{aligned} y(t_{i+1})={}& y(t_{i}) + \frac{1}{\Gamma (\alpha )} \Biggl[ \sum _{m=0}^{i} \biggl\{ \frac{F_{i+1}}{h} \biggl( - \int _{t_{i+1}-t_{m}}^{t_{i+1}-t_{m+1}} (y)^{\alpha -1} (t_{i+1}-y-t_{i})\,dy \biggr) \\ &{} - \frac{F_{i}}{h} \biggl( - \int _{t_{i+1}-t_{m}}^{t_{i+1}-t_{m+1}} (y)^{\alpha -1} (t_{i+1}-y-t_{i+1})\,dy \biggr) \biggr\} \\ &{} -\sum_{m=0}^{i-1} \biggl\{ \frac{F_{i+1}}{h} \biggl( - \int _{t_{i}-t_{m}}^{t_{i}-t_{m+1}} (y)^{ \alpha -1} (t_{i}-y-t_{i})\,dy \biggr) \\ & {}-\frac{F_{i}}{h} \biggl( - \int _{t_{i}-t_{m}}^{t_{i}-t_{m+1}} (y)^{ \alpha -1} (t_{i}-y-t_{i+1})\,dy \biggr) \biggr\} \Biggr]. \end{aligned} \end{aligned}$$
(15)

Following that,

$$\begin{aligned} \begin{aligned} y(t_{i+1})={}& y(t_{i}) + \frac{1}{\Gamma (\alpha )} \Biggl[ \sum _{m=0}^{i} \biggl\{ \frac{F_{i+1}}{h} \biggl( - \int _{t_{i+1}-t_{m}}^{t_{i+1}-t_{m+1}} (y)^{\alpha -1} (t_{i+1}-y-t_{i})\,dy \biggr) \\ & {} -\frac{F_{i}}{h} \biggl( - \int _{t_{i+1}-t_{m}}^{t_{i+1}-t_{m+1}} (y)^{\alpha -1} (t_{i+1}-y-t_{i+1})\,dy \biggr) \biggr\} \\ &{} -\sum_{m=0}^{i-1} \biggl\{ \frac{F_{i+1}}{h} \biggl( - \int _{t_{i}-t_{m}}^{t_{i}-t_{m+1}} (y)^{ \alpha -1} (t_{i}-y-t_{i})\,dy \biggr) \\ & {}- \frac{F_{i}}{h} \biggl( - \int _{t_{i}-t_{m}}^{t_{i}-t_{m+1}} (y)^{ \alpha -1} (t_{i}-y-t_{i+1})\,dy \biggr) \biggr\} \Biggr] \\ ={}& y(t_{i}) + \frac{1}{\Gamma (\alpha )} \Biggl[ \sum _{m=0}^{i} \biggl\{ \frac{F_{i+1}}{h} \biggl( - \int _{t_{i+1}-t_{m}}^{t_{i+1}-t_{m+1}} (y)^{\alpha -1} (h-y)\,dy \biggr) \\ &{}- \frac{F_{i}}{h} \biggl( - \int _{t_{i+1}-t_{m}}^{t_{i+1}-t_{m+1}} (y)^{\alpha -1} (-y)\,dy \biggr) \biggr\} \\ &{} -\sum_{m=0}^{i-1} \biggl\{ \frac{F_{i+1}}{h} \biggl( - \int _{t_{i}-t_{m}}^{t_{i}-t_{m+1}} (y)^{ \alpha -1} (-y)\,dy \biggr) \\ &{} - \frac{F_{i}}{h} \biggl( - \int _{t_{i}-t_{m}}^{t_{i}-t_{m+1}} (y)^{ \alpha -1} (-h-y)\,dy \biggr) \biggr\} \Biggr] \\ ={}& y(t_{i}) + \frac{1}{\Gamma (\alpha )} \Biggl[ \sum _{m=0}^{i} \biggl\{ \frac{F_{i+1}}{h} \biggl( - \int _{t_{i+1}-t_{m}}^{t_{i+1}-t_{m+1}} \bigl(hy^{\alpha -1} -y^{\alpha }\bigr)\,dy \biggr) \\ & {} -\frac{F_{i}}{h} \biggl( - \int _{t_{i+1}-t_{m}}^{t_{i+1}-t_{m+1}} \bigl(-y^{\alpha }\bigr)\,dy \biggr) \biggr\} \\ &{} -\sum_{m=0}^{i-1} \biggl\{ \frac{F_{i+1}}{h} \biggl( - \int _{t_{i}-t_{m}}^{t_{i}-t_{m+1}} \bigl(-y^{ \alpha }\bigr)\,dy \biggr) \\ & {}- \frac{F_{i}}{h} \biggl( - \int _{t_{i}-t_{m}}^{t_{i}-t_{m+1}} \bigl(-hy^{ \alpha -1} -y^{\alpha }\bigr)\,dy \biggr) \biggr\} \Biggr]. \end{aligned} \end{aligned}$$
(16)

Next, solving the integration part will give

$$\begin{aligned} \begin{aligned} y(t_{i+1})={}& y(t_{i}) + \frac{1}{\Gamma (\alpha )} \Biggl[ \sum _{m=0}^{i} \biggl\{ \frac{F_{i+1}}{h} \biggl( \frac{-h}{\alpha }\bigl[(t_{i+1}-t_{m+1})^{ \alpha }-(t_{i+1}-t_{m})^{\alpha } \bigr] \\ &{} + \frac{1}{\alpha +1}\bigl[(t_{i+1}-t_{m+1})^{\alpha +1}-(t_{i+1}-t_{m})^{ \alpha +1} \bigr] \biggr) \\ & {} -\frac{F_{i}}{h} \biggl( \frac{1}{\alpha +1} \bigl[(t_{i+1}-t_{m+1})^{ \alpha +1}-(t_{i+1}-t_{m})^{\alpha +1} \bigr] \biggr) \biggr\} \\ &{} -\sum_{m=0}^{i-1} \biggl\{ \frac{F_{i+1}}{h} \biggl( \frac{1}{\alpha +1}\bigl[(t_{i}-t_{m+1})^{ \alpha +1}-(t_{i}-t_{m})^{\alpha +1} \bigr] \biggr) \\ & {}- \frac{F_{i}}{h} \biggl( \frac{h}{\alpha } \bigl[(t_{i}-t_{m+1})^{\alpha }-(t_{i}-t_{m})^{ \alpha } \bigr] \\ &{}+ \frac{1}{\alpha +1}\bigl[(t_{i}-t_{m+1})^{\alpha +1}-(t_{i}-t_{m})^{ \alpha +1} \bigr] \biggr) \biggr\} \Biggr] \\ ={}& y(t_{i}) + \frac{1}{\Gamma (\alpha )} \biggl[ \biggl\{ \frac{F_{i+1}}{h} \biggl( \frac{-h}{\alpha }\bigl[(t_{i+1}-t_{i+1})^{ \alpha }-(t_{i+1}-t_{0})^{\alpha } \bigr] \\ & {} + \frac{1}{\alpha +1}\bigl[(t_{i+1}-t_{i+1})^{\alpha +1}-(t_{i+1}-t_{0})^{ \alpha +1} \bigr] \biggr) \\ & {}- \frac{F_{i}}{h} \biggl( \frac{1}{\alpha +1} \bigl[(t_{i+1}-t_{i+1})^{\alpha +1}-(t_{i+1}-t_{0})^{ \alpha +1} \bigr] \biggr) \biggr\} \\ & {}- \biggl\{ \frac{F_{i+1}}{h} \biggl( \frac{1}{\alpha +1} \bigl[(t_{i}-t_{i})^{\alpha +1}-(t_{i}-t_{0})^{\alpha +1} \bigr] \biggr) \\ & {}- \frac{F_{i}}{h} \biggl( \frac{h}{\alpha } \bigl[(t_{i}-t_{i})^{\alpha }-(t_{i}-t_{0})^{ \alpha } \bigr] \\ & {}+ \frac{1}{\alpha +1}\bigl[(t_{i}-t_{i})^{\alpha +1}-(t_{i}-t_{0})^{\alpha +1} \bigr] \biggr) \biggr\} \biggr]. \end{aligned} \end{aligned}$$
(17)

Therefore, the numerical formula of the implicit part in FAM22 is obtained as follows:

$$\begin{aligned} \begin{aligned} y(t_{i+1})={}& y(t_{i}) + \frac{1}{\Gamma (\alpha )} \biggl[ h^{\alpha } \biggl\{ \biggl( \frac{(i+1)^{\alpha }}{\alpha } - \frac{(i+1)^{\alpha +1}}{\alpha +1} \biggr) F_{i+1} + \biggl( \frac{(i+1)^{\alpha +1}}{\alpha +1} \biggr) F_{i} \biggr\} \\ & {}- h^{\alpha } \biggl\{ \biggl(- \frac{(i)^{\alpha +1}}{\alpha +1} \biggr) F_{i+1} + \biggl( \frac{(i)^{\alpha }}{\alpha } + \frac{(i)^{\alpha +1}}{\alpha +1} \biggr) F_{i} \biggr\} \biggr], \\ y(t_{i+1})={}& y(t_{i}) + \frac{h^{\alpha }}{\Gamma (\alpha )} \biggl[ \biggl( \frac{(i+1)^{\alpha }}{\alpha } + \frac{(i)^{\alpha +1}-(i+1)^{\alpha +1}}{\alpha +1} \biggr) F_{i+1} \\ &{}+ \biggl( \frac{-(i)^{\alpha }}{\alpha } + \frac{(i+1)^{\alpha +1}-(i)^{\alpha +1}}{\alpha +1} \biggr) F_{i} \biggr]. \end{aligned} \end{aligned}$$
(18)

Equation (18) will act as the corrector of the proposed method FAM22.

Analysis of the method

As mentioned before, the main interest of this paper is to propose an implicit method for solving problems of FDE in a predict–correct numerical scheme. Therefore, the analysis of the method will focus on the implicit method of FAM22 that has been developed.

Order of the method

Definition 1

([5])

The general formulation of fractional linear multistep method (FLMM) for the solution of Equation (5) is considered:

$$\begin{aligned} \sum_{j=0}^{i}\alpha _{j}y_{i-j}=h^{\alpha }\sum_{j=0}^{i} \beta _{j}f(t_{i-j},y_{i-j}), \end{aligned}$$
(19)

where \(\alpha _{j}\) and \(\beta _{j}\) are real parameters and α denotes the fractional order.

Definition 2

([10])

If \(C_{0}=C_{1}=\cdots =C_{q}=0\) and \(C_{q+1} \neq 0\), the linear multistep method is said to be of order q. The constants \(C_{q}\) are calculated by the formula

$$\begin{aligned} C_{q}=\sum_{j=0}^{k} \biggl[\frac{j^{q} \alpha _{j}}{q!}- \frac{j^{q-1}\beta _{j}}{(q-1)!} \biggr], \quad q=0,1,2,\ldots, \end{aligned}$$
(20)

where k is the step number of the method and \(\alpha _{j}\), \(\beta _{j}\) are the coefficients of the method as in Equation (19). It is important to note that the method’s error constant is \(C_{q+1}\).

Proof

For the objective of investigating the order of the implicit method of FAM22 in Equation (18), firstly, we obtain \(\alpha _{j}\) and \(\beta _{j}\) by comparing Equations (18) and (19). Therefore, we have

$$\begin{aligned} \begin{aligned} &\alpha _{0}=-1,\qquad \beta _{0}= \frac{1}{\Gamma (\alpha )} \biggl( \frac{-(i)^{\alpha }}{\alpha } + \frac{(i+1)^{\alpha +1}-(i)^{\alpha +1}}{\alpha +1} \biggr), \\ &\alpha _{1}=1,\qquad \beta _{1}= \frac{1}{\Gamma (\alpha )} \biggl( \frac{(i+1)^{\alpha }}{\alpha } + \frac{(i)^{\alpha +1}-(i+1)^{\alpha +1}}{\alpha +1} \biggr). \end{aligned} \end{aligned}$$
(21)

Substituting Equation (21) into (20) will give

$$\begin{aligned} \begin{aligned} &C_{0}=\sum _{j=0}^{2}\alpha _{j}=0, \\ &C_{1}=\sum_{j=0}^{2}(j\alpha _{j}-\beta _{j})=0, \\ &C_{2}=\sum_{j=0}^{2}\biggl( \frac{j^{2}\alpha _{j}}{2!}-j\beta _{j}\biggr)=0, \\ &C_{3}=\sum_{j=0}^{2}\biggl( \frac{j^{3}\alpha _{j}}{3!}- \frac{j^{2}\beta _{j}}{2!}\biggr)=-\frac{1}{12}. \end{aligned} \end{aligned}$$
(22)

Therefore, the implicit method is proven to be of order 2 with error constant \(C_{3}=-\frac{1}{12}\). □

Order of accuracy

To begin with, the implicit method of FAM22 is given in Equation (18), and for simplicity, let

$$\begin{aligned} \begin{aligned} &A =\frac{(i+1)^{\alpha }}{\alpha }, \qquad B= \frac{(i)^{\alpha +1}-(i+1)^{\alpha +1}}{\alpha +1}, \\ &C= \frac{-(i)^{\alpha }}{\alpha },\qquad D= \frac{(i+1)^{\alpha +1}-(i)^{\alpha +1}}{\alpha +1}. \end{aligned} \end{aligned}$$
(23)

Hence, we will obtain

$$\begin{aligned} \begin{aligned} y(t_{i+1})= y(t_{i}) + \frac{h^{\alpha }}{\Gamma (\alpha )} \bigl[ ( A + B ) F_{i+1}+ (C + D ) F_{i} \bigr]. \end{aligned} \end{aligned}$$

Next, according to the concept of Taylor expansion, it is given that

$$\begin{aligned} \begin{aligned} y(t_{i+1})=y(t_{i}) + hy^{\prime }(t_{i}) + \frac{h^{2}}{2!} y^{\prime \prime }( \theta ), \end{aligned} \end{aligned}$$
(24)

and for the case of initial value problem,

$$\begin{aligned} \begin{aligned} y^{\prime }(t_{i}) \approx f\bigl(t_{i},y(t_{i})\bigr). \end{aligned} \end{aligned}$$
(25)

Therefore, the local truncation error denoted as \(e_{i+1}\) can be obtained as follows:

$$\begin{aligned} \begin{aligned} e_{i+1}=y(t_{i+1})-y(t_{i}) - \frac{h^{\alpha }}{\Gamma (\alpha )} \bigl[ ( A + B ) y^{\prime }(t_{i+1}) \bigr]- \frac{h^{\alpha }}{\Gamma (\alpha )} \bigl[ (C + D ) y^{\prime }(t_{i}) \bigr]. \end{aligned} \end{aligned}$$
(26)

Expanding Equation (26) by implementing the Taylor expansion in Equations (24) and (25) will give

$$\begin{aligned} \begin{aligned} e_{i+1}={}&y(t_{i}) + hy^{\prime }(t_{i}) + \frac{h^{2}}{2!} y^{\prime \prime }( \theta )-y(t_{i}) - \frac{h^{\alpha }}{\Gamma (\alpha )} ( A + B ) \biggl[ y^{\prime }(t_{i}) + hy^{\prime \prime }(t_{i}) + \frac{h^{2}}{2!} y^{\prime \prime \prime }(\theta ) \biggr] \\ &{}-\frac{h^{\alpha }}{\Gamma (\alpha )} (C + D ) y^{\prime }(t_{i}) + O \bigl(h^{3}\bigr). \end{aligned} \end{aligned}$$
(27)

Based on Equation (27), the local truncation error \(e_{i+1}\) is \(O(h^{3})\). The local truncation error of a method of order n is \(O(h^{n+1})\), where h is the step size. Therefore, it can be concluded that the proposed method has second order of accuracy.

Convergence analysis

Theorem 1

([11])

Let \(f(t,y)\) be Lipschitz continuous at all points \((t,y)\) in the region R [10], defined by

$$\begin{aligned} a\leq t \leq b, \quad -\infty < y< \infty, \end{aligned}$$
(28)

such that a and b are finite. Suppose that there exists a constant L such that, for every t, y, and \(y^{*}\) with the points \((t,y)\) and \((t,y^{*})\) both in R,

$$\begin{aligned} \bigl\vert f(t,y)-f\bigl(t,y^{*}\bigr) \bigr\vert \leq L \bigl\vert y-y^{*} \bigr\vert . \end{aligned}$$
(29)

Theorem 2

([1, 11, 12])

A linear multistep method is said to be convergent if, for all initial value problems subject to the hypotheses of Theorem 1, with \(t\in [a,b]\) and \(0<\alpha <1\), we have

$$\begin{aligned} \bigl\vert y-y^{*} \bigr\vert \leq K t^{\alpha -1}h^{p}, \end{aligned}$$

where K is a constant depending only on α and p, with \(p \in (0,1)\) as stated by [12], and

$$\begin{aligned} \lim_{h\rightarrow 0} y_{i}=y^{*}(t_{i}). \end{aligned}$$

Proof

In the first step of this convergence analysis, we recall the proposed method as in Equation (18), where

$$\begin{aligned} \begin{aligned} y(t_{i+1})={}& y(t_{i}) + \frac{h^{\alpha }}{\Gamma (\alpha )} \biggl[ \biggl( \frac{(i+1)^{\alpha }}{\alpha } + \frac{(i)^{\alpha +1}-(i+1)^{\alpha +1}}{\alpha +1} \biggr) F_{i+1} \\ & {}+\biggl( \frac{-(i)^{\alpha }}{\alpha } + \frac{(i+1)^{\alpha +1}-(i)^{\alpha +1}}{\alpha +1} \biggr) F_{i} \biggr]. \end{aligned} \end{aligned}$$

Based on the above equation, let

$$\begin{aligned} \begin{aligned} &P =\frac{(i+1)^{\alpha }}{\alpha } + \frac{(i)^{\alpha +1}-(i+1)^{\alpha +1}}{\alpha +1}, \\ &Q= \frac{-(i)^{\alpha }}{\alpha } + \frac{(i+1)^{\alpha +1}-(i)^{\alpha +1}}{\alpha +1}. \end{aligned} \end{aligned}$$
(30)

For the next step, substituting Equation (30) into Equation (18) will give the following:

i. The exact form of the system is given by

$$\begin{aligned} \begin{aligned} &y^{*}(t_{i+1})-y^{*}(t_{i})= \frac{h^{\alpha }}{\Gamma (\alpha )} ( P ) F_{i+1}^{*} + \frac{h^{\alpha }}{\Gamma (\alpha )} ( Q ) F_{i}^{*} -\frac{1}{12} h^{3}y^{*(3)}( \xi ). \end{aligned} \end{aligned}$$
(31)

ii. The approximate form of the system is

$$\begin{aligned} \begin{aligned} y(t_{i+1})-y(t_{i})= \frac{h^{\alpha }}{\Gamma (\alpha )} ( P ) F_{i+1} + \frac{h^{\alpha }}{\Gamma (\alpha )} ( Q ) F_{i}. \end{aligned} \end{aligned}$$
(32)

Subtracting Equation (32) from (31) will give

$$\begin{aligned} \begin{aligned} y(t_{i+1})-y^{*}(t_{i+1})={}&y(t_{i})-y^{*}(t_{i}) + \frac{h^{\alpha }}{\Gamma (\alpha )} (P) \bigl[f(t_{i+1},y_{i+1})-f \bigl(t^{*}_{i+1},y^{*}_{i+1}\bigr)\bigr] \\ &{}+\frac{h^{\alpha }}{\Gamma (\alpha )} (Q) \bigl[f(t_{i},y_{i})-f \bigl(t^{*}_{i},y^{*}_{i}\bigr)\bigr] \\ &{}-\frac{1}{12} h^{3}y^{*(3)}(\xi ). \end{aligned} \end{aligned}$$
(33)

Let

$$\begin{aligned} \begin{aligned} & \vert d_{i+1} \vert = \bigl\vert y_{i+1}-y^{*}_{i+1} \bigr\vert ,\qquad \vert d_{i} \vert = \bigl\vert y_{i}-y^{*}_{i} \bigr\vert . \end{aligned} \end{aligned}$$
(34)

In the next step, we apply the Lipschitz condition as in Theorem 1 and the assumption in Equation (34). Therefore, we have

$$\begin{aligned} \begin{aligned} \biggl(1-\frac{h^{\alpha }P}{\Gamma (\alpha )} \biggr) \vert d_{i+1} \vert \leq \biggl(1+\frac{h^{\alpha }Q}{\Gamma (\alpha )} \biggr) \vert d_{i} \vert -\frac{1}{12}h^{3}y^{*(3)}(\xi ). \end{aligned} \end{aligned}$$
(35)

Rewriting Equation (35) based on Theorem 2, we obtain

$$\begin{aligned} \begin{aligned} \bigl(1-Kh^{\alpha } \bigr) \vert d_{i+1} \vert \leq \bigl(1+Kh^{ \alpha } \bigr) \vert d_{i} \vert - \frac{1}{12}h^{3}y^{*(3)}( \xi ). \end{aligned} \end{aligned}$$
(36)

As h is sufficiently small, that is, \(h\rightarrow 0\), and the initial error tends to 0, we obtain \(| d_{i+1} | \leq | d_{i} |\), and hence \(y_{i+1}\rightarrow y_{i+1}^{*}\) and \(y_{i}\rightarrow y_{i}^{*}\). As a result, Theorem 2 is satisfied, and the implicit method of FAM22 is proven to converge. □

Stability of the method

Characteristic polynomial

Definition 3

Let \(\lambda _{1}, \lambda _{2}, \ldots, \lambda _{m}\) be the roots of the characteristic equation

$$\begin{aligned} P(\lambda )= \lambda ^{m}-a_{m-1}\lambda ^{m-1}- \cdots-a_{1}\lambda -a_{0}, \end{aligned}$$
(37)

for the given m-step multistep method,

$$\begin{aligned} \begin{aligned} y_{i+1}={}&a_{m-1}y_{i}+a_{m-2}y_{i-1}+ \cdots+a_{0}y_{i+1-m} \\ &{}+h\bigl[b_{m}f(t_{i+1},y_{i+1})+b_{m-1}f(t_{i},y_{i})+ \cdots+b_{0}f(t_{i+1-m},y_{i+1-m})\bigr]. \end{aligned} \end{aligned}$$
(38)

If all roots with absolute value 1 are simple roots, then the difference equation is said to satisfy the root condition. Following that, methods that satisfy the root condition and have \(\lambda =1\) as the only root of the characteristic equation with magnitude one are called strongly stable [13].

Proof

Based on Definition 3, we can obtain the characteristic polynomial of our proposed method of Equation (18) as follows:

$$\begin{aligned} \begin{aligned} &P(\lambda )=\lambda ^{2}-\lambda =0, \\ &\lambda (\lambda -1)=0, \\ &\lambda =0,\qquad \lambda =1. \end{aligned} \end{aligned}$$
(39)

The above characteristic polynomial satisfies the root condition; therefore the method is strongly stable. □

Stability region

The stability analysis for the implicit method of FAM22 is performed by considering the following test equation [14]:

$$\begin{aligned} \begin{aligned} &D^{\alpha }y(t)=\lambda y(t),\qquad \lambda \in \mathbb{C},\quad 0< \alpha < 1, \\ &y(t_{0})=y_{0}, \end{aligned} \end{aligned}$$
(40)

where the exact solution can be expressed in terms of the Mittag-Leffler function

$$\begin{aligned} E_{\alpha }(t)=\sum_{k=0}^{\infty } \biggl( \frac{t^{k}}{\Gamma (\alpha k+1)} \biggr) \end{aligned}$$

as \(y(t)=E_{\alpha }(\lambda (t-t_{0})^{\alpha })y_{0}\).

Substituting test Equation (40) into the numerical method in Equation (18) results in the stability polynomial of

$$\begin{aligned} \begin{aligned} & \biggl(1-\frac{\bar{h}}{\beta } \biggl( \frac{(i+1)^{\alpha }}{\alpha }+ \frac{(i)^{\alpha +1}-(i+1)^{\alpha +1}}{\alpha +1} \biggr) \biggr) r^{2} \\ &\quad{} - \biggl(1+\frac{\bar{h}}{\beta } \biggl( \frac{-(i)^{\alpha }}{\alpha }+ \frac{(i+1)^{\alpha +1}-(i)^{\alpha +1}}{\alpha +1} \biggr) \biggr) r=0, \end{aligned} \end{aligned}$$
(41)

where r is the root of the stability polynomial, \(\bar{h}=\lambda h^{\alpha }\), and \(\beta =\Gamma (\alpha )\).

The stability regions for different values of α are shown in Fig. 1, plotted using Maple software. In each plot, the horizontal axis is the real axis (Re) and the vertical axis is the imaginary axis (Im).

Figure 1

Stability region for the implicit method of FAM22 as \(\alpha =0.3-0.7\)

The boundary of the stability region is determined by substituting the root r in the stability polynomial with \(1, -1\), and \(e^{I\theta }=\cos (\theta )+ I \sin (\theta )\), where I is the imaginary unit and \(0<\theta <2\pi \). The stability region is determined from the condition that every root r must satisfy \(|r| \leq 1\), as stated in [15]. As a result, the proposed method is stable in the shaded region. The three plots also show that the regions are symmetric about the real axis and that the shape of the stability regions does not vary. However, as α increases, the regions appear to be enlarged.

The stability regions for the explicit method of FAM22 are also illustrated in Fig. 2 for \(\alpha =0.3, 0.5\), and 0.7. Based on the figure, it can be seen that as α increases, the regions, which are symmetric about the real axis, approach the left side of the imaginary axis.

Figure 2

Stability region for the explicit method of FAM22 as \(\alpha =0.3-0.7\)

Implementation

Algorithm of the method

First, the basic inputs are the values of the endpoints a and b, the total number of intervals N, and the value of α. As we are interested in solving FDE of single order, we also need to input the initial value \(y_{0}\). The implementation procedure of the FAM22 method is as follows:

  1. Step 1.

    Set \(t_{0}=a, t_{1}=b, y_{0}=c\), \(\alpha = \mathit{alpha}\), \(\Gamma (\alpha )=\mathit{gamma}\), and \(h=\frac{b-a}{N}\).

  2. Step 2.

    For \(i=0,1\), calculate the approximate value of \(y_{1}\) by using the fractional Euler method [16] \(y(t_{i+1})= y(t_{i}) + \frac{h^{\alpha }}{\Gamma (\alpha )} (F_{i})\).

  3. Step 3.

Next, the implementation using FAM22 begins for \(i=2,3,\ldots,N\) in order to obtain the approximate values \(y_{2}, y_{3},\ldots, y_{N}\). Steps 4–6 are iterated until \(y_{N}\) is achieved.

  4. Step 4.

    Set \(t=a+ih\).

  5. Step 5.

    The method is implemented in a predictor–corrector numerical scheme. Explicit formula of Equation (4) is used as the predictor. Implicit formula of (18) which acts as the corrector is applied to obtain the desired approximate solution.

  6. Step 6.

    Calculate the absolute error using the formula \(\mathit{error} = \vert y_{i}-Y_{i} \vert \), where \(y_{i}\) is the approximate solution and \(Y_{i}\) is the exact solution. The output is the absolute error at each point t.

  7. Step 7.

    STOP.

Numerical examples and discussion

This research study includes solving five different types of FDE problems. Example 1 is a FIVP with variable coefficients, while Example 2 is a simple linear FDE with the Mittag-Leffler function as its exact solution. Example 3 is a system of FIVPs, which has an exact solution when \(\alpha =1.0\). Example 4 is a nonlinear FDE initial value problem, whereas Example 5 is an application problem involving fractional Riccati differential equations (FRDE).

All numerical examples are computed in C programming. Below are the notations used in the tables:

N::

Total number of intervals.

P::

Predictor.

C::

Corrector.

Approx.::

Approximate solution \(y(t)\).

Error::

Absolute error.

MXE::

Maximum error.

EOC::

Order of convergence.

FAM22::

Fractional Adams method of explicit order 2, implicit order 2 (in this research).

FFDM::

Fractional finite difference method [17].

2-BFBDF::

2-step Block fractional backward differentiation formula [1].

SFMoPF::

Spline function method of polynomial form [14].

FVIM::

Fractional variational iteration method [18].

MHPM::

Modified homotopy perturbation method [19].

Numerical examples

Example 1

FIVP with variable coefficients [17], given by

$$\begin{aligned} \begin{aligned} \mathrm{D}^{\alpha }y(t)=\frac{40{,}320}{\Gamma (9-\alpha )}t^{8-\alpha }-3 \frac{\Gamma (5+\alpha /2)}{\Gamma (5-\alpha /2)}t^{4-\alpha /2}+ \frac{9}{4}\Gamma (\alpha +1),\qquad y(0)=0. \end{aligned} \end{aligned}$$
(42)

The exact solution is \({y(t)=t^{8}-3t^{4+\alpha /2}+\frac{9}{4}t^{\alpha }}\).

Example 2

A simple linear fractional differential equation [20] is given by

$$\begin{aligned} \begin{aligned} \mathrm{D}^{\alpha }y(t) =-y(t),\qquad y(0)=1. \end{aligned} \end{aligned}$$
(43)

The exact solution is \({y(t)=E_{\alpha }(-t^{\alpha })}\), where \(E_{\alpha }(z)\) is the Mittag-Leffler function defined as \({E_{\alpha }(z)=\sum_{k=0}^{\infty } \frac{z^{k}}{\Gamma (k\alpha +1)}}\).

Example 3

The system of FIVP [1] is given by

$$\begin{aligned} \begin{aligned} &\mathrm{D}^{\alpha }y_{1}(t)=y_{1}(t)+y_{2}(t),\qquad y_{1}(0)=0. \\ &\mathrm{D}^{\alpha }y_{2}(t)=-y_{1}(t)+y_{2}(t),\qquad y_{2}(0)=1. \end{aligned} \end{aligned}$$
(44)

The exact solution is \({y_{1}(t)=e^{t}\sin (t), y_{2}(t)=e^{t}\cos (t)}\) when \(\alpha =1.0\).

Example 4

A nonlinear initial value problem of FDE [14] is given by

$$\begin{aligned} \begin{aligned} \mathrm{D}^{\alpha }y(t)= ( 1- y )^{4},\qquad y(0)=1. \end{aligned} \end{aligned}$$
(45)

The exact solution is \({y(t)=\frac{ 1+3t-(1+6t+9t^{2})^{\frac{1}{3}} }{(1+3t)}}\) as \(\alpha =1.0\).

Example 5

An application problem of fractional Riccati differential equations [18] is given by

$$\begin{aligned} \begin{aligned} \mathrm{D}^{\alpha }y(t)= -y^{2}(t)+1,\qquad y(0)=0.\end{aligned} \end{aligned}$$
(46)

The exact solution is \({y(t)=\frac{e^{2t}-1}{e^{2t}+1}}\) when \(\alpha =1.0\).
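To make the predict–correct idea concrete, the sketch below implements the well-known one-correction fractional Adams PECE scheme of Diethelm and Ford [7] (a rectangle-rule predictor followed by a trapezoidal-rule corrector, related in spirit to FAM22 but not the paper's own code; the function name is ours) and applies it to Eq. (46). For \(\alpha =1\) the exact solution reduces to \(y(t)=\frac{e^{2t}-1}{e^{2t}+1}=\tanh (t)\).

```python
from math import gamma, tanh

def fde_pece(f, y0, alpha, T, N):
    """One-correction Adams PECE scheme for D^alpha y = f(t, y), y(0) = y0,
    0 < alpha <= 1 (Diethelm-Ford weights)."""
    h = T / N
    t = [j * h for j in range(N + 1)]
    y = [y0] * (N + 1)
    fv = [f(t[0], y0)]                 # stored f(t_j, y_j) values
    c1 = h**alpha / gamma(alpha + 1)   # predictor weight factor
    c2 = h**alpha / gamma(alpha + 2)   # corrector weight factor
    for n in range(N):
        # predictor: fractional rectangle rule
        p = y0 + c1 * sum(((n + 1 - j)**alpha - (n - j)**alpha) * fv[j]
                          for j in range(n + 1))
        # corrector: fractional trapezoidal rule, evaluated at the predictor
        s = (n**(alpha + 1) - (n - alpha) * (n + 1)**alpha) * fv[0]
        s += sum(((n - j + 2)**(alpha + 1) + (n - j)**(alpha + 1)
                  - 2 * (n - j + 1)**(alpha + 1)) * fv[j]
                 for j in range(1, n + 1))
        y[n + 1] = y0 + c2 * (s + f(t[n + 1], p))
        fv.append(f(t[n + 1], y[n + 1]))
    return t, y

# Eq. (46) with alpha = 1: the result should track tanh(t)
t, y = fde_pece(lambda t, y: 1 - y * y, 0.0, 1.0, 1.0, 100)
print(abs(y[-1] - tanh(1.0)))   # small error at t = 1
```

For \(\alpha =1\) the weights collapse to forward Euler plus the trapezoidal rule (Heun's method), which is why the error above is of second order in h.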

Discussion

Table 1 presents the absolute error at each point t for solving Example 1 with \(\alpha =0.10,0.30,0.50,0.70,0.90\) at different numbers of intervals N. According to the table, accuracy improves as N increases and as α approaches 1.0; the improvement is most apparent at \(t=1.0\) for the respective step sizes \(h=10^{-3}\) and \(10^{-4}\). For comparison purposes, Table 2 shows the absolute error at each point t for \(\alpha =0.20\) and 0.50 using FAM22 and the existing method FFDM. As shown in the table, FAM22 produces results comparable to FFDM and performs better as the value of α increases. Figure 3 displays the approximate solutions for \(N=10,100,1000\) and \(\alpha =0.50\) when solving Example 1; the graph indicates that as the number of intervals N increases, the approximate solutions approach the exact solution.

Figure 3

Graph of approximate solution \(y(t)\) against point t as \(\alpha =0.50\) for solving Example 1 using FAM22

Table 1 Absolute error for solving Example 1 using FAM22
Table 2 Comparison of absolute error for solving Example 1 using FAM22 and FFDM as \(N=100\)

Table 3 shows, for Example 2, the absolute error in solving a simple fractional differential equation whose exact solution is the Mittag-Leffler function, for \(\alpha =0.3, 0.5, 0.7\) with respective step sizes \(h=10^{-3}\) and \(10^{-4}\) at each point t. It can be observed in the table that as α increases, the absolute error decreases. Furthermore, as the step size h decreases, FAM22 demonstrates improved accuracy. Figure 4 depicts the performance graph for solving Example 2, which shows that as the number of intervals N increases, the approximate solutions clearly approach the exact solution.

Figure 4

Graph of approximate solution \(y(t)\) against point t as \(\alpha =0.50\) for solving Example 2 using FAM22

Table 3 Absolute error for solving Example 2 using FAM22

Table 4 shows the absolute error at each point t when \(\alpha =1.0\) for different numbers of intervals \(N=10, 100\), and 1000 using the proposed method FAM22 for solving Example 3. Based on the table, better accuracy is obtained as N increases, which implies that the approximate solution at each point t closely approaches the exact solution as N grows. Moreover, comparing the maximum error obtained using FAM22 with that of the existing method 2-BFBDF in Table 5, FAM22 produces comparable or slightly better accuracy than 2-BFBDF. The graphs of the approximate solutions \(y_{1}\) and \(y_{2}\) for various values of α with \(N=100\) for solving Example 3 are presented in Figs. 5 and 6. The figures illustrate that as α increases, the approximate solutions indeed approach the exact solution.

Figure 5

Graph of approximate solution \(y_{1}(t)\) against point t as \(N=100\) for solving Example 3 using FAM22

Figure 6

Graph of approximate solution \(y_{2}(t)\) against point t as \(N=100\) for solving Example 3 using FAM22

Table 4 Absolute error for solving Example 3 using FAM22 as \(\alpha =1.0\)
Table 5 Comparison of maximum error as \(\alpha =1.0\) for solving Example 3 using FAM22 and 2-BFBDF

In addition, the approximate solution and absolute error when \(\alpha =1.0\) at each point t for solving a nonlinear FDE problem are tabulated in Table 6. The results show that as the number of intervals N increases, the absolute error decreases. For further analysis, the results of solving Example 4 using FAM22 are compared with those of the existing method SFMoPF. The comparison results are shown in Table 7, where FAM22 achieves results comparable to SFMoPF. The graph for Example 4 solved using FAM22 is also included in Fig. 7, where the approximate solution can clearly be seen approaching the exact solution as α increases.

Figure 7

Graph of approximate solution \(y(t)\) against point t as \(N=1000\) for solving Example 4 using FAM22

Table 6 Approximate solution and absolute error as \(\alpha =1.0\) for solving Example 4 by using FAM22
Table 7 Comparison of absolute error as \(\alpha =1.0\), \(N=10{,}000\) for solving Example 4 using FAM22 and SFMoPF

Table 8 shows the absolute error at each point t for solving the FRDE problem of Example 5 using the proposed method FAM22. The results are presented for \(\alpha =1.0\) with different numbers of intervals \(N = 10, 100\), and 1000. According to the table, the accuracy of FAM22 improves as N increases, implying that the approximate solution approaches the exact solution for smaller values of h. For further analysis, the graph for the FRDE problem solved with FAM22 is visualized in Fig. 8 for \(N = 10\) and \(\alpha = 0.75, 0.85, 0.90, 0.95, 1.0\). According to the graph, the approximate solutions \(y(t)\) approach the exact solution as α increases; in particular, when \(\alpha =1.0\), the approximate solution coincides with the exact solution.

Figure 8

Graph of approximate solution \(y(t)\) against point t as \(N=10\) for solving Example 5 using FAM22

Table 8 Absolute error for solving Example 5 using FAM22 as \(\alpha =1.0\)

This paper also includes a comparison between FAM22 and the existing methods FVIM and MHPM for solving the FRDE, as presented in Table 9. Based on the comparison table, FAM22 produces results comparable to the previous methods. It follows that FAM22 is also able to perform well in solving nonlinear FDEs [21].

Table 9 Comparison of approximate solution and absolute error as \(\alpha =1.0\), \(N= 10\) for solving Example 5 using FAM22, FVIM, and MHPM

The order of convergence (EOC) for each example is calculated by using the general formula

$$\begin{aligned} \mathrm{EOC}=\log _{10} \biggl( \frac{MXE(h)}{MXE(h/10)} \biggr). \end{aligned}$$
(47)
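As a small illustration (with a hypothetical helper name), Eq. (47) can be evaluated directly from the two maximum errors; a method of order p yields an EOC close to p when h is reduced tenfold.

```python
from math import log10

def eoc(mxe_h, mxe_h_over_10):
    # Eq. (47): order estimate from the maximum errors at step sizes h and h/10
    return log10(mxe_h / mxe_h_over_10)

print(eoc(1.0e-4, 1.0e-6))  # → 2.0 for a second-order method
```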

Here \(MXE(h)\) denotes the maximum error obtained with step size h. The order of convergence is calculated for both the predictor and corrector methods. Based on the results, the order of convergence of the corrector method is better than that of the predictor method, which achieves our goal of obtaining better numerical results by solving the FDE problem with a predict–correct numerical scheme. Besides, in terms of the order of convergence, the corrector method satisfies second-order accuracy in all examples except Example 4, which produces first-order accuracy. This could be due to the complexity of the example: Example 4 is a nonlinear initial value problem of FDE, and according to our investigation, as the computation of the approximate solution becomes more complicated, the accuracy may be affected. However, when the absolute error obtained for solving Example 4 using FAM22 is compared with that of the existing method SFMoPF, FAM22 gives comparable or slightly better accuracy than the existing method.

Conclusion

This research paper has presented the derivation, analysis, and numerical experiments for the proposed method, the fractional Adams method of explicit order 2, implicit order 2 (FAM22). Based on the experiments, FAM22 is capable of achieving results comparable to the existing methods for solving both linear and nonlinear FDEs. The effect of incrementing the fractional order α is also investigated: as α increases and approaches 1.0, the results yield better accuracy. In addition, the numerical results validate the convergence analysis, since the approximate solutions indeed converge as the step size h decreases. Therefore, FAM22 can be a suitable alternative method that preserves accuracy when solving different types of FDEs.

Availability of data and materials

All data generated or analyzed during this research study are available from the corresponding author on reasonable request.

Abbreviations

FIVP:

Fractional initial value problem

FDE:

Fractional differential equation

FAM22:

Fractional Adams method of explicit order 2, implicit order 2

PFDE:

Partial fractional differential equation

FLMM:

Fractional linear multistep method

FRDE:

Fractional Riccati differential equation

FFDM:

Fractional finite difference method

SFMoPF:

Spline function method of polynomial form

FVIM:

Fractional variational iteration method

MHPM:

Modified homotopy perturbation method

References

  1. Jator, S., Biala, T.A.: Block backward differentiation formulas for fractional differential equations. J. Eng. Math. 2015, Article ID 650425 (2015). https://doi.org/10.1155/2015/650425
  2. Garrappa, R.: On some explicit Adams multistep methods for fractional differential equations. J. Comput. Appl. Math. 229, 392–399 (2009). https://doi.org/10.1016/j.cam.2008.04.004
  3. Diethelm, K., Ford, N.J.: Analysis of fractional differential equations. J. Math. Anal. Appl. 265(2), 229–248 (2002)
  4. Li, C., Zeng, F.: The finite difference methods for fractional ordinary differential equations. Numer. Funct. Anal. Optim. 34, 149–179 (2013). https://doi.org/10.1080/01630563.2012.706673
  5. Galeone, L., Garrappa, R.: On multistep methods for differential equations of fractional order. Mediterr. J. Math. 3, 565–580 (2006). https://doi.org/10.1007/s00009-006-0097-3
  6. Ford, N., Connolly, J.: Comparison of numerical methods for fractional differential equations. Commun. Pure Appl. Anal. 5(2), 289–307 (2006). https://doi.org/10.3934/cpaa.2006.5.289
  7. Diethelm, K., Ford, N.: A predictor–corrector approach for the numerical solution of fractional differential equations. Nonlinear Dyn. 29, 3–22 (2002). https://doi.org/10.1023/A:1016592219341
  8. Batogna, R., Atangana, A.: New two step Laplace Adam–Bashforth method for integer and noninteger order partial differential equations. Numer. Methods Partial Differ. Equ. 34, 1739–1758 (2017). https://doi.org/10.1002/num.22216
  9. Diethelm, K., Ford, N.: Detailed error analysis for a fractional Adams method. Numer. Algorithms 36, 31–52 (2004). https://doi.org/10.1023/B:NUMA.0000027736.85078.be
  10. Lambert, J.D.: Computational Methods in Ordinary Differential Equations (1973)
  11. Diethelm, K.: The Analysis of Fractional Differential Equations. An Application-Oriented Exposition Using Differential Operators of Caputo Type. Springer, Berlin (2010). https://doi.org/10.1007/978-3-642-14574-2
  12. Li, C., Tao, C.: On the fractional Adams method. Comput. Math. Appl. 58(8), 1573–1588 (2009)
  13. Burden, R.L., Faires, J.D.: Numerical Analysis, 9th edn. Cengage Learning, Boston (2010)
  14. Al-Rabtah, A., Momani, S., Ramadan, M.A.: Solving linear and nonlinear fractional differential equations using spline functions. Abstr. Appl. Anal. 2012 (2012)
  15. Bonab, Z.F., Javidi, M.: Higher order methods for fractional differential equation based on fractional backward differentiation formula of order three. Math. Comput. Simul. 172, 71–89 (2020)
  16. Ahmed, H.: Fractional Euler method: an effective tool for solving fractional differential equations. J. Egypt. Math. Soc. 26(1), 38–43 (2018)
  17. Albadarneh, R., Zerqat, M., Batiha, I.: Numerical solutions for linear and non-linear fractional differential equations. Int. J. Pure Appl. Math. 106, 859–871 (2016). https://doi.org/10.12732/ijpam.v106i3.12
  18. Merdan, M.: On the solutions fractional Riccati differential equation with modified Riemann–Liouville derivative. Int. J. Differ. Equ. 2012, Article ID 346089 (2012). https://doi.org/10.1155/2012/346089
  19. Odibat, Z., Momani, S.: Modified homotopy perturbation method: application to quadratic Riccati differential equation of fractional order. Chaos Solitons Fractals 36, 167–174 (2008). https://doi.org/10.1016/j.chaos.2006.06.041
  20. Esmaeili, S., Shamsi, M., Luchko, Y.: Numerical solution of fractional differential equations with a collocation method based on Müntz polynomials. Comput. Math. Appl. 62(3), 918–929 (2011)
  21. Odetunde, O.: A decomposition algorithm for the solution of fractional quadratic Riccati differential equations with Caputo derivatives. Am. J. Comput. Appl. Math. 4(3), 83–91 (2019). https://doi.org/10.5923/j.ajcam.20140403.03

Acknowledgements

The authors are mostly grateful for the financial support by the Fundamental Research Grant (FRGS) under the Ministry of Higher Education Malaysia with project number FRGS/1/2020/STG06/UPM/01/1 and Research University Grant (Putra Impact Grant) from Universiti Putra Malaysia with project number UPM/800-3/3/1/9629200.

Funding

This research project is supported by the Fundamental Research Grant (FRGS) under the Ministry of Higher Education Malaysia with project number FRGS/1/2020/STG06/UPM/01/1 and Research University Grant (Putra Impact Grant) from Universiti Putra Malaysia with project number UPM/800-3/3/1/9629200.

Author information


Contributions

ZAM provided resources, conducted formal analysis, administered and supervised the project, and reviewed the original manuscript. NAZ was in charge of the project’s investigation and methodology, as well as drafting and editing the original manuscript. AK and ZBI contributed resources to the project and supervised progress. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Zanariah Abdul Majid.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Zabidi, N.A., Majid, Z.A., Kilicman, A. et al. Numerical solution of fractional differential equations with Caputo derivative by using numerical fractional predict–correct technique. Adv Cont Discr Mod 2022, 26 (2022). https://doi.org/10.1186/s13662-022-03697-6


Keywords

  • Fractional differential equation
  • Linear FDE
  • Nonlinear FDE
  • Single order FDE
  • Multistep method