Theory and Modern Applications

# Numerical solution of fractional-order Riccati differential equation by differential quadrature method based on Chebyshev polynomials

## Abstract

We apply the Chebyshev polynomial-based differential quadrature method to the solution of a fractional-order Riccati differential equation. The fractional derivative is described in the Caputo sense. We derive and utilize explicit expressions of weighting coefficients for approximation of fractional derivatives to reduce a Riccati differential equation to a system of algebraic equations. We present numerical examples to verify the efficiency and accuracy of the proposed method. The results reveal that the method is accurate and easy to implement.

## Introduction

Fractional differential equations have received considerable interest in recent years. In many applications, fractional derivatives and integrals provide more accurate models of the underlying systems than ordinary derivatives and integrals do. Applications of fractional differential equations in solid mechanics, modeling of viscoelastic damping, electrochemical processes, dielectric polarization, colored noise, bioengineering, and various other branches of science and engineering can be found in the literature.

The existence and uniqueness of solutions of fractional differential equations have been investigated in [2, 3]. In general, most fractional differential equations have no exact solutions, so there has been significant interest in developing approximate methods for solving this kind of equation. Several such methods have recently been proposed, including the Adomian decomposition method, the homotopy analysis method, the Adams-Bashforth-Moulton method [6, 7], the Laplace transform method, the Bessel function method, and others. However, few papers have reported applications of the differential quadrature method to fractional-order differential equations.

The differential quadrature method was introduced by Richard Bellman and his associates in the early 1970s, following the idea of integral quadrature. The basic idea of the differential quadrature method is that any derivative at a mesh point can be approximated by a weighted linear sum of the functional values along a mesh line. The key step in the differential quadrature method is the determination of the weighting coefficients. Fung introduced a modified differential quadrature method to incorporate initial conditions and also discussed at length the stability of various grid patterns in the differential quadrature method.

In this study, we use the differential quadrature method to numerically solve the fractional-order Riccati differential equation

$$D^{\alpha}y=B(x)+C(x)y+D(x)y^{2},\quad x\in [0,1],$$
(1)

with initial condition

$$y(0)=c,$$
(2)

where $$0< \alpha \leq 1$$, $$B(x)$$, $$C(x)$$, and $$D(x)$$ are known functions, and c is a constant. When $$\alpha=1$$, equation (1) reduces to the classical Riccati differential equation, which arises frequently in optimal control problems.

## Preliminaries and notation

In this section, we present some notation, definitions, and preliminary facts.

### Basic definitions of fractional integration and differentiation

There are various definitions of fractional integration and derivatives. The widely used definition of a fractional integral is the Riemann-Liouville definition, and that of a fractional derivative is the Caputo definition.

### Definition 1

The Riemann-Liouville fractional integral operator of order $$\alpha>0$$ of a function $$y\in C_{\mu}$$, $$\mu\geq -1$$, is defined as

$$J^{\alpha}y(x)=\frac{1}{\varGamma(\alpha)} \int^{x}_{0}(x-s)^{\alpha-1}y(s)\,ds, \quad \alpha>0.$$

### Definition 2

The fractional derivative $$D^{\alpha}$$ of $$y(x)$$ in the Caputo sense is defined as

$$D^{\alpha}y(x)=\frac{1}{\varGamma(n-\alpha)} \int_{0}^{x}(x-\tau)^{n-\alpha-1}y^{(n)}( \tau)\,d\tau$$
(3)

for $$n-1< \alpha \leq n$$, $$n\in N$$, $$x>0$$, and $$y(x)\in C_{-1}^{n}$$.

For the Caputo derivative, we have

$$D^{\alpha}x^{\beta}=\textstyle\begin{cases} 0 & \mbox{for }\beta\in N_{0}\mbox{ and }\beta< \lceil\alpha\rceil; \\ \frac{\varGamma(\beta+1)}{\varGamma(\beta+1-\alpha)}x^{\beta-\alpha} & \mbox{for }\beta\in N_{0}\mbox{ and }\beta\geq\lceil\alpha\rceil\mbox{ or for }\beta\notin N_{0}\mbox{ and }\beta>\lfloor\alpha\rfloor. \end{cases}$$
(4)

We use the ceiling function $$\lceil\alpha\rceil$$ to denote the smallest integer greater than or equal to α and the floor function $$\lfloor\alpha\rfloor$$ to denote the largest integer less than or equal to α. Also, $$N_{0}=\{0,1,2,\dots\}$$.
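The power rule (4) translates directly into code. The following Python sketch (the function name is ours, not from the paper) evaluates $$D^{\alpha}x^{\beta}$$ with the standard library's gamma function:

```python
from math import ceil, gamma

def caputo_power(beta, alpha, x):
    """Caputo derivative D^alpha applied to x**beta via the power rule (4)."""
    if float(beta).is_integer() and beta < ceil(alpha):
        return 0.0  # nonnegative-integer powers below ceil(alpha) vanish
    return gamma(beta + 1) / gamma(beta + 1 - alpha) * x ** (beta - alpha)
```

For instance, $$D^{1/2}x^{2}=\frac{\varGamma(3)}{\varGamma(5/2)}x^{3/2}=\frac{8}{3\sqrt{\pi}}x^{3/2}$$, which is the forcing term appearing in Example 1 below.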

### Chebyshev polynomials and their properties

The well-known Chebyshev polynomials are defined on the interval $$[-1,1]$$ by

$$T_{n}(x)=\cos\bigl(n\arccos(x)\bigr),\quad n=0,1,\dots; x\in[-1,1].$$

They have the following properties:

• The three-term recurrence relation

$$T_{k+1}(x)=2xT_{k}(x)-T_{k-1}(x)$$

with $$T_{0}(x)=1$$ and $$T_{1}(x)=x$$.

• The expression of $$T_{n}(x)$$ in terms of x is given by 

$$T_{n}(x)=\sum_{k=0}^{\lfloor n/2 \rfloor}c_{k}^{(n)}x^{n-2k},$$
(5)

where

$$c_{k}^{(n)}=(-1)^{k}2^{n-2k-1} \frac{n}{n-k}\dbinom{n-k}{k}$$

and

$$c_{k}^{(2k)}=(-1)^{k}\quad (k\geq 0).$$

• Discrete orthogonality relation with the extrema of $$T_{n}(x)$$ as nodes. Let $$n>0$$, $$r,s\leq n$$, and $$x_{i}=-\cos(i\pi/n)$$, $$i=0,1,\ldots,n$$. Then

$$\sideset{} {^{\prime\prime}}\sum_{i=0}^{n} T_{r}(x_{i})T_{s}(x_{i})=K_{r} \delta_{rs},$$
(6)

where $$K_{0}=K_{n}=n$$ and $$K_{r}=\frac{1}{2}n$$ for $$1\leq r\leq n-1$$. The double prime indicates that the terms with indices $$i=0$$ and $$i=n$$ are to be halved.
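Both properties are easy to check numerically. A small Python sketch (our own illustration; function names are hypothetical) evaluates $$T_{n}$$ by the three-term recurrence and tests the discrete orthogonality relation (6) with the double-prime weights:

```python
from math import cos, pi

def cheb_T(n, x):
    """T_n(x) via the three-term recurrence T_{k+1} = 2x T_k - T_{k-1}."""
    t_prev, t_cur = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_cur = t_cur, 2.0 * x * t_cur - t_prev
    return t_cur

def disc_orth(n, r, s):
    """Double-prime sum of T_r(x_i) T_s(x_i) over x_i = -cos(i*pi/n), eq. (6)."""
    total = 0.0
    for i in range(n + 1):
        w = 0.5 if i in (0, n) else 1.0  # halve the first and last terms
        xi = -cos(i * pi / n)
        total += w * cheb_T(r, xi) * cheb_T(s, xi)
    return total
```

For $$n=6$$ this returns approximately 0 for $$r\neq s$$, $$n/2=3$$ for $$1\leq r=s\leq 5$$, and $$n=6$$ for $$r=s\in\{0,6\}$$, as (6) predicts.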

This discrete orthogonality property leads us to a very efficient interpolation formula. For later use, we write the interpolation polynomial $$I_{N}y(x)$$ interpolating $$y(x)$$ at the points $$x_{i}=-\cos(i\pi/N)$$, $$i=0,1,\ldots,N$$, as a sum of Chebyshev polynomials in the form

$$I_{N}y(x)= \sideset{} {^{\prime\prime}}\sum _{k=0}^{N}c_{k}T_{k}(x).$$
(7)

The coefficients $$c_{k}$$ in (7) are given by the explicit formula 

$$c_{k}=\frac{2}{N}\sideset{} {^{\prime\prime}} \sum_{i=0}^{N}y(x_{i})T_{k}(x_{i}), \quad k=0,1,\ldots,N.$$
(8)
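A direct implementation of (7) and (8) (a sketch of our own; the helper names are hypothetical) reproduces any polynomial of degree at most N exactly, by the discrete orthogonality relation (6):

```python
from math import acos, cos, pi

def interp_coeffs(y, N):
    """Coefficients c_k of (8) at the nodes x_i = -cos(i*pi/N)."""
    xs = [-cos(i * pi / N) for i in range(N + 1)]
    cs = []
    for k in range(N + 1):
        # double-prime sum: halve the i = 0 and i = N terms
        s = sum((0.5 if i in (0, N) else 1.0) * y(xs[i]) * cos(k * acos(xs[i]))
                for i in range(N + 1))
        cs.append(2.0 / N * s)
    return cs

def interp_eval(cs, x):
    """Evaluate I_N y(x) of (7); the outer sum is also double-primed."""
    N = len(cs) - 1
    return sum((0.5 if k in (0, N) else 1.0) * cs[k] * cos(k * acos(x))
               for k in range(N + 1))
```

For example, interpolating $$y(x)=x^{3}$$ with $$N=3$$ recovers $$x^{3}$$ at every point of $$[-1,1]$$, not just at the nodes.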

## Calculation of weighting coefficients of fractional-order derivatives

To apply the Chebyshev polynomials on the interval $$[0,1]$$, we use the shifted Chebyshev polynomials $$T^{*}_{n}(x)$$ defined in terms of the Chebyshev polynomials $$T_{n}(x)$$ by the relation

$$T^{*}_{n}(x)=T_{n}(2x-1).$$
(9)

Applying (7), (8), and (9), a function $$y(x)\in L_{2}[0,1]$$ is approximated by means of the shifted Chebyshev polynomials as

$$y(x)\approx T^{*}(x)\cdot P \cdot Y,$$
(10)

where

\begin{aligned} &T^{*}(x)=\bigl[T^{*}_{0}(x), T^{*}_{1}(x),\ldots,T^{*}_{N-1}(x),T^{*}_{N}(x) \bigr], \\ &P= \begin{bmatrix}\frac{1}{2N}T^{*}_{0}(x_{0}) & \frac{2}{2N}T^{*}_{0}(x_{1}) & \frac{2}{2N}T^{*}_{0}(x_{2})& \cdots & \frac{1}{2N}T^{*}_{0}(x_{N})\\ \frac{1}{N}T^{*}_{1}(x_{0}) & \frac{2}{N}T^{*}_{1}(x_{1}) & \frac{2}{N}T^{*}_{1}(x_{2})& \cdots & \frac{1}{N}T^{*}_{1}(x_{N})\\ \vdots & \cdots & \cdots & \ddots & \vdots \\ \frac{1}{2N}T^{*}_{N}(x_{0}) & \frac{2}{2N}T^{*}_{N}(x_{1}) & \frac{2}{2N}T^{*}_{N}(x_{2})& \cdots & \frac{1}{2N}T^{*}_{N}(x_{N}) \end{bmatrix}, \\ &Y= \begin{bmatrix} y(x_{0}), y(x_{1}), \ldots, y(x_{N}) \end{bmatrix}^{T}, \end{aligned}

and $$x_{i}=\frac{1}{2} [1-\cos(i\pi/N) ]$$, $$i=0,1,2,\ldots,N$$. According to the definition of the Caputo fractional derivative, we can write

$$D^{\alpha}y(x)\approx D^{\alpha}T^{*}(x)\cdot P \cdot Y,$$
(11)

where $$\alpha >0$$.

The Caputo fractional derivative of the vector $$T^{*}(x)$$ in (10) can be expressed as

$$D^{\alpha}T^{*}(x)= D^{\alpha}X \cdot N,$$
(12)

where

$$N= \begin{bmatrix} 1 & -1 & 1 & -1 & \cdots & (-1)^{N}\\ 0 & 2 & -8 & 18 & \cdots & (-1)^{N-1}2N^{2}\\ 0 & 0 & 8 & -48 & \cdots & (-1)^{N-2}\frac{2}{3}N^{2}(N^{2}-1)\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 &0 & 0 &\cdots & \cdots & 2^{2N-1} \end{bmatrix}$$

and

$$X =\bigl[1,x,x^{2},\ldots,x^{N}\bigr].$$

Using (4) with $$0<\alpha<1$$, we have

$$D^{\alpha}X= \bigl[0, c_{1}x^{1-\alpha}, c_{2}x^{2-\alpha}, \ldots , c_{N}x^{N-\alpha} \bigr],$$
(13)

where

$$c_{1}=\frac{\varGamma(2)}{\varGamma(2-\alpha)},\qquad c_{2}=\frac{\varGamma(3)}{\varGamma(3-\alpha)},\qquad \ldots,\qquad c_{N}=\frac{\varGamma(N+1)}{\varGamma(N+1-\alpha)}.$$

Employing (11) and (13), we get

$$Y^{\alpha}=\varGamma \cdot N\cdot P\cdot Y,$$
(14)

where

$$Y^{\alpha}= \begin{bmatrix} D^{\alpha}y(x_{0}), D^{\alpha}y(x_{1}), \ldots, D^{\alpha}y(x_{N}) \end{bmatrix}^{T}$$

and

$$\varGamma= \begin{bmatrix} 0& c_{1}x_{0}^{1-\alpha}& c_{2}x_{0}^{2-\alpha} & \cdots & c_{N}x_{0}^{N-\alpha}\\ 0& c_{1}x_{1}^{1-\alpha}& c_{2}x_{1}^{2-\alpha} & \cdots & c_{N}x_{1}^{N-\alpha}\\ 0& c_{1}x_{2}^{1-\alpha}& c_{2}x_{2}^{2-\alpha} & \cdots & c_{N}x_{2}^{N-\alpha}\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0& c_{1}x_{N}^{1-\alpha}& c_{2}x_{N}^{2-\alpha} & \cdots & c_{N}x_{N}^{N-\alpha} \end{bmatrix}.$$

Then the weighting coefficient matrix of the fractional derivative can be written as

$$D^{*(\alpha)}=\varGamma \cdot N\cdot P.$$
(15)

The weighting coefficients can be written collectively in the matrix form as

$$D^{*(\alpha)}= \begin{bmatrix} d^{(\alpha)}_{00} & d^{(\alpha)}_{01} & \cdots & d^{(\alpha)}_{0N} \\ d^{(\alpha)}_{10} & d^{(\alpha)}_{11} & \cdots & d^{(\alpha)}_{1N} \\ \vdots & \vdots & \ddots & \vdots \\ d^{(\alpha)}_{N0} & d^{(\alpha)}_{N1} & \cdots & d^{(\alpha)}_{NN} \end{bmatrix}.$$
(16)
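The assembly $$D^{*(\alpha)}=\varGamma \cdot N\cdot P$$ can be sketched in a few lines of plain Python (all function names are ours, not the authors'). The power-basis coefficients of the shifted Chebyshev polynomials, i.e. the columns of the matrix N, are generated from the shifted recurrence $$T^{*}_{k+1}(x)=(4x-2)T^{*}_{k}(x)-T^{*}_{k-1}(x)$$:

```python
from math import cos, gamma, pi

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def weighting_matrix(n, alpha):
    """Assemble D^{*(alpha)} = Gamma . N . P of (15) at x_i = (1 - cos(i*pi/n))/2."""
    xs = [0.5 * (1.0 - cos(i * pi / n)) for i in range(n + 1)]
    # Matrix N: column k holds the power-basis coefficients of T*_k(x),
    # built from T*_{k+1}(x) = (4x - 2) T*_k(x) - T*_{k-1}(x).
    cols = [[1.0], [-1.0, 2.0]]
    for k in range(1, n):
        nxt = [0.0] * (k + 2)
        for j, c in enumerate(cols[k]):
            nxt[j + 1] += 4.0 * c
            nxt[j] -= 2.0 * c
        for j, c in enumerate(cols[k - 1]):
            nxt[j] -= c
        cols.append(nxt)
    Nmat = [[cols[k][j] if j < len(cols[k]) else 0.0 for k in range(n + 1)]
            for j in range(n + 1)]
    # Matrix P of (10): interpolation weights of (8), with the double-prime
    # halving acting on both the mode index k and the node index i.
    w = lambda i: 0.5 if i in (0, n) else 1.0
    Tstar = lambda k, i: cos(k * (pi - i * pi / n))  # T*_k(x_i) = T_k(2x_i - 1)
    P = [[(2.0 / n) * w(k) * w(i) * Tstar(k, i) for i in range(n + 1)]
         for k in range(n + 1)]
    # Matrix Gamma of (14): D^alpha of the monomials 1, x, ..., x^n at the nodes.
    G = [[0.0 if k == 0 else
          gamma(k + 1) / gamma(k + 1 - alpha) * xs[i] ** (k - alpha)
          for k in range(n + 1)] for i in range(n + 1)]
    return matmul(matmul(G, Nmat), P)
```

For $$N=2$$ and $$\alpha=1/2$$ this yields a zero first row and the rows $$(-1.3298, 1.0638, 0.2660)$$ and $$(-0.3761, -1.5045, 1.8806)$$, matching the matrix displayed in Example 1 below.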

## Applications to fractional differential equation

To show the fundamental importance of the weighting coefficients of fractional-order derivatives derived in the previous section, we apply them to solve fractional-order Riccati differential equations. We first consider the incorporation of the initial condition. With the weighting coefficients $$D^{*(\alpha)}$$, $$0< \alpha \leq 1$$, the initial condition is easily incorporated into the differential quadrature rule by adopting Fung's strategy:

$$D^{\alpha}y(x_{i})=\sum^{N}_{j=0}d_{ij}^{(\alpha)}y(x_{j})=d_{i0}^{(\alpha)}y(0)+ \sum^{N}_{j=1}d_{ij}^{(\alpha)}y(x_{j}).$$

This equation can be rewritten in the matrix form as follows:

$$\begin{bmatrix} y^{(\alpha)}(x_{1})\\ y^{(\alpha)}(x_{2}) \\ \vdots \\ y^{(\alpha)}(x_{N}) \end{bmatrix} = \begin{bmatrix} d^{(\alpha)}_{10}\\ d^{(\alpha)}_{20} \\ \vdots \\ d^{(\alpha)}_{N0} \end{bmatrix} \cdot y(0)+ \begin{bmatrix} d^{(\alpha)}_{11} & d^{(\alpha)}_{12} & \cdots & d^{(\alpha)}_{1N}\\ d^{(\alpha)}_{21} & d^{(\alpha)}_{22} & \cdots & d^{(\alpha)}_{2N}\\ \vdots & \vdots & \ddots & \vdots \\ d^{(\alpha)}_{N1} & d^{(\alpha)}_{N2} & \cdots & d^{(\alpha)}_{NN} \end{bmatrix} \begin{bmatrix} y(x_{1})\\ y(x_{2}) \\ \vdots \\ y(x_{N}) \end{bmatrix}.$$
(17)

In equation (17), the initial condition is naturally incorporated into the differential quadrature rule. Substituting approximation (17) into (1) and using the initial condition (2), we get the system of algebraic equations

\begin{aligned}[b] & \begin{bmatrix} d^{(\alpha)}_{10}\\ d^{(\alpha)}_{20} \\ \vdots \\ d^{(\alpha)}_{N0} \end{bmatrix} \cdot y(0) + \begin{bmatrix} d^{(\alpha)}_{11} & d^{(\alpha)}_{12} & \cdots & d^{(\alpha)}_{1N}\\ d^{(\alpha)}_{21} & d^{(\alpha)}_{22} & \cdots & d^{(\alpha)}_{2N}\\ \vdots & \vdots & \ddots & \vdots \\ d^{(\alpha)}_{N1} & d^{(\alpha)}_{N2} & \cdots & d^{(\alpha)}_{NN} \end{bmatrix} \begin{bmatrix} y(x_{1})\\ y(x_{2}) \\ \vdots \\ y(x_{N}) \end{bmatrix} \\ &\quad = \begin{bmatrix} B(x_{1})\\ B(x_{2})\\ \vdots\\ B(x_{N}) \end{bmatrix} + \begin{bmatrix} C(x_{1})y(x_{1})\\ C(x_{2})y(x_{2})\\ \vdots\\ C(x_{N})y(x_{N}) \end{bmatrix} + \begin{bmatrix} D(x_{1})y^{2}(x_{1})\\ D(x_{2})y^{2}(x_{2})\\ \vdots\\ D(x_{N})y^{2}(x_{N}) \end{bmatrix}. \end{aligned}
(18)

Solving this system of algebraic equations, we obtain the vector $$[y_{i}]$$. Then, using (10), we get the approximate solution

$$y_{N}(x)=T^{*}(x)\cdot P \cdot Y.$$
(19)

## Some useful lemmas

In this section, we give some useful lemmas, which later play a significant role in the convergence analysis. We first introduce some notation. Let $$I:=(-1,1)$$, and let $$L^{2}_{\omega^{\alpha,\beta}}(I)$$ be the space of measurable functions whose square is Lebesgue integrable in I relative to the weight function $$\omega^{\alpha,\beta}(x)$$. The inner product and norm of $$L^{2}_{\omega^{\alpha,\beta}}(I)$$ are defined by

$$(u,v)_{\omega^{\alpha,\beta},I}= \int_{-1}^{1}u(x)v(x)\omega^{\alpha,\beta}(x)\,dx, \quad u,v \in L^{2}_{\omega^{\alpha,\beta}}(I),$$

and

$$\Vert u \Vert _{\omega^{\alpha,\beta},I}=(u,u)_{\omega^{\alpha,\beta},I}^{\frac{1}{2}}.$$

For a nonnegative integer m, define

$$H^{m}_{\omega^{\alpha,\beta}}(I):= \bigl\{ v: \partial_{x}^{k}v \in L^{2}_{\omega^{\alpha,\beta}}(I), 0\leq k \leq m \bigr\}$$

with the seminorm and the norm

$$\vert v \vert _{m,\omega^{\alpha,\beta}}= \bigl\Vert \partial_{x}^{m}v \bigr\Vert _{\omega^{\alpha,\beta}},\quad \Vert v \Vert _{m,\omega^{\alpha,\beta}}= \Biggl( \sum_{k=0}^{m} \vert v \vert _{k,\omega^{\alpha,\beta}}^{2} \Biggr)^{\frac{1}{2}}$$
(20)

and

$$\vert v \vert _{H^{m;N}_{\omega^{\alpha,\beta}}I}= \Biggl( \sum _{k=\min(m,N+1)}^{m} \bigl\Vert \partial_{x}^{k}v \bigr\Vert ^{2}_{L^{2}_{\omega^{\alpha,\beta}}(I)} \Biggr)^{\frac{1}{2}}.$$
(21)

To measure the truncation error, we introduce the nonuniformly weighted Sobolev space

$$B^{m}_{\alpha,\beta}(I):= \bigl\{ v:\partial^{k}_{x}v \in L^{2}_{\omega^{\alpha+k,\beta+k}}(I), 0\leq k \leq m \bigr\} ,\quad m\in N,$$

equipped with the norm and seminorm

$$\Vert v \Vert _{B^{m}_{\alpha,\beta}}= \Biggl( \sum_{k=0}^{m} \bigl\Vert \partial^{k}_{x}v \bigr\Vert ^{2}_{\omega^{\alpha+k,\beta+k}} \Biggr)^{\frac{1}{2}}\quad \mbox{and} \qquad \vert v \vert _{B^{m}_{\alpha,\beta}}= \bigl\Vert \partial^{m}_{x}v \bigr\Vert _{\omega^{\alpha+m,\beta+m}}.$$

Particularly, let

$$\omega^{c}(x)=\omega^{-\frac{1}{2},-\frac{1}{2}}(x)$$

be the Chebyshev weight function.

For a given positive integer N, we denote by $$\{x_{i}\}_{i=0}^{N}$$ the set of $$N+1$$ Gauss-Lobatto points corresponding to the weight $$\omega^{\alpha,\beta}(x)$$. By $$P_{N}$$ we denote the space of all polynomials of degree not exceeding N. For all $$v\in C[-1,1]$$, we define the Lagrange interpolating polynomial $$I^{\alpha,\beta}_{N}v\in P_{N}$$ satisfying

$$I^{\alpha,\beta}_{N}v(x_{i})=v(x_{i}).$$

The Lagrange interpolating polynomial can be written in the form

$$I^{\alpha,\beta}_{N}v(x)=\sum_{i=0}^{N}v(x_{i})F_{i}(x),$$

where $$F_{i}(x)$$ is the Lagrange interpolation basis function associated with $$\{x_{i}\}_{i=0}^{N}$$.

### Lemma 1


Assume that $$v\in H^{m}_{\omega^{c}}$$ and denote by $$I_{N}v$$ its interpolation polynomial associated with the Gauss-Lobatto points $$\{x_{i}\}_{i=0}^{N}$$, namely,

$$I_{N}v(x_{i})=v(x_{i}).$$

Then we have the estimate

$$\Vert v-I_{N}v \Vert _{L^{2}_{\omega^{c}}}\leq CN^{-m} \vert v \vert _{H^{m;N}_{\omega^{c}}(I)}.$$

## Convergence analysis

In this section, we provide an error estimate of the applied method for smooth solutions of fractional Riccati differential equations. To simplify the notation, without loss of generality, we let $$C(x)=1$$ and use a linear change of variable to convert (1) and the initial condition to the form

$$\frac{1}{2\varGamma(1-\alpha)} \int_{-1}^{x} (x-t)^{-\alpha} u'(t)\,dt=b(x)+u(x)+d(x)u^{2}(x),\quad -1 \leq x\leq 1,$$
(22)

with the initial condition

$$u(-1)=c,$$

where

$$u(x)=y \biggl(\frac{1}{2}(1+x) \biggr),\qquad b(x)=B \biggl( \frac{1}{2}(1+x) \biggr),\qquad d(x)=D \biggl(\frac{1}{2}(1+x) \biggr).$$

### Theorem 1

Let $$u(x)$$ be the exact solution of the Riccati differential equation (22), which is assumed to be sufficiently smooth. Let the approximate solution $$u_{N}(x)$$ be obtained by using the differential quadrature method together with a polynomial interpolation. If $$u(x)\in H^{m}_{\omega^{c}}(I)$$, then, for sufficiently large N, we have the error estimate

$$\bigl\Vert e(x) \bigr\Vert _{\omega^{c}}\leq CN^{-m}\bigl( \vert u \vert _{H^{m;N}_{\omega^{c}}}+ \bigl\vert u^{2} \bigr\vert _{H^{m;N}_{\omega^{c}}}\bigr)+CN^{1-m} \vert u \vert _{B_{\omega^{c}}^{m}}.$$
(23)

### Proof

Firstly, equation (22) holds at the Gauss-Lobatto points $$\{x_{i}\}_{i=0}^{N}$$ on $$[-1,1]$$:

$$\frac{1}{2\varGamma(1-\alpha)} \int_{-1}^{x_{i}} (x_{i}-t)^{-\alpha} u'(t)\,dt=b(x_{i})+u(x_{i})+d(x_{i})u^{2}(x_{i}), \quad u(-1)=c.$$
(24)

We use $$u_{i}$$, $$0\leq i\leq N$$, to approximate the function value $$u(x_{i})$$, $$0\leq i\leq N$$, and use

$$u_{N}(x)=\sum_{i=0}^{N}u_{i}F_{i}(x)$$
(25)

to approximate the function $$u(x)$$, namely, $$u(x_{i})\approx u_{i}$$ and $$u(x)\approx u_{N}(x)$$. Then, the numerical scheme (18) can be rewritten as

$$\frac{1}{2\varGamma(1-\alpha)} \int_{-1}^{x_{i}} (x_{i}-t)^{-\alpha} u_{N}'(t)\,dt=b(x_{i})+u_{i}+d(x_{i})u^{2}_{i}.$$
(26)

Subtracting (26) from (24) gives the error equations

$$u(x_{i})-u_{i}=\frac{1}{2\varGamma(1-\alpha)} \int_{-1}^{x_{i}}(x_{i}-t)^{-\alpha} \bigl(u'(t)-u_{N}'(t)\bigr)\,dt+d(x_{i}) \bigl(u^{2}(x_{i})-u_{i}^{2}\bigr).$$
(27)

Multiplying both sides of (27) by $$F_{i}(x)$$ and summing from $$i=0$$ to $$i=N$$ yield

\begin{aligned}[b] u(x)-u_{N}(x)={}&u(x)-I_{N}u(x)+ \frac{1}{2\varGamma(1-\alpha)}I_{N} \biggl( \int_{-1}^{x}(x-t)^{-\alpha}\bigl(u'(t)-u_{N}'(t) \bigr)\,dt \biggr) \\ &{}+I_{N} \bigl(d(x)\bigl[u^{2}(x)-u_{N}^{2}(x) \bigr] \bigr). \end{aligned}
(28)

Let $$e(x)=u(x)-u_{N}(x)$$ denote the error function. Then, (28) can be written as

$$e(x)=J_{1} +J_{2}+J_{3},$$
(29)

where

\begin{aligned} &J_{1}=u(x)-I_{N}u(x), \\ &J_{2}=I_{N} \bigl(D^{\alpha}e(x) \bigr), \\ &J_{3}=I_{N} \bigl(d(x)\bigl(u^{2}(x)-u_{N}^{2}(x)\bigr) \bigr). \end{aligned}

Then we can write

$$\bigl\Vert e(x) \bigr\Vert _{\omega^{c}}\leq \Vert J_{1} \Vert _{\omega^{c}}+ \Vert J_{2} \Vert _{\omega^{c}}+ \Vert J_{3} \Vert _{\omega^{c}}.$$
(30)

Applying Lemma 1 to $$u(x)$$, we have

$$\Vert J_{1} \Vert _{\omega^{c}}= \bigl\Vert u(x)-I_{N}u(x) \bigr\Vert _{\omega^{c}}\leq CN^{-m} \vert u \vert _{H^{m;N}_{\omega^{c}}}.$$
(31)

Now we estimate $$\Vert J_{2} \Vert _{\omega^{c}}$$. From [17, 18] we can conclude that

$$\Vert J_{2} \Vert _{\omega^{c}} \leq CN^{1-m} \bigl\Vert u^{(m)} \bigr\Vert _{\omega^{-\frac{1}{2}+m,-\frac{1}{2}+m}} =CN^{1-m} \vert u \vert _{B_{\omega^{c}}^{m}}.$$
(32)

We now estimate the third term $$\Vert J_{3} \Vert _{\omega^{c}}$$. By the definition of $$J_{3}$$ we have

$$J_{3}\leq \max_{x\in[-1,1]} \bigl\vert d(x) \bigr\vert I_{N} \bigl(u^{2}(x)-u_{N}^{2}(x) \bigr).$$
(33)

By a simple calculation we can further bound $$J_{3}$$:

$$J_{3}\leq \max_{x\in[-1,1]} \bigl\vert d(x) \bigr\vert \bigl(u^{2}(x)-u_{N}^{2}(x)+I_{N}u^{2}(x)-u^{2}(x) \bigr).$$

Therefore

$$\Vert J_{3} \Vert _{\omega^{c}}\leq \max_{x\in[-1,1]} \bigl\vert d(x) \bigr\vert \bigl( \bigl\Vert u^{2}(x)-u_{N}^{2}(x) \bigr\Vert _{\omega^{c}}+ \bigl\Vert u^{2}(x)-I_{N}u^{2}(x) \bigr\Vert _{\omega^{c}}\bigr).$$

Since $$u^{2}(t)-u^{2}_{N}(t)=2u(t)e(t)-e(t)^{2}$$, we have

$$\bigl\Vert u^{2}(t)-u^{2}_{N}(t) \bigr\Vert _{\omega^{c}}\leq C \bigl\Vert u(t)e(t) \bigr\Vert _{\omega^{c}}+ \bigl\Vert e(t)^{2} \bigr\Vert _{\omega^{c}}.$$

Applying Banach algebra arguments, we obtain

$$\bigl\Vert u^{2}(t)-u^{2}_{N}(t) \bigr\Vert _{\omega^{c}}\leq C \bigl\Vert u(t) \bigr\Vert _{\omega^{c}} \bigl\Vert e(t) \bigr\Vert _{\omega^{c}}+ \bigl\Vert e(t) \bigr\Vert _{\omega^{c}}^{2}.$$

Due to Lemma 1, we have

$$\bigl\Vert u^{2}(t)-I_{N}u^{2}(t) \bigr\Vert _{\omega^{c}}\leq CN^{-m} \bigl\vert u^{2} \bigr\vert _{H^{m;N}_{\omega^{c}}}.$$

Consequently, absorbing the terms involving $$\Vert e(t) \Vert _{\omega^{c}}$$ into the left-hand side for sufficiently large N, we get

$$\Vert J_{3} \Vert _{\omega^{c}}\leq CN^{-m} \bigl\vert u^{2} \bigr\vert _{H^{m;N}_{\omega^{c}}} .$$
(34)

Therefore, a combination of (31), (32), and (34) yields estimate (23). □

## Illustrative examples

To illustrate the effectiveness of the proposed method, we carry out some test examples. The results obtained by this method reveal that it is very effective and convenient for fractional differential equations.

### Example 1

As the first example, we consider the fractional Riccati differential equation

$$D^{\frac{1}{2}}y(x)=-x^{2}\bigl(1+x^{\frac{5}{2}} \bigr)+\frac{8}{3\sqrt{\pi}}x^{\frac{3}{2}}+y(x)+\sqrt{x}y^{2}(x), \quad y(0)=0.$$
(35)

The exact solution of the problem is $$y(x)=x^{2}$$. Applying the differential quadrature method with $$N=2$$, we approximate $$D^{\frac{1}{2}}y(x)$$ as

$$\begin{bmatrix} y^{(\frac{1}{2})}(x_{0}) \\ y^{(\frac{1}{2})}(x_{1}) \\ y^{(\frac{1}{2})}(x_{2}) \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0\\ -1.329808 & 1.063846 & 0.265962 \\ -0.376127 & -1.504506 & 1.880632 \end{bmatrix} \begin{bmatrix} y(x_{0}) \\ y(x_{1}) \\ y(x_{2}) \end{bmatrix},$$
(36)

where $$x_{0}=0$$, $$x_{1}=1/2$$, and $$x_{2}=1$$. Therefore, using (18) and (36), we obtain

$$\begin{bmatrix} 1.063846 & 0.265962 \\ -1.504506 & 1.880632 \end{bmatrix} \begin{bmatrix} y(x_{1}) \\ y(x_{2}) \end{bmatrix} = \begin{bmatrix} 0.237729\\ -0.495495 \end{bmatrix} + \begin{bmatrix} y(x_{1}) \\ y(x_{2}) \end{bmatrix} + \begin{bmatrix} \sqrt{1/2}y^{2}(x_{1}) \\ y^{2}(x_{2}) \end{bmatrix}.$$
(37)

Finally, by solving (37) we get

$$y(x_{1})=0.2500,\qquad y(x_{2})=1.0000.$$

Then, using (19), we have $$y(x)=x^{2}$$, which is the exact solution.
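The 2×2 nonlinear system (37) can be solved, for instance, by Newton's method. The following self-contained Python sketch (our own illustration, with the matrix entries of (36) and (37) hard-coded) recovers the nodal values above:

```python
from math import sqrt

# Coefficient block of (37): the d_ij^(1/2) of (36) with row/column 0 removed.
A = [[1.063846, 0.265962], [-1.504506, 1.880632]]
b = [0.237729, -0.495495]          # B(x_1), B(x_2)

def F(y):
    """Residual of (37): A.y - b - y - [sqrt(1/2) y1^2, y2^2]."""
    y1, y2 = y
    return [A[0][0]*y1 + A[0][1]*y2 - b[0] - y1 - sqrt(0.5)*y1**2,
            A[1][0]*y1 + A[1][1]*y2 - b[1] - y2 - y2**2]

def Jac(y):
    y1, y2 = y
    return [[A[0][0] - 1.0 - 2.0*sqrt(0.5)*y1, A[0][1]],
            [A[1][0], A[1][1] - 1.0 - 2.0*y2]]

def newton(y, steps=25):
    for _ in range(steps):
        f, J = F(y), Jac(y)
        det = J[0][0]*J[1][1] - J[0][1]*J[1][0]
        # Solve the 2x2 linear system J.dy = f by Cramer's rule.
        dy1 = (f[0]*J[1][1] - f[1]*J[0][1]) / det
        dy2 = (J[0][0]*f[1] - J[1][0]*f[0]) / det
        y = [y[0] - dy1, y[1] - dy2]
    return y

y1, y2 = newton([0.2, 0.9])   # start near the expected solution
```

Up to the rounding of the tabulated coefficients, this returns $$y(x_{1})\approx 0.2500$$ and $$y(x_{2})\approx 1.0000$$.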

### Example 2

In this example, we consider the equation

$$D^{\alpha}y=1-y^{2},\quad 0< \alpha \leq 1,$$

subject to the initial condition $$y(0)=0$$. In general, the exact solution of the problem is not known. The exact solution for $$\alpha=1$$ is

$$y(x)=\frac{e^{2x}-1}{e^{2x}+1}.$$

This problem has also been treated in the literature. We applied the differential quadrature method to the problem with $$N=4,8,12$$ and various values of α. The numerical solutions obtained by the present method and by some other numerical methods, such as the wavelet methods [19, 20] and the artificial neural network approach, are given in Tables 1, 2, and 3. Clearly, the approximations obtained by the differential quadrature method agree with those obtained by the above-mentioned numerical methods. Table 4 shows the approximate solutions obtained by the present method for $$\alpha=1$$, by the Chebyshev wavelet operational matrix of fractional integration for $$k=6$$, $$m=2$$, and by the Bernoulli wavelet method for $$k=2$$, $$m=5$$. The numerical results with $$N=8$$ and $$\alpha=1/4,2/4,3/4,1$$ are plotted in Figure 1. The approximate solutions obtained by the present method are in close agreement with the exact solution for $$\alpha=1$$.
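For $$\alpha=1$$ the scheme can be checked end to end against the exact solution $$\tanh(x)$$. The sketch below (entirely our own code, not the authors' implementation) assembles the weighting matrix of (15), incorporates $$y(0)=0$$ as in (17), and solves the resulting algebraic system (18) by Newton iteration:

```python
from math import cos, gamma, pi, tanh

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def weighting_matrix(n, alpha):
    """D^{*(alpha)} = Gamma . N . P of (15) at x_i = (1 - cos(i*pi/n))/2."""
    xs = [0.5 * (1.0 - cos(i * pi / n)) for i in range(n + 1)]
    cols = [[1.0], [-1.0, 2.0]]        # power-basis coefficients of T*_k
    for k in range(1, n):
        nxt = [0.0] * (k + 2)
        for j, c in enumerate(cols[k]):
            nxt[j + 1] += 4.0 * c      # T*_{k+1} = (4x - 2) T*_k - T*_{k-1}
            nxt[j] -= 2.0 * c
        for j, c in enumerate(cols[k - 1]):
            nxt[j] -= c
        cols.append(nxt)
    Nmat = [[cols[k][j] if j < len(cols[k]) else 0.0 for k in range(n + 1)]
            for j in range(n + 1)]
    w = lambda i: 0.5 if i in (0, n) else 1.0
    P = [[(2.0 / n) * w(k) * w(i) * cos(k * (pi - i * pi / n))
          for i in range(n + 1)] for k in range(n + 1)]
    G = [[0.0 if k == 0 else
          gamma(k + 1) / gamma(k + 1 - alpha) * xs[i] ** (k - alpha)
          for k in range(n + 1)] for i in range(n + 1)]
    return matmul(matmul(G, Nmat), P), xs

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting on augmented copies."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

n, alpha = 8, 1.0
D, xs = weighting_matrix(n, alpha)
Y = xs[1:]                              # initial guess y ~ x; y(0) = 0 is fixed
for _ in range(30):                     # Newton iteration on (18)
    F = [sum(D[i][j + 1] * Y[j] for j in range(n)) - 1.0 + Y[i - 1] ** 2
         for i in range(1, n + 1)]
    J = [[D[i][j + 1] + (2.0 * Y[i - 1] if j == i - 1 else 0.0)
          for j in range(n)] for i in range(1, n + 1)]
    dY = solve_linear(J, F)
    Y = [y - d for y, d in zip(Y, dY)]
```

The computed nodal values then agree closely with $$\tanh(x_{i})$$; in particular, $$Y[-1]\approx\tanh(1)\approx 0.76159$$.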

## Conclusion

A general formulation of the Chebyshev polynomial-based weighting coefficient matrix for the approximation of fractional derivatives has been derived. The fractional derivatives are described in the Caputo sense. The matrix is used to obtain approximate numerical solutions of fractional Riccati differential equations. Our numerical results are compared with solutions obtained by wavelet and artificial neural network methods. The results show that the present approach solves the problem effectively and is easy to implement.

## References

1. Podlubny, I: Fractional Differential Equations. Academic Press, San Diego (1999)

2. Amairi, M, Aoun, M, Najar, S, Abdelkrim, MN: A constant enclosure method for validating existence and uniqueness of the solution of an initial value problem for a fractional differential equation. Appl. Math. Comput. 217(5), 2162-2168 (2010)

3. Deng, J, Ma, L: Existence and uniqueness of solutions of initial value problems for nonlinear fractional differential equations. Appl. Math. Lett. 23(6), 676-680 (2010)

4. Shawagfeh, NT: Analytical approximate solutions for nonlinear fractional differential equations. Appl. Math. Comput. 131(2), 517-529 (2002)

5. Hashima, I, Abdulaziz, O, Momani, S: Homotopy analysis method for fractional IVPs. Commun. Nonlinear Sci. Numer. Simul. 14(3), 674-684 (2009)

6. Diethelm, K, Ford, NJ, Freed, AD: A predictor-corrector approach for the numerical solution of fractional differential equations. Nonlinear Dyn. 29(1), 3-22 (2002)

7. Diethelm, K, Ford, NJ, Freed, AD: Detailed error analysis for a fractional Adams method. Numer. Algorithms 36(1), 31-52 (2004)

8. Yang, C, Hou, J: An approximate solution of nonlinear fractional differential equation by Laplace transform and Adomian polynomials. J. Inf. Comput. Sci. 10(1), 213-222 (2013)

9. Yüzbaşi, Ş: A numerical approximation based on the Bessel functions of first kind for solutions of Riccati type differential difference equations. Comput. Math. Appl. 64(6), 1691-1705 (2012)

10. Yüzbaşi, Ş: Numerical solution of the Bagley-Torvik equation by the Bessel collocation method. Math. Methods Appl. Sci. 36(3), 300-312 (2013)

11. Yüzbaşi, Ş: A numerical approximation for Volterra population growth model with fractional order. Appl. Math. Model. 37(5), 3216-3227 (2013)

12. Bellman, RE, Casti, J: Differential quadrature and long-term integration. J. Math. Anal. Appl. 34(2), 235-238 (1971)

13. Fung, TC: Solving initial value problems by differential quadrature method part 1: first-order equations. Int. J. Numer. Methods Eng. 50(6), 1411-1427 (2001)

14. Gil, A, Segura, J, Temme, NM: Numerical Methods for Special Functions. SIAM, Philadelphia (2007)

15. Boyd, JP: Chebyshev and Fourier Spectral Methods, 2nd edn. Dover, New York (1999)

16. Canuto, C, Hussaini, MY, Quarteroni, A, Zang, TA: Spectral Methods: Fundamentals in Single Domains. Springer, Berlin (2006)

17. Ghoreishi, F, Mokhtary, P: Spectral collocation method for multi-order fractional differential equations. Int. J. Comput. Methods 11(05), 1350072 (2014)

18. Mokhtary, P, Ghoreishi, F: The $${L}^{2}$$-convergence of the Legendre spectral tau matrix formulation for nonlinear fractional integro differential equations. Numer. Algorithms 58(4), 475-496 (2011)

19. Wang, Y, Fan, Q: The second kind Chebyshev wavelet method for solving fractional differential equations. Appl. Math. Comput. 218(17), 8592-8601 (2012)

20. Keshavarza, E, Ordokhania, Y, Razzaghi, M: Bernoulli wavelet operational matrix of fractional order integration and its applications in solving the fractional order differential equations. Appl. Math. Model. 38(24), 6038-6051 (2014)

21. Raja, MAZ, Manzar, MA, Samar, R: An efficient computational intelligence approach for solving fractional order Riccati equations using ANN and SQP. Appl. Math. Model. 39(10-11), 3075-3093 (2015)

## Acknowledgements

The authors are very grateful to the referees for carefully reading the paper and for their comments and suggestions, which have improved the paper.

## Author information


### Contributions

Both authors contributed equally to the writing of this paper. Both authors read and approved the final manuscript.

### Corresponding author

Correspondence to Jianhua Hou.

## Ethics declarations

### Competing interests

The authors declare that they have no competing interests. 