
A class of Runge–Kutta methods for nonlinear Volterra integral equations of the second kind with singular kernels

Abstract

This paper aims to obtain approximate solutions of fractional order Riccati differential equations (FRDEs). FRDEs are equivalent to nonlinear Volterra integral equations of the second kind, and a class of Runge–Kutta methods is applied to solve these integral equations. Runge–Kutta methods are normally implemented for nonsingular integral equations, whereas the Volterra integral equations arising here are singular. The singularity is weakened by a suitable subtraction technique, and the method is then applied to obtain an approximate solution. Fractional derivatives are taken in the Caputo sense of order \(0<\alpha\leq1\).

1 Introduction

Fractional calculus, a generalization of the classical Newtonian calculus, appears in many natural phenomena such as physical, chemical, sociological, biological, and economic processes. Fractional differential equations are one of its most important branches and an essential tool in mathematical modeling for many engineering and scientific problems [1–9]. FRDEs are well-known equations with many applications in scientific phenomena. The general form of FRDEs is as follows:

$$ D_{s}^{\alpha } x ( s ) = r ( s ) x^{2} ( s ) + q ( s ) x ( s ) + p ( s ) , \quad s >0, 0< \alpha \leq 1, $$
(1)

with the initial condition

$$ x ( 0 ) = k, $$
(2)

where \(p ( s ) \), \(q ( s ) \), and \(r ( s ) \) are known functions, and \(D_{s}^{\alpha }\) is the Caputo fractional derivative operator. For \(\alpha =1\), FRDEs reduce to classical Riccati differential equations.

There are numerous direct numerical approaches for solving such equations, including the optimal homotopy asymptotic method [10], homotopy analysis [11–13], homotopy perturbation [14–18], variational iteration [19, 20], modified variational iteration [21], differential transform [22], the shifted Jacobi spectral method [23], the Taylor matrix method [24], Adomian decomposition [1, 25, 26], and the B-spline operational matrix method [27]. In this work, FRDEs are first converted into nonlinear Volterra integral equations of the second kind, and the solution is then sought by a class of Runge–Kutta methods. One usually supposes that the kernel and driving terms are continuous functions on the interval of integration and that the kernel satisfies a uniform Lipschitz condition in x [28]. When this assumption is violated, the method fails utterly or, at best, converges slowly. For singular integral equations, the goal is a product-integration-type method that converges as fast as for smooth problems [28]. The equivalent nonlinear Volterra integral equations of the second kind, described in Sect. 2, are singular at the final point. By an appropriate subtraction technique, the singularity is weakened, and the Runge–Kutta approach can then be applied. Although the modified method is slow at singular points, this approach has two advantages: it can handle special singular nonlinear Volterra integral equations of the second kind, and it yields relatively accurate results in comparison with other methods. The singularity of the nonlinear kernel \(k ( s, t, x ( t ) ) \) implies that constructing methods of high-order accuracy is not easy; even under the best conditions, the rate of convergence decreases [29].

During recent years, various papers have been devoted to the solution of linear and nonlinear weakly singular integral equations. In [30], a Chebyshev spectral collocation method is implemented to solve multidimensional nonlinear Volterra integral equations with a weakly singular kernel. The meshless local discrete collocation (MLDC) method, a local discrete collocation method that does not need any meshes, has been utilized for solving weakly singular integral equations [31]. In [29], an approach based on Legendre multiwavelets, employing a similar subtraction technique, is presented for approximating the solution of Fredholm weakly singular integro-differential equations. The discrete Galerkin approach with thin-plate splines based on scattered points is used to calculate the solution of nonlinear weakly singular Fredholm integral equations in [32]. In [33], a tau approximation method is applied to weakly singular Volterra–Hammerstein integral equations. A method combining product integration and collocation based on radial basis functions is used for solving weakly singular Fredholm integral equations in [34]. Moreover, the Newton product integration method [35], piecewise polynomial collocation methods [36, 37], and a quadratic spline collocation method [38] have been utilized for solving weakly singular integral equations. An efficient approach combining radial basis functions and the discrete collocation method is implemented to solve nonlinear Volterra integral equations of the second kind in [39].
A numerical scheme based on the moving least squares method, which is meshless, has been applied to solve integral equations in [40]. Some other related works that can be useful for a better understanding of this research are [41–47].

The rest of this paper is organized as follows: in the next section, we present preliminaries and a brief review of a class of Runge–Kutta methods for nonlinear Volterra integral equations of the second kind. In Sect. 3, we explain the subtraction of the singularity and the application of the approach. In Sect. 4, we investigate two numerical examples. In Sect. 5, the convergence analysis is discussed. In the last section, we present the conclusions.

2 Preliminaries

The aim of this section is to recall some preliminaries about the objects used in our paper.

2.1 Definition

Definition 1

The Riemann–Liouville fractional integral of order \(\alpha >0\) of a function \(x: ( 0, \infty ) \rightarrow R\) is defined by

$$ J^{\alpha } x ( s ) = \frac{1}{\Gamma ( \alpha ) } \int_{0}^{s} ( s - t ) ^{\alpha - 1} x ( t ) \,dt, $$
(3)

where Γ is the Gamma function. It must be noted that in this paper \(0< \alpha \leq 1\). See [7] for more details and examples.

Definition 2

The Caputo fractional derivative of order \(\alpha >0\) of a function \(x: ( 0, \infty ) \rightarrow R\) is defined as

$$ D^{\alpha } x ( s ) = \frac{1}{\Gamma ( m - \alpha ) } \int_{0}^{s} ( s - t ) ^{m - \alpha - 1} x^{ ( m ) } ( t ) \,dt,\quad m = \lceil \alpha \rceil . $$
(4)

See [7] for more details and examples.
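As a quick numerical check of Definition 1 (a minimal sketch, not part of the original paper: the helper name rl_integral is ours, and we assume SciPy's quad with its algebraic weight option), one can verify the known identity \(J^{\alpha } t = \frac{s^{\alpha +1}}{\Gamma ( \alpha +2 ) }\):

from math import gamma
from scipy.integrate import quad

def rl_integral(x, s, alpha):
    # Riemann-Liouville integral (3): (1/Gamma(alpha)) * int_0^s (s-t)^(alpha-1) x(t) dt.
    # quad's weight='alg' integrates f(t)*(t-a)^p*(b-t)^q with wvar=(p, q);
    # the weight (s-t)^(alpha-1) here corresponds to wvar=(0, alpha-1).
    val, _ = quad(x, 0.0, s, weight='alg', wvar=(0.0, alpha - 1.0))
    return val / gamma(alpha)

alpha, s = 0.5, 1.0
print(rl_integral(lambda t: t, s, alpha))   # numerical value of J^alpha t at s
print(s ** (alpha + 1) / gamma(alpha + 2))  # exact: s^(alpha+1)/Gamma(alpha+2)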

2.2 Existence of solutions

Consider the initial value problem (IVP) with Caputo fractional derivative given by

$$ D^{\alpha } x ( s ) = f \bigl( s, x ( s ) \bigr) , $$
(5)

with initial conditions

$$ D^{k} x ( 0 ) = x_{0}^{( k )}, \quad k =0,1,\ldots, m - 1. $$
(6)

We want to illustrate, by a theorem and a lemma, that every solution of the IVP given by (5)–(6) is also a solution of the following equation:

$$ x ( s ) = \sum_{k =0}^{m - 1} \frac{s^{k}}{k !} x_{0}^{( k )} + \frac{1}{\Gamma ( \alpha ) } \int_{0}^{s} ( s - t ) ^{\alpha - 1} f \bigl( t, x ( t ) \bigr) \,dt, \quad m = \lceil \alpha \rceil . $$
(7)

Theorem 1

Let \(\alpha > 0\), \(m = \lceil \alpha \rceil \), \(x_{0}^{(0)},\ldots, x_{0}^{( m - 1)} \in R\), \(L >0\) and \(h^{*} > 0\). Define \(H := \{ ( s, x ) :s \in [ 0, h^{*} ] , \vert x - \sum_{k =0} ^{m - 1} \frac{s^{k}}{k !} x_{0}^{ ( k ) } \vert \leq L \} \). Moreover, suppose that the function \(f: H \rightarrow R\) is continuous. Define \(P := \sup_{ ( s, z ) \in H} \vert f ( s, z ) \vert \) and

$$ h:= \textstyle\begin{cases} h^{*} & \textit{if } P =0, \\ \min \{ h^{*}, ( L\Gamma ( \alpha +1 ) / P ) ^{1/ \alpha } \} & \textit{else}. \end{cases} $$

Then there exists a function \(x\in C [ 0, h ] \) satisfying the IVP (5)–(6) (see [7]).

Lemma 1

Assume the hypotheses of Theorem 1. A function \(x \in C [ 0, h ] \) is a solution of the IVP (5)–(6) if and only if this function is a solution of the nonlinear Volterra integral equation of the second kind (7) (see [7]).

Remark 1

As a direct consequence of Lemma 1, let us consider the IVP given by

$$ D^{\alpha } x ( s ) = f \bigl( s, x ( s ) \bigr) , $$
(8)

with the initial condition

$$ x ( 0 ) = x_{0}, $$
(9)

where \(D^{\alpha }\) is Caputo fractional derivative, and \(f\in C ( [ 0, L ] \times R, R ) \), \(0< \alpha \leq 1\). Since f is presumed to be continuous, every solution of (8) is also a solution of the following nonlinear Volterra integral equation of the second kind:

$$ x ( s ) = x_{0} + \frac{1}{\Gamma ( \alpha ) } \int_{0}^{s} ( s - t ) ^{\alpha - 1} f \bigl( t, x ( t ) \bigr) \,dt, \quad s\in [ 0, L ] . $$
(10)

Furthermore, every solution of the equation given by (10) is a solution of (8) (see [7, 48]).
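In particular, applying Remark 1 directly to the FRDE (1)–(2) (a restatement recorded here because it is the form used in Sect. 3) gives the singular nonlinear Volterra integral equation

$$ x ( s ) = k + \frac{1}{\Gamma ( \alpha ) } \int_{0}^{s} ( s - t ) ^{\alpha - 1} \bigl[ r ( t ) x^{2} ( t ) + q ( t ) x ( t ) + p ( t ) \bigr] \,dt, $$

which, for the examples of Sect. 4, reduces to the kernel form \(K ( s, t ) [ \beta x^{2} ( t ) + \gamma x ( t ) ] \) of Sect. 3 after the known term involving p is absorbed into the driving function \(y ( s ) \).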

2.3 Runge–Kutta methods

Consider nonsingular Volterra equations of the second kind of the general form given by

$$ x ( s ) = y ( s ) + \int_{a}^{s} k \bigl( s, t, x ( t ) \bigr) \,dt, \quad a\leq s \leq b, $$
(11)

and suppose that the solution is defined over a finite interval \([ a, b ]\), that y is continuous on the closed interval \([ a, b ] \), and that the kernel is continuous on \(a\leq t\leq s\leq b\) and satisfies a uniform Lipschitz condition in x. These conditions guarantee the existence of a unique continuous solution of Eq. (11). Runge–Kutta methods are efficient numerical methods to approximate the solution of (11). These self-starting approaches specify the approximate solution at the points \(s_{i} = a + i h\), \(i =1,\ldots, N\), and generate approximations to the solution at intermediate points \(s_{i} + \theta_{r} h\) of the closed intervals \([ s_{i}, s_{i +1} ] \), \(i =0,\ldots, N - 1\), \(r =1,\ldots, p - 1\), where \(0= \theta_{0} \leq \theta_{1} \leq \cdots \leq \theta_{p - 1} \leq 1\). Then the general p-stage Runge–Kutta approach is applied to obtain an approximate solution of the initial value problem

$$\begin{aligned}& x ' ( s ) = f \bigl( s, x ( s ) \bigr) , \end{aligned}$$
(12)
$$\begin{aligned}& x ( a ) = x_{0}, \end{aligned}$$
(13)

given by

$$ x_{i +1} = x_{i} + h \sum_{l =0}^{p - 1} A_{pl} k_{l}^{i}, $$
(14)

where

$$\begin{aligned}& k_{0}^{i} = f ( a + i h, x_{i} ) , \end{aligned}$$
(15)
$$\begin{aligned}& k_{r}^{i} = f \Biggl( a + ( i + \theta_{r} ) h, x_{i} + h \sum_{l =0}^{r - 1} A_{rl} k_{l}^{i} \Biggr) , \quad r =1,\ldots, p - 1, \end{aligned}$$
(16)
$$\begin{aligned}& \sum_{l =0}^{r - 1} A_{rl} = \textstyle\begin{cases} \theta_{r},& r =1,2,\ldots, p - 1, \\ 1,& r = p, \end{cases}\displaystyle \end{aligned}$$
(17)

and where \(x_{l} \) is an approximation to the solution at \(s = s_{l} = a + l h\). We can rewrite Eq. (14) as follows:

$$ x_{i +1} = x_{i} + h \sum_{l =0}^{p - 1} A_{pl} f ( s_{i} + \theta_{l} h, x_{i + \theta_{l}} ) . $$
(18)

It must be noted that \(A_{pl}\), \(\theta_{l}\) are chosen to obtain a final approximate solution of a specified order. Equation (18), for a given number of stages p and desired order q, provides a set of nonlinear equations that may have no solution, one solution, or a family of solutions; see [28] for more details and examples.
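To make (14)–(17) concrete, here is a minimal Python sketch (ours, not from [28]; names such as rk_step are illustrative) of one step of the general explicit p-stage scheme, instantiated with the classical fourth-order tableau that reappears in Sect. 4:

from math import exp

def rk_step(f, s, x, h, A, theta):
    # One step of the p-stage method (14)-(16).
    # A[r-1][l] = A_{rl} for r = 1,...,p and l = 0,...,r-1; theta[r] = theta_r.
    p = len(A)
    k = [f(s, x)]                                   # k_0, Eq. (15)
    for r in range(1, p):                           # k_r, Eq. (16)
        xr = x + h * sum(A[r - 1][l] * k[l] for l in range(r))
        k.append(f(s + theta[r] * h, xr))
    return x + h * sum(A[p - 1][l] * k[l] for l in range(p))   # Eq. (14)

# Classical 4-stage tableau; each row satisfies the consistency condition (17).
theta = [0.0, 0.5, 0.5, 1.0]
A = [[0.5],
     [0.0, 0.5],
     [0.0, 0.0, 1.0],
     [1.0 / 6, 1.0 / 3, 1.0 / 3, 1.0 / 6]]

# Check on x'(s) = -x(s), x(0) = 1: the result should be close to exp(-1).
x, h = 1.0, 0.1
for i in range(10):
    x = rk_step(lambda s, x: -x, i * h, x, h, A, theta)
print(x, exp(-1.0))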

2.4 A class of Runge–Kutta methods

We can extend Eq. (18) to obtain a class of Runge–Kutta methods for solving nonsingular Volterra equations of the second kind of the general form (11). Substituting \(s = s_{i}\) into (11) results in

$$ x ( s_{i} ) = y ( s_{i} ) + \int_{a}^{a + i h} k \bigl( a + i h, t, x ( t ) \bigr) \,dt, \quad i =1,\ldots,N, $$
(19)

so

$$ x ( s_{i} ) = y ( s_{i} ) + \sum _{j =0}^{i - 1} \int_{a + j h}^{a +( j +1) h} k \bigl( a + i h, t, x ( t ) \bigr) \,dt, \quad i =1,\ldots, N. $$
(20)

We can consider an approximation \(x_{i}\) to \(x ( s_{i} ) \) from the following equation:

$$ x_{i} = y ( s_{i} ) + h \sum_{j =0}^{i - 1} \sum_{l =0} ^{p - 1} A_{pl} k \bigl( a + i h, a + ( j + \theta_{l} ) h, x_{j + \theta_{l}} \bigr) . $$
(21)

For \(s\in ( s_{i}, s_{i +1} ) \), Eq. (11) can be written in the following form:

$$ x ( s )= y ( s ) + \sum_{j =0}^{i - 1} \int_{s_{j}}^{s _{j +1}} k \bigl( s, t, x ( t ) \bigr) \,dt + \int_{s_{i}} ^{s} k \bigl( s, t, x ( t ) \bigr) \,dt. $$
(22)

By setting \(s = s_{i} + \theta_{\vartheta } h\), \(\vartheta =1,\ldots, p - 1\), the last integral in (22) will be approximated as follows:

$$ \int_{s_{i}}^{s_{i} + \theta_{\vartheta } h} k \bigl( s_{i} + \theta_{\vartheta } h, t, x ( t ) \bigr) \,dt \approx h \sum _{l =0}^{\vartheta - 1} A_{\vartheta l} k ( s_{i} + \theta_{\vartheta } h, s_{i} + \theta_{l} h, x_{i + \theta_{l}} ) . $$
(23)

According to (20), (21), (22), and (23), the Runge–Kutta method for (11) can be rewritten in the following form:

$$\begin{aligned} \begin{aligned} x_{i + \theta_{\vartheta }} &= y ( s_{i} + \theta_{\vartheta } h ) + h \sum _{j =0}^{i - 1} \sum _{l =0}^{p - 1} A_{pl} k ( s_{i} + \theta_{\vartheta } h, s_{j} + \theta_{l} h, x_{j + \theta _{l}} ) \\ &\quad{} + h \sum_{l =0}^{\vartheta - 1} A_{\vartheta l} k ( s_{i} + \theta_{\vartheta } h, s_{i} + \theta_{l} h, x_{i + \theta_{l}} ) , \end{aligned} \end{aligned}$$
(24)

\(i =0,1,\ldots, N - 1\), \(\vartheta =1,2,\ldots, p - 1\), where \(x ( a ) = y ( a ) \), and \(A_{rj}\), \(\theta_{j}\), \(r =1,2,\ldots, p\), \(j =0,\ldots, p - 1\), describe the particular method; see [28] for more details and examples.
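The extension (24) can be sketched in the same spirit. The following fragment (again ours, assuming a nonsingular kernel; volterra_rk is an illustrative name) advances the stage values interval by interval, using row p of the array A for the lag term and row ϑ for the current interval, as in (24):

from math import exp

theta = [0.0, 0.5, 0.5, 1.0]
A = [[0.5],
     [0.0, 0.5],
     [0.0, 0.0, 1.0],
     [1.0 / 6, 1.0 / 3, 1.0 / 3, 1.0 / 6]]

def volterra_rk(y, k, a, b, N):
    # Scheme (24) for x(s) = y(s) + int_a^s k(s,t,x(t)) dt, nonsingular kernel.
    p, h = len(theta), (b - a) / N
    X = [[0.0] * p for _ in range(N)]   # X[j][l] approximates x(s_j + theta_l h)
    xs = [y(a)]                          # x(a) = y(a)
    for i in range(N):
        X[i][0] = xs[-1]
        si = a + i * h
        for v in range(1, p):
            s = si + theta[v] * h
            lag = sum(A[p - 1][l] * k(s, a + j * h + theta[l] * h, X[j][l])
                      for j in range(i) for l in range(p))     # past intervals
            cur = sum(A[v - 1][l] * k(s, si + theta[l] * h, X[i][l])
                      for l in range(v))                        # current interval
            X[i][v] = y(s) + h * (lag + cur)
        xs.append(X[i][p - 1])           # theta_{p-1} = 1 gives x(s_{i+1})
    return xs

# Check on x(s) = 1 + int_0^s x(t) dt, whose exact solution is exp(s).
print(volterra_rk(lambda s: 1.0, lambda s, t, x: x, 0.0, 1.0, 10)[-1], exp(1.0))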

3 Subtraction of the singularity

In this section, we want to apply a class of Runge–Kutta methods for solving nonlinear Volterra integral equations of the second kind with singular kernels given by (11). For this, we assume that \(k ( s, t, x ( t ) ) = K ( s, t ) [ \beta x^{2} ( t ) + \gamma x ( t ) ] \), where

$$ K ( s, t ) = \frac{K_{0} ( s, t )}{( s - t )^{1 - \alpha }}, \quad \beta , \gamma \in R, 0< \alpha \leq 1, $$
(25)

and we suppose \(K_{0} ( s, t )\) is regular. Now we utilize the Runge–Kutta method given in Sect. 2.4 for the kernel provided in (25). According to Eq. (22), we have

$$\begin{aligned}& \begin{aligned}[b] x ( s )&= y ( s ) + \sum_{j =0}^{i - 1} \int_{s_{j}}^{s _{j +1}} K ( s, t ) \bigl[ \beta x^{2} ( t ) + \gamma x ( t ) \bigr] \,dt \\ &\quad{} + \int_{s_{i}}^{s} K ( s, t ) \bigl[ \beta x^{2} ( t ) + \gamma x ( t ) \bigr] \,dt. \end{aligned} \end{aligned}$$
(26)

The singularity of the kernel occurs at \(s = t\) in the last term of Eq. (26), so it is enough to rewrite the last term as follows:

$$ \begin{aligned}[b] & \int_{s_{i}}^{s} K ( s, t ) \bigl[ \beta x^{2} ( t ) + \gamma x ( t ) \bigr] \,dt \\ &\quad{} = \int_{s_{i}}^{s} K ( s, t ) \bigl[ \bigl( \beta x^{2} ( t ) + \gamma x ( t ) \bigr) - \bigl( \beta x^{2} ( s ) + \gamma x ( s ) \bigr) + \bigl( \beta x^{2} ( s ) + \gamma x ( s ) \bigr) \bigr] \,dt, \end{aligned} $$
(27)

and finally,

$$\begin{aligned} & \int_{s_{i}}^{s} K ( s, t ) \bigl[ \beta x^{2} ( t ) + \gamma x ( t ) \bigr] \,dt \\ &\quad =\beta \int_{s_{i}} ^{s} \frac{K_{0} ( s, t )}{( s - t )^{1 - \alpha }} \bigl( x^{2} ( t ) - x^{2} ( s ) \bigr) \,dt + \gamma \int _{s_{i}}^{s} \frac{K_{0} ( s, t ) }{ ( s - t ) ^{1 - \alpha }} \bigl( x ( t ) - x ( s ) \bigr) \,dt \\ &\quad \quad{} + \bigl( \beta x^{2} ( s ) + \gamma x ( s ) \bigr) q ( s ) , \end{aligned}$$
(28)

where \(q ( s ) = \int_{s_{i}}^{s} \frac{K_{0} ( s, t )}{( s - t )^{1 - \alpha }} \,dt \) is known and can be computed easily. If the original integral exists in the Riemann sense, then the first and second terms of (28) are now regular at \(s = t\): since \(x^{2} ( t ) - x^{2} ( s ) =0\) and \(x ( t ) - x ( s ) =0\) at the singular point \(s = t\), the singularity is weaker than before. So we can now apply the Runge–Kutta method to (28). The singularity occurs when \(\theta_{\vartheta } = \theta_{l}\), so the corresponding term is omitted, employing the identities \(K ( s, s ) ( x^{2} ( s ) - x^{2} ( s ) ) =0\) and \(K ( s, s ) ( x ( s ) - x ( s ) ) =0\). It must be noted that the singularity has been weakened by this subtraction technique, but not removed completely. Implementing the Runge–Kutta method and omitting the terms with \(\theta_{\vartheta } = \theta_{l}\), we can write the numerical form of Eq. (28) as follows:

$$\begin{aligned}& \int_{s_{i}}^{s} K ( s, t ) \bigl[ \beta x^{2} ( t ) + \gamma x ( t ) \bigr] \,dt \\ & \quad = \beta h \sum_{\substack{l =0\\ \theta_{\vartheta } \neq \theta_{l}}}^{\vartheta - 1} A_{\vartheta l} K ( s_{i} + \theta_{\vartheta } h, s_{i} + \theta_{l} h ) \bigl( x_{i + \theta_{l}}^{2} - x_{i + \theta_{\vartheta }}^{2} \bigr) \\& \quad\quad{} + \gamma h \sum_{\substack{l =0\\ \theta_{\vartheta } \neq \theta_{l}}}^{\vartheta - 1} A_{\vartheta l} K ( s_{i} + \theta_{\vartheta } h, s_{i} + \theta_{l} h ) ( x_{i + \theta_{l}} - x_{i + \theta_{\vartheta }} ) \\& \quad\quad{} + \bigl( \beta x^{2} ( s_{i} + \theta_{\vartheta } h ) + \gamma x ( s_{i} + \theta_{\vartheta } h ) \bigr) q ( s_{i} + \theta_{\vartheta } h ) . \end{aligned}$$
(29)

Finally, we write the numerical form of Eq. (24) as

$$\begin{aligned} x_{i + \theta_{\vartheta }} &= y ( s_{i} + \theta_{\vartheta } h ) + h \sum _{j =0}^{i - 1} \sum _{l =0}^{p - 1} A_{pl} K ( s_{i} + \theta_{\vartheta } h, s_{j} + \theta_{l} h ) \bigl[ \beta x_{j + \theta_{l}}^{2} + \gamma x_{j + \theta_{l}} \bigr] \\ &\quad{} + \beta h \sum_{\substack{l =0\\ \theta_{\vartheta } \neq \theta_{l} }}^{\vartheta - 1} A_{\vartheta l} K ( s_{i} + \theta_{\vartheta } h, s_{i} + \theta_{l} h ) \bigl( x_{i + \theta_{l}}^{2} - x_{i + \theta_{\vartheta }}^{2} \bigr) \\ &\quad{} + \gamma h \sum_{\substack{ l =0 \\ \theta_{\vartheta } \neq \theta_{l}} }^{\vartheta - 1} A_{\vartheta l} K ( s_{i} + \theta_{\vartheta } h, s_{i} + \theta_{l} h ) ( x _{i + \theta_{l}} - x_{i + \theta_{\vartheta }} ) \\ &\quad{} + \bigl( \beta x_{i + \theta_{\vartheta }}^{2} + \gamma x_{i + \theta_{\vartheta }} \bigr) q ( s_{i} + \theta_{\vartheta } h ) , \end{aligned}$$
(30)

where \(\beta , \gamma \in R\), \(\vartheta =1,2,\ldots, p - 1\), \(q ( s ) = \int_{s_{i}}^{s} \frac{K_{0} ( s, t )}{( s - t )^{1 - \alpha }} \,dt\), and \(K ( s, t ) = \frac{K_{0} ( s, t )}{( s - t )^{1 - \alpha }}\), \(i =0,1,\ldots, N - 1\). Here \(x_{i + \theta_{\vartheta }}\) is an approximation of \(x ( s_{i} + \theta_{\vartheta } h ) \), \(x ( a ) = y ( a ) \), and \(A_{rj}\), \(\theta_{j}\), \(r =1, 2,\ldots, p\), \(j =0,\ldots, p - 1\), describe the particular method (see [28, 29]).
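For the constant case \(K_{0} ( s, t ) = c_{0}\) used in Sect. 4, \(q ( s ) \) has an elementary closed form (stated here for convenience; it follows by direct integration):

$$ q ( s ) = c_{0} \int_{s_{i}}^{s} ( s - t ) ^{\alpha - 1} \,dt = \frac{c_{0} ( s - s_{i} ) ^{\alpha }}{\alpha }, \quad \text{so } q ( s_{i} + \theta_{\vartheta } h ) = \frac{c_{0} ( \theta_{\vartheta } h ) ^{\alpha }}{\alpha }. $$

This is the value used in the stage equations of Sect. 4; for instance, \(q ( \frac{h}{2} ) = \frac{( h/2 )^{\alpha }}{\alpha }\) when \(c_{0} =1\).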

The proposed approach, with this subtraction technique, is called the new p-stage Runge–Kutta method (NRKp).

4 Examples

In this section, first, a new 4-stage Runge–Kutta method (NRK4) is obtained, and then the application of this approach in solving fractional Riccati differential equations is illustrated by two examples.

We set \(p =4\), \(\vartheta =1, 2, 3\), \(i =0, 1,\ldots, N - 1\), \(r =1, 2,3, 4\), \(j =0,\ldots, 3\) and \(K_{0} ( s, t ) = c_{0}\), \(c_{0} \in \mathbb{R}\), to obtain a new 4-stage Runge–Kutta method from Eq. (30), and derive

$$\begin{aligned} x_{i + \theta_{\vartheta }} &= y ( s_{i} + \theta_{\vartheta } h ) + h \sum _{j =0}^{i - 1} \sum _{l =0}^{3} A_{4 l} K ( s_{i} + \theta_{\vartheta } h, s_{j} + \theta_{l} h ) \bigl[ \beta x_{j + \theta_{l}}^{2} + \gamma x_{j + \theta_{l}} \bigr] \\ &\quad{} + \beta h \sum_{\substack{ l =0 \\ \theta_{\vartheta } \neq \theta_{l}} }^{\vartheta - 1} A_{\vartheta l} K ( s_{i} + \theta_{\vartheta } h, s_{i} + \theta_{l} h ) \bigl( x_{i + \theta_{l}}^{2} - x_{i + \theta_{\vartheta }}^{2} \bigr) \\ &\quad{} + \gamma h \sum_{\substack{ l =0 \\ \theta_{\vartheta } \neq \theta_{l}} }^{\vartheta - 1} A_{\vartheta l} K ( s_{i} + \theta_{\vartheta } h, s_{i} + \theta_{l} h ) ( x _{i + \theta_{l}} - x_{i + \theta_{\vartheta }} ) \\ &\quad{} + \bigl( \beta x_{i + \theta_{\vartheta }}^{2} + \gamma x_{i + \theta_{\vartheta }} \bigr) q ( s_{i} + \theta_{\vartheta } h ) , \end{aligned}$$
(31)

where \(\theta_{0} =0\), \(\theta_{1} = \theta_{2} = \frac{1}{2}\), \(\theta _{3} =1\), \(A_{10} = \frac{1}{2}\), \(A_{20} =0\), \(A_{21} = \frac{1}{2}\), \(A_{30} = A_{31} =0\), \(A_{32} =1\), \(A_{40} = A_{43} = \frac{1}{6}\), and \(A_{41} = A_{42} = \frac{1}{3}\); see [28] for more details about the values of \(\theta_{j}\), \(j =0, 1, 2, 3\).
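Before turning to the examples, the update (31) can be summarized in a short computational sketch (ours, under the stated assumption \(K_{0} ( s, t ) = c_{0}\) and the closed form of q noted in Sect. 3; the simple choice of predictor and the two corrector sweeps follow the predictor–corrector idea used below, so the output is illustrative and is not claimed to reproduce Tables 1–4 exactly):

from math import gamma, tanh

theta = [0.0, 0.5, 0.5, 1.0]
A = [[0.5],
     [0.0, 0.5],
     [0.0, 0.0, 1.0],
     [1.0 / 6, 1.0 / 3, 1.0 / 3, 1.0 / 6]]

def nrk4(y, alpha, beta, gam, c0, b, N, sweeps=2):
    # Update (31) with K(s,t) = c0*(s-t)^(alpha-1), nonlinearity
    # g(x) = beta*x^2 + gam*x, and q(s_i + theta*h) = c0*(theta*h)^alpha/alpha.
    h, p = b / N, len(theta)
    K = lambda s, t: c0 * (s - t) ** (alpha - 1.0)
    g = lambda x: beta * x * x + gam * x
    X = [[0.0] * p for _ in range(N)]
    X[0][0] = y(0.0)                               # x(a) = y(a), here a = 0
    for i in range(N):
        si = i * h
        if i > 0:
            X[i][0] = X[i - 1][p - 1]
        for v in range(1, p):
            s = si + theta[v] * h
            lag = sum(A[p - 1][l] * K(s, j * h + theta[l] * h) * g(X[j][l])
                      for j in range(i) for l in range(p))
            q = c0 * (theta[v] * h) ** alpha / alpha
            xv = X[i][v - 1]                       # predictor
            for _ in range(sweeps):                # corrector sweeps
                cur = sum(A[v - 1][l] * K(s, si + theta[l] * h)
                          * (g(X[i][l]) - g(xv))   # subtraction technique
                          for l in range(v) if theta[l] != theta[v])
                xv = y(s) + h * (lag + cur) + g(xv) * q
            X[i][v] = xv
    return [X[0][0]] + [X[i][p - 1] for i in range(N)]

# Data of Example 1 below: y(s) = s^alpha/Gamma(alpha+1), beta = -1/Gamma(alpha),
# gam = 0, c0 = 1; for alpha = 1 the exact value at s = 1 is tanh(1).
alpha = 1.0
xs = nrk4(lambda s: s ** alpha / gamma(alpha + 1.0),
          alpha, -1.0 / gamma(alpha), 0.0, 1.0, 1.0, 8)
print(xs[-1], tanh(1.0))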

Example 1

Consider the following fractional Riccati differential equation:

$$ D^{\alpha } x ( s ) =1 - x^{2} ( s ) ,\quad 0 \leq s \leq 1, 0< \alpha \leq 1, $$
(32)

with the initial condition

$$ x ( 0 ) =0. $$
(33)

The exact solution, for \(\alpha =1\), is

$$ x ( s ) = \frac{e^{2 s} - 1}{e^{2 s} +1}. $$
(34)

According to Remark 1, we can write (32) as follows:

$$ x ( s ) = y ( s ) + \int_{0}^{s} k \bigl( s, t, x ( t ) \bigr) \,dt, \quad 0\leq s \leq 1, $$
(35)

where \(y ( s ) = \frac{s^{\alpha }}{\Gamma ( \alpha +1 ) }\), \(k ( s, t, x ( t ) ) = \beta K ( s, t ) x^{2} ( t ) \), \(K ( s, t ) = \frac{1}{( s - t )^{1 - \alpha }}\), \(\beta = - \frac{1}{\Gamma ( \alpha ) }\), and \(\gamma =0\). By setting \(i =0\), \(\vartheta =1 \) in (31), we have \(x_{\frac{1}{2}} = \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) }\). In the next step, if \(i =0\), \(\vartheta =2\), the singularity occurs at \(s = \frac{h}{2}\), so the second approximation to \(x_{\frac{1}{2}}\) can be obtained as

$$\begin{aligned}& x_{\frac{1}{2}} = \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } + h \beta \sum _{\substack{ l =0 \\ l \neq 1} }^{1} A_{2 l} K ( s_{0} + \theta_{2} h, s_{0} + \theta_{l} h ) \bigl( x_{0+ \theta_{l}}^{2} - x_{0+ \theta_{2}}^{2} \bigr) + \beta x_{\frac{1}{2}}^{2} q \biggl( \frac{h}{2} \biggr) , \end{aligned}$$
(36)

where \(q ( \frac{h}{2} ) = \int_{0}^{\frac{h}{2}} \frac{1}{( \frac{h}{2} - t )^{1 - \alpha }} \,dt\), and \(x_{\frac{1}{2}}\) is an approximation of \(x ( \frac{h}{2} ) \). The following quadratic polynomial will be obtained from (36):

$$ x_{\frac{1}{2}} = \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } \bigl( 1 - x_{\frac{1}{2}}^{2} \bigr) . $$
(37)

To get \(x_{\frac{1}{2}}\), we prefer to utilize the predictor–corrector method. For such a purpose, we use the first approximation \(x_{ \frac{1}{2}} = \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) }\) on the right-hand side of (37) as a predictor, denoting it \(x_{\frac{1}{2}}^{(0)}\). Equation (37) can now be rewritten as follows:

$$ x_{\frac{1}{2}}^{(1)} = \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } \bigl( 1 - \bigl( x_{\frac{1}{2}}^{(0)} \bigr) ^{2} \bigr) , $$
(38)

i.e.,

$$ x_{\frac{1}{2}}^{(1)} = \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } \biggl( 1 - \biggl( \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } \biggr) ^{2} \biggr) . $$
(39)

By repeating this process, we obtain

$$ x_{\frac{1}{2}}^{(2)} = \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } \bigl( 1 - \bigl( x_{\frac{1}{2}}^{(1)} \bigr) ^{2} \bigr) . $$
(40)

Finally, after two iterations we get

$$ x_{\frac{1}{2}}^{(2)} = \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } \biggl( 1 - \biggl( \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } \biggl( 1 - \biggl( \frac{h^{\alpha }}{2^{ \alpha } \Gamma ( \alpha +1 ) } \biggr) ^{2} \biggr) \biggr) ^{2} \biggr) . $$
(41)

With this technique given by (36), the singularity of \(x_{ \frac{1}{2}}\) will disappear, and with the predictor–corrector method, the approximation of \(x ( \frac{h}{2} ) \) will improve. In the following, we put \(i =0\), \(\vartheta =3\), so

$$ x_{1} = \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl( \frac{1}{ \alpha } - \frac{ ( x_{\frac{1}{2}}^{(2)} ) ^{2}}{2^{\alpha - 1}} \biggr) . $$
(42)

By substituting (41) into (42), we gain

$$ x_{1} = \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl( \frac{1}{ \alpha } - \frac{ ( \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } ( 1 - ( \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } ( 1 - ( \frac{h^{\alpha }}{2^{ \alpha } \Gamma ( \alpha +1 ) } ) ^{2} ) ) ^{2} ) ) ^{2}}{2^{\alpha - 1}} \biggr) . $$
(43)

Similarly, for \(i =1\), \(\vartheta =1\),

$$ x_{\frac{3}{2}} = \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl( \frac{1}{\alpha } \frac{3^{\alpha }}{2^{\alpha }} - \frac{2}{3} \bigl( x_{\frac{1}{2}}^{(2)} \bigr) ^{2} - \frac{4}{3} \frac{x _{1}^{2}}{2^{\alpha }} \biggr) . $$
(44)

In the following, if \(i =1\), \(\vartheta =2\), the singularity appears at \(s = \frac{3 h}{2}\), so we have

$$ x_{\frac{3}{2}} = \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl[ \frac{1}{\alpha } \frac{3^{\alpha }}{2^{\alpha }} - \frac{2}{3} \bigl( x_{\frac{1}{2}}^{(2)} \bigr) ^{2} - \frac{1}{3} \frac{1}{2^{ \alpha }} x_{1}^{2} - \frac{1}{\alpha } \frac{3^{\alpha }}{2^{\alpha }} x_{\frac{3}{2}}^{2} \biggr] . $$
(45)

To obtain \(x_{\frac{3}{2}}\), we again use the predictor–corrector method. For such a purpose, we apply the value of \(x_{\frac{3}{2}}\) given by (44) on the right-hand side of (45) as a predictor and consider \(x_{\frac{3}{2}} = x_{\frac{3}{2}}^{(0)}\). Equation (45) can now be rewritten as follows:

$$ x_{\frac{3}{2}}^{(1)} = \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl[ \frac{1}{ \alpha } \frac{3^{\alpha }}{2^{\alpha }} - \frac{2}{3} \bigl( x_{ \frac{1}{2}}^{(2)} \bigr) ^{2} - \frac{1}{3} \frac{1}{2^{\alpha }} x _{1}^{2} - \frac{1}{\alpha } \frac{3^{\alpha }}{2^{\alpha }} \bigl( x _{\frac{3}{2}}^{(0)} \bigr) ^{2} \biggr] . $$
(46)

By repeating this process, we improve the approximation of \(x_{ \frac{3}{2}}\) to

$$ x_{\frac{3}{2}}^{(2)} = \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl[ \frac{1}{ \alpha } \frac{3^{\alpha }}{2^{\alpha }} - \frac{2}{3} \bigl( x_{ \frac{1}{2}}^{(2)} \bigr) ^{2} - \frac{1}{3} \frac{1}{2^{\alpha }} x _{1}^{2} - \frac{1}{\alpha } \frac{3^{\alpha }}{2^{\alpha }} \bigl( x _{\frac{3}{2}}^{(1)} \bigr) ^{2} \biggr] . $$
(47)

Let us consider \(i =1\), \(\vartheta =3\), and then

$$ x_{2} = \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl[ \frac{2^{ \alpha }}{\alpha } - \frac{2}{3} \biggl( \frac{3}{2} \biggr) ^{\alpha - 1} \bigl( x_{\frac{1}{2}}^{(2)} \bigr) ^{2} - \frac{1}{6} x_{1}^{2} - \frac{1}{2^{\alpha - 1}} \bigl( x_{\frac{3}{2}}^{(2)} \bigr) ^{2} \biggr] . $$
(48)

In the following, if \(i =2\), \(\vartheta =1\),

$$ x_{\frac{5}{2}} = \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl[ \frac{1}{\alpha } \frac{5^{\alpha }}{2^{\alpha }} - \frac{2^{ \alpha }}{3} \bigl( x_{\frac{1}{2}}^{(2)} \bigr) ^{2} - \frac{3^{ \alpha - 2}}{2^{\alpha - 1}} x_{1}^{2} - \frac{2}{3} \bigl( x_{ \frac{3}{2}}^{(2)} \bigr) ^{2} - \frac{1}{3} \frac{x_{2}^{2}}{2^{ \alpha - 2}} \biggr] . $$
(49)

If \(i =2\), \(\vartheta =2\), the singularity is at \(s = \frac{5 h}{2}\), so we have

$$ x_{\frac{5}{2}} = \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl[ \frac{1}{\alpha } \frac{5^{\alpha }}{2^{\alpha }} - \frac{2^{ \alpha }}{3} \bigl( x_{\frac{1}{2}}^{(2)} \bigr) ^{2} - \frac{3^{ \alpha - 2}}{2^{\alpha - 1}} x_{1}^{2} - \frac{2}{3} \bigl( x_{ \frac{3}{2}}^{(2)} \bigr) ^{2} - \frac{1}{3} \frac{x_{2}^{2}}{2^{ \alpha - 2}} - \frac{1}{\alpha 2^{\alpha }} x_{\frac{5}{2}}^{2} \biggr] . $$
(50)

To improve \(x_{\frac{5}{2}}\), we prefer to use the predictor–corrector method. For such a purpose, we apply (49) on the right-hand side of (50) as a predictor, and so we consider \(x_{\frac{5}{2}} = x_{\frac{5}{2}} ^{(0)}\). After two steps we achieve

$$ x_{\frac{5}{2}}^{(2)} = \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl[ \frac{1}{ \alpha } \frac{5^{\alpha }}{2^{\alpha }} - \frac{2^{\alpha }}{3} \bigl( x_{\frac{1}{2}}^{(2)} \bigr) ^{2} - \frac{3^{\alpha - 2}}{2^{ \alpha - 1}} x_{1}^{2} - \frac{2}{3} \bigl( x_{\frac{3}{2}}^{(2)} \bigr) ^{2} - \frac{1}{3} \frac{x_{2}^{2}}{2^{\alpha - 2}} - \frac{1}{\alpha 2^{\alpha }} \bigl( x_{\frac{5}{2}}^{(1)} \bigr) ^{2} \biggr] . $$
(51)

Finally, for \(i =2\), \(\vartheta =3\),

$$ x_{3} = \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl[ \frac{3^{ \alpha }}{\alpha } - \frac{2}{3} \biggl( \frac{5}{2} \biggr) ^{\alpha - 1} \bigl( x_{\frac{1}{2}}^{(2)} \bigr) ^{2} - \frac{2^{\alpha } x_{1} ^{2}}{3} - \frac{3^{\alpha - 2}}{2^{\alpha - 2}} \bigl( x_{ \frac{3}{2}}^{(2)} \bigr) ^{2} - \frac{x_{2}^{2}}{6} - \frac{1}{2^{ \alpha }} \bigl( x_{\frac{5}{2}}^{(2)} \bigr) ^{2} \biggr] . $$
(52)

In Example 1, the computed \(x_{j}\), \(j = \frac{1}{2}, 1, \frac{3}{2}, 2, \frac{5}{2}, 3\), form an approximation to the solution of Eqs. (32)–(33) for \(0< \alpha \leq 1\), with \(x_{3}\) taken as the final approximation of the solution. In Tables 1 and 2, the p-stage Runge–Kutta method for \(p= 4\) is called NRK4. We also present the solution achieved by three iterations of the modified variational iteration method (MVIM) [21], as well as the solution obtained by four terms of the modified homotopy perturbation method (HPM) [18]. A comparison of the results from NRK4 and HPM shows that the results obtained by NRK4 are more accurate on the closed interval \([0.6,1]\) and have less variation in relative error. A comparison of the results of NRK4 and MVIM shows that the results of NRK4 are more accurate on the closed interval \([0.8,1]\). To get any desired accuracy, we could proceed with this method and use more iterations; however, the relative errors are already small enough to be satisfactory. It appears that the introduced modified Runge–Kutta method can be relatively accurate. The obtained results are shown in Tables 1 and 2. It must be noted that \(x ( s )\) is the exact solution, for \({\alpha =1}\).

Table 1 The results of different methods for Example 1, \(\alpha =1\)
Table 2 Relative errors for Example 1, \(\alpha =1\)

Figure 1 shows a comparison between the exact solution and the numerical solution resulting from NRK4, for \(\alpha =1\). A comparison between the approximate solution gained by NRK4 and the exact solution shows that the maximum relative error occurs at the last point and is less than or equal to 2.0523E−3. Moreover, the approximations of the solutions for various values of α are shown in Fig. 2. In (32), when α varies from 0 to 1, the approximate solution obtained for a given α changes; for illustration, the arbitrary values \(\alpha =0.25, 0.5, 0.75, 1\) are used in Fig. 2.

Figure 1 Exact and numerical solutions of Example 1

Figure 2 Numerical solutions of Example 1 for various values of \(0< \alpha \leq 1\)

Example 2

Consider the following fractional Riccati differential equation:

$$ D^{\alpha } x ( s ) =1+2 x ( s ) - x^{2} ( s ) , \quad 0 \leq s \leq 1, 0< \alpha \leq 1, $$
(53)

with the initial condition

$$ x ( 0 ) =0. $$
(54)

The exact solution, for \(\alpha =1\), is

$$ x ( s ) =1+ \sqrt{2} \tanh \biggl[ \sqrt{2} s + \frac{1}{2} \log \biggl( \frac{\sqrt{2} - 1}{\sqrt{2} +1} \biggr) \biggr] . $$
(55)

According to Remark 1, (53) can be written in the following form:

$$ x ( s ) = y ( s ) + \int_{0}^{s} k \bigl( s, t, x ( t ) \bigr) \,dt, \quad 0 \leq s \leq 1, $$
(56)

where \(y ( s ) = \frac{s^{\alpha }}{\Gamma ( \alpha +1 ) }\), \(k ( s, t, x ( t ) ) = K ( s, t ) [ \beta x^{2} ( t ) + \gamma x ( t ) ] \), \(K ( s, t ) = \frac{1}{( s-t )^{1 -\alpha }}\), \(\beta = - \frac{1}{\Gamma ( \alpha ) }\), and \(\gamma = \frac{2}{ \Gamma ( \alpha ) }\). Setting \(i =0\), \(\vartheta =1\) in (31), we have \(x_{\frac{1}{2}} = \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) }\). In the next step, if \(i =0\), \(\vartheta =2\), the singularity occurs at \(s = \frac{h}{2}\), so the second approximation to \(x_{\frac{1}{2}}\) can be obtained as follows:

$$\begin{aligned} x_{\frac{1}{2}}& = \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } + h\beta \sum _{\substack{ l =0 \\ l \neq 1}}^{1} A_{2 l} K ( s_{0} + \theta_{2} h, s_{0} + \theta_{l} h ) \bigl( x_{0+ \theta_{l}}^{2} - x_{0+ \theta_{2}}^{2} \bigr) \\ &\quad{} + h\gamma \sum_{ \substack{ l =0 \\ l \neq 1} }^{1} A_{2 l} K ( s_{0} + \theta_{2} h, s_{0} + \theta_{l} h ) ( x_{0+ \theta_{l}} - x_{0+ \theta_{2}} ) \\ &\quad{} + \bigl( \beta x_{0+ \theta_{2}}^{2} + \gamma x_{0+ \theta_{2}} \bigr) q ( s_{0} + \theta_{2} h ) , \end{aligned}$$
(57)

and hence the following quadratic polynomial will be obtained from (57):

$$ x_{\frac{1}{2}} = \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } \bigl( 1+2 x_{\frac{1}{2}} - x_{\frac{1}{2}}^{2} \bigr) . $$
(58)

To get \(x_{\frac{1}{2}}\), we use the predictor–corrector method. We apply the first iteration of \(x_{\frac{1}{2}} = \frac{h^{\alpha }}{2^{ \alpha } \Gamma ( \alpha +1 ) }\) on the right-hand side of (58) as a predictor. Let us consider \(x_{\frac{1}{2}} = x_{ \frac{1}{2}}^{(0)}\), so Eq. (58) can be rewritten as follows:

$$ x_{\frac{1}{2}}^{(1)} = \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } \bigl( 1+2 x_{\frac{1}{2}}^{(0)} - \bigl( x_{ \frac{1}{2}}^{(0)} \bigr) ^{2} \bigr) , $$
(59)

i.e.,

$$ x_{\frac{1}{2}}^{(1)} = \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } \biggl( 1+ \frac{2 h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } - \biggl( \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } \biggr) ^{2} \biggr) . $$
(60)

By repeating this process, we obtain

$$ x_{\frac{1}{2}}^{(2)} = \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } \bigl( 1+2 x_{\frac{1}{2}}^{(1)} - \bigl( x_{ \frac{1}{2}}^{(1)} \bigr) ^{2} \bigr) . $$
(61)

Finally, after two iterations, we get

$$\begin{aligned} x_{\frac{1}{2}}^{(2)}& = \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } \biggl( 1+ \frac{2 h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } \biggl( 1+ \frac{2 h^{\alpha }}{2^{ \alpha } \Gamma ( \alpha +1 ) } - \biggl( \frac{h^{\alpha }}{2^{ \alpha } \Gamma ( \alpha +1 ) } \biggr) ^{2} \biggr) \\ &\quad{} - \biggl( \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } \biggl( 1+ \frac{2 h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } - \biggl( \frac{h^{\alpha }}{2^{\alpha } \Gamma ( \alpha +1 ) } \biggr) ^{2} \biggr) \biggr) ^{2} \biggr) . \end{aligned}$$
(62)

With the predictor–corrector method, the approximation of \(x ( \frac{h}{2} ) \) will improve, and with the proposed technique, given in (57), the singularity of \(x_{\frac{1}{2}}\) will disappear. In the following, we put \(i =0\), \(\vartheta =3\), so

$$\begin{aligned}& x_{1} = \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl( \frac{1}{ \alpha } + \frac{ ( 2 x_{\frac{1}{2}}^{(2)} - ( x_{ \frac{1}{2}}^{(2)} ) ^{2} ) }{2^{\alpha - 1}} \biggr) , \end{aligned}$$
(63)

and, by substituting (62) into (63), \(x_{1}\) will be obtained. Similarly, for \(i =1\), \(\vartheta =1\),

$$ x_{\frac{3}{2}} = \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl( \frac{3^{\alpha }}{\alpha 2^{\alpha }} + \frac{2 ( 2 x_{ \frac{1}{2}}^{(2)} - ( x_{\frac{1}{2}}^{(2)} ) ^{2} ) }{3} + \frac{ ( 2 x_{1} - x_{1}^{2} ) }{3 \cdot 2^{\alpha }} + \frac{ ( 2 x_{1} - x_{1}^{2} ) }{2^{\alpha }} \biggr) . $$
(64)

In the following, if \(i =1\), \(\vartheta =2\), the singularity happens at \(s = \frac{3 h}{2}\), so by using (31), we have

$$ x_{\frac{3}{2}} = \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl( \frac{3^{\alpha }}{\alpha 2^{\alpha }} + \frac{2 ( 2 x _{\frac{1}{2}}^{(2)} - ( x_{\frac{1}{2}}^{(2)} ) ^{2} ) }{3} + \frac{ ( 2 x_{1} - x_{1}^{2} ) }{3 \cdot 2^{\alpha }} + \frac{ ( 2 x_{\frac{3}{2}} - x_{\frac{3}{2}}^{2} ) }{ \alpha 2^{\alpha }} \biggr) , $$
(65)

where now the predictor–corrector method will be applied to get and improve \(x_{\frac{3}{2}}\). For such a purpose, we apply the first iteration of \(x_{\frac{3}{2}}\) on the right-hand side of (65) as a predictor and consider \(x_{\frac{3}{2}} = x_{\frac{3}{2}}^{(0)}\). Equation (65) will be rewritten as follows:

$$ x_{\frac{3}{2}}^{(1)} = \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl( \frac{3^{ \alpha }}{\alpha 2^{\alpha }} + \frac{2 ( 2 x_{\frac{1}{2}}^{(2)} - ( x_{\frac{1}{2}}^{(2)} ) ^{2} ) }{3} + \frac{ ( 2 x_{1} - x_{1}^{2} ) }{3 \cdot 2^{\alpha }} + \frac{ ( 2 x_{ \frac{3}{2}}^{(0)} - ( x_{\frac{3}{2}}^{(0)} ) ^{2} ) }{ \alpha 2^{\alpha }} \biggr) . $$
(66)

Finally, after two iterations, we get

$$ x_{\frac{3}{2}}^{(2)} = \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl( \frac{3^{ \alpha }}{\alpha 2^{\alpha }} + \frac{2 ( 2 x_{\frac{1}{2}}^{(2)} - ( x_{\frac{1}{2}}^{(2)} ) ^{2} ) }{3} + \frac{ ( 2 x_{1} - x_{1}^{2} ) }{3 \cdot 2^{\alpha }} + \frac{ ( 2 x_{ \frac{3}{2}}^{(1)} - ( x_{\frac{3}{2}}^{(1)} ) ^{2} ) }{ \alpha 2^{\alpha }} \biggr) . $$
(67)

Let us consider \(i =1\), \(\vartheta =3\), and then

$$ x_{2} = \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl( \frac{2^{ \alpha }}{\alpha } + \frac{3^{\alpha - 2} ( 2 x_{\frac{1}{2}} ^{(2)} - ( x_{\frac{1}{2}}^{(2)} ) ^{2} ) }{2^{\alpha - 2}} + \frac{ ( 2 x_{1} - x_{1}^{2} ) }{6} + \frac{ ( 2 x_{\frac{3}{2}}^{(2)} - ( x_{\frac{3}{2}}^{(2)} ) ^{2} ) }{2^{ \alpha - 1}} \biggr) . $$
(68)

In the following, if \(i =2\), \(\vartheta =1\), then

$$\begin{aligned} \begin{aligned} x_{\frac{5}{2}} &= \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl( \frac{5^{\alpha }}{\alpha 2^{\alpha }} + \frac{2^{\alpha } ( 2 x_{\frac{1}{2}}^{(2)} - ( x_{\frac{1}{2}}^{(2)} ) ^{2} ) }{3} + \frac{3^{\alpha - 2} ( 2 x_{1} - x_{1}^{2} ) }{2^{ \alpha - 1}} + \frac{2 ( 2 x_{\frac{3}{2}}^{(2)} - ( x_{ \frac{3}{2}}^{(2)} ) ^{2} ) }{3} \\ &\quad{} + \frac{ ( 2 x_{2} - x _{2}^{2} ) }{3 \cdot 2^{\alpha }} + \frac{ ( 2 x_{2} - x_{2}^{2} ) }{2^{ \alpha }} \biggr) . \end{aligned} \end{aligned}$$
(69)

If \(i =2\), \(\vartheta =2\), the singularity happens at \(s = \frac{5 h}{2}\), so by using (31), we derive

$$\begin{aligned} \begin{aligned} x_{\frac{5}{2}} &= \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl( \frac{5^{\alpha }}{\alpha 2^{\alpha }} + \frac{2^{\alpha } ( 2 x_{\frac{1}{2}}^{(2)} - ( x_{\frac{1}{2}}^{(2)} ) ^{2} ) }{3} + \frac{3^{\alpha - 2} ( 2 x_{1} - x_{1}^{2} ) }{2^{ \alpha - 1}} + \frac{2 ( 2 x_{\frac{3}{2}}^{(2)} - ( x_{ \frac{3}{2}}^{(2)} ) ^{2} ) }{3} \\ &\quad{} + \frac{ ( 2 x_{2} - x _{2}^{2} ) }{3 \cdot 2^{\alpha }} + \frac{ ( 2 x_{\frac{5}{2}} - x _{\frac{5}{2}}^{2} ) }{\alpha 2^{\alpha }} \biggr) , \end{aligned} \end{aligned}$$
(70)

where now the predictor–corrector method will be applied to get and improve \(x_{\frac{5}{2}}\). For such a purpose, we apply the first iteration of \(x_{\frac{5}{2}}\) on the right-hand side of (70) as a predictor and consider \(x_{\frac{5}{2}} = x_{\frac{5}{2}}^{(0)}\). After two iterations, Eq. (70) will be transformed as follows:

$$\begin{aligned} x_{\frac{5}{2}}^{(2)} &= \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl( \frac{5^{ \alpha }}{\alpha 2^{\alpha }} + \frac{2^{\alpha } ( 2 x_{ \frac{1}{2}}^{(2)} - ( x_{\frac{1}{2}}^{(2)} ) ^{2} ) }{3} + \frac{3^{\alpha - 2} ( 2 x_{1} - x_{1}^{2} ) }{2^{ \alpha - 1}} + \frac{2 ( 2 x_{\frac{3}{2}}^{(2)} - ( x_{ \frac{3}{2}}^{(2)} ) ^{2} ) }{3} \\ &\quad{} + \frac{ ( 2 x_{2} - x _{2}^{2} ) }{3 \cdot 2^{\alpha }} + \frac{ ( 2 x_{\frac{5}{2}}^{(1)} - ( x_{\frac{5}{2}}^{(1)} ) ^{2} ) }{\alpha 2^{\alpha }} \biggr) . \end{aligned}$$
(71)

Finally, for \(i =2\), \(\vartheta =3\),

$$\begin{aligned} x_{3} &= \frac{h^{\alpha }}{\Gamma ( \alpha ) } \biggl( \frac{3^{ \alpha }}{\alpha } + \frac{5^{\alpha - 1} ( 2 x_{\frac{1}{2}} ^{(2)} - ( x_{\frac{1}{2}}^{(2)} ) ^{2} ) }{3 \cdot 2^{ \alpha - 2}} + \frac{2^{\alpha - 1} ( 2 x_{1} - x_{1}^{2} ) }{3} + \frac{3^{ \alpha - 2} ( 2 x_{\frac{3}{2}}^{(2)} - ( x_{\frac{3}{2}} ^{(2)} ) ^{2} ) }{2^{\alpha - 2}} \\ &\quad{} + \frac{ ( 2 x_{2} - x _{2}^{2} ) }{6} + \frac{ ( 2 x_{\frac{5}{2}}^{(2)} - ( x_{ \frac{5}{2}}^{(2)} ) ^{2} ) }{2^{\alpha - 1}} \biggr) . \end{aligned}$$
(72)

In Example 2, the computed \(x_{j}\), \(j = \frac{1}{2}, 1, \frac{3}{2}, 2, \frac{5}{2}, 3\), form an approximation to the solution of Eqs. (53)–(54) for \(0< \alpha \leq 1\), with \(x_{3}\) taken as the final approximation to the solution. The results of NRK4 have been compared with those of two other methods: the solution obtained by four terms of HPM [18] and the solution achieved by three iterations of MVIM [21]. A comparison of NRK4 and HPM shows that the results of NRK4 are almost as accurate as those of HPM on the closed interval \([0,1]\). A comparison of NRK4 and MVIM shows that the results of MVIM are more accurate on the closed interval \([0,1]\). If we proceed with this method and use more iterations, we can get any desired accuracy. The obtained results are shown in Tables 3 and 4. It must be noted that \(x ( s )\) is the exact solution, for \({ \alpha =1}\).
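For reference, the exact solution (55) is straightforward to evaluate; the following small snippet (ours; the helper name x_exact is illustrative) produces the values against which the relative errors of Table 4 are measured:

from math import sqrt, tanh, log

def x_exact(s):
    # Exact solution (55) of Example 2 for alpha = 1.
    return 1.0 + sqrt(2.0) * tanh(
        sqrt(2.0) * s + 0.5 * log((sqrt(2.0) - 1.0) / (sqrt(2.0) + 1.0)))

print(x_exact(0.0))  # 0, matching the initial condition (54)
print(x_exact(1.0))  # endpoint value used in Tables 3 and 4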

Table 3 The results of different methods for Example 2, \(\alpha =1\)
Table 4 Relative errors for Example 2, \(\alpha =1\)

Figure 3 shows a comparison between the exact solution and the numerical solution resulting from NRK4, for \(\alpha =1\). A comparison between the approximate solution gained by NRK4 and the exact solution shows that the maximum relative error occurs at the last point and is less than or equal to 1.2242E−2. Moreover, the approximations of the solutions for various values of α are shown in Fig. 4. In (53), when α varies from 0 to 1, the approximate solution obtained for a given α changes; for illustration, the arbitrary values \(\alpha =0.5, 0.6, 0.8, 1\) are used in Fig. 4.

Figure 3 Exact and numerical solutions of Example 2

Figure 4 Numerical solutions of Example 2 for various values of \(0< \alpha \leq 1\)

All calculations have been done using Maple on a computer with Intel Core i5-2430M CPU at 2.400 GHz, 4.00 GB of RAM and 64-bit operating system (Windows 7).

5 Convergence analysis

The convergence analysis of NRKp is the same as for a p-stage Runge–Kutta method; we refer the reader to [49]. First, however, we need some preliminaries on the convergence of the proposed approach. The solution of (11) with the special nonlinear kernel \(k ( s, t, x ( t ) ) = K ( s, t ) [ \beta x^{2} ( t ) + \gamma x ( t ) ] \), where \(K ( s, t ) = \frac{K _{0} ( s, t )}{( s - t )^{1 - \alpha }}\), \(\beta , \gamma \in \mathbb{R}\), \(0< \alpha \leq 1\), is not differentiable at \(s = t\); therefore, the rate of convergence and the accuracy of the proposed approach may decrease. Hence, when NRKp is used to solve nonlinear Volterra integral equations of the second kind, the predictor–corrector method needs to be applied near the singular point \(s = t\). In this section, to simplify the proof of convergence, we consider \(K_{0} ( s,t ) = c_{0}\), \(c_{0} \in \mathbb{R}\).

Lemma 2

Let

$$ \vert \beta \vert \bigl\vert x ( t_{1} ) +x( t _{2} ) \bigr\vert + \vert \gamma \vert \leq \frac{LM}{ \vert c_{0} \vert }, \quad \beta , \gamma , c_{0} \in \mathbb{R}, $$
(73)

and \(x ( t_{1} ) \neq x( t_{2} )\). Then the nonlinear kernel \(k ( s, t, x ( t ) ) = (\beta x^{2} ( t ) + \gamma x ( t ) ) K ( s,t ) \) satisfies a Lipschitz condition with respect to the dependent variable x, i.e.,

$$ \bigl\vert k \bigl( s, t_{1}, x ( t_{1} ) \bigr) - k \bigl( s, t_{2}, x ( t_{2} ) \bigr) \bigr\vert \leq L \bigl\vert x ( t_{1} ) - x( t_{2} ) \bigr\vert , $$
(74)

where L is the Lipschitz constant, \(M= \frac{(s- t_{2} )^{1-\alpha }}{(b-a)^{1- \alpha }}\), \(K ( s,t ) = \frac{c_{0}}{(s- t )^{1-\alpha }}\), and \(a \leq t_{1} < t_{2} < s \leq b\).

Proof

Write

$$ \bigl\vert k \bigl( s, t_{1}, x ( t_{1} ) \bigr) - k \bigl( s, t_{2}, x ( t_{2} ) \bigr) \bigr\vert = \vert c _{0} \vert \biggl\vert \frac{\beta x^{2} ( t_{1} ) + \gamma x ( t_{1} ) }{(s- t_{1} )^{1-\alpha }} - \frac{ \beta x^{2} ( t_{2} ) + \gamma x ( t_{2} ) }{(s- t_{2} )^{1-\alpha }} \biggr\vert , $$
(75)

then, according to the assumption, we have \(b -a\geq s- t_{1} > s - t _{2} > 0\), so \(\frac{1}{s - t_{2}} > \frac{1}{s- t_{1}} \geq \frac{1}{b -a}\), and \(\frac{1}{ ( s - t_{2} ) ^{1-\alpha }} > \frac{1}{ ( s - t_{1} ) ^{1-\alpha }} \geq \frac{1}{ ( b -a ) ^{1-\alpha }}\). In the following, we can write

$$\begin{aligned}& \bigl\vert k \bigl( s, t_{1}, x ( t_{1} ) \bigr) - k \bigl( s, t_{2}, x ( t_{2} ) \bigr) \bigr\vert \\& \quad \leq \vert c_{0} \vert \frac{ \vert ( s - t_{2} )^{1 - \alpha } ( \beta x^{2} ( t_{1} ) + \gamma x ( t _{1} ) ) - ( s - t_{1} )^{1 - \alpha } ( \beta x^{2} ( t_{2} ) + \gamma x ( t_{2} ) ) \vert }{ \vert ( s - t_{1} )^{1 - \alpha } ( s - t_{2} )^{1 - \alpha } \vert } \\& \quad \leq \vert c_{0} \vert \frac{ \vert ( s - t_{2} )^{1 - \alpha } ( \beta x^{2} ( t_{1} ) + \gamma x ( t _{1} ) ) - ( s - t_{1} )^{1 - \alpha } ( \beta x^{2} ( t_{2} ) + \gamma x ( t_{2} ) ) \vert }{ \vert ( s - t_{2} )^{2(1 - \alpha )} \vert } \\& \quad \leq \vert c_{0} \vert \frac{ \vert ( b-a )^{1 - \alpha } \vert ( \vert \beta ( x^{2} ( t _{1} ) -x^{2} ( t_{2} ) ) \vert + \vert \gamma ( x ( t_{1} ) - x ( t_{2} ) ) \vert ) }{ \vert ( s - t_{2} )^{2(1 - \alpha )} \vert } \\& \quad \leq \vert c _{0} \vert \frac{ \vert ( b-a )^{1 - \alpha } \vert ( \vert \beta \vert \vert ( x ( t_{1} ) +x ( t_{2} ) ) \vert + \vert \gamma \vert ) ( \vert x ( t _{1} ) - x ( t_{2} ) \vert ) }{ \vert ( s - t_{2} )^{2(1 - \alpha )} \vert }. \end{aligned}$$
(76)

Substituting (73) into (76), the lemma is proved. □

Theorem 2

Let \(\vert \beta \vert \vert x ( t_{1} ) +x( t_{2} ) \vert + \vert \gamma \vert \leq \frac{LM}{ \vert c_{0} \vert }\), \(\beta , \gamma , c_{0} \in \mathbb{R}\). Then NRKp with the special nonlinear kernel \(k ( s, t, x ( t ) ) = (\beta x^{2} ( t ) + \gamma x ( t ) ) K ( s,t ) \), where \(K ( s,t ) = \frac{c_{0}}{(s- t )^{1-\alpha }}\), is convergent; in other words,

$$ \lim_{h\rightarrow 0} \bigl\vert x ( s_{i} + \theta_{\vartheta } h ) - x_{i + \theta_{\vartheta }} \bigr\vert =0. $$
(77)

Proof

Let \(s= s_{i} + \theta_{\vartheta } h\), \(i=0,1,\ldots,N-1\), \(\vartheta =1,2,\ldots,p-1\), in (22). Then we have

$$\begin{aligned} \begin{aligned} x ( s_{i} + \theta_{\vartheta } h ) &= y ( s_{i} + \theta_{\vartheta } h ) + \sum_{j =0}^{i- 1} \int_{s_{j}}^{s _{j +1}} k \bigl( s_{i} + \theta_{\vartheta } h, t, x ( t ) \bigr) \,dt \\ &\quad{} + \int_{s_{i}}^{s_{i} + \theta_{\vartheta } h} k \bigl( s_{i} + \theta_{\vartheta } h, t, x ( t ) \bigr) \,dt. \end{aligned} \end{aligned}$$
(78)

Setting \(e_{i + \theta_{\vartheta }} =x ( s_{i} + \theta_{\vartheta } h ) - x_{i + \theta_{\vartheta }}\) and subtracting (30) from (78), we obtain

$$\begin{aligned} e_{i+ \theta_{\vartheta }} & =h \sum_{j =0}^{i- 1} \sum _{l =0}^{p- 1} A_{pl} \bigl\{ k \bigl( s_{i} + \theta_{\vartheta } h, s_{j} + \theta _{l} h,x( s_{j} + \theta_{l} h ) \bigr) - k ( s_{i} + \theta_{\vartheta } h, s_{j} + \theta_{l} h, x_{j + \theta_{l}} ) \bigr\} \\ &\quad{} +h \sum_{l =0}^{\vartheta - 1} A_{\vartheta l} \bigl\{ k \bigl( s_{i} + \theta_{\vartheta } h, s_{i} + \theta_{l} h,x( s_{i} + \theta_{l} h ) \bigr) -k ( s_{i} + \theta_{\vartheta } h, s_{i} + \theta_{l} h, x_{i + \theta_{l}} ) \bigr\} , \end{aligned}$$
(79)

where

$$\begin{aligned} \begin{gathered} k \bigl( s_{i} + \theta_{\vartheta } h, s_{j} + \theta_{l} h,x( s _{j} + \theta_{l} h ) \bigr) = \frac{c_{0} ( \beta x^{2} ( s_{j} + \theta_{l} h ) + \gamma x ( s_{j} + \theta_{l} h ) ) }{ ( ( i-j+ \theta_{\vartheta } - \theta_{l} ) h ) ^{1-\alpha }}, \\ k ( s_{i} + \theta_{\vartheta } h, s_{j} + \theta_{l} h, x_{j + \theta_{l}} ) = \frac{c_{0} ( \beta x_{j + \theta_{l}}^{2} + \gamma x_{j + \theta_{l}} ) }{ ( ( i-j+ \theta_{\vartheta } - \theta_{l} ) h ) ^{1-\alpha }}, \\ k \bigl( s_{i} + \theta_{\vartheta } h, s_{i} + \theta_{l} h,x( s _{i} + \theta_{l} h ) \bigr) = \frac{c_{0} ( \beta x^{2} ( s_{i} + \theta_{l} h ) + \gamma x ( s_{i} + \theta_{l} h ) ) }{ ( ( \theta_{\vartheta } - \theta_{l} ) h ) ^{1-\alpha }}, \\ k ( s_{i} + \theta_{\vartheta } h, s_{i} + \theta_{l} h, x_{i + \theta_{l}} ) = \frac{c_{0} ( \beta x_{i + \theta_{l}}^{2} + \gamma x_{i + \theta_{l}} ) }{ ( ( \theta_{\vartheta } - \theta_{l} ) h ) ^{1-\alpha }}. \end{gathered} \end{aligned}$$
(80)

Clearly, from Lemma 2 and [49], the theorem is proved. □

6 Conclusion

In this work, a class of Runge–Kutta methods has been successfully implemented for solving fractional Riccati differential equations. In the first step, we convert a fractional Riccati differential equation into a singular nonlinear Volterra integral equation of the second kind by Remark 1. In the second step, we solve this singular nonlinear Volterra integral equation of the second kind using the Runge–Kutta method, with some manipulation. We called the proposed approach a new p-stage Runge–Kutta method (NRKp). We implemented NRKp, for \(p=4\), to get approximate solutions of two examples. In these two examples, \(x_{3}\) is considered as the approximate solution and is obtained for different α. At singular points, to improve the accuracy of the approximate solution, we utilize the predictor–corrector method. The results are compared with those of HPM and MVIM (Tables 1–4). A comparison of the approximate solutions shows that this method is accurate enough and can even be more accurate in many instances. We proved that the nonlinear kernel satisfies a Lipschitz condition, and then concluded that NRKp for a singular nonlinear Volterra integral equation of the second kind is convergent. As a direction for future research, we point out that the final aim of presenting NRK4 is not only finding approximate solutions to fractional Riccati differential equations but also emphasizing that this approach can be applied to fractional differential equations as well as singular Volterra integral equations. Moreover, this method can be used for solving those nonlinear Volterra integral equations of the second kind which have a singular kernel of the form \(k ( s, t, x ( t ) ) = K ( s, t ) [ a_{0} x ( t ) + a_{1} x^{2} ( t ) + \cdots + a_{n- 1} x^{n} ( t ) ] \), where \(K ( s, t ) = \frac{K_{0} ( s, t )}{( s-t )^{1 -\alpha }}\), \(a_{0}, a_{1},\ldots, a_{n- 1} \in \mathbb{R}\), \(n\in \mathbb{N}\), \(0< \alpha \leq 1\).

References

  1. Das, S.: Functional Fractional Calculus. Springer, Berlin (2011)
  2. Ortigueira, M.D.: Fractional Calculus for Scientists and Engineers. Springer, New York (2011)
  3. Miller, K.S., Ross, B.: An Introduction to the Fractional Calculus and Fractional Differential Equations. Wiley, New York (1993)
  4. Gorenflo, R., Mainardi, F.: Fractional Calculus: Integral and Differential Equations of Fractional Order. Springer, New York (1997)
  5. Mainardi, F.: Fractional Calculus and Waves in Linear Viscoelasticity: An Introduction to Mathematical Models. Imperial College Press, London (2010)
  6. Hilfer, R.: Applications of Fractional Calculus in Physics. World Scientific, Singapore (1999)
  7. Diethelm, K.: The Analysis of Fractional Differential Equations: An Application-Oriented Exposition Using Differential Operators of Caputo Type. Springer, New York (2010)
  8. Mashoof, M., Sheikhani, A.R.: Simulating the solution of the distributed order fractional differential equations by block-pulse wavelets. UPB Sci. Bull., Ser. A 79(2), 193–206 (2017)
  9. Mohammadi, F.: Fractional integro-differential equation with a weakly singular kernel by using block pulse functions. UPB Sci. Bull., Ser. A 79(1), 57–66 (2017)
  10. Marinca, V., Herisanu, N.: The Optimal Homotopy Asymptotic Method. Springer, Cham (2015)
  11. Liao, S.: Beyond Perturbation: Introduction to the Homotopy Analysis Method. CRC Press, Boca Raton (2003)
  12. Cang, J., et al.: Series solutions of non-linear Riccati differential equations with fractional order. Chaos Solitons Fractals 40(1), 1–9 (2009)
  13. Biazar, J., Ghanbari, B.: HAM solution of some initial value problems arising in heat radiation equations. J. King Saud Univ., Sci. 24(2), 161–165 (2012)
  14. Aminikhah, H., Hemmatnezhad, M.: An efficient method for quadratic Riccati differential equation. Commun. Nonlinear Sci. Numer. Simul. 15(4), 835–839 (2010)
  15. Khan, N.A., Ara, A., Jamil, M.: An efficient approach for solving the Riccati equation with fractional orders. Comput. Math. Appl. 61(9), 2683–2689 (2011)
  16. Ayati, Z., Biazar, J.: On the convergence of homotopy perturbation method. J. Egypt. Math. Soc. 23(2), 424–428 (2015)
  17. Biazar, J., et al.: He’s homotopy perturbation method: a strongly promising method for solving non-linear systems of the mixed Volterra–Fredholm integral equations. Comput. Math. Appl. 61(4), 1016–1023 (2011)
  18. Odibat, Z., Momani, S.: Modified homotopy perturbation method: application to quadratic Riccati differential equation of fractional order. Chaos Solitons Fractals 36(1), 167–174 (2008)
  19. Das, S.: Analytical solution of a fractional diffusion equation by variational iteration method. Comput. Math. Appl. 57(3), 483–487 (2009)
  20. Jafari, H., Tajadodi, H.: He’s variational iteration method for solving fractional Riccati differential equation. Int. J. Differ. Equ. 2010, Article ID 764738 (2010)
  21. Jafari, H., Tajadodi, H., Baleanu, D.: A modified variational iteration method for solving fractional Riccati differential equation by Adomian polynomials. Fract. Calc. Appl. Anal. 16(1), 109–122 (2013)
  22. Arikoglu, A., Ozkol, I.: Solution of fractional differential equations by using differential transform method. Chaos Solitons Fractals 34(5), 1473–1481 (2007)
  23. Doha, E.H., et al.: On shifted Jacobi spectral approximations for solving fractional differential equations. Appl. Math. Comput. 219(15), 8042–8056 (2013)
  24. Gülsu, M., Sezer, M.: On the solution of the Riccati equation by the Taylor matrix method. Appl. Math. Comput. 176(2), 414–421 (2006)
  25. Duan, J.S., Rach, R., Baleanu, D., Wazwaz, A.M.: A review of the Adomian decomposition method and its applications to fractional differential equations. Commun. Fract. Calc. 3(2), 73–99 (2012)
  26. Momani, S., Shawagfeh, N.: Decomposition method for solving fractional Riccati differential equations. Appl. Math. Comput. 182(2), 1083–1092 (2006)
  27. Jafari, H., Tajadodi, H., Baleanu, D.: A numerical approach for fractional order Riccati differential equation using B-spline operational matrix. Fract. Calc. Appl. Anal. 18(2), 387–399 (2015)
  28. Delves, L.M., Mohamed, J.L.: Computational Methods for Integral Equations. Cambridge University Press, Cambridge (1988)
  29. Lakestani, M., Saray, B.N., Dehghan, M.: Numerical solution for the weakly singular Fredholm integro-differential equations using Legendre multiwavelets. J. Comput. Appl. Math. 235(11), 3291–3303 (2011)
  30. Wei, Y., Chen, Y., Shi, X.: A spectral collocation method for multidimensional nonlinear weakly singular Volterra integral equation. J. Comput. Appl. Math. (2017). https://doi.org/10.1016/j.cam.2017.09.037
  31. Assari, P.: Solving weakly singular integral equations utilizing the meshless local discrete collocation technique. Alex. Eng. J. (2017). https://doi.org/10.1016/j.aej.2017.09.015
  32. Assari, P.: Thin plate spline Galerkin scheme for numerically solving nonlinear weakly singular Fredholm integral equations. Appl. Anal. (2018). https://doi.org/10.1080/00036811.2018.1448073
  33. Ahmadabadi, M.N., Dastjerdi, H.L.: Tau approximation method for the weakly singular Volterra–Hammerstein integral equations. Appl. Math. Comput. 285, 241–247 (2016)
  34. Assari, P., Adibi, H., Dehghan, M.: The numerical solution of weakly singular integral equations based on the meshless product integration (MPI) method with error analysis. Appl. Numer. Math. 81, 76–93 (2014)
  35. Razlighi, B.B., Soltanalizadeh, B.: Numerical solution of a nonlinear singular Volterra integral system by the Newton product integration method. Math. Comput. Model. 58, 1696–1703 (2013)
  36. Brunner, H., Pedas, A., Vainikko, G.: The piecewise polynomial collocation method for nonlinear weakly singular Volterra equations. Math. Comput. 68(227), 1079–1095 (1999)
  37. Brunner, H., Pedas, A., Vainikko, G.: Piecewise polynomial collocation methods for linear Volterra integro-differential equations with weakly singular kernels. SIAM J. Numer. Anal. 39(3), 957–982 (2001)
  38. Pallav, R., Pedas, A.: Quadratic spline collocation method for weakly singular integral equations and corresponding eigenvalue problem. Math. Model. Anal. 7(2), 285–296 (2002)
  39. Assari, P., Dehghan, M.: The approximate solution of nonlinear Volterra integral equations of the second kind using radial basis functions. Appl. Numer. Math. (2018). https://doi.org/10.1016/j.apnum.2018.05.001
  40. Mirzaei, D., Dehghan, M.: A meshless based method for solution of integral equations. Appl. Numer. Math. 60, 245–262 (2010)
  41. Assari, P., Dehghan, M.: A meshless Galerkin scheme for the approximate solution of nonlinear logarithmic boundary integral equations utilizing radial basis functions. J. Comput. Appl. Math. 333, 362–381 (2018)
  42. Assari, P., Dehghan, M.: Solving a class of nonlinear boundary integral equations based on the meshless local discrete Galerkin (MLDG) method. Appl. Numer. Math. 123, 137–158 (2018)
  43. Assari, P., Adibi, H., Dehghan, M.: A meshless discrete Galerkin (MDG) method for the numerical solution of integral equations with logarithmic kernels. J. Comput. Appl. Math. 267, 160–181 (2014)
  44. Assari, P., Adibi, H., Dehghan, M.: A meshless method for solving nonlinear two-dimensional integral equations of the second kind on non-rectangular domains using radial basis functions with error analysis. J. Comput. Appl. Math. 239(1), 72–92 (2013)
  45. Zhang, L., Ma, F.: Pouzet–Runge–Kutta–Chebyshev method for Volterra integral equations of the second kind. J. Comput. Appl. Math. 288, 323–331 (2015)
  46. Xiang, S., Wu, Q.: Numerical solutions to Volterra integral equations of the second kind with oscillatory trigonometric kernels. Appl. Math. Comput. 223, 34–44 (2013)
  47. Wu, Q., Xiang, S.: Fast multipole method for singular integral equations of second kind. Adv. Differ. Equ. 2015, 191 (2015). https://doi.org/10.1186/s13662-015-0515-6
  48. Demirci, E., Ozalp, N.: A method for solving differential equations of fractional order. J. Comput. Appl. Math. 236(11), 2754–2762 (2012)
  49. De Hoog, F., Weiss, R.: Implicit Runge–Kutta methods for second kind Volterra integral equations. Numer. Math. 23(3), 199–213 (1974)


Availability of data and materials

Not applicable.

Funding

Not available.

Author information


Contributions

All authors contributed equally to this article. They read and approved the final manuscript.

Corresponding author

Correspondence to Jafar Biazar.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Lichae, B.H., Biazar, J. & Ayati, Z. A class of Runge–Kutta methods for nonlinear Volterra integral equations of the second kind with singular kernels. Adv Differ Equ 2018, 349 (2018). https://doi.org/10.1186/s13662-018-1811-8

