

Numerical solution of Volterra partial integro-differential equations based on sinc-collocation method

Abstract

We present a numerical method for a Volterra integro-differential equation of parabolic type with a memory term, subject to initial and boundary conditions. A finite difference method combined with the product trapezoidal integration rule is used to discretize the equation in time, and the sinc-collocation method is employed in space. Particular attention is paid to the case of a weakly singular kernel. The convergence analysis is discussed in detail and shows that the approach converges exponentially to the solution. Furthermore, numerical examples and illustrations are presented to demonstrate the validity of the suggested method.

1 Introduction

We consider a Volterra integro-differential equation with memory term of the form

$$ {u_{t}} ( {x,t} ) = \int_{0}^{t} {{k_{0}} ( {t - s} )u_{xx} ( {x,s} )\,ds + f ( {x,t} )} ,\quad x \in \Omega, t \in J, $$
(1)

subjected to initial and boundary conditions

$$ \begin{aligned} &u(a,t)=u(b,t)=0, \quad t\in J, \\ &u(x,0)=u_{0}(x),\quad x\in\Omega, \end{aligned} $$
(2)

where \(\Omega=[a,b] \subseteq\mathbb{R}\) and \(J=[0,T]\). Here \({u_{t}} = \frac{{\partial u}}{{\partial t}}\), \(u_{xx} = \frac{{\partial^{2} u}}{{\partial x^{2}}}\), and \(k_{0}\) is a real-valued and positive definite kernel, that is,

$$ \int_{0}^{T} {\varphi ( t ) \int_{0}^{t} {{k_{0}} ( {t - s} )\varphi (s )} } \,ds \,dt \ge0 $$
(3)

for all \(T >0\) and any continuous \(\varphi: [0,T]\longrightarrow \mathbb{R}\), and f is a real-valued function. If \(k_{0}\) is a smooth function on \(\mathbb{R}^{+}\), equation (1) is hyperbolic, whereas if \(k_{0}\) has a weak singularity at 0, such as \(k_{0}(t)=\frac{t^{ \beta- 1}}{\Gamma(\beta)}\), \(0 < \beta< 1\), then it adopts a parabolic behavior [1–3]. With \(u_{xx}=\nabla^{2}u\), the evolution equation (1) is sometimes called a fractional wave equation [4], because in the limiting case where \(\beta= 1\), after differentiation with respect to t, we obtain

$${u_{tt}} ( {x,t} ) = {\nabla^{2}}u ( {x,t} ) + f' ( {x,t} ), $$

and as \(\beta\rightarrow0\), we get the heat equation

$${u_{t}} ( {x,t} ) = {\nabla^{2}}u ( {x,t} ) + f ( {x,t} ). $$

Partial integro-differential equations of type (1) model phenomena in viscoelasticity, biology, chemical kinetics, heat conduction in materials with memory, population dynamics, fluid dynamics, nuclear reactor dynamics, mathematical biology, financial mathematics, compression of viscoelastic media, and other similar areas; see, for example, [5] and the references therein. As a particular case, this problem governs many physical systems arising in diffusion processes [6].

A substantial number of methods have been applied to treat partial integro-differential equations (PIDEs). For example, a pseudo-spectral Legendre-Galerkin method for solving a parabolic PIDE with convolution-type kernel was presented in [7]. A combination of radial basis functions and finite differences for solving nonlinear PIDEs with a smooth kernel containing an unknown function was considered in [8]. Also, a spectral method was proposed in [9] for PIDEs with a weakly singular kernel.

The numerical solution of equation (1) with a weakly singular kernel has been considered by many authors using, for example, finite-element methods [3, 10], finite-difference methods [11, 12], compact difference schemes [13], spectral collocation methods [14], orthogonal spline collocation methods [15], variational iteration and Adomian decomposition methods [16], radial basis function methods [17], and quasi-wavelet methods [18]. However, the construction of accurate numerical methods for such integro-differential equations remains a challenge owing to the weak singularity of the kernel \(k_{0}\), which induces sharp transitions in the solution. This lack of smoothness of the solution near \(t=0\) degrades the practical order of convergence of familiar time-stepping methods for equation (1). For instance, the trapezoidal rule with product integration of the quadrature term does not produce the expected \(\mathcal{O}({\Delta t}^{2})\) errors [19].

The sinc approximation has been studied by many authors to solve various equations such as integral equations [20], ordinary differential equations [21], partial differential equations [22–24], integro-differential equations [25], and so on, due to high accuracy, exponential rate of convergence, and near optimality of this method [26]. With these backgrounds, we extend the sinc-collocation method for solving partial integro-differential equations of type (1).

In this paper, the time discretization of equation (1) is effected by a combination of finite differences and quadrature. For this purpose, we apply the backward Euler method together with the product trapezoidal integration rule [19] for the integral term. Consequently, equation (1) is reduced to a system of ordinary differential equations (ODEs), which is discretized by the sinc-collocation method. In addition, the accuracy and efficiency of the suggested method are tested on several examples and illustrations.

This paper is organized as follows. Section 2 provides some basic definitions, assumptions, and preliminaries of sinc approximation. In Section 3, we develop the sinc collocation method to solve Volterra partial integro-differential equations. In Section 4, we discuss the convergence analysis of the proposed method. Finally, in Section 5, numerical examples are solved to verify the accuracy and efficiency of the proposed approach.

2 Preliminaries

The goal of this section is to recall notation and definitions of the sinc function and state some known theorems important for the rest of this paper, which were discussed thoroughly in [27, 28].

The sinc method is basically defined on the real line, where the sinc function is given by

$$\operatorname{sinc} ( z ) = \textstyle\begin{cases} \frac{{\sin ( {\pi z} )}}{{\pi z}},& z \ne0,\\ 1,& z = 0, \end{cases} $$

and the translated sinc functions with evenly spaced nodes are given as

$$ S ( {j,h} ) (z) = \operatorname{sinc} \biggl( {\frac{{z - jh}}{h}} \biggr), \quad j = 0, \pm1, \pm2, \ldots. $$
(4)

The sinc function at the interpolating points \(x_{k} = kh\) is given by

$$S ( {j,h} ) ( {kh} ) = \delta_{jk}^{ ( 0 )} = \textstyle\begin{cases} 1,& k = j,\\ 0,& k \ne j. \end{cases} $$
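As a quick illustration (not part of the original paper), the following Python snippet implements the sinc function and the translated basis (4) and checks the interpolation property at the nodes \(x_{k}=kh\); note that numpy.sinc already uses the normalized definition above, and the small test values of h and the index range are assumptions chosen only for demonstration.

```python
import numpy as np

def sinc(z):
    # sin(pi z)/(pi z) with the value 1 at z = 0; this matches numpy.sinc
    return np.sinc(z)

def S(j, h, z):
    # translated sinc basis S(j, h)(z) = sinc((z - j h)/h), cf. eq. (4)
    return sinc((z - j * h) / h)

h = 0.5
for j in range(-2, 3):
    for k in range(-2, 3):
        # S(j, h)(k h) = 1 if j == k else 0 (Kronecker delta property)
        assert np.isclose(S(j, h, k * h), 1.0 if j == k else 0.0)
```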

Sinc approximations are based on the infinite strip \(D_{d}\) in the complex plane

$${D_{d}} = \biggl\{ {w = u + iv: \vert v \vert < d \le \frac {\pi}{2}} \biggr\} . $$

Let f be a function defined on \(\mathbb{R}\), and let \(h>0\) be the mesh size. Then the Whittaker cardinal function is defined by the infinite series as follows:

$$C(f,h,x)=\sum_{j=-\infty}^{\infty}f(jh)S(j,h) (x). $$

However, in practice, a finite number of terms is used in this series, say \(j = -N, \ldots, N\), where \(2N + 1\) is the number of sinc grid points. So,

$$C(f,h,x)\approx\sum_{j=-N}^{N}f(jh)S(j,h) (x), $$

where h is suitably selected depending on the properties of the function f and the given positive integer N.
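A minimal Python sketch (ours, for illustration) of the truncated cardinal series applied to a rapidly decaying test function; the choice \(f(x)=e^{-x^{2}}\) and the values of h and N are assumptions made only for demonstration.

```python
import numpy as np

def cardinal_approx(f, h, N, x):
    # truncated Whittaker cardinal series: sum_{j=-N}^{N} f(j h) S(j, h)(x)
    j = np.arange(-N, N + 1)
    return np.sum(f(j * h) * np.sinc((x - j * h) / h))

f = lambda x: np.exp(-x**2)      # a smooth, rapidly decaying test function
h, N = 0.5, 20
x = 0.3
print(abs(cardinal_approx(f, h, N, x) - f(x)))   # small approximation error
```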

To construct an approximation on the interval \(\Gamma= [a, b]\), we consider the conformal map

$$ \phi ( z ) = \log \biggl( {\frac{{z - a}}{{b - z}}} \biggr). $$
(5)

The map Ï• carries the eye-shaped region

$$D_{E} = \biggl\{ {z = x + iy: \biggl\vert {\arg \biggl( {\frac{{z - a}}{{b - z}}} \biggr)} \biggr\vert < d \leqslant\frac{\pi}{2}} \biggr\} $$

onto \(D_{d}\) such that \(\phi(a)=-\infty\), \(\phi(b)=\infty\), where a, b are the boundary points of \(D_{E}\) with \(a,b \in\partial D_{E}\). For the sinc method on the interval \(\Gamma= [a, b]\), basis functions are derived from the composite translated sinc functions

$$S_{j}(z)=S ( {j,h} )\circ \bigl( {\phi ( z )} \bigr) = \operatorname{sinc} \biggl( {\frac{{ {\phi ( z )} - jh}}{h}} \biggr), \quad j = 0, \pm1, \pm2, \ldots. $$

The inverse map of \(w = \phi(z)\) is

$$z = {\phi^{ - 1}} ( w ) = \frac{{a + b{e^{w}}}}{{1 + {e^{w}}}}. $$

Let ψ denote the inverse map of ϕ, so we define the range of \(\phi^{-1}\) on the real line as

$$\Gamma = \bigl\{ {\psi ( u ) = {\phi^{ - 1}} ( u ) \in D_{E}: - \infty < u < \infty} \bigr\} = [ {a,b} ]. $$

For \(h>0\), let the points \(x_{k}\) on Γ be given by

$$ x_{k}=\psi(kh)= \frac{{a + b{e^{kh}}}}{{1 + {e^{kh}}}},\quad k \in\mathbb{Z}. $$
(6)
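As an illustration (not part of the original paper), the following Python sketch implements the conformal map (5), its inverse, and the sinc grid (6) on an interval \((a,b)\); the mesh-size formula uses \(d=\pi/2\) and \(\alpha=1\), anticipating the parameter choices made later in Section 5, and the helper names are ours.

```python
import numpy as np

def phi(z, a, b):
    # conformal map (5): phi(z) = log((z - a)/(b - z)), maps (a, b) onto the real line
    return np.log((z - a) / (b - z))

def psi(w, a, b):
    # inverse map: psi(w) = (a + b e^w)/(1 + e^w)
    return (a + b * np.exp(w)) / (1.0 + np.exp(w))

def sinc_nodes(a, b, N, h):
    # sinc grid points (6): x_k = psi(k h), k = -N, ..., N
    k = np.arange(-N, N + 1)
    return psi(k * h, a, b)

a, b, N = 0.0, 1.0, 8
h = np.sqrt(np.pi * (np.pi / 2) / (1.0 * N))   # h = sqrt(pi d / (alpha N)), here d = pi/2, alpha = 1
x = sinc_nodes(a, b, N, h)
print(np.allclose(phi(x, a, b), np.arange(-N, N + 1) * h))  # True: phi(x_k) = k h
```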

Definition 1

([27], p. 59)

Let \(B(D_{E})\) denote the class of functions f analytic in \(D_{E}\) such that, for some constant γ with \(0\leq\gamma<1\),

$$\int_{\psi ( {u + \sum} )} { \bigl\vert f(z) \,dz \bigr\vert } =\mathcal{O} \bigl( \vert u \vert ^{\gamma}\bigr) , \quad u\to \pm\infty, $$

where \(\sum = \{ {i\eta: \vert \eta \vert < d \leqslant\frac{\pi}{2}} \}\), and, for a simple closed contour δ in \(D_{E}\),

$$N ( {f,{D_{E}}} ) \equiv\lim_{\delta \to\partial{D_{E}}} \int_{\delta}{ \bigl\vert {f ( z )\,dz} \bigr\vert } < \infty, $$

where \(\partial{D_{E}}\) represents the boundary of \(D_{E}\).

Definition 2

([28], p. 180)

By \(L_{\alpha}(D_{E})\) we denote the set of all analytic functions f for which there exists a constant C such that

$$ \bigl\vert f(z) \bigr\vert \leq C \frac{ \vert \rho(z) \vert ^{\alpha}}{(1+ \vert \rho (z) \vert )^{2\alpha}},\quad z\in D_{E}, 0 < \alpha\leq1, $$
(7)

where \(\rho(z)=e^{\phi(z)}\).

The following theorem presents a convergence result for the approximation of derivatives, which is particularly useful for the approximate solution of differential equations.

Theorem 1

([28], p. 208)

Assume that \(\phi'u \in B ({{D_{E}}} )\), that

$$\sup_{\frac{{ - \pi}}{h} \leqslant t \leqslant\frac{\pi}{h}} \biggl\vert {{{ \biggl( {\frac{d}{{dx}}} \biggr)}^{l}} {e^{it\phi ( x )}}} \biggr\vert \leqslant{C_{1}} {h^{ - l}},\quad x \in \Gamma, $$

for \(l =0, 1, \ldots, m\) with a constant \(C_{1}\) depending only on m and ϕ, and that \(u \in L_{\alpha}(D_{E})\). Then, taking \(h = \sqrt{{{\pi d} / {\alpha N}}} \), it follows that

$$\sup_{x \in\Gamma} \Biggl\vert {{u^{ ( l )}} ( x ) - {{\biggl( {\frac{d}{{dx}}} \biggr)}^{l}}\sum _{j = - N}^{N} {u ( {{x_{j}}} ){S_{j}} ( x )} } \Biggr\vert \leqslant C{N^{{{ ( {l + 1} )} / 2}}}\exp \bigl( { - {{ ( {\pi d\alpha N} )}^{{1 / 2}}}} \bigr), $$

where C is a constant depending only on u, d, m, ϕ, and α.

The sinc-collocation method requires the derivatives of the composite sinc function to be evaluated at the nodes. So, we need to recall the following lemma.

Lemma 1

([27], p. 106)

Let Ï• be the conformal one-to-one mapping of the simply connected domain \(D_{E}\) onto \(D_{d}\) given by (5). Then

$$\begin{aligned}& \delta_{jk}^{ ( 0 )} = { { \bigl[ {S ( {j,h} ) \circ\phi ( x )} \bigr]} \big\vert _{x = {x_{k}}}} = \textstyle\begin{cases} 1,& j = k, \\ 0,& j \ne k, \end{cases}\displaystyle \end{aligned}$$
(8)
$$\begin{aligned}& \delta_{jk}^{ ( 1 )} = h\frac{d}{{d\phi}}{ { \bigl[ {S ({j,h} ) \circ\phi ( x )} \bigr]} \bigg\vert _{x = {x_{k}}}} = \textstyle\begin{cases} 0,& j = k, \\ \frac{{{{ ( { - 1} )}^{k - j}}}}{{k - j}},& j \ne k, \end{cases}\displaystyle \end{aligned}$$
(9)
$$\begin{aligned}& \delta_{jk}^{ ( 2 )} = {h^{2}}\frac{{{d^{2}}}}{{d{\phi ^{2}}}}{ { \bigl[ {S ( {j,h} ) \circ\phi ( x )} \bigr]} \bigg\vert _{x = {x_{k}}}} = \textstyle\begin{cases} \frac{{ - {\pi^{2}}}}{3},& j = k, \\ \frac{{ - 2{{ ( { - 1} )}^{k - j}}}}{{{{ ( {k - j} )}^{2}}}},& j \ne k. \end{cases}\displaystyle \end{aligned}$$
(10)

In equations (8)-(10), h is the step size, and \(x_{k}\) are the sinc grid points given by (6).
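For later use in Section 3, the quantities (8)-(10) are collected into Toeplitz matrices \(I^{(r)}=[\delta_{ij}^{(r)}]\) of order \(2N+1\). The following Python sketch (ours, for illustration; the helper name is an assumption) builds \(I^{(1)}\) and \(I^{(2)}\) with scipy.linalg.toeplitz and checks their skew-symmetry and symmetry.

```python
import numpy as np
from scipy.linalg import toeplitz

def sinc_matrices(N):
    """Toeplitz matrices I^(1) = [delta_jk^(1)] and I^(2) = [delta_jk^(2)] of order
    2N+1, with entries given by (9) and (10) (row index j, column index k)."""
    m = 2 * N + 1
    p = np.arange(1, m)
    col1 = np.concatenate(([0.0], -(-1.0) ** p / p))   # first column of I^(1)
    I1 = toeplitz(col1, -col1)                         # skew-symmetric
    col2 = np.concatenate(([-np.pi ** 2 / 3.0], -2.0 * (-1.0) ** p / p ** 2))
    I2 = toeplitz(col2)                                # symmetric
    return I1, I2

I1, I2 = sinc_matrices(4)
print(np.allclose(I1, -I1.T), np.allclose(I2, I2.T))   # True True
```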

3 Description of the method

In this section, we give the sinc-collocation method for solving the partial integro-differential equation with kernel \(k_{0}(t-s)=(t-s)^{-\beta}\):

$$ {u_{t}} ( {x,t} ) = \int_{0}^{t} { ( {t - s} )^{-\beta}u_{xx}(x,s)\,ds + f ( {x,t} )} ,\quad0< x< 1 , t \in J, $$
(11)

with boundary and initial conditions

$$ \begin{aligned} &u(0,t)=u(1,t)=0, \quad0\leqslant t \leqslant T, \\ &u(x,0)=u_{0}(x),\quad0\leqslant x \leqslant1. \end{aligned} $$
(12)

The parameter β determines the order of the singularity at the point \(s = t\), and we assume that \(0< \beta<1\). Since the integral kernel has this kind of singularity, equation (11) is said to have a weakly singular kernel. First, the spatial and temporal discretizations for this type of equation are described in detail. The sinc-collocation algorithm is then applied to solve equation (11).

3.1 Discretization in time

Now the backward Euler method is applied to the time derivative in equation (11). Let \(t_{n}=n \Delta t\), \(n=0, 1, \ldots ,M\), with time step \(\Delta t = T/M\) for some \(M\in{\Bbb {N}}\), and set \(u^{n}=u(x,t_{n})\) and \(f^{n}=f(x,t_{n})\). Substituting \(t=t_{n+1}\) into the left-hand side of (11), for the first term we have

$$ {u_{t}} ( {x,{t_{n+1}}} ) \approx \frac{{{u^{n + 1}} (x ) - {u^{n}} ( x )}}{{\Delta t}}+R_{n+1,1},\quad0 < x < 1, n \geqslant0, $$
(13)

where \(R_{n+1,1}=\mathcal{O}({\Delta t})\) is the truncation error of the backward Euler method. The integral term of (11) can be approximated by a product quadrature, that is, a kind of product trapezoidal integration rule [19], as follows:

$$\begin{aligned} \int_{0}^{{t_{n + 1}}} {{ ( {{t_{n + 1}} - s} )}^{ - \beta}} {u_{xx}} ( {x,s} )\,ds =& \sum _{l = 0}^{n} {\int _{{t_{l}}}^{{t_{l + 1}}} {{{ ( {{t_{n + 1}} - s} )}^{ - \beta }} {u_{xx}} ( {x,s} )\,ds} } \\ \approx& \sum_{l = 0}^{n} {\int_{{t_{l}}}^{{t_{l + 1}}} {{{ ( {{t_{n + 1}} - s} )}^{ - \beta}} \biggl\{ {\frac {{{t_{l + 1}} - s}}{{\Delta t}}u_{xx}^{l}(x) + \frac{{s - {t_{l}}}}{{\Delta t}}u_{xx}^{l + 1}(x)} \biggr\} \,ds} } \\ \approx& \frac{1}{{\Delta t}}\sum_{l = 0}^{n} {\bigl( {{A_{n,l}}u_{xx}^{l}(x) + {B_{n,l}}u_{xx}^{l + 1}(x)} \bigr)} + R_{n+1,2} , \end{aligned}$$
(14)

where \(R_{n+1,2}=\mathcal{O}({\Delta t^{2-\beta}})\) is the error of the product trapezoidal integration rule, as proved by Dixon [29], and

$$ \begin{gathered} {A_{n,l}} = \int_{{t_{l}}}^{{t_{l + 1}}} {{{ ( {{t_{n + 1}} - s} )}^{ - \beta}} ( {{t_{l + 1}} - s} )\,ds} , \\ {B_{n,l}} = \int_{{t_{l}}}^{{t_{l + 1}}} {{{ ( {{t_{n + 1}} - s} )}^{ - \beta}} ( {s - {t_{l}}} )\,ds} . \end{gathered} $$
(15)

Substituting equations (13) and (14) into equation (11), we get the temporal semi-discrete form of (11) as follows:

$$ \begin{aligned} &{u^{n + 1}} ( x ) - {B_{n,n}}u_{xx}^{n + 1} ( x ) = \Delta t{f^{n + 1}} ( x ) +u^{n} ( x ) + \sum _{l = 0}^{n} {{\rho_{n,l}}u_{xx}^{l} ( x )}+R_{n+1}, \\ &u^{n+1}(0)=0, \qquad u^{n+1}(1)=0, \end{aligned} $$
(16)

where

$$\vert {{R_{n + 1}}} \vert \le \vert {{R_{n + 1,1}}} \vert + \vert {{R_{n + 1,2}}} \vert , $$

and

$$ \begin{gathered} {\rho_{n,0}} = {A_{n,0}}, \\ {\rho_{n,l}} = {A_{n,l}} + {B_{n,l - 1}},\quad l = 1,2, \ldots,n, \end{gathered} $$
(17)

and with additional initial condition

$$ u^{0}(x)=u_{0}(x). $$
(18)

Ignoring the small error term \(R_{n+1}\), we arrive at the semidiscrete scheme

$$ {u^{n + 1}} ( x ) - {B_{n,n}}u_{xx}^{n + 1} ( x ) = \Delta t{f^{n + 1}} ( x ) +u^{n} ( x ) + \sum _{l = 0}^{n} {{\rho_{n,l}}u_{xx}^{l} ( x )},\quad 0 < x < 1, n \geqslant0. $$
(19)

The scheme (19) is implicit because the integral term depends on \(u^{n+1}\), and it is accurate of order \(R_{n+1}=\mathcal{O}({\Delta t})\). In fact, we find that

$$ {u^{1}} ( x ) - {B_{0,0}}u_{xx}^{1} ( x ) = \Delta t{f^{1}} ( x ) +u^{0} ( x ) + {A_{0,0}}u_{xx}^{0} ( x ), $$

and, for \(n\geqslant1\), applying (19) at each step, the right-hand side involves the solution at all previous time levels. As a consequence, at each time level we have a linear ordinary differential equation of the form (19) with the boundary conditions in (16). Now, at each time level, we can use the sinc-collocation method to approximate the solution of the linear boundary value problem (19), (16).
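For the kernel \((t-s)^{-\beta}\) on a uniform grid, the weights \(A_{n,l}\) and \(B_{n,l}\) in (15) can be evaluated in closed form by substituting \(\tau = t_{n+1}-s\) and integrating \(\tau^{-\beta}\) and \(\tau^{1-\beta}\) directly. The following short Python sketch (our illustration; the function name is an assumption) computes these weights and the coefficients \(\rho_{n,l}\) of (17), together with a consistency check based on \(A_{n,l}+B_{n,l}=\Delta t\int_{t_l}^{t_{l+1}}(t_{n+1}-s)^{-\beta}\,ds\).

```python
import numpy as np

def quad_weights(n, beta, dt):
    """Closed-form weights A_{n,l}, B_{n,l} of (15) and rho_{n,l} of (17) for the
    kernel (t - s)^(-beta) on a uniform grid t_l = l*dt (substitute tau = t_{n+1} - s)."""
    l = np.arange(0, n + 1)
    a = (n - l) * dt              # lower limit tau = t_{n+1} - t_{l+1}
    b = (n + 1 - l) * dt          # upper limit tau = t_{n+1} - t_l
    J1 = (b ** (1 - beta) - a ** (1 - beta)) / (1 - beta)   # integral of tau^(-beta)
    J2 = (b ** (2 - beta) - a ** (2 - beta)) / (2 - beta)   # integral of tau^(1-beta)
    A = J2 - a * J1               # A_{n,l} = int (tau - (n-l)dt) tau^(-beta) dtau
    B = b * J1 - J2               # B_{n,l} = int ((n+1-l)dt - tau) tau^(-beta) dtau
    rho = np.empty(n + 1)
    rho[0] = A[0]
    rho[1:] = A[1:] + B[:-1]      # rho_{n,0} = A_{n,0}, rho_{n,l} = A_{n,l} + B_{n,l-1}
    return A, B, rho

# sanity check: A_{n,l} + B_{n,l} = dt * int_{t_l}^{t_{l+1}} (t_{n+1}-s)^(-beta) ds
A, B, rho = quad_weights(3, 0.5, 0.01)
dt, l = 0.01, np.arange(4)
print(np.allclose(A + B, dt * (((4 - l) * dt) ** 0.5 - ((3 - l) * dt) ** 0.5) / 0.5))  # True
```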

3.2 Discretization in space: sinc-collocation method

We discretize the spatial direction by the sinc-collocation method described above. Assume that the approximate solution of (19) is defined by

$$ {u_{m}^{n}} ( x ) = \sum _{j = - N}^{N} {c_{j}^{n}S ( {j,h} ) \circ\phi ( x )}, \quad m=2N+1, $$
(20)

and

$$ \phi(x) = \log\biggl(\frac{x}{1-x}\biggr) $$
(21)

and that the unknown coefficients \(c_{j}^{n}\) in (20) are determined by the sinc-collocation method. The points in the sinc-collocation method are

$$ {x_{k}} = \frac{{{e^{kh}}}}{{1 + {e^{kh}}}},\quad k=-N,\ldots,N, h = \sqrt{\frac{{\pi d}}{{\alpha N}}}, $$
(22)

so

$$ \begin{aligned}[b] \frac{{{d ^{2}}}}{{d {x^{2}}}}{u_{m}^{n}} ( x ) &= \sum_{j = - N}^{N} {c_{j}^{n}} \frac{{{d ^{2}}}}{{d {x^{2}}}} \bigl[ {S ( {j,h} ) \circ\phi ( x )} \bigr] \\ & = \sum_{j = - N}^{N} {c_{j}^{n}} \bigl[ {\phi'' ( x )S_{j}^{ ( 1 )} (x ) + {{ \bigl( {\phi' ( x )} \bigr)}^{2}}S_{j}^{ ( 2 )} ( x )} \bigr], \end{aligned} $$
(23)

where

$$ S_{j}^{(l)} ( x ) = \frac{{{d^{ ( l )}}}}{{d{\phi^{ ( l )}}}} \bigl[ {S ( {j,h} ) \circ\phi ( x )} \bigr],\quad l = 1,2. $$
(24)

Thus, by Lemma 1,

$$ \frac{{{d ^{2}}}}{{d {x^{2}}}}{u_{m}^{n}} ({{x_{i}}} ) = \sum_{j = - N}^{N} {c_{j}^{n}} \biggl[ {\phi'' ({{x_{i}}} )\frac{{\delta _{ji}^{ ( 1 )}}}{h} + {{ \bigl( {\phi' ({{x_{i}}} )} \bigr)}^{2}}\frac{{\delta_{ji}^{ ( 2 )}}}{{{h^{2}}}}} \biggr]. $$
(25)

By substituting (20) and (25) into (19) we have

$$ \begin{gathered}[b] \sum_{j = - N}^{N} {c_{j}^{n + 1}\delta_{ji}^{ ( 0 )}} - {B_{n,n}}\sum_{j = - N}^{N} {c_{j}^{n + 1}} \biggl[ {\phi'' ({{x_{i}}} )\frac{{\delta_{ji}^{ ( 1 )}}}{h} + {{ \bigl( {\phi' ({{x_{i}}} )} \bigr)}^{2}}\frac{{\delta _{ji}^{ ( 2 )}}}{{{h^{2}}}}} \biggr] \\ \quad = \Delta tf_{i}^{n + 1} + \sum _{j = - N}^{N} {c_{j}^{n}\delta _{ji}^{ ( 0 )}} + \sum_{l = 0}^{n} {\sum_{j = - N}^{N} {{\rho _{n,l}}c_{j}^{l}} \biggl[ {\phi'' ( {{x_{i}}} ) \frac{{\delta _{ji}^{ ( 1 )}}}{h} + {{ \bigl( {\phi' ( {{x_{i}}} )} \bigr)}^{2}}\frac{{\delta_{ji}^{ ( 2 )}}}{{{h^{2}}}}} \biggr]} \end{gathered} $$
(26)

with additional initial condition

$$ c_{i}^{0} = {u_{0}} ({{x_{i}}} ),\quad i = - N, \ldots,N. $$
(27)

Note that \(\delta_{ji}^{(0)}=\delta_{ij}^{(0)}\), \(\delta _{ji}^{(1)}=- \delta_{ij}^{(1)}\), and \(\delta_{ji}^{(2)}=\delta _{ij}^{(2)}\). We denote \(I^{(r)}=[\delta_{ij}^{(r)}]\), \(r=0,1,2\), where \(I^{(0)}\) is the identity matrix, and \(I^{(1)}\) and \(I^{(2)}\) are skew-symmetric and symmetric Toeplitz matrices of order \(2N+1\), respectively. We define the \((2N+1) \times(2N+1)\) diagonal matrix as follows:

$$ D{ \bigl( {g ( x )} \bigr)_{ij}} = \textstyle\begin{cases} g ( {{x_{i}}} ),& i = j, \\ 0,& i \ne j. \end{cases} $$
(28)

By multiplying both sides of (26) by \(\frac{1}{{{{ ({\phi' ( {{x_{i}}} )} )}^{2}}}}\) we have

$$ \begin{gathered}[b] \frac{1}{{{{ ( {\phi' ( {{x_{i}}} )} )}^{2}}}}c_{i}^{n + 1} - {B_{n,n}}\sum_{j = - N}^{N} {\biggl[ { \biggl( {\frac{{ - \phi'' ( {{x_{i}}} )}}{{{{ ( {\phi' ({{x_{i}}} )} )}^{2}}}}} \biggr)\frac{{\delta_{ij}^{ (1 )}}}{h} + \frac{1}{{{h^{2}}}}\delta_{ij}^{ ( 2 )}} \biggr]} c_{j}^{n + 1} \\ \quad = \frac{{\Delta t}}{{{{ ( {\phi' ( {{x_{i}}} )} )}^{2}}}}f_{i}^{n + 1} + \frac{1}{{{{ ( {\phi' ( {{x_{i}}} )} )}^{2}}}}c_{i}^{n} + \sum _{l = 0}^{n} {\sum_{j = - N}^{N} {{\rho_{n,l}}} { \biggl[ { \biggl( {\frac{{ - \phi'' ( {{x_{i}}} )}}{{{{ ( {\phi' ( {{x_{i}}} )} )}^{2}}}}} \biggr) \frac{{\delta_{ij}^{ ( 1 )}}}{h} + \frac {1}{{{h^{2}}}}\delta_{ij}^{ ( 2 )}} \biggr]} c_{j}^{l}}. \end{gathered} $$
(29)

Therefore, system (29) can be written in a matrix form as

$$ \begin{gathered}[b] \biggl( {D \biggl( {{{ \biggl( {\frac{1}{{\phi'}}} \biggr)}^{2}}} \biggr) - {B_{n,n}} \biggl[ {\frac{1}{h}D \biggl( {{{ \biggl( {\frac {1}{{\phi'}}} \biggr)}^{\prime}}} \biggr){I^{ ( 1 )}} + \frac{1}{{{h^{2}}}}{I^{ ( 2 )}}} \biggr]} \biggr){C^{n + 1}} \\ \quad = \Delta tD \biggl( {{{ \biggl( {\frac{1}{{\phi'}}} \biggr)}^{2}}} \biggr){F^{n + 1}} + D \biggl( {{{ \biggl( {\frac{1}{{\phi'}}} \biggr)}^{2}}} \biggr){C^{n}}\\ \qquad {} + \sum _{l = 0}^{n} {{\rho_{n,l}} \biggl[ {\frac{1}{h}D \biggl( {{{ \biggl( {\frac{1}{{\phi'}}} \biggr)}^{\prime}}} \biggr){I^{ ( 1 )}} + \frac{1}{{{h^{2}}}}{I^{ ( 2 )}}} \biggr]} C^{l} \end{gathered} $$
(30)

or in a compact form as

$$ P{C^{n + 1}} =R\bigl( \Delta t {F^{n+1}} + {C^{n}} \bigr)+ \sum_{l = 0}^{n} {{\rho _{n,l}}Q{C^{l}}}, $$
(31)

where

$$ \begin{gathered} Q = \frac{1}{h}D \biggl( {{{\biggl( {\frac{1}{{\phi'}}} \biggr)}^{\prime}}} \biggr){I^{ ( 1 )}} + \frac {1}{{{h^{2}}}}{I^{ ( 2 )}}, \\ R = D \biggl( {{{ \biggl( {\frac{1}{{\phi'}}} \biggr)}^{2}}} \biggr), \\ P = R - {B_{n,n}}Q, \end{gathered} $$
(32)

and

$$ {C^{n + 1}} = { \bigl( {c_{ - N}^{n + 1},c_{ - N + 1}^{n + 1}, \ldots ,c_{N}^{n + 1}} \bigr)^{t}},\qquad {F^{n + 1}} = { \bigl( {f_{ - N}^{n + 1},f_{- N + 1}^{n + 1}, \ldots,f_{N}^{n + 1}} \bigr)^{t}}. $$
(33)

If we set

$$ G^{n + 1} =R\bigl( \Delta t {F^{n+1}} + {C^{n}} \bigr)+ \sum_{l = 0}^{n} {{\rho _{n,l}}Q{C^{l}}}, $$
(34)

then the system of equations can be written as follows:

$$ P{C^{n + 1}}=G^{n + 1} $$
(35)

with additional initial condition

$$ {C^{0}} = { \bigl( {{u_{0}} ({{x_{ - N}}} ),{u_{0}} ( {{x_{- N + 1}}} ), \ldots,{u_{0}} ( {{x_{N}}} )} \bigr)^{t}}. $$
(36)

For each n, system (35) is a linear system of equations consisting of \(2N+1\) equations and \(2N+1\) unknowns. The coefficients \(c_{j}^{n}\) in the approximate solution (20) can be determined by solving this linear system.
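To summarize the full scheme, the following Python sketch assembles the matrices of (32) for \(\phi(x)=\log(x/(1-x))\), evaluates the weights (15) and (17) in closed form, and advances the linear systems (35) in time. It is our illustrative reconstruction under assumptions stated in the comments (a uniform time step, a direct numpy solve in place of Matlab's linsolve, and an optional normal-equations Tikhonov variant), not the authors' code, and all function names are ours.

```python
import numpy as np
from scipy.linalg import toeplitz

def solve_vpide(u0, f, beta, N, M, T, d=np.pi / 2, alpha=1.0, mu=0.0):
    """Sketch of the sinc-collocation scheme (22), (28), (31)-(36) for (11)-(12)
    with kernel (t - s)^(-beta) on (0, 1).  u0(x) and f(x, t) are user callables.
    mu > 0 switches on a Tikhonov-regularized solve of each linear system."""
    m, dt = 2 * N + 1, T / M
    h = np.sqrt(np.pi * d / (alpha * N))                    # mesh size of Theorem 1
    k = np.arange(-N, N + 1)
    x = np.exp(k * h) / (1.0 + np.exp(k * h))               # sinc nodes (22)

    # Toeplitz matrices I^(1), I^(2) of Lemma 1
    p = np.arange(1, m)
    c1 = np.concatenate(([0.0], -(-1.0) ** p / p))
    I1 = toeplitz(c1, -c1)
    c2 = np.concatenate(([-np.pi ** 2 / 3.0], -2.0 * (-1.0) ** p / p ** 2))
    I2 = toeplitz(c2)

    # For phi(x) = log(x/(1-x)):  1/phi' = x(1-x),  (1/phi')' = 1 - 2x
    R = np.diag((x * (1.0 - x)) ** 2)                       # R = D((1/phi')^2)
    Q = np.diag(1.0 - 2.0 * x) @ I1 / h + I2 / h ** 2       # Q of (32)

    def weights(n):                                         # A_{n,l}, B_{n,l} of (15)
        l = np.arange(n + 1)
        a, b = (n - l) * dt, (n + 1 - l) * dt
        J1 = (b ** (1 - beta) - a ** (1 - beta)) / (1 - beta)
        J2 = (b ** (2 - beta) - a ** (2 - beta)) / (2 - beta)
        return J2 - a * J1, b * J1 - J2

    def solve(P, g):                                        # direct solve, or Tikhonov if mu > 0
        if mu == 0.0:
            return np.linalg.solve(P, g)
        return np.linalg.solve(P.T @ P + mu ** 2 * np.eye(m), P.T @ g)

    C = [u0(x)]                                             # initial condition (36)
    for n in range(M):
        A, B = weights(n)
        rho = np.concatenate(([A[0]], A[1:] + B[:-1]))      # rho_{n,l} of (17)
        P = R - B[-1] * Q                                   # P = R - B_{n,n} Q
        G = R @ (dt * f(x, (n + 1) * dt) + C[-1])           # right-hand side (34)
        G += sum(rho[l] * (Q @ C[l]) for l in range(n + 1))
        C.append(solve(P, G))
    return x, np.array(C)
```

Since \(S_{j}(x_{k})=\delta_{jk}\), row n of the returned coefficient array contains the approximate nodal values \(u_{m}^{n}(x_{k})\), so no extra interpolation step is needed to compare with an exact solution at the sinc nodes.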

4 Convergence analysis

In this section, we consider the ODE (19), and for simplicity, we can rewrite it as

$$ u^{n+1} ( x ) - {B_{n,n}} {\frac{d^{2}}{dx^{2}} \bigl(u^{n+1}(x)\bigr)} = g ( x ), $$
(37)

where

$$g( x )=\Delta t{f^{n + 1}} ( x ) +u^{n} ( x ) + \sum _{l = 0}^{n} {{\rho_{n,l}} {\frac{d^{2}}{dx^{2}} \bigl(u^{l}(x)\bigr)}}, $$

associated with boundary conditions

$$u^{n+1}(0)=u^{n+1}(1)=0. $$

Let \(u^{n+1}(x)\) be the exact solution of the ODE (37), that is, the solution of equations (11)-(12) at the \((n+1)\)th time level, and let \(u_{m}^{n+1}(x)\) be the approximate solution of equation (37) obtained by the sinc-collocation approximation (20). The sinc interpolant of the exact solution of equations (11)-(12) at the points \(x_{j}\) is given by

$$ w^{n+1}_{m}(x) = \sum _{j = - N}^{N} {u^{n+1}(x_{j})} {S_{j}} ( x ). $$
(38)

We need to derive an upper bound for \(\Vert P^{-1}\Vert_{2}\), which is given in the following lemma.

Lemma 2

Let the matrix P be defined by equation (32). For \(x \in{\phi^{ - 1}} ( { ( { - \infty,\infty} )} )\), we can obtain

$$\frac{{P + {P^{*} }}}{2} = H - \frac{{{B_{n,n}}}}{{{h^{2}}}}{I^{ (2 )}}, $$

where \((\cdot)^{*}\) denotes the conjugate transpose of a matrix, and

$$H = D \biggl( \operatorname{Re} \biggl( {{{ \biggl( {\frac{1}{{\phi'}}} \biggr)}^{2}}} \biggr) \biggr) - \frac{{{B_{n,n}}}}{2h} \biggl\{ {D \biggl({{{ \biggl( {\frac{1}{{\phi'}}} \biggr)}^{\prime}}} \biggr){I^{ ( 1 )}} - {I^{ ( 1 )}}D \biggl( {{{ \biggl( {\overline{\frac{1}{{\phi'}}} } \biggr)}^{\prime}}} \biggr)} \biggr\} . $$

If the eigenvalues of matrix H are nonnegative, then there exists a constant \(c_{0}\), independent of N, such that

$$ { \bigl\Vert {{P^{ - 1}}} \bigr\Vert _{2}} \leqslant\frac {{4{d}{N}}}{{{\alpha\pi}{B_{n,n}}}} \biggl( {1 + \frac{{{c_{0}}}}{N}} \biggr) $$
(39)

for a sufficiently large N.

Proof

Let \(\lambda_{i}(\cdot)\), \(i=1, 2, \ldots, 2N+1\), be the eigenvalues of a matrix ordered as \(\lambda_{i}(\cdot) \leqslant \lambda_{i+1}(\cdot)\), and let \(\sigma_{i}\) be the singular values of the matrix P satisfying \(\sigma_{i} \leqslant\sigma_{i+1}\). Note that the matrix \(I^{(2)}\) is a symmetric, negative definite Toeplitz matrix with bounded eigenvalues and matrix \(I^{(1)}\) is a skew-symmetric Toeplitz matrix with complex eigenvalues ([27], p. 151-152). From ([30], p. 327, [23]) we have

$$\begin{aligned} {\sigma_{1}} ( P ) &= \min _{1 \leqslant i \leqslant2N + 1} {\sigma_{i}} ( P ) \geqslant\min _{1 \leqslant i \leqslant 2N + 1} \biggl\vert {{\lambda_{i}} \biggl( {\frac{{P + {P^{*} }}}{2}} \biggr)} \biggr\vert = \min_{1 \leqslant i \leqslant2N + 1} \biggl\vert {{\lambda_{i}} \biggl( {H - \frac{{{B_{n,n}}}}{{{h^{2}}}}{I^{( 2 )}}} \biggr)} \biggr\vert \\ &\geqslant\frac{{{B_{n,n}}}}{{{h^{2}}}}\min_{1 \leqslant i \leqslant2N + 1} \bigl\vert {{\lambda_{i}} \bigl( {{I^{ ( 2 )}}} \bigr)} \bigr\vert \geqslant \frac{{4{B_{n,n}}}}{{{h^{2}}}}{\sin ^{2}} \biggl( {\frac{\pi}{{4 ( {N + 1} )}}} \biggr), \end{aligned} $$

and setting \(h = \sqrt{{{\pi d} / {\alpha N}}} \) leads to

$${ \bigl\Vert {{P^{ - 1}}} \bigr\Vert _{2}} = \frac{1}{{{\sigma _{1}} ( P )}} \leqslant\frac{{{h^{2}}}}{{4{B_{n,n}}{{\sin }^{2}} ( {\frac{\pi}{{4 ( {N + 1} )}}} )}} \leqslant\frac{{4{h^{2}}{N^{2}}}}{{{\pi^{2}}{B_{n,n}}}} \biggl({1 + \frac {{{c_{0}}}}{N}} \biggr) = \frac{{4{d}{N}}}{{{\alpha\pi }{B_{n,n}}}} \biggl( {1 + \frac{{{c_{0}}}}{N}} \biggr), $$

where \(B_{n,n}\) is given by equation (15). □

The following theorem gives a bound for \(\vert u_{m}^{n+1}(x)-w^{n+1}_{m}(x)\vert\).

Theorem 2

Let \(u_{m}^{n+1}(x)\) be an approximate solution of equation (37), and let \(w^{n+1}_{m}(x)\) be an approximate solution of equations (11)-(12). Then, there exists a constant \(c_{4}\), independent of N, such that

$$ \sup_{x \in\Gamma} \bigl\vert {u_{m}^{n+1} ( x ) - {w^{n+1}_{m}} ( x )} \bigr\vert \le{c_{4}} {N^{3}}\exp \bigl( { - {{ ( {\pi d\alpha N} )}^{{1 / 2}}}} \bigr). $$
(40)

Proof

By equations (20) and (38) and the Cauchy-Schwarz inequality we have

$$\begin{aligned} \bigl\vert { u^{n+1}_{m} ( x ) - w^{n+1}_{m} ( x )} \bigr\vert &= \Biggl\vert {\sum _{j = - N}^{N} {{ c^{n+1}_{j}} {S_{j}} ( x )} - \sum_{j = - N}^{N} { u^{n+1} ( {{x_{j}}} ){S_{j}} ( x )} } \Biggr\vert \\ & \leqslant{ \Biggl( {\sum_{j = - N}^{N} {{{ \bigl\vert {{ c^{n+1}_{j}} - u^{n+1} ({{x_{j}}} )} \bigr\vert }^{2}}} } \Biggr)^{\frac{1}{2}}} { \Biggl( {\sum_{j = - N}^{N} {{{ \bigl\vert {{S_{j}} ( x )} \bigr\vert }^{2}}} } \Biggr)^{\frac{1}{2}}} . \end{aligned} $$

Since \({ ( {\sum_{j = - N}^{N} {{{ \vert {{S_{j}} ( x )} \vert }^{2}}} } )^{\frac{1}{2}}} \leqslant c_{1}\), where \(c_{1}\) is a constant independent of N, we get

$$ \bigl\vert {{u^{n+1}_{m}} ( x ) - w^{n+1}_{m} ( x )} \bigr\vert \leqslant c_{1} {\bigl\Vert C^{n+1} - V^{n+1} \bigr\Vert _{2}}, $$
(41)

where \(C^{n+1}\) is given by (35), and the vector \(V^{n+1}\) is defined by

$$ V^{n+1} = { \bigl( { u^{n+1} ({{x_{ - N}}} ), u^{n+1} ( {{x_{ - N + 1}}} ), \ldots,u^{n+1} ( {{x_{N}}} )} \bigr)^{t}}. $$
(42)

Using equation (35) in (41), we have

$$ \bigl\Vert C^{n+1}-V^{n+1} \bigr\Vert _{2} = \bigl\Vert P^{-1}\bigl(PC^{n+1}-PV^{n+1} \bigr) \bigr\Vert _{2} \leqslant \bigl\Vert P^{-1} \bigr\Vert _{2} \bigl\Vert PV^{n+1}-G^{n+1} \bigr\Vert _{2}. $$
(43)

Now, we must get a bound for \(\Vert PV^{n+1}-G^{n+1} \Vert _{2}\). For simplicity, we denote

$$ r_{k} = \bigl(P V^{n+1} - G^{n+1} \bigr)_{k}, \quad k = -N, \ldots,N, $$

and using equation (37), we obtain

$$\begin{aligned} \vert {{r_{k}}} \vert =& \bigl\vert {{g} ( {{x_{k}}} ) - g_{m} ({{x_{k}}} )} \bigr\vert \\ =& \biggl\vert {u^{n + 1} ({{x_{k}}} ) - {B_{n,n}}\frac{{{d^{2}}}}{{d{x^{2}}}} \bigl({u^{n + 1} ( {{x_{k}}} )} \bigr) - u_{m}^{n + 1} ( {{x_{k}}} ) + {B_{n,n}}\frac{{{d^{2}}}}{{d{x^{2}}}} \bigl({u_{m}^{n + 1} ( {{x_{k}}} )} \bigr)} \biggr\vert \\ \leqslant& \bigl\vert {u^{n + 1} ( {{x_{k}}} ) - u_{m}^{n + 1} ( {{x_{k}}} )} \bigr\vert + {B_{n,n}} \biggl\vert {\frac{{{d^{2}}}}{{d{x^{2}}}} \bigl( {u^{n + 1} ({{x_{k}}} )} \bigr) - \frac{{{d^{2}}}}{{d{x^{2}}}} \bigl( {u_{m}^{n + 1} ( {{x_{k}}} )} \bigr)} \biggr\vert . \end{aligned}$$
(44)

Now, using Theorem 1, we obtain

$$\begin{aligned} \Vert r_{k} \Vert \leqslant& {c_{2}} {N^{{1 / 2}}}\exp \bigl( { - {{ ( {\pi d\alpha N} )}^{{1 / 2}}}} \bigr) + {B_{n,n}} {c_{3}} {N^{{3 / 2}}}\exp \bigl( { - {{ ( {\pi d\alpha N} )}^{{1 / 2}}}} \bigr) \\ \leqslant& \exp \bigl( { - {{ ( {\pi d \alpha N} )}^{{1 / 2}}}} \bigr) \bigl( {{c_{2}} {N^{{3 / 2}}} + {B_{n,n}} {c_{3}} {N^{{3 / 2}}}} \bigr) \\ = & K{N^{{3 / 2}}}\exp \bigl( { - {{ ( {\pi d\alpha N} )}^{{1 / 2}}}} \bigr) , \end{aligned}$$
(45)

where \(c_{2}\) and \(c_{3}\) are constants independent of N, and \(K=c_{2}+B_{n,n}c_{3}\). We know that

$$ { \bigl\Vert {P{V^{n + 1}} - {G^{n + 1}}} \bigr\Vert _{2}} \leqslant \sqrt{2N + 1} { \bigl\Vert {P{V^{n + 1}} - {G^{n + 1}}} \bigr\Vert _{\infty}}, $$

and using inequality (45), we obtain

$$ { \bigl\Vert {P{V^{n + 1}} - {G^{n + 1}}} \bigr\Vert _{2}} \leqslant {\sqrt{2}K} {N^{2}} \exp \bigl( { - {{( {\pi d\alpha N} )}^{{1 / 2}}}} \bigr). $$
(46)

Now, using Lemma 2 and inequality (46) in (43), we have

$$ { \bigl\Vert {C^{n+1} - V^{n+1}} \bigr\Vert _{2}} \leqslant {\frac{4 \sqrt{2} d K (1+c_{0})}{\alpha\pi B_{n,n}} } {N^{3}}\exp \bigl( { - {{ ( {\pi d\alpha N} )}^{{1 / 2}}}} \bigr). $$
(47)

So, from (41) and (47) we get

$$\sup_{x \in\Gamma} \bigl\vert {u_{m}^{n+1} ( x ) - {w^{n+1}_{m}} ( x )} \bigr\vert \le{c_{4}} {N^{3}}\exp \bigl( { - {{ ( {\pi d\alpha N} )}^{{1 / 2}}}} \bigr), $$

where \(c_{4}={\frac{4 \sqrt{2} d K (1+c_{0})c_{1}}{\alpha\pi B_{n,n}} }\). □

Theorem 3

Let \(u^{n+1}(x)\) be the exact solution of ODE (37), and let \(u^{n+1}_{m}(x)\) be its sinc approximation defined by Eq. (20). Then, under the assumptions of Theorems 1 and 2, there exists a constant \(c_{7}\), independent of N, such that

$$ \sup_{x \in\Gamma} \bigl\vert {u^{n+1} ( x ) - {u^{n+1}_{m}} ( x )} \bigr\vert \le{c_{7}} {N^{3}}\exp \bigl( { - {{ ( {\pi d\alpha N} )}^{{1 / 2}}}} \bigr). $$
(48)

Proof

Applying the triangle inequality,

$$ \bigl\vert u^{n+1} ( x ) - u^{n+1}_{m} ( x ) \bigr\vert \le \bigl\vert u^{n+1} ( x ) - w^{n+1}_{m} ( x ) \bigr\vert + \bigl\vert w^{n+1}_{m} ( x ) - u^{n+1}_{m} ( x ) \bigr\vert . $$
(49)

Applying Theorem 1, we see that there exists a constant \(c_{5}\) independent of N such that

$$ \bigl\vert { u^{n+1} ( x ) - w^{n+1}_{m} ( x )} \bigr\vert \leqslant{c_{5}} {N^{{1 / 2}}}\exp \bigl( { - {{ ( {\pi d\alpha N} )}^{{1 / 2}}}} \bigr). $$
(50)

Also, using Theorem 2, we obtain

$$ \bigl\vert { w^{n+1}_{m} ( x )-u^{n+1}_{m} ( x )} \bigr\vert \leqslant{c_{6}} {N^{3}}\exp \bigl( { - {{ ( {\pi d\alpha N} )}^{{1 / 2}}}} \bigr), $$
(51)

where \(c_{6}\) is a constant independent of N. Finally, combining (50) and (51), we conclude

$$\sup_{x \in\Gamma} \bigl\vert { u^{n+1} ( x ) - u^{n+1}_{m} ( x )} \bigr\vert \leqslant{c_{7}} {N^{3}}\exp \bigl( { - {{ ( {\pi d\alpha N} )}^{{1 / 2}}}} \bigr), $$

where \({c_{7}} = {c_{5}}+{c_{6}}\). □

Remark

We know that the time discretization is effected by a combination of the backward Euler method and the product trapezoidal integration rule with orders of accuracy \(\mathcal{O}({\Delta t})\) and \(\mathcal{O}({\Delta t}^{2 - \beta})\), respectively [1–3, 10, 29]. Then, combining this with (48), the total error of the proposed approach for the solution of equations (11)-(12) can be bounded as follows:

$${ \bigl\Vert {u ( {x,t} ) - {u_{m}} ( {x,t} )} \bigr\Vert _{\infty}} \leqslant\gamma \bigl( N^{3}{\exp \bigl( { - {{ ( {\pi d\alpha N} )}^{{1 / 2}}}} \bigr) +\Delta t} \bigr), $$

where γ is a constant independent of N.

5 Numerical results

In this section, we provide numerical experiments for the suggested method. In all examples, we set the parameters \(d=\frac{\pi}{2}\) and \(\alpha=1\) and denote the computed and exact analytical solutions by \(u_{\mathrm{app}}\) and \(u_{\mathrm{ex}}\), respectively. To show the accuracy of the approximation, the following maximum pointwise error between the exact and approximate solutions is reported:

$$ \Vert \cdot \Vert _{\infty}=\mathop{\mathrm{Max}}_{i,n} \bigl\vert u_{\mathrm{app}}(x_{i},t_{n})-u_{\mathrm{ex}}(x_{i},t_{n}) \bigr\vert , \quad i=-N,\ldots,N, n=0,1,\ldots,M. $$

To implement the method, the following algorithm is given.

The linear algebraic system in step 6 of Algorithm 1 is solved directly by using the ‘linsolve’ command from the ‘LinearAlgebra’ package in Matlab R2014a, and to overcome the ill-conditioning faced in this problem, we used Tikhonov regularization [31], which solves the system \(Ax=b\) by replacing \(\min_{x \in{\mathbb{R}^{n}}} \Vert {Ax - b} \Vert _{2}\) with the least squares problem \(\min_{x \in{\mathbb {R}^{n}}} \{ { \Vert {Ax - b} \Vert _{2}^{2} + {\mu ^{2}} \Vert x \Vert _{2}^{2}} \}\). All calculations were carried out on an Intel Core Dual-Core 2.20 GHz CPU with 4 GB RAM.

Algorithm 1

Implementation of the proposed approach
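To make the regularized solve concrete, here is a minimal Python sketch of Tikhonov regularization via an equivalent augmented least-squares formulation. It is our illustration, not the authors' Matlab implementation; the function name tikhonov_solve, the Vandermonde test matrix, and the value of μ are assumptions chosen only for demonstration.

```python
import numpy as np

def tikhonov_solve(A, b, mu):
    """Solve min_x ||A x - b||_2^2 + mu^2 ||x||_2^2 via the equivalent augmented
    least-squares problem  min_x || [A; mu I] x - [b; 0] ||_2."""
    n = A.shape[1]
    A_aug = np.vstack([A, mu * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

# usage: replace a direct solve of an ill-conditioned system P c = g
rng = np.random.default_rng(0)
P = np.vander(np.linspace(0, 1, 9), increasing=True)   # a deliberately ill-conditioned matrix
g = rng.standard_normal(9)
c = tikhonov_solve(P, g, 1e-8)
print(np.linalg.cond(P), np.linalg.norm(P @ c - g))    # condition number and residual
```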

Example 1

Consider the following homogeneous Volterra partial integro-differential equation [11]:

$$\begin{gathered} {u_{t}} ( {x,t} ) = {\int_{0}^{t} { ( {t - s} )} ^{- {1 / 2}}} {u_{xx}} ( {x,s} )\,ds,\quad0 < x < 1, 0 < t < 1, \\ u ( {0,t} ) = u ( {1,t} ) = 0,\quad0 \le t \le 1, \\ u ( {x,0} ) = \sin ( {\pi x} ),\quad0 \le x \le1, \end{gathered} $$

with analytic solution [32, 33]

$$u ( {x,t} ) = \sum_{k = 0}^{\infty}{{{ ( { - 1} )}^{k}}\Gamma{{ \biggl( {\frac{3}{2}k + 1} \biggr)}^{ - 1}} {{ \bigl( {{\pi^{{5 / 2}}} {t^{{3 / 2}}}} \bigr)}^{k}}\sin ( {\pi x} )} . $$

To evaluate the analytic solution at a specific point in practice, we truncate this infinite series at the term \(k=21\). In Table 1, the results of the three-point explicit method (TPEM), three-point implicit method (TPIM), Crank-Nicolson method (CNM), and Crandall method (CM) (see [11]) with \(\Delta t = 10^{-4}\) are presented for comparison with the sinc-collocation method with the arising system solved by the Linsolve package (SMLP) and the sinc-collocation method with the arising system solved by Tikhonov regularization (SMTR) with \(\Delta t = 10^{-2}\), \(\Delta t = 10^{-3}\), and \(\Delta t = 10^{-4}\). In Figure 1 and Figure 2, we can also observe that the computed solution is highly consistent with the truncated analytical solution when Δt is selected small enough. Furthermore, in Table 2, the maximum pointwise errors and condition numbers for various values of N at \(t=0.01\), \(\Delta t=10^{-4}\), and \(T=1\) for SMLP and SMTR are reported, which shows the improved rate of convergence as the number of sinc points increases. Also, the global maximum pointwise errors at \(N = 4\) and \(\Delta t = 10^{-3}\) are plotted in Figure 3(a) for SMLP and in Figure 3(b) for SMTR for comparison with the thin plate spline-radial basis function method (TPS-RBF), inverse multiquadric-radial basis function method (IMQ-RBF), and hyperbolic secant-radial basis function method (Sech-RBF) (see [17]) with \(\Delta t = 10^{-3}\) at \(N = 25\). These figures show that our method achieves more accurate results with fewer grid points. Convergence curves corresponding to Table 2 are plotted in Figure 4. This figure indicates that the maximum errors decline at an exponential rate with respect to N for both SMLP and SMTR, and these graphs confirm the theoretical results.
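As an illustration (ours, not the authors' code), the truncated reference solution can be evaluated with scipy.special.gamma as follows; the truncation index K=21 matches the choice stated above.

```python
import numpy as np
from scipy.special import gamma

def u_exact(x, t, K=21):
    # truncated series solution of Example 1, summed up to k = K
    k = np.arange(K + 1)
    coeff = (-1.0) ** k * (np.pi ** 2.5 * t ** 1.5) ** k / gamma(1.5 * k + 1.0)
    return coeff.sum() * np.sin(np.pi * x)

print(u_exact(0.5, 1.0))   # reference value at the point (x, t) = (0.5, 1)
```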

Figure 1

Truncated analytic and computed solutions of Example  1 with \(\pmb{N=4}\) at \(\pmb{t=1}\) by using the Linsolve package.

Figure 2

Truncated analytic and computed solutions of Example  1 with \(\pmb{N=4}\) at \(\pmb{t=1}\) by using Tikhonov regularization.

Figure 3

The global maximum pointwise errors at \(\pmb{N=4}\) and \(\pmb{\Delta t=10^{-3}}\) (a) by using the Linsolve package, (b) by using of Tikhonov regularization.

Figure 4

Convergence of the SMLP and SMTR methods for various values of N at \(\pmb{t=0.01}\) , \(\pmb{\Delta t =10^{-4}}\) , and \(\pmb{T=1}\) .

Table 1 Comparison of estimated maximum pointwise errors of Example  1 for \(\pmb{N = 4}\) , \(\pmb{T=1}\) at \(\pmb{t=1}\) and different values of x
Table 2 Results for Example  1 at \(\pmb{t=0.01}\)

Example 2

Consider the following nonhomogeneous Volterra partial integro-differential equation [1, 18]:

$$\begin{gathered} {u_{t}} ( {x,t} ) = {\int_{0}^{t} { ( {t - s} )} ^{- {1 / 2}}} {u_{xx}} ( {x,s} )\,ds+f(x,t),\quad0 < x < 1, 0 < t < 1, \\ u ( {0,t} ) = u ( {1,t} ) = 0,\quad0 \le t \le 1, \\ u ( {x,0} ) = \sin ( {\pi x} ),\quad0 \le x \le1. \end{gathered} $$

In the case of

$$f ( {x,t} ) = \frac{{2{t^{{1 / 2}}}}}{{\sqrt{\pi}}} \bigl( {{\pi^{2}}\sin\pi x - \sin2\pi x} \bigr) - 2{\pi^{2}} {t^{2}}\sin2\pi x, $$

the analytic solution is given by \(u ( {x,t} ) = \sin\pi x - \frac{{4{t^{{3 / 2}}}}}{{3\sqrt{\pi}}}\sin2\pi x\). We apply the presented methods SMLP and SMTR to this example for comparison with the quasi-wavelet method (QWM) [18]. We used \(N=32\) and \(\Delta t = 10^{-5}, 10^{-6}\). The global maximum pointwise errors in the solutions have been computed at the 50th, 150th, 250th, 350th, and 450th time levels and tabulated in Table 3, which shows that the sinc method is considerably more accurate than QWM. The analytic and computed solutions are compared in Figure 5 for \(N=32\) and \(\Delta t =10^{-6}\) by using SMLP. In addition, the maximum pointwise errors in the solutions by SMLP and SMTR from Table 3 are plotted in Figure 6 and Figure 7.

Figure 5

Analytic and computed solutions of Example  2 with \(\pmb{\Delta t=10^{-6}}\) and \(\pmb{N=32}\) by using the Linsolve package.

Figure 6

The global maximum pointwise errors at \(\pmb{N=32}\) and \(\pmb{\Delta t=10^{-6}}\) (a) by using the Linsolve package, (b) by using Tikhonov regularization.

Figure 7

The global maximum pointwise errors for \(\pmb{N=32}\) at \(\pmb{t=0.00045}\) , \(\pmb{\Delta t =10^{-6}}\) , and \(\pmb{T=1}\) (a) by using the SMLP method, (b) by using the SMTR method.

Table 3 Results for Example  2

Example 3

Consider equation (1) in the nonhomogeneous form when \(k_{0}(t-s)=(\pi(t-s))^{-\frac{1}{2}}\), \(0 \leqslant x\leqslant1\), \(0 \leqslant t\leqslant1\), \(u_{0}(x)=\sin(\pi x)\), and \(f(x,t)=\sin(\pi x)\). Thus, the analytical solution is given by [3]

$$u ( {x,t} ) = \Biggl\{ {\sum_{k = 0}^{\infty}{{{ ( { - 1} )}^{k}}\frac{{{{ ( {{\pi^{2}}{t^{{3 / 2}}}} )}^{k}}}}{{\Gamma ( {1 + \frac{3}{2}k} )}}} + t\sum _{k = 0}^{\infty}{{{ ( { - 1} )}^{k}} \frac{{{{ ( {{\pi ^{2}}{t^{{3 / 2}}}} )}^{k}}}}{{\Gamma ( {2 + \frac{3}{2}k} )}}} } \Biggr\} \sin ( {\pi x} ). $$

To evaluate the analytic solution at a specific point in practice, the infinite series given above is truncated at the term \(k=21\). In Table 4, we show the results at the 50th, 150th, 250th, and 350th time levels for the three different time steps \(\Delta t = 10^{-5} \), \(\Delta t = 10^{-6}\), and \(\Delta t =10^{-7}\) for the SMLP and SMTR methods with \(N=8, 16, 32, 64\), which verify that the sinc method is sufficiently accurate. Besides, we can also see in Figure 8 that the computed solution is consistent with the truncated analytical solution. In addition, the maximum pointwise errors in the solutions by SMLP and SMTR from Table 4 are plotted in Figure 9 and Figure 10.

Figure 8

Truncated analytic and computed solutions of Example  3 with \(\pmb{\Delta t=0.00001}\) and \(\pmb{N=16}\) by using the Linsolve package.

Figure 9

The global maximum pointwise errors at \(\pmb{N=16}\) and \(\pmb{\Delta t=10^{-5}}\) (a) by using the Linsolve package, (b) by using Tikhonov regularization.

Figure 10

The global maximum pointwise errors for \(\pmb{N=16}\) at \(\pmb{t=0.0035}\) , \(\pmb{\Delta t =10^{-5}}\) , and \(\pmb{T=1}\) (a) by using the Linsolve package, (b) by using Tikhonov regularization.

Table 4 Results for Example  3

6 Conclusions

In this paper, the sinc-collocation method was applied to solve linear Volterra partial integro-differential equations, with the final ill-conditioned system solved either directly by the Linsolve package or by Tikhonov regularization. To illustrate the effectiveness of the method, several examples were solved with the proposed algorithm, and the convergence of the method was analyzed. The results show that the proposed method is reliable and consistent in comparison with the other methods mentioned, and that using Tikhonov regularization to solve the final ill-conditioned algebraic system improves the rate of convergence.

References

  1. McLean, W, Mustapha, K: A second-order accurate numerical method for a fractional wave equation. Numer. Math. 105(3), 481-510 (2007)

  2. McLean, W, Thomée, V, Wahlbin, LB: Discretization with variable time steps of an evolution equation with a positive-type memory term. J. Comput. Appl. Math. 69(1), 49-69 (1996)

  3. McLean, W, Thomée, V: Numerical solution of an evolution equation with a positive-type memory term. ANZIAM J. 35(1), 23-70 (1993)

  4. Schneider, W, Wyss, W: Fractional diffusion and wave equations. J. Math. Phys. 30(1), 134-144 (1989)

  5. Renardy, M, Nohel, JA: Mathematical Problems in Viscoelasticity, vol. 35. Longman, Harlow (1987)

  6. Yanik, EG, Fairweather, G: Finite element methods for parabolic and hyperbolic partial integro-differential equations. Nonlinear Anal., Theory Methods Appl. 12(8), 785-809 (1988)

  7. Fakhar-Izadi, F, Dehghan, M: An efficient pseudo-spectral Legendre-Galerkin method for solving a nonlinear partial integro-differential equation arising in population dynamics. Math. Methods Appl. Sci. 36(12), 1485-1511 (2013)

  8. Avazzadeh, Z, Rizi, ZB, Ghaini, FM, Loghmani, G: A numerical solution of nonlinear parabolic-type Volterra partial integro-differential equations using radial basis functions. Eng. Anal. Bound. Elem. 36(5), 881-893 (2012)

  9. Fakhar-Izadi, F, Dehghan, M: Space-time spectral method for a weakly singular parabolic partial integro-differential equation on irregular domains. Comput. Math. Appl. 67(10), 1884-1904 (2014)

  10. Mustapha, K, McLean, W: Discontinuous Galerkin method for an evolution equation with a memory term of positive type. Math. Comput. 78(268), 1975-1995 (2009)

  11. Dehghan, M: Solution of a partial integro-differential equation arising from viscoelasticity. Int. J. Comput. Math. 83(1), 123-129 (2006)

  12. Tang, J, Xu, D: The global behavior of finite difference-spatial spectral collocation methods for a partial integro-differential equation with a weakly singular kernel. Numer. Math., Theory Methods Appl. 6(3), 556-570 (2013)

  13. Luo, M, Xu, D, Li, L: A compact difference scheme for a partial integro-differential equation with a weakly singular kernel. Appl. Math. Model. 39(2), 947-954 (2015)

  14. Kim, CH, Choi, UJ: Spectral collocation methods for a partial integro-differential equation with a weakly singular kernel. ANZIAM J. 39(3), 408-430 (1998)

  15. Bialecki, B, Fairweather, G: Orthogonal spline collocation methods for partial differential equations. J. Comput. Appl. Math. 128(1), 55-82 (2001)

  16. Yoon, J-M, Xie, S, Hrynkiv, V: Two numerical algorithms for solving a partial integro-differential equation with a weakly singular kernel. Appl. Appl. Math. 7(1), 133-141 (2012)

  17. Biazar, J, Asadi, MA: FD-RBF for partial integro-differential equations with a weakly singular kernel. Appl. Comput. Math. 4(6), 445-451 (2015)

  18. Long, W, Xu, D, Zeng, X: Quasi wavelet based numerical method for a class of partial integro-differential equation. Appl. Math. Comput. 218(24), 11842-11850 (2012)

  19. Linz, P: Analytical and Numerical Methods for Volterra Equations. SIAM, Philadelphia (1985)

  20. Rashidinia, J, Zarebnia, M: Solution of a Volterra integral equation by the sinc-collocation method. J. Comput. Appl. Math. 206(2), 801-813 (2007)

  21. Rashidinia, J, Maleknejad, K, Taheri, N: Sinc-Galerkin method for numerical solution of the Bratu’s problems. Numer. Algorithms 62(1), 1-11 (2013)

  22. Rashidinia, J, Barati, A, Nabati, M: Application of sinc-Galerkin method to singularly perturbed parabolic convection-diffusion problems. Numer. Algorithms 66(3), 643-662 (2014)

  23. Rashidinia, J, Barati, A: Numerical solutions of one-dimensional non-linear parabolic equations using sinc collocation method. Ain Shams Eng. J. 6(1), 381-389 (2015)

  24. El-Gamel, M: A note on solving the fourth-order parabolic equation by the sinc-Galerkin method. Calcolo 52(3), 327-342 (2015)

  25. Maleknejad, K, Khalilsaraye, IN, Alizadeh, M: On the solution of the integro-differential equation with an integral boundary condition. Numer. Algorithms 65(2), 355-374 (2014)

  26. Sugihara, M: Near optimality of the sinc approximation. Math. Comput. 72(242), 767-786 (2003)

  27. Lund, J, Bowers, KL: Sinc Methods for Quadrature and Differential Equations. SIAM, Philadelphia (1992)

  28. Stenger, F: Numerical Methods Based on Sinc and Analytic Functions, vol. 20. Springer, New York (2012)

  29. Dixon, J: On the order of the error in discretization methods for weakly singular second kind Volterra integral equations with nonsmooth solutions. BIT Numer. Math. 25(4), 623-634 (1985)

  30. Marshall, AW, Olkin, I, Arnold, B: Inequalities: Theory of Majorization and Its Applications. Springer, London (2010)

  31. Hansen, PC: Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion. SIAM, Philadelphia (1998)

  32. Tang, T: A finite difference scheme for partial integro-differential equations with a weakly singular kernel. Appl. Numer. Math. 11(4), 309-319 (1993)

  33. Sanz-Serna, JM: A numerical method for a partial integro-differential equation. SIAM J. Numer. Anal. 25(2), 319-327 (1988)


Acknowledgements

The authors would like to thank the reviewers for their constructive comments to improve the quality of this work.

Author information


Contributions

All authors read and approved the final manuscript.

Corresponding author

Correspondence to Mohammad Ali Fariborzi Araghi.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Fahim, A., Fariborzi Araghi, M.A., Rashidinia, J. et al. Numerical solution of Volterra partial integro-differential equations based on sinc-collocation method. Adv Differ Equ 2017, 362 (2017). https://doi.org/10.1186/s13662-017-1416-7



  • DOI: https://doi.org/10.1186/s13662-017-1416-7
