
Theory and Modern Applications

Numerical method and convergence order for second-order impulsive differential equations

Abstract

This paper is devoted to a numerical scheme for impulsive differential equations. The main idea of the method is to establish, for the first time, a broken reproducing kernel space that can accommodate pulse models. The uniform convergence of the numerical solution is then proved, and the time-consuming Schmidt orthogonalization process is avoided. The proposed method is shown to be stable and to have second-order convergence. Numerical examples demonstrate that the algorithm is feasible and effective.

1 Introduction

Pulse boundary value problems occur in many applications: population dynamics [1], physics and chemistry [2], irregular geometries and interface problems [3,4,5], and signal processing [6, 7]. Research on impulsive differential equations with various kinds of boundary conditions has become much more active in recent years; however, only in the last few decades has attention been paid to the theory and numerical analysis of IDEs. A variety of methods have been used to study the existence of solutions of impulsive problems [8,9,10,11,12], and many researchers have extensively studied numerical methods for impulsive differential equations. Berenguer [13] provided a collage-type theorem for impulsive differential equations with inverse boundary conditions. Epshteyn [14, 15] solved high-order differential equations with interface conditions based on the Difference Potentials approach for variable coefficients. Hossainzadeh [16] applied the Adomian Decomposition Method (ADM) to first-order impulsive differential equations. Zhang [17] studied numerical solutions of first-order impulsive differential equations by collocation methods. Zhang [18] analyzed the asymptotic stability of a class of linear impulsive delay differential equations. Impulsive differential equations are a mathematical form of problems in many application fields, so solving them accurately is very important.

In this paper, we consider the following second-order impulsive differential equations (IDEs for short):

$$ \textstyle\begin{cases} u^{\prime \prime } ( x ) + a_{1} ( x ) u ' ( x ) + a_{0} ( x ) u ( x ) =f ( x ),\quad x\in [a,b]\backslash \{ c\}, \\ u ( a ) = \alpha _{1}, \quad\quad u ( b ) = \alpha _{2}, \\ \Delta u ' ( c ) = \alpha _{3},\quad\quad \Delta u ( c ) = \alpha _{4}, \end{cases} $$
(1)

where \(\Delta u ' ( c ) = u ' ( c^{+} ) - u' ( c^{-} )\), and \(\alpha _{3}\) and \(\alpha _{4}\) are not both zero. \(a_{i} ( x )\) and \(f ( x )\) are known functions, and \(\alpha _{j} \in \mathbb{R}\), \(j=1,2, 3, 4\). In this paper only one pulse point is considered; by analogy, the algorithm can also be applied to multiple pulse points.

As is well known, the reproducing kernel method is a powerful tool for solving differential equations [19,20,21,22,23]. However, the functions in a traditional reproducing kernel space are smooth, so, in order to solve impulsive differential equations, we propose, for the first time, a broken reproducing kernel space.

The aim of this paper is to derive numerical solutions of Eq. (1). In Sect. 2 we introduce the reproducing kernel spaces used to solve the problem. Some primary results are analyzed in Sect. 3. The numerical algorithm and the convergence order of the approximate solution are presented in Sect. 4. In Sect. 5 the presented algorithm is applied to some numerical experiments. We end with conclusions in Sect. 6.

2 The reproducing kernel method

The reproducing kernel method has been applied to boundary value problems by many researchers, because it easily yields the exact solution in series form and an approximate solution of high precision [19, 20]. However, the method requires the exact solution to be smooth, so IDEs cannot be solved directly in a traditional reproducing kernel space.

In this paper, the traditional reproducing kernel space is treated delicately: it is broken into two spaces, each of which is a smooth reproducing kernel space, so the resulting space can be used to solve IDEs. We assume that Eq. (1) has a unique solution.

2.1 The traditional reproducing kernel space


The reproducing kernel space \(W_{2}^{3} [a, c]\) is defined as follows:

  • \(W_{2}^{3} [ a, c ] = \{u(x)\mid u^{\prime \prime } \text{ is an absolutely continuous real-valued function}, {u}''' \in L^{2} [a, c] \}\) [20] (\(W_{a}^{3}\) for short).

The inner product and norm are defined as follows:

$$\begin{aligned}& \bigl\langle u(t), v(t) \bigr\rangle = \sum_{k=0}^{2} u^{(k)} ( a ) v^{(k)} ( a ) + \int _{a}^{c} u^{\prime \prime \prime } v ^{\prime \prime \prime } \,dt, \quad u,v\in W_{2}^{3} [ a, c ], \\& \Vert u \Vert = \sqrt{ \langle u, u \rangle _{w_{2}^{3}}}. \end{aligned}$$

The reproducing kernel space \(W_{2}^{1} [a, c]\) is defined as follows:

  • \(W_{2}^{1} [ a, c ] = \{u(x)\mid u \text{ is an absolutely continuous real-valued function}, {u}'\in L^{2} [a, c] \}\) [20] (\(W_{a}^{1}\) for short).

The inner product and norm are defined as follows:

$$\begin{aligned}& \begin{gathered} \bigl\langle u(t), v(t) \bigr\rangle =u(a)v(a)+ \int _{a}^{c} u ' v ' \,dt,\quad u,v\in W_{2}^{1} [ a, c ], \\ \Vert u \Vert = \sqrt{ \langle u, u \rangle _{w_{2}^{1}}}. \end{gathered} \end{aligned}$$

The spaces \(W_{a}^{3} \) and \(W_{a}^{1} \) are reproducing kernel spaces with reproducing kernels \(R_{t}^{0} (x) \) and \(r_{t}^{0} (x)\), respectively.

In the same way, \(W_{2}^{3} [c, b]\) (\(W_{b}^{3} \) for short) and \(W_{2}^{1} [c, b]\) (\(W_{b}^{1} \) for short) are reproducing kernel spaces with reproducing kernels \(R_{t}^{1} (x) \) and \(r_{t}^{1} (x)\), respectively.
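As a concrete illustration (a sketch, not part of the method's derivation), the kernel of the simpler space \(W_{2}^{1} [a, c]\) under the inner product above has the well-known closed form \(r_{t}^{0} (x) = 1+ \min (x,t) -a\); the \(W_{2}^{3}\) kernels are piecewise quintic polynomials and are omitted here. The reproducing property \(\langle u, r_{t}^{0} \rangle = u(t)\) can be checked numerically:

```python
import numpy as np

# Reproducing kernel of W_2^1[a, c] under <u, v> = u(a)v(a) + int_a^c u'v' dx;
# the closed form r_t(x) = 1 + min(x, t) - a is a standard result.
def r0(x, t, a=0.0):
    return 1.0 + min(x, t) - a

# numerical check of <u, r_t> = u(t) for u(x) = x^2 on [a, c] = [0, 0.5]
a, c, t = 0.0, 0.5, 0.3
xs = np.linspace(a, c, 200001)
du = 2.0 * xs                            # u'(x)
dr = np.where(xs < t, 1.0, 0.0)          # (d/dx) r_t(x): 1 left of t, 0 right
y = du * dr
integral = float(np.sum(0.5 * (y[:-1] + y[1:])) * (xs[1] - xs[0]))
value = (a**2) * r0(a, t) + integral     # u(a) r_t(a) + int_a^c u' (r_t)'
print(value)                             # ≈ 0.09 = u(0.3)
```

The quadrature recovers \(u(t)= t^{2} =0.09\) up to the grid resolution, as the reproducing property requires.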

2.2 The piecewise smooth reproducing kernel space

In this paper the exact solution of Eq. (1) is not a smooth function, so we connect two reproducing kernel spaces, one on each side of the impulsive point; we call the result the broken reproducing kernel space.

Definition 2.1

The linear space \(W_{2, c}^{3} \) is defined as

$$ W_{2,c}^{3} [ a, b ] = \bigl\{ u ( x ) \vert \text{ if } x< c \text{ then } u ( x ) \in W_{a}^{3}, \text{if } x\geq c \text{ then } u(x)\in W_{b}^{3} \bigr\} . $$

Every \(u(x)\in W_{2, c}^{3} [a,b] \) has the following form:

$$ u (x ) = \textstyle\begin{cases} u_{0} ( x ), &x< c, \\ u_{1} ( x ), &x\geq c, \end{cases} $$

where \(u_{0} (x)\in W_{a}^{3}\), \(u_{1} (x)\in W_{b}^{3}\).

Theorem 2.1

Suppose that the inner product and norm in \(W_{2, c}^{3} [a,b] \) are given by

$$\begin{aligned}& \langle u,v \rangle _{W_{2, c}^{3}} = \langle u _{0}, v_{0} \rangle _{W_{a}^{3}} + \langle u_{1}, v_{1} \rangle _{W_{b}^{3}}, \quad u,v\in W_{2, c}^{3} [a,b], \\& \Vert u \Vert _{{W}_{2, c}^{3}} = \sqrt{ \langle u,u \rangle _{W_{2, c}^{3}}}, \quad u\in W_{2, c}^{3} [a,b] \end{aligned}$$
(2)

Then the space \(W_{2, c}^{3} [a,b] \) is an inner product space.

Proof

For any \(u,v,w\in W_{2, c}^{3} [a,b]\),

$$\begin{aligned} \langle u+v,w \rangle _{W_{2, c}^{3}}& = \langle u _{0} + v_{0}, w_{0} \rangle _{W_{a}^{3}} + \langle u_{1} + v_{1}, w_{1} \rangle _{W_{b}^{3}} \\ &= \langle u_{0}, w_{0} \rangle _{W_{a}^{3}} + \langle v _{0}, w_{0} \rangle _{W_{a}^{3}} + \langle u_{1}, w_{1} \rangle _{W_{b}^{3}} + \langle v_{1}, w_{1} \rangle _{W_{b}^{3}} \\ & = \langle u,w \rangle _{W_{2, c}^{3}} + \langle v,w \rangle _{W_{2, c}^{3}}. \end{aligned}$$

We can prove that Eq. (2) satisfies the other requirements of an inner product space. □

Theorem 2.2

The space \(W_{2, c}^{3} [a,b] \) is a Hilbert space.

Proof

Suppose that \(\{ u_{n} (x)\}_{n=1}^{\infty }\) is a Cauchy sequence in \(W_{2, c}^{3} [a,b]\), where

$$ u_{n} ( x ) = \textstyle\begin{cases} u_{0,n} ( x ), &x< c, \\ u_{1,n} ( x ), &x\geq c, \end{cases}\displaystyle \quad n=1, 2,\ldots , $$

so, \(\{ u_{0, n} (x)\}_{n=1}^{\infty }\) and \(\{ u_{1, n} (x)\}_{n=1} ^{\infty }\) are Cauchy sequences in \(W_{a}^{3}\) and \(W_{b}^{3}\), respectively.

So, there are two functions \(g_{0} (x)\in W_{a}^{3}\), \(g_{1} (x) \in W_{b}^{3}\), and

$$ \bigl\Vert u_{0,n} ( x ) - g_{0} (x) \bigr\Vert _{W _{a}^{3}}^{2} \rightarrow 0, \qquad \bigl\Vert u_{1,n} ( x ) - g_{1} (x) \bigr\Vert _{W _{b}^{3}}^{2} \rightarrow 0. $$

Let

$$ g ( x ) = \textstyle\begin{cases} g_{0} ( x ), &x< c, \\ g_{1} ( x ), &x\geq c. \end{cases} $$

By Definition 2.1, \(g(x)\in W_{2, c}^{3} [a,b]\), and

$$ \bigl\Vert u_{n} ( x ) -g(x) \bigr\Vert _{W_{2,c}^{3}} ^{2} = \bigl\Vert u_{0,n} ( x ) - g_{0} (x) \bigr\Vert _{W_{a}^{3}}^{2} + \bigl\Vert u_{1,n} ( x ) - g_{1} (x) \bigr\Vert _{W_{b}^{3}}^{2} \rightarrow 0. $$

So, the space \(W_{2, c}^{3} [a,b] \) is a Hilbert space. □

Theorem 2.3

The space \(W_{2, c}^{3} [a,b] \) is a reproducing kernel space with the reproducing kernel function

$$ R_{t} ( x ) = \textstyle\begin{cases} R_{t}^{0} ( x ), & ( x,t ) \in [ a,c ) \times [a,c), \\ R_{t}^{1} ( x ), & ( x,t ) \in [ c,b ] \times [c,b], \\ 0, &\textit{others}. \end{cases} $$
(3)

Proof

Consider arbitrary \(u(x)\in W_{2, c}^{3} [a,b]\).

If \(t\in [ a,c )\), \(\langle u(x), R_{t} ( x ) \rangle _{W_{2, c}^{3}} = \langle u_{0} (x), R _{t}^{0} ( x ) \rangle _{W_{a}^{3}} + \langle u _{1} (x), 0 \rangle _{W_{b}^{3}} = u_{0} (t)\).

If \(t\in [c,b]\), \(\langle u(x), R_{t} ( x ) \rangle _{W_{2, c}^{3}} = \langle u_{0} (x), 0 \rangle _{W_{a} ^{3}} + \langle u_{1} (x), R_{t}^{1} ( x ) \rangle _{W_{b}^{3}} = u_{1} (t)\).

In conclusion, for every \(u(x)\in W_{2, c}^{3} [a,b]\), it follows that

$$ \bigl\langle u(x), R_{t} ( x ) \bigr\rangle =u(t). $$

 □

Similarly, the reproducing kernel space \(W_{2, c}^{1} [a,b] \) is defined as

$$ W_{2,c}^{1} [ a, b ] = \bigl\{ u ( x ) \vert \text{ if } x< c \text{ then } u ( x ) \in W_{a}^{1}, \text{if } x\geq c \text{ then } u(x)\in W_{b}^{1} \bigr\} $$
(4)

and it has the reproducing kernel function

$$ r_{t} ( x ) = \textstyle\begin{cases} r_{t}^{0} ( x ), & ( x,t ) \in [ a,c ) \times [a,c), \\ r_{t}^{1} ( x ), & ( x,t ) \in [ c,b ] \times [c,b], \\ 0, &\text{others}. \end{cases} $$
(5)
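A minimal sketch of Eq. (5), assuming the standard closed form \(r(x,t)=1+ \min (x,t) - \ell \) for the \(W_{2}^{1}\) piece-kernel on an interval starting at ℓ (the right piece is shifted to start at the pulse point c). The essential point is that the broken kernel vanishes whenever x and t lie on opposite sides of c, which is what permits a jump at the pulse point:

```python
# W_2^1 piece-kernel on an interval starting at `left` (standard closed form)
def r_piece(x, t, left):
    return 1.0 + min(x, t) - left

def broken_kernel(x, t, a, c, b):
    if x < c and t < c:          # both points left of the pulse
        return r_piece(x, t, a)
    if x >= c and t >= c:        # both points right of the pulse
        return r_piece(x, t, c)
    return 0.0                   # opposite sides: Eq. (5) sets the kernel to 0

# representer functions never couple the two sides of c
print(broken_kernel(0.2, 0.7, 0.0, 0.5, 1.0))   # 0.0
print(broken_kernel(0.6, 0.7, 0.0, 0.5, 1.0))   # 1.1
```

Because every representer \(r_{t} (x)\) is supported on one side of c only, expansions in these functions can reproduce piecewise functions with a jump at c, exactly as Theorem 2.3 requires.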

In order to solve Eq. (1), we introduce a linear operator \(\mathbb{L}: W_{2, c}^{3} [a,b]\rightarrow W_{2, c}^{1} [a,b]\),

$$ \mathbb{L} u= u^{\prime \prime } ( x ) + a_{1} ( x ) u ' ( x ) + a_{0} ( x ) u ( x ), \quad u\in W_{2, c}^{3} [a,b]. $$

By Ref. [20], it is easy to prove that \(\mathbb{L}\) is a bounded operator.

Then Eq. (1) can be transformed into the following form:

$$ \textstyle\begin{cases} \mathbb{L} u=f ( x ), \quad x\in [a,b]\backslash \{ c\}, \\ u ( a ) = \alpha _{1}, \quad\quad u ( b ) = \alpha _{2}, \\ \Delta u ' ( c ) = \alpha _{3}, \quad\quad \Delta u ( c ) = \alpha _{4}. \end{cases} $$
(6)

3 Primary result

In this section, the approximate solution of Eq. (6) is constructed in the reproducing kernel space \(W_{2, c}^{3} [a,b]\). The convergence of the approximate solution is proved, and the convergence order and the error bound are discussed.

Let \(\{ x_{i} \}_{i=1}^{\infty }\) be a dense subset of the interval \([a, b]\) with the point c removed, and put

$$\begin{aligned}& \emptyset _{1} (x)=R_{a} ( x ), \qquad \emptyset _{2} (x)=R_{b} ( x ), \quad\quad \emptyset _{3} (x)= \frac{ \partial R_{x} ( t )}{\partial t} \bigg\vert _{t= c^{+}} - \frac{ \partial R_{x} ( t )}{\partial t} \bigg\vert _{t= c^{-}}, \\& \emptyset _{4} (x)=R_{x} \bigl( c^{+} \bigr) - R_{x} \bigl( c^{-} \bigr) \end{aligned}$$

and

$$ \psi _{i} (x)= \mathbb{L}^{*} r_{x_{i}} ( x ), \quad i=1, 2,\ldots , $$

where \(\mathbb{L}^{*} \) is the adjoint operator of \(\mathbb{L}\).

Let \(S_{n} =\operatorname{span}\{ \{ \psi _{i} (x) \}_{i=1}^{n} \cup \{ \emptyset _{j} (x) \}_{j=1}^{4} \}\). Then \(S_{n} \subset W_{2, c}^{3} [a,b]\).

The orthogonal projection operator is denoted by \(\mathbb{P}_{n}: W _{2, c}^{3} [a,b]\rightarrow S_{n}\).

Theorem 3.1

\(\psi _{i} ( x ) =\mathbb{L} R_{x} ( x_{i} )\), \(i=1, 2,\ldots \) .

Proof

$$\begin{aligned}& \psi _{i} ( x ) = \bigl\langle \mathbb{L} ^{*} r_{x_{i}}, R_{x} \bigr\rangle _{W_{2, c}^{3}} = \langle r _{x_{i}}, \mathbb{L} R_{x} \rangle _{W_{2, c}^{1}} =\mathbb{L} R_{x} ( x_{i} ),\quad i=1, 2,\ldots . \end{aligned}$$

 □

Theorem 3.2

For each fixed n, \(\{ \psi _{i} (x) \} _{i=1}^{n} \cup \{ \emptyset _{j} (x) \}_{j=1}^{4}\) is linearly independent in \(W_{2, c}^{3} [a,b]\).

Proof

Let

$$ 0= \sum_{i=1}^{n} \lambda _{i} \psi _{i} (t) + \sum_{j=1}^{4} k_{j} \emptyset _{j} (t) $$
•:

Consider

$$\begin{aligned}& h ( t ) \in W_{2, c}^{3} [ a,b ], \quad \textstyle\begin{cases} \mathbb{L} h=0, \quad t\in [a,b]\backslash \{ c\}, \\ h ( a ) = 0, \quad\quad h ( b ) = 0, \\ \Delta h ' ( c ) = 1, \quad\quad \Delta h ( c ) = 0, \end{cases}\displaystyle \end{aligned}$$

then

$$\begin{aligned} 0&= \Biggl\langle h ( t ), \sum_{i=1}^{n} \lambda _{i} \psi _{i} ( t ) + \sum _{j=1}^{4} k_{j} \emptyset _{j} ( t ) \Biggr\rangle \\ &= \sum_{i=1}^{n} \lambda _{i} \bigl\langle h ( t ), \mathbb{L}^{*} r_{x_{i}} ( t ) \bigr\rangle + k_{1} \bigl\langle h ( t ), R_{a} ( t ) \bigr\rangle + k_{2} \bigl\langle h ( t ), R_{b} ( t ) \bigr\rangle \\ &\quad\quad{} + k_{3} \biggl\langle h ( t ), \frac{\partial R_{x} ( t )}{\partial t} \bigg\vert _{t= c^{+}} - \frac{\partial R_{x} ( t )}{\partial t} \bigg\vert _{t= c^{-}} \biggr\rangle + k_{4} \bigl\langle h ( t ), R_{c^{+}} ( t ) - R_{c^{-}} ( t ) \bigr\rangle \\ &= \sum_{i=1}^{n} \lambda _{i} \mathbb{L} h ( x_{i} ) + k _{1} h ( a ) + k_{2} h ( b ) + k_{3} \bigl( h ' \bigl( c^{+} \bigr) - h ' \bigl( c^{-} \bigr) \bigr) + k _{4} \bigl( h \bigl( c^{+} \bigr) -h \bigl( c^{-} \bigr) \bigr) \\ &= k_{3}. \end{aligned}$$

Similarly, we have \(k_{1} =0\), \(k_{2} =0\), \(k_{4} =0\).

•:

Consider

$$\begin{aligned}& f_{j} ( t ) \textstyle\begin{cases} =0, & t= x_{1}, x_{2},\ldots, x_{j-1}, x_{j+1},\ldots, x_{n}, \\ \neq 0, & t= x_{j}, \end{cases}\displaystyle \quad f_{j} ( t ) \in W_{2, c}^{1} [ a,b ], \end{aligned}$$

and choose \(v_{j} ( t ) \in W_{2, c}^{3} [ a,b ] \) such that

$$ \textstyle\begin{cases} \mathbb{L} v_{j} ( t ) = f_{j} ( t ),\quad t \in [a,b]\backslash \{ c\}, \\ v_{j} ( a ) =0, \quad\quad v_{j} ( b ) =0. \end{cases} $$

The above equations have a unique solution (see [20]); then

$$\begin{aligned} 0&= \Biggl\langle v_{j} ( t ), \sum_{i=1}^{n} \lambda _{i} \psi _{i} ( t ) \Biggr\rangle = \sum _{i=1}^{n} \lambda _{i} \bigl\langle v_{j} ( t ), \mathbb{L}^{*} r_{x_{i}} ( t ) \bigr\rangle \\ &= \sum_{i=1}^{n} \lambda _{i} \mathbb{L} v_{j} ( x_{i} ) = \sum _{i=1}^{n} \lambda _{i} f_{j} ( x_{i} ) = \lambda _{j} f_{j} ( x_{j} ). \end{aligned}$$

So, \(\lambda _{j} =0\), \(j=1, 2,\ldots,n\). □

Theorem 3.3

If \(u\in W_{2, c}^{3} [ a,b ]\) is the solution of Eq. (6), then \(v= u_{n} =\mathbb{P}_{n} u\) satisfies the following system:

$$ \textstyle\begin{cases} \langle v, \psi _{i} \rangle =f ( x_{i} ),\quad i=1,2,\ldots, n, \\ \langle v, \emptyset _{1} \rangle = \alpha _{1},\quad\quad \langle v, \emptyset _{2} \rangle = \alpha _{2},\quad\quad \langle v, \emptyset _{3} \rangle = \alpha _{3},\quad\quad \langle v, \emptyset _{4} \rangle = \alpha _{4}. \end{cases} $$
(7)

Proof

Suppose \(u(x)\) is a solution of Eq. (6). Then

$$\begin{aligned} \langle \mathbb{P}_{n} u, \psi _{i} \rangle _{W_{2, c} ^{3}} &= \langle u, \mathbb{P}_{n} \psi _{i} \rangle _{W _{2, c}^{3}} = \langle u, \psi _{i} \rangle _{W_{2, c} ^{3}} \\ &= \bigl\langle u, \mathbb{L}^{*} r_{x_{i}} \bigr\rangle _{W_{2, c} ^{3}} = \langle \mathbb{L} u, r_{x_{i}} \rangle _{W_{2, c} ^{1}} =\mathbb{L} u ( x_{i} ) =f( x_{i} ) \end{aligned}$$

and

$$ \langle \mathbb{P}_{n} u, \emptyset _{1} \rangle _{W_{2, c} ^{3}} = \langle u, \mathbb{P}_{n} \emptyset _{1} \rangle _{W_{2, c}^{3}} = \langle u, \emptyset _{1} \rangle _{W _{2, c}^{3}} = \langle u, R_{a} \rangle _{W_{2, c}^{3}} =u ( a ) = \alpha _{1}. $$

Similarly, we have

$$ \langle \mathbb{P}_{n} u, \emptyset _{2} \rangle = \alpha _{2}, \quad\quad \langle \mathbb{P}_{n} u, \emptyset _{3} \rangle = \alpha _{3}, \quad\quad \langle \mathbb{P}_{n} u, \emptyset _{4} \rangle = \alpha _{4}. $$

So, \(\mathbb{P}_{n} u\) is the solution of Eq. (7). □

In fact, \(u_{n} (x)\) is an approximate solution of the exact solution.

Theorem 3.4

If \(u\in W_{2, c}^{3} [ a,b ]\) is the solution of Eq. (6) and \(u_{n} =\mathbb{P}_{n} u\in S_{n}\), then \(u_{n} \) converges uniformly to u.

Proof

$$\begin{aligned} \bigl\vert u ( t ) - u_{n} (t) \bigr\vert &= \bigl\vert \langle u- u_{n}, R_{t} \rangle \bigr\vert \leq \Vert R_{t} \Vert _{W_{2, c}^{3}} \Vert u- u_{n} \Vert _{W_{2, c}^{3}} \\ &\leq M \Vert u- u_{n} \Vert _{W_{2, c}^{3}} \rightarrow 0. \end{aligned}$$

 □

Similarly, considering \(t\in [ a,c )\) and \(t\in [ c,b ]\) separately, we can prove that \(u_{n}^{(i)}\) converges uniformly to \(u^{(i)}\), \(i=1,2\).

In order to analyze the convergence order of the algorithm proposed in this section, we derive the following lemma.

Lemma 3.1

([20])

If \(u_{n} =\mathbb{P}_{n} u\) is the approximate solution of \(\mathbb{L} u=f(x)\), \(\mathbb{ L}: W_{2}^{3} [a,b]\rightarrow W_{2}^{1} [a,b]\) is a linear operator, then

$$ \bigl\vert u^{(i)} - u_{n}^{(i)} \bigr\vert \leq M_{i} h^{2}, \quad i=0,1. $$

Theorem 3.5

The approximate solution \(u_{n} =\mathbb{P}_{n} u\) of Eq. (6) converges to the exact solution u with at least second-order convergence.

Proof

By Definition 2.1, we get

$$ u ( x ) = \textstyle\begin{cases} u_{0} ( x ), &x< c, \\ u_{1} ( x ), &x\geq c. \end{cases} $$

In addition, \(u_{n} =\mathbb{P}_{n} u\) converges uniformly to u by Theorem 3.4. So there are \(u_{0,n}\) and \(u_{1,n}\) satisfying the following expressions:

$$ \bigl\Vert u( x ) - u_{n} (x) \bigr\Vert _{W_{2, c}^{3}}^{2} = \bigl\Vert u_{0} ( x ) - u_{0,n} (x) \bigr\Vert _{W_{a}^{3}}^{2} + \bigl\Vert u_{1} ( x ) - u_{1,n} (x) \bigr\Vert _{W_{b}^{3}}^{2} \rightarrow 0. $$

So, \(\Vert u_{0} ( x ) - u_{0,n} (x) \Vert _{W_{a}^{3}} ^{2} \rightarrow 0\), \(\Vert u_{1} ( x ) - u_{1,n} (x) \Vert _{W_{b}^{3}}^{2} \rightarrow 0\).

Note that \(u_{0,n} (x)\) is the approximate solution of \(\mathbb{L} u=f(x)\) in the reproducing kernel space \(W_{a}^{3}\); by Lemma 3.1 we have

$$ \bigl\vert u_{0} (x)- u_{0,n} (x) \bigr\vert \leq M_{0} h^{2}. $$

Similarly, we have \(\vert u_{1} (x)- u_{1,n} (x) \vert \leq M_{1} h^{2}\).

For any \(x\in [a,b]\)

$$ \bigl\vert u(x)- u_{n} (x) \bigr\vert \leq \max \bigl\{ \bigl\vert u _{0} ( x ) - u_{0,n} ( x ) \bigr\vert , \bigl\vert u_{1} ( x ) - u_{1,n} ( x ) \bigr\vert \bigr\} \leq \max \bigl\{ M_{0} h^{2}, M_{1} h^{2} \bigr\} = M_{2} h^{2}. $$

Here h is the step size on the interval \([ a,b ]\), and \(M_{0}\), \(M_{1}\), \(M_{2}\) are constants. Therefore \(u_{n}\) converges to u with at least second-order convergence. □

Furthermore, the following rate of convergence formula can be obtained:

$$ \mathit{C.R}= \log _{2} \frac{ \vert u(x)- u_{n} (x) \vert }{ \vert u(x)- u_{2n} (x) \vert }. $$
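In the tables below this rate is computed from the errors of two runs with n and 2n nodes; a minimal sketch (the sample errors are illustrative, not taken from the tables):

```python
import math

def convergence_rate(err_n, err_2n):
    # C.R = log2(|u - u_n| / |u - u_2n|); for an O(h^2) method,
    # doubling n halves h and divides the error by about 4, so C.R ≈ 2
    return math.log2(err_n / err_2n)

print(convergence_rate(4.0e-3, 1.0e-3))   # 2.0
```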

By the results of this section, the exact solution of Eq. (6) can be expressed as

$$ u ( x ) = \sum_{i=1}^{\infty } \lambda _{i} \psi _{i} ( x ) + k_{1} \emptyset _{1} ( x ) + k_{2} \emptyset _{2} ( x ) + k_{3} \emptyset _{3} ( x ) + k_{4} \emptyset _{4} ( x ). $$
(8)

4 Numerical algorithm

In this section, the numerical algorithm for the approximate solution \(u_{n}\) is given. The solution \(u_{n}\) of Eq. (7) is the approximate solution of Eq. (1). Since \(u_{n} \in S_{n}\),

$$ u_{n} ( x ) = \sum_{i=1}^{n} \lambda _{i} \psi _{i} ( x ) + k_{1} \emptyset _{1} ( x ) + k_{2} \emptyset _{2} ( x ) + k_{3} \emptyset _{3} ( x ) + k _{4} \emptyset _{4} ( x ). $$
(9)

To obtain the approximate solution \(u_{n}\), we only need the coefficients of \(\psi _{i} ( x )\), \(i=1,2,\ldots,n\), and \(\emptyset _{j} ( x )\), \(j=1,2,3,4\). Taking the inner product of both sides of Eq. (9) with \(\psi _{i} ( x )\) and \(\emptyset _{j} ( x )\), we have

$$ \textstyle\begin{cases} \sum_{j=1}^{n} \lambda _{j} \langle \psi _{j}, \psi _{i} \rangle + \sum_{j=1}^{4} k_{j} \langle \psi _{i}, \emptyset _{j} \rangle =f ( x_{i} ), \quad i=1,2,\ldots,n, \\ \sum_{j=1}^{n} \lambda _{j} \langle \psi _{j}, \emptyset _{i} \rangle + \sum_{j=1}^{4} k_{j} \langle \emptyset _{i}, \emptyset _{j} \rangle = \alpha _{i}, \quad i=1,2,3,4. \end{cases} $$
(10)

This is a system of linear equations for \(\lambda _{i}\), \(k_{j}\), \(i=1,2,\ldots,n\), \(j=1,2,3,4\).

Let

$$\begin{aligned}& G_{n+4} = \begin{bmatrix} \langle \psi _{i}, \psi _{k} \rangle & \cdots & \langle \psi _{i}, \emptyset _{j} \rangle \\ \cdots & \cdots & \cdots \\ \langle \psi _{k}, \emptyset _{j} \rangle & \cdots & \langle \emptyset _{j}, \emptyset _{m} \rangle \end{bmatrix}_{i,k=1,2,\ldots,n; j,m=1,2,3,4}, \\& F= \bigl(f ( x_{1} ), f ( x_{2} ),\ldots,f ( x_{n} ), \alpha _{1}, \alpha _{2}, \alpha _{3},\alpha _{4} \bigr)^{T}. \end{aligned}$$

Since \(\{ \psi _{i} (x) \}_{i=1}^{n} \cup \{ \emptyset _{j} (x) \}_{j=1}^{4}\) is linearly independent in \(W_{2, c}^{3} [ a,b ]\), \(G^{-1}\) exists, where \(G= G_{n+4}\). Then we have

$$ ( \lambda _{1},\ldots, \lambda _{n}, k_{1}, k_{2}, k_{3}, k_{4} )^{T} =G ^{-1} {\cdot }F. $$
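The assembly-and-solve step above can be sketched as follows. The monomial basis and the \(L^{2}\) inner product here are stand-ins for the kernel basis \(\psi _{i}, \emptyset _{j}\) and the \(W_{2, c}^{3}\) inner products (an illustrative assumption, not the method's actual Gram entries); the point is the structure: fill the Gram matrix G, fill the right-hand side F, and solve:

```python
import numpy as np

def trap(y, x):
    # trapezoidal quadrature (kept explicit for portability)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Eq. (10) in matrix form: G @ coeffs = F. Theorem 3.2 (linear
# independence of the basis) guarantees that G is nonsingular.
xs = np.linspace(0.0, 1.0, 4001)
basis = [np.ones_like(xs), xs, xs**2]          # stand-in basis functions
G = np.array([[trap(bi * bj, xs) for bj in basis] for bi in basis])
F = np.array([trap((1.0 + 2.0 * xs) * bi, xs) for bi in basis])
coeffs = np.linalg.solve(G, F)                 # target u(x) = 1 + 2x
print(np.round(coeffs, 4))                     # ≈ [1. 2. 0.]
```

With exact inner products the recovered coefficients are exactly \((1, 2, 0)\); the small residual here is quadrature error only.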

5 Numerical examples

In this section, the method proposed in this paper is applied to some impulsive differential equations. In Examples 1–3 the reproducing kernel space is \(W_{2, c}^{3} [ 0,1 ]\). We compare the numerical results with the other methods discussed in [13, 14]. The results show that our algorithm is practical and remarkably effective.

Example 1

(Ref. [13])

Consider the linear impulsive differential equation

$$ \textstyle\begin{cases} -u^{\prime \prime } ( x ) +u ( x ) =0, \quad \textit{a.e. }x\in (0,1), \\ u ( 0 ) =0, \quad\quad u ( 1 ) =-1, \\ \Delta u ' ( 1/4 ) =-2, \quad\quad \Delta u ( 1/4 ) =0. \end{cases} $$

The exact solution

$$ u ( x ) = \textstyle\begin{cases} \frac{e^{\frac{1}{4} -x} (-1- e^{\frac{3}{4}} + e^{\frac{3}{2}} )( e ^{2x} -1)}{e^{2} -1},& x\in [0, \frac{1}{4} ], \\ \frac{e^{- \frac{1}{4} -x} ( e^{2x} - e^{2x+ \frac{1}{2}} - e^{2x+ \frac{5}{4}} + e^{\frac{5}{4}} - e^{2} + e^{\frac{5}{2}} )}{e^{2} -1},& x\in ( \frac{1}{4},1]. \end{cases} $$
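As a sanity check (a verification sketch, not part of the method), each branch of this exact solution can be rewritten as a combination of \(e^{x}\) and \(e^{-x}\) — so \(u''=u\) holds automatically — and the four side conditions can be evaluated numerically; the rearranged coefficients below follow algebraically from the formula above:

```python
from math import exp

D = exp(2) - 1
A = (-1 - exp(0.75) + exp(1.5)) / D    # left-branch coefficient
B1 = 1 - exp(0.5) - exp(1.25)          # right-branch coefficients
B2 = exp(1.25) - exp(2) + exp(2.5)

def u0(x):   # branch on [0, 1/4]: A (e^{x+1/4} - e^{1/4-x})
    return A * (exp(x + 0.25) - exp(0.25 - x))

def u1(x):   # branch on (1/4, 1]: (B1 e^{x-1/4} + B2 e^{-x-1/4}) / D
    return (B1 * exp(x - 0.25) + B2 * exp(-x - 0.25)) / D

def du0(x):
    return A * (exp(x + 0.25) + exp(0.25 - x))

def du1(x):
    return (B1 * exp(x - 0.25) - B2 * exp(-x - 0.25)) / D

c = 0.25
print(u0(0.0), u1(1.0))        # boundary values: ≈ 0 and ≈ -1
print(u1(c) - u0(c))           # Δu(1/4) ≈ 0
print(du1(c) - du0(c))         # Δu'(1/4) ≈ -2
```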

The numerical results are given in Table 1, where the rate of convergence is \(\mathit{C.R}= \log _{2} \frac{ \vert u(x)- u_{n} (x) \vert }{ \vert u(x)- u_{2n} (x) \vert }\). The comparison with the method in [13] confirms that our algorithm satisfies Theorem 3.5 and shows that the present method can produce a more accurate approximate solution.

Table 1 Comparison of absolute errors in Example 1

Example 2

(Ref. [13])

Consider the following equation with two pulse points:

$$ \textstyle\begin{cases} -u^{\prime \prime } ( x ) =0, \quad \textit{a.e. } x\in ( 0,1 ), \\ u ( 0 ) =0, \quad\quad u ( 1 ) =0, \\ \Delta u ' ( \frac{1}{3} ) =-1, \quad\quad \Delta u (\frac{1}{3} ) =0, \\ \Delta u ' ( \frac{4}{5} ) =1, \quad\quad \Delta u ( \frac{4}{5} ) =0. \end{cases} $$

The exact solution

$$ u ( x ) = \textstyle\begin{cases} \frac{7x}{15}, &x\in [0, \frac{1}{3} ], \\ \frac{1}{3} - \frac{8x}{15}, &x\in ( \frac{1}{3}, \frac{4}{5} ], \\ \frac{7 ( x-1 )}{15}, &x\in ( \frac{4}{5},1 ]. \end{cases} $$
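All side conditions of this piecewise linear solution can be verified with exact rational arithmetic; a short check, writing the three branches with slopes \(7/15\), \(-8/15\), \(7/15\) (the last branch equals \(\frac{7(x-1)}{15}\), which is what the boundary condition \(u(1)=0\) forces):

```python
from fractions import Fraction as Fr

s1, s2, s3 = Fr(7, 15), Fr(-8, 15), Fr(7, 15)   # slopes of the three pieces
u1 = lambda x: s1 * x                    # on [0, 1/3]
u2 = lambda x: Fr(1, 3) + s2 * x         # on (1/3, 4/5]
u3 = lambda x: s3 * (x - 1)              # on (4/5, 1]

checks = [
    u1(Fr(0)) == 0, u3(Fr(1)) == 0,             # u(0) = u(1) = 0
    u2(Fr(1, 3)) == u1(Fr(1, 3)),               # Δu(1/3) = 0
    u3(Fr(4, 5)) == u2(Fr(4, 5)),               # Δu(4/5) = 0
    s2 - s1 == -1,                              # Δu'(1/3) = -1
    s3 - s2 == 1,                               # Δu'(4/5) = 1
]
print(all(checks))    # True
```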

In Fig. 1 the red dotted line is the numerical solution and the black line is the exact solution. Figure 2 shows the absolute error \(\vert u ( x ) - u_{n} (x) \vert \) for \(n=33\). Table 2 compares the absolute errors of our method with those of other methods. All graphs and tables show that our method is as effective as expected. It is worth noting that the approximate solutions of Example 1 and Example 2 were only proved to converge in norm to the exact solutions in [13], whereas the approximate solutions of this paper are proved to converge uniformly to \(u ( x )\).

Figure 1: The exact solution and the approximate solution in Example 2 (\(n = 33\))

Figure 2: The absolute errors \(\vert u(x)- u _{n}(x) \vert \) in Example 2 (\(n = 33\))

Table 2 Comparison of absolute errors in Example 2

Example 3

(Ref. [14])

Consider the following impulsive equation with variable coefficients:

$$ (\beta u_{x} )_{x} =56 x^{6}, \quad x\in [ 0,1 ] \backslash \{ 0.5\}, \text{where } \beta = \textstyle\begin{cases} 1, &x\in [0, 0.5], \\ 2, &x\in (0.5, 1], \end{cases} $$

subject to the boundary and interface conditions:

$$ \textstyle\begin{cases} u ( 0 ) =0, \quad\quad u ( 1 ) = \frac{257}{512}, \\ \Delta u ' ( 0.5 ) =-0.5 u ' ( 0.5^{-} ),\quad\quad \Delta u ( 0.5 ) =0. \end{cases} $$

The exact solution

$$ u ( x ) = \textstyle\begin{cases} x^{8}, &x\in [0,0.5], \\ \frac{1}{2} ( x^{8} + \frac{1}{256} ), &x\in (0.5,1]. \end{cases} $$
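The exact solution and interface conditions of Example 3 can be verified directly (a verification sketch; the derivative formulas are obtained by differentiating each branch):

```python
uL  = lambda x: x**8                     # left branch, beta = 1
duL = lambda x: 8 * x**7
uR  = lambda x: 0.5 * (x**8 + 1 / 256)   # right branch, beta = 2
duR = lambda x: 4 * x**7

x = 0.7
print(abs(2 * 28 * x**6 - 56 * x**6))       # (beta u')' = 56 x^6 on the right
print(uR(1.0))                               # 257/512 = 0.501953125
print(uR(0.5) - uL(0.5))                     # 0.0  (Δu(0.5) = 0)
print(duR(0.5) - duL(0.5) + 0.5 * duL(0.5))  # 0.0  (Δu'(0.5) = -0.5 u'(0.5^-))
```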

Table 3 lists the absolute errors and the rate of convergence C.R for Example 3. From the table we conclude that, as the truncation limit n increases, good accuracy is obtained. This shows that the proposed approach is very stable and effective.

Table 3 Comparison of absolute errors in Example 3

The proposed method can solve not only impulsive differential equations of the form of Eq. (1), but also higher-order impulsive differential equations and pulse problems with more complex boundary conditions. The theory and algorithm are similar; we use the following example to demonstrate the effectiveness of the algorithm.

Example 4

Consider the third-order linear impulsive differential equation

$$ \textstyle\begin{cases} u^{\prime \prime \prime } ( x ) + a_{2} (x)u^{\prime \prime } ( x ) + a _{0} (x)u ( x ) =f(x), \quad x\in [-2,2]\backslash \{ 0\}, \\ u ( -2 ) =16, \quad\quad u ( 2 ) = \frac{14}{3}, \quad\quad \int _{-1}^{1} u ( x ) \,dx= \frac{119}{120}, \\ \Delta u ( 0 ) =0, \quad\quad \Delta u ' ( 0 ) =1,\quad\quad \Delta u^{\prime \prime } ( 0 ) =2. \end{cases} $$

Here

$$\begin{aligned}& a_{0} (x)= \textstyle\begin{cases} 1-x, &x< 0, \\ e^{-x}, &x\geq 0, \end{cases}\displaystyle \qquad a_{2} (x)= \textstyle\begin{cases} -2 \cos x, &x< 0, \\ -2, &x\geq 0, \end{cases}\displaystyle \\& f(x)= \textstyle\begin{cases} x(- x^{4} + x^{3} -24x \cos x +24), &x< 0, \\ e^{-x} ( - \frac{x^{3}}{6} + x^{2} +x ) +2 ( x-2 ) -1, &x\geq 0. \end{cases}\displaystyle \end{aligned}$$

The exact solution

$$ u(x)= \textstyle\begin{cases} x^{4}, &x< 0, \\ - \frac{x^{3}}{6} + x^{2} +x, &x\geq 0. \end{cases} $$
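This exact solution can be checked against every side condition of Example 4 using exact rational arithmetic; a verification sketch (derivative formulas obtained by differentiating each branch):

```python
from fractions import Fraction as Fr

uL = lambda x: x**4                      # branch for x < 0
uR = lambda x: -x**3 / Fr(6) + x**2 + x  # branch for x >= 0

# boundary values: u(-2) = 16, u(2) = 14/3
assert uL(Fr(-2)) == 16 and uR(Fr(2)) == Fr(14, 3)

# integral condition: int_{-1}^{0} x^4 dx + int_0^1 (-x^3/6 + x^2 + x) dx
integral = Fr(1, 5) + (Fr(-1, 24) + Fr(1, 3) + Fr(1, 2))
assert integral == Fr(119, 120)

# one-sided limits of u', u'' at the pulse point x = 0
duL,  duR  = lambda x: 4 * x**3,  lambda x: -x**2 / Fr(2) + 2 * x + 1
d2uL, d2uR = lambda x: 12 * x**2, lambda x: -x + 2
assert uR(0) - uL(0) == 0        # Δu(0) = 0
assert duR(0) - duL(0) == 1      # Δu'(0) = 1
assert d2uR(0) - d2uL(0) == 2    # Δu''(0) = 2
print("Example 4 side conditions verified")
```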

In Example 4 the reproducing kernel space is \(W_{2, c}^{4} [ -2,2 ]\). Table 4 shows the absolute errors and convergence order of our method in different cases. In Figs. 3–6 the red dotted line is the numerical solution \(u_{n}^{ ( i )} (x)\) and the black line is the exact solution \(u^{ ( i )} ( x )\), \(i=0, 1,2,3\).

Figure 3: \(u _{n}(x)\) and \(u(x)\)

Figure 4: \(u' _{n}(x)\) and \(u'(x)\)

Figure 5: \(u''_{n}(x)\) and \(u''(x)\)

Figure 6: \(u'''_{n}(x)\) and \(u'''(x)\)

Table 4 Comparison of absolute errors in Example 4

6 Conclusion

In this paper, the reproducing kernel method is applied for the first time to solve impulsive differential equations. A broken reproducing kernel space is constructed; the space is reasonably simple because complicated boundary conditions are not considered, and the time-consuming Schmidt orthogonalization process is avoided. The approximate solution we obtain has at least second-order convergence. In Sect. 5, four numerical experiments are carried out with the new algorithm and compared with other algorithms. This technique can be extended to other classes of impulsive boundary value problems. Although only one pulse point is considered in our presentation, by analogy the algorithm can also be applied to multiple pulse points. The illustrative tables and figures show that the algorithm is remarkably accurate and effective, as expected.

References

  1. Bainov, D.D., Dishliev, A.B.: Population dynamics control in regard to minimizing the time necessary for the regeneration of a biomass taken away from the population. Appl. Math. Comput. 39(1), 37–48 (1990)

  2. Bainov, D.D., Simenov, P.S.: Systems with Impulse Effect: Stability Theory and Applications. Ellis Horwood, Chichester (1989)

  3. LeVeque, R.J., Li, Z.: Immersed interface methods for Stokes flow with elastic boundaries or surface tension. SIAM J. Sci. Comput. 18(3), 709–735 (2012)

  4. Huang, Y., Forsyth, P.A., Labahn, G.: Inexact arithmetic considerations for direct control and penalty methods: American options under jump diffusion. Appl. Numer. Math. 72(2), 33–51 (2013)

  5. Liu, X.D., Sideris, T.C.: Convergence of the ghost fluid method for elliptic equations with interfaces. Math. Comput. 72(244), 1731–1746 (2013)

  6. Cao, S., Xiao, Y., Zhu, H.: Linearized alternating directions method for \(l(1)\)-norm inequality constrained \(l(1)\)-norm minimization. Appl. Numer. Math. 85, 142–153 (2014)

  7. Candes, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006)

  8. Wang, Q., Wang, M.: Existence of solution for impulsive differential equations with indefinite linear part. Appl. Math. Lett. 51, 41–47 (2016)

  9. Rehman, M.U., Eloe, P.W.: Existence and uniqueness of solutions for impulsive fractional differential equations. Appl. Math. Comput. 224(4), 422–431 (2013)

  10. Jankowski, T.: Positive solutions to second order four-point boundary value problems for impulsive differential equations. Appl. Math. Comput. 202(2), 550–561 (2008)

  11. Bogun, I.: Existence of weak solutions for impulsive-Laplacian problem with superlinear impulses. Nonlinear Anal., Real World Appl. 13(6), 2701–2707 (2012)

  12. Wang, J.R., Zhou, Y., Lin, Z.: On a new class of impulsive fractional differential equations. Appl. Math. Comput. 242, 649–657 (2014)

  13. Berenguer, M.I., Kunze, H., Torre, D.L., Galan, M.R.: Galerkin method for constrained variational equations and a collage-based approach to related inverse problems. J. Comput. Appl. Math. 292, 67–75 (2016)

  14. Epshteyn, Y., Phippen, S.: High-order difference potentials methods for 1D elliptic type models. Appl. Numer. Math. 93, 69–86 (2015)

  15. Epshteyn, Y.: Algorithms composition approach based on difference potentials method for parabolic problems. Commun. Math. Sci. 12(4), 723–755 (2014)

  16. Hossainzadeh, H., Afrouzi, G., Yazdani, A.: Application of Adomian decomposition method for solving impulsive differential equations. J. Math. Comput. Sci. 2(4), 672–681 (2011)

  17. Zhang, Z., Liang, H.: Collocation methods for impulsive differential equations. Appl. Math. Comput. 228, 336–348 (2014)

  18. Zhang, G.L., Song, M.H., Liu, M.Z.: Asymptotic stability of a class of impulsive delay differential equations. J. Appl. Math. 10, 487–505 (2012)

  19. Cui, M., Lin, Y.: Nonlinear Numerical Analysis in the Reproducing Kernel Space. Nova Science Publishers, New York (2009)

  20. Wu, B., Lin, Y.: Application of the Reproducing Kernel Space. Science Press (2012)

  21. Zhao, Z., Lin, Y., Niu, J.: Convergence order of the reproducing kernel method for solving boundary value problems. Math. Model. Anal. 21(4), 466–477 (2016)

  22. Mei, L., Jia, Y., Lin, Y.: Simplified reproducing kernel method for impulsive delay differential equations. Appl. Math. Lett. 83, 123–129 (2018)

  23. Xu, M., Zhao, Z., Lin, Y.: A simplified reproducing kernel method for 1-D elliptic type interface problems. J. Comput. Appl. Math. 351, 29–40 (2019)


Acknowledgements

The authors are grateful for the comments of the referee, which have improved the exposition of this paper.

Availability of data and materials

All data generated or analyzed during this study are included in this published article.

Funding

This work has been supported by 2018KQNCX338, a Young Innovative Talents Program in Universities and Colleges of Guangdong Province, and XT-2018-03, a Scientific Research-Innovation Team Project at Zhuhai Campus, Beijing Institute of Technology.

Author information

Authors and Affiliations

Authors

Contributions

LM conceived of the study, designed the study and collected the literature. HS proved the convergence of the algorithm. YL reviewed the full text. All authors were involved in writing the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Liangcai Mei.

Ethics declarations

Ethics approval and consent to participate

The authors declare that the submitted work is the result of research carried out by all of the authors.

Competing interests

The authors declare that no competing interests exist.

Consent for publication

The authors agree to publication in this journal.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.




Cite this article

Mei, L., Sun, H. & Lin, Y. Numerical method and convergence order for second-order impulsive differential equations. Adv Differ Equ 2019, 260 (2019). https://doi.org/10.1186/s13662-019-2177-2
