
  • Research
  • Open Access

Stability of numerical solution for partial differential equations with piecewise constant arguments

Advances in Difference Equations 2018, 2018:71

https://doi.org/10.1186/s13662-018-1514-1

  • Received: 4 October 2017
  • Accepted: 2 February 2018
  • Published:

Abstract

In this paper, the numerical stability of a partial differential equation with piecewise constant arguments is considered. Firstly, the θ-methods are applied to approximate the original equation. Secondly, the numerical asymptotic stability conditions are given when the mesh ratio and the corresponding parameter satisfy certain conditions. Thirdly, the conditions under which the numerical stability region contains the analytic stability region are also established. Finally, some numerical examples are given to demonstrate the theoretical results.

Keywords

  • Partial differential equation
  • Piecewise constant arguments
  • θ-methods
  • Stability

MSC

  • 65L07
  • 65L20

1 Introduction

Recently, differential equations with piecewise constant arguments (EPCA) have received much attention from a number of investigators [1–5] in various fields such as population dynamics, physics, mechanical systems, control science and economics. The theory of EPCA was initiated in 1983 and 1984 with the contributions of Cooke and Wiener [6], Shah and Wiener [7], and Wiener [8], and it has been developed by many authors [9–13]. In 1993, Wiener, a pioneer of EPCA, collected in the book [14] the investigations of EPCA up to that time. Since then, continuous efforts have been devoted to various properties of EPCA [15–18].

Generally speaking, analytic solutions of EPCA are hard to obtain in many cases, and we are forced to approximate them by numerical methods. Nevertheless, compared with the qualitative investigation of EPCA, their numerical study started late and remains scarce. The original work in this field is due to Liu et al. [19], which we regard as the key step toward solving EPCA by numerical methods. Subsequently, several results on the convergence, stability and dissipativity of numerical solutions for EPCA have been reported [20–24]. However, all of them concern ordinary differential equations (ODEs). To the best of the author’s knowledge, only a few results have been presented on the numerical treatment of partial differential equations with piecewise constant arguments (PEPCA) [25, 26]. In these two articles, the authors investigated the numerical stability of θ-methods and Galerkin methods, respectively, for a simple PEPCA. In contrast to [25, 26], in the present paper we study a more complicated model and analyze its numerical stability.

In this paper, we consider the following initial boundary value problem (IBVP):
$$ \textstyle\begin{cases} u_{t}(x,t)=a^{2}u_{xx}(x,t)+bu_{xx}(x,[t])+cu_{xx} (x,2 [\frac {t+1}{2} ] ), \quad t> 0, \\ u(0,t)=u(1,t)=0, \\ u(x,0)=v(x), \end{cases} $$
(1)
where \(a,b,c \in\mathbb{R}\) and \(a\neq0\), \(u: \Omega=[0,1] \times [0,\infty)\rightarrow\mathbb{R}\), \(v: [0,1]\rightarrow\mathbb{R}\), \([\cdot]\) signifies the greatest integer function.
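
The deviating arguments alternate in character on consecutive unit intervals: \([t]\) is always retarded, while \(2 [\frac{t+1}{2} ]\) equals \(2j\) (retarded) for \(t\in[2j,2j+1)\) and \(2j+2\) (advanced) for \(t\in[2j+1,2j+2)\). The following Python sketch, with illustrative function names not taken from the paper, simply evaluates both arguments to make this visible:

```python
import math

def deviating_arguments(t):
    """Return ([t], 2*[(t+1)/2]) for equation (1)."""
    return math.floor(t), 2 * math.floor((t + 1) / 2)

# On [2, 3) both arguments equal 2 (retarded);
# on [3, 4) the second argument jumps ahead to 4 (advanced).
for t in (2.0, 2.5, 3.0, 3.5):
    print(t, deviating_arguments(t))
```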

For the sake of the later discussion, we derive the following stability conditions for (1) by a method similar to that used in [27, 28].

Lemma 1

If the following conditions are satisfied:
$$ \textstyle\begin{cases} (a^{2}+b+c)((a^{2}+b-c)e^{-a^{2}\pi^{2}j^{2}}-(b-a^{2}-c))>0, \\ (a^{2}+b+c)((a^{2}+b+c)e^{-a^{2}\pi^{2}j^{2}}-(b-a^{2}+c))>0, \end{cases} $$
(2)
where
$$c\neq\frac{a^{2}}{e^{-a^{2}\pi^{2}j^{2}}-1},\quad a\neq0, $$
then the zero solution of the equation in (1) is asymptotically stable.
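
As a quick illustration (not part of the original argument), the conditions of Lemma 1 can be checked numerically. The sketch below assumes, following [27], that the inequalities in (2) are required for every mode index \(j=1,2,\ldots\), truncated here at a finite j_max; the function name is hypothetical:

```python
import math

def lemma1_certifies(a, b, c, j_max=50):
    """Return True when Lemma 1 certifies asymptotic stability of the
    zero solution of (1), checking the inequalities (2) for j = 1..j_max."""
    if a == 0:
        return False
    for j in range(1, j_max + 1):
        E = math.exp(-a**2 * math.pi**2 * j**2)
        if c == a**2 / (E - 1):          # excluded case in Lemma 1
            return False
        ok1 = (a**2 + b + c) * ((a**2 + b - c) * E - (b - a**2 - c)) > 0
        ok2 = (a**2 + b + c) * ((a**2 + b + c) * E - (b - a**2 + c)) > 0
        if not (ok1 and ok2):
            return False
    return True

# Example with a = 1, b = 1/2, c = 1/4 (used again in Section 3):
print(lemma1_certifies(1.0, 0.5, 0.25))
```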

2 The stability of the numerical solution

In this section, we consider the numerical asymptotic stability of θ-methods for (1).

2.1 The difference equation

Let \(\Delta t>0\) and \(\Delta x>0\) be time and spatial stepsizes, respectively. We also assume that Δt satisfies \(\Delta t=1/m\), where \(m \geq1\) is an integer, and Δx satisfies \(\Delta x=1/p\) for \(p\in\mathbb{N}\). Define the mesh points
$$t_{n}=n\Delta t,\quad n=0,1,2,\ldots, $$
and
$$x_{i}=i\Delta x, \quad i=0,1,2,\ldots,p. $$
Applying the θ-methods to (1), we have
$$ \textstyle\begin{cases} \frac{u_{i}^{n+1}-u_{i}^{n}}{\Delta t} \\ \quad = \theta\{ a^{2}\frac{u_{i+1}^{n+1}-2u_{i}^{n+1}+u_{i-1}^{n+1}}{\Delta x^{2}}+b\frac {u^{h}(x_{i+1},[t_{n+1}])-2u^{h}(x_{i},[t_{n+1}])+u^{h}(x_{i-1},[t_{n+1}])}{\Delta x^{2}} \\ \qquad {}+c\frac{u^{h}(x_{i+1},2[\frac{t_{n+1}+1}{2}])-2u^{h}(x_{i},2[\frac {t_{n+1}+1}{2}])+u^{h}(x_{i-1},2[\frac{t_{n+1}+1}{2}])}{\Delta x^{2}}\} \\ \qquad {}+(1-\theta)\{a^{2}\frac{u_{i+1}^{n}-2u_{i}^{n}+u_{i-1}^{n}}{\Delta x^{2}}+b\frac {u^{h}(x_{i+1},[t_{n}])-2u^{h}(x_{i},[t_{n}])+u^{h}(x_{i-1},[t_{n}])}{\Delta x^{2}} \\ \qquad {}+c\frac{u^{h}(x_{i+1},2[\frac{t_{n}+1}{2}])-2u^{h}(x_{i},2[\frac {t_{n}+1}{2}])+u^{h}(x_{i-1},2[\frac{t_{n}+1}{2}])}{\Delta x^{2}}\}, \\ u_{0}^{n}=u_{p}^{n}=0, \quad n=0,1,2,\ldots, \\ u_{i}^{0}=v(x_{i}), \quad i=0,1,2,\ldots,p, \end{cases} $$
(3)
where \(u_{i}^{n}\), \(u^{h}(x_{i},2[(t_{n}+1)/2])\) and \(u^{h}(x_{i},[t_{n}])\) are approximations to \(u(x_{i},t_{n})\), \(u(x_{i},2[(t_{n}+1)/2])\) and \(u(x_{i},[t_{n}])\), respectively.
Write \(n=km+l\), \(k=0,1,2,\ldots\) , \(l=0,1,\ldots, m-1\). By the same technique as in [29], we can define \(u^{h}(x_{i},[t_{n}+\eta h])\triangleq u_{i}^{km}\), \(u^{h}(x_{i},2[(2k-1+lh+\eta h+1)/2])\triangleq u_{i}^{2km}\) and \(u^{h}(x_{i},2[(2k+lh+\eta h+1)/2])\triangleq u_{i}^{2km}\), where \(\eta\in[0,1]\). Hence the equation in (3) reduces to the following two recurrence relations:
$$\begin{aligned}& \frac{u_{i}^{km+l+1}-u_{i}^{km+l}}{\Delta t} \\& \quad =a^{2}\theta \biggl(\frac {u_{i+1}^{km+l+1}-2u_{i}^{km+l+1}+u_{i-1}^{km+l+1}}{\Delta x^{2}} \biggr) +a^{2}(1-\theta) \biggl(\frac {u_{i+1}^{km+l}-2u_{i}^{km+l}+u_{i-1}^{km+l}}{\Delta x^{2}} \biggr) \\& \qquad {}+(b+c) \biggl(\frac{u_{i+1}^{km}-2u_{i}^{km}+u_{i-1}^{km}}{\Delta x^{2}} \biggr), \end{aligned}$$
(4)
when k is even and
$$\begin{aligned}& \frac{u_{i}^{km+l+1}-u_{i}^{km+l}}{\Delta t} \\& \quad =a^{2}\theta \biggl(\frac {u_{i+1}^{km+l+1}-2u_{i}^{km+l+1}+u_{i-1}^{km+l+1}}{\Delta x^{2}} \biggr) +a^{2}(1-\theta) \biggl(\frac {u_{i+1}^{km+l}-2u_{i}^{km+l}+u_{i-1}^{km+l}}{\Delta x^{2}} \biggr) \\& \qquad {}+b \biggl(\frac{u_{i+1}^{km}-2u_{i}^{km}+u_{i-1}^{km}}{\Delta x^{2}} \biggr)+c \biggl(\frac{u_{i+1}^{(k+1)m}-2u_{i}^{(k+1)m}+u_{i-1}^{(k+1)m}}{\Delta x^{2}} \biggr), \end{aligned}$$
(5)
when k is odd.

Basically, on each interval \([n,n+1)\) the equation in (1) can be regarded as a PDE without deviating arguments, so the θ-methods for (1) are convergent of order \(O(\Delta t+\Delta x^{2})\) if \(\theta\neq 1/2\) and of order \(O(\Delta t^{2}+\Delta x^{2})\) if \(\theta= 1/2\). A more detailed convergence analysis of the θ-methods can be found in [30, 31].

Let \(r=\Delta t/\Delta x^{2}\), so (4) and (5) become
$$\begin{aligned}& -a^{2}\theta ru_{i+1}^{km+l+1}+\bigl(1+2a^{2} \theta r\bigr)u_{i}^{km+l+1}-a^{2}\theta ru_{i-1}^{km+l+1} \\& \quad =a^{2}(1-\theta)ru_{i+1}^{km+l}+ \bigl(1-2a^{2}(1-\theta)r\bigr)u_{i}^{km+l}+a^{2}(1- \theta )ru_{i-1}^{km+l} \\& \qquad {}+(b+c)r\bigl(u_{i+1}^{km}-2u_{i}^{km}+u_{i-1}^{km} \bigr) \end{aligned}$$
(6)
and
$$\begin{aligned}& -a^{2}\theta ru_{i+1}^{km+l+1}+\bigl(1+2a^{2} \theta r\bigr)u_{i}^{km+l+1}-a^{2}\theta ru_{i-1}^{km+l+1} \\& \quad =a^{2}(1-\theta)ru_{i+1}^{km+l}+ \bigl(1-2a^{2}(1-\theta)r\bigr)u_{i}^{km+l}+a^{2}(1- \theta )ru_{i-1}^{km+l} \\& \qquad {} +br\bigl(u_{i+1}^{km}-2u_{i}^{km}+u_{i-1}^{km} \bigr)+cr\bigl(u_{i+1}^{(k+1)m}-2u_{i}^{(k+1)m}+u_{i-1}^{(k+1)m} \bigr), \end{aligned}$$
(7)
respectively. Moreover, letting \(i=1,2,\ldots,p-1\), equations (6) and (7) yield
$$\begin{aligned}& \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} 1+2a^{2}\theta r & -a^{2}\theta r & \cdots& 0 & 0\\ -a^{2}\theta r & 1+2a^{2}\theta r & \cdots& 0 & 0\\ \vdots& \vdots& \ddots& \vdots& \vdots\\ 0 & 0 & \cdots& 1+2a^{2}\theta r & -a^{2}\theta r\\ 0 & 0 & \cdots& -a^{2}\theta r & 1+2a^{2}\theta r \end{array}\displaystyle \right ) \left ( \textstyle\begin{array}{@{}c@{}} u_{1}^{km+l+1}\\ u_{2}^{km+l+1}\\ \vdots\\ \vdots\\ u_{p-1}^{km+l+1} \end{array}\displaystyle \right ) \\& \quad =\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} \omega& a^{2}(1-\theta)r & \cdots& 0 & 0\\ a^{2}(1-\theta)r & \omega& \cdots& 0 & 0\\ \vdots& \vdots& \ddots& \vdots& \vdots\\ 0 & 0 & \cdots& \omega& a^{2}(1-\theta)r\\ 0 & 0 & \cdots& a^{2}(1-\theta)r & \omega \end{array}\displaystyle \right ) \left ( \textstyle\begin{array}{@{}c@{}} u_{1}^{km+l}\\ u_{2}^{km+l}\\ \vdots\\ \vdots\\ u_{p-1}^{km+l} \end{array}\displaystyle \right ) \\& \qquad {}+(b+c)r \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} -2 & 1 & \cdots& 0 & 0\\ 1 & -2 & \cdots& 0 & 0\\ \vdots& \vdots& \ddots& \vdots& \vdots\\ 0 & 0 & \cdots& -2 & 1\\ 0 & 0 & \cdots& 1 & -2 \end{array}\displaystyle \right ) \left ( \textstyle\begin{array}{@{}c@{}} u_{1}^{km}\\ u_{2}^{km}\\ \vdots\\ \vdots\\ u_{p-1}^{km} \end{array}\displaystyle \right ) \end{aligned}$$
and
$$\begin{aligned}& \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} 1+2a^{2}\theta r & -a^{2}\theta r & \cdots& 0 & 0\\ -a^{2}\theta r & 1+2a^{2}\theta r & \cdots& 0 & 0\\ \vdots& \vdots& \ddots& \vdots& \vdots\\ 0 & 0 & \cdots& 1+2a^{2}\theta r & -a^{2}\theta r\\ 0 & 0 & \cdots& -a^{2}\theta r & 1+2a^{2}\theta r \end{array}\displaystyle \right ) \left ( \textstyle\begin{array}{@{}c@{}} u_{1}^{km+l+1}\\ u_{2}^{km+l+1}\\ \vdots\\ \vdots\\ u_{p-1}^{km+l+1} \end{array}\displaystyle \right ) \\& \quad =\left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} \omega& a^{2}(1-\theta)r & \cdots& 0 & 0\\ a^{2}(1-\theta)r & \omega& \cdots& 0 & 0\\ \vdots& \vdots& \ddots& \vdots& \vdots\\ 0 & 0 & \cdots& \omega& a^{2}(1-\theta)r\\ 0 & 0 & \cdots& a^{2}(1-\theta)r & \omega \end{array}\displaystyle \right ) \left ( \textstyle\begin{array}{@{}c@{}} u_{1}^{km+l}\\ u_{2}^{km+l}\\ \vdots\\ \vdots\\ u_{p-1}^{km+l} \end{array}\displaystyle \right ) \\& \qquad {}+br \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} -2 & 1 & \cdots& 0 & 0\\ 1 & -2 & \cdots& 0 & 0\\ \vdots& \vdots& \ddots& \vdots& \vdots\\ 0 & 0 & \cdots& -2 & 1\\ 0 & 0 & \cdots& 1 & -2 \end{array}\displaystyle \right ) \left ( \textstyle\begin{array}{@{}c@{}} u_{1}^{km}\\ u_{2}^{km}\\ \vdots\\ \vdots\\ u_{p-1}^{km} \end{array}\displaystyle \right ) \\& \qquad {}+cr \left ( \textstyle\begin{array}{@{}c@{\quad}c@{\quad}c@{\quad}c@{\quad}c@{}} -2 & 1 & \cdots& 0 & 0\\ 1 & -2 & \cdots& 0 & 0\\ \vdots& \vdots& \ddots& \vdots& \vdots\\ 0 & 0 & \cdots& -2 & 1\\ 0 & 0 & \cdots& 1 & -2 \end{array}\displaystyle \right ) \left ( \textstyle\begin{array}{@{}c@{}} u_{1}^{(k+1)m}\\ u_{2}^{(k+1)m}\\ \vdots\\ \vdots\\ u_{p-1}^{(k+1)m} \end{array}\displaystyle \right ), \end{aligned}$$
respectively, where \(\omega=1-2a^{2}(1-\theta)r\).
Introducing \(\mathbf{u}^{n}=(u_{1}^{n},u_{2}^{n},\ldots,u_{p-1}^{n})^{T}\), \(n=0,1,2,\ldots \) , \(\mathbf{v}(x)=(v(x_{1}),v(x_{2}),\ldots,v(x_{p-1}))^{T}\) and the \((p-1)\times(p-1)\) tridiagonal matrix \(\mathbf{F}=\operatorname{tridiag}(-1,2,-1)\), equation (3) becomes
$$ \textstyle\begin{cases} (\mathbf{I}+a^{2}\theta r\mathbf{F})\mathbf{u}^{km+l+1}=(\mathbf {I}-a^{2}(1-\theta)r\mathbf{F})\mathbf{u}^{km+l}-(b+c)r\mathbf{F}\mathbf {u}^{km}, \\ \mathbf{u}^{0}=\mathbf{v}(x), \end{cases} $$
(8)
when k is even, and
$$ \textstyle\begin{cases} (\mathbf{I}+a^{2}\theta r\mathbf{F})\mathbf{u}^{km+l+1}=(\mathbf {I}-a^{2}(1-\theta)r\mathbf{F})\mathbf{u}^{km+l}-br\mathbf{F}\mathbf {u}^{km}-cr\mathbf{F}\mathbf{u}^{(k+1)m}, \\ \mathbf{u}^{0}=\mathbf{v}(x), \end{cases} $$
(9)
when k is odd.
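
The recursion (8) can be stepped directly, since on an even interval the deviating terms are frozen at \(\mathbf{u}^{km}\); on an odd interval, (9) involves the yet-unknown value \(\mathbf{u}^{(k+1)m}\), which is why the closed forms derived in the next subsection are needed. A minimal Python/NumPy sketch of one even interval, assuming \(\Delta t=1/m\), \(\Delta x=1/p\) and hence \(r=p^{2}/m\) (function names are illustrative, not from the paper):

```python
import numpy as np

def tridiag_F(p):
    """The (p-1) x (p-1) tridiagonal matrix F = tridiag(-1, 2, -1)."""
    return (np.diag(2.0 * np.ones(p - 1))
            - np.diag(np.ones(p - 2), 1)
            - np.diag(np.ones(p - 2), -1))

def advance_even_interval(u_km, a, b, c, theta, m, p):
    """Advance the solution across one even-numbered unit interval via (8).

    Sketch only: the deviating terms are frozen at u^{km} throughout the
    interval, so each of the m sub-steps solves one linear system with the
    constant matrix I + a^2*theta*r*F, where r = Delta t / Delta x^2 = p^2/m.
    """
    r = p**2 / m
    F = tridiag_F(p)
    I = np.eye(p - 1)
    A = I + a**2 * theta * r * F             # left-hand matrix in (8)
    B = I - a**2 * (1 - theta) * r * F       # right-hand matrix in (8)
    frozen = -(b + c) * r * (F @ u_km)       # contribution of u^{km}
    u = u_km.copy()
    for _ in range(m):                       # sub-steps l = 0, 1, ..., m-1
        u = np.linalg.solve(A, B @ u + frozen)
    return u

# Example: start from the initial data v(x) = sin(pi*x) on p = 16 intervals.
p, m = 16, 128
x = np.linspace(0.0, 1.0, p + 1)[1:-1]
u_at_t1 = advance_even_interval(np.sin(np.pi * x), a=1.0, b=0.5, c=0.25,
                                theta=0.5, m=m, p=p)
```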

2.2 Stability analysis

From (8), we obtain
$$ \mathbf{u}^{km+l+1}=\mathbf{R}\mathbf{u}^{km+l}+ \mathbf{S}\mathbf{u}^{km}, $$
(10)
where
$$\begin{aligned}& \mathbf{R}=\bigl(\mathbf{I}+a^{2}\theta r\mathbf{F} \bigr)^{-1}\bigl(\mathbf {I}-a^{2}(1-\theta)r\mathbf{F}\bigr), \\& \mathbf{S}=-(b+c)r\bigl(\mathbf{I}+a^{2}\theta r\mathbf{F} \bigr)^{-1}\mathbf{F}. \end{aligned}$$
By (9), we also obtain
$$ \mathbf{u}^{km+l+1}=\mathbf{R}\mathbf{u}^{km+l}+ \mathbf{S}_{1}\mathbf {u}^{km}+\mathbf{S}_{2} \mathbf{u}^{(k+1)m}, $$
(11)
where
$$\begin{aligned}& \mathbf{R}=\bigl(\mathbf{I}+a^{2}\theta r\mathbf{F} \bigr)^{-1}\bigl(\mathbf {I}-a^{2}(1-\theta)r\mathbf{F}\bigr), \\& \mathbf{S}_{1}=-br\bigl(\mathbf{I}+a^{2}\theta r\mathbf{F} \bigr)^{-1}\mathbf{F}, \\& \mathbf{S}_{2}=-cr\bigl(\mathbf{I}+a^{2}\theta r\mathbf{F} \bigr)^{-1}\mathbf{F}. \end{aligned}$$
Iteration of (10) gives
$$ \mathbf{u}^{km+l+1}=\bigl(\mathbf{R}^{l+1}+\bigl( \mathbf{R}^{l+1}-\mathbf {I}\bigr) (\mathbf{R}-\mathbf{I})^{-1} \mathbf{S}\bigr)\mathbf{u}^{km}, $$
(12)
in the same way, from (11) we have
$$ \mathbf{u}^{km+l+1}=\bigl(\mathbf{R}^{l+1}+\bigl( \mathbf{R}^{l+1}-\mathbf {I}\bigr) (\mathbf{R}-\mathbf{I})^{-1} \mathbf{S}_{1}\bigr)\mathbf{u}^{km}+\bigl(\mathbf {R}^{l+1}-\mathbf{I}\bigr) (\mathbf{R}-\mathbf{I})^{-1} \mathbf{S}_{2}\mathbf {u}^{(k+1)m}. $$
(13)
Thus we get
$$ \mathbf{u}^{n}= \textstyle\begin{cases} (\mathbf{R}^{l}+(\mathbf{R}^{l}-\mathbf{I})(\mathbf{R}-\mathbf {I})^{-1}\mathbf{S})\mathbf{u}^{km}, & k \mbox{ is even}, \\ (\mathbf{R}^{l}+(\mathbf{R}^{l}-\mathbf{I})(\mathbf{R}-\mathbf {I})^{-1}\mathbf{S}_{1})\mathbf{u}^{km} \\ \quad {} +(\mathbf{R}^{l}-\mathbf{I})(\mathbf{R}-\mathbf{I})^{-1}\mathbf {S}_{2}\mathbf{u}^{(k+1)m}, & k \mbox{ is odd}. \end{cases} $$
(14)
So
$$ \mathbf{u}^{n}= \textstyle\begin{cases} (\mathbf{R}^{l}+(\mathbf{R}^{l}-\mathbf{I})(\mathbf{R}-\mathbf {I})^{-1}\mathbf{S})\mathbf{u}^{2jm}, & n=2jm+l, \\ (\mathbf{R}^{l}+(\mathbf{R}^{l}-\mathbf{I})(\mathbf{R}-\mathbf {I})^{-1}\mathbf{S}_{1})\mathbf{u}^{(2j-1)m} \\ \quad {}+(\mathbf{R}^{l}-\mathbf{I})(\mathbf{R}-\mathbf{I})^{-1}\mathbf {S}_{2}\mathbf{u}^{2jm}, & n=(2j-1)m+l. \end{cases} $$
(15)
Setting \(l=m\) in (15) (that is, \(l=m-1\) in (12) and (13)) gives
$$\textstyle\begin{cases} \mathbf{u}^{(2j+1)m}=(\mathbf{R}^{m}+(\mathbf{R}^{m}-\mathbf{I})(\mathbf {R}-\mathbf{I})^{-1}\mathbf{S})\mathbf{u}^{2jm}, \quad j=0,1,\ldots, \\ \mathbf{u}^{2jm}=(\mathbf{I}-(\mathbf{R}^{m}-\mathbf{I})(\mathbf {R}-\mathbf{I})^{-1}\mathbf{S}_{2})^{-1}(\mathbf{R}^{m}+(\mathbf {R}^{m}-\mathbf{I})(\mathbf{R}-\mathbf{I})^{-1}\mathbf{S}_{1})\mathbf {u}^{(2j-1)m}, \quad j=1,2,\ldots. \end{cases} $$
Hence we have \(\mathbf{u}^{(2j+1)m}=\mathbf{M}\mathbf{u}^{(2j-1)m}\), where
$$\mathbf{M}=\bigl(\mathbf{R}^{m}+\bigl(\mathbf{R}^{m}- \mathbf{I}\bigr) (\mathbf {R}-\mathbf{I})^{-1}\mathbf{S}\bigr) \bigl( \mathbf{R}^{m}+\bigl(\mathbf{R}^{m}-\mathbf {I}\bigr) ( \mathbf{R}-\mathbf{I})^{-1}\mathbf{S}_{1}\bigr) \bigl( \mathbf{I}-\bigl(\mathbf {R}^{m}-\mathbf{I}\bigr) (\mathbf{R}- \mathbf{I})^{-1}\mathbf{S}_{2}\bigr)^{-1}. $$
Therefore
$$ \mathbf{u}^{n}= \textstyle\begin{cases} (\mathbf{R}^{l}+(\mathbf{R}^{l}-\mathbf{I})(\mathbf{R}-\mathbf {I})^{-1}\mathbf{S})\mathbf{M}^{j}\mathbf{u}^{0}, & n=2jm+l, j=0,1,\ldots, \\ (\mathbf{R}^{l}+(\mathbf{R}^{l}-\mathbf{I})(\mathbf{R}-\mathbf {I})^{-1}\mathbf{S}_{1})\mathbf{M}^{j-1}\mathbf{u}^{m} \\ \quad {}+(\mathbf{R}^{l}-\mathbf{I})(\mathbf{R}-\mathbf{I})^{-1}\mathbf {S}_{2}\mathbf{M}^{j}\mathbf{u}^{0},& n=(2j-1)m+l, j=1,2,\ldots, \end{cases} $$
(16)
where \(\mathbf{u}^{m}=\mathbf{N}\mathbf{u}^{0}\), \(\mathbf{N}=\mathbf {R}^{m}+(\mathbf{R}^{m}-\mathbf{I})(\mathbf{R}-\mathbf{I})^{-1}\mathbf {S}\) and \(l=0,1,\ldots,m-1\). Here we have used the fact that \(\mathbf{R}\), \(\mathbf{S}\), \(\mathbf{S}_{1}\), \(\mathbf{S}_{2}\), and hence \(\mathbf{N}\) and \(\mathbf{M}\), are all rational functions of \(\mathbf{F}\) and therefore commute.
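
For completeness, here is a Python/NumPy sketch (not from the paper) that assembles \(\mathbf{R}\), \(\mathbf{S}\), \(\mathbf{S}_{1}\), \(\mathbf{S}_{2}\), \(\mathbf{N}\) and \(\mathbf{M}\) and estimates the spectral radius of \(\mathbf{M}\), which governs the asymptotic stability discussed below; the mesh ratio \(r=p^{2}/m\) is assumed as in Section 2.1:

```python
import numpy as np

def two_interval_matrices(a, b, c, theta, m, p):
    """Assemble R, S, S1, S2 and the matrices N and M appearing in (16).

    Sketch assuming Delta t = 1/m, Delta x = 1/p and r = p^2/m.  All the
    matrices are rational functions of F, so they commute and the ordering
    of the factors in M is immaterial.
    """
    r = p**2 / m
    F = (np.diag(2.0 * np.ones(p - 1))
         - np.diag(np.ones(p - 2), 1)
         - np.diag(np.ones(p - 2), -1))
    I = np.eye(p - 1)
    Ainv = np.linalg.inv(I + a**2 * theta * r * F)
    R = Ainv @ (I - a**2 * (1 - theta) * r * F)
    S = -(b + c) * r * Ainv @ F
    S1 = -b * r * Ainv @ F
    S2 = -c * r * Ainv @ F
    Rm = np.linalg.matrix_power(R, m)
    G = (Rm - I) @ np.linalg.inv(R - I)          # (R^m - I)(R - I)^{-1}
    N = Rm + G @ S
    M = (Rm + G @ S) @ (Rm + G @ S1) @ np.linalg.inv(I - G @ S2)
    return N, M

# The zero solution of (3) is asymptotically stable when rho(M) < 1, e.g.:
N, M = two_interval_matrices(a=1.0, b=0.5, c=0.25, theta=0.5, m=128, p=16)
print(np.max(np.abs(np.linalg.eigvals(M))))
```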

Lemma 2

If the coefficients a, b and c satisfy
$$ \biggl\vert \frac{\beta^{m}+\frac{b}{a^{2}}(\beta^{m}-1)}{1-\frac{c}{a^{2}}(\beta ^{m}-1)} \biggr\vert < 1 $$
(17)
and
$$ \biggl\vert \beta^{m}+\frac{b+c}{a^{2}}\bigl( \beta^{m}-1\bigr) \biggr\vert < 1, $$
(18)
then the zero solution of the equation in (3) is asymptotically stable, where \(\lambda_{\mathbf{F}}\) denotes an eigenvalue of the matrix \(\mathbf{F}\) and
$$ \beta=\frac{1-a^{2}(1-\theta)r\lambda_{\mathbf{F}}}{1+a^{2}\theta r\lambda _{\mathbf{F}}}. $$
(19)

Proof

From (16) and [25], we know that the largest eigenvalue (in modulus) of the matrix M is
$$\lambda_{\mathbf{M}}=\frac{ (\beta^{m}+\frac{b}{a^{2}}(\beta^{m}-1) ) (\beta^{m}+\frac{b+c}{a^{2}}(\beta^{m}-1) )}{1-\frac{c}{a^{2}}(\beta^{m}-1)}, $$
where β is defined in (19). The zero solution of the equation in (3) is asymptotically stable if and only if \(|\lambda_{\mathbf{M}}|<1\), and conditions (17) and (18) guarantee that this inequality holds. □
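
The same quantity can be evaluated mode by mode. The sketch below uses the standard fact that the eigenvalues of \(\mathbf{F}=\operatorname{tridiag}(-1,2,-1)\) of order \(p-1\) are \(\lambda_{i}=4\sin^{2}(i\pi/2p)\), \(i=1,\ldots,p-1\), and assumes \(r=p^{2}/m\) as in Section 2.1; it is an illustration, not part of the proof:

```python
import numpy as np

def lambda_M_modes(a, b, c, theta, m, p):
    """Eigenvalues of M computed mode by mode from the scalar formula above.

    Sketch: each eigenvalue lambda_i = 4*sin(i*pi/(2p))**2 of F yields a
    beta through (19) and a corresponding eigenvalue of M.
    """
    r = p**2 / m                              # r = Delta t / Delta x**2
    lam = 4.0 * np.sin(np.arange(1, p) * np.pi / (2.0 * p))**2
    beta = (1.0 - a**2 * (1.0 - theta) * r * lam) / (1.0 + a**2 * theta * r * lam)
    bm = beta**m
    return ((bm + b / a**2 * (bm - 1.0)) * (bm + (b + c) / a**2 * (bm - 1.0))
            / (1.0 - c / a**2 * (bm - 1.0)))

# Lemma 2 requires |lambda_M| < 1 for every mode; for example:
print(np.max(np.abs(lambda_M_modes(1.0, 0.5, 0.25, theta=0.5, m=128, p=16))))
```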

Theorem 1

Under the conditions of Lemma 2, if the conditions
$$ \bigl(a^{2}+b+c\bigr) \bigl(\beta^{m}-1\bigr) \bigl(\bigl(a^{2}+b-c\bigr)\beta^{m}-\bigl(b-a^{2}-c \bigr)\bigr)< 0 $$
(20)
and
$$ \bigl(a^{2}+b+c\bigr) \bigl(\beta^{m}-1\bigr) \bigl(\bigl(a^{2}+b+c\bigr)\beta^{m}-\bigl(b-a^{2}+c \bigr)\bigr)< 0 $$
(21)
are satisfied, where \(c\neq a^{2}/(\beta^{m}-1)\), \(a\neq0\), then the zero solution of the equation in (3) is asymptotically stable.

Proof

If \(a\neq0\), (17) and (18) are equivalent to
$$\biggl(\frac{\beta^{m}+\frac{b}{a^{2}}(\beta^{m}-1)}{1-\frac{c}{a^{2}}(\beta ^{m}-1)}+1 \biggr) \biggl(\frac{\beta^{m}+\frac{b}{a^{2}}(\beta^{m}-1)}{1-\frac {c}{a^{2}}(\beta^{m}-1)}-1 \biggr)< 0 $$
and
$$\biggl(\beta^{m}+\frac{b+c}{a^{2}}\bigl(\beta^{m}-1\bigr)+1 \biggr) \biggl(\beta^{m}+\frac {b+c}{a^{2}}\bigl(\beta^{m}-1 \bigr)-1 \biggr)< 0. $$
After some algebraic manipulation we obtain (20) and (21). The proof is completed. □

Definition 1

The set of all points \((a,b,c)\) which satisfy (2) is called the asymptotic stability region, denoted by H.

Definition 2

The set of all points \((a,b,c)\) at which the θ-methods (3) for (1) are asymptotically stable is called the asymptotic stability region of the numerical method, denoted by S.

For convenience, we divide the region H into three parts:
$$\begin{aligned}& H_{0}=\bigl\{ (a,b,c)\in H: \bigl(a^{2}+b+c\bigr) \bigl(a^{2}+b-c\bigr)=0\bigr\} , \\& H_{1}=\bigl\{ (a,b,c)\in H: \bigl(a^{2}+b+c\bigr) \bigl(a^{2}+b-c\bigr)>0\bigr\} , \\& H_{2}=\bigl\{ (a,b,c)\in H: \bigl(a^{2}+b+c\bigr) \bigl(a^{2}+b-c\bigr)< 0\bigr\} . \end{aligned}$$
In a similar way, we denote
$$\begin{aligned}& S_{0}=\bigl\{ (a,b,c)\in S: \bigl(a^{2}+b+c\bigr) \bigl(a^{2}+b-c\bigr)=0\bigr\} , \\& S_{1}=\bigl\{ (a,b,c)\in S: \bigl(a^{2}+b+c\bigr) \bigl(a^{2}+b-c\bigr)>0\bigr\} , \\& S_{2}=\bigl\{ (a,b,c)\in S: \bigl(a^{2}+b+c\bigr) \bigl(a^{2}+b-c\bigr)< 0\bigr\} . \end{aligned}$$

It is easy to see that \(H=H_{0}\cup H_{1}\cup H_{2}\), \(S=S_{0}\cup S_{1}\cup S_{2}\), \(H_{i}\cap H_{j}=\emptyset\), \(S_{i}\cap S_{j}=\emptyset\) and \(H_{i}\cap S_{j}=\emptyset\), \(i \neq j\), \(i,j=0,1,2\).

Theorem 2

Under the constraints
$$ \frac{b-a^{2}-c}{b+a^{2}-c}\leq0 $$
(22)
and
$$ \frac{b-a^{2}+c}{b+a^{2}+c}\leq0, $$
(23)
if the following conditions are satisfied:
\(r\neq1/(a^{2} \lambda_{\mathbf{F}} (1-\theta))\) and
$$ \textstyle\begin{cases} r< \min\frac{2}{a^{2} \lambda_{\mathbf{F}} (1-2\theta)},\quad 0 \leq \theta< 1/2, \\ r>0,\quad 1/2 \leq\theta\leq1, \end{cases} $$
(24)
when m is even, and
$$ r< \min\frac{1}{a^{2} \lambda_{\mathbf{F}} (1-\theta)}, $$
(25)
when m is odd, then \(H_{1}\subseteq S_{1}\).

Proof

By the definition of \(H_{1}\), condition (2) is satisfied when (22) and (23) hold. In the same way, by the definition of \(S_{1}\), condition (20) is satisfied when (22) and (23) hold and \(0<\beta ^{m}<1\), where β is defined in (19); hence \(H_{1}\subseteq S_{1}\). Conditions (24) and (25) are exactly what is needed to guarantee \(0<\beta^{m}<1\). The proof is completed. □
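
In practice, conditions (24) and (25) translate into an upper bound on the mesh ratio r. The sketch below interprets the minimum in (24)–(25) as being taken over the eigenvalues of \(\mathbf{F}\) (an assumption about the notation) and returns the resulting bound:

```python
import numpy as np

def max_mesh_ratio(a, theta, p, m_even=True):
    """Largest admissible mesh ratio r according to (24)/(25).

    Sketch only: the minimum in (24)-(25) is read as being taken over the
    eigenvalues of F, whose largest value is 4*sin((p-1)*pi/(2p))**2.
    """
    lam_max = 4.0 * np.sin((p - 1) * np.pi / (2.0 * p))**2
    if m_even:                                           # condition (24)
        if theta >= 0.5:
            return np.inf
        return 2.0 / (a**2 * lam_max * (1.0 - 2.0 * theta))
    return 1.0 / (a**2 * lam_max * (1.0 - theta))        # condition (25)

# Explicit scheme (theta = 0) with p = 20 spatial intervals and m even:
# the bound is roughly 1/2, so mesh ratios such as r = 1/16 are admissible.
print(max_mesh_ratio(1.0, 0.0, 20))
```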

Theorem 3

Under the constraints
$$ \frac{b-a^{2}-c}{b+a^{2}-c}\geq1 $$
(26)
and
$$ \frac{b-a^{2}+c}{b+a^{2}+c}\leq0, $$
(27)
if the following conditions are satisfied:
\(r\neq1/(a^{2} \lambda_{\mathbf{F}} (1-\theta))\) and
$$\textstyle\begin{cases} r< \min\frac{2}{a^{2} \lambda_{\mathbf{F}} (1-2\theta)},\quad 0 \leq \theta< 1/2, \\ r>0, \quad 1/2 \leq\theta\leq1, \end{cases} $$
when m is even, and
$$r< \min\frac{1}{a^{2} \lambda_{\mathbf{F}} (1-\theta)}, $$
when m is odd, then \(H_{2}\subseteq S_{2}\).

Proof

The proof is similar to that of Theorem 2 and is therefore omitted. □

Theorem 4

Under the constraints
$$\begin{aligned}& \frac{b+a^{2}-c}{b+a^{2}+c}=0, \end{aligned}$$
(28)
$$\begin{aligned}& \frac{b-a^{2}-c}{b+a^{2}+c}< 0, \end{aligned}$$
(29)
and
$$ \frac{b-a^{2}+c}{b+a^{2}+c}\leq0, $$
(30)
if the following conditions are satisfied:
\(r\neq1/(a^{2} \lambda_{\mathbf{F}} (1-\theta))\) and
$$\textstyle\begin{cases} r< \min\frac{2}{a^{2} \lambda_{\mathbf{F}} (1-2\theta)},\quad 0 \leq \theta< 1/2, \\ r>0,\quad 1/2 \leq\theta\leq1, \end{cases} $$
when m is even, and
$$r< \min\frac{1}{a^{2} \lambda_{\mathbf{F}} (1-\theta)}, $$
when m is odd, then \(H_{0}\subseteq S_{0}\).

Proof

Follows directly from the proof of Theorem 2. □

Remark 1

If \(\theta=1\), then the corresponding fully implicit finite difference scheme is unconditionally asymptotically stable.
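
Remark 1 can be probed numerically by reusing the mode-wise formula for \(\lambda_{\mathbf{M}}\) from the proof of Lemma 2: for \(\theta=1\) the spectral radius of \(\mathbf{M}\) should stay below one no matter how large the mesh ratio \(r=p^{2}/m\) becomes. A self-contained sketch with the sample coefficients \(a=1\), \(b=1/2\), \(c=1/4\) (chosen only for illustration):

```python
import numpy as np

def rho_M(a, b, c, theta, m, p):
    """Spectral radius of M via the scalar formula from Lemma 2 (sketch)."""
    r = p**2 / m
    lam = 4.0 * np.sin(np.arange(1, p) * np.pi / (2.0 * p))**2
    bm = ((1 - a**2 * (1 - theta) * r * lam) / (1 + a**2 * theta * r * lam))**m
    lam_M = ((bm + b / a**2 * (bm - 1)) * (bm + (b + c) / a**2 * (bm - 1))
             / (1 - c / a**2 * (bm - 1)))
    return np.max(np.abs(lam_M))

# The spectral radius stays below one even as r = p**2/m grows.
for m, p in [(16, 16), (16, 64), (16, 256)]:
    print(f"r = {p**2 / m:g}, rho(M) = {rho_M(1.0, 0.5, 0.25, theta=1.0, m=m, p=p):.4f}")
```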

3 Numerical experiments

To demonstrate the theoretical results, we present some numerical examples in this section. Consider the following two problems:
$$\begin{aligned}& \textstyle\begin{cases} u_{t}(x,t)=u_{xx}(x,t)+\frac{1}{2} u_{xx}(x,[t])+\frac{1}{4} u_{xx} (x,2 [\frac{t+1}{2} ] ), \quad t> 0, \\ u(0,t)=u(1,t)=0, \\ u(x,0)=\sin(\pi x), \end{cases}\displaystyle \end{aligned}$$
(31)
$$\begin{aligned}& \textstyle\begin{cases} u_{t}(x,t)=u_{xx}(x,t)-2u_{xx}(x,[t])+2u_{xx} (x,2 [\frac {t+1}{2} ] ),\quad t> 0, \\ u(0,t)=u(1,t)=0, \\ u(x,0)=\sin(\pi x). \end{cases}\displaystyle \end{aligned}$$
(32)
In Tables 1–4 we list the absolute errors \(\operatorname{AE}(1/m,1/p)\), \(\operatorname{AE}(1/4m,1/2p)\) and \(\operatorname{AE}(1/2m, 1/2p)\) at \(x=1/2\), \(t=1\) of the θ-methods for (31) and (32), together with the ratio of \(\operatorname{AE}(1/m,1/p)\) to \(\operatorname{AE}(1/4m,1/2p)\) in Tables 1 and 3 and the ratio of \(\operatorname{AE}(1/m,1/p)\) to \(\operatorname{AE}(1/2m,1/2p)\) in Tables 2 and 4. For \(\theta=0\) the error is \(O(\Delta t+\Delta x^{2})\), so dividing Δt by four and Δx by two should reduce the error by a factor of about four; for \(\theta=1/2\) the error is \(O(\Delta t^{2}+\Delta x^{2})\), so halving both stepsizes should have the same effect. The ratios close to four in the tables show that the numerical methods preserve their orders of convergence.
Table 1
Errors of (31) with \(\theta= 0\)

m      p      AE(1/m,1/p)   AE(1/4m,1/2p)   AE(1/m,1/p)/AE(1/4m,1/2p)
2^10   2^3    9.9920e-006   2.4180e-006     4.1323
2^12   2^4    2.4180e-006   5.9963e-007     4.0325
2^14   2^5    5.9963e-007   1.4960e-007     4.0082
2^16   2^6    1.4960e-007   3.7382e-008     4.0019

Table 2
Errors of (31) with \(\theta= 1/2\)

m      p      AE(1/m,1/p)   AE(1/2m,1/2p)   AE(1/m,1/p)/AE(1/2m,1/2p)
2^10   2^4    3.8741e-006   9.5800e-007     4.0439
2^11   2^5    9.5800e-007   2.3885e-007     4.0109
2^12   2^6    2.3885e-007   5.9671e-008     4.0028
2^13   2^7    5.9671e-008   1.4915e-008     4.0007

Table 3
Errors of (32) with \(\theta= 0\)

m      p      AE(1/m,1/p)   AE(1/4m,1/2p)   AE(1/m,1/p)/AE(1/4m,1/2p)
2^10   2^3    8.2800e-002   2.0000e-002     4.1400
2^12   2^4    2.0000e-002   5.0000e-003     4.0000
2^14   2^5    5.0000e-003   1.2000e-003     4.1667
2^16   2^6    1.2000e-003   3.0971e-004     3.8746

Table 4
Errors of (32) with \(\theta= 1/2\)

m      p      AE(1/m,1/p)   AE(1/2m,1/2p)   AE(1/m,1/p)/AE(1/2m,1/2p)
2^10   2^4    3.2100e-002   7.9000e-003     4.0633
2^11   2^5    7.9000e-003   2.0000e-003     3.9500
2^12   2^6    2.0000e-003   4.9437e-004     4.0456
2^13   2^7    4.9437e-004   1.2357e-004     4.0007

In Figures 1–4 we plot the numerical solutions obtained by the θ-methods. It is easy to see that the numerical solutions are asymptotically stable. In Figures 5 and 6 we plot the errors of the numerical solutions with \(\theta=1\). It can be seen that the numerical method is of high accuracy.
Figure 1
The numerical solution of (31) with \(\theta= 0\), \(m = 6400\), \(p = 20\) and \(r = 1/16\)

Figure 2
The numerical solution of (31) with \(\theta= 0.5\), \(m = 128\), \(p = 16\) and \(r = 2\)

Figure 3
The numerical solution of (32) with \(\theta= 0\), \(m = 6400\), \(p = 20\) and \(r = 1/16\)

Figure 4
The numerical solution of (32) with \(\theta= 0.5\), \(m = 128\), \(p = 16\) and \(r = 2\)

Figure 5
Errors of (31) with \(\theta= 1\), \(m = 1024\), \(p = 32\) and \(r = 1\)

Figure 6
Errors of (32) with \(\theta= 1\), \(m = 1024\), \(p = 32\) and \(r = 1\)

Declarations

Acknowledgements

The author is grateful to the reviewers for their careful reading and useful comments. This work is supported by the National Natural Science Foundation of China (No. 11201084) and Natural Science Foundation of Guangdong Province (No. 2017A030313031).

Authors’ contributions

The author read and approved the final manuscript.

Competing interests

The author declares that he has no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Applied Mathematics, Guangdong University of Technology, Guangzhou, China

References

  1. Cavalli, F., Naimzada, A.: A multiscale time model with piecewise constant argument for a boundedly rational monopolist. J. Differ. Equ. Appl. 22, 1480–1489 (2016)
  2. Dai, L., Fan, L.: Analytical and numerical approaches to characteristics of linear and nonlinear vibratory systems under piecewise discontinuous disturbances. Commun. Nonlinear Sci. Numer. Simul. 9, 417–429 (2004)
  3. Dai, L., Singh, M.C.: On oscillatory motion of spring-mass systems subjected to piecewise constant forces. J. Sound Vib. 173, 217–232 (1994)
  4. Gurcan, F., Bozkurt, F.: Global stability in a population model with piecewise constant arguments. J. Math. Anal. Appl. 360, 334–342 (2009)
  5. Wiener, J., Lakshmikantham, V.: A damped oscillator with piecewise constant time delay. Nonlinear Stud. 1, 78–84 (2000)
  6. Cooke, K.L., Wiener, J.: Retarded differential equations with piecewise constant delays. J. Math. Anal. Appl. 99, 265–297 (1984)
  7. Shah, S.M., Wiener, J.: Advanced differential equations with piecewise constant argument deviations. Int. J. Math. Math. Sci. 6, 671–703 (1983)
  8. Wiener, J.: Differential equations with piecewise constant delays. In: Lakshmikantham, V. (ed.) Trends in the Theory and Practice of Nonlinear Differential Equations, pp. 547–580. Dekker, New York (1983)
  9. Akhmet, M.U., Arugǎslan, D., Yılmaz, E.: Stability in cellular neural networks with a piecewise constant argument. J. Comput. Appl. Math. 233, 2365–2373 (2010)
  10. Bereketoglu, H., Seyhan, G., Ogun, A.: Advanced impulsive differential equations with piecewise constant arguments. Math. Model. Anal. 15, 175–187 (2010)
  11. Karakoc, F.: Asymptotic behaviour of a population model with piecewise constant argument. Appl. Math. Lett. 70, 7–13 (2017)
  12. Muroya, Y.: New contractivity condition in a population model with piecewise constant arguments. J. Math. Anal. Appl. 346, 65–81 (2008)
  13. Pinto, M.: Asymptotic equivalence of nonlinear and quasi linear differential equations with piecewise constant arguments. Math. Comput. Model. 49, 1750–1758 (2009)
  14. Wiener, J.: Generalized Solutions of Functional Differential Equations. World Scientific, Singapore (1993)
  15. Berezansky, L., Braverman, E.: Stability conditions for scalar delay differential equations with a non-delay term. Appl. Math. Comput. 250, 157–164 (2015)
  16. Dimbour, W.: Almost automorphic solutions for differential equations with piecewise constant argument in a Banach space. Nonlinear Anal. 74, 2351–2357 (2011)
  17. El Raheem, Z.F., Salman, S.M.: On a discretization process of fractional-order logistic differential equation. J. Egypt. Math. Soc. 22, 407–412 (2014)
  18. Muminov, M.I.: On the method of finding periodic solutions of second-order neutral differential equations with piecewise constant arguments. Adv. Differ. Equ. 2017, 336 (2017)
  19. Liu, M.Z., Song, M.H., Yang, Z.W.: Stability of Runge–Kutta methods in the numerical solution of equation \(u'(t)=au(t)+a_{0}u([t])\). J. Comput. Appl. Math. 166, 361–370 (2004)
  20. Li, C., Zhang, C.J.: Block boundary value methods applied to functional differential equations with piecewise continuous arguments. Appl. Numer. Math. 115, 214–224 (2017)
  21. Milosevic, M.: The Euler–Maruyama approximation of solutions to stochastic differential equations with piecewise constant arguments. J. Comput. Appl. Math. 298, 1–12 (2016)
  22. Song, M.H., Liu, X.: The improved linear multistep methods for differential equations with piecewise continuous arguments. Appl. Math. Comput. 217, 4002–4009 (2010)
  23. Wang, W.S., Li, S.F.: Dissipativity of Runge–Kutta methods for neutral delay differential equations with piecewise constant delay. Appl. Math. Lett. 21, 983–991 (2008)
  24. Zhang, G.L.: Stability of Runge–Kutta methods for linear impulsive delay differential equations with piecewise constant arguments. J. Comput. Appl. Math. 297, 41–50 (2016)
  25. Liang, H., Liu, M.Z., Lv, W.J.: Stability of θ-schemes in the numerical solution of a partial differential equation with piecewise continuous arguments. Appl. Math. Lett. 23, 198–206 (2010)
  26. Liang, H., Shi, D.Y., Lv, W.J.: Convergence and asymptotic stability of Galerkin methods for a partial differential equation with piecewise constant argument. Appl. Math. Comput. 217, 854–860 (2010)
  27. Wang, Q., Wen, J.C.: Analytical and numerical stability of partial differential equations with piecewise constant arguments. Numer. Methods Partial Differ. Equ. 30, 1–16 (2014)
  28. Wiener, J., Debnath, L.: A wave equation with discontinuous time delay. Int. J. Math. Math. Sci. 15, 781–788 (1992)
  29. Song, M.H., Liu, M.Z.: Numerical stability and oscillations of the Runge–Kutta methods for the differential equations with piecewise continuous arguments of alternately retarded and advanced type. J. Inequal. Appl. 2012, 290 (2012)
  30. Blanco-Cocom, L., Àvila-Vales, E.: Convergence and stability analysis of the θ-method for delayed diffusion mathematical models. Appl. Math. Comput. 231, 16–25 (2014)
  31. Zhang, Q.F., Chen, M.Z., Xu, Y.H., Xu, D.H.: Compact θ-method for the generalized delay diffusion equation. Appl. Math. Comput. 316, 357–369 (2018)

Copyright

© The Author(s) 2018
