Open Access

Semi-nonoscillation intervals in the analysis of sign constancy of Green’s functions of Dirichlet, Neumann and focal impulsive problems

Advances in Difference Equations 2017, 2017:81

https://doi.org/10.1186/s13662-017-1134-1

Received: 1 December 2016

Accepted: 9 March 2017

Published: 20 March 2017

Abstract

We consider the following second order differential equation with delay:
$$\textstyle\begin{cases} (Lx)(t)\equiv{x''(t)+\sum_{j=1}^{p} {a_{j}(t)x'(t-\tau_{j}(t))}+\sum_{j=1}^{p} {b_{j}(t)x(t-\theta_{j}(t))}}=f(t), & t\in[0,\omega], \\ x(t_{k})=\gamma_{k}x(t_{k}-0), x'(t_{k})=\delta_{k}x'(t_{k}-0), & k=1,2,\ldots,r. \end{cases} $$
In this paper we use semi-nonoscillation intervals and results on focal problems to analyze the sign constancy of Green’s functions of Dirichlet, Neumann and focal boundary value problems for this equation.

Keywords

impulsive equations; Green’s functions; positivity/negativity of Green’s functions; boundary value problem; second order

MSC

34K10; 34B37; 34A40; 34A37; 34K48

1 Introduction

Impulsive equations attract the attention of many recognized mathematicians. See, for example, the books [1–6]. The positivity of solutions to the Dirichlet problem was studied in [7]. A generalized Dirichlet problem was considered in [7–9]. Multipoint problems and problems with integral boundary conditions were considered in [10–15]. The Dirichlet problem for impulsive equations with impulses at variable moments was studied in [16]. All these works considered impulsive ordinary differential equations.

Let us assume that all trajectories of solutions to a non-impulsive ordinary differential equation are known. In this case, the impulses only determine which trajectory is chosen between the points of impulses: between \(t_{i}\) and \(t_{i+1}\) we stay on the trajectory of a corresponding solution of the non-impulsive equation. For an impulsive equation with delay this is no longer true. That is why the properties of delay impulsive equations can be quite different.

There are only a few results on the positivity of solutions to impulsive differential equations with delay. Note the results of [17–19] on the positivity of Green’s functions for boundary value problems for first order delay impulsive equations. Nonoscillation of second order delay impulsive differential equations was studied in [20]. Sturmian comparison theory for impulsive second order delay equations was developed in [21]. The positivity of Green’s functions for the nth order impulsive delay differential equation was considered in [22]. The idea of construction of Green’s functions for second order impulsive differential equations was first proposed in [23]. The use of Green’s functions for auxiliary impulsive problems in the study of sign constancy of Green’s functions of delay impulsive differential equations was proposed in [24], where a one-point problem was studied. Note also paper [25] on focal problems.

In this paper we use the results for one-point and focal problems in order to obtain results on the sign constancy for Green’s functions of two-point boundary value problems.

Let us consider the following impulsive equations:
$$\begin{aligned} &(Lx) (t)\equiv{x''(t)+\sum _{j=1}^{p} {a_{j}(t)x' \bigl(t- \tau_{j}(t) \bigr)}+\sum_{j=1}^{p} {b_{j}(t)x \bigl(t-\theta_{j}(t) \bigr)}}=f(t),\quad t\in[0, \omega] \end{aligned}$$
(1.1)
$$\begin{aligned} &x(t_{k})=\gamma_{k}x(t_{k}-0), \qquad x'(t_{k})=\delta_{k}x'(t_{k}-0),\quad k=1,2,\ldots,r, \end{aligned}$$
(1.2)
$$\begin{aligned} &0=t_{0}< t_{1}< t_{2}< \cdots< t_{r}< t_{r+1}= \omega, \\ &x(\zeta)=0,\quad \zeta< 0, \end{aligned}$$
(1.3)
where \(f, a_{j}, b_{j}:[0,\omega]\rightarrow{\mathbb{R}}\) are summable functions and \(\tau_{j}, \theta_{j}:[0,\omega]\rightarrow{[0,+\infty )}\) are measurable functions for \(j=1,2,\ldots,p\). Here p and r are natural numbers, and \(\gamma_{k}\), \(\delta_{k}\) (\(k=1,\ldots,r\)) are positive real numbers.

Let D be the space of functions \(x:[0,\omega]\rightarrow{\mathbb {R}}\) such that the derivative \(x'(t)\) is absolutely continuous on every interval \([t_{i},t_{i+1})\), \(i=0,\ldots,r\), \(x''\in {L_{\infty}}\), the finite limits \(x(t_{i}-0)= \lim_{t\rightarrow{t_{i}^{-}}} x(t)\) and \(x'(t_{i}-0)= \lim_{t\rightarrow {t_{i}^{-}}} x'(t)\) exist, and condition (1.2) is satisfied at the points \(t_{i}\) (\(i=1,\ldots,r\)). By a solution x we understand a function \(x\in{D}\) satisfying (1.1)-(1.3).

Definition 1.1

We call \([0,\omega]\) a semi-nonoscillation interval of \((Lx)(t)=0\) if every nontrivial solution whose derivative vanishes at some point of this interval has no zero on this interval.
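For example, for the equation \(x''(t)=0\) every interval \([0,\omega]\) is a semi-nonoscillation interval: a nontrivial solution \(x(t)=c_{1}+c_{2}t\) whose derivative vanishes at some point is a nonzero constant and therefore has no zeros.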

The influence of nonoscillation on sign properties of Green’s functions in the case of nth order differential equations was established in the well-known papers [26, 27]. An extension of these results to delay differential equations was obtained in [28, 29]. The importance of a semi-nonoscillation interval in the case of non-impulsive delay differential equations was first noted in [30]. In this paper we develop the use of semi-nonoscillation intervals for impulsive delay differential equations.

2 Construction of Green’s functions

For equation (1.1) we consider the following variants of boundary conditions:
$$\begin{aligned} &x(\omega)=0,\qquad x'(\omega)=0, \end{aligned}$$
(2.1)
$$\begin{aligned} &x(0)=0, \qquad x'(\omega)=0, \end{aligned}$$
(2.2)
$$\begin{aligned} &x'(0)=0,\qquad x(\omega)=0, \end{aligned}$$
(2.3)
$$\begin{aligned} &x(0)=0,\qquad x(\omega)=0, \end{aligned}$$
(2.4)
$$\begin{aligned} &x'(0)=0,\qquad x'(\omega)=0. \end{aligned}$$
(2.5)
Boundary conditions (2.1), (2.2) and (2.3) are of focal type, (2.4) is the Dirichlet condition, and (2.5) is the Neumann condition.

We denote by \(G_{i}(t,s)\) Green’s function of problem (1.1)-(1.3), (2.i) respectively.

It is known from the solution representation formula for systems of delay impulsive equations (see [17] and [25]) that the general solution of (1.1)-(1.3) can be represented in the form
$$ x(t)= v_{1}(t)x(0)+C(t,0)x'(0) + \int_{0}^{t} {C(t,s)f(s)\,ds,} $$
(2.6)
where \(C(t,s)\) is the Cauchy function and \(v_{1}(t)\) is the solution of the semi-homogeneous problem
$$ \textstyle\begin{cases} (Lx)(t)=0,\quad t\in{[0,\omega]}, \\ x(0)=1, \qquad x'(0)=0. \end{cases} $$
(2.7)
\(C(t,s)\), as a function of t, for every fixed s, satisfies the equation
$$\begin{aligned} &x''(t)+\sum _{j=1}^{p} {a_{j}(t)x' \bigl(t- \tau_{j}(t) \bigr)}+\sum_{j=1}^{p} {b_{j}(t)x \bigl(t-\theta_{j}(t) \bigr)}=0,\quad t\in{[s, \omega]}, \end{aligned}$$
(2.8)
$$\begin{aligned} &x(t_{k})=\gamma_{k}x(t_{k}-0),\qquad x'(t_{k})=\delta_{k}x'(t_{k}-0),\quad k=i_{s}+1,\ldots,r, \end{aligned}$$
(2.9)
$$\begin{aligned} &t_{i_{s}}< s< t_{i_{s}+1}< \cdots< t_{r}< t_{r+1}= \omega, \\ & x(\zeta)=x'(\zeta)=0, \quad \zeta< s, \end{aligned}$$
(2.10)
and the initial conditions \(C(s,s)=0, \frac{\partial}{\partial t}C(s,s)=1\). Note that \(C(t,s)=0\) for \(t< s\).
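To illustrate the Cauchy function in the impulsive case, consider, for example, the non-delay equation \(x''(t)=0\) with a single impulse point \(t_{1}\in(s,\omega)\). Then
$$ C(t,s)= \textstyle\begin{cases} t-s, & s\leq{t}< t_{1}, \\ \gamma_{1}(t_{1}-s)+\delta_{1}(t-t_{1}), & t_{1}\leq{t}\leq\omega, \end{cases} $$
which satisfies (2.8)-(2.10), the jump conditions at \(t_{1}\) and the initial conditions \(C(s,s)=0, \frac{\partial}{\partial t}C(s,s)=1\).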
Using this general representation, we can obtain the following formulas of Green’s functions:
$$ G_{1}(t,s)=C(t,s)+\frac{h(t,s)}{v_{1}(\omega)C'_{t}(\omega,0)-v'_{1}(\omega )C(\omega,0)}, $$
(2.11)
where
$$\begin{aligned} & h(t,s)=C'_{t}(\omega,s) \bigl[v_{1}(t)C(\omega,0)-v_{1}(\omega )C(t,0) \bigr] \\ &\phantom{ h(t,s)=}{}+C(\omega,s) \bigl[v'_{1}(\omega)C(t,0)-v_{1}(t)C'_{t}( \omega,0) \bigr], \\ & G_{2}(t,s)=C(t,s)-C(t,0)\frac{C'_{t}(\omega,s)}{C'_{t}(\omega,0)}, \end{aligned}$$
(2.12)
$$\begin{aligned} & G_{3}(t,s)=C(t,s)-C(\omega,s)\frac{v_{1}(t)}{v_{1}(\omega)}, \end{aligned}$$
(2.13)
$$\begin{aligned} & G_{4}(t,s)=C(t,s)-C(t,0)\frac{C(\omega,s)}{C(\omega,0)}. \end{aligned}$$
(2.14)
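Let us sketch, for instance, how formula (2.14) follows from representation (2.6); the other formulas are obtained analogously. The condition \(x(0)=0\) of (2.4) gives \(x(t)=C(t,0)x'(0)+\int_{0}^{t}C(t,s)f(s)\,ds\), and the condition \(x(\omega)=0\) then yields
$$ x'(0)=-\frac{1}{C(\omega,0)} \int_{0}^{\omega}C(\omega,s)f(s)\,ds. $$
Hence, using \(C(t,s)=0\) for \(t< s\), we obtain
$$ x(t)= \int_{0}^{\omega} \biggl[C(t,s)-C(t,0)\frac{C(\omega,s)}{C(\omega ,0)} \biggr]f(s)\,ds, $$
and the kernel of this integral is exactly \(G_{4}(t,s)\) in (2.14).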
Let us consider the following homogeneous equation:
$$ (Lx) (t)=0,\quad t\in{[0,\omega]}. $$
(2.15)

Lemma 2.1

If
  1. (1)

    \(b_{j}(t)\leq{0}, t\in{[0,\omega]}\);

     
  2. (2)
    the Cauchy function \(C_{1}(t,s)\) of the first order equation
    $$ \textstyle\begin{cases} y'(t)+\sum_{j=1}^{p}{a_{j}(t)y(t-\tau_{j}(t))}=0,\quad t\in{[0,\omega]}, \\ y(t_{k})=\delta_{k} y(t_{k}-0),\quad k=1,\ldots,r, \\ y(\zeta)=0,\quad \zeta< 0, \end{cases} $$
    (2.16)
    is positive for \(0\leq{s}\leq{t}\leq{\omega}\).
     

Then the Cauchy function \(C(t,s)\) of equation (1.1) and its derivative \(C'_{t}(t,s)\) are positive for \(0\leq{s}\leq{t}\leq {\omega}\).

Proof

It follows from the conditions \(C(s,s)=0, C'_{t}(s,s)=1\) that there exists \(\epsilon_{s}>0\) such that \(C(t,s)>0\) and \(C'_{t}(t,s)>0\) for \(t\in {(s,s+\epsilon_{s})}\). Let us suppose that there exists a point η such that \(C'_{t}(\eta,s)=0\) and \(C'_{t}(t,s)>0\) for \(t\in{[s,\eta)}\). It is clear that in this case \(x(t)=C(t,s)\) satisfies the equation
$$ x''(t)+\sum _{j=1}^{p}{a_{j}(t)x' \bigl(t- \tau_{j}(t) \bigr)}=\phi(t),\quad t\in{[s,\omega]}, $$
(2.17)
where \(\phi(t)=-\sum_{j=1}^{p}{b_{j}(t)x(t-\theta_{j}(t))}, t\in{[s,\eta ]}\). It follows from the nonpositivity of \(b_{j}(t)\) (\(j=1,\ldots,p\)) and the positivity of \(x(t)=C(t,s)\) that \(\phi(t)\geq{0}\) for \(t\in {[s,\eta]}\).
Let us denote \(y(t)=x'(t)\). Then we can write an equation for \(y(t)\) in the form
$$ \textstyle\begin{cases} y'(t)+\sum_{j=1}^{p}{a_{j}(t)y(t-\tau_{j}(t))}=\phi(t),\quad t\in{[s,\omega]}, \\ y(t_{k})=\delta_{k} y(t_{k}-0), \quad k=1,\ldots,r, \\ y(\zeta)=0, \quad \zeta< 0. \end{cases} $$
(2.18)
It is clear that \(y(s)=1\). The solution of (2.18) can be written
$$ y(t)= \int_{s}^{t}{C_{1}(t,\zeta)\phi(\zeta)\,d \zeta}+C_{1}(t,s), $$
(2.19)
where \(C_{1}(t,s)\) is the Cauchy function of (2.18). Now it is clear that
$$ y(\eta)= \int_{s}^{\eta}{C_{1}(\eta,\zeta)\phi( \zeta)\,d\zeta }+C_{1}(\eta,s)>0. $$
(2.20)
This contradicts the assumption \(C'_{t}(\eta,s)=y(\eta)=0\). Hence \(C'_{t}(t,s)>0\) for \(0\leq{s}\leq{t}\leq{\omega}\) and, since the \(\gamma_{k}\) are positive, also \(C(t,s)>0\) for \(0\leq{s}<{t}\leq{\omega}\).

Lemma 2.1 has been proven. □
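The assertion of Lemma 2.1 can also be illustrated numerically. The following sketch (not part of the original argument) integrates a test equation with one delay term of each kind by an explicit Euler scheme and checks that \(C(t,s)\) and \(C'_{t}(t,s)\) stay positive; all coefficients, delays and impulse data below are hypothetical values chosen only so that the conditions of the lemma hold.

```python
# Numerical illustration of Lemma 2.1 (a sketch; not part of the paper).
# Test equation: x''(t) + a*x'(t - tau) + b*x(t - theta) = 0 with b <= 0,
# impulses x(t_k) = gamma*x(t_k - 0), x'(t_k) = delta*x'(t_k - 0),
# zero prehistory, and initial data C(s, s) = 0, C'_t(s, s) = 1.
import numpy as np

a, b = 0.5, -1.0            # a_1 >= 0, b_1 <= 0 (hypothetical values)
tau, theta = 0.3, 0.2       # constant delays
gamma, delta = 0.7, 0.9     # positive impulse factors
t_impulses = [0.5, 1.2]     # impulse points
s, omega, h = 0.1, 2.0, 1e-4

n = int(round((omega - s) / h))
x = np.zeros(n + 1)         # approximation of C(t, s)
y = np.zeros(n + 1)         # approximation of C'_t(t, s)
y[0] = 1.0

def delayed(arr, i, lag):
    """Value of arr at time t_i - lag; zero prehistory for arguments < s."""
    j = i - int(round(lag / h))
    return arr[j] if j >= 0 else 0.0

impulse_idx = {int(round((tk - s) / h)) for tk in t_impulses}
for i in range(n):
    # explicit Euler step for x' = y, y' = -a*y(t - tau) - b*x(t - theta)
    x[i + 1] = x[i] + h * y[i]
    y[i + 1] = y[i] + h * (-a * delayed(y, i, tau) - b * delayed(x, i, theta))
    if (i + 1) in impulse_idx:      # impulse conditions (1.2)
        x[i + 1] *= gamma
        y[i + 1] *= delta

print("min of C(t,s)    on (s, omega]:", x[1:].min())   # expected to be > 0
print("min of C'_t(t,s) on (s, omega]:", y[1:].min())   # expected to be > 0
```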

Lemma 2.2

If the conditions (1) and (2) of Lemma 2.1 are fulfilled, then Green’s function \(G_{2}(t,s)\) of (1.1)-(1.3), (2.2) exists and there exists an interval \((0,\epsilon_{s})\) such that \(G_{2}(t,s)<0\) for \(t\in(0,\epsilon_{s})\).

Proof

According to Lemma 2.1, we have \(C'_{t}(t,s)>0\) for \(0\leq{s}\leq{t}\leq{\omega}\); in particular, \(C'_{t}(\omega,0)>0\). This implies that Green’s function \(G_{2}(t,s)\) of problem (1.1)-(1.3), (2.2), which is defined by (2.12), exists.

It is clear that
$$\begin{aligned} &G_{2}(0,s)=C(0,s)-C(0,0)\frac{C'_{t}(\omega,s)}{C'_{t}(\omega,0)}=0, \end{aligned}$$
(2.21)
$$\begin{aligned} & G'_{2t}(0,s)=-C'_{t}(0,0) \frac{C'_{t}(\omega,s)}{C'_{t}(\omega,0)}=-\frac {C'_{t}(\omega,s)}{C'_{t}(\omega,0)}< 0. \end{aligned}$$
(2.22)
It follows from (2.22) that there exists an interval \((0,\epsilon _{s})\) such that \(G_{2}(t,s)<0\) for \(t\in(0,\epsilon_{s})\).

Lemma 2.2 has been proven. □

Lemma 2.3

If the conditions (1) and (2) of Lemma 2.1 are fulfilled, then Green’s function \(G_{3}(t,s)\) of (1.1)-(1.3), (2.3) exists and there exists an interval \((0,\epsilon_{s})\) such that \(G_{3}(t,s)<0\) for \(t\in(0,\epsilon_{s})\).

Proof

It follows from the conditions \(v_{1}(0)=1, v'_{1}(0)=0\), where \(v_{1}(t)\) is the solution of problem (2.7), that there exists \(\epsilon>0\) such that \(v_{1}(t)>0\) for \(t\in{[0,\epsilon)}\). Let us suppose that there exists a point η such that \(v_{1}(\eta)=0\) and \(v_{1}(t)>0\) for \(t\in{[0,\eta)}\). It is clear that in this case \(x(t)=v_{1}(t)\) satisfies the equation
$$ x''(t)+\sum _{j=1}^{p}{a_{j}(t)x' \bigl(t- \tau_{j}(t) \bigr)}=\phi(t),\quad t\in{[0,\omega]}, $$
(2.23)
where \(\phi(t)=-\sum_{j=1}^{p}{b_{j}(t)x(t-\theta_{j}(t))}\). It follows from the nonpositivity of \(b_{j}(t)\) (\(j=1,\ldots,p\)) and the positivity of \(x(t)=v_{1}(t)\) on \([0,\eta)\) that \(\phi(t)\geq{0}\) for \(t\in {[0,\eta]}\).
Let us denote \(y(t)=x'(t)\). Then we can write an equation for \(y(t)\) in the form
$$ \textstyle\begin{cases} y'(t)+\sum_{j=1}^{p}{a_{j}(t)y(t-\tau_{j}(t))}=\phi(t),\quad t\in{[0,\omega]}, \\ y(t_{k})=\delta_{k} y(t_{k}-0),\quad k=1,\ldots,r, \\ y(\zeta)=0,\quad \zeta< 0. \end{cases} $$
(2.24)
It is clear that \(y(0)=0\). The solution of (2.24) can be written
$$ y(t)= \int_{0}^{t}{C_{1}(t,\zeta)\phi(\zeta)\,d \zeta}, $$
(2.25)
where \(C_{1}(t,s)\) is the Cauchy function of (2.24). Now, from the positivity of \(C_{1}(t,\zeta)\), it is clear that
$$ y(t)= \int_{0}^{t}{C_{1}(t,\zeta)\phi( \zeta)\,d\zeta}\geq{0},\quad t\in{[0,\eta]}. $$
(2.26)
It means that \(v'_{1}(t)=y(t)\geq{0}\) for \(t\in{[0,\eta]}\), so \(v_{1}\) does not decrease between the points of impulses, while at the points of impulses \(v_{1}(t_{k})=\gamma_{k}v_{1}(t_{k}-0)\) with \(\gamma_{k}>0\). Hence \(v_{1}(\eta)>0\), which contradicts the assumption \(v_{1}(\eta)=0\). Now it is clear that \(v_{1}(t)>0\) for \(t\in{[0,\omega]}\). It means that there is no nontrivial solution to the problem \((Lx)(t)=0, x'(0)=0, x(\omega)=0\). If there is no nontrivial solution of the problem, then Green’s function of (1.1)-(1.3), (2.3) exists and
$$ G_{3}(0,s)=C(0,s)-C(\omega,s)\frac{v_{1}(0)}{v_{1}(\omega)}=- \frac {C(\omega,s)}{v_{1}(\omega)}< 0. $$
(2.27)
Lemma 2.3 has been proven. □

Lemma 2.4

If the conditions (1) and (2) of Lemma 2.1 are fulfilled, then Green’s function \(G_{4}(t,s)\) of (1.1)-(1.3), (2.4) exists and there exists an interval \((0,\epsilon_{s})\) such that \(G_{4}(t,s)<0\) for \(t\in(0,\epsilon_{s})\).

Proof

Let us demonstrate that the problem \((Lx)(t)=0, x(0)=0, x(\omega)=0\) has only the trivial solution. If there exists a nontrivial solution of this problem, it is proportional to \(C(t,0)\). According to Lemma 2.1, \(C(t,0)>0\) for \(t\in{(0,\omega]}\). It means that \(x(\omega)\neq{0}\), which contradicts the assumption \(x(\omega)=0\).

Let us take a look at \(G_{4}(0,s)\) and \(G'_{4t}(0,s)\):
$$\begin{aligned} &G_{4}(0,s)=-C(0,0)\frac{C(\omega,s)}{C(\omega,0)}=0, \end{aligned}$$
(2.28)
$$\begin{aligned} &G'_{4t}(0,s)=-C'_{t}(0,0) \frac{C(\omega,s)}{C(\omega,0)}=-\frac {C(\omega,s)}{C(\omega,0)}< 0, \end{aligned}$$
(2.29)
since \(C(t,s)\) is positive. It means that there exists an interval \((0,\epsilon_{s})\) such that \(G_{4}(t,s)<0\) for \(t\in(0,\epsilon_{s})\).

Lemma 2.4 has been proven. □
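For illustration, in the simplest non-impulsive case \(x''(t)=0\) (so that \(C(t,s)=t-s\) for \(t\geq{s}\)) formula (2.14) gives the classical Dirichlet Green’s function
$$ G_{4}(t,s)= \textstyle\begin{cases} -\frac{t(\omega-s)}{\omega}, & 0\leq{t}\leq{s}\leq\omega, \\ -\frac{s(\omega-t)}{\omega}, & 0\leq{s}\leq{t}\leq\omega, \end{cases} $$
which is nonpositive, in accordance with Lemma 2.4 and the results of the next section.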

3 Sign constancy of Green’s functions

In this section we will prove the sign constancy of Green’s functions \(G_{4}(t,s)\) and \(G_{5}(t,s)\) using the results from [24] and [25] about the sign constancy of \(G_{1}(t,s), G_{2}(t,s)\) and \(G_{3}(t,s)\).

Theorem 3.1

Assume that the following conditions are fulfilled:
  1. (1)

    \(G_{1}^{\xi}(t,s)\geq{0}, t,s\in{[0,\xi]}\) for every \(0<\xi<\omega\).

     
  2. (2)

    \([0,\omega]\) is a semi-nonoscillation interval of \((Lx)(t)=0\).

     
  3. (3)

    \(b_{j}(t)\leq{0}, t\in{[0,\omega]}\).

     
  4. (4)

    The Cauchy function \(C_{1}(t,s)\) of the first order equation (2.16) is positive for \(0\leq{s}\leq{t}\leq{\omega}\).

     
Then \(G_{2}(t,s)\leq{0}\), \(G_{3}(t,s)\leq{0}\), \(G_{4}(t,s)\leq {0}\) for \(t,s\in{[0,\omega]}\) and under the additional condition \(\sum_{j=1}^{p}{b_{j}(t)\chi(t-\theta_{j}(t))}\not\equiv{0}, t\in {[0,\omega]}\), where
$$ \chi(t)= \textstyle\begin{cases} 1,& t\geq{0}, \\ 0,& t< 0, \end{cases} $$
(3.1)
we have also \(G_{5}(t,s)\leq{0}\) for \(t,s\in{[0,\omega]}\).

Proof

Let us start with problem (1.1)-(1.3), (2.2). According to Lemma 2.2, there exists a unique solution for every summable \(f(t)\). Let us assume that \(G_{2}(t,s)\) changes sign. It means that there exists a function \(f(t)\geq{0}\) such that the solution \(x(t)\) changes sign. Then there is a point \(0<\xi<\omega\) such that \(x(\xi)>0\) and \(x'(\xi)=0\) (see Figure 1). On \([0,\xi]\) the solution \(x(t)\) satisfies problem (1.1)-(1.3) with the conditions \(x(\xi)>0, x'(\xi)=0\). From condition (1), Green’s function \(G_{1}^{\xi}(t,s)\) of this problem is nonnegative, and from condition (2) the corresponding solution of the homogeneous equation with \(x(\xi)>0, x'(\xi)=0\) has no zeros and is therefore positive on \([0,\xi]\). Then \(x(t)\geq{0}\) for \(t\in{[0,\xi]}\). But, according to Lemma 2.2, \(x(t)<0\) for t close to 0. This contradiction demonstrates that the solution \(x(t)\) cannot change its sign for nonnegative \(f(t)\). This proves that \(G_{2}(t,s)\) should be nonpositive.
Figure 1. \(x(t)\).

Let us now consider problem (1.1)-(1.3), (2.3). According to Lemma 2.3, there exists a unique solution for every summable \(f(t)\). Let us assume that \(G_{3}(t,s)\) changes sign. It means that there exists a function \(f(t)\geq{0}\) such that the solution \(x(t)\) changes sign. Then there is a point \(0<\xi<\omega\) such that \(x(\xi)>0\) and \(x'(\xi)=0\) (see Figure 2). On \([0,\xi]\) the solution \(x(t)\) satisfies problem (1.1)-(1.3) with the conditions \(x(\xi)>0, x'(\xi)=0\). From condition (1), Green’s function \(G_{1}^{\xi}(t,s)\) of this problem is nonnegative, and from condition (2) the corresponding solution of the homogeneous equation with \(x(\xi)>0, x'(\xi)=0\) has no zeros and is therefore positive on \([0,\xi]\). Then \(x(t)\geq{0}\) for \(t\in{[0,\xi]}\). But, according to Lemma 2.3, \(x(t)<0\) for t close to 0. This contradiction demonstrates that the solution \(x(t)\) cannot change its sign for nonnegative \(f(t)\). This proves that \(G_{3}(t,s)\) should be nonpositive.
Figure 2. \(x(t)\).

Let us now consider problem (1.1)-(1.3), (2.4). According to Lemma 2.4, there exists a unique solution for every summable \(f(t)\). Let us assume that \(G_{4}(t,s)\) changes sign. It means that there exists a function \(f(t)\geq{0}\) such that the solution \(x(t)\) changes sign. Then there is a point \(0<\xi<\omega\) such that \(x(\xi)>0\) and \(x'(\xi)=0\) (see Figure 3). On \([0,\xi]\) the solution \(x(t)\) satisfies problem (1.1)-(1.3) with the conditions \(x(\xi)>0, x'(\xi)=0\). From condition (1), Green’s function \(G_{1}^{\xi}(t,s)\) of this problem is nonnegative, and from condition (2) the corresponding solution of the homogeneous equation with \(x(\xi)>0, x'(\xi)=0\) has no zeros and is therefore positive on \([0,\xi]\). Then \(x(t)\geq{0}\) for \(t\in{[0,\xi]}\). But, according to Lemma 2.4, \(x(t)<0\) for t close to 0. This contradiction demonstrates that the solution \(x(t)\) cannot change its sign for nonnegative \(f(t)\). This proves that \(G_{4}(t,s)\) should be nonpositive.
Figure 3. \(x(t)\).

Let us now consider problem (1.1)-(1.3), (2.5). The condition
$$ \sum_{j=1}^{p}{b_{j}(t)\chi \bigl(t-\theta_{j}(t) \bigr)}\not\equiv{0} $$
(3.2)
means that \(\sum_{j=1}^{p}{b_{j}(t)\chi(t-\theta_{j}(t))}<0\) on a set of positive measure. Let us prove the nonpositivity of Green’s function \(G_{5}(t,s)\) step by step.
Step 1. Let us suppose that there is a solution of \((Lx)(t)=f(t), t\in{[0,\omega]}\), \(x'(0)=x'(\omega)=0\) with nonnegative \(f(t)\geq{0}, f(t)\not\equiv{0}\) such that \(x(t)\geq {0}\). It is clear that this \(x(t)\) satisfies the equation
$$ x''(t)+\sum _{j=1}^{p}{a_{j}(t)x' \bigl(t- \tau_{j}(t) \bigr)}=\phi(t),\quad t\in{[0,\omega]}, $$
(3.3)
where \(\phi(t)=f(t)-\sum_{j=1}^{p}{b_{j}(t)x(t-\theta_{j}(t))}\). It is clear that \(\phi(t)\geq{0}\), and from the fact that \(\sum_{j=1}^{p}{b_{j}(t)\chi(t-\theta_{j}(t))}<0\) on a set of positive measure, we have \(\phi(t)>0\) on a set of positive measure.
Let us denote \(y(t)=x'(t)\). Then we can write an equation for \(y(t)\) in the form
$$ \textstyle\begin{cases} y'(t)+\sum_{j=1}^{p}{a_{j}(t)y(t-\tau_{j}(t))}=\phi(t),\quad t\in{[0,\omega]}, \\ y(t_{k})=\delta_{k} y(t_{k}-0),\quad k=1,\ldots,r, \\ y(\zeta)=0,\quad \zeta< 0. \end{cases} $$
(3.4)
It is clear that \(y(0)=0\). The solution of (3.4) can be written
$$ y(t)= \int_{0}^{t}{C_{1}(t,s)\phi(s)\,ds}, $$
(3.5)
where \(C_{1}(t,s)\) is the Cauchy function of (3.4). It follows from [17] that \(C_{1}(t,s)>0\). Now it is clear that
$$ y(\omega)= \int_{0}^{\omega}{C_{1}(\omega,s)\phi(s)\,ds}>0, $$
(3.6)
and \(x'(\omega)=y(\omega)>0\). This demonstrates that the case \(x(t)>0\) for \(f(t)\geq{0}, f(t)\not\equiv{0}\) is impossible.

Step 2. Let us assume that there exists a solution \(x(t)\) changing sign on \([0,\omega]\) for nonnegative \(f(t)\). We have to consider two cases: the solution \(x(t)\) changes sign for the first time from positive to negative, or it changes sign for the first time from negative to positive.

In the first case, we have a point η such that \(x(\eta)=0\). It means that our function \(x(t)\) satisfies the problem \((Lx)(t)=f(t), x'(0)=0, x(\eta)=0\). We have proven above that \(G_{3}(t,s)\leq{0}\) and this excludes the possibility of \(x(t)>0\) for \(t\in{[0,\eta)}\).

In the second case, we have a point ξ such that
$$ \textstyle\begin{cases} (Lx)(t)=f(t),\quad t\in{[0,\omega]}, \\ x(\xi)=\alpha>0,\quad x'(\xi)=0, \end{cases} $$
(3.7)
and the condition \(G_{1}^{\xi}(t,s)\geq{0}\) implies that \(x(t)>0\). We proved in Step 1 that the situation \(x(t)>0\) is impossible. Then \(G_{5}(t,s)\leq{0}\) for \(t,s\in{[0,\omega]}\).

Theorem 3.1 has been proven. □

Theorem 3.2

Assume that \(a_{j}\geq{0}, b_{j}\leq{0}\) for \(j=1,\ldots,p\), \(0<\gamma _{k}\leq{1}, 0<\delta_{k}\leq{1}\) for \(k=1,\ldots,r\), and there exist a function \(v\in{D}\) and \(\epsilon>0\) such that
$$ (Lv) (t)\geq{\epsilon}>0,\qquad v(t)>0,\qquad v'(t)< 0,\qquad v''(t)>0,\quad t\in{(0,\omega)}, $$
(3.8)
where the differential operator L is defined by (1.1), and let \([0,\omega]\) be a semi-nonoscillation interval of \((Lx)(t)=0\). Then Green’s functions \(G_{2}(t,s)\), \(G_{3}(t,s)\), \(G_{4}(t,s)\) satisfy the inequalities \(G_{2}(t,s)\leq{0},G_{3}(t,s)\leq{0},G_{4}(t,s)\leq{0}, (t,s)\in{[0,\omega]\times{[0,\omega]}}\). If, in addition, \(\sum_{j=1}^{p}{b_{j}(t)\chi(t-\theta_{j}(t))}\not\equiv{0}, t\in{[0,\omega ]}\), then \(G_{5}(t,s)\leq{0}, (t,s)\in{[0,\omega]\times{[0,\omega]}}\).

Proof

It is clear that all the conditions of assertion (1) of Theorem 4.1 from [24] are fulfilled. According to this theorem, \(G_{1}^{\xi}(t,s)\geq{0}\) for \(t,s\in{[0,\xi]}\) and every \(0<\xi<\omega\). Using Theorem 3.1 above, we obtain that \(G_{2}(t,s)\leq{0}\), \(G_{3}(t,s)\leq{0}\), \(G_{4}(t,s)\leq{0}\) for \(t,s\in {[0,\omega]}\). If, in addition, \(\sum_{j=1}^{p}{b_{j}(t)\chi(t-\theta _{j}(t))}\not\equiv{0}, t\in{[0,\omega]}\), then it follows that \(G_{5}(t,s)\leq{0}\) for \(t,s\in{[0,\omega]}\).

Theorem 3.2 has been proven. □

Theorem 3.3

Assume that the following conditions are fulfilled:
  1. (1)

    \(G_{2}^{\xi}(t,s)\leq{0}, t,s\in{[0,\xi]}\) for every \(0<\xi<\omega\).

     
  2. (2)

    \([0,\omega]\) is a semi-nonoscillation interval of \((Lx)(t)=0\).

     
Then \(G_{4}(t,s)\leq{0}\) for \(t,s\in{[0,\omega]}\).

Proof

Let us consider problem (1.1)-(1.3), (2.4). According to Lemma 2.4, there exists a unique solution for every summable \(f(t)\). Let us assume that \(G_{4}(t,s)\) changes sign. It means that there exists a function \(f(t)\geq{0}\) such that the solution \(x(t)\) changes sign; according to Lemma 2.4, it changes sign from negative to positive. Then there is a point \(0<\xi<\omega\) such that \(x(\xi)=\alpha>0\) and \(x'(\xi)=0\) (see Figure 1). From condition (1), we know that Green’s function \(G_{2}^{\xi}(t,s)\) of this problem is nonpositive. From condition (2), it follows that the solution of the problem \((Lx)(t)=0, x(\xi )=\alpha>0, x'(\xi)=0\) is positive for \(t\in{(0,\xi]}\). Then \(x(t)\leq{0}\) for \(t\in{[0,\xi]}\). This contradicts Lemma 2.4, according to which \(x(t)\) can change its sign only from negative to positive for nonnegative \(f(t)\). Then \(G_{4}(t,s)\) should be nonpositive.

Theorem 3.3 has been proven. □

Theorem 3.4

Assume that \(a_{j}\geq{0}, b_{j}\geq{0}\) for \(j=1,\ldots,p\), \(1\leq {\gamma_{k}}, 1\leq{\delta_{k}}\) for \(k=1,\ldots,r\), and there exist a function \(v\in{D}\) and \(\epsilon>0\) such that
$$ (Lv) (t)\leq{-\epsilon}< 0,\qquad v(t)>0,\qquad v'(t)>0,\qquad v''(t)< 0,\quad t\in{(0,\omega)}, $$
(3.9)
where the differential operator L is defined by (1.1), and let \([0,\omega]\) be a semi-nonoscillation interval of \((Lx)(t)=0\). Then Green’s function \(G_{4}(t,s)\) of (1.1)-(1.3), (2.4) satisfies the inequality \(G_{4}(t,s)\leq{0}, (t,s)\in{[0,\omega ]\times{[0,\omega]}}\).

Proof

Looking at Theorem 5.1 from [25], we can see that all of its conditions are fulfilled. Then \(G_{2}^{\xi}(t,s)\leq{0}\) for \(t,s\in{[0,\xi]}\) and every \(0<\xi<\omega\). Using Theorem 3.3 above, we obtain that \(G_{4}(t,s)\leq{0}\) for \(t,s\in{[0,\omega]}\).

Theorem 3.4 has been proven. □

Example 3.5

Let us now find an example of a function v satisfying the condition of Theorem 3.2. To this end, let us start with \(v(t)=e^{-\alpha{t}}\) in the interval \(t\in{[0,t_{1})}\). The function v in the rest of the intervals will be of the form
$$ v(t)=c_{i}e^{-\alpha{a_{i}t}},\quad t\in{[t_{i},t_{i+1})}, $$
(3.10)
where
$$ \textstyle\begin{cases} v(t_{i})=\gamma_{i}v(t_{i}-0), \\ v'(t_{i})=\delta_{i}v'(t_{i}-0). \end{cases} $$
(3.11)
After some calculations, we get that v is of the form
$$ \textstyle\begin{cases} v(t)=e^{-\alpha{t}},\quad t\in{[0,t_{1})}, \\ v(t)=\prod_{j=1}^{i}{\gamma_{j}}e^{-\alpha\frac{\prod _{j=1}^{i}{\delta_{j}}}{\prod_{j=1}^{i}{\gamma_{j}}}t},\quad t\in{[t_{i},t_{i+1})}. \end{cases} $$
(3.12)
For the next theorems, we use the following notation:
$$\begin{aligned} &E=\min_{i=1,2,\ldots,r}{\frac{\prod_{j=1}^{i}{\delta_{j}}}{\prod_{j=1}^{i}{\gamma_{j}}}}, \end{aligned}$$
(3.13)
$$\begin{aligned} &\Theta=\max_{t\in{[0,\omega]}}{\max_{j=1,2,\ldots,p}{ \theta_{j}(t)}}, \end{aligned}$$
(3.14)
$$\begin{aligned} &\mathrm {T}=\max_{t\in{[0,\omega]}}{\max_{j=1,2,\ldots,p}{ \tau_{j}(t)}}, \end{aligned}$$
(3.15)
$$\begin{aligned} & \Omega=\max{\{\mathrm {T},\Theta\}}. \end{aligned}$$
(3.16)

Theorem 3.6

If \(a_{j}\geq{0}, b_{j}\leq{0}\) for \(j=1,\ldots,p\), \(0<\gamma_{i}<\delta_{i}\leq{1}\) for \(i=1,\ldots,r\), and
$$ \frac{4}{\Omega^{2}}e^{-2}>\frac{2}{\Omega} \mathop{\operatorname{esssup}}_{t\in [0,\omega]}\sum_{j=1}^{p}{ \bigl\vert a_{j}(t) \bigr\vert }+ \mathop{\operatorname{esssup}}_{t\in[0,\omega]} \sum_{j=1}^{p}{ \bigl\vert b_{j}(t) \bigr\vert }, $$
(3.17)
then Green’s functions \(G_{2}(t,s)\), \(G_{3}(t,s)\) and \(G_{4}(t,s)\) are nonpositive.

Proof

Let us substitute this \(v(t)\), defined by (3.12), into the conditions of Theorem 3.2:
$$ \begin{aligned} &\alpha^{2}\frac{ (\prod_{j=1}^{i}{\delta_{j}} )^{2}}{ (\prod_{j=1}^{i}{\gamma_{j}} )^{2}}-\alpha \frac{\prod_{j=1}^{i}{\delta_{j}}}{\prod_{j=1}^{i}{\gamma_{j}}}\mathop{\operatorname{esssup}}_{t\in[0,\omega]}\sum _{j=1}^{p}{ \bigl\vert a_{j}(t) \bigr\vert e^{\alpha \frac{\prod_{j=1}^{i}{\delta_{j}}}{\prod_{j=1}^{i}{\gamma_{j}}}\tau _{j}(t)}} \\ &\quad{}-\mathop{\operatorname{esssup}}_{t\in[0,\omega]}\sum_{j=1}^{p}{ \bigl\vert b_{j}(t) \bigr\vert e^{\alpha\frac{\prod_{j=1}^{i}{\delta_{j}}}{\prod_{j=1}^{i}{\gamma _{j}}}\theta_{j}(t)}}>0. \end{aligned} $$
(3.18)
Thus
$$ \alpha^{2} E^{2} e^{-\alpha{E}\Omega}>\alpha{E} \mathop{\operatorname{esssup}}_{t\in [0,\omega]}\sum_{j=1}^{p}{ \bigl\vert a_{j}(t) \bigr\vert }+ \mathop{\operatorname{esssup}}_{t\in[0,\omega]} \sum_{j=1}^{p}{ \bigl\vert b_{j}(t) \bigr\vert }, $$
(3.19)
where Ω is defined by (3.16). Denoting \(F(\alpha)=\alpha^{2} E^{2} e^{-\alpha{E}\Omega}\), we can find its maximum using the derivative
$$ F'(\alpha)= \bigl(2\alpha e^{-\alpha{E}\Omega}-\alpha^{2} E \Omega e^{-\alpha{E}\Omega} \bigr)E^{2}=\alpha(2-E\Omega\alpha) e^{-\alpha {E}\Omega} E^{2}, $$
(3.20)
and we get that \(\alpha=\frac{2}{E\Omega}\) is a point of maximum. Substituting this α into (3.19), we see that (3.17) implies, according to Theorem 3.2, the nonpositivity of \(G_{2}(t,s)\), \(G_{3}(t,s)\) and \(G_{4}(t,s)\).

Theorem 3.6 has been proven. □
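As a practical matter, condition (3.17) is easy to check numerically. The following small sketch (with hypothetical constant delays and coefficient bounds) evaluates both sides of (3.17):

```python
# A check of inequality (3.17) of Theorem 3.6; all numbers are hypothetical.
import math

tau = [0.1]        # delays tau_j(t), taken constant here
theta = [0.2]      # delays theta_j(t), taken constant here
sup_a = 0.5        # esssup over [0, omega] of sum_j |a_j(t)|
sup_b = 2.0        # esssup over [0, omega] of sum_j |b_j(t)|

Omega = max(max(tau), max(theta))          # Omega = max{T, Theta}, see (3.13)-(3.16)
lhs = 4.0 / Omega**2 * math.exp(-2.0)      # left-hand side of (3.17)
rhs = 2.0 / Omega * sup_a + sup_b          # right-hand side of (3.17)
print(f"lhs = {lhs:.3f}, rhs = {rhs:.3f}, (3.17) holds: {lhs > rhs}")
```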

In the particular case \(a_{j}(t)=0, j=1,\ldots,p\), we have the following corollary.

Corollary 3.7

If \(b_{j}\leq{0}\) for \(j=1,\ldots,p\), \(0<\gamma_{i}<\delta_{i}\leq{1}\) for \(i=1,\ldots,r\), and
$$ \frac{4}{\Omega^{2}}e^{-2}>\mathop{\operatorname{esssup}}_{t\in[0,\omega]}\sum _{j=1}^{p}{ \bigl\vert b_{j}(t) \bigr\vert }, $$
(3.21)
then Green’s functions \(G_{2}(t,s)\), \(G_{3}(t,s)\) and \(G_{4}(t,s)\) are nonpositive.

Theorem 3.8

If \(a_{j}\geq{0}, b_{j}\leq{0}\) for \(j=1,\ldots,p\), \(0<\delta_{i}\leq{\gamma_{i}}\leq{1}\) for \(i=1,\ldots,r\), and
$$ \frac{4E^{2}}{\Omega^{2}}e^{-2}>\frac{2E}{\Omega} \mathop{\operatorname{esssup}}_{t\in[0,\omega]}\sum_{j=1}^{p}{ \bigl\vert a_{j}(t) \bigr\vert }+\mathop{\operatorname{esssup}}_{t\in[0,\omega]}\sum _{j=1}^{p}{ \bigl\vert b_{j}(t) \bigr\vert }, $$
(3.22)
then Green’s functions \(G_{2}(t,s)\), \(G_{3}(t,s)\) and \(G_{4}(t,s)\) are nonpositive.

Proof

Let us substitute this \(v(t)\), defined by (3.12), into the conditions of Theorem 3.2:
$$ \begin{aligned}& \alpha^{2}\frac{ (\prod_{j=1}^{i}{\delta_{j}} )^{2}}{ (\prod_{j=1}^{i}{\gamma_{j}} )^{2}}-\alpha \frac{\prod_{j=1}^{i}{\delta_{j}}}{\prod_{j=1}^{i}{\gamma_{j}}}\mathop{\operatorname{esssup}}_{t\in[0,\omega]}\sum _{j=1}^{p}{ \bigl\vert a_{j}(t) \bigr\vert e^{\alpha \tau_{j}(t)}} \\ &\quad{}-\mathop{\operatorname{esssup}}_{t\in[0,\omega]}\sum_{j=1}^{p}{ \bigl\vert b_{j}(t) \bigr\vert e^{\alpha\theta_{j}(t)}}>0. \end{aligned} $$
(3.23)
Thus
$$ \alpha^{2} E^{2} e^{-\alpha\Omega}>\alpha{E} \mathop{\operatorname{esssup}}_{t\in [0,\omega]}\sum_{j=1}^{p}{ \bigl\vert a_{j}(t) \bigr\vert }+ \mathop{\operatorname{esssup}}_{t\in[0,\omega]} \sum_{j=1}^{p}{ \bigl\vert b_{j}(t) \bigr\vert }, $$
(3.24)
where Ω is defined by (3.16). Denoting \(F(\alpha)=\alpha^{2} E^{2} e^{-\alpha\Omega}\), we can find its maximum using the derivative
$$ F'(\alpha)= \bigl(2\alpha e^{-\alpha\Omega}-\alpha^{2} \Omega e^{-\alpha \Omega} \bigr)E^{2}=\alpha(2-\Omega\alpha) e^{-\alpha\Omega} E^{2}, $$
(3.25)
and we get that \(\alpha=\frac{2}{\Omega}\) is a point of maximum. Substituting this α into (3.24), we see that (3.22) implies, according to Theorem 3.2, the nonpositivity of \(G_{2}(t,s)\), \(G_{3}(t,s)\) and \(G_{4}(t,s)\).

Theorem 3.8 has been proven. □

Theorem 3.9

Assume that
$$ \frac{\omega^{2}}{2} \mathop{\operatorname{esssup}}_{t\in[0,\omega]}\sum_{j=1}^{p} { \bigl\vert a_{j}(t) \bigr\vert }+\omega \mathop{\operatorname{esssup}}_{t\in[0,\omega]}\sum _{j=1}^{p} { \bigl\vert b_{j}(t) \bigr\vert }< 1. $$
(3.26)
Then \([0,\omega]\) is a semi-nonoscillation interval of \((Lx)(t)=0\).
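For instance (a hypothetical illustration), if \(\omega=1\) and \(\mathop{\operatorname{esssup}}_{t\in[0,\omega]}\sum_{j=1}^{p}{ \vert a_{j}(t) \vert }=\mathop{\operatorname{esssup}}_{t\in[0,\omega]}\sum_{j=1}^{p}{ \vert b_{j}(t) \vert }=\frac{1}{2}\), then
$$ \frac{\omega^{2}}{2}\cdot\frac{1}{2}+\omega\cdot\frac{1}{2}=\frac {3}{4}< 1, $$
so \([0,1]\) is a semi-nonoscillation interval of \((Lx)(t)=0\).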

Example 3.10

Let us now find an example of a function v satisfying the conditions of Theorem 3.4. To this end, let us start with \(v(t)=t(2\omega-t)\) in the interval \(t\in{[0,t_{1})}\). The function v in the rest of the intervals will be of the form
$$ v(t)=v(t_{i})+v'(t_{i}) (t-t_{i})-(t-t_{i})^{2},\quad t\in{[t_{i},t_{i+1})}, i=1,\ldots ,r, t_{r+1}=\omega, $$
(3.27)
where
$$ \textstyle\begin{cases} v(t_{i})=\gamma_{i}v(t_{i}-0), \\ v'(t_{i})=\delta_{i}v'(t_{i}-0). \end{cases} $$
(3.28)
Thus
$$ \textstyle\begin{cases} v(t)=t(2\omega-t),\quad t\in{[0,t_{1})}, \\ v(t)=v(t_{i})+v'(t_{i})(t-t_{i})-(t-t_{i})^{2},\quad t\in{[t_{i},t_{i+1})}, \end{cases} $$
(3.29)
where \(v(t_{i})\) and \(v'(t_{i})\) can be presented in the forms
$$ \textstyle\begin{cases} v(t_{i})=t_{1}(2\omega-t_{1})\prod_{j=1}^{i}{\gamma_{j}}+\sum_{k=2}^{i}{v'(t_{k})(t_{k}-t_{k-1})\prod_{j=k}^{i}{\gamma_{j}}} \\ \phantom{v(t_{i})=}{}-\sum_{k=2}^{i}{(t_{k}-t_{k-1})^{2}\prod_{j=k}^{i}{\gamma_{j}}}, \\ v'(t_{i})=2(\omega-t_{1})\prod_{j=1}^{i}{\delta_{j}}-2\sum_{k=2}^{i}{(t_{k}-t_{k-1})\prod_{j=k}^{i}{\delta_{j}}}. \end{cases} $$
(3.30)
Let us assume that \(v(t)>0\) and substitute this \(v(t)\) into condition (3.9) of Theorem 3.4.
For the next corollary, we use the following notation:
$$\begin{aligned} &\Omega_{1}=\max_{i=1,2,\ldots,r}{ \bigl[v'(t_{i})-2t_{i} \bigr]}, \end{aligned}$$
(3.31)
$$\begin{aligned} &\Omega_{2}=\max{ \biggl[\max_{i=1,2,\ldots,r}{v \biggl(\frac {v'(t_{i})}{2}+t_{i} \biggr)},\max_{i=0,1,\ldots,r}{v(t_{i})} \biggr]}, \end{aligned}$$
(3.32)
where \(v(t_{r+1})=v(\omega)\).

Corollary 3.11

If \(a_{j}\geq{0}, b_{j}\geq{0}\) for \(j=1,\ldots,p\), \(1\leq{\gamma_{k}}, 1\leq{\delta_{k}}\) for \(k=1,\ldots,r\), \(v(t)\) defined by (3.29) is positive for \(t\in {(0,\omega)}\) and
$$ \Omega_{1} \sum_{j=1}^{p} {a_{j}(t)}+\Omega_{2} \sum_{j=1}^{p} {b_{j}(t)}< 2, $$
(3.33)
then Green’s function \(G_{4}(t,s)\) of problem (1.1)-(1.3), (2.4) is nonpositive.

Proof

Let us substitute this \(v(t)\), defined by (3.29), into condition (3.9) of Theorem 3.4:
$$\begin{aligned} & {-}2+\sum_{i=1}^{p}{a_{i}(t) \max_{i=1,2,\ldots,r}{ \bigl[v'(t_{i})-2t_{i} \bigr]}} \\ &\quad{}+\sum_{i=1}^{p}{b_{i}(t)\max{ \biggl[\max_{i=1,2,\ldots,r}{v \biggl(\frac{v'(t_{i})}{2}+t_{i} \biggr)},\max_{i=0,1,\ldots,r}{v(t_{i})} \biggr]}}< 0, \end{aligned}$$
(3.34)
and we get the condition
$$ \Omega_{1}\sum_{j=1}^{p}{a_{j}(t)}+ \Omega_{2}\sum_{j=1}^{p}{b_{j}(t)}< 2. $$
(3.35)
Corollary 3.11 has been proven. □
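To show how Corollary 3.11 can be applied, here is a small computational sketch (with hypothetical impulse data and constant coefficients) that builds the test function v of (3.29), evaluates \(\Omega_{1}\), \(\Omega_{2}\) by (3.31)-(3.32) and checks inequality (3.33); positivity of v on \((0,\omega)\) should be verified separately.

```python
# A sketch of applying Corollary 3.11; all data below are hypothetical.
omega = 1.0
t_pts = [0.4, 0.7]           # impulse points t_1 < ... < t_r
gam = [1.1, 1.2]             # gamma_i >= 1
dlt = [1.0, 1.3]             # delta_i >= 1
sum_a, sum_b = 0.3, 0.5      # sum_j a_j(t) and sum_j b_j(t), taken constant

# values v(t_i), v'(t_i) just after each impulse, via (3.27)-(3.30)
v_left = t_pts[0] * (2 * omega - t_pts[0])   # v(t_1 - 0), since v(t) = t(2*omega - t) on [0, t_1)
dv_left = 2 * (omega - t_pts[0])             # v'(t_1 - 0)
v_i, dv_i = [], []
for i, t_i in enumerate(t_pts):
    v_i.append(gam[i] * v_left)              # jump conditions (3.28)
    dv_i.append(dlt[i] * dv_left)
    t_next = t_pts[i + 1] if i + 1 < len(t_pts) else omega
    step = t_next - t_i
    v_left = v_i[i] + dv_i[i] * step - step**2   # value at the right end of [t_i, t_next) by (3.29)
    dv_left = dv_i[i] - 2 * step
v_omega = v_left                             # v(t_{r+1}) = v(omega)

def v_piece(i, t):
    """v(t) on [t_i, t_{i+1}) by (3.29)."""
    return v_i[i] + dv_i[i] * (t - t_pts[i]) - (t - t_pts[i]) ** 2

Omega1 = max(dv_i[i] - 2 * t_pts[i] for i in range(len(t_pts)))               # (3.31)
vertex_vals = [v_piece(i, t_pts[i] + dv_i[i] / 2) for i in range(len(t_pts))]
node_vals = [0.0] + v_i + [v_omega]          # v(t_0) = v(0) = 0, ..., v(t_{r+1}) = v(omega)
Omega2 = max(max(vertex_vals), max(node_vals))                                # (3.32)
print("Omega_1 =", Omega1, " Omega_2 =", Omega2)
print("condition (3.33) holds:", Omega1 * sum_a + Omega2 * sum_b < 2)
```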

Declarations

Acknowledgements

The authors thank Dr. Shlomo Yanetz (Bar Ilan University, Israel) for his important and valuable remarks. This paper is a part of the second author’s Ph.D. thesis, which is being carried out in the Department of Mathematics at Bar-Ilan University.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
Department of Mathematics, Ariel University
(2)
Department of Mathematics, Bar Ilan University

References

  1. Azbelev, NV, Maksimov, VP, Rakhmatullina, LF: Introduction to the Theory of Linear Functional-Differential Equations. Advanced Series in Math. Science and Engineering, vol. 3. World Federation Publisher Company, Atlanta (1995)
  2. Bainov, D, Simeonov, P: Impulsive Differential Equations. Pitman Monographs and Surveys in Pure and Applied Mathematics, vol. 66. Longman Scientific, Harlow (1993)
  3. Lakshmikantham, V, Bainov, DD, Simeonov, PS: Theory of Impulsive Differential Equations. World Scientific, Singapore (1989)
  4. Pandit, SG, Deo, SG: Differential Equations Involving Impulses. Lecture Notes in Mathematics, vol. 954. Springer, Berlin (1982)
  5. Samoilenko, AM, Perestyuk, AN: Impulsive Differential Equations. World Scientific, Singapore (1992)
  6. Zavalishchin, SG, Sesekin, AN: Dynamic Impulse Systems: Theory and Applications. Mathematics and Its Applications, vol. 394. Kluwer, Dordrecht (1997)
  7. Jiang, D, Lin, X: Multiple positive solutions of Dirichlet boundary value problems for second order impulsive differential equations. J. Math. Anal. Appl. 321, 501-514 (2006)
  8. Feng, M, Xie, D: Multiple positive solutions of multi-point boundary value problem for second-order impulsive differential equations. J. Comput. Appl. Math. 223, 438-448 (2009)
  9. Jiang, J, Liu, L, Wu, Y: Positive solutions for second order impulsive differential equations with Stieltjes integral boundary conditions. Adv. Differ. Equ. 2012, 124 (2012)
  10. Hao, X, Liu, L, Wu, Y: Positive solutions for second order impulsive differential equations with integral boundary conditions. Commun. Nonlinear Sci. Numer. Simul. 16, 101-111 (2011)
  11. Hu, L, Liu, L, Wu, Y: Positive solutions of nonlinear singular two-point boundary value problems for second-order impulsive differential equations. Appl. Math. Comput. 196, 550-562 (2008)
  12. Jankowski, T: Positive solutions to second order four-point boundary value problems for impulsive differential equations. Appl. Math. Comput. 202, 550-561 (2008)
  13. Jankowski, T: Positive solutions for second order impulsive differential equations involving Stieltjes integral conditions. Nonlinear Anal. 74, 3775-3785 (2011)
  14. Lee, EK, Lee, YH: Multiple positive solutions of singular two point boundary value problems for second order impulsive differential equations. Appl. Math. Comput. 158, 745-759 (2004)
  15. Agarwal, RP, O’Regan, D: Multiple nonnegative solutions for second order impulsive differential equations. Appl. Math. Comput. 114, 51-59 (2000)
  16. Rachunkova, I, Tomecek, J: A new approach to BVPs with state-dependent impulses. Bound. Value Probl. 2013, 22 (2013). doi:10.1186/1687-2770-2013-22
  17. Domoshnitsky, A, Drakhlin, M: Nonoscillation of first order impulse differential equations with delay. J. Math. Anal. Appl. 206, 254-269 (1997)
  18. Domoshnitsky, A, Volinsky, I: About positivity of Green’s functions for nonlocal boundary value problems with impulsive delay equations. Sci. World J. 2014, Article ID 978519 (2014)
  19. Domoshnitsky, A, Volinsky, I: About differential inequalities for nonlocal boundary value problems with impulsive delay equations. Math. Bohem. 140, 121-128 (2015)
  20. Tian, YL, Weng, PX, Yang, JJ: Nonoscillation for a second order linear delay differential equation with impulses. Acta Math. Appl. Sin. 20, 101-114 (2004)
  21. Bainov, D, Domshlak, Y, Simeonov, P: Sturmian comparison theory for impulsive differential inequalities and equations. Arch. Math. 67, 35-49 (1996)
  22. Domoshnitsky, A, Drakhlin, M, Litsyn, E: On boundary value problems for N-th order functional differential equations with impulses. Adv. Math. Sci. Appl. 8(2), 987-996 (1998)
  23. Domoshnitsky, A, Landsman, G, Yanetz, S: About sign-constancy of Green’s functions for impulsive second order delay equations. Opusc. Math. 34(2), 339-362 (2014)
  24. Domoshnitsky, A, Landsman, G, Yanetz, S: About sign-constancy of Green’s function of one-point problem for impulsive second order delay equations. Funct. Differ. Equ. 21(1-2), 3-15 (2014)
  25. Domoshnitsky, A, Landsman, G, Yanetz, S: About sign-constancy of Green’s function of a two-point problem for impulsive second order delay equations. Electron. J. Qual. Theory Differ. Equ. 2016, 9 (2016)
  26. Chichkin, EA: Theorem about differential inequality for multipoint boundary value problems. Izv. Vysš. Učebn. Zaved., Mat. 27(2), 170-179 (1962)
  27. Levin, AY: Non-oscillation of solutions of the equation \(x^{(n)}+p_{n-1}(t)x^{(n-1)}+\cdots+p_{0}(t)x=0\). Usp. Mat. Nauk 24(2), 43-96 (1969)
  28. Azbelev, NV, Domoshnitsky, A: A question concerning linear differential inequalities. Differ. Equ. 27, 257-263 (1991); translation from Differentsial’nye uravnenija 27, 257-263 (1991)
  29. Azbelev, NV, Domoshnitsky, A: A question concerning linear differential inequalities. Differ. Equ. 27, 923-931 (1991); translation from Differentsial’nye uravnenija 27, 641-647 (1991)
  30. Bainov, D, Domoshnitsky, A: Theorems on differential inequalities for second order functional-differential equations. Glas. Mat. 29(49), 275-289 (1994)

Copyright

© The Author(s) 2017