A class of singular n-dimensional impulsive Neumann systems

Abstract

This paper investigates the existence of infinitely many positive solutions for the second-order n-dimensional impulsive singular Neumann system

$$\begin{aligned}& -\mathbf{x}^{\prime\prime}(t)+ M\mathbf{x}(t)=\lambda {\mathbf{g}}(t)\mathbf{f} \bigl(t,\mathbf{x}(t) \bigr),\quad t\in J, t\neq t_{k}, \\& -\Delta {\mathbf{x}}^{\prime}|_{t=t_{k}}=\mu {\mathbf{I}}_{k} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr),\quad k=1,2,\ldots ,m, \\& \mathbf{x}^{\prime}(0)=\mathbf{x}^{\prime}(1)=0. \end{aligned}$$

The vector-valued function x and the matrix function \(\mathbf{g}(t)\) are defined by

$$\begin{aligned}& \mathbf{x}=[x_{1},x_{2},\dots ,x_{n}]^{\top }, \qquad \mathbf{g}(t)=\operatorname{diag} \bigl[g_{1}(t), \ldots ,g_{i}(t), \ldots , g_{n}(t) \bigr], \end{aligned}$$

where \(g_{i}\in L^{p}[0,1]\) for some \(p\geq 1\), \(i=1,2,\ldots , n\), and each \(g_{i}\) has infinitely many singularities in \([0,\frac{1}{2})\). Our methods employ fixed point index theory and the inequality technique.

Introduction

Impulsive differential equations have gained considerable importance due to their varied applications in many problems of physics, chemistry, biology, applied sciences and engineering. For details and explanations, we refer the reader to Refs. [1–9]. In particular, great interest has been shown by many authors in the subject of impulsive boundary value problems (IBVPs), and a variety of results for IBVPs equipped with different kinds of boundary conditions have been obtained; see, for instance, [10–28] and the references cited therein.

However, there is almost no work on second-order n-dimensional impulsive systems, especially on multi-parameter second-order n-dimensional impulsive singular Neumann systems. In this paper, we introduce this new problem and discuss the existence of infinitely many positive solutions.

Consider the n-dimensional nonlinear second-order impulsive Neumann system

$$ \begin{aligned} &{-}\mathbf{x}^{\prime\prime}(t)+ M\mathbf{x}(t)=\lambda { \mathbf{g}}(t)\mathbf{f} \bigl(t,\mathbf{x}(t) \bigr),\quad t\in J, t\neq t_{k}, \\ &{-}\Delta {\mathbf{x}}^{\prime}|_{t=t_{k}}=\mu { \mathbf{I}}_{k} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr),\quad k=1,2,\ldots ,m, \end{aligned} $$
(1.1)

with the following boundary conditions:

$$ {\mathbf{x}}^{\prime}(0)=\mathbf{x}^{\prime}(1)=0, $$
(1.2)

where λ and μ are positive parameters and M is a positive constant, \(J=[0,1]\), \(t_{k} \in \mathrm{R}\), \(k =1,2,\ldots ,m\), \(m \in \mathrm{N}\), satisfy \(0< t_{1}< t_{2}<\cdots <t_{m}<1\). In addition,

$$\begin{aligned}& {\mathbf{x}}=[x_{1},x_{2},\ldots ,x_{n}]^{\top }, \\& {\mathbf{g}}(t)=\operatorname{diag} \bigl[g_{1}(t),g_{2}(t), \ldots ,g_{n}(t) \bigr], \\& {\mathbf{f}}(t,\mathbf{x})= \bigl[f_{1}(t,\mathbf{x}),\ldots ,f_{i}(t,\mathbf{x}),\ldots ,f_{n}(t,\mathbf{x}) \bigr]^{\top }, \\& {\mathbf{I}}_{k} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr)= \bigl[I_{k}^{1} \bigl(t_{k}, \mathbf{x}(t_{k}) \bigr), \ldots ,I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr),\ldots ,I_{k}^{n} \bigl(t_{k},\mathbf{x}(t _{k}) \bigr) \bigr]^{\top }, \\& -\Delta {\mathbf{x}}^{\prime}|_{t=t_{k}}= \bigl[-\Delta x_{1}^{\prime}|_{t=t_{k}}, - \Delta x_{2}^{\prime}|_{t=t_{k}},\ldots ,-\Delta x_{n}^{\prime}|_{t=t_{k}} \bigr]^{\top }, \end{aligned}$$

here

$$ f_{i}(t,\mathbf{x})=f_{i}(t,x_{1},\ldots ,x_{i},\ldots , x_{n}), \qquad I_{k} ^{i}(t_{k},\mathbf{x})=I_{k}^{i}(t_{k},x_{1}, \ldots ,x_{i},\ldots , x _{n}). $$

Therefore, system (1.1) means that

$$ \textstyle\begin{cases} -x_{1}^{\prime\prime}(t)+ Mx_{1}(t)=\lambda g_{1}(t)f_{1}(t,x_{1}(t),x_{2}(t),\dots ,x_{n}(t)),& t\in J, t\neq t_{k}, \\ -\Delta x_{1}^{\prime}|_{t=t_{k}}=\mu I_{k}^{1}(t_{k},x_{1}(t_{k}),x _{2}(t_{k}),\dots ,x_{n}(t_{k})),&k=1,2,\ldots ,m, \\ -x_{2}^{\prime\prime}(t)+ Mx_{2}(t)=\lambda g_{2}(t)f_{2}(t,x_{1}(t),x_{2}(t),\ldots ,x_{n}(t)),&t\in J,t\neq t_{k}, \\ -\Delta x_{2}^{\prime}|_{t=t_{k}}=\mu I_{k}^{2}(t_{k},x_{1}(t_{k}),x _{2}(t_{k}),\dots ,x_{n}(t_{k})),&k=1,2,\ldots ,m, \\ \cdots , \\ -x_{n}^{\prime\prime}(t)+ Mx_{n}(t)=\lambda g_{n}(t)f_{n}(t,x_{1}(t),x_{2}(t), \ldots ,x_{n}(t)),& t\in J, t\neq t_{k}, \\ -\Delta x_{n}^{\prime}|_{t=t_{k}}=\mu I_{k}^{n}(t_{k},x_{1}(t_{k}),x _{2}(t_{k}),\dots ,x_{n}(t_{k})),&k=1,2,\ldots ,m, \end{cases} $$
(1.3)

where \(-\Delta x_{i}^{\prime}|_{t=t_{k}}=x_{i}^{\prime}((t_{k})^{+})-x_{i} ^{\prime}((t_{k})^{-})\) and in which \(x_{i}^{\prime}((t_{k})^{+})\) and \(x _{i}^{\prime}((t_{k})^{-})\) denote the right-hand limit and left-hand limit of \(x_{i}^{\prime}(t)\) at \(t=t_{k}\), respectively.

Similarly, (1.2) means that

$$ \textstyle\begin{cases} x_{1}^{\prime}(0)=x_{1}^{\prime}(1)=0, \\ x_{2}^{\prime}(0)=x_{2}^{\prime}(1)=0, \\ \cdots , \\ x_{n}^{\prime}(0)=x_{n}^{\prime}(1)=0. \end{cases} $$
(1.4)

By a solution x to system (1.1)–(1.2) we mean a vector-valued function \(\mathbf{x}=[x_{1},x_{2},\dots ,x_{n}]^{\top } \in C^{2}(J,R^{n})\) which satisfies (1.1) and (1.2) for \(t\in J\). In addition, for each \(i=1,2,\dots ,n\), \(k =1,2,\ldots ,m\), \(x_{i}(t_{k}^{+})\) and \(x_{i}(t_{k}^{-})\) exist, and \(x_{i}(t)\) is absolutely continuous on each of the intervals \((0,t_{1}]\) and \((t_{k},t_{k+1}]\). A solution is positive if, for each \(i=1,2,\dots ,n\), \(x_{i}(t)\geq 0\) for all \(t\in J\) and at least one component of x is positive on J.

For the case \(n=1\), \(\lambda =1\) and \(\mathbf{I}_{k}\equiv 0\), \(k=1,2,\ldots ,m\), system (1.1)–(1.2) reduces to the problem studied by Sun, Cho and O’Regan in [29]. By using a cone fixed point theorem, the authors obtained some sufficient conditions for the existence of positive solutions in Banach spaces. Very recently, in the case \(n=1\), \(M=0\), \(\lambda =1\) and \(\mathbf{I}_{k}\equiv 0\), \(k=1,2,\ldots ,m\), Sovrano and Zanolin [30] presented a multiplicity result for positive solutions of system (1.1)–(1.2) by applying a shooting method. For other excellent results on Neumann boundary value problems, we refer the reader to [31–42].

Here we emphasize that our problem is new in the sense of the multi-parameter second-order n-dimensional impulsive singular Neumann systems introduced here. To the best of our knowledge, the existence of single or multiple positive solutions for the multi-parameter second-order n-dimensional impulsive singular Neumann system (1.1)–(1.2) has not yet been studied, let alone the existence of infinitely many positive solutions for system (1.1)–(1.2). Consequently, the main results of the present work are a useful contribution to the existing literature on second-order n-dimensional impulsive singular Neumann systems. The existence results for infinitely many positive solutions of the given problem are new, though they are proved by the well-known method based on fixed point index theory in cones and the inequality technique.

Throughout this paper, we use \(i=1,2,\dots ,n\), unless otherwise stated.

Let the components of g, f and \(\mathbf{I}_{k}\) satisfy the following conditions:

\((H_{1})\) :

\(g_{i}(t)\in L^{p}[0,1]\) for some \(p\in [1,+\infty )\), and there exists \(N_{i}>0\) such that \(g_{i}(t)\geq N_{i}\) a.e. on J;

\((H_{2})\) :

for every \(g_{i}(t),i=1,2,\ldots ,n\), there exists a sequence \(\{t_{j}^{\prime}\}_{j=1}^{\infty } \) such that \(t_{1}^{\prime}<\delta \), where \(\delta =\min \{t_{1},\frac{1}{2}\} \), \(t_{j}^{\prime} \downarrow t_{0}^{\prime}>0 \) and \(\lim_{t\rightarrow t_{j}^{\prime}} g_{i}(t) =+\infty\) for all \(j=1, 2,\ldots \) ;

\((H_{3})\) :

\(f_{i}(t,\mathbf{x})\in C(J\times R_{+}^{n}, R_{+})\), \(I_{k}^{i}(t_{k},\mathbf{x}(t_{k}))\in C(J\times R_{+}^{n}, R_{+})\), where \(R_{+}=[0,+\infty )\) and \(R_{+}^{n}=\prod_{i=1}^{n}R_{+}\).

Remark 1.1

It is not difficult to see that condition (\(H_{2}\)) plays an important role in the proof of Theorem 3.1, and there are many functions satisfying (\(H_{2}\)); for details, see Example 3.1.

Remark 1.2

From the proof of the main results reported by Sovrano and Zanolin [30], it is not difficult to see that \(f(t,u)>0\) for \(u>0\) is an important condition there, whereas we obtain the multiplicity of positive solutions in terms of the parameters λ and μ without using it; for details, see Theorem 3.1.

The plan of this article is as follows. In Sect. 2, we collect some well-known results to be used in the subsequent sections and present several new properties of Green’s function, which plays a pivotal role in obtaining the main results given in Sect. 3. In the final section, we also give an example of a family of diagonal matrix functions \(\mathbf{g}(t)\) for which \((H_{2})\) holds.

Preliminaries

Let \(J^{\prime}=J\setminus \{ t_{1},t_{2},\ldots ,t_{m} \} \) and \(E=C[0,1]\). We define the subset \(PC_{1}[0,1]\) of E by

$$ PC_{1}[0,1]= \bigl\{ x\in E:x^{\prime}(t)\in C(t_{k}, t_{k+1}), \exists x ^{\prime} \bigl(t_{k}^{-} \bigr), x^{\prime} \bigl(t_{k}^{+} \bigr), k=1,2, \ldots ,m \bigr\} . $$
(2.1)

Then \(PC_{1}[0,1]\) is a real Banach space with the norm

$$ \Vert x \Vert _{PC_{1}}=\max \bigl\{ \Vert x \Vert _{\infty }, \bigl\Vert x^{\prime} \bigr\Vert _{\infty } \bigr\} , $$

where \(\Vert x \Vert _{\infty }=\sup_{t\in J}\vert x(t) \vert , \Vert x^{\prime} \Vert _{\infty }=\sup_{t\in J}\vert x^{\prime}(t) \vert \).

Let \(PC_{1}^{n}[0,1]=\underbrace{PC_{1}[0,1]\times \cdots \times PC _{1}[0,1]}_{n}\), and, for any \(\mathbf{x}=[x_{1},x_{2},\dots ,x_{n}]^{ \top }\in PC_{1}^{n}[0,1]\),

$$ \Vert {\mathbf{x}} \Vert =\sum_{i=1}^{n} \Vert x_{i} \Vert _{PC_{1}}. $$
(2.2)

Then \((PC_{1}^{n}[0,1],\Vert \cdot \Vert )\) is a real Banach space.

Suppose that \(G(t,s)\) is the Green’s function of the boundary value problem

$$ -x_{i}^{\prime\prime}(t)+Mx_{i}(t)=0,\qquad x_{i}^{\prime}(0)=x_{i}^{\prime}(1)=0, $$

then

$$ G(t,s)= \frac{1}{\gamma \sinh \gamma } \textstyle\begin{cases} \cosh \gamma (1-t)\cosh \gamma s, &0\leq s\leq t\leq 1, \\ \cosh \gamma (1-s)\cosh \gamma t, &0\leq t\leq s\leq 1, \end{cases} $$
(2.3)

where \(\cosh t = \frac{e^{t}+e^{-t}}{2}\), \(\sinh t= \frac{e^{t}-e^{-t}}{2}\), \(\gamma =\sqrt{M}\).

It is obvious that

$$ A=\frac{1}{\gamma \sinh \gamma }\leq G(t,s)\leq \frac{\cosh \gamma }{ \gamma \sinh \gamma }=B,\quad \forall t,s\in J, $$
(2.4)

and then we have

$$ A\leq G(s,s)\leq B,\quad \forall s\in J. $$
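As an illustrative numerical check of (2.3) and (2.4) (not part of the argument; the value \(M=4\) is an arbitrary test choice), note that \(x\equiv 1/M\) solves \(-x''+Mx=1\) with \(x'(0)=x'(1)=0\), so \(\int_{0}^{1}G(t,s)\,ds=1/M\) for every \(t\in J\). A minimal Python sketch:

```python
import math

def G(t, s, M):
    """Green's function (2.3) of -x'' + M x = 0, x'(0) = x'(1) = 0."""
    g = math.sqrt(M)
    lo, hi = min(t, s), max(t, s)
    return math.cosh(g * (1 - hi)) * math.cosh(g * lo) / (g * math.sinh(g))

M = 4.0                                  # arbitrary test value of the constant M
g = math.sqrt(M)
A = 1 / (g * math.sinh(g))               # lower bound in (2.4)
B = math.cosh(g) / (g * math.sinh(g))    # upper bound in (2.4)

n = 2000
for t in [0.0, 0.3, 0.7, 1.0]:
    # trapezoidal rule for the integral of G(t, .) over [0, 1];
    # since x = 1/M solves -x'' + Mx = 1 with Neumann data, the exact value is 1/M
    vals = [G(t, k / n, M) for k in range(n + 1)]
    integral = (sum(vals) - (vals[0] + vals[-1]) / 2) / n
    assert abs(integral - 1 / M) < 1e-4
    # pointwise bounds (2.4): A <= G(t, s) <= B
    assert A - 1e-12 <= min(vals) and max(vals) <= B + 1e-12
print("Green's function checks passed")
```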

Lemma 2.1

For any \(\theta \in (t_{0}^{\prime},\delta )\), there is

$$ \frac{\cosh\gamma \theta }{\gamma \sinh \gamma }\leq G(t,s)\leq \frac{ \cosh \gamma \theta \cosh \gamma (1-\theta )}{\gamma \sinh \gamma }, \quad \forall t\in [\theta,1], s\in J. $$
(2.5)

Proof

Equation (2.5) follows easily from the definition of \(G(t,s)\); we omit the details. □

To establish the existence of positive solutions to system (1.1)–(1.2), for a fixed \(\theta \in (t_{0}^{\prime},\delta )\), we construct the cone \(\mathbf{K}_{\theta }\) in \(PC_{1}^{n}[0,1]\) by

$$\begin{aligned} {\mathbf{K}}_{\theta }&= \Biggl\{ \mathbf{x}=(x_{1},x_{2}, \dots ,x_{n}) \in PC _{1}^{n}[0,1]:x_{i}(t) \geq 0, \\ &\quad {} i=1,2,\ldots ,n, t\in J,\min_{t\in [\theta ,1]} \sum _{i=1}^{n}x_{i}(t) \geq \sigma \Vert \textbf{x} \Vert \Biggr\} , \end{aligned}$$
(2.6)

where

$$ \sigma =\frac{\cosh \gamma \theta }{\rho \gamma \sinh \gamma }, $$
(2.7)

here ρ is defined by

$$ \rho =\max \{B,\sinh \gamma \}, $$
(2.8)

and it is easy to see \(\mathbf{K}_{\theta }\) is a closed convex cone of \(PC_{1}^{n}[0,1]\).

Let \(\{\theta_{j}\}_{j=1}^{\infty }\) be such that \(t_{j+1}^{\prime}<\theta _{j}<t_{j}^{\prime}\), \(j=1,2,\ldots \) . Then we get \(0<\cdots <t_{j+1}^{\prime}< \theta_{j}<t_{j}^{\prime}<\cdots <t_{3}^{\prime}<\theta_{2}<t_{2}^{\prime}<\theta_{1}<t _{1}^{\prime}<\delta \leq t_{1}<t_{2}<\cdots <t_{m}<1\), and then, for any \(j\in \textrm{N}\), we can define the cone \(\mathbf{K}_{\theta_{j}}\) by

$$ {\mathbf{K}}_{\theta_{j}}= \Biggl\{ \mathbf{x} \in PC_{1}^{n}[0,1]: x_{i}(t) \geq 0,t\in J, i=1,2,\ldots ,n, \min _{t\in [\theta_{j},1]} \sum_{i=1}^{n}x_{i}(t) \geq \sigma_{j}\Vert \textbf{x} \Vert \Biggr\} , $$
(2.9)

where

$$ \sigma_{j}=\frac{\cosh \gamma \theta_{j}}{\rho \gamma \sinh \gamma }, $$
(2.10)

here ρ is defined by (2.8), and

$$ \theta_{j}\in \bigl[t_{j+1}^{\prime},t_{j}^{\prime} \bigr],\quad j=1,2,\ldots . $$
(2.11)

It is easy to see \(\mathbf{K}_{\theta_{j}}\) is also a closed convex cone of \(PC_{1}^{n}[0,1]\).

Also, for a positive number τ, define \(\mathbf{K}_{\tau \theta_{j}}\) by

$$ {\mathbf{K}}_{\tau \theta_{j}}= \bigl\{ \mathbf{x}\in {\mathbf{K}}_{\theta_{j}}: \Vert \textbf{x} \Vert < \tau \bigr\} . $$

Remark 2.1

It is obvious from the definitions of σ and \(\sigma_{j}\) that \(0<\sigma ,\sigma_{j} <1\).
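Remark 2.1 can also be confirmed numerically: when \(\rho =B\), the definition (2.7) gives \(\sigma =\cosh \gamma \theta /\cosh \gamma <1\) for \(\theta <1\), and \(\rho \geq B\) in general. A small sketch over arbitrary test values of M and θ:

```python
import math

def sigma(theta, M):
    """sigma from (2.7)-(2.8): cosh(gamma*theta) / (rho * gamma * sinh(gamma))."""
    g = math.sqrt(M)
    B = math.cosh(g) / (g * math.sinh(g))
    rho = max(B, math.sinh(g))               # rho from (2.8)
    return math.cosh(g * theta) / (rho * g * math.sinh(g))

# spot-check Remark 2.1 over a grid of M and theta, with theta in (0, 1)
for M in [0.01, 0.25, 1.0, 4.0, 25.0]:
    for k in range(1, 100):
        s = sigma(k / 100, M)
        assert 0 < s < 1
print("0 < sigma < 1 on the whole grid")
```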

Lemma 2.2

If \((H_{1})\)–\((H_{3})\) hold, then system (1.1)–(1.2) has a unique solution \(\mathbf{x}=[x_{1},x_{2},\ldots , x_{n}]^{\top } \in R_{+}^{n}\), in which \(x_{i}(t)\) is given by

$$ x_{i}(t) =\lambda \int_{0}^{1}G(t,s)g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+ \mu \sum_{k=1}^{m}G(t,t_{k})I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr). $$
(2.12)

Proof

We use the fact that system (1.1)–(1.2) is equivalent to system (1.3)–(1.4). Therefore, system (1.1)–(1.2) has a unique solution x if and only if, for each \(i=1,2,\ldots ,n\), the scalar problem

$$ \textstyle\begin{cases} -x_{i}^{\prime\prime}(t)+ Mx_{i}(t)=\lambda g_{i}(t)f_{i}(t,x_{1}(t), x_{2}(t),\ldots , x_{n}(t)),& t\in J, t\neq t_{k}, \\ -\Delta x_{i}^{\prime}|_{t=t_{k}}=\mu I_{k}^{i}(t_{k}, x_{1}(t_{k}), x_{2}(t_{k}),\ldots , x_{n}(t_{k})),& k=1,2,\ldots ,m, \\ x_{i}^{\prime}(0)=x_{i}^{\prime}(1)=0,\end{cases} $$
(2.13)

has a unique solution \(x_{i}\), which is given by (2.12).

Next, by a proof which is similar to that of Lemma 2.4 in [40], we can show that (2.12) holds. This finishes the proof of Lemma 2.2. □

Let \(\mathbf{T}_{\lambda \mu }: \mathbf{K}_{\theta_{j}} \to PC_{1}^{n}[0,1]\) be a map with components \((T_{\lambda \mu }^{1},\ldots ,T_{\lambda \mu }^{i},\ldots ,T_{\lambda \mu }^{n})\). We understand that \(\mathbf{T}_{\lambda \mu }\mathbf{x}=(T_{\lambda \mu }^{1}{\mathbf{x}},\ldots ,T _{\lambda \mu }^{i}{\mathbf{x}},\ldots ,T_{\lambda \mu }^{n}{\mathbf{x}})^{ \top }\), where

$$\begin{aligned} \bigl(T_{\lambda \mu }^{i}{\mathbf{x}} \bigr) (t) &=\lambda \int_{0}^{1}G(t,s)g_{i}(s)f _{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\quad {}+ \mu \sum_{k=1}^{m}G(t,t_{k})I_{k}^{i} \bigl(t_{k}, \mathbf{x}(t_{k}) \bigr),\quad i=1,2,\ldots ,n. \end{aligned}$$
(2.14)

Remark 2.2

It follows from Lemma 2.2 and the definition of \(\mathbf{T}_{\lambda \mu }\) that

$$ {\mathbf{x}}=[x_{1},x_{2},\dots ,x_{n}]^{\top } \in PC_{1}^{n}[0,1] $$

is a solution of the system (1.1)–(1.2) if and only if \(\mathbf{x}=[x _{1},x_{2},\ldots ,x_{n}]^{\top }\) is a fixed point of operator \(\textbf{T}_{\lambda \mu }\).

Lemma 2.3

Assume that \((H_{1})\)–\((H_{3})\) hold. Then \(\mathbf{T}_{\lambda \mu }(\mathbf{K}_{\theta_{j}})\subset {\mathbf{K}}_{\theta_{j}} \) and \(\mathbf{T}_{\lambda \mu }: \mathbf{K}_{\theta_{j}} \to {\mathbf{K}}_{\theta_{j}}\) is completely continuous.

Proof

To prove that \(\mathbf{T}_{\lambda \mu }(\mathbf{K}_{\theta_{j}})\subset {\mathbf{K}}_{\theta _{j}} \) and that \(\mathbf{T}_{\lambda \mu }: \mathbf{K}_{\theta_{j}} \to{\mathbf{K}}_{\theta_{j}}\) is completely continuous, it suffices to show, for each \(i=1,2,\ldots , n\), that \(T_{\lambda \mu }^{i}(\mathbf{K} _{\theta_{j}})\subset {\mathbf{K}}_{\theta_{j}} \) and that \(T_{\lambda \mu } ^{i}: \mathbf{K}_{\theta_{j}} \to {\mathbf{K}}_{\theta_{j}}\) is completely continuous.

Firstly, we prove that \(T_{\lambda \mu }^{i}(\mathbf{K}_{\theta_{j}}) \subset {\mathbf{K}}_{\theta_{j}}\). For \(t\in [\theta_{j},1]\), it follows from (2.5) and (2.14) that

$$\begin{aligned} \bigl(T_{\lambda \mu }^{i}\textbf{x} \bigr) (t) &=\lambda \int_{0}^{1}G(t,s)g_{i}(s)f _{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+ \mu \sum_{k=1}^{m}G(t,t_{k})I_{k}^{i} \bigl(t _{k},\mathbf{x}(t_{k}) \bigr) \\ &\leq B \Biggl[\lambda \int_{0}^{1}g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds +\mu \sum_{k=1}^{m}I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \Biggr]. \end{aligned}$$
(2.15)

It is obvious that

$$ G_{t}^{\prime}(t,s)= \frac{1}{\sinh \gamma } \textstyle\begin{cases} -\sinh \gamma (1-t)\cosh \gamma s,&0\leq s\leq t\leq 1, \\ \sinh \gamma (1-s)\cosh \gamma t,&0\leq t\leq s\leq 1, \end{cases} $$
(2.16)

and

$$ \max_{t,s\in J,t\neq s} \bigl\vert G_{t}^{\prime}(t,s) \bigr\vert \leq \sinh \gamma . $$
(2.17)
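The derivative bound (2.17) can likewise be spot-checked on a grid. The sketch below uses the arbitrary test value \(M=4\), for which \(\sinh \gamma \geq 1\):

```python
import math

def Gt(t, s, M):
    """The partial derivative G'_t(t, s) from (2.16)."""
    g = math.sqrt(M)
    if s <= t:
        return -math.sinh(g * (1 - t)) * math.cosh(g * s) / math.sinh(g)
    return math.sinh(g * (1 - s)) * math.cosh(g * t) / math.sinh(g)

M = 4.0                      # arbitrary test value (sinh(sqrt(M)) >= 1 here)
g = math.sqrt(M)
n = 400
# evaluate |G'_t| on an off-diagonal grid (G'_t jumps across t = s)
vals = [abs(Gt(i / n, j / n, M))
        for i in range(n + 1) for j in range(n + 1) if i != j]
assert max(vals) <= math.sinh(g)      # the bound (2.17)
print("max |G'_t| on grid:", round(max(vals), 4),
      " sinh(gamma):", round(math.sinh(g), 4))
```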

By (2.14) and (2.17), we have

$$\begin{aligned} \bigl\vert \bigl(T_{\lambda \mu }^{i}\textbf{x} \bigr)^{\prime}(t) \bigr\vert &= \Biggl\vert \lambda \int_{0}^{1}G_{t}^{\prime}(t,s)g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+\mu \sum_{k=1}^{m}G_{t}^{\prime}(t,t_{k})I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \Biggr\vert \\ &\leq \lambda \int_{0}^{1} \bigl\vert G_{t}^{\prime}(t,s) \bigr\vert g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+ \mu \sum_{k=1}^{m} \bigl\vert G_{t}^{\prime}(t,t_{k}) \bigr\vert I_{k}^{i} \bigl(t_{k},\mathbf{x}(t _{k}) \bigr) \\ &\leq \sinh \gamma \Biggl[ \lambda \int_{0}^{1}g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+ \mu \sum_{k=1}^{m}I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \Biggr] . \end{aligned}$$
(2.18)

For any \(t\in J\), combined with (2.15) and (2.18), we have

$$ \bigl\Vert T_{\lambda \mu }^{i}\textbf{x} \bigr\Vert _{PC_{1}} \leq \rho \Biggl[ \lambda \int_{0}^{1}g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+ \mu \sum_{k=1}^{m}I _{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \Biggr] . $$
(2.19)

Then, by (2.5), (2.6) and (2.19)

$$\begin{aligned} \min_{t\in [\theta_{j},1]} \bigl(T_{\lambda \mu }^{i}{\mathbf{x}} \bigr) (t)&=\min_{t\in [\theta_{j},1]} \Biggl[ \lambda \int_{0}^{1}G(t,s) g _{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+\mu \sum_{k=1}^{m}G(t,t_{k})I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \Biggr] \\ &\geq \frac{\cosh \gamma \theta_{j}}{\gamma \sinh \gamma } \Biggl[ \lambda \int_{0}^{1}g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+\mu \sum_{k=1}^{m}I _{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \Biggr] \\ &\geq \frac{\cosh \gamma \theta_{j}}{\rho \gamma \sinh \gamma } \rho \Biggl[ \lambda \int_{0}^{1}g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+\mu \sum_{k=1}^{m}I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \Biggr] \\ &\geq \sigma_{j} \bigl\Vert T_{\lambda \mu }^{i} \textbf{x} \bigr\Vert _{PC_{1}}. \end{aligned}$$
(2.20)

This shows that \(T_{\lambda \mu }^{i}(\mathbf{K}_{\theta_{j}})\subset {\mathbf{K}}_{\theta_{j}}\).

Next, by arguments similar to those in the proofs of Lemmas 5 and 6 of [16], one can prove that the operator \(T_{\lambda \mu }^{i}: \mathbf{K}_{\theta_{j}} \to {\mathbf{K}}_{\theta_{j}}\) is completely continuous. So the proof of Lemma 2.3 is complete. □

To obtain some of the norm inequalities in our main results, we employ the famous Hölder inequality.

Lemma 2.4

(Hölder)

Let \(e\in L^{p}[a,b]\) with \(p>1\), \(h\in L^{q}[a,b]\) with \(q>1\) and \(\frac{1}{p}+\frac{1}{q}=1\). Then \(eh\in L^{1}[a,b]\) and

$$ \Vert eh \Vert _{1}\le \Vert e \Vert _{p}\Vert h \Vert _{q}. $$

Let \(e\in L^{1}[a,b]\), \(h\in L^{\infty }[a,b]\). Then \(eh\in L^{1}[a,b]\) and

$$ \Vert eh \Vert _{1}\le \Vert e \Vert _{1}\Vert h \Vert _{\infty }. $$

Finally, we state the well-known fixed point index theorem in [43].

Lemma 2.5

Let E be a real Banach space and let K be a cone in E. For \(r>0\), define \(K_{r}= \{ x\in K:\Vert x \Vert < r \} \). Assume that \(T:\bar{K}_{r}\rightarrow K\) is completely continuous and such that \(Tx\neq x\) for \(x\in \partial K_{r}= \{ x\in K:\Vert x \Vert =r \} \).

  1. (i)

    If \(\Vert Tx \Vert \geq \Vert x \Vert \) for \(x\in \partial K_{r}\), then \(\mathbf{i}(T,K _{r},K)=0\).

  2. (ii)

    If \(\Vert Tx \Vert \leq \Vert x \Vert \) for \(x\in \partial K_{r}\), then \(\mathbf{i}(T,K _{r},K)=1\).

Main result

In this section, we establish solvable intervals of the positive parameters λ and μ for the existence of infinitely many positive solutions of system (1.1)–(1.2), by using Lemma 2.4 and Lemma 2.5.

For ease of expression, we introduce the following notation:

$$\begin{aligned}& \bigl(f_{0}^{\tau } \bigr)^{i}=\max \biggl\{ \max _{t\in J}\frac{f_{i}(t, \mathbf{x})}{\tau }, 0\leq \Vert \mathbf{x} \Vert \leq \tau \biggr\} ,\qquad F_{0}^{ \tau }=\max_{1\leq i\leq n} \bigl(f_{0}^{\tau } \bigr)^{i}; \\& \bigl(f_{\sigma_{j}\tau }^{\tau } \bigr)^{i}=\min \biggl\{ \min _{t\in [\theta_{j},1]}\frac{f_{i}(t,\mathbf{x})}{\tau }, \sigma _{j}\tau \leq \Vert \mathbf{x} \Vert \leq \tau \biggr\} ,\qquad F_{\sigma_{j}\tau } ^{\tau }= \min_{1\leq i\leq n} \bigl(f_{\sigma_{j}\tau }^{\tau } \bigr)^{i}; \\& \bigl(I_{0}^{\tau }(k) \bigr)^{i}= \max \biggl\{ \max _{t\in J}\frac{I_{k} ^{i}(t,\mathbf{x})}{\tau }, 0\leq \Vert \mathbf{x} \Vert \leq \tau \biggr\} ,\qquad \mathbf{I}_{0}^{\tau }(k)=\max _{1\leq i\leq n} \bigl(I_{0}^{\tau }(k) \bigr)^{i}; \\& \bigl(I_{\sigma_{j}\tau }^{\tau }(k) \bigr)^{i}=\min \biggl\{ \min _{t\in [\theta_{j},1]}\frac{I_{k}^{i}(t,\mathbf{x})}{\tau }, \sigma_{j}\tau \leq \Vert \mathbf{x} \Vert \leq \tau \biggr\} ,\qquad \mathbf{I}_{\sigma _{j}\tau }^{\tau }(k)= \min_{1\leq i\leq n} \bigl(I_{\sigma_{j}\tau } ^{\tau }(k) \bigr)^{i}, \end{aligned}$$

where \(i=1,2,\ldots ,n\), \(j=1,2,\ldots \) , and

$$ D=\max \bigl\{ \Vert G \Vert _{q}\Vert g_{i} \Vert _{p}, \Vert G \Vert _{1}\Vert g_{i} \Vert _{\infty }, B\Vert g_{i} \Vert _{1} \bigr\} ,\qquad \rho_{0}=\min \biggl\{ 1,\frac{A}{\cosh \gamma } \biggr\} . $$
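To make the constants concrete, the following sketch evaluates D and \(\rho_{0}\) for the purely illustrative test data \(g_{i}\equiv 1\), \(p=q=2\), \(M=4\), interpreting \(\Vert G \Vert _{q}\) as the \(L^{q}\)-norm of \(s\mapsto G(s,s)\), the way it enters the estimates in Sect. 3; none of these choices is part of the theory.

```python
import math

M = 4.0          # hypothetical test value of M
p = q = 2.0      # hypothetical Hoelder exponents, 1/p + 1/q = 1
gamma = math.sqrt(M)
A = 1 / (gamma * math.sinh(gamma))
B = math.cosh(gamma) / (gamma * math.sinh(gamma))

def G_diag(s):
    """G(s, s) from (2.3)."""
    return math.cosh(gamma * (1 - s)) * math.cosh(gamma * s) / (gamma * math.sinh(gamma))

# midpoint rule for the L^q- and L^1-norms of s -> G(s, s)
n = 20000
norm_G_q = (sum(G_diag((k + 0.5) / n) ** q for k in range(n)) / n) ** (1 / q)
norm_G_1 = sum(G_diag((k + 0.5) / n) for k in range(n)) / n

# with g_i = 1: its L^p, L^infty and L^1 norms on [0, 1] all equal 1
D = max(norm_G_q, norm_G_1, B)
rho0 = min(1.0, A / math.cosh(gamma))
print("D =", round(D, 5), " rho0 =", round(rho0, 5))
```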

We consider the following three cases for \(g_{i}(t)\in L^{p}[0,1]\): \(p>1\), \(p=1\) and \(p=\infty \). The case \(p>1\) is treated in the following theorem, which is our main result.

Theorem 3.1

Assume that \((H_{1})\)\((H_{3})\) hold. Let \(\{r_{j}\}_{j=1}^{\infty }\), \(\{\eta_{j}\}_{j=1}^{\infty }\) and \(\{R_{j}\}_{j=1}^{\infty }\) be such that

$$ R_{j+1}< \sigma_{j}r_{j}< r_{j}< \sigma_{j}\eta_{j}< \eta_{j} < R_{j}, \quad j=1,2, \ldots . $$
(3.1)

For each natural number j, we assume that f and \(\mathbf{I} _{k}\) satisfy

(\(H_{4}\)):

\(F_{0}^{r_{j}}\leq L\), \(F_{0}^{R_{j}}\leq L\), and, for any \(k\in \{1,2,\ldots ,m\}\), \(\mathbf{I}_{0}^{r_{j}}(k)\leq L\), \(\mathbf{I}_{0}^{R_{j}}(k)\leq L\), where

$$ L< \min \biggl\{ \frac{1}{n\lambda \rho_{0}D},\frac{1}{n\mu mA} \biggr\} ; $$
(3.2)
(\(H_{5}\)):

\(F_{\sigma_{j}\eta_{j}}^{\eta_{j}}\geq l\) and, for any \(k\in \{1,2,\ldots ,m\}\), \(\mathbf{I}_{\sigma_{j}\eta_{j}}^{\eta_{j}}(k) \geq l\), where \(l>0\).

Then there exist \(\lambda_{0}>0\), \(\mu_{0}>0\) such that, for \(\lambda >\lambda_{0}\), \(\mu >\mu_{0}\), system (1.1)–(1.2) has two infinite families of positive solutions \(\{\textbf{x}_{j}^{(1)}\}_{j=1}^{\infty }\), \(\{\textbf{x}_{j}^{(2)}\}_{j=1}^{\infty }\) satisfying \(\Vert {\mathbf{x}}_{j}^{(1)} \Vert >\sigma_{j}\eta_{j}\).

Proof

Let \(\lambda_{0}=\sup_{j} \{\lambda_{j}\}\) with \(\lambda_{j}=\frac{1}{2AN_{i}(1-\theta_{j})l}\), and \(\mu_{0}=\sup_{j} \{\mu_{j}\}\) with \(\mu_{j}=\frac{1}{2Aml}\), \(j=1,2,\ldots \) . Then, for any \(\lambda >\lambda_{0}\), \(\mu >\mu_{0}\), (2.14) and Lemma 2.3 imply that \(\textbf{T}_{\lambda \mu }\) and \(T_{\lambda \mu }^{i}\) (\(i=1,2,\ldots ,n\)) are all completely continuous.

Let \(t\in J\), \(\mathbf{x}\in \partial {\mathbf{K}}_{r_{j}\theta_{j}}\). Then \(\Vert {\mathbf{x}} \Vert = r_{j}\).

Therefore, for any \(\mathbf{x}\in \partial {\mathbf{K}}_{r_{j}\theta_{j}}\), it follows from \((H_{4})\) that

$$\begin{aligned} \bigl(T_{\lambda \mu }^{i}{\mathbf{x}} \bigr) (t)&=\lambda \int_{0}^{1}G(t,s)g_{i}(s)f _{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+\mu \sum_{k=1}^{m}G(t,t_{k})I_{k}^{i} \bigl(t _{k},\mathbf{x}(t_{k}) \bigr) \\ &\leq \lambda \int_{0}^{1}G(s,s)g_{i}(s)Lr_{j}\,ds+ \mu \sum_{k=1} ^{m}G(t,t_{k})Lr_{j} \\ &\leq \lambda L\Vert G \Vert _{q}\Vert g_{i} \Vert _{p}r_{j}+ \mu LmBr_{j} \\ &< \frac{r_{j}}{2n}+\frac{r_{j}}{2n}=\frac{r_{j}}{n}= \frac{\Vert {\mathbf{x}} \Vert }{n}. \end{aligned}$$
(3.3)

Moreover, by (2.4), (2.5), (2.14), (2.16) and \((H_{4})\),

$$\begin{aligned} \bigl\vert \bigl(T_{\lambda \mu }^{i}{\mathbf{x}} \bigr)^{\prime}(t) \bigr\vert &\leq \lambda \int_{0}^{1} \bigl\vert G_{t}^{\prime}(t,s) \bigr\vert g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\quad {}+ \mu \sum_{k=1}^{m} \bigl\vert G_{t}^{\prime}(t,t_{k}) \bigr\vert I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \\ &\leq \sinh \gamma \Biggl[\lambda \int_{0}^{1}g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+ \mu \sum_{k=1}^{m}I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \Biggr] \\ &\leq \frac{\sinh \gamma }{ A} \Biggl[\lambda \int_{0}^{1}G(t,s)g_{i}(s)f_{i} \bigl(s, \mathbf{x}(s) \bigr)\,ds+ \mu \sum_{k=1}^{m}AI_{k}^{i} \bigl(t_{k},\mathbf{x}(t _{k}) \bigr) \Biggr] \\ &\leq \frac{\sinh \gamma }{ A} \Biggl[\lambda \Vert G \Vert _{q}\Vert g_{i} \Vert _{p} \int_{0}^{1}f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+ \mu A\sum_{k=1}^{m}I_{k}^{i} \bigl(t_{k}, \mathbf{x}(t_{k}) \bigr) \Biggr] \\ &\leq \frac{\sinh \gamma }{ A} \bigl(\lambda Lr_{j}\Vert G \Vert _{q}\Vert g_{i} \Vert _{p}+ \mu AmLr_{j} \bigr) \\ &< \frac{r_{j}}{2n}+\frac{r_{j}}{2n} \\ &=\frac{r_{j}}{n}=\frac{\Vert {\mathbf{x}} \Vert }{n}. \end{aligned}$$
(3.4)

Consequently, from (3.3) and (3.4), we have

$$ \Vert {\mathbf{T}}_{\lambda \mu }{\mathbf{x}} \Vert =\sum_{i=1}^{n} \bigl\Vert T_{\lambda \mu }^{i}\textbf{x} \bigr\Vert _{PC_{1}}\leq \Vert \textbf{x} \Vert ,\quad \forall \textbf{x}\in \partial \textbf{K}_{r_{j}\theta_{j}}. $$
(3.5)

Then, by Lemma 2.5, we get

$$ {\mathbf{i}}(\mathbf{T}_{\lambda \mu }, \textbf{K}_{r_{j}\theta_{j}}, \textbf{K}_{\theta_{j}})=1. $$
(3.6)

Similarly, for \(\textbf{x}\in \partial \textbf{K}_{R_{j}\theta_{j}}\), we have \(\Vert \mathbf{T}_{\lambda \mu }\textbf{x} \Vert \leq \Vert \textbf{x} \Vert \), and it follows from Lemma 2.5 that

$$ {\mathbf{i}}(\mathbf{T}_{\lambda \mu }, \textbf{K}_{R_{j}\theta_{j}}, \textbf{K}_{\theta_{j}})=1. $$
(3.7)

On the other hand, letting

$$ {\mathbf{x}}\in {\mathbf{K}}_{\sigma_{j}\eta_{j}\theta_{j}}^{\eta_{j}}= \Biggl\{ \mathbf{x} \in {\mathbf{K}}_{\theta_{j}}:\Vert {\mathbf{x}} \Vert < \eta_{j},\min_{t\in [\theta_{j},1]}\sum_{i=1}^{n}x_{i}(t)> \sigma_{j}\eta _{j} \Biggr\} , $$

then \(\Vert \textbf{x} \Vert \leq \eta_{j}\). Hence, arguing as in the proof of (3.5), we have

$$ \Vert {\mathbf{T}}_{\lambda \mu }{\mathbf{x}} \Vert \leq \eta_{j}. $$
(3.8)

Furthermore, for \(\textbf{x}\in \bar{\textbf{K}}_{\sigma_{j}\eta_{j} \theta_{j}}^{\eta_{j}}\), we have \(\Vert {\mathbf{x}} \Vert \leq \eta_{j}, \min_{t\in [\theta_{j},1]}\sum_{i=1}^{n}x_{i}(t)\geq \sigma_{j}\eta _{j}\), and then it follows from \((H_{5})\) that

$$\begin{aligned} \min_{t\in [\theta_{j},1]} \bigl(T_{\lambda \mu }^{i}{\mathbf{x}} \bigr) (t)&= \min_{t\in [\theta_{j},1]} \Biggl[ \lambda \int_{0}^{1}G(t,s)g _{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+ \mu \sum_{k=1}^{m}G(t,t_{k})I _{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \Biggr] \\ & \geq A\lambda \int_{0}^{1}g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+A\mu \sum_{k=1}^{m}I_{k}^{i} \bigl(t_{k},\mathbf{x}(t_{k}) \bigr) \\ & \geq AN_{i}\lambda \int_{\theta_{j}}^{1}f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds+A \mu ml\eta _{j} \\ & \geq A N_{i}\lambda (1-\theta_{j})l\eta_{j}+A \mu ml\eta_{j} \\ & > A N_{i}\lambda_{0}(1-\theta_{j})l \eta_{j}+A\mu_{0}ml\eta_{j} \\ & \geq A N_{i}\lambda_{j}(1-\theta_{j})l \eta_{j}+A\mu_{j} ml\eta_{j} \\ & = \frac{\eta_{j}}{2}+\frac{\eta_{j}}{2} \\ & =\eta_{j}=\Vert \textbf{x} \Vert , \end{aligned}$$

which shows that

$$ \min_{t\in [\theta_{j},1]}\sum_{i=1}^{n} \bigl(T_{\lambda \mu }^{i}{\mathbf{x}} \bigr) (t)\geq \min_{t\in [\theta_{j},1]} \bigl(T_{\lambda \mu }^{i}{\mathbf{x}} \bigr) (t)> \Vert {\mathbf{x}} \Vert . $$
(3.9)

Letting \(\mathbf{x}_{0}=(x_{0}^{1},\ldots ,x_{0}^{i},\ldots ,x_{0}^{n})\) and \(\mathbf{F}(t,\mathbf{x})=(1-t)\mathbf{T}_{\lambda \mu }{\mathbf{x}}+t \textbf{x}_{0}\), where \(x_{0}^{i}\equiv \frac{\sigma_{j}\eta_{j}+\eta _{j}}{2}\), \(i=1,2,\ldots ,n\), we see that \(\mathbf{F}: J\times \bar{\textbf{K}} _{\sigma_{j}\eta_{j}\theta_{j}}^{\eta_{j}}\rightarrow \textbf{K}_{ \theta_{j}}\) is completely continuous, and from the analysis above we obtain, for \((t,\textbf{x})\in J\times \bar{\mathbf{K}}_{\sigma_{j}\eta _{j}\theta_{j}}^{\eta_{j}}\),

$$ {\mathbf{F}}(t,\mathbf{x})\in \bar{\mathbf{K}}_{\sigma_{j}\eta_{j}\theta_{j}} ^{\eta_{j}}. $$
(3.10)

Therefore, for \(t\in J\) and \(\mathbf{x}\in \partial {\mathbf{K}}_{\sigma_{j}\eta_{j} \theta_{j}}^{\eta_{j}}\), we have \(\mathbf{F}(t,\mathbf{x})\neq {\mathbf{x}}\). Hence, by the normality property and the homotopy invariance property of the fixed point index, we obtain

$$ {\mathbf{i}} \bigl(\mathbf{T}_{\lambda \mu }, \textbf{K}_{\sigma_{j}\eta_{j}\theta _{j}}^{\eta_{j}}, \textbf{K}_{\theta_{j}} \bigr)=\mathbf{i} \bigl(\mathbf{x}_{0}, \textbf{K}_{\sigma_{j}\eta_{j}\theta_{j}}^{\eta_{j}}, \textbf{K}_{ \theta_{j}} \bigr)=1. $$
(3.11)

Consequently, by the solution property of the fixed point index, \(\mathbf{T}_{\lambda \mu }\) has a fixed point \(\textbf{x}_{j}^{(1)}\in \bar{\textbf{K}}_{\sigma_{j}\eta_{j}\theta _{j}}^{\eta_{j}}\). By Lemma 2.2 and (2.14), it follows that \(\textbf{x}_{j}^{(1)}\) is a solution to system (1.1)–(1.2), and

$$ \bigl\Vert {\mathbf{x}}_{j}^{(1)} \bigr\Vert > \sigma_{j}\eta_{j}. $$

On the other hand, from (3.6), (3.7) and (3.11) together with the additivity of the fixed point index, we get

$$\begin{aligned}& \mathbf{i} \bigl(\mathbf{T}_{\lambda \mu },\mathbf{K}_{R_{j}\theta_{j}}/ \bigl( \bar{ \mathbf{K}}_{r_{j}\theta_{j}}\cup \bar{ \mathbf{K}}_{\sigma_{j}\eta _{j}\theta_{j}}^{\eta_{j}} \bigr),\mathbf{K}_{\theta_{j}} \bigr) \\& \quad =\mathbf{i}(\mathbf{T}_{\lambda \mu },\mathbf{K}_{R_{j}\theta_{j}}, \mathbf{K}_{ \theta_{j}})-\mathbf{i} \bigl(\mathbf{T}_{\lambda \mu }, \bar{ \mathbf{K}}_{\sigma _{j}\eta_{j}\theta_{j}}^{\eta_{j}},\mathbf{K}_{\theta_{j}} \bigr)- \mathbf{i}( \mathbf{T}_{\lambda \mu },\bar{\mathbf{K}}_{r_{j}\theta_{j}}, \mathbf{K}_{\theta _{j}}) =1-1-1=-1. \end{aligned}$$
(3.12)

Hence, by the solution property of the fixed point index, \(\mathbf{T}_{\lambda \mu }\) has a fixed point \(\mathbf{x}_{j}^{(2)}\in {\mathbf{K}}_{R_{j}\theta_{j}}/(\bar{\mathbf{K}}_{r_{j}\theta_{j}}\cup \bar{\mathbf{K}}_{\sigma_{j}\eta_{j}\theta_{j}}^{\eta_{j}})\). Since \(j\in \mathrm{N}\) was arbitrary, the proof is complete. □

The following corollary deals with the case \(p=\infty \).

Corollary 3.1

Assume that, for each natural number j, \((H_{1})\)–\((H_{5})\) hold. Let \(\{r_{j}\}_{j=1}^{\infty }\), \(\{\eta_{j}\}_{j=1}^{\infty }\) and \(\{R_{j}\}_{j=1}^{\infty }\) be such that

$$ R_{j+1}< \sigma_{j}r_{j}< r_{j}< \sigma_{j}\eta_{j}< \eta_{j} < R_{j},\quad j=1,2, \ldots . $$

Then there exist \(\lambda_{0}>0\), \(\mu_{0}>0\) such that, for \(\lambda >\lambda_{0}\), \(\mu >\mu_{0}\), system (1.1)–(1.2) has two infinite families of positive solutions \(\{\textbf{x}_{j}^{(1)}\}_{j=1}^{\infty }\) and \(\{\textbf{x}_{j}^{(2)}\}_{j=1}^{\infty }\).

Proof

Let \(\Vert G \Vert _{1}\Vert g_{i} \Vert _{\infty }\) replace \(\Vert G \Vert _{q} \Vert g_{i} \Vert _{p}\) and repeat the argument above. □

Finally, we consider the case of \(p=1\).

Corollary 3.2

Assume that, for each natural number j, \((H_{1})\)–\((H_{5})\) hold. Let \(\{r_{j}\}_{j=1}^{\infty }\), \(\{\eta_{j}\}_{j=1}^{\infty }\) and \(\{R_{j}\}_{j=1}^{\infty }\) be such that

$$ R_{j+1}< \sigma_{j}r_{j}< r_{j}< \sigma_{j}\eta_{j}< \eta_{j} < R_{j},\quad j=1,2, \ldots . $$

Then there exist \(\lambda_{0}>0\), \(\mu_{0}>0\) such that, for \(\lambda >\lambda_{0}\), \(\mu >\mu_{0}\), system (1.1)–(1.2) has two infinite families of positive solutions \(\{\textbf{x}_{j}^{(1)}\}_{j=1}^{\infty }\) and \(\{\textbf{x}_{j}^{(2)}\}_{j=1}^{\infty }\).

Proof

Let \(B\Vert g_{i} \Vert _{1}\) replace \(\Vert G \Vert _{q}\Vert g_{i} \Vert _{p}\) and repeat the argument of Theorem 3.1; this yields Corollary 3.2. □

Corollary 3.3

Assume that, for each natural number j, \((H_{1})\)–\((H_{3})\) and \((H_{5})\) hold. Let \(\{r_{j}\}_{j=1}^{\infty }\), \(\{\eta_{j}\}_{j=1}^{\infty }\) and \(\{R_{j}\}_{j=1}^{\infty }\) be such that

$$ R_{j+1}< \sigma_{j}r_{j}< r_{j}< \sigma_{j}\eta_{j}< \eta_{j} < R_{j},\quad j=1,2, \ldots . $$

Then there exist \(\lambda_{0}>0\) and \(\mu_{0}>0\) such that, for \(\lambda >\lambda_{0}\) and \(\mu >\mu_{0}\), system (1.1)–(1.2) has one infinite family of positive solutions.
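For intuition, sequences satisfying the nested inequalities in these corollaries are easy to construct. The following sketch builds one such choice numerically, taking \(\sigma_{j}\equiv \frac{1}{2}\) purely as a placeholder (the actual \(\sigma_{j}\) are the constants fixed earlier in the paper):

```python
# Illustrative construction of sequences with
#   R_{j+1} < sigma_j * r_j < r_j < sigma_j * eta_j < eta_j < R_j,
# using sigma_j = 1/2 as a placeholder constant.
sigma = 0.5
R, eta, r = [1.0], [], []
for j in range(10):
    eta.append(R[-1] / 2)   # eta_j = R_j/2      <  R_j
    r.append(R[-1] / 8)     # r_j = R_j/8        <  sigma*eta_j = R_j/4
    R.append(R[-1] / 32)    # R_{j+1} = R_j/32   <  sigma*r_j   = R_j/16

assert all(R[j + 1] < sigma * r[j] < r[j] < sigma * eta[j]
           < eta[j] < R[j] for j in range(10))
```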

Remark 3.1

Some ideas for treating the n-dimensional system come from [44].

Remark 3.2

Some ideas for establishing the existence of denumerably many positive solutions come from [45].

Remark 3.3

From the proof of Theorem 3.1, it is not difficult to see that \((H_{2})\) plays an important role in the proof that system (1.1)–(1.2) has two infinite families of positive solutions. As an example, we consider a family of diagonal matrix functions \(\mathbf{g}(t)\) as follows.

Example 3.1

We check that there exists a family of diagonal matrix functions \(\mathbf{g}(t)\) satisfying condition \((H_{2})\).

For ease of discussion, we treat the case \(n=2\). Define \(\mathbf{g}(t)\) by

$$ {\mathbf{g}}(t)= \begin{pmatrix} g_{1}(t) & 0 \\ 0 & g_{2}(t) \end{pmatrix} , $$

where \(g_{1}(t)\) and \(g_{2}(t)\) are singular at the points \(t_{j}^{\prime}\), \(j=1,2,\ldots\) , defined by

$$ t_{j}^{\prime}=\frac{2}{5}-\frac{1}{10}\sum_{i=1}^{j} \frac{1}{(2i-1)^{4}},\quad j=1,2,\ldots\,. $$
(3.13)

It follows from (3.13) that

$$\begin{aligned}& t_{1}^{\prime}=\frac{2}{5}-\frac{1}{10}= \frac{3}{10}, \\& t_{j}^{\prime}-t_{j+1}^{\prime}= \frac{1}{10(2j+1)^{4}}, \quad j=1,2,\ldots \,, \end{aligned}$$

and from \(\sum_{j=1}^{\infty }\frac{1}{(2j-1)^{4}}=\frac{\pi ^{4}}{96}\), we have

$$ t_{0}^{\prime}=\lim_{j\rightarrow \infty }t_{j}^{\prime}= \frac{2}{5}- \frac{1}{10}\sum_{j=1}^{\infty } \frac{1}{(2j-1)^{4}} = \frac{2}{5}-\frac{1}{10}\cdot \frac{\pi^{4}}{96}=\frac{2}{5}-\frac{\pi ^{4}}{960}>\frac{1}{10}. $$
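These facts are easy to confirm numerically. The following check is only a sanity test of (3.13), the gap formula, and the bound \(t_{0}^{\prime}>\frac{1}{10}\), not part of the proof:

```python
import math

def t_prime(j):
    """t'_j = 2/5 - (1/10) * sum_{i=1}^{j} 1/(2i-1)^4, as in (3.13)."""
    return 2/5 - sum(1/(2*i - 1)**4 for i in range(1, j + 1)) / 10

assert math.isclose(t_prime(1), 3/10)
# gap formula: t'_j - t'_{j+1} = 1/(10(2j+1)^4)
for j in range(1, 50):
    assert math.isclose(t_prime(j) - t_prime(j + 1),
                        1/(10*(2*j + 1)**4), rel_tol=1e-6)
# limit t'_0 = 2/5 - pi^4/960 lies strictly above 1/10
t0 = 2/5 - math.pi**4/960
assert math.isclose(t_prime(10**5), t0)
assert t0 > 1/10
```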

Let

$$ \tau_{1}=\frac{\sqrt{2}}{3} \biggl(\frac{\pi^{2}}{4}-1 \biggr),\qquad \tau_{2}=-\sqrt{2}e \biggl(\frac{\pi^{2}}{4}-1 \biggr). $$

Consider the functions

$$\begin{aligned}& g_{1}(t)=\sum_{j=1}^{\infty }g_{j}^{(1)}(t), \quad t\in J, \\& g_{2}(t)=\sum_{j=1}^{\infty }g_{j}^{(2)}(t), \quad t\in J, \end{aligned}$$

where

$$\begin{aligned}& g_{j}^{(1)}(t)= \textstyle\begin{cases} \frac{j+2}{(j+1)!(t_{j}^{\prime}+t_{j+1}^{\prime})},& t\in [0,\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2} ),\\ \frac{1}{\tau_{1}\sqrt{t_{j}^{\prime}-t}},&t\in [\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2},t_{j}^{\prime} ), \\ \frac{1}{\tau_{1}\sqrt{t-t_{j}^{\prime}}},& t\in [t_{j}^{\prime},\frac{t_{j}^{\prime}+t_{j-1}^{\prime}}{2} ], \\ \frac{j+2}{(j+1)!(2-t_{j}^{\prime}-t_{j-1}^{\prime})},& t\in (\frac{t_{j}^{\prime}+t_{j-1}^{\prime}}{2},1 ], \end{cases}\displaystyle \end{aligned}$$

and

$$\begin{aligned}& g_{j}^{(2)}(t)= \textstyle\begin{cases} \frac{2}{(2j-2)!(t_{j}^{\prime}+t_{j+1}^{\prime})},& t\in [0,\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2} ), \\ \frac{1}{\tau_{2}\sqrt{t_{j}^{\prime}-t}},& t\in [\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2},t_{j}^{\prime} ), \\ \frac{1}{\tau_{2}\sqrt{t-t_{j}^{\prime}}},& t\in [t_{j}^{\prime},\frac{t_{j}^{\prime}+t_{j-1}^{\prime}}{2} ], \\ \frac{2}{(2j-2)!(2-t_{j}^{\prime}-t_{j-1}^{\prime})},& t\in (\frac{t_{j}^{\prime}+t_{j-1}^{\prime}}{2},1 ]. \end{cases}\displaystyle \end{aligned}$$

From \(\sum_{j=1}^{\infty }\frac{j+2}{(j+1)!}=2e-3\), \(\sum_{j=1}^{\infty }\frac{2}{(2j-2)!}=e+e^{-1}\) and \(\sum_{j=1}^{\infty }\frac{1}{(2j-1)^{2}}=\frac{\pi^{2}}{8}\), we have

$$\begin{aligned} &\begin{aligned}[b] \sum_{j=1}^{\infty } \int_{0}^{1}g_{j}^{(1)}(t)\,dt&= \sum_{j=1}^{\infty } \biggl\{ \int_{0}^{\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2}}\frac{j+2}{(j+1)!(t _{j}^{\prime}+t_{j+1}^{\prime})}\,dt+ \int_{\frac{t_{j-1}^{\prime}+t_{j}^{\prime}}{2}}^{1}\frac{j+2}{(j+1)!(2-t _{j}^{\prime}-t_{j-1}^{\prime})}\,dt \\ &\quad {}+ \int_{\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2}}^{t_{j}^{\prime}}\frac{1}{\tau_{1}\sqrt{t _{j}^{\prime}-t}}\,dt+ \int_{t_{j}^{\prime}}^{\frac{t_{j-1}^{\prime}+t_{j}^{\prime}}{2}}\frac{1}{ \tau_{1}\sqrt{t-t_{j}^{\prime}}}\,dt \biggr\} \\ &=\sum_{j=1}^{\infty }\frac{j+2}{(j+1)!} + \frac{\sqrt{2}}{\tau _{1}}\sum_{j=1}^{\infty } \Bigl( \sqrt{t_{j}^{\prime}-t_{j+1}^{\prime}} +\sqrt{t_{j-1}^{\prime}-t_{j}^{\prime}} \Bigr) \\ &=2e-3+\frac{\sqrt{2}}{\tau_{1}}\sum_{j=1}^{\infty } \biggl( \frac{1}{(2j+1)^{2}} +\frac{1}{(2j-1)^{2}} \biggr) \\ &=2e-3+\frac{\sqrt{2}}{\tau_{1}} \biggl(\frac{\pi^{2}}{8}-1+\frac{\pi ^{2}}{8} \biggr) \\ &=2e-3+\frac{\sqrt{2}}{\tau_{1}} \biggl(\frac{\pi^{2}}{4}-1 \biggr) \\ &=2e-3+3=2e, \end{aligned} \end{aligned}$$
(3.14)
$$\begin{aligned} &\begin{aligned}[b] \sum_{j=1}^{\infty } \int_{0}^{1}g_{j}^{(2)}(t)\,dt&= \sum_{j=1}^{\infty } \biggl\{ \int_{0}^{\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2}}\frac{2}{(2j-2)!(t _{j}^{\prime}+t_{j+1}^{\prime})}\,dt+ \int_{\frac{t_{j-1}^{\prime}+t_{j}^{\prime}}{2}}^{1}\frac{2}{(2j-2)!(2-t _{j}^{\prime}-t_{j-1}^{\prime})}\,dt \\ &\quad {}+ \int_{\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2}}^{t_{j}^{\prime}}\frac{1}{\tau_{2}\sqrt{t _{j}^{\prime}-t}}\,dt+ \int_{t_{j}^{\prime}}^{\frac{t_{j-1}^{\prime}+t_{j}^{\prime}}{2}}\frac{1}{\tau_{2}\sqrt{t-t_{j}^{\prime}}}\,dt \biggr\} \\ &=\sum_{j=1}^{\infty }\frac{2}{(2j-2)!}+ \frac{\sqrt{2}}{\tau_{2}}\sum_{j=1}^{\infty } \Bigl( \sqrt{t_{j}^{\prime}-t_{j+1}^{\prime}}+\sqrt{t_{j-1}^{\prime}-t_{j}^{\prime}} \Bigr) \\ &=e+e^{-1}+\frac{\sqrt{2}}{\tau_{2}}\sum_{j=1}^{\infty } \biggl(\frac{1}{(2j+1)^{2}}+\frac{1}{(2j-1)^{2}}\biggr) \\ &=e+e^{-1}+\frac{\sqrt{2}}{\tau_{2}} \biggl(\frac{\pi^{2}}{8}-1+ \frac{\pi^{2}}{8} \biggr) \\ &=e+e^{-1}+\frac{\sqrt{2}}{\tau_{2}} \biggl(\frac{\pi^{2}}{4}-1 \biggr) \\ &=e+e^{-1}-e^{-1}=e. \end{aligned} \end{aligned}$$
(3.15)

Thus, from (3.14) and (3.15), it is easy to see that

$$\begin{aligned}& \int_{0}^{1}g_{1}(t)\,dt= \int_{0}^{1}\sum_{j=1}^{\infty }g_{j} ^{(1)}(t)\,dt =\sum_{j=1}^{\infty } \int_{0}^{1}g_{j}^{(1)}(t)\,dt=2e< \infty , \\& \int_{0}^{1}g_{2}(t)\,dt= \int_{0}^{1}\sum_{j=1}^{\infty }g_{j} ^{(2)}(t)\,dt =\sum_{j=1}^{\infty } \int_{0}^{1}g_{j}^{(2)}(t)\,dt=e< \infty . \end{aligned}$$

Therefore \(g_{1}(t),g_{2}(t)\in L^{1}[0,1]\), which shows that condition \((H_{2})\) holds.
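The series identities used in (3.14) and (3.15), together with the cancellations produced by \(\tau_{1}\) and \(\tau_{2}\), can be confirmed numerically; the following is a quick sanity check only:

```python
import math

e = math.e
# sum_{j>=1} (j+2)/(j+1)! = 2e - 3
s1 = sum((j + 2)/math.factorial(j + 1) for j in range(1, 100))
assert math.isclose(s1, 2*e - 3)
# sum_{j>=1} 2/(2j-2)! = 2*cosh(1) = e + 1/e
s2 = sum(2/math.factorial(2*j - 2) for j in range(1, 50))
assert math.isclose(s2, e + 1/e)
# sum_{j>=1} 1/(2j-1)^2 = pi^2/8 (slowly convergent, hence the loose tolerance)
s3 = sum(1/(2*j - 1)**2 for j in range(1, 10**5))
assert math.isclose(s3, math.pi**2/8, rel_tol=1e-4)
# tau_1 and tau_2 turn sqrt(2)*(pi^2/4 - 1) into the constants 3 and -1/e
tau1 = (math.sqrt(2)/3)*(math.pi**2/4 - 1)
tau2 = -math.sqrt(2)*e*(math.pi**2/4 - 1)
assert math.isclose((math.sqrt(2)/tau1)*(math.pi**2/4 - 1), 3)
assert math.isclose((math.sqrt(2)/tau2)*(math.pi**2/4 - 1), -1/e)
```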

References

  1. Onose, H.: Oscillatory properties of the first order nonlinear advance and delayed differential inequalities. Nonlinear Anal. 8, 171–180 (1984)


  2. Erbe, L.H., Freedman, H.I., Liu, X.Z., Wu, J.H.: Comparison principles for impulsive parabolic equations with applications to models of single species growth. J. Aust. Math. Soc. Ser. B, Appl. Math. 32, 382–400 (1991)


  3. Bainov, D., Minchev, E.: Estimates of solutions of impulsive parabolic equations and applications to the population dynamics. Publ. Math. 40, 85–94 (1996)


  4. Liu, X., Willms, A.: Impulsive controllability of linear dynamical systems with applications to maneuvers of spacecraft. Math. Probl. Eng. 4, 277–299 (1996)


  5. Lu, Z., Chi, X., Chen, L.: The effect of constant and pulse vaccination on SIR epidemic model with horizontal and vertical transmission. Math. Comput. Model. 36, 1039–1057 (2002)


  6. Zhang, H., Chen, L., Nieto, J.J.: A delayed epidemic model with stage-structure and pulses for pest management strategy. Nonlinear Anal., Real World Appl. 9, 1714–1726 (2008)


  7. D’Onofrio, A.: On pulse vaccination strategy in the SIR epidemic model with vertical transmission. Appl. Math. Lett. 18, 729–732 (2005)


  8. Pasquero, S.: Ideality criterion for unilateral constraints in time-dependent impulsive mechanics. J. Math. Phys. 46, 112904 (2005)


  9. Guo, Y.: Globally robust stability analysis for stochastic Cohen–Grossberg neural networks with impulse control and time-varying delays. Ukr. Math. J. 69, 1049–1060 (2017)


  10. Liu, Y., O’Regan, D.: Multiplicity results using bifurcation techniques for a class of boundary value problems of impulsive differential equations. Commun. Nonlinear Sci. Numer. Simul. 16, 1769–1775 (2011)


  11. Ma, R., Yang, B., Wang, Z.: Positive periodic solutions of first-order delay differential equations with impulses. Appl. Math. Comput. 219, 6074–6083 (2013)


  12. Zhang, H., Liu, L., Wu, Y.: Positive solutions for nth-order nonlinear impulsive singular integro-differential equations on infinite intervals in Banach spaces. Nonlinear Anal. 70, 772–787 (2009)


  13. Hao, X., Liu, L., Wu, Y.: Positive solutions for second order impulsive differential equations with integral boundary conditions. Commun. Nonlinear Sci. Numer. Simul. 16, 101–111 (2011)


  14. Jiang, J., Liu, L., Wu, Y.: Positive solutions for second order impulsive differential equations with Stieltjes integral boundary conditions. Adv. Differ. Equ. 2012, 1 (2012)


  15. Zhang, X., Feng, M., Ge, W.: Existence of solutions of boundary value problems with integral boundary conditions for second-order impulsive integro-differential equations in Banach spaces. J. Comput. Appl. Math. 233, 1915–1926 (2010)


  16. Yan, J.: Existence of positive periodic solutions of impulsive functional differential equations with two parameters. J. Math. Anal. Appl. 327, 854–868 (2007)


  17. Chen, X., Du, Z.: Existence of positive periodic solutions for a neutral delay predator-prey model with Hassell–Varley type functional response and impulse. Qual. Theory Dyn. Syst. 17(1), 67–80 (2018). https://doi.org/10.1007/s12346-017-0223-6


  18. Liu, J., Zhao, Z.: Variational approach to second-order damped Hamiltonian systems with impulsive effects. J. Nonlinear Sci. Appl. 9, 3459–3472 (2016)


  19. Zhou, J., Li, Y.: Existence and multiplicity of solutions for some Dirichlet problems with impulsive effects. Nonlinear Anal., Theory Methods Appl. 71, 2856–2865 (2009)


  20. Hao, X., Liu, L., Wu, Y.: Iterative solution for nonlinear impulsive advection-reaction-diffusion equations. J. Nonlinear Sci. Appl. 9, 4070–4077 (2016)


  21. Hao, X., Liu, L.: Mild solution of semilinear impulsive integro-differential evolution equation in Banach spaces. Math. Methods Appl. Sci. 40, 4832–4841 (2017)


  22. Bai, Z., Dong, X., Yin, C.: Existence results for impulsive nonlinear fractional differential equation with mixed boundary conditions. Bound. Value Probl. 2016(1), 63 (2016)


  23. Zhang, X., Yang, X., Ge, W.: Positive solutions of nth-order impulsive boundary value problems with integral boundary conditions in Banach spaces. Nonlinear Anal., Theory Methods Appl. 71, 5930–5945 (2009)


  24. Bai, L., Nieto, J.J., Wang, X.: Variational approach to non-instantaneous impulsive nonlinear differential equations. J. Nonlinear Sci. Appl. 10, 2440–2448 (2017)


  25. Tian, Y., Bai, Z.: Existence results for the three-point impulsive boundary value problem involving fractional differential equations. Comput. Math. Appl. 59, 2601–2609 (2010)


  26. Zhang, X., Feng, M.: Transformation techniques and fixed point theories to establish the positive solutions of second order impulsive differential equations. J. Comput. Appl. Math. 271, 117–129 (2014)


  27. Zuo, M., Hao, X., Liu, L., Cui, Y.: Existence results for impulsive fractional integro-differential equation of mixed type with constant coefficient and antiperiodic boundary conditions. Bound. Value Probl. 2017, 1 (2017)


  28. Liu, J., Zhao, Z.: Multiple solutions for impulsive problems with non-autonomous perturbations. Appl. Math. Lett. 64, 143–149 (2017)


  29. Sun, Y., Cho, Y., O’Regan, D.: Positive solutions for singular second order Neumann boundary value problems via a cone fixed point theorem. Appl. Math. Comput. 210, 80–86 (2009)


  30. Sovrano, E., Zanolin, F.: Indefinite weight nonlinear problems with Neumann boundary conditions. J. Math. Anal. Appl. 452, 126–147 (2017)


  31. Gao, S., Chen, L., Nieto, J.J., Torres, A.: Analysis of a delayed epidemic model with pulse vaccination and saturation incidence. Vaccine 24, 6037–6045 (2006)


  32. Chu, J., Sun, Y., Chen, H.: Positive solutions of Neumann problems with singularities. J. Math. Anal. Appl. 337, 1267–1272 (2008)


  33. Dang, H., Oppenheimer, S.F.: Existence and uniqueness results for some nonlinear boundary value problems. J. Math. Anal. Appl. 198, 35–48 (1996)


  34. Dong, Y.: A Neumann problem at resonance with the nonlinearity restricted in one direction. Nonlinear Anal. 51, 739–747 (2002)


  35. Erbe, L.H., Wang, H.: On the existence of positive solutions of ordinary differential equations. Proc. Am. Math. Soc. 120, 743–748 (1994)


  36. Ma, R.: Existence of positive radial solutions for elliptic systems. J. Math. Anal. Appl. 201, 375–386 (1996)


  37. Yazidi, N.: Monotone method for singular Neumann problem. Nonlinear Anal. 49, 589–602 (2002)


  38. Sun, J., Li, W.: Multiple positive solutions to second order Neumann boundary value problems. Appl. Math. Comput. 146, 187–194 (2003)


  39. Jiang, D., Liu, H.: Existence of positive solutions to second order Neumann boundary value problem. J. Math. Res. Exposition 20, 360–364 (2000)


  40. Liu, X., Li, Y.: Positive solutions for Neumann boundary value problems of second-order impulsive differential equations in Banach spaces. Abstr. Appl. Anal. 2012, Article ID 401923 (2012). https://doi.org/10.1155/2012/401923


  41. Zhang, X.: Parameter dependence of positive solutions for second-order singular Neumann boundary value problems with impulsive effects. Abstr. Appl. Anal. 2014, Article ID 968792 (2014). https://doi.org/10.1155/2014/968792


  42. Sun, F., Liu, L., Wu, Y.: Infinitely many sign-changing solutions for a class of biharmonic equation with p-Laplacian and Neumann boundary condition. Appl. Math. Lett. 73, 128–135 (2017)


  43. Guo, D., Lakshmikantham, V.: Nonlinear Problems in Abstract Cones. Academic Press, Inc., New York (1988)


  44. Wang, H.: On the number of positive solutions of nonlinear systems. J. Math. Anal. Appl. 281, 287–306 (2003)


  45. Kaufmann, E.R., Kosmatov, N.: A multiplicity result for a boundary value problem with infinitely many singularities. J. Math. Anal. Appl. 269, 444–453 (2002)



Acknowledgements

The authors are grateful to the anonymous referees for their constructive comments and suggestions, which have greatly improved this paper.

Funding

This work is sponsored by the National Natural Science Foundation of China (11401031), the Beijing Natural Science Foundation (1163007) and the Scientific Research Project of Construction for Scientific and Technological Innovation Service Capacity (KM201611232017, KM201611232019).

Author information


Contributions

The authors contributed equally in this article. They have all read and approved the final manuscript.

Corresponding author

Correspondence to Meiqiang Feng.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Li, P., Feng, M. & Wang, M. A class of singular n-dimensional impulsive Neumann systems. Adv Differ Equ 2018, 100 (2018). https://doi.org/10.1186/s13662-018-1558-2


Keywords

  • Multi-parameter
  • n-dimensional impulsive Neumann system
  • Infinitely many singularities
  • Matrix theory
  • Fixed point index theory and inequality technique