• Research
• Open Access

The shooting method and integral boundary value problems of third-order differential equation

https://doi.org/10.1186/s13662-016-0824-4

• Accepted: 30 March 2016

Abstract

In this paper, the existence of at least one positive solution for third-order differential equation boundary value problems with Riemann-Stieltjes integral boundary conditions is discussed. By applying the shooting method and the comparison principle, we obtain some new results which extend the known ones. Meanwhile, an example is worked out to demonstrate the main results.

Keywords

• shooting method
• positive solution
• third-order
• boundary conditions including Stieltjes integrals

1 Introduction

It is well known that third-order equations arise in many branches of applied mathematics and physics, for example in the deflection of a curved beam with a constant or varying cross section, three-layer beams, electromagnetic waves, and gravity-driven flows. There have been extensive studies on third-order differential equation BVPs (boundary value problems). Most of these results are obtained by applying topological degree theory, fixed point theorems on cones, the lower and upper solution method, critical point theory, and monotone techniques; we refer the reader to these works and the references therein.

Recently, attention has shifted to BVPs with Stieltjes integral boundary conditions, since such conditions provide a single framework covering both multipoint and integral-type boundary conditions. For more comments on the Riemann-Stieltjes integral boundary condition and its importance, we refer the reader to [4, 5] and other related work such as [6, 7].

In the existing literature, few papers deal with third-order differential equations with Riemann-Stieltjes integral boundary conditions. Graef and Webb studied the following problem:
$$\left \{ \textstyle\begin{array}{@{}l} u'''(t)=g(t)f(t,u(t)),\quad t\in(0,1),\\ u(0)=\alpha[u], \qquad u'(p)=0, \qquad u''(1)+\beta[u]=\lambda[u''], \end{array}\displaystyle \right .$$
where $$p > \frac{1}{2}$$, and $$\alpha[u]$$, $$\beta[u]$$, and $$\lambda[v]$$ are linear functionals on $$C[0, 1]$$ given by Riemann-Stieltjes integrals. The existence of multiple positive solutions was obtained by applying fixed point index theory.
In 2012, Jankowski used a fixed point theorem to establish the existence of at least three non-negative solutions of some nonlocal BVPs for the third-order differential equation
$$\left \{ \textstyle\begin{array}{@{}l} x'''(t)+h(t)f(t,x(\alpha(t)))=0, \quad t\in(0,1),\\ x(0)=x''(0)=0,\\ x(1)=\beta x(\eta)+\lambda[x],\quad \beta>0, \eta\in(0,1), \end{array}\displaystyle \right .$$
where λ denotes a linear functional on $$C(J)$$ given by $$\lambda [x]=\int^{1}_{0}x(t)\,d\Lambda(t)$$ involving a Stieltjes integral with a suitable function Λ of bounded variation.

In , the author applied the method of lower and upper solutions to generate an iterative technique and discussed the existence of solutions of nonlinear third-order ordinary differential equations with integral boundary conditions. Pang and Xie  investigated the existence of concave positive solutions and established corresponding iterative schemes for a third-order differential equation with Riemann-Stieltjes integral boundary conditions using the monotone iterative technique.

It is well known that the classical shooting method can be used effectively to establish existence and multiplicity results for differential equation BVPs. To some extent, this approach has an advantage over the traditional methods; see the references therein for details.

Using the shooting method, Henderson obtained solutions of the three-point BVP for the second-order equation
$$y''=f\bigl(x,y,y'\bigr), \qquad y(x_{1})=y_{1},\qquad y(x_{2})-y(x_{3})=y_{2},$$
where $$f : (a,b) \times\mathbb{R}^{2}\to\mathbb{R}$$ is continuous, $$a< x_{1}< x_{2}< x_{3}< b$$, and $$y_{1},y_{2}\in\mathbb{R}$$.
By applying the shooting method and the comparison principle, Wang investigated the existence of positive solutions for the following BVP with a Riemann-Stieltjes integral boundary condition:
$$\left \{ \textstyle\begin{array}{@{}l} u''(t)+a(t)f(u(t))=0, \quad 0< t< 1,\\ u(0)=0, \qquad u(1)=\alpha\int^{\eta}_{0}u(s)\,ds, \end{array}\displaystyle \right .$$
where $$f\in C([0,\infty);[0,\infty))$$, $$0<\eta<1$$ and $$\alpha\geq0$$ are given constants, and $$0<\alpha\eta^{2}<2$$.
However, to the best of our knowledge, no paper has yet considered the existence of positive solutions for third-order differential equations via the shooting method. Motivated by the excellent work mentioned above, in this paper we employ the shooting method to establish criteria for the existence of positive solutions to the following third-order differential equation with integral boundary conditions:
$$\left \{ \textstyle\begin{array}{@{}l} u'''(t) + h(t)f(u(t), u'(t))=0,\quad 0 < t < 1,\\ u'(0) = \alpha[u'], \qquad u''(0)=0, \qquad u(1)= \beta[u], \end{array}\displaystyle \right .$$
(1.1)
where $$\alpha[u] = \int_{0}^{1}u(s)\,dA(s)$$ and $$\beta[u] = \int_{0}^{1}u(s)\,dB(s)$$ are linear functionals on $$C[0,1]$$ given by Riemann-Stieltjes integrals, and $$A(t)$$, $$B(t)$$ are suitable functions of bounded variation.
Set
$$f^{x}=\limsup_{v\to x}\max_{u\in[0,+\infty)} \frac{f(u,v)}{v},\qquad f_{x}=\liminf_{v\to x}\min_{u\in[0,+\infty)} \frac{f(u,v)}{v}.$$
In this paper, we always assume
(H1):

$$f \in C([0,\infty)\times[0,\infty); [0,\infty))$$, $$f(u, v)\not \equiv0$$;

(H2):

$$h\in C([0,1];[0,\infty))$$;

(H3):

$$\int^{1}_{0}\,dA(t)> 1$$, $$0< \int^{1}_{0}\,dB(t)<1$$.

2 Preliminaries

Define an operator $$A : C[0,1] \to C^{1}[0,1]$$ as
$$Ay(t)= \int_{0}^{1}G(t,s)y(s)\,ds$$
(2.1)
for $$t \in[0,1]$$, where
$$G(t,s)=\frac{1}{1-\int^{1}_{0}\,dB(t)} \left \{ \textstyle\begin{array}{@{}l@{\quad}l} \int^{s}_{0}\,dB(t),& 0\leq s \leq t\leq1 ,\\ 1-\int^{1}_{s}\,dB(t),& 0\leq t \leq s\leq1, \end{array}\displaystyle \right .$$
is the Green function for the following first-order differential equation:
$$\left \{ \textstyle\begin{array}{@{}l} u'(t)=y(t),\quad 0 < t < 1,\\ u(1) = \beta[u]. \end{array}\displaystyle \right .$$
Let $$y=u'$$. Then BVP (1.1) is equivalent to the following second-order BVP:
$$\left \{ \textstyle\begin{array}{@{}l} y''(t) + h(t)f(Ay(t),y(t))=0,\quad 0 < t < 1,\\ y(0) = \alpha[y], \qquad y'(0)=0. \end{array}\displaystyle \right .$$
(2.2)

Lemma 2.1

If y is a positive solution of (2.2), then $$u=Ay$$ is a positive solution of (1.1).

Proof

Assume that y is a positive solution of (2.2); then $$y(t)> 0$$ for $$t\in(0,1)$$, and it follows from $$u(t)=Ay(t)$$ that $$u(t)$$ satisfies (1.1). Suppose on the contrary that there is a $$t_{0} \in(0,1)$$ such that $$u(t_{0})= \min_{t\in(0,1)}u(t)\leq0$$. Then $$u'(t_{0})=0$$ and $$u''(t_{0}) \geq0$$, which yields $$y(t_{0})=u'(t_{0})=0$$. This contradicts the assumption that y is a positive solution of (2.2). Hence $$u(t)>0$$ for all $$t\in(0,1)$$. □

The principle of the shooting method is to convert the BVP into an IVP (initial value problem): we seek a suitable initial value m so that equation (2.2) is supplemented with the initial value conditions
$$\left \{ \textstyle\begin{array}{@{}l} y''(t) + h(t)f(Ay(t),y(t))=0,\quad 0 < t < 1,\\ y(0) = m, \qquad y'(0)=0. \end{array}\displaystyle \right .$$
(2.3)

Under assumptions (H1)-(H3), denote by $$y(t,m)$$ the solution of IVP (2.3). We assume that f is regular enough (e.g., locally Lipschitz) to guarantee that $$y(t,m)$$ is uniquely defined and depends continuously on both t and m; a discussion of this point can be found in the literature. Therefore the solution of IVP (2.3) exists.

Denote
$$k(m)=\frac{y(0,m)}{\int^{1}_{0}y(t,m)\,dA(t)}, \qquad \varphi (m)=y(0,m)- \int^{1}_{0}y(t,m)\,dA(t).$$

Then solving (2.2) is equivalent to finding an $$m^{*}$$ such that $$k(m^{*}) = 1$$ or, equivalently, $$\varphi(m^{*})=0$$.
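The shooting principle just described can be sketched numerically. The following Python sketch is illustrative only: it borrows $$h(t)=\frac{1}{2}t+\frac{1}{2}$$ and the density $$\alpha(t)=\frac{3}{10}t+\frac{11}{10}$$ of $$dA(t)$$ from the example in Section 3 and, as a simplifying assumption, takes f to depend on y alone ($$f(y)=\frac{1}{10}y+1$$), so that IVP (2.3) becomes a standard local IVP. It then bisects on m until $$\varphi(m)\approx0$$.

```python
import math

def rk4(rhs, y0, t0, t1, n):
    """Classical fourth-order Runge-Kutta for a first-order system y' = rhs(t, y)."""
    step = (t1 - t0) / n
    ts, ys = [t0], [list(y0)]
    t, y = t0, list(y0)
    for _ in range(n):
        k1 = rhs(t, y)
        k2 = rhs(t + step/2, [yi + step/2*ki for yi, ki in zip(y, k1)])
        k3 = rhs(t + step/2, [yi + step/2*ki for yi, ki in zip(y, k2)])
        k4 = rhs(t + step, [yi + step*ki for yi, ki in zip(y, k3)])
        y = [yi + step/6*(a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += step
        ts.append(t)
        ys.append(list(y))
    return ts, ys

h_fun = lambda t: 0.5*t + 0.5        # h(t), borrowed from the example in Section 3
alpha = lambda t: 0.3*t + 1.1        # density of dA(t); its integral is 1.25 > 1, as in (H3)
f_y = lambda v: v/10 + 1             # simplifying assumption: f depends on y only

def phi(m, n=400):
    """phi(m) = y(0,m) - integral of y(t,m) dA(t) for y'' + h(t) f(y) = 0, y(0)=m, y'(0)=0."""
    rhs = lambda t, y: [y[1], -h_fun(t)*f_y(y[0])]
    ts, ys = rk4(rhs, [m, 0.0], 0.0, 1.0, n)
    # trapezoidal rule for the Stieltjes integral with dA(t) = alpha(t) dt
    integral = sum((ts[i+1] - ts[i]) *
                   (ys[i][0]*alpha(ts[i]) + ys[i+1][0]*alpha(ts[i+1])) / 2
                   for i in range(n))
    return m - integral

# phi(0.01) > 0 > phi(10): bisect to locate m* with phi(m*) = 0
a, b = 0.01, 10.0
for _ in range(60):
    c = 0.5*(a + b)
    if phi(a)*phi(c) <= 0:
        b = c
    else:
        a = c
m_star = 0.5*(a + b)
```

For this simplified problem the bisection converges to a positive root $$m^{*}$$, and $$y(t,m^{*})$$ is then the shooting solution of the corresponding boundary value problem.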

Lemma 2.2

(Sturm comparison theorem) 

Let $$\varphi_{1}$$ and $$\varphi_{2}$$ be non-trivial solutions of the equations
$$y''+q_{1}(x)y=0, \qquad y''+q_{2}(x)y=0,$$
respectively, on an interval I; here $$q_{1}$$ and $$q_{2}$$ are continuous functions such that $$q_{1}(x) \leq q_{2}(x)$$ on I. Then between any two consecutive zeros $$x_{1}$$ and $$x_{2}$$ of $$\varphi_{1}$$, there exists at least one zero of $$\varphi_{2}$$ unless $$q_{1}(x) \equiv q_{2}(x)$$ on $$(x_{1},x_{2})$$.
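A quick numerical illustration of the Sturm comparison theorem, assuming the concrete pair $$q_{1}(x)=1$$ and $$q_{2}(x)=4$$ with solutions $$\sin x$$ and $$\sin 2x$$:

```python
import math

def zeros_on(f, a, b, n=2000):
    """Locate zeros of f on (a, b) by bisection on sign changes over a uniform grid."""
    zs = []
    xs = [a + (b - a)*i/n for i in range(n + 1)]
    for x0, x1 in zip(xs, xs[1:]):
        if f(x0)*f(x1) < 0:
            lo, hi = x0, x1
            for _ in range(60):
                mid = 0.5*(lo + hi)
                if f(lo)*f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            zs.append(0.5*(lo + hi))
    return zs

# q1 = 1 <= q2 = 4: phi1 solves y'' + y = 0, phi2 solves y'' + 4y = 0
phi1 = math.sin
phi2 = lambda x: math.sin(2*x)

z1 = zeros_on(phi1, 0.1, 7.0)            # consecutive zeros of phi1: pi, 2*pi
z2 = zeros_on(phi2, 0.0, 7.0)
for lo, hi in zip(z1, z1[1:]):
    # Sturm: phi2 vanishes strictly between consecutive zeros of phi1
    assert any(lo < z < hi for z in z2)
```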

Lemma 2.3

Let $$y(t,m)$$, $$z(t,m)$$, and $$Z(t,m)$$ be the solutions of the following IVPs, respectively:
\begin{aligned}& y''(t)+F(t)y(t)=0, \qquad y(0)=m, \qquad y'(0)=0, \\& z''(t) + g(t)z(t)=0,\qquad z(0)=m, \qquad z'(0)=0, \\& Z''(t)+G(t)Z(t)=0, \qquad Z(0) =m, \qquad Z'(0)=0, \end{aligned}
and suppose that $$F(t)$$, $$g(t)$$, and $$G(t)$$ are continuous functions defined on $$[0, 1]$$ such that
$$g(t) \leq F(t) \leq G(t), \quad t\in[0,1].$$
If $$Z(t,m)$$ does not vanish in $$(0,1]$$, then for any $$0 \leq\xi\leq s \leq1$$, we have
$$\frac{z(\xi,m)}{z(s, m)} \leq\frac{y(\xi,m)}{y(s,m)} \leq\frac{Z(\xi ,m)}{Z(s,m)},$$
(2.4)
and hence, for any $$0 \leq s \leq1$$, we have
$$\frac{z(0,m)}{\int_{0}^{1}z(s,m)\,dA(s)} \leq\frac{y(0,m)}{\int _{0}^{1}y(s,m)\,dA(s)} \leq\frac{Z(0,m)}{\int_{0}^{1}Z(s,m)\,dA(s)}.$$
(2.5)

Proof

The proof of (2.4) can be found in the literature. The continuity of the integrands implies the existence of the Riemann integrals, and passing to the limit in the definition of the Stieltjes integral yields (2.5). □
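Inequalities (2.4) and (2.5) can be checked numerically in a concrete constant-coefficient case. The sketch below assumes $$g\equiv\frac{1}{4}$$, $$F\equiv1$$, $$G\equiv2$$, so the three IVP solutions are cosines and $$Z$$ does not vanish on $$(0,1]$$; for (2.5) it also assumes the density $$\alpha(t)=\frac{3}{10}t+\frac{11}{10}$$ used later in the example.

```python
import math

# Constant coefficients g = 1/4 <= F = 1 <= G = 2; with y(0)=m, y'(0)=0 the IVP
# solutions are cosines.  Z(t) = m*cos(sqrt(2) t) has its first zero at
# pi/(2*sqrt(2)) ~ 1.11 > 1, so Z does not vanish on (0, 1].
m = 1.0
z = lambda t: m*math.cos(0.5*t)
y = lambda t: m*math.cos(t)
Z = lambda t: m*math.cos(math.sqrt(2)*t)

grid = [i/50 for i in range(51)]
for xi in grid:                          # check (2.4) for 0 <= xi <= s <= 1
    for s in grid:
        if xi <= s:
            assert z(xi)/z(s) <= y(xi)/y(s) + 1e-12
            assert y(xi)/y(s) <= Z(xi)/Z(s) + 1e-12

# check (2.5) with dA(t) = alpha(t) dt, alpha(t) = 0.3 t + 1.1 (from the example)
alpha = lambda t: 0.3*t + 1.1
def quad(f, n=1000):                     # trapezoidal rule for the weighted integral
    return sum((f(i/n)*alpha(i/n) + f((i+1)/n)*alpha((i+1)/n)) / (2*n)
               for i in range(n))

k_z, k_y, k_Z = z(0)/quad(z), y(0)/quad(y), Z(0)/quad(Z)
assert k_z <= k_y <= k_Z
```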

Lemma 2.4

Assume that (H1)-(H2) hold and $$0 < \int^{1}_{0}\,dA(t)< 1$$. Then BVP (2.2) has no positive solution.

Proof

Suppose that BVP (2.2) has a positive solution $$y(t)$$; then $$y(t)=y(t,m)$$ is a positive solution of IVP (2.3) for some $$m > 0$$. We compare the solution $$y(t, m)$$ of IVP (2.3) with the solution $$z(t)=m$$ of
$$z''(t) + 0z(t)=0, \qquad z(0) = m, \qquad z'(0)=0.$$
By Lemma 2.3, we have
$$\frac{y(0,m)}{\int^{1}_{0}y(s,m)\,dA(s)}\geq\frac{z(0)}{\int ^{1}_{0}z(s)\,dA(s)}= \frac{m}{m\int^{1}_{0}\,dA(s)}=\frac{1}{\int ^{1}_{0}\,dA(s)}.$$
Since y solves (2.2), $$y(0,m)=\int^{1}_{0}y(s,m)\,dA(s)$$, so the left-hand side equals 1, and hence $$\int^{1}_{0}\,dA(s)\geq1$$. This contradicts $$0 < \int^{1}_{0}\,dA(t)< 1$$, so no positive solution exists.

Thus a positive solution of (2.2) requires $$\int^{1}_{0}\,dA(s)\geq1$$; in (H3) we assume the strict inequality $$\int^{1}_{0}\,dA(s)> 1$$ so that (3.1) can be satisfied. □

3 Main results

In the following, we assume that $$A(t)$$ has a continuous derivative $$\alpha(t)$$ with $$\alpha(t)> 1$$ for $$t\in[0,1]$$, so that $$\int^{1}_{0}\,dA(t)=\int^{1}_{0}\alpha(t)\,dt > 1$$.

For the sake of convenience, we denote
\begin{aligned}& \max_{0 \leq t \leq1}\bigl\{ h(t)\bigr\} =h^{L},\qquad \min _{0 \leq t \leq1}\bigl\{ h(t)\bigr\} =h^{l}, \\& \max_{0 \leq t \leq1}\bigl\{ \alpha(t)\bigr\} =\alpha^{L},\qquad \min _{0 \leq t \leq 1}\bigl\{ \alpha(t)\bigr\} =\alpha^{l}. \end{aligned}

It is obvious that $$\alpha^{L}\geq\alpha^{l} > 1$$.

Lemma 3.1

Assume that (H1)-(H3) hold. Then there exists a number $$x=A_{1}\in (0,\frac{\pi}{2})$$ such that
$$g_{1}(x):=\frac{\alpha^{l} \sin x}{x} \geq1$$
(3.1)
and a number $$x=A_{2} \in(0,\frac{\pi}{2})$$ such that
$$g_{2}(x):=\frac{\alpha^{L} \sin x}{x} \leq1.$$
(3.2)

Proof

From (H3) and Figure 1, we can easily obtain Lemma 3.1. □

Figure 1: The function $$\frac{\sin (x)}{x}$$.
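Since $$\frac{\sin x}{x}$$ decreases from 1 to $$\frac{2}{\pi}$$ on $$(0,\frac{\pi}{2}]$$, suitable $$A_{1}$$, $$A_{2}$$ exist near the two endpoints. A small Python check, using the values $$\alpha^{l}=\frac{11}{10}$$, $$\alpha^{L}=\frac{7}{5}$$ and the choices $$A_{1}=\frac{1}{2}$$, $$A_{2}=\frac{3}{2}$$ from the example in Section 3:

```python
import math

g = lambda x, a: a*math.sin(x)/x         # the function a * sin(x)/x

# sin(x)/x decreases from 1 (as x -> 0+) to 2/pi on (0, pi/2], so for alpha^l > 1
# the inequality g1 >= 1 holds near 0, while g2 <= 1 can hold near pi/2.
# Values taken from the example in Section 3:
alpha_l, alpha_L = 1.1, 1.4
A1, A2 = 0.5, 1.5

assert 0 < A1 < math.pi/2 and 0 < A2 < math.pi/2
assert g(A1, alpha_l) >= 1               # inequality (3.1)
assert g(A2, alpha_L) <= 1               # inequality (3.2)
```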

Theorem 3.2

Assume that (H1)-(H3) hold. Suppose one of the following conditions holds:
1. (i)

$$0\leq f^{0} < \frac{\underline{A}^{2}}{h^{L}}$$, $$f_{\infty} > \frac{\bar{A}^{2}}{h^{l}}$$;

2. (ii)

$$0\leq f^{\infty} < \frac{\underline{A}^{2}}{h^{L}}$$, $$f_{0} > \frac{\bar{A}^{2}}{h^{l}}$$.

Then problem (1.1) has at least one positive solution, where
$$\underline{A}=\min\{A_{1}, A_{2}\}, \qquad \bar{A}=\max \{A_{1}, A_{2}\},$$
and $$A_{1}$$, $$A_{2}$$ are defined in (3.1) and (3.2), respectively.

Proof

As we mentioned above, BVP (1.1) having a positive solution is equivalent to BVP (2.2) having a positive solution.

(i) Since $$0\leq f^{0} < \frac{\underline{A}^{2}}{h^{L}}$$, there exists a positive number r such that
$$\frac{f(Ay,y)}{y} < \frac{\underline{A}^{2}}{h^{L}}\leq\frac{A^{2}_{1}}{h^{L}} , \quad 0< y\leq r.$$
(3.3)
Let $$0< m^{*}_{1}< r$$. From the Sturm comparison theorem and the concavity of $$y(t,m^{*}_{1})$$ on $$[0,1]$$, we have
$$0\leq h(t)f\bigl(Ay\bigl(t,m^{*}_{1}\bigr),y \bigl(t,m^{*}_{1}\bigr)\bigr)< h^{L}\frac{A^{2}_{1}}{h^{L}}y \bigl(t,m^{*}_{1}\bigr) = A^{2}_{1}y \bigl(t,m^{*}_{1}\bigr), \quad t\in(0,1].$$
(3.4)
Let $$Z(t)= m^{*}_{1}\cos(A_{1}t)$$ for $$t\in[0,1]$$, then $$Z(t)$$ satisfies the following IVP:
$$Z''(t)+A^{2}_{1}Z(t)=0, \qquad Z(0) =m^{*}_{1}, \qquad Z'(0)=0.$$
(3.5)
From (3.4), Lemma 2.3, and Lemma 3.1, we have
\begin{aligned} k\bigl(m^{*}_{1}\bigr)&= \frac{y(0,m^{*}_{1})}{\int^{1}_{0}y(t,m^{*}_{1})\,dA(t)}\leq \frac {Z(0,m^{*}_{1})}{\int^{1}_{0}Z(t,m^{*}_{1})\,dA(t)} =\frac{1}{\int^{1}_{0}\cos(A_{1}t)\,dA(t)} \\ &\leq\frac{1}{\alpha^{l} \int^{1}_{0}\cos(A_{1}t)\,dt}=\frac{A_{1}}{\alpha^{l} \sin A_{1}} \leq1, \end{aligned}
(3.6)
that is, $$\varphi(m^{*}_{1})\leq0$$.
On the other hand, the second inequality in (i) implies that there exists a number L large enough such that
$$\frac{f(Ay,y)}{y}> \frac{\bar{A}^{2}}{h^{l}} \geq\frac{A^{2}_{2}}{h^{l}}, \quad y\geq L,$$
(3.7)
and there exists a positive number $$\epsilon< A_{2}$$ small enough that
$$\frac{f(Ay,y)}{y}> \frac{(A_{2}+\epsilon)^{2}}{h^{l}}, \quad y\geq L.$$
(3.8)

Next, we will find a positive number $$m^{*}_{2}$$ such that $$\varphi (m^{*}_{2})\geq0$$.

We claim that there exist a value $$m^{*}_{2}$$ and a positive number σ such that
$$0< \frac{A_{2}}{A_{2}+\epsilon}\leq\sigma\leq1\quad \mbox{and}\quad y\bigl(t,m^{*}_{2} \bigr)\geq L \quad\mbox{for } t\in(0,\sigma].$$

Since the solution $$y(t,m)$$ is concave and $$y'(0,m)=0$$, it crosses the line $$y=L$$ at most once on $$(0,1]$$, where L is the constant defined in (3.7). When this crossing exists, we denote its time by $$\bar{\delta}_{m}$$ and set $$I_{m}=(0,\bar{\delta}_{m}]\subseteq(0,1]$$. If $$y(1,m)\geq L$$, we set $$\bar{\delta}_{m}=1$$.

The discussion is divided into three steps.

Step 1. We claim that there exists a value $$m_{0}$$ large enough such that $$0\leq y(t,m_{0})\leq L$$ for $$t\in[\bar{\delta}_{m_{0}},1]$$ and $$y(t,m_{0})\geq L$$ for $$t\in I_{m_{0}}$$.

Otherwise, suppose that $$y(t,m)\leq L$$ for all $$t\in[0,1]$$ no matter how large m is. Integrating both sides of equation (2.3) twice from 0 to t, we have
$$y(t,m)=m- \int^{t}_{0}(t-s)h(s)f\bigl(Ay(s,m),y(s,m)\bigr)\,ds.$$
(3.9)
Since A defined in (2.1) is a continuous operator, $$f(Ay,y)$$ attains a maximum for $$y \in[0,L]$$; denote $$L_{f}=\max_{y\in[0,L]}f(Ay,y)$$. Hence, evaluating (3.9) at $$t=1$$, we have
$$m= y(1,m)+ \int^{1}_{0}(1-s)h(s)f\bigl(Ay(s,m),y(s,m)\bigr)\,ds \leq L+L_{f}h^{L}.$$
(3.10)
If we choose $$m>L+L_{f}h^{L}$$, then (3.10) leads to a contradiction.

Since $$y(t,m)$$ is continuous and concave, there exists a number $$m_{0}$$ large enough such that $$y(t,m_{0})\geq L$$ for $$t\in I_{m_{0}}$$.

Step 2. There exists a monotonically increasing sequence $$\{m_{k}\}$$ for which $$\bar{\delta}_{m_{k}}$$ is increasing in k, that is,
$$I_{m_{0}}\subset I_{m_{1}}\subset\cdots\subset I_{m_{k}}\subset\cdots\subseteq(0,1],$$
and $$y(t,m_{k})\geq L$$ for $$t\in I_{m_{k}}$$.
We prove that
$$\bar{\delta}_{m_{k-1}}< \bar{\delta}_{m_{k}},\quad k=1, 2, \ldots\mbox{ for } m_{k-1}< m_{k}.$$
(3.11)
Since f guarantees that $$y(t,m)$$ is uniquely defined, the solutions $$y(t,m_{k-1})$$ and $$y(t,m_{k})$$ do not intersect on the interval $$[\bar{\delta}_{m_{k-1}},1)$$. It follows from
$$y(0,m_{k})=m_{k}>m_{k-1}=y(0,m_{k-1})$$
that
$$y(\bar{\delta}_{m_{k-1}},m_{k})>y(\bar{\delta}_{m_{k-1}},m_{k-1}).$$
Thus we have (3.11).
For the case $$k=1$$, the relationship of m and $$I_{m}$$ is shown in Figure 2.

Figure 2: The relationship of m and $$I_{m}$$.

Step 3. We seek a value $$m^{*}_{2}$$ and a positive number σ such that $$0< \frac{A_{2}}{A_{2}+\epsilon}\leq\sigma\leq1$$ and $$y(t,m^{*}_{2})\geq L$$ for $$t\in(0,\sigma]$$.

Following Steps 1 and 2 and the extension principle of solutions, there exists a positive integer n large enough such that
$$\bar{\delta}_{m_{n}}\geq\frac{A_{2}}{A_{2}+\epsilon}.$$
Taking $$m^{*}_{2}=m_{n}$$ and $$\sigma=\bar{\delta}_{m_{n}}$$, we obtain
$$\sigma(A_{2}+\epsilon)\geq A_{2}.$$
(3.12)

In the following, we prove that $$k(m^{*}_{2})\geq1$$ for the selected $$m^{*}_{2}$$ and σ.

Set $$z(t)= m^{*}_{2} \cos\sigma(A_{2}+\epsilon)t$$, then $$z(t)$$ satisfies the following IVP:
$$z''(t)+\sigma^{2}(A_{2}+ \epsilon)^{2}z(t)=0,\qquad z(0) =m^{*}_{2},\qquad z'(0)=0,$$
(3.13)
where $$\sigma\leq1$$. From (3.8), we have
$$\frac{f(Ay,y)}{y}> \frac{\sigma^{2}(A_{2}+\epsilon)^{2}}{h^{l}}, \quad y\geq L.$$
Further, note that either $$y(1,m^{*}_{2})>L$$ (in which case $$\sigma=1$$) or $$y(1,m^{*}_{2})\leq y(\sigma, m^{*}_{2})=L$$. Then by Lemma 2.3 and Lemma 3.1 we have
\begin{aligned} k\bigl(m^{*}_{2}\bigr)&= \frac{y(0,m^{*}_{2})}{\int^{1}_{0}y(t,m^{*}_{2})\,dA(t)}\geq \frac {z(0,m^{*}_{2})}{\int^{1}_{0}z(t,m^{*}_{2})\,dA(t)} =\frac{1}{\int^{1}_{0}\cos[\sigma(A_{2}+\epsilon)t]\,dA(t)} \\ &\geq\frac{1}{\alpha^{L} \int^{1}_{0}\cos[\sigma(A_{2}+\epsilon)t]\,dt}=\frac {\sigma(A_{2}+\epsilon)}{\alpha^{L} \sin[\sigma(A_{2}+\epsilon)]}\geq\frac {A_{2}}{\alpha^{L}\sin A_{2}} \geq1, \end{aligned}
(3.14)
which implies $$\varphi(m^{*}_{2})\geq0$$.

Since φ is continuous, $$\varphi(m^{*}_{1})\leq0$$ by (3.6), and $$\varphi(m^{*}_{2})\geq0$$ by (3.14), the intermediate value theorem yields an $$m^{*}$$ between $$m^{*}_{1}$$ and $$m^{*}_{2}$$ with $$\varphi(m^{*})=0$$, so that $$y(t,m^{*})$$ is a solution of (2.2). Hence $$u(t,m^{*})=Ay(t,m^{*})$$ is a solution of (1.1).

We now prove (ii).

Set $$z(t)= m^{*}_{3} \cos\sigma(A_{1}+\epsilon)t$$ and $$Z(t)= m^{*}_{4}\cos (A_{2}t)$$ for $$t\in[0,1]$$, then $$z(t)$$ and $$Z(t)$$ satisfy the following IVPs, respectively:
\begin{aligned}& z''(t)+\sigma^{2}(A_{1}+ \epsilon)^{2}z(t)=0,\qquad z(0) =m^{*}_{3}, \qquad z'(0)=0, \end{aligned}
(3.15)
\begin{aligned}& Z''(t)+A^{2}_{2}Z(t)=0,\qquad Z(0) =m^{*}_{4}, \qquad Z'(0)=0, \end{aligned}
(3.16)
where $$\sigma\leq1$$.
Similar to (3.6) and (3.14), it follows from (2.3) and (3.1)-(3.2) that
\begin{aligned} k\bigl(m^{*}_{3}\bigr)&= \frac{y(0,m^{*}_{3})}{\int^{1}_{0}y(t,m^{*}_{3})\,dA(t)}\leq \frac {z(0,m^{*}_{3})}{\int^{1}_{0}z(t,m^{*}_{3})\,dA(t)} =\frac{1}{\int^{1}_{0}\cos[\sigma(A_{1}+\epsilon)t]\,dA(t)} \\ &\leq\frac{1}{\alpha^{l} \int^{1}_{0}\cos[\sigma(A_{1}+\epsilon)t]\,dt}=\frac {\sigma(A_{1}+\epsilon)}{\alpha^{l} \sin[\sigma(A_{1}+\epsilon)]}\leq\frac {A_{1}}{\alpha^{l}\sin A_{1}} \leq1, \end{aligned}
(3.17)
where $$0< \sigma\leq\frac{A_{1}}{A_{1}+\epsilon}\leq1$$, and
\begin{aligned} k\bigl(m^{*}_{4}\bigr)&= \frac{y(0,m^{*}_{4})}{\int^{1}_{0}y(t,m^{*}_{4})\,dA(t)}\geq \frac {Z(0,m^{*}_{4})}{\int^{1}_{0}Z(t,m^{*}_{4})\,dA(t)} =\frac{1}{\int^{1}_{0}\cos(A_{2} t)\,dA(t)} \\ &\geq\frac{1}{\alpha^{L} \int^{1}_{0}\cos(A_{2} t)\,dt}=\frac{A_{2}}{\alpha^{L} \sin A_{2}} \geq1. \end{aligned}
(3.18)

From (3.17) and (3.18), as in case (i) the intermediate value theorem yields an $$m^{*}$$ between $$m^{*}_{3}$$ and $$m^{*}_{4}$$ such that $$y(t,m^{*})$$ is a solution of (2.2), and hence $$u(t,m^{*})=Ay(t,m^{*})$$ is a solution of (1.1). The proof of the theorem is complete. □

Example

Consider the BVP
$$\left \{ \textstyle\begin{array}{@{}l} u'''(t) + h(t)f(u(t), u'(t))=0,\quad 0 < t < 1,\\ u'(0) = \int^{1}_{0}u'(t)\,dA(t), \qquad u''(0)=0,\qquad u(1)= \int ^{1}_{0}u(t)\,dB(t), \end{array}\displaystyle \right .$$
(3.19)
where
\begin{aligned}& h(t)=\frac{1}{2}t+\frac{1}{2},\qquad \alpha(t)=\frac{3}{10}t+ \frac {11}{10}, \quad t\in[0,1], \\& f(u, v)=\cos u+\frac{1}{10}v+1, \quad\mbox{for } u\in[0,+\infty ), v \in[0,+\infty). \end{aligned}
Simple calculation shows that
$$h^{L}=1,\qquad h^{l}=\frac{1}{2},\qquad \alpha^{L}=\frac{7}{5}, \qquad\alpha ^{l}= \frac{11}{10},\qquad f^{\infty}=\frac{1}{10}, \qquad f_{0}=\infty.$$
We can choose $$A_{1}=\frac{1}{2}$$ and $$A_{2}=\frac{3}{2}$$ such that $$\frac{\alpha^{l} \sin A_{1}}{A_{1}}\geq1$$ and $$\frac{\alpha^{L} \sin A_{2}}{A_{2}}\leq1$$. Therefore (H1)-(H3) and condition (ii) of Theorem 3.2 are satisfied, which implies that (3.19) has at least one positive solution $$u(t)$$.
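The constants of the example can be confirmed numerically. The sketch below verifies $$h^{L}$$, $$h^{l}$$, $$\alpha^{L}$$, $$\alpha^{l}$$, assumption (H3), the value $$f^{\infty}=\frac{1}{10}$$ (probed at a large v), and the first inequality in condition (ii); the limit $$f_{0}$$ is not probed, since limits at 0 are delicate to sample numerically.

```python
import math

h = lambda t: 0.5*t + 0.5                # h(t) = t/2 + 1/2
alpha = lambda t: 0.3*t + 1.1            # alpha(t) = 3t/10 + 11/10
f = lambda u, v: math.cos(u) + v/10 + 1  # f(u, v)

ts = [i/1000 for i in range(1001)]
hL, hl = max(map(h, ts)), min(map(h, ts))
aL, al = max(map(alpha, ts)), min(map(alpha, ts))
assert abs(hL - 1) < 1e-9 and abs(hl - 0.5) < 1e-9
assert abs(aL - 1.4) < 1e-9 and abs(al - 1.1) < 1e-9

# (H3): integral of dA(t) = integral of alpha(t) dt = 1.25 > 1 (trapezoid, exact here)
intA = sum((alpha(ts[i]) + alpha(ts[i+1])) / 2000 for i in range(1000))
assert abs(intA - 1.25) < 1e-9

# f^infty = 1/10: max over u of f(u,v)/v = (2 + v/10)/v -> 1/10 as v -> infinity
v = 1e6
us = [i*2*math.pi/100 for i in range(101)]
f_inf_est = max(f(u, v) for u in us)/v
assert abs(f_inf_est - 0.1) < 1e-3

# first inequality of condition (ii): f^infty < A_underline^2 / h^L = (1/2)^2 / 1 = 1/4
assert f_inf_est < 0.25
```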

Declarations

Acknowledgements

The work is supported by the Chinese Universities Scientific Fund (Project No. 2016LX002).