
Existence results and the monotone iterative technique for nonlinear fractional differential systems involving fractional integral boundary conditions

Advances in Difference Equations 2017, 2017:264

Received: 29 March 2017

Accepted: 3 August 2017

Published: 1 September 2017


Abstract

By establishing a comparison result and using the monotone iterative technique combined with the method of upper and lower solutions, we investigate the existence of extremal solutions for nonlinear fractional differential systems with integral boundary conditions. An example is presented to illustrate the applicability of the main result.


Keywords: fractional differential system; upper and lower solutions; monotone iterative technique; integral boundary conditions



1 Introduction

In this paper, we consider the following differential equations with integral boundary conditions:
$$ \textstyle\begin{cases} -D^{\alpha}x(t)= f(t,x(t)),\quad t\in[0,1], \\ x(0)=0, \\ D^{\alpha-1}x(1)=I^{\beta}g(\eta,x(\eta))+k=\frac{1}{\Gamma(\beta)}\int _{0}^{\eta}(\eta-s)^{\beta-1}g(s,x(s))\,ds+k, \end{cases} \tag{1.1} $$
where \(D^{\alpha}\) denotes the standard Riemann-Liouville fractional derivative of order α and \(I^{\beta}\) the Riemann-Liouville fractional integral of order β.
Throughout this paper, we always suppose that

\(1<\alpha<2\), \(\beta>1\), \(0<\eta<1\), \(k\in\mathbb{R}\), and \(f\in C([0,1]\times\mathbb{R},\mathbb{R})\), \(g\in C([0,1]\times\mathbb {R},\mathbb{R})\).

Recently, much attention has been paid to the existence of solutions for fractional differential systems with initial or two-point boundary conditions, obtained by the monotone iterative technique combined with the method of upper and lower solutions; for details, see [1–7]. Up to now, however, three-point and fractional integral boundary value problems for fractional differential systems have seldom been considered. The aim of this paper is to investigate the existence of extremal solutions for the fractional equation (1.1) involving Riemann-Liouville fractional integral boundary conditions. To the best of our knowledge, most papers and books dealing with fractional derivatives of order \(\alpha\in(1,2)\) require the nonlinear term f to satisfy monotonicity conditions in the unknown function x or its derivatives. No such monotonicity conditions are required in this paper.

The paper is organized as follows: Preliminaries are in Section 2. Then in Section 3 we construct the monotone sequences of solutions and prove their uniform convergence to the solutions of the systems. Finally, an example is presented to demonstrate the accuracy of the new approach.

2 Preliminaries

In this section, we deduce some preliminary results which will be used in the next section.

Denote \(C_{\alpha}[0,1]=\{x :x \in C[0,1], D^{\alpha}x(t)\in C[0,1]\}\), endowed with the norm \(\|x\|_{\alpha}=\|x\|+\|D^{\alpha}x\|\), where \(\|x\| =\max_{0\leq t\leq1}|x(t)|\) and \(\|D^{\alpha}x\|=\max_{0\leq t\leq 1}|D^{\alpha}x(t)|\). Then \((C_{\alpha}[0,1], {\|\cdot\|}_{\alpha})\) is a Banach space.

Definition 2.1

We say that \(x(t)\in C_{\alpha}[0,1]\) is a lower solution of problem (1.1) if
$$\textstyle\begin{cases} -D^{\alpha}x(t)\leq f(t,x(t)), \quad t\in[0,1], \\ x(0)=0, \\ D^{\alpha-1}x(1)\leq I^{\beta}g(\eta,x(\eta))+k, \end{cases} $$
and it is an upper solution of (1.1) if the above inequalities are reversed.
For convenience, we list the following assumptions:

(H1) Assume that \(x_{0},y_{0}\in C_{\alpha}[0,1]\) are lower and upper solutions of problem (1.1), respectively, and that \(x_{0}(t)\leq y_{0}(t)\), \(t\in[0,1]\).

(H2) There exists \(M(t)\in C[0,1]\) such that
$$f(t,y)-f(t,x)\geq-M(t) (y-x), $$
for \(x_{0}(t)\leq x(t)\leq y(t) \leq y_{0}(t)\), \(t\in[0,1]\).
(H3) There exists a constant \(\lambda\geq0\) such that
$$g(t,y)-g(t,x)\geq\lambda(y-x), $$
for \(x_{0}(t)\leq x(t)\leq y(t) \leq y_{0}(t)\), \(t\in[0,1]\).



(H4) \(\Gamma(\alpha+\beta)>\lambda\eta^{\alpha+\beta-1}\).

(H5) \(2\Gamma(\alpha+\beta)\int_{0}^{1} |M(s)| \,ds<\Gamma(\alpha)[\Gamma (\alpha+\beta)-\lambda\eta^{\alpha+\beta-1}]\).

(H6) For any \(t\in(0,1)\),
$$\Gamma(2-\alpha)t^{\alpha}M(t)>1-\alpha \quad \mbox{and}\quad \Gamma(2-\alpha)\lambda\eta^{\beta}< \Gamma(\beta). $$

Lemma 2.1


Let \(h\in C[0,1]\), \(b\in\mathbb{R}\), and \(\Gamma (\alpha+\beta)\neq\lambda\eta^{\alpha+\beta-1}\); then the fractional boundary value problem
$$ \textstyle\begin{cases} -D^{\alpha}x(t)= h(t),\quad t\in[0,1], \\ x(0)=0, \\ D^{\alpha-1}x(1)=\lambda I^{\beta}x(\eta)+b=\frac{\lambda}{\Gamma(\beta )}\int_{0}^{\eta}(\eta-s)^{\beta-1}x(s)\,ds+b, \end{cases} \tag{2.1} $$
has the following integral representation of the solution:
$$x(t)= \int_{0}^{1}G(t,s)h(s)\,ds+\frac{b\Gamma(\alpha+\beta)t^{\alpha -1}}{\Gamma(\alpha)[\Gamma(\alpha+\beta)- \lambda\eta^{\alpha+\beta-1}]}, $$
where
$$G(t,s)=\frac{1}{\Delta} \textstyle\begin{cases} [\Gamma(\alpha+\beta)-\lambda(\eta-s)^{\alpha+\beta-1}]t^{\alpha-1} \\ \quad {}- [\Gamma(\alpha+\beta)-\lambda\eta^{\alpha+\beta-1}](t-s)^{\alpha-1}, &s\leq t,s\leq\eta; \\ \Gamma(\alpha+\beta)t^{\alpha-1}-\lambda(\eta-s)^{\alpha+\beta -1}t^{\alpha-1}, &t\leq s\leq\eta; \\ \Gamma(\alpha+\beta)[t^{\alpha-1}-(t-s)^{\alpha-1}]+\lambda\eta^{\alpha +\beta-1} (t-s)^{\alpha-1}, &\eta\leq s\leq t; \\ \Gamma(\alpha+\beta)t^{\alpha-1}, &s\geq t,s\geq\eta, \end{cases} $$
and \(\Delta=\Gamma(\alpha)[\Gamma(\alpha+\beta)-\lambda\eta^{\alpha +\beta-1}]\).

Lemma 2.2


If (H4) holds, then Green’s function \(G(t,s)\) satisfies
$$0\leq G(t,s)\leq\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)[\Gamma(\alpha +\beta)-\lambda\eta^{\alpha+\beta-1}]}\bigl(1+t^{\alpha-1}\bigr). $$
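Because \(G(t,s)\) is piecewise defined, it is easy to mis-transcribe; the estimate of Lemma 2.2 can be sanity-checked numerically. The sketch below (Python, not part of the proof) uses the data of the example in Section 3, namely \(\alpha=\beta=\frac{3}{2}\), \(\eta=\frac{1}{4}\), \(\lambda=1\); the grid resolution and tolerances are arbitrary choices.

```python
import math

# Example data from Section 3 (any data satisfying (H4) would do).
alpha, beta, eta, lam = 1.5, 1.5, 0.25, 1.0
gab = math.gamma(alpha + beta)
# Delta = Gamma(alpha) * [Gamma(alpha+beta) - lambda * eta^(alpha+beta-1)]
Delta = math.gamma(alpha) * (gab - lam * eta ** (alpha + beta - 1))

def G(t, s):
    """Green's function of Lemma 2.1 (piecewise in t and s)."""
    if s <= t and s <= eta:
        v = ((gab - lam * (eta - s) ** (alpha + beta - 1)) * t ** (alpha - 1)
             - (gab - lam * eta ** (alpha + beta - 1)) * (t - s) ** (alpha - 1))
    elif t <= s <= eta:
        v = (gab - lam * (eta - s) ** (alpha + beta - 1)) * t ** (alpha - 1)
    elif eta <= s <= t:
        v = (gab * (t ** (alpha - 1) - (t - s) ** (alpha - 1))
             + lam * eta ** (alpha + beta - 1) * (t - s) ** (alpha - 1))
    else:  # s >= t and s >= eta
        v = gab * t ** (alpha - 1)
    return v / Delta

def bound(t):
    """Right-hand side of the estimate in Lemma 2.2."""
    return gab * (1 + t ** (alpha - 1)) / Delta

# Verify 0 <= G(t,s) <= bound(t) on a uniform grid of [0,1]^2.
grid = [i / 200 for i in range(201)]
ok = all(-1e-12 <= G(t, s) <= bound(t) + 1e-12 for t in grid for s in grid)
```

For these data, \(\Delta=\Gamma(\frac{3}{2})[2-(\frac{1}{4})^{2}]\approx1.717\), in agreement with the computation in Section 3.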

Lemma 2.3

Let \(b\in\mathbb{R}\), \(\sigma(t)\in C[0,1]\), and let (H4) and (H5) hold. Then the boundary value problem
$$ \textstyle\begin{cases} -D^{\alpha}x(t)= \sigma(t)-M(t)x(t),\quad t\in[0,1], \\ x(0)=0, \\ D^{\alpha-1}x(1)=\lambda I^{\beta}x(\eta)+b, \end{cases} \tag{2.2} $$
has a unique solution \(x(t)\in C[0,1]\).


Proof

It follows from Lemma 2.1 that problem (2.2) is equivalent to the following integral equation:
$$x(t)= \int_{0}^{1}G(t,s)\bigl[\sigma(s)-M(s)x(s)\bigr] \,ds+\frac{b\Gamma(\alpha+\beta )t^{\alpha-1}}{\Gamma(\alpha)[\Gamma(\alpha+\beta)- \lambda\eta^{\alpha +\beta-1}]}, \quad \forall t\in[0,1]. $$
Define the operator \(A:C[0,1]\rightarrow C[0,1]\) by
$$Ax(t)= \int_{0}^{1}G(t,s)\bigl[\sigma(s)-M(s)x(s)\bigr] \,ds+\frac{b\Gamma(\alpha+\beta )t^{\alpha-1}}{\Gamma(\alpha)[\Gamma(\alpha+\beta)- \lambda\eta^{\alpha +\beta-1}]}, \quad \forall t\in[0,1]. $$
For any \(x,y \in C[0,1]\), by (H4) and Lemma 2.2, we have
$$\begin{aligned} \bigl\vert Ax(t)-Ay(t) \bigr\vert \leq& \int_{0}^{1}G(t,s) \bigl\vert M(s) \bigr\vert \cdot \bigl\vert x(s)-y(s) \bigr\vert \,ds \\ \leq&\frac{\Gamma(\alpha+\beta)(1+t^{\alpha-1})\| x-y\| }{\Gamma(\alpha)[\Gamma(\alpha+\beta)- \lambda\eta^{\alpha+\beta -1}]} \int_{0}^{1} \bigl\vert M(s) \bigr\vert \,ds \\ \leq&\frac{2\Gamma(\alpha+\beta)\| x-y\|}{\Gamma(\alpha )[\Gamma(\alpha+\beta)- \lambda\eta^{\alpha+\beta-1}]} \int_{0}^{1} \bigl\vert M(s) \bigr\vert \,ds. \end{aligned}$$
By (H5), \(\frac{2\Gamma(\alpha+\beta )\int_{0}^{1}| M(s)| \,ds}{\Gamma(\alpha)[\Gamma(\alpha+\beta)- \lambda \eta^{\alpha+\beta-1}]}<1\), and hence \(| Ax(t)-Ay(t)|<\| x-y\|\) for every \(t\in[0,1]\). Consequently,
$$\| Ax-Ay\|< \| x-y\|. $$
By the Banach fixed point theorem, the operator A has a unique fixed point. That is, (2.2) has a unique solution. □
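The contraction argument is constructive: under (H5), Picard iteration of A converges geometrically to the unique solution of (2.2). The following sketch (Python, standard library only) discretizes the integral equation with the trapezoidal rule for the data of the example in Section 3, with the illustrative choices \(\sigma(t)=\frac{1}{5}t^{3}\) and \(b=1.2\); the grid size and tolerance are arbitrary.

```python
import math

# Example data from Section 3; sigma and b are illustrative choices.
alpha, beta, eta, lam, b = 1.5, 1.5, 0.25, 1.0, 1.2
gab = math.gamma(alpha + beta)
Delta = math.gamma(alpha) * (gab - lam * eta ** (alpha + beta - 1))
M = lambda t: 0.25 * t ** 1.5          # M(t) = t^{3/2}/4
sigma = lambda t: t ** 3 / 5           # forcing term

def G(t, s):
    """Green's function of Lemma 2.1."""
    if s <= t and s <= eta:
        v = ((gab - lam * (eta - s) ** (alpha + beta - 1)) * t ** (alpha - 1)
             - (gab - lam * eta ** (alpha + beta - 1)) * (t - s) ** (alpha - 1))
    elif t <= s <= eta:
        v = (gab - lam * (eta - s) ** (alpha + beta - 1)) * t ** (alpha - 1)
    elif eta <= s <= t:
        v = (gab * (t ** (alpha - 1) - (t - s) ** (alpha - 1))
             + lam * eta ** (alpha + beta - 1) * (t - s) ** (alpha - 1))
    else:
        v = gab * t ** (alpha - 1)
    return v / Delta

n = 120
ts = [i / n for i in range(n + 1)]
w = [(0.5 if i in (0, n) else 1.0) / n for i in range(n + 1)]  # trapezoid weights

def A(x):
    """Discretized operator A from the proof of Lemma 2.3."""
    return [sum(G(t, s) * (sigma(s) - M(s) * xs) * ws
                for s, xs, ws in zip(ts, x, w))
            + b * gab * t ** (alpha - 1) / Delta for t in ts]

# (H5) yields a contraction constant strictly below 1.
q = 2 * gab * 0.1 / Delta              # int_0^1 |M(s)| ds = 1/10

x, err = [0.0] * (n + 1), 1.0
for _ in range(60):                    # Picard iteration x <- A(x)
    xn = A(x)
    err = max(abs(u - v) for u, v in zip(xn, x))
    x = xn
    if err < 1e-10:
        break
```

Since \(\sigma\geq0\) and \(b\geq0\), Lemma 2.6 predicts that the computed fixed point is nonnegative, which the iteration confirms.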

Lemma 2.4


Assume that \(x(t)\in C[0,1]\) satisfies the following conditions:
(i) \(D^{\alpha}x(t)\in C[0,1]\), for \(\alpha\in(1,2)\);

(ii) \(x(t)\) attains its global minimum at \(t_{0}\in(0,1)\).

Then
$$D^{\alpha}x(t)|_{t=t_{0}}\geq\frac{1-\alpha}{\Gamma(2-\alpha)}t_{0}^{-\alpha}x(t_{0}). $$

Lemma 2.5


Assume that \(x(t)\in C[0,1]\) satisfies the following conditions:
(i) \(D^{\delta}x(t)\in C[0,1]\), for \(\delta\in(0,1)\);

(ii) \(x(t)\) attains its global minimum at \(t_{0}\in(0,1]\).

Then
$$D^{\delta}x(t)|_{t=t_{0}}\leq\frac{t_{0}^{-\delta}}{\Gamma(1-\delta)}x(t_{0}). $$

Lemma 2.6

Assume that (H6) holds and that \(x(t)\in C[0,1]\) satisfies \(D^{\alpha}x(t)\in C[0,1]\) and
$$ \textstyle\begin{cases} -D^{\alpha}x(t)\geq-M(t)x(t), \quad t\in[0,1], \\ x(0)=0, \\ D^{\alpha-1}x(1)\geq\lambda I^{\beta}x(\eta), \end{cases} \tag{2.3} $$
then \(x(t)\geq0\), \(\forall t\in[0,1]\).


Proof

Suppose, on the contrary, that \(x(t)\geq0\) on \([0,1]\) does not hold. By the continuity of \(x(t)\), there exists \(t_{1}\in(0,1]\) such that \(x(t_{1})=\min_{t\in[0,1]}x(t)<0\).

Case (i). If \(t_{1}\in(0,1)\), by Lemma 2.4 and (H6), we have
$$0\geq D^{\alpha}x(t)|_{t=t_{1}}-M(t_{1})x(t_{1}) \geq \biggl[\frac{1-\alpha }{\Gamma(2-\alpha)}t_{1}^{-\alpha}-M(t_{1}) \biggr]x(t_{1})>0, $$
which is a contradiction.
Case (ii). If \(t_{1}=1\), by Lemma 2.5, one gets
$$D^{\alpha-1} x(t)|_{t=1}\leq\frac{x(1)}{\Gamma(2-\alpha)}. $$
On the other hand, from the boundary condition of (2.3) and (H6), we obtain
$$\begin{aligned} D^{\alpha-1}x(1) \geq&\lambda I^{\beta}x(\eta)=\frac{\lambda}{\Gamma (\beta)} \int_{0}^{\eta}(\eta-s)^{\beta-1}x(s)\,ds \\ =&\frac{\lambda}{\Gamma(\beta)}\cdot(\eta-\xi)^{\beta-1}\cdot x(\xi )\cdot\eta,\quad 0< \xi< \eta< 1 \\ \geq&\frac{\lambda}{\Gamma(\beta)}\cdot(\eta-\xi)^{\beta-1}\cdot x(1)\cdot\eta \\ \geq&\frac{\lambda}{\Gamma(\beta)}\cdot\eta^{\beta-1}\cdot x(1)\cdot \eta \\ >&\frac{x(1)}{\Gamma(2-\alpha)}, \end{aligned}$$
which is a contradiction. Therefore, we obtain \(x(t)\geq0\), \(\forall t\in[0,1]\). The proof is complete. □

3 Main results

In this section, we present the main result of our paper, which ensures the existence of extremal solutions for problem (1.1).

Theorem 3.1

Suppose that conditions (H1)-(H6) hold. Then problem (1.1) has extremal solutions \(x^{*},y^{*}\in[x_{0},y_{0}]\). Moreover, there exist monotone iterative sequences \(\{x_{n}\},\{y_{n}\}\subset C_{\alpha}[0,1]\) such that \(x_{n}\rightarrow x^{*}\), \(y_{n}\rightarrow y^{*}\) uniformly on \([0,1]\) as \(n\rightarrow\infty\), and
$$x_{0}\leq x_{1}\leq\cdots\leq x_{n}\leq\cdots \leq x^{*}\leq y^{*}\leq\cdots\leq y_{n}\leq\cdots\leq y_{1}\leq y_{0}. $$


Proof

For \(n=0,1,2,\ldots\) , we define
$$ \textstyle\begin{cases} -D^{\alpha}x_{n+1}(t)= f(t,x_{n}(t))-M(t)[x_{n+1}(t)-x_{n}(t)],\quad t\in[0,1], \\ x_{n+1}(0)=0, \\ D^{\alpha-1}x_{n+1}(1)=I^{\beta}\{g(\eta,x_{n}(\eta))+\lambda[x_{n+1}(\eta )-x_{n}(\eta)]\}+k \\ \hphantom{D^{\alpha-1}x_{n+1}(1)}=\lambda I^{\beta}x_{n+1}(\eta)+I^{\beta}[g(\eta ,x_{n}(\eta))-\lambda x_{n}(\eta)]+k, \end{cases} \tag{3.1} $$
$$ \textstyle\begin{cases} -D^{\alpha}y_{n+1}(t)= f(t,y_{n}(t))-M(t)[y_{n+1}(t)-y_{n}(t)],\quad t\in[0,1], \\ y_{n+1}(0)=0, \\ D^{\alpha-1}y_{n+1}(1)=I^{\beta}\{g(\eta,y_{n}(\eta))+\lambda[y_{n+1}(\eta )-y_{n}(\eta)]\}+k \\ \hphantom{D^{\alpha-1}y_{n+1}(1)}=\lambda I^{\beta}y_{n+1}(\eta)+I^{\beta}[g(\eta ,y_{n}(\eta))-\lambda y_{n}(\eta)]+k. \end{cases} \tag{3.2} $$
In view of Lemma 2.3, for any \(n\in\mathbb{N}\), problems (3.1) and (3.2) have unique solutions \(x_{n+1}(t)\) and \(y_{n+1}(t)\), respectively, so the sequences are well defined. First, we show that
$$x_{0}(t)\leq x_{1}(t)\leq y_{1}(t)\leq y_{0}(t),\quad t\in[0,1]. $$
Let \(w(t)=x_{1}(t)-x_{0}(t)\). The definitions of \(x_{1}(t)\) and (H1) yield
$$\textstyle\begin{cases} -D^{\alpha}w(t)\geq-M(t)w(t),\quad t\in[0,1], \\ w(0)=0, \\ D^{\alpha-1}w(1)\geq\lambda I^{\beta} w(\eta). \end{cases} $$
According to Lemma 2.6, we have \(w(t)\geq0\), \(t\in[0,1]\), that is, \(x_{1}(t)\geq x_{0}(t)\). Using the same reasoning, we can show that \(y_{0}(t)\geq y_{1}(t)\), for all \(t\in[0,1]\).
Now, we put \(p(t)=y_{1}(t)-x_{1}(t)\). From (H2) and (H3), we get
$$\begin{aligned} -D^{\alpha }p(t) =&f\bigl(t,y_{0}(t)\bigr)-M(t) \bigl[y_{1}(t)-y_{0}(t)\bigr]-f\bigl(t,x_{0}(t) \bigr)+M(t)\bigl[x_{1}(t)-x_{0}(t)\bigr] \\ \geq&-M(t)\bigl[y_{0}(t)-x_{0}(t)\bigr]-M(t) \bigl[y_{1}(t)-y_{0}(t)\bigr]+M(t)\bigl[x_{1}(t)-x_{0}(t) \bigr] \\ =&-M(t)p(t). \end{aligned}$$
Also \(p(0)=0\), and
$$\begin{aligned} D^{\alpha-1}p(1) =&I^{\beta}\bigl\{ g\bigl(\eta,y_{0}(\eta) \bigr)+\lambda\bigl[y_{1}(\eta )-y_{0}(\eta)\bigr]\bigr\} - I^{\beta}\bigl\{ g\bigl(\eta,x_{0}(\eta)\bigr)+\lambda \bigl[x_{1}(\eta)-x_{0}(\eta)\bigr]\bigr\} \\ =&I^{\beta}\bigl\{ g\bigl(\eta,y_{0}(\eta)\bigr)-g\bigl( \eta,x_{0}(\eta)\bigr)+\lambda\bigl[y_{1}(\eta) -y_{0}(\eta)\bigr]-\lambda\bigl[x_{1}(\eta)-x_{0}( \eta)\bigr]\bigr\} \\ \geq&I^{\beta}\bigl\{ \lambda\bigl[y_{0}(\eta)-x_{0}( \eta)\bigr]+\lambda\bigl[y_{1}(\eta) -y_{0}(\eta)\bigr]- \lambda\bigl[x_{1}(\eta)-x_{0}(\eta)\bigr]\bigr\} \\ =&\lambda I^{\beta}p(\eta). \end{aligned}$$
These results and Lemma 2.6 imply that \(y_{1}(t)\geq x_{1}(t)\), \(t\in[0,1]\).
In the next step, we show that \(x_{1}\), \(y_{1}\) are lower and upper solutions of problem (1.1), respectively. Note that
$$\begin{aligned} -D^{\alpha }x_{1}(t) =&f\bigl(t,x_{0}(t)\bigr)-f \bigl(t,x_{1}(t)\bigr)+f\bigl(t,x_{1}(t)\bigr)-M(t) \bigl[x_{1}(t)-x_{0}(t)\bigr] \\ \leq&M(t)\bigl[x_{1}(t)-x_{0}(t)\bigr]+f \bigl(t,x_{1}(t)\bigr)-M(t)\bigl[x_{1}(t)-x_{0}(t) \bigr] \\ =&f\bigl(t,x_{1}(t)\bigr). \end{aligned}$$
Also \(x_{1}(0)=0\), and
$$\begin{aligned} D^{\alpha-1}x_{1}(1) =&I^{\beta}\bigl\{ g\bigl( \eta,x_{0}(\eta)\bigr)-g\bigl(\eta,x_{1}(\eta )\bigr)+g\bigl( \eta,x_{1}(\eta)\bigr) +\lambda\bigl[x_{1}( \eta)-x_{0}(\eta)\bigr]\bigr\} +k \\ \leq&I^{\beta}\bigl\{ \lambda\bigl[x_{0}(\eta)-x_{1}( \eta)\bigr]+g\bigl(\eta,x_{1}(\eta )\bigr)+\lambda\bigl[x_{1}( \eta)-x_{0}(\eta)\bigr]\bigr\} +k \\ =&I^{\beta}g\bigl(\eta,x_{1}(\eta)\bigr)+k \end{aligned}$$
by assumptions (H2) and (H3). This proves that \(x_{1}\) is a lower solution of problem (1.1). Similarly, we can prove that \(y_{1}\) is an upper solution of (1.1).
Using mathematical induction, we see that
$$x_{0}(t)\leq x_{1}(t)\leq\cdots\leq x_{n}(t)\leq x_{n+1}(t)\leq y_{n+1}(t) \leq y_{n}(t)\leq\cdots \leq y_{1}(t)\leq y_{0}(t),\quad t\in[0,1], $$
since all iterates belong to \(C_{\alpha}[0,1]\). By standard arguments, \(\{x_{n}\}\) and \(\{y_{n}\}\) are uniformly bounded and equicontinuous, so by the Arzelà-Ascoli theorem \(\{x_{n}\}\) and \(\{y_{n}\}\) converge uniformly on \([0,1]\), say to \(x^{*}(t)\) and \(y^{*}(t)\), respectively. That is,
$$\lim_{n\rightarrow\infty}x_{n}(t)=x^{*}(t),\qquad \lim _{n\rightarrow\infty }y_{n}(t)=y^{*}(t), \quad t\in[0,1]. $$

Moreover, \(x^{*}(t)\) and \(y^{*}(t)\) are the solutions of problem (1.1) and \(x_{0}\leq x^{*}\leq y^{*}\leq y_{0}\) on \([0,1]\).

To prove that \(x^{*}(t)\), \(y^{*}(t)\) are extremal solutions of (1.1), let \(u\in[x_{0},y_{0}]\) be any solution of problem (1.1). Suppose that \(x_{m}(t)\leq u(t)\leq y_{m}(t)\), \(t\in[0,1]\), for some m (this holds for \(m=0\) by assumption). Let \(v(t)=u(t)-x_{m+1}(t)\), \(z(t)=y_{m+1}(t)-u(t)\). Then, by assumptions (H2) and (H3), we see that
$$\textstyle\begin{cases} -D^{\alpha}v(t)\geq-M(t)v(t),\quad t\in[0,1], \\ v(0)=0, \\ D^{\alpha-1}v(1)\geq\lambda I^{\beta}v(\eta), \end{cases} $$
$$\textstyle\begin{cases} -D^{\alpha}z(t)\geq-M(t)z(t), \quad t\in[0,1], \\ z(0)=0, \\ D^{\alpha-1}z(1)\geq\lambda I^{\beta}z(\eta). \end{cases} $$
These and Lemma 2.6 imply that \(x_{m+1}(t)\leq u(t)\leq y_{m+1}(t)\), \(t\in [0,1]\); hence, by induction, \(x_{n}(t)\leq u(t)\leq y_{n}(t)\) on \([0,1]\) for all n. Letting \(n\rightarrow\infty\), we conclude that \(x^{*}(t)\leq u(t)\leq y^{*}(t)\), \(t\in[0,1]\). The proof is complete. □


Example

Consider the following problem:
$$ \textstyle\begin{cases} -D^{\frac{3}{2}}x(t)= -\frac{1}{16}t^{2}x^{2}(t)+\frac{1}{5}t^{3}, \quad t\in [0,1], \\ x(0)=0, \\ D^{\frac{1}{2}}x(1)=I^{\frac{3}{2}}g(\frac{1}{4},x(\frac{1}{4}))+1.2 =\frac{1}{\Gamma(\frac{3}{2})}\int_{0}^{\frac{1}{4}}(\frac{1}{4}-s)^{\frac {1}{2}}(s+1)x(s)\,ds+1.2, \end{cases} \tag{3.3} $$
where \(\alpha=\frac{3}{2}\), \(\beta=\frac{3}{2}\), \(\eta=\frac{1}{4}\), \(k=1.2\), and
$$\textstyle\begin{cases} f(t,x)= -\frac{1}{16}t^{2}x^{2}(t)+\frac{1}{5}t^{3}, \\ g(t,x)=(t+1)x. \end{cases} $$
Take \(x_{0}(t)=0\), \(y_{0}(t)=2t^{\frac{1}{2}}\). It is not difficult to verify that \(x_{0}\), \(y_{0}\) are lower and upper solutions of (3.3), respectively, and \(x_{0}\leq y_{0}\). So (H1) holds.
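The verification of (H1) can be made concrete. Since \(y_{0}(t)=2t^{\frac{1}{2}}=2t^{\alpha-1}\) and \(t^{\alpha-1}\) lies in the kernel of \(D^{\alpha}\), we have \(D^{\frac{3}{2}}y_{0}\equiv0\) and \(D^{\frac{1}{2}}y_{0}(1)=2\Gamma(\frac{3}{2})=\sqrt{\pi}\). The sketch below (Python, midpoint rule for the fractional integral; the number of nodes is an arbitrary choice) checks both inequalities of Definition 2.1 for \(y_{0}\); the check for \(x_{0}=0\) is trivial, since \(f(t,0)=\frac{1}{5}t^{3}\geq0\) and \(0\leq1.2\).

```python
import math

# Problem (3.3): alpha = beta = 3/2, eta = 1/4, k = 1.2, g(t,x) = (t+1)x.
eta, k = 0.25, 1.2
f = lambda t, x: -t ** 2 * x ** 2 / 16 + t ** 3 / 5
y0 = lambda t: 2 * math.sqrt(t)

# Interior inequality: -D^{3/2} y0(t) = 0 >= f(t, y0(t)) = -t^3/20 on [0,1].
interior_ok = all(0 >= f(i / 1000, y0(i / 1000)) for i in range(1001))

# Boundary inequality: D^{1/2} y0(1) = sqrt(pi) >= I^{3/2} g(eta, y0(eta)) + k,
# the fractional integral being computed by the composite midpoint rule.
N = 20000
h = eta / N
frac_int = sum(math.sqrt(eta - s) * (s + 1) * y0(s)
               for s in ((j + 0.5) * h for j in range(N))) * h / math.gamma(1.5)
boundary_ok = math.sqrt(math.pi) >= frac_int + k
```

Numerically \(I^{\frac{3}{2}}g(\frac{1}{4},y_{0}(\frac{1}{4}))\approx0.0623\), so the boundary inequality \(\sqrt{\pi}\approx1.772\geq1.262\) holds with room to spare.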
In addition, we have
$$ f(t,y)-f(t,x)=\frac{1}{16}t^{2}x^{2}- \frac{1}{16}t^{2}y^{2}\geq-\frac {1}{4}t^{\frac{3}{2}}(y-x) \tag{3.4} $$
$$ g(t,y)-g(t,x)=(t+1) (y-x)\geq(y-x), \tag{3.5} $$
where \(x_{0}(t)\leq x(t)\leq y(t)\leq y_{0}(t)\).

Therefore (H2) and (H3) hold.

From (3.4) and (3.5), we may take
$$M(t)=\frac{1}{4}t^{\frac{3}{2}}, \quad \lambda=1. $$
$$\begin{aligned}& \Gamma(\alpha+\beta)=\Gamma(3)=2>\lambda\eta^{\alpha+\beta-1}=\biggl( \frac {1}{4}\biggr)^{2}, \\& 2\Gamma(\alpha+\beta) \int_{0}^{1} \bigl\vert M(s) \bigr\vert \,ds=2 \cdot2 \int_{0}^{1}\frac {1}{4}s^{\frac{3}{2}}\,ds= \frac{2}{5} < \Gamma(\alpha)\bigl[\Gamma(\alpha+\beta)-\lambda \eta^{\alpha+\beta-1}\bigr] \\& \hphantom{2\Gamma(\alpha+\beta) \int_{0}^{1} \bigl\vert M(s) \bigr\vert \,ds}= \Gamma\biggl(\frac{3}{2}\biggr)\biggl[2-\biggl( \frac{1}{4}\biggr)^{2}\biggr]\approx1.717, \\& \Gamma(2-\alpha)\lambda\eta^{\beta}=\Gamma\biggl(2-\frac{3}{2} \biggr)\cdot1\cdot \biggl(\frac{1}{4}\biggr)^{\frac{3}{2}} = \frac{1}{4}\cdot\Gamma\biggl(\frac{3}{2}\biggr)< \Gamma(\beta)=\Gamma \biggl(\frac {3}{2}\biggr), \\& \Gamma(2-\alpha)\cdot t^{\alpha}\cdot M(t)=\Gamma\biggl(\frac{1}{2} \biggr)\cdot t^{\frac{3}{2}}\cdot\frac{1}{4}\cdot t^{\frac{3}{2}} >1- \alpha=-\frac{1}{2}, \quad \mbox{for } t\in(0,1). \end{aligned}$$
This shows that (H4), (H5), and (H6) hold. By Theorem 3.1, problem (3.3) has extremal solutions in \([x_{0}(t), y_{0}(t)]\).
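The computations above can also be confirmed numerically. The following sketch (Python, standard library only) re-checks (H4)-(H6) for the data of (3.3); the grid used for the pointwise condition in (H6) is an arbitrary choice.

```python
import math

# Data of problem (3.3): alpha = beta = 3/2, eta = 1/4, lambda = 1,
# M(t) = t^{3/2}/4.
alpha, beta, eta, lam = 1.5, 1.5, 0.25, 1.0
Gamma = math.gamma

# (H4): Gamma(alpha+beta) > lambda * eta^{alpha+beta-1}.
H4 = Gamma(alpha + beta) > lam * eta ** (alpha + beta - 1)

# (H5): 2*Gamma(alpha+beta) * int_0^1 |M(s)| ds
#       < Gamma(alpha) * [Gamma(alpha+beta) - lambda * eta^{alpha+beta-1}].
int_M = 0.25 * 2 / 5                          # int_0^1 s^{3/2}/4 ds = 1/10
lhs = 2 * Gamma(alpha + beta) * int_M         # = 2/5
rhs = Gamma(alpha) * (Gamma(alpha + beta) - lam * eta ** (alpha + beta - 1))
H5 = lhs < rhs

# (H6): Gamma(2-alpha) * t^alpha * M(t) > 1 - alpha on (0,1), and
#       Gamma(2-alpha) * lambda * eta^beta < Gamma(beta).
H6 = (all(Gamma(2 - alpha) * t ** alpha * (t ** 1.5 / 4) > 1 - alpha
          for t in (i / 1000 for i in range(1, 1000)))
      and Gamma(2 - alpha) * lam * eta ** beta < Gamma(beta))
```

The computed values \(\mathrm{lhs}=\frac{2}{5}\) and \(\mathrm{rhs}\approx1.717\) match the hand calculation above.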



Project was supported by the Guiding Innovation Foundation of Northeast Petroleum University (No. 2016YDL-02) and the Youth Scientific Research Fund of Northeast Petroleum University (No. NEPUQN2015-1-21).

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

School of Mathematics and Statistics, Northeast Petroleum University, Daqing, P.R. China


References

  1. Wang, G: Monotone iterative technique for boundary value problems of a nonlinear fractional equation with deviating arguments. J. Comput. Appl. Math. 236, 2425-2430 (2012)
  2. Wang, G, Agarwal, RP, Cabada, A: Existence results and the monotone iterative technique for systems of nonlinear fractional differential equations. Appl. Math. Lett. 25, 1019-1024 (2012)
  3. Zhang, L, Ahmad, B, Wang, G: Explicit iterations and extremal solutions for fractional differential equations with nonlinear integral boundary conditions. Appl. Math. Comput. 268, 388-392 (2015)
  4. Zhang, L, Ahmad, B, Wang, G: The existence of an extremal solution to a nonlinear system with the right-handed Riemann-Liouville fractional derivative. Appl. Math. Lett. 31, 1-6 (2014)
  5. Jankowski, T: Boundary problems for fractional differential equations. Appl. Math. Lett. 28, 14-19 (2014)
  6. Jian, H, Liu, B, Xie, S: Monotone iterative solutions for nonlinear fractional differential systems with deviating arguments. Appl. Math. Comput. 262, 1-14 (2015)
  7. Liu, X, Jia, M, Ge, W: The method of lower and upper solutions for mixed fractional four-point boundary value problem with p-Laplacian operator. Appl. Math. Lett. 65, 56-62 (2017)
  8. Wang, G: Explicit iteration and unbounded solutions for fractional integral boundary value problem on an infinite interval. Appl. Math. Lett. 17, 1-7 (2015)
  9. Xie, W, Xiao, J, Luo, Z: Existence of solutions for Riemann-Liouville fractional boundary value problem. Abstr. Appl. Anal. 2014, Article ID 540351 (2014)


© The Author(s) 2017