  • Research
  • Open Access

Translation, solving scheme, and implementation of a periodic and optimal impulsive state control problem

Advances in Difference Equations 2018, 2018:93

  • Received: 1 November 2017
  • Accepted: 6 February 2018
  • Published:


The periodic solutions of systems with impulsive state feedback control (ISFC) have been investigated extensively in recent decades. However, if an ecosystem is exploited periodically, what strategies optimize the objective at minimal cost? First, under the hypothesis that the system has a periodic solution, an optimal problem of ISFC is transformed into a parameter optimization problem over an unspecified time horizon with inequality constraints, together with the constraint that the threshold is reached for the first time at the terminal instant. Second, a rescaled time and a constraint violation function are introduced to translate this optimal problem into an unconstrained parameter selection problem over a fixed time horizon. Third, the gradients of the objective function with respect to all parameters are derived in order to compute the optimal value of the cost function. Finally, three examples involving a marine ecosystem, a computer virus, and resource administration illustrate the validity of our approach.


  • Impulsive state feedback control (ISFC)
  • Rescaled time transformation
  • Constraint violation function
  • Parameter optimization
  • Numerical simulation

1 Introduction

The topic of impulsive state feedback control (abbreviated as ISFC) has been investigated extensively in recent decades owing to its potential applications in culturing microorganisms [1–3], integrated pest management [4–6], disease control [7, 8], fish harvesting [9–11], and wildlife management [12, 13]. For example, Ref. [1] proposed a bioprocess model with ISFC that achieves a stable output through precise feeding. Ref. [4] explored the periodic solution of a model of entomopathogenic nematodes invading insects with ISFC. Ref. [7] incorporated vaccination into a disease model via ISFC and established the uniqueness of the order one periodic solution (OOPS) by a geometric method. Ref. [12] formulated an ISFC model of white-headed langurs with sparse effect and continuous delay to study periodic and artificial releasing. In ISFC models, scholars usually focus on the qualitative analysis of the OOPS. Ref. [11] proposed a phytoplankton–fish model with ISFC, then formulated an optimal control problem (OCP, for short) and sought harvesting rates that maximize the cost function over one impulsive period. There, the solvability of the system over one period makes it convenient to solve the OCP by Lagrange multipliers. But for a complicated ecosystem whose analytical solution cannot be expressed explicitly, if it is exploited periodically, what period and what strategies optimize the cost function at minimal cost? Furthermore, how to translate the OCP of an ISFC system into a parameter optimization problem over one period is an interesting question. So far, few researchers have paid attention to these tasks, which are the focus of our paper.

Optimal control finds applications in almost all fields of applied science, such as fishery models [14], iatrochemistry [15], switching power systems [16], spacecraft control [17], undersea vehicles [18], eco-epidemiology [19], and virus therapies [20]. The Pontryagin principle and the Hamilton equation are the main theoretical tools for solving continuous control problems [21]. However, hybrid optimization problems involving both the pulse threshold and system parameters remain challenging and worth exploring. The control parametrization technique makes this kind of problem tractable [21]. Teo et al. described the basic theory of the control parametrization method in detail in [22], and many important results have been achieved in recent years. We will apply these theories, together with the constraint transcription technique [23], to resolve the above issues.

The remainder of this paper is organized as follows. In Section 2, an optimal problem of impulsive state feedback control is transformed into a parameter optimization problem over an unspecified time horizon with inequality constraints, together with the constraint of first arrival at the threshold. In Section 3 we derive the required gradient formulas and present an algorithm for solving the approximate OCP. In Section 4, we give three examples and numerical simulations. Finally, a conclusion is provided in Section 5.

2 Problem statement and translation

Consider the ISFC system:
$$\begin{aligned}& \frac{d\hat{\boldsymbol{y}}}{dt}=\boldsymbol{f}(\hat{\boldsymbol{y}},\boldsymbol{\delta }),\qquad \phi(\hat{\boldsymbol{y}},\boldsymbol{\delta})\neq0, \end{aligned}$$
$$\begin{aligned}& \triangle\hat{\boldsymbol{y}}=\boldsymbol{I}(\hat{\boldsymbol{y}},\boldsymbol{\beta}), \qquad\phi ( \hat{\boldsymbol{y}},\boldsymbol{\delta})=0, \end{aligned}$$
where \(t\in\mathbb{R}\), \(\hat{\boldsymbol{y}}\in\mathbb{R}^{n}\), \(\boldsymbol {f}\in\mathbb{R}^{n}\) is a given function, \(\boldsymbol{\delta}=(\delta _{1},\ldots,\delta_{m})\) and \(\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{q})\) are the control parameter vectors.
Denote by \(\hat{\boldsymbol{y}}(t;0,\hat{\boldsymbol{y}}_{0},\boldsymbol{\delta},\boldsymbol{\beta })\) the solution of (2.1) and (2.2) satisfying the initial value
$$\hat{\boldsymbol{y}} \bigl(0^{+};0,\hat{\boldsymbol{y}}_{0},\boldsymbol{\delta},\boldsymbol{\beta} \bigr)=\hat{\boldsymbol {y}}_{0}. $$
For convenience, \(\hat{\boldsymbol{y}}(t;0,\hat{\boldsymbol{y}}_{0},\boldsymbol{\delta},\boldsymbol {\beta})\) is abbreviated to \(\hat{\boldsymbol{y}}(t)\) or \(\hat{\boldsymbol{y}}\). \(\boldsymbol{I}(\hat{\boldsymbol{y}},\boldsymbol{\beta})\) is the impulsive effect and \(\phi(\hat{\boldsymbol{y}},\boldsymbol{\delta})=0\) is the impulsive set. In detail,
$$\begin{gathered} \hat{\boldsymbol{y}}=\left ( \textstyle\begin{array}{c} \hat{y}_{1}\\ \vdots\\ \hat{y}_{n} \end{array}\displaystyle \right ), \qquad\hat{\boldsymbol{y}}_{0}=\left ( \textstyle\begin{array}{c} \hat{y}_{10} \\ \vdots\\ \hat{y}_{n0} \end{array}\displaystyle \right ), \\ \boldsymbol{f}=\left ( \textstyle\begin{array}{c} f_{1}(\hat{y}_{1},\ldots,\hat{y}_{n},\boldsymbol{\delta}) \\ \vdots\\ f_{n}(\hat{y}_{1},\ldots,\hat{y}_{n},\boldsymbol{\delta}) \end{array}\displaystyle \right ), \qquad\boldsymbol{I}=\left ( \textstyle\begin{array}{c} I_{1}(\hat{y}_{1},\ldots,\hat{y}_{n},\boldsymbol{\beta}) \\ \vdots\\ I_{n}(\hat{y}_{1},\ldots,\hat{y}_{n},\boldsymbol{\beta}) \end{array}\displaystyle \right ).\end{gathered} $$
Next, let us introduce the following assumptions.
  1. (H1)

    Assume that \({f_{i}}\), ϕ, and I are continuously differentiable.

  2. (H2)

    Denote the Euclidean norm by \(\|\cdot\|\). Suppose that there exists a constant \(k>0\) such that \(|{f_{i}}(\hat{\boldsymbol{y}})|\leq k(1+\|\hat{\boldsymbol{y}}\|)\) for all \(\hat{\boldsymbol{y}}\).

  3. (H3)
    Assume that for fixed \(\hat{\boldsymbol{y}}_{0}\), (2.1) and (2.2) have a unique OOPS \(\Gamma _{{A}\rightarrow{B}}\) with period T, where A and B are the terminal and initial points of the OOPS, respectively. When the impulsive effect takes place, the point A is mapped to B, namely
    $${A} \bigl(\hat{y}_{1}(T)+I_{1} \bigl(\hat{\boldsymbol{y}}(T),\boldsymbol{\beta} \bigr),\ldots,\hat {y}_{n}(T)+I_{n} \bigl(\hat{\boldsymbol{y}}(T), \boldsymbol{\beta} \bigr) \bigr) \rightarrow{B}(\hat {y}_{10},\ldots, \hat{y}_{n0}). $$
Next, our aim is to formulate an optimal problem on one period under hypothesis (H3), namely (2.1) and (2.2) possess an OOPS \(\Gamma _{{A}\rightarrow{B}}\). Then system (2.1) is modified into
$$ \frac{d\hat{\boldsymbol{y}}}{dt}=\boldsymbol{f}(\hat{\boldsymbol{y}},\boldsymbol{\delta }),\quad t\in(0,T). $$
By (H3) it is obtained that
$$ \hat{y}_{i0}=\hat{y}_{i}(T)+I_{i} \bigl(\hat{\boldsymbol{y}}(T),\boldsymbol{\beta} \bigr), \quad i=1,\ldots,n. $$
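To make the periodicity condition above concrete, here is a minimal numerical sketch (our own illustration, not part of the paper): it evaluates the residual of (2.4) for a candidate period T. The toy one-dimensional dynamics `f`, the impulsive map `impulse`, and all values are assumptions chosen so that the periodic solution is known in closed form.

```python
# Illustrative check of the periodicity condition y0 = y(T) + I(y(T)):
# integrate dy/dt = f(y) over (0, T) and measure the mismatch at t = T.
import numpy as np
from scipy.integrate import solve_ivp

def periodicity_residual(T, y0, f, impulse):
    """Return y0 - (y(T) + I(y(T))); zero iff (T, y0) closes an orbit."""
    sol = solve_ivp(lambda t, y: f(y), (0.0, T), y0, rtol=1e-10, atol=1e-10)
    yT = sol.y[:, -1]
    return y0 - (yT + impulse(yT))

# Toy system (assumed): dy/dt = y with impulsive harvest Delta y = -e*y.
# Periodicity needs (1 - e) * y0 * exp(T) = y0, i.e. T = ln(1 / (1 - e)).
e = 0.5
res = periodicity_residual(np.log(1.0 / (1.0 - e)), np.array([1.0]),
                           lambda y: y, lambda y: -e * y)
```

When no closed form for the period is available, a root-finder applied to `periodicity_residual` in T would locate it numerically.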
In order to guarantee that T is the first positive time at which the solution \(\hat{\boldsymbol{y}}(t)\) of (2.1) intersects with the surface \(\phi(\hat{\boldsymbol{y}}(t),\boldsymbol{\delta})=0\), \(\hat{\boldsymbol{y}}(t)\) should be defined as follows:
  1. (H4)
    \(\hat{\boldsymbol{y}}(t)\in\Omega\), where
    $$ \Omega= \bigl\{ \hat{\boldsymbol{y}}(t)\mid \phi \bigl(\hat{\boldsymbol{y}}(t),\boldsymbol{\delta} \bigr)\neq 0 \mbox{ for } t\in(0,T) \mbox{ and }\phi \bigl(\hat{\boldsymbol{y}}(T), \boldsymbol {\delta} \bigr)=0 \bigr\} . $$


If T is not the first positive time at which the solution \(\hat{\boldsymbol{y}}\) of (2.1) intersects the surface \(\phi(\hat{\boldsymbol{y}}(t),\boldsymbol{\delta})=0\), then there exists \(0<\breve{T}<T\) such that \(\phi(\hat{\boldsymbol{y}}(\breve{T}),\boldsymbol {\delta})=0\). This contradicts the definition of Ω in (2.5).

Obviously, (2.5) is equivalent to
$$ \Omega= \bigl\{ \hat{\boldsymbol{y}}(t)\mid \phi \bigl(\hat{\boldsymbol{y}}(t),\boldsymbol{\delta} \bigr)>\phi \bigl(\hat{\boldsymbol{y}}(T),\boldsymbol{\delta} \bigr)\mbox{ or } \phi \bigl( \hat{\boldsymbol{y}}(t),\boldsymbol{\delta} \bigr)< \phi \bigl(\hat{\boldsymbol{y}}(T),\boldsymbol{\delta} \bigr) \mbox{ for } t\in(0,T) \bigr\} . $$
Together with (2.4), the solution which firstly arrives at the surface \(\phi(\hat{\boldsymbol{y}}(T),\boldsymbol{\delta})=0\) at time T from the initial point \((0,\hat{\boldsymbol{y}}_{0})\) (that is, the solution in (2.5) or (2.6)) is renewed by
$$ \Omega= \bigl\{ \Psi \bigl({\hat{\boldsymbol{y}}}(t),\hat{\boldsymbol{y}}_{0} \bigr)< 0 \mbox{ and } \Psi^{*} \bigl(\hat{\boldsymbol{y}}(t),\hat{\boldsymbol{y}}(T) \bigr)< 0, t \in(0,T) \bigr\} . $$
Define admissible sets Λ and Θ of dimensions m and q, respectively, such that \(\boldsymbol{\delta}\in \Lambda\), \(\boldsymbol{\beta}\in\Theta\). Then, for each \((\boldsymbol{\delta}, \boldsymbol{\beta})\in\Lambda\times\Theta\), the boundary condition of the mixed type (2.4) is equivalently expressed as
$$ \boldsymbol{\Phi} \bigl(\hat{\boldsymbol{y}}_{0},\hat{\boldsymbol{y}}(T|\boldsymbol{ \delta},\boldsymbol{\beta}) \bigr)=0, $$
where \(\boldsymbol{\Phi}=(\Phi_{1},\ldots,\Phi_{n})^{T}\) is an n-dimensional vector function. Clearly, the terminal time T depends on the vector \((\boldsymbol{\delta}, \boldsymbol{\beta})\) and hence is a variable. It is assumed that there exists \(\hat {T}<\infty\) that is an upper bound of all admissible T.
  1. (H5)

    Assume that \(\Phi_{i}\), Ψ, and \(\Psi^{*}\) are continuously differentiable.

Next, we give the cost (objective) function:
$$ J_{0}=\Theta_{0} \bigl(\hat{\boldsymbol{y}}(\boldsymbol{ \delta}) (T),\boldsymbol{\beta} \bigr)+ \int _{0}^{T} L_{0} \bigl(\hat{\boldsymbol{y}}( \boldsymbol{\delta}) (t) \bigr)\,dt, $$
where \(\Theta_{0}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) defines the terminal cost and \(L_{0}\) defines the running cost. Equation (2.9) is also called the objective function [21].
Assume that \(\Theta_{0}\) and \(L_{0}\) satisfy the following conditions.
  1. (H6)

    \(\Theta_{0}\) is continuously differentiable.

  2. (H7)

    The function \(L_{0}\) is continuously differentiable with respect to \(\hat{\boldsymbol{y}}\) for each \(t\in [0,\hat{T}]\). Additionally, there is a constant \(l>0\) such that \(|{L_{0}}(\hat{\boldsymbol{y}})|\leq l(1+\|\hat{\boldsymbol{y}}\|)\) for all \(\hat{\boldsymbol{y}}\).

Now, an OCP is formulated officially as follows:

Subject to (2.3), seek a parameter vector \((\boldsymbol{\delta},\boldsymbol{\beta})\in\Lambda\times\Theta\) satisfying that the objective function (2.9) is minimized over \(\Lambda\times\Theta\). Here T is a period meeting conditions (2.7) and (2.8).

In particular, if the solution \(\hat{\boldsymbol{y}}(t)\) (\(t\in(0,T)\)) of (2.1) and (2.2) is monotonic, then (2.7) is rewritten as
$$ \hat{y}_{i0}< \hat{y}_{i}(t)< \hat{y}_{i}(T) \quad\mbox{or}\quad \hat {y}_{i}(T)< \hat{y}_{i}(t)< \hat{y}_{i0},\quad i=1,\ldots, n, t\in(0,T), $$
which ensures that T is the first positive time of the solution \(\hat{\boldsymbol{y}}(t)\) arriving at the surface \(\phi(\hat{\boldsymbol{y}}(t),\boldsymbol {\delta})=0\). Correspondingly, combined with (2.4), (2.10) is equivalently adapted by
$$ \Psi_{i} \bigl({\hat{y}_{i}}(t),{\hat{y}}_{i0} \bigr)< 0,\quad i=1,\ldots, n, \qquad\Psi ^{*}_{j} \bigl({ \hat{y}}_{j}(t),{\hat{y}}_{j}(T) \bigr)< 0,\quad j=1,\ldots, n. $$

3 Solving scheme

The variability of the jump time increases the difficulty of solving problem (\({P_{0}}\)). To circumvent this difficulty, we adopt the time-scaling transformation known as the control parameter enhancing transform (CPET, for short). Ref. [24] first used the CPET to determine optimal switching instants in time-optimal control. Here we employ the CPET to map the variable jump times to fixed points on a new time scale, yielding an updated optimal problem with fixed jump times. To apply this method, we introduce the rescaled time [24, 25]
$$ s=t/T. $$
Obviously, system (2.3) is rewritten as
$$ \frac{d\boldsymbol{y}}{ds}=\boldsymbol{h}(\boldsymbol{y}, \boldsymbol{\delta}, T), $$
where \(s\in(0,1)\). Here T is treated as an additional parameter that is itself a decision variable. In addition,
$$\begin{gathered} \boldsymbol{y}(s)=\hat{\boldsymbol{y}}(Ts), \\\boldsymbol{h} \bigl(\boldsymbol{y}(s),\boldsymbol{\delta},T \bigr)=T\boldsymbol{f} \bigl(\hat{\boldsymbol{y}}(Ts),\boldsymbol { \delta} \bigr).\end{gathered} $$
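As a small sketch of this rescaling (our illustration under assumed toy dynamics, not the paper's code), the substitution \(s=t/T\) turns the free-time system into a fixed-horizon one on \((0,1)\):

```python
# dy/ds = T * f(y) on s in [0, 1] reproduces y_hat(t) = y(t/T) on [0, T].
import numpy as np
from scipy.integrate import solve_ivp

def solve_rescaled(f, y0, T, n=201):
    """Integrate the CPET-rescaled system; returns the s grid and y(s)."""
    s = np.linspace(0.0, 1.0, n)
    sol = solve_ivp(lambda s_, y: T * f(y), (0.0, 1.0), y0,
                    t_eval=s, rtol=1e-10, atol=1e-10)
    return s, sol.y

# Assumed toy dynamics dy/dt = -y: y(s = 1) must equal y_hat(T) = exp(-T).
T = 2.0
s, y = solve_rescaled(lambda y_: -y_, np.array([1.0]), T)
```

The benefit is that the terminal point \(s=1\) is fixed, so T can be handed to a standard parameter optimizer like any other decision variable.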
Then (2.7) and (2.8) can be respectively expressed as
$$ \Psi \bigl(\boldsymbol{y}(s),\boldsymbol{y}_{0} \bigr)< 0,\qquad \Psi^{*} \bigl( \boldsymbol{y}(s),\boldsymbol {y}(1) \bigr)< 0,\quad s\in(0,1), $$
$$ \boldsymbol{\Phi} \bigl(\boldsymbol{y}_{0},\boldsymbol{y}(1),\boldsymbol{\beta} \bigr)=0. $$
And the cost function (2.9) is equivalent to
$$ J_{1}=\Theta_{0} \bigl(\boldsymbol{y}(1),\boldsymbol{\beta} \bigr)+ \int_{0}^{1} L_{0} \bigl(\boldsymbol {y}(s),\boldsymbol{ \delta},T \bigr)\,ds, $$
where
$$\Theta_{0} \bigl(\boldsymbol{y}(1),\boldsymbol{\beta} \bigr)=\Theta_{0} \bigl(\hat{\boldsymbol{y}}(\boldsymbol {\delta}) (T),\boldsymbol{\beta} \bigr),\qquad L_{0} \bigl( \boldsymbol{y}(s),\boldsymbol{\delta },T \bigr)=TL_{0} \bigl(\hat{\boldsymbol{y}}(\boldsymbol{\delta}) (t) \bigr). $$
Thus, we can change the problem (\({P_{0}}\)) into the following problem:

Given system (3.2), find a combined parameter vector \((\boldsymbol{\delta},\boldsymbol{\beta},T)\in\Lambda \times\Theta\times(0,\hat{T})\) to minimize the objective functional (3.5) and meanwhile satisfy (3.3) and (3.4).

By the theorems in [25] and [26], the following result holds.

Lemma 3.1

The OCP (\({P_{0}}\)) is equivalent to the problem (\({P_{1}}\)).

Next, we adopt an exact penalty method to overcome the remaining difficulty, namely that the constraints (3.3) define a disjoint feasible region. Such constraints are referred to as functional inequality or path constraints; the essential difficulty is that they impose infinitely many restrictions on the state variables over the time horizon [21].

Constraint (3.3) is a non-standard “open” state constraint. So we can approximate it as follows:
$$ \Psi \bigl(\boldsymbol{y}(s),\boldsymbol{y}_{0} \bigr)\leq\bar{ \varepsilon},\qquad \Psi^{*} \bigl(\boldsymbol {y}(s),\boldsymbol{y}(1) \bigr)\leq\bar{\varepsilon}, \quad s \in[\delta,1-\delta], $$
where \(\bar{\varepsilon}>0\) and \(\delta\in(0,\frac{1}{2})\) are adjustable parameters.
Then, we define a constraint violation function as
$$ \begin{aligned}[b] \Xi(\boldsymbol{\delta}, \boldsymbol{\beta}, T)={}& \bigl[\boldsymbol{\Phi}\bigl(\boldsymbol{y}_{0},\boldsymbol {y}(1),\boldsymbol{\beta}\bigr) \bigr]^{2} \\ & +T \int_{0}^{1} \bigl\{ \bigl[\max \bigl\{ \bar{ \varepsilon},\Psi \bigl(\boldsymbol{y}(s),\boldsymbol{y}_{0}\bigr) \bigr\} \bigr]^{2}+ \bigl[\max \bigl\{ \bar{\varepsilon},\Psi^{*}\bigl(\boldsymbol{y}(s), \boldsymbol{y}(1)\bigr) \bigr\} \bigr]^{2} \bigr\} \,ds. \end{aligned} $$
Note that \(\Xi(\boldsymbol{\delta}, \boldsymbol{\beta}, T)=0\) if and only if (3.3) and (3.4) hold. By the strategy presented in [2730], one sets up an exact penalty function
$$ J_{2}(\boldsymbol{\delta}, \boldsymbol{\beta}, T,\varepsilon) =\left \{ \textstyle\begin{array}{l@{\quad}l} J_{1}(\boldsymbol{\delta}, \boldsymbol{\beta},T),& \mbox{if } \varepsilon=0 \mbox{ and }\Xi(\boldsymbol{\delta}, \boldsymbol{\beta} ,T)=0, \\ J_{1}(\boldsymbol{\delta}, \boldsymbol{\beta},T)+\varepsilon^{-\alpha}\Xi(\boldsymbol {\delta}, \boldsymbol{\beta},T)+\sigma\varepsilon^{\gamma},& \mbox{if } \varepsilon>0, \\ +\infty,& \mbox{otherwise}, \end{array}\displaystyle \right . $$
where \(\sigma>0\) is a positive penalty parameter. \(\alpha>0\) and \(\gamma>0\) are constants meeting \(1\leq {\gamma}\leq \alpha\). The new decision variable ε satisfies
$$ 0\leq\varepsilon\leq\varepsilon_{1}, $$
where \(\varepsilon_{1}\) is a small positive number. This method was first mentioned in [31].
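The three-branch penalty function (3.9) can be sketched directly. The default values of α, γ, σ below are illustrative assumptions consistent with the stated requirements σ > 0 and 1 ≤ γ ≤ α:

```python
import numpy as np

def J2(J1, Xi, eps, alpha=2.0, gamma=1.5, sigma=100.0):
    """Exact penalty (3.9): the original cost when feasible at eps = 0,
    a penalised cost for eps > 0, and +infinity otherwise."""
    if eps == 0.0:
        return J1 if Xi == 0.0 else np.inf
    if eps > 0.0:
        return J1 + eps ** (-alpha) * Xi + sigma * eps ** gamma
    return np.inf
```

Driving `eps` toward 0 forces `Xi` toward 0 at the optimum (otherwise the term `eps**(-alpha) * Xi` blows up), which is how feasibility of (3.3)–(3.4) is recovered.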
Now we give the unconstrained control problem:

Optimize a combined parameter vector \((\boldsymbol {\delta},\boldsymbol{\beta},T)\in\Lambda\times\Theta\times(0,\hat{T})\) and the new decision variable \(\varepsilon\in[0,\varepsilon_{1}]\) to minimize the transformed equivalent cost function \(J_{2}(\boldsymbol{\delta },\boldsymbol{\beta},T,\varepsilon)\) subject to the dynamics given by (3.1) and (3.2) in the interval \((0,1)\).

According to the main convergence result of [29], as ε̄ and δ approach zero, the optimal cost of problem (\(P_{2}\)) approaches the optimal cost \(J_{1}\) of problem (\(P_{1}\)).

Theorem 3.1

Let \(\iota>0\) be an arbitrary fixed number. For any sufficiently small \(\delta>0\), there exists \(\bar{\varepsilon}_{1}(\delta)>0\) such that
$$|J_{2}-J_{1}|< \iota,\quad \bar{\varepsilon}\in(0,\bar{ \varepsilon}_{1}]. $$
The above approximate problem is a nonlinear optimization problem: the objective function is minimized subject to a set of constraints over a small number of decision variables. Because the cost function of problem (\(P_{2}\)) depends on the decision vector only implicitly, we derive its gradients to produce search directions that guide the search [21]. To implement such algorithms, it is essential to compute the partial derivatives of the final cost function; one method for computing gradients is the so-called costate method. Following Theorem 4.1 in [32] and Section 5.2 in [33], we define the corresponding Hamiltonian function
$$ \begin{aligned}[b] H\bigl(s,\boldsymbol{y}(s), \boldsymbol{y}(1),\boldsymbol{\lambda}(s),\boldsymbol{\delta},T,\varepsilon \bigr)={}&L_{0} \bigl(\boldsymbol{y}(s),\boldsymbol{\delta},\boldsymbol{\beta},T\bigr)+\boldsymbol{\lambda}^{T}\boldsymbol {h}( \boldsymbol{y}, \boldsymbol{\delta}, T) \\ &+ \varepsilon^{-\alpha}T \bigl\{ \bigl[\max\bigl\{ \bar{\varepsilon},\Psi \bigl(\boldsymbol {y}(s),\boldsymbol{y}_{0}\bigr)\bigr\} \bigr]^{2} \\ & +\bigl[\max\bigl\{ \bar{\varepsilon},\Psi^{*}\bigl(\boldsymbol{y}(s),\boldsymbol{y}(1)\bigr)\bigr\} \bigr]^{2} \bigr\} , \end{aligned} $$
where \(\boldsymbol{\lambda}^{T}(s)=(\lambda_{1}(s),\ldots,\lambda_{n}(s))\) and \(\lambda_{i}(s)\) is the corresponding costate for \(i=1,2,\ldots,n\). Furthermore, \(\boldsymbol{\lambda}(s)\) is determined by the following differential equations:
$$\begin{aligned}& \frac{d\boldsymbol{\lambda}}{ds}=- \biggl(\frac{\partial H(s,\boldsymbol{y}(s),\boldsymbol {y}(1),\boldsymbol{\lambda}(s),\boldsymbol{\delta},T,\varepsilon)}{\partial\boldsymbol {y}} \biggr)^{T}, \end{aligned}$$
$$\begin{aligned}& \boldsymbol{\lambda}^{T}(1)=\frac{\partial\Theta_{0}}{\partial\boldsymbol {y}(1)}+2 \varepsilon^{-\alpha}\boldsymbol{\Phi} \bigl(\boldsymbol{y}_{0},\boldsymbol{y}(1),\boldsymbol { \beta} \bigr)\frac{\partial\boldsymbol{\Phi}}{\partial\boldsymbol{y}(1)}+ \int _{0}^{1}\frac{\partial H}{\partial\boldsymbol{y}(1)}\,ds. \end{aligned}$$

Theorem 3.2

The gradients of \(J_{2}\) with respect to T, δ, β, and ε are given by
$$\begin{aligned}& \frac{\partial J_{2}}{\partial T}= \int_{0}^{1}\frac{\partial H}{\partial T}\,ds, \end{aligned}$$
$$\begin{aligned}& \frac{\partial J_{2}}{\partial\boldsymbol{\delta}}= \int_{0}^{1}\frac {\partial H}{\partial\boldsymbol{\delta}}\,ds, \end{aligned}$$
$$\begin{aligned}& \frac{\partial J_{2}}{\partial\boldsymbol{\beta}}=\frac{\partial\Theta _{0}}{\partial\boldsymbol{\beta}}+2\varepsilon^{-\alpha} \boldsymbol{\Phi} \bigl(\boldsymbol {y}_{0},\boldsymbol{y}(1),\boldsymbol{\beta} \bigr) \frac{\partial\boldsymbol{\Phi}}{\partial\boldsymbol {\beta}}, \end{aligned}$$
$$\begin{aligned}& \frac{\partial J_{2}}{\partial\varepsilon}=-\alpha\varepsilon ^{-\alpha-1}\Xi( \boldsymbol{ \delta},\boldsymbol{\beta},T)+\gamma\sigma \varepsilon^{\gamma-1}. \end{aligned}$$

Note that, instead of an initial condition, the costate system (3.10)–(3.11) involves a terminal value, so it must be integrated backward from \(s=1\) to \(s=0\). Furthermore, in view of equations (3.12)–(3.15), we state the following algorithm for calculating \(J_{2}\) and its gradients.

Algorithm 1

Input a group \((\boldsymbol{\delta},\boldsymbol{\beta},T)\in\Lambda\times\Theta \times(0,\hat{T})\),
  1. (i)

    Solve systems (3.2), (3.10), and (3.11) to obtain \(\boldsymbol{y}(s)\) and λ.

  2. (ii)

    Use \(\boldsymbol{y}(s)\) to compute \(J_{2}\).

  3. (iii)

    Use \(\boldsymbol{y}(s)\) and λ to compute \(\frac{\partial J_{2}}{\partial T}\), \(\frac{\partial J_{2}}{\partial\boldsymbol{\delta}}\), and \(\frac{\partial J_{2}}{\partial\boldsymbol{\beta}}\) according to equations (3.12), (3.13), and (3.14).


The methodology above transforms the periodic optimal control problem into a standard optimal control problem, to which standard computational techniques can be applied. The case in which the first positive time T is ensured by (2.11) can be treated similarly.
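Algorithm 1 supplies the cost and gradients to any gradient-based optimizer. The projected-descent loop below is a minimal stand-in for such a solver; the quadratic test objective, box bounds, and step size are our own illustrative assumptions, not values from the paper:

```python
import numpy as np

def algorithm1(grad, x0, lower, upper, step=1e-2, iters=500):
    """Minimise a cost over the box [lower, upper] given a gradient
    oracle for the decision vector x = (delta, beta, T, eps)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * grad(x)          # descent step along -grad
        x = np.clip(x, lower, upper)    # project back onto the admissible box
    return x

# Toy check: J(x) = ||x - 1||^2 on the box [0, 2]^2 has minimiser (1, 1).
x_opt = algorithm1(lambda x: 2.0 * (x - 1.0), [0.0, 2.0], 0.0, 2.0)
```

In practice, step (i) of Algorithm 1 would supply `grad` by integrating the state system forward and the costate system backward, as described above.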

4 Application

In this section, three examples are given to implement the above theories and approaches; and furthermore, to verify the validity of our algorithm.

Example 4.1

(Phytoplankton–fish system)

Consider the following impulsive system [11]:
$$ \left \{ \textstyle\begin{array}{l} \left . \textstyle\begin{array}{l} \frac{dp}{dt}=(r-az)p, \\ \frac{dz}{dt}=(bp-u-\frac{dp}{\gamma+ p})z, \end{array}\displaystyle \right \} \quad z< H, \\ \left . \textstyle\begin{array}{l} \Delta p=p(T^{+})-p(T)=-e_{1}p, \\ \Delta z=z(T^{+})-z(T)=-e_{2}z, \end{array}\displaystyle \right \}\quad z=H, \\ \textstyle\begin{array}{l} p(0)=p_{0}\geq0,\qquad z(0)=z_{0}\geq0. \end{array}\displaystyle \end{array}\displaystyle \right . $$
Here \(p_{0}\) and \(z_{0}\) denote the initial levels of phytoplankton and fish, respectively. From Theorems 3.1 and 4.3 in [11], for fixed \((p_{0},z_{0})\) system (4.1) has an OOPS \(\Gamma_{A\rightarrow B}\) from \(A((1-e_{1})p_{1}, (1-e_{2})H)\) to \(B(e_{1},H)\). Zhao et al. [11] then formulated an OCP and sought the harvesting rates \(e_{1}^{*}\) and \(e_{2}^{*}\) that maximize the cost function \(J(e_{1},e_{2})=C_{1}e_{1}p_{1}+C_{2}e_{2}H\) over one impulsive period. \(C_{1}\) and \(C_{2}\) are the prices per unit biomass of phytoplankton and fish, respectively.
In our paper, based on the periodic-solution theory in [11], we know that the resources are exploited periodically. What strategies, then, optimize the revenue at minimal cost? To this end, we take the harvesting rates \(e_{1}\), \(e_{2}\) and the harvest period T as control parameters to achieve the maximal revenue, namely
$$\min_{e_{1},e_{2},T} \bigl\{ J_{1}(e_{1},e_{2}) \bigr\} = \min_{e_{1},e_{2},T}\{ -C_{1}e_{1}p_{1}-C_{2}e_{2}H \}. $$
Combined with the periodicity of harvesting, the solution \((p(t),z(t))\) of system (4.1) on \((0,T]\) meets the following conditions:
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} \left . \textstyle\begin{array}{l} \frac{dp}{dt}=(r-az)p, \\ \frac{dz}{dt}=(bp-u-\frac{dp}{\gamma+ p})z, \end{array}\displaystyle \right .\quad t\in(0,T), \end{array}\displaystyle \right . \end{aligned}$$
$$\begin{aligned}& p(T)=\frac{p_{0}}{1-e_{1}},\qquad z(T)=H=\frac{z_{0}}{1-e_{2}}. \end{aligned}$$
Furthermore, together with the monotonicity of \(z(t)\), we obtain
$$ z(0)< z(t)< z(T), $$
where T also is the first positive time such that (4.3) and (4.4) hold.
After the rescaled time transformation, (4.2), (4.3), and (4.4) can be rewritten as follows:
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} \left . \textstyle\begin{array}{l} \frac{dp}{ds}=T(r-az)p\doteq f_{1}, \\ \frac{dz}{ds}=T(bp-u-\frac{dp}{\gamma+ p})z\doteq f_{2}, \end{array}\displaystyle \right . \quad \mbox{for } s \in(0,1), \end{array}\displaystyle \right . \end{aligned}$$
$$\begin{aligned}& p(1)=\frac{p_{0}}{1-e_{1}}, \qquad z(1)=H=\frac{z_{0}}{1-e_{2}}, \end{aligned}$$
$$\begin{aligned}& z(0)< z(s)< z(1). \end{aligned}$$
Our cost function can be expressed as follows: subject to (4.5)–(4.7),
$$\min_{e_{1},e_{2},T} \bigl\{ J_{2}(e_{1},e_{2}) \bigr\} =\min_{e_{1},e_{2},T} \bigl\{ -C_{1}e_{1}p(1)-C_{2}e_{2}z(1) \bigr\} . $$
For system (4.5)–(4.7), define the violation function by
$$ \begin{aligned}[b] \Xi(e_{1},e_{2}, T)={}& \bigl[(1-e_{1})p(1)-p_{0}\bigr]^{2}+ \bigl[(1-e_{2})z(1)-z_{0}\bigr]^{2} \\ & +T \int_{0}^{1}\bigl\{ \bigl[\max\bigl\{ \bar{ \varepsilon},z(0)-z(s)\bigr\} \bigr]^{2}+\bigl[\max\bigl\{ \bar{ \varepsilon},z(s)-z(1)\bigr\} \bigr]^{2}\bigr\} \,ds. \end{aligned} $$
Noting that \(\Xi(e_{1},e_{2},T)=0\) if and only if constraints (4.6) and (4.7) are satisfied, then our cost function \(J_{2}(e_{1},e_{2})\) turns into
$$ J_{3}(e_{1},e_{2})=-C_{1}e_{1}p(1)-C_{2}e_{2}z(1)+ \varepsilon^{-v}\Xi (e_{1},e_{2},T)+\sigma \varepsilon^{w}. $$
After that, the corresponding Hamiltonian function is
$$ \begin{aligned}[b] H\bigl(s,p(s),z(s), \lambda_{1}(s),\lambda_{2}(s),T,\varepsilon\bigr)={}& \lambda _{1}f_{1}+\lambda_{2}f_{2} \\ & +T\bigl[\max\bigl\{ \bar{\varepsilon},z(0)-z(s)\bigr\} \bigr]^{2}\\ &+T \bigl[\max\bigl\{ \bar {\varepsilon},z(s)-z(1)\bigr\} \bigr]^{2}, \end{aligned} $$
where \(\lambda_{1}(s)\) and \(\lambda_{2}(s)\) are determined by the auxiliary system:
$$ \left \{ \textstyle\begin{array}{l} \textstyle\begin{array}{l} \dot{\lambda_{1}}(s) =-T[\lambda_{1}(r-az)+\lambda_{2}z(b-\frac {d\gamma}{(\gamma+p)^{2}})], \\ [3pt] \dot{\lambda_{2}}(s) =-T[-\lambda_{1}ap+\lambda_{2}(bp-u-\frac {dp}{\gamma+ p})] \\ \phantom{\dot{\lambda_{2}}(s) =} -2\varepsilon^{-v}T\max\{\bar{\varepsilon},z(0)-z(s)\} +2\varepsilon^{-v}T\max\{\bar{\varepsilon},z(s)-z(1)\}, \end{array}\displaystyle \\ \textstyle\begin{array}{l} \lambda_{1}(1) =-C_{1}e_{1}+2\varepsilon ^{-v}((1-e_{1})p(1)-p_{0})(1-e_{1}), \\ \lambda_{2}(1) =-C_{2}e_{2}+2\varepsilon ^{-v}((1-e_{2})z(1)-z_{0})(1-e_{2}) \\ \phantom{\lambda_{2}(1) =} -\int_{0}^{1}2\varepsilon^{-v}T\max\{\bar{\varepsilon },z(s)-z(1)\}\,ds . \end{array}\displaystyle \end{array}\displaystyle \right . $$
The gradients of (4.9) with respect to T, \(e_{1}\), \(e_{2}\), and ε are addressed as follows:
$$\begin{aligned}& \begin{aligned}[b]\frac{\partial J_{3}}{\partial T}={}& \int_{0}^{1} \biggl\{ \lambda _{1}(r-az)p+ \lambda_{2}\biggl(bp-u-\frac{d p}{\gamma+ p}\biggr)z \\ & + \varepsilon^{-v}\bigl[\max\bigl\{ \bar{\varepsilon },z(0)-z(s)\bigr\} \bigr]^{2}+\varepsilon^{-v}\bigl[\max\bigl\{ \bar{\varepsilon },z(s)-z(1)\bigr\} \bigr]^{2} \biggr\} \,ds, \end{aligned} \end{aligned}$$
$$\begin{aligned}& \frac{\partial J_{3}}{\partial e_{1}}= -C_{1}p_{1}-2p_{1} \varepsilon ^{-v}\bigl[(1-e_{1})p_{1}-p_{0} \bigr], \end{aligned}$$
$$\begin{aligned}& \frac{\partial J_{3}}{\partial e_{2}}= -C_{2}H-2z_{1} \varepsilon ^{-v}\bigl[(1-e_{2})z_{1}-z_{0} \bigr], \end{aligned}$$
$$\begin{aligned}& \frac{\partial J_{3}}{\partial\varepsilon}= -v\varepsilon ^{-v-1}\Xi(e_{1},e_{2},T)+w\sigma \varepsilon^{w-1}. \end{aligned}$$
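For intuition, the whole pipeline of Example 4.1 can be condensed into a few lines: integrate the rescaled system (4.5), penalise the boundary conditions (4.6) through the violation function, and search over \((e_{1},e_{2},T)\). The sketch below is our own simplification: it uses the parameter values of (4.16), omits the monotonicity constraint (4.7), and replaces the costate-gradient search by a derivative-free Nelder–Mead step for brevity, so it is not the paper's exact scheme.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

r, a, b, d, gam, u = 1.144, 0.2, 0.2, 0.5, 2.0, 0.5
C1, C2, p0, z0 = 3.0, 2.0, 7.0, 1.0
eps, v = 0.1, 2.0                      # penalty parameters (illustrative)

def rescaled(s, y, T):
    """Rescaled phytoplankton-fish dynamics (4.5) on s in (0, 1)."""
    p, z = y
    return [T * (r - a * z) * p,
            T * (b * p - u - d * p / (gam + p)) * z]

def penalised_cost(x):
    """Negative revenue plus a quadratic penalty on conditions (4.6)."""
    e1, e2, T = x
    if not (0.0 < e1 < 1.0 and 0.0 < e2 < 1.0 and 0.0 < T < 10.0):
        return 1e12                    # outside the admissible box
    sol = solve_ivp(rescaled, (0.0, 1.0), [p0, z0], args=(T,), rtol=1e-8)
    p1, z1 = sol.y[:, -1]
    xi = ((1 - e1) * p1 - p0) ** 2 + ((1 - e2) * z1 - z0) ** 2
    return -C1 * e1 * p1 - C2 * e2 * z1 + eps ** (-v) * xi

res = minimize(penalised_cost, x0=[0.7, 0.9, 1.5], method="Nelder-Mead")
```

Because the penalty weight `eps ** (-v)` is large, the optimizer is steered toward parameter triples that (approximately) close the periodic orbit while maximizing the harvest revenue.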
Next, we give the simulation of Example 4.1. Take \(e_{1}\), \(e_{2}\), T, and ε as control parameters. The parameters are chosen as
$$ \begin{gathered} r=1.144,\qquad a=0.2,\qquad b=0.2,\qquad d=0.5,\qquad \gamma=2,\qquad u=0.5, \\ v=2,\qquad w=1.55,\qquad \bar{\varepsilon}=-10^{-8},\qquad \sigma=100,\qquad C_{1}=3,\qquad C_{2}=2, \end{gathered} $$
with the initial values \(p_{0}=7\), \(z_{0}=1\). For the initial guesses \(T_{0}=3\), \(e_{10}=0.7\), \(e_{20}=0.9\), and \(\varepsilon_{0}=0.1\), we obtain the cost function \(J_{0}=1634.15\) and the threshold \(h_{0}=S(T_{0})=13.64\). Furthermore, using the costate equations and transversality conditions in (4.5) as well as the gradients (4.12)–(4.15), and starting from the above initial values, we recover the optimal control scheme shown in Table 1. Apparently, after optimal control the harvest period T is extended, the harvesting rates \(e_{1}\) and \(e_{2}\) are enlarged, and the final gains are increased. Additionally, we use this set of data to plot the phase diagrams of food and species under the optimal and initial controls, respectively (see Figure 1). The red circles represent the trajectories of the populations under the optimal scheme, whereas the blue solid line represents the trajectory under the initial scheme. From Figure 1, we also notice that the threshold of plankton increases, which is consistent with the results in Table 1. In summary, our optimal tactics not only delay the harvest but also raise the harvesting threshold and the benefit, which is desirable for human exploitation.
Figure 1: Phase portrait of (4.1) with optimal and non-optimal controls on one period \((0,T]\)

Table 1 Results of simulation 1

Non-optimal control: \(T_{0}=1.5\), \(e_{10}=0.7\), \(e_{20}=0.9\), \(\varepsilon_{0}=0.1\), \(J_{3}=7931.6\), \(H_{0}=11.23\)
Optimal control: \(T^{*}=1.6789\), \(e_{1}^{*}=0.6823\), \(e_{2}^{*}=0.9075\), \(\varepsilon^{*}=0.1\), \(J_{3}^{*}=8140.94\), \(H^{*}=14.49\)

Non-optimal control: \(T_{0}=1.5\), \(e_{10}=0.4\), \(e_{20}=0.8\), \(\varepsilon_{0}=0.1\), \(J_{3}=6291.7\), \(H_{0}=11\)
Optimal control: \(T^{*}=1.8038\), \(e_{1}^{*}=0.4285\), \(e_{2}^{*}=0.8429\), \(\varepsilon^{*}=0.1\), \(J_{3}^{*}=7592.98\), \(H^{*}=16.44\)

Non-optimal control: \(T_{0}=1\), \(e_{10}=0.7\), \(e_{20}=0.9\), \(\varepsilon_{0}=0.1\), \(J_{3}=5333.51\), \(H_{0}=13.64\)
Optimal control: \(T^{*}=1.8364\), \(e_{1}^{*}=0.7127\), \(e_{2}^{*}=0.9665\), \(\varepsilon^{*}=0.1\), \(J_{3}^{*}=8041.65\), \(H^{*}=16.84\)

Example 4.2

(Computer virus propagation under media coverage)

In this section, a new computer virus model with state impulsive control [8, 34, 35] is utilized to exemplify our algorithm and approach:
$$ \left \{ \textstyle\begin{array}{l} \left . \textstyle\begin{array}{l} \frac{ds}{dt}=g-ds+ni-(b_{1}-b_{2}\frac{i}{m+i})si, \\ \frac{di}{dt}=(b_{1}-b_{2}\frac{i}{m+i})si-ni-di, \end{array}\displaystyle \right \} \quad i< H, \\ \left . \textstyle\begin{array}{l} \Delta s=-e_{1}s, \\ \Delta i=-e_{2}i, \end{array}\displaystyle \right\}\quad i=H, \end{array}\displaystyle \right . $$
where n denotes the recovery rate from an infected computer to a susceptible one due to the application of antivirus software. \(b_{1}\) denotes the contact rate at which the susceptible computer gets infected before media alert, \(b_{2}\) is the maximum reduction of contact rate through media coverage, and \(m>0\) represents the effect of media coverage.
For (4.17), the existence and stability of the OOPS have been shown in [8]. In the following, we analogously formulate an optimal state impulsive control problem for system (4.17) on one period \((0,T]\):
$$ \left \{ \textstyle\begin{array}{l} \left . \textstyle\begin{array}{l} \frac{ds}{dt}=g-ds+ni-(b_{1}-b_{2}\frac{i}{m+i})si, \\ [2pt] \frac{di}{dt}=(b_{1}-b_{2}\frac{i}{m+i})si-ni-di, \end{array}\displaystyle \right \} \quad t\in(0,T), \end{array}\displaystyle \right . $$
where \(s(t)\) and \(i(t)\) satisfy the inequality constraint
$$ i(t)< i(T), $$
and equality constraints
$$ s(T)=\frac{s_{0}}{1-e_{1}},\qquad i(T)= \frac{i_{0}}{1-e_{2}}. $$
Here T is also the first positive time such that (4.20) holds.
Next we build the cost function. Namely, subject to (4.18)–(4.20), find the appropriate \(b_{2}\), T, \(e_{1}\), and \(e_{2}\) to minimize the objective function
$$ J_{0}(b_{2}, T, e_{1}, e_{2})=p_{1}e_{1}s(T)+p_{2}e_{2}i(T)- \omega T+ \int_{0}^{T}i(t)\,dt, $$
where \(p_{1}\) and \(p_{2}\) denote the cost of susceptible and infected computers and ω is the weight factor.
Utilizing the time scale transformation and the constraint violation function, the optimal solution of the above control problem is determined by the following system:
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} \left . \textstyle\begin{array}{l} \frac{ds}{dx}=T[g-ds+ni-(b_{1}-b_{2}\frac{i}{m+i})si], \\ \frac{di}{dx}=T[(b_{1}-b_{2}\frac{i}{m+i})si-ni-di], \end{array}\displaystyle \right\}\quad x\in(0,1), \end{array}\displaystyle \right . \end{aligned}$$
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} \textstyle\begin{array}{l} \dot{\lambda}_{1}(x) =-T[\lambda_{1}(-d-(b_{1}-b_{2}\frac {i}{m+i})i)+\lambda_{2}(b_{1}-b_{2}\frac{i}{m+i})i], \\ \dot{\lambda}_{2}(x) =-2\varepsilon^{-v}T\max\{\bar{\varepsilon },i(x)-i(1)\}-\lambda_{1}T[n-(b_{1}-b_{2}\frac{i}{m+i})s+\frac {b_{2}msi}{(m+i)^{2}}] \\ \phantom{\dot{\lambda}_{2}(x) =} -\lambda_{2}T[(b_{1}-b_{2}\frac{i}{m+i})s-\frac {b_{2}msi}{(m+i)^{2}}-n-d]-T, \end{array}\displaystyle \\ \textstyle\begin{array}{l} \lambda_{1}(1)=p_{1}e_{1}+2\varepsilon ^{-v}((1-e_{1})s(1)-s_{0})(1-e_{1}), \\ \lambda_{2}(1)=p_{2}e_{2}+2\varepsilon ^{-v}((1-e_{2})i(1)-i_{0})(1-e_{2})-2\int_{0}^{1}\varepsilon ^{-v}T\max\{\bar{\varepsilon},i(x)-i(1)\}\,dx, \end{array}\displaystyle \end{array}\displaystyle \right . \end{aligned}$$
and the derivatives of \(J_{1}\) with respect to T, \(b_{2}\), \(e_{1}\), \(e_{2}\), and ε are given by
$$\begin{aligned}& \begin{aligned}[b] \frac{\partial J_{1}}{\partial T}={}& \int_{0}^{1} \biggl\{ \lambda _{1} \biggl(g-ds+n i-\biggl(b_{1}-b_{2}\frac{i}{m+i}\biggr)si \biggr)+\lambda _{2} \biggl(\biggl(b_{1}-b_{2} \frac{i}{m+i}\biggr)si-n i-di \biggr) \\ & + \varepsilon^{-v}\bigl[\max\bigl\{ \bar{\varepsilon}, i(x)-i(1)\bigr\} \bigr]^{2}+(i-\omega) \biggr\} \,dx, \end{aligned} \end{aligned}$$
$$\begin{aligned}& \frac{\partial J_{1}}{\partial b_{2}}= \int_{0}^{1}\biggl[(\lambda _{1}- \lambda_{2})T\frac{si^{2}}{m+i}\biggr]\,dx, \end{aligned}$$
$$\begin{aligned}& \frac{\partial J_{1}}{\partial e_{1}}= -p_{1}s(1)-2s(1) \varepsilon ^{-v}\bigl[(1-e_{1})s(1)-s_{0}\bigr], \end{aligned}$$
$$\begin{aligned}& \frac{\partial J_{1}}{\partial e_{2}}= -p_{2}H-2i(1) \varepsilon ^{-v}\bigl[(1-e_{2})i(1)-i_{0}\bigr], \end{aligned}$$
$$\begin{aligned}& \frac{\partial J_{1}}{\partial\varepsilon}= -v\varepsilon ^{-v-1}\Xi(e_{1},e_{2},T)+w\sigma \varepsilon^{w-1}, \end{aligned}$$
$$\begin{aligned}& \begin{aligned}[b] \Xi(e_{1},e_{2}, T)={}& \bigl((1-e_{1})s(1)-s(0) \bigr)^{2}+ \bigl((1-e_{2})i(1)-i(0) \bigr)^{2} \\ & +T \int_{0}^{1} \bigl[\max \bigl\{ \bar{\varepsilon },i(x)-i(1) \bigr\} \bigr]^{2}\,dx, \end{aligned} \end{aligned}$$
$$\begin{aligned}& J_{1}=p_{1}e_{1}s(1)+p_{2}e_{2}i(1)+ \int_{0}^{1}\bigl(-\omega T+Ti(x)\bigr)\,dx+ \varepsilon^{-v}\Xi(e_{1},e_{2}, T)+\sigma \varepsilon^{w}. \end{aligned}$$
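The penalized cost above can be evaluated numerically by integrating the rescaled dynamics on \(x\in[0,1]\) and accumulating the running terms. The following is a hedged sketch under the assumption that a fixed-step RK4 integrator and simple Riemann sums are accurate enough for illustration; the trial point passed in the usage note is arbitrary, and the paper itself drives the optimizer with the analytic gradients rather than this direct evaluation.

```python
# Sketch: evaluating the penalized cost J_1 of Example 4.2 on the rescaled
# horizon x in [0,1].  System parameters follow the simulation section.

def rhs(s, i, T, b2, g=0.3, d=0.18, n=0.18, b1=0.65, m=3.0):
    """Rescaled dynamics ds/dx = T*f_1, di/dx = T*f_2."""
    beta = b1 - b2 * i / (m + i)
    return T * (g - d*s + n*i - beta*s*i), T * (beta*s*i - n*i - d*i)

def J1(T, b2, e1, e2, eps, s0=0.68, i0=0.36, p1=1.0, p2=2.0, omega=0.5,
       v=2, w=1.55, sigma=100.0, eps_bar=-1e-8, N=2000):
    dx = 1.0 / N
    s, i = s0, i0
    traj = [i]                            # samples of i(x) for the penalties
    for _ in range(N):                    # RK4 over the unit horizon
        k1 = rhs(s, i, T, b2)
        k2 = rhs(s + 0.5*dx*k1[0], i + 0.5*dx*k1[1], T, b2)
        k3 = rhs(s + 0.5*dx*k2[0], i + 0.5*dx*k2[1], T, b2)
        k4 = rhs(s + dx*k3[0], i + dx*k3[1], T, b2)
        s += dx*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
        i += dx*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
        traj.append(i)
    i1 = traj[-1]
    # constraint violation: terminal equality defects + path penalty i(x) < i(1)
    Xi = ((1-e1)*s - s0)**2 + ((1-e2)*i1 - i0)**2
    Xi += T * dx * sum(max(eps_bar, ii - i1)**2 for ii in traj)
    run = dx * sum(-omega*T + T*ii for ii in traj)   # Riemann sum of the integral
    return p1*e1*s + p2*e2*i1 + run + eps**(-v)*Xi + sigma*eps**w
```

A gradient-based routine would then minimize `J1` over \((b_{2}, T, e_{1}, e_{2}, \varepsilon)\), e.g. starting from the trial point `J1(6.55, 0.62, 0.16, 0.46, 1e-2)`.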
Next, we give the simulation of Example 4.2. In system (4.17), a set of parameter values is chosen as
$$ \begin{gathered} g=0.3,\qquad d=0.18,\qquad n=0.18,\qquad b_{1}=0.65,\qquad m=3,\qquad v=2, \\ w=1.55,\qquad \bar{\varepsilon}=-1e{-}8,\qquad\sigma=100,\qquad p_{1}=1,\qquad p_{2}=2,\qquad \omega =0.5 \end{gathered} $$
with the initial values \(s_{0}=0.68\) and \(i_{0}=0.36\). In Table 2, we give three sets of initial parameter values, compute the minimal cost function \(J^{*}\), and find the appropriate parameters. From Table 2, we see that after optimal control the maximum reduction \(b_{2}\) of the contact rate through media coverage is increased, the period T is shortened, the number of infected computers is maintained at a lower level \(H^{*}\), and the cost is reduced. That is, increasing the influence of the media and taking regular anti-virus measures will reduce the cost of preventing and controlling computer viruses. Furthermore, we display the dynamic behavior of susceptible and infected computers and the optimal threshold \(H^{*}\) in Figure 2 for the first parameter set of Table 2. The red circles illustrate the behavior of susceptible and infected computers when the optimal control action is taken, while the blue solid lines present their behavior under non-optimal control.
Figure 2

Phase portrait of system (4.17) with optimal and non-optimal controls on one period \((0,T]\)

Table 2

Results of simulation 2


| Non-optimal control | Optimal control |
| --- | --- |
| T = 8, \(b_{2}=0.6\), \(\varepsilon_{0}=0.1\) | \(T^{*}=6.5499\), \(b_{2}^{*}=0.6214\), \(\varepsilon^{*}=4.521e{-}5\) |
| \(e_{1}=0.8\), \(e_{2}=0.9\) | \(e_{1}^{*}=0.1613\), \(e_{2}^{*}=0.4572\) |
| \(J_{1}=69.4267\), H = 0.7347 | \(J^{*}=1.36176\), \(H^{*}=0.6632\) |
| T = 7, \(b_{2}=0.55\), \(\varepsilon_{0}=0.1\) | \(T^{*}=6.1392\), \(b_{2}^{*}=0.6345\), \(\varepsilon^{*}=1.61e{-}6\) |
| \(e_{1}=0.7\), \(e_{2}=0.6\) | \(e_{1}^{*}=0.1703\), \(e_{2}^{*}=0.4371\) |
| \(J_{1}=53.4043\), H = 0.7005 | \(J^{*}=0.7111\), \(H^{*}=0.6396\) |
| T = 6, \(b_{2}=0.5\), \(\varepsilon_{0}=0.1\) | \(T^{*}=5.9094\), \(b_{2}^{*}=0.5386\), \(\varepsilon^{*}=0.441e{-}7\) |
| \(e_{1}=0.2\), \(e_{2}=0.4\) | \(e_{1}^{*}=0.1553\), \(e_{2}^{*}=0.4422\) |
| \(J_{1}=32.5521\), H = 0.6578 | \(J^{*}=0.6984\), \(H^{*}=0.6454\) |

Example 4.3

(Species-food system)

Having treated two simpler examples, we now turn to a more complex one. Consider the following system from [36]:
$$ \left \{ \textstyle\begin{array}{l} \left . \textstyle\begin{array}{l} \frac{dx}{dt}=-\gamma xy, \\ \frac{dy}{dt}=-y(\epsilon-\delta x), \end{array}\displaystyle \right\}\quad \mbox{if } x\neq x_{1}, \\ \left . \textstyle\begin{array}{l} \Delta x(\tau_{k})=\lambda, \\ \Delta y(\tau_{k})= \left \{ \textstyle\begin{array}{l@{\quad}l} 0, & \mbox{if $k $ is not divisible by $n$}, \\ -\alpha y(\tau_{k}), & \mbox{if $k$ is divisible by $n$}, \end{array}\displaystyle \right . \end{array}\displaystyle \right \} \quad \mbox{if } x=x_{1}, \end{array}\displaystyle \right . $$
\(x(t)\) and \(y(t)\) denote the absolute or relative quantities of the food A and the species B at moment t. Let \(n>0\) be an integer and assume that the quantity of food increases by λ units at each impulse effect, while the population of the species decreases by jumps only at those impulse moments \(\tau_{k}\) whose ordinal number k is a multiple of n. We also assume that \(\lambda>0\) and \(0<\alpha<1\).
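An event-driven simulation of system (4.32) can be sketched as below in pure Python. Since \(dx/dt=-\gamma xy<0\), the food level decays monotonically between impulses and the surface \(x=x_{1}\) is always reached from above. The threshold \(x_{1}\), the step size, and the horizon here are illustrative assumptions; the remaining values follow the simulation section of this example.

```python
# Sketch: simulation of the impulsive species-food system (4.32).
# Food is replenished by lambda at every hit of x = x1; the species is
# harvested by the factor (1 - alpha) at every n-th hit.

def food_rhs(x, y, gamma=0.6, eps=0.4, delta=0.3):
    """Free dynamics of food x and species y between impulses."""
    return -gamma * x * y, -y * (eps - delta * x)

def simulate_species(x0=5.0, y0=0.5, x1=0.5, lam=4.5, alpha=0.8773,
                     n=5, dt=1e-3, t_end=30.0):
    x, y, t, hits = x0, y0, 0.0, 0
    while t < t_end:
        k1 = food_rhs(x, y)
        k2 = food_rhs(x + 0.5*dt*k1[0], y + 0.5*dt*k1[1])
        k3 = food_rhs(x + 0.5*dt*k2[0], y + 0.5*dt*k2[1])
        k4 = food_rhs(x + dt*k3[0], y + dt*k3[1])
        x += dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
        y += dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
        t += dt
        if x <= x1:                     # impulse surface x = x1 reached
            hits += 1
            x += lam                    # food released at every hit
            if hits % n == 0:           # species harvested at every n-th hit
                y *= (1 - alpha)
    return x, y, hits
```

With \(x_{1}=x_{0}-\lambda\) the trajectory returns to the same food level after each release, which is the configuration behind the periodic regime studied in [36].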
Based on the research of the period solution in [36], we propose our optimal problem on the condition that (4.32) admits a periodic solution with period T. According to the theory in Section 2, we can rewrite system (4.32) on one period \((0,T)\) as follows:
$$ \left \{ \textstyle\begin{array}{l} \left . \textstyle\begin{array}{l} \frac{dx}{dt}=-\gamma xy, \\ [2pt] \frac{dy}{dt}=-y(\epsilon-\delta x), \end{array}\displaystyle \right\}\quad \mbox{if } t\neq \tau_{k}, k=1,2,\ldots,n, k\in\mathbb {Z}_{+}, \tau_{k}\in(0,T] , \\ \left . \textstyle\begin{array}{l} \Delta x=\lambda, \\ \Delta y=0, \end{array}\displaystyle \right \}\quad \mbox{if } t= \tau_{k}, k=1,2,\ldots,n-1, \\ \left . \textstyle\begin{array}{l} \Delta x=\lambda, \\ \Delta y=-\alpha y, \end{array}\displaystyle \right\}\quad \mbox{if } t= \tau_{n}, \end{array}\displaystyle \right . $$
where n is a positive integer representing the number of food releases in one period. Furthermore, the populations of the food and the species meet the following restraints on \((0, T]\):
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l@{\quad}l} x(\tau_{k-1}^{+})>x(t)>x(\tau_{ k}), &k=1,2,\ldots,n, \\ y(\tau_{k-1}^{+})< y(t), &k=1,2,\ldots,n, \end{array}\displaystyle \right . \end{aligned}$$
$$\begin{aligned}& x(\tau_{k})+ \lambda=x_{0},\quad k=1,2,\ldots,n,\qquad y(T)=y(\tau_{n})= \frac {y_{0}}{1-\alpha}, \end{aligned}$$
where \(\tau_{n}\) is also the first positive time such that (4.35) holds. Thus, our OCP can be written as follows.
Problem (\({P_{0}}\)): In order to minimize the cost of the total released food and maximize the quantity of the species at the terminal time, we would like to find λ and α to minimize the cost function
$$J^{n}_{0}=p_{1}n\lambda-p_{2}\alpha y(T), $$
where \(p_{1}\) and \(p_{2}\) denote the market prices of the food and species.
First, the time scaling transformation is utilized to project the impulsive moments \(t=0,\tau_{1}, \tau_{2},\ldots, \tau_{n}\) into \(s=0,1,2,\ldots,n\) in a new time horizon. Define the required transformation:
$$ \frac{dt(s)}{ds}=v(s) $$
with \(t(0)=0\). \(v(s)\) is the time scaling control function and is discontinuous at \(s=1, 2,\ldots,n\). That is,
$$ v(s)=\sum_{k=1}^{n}\bar{ \tau}_{k}\chi_{(k-1,k)}(s), $$
where \(\bar{\tau}_{k}=\tau_{k}-\tau_{k-1}\) is the duration which satisfies the following condition:
$$ \sum_{k=1}^{n}\bar{ \tau}_{k}=T. $$
Define the indicator function of I by
$$ \chi_{I}(s)=\left \{ \textstyle\begin{array}{l@{\quad}l} 1, & \mbox{if } s\in I, \\ 0, & \mbox{otherwise}. \end{array}\displaystyle \right . $$
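The piecewise-constant time-scaling control \(v(s)\) and the induced map \(t(s)\) defined above translate directly into code. This sketch uses the first set of durations from Table 3 purely as sample data; note that \(v\) vanishes at the integer points, matching its stated discontinuities at \(s=1,2,\ldots,n\).

```python
# Sketch: the time-scaling control v(s) = sum_k tau_bar_k * chi_{(k-1,k)}(s)
# and the resulting time map t(s) = integral of v from 0 to s.

def chi(a, b, s):
    """Indicator function of the open interval (a, b)."""
    return 1.0 if a < s < b else 0.0

def v(s, tau_bar):
    """Piecewise-constant time-scaling control on (0, n)."""
    return sum(tb * chi(k, k + 1, s) for k, tb in enumerate(tau_bar))

def t_of_s(s, tau_bar):
    """t(s): completed durations plus the fraction of the current one."""
    k = int(s)
    if k >= len(tau_bar):
        return sum(tau_bar)             # s = n maps to the full period T
    return sum(tau_bar[:k]) + tau_bar[k] * (s - k)
```

For example, with `tau_bar = [3.23, 1.98, 1.44, 1.13, 0.94]`, the half-way point of the first stage satisfies `v(0.5, tau_bar) == 3.23`, and `t_of_s(5.0, tau_bar)` recovers the period \(T=\sum_{k}\bar{\tau}_{k}\).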
By time scaling transformation, systems (4.33)–(4.35) turn into
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l} \left . \textstyle\begin{array}{l} \frac{dx}{ds}=v(s)(-\gamma xy), \\ \frac{dy}{ds}=v(s)[-y(\epsilon-\delta x)], \end{array}\displaystyle \right \}\quad s\in(0,n) , \\ \left . \textstyle\begin{array}{l} \Delta x=\lambda, \\ \Delta y=0, \end{array}\displaystyle \right\}\quad s=1,2,\ldots, n-1 , \\ \left . \textstyle\begin{array}{l} \Delta x=\lambda, \\ \Delta y=-\alpha y, \end{array}\displaystyle \right \}\quad s=n , \end{array}\displaystyle \right . \end{aligned}$$
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l@{\quad}l} x((k-1)^{+})>x(s)>x(k),& k=1,2,\ldots,n, \\ y((k-1)^{+})< y(s), &k=1,2,\ldots,n, \end{array}\displaystyle \right . \end{aligned}$$
$$\begin{aligned}& x(k)+\lambda=x_{0},\quad k=1,2,\ldots,n, \qquad y(n)=y(T)=\frac{y_{0}}{1-\alpha}. \end{aligned}$$

Thus we can change problem (\(P_{0}\)) into the next problem.

Problem (\(P_{1}\)): Minimize the transformed cost function
$$J_{1}^{n}=p_{1}n\lambda-p_{2}\alpha y(n) $$
by selecting λ and α.
This optimal problem has multiple jump times, which distinguishes it from the first two examples and makes it harder to solve. To handle this, the time translation transformation is introduced [37]. Define
$$ x_{i}(s)=x(s+i-1),\qquad y_{i}(s)=y(s+i-1),\qquad \tau_{i}(s)=t(s+i-1),\quad i=1,2,\ldots,n. $$
Then (4.40) becomes
$$ \left \{ \textstyle\begin{array}{l} \left . \textstyle\begin{array}{l} \frac{dx_{i}}{ds}=\bar{\tau}_{i}(-\gamma x_{i}y_{i})\doteq f_{1}^{i}, \\ [2pt] \frac{dy_{i}}{ds}=\bar{\tau}_{i}[-y_{i}(\epsilon-\delta x_{i})]\doteq f_{2}^{i}, \end{array}\displaystyle \right .\quad s\in(0,1), \mbox{and } i=1,2,\ldots,n, \\ \frac{d\tau_{i}(s)}{ds}=\bar{\tau}_{i}. \end{array}\displaystyle \right . $$
with the initial conditions \(x_{1}(0)=x_{0}\) and \(y_{1}(0)=y_{0}\). Then constraints (4.41) and (4.42) are rewritten as
$$\begin{aligned}& \left \{ \textstyle\begin{array}{l@{\quad}l} x_{i}(0)>x_{i}(s)>x_{i}(1),& i=1,2,\ldots,n, s\in(0,1), \\ y_{i}(0)< y_{i}(s), &i=1,2,\ldots,n, \end{array}\displaystyle \right . \end{aligned}$$
$$\begin{aligned}& x_{i}(1)+\lambda=x_{0}, \quad i=1,2,\ldots,n, \qquad y_{n}(1)=\frac {y_{0}}{1-\alpha}. \end{aligned}$$
Further, the cost function \(J_{1}^{n}\) is rewritten as follows:
$$J_{2}^{n}=p_{1}n\lambda-p_{2}\alpha y_{n}(1). $$
In view of (3.7), we define a constraint violation function
$$ \begin{aligned}[b] \Xi(\lambda, \alpha, \bar{\tau})={}& \sum_{i=1}^{n}\bigl[x_{i}(1)-x_{0}+ \lambda\bigr]^{2}+\bigl[(1-\alpha )y_{n}(1)-y_{0} \bigr]^{2} \\ & + \sum_{i=1}^{n}\bar{\tau}_{i} \int_{0}^{1} \bigl\{ \bigl[\max\bigl\{ x_{i}(s)-x_{0},\bar{\varepsilon}\bigr\} \bigr]^{2}+ \bigl[\max\bigl\{ x_{i}(1)-x_{i}(s), \bar{\varepsilon}\bigr\} \bigr]^{2} \\ & + \bigl[\max\bigl\{ y_{i}(0)-y_{i}(s),\bar{\varepsilon} \bigr\} \bigr]^{2} \bigr\} \,ds. \end{aligned} $$

Denote \(\bar{\tau}=(\bar{\tau}_{1},\bar{\tau}_{2},\ldots,\bar{\tau }_{n})\). Then problem (\(P_{2}\)) is given as follows:

Optimize parameters \(\lambda,\alpha\), the vector τ̄ together with the decision variable \(\varepsilon\in[0,\varepsilon _{1}]\) to minimize the transformed equivalent cost function
$$J_{3}^{n}=p_{1}n\lambda-p_{2}\alpha y_{n}(1)+\varepsilon^{-a}\Xi +\sigma\varepsilon^{b} $$
subject to (4.44)–(4.46) as well as the restraints \(0<\alpha<1\) and \(\lambda>0\).
Next, according to Theorem 4.1 in [38] and [32], we define the corresponding Hamiltonian function
$$ \begin{aligned}[b] H_{i}={}& \varepsilon^{-a}\bar{\tau}_{i} \bigl\{ \bigl[\max\bigl\{ x_{i}(s)-x_{i}(0),\bar{\varepsilon}\bigr\} \bigr]^{2}+ \bigl[\max\bigl\{ x_{i}(1)-x_{i}(s), \bar{\varepsilon}\bigr\} \bigr]^{2} \\ & + \bigl[\max\bigl\{ y_{i}(0)-y_{i}(s),\bar{\varepsilon} \bigr\} \bigr]^{2} \bigr\} +l_{1}^{i}f_{1}^{i}+l_{2}^{i}f_{2}^{i} \quad\mbox{for } i=1,2,\ldots, n, \end{aligned} $$
where \(l_{1}^{i}\) and \(l_{2}^{i}\) are determined by the auxiliary system:
$$ \left \{ \textstyle\begin{array}{l} \left . \textstyle\begin{array}{l} \frac{dl_{1}^{i}}{ds} = -\bar{\tau}_{i} \{2\varepsilon ^{-a} [\max\{x_{i}(s)-x_{i}(0),\bar{\varepsilon}\}-\max\{ x_{i}(1)-x_{i}(s),\bar{\varepsilon}\} ] \\ \phantom{\frac{dl_{1}^{i}}{ds} =} -l_{1}^{i}\gamma y_{i}+l_{2}^{i}\delta y_{i} \} , \\ \frac{dl_{2}^{i}}{ds}= -\bar{\tau}_{i} \{ -l_{1}^{i}\gamma x_{i}-l_{2}^{i}\epsilon+l_{2}^{i}\delta x_{i} \\ \phantom{\frac{dl_{2}^{i}}{ds}=} -2\varepsilon^{-a}\max\{y_{i}(0)-y_{i}(s),\bar {\varepsilon}\} \}, \end{array}\displaystyle \right\}\quad i=1,\ldots,n, \\ \left . \textstyle\begin{array}{l} l_{1}^{i}(1)= 2\varepsilon^{-a} \{(x_{i}(1)-x_{0}+\lambda)+ \int_{0}^{1}\bar{\tau}_{i}\max\{x_{i}(1)-x_{i}(s),\bar{\varepsilon }\}\,ds \} \\ \phantom{l_{1}^{i}(1)=} + l_{1}^{i+1}(0), \\ l_{2}^{i}(1)= l_{2}^{i+1}(0), \end{array}\displaystyle \right \} \quad i=1,\ldots,n-1, \\ \left . \textstyle\begin{array}{l} l_{1}^{n}(1)= 2\varepsilon^{-a} \{(x_{n}(1)-x_{0}+\lambda)+\int _{0}^{1}\bar{\tau}_{n}\max\{x_{n}(1)-x_{n}(s),\bar{\varepsilon}\} \,ds \}, \\ l_{2}^{n}(1)= -p_{2}\alpha+2\varepsilon^{-a}(1-\alpha)((1-\alpha )y_{n}(1)-y_{0}), \end{array}\displaystyle \right \}\quad i=n. \end{array}\displaystyle \right . $$
Now we give the derivatives of \(J_{3}^{n}\) with respect to τ̄, λ, α, and ε:
$$\begin{aligned}& \begin{aligned}[b] \nabla_{\tau}J_{3}^{n}={}& \sum_{i=1}^{n} \int_{0}^{1} \bigl\{ \varepsilon^{-a} \bigl\{ \bigl[\max\bigl\{ x_{i}(s)-x_{0},\bar{\varepsilon} \bigr\} \bigr]^{2}+\bigl[\max\bigl\{ x_{i}(1)-x_{i}(s), \bar{\varepsilon}\bigr\} \bigr]^{2} \\ & +\bigl[\max\bigl\{ y_{i}(0)-y_{i}(s),\bar{\varepsilon} \bigr\} \bigr]^{2} \bigr\} -l_{1}^{i}\gamma x_{i}y_{i}-l_{2}^{i} \bigl[y_{i}(\epsilon-\delta x_{i})\bigr] \bigr\} \,ds, \end{aligned} \end{aligned}$$
$$\begin{aligned}& \nabla_{\lambda}J_{3}^{n}=p_{1}n+2 \sum_{i=1}^{n}\varepsilon ^{-a} \bigl(x_{i}(1)-x_{0}+\lambda \bigr)+\sum _{i=2}^{n}l_{1}^{i}(0), \end{aligned}$$
$$\begin{aligned}& \nabla_{\alpha}J_{3}^{n}=-p_{2}y_{n}(1)-2 \varepsilon ^{-a}y_{n}(1) \bigl((1-\alpha)y_{n}(1)-y_{0} \bigr), \end{aligned}$$
$$\begin{aligned}& \nabla_{\varepsilon}J_{3}^{n}=-a \varepsilon^{-a-1}\Xi(\lambda, \alpha, \bar{\tau})+b\sigma \varepsilon^{b-1}. \end{aligned}$$
Next, the simulation of Example 4.3 is given. Choose \((x_{0},y_{0})=(5,0.5)\). The parameter values are taken as
$$ \begin{gathered} \gamma=0.6,\qquad \epsilon=0.4,\qquad \delta=0.3,\qquad a=2,\qquad b=1.55, \\ \bar{\varepsilon}=-1e{-}8,\qquad \sigma=100,\qquad p_{1}=1,\qquad p_{2}=2,\qquad \lambda =4.5,\qquad \alpha=0.8773. \end{gathered} $$
The optimal problem is solved by a MATLAB program implementing the above computational approach.
Take \(n=5\), which means that the food is released five times and the species is harvested once on one period \((0,T]\). The simulation results of system (4.32) are listed in Table 3 for various parameters. Here, the optimal time intervals \(\tau_{1}^{*}\), \(\tau_{2}^{*}\), \(\tau _{3}^{*}\), \(\tau_{4}^{*}\), and \(\tau_{5}^{*}\) are shortened, which consequently shortens the period T. The optimal increment of food \(\lambda^{*}\) decreases slightly, the optimal harvesting rate \(\alpha^{*}\) changes only slightly, and the cost value \(J_{3}^{5*}\) is reduced. Figure 3(c) visualizes the optimal control strategy: less food is released at every impulsive time and a larger amount of species is acquired at the terminal time. We plot the time evolution of food and species under the optimal and non-optimal control laws with the first parameter set of Table 3 (see Figures 3(a) and (b)). The black lines show the dynamic behavior under non-optimal control and the red lines under optimal control. Obviously, the optimal tactics partly raise the level of the species, which is desirable from the viewpoint of population protection.
Figure 3

Comparisons of system (4.32) with optimal and non-optimal controls on one period \((0,T] \) with \(n=5\). (a) and (b) are the time series of food and species, (c) is the phase portrait

Table 3

Results of simulation for \(n=5\)


| Non-optimal control | Optimal control |
| --- | --- |
| \(\tau_{1}=3.23\), \(\tau_{2}=1.98\), \(\tau_{3}=1.44\) | \(\tau_{1}^{*}=2.99\), \(\tau_{2}^{*}=1.77\), \(\tau_{3}^{*}=1.26\) |
| \(\tau_{4}=1.13\), \(\tau_{5}=0.94\) | \(\tau_{4}^{*}=0.98\), \(\tau_{5}^{*}=0.81\) |
| λ = 4.51, α = 0.8773, \(\varepsilon_{0}=0.1\) | \(\lambda^{*}=4.4\), \(\alpha^{*}=0.89\), \(\varepsilon^{*}=6.581e{-}5\) |
| \(J_{3}^{5}=18.2467\), \(x_{1}=0.49\) | \(J_{3}^{5*}=14.1559\), \(x_{1}^{*}=0.6\) |
| \(\tau_{1}=2.35\), \(\tau_{2}=1.27\), \(\tau_{3}=0.88\) | \(\tau_{1}^{*}=2.34\), \(\tau_{2}^{*}=1.26\), \(\tau_{3}^{*}=0.876\) |
| \(\tau_{4}=0.67\), \(\tau_{5}=0.54\) | \(\tau_{4}^{*}=0.67\), \(\tau_{5}^{*}=0.54\) |
| λ = 4, α = 0.9028, \(\varepsilon_{0}=0.1\) | \(\lambda^{*}=3.99\), \(\alpha^{*}=0.9029\), \(\varepsilon^{*}=71e{-}5\) |
| \(J_{3}^{5}=13.5705\), \(x_{1}=1\) | \(J_{3}^{5*}=10.6311\), \(x_{1}^{*}=4\) |
| \(\tau_{1}=4.77\), \(\tau_{2}=3.82\), \(\tau_{3}=3.2\) | \(\tau_{1}^{*}=4.74\), \(\tau_{2}^{*}=3.80\), \(\tau_{3}^{*}=3.17\) |
| \(\tau_{4}=2.76\), \(\tau_{5}=2.44\) | \(\tau_{4}^{*}=2.73\), \(\tau_{5}^{*}=2.40\) |
| λ = 4.8, α = 0.7724, \(\varepsilon_{0}=0.1\) | \(\lambda^{*}=4.79\), \(\alpha^{*}=0.7274\), \(\varepsilon^{*}=21e{-}4\) |
| \(J_{3}^{5}=24.2191\), \(x_{1}=0.2\) | \(J_{3}^{5*}=21.30982\), \(x_{1}^{*}=0.21\) |

Next, we consider the case \(n=1\), which means that the food is released and the species is harvested once each on one period \((0,T]\). The results are shown in Table 4 for three sets of parameters. For a given initial interval T, release amount λ, and harvesting rate α, our goal is to compute the minimal cost function \(J_{3}^{1*}\) together with the optimal time interval \(T^{*}\), release amount \(\lambda^{*}\), and capture rate \(\alpha^{*}\). We find that the optimal control policy not only lowers the value of the cost function but also boosts the number of species at the terminal time; meanwhile, the time interval is shortened. The graphical output in Figure 4(c) directly displays our optimal control strategy for (4.32) with \(n=1\). The black lines illustrate the behavior under non-optimal control, while the red lines show the dynamic behavior under optimal control. The trajectories of food and species under the optimal control strategy are drawn in Figures 4(a) and (b); the level of species at the terminal time is higher in the optimal mode than in the non-optimal one.
Figure 4

Comparisons of system (4.32) with optimal and non-optimal controls when \(n=1\). (a) and (b) are the time series of food and species, (c) is the phase portrait

Table 4

Results of simulation for \(n=1\)


| Non-optimal control | Optimal control |
| --- | --- |
| T = 3.23, λ = 4.51 | \(T^{*}=2.93\), \(\lambda^{*}=4.37\) |
| α = 0.5881, ε = 0.1 | \(\alpha^{*}=0.62\), \(\varepsilon^{*}=1.57e{-}4\) |
| \(J_{3}^{1}=3.65\), \(x_{1}=0.49\) | \(J_{3}^{1*}=2.77\), \(x_{1}^{*}=0.63\) |
| T = 2.35, λ = 4 | \(T^{*}=2.31\), \(\lambda^{*}=3.96\) |
| α = 0.6501, ε = 0.1 | \(\alpha^{*}=0.65\), \(\varepsilon^{*}=9e{-}5\) |
| \(J_{3}^{1}=2.71\), \(x_{1}=1\) | \(J_{3}^{1*}=2.01\), \(x_{1}^{*}=1.04\) |
| T = 4.77, λ = 4.8 | \(T^{*}=4.42\), \(\lambda^{*}=4.76\) |
| α = 0.2952, ε = 0.1 | \(\alpha^{*}=0.41\), \(\varepsilon^{*}=2.64e{-}4\) |
| \(J_{3}^{1}=4.94\), \(x_{1}=0.2\) | \(J_{3}^{1*}=4.05\), \(x_{1}^{*}=0.24\) |

To compare the two optimal control modes (namely \(n=1\) and \(n=5\)), we compute \(5J^{1*}_{3}\) from Table 4, which represents performing the mode \(n=1\) five times. Compared with \(J^{5*}_{3}\), which represents executing the mode \(n=5\) once, the cost \(5J^{1*}_{3}\) is slightly less than \(J^{5*}_{3}\). This result implies that the control mode of frequently releasing food and frequently harvesting the species is superior to that of frequently releasing food and infrequently harvesting the species.
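The comparison above is a simple arithmetic check on the reported optimal costs, sketched here with the three parameter sets of Tables 3 and 4.

```python
# Sketch: five runs of the n = 1 mode (Table 4) versus one run of the
# n = 5 mode (Table 3), for each of the three reported parameter sets.

j1_star = [2.77, 2.01, 4.05]             # J_3^{1*} from Table 4
j5_star = [14.1559, 10.6311, 21.30982]   # J_3^{5*} from Table 3

for j1, j5 in zip(j1_star, j5_star):
    print(f"5*J1* = {5*j1:.4f}  vs  J5* = {j5:.4f}")
```

In all three cases \(5J^{1*}_{3}<J^{5*}_{3}\), confirming the stated conclusion.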

5 Discussions

The topic of ISFC has been investigated extensively in the last decades due to its potential applications. Many authors have endeavored to explore the periodic solutions of various systems, including population, ecology, chemostat, and epidemic models. However, if the system is exploited in a periodic mode, what strategies should be implemented to achieve optimal management? So far, few researchers have paid attention to this task, which has been our focus in the preceding sections. In summary, our approach consists of three procedures. (1) Under the hypothesis that the ISFC system has a periodic solution, the optimal problem of ISFC is transformed into a parameter optimization problem over an unspecified time with inequality constraints, together with the constraint of first arrival at the threshold. (2) The rescaled time and a constraint violation function are introduced to translate the above problem into an unconstrained parameter selection problem over a specified time. (3) The gradients of the objective function with respect to all parameters are given to compute the optimal value of the cost function. Finally, three examples involving a marine ecosystem, computer virus control, and resource administration are presented to confirm the validity of our approach. In these examples, the parameters of the impulses and of the systems, on continuous systems and on a hybrid system, are optimized respectively.

Despite the endeavors in this paper, it may be beneficial to investigate a wider variety of topics in the future: (1) actual data are necessary to achieve effective state feedback impulsive control; (2) other means of solving the OCP of state-dependent impulsive systems deserve exploration; (3) this method can be applied to other fields; (4) for the optimal problems of the hybrid system in Example 4.3, the number of impulsive effects that occur is worth exploring.



Acknowledgements

The authors thank the referees for their careful reading of the original manuscript and many valuable comments and suggestions that greatly improved the presentation of this paper.


Funding

This work was supported by the National Natural Science Foundation of China (11471243, 11501409).

Authors’ contributions

All authors contributed equally to the manuscript and read and approved the final draft.

Competing interests

The authors declare that they have no competing interests.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

School of Science, Tianjin Polytechnic University, Tianjin, China
School of Computer Science and Software Engineering, Tianjin Polytechnic University, Tianjin, China


  1. Tian, Y., Sun, K., Chen, L., Kasperski, A.: Studies on the dynamics of a continuous bioprocess with impulsive state feedback control. Chem. Eng. J. 157(2), 558–567 (2010)
  2. Li, Z., Chen, L., Liu, Z.: Periodic solution of a chemostat model with variable yield and impulsive state feedback control. Appl. Math. Model. 36(3), 1255–1266 (2012)
  3. Zhao, Z., Yang, L., Chen, L.: Impulsive state feedback control of the microorganism culture in a turbidostat. J. Math. Chem. 47(4), 1224–1239 (2010)
  4. Sun, K., Tian, Y., Chen, L., Kasperski, A.: Nonlinear modelling of a synchronized chemostat with impulsive state feedback control. Math. Comput. Model. 52(1–2), 227–240 (2010)
  5. Wei, C., Chen, L.: Homoclinic bifurcation of prey–predator model with impulsive state feedback control. Appl. Math. Comput. 237(7), 282–292 (2014)
  6. Tang, S., Tang, B., Wang, A., Xiao, Y.: Holling II predator–prey impulsive semi-dynamic model with complex Poincaré map. Nonlinear Dyn. 81(3), 1575–1596 (2015)
  7. Guo, H., Chen, L., Song, X.: Dynamical properties of a kind of SIR model with constant vaccination rate and impulsive state feedback control. Int. J. Biomath. 10, Article ID 1750093 (2017)
  8. Zhang, M., Song, G., Chen, L.: A state feedback impulse model for computer worm control. Nonlinear Dyn. 85(3), 1561–1569 (2016)
  9. Wei, C., Chen, L.: Heteroclinic bifurcations of a prey–predator fishery model with impulsive harvesting. Int. J. Biomath. 6, Article ID 1350031 (2013)
  10. Guo, H., Chen, L., Song, X.: Qualitative analysis of impulsive state feedback control to an algae–fish system with bistable property. Appl. Math. Comput. 271, 905–922 (2015)
  11. Zhao, Z., Pang, L., Song, X.: Optimal control of phytoplankton–fish model with the impulsive feedback control. Nonlinear Dyn. 88, 2003–2011 (2017)
  12. Chen, S., Xu, W., Chen, L., Huang, Z.: A white-headed langurs impulsive state feedback control model with sparse effect and continuous delay. Commun. Nonlinear Sci. Numer. Simul. 50, 88–102 (2017)
  13. Guo, H., Song, X., Chen, L.: Qualitative analysis of a Korean pine forest model with impulsive thinning measure. Appl. Math. Comput. 234(234), 203–213 (2014)
  14. Yu, R., Leung, P.: Optimal partial harvesting schedule for aquaculture operations. Mar. Resour. Econ. 21(3), 301–315 (2006)
  15. Martin, R.B.: Optimal control drug scheduling of cancer chemotherapy. Automatica 28(6), 1113–1123 (1992)
  16. Loxton, R.C., Teo, K.L., Rehbock, V., Ling, W.K.: Optimal switching instants for a switched-capacitor DC/DC power converter. Automatica 45(4), 973–980 (2009)
  17. Açıkmeşe, B., Blackmore, L.: Lossless convexification of a class of optimal control problems with non-convex control constraints. Automatica 47, 341–347 (2011)
  18. Chyba, M., Haberkorn, T., Smith, R.N., Choi, S.K.: Design and implementation of time efficient trajectories for autonomous underwater vehicles. Ocean Eng. 35(1), 63–76 (2008)
  19. Liang, X., Pei, Y., Zhu, M., Lv, Y.: Multiple kinds of optimal impulse control strategies on plant–pest–predator model with eco-epidemiology. Appl. Math. Comput. 287–288, 1–11 (2016)
  20. Pei, Y., Li, C., Liang, X.: Optimal therapies of a virus replication model with pharmacological delays based on reverse transcriptase inhibitors and protease inhibitors. J. Phys. A, Math. Theor. 50, Article ID 455601 (2017)
  21. Lin, Q., Loxton, R., Teo, K.L.: The control parameterization method for nonlinear optimal control: a survey. J. Ind. Manag. Optim. 10(1), 275–309 (2017)
  22. Teo, K.L., Goh, C.J., Wong, K.H.: A Unified Computational Approach to Optimal Control Problems. Longman, Harlow (1991)
  23. Caccetta, L., Loosen, I., Rehbock, V.: Computational aspects of the optimal transit path problem. J. Ind. Manag. Optim. 4(1), 95–105 (2017)
  24. Lee, H.W.J., Teo, K.L., Rehbock, V., Jennings, L.S.: Control parametrization enhancing technique for time-optimal control problems. Dyn. Syst. Appl. 6(2), 243–262 (1997)
  25. Teo, K.L., Goh, C.J., Lim, C.C.: A computational method for a class of dynamical optimization problems in which the terminal time is conditionally free. IMA J. Math. Control Inf. 6(1), 81–95 (1989)
  26. Lin, Q., Loxton, R., Teo, K.L., Wu, Y.H.: Optimal control problems with stopping constraints. J. Glob. Optim. 63(4), 835–861 (2015)
  27. Jiang, C., Lin, Q., Yu, C., Teo, K.L., Duan, G.R.: An exact penalty method for free terminal time optimal control problem with continuous inequality constraints. J. Optim. Theory Appl. 154(1), 30–53 (2012)
  28. Yu, C., Teo, K.L., Bai, Y.: An exact penalty function method for nonlinear mixed discrete programming problems. Optim. Lett. 7(1), 23–38 (2013)
  29. Lin, Q., Loxton, R., Teo, K.L., Wu, Y.H., Yu, C.: A new exact penalty method for semi-infinite programming problems. J. Comput. Appl. Math. 261(4), 271–286 (2014)
  30. Yu, C., Teo, K.L., Zhang, L., Bai, Y.: A new exact penalty function method for continuous inequality constrained optimization problems. J. Ind. Manag. Optim. 6(4), 559–576 (2010)
  31. Teo, K., Goh, C.: A simple computational procedure for optimization problems with functional inequality constraints. IEEE Trans. Autom. Control 32(10), 940–941 (2003)
  32. Liu, Y., Teo, K.L., Jennings, L.S., Wang, S.: On a class of optimal control problems with state jumps. J. Optim. Theory Appl. 98(1), 65–82 (1998)
  33. Rui, L.: Optimal Control Theory and Application of Pulse Switching System. University of Electronic Science and Technology Press, Chengdu (2010)
  34. Tchuenche, J.M., Dube, N., Bhunu, C.P., Smith, R.J., Bauch, C.T.: The impact of media coverage on the transmission dynamics of human influenza. BMC Public Health 11(Suppl. 1), S5 (2011)
  35. Li, Y., Cui, J.: The effect of constant and pulse vaccination on SIS epidemic models incorporating media coverage. Commun. Nonlinear Sci. Numer. Simul. 14(5), 2353–2365 (2009)
  36. Li, X., Bohner, M., Wang, C.K.: Impulsive Differential Equations. Pergamon, Elmsford (2015)
  37. Wu, C.Z., Teo, K.L.: Global impulsive optimal control computation. J. Ind. Manag. Optim. 2(2), 435–450 (2017)
  38. Teo, K.L.: Control parametrization enhancing transform to optimal control problems. Nonlinear Anal., Theory Methods Appl. 63(5–7), e2223–e2236 (2005)


© The Author(s) 2018