Open Access

An optimal stopping problem in the stochastic Gilpin-Ayala population model

Advances in Difference Equations 2012, 2012:210

https://doi.org/10.1186/1687-1847-2012-210

Received: 7 September 2012

Accepted: 25 November 2012

Published: 10 December 2012

Abstract

We present an explicit solution to an optimal stopping problem for the stochastic Gilpin-Ayala population model by applying the smooth pasting technique (Dixit in The Art of Smooth Pasting, 1993; Dixit and Pindyck in Investment under Uncertainty, 1994). The optimal stopping rule consists of an optimal stopping time and an optimal stopping boundary that maximize the expected discounted reward; both are given explicitly in this paper.

Keywords

optimal stopping time; stochastic Gilpin-Ayala population model; smooth pasting technique

1 Introduction

Optimal stopping problems for stochastic systems play an important role in stochastic control theory. Such problems attract special interest in many fields, such as finance and biological modelling.

The aim of an optimal stopping problem is to find a random time at which the stochastic process should be stopped so that the expected value of a given reward functional is maximized. Most explicitly solvable stopping problems with exponential discounting concern one-dimensional diffusion processes, and the optimal stopping times are the first times at which the underlying processes exit certain regions bounded by constant thresholds.

In this paper, the optimal stopping time for the stochastic Gilpin-Ayala model [1–4], whose solution is a diffusion process, is studied, and explicit expressions for the value function and the boundary in this optimal stopping problem are obtained. To the best of our knowledge, there have been few attempts to treat optimal harvesting problems via optimal stopping. Many scholars have studied stochastic logistic models, such as [5, 6], but there are only a few results on the corresponding stochastic Gilpin-Ayala model, which is our motivation.

The Gilpin-Ayala population model is one of the most important and classic mathematical bio-economic models due to its theoretical and practical significance. In 1973, Gilpin and Ayala [1] proposed the following model:
$$dX_t = \bigl(rX_t - bX_t^{\theta+1}\bigr)\,dt, \quad \theta > 0,$$
(1.1)

where $X_t$ denotes the density of the resource population at time $t$, $r > 0$ is the intrinsic growth rate, and $b = r/K > 0$, where $K$ is the environmental carrying capacity. Obviously, (1.1) reduces to the classic logistic population model when $\theta = 1$.

Recently, Eq. (1.1) has been extensively studied and many important results have been obtained; see, e.g., [8–11].

However, in the real world population systems are affected by random disturbances such as environmental effects, financial events and so on. In order to describe the real world better, white noise has been introduced into population systems by many researchers [2, 3, 12–14]. In this paper, we study the optimal stopping problem for the stochastic Gilpin-Ayala population model
$$dX_t = \bigl(rX_t - bX_t^{\theta+1}\bigr)\,dt + \mu X_t\,dB_t, \quad X_0 = x_0,\ t > 0,\ \theta > 0,$$
(1.2)

where the constants $r$, $b$ are as in (1.1), $\mu$ measures the intensity of the white noise, and $B_t$ is a one-dimensional Brownian motion [15].
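To make the model concrete, here is a minimal simulation sketch of (1.2) using the Euler-Maruyama scheme; all parameter values (r, K, mu, theta, x0, T, n) are illustrative assumptions chosen by the editor, not quantities taken from the paper.

```python
import numpy as np

# Euler-Maruyama simulation sketch of the stochastic Gilpin-Ayala SDE (1.2):
#   dX_t = (r*X_t - b*X_t**(theta+1)) dt + mu*X_t dB_t,   with b = r/K.
# Parameter values below are illustrative assumptions, not from the paper.
r, K, mu, theta = 0.3, 10.0, 0.2, 2.0
b = r / K
x0, T, n = 2.0, 50.0, 5000
dt = T / n

rng = np.random.default_rng(0)
X = np.empty(n + 1)
X[0] = x0
for i in range(n):
    dB = rng.normal(0.0, np.sqrt(dt))              # Brownian increment
    drift = r * X[i] - b * X[i] ** (theta + 1)     # Gilpin-Ayala drift
    X[i + 1] = max(X[i] + drift * dt + mu * X[i] * dB, 1e-12)  # keep the path positive

print("approximate value of X at time T =", T, ":", X[-1])
```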

The outline of this paper is as follows. Section 2 formulates the problem of choosing an optimal stopping time for the stochastic Gilpin-Ayala population model. In Section 3, a closed-form candidate for the value function is given; we verify that the candidate is indeed optimal, and the optimal stopping boundary is obtained by the smooth pasting technique.

2 Formulation of the problem

Let the probability space $(\Omega, \mathcal{F}, P)$ satisfy the usual conditions. Suppose the population with size $X_t$ at time $t$ is given by the stochastic Gilpin-Ayala population model
$$dX_t = \bigl(rX_t - bX_t^{\theta+1}\bigr)\,dt + \mu X_t\,dB_t, \quad X_0 = x_0,\ t > 0.$$
(2.1)
It can be proved that if $r > 0$ and $b > 0$, then the stochastic Gilpin-Ayala equation (2.1) has a global, continuous, positive solution $X_t$ given by
$$X_t^x = \left(\frac{1}{x^{\theta}}\,e^{-\theta\left(\left(r-\frac{1}{2}\mu^{2}\right)t + \mu B(t)\right)} + \int_0^t b\theta\, e^{-\theta\left(\left(r-\frac{1}{2}\mu^{2}\right)(t-s) + \mu\left(B(t)-B(s)\right)\right)}\,ds\right)^{-1/\theta}$$
(2.2)

for all $t \ge 0$, where $B(t)$ is a one-dimensional Brownian motion (see [16]); note that $0 < X_t < K$.
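For the reader's convenience, here is a brief sketch (not spelled out in the original) of how (2.2) can be obtained; it rests on the standard substitution $Y_t = X_t^{-\theta}$, which linearizes (2.1).

```latex
% Sketch: derivation of (2.2) via the substitution Y_t = X_t^{-theta}.
% By Ito's formula applied to Y_t = X_t^{-\theta},
\[
  dY_t = \Bigl(b\theta + \bigl(\tfrac{1}{2}\theta(\theta+1)\mu^{2} - \theta r\bigr)Y_t\Bigr)\,dt
         - \theta\mu\,Y_t\,dB_t ,
\]
% which is a linear SDE. Its explicit solution is
\[
  Y_t = \frac{1}{x^{\theta}}\,e^{-\theta\left(\left(r-\frac{1}{2}\mu^{2}\right)t+\mu B(t)\right)}
        + \int_{0}^{t} b\theta\,
          e^{-\theta\left(\left(r-\frac{1}{2}\mu^{2}\right)(t-s)+\mu\left(B(t)-B(s)\right)\right)}\,ds ,
\]
% and X_t^x = Y_t^{-1/\theta} recovers (2.2).
```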

The optimal stopping problem here is to find the optimal value function $\Phi$ and an optimal stopping time $\tau^*$ such that
$$\Phi(s,x) = \sup_{\tau} E^{(0,x)}\bigl[e^{-\rho\tau}(X_{\tau} - a) + w\bigr] = E^{(0,x)}\bigl[e^{-\rho\tau^*}(X_{\tau^*} - a) + w\bigr], \quad a > 0.$$
(2.3)
The supremum is taken over all stopping times $\tau$ of the process $X_t$, and the reward function is
$$g(s,x) = e^{-\rho s}(x - a) + w,$$
(2.4)

where the discounting exponent $\rho > 0$, $e^{-\rho\tau}(X_{\tau} - a)$ is the profit at time $\tau$, and $a$ represents a fixed fee; it is natural to assume that $a < K$. The positive constant $w$ represents the permanent assets. $E^x$ denotes the expectation with respect to the probability law $Q^x$ of the process $X_t$, $t \ge 0$, starting at $X_0 = x > 0$.

Note that the case where the initial value satisfies $x \le a$ is trivial, so we further assume that $x > a$; moreover, the reward at the stopping time $\tau$ is bounded, since $0 < X_{\tau} < K$.
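As an illustration of the reward functional in (2.3)-(2.4), the following sketch estimates by Monte Carlo the expected discounted reward of the simple rule "stop the first time $X_t$ reaches a fixed threshold". All numerical values, the threshold and the truncation horizon are illustrative assumptions of the editor, not quantities from the paper; time discretization and truncation introduce some bias.

```python
import numpy as np

# Monte Carlo sketch: estimate E[e^{-rho*tau}(X_tau - a) + w] for the rule
# "stop when X_t first reaches a fixed threshold" (illustrative values only).
r, K, mu, theta, rho, a, w = 0.3, 10.0, 0.2, 2.0, 0.5, 2.0, 1.0
b = r / K
threshold, x0 = 3.0, 2.5
dt, t_max, n_paths = 0.01, 100.0, 4000

rng = np.random.default_rng(1)
x = np.full(n_paths, x0)
tau = np.full(n_paths, t_max)              # truncated if the threshold is never hit
running = np.ones(n_paths, dtype=bool)
t = 0.0
while t < t_max and running.any():
    dB = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    drift = r * x[running] - b * x[running] ** (theta + 1)
    x[running] = np.maximum(x[running] + drift * dt + mu * x[running] * dB[running], 1e-12)
    t += dt
    hit = running & (x >= threshold)
    tau[hit] = t
    running &= ~hit

print("estimated expected discounted reward:", np.mean(np.exp(-rho * tau) * (x - a) + w))
```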

3 Analysis

Let us start with the infinitesimal generator [15] $\mathcal{A}$ of the Itô diffusion $Y_t = (t, X_t)^T$, which is defined by
$$\mathcal{A}f(y) = \lim_{t \downarrow 0}\frac{E^y[f(Y_t)] - f(y)}{t}, \quad y \in \mathbb{R}^2,\ f \in C^2(\mathbb{R}^2).$$
(3.1)
By applying Itô's formula, we have
$$\mathcal{A}f(s,x) = \frac{\partial f}{\partial s} + \bigl(rx - bx^{\theta+1}\bigr)\frac{\partial f}{\partial x} + \frac{1}{2}\mu^2 x^2\frac{\partial^2 f}{\partial x^2}, \quad f \in C^2(\mathbb{R}^2),$$
(3.2)
which is based on
$$dY_t = \begin{pmatrix} 1 \\ rX_t - bX_t^{\theta+1} \end{pmatrix} dt + \begin{pmatrix} 0 \\ \mu X_t \end{pmatrix} dB_t.$$
(3.3)
Moreover, since $\frac{\partial g}{\partial s} = -\rho e^{-\rho s}(x-a)$, $\frac{\partial g}{\partial x} = e^{-\rho s}$ and $\frac{\partial^2 g}{\partial x^2} = 0$, we get
$$\mathcal{A}g = \frac{\partial g}{\partial s} + \bigl(rx - bx^{\theta+1}\bigr)\frac{\partial g}{\partial x} + \frac{1}{2}\mu^2 x^2\frac{\partial^2 g}{\partial x^2} = \bigl(r - \rho - bx^{\theta}\bigr)e^{-\rho s}x + \rho a e^{-\rho s}$$
(3.4)
for all $s > 0$, $x > 0$ [15]. In order to find the unknown value function $\Phi$ from (2.3) and the unknown boundary $x_0$, we consider
$$\mathcal{A}f(s,x) = 0, \quad (s,x) \in \mathbb{R}_+^2.$$
(3.5)
If we try a solution of (3.5) of the form
$$f(s,x) = e^{-\rho s}\phi(x)$$
(3.6)
and substitute (3.6) into (3.5), we obtain
$$-\rho\phi(x) + \bigl(rx - bx^{\theta+1}\bigr)\phi'(x) + \frac{1}{2}\mu^2 x^2\phi''(x) = 0, \quad x > 0.$$
(3.7)
The general solution $\phi$ of (3.7) is
$$\phi(x) = C_1 x^{\theta a_1}U\!\left(a_1, b_1, \frac{2b}{\mu^2\theta}x^{\theta}\right) + C_2 x^{\theta a_1}M\!\left(a_1, b_1, \frac{2b}{\mu^2\theta}x^{\theta}\right)$$
(3.8)
by setting
$$a_1 = \frac{\mu^2 - 2r + \sqrt{\mu^4 + (8\rho - 4r)\mu^2 + 4r^2}}{2\mu^2\theta}$$
(3.9)
and
$$b_1 = \frac{\mu^2\theta + \sqrt{\mu^4 + (8\rho - 4r)\mu^2 + 4r^2}}{\mu^2\theta},$$
(3.10)
where $C_1$, $C_2$ are arbitrary constants. Here $U(a_2, b_2, x)$ is the confluent hypergeometric function, whose integral representation is
$$U(a_2, b_2, x) = \frac{1}{\Gamma(a_2)}\int_0^{\infty} e^{-xt}\,t^{a_2-1}(1+t)^{b_2-a_2-1}\,dt$$
(3.11)

for $a_2 > 0$ and $b_2 > 1$ (see [7, 17, 18]). $M(a,b,x)$ is the Kummer hypergeometric function and $\Gamma$ denotes the gamma function.
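The step from (3.7) to (3.8) is not spelled out above; the following sketch (added for the reader's convenience) indicates why it holds: the substitution below reduces (3.7) to Kummer's confluent hypergeometric equation with the parameters (3.9)-(3.10).

```latex
% Sketch: reduction of (3.7) to Kummer's equation.
% Write phi as a power of x times a function of z = (2b/(mu^2 theta)) x^theta:
\[
  \phi(x) = x^{\theta a_1}\,\psi(z), \qquad z = \frac{2b}{\mu^{2}\theta}\,x^{\theta}.
\]
% The exponent gamma = theta a_1 is the positive root of
%   (1/2) mu^2 gamma (gamma - 1) + r gamma - rho = 0,
% which removes the lowest-order terms after substitution; the remaining terms give
\[
  z\,\psi''(z) + (b_1 - z)\,\psi'(z) - a_1\,\psi(z) = 0 ,
\]
% Kummer's equation, whose independent solutions are M(a_1, b_1, z) and U(a_1, b_1, z).
% Multiplying by x^{theta a_1} yields the two terms in (3.8).
```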

Since $x^{\theta a_1}M\!\left(a_1, b_1, \frac{2b}{\mu^2\theta}x^{\theta}\right) \to \infty$ as $x \to \infty$, while $\phi(x)$ must remain bounded, we must have $C_2 = 0$. Then we define the candidate $h(s,x): \mathbb{R}_+^2 \to \mathbb{R}$ for the optimal value function $\Phi$ in (2.3) by
$$h(s,x) = f(s,x) + w = \begin{cases} e^{-\rho s}\hat{f}(x) + w, & 0 < x < x_0, \\ e^{-\rho s}\hat{g}(x) + w, & x \ge x_0, \end{cases}$$
(3.12)
where
$$\hat{f}(x) = C_1 x^{\theta a_1}U\!\left(a_1, b_1, \frac{2b}{\mu^2\theta}x^{\theta}\right), \quad 0 < x < x_0,$$
and
$$\hat{g}(x) = x - a, \quad x \ge x_0.$$
We observe that the constant
$$C_1 = (x_0 - a)\left[x_0^{\theta a_1}\,U\!\left(a_1, b_1, \frac{2b}{\mu^2\theta}x_0^{\theta}\right)\right]^{-1}$$
(3.13)
is determined by

(1) the value matching condition [19, 20]
$$\hat{f}(x_0) = \hat{g}(x_0)$$
(3.14)

and

(2) the smooth pasting condition
$$\hat{f}'(x_0) = \hat{g}'(x_0).$$
(3.15)

In fact, $x_0^* = \frac{\theta a a_1}{\theta a_1 - 1}$ is shown to be the unique solution of (3.15) under the following assumptions, by Lemma 3.1.

We assume the following.

Assumption 1
$$\rho > r.$$
(3.16)

Assumption 2
$$K > \frac{\theta a_1 a}{\theta a_1 - 1}.$$
(3.17)
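As a short remark (a computation left implicit in the paper), Assumption 1 is exactly what makes the boundary appearing in Assumption 2 and in (3.24) well defined and larger than $a$:

```latex
% Why Assumption 1 matters for the boundary x_0^* = theta a_1 a / (theta a_1 - 1):
\[
  \theta a_1 > 1
  \;\Longleftrightarrow\;
  \sqrt{\mu^{4}+(8\rho-4r)\mu^{2}+4r^{2}} > \mu^{2}+2r
  \;\Longleftrightarrow\;
  8\rho\mu^{2} > 8r\mu^{2}
  \;\Longleftrightarrow\;
  \rho > r ,
\]
% so under Assumption 1 the denominator theta a_1 - 1 is positive and
\[
  x_0^{*} - a = \frac{\theta a_1 a}{\theta a_1 - 1} - a = \frac{a}{\theta a_1 - 1} > 0 ,
\]
% i.e. x_0^* > a; Assumption 2 then places x_0^* inside the interval (a, K).
```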

The following lemma provides an optimal stopping boundary.

Lemma 3.1 $x_0^* = \frac{\theta a a_1}{\theta a_1 - 1}$ is the maximum point of $h(s,x)$ given by (3.12) with respect to $x_0$, $0 < x_0 < K$, for fixed $s > 0$, $0 < x < K$.

Proof Setting $\frac{\partial h}{\partial x_0}(s,x) = 0$ for arbitrary $s > 0$, $0 < x < K$, we derive
$$C(s,x)\,\frac{(1 - \theta a_1)x_0 + \theta a_1 a}{x_0^{\theta a_1 + 1}\,U\!\left(a_1, b_1, \frac{2b}{\mu^2\theta}x_0^{\theta}\right)}\left(1 - \frac{L(y_0)}{R(x_0) + 1}\right) = 0$$
(3.18)
by setting $C(s,x) = e^{-\rho s}x^{\theta a_1}U\!\left(a_1, b_1, \frac{2b}{\mu^2\theta}x^{\theta}\right)$ and $y_0 = \frac{2b}{\mu^2\theta}x_0^{\theta}$, together with
$$L(y_0) = \frac{y_0\,U(a_1 + 1, b_1 + 1, y_0)}{U(a_1, b_1, y_0)}$$
(3.19)
and
$$R(x_0) = \frac{x_0}{\theta a_1(a - x_0)}.$$
(3.20)
Since $R(x_0)$ increases on the interval $[0, a)$ with $R(0) = 0$ and $R(a-) = +\infty$, and $R(x_0)$ is an increasing function on the interval $(a, +\infty)$ with $R(a+) = -\infty$ and $R(+\infty) = -(\theta a_1)^{-1}$, it remains to study $L(y_0)$; we claim that $L(y_0)$ is a decreasing function on $(0, \infty)$ with $L(0+) > 0$. In fact, we only need to check that $L$ decreases on $\mathbb{R}_+$ with $L(0+) > 0$ and $L(+\infty) = 1$. The limit $L(+\infty) = 1$ follows from the fact that $U(a, b, z) \sim z^{-a}$ as $z \to \infty$. To prove $L(0+) > 0$, we apply a change of variables in (3.11), which gives
$$U(a_2, b_2, y_0) = \frac{y_0^{1 - b_2}}{\Gamma(a_2)}\int_0^{\infty} e^{-t}\,t^{a_2 - 1}(t + y_0)^{b_2 - a_2 - 1}\,dt$$
(3.21)
and directly implies
$$\lim_{y_0 \to 0}L(y_0) = \frac{\Gamma(a_1)\Gamma(b_1)}{\Gamma(1 + a_1)\Gamma(b_1 - 1)} = \frac{b_1 - 1}{a_1}$$
(3.22)
for $b_1 > 1 + a_1 > 1$. Next, with the help of the integral representation (3.21), we observe that
$$L(y_0) = \frac{\int_0^{\infty} t(t + y_0)\,f_{y_0}(t)\,dt}{a_1\int_0^{\infty}(t + y_0)\,f_{y_0}(t)\,dt},$$
(3.23)

where $\int_0^{\infty}f_{y_0}(t)\,dt = \int_0^{\infty}A e^{-t}\,t^{a_1 - 1}(t + y_0)^{b_1 - a_1 - 2}\,dt = 1$, $t \ge 0$, with a normalizing constant $A$, for $b_1 > 1 + a_1 > 1$. Then, applying Jensen's inequality and using the fact that $b_1 > 1 + a_1 > 1$, we deduce $\frac{d}{dy_0}L(y_0) \le 0$, which gives the monotonicity of $L(y_0)$ on $(0, \infty)$ (a similar discussion can be found in [21]).
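A quick numerical look at $L$ is consistent with the limits discussed above; the sketch below evaluates $L$ with scipy.special.hyperu for one illustrative parameter set (the values of r, mu, theta, rho are assumptions of the editor, not from the paper).

```python
import numpy as np
from scipy.special import hyperu

# Numerical look at L(y) = y*U(a1+1, b1+1, y) / U(a1, b1, y) for one
# illustrative parameter set (values below are assumptions, not from the paper).
r, mu, theta, rho = 0.3, 0.2, 2.0, 0.5
disc = np.sqrt(mu**4 + (8 * rho - 4 * r) * mu**2 + 4 * r**2)
a1 = (mu**2 - 2 * r + disc) / (2 * mu**2 * theta)   # (3.9)
b1 = (mu**2 * theta + disc) / (mu**2 * theta)        # (3.10)

L = lambda y: y * hyperu(a1 + 1, b1 + 1, y) / hyperu(a1, b1, y)

print("(b1 - 1)/a1 =", (b1 - 1) / a1)                # limit of L at 0+, cf. (3.22)
for y in (1e-3, 1e-2, 0.1, 1.0, 10.0, 100.0, 1000.0):
    print(f"L({y:g}) = {L(y):.6f}")                  # should decrease towards 1
```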

Then we conclude the following:

(1) The equation $L(y_0) = R(x_0) + 1$, arising from the last factor of (3.18), has a unique solution on $(0, a)$; moreover, $P(x_0) := L(y_0) - R(x_0) - 1 > 0$ on $(a, K)$ under Assumption 2.

(2) The maximum is attained at
$$x_0^* = \frac{\theta a a_1}{\theta a_1 - 1} > 0$$
(3.24)

under (3.9) and Assumption 1, and it lies in the interval $(a, K)$. The proof is completed.

 □

Now, let us give the following lemma for our main Theorem 3.3.

Lemma 3.2 Under Assumptions 1 and 2, the function $h(s,x): \mathbb{R}_+^2 \to \mathbb{R}$ satisfies the following properties (1)-(3):

(1) $h(s,x) \ge g(s,x)$, with $g$ given by (2.4), for all $x > 0$, $s > 0$.

(2) For $x \ge \frac{\theta a a_1}{\theta a_1 - 1}$, $s > 0$,
$$\mathcal{A}h(s,x) = \frac{\partial h}{\partial s} + \bigl(rx - bx^{\theta+1}\bigr)\frac{\partial h}{\partial x} + \frac{1}{2}\mu^2 x^2\frac{\partial^2 h}{\partial x^2} \le 0.$$
(3.25)

(3) $\mathcal{A}h = 0$ for $0 < x < \frac{\theta a a_1}{\theta a_1 - 1}$, $s > 0$.
Proof It is clear that $\mathcal{A}h = 0$ by construction for $0 < x < \frac{\theta a a_1}{\theta a_1 - 1}$, $s > 0$. It remains to check that

(1) $h(s,x) > g(s,x)$ for $0 < x < \frac{\theta a a_1}{\theta a_1 - 1}$, i.e., $h(s,x) > e^{-\rho s}(x - a) + w$ for $0 < x < \frac{\theta a a_1}{\theta a_1 - 1}$, and

(2) $\mathcal{A}h(s,x) = \mathcal{A}g(s,x) < 0$ for $x \ge \frac{\theta a a_1}{\theta a_1 - 1}$.

Both are verified by routine calculation under Assumptions 1 and 2.

 □
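The "routine calculation" above can also be spot-checked numerically; the sketch below evaluates $h - g$ on the continuation region and $\mathcal{A}g$ on the stopping region for one illustrative parameter set (all values are assumptions of the editor, chosen so that Assumptions 1 and 2 hold).

```python
import numpy as np
from scipy.special import hyperu

# Numerical spot-check of Lemma 3.2 (1)-(2) for one illustrative parameter set.
r, K, mu, theta, rho, a, w, s = 0.3, 10.0, 0.2, 2.0, 0.5, 2.0, 1.0, 1.0
b = r / K
disc = np.sqrt(mu**4 + (8 * rho - 4 * r) * mu**2 + 4 * r**2)
a1 = (mu**2 - 2 * r + disc) / (2 * mu**2 * theta)        # (3.9)
b1 = (mu**2 * theta + disc) / (mu**2 * theta)             # (3.10)
kappa = 2 * b / (mu**2 * theta)
x_star = theta * a1 * a / (theta * a1 - 1)                # boundary (3.24)
assert rho > r and K > x_star                             # Assumptions 1 and 2
C1 = (x_star - a) / (x_star**(theta * a1) * hyperu(a1, b1, kappa * x_star**theta))  # (3.13)

def h(x):
    """Candidate (3.12) with the boundary x_star, for the fixed s above."""
    if x < x_star:
        return np.exp(-rho * s) * C1 * x**(theta * a1) * hyperu(a1, b1, kappa * x**theta) + w
    return np.exp(-rho * s) * (x - a) + w

g = lambda x: np.exp(-rho * s) * (x - a) + w                                # reward (2.4)
Ag = lambda x: np.exp(-rho * s) * ((r - rho - b * x**theta) * x + rho * a)  # (3.4)

print("min of h - g on [0.1, x0*]:", min(h(x) - g(x) for x in np.linspace(0.1, x_star, 200)))
print("max of A g on [x0*, K]    :", max(Ag(x) for x in np.linspace(x_star, K, 200)))
```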

Let us give our main theorem.

Theorem 3.3 Under Assumptions 1 and 2, setting $y = (s,x)$ and $Y_t = (t, X_t)^T$, the function $h(y): \mathbb{R}_+^2 \to \mathbb{R}$ defined by
$$h(y) = \begin{cases} e^{-\rho s}\,\dfrac{a}{\theta a_1 - 1}\left(\dfrac{(\theta a_1 - 1)x}{\theta a a_1}\right)^{\theta a_1}\dfrac{U\!\left(a_1, b_1, \frac{2b}{\mu^2\theta}x^{\theta}\right)}{U\!\left(a_1, b_1, \frac{2b}{\mu^2\theta}\left(\frac{\theta a a_1}{\theta a_1 - 1}\right)^{\theta}\right)} + w, & 0 < x < \dfrac{\theta a a_1}{\theta a_1 - 1}, \\ e^{-\rho s}(x - a) + w, & x \ge \dfrac{\theta a a_1}{\theta a_1 - 1}, \end{cases}$$
is the optimal value function $\Phi$ given by (2.3). Moreover, the optimal stopping region $F$ and the optimal stopping time $\tau^*$ are given by
$$F = \bigl\{y \in \mathbb{R}_+^2 : h(y) = g(y)\bigr\} = \Bigl\{(s,x) : s > 0,\ \tfrac{\theta a a_1}{\theta a_1 - 1} \le x < \infty\Bigr\}$$
(3.26)
and
$$\tau^* := \tau_F = \inf\bigl\{t > 0 : Y_t^{y} \in F\bigr\} < +\infty.$$
(3.27)
Proof Let $\tau$ be any stopping time with $E^x[\tau] < \infty$ for the process $\{Y_t,\ t > 0\}$, and let $t \in \mathbb{R}_+$; then by Dynkin's formula [15],
$$E^y\bigl[h(Y_{\tau\wedge t})\bigr] = h(y) + E^y\!\left[\int_0^{\tau\wedge t}\mathcal{A}h(Y_u)\,du\right].$$
(3.28)
Therefore, by (1)-(3) in Lemma 3.2, we get
$$h(y) \ge E^y\bigl[g(Y_{\tau\wedge t})\bigr].$$
(3.29)
Taking the limit inferior as $t \to \infty$ on both sides of (3.29), we have, by Fatou's lemma [22],
$$h(y) \ge E^y\bigl[g(Y_{\tau})\mathbf{1}_{\{\tau < \infty\}}\bigr].$$
(3.30)
Since $\tau$ is arbitrary with $E^x[\tau] < \infty$, we conclude that
$$h(y) \ge \Phi(y), \quad y \in \mathbb{R}_+^2.$$
(3.31)
We proceed to prove $h(y) \le \Phi(y)$.

(a) If $y \in F$, then $h(y) = g(y) \le \Phi(y)$. So we have $h(y) = \Phi(y)$ by (3.31), and $\tau^* = 0$ is optimal for $y \in F$.
(b) Next, suppose $y \notin F$. By Dynkin's formula [15] and the fact that $\tau_F < \infty$ a.s. $Q^y$ for $y \in \mathbb{R}_+^2$, we have
$$h(y) = E^y\!\left[-\int_0^{\tau_F\wedge t}\mathcal{A}h(Y_s)\,ds + h(Y_{\tau_F\wedge t})\right].$$
(3.32)

So, by (1) and (3) in Lemma 3.2, the fact that $\tau_F < \infty$ a.s. $Q^y$ for $y \in \mathbb{R}_+^2$, and $h \in C^2(\mathbb{R}_+^2)$ [18], we get
$$h(y) = \lim_{t\to\infty}E^y\!\left[-\int_0^{\tau_F\wedge t}\mathcal{A}h(Y_s)\,ds + h(Y_{\tau_F\wedge t})\right] = E^y\bigl[h(Y_{\tau_F})\bigr] = E^y\bigl[g(Y_{\tau_F})\bigr] \le \Phi(y).$$
(3.33)
Combining the two cases (a), (b) with (3.31), we obtain
$$h(y) \le \Phi(y) \le h(y).$$
(3.34)

So $h(y) = \Phi(y)$, and $\tau^* = \tau_F$ is optimal for $y \notin F$.

We conclude that $h(y) = \Phi(y)$ for all $y \in \mathbb{R}_+^2$, and the optimal stopping time $\tau^*$ is given by
$$\tau^* = \begin{cases} 0, & y \in F, \\ \tau_F, & y \notin F. \end{cases}$$
(3.35)

 □
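As a usage illustration, the sketch below assembles the closed-form ingredients of Theorem 3.3 (the parameters (3.9)-(3.10), the boundary (3.24) and the constant (3.13)) and reads off the "harvest or wait" decision for a few population levels. All numerical values are illustrative assumptions of the editor, not quantities from the paper.

```python
import numpy as np
from scipy.special import hyperu

# Applying Theorem 3.3: compute the stopping boundary (3.24) and the constant
# C_1 from (3.13), then read off the decision "stop or continue" for a few
# population levels.  All parameter values are illustrative assumptions.
r, K, mu, theta, rho, a = 0.3, 10.0, 0.2, 2.0, 0.5, 2.0
b = r / K
disc = np.sqrt(mu**4 + (8 * rho - 4 * r) * mu**2 + 4 * r**2)
a1 = (mu**2 - 2 * r + disc) / (2 * mu**2 * theta)        # (3.9)
b1 = (mu**2 * theta + disc) / (mu**2 * theta)             # (3.10)
kappa = 2 * b / (mu**2 * theta)
x_star = theta * a1 * a / (theta * a1 - 1)                # boundary (3.24)
C1 = (x_star - a) / (x_star**(theta * a1) * hyperu(a1, b1, kappa * x_star**theta))  # (3.13)

print(f"stopping boundary x0* = {x_star:.4f},  C1 = {C1:.4f}")
for x in (2.5, 4.0, x_star, 6.0, 8.0):
    decision = "stop (harvest)" if x >= x_star else "continue"
    print(f"population x = {x:6.3f}  ->  {decision}")
```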

4 Conclusion and further research

This paper formulates the optimal harvesting problem for the stochastic Gilpin-Ayala population model as an optimal stopping problem, which is, to our knowledge, a first attempt. We obtain the explicit optimal value function and the optimal stopping time by the smooth pasting technique and prove the optimality of the result. This approach may open a new way to treat optimal harvesting problems in the real world. In future work, optimal harvesting problems for stochastic predator-prey models and related stochastic models will be considered.

Declarations

Acknowledgements

We are grateful to Prof. Wang Ke for a number of helpful suggestions for improving the article. The second author was supported by the Natural Science Foundation of the Education Department of Heilongjiang Province (Grant No. 12521116).

Authors’ Affiliations

(1)
Department of Mathematics, Harbin Institute of Technology
(2)
School of Applied Science, Harbin University of Science and Technology

References

  1. Gilpin ME, Ayala FJ: Global models of growth and competition. Proc. Natl. Acad. Sci. USA 1973, 70: 3590–3593. 10.1073/pnas.70.12.3590
  2. Lian B, Hu S: Stochastic delay Gilpin-Ayala competition models. Stoch. Dyn. 2006, 6: 561–576. 10.1142/S0219493706001888
  3. Lian B, Hu S: Asymptotic behaviour of the stochastic Gilpin-Ayala competition models. J. Math. Anal. Appl. 2008, 339: 419–428. 10.1016/j.jmaa.2007.06.058
  4. Liu M, Wang K: Stationary distribution, ergodicity and extinction of a stochastic generalized logistic system. Appl. Math. Lett. 2012, 25: 1980–1985. 10.1016/j.aml.2012.03.015
  5. Liu M, Wang K: Persistence and extinction in stochastic non-autonomous logistic systems. J. Math. Anal. Appl. 2011, 375: 443–457. 10.1016/j.jmaa.2010.09.058
  6. Liu M, Wang K: Asymptotic properties and simulations of a stochastic logistic model under regime switching. Math. Comput. Model. 2011, 54: 2139–2154. 10.1016/j.mcm.2011.05.023
  7. Abramowitz M, Stegun IA: Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Wiley, New York; 1972. National Bureau of Standards.
  8. Clark CW: Mathematical Bioeconomics: The Optimal Management of Renewable Resources. Wiley, New York; 1976.
  9. Clark CW: Mathematical Bioeconomics: The Optimal Management of Renewable Resources. 2nd edition. Wiley, New York; 1990.
  10. Fan M, Wang K: Optimal harvesting policy for single population with periodic coefficients. Math. Biosci. 1998, 152: 165–177. 10.1016/S0025-5564(98)10024-X
  11. Zhang X, Shuai Z, Wang K: Optimal impulsive harvesting policy for single population. Nonlinear Anal., Real World Appl. 2003, 4: 639–651. 10.1016/S1468-1218(02)00084-6
  12. Alvarez LHR, Shepp LA: Optimal harvesting of stochastically fluctuating populations. Math. Biosci. 1998, 37: 155–177.
  13. Alvarez LHR: Optimal harvesting under stochastic fluctuations and critical depensation. Math. Biosci. 1998, 152: 63–85. 10.1016/S0025-5564(98)10018-4
  14. Lungu EM, Øksendal B: Optimal harvesting from a population in a stochastic crowded environment. Math. Biosci. 1997, 145: 47–75. 10.1016/S0025-5564(97)00029-1
  15. Øksendal B: Stochastic Differential Equations. 6th edition. Springer, New York; 2005.
  16. Wang K: Stochastic Biomathematics Models. Science Press, Beijing; 2010.
  17. Muller KE: Computing the confluent hypergeometric function, M(a, b, x). Numer. Math. 2001, 90: 179–196. 10.1007/s002110100285
  18. Olver FWJ, Lozier DW, Boisvert RF, Clark CW: NIST Handbook of Mathematical Functions. Cambridge University Press, Cambridge; 2010.
  19. Dixit A: The Art of Smooth Pasting. Harwood Academic, Switzerland; 1993.
  20. Dixit A, Pindyck R: Investment under Uncertainty. Princeton University Press, New Jersey; 1994.
  21. Gapeev PV, Reiss M: An optimal stopping problem in a diffusion-type model with delay. Stat. Probab. Lett. 2006, 76: 601–608. 10.1016/j.spl.2005.09.006
  22. Halmos P: Measure Theory. Springer, Berlin; 1974.

Copyright

© Ai and Sun; licensee Springer 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.