
Sufficient and necessary conditions on the existence of stationary distribution and extinction for stochastic generalized logistic system

Advances in Difference Equations 2015, 2015:10

https://doi.org/10.1186/s13662-014-0345-y

Received: 27 September 2014

Accepted: 25 December 2014

Published: 16 January 2015

Abstract

In this paper, we consider the existence of a stationary distribution and extinction for a stochastic generalized logistic system, and obtain sufficient and necessary conditions for both. (a) The system has a unique stationary distribution if and only if the noise intensity is less than twice the intrinsic growth rate; the probability density function is obtained by solving the stationary Fokker-Planck equation. (b) The system becomes extinct if and only if the noise intensity is no less than twice the intrinsic growth rate, and the exponential extinction rate is estimated precisely in terms of two parameters of the system. A new perspective is provided to explain the recurrence phenomenon observed in practice. Nontrivial examples are provided to illustrate our results.

Keywords

stochastic generalized logistic system; extinction; stationary Fokker-Planck equation; stationary distribution; Itô’s formula

1 Introduction

In the past few decades, population systems have received a great deal of research attention since they have been successfully used in a variety of application fields, including biology, epidemiology, economics, and neural networks (see [1-7]). Population systems are always subject to environmental noise. It is therefore necessary to reveal how the noise affects population systems. Recently, population dynamics under environmental noise have been extensively considered by many authors (see [8-11]). It is well known that when the noise intensity is sufficiently large, the population will become extinct, while it will remain stochastically permanent when the noise intensity is small.

In fact, if we make a great number of records to investigate the dynamic behavior of a permanent population system, we may find that a single record may fluctuate around a fixed point even if the number of records is large. In order to illustrate such biological phenomena clearly, more and more attention has been paid to the existence of stationary distribution and positive recurrence of population systems in recent years (see [12-15]). In this paper, we will concentrate on the stationary distribution and extinction of a stochastic generalized logistic system. The obtained results provide a new perspective to explain such biological phenomena (see Remark 3).

Consider the stochastic generalized logistic system (Gilpin-Ayala) with the following form:
$$\begin{aligned} dx(t)=x\bigl(r-ax^{\alpha}\bigr)\,dt+\sigma x \,dB(t), \end{aligned}$$
(1)
where r is the intrinsic growth rate, \(\sigma^{2}>0\) is the noise intensity, and \(B(t)\) is the one-dimensional Brownian motion. Throughout this paper, we impose the condition:
$$\begin{aligned} r>0,\qquad a>0, \qquad\alpha>0. \end{aligned}$$
(2)

The logistic system is one of the most famous population systems due to its universal existence and importance. More recently, the asymptotic behavior of stochastic logistic systems has received a lot of attention (see [16-20]). Jiang et al. [16] showed the stability in time average and stochastic permanence of a non-autonomous logistic equation with random perturbation. Li et al. [18] discussed the stochastic logistic population under regime switching, and obtained sufficient and necessary conditions for stochastic permanence and extinction under some assumptions. Liu and Wang [20] and Mao [15] studied the stationary distribution of more general stochastic population systems than system (1); the results in [20] and [15] show that when \(0<\alpha\leq1\), system (1) has a stationary distribution. Then some questions arise naturally: Is there a stationary distribution for system (1) in the case \(\alpha>1\)? If yes, can we compute the probability density function of the stationary distribution? And can we compute its mean or variance?

In addition, the existing literature (see [14, 15, 18]) shows clearly that if the noise intensity is more than twice the intrinsic growth rate, the population will become extinct exponentially, whereas it will remain stochastically permanent or have a stationary distribution when the noise intensity is less than twice the intrinsic growth rate. Then one interesting question is: What will happen if the noise intensity equals twice the intrinsic growth rate?

However, to the best of the author’s knowledge, few studies have attempted to investigate the density function of the stationary distribution and the asymptotic behavior under the assumption that the intrinsic growth rate equals half of the noise intensity. In this paper, we are concerned with these topics. The primary contributions of this paper are as follows:
  • The probability density function of the stationary distribution is obtained by solving the stationary Fokker-Planck equation.

  • By using some novel techniques, we point out that system (1) will also be extinct when the noise intensity equals twice the intrinsic growth rate.

  • Sufficient and necessary conditions for the existence of stationary distribution and extinction are established.

The organization of the paper is as follows. Section 2 describes some preliminaries. The main results are stated in Sections 3 and 4, where we show that system (1) either has a stationary distribution or becomes extinct. The probability density function, mean, and variance of the stationary distribution are obtained in Section 3. The exponential extinction rate is given precisely in Section 4. In Section 5, the sufficient and necessary conditions and some important remarks are stated, and three numerical examples are given to illustrate the effectiveness of our results.

2 Notation

Throughout this paper, unless otherwise specified, let \((\Omega, \mathcal{F}, \{\mathcal{F}_{t}\}_{t\geq0}, \mathbb{P})\) be a complete probability space with a filtration \(\{\mathcal{F}_{t}\}_{t\geq0}\) satisfying the usual conditions (i.e. it is increasing and right continuous, while \(\mathcal{F}_{0}\) contains all \(\mathbb{P}\)-null sets). The gamma function \(\Gamma(s)\) is defined for any real number \(s>0\) via the convergent improper integral \(\Gamma(s)=\int^{\infty }_{0}t^{s-1}\exp(-t)\,dt\).

In the same way as Mao et al. [8] did, we can also show the following result on the existence of a global positive solution.

Lemma 2.1

Assume that condition (2) holds. Then for any given initial value \(x_{0}\in R_{+}\), there is a unique solution \(x(t,x_{0})\) to system (1) and the solution will remain in \(R_{+}\) with probability 1, namely
$$\mathbb{P}\bigl\{ x(t,x_{0}) \in R_{+}, \forall t \geq 0 \bigr\} = 1, $$
for any \(x_{0}\in R_{+}\).

Lemma 2.2

Let condition (2) hold. Then for any \(p>0\), there exists a constant \(K_{p}\) such that \(\sup_{0\leq t<\infty} E x(t)^{p}< K_{p}\).

The proof is similar to that of Liu et al. [19] and is omitted here.

3 Stationary distribution and its probability density function

The main aim of this section is to study the existence of a unique stationary distribution of system (1). We first recall a well-known lemma (see Hasminskii [21], pp.106-125). Let \(X(t)\) be a homogeneous Markov process in \(E^{n}\subset R^{n}\) described by the following stochastic differential equation:
$$\begin{aligned} dX(t)=b(X)\,dt +\sum _{m=1}^{d} \sigma_{m}(X)\,dB_{m}(t). \end{aligned}$$
(3)
The diffusion matrix is \(A(x)= (a_{ij}(x) )\), \(a_{ij}(x)=\sum _{m=1}^{d}\sigma^{i}_{m}(x)\sigma^{j}_{m}(x)\).

Lemma 3.1

[21]

We assume that there is a bounded open subset \(G \subset E^{n}\) with a regular (i.e. smooth) boundary such that its closure \(\bar{G}\subset E^{n}\), and
  1. (i)

    in the domain G and some neighborhood thereof, the smallest eigenvalue of the diffusion matrix \(A(x)\) is bounded away from zero;

     
  2. (ii)

    if \(x\in E^{n}\setminus G\), the mean time τ at which a path issuing from x reaches the set G is finite, and \(\sup_{x\in K} E_{x}\tau<+\infty\) for every compact subset \(K \subset E^{n}\); throughout this paper we set \(\inf\emptyset=\infty\).

     
We then have the following assertions:
  1. (1)
    The Markov process \(X(t)\) has a stationary distribution \(\mu(\cdot)\) with density in \(E^{n}\). Let \(f(x)\) be a function integrable with respect to the measure \(\mu(\cdot)\). Then
    $$\begin{aligned} \mathbb{P}\biggl\{ \lim _{t\rightarrow\infty}\frac{1}{t}\int ^{t}_{0}f\bigl(x(s)\bigr)\,ds=\int _{E^{n}} f(y) \mu(dy) \biggr\} =1. \end{aligned}$$
    (4)
     
  2. (2)
    The probability density function \(\varphi(y)\) of \(\mu(\cdot)\) is the unique bounded solution to the stationary Fokker-Planck equation
    $$\begin{aligned} \frac{1}{2}\sum ^{d}_{i,j=1} \frac{\partial^{2}}{\partial y_{i} \partial y_{j}}\bigl(a_{ij}(y)\varphi\bigr)-\sum ^{d}_{i=1} \frac{\partial }{\partial y_{i}}\bigl(b_{i}(y)\varphi\bigr)=0, \end{aligned}$$
    (5)
    satisfying the additional condition \(\int _{E^{n}}\varphi(y)\,dy=1\).
     

Theorem 3.2

Let condition (2) and \(\sigma ^{2}<2r\) hold. We then have the following assertions:
  1. (1)

    System (1) has a unique stationary distribution denoted by \(\mu(\cdot)\).

     
  2. (2)
    The probability density function of \(\mu(\cdot)\) denoted by \(\varphi(y)\) has the following form:
    $$\begin{aligned} \varphi(y)=\frac{\alpha}{\Gamma(\frac{2r-\sigma^{2}}{\alpha\sigma ^{2}})}\biggl(\frac{2a}{\alpha\sigma^{2}} \biggr)^{\frac{2r-\sigma^{2}}{\alpha\sigma ^{2}}}y^{\frac{2r}{\sigma^{2}}-2}\exp\biggl(-\frac{2a}{\alpha\sigma ^{2}}y^{\alpha} \biggr)I_{(0, \infty)}(y), \end{aligned}$$
    (6)
    where \(I_{(0, \infty)}(y)\) is the indicator function for the set \((0, \infty)\). Its mean and variance are \((\frac{\alpha\sigma^{2}}{2a})^{\frac{1}{\alpha}}\frac{\Gamma(\frac {2r}{\alpha\sigma^{2}})}{\Gamma(\frac{2r-\sigma^{2}}{\alpha\sigma ^{2}})}\) and \((\frac{\alpha\sigma^{2}}{2a})^{\frac{2}{\alpha}} (\frac {\Gamma(\frac{2r+\sigma^{2}}{\alpha\sigma^{2}})}{\Gamma(\frac{2r-\sigma ^{2}}{\alpha\sigma^{2}})}-(\frac{\Gamma(\frac{2r}{\alpha\sigma ^{2}})}{\Gamma(\frac{2r-\sigma^{2}}{\alpha\sigma^{2}})})^{2})\), respectively.
     

Proof

The proof is composed of two parts. The first part is to prove the existence of the stationary distribution. The second part is to obtain the probability density function by solving the stationary Fokker-Planck equation. Let \(x(t) = x(t; x_{0})\) for simplicity.

Let us now show the existence of a stationary distribution. To validate conditions (i) and (ii), it suffices to prove that there exist some neighborhood U and a nonnegative \(C^{2}\)-function V such that \(\sigma^{2} x^{2}\) is uniformly elliptic in U and \(LV\leq-1\) for any \(x \in R_{+} \setminus U\) (for details refer to [15, p.400]). By the condition \(\sigma^{2}<2r\), we can find a number \(\eta>0\) such that \(\eta\in(0, \frac{2r}{\sigma^{2}}-1)\), \(\eta<\alpha\). Applying Itô’s formula to \(V(x)=x + x^{-\eta}\) we have
$$\begin{aligned} LV(x)=\biggl(rx+a\eta x^{\alpha-\eta}-\frac{a}{2}x^{\alpha+1}\biggr)-\frac{a}{2}x^{\alpha+1}-\biggl(r\eta-\frac{\eta(\eta+1)\sigma^{2}}{2}\biggr)x^{-\eta}. \end{aligned}$$
Since \(a>0\), there exists a constant \(K>0\) such that \(\sup_{0\leq x<\infty}[rx+a\eta x^{\alpha-\eta}-\frac{a}{2}x^{\alpha+1}]\leq K\). This implies
$$\begin{aligned} LV(x)\leq K-\frac{a}{2}x^{\alpha+1}-\biggl(r\eta-\frac{\eta(\eta+1)\sigma^{2}}{2}\biggr)x^{-\eta}. \end{aligned}$$
Note from \(r\eta-\frac{\eta(\eta+1)\sigma^{2}}{2}>0\) that there is a sufficiently large N, such that
$$\begin{aligned} LV(x)\leq-1,\quad x\in R_{+}\setminus G_{N};\qquad \inf_{x\in G_{N}}\lambda_{\min}\bigl(\sigma^{2}x^{2}\bigr)=\frac{\sigma^{2}}{N^{2}}>0, \end{aligned}$$
where \(G_{N}=\{x: \frac{1}{N}< x<N\}\subset R_{+}\). This immediately implies conditions (i) and (ii) in Lemma 3.1. Therefore, system (1) has a stationary distribution \(\mu(\cdot)\).
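For concreteness, the drift condition \(LV(x)\leq-1\) outside \(G_{N}\) can be checked numerically. The sketch below (a sanity check, not part of the proof) uses the hypothetical parameter choices \(r=0.5\), \(a=1\), \(\alpha=2\), \(\sigma=0.5\) (so \(\sigma^{2}<2r\)) together with \(\eta=1\), and evaluates \(LV\) at sampled points outside \(G_{10}\):

```python
# Sanity check of the drift condition LV(x) <= -1 outside G_N = (1/N, N).
# The parameters r, a, alpha, sigma and the exponent eta = 1 are illustrative
# choices satisfying sigma^2 < 2r, eta < 2r/sigma^2 - 1, and eta < alpha.
r, a, alpha, sigma, eta = 0.5, 1.0, 2.0, 0.5, 1.0

def LV(x):
    """Generator of system (1) applied to V(x) = x + x**(-eta)."""
    drift = x * (r - a * x**alpha)             # b(x) = x(r - a x^alpha)
    Vp = 1.0 - eta * x**(-eta - 1.0)           # V'(x)
    Vpp = eta * (eta + 1.0) * x**(-eta - 2.0)  # V''(x)
    return Vp * drift + 0.5 * Vpp * sigma**2 * x**2

N = 10.0
outside = [0.001 * k for k in range(1, 101)] + [10.0 + k for k in range(91)]
assert all(LV(x) <= -1.0 for x in outside)
print("LV(x) <= -1 holds at all sampled points outside G_10")
```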
Now, we aim to prove assertion (2). Since system (1) has a unique positive solution, the stationary distribution \(\mu(\cdot)\) is supported on \(R_{+}\). By virtue of Lemma 3.1, the probability density function \(\varphi(y)\) satisfies the following stationary Fokker-Planck equation:
$$\begin{aligned} \frac{1}{2}\frac{\partial^{2}}{\partial y^{2}}\bigl(\sigma^{2} y^{2}\varphi\bigr)-\frac{\partial}{\partial y}\bigl(y\bigl(r-ay^{\alpha} \bigr)\varphi\bigr)=0,\quad y>0, \end{aligned}$$
(7)
with the normalization condition \(\int_{0}^{\infty}\varphi(y)\,dy=1\). Using the integrating factor
$$\exp\biggl(-\int\frac{2y(r-ay^{\alpha})}{\sigma^{2}y^{2}}\,dy\biggr), $$
the solution to (7) can be expressed in the form of (6). Now, we proceed to compute the mean and variance of the stationary distribution. For the readers’ convenience, some notations are given as follows:
$$\mu_{p}=\int^{\infty}_{0}y^{p} \varphi(y)\,dy,\qquad \Delta=\mu_{2}-\mu^{2}_{1}. $$
It is easy to observe that \(\mu_{1}\) and Δ are just the mean and variance of the stationary distribution, respectively. Simple computations show that \(\mu_{1}=(\frac{\alpha\sigma^{2}}{2a})^{\frac{1}{\alpha}}\frac{\Gamma (\frac{2r}{\alpha\sigma^{2}})}{\Gamma(\frac{2r-\sigma^{2}}{\alpha\sigma ^{2}})}\), \(\mu_{2}=(\frac{\alpha\sigma^{2}}{2a})^{\frac{2}{\alpha}}\frac{\Gamma (\frac{2r+\sigma^{2}}{\alpha\sigma^{2}})}{\Gamma(\frac{2r-\sigma ^{2}}{\alpha\sigma^{2}})}\). This implies
$$\begin{aligned} \mu_{1}=\biggl(\frac{\alpha\sigma^{2}}{2a}\biggr)^{\frac{1}{\alpha}} \frac{\Gamma (\frac{2r}{\alpha\sigma^{2}})}{\Gamma(\frac{2r-\sigma^{2}}{\alpha\sigma ^{2}})},\qquad\Delta=\biggl(\frac{\alpha\sigma^{2}}{2a}\biggr)^{\frac{2}{\alpha}} \biggl( \frac{\Gamma(\frac{2r+\sigma^{2}}{\alpha\sigma^{2}})}{\Gamma(\frac {2r-\sigma^{2}}{\alpha\sigma^{2}})}-\biggl(\frac{\Gamma(\frac{2r}{\alpha\sigma ^{2}})}{\Gamma(\frac{2r-\sigma^{2}}{\alpha\sigma^{2}})}\biggr)^{2} \biggr). \end{aligned}$$
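The simple computations above can be recorded explicitly: substituting \(t=\frac{2a}{\alpha\sigma^{2}}y^{\alpha}\) in the defining integral reduces every moment of \(\varphi\) to a gamma integral,

```latex
\mu_{p}=\int^{\infty}_{0}y^{p}\varphi(y)\,dy
\overset{t=\frac{2a}{\alpha\sigma^{2}}y^{\alpha}}{=}
\biggl(\frac{\alpha\sigma^{2}}{2a}\biggr)^{\frac{p}{\alpha}}
\frac{\Gamma\bigl(\frac{2r-\sigma^{2}+p\sigma^{2}}{\alpha\sigma^{2}}\bigr)}
     {\Gamma\bigl(\frac{2r-\sigma^{2}}{\alpha\sigma^{2}}\bigr)},
\qquad p>0,
```

and taking \(p=1\) and \(p=2\) recovers \(\mu_{1}\) and \(\mu_{2}\).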

The proof is completed. □
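As a numerical cross-check (an illustration, not part of the proof), one can verify by quadrature that the density (6) integrates to one and reproduces the gamma-function mean and variance of Theorem 3.2; the parameters below are hypothetical choices satisfying \(\sigma^{2}<2r\):

```python
from math import gamma, exp

# Quadrature check that the density (6) is normalized and reproduces the
# closed-form mean and variance. Parameters r, a, alpha, sigma^2 are
# illustrative choices with sigma^2 < 2r.
r, a, alpha, sigma2 = 0.5, 1.0, 2.0, 0.25
s = (2 * r - sigma2) / (alpha * sigma2)  # argument of the Gamma function in (6)
c = 2 * a / (alpha * sigma2)             # rate in the exponential factor

def phi(y):
    return alpha / gamma(s) * c**s * y**(2 * r / sigma2 - 2) * exp(-c * y**alpha)

h, m0, m1, m2 = 1e-4, 0.0, 0.0, 0.0
for k in range(1, 50001):                # Riemann sum on (0, 5]; tail is negligible
    y = h * k
    p = phi(y)
    m0 += h * p
    m1 += h * y * p
    m2 += h * y * y * p

mean = (alpha * sigma2 / (2 * a))**(1 / alpha) * gamma(2 * r / (alpha * sigma2)) / gamma(s)
var = (alpha * sigma2 / (2 * a))**(2 / alpha) * (
    gamma((2 * r + sigma2) / (alpha * sigma2)) / gamma(s)
    - (gamma(2 * r / (alpha * sigma2)) / gamma(s))**2)

assert abs(m0 - 1.0) < 1e-3
assert abs(m1 - mean) < 1e-3
assert abs(m2 - m1**2 - var) < 1e-3
print("normalization, mean and variance of (6) confirmed numerically")
```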

Remark 1

Note that for \(\alpha=1\) system (1) becomes the classic logistic system (see [15]). The probability density function then has the following form:
$$\begin{aligned} \varphi(y)=\frac{1}{\Gamma(\frac{2r-\sigma^{2}}{\sigma^{2}})}\biggl(\frac {2a}{\sigma^{2}}\biggr)^{\frac{2r-\sigma^{2}}{\sigma^{2}}}y^{\frac{2r}{\sigma ^{2}}-2} \exp\biggl(-\frac{2a}{\sigma^{2}}y\biggr)I_{(0,\infty)}(y). \end{aligned}$$
It is easy to observe that the stationary distribution \(\mu(\cdot)\) obeys the gamma distribution in this case. The mean and variance become \(\mu_{1}=\frac{2r-\sigma^{2}}{2a}\), \(\Delta=\frac{\sigma ^{2}(2r-\sigma^{2})}{4a^{2}}\). In this case, our result on mean and variance coincides with the result in Mao [15, p.403]. It is worth noting that we provide a more detailed description of the stationary distribution than that by Mao [15].
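The reduction to the gamma-distribution values can be checked numerically: by \(\Gamma(z+1)=z\Gamma(z)\), the general gamma-ratio formulas of Theorem 3.2 collapse, for \(\alpha=1\), to \(\mu_{1}=\frac{2r-\sigma^{2}}{2a}\) and \(\Delta=\frac{\sigma^{2}(2r-\sigma^{2})}{4a^{2}}\). The parameter sets below are hypothetical choices with \(\sigma^{2}<2r\):

```python
from math import gamma, isclose

def stationary_mean_var(r, a, sigma2, alpha=1.0):
    """Mean and variance of the stationary distribution per Theorem 3.2."""
    s = (2 * r - sigma2) / (alpha * sigma2)
    scale = (alpha * sigma2 / (2 * a))**(1 / alpha)
    m1 = scale * gamma((2 * r) / (alpha * sigma2)) / gamma(s)
    m2 = scale**2 * gamma((2 * r + sigma2) / (alpha * sigma2)) / gamma(s)
    return m1, m2 - m1**2

# For alpha = 1 the formulas must match the Remark 1 values.
for r, a, sigma2 in [(0.5, 1.0, 0.25), (1.0, 2.0, 0.5), (0.7, 0.3, 1.0)]:
    m, v = stationary_mean_var(r, a, sigma2)
    assert isclose(m, (2 * r - sigma2) / (2 * a), rel_tol=1e-9)
    assert isclose(v, sigma2 * (2 * r - sigma2) / (4 * a**2), rel_tol=1e-9)
print("Remark 1 mean/variance confirmed for the sampled parameter sets")
```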

4 Extinction

In this section, we will show that if the noise is sufficiently large, the solution to system (1) will become extinct with probability 1.

Theorem 4.1

Let condition (2) and \(\sigma ^{2}\geq2r\) hold and \(x(t,x_{0})\) be the global solution to system (1) with any positive initial value \(x_{0}\). We then have the following assertions:
  1. (i)
    If \(\sigma^{2}>2r\), the solution \(x(t,x_{0})\) to system (1) has the property that
    $$\begin{aligned} \lim _{t\rightarrow\infty}\frac{\ln x(t,x_{0})}{t}=-\biggl(\frac{\sigma^{2}}{2}-r\biggr) \quad\textit{a.s.} \end{aligned}$$
    (8)
    That is, the population will become extinct exponentially with probability one and the exponential extinction rate is \(-(\frac{\sigma^{2}}{2}-r)\).
     
  2. (ii)
    If \(\sigma^{2}=2r\), the solution \(x(t,x_{0})\) to system (1) has the property that
    $$\begin{aligned} \lim _{t\rightarrow\infty} x(t,x_{0})=0 \quad\textit{a.s.},\qquad \lim _{t\rightarrow\infty} \frac{\ln x(t,x_{0})}{t}=0 \quad\textit{a.s.} \end{aligned}$$
    (9)
    That is, system (1) still becomes extinct with zero exponential extinction rate.
     

To prove Theorem 4.1, let us present two lemmas which are essential to the proof.

Lemma 4.2

[23]

Suppose that an n-dimensional stochastic process \(x(t)\) on \(t\geq0\) satisfies the condition
$$E\bigl|{ x}(t)-{ x}(s)\bigr|^{\alpha}\leq C|t-s|^{1+\beta}, \quad 0\leq s,t< \infty $$
for some positive constants α, β, and C. Then there exists a continuous modification \(\tilde{{ x}}(t)\) of \({ x}(t)\) which has the property that, for every \(\gamma\in(0, \frac{\beta}{\alpha})\), there is a positive random variable \(\delta(\omega)\) such that
$$\mathbb{P} \biggl\{ \omega: \mathop{\sup_{0< t-s<\delta(\omega)}}\limits_{0\leq s,t<\infty} \frac{|\tilde{{ x}}(t,\omega)-\tilde{{ x}}(s,\omega)|}{|t-s|^{\gamma }}\leq \frac{2}{1-2^{-\gamma}} \biggr\} =1. $$
In other words, almost every sample path of \(\tilde{{ x}}\) is locally but uniformly Hölder-continuous with exponent γ.

Lemma 4.3

Let condition (2) hold and \(x(t,x_{0})\) be the global solution to system (1) with any positive initial value \(x_{0}\). For any \(\beta>0\), \(x^{\beta}(t,x_{0})\) is uniformly continuous on \([0, \infty)\) a.s.

The proof of this lemma is rather standard; hence it is omitted. For details the reader is referred to [19].

Proof of Theorem 4.1

As the whole proof is very technical, we will divide it into two steps. The first step is to show the exponential extinction of system (1) when \(\sigma^{2}>2r\). The second step is to show the extinction with zero exponential extinction rate in the case of \(\sigma^{2}=2r\). Let \(x(t) = x(t; x_{0})\) for simplicity.

Step 1: In this step, we aim to prove assertion (8). It follows from Itô’s formula that
$$\begin{aligned} \ln x(t)=\ln x(0)+\int^{t}_{0}\biggl(r- \frac{\sigma^{2}}{2}\biggr)\,ds-a\int^{t}_{0}x^{\alpha }(s)\,ds+ \int^{t}_{0}\sigma \,dB(s). \end{aligned}$$
Dividing both sides by t yields
$$\begin{aligned} \frac{\ln x(t)}{t}=\frac{\ln x(0)}{t}+\frac{1}{t}\int ^{t}_{0}\biggl(r-\frac{\sigma^{2}}{2}\biggr)\,ds- \frac {a}{t}\int^{t}_{0}x^{\alpha}(s)\,ds+ \frac{1}{t}\int^{t}_{0}\sigma \,dB(s). \end{aligned}$$
(10)
Using the strong law of large numbers for martingales (see [22]), we can claim that
$$\lim _{t\rightarrow\infty}\frac{1}{t}\int^{t}_{0} \sigma \,dB(s)=0 \quad\mbox{a.s.} $$
Noting that \(-\frac{a}{t}\int^{t}_{0}x^{\alpha}(s)\,ds\leq0\) and letting \(t\rightarrow\infty\) in (10) yields
$$\begin{aligned} \limsup _{t\rightarrow\infty}\frac{\ln x(t)}{t}\leq-\biggl(\frac{\sigma^{2}}{2}-r\biggr)\quad \mbox{a.s.} \end{aligned}$$
This shows that for any \(\epsilon\in(0, \frac{\sigma^{2}}{2}-r)\), there is a positive random variable \(T(\epsilon)\) such that, with probability 1,
$$\begin{aligned} x(t)\leq e^{-(\frac{\sigma^{2}}{2}-r)t+\epsilon t}, \quad\forall t>T(\epsilon) \mbox{ a.s.} \end{aligned}$$
It follows that
$$\begin{aligned} x^{\alpha}(t)\leq e^{-\alpha(\frac{\sigma^{2}}{2}-r)t+\alpha\epsilon t},\quad \forall t>T(\epsilon) \mbox{ a.s.}, \end{aligned}$$
which means
$$\int^{\infty}_{0}x^{\alpha}(s)\,ds<\infty\quad \mbox{a.s.} $$
Then, since \(\frac{a}{t}\int^{t}_{0}x^{\alpha}(s)\,ds\rightarrow0\) as \(t\rightarrow\infty\), letting \(t\rightarrow\infty\) on both sides of (10) yields
$$\begin{aligned} \lim _{t\rightarrow\infty}\frac{\ln x(t)}{t}=-\biggl(\frac{\sigma^{2}}{2}-r\biggr)\quad \mbox{a.s.} \end{aligned}$$

Step 2: Now, let us finally show assertion (9). The proof of this step is composed of two parts. We first show the almost sure convergence of \(x(t)\) to zero as \(t\rightarrow\infty\). Then we show that the exponential extinction rate is zero.

Decompose the sample space into three mutually exclusive events as follows:
$$\begin{aligned}& E_{1}= \Bigl\{ \omega: \limsup _{t\rightarrow\infty} \bigl|x(t)\bigr|\geq\liminf _{t\rightarrow\infty}\bigl|x(t)\bigr|= \gamma>0 \Bigr\} ; \\& E_{2}= \Bigl\{ \omega: \limsup _{t\rightarrow\infty} \bigl|x(t)\bigr|> \liminf _{t\rightarrow\infty}\bigl|x(t)\bigr| = 0 \Bigr\} ; \\& E_{3}= \Bigl\{ \omega: \lim _{t\rightarrow\infty} \bigl|x(t)\bigr|=0\Bigr\} . \end{aligned}$$
When \(\sigma^{2}=2r\), (10) has the following form:
$$\begin{aligned} \frac{\ln x(t)}{t}=\frac{\ln x(0)}{t}-\frac{a}{t}\int ^{t}_{0}x^{\alpha}(s)\,ds+\frac{1}{t}\int ^{t}_{0}\sigma \,dB(s). \end{aligned}$$
(11)
We, furthermore, decompose the sample space into the following two mutually exclusive events according to the convergence of \(\int^{\infty}_{0}x^{\alpha}(s)\,ds\):
$$\begin{aligned} J_{1}= \biggl\{ \omega: \int^{\infty}_{0}x^{\alpha}(s)\,ds< \infty\biggr\} ,\qquad J_{2}= \biggl\{ \omega: \int^{\infty}_{0}x^{\alpha}(s)\,ds= \infty\biggr\} . \end{aligned}$$
The proof of \(\lim_{t\rightarrow\infty}x(t)=0\) is equivalent to showing \(J_{1}\subset E_{3}\), \(J_{2}\subset E_{3}\) a.s. The strategy of the proof is as follows:
  • First, using Lemmas 4.2 and 4.3, we show that \(J_{1}\subset E_{3}\).

  • Second, using some novel techniques, we prove that \(\mathbb{P}(J_{2}\cap E_{1})=0\) and \(\mathbb{P}(J_{2}\cap E_{2})=0\), which means \(J_{2}\subset E_{3}\) a.s.

Now we realize this strategy as follows:

Case 1: Let us now show \(J_{1}\subset E_{3}\) a.s. It follows from Lemma 4.2 that almost every sample path of \(x^{\alpha}(t)\) is locally but uniformly Hölder continuous, and therefore almost every sample path of \(x^{\alpha}(t)\) is uniformly continuous (cf. Lemma 4.3). Since a uniformly continuous nonnegative function with a finite integral over \([0, \infty)\) must tend to zero, the definition of \(J_{1}\) gives
$$\lim _{t\rightarrow\infty}{ x}(t)=0 \quad \mbox{a.s.}, $$
which means \(J_{1}\subset E_{3}\) a.s.

Case 2: Now, we turn to the proof that \(J_{2}\subset E_{3}\) a.s. It is sufficient to show \(\mathbb{P}(J_{2}\cap E_{1})=0\) and \(\mathbb{P}(J_{2}\cap E_{2})=0\). We prove by contradiction.

If \(\mathbb{P}(J_{2}\cap E_{1})>0\), then for any \(\omega\in J_{2}\cap E_{1}\) and \(\varepsilon_{0}\in(0, \frac{\gamma}{2})\), there exists \(T=T(\varepsilon_{0},\omega)\) such that
$$\begin{aligned} x(t)>\gamma-\varepsilon_{0}>\frac{\gamma}{2}, \quad \forall t>T \mbox{ a.s.} \end{aligned}$$
It then follows that
$$\begin{aligned} \frac{1}{t}\int^{t}_{0}x^{\alpha}(s)\,ds= \frac{1}{t}\int^{T}_{0}x^{\alpha }(s)\,ds+ \frac{1}{t}\int^{t}_{T}x^{\alpha}(s)\,ds \geq\frac{1}{t}\int^{T}_{0}x^{\alpha}(s)\,ds+ \frac{t-T}{t}\biggl(\frac{\gamma}{2}\biggr)^{\alpha} . \end{aligned}$$
Letting \(t\rightarrow\infty\), we obtain
$$\liminf _{t\rightarrow\infty}\frac{1}{t}\int^{t}_{0}x^{\alpha }(s)\,ds\geq \biggl(\frac{\gamma}{2}\biggr)^{\alpha}>0 \quad \mbox{a.s.} $$
Substituting this into (11), we obtain
$$\begin{aligned} \limsup _{t\rightarrow\infty}\frac{\ln x(t)}{t}\leq-a\biggl(\frac{\gamma}{2} \biggr)^{\alpha}<0 \quad\mbox{a.s.}, \end{aligned}$$
which contradicts the definition of \(J_{2}\) and \(E_{1}\). So \(\mathbb{P}(J_{2}\cap E_{1})=0\) must hold.
Now we proceed to show that \(\mathbb{P}(J_{2}\cap E_{2})=0\). For this purpose, we need a few more notations as follows:
$$\begin{aligned} A^{\varepsilon}_{t}:=\bigl\{ 0\leq s\leq t: x(s)\geq\varepsilon\bigr\} ,\qquad d^{\varepsilon}_{t}:=\frac{m(A^{\varepsilon}_{t})}{t},\qquad d^{\varepsilon}:=\liminf _{t\rightarrow\infty}d^{\varepsilon}_{t},\qquad D^{\varepsilon}:=\bigl\{ \omega\in J_{2}\cap E_{2}: d^{\varepsilon}>0\bigr\} , \end{aligned}$$
where \(m(A^{\varepsilon}_{t})\) denotes the length (Lebesgue measure) of \(A^{\varepsilon}_{t}\). It is easy to see that \(D^{0}=J_{2}\cap E_{2}\). For any \(\varepsilon_{1}<\varepsilon_{2}\), simple computations show that
$$\begin{aligned} A^{\varepsilon_{1}}_{t}\supset A^{\varepsilon_{2}}_{t},\qquad m\bigl(A^{\varepsilon_{1}}_{t}\bigr)\geq m\bigl(A^{\varepsilon_{2}}_{t}\bigr),\qquad d^{\varepsilon_{1}}_{t}=\frac{m(A^{\varepsilon_{1}}_{t})}{t}\geq d^{\varepsilon_{2}}_{t}=\frac{m(A^{\varepsilon_{2}}_{t})}{t}, \end{aligned}$$
which implies
$$\begin{aligned} d^{\varepsilon_{2}}\leq d^{\varepsilon_{1}}, \qquad D^{\varepsilon_{2}}\subset D^{\varepsilon_{1}},\quad \forall \varepsilon_{1}<\varepsilon_{2}. \end{aligned}$$
It is easy to observe from the continuity of probability that
$$\begin{aligned} \mathbb{P}\bigl(D^{\varepsilon}\bigr)\rightarrow \mathbb{P} \bigl(D^{0}\bigr)=\mathbb{P}(J_{2}\cap E_{2}) \quad\mbox{as } \varepsilon\rightarrow0. \end{aligned}$$
If \(\mathbb{P}(J_{2}\cap E_{2})>0\), there exists \(\epsilon>0\) such that \(\mathbb{P}(D^{\epsilon})>0\). For any \(\omega\in D^{\epsilon}\), simple computations show that
$$\begin{aligned} \frac{1}{t}\int^{t}_{0}x^{\alpha}(s)\,ds= \frac{1}{t}\int _{A^{\epsilon}_{t}}x^{\alpha}(s)\,ds+\frac{1}{t}\int _{[0,t]\setminus A^{\epsilon}_{t}}x^{\alpha}(s)\,ds\geq \frac{1}{t}\int _{A^{\epsilon}_{t}}x^{\alpha}(s)\,ds. \end{aligned}$$
By letting \(t\rightarrow\infty\), we have
$$\begin{aligned} \liminf _{ t\rightarrow\infty}\frac{1}{t}\int^{t}_{0}x^{\alpha}(s)\,ds \geq\liminf _{ t\rightarrow\infty}\frac{1}{t}\int _{A^{\epsilon}_{t}}x^{\alpha }(s)\,ds\geq d^{\epsilon} \epsilon^{\alpha} \quad\mbox{a.s.} \end{aligned}$$
(12)
Substituting (12) into (11), we obtain
$$\begin{aligned} \limsup _{t\rightarrow\infty}\frac{\ln x(t)}{t}\leq-a d^{\epsilon} \epsilon^{\alpha}<0 \quad\mbox{a.s.} \end{aligned}$$
This contradicts the definition of \(J_{2}\) and \(E_{2}\). It yields the desired assertion \(\mathbb{P}(J_{2}\cap E_{2})=0\) immediately. Combining \(J_{1}\subset E_{3}\) with \(\mathbb{P}(J_{2}\cap E_{1})=0\) and \(\mathbb{P}(J_{2}\cap E_{2})=0\), we can claim that
$$\begin{aligned} \lim _{t\rightarrow\infty}x(t)=0 \quad\mbox{a.s.}, \end{aligned}$$
which means system (1) is extinct when \(\sigma^{2}=2r\). It follows that \(\lim_{t\rightarrow\infty}x^{\alpha}(t)=0\) a.s. This implies
$$\begin{aligned} \lim _{t\rightarrow\infty}\frac{1}{t}\int^{t}_{0}x^{\alpha}(s)\,ds=0\quad \mbox{a.s.} \end{aligned}$$
(13)
By the strong law of large numbers for martingales and (13), letting \(t\rightarrow\infty\) on both sides of (11) yields
$$\begin{aligned} \lim _{t\rightarrow\infty}\frac{\ln x(t)}{t}=0 \quad\mbox{a.s.} \end{aligned}$$
The proof is completed. □

Remark 2

Compared with the existing literature [18, 20], we point out that the exponential extinction rate is exactly the difference between the intrinsic growth rate and half of the noise intensity. In particular, we present some novel techniques to show the extinction of the system when \(\sigma^{2}=2r\).
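To see the rate \(-(\frac{\sigma^{2}}{2}-r)\) emerge numerically, one can simulate \(Y(t)=\ln x(t)\), which by Itô’s formula satisfies \(dY=(r-\frac{\sigma^{2}}{2}-ae^{\alpha Y})\,dt+\sigma\,dB\). The sketch below uses an assumed Euler-Maruyama discretization with the parameters of system (14) in Section 5 and \(\sigma=2\) (so \(\sigma^{2}>2r\)); step size and horizon are arbitrary choices:

```python
import random
from math import exp, sqrt, log

# Euler-Maruyama sketch for Y(t) = ln x(t). With r=0.5, a=1, alpha=2 and
# sigma=2 (sigma^2 > 2r), the slope ln x(t)/t should approach
# -(sigma^2/2 - r) = -1.5, as in assertion (8).
random.seed(1)
r, a, alpha, sigma = 0.5, 1.0, 2.0, 2.0
dt, T = 1e-3, 200.0
n = int(T / dt)

y = log(1.0)  # x(0) = 1
for _ in range(n):
    y += (r - sigma**2 / 2 - a * exp(alpha * y)) * dt \
         + sigma * sqrt(dt) * random.gauss(0.0, 1.0)

slope = y / T
print("ln x(T)/T =", round(slope, 3), " theoretical rate =", -(sigma**2 / 2 - r))
assert slope < 0.0
assert abs(slope - (-(sigma**2 / 2 - r))) < 0.5
```

The tolerance 0.5 leaves room for the martingale term \(\sigma B(t)/t\), whose standard deviation at \(t=200\) is about 0.14.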

5 Summary and numerical examples

In this paper, we have discussed the existence of a stationary distribution and extinction of system (1), and sufficient conditions have been established in Theorems 3.2 and 4.1. Note that the two sufficient conditions are complementary and mutually exclusive. Thus, there are also the necessary conditions. In conclusion, we formulate the sufficient and necessary conditions as a theorem.

Theorem 5.1

Let condition (2) hold. There are two mutually exclusive possibilities for system (1): either a stationary distribution exists, or the system becomes extinct. That is, the system has a stationary distribution if and only if \(\sigma^{2}<2r\), while it becomes extinct if and only if \(r\leq\frac{\sigma^{2}}{2}\).

Remark 3

In the existing literature (see [14]), the recurrence phenomenon is attributed to the positive recurrence. Now we try to explain the phenomenon via the divergence of the solution to the system. Note from Theorems 3.2 and 4.1 that there is \(E_{1}\cup E_{2}\subset\Omega\) with \(\mathbb{P}(E_{1}\cup E_{2})=1\) when \(\sigma^{2}<2r\). It is easy to prove by contradiction that
$$\mathbb{P} \Bigl(\limsup _{t\rightarrow\infty} x(t,\omega)=\liminf _{t\rightarrow\infty} x(t,\omega)>0 \Bigr)=0. $$
Thus, for almost every \(\omega\in E_{1}\cup E_{2}\), we have
$$\limsup _{t\rightarrow\infty} x(t,\omega)>\liminf _{t\rightarrow\infty} x(t,\omega). $$
Then there exists \(\theta_{2}(\omega)>\theta_{1}(\omega)>0\) such that the process \(x(t,\omega)\) up-crosses the interval \((\theta _{1}(\omega), \theta_{2}(\omega) )\) infinitely many times. Let \(\theta_{1}\), \(\theta_{2}\) denote the lower and higher population levels, respectively. Define a sequence of stopping times:
$$\begin{aligned}& \tau_{1}=\inf\bigl\{ t\geq0: x(t)\geq \theta_{2}\bigr\} ,\\& \tau_{2k}=\inf\bigl\{ t\geq\tau_{2k-1}: x(t)\leq \theta_{1} \bigr\} , \qquad\tau_{2k+1}=\inf\bigl\{ t\geq \tau_{2k}: x(t)\geq\theta_{2} \bigr\} ,\quad k=1,2,\ldots. \end{aligned}$$
It follows from the definition of \(E_{1}\) and \(E_{2}\) that \(\tau_{k}<\infty\), \(\forall k\geq1\), a.s. This implies that the higher and lower population levels occur infinitely many times. Meanwhile, by virtue of the ergodic property, the average of the records approaches the mean of the invariant distribution as the number of records becomes large. In conclusion, we provide a new point of view to describe some biological phenomena of a permanent population system.

Example 5.1

Consider a stochastic generalized logistic system as follows:
$$\begin{aligned} dx(t)=x\bigl(0.5-x^{2}\bigr)\,dt+\sigma x \,dB(t). \end{aligned}$$
(14)
The existence and uniqueness of the solution follows from Lemma 2.1. We consider the solution \(x(t,x_{0})\) with initial value \(x_{0}=1\). Let \(x(t) = x(t; 1)\) for simplicity.

(i) \(\sigma=0.5\):

Since \(0.5>\frac{0.5^{2}}{2}\), by virtue of Theorem 3.2, system (14) has a unique stationary distribution. Figure 1 shows a stochastic trajectory of \(x(t)\) generated by the Euler scheme with time step \(\Delta=2^{-8}\) for system (14) on \([0,300]\). Figure 2 shows the probability density function \(\varphi(y)\) of system (14).
Figure 1

Stochastic trajectory of \(\pmb{x(t)}\) for system (14) with \(\pmb{\sigma=0.5}\).

Figure 2

Probability density function of system (14) with \(\pmb{\sigma=0.5}\).
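The ergodic property (4) can also be probed numerically. The following sketch (an assumed Euler-Maruyama discretization of \(\ln x\), not the Euler scheme used for Figure 1) computes the time average of \(x(t)\) for system (14) with \(\sigma=0.5\) and compares it with the stationary mean \(\mu_{1}\) of Theorem 3.2:

```python
import random
from math import exp, gamma, log, sqrt

# Time average of x(t) for system (14), sigma = 0.5, versus the stationary
# mean mu_1 from Theorem 3.2. The burn-in length and tolerance are
# assumed choices for this illustration.
random.seed(7)
r, a, alpha, sigma = 0.5, 1.0, 2.0, 0.5
dt, n, burn = 1e-2, 200000, 20000

y, total = log(1.0), 0.0
for k in range(n):
    y += (r - sigma**2 / 2 - a * exp(alpha * y)) * dt \
         + sigma * sqrt(dt) * random.gauss(0.0, 1.0)
    if k >= burn:
        total += exp(y)
avg = total / (n - burn)

s = (2 * r - sigma**2) / (alpha * sigma**2)
mu1 = (alpha * sigma**2 / (2 * a))**(1 / alpha) \
      * gamma(2 * r / (alpha * sigma**2)) / gamma(s)
print("time average =", round(avg, 3), " stationary mean mu_1 =", round(mu1, 3))
assert abs(avg - mu1) < 0.1
```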

(ii) \(\sigma=2\):

Since \(0.5<\frac{2^{2}}{2}\), by virtue of Theorem 4.1, system (14) becomes extinct exponentially. Figures 3 and 4 show the stochastic trajectories of \(\frac{\ln x(t)}{t}\) and \(x(t)\) generated by the Heun scheme with time step \(\Delta=10^{-3}\) for system (14) on \([0,200]\) and \([0,20]\), respectively.
Figure 3

Stochastic trajectory of \(\pmb{\frac{\ln x(t)}{t}}\) for system (14) with \(\pmb{\sigma=2}\).

Figure 4

Stochastic trajectory of \(\pmb{x(t)}\) for system (14) with \(\pmb{\sigma=2}\).

(iii) \(\sigma=1\):

Since \(0.5=\frac{1^{2}}{2}\), by virtue of Theorem 4.1, system (14) becomes extinct with zero exponential extinction rate. Figures 5 and 6 show the stochastic trajectories of \(\frac{\ln x(t)}{t}\) and \(x(t)\) generated by the Heun scheme with time step \(\Delta=10^{-3}\) for system (14) on \([0,200]\).
Figure 5

Stochastic trajectory of \(\pmb{\frac{\ln x(t)}{t}}\) for system (14) with \(\pmb{\sigma=1}\).

Figure 6

Stochastic trajectory of \(\pmb{x(t)}\) for system (14) with \(\pmb{\sigma=1}\).
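The critical case can likewise be simulated. With \(r=\frac{\sigma^{2}}{2}\) the drift of \(Y=\ln x\) reduces to \(-ae^{\alpha Y}\), so the sketch below (same assumed Euler-Maruyama discretization as before, with illustrative step size and horizon) should show \(\ln x(t)/t\) staying near zero while the path is kept from growing:

```python
import random
from math import exp, sqrt, log

# Critical case sigma^2 = 2r of Theorem 4.1 for system (14) with sigma = 1:
# dY = -a*exp(alpha*Y) dt + dB, since r - sigma^2/2 = 0.  The slope
# ln x(t)/t should stay near zero (zero exponential extinction rate, cf. (9)).
random.seed(3)
a, alpha, sigma = 1.0, 2.0, 1.0   # r = 0.5
dt, T = 1e-3, 200.0
n = int(T / dt)

y = log(1.0)  # x(0) = 1
for _ in range(n):
    y += -a * exp(alpha * y) * dt + sigma * sqrt(dt) * random.gauss(0.0, 1.0)

print("ln x(T)/T =", round(y / T, 4))
assert abs(y / T) < 0.5  # zero exponential extinction rate
assert y < 2.0           # the drift -a*exp(alpha*y) prevents sustained growth
```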

Declarations

Acknowledgements

The work reported here is supported by the National Natural Science Foundation of China (Grant Nos. 61304070, 11271146) and the National Key Basic Research Program of China (973 Program) (2013CB228204).

Open Access This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.

Authors’ Affiliations

(1)
College of Science, Hohai University
(2)
School of Automation, Huazhong University of Science and Technology

References

  1. Bischi, GI, Tramontana, F: Three-dimensional discrete-time Lotka-Volterra models with an application to industrial clusters. Commun. Nonlinear Sci. Numer. Simul. 15, 3000-3014 (2010)
  2. He, X, Li, C, Huang, T, Li, C: Codimension two bifurcation in a delayed neural network with unidirectional coupling. Nonlinear Anal., Real World Appl. 14, 1191-1202 (2013)
  3. He, X, Li, C, Huang, T, Li, C, Huang, J: A recurrent neural network for solving bilevel linear programming problem. IEEE Trans. Neural Netw. Learn. Syst. 25, 824-830 (2014)
  4. He, X, Li, C, Huang, T, Li, C: Neural network for solving convex quadratic bilevel programming problems. Neural Netw. 51, 17-25 (2014)
  5. Moreau, Y, Louies, S, Vandewalle, J, Brenig, L: Embedding recurrent neural networks into predator-prey models. Neural Netw. 12, 237-245 (1999)
  6. Wen, S, Zeng, Z, Huang, T: Dynamic behaviors of memristor-based delayed recurrent networks. Neural Comput. Appl. 23, 815-821 (2013)
  7. Wen, S, Zeng, Z, Huang, T, Zeng, Z, Chen, Y, Li, P: Circuit design and exponential stabilization of memristive neural networks. Neural Netw. 63, 48-56 (2015)
  8. Mao, X, Marion, G, Renshaw, E: Asymptotic behavior of the stochastic Lotka-Volterra model. J. Math. Anal. Appl. 287, 141-156 (2003)
  9. Mao, X, Marion, G, Renshaw, E: Environmental noise suppresses explosion in population dynamics. Stoch. Process. Appl. 97, 95-110 (2002)
  10. Mao, X, Yuan, C, Zou, J: Stochastic differential delay equations in population dynamics. J. Math. Anal. Appl. 304, 296-320 (2005)
  11. Zhu, C, Yin, G: On hybrid competitive Lotka-Volterra ecosystems. Nonlinear Anal., Theory Methods Appl. 71, 1370-1379 (2009)
  12. Dang, HN, Du, NH, Yin, G: Existence of stationary distributions for Kolmogorov systems of competitive type under telegraph noise. J. Differ. Equ. 257, 2078-2101 (2014)
  13. Jiang, D, Ji, C, Li, X, O’Regan, D: Analysis of autonomous Lotka-Volterra competition systems with random perturbation. J. Math. Anal. Appl. 390, 82-595 (2012)
  14. Liu, HX, Li, X, Yang, Q: The ergodic property and positive recurrence of a multi-group Lotka-Volterra mutualistic system with regime switching. Syst. Control Lett. 62, 805-810 (2013)
  15. Mao, X: Stationary distribution of stochastic population systems. Syst. Control Lett. 60, 398-405 (2011)
  16. Jiang, D, Shi, N, Li, X: Global stability and stochastic permanence of a non-autonomous logistic equation with random perturbation. J. Math. Anal. Appl. 340, 588-597 (2008)
  17. Jiang, D, Shi, N: A note on non-autonomous logistic with random perturbation. J. Math. Anal. Appl. 303, 164-172 (2005)
  18. Li, X, Gray, A, Jiang, D, Mao, X: Sufficient and necessary conditions of stochastic permanence and extinction for stochastic logistic populations under regime switching. J. Math. Anal. Appl. 376, 11-28 (2011)
  19. Liu, L, Shen, Y, Jiang, F: The almost sure asymptotic stability and pth moment asymptotic stability of nonlinear stochastic differential systems with polynomial growth. IEEE Trans. Autom. Control 56, 1985-1990 (2011)
  20. Liu, M, Wang, K: Stationary distribution, ergodicity and extinction of a stochastic generalized logistic system. Appl. Math. Lett. 25, 1980-1985 (2012)
  21. Hasminskii, RZ: Stochastic Stability of Differential Equations. Springer, Berlin (2011)
  22. Mao, X, Yuan, C: Stochastic Differential Equations with Markovian Switching. Imperial College Press, London (2006)
  23. Karatzas, I, Shreve, SE: Brownian Motion and Stochastic Calculus. Springer, Berlin (1991)

Copyright

© Liu and Shen; licensee Springer 2015