
Ergodicity of stochastic smoking model and parameter estimation

Abstract

In this paper, we first propose a stochastic smoking model driven by Brownian motion, based on a deterministic smoking model. We show that when the noise coefficients are small, the smoking model is ergodic. We then estimate the drift coefficients of the stochastic smoking model by least squares estimation together with the ergodic theory of the stationary distribution. Finally, we develop a new approach to estimating the diffusion coefficients. Computer simulations are used to illustrate our theory.

1 Introduction

Smoking is harmful not only to the smoker's health but also to the smoker's whole family and, in the long run, to society as a whole. According to the World Health Organization website [1], statistics show the following key facts:

  • Tobacco kills up to half of its users.

  • Tobacco kills around 6 million people each year. More than 5 million of those deaths are the result of direct tobacco use while more than \(600\mbox{,}000\) are the result of non-smokers being exposed to second-hand smoke.

  • Nearly 80% of the world’s 1 billion smokers live in low- and middle-income countries.

In recent years, several researchers have proposed mathematical models to characterize smoking behavior. First, Castillo-Garsow et al. [2] presented a deterministic smoking model, which Sharomi and Gumel [3] further developed. For fixed time \(t\geq0\), they separated the total population \(N(t)\) into four classes: potential smokers \(P(t)\), current smokers \(S(t)\), smokers who have temporarily quit smoking \(Q_{t}(t)\) (i.e., smokers not smoking at some stage), and smokers who have quit smoking permanently \(Q_{p}(t)\). Their smoking model is based on the following assumptions (A):

  1. (A1)

    The average number of contacts per unit time is c.

  2. (A2)

    The birth rate of the total population is μ.

  3. (A3)

    The death rate of the total population is μ.

  4. (A4)

    The current smokers try to quit smoking at the rate γ.

  5. (A5)

Smokers who have temporarily quit smoking become current smokers again at the rate α.

  6. (A6)

Smokers who have temporarily quit smoking become smokers who have quit permanently at the rate σ.

  7. (A7)

    The total population \(N(t)\equiv N^{*}\) is a constant and \(N^{*}\) is independent of time t.

According to assumption (A), \(cP(t)\) is the average number of contacts made by potential smokers per unit of time. A proportion \(\frac{S(t)}{N(t)}\) of these contacts is with current smokers, and q is the probability that a potential smoker becomes a smoker after contact with a smoker. Therefore, the average rate at which potential smokers become smokers is \(\frac{cqP(t)S(t)}{N(t)}\).

For notational simplicity, let

$$ \beta:=\frac{cq}{N(t)}=\frac{cq}{N^{*}};\qquad\Lambda:=\mu N(t)=\mu N^{*}. $$

Consequently, the smoking model can be written as

$$ \begin{aligned} &dP(t)= \bigl[\Lambda-\mu P(t)-\beta P(t)S(t) \bigr]\,dt, \\ &dS(t)= \bigl[-(\mu+\gamma) S(t)+\beta P(t)S(t)+\alpha Q_{t}(t) \bigr]\,dt,\\ &dQ_{t}(t)=\bigl[-(\mu+\alpha) Q_{t}(t)+\gamma(1-\sigma) S(t)\bigr]\,dt, \\ &dQ_{p}(t)=\bigl[-\mu Q_{p}(t)+\gamma\sigma S(t)\bigr] \,dt, \end{aligned} $$

where \(P(0)>0\), \(S(0)>0\), \(Q_{t}(0)>0\), \(Q_{p}(0)>0\), \(0<\sigma, \mu, \beta, \gamma, \alpha<1\), and \(\Lambda>0\).

They proved local and global stability of this model in terms of a threshold known as the smokers-generation number: the associated smoking-free equilibrium is globally asymptotically stable whenever this threshold is less than unity, and unstable whenever it is greater than unity.

It is reasonable to assume that the death rates of potential smokers \(P(t)\), current smokers \(S(t)\), smokers who have temporarily quit smoking \(Q_{t}(t)\), and smokers who have quit smoking permanently \(Q_{p}(t)\) are \(\mu_{1}\), \(\mu_{2}\), \(\mu_{3}\), and \(\mu_{4}\), respectively. We then obtain the following model:

$$\begin{aligned} \begin{aligned} &dP(t)= \bigl[\Lambda-\mu_{1} P(t)-\beta P(t)S(t) \bigr] \,dt, \\ &dS(t)= \bigl[-(\mu_{2}+\gamma) S(t)+\beta P(t)S(t)+\alpha Q_{t}(t) \bigr]\,dt, \\ &dQ_{t}(t)=\bigl[-(\mu_{3}+\alpha) Q_{t}(t)+ \gamma(1-\sigma) S(t)\bigr]\,dt, \\ &dQ_{p}(t)=\bigl[-\mu_{4} Q_{p}(t)+\gamma\sigma S(t)\bigr]\,dt. \end{aligned} \end{aligned}$$
(1)

Although the deterministic smoking model can characterize the dynamical behavior of the smoking population in some way, it assumes that the parameters are deterministic and unaffected by environmental fluctuations, which imposes some limitations in the mathematical modeling of ecological systems. In the real world, many random factors (earthquakes, typhoons, car accidents, and other unforeseen events) can turn the parameters \(\mu_{i}\), \(i=1,2,3,4\), into random variables, that is,

$$ -\mu_{i}\rightarrow-\mu_{i}+\mbox{error}_{i}, \quad i=1,2,3,4, $$

where \(\mbox{error}_{i}\) is a random term.

According to the central limit theorem, the \(\mbox{error}_{i}\,dt\) term can be approximated by a normal distribution with mean 0 and variance \(\sigma_{i}^{2}\,dt\). Consequently,

$$\begin{aligned} -\mu_{i}\,dt\rightarrow-\mu_{i}\,dt+\sigma_{i} \,dB_{i}(t),\quad i=1,2,3,4, \end{aligned}$$
(2)

where \(\sigma_{i}>0\) and \(B_{i}(t)\), \(i=1,\ldots,4\), are standard Brownian motions. To keep the mathematics tractable, we assume that \(B_{i}(t)\), \(i=1,2,3,4\), are independent of each other.

Substituting (2) into equation (1), we get the following stochastic differential equation:

$$\begin{aligned} \begin{aligned} &dP(t)= \bigl[\Lambda-\mu_{1} P(t)-\beta P(t)S(t) \bigr] \,dt+\sigma_{1}P(t)\,dB_{1}(t), \\ &dS(t)= \bigl[-(\mu_{2}+\gamma) S(t)+\beta P(t)S(t)+\alpha Q_{t}(t) \bigr]\,dt+\sigma_{2}S(t)\,dB_{2}(t), \\ &dQ_{t}(t)=\bigl[-(\mu_{3}+\alpha) Q_{t}(t)+ \gamma(1-\sigma) S(t)\bigr]\,dt+\sigma_{3}Q_{t}(t) \,dB_{3}(t), \\ &dQ_{p}(t)=\bigl[-\mu_{4} Q_{p}(t)+\gamma\sigma S(t)\bigr]\,dt+\sigma_{4}Q_{P}(t)\,dB_{4}(t). \end{aligned} \end{aligned}$$
(3)

Recently, Lahrouz et al. [4] showed that a stochastic mathematical model of smoking is stable under certain conditions. Many scholars have studied the effects of stochastic noise on biological models: Gard [5] pointed out that permanence in the corresponding deterministic model is preserved in the stochastic model if the intensities of the random fluctuations are not too large; Gray et al. [6] discussed the impact of stochastic noise on a one-dimensional stochastic SIS model; Zhang and Chen [7] presented new sufficient conditions for the existence and uniqueness of a stationary distribution of general diffusion processes, which are applicable to the stochastic smoking model (3).

Moreover, parameter estimation for stochastic differential equations (SDEs for short) has been a topic of interest in recent years. Many scholars have studied parameter estimation for SDEs, for example, Bishwal [8], Timmer [9], and Kristensen et al. [10]. Very recently, Nielsen et al. [11] reviewed parameter estimation methods for SDEs, and Pan et al. [12] estimated the parameters in the stochastic SIS epidemic model from discrete observations by pseudo-maximum likelihood estimation and least squares estimation (LSE for short).

In this paper, for convenience, we let

$$x_{1}(t)=P(t),\qquad x_{2}(t)=S(t),\qquad x_{3}(t)=Q_{t}(t),\qquad x_{4}(t)=Q_{p}(t). $$

Thus, (3) becomes the following stochastic differential equation (for short SDE):

$$\begin{aligned} \left ( { \begin{matrix} dx_{1}(t) \\ dx_{2}(t) \\ dx_{3}(t) \\ dx_{4}(t) \end{matrix} } \right )=&\left ( { \begin{matrix} \Lambda-\mu_{1} x_{1}(t)-\beta x_{1}(t)x_{2}(t)\\ -(\mu_{2}+\gamma)x_{2}(t)+\beta x_{1}(t)x_{2}(t)+\alpha x_{3}(t)\\ -(\mu_{3}+\alpha)x_{3}(t)+\gamma(1-\sigma)x_{2}(t)\\ -\mu_{4} x_{4}(t)+\gamma\sigma x_{2}(t) \end{matrix} } \right )\,dt \\ &{}+ \left ( { \begin{matrix} \sigma_{1}x_{1}(t) & 0 & 0 & 0\\ 0 & \sigma_{2}x_{2}(t) & 0 & 0\\ 0 & 0 & \sigma_{3}x_{3}(t) & 0\\ 0 & 0 & 0 & \sigma_{4}x_{4}(t) \end{matrix} } \right )\left ( { \begin{matrix} dB_{1}(t) \\ dB_{2}(t)\\ dB_{3}(t)\\ dB_{4}(t) \end{matrix} } \right ). \end{aligned}$$
(4)

Although the SDE (4) resembles the epidemic models that have been extensively discussed in the literature, it has essential differences. The first main difference from the susceptible-infectious-recovered (SIR for short) setting is that the SDE (4) contains the additional term \(\alpha Q_{t}(t)\), which makes the SDE (4) harder to analyze than SIR models. Moreover, the presence of the coefficient σ makes the stochastic smoking model more difficult to deal with. Owing to the nonlinearity of the coefficients, one cannot obtain explicit expressions for the drift coefficients of the SDE (4) by LSE directly.

The second main difference is that the SIR equation is three-dimensional, while the SDE (4) is four-dimensional. Consequently, the SDE (4) is worth considering. To the best of our knowledge, this paper is the first to consider the stationary distribution and parameter estimation of the SDE (4). When \(\sigma=0\), the model degenerates into an epidemic model, for which the existence of a stationary distribution is still an open question. Besides, this paper uses the quadratic variation to estimate the diffusion coefficients of the SDE (4), which is a new and simpler approach than classical regression analysis.

It is natural to ask the following questions:

  1. (Q1)

    Does the SDE (4) have a unique global positive solution?

  2. (Q2)

    Under which conditions does the SDE (4) have a unique stationary distribution?

  3. (Q3)

    Can we estimate the parameters of the SDE (4) by LSE directly?

Compared to the present literature, our paper has made the following contributions:

  • We find a useful and efficient function to prove the existence of a stationary distribution for the SDE (4) based on a result from Khasminskii [13].

  • Two new methods for parameter estimation are proposed. One method estimates the drift coefficients of the SDE (4) by using the ergodic theory on the stationary distribution and LSE; the other new method estimates diffusion coefficients of the SDE (4) by quadratic variation of the logarithm of sample paths.

In this paper, we answer the above three questions one by one. The organization of this paper is as follows: In Section 2, by the Lyapunov method, we show that the SDE (4) has a unique global positive solution. In Section 3, we show that when the coefficients of the noise are small, the smoking model has a unique stationary distribution. In Section 4, we estimate the parameters in the SDE (4) by LSE, the ergodic theory on the stationary distribution, and the quadratic variation.

2 Global positive solution

Throughout this paper, unless otherwise specified, we let \((\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\geq0},P)\) be a complete probability space with a filtration \(\{\mathcal{F}_{t}\}_{t\geq0}\) satisfying the usual conditions (i.e. it is increasing and right continuous while \(\mathcal{F}_{0}\) contains all P-null sets). Let \(B_{i}(t)\), \(i=1,2,3,4\), be standard Brownian motions defined on this probability space. Denote \(\mathbb{R}_{+}=(0,\infty)\) and \(\mathbb{R}^{4}_{+}=\{x\in\mathbb{R}^{4}:x_{i}>0, i=1,2,3,4\}\).

If A is a vector or matrix, its transpose is denoted by \(A^{T}\). If A is a matrix, its trace norm is denoted by \(|A|=\sqrt{\operatorname{trace}(A^{T}A)}\) while its operator norm is denoted by \(\|A\|=\sup\{|Ax|:|x|=1\}\). If A is a symmetric matrix, its smallest and largest eigenvalue are denoted by \(\lambda _{\min}(A)\) and \(\lambda_{\max}(A)\), respectively.

Theorem 2.1

For any initial value \(x(0)=(x_{1}(0),x_{2}(0),x_{3}(0),x_{4}(0))^{T}\in\mathbb{R}^{4}_{+}\), the SDE (4) has a unique global positive solution \(x(t)=(x_{1}(t),x_{2}(t),x_{3}(t),x_{4}(t))^{T}\in\mathbb{R}^{4}_{+}\) for all \(t\geq0\) with probability one, namely \(P\{x(t)\in\mathbb{R}^{4}_{+}\textit{ for all }t\geq0\}=1\).

Proof

Since the coefficients of the SDE (4) are locally Lipschitz continuous, it is well known that, for any initial value \(x(0)\in\mathbb{R}^{4}_{+}\), there is a unique local solution \(x(t)\) on \(t\in[0,\tau_{e})\) where \(\tau_{e}\) is the explosion time (see, e.g., pp. 154-155, Mao [14]).

To show this solution is global, we need to prove that \(\tau_{e}=\infty\) a.s. Let \(m_{0}>0\) be sufficiently large that every component of \(x(0)\) lies in the interval \((\frac{1}{m_{0}},m_{0})\). For each integer \(m\geq m_{0}\), define the stopping time

$$\begin{aligned} \tau_{m}=\inf\biggl\{ t\in[0,\tau_{e})|x_{i}(t) \notin\biggl(\frac{1}{m},m \biggr) \mbox{ for some } i, i=1,2,3,4 \biggr\} , \end{aligned}$$

where \(\inf\emptyset=\infty\) (∅ denotes the empty set). Clearly, \(\tau_{m}\) is increasing as \(m\rightarrow\infty\) and \(\tau_{m}\leq\tau_{e}\). Set \(\tau_{\infty}=\lim_{m\rightarrow\infty}\tau_{m}\), whence \(\tau_{\infty}\leq\tau_{e}\) a.s. If \(\tau_{\infty}=\infty\) a.s., then \(\tau_{e}=\infty\) a.s. and \(x(t)\in\mathbb{R}^{4}_{+}\) a.s. for all \(t\geq0\).

Define \(V: \mathbb{R}_{+}^{4}\rightarrow\mathbb{R}_{+}\):

$$ \begin{aligned} V(x)=(x_{1}+x_{2}+x_{3}+x_{4})^{2}+ \frac{1}{x_{2}}+\frac{1}{x_{3}}+\frac{1}{x_{4}}. \end{aligned} $$

Let \(T>0\) be an arbitrary positive real number. By Itô’s formula, we get, for any \(0\leq t\leq \tau_{m}\wedge T\) and \(m\geq m_{0}\),

$$\begin{aligned} dV\bigl(x(t)\bigr)&=LV\bigl(x(t)\bigr)\,dt+2\bigl(x_{1}(t)+x_{2}(t)+x_{3}(t)+x_{4}(t) \bigr)\sigma_{1}x_{1}(t)\,dB_{1}(t) \\ &\quad+ \biggl(2\bigl(x_{1}(t)+x_{2}(t)+x_{3}(t)+x_{4}(t) \bigr)\sigma_{2}x_{2}(t) -\frac{\sigma_{2}}{x_{2}(t)} \biggr) \,dB_{2}(t) \\ &\quad+ \biggl(2\bigl(x_{1}(t)+x_{2}(t)+x_{3}(t)+x_{4}(t) \bigr)\sigma_{3}x_{3}(t)-\frac{\sigma_{3}}{x_{3}(t)} \biggr) \,dB_{3}(t) \\ &\quad+ \biggl(2\bigl(x_{1}(t)+x_{2}(t)+x_{3}(t)+x_{4}(t) \bigr)\sigma_{4}x_{4}(t)-\frac{\sigma_{4}}{x_{4}(t)} \biggr) \,dB_{4}(t), \end{aligned}$$
(5)

where LV: \(\mathbb{R}_{+}^{4}\rightarrow\mathbb{R}\) is

$$ \begin{aligned} LV(x)&=2(x_{1}+x_{2}+x_{3}+x_{4}) (\Lambda-\mu_{1}x_{1}-\mu_{2}x_{2}- \mu_{3}x_{3}-\mu_{4} x_{4}) \\ &\quad+\sigma_{1}^{2}x_{1}^{2}+ \sigma_{2}^{2}x_{2}^{2}+ \sigma_{3}^{2}x_{3}^{2}+ \sigma_{4}^{2}x_{4}^{2}-\beta \frac{x_{1}}{x_{2}}+\frac{\mu_{2}+\gamma+\sigma_{2}^{2}}{x_{2}}-\alpha\frac{x_{3}}{x_{2}^{2}} \\ &\quad+\frac{\mu_{3}+\alpha+\sigma_{3}^{2}}{x_{3}}-\gamma(1-\sigma)\frac {x_{2}}{x_{3}}+\frac{\mu_{4}+\sigma_{4}^{2}}{x_{4}}- \sigma\gamma\frac{x_{2}}{x_{4}}. \end{aligned} $$

By

$$ a^{2}+b^{2}\geq2ab,\quad a, b\in \mathbb{R}, $$

it follows that

$$\begin{aligned} LV(x)\leq{}&2\Lambda(x_{1}+x_{2}+x_{3}+x_{4})+ \sigma_{1}^{2}x_{1}^{2}+ \sigma_{2}^{2}x_{2}^{2}+ \sigma_{3}^{2}x_{3}^{2}+ \sigma_{4}^{2}x_{4}^{2} \\ &{}+\frac{\mu_{2}+\gamma+\sigma_{2}^{2}}{x_{2}}+\frac{\mu_{3}+\alpha+\sigma _{3}^{2}}{x_{3}}+\frac{\mu_{4}+\sigma_{4}^{2}}{x_{4}} \\ \leq{}&\Lambda^{2}+(x_{1}+x_{2}+x_{3}+x_{4})^{2}+ \sigma_{1}^{2}x_{1}^{2}+ \sigma_{2}^{2}x_{2}^{2}+ \sigma_{3}^{2}x_{3}^{2}+ \sigma_{4}^{2}x_{4}^{2} \\ &{}+\frac{\mu_{2}+\gamma+\sigma_{2}^{2}}{x_{2}}+\frac{\mu_{3}+\alpha+\sigma _{3}^{2}}{x_{3}}+\frac{\mu_{4}+\sigma_{4}^{2}}{x_{4}} \\ \leq{}&\Lambda^{2}+C_{1}(x_{1}+x_{2}+x_{3}+x_{4})^{2} \\ &{}+\frac{\mu_{2}+\gamma+\sigma_{2}^{2}}{x_{2}}+\frac{\mu_{3}+\alpha+\sigma _{3}^{2}}{x_{3}}+\frac{\mu_{4}+\sigma_{4}^{2}}{x_{4}} \\ \leq{}&\Lambda^{2}+C_{2}V(x) \\ \leq{}& C\bigl(1+V(x)\bigr), \end{aligned}$$

where \(C_{1}=\max\{\sigma_{1}^{2}+1,\sigma_{2}^{2}+1,\sigma_{3}^{2}+1,\sigma_{4}^{2}+1\}\), \(C_{2}=\max\{C_{1}, \mu_{2}+\gamma+\sigma_{2}^{2}, \mu_{3}+\alpha+\sigma_{3}^{2}, \mu_{4}+\sigma_{4}^{2}\}\), and \(C=\max\{C_{2}, \Lambda^{2}\}\).

Now, for any \(t\in[0,T]\), we can integrate both sides of (5) from 0 to \((\tau_{m}\wedge t)\) and then take the expectations to get

$$ \begin{aligned} EV\bigl(x(t \wedge\tau_{m})\bigr)&=V \bigl(x(0)\bigr)+E \int_{0}^{t \wedge\tau_{m}}LV\bigl(x(s)\bigr)\,ds \\ &\leq V\bigl(x(0)\bigr)+E \int_{0}^{\tau_{m}\wedge t}C\bigl(1+V \bigl(x(s) \bigr)\bigr)\,ds \\ &\leq V\bigl(x(0)\bigr)+CT+E \int_{0}^{\tau_{m}\wedge t}CV \bigl(x(s) \bigr)\,ds \\ &\leq V\bigl(x(0)\bigr)+CT+C \int_{0}^{t}EV \bigl(x(\tau_{m}\wedge s) \bigr)\,ds. \end{aligned} $$

By the Gronwall inequality, we have

$$\begin{aligned} EV\bigl(x(T \wedge\tau_{m})\bigr)\leq\bigl(V \bigl(x(0)\bigr)+CT \bigr)e^{CT}. \end{aligned}$$
(6)

Note that, for every \(\omega\in\{\tau_{m}\leq T\}\), at least one component of \(x(\tau_{m})\) equals either m or \(\frac{1}{m}\), and hence

$$\begin{aligned} V\bigl(x(\tau_{m})\bigr)\geq\biggl(3m^{2}+ \frac{1}{m} \biggr)\wedge\biggl(\frac{3}{m^{2}}+m \biggr). \end{aligned}$$

It then follows from (6) that

$$\begin{aligned} \bigl(V\bigl(x(0)\bigr)+CT \bigr)e^{CT}&\geq E \bigl[I_{\{\tau_{m}\leq T\}}(\omega)V\bigl(x(\tau_{m})\bigr) \bigr] \\ &\geq\biggl( \biggl(3m^{2}+\frac{1}{m} \biggr)\wedge \biggl(\frac{3}{m^{2}}+m \biggr) \biggr)P(\tau_{m}\leq T). \end{aligned}$$
(7)

Letting \(m\rightarrow+\infty\) on both sides of inequality (7), we obtain

$$P(\tau_{\infty}\leq T)=0. $$

Since T is arbitrary, we have \(P(\tau_{\infty}=\infty)=1\). The proof is complete. □

3 Stationary distribution

In this section, we will give some sufficient conditions which guarantee that the SDE (4) has a unique stationary distribution.

To show the existence and uniqueness of stationary distribution of the SDE (4), we follow the main ideas of Mao [15] and Tong et al. [16]. Let us first cite a well-known result from Khasminskii [13] as a lemma.

Lemma 3.1

see Khasminskii [13], pp. 107-109

The SDE (4) has a unique stationary distribution if there is a bounded open subset G of \(\mathbb{R}^{4}_{+}\) with a regular (i.e. smooth) boundary such that its closure \(\bar {G}\subset\mathbb{R}^{4}_{+}\), and

  1. (i)

\(\inf_{x\in G}\lambda_{\min}({\operatorname{diag}}(x_{1},x_{2},x_{3},x_{4})\sigma\sigma^{T}\operatorname{diag}(x_{1},x_{2},x_{3},x_{4}))>0\), where \(\sigma=\operatorname{diag}(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4})\);

  2. (ii)

    \(\sup_{x(0)\in K-G}E(\tau_{G})<\infty\) for every compact subset K of \(\mathbb{R}^{4}_{+}\) such that \(G\subset K\), where \(\tau_{G}=\inf\{ t\geq0:x(t)\in G\}\) and throughout this paper we set \(\inf\emptyset=\infty\).

Theorem 3.1

If \(2\mu_{1}>\sigma_{1}^{2}\), \(2\mu_{2}>\sigma_{2}^{2}\), \(2\mu_{3}>\sigma_{3}^{2}\), and \(2\mu_{4}>\sigma_{4}^{2}\) hold, then the SDE (4) is ergodic.

Proof

Let M be a sufficiently large number and set

$$ G= \biggl\{ x\in \mathbb{R}^{4}_{+}: \frac{1}{M}< x_{i}< M\mbox{ for all } i=1,2,3,4 \biggr\} , $$

whose closure Ḡ satisfies \(\bar{G}\subset\mathbb{R}^{4}_{+}\).

First, we verify condition (i) in Lemma 3.1. For \(x\in\bar{G}\), define

$$A_{x}=\sigma\operatorname{diag}(x_{1},x_{2},x_{3},x_{4})=\operatorname{diag}(\sigma_{1}x_{1},\sigma_{2}x_{2},\sigma_{3}x_{3},\sigma_{4}x_{4}). $$

Clearly, \(\lambda_{\min}(A_{x}^{T}A_{x})\geq0\). If \(\lambda_{\min}(A_{x}^{T}A_{x})=0\), then there is a vector \(\xi=(\xi_{1},\xi_{2},\xi_{3},\xi_{4})^{T}\in\mathbb{R}^{4}\) such that \(|\xi|\neq0\) and \(A_{x}\xi=0\), that is, \(\sigma_{i}x_{i}\xi_{i}=0\) for \(i=1,2,3,4\). Since \(\sigma_{i}>0\) and \(x_{i}>0\) for \(x\in\bar{G}\), we obtain \(\xi=0\), but this contradicts the fact that \(|\xi|\neq0\). Therefore, we must have \(\lambda_{\min}(A_{x}^{T}A_{x})>0\). Noting that \(\lambda_{\min}(A_{x}^{T}A_{x})\) is a continuous function of x on the compact set Ḡ, we have

$$ \inf_{x\in G}\lambda_{\min}\bigl(A_{x}^{T}A_{x} \bigr)\geq\min_{x\in\bar{G}}\lambda_{\min}\bigl(A_{x}^{T}A_{x} \bigr)>0. $$

Therefore, we have verified condition (i) in Lemma 3.1. Next, we will verify condition (ii) in Lemma 3.1.

Consider a function \(V_{1}: \mathbb{R}_{+}^{4}\rightarrow\mathbb{R}_{+}\)

$$ V_{1}(x)=(x_{1}+x_{2}+x_{3}+x_{4})^{2}- \log(x_{1}+x_{2}+x_{3}+x_{4}). $$
(8)

Applying Itô’s formula to (8) we can see that

$$\begin{aligned} &dV_{1}\bigl(x(t)\bigr) \\ &\quad =LV_{1}\bigl(x(t) \bigr)\,dt \\ &\qquad {}+ \biggl(2\bigl(x_{1}(t)+x_{2}(t)+x_{3}(t)+x_{4}(t) \bigr)-\frac{1}{x_{1}(t)+x_{2}(t)+x_{3}(t)+x_{4}(t)} \biggr)\sigma_{1}x_{1}(t) \,dB_{1}(t) \\ &\qquad {}+ \biggl(2\bigl(x_{1}(t)+x_{2}(t)+x_{3}(t)+x_{4}(t) \bigr)-\frac{1}{x_{1}(t)+x_{2}(t)+x_{3}(t)+x_{4}(t)} \biggr)\sigma_{2}x_{2}(t) \,dB_{2}(t) \\ &\qquad {}+ \biggl(2\bigl(x_{1}(t)+x_{2}(t)+x_{3}(t)+x_{4}(t) \bigr)-\frac{1}{x_{1}(t)+x_{2}(t)+x_{3}(t)+x_{4}(t)} \biggr)\sigma_{3}x_{3}(t) \,dB_{3}(t) \\ &\qquad {}+ \biggl(2\bigl(x_{1}(t)+x_{2}(t)+x_{3}(t)+x_{4}(t) \bigr)-\frac{1}{x_{1}(t)+x_{2}(t)+x_{3}(t)+x_{4}(t)} \biggr)\sigma_{4}x_{4}(t) \,dB_{4}(t), \end{aligned}$$
(9)

where \(LV_{1}:\mathbb{R}_{+}^{4}\rightarrow\mathbb{R}\) is the generator of the SDE (4) applied to \(V_{1}\). Expanding \(2(x_{1}+x_{2}+x_{3}+x_{4})(\Lambda-\mu_{1}x_{1}-\mu_{2}x_{2}-\mu_{3}x_{3}-\mu_{4}x_{4})\) and dropping the negative cross terms \(-2\mu_{i}x_{i}x_{j}\), \(i\neq j\), we get

$$\begin{aligned} LV_{1}(x)\leq{}&{-}\bigl(2\mu_{1}-\sigma_{1}^{2}\bigr)x_{1}^{2}+2\Lambda x_{1}-\bigl(2\mu_{2}-\sigma_{2}^{2}\bigr)x_{2}^{2}+2\Lambda x_{2} \\ &{}-\bigl(2\mu_{3}-\sigma_{3}^{2}\bigr)x_{3}^{2}+2\Lambda x_{3}-\bigl(2\mu_{4}-\sigma_{4}^{2}\bigr)x_{4}^{2}+2\Lambda x_{4} \\ &{}-\frac{\Lambda}{x_{1}+x_{2}+x_{3}+x_{4}}+\frac{\mu_{1}x_{1}+\mu_{2}x_{2}+\mu_{3}x_{3}+\mu_{4}x_{4}}{x_{1}+x_{2}+x_{3}+x_{4}} \\ &{}+\frac{1}{2(x_{1}+x_{2}+x_{3}+x_{4})^{2}}\bigl(\sigma_{1}^{2}x_{1}^{2}+\sigma_{2}^{2}x_{2}^{2}+\sigma_{3}^{2}x_{3}^{2}+\sigma_{4}^{2}x_{4}^{2}\bigr). \end{aligned}$$
(10)

By (10), we get

$$ \begin{aligned} LV_{1}(x)\leq{}& {-}\bigl(2\mu_{1}- \sigma_{1}^{2}\bigr)x_{1}^{2}+2\Lambda x_{1}-\bigl(2\mu_{2}-\sigma_{2}^{2} \bigr)x_{2}^{2}+2\Lambda x_{2}-\bigl(2 \mu_{3}-\sigma_{3}^{2}\bigr)x_{3}^{2}+2 \Lambda x_{3} \\ &{}-\bigl(2\mu_{4}-\sigma_{4}^{2} \bigr)x_{4}^{2}+2\Lambda x_{4}-\frac{\Lambda}{x_{1}+x_{2}+x_{3}+x_{4}}+ \hat{\mu}+\hat{\sigma} \\ ={}&{-}\bigl(2\mu_{1}-\sigma_{1}^{2}\bigr) \biggl(x_{1}-\frac{\Lambda}{2\mu_{1}-\sigma_{1}^{2}} \biggr)^{2}-\bigl(2 \mu_{2}-\sigma_{2}^{2}\bigr) \biggl(x_{2}- \frac{\Lambda}{2\mu_{2}-\sigma_{2}^{2}} \biggr)^{2} \\ &{}-\bigl(2\mu_{3}-\sigma_{3}^{2}\bigr) \biggl(x_{3}-\frac{\Lambda}{2\mu_{3}-\sigma_{3}^{2}} \biggr)^{2}-\bigl(2 \mu_{4}-\sigma_{4}^{2}\bigr) \biggl(x_{4}- \frac{\Lambda}{2\mu_{4}-\sigma_{4}^{2}} \biggr)^{2} \\ &{}+\Lambda^{2} \biggl(\frac{1}{2\mu_{1}-\sigma_{1}^{2}}+\frac{1}{2\mu_{2}-\sigma_{2}^{2}}+ \frac{1}{2\mu_{3}-\sigma_{3}^{2}}+\frac{1}{2\mu_{4}-\sigma_{4}^{2}} \biggr) \\ &{}-\frac{\Lambda}{x_{1}+x_{2}+x_{3}+x_{4}}+\hat{\mu}+\hat{\sigma}, \end{aligned} $$

where \(\hat{\mu}=\max\{\mu_{1},\mu_{2},\mu_{3},\mu_{4}\}\) and \(\hat{\sigma}=\max \{\sigma_{1}^{2},\sigma_{2}^{2},\sigma_{3}^{2},\sigma_{4}^{2}\}\).

Under the conditions of \(2\mu_{1}>\sigma_{1}^{2}\), \(2\mu_{2}>\sigma_{2}^{2}\), \(2\mu _{3}>\sigma_{3}^{2}\), and \(2\mu_{4}>\sigma_{4}^{2}\), it is not difficult to see that, for a sufficiently large number M,

$$ LV_{1}(x)\leq-1,\quad x\in\biggl(0,\frac{1}{M} \biggr]\times \biggl(0,\frac{1}{M} \biggr]\times\biggl(0,\frac{1}{M} \biggr]\times\biggl(0,\frac{1}{M}\biggr] $$

and

$$ LV_{1}(x)\leq-1,\quad x\in[M,\infty)\times[M,\infty)\times[M,\infty )\times[M,\infty). $$

Therefore,

$$ LV_{1}(x)\leq-1\quad \mbox{for all }x\in \mathbb{R}_{+}^{4}-G. $$
(11)

Let the initial value \(x(0)\in\mathbb{R}_{+}^{4}-G\) be arbitrary and let \(\tau_{G}\) be the stopping time as defined in Lemma 3.1.

Integrating (9) from 0 to \(t\wedge\tau_{G}\), taking expectations, and using (11) together with \(V_{1}>0\), it follows that

$$ 0\leq EV_{1}\bigl(x(t\wedge\tau_{G})\bigr)\leq V_{1}\bigl(x(0)\bigr)-E(t\wedge\tau_{G}),\quad \forall t\geq0. $$

Letting \(t\rightarrow\infty\) we obtain

$$ E(\tau_{G})\leq V_{1}\bigl(x(0)\bigr),\quad\forall x(0) \in\mathbb{R}_{+}^{4}-G. $$

This immediately implies condition (ii) in Lemma 3.1. The assertion hence follows from Lemma 3.1. The proof is complete. □

Next, we give an example to illustrate Theorem 3.1.

Example 3.1

We choose \(\Lambda=200\), \(\mu_{1}=0.1\), \(\mu_{2}=0.2\), \(\mu_{3}=0.15\), \(\mu_{4}=0.1\), \(\beta=0.01\), \(\alpha=0.15\), \(\gamma=0.3\), \(\sigma=0.4\), \(\sigma_{1}=0.2\), \(\sigma_{2}=0.2\), \(\sigma_{3}=0.1\), \(\sigma_{4}=0.1\), \(x_{1}(0)=250\), \(x_{2}(0)=700\), \(x_{3}(0)=450\), and \(x_{4}(0)=400\) for the SDE (4).

We compute \(2\mu_{1}-\sigma_{1}^{2}=0.16\), \(2\mu_{2}-\sigma_{2}^{2}=0.36\), \(2\mu_{3}-\sigma_{3}^{2}=0.29\), and \(2\mu_{4}-\sigma_{4}^{2}=0.19\). It then follows from Theorem 3.1 that the SDE (4) has a unique stationary distribution. We can apply the Euler-Maruyama (EM for short) method (see Mao [15]) to produce an approximation to the stationary distribution. For comparison, we perform a computer simulation of 1,000,000 iterations of a single path of \((x_{1}(t),x_{2}(t),x_{3}(t),x_{4}(t))\) with initial value \(x_{1}(0)=250\), \(x_{2}(0)=700\), \(x_{3}(0)=450\), and \(x_{4}(0)=400\) for the SDE (4) and its corresponding deterministic model (1), using the EM method with step size \(\Delta=0.001\), which is shown in Figure 1.

Figure 1: Computer simulation of the path \((x_{1}(t),x_{2}(t),x_{3}(t),x_{4}(t))\) with initial value \(x_{1}(0)=250\), \(x_{2}(0)=700\), \(x_{3}(0)=450\), \(x_{4}(0)=400\) for the SDE (4) and its corresponding deterministic model (1), using the EM method with step size \(\Delta=0.001\).
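For readers who wish to reproduce this experiment, the following Python sketch (our illustration, not the authors' code; it assumes numpy is available and all variable names are our own) implements the EM discretization of the SDE (4) with the parameters of Example 3.1. Setting sig to zero recovers the deterministic model (1) used for comparison in Figure 1.

```python
import numpy as np

rng = np.random.default_rng(0)              # fixed seed, an arbitrary choice

# Parameters from Example 3.1
Lam, beta, alpha, gamma, sigma = 200.0, 0.01, 0.15, 0.3, 0.4
mu = np.array([0.1, 0.2, 0.15, 0.1])        # mu_1, ..., mu_4
sig = np.array([0.2, 0.2, 0.1, 0.1])        # noise intensities sigma_1, ..., sigma_4

dt, n = 0.001, 1_000_000
x = np.empty((n + 1, 4))
x[0] = [250.0, 700.0, 450.0, 400.0]         # (x_1(0), x_2(0), x_3(0), x_4(0))

def drift(v):
    """Drift vector of the SDE (4)."""
    x1, x2, x3, x4 = v
    return np.array([
        Lam - mu[0] * x1 - beta * x1 * x2,
        -(mu[1] + gamma) * x2 + beta * x1 * x2 + alpha * x3,
        -(mu[2] + alpha) * x3 + gamma * (1 - sigma) * x2,
        -mu[3] * x4 + gamma * sigma * x2,
    ])

# Euler-Maruyama iteration with independent Brownian increments
for k in range(n):
    dB = np.sqrt(dt) * rng.standard_normal(4)
    x[k + 1] = x[k] + drift(x[k]) * dt + sig * x[k] * dB
```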

Moreover, sorting the 1,000,000 iterates of \(x_{1}(t)\) from smallest to largest, the 25,000th and the 975,000th values in the sorted data are 28.7396 and 64.0840, respectively. These give the approximate asymptotic 95% confidence interval \((28.7396, 64.0840)\) for \(x_{1}(t)\), that is,

$$\begin{aligned} P\bigl(28.7396< x_{1}(t)< 64.0840\bigr)\approx95\%, \end{aligned}$$

for all sufficiently large numbers t.

Similarly, one can obtain

$$ \begin{aligned} &P\bigl(285.0995< x_{2}(t)< 698.2622\bigr)\approx95 \%, \\ &P\bigl(158.2328< x_{3}(t)< 423.3447\bigr)\approx95 \%, \\ &P\bigl(276.7638< x_{4}(t)< 873.3084\bigr)\approx95 \%, \end{aligned} $$

for all sufficiently large numbers t.
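Continuing the sketch above, these empirical intervals are obtained by sorting the iterates and reading off the order statistics named in the text (again our illustration, not the authors' code):

```python
# For each component, the 25,000th and 975,000th of the 1,000,000 sorted
# iterates bracket approximately 95% of the sampled values.
for i in range(4):
    samples = np.sort(x[1:, i])
    print(f"x{i+1}: ({samples[25_000 - 1]:.4f}, {samples[975_000 - 1]:.4f})")
```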

The histograms of the paths of \(x_{i}(t)\), \(i=1,\ldots,4\), are shown in Figure 2.

Figure 2: The histograms of the paths of \(x_{i}(t)\), \(i=1,\ldots,4\), with initial value \(x_{1}(0)=250\), \(x_{2}(0)=700\), \(x_{3}(0)=450\), \(x_{4}(0)=400\).

4 Parameter estimation

In this section, we estimate the parameters in the SDE (4). When the drift coefficients of the SDE (4) are estimated by LSE directly, the normal equations are nonlinear, so explicit expressions for the least squares estimators cannot be obtained. Therefore, we develop a useful method to estimate the drift coefficients of the SDE (4). Moreover, we use the quadratic variation of the logarithm of the sample paths to estimate the diffusion coefficients of the SDE (4).

Let \(\nu(\cdot)\) be the stationary distribution of the SDE (4) with solution \(x(t)\) and initial value \(x(0)\in\mathbb{R}^{4}_{+}\). Before stating our main results, we first cite the ergodic theorem for the stationary distribution from Khasminskii [13] as a lemma.

Lemma 4.1

see Khasminskii [13] p. 110

If \(f:\mathbb{R}^{4}_{+}\rightarrow\mathbb{R}\) is integrable with respect to the measure \(\nu(\cdot)\), then

$$ \lim_{t\rightarrow\infty}\frac{1}{t} \int_{0}^{t}f\bigl(x(s)\bigr)\,ds= \int_{\mathbb{R}^{4}_{+}}f(y)\nu(dy)\quad\textit{a.s.} $$

for every initial value \(x(0)\in\mathbb{R}^{4}_{+}\).

Besides, we also recall the definition and properties of a quadratic covariation.

Definition 4.1

see Klebaner [17] p. 218

If \(X(t)\) and \(Y(t)\) are semimartingales on the common space, then the quadratic covariation process, also known as the square bracket process and denoted \([X,Y](t)\), is defined, as usual, by

$$\begin{aligned}{} [X,Y](t)=\lim\sum_{\kappa=1}^{m}\bigl(X \bigl(t_{\kappa}^{m}\bigr)-X\bigl(t_{\kappa-1}^{m} \bigr)\bigr) \bigl(Y\bigl(t_{\kappa}^{m}\bigr)-Y \bigl(t_{\kappa-1}^{m}\bigr)\bigr), \end{aligned}$$

where the limit is taken in probability over shrinking partitions \(\{t_{\kappa}^{m}\}_{\kappa=0}^{m}\) of the interval \([0,t]\) with \(\Delta t=\max_{\kappa}(t^{m}_{\kappa}-t^{m}_{\kappa-1})\rightarrow0\) as \(m\rightarrow\infty\).

Lemma 4.2

see Klebaner [17] p. 219

If X and Y are semimartingales, \(H_{1}\) and \(H_{2}\) are predictable processes, then the quadratic covariation of stochastic integrals \(\int _{0}^{t}H_{1}(s)\,dX(s)\) and \(\int_{0}^{t}H_{2}(s)\,dY(s)\) has the following property:

$$ \biggl[ \int_{0}^{\cdot}H_{1}(s)\,dX(s), \int_{0}^{\cdot}H_{2}(s)\,dY(s) \biggr](t)= \int_{0}^{t}H_{1}(s)H_{2}(s)\,d[X,Y](s). $$

Now, let us first give a theorem.

Theorem 4.1

If \(2\mu_{1}>\sigma_{1}^{2}\), \(2\mu_{2}>\sigma_{2}^{2}\), \(2\mu_{3}>\sigma_{3}^{2}\), and \(2\mu_{4}>\sigma_{4}^{2}\) hold, then there is a positive constant \(\bar{C}\), which is independent of t, such that the solution \(x(t)\) of the SDE (4) has the property that

$$\begin{aligned} \limsup_{t\rightarrow\infty}E\bigl\vert x(t)\bigr\vert ^{2}\leq \bar{C}. \end{aligned}$$

Proof

By Theorem 2.1, the unique solution \(x(t)\) of the SDE (4) will remain in \(\mathbb{R}^{4}_{+}\). Let

$$ \eta=\frac{1}{4}\min\bigl\{ 2\mu_{1}-\sigma_{1}^{2}, 2\mu_{2}-\sigma_{2}^{2}, 2\mu_{3}-\sigma_{3}^{2}, 2\mu_{4}-\sigma_{4}^{2} \bigr\} . $$

Let \(V_{2}: \mathbb{R}_{+}^{4}\rightarrow\mathbb{R}_{+}\):

$$ V_{2}(x)=e^{\eta t}(x_{1}+x_{2}+x_{3}+x_{4})^{2}. $$
(12)

Applying Itô’s formula to (12) we can find that

$$\begin{aligned} dV_{2}\bigl(x(t)\bigr)&=LV_{2}\bigl(x(t)\bigr) \,dt+2e^{\eta t}\bigl(x_{1}(t)+x_{2}(t)+x_{3}(t)+x_{4}(t) \bigr)\sigma_{1}x_{1}(t)\,dB_{1}(t) \\ &\quad+2e^{\eta t}\bigl(x_{1}(t)+x_{2}(t)+x_{3}(t)+x_{4}(t) \bigr)\sigma_{2}x_{2}(t)\,dB_{2}(t) \\ &\quad+2e^{\eta t}\bigl(x_{1}(t)+x_{2}(t)+x_{3}(t)+x_{4}(t) \bigr)\sigma_{3}x_{3}(t)\,dB_{3}(t) \\ &\quad+2e^{\eta t}\bigl(x_{1}(t)+x_{2}(t)+x_{3}(t)+x_{4}(t) \bigr)\sigma_{4}x_{4}(t)\,dB_{4}(t), \end{aligned}$$
(13)

where, dropping the negative cross terms as before, \(LV_{2}\) satisfies

$$ \begin{aligned} LV_{2}(x)\leq{}&\eta e^{\eta t}(x_{1}+x_{2}+x_{3}+x_{4})^{2}-e^{\eta t}\bigl(2\mu_{1}-\sigma_{1}^{2}\bigr)x_{1}^{2}+2e^{\eta t}\Lambda x_{1} \\ &{}-e^{\eta t}\bigl(2\mu_{2}-\sigma_{2}^{2}\bigr)x_{2}^{2}+2e^{\eta t}\Lambda x_{2}-e^{\eta t}\bigl(2\mu_{3}-\sigma_{3}^{2}\bigr)x_{3}^{2}+2e^{\eta t}\Lambda x_{3} \\ &{}-e^{\eta t}\bigl(2\mu_{4}-\sigma_{4}^{2}\bigr)x_{4}^{2}+2e^{\eta t}\Lambda x_{4}. \end{aligned} $$

Using the inequality

$$ (a+b+c+d)^{2}\leq2\bigl(a^{2}+b^{2}+c^{2}+d^{2} \bigr),\quad a,b,c,d\in \mathbb{R}, $$

we obtain

$$ \begin{aligned} LV_{2}(x)\leq{}&2\eta e^{\eta t} \bigl(x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2} \bigr)-e^{\eta t}\bigl(2\mu_{1}-\sigma_{1}^{2} \bigr)x_{1}^{2}+2e^{\eta t}\Lambda x_{1} \\ &{}-e^{\eta t}\bigl(2\mu_{2}-\sigma_{2}^{2} \bigr)x_{2}^{2}+2e^{\eta t}\Lambda x_{2}-e^{\eta t} \bigl(2\mu_{3}-\sigma_{3}^{2}\bigr)x_{3}^{2}+2e^{\eta t} \Lambda x_{3} \\ &{}-e^{\eta t}\bigl(2\mu_{4}-\sigma_{4}^{2} \bigr)x_{4}^{2}+2e^{\eta t}\Lambda x_{4} \\ \leq{}& e^{\eta t}U_{1}(x), \end{aligned} $$

where

$$ \begin{aligned} U_{1}(x)={}& \biggl\{ -\frac{1}{2}\bigl(2 \mu_{1}-\sigma_{1}^{2}\bigr) \biggl(x_{1}- \frac{2\Lambda}{2\mu_{1}-\sigma_{1}^{2}} \biggr)^{2}-\frac{1}{2}\bigl(2 \mu_{2}-\sigma_{2}^{2}\bigr) \biggl(x_{2}- \frac{2\Lambda}{2\mu_{2}-\sigma_{2}^{2}} \biggr)^{2} \\ &{}-\frac{1}{2}\bigl(2\mu_{3}-\sigma_{3}^{2} \bigr) \biggl(x_{3}-\frac{2\Lambda}{2\mu_{3}-\sigma_{3}^{2}} \biggr)^{2}- \frac{1}{2}\bigl(2\mu_{4}-\sigma_{4}^{2} \bigr) \biggl(x_{4}-\frac{2\Lambda}{2\mu_{4}-\sigma_{4}^{2}} \biggr)^{2} \\ &{}+2\Lambda^{2} \biggl(\frac{1}{2\mu_{1}-\sigma_{1}^{2}}+ \frac{1}{2\mu_{2}-\sigma_{2}^{2}}+\frac{1}{2\mu_{3}-\sigma_{3}^{2}}+\frac{1}{2\mu _{4}-\sigma_{4}^{2}} \biggr) \biggr\} . \end{aligned} $$

Note that the function \(U_{1}(x)\) is uniformly bounded, namely,

$$\begin{aligned} \tilde{C}:=\sup_{x\in \mathbb{R}^{4}_{+}}U_{1}(x)< \infty. \end{aligned}$$

We therefore have

$$ LV_{2}(x)\leq e^{\eta t}\tilde{C}. $$

Integrating both sides of (13) from 0 to t and then taking expectations, we derive that

$$ e^{\eta t}E\bigl(x_{1}(t)+x_{2}(t)+x_{3}(t)+x_{4}(t) \bigr)^{2}\leq\bigl(x_{1}(0)+x_{2}(0)+x_{3}(0)+x_{4}(0) \bigr)^{2}+\frac{\tilde{C}}{\eta}\bigl(e^{\eta t}-1\bigr). $$

This implies immediately that

$$ \limsup_{t\rightarrow\infty}E\bigl(x_{1}(t)+x_{2}(t)+x_{3}(t)+x_{4}(t) \bigr)^{2}\leq\bar{C}:=\frac{\tilde{C}}{\eta}. $$

By

$$ a^{2}+b^{2}+c^{2}+d^{2}\leq (a+b+c+d)^{2},\quad a,b,c,d>0, $$

we obtain

$$ \limsup_{t\rightarrow\infty}E\bigl\vert x(t)\bigr\vert ^{2}\leq \bar{C}. $$

The proof is complete. □

Theorem 4.2

If \(2\mu_{1}>\sigma_{1}^{2}\), \(2\mu_{2}>\sigma_{2}^{2}\), \(2\mu _{3}>\sigma_{3}^{2}\), and \(2\mu_{4}>\sigma_{4}^{2}\) hold, then

$$\begin{aligned} \begin{aligned} &\Lambda-\mu_{1}\bar{\nu}_{1}+\alpha\bar{\nu}_{3}=(\mu_{2}+\gamma)\bar{\nu}_{2}, \\ & \gamma(1-\sigma)\bar{\nu}_{2}-(\mu_{3}+\alpha)\bar{\nu}_{3}=0, \\ & \sigma\gamma\bar{\nu}_{2}=\mu_{4}\bar{\nu}_{4}, \end{aligned} \end{aligned}$$
(14)

where

$$\begin{aligned} (\bar{\nu}_{1},\bar{\nu}_{2},\bar{\nu}_{3},\bar{\nu}_{4})^{T}=\lim_{t\rightarrow\infty}\frac{1}{t} \int_{0}^{t}x(s)\,ds= \int_{\mathbb{R}^{4}_{+}}y\nu(dy)\quad \textit{a.s.} \end{aligned}$$

Proof

For any initial value \(x(0)\in\mathbb{R}_{+}^{4}\), it follows directly from the SDE (4) that

$$\begin{aligned} \begin{aligned} &x_{1}(t)=x_{1}(0)+ \int_{0}^{t}\bigl(\Lambda-\mu_{1}x_{1}(s)- \beta x_{1}(s)x_{2}(s)\bigr)\,ds+Y_{1}(t), \\ &x_{2}(t)=x_{2}(0)+ \int_{0}^{t}\bigl(\beta x_{1}(s)x_{2}(s)-( \mu_{2}+\gamma)x_{2}(s)+\alpha x_{3}(s)\bigr) \,ds+Y_{2}(t), \\ &x_{3}(t)=x_{3}(0)+ \int_{0}^{t}\bigl(-(\mu_{3}+ \alpha)x_{3}(s)+\gamma(1-\sigma)x_{2}(s)\bigr) \,ds+Y_{3}(t), \\ &x_{4}(t)=x_{4}(0)+ \int_{0}^{t}\bigl(-\mu_{4}x_{4}(s)+ \sigma\gamma x_{2}(s)\bigr)\,ds+Y_{4}(t), \end{aligned} \end{aligned}$$
(15)

for \(t>0\), where \(Y_{i}(t)=\sigma_{i}\int_{0}^{t}x_{i}(s)\,dB_{i}(s)\), \(i=1,\ldots,4\).

The quadratic variations of \(Y_{i}(t)\), \(i=1,\ldots,4\), are given by

$$ [Y_{i},Y_{i}](t)=\sigma_{i}^{2} \int_{0}^{t}x_{i}^{2}(s)\,ds, \quad i=1,\ldots,4. $$

According to Theorem 3.1, Lemma 4.1, and Theorem 4.1, it is easy to see that

$$ \begin{aligned} & \int_{\mathbb{R}^{4}_{+}}\vert y\vert \nu(dy)< \infty,\qquad \int_{\mathbb{R}^{4}_{+}}\vert y\vert ^{2}\nu(dy)< \infty, \\ &\lim_{t\rightarrow\infty}\frac{1}{t} \int_{0}^{t}x_{i}(s)x_{j}(s) \,ds= \int_{\mathbb{R}^{4}_{+}}y_{i}y_{j}\nu(dy)\quad \text{a.s.}, \end{aligned} $$

for every \(x(0)\in\mathbb{R}^{4}_{+}\), and \(i,j=1,2,3,4\).

It then follows that, for \(i=1,\ldots,4\),

$$ \limsup_{t\rightarrow\infty}\frac{[Y_{i},Y_{i}](t)}{t}< \infty\quad \mbox{a.s.} $$

Therefore, by the strong law of large numbers of martingales, for all \(i=1,2,3,4\),

$$ \lim_{t\rightarrow\infty}\frac{Y_{i}(t)}{t}=0,\quad \mbox{a.s.} $$

Now, dividing both sides of (15) by t and letting \(t\rightarrow\infty\) gives

$$ \begin{aligned} &\lim_{t\rightarrow\infty}\frac{x_{1}(t)}{t}=\Lambda- \mu_{1} \bar{\nu}_{1}-\beta m_{2}, \\ &\lim_{t\rightarrow\infty}\frac{x_{2}(t)}{t}=\beta m_{2}-( \mu_{2}+\gamma)\bar{\nu}_{2}+\alpha\bar{\nu}_{3}, \\ &\lim_{t\rightarrow\infty}\frac{x_{3}(t)}{t}=-(\mu_{3}+\alpha) \bar{\nu_{3}}+\gamma(1-\sigma)\bar{\nu_{2}}, \\ &\lim_{t\rightarrow\infty}\frac{x_{4}(t)}{t}=-\mu_{4}\bar{ \nu_{4}}+\gamma\sigma\bar{\nu_{2}}, \end{aligned} $$

where

$$ m_{2}=\lim_{t\rightarrow\infty}\frac{1}{t} \int_{0}^{t}x_{1}(s)x_{2}(s) \,ds. $$

We now show that

$$ \lim_{t\rightarrow\infty}\frac{x_{i}(t)}{t}=0,\quad i=1,\ldots,4, \qquad \mbox{a.s.} $$

Since \(x_{i}(t)>0\), each of these limits is non-negative. If one of them were positive, then \(x_{i}(t)\) would grow at least linearly in t, which contradicts Theorem 4.1. Setting the right-hand sides of the above limits to zero, and adding the first two equations to eliminate \(m_{2}\), yields the required assertion (14). The proof is complete. □
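As a quick numerical sanity check (a sketch of ours, not part of the paper), one can approximate each \(\bar{\nu}_{i}\) by the time average of the path simulated in Example 3.1 and compare the two sides of each identity in (14):

```python
# Time averages of the simulated path approximate nu_bar_1, ..., nu_bar_4.
nu = x.mean(axis=0)

# The three identities of (14); each printed pair should nearly agree for large t.
print(Lam - mu[0] * nu[0] + alpha * nu[2], (mu[1] + gamma) * nu[1])
print(gamma * (1 - sigma) * nu[1], (mu[2] + alpha) * nu[2])
print(sigma * gamma * nu[1], mu[3] * nu[3])
```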

Next, we will obtain the estimators Λ̂, \(\hat{\mu}_{1}\), \(\hat{\mu}_{3}\), \(\hat{\mu_{4}}\), β̂, and α̂ by applying the technique used in [12]. Let \((x_{1,0},x_{2,0},x_{3,0},x_{4,0})^{T}\), \((x_{1,1},x_{2,1},x_{3,1},x_{4,1})^{T}\), … , \((x_{1,n},x_{2,n},x_{3,n},x_{4,n})^{T}\) be observations from the SDE (4). Given a step size Δt and setting \(x_{i,0}=x_{i}(0)\), \(i=1,2,3,4\), the EM scheme produces the following discretization over small intervals \([\kappa\Delta t,(\kappa+1)\Delta t]\):

$$\begin{aligned} \begin{aligned} &x_{1,\kappa}-x_{1,\kappa-1}=(\Lambda-\mu_{1}x_{1,\kappa-1}- \beta x_{1,\kappa-1}x_{2,\kappa-1})\Delta t+\sigma_{1}x_{1,\kappa-1} \varepsilon_{1,\kappa-1}\sqrt{\Delta t}, \\ &x_{2,\kappa}-x_{2,\kappa-1}=\bigl(\beta x_{1,\kappa-1}x_{2,\kappa-1}-( \mu_{2}+\gamma)x_{2,\kappa-1}+\alpha x_{3,\kappa-1}\bigr)\Delta t+ \sigma_{2}x_{2,\kappa-1} \varepsilon_{2,\kappa-1}\sqrt{\Delta t}, \\ &x_{3,\kappa}-x_{3,\kappa-1}=\bigl(-(\mu_{3}+ \alpha)x_{3,\kappa-1}+\gamma(1-\sigma)x_{2,\kappa-1}\bigr)\Delta t+ \sigma_{3}x_{3,\kappa-1} \varepsilon_{3,\kappa-1}\sqrt{\Delta t}, \\ &x_{4,\kappa}-x_{4,\kappa-1}=(-\mu_{4}x_{4,\kappa-1}+\gamma \sigma x_{2,\kappa-1})\Delta t+\sigma_{4}x_{4,\kappa-1} \varepsilon_{4,\kappa-1}\sqrt{\Delta t}, \end{aligned} \end{aligned}$$
(16)

where \(\varepsilon_{i,\kappa}\) (\(i=1,2,3,4\)) is an i.i.d. \(N(0,1)\) sequence and \(\varepsilon_{\kappa}:=(\varepsilon_{1,\kappa}, \varepsilon _{2,\kappa}, \varepsilon_{3,\kappa},\varepsilon_{4,\kappa})^{T}\) is independent of \(\{(x_{1,p},x_{2,p},x_{3,p},x_{4,p})^{T},p<\kappa\}\) for each κ. Besides, when \(i\neq j\), \(\varepsilon_{i,\kappa}\) is independent of \(\varepsilon_{j,\kappa}\) for \(i,j=1,2,3,4\).

In order to apply least squares estimation, (16) is rewritten in the following form:

$$\begin{aligned} \begin{aligned} &\frac{x_{1,\kappa}-x_{1,\kappa-1}}{x_{1,\kappa-1}\sqrt {\Delta t}}=\frac{\Lambda}{x_{1,\kappa-1}}\sqrt{\Delta t}- \mu_{1}\sqrt{\Delta t} -\beta x_{2,\kappa-1}\sqrt{\Delta t}+ \sigma_{1}\varepsilon_{1,\kappa-1}, \\ &\frac{x_{2,\kappa}-x_{2,\kappa-1}}{x_{2,\kappa-1}\sqrt{\Delta t}}=\beta x_{1,\kappa-1}\sqrt{\Delta t}-(\mu_{2}+ \gamma)\sqrt{\Delta t} +\alpha\frac{x_{3,\kappa-1}}{x_{2,\kappa-1}}\sqrt {\Delta t} + \sigma_{2}\varepsilon_{2,\kappa-1}, \\ &\frac{x_{3,\kappa}-x_{3,\kappa-1}}{x_{3,\kappa-1}\sqrt{\Delta t}}=-(\mu _{3}+\alpha)\sqrt{\Delta t}+\gamma(1-\sigma) \frac{x_{2,\kappa-1}}{x_{3,\kappa-1}}\sqrt{\Delta t}+\sigma _{3}\varepsilon_{3,\kappa-1}, \\ &\frac{x_{4,\kappa}-x_{4,\kappa-1}}{x_{4,\kappa-1}\sqrt{\Delta t}}=-\mu _{4}\sqrt{\Delta t}+\gamma\sigma \frac{x_{2,\kappa-1}}{x_{4,\kappa-1}}\sqrt{\Delta t}+\sigma _{4}\varepsilon_{4,\kappa-1}. \end{aligned} \end{aligned}$$
(17)

From Theorem 4.2 we get

$$ \mu_{2}+\gamma=\frac{\Lambda}{\bar{\nu}_{2}}-\mu_{1}\frac{\bar{\nu}_{1}}{\bar{\nu}_{2}}+\alpha\frac{\bar{\nu}_{3}}{\bar{\nu}_{2}},\qquad \gamma(1-\sigma)=\frac{\bar{\nu}_{3}}{\bar{\nu}_{2}}(\mu_{3}+\alpha),\qquad \sigma\gamma=\frac{\bar{\nu}_{4}}{\bar{\nu}_{2}}\mu_{4}. $$
(18)

We consider a time interval of total length \(T_{1}\) divided into n subintervals, each of length Δt, so that \(n\Delta t=T_{1}\). Hence, as \(n\rightarrow\infty\) and \(\Delta t\rightarrow0\) with \(n\Delta t = T_{1}\), one has

$$ \frac{1}{n\Delta t}\sum_{\kappa=1}^{n}x_{i,\kappa} \Delta t\rightarrow\frac{1}{T_{1}} \int_{0}^{T_{1}}x_{i}(t)\,dt,\quad i=1,2,3,4. $$

Hence, by Lemma 4.1, for sufficiently large \(T_{1}\),

$$ \bar{\nu}_{i}\approx\frac{1}{n}\sum _{\kappa=1}^{n}x_{i,\kappa},\quad i=1,2,3,4. $$
(19)

Therefore, the objective function is given by

$$ \begin{aligned} &F(\Lambda,\mu_{1},\mu_{3}, \mu_{4},\beta,\alpha) \\ &\quad =\sum_{\kappa=1}^{n} \biggl[ \biggl( \frac{x_{1,\kappa}-x_{1,\kappa-1}}{x_{1,\kappa-1}\sqrt{\Delta t}}-\frac {\Lambda}{x_{1,\kappa-1}}\sqrt{\Delta t}+\mu_{1}\sqrt{ \Delta t} +\beta x_{2,\kappa-1}\sqrt{\Delta t} \biggr)^{2} \\ &\qquad{}+ \biggl(\frac{x_{2,\kappa}-x_{2,\kappa-1}}{x_{2,\kappa-1}\sqrt {\Delta t}}+\frac{\Lambda}{\bar{v}_{2}}\sqrt{\Delta t}- \mu_{1}\frac{\bar{v}_{1}}{\bar{v}_{2}}\sqrt{\Delta t}-\beta x_{1,\kappa -1}\sqrt{ \Delta t}+\alpha\biggl(\frac{\bar{\nu}_{3}}{\bar{\nu}_{2}}-\frac {x_{3,\kappa-1}}{x_{2,\kappa-1}} \biggr)\sqrt{\Delta t} \biggr)^{2} \\ &\qquad{}+ \biggl(\frac{x_{3,\kappa}-x_{3,\kappa-1}}{x_{3,\kappa-1}\sqrt {\Delta t}}+(\mu_{3}+\alpha)\sqrt{\Delta t} \biggl(1-\frac{\bar{\nu}_{3}}{\bar{\nu}_{2}}\frac{x_{2,\kappa -1}}{x_{3,\kappa-1}} \biggr) \biggr)^{2} \\ &\qquad{}+ \biggl(\frac{x_{4,\kappa}-x_{4,\kappa-1}}{x_{4,\kappa-1}\sqrt {\Delta t}}+\mu_{4}\sqrt{\Delta t} \biggl(1-\frac{\bar{\nu}_{4}x_{2,\kappa-1}}{\bar{\nu}_{2}x_{4,\kappa-1}} \biggr) \biggr)^{2} \biggr]. \end{aligned} $$

We have the following normal equations

$$\begin{aligned} \frac{\partial F}{\partial\Lambda}=0, \qquad \frac{\partial F}{\partial\mu_{1}}=0, \qquad \frac{\partial F}{\partial\beta}=0, \qquad \frac{\partial F}{\partial\alpha}=0, \qquad \frac{\partial F}{\partial\mu_{3}}=0, \qquad \frac{\partial F}{\partial\mu_{4}}=0. \end{aligned}$$

In other words,

$$ \begin{aligned} &a_{11}\Lambda+a_{12} \mu_{1}+a_{13}\beta+a_{14}\alpha+a_{15} \mu_{3}=b_{1}, \\ &a_{21}\Lambda+a_{22}\mu_{1}+a_{23} \beta+a_{24}\alpha+a_{25}\mu_{3}=b_{2}, \\ &a_{31}\Lambda+a_{32}\mu_{1}+a_{33} \beta+a_{34}\alpha+a_{35}\mu_{3}=b_{3}, \\ &a_{41}\Lambda+a_{42}\mu_{1}+a_{43} \beta+a_{44}\alpha+a_{45}\mu_{3}=b_{4}, \\ &a_{51}\Lambda+a_{52}\mu_{1}+a_{53} \beta+a_{54}\alpha+a_{55}\mu_{3}=b_{5}, \end{aligned} $$

and

$$ \mu_{4}\sum_{\kappa=1}^{n} \biggl(1- \frac{\bar{\nu}_{4}x_{2,\kappa-1}}{\bar{\nu}_{2}x_{4,\kappa-1}} \biggr)^{2}=\frac{1}{\Delta t}\sum _{\kappa=1}^{n} \biggl[\frac{x_{4,\kappa}-x_{4,\kappa-1}}{x_{4,\kappa-1}} \biggl( \frac{\bar{\nu}_{4}x_{2,\kappa-1}}{ \bar{\nu}_{2}x_{4,\kappa-1}}-1 \biggr) \biggr], $$

where

$$\begin{aligned} &a_{15}=a_{25}=a_{35}=a_{51}=a_{52}=a_{53}=0, \qquad a_{11}=\sum_{\kappa=1}^{n} \biggl[ \frac{1}{x_{1,\kappa-1}^{2}}+\frac{1}{\bar{v}_{2}^{2}} \biggr], \\ &a_{12}=a_{21}=-\sum_{\kappa=1}^{n} \biggl[\frac{1}{x_{1,\kappa-1}}+\frac{\bar{v}_{1}}{\bar{v_{2}}^{2}} \biggr],\qquad a_{13}=a_{31}=- \sum_{\kappa=1}^{n} \biggl[\frac{x_{2,\kappa-1}}{x_{1,\kappa-1}}+ \frac{x_{1,\kappa-1}}{\bar{v}_{2}} \biggr], \\ &a_{14}=a_{41}=\frac{1}{\bar{v}_{2}}\sum _{\kappa=1}^{n} \biggl[\frac{\bar{\nu}_{3}}{\bar{\nu}_{2}}- \frac{x_{3,\kappa-1}}{x_{2,\kappa-1}} \biggr],\qquad a_{22}=n \biggl[1+\frac {\bar{v}_{1}^{2}}{\bar{v}_{2}^{2}} \biggr],\\ &a_{23}=a_{32}=\sum_{\kappa=1}^{n} \biggl[x_{2,\kappa-1}+\frac{\bar{v}_{1}}{\bar{v_{2}}}x_{1,\kappa-1} \biggr], \qquad a_{24}=a_{42}=\frac{\bar{v}_{1}}{\bar{v}_{2}}\sum _{\kappa=1}^{n} \biggl(\frac{x_{3,\kappa-1}}{x_{2,\kappa-1}}- \frac{\bar{v}_{3}}{\bar{v_{2}}} \biggr),\\ & a_{33}=\sum_{\kappa=1}^{n} \bigl[x_{2,\kappa-1}^{2}+x_{1,\kappa-1}^{2} \bigr], \qquad a_{34}=a_{43}=\sum_{\kappa=1}^{n} \biggl[ \biggl(\frac{x_{3,\kappa-1}}{x_{2,\kappa-1}}-\frac{\bar{\nu }_{3}}{\bar{\nu}_{2}} \biggr)x_{1,\kappa-1} \biggr],\\ & a_{44}=\sum_{\kappa=1}^{n} \biggl[ \biggl(1-\frac{\bar{\nu}_{3}x_{2,\kappa-1}}{\bar{\nu}_{2}x_{3,\kappa -1}} \biggr)^{2}+ \biggl( \frac{\bar{\nu}_{3}}{\bar{\nu}_{2}}-\frac{x_{3,\kappa-1}}{x_{2,\kappa-1}} \biggr)^{2} \biggr], \\ &a_{45}=a_{54}=a_{55}=\sum _{\kappa=1}^{n} \biggl(1-\frac{\bar{\nu}_{3}x_{2,\kappa-1}}{\bar{\nu }_{2}x_{3,\kappa-1}} \biggr)^{2}, \\ &b_{1}=\frac{1}{\Delta t}\sum_{\kappa=1}^{n} \biggl[\frac{x_{1,\kappa}-x_{1,\kappa-1}}{x_{1,\kappa-1}^{2}}-\frac {1}{\bar{v}_{2}}\frac{x_{2,\kappa}-x_{2,\kappa-1}}{x_{2,\kappa-1}} \biggr], \\ &b_{2}=\frac{1}{\Delta t}\sum_{\kappa=1}^{n} \biggl[\frac{x_{2,\kappa}-x_{2,\kappa-1}}{x_{2,\kappa-1}}\frac{\bar {v}_{1}}{\bar{v}_{2}}-\frac{x_{1,\kappa}-x_{1,\kappa-1}}{x_{1,\kappa-1}} \biggr], \\ &b_{3}=\frac{1}{\Delta t}\sum_{\kappa=1}^{n} \biggl[\frac{x_{2,\kappa}-x_{2,\kappa-1}}{x_{2,\kappa-1}}x_{1,\kappa -1}-\frac{x_{1,\kappa}-x_{1,\kappa-1}}{x_{1,\kappa-1}}x_{2,\kappa-1} \biggr], \\ &b_{4}=\frac{1}{\Delta t}\sum_{\kappa=1}^{n} \biggl[\frac{x_{3,\kappa}-x_{3,\kappa-1}}{x_{3,\kappa-1}} \biggl(\frac {\bar{\nu}_{3}x_{2,\kappa-1}}{\bar{\nu}_{2}x_{3,\kappa-1}}-1 \biggr)-\frac {x_{2,\kappa}-x_{2,\kappa-1}}{x_{2,\kappa-1}} \biggl(\frac{\bar{\nu}_{3}}{\bar{\nu}_{2}}-\frac{x_{3,\kappa-1}}{x_{2,\kappa -1}} \biggr) \biggr], \\ &b_{5}=\frac{1}{\Delta t}\sum_{\kappa=1}^{n} \biggl[\frac{x_{3,\kappa}-x_{3,\kappa-1}}{x_{3,\kappa-1}} \biggl(\frac {\bar{\nu}_{3}x_{2,\kappa-1}}{\bar{\nu}_{2}x_{3,\kappa-1}}-1 \biggr) \biggr]. \end{aligned}$$

Thus, by Cramer's rule, we obtain the point estimators

$$ \begin{aligned} &\hat{\Lambda}=\frac{D_{1}}{D},\qquad\hat{ \mu_{1}}=\frac{D_{2}}{D},\qquad\hat{\beta}=\frac{D_{3}}{D},\qquad \hat{\alpha}=\frac{D_{4}}{D},\qquad\hat{\mu_{3}}= \frac{D_{5}}{D}, \end{aligned} $$
(20)

and

$$ \hat{\mu}_{4}=\frac{1}{\Delta t}\sum _{\kappa=1}^{n} \biggl[\frac{x_{4,\kappa}-x_{4,\kappa-1}}{x_{4,\kappa-1}} \biggl( \frac{\bar{\nu}_{4}x_{2,\kappa-1}}{ \bar{\nu}_{2}x_{4,\kappa-1}}-1 \biggr) \biggr]\Bigm/\sum_{\kappa=1}^{n} \biggl(1-\frac{\bar{\nu}_{4}x_{2,\kappa-1}}{\bar{\nu}_{2}x_{4,\kappa-1}} \biggr)^{2}, $$
(21)

where D is the determinant of the coefficient matrix \((a_{ij})_{5\times5}\), and \(D_{i}\), \(i=1,\ldots,5\), is obtained from D by replacing its i-th column with \((b_{1},\ldots,b_{5})^{T}\):

$$\begin{aligned} &D_{1}= \begin{vmatrix} b_{1} & a_{12} & \cdots& a_{15}\\ b_{2} & a_{22} & \cdots& a_{25}\\ \vdots& \vdots& &\vdots\\ b_{5} & a_{52} & \cdots& a_{55} \end{vmatrix} , \qquad D_{i}= \begin{vmatrix} a_{11}& \cdots &b_{1} & \cdots& a_{15}\\ a_{21} & \cdots& b_{2} & \cdots& a_{25}\\ \vdots& & \vdots & &\vdots\\ a_{51} & \cdots & b_{5} & \cdots& a_{55} \end{vmatrix} , \\ &D_{5}= \begin{vmatrix} a_{11} & a_{12} & \cdots& a_{14} & b_{1}\\ a_{21} & a_{22} & \cdots& a_{24} & b_{2}\\ \vdots& \vdots& &\vdots &\vdots\\ a_{51} & a_{52} & \cdots& a_{54} & b_{5} \end{vmatrix} , \qquad D= \begin{vmatrix} a_{11} & a_{12} & \cdots& a_{15}\\ a_{21} & a_{22} & \cdots& a_{25}\\ \vdots& \vdots& &\vdots\\ a_{51} & a_{52} & \cdots& a_{55} \end{vmatrix} . \end{aligned}$$

Combining (18)-(21), we get the point estimators

$$\begin{aligned} \begin{aligned} &\hat{\gamma}=(\hat{\mu_{3}}+\hat{\alpha}) \frac{\sum_{\kappa=1}^{n}x_{3,\kappa}}{\sum_{\kappa=1}^{n}x_{2,\kappa}}+\hat {\mu}_{4} \frac{\sum_{\kappa=1}^{n}x_{4,\kappa}}{ \sum_{\kappa=1}^{n}x_{2,\kappa}},\qquad \hat{\sigma} = \frac{\hat{\mu}_{4}\sum_{\kappa=1}^{n}x_{4,\kappa}}{ \hat{\gamma}\sum_{\kappa=1}^{n}x_{2,\kappa}}, \\ &\hat{\mu}_{2}=\frac{\hat{\Lambda}}{\frac{1}{n}\sum_{\kappa =1}^{n}x_{2,\kappa}}-\hat{\mu}_{1} \frac{\sum_{\kappa=1}^{n} x_{1,\kappa}}{\sum_{\kappa=1}^{n}x_{2,\kappa}} +\hat{\alpha}\frac{\sum _{\kappa=1}^{n}x_{3,\kappa}}{ \sum_{\kappa=1}^{n}x_{2,\kappa}}-\hat{\gamma}. \end{aligned} \end{aligned}$$
(22)
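Since F is a quadratic function of \((\Lambda,\mu_{1},\mu_{3},\mu_{4},\beta,\alpha)\), minimizing it numerically yields the same values as the closed-form solution (20)-(21); the estimators \(\hat{\gamma}\), \(\hat{\sigma}\), and \(\hat{\mu}_{2}\) then follow from (22). The following sketch (our illustration; it assumes scipy is available, reuses the data array x from the earlier simulation sketch, and uses a rough initial guess of our own choosing) performs this minimization:

```python
from scipy.optimize import minimize

s = np.sqrt(dt)
d = np.diff(x, axis=0) / (x[:-1] * s)       # left-hand sides of (17), shape (n, 4)
x1, x2, x3, x4 = x[:-1].T                   # observations at the left endpoints
nu1, nu2, nu3, nu4 = x.mean(axis=0)         # nu_bar estimates, cf. (19)

def F(theta):
    """Objective function F of the least squares problem."""
    Lam_, mu1_, mu3_, mu4_, beta_, alpha_ = theta
    r1 = d[:, 0] - Lam_ * s / x1 + mu1_ * s + beta_ * x2 * s
    r2 = (d[:, 1] + Lam_ * s / nu2 - mu1_ * nu1 * s / nu2
          - beta_ * x1 * s + alpha_ * (nu3 / nu2 - x3 / x2) * s)
    r3 = d[:, 2] + (mu3_ + alpha_) * s * (1 - (nu3 / nu2) * (x2 / x3))
    r4 = d[:, 3] + mu4_ * s * (1 - (nu4 / nu2) * (x2 / x4))
    return np.sum(r1**2 + r2**2 + r3**2 + r4**2)

theta0 = np.array([150.0, 0.2, 0.2, 0.2, 0.02, 0.2])    # rough initial guess
est = minimize(F, theta0, method="Nelder-Mead", options={"maxiter": 5000})
print(dict(zip(["Lambda", "mu1", "mu3", "mu4", "beta", "alpha"], est.x)))
```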

We have now estimated the drift coefficients of the SDE (4). Next, we estimate the diffusion coefficients \(\sigma_{1}\), \(\sigma_{2}\), \(\sigma_{3}\), and \(\sigma_{4}\). Once the drift coefficients of the smoking model have been estimated, one can apply a regression analysis approach to obtain unbiased estimators \(\hat{\sigma}_{1}\), \(\hat{\sigma}_{2}\), \(\hat{\sigma}_{3}\), and \(\hat{\sigma}_{4}\). Here, however, we provide a more efficient and simpler method, which relies on the properties of the quadratic variation and is independent of the drift coefficients.

By Itô’s formula, we find

$$ \begin{aligned} d\log x_{1}(t)&= \biggl[ \frac{\Lambda}{x_{1}(t)} -\mu_{1}-\beta x_{2}(t)- \frac{1}{2}\sigma^{2}_{1} \biggr]\,dt+ \sigma_{1} \,dB_{1}(t), \\ d\log x_{2}(t)&= \biggl[-\mu_{2}-\gamma-\frac{1}{2} \sigma^{2}_{2}+\beta x_{1}(t)+\alpha \frac{x_{3}(t)}{x_{2}(t)} \biggr]\,dt+\sigma_{2} \,dB_{2}(t), \\ d\log x_{3}(t)&= \biggl[-\mu_{3}-\alpha-\frac{1}{2} \sigma^{2}_{3}+\gamma(1-\sigma)\frac{x_{2}(t)}{x_{3}(t)} \biggr]\,dt+ \sigma_{3} \,dB_{3}(t), \\ d\log x_{4}(t)&= \biggl[-\mu_{4}+\sigma\gamma \frac{x_{2}(t)}{x_{4}(t)}-\frac{1}{2}\sigma_{4}^{2} \biggr] \,dt+\sigma_{4}\,dB_{4}(t). \end{aligned} $$

Then, for all \(t>0\),

$$ \begin{aligned} \log x_{1}(t)&=\log x_{1}(0)+ \int_{0}^{t} \biggl[\frac{\Lambda}{x_{1}(s)} - \mu_{1}-\beta x_{2}(s)-\frac{1}{2} \sigma^{2}_{1} \biggr]\,ds+ \int_{0}^{t}\sigma_{1} \,dB_{1}(s), \\ \log x_{2}(t)&=\log x_{2}(0)+ \int_{0}^{t} \biggl[-(\mu_{2}+\gamma)- \frac{1}{2}\sigma^{2}_{2}+\beta x_{1}(s)+ \alpha\frac{x_{3}(s)}{x_{2}(s)} \biggr]\,ds+ \int_{0}^{t}\sigma_{2} \,dB_{2}(s), \\ \log x_{3}(t)&=\log x_{3}(0)+ \int_{0}^{t} \biggl[-\mu_{3}-\alpha- \frac{1}{2}\sigma^{2}_{3}+\gamma(1-\sigma) \frac{x_{2}(s)}{x_{3}(s)} \biggr]\,ds+ \int_{0}^{t}\sigma_{3} \,dB_{3}(s), \\ \log x_{4}(t)&=\log x_{4}(0)+ \int_{0}^{t} \biggl[-\mu_{4}+\sigma\gamma \frac{x_{2}(s)}{x_{4}(s)}-\frac{1}{2}\sigma_{4}^{2} \biggr] \,ds+ \int_{0}^{t}\sigma_{4} \,dB_{4}(s). \end{aligned} $$

It is not difficult to see \(\log x_{i}(t)\), \(i=1,2,3,4\), are semimartingales. By the properties of the quadratic variation, it follows that

$$\begin{aligned} &[\log x_{1}, \log x_{1}](t) \\ &\quad =2 \biggl[ \int_{0}^{\cdot} \biggl(\frac{\Lambda}{x_{1}(s)} - \mu_{1}-\beta x_{2}(s)-\frac{1}{2} \sigma^{2}_{1} \biggr)\,ds, \int_{0}^{\cdot}\sigma_{1} \,dB_{1}(s) \biggr](t) \\ &\qquad{}+ \biggl[ \int_{0}^{\cdot} \biggl(\frac{\Lambda}{x_{1}(s)} - \mu_{1}-\beta x_{2}(s)-\frac{1}{2} \sigma^{2}_{1} \biggr)\,ds, \int_{0}^{\cdot} \biggl(\frac{\Lambda}{x_{1}(s)} - \mu_{1}-\beta x_{2}(s)-\frac{1}{2} \sigma^{2}_{1} \biggr)\,ds \biggr](t) \\ &\qquad{}+ \biggl[ \int_{0}^{\cdot}\sigma_{1} \,dB_{1}(s), \int_{0}^{\cdot}\sigma_{1} \,dB_{1}(s) \biggr](t). \end{aligned}$$
(23)

Applying Lemma 4.2 to (23) we find

$$ \begin{aligned} & \biggl[ \int_{0}^{\cdot} \biggl(\frac{\Lambda}{x_{1}(s)} - \mu_{1}-\beta x_{2}(s)-\frac{1}{2} \sigma^{2}_{1} \biggr)\,dt, \int_{0}^{\cdot}\sigma_{1} \,dB_{1}(s) \biggr](t) \\ &\quad = \int_{0}^{t}\sigma_{1} \biggl( \frac{\Lambda}{x_{1}(s)} -\mu_{1}-\beta x_{2}(s)- \frac{1}{2}\sigma^{2}_{1} \biggr)\,d[s, B_{1}](s) \\ &\quad =0, \\ & \biggl[ \int_{0}^{\cdot} \biggl(\frac{\Lambda}{x_{1}(s)} - \mu_{1}-\beta x_{2}(s)-\frac{1}{2} \sigma^{2}_{1} \biggr)\,ds, \int_{0}^{\cdot} \biggl(\frac{\Lambda}{x_{1}(s)} - \mu_{1}-\beta x_{2}(s)-\frac{1}{2} \sigma^{2}_{1} \biggr)\,ds \biggr](t) \\ &\quad =0, \end{aligned} $$

and

$$\begin{aligned} \biggl[ \int_{0}^{\cdot}\sigma_{1} \,dB_{1}(s), \int_{0}^{\cdot}\sigma_{1} \,dB_{1}(s) \biggr](t)=\sigma_{1}^{2}t. \end{aligned}$$

Consequently

$$\begin{aligned}{} [\log x_{1}, \log x_{1}](t)=\sigma^{2}_{1} t,\quad\mbox{a.s.} \end{aligned}$$

Similarly, we have

$$ \begin{aligned} &[\log x_{i}, \log x_{i}](t)= \sigma^{2}_{i} t,\quad i=2,3,4,\mbox{ a.s.} \end{aligned} $$

It then follows that

$$\begin{aligned} \sigma^{2}_{i}=\frac{1}{t}[\log x_{i}, \log x_{i}](t),\quad i=1,2,3,4,\ \mbox{a.s.} \end{aligned}$$

According to Definition 4.1, as \(n\rightarrow\infty\) and \(\Delta t\rightarrow0\) with \(t=n\Delta t\), we have

$$ \begin{aligned} &\frac{1}{n\Delta t}\sum_{\kappa=1}^{n}( \log x_{i,\kappa}-\log x_{i,\kappa-1})^{2}\rightarrow \frac{1}{t}[\log x_{i}, \log x_{i}](t),\quad i=1,2,3,4,\ \mbox{a.s.} \end{aligned} $$

Thus, we get the estimators

$$ \hat{\sigma}^{2}_{i}=\frac{1}{n\Delta t}\sum _{\kappa=1}^{n}(\log x_{i,\kappa}-\log x_{i,\kappa-1})^{2},\quad i=1,2,3,4. $$
(24)
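In code, estimator (24) amounts to the realized quadratic variation of each log-path divided by the time span (again a sketch of ours based on the simulated array x, with numpy assumed):

```python
# Summed squared log-increments approximate [log x_i, log x_i](t);
# dividing by t = n * dt gives estimator (24) for sigma_i^2.
dlog = np.diff(np.log(x), axis=0)
sigma2_hat = np.sum(dlog**2, axis=0) / (n * dt)
print(np.sqrt(sigma2_hat))                  # compare with (0.2, 0.2, 0.1, 0.1)
```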

Applying Definition 4.1 again, we have the following corollary.

Corollary 4.1

For \(i=1,2,3,4\), the estimators \(\hat{\sigma}_{i}\) are strongly consistent, that is,

$$ P \Bigl(\lim_{\Delta t\rightarrow0}\hat{\sigma}_{i}^{2}= \sigma_{i}^{2} \Bigr)=1. $$

Proof

Recall the well-known fact that \([B,B](t)=t\) a.s. The desired strong consistency then follows from the definition of the quadratic variation. □

We now give an example to illustrate the efficiency of our methods.

Example 4.1

Now, we keep the system parameters the same as in Example 3.1. We perform a computer simulation of 1,000,000 iterations of a single path of \(x(t)=(x_{1}(t),x_{2}(t),x_{3}(t),x_{4}(t))\) with initial value \(x_{1}(0)=250\), \(x_{2}(0)=700\), \(x_{3}(0)=450\), \(x_{4}(0)=400\) for the SDE (4), using the EM method (see Mao [15]) with step size \(\Delta=0.001\). Averaging the results of 15 such runs of the computations (20), (22), and (24), based on the random numbers generated by the scheme (17), we get

$$\begin{aligned} &\hat{\Lambda}=201.0341, \qquad \hat{\mu_{1}}=0.1103,\qquad \hat{ \mu_{2}}=0.2079,\qquad \hat{\mu_{3}}=0.1481, \\ &\hat{\mu_{4}}=0.0976,\qquad \hat{\beta}=0.0101,\qquad \hat{\alpha}=0.1503,\qquad \hat{\gamma}=0.3019,\qquad \hat{\sigma}=0.3917, \\ &\hat{\sigma}_{1}=0.2009, \qquad \hat{\sigma}_{2}=0.2001, \qquad \hat{\sigma}_{3}=0.1001,\qquad \hat{\sigma}_{4}=0.0999. \end{aligned}$$

We see that the above estimates are very close to the true values.

References

  1. http://www.who.int/mediacentre/factsheets/fs339/en/

2. Castillo-Garsow, C, Jordán-Salivia, G, Rodriguez-Herrera, A: Mathematical models for the dynamics of tobacco use, recovery and relapse. Technical Report Series, BU-1505-M, Department of Biometrics, Cornell University (2000)

  3. Sharomi, O, Gumel, A: Curtailing smoking dynamics: a mathematical modeling approach. Appl. Math. Comput. 195, 475-499 (2008)


  4. Lahrouz, A, Omari, L, Kiouach, D, Belmaâti, A: Deterministic and stochastic stability of a mathematical model of smoking. Stat. Probab. Lett. 81, 1276-1284 (2011)


  5. Gard, T: Persistence in stochastic food web models. Bull. Math. Biol. 46, 357-370 (1984)


  6. Gray, A, Greenhalgh, D, Hu, L, Mao, X, Pan, J: A stochastic differential equations SIS epidemic model. SIAM J. Appl. Math. 71, 876-902 (2011)


7. Zhang, Z, Chen, D: A new criterion on existence and uniqueness of stationary distribution for diffusion processes. Adv. Differ. Equ. 2013, 13 (2013)


  8. Bishwal, J: Parameter Estimation in Stochastic Differential Equations. Springer, Berlin (2008)


  9. Timmer, J: Parameter estimation in nonlinear stochastic differential equations. Chaos Solitons Fractals 11, 2571-2578 (2000)


  10. Kristensen, N, Madsen, H, Jørgensen, S: Parameter estimation in stochastic grey-box models. Automatica 40, 225-237 (2004)


11. Nielsen, J, Madsen, H, Young, P: Parameter estimation in stochastic differential equations: an overview. Annu. Rev. Control 24, 83-94 (2000)


  12. Pan, J, Gray, A, Greenhalgh, D, Mao, X: Parameter estimation for the stochastic SIS epidemic model. Stat. Inference Stoch. Process. 17, 75-98 (2014)


  13. Khasminskii, R: Stochastic Stability of Differential Equations, 2nd edn. Springer, Berlin (2011)


  14. Mao, X: Stochastic Differential Equations and Applications, 2nd edn. Horwood, Chichester (2008)


  15. Mao, X: Stationary distribution of stochastic population systems. Syst. Control Lett. 60, 398-405 (2011)


  16. Tong, J, Zhang, Z, Bao, J: The stationary distribution of the facultative population model with a degenerate noise. Stat. Probab. Lett. 83, 655-664 (2013)


  17. Klebaner, F: Introduction to Stochastic Calculus with Applications, 2nd edn. Monash University, Melbourne (2004)



Acknowledgements

The authors are grateful to Prof. Litan Yan and the anonymous reviewers for their valuable comments and suggestions which led to improvements in this manuscript. The research of Z. Zhang was partially supported by the National Natural Science Foundation of China (Nos. 11201062 and 11471071), the Fundamental Research Funds for the Central Universities (No. 233201300012) and the Institute of Nonlinear Science of Donghua University. The research of J. Tong was partially supported by the National Natural Science Foundation of China (Nos. 11401093 and 11571071). The research of M. Dong was partially supported by the National Undergraduate Student Innovation Program (No. 201610255018).

Author information


Correspondence to Zhenzhong Zhang.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The authors Xuekang Zhang and Zhenzhong Zhang made the major contributions to this manuscript. The author Jinying Tong gave some help and suggestions on the parameter estimation. The author Mei Dong helped to simulate the processes using MATLAB programs for the figures. All authors read and approved the final manuscript.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Zhang, X., Zhang, Z., Tong, J. et al. Ergodicity of stochastic smoking model and parameter estimation. Adv Differ Equ 2016, 274 (2016). https://doi.org/10.1186/s13662-016-0997-x
