
Alternative way to derive the distribution of the multivariate Ornstein–Uhlenbeck process


Abstract

In this paper, we solve the Fokker–Planck equation of the multivariate Ornstein–Uhlenbeck process to obtain its probability density function. This approach allows us to identify the distribution without solving the stochastic differential equation analytically. We find that, at any moment in time, the process has a multivariate normal distribution. We obtain explicit formulae for the mean vector, covariance matrix, and cross-covariance matrix. Moreover, we obtain its mean-reverting condition and long-term distribution.


Introduction

For decades, stochastic processes have grown in popularity as models for quantities that fluctuate over time; the main advantage of a stochastic model is the inclusion of the noise term. The Ornstein–Uhlenbeck process is one of the best-known stochastic processes, used in many research areas such as mathematical finance [1], physics [2], and biology [3]. It was introduced by Leonard Ornstein and George Eugene Uhlenbeck (1930). The process is defined as the solution of the stochastic differential equation

$$ dX(t)=\theta \bigl(\mu -X(t)\bigr)\,dt+\sigma \,dW(t), $$

where \(\theta \neq 0\), μ, and \(\sigma >0\) are constant parameters, and \(W(t)\) is the Wiener process. The parameter μ is the long-term mean, θ is the speed of mean reversion, and σ is the friction (diffusion) coefficient. Its analytic solution, together with its mean, variance, and covariance functions over time t, has been derived. An important feature of this process (with positive θ) is mean reversion: the process tends to its long-term mean μ as t tends to infinity. At any moment in time, if the value of the process exceeds the long-term mean, then the drift is negative, pulling the process down toward the long-term mean; similarly, if the value lies below the long-term mean, then the drift is positive, pushing the process up toward it.
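The pull toward the long-term mean described above is easy to see numerically. Below is a minimal Euler–Maruyama sketch of equation (1), assuming NumPy is available; all parameter values are illustrative, not from the paper. A path started far above μ drifts back toward it.

```python
import numpy as np

def simulate_ou(theta=2.0, mu=1.0, sigma=0.3, x0=5.0, T=5.0, n=5000, seed=0):
    """Euler-Maruyama discretisation of dX = theta*(mu - X) dt + sigma dW."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dw = rng.normal(0.0, np.sqrt(dt))   # Wiener increment over [t, t+dt]
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dw
    return x

path = simulate_ou()
# Started at x0 = 5; after T = 5 the path fluctuates around mu = 1.
print(path[0], path[-1])
```

With positive θ the drift term always points toward μ, which is exactly the mean-reversion mechanism the text describes.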

The multivariate Ornstein–Uhlenbeck process is a generalization to multiple dimensions of the Ornstein–Uhlenbeck process. It is defined as the solution of the multivariate stochastic differential equation

$$ \begin{aligned}[b] d\boldsymbol{X}(t)=\boldsymbol{\theta } \bigl(\boldsymbol{\mu }-\boldsymbol{X}(t) \bigr)\,dt+\boldsymbol{\sigma } \,d \boldsymbol{W}(t), \end{aligned} $$

where θ is an \(n\times n\) invertible real matrix, μ is an n-dimensional real vector, σ is an \(n\times m\) real matrix, and \(\boldsymbol{W}(t)\) is an m-dimensional standard Wiener process. This generalization arises when we deal with several quantities simultaneously. The univariate Ornstein–Uhlenbeck process would force us to model each component of \(\boldsymbol{X}(t)\) independently, which is not a realistic assumption and certainly fails when the quantities are related. Consequently, many researchers have applied this process, within its limitations, to situations of interest [4,5,6], and [7].
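The multivariate equation can be simulated with the same Euler–Maruyama scheme as the scalar case; the sketch below uses hypothetical 2-dimensional parameter values (not from the paper), with θ chosen to have positive eigenvalues so that the process is mean-reverting.

```python
import numpy as np

# Illustrative 2-dimensional parameters (any invertible theta works).
theta = np.array([[3.0, 1.0],
                  [0.0, 2.0]])     # n x n drift matrix, eigenvalues 3 and 2
mu    = np.array([1.0, -1.0])      # long-term mean vector
sigma = np.array([[0.4, 0.0],
                  [0.1, 0.3]])     # n x m diffusion matrix

def simulate_mou(x0, T=4.0, n=4000, seed=1):
    """Euler-Maruyama scheme for dX = theta (mu - X) dt + sigma dW."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty((n + 1, 2))
    x[0] = x0
    for k in range(n):
        dw = rng.normal(0.0, np.sqrt(dt), size=2)  # m-dimensional increment
        x[k + 1] = x[k] + theta @ (mu - x[k]) * dt + sigma @ dw
    return x

path = simulate_mou(x0=np.array([5.0, 5.0]))
print(path[0], path[-1])   # final point is near mu
```

Because the drift couples the components through θ, the components are correlated, which is precisely what the univariate model cannot capture.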

In most previous research, this process was treated through the solution of stochastic differential equation (2): the equation was solved explicitly, and the distribution, mean, covariance, and cross-covariance function matrix were then computed from that solution. This work differs in that we derive the distribution and its parameters without solving the equation analytically: instead, we treat the probability density function as a solution of the Fokker–Planck equation.


Preliminaries

In this section, we introduce some well-known definitions and results, which can be found in [8,9,10,11].

Proposition 1

Let \(\boldsymbol{X}(t)\) be a multivariate Itô process defined by the stochastic differential equation

$$ d\boldsymbol{X}(t)=\boldsymbol{\mu }\bigl(\boldsymbol{X}(t),t\bigr)\,dt+ \boldsymbol{\sigma }\bigl(\boldsymbol{X}(t),t\bigr)\,d\boldsymbol{W}(t), $$

where \(\boldsymbol{\mu }(\boldsymbol{X}(t),t)\) is an n-dimensional vector, \(\boldsymbol{\sigma }(\boldsymbol{X}(t),t)\) is an \(n\times m\) matrix, and \(\boldsymbol{W}(t)\) is an m-dimensional standard Wiener process. The probability density function \(p(\boldsymbol{x},t)\) of \(\boldsymbol{X}(t)\) satisfies the Fokker–Planck equation

$$ \frac{\partial }{\partial t}p(\boldsymbol{x},t) =-\frac{\partial }{\partial \boldsymbol{x}}\bigl[\boldsymbol{\mu }\bigl(\boldsymbol{x},t\bigr)p(\boldsymbol{x},t)\bigr] +\frac{1}{2}\frac{\partial ^{2}}{\partial \boldsymbol{x}^{2}}\bigl[\boldsymbol{D}\bigl(\boldsymbol{x},t\bigr)p(\boldsymbol{x},t)\bigr], $$

where \(\boldsymbol{D}(\boldsymbol{x}(t),t)=\boldsymbol{\sigma }(\boldsymbol{X}(t),t)\boldsymbol{\sigma }^{T}( \boldsymbol{X}(t),t)\). This equation is also known as the Kolmogorov forward equation.

Let \(f:\mathbb{R}^{n}\rightarrow \mathbb{R}\) be continuous. The n-dimensional Fourier transform of f is the function \(\mathfrak{F}(f):\mathbb{R}^{n}\rightarrow \mathbb{C}\) defined by

$$ \mathfrak{F}(f) (\boldsymbol{u})= \int _{\mathbb{R}^{n}}f(\boldsymbol{x})e^{-i(\boldsymbol{x}\cdot \boldsymbol{u})}\,d\boldsymbol{x}, $$

where i is the imaginary unit.

Lemma 1

Let \(f:\mathbb{R}^{n}\rightarrow \mathbb{R}\) be a continuously differentiable function such that \(\lim_{\|\boldsymbol{x}\|\rightarrow \infty }f(\boldsymbol{x})=0\). For any \(n\times n\) real matrix A and n-dimensional real vector c, the following properties hold:

  1. \(\mathfrak{F}(\frac{\partial }{\partial \boldsymbol{x}}\cdot \boldsymbol{c}f(\boldsymbol{x}))=i\boldsymbol{u}^{T}\boldsymbol{c}\mathfrak{F}(f)(\boldsymbol{u})\),

  2. \(\mathfrak{F}(\frac{\partial }{\partial \boldsymbol{x}}\frac{\partial }{\partial \boldsymbol{x}}:\boldsymbol{A}f(\boldsymbol{x}))=-\boldsymbol{u}^{T}\boldsymbol{A}\boldsymbol{u}\mathfrak{F}(f)(\boldsymbol{u})\),

  3. \(\mathfrak{F}(\frac{\partial }{\partial \boldsymbol{x}}\cdot \boldsymbol{Ax}f(\boldsymbol{x}))= - (\frac{\partial \mathfrak{F}(f)(\boldsymbol{u})}{\partial \boldsymbol{u}} )^{T}\boldsymbol{A}^{T}\boldsymbol{u}\).

For any square matrix A, we define the exponential of A, denoted \(e^{\boldsymbol{A}}\), as \(\sum_{k=0}^{\infty }\frac{\boldsymbol{A} ^{k}}{k!}\), where \(\boldsymbol{A}^{0}\) is the identity matrix I. Note that this series always converges, so the exponential is well-defined.

Lemma 2

For any square matrix A, the following properties hold:

  1. for any square matrix B of the same size commuting with A, \(e^{\boldsymbol{A}+\boldsymbol{B}}=e^{\boldsymbol{A}}e^{\boldsymbol{B}}\),

  2. \((e^{\boldsymbol{A}})^{T}=e^{\boldsymbol{A}^{T}}\),

  3. \(e^{\boldsymbol{A}}\) is invertible with \(e^{-\boldsymbol{A}}\) as its inverse,

  4. \(\frac{de^{\boldsymbol{A}t}}{dt}=\boldsymbol{A}e^{\boldsymbol{A}t}\); hence, if A is invertible, then \(\int e^{\boldsymbol{A}t}\,dt=\boldsymbol{A}^{-1}e^{\boldsymbol{A}t}\).
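As a quick sanity check (not part of the development), the invertibility and derivative properties of the matrix exponential can be verified numerically with SciPy's `expm` for an arbitrary illustrative matrix:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

A = np.array([[1.0, 2.0],
              [0.5, 3.0]])   # an arbitrary square matrix
t = 0.7

# e^A is invertible with e^{-A} as its inverse.
assert np.allclose(expm(A) @ expm(-A), np.eye(2))

# d/dt e^{At} = A e^{At}, checked with a central difference.
h = 1e-6
num_deriv = (expm(A * (t + h)) - expm(A * (t - h))) / (2 * h)
assert np.allclose(num_deriv, A @ expm(A * t))

# Consequence: the integral of e^{As} over [0, t] is A^{-1}(e^{At} - I).
integral, _ = quad_vec(lambda s: expm(A * s), 0.0, t)
assert np.allclose(integral, np.linalg.inv(A) @ (expm(A * t) - np.eye(2)))
print("matrix-exponential checks passed")
```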

Main results

Theorem 1

The characteristic function of the n-dimensional Ornstein–Uhlenbeck process \(\boldsymbol{X}(t)\) satisfying (2) with the initial value \(\boldsymbol{X}(0)=\boldsymbol{x}_{0}\) is given by

$$ \phi (\boldsymbol{u},t)=\exp \biggl[i\boldsymbol{u}^{T} \bigl(e^{-\boldsymbol{\theta } t}\boldsymbol{x} _{0}+\bigl(\boldsymbol{I}-e^{-\boldsymbol{\theta } t} \bigr)\boldsymbol{\mu } \bigr) -\frac{1}{2}\boldsymbol{u} ^{T} \biggl( \int _{0}^{t} e^{\boldsymbol{\theta }(s-t)}\boldsymbol{\sigma } \boldsymbol{\sigma } ^{T}e^{\boldsymbol{\theta }^{T} (s-t)}\,ds \biggr)\boldsymbol{u} \biggr]. $$


Proof

The Fokker–Planck equation of (2) is given by

$$ \frac{\partial p}{\partial t} =- \biggl[ \frac{\partial }{\partial \boldsymbol{x}}\boldsymbol{\theta \mu }p -\frac{\partial }{\partial \boldsymbol{x}} \boldsymbol{\theta x}p \biggr] +\frac{1}{2} \frac{\partial ^{2}}{\partial \boldsymbol{x}^{2}}\boldsymbol{D}p, $$

where \(\boldsymbol{D}=\boldsymbol{\sigma }\boldsymbol{\sigma }^{T}\), with initial condition

$$ p(\boldsymbol{x},0)=\delta ^{n} ( \boldsymbol{x}-\boldsymbol{x}_{0} ), $$

where \(\delta ^{n}\) is the n-dimensional Dirac delta function.

First, taking the n-dimensional Fourier transform of equation (6), we get

$$ \frac{\partial \hat{p}}{\partial t} =-i\boldsymbol{u}^{T}\boldsymbol{ \theta \mu } \hat{p} - \biggl(\frac{\partial \hat{p}}{\partial \boldsymbol{u}} \biggr)^{T} \boldsymbol{\theta }^{T}\boldsymbol{u} -\frac{1}{2} \boldsymbol{u}^{T}\boldsymbol{D}\boldsymbol{u}\hat{p}, $$

where \(\hat{p}(\boldsymbol{u},t)\) is the n-dimensional Fourier transform of \(p(\boldsymbol{x},t)\).

The initial condition (7) becomes

$$ \hat{p}(\boldsymbol{u}_{0})=\exp \bigl(-i \boldsymbol{u}_{0}^{T}\boldsymbol{x}_{0} \bigr). $$

Note that equation (8) is a first-order partial differential equation, so we will apply the method of characteristics.

Consider the system

$$ \frac{d\boldsymbol{u}}{dt} =\boldsymbol{\theta }^{T}\boldsymbol{u} $$

with initial condition \(\boldsymbol{u}(0)=\boldsymbol{u}_{0}\). The solution of this system is

$$ \boldsymbol{u}=e^{\boldsymbol{\theta }^{T}t}\boldsymbol{u}_{0}. $$

Along these characteristic curves, the gradient term is absorbed, and (8) reduces to the ordinary differential equation

$$ \frac{d\hat{p}}{d t} = \biggl[-i\boldsymbol{u}^{T} \boldsymbol{\theta \mu } -\frac{1}{2} \boldsymbol{u}^{T} \boldsymbol{D}\boldsymbol{u} \biggr]\hat{p}. $$

Substituting u from (10) into (11), we get

$$ \frac{d\hat{p}}{\hat{p}} = \biggl[-i\boldsymbol{u}_{0}^{T}e^{\boldsymbol{\theta }t} \boldsymbol{\theta \mu } -\frac{1}{2}\boldsymbol{u}_{0}^{T}e^{\boldsymbol{\theta }t} \boldsymbol{D}e ^{\boldsymbol{\theta }^{T}t}\boldsymbol{u}_{0} \biggr]\,dt. $$


Integrating both sides from 0 to t, we get

$$ \begin{aligned}[b] \hat{p} & =\hat{p}_{0} \exp \biggl[-i\boldsymbol{u}_{0}^{T}\bigl(e^{\boldsymbol{\theta }t}- \boldsymbol{I}\bigr)\boldsymbol{\mu } -\frac{1}{2}\boldsymbol{u}_{0}^{T} \biggl( \int _{0}^{t}e^{ \boldsymbol{\theta }s}\boldsymbol{D}e^{\boldsymbol{\theta }^{T}s} \,ds \biggr)\boldsymbol{u}_{0} \biggr]. \end{aligned} $$

Then, substituting \(\hat{p}_{0}\) from (9) and \(\boldsymbol{u}_{0}=e^{-\boldsymbol{\theta }^{T}t}\boldsymbol{u}\) obtained by inverting (10) into (13), we get

$$ \begin{aligned}[b] \hat{p} & =\exp \biggl[-i\boldsymbol{u}_{0}^{T} \boldsymbol{x}_{0} -i\boldsymbol{u}_{0}^{T}\bigl(e ^{\boldsymbol{\theta }t}-\boldsymbol{I}\bigr)\boldsymbol{\mu } -\frac{1}{2} \boldsymbol{u}_{0}^{T} \biggl( \int _{0}^{t}e^{\boldsymbol{\theta }s}\boldsymbol{D}e^{\boldsymbol{\theta }^{T}s} \,ds \biggr)\boldsymbol{u}_{0} \biggr] \\ & =\exp \biggl[-i\boldsymbol{u}^{T}e^{-\boldsymbol{\theta } t} \boldsymbol{x}_{0} -i\boldsymbol{u}^{T}e ^{-\boldsymbol{\theta } t} \bigl(e^{\boldsymbol{\theta }t}-\boldsymbol{I}\bigr)\boldsymbol{\mu } -\frac{1}{2} \boldsymbol{u}^{T}e^{-\boldsymbol{\theta } t} \biggl( \int _{0}^{t}e^{\boldsymbol{\theta }s} \boldsymbol{D}e^{\boldsymbol{\theta }^{T}s} \,ds \biggr)e^{-\boldsymbol{\theta }^{T}t}\boldsymbol{u} \biggr] \\ & =\exp \biggl[-i\boldsymbol{u}^{T}e^{-\boldsymbol{\theta } t} \boldsymbol{x}_{0}-i\boldsymbol{u}^{T}\bigl( \boldsymbol{I}-e^{-\boldsymbol{\theta } t}\bigr)\boldsymbol{\mu } -\frac{1}{2} \boldsymbol{u}^{T} \biggl( \int _{0}^{t} e^{\boldsymbol{\theta }(s-t)}\boldsymbol{\sigma } \boldsymbol{\sigma }^{T}e^{ \boldsymbol{\theta }^{T} (s-t)}\,ds \biggr)\boldsymbol{u} \biggr]. \end{aligned} $$

Since the characteristic function is the Fourier transform with the opposite sign in the complex exponential, that is, \(\phi (\boldsymbol{u},t)=\hat{p}(-\boldsymbol{u},t)\), we are done. □

Corollary 1

The n-dimensional Ornstein–Uhlenbeck process \(\boldsymbol{X}(t)\) satisfying (2) has an n-dimensional normal distribution with mean vector

$$ \boldsymbol{M}(t)=e^{-\boldsymbol{\theta } t}\boldsymbol{x}_{0}+ \bigl(\boldsymbol{I}-e^{-\boldsymbol{\theta } t}\bigr) \boldsymbol{\mu } $$

and covariance matrix

$$ \boldsymbol{\varSigma }(t)= \int _{0}^{t} e^{\boldsymbol{\theta }(s-t)}\boldsymbol{\sigma } \boldsymbol{\sigma }^{T}e^{\boldsymbol{\theta }^{T} (s-t)}\,ds. $$

Moreover, the probability density function of \(\boldsymbol{X}(t)\) is given by

$$ p(\boldsymbol{x},t)=\frac{\exp (-\frac{1}{2} (\boldsymbol{x}-\boldsymbol{M}(t) )^{T} \boldsymbol{\varSigma }^{-1}(t) (\boldsymbol{x}-\boldsymbol{M}(t) ) )}{\sqrt{ \vert 2\pi \boldsymbol{\varSigma }(t) \vert }}. $$


Proof

Comparing (5) with the characteristic function of a multivariate normal distribution with mean vector M and covariance matrix Σ,

$$ \phi (\boldsymbol{u})=\exp \biggl[i\boldsymbol{u}^{T}\boldsymbol{M}- \frac{1}{2}\boldsymbol{u}^{T} \boldsymbol{\varSigma } \boldsymbol{u}\biggr], $$

we obtain the result. □
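As a numerical sanity check of Corollary 1, the closed-form mean (15) and covariance (16) can be compared against Monte Carlo estimates from simulated paths. This is a rough sketch with hypothetical parameter values; `quad_vec` evaluates the covariance integral numerically.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

# Illustrative parameters (not from the paper).
theta = np.array([[2.0, 0.5], [0.0, 1.5]])
mu    = np.array([1.0, -1.0])
sigma = np.array([[0.3, 0.0], [0.1, 0.2]])
x0    = np.array([4.0, 4.0])
t     = 1.0

# Closed-form mean (15) and covariance (16).
M = expm(-theta * t) @ x0 + (np.eye(2) - expm(-theta * t)) @ mu
integrand = lambda s: expm(theta * (s - t)) @ sigma @ sigma.T @ expm(theta.T * (s - t))
Sigma, _ = quad_vec(integrand, 0.0, t)

# Monte Carlo estimates from Euler-Maruyama paths.
rng = np.random.default_rng(2)
paths, n = 20000, 400
dt = t / n
x = np.tile(x0, (paths, 1))
for _ in range(n):
    dw = rng.normal(0.0, np.sqrt(dt), size=(paths, 2))
    x = x + (mu - x) @ theta.T * dt + dw @ sigma.T
print(np.abs(x.mean(axis=0) - M).max())   # small (sampling + discretisation error)
print(np.abs(np.cov(x.T) - Sigma).max())  # small
```

Both discrepancies shrink as the number of paths grows and the time step decreases, consistent with the corollary.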

Theorem 2

The cross-covariance function matrix of an n-dimensional Ornstein–Uhlenbeck process \(\boldsymbol{X}(t)\) satisfying (2) is given by

$$ \boldsymbol{\varGamma }(s,t)= \int _{0}^{\min (s,t)} e^{-\boldsymbol{\theta } (s-u)}\boldsymbol{\sigma } \boldsymbol{\sigma }^{T}e^{-\boldsymbol{\theta }^{T} (t-u)} \,du. $$


Proof

Let \(\boldsymbol{\varGamma }(s,t)=\mathbb{E}[(\boldsymbol{X}(s)-\boldsymbol{M}(s))(\boldsymbol{X}(t)- \boldsymbol{M}(t))^{T}]\). From (15) we can see that \(\boldsymbol{M}'(t)=- \boldsymbol{\theta }(\boldsymbol{M}(t)-\boldsymbol{\mu })\). Then

$$ \begin{aligned}[b] \frac{\partial ^{2}\boldsymbol{\varGamma }}{\partial s\,\partial t} & = \mathbb{E} \bigl[\bigl( \boldsymbol{X}'(s)-\boldsymbol{M}'(s) \bigr) \bigl(\boldsymbol{X}'(t)-\boldsymbol{M}'(t) \bigr)^{T}\bigr] \\ & = \mathbb{E}\bigl[\bigl(-\boldsymbol{\theta }\bigl(\boldsymbol{X}(s)- \boldsymbol{M}(s)\bigr)+\boldsymbol{\sigma } \boldsymbol{\xi }(s)\bigr) \bigl(- \boldsymbol{\theta }\bigl(\boldsymbol{X}(t)-\boldsymbol{M}(t)\bigr)+\boldsymbol{ \sigma } \boldsymbol{\xi }(t)\bigr)^{T}\bigr] \\ & = \boldsymbol{\theta } \boldsymbol{\varGamma } \boldsymbol{\theta }^{T} - \boldsymbol{\theta } \boldsymbol{K}(s,t) \boldsymbol{\sigma }^{T}-\boldsymbol{\sigma } \boldsymbol{L}(s,t)\boldsymbol{\theta }^{T}+ \boldsymbol{\sigma }\mathbb{E}\bigl[\boldsymbol{\xi }(s) \boldsymbol{\xi }^{T}(t)\bigr]\boldsymbol{\sigma }^{T}, \end{aligned} $$

where \(\boldsymbol{\xi }(t)\) is an m-dimensional white noise (so that \(\boldsymbol{\sigma }\boldsymbol{\xi }(t)\) is n-dimensional), \(\boldsymbol{K}(s,t)= \mathbb{E}[(\boldsymbol{X}(s)-\boldsymbol{M}(s))\boldsymbol{\xi }^{T}(t)]\), and \(\boldsymbol{L}(s,t)= \mathbb{E}[\boldsymbol{\xi }(s)(\boldsymbol{X}(t)-\boldsymbol{M}(t))^{T}]\).

Taking the derivative of \(\boldsymbol{K}(s,t)\) with respect to s, we get

$$ \begin{aligned}[b] \frac{\partial \boldsymbol{K}}{\partial s} & = \mathbb{E}\bigl[\bigl( \boldsymbol{X}'(s)-\boldsymbol{M}'(s)\bigr) \boldsymbol{\xi }^{T}(t)\bigr] \\ & = \mathbb{E}\bigl[-\boldsymbol{\theta }\bigl(\boldsymbol{X}(s)- \boldsymbol{M}(s)\bigr)\boldsymbol{\xi }^{T}(t)+ \boldsymbol{\sigma } \boldsymbol{\xi }(s)\boldsymbol{\xi }^{T}(t)\bigr] \\ & = -\boldsymbol{\theta }\boldsymbol{K}(s,t)+\boldsymbol{\sigma }\mathbb{E}\bigl[ \boldsymbol{\xi }(s) \boldsymbol{\xi }^{T}(t)\bigr]. \end{aligned} $$

Since \(\mathbb{E}[\boldsymbol{\xi }(s)\boldsymbol{\xi }^{T}(t)]=\delta (s-t)\boldsymbol{I}\) and \(\boldsymbol{K}(0,t)=0\) for \(t>0\), we get the solution

$$ \boldsymbol{K}(s,t)= \textstyle\begin{cases} e^{-\boldsymbol{\theta }(s-t)}\boldsymbol{\sigma } & \text{for } s>t, \\ 0 & \text{for } s< t. \end{cases} $$

Similarly, we get

$$ \boldsymbol{L}(s,t)= \textstyle\begin{cases} 0 & \text{for } s>t, \\ \boldsymbol{\sigma }^{T}e^{-\boldsymbol{\theta }^{T}(t-s)} & \text{for } s< t. \end{cases} $$

So, if \(t>s\), then \(\boldsymbol{K}(s,t)=0\) and \(\delta (s-t)=0\), and hence

$$ \frac{\partial ^{2}\boldsymbol{\varGamma }}{\partial s\,\partial t} =\boldsymbol{\theta } \boldsymbol{\varGamma } \boldsymbol{\theta }^{T} -\boldsymbol{\sigma } \boldsymbol{L}(s,t)\boldsymbol{\theta }^{T} =\boldsymbol{\theta } \boldsymbol{\varGamma } \boldsymbol{\theta }^{T} -\boldsymbol{\sigma }\boldsymbol{\sigma }^{T}e^{-\boldsymbol{\theta }^{T}(t-s)}\boldsymbol{\theta }^{T} $$

with initial condition \(\boldsymbol{\varGamma }(0,t)=\boldsymbol{\varGamma }(s,0)=0\). This equation has the solution

$$ \boldsymbol{\varGamma }(s,t)= \int _{0}^{s} e^{-\boldsymbol{\theta } (s-u)}\boldsymbol{\sigma } \boldsymbol{\sigma }^{T}e^{-\boldsymbol{\theta }^{T} (t-u)} \,du. $$

On the other hand, if \(s>t\), then we similarly obtain that

$$ \boldsymbol{\varGamma }(s,t)= \int _{0}^{t} e^{-\boldsymbol{\theta } (s-u)}\boldsymbol{\sigma } \boldsymbol{\sigma }^{T}e^{-\boldsymbol{\theta }^{T} (t-u)} \,du. $$

This completes the proof. □

From this result it follows that if we let \(s=t\), then the cross-covariance function matrix becomes the covariance matrix as in (16).
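The identity \(\boldsymbol{\varGamma }(s,s)=\boldsymbol{\varSigma }(s)\), together with the symmetry \(\boldsymbol{\varGamma }(s,t)=\boldsymbol{\varGamma }(t,s)^{T}\) (which follows from the formula since \(\boldsymbol{\sigma }\boldsymbol{\sigma }^{T}\) is symmetric), can be checked numerically. Parameter values below are illustrative.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

theta = np.array([[2.0, 0.3], [0.0, 1.0]])   # eigenvalues 2 and 1
sigma = np.array([[0.5, 0.0], [0.2, 0.4]])
D = sigma @ sigma.T

def gamma(s, t):
    """Cross-covariance (Theorem 2): integral over [0, min(s, t)]."""
    f = lambda u: expm(-theta * (s - u)) @ D @ expm(-theta.T * (t - u))
    val, _ = quad_vec(f, 0.0, min(s, t))
    return val

def cov(t):
    """Covariance (16), with integration variable u in [0, t]."""
    f = lambda u: expm(theta * (u - t)) @ D @ expm(theta.T * (u - t))
    val, _ = quad_vec(f, 0.0, t)
    return val

s = 0.8
print(np.allclose(gamma(s, s), cov(s)))                  # True
print(np.allclose(gamma(0.5, 0.9), gamma(0.9, 0.5).T))   # True
```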

If the parameter θ of the univariate Ornstein–Uhlenbeck process is positive, then the process is mean-reverting. For the multivariate case, we also have a condition for mean reversion, stated in the following theorem.

Theorem 3

The n-dimensional Ornstein–Uhlenbeck process \(\boldsymbol{X}(t)\) satisfying (2) is mean-reverting if all eigenvalues of θ have positive real parts.


Proof

Since \(e^{-\boldsymbol{\theta } t}\) tends to the zero matrix as t tends to infinity whenever all eigenvalues of θ have positive real parts, we can conclude from (15) that, under this condition, \(\boldsymbol{M}(t)\) tends to μ.

For \(\boldsymbol{\varSigma }(t)\), the situation is different, since we cannot let t in (16) tend to infinity directly as we do for \(\boldsymbol{M}(t)\). We apply the identity \(\operatorname{vec}(\boldsymbol{ABC})=(\boldsymbol{C}^{T}\otimes \boldsymbol{A}) \operatorname{vec}(\boldsymbol{B})\), where ⊗ is the Kronecker product defined in [12], and \(\operatorname{vec}(\boldsymbol{A})\) is the column vector made of the columns of A stacked atop one another from left to right. Then

$$ \operatorname{vec}\bigl(\boldsymbol{\varSigma }(t)\bigr)= \int _{0}^{t} e^{\boldsymbol{\theta }(s-t)}\otimes e^{\boldsymbol{\theta }(s-t)}\,ds\, \operatorname{vec}\bigl(\boldsymbol{\sigma }\boldsymbol{\sigma }^{T}\bigr). $$

Now we use the identity \(e^{\boldsymbol{A}}\otimes e^{\boldsymbol{B}}=e^{\boldsymbol{A}\oplus \boldsymbol{B}}\), where ⊕ is the Kronecker sum, \(\boldsymbol{A}\oplus \boldsymbol{B}=\boldsymbol{A}\otimes \boldsymbol{I}+\boldsymbol{I}\otimes \boldsymbol{B}\). Then we obtain

$$ \begin{aligned}[b] \operatorname{vec}\bigl(\boldsymbol{\varSigma }(t) \bigr) & = \int _{0}^{t} e^{\boldsymbol{\theta }(s-t)}\otimes e ^{\boldsymbol{\theta }(s-t)}\,ds \operatorname{vec}\bigl(\boldsymbol{\sigma }\boldsymbol{ \sigma }^{T}\bigr) \\ & = \int _{0}^{t} e^{(\boldsymbol{\theta }\oplus \boldsymbol{\theta })(s-t)}\,ds \operatorname{vec}\bigl( \boldsymbol{\sigma }\boldsymbol{\sigma }^{T} \bigr) \\ & = (\boldsymbol{\theta }\oplus \boldsymbol{\theta })^{-1} \bigl( \boldsymbol{I}- e^{-( \boldsymbol{\theta }\oplus \boldsymbol{\theta })t} \bigr) \operatorname{vec}\bigl(\boldsymbol{\sigma }\boldsymbol{\sigma } ^{T}\bigr). \end{aligned} $$

Since all eigenvalues of \(\boldsymbol{\theta }\oplus \boldsymbol{\theta }\) still have positive real parts, the covariance matrix converges to a constant matrix Σ such that \(\operatorname{vec}(\boldsymbol{\varSigma })= (\boldsymbol{\theta }\oplus \boldsymbol{\theta })^{-1} \operatorname{vec}(\boldsymbol{\sigma }\boldsymbol{\sigma }^{T})\). □
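The long-term covariance can be computed exactly as in the proof. Equivalently (a standard fact not stated in the paper), Σ solves the continuous Lyapunov equation \(\boldsymbol{\theta }\boldsymbol{\varSigma }+\boldsymbol{\varSigma }\boldsymbol{\theta }^{T}=\boldsymbol{\sigma }\boldsymbol{\sigma }^{T}\), which SciPy solves directly; the sketch below checks that both routes agree for illustrative parameters.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

theta = np.array([[2.0, 0.5], [0.3, 1.5]])   # eigenvalues with positive real parts
sigma = np.array([[0.4, 0.0], [0.1, 0.3]])
D = sigma @ sigma.T
n = theta.shape[0]
I = np.eye(n)

# vec(Sigma) = (theta ⊕ theta)^{-1} vec(sigma sigma^T), with the Kronecker sum
# theta ⊕ theta = theta ⊗ I + I ⊗ theta (column-stacking vec convention).
ksum = np.kron(theta, I) + np.kron(I, theta)
vec_sigma = np.linalg.solve(ksum, D.flatten(order="F"))
Sigma_inf = vec_sigma.reshape((n, n), order="F")

# Same matrix from the Lyapunov equation theta Sigma + Sigma theta^T = D.
print(np.allclose(Sigma_inf, solve_continuous_lyapunov(theta, D)))   # True
```

The Lyapunov form is often preferable numerically, since it avoids building the \(n^{2}\times n^{2}\) Kronecker-sum matrix.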


Conclusion

In this paper, we propose a new method to derive the distribution of the multivariate Ornstein–Uhlenbeck process by solving its forward (Fokker–Planck) equation, using the method of characteristics and the Fourier transform. We obtain the characteristic function of the multivariate Ornstein–Uhlenbeck process and also its density function. Our explicit result shows that the multivariate Ornstein–Uhlenbeck process, at any time, is a multivariate normal random variable. We also derive the mean vector, covariance matrix, and cross-covariance matrix, and we obtain a mean-reverting condition that extends the univariate case: whereas the univariate Ornstein–Uhlenbeck process is mean-reverting when the parameter θ is positive, the multivariate process is mean-reverting when all eigenvalues of the matrix θ have positive real parts.


References

  1. Vasicek, O.: An equilibrium characterization of the term structure. J. Financ. Econ. 5, 177–188 (1977)
  2. Lemons, D.S.: An Introduction to Stochastic Processes in Physics. Johns Hopkins University Press, Baltimore (2002)
  3. Ditlevsen, S., Samson, A.: Introduction to stochastic models in biology. In: Stochastic Biomathematical Models: With Applications to Neuronal Modeling. Springer, Berlin (2013)
  4. Trost, D.C., Overman, E.A. II, Ostroff, J.H., Xiong, W., March, P.: A model for liver homeostasis using modified mean-reverting Ornstein–Uhlenbeck process. Comput. Math. Methods Med. 11, 27–47 (2010)
  5. Christensen, J.H.E., Diebold, F.X., Rudebusch, G.D.: The affine arbitrage-free class of Nelson–Siegel term structure models. J. Econom. 164, 4–20 (2011)
  6. Fasen, V.: Statistical estimation of multivariate Ornstein–Uhlenbeck processes and applications to co-integration. J. Econom. 172, 325–337 (2013)
  7. Phewchean, N., Wu, Y.H., Lenbury, Y.: Option pricing with stochastic volatility and market price of risk: an analytic approach. In: Recent Advances in Finite Differences and Applied & Computational Mathematics Conference Proceedings, Athens, May 14–16, 2013, pp. 135–139 (2013)
  8. Klebaner, F.C.: Introduction to Stochastic Calculus with Applications, 2nd edn. Imperial College Press, London (2005)
  9. Risken, H.: The Fokker–Planck Equation, 2nd edn. Springer Series in Synergetics, vol. 18. Springer, Berlin (1996)
  10. Stade, E.: Fourier Analysis. Wiley, New York (2011)
  11. Hall, B.C.: Lie Groups, Lie Algebras, and Representations, 2nd edn. Springer, Berlin (2015)
  12. Graham, A.: Kronecker Products and Matrix Calculus with Applications. Ellis Horwood, Chichester (1981)



Acknowledgements

We acknowledge the support of the Centre of Excellence in Mathematics, CHE, Thailand.

Author information




Both authors contributed equally to this work. All authors read and approved the final manuscript.

Corresponding author

Correspondence to N. Phewchean.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Both authors have seen and approved the submission of this manuscript.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Vatiwutipong, P., Phewchean, N. Alternative way to derive the distribution of the multivariate Ornstein–Uhlenbeck process. Adv Differ Equ 2019, 276 (2019).



MSC

  • 93E03
  • 60H10


Keywords

  • Multivariate Ornstein–Uhlenbeck process
  • Multivariate normal distribution
  • Fokker–Planck equation
  • n-dimensional Fourier transform