 Research
 Open Access
The interval versions of the Kalman filter and the EM algorithm
O Al-Gahtani^{1}, J Al-Mutawa^{2}, M El-Gebeily^{2} and R Agarwal^{1, 3}
https://doi.org/10.1186/1687-1847-2012-172
© Al-Gahtani et al.; licensee Springer 2012
 Received: 2 May 2012
 Accepted: 14 September 2012
 Published: 2 October 2012
Abstract
In this paper, we study state space models represented by interval parameters and noise. We introduce an interval version of the Expectation Maximization (EM) algorithm for the identification of the interval parameters of the system. We also introduce a suboptimal interval Kalman filter for the identification and estimation of the state vectors. This work requires the concept of interval random variables, which we introduce here together with a study of their interval statistical properties such as expectation, conditional expectation and variance. Although the interval Kalman filter introduced here is suboptimal, it recovers the state vectors to high precision in the simulation examples we have run.
Keywords
 Probability Density Function
 Kalman Filter
 State Space Model
 Expectation Maximization Algorithm
 Interval Arithmetic
1 Introduction
In a state space model, some parameters of the system such as the coefficient matrices may not be precisely known or they gradually change with time. One way to account for these uncertainties is to allow such parameters to be represented by interval entities. The question then arises as to how to extend identification and estimation techniques to interval settings.
To our knowledge, no attempt has been made so far to extend identification techniques such as the EM algorithm to interval state space models. In this work, we give one such extension.
In the existing literature, an optimal interval Kalman filter was attempted in [1]. That attempt suffered from a lack of proper definitions and rigorous treatment. The idea in [1] was to replace the interval system with its ‘worst case inversion’ while keeping everything else unchanged, so the treatment in [1] ultimately amounts to applying the traditional Kalman filter to the system representing the worst case scenario. In this way the authors avoided the difficulties that arise when dealing with interval arithmetic and concepts. On the other hand, such an algorithm cannot be called optimal, and the construction of an optimal interval Kalman filter remains an open question.
In our work, we introduce a special interval arithmetic that always produces results that are smaller (in the sense of set containment) than those of the traditional interval arithmetic [2, 3]. This arithmetic enables the extension of the Kalman filter, as well as the EM algorithm, to the interval setting in a true sense. With respect to our restricted interval arithmetic, the interval Kalman filter we introduce here is optimal; with respect to the more general interval arithmetic, however, it is suboptimal.
2 A special interval arithmetic
We introduce a special set of interval operations that will enable the extension of the usual linear system concepts to the interval setting in a seamless manner. The more general definitions of the interval operations can be found in [2]. The arithmetic introduced here avoids such vague terms as ‘interval extension’, ‘inclusion function’, determinants etc. that have been used in the literature [1, 4–6].
Definition 1 Let $I=[a,b]$ and $J=[c,d]$ be intervals and let $\bullet \in \{+,-,\ast ,\div \}$. We define$I\bullet J=\{((1-\alpha )a+\alpha b)\bullet ((1-\alpha )c+\alpha d):\alpha \in [0,1]\},$
with the usual restriction $0\notin J$ if $\bullet =\div $.
Observe that all operations in Definition 1 result in intervals since they can be regarded as continuous functions defined on the unit interval $[0,1]$. For example, a typical element of $I\ast J$ is ${(1-\alpha )}^{2}ac+\alpha (1-\alpha )(ad+bc)+{\alpha}^{2}bd$, which is a continuous function of α. The operations in Definition 1 give similar results to the usual interval operations as given in [2] when $\bullet \in \{+,-\}$, but generally they give only subintervals if $\bullet \in \{\ast ,\div \}$. For example, if $I=[-2,2]$, then $I\ast I=[0,4]$ according to Definition 1, while the usual definition in [2] gives $I\ast I=[-4,4]$.
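To make the contrast concrete, here is a minimal numerical sketch (ours, not the paper's code) of the operation in Definition 1, tracing $x_{\alpha}\bullet y_{\alpha}$ over a grid of α values:

```python
# Sketch of the special interval arithmetic of Definition 1:
# I . J = { x_alpha . y_alpha : alpha in [0,1] } with
# x_alpha = (1-alpha)a + alpha*b and y_alpha = (1-alpha)c + alpha*d.
# The resulting interval is traced by sampling alpha on a fine grid.

def special_op(I, J, op, n=10001):
    a, b = I
    c, d = J
    vals = []
    for k in range(n):
        alpha = k / (n - 1)
        x = (1 - alpha) * a + alpha * b
        y = (1 - alpha) * c + alpha * d
        vals.append(op(x, y))
    return (min(vals), max(vals))

I = (-2.0, 2.0)
print(special_op(I, I, lambda x, y: x * y))  # (0.0, 4.0)
```

Because both factors share the same α, $I\ast I$ traces the square ${(4\alpha -2)}^{2}$, giving $[0,4]$ rather than the wider $[-4,4]$ of standard interval multiplication.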
These two properties, which are missing in the usual interval operations, will enable the extension of many results from usual state space models to interval state space models. On the other hand, these definitions were motivated by our attempt to arrive at a definition of interval random variables and investigate the corresponding statistical properties. We feel that they are the natural ones for handling interval systems. This is reinforced by the numerical results we obtained in the simulation examples (see Section 5). While we expected to obtain a construction of a suboptimal interval Kalman filter, the constructed filter was actually able to recover the exact simulated intervals rather than mere subintervals.
Interval vectors and matrices are defined similarly:

A vector $\mathbf{v}\in {\mathbf{IR}}^{n}$ is defined as$\mathbf{v}=[\mathbf{a},\mathbf{b}]=\{{\mathbf{x}}_{\alpha}=(1-\alpha )\mathbf{a}+\alpha \mathbf{b}:\alpha \in [0,1]\},$
where the inequality $\mathbf{a}\le \mathbf{b}$ holds componentwise.

A matrix $\mathbf{A}\in {\mathbf{IR}}^{n\times n}$ is defined as$\mathbf{A}=[A,B]=\{{X}_{\alpha}=(1-\alpha )A+\alpha B:\alpha \in [0,1]\},$
where the inequality $A\le B$ holds componentwise.
provided that the involved operations make sense. In the same spirit, interval matrix operations are defined as follows:

The interval determinant is defined by$det(\mathbf{A})=\{det({X}_{\alpha}):\alpha \in [0,1]\}.$

The interval adjoint is defined by$adj(\mathbf{A})=\{adj({X}_{\alpha}):\alpha \in [0,1]\}.$

The interval inverse is defined by$\begin{array}{rcl}{\mathbf{A}}^{-1}& =& \{{X}_{\alpha}^{-1}:\alpha \in [0,1]\}\\ & =& \{\frac{adj({X}_{\alpha})}{det({X}_{\alpha})}:\alpha \in [0,1]\}\\ & =& \frac{adj(\mathbf{A})}{det(\mathbf{A})}.\end{array}$

Suppose that ${\mathbf{A}}^{-1}$ exists. We define the solution of the interval linear system $\mathbf{AX}=\mathbf{b}$ to be$\mathbf{X}=\{X\in {\mathbb{R}}^{n}:{A}_{\alpha}X={b}_{\alpha},\alpha \in [0,1]\}.$
The last inclusion holds because if $X\in \mathcal{S}$, then there are $A\in \mathbf{A}$ and $b\in \mathbf{b}$ with $AX=b$. Then $X={A}^{-1}b\in {\mathbf{A}}^{-1}b\subset {\mathbf{A}}^{-1}\mathbf{b}$. Thus, $\mathcal{S}\subset {\mathbf{A}}^{-1}\mathbf{b}$. Noting that ${\mathbf{A}}^{-1}\mathbf{b}$ is an interval vector and $[\mathcal{S}]$ is minimal, we get that $[\mathcal{S}]\subset {\mathbf{A}}^{-1}\mathbf{b}$.
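As a concrete illustration of the α-parameterized inverse and the solution set $\mathcal{S}$, here is a minimal sketch; the 2×2 matrices and right-hand sides are our own illustrative values, not from the paper:

```python
# Sketch: interval inverse and interval linear system solution via the
# alpha-parameterization A = { X_alpha = (1-alpha)*A0 + alpha*B0 }.
# All matrices and vectors below are illustrative assumptions.

def inv2(M):
    # ordinary 2x2 inverse via adjoint/determinant
    (p, q), (r, s) = M
    det = p * s - q * r
    return ((s / det, -q / det), (-r / det, p / det))

def slice_at(A0, B0, al):
    # the alpha-slice X_alpha of the interval matrix [A0, B0]
    return tuple(tuple((1 - al) * A0[i][j] + al * B0[i][j] for j in range(2))
                 for i in range(2))

def matvec(M, v):
    return tuple(M[i][0] * v[0] + M[i][1] * v[1] for i in range(2))

A0 = ((2.0, 0.0), (0.0, 2.0))     # lower endpoint matrix
B0 = ((3.0, 0.0), (0.0, 3.0))     # upper endpoint matrix
bl, br = (2.0, 2.0), (6.0, 3.0)   # endpoint right-hand sides

# trace the solution set S = { X : A_alpha X = b_alpha } on an alpha grid
sols = []
for k in range(101):
    al = k / 100
    X = slice_at(A0, B0, al)
    b = tuple((1 - al) * bl[i] + al * br[i] for i in range(2))
    sols.append(matvec(inv2(X), b))   # X_alpha^{-1} b_alpha
```

Each slice ${X}_{\alpha}^{-1}{b}_{\alpha}$ is a point of $\mathcal{S}$; the interval hull $[\mathcal{S}]$ of these points is then contained in ${\mathbf{A}}^{-1}\mathbf{b}$, mirroring the inclusion argued in the text.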
For the rest of this paper, we will use the special interval operations defined above.
The map q defines a metric in IR.
3 Interval random variables
We begin by discussing the measurability of set-valued maps and then introduce the definition of an interval random variable. The basic definitions and further details can be found in [7]. A measurable space $(\mathrm{\Omega},\mathcal{A})$ consists of a basic set Ω together with a σ-algebra $\mathcal{A}$ of subsets of Ω called measurable sets. Here, we consider set-valued maps $F:\mathrm{\Omega}\rightrightarrows {\mathbb{R}}^{k}$ with closed convex values, i.e., $F(\omega )$ is a closed convex subset of ${\mathbb{R}}^{k}$ for each $\omega \in \mathrm{\Omega}$. This is the case when F is interval valued, which means that for each $\omega \in \mathrm{\Omega}$, the components of $F(\omega )$ are closed intervals in ℝ.
Definition 3 Let $(\mathrm{\Omega},\mathcal{A})$ be a measurable space and $F:\mathrm{\Omega}\rightrightarrows {\mathbb{R}}^{k}$ be a set-valued map. F is called measurable if the inverse image of each open set is a measurable set: if $O\subset {\mathbb{R}}^{k}$ is open, then ${F}^{-1}(O)\in \mathcal{A}$.
We are now in a position to introduce the definition of interval random variables and interval stochastic processes.
Definition 4 An interval-valued map X on a probability space is called an interval random variable if
 1.
X is measurable, and
 2.
the function $x\mapsto {p}_{x}$ is continuous on X, where ${p}_{x}$ is the probability density function of the random variable x.
An interval stochastic process is an indexed set of interval random variables.
In order to study the expectations and variances of interval random variables, we first need to discuss the integral of set-valued maps and, in particular, interval-valued maps. The discussion begins with the notion of measurable selections.
Definition 5 Let $(\mathrm{\Omega},\mathcal{A})$ be a measurable space and $F:\mathrm{\Omega}\rightrightarrows {\mathbb{R}}^{k}$ be a measurable set-valued map. A measurable selection of F is a measurable map $f:\mathrm{\Omega}\to {\mathbb{R}}^{k}$ satisfying $f(\omega )\in F(\omega )$ for each $\omega \in \mathrm{\Omega}$.
It is well known that every measurable set-valued map has at least one measurable selection [8]. Furthermore, we have the following equivalences [7].
 1.
F is measurable.
 2.
${G}_{F}\in \mathcal{A}\otimes \mathcal{B}$.
 3.
${F}^{-1}(B)\in \mathcal{A}$ for every $B\in \mathcal{B}$.
 4.There exists a sequence of measurable selections ${\{{f}_{n}\}}_{n=1}^{\mathrm{\infty}}$ of F such that$F(\omega )=\overline{\bigcup _{n\ge 1}{f}_{n}(\omega )}$
for each $\omega \in \mathrm{\Omega}$.
A countable family of measurable selections satisfying the last property is called dense.
Let $F:\mathrm{\Omega}\rightrightarrows {\mathbb{R}}^{k}$ be an interval-valued map. We define the two special functions ${l}_{F}$ and ${r}_{F}$ by ${l}_{F}(\omega )=a(\omega )$ and ${r}_{F}(\omega )=b(\omega )$, where $F(\omega )=[a(\omega ),b(\omega )]$ for each $\omega \in \mathrm{\Omega}$. The next lemma shows that ${l}_{F}$ and ${r}_{F}$ are measurable selections of F when F is measurable.
Lemma 7 Let $F:\mathrm{\Omega}\rightrightarrows {\mathbb{R}}^{k}$ be a measurable interval-valued map. Then the point functions ${l}_{F}$ and ${r}_{F}$ are measurable selections of F.
Proof Let ${\{{f}_{n}\}}_{n\ge 1}$ be a dense family of measurable selections of F. Then ${l}_{F}(\omega )={inf}_{n\ge 1}{f}_{n}(\omega )$ and ${r}_{F}(\omega )={sup}_{n\ge 1}{f}_{n}(\omega )$ (here the inf and sup operations are taken componentwise). Since the inf and sup operators preserve measurability, we see that the functions ${l}_{F}$ and ${r}_{F}$ are measurable selections of F. □
For example, let $F(t)=[t,t+\frac{1}{t}]$ for $t\in [1,\mathrm{\infty})$, let ${\{{r}_{n}\}}_{n\ge 1}$ be an enumeration of the rationals in $[0,1]$ with ${r}_{1}=1$ and ${r}_{2}=0$, and set ${f}_{n}(t)={r}_{n}t+(1-{r}_{n})(t+\frac{1}{t})$. Thus, ${l}_{F}(t)=t={f}_{1}(t)$ and ${r}_{F}(t)=t+\frac{1}{t}={f}_{2}(t)$. For every $t\in [1,\mathrm{\infty})$, the set ${\{{r}_{n}t+(1-{r}_{n})(t+\frac{1}{t})\}}_{n=1}^{\mathrm{\infty}}$ is dense in the interval $[t,t+\frac{1}{t}]$. Thus, $F(t)=\overline{{\bigcup}_{n\ge 1}{f}_{n}(t)}$.
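A small numerical check of this dense-selection representation (our own sketch, using a finite rational grid in place of a full enumeration of the rationals):

```python
# Check that selections f(t) = r*t + (1-r)*(t + 1/t), for rationals r in
# [0,1], fill the interval F(t) = [t, t + 1/t] ever more densely.
from fractions import Fraction

def F(t):
    return (t, t + 1.0 / t)

def selections(t, depth=64):
    # rationals k/depth in [0, 1] as a finite stand-in for an enumeration
    rs = [Fraction(k, depth) for k in range(depth + 1)]
    return sorted(float(r) * t + (1.0 - float(r)) * (t + 1.0 / t) for r in rs)

t = 2.0
vals = selections(t)
lo, hi = F(t)
gap = max(b - a for a, b in zip(vals, vals[1:]))
# vals spans [2.0, 2.5] with a maximum gap of (hi - lo)/64
```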
Now suppose that $(\mathrm{\Omega},\mathcal{A},\mu )$ is a measure space and $F:\mathrm{\Omega}\rightrightarrows {\mathbb{R}}^{k}$ is a set-valued map. A measurable selection f of F is an integrable selection if f is integrable with respect to the measure μ. The set of all integrable selections of F will be denoted by ℱ. The map F is called integrably bounded if there exists a μ-integrable function $g\in {L}^{1}(\mathrm{\Omega};\mathbb{R},\mu )$ such that $F(\omega )\subset g(\omega )\mathbf{B}$ for μ-almost every $\omega \in \mathrm{\Omega}$, where B denotes the unit ball in ${\mathbb{R}}^{k}$. In this case, every measurable selection f of F is also an integrable selection since $f(\omega )\in F(\omega )\subset g(\omega )\mathbf{B}$ implies that $\parallel f(\omega )\parallel \le g(\omega )$, where $\parallel \cdot \parallel $ denotes the Euclidean norm on ${\mathbb{R}}^{k}$.
We shall say that F is integrable if every measurable selection is integrable.
where ${f}_{\alpha}=(1-\alpha ){l}_{F}+\alpha {r}_{F}$. Hence, $\theta \in {\int}_{\mathrm{\Omega}}F\phantom{\rule{0.2em}{0ex}}d\mu $.
The second equality is an immediate consequence of this. □
It will always be assumed that both ${l}_{F}$ and ${r}_{F}$ are integrable.
In view of (3), we have the following corollary.
We shall say that Z is normally distributed if each $z\in Z$ is normally distributed. An interval stochastic process ${\{{Z}_{t}\}}_{t\in T}$ will be called normally distributed if for each $t\in T$, ${Z}_{t}$ is normally distributed.
Guided by this and Lemma 9, we can define the interval expectation of the interval random variable Z as follows.
It should also be noted that the expectation of a vector random variable is the vector of expectations of its components.
The same is true if I is an interval vector and Z is an interval random variable.
To introduce covariance of two interval random variables Y, Z, we need to assume that the function $(x,y)\mapsto {p}_{x,y}$ is continuous on $Y\times Z$. Here, ${p}_{x,y}$ is the joint probability density function of the two random variables x, y.
where $a=Var({l}_{Z})$, $b=Var({r}_{Z})$, $c=Cov({l}_{Z},{r}_{Z})$. This last equation provides a formula for computing the interval $Var(Z)$.
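Since ${z}_{\alpha}=(1-\alpha ){l}_{Z}+\alpha {r}_{Z}$, the variance of each α-slice is $Var({z}_{\alpha})={(1-\alpha )}^{2}a+{\alpha}^{2}b+2\alpha (1-\alpha )c$. A quick numerical sketch of computing the interval $Var(Z)$ from this formula (the values of a, b, c below are illustrative assumptions):

```python
# Trace Var(z_alpha) = (1-alpha)^2 a + alpha^2 b + 2 alpha (1-alpha) c
# over an alpha grid to obtain the interval Var(Z).

def interval_var(a, b, c, n=10001):
    vals = []
    for k in range(n):
        al = k / (n - 1)
        vals.append((1 - al) ** 2 * a + al ** 2 * b + 2 * al * (1 - al) * c)
    return (min(vals), max(vals))

# illustrative endpoint statistics: a = Var(l_Z), b = Var(r_Z), c = Cov(l_Z, r_Z)
print(interval_var(1.0, 4.0, 1.0))  # (1.0, 4.0); here Var(z_alpha) = 1 + 3 alpha^2
```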
For interval random vectors, the above definitions hold componentwise.
The two interval random variables Y, Z will be called uncorrelated if for each ${y}_{\alpha}\in Y$, ${z}_{\alpha}\in Z$, ${y}_{\alpha}$, ${z}_{\alpha}$ are uncorrelated. Therefore, Y, Z are uncorrelated if and only if $Cov(Y,Z)=[0]$.
It is now straightforward to check the following theorem.
 1.
$Cov(\mathit{\lambda}\mathbf{Y},\mathbf{W})=\mathit{\lambda}Cov(\mathbf{Y},\mathbf{W})$,
 2.
$Cov(\mathbf{Y}+\mathbf{Z},\mathbf{W})=Cov(\mathbf{Y},\mathbf{W})+Cov(\mathbf{Z},\mathbf{W})$,
 3.
$Cov(\mathbf{AY},\mathbf{BW})=\mathbf{A}Cov(\mathbf{Y},\mathbf{W}){\mathbf{B}}^{T}$.
The assumed continuous dependence of the probability density function (or joint density function) on the underlying random variables of an interval random variable implies that the conditional probability density function is also continuous. This guarantees that the generalization of the conditional density function to the interval setting always yields an interval.
The following theorem is easily checked.
 1.
$E(\mathbf{X}+\mathbf{Y}\mid \mathbf{Z})=E(\mathbf{X}\mid \mathbf{Z})+E(\mathbf{Y}\mid \mathbf{Z})$,
 2.
$E(\mathbf{AY}\mid \mathbf{Z})=\mathbf{A}E(\mathbf{Y}\mid \mathbf{Z})$.
4 The interval state space model
with initial value ${\mathbf{\Pi}}_{0}$.
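For concreteness, a sketch of the interval state space model, assuming the standard linear-Gaussian form with interval parameters $\mathbf{A}$, $\mathbf{H}$, $\mathbf{Q}$, $\mathbf{R}$ (the parameter set identified by the EM algorithm in Section 4.1.1):

```latex
% Assumed linear-Gaussian interval state space model (sketch)
\mathbf{x}_{t} = \mathbf{A}\,\mathbf{x}_{t-1} + \mathbf{w}_{t},
\qquad \mathbf{w}_{t} \sim \mathcal{N}(\mathbf{0},\mathbf{Q}), \\
\mathbf{y}_{t} = \mathbf{H}\,\mathbf{x}_{t} + \mathbf{v}_{t},
\qquad \mathbf{v}_{t} \sim \mathcal{N}(\mathbf{0},\mathbf{R}),
```

with initial interval state $\mathbf{x}_{0}\sim \mathcal{N}({\mathit{\mu}}_{0},{\mathbf{\Sigma}}_{0})$, matching the initial conditions of the filter in Section 4.1.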
4.1 The interval Kalman filter
is called the Kalman gain. The initial conditions are ${\mathbf{x}}_{0}^{0}={\mathit{\mu}}_{0}$ and ${\mathbf{P}}_{0}^{0}={\mathbf{\Sigma}}_{0}$.
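For orientation, along each α-slice of the interval parameters the recursions reduce to the standard point-valued Kalman filter. The sketch below is not the paper's interval filter: it is a scalar textbook filter run at two illustrative endpoint parameter slices, whose outputs bracket the interval estimate.

```python
# Minimal scalar Kalman filter (standard, point-valued). The interval
# filter applies these recursions along each alpha-slice of the interval
# parameters; the endpoint slices bracket the interval estimate.
# All parameter and data values below are illustrative.

def kalman_filter(ys, a, h, q, r, m0, p0):
    m, p = m0, p0
    out = []
    for y in ys:
        m_pred = a * m                 # time update (prediction)
        p_pred = a * a * p + q
        k = p_pred * h / (h * h * p_pred + r)  # Kalman gain
        m = m_pred + k * (y - h * m_pred)      # measurement update
        p = (1 - k * h) * p_pred
        out.append((m, p))
    return out

ys = [1.1, 0.9, 1.2, 1.0]
lo = kalman_filter(ys, a=0.9, h=1.0, q=0.1, r=0.5, m0=0.0, p0=1.0)
hi = kalman_filter(ys, a=1.1, h=1.0, q=0.2, r=0.5, m0=0.0, p0=1.0)
# the two endpoint runs give the alpha = 0 and alpha = 1 slices
```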
4.1.1 The EM algorithm in interval setting
 1.
Initialize the procedure by selecting starting values for the elements of the parameter set ${\mathbf{\Theta}}^{(0)}=\{{\mathbf{A}}^{(0)},{\mathbf{H}}^{(0)},{\mathbf{Q}}^{(0)},{\mathbf{R}}^{(0)}\}$ and estimate ${\mathit{\mu}}_{0}$.
 2.
(E-step) For $j=1,2,\dots $ , use the parameter set ${\mathbf{\Theta}}^{(j-1)}$ to estimate the smoothed values ${\mathbf{x}}_{t}^{n}$, ${\mathbf{P}}_{t}^{n}$, ${\mathbf{P}}_{t,t-1}^{n}$ (equations (8)-(10)) for $t=1,2,\dots ,n$.
 3.
(M-step) Use the smoothed values from step 2 to update the parameter estimates ${\mathbf{\Theta}}^{(j)}=\{{\mathbf{A}}^{(j)},{\mathbf{H}}^{(j)},{\mathbf{Q}}^{(j)},{\mathbf{R}}^{(j)}\}$.
 4.
Repeat steps 2 and 3 above until convergence is achieved.
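To illustrate the E/M alternation of steps 2-4 in executable form, the following toy sketch treats a deliberately degenerate special case: a static scalar state $x\sim N({\mu}_{0},{\sigma}_{0}^{2})$ observed as ${y}_{t}=x+{v}_{t}$, with only R estimated. It is not the paper's interval EM; all function names and values are our own.

```python
# Toy EM for a static-state special case of the state space model:
# y_t = x + v_t, x ~ N(mu0, s0) known, v_t ~ N(0, R) with R unknown.
# E-step: Gaussian posterior of x.  M-step: closed-form update of R.

def em_R(ys, mu0, s0, R_init, iters=50):
    n = len(ys)
    R = R_init
    for _ in range(iters):
        # E-step: posterior mean m and variance v of the hidden state x
        prec = 1.0 / s0 + n / R
        m = (mu0 / s0 + sum(ys) / R) / prec
        v = 1.0 / prec
        # M-step: maximize the expected complete-data log-likelihood in R
        R = sum((y - m) ** 2 + v for y in ys) / n
    return R, m, v

ys = [1.3, 0.7, 1.1, 0.9, 1.0]
R_hat, m, v = em_R(ys, mu0=0.0, s0=10.0, R_init=1.0)
# R_hat settles near 0.05; m shrinks slightly below the sample mean 1.0
```

In the paper's setting, the same loop runs with the interval Kalman smoother supplying the E-step quantities and interval updates of the full parameter set in the M-step.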
5 Simulation results
Declarations
Acknowledgements
Dr. O. Al-Gahtani extends his appreciation to the Research Center of Teachers College, King Saud University, for funding his work through research group project No. RGPTCR07. The second and third authors would like to thank King Fahd University of Petroleum and Minerals for the excellent research facilities it provides.
Authors’ Affiliations
References
 Chen G, Wang J, Shieh LS: Interval Kalman filtering. IEEE Trans. Aerosp. Electron. Syst. 1997, 33(1):250–259.
 Alefeld G, Herzberger J: Introduction to Interval Computations. Academic Press, San Diego; 1983.
 Rohn J: Inverse interval matrix. SIAM J. Numer. Anal. 1993, 30(3):864–870.
 Bentbib AH: Conjugate directions method for solving interval linear systems. Numer. Algorithms 1999, 21:79–86. doi:10.1023/A:1019149111226
 Kubica BJ, Malinowski K: Interval random variables and their application in queueing systems with long-tailed service times. SMPS 2006, 393–403.
 Chen W, Tan S: Robust portfolio selection using interval random programming. In: FUZZ-IEEE, Korea, August 20–24, 2009.
 Aubin JP, Frankowska H: Set-Valued Analysis. Birkhäuser, Basel; 1990.
 Ekeland I, Témam R: Convex Analysis and Variational Problems. Classics in Applied Mathematics 28. SIAM, Philadelphia; 1999.
 Jazwinski A: Stochastic Processes and Filtering Theory. Academic Press, New York; 1970.
 Tanizaki H: Nonlinear Filtering: Estimation and Applications. Springer, Berlin; 1996.
 Bilmes JA: A gentle tutorial of the EM algorithm and its applications to parameter estimation for Gaussian mixture and hidden Markov models. Technical report TR-97-021, ICSI; 1997.
 Dempster AP, Laird NM, Rubin DB: Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. B 1977, 39:1–38.
 Shumway RH, Stoffer DS: An approach to time series smoothing and forecasting using the EM algorithm. J. Time Ser. Anal. 1982, 3(4):253–264. doi:10.1111/j.1467-9892.1982.tb00349.x
 Shumway RH, Stoffer DS: Time Series Analysis and Its Applications. Springer, Berlin; 2006.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.