
Filtering and identification of a state space model with linear and bilinear interactions between the states

Abstract

In this paper, we introduce a new bilinear model in state space form. The evolution of this model is linear-bilinear in the state of the system. The classical Kalman filter and smoother are not applicable to this model, and we therefore derive a new Kalman filter and smoother for it. The new algorithm depends on a special linearization of the second-order term that makes use of the best available information about the state of the system. We also derive the expectation maximization (EM) algorithm for the parameter identification of the model. A Monte Carlo simulation is included to illustrate the efficiency of the proposed algorithm, together with an application in which we fit a bilinear model to wind speed data taken from actual measurements. We compare our model with a linear fit to illustrate the superiority of the bilinear model.

1 Introduction

Bilinear systems are a special type of nonlinear system capable of representing a variety of important physical processes. They are used in many real-life applications, such as chemistry, biology, robotics, manufacturing, engineering, and economics [1–4], where linear models are ineffective or inadequate. They have also recently been used to analyze and forecast weather conditions [5–10].

Bilinear systems have three main advantages over linear ones: first, they describe a wider class of problems of practical importance; second, they provide more flexible approximations to nonlinear systems than linear models do; third, one can exploit their rich geometric and algebraic structures, which promise to be a fruitful field of research for scientists [2] as well as practitioners.

Bilinear models were first introduced in the control theory literature in the 1960s [11]. So far, the type of nonlinearity that has been extensively treated and analyzed consists of bilinear interaction between the states of the system and the system input [1, 2, 12]. Aside from their practical importance, these systems are easier to handle because they are reducible to linear ones through the use of a certain Kronecker product. In this work, we treat the case where the nonlinearity of the system consists of bilinear interaction between the states of the system themselves. This means that our model is able to handle evolutions according to the Lotka-Volterra models [6] or the Lorenz weather models [7, 8, 10], thus enabling a wider and more flexible application of such models. To the best of our knowledge, no attempt has been made to treat such systems in the general setting presented here.

The widespread use of bilinear models motivates the need to develop parameter identification algorithms for them. A large body of literature presents methods for the estimation and parameter identification of linear and nonlinear systems [13–21]. The two most widely used techniques are least squares estimation and maximum likelihood estimation.

The maximum likelihood estimate is computed through the well-known EM algorithm [22]. It is an iterative method that improves a current estimate of the system parameters by maximizing the underlying likelihood densities. The algorithm is useful in a variety of incomplete data problems, where algorithms such as the Newton-Raphson method may turn out to be more complicated. It consists of two steps, the Expectation step (E-step) and the Maximization step (M-step); hence the name of the algorithm, first coined by Dempster, Laird, and Rubin in their fundamental paper [22]. In this paper, we develop the EM algorithm for our bilinear system. This also necessitates the development of a Kalman filter and smoother suitable for the nonlinear system at hand. The direct development of the recursions for the nonlinear filters is very complicated, if not impossible altogether. Instead, we develop our recursions based on a linearization of the quadratic term that uses the most current state estimate available.

The remainder of this article is arranged as follows. In Section 2, the bilinear state space model problem is stated along with the underlying assumptions. In Section 3, we derive the bilinear Kalman filter and smoother. Section 4 estimates the unknown parameters in the bilinear state space model via the EM algorithm. Section 5 presents a simulation example that produces very satisfactory results. A real-world example is given in Section 6.

2 The bilinear state space model

In this section, we introduce a bilinear state space model and describe a generalization of the Kalman filter and smoother to this model. Our model subsumes the well-known Lorenz-96 model [7] for weather forecasting, as well as the Lotka-Volterra evolution equations, which appear in many applications in chemistry, biology, and control [4, 6]. Other types of bilinear models were investigated in [2, 11], where bilinearity occurs because of the interaction between the input and the states of the system.

We adopt the geometric notation presented in [17], where the matrix inner product of two random vectors is defined by

\[ \langle x, y \rangle = E\bigl(xy^T\bigr), \]

and

\[ \|x\|^2 = E\bigl(xx^T\bigr) = \langle x, x \rangle. \]

Note that

\[ \langle x, y \rangle = \operatorname{cov}(x, y) + E(x)E(y)^T. \]

Given a sequence \(Y = \{y_1, y_2, \ldots, y_t\}\) of random vectors, the conditional expectation \(E(x \mid Y)\) is interpreted geometrically, with respect to this inner product, as the orthogonal projection of the vector \(x\) onto the space spanned by the vectors of \(Y\). In particular, if \(x\) is uncorrelated with the elements of \(Y\) and has zero mean, then \(x\) is orthogonal to the subspace generated by \(Y\) and \(E(x \mid Y) = 0\). We also use the projection notation

\[ \pi_t x := \pi_Y x := E(x \mid Y). \]

It is characterized by

\[ \langle x - \pi_t x, z \rangle = 0 \]

for all \(z \in M(Y)\), the closed subspace of \(L^2\) consisting of all random vectors \(z\) that can be written as measurable functions of the elements of \(Y\) [3].
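As a quick numerical illustration of this inner product, it can be estimated by sample averaging; a minimal sketch (the matrix M and sample size are our own, not from the paper):

```python
import numpy as np

# Monte Carlo illustration of the matrix inner product <x, y> = E(x y^T).
rng = np.random.default_rng(0)
M = np.array([[1.0, 0.5], [0.0, 1.0]])
x = rng.normal(size=(100_000, 2))   # samples of x ~ N(0, I)
y = x @ M.T                         # y = M x, so <x, y> = E(x x^T) M^T = M^T
inner_xy = x.T @ y / len(x)         # sample estimate; approaches M^T
```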

To introduce the model, let us first define the bilinear function \(a : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^{n(n+1)/2}\) by

\[ a(x, y) = (x_1 y_1, x_1 y_2, \ldots, x_1 y_n, x_2 y_2, x_2 y_3, \ldots, x_2 y_n, \ldots, x_n y_n)^T, \]

where \(a(\cdot, \cdot)\) is similar to the Kronecker product except that repeated entries are omitted. Consider the bilinear state space model given by

\[ x_{k+1} = A x_k + B z_k + w_k, \tag{1} \]
\[ y_k = C x_k + v_k, \tag{2} \]

where \(x_k \in \mathbb{R}^n\) is the state vector, \(y_k \in \mathbb{R}^p\) is the measurement vector, and \(z_k = a(x_k, x_k)\) is the bilinear term given by

\[ z_k = a(x_k, x_k) = \bigl(x_1^2, x_1 x_2, \ldots, x_1 x_n, x_2^2, x_2 x_3, \ldots, x_2 x_n, \ldots, x_n^2\bigr)^T. \]

The matrices are of appropriate dimensions, i.e., \(A \in \mathbb{R}^{n \times n}\), \(B \in \mathbb{R}^{n \times n(n+1)/2}\), and \(C \in \mathbb{R}^{p \times n}\). The uncorrelated noise corruption signals \(w_k\) and \(v_k\) are, as usual, assumed to be white with Gaussian distribution, zero mean, and covariances \(Q\) and \(R\), respectively, i.e.,

\[ w_k \sim N(0, Q), \qquad v_k \sim N(0, R), \qquad \langle w_k, w_l \rangle = Q\,\delta_{kl}, \qquad \langle v_k, v_l \rangle = R\,\delta_{kl}, \]

and

\[ \langle w_k, v_l \rangle = 0. \]
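For concreteness, the map \(a(\cdot,\cdot)\) and the bilinear term \(z_k = a(x_k, x_k)\) can be coded directly; a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def bilinear_term(x, y):
    """a(x, y): stack the products x_i * y_j for i <= j, giving a
    vector of length n(n+1)/2 with no repeated entries."""
    n = len(x)
    return np.array([x[i] * y[j] for i in range(n) for j in range(i, n)])

# For n = 3, z_k = a(x_k, x_k) = (x1^2, x1*x2, x1*x3, x2^2, x2*x3, x3^2)^T:
z = bilinear_term(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 3.0]))
# -> [1. 2. 3. 4. 6. 9.]
```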

Lemma 1 \(\langle w_k, z_l \rangle = 0\) and \(\langle v_k, z_l \rangle = 0\) for \(l \le k\).

Proof Let \(l \le k\). Since \(x_l\) and \(w_k\) are uncorrelated and \(E\{w_k\} = 0\), we have \(w_k \perp x_l\), which means that \(E\{w_k \mid x_l\} = 0\). Hence,

\[ \langle w_k, z_l \rangle = E\bigl\{ E\bigl\{ w_k\, a(x_l, x_l)^T \mid x_l \bigr\} \bigr\} = E\bigl\{ E\{ w_k \mid x_l \}\, a(x_l, x_l)^T \bigr\} = 0. \]

The second equation can be shown in exactly the same way. □

The Taylor polynomial expansion of \(a(x, x)\) about the point \(x_0\) can be written as follows (with \(z = a(x, x)\) and \(z_0 = a(x_0, x_0)\)):

\[ z = z_0 + z'(x_0)(x - x_0) + \tfrac{1}{2} H(x, x_0)(x - x_0), \tag{3} \]

where \(z'(x)\) is the \(\frac{n(n+1)}{2} \times n\) gradient of \(a(x, x)\) given by

\[ z'(x) = \left[ \frac{\partial (x_i x_j)}{\partial x_l} \right], \qquad i \le j, \quad l = 1, 2, \ldots, n, \]

and \(H(x, x_0)\) is given by

\[ H(x, x_0) = \begin{bmatrix} (x - x_0)^T D_1 \\ (x - x_0)^T D_2 \\ \vdots \\ (x - x_0)^T D_m \end{bmatrix}, \]

with \(D_k\) the matrix of second-order derivatives of the \(k\)th entry of \(a(x, x)\) and \(m = \frac{n(n+1)}{2}\). That is,

\[ D_k = \left[ \frac{\partial^2 z_k}{\partial x_i\, \partial x_j} \right]_{i,j = 1, \ldots, n}, \qquad k = 1, 2, \ldots, m. \]

To illustrate, suppose \(n = 3\); then

\[ z'(x) = \begin{bmatrix} 2x_1 & 0 & 0 \\ x_2 & x_1 & 0 \\ x_3 & 0 & x_1 \\ 0 & 2x_2 & 0 \\ 0 & x_3 & x_2 \\ 0 & 0 & 2x_3 \end{bmatrix} \]

and

\[ D_1 = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad D_2 = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad D_3 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix}, \]
\[ D_4 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \quad D_5 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}, \quad D_6 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 2 \end{bmatrix}. \]
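The gradient \(z'(x)\) and the Hessians \(D_k\) can also be generated programmatically for any \(n\); the following sketch (helper names ours) reproduces the \(n = 3\) matrices above:

```python
import numpy as np

def index_pairs(n):
    """Index pairs (i, j), i <= j, in the order used by a(x, x)."""
    return [(i, j) for i in range(n) for j in range(i, n)]

def gradient_z(x):
    """The (n(n+1)/2) x n gradient z'(x); row (i, j) is the gradient of x_i*x_j."""
    n = len(x)
    G = np.zeros((n * (n + 1) // 2, n))
    for k, (i, j) in enumerate(index_pairs(n)):
        G[k, i] += x[j]   # d(x_i x_j)/dx_i = x_j
        G[k, j] += x[i]   # d(x_i x_j)/dx_j = x_i (adds up to 2*x_i when i == j)
    return G

def hessians(n):
    """The constant Hessian D_k of each entry x_i*x_j of a(x, x)."""
    Ds = []
    for i, j in index_pairs(n):
        D = np.zeros((n, n))
        D[i, j] += 1.0
        D[j, i] += 1.0    # diagonal entries x_i^2 get D[i, i] = 2
        Ds.append(D)
    return Ds
```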

Note, for example, that the Lorenz-96 model (with \(n = 3\)) takes the form (1) with \(A = I\) and

\[ B = \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 \end{bmatrix}. \]

3 A bilinear Kalman filter and smoother

In this section, we will develop a Kalman filter and smoother for the bilinear system (1) and (2).

3.1 A bilinear Kalman filter

Given a sequence of measurements \(Y_t = \{y_1, y_2, \ldots, y_t\}\), let

\[
\begin{aligned}
x_k^t &= E(x_k \mid Y_t) := E_t(x_k), \qquad & P_k^t &= \|x_k - x_k^t\|^2, \qquad & z_k^t &= E_t(z_k), \\
\dot P_k^t &= \langle x_k - x_k^t,\; z_k - z_k^t \rangle, & \ddot P_k^t &= \|z_k - z_k^t\|^2. & &
\end{aligned}
\]

When \(x = x_k\), equation (3) becomes

\[ z = z_0 + z'(x_0)(x_k - x_0) + \tfrac{1}{2} H(x_k, x_0)(x_k - x_0). \tag{4} \]

In order to compute equation (4), we approximate the second-degree term \(H(x_k, x_0)\) by using the most current available estimate of \(x_k\); that is:

  • In the case of prediction, we take

\[ x_k \approx x_k^{k-2}, \quad \text{so } H(x_k, x_0) \approx H\bigl(x_k^{k-2}, x_0\bigr). \]

By setting \(x_0 = x_k^{k-1}\), equation (4) becomes

\[ z_k \approx z_k^{k-1} + z'\bigl(x_k^{k-1}\bigr)\bigl(x_k - x_k^{k-1}\bigr) + \tfrac{1}{2} H\bigl(x_k^{k-2}, x_k^{k-1}\bigr)\bigl(x_k - x_k^{k-1}\bigr). \]

  • In the case of filtering, we take

\[ x_k \approx x_k^{k-1}, \quad \text{so } H(x_k, x_0) \approx H\bigl(x_k^{k-1}, x_0\bigr). \]

By setting \(x_0 = x_k^k\), equation (4) becomes

\[ z_k \approx z_k^k + z'\bigl(x_k^k\bigr)\bigl(x_k - x_k^k\bigr) + \tfrac{1}{2} H\bigl(x_k^{k-1}, x_k^k\bigr)\bigl(x_k - x_k^k\bigr). \]

  • In the case of smoothing, we take

\[ x_k \approx x_k^{k+2}, \quad \text{so } H(x_k, x_0) \approx H\bigl(x_k^{k+2}, x_0\bigr). \]

By setting \(x_0 = x_k^{k+1}\), equation (4) becomes

\[ z_k \approx z_k^{k+1} + z'\bigl(x_k^{k+1}\bigr)\bigl(x_k - x_k^{k+1}\bigr) + \tfrac{1}{2} H\bigl(x_k^{k+2}, x_k^{k+1}\bigr)\bigl(x_k - x_k^{k+1}\bigr). \]

In summary, we have the following linearization:

\[ z_k \approx z_k^t + V_k^t\bigl(x_k - x_k^t\bigr), \tag{5} \]

where

\[ V_k^t = z'\bigl(x_k^t\bigr) + \tfrac{1}{2} H\bigl(x_k^{t \pm 1}, x_k^t\bigr). \]
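Reusing gradient_z and hessians from the sketch above, the matrices \(H(x, x_0)\) and \(V_k^t\) of (5) can be formed as follows (a sketch under the same naming assumptions; x_cur stands for the most recent estimate \(x_k^{t\pm1}\) and x0 for the expansion point \(x_k^t\)):

```python
import numpy as np

def H_matrix(x, x0, Ds):
    """H(x, x0): rows (x - x0)^T D_k, one row per entry of a(x, x)."""
    d = np.asarray(x) - np.asarray(x0)
    return np.array([d @ D for D in Ds])

def V_matrix(x_cur, x0, Ds):
    """Linearization slope in (5): V = z'(x0) + 0.5 * H(x_cur, x0)."""
    return gradient_z(np.asarray(x0)) + 0.5 * H_matrix(x_cur, x0, Ds)
```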

We also define the cross-covariances

\[ P_{k_1,k_2}^t = \bigl\langle x_{k_1} - x_{k_1}^t,\; x_{k_2} - x_{k_2}^t \bigr\rangle, \tag{6} \]
\[ \dot P_{k_1,k_2}^t = \bigl\langle x_{k_1} - x_{k_1}^t,\; z_{k_2} - z_{k_2}^t \bigr\rangle, \tag{7} \]
\[ \ddot P_{k_1,k_2}^t = \bigl\langle z_{k_1} - z_{k_1}^t,\; z_{k_2} - z_{k_2}^t \bigr\rangle. \tag{8} \]

Theorem 2 For the bilinear state space model defined by (1) and (2), we have

\[ x_{k+1}^k = A x_k^k + B z_k^k, \tag{9} \]
\[ P_{k+1}^k = A P_k^k A^T + A \dot P_k^k B^T + B \bigl(\dot P_k^k\bigr)^T A^T + B \ddot P_k^k B^T + Q, \tag{10} \]

with

\[
\begin{aligned}
x_{k+1}^{k+1} &= x_{k+1}^k + K_{k+1}\bigl[y_{k+1} - C x_{k+1}^k\bigr], \\
P_{k+1}^{k+1} &= [I - K_{k+1} C]\, P_{k+1}^k, \\
\dot P_{k+1}^{k+1} &= P_{k+1}^{k+1} \bigl[V_{k+1}^{k+1}\bigr]^T, \\
\ddot P_{k+1}^{k+1} &= V_{k+1}^{k+1} \dot P_{k+1}^{k+1}, \\
K_{k+1} &= P_{k+1}^k C^T \bigl[C P_{k+1}^k C^T + R\bigr]^{-1},
\end{aligned}
\]

and

\[ V_{k+1}^{k+1} = z'\bigl(x_{k+1}^{k+1}\bigr) + \tfrac{1}{2} H\bigl(x_{k+1}^k, x_{k+1}^{k+1}\bigr), \qquad k = 0, \ldots, N. \]

Proof Equation (9) is obtained by applying the conditional expectation \(E_k(\cdot)\) to (1):

\[ x_{k+1}^k = E_k(x_{k+1}) = E_k(A x_k + B z_k + w_k) = A E_k(x_k) + B E_k(z_k) + E_k(w_k) = A x_k^k + B z_k^k. \]

To obtain the error recursion (10), we proceed as follows:

\[
\begin{aligned}
P_{k+1}^k &= \|x_{k+1} - x_{k+1}^k\|^2 = \|(I - \pi_k) x_{k+1}\|^2 = \|(I - \pi_k)(A x_k + B z_k + w_k)\|^2 \\
&= \bigl\|A(x_k - x_k^k) + B(z_k - z_k^k) + w_k\bigr\|^2 \\
&= A \|x_k - x_k^k\|^2 A^T + B \|z_k - z_k^k\|^2 B^T + A \langle x_k - x_k^k, z_k - z_k^k \rangle B^T + B \langle z_k - z_k^k, x_k - x_k^k \rangle A^T \\
&\quad + A \langle x_k - x_k^k, w_k \rangle + \langle w_k, x_k - x_k^k \rangle A^T + B \langle z_k - z_k^k, w_k \rangle + \langle w_k, z_k - z_k^k \rangle B^T + \|w_k\|^2 \\
&= A P_k^k A^T + A \dot P_k^k B^T + B \bigl(\dot P_k^k\bigr)^T A^T + B \ddot P_k^k B^T + Q,
\end{aligned}
\]

where the cross terms involving \(w_k\) vanish by Lemma 1.

Now, when \(t = k\), we derive the filtering steps. Let

\[ \rho_k = y_k - E_{k-1}(y_k) = (I - \pi_{k-1}) y_k = (I - \pi_{k-1})(C x_k + v_k) = y_k - C x_k^{k-1} = C\bigl(x_k - x_k^{k-1}\bigr) + v_k, \qquad k = 1, \ldots, N. \]

Then the mean of the innovations is given by

\[ E_{k-1}(\rho_k) = \pi_{k-1}(I - \pi_{k-1}) y_k = 0, \]

and the variance by

\[ \Sigma_{k+1} = \|\rho_{k+1}\|^2 = \bigl\|C(x_{k+1} - x_{k+1}^k) + v_{k+1}\bigr\|^2 = C \|x_{k+1} - x_{k+1}^k\|^2 C^T + \|v_{k+1}\|^2 = C P_{k+1}^k C^T + R. \]

Also,

\[ \langle \rho_{k+1}, y_k \rangle = \langle y_{k+1} - y_{k+1}^k, y_k \rangle = \langle (I - \pi_k) y_{k+1}, y_k \rangle = \langle y_{k+1}, (I - \pi_k) y_k \rangle = 0, \]

which means that the innovations are orthogonal to the past measurements. On the other hand,

\[
\begin{aligned}
\langle x_{k+1}, \rho_{k+1} \rangle &= \bigl\langle x_{k+1}, C(x_{k+1} - x_{k+1}^k) + v_{k+1} \bigr\rangle = \langle x_{k+1}, x_{k+1} - x_{k+1}^k \rangle C^T \\
&= \langle x_{k+1}, (I - \pi_k) x_{k+1} \rangle C^T = \langle (I - \pi_k) x_{k+1}, (I - \pi_k) x_{k+1} \rangle C^T \\
&= \|(I - \pi_k) x_{k+1}\|^2 C^T = P_{k+1}^k C^T.
\end{aligned}
\]

From these results, we conclude that \(x_{k+1}\) and \(\rho_{k+1}\) have a jointly Gaussian distribution conditional on \(Y_k\); that is,

\[ \left\{ \begin{pmatrix} x_{k+1} \\ \rho_{k+1} \end{pmatrix} \,\middle|\, \{y_t\}_1^k \right\} \sim N \left\{ \begin{pmatrix} x_{k+1}^k \\ 0 \end{pmatrix}, \begin{pmatrix} P_{k+1}^k & P_{k+1}^k C^T \\ C P_{k+1}^k & \Sigma_{k+1} \end{pmatrix} \right\}. \]

Now, since \(Y_k\) and \(\rho_{k+1}\) are orthogonal,

\[
\begin{aligned}
x_{k+1}^{k+1} &= \pi_{k+1}(x_{k+1}) = \pi_{\{Y_k, \rho_{k+1}\}}(x_{k+1}) = \pi_{Y_k}(x_{k+1}) + \pi_{\rho_{k+1}}(x_{k+1}) \\
&= E_k(x_{k+1}) + \langle x_{k+1}, \rho_{k+1} \rangle \Sigma_{k+1}^{-1} \rho_{k+1} \\
&= x_{k+1}^k + P_{k+1}^k C^T \bigl[C P_{k+1}^k C^T + R\bigr]^{-1} \rho_{k+1} \\
&= x_{k+1}^k + K_{k+1}\bigl[y_{k+1} - C x_{k+1}^k\bigr],
\end{aligned}
\]

where

\[ K_{k+1} = P_{k+1}^k C^T \bigl[C P_{k+1}^k C^T + R\bigr]^{-1} = P_{k+1}^k C^T \Sigma_{k+1}^{-1} \]

represents the Kalman gain.

Next, we derive the recursion for \(P_{k+1}^{k+1}\). Since \(x_{k+1} - x_{k+1}^k = (x_{k+1} - x_{k+1}^{k+1}) + \pi_{\rho_{k+1}}(x_{k+1} - x_{k+1}^k)\) is an orthogonal decomposition,

\[
\begin{aligned}
P_{k+1}^{k+1} &= \|x_{k+1} - x_{k+1}^{k+1}\|^2 = \|x_{k+1} - x_{k+1}^k\|^2 - \bigl\|\pi_{\rho_{k+1}}(x_{k+1} - x_{k+1}^k)\bigr\|^2 \\
&= P_{k+1}^k - \bigl\| \langle x_{k+1} - x_{k+1}^k, \rho_{k+1} \rangle \Sigma_{k+1}^{-1} \rho_{k+1} \bigr\|^2 \\
&= P_{k+1}^k - P_{k+1}^k C^T \Sigma_{k+1}^{-1} \|\rho_{k+1}\|^2 \Sigma_{k+1}^{-1} C P_{k+1}^k \\
&= P_{k+1}^k - P_{k+1}^k C^T \Sigma_{k+1}^{-1} C P_{k+1}^k = P_{k+1}^k - K_{k+1} C P_{k+1}^k = [I - K_{k+1} C]\, P_{k+1}^k.
\end{aligned}
\]

The equation for \(\dot P_{k+1}^{k+1}\) is obtained, using the linearization (5), as follows:

\[
\begin{aligned}
\dot P_{k+1}^{k+1} &= \langle x_{k+1} - x_{k+1}^{k+1},\; z_{k+1} - z_{k+1}^{k+1} \rangle = \bigl\langle x_{k+1} - x_{k+1}^{k+1},\; V_{k+1}^{k+1}(x_{k+1} - x_{k+1}^{k+1}) \bigr\rangle \\
&= \langle x_{k+1} - x_{k+1}^{k+1},\; x_{k+1} - x_{k+1}^{k+1} \rangle \bigl[V_{k+1}^{k+1}\bigr]^T = P_{k+1}^{k+1} \bigl[V_{k+1}^{k+1}\bigr]^T.
\end{aligned}
\]

Finally, for \(\ddot P_{k+1}^{k+1}\) we have

\[
\begin{aligned}
\ddot P_{k+1}^{k+1} &= \langle z_{k+1} - z_{k+1}^{k+1},\; z_{k+1} - z_{k+1}^{k+1} \rangle = V_{k+1}^{k+1} \langle x_{k+1} - x_{k+1}^{k+1},\; x_{k+1} - x_{k+1}^{k+1} \rangle \bigl[V_{k+1}^{k+1}\bigr]^T \\
&= V_{k+1}^{k+1} P_{k+1}^{k+1} \bigl[V_{k+1}^{k+1}\bigr]^T = V_{k+1}^{k+1} \dot P_{k+1}^{k+1}.
\end{aligned}
\]

This completes the proof. □

We summarize the bilinear Kalman filter as follows:

\[
\begin{aligned}
x_{k+1}^k &= A x_k^k + B z_k^k, \\
x_{k+1}^{k+1} &= x_{k+1}^k + K_{k+1}\bigl[y_{k+1} - C x_{k+1}^k\bigr],
\end{aligned} \tag{11}
\]
\[
\begin{aligned}
P_{k+1}^k &= A P_k^k A^T + A \dot P_k^k B^T + B \bigl(\dot P_k^k\bigr)^T A^T + B \ddot P_k^k B^T + Q, \\
P_{k+1}^{k+1} &= [I - K_{k+1} C]\, P_{k+1}^k, \qquad K_{k+1} = P_{k+1}^k C^T \bigl[C P_{k+1}^k C^T + R\bigr]^{-1},
\end{aligned} \tag{12}
\]

and

\[ V_k^k = z'\bigl(x_k^k\bigr) + \tfrac{1}{2} H\bigl(x_k^{k-1}, x_k^k\bigr), \qquad k = 0, \ldots, N. \]

Note that the bilinear Kalman filter algorithm is a generalization of the Kalman filter for the linear case given in [17].
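Putting the pieces together, one predict/update cycle of Theorem 2 might look as follows. This is a sketch, not the authors' code: it reuses the helpers from the earlier sketches and approximates \(z_k^k\) by \(a(x_k^k, x_k^k)\), which is one common way to evaluate the conditional mean of the bilinear term.

```python
import numpy as np

def bilinear_kf_step(x_filt, P_filt, y_next, x_pred_prev, A, B, C, Q, R, Ds):
    """One cycle of the bilinear Kalman filter (Theorem 2).
    x_filt, P_filt: x_k^k and P_k^k; x_pred_prev: x_k^{k-1} (used in V_k^k);
    y_next: the new measurement y_{k+1}; R given as a p x p matrix."""
    V = V_matrix(x_pred_prev, x_filt, Ds)    # V_k^k = z'(x_k^k) + 0.5 H(x_k^{k-1}, x_k^k)
    z_filt = bilinear_term(x_filt, x_filt)   # approximation of z_k^k
    P_dot = P_filt @ V.T                     # P-dot_k^k
    P_ddot = V @ P_dot                       # P-ddot_k^k

    # Prediction, equations (9)-(10)
    x_pred = A @ x_filt + B @ z_filt
    P_pred = (A @ P_filt @ A.T + A @ P_dot @ B.T
              + B @ P_dot.T @ A.T + B @ P_ddot @ B.T + Q)

    # Measurement update
    S = C @ P_pred @ C.T + R                 # innovation covariance Sigma_{k+1}
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain K_{k+1}
    x_new = x_pred + K @ (y_next - C @ x_pred)
    P_new = (np.eye(len(x_filt)) - K @ C) @ P_pred
    return x_new, P_new, x_pred, P_pred
```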

3.2 A bilinear Kalman smoother

In this subsection, we will develop a Kalman smoother for the bilinear system (1) and (2). We will use the following notation:

\[
\begin{aligned}
P_{k_1,k_2}^N &= \langle x_{k_1} - x_{k_1}^N,\; x_{k_2} - x_{k_2}^N \rangle, \\
\dot P_{k_1,k_2}^N &= \langle x_{k_1} - x_{k_1}^N,\; z_{k_2} - z_{k_2}^N \rangle, \\
\ddot P_{k_1,k_2}^N &= \langle z_{k_1} - z_{k_1}^N,\; z_{k_2} - z_{k_2}^N \rangle.
\end{aligned}
\]

Lemma 3 Let

\[ \epsilon_{k+1} = \{v_{k+1}, \ldots, v_N, w_{k+2}, \ldots, w_N\}. \tag{13} \]

Then for \(1 \le k \le N-1\) and with the approximation (5),

\[ L\bigl\{\{y_m\}_1^N\bigr\} = L\bigl\{\{y_m\}_1^k,\; x_{k+1} - x_{k+1}^k,\; \epsilon_{k+1}\bigr\}, \tag{14} \]

where \(L\{\cdot\}\) denotes the subspace spanned by \(\{\cdot\}\).

Proof Recall that

\[ z_m = z_m^N + V_m^N\bigl(x_m - x_m^N\bigr), \]

that is,

\[ \bigl(z_m - z_m^N\bigr) \in L\bigl\{x_m - x_m^N\bigr\}. \]

Since

\[ y_{k+1} = C x_{k+1} + v_{k+1} = C x_{k+1}^k + C\bigl(x_{k+1} - x_{k+1}^k\bigr) + v_{k+1}, \]

the measurement \(y_{k+1}\) lies in the right-hand subspace of (14). Similarly, since

\[ y_{k+2} = C x_{k+2} + v_{k+2} = C\bigl(A x_{k+1} + B z_{k+1} + w_{k+2}\bigr) + v_{k+2}, \]

and \(z_{k+1}\) lies, by the approximation (5), in the subspace generated by \(\{y_m\}_1^k\) and \(x_{k+1} - x_{k+1}^k\), so does \(y_{k+2}\). Continuing in this manner, we get (14). □

We state the bilinear Kalman smoother in the following theorem.

Theorem 4 Consider the bilinear state space model (1) and (2) with \(x_N^N\) and \(P_N^N\) as given in (11) and (12). Then for \(k = N-1, \ldots, 1\), we have

\[ x_k^N = x_k^k + J_k\bigl(x_{k+1}^N - x_{k+1}^k\bigr), \tag{15} \]
\[ P_k^N = P_k^k - J_k P_{k+1}^k J_k^T, \tag{16} \]

where

\[ J_k = \bigl[P_k^k A^T + \dot P_k^k B^T\bigr] \bigl[P_{k+1}^k\bigr]^{-1}. \]

Proof Noting the mutual orthogonality of \(\{y\}_1^k\), \(\{x_{k+1} - x_{k+1}^k\}\), and \(\epsilon_{k+1}\), and the orthogonality of \(x_k\) and \(\epsilon_{k+1}\),

\[
\begin{aligned}
x_k^N = \pi_N x_k &= \pi_k x_k + \pi_{(x_{k+1} - x_{k+1}^k)} x_k \\
&= x_k^k + \langle x_k, x_{k+1} - x_{k+1}^k \rangle \,\|x_{k+1} - x_{k+1}^k\|^{-2} \bigl(x_{k+1} - x_{k+1}^k\bigr) \\
&= x_k^k + \langle x_k, x_{k+1} - x_{k+1}^k \rangle \bigl[P_{k+1}^k\bigr]^{-1} \bigl(x_{k+1} - x_{k+1}^k\bigr).
\end{aligned}
\]

Now,

\[
\begin{aligned}
\langle x_k, x_{k+1} - x_{k+1}^k \rangle &= \langle x_k, A x_k + B z_k + w_k - x_{k+1}^k \rangle = \bigl\langle x_k, A(x_k - x_k^k) + B(z_k - z_k^k) \bigr\rangle \\
&= \langle x_k - x_k^k, x_k - x_k^k \rangle A^T + \langle x_k - x_k^k, z_k - z_k^k \rangle B^T \\
&= P_k^k A^T + \dot P_k^k B^T.
\end{aligned}
\]

Thus,

\[ x_k^N = x_k^k + \bigl[P_k^k A^T + \dot P_k^k B^T\bigr]\bigl[P_{k+1}^k\bigr]^{-1}\bigl(x_{k+1} - x_{k+1}^k\bigr) = x_k^k + J_k\bigl(x_{k+1} - x_{k+1}^k\bigr). \]

Equation (15) now follows by taking the projection \(\pi_N\) again of both sides and noting that \(k \le N\). To derive (16), we compute

\[
\begin{aligned}
P_k^N &= \|x_k - x_k^N\|^2 = \bigl\|x_k - x_k^k - J_k\bigl(x_{k+1} - x_{k+1}^k\bigr)\bigr\|^2 \\
&= \|x_k - x_k^k\|^2 - \langle x_k - x_k^k, x_{k+1} - x_{k+1}^k \rangle J_k^T - J_k \langle x_{k+1} - x_{k+1}^k, x_k - x_k^k \rangle + J_k P_{k+1}^k J_k^T \\
&= P_k^k - \langle (I - \pi_k) x_k, x_{k+1} \rangle J_k^T - J_k \langle x_{k+1}, (I - \pi_k) x_k \rangle + J_k P_{k+1}^k J_k^T \\
&= P_k^k - \langle (I - \pi_k) x_k, A x_k + B z_k \rangle J_k^T - J_k \langle A x_k + B z_k, (I - \pi_k) x_k \rangle + J_k P_{k+1}^k J_k^T \\
&= P_k^k - \bigl(P_k^k A^T + \dot P_k^k B^T\bigr) J_k^T - J_k \bigl(A P_k^k + B (\dot P_k^k)^T\bigr) + J_k P_{k+1}^k J_k^T \\
&= P_k^k - J_k P_{k+1}^k J_k^T - J_k P_{k+1}^k J_k^T + J_k P_{k+1}^k J_k^T = P_k^k - J_k P_{k+1}^k J_k^T,
\end{aligned}
\]

which completes the proof. □

The next theorem states the bilinear lag-one recursions.

Theorem 5 Consider the bilinear state space model (1) and (2). Then

\[ P_{k+1,k}^N = A P_k^N + B \bigl(\dot P_k^N\bigr)^T, \qquad \dot P_{k+1,k}^N = P_{k+1,k}^N \bigl[V_k^N\bigr]^T. \]

Proof Using the definitions in (6) and (7),

\[
\begin{aligned}
P_{k+1,k}^N &= \langle x_{k+1} - x_{k+1}^N,\; x_k - x_k^N \rangle = \langle (I - \pi_N) x_{k+1}, (I - \pi_N) x_k \rangle = \langle x_{k+1}, (I - \pi_N) x_k \rangle \\
&= \langle A x_k + B z_k + w_k, (I - \pi_N) x_k \rangle = A \langle x_k, (I - \pi_N) x_k \rangle + B \langle z_k, (I - \pi_N) x_k \rangle \\
&= A P_k^N + B \bigl(\dot P_k^N\bigr)^T.
\end{aligned}
\]

Also,

\[
\begin{aligned}
\dot P_{k+1,k}^N &= \langle x_{k+1} - x_{k+1}^N,\; z_k - z_k^N \rangle = \bigl\langle x_{k+1} - x_{k+1}^N,\; V_k^N(x_k - x_k^N) \bigr\rangle \\
&= \langle x_{k+1} - x_{k+1}^N,\; x_k - x_k^N \rangle \bigl[V_k^N\bigr]^T = P_{k+1,k}^N \bigl[V_k^N\bigr]^T.
\end{aligned}
\]

 □
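In the same spirit, here is a sketch of one backward step of the smoother (Theorem 4) together with the lag-one recursions (Theorem 5); the argument names follow the conventions of the earlier sketches and are ours:

```python
import numpy as np

def bilinear_ks_step(x_filt, P_filt, P_dot_filt, x_pred_next, P_pred_next,
                     x_smooth_next, A, B):
    """One backward step of Theorem 4:
    J_k = (P_k^k A^T + P-dot_k^k B^T) (P_{k+1}^k)^{-1},
    x_k^N = x_k^k + J_k (x_{k+1}^N - x_{k+1}^k),
    P_k^N = P_k^k - J_k P_{k+1}^k J_k^T."""
    J = (P_filt @ A.T + P_dot_filt @ B.T) @ np.linalg.inv(P_pred_next)
    x_sm = x_filt + J @ (x_smooth_next - x_pred_next)
    P_sm = P_filt - J @ P_pred_next @ J.T
    return x_sm, P_sm, J

def lag_one(P_sm, P_dot_sm, V_sm, A, B):
    """Lag-one recursions of Theorem 5:
    P_{k+1,k}^N = A P_k^N + B (P-dot_k^N)^T,
    P-dot_{k+1,k}^N = P_{k+1,k}^N (V_k^N)^T."""
    P_lag = A @ P_sm + B @ P_dot_sm.T
    return P_lag, P_lag @ V_sm.T
```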

4 The bilinear EM algorithm

The unknown parameter set \(\theta = \{A, B, C, Q, R, V, \mu\}\) is estimated by the EM algorithm, which iteratively updates the current estimate \(\theta(i)\) of θ by maximizing the likelihood function

\[ L(\theta, X_N, Y_N) = f_0(x_0) \prod_{k=1}^N f_v(y_k - C x_k) \prod_{k=0}^{N-1} f_w(x_{k+1} - A x_k - B z_k), \tag{17} \]

where

  • f 0 () represents the n-variate normal density of the initial state x 0 with mean μ and the covariance matrix V.

  • f v () represents the p-variate normal density with zero mean and the covariance matrix R.

  • f w () represents the n-variate normal density function with zero mean and the covariance matrix Q.

The conditional expectation step (E-step) finds the expected value of the missing data, i.e., \(X_N\), given the observed data and the current parameter estimate, and substitutes these expectations for the missing data. Specifically, if \(\theta(i-1)\) is the current estimate of the parameter θ, the E-step computes the conditional expectation of the complete-data log-likelihood given \(\theta(i-1)\):

\[ q\bigl(\theta \mid \theta(i-1)\bigr) = E\bigl\{ \log L(\theta, X_N, Y_N) \mid Y_N, \theta(i-1) \bigr\}. \tag{18} \]

The M-step determines \(\theta(i)\) by maximizing the expected complete-data log-likelihood:

\[ q\bigl(\theta(i) \mid \theta(i-1)\bigr) \ge q\bigl(\theta \mid \theta(i-1)\bigr), \qquad \forall \theta. \]

The following theorem accomplishes the expectation step.

Theorem 6 For the bilinear state space model (1) and (2),

\[
\begin{aligned}
q\bigl(\theta(i) \mid \theta(i-1)\bigr) = {}& -\tfrac{1}{2} \log |V| - \tfrac{1}{2} \operatorname{Tr}\bigl\{ V^{-1}\bigl(\Delta - \hat x_0 \mu^T - \mu \hat x_0^T + \mu \mu^T\bigr) \bigr\} \\
& - \tfrac{N}{2} \log |Q| - \tfrac{1}{2} \operatorname{Tr}\bigl\{ Q^{-1}\bigl(\Theta - \Psi A^T - \Pi B^T - A \Psi^T + A \Phi A^T - B \Pi^T + B \Lambda B^T\bigr) \bigr\} \\
& - \tfrac{N}{2} \log |R| - \tfrac{1}{2} \operatorname{Tr}\bigl\{ R^{-1}\bigl(\delta - C \Omega - \Omega^T C^T + C \Phi C^T\bigr) \bigr\} + \text{const},
\end{aligned}
\]

where

\[
\begin{aligned}
\Delta &= E_N\bigl(x_0 x_0^T\bigr), & \hat x_0 &= E_N(x_0), \\
\Theta &= \sum_{k=1}^N \bigl(x_k^N (x_k^N)^T + P_k^N\bigr), & \Psi &= \sum_{k=1}^N \bigl(x_k^N (x_{k-1}^N)^T + P_{k,k-1}^N\bigr), \\
\Pi &= \sum_{k=1}^N \bigl(x_k^N (z_{k-1}^N)^T + \dot P_{k,k-1}^N\bigr), & \Phi &= \sum_{k=1}^N \bigl(x_{k-1}^N (x_{k-1}^N)^T + P_{k-1}^N\bigr), \\
\Gamma &= \sum_{k=1}^N E_N\bigl(x_{k-1} z_{k-1}^T\bigr), & \Lambda &= \sum_{k=0}^{N-1} \bigl(z_k^N (z_k^N)^T + \ddot P_k^N\bigr), \\
\Omega &= \sum_{k=1}^N x_k^N y_k^T, & \delta &= \sum_{k=1}^N y_k y_k^T.
\end{aligned}
\]

Proof Since the system is Markovian, we may use Bayes' rule successively to get

\[ p(\theta, X_N, Y_N) = p(y_1, \ldots, y_N, x_0, \ldots, x_N) \tag{19} \]
\[ = p(x_0) \prod_{k=1}^N p(y_k \mid x_k) \prod_{k=0}^{N-1} p(x_{k+1} \mid x_k). \tag{20} \]

From the assumptions on \(x_0\), \(w_k\), and \(v_k\), the density functions \(p(x_0)\), \(p(y_k \mid x_k)\), and \(p(x_{k+1} \mid x_k)\) are given by

\[ p(x_0) = \frac{1}{(2\pi)^{n/2} |V|^{1/2}} \exp\Bigl\{ -\tfrac{1}{2}(x_0 - \mu)^T V^{-1} (x_0 - \mu) \Bigr\}, \]
\[ p(x_{k+1} \mid x_k) = \frac{1}{(2\pi)^{n/2} |Q|^{1/2}} \exp\Bigl\{ -\tfrac{1}{2}(x_{k+1} - A x_k - B z_k)^T Q^{-1} (x_{k+1} - A x_k - B z_k) \Bigr\}, \]

and

\[ p(y_k \mid x_k) = \frac{1}{(2\pi)^{p/2} |R|^{1/2}} \exp\Bigl\{ -\tfrac{1}{2}(y_k - C x_k)^T R^{-1} (y_k - C x_k) \Bigr\}. \]

Now, substituting these densities in (17) and taking the logarithm of both sides, we get

\[
\begin{aligned}
\log L(\theta, X_N, Y_N) = {}& -\tfrac{1}{2} \log |V| - \tfrac{1}{2}(x_0 - \mu)^T V^{-1} (x_0 - \mu) \\
& - \tfrac{N}{2} \log |Q| - \tfrac{1}{2} \sum_{k=0}^{N-1} (x_{k+1} - A x_k - B z_k)^T Q^{-1} (x_{k+1} - A x_k - B z_k) \\
& - \tfrac{N}{2} \log |R| - \tfrac{1}{2} \sum_{k=1}^N (y_k - C x_k)^T R^{-1} (y_k - C x_k) + \text{const}.
\end{aligned}
\]

The result follows upon taking the expectation conditional on \(Y_N\), making use of

\[ E_N\bigl(x^T A y\bigr) = \operatorname{Tr}\bigl[A\, E_N\bigl(y x^T\bigr)\bigr], \qquad E_N\bigl(x_k z_k^T\bigr) = 0, \qquad E_N\bigl(x_{k+1} z_k^T\bigr) = B\, E_N\bigl(z_k z_k^T\bigr), \]

and simplifying. The middle equality follows from the fact that odd moments of Gaussian random variables vanish. □

The computation of Θ, Φ, Ψ, Π, and Λ given a current estimate \(\theta(i)\) of θ involves the bilinear Kalman filter and smoother introduced in Sections 3.1 and 3.2. For this purpose, we introduce

\[
\begin{aligned}
\Theta &= \sum_{k=1}^N E_t\bigl(x_k x_k^T\bigr) = \sum_{k=1}^N \bigl(x_k^t (x_k^t)^T + P_k^t\bigr), & \Phi &= \sum_{k=1}^N \bigl(x_{k-1}^t (x_{k-1}^t)^T + P_{k-1}^t\bigr), \\
\Psi &= \sum_{k=1}^N \bigl(x_k^t (x_{k-1}^t)^T + P_{k,k-1}^t\bigr), & \Pi &= \sum_{k=1}^N \bigl(x_k^t (z_{k-1}^t)^T + \dot P_{k,k-1}^t\bigr), \\
\Lambda &= \sum_{k=0}^{N-1} \bigl(z_k^t (z_k^t)^T + \ddot P_k^t\bigr). & &
\end{aligned}
\]
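Concretely, these sums can be accumulated from the smoother output in one pass; a sketch with our own index conventions (Python lists indexed from 0, with xs[k] = \(x_k^N\) for k = 0..N and ys[k-1] = \(y_k\)):

```python
import numpy as np

def e_step_stats(xs, Ps, P_lag, P_dot_lag, zs, P_ddots, ys):
    """Smoothed sufficient statistics of Theorem 6.
    xs[k] = x_k^N, Ps[k] = P_k^N for k = 0..N; zs[k] = z_k^N and
    P_ddots[k] = P-ddot_k^N for k = 0..N-1; P_lag[k] = P_{k,k-1}^N and
    P_dot_lag[k] = P-dot_{k,k-1}^N for k = 1..N (index 0 unused)."""
    N = len(ys)
    Theta = sum(np.outer(xs[k], xs[k]) + Ps[k] for k in range(1, N + 1))
    Phi   = sum(np.outer(xs[k-1], xs[k-1]) + Ps[k-1] for k in range(1, N + 1))
    Psi   = sum(np.outer(xs[k], xs[k-1]) + P_lag[k] for k in range(1, N + 1))
    Pi    = sum(np.outer(xs[k], zs[k-1]) + P_dot_lag[k] for k in range(1, N + 1))
    Lam   = sum(np.outer(zs[k], zs[k]) + P_ddots[k] for k in range(N))
    Omega = sum(np.outer(xs[k], ys[k-1]) for k in range(1, N + 1))
    delta = sum(np.outer(y, y) for y in ys)
    return dict(Theta=Theta, Phi=Phi, Psi=Psi, Pi=Pi, Lam=Lam,
                Omega=Omega, delta=delta)
```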

The next step of the EM algorithm is to maximize the function q(θ(i)|θ(i1)) with respect to θ.

Theorem 7 The maximizer of \(q(\theta(i) \mid \theta(i-1))\) is attained at the parameter vector θ given by

\[ \mu = \hat x_0, \qquad V = \Delta - \hat x_0 \hat x_0^T, \tag{21} \]
\[ A = \Psi \Phi^{-1}, \qquad B = \Pi \Lambda^{-1}, \qquad Q = \frac{1}{N}\bigl(\Theta - A \Psi^T - B \Pi^T\bigr), \tag{22} \]
\[ C = \Omega^T \Phi^{-1}, \qquad R = \frac{1}{N}\bigl(\delta - C \Omega\bigr). \tag{23} \]

Proof Let \(q_1(\mu, V)\), \(q_2(A, B, Q)\), and \(q_3(C, R)\) denote, respectively, the first, second, and third pairs of terms in the expression for \(q(\theta(i) \mid \theta(i-1))\) in Theorem 6. Then

\[ q\bigl(\theta(i) \mid \theta(i-1)\bigr) = q_1(\mu, V) + q_2(A, B, Q) + q_3(C, R) + \text{const}, \]

which means that \(q(\theta(i) \mid \theta(i-1))\) is maximized by separately maximizing \(q_1\), \(q_2\), and \(q_3\). This is done by setting the partial derivative of q with respect to each parameter equal to zero (i.e., \(\partial q / \partial x = 0\)) and solving the resulting system of equations. □

The EM algorithm for the bilinear state space model is summarized as follows.

Bilinear EM algorithm

  1. Initialize the EM algorithm by choosing an initial value θ(0).

  2. Calculate the incomplete-data likelihood \(\log L(Y_N; \theta)\).

  3. Execute the E-step by using the bilinear Kalman filter and smoother in (9)-(10) and (15)-(16), respectively.

  4. Execute the M-step using (21)-(23) and update the estimate of θ to obtain θ(i) (see the sketch after this list).

  5. Repeat Steps 2 to 4 until convergence.
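A sketch of the M-step updates (21)-(23), applied to the statistics returned by e_step_stats above (again our naming, not the authors' code):

```python
import numpy as np

def m_step(stats, N):
    """M-step updates (21)-(23) from the smoothed sufficient statistics.
    stats is the dict returned by e_step_stats, augmented with
    'Delta' = E_N(x_0 x_0^T) and 'x0' = E_N(x_0)."""
    mu = stats['x0']
    V  = stats['Delta'] - np.outer(mu, mu)                    # (21)
    A  = stats['Psi'] @ np.linalg.inv(stats['Phi'])           # (22)
    B  = stats['Pi'] @ np.linalg.inv(stats['Lam'])
    Q  = (stats['Theta'] - A @ stats['Psi'].T - B @ stats['Pi'].T) / N
    C  = stats['Omega'].T @ np.linalg.inv(stats['Phi'])       # (23)
    R  = (stats['delta'] - C @ stats['Omega']) / N
    return dict(mu=mu, V=V, A=A, B=B, Q=Q, C=C, R=R)
```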

5 Simulation results

A Monte Carlo simulation with 1,000 runs is performed to illustrate the utility of the bilinear algorithm. The observed data are generated according to the second-order bilinear state space model

(24)
(25)

where \(w_k\) and \(v_k\) are independent, identically distributed (i.i.d.) Gaussian noises such that

\[ w_k \sim N(0,\; 0.01 \times I_2), \qquad v_k \sim N(0,\; 0.01). \]
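To generate synthetic data of this kind, one can simulate (1)-(2) directly; a sketch reusing bilinear_term from Section 2 (the specific system matrices of (24)-(25) are not reproduced here, so A, B, C are left as inputs):

```python
import numpy as np

def simulate(A, B, C, Q, R, x0, N, seed=0):
    """Draw a trajectory (x_1..x_N, y_1..y_N) from the bilinear model (1)-(2).
    Q and R are covariance matrices (R is p x p, even when p = 1)."""
    rng = np.random.default_rng(seed)
    n, p = A.shape[0], C.shape[0]
    x, xs, ys = np.asarray(x0, float), [], []
    for _ in range(N):
        x = (A @ x + B @ bilinear_term(x, x)
             + rng.multivariate_normal(np.zeros(n), Q))        # state equation (1)
        y = C @ x + rng.multivariate_normal(np.zeros(p), R)    # measurement (2)
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

# Noise levels used in the simulation study: Q = 0.01 * I_2 and R = [[0.01]].
```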

In all simulations, the number of EM iterations is fixed at J = 100.

Figures 1 and 2 show sample realizations of the input noise \(w_k\) and the output noise \(v_k\), respectively. Figure 3 compares the observed and estimated output signals. The average parameter estimates are

\[ A = \begin{bmatrix} 0.3891 & 0.1015 \\ 0.09812 & 0.2117 \end{bmatrix}, \qquad B = \begin{bmatrix} 0.0012 & 1.1016 & 0 \\ 0 & 0.0345 & 0.9761 \end{bmatrix}, \qquad C = \begin{bmatrix} 0.0927 & 1.014 \end{bmatrix}. \]

The mean square error (MSE) is defined as

\[ E_N = \frac{1}{N} \sum_{k=1}^N \bigl(y_k - C x_{k|k-1}\bigr)^2, \]

and its value over the 1,000 runs, for different values of the noise covariances Q and R, is shown in Table 1.

Figure 1. Input noise.

Figure 2. Output noise.

Figure 3. Observed and estimated output signals.

Table 1 Comparison of the mean square errors

6 Application to wind speed

In this section, we apply the proposed bilinear algorithm to daily averaged wind speed data for Arar, a city located in the northeastern region of the Kingdom of Saudi Arabia, over a period of 16.5 years, as shown in Figure 4. It should be noted that all the calculations are carried out on normalized time series data.

Figure 4. Wind speed data.

To estimate the dimension of the state in the state space model, we apply the stochastic subspace system identification algorithm described in [23]. This is done by constructing the singular value diagram of the block Hankel matrix of the normalized wind speed data, shown in Figure 5: the dimension of the state equals the number of significant singular values, here \(n = 2\). For comparison, Figure 6 plots the observed wind speed values against those estimated by a linear model [3] and by our proposed algorithm over a period of 100 days. The estimated parameters for the linear state space model are

\[ A = \begin{bmatrix} 0.9475 & 0.1991 \\ 0.1968 & 0.0935 \end{bmatrix}, \qquad C = \begin{bmatrix} 0.5355 & 0.3492 \end{bmatrix}, \]

and for the bilinear state space model, they are

\[ A = \begin{bmatrix} 0.9475 & 0.1991 \\ 0.1968 & 0.0935 \end{bmatrix}, \qquad C = \begin{bmatrix} 0.5355 & 0.3492 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 & 1.092 & 1.092 \\ 0.01092 & 0.0557 & 0.1092 \end{bmatrix}. \]

The MSE of the estimated wind speed data is 0.3864 for the linear EM algorithm and 0.0197 for the bilinear EM algorithm.

Figure 5. SVD for wind speed.

Figure 6. Estimated wind speed.

References

  1. Krener AJ: Bilinear and nonlinear realizations of input-output maps. SIAM J. Control 1975, 13(4):827-834. doi:10.1137/0313049

  2. Pardalos PM, Yatsenko V: Optimization and Control of Bilinear Systems: Theory, Algorithms and Applications. Springer, Berlin; 2008.

  3. Shumway R, Stoffer D: An approach to time series smoothing and forecasting using the EM algorithm. J. Time Ser. Anal. 1982, 3(4):253-264. doi:10.1111/j.1467-9892.1982.tb00349.x

  4. Strogatz SH: Nonlinear Dynamics and Chaos, with Applications to Physics, Biology, Chemistry, and Engineering. Perseus Books, New York; 1994.

  5. Galanis G, Anadranistakis M: A one-dimensional Kalman filter for the correction of near surface temperature forecasts. Meteorol. Appl. 2002, 9:437-441. doi:10.1017/S1350482702004061

  6. Goel NS: On the Volterra and Other Nonlinear Models of Interacting Populations. Academic Press, San Diego; 1971.

  7. Lorenz EN, Emanuel KE: Optimal sites for supplementary weather observations: simulations with a small model. J. Atmos. Sci. 1998, 55:399-414. doi:10.1175/1520-0469(1998)055<0399:OSFSWO>2.0.CO;2

  8. Lorenz EN: Designing chaotic models. J. Atmos. Sci. 2005, 62:1574-1588. doi:10.1175/JAS3430.1

  9. Monbet V, Ailliot P, Prevosto M: Survey of stochastic models for wind and sea state time series. Probab. Eng. Mech. 2007, 22:113-126. doi:10.1016/j.probengmech.2006.08.003

  10. Roy D, Musielak ZE: Generalized Lorenz models and their routes to chaos. III. Energy-conserving horizontal and vertical mode truncations. Chaos Solitons Fractals 2007, 33:1064-1070. doi:10.1016/j.chaos.2006.05.084

  11. Priestley MB: Non-Linear and Non-Stationary Time Series Analysis. Academic Press, San Diego; 1989.

  12. Gibson S, Wills A, Ninness B: Maximum-likelihood parameter estimation of bilinear systems. IEEE Trans. Autom. Control 2005, 50:1581-1596.

  13. Anderson JL, Anderson SL: A Monte Carlo implementation of the nonlinear filtering problem to produce ensemble assimilations and forecasts. Mon. Weather Rev. 1999, 127:2741-2758. doi:10.1175/1520-0493(1999)127<2741:AMCIOT>2.0.CO;2

  14. Bendat J: Nonlinear System Analysis and Identification from Random Data. Wiley-Interscience, New York; 1990.

  15. Daum FE: Exact finite dimensional nonlinear filters. IEEE Trans. Autom. Control 1986, 31(7):616-622. doi:10.1109/TAC.1986.1104344

  16. Ha QP, Trinh H: State and input simultaneous estimation for a class of nonlinear systems. Automatica 2004, 40:1779-1785. doi:10.1016/j.automatica.2004.05.012

  17. Kailath T, Sayed A, Hassibi B: Linear Estimation. Prentice Hall, New York; 2000.

  18. Kerschen G, Worden K, Vakakis AF, Golinval J: Past, present and future of nonlinear system identification in structural dynamics. Mech. Syst. Signal Process. 2006, 20:505-592. doi:10.1016/j.ymssp.2005.04.008

  19. Ljung L: System Identification: Theory for the User. 2nd edition. Prentice Hall, Upper Saddle River; 1999.

  20. Norgaard M, Poulsen NK, Ravn O: New developments in state estimation for nonlinear systems. Automatica 2000, 36:1627-1638. doi:10.1016/S0005-1098(00)00089-3

  21. Wiener N: Nonlinear Problems in Random Theory. MIT Press, Boston; 1958.

  22. Dempster A, Laird N, Rubin D: Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. B 1977, 39:1-38.

  23. Tanaka H, Katayama T: A stochastic realization algorithm via block LQ decomposition in Hilbert space. Automatica 2006, 42:741-746. doi:10.1016/j.automatica.2005.12.025


Acknowledgements

The first author was supported by Tayyebah University. The second and third authors would like to thank King Fahd University for the excellent research facilities they provide.

Author information


Corresponding author

Correspondence to A Al-Mazrooei.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to each part of this paper. All authors read and approved the final version of the manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Al-Mazrooei, A., Al-Mutawa, J., El-Gebeily, M. et al. Filtering and identification of a state space model with linear and bilinear interactions between the states. Adv Differ Equ 2012, 176 (2012). https://doi.org/10.1186/1687-1847-2012-176


  • DOI: https://doi.org/10.1186/1687-1847-2012-176
