Open Access

Reconstructing the initial state for the nonlinear system and analyzing its convergence

Advances in Difference Equations 2014, 2014:82

https://doi.org/10.1186/1687-1847-2014-82

Received: 30 September 2013

Accepted: 26 February 2014

Published: 12 March 2014

Abstract

An algorithm for approximating the initial state of a nonlinear system is described, and its convergence is analyzed in detail. The forward and backward observers are applied alternately and repeatedly to solve the approximation problem, and their nudging terms can be shown to tend to zero. The convergence of the algorithm based on the observers obtained by semi-discretization and by full-discretization in space is then considered.

Keywords

forward and backward observers; convergence analysis; semi-discretization and full-discretization in space

1 Introduction

In science and engineering, for instance in oceanography, meteorology, and medical imaging, it is important to estimate the initial state of a linear distributed parameter system from observations over a given time interval; see [1]. In oceanography such a problem is called data assimilation; see for instance [2, 3]. The problem has been treated successfully for the quasi-geostrophic model in oceanography [4] and arises in medical imaging in impedance-acoustic tomography [5, 6]. More recently, the time reversal method has been applied in the context of infinite-dimensional systems to estimate the initial data; see [7, 8].

The standard nudging method for this approximation problem adds a relaxation term to the equations of the system to construct the forward observer; the backward observer is constructed similarly by adding a relaxation term of the opposite sign. In this paper, our algorithm is obtained by running the forward and backward observers alternately and repeatedly.

Firstly, the paper estimates the initial state in the inverse problem for the nonlinear distributed parameter system from its input and output functions measured over a finite time interval. The main idea is to use the same segment of data repeatedly, back and forth, by constructing two observers, called the forward and the backward observer, respectively. Both observers are obtained by adding to the state equations a relaxation term, which under certain conditions tends to 0, and they operate in forward and backward time, respectively.

Secondly, the paper analyzes the convergence of the iterative algorithm for the nonlinear system. The analysis rests entirely on the numerical approximations obtained by semi-discretization and then by full-discretization in space, and the algorithm remains based on the observer method.
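To make the back-and-forth idea concrete, the following toy sketch (not the authors' time-varying observers) runs forward and backward nudging sweeps on a two-dimensional rotation, observing only the first state component; the rotation speed, the gain K, the horizon tau, and the grid are arbitrary illustrative choices.

```python
import numpy as np

# Toy back-and-forth nudging on a 2-D rotation (a constant-coefficient
# stand-in for the forward/backward observers); only the first state
# component is observed.  All numerical values below are illustrative.
omega, K, tau, nt = 2.0, 4.0, 2.0, 4000
dt = tau / nt
A = np.array([[0.0, omega], [-omega, 0.0]])   # skew-adjoint generator
C = np.array([[1.0, 0.0]])                    # partial observation y = z[0]

z0 = np.array([1.0, -0.5])                    # initial state to reconstruct
z, ys = z0.copy(), []                         # record the output y(t)
for _ in range(nt):
    ys.append(C @ z)
    z = z + dt * (A @ z)
ys = np.array(ys)

Z = np.zeros(2)                               # arbitrary initial guess Z0
for _ in range(20):                           # repeated estimation cycles
    # forward observer: Z' = A Z + K C^T (y - C Z), nudged toward the data
    for i in range(nt):
        Z = Z + dt * (A @ Z + K * C.T @ (ys[i] - C @ Z))
    # backward observer in reversed time: Z' = -A Z + K C^T (y - C Z)
    for i in reversed(range(nt)):
        Z = Z + dt * (-A @ Z + K * C.T @ (ys[i] - C @ Z))

print(np.linalg.norm(Z - z0))                 # small reconstruction error
```

Each full cycle contracts the estimation error, so the initial state is recovered up to the time-discretization error of the sweeps.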

Let X and Y be Hilbert spaces, called the state space and the output space, respectively. Let $A : \mathcal{D}(A)\subset X \to X$ be the generator of a strongly continuous group $\mathbb{T}$ of isometries on X. Let $C \in \mathcal{L}(X,Y)$ be the observation operator and let $B \in \mathcal{L}(X)$ be a bounded operator. These operators describe the time-reversible nonlinear system
$$\dot z(t) = (At + B)z(t), \qquad z(0) = z_0,$$
(1.1)
$$y(t) = Cz(t),$$
(1.2)

where z and y are called the state and output function, respectively. Such systems are often used as models of vibrating systems, electromagnetic phenomena or in quantum mechanics.

Our aim is first to reconstruct the initial data $z_0$ of the system from the output function y given on the time interval $[0,\tau]$.

The paper is organized as follows. Preliminary material is introduced in Section 2. In Section 3 the initial state is estimated by a single iteration step and its convergence is described briefly. The corresponding conclusions for n iterations are established in Section 4. The convergence accuracy of the iterative method for the nonlinear system is analyzed in detail in Section 5. Numerical results are shown in Section 6.

2 Description

Definition 2.1 System (1.1)-(1.2) is said to be exactly observable in some time τ if there exists $k_\tau > 0$ such that
$$\int_0^\tau \|y(t)\|_Y^2\,dt \ge k_\tau^2\|z_0\|_X^2, \qquad \forall z_0 \in \mathcal{D}(A).$$
(2.1)

If system (1.1)-(1.2) is exactly observable in some time τ, then it is exactly observable in any time. Inequality (2.1) is called the observation or observability inequality (see [9]); it guarantees that the initial state $z_0$ is uniquely determined by the observed quantity $y(t)$ on $[0,\tau]$. To treat the infinite-dimensional problem in this paper, we assume that the system is well posed, i.e. exactly observable.
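In finite dimensions the observability inequality (2.1) can be checked through the observability Gramian; the following sketch does so for a hypothetical rotating system observed through its first component (all numerical values are illustrative, not from the paper).

```python
import numpy as np

# Observability Gramian for the toy system z' = A z, y = C z with a
# 2-D rotation and observation of the first component only.  Inequality
# (2.1) holds iff the Gramian G = int_0^tau Phi(t)^T C^T C Phi(t) dt is
# positive definite; k_tau^2 is then its smallest eigenvalue.
omega = 2.0
tau, nt = np.pi, 20000            # tau = one full period 2*pi/omega
A = np.array([[0.0, omega], [-omega, 0.0]])
C = np.array([[1.0, 0.0]])
dt = tau / nt

G, Phi = np.zeros((2, 2)), np.eye(2)
for _ in range(nt):
    G += dt * Phi.T @ C.T @ C @ Phi   # accumulate the Gramian integral
    Phi = Phi + dt * (A @ Phi)        # Euler propagation of the flow
k_tau_sq = np.linalg.eigvalsh(G)[0]
print(k_tau_sq)                       # ~ tau/2 > 0: (A, C) is observable
```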

Definition 2.2 Suppose there exist an operator $A_k : \mathcal{D}(A_k)\subset X \to X$ that generates an exponentially stable semigroup $\mathbb{T}^k$ on X and an operator $H \in \mathcal{L}(Y, X_{-1}^k)$, where $X_{-1}^k$ denotes the analog of the space $X_{-1}$ constructed from $A_k$, such that
$$A = A_k - HC.$$
(2.2)

Then the pair $(A, C)$ is said to be (forward) estimatable (see [1]).

Definition 2.3 Suppose there exist an operator $A_{bk} : \mathcal{D}(A_{bk})\subset X \to X$ such that $-A_{bk}$ generates an exponentially stable semigroup $\mathbb{S}^k$ on X and an operator $H_b \in \mathcal{L}(Y, X_{-1,b}^k)$, where $X_{-1,b}^k$ denotes the analog of the space $X_{-1}^k$, such that
$$A = A_{bk} + H_bC.$$
(2.3)

Then the pair $(A, C)$ is said to be backward estimatable (see [1]).

Proposition 2.4 Assume that A is a skew-adjoint operator and $\mathbb{T}$ is the unitary group generated by A. Then the following assertions are equivalent:

(i) $(A, C)$ is exactly observable.

(ii) $(A, C)$ is forward estimatable.

(iii) $(A, C)$ is backward estimatable.

Proof The equivalence is contained in Proposition 3.7 of [1]. □

3 Properties of one step iteration

Assume $(A, C)$ is estimatable. We can then construct a forward observer: its state Z satisfies the differential equation
$$\begin{cases}\dot Z(t) = (At + B)Z(t) - Ht\,\bigl(y(t) - CZ(t)\bigr),\\ Z(0) = Z_0,\end{cases}$$
(3.1)

where $Z_0 \in X$ is an arbitrary initial guess of $z_0$; the reconstruction will be shown below to be independent of this guess.

Define the estimation error by $e(t) = Z(t) - z(t)$; then
$$\dot e(t) = (A_kt + B)e(t) = \bigl((A + HC)t + B\bigr)e(t).$$

Thus $e(t) = e^{\frac12 A_kt^2}e^{Bt}e(0) = \mathbb{T}^k_{t^2/2}e^{Bt}e(0)$, where $\mathbb{T}^k_{t^2/2} = e^{\frac12 A_kt^2}$ denotes the semigroup generated by $A_k$ at time $\frac{t^2}{2}$.

Applying the variation of constants formula to (3.1), we obtain
$$Z(t) = -\int_0^t e^{\frac12 A_k(t^2-s^2)}e^{B(t-s)}Hs\,y(s)\,ds + e^{\frac12 A_kt^2}e^{Bt}Z_0 = -e^{(HC+A)\frac{t^2}{2}}\int_0^t e^{-(HC+A)\frac{s^2}{2}}e^{B(t-s)}Hs\,y(s)\,ds + e^{(HC+A)\frac{t^2}{2}}e^{Bt}Z_0.$$
(3.2)
Now suppose $(A, C)$ is backward estimatable. We can also construct a backward observer: its state $\tilde Z$ satisfies the differential equation
$$\begin{cases}\dot{\tilde Z}(t) = (At + B)\tilde Z(t) + H_bt\,\bigl(y(t) - C\tilde Z(t)\bigr),\\ \tilde Z(\tau) = Z(\tau).\end{cases}$$
(3.3)
Define the estimation error by $e_b(t) = \tilde Z(t) - z(t)$; then
$$\dot e_b(t) = (A_{bk}t + B)e_b(t) = \bigl((A - H_bC)t + B\bigr)e_b(t).$$

Thus $e_b(t) = e^{\frac12 A_{bk}t^2}e^{Bt}e^{-\frac12 A_{bk}\tau^2}e^{-B\tau}e_b(\tau) = \mathbb{S}^k_{(\tau^2-t^2)/2}e^{B(t-\tau)}e_b(\tau)$, where $\mathbb{S}^k_{(\tau^2-t^2)/2} = e^{-\frac12 A_{bk}(\tau^2-t^2)}$ denotes the semigroup generated by $-A_{bk}$ at time $\frac{\tau^2-t^2}{2}$. Since $e_b(\tau) = e(\tau)$, we have $e_b(t) = \mathbb{S}^k_{(\tau^2-t^2)/2}\mathbb{T}^k_{\tau^2/2}e^{Bt}e(0)$.

Similarly, we obtain the solution of (3.3):
$$\tilde Z(t) = \int_\tau^t e^{\frac12 A_{bk}(t^2-s^2)}e^{B(t-s)}H_bs\,y(s)\,ds + e^{-\frac12 A_{bk}(\tau^2-t^2)}e^{B(t-\tau)}\tilde Z(\tau) = -e^{\frac12 A_{bk}t^2}\int_0^{\tau-t}e^{-\frac12 A_{bk}(\tau-s)^2}e^{B(t-\tau+s)}H_b(\tau-s)\,y(\tau-s)\,ds + e^{\frac12 A_{bk}(t^2-\tau^2)}e^{B(t-\tau)}Z(\tau).$$
(3.4)
Proposition 3.1 Let Z and $\tilde Z$ be given by (3.1) and (3.3), set $K = -H$ and $K' = -H_b$, and assume that K, K′, C are symmetric positive definite matrices. Then for any $t \in [0,\tau]$, if K and K′ are large enough, we have
$$\lim_{K\to+\infty}Z(t) = C^{-1}y(t), \qquad \lim_{K'\to+\infty}\tilde Z(t) = C^{-1}y(t),$$

where $K, K' \to +\infty$ means that every eigenvalue of the matrices tends to infinity.

Proof Since K, K′, C are symmetric positive definite matrices, when K and K′ are large enough, $KC - A$ and $K'C + A$ are definite. Integrating (3.2) by parts, we obtain
$$Z(t) = e^{-(KC-A)\frac{t^2}{2}}\int_0^t e^{(KC-A)\frac{s^2}{2}}e^{B(t-s)}Ks\,y(s)\,ds + e^{-(KC-A)\frac{t^2}{2}}e^{Bt}Z_0 = K(KC-A)^{-1}\Bigl[y(t) - e^{-(KC-A)\frac{t^2}{2}}e^{Bt}y(0) - \int_0^t e^{-(KC-A)\frac{t^2-s^2}{2}}\,d\bigl(e^{B(t-s)}y(s)\bigr)\Bigr] + e^{-(KC-A)\frac{t^2}{2}}e^{Bt}Z_0.$$
Thus
$$\lim_{K\to+\infty}Z(t) = \lim_{K\to+\infty}K(KC-A)^{-1}y(t) = C^{-1}y(t).$$
Similarly, we can also prove
$$\lim_{K'\to+\infty}\tilde Z(t) = \lim_{K'\to+\infty}K'(K'C+A)^{-1}y(t) = C^{-1}y(t).$$

 □

It can be seen that, in the limit of large gains, $Z(t)$ and $\tilde Z(t)$ are totally independent of the initial guess $Z_0$ of the observer.

Theorem 3.2 Assume $(A, C)$ is backward estimatable. Then
$$\tilde Z(0) - z_0 = \mathbb{S}^k_{\tau^2/2}\mathbb{T}^k_{\tau^2/2}(Z_0 - z_0).$$
(3.5)
If we set $L_t = \mathbb{S}^k_{t^2/2}\mathbb{T}^k_{t^2/2}$ and $Z_0 = 0$, we have $\eta = \|\mathbb{S}^k_{\tau^2/2}\mathbb{T}^k_{\tau^2/2}\| = \|L_\tau\| < 1$ and
$$z_0 = \sum_{n=0}^{\infty}L_\tau^n\tilde Z(0).$$
(3.6)
Proof From $e_b(t) = \mathbb{S}^k_{(\tau^2-t^2)/2}\mathbb{T}^k_{\tau^2/2}e^{Bt}e(0)$, we have
$$\tilde Z(0) - z_0 = \mathbb{S}^k_{\tau^2/2}\mathbb{T}^k_{\tau^2/2}(Z_0 - z_0).$$
If $Z_0 = 0$, then
$$\tilde Z(0) = \bigl(I - \mathbb{S}^k_{\tau^2/2}\mathbb{T}^k_{\tau^2/2}\bigr)z_0.$$
By Proposition 3.7 of [1], we have $\eta < 1$, so
$$z_0 = (I - L_\tau)^{-1}\tilde Z(0).$$
Expanding the inverse in a Neumann series, we obtain
$$z_0 = \sum_{n=0}^{\infty}L_\tau^n\tilde Z(0),$$

where $L_\tau^n$ denotes the n-th power of $L_\tau$. □

The process that computes $Z(\tau)$ by the forward observer (3.1) and then $\tilde Z(0)$ by the backward observer (3.3) constitutes one iteration step. For better accuracy, this step should be repeated, which leads to the multiple iterations considered next.
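In finite dimensions, the reconstruction formula (3.6) is simply the Neumann series for $(I - L_\tau)^{-1}$ applied to the output of one forward-backward sweep; the following sketch uses a hypothetical contraction L with norm η = 0.5 (both L and the dimension are arbitrary illustrative choices).

```python
import numpy as np

# Finite-dimensional sketch of (3.5)-(3.6): with Z0 = 0, one sweep
# returns Ztilde(0) = (I - L) z0, and z0 is recovered by the Neumann
# series of (I - L)^{-1}.  L is a hypothetical contraction, eta = 0.5.
rng = np.random.default_rng(0)
L = rng.standard_normal((4, 4))
L *= 0.5 / np.linalg.norm(L, 2)      # scale so ||L|| = eta = 0.5 < 1
z0 = rng.standard_normal(4)

Ztilde0 = (np.eye(4) - L) @ z0       # output of one forward/backward sweep
approx, term = np.zeros(4), Ztilde0.copy()
for _ in range(60):                  # z0 = sum_n L^n Ztilde(0)
    approx += term
    term = L @ term

print(np.linalg.norm(approx - z0))   # ~ eta^60: essentially exact
```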

4 Properties of multiple iterations

Consider the iterative algorithm over repeated estimation cycles. For $n \ge 0$, suppose $H = H_b$ and define $Z^{(n)}(t)$ and $\tilde Z^{(n)}(t)$ as the solutions of the following systems, respectively:
$$\begin{cases}\dot Z^{(n)}(t) = (At+B)Z^{(n)}(t) - Ht\,\bigl(y(t) - CZ^{(n)}(t)\bigr),\\ Z^{(n)}(0) = \tilde Z^{(n-1)}(0), \qquad \tilde Z^{(-1)}(0) = Z_0,\end{cases}$$
(4.1)
$$\begin{cases}\dot{\tilde Z}^{(n)}(t) = (At+B)\tilde Z^{(n)}(t) + Ht\,\bigl(y(t) - C\tilde Z^{(n)}(t)\bigr),\\ \tilde Z^{(n)}(\tau) = Z^{(n)}(\tau),\end{cases}$$
(4.2)

where $Z_0 \in X$ is an arbitrary initial guess of $z_0$ (the limit obtained below is independent of this guess) and $\tilde Z^{(n)}(0)$ denotes the value of $\tilde Z(0)$ at the n-th iteration.

Setting $K = K' = -H = -H_b$, from (3.2) and (3.4) it is easy to obtain
$$Z^{(n)}(t) = e^{-(KC-A)\frac{t^2}{2}}\int_0^t e^{(KC-A)\frac{s^2}{2}}e^{B(t-s)}Ks\,y(s)\,ds + e^{-(KC-A)\frac{t^2}{2}}e^{Bt}Z^{(n)}(0),$$
(4.3)
and
$$\tilde Z^{(n)}(0) = \int_0^\tau e^{-(KC+A)\frac{(\tau-s)^2}{2}}e^{B(s-\tau)}K(\tau-s)\,y(\tau-s)\,ds + e^{-(KC+A)\frac{\tau^2}{2}}e^{-B\tau}Z^{(n)}(\tau).$$
Using the iteration rules $Z^{(n)}(0) = \tilde Z^{(n-1)}(0)$ and $\tilde Z^{(n)}(\tau) = Z^{(n)}(\tau)$, we get
$$Z^{(n)}(0) = \bigl(1 - e^{-2KC\frac{\tau^2}{2}}\bigr)^{-1}\bigl(1 - e^{-2nKC\frac{\tau^2}{2}}\bigr)\Bigl[\int_0^\tau e^{(KC-A)\frac{s^2}{2}}e^{-2KC\frac{\tau^2}{2}}e^{-Bs}Ks\,y(s)\,ds + \int_0^\tau e^{-(KC+A)\frac{(\tau-s)^2}{2}}e^{B(s-\tau)}K(\tau-s)\,y(\tau-s)\,ds\Bigr] + e^{-2nKC\frac{\tau^2}{2}}Z_0.$$
By (4.2) and the above equation, letting $n \to +\infty$, we have
$$\lim_{n\to+\infty}Z^{(n)}(0) = Z(0) = \bigl(1 - e^{-2KC\frac{\tau^2}{2}}\bigr)^{-1}\Bigl[\int_0^\tau e^{(KC-A)\frac{s^2}{2}}e^{-2KC\frac{\tau^2}{2}}e^{-Bs}Ks\,y(s)\,ds + \int_0^\tau e^{-(KC+A)\frac{(\tau-s)^2}{2}}e^{B(s-\tau)}K(\tau-s)\,y(\tau-s)\,ds\Bigr],$$
and, for $t \in [0,\tau]$,
$$\lim_{n\to+\infty}Z^{(n)}(t) = Z(t) = e^{-(KC-A)\frac{t^2}{2}}\int_0^t e^{(KC-A)\frac{s^2}{2}}e^{B(t-s)}Ks\,y(s)\,ds + e^{-(KC-A)\frac{t^2}{2}}e^{Bt}Z(0).$$
According to Proposition 3.1, we know that
$$\lim_{K\to+\infty}Z(t) = C^{-1}y(t), \qquad t\in[0,\tau].$$
Similarly, for $t \in [0,\tau]$, we can get
$$\lim_{n\to+\infty}\tilde Z^{(n)}(t) = \tilde Z(t)$$
and
$$\lim_{K\to+\infty}\tilde Z(t) = C^{-1}y(t).$$

It can be seen that the limits $Z(t)$ and $\tilde Z(t)$ of $Z^{(n)}(t)$ and $\tilde Z^{(n)}(t)$ are totally independent of the initial guess $Z_0$.

Theorem 4.1 Assume $(A, C)$ is backward estimatable. Then
$$\tilde Z^{(n)}(0) - z_0 = \bigl(\mathbb{S}^k_{\tau^2/2}\mathbb{T}^k_{\tau^2/2}\bigr)^{n+1}(Z_0 - z_0),$$
(4.4)
and for $n \ge 0$ we have
$$\|\tilde Z^{(n)}(0) - z_0\| \le \eta^{n+1}\|Z_0 - z_0\|.$$
(4.5)
Proof From Theorem 3.2 and $Z^{(n)}(0) = \tilde Z^{(n-1)}(0)$, we know that
$$\tilde Z^{(n)}(0) - z_0 = \mathbb{S}^k_{\tau^2/2}\mathbb{T}^k_{\tau^2/2}\bigl(Z^{(n)}(0) - z_0\bigr) = \mathbb{S}^k_{\tau^2/2}\mathbb{T}^k_{\tau^2/2}\bigl(\tilde Z^{(n-1)}(0) - z_0\bigr) = \bigl(\mathbb{S}^k_{\tau^2/2}\mathbb{T}^k_{\tau^2/2}\bigr)^2\bigl(Z^{(n-1)}(0) - z_0\bigr) = \cdots = \bigl(\mathbb{S}^k_{\tau^2/2}\mathbb{T}^k_{\tau^2/2}\bigr)^{n+1}(Z_0 - z_0).$$

Since $\eta = \|\mathbb{S}^k_{\tau^2/2}\mathbb{T}^k_{\tau^2/2}\| < 1$, the conclusion follows. □

Theorem 4.2 Assume $(A, C)$ is backward estimatable, and set $Z_0 = 0$. Then
$$z_0 = \sum_{i=0}^{\infty}L_\tau^{i(n+1)}\tilde Z^{(n)}(0).$$
(4.6)

Proof The proof is similar to that of Theorem 3.2. □

The iterative algorithm for the nonlinear system has thus been proved convergent provided the feedback gain K is large enough, and the limits $Z(t)$ and $\tilde Z(t)$ of the forward and backward observers are completely determined by the output function $y(t)$ of the system. The initial state can therefore be approximated by the algorithm; the accuracy of the approximation, however, remains to be analyzed.
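The geometric decay asserted by Theorem 4.1 is easy to check numerically when the cycle map is modeled by a hypothetical matrix L with spectral norm η < 1 (all values below are illustrative, not from the paper).

```python
import numpy as np

# Numerical check of the contraction estimate (4.5): each cycle maps
# the error e to L e, so ||e_n|| <= eta^(n+1) ||Z0 - z0||.  L is a
# hypothetical cycle map with spectral norm eta = 0.7.
rng = np.random.default_rng(1)
L = rng.standard_normal((5, 5))
eta = 0.7
L *= eta / np.linalg.norm(L, 2)      # enforce ||L|| = eta < 1
z0 = rng.standard_normal(5)
Z0 = np.zeros(5)

err = Z0 - z0
for n in range(10):
    err = L @ err                    # one forward/backward cycle
    assert np.linalg.norm(err) <= eta ** (n + 1) * np.linalg.norm(Z0 - z0) + 1e-12

print(np.linalg.norm(err))           # below eta^10 * ||Z0 - z0||
```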

5 Numerical convergence

In this section, the convergence accuracy of the observer-based algorithm is analyzed for the semi-discretization and the full-discretization in space.

Let $A = iA_0$ be a skew-adjoint operator, i.e. $A^* = -A$; then $A_0 : \mathcal{D}(A_0)\subset X \to X$ is self-adjoint, i.e. $A_0^* = A_0$. When A is skew-adjoint we often choose $H = H_b = -C^*$, so that $A_k = iA_0 - C^*C$ and $A_{bk} = iA_0 + C^*C$.

The system (1.1)-(1.2) can then be rewritten as
$$\dot z(t) = (iA_0t + B)z(t), \qquad z(0) = z_0,$$
(5.1)
$$y(t) = Cz(t).$$
(5.2)

Throughout the section, let $z_0 \in \mathcal{D}(A_0^2)$ and $z(t) \in \mathcal{D}(A_0^2) \subset \mathcal{D}(A_0)$.

For simplicity, let $Z_0 = 0$. The forward and backward observers (3.1) and (3.3) can then be expressed, respectively, as
$$\begin{cases}\dot Z(t) = (iA_0t + B)Z(t) + C^*t\,\bigl(y(t) - CZ(t)\bigr),\\ Z(0) = 0,\end{cases}$$
(5.3)
$$\begin{cases}\dot{\tilde Z}(t) = (iA_0t + B)\tilde Z(t) - C^*t\,\bigl(y(t) - C\tilde Z(t)\bigr),\\ \tilde Z(\tau) = Z(\tau).\end{cases}$$
(5.4)
According to Theorem 3.2, the initial state can be expressed as
$$z_0 = \sum_{n=0}^{\infty}L_\tau^n\tilde Z(0).$$
(5.5)
The systems (5.3)-(5.4) can easily be rewritten in the general form
$$\begin{cases}\dot u(t) = iA_0t\,u(t) \mp C^*Ct\,u(t) \pm Bu(t) + F(t) + D\tau u(t),\\ u(0) = u_0,\end{cases}$$
(5.6)

where for the forward observer (5.3) (upper signs) we set $u(t) = Z(t)$, $u_0 = 0$, $F(t) = C^*t\,y(t) = C^*Ct\,z(t)$, and $D = 0$, and for the time-reversed backward observer (5.4) (lower signs) we set $u(t) = \tilde Z(\tau - t)$, $u_0 = \tilde Z(\tau) = Z(\tau)$, $F(t) = C^*(\tau-t)\,y(\tau-t) = C^*C(\tau-t)\,z(\tau-t)$, and $D = -(iA_0 + C^*C) = -A_{bk}$.

Define the subspace $\mathcal{D}(A_0^{1/2})$ of X with the norm $\|\varphi\|_{1/2} = \|A_0^{1/2}\varphi\|$ ($\varphi \in \mathcal{D}(A_0^{1/2})$). From the relations between the domains we obtain the embeddings, with the corresponding norms,
$$\mathcal{D}(A_0^2)\ (\|\cdot\|_2)\ \hookrightarrow\ \mathcal{D}(A_0)\ (\|\cdot\|_1)\ \hookrightarrow\ \mathcal{D}(A_0^{1/2})\ (\|\cdot\|_{1/2})\ \hookrightarrow\ X\ (\|\cdot\|\text{ or }\|\cdot\|_0).$$
According to these embedding properties, there exist $M_1, M_2, M_3 > 0$ such that, for α in the corresponding domain,
$$\|\alpha\| \le M_1\|\alpha\|_{1/2} \le M_2\|\alpha\|_1 \le M_3\|\alpha\|_2.$$

Before proving the convergence results, we first establish a preparatory lemma that simplifies the subsequent proofs.

Lemma 5.1 For the initial value problem (5.6) there exists $M > 0$ such that
$$\|u(t)\|_\alpha \le M\bigl(\|u_0\|_\alpha + t\|F\|_{\alpha,\infty}\bigr), \qquad \alpha = 0, 1, 2,$$
$$\|\dot u(t)\|_\alpha \le M\bigl[(t+\tau+1)\|u_0\|_{\alpha+1} + t(t+\tau+1)\|F\|_{\alpha+1,\infty}\bigr] + \|F\|_{\alpha,\infty}, \qquad \alpha = 0, 1,$$

where $\|F\|_{\alpha,\infty} = \sup_{t\in[0,\tau]}\|F(t)\|_\alpha$.

Proof By (5.6), we can obtain
$$u(t) = \begin{cases}\displaystyle\int_0^t \mathbb{T}^k_{(t^2-s^2)/2}e^{B(t-s)}F(s)\,ds + \mathbb{T}^k_{t^2/2}e^{Bt}u_0,\\[2mm] \displaystyle\int_0^t \mathbb{S}^k_{(s^2-t^2)/2}\mathbb{S}^k_{\tau(t-s)}e^{B(s-t)}F(s)\,ds + \mathbb{S}^k_{t(2\tau-t)/2}e^{-Bt}u_0,\end{cases}$$

for the forward and the backward case, respectively. By the triangle inequality and the uniform boundedness of $\mathbb{T}^k$, $\mathbb{S}^k$, and $e^{Bt}$, the first conclusion follows.

By (5.6), we can obtain
$$\dot u(t) = \begin{cases}iA_0t\,u(t) - C^*Ct\,u(t) + Bu(t) + F(t),\\ iA_0t\,u(t) + C^*Ct\,u(t) - Bu(t) + F(t) - (iA_0 + C^*C)\tau u(t).\end{cases}$$

Similarly, by the triangle inequality, the boundedness of B and C, the embedding properties, and the first inequality, the second conclusion also follows. □

5.1 Semi-discretization

In this section, let h be the mesh size and $N_h$ the truncation parameter. We construct a finite-dimensional subspace $X_h$ of $\mathcal{D}(A_0^{1/2})$, where $\mathcal{D}(A_0^{1/2})$ denotes the domain of the operator $A_0^{1/2}$.

Define the orthogonal projection operator $P : \mathcal{D}(A_0^{1/2}) \to X_h$. Throughout, M denotes a generic constant independent of τ, and we suppose that there exist $M > 0$, $\theta > 1$ and $\hat h > 0$ such that, for $h \in (0, \hat h)$,
$$\|P\varphi - \varphi\| \le Mh^\theta\|\varphi\|_{1/2}, \qquad \varphi \in \mathcal{D}(A_0^{1/2}).$$
(5.7)
In the Galerkin sense, the generalized solution of system (5.6) is a function $u(t) \in \mathcal{D}(A_0^{1/2})$ satisfying
$$\begin{cases}\langle\dot u(t),\varphi\rangle = \langle iA_0t\,u(t),\varphi\rangle \mp \langle C^*Ct\,u(t),\varphi\rangle \pm \langle Bu(t),\varphi\rangle + \langle F(t),\varphi\rangle + \langle D\tau u(t),\varphi\rangle,\\ u(0) = u_0,\end{cases}$$
(5.8)

for all $\varphi \in \mathcal{D}(A_0^{1/2})$ and $t \in [0,\tau]$, where $u_0 \in \mathcal{D}(A_0^2)$.

Starting from the Galerkin approximation of the variational formulation (5.8), the semi-discretization method is to find the unique solution $u_h(t) \in X_h$ satisfying
$$\begin{cases}\langle\dot u_h(t),\varphi_h\rangle = it\langle u_h(t),\varphi_h\rangle_{1/2} \mp \langle C^*Ct\,u_h(t),\varphi_h\rangle \pm \langle Bu_h(t),\varphi_h\rangle + \langle F_h(t),\varphi_h\rangle + \langle D\tau u_h(t),\varphi_h\rangle,\\ u_h(0) = u_{0,h},\end{cases}$$
(5.9)

for all $\varphi_h \in X_h$ and $t \in [0,\tau]$, where $u_{0,h} \in X_h$ is the given approximation of $u_0$ in X, and $F_h$ is the corresponding approximation of F in $L^1([0,\tau], X)$.

Assume that $y_h$ is the corresponding approximation of y in $L^1([0,\tau], Y)$, that $Z_h$ and $\tilde Z_h$ are the Galerkin approximations of Z and $\tilde Z$, respectively, and that $L_{h,t} = \mathbb{S}^k_{h,t^2/2}\mathbb{T}^k_{h,t^2/2}$ is the approximation of $L_t = \mathbb{S}^k_{t^2/2}\mathbb{T}^k_{t^2/2}$.

Proposition 5.1 There exist $M > 0$, $\theta > 1$ and $\hat h > 0$ such that for $h \in (0,\hat h)$ and $t \in [0,\tau]$, we have
$$\|Pu(t) - u_h(t)\| \le \|Pu_0 - u_{0,h}\| + Mh^\theta\bigl[(t^3 + t^2\tau + t^2)\|F\|_{2,\infty} + t\|F\|_{1,\infty} + (t^2 + t\tau + t)\|u_0\|_2\bigr] + \int_0^t\|F - F_h\|\,ds.$$
Proof For all $\varphi_h \in X_h$, subtracting (5.9) from (5.8) gives
$$\langle\dot u - \dot u_h,\varphi_h\rangle = it\langle u - u_h,\varphi_h\rangle_{1/2} \mp \langle C^*Ct(u-u_h),\varphi_h\rangle \pm \langle B(u-u_h),\varphi_h\rangle + \langle F - F_h,\varphi_h\rangle + \langle D\tau(u-u_h),\varphi_h\rangle.$$
(5.10)
Noting that $\langle Pu - u,\varphi_h\rangle_{1/2} = 0$ for every $\varphi_h \in X_h$, we have
$$\langle u - u_h,\varphi_h\rangle_{1/2} = \langle Pu - u_h,\varphi_h\rangle_{1/2} - \langle Pu - u,\varphi_h\rangle_{1/2} = \langle Pu - u_h,\varphi_h\rangle_{1/2}.$$
Let $\vartheta_h = \frac12\|Pu - u_h\|^2$; thus
$$\|Pu - u_h\| = \sqrt{2\vartheta_h} \quad\text{and}\quad \dot\vartheta_h = \operatorname{Re}\langle P\dot u - \dot u_h, Pu - u_h\rangle.$$
Since $\langle P\dot u - \dot u_h,\varphi_h\rangle = \langle P\dot u - \dot u,\varphi_h\rangle + \langle\dot u - \dot u_h,\varphi_h\rangle$, taking $\varphi_h = Pu - u_h$ in (5.10) and using $\operatorname{Re}\bigl(it\langle Pu - u_h,\varphi_h\rangle_{1/2}\bigr) = 0$, we can rewrite $\dot\vartheta_h$ as
$$\dot\vartheta_h = \operatorname{Re}\Bigl[\langle P\dot u - \dot u,\varphi_h\rangle \mp \langle C^*Ct(u-u_h),\varphi_h\rangle \pm \langle B(u-u_h),\varphi_h\rangle + \langle F - F_h,\varphi_h\rangle + \begin{cases}0,\\ -\langle C^*C\tau(u-u_h),\varphi_h\rangle\end{cases}\Bigr].$$
By the boundedness of B and C, we have
$$\dot\vartheta_h \le \bigl[\|P\dot u - \dot u\| + M(t+\tau+1)\|Pu - u\| + \|F - F_h\|\bigr]\|Pu - u_h\|.$$
(5.11)
Since $\frac{d}{dt}\sqrt{2\vartheta_h} = \dot\vartheta_h/\sqrt{2\vartheta_h}$, integration gives
$$\int_0^t\frac{\dot\vartheta_h}{\|Pu - u_h\|}\,ds = \int_0^t\frac{d}{ds}\sqrt{2\vartheta_h}\,ds = \sqrt{2\vartheta_h(t)} - \sqrt{2\vartheta_h(0)} = \|Pu - u_h\| - \|Pu_0 - u_{0,h}\|.$$
By (5.7), Lemma 5.1, and the embedding property, there exist $M > 0$, $\theta > 1$, and $\hat h > 0$ such that for $t \in [0,\tau]$ and $h \in (0,\hat h)$,
$$\|P\dot u - \dot u\| + M(t+\tau+1)\|Pu - u\| \le Mh^\theta\bigl[(t+\tau+1)\|u_0\|_2 + t(t+\tau+1)\|F\|_{2,\infty} + \|F\|_{1,\infty}\bigr].$$
Integrating (5.11) therefore yields
$$\|Pu(t) - u_h(t)\| \le \|Pu_0 - u_{0,h}\| + \int_0^t\bigl[\|P\dot u - \dot u\| + M(s+\tau+1)\|Pu - u\|\bigr]ds + \int_0^t\|F - F_h\|\,ds \le \|Pu_0 - u_{0,h}\| + Mh^\theta\int_0^t(s+\tau+1)\,ds\,\|u_0\|_2 + Mh^\theta\int_0^t\bigl[s(s+\tau+1)\|F\|_{2,\infty} + \|F\|_{1,\infty}\bigr]ds + \int_0^t\|F - F_h\|\,ds.$$

Computing the integrals gives the stated result. □

From this result, error estimates for the approximations of the semigroups $\mathbb{T}^k$, $\mathbb{S}^k$ and of the operator $L_t$ can be derived.

Proposition 5.2 There exist $M > 0$, $\theta > 1$ and $\hat h > 0$ such that for $h \in (0,\hat h)$, $n \in \mathbb{N}$ and $t \in [0,\tau]$, we have
$$\|L_t^n u_0 - L_{h,t}^n u_0\| \le Mh^\theta\bigl\{1 + n\bigl[(\tau - t)^2 + t^2 + \tau^2 + \tau + 1\bigr]\bigr\}\|u_0\|_2.$$
Proof By the triangle inequality, we have
$$\|L_t^n u_0 - L_{h,t}^n u_0\| \le \|L_t^n u_0 - PL_t^n u_0\| + \|PL_t^n u_0 - L_{h,t}^n u_0\|.$$
For the first term, by (5.7), the embedding property and $\eta = \|L_t\| < 1$, we have
$$\|L_t^n u_0 - PL_t^n u_0\| \le Mh^\theta\|u_0\|_2.$$
(5.12)
For the second term, using mathematical induction, we can prove that
$$\|PL_t^n u_0 - L_{h,t}^n u_0\| \le Mnh^\theta\bigl[(\tau-t)^2 + t^2 + \tau^2 + \tau + 1\bigr]\|u_0\|_2.$$
(5.13)
When $n = 1$, by the definitions of $L_t$ and $L_{h,t}$, we have
$$\|PL_t u_0 - L_{h,t}u_0\| = \|P\mathbb{T}^k_{t^2/2}\mathbb{S}^k_{t^2/2}u_0 - \mathbb{T}^k_{h,t^2/2}\mathbb{S}^k_{h,t^2/2}u_0\| \le \|P\mathbb{T}^k_{t^2/2}\mathbb{S}^k_{t^2/2}u_0 - \mathbb{T}^k_{h,t^2/2}\mathbb{S}^k_{t^2/2}u_0\| + \|\mathbb{T}^k_{h,t^2/2}(\mathbb{S}^k_{t^2/2}u_0 - \mathbb{S}^k_{h,t^2/2}u_0)\|.$$
(5.14)
When $B = F = F_h = 0$ and $Pu_0 = u_{0,h}$, setting $u_h(t) = \mathbb{T}^k_{h,t^2/2}u_0$ and $u_h(\tau - t) = \mathbb{S}^k_{h,t^2/2}u_0$, respectively, we have
$$\dot u_h(t) = \begin{cases}(iA_0 - C^*C)t\,u_h(t),\\ (iA_0 + C^*C)t\,u_h(t) - (iA_0 + C^*C)\tau u_h(t),\end{cases}$$

which is exactly of the form (5.6).

Thus, using Proposition 5.1, we can derive the existence of $M > 0$, $\theta > 1$, and $\hat h > 0$ such that for $h \in (0,\hat h)$ and $t \in [0,\tau]$,
$$\|P\mathbb{T}^k_{t^2/2}u_0 - \mathbb{T}^k_{h,t^2/2}u_0\| \le Mh^\theta(t^2 + t\tau + t)\|u_0\|_2, \qquad \|P\mathbb{S}^k_{t^2/2}u_0 - \mathbb{S}^k_{h,t^2/2}u_0\| \le Mh^\theta\bigl[(\tau-t)^2 + \tau(\tau-t) + (\tau-t)\bigr]\|u_0\|_2.$$
For the first term of (5.14), using the above conclusion and the uniform boundedness of $\mathbb{S}^k_{t^2/2}$, we get
$$\|P\mathbb{T}^k_{t^2/2}\mathbb{S}^k_{t^2/2}u_0 - \mathbb{T}^k_{h,t^2/2}\mathbb{S}^k_{t^2/2}u_0\| \le Mh^\theta(t^2 + t\tau + t)\|u_0\|_2.$$
Similarly, for the second term of (5.14), using the above conclusion, (5.7), and the uniform boundedness of $\mathbb{T}^k_{t^2/2}$ and $\mathbb{S}^k_{t^2/2}$, we get
$$\|\mathbb{T}^k_{h,t^2/2}(\mathbb{S}^k_{t^2/2}u_0 - \mathbb{S}^k_{h,t^2/2}u_0)\| \le \|\mathbb{S}^k_{t^2/2}u_0 - \mathbb{S}^k_{h,t^2/2}u_0\| \le \|\mathbb{S}^k_{t^2/2}u_0 - P\mathbb{S}^k_{t^2/2}u_0\| + \|P\mathbb{S}^k_{t^2/2}u_0 - \mathbb{S}^k_{h,t^2/2}u_0\| \le Mh^\theta\|u_0\|_2 + Mh^\theta\bigl[(\tau-t)^2 + \tau(\tau-t) + (\tau-t)\bigr]\|u_0\|_2.$$
Substituting into (5.14), consequently
$$\|PL_t u_0 - L_{h,t}u_0\| \le Mh^\theta\bigl[(\tau-t)^2 + t^2 + \tau^2 + \tau + 1\bigr]\|u_0\|_2,$$

which shows that (5.13) holds when $n = 1$.

Now suppose that (5.13) holds for $n - 1$ ($n \ge 2$). Then for n we have
$$\|PL_t^n u_0 - L_{h,t}^n u_0\| \le \|PL_t(L_t^{n-1}u_0) - L_{h,t}(L_t^{n-1}u_0)\| + \|L_{h,t}(L_t^{n-1}u_0 - L_{h,t}^{n-1}u_0)\| \le Mnh^\theta\bigl[(\tau-t)^2 + t^2 + \tau^2 + \tau + 1\bigr]\|u_0\|_2,$$

which is exactly (5.13). Thus we obtain the result. □

Next we estimate the error in semi-discretization.

Theorem 5.3 There exist $M > 0$, $\theta > 1$, and $\hat h > 0$ such that for $h \in (0,\hat h)$ and $t \in [0,\tau]$, we have
$$\|z_0 - z_{0,h}\| \le M\Bigl[\Bigl(\frac{\eta^{N_h+1}}{1-\eta} + h^\theta(\tau^2+\tau+1)N_h^2\Bigr)\|z_0\|_2 + N_h\int_0^\tau\|C^*s\,(y(s) - y_h(s))\|\,ds\Bigr].$$
Proof Using (5.5) and $z_{0,h} = \sum_{n=0}^{N_h}L_{h,\tau}^n\tilde Z_h(0)$, we can write
$$z_0 - z_{0,h} = \sum_{n>N_h}L_\tau^n\tilde Z(0) + \sum_{n=0}^{N_h}(L_\tau^n - L_{h,\tau}^n)\tilde Z(0) + \sum_{n=0}^{N_h}L_{h,\tau}^n\bigl(\tilde Z(0) - \tilde Z_h(0)\bigr).$$
Therefore, we have
$$\|z_0 - z_{0,h}\| \le E_1 + E_2 + E_3,$$
(5.15)
where we have set
$$E_1 = \Bigl\|\sum_{n>N_h}L_\tau^n\tilde Z(0)\Bigr\|, \qquad E_2 = \Bigl\|\sum_{n=0}^{N_h}(L_\tau^n - L_{h,\tau}^n)\tilde Z(0)\Bigr\|, \qquad E_3 = \Bigl\|\sum_{n=0}^{N_h}L_{h,\tau}^n\bigl(\tilde Z(0) - \tilde Z_h(0)\bigr)\Bigr\|.$$
The first term, by $\eta = \|L_\tau\| < 1$ and $\tilde Z(0) = (I - L_\tau)z_0$, can be estimated as
$$E_1 \le \Bigl(\sum_{n=N_h+1}^{\infty}\eta^n\Bigr)\|I - L_\tau\|\|z_0\| \le M\frac{\eta^{N_h+1}}{1-\eta}\|z_0\|_2.$$
(5.16)
Similarly, the second term, by Proposition 5.2, can be estimated as
$$E_2 \le Mh^\theta\sum_{n=0}^{N_h}\bigl[1 + n(\tau^2+\tau+1)\bigr]\|\tilde Z(0)\|_2 \le Mh^\theta\bigl[N_h + 1 + (\tau^2+\tau+1)(N_h^2 + N_h)\bigr]\|z_0\|_2 \le Mh^\theta(\tau^2+\tau+1)(N_h^2 + N_h)\|z_0\|_2.$$
(5.17)
For the third term, from Proposition 5.2 we know that $L_{h,\tau}$ is uniformly bounded, thus we have
$$E_3 \le MN_h\|\tilde Z(0) - \tilde Z_h(0)\| \le MN_h\bigl(\|\tilde Z(0) - P\tilde Z(0)\| + \|P\tilde Z(0) - \tilde Z_h(0)\|\bigr).$$
(5.18)
For the first term of (5.18), with (5.5), (5.7), and the embedding property we have
$$\|\tilde Z(0) - P\tilde Z(0)\| \le Mh^\theta\|z_0\|_2.$$
(5.19)

For the second term of (5.18), we apply Proposition 5.1 twice, to the time-reversed backward observer and to the forward observer, respectively.

Firstly, when $u(t) = \tilde Z(\tau - t)$, we have $F(t) = C^*(\tau-t)\,y(\tau-t)$, $u_0 = Z(\tau)$, and $u_{0,h} = Z_h(\tau)$, so
$$\|P\tilde Z(0) - \tilde Z_h(0)\| = \|Pu(\tau) - u_h(\tau)\| \le \|Pu_0 - u_{0,h}\| + Mh^\theta\bigl[(\tau^3+\tau^2)\|F\|_{2,\infty} + \tau\|F\|_{1,\infty} + (\tau^2+\tau)\|u_0\|_2\bigr] + \int_0^\tau\|F - F_h\|\,ds \le \|PZ(\tau) - Z_h(\tau)\| + Mh^\theta\bigl[(\tau^4+\tau^3)\|C^*y\|_{2,\infty} + \tau^2\|C^*y\|_{1,\infty} + (\tau^2+\tau)\|Z(\tau)\|_2\bigr] + \int_0^\tau(\tau-t)\|C^*(y(\tau-t) - y_h(\tau-t))\|\,dt.$$
Then, when $u(t) = Z(t)$, $F(t) = C^*t\,y(t)$, $u_0 = u_{0,h} = 0$, we have
$$\|PZ(\tau) - Z_h(\tau)\| = \|Pu(\tau) - u_h(\tau)\| \le Mh^\theta\bigl[(\tau^4+\tau^3)\|C^*y\|_{2,\infty} + \tau^2\|C^*y\|_{1,\infty}\bigr] + \int_0^\tau t\|C^*(y(t) - y_h(t))\|\,dt.$$

Applying Lemma 5.1, $\|u(\tau)\|_2 \le M(\|u_0\|_2 + \tau\|F\|_{2,\infty})$, we get $\|Z(\tau)\|_2 \le M\tau^2\|C^*y\|_{2,\infty}$.

Moreover, we can easily obtain
$$\int_0^\tau(\tau-t)\|C^*(y(\tau-t) - y_h(\tau-t))\|\,dt = \int_0^\tau t\|C^*(y(t) - y_h(t))\|\,dt, \qquad \|C^*y\|_{1,\infty} \le \|C^*y\|_{2,\infty} = \|C^*Cz\|_{2,\infty} \le M\|z\|_{2,\infty} = M\|z_0\|_2.$$
Thus the second term of (5.18) can be estimated as
$$\|P\tilde Z(0) - \tilde Z_h(0)\| \le Mh^\theta\bigl[(\tau^4+\tau^3)\|C^*y\|_{2,\infty} + \tau^2\|C^*y\|_{1,\infty} + (\tau^2+\tau)\|Z(\tau)\|_2\bigr] + 2\int_0^\tau t\|C^*(y(t) - y_h(t))\|\,dt \le Mh^\theta(\tau^4+\tau^3+\tau^2)\|z_0\|_2 + 2\int_0^\tau t\|C^*(y(t) - y_h(t))\|\,dt.$$
(5.20)
Therefore, substituting (5.19) and (5.20) into (5.18), we obtain
$$E_3 \le MN_h\Bigl[h^\theta(\tau^4+\tau^3+\tau^2+1)\|z_0\|_2 + \int_0^\tau t\|C^*(y(t) - y_h(t))\|\,dt\Bigr].$$
(5.21)
Finally, substituting (5.16), (5.17), and (5.21) into (5.15), we obtain
$$\|z_0 - z_{0,h}\| \le M\Bigl\{\Bigl(\frac{\eta^{N_h+1}}{1-\eta} + h^\theta\bigl[1 + (\tau^4+\tau^3+\tau^2+\tau+1)N_h + (\tau^2+\tau+1)N_h^2\bigr]\Bigr)\|z_0\|_2 + N_h\int_0^\tau t\|C^*(y(t) - y_h(t))\|\,dt\Bigr\},$$

which implies the conclusion. □

The choice of $N_h$ leads to an explicit error estimate depending only on h, so choosing $N_h$ properly is important. If we take $N_h = \lceil\theta\ln h/\ln\eta\rceil$, then according to Theorem 5.3 we get
$$\|z_0 - z_{0,h}\| \le M_\tau\Bigl[h^\theta\bigl(\ln^2 h + |\ln h|\bigr)\|z_0\|_2 + |\ln h|\int_0^\tau\|C^*s\,(y(s) - y_h(s))\|\,ds\Bigr].$$
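The balance behind this choice of $N_h$ can be checked numerically: with $N_h = \lceil\theta\ln h/\ln\eta\rceil$ the truncated-series error $\eta^{N_h}$ is driven below the discretization level $h^\theta$ (the values of θ and η below are hypothetical).

```python
import math

# With N_h = ceil(theta * ln h / ln eta), the truncated-series error
# eta^(N_h) falls below the discretization level h^theta.  theta and
# eta are hypothetical values of the projection order and contraction.
theta, eta = 2.0, 0.5
for h in (1e-1, 1e-2, 1e-3):
    Nh = math.ceil(theta * math.log(h) / math.log(eta))
    print(h, Nh, eta ** Nh, h ** theta)
    assert eta ** Nh <= h ** theta   # truncation below discretization
```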

5.2 Full-discretization

Divide the time interval $[0,\tau]$ into N subintervals ($N \ge 1$) with time step $\Delta t = \tau/N$, and denote $t_k = k\Delta t$ ($0 \le k \le N$), so that $\tau = N\Delta t$.

Applying the implicit Euler scheme at time $t_k$ to the previous Galerkin approximation (5.9), we approximate
$$\dot u(t_k) \approx D_t u(t_k) = \frac{u(t_k) - u(t_{k-1})}{\Delta t}.$$
Then the full-discretization problem is to find the solution $u_h^k \in X_h$ such that
$$\begin{cases}\langle D_t u_h^k,\varphi_h\rangle = it_k\langle u_h^k,\varphi_h\rangle_{1/2} \mp \langle C^*Ct_k\,u_h^k,\varphi_h\rangle \pm \langle Bu_h^k,\varphi_h\rangle + \langle F_h^k,\varphi_h\rangle + \langle D\tau u_h^k,\varphi_h\rangle,\\ u_h^0 = u_{0,h},\end{cases}$$
(5.22)

for all $\varphi_h \in X_h$ and $0 \le k \le N$, where $u_{0,h} \in X_h$ is the given approximation of $u_0$ and $F_h^k$ is the corresponding approximation of $F(t_k)$ in X.

Assume that $y_h^k$ is the corresponding approximation of $y(t_k)$ in Y, that $Z_h^k$ and $\tilde Z_h^k$ are the approximations of $Z(t_k)$ and $\tilde Z(t_k)$, respectively, and that $L_{h,\Delta t,k} = \mathbb{S}^k_{h,\Delta t,k}\mathbb{T}^k_{h,\Delta t,k}$ is the approximation of $L_{t_k} = \mathbb{S}^k_{t_k^2/2}\mathbb{T}^k_{t_k^2/2}$.

The convergence analysis is similar to that for the semi-discretization; as before, we prove the two main ingredients of the error estimate.
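As a finite-dimensional illustration of the implicit Euler step underlying (5.22) (with hypothetical matrices in place of $A_0$ and C, and with $B = D = 0$), each time step solves one linear system; the dissipative term $-C^*Ct_k$ keeps the scheme stable.

```python
import numpy as np

# One implicit Euler sweep for u'(t) = (i*A0 - C^T C) t u(t) + F(t), a
# finite-dimensional stand-in for scheme (5.22) with B = D = 0; A0 and
# C are hypothetical matrices, A0 symmetric so that i*A0 is skew-adjoint.
rng = np.random.default_rng(2)
n, tau, N = 4, 1.0, 200
dt = tau / N
M0 = rng.standard_normal((n, n))
A0 = (M0 + M0.T) / 2                  # self-adjoint A0
C = rng.standard_normal((1, n))
F = lambda t: np.cos(t) * np.ones(n)  # smooth illustrative forcing
I = np.eye(n)

u = np.zeros(n, dtype=complex)        # u(0) = 0
for k in range(1, N + 1):
    tk = k * dt
    G = (1j * A0 - C.T @ C) * tk      # generator evaluated at time t_k
    # implicit step: (I - dt*G) u^k = u^(k-1) + dt*F(t_k)
    u = np.linalg.solve(I - dt * G, u + dt * F(tk))

print(np.linalg.norm(u))              # bounded: the damped scheme is stable
```

Because the generator's numerical range has nonpositive real part, the implicit step is unconditionally stable, which is the property the full-discretization analysis exploits.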

Proposition 5.4 There exist $M > 0$, $\theta > 1$, and $\hat h > 0$ such that for $h \in (0,\hat h)$ and $t \in [0,\tau]$, we have
$$\|Pu(t_k) - u_h^k\| \le \|Pu_0 - u_{0,h}\| + M\Bigl\{\Delta t\sum_{i=1}^k\|F(t_i) - F_h^i\| + (h^\theta + \Delta t)\bigl[(t_k^3 + t_k\tau^2 + t_k^2\tau + t_k^2 + t_k\tau + t_k)\|u_0\|_2 + (t_k^4 + t_k^3\tau + t_k^2\tau^2 + t_k^3 + t_k^2\tau + t_k^2)\|F\|_{2,\infty} + (t_k^2 + t_k\tau + t_k)\|F\|_{1,\infty} + t_k\|\dot F(t_k)\|\bigr]\Bigr\}.$$
Proof Expanding $u(t)$ in a Taylor series at time $t_{k-1}$ and denoting by $R(t_k)$ the residual of the first order expansion, we have
$$R(t_k) = u(t_k) - u(t_{k-1}) - \Delta t\,\dot u(t_k).$$
(5.23)
Namely,
$$\dot u(t_k) = D_t u(t_k) - \frac{1}{\Delta t}R(t_k).$$
(5.24)
By the relation (5.24), for all $\varphi_h \in X_h$ and $0 \le k \le N$, we can get
$$\langle D_t(u(t_k) - u_h^k),\varphi_h\rangle = \Bigl\langle\dot u(t_k) + \frac{1}{\Delta t}R(t_k) - D_t u_h^k,\varphi_h\Bigr\rangle = \langle\dot u(t_k),\varphi_h\rangle + \frac{1}{\Delta t}\langle R(t_k),\varphi_h\rangle - \langle D_t u_h^k,\varphi_h\rangle.$$
(5.25)
Substituting (5.8) and (5.22) into (5.25) at time $t_k$, then
$$\langle D_t(u(t_k) - u_h^k),\varphi_h\rangle = it_k\langle u(t_k) - u_h^k,\varphi_h\rangle_{1/2} \mp \langle C^*Ct_k(u(t_k) - u_h^k),\varphi_h\rangle \pm \langle B(u(t_k) - u_h^k),\varphi_h\rangle + \langle F(t_k) - F_h^k,\varphi_h\rangle + \langle D\tau(u(t_k) - u_h^k),\varphi_h\rangle + \frac{1}{\Delta t}\langle R(t_k),\varphi_h\rangle.$$
(5.26)
Noting that $\langle Pu - u,\varphi_h\rangle_{1/2} = 0$ for $\varphi_h \in X_h$, thus
$$\langle u(t_k) - u_h^k,\varphi_h\rangle_{1/2} = \langle u(t_k) - Pu(t_k),\varphi_h\rangle_{1/2} + \langle Pu(t_k) - u_h^k,\varphi_h\rangle_{1/2} = \langle Pu(t_k) - u_h^k,\varphi_h\rangle_{1/2}.$$
We can also easily get
$$\langle D_t(Pu(t_k) - u_h^k),\varphi_h\rangle = \langle D_t(u(t_k) - u_h^k),\varphi_h\rangle + \langle D_t(Pu(t_k) - u(t_k)),\varphi_h\rangle.$$
(5.27)
Let $\vartheta_h^k = \frac12\|Pu(t_k) - u_h^k\|^2$; therefore for $0 \le k \le N$ we can obtain
$$\|Pu(t_k) - u_h^k\| = \sqrt{2\vartheta_h^k},$$
and for $\psi, \phi \in X$ we can obtain
$$\tfrac12\bigl(\|\psi\|^2 - \|\phi\|^2 + \|\psi - \phi\|^2\bigr) = \operatorname{Re}\langle\psi - \phi,\psi\rangle.$$
Letting $\psi = Pu(t_k) - u_h^k$ and $\phi = Pu(t_{k-1}) - u_h^{k-1}$, by the definition of $D_t$, the above identity can be rewritten as
$$D_t\vartheta_h^k = \operatorname{Re}\langle D_t(Pu(t_k) - u_h^k), Pu(t_k) - u_h^k\rangle - \frac{1}{2\Delta t}\|(Pu(t_k) - u_h^k) - (Pu(t_{k-1}) - u_h^{k-1})\|^2 \le \operatorname{Re}\langle D_t(Pu(t_k) - u_h^k), Pu(t_k) - u_h^k\rangle.$$
(5.28)
Substituting (5.26) and (5.27) into (5.28) with $\varphi_h = Pu(t_k) - u_h^k$ (the term $it_k\langle\cdot,\cdot\rangle_{1/2}$ has zero real part), then
$$D_t\vartheta_h^k \le \operatorname{Re}\Bigl[\langle D_t(Pu(t_k) - u(t_k)),\varphi_h\rangle \mp \langle C^*Ct_k(u(t_k) - u_h^k),\varphi_h\rangle \pm \langle B(u(t_k) - u_h^k),\varphi_h\rangle + \langle F(t_k) - F_h^k,\varphi_h\rangle + \frac{1}{\Delta t}\langle R(t_k),\varphi_h\rangle + \begin{cases}0,\\ -\langle C^*C\tau(u(t_k) - u_h^k),\varphi_h\rangle\end{cases}\Bigr].$$
(5.29)
Since $\|\varphi_h\| = \sqrt{2\vartheta_h^k}$, we have
$$\|Pu(t_k) - u_h^k\| \le \sqrt{2\vartheta_h^k} + \sqrt{2\vartheta_h^{k-1}},$$
(5.30)
and, by the definition of $D_t$, we can easily obtain
$$D_t\sqrt{2\vartheta_h^k} = \frac{2D_t\vartheta_h^k}{\sqrt{2\vartheta_h^k} + \sqrt{2\vartheta_h^{k-1}}}.$$
(5.31)
Using the boundedness of B and C and from (5.7), (5.29), (5.30), and (5.31), we can see that there exist $M > 0$, $\hat h > 0$, and $\theta > 1$ such that for all $h \in (0,\hat h)$ and $0 \le k \le N$, we have
$$D_t\sqrt{2\vartheta_h^k} \le M\Bigl[h^\theta\bigl(\|D_t u(t_k)\|_{1/2} + (t_k+\tau+1)\|u(t_k)\|_{1/2}\bigr) + \|F(t_k) - F_h^k\| + \frac{1}{\Delta t}\|R(t_k)\|\Bigr].$$
(5.32)
By the definition of $R(t)$ in $\mathcal{D}(A_0^{1/2})$ and the mean value theorem, we can obtain
$$\frac{1}{\Delta t}\|R(t_k)\|_{1/2} \le \sup_{s\in[t_{k-1},t_k]}\|\dot u(s)\|_{1/2} + \|\dot u(t_k)\|_{1/2}.$$
(5.33)
From the fundamental property of the norm, Lemma 5.1, (5.33), and the embedding property, we can obtain
$$\|D_t u(t_k)\|_{1/2} \le \|\dot u(t_k)\|_{1/2} + \frac{1}{\Delta t}\|R(t_k)\|_{1/2} \le M\bigl[(t_k+\tau+1)\|u_0\|_2 + t_k(t_k+\tau+1)\|F\|_{2,\infty} + \|F\|_{1,\infty}\bigr].$$
(5.34)
By the definition of $R(t)$ in X, for some $\xi \in [t_{k-1},t_k]$, we can obtain
$$R(t_k) = (t_{k-1} - t_k)\dot u(t_k) + \int_{t_{k-1}}^{t_k}\dot u(s)\,ds = \int_{t_{k-1}}^{t_k}(t_{k-1} - s)\ddot u(s)\,ds = -\tfrac12(\Delta t)^2\ddot u(\xi).$$
Thus
$$\|R(t_k)\| \le (\Delta t)^2\sup_{s\in[t_{k-1},t_k]}\|\ddot u(s)\|.$$
(5.35)
And since B and C are bounded, we have
$$\|\ddot u(t)\| = \Bigl\|\frac{d\dot u}{dt}(t)\Bigr\| = \bigl\|iA_0t\,\dot u(t) \mp C^*Ct\,\dot u(t) \pm B\dot u(t) + \dot F(t) + D\tau\dot u(t) + iA_0u(t) \mp C^*Cu(t)\bigr\| \le (t+\tau)\|\dot u(t)\|_1 + M(t+\tau+1)\|\dot u(t)\| + \|u(t)\|_1 + M\|u(t)\| + \|\dot F(t)\|.$$
(5.36)
Hence, from (5.35) and (5.36), we can obtain
$$\|R(t_k)\| \le M(\Delta t)^2\bigl[(t_k^2+\tau^2+t_k\tau+t_k+\tau+1)\|u_0\|_2 + (t_k^3+t_k^2\tau+t_k\tau^2+t_k^2+t_k\tau+t_k)\|F\|_{2,\infty} + (t_k+\tau+1)\|F\|_{1,\infty} + \|\dot F(t_k)\|\bigr].$$
And by simple iterations we get
$$\sum_{i=1}^k D_t\sqrt{2\vartheta_h^i} = \frac{\sqrt{2\vartheta_h^k} - \sqrt{2\vartheta_h^0}}{\Delta t} = \frac{\|Pu(t_k) - u_h^k\| - \|Pu_0 - u_{0,h}\|}{\Delta t}.$$
(5.37)
Summing (5.32) over $i = 1,\dots,k$, multiplying by $\Delta t$, and using (5.34), (5.35), and (5.37) with $t_k = k\Delta t$, then
$$\|Pu(t_k) - u_h^k\| \le \|Pu_0 - u_{0,h}\| + M\Bigl\{\Delta t\sum_{i=1}^k\|F(t_i) - F_h^i\| + (h^\theta + \Delta t)\bigl[(t_k^3 + t_k\tau^2 + t_k^2\tau + t_k^2 + t_k\tau + t_k)\|u_0\|_2 + (t_k^4 + t_k^3\tau + t_k^2\tau^2 + t_k^3 + t_k^2\tau + t_k^2)\|F\|_{2,\infty} + (t_k^2 + t_k\tau + t_k)\|F\|_{1,\infty} + t_k\|\dot F(t_k)\|\bigr]\Bigr\}.$$

Therefore we get the conclusion. □

Proposition 5.5 There exist $M > 0$, $\theta > 1$, and $\hat h > 0$ such that for $h \in (0,\hat h)$, $n \in \mathbb{N}$, and $0 \le k \le N$, we have
$$\|L_{t_k}^n u_0 - L_{h,\Delta t,k}^n u_0\| \le M\bigl\{h^\theta + n(h^\theta + \Delta t)\bigl[t_k^3 + t_k\tau^2 + t_k^2\tau + t_k^2 + \tau^2 + \tau + (\tau - t_k)^3 + (\tau - t_k)\tau^2 + (\tau - t_k)^2\tau + (\tau - t_k)^2\bigr]\bigr\}\|u_0\|_2.$$
Proof By the triangle inequality, we have
L t k n u 0 L h , Δ t , k n u 0 L t k n u 0 P L t k n u 0 + P L t k n u 0 L h , Δ t , k n u 0 .
For the first term, using (5.7), the embedding property and η = L t < 1 , the term can be estimated as
L t k n u 0 P L t k n u 0 M h θ u 0 2 .
(5.38)
For the second term, using mathematical induction, we can prove that
P L t k n u 0 L h , Δ t , k n u 0 M n ( h θ + Δ t ) [ t k 3 + t k τ 2 + t k 2 τ + t k 2 + τ 2 + τ + ( τ t k ) 3 + ( τ t k ) τ 2 + ( τ t k ) 2 τ + ( τ t k ) 2 ] u 0 2 .
(5.39)
When n = 1 , by definition of L t k and L h , Δ t , k , we have
$$\begin{aligned}\|PL_{t_k}u_0 - L_{h,\Delta t,k}u_0\| &= \|PT_{t_k^{2}/2}S_{t_k^{2}/2}u_0 - T_{h,\Delta t,k}^{k}S_{h,\Delta t,k}^{k}u_0\|\\ &\le \|(PT_{t_k^{2}/2} - T_{h,\Delta t,k}^{k})PS_{t_k^{2}/2}u_0\| + \|T_{h,\Delta t,k}^{k}(PS_{t_k^{2}/2}u_0 - S_{h,\Delta t,k}^{k}u_0)\|.\end{aligned}$$
(5.40)
When $B = F(t_k) = F_h^{k} = 0$ and $Pu_0 = u_{0,h,\Delta t}$, let $u_h^{k} = T_{h,\Delta t,k}^{k}u_0$ and $u_h^{N-k} = S_{h,\Delta t,k}^{k}u_0$, respectively; it follows that
$$\dot{u}_h^{k} = \begin{cases}(iA_0 - CC^{*})\,t_k\,u_h^{k},\\ (iA_0 + CC^{*})\,t_k\,u_h^{k} - (iA_0 + CC^{*})\,\tau\,u_h^{k},\end{cases}$$

which is exactly (5.6).

Thus, using Proposition 5.4, we can derive the existence of $M>0$, $\theta>1$, and $\hat{h}>0$ such that for $h\in(0,\hat{h})$ and $k\in[0,N]$ we have
$$\|PT_{t_k^{2}/2}u_0 - T_{h,\Delta t,k}^{k}u_0\| \le M(h^{\theta}+\Delta t)(t_k^{3}+t_k\tau^{2}+t_k^{2}\tau+t_k^{2}+t_k\tau+t_k)\|u_0\|_{2},$$
$$\|PS_{t_k^{2}/2}u_0 - S_{h,\Delta t,k}^{k}u_0\| \le M(h^{\theta}+\Delta t)\bigl[(\tau-t_k)^{3}+(\tau-t_k)\tau^{2}+(\tau-t_k)^{2}\tau+(\tau-t_k)^{2}+(\tau-t_k)\tau+(\tau-t_k)\bigr]\|u_0\|_{2}.$$
For the first term of (5.40), using the above conclusion and the uniform boundedness of $S_{t_k^{2}/2}$, we get
$$\|(PT_{t_k^{2}/2} - T_{h,\Delta t,k}^{k})PS_{t_k^{2}/2}u_0\| \le M(h^{\theta}+\Delta t)(t_k^{3}+t_k\tau^{2}+t_k^{2}\tau+t_k^{2}+t_k\tau+t_k)\|u_0\|_{2}.$$
For the second term of (5.40), similarly, using the above conclusion, (5.7), and the uniform boundedness of $T_{h,\Delta t,k}^{k}$, we get
$$\|T_{h,\Delta t,k}^{k}(PS_{t_k^{2}/2}u_0 - S_{h,\Delta t,k}^{k}u_0)\| \le M(h^{\theta}+\Delta t)\bigl[(\tau-t_k)^{3}+(\tau-t_k)\tau^{2}+(\tau-t_k)^{2}\tau+(\tau-t_k)^{2}+(\tau-t_k)\tau+(\tau-t_k)\bigr]\|u_0\|_{2}.$$
Substituting these two estimates into (5.40), we consequently obtain
$$\|PL_{t_k}u_0 - L_{h,\Delta t,k}u_0\| \le M(h^{\theta}+\Delta t)\bigl[t_k^{3}+t_k\tau^{2}+t_k^{2}\tau+t_k^{2}+\tau^{2}+\tau+(\tau-t_k)^{3}+(\tau-t_k)\tau^{2}+(\tau-t_k)^{2}\tau+(\tau-t_k)^{2}\bigr]\|u_0\|_{2},$$

which shows that (5.39) holds when n = 1 .

Now suppose that (5.39) holds for $n-1$ ($n\ge2$); then for $n$ we have
$$\begin{aligned}\|PL_{t_k}^{n}u_0 - L_{h,\Delta t,k}^{n}u_0\| &\le \|PL_{t_k}(L_{t_k}^{n-1}u_0) - L_{h,\Delta t,k}P(L_{t_k}^{n-1}u_0)\| + \|L_{h,\Delta t,k}(PL_{t_k}^{n-1}u_0 - L_{h,\Delta t,k}^{n-1}u_0)\|\\ &\le Mn(h^{\theta}+\Delta t)\bigl[t_k^{3}+t_k\tau^{2}+t_k^{2}\tau+t_k^{2}+\tau^{2}+\tau+(\tau-t_k)^{3}+(\tau-t_k)\tau^{2}+(\tau-t_k)^{2}\tau+(\tau-t_k)^{2}\bigr]\|u_0\|_{2},\end{aligned}$$

which is exactly (5.39). Thus from (5.38) and (5.39), we obtain the result. □
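The factor $n$ in (5.39) comes from the standard telescoping identity $A^{n}-B^{n}=\sum_{i=0}^{n-1}A^{n-1-i}(A-B)B^{i}$, which for contractions gives $\|A^{n}-B^{n}\|\le n\|A-B\|$. A quick sketch with random matrices (illustrative stand-ins, not the operators of the paper):

```python
import numpy as np

# Telescoping bound ||A^n - B^n|| <= n ||A - B|| for contractions,
# the mechanism behind the factor n in estimate (5.39).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A /= 1.01 * np.linalg.norm(A, 2)           # make A a strict contraction
B = A + 1e-3 * rng.standard_normal((5, 5)) # a nearby operator
B /= max(1.0, np.linalg.norm(B, 2))        # keep B a contraction too
gap = np.linalg.norm(A - B, 2)
ok = all(
    np.linalg.norm(np.linalg.matrix_power(A, n) - np.linalg.matrix_power(B, n), 2)
    <= n * gap + 1e-12
    for n in range(1, 30)
)
print(ok)
```

Here $A$ plays the role of $PL_{t_k}$ and $B$ that of $L_{h,\Delta t,k}$: a single-step error of size $\|A-B\|$ accumulates at most linearly over $n$ sweeps.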

Next, let us estimate the error in full-discretization.

Theorem 5.6 There exist $M>0$, $\theta>1$, and $\hat{h}>0$ such that for $h\in(0,\hat{h})$ and $t\in[0,\tau]$, we have
$$\|z_0 - z_{0,h,\Delta t}\| \le M\Bigl[\Bigl(\frac{\eta^{N_{h,\Delta t}+1}}{1-\eta} + (h^{\theta}+\Delta t)(\tau^{5}+\tau^{4}+\tau^{3}+\tau^{2}+\tau+1)N_{h,\Delta t}^{2}\Bigr)\|z_0\|_{2} + N_{h,\Delta t}\,\Delta t\sum_{i=0}^{N} t_i\|C^{*}(y(t_i)-y_h^{i})\|\Bigr].$$
Proof By (5.5) and $z_{0,h,\Delta t} = \sum_{n=0}^{N_{h,\Delta t}} L_{h,\Delta t,N}^{n}\tilde{Z}_h(0)$, we get
$$z_0 - z_{0,h,\Delta t} = \sum_{n=0}^{\infty} L_{\tau}^{n}\tilde{Z}(0) - \sum_{n=0}^{N_{h,\Delta t}} L_{h,\Delta t,N}^{n}\tilde{Z}_h(0) = \sum_{n>N_{h,\Delta t}} L_{\tau}^{n}\tilde{Z}(0) + \sum_{n=0}^{N_{h,\Delta t}} (L_{\tau}^{n}-L_{h,\Delta t,N}^{n})\tilde{Z}(0) + \sum_{n=0}^{N_{h,\Delta t}} L_{h,\Delta t,N}^{n}(\tilde{Z}(0)-\tilde{Z}_h(0)).$$
Therefore, we have
$$\|z_0 - z_{0,h,\Delta t}\| \le E_1 + E_2 + E_3,$$
(5.41)
where we have set
$$\begin{cases} E_1 = \bigl\|\sum_{n>N_{h,\Delta t}} L_{\tau}^{n}\tilde{Z}(0)\bigr\|,\\ E_2 = \bigl\|\sum_{n=0}^{N_{h,\Delta t}} (L_{\tau}^{n}-L_{h,\Delta t,N}^{n})\tilde{Z}(0)\bigr\|,\\ E_3 = \bigl\|\sum_{n=0}^{N_{h,\Delta t}} L_{h,\Delta t,N}^{n}(\tilde{Z}(0)-\tilde{Z}_h(0))\bigr\|. \end{cases}$$
The first term, by $\eta = \|L_{\tau}\| < 1$ and $\tilde{Z}(0) = (I-L_{\tau})z_0$, can be estimated as
$$E_1 \le M\frac{\eta^{N_{h,\Delta t}+1}}{1-\eta}\|z_0\|_{2}.$$
(5.42)
Similarly, the second term, by Proposition 5.5, can be estimated as
$$\begin{aligned} E_2 &\le M\sum_{n=0}^{N_{h,\Delta t}}\bigl[h^{\theta} + n(h^{\theta}+\Delta t)(\tau^{3}+\tau^{2}+\tau)\bigr]\|z_0\|_{2}\\ &\le M\bigl[(N_{h,\Delta t}+1)h^{\theta} + (h^{\theta}+\Delta t)(\tau^{3}+\tau^{2}+\tau)(N_{h,\Delta t}^{2}+N_{h,\Delta t})\bigr]\|z_0\|_{2}\\ &\le M(h^{\theta}+\Delta t)\bigl[(\tau^{3}+\tau^{2}+\tau)N_{h,\Delta t}^{2} + (\tau^{3}+\tau^{2}+\tau+1)N_{h,\Delta t} + 1\bigr]\|z_0\|_{2}. \end{aligned}$$
(5.43)
For the third term, from Proposition 5.5 we know that $L_{h,\Delta t,N}$ is uniformly bounded; thus we have
$$E_3 \le M N_{h,\Delta t}\|\tilde{Z}(0)-\tilde{Z}_h(0)\| \le M N_{h,\Delta t}\bigl(\|\tilde{Z}(0)-P\tilde{Z}(0)\| + \|P\tilde{Z}(0)-\tilde{Z}_h(0)\|\bigr).$$
(5.44)
For the first term of (5.44), with (5.5), (5.7), and the embedding property, we have
$$\|\tilde{Z}(0) - P\tilde{Z}(0)\| \le Mh^{\theta}\|z_0\|_{2}.$$
(5.45)
For the second term of (5.44), we apply Proposition 5.4 twice, to the time-reversed backward observer and to the forward observer, respectively, similarly to (5.19). Therefore,
$$\begin{aligned} \|P\tilde{Z}(0)-\tilde{Z}_h(0)\| &\le M\Bigl\{(h^{\theta}+\Delta t)\bigl[(\tau^{5}+\tau^{4}+\tau^{3})\|C^{*}y\|_{2,\infty} + (\tau^{3}+\tau^{2})\|C^{*}y\|_{2,\infty} + \tau\|C^{*}y\| + \tau^{2}\|C^{*}\dot{y}\| + (\tau^{3}+\tau^{2}+\tau)\|Z(\tau)\|_{2}\bigr] + 2\Delta t\sum_{i=0}^{N} t_i\|C^{*}(y(t_i)-y_h^{i})\|\Bigr\}\\ &\le M\Bigl\{(h^{\theta}+\Delta t)(\tau^{5}+\tau^{4}+\tau^{3}+\tau^{2}+\tau)\|z_0\|_{2} + \Delta t\sum_{i=0}^{N} t_i\|C^{*}(y(t_i)-y_h^{i})\|\Bigr\}. \end{aligned}$$
Thus, substituting the above inequality and (5.45) into (5.44), we can obtain
$$E_3 \le M N_{h,\Delta t}\Bigl[(h^{\theta}+\Delta t)(\tau^{5}+\tau^{4}+\tau^{3}+\tau^{2}+\tau+1)\|z_0\|_{2} + \Delta t\sum_{i=0}^{N} t_i\|C^{*}(y(t_i)-y_h^{i})\|\Bigr].$$
(5.46)
Substituting (5.42), (5.43), and (5.46) into (5.41), we can obtain
$$\|z_0 - z_{0,h,\Delta t}\| \le M\Bigl\{N_{h,\Delta t}\,\Delta t\sum_{i=0}^{N} t_i\|C^{*}(y(t_i)-y_h^{i})\| + \frac{\eta^{N_{h,\Delta t}+1}}{1-\eta}\|z_0\|_{2} + (h^{\theta}+\Delta t)\bigl[1 + (\tau^{3}+\tau^{2}+\tau)N_{h,\Delta t}^{2} + (\tau^{5}+\tau^{4}+\tau^{3}+\tau^{2}+\tau+1)N_{h,\Delta t}\bigr]\|z_0\|_{2}\Bigr\},$$

which implies the conclusion holds. □

The proper choice of the truncation parameter $N_{h,\Delta t}$ is important, since it leads to an explicit error estimate depending only on $h$ and $\Delta t$. If we choose $N_{h,\Delta t} = \frac{\ln(h^{\theta}+\Delta t)}{\ln\eta}$, then by Theorem 5.6 we get
$$\|z_0 - z_{0,h,\Delta t}\| \le M_{\tau}\Bigl[|\ln(h^{\theta}+\Delta t)|\,\Delta t\sum_{i=0}^{N} t_i\|C^{*}(y(t_i)-y_h^{i})\| + (h^{\theta}+\Delta t)\ln^{2}(h^{\theta}+\Delta t)\|z_0\|_{2}\Bigr].$$
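To see why this choice of $N_{h,\Delta t}$ is natural, note that Theorem 5.6 balances a geometric tail $\eta^{N+1}/(1-\eta)$ against a term growing like $\varepsilon N^{2}$ with $\varepsilon = h^{\theta}+\Delta t$. A small sketch with illustrative values ($\eta$ and $\varepsilon$ are assumptions for the demonstration, not the paper's constants):

```python
import math

# Trade-off behind the choice N = ln(h^θ + Δt)/ln η in Theorem 5.6:
# the tail η^(N+1)/(1-η) decays in N while ε·N^2 grows, ε = h^θ + Δt.
eta, eps = 0.5, 1e-6
bound = lambda N: eta ** (N + 1) / (1 - eta) + eps * N * N
N_star = round(math.log(eps) / math.log(eta))      # the paper's truncation
N_best = min(range(1, 200), key=bound)             # brute-force minimiser
print(N_star, N_best, bound(N_star) / bound(N_best))
```

With these values $N_{\star}=20$; the brute-force optimum lies nearby, and the two resulting bounds differ by less than a factor of 2, i.e. the logarithmic choice is optimal up to a constant.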

6 Examples

In this section we apply the algorithm to reconstruct the initial state for a nonlinear equation; the algorithms were implemented in Matlab. Let $\Omega\subset\mathbb{R}^{d}$ ($d\ge1$) and $\mathcal{O}\subset\Omega$. Take the state space $X = L^{2}(\Omega)$ and the output space $Y = L^{2}(\mathcal{O})$. The operators $A: D(A) = H^{2}(\Omega)\cap H_{0}^{1}(\Omega)\to X$, $C\in\mathcal{L}(X,Y)$, and $B$ are defined by $Az(x,t) = a\Delta z(x,t)$ ($a>0$),
$$Cz(x,t) = \begin{cases} z(x,t), & x\in\mathcal{O},\\ 0, & x\notin\mathcal{O}, \end{cases}$$
and $Bz(x,t) = 0$.

We consider the following initial and boundary value problem:
$$\begin{cases} \dot{z}(x,t) = at\Delta z(x,t), & (x,t)\in\Omega\times[0,\tau],\\ z(x,t) = 0, & (x,t)\in\partial\Omega\times[0,\tau],\\ z(x,0) = z_0, & x\in\Omega. \end{cases}$$
The output function is
$$\begin{cases} y(x,t) = z(x,t), & (x,t)\in\mathcal{O}\times[0,\tau],\\ y(x,t) = 0, & x\notin\mathcal{O},\ t\in[0,\tau]. \end{cases}$$
The corresponding observer system is
$$\begin{cases} \dot{Z}^{(n)}(x,t) = at\Delta Z^{(n)}(x,t) - tCZ^{(n)}(x,t) + ty(x,t),\\ Z^{(n)}(x,0) = \tilde{Z}^{(n-1)}(x,0),\qquad \tilde{Z}^{(-1)}(x,0) = Z_0, \end{cases}$$
(6.1)
$$\begin{cases} \dot{\tilde{Z}}^{(n)}(x,t) = at\Delta\tilde{Z}^{(n)}(x,t) + tC\tilde{Z}^{(n)}(x,t) - ty(x,t),\\ \tilde{Z}^{(n)}(x,\tau) = Z^{(n)}(x,\tau), \end{cases}$$
(6.2)

where $Z_0\in X$ is an arbitrary initial guess of $z_0$, here taken to be zero.
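The back-and-forth structure of (6.1)-(6.2) can be sketched on a finite-dimensional analogue before tackling the PDE. Below, a skew-symmetric matrix $A$ plays the role of the skew-adjoint generator, only the first state component is observed, and both observers are integrated with a classical RK4 scheme; all matrices and parameter values are illustrative assumptions, not the setting of the paper:

```python
import numpy as np

# Finite-dimensional analogue of the forward/backward observers (6.1)-(6.2):
# true dynamics z' = t A z with A skew-symmetric, output y = C z.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
C = np.array([[1.0, 0.0]])               # observe the first component only
K = C.T @ C                              # nudging gain C*C

z0_true = np.array([1.0, -0.5])          # initial state to be reconstructed
tau, n_steps = 2.0, 1000
dt = tau / n_steps

def y_exact(t):
    # closed-form output: z(t) is z0_true rotated by the angle t^2/2
    th = 0.5 * t * t
    return np.array([np.cos(th) * z0_true[0] + np.sin(th) * z0_true[1]])

def rk4_step(f, z, t, h):
    k1 = f(t, z)
    k2 = f(t + h / 2, z + h / 2 * k1)
    k3 = f(t + h / 2, z + h / 2 * k2)
    k4 = f(t + h, z + h * k3)
    return z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def forward(Z):
    # analogue of (6.1): Z' = t A Z - t C*C Z + t C* y, from 0 to tau
    f = lambda t, Z: t * (A @ Z - K @ Z + C.T @ y_exact(t))
    for i in range(n_steps):
        Z = rk4_step(f, Z, i * dt, dt)
    return Z

def backward(Z):
    # analogue of (6.2): Z' = t A Z + t C*C Z - t C* y, from tau down to 0
    f = lambda t, Z: t * (A @ Z + K @ Z - C.T @ y_exact(t))
    for i in range(n_steps):
        Z = rk4_step(f, Z, tau - i * dt, -dt)
    return Z

Z = np.zeros(2)                          # initial guess Z_0 = 0
err0 = np.linalg.norm(Z - z0_true)
for n in range(20):                      # 20 back-and-forth sweeps
    Z = backward(forward(Z))
err = np.linalg.norm(Z - z0_true)
print(err0, err)
```

On the true trajectory the nudging term vanishes, so the true initial state is a fixed point of one sweep; each sweep contracts the error by roughly $\|L_\tau\|$, and after a few sweeps the reconstruction error is at the level of the time-discretization error.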

In order to show the efficiency of the iterative algorithm, we consider the particular case $\Omega=[0,100]$, $\mathcal{O}=[40,60]$, $\Delta t=\tau/T$, $T=100$, $a=50$, $h=0.1$, with the initial state to be recovered $z_0 = x(100-x)/100$. We simulate the observer systems (6.1) and (6.2), from one iteration to multiple iterations, with a Crank-Nicolson scheme in time combined with a finite difference discretization in space, using a quasi-reversibility regularization for the ill-posed backward problem. Figure 1 shows the initial state, and Figure 2 shows the final evolution of the output function. After one forward and one backward iteration we obtain Figure 3 and Figure 4; clearly the result is not yet accurate. After five iterations we obtain Figure 5, which shows that the recursive algorithm reconstructs the initial state well. The example uses the simplest possible system, and the algorithm still needs to be improved.
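As an indication of how the forward solves are discretized, here is one Crank-Nicolson step for $\dot z = at\Delta z$ with homogeneous Dirichlet conditions, with the time-dependent coefficient frozen at the midpoint of the step. Grid sizes and coefficients are illustrative stand-ins, not the parameters of the experiment above, and the quasi-reversibility regularization needed for the backward solve is omitted:

```python
import numpy as np

# One Crank-Nicolson step for z' = a t Δz, homogeneous Dirichlet boundary,
# on n interior points of a 1-D grid with spacing h (illustrative sizes).
a, h, dt, n = 1.0, 0.1, 1e-3, 49
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / h ** 2

def cn_step(z, t):
    # freeze the coefficient a*t at the midpoint t + dt/2 of the step
    M = 0.5 * dt * a * (t + 0.5 * dt) * lap
    return np.linalg.solve(np.eye(n) - M, (np.eye(n) + M) @ z)

x = np.linspace(h, n * h, n)             # interior nodes of [0, (n+1)h]
z = x * ((n + 1) * h - x)                # smooth initial state, zero on the boundary
norms = [np.linalg.norm(z)]
for k in range(100):
    z = cn_step(z, k * dt)
    norms.append(np.linalg.norm(z))
print(norms[0], norms[-1])               # the forward heat flow is dissipative
```

Since the discrete Laplacian is negative definite and $at\ge 0$, every Crank-Nicolson amplification factor has modulus below one, so the discrete norms decrease monotonically, mirroring the dissipativity of the continuous forward problem.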
Figure 1

The initial state.

Figure 2

The final state of the output function.

Figure 3

The final state of one forward iteration.

Figure 4

The final state of one backward iteration.

Figure 5

The state after five iterations.

7 Conclusion

The iterative algorithm above, which uses the forward and backward observers, can estimate the initial state of the inverse problem for the nonlinear system under certain conditions. The convergence analysis based on the observers obtained by semi-discretization and full-discretization in space has also been carried out: the convergence of $z_{0,h}$ and $z_{0,h,\Delta t}$ towards $z_0$ has been shown for the nonlinear system, provided the truncation parameters $N_h$ and $N_{h,\Delta t}$ are chosen properly. The error estimates obtained provide admissible upper bounds under which convergence is guaranteed. The contribution of this paper is a systematic, comprehensive, and detailed presentation of the algorithm. Future work will address further applications of the algorithm and improvements in accuracy.

Declarations

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant No. 51205286).

Authors’ Affiliations

(1)
Department of Mathematics, School of Science, Tianjin University

References

  1. Ramdani K, Tucsnak M, Weiss G: Recovering the initial state of an infinite-dimensional system using observers. Automatica 2010, 46(10):1616–1625. doi:10.1016/j.automatica.2010.06.032
  2. Auroux D, Blum J: A nudging-based data assimilation method: the Back and Forth Nudging (BFN) algorithm. Nonlinear Process. Geophys. 2008, 15:305–319. doi:10.5194/npg-15-305-2008
  3. Auroux D, Blum J: Back and forth nudging algorithm for data assimilation problems. C. R. Math. Acad. Sci. Paris 2005, 340:873–878. doi:10.1016/j.crma.2005.05.006
  4. Hoke JE, Anthes RA: The initialization of numerical models by a dynamic initialization technique. Mon. Weather Rev. 1976, 104:1551–1556. doi:10.1175/1520-0493(1976)104<1551:TIONMB>2.0.CO;2
  5. Liu K: Locally distributed control and damping for the conservative system. SIAM J. Control Optim. 1997, 35(5):1574–1590. doi:10.1137/S0363012995284928
  6. Tucsnak M, Weiss G: Observation and Control for Operator Semigroups. Birkhäuser Advanced Texts. Birkhäuser, Basel; 2009.
  7. Kahng B: Multiple valued iterative dynamics models of nonlinear discrete-time control dynamical systems with disturbance. J. Korean Math. Soc. 2013, 50:17–39. doi:10.4134/JKMS.2013.50.1.017
  8. Curtain RF, Zwart H: An Introduction to Infinite-Dimensional Linear Systems Theory. Texts in Applied Mathematics 21. Springer, New York; 1995.
  9. Ramdani K, Takahashi T, Tenenbaum G, Tucsnak M: A spectral approach for the exact observability of infinite-dimensional systems with skew-adjoint generator. J. Funct. Anal. 2005, 226:193–229. doi:10.1016/j.jfa.2005.02.009

Copyright

© Xie and Chang; licensee Springer. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited.