
# Identification and control of delayed unstable and integrative LTI MIMO systems using pattern search methods

J. Herrera^{1, 2} (Email author), A. Ibeas^{2}, M. de la Sen^{3}, S. Alcántara^{2} and S. Alonso-Quesada^{4}

**2013**:331

https://doi.org/10.1186/1687-1847-2013-331

© Herrera et al.; licensee Springer. 2013

**Received:** 27 June 2013. **Accepted:** 25 September 2013. **Published:** 19 November 2013.

## Abstract

In this paper, we develop a multi-model adaptive control strategy to be applied to delay compensation schemes for stable/unstable LTI MIMO systems. The only requirement is that the delay be bounded and decoupled from the control strategy. The delay identification problem is formulated as an optimization problem and framed within the abstract definition of the Generalized Pattern Search Method (GPSM). Taking advantage of the global convergence analysis available for the GPSM, we analyse the stability and the delay identification capabilities of the proposed approach. Simulation examples show the usefulness of the proposed strategy, proving that the scheme is capable of identifying the delay and stabilizing the system even when the delay is large. The capabilities of the approach are tested on a second-order delayed unstable process, an unstable MIMO system and an irrigation channel model. Additionally, simulation examples on an irrigation channel with time-varying delay are presented.

## Keywords

- Reference Signal
- Nominal Model
- MIMO System
- Rational Component
- Trial Point

## 1 Introduction

The external delay in a process causes the output signal to be delayed with respect to the input. If a control strategy has to be designed for this system, the presence of the delay makes it a more difficult task, especially when the rational part of the system is unstable [1, 2]. Various strategies have been used to counteract the delay effect. The tuning of PID controllers is perhaps the most widely used. In [3–6], different tuning rules for stable/unstable systems with delay can be found. The disadvantage of such techniques is that they only work well when the delay is small compared with the time constant of the system [7].

For systems where the delay is dominant (*i.e.*, the delay is large compared with the time constant of the system), different control strategies such as delay compensation schemes (DCS) should be used, the most well-known being the Smith predictor and its modifications for unstable and integrative systems [8–11]. Generally, these approaches are applied offline to nominal parameters of the system known beforehand. An additional shortcoming is the lack of robustness to small uncertainties in the delay. In practice, when using a DCS for an unstable system, it is very difficult to ensure closed-loop stability in the presence of delay uncertainty. This makes the DCS design considerably more difficult than its stable counterpart. Accordingly, much effort has been devoted in recent years to the design of controllers for systems with delay uncertainty.

Recently, a framework focused on the identification and control of systems with delay uncertainty has been proposed both for stable [12–14] and integrative [15] systems. The approach is based on the classical Smith predictor (SP) and a multi-model scheme. The multi-model scheme contains a battery of time-varying models, which are updated using a modification rule. Each model possesses the same rational component but a different delay value. The algorithm compares the mismatch between the actual system and each model and selects, at each time interval, the one that best describes the behaviour of the actual system, providing online identification of the delay while simultaneously ensuring closed-loop stability. The way in which the delay estimate varies is determined by a heuristic optimization; this allows both the delay identification and the control of the system. Additionally, this approach leads to a robustly stable closed-loop system while achieving good performance for systems with unknown long delays.

In this sense, the work presented in [12] is extended in this paper to potentially unstable and integrative systems. In this case, the control scheme is based on the modified Smith predictor (MoSP) introduced in [8], and the optimization is framed into a Pattern Search Method (PSM) [16]. It is worth emphasizing that the component by component delay identification in an unstable MIMO system is a difficult task, since it is impractical to estimate from open-loop experiments.

Control-oriented model identification methods have been of great interest in the process control community, and there are different approaches to the identification problem, such as the design of optimal controllers based on particle swarm optimization [17] or on LMI optimization [18–21]. The drawback of all these works is that the optimization is done offline. Recently, a systematic closed-loop parametric online identification method based on a step response test and online weighted least squares optimization was proposed in [22] for integrative and unstable processes. However, that work does not address the control strategy; therefore, the modification of the controller parameters must be done offline.

In this paper, the delay identification is formulated as an optimization problem, which is solved using the so-called Pattern Search Method (PSM). The PSM has been used in mathematics and optimization theory [23, 24], but its use in control is rather limited, with only a few works on it [25, 26]. Moreover, in these two works there is neither a convergence analysis nor a formal framing within the PSM. This contrasts with the present work, where the proposed PSM is framed correctly within the generalized PSM [16] and therefore inherits its general convergence properties. Thus, analytical stability properties can be formulated for this approach adequately and easily, since previous results and concepts can be reused for this purpose.

The PSM is implemented for practical purposes on the modified Smith predictor (MoSP) [8], and it is complemented by a multi-model scheme running in parallel [27]. The multi-model scheme contains the trial points (battery of models), which are updated through time by a modification rule called an *exploratory move* in the PSM context. Each model possesses the same rational component but a different value for the delay. After an exploratory move, the algorithm compares the mismatch between the actual system and each model, and selects at each time interval the one that best describes the behaviour of the system, providing an online estimate of the delay. An advantage of the multi-model approach is that the PSM relies only on simple mathematical operations, which makes its implementation relatively straightforward.
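As an illustration of this selection step, the following sketch (hypothetical names; the shared rational dynamics are omitted, so each "model" reduces to a pure delay acting on the input) picks, from a battery of candidate delays, the one whose output best matches the measured output over a window:

```python
import numpy as np

# Hypothetical sketch of the multi-model selection step: every model shares
# the same rational component, so for illustration the rational dynamics are
# dropped and each "model" is a pure delay acting on the input u.
def best_delay_model(y, u, candidate_delays, dt):
    """Return the candidate delay whose delayed input best explains y."""
    t = np.arange(len(u)) * dt
    costs = []
    for h in candidate_delays:
        y_hat = np.interp(t - h, t, u, left=0.0)     # u delayed by h
        costs.append(np.sum((y - y_hat) ** 2) * dt)  # accumulated mismatch
    return candidate_delays[int(np.argmin(costs))]

# A bank of five candidate models; the true delay is among them.
dt, h_true = 0.01, 0.30
t = np.arange(0.0, 5.0, dt)
u = np.sin(2.0 * t) + 0.5 * np.sin(3.7 * t)  # persistently exciting input
y = np.interp(t - h_true, t, u, left=0.0)    # plant output: u delayed by h_true
h_est = best_delay_model(y, u, [0.0, 0.1, 0.2, 0.3, 0.4], dt)
```

In the full scheme the candidate outputs come from the MoSP models (5)-(7) rather than from a pure delay, but the comparison-and-selection logic is the same.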

The proposed approach is tested on different system models and under different conditions: a second-order delayed unstable process, where an uncertainty in the rational component of the system is taken into account; an unstable MIMO system; and an irrigation channel model, which is an integrative MIMO system. Additionally, the simulation results are extended to time-varying delays, showing the potential and applications of the proposed approach.

The paper is organized as follows. Section 2 states the problem formulation. Section 3 presents the proposed control scheme framed on the GPSM. The stability analysis is performed in Section 4. Simulation examples are presented in Section 5. Finally, Section 6 summarizes the main conclusions.

## 2 Problem formulation

The system under consideration is described by the transfer function matrix

$G(s)={G}^{df}(s)\u2022H(s),$ (1)

where ${G}^{df}(s)$ is a matrix containing the rational component of the system, $H(s)=({H}_{ij}(s))=({e}^{-{h}_{ij}s})$, for $i,j=1,2,\dots ,n$, is a matrix containing only delays, and • denotes the Schur (or component-wise) product [28]. The following assumptions are made about system (1).

**Assumption 1** The rational transfer function matrix ${G}^{df}(s)$ is known, proper, and has no unstable pole-zero cancellations.

**Assumption 2** The delay between each input/output component lies in a known compact interval. That is, there exist two known matrices $\overline{H}=({\overline{h}}_{ij})$, $\underline{H}=({\underline{h}}_{ij})\in {\mathbb{R}}^{n\times n}$ such that ${\underline{h}}_{ij}\le {h}_{ij}\le {\overline{h}}_{ij}$, $\mathrm{\forall}i,j$.

Assumption 1 is feasible in many control problems, where a nominal model of the system is available beforehand and does not possess unstable pole-zero cancellations. However, the delay may be unknown or even time-varying, and hence it has to be estimated [29]. Note that there is no assumption on the stability of ${G}^{df}(s)$, which may be unstable. Assumption 2 will be used in the proposed pattern-search-based algorithm to estimate the delay in Section 3, and it is feasible in many practical control problems, where bounds on the delay are known.

### 2.1 Modified Smith predictor

For brevity, we omit the Laplace variable, *s*, where convenient in the next equations. Thus, the Laplace transform with zero initial conditions of the closed-loop response obtained from Figure 1 is given by

Hence, the system with internal delay that appears in Eq. (2) becomes a system with external delay in Eq. (3). Therefore, this topology decouples the delay from the control strategy, making the system easier to control, since the compensators ${K}^{1}$, ${K}^{2}$ and *C* are designed regardless of the delay (*i.e.*, based only on the rational component of the system, which is known beforehand). Otherwise, if the delay is not known beforehand, exact compensation cannot be performed even though the rational component is known, and the closed loop of Eq. (2) can be potentially unstable.

The problem faced corresponds to the case when the matrix delay *H* is unknown, and our objective is to obtain an estimate $\stackrel{\u02c6}{H}$ of the matrix delay to be used in the MoSP structure depicted in Figure 1 in order to guarantee the stability of the system. The delay is estimated by formulating the identification problem as an optimization problem solved online by a PSM, which is briefly explained below.

### 2.2 Generalized pattern search method (GPSM)

GPSM was proposed in [16] for derivative-free unconstrained optimization (minimization in this case) of continuously differentiable convex functions $J:{\mathbb{R}}^{n}\to \mathbb{R}$.

The GPSM consists of a sequence of iterates ${\stackrel{\u02c6}{H}}_{k}^{\mathrm{nom}}$, $k\in \mathbb{N}$. At each iteration, a number of *trial steps* $\mathrm{\Delta}{h}_{k}^{n}$ are added to the iterate ${\stackrel{\u02c6}{H}}_{k}^{\mathrm{nom}}$ to obtain a number of *trial points* ${\stackrel{\u02c6}{H}}_{k}^{n}={\stackrel{\u02c6}{H}}_{k}^{\mathrm{nom}}+\mathrm{\Delta}{h}_{k}^{n}$ at iteration *k*. The objective function *J* is evaluated on these trial points through a series of *exploratory moves*, a procedure in which the trial points are evaluated and the values obtained are compared with $J({\stackrel{\u02c6}{H}}_{k}^{\mathrm{nom}})$. Then the trial step $\mathrm{\Delta}{h}_{k}^{\ast}$ associated with the minimum value of $J({\stackrel{\u02c6}{H}}_{k}^{\mathrm{nom}}+\mathrm{\Delta}{h}_{k}^{n})-J({\stackrel{\u02c6}{H}}_{k}^{\mathrm{nom}})\le 0$ is chosen to generate the next iterate ${\stackrel{\u02c6}{H}}_{k+1}^{\mathrm{nom}}={\stackrel{\u02c6}{H}}_{k}^{\mathrm{nom}}+\mathrm{\Delta}{h}_{k}^{\ast}$. The trial steps $\mathrm{\Delta}{h}_{k}^{n}$ are generated using a *step length* parameter ${\mathrm{\Delta}}_{k}\in {\mathbb{R}}_{+}^{n}$, which is also updated through time depending on the value of $\mathrm{\Delta}{h}_{k-1}^{n}$. The evolution of the trial points establishes the convergence properties of the algorithm. A full PSM explanation can be found in [16].
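As a minimal sketch of this iteration (hypothetical names; a scalar delay and the simple compass pattern $\{-{\mathrm{\Gamma}}_{k},0,+{\mathrm{\Gamma}}_{k}\}$ are assumed, with the geometric contraction ${\mathrm{\Gamma}}_{k}={\eta}^{k}{\mathrm{\Gamma}}_{0}$ used later in the paper), the loop of exploratory moves can be written as:

```python
def gpsm_minimize(J, h0, gamma0, eta=0.5, n_iter=50):
    """Compass-pattern sketch of the GPSM iteration (hypothetical names)."""
    h_nom, gamma = h0, gamma0
    for _ in range(n_iter):
        # Exploratory move: evaluate J on the trial points h_nom + step.
        # The zero step is included, so the accepted value never increases J.
        trials = [h_nom - gamma, h_nom, h_nom + gamma]
        values = [J(h) for h in trials]
        h_nom = trials[values.index(min(values))]
        gamma *= eta  # contract the step length to refine the mesh
    return h_nom

# Usage: recover the minimiser of a simple quadratic mismatch function
h_star = gpsm_minimize(lambda h: (h - 2.3) ** 2, h0=0.0, gamma0=2.0)
```

The full method works with a richer pattern set, but the accept-the-best-trial-and-contract structure is the one analysed in the convergence results below.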

## 3 Proposed control scheme using PSM

### 3.1 The patterns

The trial points are built from three ingredients: unit vectors (vectors having one in the *l*th position and zeros elsewhere); the step length parameter, ${\mathrm{\Delta}}_{k}={[\mathrm{\Delta}{h}_{k}^{1},\mathrm{\Delta}{h}_{k}^{2},\dots ,\mathrm{\Delta}{h}_{k}^{{N}_{m}}]}^{T}\in {\mathbb{R}}^{1\times {N}_{m}}$, which is adjusted by the algorithm and whose values are defined initially by the designer; and the constant matrices ${C}^{p,l}\in {\mathbb{R}}^{{N}_{m}\times n}$ (matrices having one in the (*l*th, *p*th) position and zeros elsewhere). In this way, the trial points take the form (5)-(7).

It can be seen that the trial points are generated by adding the patterns, which in turn are generated by the length parameter ${\mathrm{\Delta}}_{k}$, to the nominal model ${\stackrel{\u02c6}{H}}_{k}$. Note that the zero state (no change to the nominal model) and both positive and negative directions to change ${\stackrel{\u02c6}{H}}_{k}$ should all be included, as explained in the definition of ${\mathrm{\Delta}}_{k}$ in Section 3.3. In this way, it is ensured that the entire search space is evaluated.

### 3.2 Objective function

where $Q={Q}^{T}>0$, ${e}^{(p,l)}(\tau )=y(\tau )-{\stackrel{\u02c6}{y}}^{(p,l)}(\tau )$ is the output error, $y(\tau )$ is the vector output of the plant at the instant $t=\tau $, while ${\stackrel{\u02c6}{y}}^{(p,l)}(\tau )$ denotes the (vector) output of each different model ${\stackrel{\u02c6}{H}}^{(p,l)}$. Notice that both $y(t)$ and ${\stackrel{\u02c6}{y}}^{(p,l)}(t)$ depend on the reference signal $r(t)$, and so does (8); this dependence is expressed in (8) implicitly through *t*. ${T}_{\mathrm{res}}$ is the so-called residence time and defines the time window over which the delay model ${\stackrel{\u02c6}{H}}^{(p,l)}$ is evaluated. Also, the objective function (8) differs from the one proposed in [16] because it is time-dependent. However, this is not an obstacle to framing the present approach within the GPSM, as explained in Section 3.4.
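Numerically, the cost (8) can be approximated on sampled outputs over one residence window; the following sketch (hypothetical names, rectangular-rule integration) shows the weighted squared-error computation:

```python
import numpy as np

def window_cost(y, y_hat, Q, dt):
    """Rectangular-rule approximation of the cost (8) over one window:
    J ~ sum_tau e(tau)^T Q e(tau) * dt, with e = y - y_hat sampled every dt.

    y, y_hat : arrays of shape (n_samples, n_outputs)
    Q        : symmetric positive-definite weighting matrix
    """
    e = y - y_hat
    quad = np.einsum('ti,ij,tj->t', e, Q, e)  # e(tau)^T Q e(tau) per sample
    return float(np.sum(quad) * dt)
```

One such cost is accumulated per model ${\stackrel{\u02c6}{H}}^{(p,l)}$ over each residence time, and the exploratory move compares these values against the cost of the nominal model.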

where $\mathrm{\Omega}=((I+({K}^{1}+C){\stackrel{\u02c6}{G}}^{df}-C{\stackrel{\u02c6}{G}}^{df}\u2022{\stackrel{\u02c6}{H}}^{(p,l)}){\stackrel{\u02c6}{\chi}}^{-1}\chi +C{G}^{df}\u2022H)$. It is readily seen from (9) that the error is zero when ${\stackrel{\u02c6}{H}}^{(p,l)}=H$. According to this fact, the search for the minimum of the function (8) leads to an estimation of the actual matrix delay. Thus, the identification problem is converted into an optimization one. It is important to notice that (8) is a continuously differentiable function, since it is defined as the square of the subtraction of two functions that are continuously differentiable. Also, (8) satisfies the compactness condition stated in [16]. On the other hand, the simple decrease condition (convexity) cannot be guaranteed to be satisfied.

### 3.3 Proposed pattern search method

for $0<\eta <1$. Furthermore, ${lim}_{k\to \mathrm{\infty}}{\mathrm{\Gamma}}_{k}=0$. Notice that since ${\mathrm{\Delta}}_{k}$ contains positive and negative values, the trial points (5) and (7) are defined as additions and subtractions to the nominal model. These positive and negative values, along with the zero value in the central position of (10) and adequate values for ${N}_{m}$ and ${\mathrm{\Gamma}}_{0}$, allow a dense search in the delay space (*i.e.*, the complete search space can be explored to find the minimum of the objective function (8)).

The models are not equally spaced in the delay space, since the mesh is more refined near the nominal delay. To show this, the separation between consecutive patterns can be calculated from Eq. (10). Consider only the positive values for ${\mathrm{\Delta}}_{k}$ (the separation for the negative part is identical by symmetry): ${\mathrm{\Delta}}_{k}={[{l}^{2}{\mathrm{\Gamma}}_{k}]}_{l=0}^{l=(\frac{{N}_{m}-1}{2})}$. Then, the separation between two consecutive patterns is $\delta {\mathrm{\Delta}}_{k}={(l+1)}^{2}{\mathrm{\Gamma}}_{k}-{l}^{2}{\mathrm{\Gamma}}_{k}=(2l+1){\mathrm{\Gamma}}_{k}$ with $l=0,1,\dots ,(\frac{{N}_{m}-1}{2})-1$. Notice that $\delta {\mathrm{\Delta}}_{k}$ increases as *l* increases, which means that the patterns are more separated as they get farther from the nominal model. The largest separation between patterns is $\delta {\mathrm{\Delta}}_{k,\mathrm{max}}=({N}_{m}-2){\mathrm{\Gamma}}_{k}$, while the minimum separation is $\delta {\mathrm{\Delta}}_{k,\mathrm{min}}={\mathrm{\Gamma}}_{k}$ at each iteration *k*.
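These spacing formulas are easy to verify numerically; the following snippet checks that the gaps between consecutive positive patterns are $(2l+1){\mathrm{\Gamma}}_{k}$, with extremes ${\mathrm{\Gamma}}_{k}$ and $({N}_{m}-2){\mathrm{\Gamma}}_{k}$ (illustrative values ${N}_{m}=9$, ${\mathrm{\Gamma}}_{k}=1$):

```python
# Numerical check of the pattern spacing (positive half only; Gamma_k = 1.0).
Nm, gamma_k = 9, 1.0
L = (Nm - 1) // 2                                   # patterns at l = 0, 1, ..., L
delta = [l ** 2 * gamma_k for l in range(L + 1)]    # Delta_k entries l^2 * Gamma_k
gaps = [delta[l + 1] - delta[l] for l in range(L)]  # (2l + 1) * Gamma_k
assert gaps == [(2 * l + 1) * gamma_k for l in range(L)]
assert max(gaps) == (Nm - 2) * gamma_k  # largest gap, farthest from nominal
assert min(gaps) == gamma_k             # smallest gap, next to the nominal
```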

In this approach, the values of *η*, ${\mathrm{\Gamma}}_{0}$ and ${N}_{m}$ are fixed by the designer according to the desired convergence time, the cycle time and the architecture of the processor. For practical purposes, the approach has the advantage of being easily implementable in real systems, such as microcontrollers, low-cost chips or similar programmable devices. Its operation is simple, which opens up a wide range of possible applications.

### 3.4 Convergence results of the identification scheme

This section states the convergence results for Algorithm 3, which guarantee the identification of the actual matrix delay. The proof is divided into two steps. Firstly, the conditions under which (8) has a unique global minimum at ${\stackrel{\u02c6}{H}}_{k}^{\mathrm{nom}}=H$ are stated. Secondly, it is shown that the proposed Algorithm 3 is able to asymptotically find the global minimum of the function $J(\stackrel{\u02c6}{H},t)$.

From (8), it can be seen that $J(\stackrel{\u02c6}{H},t)=0$ when $\stackrel{\u02c6}{H}=H$, and $J(\stackrel{\u02c6}{H},t)\ge 0$ since its integrand is non-negative. Thus, the global minimum is attained when $J(\stackrel{\u02c6}{H},t)=0$. The following Assumption 3 will be used subsequently.

**Assumption 3**Fix an arbitrary ${T}_{\mathrm{res}}>0$. The reference signals ${r}_{1}(t),{r}_{2}(t),\dots ,{r}_{n}(t)$ satisfy the following conditions:

for all $\lambda ,{\lambda}_{i},{\gamma}_{i}\in [\underline{h},\overline{h}]\cap (0,\mathrm{\infty})$ for $i=1,2,\dots ,n$, $\overline{h}=max{\overline{h}}_{ij}$, $\underline{h}=min{\underline{h}}_{ij}$, $j\ne i$, ${\u03f5}_{j}\in \{0,1\}$ and $t\in \mathcal{I}\subseteq [k{T}_{\mathrm{res}},(k+1){T}_{\mathrm{res}})$, $k\in \mathbb{N}$, for at least one connected interval ℐ of positive measure.

The meaning and role played by Assumption 3 in the delay identification is pointed out in the proof of Lemma 1. Basically, the interpretation of (13) is that the reference signals cannot be periodic, and that the different sums between them cannot equal another reference signal or a delayed version of it. This interpretation allows us to generate suitable reference signals easily in practice, even though Eq. (13) looks complicated.

**Lemma 1** *The function* (8) *has a unique global minimum at* $\stackrel{\u02c6}{H}=H$ *for all* $t\ge 0$, *provided that the reference signals* ${r}_{1}(t),{r}_{2}(t),\dots ,{r}_{n}(t)$ *satisfy Assumption* 3 *for a given* ${T}_{\mathrm{res}}>0$.

*Proof* The proof is done in the same way as the proof of Lemma 2 in [12]. □

In conclusion, if compensation between the different components is not possible due to Assumption 3, then the actual matrix delay is the unique global minimum for the objective function (8).

Lemma 1 states that the minimum of $J(\stackrel{\u02c6}{H},t)$ is unique if the reference signals ${r}_{j}(t)$ satisfy Assumption 3. The approach presents the peculiarity that the global minimum is always the same, but for each time window, the function $J(\stackrel{\u02c6}{H},t)$ may take a different form. However, the same ideas and concepts from the GPSM can still be applied to this problem.

Next, we establish that the proposed algorithm is able to find the global minimum of the proposed function $J(\stackrel{\u02c6}{H},t)$. Lemma 1 guarantees that, under Assumption 3, $J(\stackrel{\u02c6}{H},t)$ has a unique global minimum, but there may be local minima, while the GPSM is designed for functions satisfying the simple decrease condition (convex functions). Fortunately, the original GPSM given in [16] is extended in [30] to functions with multiple local minima. This is achieved by making the search dense, according to the ideas in [30], which in the presented algorithm amounts to choosing the parameter ${\mathrm{\Gamma}}_{0}$ very close to zero and ${N}_{m}$ sufficiently large.

Now, we can formulate the following result on delay identification based on the dense construction of patterns according to the ideas in [30] and the global convergence results stated in [16].

**Theorem 1** *Consider the delay system given by* (1) *satisfying Assumptions* 1 *and* 2. *Then, the PSM-based Algorithm* 3, *through the models* (5)-(7), *can identify the actual matrix delay provided that the reference signals* ${r}_{1}(t),{r}_{2}(t),\dots ,{r}_{n}(t)$ *satisfy Assumption* 3 *for a value of* ${T}_{\mathrm{res}}$, ${\mathrm{\Gamma}}_{0}$ *is sufficiently close to zero and* ${N}_{m}$ *is sufficiently large*.

The proof of Theorem 1 relies on two basic features. The first one is that the mesh generated by the patterns is dense in the search space. This fact allows obtaining an estimate of the delay lying in a neighbourhood of the actual delay, where there is no local optimum except the global one. Secondly, the algorithm is proven to converge to the actual delay by reduction to the absurd.

*Proof* If the initial estimate is the actual matrix delay, then the delay is identified and the theorem is proven. Thus, consider that ${\stackrel{\u02c6}{H}}_{0}\ne H$. The particular difficulty of the pattern search method here is that the objective function to be optimized is time-varying, since it changes at each residence time. In this way, we may consider each of the objective functions (8) evaluated at each residence time multiple, $t=k{T}_{\mathrm{res}}$, to define the family of functions ${J}_{k}(\stackrel{\u02c6}{H})={\int}_{(k-1){T}_{\mathrm{res}}}^{k{T}_{\mathrm{res}}}{(y(\tau )-\stackrel{\u02c6}{y}(\tau ,\stackrel{\u02c6}{H}))}^{2}\phantom{\rule{0.2em}{0ex}}d\tau $, which satisfy ${J}_{k}(\stackrel{\u02c6}{H})=0$ if and only if $\stackrel{\u02c6}{H}=H$. Lemma 1 ensures that all these functions possess a common unique global minimum provided that Assumption 3 holds. However, each of these functions may possess a number of local minima. Two situations can then occur:

- (a)
All the functions ${J}_{k}(\stackrel{\u02c6}{H})$ only have the actual matrix delay as local minimum. Then, any new iteration ${\stackrel{\u02c6}{H}}_{1}$ belongs to an interval, where there is no other minimum than the actual delay. This would be the best situation since the optimization process will not be threatened by the presence of local minima.

- (b)
At least one of the functions ${J}_{k}(\stackrel{\u02c6}{H})$ has extra local minima distinct from the global minimum. In this case, let us denote by $K\subseteq \mathbb{N}$ the set indexing the functions having local minima and by ${C}_{k}$, with $k\in K$, the set of all delays that are local minima of the function ${J}_{k}(\stackrel{\u02c6}{H})$, excluding the global optimum at the actual matrix delay.

Hence, ${J}_{k}(\stackrel{\u02c6}{H})>0$ for all $\stackrel{\u02c6}{H}\in {C}_{k}$ and all $k\in K$. Moreover, we can define $\theta =inf\{{J}_{k}(\stackrel{\u02c6}{H})|\stackrel{\u02c6}{H}\in {C}_{k},k\in K\}>0$, which exists and is finite and positive. The parameter *θ* defines a neighbourhood, *I*, around the actual delay, where none of the functions ${J}_{k}(\stackrel{\u02c6}{H})$ has any local minimum except the global one. In addition, denote by *λ* the diameter of the set *I*. Thus, if the patterns define a dense subset of the search space in such a way that the patterns cover all the space, $2{(\frac{{N}_{m}-1}{2})}^{2}{\mathrm{\Gamma}}_{0}\ge \overline{H}-\underline{H}$ (*i.e.*, the spread of the patterns given by the total amplitude of ${\mathrm{\Delta}}_{0}$ is larger than the uncertainty range given by Assumption 2), and the separation between them is sufficiently small, $\parallel \delta {\mathrm{\Delta}}_{0,\mathrm{max}}\parallel =({N}_{m}-2)\parallel {\mathrm{\Gamma}}_{0}\parallel <\lambda $, which is achieved with a sufficiently large ${N}_{m}$ and a sufficiently small $\parallel {\mathrm{\Gamma}}_{0}\parallel $, then for any initial estimate ${\stackrel{\u02c6}{H}}_{0}$, there exists an estimate ${\stackrel{\u02c6}{H}}_{1}$ such that $J({\stackrel{\u02c6}{H}}_{1})<\theta $. This implies that the iteration provides an estimate within a neighbourhood *I* of the actual matrix delay, where the functions have no minimum other than the actual one.

In conclusion, the iteration ${\stackrel{\u02c6}{H}}_{1}$ is always guaranteed to belong to an interval where the actual matrix delay, *H*, is the only optimum. This is the first feature of the proof. Notice that it is not necessary to compute the location of the minima of the functions ${J}_{k}(\stackrel{\u02c6}{H})$, since they are only used to ensure that such an estimate ${\stackrel{\u02c6}{H}}_{1}$ exists. Once the estimate ${\stackrel{\u02c6}{H}}_{1}$ belongs to *I*, all the following iterates do too, *i.e.*, ${\stackrel{\u02c6}{H}}_{k}\in I$ for all $k\ge 1$.

Notice that the estimates sequence ${\{{\stackrel{\u02c6}{H}}_{k}\}}_{k=0}^{\mathrm{\infty}}$ converges to a constant finite limit since ${lim}_{k\to \mathrm{\infty}}{\mathrm{\Gamma}}_{k}={lim}_{k\to \mathrm{\infty}}{\eta}^{k}{\mathrm{\Gamma}}_{0}=0$ for $0<\eta <1$ and, therefore, ${lim}_{k\to \mathrm{\infty}}{\mathrm{\Delta}}_{k}=0$ in Eq. (10), which makes all the patterns collapse to only one asymptotically. Hence, no oscillatory behaviour is asymptotically possible for ${\stackrel{\u02c6}{H}}_{k}$, and convergence to a constant value is guaranteed.

Now, the convergence of the estimates to the actual matrix delay is proven by contradiction. Thus, assume that ${\stackrel{\u02c6}{H}}_{k}\to {H}_{\ast}\in I$ with ${H}_{\ast}\ne H$, which means that the estimates converge to a delay value different from the actual one. In this way, there exists $\u03f5>0$ such that $\parallel H-{H}_{\ast}\parallel >2\u03f5$. It will be proven that this leads to a contradiction. Recall that Lemma 1 ensures that the global minimum is unique and located at the actual matrix delay, provided that Assumption 3 holds, which implies that ${J}_{k}({H}_{\ast})>0$ for all *k*. Two cases are possible:

- (i)
There is a finite ${k}_{0}\in \mathbb{N}$ such that ${\stackrel{\u02c6}{H}}_{k}={H}_{\ast}$ for all $k\ge {k}_{0}\ge 1$, which means that the limit ${H}_{\ast}$ is reached in finite time.

- (ii)
For a prescribed finite $\u03f5>0$, there is a finite ${k}_{0}={k}_{0}(\u03f5)\in \mathbb{N}$ such that for all $k\ge {k}_{0}\ge 1$, $\parallel {H}_{\ast}-{\stackrel{\u02c6}{H}}_{k}\parallel \le \u03f5$, which means that the limit ${H}_{\ast}$ is reached asymptotically.

- (i)
This case requires that ${\stackrel{\u02c6}{H}}_{{k}_{0}}={\stackrel{\u02c6}{H}}_{{k}_{0}+1}={H}_{\ast}$. However, since in particular ${J}_{{k}_{0}}({H}_{\ast})>0$, and there are no local minima of any function ${J}_{k}$ in *I*, since they are avoided by its definition, there is a direction in the search space along which ${J}_{{k}_{0}}(\stackrel{\u02c6}{H})$ decreases, since the minimum is located at $H\in I$, *i.e.*, ${J}_{{k}_{0}}(H)=0$. Taking into account that the patterns define a dense mesh in the search space, there is a direction in $\mathrm{\Delta}{h}_{k}^{p,l}$, which defines the patterns through Eq. (4), near this decreasing direction and a value $1\le l\le (\frac{{N}_{m}-1}{2})$ such that the step length parameter ${\mathrm{\Delta}}_{{k}_{0}}={l}^{2}{\mathrm{\Gamma}}_{{k}_{0}}$ given by Eq. (10) defines a pattern ${\stackrel{\u02c6}{H}}_{{k}_{0}}^{(p,l)}$ with $J({\stackrel{\u02c6}{H}}_{{k}_{0}}^{(p,l)})<J({\stackrel{\u02c6}{H}}_{{k}_{0}})$. Consequently, ${\stackrel{\u02c6}{H}}_{{k}_{0}+1}={\stackrel{\u02c6}{H}}_{{k}_{0}}^{(p,l)}\ne {\stackrel{\u02c6}{H}}_{{k}_{0}}$, which contradicts the initial assumption that ${\stackrel{\u02c6}{H}}_{{k}_{0}}={\stackrel{\u02c6}{H}}_{{k}_{0}+1}={H}_{\ast}$.

- (ii)
The condition $\parallel {H}_{\ast}-{\stackrel{\u02c6}{H}}_{k}\parallel \le \u03f5$ for a sufficiently small finite $\u03f5>0$ defines a neighbourhood *L* of ${H}_{\ast}\ne H$, where the estimates lie, ${\stackrel{\u02c6}{H}}_{k}\in L$, for all $k\ge {k}_{0}$. However, this fact along with $\parallel H-{H}_{\ast}\parallel >2\u03f5$ implies that

$\begin{array}{rl}2\u03f5& <\parallel H-{H}_{\ast}\parallel =\parallel H-{\stackrel{\u02c6}{H}}_{k}+{\stackrel{\u02c6}{H}}_{k}-{H}_{\ast}\parallel \\ & \le \parallel H-{\stackrel{\u02c6}{H}}_{k}\parallel +\parallel {\stackrel{\u02c6}{H}}_{k}-{H}_{\ast}\parallel \le \parallel H-{\stackrel{\u02c6}{H}}_{k}\parallel +\u03f5\end{array}$ (14)

$\phantom{\rule{1em}{0ex}}\Rightarrow \phantom{\rule{1em}{0ex}}\parallel H-{\stackrel{\u02c6}{H}}_{k}\parallel >\u03f5.$ (15)

Therefore, there is a finite $\underline{\mu}(\u03f5)>0$ satisfying ${J}_{k}(\stackrel{\u02c6}{H})\ge \underline{\mu}>0$ for all $\stackrel{\u02c6}{H}\in L$. Since the functions ${J}_{k}(\stackrel{\u02c6}{H})$ are continuous with respect to $\stackrel{\u02c6}{H}$, there exists a value ${\stackrel{\u02c6}{H}}_{{k}_{0}}^{m}\notin L$ such that ${J}_{k}({\stackrel{\u02c6}{H}}_{{k}_{0}}^{m})={\mu}_{1}$ for some $0<{\mu}_{1}<\underline{\mu}$. Hence, since ${\stackrel{\u02c6}{H}}_{{k}_{0}}^{m}\notin L$, then $\parallel {H}_{\ast}-{\stackrel{\u02c6}{H}}_{{k}_{0}}^{m}\parallel >\u03f5$.

Notice that since the estimates belong to the interval *I*, where there are no local minima, and $\parallel \delta {\mathrm{\Delta}}_{0,\mathrm{max}}\parallel =({N}_{m}-2)\parallel {\mathrm{\Gamma}}_{0}\parallel <\lambda $, the distance between the estimate and ${H}_{\ast}$ is strictly smaller than $\parallel \delta {\mathrm{\Delta}}_{k,\mathrm{max}}\parallel =({N}_{m}-2){\eta}^{k}\parallel {\mathrm{\Gamma}}_{0}\parallel <\lambda $, which is strictly smaller than ${(\frac{{N}_{m}-1}{2})}^{2}{\eta}^{k}\parallel {\mathrm{\Gamma}}_{0}\parallel $ at each step *k*, the range defined by the step length through Eq. (10); *i.e.*, there are patterns outside the region *L*. Thus, the patterns define a dense mesh in the search space, and there is a direction in $\mathrm{\Delta}{h}_{k}^{p,l}$, which defines the patterns through Eqs. (5)-(7), and a value $1\le l\le (\frac{{N}_{m}-1}{2})$ such that the step length parameter ${\mathrm{\Delta}}_{{k}_{0}}={l}^{2}{\mathrm{\Gamma}}_{{k}_{0}}$ defines a pattern ${\stackrel{\u02c6}{H}}_{k}^{(p,l)}$ near a value of the delay ${\stackrel{\u02c6}{H}}_{k}^{m}$ satisfying ${J}_{k}({\stackrel{\u02c6}{H}}_{k}^{m})={\mu}_{1}$ for some $0<{\mu}_{1}<\underline{\mu}$, since the patterns extend beyond the boundary of *L*. Then, $J({\stackrel{\u02c6}{H}}_{{k}_{0}}^{(p,l)})<J(\stackrel{\u02c6}{H})$ for all $\stackrel{\u02c6}{H}\in L$. Consequently, ${\stackrel{\u02c6}{H}}_{{k}_{0}+1}={\stackrel{\u02c6}{H}}_{{k}_{0}}^{(p,l)}\notin L$, which contradicts the initial assumption that ${\stackrel{\u02c6}{H}}_{k}\in L$ for, in particular, $k={k}_{0}+1$.

In conclusion, there is a contradiction with the initial assumption of ${\stackrel{\u02c6}{H}}_{k}\to {H}_{\ast}$ with ${H}_{\ast}\ne H$, and hence, the sequence of iterations converges to the actual matrix delay *H*, proving the theorem. □

Notice that the identification result has been established thanks to the fact that the identification problem is formulated within a GPSM, taking advantage of this technique in its application to control theory. Also, note that Theorem 1 requires ${\mathrm{\Gamma}}_{0}$ to be sufficiently close to zero and ${N}_{m}$ to be sufficiently large. However, it has been observed in simulation examples that finite values are sufficient for practical applications, as shown in Section 5.

### 3.5 Extension to time-varying delays

where $\underline{\mathrm{\Gamma}}>0$ is the lower bound of the reduction factor matrix. As can be seen, Eq. (16) is easily implementable in Algorithm 3. This condition implies that the length parameter (10) is only reduced down to a certain positive value (defined by $\underline{\mathrm{\Gamma}}$). In this way, we can ensure that there will always be different models (5)-(7) generated in such a way that the entire search space can be evaluated regardless of the potential time evolution of the delays. Thus, the proposed Algorithm 3 is flexible enough to handle the identification of time-varying delays. Note that in the time-varying delay case, the precision of the estimation is determined by the value of $\underline{\mathrm{\Gamma}}$.
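A sketch of this lower-bounded update in the spirit of Eq. (16) (hypothetical names and numerical values) is:

```python
def update_step_length(gamma_k, eta=0.5, gamma_min=0.05):
    """Lower-bounded contraction sketch (hypothetical values): contract by
    eta, but never below gamma_min, so the pattern mesh keeps a nonzero
    width and can keep tracking a time-varying delay."""
    return max(eta * gamma_k, gamma_min)

# The sequence decreases geometrically and then saturates at gamma_min.
seq, g = [], 1.0
for _ in range(10):
    g = update_step_length(g)
    seq.append(g)
```

With $\underline{\mathrm{\Gamma}}=0$ this reduces to the geometric contraction ${\mathrm{\Gamma}}_{k}={\eta}^{k}{\mathrm{\Gamma}}_{0}$ used for constant delays; a larger $\underline{\mathrm{\Gamma}}$ trades steady-state estimation precision for tracking ability.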

## 4 Stability analysis

This section states the stability properties of the closed loop. We will use the identification properties of the proposed algorithm, stated in Theorem 1, to show that the nominal delay converges to a neighbourhood of the actual matrix delay in finite time, so that the closed loop eventually becomes a time-invariant system. Thus, the stability theorem is formulated as follows.

**Theorem 2** *The closed-loop system depicted in Figure * 2, *obtained from Eqs.* (1), (5)-(7) *through Algorithm * 3, *is stable provided that Assumptions * 1 *and * 2 *hold*, ${\mathrm{\Gamma}}_{0}$ *is sufficiently close to zero*, ${N}_{m}$ *is sufficiently large, and* $(C+{K}^{1})$ *stabilizes* ${G}^{df}(s)$.

*Proof* The proof is made by contradiction. If the output is unbounded, then the input signal behaves as a non-periodic signal satisfying Assumption 3 for any value of ${T}_{\mathrm{res}}$. Therefore, Theorem 1 guarantees that the nominal delay converges to the actual matrix delay, and hence ${lim}_{t\to \mathrm{\infty}}\hat{H}(t)=H$, implying ${lim}_{t\to \mathrm{\infty}}{\hat{h}}_{ij}(t)={h}_{ij}$, $\mathrm{\forall}ij$. Consequently, there exist finite ${t}_{ij}\in \mathbb{R}$ such that $|{\hat{h}}_{ij}(t)-{h}_{ij}|\le {\epsilon}_{ij}$, $\mathrm{\forall}t\ge {t}_{ij}$, for any prescribed ${\epsilon}_{ij}>0$. Thus, denote $\epsilon ={max}_{ij}{\epsilon}_{ij}$ and ${t}^{\ast}={max}_{ij}{t}_{ij}$, and consider the state-space realization of the closed-loop system given by Eq. (2):

where the delays *h*, $\hat{h}(t)$ are the representation of the matrix delays in a state-space description, while ${A}_{0}$ and ${A}_{1}$ are appropriate matrices. Furthermore, ${A}_{0}$ is the state-space description of the perfectly compensated delay given by the closed-loop system of Eq. (3), and is hence stable by design through the compensators *C* and ${K}^{1}$. Moreover, since Theorem 1 guarantees that the delay is identified, $\parallel x(t-h)-x(t-\hat{h}(t))\parallel \to 0$, $\mathrm{\forall}t\ge {t}_{1}\ge {t}^{\ast}$, and there is $\delta =\delta (\epsilon )$ such that $\parallel x(t-h)-x(t-\hat{h}(t))\parallel \le \delta $, $\mathrm{\forall}t\ge {t}_{1}$.

*i.e.*, the system with $r(t)=0$). Thus, the solution to Eq. (17) is given by:
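The displayed solution was lost in conversion; assuming that Eq. (17) denotes the closed loop written as $\dot{x}(t)={A}_{0}x(t)+{A}_{1}[x(t-h)-x(t-\hat{h}(t))]$ with $r(t)=0$ (consistent with the matrices described above), a plausible reconstruction is the variation-of-constants form

```latex
x(t) = e^{A_0 (t - t_1)}\, x(t_1)
     + \int_{t_1}^{t} e^{A_0 (t - \tau)}\, A_1
       \left[ x(\tau - h) - x(\tau - \hat{h}(\tau)) \right] d\tau ,
```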

for $t\ge {t}_{1}$. Since the matrix ${A}_{0}$ is a stability matrix, the system is linear, there is no finite escape time on the finite interval $[0,{t}_{1}]$, and the entries of ${A}_{1}$ are bounded (being the realization of a finite transfer function given by Eq. (2)), all the signals in the closed-loop system are bounded. Hence, the state is bounded, which contradicts the initial assumption that the output, and therefore the state, diverges. □

The proof is straightforward because the scheme has been correctly framed within the PSM, thereby inheriting its convergence properties, and because the MoSP exhibits a minimum level of robustness. Note that, since the delay is identified, the output tends to that of the perfectly compensated system.

## 5 Simulation examples

In this section, we examine the performance of the proposed scheme in four simulation scenarios. (i) The first scenario shows that it is not necessary to know the system exactly; accordingly, we suppose that the unstable system has an uncertainty of 6% in its parameters. (ii) The second scenario shows the effectiveness of the scheme in delay identification, using a $2\times 2$ MIMO system with unstable poles. (iii) The proposed approach is tested on an irrigation channel model, which is modelled as an integrative MIMO system. (iv) Finally, a simulation of a system with time-varying delay is presented.

### 5.1 Second-order delayed unstable processes

and $C=\frac{(0.0625s+0.35)}{0.25s}$, ${K}^{1}=12.468$ and ${K}^{2}=1.414$. For this simulation, we used ${T}_{\mathrm{res}}=1.2$ seconds, ${\gamma}_{ij}=0.1$ and ${\mathrm{\Delta}}_{k}=[2,4,8,12,16,20,23]$.

A comparison with the MoSP [8] is made. First, note that the nominal delay used in the MoSP, ${\hat{h}}_{0}^{\mathrm{nom}}=9$ seconds, is lower than the initial nominal delay used in our approach, ${\hat{h}}_{0}^{\mathrm{nom}}=23$ seconds. This was necessary because the MoSP becomes unstable with a nominal delay of 23 seconds.

It can be seen that good results are obtained despite the fact that the number of models (${N}_{m}=7$) is small and that the value ${\gamma}_{ij}=0.1$ is quite large in comparison with the requirements of Theorem 1.

### 5.2 MIMO case

As can be noticed, the convergence time is small in comparison with the channel delays, which allows the system to remain stable.

### 5.3 Integrative case

*C* is given by

and ${K}^{1}=diag(8,8,8)$ and ${K}^{2}=diag(1.1,1.1,1.1)$. The way of selecting these values for ${K}^{1}$ and ${K}^{2}$ controllers is explained in detail in [8].

The initial parameters used in the algorithm are: ${T}_{\mathrm{res}}=2.5$ seconds, ${\hat{H}}_{k}^{\mathrm{nom}}=diag(0,0,0)$ seconds, and ${\mathrm{\Delta}}_{k}=[2,4,6,8,10,12,14,16,18,20,22]$. Note that the number of models is low (${N}_{m}=11$).

As expected, the simulation shows that, in all three pools, better performance is achieved by the proposed scheme when the time delay has a modelling error. The two control strategies are not compared under identical conditions, since the proposed scheme adjusts the controller over time while the other strategy remains fixed; nevertheless, the comparison is useful to show the effectiveness of the proposed scheme.

### 5.4 Time-varying delay

Figure 11(a), (b) and (c) shows the water level for pools 1, 2 and 3, respectively. The proposed scheme is compared with the PI controller proposed in [8], where the delays are fixed at $\hat{H}=diag(11.02,10.08,9.1)$. As expected, the simulation shows that, in all three pools, better performance is achieved by the proposed scheme for time-varying delays.

Figures 11(d), (e) and (f) show the delay evolution for pools 1, 2 and 3, respectively. It is noteworthy that the identification is very precise and that the presented Algorithm 3 is able to follow the time evolution of the delays. It can be seen that the approach identifies both abrupt and continuous changes in the delay.

## 6 Conclusion

This paper has presented a delay identification strategy that can be applied to delay compensation control schemes for stable/unstable MIMO systems. The main objectives are delay identification and ensuring closed-loop stability, which is usually difficult for unstable systems. The approach is formulated as an optimization problem and then framed within the generalized pattern search method, inheriting its convergence properties, which is a novelty in both control theory and mathematics. The optimization has been implemented online using a multiple-model scheme, which is also a novel implementation of pattern search methods.

Although the convergence results require technical conditions that seem difficult to meet, the generation of such reference signals can be accomplished easily in practice. Therefore, the simplification of the technical requirements on the input signal for the convergence results remains an open research question.

Finally, it is shown that the proposed approach is robustly stable even when the rational component of the system has a 6% error, which provides versatility and means it could be implemented in real systems. Moreover, simulation results showed that the identification is achieved with great precision. Additionally, simulation results were presented for a time-varying delay case, corroborating that the good results expected in practical situations require readjustment of the model time delays. In the authors' opinion, pattern search methods constitute a powerful optimization technique for control-oriented applications, and it can be extended in the future to combined parametric and delay identification.

## Declarations

### Acknowledgements

This work was partially supported by the Spanish Ministry of Economy and Competitiveness through grant DPI2012-30651, by the Basque Government (Gobierno Vasco) through grant IE378-10 and by the University of the Basque Country through grant UFI 11/07.

## Authors’ Affiliations

## References

- Dong Y, Wei J: Output feedback stabilization of nonlinear discrete-time systems with time-delay. *Adv. Differ. Equ.* 2012, 2012(73):1-11.
- Wei F, Cai Y: Existence, uniqueness and stability of the solution to neutral stochastic functional differential equations with infinite delay under non-Lipschitz conditions. *Adv. Differ. Equ.* 2013, 2013(51):1-12.
- Alcántara S, Pedret C, Vilanova R, Zhang WD: Simple analytical min-max model matching approach to robust proportional-integrative-derivative tuning with smooth set-point response. *Ind. Eng. Chem. Res.* 2010, 49(2):690-700.
- Lee Y, Lee J, Park S: PID controller tuning for integrating and unstable processes with time delay. *Chem. Eng. Sci.* 2000, 55:3481-3493.
- Ntogramatzidis L, Ferrante A: Exact tuning of PID controllers in control feedback design. *IET Control Theory Appl.* 2011, 5(4):565-578.
- Shi D, Wang J, Ma L: Design of balanced proportional-integral-derivative controllers based on bilevel optimisation. *IET Control Theory Appl.* 2011, 5(1):84-92.
- Zhang WD, Sun YX: Modified Smith predictor for controlling integrator/time delay processes. *Ind. Eng. Chem. Res.* 1996, 35:2769-2772.
- Majhi S, Atherton DP: Modified Smith predictor and controller for processes with time delay. *Control Theory Appl.* 1999, 146:359-366.
- Meng D, Jia Y, Du J, Yu F: Learning control for time-delay systems with iteration-varying uncertainty: a Smith predictor-based approach. *IET Control Theory Appl.* 2011, 4(12):2707-2718.
- Normey-Rico JE, Camacho EF: *Control of Dead-Time Processes*. Springer, Berlin; 2007.
- De Paor AM: A modified Smith predictor and controller for unstable processes with time delay. *Int. J. Control* 1985, 41(4):1025-1036.
- Herrera J, Ibeas A, Alcantara S, de la Sen M: Multimodel-based techniques for the identification and adaptive control of delayed multi-input multi-output systems. *IET Control Theory Appl.* 2011, 5(1):188-202.
- Garcia CA, Ibeas A, Herrera J, Vilanova R: Inventory control for the supply chain: an adaptive control approach based on the identification of the lead-time. *Omega* 2012, 40:314-327.
- Herrera J, Ibeas A, Alcantara S, Vilanova R: Multimodel-based techniques for the identification of the delay in MIMO systems. *Proceedings of the 2010 American Control Conference*, Marriott Waterfront, Baltimore, MD, USA, June 30-July 2, 2010.
- Herrera J, Ibeas A, Alcantara S, de la Sen M: Identification and control of integrative MIMO systems using pattern search algorithms: an application to irrigation channels. *Eng. Appl. Artif. Intell.* 2013, 26:334-346.
- Torczon V: On the convergence of pattern search algorithms. *SIAM J. Optim.* 1997, 7(1):1-25.
- Pan I, Das S, Gupta A: Tuning of an optimal fuzzy PID controller with stochastic algorithms for networked control systems with random time delay. *ISA Trans.* 2011, 50:21-27.
- Dong Y, Liu J: Exponential stabilization of uncertain nonlinear time-delay systems. *Adv. Differ. Equ.* 2012, 2012(180):1-15.
- Gouaisbaut F, Dambrine M, Richard JP: Robust control of delay systems: a sliding mode control design via LMI. *Syst. Control Lett.* 2002, 46:219-230.
- Sangapate P: New sufficient conditions for the asymptotic stability of discrete time-delay systems. *Adv. Differ. Equ.* 2012, 2012(28):1-8.
- Xiang M, Xiang Z: Reliable control of positive switched systems with time-varying delays. *Adv. Differ. Equ.* 2013, 2013(25):1-15.
- Liu T, Gao F: Closed-loop step response identification of integrating and unstable processes. *Chem. Eng. Sci.* 2010. doi:10.1016/j.ces.2010.01.013
- Bogani C, Gasparo MG, Papini A: Generalized pattern search methods for a class of nonsmooth optimization problems with structure. *J. Comput. Appl. Math.* 2009, 229:283-293.
- Liu L, Zhang X: Generalized pattern search methods for linearly equality constrained optimization problems. *Appl. Math. Comput.* 2006, 181:527-535.
- Jelali M: Estimation of valve stiction in control loops using separable least-squares and global search algorithms. *J. Process Control* 2008, 18:632-642.
- Negenborn RR, Leirens S, De Schutter B, Hellendoorn J: Supervisory nonlinear MPC for emergency voltage control using pattern search. *Control Eng. Pract.* 2009, 17:841-848.
- Ibeas A, de la Sen M: Artificial intelligence and graph theory tools for describing switched linear control systems. *Appl. Artif. Intell.* 2006, 20(9):703-741.
- Bernstein DS: *Matrix Mathematics*. Princeton University Press, Princeton; 2005.
- Marchetti G, Scali C, Lewin DR: Identification and control of open-loop unstable processes by relay methods. *Automatica* 2001, 37:2049-2055.
- Audet C, Dennis JE: Mesh adaptive direct search algorithms for constrained optimization. *SIAM J. Optim.* 2006, 17:188-217.

## Copyright

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.