Robust non-negative least mean square algorithm based on step-size scaler against impulsive noise

Conventional non-negative algorithms restrict the weight coefficient vector under non-negativity constraints to satisfy several inherent characteristics of a specific system. However, the presence of impulsive noise causes conventional non-negative algorithms to exhibit inferior performance. Against this background, a robust non-negative least mean square (R-NNLMS) algorithm based on a step-size scaler is proposed. The proposed algorithm uses the step-size scaler to avoid the influence of impulsive noise: for various outliers, the scaler adjusts the step size of the algorithm, thereby eliminating the large error caused by impulsive noise. Furthermore, to improve performance in sparse system identification, the inversely-proportional R-NNLMS (IP-RNNLMS) algorithm is proposed. Simulation results demonstrate that the R-NNLMS algorithm eliminates the influence of impulsive noise while showing a fast convergence rate and low steady-state error under other noises. In addition, the IP-RNNLMS algorithm has a faster convergence rate than the R-NNLMS algorithm under a sparse system.


Introduction
Adaptive algorithms are widely used in adaptive control, denoising, channel equalization, and system identification. The least mean square (LMS) and normalized LMS (NLMS) algorithms are extensively used because of their robust performance and low computational complexity. However, several problems in their implementation have been identified. The least mean absolute third (LMAT) algorithm, which is superior to LMS in noisy environments, was proposed, and robust normalized LMAT algorithms were further proposed to improve the filtering accuracy and the robustness of the LMAT algorithm [1]. To improve the signal model and the performance of adaptive algorithms, the bias-compensated concept has been introduced in recent studies. In comparison with traditional adaptive algorithm models, bias-compensated algorithms consider the influence of input and output noises simultaneously. Simulation results and performance analyses suggest that bias-compensated algorithms outperform traditional adaptive algorithms [2][3][4][5][6]. The family of kernel adaptive algorithms demonstrates excellent performance on online prediction and nonlinear problems [7,8].
Under some specific system characteristics and noise environments, the performance of traditional adaptive algorithms degrades, and their framework must be modified. Based on this idea, adaptive algorithms need additional constraints in some applications. In recent decades, the non-negativity-constrained problem has been developed [9,10]; these methods help construct the non-negativity-constrained framework, and the non-negative LMS (NNLMS) algorithm has been widely studied [11]. Two limitations of the NNLMS algorithm have been identified: vulnerability to large coefficient-update spread and unbalanced convergence rates. Thus, the reweighting NNLMS and logarithmic reweighting NNLMS (LR-NNLMS) algorithms were proposed, which largely solve the problems of large coefficient-update spread and unbalanced convergence rate under sparse system identification [12,13].
For the family of non-negativity algorithms, the most common problem in practical applications is impulsive measurement noise, which causes conventional non-negativity algorithms to perform poorly. In recent studies, many adaptive algorithms have been proposed to address this problem [14][15][16][17][18][19]. For example, a family of robust adaptive filtering algorithms based on a sigmoid cost, which embeds the conventional cost function into the sigmoid framework, can smooth out the large fluctuations caused by impulsive interference. In this paper, a robust NNLMS algorithm based on a step-size scaler under the non-negativity constraint is proposed. The proposed algorithm uses the step-size scaler to eliminate the large estimation error caused by impulsive noise, and the simulations demonstrate the remarkable performance of the method. To cope with the unbalanced convergence of the R-NNLMS algorithm under a sparse system, which is caused by the weight vector in the update term, the IP-RNNLMS algorithm using the inversely-proportional function is also proposed in this paper.
The remainder of this work is organized as follows. Section 2 presents the signal model and the derivation of the algorithms. Section 3 evaluates the performance of the algorithms in environments with non-impulsive and impulsive noise. Section 4 concludes this paper.

Signal model
Consider the following signal model. The input signal x_k = [x_1, x_2, . . . , x_L]^T is the time-delay vector of L taps, where the number of taps usually equals the dimension of the estimated parameter vector. The unknown system weight vector is w_0 = [w_1, w_2, . . . , w_L]^T, v_k is the system noise, and w(k) is the weight vector estimated by the adaptive algorithm at iteration k. As shown in Figure 1, the desired output signal of the unknown system is obtained by

d_k = w_0^T x_k + v_k,  (1)

where the output signal cannot be noise-free because it is corrupted by various noises. Common noises include Gaussian and impulsive noise; impulsive noise in particular causes the adaptive algorithm to be misadjusted. Thus, the purpose of the novel algorithm is to eliminate the influence of impulsive noise. The estimation error of the system is obtained by

e_k = d_k - w^T(k) x_k.  (2)
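As a concrete illustration, the signal model and the error computation can be sketched in Python (a minimal sketch; the tap number, noise level, and weight values are placeholders, not those used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

L = 10                                # number of taps
w0 = rng.uniform(0.0, 1.0, L)         # unknown system weight vector w_0
w = np.zeros(L)                       # adaptive estimate w(k)

x = rng.standard_normal(L)            # time-delay input vector x_k
v = 0.01 * rng.standard_normal()      # system noise v_k

d = w0 @ x + v                        # desired output d_k = w_0^T x_k + v_k
e = d - w @ x                         # estimation error e_k = d_k - w(k)^T x_k
```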

Review of the NNLMS algorithm
The conventional non-negative algorithms restrict the weight coefficient vector under non-negativity constraints to satisfy several inherent characteristics of a specific system. Therefore, the NNLMS algorithm is expressed as the optimization problem

w* = arg min_w J(w)  subject to  w_j >= 0,  j = 1, . . . , L,  (3)

where w* is the desired weight vector of the algorithm, J(w) is the cost function, and w_j is the jth element of the iteration weight vector. The Lagrange multiplier method is used to obtain the optimal weight vector of Formula (3). Thus, Formula (3) can be transformed into the Lagrangian

L(w, λ) = J(w) - λ^T w.  (4)

For the inequality term of Formula (3), the Karush-Kuhn-Tucker (KKT) condition at the optimum is

[λ*]_j w*_j = 0,  (5)

where λ is the Lagrange multiplier vector and [λ*]_j is the jth element of λ*. The first-order partial derivative of Eq. (4) should be considered to determine the optimal weight vector w* and the desired parameter vector λ*. Taking the first-order partial derivative of Eq. (4) and setting it to zero yields

[∇_w J(w*)]_j - [λ*]_j = 0.  (6)

Combining Eqs. (5) and (6) yields

w*_j [-∇_w J(w*)]_j = 0.  (7)

The form of the fixed-point iteration algorithm can be obtained as

w_j = w_j + f_j(w) w_j [-∇_w J(w)]_j,  (8)

where the function f_j(w(k)) is an arbitrary positive function of w(k). The iteration formula of the algorithm is then

w_j(k+1) = w_j(k) + μ f_j(w(k)) w_j(k) [-∇_w J(w(k))]_j,  (9)

where μ is a positive step size. In vector form, w(k+1) = w(k) + μ D_f(k) D_w(k) [-∇_w J(w(k))], where D_w(k) is a diagonal matrix whose diagonal elements equal the elements of the iteration vector w(k), and D_f(k) = diag(f_1(w(k)), . . . , f_L(w(k))). The cost function of the NNLMS algorithm is the mean square error

J(w) = E[e_k^2].  (10)

Taking f_j(w(k)) = 1 and replacing the gradient with its instantaneous estimate (the constant factor is absorbed into μ), Formula (9) can be rewritten as

w(k+1) = w(k) + μ e_k D_w(k) x_k.  (11)

Impulsive noise commonly has a high amplitude, which affects the value of the output signal and causes a large error. In Formula (11), an oversized e_k unbalances the algorithm during the iteration. To solve this problem, a cost function based on the tanh function is proposed in what follows.
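A minimal Python sketch of one NNLMS iteration follows; the element-wise product w(k) ∘ x_k implements the term D_w(k) x_k in Formula (11), and the toy system and step size are placeholder choices:

```python
import numpy as np

def nnlms_step(w, x, d, mu=0.05):
    """One NNLMS iteration: w(k+1) = w(k) + mu * e_k * D_w(k) x_k."""
    e = d - w @ x
    return w + mu * e * (w * x)       # w * x realizes D_w(k) x_k

# toy identification of a non-negative system (noise-free, placeholder values)
rng = np.random.default_rng(1)
w0 = np.array([0.8, 0.5, 0.2])
w = np.full(3, 0.5)                   # positive start keeps the iterates positive
for _ in range(2000):
    x = rng.standard_normal(3)
    w = nnlms_step(w, x, d=w0 @ x)
```

Note that the multiplicative form of the correction is what preserves non-negativity: a coefficient at zero receives a zero update.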

Proposed algorithms

The R-NNLMS algorithm
The presence of impulsive noise causes a large estimation error, which in turn causes performance degradation. The tanh function approaches a finite value when its argument is extremely large, so a cost function built on tanh can bound the effect of large estimation errors. The cost function J(w) is therefore transformed into the following form:

J(w) = (1/β) tanh( (β/2) E[e_k^2 / ||x_k||^2] ),  (12)

where β > 0 is the key shape-adjustment parameter that affects the performance of the cost function. When the estimation error is oversized, the cost function J(w) approaches 1/β, thereby bounding the influence of impulsive noise. Replacing the expectation with the instantaneous value, the cost function J(w) is

J(w) = (1/β) tanh( (β/2) e_k^2 / ||x_k||^2 ).  (13)

The first-order partial derivative of Eq. (13) can be expressed as

∂J(w)/∂w(k) = -s(β, e_k/||x_k||) e_k x_k / ||x_k||^2,  (14)

where

s(β, e_k/||x_k||) = sech^2( (β/2) e_k^2 / ||x_k||^2 ).  (15)

Combining Eq. (9) with Eqs. (14) and (15), the iteration formula is adjusted as follows:

w(k+1) = w(k) + (μ/||x_k||^2) s(β, e_k/||x_k||) e_k D_f(k) D_w(k) x_k.  (16)

We take the value of the function f_j(w(k)) as 1 in Eq. (16), i.e., D_f(k) = I. Thus, the iteration algorithm can be rewritten as

w(k+1) = w(k) + (μ/||x_k||^2) s(β, e_k/||x_k||) e_k D_x(k) w(k),  (17)

where D_x(k) is a diagonal matrix whose diagonal elements equal the elements of the input vector x_k, so that D_w(k) x_k = D_x(k) w(k). The factor s(β, e_k/||x_k||) is the step-size scaler that makes the algorithm robust against impulsive noise: when the estimation error is oversized, the scaled step size approaches zero; when the error is not an outlier, the scaler stays close to a finite value and does not affect the iteration.
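The update with the step-size scaler can be sketched as follows. Here the scaler is taken as s = sech^2(β e_k^2 / (2 ||x_k||^2)), one form consistent with a tanh-shaped cost; μ, β, and the toy data are illustrative assumptions:

```python
import numpy as np

def rnnlms_step(w, x, d, mu=0.05, beta=1.0):
    """One R-NNLMS iteration (sketch). The scaler s is close to 1 for
    ordinary errors and close to 0 for outliers, suppressing updates
    driven by impulsive noise."""
    e = d - w @ x
    xn2 = x @ x                            # ||x_k||^2
    u = beta * e**2 / (2.0 * xn2)
    s = 1.0 - np.tanh(u) ** 2              # sech^2(u), the step-size scaler
    return w + (mu / xn2) * s * e * (w * x)

w = np.array([0.5, 0.5])
x = np.array([1.0, 1.0])
w_imp = rnnlms_step(w, x, d=100.0)         # impulsive sample: s ~ 0, tiny update
w_reg = rnnlms_step(w, x, d=1.1)           # ordinary sample: normal update
```

The sech^2 term is computed via 1 - tanh^2 to avoid the overflow that np.cosh would produce for very large errors.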

The IP-RNNLMS algorithm
The correction term e_k D_x(k) w(k) keeps the non-negativity of the algorithm in Eq. (17). However, the weight vector w(k) in this term affects the convergence rate of the iteration: when a desired weight w*_j approaches zero, the convergence of the corresponding coefficient slows down or stalls, which causes the unbalanced convergence of the algorithm. Meanwhile, the values of the desired weight and the initial iteration weight introduce difficulties for step-size selection. To solve this problem, the inversely-proportional function is introduced into Eq. (17). As stated above, f_j(w(k)) is an arbitrary positive function of w(k). This paper uses the inversely-proportional function

f_j(w(k)) = 1 / (w_j(k) + ε),  (18)

where ε is a small positive constant, to replace the corresponding item in Eq. (16). Combining the weight w_j(k) with the function f_j(w(k)), the iteration formula (17) is rewritten as

w(k+1) = w(k) + (μ/||x_k||^2) s(β, e_k/||x_k||) e_k D_g(k) x_k,  (19)

where the diagonal elements of the diagonal matrix D_g(k) are of the following form:

[D_g(k)]_jj = w_j(k) / (w_j(k) + ε).  (20)

In the IP-RNNLMS algorithm, the unbalanced-convergence problem is solved well. Meanwhile, the non-negativity and the robustness of the algorithm are maintained under impulsive noise.
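A sketch of the IP-RNNLMS update, assuming the inversely-proportional function takes the form f_j(w) = 1/(w_j + ε) with a small ε > 0 (the exact functional form, ε, μ, and β here are illustrative assumptions):

```python
import numpy as np

def ip_rnnlms_step(w, x, d, mu=0.5, beta=1.0, eps=0.01):
    """One IP-RNNLMS iteration (sketch). Combining w_j with the assumed
    inversely-proportional factor 1/(w_j + eps) gives the diagonal gain
    g_j = w_j / (w_j + eps), which balances convergence across
    coefficients of different magnitudes while staying non-negative."""
    e = d - w @ x
    xn2 = x @ x
    s = 1.0 - np.tanh(beta * e**2 / (2.0 * xn2)) ** 2   # step-size scaler
    g = w / (w + eps)                  # diagonal elements of D_g(k)
    return w + (mu / xn2) * s * e * g * x

# toy run on a sparse-ish non-negative system (noise-free, placeholder values)
rng = np.random.default_rng(2)
w0 = np.array([0.8, 0.1, 0.4])
w = np.full(3, 0.5)
for _ in range(2000):
    x = rng.standard_normal(3)
    w = ip_rnnlms_step(w, x, d=w0 @ x)
```

Unlike the plain correction w_j x_j, the gain g_j saturates near 1 for large coefficients, so small and large coefficients converge at comparable rates.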

Simulation
The performance of the proposed algorithms is validated by system identification simulations under various noises. The proposed algorithms are compared with other algorithms under the non-negativity condition. The estimated error is used to evaluate the performance of the algorithms and is defined as follows:

err(k) = 10 log10( ||w(k) - w_0||^2 / ||w_0||^2 ) dB.  (21)

The system input is randomly generated, and the tap number L is 10. The system parameter vector is set to w_0 = [0.8, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, -0.1, -0.3, -0.6]^T in the simulation; in addition, the initial iteration weight is drawn from a uniform distribution with unit power.
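The experimental setup and the estimated-error measure can be reproduced along these lines (a sketch; the normalized weight-error norm in dB is assumed as the estimated-error definition, and the uniform initialization is a plausible reading of "unit power"):

```python
import numpy as np

rng = np.random.default_rng(42)

L = 10
w0 = np.array([0.8, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, -0.1, -0.3, -0.6])
w_init = rng.uniform(0.0, 1.0, L)     # uniform initial weights (assumed range)

def estimated_error_db(w, w0):
    """Normalized estimation error: 10*log10(||w - w0||^2 / ||w0||^2)."""
    return 10.0 * np.log10(np.sum((w - w0) ** 2) / np.sum(w0 ** 2))
```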
The experiments are conducted to validate the improvement of the R-NNLMS algorithm. The experiments need to verify the robustness of the R-NNLMS algorithm under different noises. Hence, the performance experiment was divided into two parts. The rest of this section compares the performance of the R-NNLMS and the IP-RNNLMS algorithms under two types of noise.

Performance in non-impulsive noise
The estimated errors of the R-NNLMS and NNLMS algorithms under non-impulsive noise are calculated to assess the performance of the algorithm. Figure 2 displays the estimated-error curves under Gaussian noise with SNR = 10 dB. Figure 3 illustrates the performance of the algorithms under uniformly distributed noise whose amplitude lies between ±1. The signal-to-noise ratio (SNR) of the output signal is calculated as follows:

SNR = 10 log10( σ_y^2 / σ_v^2 ),  (22)

where σ_y^2 and σ_v^2 are the powers of the noise-free output signal and the noise, respectively. The two experiments evaluate the performance of the R-NNLMS algorithm under non-impulsive noise. Figures 2 and 3 show that the R-NNLMS algorithm has a fast convergence rate and low steady-state error; the two curves almost coincide under the non-negativity condition. This means that the R-NNLMS algorithm performs well under non-impulsive noise.
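The SNR of the output signal, understood as the ratio of the noise-free output power to the noise power, can be computed as in this short sketch:

```python
import numpy as np

def snr_db(y, v):
    """SNR = 10*log10(sigma_y^2 / sigma_v^2) for output y and noise v."""
    return 10.0 * np.log10(np.var(y) / np.var(v))

# e.g. a signal with 10x the noise power gives 10 dB
y = np.sqrt(10.0) * np.array([-1.0, 1.0, -1.0, 1.0])
v = np.array([-1.0, 1.0, -1.0, 1.0])
# snr_db(y, v) -> 10.0
```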

Performance in impulsive noise
The impulsive noise g_k is added to the system output signal y_k together with a background noise of SNR = 10 dB. g_k is generated as g_k = a_k u_k, where a_k is a Bernoulli (binomial) process with impulse-occurrence probability Pr(a_k = 1) = 0.1, and u_k is zero-mean white Gaussian noise with variance σ_u^2 = 1000 σ_y^2.
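The impulsive noise described above can be generated as a Bernoulli-Gaussian process (a sketch; the helper name and the rng handling are my own):

```python
import numpy as np

def impulsive_noise(n, p=0.1, sigma2_y=1.0, rng=None):
    """Bernoulli-Gaussian impulsive noise g_k = a_k * u_k:
    a_k is 1 with probability p, and u_k is zero-mean white
    Gaussian noise with variance 1000 * sigma2_y."""
    rng = np.random.default_rng() if rng is None else rng
    a = rng.random(n) < p                       # Bernoulli occurrence process
    u = rng.normal(0.0, np.sqrt(1000.0 * sigma2_y), n)
    return a * u
```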

Performance comparison
This section validates the robustness of the IP-RNNLMS algorithm under a sparse system.
The experiment is conducted under different noises, and the system input and impulsive-noise settings are consistent with the previous experiments. As analyzed above, the IP-RNNLMS algorithm aims to solve the problem of unbalanced convergence and to improve estimation accuracy. Figures 6 and 7 illustrate the performance curves of the IP-RNNLMS algorithm under Gaussian noise and under impulsive noise with occurrence probability 0.5, respectively. They show that the IP-RNNLMS algorithm has an obviously faster convergence rate under different noises. Meanwhile, the IP-RNNLMS algorithm can still eliminate the influence of impulsive noise.

Conclusion
In this work, the R-NNLMS algorithm based on a step-size scaler is proposed under non-negativity constraints. The step-size scaler removes the influence of impulsive noise. Simulation results show the effectiveness of the R-NNLMS algorithm under impulsive noise, and the algorithm also performs well under non-impulsive noise. Meanwhile, the IP-RNNLMS algorithm solves the unbalanced-convergence problem well in sparse system identification. The performance analysis of the proposed algorithms and research on practical applications will be carried out in future studies.