Robust non-negative least mean square algorithm based on step-size scaler against impulsive noise

Abstract

Conventional non-negative algorithms restrict the weight coefficient vector under non-negativity constraints to satisfy several inherent characteristics of a specific system. However, the presence of impulsive noise causes conventional non-negative algorithms to exhibit inferior performance. Against this background, a robust non-negative least mean square (R-NNLMS) algorithm based on a step-size scaler is proposed. The proposed algorithm uses a step-size scaler to avoid the influence of impulsive noise: for various outliers, the scaler adjusts the step size of the algorithm, thereby eliminating the large errors caused by impulsive noise. Furthermore, to improve performance in sparse system identification, the inversely-proportional R-NNLMS (IP-RNNLMS) algorithm is proposed. Simulation results demonstrate that the R-NNLMS algorithm eliminates the influence of impulsive noise while showing a fast convergence rate and low steady-state error under other noises. In addition, the IP-RNNLMS algorithm has a faster convergence rate than the R-NNLMS algorithm for sparse systems.

1 Introduction

Adaptive algorithms are widely used in adaptive control, denoising, channel equalization, and system identification. The least mean square (LMS) and normalized LMS (NLMS) algorithms are extensively used because of their robust performance and low computational complexity. However, several problems in their implementation have been identified. The least mean absolute third (LMAT) algorithm was proposed as being superior to LMS in noisy environments, and robust normalized LMAT algorithms were subsequently proposed to improve its filtering accuracy and robustness [1]. To improve the signal model and the performance of adaptive algorithms, the bias-compensated concept has been introduced in recent studies. In comparison with traditional adaptive algorithm models, bias-compensated algorithms consider the influence of input and output noises simultaneously; simulation results and performance analyses suggest that they outperform traditional adaptive algorithms [2–6]. The family of kernel adaptive algorithms demonstrates excellent performance on online prediction and nonlinear problems [7, 8].

Under some specific system characteristics and noise environments, the performance of traditional adaptive algorithms degrades, and the algorithm framework must be modified. Based on this idea, adaptive algorithms need different constraints in some applications. In recent decades, the non-negativity-constrained problem has been studied [9, 10], and these methods help construct the non-negativity-constraint framework. The non-negative LMS (NNLMS) algorithm has been widely studied [11]. Two limitations of the NNLMS algorithm have been identified: a large coefficient-update spread and unbalanced convergence rates. Thus, the reweighted NNLMS and logarithmic reweighting NNLMS (LR-NNLMS) algorithms were proposed, which solved the large coefficient-update spread and the unbalanced convergence rate in sparse system identification [12, 13].

For the family of non-negative algorithms, the most common problem in practical applications is the existence of impulsive measurement noise, which causes conventional non-negative algorithms to perform poorly. In recent studies, many adaptive algorithms have been proposed to address this problem [14–19]. For example, a family of robust adaptive filtering algorithms based on a sigmoid cost, which embeds the conventional cost function into the sigmoid framework, can smooth out the large fluctuations caused by impulsive interference. In this paper, a robust NNLMS algorithm based on a step-size scaler under the non-negativity constraint is proposed. The proposed algorithm uses the step-size scaler to eliminate the large estimation errors caused by impulsive noise, and simulations demonstrate the remarkable performance of the method. To cope with the unbalanced convergence of the R-NNLMS algorithm for sparse systems, which is caused by the weight vector in the update term, the IP-RNNLMS algorithm based on an inversely-proportional function is also proposed.

The remainder of this work is organized as follows. Section 2 demonstrates the signal model and the derivation of the algorithm. Section 3 presents the performance of the algorithm in environments with non-impulsive noise and impulsive noise. Section 4 concludes this paper.

2 Signal model and the NNLMS algorithm

2.1 Signal model

Consider the following signal model. The input signal \({{\mathrm{x}}_{k}} = {[{x_{1}},{x_{2}}, \ldots,{x_{L}}]^{T}}\) is the time-delay vector of L taps, where the number of taps usually equals the dimension of the estimated parameter vector. The unknown system weight vector is represented by \({{\mathrm{w}}_{0}} = {[{w_{1}},{w_{2}}, \ldots ,{w_{L}}]^{T}}\), \({v_{k}}\) is the system noise, and \({\mathrm{w}}(k)\) is the adaptive algorithm's estimate at iteration k.

Figure 1 shows the signal model. The desired output of the unknown system is obtained by

$$\begin{aligned} {d_{k}} = {{\mathrm{x}}_{k}}^{T}{{ \mathrm{w}}_{0}} + {v_{k}}, \end{aligned}$$
(1)

where the output signal is not noise-free because it is corrupted by various noises, most commonly Gaussian and impulsive noise. Impulsive noise causes the adaptive algorithm to be misadjusted; thus, the purpose of the proposed algorithm is to eliminate its influence.

Figure 1: Adaptive algorithm signal model

The error of the system is obtained by

$$\begin{aligned} {e_{k}} = {d_{k}} - {{ \mathrm{x}}_{k}}^{T}{ \mathrm{w}}(k). \end{aligned}$$
(2)
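To make the model concrete, the following is a minimal Python sketch of Eqs. (1) and (2); the tap length, the noise level, and all variable names are illustrative assumptions rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 10                                # number of taps (illustrative)
w0 = rng.uniform(0.0, 1.0, L)         # unknown non-negative system weights (assumed)

def desired_output(x_k, v_k):
    """Desired output d_k = x_k^T w0 + v_k, Eq. (1)."""
    return x_k @ w0 + v_k

def estimation_error(d_k, x_k, w_hat):
    """Estimation error e_k = d_k - x_k^T w(k), Eq. (2)."""
    return d_k - x_k @ w_hat
```

The later sketches in this paper reuse the `np` and `rng` names defined here.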

2.2 Review of the NNLMS algorithm

Conventional non-negative algorithms restrict the weight coefficient vector under non-negativity constraints to satisfy several inherent characteristics of a specific system. Therefore, the NNLMS algorithm is expressed as the following optimization problem to obtain non-negative weights:

$$\begin{aligned} \begin{gathered} {{\mathrm{w}}^{*}} = \arg\min J({\mathrm{w}}) \\ \quad\text{subject to }{w_{j}} \ge0,\quad j = 1,2, \ldots,L ,\end{gathered} \end{aligned}$$
(3)

where \({{\mathrm{w}}^{*}}\) is the desired weight vector of the algorithm, and \({w_{j}}\) is the jth element of the iteration weight vector.

The Lagrange multiplier method is used to obtain the optimal weight vector of Formula (3). Thus, Formula (3) can be transformed into the following equation:

$$\begin{aligned} J({\mathrm{w}},\lambda) = J({\mathrm{w}}) - {\lambda^{T}} {\mathrm{w}} . \end{aligned}$$
(4)

For the inequality constraints of Formula (3), the Karush–Kuhn–Tucker (KKT) condition is considered as follows:

$$\begin{aligned} w_{j}^{*}{ \bigl[{\lambda^{*}} \bigr]_{j}} = 0,\quad j = 1,2, \ldots,L , \end{aligned}$$
(5)

where λ is the Lagrange multiplier vector, and \({[{\lambda^{*}}]_{j}}\) is the jth element of the optimal multiplier \({\lambda^{*}}\).

The first-order partial derivative of Eq. (4) should be considered to determine the optimal weight vector \({{\mathrm{w}}^{*}}\) and the desired parameter vector \({\lambda^{*}}\). Taking the first-order partial derivative of Eq. (4) yields

$$\begin{aligned} \begin{gathered} {\partial_{\mathrm{w}}}J({ \mathrm{w}},\lambda) = {\partial_{\mathrm {w}}}J({\mathrm{w}}) - \lambda, \\ {\partial_{\mathrm{w}}}J\bigl({{\mathrm{w}}^{*}}\bigr) - {\lambda^{*}} = 0. \end{gathered} \end{aligned}$$
(6)

Combining Eqs. (5) and (6) yields

$$\begin{aligned} w_{j}^{*}{ \bigl[{\partial_{\mathrm{w}}}J \bigl({{ \mathrm{w}}^{*}} \bigr) \bigr]_{j}} = 0. \end{aligned}$$
(7)

The form of the fixed-point iteration algorithm can be obtained as follows:

$$\begin{aligned} {w_{j}}(k + 1) = {w_{j}}(k) + \mu{f_{j}} \bigl({\mathrm{w}}(k) \bigr){w_{j}}(k){ \bigl[ - { \partial_{\mathrm{w}}}J \bigl({\mathrm{w}}(k) \bigr) \bigr]_{j}}, \end{aligned}$$
(8)

where the function \({f_{j}}({\mathrm{w}}(k))\) is an arbitrary positive function of \({\mathrm{w}}(k)\). The iteration formula of the algorithm can be obtained by

$$\begin{aligned} {\mathrm{w}}(k + 1) = {\mathrm{w}}(k) + \mu f \bigl({\mathrm {w}}(k) \bigr){D_{\mathrm{w}}}(k) \bigl[ - {\partial_{\mathrm{w}}}J \bigl({ \mathrm {w}}(k) \bigr) \bigr], \end{aligned}$$
(9)

where μ is a positive step size, and \({D_{\mathrm{w}}}(k)\) is a diagonal matrix whose diagonal elements are the elements of the iteration vector \({\mathrm{w}}(k)\).

The cost function of the NNLMS algorithm is expressed as follows:

$$\begin{aligned} J({\mathrm{w}}) = E \bigl[{ \bigl\vert {{d_{k}} - {{ \mathrm{x}}_{k}}^{T}{\mathrm{w}}(k)} \bigr\vert ^{2}} \bigr]. \end{aligned}$$
(10)

Formula (9) can be rewritten as follows:

$$\begin{aligned} {\mathrm{w}}(k + 1) = {\mathrm{w}}(k) + \mu{e_{k}} {D_{\mathrm {x}}}(k){\mathrm{w}}(k). \end{aligned}$$
(11)

Impulsive noise commonly has high amplitude, which affects the value of the output signal and causes a large error. In Formula (11), such a large \({e_{k}}\) unbalances the algorithm during the iteration. To solve this problem, a cost function based on the tanh function is proposed in what follows.
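As a reference point, here is a minimal sketch of the NNLMS update of Eq. (11), continuing the sketch above; the function name and step size are illustrative.

```python
def nnlms_update(w, x_k, d_k, mu):
    """One NNLMS iteration, Eq. (11): w(k+1) = w(k) + mu * e_k * D_x(k) w(k).

    D_x(k) w(k) equals the elementwise product x_k * w(k); for a small
    enough step size it keeps an initially non-negative w non-negative.
    """
    e_k = d_k - x_k @ w                  # error of Eq. (2)
    return w + mu * e_k * (x_k * w)      # elementwise form of D_x(k) w(k)
```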

3 Proposed algorithms

3.1 The R-NNLMS algorithm

The presence of impulsive noise causes large estimation errors and hence performance degradation. The tanh function approaches a finite value when the magnitude of its argument is extremely large, so using it in the cost function \(J({\mathrm{w}})\) bounds the contribution of large estimation errors. The cost function \(J({\mathrm{w}})\) is transformed into the following form:

$$\begin{aligned} J({\mathrm{w}}) = \frac{1 }{\beta}\tanh \biggl( \frac{\beta }{ 2}{ \bigl({e_{k}}/ \Vert {{ \mathrm{x}}_{k}} \Vert \bigr)^{2}} \biggr), \end{aligned}$$
(12)

where \(\beta > 0\) is the key shape-adjustment parameter that affects the behavior of the cost function. When the estimation error is oversized, the cost function \(J({\mathrm{w}})\) approaches \(\frac{1 }{\beta}\), thereby eliminating the influence of impulsive noise. Written out, the cost function \(J({\mathrm{w}})\) is

$$\begin{aligned} J({\mathrm{w}}) =& \frac{1}{\beta}\tanh \biggl( \frac{\beta }{2}{ \bigl({e_{k}}/ \Vert {{\mathrm{x}}_{k}} \Vert \bigr)^{2}} \biggr) \\ =& \frac{1}{\beta}\frac{{1 - \exp( - \beta{{({e_{k}}/ \Vert {{\mathrm {x}}_{k}} \Vert )}^{2}})}}{{1 + \exp( - \beta{{({e_{k}}/ \Vert {{\mathrm {x}}_{k}} \Vert )}^{2}})}}. \end{aligned}$$
(13)

Taking the first-order partial derivative of Eq. (13) with respect to w yields

$$\begin{aligned} {\partial_{\mathrm{w}}}J({\mathrm{w}}) = - s \bigl( \beta,{e_{k}}/ \Vert {{\mathrm {x}}_{k}} \Vert \bigr) \frac{{{{\mathrm{x}}_{k}}} }{{ \Vert {{\mathrm {x}}_{k}} \Vert ^{2}}}{e_{k}}, \end{aligned}$$
(14)

where

$$\begin{aligned} s \bigl(\beta,{e_{k}}/ \Vert {{ \mathrm{x}}_{k}} \Vert \bigr) = \frac{{4\exp( - \beta {{({e_{k}}/ \Vert {{\mathrm{x}}_{k}} \Vert )}^{2}})} }{{{{(1 + \exp( - \beta{{({e_{k}}/ \Vert {{\mathrm{x}}_{k}} \Vert )}^{2}}))}^{2}}}}. \end{aligned}$$
(15)
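A direct Python transcription of the scaler of Eq. (15) follows, continuing the sketch above; the function name is an illustrative assumption.

```python
def step_size_scaler(beta, e_norm):
    """Step-size scaler s(beta, e_k/||x_k||) of Eq. (15).

    Equals 1 at e_norm = 0 and decays toward 0 as |e_norm| grows,
    so outlier errors barely move the weights.
    """
    t = np.exp(-beta * e_norm ** 2)
    return 4.0 * t / (1.0 + t) ** 2
```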

Combining Eq. (9) with Eqs. (14) and (15), the iteration formula is adjusted as follows:

$$\begin{aligned} {\mathrm{w}}(k + 1) =& {\mathrm{w}}(k) + \mu s \bigl( \beta,{e_{k}}/ \Vert {{\mathrm {x}}_{k}} \Vert \bigr)f \bigl({\mathrm{w}}(k) \bigr){D_{\mathrm{w}}}(k)\frac {{{{\mathrm{x}}_{k}}}}{{ \Vert {{\mathrm{x}}_{\mathrm {k}}} \Vert ^{2}}}{e_{k}} \\ =& {\mathrm{w}}(k) + \mu s \bigl(\beta,{e_{k}}/ \Vert {{ \mathrm{x}}_{\mathrm {k}}} \Vert \bigr)f \bigl({\mathrm{w}}(k) \bigr){D_{\mathrm{x}}}(k)\frac{{{\mathrm {w}}(k)}}{{ \Vert {{\mathrm{x}}_{k}} \Vert ^{2}}}{e_{k}}. \end{aligned}$$
(16)

We take \(f({\mathrm{w}}(k)) = 1\) in Eq. (16). Thus, the iteration can be rewritten as follows:

$$\begin{aligned} {\mathrm{w}}(k + 1) = {\mathrm{w}}(k) + \mu s \bigl( \beta,{e_{k}}/ \Vert {{\mathrm {x}}_{k}} \Vert \bigr) \frac{{{e_{k}}} }{{ \Vert {{\mathrm{x}}_{\mathrm {k}}} \Vert ^{2}}}{D_{\mathrm{x}}}(k){\mathrm{w}}(k), \end{aligned}$$
(17)

where \({D_{\mathrm{x}}}(k)\) is a diagonal matrix whose diagonal elements equal the elements of the input signal \({{\mathrm{x}}_{k}}\), and \(s(\beta,{e_{k}}/\|{{\mathrm{x}}_{k}}\|)\) is the step-size scaler that makes the algorithm robust against impulsive noise. When the estimation error is oversized, the effective step size approaches zero; conversely, when the estimation error is not an outlier, the scaler stays at a finite value and does not hinder the iteration.
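Putting the pieces together, a minimal sketch of the R-NNLMS update of Eq. (17) is shown below, reusing the step_size_scaler defined above; the function name and parameter choices are illustrative.

```python
def rnnlms_update(w, x_k, d_k, mu, beta):
    """One R-NNLMS iteration, Eq. (17)."""
    e_k = d_k - x_k @ w
    x_norm = np.linalg.norm(x_k)
    s = step_size_scaler(beta, e_k / x_norm)           # ~0 for outliers
    return w + mu * s * e_k / x_norm ** 2 * (x_k * w)  # D_x(k) w(k) elementwise
```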

3.2 The IP-RNNLMS algorithm

The correction term \({e_{k}}{D_{\mathrm{x}}}(k){\mathrm{w}}(k)\) preserves the non-negativity of the algorithm in Eq. (17). However, the presence of the weight \({\mathrm{w}}(k)\) in this term affects the convergence rate of the iteration: when an element of the desired weight \({{\mathrm{w}}^{*}}\) approaches zero, its convergence slows down or stalls, which causes unbalanced convergence of the algorithm. Meanwhile, the values of the desired weight and the initial iteration weight complicate the choice of step size. To solve this problem, the inversely-proportional function is introduced into Eq. (17). As noted above, \({f_{j}}({\mathrm{w}}(k))\) is an arbitrary positive function of \({\mathrm{w}}(k)\); replacing the corresponding term in Eq. (16) with the inversely-proportional function gives

$$\begin{aligned} {f_{j}} \bigl({\mathrm{w}}(k) \bigr) = \frac{1 }{{ \vert {{w_{j}}(k)} \vert + \eta }}. \end{aligned}$$
(18)

Combining the weight \({\mathrm{w}}(k)\) with the function \(f({\mathrm{w}}(k))\), the iteration formula (17) is rewritten as

$$\begin{aligned} {\mathrm{w}}(k + 1) = {\mathrm{w}}(k) + \mu s \bigl( \beta,{e_{k}}/ \Vert {{\mathrm {x}}_{k}} \Vert \bigr) \frac{{{e_{k}}} }{{\|{{\mathrm{x}}_{\mathrm {k}}}\|^{2}}}{D_{g}}(k){{\mathrm{x}}_{k}}, \end{aligned}$$
(19)

where the diagonal elements of the diagonal matrix \({D_{g}}(k)\) are of the following form:

$$\begin{aligned} {g_{j}} \bigl({\mathrm{w}}(k) \bigr) = \frac{{{w_{j}}(k)} }{{ \vert {{w_{j}}(k)} \vert + \eta}}. \end{aligned}$$
(20)

With the IP-RNNLMS algorithm, the unbalanced convergence problem is solved well, while the non-negativity and the robustness of the algorithm under impulsive noise are maintained.
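The following sketch implements the IP-RNNLMS update of Eqs. (19) and (20), continuing the sketch above; the default eta and the function names are illustrative assumptions.

```python
def ip_rnnlms_update(w, x_k, d_k, mu, beta, eta=1e-2):
    """One IP-RNNLMS iteration, Eqs. (19)-(20)."""
    e_k = d_k - x_k @ w
    x_norm = np.linalg.norm(x_k)
    s = step_size_scaler(beta, e_k / x_norm)
    g = w / (np.abs(w) + eta)            # diagonal of D_g(k), Eq. (20)
    return w + mu * s * e_k / x_norm ** 2 * (g * x_k)
```

Note that \(g_{j}\) approaches \(\operatorname{sign}(w_{j})\) for weights far from zero and behaves like \(w_{j}/\eta\) near zero, so small coefficients are no longer frozen by the multiplicative \({\mathrm{w}}(k)\) factor of Eq. (17).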

4 Simulation

The performance of the proposed algorithms is validated by system identification simulations under various noises, and the proposed algorithms are compared with other algorithms under the non-negativity condition. The normalized estimated error (in dB), defined as follows, is used to evaluate performance:

$$\begin{aligned} 10{\log_{10}} \bigl(E \bigl[ \bigl\Vert {\mathrm{w}(k) - {{ \mathrm{w}}^{*}}} \bigr\Vert _{2}^{2} \bigr]/ \bigl\Vert {{{\mathrm{w}}^{*}}} \bigr\Vert _{2}^{2} \bigr). \end{aligned}$$
(21)

The system input is randomly generated, and the tap number L is 10. The system parameter vector was set to \({{\mathrm{w}}_{0}} = {[0.8,0.6,0.5,0.4,0.3,0.2,0.1, - 0.1, - 0.3, - 0.6]^{T}}\) in simulation; in addition, the initial iteration weight is drawn from a uniform distribution with unit power.
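A sketch of the performance metric of Eq. (21) and the stated system vector follows, continuing the sketch above; averaging over independent runs approximates the expectation.

```python
w_true = np.array([0.8, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, -0.1, -0.3, -0.6])

def estimated_error_db(w_hat, w_star):
    """Normalized estimated error of Eq. (21), in dB; in practice the squared
    deviation is averaged over independent runs to estimate E[.]."""
    return 10 * np.log10(np.sum((w_hat - w_star) ** 2) / np.sum(w_star ** 2))
```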

Experiments are conducted to validate the improvements of the R-NNLMS algorithm and to verify its robustness under different noises. Hence, the performance evaluation is divided into two parts: the rest of this section compares the performance of the R-NNLMS and IP-RNNLMS algorithms under two types of noise.

4.1 Performance in non-impulsive noise

The estimated errors of the R-NNLMS and NNLMS algorithms under non-impulsive noise were calculated to assess the performance of the algorithm. Figure 2 displays the estimated-error curves with Gaussian noise at SNR = 10 dB. Figure 3 illustrates the performance under uniformly distributed noise with amplitude between ±1. The signal-to-noise ratio (SNR) of the output signal is calculated as follows:

$$\begin{aligned} \mathrm{SNR} = 10{\log_{10}} \biggl(\frac{{E(y_{k}^{2})} }{{E(v_{k}^{2})}} \biggr), \end{aligned}$$
(22)

where \({y_{k}}\) is the noise-free system output.
Figure 2: Estimated error of the NNLMS and R-NNLMS algorithms under non-impulsive noise with background noise SNR = 10 dB

Figure 3: Estimated error of the NNLMS and R-NNLMS algorithms under non-impulsive, uniformly distributed background noise between ±1

These two experiments evaluate the performance of the R-NNLMS algorithm under non-impulsive noise. Figures 2 and 3 show that the R-NNLMS algorithm has a fast convergence rate and low steady-state error; the two curves almost coincide under the non-negativity condition, which means that the R-NNLMS algorithm performs well under non-impulsive noise.

4.2 Performance in impulsive noise

The impulsive noise \({g_{k}}\) is added to the system output signal \({y_{k}}\) with a background noise of SNR = 10 dB. \({g_{k}}\) is generated as \({g_{k}} = {a_{k}}{u_{k}}\), where \({a_{k}}\) follows a Bernoulli distribution with \({\operatorname{Pr}({w} = 0) = 0.1}\) (the probability of \(w = 0\) is 0.1), and \({u_{k}}\) is zero-mean white Gaussian noise with variance \(\sigma_{u}^{2} = 1000\sigma_{y}^{2}\).
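A sketch of this noise generator under the stated distribution follows, continuing the sketch above; interpreting Pr(w = 0) as the probability that \(a_{k}\) is zero is an assumption, as are the names.

```python
def impulsive_noise(n, sigma_y2, p_zero=0.1, rng=rng):
    """g_k = a_k * u_k: a_k is 0 with probability p_zero and 1 otherwise
    (an assumed reading of the stated Pr(w = 0)); u_k ~ N(0, 1000 * sigma_y^2)."""
    a = (rng.random(n) >= p_zero).astype(float)
    u = rng.normal(0.0, np.sqrt(1000.0 * sigma_y2), size=n)
    return a * u
```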

In this section, the performance of the R-NNLMS algorithm is investigated under impulsive noise. Figures 4 and 5 illustrate the performance comparison among the R-NNLMS, NNLMS, sigmoid least mean square (SLMS), and generalized maximum correntropy criterion (GMCC) algorithms. They show that the R-NNLMS algorithm obtains a lower estimated error than the other algorithms. The NNLMS algorithm fails to adapt properly under impulsive noise, and the SLMS and GMCC algorithms converge but with high steady-state error under the non-negativity constraint. The R-NNLMS algorithm eliminates the influence of impulsive noise.

Figure 4: Estimated error of the R-NNLMS and other algorithms under additional impulsive noise (\(\operatorname{Pr}({w} = 0) = 0.1\)) with background noise of SNR = 10 dB

Figure 5: Estimated error of the R-NNLMS and other algorithms under additional impulsive noise (\(\operatorname{Pr}({w} = 0) = 0.5\)) with background noise of SNR = 10 dB

4.3 Performance comparison

This section validates the robustness of the IP-RNNLMS algorithm on a sparse system. The experiment is conducted under different noises, and the system input and impulsive-noise settings are consistent with the previous experiments. With Gaussian output noise at SNR = 10 dB, the unknown 30-coefficient sparse system is

$$\begin{aligned} { [ {{{\mathrm{w}}_{0}}} ]_{j}} = \left \{ { \textstyle\begin{array}{l@{\quad}l} {1 - 0.05j,}&{j = 1, \ldots,20},\\ 0,&{j = 21, \ldots,25},\\ { - 0.01(j - 25),}&{ j = 26, \ldots,30.} \end{array}\displaystyle } \right . \end{aligned}$$
(23)
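The following sketch builds the coefficient vector of Eq. (23), continuing the sketch above; the function name is illustrative.

```python
def sparse_system():
    """Unknown 30-coefficient sparse system of Eq. (23)."""
    j = np.arange(1, 31)
    w0 = np.zeros(30)
    w0[:20] = 1 - 0.05 * j[:20]          # j = 1, ..., 20
    # j = 21, ..., 25 remain zero
    w0[25:] = -0.01 * (j[25:] - 25)      # j = 26, ..., 30
    return w0
```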

As analyzed above, the IP-RNNLMS algorithm is designed to solve the problem of unbalanced convergence and to improve estimation accuracy. Figures 6 and 7 illustrate the performance curves of the IP-RNNLMS algorithm under Gaussian noise and under impulsive noise with \(\operatorname{Pr}({w} = 0) = 0.5\), respectively. They show that the IP-RNNLMS algorithm has a clearly faster convergence rate under both noises, while it can still eliminate the influence of impulsive noise.

Figure 6: Estimated error of the IP-RNNLMS and R-NNLMS algorithms at SNR = 10 dB

Figure 7: Estimated error of the IP-RNNLMS and R-NNLMS algorithms under impulsive noise (\(\operatorname{Pr}(w = 0) = 0.5\))

5 Conclusion

In this work, the R-NNLMS algorithm based on a step-size scaler is proposed under non-negativity constraints. The influence of impulsive noise is removed by the step-size scaler, and simulation results show the effectiveness of the R-NNLMS algorithm under impulsive noise; the algorithm also performs well under non-impulsive noise. Meanwhile, the IP-RNNLMS algorithm solves the problem of unbalanced convergence in sparse system identification. A performance analysis of the proposed algorithms and research on practical applications will be carried out in future studies.

References

1. Xiong, K., Wang, S., Chen, B.: Robust normalized least mean absolute third algorithms. IEEE Access 7, 10318–10330 (2019)

2. Zheng, Z., Liu, Z., Zhao, H.: Bias-compensated normalized least-mean fourth algorithm for noisy input. Circuits Syst. Signal Process. 36, 3864–3873 (2017)

3. Kang, B., Yoo, J., Park, P.: Bias-compensated normalised LMS algorithm with noisy input. Electron. Lett. 49, 538–539 (2013)

4. Wang, W., Zhao, H., Lu, L., Yi, Y.: Bias-compensated constrained least mean square adaptive filter algorithm for noisy input and its performance analysis. Digit. Signal Process. 84, 26–37 (2019)

5. Jung, S.M., Park, P.G.: Normalised least-mean-square algorithm for adaptive filtering of impulsive measurement noises and noisy inputs. Electron. Lett. 49, 1270–1271 (2013)

6. Jung, S.M., Park, P.G.: Stabilization of a bias-compensated normalized least-mean-square algorithm for noisy inputs. IEEE Trans. Signal Process. 65, 2949–2961 (2017)

7. Liu, L., Xu, Y., Yang, J., Jiang, S.: A polarized random Fourier feature kernel least-mean-square algorithm. IEEE Access 7, 50833–50838 (2019)

8. Liu, L., Sun, C., Jiang, S.: A reduced Gaussian kernel least-mean-square algorithm for nonlinear adaptive signal processing. Circuits Syst. Signal Process. 38, 371–394 (2019)

9. Bro, R., Jong, S.D.: A fast non-negativity-constrained least squares algorithm. J. Chemom. 11, 393–401 (1997)

10. Chih, J.L.: On the convergence of multiplicative update algorithms for nonnegative matrix factorization. IEEE Trans. Neural Netw. 18, 1589–1596 (2007)

11. Chen, J., Richard, C., Bermudez, J.C.M., Honeine, P.: Nonnegative least-mean-square algorithm. IEEE Trans. Signal Process. 59, 5225–5235 (2011)

12. Chen, J., Richard, C., Bermudez, J.C.M.: Reweighted nonnegative least-mean-square algorithm. Signal Process. 128, 131–141 (2016)

13. Shokrolahi, S.M., Jahromi, M.N.: Logarithmic reweighting nonnegative least mean square algorithm. Signal Image Video Process. 12, 51–57 (2018)

14. Song, I., Park, P.G., Newcomb, R.W.: A normalized least mean squares algorithm with a step-size scaler against impulsive measurement noise. IEEE Trans. Circuits Syst. II, Express Briefs 60, 442–445 (2013)

15. Huang, F., Zhang, J., Zhang, S.: NLMS algorithm based on variable parameter cost function robust against impulsive interferences. IEEE Trans. Circuits Syst. II, Express Briefs 64, 600–604 (2017)

16. Zhang, S., Zhang, J., Han, H.: Robust shrinkage normalized sign algorithm in impulsive noise environment. IEEE Trans. Circuits Syst. II, Express Briefs 64, 91–95 (2017)

17. Chen, B., Xing, L., Zhao, H., Zheng, N., Principe, J.C.: Generalized correntropy for robust adaptive filtering. IEEE Trans. Signal Process. 64, 3376–3387 (2016)

18. Huang, F., Zhang, J., Zhang, S.: A family of robust adaptive filtering algorithms based on sigmoid cost. Signal Process. 149, 179–192 (2018)

19. Xiong, K., Wang, S.: Robust least mean logarithmic square adaptive filtering algorithms. J. Franklin Inst. 356, 654–674 (2019)

Acknowledgements

I would like to show my deepest gratitude to my friends.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Funding

This work was supported by the National Natural Science Foundation of China under Grant No. 61763018, by the Education Department of Jiangxi Province under Grant No. GJJ170493, by Special Project and 5G Program of Jiangxi Province under Grant No. 20193ABC03A058, by the Education Department of Jiangxi Province under Grant No. GJJ190451, by the Program of Qingjiang Excellent Young Talents, Jiangxi University of Science and Technology.

Author information

Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Kuangang Fan.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


Cite this article

Fan, K., Qiu, H., Pei, C. et al. Robust non-negative least mean square algorithm based on step-size scaler against impulsive noise. Adv Differ Equ 2020, 199 (2020). https://doi.org/10.1186/s13662-020-02654-5
