
Kernel Adaptive Filtering Method for Noise Reduction

Akshay Nagashetti

Abstract—An active noise cancellation system has been built and implemented. Speech and ultrasound data were both used to verify the system. MATLAB/Simulink is used to design and test a least mean square (LMS) and a recursive least square (RLS) adaptive filter for the project. Verifying the accuracy of the design requires numerous careful simulations; the finite-precision results obtained from the MATLAB model are therefore used to fine-tune the filter.

Four types of FIR structures were investigated, and we investigate the enhancement of speech by applying kernel adaptive filters. Noise removal is essential in many applications such as telephone conversation and speech recognition. Kernel methods have also shown good results in other applications such as handwriting recognition and inverse-distance weighting. This paper focuses mainly on enhancing noisy speech signals and then compares the Least Mean Square (LMS) and Recursive Least Square (RLS) algorithms.

Index Terms—Adaptive noise cancellation (ANC), LMS algorithm, adaptive filter, RLS algorithm.

I. INTRODUCTION

Adaptive filters are most useful in cases where signal conditions or system parameters change slowly and the filter must adjust to compensate for the change. The simplest yet very powerful filter is the linear adaptive combiner, which is nothing more than an adjustable FIR filter. The LMS criterion is a search algorithm used to provide the strategy for adjusting the filter coefficients.


In conventional FIR and IIR digital filters, the parameters that determine the filter characteristics are assumed to be known. They may change with time, but the nature of the variation is assumed to be known. In many practical problems, the coefficients of an adaptive filter must be adjusted to compensate for changes in the input signal, output signal, or system parameters. Instead of being rigid, an adaptive system learns the signal characteristics and tracks slow changes. An adaptive filter is very helpful when there is uncertainty about the characteristics of a signal or when these characteristics change.

II. NOISE CANCELLATION USING KERNEL ADAPTIVE FILTER

The single-channel feedback adaptive Active Noise Cancellation (ANC) system works by taking the acoustic noise to be reduced (the target noise) and producing an anti-noise that cancels out the noise component through adaptive filtering. Thus, the main aim of an ANC system is to reduce the noise component of the signal of interest. Figure 1 shows the schematic diagram of a single-channel feedback active noise cancellation system.

Fig. 1. Single channel feedback active noise cancellation

The anti-noise waveform is similar to that of the target noise, except that its phase is reversed by 180 degrees. When the two waveforms are combined, the result is a much weaker residual waveform (this waveform would have zero amplitude if the anti-noise matched the target noise perfectly).

The residual waveform is what is picked up by the error microphone, as shown in Figure 2. The feedback ANC system produces the anti-noise by predicting the incoming target noise.

Fig. 2. Description of active noise control
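As a minimal illustration of this principle (the sinusoidal target noise, sampling rate, and NumPy setup below are assumptions added for exposition, not part of the original system), combining a waveform with its phase-inverted copy yields a zero or much weaker residual:

```python
import numpy as np

# Assumed example: a 200 Hz sinusoidal target noise sampled at 8 kHz.
fs = 8000
t = np.arange(0, 0.1, 1.0 / fs)
target_noise = np.sin(2 * np.pi * 200 * t)

# Ideal anti-noise: the target noise with its phase reversed by 180 degrees.
anti_noise = -target_noise

# Perfect match: the residual has zero amplitude.
print(np.max(np.abs(target_noise + anti_noise)))         # 0.0

# Imperfect (mis-scaled) anti-noise: a much weaker residual remains.
print(np.max(np.abs(target_noise + 0.95 * anti_noise)))  # ~0.05
```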

Many approaches relating to speech enhancement have been reported in the literature. Over the last ten years, adaptive filters have become effective and popular approaches to speech enhancement. The main advantage of adaptive filters is the detection of time-varying potentials and the tracking of dynamic variations of the signals. Before introducing the LMS algorithm, a brief introduction to adaptive filters is given as follows. Given the name "adaptive filter", it is important to understand the meaning of the terms "adaptive" and "filter" in a very general sense. The adjective "adaptive" can be understood by considering a system that tries to adjust itself so as to respond to some phenomenon taking place in its surroundings. In other words, the system tries to adjust its parameters with the aim of meeting some well-defined goal or target that depends upon the state of the system as well as its surroundings. This is what adaptation means. There must therefore be a set of steps or a certain procedure by which this process of adaptation is carried out. The system that carries out or undergoes the process of adaptation is called, by the more technical name, a filter.

III. NOISE CANCELLATION USING KERNEL ADAPTIVE FILTER ALGORITHMS

A. Least Mean Square (LMS) Algorithm

The Least Mean Square (LMS) algorithm, first brought into use by Widrow and Hoff, is an adaptive algorithm that can be extended with kernel techniques. The LMS algorithm uses an estimate of the gradient vector computed from the available data. It includes a computational procedure that corrects the weight vector in the direction of the negative of the gradient vector, which eventually leads to the minimum mean square error. Compared to other algorithms, the LMS algorithm is considered simpler because it requires neither correlation function calculations nor matrix inversions. In the LMS algorithm, the coefficients are adjusted from sample to sample in such a way as to minimize the Mean Square Error (MSE). The LMS is based on the steepest descent algorithm, where the weight vector is updated from sample to sample as follows:

$$W_{k+1} = W_k - \mu \nabla_k \qquad (1)$$

where $W_k$ and $\nabla_k$ are the weight and true gradient vectors, respectively, at the $k$th sampling instant, and $\mu$ controls the stability and rate of convergence.

1) Initially, set each weight to an arbitrary fixed value, such as 0. For each subsequent sampling instant $k$, carry out steps 2 to 4 below.

2) Compute the filter output:
$$\hat{n}_k = \sum_{i=0}^{N-1} w_k(i)\, x_{k-i} \qquad (2)$$

3) Compute the error estimate:
$$e_k = y_k - \hat{n}_k \qquad (3)$$

4) Update the next filter weights:
$$w_{k+1}(i) = w_k(i) + 2\mu\, e_k\, x_{k-i} \qquad (4)$$

Because of its simplicity, the LMS algorithm is one of the most popular adaptive algorithms. However, the LMS algorithm exhibits slow and data-dependent convergence behaviour. One of its primary disadvantages is the fixed step-size parameter used for every iteration, which requires an understanding of the statistics of the input signal prior to commencing the adaptive filtering operation; in practice this is rarely achievable.

Fig. 3. Flowchart of LMS algorithm

Implementation of the Least Mean Square algorithm for noise reduction: Initially, the weight parameter $w$ and the loop variable are set to zero. The input signals are then obtained from the microphone. In the next step, the filter output is calculated and is further used to compute the error estimate signal. The filter weights are then updated, with the step-size value $\mu$ fixed to a constant. This procedure is repeated until the loop parameter becomes equal to the buffer size. This implementation is depicted in Figure 3 and sketched in code below.
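A minimal NumPy sketch of this procedure, following equations (2) to (4), is given below; the function interface, tap count, and the convention of feeding a reference-noise input x and a primary-microphone signal y are illustrative assumptions, not the original MATLAB/Simulink implementation.

```python
import numpy as np

def lms_filter(x, y, num_taps=8, mu=0.01):
    """LMS adaptive noise canceller following Eqs. (2)-(4).

    x -- reference noise input
    y -- primary signal (desired response)
    Returns the error signal e (the noise-reduced output) and final weights.
    """
    w = np.zeros(num_taps)               # step 1: weights initialised to 0
    e = np.zeros(len(x))
    for k in range(num_taps, len(x)):
        x_k = x[k - num_taps:k][::-1]    # most recent input samples first
        n_hat = np.dot(w, x_k)           # Eq. (2): filter output
        e[k] = y[k] - n_hat              # Eq. (3): error estimate
        w = w + 2.0 * mu * e[k] * x_k    # Eq. (4): weight update
    return e, w
```

In a noise-cancellation setting, y carries the speech plus correlated noise and x the noise reference, so the error signal e approximates the clean speech.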

Stability of LMS: the condition for stability is
$$0 < \mu < \frac{2}{\text{input signal power}}$$
Larger values of the step size increase the adaptation rate (faster adaptation) but also increase the residual mean-squared error. A small sketch for choosing $\mu$ from this bound follows.
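As an illustration of this bound (the helper name and the conservative safety factor are assumptions), the step size can be chosen from an estimate of the input signal power:

```python
import numpy as np

def max_stable_mu(x):
    """Upper bound on mu from the stated condition 0 < mu < 2 / (input power)."""
    return 2.0 / np.mean(x ** 2)

# In practice a value well inside the bound is used, e.g.:
# mu = 0.05 * max_stable_mu(x)
```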

B. Recursive Least Square (RLS) Algorithm

Recursive least square (RLS) is another algorithm for adaptive filters. This algorithm attempts to directly update the auto- and cross-correlation matrices in order to approach the Wiener-Hopf equation. RLS is a relatively complex algorithm compared to the LMS and NLMS algorithms, but its performance in terms of convergence and Mean Square Error (MSE) is better than that of LMS and NLMS.

The recursive least square (RLS) adaptive filter is an algorithm which recursively determines the filter coefficients that minimize a weighted linear least squares cost function relating to the input signals. RLS algorithms are known for their excellent performance when working in time-varying environments, but at the cost of increased computational complexity along with stability problems.

The RLS adaptation algorithm with input signal $y(n)$ and desired signal $x(n)$ is given below. The initial value for the RLS algorithm is given by $w(0) = w_I$. For $n = 1, 2, \dots$, the filter gain update vector is given by

$$k(n) = \frac{P(n-1)\, y(n)}{\lambda + y^T(n)\, P(n-1)\, y(n)} \qquad (5)$$

The error signal equation is given by

$$e(n) = x(n) - w^T(n-1)\, y(n) \qquad (6)$$

Filter coefficient adaptation is given by

$$w(n) = w(n-1) + k(n)\, e(n) \qquad (7)$$

The inverse correlation matrix update is calculated using

$$P(n) = \lambda^{-1}\left[P(n-1) - k(n)\, y^T(n)\, P(n-1)\right] \qquad (8)$$

where $\lambda$ is the forgetting factor.
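The following NumPy sketch implements equations (5) to (8); the regularised initialisation P(0) = I/δ, the default forgetting factor, and the function interface are assumptions made for illustration.

```python
import numpy as np

def rls_filter(y, x, num_taps=8, lam=0.99, delta=0.01):
    """RLS adaptive filter following Eqs. (5)-(8).

    y   -- input (reference) signal
    x   -- desired signal
    lam -- forgetting factor lambda, 0 < lam <= 1
    """
    w = np.zeros(num_taps)            # w(0) = w_I (zero vector here)
    P = np.eye(num_taps) / delta      # inverse correlation matrix estimate
    e = np.zeros(len(y))
    for n in range(num_taps, len(y)):
        y_n = y[n - num_taps:n][::-1]
        k = P @ y_n / (lam + y_n @ P @ y_n)      # Eq. (5): gain vector
        e[n] = x[n] - w @ y_n                    # Eq. (6): a priori error
        w = w + k * e[n]                         # Eq. (7): coefficient update
        P = (P - np.outer(k, y_n @ P)) / lam     # Eq. (8): inverse corr. update
    return e, w
```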

IV. EXPERIMENTAL RESULTS

Fig. 4. Mean square error for LMS
Fig. 5. Variation of MSE with respect to
Fig. 6. Mean square error for RLS

V. COMPARISON OF ALGORITHMS

We compare the LMS and RLS algorithms and conclude which adaptive algorithm is suitable for an adaptive filter. Both computational resource and convergence speed requirements are important when choosing an adaptive filter algorithm. The key difference is that the LMS algorithm is a Markov process: it has its present state, but other than that it does not remember data from the past. For time-varying signals this is a desirable characteristic, because past data would give invalid information about the current parameters.

The RLS algorithm uses all of the information, present and past, but that can be a problem if the past data is ambiguous about the current parameters. If a researcher is looking for a quantitative rule for when to use one or the other, we do not have one. The RLS algorithm is more computationally intensive than the LMS algorithm, so if LMS is good enough, it is the safer one to go with. The RLS algorithm converges faster, but it is more computationally intensive and suffers the time-varying disadvantage noted above.

Fig. 7. Input and noise signal
Fig. 8. LMS filter output
Fig. 9. Performance comparison of kernel adaptive algorithms
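To reproduce this kind of comparison qualitatively, the two sketches above can be run on the same noisy signal; the synthetic test signal, mixing filter, and smoothing window here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4000
clean = np.sin(2 * np.pi * np.arange(N) / 50)   # stand-in "speech" tone
noise = rng.normal(size=N)                      # reference noise
noisy = clean + np.convolve(noise, [0.5, 0.3, 0.2], mode="same")

# lms_filter and rls_filter are the sketches defined earlier.
e_lms, _ = lms_filter(noise, noisy, num_taps=8, mu=0.005)
e_rls, _ = rls_filter(noise, noisy, num_taps=8, lam=0.99)

def mse_curve(e, win=200):
    """Windowed squared error against the clean signal."""
    return np.convolve((e - clean) ** 2, np.ones(win) / win, mode="valid")

# RLS typically reaches a low MSE in fewer samples, at a higher
# per-sample computational cost than LMS.
print(mse_curve(e_lms)[-1], mse_curve(e_rls)[-1])
```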

VI. CONCLUSION

An active noise cancellation (ANC) scheme employs an adaptive digital filter to generate control signals. The adaptive filter updates its coefficients iteratively to track the best possible solutions using adaptive algorithms. LMS was the simplest and easiest to implement, but it converges at the slowest rate; RLS has a rapid convergence rate compared to LMS, but is computationally more expensive. RLS algorithms are known for their excellent performance when working in time-varying environments, but at the cost of increased computational complexity and some stability problems. We have studied the trade-offs between the MSE performance and the complexity of several state-of-the-art kernel adaptive filtering algorithms on three benchmark data sets. The proposed figures of merit are meaningful indicators of the relative performance of these algorithms: since they allow us to highlight the advantages and disadvantages of each algorithm in different scenarios, they constitute an interesting tool for the practitioner. As expected, we observed that there is no single best-performing algorithm for all scenarios. Rather, the optimal choice of algorithm depends on the target MSE range, the available computational resources, and the particular data set. In future work we plan to include additional measures, such as convergence speed, which may be of interest in scenarios with fewer restrictions on complexity.

