Acta Acustica, Volume 8, 2024
Topical Issue - Active Noise and Vibration Control
Article Number: 51
Number of pages: 9
DOI: https://doi.org/10.1051/aacus/2024024
Published online: 11 October 2024
Scientific Article
Generalised performance of neural network controllers for feedforward active noise control of nonlinear systems
Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton, SO17 1BJ, UK
* Corresponding author: x.pike@soton.ac.uk
Received: 25 March 2024
Accepted: 11 June 2024
Advances in digital technologies have allowed for the development of complex active noise and vibration control solutions that have been utilised in a wide range of applications. Such control systems are commonly designed using linear filters, which cannot fully capture the dynamics of nonlinear systems. To overcome such issues, it has been shown that replacing linear controllers with Neural Networks (NNs) can improve control performance in the presence of nonlinearities. Many real systems are subject to non-stationary disturbances, where the magnitude of the system excitation is time dependent. However, within the literature, the performance of single NN controllers across different excitation levels has not been thoroughly explored. In this paper, a method of training Multilayer Perceptrons (MLPs) for single-input-single-output (SISO) feedforward acoustic noise control is presented. In a simple time-discrete simulation, the performance of the trained NNs is investigated for different excitation levels. The effects of the properties of the training data and NN controller on generalised performance are explored. It is demonstrated that the generalised control performance of the MLP controllers falls as the range of magnitudes included in the training data is increased, and that this performance can be recovered by increasing the number of hidden nodes within the controller.
Key words: Active control / Neural network / Nonlinear
© The Author(s), Published by EDP Sciences, 2024
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1 Introduction
Unwanted noise and vibration can be problematic in both engineering systems and in public and private spaces. Passive control solutions are capable of effectively reducing high frequency components of noise and vibration but are typically large and/or heavy for low frequency control, possibly exceeding the design constraints of a given application. Active control solutions, by contrast, are capable of effective control at low frequencies, and are typically lightweight and compact. Historically, feedforward active noise and vibration control systems have been implemented using linear control filters and linear plant models, commonly using the FxLMS algorithm. However, it is well understood that nonlinearities present in either the plant or primary path of the control system can have a significant impact on control performance [1–4]. Many approaches have been proposed to overcome this limitation, including polynomial, cross-term or trigonometric expansion of the reference signal [5, 6], genetic algorithms [7] and fuzzy logic-based methods [8]. A further common approach, which has been applied to active control over the past few decades, is the application of machine learning methods. NNs in particular are known to possess the property of being ‘universal approximators’ [9] and are therefore an attractive black-box method for the modelling and control of unknown or uncertain nonlinear systems. The similarities in structure between NNs and linear filters provide good motivation for their use in both system modelling and feedforward controller design. Many different uses of NNs have been studied, including system modelling [4, 10–13], feedforward controller design [4, 10, 14], inverse modelling [15], signal prediction and feedback control [16–20], linear filter selection [21], adaptive parameter estimation for linear controllers [20, 22], frequency-domain control [23], multichannel controller design [24], and signal classification [25].
In previous work utilising NNs as feedforward controllers, however, the ability of the controller or plant model NNs to generalise across a range of excitation levels of the studied system has not been thoroughly explored. This is clearly a desirable quality in any real implementation of such a control system, where the properties of the excitation, and therefore the effect of the system nonlinearity, may change over time. In this paper, a simulation of a simple noise control system implementing a time-domain Multilayer Perceptron (MLP) controller is studied. Section 2 defines the simulated system, system parameters and simulation method. Section 3 explains the controller training methodology. Section 4 presents simulated results in the time and frequency domains. Finally, Section 5 discusses conclusions and presents possible future research directions.
2 Problem definition
2.1 Simulated System
The considered system consists of two acoustic sources. The primary source, which generates the acoustic disturbance, is modelled as a damped Duffing oscillator, assumed to radiate as a monopole acoustic source. The secondary acoustic source, which generates the cancelling acoustic signal, is modelled as a simple harmonic oscillator, also assumed to radiate as a monopole source. Figure 1 presents a diagram of the simulated system.
Figure 1 Diagram of the simulated system, consisting of a nonlinear primary acoustic source, and a linear control source. System parameter values are given in Table 1.
The displacement of the Duffing oscillator, ya(t), is caused by the motion of the floor to which it is coupled. The displacement, x(t), of this floor is also taken to be the reference signal passed to the feedforward controller. The displacement, yb(t), of the mass, mb, is caused by the control force, Fc(t), produced by the controller.
The equations of motion for the total system are
where p(t) = ya(t) − x(t), and the remaining variables are defined in Figure 1, with their values provided in Table 1. These parameter values were selected such that the two oscillators have unity mass, but distinct resonance frequencies of 60 Hz and 80 Hz. The damping coefficients ca and cb were selected such that each oscillator is subject to 20% of critical damping, and the oscillators are therefore neither significantly underdamped nor overdamped. Assuming for simplicity that the error sensor is equidistant from the two point monopole sources, and that the constant amplitude scaling factors are equal in both cases such that they can be neglected, the error signal is defined as
where δa and δb are the acoustic delays, in time, between the primary and secondary sources and the error microphone, respectively. In all cases, the signal x[n] is Gaussian white noise band-limited to the frequency range [0, 250] Hz. The motion of the sources is simulated in the time domain using a 4th order Runge-Kutta method at a sample rate of fs = 2 kHz.
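The equations of motion and the error-signal definition referenced above are not rendered in this version of the article. A plausible reconstruction, consistent with the surrounding definitions, is sketched below; the cubic stiffness coefficient, written here as k_{a,3}, is an assumed symbol not stated in the text.

```latex
\begin{aligned}
m_a \ddot{y}_a(t) + c_a \dot{p}(t) + k_a p(t) + k_{a,3}\,p^3(t) &= 0,
\qquad p(t) = y_a(t) - x(t),\\
m_b \ddot{y}_b(t) + c_b \dot{y}_b(t) + k_b y_b(t) &= F_c(t),\\
e(t) &= y_a(t - \delta_a) + y_b(t - \delta_b).
\end{aligned}
```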
Table 1 Simulated system parameter values.
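The time-domain simulation described above can be sketched as follows: a 4th-order Runge-Kutta integration of the base-driven Duffing oscillator at fs = 2 kHz. The unity mass, 60 Hz resonance and 20% critical damping follow the text; the cubic stiffness value `k3` and the finite-difference estimate of the floor velocity are illustrative assumptions.

```python
import numpy as np

def duffing_rhs(state, x_t, xdot_t, m=1.0, f0=60.0, zeta=0.2, k3=1e12):
    # state = [y, ydot]; the relative displacement p = y - x drives the
    # linear, damping and cubic restoring terms. k3 is an assumed value.
    y, ydot = state
    k = m * (2 * np.pi * f0) ** 2        # linear stiffness for a 60 Hz resonance
    c = 2 * zeta * np.sqrt(k * m)        # 20% of critical damping
    p, pdot = y - x_t, ydot - xdot_t
    yddot = -(c * pdot + k * p + k3 * p ** 3) / m
    return np.array([ydot, yddot])

def simulate_duffing(x, fs=2000.0, k3=1e12):
    """4th-order Runge-Kutta integration of the primary source displacement.
    The input x is held constant over each step, a simplifying assumption."""
    dt = 1.0 / fs
    xdot = np.gradient(x, dt)            # finite-difference floor velocity
    state = np.zeros(2)
    ya = np.zeros_like(x)
    for n in range(len(x) - 1):
        k1 = duffing_rhs(state, x[n], xdot[n], k3=k3)
        k2 = duffing_rhs(state + 0.5 * dt * k1, x[n], xdot[n], k3=k3)
        k3s = duffing_rhs(state + 0.5 * dt * k2, x[n], xdot[n], k3=k3)
        k4 = duffing_rhs(state + dt * k3s, x[n], xdot[n], k3=k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3s + k4)
        ya[n + 1] = state[0]
    return ya
```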
3 Controller design, training and testing
3.1 Controller architecture
A diagram of the NN architecture used to train the controller is presented in Figure 2. Similarly to the case of a linear controller, the NN controller and plant model each take as input a tapped delay line of the sampled reference signal, x[n], and sampled control signal, u[n], respectively. The plant model output is linearly summed with the disturbance signal, d[n], to generate the error signal estimate, ê[n], which is used to update the weights and biases of the controller NN via backpropagation through the full network consisting of both the controller and plant model. Given a tapped delay line of length L of the reference signal, given by x[n] = [x[n], x[n − 1], …, x[n − L + 1]],
the control signal, u[n], can be generated by passing x[n] through the controller NN. If the NN architecture is that of an MLP with a single hidden layer, then the output of the NN is given by

u[n] = Σ_{i=1}^{H} w_i h_i + b_o,

where w_i are the output weights of the NN, b_o is the NN output bias, and h_i are the NN hidden layer node values, given by

h_i = σ([Wx[n]]_i + b_i),

where W is a matrix of weights between the input layer and hidden layer, [Wx[n]]_i is the ith element of the vector Wx[n], σ(⋅) is the nonlinear activation function applied to the controller hidden layer, and b_i is the bias of the ith hidden layer node. In total, the controller therefore contains H(L + 2) + 1 trainable parameters for a hidden layer of H nodes.
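The controller forward pass described above can be sketched compactly; the tanh activation used here is an assumption, as the text does not name σ at this point.

```python
import numpy as np

def mlp_control_sample(x_tdl, W, b_h, w_o, b_o, sigma=np.tanh):
    """Single-hidden-layer MLP controller output for one time step:
    u[n] = sum_i w_i * sigma([W x[n]]_i + b_i) + b_o.

    x_tdl : (L,) reference tapped delay line [x[n], ..., x[n-L+1]]
    W     : (H, L) input-to-hidden weights;  b_h : (H,) hidden biases
    w_o   : (H,) output weights;             b_o : scalar output bias
    """
    h = sigma(W @ x_tdl + b_h)    # hidden layer node values h_i
    return float(w_o @ h + b_o)   # scalar control sample u[n]
```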
Figure 2 Block diagram of the controller training method, assuming an FIR plant model of order 2.
However, a full tapped delay line u[n] is required to infer the output of the plant model and, therefore, the error estimate, ê[n]. It is therefore necessary to generate control signal values u[n − 1], u[n − 2], …, u[n − I + 1] for a tapped delay line of length I. A previously presented solution to this problem [26] is to train the model using the training data sequentially, storing the values of the control signal in memory, and calling upon them when evaluating the output of the plant model at each timestep during the updating of the controller weights and biases. However, a result of this approach is that the error estimate, ê[n], will not accurately reflect the control performance of the current iteration of the controller. The control signal tapped delay line is calculated from the outputs of the current and previous L − 1 iterations of the controller, and therefore so is the error estimate. This could plausibly lead to stability and performance issues in the training of the controller NN. Perhaps more importantly, this sequential approach will face the issue of catastrophic interference [27], meaning it is unsuitable for training networks without compromising generalised control performance. As illustrated in Figure 2, an alternative approach is proposed here where all required previous controller outputs, u[n − k], are generated using the current iteration of the controller. In general,

u[n − k] = Σ_{i=1}^{H} w_i σ([Wx[n − k]]_i + b_i) + b_o,    (8)

where all weights and biases in equation (8) are those of the current iteration of the controller during training. Irrespective of whether the values of u[n − k] are called from memory or generated from the current iteration of the controller, standard backpropagation techniques can be used to update the weights and biases of the controller to minimise a given cost function of ê[n]. The approach presented in Figure 2 is clearly more computationally intensive than simply storing u[n] in memory. It should be noted, however, that computing u[n] is only required during controller training. The controller NN is assumed here to be fixed during operation and so, for a NN controller with the same number of hidden nodes, the computational cost to produce u[n] from x[n] in operation is independent of the training method.
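The proposed regeneration of all delayed controller outputs from the current weights can be sketched as follows; the tanh activation is an assumed choice, and the helper assumes the reference history is long enough to fill every delay line.

```python
import numpy as np

def control_tdl_from_current_weights(x, n, L, I, W, b_h, w_o, b_o, sigma=np.tanh):
    """Build the control-signal tapped delay line [u[n], ..., u[n-I+1]], with
    every sample produced by the *current* controller weights (not values
    stored from earlier training iterations). Assumes n >= L + I - 2."""
    u = np.empty(I)
    for k in range(I):
        # reference tapped delay line ending at time n - k:
        # [x[n-k], x[n-k-1], ..., x[n-k-L+1]]
        x_tdl = x[n - k - L + 1 : n - k + 1][::-1]
        u[k] = w_o @ sigma(W @ x_tdl + b_h) + b_o
    return u
```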
3.2 Controller training
The controllers were trained to minimise the Mean Squared Error (MSE) of the estimated error signal, defined as

J = E[ê²[n]],

where the average is calculated over the samples in the training batch. The backpropagation used the Adam algorithm [28] with parameters α = 3 × 10−5, β1 = 0.9, β2 = 0.99, and ϵ = 10−7. These parameters were selected through trial and error to reach a trade-off between controller performance and training speed.
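For reference, a single Adam parameter update with the stated hyperparameters can be sketched as below; this is the standard Kingma and Ba update, not code from the paper.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, alpha=3e-5, beta1=0.9, beta2=0.99, eps=1e-7):
    """One Adam update using the hyperparameters reported in the paper.
    theta: parameters; grad: gradient of the cost w.r.t. theta;
    m, v: running first/second moment estimates; t: step count (from 1)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias corrections
    v_hat = v / (1 - beta2 ** t)
    theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```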
In all cases, the plant model used for controller training was an FIR filter (equivalent to an MLP with no hidden layer) with 140 taps, which was capable of achieving high levels of modelling accuracy due to the linear nature of the simulated plant response.
3.3 Training data
For each instance of network training, two sets of 900 s of simulated data are generated. The first set is used for training, and the second set is used as a validation set to assess model overfitting. Each dataset consists of the reference signal, x[n], and the resultant displacement of the primary source, ya[n]. In each 900 s simulation, the magnitude of the reference signal, x[n], is increased linearly in time from the lower to the upper bound of the training range. The training data is split into mini-batches of size 128, selected randomly from the data. In each iteration of the training, 1000 such mini-batches are employed. The full set of training data is therefore not used in each iteration. This is due to a training bias which is implemented to optimise network training for generalised performance, explained below. All networks are trained over 500 iterations. All signals used to generate training and validation data are first normalized to have a standard deviation of 1 to facilitate quick and stable training of the NNs. As expected, using small amounts of training data was observed to result in model overfitting, reducing the performance of the controllers and obscuring the underlying limits on controller performance. The relatively large amount of training data used was therefore chosen to minimize the effect of overfitting during network training, and therefore no regularization was applied to the networks.
During testing, it was initially observed that MLP controllers trained using data with equal weighting across all magnitudes of the reference signal produced MSE attenuation that was approximately equal across much of the training range. As shall be noted in Section 4, the maximum performance of the MLP controllers at each magnitude of the reference signal is not equal. As a result, training an MLP as a controller across a range of reference signal magnitudes results in control performance close to the maximum for the largest training magnitudes, but underperformance relative to the maximum at the lowest training magnitudes. To counteract this, the random selection of the training samples was weighted based on the magnitude of the reference signal. That is, for a set of N training examples with reference signal magnitudes xmag in the range a < xmag < b, the probability of training example q being included in a training batch is given (up to a normalizing factor) by
where γ is a factor controlling the training bias. The inclusion of this training bias affects the resultant performance of the MLP controllers across the training range. Appropriate selection of γ for a given training range results in generalised control performance that approaches the maximum MLP controller performance across the training range. This effect is illustrated in Section 4.2.
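The magnitude-weighted mini-batch selection just described can be sketched as follows. The exact weighting expression is not reproduced in this rendering of the article, so the exponential form `exp(-gamma * x_mag)` used here is a hypothetical stand-in chosen only to illustrate biasing the sampling toward lower reference-signal magnitudes.

```python
import numpy as np

def biased_batch_indices(x_mag, gamma, batch_size, rng=None):
    """Sample mini-batch indices with probability weighted by reference-signal
    magnitude. HYPOTHETICAL weighting: w_q ~ exp(-gamma * x_mag_q); the
    paper's actual expression is not reproduced here."""
    if rng is None:
        rng = np.random.default_rng()
    w = np.exp(-gamma * (x_mag - x_mag.min()))   # shift for numerical stability
    p = w / w.sum()
    return rng.choice(len(x_mag), size=batch_size, replace=False, p=p)
```

With a positive γ, low-magnitude training examples are oversampled, counteracting the tendency of the trained controller to favour the largest magnitudes in the range.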
3.4 Testing
During the controller testing phase, the controller network is extracted and used to generate the control signal, u[n], in a new simulation. At each timestep, the controller input is a tapped delay line of the reference signal, x[n] = [x[n], …, x[n − L + 1]], from which a control signal sample, u[n], is generated. The testing is undertaken across a range of magnitudes of x[n]; however, this magnitude is kept constant within each testing simulation. Each testing simulation is undertaken over 60 s, and control performance is defined as the MSE attenuation, measured in dB relative to the sampled disturbance signal, d[n].
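The performance metric used throughout the results can be computed as below; this is a direct restatement of MSE attenuation in dB relative to the disturbance, not code from the paper.

```python
import numpy as np

def mse_attenuation_db(d, e):
    """MSE attenuation of the error signal e[n] relative to the disturbance
    d[n], in dB. Positive values indicate a control benefit."""
    d, e = np.asarray(d, dtype=float), np.asarray(e, dtype=float)
    return 10.0 * np.log10(np.mean(d ** 2) / np.mean(e ** 2))
```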
4 Results
4.1 Maximum control performance
To first establish an upper limit on the generalised control performance of the MLP controllers across a range of magnitudes of x[n], a set of 90 networks with 100 hidden nodes were trained at 30 equally spaced magnitudes of x[n] from 10−9 to 10−5 m with three controllers trained at each level to obtain an average of the control performance. Controller weights and biases were initialized with a random Gaussian distribution with zero mean and a standard deviation of 0.05. The performance of these controllers is presented in Figure 3. Also presented is the performance of FIR filter-based controllers trained using the same Adam algorithm, and the control performance of FIR controllers trained in-simulation using the FxLMS algorithm.
Figure 3 Average control performance of the MLP controller, and an FIR controller trained using the Adam algorithm and FxLMS. Averaging is undertaken over 5 instances of a trained controller. Solid lines represent the mean control performance. Shaded regions present 2 standard deviations of control performance around the mean.
At very low magnitudes of the reference signal, both the FIR and MLP controllers achieve a similar level of performance of between approximately 55 and 60 dB. At the lowest magnitudes, the FIR controller outperforms the MLP controller slightly. However, as the magnitude of x[n] increases and the degree of nonlinearity stimulated in the primary path increases, the MLP controller demonstrates a clear control advantage compared to the linear controller of up to 20 dB. At the highest levels of nonlinearity, the performance of both controller architectures falls. However, the MLP controller still achieves some control advantage of approximately 5 dB. Interestingly, the performance of the linear controller shows a standard deviation of 2.6 dB at the lowest magnitude of x[n] tested. This is unexpected, and is absent in the results of the FxLMS-trained controller. It is reasonable to assume, therefore, that this variance is a result of the Adam algorithm used to train the FIR controllers.
Figure 4 presents Power Spectral Density (PSD) estimates of the disturbance and error signals at a range of values of xmag for the FIR and MLP controllers. As observed in Figure 3, at the lowest value of xmag, the FIR controller slightly outperforms the MLP controller. However, as xmag increases, the MLP controller outperforms the FIR controller. Notably, the MLP controller outperforms the FIR controller across all frequencies present in the reference signal.
Figure 4 Frequency-domain errors corresponding to the MLP and FIR filter-based controllers over a range of reference signal magnitudes xmag, averaged over 5 instances of controller training. Black: Disturbance; Blue: FIR controller error; Red: MLP controller error. All signals used to calculate PSD estimates are normalised with respect to the RMS of the disturbance signal.
4.2 Effect of training bias γ
As explained in Section 3.3, the training of the MLP controllers for control across a range of xmag is influenced by a training bias which is fully defined by a parameter γ. In effect, this parameter controls the slope of the generalised performance curve within the training region for a given controller. As the generalised performance of the controllers is limited to a maximum at each magnitude of x[n], the tuning of γ allows for generalised performance to be maximised across the training region. Figure 5 presents the control performance of 3 MLP controllers trained with different values of the parameter γ. In each case, the controller contains a hidden layer of 100 nodes, and the training range is shown by the red shaded region. As observed in Section 4.1, the control performance of MLP networks is subject to variance. Therefore, to produce smooth generalised performance curves that illustrate the effect of γ more clearly, the training data used for each network was identical, and the reference signal used in each testing simulation was identical across tests, but scaled to the appropriate magnitude. As expected, the effect of increasing γ is to change the slope of the generalised performance curve within the training region. We can observe that, for γ = 1.2 × 106, the generalised network performance is close to the maximum at the bottom of the training range, but falls away from the maximum near the top. We see the opposite effect for the network trained with γ = 0. For γ = 6 × 105, the generalised performance of the controller relative to the maximum is approximately constant within the training range.
Figure 5 Performance of 3 MLP controllers trained with varying values of the training bias parameter γ. The red highlighted region represents the range of magnitudes of x[n] over which the controllers were trained.
4.3 Effect of training range
For a fixed number of hidden nodes in the MLP controller, the effect of increasing the range of xmag over which the controller is trained is investigated. Figure 6 presents the control performance of 4 networks with increasing training widths. The first network, trained at xmag = 5 × 10−6 m, achieves control performance equal to the defined maximum at the trained magnitude, as expected. It also demonstrates some generalised performance capacity, with performance relative to the maximum curve dropping by approximately 3 dB within the range of xmag ∈ [4 × 10−6, 6 × 10−6]. Increasing the range of the training to [4 × 10−6, 6 × 10−6] slightly improves the control performance at the edges of this range at very little cost to performance at the centre of the range. Further increasing the training range reduces the performance of the controller at the centre of the range whilst improving generalisability. This is expected, as increasing the range of the magnitude of the training data increases the range of nonlinear behaviour exhibited by the training data. We should therefore expect that keeping the number of hidden nodes in the networks fixed, but increasing the training range, should reduce the peak performance of the controller relative to the maximum curve as the controller network becomes increasingly under-powered for the required generalisation task.
Figure 6 MSE attenuation of 4 MLP controllers trained with reference signal magnitude within the highlighted range. Each plot presents the averaged performance of 3 controller instances trained within the presented range.
4.4 Effect of network size
The ability of NNs to generate nonlinear mappings from input to output is derived from the nonlinear activation functions applied to the nodes of the hidden layer(s). It is natural, therefore, to assume that increasing the number of nodes in the hidden layer of the MLP controller will increase its generalised control performance across a range of reference signal magnitudes. Figure 7 shows the generalised control performance of 5 MLP controllers with a range of hidden nodes from 12 to 200. At each magnitude of x[n] and number of hidden nodes, 3 controllers were trained and the performance averaged over the 3 controllers. Across the training range, the controller with 12 hidden nodes achieves a performance approximately 7 dB below the defined maximum MLP performance. Increasing the number of hidden nodes to 25 increases the performance of the controller relative to the maximum by approximately 2 dB. However, this approximately doubles the number of parameters to be learned within the network, and approximately doubles the number of operations required to infer the output of the network. Further doubling of the number of hidden nodes in the MLP controllers further increases the generalised control performance of the controllers. However, this increase becomes increasingly small as the performance of the networks approaches the maximum. Noting that the controller containing 12 hidden nodes generates a control performance advantage over the linear controllers of up to 10 dB, the computational cost of increasing the number of hidden nodes from 12 to 200 to gain an additional 8 dB in control performance is considerable. In practical applications, the additional computational cost of both training and inferring the outputs of excessively large networks may be prohibitive to their implementation.
However, it should be noted that increasing the number of hidden nodes in the MLP controller does recover the control performance lost by increasing the controller training width.
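The "approximately doubles" claim above can be checked with a simple parameter count: a single-hidden-layer MLP with L inputs and H hidden nodes has H·L input weights, H hidden biases, H output weights and one output bias, i.e. H(L + 2) + 1 parameters. The tap length L = 140 below is an illustrative assumption (the text states 140 taps only for the FIR plant model, not the controller input).

```python
def mlp_param_count(L, H):
    """Trainable parameters of a single-hidden-layer MLP controller:
    H*L input weights + H hidden biases + H output weights + 1 output bias."""
    return H * L + H + H + 1

# Going from 12 to 25 hidden nodes roughly doubles the parameter count,
# independent of the (assumed) input tap length L.
for H in (12, 25, 50, 100, 200):
    print(H, mlp_param_count(140, H))
```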
Figure 7 Generalised control performance of MLP controllers trained with varying numbers of nodes in the hidden layer, trained within the highlighted range. Each curve represents the average performance of 3 trained controller instances.
5 Conclusions
In this paper, a method of training MLP NNs for use as time-domain controllers for feedforward active noise control has been presented. For the simple case presented, the maximum achievable performance of linear and MLP controllers has been estimated, demonstrating that the control performance of both the linear and MLP controllers falls as the degree of nonlinearity in the system primary path increases. The ability of MLP controllers to achieve generalised performance across a range of system stimulation magnitudes has been investigated, and a training bias parameter γ has been introduced to balance the control performance across the training range. The effect of varying this parameter has been presented, demonstrating that, for a fixed number of hidden nodes in the controller, the parameter can control the relation between generalised controller performance and reference signal magnitude. Increasing the controller training range has been shown to reduce controller performance over the same range. However, it has also been demonstrated that this performance may be recovered by increasing the number of hidden nodes in the MLP controller.
Additional contributions which would extend this work in the future could include the study of an extended set of NN architectures, a wider study of the NN hyperparameters, the study of a wider range of types of nonlinear behaviour, comparison of controller performance to other common nonlinear control strategies, as well as experimental validation of the results under similar nonlinear conditions.
Acknowledgments
This work was supported by an EPSRC Prosperity Partnership (No. EP/S03661X/1). The authors acknowledge the use of the IRIDIS High Performance Computing Facility, and associated support services at the University of Southampton, in the completion of this work.
Conflicts of interest
The authors declare no conflicts of interest.
Data availability statement
The data are available from the corresponding author on request.
References
- X. Pike, J. Cheer: Limitations of FxLMS in feedforward active vibration control of a non linear two-degree-of-freedom system, INTER NOISE and NOISE-CON Congress and Conference Proceedings 265, 2 (2023) 5219–5229. [CrossRef] [Google Scholar]
- M. Costa, J. Bermudez, N. Bershad: Stochastic analysis of the filtered-x LMS algorithm in systems with nonlinear secondary paths, IEEE Transactions on Signal Processing 50, 6 (2002) 1327–1342. [CrossRef] [Google Scholar]
- O. Tobias, R. Seara: On the LMS algorithm with constant and variable leakage factor in a nonlinear environment, IEEE Transactions on Signal Processing 54, 9 (2006) 3448–3458. [CrossRef] [Google Scholar]
- S. Snyder, N. Tanaka: Active control of vibration using a neural network, IEEE Transactions on Neural Networks 6, 4 (1995) 819–828. [CrossRef] [PubMed] [Google Scholar]
- L. Tan, J. Jiang: Adaptive volterra filters for active control of nonlinear noise processes, IEEE Transactions on Signal Processing 49, 8 (2001) 1667–1676. [CrossRef] [Google Scholar]
- E. Reddy, D. Das, K. Prabhu: Fast exact multichannel FSLMS algorithm for active noise control, Signal Processing 89, 5 (2009) 952–956. [CrossRef] [Google Scholar]
- C. Wangler, C. Hansen: Genetic algorithm adaptation of non-linear filter structures for active sound and vibration control, in Proceedings of ICASSP ’94 IEEE International Conference on Acoustics, Speech and Signal Processing, Adelaide, SA, Australia, 19–22 April, 1994 IEEE. [Google Scholar]
- Q.-Z. Zhang, W.-S. Gan: Active noise control using a simplified fuzzy neural network, Journal of Sound and Vibration 272, 1–2 (2004) 437–449. [CrossRef] [Google Scholar]
- K. Hornik, M. Stinchcombe, H. White: Multilayer feedforward networks are universal approximators, Neural Networks 2, 5 (1989) 359–366. [CrossRef] [Google Scholar]
- M. Bouchard, B. Paillard, C.L. Dinh: Improved training of neural networks for the nonlinear active control of sound and vibration, IEEE Transactions on Neural Networks 10, 2 (1999) 391–401. [CrossRef] [PubMed] [Google Scholar]
- K. Narendra, K. Parthasarathy: Identification and control of dynamical systems using neural networks, IEEE Transactions on Neural Networks 1, 1 (1990) 4–27. [CrossRef] [PubMed] [Google Scholar]
- D. Hong, H. Lee, Y. Han, B. Kim: Numerical feasibility study for transverse vibration control of rotating shaft with a neural network-based tracking algorithm, INTER-NOISE and NOISE-CON Congress and Conference Proceedings 263, 5 (2021) 1293–1298. [CrossRef] [Google Scholar]
- H.-S. Kim: Development of seismic response simulation model for building structures with semi-active control devices using recurrent neural network, Applied Sciences 10, 11 (2020) 3915. [CrossRef] [Google Scholar]
- C.-Y. Chang, F.-B. Luoh: Enhancement of active noise control using neural-based filtered-x algorithm, Journal of Sound and Vibration 305, 1 (2007) 348–356. [CrossRef] [Google Scholar]
- Y. Yan, L. Dong, Y. Han, W. Li: A general inverse control model of a magneto-rheological damper based on neural network, Journal of Vibration and Control 28, 7 (2022) 952–963. [CrossRef] [Google Scholar]
- K. Na, S.-I. Chae: Single-sensor active noise cancellation using recurrent neural network predictors, in Proceedings of International Conference on Neural Networks (ICNN’97), vol. 4, Houston, TX, USA, 12 June, IEEE, 1997, pp. 2153–2156. [Google Scholar]
- C. Chen, T.-D. Chiueh: Multilayer perceptron neural networks for active noise cancellation, in 1996 IEEE International Symposium on Circuits and Systems. Circuits and Systems Connecting the World. ISCAS 96, vol. 3, Atlanta, GA, USA, 15 May, IEEE, 1996, pp. 523–526. [Google Scholar]
- M. Salmasi, H. Mahdavi-Nasab, H. Pourghassem, Comparison of feed-forward and recurrent neural networks in active cancellation of sound noise, in: 2011 International Conference on Multimedia and Signal Processing, Guilin, China, 14–15 May, IEEE, 2011, pp. 25–29. [Google Scholar]
- T. Matsuura, T. Hiei, H. Itoh, K. Torikoshi: Active noise control by using prediction of time series data with a neural network, in 1995 IEEE International Conference on Systems, Man and Cybernetics. Intelligent Systems for the 21st Century, vol. 3, Vancouver, BC, Canada, 22–25 October, IEEE, 1995, pp. 2070–2075. [Google Scholar]
- K. Hiramoto, T. Matsuoka: Active vibration control of structural systems with a preview of a future seismic waveform generated by remote waveform observation data and an artificial intelligence–based waveform estimation system, Journal of Vibration and Control 26, 17 (2020) 1602–1613. [CrossRef] [Google Scholar]
- Q. Wang, J. Wang, X. Huang, L. Zhang: Semiactive nonsmooth control for building structure with deep learning, Complexity 2017 (2017) 1–8. [CrossRef] [Google Scholar]
- J. Liu, X. Li, X. Zhang, X. Chen: Modeling and simulation of energy-regenerative active suspension based on BP neural network PID control, Shock and Vibration 2019 (2019) 1–8. [CrossRef] [Google Scholar]
- H. Zhang, D. Wang: Deep ANC: a deep learning approach to active noise control, Neural Networks 141 (2021) 1–10. [CrossRef] [PubMed] [Google Scholar]
- H. Zhang, D. Wang: Deep MCANC: a deep learning approach to multi-channel active noise control, Neural Networks 158 (2023) 318–327. [CrossRef] [PubMed] [Google Scholar]
- R. Ranjan, J. He, T. Murao, L. Bhan, W.-S. Gan: Selective active noise control system for open windows using sound classification, INTER-NOISE and NOISE-CON Congress and Conference Proceedings 253 (2016) 1921–1931. [Google Scholar]
- Z. Qiu, W. Zhang: Trajectory planning and diagonal recurrent neural network vibration control of a flexible manipulator using structural light sensor, Mechanical Systems and Signal Processing 132 (2019) 563–594. [CrossRef] [Google Scholar]
- M. McCloskey, N. Cohen: Catastrophic interference in connectionist networks: the sequential learning problem, Psychology of Learning and Motivation 24 (1989) 109–165. [CrossRef] [Google Scholar]
- D.P. Kingma, J. Ba: Adam: a method for stochastic optimization, in: 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7–9, 2015. [Google Scholar]
Cite this article as: Pike X. & Cheer J. 2024. Generalised performance of neural network controllers for feedforward active noise control of nonlinear systems. Acta Acustica, 8, 51.