Abstract
This paper discusses approaches to noise-resistant training of MLP neural networks. We present various aspects of the issue and ways of achieving that goal using two groups of approaches, as well as combinations of them. The first group is based on processing each vector differently depending on the likelihood of the vector being an outlier; the likelihood is determined by instance selection and outlier detection. The second group is based on training MLP neural networks with non-differentiable robust objective functions. We evaluate the performance of the particular methods at different levels of noise in the data for regression problems.
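The two groups of approaches can be illustrated with a minimal sketch. The code below is not the paper's implementation; it assumes NumPy, and the function names, the example residuals, and the weight values are purely illustrative. The first group is represented by a per-instance weighting of the error (weights would come from instance selection or outlier detection), the second by a Least Trimmed Squares objective, a classic non-differentiable robust error measure:

```python
import numpy as np

def mse_loss(r):
    # Standard mean squared error: a single large residual
    # from a noisy target can dominate the whole sum.
    return np.mean(r ** 2)

def weighted_loss(r, w):
    # First group (sketch): each vector's contribution is scaled
    # by a weight w in [0, 1] reflecting how unlikely it is to be
    # an outlier, as judged by instance selection / outlier detection.
    return np.sum(w * r ** 2) / np.sum(w)

def lts_loss(r, h=None):
    # Second group (sketch): Least Trimmed Squares keeps only the
    # h smallest squared residuals and discards the rest, so
    # presumed outliers are excluded entirely. The objective is
    # non-differentiable where the trimmed set changes.
    r2 = np.sort(r ** 2)
    if h is None:
        h = int(0.75 * len(r2))  # trimming fraction is a design choice
    return np.mean(r2[:h])

# Toy residuals: the last one comes from a noisy (outlier) target.
residuals = np.array([0.1, -0.2, 0.05, 5.0])
weights   = np.array([1.0, 1.0, 1.0, 0.1])  # outlier strongly down-weighted
```

With these residuals, both robust objectives report a far smaller error than plain MSE, which is the intended effect: the network is not pulled toward the corrupted target.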
Copyright information
© 2014 Springer International Publishing Switzerland
Cite this paper
Rusiecki, A., Kordos, M., Kamiński, T., Greń, K. (2014). Training Neural Networks on Noisy Data. In: Rutkowski, L., Korytkowski, M., Scherer, R., Tadeusiewicz, R., Zadeh, L.A., Zurada, J.M. (eds) Artificial Intelligence and Soft Computing. ICAISC 2014. Lecture Notes in Computer Science(), vol 8467. Springer, Cham. https://doi.org/10.1007/978-3-319-07173-2_13
Print ISBN: 978-3-319-07172-5
Online ISBN: 978-3-319-07173-2