Abstract
Autoencoders have become increasingly popular in anomaly detection tasks over the years. Nevertheless, training autoencoders properly for anomaly detection remains a challenge. A key contributing factor to this problem in many applications is the absence of a clean dataset from which the normal case can be learned. Instead, autoencoders must be trained on a contaminated dataset containing an unknown number of anomalies that can harm the training process. In this paper, we address this problem by studying the impact of the loss function on the robustness of an autoencoder. It is common practice to train an autoencoder by minimizing a loss function (e.g., the squared error loss) under the assumption that all features are equally important to reconstruct well. We relax this assumption and introduce a new loss function that adapts its robustness to anomalies based on the characteristics of the data, on a per-feature basis. Experimental results show that an autoencoder can be trained robustly with this loss function even when the training data contain many anomalies.
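To make the idea concrete, the sketch below illustrates one way such a per-feature robust loss could look. It is a minimal, hypothetical rendering, not the paper's exact formulation: it combines the Huber loss, whose threshold separates a squared-error (inlier) region from a linear (outlier) region, with a per-feature threshold estimated from the data via the median absolute deviation of the residuals. The function names `per_feature_huber_loss` and `estimate_delta` and the scaling constant `k` are illustrative assumptions.

```python
import numpy as np

def per_feature_huber_loss(x, x_hat, delta):
    """Huber loss applied per feature with feature-specific thresholds.

    x, x_hat : arrays of shape (batch, n_features)
    delta    : array of shape (n_features,), robustness threshold per feature
    """
    r = x - x_hat                            # reconstruction residuals
    abs_r = np.abs(r)
    quadratic = 0.5 * r ** 2                 # squared-error region (likely inliers)
    linear = delta * (abs_r - 0.5 * delta)   # linear region (suspected anomalies)
    return np.where(abs_r <= delta, quadratic, linear).mean()

def estimate_delta(x, x_hat, k=1.5):
    """Adapt the threshold to the data: a robust per-feature scale estimate
    (median absolute deviation) of the current residuals. k is an
    illustrative tuning constant, not a value taken from the paper."""
    r = x - x_hat
    mad = np.median(np.abs(r - np.median(r, axis=0)), axis=0)
    return k * 1.4826 * mad  # 1.4826 makes MAD consistent with a Gaussian sigma

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=(64, 10))
    x_hat = x + rng.normal(scale=0.1, size=x.shape)  # stand-in for a decoder output
    delta = estimate_delta(x, x_hat)
    print(per_feature_huber_loss(x, x_hat, delta))
```

Because the linear region caps each feature's contribution to the gradient, features with large residuals (likely caused by anomalies in the contaminated training set) pull less on the autoencoder's weights than they would under a plain squared-error loss.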
Acknowledgements
I thank Hennie Daniels and the two anonymous reviewers for their feedback on an early draft of this paper.
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Triepels, R. (2021). Anomaly Detection by Robust Feature Reconstruction. In: Iliadis, L., Macintyre, J., Jayne, C., Pimenidis, E. (eds) Proceedings of the 22nd Engineering Applications of Neural Networks Conference. EANN 2021. Proceedings of the International Neural Networks Society, vol 3. Springer, Cham. https://doi.org/10.1007/978-3-030-80568-5_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-80567-8
Online ISBN: 978-3-030-80568-5