Summary
Learning Classifier Systems have previously been shown to deduce the characteristics of complex multi-modal test environments to a suitable level of accuracy. In this study, an accuracy-based Learning Classifier System, XCS, is used. The system is able to induce a set of general rules from a sample of data points using a combination of Reinforcement Learning and a Genetic Algorithm. The investigation presented here builds on earlier work in this area by considering the application of a memetic approach during learning. The motivation is to identify whether gains in learning speed and classification performance can be made. The type of memetic learning used is based on Lamarckian Evolution but differs from the standard approach in several subtle ways. In particular, the Learning Classifier System is based on a Reinforcement Learning paradigm that has a dynamic effect on the fitness landscape. Moreover, the form of lifetime learning used is based on a Widrow-Hoff delta rule update procedure in which changes to an individual's genotypic description are based upon a distance measure between the individual and a "focal rule" (analogous to a local optimum in a standard Memetic Algorithm). In addition, no distinction is made between genotype and phenotype. Initial investigations focus on the effects on performance of three different learning rates and three different "focal rule" identification options in two different test environments: a two-dimensional environment and a decomposable six-dimensional environment. Results show that improvements can be made over a non-memetic approach. The study also considers the use of a self-adaptive learning mechanism. Self-adaptation has been suggested as beneficial for Genetic Algorithms, where the technique is usually used to adapt the mutation rate in a time-dependent and decentralised way.
However, the investigation of a self-adaptive learning mechanism presented here focuses on the benefits of adjusting the Widrow-Hoff learning rate used within the memetic-learning component of the system. The mechanism was applied to both test environments. Results show that it can provide a more robust learning system, both by reducing the number of system parameters and by improving generalisation and solution convergence. Further detailed analysis of experimental results for the decomposable six-dimensional test function is also performed; such analysis would be non-trivial for a non-decomposable six-dimensional function. The classification accuracy of several different versions of the system, including those with and without memetic or self-adaptive memetic learning, is analysed region by region, showing the effects of the new learning approach at a much greater level of detail. This analysis shows that the self-adaptive memetic version of the classifier system outperforms the non-adaptive and non-memetic versions in some of the regions.
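Self-adaptation of the Widrow-Hoff learning rate could take the following shape. This sketch assumes each rule carries its own learning rate and that offspring inherit a lognormally perturbed copy, the perturbation scheme commonly used for self-adaptive mutation rates in evolutionary algorithms; the update form, parameter names, and bounds are illustrative assumptions, not the chapter's exact mechanism.

```python
import math
import random

# Sketch: self-adaptation of a per-rule learning rate. When a rule is
# reproduced, its inherited rate is multiplied by a lognormal factor, so the
# rate evolves in a time-dependent and decentralised way alongside the rule
# itself, rather than being a fixed global system parameter.
def adapt_rate(beta, tau=0.1, lo=1e-4, hi=1.0):
    """Return a perturbed copy of a rule's learning rate, clamped to [lo, hi]."""
    return min(hi, max(lo, beta * math.exp(tau * random.gauss(0.0, 1.0))))

random.seed(1)
child_rates = [adapt_rate(0.2) for _ in range(5)]  # offspring rates vary around 0.2
```

Because each rule's rate drifts under selection, rules whose learning rates suit their region of the problem space tend to persist, which removes the need to hand-tune a single global rate.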
Keywords
- Classification Accuracy
- Identification Approach
- Test Environment
- Memetic Algorithm
- Learning Classifier System
© 2005 Springer-Verlag Berlin Heidelberg
Cite this chapter
Wyatt, D., Bull, L. (2005). A Memetic Learning Classifier System for Describing Continuous-Valued Problem Spaces. In: Hart, W.E., Smith, J.E., Krasnogor, N. (eds) Recent Advances in Memetic Algorithms. Studies in Fuzziness and Soft Computing, vol 166. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-32363-5_15
Print ISBN: 978-3-540-22904-9
Online ISBN: 978-3-540-32363-1