- Aims:
On completion of this course the students should have acquired the fundamental ideas,
algorithms and techniques of Symbolic Machine Learning (excluding Neural Network Learning
and Genetic algorithms), i.e. the Artificial Intelligence approach to ML. They should be
able to read and understand specialised ML literature, use existing ML systems and
implement simple ML algorithms.
- Learning strategies:
Tutoring and computer exercises with ML algorithms and systems.
- Overall duration and format:
One semester (15 weeks) with 2 hours of seminars and 1 hour of lab work per week.
- Credit hours:
2
- Lecturer:
Zdravko Markov - Assoc. Prof., Institute of Information Technologies -
Bulgarian Academy of Sciences
- Literature:
Markov, Z.,
Inductive Methods for Machine Learning, TEMPUS JEN 1497 & SOFTEX, Sofia,
1996 (in Bulgarian).
Gennari, J.H., Langley, P., Fisher, D.,
Models of Incremental Concept Formation, Artificial Intelligence, Vol. 40, pp.
11-61, 1989.
Genesereth, M. and N. Nilsson,
Logical Foundations of Artificial Intelligence, Morgan Kaufmann, 1987.
Luger,G.F. and W.A. Stubblefield,
Artificial Intelligence: structures and strategies for complex problem solving
(Second edition), The Benjamin/Cummings Publishing Company, Inc., 1993.
Michalski, R. and R. Stepp.,
Learning from observation: conceptual clustering, in: Michalski, Carbonell and
Mitchell (eds.), Machine Learning: Artificial Intelligence Approach, Vol.1, Tioga, 1983.
Michalski, R., J. Carbonell and T. Mitchell,
Machine Learning - Artificial Intelligence Approach (Volume 2), Morgan Kaufmann,
1986.
Mitchell, T.,
Generalization as Search, Artificial Intelligence 18, 1982.
Mitchell, T. M., Keller, R. M. and Kedar-Cabelli, S. T.,
Explanation-Based Generalization: A Unifying View, Machine Learning 1, pp 47-80,
Kluwer, Boston, 1986.
Mitchell, T. M., Utgoff, P. E. and Banerji, R.,
Learning by Experimentation: Acquiring and Refining Problem-Solving Heuristics,
in Michalski, Carbonell and Mitchell (eds.), Machine Learning: An Artificial Intelligence
Approach, Vol. 1, Tioga Publishing Company, 1983, pp 163-190.
Muggleton, S.,
Inductive Logic Programming, New Generation Computing, Vol.8, 1991, 295-318.
Muggleton, S. (ed.),
Inductive Logic Programming, Academic Press, 1992.
Quinlan, J.R.,
Learning logical definitions from relations, Machine Learning, Vol.5, 1990,
239-266.
Quinlan, J.R.,
Induction of Decision Trees, Machine Learning, Vol.1, No.1, 1986.
- Course outline:
The course gives a broad overview of ML approaches and then focuses on some of them,
mostly within the inductive paradigm. Some basic algorithms for unsupervised learning are
also discussed. The inductive learning approaches are emphasised since they are natural
extensions and applications of classical AI techniques.
- Main Topics:
The material is organised in 7 topics, each discussing a typical ML problem, the basic
techniques used for its solution and a well-known algorithm or system related to it. The
topics are accompanied by a number of exercises and a selected bibliography. They are as
follows:
Topic 1: Basic concepts, Learning semantic networks. The basic paradigms,
strategies and techniques of ML are introduced. The problem of learning semantic network
descriptions from examples is discussed in detail, further illustrating the basic concepts
of inductive learning.
Required reading:
- Markov, Z. Inductive Methods for Machine Learning, TEMPUS JEN 1497 & SOFTEX, Sofia,
1996 (in Bulgarian), chapters 1 and 2.
Topic 2: Inductive Learning, Version space. The induction problem is defined
formally and further illustrated by describing the classical candidate elimination
algorithm for version space search. A simple attribute-value language, which is used in
the following topics, is introduced.
Required reading:
- Markov, Z. Inductive Methods for Machine Learning, TEMPUS JEN 1497 & SOFTEX, Sofia,
1996 (in Bulgarian), chapter 3.
- Genesereth, M. and N. Nilsson. Logical Foundations of Artificial Intelligence, Morgan
Kaufmann, 1987, chapter 3.
- Luger,G.F. and W.A. Stubblefield. Artificial Intelligence: structures and strategies for
complex problem solving (Second edition). The Benjamin/Cummings Publishing Company, Inc.,
1993, chapter 3.
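The version-space idea behind candidate elimination can be shown in a short sketch. The following is an illustrative Python fragment (the course exercises themselves use Prolog), assuming a toy conjunctive attribute-value language in which '?' matches any value and None marks the maximally specific "covers nothing" hypothesis:

```python
def covers(h, x):
    """Does hypothesis h cover instance x?"""
    return all(a == '?' or a == v for a, v in zip(h, x))

def more_general(h1, h2):
    """Is h1 at least as general as h2?"""
    return all(a == '?' or a == b for a, b in zip(h1, h2))

def min_generalize(h, x):
    """Minimal generalization of h that covers the positive instance x."""
    return tuple(v if a is None else (a if a == v else '?')
                 for a, v in zip(h, x))

def min_specialize(h, x, domains):
    """Minimal specializations of h that exclude the negative instance x."""
    return [h[:i] + (v,) + h[i + 1:]
            for i, a in enumerate(h) if a == '?'
            for v in domains[i] if v != x[i]]

def candidate_elimination(examples, domains):
    """examples: list of (instance_tuple, is_positive) pairs.
    Returns the S (most specific) and G (most general) boundary sets."""
    n = len(domains)
    S = {(None,) * n}      # most specific boundary
    G = {('?',) * n}       # most general boundary
    for x, positive in examples:
        if positive:
            G = {g for g in G if covers(g, x)}
            S = {min_generalize(s, x) for s in S}
            S = {s for s in S if any(more_general(g, s) for g in G)}
        else:
            S = {s for s in S if not covers(s, x)}
            newG = set()
            for g in G:
                if covers(g, x):
                    newG |= {g2 for g2 in min_specialize(g, x, domains)
                             if any(more_general(g2, s) for s in S)}
                else:
                    newG.add(g)
            # keep only maximally general hypotheses
            G = {g for g in newG
                 if not any(g2 != g and more_general(g2, g) for g2 in newG)}
    return S, G
```

For instance, with domains [('sunny', 'rainy'), ('warm', 'cold')], a positive example ('sunny', 'warm') and a negative example ('rainy', 'cold') leave S = {('sunny', 'warm')} and G = {('sunny', '?'), ('?', 'warm')}.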
Topic 3: Induction of Decision Trees. The decision trees as a hypothesis language
are introduced. The basic algorithm from the ID3 family is described and illustrated with
examples.
Required reading:
- Markov, Z. Inductive Methods for Machine Learning, TEMPUS JEN 1497 & SOFTEX, Sofia,
1996 (in Bulgarian), chapters 5, 6 and 7.
- Quinlan, J.R. Induction of Decision Trees, Machine Learning, Vol.1, No.1, 1986.
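The core of the ID3 family - choosing at each node the attribute with the highest information gain - can be sketched as follows. This is an illustrative Python fragment (the course exercises use Prolog); instances are dicts of attribute-value pairs, and the resulting tree is either a class label or a nested dict:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction obtained by splitting on attr."""
    n = len(labels)
    remainder = 0.0
    for v in set(r[attr] for r in rows):
        sub = [l for r, l in zip(rows, labels) if r[attr] == v]
        remainder += len(sub) / n * entropy(sub)
    return entropy(labels) - remainder

def id3(rows, labels, attrs):
    """Build a decision tree: a class label at the leaves, otherwise
    {'attr': a, 'branches': {value: subtree}}."""
    if len(set(labels)) == 1:
        return labels[0]
    if not attrs:
        return Counter(labels).most_common(1)[0][0]
    a = max(attrs, key=lambda at: information_gain(rows, labels, at))
    branches = {}
    for v in set(r[a] for r in rows):
        pairs = [(r, l) for r, l in zip(rows, labels) if r[a] == v]
        branches[v] = id3([r for r, _ in pairs], [l for _, l in pairs],
                          [b for b in attrs if b != a])
    return {'attr': a, 'branches': branches}

def classify(tree, row):
    """Follow the branches of a learned tree down to a class label."""
    while isinstance(tree, dict):
        tree = tree['branches'][row[tree['attr']]]
    return tree
```

On a toy weather-style dataset where the class depends only on the outlook, the algorithm picks 'outlook' as the root and ignores the irrelevant 'wind' attribute.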
Topic 4: Conceptual clustering. The basic ideas of unsupervised learning are
presented. Two well-known algorithms for conceptual clustering are discussed - CLUSTER/2
and COBWEB.
Required reading:
- Gennari, J.H., Langley, P., Fisher, D. Models of Incremental Concept Formation,
Artificial Intelligence, Vol. 40, pp. 11-61, 1989.
- Luger,G.F. and W.A. Stubblefield. Artificial Intelligence: structures and strategies for
complex problem solving (Second edition). The Benjamin/Cummings Publishing Company, Inc.,
1993, chapter 12.
- Michalski, R. and R. Stepp. Learning from observation: conceptual clustering, in:
Michalski, Carbonell and Mitchell (eds.), Machine Learning: Artificial Intelligence
Approach, Vol.1, Tioga, 1983.
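The heuristic that drives COBWEB - category utility, which rewards partitions whose clusters make attribute values predictable - can be illustrated with a short sketch. The following Python fragment (illustrative only; the course exercises use Prolog) implements category utility for nominal attributes and a greatly simplified incremental placement loop, omitting COBWEB's hierarchy and merge/split operators:

```python
from collections import Counter

def category_utility(partition, instances):
    """Category utility of a partition (a list of clusters, each a list of
    attribute-value dicts) relative to all instances seen so far."""
    n = len(instances)
    attrs = list(instances[0].keys())
    def expected_correct(insts):
        # sum over attributes of sum_v P(A = v)^2 within insts
        s = 0.0
        for a in attrs:
            counts = Counter(inst[a] for inst in insts)
            s += sum((c / len(insts)) ** 2 for c in counts.values())
        return s
    base = expected_correct(instances)
    gain = sum(len(c) / n * (expected_correct(c) - base) for c in partition)
    return gain / len(partition)

def incremental_cluster(instances):
    """Greedy COBWEB-style clustering: place each new instance into the
    existing cluster (or a fresh one) that maximizes category utility."""
    clusters, seen = [], []
    for inst in instances:
        seen.append(inst)
        candidates = [[c + [inst] if i == j else c
                       for i, c in enumerate(clusters)]
                      for j in range(len(clusters))]
        candidates.append(clusters + [[inst]])   # option: start a new cluster
        clusters = max(candidates, key=lambda p: category_utility(p, seen))
    return clusters
```

On a toy stream of red circles and blue squares, the greedy loop separates the two kinds into two clusters, since mixing them lowers the within-cluster predictability of both attributes.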
Topic 5: Explanation-based learning (EBL). The basic ideas of learning in the
presence of a domain theory are presented. The advantages and drawbacks of EBL are
discussed in connection with the traditional inductive (similarity-based) approaches.
Required reading:
- Luger,G.F. and W.A. Stubblefield. Artificial Intelligence: structures and strategies for
complex problem solving (Second edition). The Benjamin/Cummings Publishing Company, Inc.,
1993, chapter 12.
- Mitchell, T. M., Keller, R. M. and Kedar-Cabelli, S. T. Explanation-Based
Generalization: A Unifying View, Machine Learning 1, pp 47-80, Kluwer, Boston, 1986.
Topic 6: Occam's Razor, Bayes Learning and Minimal Description Length Principle.
The basic ideas and techniques within the framework of the complexity-based approaches to
ML are presented.
Required reading:
- Markov, Z. Inductive Methods for Machine Learning, TEMPUS JEN 1497 & SOFTEX, Sofia,
1996 (in Bulgarian), chapter 10.
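The Bayesian view underlying this topic - choosing the hypothesis H maximizing P(H)P(D|H), which equals minimizing the description length -log P(H) - log P(D|H) - can be illustrated with a naive Bayes classifier. This is an illustrative Python sketch (the course exercises use Prolog), using Laplace smoothing over the values observed for each class and attribute:

```python
import math
from collections import Counter, defaultdict

def train_nb(rows, labels):
    """Count class frequencies and per-class attribute-value frequencies."""
    class_counts = Counter(labels)
    value_counts = defaultdict(Counter)   # (class, attribute) -> value counts
    for row, y in zip(rows, labels):
        for a, v in row.items():
            value_counts[(y, a)][v] += 1
    return class_counts, value_counts

def predict_nb(model, row):
    """MAP class: argmax_y log P(y) + sum_a log P(a = v | y), with Laplace
    smoothing (one extra count for each observed value plus one unseen)."""
    class_counts, value_counts = model
    n = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for y, cy in class_counts.items():
        lp = math.log(cy / n)                       # log prior
        for a, v in row.items():
            counts = value_counts[(y, a)]
            lp += math.log((counts[v] + 1) / (cy + len(counts) + 1))
        if lp > best_lp:
            best, best_lp = y, lp
    return best
```

Working in log space turns the product of probabilities into a sum, which makes the correspondence with summed description lengths (in bits, up to the base of the logarithm) explicit.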
Topic 7: Inductive Logic Programming (ILP). The recently emerged area of Inductive
Logic Programming is discussed in the framework of the classical induction problem. The
basic ILP techniques, such as clause subsumption, inverse resolution and predicate
invention are described. The ILP strategies are illustrated with examples.
Required reading:
- Markov, Z. Inductive Methods for Machine Learning, TEMPUS JEN 1497 & SOFTEX, Sofia,
1996 (in Bulgarian), chapters 8 and 9.
- Muggleton, S. Inductive Logic Programming, New Generation Computing, Vol.8, 1991,
295-318.
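Clause subsumption, the ordering on which many ILP techniques rest, can be sketched compactly: a clause C1 theta-subsumes C2 if some substitution theta makes C1·theta a subset of C2. The following illustrative Python fragment (the course exercises use Prolog) checks this by backtracking search, representing a literal as a tuple of a predicate symbol and its arguments, with capitalised strings as variables; C2 is assumed ground for simplicity:

```python
def is_var(term):
    """Capitalised strings stand for logic variables, as in Prolog."""
    return isinstance(term, str) and term[:1].isupper()

def match(lit1, lit2, sub):
    """Try to extend substitution sub so that lit1 . sub == lit2
    (lit2 is assumed ground). Return the extended sub or None."""
    if len(lit1) != len(lit2) or lit1[0] != lit2[0]:
        return None
    sub = dict(sub)
    for a, b in zip(lit1[1:], lit2[1:]):
        if is_var(a):
            if a in sub and sub[a] != b:
                return None
            sub[a] = b
        elif a != b:
            return None
    return sub

def subsumes(c1, c2, sub=None):
    """True if clause c1 theta-subsumes clause c2, i.e. some substitution
    maps every literal of c1 onto a literal of c2."""
    if sub is None:
        sub = {}
    if not c1:
        return True
    head, rest = c1[0], c1[1:]
    for lit in c2:
        ext = match(head, lit, sub)
        if ext is not None and subsumes(rest, c2, ext):
            return True
    return False
```

For example, the clause body [('parent', 'X', 'Y')] subsumes [('parent', 'ann', 'bob'), ('female', 'ann')] under X = ann, Y = bob, while [('parent', 'X', 'X')] does not, since no single binding of X fits both argument positions.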
- Assessment:
Based on answering questions and solving illustrative problems.
- Prerequisites:
The material is based on some fundamental notions of First Order Logic and Logic
Programming (predicate calculus, resolution etc.) which are common to any AI course. The
exercises are based mainly on Prolog. Therefore it is desirable to take a general AI
course and a Prolog programming course before attending the ML course.