DOCTORAL DISSERTATION DEFENSE
A Theory of Natural Learning

Candidate: Alexander Botta
Advisor: Ernest Davis
2:00 p.m., Wednesday, May 1, 1991
Room 402, Warren Weaver Hall

Abstract

Unsupervised learning is based on capturing regularities in data. We formalize the vague notion of regularity using the concept of algorithmic information (Solomonoff, Chaitin, Koppel). We present a theory of how regularities are induced and accumulated. A generative model captures a regularity if it achieves compression. A basic regularity is a building block for hierarchical structures. We prove that a basic regularity may be identified as a local maximum in compressibility. Stepwise induction is a polynomial-time approach to structures whose basic components have bounded complexity.
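
To make the compressibility criterion concrete, here is a minimal sketch (our illustration, not the algorithm proved correct in the thesis) that uses zlib compression as a crude, computable stand-in for algorithmic information, and reports the prefix lengths at which compressibility peaks locally, the signature of a candidate basic regularity:

    import zlib

    def compressibility(data: bytes) -> float:
        # Ratio of raw length to compressed length; higher means more regular.
        return len(data) / len(zlib.compress(data, 9)) if data else 0.0

    def local_maxima(data: bytes, sizes: list[int]) -> list[int]:
        # Prefix lengths at which compressibility peaks relative to its
        # neighbors, i.e. candidate boundaries of a basic regularity.
        scores = [compressibility(data[:n]) for n in sizes]
        return [sizes[i] for i in range(1, len(sizes) - 1)
                if scores[i] > scores[i - 1] and scores[i] > scores[i + 1]]

    stream = b"abcabcabcabc" + b"q7#pT!x9" * 3   # a regular block, then noise
    print(local_maxima(stream, list(range(2, len(stream), 2))))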

Agents exploring a universe engage in active learning. The regularities of their sensory-motor streams are similar to Piaget's schemes and are the constituents of an induced ontology. We illustrate these ideas in three microworlds. The first consists of Moore automata, where state representations are constructed incrementally from the results of tests performed in a state and from the outputs perceived on the way to that state. The second world contains loosely coupled geometric objects, which are basic regularities identifiable by stepwise induction. In the third world the agent has an elaborate eye and can move objects on a tiled surface; statistical correlations between sets of stimuli are induced, and models are then constructed to generate instances of new correlations from already known ones.
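
As an illustration of the first microworld, the sketch below (a toy three-state machine of our own, not one from the dissertation) represents each state of a hidden Moore automaton by the outputs its test sequences produce; states A and B emit the same symbol, yet the accumulated test results separate them:

    # Hypothetical Moore automaton: transition table and per-state outputs.
    DELTA = {("A", 0): "B", ("A", 1): "A",
             ("B", 0): "A", ("B", 1): "C",
             ("C", 0): "C", ("C", 1): "B"}
    OUTPUT = {"A": "x", "B": "x", "C": "y"}   # A and B look alike at first

    def run_test(state: str, test: tuple[int, ...]) -> str:
        # Output string observed while executing `test` from `state`.
        out = OUTPUT[state]
        for action in test:
            state = DELTA[(state, action)]
            out += OUTPUT[state]
        return out

    TESTS = [(0,), (1,), (1, 1)]
    for s in "ABC":
        # The tuple of test results is the induced representation of s.
        print(s, [run_test(s, t) for t in TESTS])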

Algorithmic information theory allows a unified perspective on many areas of learning research. We define analysis as the separation of the novelty in data from the already known. We present explanation-based generalization (EBG) as a well-formalized instance of analysis, and constructive induction as an ill-defined one. We show that EBG specializes a theory through positive examples, and we prove that it is a language-independent method, valid beyond predicate-calculus representations.
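
In algorithmic-information terms, this definition of analysis can be phrased as a two-part description (our notation, not necessarily the dissertation's: K is prefix complexity, T the background theory, D the data):

    K(D) \le K(T) + K(D \mid T) + O(1)

Analysis separates D into the part accounted for by T and the conditional novelty K(D | T); it succeeds when the bound is nearly tight, so that T absorbs the regularity in the data and the residue is incompressible.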