Introduction

The ability to learn is a hallmark of intelligence. Humans rapidly and reliably learn many kinds of regularities and generalizations. Any learning theory must explain the search and representation biases that make fast and robust learning possible. We propose a model of incremental one-shot learning that exploits the properties of sparse representations and the constraints imposed by a plausible hardware mechanism.

Our system design reflects what you would expect of computer engineers: we think naturally in terms of buffers, bidirectional constraints, symbolic differences, and greedy learning algorithms. As you will see, each of these concepts came to play an important role in our processing and learning system and in our ultimate conclusions.

We demonstrate our learning model in the domain of morphophonology--the connection between the structure of words and their pronunciation. A key to fast learning here is that the phonemes actually used in a language are only a few of the possible phonemes, and in each language only a few of the possible combinations of phonemes may appear in words. We find that it is the sparseness of these spaces that makes the acquisition of regularities effective.
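As a toy illustration of this sparseness (the consonant inventory and the list of legal onsets below are simplified assumptions for illustration, not data from our model), one can count how few two-consonant word onsets English actually uses out of the combinatorially possible pairs:

```python
# Toy illustration of sparseness: legal English two-consonant onsets
# are a small subset of all ordered consonant pairs. The inventory
# and the legal set are simplified illustrative assumptions.
consonants = list("pbtdkgfvszmnlrwjh")  # a rough 17-consonant inventory

# A simplified sample of onsets that occur word-initially in English.
legal_onsets = {
    "pl", "pr", "bl", "br", "tr", "dr", "kl", "kr", "gl", "gr",
    "fl", "fr", "sl", "sm", "sn", "sp", "st", "sk", "sw", "tw",
}

possible = len(consonants) ** 2          # all ordered pairs: 289
fraction = len(legal_onsets) / possible  # fraction actually used
print(possible, len(legal_onsets), round(fraction, 3))  # → 289 20 0.069
```

Under these assumptions only about 7% of the pairs are used; a learner exploiting that sparseness searches a far smaller hypothesis space.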

The phonemes are equivalence classes of speech sounds that are distinguished by speakers of a particular language. A phoneme is a representation of a range of continuous signals as a discrete symbol. Although one can ``morph'' one speech sound into another by a continuous process, a listener will usually perceive each intermediate sound as one distinct phoneme or another: the phonemes are the psychoacoustic equivalent of digital values in a system implemented with continuous electrical voltages and currents.
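This discretization can be sketched as a simple quantizer. The sketch below maps a continuous acoustic parameter onto two phoneme categories; the particular parameter (voice-onset time) and the 25 ms boundary are illustrative assumptions, not part of our model:

```python
# Sketch of phonemes as equivalence classes of continuous signals:
# a continuous acoustic parameter (voice-onset time, in milliseconds)
# is perceived as one of two discrete phoneme symbols.
# The 25 ms category boundary is an illustrative assumption.
def perceive(vot_ms: float) -> str:
    """Map a continuous voice-onset time to a discrete phoneme symbol."""
    return "b" if vot_ms < 25.0 else "p"

# Morphing the signal continuously still yields discrete percepts,
# with an abrupt switch at the category boundary:
print([perceive(v) for v in (0, 10, 20, 30, 40)])  # → ['b', 'b', 'b', 'p', 'p']
```

Like a digital logic gate restoring a noisy voltage to 0 or 1, the quantizer discards within-category variation, which is what makes the symbolic level robust.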



Ken Yip
Tue Jan 7 21:53:31 EST 1997