Models Of Learning Systems - Carnegie Mellon School Of .


From Encyclopedia of Computer Science and Technology, volume 11, pp. 24-51. Marcel Dekker, New York, NY, 1978.

MODELS OF LEARNING SYSTEMS

INTRODUCTION

Giving a machine the ability to learn, adapt, organize, or repair itself is among the oldest and most ambitious goals of computer science. In the early days of computing, these goals were central to the new discipline called cybernetics [2, 127]. Over the past two decades, progress toward these goals has come from a variety of fields, notably computer science, psychology, adaptive control theory, pattern recognition, and philosophy. Substantial progress has been made in developing techniques for machine learning in highly restricted environments. Computer programs have been written which can learn to play good checkers [93, 94], learn to filter out the strong heartbeat of a mother in order to pick out the weaker heartbeat of the fetus [125], and learn to predict the mass spectra of complex molecules [13]. Each of these programs, however, is tailored to its particular task, taking advantage of particular assumptions and characteristics associated with its domain. The search for efficient, powerful, and general methods for machine learning has come only a short way.

The terms adaptation, learning, concept-formation, induction, self-organization, and self-repair have all been used in the context of learning system (LS) research. The research has been conducted within many different scientific communities, however, and these terms have come to have a variety of meanings. It is therefore often difficult to recognize that problems which are described differently may in fact be identical. Learning system models as well are often tuned to the requirements of a particular discipline and are not suitable for application in related disciplines.

The term "learning system" is very broad, and often misleading.
In the context of this article, a learning system is considered to be any system which uses information obtained during one interaction with its environment to improve its performance during future interactions. This rough characterization may include man/machine systems (see Ref. 64) in which humans take on active roles as required functional components. In some systems there is continuous interaction with the environment, with feedback and subsequent improvement. In other systems there is a sharp distinction between the interactions that constitute training and subsequent performance or predictions with no further training. Another way of differentiating between various learning systems is on the basis of what kinds of alterations they perform.
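The characterization above can be sketched as a minimal interaction loop. The class and parameter names below are purely illustrative, not from the article; the sketch only shows the defining property that information from one interaction improves performance on later ones.

```python
class LearningSystem:
    """Minimal sketch: a system that uses information obtained during one
    interaction with its environment to improve future performance.
    All names and the update rule here are illustrative assumptions."""

    def __init__(self):
        self.estimate = 0.0  # internal model: a single alterable parameter

    def act(self, stimulus):
        # Performance element: respond using the current internal model.
        return self.estimate * stimulus

    def learn(self, stimulus, feedback):
        # Learning element: move the model toward the observed feedback.
        error = feedback - self.act(stimulus)
        self.estimate += 0.5 * error / stimulus if stimulus else 0.0

ls = LearningSystem()
for _ in range(20):          # repeated interactions with the environment
    ls.learn(2.0, 6.0)       # environment replies 6.0 to stimulus 2.0
print(round(ls.estimate, 3)) # the internal parameter converges toward 3.0
```

Each call to `learn` is one interaction; the altered parameter is what makes the next `act` better, which is the common thread running through all the system classes discussed below.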

FIG. 1. Spectrum of learning systems: data base management systems alter assertions; adaptive control systems alter parameters; concept formation systems alter structures.

Figure 1 shows several classes of systems that fit the above characterization, and lists the kinds of alterations which they perform. Data base systems are among the earliest kinds of systems which fit our definition. Such systems represent information about their environment by sets of alterable assertions. In the late 1950s and early 1960s, adaptive control techniques were first used to build programs which alter parameters in equations which model some aspect of the external world [93, 125]. The perceptrons of the early 1960s [71, 90] represent an attempt to use adaptive control techniques to train recognition networks by altering weighting parameters. More recently, concept formation (and other) systems have been written which build and alter structural representations as their model of the external world. In short, an important difference to be noted in LS's is in their internal representations of the outer environment: some are mathematical models, some are linguistic assertions, and still others are structures encoding symbolic relations.

In this article, three distinct approaches to machine learning and adaptation are considered:

1. The adaptive control approach
2. The pattern recognition approach
3. The artificial intelligence approach

Progress in each of these areas is summarized in the first part of the article. In the next part a general model for learning systems is presented which allows characterization and comparison of individual algorithms and programs in all of these areas. Specific examples of learning systems are described in terms of the model.

ADAPTIVE SYSTEM APPROACH TO LEARNING

In the control literature, learning is generally assumed to be synonymous with adaptation.
It is often viewed as estimation or successive approximation of the unknown parameters of a mathematical structure which is chosen by the LS designer to represent the system under study [18, 30]. Once this has been done, control techniques known to be suitable for the particular chosen structure can be applied. Thus the emphasis has been on parameter learning, and the achievement of stable, reliable performance [106]. Problems are commonly formulated in stochastic terms, and the use of statistical procedures to achieve optimal performance with respect to some performance criterion, such as mean square error, is standard [130].
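This view of learning as successive approximation of unknown parameters can be illustrated with a least-mean-squares style update, a standard stochastic-gradient scheme for minimizing mean square error. The linear structure, true parameter values, and step size below are illustrative choices, not taken from the article.

```python
import random

# Parameter learning sketch: estimate the unknown coefficients of an
# assumed linear structure y = a*x + b from noisy observations, using a
# stochastic gradient step on the instantaneous squared error.
random.seed(0)
true_a, true_b = 2.0, -1.0   # "plant" parameters, unknown to the learner
a, b = 0.0, 0.0              # initial estimates
mu = 0.05                    # adaptation step size

for _ in range(5000):
    x = random.uniform(-1.0, 1.0)
    y = true_a * x + true_b + random.gauss(0.0, 0.1)  # noisy observation
    error = y - (a * x + b)   # instantaneous prediction error
    a += mu * error * x       # gradient step on squared error w.r.t. a
    b += mu * error           # gradient step on squared error w.r.t. b

print(round(a, 2), round(b, 2))  # estimates approach 2.0 and -1.0
```

The structure (here, a line) is fixed in advance by the designer; only its parameters adapt, which is exactly the sense in which the control literature equates learning with adaptation.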

There are many overlapping and sometimes contradictory definitions of the terms related to adaptive systems. The following set, formulated by Glorioso [34], serves to illustrate the main features. An "adaptive system" is defined as a system which responds acceptably with respect to some performance criterion in the face of changes in the environment or its own internal structure. A "learning system" is an adaptive system that responds acceptably within some time interval following a change in its environment, and a "self-repairing system" is one that responds acceptably within some time interval following a change in its internal structure. Finally, a "self-organizing system" is an adaptive or learning system in which the initial state is unknown, random, or unimportant.

Adaptive control is an outgrowth of automatic control that has attracted significant research effort since the mid-1950s [3]. These investigations have been motivated by a desire for development of real-time control of incompletely known systems or "plants." Limited plant specification is normally assumed to entail unknown, drifting parameters in a prescribed mathematical description. Various methods of adaptive control have been implemented for control of aerospace and industrial processes, as well as man-machine and socioeconomic systems.

Adaptive controllers have been coarsely divided into two large classes of active and passive adaptivity [113]. "Active adaptive" controllers are based on dual control theory [25]. In addition to the available real-time information, they utilize the knowledge that future observations will be made which will provide further possible performance evaluation, and regulate their learning accordingly. "Passive adaptive" controllers utilize the available real-time measurements but ignore the availability of future observations.
This limitation results in much simpler adaptive algorithms. Thus passive techniques have been much more extensively investigated.

Passive Controllers

Passive adaptive controllers can be subdivided into two classes: indirect and direct, denoting the primary focus of the adaptation mechanism either on plant parameter determination or control parameter determination, respectively.

Indirect adaptive control, originally suggested in Ref. 51, arbitrarily separates the control task into plant identification and control law calculation from the plant parameter estimates. This approach was designed to utilize the existing arsenal of control techniques requiring exact specification of the plant. Acceptance of this method has led to considerable interest in system identification [4]. Most parameter estimation schemes, however, are inherently open loop and suffer consistency and identifiability constraints when encompassed by feedback. This limitation can be circumvented by the injection of a perturbation input [95].

The alternative, which avoids the necessity of proper plant identification, is direct adaptive control, in which the available control parameters themselves are adjusted in order to improve the overall performance of the control system. Two broad techniques exist for establishment of convergent control parameter adaptation schemes: search methods and stability analysis. Search techniques generally suffer local convergence, whether based on gradient [40] or heuristic [30] methods. Alternatively, adaptive control algorithms arising from stability analysis can guarantee global asymptotic stability as a by-product. The widest application of stability theory

to adaptive control design has utilized Liapunov's second method [63]. The earliest application of Liapunov function synthesis for designing adaptive loops [98] utilized a model reference approach.

Model reference adaptive control techniques (see example in the Appendix) implement adjustment of reachable parameters in the overall controlled system so that its response to some reference signal exactly matches that of a predetermined model due to the same reference. Such a structural arrangement, in general, requires the ability to adjust each parameter independently in the overall controlled system. Assumption of this capability hampers the current sophisticated schemes of adapting feedforward and feedback parameters solely from plant input and output measurements [58, 74] by occasionally necessitating an unbounded control effort. Control effort boundedness is encouraged by abandoning exact output matching for input matching [48], which requires nonparametric, a posteriori determination of the optimal input.

No single adaptive control approach mentioned is without limitations in attempting to provide adequate control of a plant known only to be describable within a general structural class. The primary focus of adaptive control on parameter selection has led to provably convergent single-level schemes. The ongoing merger of heuristic, layerable learning system concepts (as described below) with these convergent parameter adjustment algorithms of restricted applicability should improve the efficacy of adaptive control.

PATTERN RECOGNITION APPROACH TO LEARNING

Pattern recognition techniques are primarily employed at the interface of intelligent agents and the real world of physical measurements and processes. The interface attempts to provide some sensory capability to the agent, such as vision, touch, or some other nonhuman sensory modality.
In this context, a "pattern" may be an image, a spoken word, a radar return from an aircraft, or whatever is appropriate to describe or classify a physical environment that is viewed through a particular set of sensors.

The problem of pattern recognition is often viewed as the development of a set of rules which can be used to assign observed patterns to particular known classes by examination of a set of patterns of known class membership. There are, however, a variety of related problems that can be discussed in the same framework. These include pattern classification, in which the classification rules are known and the problem is simply assignment of patterns to classes; pattern formation, in which the classes themselves must be defined; and pattern description, in which the problem is to form descriptions (which are often symbolic in form) of the observed patterns, rather than assign them to classes.

The major concerns in pattern recognition are:

Convergence: the learning system should eventually settle on a stable set of rules, classes, or descriptions.

Optimality: the objective is minimization of some cost functional, such as the average risk associated with classification.

Computational complexity: the objective is minimization of the difficulty of using an algorithm, measured in terms of computation time, memory requirements, or programming complexity.

Pattern Recognition Subclasses

Pattern recognition is presently characterized by two major approaches. These are the statistical decision-theoretic or discriminant approach, which employs a classification model, and the linguistic (syntactic) or structural approach, which employs a description model. The first approach has been more extensively studied and a modestly large body of theory has been constructed, whereas the second approach is relatively new and many unsolved problems remain.

The decision-theoretic approach commonly involves the extraction of a set of characteristic (typically low-level) measurements, or "features," from a set of patterns. Each pattern is thus represented as a feature vector in a feature space, and the task of the pattern recognition device is to partition the feature space in such a way as to classify the individual patterns. Features, then, are usually chosen so that the "distance" (on some suitable metric) between patterns in the feature space is maximized [89]. This approach has been successful for applications such as communication of a known set of signal waveforms corrupted by some form of distortion, such as noise or multipath interference. However, it has been criticized because it is concerned only with statistical relationships between features, and tends to ignore other structural relationships which may characterize patterns [52].

The linguistic or structural approach has been developed in part to correct some of the difficulties seen in the decision-theoretic approach. With this paradigm, patterns are viewed as compositions of components, called subpatterns or pattern primitives, that are typically higher-level objects than the features of the decision-theoretic model.
Patterns are often viewed as sentences in a language defined by a formal grammar (sometimes called a pattern grammar). Segmentation of patterns into primitives and formation of structural descriptions are thus the primary issues. This approach embodies an attempt to use other sources of information as aids to pattern recognition.
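The idea of patterns as sentences over primitives can be sketched with a toy pattern grammar. The primitive alphabet and the "peak" grammar below are invented for illustration; they are not from the article, which leaves the choice of primitives and grammar to the application.

```python
import re

# Linguistic pattern recognition sketch: a pattern is segmented into
# primitive symbols ('u' = upstroke, 'd' = downstroke) and then tested
# against a pattern grammar. This grammar accepts one or more upstrokes
# followed by one or more downstrokes, i.e. a "peak" contour. The
# regular expression stands in for a more general formal grammar.
PEAK = re.compile(r"^u+d+$")

def classify(primitives):
    """Return a structural description of a string of primitives."""
    return "peak" if PEAK.match(primitives) else "not a peak"

print(classify("uuudd"))  # rising-then-falling contour
print(classify("udud"))   # oscillation does not parse as a peak
```

The classifier here answers a membership question, but the same machinery extends naturally to description: a parse of the sentence yields the structural relationships among subpatterns that the decision-theoretic approach discards.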