analogies
.NP
interestingness.
.LE
Heuristics are organized around the facets.
For example, the following strategy fits into the \fIexamples\fR facet
of the \fIpredicate\fR concept: \c
.sp
.BQ
If, empirically, 10 times as many elements
.ul
fail
some predicate P as
.ul
satisfy
it, then some
.ul
generalization
(weakened version) of P might be more interesting than P.
.FQ
.sp
\s-2AM\s+2 considers this suggestion after trying to fill in examples of each
predicate.
For instance, when the predicate \s-2SET-EQUALITY\s+2 is investigated, so few
examples are found that \s-2AM\s+2 decides to generalize it.
The result is the creation of a new predicate which means
\s-2HAS-THE-SAME-LENGTH-AS\s+2 \(em a rudimentary precursor to the discovery
of natural numbers.
.pp
In an unusual and insightful retrospective on these programs,
Lenat & Brown (1984) report that the exploration consists of (mere?) syntactic
mutation of programs expressed in certain representations.
.[
Lenat Brown 1984
.]
The key element of the approach is to find representations with a high
density of interesting concepts so that many of the random mutations will be
worth exploring.
If the representation is not well matched to the problem domain, most
explorations will be fruitless and the method will fail.
.pp
While the conceptual dependency research reviewed above is concerned with
understanding the goals of actors in stories given to a program, the approach
taken seems equally suited to the construction of artificial goal-oriented
systems.
If a program could really understand or empathize with the motives of people,
it seems a small technical step to turn it around to create an autonomous
simulation with the same motivational structure.
Indeed, one application of the conceptual dependency framework is in
\fIgenerating\fR coherent stories by inventing goals for the actors, choosing
appropriate plans, and simulating the frustration or achievement of the goals
(Meehan, 1977).
.[
Meehan 1977 talespin
.]
The ``learning by discovery'' research shows how plausible subgoals can be
generated from an overall goal of maximizing the interestingness of
the concepts being developed.
It is worth noting that Andreae (1977) chose a similar idea, ``novelty,''
as the driving force behind a very different learning system.
.[
Andreae 1977 thinking with the teachable machine
.]
Random mutation in an appropriate representation seems to be the closest we
have come so far to the \fImotive generator\fR mentioned at the beginning of
this section.
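.pp
To make the flavor of this search concrete, here is a minimal sketch of the
generalization heuristic quoted above.
It is an illustration only, not \s-2AM\s+2's actual mechanism: the code is
Python rather than \s-2AM\s+2's own representation, the example pairs are
generated at random, and the toy ``interestingness'' score \(em how balanced
a predicate's truth values are over the examples \(em is an assumption
invented for this sketch.
.sp
.nf
import random

def set_equality(a, b):
    """The original predicate: do two lists contain the same elements?"""
    return set(a) == set(b)

def has_the_same_length_as(a, b):
    """A weakened (generalized) version of set equality."""
    return len(a) == len(b)

def interestingness(pred, pairs):
    """Toy score: a predicate that almost never (or always) holds is
    dull; one whose truth values are balanced is more interesting."""
    hits = sum(1 for a, b in pairs if pred(a, b))
    return min(hits, len(pairs) - hits)

# Random example pairs, standing in for AM filling the examples facet.
pairs = [(random.sample(range(9), random.randint(1, 4)),
          random.sample(range(9), random.randint(1, 4)))
         for _ in range(200)]

hits = sum(1 for a, b in pairs if set_equality(a, b))
fails = len(pairs) - hits

# The quoted heuristic: if ten times as many examples fail P as
# satisfy it, a weakened version of P may be more interesting.
if fails >= 10 * hits:
    print("SET-EQUALITY scores", interestingness(set_equality, pairs))
    print("HAS-THE-SAME-LENGTH-AS scores",
          interestingness(has_the_same_length_as, pairs))
.fi
.sp
On a typical run \s-2SET-EQUALITY\s+2 holds so rarely that the heuristic
fires and the weakened predicate scores far higher \(em mirroring the
circumstance in which \s-2AM\s+2 creates the new concept.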
.rh "The mechanism and psychology of natural goal-seeking."
Now turn to natural systems.
The objection to the above-described use of goals in natural language
understanders and discovery programs is that they are just programmed in.
The computer only does what it is told.
In the first case, it is told a classification of goals and given
information about their interrelationships, suitable plans for achieving them,
and so on.
In the second case it is told to maximize interestingness by random
mutation.
On the surface, these seem to be a pale reflection of the autonomous
self-government of natural systems.
But let us now look at how goals seem to arise in natural systems.
.pp
The eminent British anatomist J.Z.\ Young describes the modern biologist's
highly mechanistic view of the basic needs of animals.
.[
Young 1978 programs of the brain
.]
``Biologists no longer believe that living depends upon some special
non-physical agency or spirit,'' he avers (Young, 1978, p.\ 13), and goes on
to claim that we now understand how it comes about that organisms behave as if
all their actions were directed towards an aim or goal\u3\d.
.FN
3.\ \ Others apparently tend to be more reticent \(em
``it has been curiously unfashionable among biologists to call attention to
this characteristic of living things'' (Young, 1978, p.\ 16).
.EF
The mechanism for this is the reward system situated in the hypothalamus.
For example, the cells of the hypothalamus ensure that the right amount of
food and drink is taken and the right amount is incorporated to allow the
body to grow to its proper size.
These hypothalamic centers stimulate the need for what is lacking, for
instance food, sex, or sleep, and they indicate satisfaction when enough
has been obtained.
Moreover, the mechanism has been traced to a startling level of detail.
For example, Young describes how hypothalamic cells can be
identified which regulate the amount of water in the body.
.sp
.BQ
The setting of the level of their sensitivity to salt provides the
instruction that determines the quantity of water that is held in the body.
We can say that the properties of these cells are physical symbols
``representing'' the required water content.
They do this in fact by actually swelling or shrinking when the salt
concentration of the blood changes.
.FQ "Young, 1978, p.\ 135"
.sp
Food intake is regulated in the same way.
The hypothalamus ensures propagation of the species by directing reproductive
behavior and, along with neighboring regions of the brain, attends to the goal
of self-preservation by allowing us to defend ourselves if attacked.
.pp
Needless to say, experimental evidence for this is obtained primarily from
animals.
Do people's goals differ?
The humanistic psychologist Abraham Maslow propounded a theory of human
motivation that distinguishes between different kinds of needs (Maslow, 1954).
.[
Maslow 1954
.]
\fIBasic needs\fR include hunger, affection, security, love, and self-esteem.
\fIMetaneeds\fR include justice, goodness, beauty, order, and unity.
Basic needs are arranged in a hierarchical order so that some are stronger
than others (e.g.\ security over love); but all are generally stronger than
metaneeds.
The metaneeds have equal value and no hierarchy, and one can be substituted
for another.
Like the basic needs, the metaneeds are inherent in man, and when they are not
fulfilled, the person may become psychologically sick (suffering, for example,
from alienation, anguish, apathy, or cynicism).
.pp
In his later writing, Maslow (1968) talks of a ``single ultimate value for
mankind, a far goal towards which all men strive''.
Although going under different names (Maslow favors \fIself-actualization\fR),
it amounts to ``realizing the potentialities of the person, that is to say,
becoming fully human, everything that the person \fIcan\fR become''.
However, the person does not know this.
As far as he is concerned, the individual needs are the driving force.
He does not know in advance that he will strive on after the current need
has been satisfied.
Maslow produced the list of personality characteristics of the psychologically
healthy person shown in Table\ 1.
.RF
.in 0.5i
.ll -0.5i
.nr x0 \n(.l-\n(.i
\l'\n(x0u'
.in +\w'\(bu 'u
.fi
.NP
They are realistically oriented.
.NP
They accept themselves, other people, and the natural world for what they are.
.NP
They have a great deal of spontaneity.
.NP
They are problem-centered rather than self-centered.
.NP
They have an air of detachment and a need for privacy.
.NP
They are autonomous and independent.
.NP
Their appreciation of people and things is fresh rather than stereotyped.
.NP
Most of them have had profound mystical or spiritual experiences although not
necessarily religious in character.
.NP
They identify with mankind.
.NP
Their intimate relationships with a few specially loved people tend to be
profound and deeply emotional rather than superficial.
.NP
Their values and attitudes are democratic.
.NP
They do not confuse means with ends.
.NP
Their sense of humor is philosophical rather than hostile.
.NP
They have a great fund of creativeness.
.NP
They resist conformity to the culture.
.NP
They transcend the environment rather than just coping with it.
.nf
.in -\w'\(bu 'u
\l'\n(x0u'
.ll +1i
.in 0
.FE "Table 1: Characteristics of self-actualized persons (Maslow, 1954)"
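.pp
The hierarchy just described has a direct mechanical reading, which bears on
the question, taken up below, of how such goals differ from programmed ones.
The following sketch of a Maslow-style motive generator is an invention for
illustration \(em the need names are placeholders, and the selection rule is
only what the text above states: the strongest unsatisfied basic need
dominates, metaneeds are interchangeable, and satisfying one need merely
promotes the next.
.sp
.nf
import random
from dataclasses import dataclass

@dataclass
class Need:
    name: str
    rank: int          # lower rank = stronger (more prepotent) need
    satisfied: bool

# Placeholder instances of the needs named in the text.
basic = [Need("security", 0, False), Need("love", 1, False),
         Need("self-esteem", 2, False)]
metaneeds = [Need("justice", 99, False), Need("beauty", 99, False)]

def next_goal():
    """The strongest unsatisfied basic need wins; metaneeds, having
    no hierarchy, are chosen interchangeably."""
    pending = [n for n in basic if not n.satisfied]
    if pending:
        return min(pending, key=lambda n: n.rank)
    pending = [n for n in metaneeds if not n.satisfied]
    return random.choice(pending) if pending else None

# The person strives on: each satisfied need promotes another.
goal = next_goal()
while goal is not None:
    print("pursuing", goal.name)
    goal.satisfied = True
    goal = next_goal()
.fi
.sp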
.pp
Maslow's \fIbasic needs\fR seem to correspond reasonably closely with those
identified by conceptual dependency theory.
Moreover, there is some similarity to the goals mentioned by Young (1978),
which, as we have seen, are thought to be ``programmed in'' to the brain in an
astonishingly literal sense.
Consequently it is not clear how programs in which these goals are embedded
differ in principle from goal-oriented systems in nature.
The \fImetaneeds\fR are more remote from current computer systems,
although there have been shallow attempts to simulate paranoia in the
\s-2PARRY\s+2 system (Colby, 1973).
.[
Colby 1973 simulations of belief systems
.]
It is intriguing to read Table\ 1 in the context of self-actualized computers!
Moreover, one marvels at the similarity between the single-highest-goal model
of people in terms of self-actualization, and the architecture for discovery
programs sketched earlier in terms of a quest for ``interestingness''.
.rh "The sceptical view."
The philosopher John Haugeland addressed the problem of natural language
understanding and summed up his viewpoint in the memorable aphorism, ``the
trouble with Artificial Intelligence is that computers don't give a damn''
(Haugeland, 1979).
.[
Haugeland 1979 understanding natural language
.]
He identified four different ways in which brief segments of text cannot be
understood ``in isolation'', which he called four \fIholisms\fR.
Two of these, concerning \fIcommon-sense knowledge\fR and
\fIsituational knowledge\fR,
are the subject of intensive research in natural language analysis systems.
Another, the \fIholism of intentional interpretation\fR,
expresses the requirement that utterances and descriptions ``make sense'' and
seems to be at least partially addressed by the goal/plan orientation of some
natural language systems.
It is the fourth, called \fIexistential holism\fR, that is most germane to the
present topic.
Haugeland argues that one must have actually \fIexperienced\fR emotions (like
embarrassment, relief, guilt, shame) to understand
``the meaning of text that (in a familiar sense) \fIhas\fR any meaning''.
One can only experience emotions in the context of one's own self-image.
Consequently, Haugeland concludes that
``only a being that cares about who it is, as some sort of enduring whole,
can care about guilt or folly, self-respect or achievement, life or death.
And only such a being can read.'' Computers just don't give a damn.
.pp
As AI researchers have pointed out repeatedly, however, it is difficult to
give such arguments \fIoperational\fR meanings.
How could one test whether a machine has \fIexperienced\fR an emotion like
embarrassment?
If it acts embarrassed, isn't that enough?
And while machines cannot yet behave convincingly as though they do experience
emotions, it is not clear that fundamental obstacles stand in the way of
continued progress.
There seems to be no reason in principle why a machine cannot be given a
self-image.
.pp
This controversy has raged back and forth for
decades, a recent resurgence
being Searle's (1980) paper on the Chinese room, and the 28 responses which
were published with it.
.[
Searle 1980 minds programs
.]
Searle considered the following \fIgedanken\fP experiment.
Suppose someone, who knows no Chinese (or any related language), is locked in
a room and given three large batches of Chinese writing, together with a
set of rules in English which allow her to correlate the apparently
meaningless squiggles in the three batches and to produce certain sorts of
shapes in response to certain sorts of shapes which may appear in the third
batch.
Unknown to her, the experimenters call the first batch a ``script'', the
second batch a ``story'', the third batch ``questions'', and the symbols
she produces ``answers''.
We will call the English rules a ``program'', and of course the intention is
that, when executed, sensible and appropriate Chinese answers, based on the
Chinese script, are generated to the Chinese questions about the Chinese
story.
But the subject, with no knowledge of Chinese, does not see them that way.
The question is, given that with practice the experimenters become so adept
at writing the rules and the subject so adept at interpreting them
that her answers become indistinguishable from those of a native Chinese
speaker, whether she can be said to \fIunderstand\fR the story.