ARTIFICIAL INTELLIGENCE
1 Artificial intelligence (AI) is, in theory, the ability of an artificial mechanism to demonstrate some form of intelligent behavior equivalent to the behaviors observed in intelligent living organisms. Artificial intelligence is also the name of the field of science and technology in which artificial mechanisms that exhibit behavior resembling intelligence are developed and studied.
2 The term AI itself, and the phenomena actually observed, invite --- indeed demand --- philosophical speculation about what in fact constitutes the mind or intelligence. These kinds of questions can be considered separately, however, from a description of the various endeavors to construct increasingly sophisticated mechanisms that exhibit “intelligence.”
3 Research into all aspects of AI is vigorous. Some concern exists among workers in the field, however, that both the progress and expectations of AI have been overstated. AI programs are primitive when compared to the kinds of intuitive reasoning and induction of which the human brain or even the brains of much less advanced organisms are capable. AI has indeed shown great promise in the area of expert systems --- that is, knowledge-based expert programs --- but while these programs are powerful when answering questions within a specific domain, they are nevertheless incapable of any type of adaptable, or truly intelligent, reasoning.
4 Examples of AI systems include computer programs that perform such tasks as medical diagnoses and mineral prospecting. Computers have also been programmed to display some degree of legal reasoning, speech understanding, vision interpretation, natural-language processing, problem solving, and learning. Although most of these systems have proved valuable either as research vehicles or in specific, practical applications, most of them are also still very far from being perfected.
5 CHARACTERISTICS OF AI: No generally accepted theories have yet emerged within the field of AI, owing in part to the fact that AI is a very young science. It is assumed, however, that on the highest level an AI system must receive input from its environment, determine an action or response, and deliver an output to its environment. A mechanism for interpreting the input is needed. This need has led to research in speech understanding, vision, and natural language. The interpretation must be represented in some form that can be manipulated by the machine.
6 In order to achieve this goal, techniques of knowledge representation are invoked. The system's interpretation of its input, together with knowledge obtained previously, is manipulated within the system by means of some mechanism or algorithm. The system thus arrives at an internal representation of the response or action. The development of such processes requires techniques of expert reasoning, common-sense reasoning, problem solving, planning, signal interpretation, and learning. Finally, the system must construct an effective response. This requires techniques of natural-language generation.
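The pipeline described in the two paragraphs above (receive input, interpret it, combine the interpretation with prior knowledge, construct a response) can be sketched in a few lines. This is only an illustrative skeleton; all function names and the toy knowledge base are invented for the example.

```python
# Minimal sketch of the high-level loop described in the text:
# input -> internal representation -> reasoning over stored knowledge -> output.

def interpret(raw_input):
    """Interpretation step: turn raw input into an internal representation."""
    return {"text": raw_input.lower().strip()}

def decide(representation, knowledge):
    """Reasoning step: combine the interpretation with prior knowledge."""
    for pattern, action in knowledge:
        if pattern in representation["text"]:
            return action
    return "no applicable knowledge"

def respond(action):
    """Response-construction step: render the chosen action as output."""
    return f"System response: {action}"

# Toy knowledge base (hypothetical), echoing the article's examples of
# medical diagnosis and mineral prospecting.
knowledge_base = [
    ("fever", "suggest a medical diagnosis module"),
    ("ore", "suggest a mineral-prospecting module"),
]

print(respond(decide(interpret("Patient reports a fever"), knowledge_base)))
```

Real systems replace each of these stubs with a substantial subsystem (speech understanding, vision, planners), but the overall input-interpret-reason-respond shape is the one the text describes.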
7 THE FIFTH-GENERATION ATTEMPT: In the 1980s, in an attempt to develop an expert system on a very large scale, the Japanese government began building powerful computers with hardware that made logical inferences in the computer language PROLOG. (Following the idea of representing knowledge declaratively, the logic programming language PROLOG had been developed in England and France. PROLOG is essentially an inference engine that searches declared facts and rules to confirm or deny a hypothesis. A drawback of PROLOG is that its built-in inference mechanism cannot be altered by the programmer.) The Japanese referred to such machines as "fifth-generation" computers.
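The control strategy the parenthetical describes, an engine that searches declared facts and rules to confirm or deny a hypothesis, is backward chaining. PROLOG itself works on first-order terms with unification; the propositional sketch below mimics only the search strategy, with invented example facts and rules.

```python
# Declared knowledge: facts are atoms known to be true; each rule is
# (premises, conclusion), read as "if all premises hold, conclusion holds."
facts = {"rainy", "cold"}
rules = [
    (["rainy", "cold"], "snow_likely"),
    (["snow_likely"], "roads_slippery"),
]

def prove(goal, facts, rules):
    """Backward chaining: a goal is confirmed if it is a declared fact,
    or if some rule concludes it and every premise of that rule can
    itself be proved. (Assumes the rule set is acyclic.)"""
    if goal in facts:
        return True
    for premises, conclusion in rules:
        if conclusion == goal and all(prove(p, facts, rules) for p in premises):
            return True
    return False

print(prove("roads_slippery", facts, rules))  # True: via snow_likely
print(prove("sunny", facts, rules))           # False: no fact or rule supports it
```

Note how the knowledge is purely declarative data; the `prove` function plays the role of the fixed inference engine that, as the text notes of PROLOG, the programmer does not alter.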
8 By the early 1990s, however, Japan had forsaken this plan and even announced that it was ready to release the project's software. Although the Japanese did not detail their reasons for abandoning the fifth-generation program, U.S. scientists faulted the effort as leaning too far toward computer-style logic and too little toward human thinking processes. The choice of PROLOG was also criticized: other nations were by then not developing software in that language and were showing little further enthusiasm for it. Furthermore, the Japanese had made little progress in parallel processing, a kind of computer architecture in which many independent processors work together in parallel---a method of increasing importance in computer science. The Japanese have since defined a "sixth-generation" goal instead, called the Real World Computing Project, which veers away from the expert-systems approach of relying solely on built-in logical rules.
9 THE FUTURE OF AI RESEARCH: One impediment to building even more useful expert systems has been, from the start, the problem of input---in particular, the feeding of raw data into an AI system. To this end, much effort has been devoted to speech recognition, character recognition, machine vision, and natural-language processing. A second problem is obtaining knowledge. It has proved arduous to extract knowledge from an expert and then code it for use by the machine, so a great deal of effort is also being devoted to learning and knowledge acquisition.
10 One of the most useful ideas that has emerged from AI research, however, is that facts and rules (declarative knowledge) can be represented separately from decision-making algorithms (procedural knowledge). This realization has had a profound effect both on the way that scientists approach problems and on the engineering techniques used to produce AI systems. By adopting a particular procedural element, called an inference engine, development of an AI system is reduced to obtaining and codifying sufficient rules and facts from the problem domain. This codification process is called knowledge engineering. Reducing system development to knowledge engineering has opened the door to non-AI practitioners. In addition, business and industry have been recruiting AI scientists to build expert systems.
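The separation the paragraph describes can be made concrete with a generic forward-chaining engine: the engine is fixed procedural code, while the knowledge engineer supplies only facts and rules as data. The medical rules below are invented placeholders, not a real expert system.

```python
def forward_chain(facts, rules):
    """Generic inference engine (procedural knowledge): repeatedly fire
    any rule whose premises are all satisfied, adding its conclusion,
    until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Declarative knowledge, supplied by the knowledge engineer as plain data.
# The engine above never changes; only this data does.
rules = [
    (["has_fever", "has_rash"], "suspect_measles"),
    (["suspect_measles"], "recommend_isolation"),
]

derived = forward_chain({"has_fever", "has_rash"}, rules)
# derived now also contains "suspect_measles" and "recommend_isolation"
```

Building a new expert system then means writing new rule data for the same engine, which is exactly why, as the text says, knowledge engineering opened the field to non-AI practitioners.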
11 In particular, a large number of these problems in the AI field have been associated with robotics. There are, first of all, the mechanical problems of getting a machine to make very precise or delicate movements. Beyond that are the much more difficult problems of programming sequences of movements that will enable a robot to interact effectively with a natural environment, rather than some carefully designed laboratory setting. Much work in this area involves problem solving and planning.
12 A radical approach to such problems has been to abandon the aim of developing "reasoning" AI systems and to produce, instead, robots that function "reflexively." A leading figure in this field has been Rodney Brooks of the Massachusetts Institute of Technology. Brooks and like-minded researchers felt that preceding efforts in robotics were doomed to failure because the systems produced could not function in the real world. Rather than trying to construct integrated networks that operate under centralized control and maintain a logically consistent model of the world, they are pursuing a behavior-based approach named subsumption architecture.
13 Subsumption architecture employs a design technique called "layering"---a form of parallel processing in which each layer is a separate behavior-producing network that functions on its own, with no central control. No true separation exists in these layers between data and computation: both are distributed over the same networks. Connections between sensors and actuators in these systems are kept short as well. The resulting robots might be called "mindless," but in fact they have demonstrated remarkable abilities to learn and to adapt to real-life circumstances.
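The layering idea can be illustrated with a toy controller. In a real subsumption robot each layer is a network running in parallel on its own hardware; the sequential sketch below only shows the priority relationship, where a higher layer, when it has something to say, subsumes (overrides) the layers beneath it. All behaviors and sensor names are invented for the example.

```python
# Each layer maps raw sensor readings directly to a proposed action,
# or None when it has nothing to contribute. There is no central
# world model: each behavior reads the sensors it cares about.

def avoid_obstacle(sensors):
    """Highest-priority layer: reflexively turn away from nearby obstacles."""
    if sensors.get("obstacle_cm", 999) < 20:
        return "turn_away"
    return None

def seek_light(sensors):
    """Middle layer: steer toward the brighter side."""
    if sensors.get("light_left", 0) > sensors.get("light_right", 0):
        return "veer_left"
    return None

def wander(sensors):
    """Lowest layer: default exploratory behavior, always active."""
    return "move_forward"

# Layers ordered highest priority first; the first non-None output wins,
# i.e. an active upper layer subsumes everything below it.
layers = [avoid_obstacle, seek_light, wander]

def act(sensors):
    for layer in layers:
        action = layer(sensors)
        if action is not None:
            return action

print(act({"obstacle_cm": 10, "light_left": 5}))   # "turn_away"
print(act({"obstacle_cm": 100, "light_left": 5}))  # "veer_left"
```

Note that the controller never builds or consults a model of the world; competent overall behavior emerges from the fixed priority ordering among simple reflexes, which is the core claim of the behavior-based approach.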
14 The apparent successes of this new approach have not convinced many supporters of integrated-systems development that the alternative is a valid one for drawing nearer to the goal of producing true AI. The arguments that have arisen between practitioners of the two different methodologies are in fact profound ones. They have implications about the nature of intelligence in general, whether natural or artificial.