Combining Declarative and Procedural Knowledge to Automate and Represent Ontology Mapping
OVM Study Notes
1. OVM Overview
OVM is verification based on CDV (coverage-driven verification); a verification environment is assembled from OVCs (OVM verification components).
Basic members of an OVC: (1) Data Item (Transaction); (2) BFM (Driver); (3) Sequencer; (4) Monitor; (5) Agent; (6) Environment.
1.1 Transaction: a data packet that abstracts the DUT's signal-level timing, e.g., an Internet packet, a bus transaction (read/write, address, data), or a CPU instruction.
1.2 BFM: converts abstract transactions into the timed signals that drive the DUT.
1.3 Sequencer: an advanced stimulus generator that produces Transactions for the BFM on demand.
Built-in capabilities: (1) it can generate Transactions based on the DUT's current state; (2) it can generate Transactions in a user-specified order, so that stimulus sequences with a specific meaning can be constructed; (3) it enables time modeling in reusable scenarios; (4) it supports declarative and procedural constraints for the same scenario; (5) it allows system-level synchronization and control of multiple interfaces.
1.4 Monitor: (1) abstracts signal-level activity back into Transactions; (2) turns Transactions into events and notifies other components; (3) performs timing or data checks; (4) collects coverage; (5) prints simulation progress.
1.5 Agent: the top-level container for a Sequencer, BFM, and Monitor, used to configure and connect the three.
An ENV can contain several Agents; for example, a Master/Transmit Agent drives the DUT while a Slave/Receive Agent receives the DUT's output.
An Agent should be configurable as active or passive: an active Agent can drive the DUT, whereas a passive Agent only monitors it.
1.6 Environment: the Env is the top level containing multiple Agents and the Bus Monitor; it connects these components and forms the topology of the verification environment.
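To make the component roles above concrete, here is a minimal Python sketch of the hierarchy (Transaction, Sequencer, Driver/BFM, Monitor, Agent, Env). It is purely illustrative: OVM itself is a SystemVerilog class library, and none of the class or method names below are the actual ovm_* API.

```python
from dataclasses import dataclass

@dataclass
class Transaction:            # 1.1 abstract data item (e.g. a bus read/write)
    kind: str                 # "read" or "write"
    addr: int
    data: int = 0

class Sequencer:              # 1.3 generates transactions on demand
    def __init__(self, stimulus):
        self.stimulus = list(stimulus)
    def next_item(self):
        return self.stimulus.pop(0) if self.stimulus else None

class Driver:                 # 1.2 BFM: turns transactions into pin-level activity
    def drive(self, tr: Transaction):
        print(f"driving {tr.kind} @0x{tr.addr:x} data=0x{tr.data:x}")

class Monitor:                # 1.4 observes activity and rebuilds transactions
    def __init__(self):
        self.observed = []
    def observe(self, tr: Transaction):
        self.observed.append(tr)   # could also run checks and collect coverage

class Agent:                  # 1.5 bundles sequencer + driver + monitor
    def __init__(self, active=True):
        self.active = active
        self.sequencer, self.driver, self.monitor = Sequencer([]), Driver(), Monitor()
    def run(self):
        while self.active and (tr := self.sequencer.next_item()):
            self.driver.drive(tr)
            self.monitor.observe(tr)

class Env:                    # 1.6 top level holding one or more agents
    def __init__(self, *agents):
        self.agents = agents
    def run(self):
        for a in self.agents:
            a.run()

master = Agent(active=True)
master.sequencer.stimulus = [Transaction("write", 0x10, 0xAB), Transaction("read", 0x10)]
Env(master).run()
```

A passive Agent would simply be constructed with active=False and only keep its Monitor busy, mirroring the active/passive distinction described above.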
Chinese-English Glossary of Psychological Terms
感觉动作期 – sensorimotor stage    物永久性 – object permanence
运思前期 – preoperational stage    保留概念 – conservation
感觉记忆(SM) – sensory memory    短期记忆(STM) – short-term memory
长期记忆(LTM) – long-term memory    复诵 – rehearsal
预示(激发) – priming    童年失忆症 – childhood amnesia
发展 – development    先天 – nature
后天 – nurture    成熟 – maturation
(视觉)偏好法 – preferential (looking) method
习惯法 – habituation    视觉悬崖 – visual cliff
剥夺或丰富(环境) – deprivation or enrichment of environment    基模 – schema
概念 – concept
原型 – prototype    属性 – property    特征 – feature
范例策略 – exemplar strategy    语言相对性(假说) – linguistic relativity hypothesis
音素 – phoneme    词素 – morpheme
(字词的)外延与内涵意义 – denotative and connotative meaning (of words)
气质 – temperament    依附 – attachment    性别认定 – gender identity
English Definition of Declarative Knowledge
Declarative knowledge refers to knowledge that can be expressed in the form of propositions or statements. It describes facts, concepts, principles, and relationships; it can be true or false and can be stated in a declarative sentence.

Declarative knowledge differs from procedural knowledge, which is knowledge about how to perform a task or activity. While declarative knowledge represents facts and information, procedural knowledge represents skills and know-how. Declarative knowledge is often acquired through formal education, reading, observation, or other forms of learning; it can be stored in long-term memory and retrieved when needed.

An important characteristic of declarative knowledge is that it can be shared and communicated: it can be transferred from one person to another through language, writing, or other means of communication. Declarative knowledge is essential to many aspects of human life, including education, science, technology, and culture.

In summary, declarative knowledge is knowledge expressible as propositions or statements, representing facts, concepts, principles, and relationships. It differs from procedural knowledge and can be acquired, stored, and shared through various means of learning and communication.
The 'Implicit' and the 'Explicit' in Foreign Language Learning
Explicit vs. Implicit Learning
Explicit learning is a “conscious awareness and intention” to learn. In addition, explicit learning involves “input processing to find out whether the input information contains regularities, and if so, to work out the concepts and rules with which these regularities can be captured”. Explicit learning is an active process where students seek out the structure of information that is presented to them.
Key characteristics of implicit and explicit knowledge
Awareness: implicit = intuitive awareness of linguistic norms; explicit = conscious awareness of linguistic norms
Type of knowledge: implicit = procedural knowledge of rules and fragments; explicit = declarative knowledge of grammatical rules and fragments
Systematicity: implicit = variable but systematic knowledge; explicit = anomalous and inconsistent knowledge
Accessibility: implicit = access to knowledge by means of automatic processing; explicit = access to knowledge by means of controlled processing
Use of L2 knowledge: implicit = access to knowledge during fluent performance; explicit = access to knowledge during planning difficulty
Self-report: implicit = nonverbalizable; explicit = verbalizable
Learnability: implicit = potentially only within the critical period; explicit = any age
Working Paper No. Three
The Metaprise, the AKMS, and the Enterprise Knowledge Portal
By Joseph M. Firestone, Ph.D., Executive Information Systems, Inc.
Revised March 16, 2000. © 1999-2000 Executive Information Systems, Inc.

Introduction
This is a paper about four terms: the Metaprise, the Artificial Knowledge Management System (AKMS), the Enterprise Information Portal (EIP), and the Enterprise Knowledge Portal (EKP). They are important terms. The Metaprise is shorthand for the 21st-century knowledge-managed, knowledge-innovating organization. The AKMS is the name of a comprehensive type of IT application supporting Knowledge Management; it is at the foundation of the KMC's AKMS Standards Sub-Committee. EIP is a new software application and investment space identified by Merrill Lynch. And the EKP is a type of EIP segmenting that space. In this paper I'll lay out the relationships among these terms and develop a concept map including all of them. The map will show the convergence of terminology on a new and, I hope, powerful construct: the Metaprise as the knowledge-managing, knowledge-innovating organization of the 21st century, supported by an Enterprise Knowledge Portal system as its central AKMS application.

The Metaprise
Definition
Figure One provides an overview of a Knowledge Life Cycle model begun in collaboration with Mark McElroy, Edward Swanstrom, Douglas Weidner, and Steve Cavaleri [1], during meetings sponsored by the Knowledge Management Consortium International (KMCI), and further developed recently by Mark McElroy and myself [2]. Knowledge Production and Knowledge Integration are core knowledge processes in the model. Knowledge Production produces Validated Knowledge Claims (VKCs), Unvalidated Knowledge Claims (UKCs), and Invalidated Knowledge Claims (IKCs), along with information about the status of each. Organizational Knowledge (OK) is composed of all of these results of knowledge production; it is what the Knowledge Integration process integrates into the enterprise.

Figure One -- The Knowledge Life Cycle Model (Overview)

The knowledge integration process, in turn, produces the Distributed Organizational Knowledge Base (DOKB), and the DOKB in its turn has a major impact on structures incorporating organizational knowledge, such as business processes and information systems. Coupled with external sources, these structures then feed back to affect Knowledge Production at a later time -- which is why it is called the Knowledge Life Cycle (KLC) model.

Drilling down into knowledge production (Figure Two), the KLC view is that information acquisition and individual and group learning feed knowledge claim formulation, which in turn produces Codified Knowledge Claims (CKCs). These are then tested in the knowledge validation sub-process, which produces organizational knowledge. Individual and group learning may involve knowledge production from the perspective of the individual or group, but from the perspective of the enterprise, what individuals and groups learn is information, not knowledge. Similarly, acquired information may be knowledge from the perspective of the external parties it is acquired from.

Figure Two -- The Components of Knowledge Production

Drilling down into knowledge integration (Figure Three), organizational knowledge is integrated across the enterprise by the broadcasting, searching/retrieving, teaching, and sharing sub-processes. These generally work in parallel rather than sequentially. Not all are necessary to a specific instance of the KLC, and all may be based in personal, non-electronic or electronic interactions.
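The flow just described can be summarized in a small sketch. This is an illustrative Python model of the claim states named above (CKC, VKC, UKC, IKC), not something from Firestone's paper; the class names, the callable validation criteria, and the example claims are invented for exposition.

```python
from enum import Enum
from dataclasses import dataclass

class ClaimStatus(Enum):
    CODIFIED = "CKC"       # formulated but not yet tested
    VALIDATED = "VKC"      # passed the organization's validation criteria
    UNVALIDATED = "UKC"    # tested, but neither validated nor invalidated
    INVALIDATED = "IKC"    # failed the validation criteria

@dataclass
class KnowledgeClaim:
    text: str
    status: ClaimStatus = ClaimStatus.CODIFIED

def validate(claim: KnowledgeClaim, criteria) -> KnowledgeClaim:
    """Knowledge validation sub-process: apply organizational criteria to a CKC."""
    claim.status = criteria(claim.text)
    return claim

# Organizational Knowledge is the pool of all tested claims plus status information;
# Knowledge Integration then diffuses this pool across the enterprise (the DOKB).
organizational_knowledge = [
    validate(KnowledgeClaim("Customers churn when response time exceeds 48 hours"),
             lambda _: ClaimStatus.VALIDATED),
    validate(KnowledgeClaim("Feature X drives renewals"),
             lambda _: ClaimStatus.UNVALIDATED),
]
```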
Figure Three -- The Components of Knowledge Integration

Here is a glossary of the major terms used in the KLC Model.

Sidebar One: Glossary for Figures One - Three
Codified Knowledge Claims - Information that has been codified, and is claimed to be true, but which has not yet been subjected to organizational validation.
Distributed Organizational Knowledge Base - An abstract construct representing the outcome of knowledge integration. The DOKB is found everywhere in the enterprise, not merely in electronic repositories.
Experiential Feedback Loops - Processes by which information concerning the outcomes of organizational learning activities is fed back into the knowledge production phase of an organization's knowledge life cycle as a useful reference for future action.
Individual and Group Learning - A process involving human interaction, knowledge claim formulation, and validation by which new individual and/or group level knowledge is created.
Information About Invalidated Knowledge Claims - Information that asserts the existence of invalidated knowledge claims and the circumstances under which such knowledge was invalidated.
Information About Unvalidated Knowledge Claims - Information that asserts the existence of unvalidated knowledge claims, and the circumstances under which such knowledge was tested and neither validated nor invalidated.
Information About Validated Knowledge Claims - Information that asserts the existence of validated knowledge claims and the circumstances under which such knowledge was validated.
Information Acquisition - A process by which an organization either deliberately or serendipitously acquires knowledge claims or information produced by others external to the organization.
Invalidated Knowledge - A collection of codified invalidated knowledge claims.
Invalidated Knowledge Claims - Codified knowledge claims that have not satisfied an organization's validation criteria. Falsehoods.
Knowledge Claim - A codified expression of potential knowledge which may be held as validated knowledge at an individual and/or group level, but which has not yet been subjected to a validation process at an organizational level. Information. Knowledge claims are components of hierarchical networks of rules that, if validated, would become the basis for organizational or agent behavior.
Knowledge Claim Formulation - A process involving human interaction by which new organizational knowledge claims are formulated.
Knowledge Integration - The process by which an organization introduces new knowledge claims to its operating environment and retires old ones. Knowledge Integration includes all knowledge transmission, teaching, knowledge sharing, and other social activity that communicates either an understanding of previously produced organizational knowledge to knowledge workers, or the knowledge that certain sets of knowledge claims have been tested and that they and information about their validity strength are available in the organizational knowledge base, or some degree of understanding between these alternatives. Knowledge integration processes, therefore, may also include the transmission and integration of information.
Knowledge Production - A process by which new organizational knowledge is created, discovered, or made. Synonymous with "organizational learning."
Knowledge Validation Process - A process by which knowledge claims are subjected to organizational criteria to determine their value and veracity.
Organizational Knowledge - A complex network of validated knowledge claims held by an organization, consisting of declarative and procedural rules.
Organizational Learning - A process involving human interaction, knowledge claim formulation, and validation by which new organizational knowledge is created.
(Business) Structures Incorporating Organizational Knowledge - Outcomes of organizational system interaction. The organization behaves through these structures, including business processes, strategic plans, authority structures, information systems, policies and procedures, etc. Knowledge structures exist within these business structures and are the particular configurations of knowledge found in them.
Unvalidated Knowledge Claims - Codified knowledge claims that have not satisfied an organization's validation criteria, but which were not invalidated either. Knowledge claims requiring further study.
Validated Knowledge Claims - Codified knowledge claims that have best satisfied an organization's validation criteria compared to other, competing knowledge claims. "Truth" as we currently know it.

The Knowledge Management Process (KMP) is an ongoing, persistent interaction among human-based agents within the Natural Knowledge Management System (NKMS) [3]. The KMP is distinct from other interactions of the NKMS. Agents participating in it aim at integrating its agents, various components, and activities into a planned, directed, unified whole producing, maintaining, enhancing, acquiring, and transmitting the enterprise's knowledge base. Knowledge Management is human activity that is part of the interaction constituting the KMP.

Figure Four -- The Metaprise -- The Knowledge-Managing, Knowledge-Innovating Organization

A Metaprise [1] [4] is an organization that has implemented an authoritative and formal Knowledge Management Process that not only manages knowledge processes, but also manages itself and its own rate of innovation. The Metaprise therefore contains at least two legitimated levels of process activity above the knowledge process level. The first analyzes and manages what occurs at the fundamental knowledge process level of interaction, and the second does the same at the knowledge management process level of interaction. In short, the Metaprise is the knowledge-managing, knowledge-innovating organization. It is illustrated in Figure Four.

KM as a discipline needs a shorthand expression to refer to the knowledge-managing, knowledge-innovating organization. The term "Metaprise" is a good choice. It recognizes the existence in some organizations of the "meta" or formal KM activity level over and above the fundamental knowledge process level of interaction, and also the existence of other levels above the KM activity level that manage and control innovation at the KM activity level.

Formal KM activity is activity dedicated to shaping the direction of the NKMS. It is not fundamental knowledge process activity, but it is independent of it and about it. Organizations that have formal KM activity have taken a deliberate and conscious step toward growing and institutionalizing organizational intelligence, adaptability, creativity, and learning. Assuming their success in implementing their KMP, they are much more nearly 21st-century "intelligent enterprises" than their competitors.
But if they implement the KM activity level alone, they are still not Metaprises, only pre-Metaprises. To become a Metaprise, they must implement at least one more level of KM process activity in addition to first-level KM. This is necessary to produce new knowledge about knowledge production, or, in other words, to innovate about the rate of innovation.

Among Metaprises we can distinguish types along two important dimensions, thereby providing the basis of a useful classification. The first is the number of levels of knowledge management interaction a Metaprise has implemented. The second is the breadth of knowledge management activities it has implemented at each level.

Levels of Knowledge Management
By levels of knowledge management interaction, I mean to distinguish multiple levels of KM process activity arranged in a hierarchy. In principle, and at least with respect to knowledge production, the hierarchy has an infinite number of levels [5]. The hierarchy is generated by considerations similar to those specified by Bertrand Russell [6] in his theory of types, and Gregory Bateson [7] in his theory of learning and communication.

Knowledge processes occur at the same level of agent interaction as other business processes. Let's call this business process level of interaction Level Zero of enterprise Complex Adaptive System (cas) interaction [8]. At this level, pre-existing knowledge is used by business processes and by knowledge processes to implement activity. In addition, knowledge processes produce and integrate knowledge about business processes using (a) previously produced knowledge about how to implement these knowledge processes, (b) infrastructure, (c) staff, and (d) technology, whose purpose is to provide the foundation for knowledge production and knowledge integration at Level Zero. But where do this infrastructure, staff, knowledge, and technology come from? Who manages them, and how are they changed?

They don't come from, by, and through the Level Zero knowledge processes -- these only produce, transfer, and acquire knowledge about business processes such as the sales, marketing, or manufacturing processes. So this is where Level One of cas interaction, the lowest level of knowledge management, comes in.

This Level One KM process interaction is responsible for producing and integrating knowledge about Level Zero knowledge production and integration processes to knowledge workers at Level Zero. It is this knowledge which is used at both Level Zero and Level One to implement knowledge processes and KM knowledge and information processing. Let's call this Level One knowledge the Enterprise Knowledge Management (EKM) model.

The KM process and EKM model at Level One are also responsible for providing the knowledge infrastructure, staff, and technology necessary for implementing knowledge processes at Level Zero. In turn, knowledge processes at Level Zero use this infrastructure, staff, and technology to produce and integrate the knowledge used by the business processes. The relationships between Level One KM and Level Zero knowledge and business processes are illustrated in Figure Five.

Knowledge about Level Zero knowledge processes, as well as infrastructure, staff, and technology, changes when Level One KMP interactions introduce changes.
That is, changes occur when the Level One KMP produces and integrates new knowledge about how to implement Level Zero knowledge processes, and when it adds to or subtracts from the existing infrastructure, staff, and technology based on new knowledge it produces. There are two possible sources of these changes.

Figure Five -- Level Zero/Level One KM Process Relationships

First, knowledge production at Level One can change the EKM model, which, in turn, impacts on (a) knowledge about how to produce or integrate knowledge about (Level Zero) business processes, (b) knowledge about how to acquire information or integrate knowledge about Level One information acquisition or integration processes, (c) staffing, (d) infrastructure, and (e) technology. This type of change, then, originates in the KM Level One process interaction itself.

Second, knowledge expressed in the EKM model about how to produce knowledge at Level One may change. This knowledge, however, is only used in arriving at the Level One EKM model; it is not explained or accounted for by it. It is determined, instead, by a KM Level Two process and is accounted for in a Level Two EKM model produced by this interaction. Figure Six adds the KM Level Two process to the process relationships previously shown in Figure Five.

Instead of labeling the three levels of processes discussed so far as Level Zero, Level One, and Level Two, it is more descriptive to think of them as the knowledge process level, the KM or meta-knowledge process level, and the meta-KM level of process interaction. There is no end, in principle, to the hierarchy of levels of process interaction and accompanying EKM models. The number of levels we choose to model and to describe will be determined by how complete an explanation of knowledge management activity we need to accomplish our purposes.

Figure Six -- Level Zero - Level Two KM Process Relationships

§ The knowledge process level produces knowledge about business processes, and uses knowledge about how to produce (how to innovate) knowledge about business processes. This level cannot change knowledge about how to produce knowledge. It can change knowledge about business processes.
§ The KM (pre-Metaprise, meta-knowledge) process level produces the knowledge about how to produce knowledge about business processes, and uses knowledge about how to produce KM-level knowledge about how to produce knowledge about business processes. This level can change knowledge about how to produce knowledge, but cannot change knowledge about how to produce KM-level knowledge.
§ The meta-KM (first Metaprise) level produces (a) knowledge about how to produce knowledge about KM knowledge processes, and (b) knowledge about how to produce KM-level knowledge about how to produce knowledge about knowledge processes. It uses knowledge about how to produce meta-KM-level knowledge about how to produce knowledge about KM knowledge processes. This level can change knowledge about how to produce KM-level knowledge, but cannot change knowledge about how to produce meta-KM-level knowledge.
§ Level Three, the meta-meta-KM process level of interaction, produces knowledge about how to produce meta-KM-level-produced knowledge about how to produce knowledge about KM knowledge processes, and uses meta-meta-KM-level-produced knowledge about how to produce knowledge about meta-KM-level knowledge processes.
This level can change knowledge about how to produce meta-KM-level knowledge, but cannot change knowledge about how to produce meta-meta-KM-level knowledge.

Level Three, then, seems to be the minimum number of levels needed for a view of KM allowing one to change (accelerate) the rate of change in KM-level knowledge. And in some situations, where we need even more leverage over our knowledge about how to arrive at knowledge about KM processes, we may even need to go to a fourth (meta-meta-meta-) KM level.

Distinguishing Metaprises according to the level of knowledge management practiced in them lets us talk about pre-Metaprises, Level One Metaprises, Level Two Metaprises, and so on. It should be possible to usefully characterize the successful 21st-century intelligent enterprise, at least on a business-domain-specific basis, as a Level X Metaprise, once we have more empirical evidence on how many KM levels are needed for competitiveness in any business domain.

Thus, the relative effectiveness of Metaprises at different levels is an empirical question, not something we should assume as given. While it is very likely that effectiveness will increase as Metaprises move from Level One to higher levels, there may be a point at which diminishing returns set in. Or there may even be a point at which movement up the ladder of levels leads to negative returns relative to the investment required to add a KM level, or leads to fewer returns than alternative investments in other areas. ROI considerations must apply to Metaprise KM enhancements, as well as to other Metaprise business processes.

Breadth of KM Processes
By breadth of knowledge management processes, I mean the extent to which all of the major KM activities are implemented at any specified level of the Metaprise. So what are these major KM activities? Here is a conceptual framework that begins to specify them.
§ Business process activities may be viewed as sequentially linked and as governed by validated rule sets, or knowledge [1] [3] [9] [10].
§ A linked sequence of activities performed by one or more agents sharing at least one objective is a Task.
§ A linked sequence of tasks governed by validated rule sets, producing results of measurable value to the agent or agents performing the tasks, is a Task Pattern.
§ A cluster of task patterns, not necessarily performed sequentially, often performed iteratively and incrementally, is a Task Cluster.
§ Finally, a hierarchical network of interrelated, purposive activities of intelligent agents that transforms inputs into valued outcomes, a cluster of task clusters, is a Business Process.
The activity-to-business-process hierarchy is illustrated in Figure Seven.

Figure Seven -- The Activity To Business Process Hierarchy

This hierarchy, ranging from activities to processes, applies to knowledge and KM processes as well as to operational business processes. Enterprise KM activities may be usefully categorized according to a scheme of task clusters which, with some additions and changes, generally follows Mintzberg [11]. There are three types of KM task clusters: interpersonal behavior, information (and knowledge) processing behavior, and decision making. Each type of task cluster is broken down further into more specific types of task pattern activities in the text below.
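Before that breakdown, here is a small Python sketch of the activity-to-business-process hierarchy just defined. The class names mirror the paper's terms, but the fields and the example are assumptions added for illustration, not part of the original framework.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Activity:
    name: str

@dataclass
class Task:                      # linked sequence of activities with a shared objective
    objective: str
    activities: List[Activity] = field(default_factory=list)

@dataclass
class TaskPattern:               # linked sequence of tasks producing measurable value
    tasks: List[Task] = field(default_factory=list)

@dataclass
class TaskCluster:               # cluster of task patterns, often iterative/incremental
    kind: str                    # e.g. "interpersonal", "knowledge/information processing"
    patterns: List[TaskPattern] = field(default_factory=list)

@dataclass
class BusinessProcess:           # hierarchical network: a cluster of task clusters
    name: str
    clusters: List[TaskCluster] = field(default_factory=list)

# A Level One KM process would then be one such business process with three clusters.
km_process = BusinessProcess(
    name="Knowledge Management (Level One)",
    clusters=[TaskCluster("interpersonal"),
              TaskCluster("knowledge/information processing"),
              TaskCluster("decision making")],
)
```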
Interpersonal Behavior
§ Interpersonal behavior includes figurehead or ceremonial KM activity. This activity focuses on performing formal KM acts such as signing contracts, attending public functions on behalf of the enterprise's KM process, and representing the KM process to dignitaries visiting the enterprise.
§ A second type of interpersonal activity is leadership. This includes hiring, training, motivating, monitoring, and evaluating staff. It also includes persuading non-KM agents within the enterprise of the validity of KM process activities. That is, KM activity includes building political support for KM and knowledge processes within the enterprise.
§ A third type of interpersonal KM activity is building relationships with individuals and organizations external to the enterprise. This is another political activity designed to build status for KM and to cultivate external sources of support for KM.

Knowledge and Information Processing
§ Knowledge Production is a KM as well as a knowledge process. KM knowledge production is different in that it is here that the rules for knowledge production used at the level of knowledge processes are specified. Keep in mind that knowledge production at this level involves planning, descriptive, cause-and-effect, predictive, and assessment knowledge about the two fundamental Level Zero knowledge processes, as well as these categories of knowledge about Level One interpersonal, knowledge integration, and decision-making KM activities. The only knowledge not produced by Level One knowledge production is knowledge about how to accomplish knowledge production at Level One. Once again, the rules constituting this last type of knowledge are produced at Level Two.
§ KM Knowledge Integration is affected by KM knowledge production, and also affects knowledge production activities by stimulating new ones. KM knowledge integration at any KM level also plays the critical role of diffusing "how-to" knowledge to lower KM and knowledge process levels.

Decision Making Activities
§ Changing knowledge process rules at lower KM and knowledge process levels. Essentially this involves making the decision to change such rules and causing both the new rules and the mandate to use them to be transferred to the lower level.
§ Crisis Handling would involve such things as meeting CEO requests for new competitive intelligence in an area of high strategic interest for the enterprise, and directing rapid development of a KM support infrastructure in response to requests from high-level executives.
§ Allocating Resources for KM support infrastructures, training, professional conferences, salaries for KM staff, funds for new KM programs, etc.
§ Negotiating agreements with representatives of business processes over levels of effort for KM, the shape of KM programs, the ROI expected of KM activities, etc.

Altogether, there are nine KM activities in the three task clusters. This classification is probably not complete; there are likely other activities, as well as other task clusters I have overlooked. When we come up with a better classification, we will then have the capability to define types of Metaprises based on both variation in levels of KM and the breadth of KM task clusters and activities that are implemented.
This should give us a fairly rich two-dimensional classification of Metaprises, which we can then further segment by performance and other characteristics as seems appropriate.

The Artificial Knowledge Management System (AKMS)
The AKMS supports the NKMS of the Metaprise, along with its formal Knowledge Management process. It is designed to manage the integration of computer hardware, software, and networking objects/components into a functioning whole supporting enterprise knowledge production and integration processes. The AKMS, in other words, supports producing and integrating the enterprise's knowledge base. The enterprise's knowledge base, in turn, is used by its agents to perform Knowledge, Knowledge Management, and other business processes. I have defined and described the AKMS and its key component, the Artificial Knowledge Manager (AKM), in more detail elsewhere [12]. The basic architecture of the AKMS has been developed in a "strawman" version by the Knowledge Management Consortium (KMC) and is illustrated in Figure Eight.

It shows clients, application servers, communication buses, and data stores integrated through a single logical component called an Artificial Knowledge Manager (AKM). The AKM performs its central integrative functions by providing process control and distribution services, an active, in-memory object model supplemented by a persistent object store, and connectivity services for passing data, information, and knowledge from one component to another. A more concrete visual picture showing the variety of component types in the AKMS is provided in Figure Nine.

Figure Eight -- KMC "Straw Man" AKMS Architecture
Figure Nine -- Components of the AKMS

Sidebar Two: Figure Nine Abbreviations
Web = Web Information Server
Pub = Publication & Delivery Server
KDD = Knowledge Discovery in Databases / Data Mining Servers
ETML = Extraction, Transformation, Migration and Loading
DDS = Dynamic Data Staging Area
DW = Data Warehouse
ODS = Operational Data Store
ERP = Enterprise Resource Planning
Query = Query and Reporting Server
CTS = Component Transaction Server
BPE = Business Process Engine
ROLAP = Relational Online Analytical Processing

An important difference between the two figures is that the communications bus aspect of the AKMS is implicit in Figure Nine, where I have assumed that the AKM incorporates it. The AKM provides the computing framework necessary to dynamically integrate the Metaprise's computing support for KM activities and processes. Figure Nine makes plain the diversity of component types in the Metaprise's AKMS. It is because of this diversity and its rapid rate of growth in the last few years that the AKM becomes necessary.
Change in the AKMS's components and objects can be introduced through so many sources that, if the AKMS is to adapt to change, it needs an integrative component like the AKM to play the major role in its integration and adaptation.

The key architectural components of the AKMS are:
§ The Artificial Knowledge Manager (AKM);
§ Stateless application servers;
§ Application servers that maintain state;
§ Object/data stores;
§ Object Request Brokers (e.g., CORBA, DCOM); and
§ Client application components.
In order to provide the flavor of the AKMS, I'll briefly describe these components (with the exception of client application components) below.

The AKM
An AKM provides Process Control Services, an Object Model of the Artificial Knowledge Management System (the system corresponding to the AKMS architecture), and Connectivity to all Metaprise information, data stores, and applications. What I mean by these terms is covered in detail in [12]. Here a brief outline should provide at least a flavor of the AKM sufficient to develop AKMS connections to the Metaprise and Enterprise Knowledge Portals.

Process Control Services include:
§ In-memory proactive object state management and synchronization across distributed objects and through intelligent agents;
§ Component management and workflow management through intelligent agents;
§ Transactional multi-threading;
§ Business rule management and processing; and
§ Metadata management.

An In-memory Active Object Model / Persistent Object Store is characterized by:
§ Event-driven behavior;
§ An AKMS-wide model with shared representation;
§ Declarative as well as procedural business rules;
§ Caching along with partial instantiation of objects;
§ A persistent object store for the AKM;
§ Reflexive objects.

Connectivity Services should have:
§ Language APIs: C, C++, Java, CORBA, COM;
§ Databases: relational, ODBC, OODBMS, hierarchical, network, flat file, etc.;
§ Wrapper connectivity for application software: custom, CORBA-, or COM-based; and
§ Applications connectivity including all the categories mentioned in Figure Nine above, whether mainframe-, server-, or desktop-based.
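As a rough illustration of how these three service groups might hang together around a single integrative component, here is a minimal Python sketch. The interfaces and method names are invented for exposition only; they are not part of any actual AKMS or AKM product, and a real system would of course involve distributed middleware rather than in-process classes.

```python
from abc import ABC, abstractmethod

class ProcessControlServices(ABC):
    @abstractmethod
    def synchronize_objects(self, objects): ...     # in-memory object state management
    @abstractmethod
    def run_workflow(self, workflow): ...           # component / workflow management
    @abstractmethod
    def apply_business_rules(self, event): ...      # rule management and processing

class ActiveObjectModel(ABC):
    @abstractmethod
    def on_event(self, event): ...                  # event-driven behavior
    @abstractmethod
    def persist(self): ...                          # persistent object store

class ConnectivityServices(ABC):
    @abstractmethod
    def connect(self, source: str): ...             # databases, wrapped apps, language APIs

class ArtificialKnowledgeManager:
    """Single logical integration point tying the three service groups together."""
    def __init__(self, control: ProcessControlServices,
                 model: ActiveObjectModel,
                 connectivity: ConnectivityServices):
        self.control = control
        self.model = model
        self.connectivity = connectivity
```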
Ten Principles of Modern Language Teaching
Jin Honggang (靳洪刚): Ten Principles of Modern Language Teaching
[lear]ners come to understand that the ultimate purpose of language learning is to accomplish the various tasks and solve the various problems of everyday life. Task-based instructional design generally comprises three task phases and five basic components. The three phases of a task are
the pre-task, the core task, and the post-task. The pre-task phase is the preparation stage and is mostly completed outside of class. Its purpose is to activate learners' existing knowledge, lay the groundwork for new language learning, and provide a linguistic and communicative framework (scaffolding) in terms of language input, information sources, and communicative context, helping students move smoothly into the core-task phase. Teaching in this phase therefore centers on activating prior knowledge and processing language input, and activities are mainly interpretive reading and listening. The core-task phase involves two kinds of teaching: form-focused instruction, whose purpose is to build the communicative framework learners need to complete the core task and to help them integrate information; and use-focused task simulation and implementation, carried out mostly in class through oral output and interpersonal communication. The purpose of the core task is to provide a simulated task of some cognitive and linguistic complexity so that learners use the target language purposefully, complete the task, and achieve the expected outcome. Typically the whole class first works on the language forms together, and students then interact in groups to complete information-exchange, information-assembly, and opinion-exchange tasks. The post-task phase is the stage of summary, reflection, and application to real life; it is completed in or out of class mostly through hands-on practice, written output, or oral presentation, typically in the form of written summaries, field surveys, and oral reports.
1. Introduction
Over the past two decades, modern language education has been strongly influenced by research findings from three fields: language acquisition research, cognitive psychology, and education. In second language acquisition, researchers have spent the past fifty years conducting experimental studies in areas such as contrastive analysis, error analysis, language universals, cognitive psychology, and the process of language acquisition, systematically investigating acquisition orders and rates for different languages, the roles of language input and output, classroom processes, and learning strategies, and reaching a number of firm conclusions. These findings have become part of the teaching principles of the second language teaching field. In cognitive psychology, researchers have proposed specific theories of language learning and strategies for second language teaching based on general learning theory, human cognitive processes, the brain's processing of language (memory, storage, and encoding), modes of memory storage, input frequency, visual and auditory salience, and contrastive negative evidence, strongly influencing the teaching principles governing classroom and learning processes in second language instruction. In education, researchers emphasize learner-centered teaching, learner participation in the learning process, cooperative and individualized instruction, and links to real-world experience. This perspective has given rise to a variety of second language teaching methods that are student-centered and communication-oriented and that pursue the goals of second language teaching through task-based instruction.
The introduction of these research findings and disciplinary developments into second language teaching has brought sweeping and fundamental change, visible in six aspects of language teaching. First, with respect to methodological principles, modern language teaching builds on teaching experience but places greater weight on empirically motivated methodological principles derived from scientific research. Second, with respect to instructional content, modern language teaching is no longer the study of linguistic knowledge alone, but spans three modes of communication
Review Outline for "Second Language Acquisition Theory" (with Reference Answers)
Chapter 1: Introduction
1. SLA researchers generally study groups rather than individuals. What is your view of this? (p. 10) In SLA research, researchers generally take groups as their object of study, and their conclusions apply to a particular group.
But a second language teacher is more concerned with individual learners; what matters to the teacher is how to help every single student master the second language well.
In that case, the conclusions reached by SLA researchers may offer little practical guidance to teachers.
SLA researchers therefore need to strengthen research on individuals rather than confining themselves to groups.
2. What do you think is the best way to learn a foreign language? (p. 9)
Chapter 2: The Learner
1. What is the behaviorist model of first language acquisition, and who proposed it? (pp. 16-17) The nativist model of first language acquisition is grounded in Chomsky's transformational-generative grammar; nativism holds that the human brain contains a language acquisition device, so that a child born into any language environment can acquire language. A second language learner's attitude toward the culture of the target language affects the learning process. The speech accommodation model holds that conversation between people involves three distinct processes: the interlocutors maintain their own ways of speaking, they make their ways of speaking increasingly different from each other's, or they adopt features of each other's speech and converge.
8. What is foreigner talk? (p. 31) The simplified form of language that native speakers use when talking with non-native speakers; it may not conform to grammatical norms.
9. It is generally assumed that learners' attitudes affect their learning outcomes, but Hermann proposed the opposite view. What is Hermann's Resultative Hypothesis? (p. 32) It is usually thought that a learner's attitude toward the target language and the society that speaks it affects how successfully the learner acquires it. Hermann suggested the reverse may hold: if learners succeed, their success fosters positive attitudes toward the language and the society that speaks it; if they fail or do poorly, their attitudes turn negative.
10. What is language planning? (p. 32)
11. Into which circles is English generally divided? (pp. 34-35) English is generally divided into three circles: the inner circle, the outer circle, and the expanding circle.
Learning: Knowledge Representation, Organization, and Acquisition
Danielle S. McNamara and Tenaha O'Reilly, Old Dominion University

Knowledge acquisition is the process of absorbing and storing new information in memory, the success of which is often gauged by how well the information can later be remembered, or retrieved from memory. The process of storing and retrieving new information depends heavily on the representation and organization of this information. Moreover, the utility of knowledge can also be influenced by how the information is structured. For example, a bus schedule can be represented in the form of a map or a timetable. On the one hand, a timetable provides quick and easy access to the arrival time for each bus, but does little for finding where a particular stop is situated. On the other hand, a map provides a detailed picture of each bus stop's location, but cannot efficiently communicate bus schedules. Both forms of representation are useful, but it is important to select the representation most appropriate for the task. Similarly, knowledge acquisition can be improved by considering the purpose and function of the desired information. This article provides an overview of knowledge representation and organization, and offers five guidelines to improve knowledge acquisition and retrieval.

Knowledge Representation and Organization
There are numerous theories of how knowledge is represented and organized in the mind, including rule-based production models (Anderson & Lebière, 1998), distributed networks (Rumelhart & McClelland, 1986), and propositional models (Kintsch, 1998). However, these theories are all fundamentally based on the concept of semantic networks. A semantic network is a method of representing knowledge as a system of connections between concepts in memory. This section explains the basic assumptions of semantic networks and describes several different types of knowledge.

Figure 1: Schematic representation of a semantic network

Semantic Networks
According to semantic network models, knowledge is organized based on meaning, such that semantically related concepts are interconnected. Knowledge networks are typically represented as diagrams of nodes (i.e., concepts) and links (i.e., relations). The nodes and links are given numerical weights to represent their strengths in memory. In Figure 1, the node representing DOCTOR is strongly related to SCALPEL, whereas NURSE is weakly related to SCALPEL. These link strengths are represented here in terms of line width. Similarly, some nodes in Figure 1 are bolded to represent their strength in memory. Concepts such as DOCTOR and BREAD are more memorable because they are more frequently encountered than concepts such as SCALPEL and CRUST.

Mental excitation, or activation, spreads automatically from one concept to another related concept. For example, thinking of BREAD spreads activation to related concepts, such as BUTTER and CRUST. These concepts are primed, and thus more easily recognized or retrieved from memory. For example, in a typical semantic priming study (Meyer & Schvaneveldt, 1976), a series of words (e.g., BUTTER) and nonwords (e.g., BOTTOR) are presented, and participants determine whether each item is a word. A word is more quickly recognized if it follows a semantically related word. For example, BUTTER is more quickly recognized as a word if BREAD precedes it rather than NURSE. This result supports the assumption that semantically related concepts are more strongly connected than unrelated concepts.
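The weighted-network idea and the priming result just described can be made concrete with a small sketch. This is a toy Python model added for illustration only; the node names, weights, and spreading factor are invented, and real semantic network models use far more elaborate activation equations.

```python
# Toy semantic network: nodes with base strengths, weighted links, and one step
# of spreading activation used to model semantic priming.
network = {
    "BREAD":  {"BUTTER": 0.8, "CRUST": 0.6},
    "DOCTOR": {"NURSE": 0.7, "SCALPEL": 0.9},
    "NURSE":  {"DOCTOR": 0.7, "SCALPEL": 0.2},
}
base_strength = {"BREAD": 1.0, "DOCTOR": 1.0, "BUTTER": 0.5,
                 "CRUST": 0.3, "NURSE": 0.6, "SCALPEL": 0.3}

def activation_after_prime(prime: str, target: str, spread: float = 0.5) -> float:
    """Target activation = its base strength plus activation spread from the prime."""
    return base_strength.get(target, 0.0) + spread * network.get(prime, {}).get(target, 0.0)

# BUTTER is easier to recognize after BREAD than after NURSE (semantic priming).
print(activation_after_prime("BREAD", "BUTTER"))   # 0.9
print(activation_after_prime("NURSE", "BUTTER"))   # 0.5
```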
Figure 2: Schematic representation of ideas (propositions) in a semantic network

Network models represent more than simple associations. They must represent the ideas and complex relationships that comprise knowledge and comprehension. For example, the idea "The doctor uses a scalpel" can be represented as the proposition USE(DOCTOR, SCALPEL), consisting of the nodes DOCTOR and SCALPEL and the link USE (see Figure 2). Educators have successfully used similar diagrams, called concept maps, to communicate important relations and attributes amongst the key concepts of a lesson (Guastello, Beasley, & Sinatra, 2000).
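A minimal sketch of how a proposition such as USE(DOCTOR, SCALPEL) can be stored as a labeled link, in the spirit of the concept maps mentioned above. The tuple representation and the helper function are assumptions made for illustration, not a claim about any particular cognitive model.

```python
# Propositions as labeled links between concept nodes.
propositions = [
    ("DOCTOR", "USE", "SCALPEL"),      # "The doctor uses a scalpel"
    ("BREAD", "HAS-PART", "CRUST"),
]

def relations_of(concept: str):
    """All propositions in which the concept participates."""
    return [p for p in propositions if concept in (p[0], p[2])]

print(relations_of("DOCTOR"))   # [('DOCTOR', 'USE', 'SCALPEL')]
```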
Types of Knowledge
There are numerous types of knowledge, but the most important distinction is between declarative and procedural knowledge. Declarative knowledge refers to our memory for concepts, facts, or episodes, whereas procedural knowledge refers to the ability to perform various tasks. Knowledge of how to drive a car, solve a multiplication problem, or throw a football are all forms of procedural knowledge, called procedures or productions. Procedural knowledge may begin as declarative knowledge, but is proceduralized with practice (Anderson, 1982). For example, when first learning to drive a car, you may be told to put the key in the ignition to start the car, which is a declarative statement. However, after starting the car numerous times, this act becomes automatic and is completed with little thought. Indeed, procedural knowledge tends to be accessed automatically and to require little attention. It also tends to be more durable (less susceptible to forgetting) than declarative knowledge (Jensen & Healy, 1998).

Knowledge Acquisition
This section describes five guidelines for knowledge acquisition that emerge from how knowledge is represented and organized.

Process the material semantically. Knowledge is organized semantically; therefore, knowledge acquisition is optimized when the learner focuses on the meaning of the new material. Craik and his colleagues were among the first to provide evidence for the importance of semantic processing (Craik & Tulving, 1975). In their studies, participants answered questions concerning target words that varied according to the depth of processing involved. For example, semantic questions (e.g., Would the word fit appropriately in the sentence "He met a ____ on the street"? FRIEND vs. TREE) involve a greater depth of processing than phonemic questions (e.g., Does the word rhyme with LATE? CRATE vs. TREE), which in turn involve a greater depth than questions concerning the structure of a word (e.g., Is the word in capital letters? TREE vs. tree). They found that words processed semantically were better learned than words processed phonemically or structurally. Further studies have confirmed that learning benefits from greater semantic processing of the material.

Process and retrieve information frequently. A second learning principle is to test and retrieve the information numerous times. Retrieving, or self-producing, information can be contrasted with simply reading or copying it. Decades of research on a phenomenon called the generation effect have shown that passively studying items by copying or reading them does little for memory in comparison to self-producing, or generating, an item (Slamecka & Graf, 1978). Moreover, learning improves as a function of the number of times information is retrieved. Within an academic situation, this principle points to the need for frequent practice tests, worksheets, or quizzes. In terms of studying, it is also important to break up, or distribute, retrieval attempts (Melton, 1967; Glenberg, 1979). Distributed retrieval can include studying or testing items in a random order, with breaks, or on different days. In contrast, repeating information numerous times sequentially involves only a single retrieval from long-term memory, which does little to improve memory for the information.

Learning and retrieval conditions should be similar. How knowledge is represented is determined by the conditions and context (internal and external) in which it is learned, and this in turn determines how it is retrieved: information is best retrieved when the conditions of learning and retrieval are the same. This principle has been referred to as encoding specificity (Tulving & Thompson, 1973). For example, in one experiment, participants were shown sentences with an adjective and a noun printed in capital letters (e.g., The CHIP DIP tasted delicious.) and told that their memory for the nouns would be tested afterward. In the recognition test, participants were shown the noun either with the original adjective (CHIP DIP), a different adjective (SKINNY DIP), or without an adjective (DIP). Noun recognition was better when the original adjective (CHIP) was presented than when no adjective was presented. Moreover, presenting a different adjective (SKINNY) yielded the lowest recognition (Light & Carter-Sobell, 1970). This finding underscores the importance of matching learning and testing conditions.

Encoding specificity is also important in terms of the questions used to test memory or comprehension. Different types of questions tap into different levels of understanding. For example, recalling information involves a different level of understanding, and different mental processes, than does recognizing information. Likewise, essay and open-ended questions assess a different level of understanding than do multiple-choice questions (McNamara & Kintsch, 1996). Essay and open-ended questions generally tap into a conceptual or situational understanding of the material, which results from an integration of text-based information and the reader's prior knowledge. In contrast, multiple-choice questions involve recognition processes and typically assess a shallow or text-based understanding. A text-based representation can be impoverished and incomplete because it consists only of concepts and relations within the text. This level of understanding, likely developed by a student preparing for a multiple-choice exam, would be inappropriate preparation for an exam with open-ended or essay questions. Thus, students should benefit by adjusting their study practices according to the expected type of questions. Alternatively, students may benefit from reviewing the material in many different ways, such as recognizing the information, recalling the information, and interpreting the information. These latter processes improve understanding and maximize the probability that the various ways the material is studied will match the way it is tested. From a teacher's point of view, including different types of questions on worksheets or exams ensures that each student will have an opportunity to convey their understanding of the material.

Connect new information to prior knowledge. Knowledge is interconnected; therefore, new material that is linked to prior knowledge will be better retained.
A driving factor in text and discourse comprehension is prior knowledge (Bransford & Johnson, 1972). Skilled readers actively use their prior knowledge during comprehension. Prior knowledge helps the reader to fill in contextual gaps within the text and to develop a better global understanding or situation model of the text. Given that texts rarely (if ever) spell out everything needed for successful comprehension, using prior knowledge to understand text and discourse is critical. Moreover, thinking about what you already know about a topic provides connections in memory to the new information – the more connections that are formed, the more likely the information will be retrievable from memory.

Create cognitive procedures. Procedural knowledge is better retained and more easily accessed. Therefore, one should develop and use cognitive procedures when learning information. Procedures can include shortcuts for completing a task (e.g., using "fast 10s" to solve multiplication problems) as well as memory strategies that increase the distinctive meaning of information. Cognitive research has repeatedly demonstrated the benefits of memory strategies, or mnemonics, for enhancing the recall of information. There are numerous types of mnemonics, but one well-known mnemonic is the method of loci. This technique was invented originally for the purpose of memorizing long speeches in the times before luxuries such as paper and pencil were readily available (Yates, 1966). The first task is to imagine and memorize a series of distinct locations along a familiar route, such as a pathway from one campus building to another. Each topic of a speech (or word in a word list; Crovitz, 1971) can then be pictured in a location along the route. When it comes time to recall the speech or word list, the items are simply "found" by mentally traveling the pathway.

Mnemonics are generally effective because they increase semantic processing of the words (or phrases) and render them more meaningful by linking them to familiar concepts in memory. Mnemonics also provide "ready-made" effective cues for retrieving the information. Another important aspect of mnemonics is that mental imaging is often involved. Images not only render the information more meaningful, but they provide an additional route for "finding" information in memory (e.g., Paivio, 1990). As mentioned earlier, increasing the number of meaningful links to information in memory increases the likelihood it can be retrieved.

Strategies are also an important component of metacognition (Hacker, Dunlosky, & Graesser, 1998). Metacognition is the ability to think about, understand, and manage one's learning. First, one must develop an awareness of one's own thought processes; simply being aware of thought processes increases the likelihood of more effective knowledge construction. Second, the learner must be aware of whether or not comprehension has been successful; realizing when comprehension has failed is crucial to learning. The final, and most important, stage of metacognitive processing is fixing the comprehension problem. The individual must be aware of and use strategies to remedy comprehension and learning difficulties. For successful knowledge acquisition to occur, all three of these processes must occur. Without thinking or worrying about learning, the student cannot realize whether the concepts have been successfully grasped. Without realizing that information has not been understood, the student cannot engage in strategies to remedy the situation. And if nothing is done about a comprehension failure, awareness is futile.
Conclusion
Knowledge acquisition is integrally tied to how the mind organizes and represents information. Learning can be enhanced by considering the fundamental properties of human knowledge as well as the ultimate function of the desired information. The most important property is that knowledge is organized semantically; therefore, learning methods should enhance meaningful study of the new information. Learners should also create as many links to the information as possible. In addition, learning methods should be matched to the desired outcome. Just as using a bus timetable to find a bus stop location is ineffective, learning to recognize information will do little good on an essay exam.

Bibliography
Anderson, J. R. 1982. Acquisition of a cognitive skill. Psychological Review 89: 369-406.
Anderson, J. R., and Lebière, C. 1998. The Atomic Components of Thought. Mahwah, NJ: Erlbaum.
Bransford, J., and Johnson, M. K. 1972. Contextual prerequisites for understanding: Some investigations of comprehension and recall. Journal of Verbal Learning and Verbal Behavior 11: 717-726.
Craik, F. I. M., and Tulving, E. 1975. Depth of processing and the retention of words in episodic memory. Journal of Experimental Psychology: General 104: 268-294.
Crovitz, H. F. 1971. The capacity of memory loci in artificial memory. Psychonomic Science 24: 187-188.
Glenberg, A. M. 1979. Component-levels theory of the effects of spacing of repetitions on recall and recognition. Memory & Cognition 7: 95-112.
Guastello, F., Beasley, M., and Sinatra, R. 2000. Concept mapping effects on science content comprehension of low-achieving inner-city seventh graders. RASE: Remedial & Special Education 21: 356-365.
Hacker, D. J., Dunlosky, J., and Graesser, A. C. 1998. Metacognition in Educational Theory and Practice. Mahwah, NJ: Lawrence Erlbaum.
Jensen, M. B., and Healy, A. F. 1998. Retention of procedural and declarative information from the Colorado Drivers' Manual. In M. J. Intons-Peterson & D. Best (Eds.), Memory Distortions and their Prevention (pp. 113-124). Mahwah, NJ: Erlbaum.
Kintsch, W. 1998. Comprehension: A Paradigm for Cognition. New York: Cambridge University Press.
Light, L. L., and Carter-Sobell, L. 1970. Effects of changed semantic context on recognition memory. Journal of Verbal Learning and Verbal Behavior 9: 1-11.
McNamara, D. S., and Kintsch, W. 1996. Learning from text: Effects of prior knowledge and text coherence. Discourse Processes 22: 247-287.
Melton, A. W. 1967. Repetition and retrieval from memory. Science 158: 532.
Meyer, D. E., and Schvaneveldt, R. W. 1976. Meaning, memory structure, and mental processes. Science 192: 27-33.
Paivio, A. 1990. Mental Representations: A Dual Coding Approach. New York: Oxford University Press.
Rumelhart, D. E., and McClelland, J. L. 1986. Parallel Distributed Processing: Explorations in the Microstructure of Cognition (Vol. 1: Foundations). Cambridge, MA: MIT Press.
Slamecka, N. J., and Graf, P. 1978. The generation effect: Delineation of a phenomenon. Journal of Experimental Psychology: Human Learning and Memory 4: 592-604.
Tulving, E., and Thompson, D. M. 1973. Encoding specificity and retrieval processes in episodic memory. Psychological Review 80: 352-373.
Yates, F. A. 1966. The Art of Memory. Chicago, IL: University of Chicago Press.
Complexity, Accuracy and Fluency in SLA
Complexity, Accuracy and Fluency in Second Language Acquisition
Alex Housen (Vrije Universiteit Brussel) and Folkert Kuiken (Universiteit van Amsterdam)

INTRODUCTION
This special issue addresses a general question that is at the heart of much research in applied linguistics and second language acquisition (SLA): What makes a second or foreign language (L2) user, or a native speaker for that matter, a more or less proficient language user?

Many researchers and language practitioners believe that the constructs of L2 performance and L2 proficiency are multi-componential in nature, and that their principal dimensions can be adequately, and comprehensively, captured by the notions of complexity, accuracy and fluency (e.g. Skehan 1998; Ellis 2003, 2008; Ellis and Barkhuizen 2005). As such, complexity, accuracy and fluency (henceforth CAF) have figured as major research variables in applied linguistic research. CAF have been used both as performance descriptors for the oral and written assessment of language learners and as indicators of learners' proficiency underlying their performance; they have also been used for measuring progress in language learning.

A review of the literature suggests that the origins of this triad lie in research on L2 pedagogy, where in the 1980s a distinction was made between fluent and accurate L2 usage to investigate the development of oral L2 proficiency in classroom contexts. One of the first to use this dichotomy was Brumfit (1984), who distinguished between fluency-oriented activities, which foster spontaneous oral L2 production, and accuracy-oriented activities, which focus on linguistic form and on the controlled production of grammatically correct linguistic structures in the L2 (cf. also Hammerly 1991).

The third component of the triad, complexity, was added in the 1990s, following Skehan (1989), who proposed an L2 model which for the first time included CAF as the three principal proficiency dimensions. In the 1990s the three dimensions were also given their traditional working definitions, which are still used today. Complexity has thus been commonly characterized as '[t]he extent to which the language produced in performing a task is elaborate and varied' (Ellis 2003: 340), accuracy as the ability to produce error-free speech, and fluency as the ability to process the L2 with 'native-like rapidity' (Lennon 1990: 390) or 'the extent to which the language produced in performing a task manifests pausing, hesitation, or reformulation' (Ellis 2003: 342).

CAF in SLA research
Since the 1990s these three concepts have appeared predominantly, and prominently, as dependent variables in SLA research. Examples include studies of the effects on L2 acquisition of age, instruction, individual learner features, and task type, as well as studies on the effects of learning context (e.g. Bygate 1999; Collentine 2004; Derwing and Rossiter 2003; Skehan and Foster 1999; Freed 1995; Freed, Segalowitz and Dewey 2004; Kuiken and Vedder 2007; Muñoz 2006; Spada and Tomita 2007; Yuan and Ellis 2003). From this diverse body of research, CAF emerge as distinct components of L2 performance and L2 proficiency which can be separately measured, which may be variably manifested under varying conditions of L2 use, and which may be differentially developed by different types of learners under different learning conditions.

From the mid-1990s onwards, inspired by advances in cognitive psychology and psycholinguistics (cf.
Anderson 1993; Levelt 1989), CAF have also increasingly figured as the primary foci or even as the independent variables of investigation in SLA (e.g. Guillot 1999; Hilton 2008; Housen, Pierrard and Van Daele 2005; Larsen-Freeman 2006; Lennon 2000; Riggenbach 2000; Robinson 2001; Segalowitz 2007; Skehan 1998; Skehan and Foster 2007; Tonkyn 2007; Towell 2007; Towell and Dewaele 2005; Tavakoli and Skehan 2005; Van Daele, Housen and Pierrard 2007). Here CAF emerge as principal epiphenomena of the psycholinguistic mechanisms and processes underlying the acquisition, representation and processing of L2 knowledge. There is some evidence to suggest that complexity and accuracy are primarily linked to the current state of the learner's (partly declarative, explicit and partly procedural, implicit) interlanguage knowledge (L2 rules and lexico-formulaic knowledge), whereby complexity is viewed as 'the scope of expanding or restructured second language knowledge' and accuracy as 'the conformity of second language knowledge to target language norms' (Wolfe-Quintero et al. 1998: 4). Thus, complexity and accuracy are seen as relating primarily to L2 knowledge representation and to the level of analysis of internalized linguistic information. In contrast, fluency is primarily related to learners' control over their linguistic L2 knowledge, as reflected in the speed and ease with which they access relevant L2 information to communicate meanings in real time, with 'control improv[ing] as the learner automatizes the process of gaining access' (Wolfe-Quintero et al. 1998: 4).

Defining CAF
In spite of the long research interest in CAF, none of these three constructs is uncontroversial and many questions remain, including such fundamental questions as how complexity, accuracy and fluency should be defined as constructs. Despite the belief that we share a common definition of CAF as researchers and language teachers, there is evidence that agreement cannot be taken for granted and that various definitions and interpretations coexist.

Accuracy (or correctness) is probably the oldest, most transparent and most consistent construct of the triad, referring to the degree of deviancy from a particular norm (Hammerly 1991; Wolfe-Quintero et al. 1998). Deviations from the norm are usually characterized as errors. Straightforward though this characterization may seem, it raises the thorny issue of criteria for evaluating accuracy and identifying errors, including whether these criteria should be tuned to prescriptive standard norms (as embodied by an ideal native speaker of the target language) or to non-standard and even non-native usages acceptable in some social contexts or in some communities (Ellis 2008; James 1998; Polio 1997).

There is not the same amount of (relative) denotative congruence in the applied linguistics community with regard to fluency and complexity as there is with regard to accuracy. Historically, and in lay usage, fluency typically refers to a person's general language proficiency, particularly as characterized by perceptions of ease, eloquence and 'smoothness' of speech or writing (Chambers 1997; Freed 2000; Guillot 1999; Hilton 2008; Lennon 1990; Koponen and Riggenbach 2000). Language researchers for their part have mainly analyzed oral production data to determine exactly which quantifiable linguistic phenomena contribute to fluency in L2 speech (e.g. Lennon 1990; Kormos and Dénes 2004; Cucchiarini, Strik and Boves 2002; Towell, Hawkins and Bazergui 1996).
This research suggests that speech fluency is a multi-componential construct in which different sub-dimensions can be distinguished, such as speed fluency (rate and density of delivery), breakdown fluency (number, length and distribution of pauses in speech) and repair fluency (number of false starts and repetitions) (Tavakoli and Skehan 2005).As befits the term, complexity is the most complex, ambiguous and least understood dimension of the CAF triad. For a start, the term is used in the SLA literature to refer both to properties of language task (task complexity) and to properties of L2 performance and proficiency (L2 complexity) (e.g., Robinson 2001; Skehan 2001). L2 complexity in turn has been interpreted in at least two different ways: as cognitive complexity and as linguistic complexity (DeKeyser 2008; Housen, Pierrard and Van Daele 2005; Williams and Evans 1998). Both types of complexity in essence refer to properties of language features (items, patterns, structures, rules) or (sub)systems (phonological, morphological, syntactic, lexical) thereof. However, whereas cognitive complexity is defined from the perspective of the L2 learner-user, linguistic complexity is defined from the perspective of the L2 system or the L2 features. Cognitive complexity (ordifficulty) refers to the relative difficulty with which language features are processed in L2 performance and acquisition. The cognitive complexity of an L2 feature is a variable property which is determined both by subjective, learner-dependent factors (e.g. aptitude, memory span, motivation, L1 background) as well as by more objective factors, such as its input saliency or its inherent linguistic complexity. Thus, cognitive complexity is a broader notion than linguistic complexity, which is one of the (many) factors that may (but need not) contribute to learning or processing difficulty.Linguistic complexity, in turn, has been thought of in at least two different ways: as a dynamic property of the learner’s interlanguage system at large and as a more stable property of the individual linguistic elements that make up the interlanguage system. Accordingly, when considered at the level of the learner’s interlanguage system, linguistic complexity has been commonly interpreted as the size, elaborateness, richness and diversity of the learner’s linguistic L2 system. When considered at the level of the individual features themselves, one could speak of structural complexity, which itself can be further broken down into the formal and the functional complexity of an L2 feature (DeKeyser 1998; Williams and Evans 1988; Housen, Pierrard and Van Daele 2005).Operationalizing and measuring CAFClearly, then, accuracy and particularly fluency and complexity are multifaceted and multidimensional concepts. Related to the problems of constructed validity discussed above (i.e. the fact that CAF lack appropriate definitions supported by theories of linguistics and language learning), there are also problems concerning their operationalization, that is, how CAF can be validly, reliably and efficiently measured. CAF have been evaluated across various language domains by means of a wide variety of tools, ranging from holistic and subjective ratings by lay or expert judges, to quantifiable measures (frequencies, ratios, formulas) of general or specificlinguistic properties of L2 production so as to obtain more precise and objective accounts of an L2 learner’s level within each (sub-)dimension of proficiency (e.g. 
range of word types and proportion of subordinate clauses for lexical and syntactic complexity, number and type of errors for accuracy, number of syllables and pauses for fluency; for inventories of CAF measures, see Ellis and Barkhuizen 2005; Iwashita, Brown, McNamara and O'Hagan 2008; Polio 2001; Wolfe-Quintero et al. 1998). However, critical surveys of the available tools and metrics for gauging CAF have revealed various problems, both in terms of the analytic challenges which they present and in terms of their reliability, validity and sensitivity (Norris and Ortega 2003; Ortega 2003; Polio 1997, 2001; Wolfe-Quintero et al. 1998). Also the (cor)relation between holistic and objective measures of CAF, and between general and more specific, developmentally-motivated measures, does not appear to be straightforward (e.g. Halleck 1995; Skehan 2003; Robinson and N. Ellis 2008).Interaction of CAF componentsAnother point of discussion concerns the question to what extent these three dimensions arein(ter)dependent in L2 performance and L2 development (Ellis 1994, 2008; Skehan 1998; Robinson 2001; Towell 2007). For instance, according to Ellis, increase in fluency in L2 acquisition may occur at the expense of development of accuracy and complexity due to the differential development of knowledge analysis and knowledge automatization in L2 acquisition and the ways in which different forms of implicit and explicit knowledge influence the acquisition process. The differential evolution of fluency, accuracy and complexity would furthermore be caused by the fact that ‘the psycholinguistic processes involved in using L2 knowledge are distinct from acquiring new knowledge. To acquire the learner must attend consciously to the input and, perhaps also, make efforts to monitor output, but doing so may interfere with fluent reception and production’ (Ellis 1994: 107). Researchers who subscribe tothe view that the human attention mechanism and processing capacity are limited (e.g. Bygate 1999; Skehan 1998; Skehan and Foster 1999) also see fluency as an aspect of L2 production which competes for attentional resources with accuracy, while accuracy in turn competes with complexity. Learners may focus (consciously or subconsciously) on one of the three dimensions to the detriment of the other two. A different view is proposed by Robinson (2001, 2003) who claims that learners can simultaneously access multiple and non-competitional attentional pools; as a result manipulating task complexity by increasing the cognitive demands of a task can lead to simultaneous improvement of complexity and accuracy.OVERVIEW OF THE VOLUMEAs the above discussion demonstrates, many challenges remain in attempting to understand the nature and role of CAF in L2 use, L2 acquisition and in L2 research. But despite these challenges, complexity, accuracy and fluency are concepts that are still widely used to evaluate L2 learners, both in SLA research as in L2 education contexts. We therefore thought it timely to take stock of what L2 research on CAF has brought us so far and in which directions future research could or should develop. 
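As a rough illustration of the kind of general quantitative measures mentioned above (lexical range, subordination, pausing), the following short Python sketch computes three toy indices from a transcribed sample. This is not from the literature cited here; the thresholds, token lists, and index names are illustrative assumptions, not validated CAF instruments.

```python
import re

def caf_sketch(text, pause_marks=("...", "uh", "um")):
    """Toy indices in the spirit of general CAF measures (illustrative only)."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    clauses = [c for c in re.split(r"[.;!?]", text) if c.strip()]
    subordinators = {"because", "although", "that", "which", "when", "if", "while"}

    lexical_range = len(set(words)) / len(words) if words else 0.0              # type-token ratio
    subordination = sum(w in subordinators for w in words) / max(len(clauses), 1)  # rough subordination per clause
    dysfluencies = sum(text.lower().count(m) for m in pause_marks)              # filled/unfilled pause marks

    return {"lexical_range": round(lexical_range, 2),
            "subordination_per_clause": round(subordination, 2),
            "dysfluency_marks": dysfluencies}

print(caf_sketch("Um, I think that the house, which we saw, is nice because it has a view..."))
```

A sketch like this only mimics the general, non-specific metrics criticized later in this issue; developmentally motivated or domain-specific measures would require syntactic parsing and timing data rather than simple counts.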
With this broad goal in mind, four central articles were invited (by Rod Ellis; Peter Skehan; John Norris and Lourdes Ortega; Peter Robinson, Teresa Cadierno and Yasuhiro Shirai), and two commentary articles were commissioned (by Diane Larsen-Freeman and Gabriele Pallotti).Controversial issuesThe following issues were offered to the contributors as guidelines for reflection and discussion: 1.The constructs of CAF: definition, theoretical base and scopeExactly what is meant by complexity, accuracy and fluency, i.e. how can they be defined as constructs? To what extent do CAF adequately and exhaustively capture all relevant aspects anddimensions of L2 performance and L2 proficiency? To what extent are the three constructs themselves multi-componential? How do they manifest themselves in the various domains of language (e.g. phonology and prosody, lexis, morphology, syntax)? How do they relate to theoretical models of L2 competence, L2 proficiency and L2 processing? And how do CAF relate to L2 development (i.e. are CAF valid indicators of language development)?2.Operationalization and measurement of CAFHow can the three constructs best be operationalized as components of L2 performance and L2 proficiency in a straightforward, objective and non-intuitive way in empirical research designs? How can they be most adequately (i.e. validly, reliably and practically) measured?3.Interdependency of the CAF component sTo what extent are the three CAF components independent of one another in either L2 performance, L2 proficiency and L2 development? To what extent can they be measured separately?4.Underlying correlates of CAFWhat are the underlying linguistic, cognitive and psycholinguistic correlates of CAF? How do the three constructs relate to a learner’s knowledge bases (e.g. implicit-explicit, declarative-procedural), memory stores (working, short-term or long-term), and processing mechanisms and learning processes (e.g. attention, automatization, proceduralization)?5.External factors that influence CAFWhich external factors can influence the manifestation and development of CAF in L2 learning and use, such as, for example characteristics of language tasks (e.g. type and amount of planning), personality and socio-psychological features of the L2 learner (e.g. degree of extraversion, language anxiety, motivation, language aptitude), and features of pedagogic intervention (e.g. what types of instruction are effective for developing each of these dimensions within a classroom context?)The contributions to this special issue all explicitly focus on either one, two or all three of the CAF constructs in relation to one or several of the five issues listed above, which in some cases are illustrated with new empirical research. We will now present a short overview of the topics and questions that are raised by the authors in the four central articles and in the two commentaries.EllisThe first article by Rod Ellis addresses the role and effects of one type of external factor, planning, on CAF in L2 performance and L2 acquisition. Ellis first presents a state-of-art/comprehensive survey of the research on planning. Three types of planning seem to be relevant with respect to CAF: rehearsal, strategic planning and within-task planning. 
Ellis concludes that all three types of planning have a beneficial effect on fluency, but the results for complexity and accuracy are more mixed, reflecting both the type of planning and also the mediating role of various other external factors, including task design, implementation variables and individual difference factors.Ellis then provides a theoretical account for the role of planning in L2 performance in terms of Levelt’s (1989) model of speech production and the distinction between implicit and explicit L2 knowledge. Rehearsal provides an opportunity for learners to attend to all three components in Levelt’s model – conceptualization, formulation and articulation – and thus benefits all three dimensions of L2 production. According to the author, strategic planning assists conceptualization in particular and thus contributes to greater message complexity and also to enhanced fluency. Unpressured within-task planning eases formulation and also affords time for monitoring, that is, for using explicit L2 knowledge; in this way accuracy increases.SkehanThe second article, by Peter Shehan, addresses the issue of operationalization and measurement of CAF. Skehan claims that fluency needs to be rethought if it is to be measured effectively. In addition he argues that CAF measures need to be supplemented by measures of lexical use. Not only because empirical evidence suggests that the latter is a separate aspect of overall performance, but also because lexical access and retrieval figure prominently in all models of speech production. Skehan also points to the lack of native speaker data in CAF research. Such data are of crucial importance, as they constitute a baseline along which L2 learners can be compared. Skehan presents a number of empirical studies in which, for identical tasks and similar task conditions, both native and non-native participants are involved, and for which measures of complexity, accuracy (for non-native speakers only), fluency, and lexis were obtained. Results suggest that the difference between native and non-native performance on tasks is related more to aspects of fluency and lexis than to the grammatical complexity of the language produced. Regarding fluency, the major difference between the two groups is the pattern of pause locations, in that native speakers use end-of-clause points for more effective, listener-friendly pausing, pausing there slightly more often albeit for shorter periods, while non-natives pause more mid-clause. Lexical performance is noticeably different between the two groups, both in terms of lexical density and of lexical variety (i.e. the use of less frequent words). Especially interesting is the difference in disruptiveness for fluency of the use of less frequent words, as non-natives are derailed in speech planning when they are pushed to use such words more because of task demands.Skehan also considers the issue of interdependency between CAF measures; in particular between accuracy and complexity, since positive correlations between these two aspects have been less common in the literature. In order to account for these correlations. Skehan explores rival claims from his own Trade-off Hypothesis and Robinson’s Cognition Hypothesis. Skehan argues that such joint raised performance in accuracy and complexity is not a function of taskdifficulty (as Robinson’s Cognition Hypothesis would predict) but, rather, that it reflects the joint operation of separate task and task condition factors. 
Like Ellis, Skehan tries to link the research findings to Levelt’s (1989) model of speaking.Robinson, Cadierno and ShiraiThe article by Peter Robinson, Teresa Cadierno and Yasuhiro Shirai exemplifies a particularly prolific strand of empirical research on CAF, namely research on the impact of task properties on learners’ L2 performance. The authors present results of two studies that measure the effects of increasing the complexity of task demands in two conceptual domains (time and motion) using specific rather than general measures of the accuracy and complexity of L2 speech production. The studies are carried out within the theoretical framework of Robinson’s Cognition Hypothesis. This hypothesis claims that pedagogic tasks should be sequenced for learners in an order of increasing cognitive complexity, and that along resource-directing dimensions of task demands increasing effort at conceptualization promotes more complex and more grammaticized L2 speech production.The specific measures used are motivated by research into the development of tense-aspect morphology for reference time, and by typological, cross-linguistic research into the use of lexicalization patterns for reference to motion. Results show that there is more complex, developmentally advanced use of tense-aspect morphology on conceptually demanding tasks compared to less demanding tasks, and a trend to more accurate, target-like use of lexicalization patterns for referring to motion on complex tasks. By using specific measures of complexity and accuracy (alongside general measures), these authors address the issue of measurement of CAF in their contribution. They contrast the effectiveness of these conceptually specific metrics with the general metrics for assessing task-based language production used in previous studies, and argue for the use of both. In addition, Robinson, Cadierno and Shirai also argue for a higher sensitivityof the specific measures which are used in order to gauge cognitive processing effects on L2 speech production along selected dimensions of task complexity.Norris and OrtegaThe article by John Norris and Lourdes Ortega addresses the crucial issue of the operationalization and measurement of CAF. They critically examine current practices in the measurement of complexity, accuracy, and fluency in L2 production to illustrate the need for what they call more organic and sustainable measurement practices. Building from the case of syntactic complexity, they point to impoverished operationalizations of multi-dimensional CAF constructs and the lack of attention to CAF as a dynamic and inter-related set of constantly changing sub-systems. They observe a disjuncture among the theoretical claims researchers make, the definition of the constructs that they attempt to measure, and the grain size and focus of the operationalizations via which measurement happens. Furthermore they question current reasoning, under which a linear or co-linear trajectory of greater accuracy, fluency, and complexity is expected. Instead they want to consider measurement demands that stem from a dynamic, variable, and non-linear view of L2 development. 
They therefore call for a closer relation between theory and measurement and argue for a more central role for multi-dimensionality, dynamicity, variability, and non-linearity in future CAF research.This overview of the four central articles in this volume shows that the authors approach CAF from various perspectives, focus on different issues and investigate distinct research topics. What they share is their desire to build further on the results to date. This is where the commentaries by Diane Larsen-Freeman and Gabriele Pallotti come in.Larsen-FreemanLarsen-Freeman starts by reminding us of the fact that, historically, CAF research has come out of the search for an L2 developmental index. The big challenge has always been how to operationalize CAF. According to Larsen-Freeman the measures we have been using to date may be too blunt and not suitable because we may not have been looking at the right things in the right places. She therefore seconds Robinson, Cadierno and Shirai’s suggestion not to stick to general measures, but to use more specific measures and to look at more detailed aspects of performance. She further points out that the operationalization and measurement issue is complicated by the interdependency of the CAF components. As mentioned by some of the authors in this volume, there is an increasing amount of evidence, that complexity, accuracy and fluency do not operate in complete independence form each other, and that findings obtained by CAF measures depend on the participants involved and on the context in which the data have been collected. For those reasons Larsen-Freeman does not expect much from studying the CAF components one by one to see what effect they have on learner performance in a linear causal way. In her view such a reductionist approach does little to advance our understanding, as we risk ignoring their mutual interaction. Instead, we should try to capture the development of multiple sub-systems over time, and in relation to each other. With reference to Wolfe-Quintero et al. (1998) who have demonstrated that many, if not all, aspects of language development are non-linear, Larsen-Freeman calls for a broader conceptual framework and for more longitudinal and non-linear research, in which difference and variation occupy a central role. She considers a dynamic or complex systems theory, in which more socially-oriented measures of development are employed as the best candidates for such a framework.PallottiPallotti starts by signaling some definitional and operationalizational problems of CAF constructs. As an example of an unresolved question in this area he opposes Skehan – whodoubts whether lexical and syntactic complexity are ‘different aspects of the same performance area’ or two separate areas – to Norris and Ortega, who consider syntactic complexity to be a multi-dimensional construct with several sub-constructs. Pallotti considers CAF to be a good starting point for describing linguistic performance, but they do not constitute a theory or a research program in themselves. 
He emphasizes that a clear distinction should be made between on the one hand CAF, referring to the properties of language performance as a product, and linguistic development on the other, referring to a process, with its sub-dimensions such as route and rate.In line with Larsen-Freeman, and with specific reference to the contributions by Norris and Ortega and Robinson et al., Pallotti welcomes the use of specific measures in addition to the more general ones, as one cannot expect that ‘all sorts of task complexification lead to higher complexity of any linguistic feature.’ He questions, however, what the use of specific measures may contribute to theorizing about CAF. Although by using specific measures the relationship between task difficulty and linguistic complexity may become more reliable, ‘discovering such relationships looks more like validating the tasks as elicitation procedures for specific linguistic features than like confirmations of general theories about speech production.’Pallotti agrees with Larsen-Freeman’s call for a more central role of non-linearity in L2 acquisition. He illustrates this by referring to Norris and Ortega’s example that syntactic complexity as measured by means of a subordination ratio may not always increase linearly, but that syntactic complexity may grow in other ways, for example by phrasal and clausal complexification. And also for accuracy it is not always the case that ‘more is better’. He does not, however, embrace Larsen-Freeman’s idea that variation should move to the front of CAF research. This is what he calls ‘the necessary variation fallacy’: research should not only be concerned with variations and differences, but also with constants and similarities. Instead he argues that adequacy be included as a separate dimension of L2 production and proficiency,。
English-Chinese translation: psychology terms
感觉记忆(SM)—sensory memory  短期记忆(STM)—short-term memory
长期记忆(LTM)—long-term memory  复诵—rehearsal  预示(激发)—priming  童年失忆症—childhood amnesia  视觉编码(表征)—visual code (representation)  听觉编码—acoustic code  运作记忆—working memory  语意性知识—semantic knowledge  记忆扫瞄程序—memory scanning procedure  竭尽式扫瞄程序—exhaustive scanning procedure
自我终止式扫瞄—self-terminating scanning
程序性知识—procedural knowledge  命题(陈述)性知识—propositional (declarative) knowledge  情节(轶事)性知识—episodic knowledge
讯息处理深度—depth of processing  精致化处理—elaboration  登录特殊性—coding specificity  记忆术—mnemonic  位置记忆法—method of loci  字钩法—peg word  (线)探索(测)(激发)字—prime  关键词—key word  命题思考—propositional thought  心像思考—imaginal thought  行动思考—motoric thought  概念—concept  原型—prototype  属性—property  特征—feature  范例策略—exemplar strategy  语言相对性(假说)—linguistic relativity hypothesis
音素—phoneme  词素—morpheme  (字词的)外延与内涵意义—denotative & connotative meaning  (句子的)表层与深层结构—surface & deep structure  语意分析法—semantic differential  全句语言—holophrastic speech  过度延伸—over-extension  电报式语言—telegraphic speech  关键期—critical period  差异减缩法—difference reduction  方法目的分析—means-ends analysis  倒推—working backward  动机—motive  自由意志—free will  决定论—determinism  本能—instinct  种属特有行为—species-specific behavior  驱力—drive  诱因—incentive  驱力减低说—drive reduction theory
conceptual transfer
3. Transfer in trilingual and multilingual settings
Trilingual and multilingual settings: e.g. on both sides of the border between Nigeria and Cameroon, which involves the languages of two former colonial powers as well as several indigenous languages.
A challenge to CPH
Flege, Murray, and MacKay found that immigrants to Canada from Italy showed age differences in their pronunciation of English.
They accept that age differences can lead to differences in pronunciation, but they do not agree that these differences are caused by the existence of a critical period.
Q3: When may cross-linguistic influence be inevitable?
Cross-linguistic influence may be inevitable only once a second language begins to develop, and only after the processes of primary language acquisition are already underway.
NL, L2, and TL
Which one has the greater influence on TL learning?
The Cognitive Approach
Background: The Cognitive Approach was first proposed in the 1960s by the American psychologist John B. Carroll.
Carroll held that a second language is an integrated body of knowledge, and that foreign language teaching is essentially the process of studying and analysing its phonological, grammatical, and lexical forms so as to gain conscious control over those forms; in other words, the Cognitive Approach seeks to enable students to understand and control the structure of the language to a considerable degree.
"Cognition" is a term from psychology used to describe people's different habitual tendencies in observing, organizing, analysing, and recalling information and experience.
The Cognitive Approach attempts to replace the stimulus-response learning theory of the Audiolingual Method with a cognitive-symbolic theory of learning.
It rejects the view that language is a set of "structural patterns" and opposes repetitive mechanical drills in teaching.
It regards language as rule-governed creative activity: learning a language means mastering its rules rather than forming habits, and grammar should be taught deductively.
Pronunciation and the written form are learned together.
The four skills of listening, speaking, reading, and writing are trained together from the very beginning of foreign language study; the native language and translation may be used, and language errors are regarded as an unavoidable by-product of the foreign language learning process.
The approach stresses the role of comprehension in foreign language teaching and advocates creative communicative practice based on the understanding of new language material.
In teaching, audio-visual aids are widely used to make foreign language teaching situational and communicative.
The Cognitive Approach takes cognitive psychology as its theoretical basis and thereby puts foreign language teaching methodology on a scientific footing.
As a new, independent system of foreign language teaching methodology, however, it is still not fully developed and needs to be enriched in both theory and practice.
The method arose in the United States in the mid-1960s as a reaction against the Audiolingual Method. In the 1960s science and technology were developing rapidly; fierce international competition in politics, economics, military affairs, and science and technology created a demand for large numbers of highly qualified people able to take part directly in international scientific, technological, and cultural exchange, and the basic disciplines of psychology, pedagogy, and linguistics were themselves developing quickly in the United States.
Implementation: in foreign language teaching guided by the Cognitive Approach, students should be exposed to a wide range of language material, including material they cannot yet use productively, so that they absorb new input and enrich the foreign language they are learning.
Second Language Acquisition Theory
Chapter 1: Key factors in second language acquisition. Second language acquisition (SLA) is a complex process involving many interrelated factors, and this chapter examines the key factors that arise in it.
The first part asks what second language acquisition is.
The author discusses this question from six angles:
1. SLA is a composite phenomenon made up of many factors.
2. A comparison of SLA with first language acquisition. 3. The distinction between second language acquisition and foreign language acquisition. 4. The central place of syntax and morphology in SLA. 5. The distinction between linguistic competence and linguistic performance. 6. The contrast between acquisition and learning. The second part discusses the key factors that arise in the SLA process, which are the focus of this chapter.
It covers seven aspects: 1. The role of the first language: its influence on the SLA process, involving concepts such as language transfer and contrastive analysis.
2. The natural route of language development: the proposal that the route of SLA is similar to the order of first language acquisition, that is, that there is a definite acquisition order.
This includes the L1 = L2 hypothesis and error analysis.
3. Situational variation in learner language: learners use their L2 knowledge differently in different situations.
4. Individual learner differences: the effects of age, attitude, cognitive style, motivation, personality, and other factors on SLA.
5. The important role of language input. 6. Learning strategies. 7. The important place of formal instruction in SLA.
The third part summarizes a framework for investigating SLA: 1. situational factors; 2. input; 3. learner differences; 4. learning strategies; 5. language output. Chapter 2: The role of the first language. It is generally believed that the first language has an important influence on SLA, and that this influence is of two kinds: a helpful one, so-called positive transfer, and a hindering one, so-called negative transfer.
The author begins with behaviourist learning theory and analyses the influence of the first language in terms of habits and errors.
Behaviourist theory holds that when first-language habits are similar to second-language habits, the first language facilitates second language acquisition and few or no errors occur; conversely, when the two languages differ greatly, the first language inhibits acquisition, errors appear, and second language learning is hindered.
Next, from the angle of predicting potential errors, the author introduces the Contrastive Analysis Hypothesis.
Contrastive analysis has a psychological aspect and a linguistic aspect.
On the psychological side there is a strong and a weak version of contrastive analysis; the strong version holds that errors in the SLA process can be predicted by analysing the differences between the first and second languages.
Foshan, Guangdong Province, 2024: Grade 11 second-semester teaching quality assessment, English test with analysis
A. Giving Sophia … B. Comforting Sophia.
C. Taking Sophia to the hospital. D. Looking for supplies for Sophia.
1. What did Grace find about Sophia when she arrived at her house?
A. She was calling for help. B. She was weak with hunger.
C. She was in critical condition. D. She was suffering from an arm injury.
For Pauli, this sense of responsibility for the environment and communities was the driving force behind the project. “We have done too much analysis on environmental issues, and too much analysis on the problem often leads to inaction. I knew that whatever we’re doing is far from what is needed, and it’s also far from what is possible.”
When Grace got to the house around 10 p.m. she saw Sophia lying on the ground outside, bleeding from her head. Her eyes kept rolling to the back of her head. Sophia, who had a previous arm injury and a bad knee, recalled that she had been waiting outside for the delivery. As she turned, she fell and hit her head and lost consciousness.
Language Acquisition and Foreign Language Teaching: Course Syllabus
Course Syllabus for Language Acquisition and Foreign Language Teaching. I. Basic course information. Course code: 16064402. Course title: 语言习得与外语教学; English title: Second Language Acquisition and TEFL. Course category: specialized course. Class hours: 32. Credits: 2. Intended students: English Language and Literature majors. Assessment: coursework assessment. Prerequisite courses: Basic English I-III, English Pronunciation, English Grammar, English Listening, Oral English, Extensive Reading, English Writing. II. Course description. Language Acquisition and Foreign Language Teaching is an elective specialized course offered to second-year students majoring in English Language and Literature.
This course aims to give English majors a systematic overview of the different perspectives and theories in second language acquisition research, including the relationship between first language acquisition and second language acquisition, contrastive analysis and error analysis, research on acquisition orders, Krashen's Monitor Model, Universal Grammar and second language acquisition, individual difference factors in second language learners, cognitive models of second language acquisition, sociocultural theory, and language input and interaction.
Because the development of second language acquisition research is inseparable from second language teaching, the course also surveys the main schools of foreign language teaching methodology and uses case analysis to explore the problems of, and responses to, second language learning in classroom settings, helping students reflect on the problems they encounter in their own foreign language learning.
Second Language Acquisition and TEFL is an optional course for sophomores majoring in English Language. It is intended to introduce the students to some of the most important theories in the field of second language acquisition (SLA), including the association between first language acquisition and second language acquisition, contrastive analysis and error analysis, developmental sequence, Krashen's Monitor Model, the UG approach to SLA, individual differences in SLA, the cognitive approach to SLA, sociocultural theories, the role of language input and interaction in SLA, etc. In addition to outlining basic ideas and claims about second language acquisition from different perspectives, this course will also deal with some of the most popular approaches and methods in second language teaching, thus helping students to better understand their own problems in L2 learning. III. Course nature and teaching objectives. This course gives a comprehensive introduction to the content and nature of second language acquisition research, the subfields it covers, and the relevant classic theories and research findings.
Language Transfer (key essay topics in second language acquisition)
Language Transfer
In foreign language learning, the influence of the mother tongue cannot be ignored. Generally speaking, there is both positive and negative transfer from the mother tongue. Positive transfer plays a helpful role in English learning, while negative transfer does the opposite and causes many problems. Language transfer shows up mainly in phonology, vocabulary, grammar, writing, and so on. We should make the best use of the advantages and overcome the disadvantages.
Teachers: In teaching, teachers should help students find the commonalities between the two languages and relieve students' anxiety about learning English. (Awareness of the differences and similarities between the native language and the target language matters; the differences between the two languages are a key point in teaching.) Teachers should also view and handle students' mistakes properly, and not over-correct or over-criticize when mistakes occur, so that students are not discouraged from expressing themselves in the target language. Attention should also be paid to teaching methods, for example English-Chinese bilingual extensive reading exercises and the effective use of bilingual dictionaries.
Students: Learners should take a correct view of language transfer. Instead of regarding negative transfer as a barrier to learning English, learners should recognize it as an inevitable stage and a learning strategy in the process of English learning. Through contrastive analysis and error analysis they can find the causes of mistakes and make every effort to avoid or minimize negative transfer.
Increase the amount of language input. Learners should receive a large amount of foreign language input: read more, listen more, practice more, and memorize more, especially original English material; they should also be encouraged to communicate with each other in English. Only in this way can the influence of negative transfer be avoided or minimized.
Strengthen the teaching of cultural background. In teaching, teachers should use contrastive analysis to identify the differences and similarities between the two languages, especially in customs and historical background, and should strengthen the teaching of British and American cultural background; only in this way can learners acquire the second language faster and better.
The influence and interference of the mother tongue is inevitable. What we should do is understand its root causes, forms, and classifications so as to take a better view of English learning.
Fossilization
According to Krashen there are five causes of fossilization: insufficient quantity of target language input; inappropriate quality of target language input; the affective filter; the target language output filter; and the acquisition of deviant forms of the target language. According to many researchers, fossilization cannot be eliminated, but it can be reduced to some degree.
Teachers: Formulate appropriate teaching strategies. At the initial stage, teachers should have students focus on the features of the target language and emphasize accuracy and fluency.
At the advanced stage, learners begin to learn advanced grammar and complex sentence structures, so teachers should warn students not to rely on communicative skills or already-learned knowledge to avoid or paraphrase language they are unfamiliar with.
Help students develop appropriate learning strategies. Teachers should pay special attention to leading students to adopt learning strategies suited to their current stage and help them find the strategies that are right for them. Teachers should teach students how to learn and how to use different strategies as the learning content changes. Moreover, teachers should check whether the strategies students adopt are effective and make timely adjustments. Guide students' communicative strategies. Teachers should cultivate students' communicative competence and encourage them to adopt active and effective strategies to overcome difficulties in communication, so as to reduce interlanguage fossilization. In addition, teachers should increase effective and correct input and guide students toward correct communicative strategies and higher language competence, so that fossilization does not set in early. Arouse students' intrinsic learning motivation. Cultivate students' cross-cultural awareness. Deal with students' errors correctly (give effective feedback). Improve teachers' own quality.
Students: Be aware of language fossilization. Reduce negative transfer from the mother tongue (accumulate declarative and procedural knowledge; increase language output; seek exposure to the target language and its culture). Increase the quality and quantity of optimal input. Adopt proper learning strategies.
Interlanguage
All in all, teachers should keep in mind that interlanguage is a process of approaching the target language step by step. During this process students slowly revise their interim systems to accommodate new hypotheses about the target language system. Teachers should pay close attention to the study of interlanguage so as to treat students' interlanguage fairly and properly, value the training of learning strategies, and provide students with more opportunities for comprehensible input and output, so that learners' interlanguage can develop rapidly toward the target language. Learning strategies should be taught or trained; errors should be met with a positive attitude; input and output should be balanced in class; and full use should be made of positive transfer.
Critical Period Hypothesis
Put emphasis on pronunciation and listening; pay attention to teaching strategies. 1. In China, earlier is not always better for foreign language learning; the best period for beginning English study is around age ten.
Remember: word memorization
remember单词记忆Remember is a commonly used word in everyday language. It is a verb that refers to the process of recalling information or experiences from memory. Memory is the mental faculty of retaining and recalling past experiences and knowledge. Remembering is an essential cognitive function that plays a vital role in our daily lives.Recalling information from memory can occur in various ways. One way is through explicit or declarative memory, which involves conscious efforts to retrieve specific facts or events. For example, when asked about who the first president of the United States was, we remember the name George Washington. This type of memory retrieval is often aided by cues or prompts that trigger associations in the brain and help in recall.Remembering can also involve implicit or non-declarative memory, which refers to the unconscious recollection of skills or habits. This type of memory is responsible for our ability to perform various actions without conscious effort. For instance, riding a bicycle involves implicit memory where we remember how to balance and pedal without having to consciously think about it. This type of memory is formed through repeated practice and is less dependent on conscious awareness.Furthermore, remembering can be influenced by various factors such as emotions, context, and attention. Emotions play a significant role in memory formation and retrieval. When an event is emotionally charged, it tends to be remembered more vividly. For example, we may remember a traumatic experience or a happyoccasion with greater clarity than neutral events. Contextual cues, such as the environment or the presence of familiar people, can also trigger memory recall. Attention is another key factor in remembering. Paying attention to information or experiences enhances the encoding process, making it easier to retrieve later on.Memory is a complex phenomenon that can be categorized into different types. One classification is short-term memory and long-term memory. Short-term memory refers to the temporary storage of information that lasts for a few seconds to a minute. For example, remembering someone's phone number for a short period of time before writing it down. On the other hand, long-term memory involves the relatively permanent storage of information that can last for years. Long-term memory can be further divided into episodic memory, semantic memory, and procedural memory. Episodic memory refers to memory of specific events or experiences, such as recalling a particular birthday party. Semantic memory involves general knowledge and factual information, like knowing that Paris is the capital of France. Procedural memory refers to the memory of skills and how to perform various tasks, such as driving a car.Improving memory and the ability to remember can be beneficialin various aspects of life. There are several strategies that can aidin remembering information. One technique is repetition or rehearsal, where information is repeatedly reviewed, strengthening memory traces. Associating information with visual imagery or creating mental images can also enhance memory recall. Mnemonic devices, such as acronyms or rhymes, can help in remembering lists or complex information. Additionally,organizing information and breaking it down into smaller, manageable chunks, known as chunking, can make it easier to remember.In conclusion, remembering is a fundamental aspect of our cognitive abilities. 
It involves the retrieval of information or experiences from memory through conscious or unconscious processes. Memory is a complex phenomenon influenced by various factors such as emotions, context, and attention. Understanding memory processes and employing effective memory strategies can improve our ability to remember and enhance overall cognitive functioning.
Combining Declarative and Procedural Knowledge to Automate and Represent Ontology MappingLi Xu1,David W.Embley2,and Yihong Ding21Department of Computer Science,University of Arizona South,Sierra Vista,Arizona85635,U.S.A.{lxu}@2Department of Computer Science,Brigham Young University,Provo,Utah84602,U.S.A.{embley,ding}@Abstract.Ontologies on the Semantic Web are by nature decentralized.Fromthe body of ontology mapping approaches,we can draw a conclusion that aneffective approach to automate ontology mapping requires both data and meta-data in application domains.Most existing approaches usually represent data andmetadata by ad-hoc data structures,which is lack of formalisms to capture theunderlying semantics.Moreover,to approach semantic interoperability,there isa need to represent mappings between ontologies with well-defined semanticsthat guarantee accurate exchange of information.To address these problems,wepropose that domain ontologies attached with extraction procedures are capableof representing knowledge required tofind direct and indirect matches betweenontologies.Also mapping ontologies attached with query procedures not onlysupport equivalent inferences and computations on equivalent concepts and rela-tions but also improve query performance by applying query procedures to derivetarget-specific views.We conclude that a combination of declarative and proce-dural representation with ontologies favors the analysis and implementation forontology mapping that promises accurate and efficient semantic interoperability.1IntroductionOntologies on the Semantic Web,by nature,are decentralized and built independently by distinct groups.The research on ontology mapping is to compare ontological de-scriptions forfinding and representing semantic affinities between two ontologies.By analyzing the body of ontology mapping approaches[2][5][6][7][12][14][17],a key conclusion is that an effective ontology mapping approach requires a principled combination of several base techniques such as linguistic matching of names of on-tology elements,detecting overlap in the choice of data types and representation of data values,considering patterns of relationships between elements,and using domain knowledge[12].To support knowledge sharing between base ontology-mapping techniques,a knowl-edge base that describes domain models is of great value.The knowledge bases in most existing approaches,however,are represented informally by ad-hoc data struc-tures,which are difficult to capture well defined semantics effectively.To further facili-tate interoperability between ontologies,there is a need to represent mappings between ontologies such that the mapping representation guarantees to successfully exchange2Li Xu,David W.Embley,and Yihong Dinginformation.The research work that addressed this ontology-mapping representation problem is usually done separately from the research that focuses onfinding seman-tic affinities[3][9][10][13],which is lack of support for an efficient approach to achieve interoperability on the Semantic Web.To approach these problems within one knowledge-representation framework,we argue that a combination of declarative and procedural representation based on ontologies favors the analysis and implementation for ontology mapping and promises accurate and efficient semantic interoperability.Our declarative representation for ontology mapping includes(1)domain ontologies that provide semantic bridges to establish communications between base techniques in order tofind semantic affinities between 
ontologies;and(2)mapping ontologies that provide means to correctly exchange information.Declaratively,ontologies are usually expressed in a logic-based language so that detailed,accurate,consistent,sound,and meaningful distinctions can be made among concepts and relations.Their logic base therefore promises proper reasoning and inference on ontologies.However,the expression power of ontologies is limited with ontology mapping. Ontologies have difficulties to effectively express semantic heterogeneity between on-tologies.For example,within a domain,different vocabulary terms can describe a same concept and populated concept instances can have various lexical appearance.Unfor-tunately,the capability of handling semantic heterogeneity is extremely important for ontology mapping since its goal is tofind and represent semantic affinities between se-mantically heterogeneous ontologies.Moreover,to support interoperability across on-tologies,based on a debate on the mailing list of the IEEE Standard Upper Ontology working group,3semantic operability is to use logic in order to guarantee that,after data are transmitted from a sender system to a receiver,all implications made by one system had to hold and be provable by the other,and that there should be a logical equivalence between those implications.To express equivalent concepts and relations between two ontologies,we must issue queries to compute views over ontologies since ontologies rarely match directly[17].The associated set of inference rules with ontologies,how-ever,neither support expressing complex queries nor reasoning queries efficiently.Procedural attachment is a common technique to enforce the expression power in case where an expression power is limited[16].A procedural attachment is a method that is implemented by an external procedure.We employ two types of procedural at-tachments in our approach.A domain ontology shared by base ontology-mapping tech-niques is attached with extraction procedures.An extraction procedure is an encoded method with extraction patterns that express the lexical instantiations of ontology con-cepts.A mapping ontology,on the other hand,is attached with query procedures to establish a communication across ontologies.Each mapping instance maps a source ontology to a target ontology,which is formally specified such that source data is ready to load into the target.Because it is not always promising that we have direct mappings [7][17],a query procedure computes a target-specific view over the source so that the view data satisfies all implications made by the target when we can not map a source ontology to a target ontology directly.In this paper,we offer the following contributions:(1)attaching extraction proce-dures with domain ontologies to represent knowledge shared by base techniques tofind 3Message thread on the SUO mailing list initiated at /email/msg07542html.Title Suppressed Due to Excessive Length3 semantic affinities between ontologies;and(2)attaching query procedures with map-ping ontologies to efficiently interoperate heterogeneous ontologies based on mapping results produced by base techniques.We present the details of our contribution as fol-lows.Section2describes elements in input and domain ontologies and how to apply domain ontologies to supportfinding semantic affinities between ontologies.Section3 describes source-to-target mappings as mapping ontologies and how the representation supports accurate and efficient semantic interoperability.Section4gives an experimen-tal result to 
demonstrate the contribution of applying domain ontologies to ontology mapping.Finally,we summarize and draw conclusions in Section5.2Domain Model Representations2.1Input OntologyAn ontology include classes,slots,slot restrictions,and instances[4].Classes and in-stances form an ontology.A class is a collection of entities.Each entity of the class is said to be an instance of that class.With IS-A and PART-OF relationships,classes constitute a hierarchy.Slots attached to a class describe properties of objects in the class.Each slot has a set of restrictions on its values,such as cardinalities and ranges. By adapting an algebra approach to represent ontologies as logical theories[10],We provide the following definition.Definition1.An input ontology O=(S,A,F),where S is the signature that de-scribes the vocabulary for classes and slots,A is a set of axioms that specify the intended interpretation of the vocabulary in some domain of discourse,and F is a set of ground facts that classifying instances with class and slot symbols in the signature S.For discussion convenience,in this paper we use rooted hypergraphs graphs to il-lustrate structure properties between classes and slots in ontological signatures.A hy-pergraph includes a set of nodes modeling classes and slots and a set of edges modeling relations between them.The root node is representing a designated class of primary in-terest.Figure1,for example,shows two ontology hypergraphs(whose roots are house and House).In hypergraphs,we present a class or slot using either a solid box or a dashed one where a dashed box indicates that there is data populated for the concept, a functional relation using a line with an arrow from its domain to its range,and a nonfunctional relation using a line without arrowhead.2.2Domain OntologyTo represent domain knowledge tofind semantic affinities between two ontologies,we use domain ontologies attached with extraction procedures to capture semantics for ontology mapping.Ground facts are not part of a domain ontology since the domain ontology is not populated with instances.We define a domain ontology as follows.Definition2.A domain ontology O=(S,A,P),where S is the ontological signa-ture,A is a set of ontological axioms,and P is a set of procedures that extract metadata4Li Xu,David W.Embley,and Yihong Ding(a)Ontology Signature1(partial)(b)Ontology Signature2(partial)Fig.1.Signatures of Input Ontologiesand data from vocabulary terms and populated instances of input ontologies based on extraction rules.Extraction procedures attached with domain ontologies apply data extraction tech-niques[8]to retrieve data and metadata when matching two ontologies.Each extraction procedure is designed for either a class or slot in a domain ontology.When an extraction procedure is invoked,a recognizer does the extraction by applying a set of extraction rules specified using regular expressions.Figure2shows the regular expressions using the Perl syntax for slot V iew and P hone in a real-estate domain.Each list of regular expressions include declarations for data values that can poten-tially populate a class or slot and keywords that can be used as vocabulary terms to name classes and slots.We describe the data values using extract clauses and the keywords using keyword clauses.When applied to an input ontology,both the extract and keyword clauses causes a string matching a regular expression to be extracted,where the string can be a vocabulary term in the ontological signature or a data values classified by the ontological 
ground facts.2.3Application of Domain OntologyFigure3shows three components in a real-estate domain ontology,which we used to automate the mapping between two ontologies in Figure1and also for mapping real-world ontologies in the real-estate domain in general.Each dashed box in Fig-ure3associates with an extraction procedure that is capable of extracting both pop-ulated values and vocabulary terms for the concept.Filled-in(black)triangles denote aggregation(“PART-OF”relationships).And open(white)triangles denote generaliza-tion/specialization(“IS-A”superclasses and subclasses).Provided with the domain ontology described in Figure3,we can discover many semantic affinities between Ontology1in Figure1(a)and Ontology2in Figure1(b)as follows.1.Terminological Relationships.The extraction patterns applied by extraction proce-dures specify common vocabulary terms used to name classes and slots.Based on the P hone component in Figure3(b),the vocabulary term phone day in OntologyTitle Suppressed Due to Excessive Length5 View matches[15]case insensitiveconstant{extract“\bmountain\sview\b”;},{extract“\bwater\sfront\b”;},{extract“\briver\sview\b”;},{extract“\bpool\sview\b”;},{extract“\bgolf\s*course\b”;},{extract“\bcoastline\sview\b”;},...{extract“\bgreenbelt\sview\b”;};keyword“\bview(s)?\b”;End;Phone matches[15]case insensitiveconstant{extract“\b\d{3}-\d{4}\b”;},–nnn-nnnn{extract“\b\(\d{3}\)\s*\d{3}-\d{4}\b”;},–(nnn)nnn-nnnn{extract“\b\d{3}-\d{3}-\d{4}\b”;},–nnn-nnn-nnnn{extract“\b\d{3}\\\d{3}-\d{4}\b”;},–nnn\nnn-nnnn{extract“\b1-\d{3}-\d{3}-\d{4}\b”;};–1-nnn-nnn-nnnnKeyword“\bcall\b”,“\bphone\b”;End;Fig.2.Example of regular expressions in a real-estate domain 1matches with keywords specified for concept Day P hone and the term P hone in Ontology2matches with keywords for concept P hone.Based on the“IS-A”relationship between Day P hone and P hone,we canfind the semantic affinity between phone day in Ontology1and P hone in Ontology2.2.Merged/Split Values.Based on the Address declared in the ontology in Figure3(a),the attached extraction procedure detects that(1)the values of address in Ontology 1match with extraction patterns for concept Address,and(2)the values of Street, City,and State in Ontology2match with extraction patterns for concepts Street, City,and State respectively.Based on“PART-OF”relationships in Figure3(a), we canfind the“PART-OF”relationships between Street,City,and State in Ontology2and address in Ontology1.3.Superset/Subset.By calling extraction procedures attached with in Figure3(b),phone day in Ontology1matches with both keywords and data value patterns for Day P hone and phone in Ontology2matches with P hone.In Figure3(b)the ontology explicitly declares P hone is a superset of Day P hone based on the“IS-A”relationship between Day P hone and P hone.Thus we canfind the semantic affinity between phone day in Ontology1and P hone in Ontology2.4.Vocabulary Terms/Data Instances.Extraction procedures apply extraction patternsto recognize keywords and value patterns over both ontology terms and populated instances since it is difficult to distinguish boundaries between metadata and pop-ulated data instances in complex knowledge representation systems.In Ontology 1,W atherfront is data classified for view in its ontological ground facts.In On-6Li Xu,David W.Embley,and Yihong Ding(a)Address(b)Phone(c)ViewFig.3.Real-estate domain ontology(partial)tology2,W ater front is a vocabulary term in its ontological signature.Boolean values“Yes”and“No”associated with W ater front in Ontology2are not 
its val-ues but to show whether the values W ater front should be included as description values for view of House in Ontology1if we do mapping.The extraction proce-dure for concept V iew in Figure3(c)recognizes terms such as W ater front in Ontology2as values while the procedure for concept W ater F ront can recognize keyword“water front”associated with view in Ontology1.Since W ater F ront “IS-A”V iew in Figure3(c),by derivation,we can detect that view in Ontology1 has a semantic affinity with W ater front in Ontology2.3Mapping Result Representation3.1Source-to-target MappingWe adopt an ontology mapping definition as follows[10].Definition3.A source-to-target mapping M ST from O S=(S S,A S,F S)to O T=(S T,A T,F T)is a morphism f(SS )=STsuch that AT|=f(AS),i.e.all interpre-tations that satisfy OT axioms also satisfy OStranslated axioms if there exists twosub-ontologies OS =(SS,AS,FS)(SS⊆S S,AS⊆A S,FS⊆F S)and O T=(ST ,AT,FT)(ST⊆S T,AT⊆A T,FT⊆F T).Our representation solution for source-to-target mapping allows a variety of sourcederived data based on the discovered semantic affinities between two input ontologies. These source derive data include missing generalizations and specializations,mergedTitle Suppressed Due to Excessive Length7 and split values,and etc.Therefore,our solution“extends”elements in an ontological signature S S of a source ontology O S by including views computed via queries,each of which we call a view element.We let V S denote the extension of S S with derived, source view elements.Every source-to-target mapping M ST is composed of a set of triples.Each triple t=(e t,e s,q e)is a mapping element,where e t∈S T,e s∈V S,q e is either empty or a mapping expression.We call a triple t=(e t,e s,q e)a direct match which binds e s∈S S to e t∈S T,or an indirect match which binds a view element e s∈V S−S S to e t∈S T.When a mapping element t is an indirect match,q e is a mapping expression to illustrate how to compute the view element e s over the source ontology O S.To represent source-to-target mapping as logic theories,we specify source-to-target mappings as populated instances of a mapping ontology,which is defined as follows.Definition4.A mapping ontology O=(S,A,F,P),where S is the ontological signature,A is the set of ontological axioms,F is a set of ground facts presenting source-to-target mappings,and P is a set of query procedures that describe designed query behaviors to compute views over ontologies.If a mapping element t=(e t,e s,q e)in a source-to-target mapping M ST is an indirect match,i.e.e s is a source view element,a query procedure is attached with t to compute e s by applying the mapping expression q e.3.2Mapping ExpressionsWe can view each class and class slot(including view elements corresponding to either classes or class slots)in ontologies as single-attribute or multiple-attribute relations. 
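To make the mapping-element definitions above concrete, here is a minimal Python sketch (not from the paper) of how a triple (e_t, e_s, q_e) with an attached query procedure might be represented. The class name, field names, and the example slot names are illustrative assumptions rather than the authors' implementation; the query procedure for the indirect match is only a placeholder, with a fuller algebra sketch given after the next section.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# A relation is modeled here as a list of attribute->value dictionaries.
Relation = list

@dataclass
class MappingElement:
    """One triple (e_t, e_s, q_e) of a source-to-target mapping M_ST."""
    target_element: str                       # e_t: element of the target signature S_T
    source_element: str                       # e_s: element of V_S (a signature or view element)
    mapping_expression: Optional[str] = None  # q_e: empty for a direct match
    query_procedure: Optional[Callable[[dict], Relation]] = None  # attached only for indirect matches

    def is_direct(self) -> bool:
        return self.query_procedure is None

    def materialize(self, source_relations: dict) -> Relation:
        """Return the source data that populates the target element."""
        if self.is_direct():
            return source_relations[self.source_element]
        # Indirect match: the attached query procedure computes the view element.
        return self.query_procedure(source_relations)

# Direct match between two hypothetical slots that correspond one-to-one.
direct = MappingElement("house-bedrooms", "House-Bedrooms")

# Indirect match: a view element House-address' is computed by a query procedure,
# e.g. one that merges Street, City, and State (see the algebra sketch below).
indirect = MappingElement(
    "house-address", "House-address'",
    mapping_expression="pi_{House,Address'} lambda_{(Street,',',City,',',State),Address'} (...)",
    query_procedure=lambda rels: [],  # placeholder; a real procedure would evaluate the expression
)
```

Representing each mapping element this way keeps the declarative part (the triple and its expression) separate from the procedural part (the attached callable), which mirrors the paper's division of labor between mapping ontologies and query procedures.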
3.2 Mapping Expressions

We can view each class and class slot (including view elements corresponding to either classes or class slots) in ontologies as single-attribute or multiple-attribute relations. Relational algebra is therefore readily applicable for describing the procedural behavior of the query procedures attached to mapping ontologies. We present mapping expressions in an extended relational algebra, since the traditional relational-algebra operators do not cover the operations required to address problems such as Merged/Split Values and Vocabulary Terms/Data Instances.

For example, to address Merged/Split Values we designed two operations, Composition and Decomposition, in the extended relational algebra. We describe the two operations as follows. In the notation, a relation r has a set of attributes; attr(r) denotes the set of attributes in r, and |r| denotes the number of tuples in r.

– Composition λ. The λ operator has the form λ_{(A_1,...,A_n),A} r, where each A_i, 1 ≤ i ≤ n, is either an attribute of r or a string, and A is a new attribute. Applying this operation forms a new relation r', where attr(r') = attr(r) ∪ {A} and |r'| = |r|. The value of A for tuple t on row l in r' is the concatenation, in the order specified, of the strings among the A_i's and the string values of the attributes among the A_i's for tuple t on row l in r.

– Decomposition γ. The γ operator has the form γ^R_{A,A'} r, where A is an attribute of r and A' is a new attribute whose values are obtained from the A values by applying a routine R. Applying this operation forms a new relation r', where attr(r') = attr(r) ∪ {A'} and |r'| = |r|. The value of A' for tuple t on row l in r' is obtained by applying the routine R to the value of A for tuple t on row l in r.

Assuming that Ontology 1 in Figure 1(a) is the target and Ontology 2 in Figure 1(b) is the source, the following lists the derivation of a view element House-address' in Ontology 2 that matches with house-address in Ontology 1.

Address-Address' ⇐ π_{Address, Address'} λ_{(Street, ", ", City, ", ", State), Address'} (Address-Street ⋈ Address-City ⋈ Address-State)

House-address' ⇐ ρ_{address' ← Address'} π_{House, Address'} (House-Address ⋈ Address-Address')

The λ operator denotes the Composition operation in the extended relational algebra: it merges the values of Street, City, and State into the new concept Address'.
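The behavior of Composition and Decomposition can be mimicked over relations represented as lists of dictionaries. The sketch below is only an illustration of the operations described above, under hypothetical column names and data; it is not the paper's implementation of the extended relational algebra.

# Composition (λ): concatenate attribute values and literal strings into a new attribute.
def composition(relation, parts, new_attr):
    result = []
    for tup in relation:
        # Each part is either an attribute name of the relation or a literal string.
        value = "".join(tup.get(p, p) for p in parts)
        result.append({**tup, new_attr: value})
    return result

# Decomposition (γ): derive a new attribute from an existing one by applying a routine R.
def decomposition(relation, attr, new_attr, routine):
    return [{**tup, new_attr: routine(tup[attr])} for tup in relation]

rows = [{"Street": "10 Main St", "City": "Tucson", "State": "AZ"}]
composed = composition(rows, ["Street", ", ", "City", ", ", "State"], "Address'")
print(composed[0]["Address'"])    # 10 Main St, Tucson, AZ

split = decomposition(composed, "Address'", "City'", lambda a: a.split(", ")[1])
print(split[0]["City'"])          # Tucson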
3.3 Semantic Interoperability

Definition 5. A semantic interoperable system I = (O_T, {O_Si}, {M_SiT}), where O_T is a target ontology, {O_Si} is a set of n source ontologies, and {M_SiT} is a set of n source-to-target mappings, such that for each source ontology O_Si there is a mapping M_SiT from O_Si to O_T, 1 ≤ i ≤ n.

The following theorem states that accurate information exchange between ontologies is guaranteed by the derived source-to-target mappings.

Theorem 1. Given a semantic interoperable system I = (O_T, {O_Si}, {M_SiT}), 1 ≤ i ≤ n, the data facts F_{O_Si → O_T} flowing from O_Si to O_T based on M_SiT hold and are provable by O_T.

Note that the data facts F_{O_Si} flowing from O_Si to O_T based on M_SiT are classified to either signature or view elements in O_Si. Since a source-to-target mapping defines a morphism f(S_{O_Si}') = S_{O_T}', the data facts F_{O_Si} therefore hold the classifications to the signature elements in O_T that correspond to source elements in O_Si.

We assume that user queries issued over I are Select-Project-Join queries and that they do not contain comparison predicates such as ≤ and =. We use the following standard notation for conjunctive queries:

Q(X) :− P_1(X_1), ..., P_n(X_n)

Here X, X_1, ..., X_n are tuples of variables, and X ⊆ X_1 ∪ ... ∪ X_n. Each predicate P_i (1 ≤ i ≤ n) is a target signature element. When evaluating query answers for a user query Q, the semantic interoperable system I transparently reformulates Q as Q_ext, a query over the target and the source ontologies in I. Since each target signature element P_i possibly corresponds to a set of source elements {s | s → P_i}, to obtain Q_ext we substitute each P_i in Q by adjoining P_i to {s | s → P_i}. Note that a source element s in the substitution set for P_i may be a source view element, derived by invoking a query procedure. With query reformulation in place, we can now prove that query answers are sound (every answer to a user query Q is an entailed fact according to the source(s) and the target) and that query answers contain all the entailed facts for Q that the sources and the target have to offer (they are maximal for the query reformulation).

Theorem 2. Let Q_ext,I be the query answers obtained by evaluating Q_ext over I. Given a user query Q over I, a tuple <a_1, a_2, ..., a_m> in Q_ext,I is a sound answer for Q.

Theorem 3. If Q_ext is a reformulated query in I for a query Q over I, then Q_ext is a maximally contained reformulation of Q with respect to I.

4 Experimental Result

We used a real-world application, Real Estate, to evaluate the application of a domain ontology shared by a set of matching techniques [17]. The Real Estate application has five ontologies. We let any one of the ontologies be the target and any other ontology be the source; in total we tested 20 pairs of ontologies for the Real Estate application. In the test, Merged/Split Values appear four times, Superset/Subset appears 48 times, and Vocabulary Terms/Data Instances appear 10 times. With all other indirect and direct matches, there are a total of 876 matches. We evaluate the performance of our approach with three measures: precision, recall, and the F-measure, a standard measure that combines recall and precision [1]. By exploiting the knowledge specified in the domain ontologies attached with extraction procedures, the performance reached 94% recall, 90% precision, and an F-measure of 92% (see [17] for a detailed explanation of the experiment).

One obvious limitation of our approach is the need to manually construct an application-specific domain ontology with extraction procedures. To facilitate the knowledge-acquisition process of building domain ontologies, we can reuse existing ontologies. Machine-learning techniques can also be applied to facilitate the construction of extraction patterns for the extraction procedures. Since we predefine a domain ontology for a particular application, we can compare any two ontologies for that application using the same domain ontology. Therefore, the work of creating a domain ontology is amortized over repeated usage.
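For reference, the reported F-measure is consistent with the standard balanced F-measure, the harmonic mean of precision and recall [1]; plugging in the precision and recall above gives:

F = 2PR / (P + R) = (2 · 0.90 · 0.94) / (0.90 + 0.94) ≈ 0.92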
5 Conclusions

We have proposed an approach to automate and represent ontology mappings by combining both declarative and procedural representations. We have shown that a set of base techniques is able to establish communication via domain ontologies attached with extraction procedures. By sharing the domain ontologies, the base techniques detected indirect matches related to problems such as Superset/Subset, Merged/Split Values, and Vocabulary Terms/Data Instances. To approach semantic interoperability across ontologies, we represent source-to-target mappings as mapping ontologies attached with query procedures, which not only support equivalent inferences and computations on equivalent concepts and relations but also improve query performance by applying the query procedures. The source-to-target mapping instances lead automatically to a rewriting of every target element as a union of the target element and the corresponding virtual source-view elements. Query reformulation thus reduces to rule unfolding by applying the view definition expressions for the target elements, in the same way database systems apply view definitions.

References

1. R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. Addison Wesley, Menlo Park, California, 1999.
2. J. Berlin and A. Motro. Database schema matching using machine learning with feature selection. In Proceedings of the International Conference on Advanced Information Systems Engineering (CAiSE 2002), pages 452-466, Toronto, Canada, 2002.
3. D. Calvanese, G. De Giacomo, and M. Lenzerini. A framework for ontology integration. In Proceedings of the 1st International Semantic Web Working Symposium (SWWS), pages 303-317, 2001.
4. V. K. Chaudhri, A. Farquhar, R. Fikes, P. D. Karp, and J. P. Rice. OKBC: a programmatic foundation for knowledge base interoperability. In Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI-98), Madison, Wisconsin, 1998.
5. R. Dhamankar, Y. Lee, A. Doan, A. Halevy, and P. Domingos. iMAP: Discovering complex semantic matches between database schemas. In Proceedings of the 2004 ACM SIGMOD International Conference on Management of Data (SIGMOD 2004), pages 283-294, Paris, France, June 2004.
6. H. Do and E. Rahm. COMA: a system for flexible combination of schema matching approaches. In Proceedings of the 28th International Conference on Very Large Databases (VLDB), pages 610-621, Hong Kong, China, August 2002.
7. A. Doan, J. Madhavan, R. Dhamankar, P. Domingos, and A. Halevy. Learning to match ontologies on the semantic web. VLDB Journal, 12:303-319, 2003.
8. D. W. Embley, D. M. Campbell, Y. S. Jiang, S. W. Liddle, D. W. Lonsdale, Y.-K. Ng, and R. D. Smith. Conceptual-model-based data extraction from multiple-record Web pages. Data & Knowledge Engineering, 31(3):227-251, November 1999.
9. A. Y. Halevy. Answering queries using views: A survey. The VLDB Journal, 10(4):270-294, December 2001.
10. Y. Kalfoglou and M. Schorlemmer. Ontology mapping: the state of the art. The Knowledge Engineering Review, 18(1):1-31, 2003.
11. W. Li and C. Clifton. SEMINT: A tool for identifying attribute correspondences in heterogeneous databases using neural networks. Data & Knowledge Engineering, 33(1):49-84, April 2000.
12. J. Madhavan, P. A. Bernstein, A. Doan, and A. Halevy. Corpus-based schema matching. In ICDT '05, 2005.
13. J. Madhavan, P. A. Bernstein, P. Domingos, and A. Halevy. Representing and reasoning about mappings between domain models. In Proceedings of the 18th National Conference on Artificial Intelligence (AAAI '02), 2002.
14. A. Maedche, B. Motik, N. Silva, and R. Volz. MAFRA: an ontology mapping framework in the semantic web. In Proceedings of the ECAI Workshop on Knowledge Transformation, Lyon, France, July 2002.
15. E. Rahm and P. A. Bernstein. A survey of approaches to automatic schema matching. The VLDB Journal, 10(4):334-350, December 2001.
16. S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Pearson Education, Inc., second edition, 2003.
17. L. Xu and D. W. Embley. A composite approach to automating direct and indirect schema mappings. Information Systems, available online April 2005 (accepted to appear).