A type-theoretic framework for formal reasoning with different logical foundations
Professional English
Questions
How do you distinguish steel from cast iron? How do you distinguish low alloy steel from high alloy steel?
1.1.1 Iron and Steel
The earth contains a large number of metals which are useful to man. One of the most important of these is iron. Modern industry needs considerable quantities of this metal, either in the form of iron or in the form of steel.
Mechanical Engineering Materials
Organic polymer materials: plastics, rubber, synthetic fibers
Inorganic non-metallic materials: traditional ceramics, special ceramics
Metal matrix composites
1.1.1 Iron and Steel
The ore becomes molten, and its oxides combine with carbon from the coke. The non-metallic constituents of the ore combine with the limestone to form a liquid slag. This floats on top of the molten iron, and passes out of the furnace through a tap. The metal which remains is pig iron.
Undergraduate Training Program for the Electronic Information Engineering Major
I. Major Name and Code
Major name: Electronic Information Engineering; major code: 080701.
II. Training Objectives
This major cultivates senior engineering and technical personnel who can serve socialist modernization; who are well developed morally, intellectually, physically, and aesthetically; who have a high level of cultural attainment, professional dedication, and a sense of social responsibility; who have mastered the basic theoretical knowledge of electronic information engineering and related fields; and who have strong self-learning and engineering-practice abilities, enabling them to engage in the research and development, maintenance, and operation of electronic information systems and equipment.
III. Training Specifications and Requirements
Graduates of this major should possess the following knowledge, abilities, and qualities:
1. Knowledge structure requirements
1) Instrumental knowledge: English (written and oral expression), fundamentals of information science, literature retrieval, basic computer knowledge, and knowledge of software applications relevant to the major.
2) Knowledge of the humanities and social sciences: economics, sociology, philosophy, and history; knowledge of the sustainable development of nature and society; and knowledge of public policy and administration, including politics, laws, and regulations.
3) Disciplinary foundation knowledge: the essentials of advanced mathematics, engineering mathematics, university physics, engineering graphics, circuits, analog electronics, and digital electronics.
4) Professional knowledge: the basic theory of the core professional courses, including signals and systems, electromagnetic field theory, communication principles, digital signal processing, communication electronic circuits, and microcomputer principles and applications, together with the related engineering practice knowledge. Graduates should also have a certain amount of specialized knowledge of circuit systems or information processing, as well as specialized knowledge in other directions.
2. Ability structure requirements
1) Ability to acquire knowledge: literature search and retrieval; acquiring information; relating national development strategies to the discipline; independent learning; updating one's knowledge; and improving work efficiency.
2) Ability to apply knowledge: discovering and understanding problems from practice; defining problems and analyzing them qualitatively; building models and carrying out theoretical analysis and experimental research; and proposing solutions and recommendations.
3) Engineering practice ability: mastery of advanced technical methods and modern technical means; the ability to design, build, use, and maintain electronic information products; circuit-simulation and system-simulation skills; and a strong sense of innovation together with basic innovative ability.
4) Communication and collaboration ability: written and oral expression and communication skills; a preliminary ability to cooperate within the discipline, across disciplines, and across cultural backgrounds; the courage to accept challenges together with a strong sense of competition and competitiveness; a degree of systems thinking and the ability to distinguish primary from secondary factors; the basic ability to organize, coordinate, and carry out electronic information engineering projects; an understanding of advanced international technologies and development trends in the discipline; and a degree of international vision and the ability to communicate in cross-cultural environments.
English Essay on Game Theory
Game theory is a field of study that analyzes the strategic interactions between decision-makers with different, often conflicting, objectives. It provides a framework for understanding how individuals or entities make choices in situations where the outcome depends not only on their own actions but also on the actions of others. This discipline has applications in a wide range of fields, including economics, political science, computer science, biology, and even psychology.

The foundations of game theory were laid in the 1940s by the mathematician John von Neumann and the economist Oskar Morgenstern, who published their seminal work "Theory of Games and Economic Behavior." In this book, they introduced the concept of the "game," which is a formal model of an interactive situation where players make decisions in order to achieve their desired outcomes.

At the heart of game theory is the idea of the "rational player," an entity that seeks to maximize its own payoff or utility given the actions of the other players. The rational player is assumed to have complete information about the game, including the available strategies, the possible outcomes, and the payoffs associated with each outcome. The player is also assumed to be able to accurately assess the probabilities of different outcomes and to choose the strategy that will lead to the best possible outcome for themselves.

One of the key concepts in game theory is the Nash equilibrium, named after the mathematician John Nash. A Nash equilibrium is a set of strategies, one for each player, where no player can improve their payoff by unilaterally changing their strategy. In other words, each player's strategy is the best response to the strategies of the other players. The existence of a Nash equilibrium is a fundamental result in game theory, and it has important implications for understanding the behavior of rational players in strategic situations.

Another important concept in game theory is the idea of cooperation and competition. In some games, players may be able to achieve better outcomes by cooperating with each other, while in other games, the players' interests are inherently in conflict, and they must compete to achieve their desired outcomes. The study of these different types of games has led to the development of a range of solution concepts, such as the Pareto optimal solution and the minimax solution, which help to predict the outcomes of strategic interactions.

Game theory has a wide range of applications in the real world. In economics, game theory is used to analyze market competition, bargaining, and the design of auctions and other market mechanisms. In political science, game theory is used to study the strategic behavior of political actors, such as voters, politicians, and interest groups. In computer science, game theory is used to design algorithms and protocols for distributed systems, such as the internet, where multiple agents must coordinate their actions to achieve a desired outcome.

One of the most well-known applications of game theory is in the field of evolutionary biology, where it is used to understand the evolution of cooperation and competition among living organisms. The concept of the "evolutionary stable strategy" in game theory has been used to explain the emergence of complex social behaviors, such as altruism and reciprocity, in various species.

Despite its many successes, game theory is not without its limitations.
One of the key challenges in applying game theory is the difficulty of accurately modeling the behavior of real-world players, who may not always behave in a perfectly rational manner. Additionally, the complexity of many real-world situations can make it difficult to apply game-theoretic models in a straightforward way.

Despite these challenges, game theory remains a powerful and influential field of study, with applications across a wide range of disciplines. As technology continues to advance and the world becomes increasingly interconnected, the need for a deeper understanding of strategic interactions and decision-making will only continue to grow. By providing a rigorous framework for analyzing these complex situations, game theory will likely play an increasingly important role in shaping our understanding of the world around us.
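The Nash equilibrium idea discussed above can be made concrete with a short sketch. The payoff matrices below encode a standard Prisoner's Dilemma (a textbook example chosen for illustration, not taken from the essay), and the function enumerates all pure-strategy Nash equilibria by checking best responses:

```python
from itertools import product

# Row player's and column player's payoffs; index 0 = Cooperate, 1 = Defect.
payoff_row = [[-1, -3],
              [ 0, -2]]
payoff_col = [[-1,  0],
              [-3, -2]]

def pure_nash_equilibria(A, B):
    """Return all strategy pairs (i, j) where neither player gains by deviating unilaterally."""
    n_rows, n_cols = len(A), len(A[0])
    equilibria = []
    for i, j in product(range(n_rows), range(n_cols)):
        row_best = all(A[i][j] >= A[k][j] for k in range(n_rows))
        col_best = all(B[i][j] >= B[i][k] for k in range(n_cols))
        if row_best and col_best:
            equilibria.append((i, j))
    return equilibria

print(pure_nash_equilibria(payoff_row, payoff_col))  # [(1, 1)] -> mutual defection
```

For these payoffs the only pure-strategy equilibrium is mutual defection, even though mutual cooperation would leave both players better off; this is exactly the tension between cooperation and competition that the essay describes.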
Autive
The Transformative Power of Autive Innovation

In the ever-evolving landscape of technology and industry, the concept of "Autive" — a term coined to encapsulate the fusion of autonomy, innovation, and intelligent systems — emerges as a formidable force reshaping our world. This futuristic concept not only underscores the importance of self-directed, automated processes but also emphasizes the ingenuity and adaptability necessary for sustainable progress. The narrative of Autive innovation, therefore, unfolds as a tale of empowerment, efficiency, and societal transformation.

Embracing Autonomy: The Cornerstone of Progress
At the heart of Autive lies autonomy, a hallmark characteristic that frees machines and systems from the direct, constant control of humans. From driverless cars navigating crowded cities to robotic arms precisely assembling complex products in factories, autonomy enables a level of precision and scalability previously unimaginable. This autonomy drives a paradigm shift, allowing human resources to be redirected towards more creative, strategic endeavors while mundane, repetitive tasks are handled seamlessly by intelligent machines.

Innovation: The Catalyst for Evolution
Innovation, the lifeblood of Autive, constantly pushes the boundaries of what is possible. It integrates cutting-edge technologies such as artificial intelligence, machine learning, and big data analytics to enhance the capabilities of autonomous systems. For instance, AI algorithms enable self-driving vehicles to make split-second decisions based on real-time data, adapting to unpredictable road conditions and optimizing routes for efficiency. In manufacturing, innovations in automation and robotics lead to streamlined production processes, reducing waste and enhancing product quality.

The Intelligent Systems Revolution
Autive's essence transcends mere automation; it encompasses the development of truly intelligent systems that can learn, adapt, and collaborate with humans. These systems understand complex scenarios, make informed decisions, and even anticipate future trends. For example, smart homes equipped with Autive technologies learn from users' habits, optimizing energy consumption and providing personalized comfort. In healthcare, intelligent diagnostic tools aid doctors in detecting diseases earlier and designing tailored treatment plans.

Societal Transformation and Ethical Considerations
The advent of Autive has profound implications for society as a whole. It promises to create new industries, job opportunities, and lifestyle changes that could dramatically improve quality of life. However, this transformation also poses ethical dilemmas related to job displacement, privacy concerns, and accountability in autonomous decision-making. Addressing these challenges requires a proactive approach, including policy making, education, and public dialogue to ensure that the benefits of Autive are shared equitably and its development aligned with ethical principles.

Conclusion
In conclusion, Autive innovation represents a powerful force that is transforming industries, enhancing efficiency, and fueling societal progress. By harnessing the full potential of autonomy, intelligence, and innovation, we are embarking on a journey towards a future where machines and humans work harmoniously, each contributing to the other's strengths. As we navigate this exciting journey, the responsible development of Autive technologies will determine how fully that promise is realized.
The Application of Domestication and Foreignization in Business English Translation: A Case Study of the English Translation of Company Profiles
Abstract
Today, Chinese enterprises are entering foreign markets in growing numbers. Especially since China's full accession to the WTO, many companies have had their company profiles translated into English. However, observation of a fair number of these texts shows that the quality of most companies' translations leaves much to be desired. Clumsy translations damage a company's image and may even lead to financial losses for the company. Taking the theory of domestication and foreignization proposed by Venuti as its framework, this thesis analyzes several representative English translations of company profiles and discusses how domestication and foreignization can be applied appropriately to solve the problems found in such translations. The primary purpose of company profile translation is to convey information; greater consideration is therefore given to how the information carried by the target text can be made easy for readers to understand and accept, and to how the intended function and purpose of the target text can be realized most effectively, while the form and content of the source text must often yield to the needs of the target text and to the communicative function of the text. It follows that the main strategy for translating company profiles is domestication. Foreignization is nevertheless indispensable; at the lexical level, for example, it is sometimes the better solution. At the same time, translation is not merely a linguistic process: guided by the requirements of the client or commissioner, and taking into account the purpose of the translation and the receptive capacity of the target readers, the translator translates selectively from the multiple sources of information provided by the original.
Keywords: English translation of company profiles; domestication; foreignization; translation strategies

ABSTRACT
Today, an increasing number of Chinese businesses are entering foreign markets. Especially with China's full access to the WTO, many of these companies have their company profiles translated into English to facilitate the advertising of their goods or services in a foreign market. An examination of these translations has revealed that their quality leaves much to be desired. Poor translations can damage the image of a company at best and result in adverse financial consequences at worst.
The present paper applies Venuti's domestication and foreignization as the theoretical framework. By analyzing several representative C-E translations of company profiles, this thesis discusses how to resolve the problems in company profile translation by proper use of domestication and foreignization. Company profile translation mainly aims to offer information, so the comprehension of target readers and the acceptability of translated versions should be put in first place. Domestication is, therefore, an appropriate strategy in business translation, but it does not always work, because in some cases foreignization is more effective, for example at the lexical level. Besides, translation is not merely a linguistic process. Guided by the translation brief and taking into account the translation purpose and the acceptability of the target readers, the translator selects certain items from the source language.
Keywords: company profile translation; domestication; foreignization; translation strategy

Contents
1. Introduction
1.1 General introduction
1.2 Research problems
2. Company Profile Translation
2.1 Characteristics of Company Profile
2.1.1 Clearness
2.1.2 Conciseness
2.1.3 Preciseness
2.1.4 Summary
2.2 Criteria of Company Profile Translation
2.2.1 Properties of Company Profile Translation
2.2.2 Principles for Company Profile Translation
2.2.3 Summary
3. Domestication and Foreignization
3.1 General View on Domestication and Foreignization
3.2 Domestication
3.3 Foreignization
4. The Application of Domestication and Foreignization in C-E Company Profile Translation
4.1 The existing problems in company profile translation
4.2 Sample analysis of C-E company profile translation
4.2.1 Sample analysis 1
4.2.1.1 C-E company profile of Zhongtian Information Technology Co., Ltd
4.2.1.2 Existing problems on textual level and revision strategies
4.2.2 Sample analysis 2
4.2.2.1 C-E company profile translation of Nice Group and Little Swan Group
4.2.2.2 Existing problems in lexical and syntactical level and revision strategies
4.2.3 Summary
4.3 The Influential Factors of Domestication and Foreignization in Company Profile Translation
4.3.1 Cultural Differences
4.3.2 Economic Differences
5. Conclusion
Acknowledgements
References

1. Introduction
1.1 General introduction
With the increasing speed of globalization, the world is becoming smaller and smaller. At the same time China has joined the WTO, which connects China more closely to the rest of the world. China's exchanges with other countries or regions, especially in the business field, are developing rapidly.
As a result, business translation, as a medium or tool of communication, plays an increasingly important role in the economic arena, draws more attention, and arouses more debate. Business translation is not only a word-for-word activity but also a cultural exchange. The choice between domestication and foreignization is the most important decision in the process of translation, and it often puts the translator in an awkward position.
Domestication and foreignization are two basic translation strategies which provide both linguistic and cultural guidance. The terms were coined by the American translation theorist L. Venuti. According to Venuti, the former refers to "an ethnocentric reduction of the foreign text to target-language cultural values, bringing the author back home," while the latter is "an ethnodeviant pressure on those (cultural) values to register the linguistic and cultural difference of the foreign text, sending the reader abroad" [1] (p. 20). Generally speaking, domestication designates the type of translation in which a transparent, fluent style is adopted to minimize the strangeness of the foreign text for target-language readers, while foreignization means that a target text is produced which deliberately breaks target conventions by retaining something of the foreignness of the original [2] (p. 59).
Business translation belongs to pragmatic translation and has its own characteristics. It mainly aims to offer information, so the comprehension of target readers and the acceptability of translated versions should be put in first place. Domestication is, therefore, an appropriate strategy in business translation, but it does not always work, because in some cases foreignization is more effective. Research on domestication and foreignization has aroused heated debate; unfortunately, most of it focused only on the linguistic level until recent years, when cultural studies became a hot topic.
1.2 Research problems
The present dissertation seeks to address the following research questions:
1) Which type of text does company profile translation belong to?
2) What problems exist in company profile translation?
3) How can these problems be resolved by applying domestication and foreignization?
2. Company Profile Translation
2.1 Characteristics of Company Profile
A company profile is written in common English adapted to specific business purposes. Generally speaking, business texts fall into the category of formal style. Business texts commonly serve as a medium or tool for the receptors to understand all the information or expressions of the senders. A good business text should be clear and correct enough to arouse no misunderstanding or confusion. As a means of communication, the words used in company profile translation are quite different from those in general English. Due to the particularity of business activities, any negligence may cause big losses in money or in time. Every word, therefore, should be used carefully. Thus, in terms of the characteristics of company profiles, attention should be paid to the following points:
2.1.1 Clearness
Clearness is the soul of a company profile. Clearness means one meaning only, that is, no ambiguity or misunderstanding. A business report, a contract, an order, or even a note should be clear enough for readers to understand well. Ambiguous words, phrases, and sentence structures should all be avoided. Clearness requires that information and meanings be expressed in a simple and direct way.
The most appropriate business language is such that readers can understand the meaning immediately. Direct rather than indirect expressions are suitable in business texts, because direct expressions are clearer and easier to read than indirect ones, and thus less misunderstanding or confusion may arise.
For example:
We would like to know whether you would allow us to extend the time of shipment for 20 days, and if you would be so kind as to allow us to do so, kindly give us your reply by fax without delay.
It is difficult for readers to grasp the idea because the sentence is too lengthy and expressed in such a tortuous way. It can be rewritten as:
Please reply by fax immediately if you will allow us to delay the shipment until April 21.
如果同意我方把交货时间延期至4月21日,请速回电。
Research on Differential Games of Supply Chain Emission Reduction and Profit Coordination under Low-Carbon Policy
Abstract: The low-carbon economy has become the global trend in energy transformation, and the requirements placed on the supply chain environment are becoming increasingly strict. How to balance and coordinate supply chain emission reduction and profit under the impetus of low-carbon policy is an important current problem. Starting from the supply chain perspective, this paper establishes a differential game model of a supply chain with multiple links, analyzes how the behavior and profits of each link in the supply chain affect emission reduction, and examines the interaction between supply chain strategies and the policy environment. Combining the game analysis with comparative analysis, the paper obtains the equilibrium solution of the supply chain game and offers suggestions for policy making. The results show that, under the impetus of low-carbon policy, supply chain emission reduction requires the joint effort and cooperation of all participants; profit coordination requires the establishment of reasonable reward-and-punishment and benefit-sharing mechanisms; and policy making needs to take the influences and synergies among the links into account.
Keywords: low-carbon policy; supply chain emission reduction; differential game; profit coordination; synergy effect

Abstract: The low-carbon economy is becoming a global trend in energy transformation, and the requirements for the supply chain environment are getting stricter. How to achieve a balance between supply chain emission reduction and profit coordination under the promotion of low-carbon policy is currently an important issue. This paper starts from the perspective of the supply chain, establishes a differential game model of a supply chain containing multiple links, analyzes the impact of the behavior and profits of each link in the supply chain on emission reduction, and examines the interaction between supply chain strategies and the policy environment. The paper combines game analysis with comparative analysis to obtain the equilibrium solution of the supply chain game and suggestions for policy formulation. The research results show that, under the promotion of low-carbon policy, supply chain emission reduction requires the joint efforts and cooperation of all participants, profit coordination requires the establishment of reasonable incentive and benefit-sharing mechanisms, and policy formulation needs to consider the impacts and synergies among links.

Key words: low-carbon policy; supply chain emission reduction; differential game; profit coordination; synergy effect
Essentials of Software Prototyping
Software prototyping is an essential part of the software development process, as it allows developers to quickly and efficiently create a working model of the final product. By creating a prototype, developers can gather feedback from stakeholders early in the development process, identify potential issues, and make necessary changes before investing significant time and resources into the final product.

There are several key essentials of software prototyping that developers should keep in mind in order to use this development technique effectively. First and foremost, it is important to clearly define the goals and objectives of the prototype. This includes identifying the purpose of the prototype, the target audience, and the features and functionalities that need to be included.

Next, developers should carefully select the prototyping method that best suits the project requirements. There are several prototyping methods available, including low-fidelity prototypes, high-fidelity prototypes, and interactive prototypes. Each method has its own strengths and weaknesses, so it is important to choose the one that will best meet the needs of the project.

Another essential aspect of software prototyping is the involvement of stakeholders throughout the prototyping process. By involving stakeholders early on and obtaining their feedback and input, developers can ensure that the final product will meet the needs and expectations of the end users. Additionally, involving stakeholders in the prototyping process can help to identify potential issues and make necessary changes before moving on to the final development phase.

Communication is also key when it comes to software prototyping. Developers should maintain open and transparent communication with stakeholders throughout the prototyping process to ensure that all parties are on the same page. This includes providing regular updates on the progress of the prototype, seeking feedback and input from stakeholders, and addressing any concerns or questions that may arise.

Furthermore, designers should focus on creating a prototype that is both functional and user-friendly. The purpose of a prototype is to demonstrate the key features and functionalities of the final product, so it is important to ensure that the prototype accurately reflects the intended design and user experience. Additionally, designers should prioritize usability testing to identify any usability issues and make necessary improvements.

In conclusion, software prototyping is a valuable tool that allows software developers to quickly and efficiently create a working model of the final product. By following these essentials, developers can effectively gather feedback, identify issues, and make necessary changes before investing significant time and resources into the final product. With careful planning, communication, and user testing, software prototyping can help to ensure the success of a software development project.
Competence-based Knowledge Structures for Personalised Learning
Competence-based knowledge structures for personalised learning
Jürgen Heller, Christina Steiner, Cord Hockemeyer, & Dietrich Albert
Department of Psychology, University of Graz, Austria

Abstract
Competence-based extensions of Knowledge Space Theory are suggested as a formal framework for implementing key features of personalised learning in technology-enhanced learning. The approach links learning objects and assessment problems to the relevant skills that are taught or required. Various ways to derive these skills from domain ontologies are discussed in detail. Moreover, it is shown that the approach induces structures on the assessment problems and learning objects, respectively, that can serve as a basis for an efficient adaptive assessment of the learners' skills, and for selecting personalised learning paths.

1 Introduction
Personalised learning aims at tailoring the teaching to individual need, interest and aptitude so as to ensure that every learner achieves and reaches the highest standards possible. It usually proceeds by assessing the learner's current knowledge state and possibly other individual characteristics or preferences, and by using the results of this assessment to inform further teaching. Knowledge Space Theory (Doignon & Falmagne, 1985, 1999) provides a foundation for personalising the learning experience. The theory, in its original formalisation, is purely behaviouristic. Various approaches have been devised in order to theoretically explain the observed behaviour by considering underlying cognitive constructs (e.g. Falmagne, Koppen, Villano, Doignon & Johannesen, 1990). These approaches focus on items' difficulty components, their underlying demands, and skills or competencies and processes for performing them. The following section will give an introduction to the basic concepts of Knowledge Space Theory. Subsequently, an extension of Knowledge Space Theory is suggested as a formal framework that can serve as a basis for implementing personalised learning into a technology-enhanced learning system. This approach incorporates explicit reference to underlying skills and competencies and to learning objects into an originally behaviouristic formal psychological theory with its focus on knowledge assessment. The discussion covers the derivation of skills from ontological information, and the impact of skill assignments to assessment problems and learning objects. It is shown that these assignments induce structures on which procedures for an efficient adaptive assessment of the learner's competencies, and for generating personalised learning paths, can be based.

2 Basic Notions of Knowledge Space Theory
Knowledge Space Theory provides a set-theoretic framework for representing the knowledge of a learner in a certain domain, which is characterised by a set of assessment problems (subsequently denoted by Q). In this framework the knowledge state of an individual is identified with the set of problems the person is capable of solving. Due to mutual (psychological) dependencies between the problems, not all potential knowledge states (i.e. subsets of problems) will actually be observed. If a correct solution to a certain problem can be inferred given that another problem is mastered, then each knowledge state will contain the first problem whenever it contains the second one (i.e. the first problem may be considered a prerequisite to the second). In order to capture the relationships between the problems of a domain, the notion of a surmise relation was introduced.
Two problems a and b are in a surmise relation whenever from a correct solution to problem b the mastery of problem a can be surmised. A surmise relation can be illustrated by a so-called Hasse diagram (see Figure 1 for an example), where descending sequences of line segments indicate a surmise relation. According to the surmise relation shown in Figure 1, from a correct solution to problem b the correct answer to problem a can be surmised, while the mastery of problem e implies correct answers to problems a, b, and c. A surmise relation restricts the number of possible knowledge states and forms a quasi-order on the set of assessment problems.

Figure 1: Example of a Hasse diagram illustrating a surmise relation on a knowledge domain Q.

The collection of possible knowledge states of a given domain Q is called a knowledge structure whenever it contains the empty set ∅ and the whole set Q. The knowledge structure induced by the surmise relation depicted in Figure 1 is given by K = {∅, {a}, {c}, {a, c}, {a, b}, {a, b, c}, {a, b, d}, {a, b, c, e}, {a, b, c, d}, Q}. The possible knowledge states are naturally ordered by set-inclusion, which results in the diagram shown in Figure 2.

Figure 2: Knowledge structure induced by the surmise relation of Figure 1. The dashed arrows indicate a possible learning path.

Figure 2 illustrates that there are various possible learning paths for moving from the naive knowledge state (the empty set ∅) to the knowledge state of full mastery (the set Q). One of the possible learning paths is indicated by arrows describing the possible steps of a learning process. It suggests initially presenting material related to problem a (or, equivalently, c), followed by material related to problems b or c (a, respectively), and so on. Notice that the knowledge structure of Figure 2 is somewhat special, as it allows for gradual learning. On the one hand, each knowledge state (except state Q) has at least one immediate successor state that comprises all the same problems plus exactly one. On the other hand, each knowledge state (except state ∅) has at least one predecessor state that contains exactly the same problems, except one. A knowledge structure with these properties, in which learning can take place step by step, is called well-graded. According to Figure 2, for instance, the states {a, b, c, d} and {a, b, c, e} are the immediate successor states of the knowledge state {a, b, c}. The set {d, e} constitutes the so-called outer fringe of the knowledge state {a, b, c}. It consists of exactly those problems that a learner in knowledge state {a, b, c} should tackle next, and can thus form a basis for generating personalised learning paths. The knowledge state {a, b, c} also has two predecessor states, which are {a, b} and {a, c}. The set {b, c} represents the so-called inner fringe of the knowledge state {a, b, c}. Its problems may be seen as corresponding to the most sophisticated content that has been learned recently. This is the content that the learner should revisit when previously learned material is to be reviewed. Besides providing the information relevant for generating personalised learning paths, a knowledge structure is at the core of an efficient adaptive procedure for knowledge assessment.
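Both fringes can be computed mechanically from the knowledge structure. The following minimal sketch (illustrative code, not part of the original paper) uses the structure K induced by the surmise relation of Figure 1 and reproduces the fringes of the state {a, b, c} discussed above:

```python
# Knowledge structure K induced by the surmise relation of Figure 1.
K = [set(), {"a"}, {"c"}, {"a", "c"}, {"a", "b"}, {"a", "b", "c"},
     {"a", "b", "d"}, {"a", "b", "c", "e"}, {"a", "b", "c", "d"},
     {"a", "b", "c", "d", "e"}]
DOMAIN = set().union(*K)  # Q = {a, b, c, d, e}

def outer_fringe(state, structure):
    """Problems whose addition yields another state: what the learner is ready to learn next."""
    return {q for q in DOMAIN - state if state | {q} in structure}

def inner_fringe(state, structure):
    """Problems whose removal yields another state: the most recently learnable content."""
    return {q for q in state if state - {q} in structure}

state = {"a", "b", "c"}
print(outer_fringe(state, K))  # the set {d, e}
print(inner_fringe(state, K))  # the set {b, c}
```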
A knowledge structure allows for uniquely determining the knowledge state by presenting the learner with only a subset of the problems (for more details see Section 4.3).

3 Competence-based Extensions of Knowledge Space Theory
Although there is a commercial learning system based on Knowledge Space Theory, the ALEKS system, this approach suffers from its limitation to a purely behaviouristic perspective. Knowledge Space Theory focuses completely on the observable solution behaviour, and refers neither to learning objects nor to the skills or competencies that are to be taught. To overcome these limitations, Knowledge Space Theory may be extended so that it incorporates explicit reference to learning objects and underlying skills and competencies. The subsequent considerations are based on previous work by Falmagne et al. (1990), Doignon (1994), Düntsch and Gediga (1995), Korossy (1997, 1999), Albert and Held (1994, 1999), Hockemeyer (2003), and Hockemeyer, Conlan, Wade, and Albert (2003). The present approach not only integrates these different contributions, but also derives the implications for implementing a personalised learning system, and clarifies the approach's relation to domain ontologies. Extended Knowledge Space Theory deals with three different sorts of entities:
• the set Q of assessment problems,
• the set L of learning objects (LOs),
• the set S of skills relevant for solving the problems and taught by the LOs.
Notice that the skills in the set S are meant to provide a fine-grained, low-level description of the learner's capabilities. Usually, it is a whole bunch of skills that is tested by an assessment problem, or taught by a LO. Each of these basic sets is assumed to be endowed with a structure, which we conceive as a collection of subsets of the respective set. In particular, we consider
• a knowledge structure on the set Q of assessment problems,
• a learning structure on the set L of LOs,
• a competence structure on the set S of skills.
As outlined above, the knowledge structure constitutes the collection of possible knowledge states and forms the basis of the problem-based assessment of a student's competency (cf. Section 4.3). Usage of the notion 'competency' in the present context is in line with the terminology of Doignon and Falmagne (1999), which refers to subsets of skills that are collected in the competence structure and which may also be called competence states. The learning structure, together with a student's current competence state, is used to generate a personalised learning path. Learning and competence structures are defined in complete analogy to the knowledge structure introduced above. The main goal is to identify the pieces of information that are needed for establishing all those structures.

4 Skills and Skill Assignments
4.1 Deriving Skills from Domain Ontologies
This section addresses the question of how to identify skills that are relevant and suitable for modelling the underlying constructs of assessment problems and learning objects regarding a certain domain. As an alternative to cognitive task analysis (e.g. Korossy, 1999), querying experts (e.g. Zaluski, 2001), and systematic problem construction by applying the component-attribute approach (e.g. Albert & Held, 1994), we propose to utilise information coming from domain ontologies. An ontology allows one to structure a domain of knowledge with respect to its conceptual organization.
It constitutes a specification of the concepts in a domain and the relations among them, and thus defines a common vocabulary of the knowledge domain. A common and natural way of representing ontologies is by concept maps. Hence, in the sequel we refer to concept maps as the representation of ontological information. The information provided by a concept map can be used for identifying skills and for establishing a competence structure, respectively.

Identifying Skills with Sub-Structures of a Concept Map
Skills in terms of competence-based Knowledge Space Theory may be identified with sub-structures of a concept map representing the ontological information of the respective domain. This means that a specific skill that is required for solving problems, or that is taught by learning objects, can be identified with a subset of propositions represented by the concept map. Assume, for instance, the knowledge domain of right triangles. Figure 3 illustrates a possible assessment problem from this domain.

Figure 3: Example of an assessment problem for the knowledge domain 'right triangles' (given: a = 4 cm, c = 7 cm; find b).

Solving this geometry problem requires knowing the Pythagorean Theorem and how to apply it. These demands, which can be identified with a sub-structure of a concept map (see Figure 4), may be assumed to constitute a skill.

Figure 4: Concept map of the knowledge domain 'right triangles'. The marked sub-structure corresponds to the skill 'knowing the Pythagorean Theorem'.

The representation of skills in the concept map may be used for deriving dependencies between skills, e.g. by set inclusion. If the representation of a skill x in the concept map is a subset of that of a skill y, then skill x is subordinated to skill y.

Using the Component-Attribute Approach
Concept maps provide a tool for modelling the content of a knowledge domain. Their construction aims at uncovering the prerequisite relations among concepts within a topic, and between different topics in a subject, and may be based on curriculum and content analysis. Curriculum and content analysis not only reveal the basic concepts of a domain, but also the learning objectives that are related to those concepts. Learning objectives include required activities of the learner and may be captured by so-called action verbs. Action verbs (e.g. state, or solve an equation) describe the observable student performance or behaviour and may be annotated to the nodes of the concept map representing the concepts that are to be taught. The information provided by the concept map can then again be used for establishing a competence structure in the sense of Knowledge Space Theory. The concept map provides a hierarchical structure on the concepts of a domain. Hence, the set of concepts C can be represented as a graph illustrating the relationship between the concepts (see Figure 5(a) for an example). Additionally, a relation may be introduced on the set of action verbs A that induces a structure on it. For instance, to 'state' a linear equation is most likely a prerequisite to 'solve' a linear equation, and therefore the action verb 'state' can be considered a prerequisite to the action verb 'solve'. The structure defined on the action verbs can also be depicted in a graph (see Figure 5(b) for an example). Based on these considerations, a skill in terms of extended Knowledge Space Theory may be identified with a pair consisting of a concept and an action verb (e.g. c1a2).
As an example of a skill, consider 'solve linear equation', which consists of the concept 'linear equation' and the action verb 'solve'. Formally we define S ⊆ C × A to reflect the fact that not all combinations of concepts and action verbs may be meaningful, or even realisable. A crucial question is how to merge the two kinds of structures, i.e. the structure on the set of concepts and the structure on the set of action verbs, in order to establish a structure on the set of skills.

Figure 5: Concept structure (a) and structure defined on action verbs (b).

To resolve this issue we suggest the component-attribute approach (Albert & Held, 1994, 1999). According to this approach, components are understood as dimensions, while attributes are the different values these dimensions can take on. In the present context, the set of concepts and the set of action verbs are considered as the components, and the attributes are identified with the respective elements (e.g. c1, c2, c3, c4 in C and a1, a2 in A). On each component a relation is defined that orders the attributes (see Figure 5). A structure on the set of skills is then established by forming the direct product of these two components, which results in a prerequisite relation on the Cartesian product C × A. The product of the two graphs displayed in Figure 5 is the relation depicted in Figure 6. From this one can see, e.g., that skill c2a2 is a prerequisite to the skills c2a1, c1a1, and c1a2.

Figure 6: Example of a prerequisite relation on the skills induced by the structures on concepts and action verbs displayed in Figure 5.

If S is a proper subset of the Cartesian product C × A, then we consider the prerequisite relation that the direct product shown in Figure 6 induces on S. In the framework of extended Knowledge Space Theory, the prerequisite relation on the skills is interpreted as a surmise relation that gives rise to a competence structure. The competence states contained in it have to respect the ordering illustrated in Figure 6, which means, for example, that with the skill c3a1 each competence state has to contain the skills c3a2, c4a1, and c4a2, too. Notice that, from a psychological point of view, pairs consisting of a concept and an action verb, like 'state linear equation' or 'solve linear equation', describe rather global skills. Solving an equation might require several more elementary skills, which are in correspondence with the distinct steps in a solution path. It may thus be necessary to characterise the skills at a more fine-grained level. Further research is needed for deciding upon an optimal level of granularity of the skills.

4.2 Assigning Skills to Assessment Problems
Let us now consider the assignment of skills to the set of assessment problems. The relationship between assessment problems and skills can be formalised by two mappings. The mapping s (skill function) associates to each problem a collection of subsets of skills. Each of these subsets (i.e. each competency) consists of those skills that are sufficient for solving the problem. Assigning more than one competency to a problem takes care of the fact that there may be more than one way to solve it. The mapping p (problem function) associates to each subset of skills the set of problems that can be solved in it.
The problem function defines a knowledge structure, because the associated subsets are in fact nothing other than the possible knowledge states. It has been shown that both concepts are equivalent, which means that, given the skill function, the problem function is uniquely determined, and vice versa. Consequently, only one of the two functions needs to be known in order to build the respective knowledge structure. Consideration is confined to the skill function, because it may be interpreted as representing the assignment of metadata to the problems. It follows that assigning (semantic) metadata to assessment problems puts constraints on the possible knowledge states that can occur. We illustrate the intimate relationship between skill function and problem function by a simple example. Consider the knowledge domain Q = {a, b, c, d}, and let the skill function s on the set S = {x, y, z} of skills be given by
s(a) = {{x, y}, {x, z}},
s(b) = {{x, z}},
s(c) = {{x}, {y}},
s(d) = {{y, z}}.
This means, for example, that each of the skill sets {x, y} and {x, z} is sufficient for solving problem a. From the skill function we can derive the corresponding problem function, which yields
p(∅) = ∅,
p({x}) = {c},
p({y}) = {c},
p({z}) = ∅,
p({x, y}) = {a, c},
p({x, z}) = {a, b, c},
p({y, z}) = {c, d},
p(S) = Q.
The assignment of skills to the assessment problems induces a knowledge structure on the set of problems, which is actually given by the subsets of problems in the range of the problem function. The knowledge structure for the above example is given by {∅, {c}, {a, c}, {c, d}, {a, b, c}, Q}. In principle, the skill function for a given set Q of assessment problems may also introduce dependencies between skills. It may be the case that a certain skill is required for solving a problem only in connection with another skill. In the above example the skill z is available only if either x or y is available, too. These dependencies, however, may only crop up in the given set Q, and it remains unclear whether they are valid in general. If capitalising on incidental dependencies between problems is to be avoided, then the constraints the skill function puts on the possible subsets of skills should be neglected.

4.3 Problem-based Skill Assessment
A knowledge structure can form the basis for devising an efficient adaptive procedure for knowledge assessment (see e.g. Doignon & Falmagne, 1999; Dowling & Hockemeyer, 2001). After identifying a learner's knowledge state, which refers to the observable behaviour, it can then be mapped to the corresponding competence state. Considering the knowledge structure given in Figure 2 for the knowledge domain Q = {a, b, c, d, e}, at the beginning of an assessment phase all states of the structure may correspond to the knowledge state of an individual learner. According to a deterministic procedure, the assessment starts by selecting a problem that is contained in approximately half of the states of this structure and by posing this problem to the learner. Depending on the learner's answer, the next problem will be selected. If the learner is capable of solving problem b, for example, then only the knowledge states containing problem b are still feasible. If subsequently problem e is solved, states {a, b, c, e} and {a, b, c, d, e} remain. The learner's knowledge state is uniquely identified after presenting problem d. For instance, state {a, b, c, e} results if problem d cannot be solved by this learner.
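The derivation of the problem function and of the induced knowledge structure from a skill function can be checked mechanically. The following sketch (illustrative code, not part of the original paper) reproduces the worked example from Section 4.2:

```python
from itertools import chain, combinations

Q = ["a", "b", "c", "d"]                 # assessment problems
S = ["x", "y", "z"]                      # skills
s = {                                    # skill function: competencies sufficient for each problem
    "a": [{"x", "y"}, {"x", "z"}],
    "b": [{"x", "z"}],
    "c": [{"x"}, {"y"}],
    "d": [{"y", "z"}],
}

def problem_function(skills):
    """p(skills): the problems solvable with the given subset of skills."""
    return {q for q in Q if any(comp <= skills for comp in s[q])}

def all_subsets(items):
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

print(problem_function({"x", "z"}))      # the set {a, b, c}, as listed above
knowledge_structure = {frozenset(problem_function(set(sub))) for sub in all_subsets(S)}
# six states: {}, {c}, {a, c}, {c, d}, {a, b, c}, {a, b, c, d}
print(len(knowledge_structure))          # 6
```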
For the five-problem domain of Figure 2, the presentation of only three problems thus allows for identifying the knowledge state of a learner. Formally, the number of questions needed for determining the knowledge state of a learner is approximately the base-2 logarithm of the total number of knowledge states.
Aside from the just outlined deterministic assessment procedure, assessment may also be embedded in a probabilistic framework. A probabilistic assessment method allows for considering that the knowledge states may occur with different frequencies within a population, as well as that a subject may sometimes be careless in answering a problem or may guess the correct answer. Such an assessment method assumes an a priori likelihood function (e.g. a probability distribution) on the knowledge states. Initially, this likelihood may depend on the learner's profile, for example the age or grade of this learner. Later, this probability distribution is updated in accordance with the learner's answers to the posed problems. The questioning continues until there is a pronounced peak in the likelihood function that suggests a unique knowledge state for an individual learner. The knowledge state identified for a learner can then be mapped to his/her competence state by using the skill function. This means that, given a knowledge state, we are looking for the subset of skills that are sufficient for solving the problems contained in the knowledge state. However, there may be more than one such subset. In this case the skills cannot be recovered uniquely from the assessed knowledge state. To provide an example, consider the skill function defined in Section 4.2. If we assume that the assessment converged to the knowledge state {c}, then it is unclear which skills the learner is endowed with. According to the skill function, either skill x or skill y may be responsible for solving problem c. This non-uniqueness occurs whenever a problem function is not one-to-one. Using additional information may lead to a unique identification of the available skills (e.g. looking up the learning history, or checking for the skills actually taught). The best strategy, however, would be to select a proper set of assessment problems that avoids the above mentioned non-uniqueness. Once the competence state of a learner has been determined, it may serve as a basis for selecting a personalised learning path.

4.4 Assigning Skills to Learning Objects
The relationship between learning objects and skills is different from that between assessment problems and skills. The relationship between the set L of LOs and the skills in S is mediated by two mappings (Hockemeyer, 2003; Hockemeyer et al., 2003). The mapping r associates to each LO a subset of skills (required skills), which characterise the prerequisites for dealing with it, or understanding it. The mapping t associates to each LO a subset of skills (taught skills), which refer to the content actually taught by the LO. In a similar way as outlined above, the mappings r and t induce a learning structure on the set of LOs, which plays a central role in generating personalised learning paths. The pair of mappings r and t also imposes constraints on the competence states that can occur. Again, these constraints are tied to the given set L of LOs. The imposed competence structure characterises the learning progress that may be achieved by studying the learning objects in L.
Generally, the assignment of skills to learning objects allows for deciding which learning objects are to be presented next, given a certain competence state. The concepts of inner and outer fringes (cf. Section 2) of a competence state may provide the basis for implementing personalised learning. The inner fringe of a knowledge state may be interpreted as 'what a learner can do', while the outer fringe represents 'what this learner is ready to learn'. Therefore, as the learning process proceeds, the next skills to be learned should be chosen from the outer fringe of the current competence state. Thus, a suitable learning object has to be selected that is characterised by required skills that the learner already has available and by taught skills that correspond to the outer fringe of the current competence state. If previously learned material has to be reviewed, then the content corresponding to the inner fringe of a learner's actual competence state seems to be a natural choice, because it contains the most sophisticated skills acquired by the learner.

5 Conclusions
The present paper proposes a competence-based extension of Knowledge Space Theory that provides a formal framework for explicitly linking assessment problems and learning objects to the relevant skills and competencies. It is demonstrated that the assignment of skills to assessment problems (which are sufficient for their solution) induces a knowledge structure characterising the possible answer patterns of the learners. Moreover, it is shown that assigning required and taught skills to learning objects allows for generating personalised learning paths. The derivation of the relevant skills from domain ontologies is discussed in detail. Two possible approaches are outlined, which are, on the one hand, to identify skills with sub-structures of a concept map and, on the other hand, to identify them with pairs of concepts and action verbs and to establish a skill structure by merging the structures given on both sets. A competence-based extension of Knowledge Space Theory can serve as a framework for an efficient adaptive assessment of the skills and competencies of a learner, and for selecting personalised learning paths. It thus constitutes a valuable model for implementing personalised learning within an open technology-enhanced learning system.

6 References
Albert, D., & Held, T. (1994). Establishing knowledge spaces by systematical problem construction. In D. Albert (Ed.), Knowledge Structures (pp. 78–112). New York: Springer Verlag.
Albert, D., & Held, T. (1999). Component Based Knowledge Spaces in Problem Solving and Inductive Reasoning. In D. Albert & J. Lukas (Eds.), Knowledge Spaces: Theories, Empirical Research, Applications (pp. 15–40). Mahwah, NJ: Lawrence Erlbaum Associates.
Doignon, J.-P., & Falmagne, J.-C. (1985). Spaces for the assessment of knowledge. International Journal of Man-Machine Studies, 23, 175–196.
Doignon, J.-P., & Falmagne, J.-C. (1999). Knowledge Spaces. Berlin: Springer.
Doignon, J.-P. (1994). Knowledge spaces and skill assignments. In G. H. Fischer & D. Laming (Eds.), Contributions to Mathematical Psychology, Psychometrics and Methodology (pp. 111–121). New York: Springer-Verlag.
Dowling, C. E., & Hockemeyer, C. (2001). Automata for the Assessment of Knowledge. IEEE Transactions on Knowledge and Data Engineering, 13, 451–461.
Düntsch, I., & Gediga, G. (1995). Skills and Knowledge Structures. British Journal of Mathematical and Statistical Psychology, 48, 9–27.
Falmagne, J.-C., Koppen, M., Villano, M., Doignon, J.-P., & Johannesen, L. (1990). Introduction to Knowledge Spaces: How to Build, Test, and Search Them. Psychological Review, 97, 201–224.
Hockemeyer, C. (2003). Competence based adaptive e-learning in dynamic domains. In F. W. Hesse & Y. Tamura (Eds.), The Joint Workshop of Cognition and Learning through Media-Communication for Advanced E-Learning (JWCL), 2003, 79–82, Berlin.
Hockemeyer, C., Conlan, O., Wade, V., & Albert, D. (2003). Applying Competence Prerequisite Structures for eLearning and Skill Management. Journal of Universal Computer Science, 9, 1428–1436.
Korossy, K. (1997). Extending the theory of knowledge spaces: A competence-performance approach. Zeitschrift für Psychologie, 205, 53–82.
Korossy, K. (1999). Modeling knowledge as competence and performance. In D. Albert & J. Lukas (Eds.), Knowledge Spaces: Theories, Empirical Research, Applications (pp. 103–132). Mahwah, NJ: Lawrence Erlbaum Associates.
Zaluski, A. (2001). Knowledge Spaces Mathematica Package. PrimMath 2001 - Mathematica in Science, Technology and Education. Conference Proceedings, Zagreb, September 27–28, 2001. University of Zagreb.
Translation of Chinese Neologisms under the Relevance-Adaptation Model
Master's Thesis, South-Central University for Nationalities
Translation of Chinese Neologisms under the Relevance-Adaptation Model
Author: Tang Lei; Degree sought: Master; Major: Foreign Linguistics and Applied Linguistics; Supervisor: Xu Ju; May 2011

Abstract
Language is one of the basic tools of human communication, and vocabulary is its most active part. Neologisms mainly refer to words that express new things, new concepts, new ways of thinking, new experiences, new problems, and the like. Since the reform and opening up, large numbers of Chinese neologisms have continually appeared in our daily life and have been translated into English. Although great progress has been made in the translation of Chinese neologisms, many problems remain, such as mechanical literal translation, arbitrary translation, semantic redundancy, cultural mistranslation, and loss of cultural information.
Taking the relevance-adaptation model proposed by Professor Yang Ping as its theoretical framework, this thesis attempts to explore a new approach capable of explaining the translation of Chinese neologisms. Based on an analysis of English translations of Chinese neologisms collected from newspapers, magazines, and the Internet, the study aims to answer the following four questions: (i) How can the process of translating Chinese neologisms be described under the relevance-adaptation model? (ii) How should the criteria guiding the translation of Chinese neologisms be selected under this model? (iii) How should the specific strategies guiding neologism translation be selected under this model? (iv) How can a relevance-adaptation model of Chinese neologism translation be constructed?
Under the relevance-adaptation model, the translation of Chinese neologisms can be divided into two phases: comprehension and production. In the comprehension phase, the translator seeks the optimally relevant interpretation that matches the source author's intention; in the production phase, the translator chooses linguistic forms or strategies by adapting to the linguistic, cognitive, psychological, and socio-cultural context, so that target readers obtain adequate contextual effects and are provided with a readable translated text.
The study finds that: (1) the translation of Chinese neologisms is not only an ostensive-inferential process of seeking optimal relevance, but also a process of dynamic adaptation to context, in which the translator must choose among numerous translation criteria and strategies; (2) under the relevance-adaptation model, the criteria available for guiding the translation of Chinese neologisms are faithfulness, smoothness, conciseness, clarity, and vividness; (3) to meet these criteria, the translator may adopt strategies such as completion, free translation, literal translation with annotation, loan translation, borrowing, back-translation, and China English (including transliteration, literal translation, and coinage); (4) under the relevance-adaptation model, the translation of Chinese neologisms can fill lexical-semantic gaps, enhance target readers' understanding, compensate for cultural default, and spread distinctive Chinese culture.
Situation: An Appropriate Framework for Organizing and Storing Lexical-Semantic Knowledge
The Linear Complementarity Problem
O.L. Mangasarian and J.S. Pang
The feasible region of the XLCP(M, N, C) is denoted FEA(M, N, C); it is defined to be the set

FEA(M, N, C) = {(x, y) ∈ R^{2n}_+ : Mx − Ny ∈ C},

which is a polyhedral subset of R^{2n}_+. We shall say that the XLCP(M, N, C) is feasible if FEA(M, N, C) is nonempty. The set of complementary solutions of the XLCP(M, N, C) is given by

SOL(M, N, C) = {(x, y) ∈ FEA(M, N, C) : x ⊥ y}.

2. The Equivalent Bilinear Program. Associated with the XLCP(M, N, C) is a natural bilinear program defined on the same feasible region:

minimize x^T y  subject to (x, y) ∈ FEA(M, N, C).

We shall denote this problem by BLP(M, N, C). The BLP(M, N, C) should be contrasted with the "natural" quadratic program that one associates with the standard LCP(q, M), which corresponds to the special case of the XLCP(M, N, C) with m = n, N = I, and C = {−q}. The latter quadratic program is [3]

minimize x^T (q + Mx)  subject to x ≥ 0, q + Mx ≥ 0.    (1)

One important distinction between the BLP(M, N, C) and the quadratic program (1) is that the latter is defined by the variable x only, whereas the former involves the pair (x, y). We shall see shortly that the BLP(M, N, C) plays a similar role in the study of the XLCP(M, N, C) as (1) does in the LCP(q, M). Since the objective function of the BLP(M, N, C) is clearly nonnegative on FEA(M, N, C), the XLCP(M, N, C) is equivalent to the BLP(M, N, C) in the sense that a pair of vectors (x, y) solves the former problem if and only if (x, y) is a globally optimal solution of the latter with zero objective value. Moreover, by the well-known Frank-Wolfe Theorem of quadratic programming [5], the BLP(M, N, C) always has an optimal solution provided that it is feasible. Of course, it is in general not necessary for an optimal solution of the BLP(M, N, C) to have zero objective value. In what follows, we shall establish several results that pertain to the relationship between the XLCP and the associated BLP.

Proposition 2.1. Let M and N be m × n matrices and C a polyhedral set in R^m. The bilinear function f(x, y) = x^T y is convex on the set FEA(M, N, C) if and only if the following implication
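As a small illustration of the complementarity condition and the zero-objective criterion described above, the sketch below checks a candidate solution of a standard LCP(q, M), i.e. the special case N = I and C = {−q}. The matrix, vector, and tolerance are made-up illustrative values, not taken from the paper:

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-1.0, -1.0])

def is_lcp_solution(x, M, q, tol=1e-9):
    """x solves LCP(q, M) iff x >= 0, y = q + Mx >= 0, and x^T y = 0."""
    y = q + M @ x
    feasible = np.all(x >= -tol) and np.all(y >= -tol)
    zero_objective = abs(x @ y) <= tol       # the bilinear objective of the BLP vanishes
    return bool(feasible and zero_objective)

x = np.array([1.0 / 3.0, 1.0 / 3.0])          # here q + Mx = 0, so x is complementary
print(is_lcp_solution(x, M, q))               # True
```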
Engineering English (Integrated Second Draft): Reference Answers
Unit One
Task 1: ⑩ ④ ⑧ ③ ⑥ ⑦ ② ⑤ ① ⑨
Task 2
① be consistent with: He said that future reforms must be consistent with the principles of free trade and open investment.
② specialize in: Start-up costs are lower, because each enterprise can specialize in a single, very narrow field.
③ derive from: All of these capabilities derive from something called machine learning, which lies at the core of many modern artificial intelligence applications.
④ a range of: A range of wearable products launched by start-ups and established brands has people excited and eager to try them.
⑤ date back to: Here in Silicon Valley, we are often so immersed in all manner of 'new new' approaches that we forget we are merely rediscovering simple lessons that date back to the fundamentals of business.
Task 3: T F F T F
Task 4
The most common view
The principal task of engineering: to take into account the customers' needs and to find the appropriate technical means to accommodate these needs.
Commonly accepted claims:
Technology tries to find appropriate means for given ends or desires;
Technology is applied science;
Technology is the aggregate of all technological artifacts;
Technology is the total of all actions and institutions required to create artefacts or products, and the total of all actions which make use of these artefacts or products.
The author's opinion: it is a viewpoint with flaws.
Arguments: It must of course be taken for granted that the given simplified view of engineers with regard to technology has taken a turn within the last few decades.
Observable changes: In many technical universities, interdisciplinary courses are already inherent parts of the curriculum.
Task 5
① The most common view engineers hold of their own professional conduct is that they plan, develop, design, and launch technical products by applying scientific findings.
classification of the evaluated hardware element
In the field of hardware engineering, the evaluation of different hardware elements plays a crucial role in determining their functionality, reliability, and overall performance. These evaluations are carried out to assess the capabilities and limitations of hardware components, ensuring that they meet specific criteria and standards. The classification of the evaluated hardware element helps to categorize and understand its characteristics, which aids in making informed decisions when selecting the appropriate hardware for a particular application. This article will delve into the classification of the evaluated hardware element and discuss the various categories that exist.

1. Processing Units and Microprocessors
One of the most critical hardware components to be evaluated is the processing unit or microprocessor. These elements are responsible for executing instructions and performing calculations in a computer system. The evaluation of processing units involves assessing their clock speed, architecture, cache size, and power consumption. Based on these evaluations, processing units can be classified into various categories, such as low-end, mid-range, and high-end, based on their performance and capabilities.

2. Memory Modules
Memory modules are another vital hardware element that is extensively evaluated. These modules store and retrieve data for quick access by the processor. The evaluation of memory modules involves testing their capacity, speed, type, and compatibility with the system. Classification of memory modules is typically done based on their capacity, type (e.g., RAM or ROM), and speed (e.g., DDR3 or DDR4). This classification helps in selecting the appropriate memory module that meets the requirements of the system in terms of storage and data transfer rates.

3. Storage Devices
Evaluation of storage devices focuses on assessing their capacity, speed, reliability, and data transfer rates. Storage devices include hard disk drives (HDDs), solid-state drives (SSDs), and various other storage media. HDDs are evaluated based on parameters like rotational speed, data transfer rate, and storage capacity. SSDs are evaluated based on factors such as durability, transfer speed, and reliability. The classification of storage devices helps in selecting the appropriate storage medium based on the requirements of the system, whether it is for high-speed data access, larger storage capacity, or durability.

4. Graphics Processing Units (GPUs)
GPUs are hardware elements specifically designed to handle complex graphical computations and rendering tasks. The evaluation of GPUs involves assessing factors like CUDA core count, memory bandwidth, clock speed, and power consumption. GPUs are usually classified into different categories based on their performance levels, such as entry-level, mid-range, and high-end, to help users choose the appropriate GPU for their desired graphics-intensive applications.

5. Networking Devices
Evaluating networking devices involves assessing their data transfer rates, compatibility with different network protocols, reliability, and security features. Networking devices include routers, switches, and wireless access points. Classification of networking devices is done based on their capabilities, such as gigabit routers, managed switches, or wireless routers. This classification helps in selecting the appropriate networking device that caters to the specific needs of the network infrastructure.
Input and Output DevicesInput and output devices are evaluated based on factors like compatibility, data transfer rates, responsiveness, and durability. Input devices include keyboards, mice, and touchscreens, whereas output devices include monitors, printers, and speakers. These hardware elements are often classified based on their functionality, size, and connectivityoptions. This classification aids in selecting the appropriate input and output devices for seamless interaction with computer systems.In conclusion, the classification of the evaluated hardware element is essential for understanding and categorizing the characteristics and capabilities of different hardware components. Evaluating hardware elements such as processing units, memory modules, storage devices, GPUs, networking devices, and input/output devices allows for informed decision-making when selecting the appropriate hardware for specific applications. By considering the classification of the evaluated hardware element, individuals and organizations can ensure that their hardware choices align with their requirements and expectations.。
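Not part of the original article: as a rough illustration of the tier-based classification described above, the following sketch maps a couple of evaluated GPU parameters to a performance tier. The thresholds and parameter names are invented for the example and would in practice come from the actual evaluation criteria.

```python
def classify_gpu(cuda_cores: int, memory_bandwidth_gbps: float) -> str:
    """Assign a GPU to an entry-level / mid-range / high-end tier
    based on two evaluated parameters (illustrative thresholds only)."""
    if cuda_cores >= 8000 or memory_bandwidth_gbps >= 700:
        return "high-end"
    if cuda_cores >= 3000 or memory_bandwidth_gbps >= 300:
        return "mid-range"
    return "entry-level"

print(classify_gpu(cuda_cores=10240, memory_bandwidth_gbps=760))  # high-end
print(classify_gpu(cuda_cores=2560, memory_bandwidth_gbps=224))   # entry-level
```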
A mathematical framework for global illumination algorithms
A Mathematical Framework for Global Illumination Algorithms
Ph. Dutré, E. Lafortune, Y. D. Willems
{philipd, ericl, ydw}@cs.kuleuven.ac.be
Department of Computing Science, Katholieke Universiteit Leuven
Celestijnenlaan 200A, B-3001 Heverlee, Belgium

1 Abstract
This paper describes a mathematical framework for rendering algorithms. Starting from the rendering equation and the potential equation, we will introduce the Global Reflection Distribution Function (GRDF). By using the GRDF, we are able to compute the behaviour of light in an environment, independent of the initial lighting or viewpoint conditions. This framework is able to describe most existing rendering algorithms.

2 Introduction
The global illumination problem is formulated by the well-known rendering equation [Kajiya86]. Different methods have been proposed to solve this equation: Monte Carlo path tracing, which is in fact an application of distributed ray tracing [Cook et al. 84, Shirley-Wang91, Shirley-Wang92]; various two-pass methods [Chen et al. 91, Sillion-Puech89, Wallace et al. 87], which combine a radiosity and a ray tracing pass; and methods based on particle tracing [Pattanaik-Mudur92], which are related to solutions presented in recent heat transfer literature [Brewster92].
Algorithms which solve the global illumination problem can be subdivided into four different classes. A first group of methods is based upon gathering techniques: the illumination of a point or surface is computed by looking at its surroundings, and by taking into account possible contributions towards the illumination of the surface. A second group of methods simulates the propagation of light in an environment, starting from the light sources. Both these approaches can further be divided into deterministic and probabilistic algorithms. Gathering algorithms are described by the traditional rendering equation, while shooting algorithms are best described by the so-called potential equation [Pattanaik-Mudur93, Pattanaik93].

3 The rendering equation
3.1 Exitant and incident radiance
Radiance is the basic quantity for describing light transport. It is expressed as power per unit surface area per unit solid angle. Exitant radiance (L→) is the radiance leaving a surface point in a given direction of the hemisphere. Incident radiance (L←) is radiance arriving at a surface point from a direction belonging to the hemisphere. Equation 1 gives the relationship between exitant and incident radiance (figure 1):

  L(x→θ) = L(p(x,θ) ← θ⁻¹),   L(y←ψ) = L(p(y,ψ) → ψ⁻¹)        (1)

where:
• L(x→θ): exitant radiance leaving x in direction θ [Watt / m² sr];
• L(y←ψ): incident radiance arriving at y from direction ψ [Watt / m² sr].
(Figure 1: exitant and incident radiance.)

The propagation of radiance in an environment is described by the well-known rendering equation (figure 2):

  L(x→θ) = L_e(x→θ) + ∫_{Ω_x} L(p(x,φ) → φ⁻¹) f_r(φ, x, θ) cos(φ, n_x) dω_φ        (2)

where:
• L_e(x→θ): self-emitted exitant radiance leaving x in direction θ [Watt / m² sr];
• Ω_x: the hemisphere around x;
• dω_φ: the differential solid angle around direction φ;
• p(x, φ): the closest point seen from x in direction φ;
• φ⁻¹: the direction opposite to φ;
• f_r(φ⁻¹, x, θ): the Bidirectional Reflection Distribution Function (BRDF) evaluated at x, with incoming direction φ⁻¹ and outgoing direction θ;
• cos(φ, n_x): the absolute value of the cosine of the angle between direction φ and the normal direction at x.
(Figure 2: transport of exitant radiance.)
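Not part of the original paper: as a rough illustration, a gathering-style Monte Carlo estimator of equation (2) might look like the following sketch. The scene object and its helpers (trace, emitted, brdf, normal, sample_hemisphere, cos_angle, opposite) are hypothetical interfaces assumed for the example, not an existing API.

```python
import math

def estimate_exitant_radiance(scene, x, theta, depth=0, max_depth=5):
    """Estimate L(x -> theta) = L_e + integral of L(p(x,phi) -> phi^-1) f_r cos."""
    radiance = scene.emitted(x, theta)              # L_e(x -> theta)
    if depth >= max_depth:
        return radiance

    # Sample one direction phi on the hemisphere around the normal at x
    # (uniform sampling, so the probability density is 1 / (2*pi)).
    phi = scene.sample_hemisphere(scene.normal(x))
    pdf = 1.0 / (2.0 * math.pi)

    y = scene.trace(x, phi)                         # closest point p(x, phi)
    if y is None:
        return radiance

    incoming = estimate_exitant_radiance(scene, y, scene.opposite(phi),
                                         depth + 1, max_depth)
    cos_term = abs(scene.cos_angle(phi, scene.normal(x)))
    return radiance + incoming * scene.brdf(phi, x, theta) * cos_term / pdf
```

Averaging this estimator over many samples per pixel gives the usual path-tracing picture of the gathering approach discussed below.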
The integral over the hemisphere around x can be written as a transport operator T working on L(x→θ):

  L(x→θ) = L_e(x→θ) + (TL)(x→θ)
  (TL)(x→θ) = ∫_{Ω_x} L(p(x,φ) → φ⁻¹) f_r(φ, x, θ) cos(φ, n_x) dω_φ        (3)

Substituting equation 1 in equation 2, we derive a similar transport equation for incident radiance (figure 3):

  L(y←ψ) = L_e(y←ψ) + ∫_{Ω_{p(y,ψ)}} L(p(y,ψ) ← φ) f_r(φ, p(y,ψ), ψ⁻¹) cos(φ, n_{p(y,ψ)}) dω_φ        (4)

(Figure 3: transport of incident radiance.)
In analogy with the transport equation for exitant radiance, we define an operator Q such that:

  L(y←ψ) = L_e(y←ψ) + (QL)(y←ψ)
  (QL)(y←ψ) = ∫_{Ω_{p(y,ψ)}} L(p(y,ψ) ← φ) f_r(φ, p(y,ψ), ψ⁻¹) cos(φ, n_{p(y,ψ)}) dω_φ        (5)

3.2 Adjoint equations
Two operators O₁ and O₂, operating on elements of the same vector space V, are said to be adjoint with respect to an inner product <A, B> if:

  ∀ A, B ∈ V:  <O₁A, B> = <A, O₂B>        (6)

O₂ is called the adjoint operator of O₁ and is denoted O₁*. It is easy to prove that the operators T and Q defined above are adjoint to each other for the following inner product, defined on the space of all functions of arguments (z, θ) ∈ A × Ω:

  <F₁, F₂> = ∫_A ∫_{Ω_z} F₁(z→φ) F₂(z←φ) cos(φ, n_z) dω_φ dμ_z        (7)

We will refer to the above defined transport operators as T and T*, corresponding to exitant and incident radiance respectively.

4 Flux
The flux is the total amount of power emitted by a set of points and a set of directions around these points. This total set is described by a function g(x, θ): g equals 1 if (x, θ) belongs to the set, and 0 if (x, θ) does not belong to the set. The flux associated with a set S defined by g can then be written as:

  Φ(S) = ∫_A ∫_{Ω_z} L(z→φ) g(z, φ) cos(φ, n_z) dω_φ dμ_z        (8)

or, using the inner product defined above:

  Φ(S) = <L→, g>        (9)

5 The potential equation
The potential equation [Pattanaik-Mudur93, Pattanaik93] describes the global illumination problem from a different point of view. The advantage of the potential equation is that shooting algorithms are better described using this equation. Instead of computing the radiance of all pairs (x, θ) we are interested in, shooting algorithms compute the potential W(x, θ) for each pair (x, θ), with respect to a given set of which we want to compute the flux.
W(x, θ) describes the fraction of the radiance L(x→θ) which contributes to the flux of the set S. W_e(x, θ) equals 1 for points belonging to the set and 0 for all other points. The potential W(x, θ) can be described by the following transport equation (figure 4):

  W(x, θ) = W_e(x, θ) + ∫_{Ω_{p(x,θ)}} W(p(x,θ), φ) f_r(φ, p(x,θ), θ⁻¹) cos(φ, n_{p(x,θ)}) dω_φ        (10)

(Figure 4: transport of potential.)
This is the same transport equation that was used to describe the transport of incident radiance. Therefore, we will apply the same notion of "incidence" to potential: W(x, θ) as described here is thus incident potential, and we will use the notation W(x←θ). The transport equation of W(x←θ) can then be written as:

  W(x←θ) = W_e(x←θ) + (T*W)(x←θ)        (11)
In analogy with the relation between exitant and incident radiance, we can also define exitant potential:

  W(y→ψ) = W(p(y,ψ) ← ψ⁻¹)    or    W(x←θ) = W(p(x,θ) → θ⁻¹)        (12)

Substituting equation 12 in equation 10:

  W(y→ψ) = W_e(y→ψ) + ∫_{Ω_{p(y,ψ)}} W(p(y,ψ) → φ) f_r(φ, p(y,ψ), ψ⁻¹) cos(φ, n_{p(y,ψ)}) dω_φ        (13)

or

  W(y→ψ) = W_e(y→ψ) + (TW)(y→ψ)        (14)

Thus, we have derived two different sets of transport equations which describe the global illumination problem. On the one hand we have exitant radiance and incident potential, described by adjoint transport equations. On the other hand, we have incident radiance and exitant potential, also described by a set of adjoint transport equations.
From the observations above, we can derive an alternative formula for the flux. The function g can be replaced by W_e, since these functions describe the same property of a point:

  Φ(S) = <L→, g> = <L→, W_e←>
       = <L→, W← − T*W←>
       = <L→, W←> − <L→, T*W←>
       = <L→, W←> − <TL→, W←>
       = <L→ − TL→, W←>
       = <L_e→, W←>        (15)

In order to solve the global illumination problem, we need to compute the flux for a number of sets. If a ray tracing approach is used, one set consists of the points and associated directions visible through a pixel; if a radiosity algorithm is used, a set consists of a single patch with an associated hemisphere for each point. We have two sets of equations at our disposal to compute the fluxes:

  Φ(S) = <L→, W_e←>  with  L→ = L_e→ + TL→        Φ(S) = <L_e→, W←>  with  W← = W_e← + T*W←        (16)

The set of equations on the left corresponds to gathering algorithms. Given a set S defined by W_e←, we have to compute the corresponding L→. L→ is computed by gathering all incoming radiances at the point of interest. Ray tracing algorithms are a typical example of gathering algorithms.
The set of equations on the right corresponds to shooting algorithms. With respect to a given set S, a potential is computed for each point belonging to a light source. W← is computed by "shooting" light into the environment, until we eventually reach the set S. Progressive radiosity is a typical example.
Both approaches have distinct advantages. Some two-pass methods use the advantages of both radiosity and ray tracing algorithms.

6 Global Reflection Distribution Function
Given the transport equation of L→, it is clear that each single value L→(x→θ) can be written as a linear combination of all possible values L_e→(z→φ). Indeed:

  L→ = L_e→ + TL→
     = L_e→ + T(L_e→ + T(L_e→ + T(L_e→ + T(L_e→ + ...))))
     = (I + T + T² + T³ + T⁴ + ...) L_e→        (17)

The fraction of each L_e→(z→φ) that counts towards L→(x→θ) can be considered as a function F_{x,θ} over all (z, φ). Thus:

  L(x→θ) = <L_e→, F_{x,θ}>        (18)

For each point (x, θ), there is a corresponding function F_{x,θ}. We can express the single value L(x→θ) as an inner product by using a suitable Dirac impulse δ_{x,θ}:

  δ_{x,θ}(z, φ) = 0  if (z, φ) ≠ (x, θ),    ∫_A ∫_{Ω_z} f(z, φ) δ_{x,θ}(z, φ) cos(φ, n_z) dω_φ dμ_z = f(x, θ)        (19)

This leads to the following observations:

  L→(x→θ) = <L→, δ_{x,θ}> = <L_e→, F_{x,θ}>
  <L_e→, F_{x,θ}> = <L→ − TL→, F_{x,θ}>
                  = <L→, F_{x,θ}> − <TL→, F_{x,θ}>
                  = <L→, F_{x,θ}> − <L→, T*F_{x,θ}>
                  = <L→, F_{x,θ} − T*F_{x,θ}>        (20)

This holds for all possible functions L→ and all values of x and θ. Therefore, we can say that:

  δ_{x,θ} = F_{x,θ} − T*F_{x,θ}    or    F_{x,θ} = δ_{x,θ} + T*F_{x,θ}        (21)

F_{x,θ} can be expressed by the same transport equation that was used for incident potential. We can also apply the notion of "incidence" to F_{x,θ}. The full transport equation then reads:

  F_{x,θ}(y←ψ) = δ_{x,θ}(y←ψ) + (T*F_{x,θ})(y←ψ)        (22)

We can make an analogous reasoning for the potential. We can derive a function G_{y,ψ} such that:

  W(y←ψ) = <W_e←, G_{y,ψ}>    and    G_{y,ψ}(x→θ) = δ_{y,ψ}(x→θ) + (TG_{y,ψ})(x→θ)        (23)

It is clear that there must be some relationship between F_{x,θ} and G_{y,ψ}.
Based on the two different expressions for the flux of a given set, we can derive this relationship:

  Φ(S) = <L→, W_e←>
       = ∫_A ∫_{Ω_x} L→(x→θ) W_e←(x←θ) cos(θ, n_x) dω_θ dμ_x
       = ∫_A ∫_{Ω_x} <L_e→, F_{x,θ}> W_e←(x←θ) cos(θ, n_x) dω_θ dμ_x
       = ∫_A ∫_{Ω_y} ∫_A ∫_{Ω_x} L_e→(y→ψ) W_e←(x←θ) F_{x,θ}(y←ψ) cos(ψ, n_y) cos(θ, n_x) dω_θ dμ_x dω_ψ dμ_y        (24)

and also

  Φ(S) = <L_e→, W←>
       = ∫_A ∫_{Ω_y} L_e→(y→ψ) W←(y←ψ) cos(ψ, n_y) dω_ψ dμ_y
       = ∫_A ∫_{Ω_y} L_e→(y→ψ) <W_e←, G_{y,ψ}> cos(ψ, n_y) dω_ψ dμ_y
       = ∫_A ∫_{Ω_y} ∫_A ∫_{Ω_x} L_e→(y→ψ) W_e←(x←θ) G_{y,ψ}(x→θ) cos(ψ, n_y) cos(θ, n_x) dω_θ dμ_x dω_ψ dμ_y        (25)

Since equation 24 and equation 25 have to be equal, and since this equality holds for all possible functions L_e→ and W_e←, the relation between F and G is found:

  F_{x,θ}(y←ψ) = G_{y,ψ}(x→θ)        (26)

Because of this relationship, the functions F_{x,θ} and G_{y,ψ} can be described by a single function F_r:

  F_{x,θ}(y←ψ) = G_{y,ψ}(x→θ) = F_r(y←ψ, x→θ)        (27)

F_r is the global reflection distribution function (GRDF), as introduced by Lafortune [Lafortune93b].

7 Properties of the GRDF
7.1 Transport equations
Since the GRDF is defined through the definitions of F and G, the following adjoint transport equations describe the behaviour of the GRDF:

  F_r(y←ψ, x→θ) = δ(y←ψ, x→θ) + (T*F_r)(y←ψ, x→θ)
  F_r(y←ψ, x→θ) = δ(y←ψ, x→θ) + (TF_r)(y←ψ, x→θ)        (28)

This double formulation implies that there are two different ways to compute specific values of the GRDF: a gathering approach and a shooting approach.

7.2 Physical interpretation
From the above observations, the following interpretation can be given to the GRDF:
• F_r(y←ψ, x→θ) is the differential fraction of L→(y→ψ) cos(ψ, n_y) dω_ψ dμ_y which contributes to the value of L→(x→θ);
• F_r(y←ψ, x→θ) is the differential fraction of W←(x←θ) cos(θ, n_x) dω_θ dμ_x which contributes to the value of W←(y←ψ).
This is very similar to the definition of the common BRDF, which describes the same property for exitant and incident radiance at a single point. The GRDF expands on this concept and describes the relationship between any two radiance or potential values, taking into account all possible reflections. The BRDF can be considered as a special case of the GRDF. The name Global Reflection Distribution Function is therefore quite appropriate.

7.3 Transforming arguments
Since the values of F_r(y←ψ, x→θ) need to be the same no matter which transport equation is used to compute them, T*F_r(y←ψ, x→θ) and TF_r(y←ψ, x→θ) should be equal:

  (T*F_r)(y←ψ, x→θ) = ∫_{Ω_{p(y,ψ)}} F_r(p(y,ψ)←φ, x→θ) f_r(φ, p(y,ψ), ψ⁻¹) cos(φ, n_{p(y,ψ)}) dω_φ
  (TF_r)(y←ψ, x→θ) = ∫_{Ω_x} F_r(y←ψ, p(x,φ)→φ⁻¹) f_r(φ, x, θ) cos(φ, n_x) dω_φ        (29)

From this equality, the following property of the GRDF can be derived:

  F_r(y←ψ, x→θ) = F_r(p(x,θ)←θ⁻¹, p(y,ψ)→ψ⁻¹)        (30)

This relationship is the generalisation of the property of the BRDF in which the incoming and outgoing direction can be switched, resulting in the same value of the BRDF.

8 Practical applications
The GRDF is independent of both L_e→ and W_e←; it only depends on the geometry of the scene and the surface characteristics of the objects. If one is able to compute the GRDF in advance, applying different values of L_e→ or W_e← is straightforward. Changing L_e→ means that the initial lighting conditions are changed; changing W_e← means we want to compute the flux for a different set. This latter option also encompasses a change of viewpoint.
Since the GRDF is described by two transport equations, we have a choice of which equation to use in order to compute different values of the GRDF:
• If we use the T* equation, we are actually using a gathering approach, leading to algorithms such as stochastic ray tracing [Cook et al. 84], path tracing [Kajiya86] or Gauss-Seidel radiosity.
• If we use the T equation, a shooting approach is used, leading to algorithms such as particle tracing [Pattanaik92, Dutré et al. 93] or progressive radiosity.
• A simultaneous use of both transport equations has been described by [Lafortune93a]. This dual path tracing algorithm involves elements of both ray tracing and particle tracing, and uses the advantages of both.

9 Conclusion
We have described a powerful mathematical framework in which all rendering algorithms can be defined. The Global Reflection Distribution Function (GRDF) plays a major role in this framework. The advantages of the GRDF are:
• It allows us to compute the behaviour of light in a given environment, independent of initial lighting conditions and independent of the final choice of viewpoint.
• All rendering algorithms can be described as different ways of solving the GRDF equations.
• The GRDF is described by two adjoint transport equations. Combining both these equations in a single algorithm combines the advantages of both shooting and gathering algorithms.

10 References
Brewster, Q. 1992. Thermal Radiative Transfer & Properties. J. Wiley & Sons.
Chen, S., Rushmeier, H., Miller, G., Turner, D. 1991. A Progressive Multi-Pass Method for Global Illumination. Computer Graphics (SIGGRAPH '91 Proceedings), 25(4):164-174.
Cook, R., Porter, T., Carpenter, L. 1984. Distributed Ray Tracing. Computer Graphics (SIGGRAPH '84 Proceedings), 18(3):137-145.
Dutré, P., Lafortune, E., Willems, Y. 1993. Monte Carlo Light Tracing with Direct Pixel Contributions. Proceedings of COMPUGRAPHICS, International Conference on Computational Graphics and Visualization Techniques, Portugal, December 1993.
Kajiya, J. 1986. The Rendering Equation. Computer Graphics (SIGGRAPH '86 Proceedings), 20(4):143-150.
Lafortune, E., Willems, Y. 1993a. Bi-directional Path Tracing. Proceedings of COMPUGRAPHICS, International Conference on Computational Graphics and Visualization Techniques, Portugal, December 1993.
Lafortune, E., Willems, Y. 1993b. A Theoretical Framework for Physically Based Rendering. Submitted for publication to Computer Graphics Forum.
Pattanaik, S., Mudur, S. 1992. Computation of Global Illumination by Monte Carlo Simulation of the Particle Model of Light. Proceedings of the 3rd Eurographics Workshop on Rendering, pp. 71-83.
Pattanaik, S., Mudur, S. 1993. The Potential Equation and Importance in Illumination Computations. Computer Graphics Forum, 12(2).
Pattanaik, S. 1993. Computational Methods for Global Illumination and Visualisation of Complex 3D Environments. PhD Thesis, Birla Institute of Technology and Science, Pilani, India.
Shirley, P., Wang, C. 1991. Direct Lighting by Monte Carlo Integration. Proceedings of the 2nd Eurographics Workshop on Rendering.
Shirley, P., Wang, C. 1992. Distribution Ray Tracing: Theory and Practice. Proceedings of the 3rd Eurographics Workshop on Rendering.
Sillion, F., Puech, C. 1989. A General Two-Pass Method Integrating Specular and Diffuse Reflection. Computer Graphics (SIGGRAPH '89 Proceedings), 23(3):335-344.
Wallace, J., Cohen, M., Greenberg, D. 1987. A Two-Pass Solution to the Rendering Equation: A Synthesis of Ray Tracing and Radiosity Methods. Computer Graphics (SIGGRAPH '87 Proceedings), 21(4):311-320.
theoretical framework
Intro to Research Methods, Chapter 6: Literature Search
The Theoretical or Conceptual Framework
The rationale for incorporating the review of the literature in the research is that when you substantiate what you say, you usually substantiate it through the literature you have read. Therefore, you must document your source for your rationale and your theoretical/conceptual framework.
The literature review is a series of references, not a bibliography. Only the literature that you have used to substantiate your problem is included in your literature review. Not everything that you have read about your problem is relevant to your research, and what is not relevant should not be included.
A framework is simply the structure of the idea or concept and how it is put together. A theoretical framework, then, is an essay that interrelates the theories involved in the question. Remember, a theory is a discussion of related concepts, while a concept is a word or phrase that symbolizes several interrelated ideas. Unlike a theory, a concept does not need to be discussed to be understood. However, since you are using several interrelated concepts in a new way, your conceptual framework must explain the relationship among these concepts. Even if your question does not include a theory, there is no doubt that it contains at least one concept that needs to be explained or described in relation to the question as a whole.
Look at your question again. How many ideas, as expressed in words, does your question contain? Look at each of your definitions. More than likely the question is a sequence of related ideas that form a concept rather than a single idea. If so, you must write a conceptual framework that explains the interrelationship of all of the ideas in your question. You have already learned that the level of your question is closely related to the extent of the literature on your topic. The same is true for the theoretical or conceptual framework: it relates closely to the level of the question.
Conceptual Framework and Level I Questions
Level I questions do not have theoretical frameworks as a rule, but your rationale (the significance of your question to transportation and the potential contribution of the results of your study to the profession) is a framework within which your topic is examined. If you are basing your Level I study on a theory or concept that has been studied in a different setting or with a different population, a theoretical framework can be developed as described.
Conceptual Framework and Level II Questions
At Level II, there is a conceptual framework to explain the possible connection between the variables. Each variable or concept will have been studied before, and, even though it is being used differently in your study, you will find previous research useful in helping to develop your framework.
Conceptual Framework and Level III Questions
Level III questions require a theoretical framework which explains the cause-and-effect relationship among the variables. If the "why" question cannot be answered with a theoretical explanation, then the question is at the wrong level.
When you develop your problem essay, be sure that you are consistent with the level of your question, and use this as an opportunity to cross-check all the parts of the problem for consistency.
When you write your problem essay, you will be incorporating your rationale for the development of the question, your theoretical or conceptual framework, and your literature review into one (not three) definitive statement of what you are studying and why, and its relevance to you and your reader. Remember, at this point you are the expert on your research. Now all you have to do is prove your expertise in an essay.
Joëlle Despeyroux and Pierre Leleu
INRIA, Sophia-Antipolis 2004 Route des Lucioles - B.P. 93 F-06902 Sophia-Antipolis Cedex, France.
Moreover, instead of introducing an operational semantics which computes the canonical form (the η-long normal form) using a given strategy, our system has reduction rules which allow a certain nondeterminism in the mechanism of reduction. We have been able to adapt classic proof techniques to show the important metatheoretic results: decidability of typability, soundness of typing with respect to the typing rules, the Church-Rosser property (CR), the Strong Normalization property (SN), and conservativity of our system with respect to the simply-typed λ-calculus. The main problems we encountered in the proofs are due on the one hand to the use of functional types in the types of the recursive constructors, and on the other hand to the use of η-expansion. To solve the problems due to η-expansion, we benefit from previous work done for the simply-typed λ-calculus [JG95] and for system F [Gha96]. In the second section of the paper, we introduce our version of the modal inductive system, its syntax, and its typing and reduction rules. Then in the third section, we prove its essential properties (soundness of typing, CR, SN), from which we deduce that it is a conservative extension of the simply-typed λ-calculus. Finally, we discuss related works and outline future work. A full version of this paper with complete technical developments is available in [Lel97].
Classes = Objects + Data Abstraction
Abstract. In studies of object systems, such as [AC94, Bru93, FHM94, PT94] and the earlier papers appearing in [GM94], types are viewed as interfaces to objects. This means that the type of an object lists the operations on the object, generally as method names and return types, but does not restrict its implementation. As a result, objects of the same type may have arbitrarily different internal representations. In contrast, the type of an object in common practical object-oriented languages such as Eiffel [Mey92] and C++ [Str86, ES90] may impose some implementation constraints. In particular, although the "private" internal data of an object is not accessible outside the member functions of the class, all objects of the same class must have all of the private internal data listed in the class declaration. In this paper, we present a type-theoretic framework that incorporates both forms of type. We first explain the basic principles by extending a core object calculus (developed for this purpose but previously described in [FM95]) with a standard higher-order data abstraction mechanism as in [MP88, CW85]. Then we devise a special-purpose syntax
A Classification of Formal Specification Languages
[Table 1 (fragment): a classification of formal specification languages, covering object specifications (e.g., Z, VDM, the relational model) and continuous process specifications.]
2 Classifying Formal Specifications
Following Liskov and Berzins [1], we distinguish between procedural specifications, object specifications and continuous process specifications. Procedural specifications describe simple software systems that perform a mapping between an input space and an output space; examples of procedural specifications include a sorting program, a square root program or a numerical analysis routine. Object specifications describe software systems that maintain an internal state, and whose outputs are determined not only by their inputs but also by their internal states; examples of object specifications include an abstract data type or a database. Continuous process specifications describe systems that implement a stimulus-response mechanism. The response of these systems to any given stimulus depends on the history of previous stimuli submitted to the system; examples of continuous process systems (also called reactive systems [8, 18, 15, 17]) include an operating system or a process control system. Furthermore, each one of these specification types can be divided into two classes: operational specifications, which describe a system's behavior by building an abstract mathematical model of the state of the system and the operations that constitute its interface; and descriptive specifications, which specify a system by describing its desired properties. Table 1 shows the classification of some formal specification languages. Note that the distinction between the different categories is not clear-cut (e.g., several languages incorporate both process and object paradigms). However, we believe that such a classification can facilitate our understanding of the different languages. In sections 3 to 5 we present some formal specification languages according to our classification; we describe each technique's main features and illustrate its use through an example. Table 2 presents informal requirements that will be used to illustrate the different types of specifications.
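Not part of the original paper: as a rough illustration of the operational/descriptive distinction for a simple procedural specification, the following sketch contrasts an abstract model that constructs the output of a sorting program with a statement of the properties the output must satisfy.

```python
def operational_spec(xs):
    """Operational: an abstract model that constructs the required output."""
    return sorted(xs)

def descriptive_spec(xs, ys):
    """Descriptive: states the desired properties of the output ys
    (ordered, and a permutation of xs) without saying how to compute it."""
    is_ordered = all(ys[i] <= ys[i + 1] for i in range(len(ys) - 1))
    is_permutation = sorted(xs) == sorted(ys)
    return is_ordered and is_permutation

# The output of the operational model satisfies the descriptive properties.
assert descriptive_spec([3, 1, 2], operational_spec([3, 1, 2]))
```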
ieee conference on decision and control (search)
The IEEE Conference on Decision and Control (CDC) is a prominent annual conference in the field of systems, control, and decision-making. Focusing on new theories, methodologies, and applications, CDC serves as a platform for researchers and practitioners to exchange ideas and present their latest findings. In this article, we will explore the significance of CDC, its history, and key topics covered in the conference.
CDC has a long history dating back to 1962, when it was first held. Since then, it has grown into a prestigious conference attracting experts and professionals from academia, industry, and government organizations. The conference features a wide range of technical sessions, workshops, panel discussions, and keynote speeches by distinguished speakers. The topics covered in CDC are diverse and encompass various aspects of control and decision-making, including but not limited to:
1. Control Theory: CDC provides a platform for discussing state-of-the-art control theory and its applications. Topics such as stability analysis, optimal control, robust control, adaptive control, and nonlinear control are frequently discussed at the conference.
2. Decision Theory: Decision-making plays a crucial role in many fields such as finance, economics, and operations research. CDC explores decision models, algorithms, and methods for solving decision-making problems in uncertain and dynamic environments.
3. System Identification and Estimation: Estimating system parameters and identifying system models are essential tasks in control systems. CDC focuses on techniques for system identification, parameter estimation, and model validation.
4. Robotics and Automation: With advancements in robotics and automation, CDC features sessions and workshops dedicated to topics related to autonomous systems, robot control, machine learning in robotics, and human-robot interaction.
5. Networked Control Systems: Networked systems have become ubiquitous in various domains such as transportation, energy, and communication networks. CDC explores control methodologies for networked systems, including networked estimation, networked control, and distributed control.
6. Game Theory and Multi-agent Systems: Game theory provides a mathematical framework for analyzing decision-making in multi-agent systems. CDC discusses game-theoretic approaches to system design, coordination, and cooperation in multi-agent systems.
7. Cyber-Physical Systems (CPS): CDC addresses challenges and solutions related to integrating physical systems with computational elements. Topics include modeling and control of CPS, sensor networks, and real-time systems.
In addition to the technical sessions, CDC also organizes tutorial sessions presented by experts in the field. These tutorials offer an opportunity for attendees to learn about the fundamentals and recent developments in specific areas of control and decision-making.
CDC also hosts several competitions, such as the Student Best Paper Competition and the Student Poster Competition, to encourage young researchers to showcase their work and interact with peers and mentors.
Apart from the technical program, CDC provides ample opportunities for networking and collaboration among participants. The conference attracts researchers, practitioners, and industry professionals from all over the world, providing a diverse and stimulating environment for knowledge sharing and collaboration.
In conclusion, the IEEE Conference on Decision and Control (CDC) is a significant annual event in the field of systems, control, and decision-making. With its rich history and expansive technical program, CDC serves as a platform for researchers and practitioners to exchange ideas, present their latest findings, and foster collaboration in various areas of control and decision-making.
A type-theoretic framework for formal reasoning with different logical foundations
Zhaohui Luo
Dept of Computer Science, Royal Holloway, Univ of London
Egham, Surrey TW20 0EX, U.K.
zhaohui@
(This work is partially supported by the following research grants: UK EPSRC grant GR/R84092, Leverhulme Trust grant F/07537/AA and EU TYPES grant 510996.)

Abstract. A type-theoretic framework for formal reasoning with different logical foundations is introduced and studied. With logic-enriched type theories formulated in a logical framework, it allows various logical systems such as classical logic as well as intuitionistic logic to be used effectively alongside inductive data types and type universes. This provides an adequate basis for wider applications of type theory based theorem proving technology. Two notions of set are introduced in the framework and used in two case studies of classical reasoning: a predicative one in the formalisation of Weyl's predicative mathematics and an impredicative one in the verification of security protocols.

1 Introduction
Dependent type theories, or type theories for short, are powerful calculi for logical reasoning that provide solid foundations for the associated theorem proving technology as implemented in various 'proof assistants'. These type theories include Martin-Löf's predicative type theory [NPS90, ML84], as implemented in ALF/Agda [MN94, Agd00] and NuPRL [C+86], and the impredicative type theories [CH88, Luo94], as implemented in Coq [Coq04] and Lego/Plastic [LP92, CL01]. The proof assistants have been successfully used in formalisation of mathematics (e.g., the formalisation in Coq of the four-colour theorem [Gon05]) and in reasoning about programs (e.g., the analysis of security protocols).
The current type theories as found in the proof assistants are all based on intuitionistic logic. As a consequence, the type theory based proof assistants are so far mainly used for constructive reasoning. Examples that require or use other methods of reasoning, say classical reasoning, would have to be done by 'extending' the underlying type theory with classical laws by brute force and praying for such an extension to be OK.
We believe that the type theory based theorem proving technology is not (and should not be) limited to constructive reasoning. In particular, it should adequately support classical reasoning as well as constructive reasoning. To this end, what one needs is a type-theoretic framework that supports the use of a wider class of logical systems and, at the same time, keeps the power of type theory in formal reasoning such as inductive reasoning based on inductive types.
In this paper, we introduce such a type-theoretic framework where logic-enriched type theories are formulated in a logical framework such as LF [Luo94] and PAL+ [Luo03]. Logic-enriched type theories with intuitionistic logic have been proposed by Aczel and Gambino in studying type-theoretic interpretations of constructive set theory [AG02, GA06]. They are studied here from the angle of supporting formal reasoning with different logical foundations. We show that it is possible to provide a uniform framework so that type theories can be formulated properly to support different methods of reasoning. The structure of our framework promotes a complete separation between logical propositions and data types. It provides adequate support for classical inference as well as intuitionistic inference, in the presence of inductive types and type universes.
Two notions of set are introduced in the type-theoretic framework: a predicative one and an impredicative one.
They are used in two case studies: one in the formalisation of Weyl's predicative mathematics [Wey18, AL06] and the other in the formalisation and analysis of security protocols. Both case studies use classical reasoning; we have chosen to do so partly because how to use type theory in constructive reasoning has been extensively studied and partly because we want to show how classical logic can be employed in a type-theoretic setting.
The proof assistant Plastic [CL01] has been extended (in an easy and straightforward way) by Paul Callaghan to implement the type-theoretic framework as described in this paper. The case studies have been done in the extended Plastic and help to demonstrate that type theory and the associated proof assistants can be used to support formal reasoning with different logical foundations.

2 Logic-enriched type theories in a logical framework
The type-theoretic framework formulates logic-enriched type theories (LTTs for short) in a logical framework. It consists of two parts, the part of logical propositions and the part of data types. Both parts are formulated in a logical framework and linked by the induction rules (see below).
We start with the logical framework LF [Luo94] or PAL+ [Luo03], where the kind Type represents the world of types. Now, we extend the logical framework with a new kind Prop that stands for the world of logical propositions and, for every P : Prop, a kind Prf(P) of proofs of P.
The logic of an LTT is specified in Prop by declaring constants for the logical operators and the associated rules (as logics are introduced in Edinburgh LF [HHP93]). The data types are introduced in Type as in type theories such as Martin-Löf's type theory [NPS90] and UTT [Luo94]. Different LTTs can be formulated in the framework for formal reasoning with different logical foundations. Instead of considering LTTs in general, we shall present and study a typical example, the LTT with classical first-order logic, abbreviated as LTT1, to illustrate how an LTT is formulated in our type-theoretic framework.

2.1 LTT1: an example
The system LTT1 consists of the classical first-order logic, the inductive data types, and type universes. Each of the components is described below.

Logic of LTT1. The logical operators such as ⊃, ¬ and ∀ are introduced by declaring as constants the operators and the direct proofs of the associated inference rules. For instance, for universal quantification ∀, we declare (where we write 'f[a1, ..., an]' for applications and 'f[x1, ..., xn] : A where xi : Ai' for f : (x1:A1, ..., xn:An)A):

  ∀[A, P] : Prop,   ∀I[A, P, f] : Prf(∀[A, P])   and   ∀E[A, P, a, p] : Prf(P[a])

where A : Type, P[x:A] : Prop, f[x:A] : Prf(P[x]) and p : Prf(∀[A, P]).
Note that ∀ can only quantify over types; that is, for a formula ∀[A, P], or ∀x:A. P[x] in the usual notation, A must be a type (of kind Type). Since Prop is not a type (it is a kind), one cannot form a proposition by quantifying over Prop. Higher-order logical quantifications such as ∀X:Prop. X, as found in impredicative type theories, are not allowed. Similarly, since propositions are not types (Prf(P) is a kind, not a type), one cannot quantify over propositions, either.
As another example, we declare the classical negation operator ¬P : Prop for P : Prop and the corresponding double negation rule DN[P, p] : Prf(P), where P : Prop and p : Prf(¬¬P). Other logical operators can be introduced in a similar way or defined as usual. For instance, an equality operator can be introduced to form propositions a =_A b, for A : Type and a, b : A.
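Not from the paper: as a rough illustration of this style of declaration, the corresponding constants can be postulated in, say, Lean rather than Plastic/LF, with an abstract collection o of propositions standing in for the kind Prop (Lean's universes do not match LF kinds exactly, so this is only an analogy).

```lean
-- A rough sketch of declaring the logic as constants: an abstract kind o of
-- propositions, a proof judgement Prf, and the rules for ∀ and classical
-- double negation, instead of reusing Lean's built-in logic.
axiom o : Type                        -- stands for the kind Prop of the paper
axiom Prf : o → Prop                  -- Prf P : the proofs of P
axiom all : (A : Type) → (A → o) → o  -- ∀[A, P]
axiom allI : ∀ (A : Type) (P : A → o), (∀ x : A, Prf (P x)) → Prf (all A P)
axiom allE : ∀ (A : Type) (P : A → o), Prf (all A P) → ∀ a : A, Prf (P a)
axiom neg : o → o                     -- classical negation ¬
axiom DN  : ∀ P : o, Prf (neg (neg P)) → Prf P   -- double negation rule
```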
Inductive data types in LTT1. The system LTT1 (and every LTT) contains (some or all of) the inductive types as found in Martin-Löf's type theory [NPS90] or UTT [Luo94], which include those of natural numbers, dependent pairs, lists, trees, ordinals, etc. For example, the type N : Type of natural numbers can be introduced by first declaring its constructors 0 : N and s[n] : N where n : N, and then its elimination operator E_T[C, c, f, n] : C[n], for C[n] : Type with n : N, and the associated computation rules:

  E_T[C, c, f, 0] = c : C[0]
  E_T[C, c, f, s[n]] = f[n, E_T[C, c, f, n]] : C[s[n]]

For each inductive type, there is an associated induction rule for proving properties of the objects of that type. For example, the induction rule for N is

  E_P[P, c, f, n] : P[n]   for P[n] : Prop [n : N]

Note that the elimination operator over types, E_T, has associated computational rules, while the elimination operator over propositions, E_P, does not.
The induction rules are crucial in connecting the world of logical propositions (formally represented by Prop) and that of the data types (formally represented by Type). Quantifications over types allow one to form propositions to express logical properties of data, and the induction rules allow one to prove those properties.
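Not the paper's Plastic code: the distinction between the two eliminators can be illustrated in Lean, where the type-level eliminator is a computing definition while the propositional induction principle is simply proved.

```lean
-- A rough Lean rendering of the two elimination principles for N.
inductive N where
  | zero : N
  | succ : N → N

-- E_T : elimination into types; its two equations play the role of the
-- computation rules E_T[C,c,f,0] = c and E_T[C,c,f,s n] = f n (E_T ... n).
def elimT {C : N → Type} (c : C N.zero)
    (f : (n : N) → C n → C (N.succ n)) : (n : N) → C n
  | N.zero   => c
  | N.succ n => f n (elimT c f n)

-- E_P : induction over propositions; it is established by a proof and
-- carries no computation rules.
theorem elimP {P : N → Prop} (c : P N.zero)
    (f : ∀ n : N, P n → P (N.succ n)) : ∀ n : N, P n := by
  intro n
  induction n with
  | zero => exact c
  | succ n ih => exact f n ih
```

This mirrors the remark above: elimT comes with defining equations, whereas elimP is just a proved principle.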
Type universes in LTT1. The system LTT1 (and every LTT) may contain type universes, types consisting of (names of) types as objects. For example, a universe of 'small types' can be introduced as

  type : Type   and   T[x] : Type [x : type].

Some of the inductive types have names in a type universe. For example, we can have nat as a name of N in type by declaring nat : type and T[nat] = N : Type. The general way of introducing type universes can be found in [ML84]; see, e.g., [Luo94] for universes containing inductive types generated by schemata.
We remark that, if we have introduced a type universe that contains the names of N and ∅ (the empty type), we can prove Peano's fourth axiom for natural numbers (∀x:N. ¬(s[x] =_N 0)) internally in the type-theoretic framework. This is similar to Martin-Löf's type theory, where Peano's fourth axiom is not provable internally without a type universe [Smi88].

Logical consistency of LTT1. The system LTT1 is logically consistent in the sense that there are unprovable propositions. If one is not satisfied with the 'simple minded consistency' [ML84], a meta-mathematical consistency can be proved: it can be shown that LTT1 is relatively consistent w.r.t. ZF.

Theorem 1 (consistency). The type system LTT1 is logically consistent.

Proof sketch. Let T be Martin-Löf's intensional type theory extended with Excluded Middle (i.e., extending it with assumed proofs of A + (A → ∅) for all types A). Then T is logically consistent w.r.t. ZF. Now, consider the mapping (·)° : LTT1 → T that maps the types and propositions of LTT1 to types of T so that A° = A for A : Type and, e.g., ∀[A, P]° = Π[A°, P°] and (¬P)° = P° → ∅. Then, by proving a more general lemma by induction, we can show that if Γ ⊢ a : A in LTT1, then Γ° ⊢ a° : A° in T. The logical consistency follows.

Although there are other ways to prove the meta-mathematical consistency, the above proof sketch raises an interesting question one may ask: if Martin-Löf's type theory extended with a classical law (say Excluded Middle) is consistent, why does one prefer to use LTT1 rather than such an extension directly?
One of the reasons for such a preference is that the LTT approach preserves the meaning-theoretic understanding of types as consisting of their canonical objects (e.g., N consists of zero and the successors). Such an adequacy property would be destroyed by a direct extension of Martin-Löf's type theory with a classical law, where every inductive type contains (infinitely many) non-canonical objects. Therefore, in this sense, it is inadequate to introduce classical laws directly to Martin-Löf's type theory or other type theories.
In our type-theoretic framework, there is a clear distinction between logical propositions and data types. For example, the classical law in LTT1 does not affect the data types such as N. It hence provides an adequate treatment of classical reasoning on the one hand and a clean meaning-theoretic understanding of the inductive types on the other.
This clear separation between logical propositions and data types is an important salient feature of the type-theoretic framework in general. In Martin-Löf's type theory, for example, types and propositions are identified. The author has argued, for instance in the development of ECC/UTT [Luo94] as implemented in Lego/Plastic and the current version of Coq¹, that it is unnatural to identify logical propositions with data types and that there should be a clear distinction between the two. This philosophical idea was behind the development of ECC/UTT, where data types are not propositions, although logical propositions are types.
¹ The current type structure of Coq (version 8.0) [Coq04], after the universe Set becomes predicative, is very similar to (if not exactly the same as) that of ECC/UTT.
Logic-enriched type theories, and hence our framework as presented in this paper, have gone one step further: there is a complete separation between propositions and types. Logical propositions and their totality Prop are not regarded as types. This has led to a more flexible treatment of logics in our framework.

2.2 Implementation
The type-theoretic framework has been implemented by Callaghan by extending the proof assistant Plastic [CL01]. Plastic implements the logical framework LF. The extension is to add the kind Prop and the operator Prf, as described at the beginning of this section. Logical operators such as those of LTT1 are introduced by the user.
Plastic already supports the inductive types in the kind Type.
However, it does not automatically generate the induction rules (represented above by E_P for N), which, at the moment, are entered by the user (this is possible because E_P does not have associated computation rules). We should also mention that Plastic supports adding computation rules of a certain form, and this allows one to add universes and the associated computation rules.
The system LTT1 has been completely implemented in Plastic, and so have the notions of set and the case studies to be described in the following two sections.

3 Typed sets
When we consider sets in our type-theoretic framework, the objects of a set are all 'similar' in the sense that every set is a typed set. In other words, every set has a base type from which every element of the set comes. For instance, a set with N as base type contains only natural numbers as its elements. We believe that such a notion of typed set is natural and much closer to the practice of everyday mathematical reasoning than that in the traditional set theory, where there is no distinction between types. Sometimes, mathematicians use the word 'category' for what we call types and consider sets with elements of the same category (see, e.g., [Wey18]). Here, types are formal representations of categories.
In the following, we consider two notions of (typed) set: an impredicative notion and a predicative notion.

Impredicative notion of set. Impredicative sets can be introduced as follows.
– Set[A : Type] : Type (Set[A] is the type of sets with base type A.)
– set[A : Type, P[x:A] : Prop] : Set[A] (Informally, set[A, P] is {x : A | P[x]}.)
– in[A : Type, a : A, S : Set[A]] : Prop (Informally, in[A, a, S] is a ∈ S.)
– in[A, a, set[A, P]] = P[a] : Prop (Computational equality)
Note that this notion of set is impredicative. For example, the powerset of S : Set[A] can be defined as {S' : Set[A] | ∀x:A. x ∈ S' ⊃ x ∈ S} of type Set[Set[A]].
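Not the paper's Plastic code: a rough Lean analogue of the impredicative notion of typed set, modelling Set[A] as A → Prop (Lean's Prop is impredicative, which is what the powerset construction below relies on).

```lean
def TSet (A : Type) : Type := A → Prop

def setOf {A : Type} (P : A → Prop) : TSet A := P          -- {x : A | P x}

def mem {A : Type} (a : A) (S : TSet A) : Prop := S a      -- a ∈ S

-- Membership in setOf P unfolds to P a, mirroring the computational equality.
example {A : Type} (P : A → Prop) (a : A) : mem a (setOf P) = P a := rfl

-- The powerset of S : TSet A, defined by quantifying over all sets with base
-- type A; this quantification is what makes the notion impredicative.
def powerset {A : Type} (S : TSet A) : TSet (TSet A) :=
  setOf (fun S' => ∀ x : A, mem x S' → mem x S)
```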
Predicative notion of set. The notion of predicativity has been studied by many people, most notably by Feferman in a series of papers, including [Fef05]. Intuitively, the definition of a set {x | p(x)} is predicative if the predicate p(x) does not involve any quantification over the totality that contains the entity being defined. If the set-forming predicate p does not involve any quantification over sets at all, then the definition of the set is predicative.
In our type-theoretic framework, predicative sets can be introduced by first introducing a propositional universe of small propositions:

  prop : Prop,   V[p : prop] : Prop

Intuitively, a small proposition contains only quantifications over small types (with names in type). This can be seen from the quantifier rules for prop:

  ∀̄[a, p] : prop   [a : type, p[x:T[a]] : prop]
  V[∀̄[a, p]] = ∀[T[a], [x:T[a]]V[p[x]]] : Prop

where p[x] must be a small proposition for any object x in the small type a.
Formally, predicative sets can be introduced as follows:
– Set[a : type] : Type (Set[a] is a type if a is a small type in the universe type.)
– set[a : type, p[x:T[a]] : prop] : Set[a] (Informally, p in {x : A | p[x]} must be a small propositional function.)
– in[a : type, x : T[a], S : Set[a]] : prop (Informally, x ∈ S is a small proposition.)
– in[a, x, set[a, p]] = p[x] : prop (Computational equality)
Note that a type of sets is not a small type. Therefore, quantification over sets is not allowed when stating a set-forming condition.
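Not the paper's Plastic code: a rough Lean sketch of a universe of small propositions, given as codes together with an interpretation V. The single code nat for the natural numbers and the particular connectives chosen here are illustrative assumptions; the point is that quantifiers in small propositions range only over small types.

```lean
inductive U where
  | nat : U

def El : U → Type
  | U.nat => Nat

inductive SmallProp where
  | eq  : (a : U) → El a → El a → SmallProp          -- x =_a y
  | all : (a : U) → (El a → SmallProp) → SmallProp   -- ∀ x : a, p x
  | imp : SmallProp → SmallProp → SmallProp          -- p ⊃ q

-- The interpretation V of codes as genuine propositions.
def V : SmallProp → Prop
  | SmallProp.eq _ x y => x = y
  | SmallProp.all a p  => ∀ x : El a, V (p x)
  | SmallProp.imp p q  => V p → V q
```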
Remark 1. Aczel and Gambino [AG02, GA06] have considered a type universe P of small propositions in the world of types (formally, P : Type). Such a universe has played an important role in their study of the type-theoretic interpretation of constructive set theory. However, it seems that this has the effect of putting logical propositions directly 'back' into the world of types. In our case, the propositional universe prop is of kind Prop, not of kind Type.
We argue that the notions of set introduced above are adequate in supporting 'mathematical pluralism'.² With these notions, the type-theoretic framework can be used to formalise, in the classical setting, the ordinary (classical and impredicative) mathematics and Weyl's predicative mathematics and, in the intuitionistic setting, the predicative and impredicative constructive mathematics.
² The philosophical view of mathematical pluralism is to be elaborated elsewhere.

4 Case studies
Formalisation of Weyl's predicative mathematics. As is known, ordinary mathematics is impredicative in the sense that it allows impredicative definitions that some people might regard as 'circular' and hence problematic. Such people would believe that the so-called predicative mathematics is safer, where impredicative or circular definitions are regarded as illegal. For instance, in the first quarter of the last century, the mathematician Hermann Weyl developed a predicative treatment of the calculus (in classical logic) [Wey18], which has been studied and further developed by Feferman and others [Fef05].
The formalisation of Weyl's work [Wey18] has been done in the type-theoretic framework (more specifically, in LTT1 with predicative sets) in Plastic [AL06].

Formalisation and analysis of security protocols. Security protocols have been extensively studied in the last two to three decades. Besides other interesting research, theorem provers have been used to formalise security protocols and prove their properties. For instance, Paulson has studied the 'inductive approach' [Pau98] in Isabelle [Pau94] to verify properties of security protocols.
As a case study of classical impredicative reasoning in the type-theoretic framework, we have formalised in Plastic several security protocols in LTT1 with impredicative sets. The examples include simple protocols such as the Needham-Schroeder public-key protocol [NS78, Low96] and the Yahalom protocol [BAN89, Pau99]. Our formalisation has followed Paulson's inductive approach closely, in order to examine how well the power of inductive reasoning in the type-theoretic framework matches that in Isabelle. Our experience shows that the answer is positive, although more automation in some cases would be desirable.
In our formalisation, agents, messages and events are all modelled as inductive types (rather than as inductive sets as in Isabelle). The operations such as parts, analz and synth are defined as maps between sets of messages. A protocol is then modelled as a set of traces (a set of lists of events). One can then show that various properties are satisfied by the protocol concerned, including secrecy properties such as the session key secrecy theorem.
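Not the actual Plastic development: a rough Lean sketch of the modelling style just described, with agents, messages and events as inductive types, a trace as a list of events, and a protocol as a set (predicate) of traces. The particular constructors are illustrative assumptions only.

```lean
inductive Agent where
  | server : Agent
  | alice  : Agent
  | bob    : Agent
  | spy    : Agent

inductive Msg where
  | agent : Agent → Msg
  | nonce : Nat → Msg
  | key   : Nat → Msg
  | pair  : Msg → Msg → Msg
  | crypt : Nat → Msg → Msg           -- encryption under the key numbered k

inductive Event where
  | says : Agent → Agent → Msg → Event   -- sender, receiver, message

def Trace : Type := List Event
def Protocol : Type := Trace → Prop
```

A particular protocol would then be given as an inductively defined predicate of type Protocol, and secrecy properties stated as propositions over all traces it admits.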
5 Concluding remarks on future work
Future work includes comparative studies with other existing logical systems and the associated theorem proving technology. For example, it would be interesting to compare the predicative notion of set with that studied by Feferman and others [Fef00, Sim99], and to consider a comparison with Martin-Löf's type theory [NPS90, ML84] in order to study the relationship between the notion of predicative set and that of predicative type.

Acknowledgements. Thanks go to Peter Aczel for his comments during his visit to Royal Holloway, Robin Adams for discussions during our joint work on formalisation of Weyl's predicative mathematics, and Paul Callaghan for his efforts in extending Plastic to implement the type-theoretic framework.

References
[AG02] P. Aczel and N. Gambino. Collection principles in dependent type theory. Proofs and Programs, (eds) P. Callaghan, Z. Luo, J. McKinna and R. Pollack. LNCS 2277, 2002.
[Agd00] Agda proof assistant. http://www.cs.chalmers.se/~catarina/agda, 2000.
[AL06] R. Adams and Z. Luo. Weyl's predicative classical mathematics as a logic-enriched type theory. Submitted to TYPES'06, to appear, 2006.
[BAN89] M. Burrows, M. Abadi, and R. Needham. A logic of authentication. Proc. of the Royal Society of London, 426:233-271, 1989.
[C+86] R. Constable et al. Implementing Mathematics with the NuPRL Proof Development System. Prentice-Hall, 1986.
[CH88] Th. Coquand and G. Huet. The calculus of constructions. Information and Computation, 76(2/3), 1988.
[CL01] P. C. Callaghan and Z. Luo. An implementation of typed LF with coercive subtyping and universes. J. of Automated Reasoning, 27(1):3-27, 2001.
[Coq04] The Coq Development Team. The Coq Proof Assistant Reference Manual (Version 8.0). INRIA, 2004.
[Fef00] S. Feferman. The significance of Hermann Weyl's Das Kontinuum. Proof Theory, (eds) V. Hendricks et al., 2000.
[Fef05] S. Feferman. Predicativity. In S. Shapiro, editor, The Oxford Handbook of Philosophy of Mathematics and Logic. Oxford Univ Press, 2005.
[GA06] N. Gambino and P. Aczel. The generalised type-theoretic interpretation of constructive set theory. J. of Symbolic Logic, 71(1):67-103, 2006.
[Gon05] G. Gonthier. A computer checked proof of the four colour theorem, 2005.
[HHP93] R. Harper, F. Honsell, and G. Plotkin. A framework for defining logics. Journal of the Association for Computing Machinery, 40(1):143-184, 1993.
[Low96] G. Lowe. Breaking and fixing the Needham-Schroeder public-key protocol using CSP and FDR. Lecture Notes in Computer Science, 1055, 1996.
[LP92] Z. Luo and R. Pollack. LEGO Proof Development System: User's Manual. LFCS Report ECS-LFCS-92-211, Dept of Computer Science, Univ of Edinburgh, 1992.
[Luo94] Z. Luo. Computation and Reasoning: A Type Theory for Computer Science. Oxford University Press, 1994.
[Luo03] Z. Luo. PAL+: a lambda-free logical framework. Journal of Functional Programming, 13(2):317-338, 2003.
[ML84] P. Martin-Löf. Intuitionistic Type Theory. Bibliopolis, 1984.
[MN94] L. Magnusson and B. Nordström. The ALF proof editor and its proof engine. LNCS 806, 1994.
[NPS90] B. Nordström, K. Petersson, and J. Smith. Programming in Martin-Löf's Type Theory: An Introduction. Oxford University Press, 1990.
[NS78] R. Needham and M. Schroeder. Using encryption for authentication in large networks of computers. Comm. of the ACM, 21(12):993-999, 1978.
[Pau94] L. Paulson. Isabelle: a generic theorem prover. LNCS, 828, 1994.
[Pau98] L. Paulson. The inductive approach to verifying cryptographic protocols. Journal of Computer Security, 6:85-128, 1998.
[Pau99] L. Paulson. Proving security protocols correct. LICS, 1999.
[Sim99] S. Simpson. Subsystems of Second-Order Arithmetic. Springer-Verlag, 1999.
[Smi88] J. Smith. The independence of Peano's fourth axiom from Martin-Löf's type theory without universes. Journal of Symbolic Logic, 53(3), 1988.
[Wey18] H. Weyl. The Continuum: a critical examination of the foundation of analysis. Dover Publ. 1994 (English translation of Das Kontinuum, 1918).