6 Foreign Literature: Original Text and Translation
Graduation Thesis (Design): Foreign Literature Translation and Original Text
Financial Systems, Financing Constraints and Investment: Empirical Evidence from OECD Countries

R. Semenov, Department of Economics, University of Nijmegen, Nijmegen (the Netherlands)

This paper examines the effect of cash flow on firm investment in eleven OECD countries. We find that the sensitivity of investment to internally available funds differs significantly across countries, and that it is lower in countries with notably close bank-firm relationships than in countries with arm's-length bank-firm relationships. We also find no relationship between financing constraints and aggregate indicators of financial development. Our findings are consistent with the view that information and incentive problems in capital markets have an important effect on firm investment, and that close bank-firm relationships reduce these problems and thus improve firms' access to external finance.
1. Introduction

Firms in different countries operate under markedly different financial systems. Differences in the level of financial development (for example, credit relative to GDP and stock market capitalization relative to GDP), in the patterns of owner-manager and firm-creditor relationships, and in the level of activity in the market for corporate control are well documented. In a perfect capital market, a firm with positive net present value investment opportunities would always obtain funding. However, economic theory suggests that market frictions such as information asymmetries and incentive problems make external capital more expensive, and that firms with profitable investment opportunities cannot always obtain the capital they need. This implies that financing factors, such as the amount of internally generated funds and the availability of new debt and equity, help determine firms' investment decisions. There is by now a large body of empirical work examining the effect of the availability of external funds on investment decisions (see, for example, Fazzari (1998), Hoshi (1991), Chapman (1996), Samuel (1998)). Most of these studies find that financial variables such as cash flow help explain firms' investment levels.
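The investment-cash flow sensitivity referred to here is typically estimated with a regression of roughly the following form; this specification is supplied for clarity and is a standard one in this literature, not an equation quoted from the paper:

\[
\left(\frac{I}{K}\right)_{it} = \alpha_i + \beta \left(\frac{CF}{K}\right)_{it} + \gamma\, Q_{it} + \varepsilon_{it}
\]

where I is firm i's investment in year t, K its capital stock, CF its cash flow, and Q a control for investment opportunities; a larger estimated β is read as evidence of tighter financing constraints.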
These results have been interpreted as showing that firms' investment is constrained by the availability of external funds. Many models emphasize that well-functioning financial intermediaries and financial markets help overcome information asymmetries and transaction costs, channel savings into long-term, high-return projects, and improve the efficiency of resource allocation (see Levine (1997) for a survey). We therefore expect firms in countries with more developed financial systems to find it easier to obtain external finance. Several authors have pointed out that established relationships between firms and financial intermediaries can further mitigate financial market frictions.
Foreign Literature and Translation
Foreign Literature: Original Text and Translation

Original Text

DATABASE

A database may be defined as a collection of interrelated data stored together with as little redundancy as possible to serve one or more applications in an optimal fashion. The data are stored so that they are independent of the programs which use them. A common and controlled approach is used in adding new data and in modifying and retrieving existing data within the database. One system is said to contain a collection of databases if they are entirely separate in structure. A database may be designed for batch processing, real-time processing, or in-line processing. A database system involves application programs, a DBMS, and a database.

THE INTRODUCTION TO DATABASE MANAGEMENT SYSTEMS

The term database is often used to describe a collection of related files that is organized into an integrated structure that provides different people varied access to the same data. In many cases this resource is located in different files in different departments throughout the organization, often known only to the individuals who work with their specific portion of the total information. In these cases, the potential value of the information goes unrealized because a person in another department who may need it does not know about it or it cannot be accessed efficiently. In an attempt to organize their information resources and provide for timely and efficient access, many companies have implemented databases.

A database is a collection of related data. By data, we mean known facts that can be recorded and that have implicit meaning. For example, consider the names, telephone numbers, and addresses of all the people you know. You may have recorded this data in an indexed address book, or you may have stored it on a diskette using a personal computer and software such as DBASE III or Lotus 1-2-3. This is a collection of related data with an implicit meaning and hence is a database.

The above definition of database is quite general; the common use of the term is usually more restricted. A database has the following implicit properties:

● A database is a logically coherent collection of data with some inherent meaning. A random assortment of data cannot be referred to as a database.

● A database is designed, built, and populated with data for a specific purpose. It has an intended group of users and some preconceived applications in which these users are interested.

● A database represents some aspect of the real world, sometimes called the miniworld. Changes to the miniworld are reflected in the database.

In other words, a database has some source from which data are derived, some degree of interaction with events in the real world, and an audience that is actively interested in its contents.

A database management system (DBMS) is composed of three major parts: (1) a storage subsystem that stores and retrieves data in files; (2) a modeling and manipulation subsystem that provides the means with which to organize the data and to add, delete, maintain, and update the data; and (3) an interface between the DBMS and its users.
Several major trends are emerging that enhance the value and usefulness of database management systems:

● Managers who require more up-to-date information to make effective decisions.

● Customers who demand increasingly sophisticated information services and more current information about the status of their orders, invoices, and accounts.

● Users who find that they can develop custom applications with database systems in a fraction of the time it takes to use traditional programming languages.

● Organizations that discover information has a strategic value; they utilize their database systems to gain an edge over their competitors.

A DBMS can organize, process, and present selected data elements from the database. This capability enables decision makers to search, probe, and query database contents in order to extract answers to nonrecurring and unplanned questions that aren't available in regular reports. These questions might initially be vague and/or poorly defined, but people can "browse" through the database until they have the needed information. In short, the DBMS will "manage" the stored data items and assemble the needed items from the common database in response to the queries of those who aren't programmers. In a file-oriented system, users needing special information may communicate their needs to a programmer, who, when time permits, will write one or more programs to extract the data and prepare the information. The availability of a DBMS, however, offers users a much faster alternative communications path.

DATABASE QUERY

If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases. However, it does not automatically leave an audit trail of actions and does not provide the kinds of controls necessary in a multi-user organization. These controls are only available when a set of application programs is customized for each data entry and updating function.

Software for personal computers that performs some of the DBMS functions has been very popular. Personal computers were intended for use by individuals for personal information storage and processing. Small enterprises and professionals such as doctors, architects, engineers, and lawyers have also used these machines extensively. By the nature of the intended usage, database systems on these machines are exempt from several of the requirements of full-fledged database systems. Since data sharing is not intended, and concurrent operations even less so, the software can be less complex. Security and integrity maintenance are de-emphasized or absent. As data volumes will be small, performance efficiency is also less important. In fact, the only aspect of a database system that remains important is data independence. Data independence, as stated earlier, means that application programs and user queries need not recognize the physical organization of data on secondary storage. The importance of this aspect, particularly for the personal computer user, is that it greatly simplifies database usage.
The user can store, access and manipulate data at a high level (close to the application) and be totally shielded from the low-level (close to the machine) details of data organization.

DBMS STRUCTURING TECHNIQUES

Spatial data management has been an active area of research in the database field for two decades, with much of the research focused on developing data structures for storing and indexing spatial data. However, no commercial database system provides facilities for directly defining and storing spatial data, and for formulating queries based on search conditions on spatial data.

There are two components to temporal data management: history data management and version management. Both have been subjects of research for over a decade. The troublesome aspect of temporal data management is that the boundary between applications and database systems has not been clearly drawn. Specifically, it is not clear how much of the typical semantics and facilities of temporal data management can and should be directly incorporated in a database system, and how much should be left to applications and users. In this section, we will provide a list of short-term research issues that should be examined to shed light on this fundamental question.

The focus of research into history data management has been on defining the semantics of time and time intervals, and on issues related to understanding the semantics of queries and updates against history data stored in an attribute of a record. Typically, in the context of relational databases, a temporal attribute is defined to hold a sequence of history data for the attribute. A history datum consists of a data item and a time interval for which the data item is valid. A query may then be issued to retrieve history data for a specified time interval for the temporal attribute. The mechanism for supporting temporal attributes is similar to that for supporting set-valued attributes in a database system such as UniSQL.

In the absence of support for temporal attributes, application developers who need to model and store history data have simply simulated temporal attributes by creating an attribute for the time interval, along with the "temporal" attribute. This of course may result in duplication of records in a table, and in more complicated search predicates in queries. One necessary topic of research in history data management is to quantitatively establish the performance (and even productivity) differences between using a database system that directly supports temporal attributes and using a conventional database system that supports neither set-valued nor temporal attributes.

Data security, integrity, and independence

Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or to subsets of it, called subschemas. For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access to only work history and medical data.

Data integrity refers to the accuracy, correctness, or validity of the data in the database. In a database system, data integrity means safeguarding the data against invalid alteration or destruction. In large on-line database systems, data integrity becomes a more severe problem and two additional complications arise. The first has to do with many users accessing the database concurrently.
For example, if two travel agents book the same seat on the same flight, the first agent's booking will be lost. In such cases the technique of locking the record or field provides the means for preventing one user from accessing a record while another user is updating the same record.

The second complication relates to hardware, software or human error during the course of processing, and involves the database transaction, which is a group of database modifications treated as a single unit. For example, an agent booking an airline reservation involves several database updates (i.e., adding the passenger's name and address and updating the seats-available field), which comprise a single transaction. The database transaction is not considered to be completed until all updates have been completed; otherwise, none of the updates will be allowed to take place.

An important point about database systems is that the database should exist independently of any of the specific applications. Traditional data processing applications are data dependent. When a DBMS is used, the detailed knowledge of the physical organization of the data does not have to be built into every application program. The application program asks the DBMS for data by field name; for example, a coded representation of "give me customer name and balance due" would be sent to the DBMS. Without a DBMS the programmer must reserve space for the full structure of the record in the program, and any change in data structure requires changes in all the application programs.

Data Base Management System (DBMS)

The system software package that handles the difficult tasks associated with creating, accessing and maintaining database records is called a database management system (DBMS). A DBMS will usually be handling multiple data calls concurrently. It must organize its system buffers so that different data operations can be in process together. It provides a data definition language to specify the conceptual schema and, most likely, some of the details regarding the implementation of the conceptual schema by the physical schema. The data definition language is a high-level language, enabling one to describe the conceptual schema in terms of a "data model". At the present time, there are four underlying structures for database management systems:

List structures.
Relational structures.
Hierarchical (tree) structures.
Network structures.

Management Information System (MIS)

An MIS can be defined as a network of computer-based data processing procedures developed in an organization and integrated as necessary with manual and other procedures for the purpose of providing timely and effective information to support decision making and other necessary management functions.

One of the most difficult tasks of the MIS designer is to develop the information flow needed to support decision making. Generally speaking, much of the information needed by managers who occupy different levels and have different responsibilities is obtained from a collection of existing information systems (or subsystems).

Structured Query Language (SQL)

SQL is a database processing language endorsed by the American National Standards Institute. It is rapidly becoming the standard query language for accessing data on relational databases. With its simple, powerful syntax, SQL represents great progress in database access for all levels of management and computing professionals.

SQL falls into two forms: interactive SQL and embedded SQL.
Embedded SQL usage is close to traditional programming in third generation languages. It is the interactive use of SQL that makes it most applicable for the rapid answering of ad hoc queries: with an interactive SQL query you just type in a few lines of SQL and you get the database response immediately on the screen.

Translation

DATABASE. A database may be defined as a collection of interrelated data stored together.
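As a minimal illustration of the interactive SQL usage described above, the following sketch uses Python's built-in sqlite3 module; the table and column names are invented for the example and are not taken from the text.

```python
import sqlite3

# Create an in-memory database and a sample table (names are illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (name TEXT, balance_due REAL)")
conn.executemany(
    "INSERT INTO customer VALUES (?, ?)",
    [("Smith", 120.50), ("Jones", 0.0), ("Brown", 75.25)],
)
conn.commit()

# The ad hoc request "give me customer name and balance due" from the text,
# expressed as a single interactive SQL statement.
for name, balance in conn.execute(
    "SELECT name, balance_due FROM customer WHERE balance_due > 0"
):
    print(name, balance)

conn.close()
```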
Graduation Thesis: English References and Translation
Inventory Management

Inventory Control

Many people interpret so-called "inventory control" as "warehouse management", which is in fact a serious distortion. The traditional, narrow view focuses mainly on warehouse-level control of materials: inventory counting, data processing, storage and distribution, and keeping the stored physical inventory in optimal condition through measures such as corrosion protection and temperature and humidity control. This is only one form of inventory control, and can be defined as physical inventory control. How, then, should inventory control be understood from a broader perspective? Inventory control should be tied to the company's financial and operational objectives, in particular operating cash flow: by optimizing the entire demand and supply chain management (DSCM) process, setting a reasonable ERP control strategy, and supporting it with appropriate information processing tools, the aim is to reduce inventory levels as far as possible and to lower the risks of inventory obsolescence and devaluation, while still ensuring timely delivery. In this sense, physical inventory control is just one means, or one necessary part, of achieving the financial goals of overall inventory control; from the perspective of organizational functions, physical inventory control is mainly the responsibility of warehouse management, whereas broad inventory control is demand and supply chain management, and is the responsibility of the whole company.

Why, even now, do many people understand inventory control only as physical inventory control? Two reasons cannot be ignored.

First, our enterprises do not attach importance to inventory control. Especially in businesses with relatively good profits, as long as there is money, few people consider the problem of inventory turnover. Inventory control is simply interpreted as warehouse management; only when money runs short does anyone look at the inventory problem, and the response is often simplistic: purchasing bought too much, or the warehouse department did a poor job.

Second, ERP is misleading. Simple invoicing software audaciously calls itself ERP, and companies assume that with their so-called ERP they can reduce inventory, as if inventory control could be achieved by a small software package. Even SAP and BAAN, the big players in the ERP world, define the warehouse management functionality inside their modules simply as "inventory management" or "inventory control". This leaves those who already did not quite understand inventory control even less sure what it is.

In fact, understood broadly, inventory control should include the following.

First, the fundamental purpose of inventory control. We know that for so-called world-class manufacturing, two key performance indicators (KPIs) are customer satisfaction and inventory turns, and inventory turns is in fact the fundamental objective of inventory control.

Second, the means of inventory control.
To increase inventory turns, relying solely on so-called physical inventory control is not enough. Inventory is the output of the whole demand and supply chain management process, and this large process includes not only warehouse management but also, more importantly: forecasting and order processing, production planning and control, materials planning and purchasing control, inventory planning and forecasting itself, as well as distribution and delivery strategies for finished products and raw materials, and even customs management processes. Running through the entire demand and supply chain management process is the management of information flow and capital flow. In other words, inventory itself cuts across every link of the demand and supply management process; to achieve the fundamental purpose of inventory control, all of these links must be controlled, rather than just managing the physical inventory at hand.

Third, the organizational structure and assessment for inventory control. Since inventory control is the output of the demand and supply chain management process, achieving its fundamental purpose requires an organizational structure compatible with that process. Even now we can see that many companies have only a purchasing department, with the warehouse reporting to it. This falls far short of what inventory control requires. From an analysis of the demand and supply chain management process, we know that purchasing and warehouse management are typical executive arms, while inventory control should focus on prevention; it is very difficult for the executive branch to "prevent inventory", for the simple reason that their assessment indicators are largely about ensuring supply (to production and to customers). How to analyze the actual situation, establish a reasonable demand and supply chain management process, and then set up a corresponding rational organizational structure is a question many of our enterprises need to explore.

The role of inventory control

Inventory management is an important part of business management. In production and operation activities, inventory management must ensure the plant's demand for raw materials and spare parts, and it directly affects purchasing and sales activities and their share of the business. Making corporate inventory liquid, accelerating cash flow, and minimizing tied-up funds under the premise of secure supply directly affect operational efficiency.
Under the premise of ensuring production and operation needs, keep inventories at a reasonable level; control inventory dynamically, placing orders at the right time and in the right quantity to avoid overstocking or stockouts; reduce the floor space occupied by inventory and lower the total cost of inventory; and control the funds tied up in stock to accelerate cash flow.

Problems arising from excessive inventory: increased warehouse space and inventory storage costs, thereby increasing product costs; a large amount of working capital tied up, resulting in sluggish capital, which not only increases the burden of interest payments but also affects the time value of money and opportunity income; physical and intangible losses of finished products and raw materials; large amounts of enterprise resources lying idle, affecting their rational allocation and optimization; and the masking of contradictions and problems throughout production and operations, which is not conducive to improving the level of management.

Problems arising from too little inventory: a decline in service levels, harming marketing profits and corporate reputation; inadequate supply of raw materials or other materials to the production system, affecting the normal production process; shortened lead times and an increased number of orders, raising ordering (production) costs; and disruption of the balance of production and the assembly of complete sets.

Notes

Inventory management should particularly consider the following two questions:

First, according to the sales plan and the planned circulation of goods in the market, where should goods be stored, and in what quantity?

Second, starting from service levels and economic benefits, how should inventory levels and replenishment be determined?

These two problems relate to the functions of inventory in the logistics process. In general, the functions of inventory are:

(1) To prevent interruption: to shorten the time from receiving an order to delivering the goods, ensure service quality, and at the same time prevent stockouts.

(2) To keep inventory at an appropriate level, saving inventory costs.

(3) To reduce logistics costs: replenishing goods at appropriate intervals matched to reasonable demand, in order to reduce logistics costs and eliminate or mitigate sales fluctuations.

(4) To keep production planning smooth, eliminating or mitigating sales fluctuations.

(5) A display function.

(6) A reserve function: buying in bulk when prices fall to reduce losses, and responding to disasters and other contingencies.

On the question of warehouses (inventory), we must consider their number and location. A distribution center should, as far as possible, be set up at a place suited to customer needs; a central store that mainly replenishes distribution centers has no particular location requirements. Once the stocking base is established, we must also consider which commodities are stored at each location.

Translation: Inventory Management — Inventory Control. When people speak of so-called "inventory control", many understand it as "warehouse management", which is in fact a serious distortion.
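The "order at the right time and in the right quantity" logic above is often implemented as a reorder-point policy combined with an economic order quantity. The following is a minimal sketch assuming normally distributed lead-time demand; the function names and numbers are illustrative, not taken from the text.

```python
import math

def reorder_point(daily_demand, lead_time_days, demand_std, z=1.65):
    """Reorder point = expected lead-time demand + safety stock.

    z=1.65 corresponds to roughly a 95% service level under a
    normal approximation of lead-time demand (an assumption).
    """
    expected = daily_demand * lead_time_days
    safety_stock = z * demand_std * math.sqrt(lead_time_days)
    return expected + safety_stock

def eoq(annual_demand, order_cost, holding_cost):
    """Classic economic order quantity: balances ordering vs holding cost."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

# Illustrative values only.
print(reorder_point(daily_demand=40, lead_time_days=7, demand_std=12))
print(eoq(annual_demand=14600, order_cost=50.0, holding_cost=2.0))
```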
Foreign Reference Literature: Translation and Original Text
Guangdong University of Technology, Huali College — Undergraduate Graduation Design (Thesis), Foreign Reference Literature: Translation and Original Text. Department: Urban Construction. Major: Civil Engineering. Year: 2011. Class: Civil Engineering Class 9 of 2011. Student ID: 23031109000. Student: Liu Lin. Supervisor: Lu Jifu. May 2015.

Contents: 1. Project Cost Management and Control; 2. Project Budget Monitor and Control; 3. The Contractor's Role in Controlling Construction Cost During the Construction Phase; 4. The Contractor's Role in Building Cost Reduction After Design.

1. Translation of the Foreign Literature

(1) Project Cost Management and Control

As market competition grows ever fiercer, cost control becomes increasingly important in every project.
This paper discusses how a project manager can successfully control the project's budgeted cost during the construction phase. It discusses many methods, and it shows that to succeed the project manager must pay close attention to these methods.
1. Introduction. Surveys show that most projects run into the problem of exceeding their budgets … successfully control budgeted cost. 2. The concept and purpose of project control and monitoring. Erel and Raz (2000) point out that the project control cycle comprises measuring … causes, and deciding on corrective actions and taking them.
The purpose of monitoring is to support corrective action … within the target range.
3. Establishing an effective control system. To achieve the budgeted cost target, the project manager needs to establish a … being monitored and controlled is very helpful. Project success is closely tied to good communication … (Diallo and Thuillier, 2005). 4. Monitoring and controlling costs. 4.1 Prioritizing what to monitor. During the construction phase, many construction activities are based on the original plan … used up. Fourth, the project manager should monitor high-risk activities, which are most … important (Cotterell and Hughes, 1995).
4.2 Methods of cost control. The main costs of a project include staff costs, material costs, and the cost of schedule delays. To control these costs, the project manager should first establish a cost control system: a) assign responsible staff for managing and analyzing financial data; b) ensure that all … are allocated properly according to the project structure … its changes — accurately record on the cost baseline all …
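Cost-versus-budget monitoring of the kind described in this (truncated) section is commonly formalized as earned value analysis. The sketch below is offered under that assumption; earned value is not named in the surviving text, and all variable names and figures are illustrative.

```python
def earned_value_indices(bcws, bcwp, acwp):
    """Compute schedule and cost variances and performance indices.

    bcws: budgeted cost of work scheduled (planned value)
    bcwp: budgeted cost of work performed (earned value)
    acwp: actual cost of work performed (actual cost)
    """
    return {
        "schedule_variance": bcwp - bcws,   # negative: behind schedule
        "cost_variance": bcwp - acwp,       # negative: over budget
        "spi": bcwp / bcws,                 # schedule performance index
        "cpi": bcwp / acwp,                 # cost performance index
    }

# Illustrative monthly figures only.
print(earned_value_indices(bcws=100_000, bcwp=90_000, acwp=110_000))
```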
Foreign Literature and Translation
(English References and Translation) June 2016. Undergraduate Thesis. Title: STATISTICAL SAMPLING METHOD, USED IN THE AUDIT. Student: Wang Xueqin. School: School of Management. Department: Accounting. Major: Financial Management. Class: Financial Management 12-2. School code: 10128. Student ID: 201210707016.

Statistics and Audit

Romanian Statistical Review nr. 5 / 2010

STATISTICAL SAMPLING METHOD, USED IN THE AUDIT — views, recommendations, findings

PhD Candidate Gabriela-Felicia UNGUREANU

Abstract

The rapid increase in the size of U.S. companies from the early twentieth century created the need for audit procedures based on the selection of a part of the total audited population, in order to obtain reliable audit evidence characterizing an entire population consisting of account balances or classes of transactions. Sampling is not used only in auditing: it is used in sample surveys, market analysis and medical research whenever someone wants to reach a conclusion about a large body of data by examining only a part of it. The difference lies in the "population" from which the sample is selected, i.e. the set of data about which a conclusion is to be drawn. Audit sampling applies only to certain types of audit procedures.

Key words: sampling, sample risk, population, sampling unit, tests of controls, substantive procedures.

Statistical sampling

The statistical sampling committee of the American Institute of Certified Public Accountants (AICPA) issued in 1962 a special report, titled "Statistical sampling and independent auditors", which allowed the use of the statistical sampling method in accordance with Generally Accepted Auditing Standards (GAAS). During 1962-1974, the AICPA published a series of papers on statistical sampling, "Auditor's Approach to Statistical Sampling", for use in the continuing professional education of accountants. In 1981, the AICPA issued the professional standard "Audit Sampling", which provides general guidelines for both sampling methods, statistical and non-statistical.

Earlier audits included checks of all transactions in the period covered by the audited financial statements. At that time, the literature did not give particular attention to this subject. Only in 1971 did an audit procedures program printed in the "Federal Reserve Bulletin" include several references to sampling, such as selecting a "few items" of inventory. The program was developed by a special committee of what later became the AICPA, the American Institute of Certified Public Accountants.

In the first decades of the last century, auditors often applied sampling, but sample size was not related to the effectiveness of the entity's internal control. In 1955, the American Institute of Accountants published a case study on extending audit sampling, summarizing an audit program developed by certified public accountants, to show why sampling is necessary in extending the audit. The study was important because it is one of the leading works on sampling to recognize a relationship of dependency between detail testing and the reliability of internal control.

In 1964, the AICPA's Auditing Standards Board issued a report entitled "The relationship between statistical sampling and Generally Accepted Auditing Standards (GAAS)", which illustrated the relationship between accuracy and reliability in sampling and the provisions of GAAS.

In 1978, the AICPA published the work of Donald M.
Roberts, "Statistical Auditing", which explains the underlying theory of statistical sampling in auditing.

In 1981, the AICPA issued the professional standard named "Audit Sampling", which provides guidelines for both sampling methods, statistical and non-statistical.

An auditor does not rely solely on the results of a single procedure to reach a conclusion on an account balance, class of transactions or the operational effectiveness of controls. Rather, the audit findings are based on combined evidence from several sources, as a consequence of a number of different audit procedures. When an auditor selects a sample from a population, his objective is to obtain a representative sample, i.e. a sample whose characteristics are identical with the population's characteristics; this means that the selected items are identical with those remaining outside the sample.

In practice, auditors do not know for sure whether a sample is representative, even after completing the test, but they "may increase the probability that a sample is representative by accuracy of activities made related to design, sample selection and evaluation" [1]. Lack of representativeness of the sample results may be caused by observation errors and sampling errors. The risks of producing these errors can be controlled.

Observation error (risk of observation) appears when the audit test does not identify existing deviations in the sample, when an inadequate audit technique is used, or through negligence of the auditor.

Sampling error (sampling risk) is an inherent characteristic of the survey, resulting from the fact that only a fraction of the total population is tested. Sampling error occurs because it is possible for the auditor to reach a conclusion, based on a sample, that differs from the conclusion that would be reached if the entire population were subjected to identical audit procedures. Sampling risk can be reduced by adjusting the sample size, depending on the size and characteristics of the population, and by using an appropriate method of selection. Increasing sample size will reduce sampling risk; a sample comprising the whole population presents a null sampling risk.

Audit sampling is a method of testing used to gather sufficient and appropriate audit evidence for the purposes of the audit. The auditor may decide to apply audit sampling to an account balance or class of transactions. Audit sampling applies audit procedures to less than 100% of the items within an account balance or class of transactions, such that every sampling unit is able to be selected. The auditor is required to determine appropriate ways of selecting items for testing. Audit sampling can be used in a statistical and in a non-statistical approach.

Statistical sampling is a method in which the sample is constructed so that each unit of the total population has an equal probability of being included in the sample; the method of sample selection is random, which allows the results to be assessed on the basis of probability theory and sampling risk to be quantified. Choosing the population appropriately means that the auditor's findings can be extended to the entire population.

Non-statistical sampling is a method of sampling in which the auditor uses professional judgment to select the elements of a sample. Since the purpose of sampling is to draw conclusions about the entire population, the auditor should select a representative sample by choosing sample units which have characteristics typical of the population.
Results cannot be extrapolated to the entire population unless the selected sample is representative.

Audit tests can be applied to all the elements of the population where the population is small, or to a non-representative sample where the auditor knows the particularities of the population to be tested and is able to identify the small number of items of interest to the audit. If the sample does not have characteristics similar to those of the entire population, the errors found in the tested sample cannot be extrapolated.

The decision between a statistical and a non-statistical approach depends on the auditor's professional judgment in seeking sufficient appropriate audit evidence on which to base the findings for the audit opinion.

Statistical sampling methods rely on random selection, under which any possible combination of elements of the population is equally likely to enter the sample. Simple random sampling is used when the population has not been stratified for the audit. Random selection involves using random numbers generated by a computer. After selecting a random starting point, the auditor finds the first random number that falls within the range of the test document numbers. Only when the approach has the characteristics of statistical sampling are statistical assessments of sampling risk valid.

In another variant of probability sampling, namely systematic selection (also called mechanical random selection), the elements naturally succeed one another in space or time; the auditor has a preliminary listing of the population and has made the decision on sample size. "The auditor calculates a counting step and selects the sample elements based on the step size. The counting step is determined by dividing the volume of the population by the number of sample units desired. The advantage of systematic selection is its usability: in most cases, a systematic sample can be extracted quickly, and the method automatically arranges the numbers in successive series." [2]

Selection by probability proportional to size is a method which emphasizes those population units with higher recorded values. The sample is constituted so that the probability of selecting any given element of the population is proportional to the recorded value of the item.

Stratified selection is likewise a method that emphasizes units with higher values, achieved through stratification of the population into subpopulations. Stratification gives the auditor a complete picture when the population (the data table to be analyzed) is not homogeneous. In this case, the auditor stratifies the population by dividing it into distinct subpopulations which have common, pre-defined characteristics. "The objective of stratification is to reduce the variability of elements in each layer and therefore allow a reduction in sample size without a proportionate increase in the risk of sampling." [3] If the stratification of the population is done properly, the total sample size across the layers will be less than the sample size that would be obtained, at the same level of sampling risk, from a sample extracted from the entire population. Audit results applied to a layer can be projected only onto the items that are part of that layer.

I considered useful some views on non-statistical sampling methods, which imply guided selection of the sample, each element being selected according to certain criteria determined by the auditor.
The method is subjective, because the auditor intentionally selects items containing the features he has set.

Selection in series is done by selecting several series of successive elements. Sampling in series is recommended only if a reasonable number of series is used; using just a few series carries the risk that the sample is not representative. This type of sampling can be used in addition to other samples where there is a high probability of occurrence of errors. In arbitrary selection, no items are preferred by the auditor, regardless of size, source or characteristics. It is not a recommended method, because it is not objective.

Such sampling is based on the auditor's professional judgment, by which he may decide which items can or cannot be part of the sample. Because it is not a statistical method, the standard error cannot be calculated. Although the sample structure can be constructed to reproduce the population, there is no guarantee that the sample is representative. If a feature that would be relevant in a particular situation is omitted, the sample is not representative.

Sampling applies when the auditor plans to draw conclusions about a population based on a selection. The auditor considers the audit program and determines the audit procedures to which random investigation may apply. Sampling is used by auditors in testing internal control systems and in substantive testing of operations. The general objectives of tests of the control system and of substantive tests of operations are to verify the application of pre-defined control procedures and to determine whether operations contain material errors.

Tests of controls are intended to provide evidence of the operational effectiveness and design of controls, or of the operation of a control system, to prevent or detect material misstatements in the financial statements. Tests of controls are necessary if the auditor plans to assess control risk for management's assertions.

Controls are generally expected to be applied similarly to all transactions covered by the records, regardless of transaction value. Therefore, if the auditor uses sampling, it is not advisable to select only high-value transactions; samples must be chosen so as to be representative of the population.

An auditor must be aware that an entity may change a particular control during the course of the audit. If the control is replaced by another which is designed to achieve the same specific objective, the auditor must decide whether to design a sample of all transactions made during the period or just a sample of the transactions subject to the new control. The appropriate decision depends on the overall objective of the audit test.

Verification of an entity's internal control system is intended to provide guidance on the identification of relevant controls and on the design of evaluation tests of controls.

Other tests:

In testing the internal control system and in testing operations, the audit sample is used to estimate the proportion of elements of a population containing a characteristic or attribute under analysis. This proportion is called the frequency of occurrence or percentage of deviation, and is equal to the ratio of the number of elements containing the specific attribute to the total number of population elements. The weight of deviations in a sample is determined in order to estimate the proportion of deviations in the total population.

Risk associated with sampling refers to the possibility that the selected sample is not representative of the population tested.
In other words, the sample itself may contain material errors or deviations from the norm, so that a conclusion issued on the basis of a sample may differ from the conclusion that would be reached if the entire population were subject to audit.

Types of risk associated with sampling: concluding that controls are more effective than they actually are, or that there are no significant errors when in fact they exist — which leads to an inappropriate audit opinion; or concluding that controls are less effective than they actually are, or that there are significant errors when in fact there are not — which calls for additional work to establish that the initial conclusions were incorrect.

Attributes testing: the auditor should define the characteristics to be tested and the conditions that constitute a deviation. Attributes testing is performed when objective statistical projections on various characteristics of the population are required. The auditor may decide to select items from a population based on his knowledge of the entity and its control environment, on risk analysis, and on the specific characteristics of the population to be tested.

The population is the mass of data about which the auditor wishes to generalize the findings obtained on a sample. The population will be defined in compliance with the audit objectives and must be complete and consistent, because the results of the sample can be projected only onto the population from which the sample was selected.

Sampling unit: a sampling unit may be, for example, an invoice, an entry or a line item. Each sampling unit is an element of the population. The auditor will define the sampling unit on the basis of its compliance with the objectives of the audit tests.

Sample size: in determining the sample size, it should be considered whether sampling risk is reduced to an acceptably low level. Sample size is affected by the sampling risk that the auditor is willing to accept: the lower the risk the auditor is willing to accept, the larger the sample will need to be.

Error: for tests of details, the auditor should project the monetary errors found in the sample onto the population, and should consider the effect of the projected error on the specific audit objective and on other audit areas. The auditor projects the total error onto the population to get a broad perspective on the size of the error, comparing it with the tolerable error.

For tests of details, the tolerable error is the tolerable misstatement, and will be a value less than or equal to the materiality used by the auditor for the individual classes of transactions or balances audited. If a class of transactions or account balances has been divided into layers, the error is projected separately for each layer. Projected errors and anomalous errors for each stratum are then combined when considering the possible effect on the total class of transactions or account balance.

Evaluation of sample results: the auditor should evaluate the sample results to determine whether the assessment of the relevant characteristics of the population is confirmed or needs to be revised.

When testing controls, an unexpectedly high sample error rate may lead to an increase in the assessed risk of material misstatement, unless additional audit evidence supporting the initial assessment is obtained. For tests of controls, an error is a deviation from the prescribed performance of control procedures.
The auditor should obtain evidence about the nature and extent of any significant changes in the internal control system, including changes in staffing. If significant changes occur, the auditor should review his understanding of the internal control environment and consider testing the changed controls. Alternatively, the auditor may consider performing substantive analytical procedures or tests of details covering the audit period.

In some cases, the auditor might not need to wait until the end of the audit to form a conclusion about the operational effectiveness of controls in support of the control risk assessment. In that case, the auditor might decide to modify the planned substantive tests accordingly.

In tests of details, an unexpectedly large amount of error in a sample may cause the auditor to believe that a class of transactions or account balance is materially misstated, in the absence of additional audit evidence showing that no material misstatement exists. When the best estimate of the error is very close to the tolerable error, the auditor recognizes the risk that another sample would yield a different best estimate, which could exceed the tolerable error.

Conclusions

Analysis of the sampling methods leads to the conclusion that all methods have advantages and disadvantages. What matters is that the auditor chooses the sampling method on the basis of professional judgment, taking into account the cost/benefit ratio. Thus, if a sampling method proves costly, the auditor should seek the most efficient method in view of the main and specific objectives of the audit.

The auditor should evaluate the sample results to determine whether the preliminary assessment of the relevant characteristics of the population is to be confirmed or revised. If the evaluation of sample results indicates that the assessment of the relevant characteristics of the population needs to be revised, the auditor may require management to investigate the identified errors and the likelihood of future errors and to make the necessary adjustments, or change the nature, timing and extent of further procedures to take the effect on the audit report into account.

Selective bibliography:

[1] Law no. 672/2002, updated, on public internal audit.
[2] Arens, A. and Loebbecke, J., "Audit: An Integrated Approach", 8th edition, Arc Publishing House.
[3] ISA 530 — Financial Audit 2008 — International Standards on Auditing, IRECSON Publishing House, 2009.
— Dictionary of Macroeconomics, Ed. C.H. Beck, Bucharest, 2008.

Translation — Abstract: The rapid increase in the size of U.S. companies from the early twentieth century created the need for audit procedures based on selecting part of the total audited population, in order to obtain reliable audit evidence characterizing the entire population consisting of account balances or classes of transactions.
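To make the selection methods described above concrete, here is a small sketch of simple random, systematic, and stratified selection in Python; the invoice population, threshold, and sample sizes are invented for illustration.

```python
import random

population = [f"invoice-{i:04d}" for i in range(1, 501)]  # illustrative population

# Simple random selection: every combination of n items is equally likely.
simple = random.sample(population, k=25)

# Systematic selection: counting step = population size / desired sample size.
step = len(population) // 25
start = random.randrange(step)                 # random starting point
systematic = population[start::step][:25]

# Stratified selection: divide into subpopulations and sample each stratum.
low_value = population[:400]                   # e.g. invoices under some threshold
high_value = population[400:]                  # e.g. high-value invoices
stratified = random.sample(low_value, 15) + random.sample(high_value, 10)

print(len(simple), len(systematic), len(stratified))
```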
Foreign Literature Translation (Image Version)
Undergraduate Thesis — Foreign Reference Literature: Translation and Original Text. School of Economics and Trade, Major: Economics (Trade), Class 1 of 2007, Student ID 3207004154, Student: Ouyang Qian, Supervisor: Tong Xuehui. June 3, 2010.

Contents: 1. Translation — China's Banking Reform and Profitability (Parts 1, 2 and 4); 2. Original — CHINA'S BANKING REFORM AND PROFITABILITY (Parts 1, 2 and 4).

1. Overview

The World Bank (1997) once claimed that China's financial industry was the soft spot of its economy.
When the sustainability of a country's economic growth is in jeopardy, financial sector reform has been regarded as necessary to raise the efficiency of capital use and to rebalance the economy toward consumption-led growth (Lardy, 1998; Prasad, 2007). Indeed, not long ago China's state-owned banks were regarded as "technically insolvent", their survival dependent on abundant state liquidity. Since banking reform got under way, however, strong profitability has recently returned to the state-owned commercial banks. But because China's state-owned banks embarked on the road of reform only a short time ago, it may be premature to declare complete victory for the banking reform. Moreover, their solid financial performance, while strong, may not be sustainable. With economic growth already softening under the pull of the 2008 global recession, the banks can expect to navigate a more difficult economic environment than before. The purpose of this paper is not to assess the impact of the banking reform on bank performance, a question better addressed after a full credit cycle. Instead, our aim is to review the progress of the reform and the banks' reform strategy, and to analyze their strong recent post-reform financial performance, which cannot be entirely separated from the reform efforts undertaken so far.
The rest of this paper is in three parts. In Section 2 we review the reform strategy for China's large state-owned banks and its implementation, the main objective of China's banking reform. Section 3 analyzes the 2007 financial performance of the four large state-owned commercial banks whose shares float on the market: the Industrial and Commercial Bank of China (ICBC), China Construction Bank (CCB), Bank of China (BOC) and the Bank of Communications (BoCom). The notable exception is the Agricultural Bank of China, which remains at a late stage of the restructuring process ahead of listing in due course. Section 4 concludes with an assessment of bank performance.
Foreign Reference Literature: Translation and Original Text
Contents

1 Introduction — An introduction to NS2 is provided in this chapter. In particular, information on installing NS2 is given in Chapter 2. Chapter 3 introduces the directories and conventions of NS2. Chapter 4 introduces the main steps in an NS2 simulation. A simple simulation example is given in Chapter 5. Finally, a summary is given in Chapter 6.
2 Installation — The idea of the component approach is to obtain the above pieces and install them individually. This option saves downloading time and a great deal of memory space. However, it can be troublesome for beginners, and is therefore recommended only for experienced users.
Installing an all-in-one NS2 suite on Unix-based systems; installing an all-in-one NS2 suite on Windows-based systems. 3 Directories and conventions. 4 Running an NS2 simulation — NS2 program invocation; the main steps of an NS2 simulation. 5 A simulation example. 6 Summary.

1 Introduction
2 Installation
   Installing an All-In-One NS2 Suite on Unix-Based Systems
   Installing an All-In-One NS2 Suite on Windows-Based Systems
3 Directories and Convention
   Directories and Convention
   Convention
4 Running NS2 Simulation
   NS2 Program Invocation
   Main NS2 Simulation Steps
5 A Simulation Example
6 Summary

1 Introduction

The Network Simulator, version 2 (commonly called NS2), is an event-driven simulation tool that has proved useful in studying the dynamic nature of communication networks.
Simulation of wired and wireless network functions and protocols (e.g., routing algorithms, TCP, UDP) can be carried out using NS2. In general, NS2 provides users with a way of specifying such network protocols and simulating their corresponding behaviours.
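NS2 itself is scripted in OTcl rather than Python, so the following is not an NS2 script; it is only a minimal sketch of the event-driven principle described above, with all names and the 10 ms link delay invented for illustration.

```python
import heapq

def run(events):
    """Process (time, description, action) events in time order,
    letting actions schedule further events - the core loop of an
    event-driven simulator such as NS2."""
    heapq.heapify(events)
    while events:
        time, desc, action = heapq.heappop(events)
        print(f"t={time:.3f}s  {desc}")
        for new_event in action(time):
            heapq.heappush(events, new_event)

def send_packet(t):
    # A sent packet arrives after a fixed 10 ms link delay (illustrative).
    return [(t + 0.010, "packet arrives at receiver", lambda _t: [])]

run([(0.0, "node 0 sends packet", send_packet)])
```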
Electronic Circuits: Digital and Analog, Volume II — Foreign Literature Translation
1. Translated Text

Excerpted from C. A. Holt (U.S.), Electronic Circuits: Digital and Analog, Volume II.

1.1 The Basic Amplifier

To study amplifiers we begin by analyzing the circuit of Fig. 1.1, which contains an NPN transistor biased in the active region. Although the base width W is a function of the collector voltage, this secondary effect is neglected in order to keep the discussion as simple as possible; accordingly, I_ES and α_F are treated as constants.

Notation

Here, and throughout this book, standard notation is used for currents and voltages.
The base current is written i_B = I_B + i_b (Fig. 1.1). When v_i is zero, the circuit of Fig. 1.1 is said to be quiescent, i.e., in its resting state, and the quiescent base current is I_B; when v_i is nonzero, the difference between the total current i_B and its quiescent value is i_b. The symbol i_b denotes the incremental current, also called the signal component of i_B. Note that the customary reference directions of i_B, I_B, and i_b are all positive into the B terminal of the device. v_BE denotes the voltage drop from base B to emitter E; it is likewise written as the sum of the quiescent voltage V_BE and the incremental voltage v_be. In the circuit of Fig. 1.1, v_be is simply v_i. In summary, lowercase letters with uppercase subscripts denote total currents and voltages; uppercase letters with uppercase subscripts denote quiescent quantities; and lowercase letters with lowercase subscripts are used for incremental variables. Unless otherwise stated, current reference directions are positive into the device. Voltage reference directions are indicated by double subscripts, or by plus and minus signs as for V_o in Fig. 1.1; the voltages and currents at the Q point are all quiescent quantities.
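The notation convention just described can be restated compactly in equation form (this summary is added for clarity and follows directly from the text):

\[
i_B = I_B + i_b, \qquad v_{BE} = V_{BE} + v_{be}
\]

where the first term on each right-hand side is the quiescent (Q-point) value and the second is the incremental (signal) component.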
1.2 The Operational Amplifier

Besides the common-emitter, common-collector, and common-base circuits discussed in the preceding chapter, there is another especially important basic configuration: the differential amplifier. It has two signal-voltage input terminals and an output proportional to the difference of the input signals (see the equation below).
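In equation form (added for clarity; the gain symbol A_d is introduced here and does not appear in the original text):

\[
v_o = A_d \,(v_1 - v_2)
\]

where v_1 and v_2 are the two input signal voltages and A_d is the differential gain.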
Often a fraction of the output, taken from a voltage-divider network that provides negative feedback, serves as one of the input voltages; at other times one input terminal is simply grounded. In either case the differential amplifier becomes a single-ended amplifier with one input and one output. As we shall see, the differential amplifier can handle fairly large signals without excessive nonlinear distortion, and this large dynamic range is one of its many useful properties. Because the input impedance is moderate to high when the bias currents are small, the signal source is not loaded excessively. Operation at low frequencies, including direct current, is possible. The circuit configuration is particularly well suited to integrated-circuit fabrication, and consequently most linear integrated circuits contain one or more differential amplifier stages. Examples of such circuits are analog computer networks, monolithic voltage regulators, video amplifiers, analog comparators, and operational amplifiers.
Foreign Literature: Original Text and Translation (Template)
Beijing University of Chemical Technology, North College — Graduation Design (Thesis): Foreign Literature Original Text and Translation

Original Text

Introduction

The "jumping off" point for this paper is Reengineering the Corporation, by Michael Hammer and James Champy. The paper goes on to review the literature on BPR. It explores the principles and assumptions behind reengineering, looks for common factors behind its successes or failures, examines case studies, and presents alternatives to "classical" reengineering theory. The paper pays particular attention to the role of information technology in BPR. In conclusion, the paper offers some specific recommendations regarding reengineering.

Old Wine in New Bottles

The concept of reengineering traces its origins back to management theories developed as early as the nineteenth century. The purpose of reengineering is to "make all your processes the best-in-class." Frederick Taylor suggested in the 1880's that managers use process reengineering methods to discover the best processes for performing work, and that these processes be reengineered to optimize productivity. BPR echoes the classical belief that there is one best way to conduct tasks. In Taylor's time, technology did not allow large companies to design processes in a cross-functional or cross-departmental manner. Specialization was the state-of-the-art method to improve efficiency given the technology of the time.

(remainder omitted)
Traffic Safety: Foreign Literature Translation (Chinese and English)
Foreign Literature Translation (including English original and Chinese translation)

English Original

POSSIBILITIES AND LIMITATIONS OF ACCIDENT ANALYSIS

S. Oppe

Abstract

Accident statistics, especially those collected at a national level, are particularly useful for the description, monitoring and prognosis of accident developments, the detection of positive and negative safety developments, the definition of safety targets and the (product) evaluation of long-term and large-scale safety measures. The application of accident analysis is strongly limited for problem analysis, prospective and retrospective safety analysis of newly developed traffic systems or safety measures, as well as for (process) evaluation of special short-term and small-scale safety measures. There is an urgent need for the analysis of accidents in real time, in combination with background behavioural research. Automatic incident detection, combined with video recording of accidents, may soon result in financially acceptable research. This type of research may eventually lead to a better understanding of the concept of risk in traffic and to well-established theories.

Keywords: consequences; purposes; description; limitations; concerns; accident analysis; possibilities.

1. Introduction

This paper is primarily based on personal experience concerning traffic safety, safety research and the role of accident analysis in this research. These experiences resulted in rather philosophical opinions as well as more practical viewpoints on research methodology and statistical analysis. A number of these findings have already been published elsewhere.

From this lack of direct observation of accidents, a number of methodological problems arise, leading to continuous discussions about the interpretation of findings that cannot be tested directly. For a fruitful discussion of these methodological problems it is very informative to look at a real accident on video. It then turns out that most of the relevant information used to explain the accident is missing from the accident record. In-depth studies also cannot recollect all the data necessary to test hypotheses about the occurrence of the accident. For one particular car-to-car accident, recorded on video at an urban intersection in the Netherlands, between a car coming from a minor road and a car on the major road, the following questions could be asked: Why did the driver of the car coming from the minor road suddenly accelerate after coming almost to a stop, and hit the side of the car approaching from the left on the main road? Why was the approaching car not noticed? Was it because the driver was preoccupied with the two cars coming from the right and the gap before them that offered him the possibility to cross? Did he look left before, but was his view possibly blocked by the green van parked at the corner? Certainly the traffic situation was not complicated. At the moment of the accident there were no bicyclists or pedestrians present to distract his attention at the regularly overcrowded intersection. The parked green van disappeared within five minutes; the two other cars that may have been important left without a trace. It is hardly possible to observe traffic behaviour under the most relevant condition, that of an accident occurring, because accidents are very rare events given the large number of trips. Given the new video equipment and the recent developments in automatic incident and accident detection, it becomes more and more realistic to collect such data at not too high costs.
In addition to this type of data, which is most essential for a good understanding of the risk-increasing factors in traffic, it is also important to look at normal traffic behaviour as a reference base.

The question about the possibilities and limitations of accident analysis is not lightly answered. We cannot speak unambiguously about accident analysis. Accident analysis covers a whole range of activities, each originating from a different background and based on different sources of information: national data banks, additional information from other sources, specially collected accident data, behavioural background data, etc. To answer the question about the possibilities and limitations, we first have to look at the cycle of activities in the area of traffic safety. Some of these activities are mainly concerned with the safety management of the traffic system; others are primarily research activities.

The following steps should be distinguished:

- detection of new or remaining safety problems;
- description of the problem and its main characteristics;
- analysis of the problem, its causes and suggestions for improvement;
- selection and implementation of safety measures;
- evaluation of measures taken.

Although this cycle can be carried out by the same person or group of persons, the problem has a different (political/managerial or scientific) background at each stage. We will describe the phases in which accident analysis is used. It is important to make this distinction; many fruitless discussions about the method of analysis result from ignoring it.

Politicians or road managers are not primarily interested in individual accidents. From their perspective accidents are often treated equally, because the total outcome is much more important than the whole chain of events leading to each individual accident. Therefore, each accident counts as one, and all together they add up to a final safety result.

Researchers are much more interested in the chain of events leading to an individual accident. They want detailed information about each accident, to detect its causes and the relevant conditions. The politician wants only those details that direct his actions. At the highest level this is the decrease in the total number of accidents. The main source of information is the national database and its statistical treatment. For him, accident analysis means looking at (subgroups of) accident numbers and their statistical fluctuations. This is the main stream of accident analysis as applied in the area of traffic safety. Therefore, we will first describe these aspects of accidents.

2. The nature of accidents and their statistical characteristics

The basic notion is that accidents, whatever their cause, occur according to a chance process. Two simple assumptions are usually made to describe this process for (traffic) accidents:

- the probability of an accident occurring is independent of the occurrence of previous accidents;
- the occurrence of accidents is homogeneous in time.

If these two assumptions hold, then accidents are Poisson distributed. The first assumption does not meet much criticism. Accidents are rare events and therefore not easily influenced by previous accidents. In some cases where there is a direct causal chain (e.g., when a number of cars run into each other), the series of accidents may be regarded as one complicated accident with many cars involved. The assumption does not apply to casualties.
Casualties are often related to the same accident, and therefore the independence assumption does not hold for them. The second assumption seems less obvious at first sight. The occurrence of accidents through time or at different locations is not equally likely. However, the assumption need not hold over long time periods; it is a rather theoretical assumption in nature. If it holds for short periods of time, then it also holds for long periods, because the sum of Poisson distributed variables, even if their Poisson rates are different, is also Poisson distributed. The Poisson rate for the sum of these periods is then equal to the sum of the Poisson rates for the parts.

The assumption that really counts for a comparison of (composite) situations is whether two outcomes from an aggregation of situations in time and/or space have a comparable mix of basic situations — e.g., the comparison of the number of accidents on one particular day of the year with another day (the next day, or the same day of the next week, etc.). If the conditions are assumed to be the same (same duration, same mix of traffic and situations, same weather conditions, etc.), then the resulting numbers of accidents are outcomes of the same Poisson process. This assumption can be tested by estimating the rate parameter on the basis of the two observed values (the estimate being the average of the two values). Probability theory can be used to compute the likelihood of the equality assumption, given the two observations and their mean.

This statistical procedure is rather powerful. The Poisson assumption has been investigated many times and turns out to be supported by a vast body of empirical evidence. It has been applied in numerous situations to find out whether differences in observed numbers of accidents suggest real differences in safety. The main purpose of this procedure is to detect differences in safety: over time, between different places, or between different conditions. Such differences may guide the process of improvement. Because the main concern is to reduce the number of accidents, such an analysis may point to the most promising areas for treatment. A necessary condition for the application of such a test is that the numbers of accidents to be compared are large enough to show existing differences. In many local cases an application is not possible. Accident black-spot analysis is often hindered by this limitation, e.g., when such a test is applied to find out whether the number of accidents at a particular location is higher than average.

The procedure described can also be used if the accidents are classified according to a number of characteristics, in order to find promising safety targets. Not only with aggregation, but also with disaggregation the Poisson assumption holds, and the accident numbers can be tested against each other on the basis of the Poisson assumptions. Such a test is rather cumbersome, because for each particular case, i.e. for each different Poisson parameter, the probabilities of all possible outcomes must be computed to apply the test. In practice, this is not necessary when the numbers are large; then the Poisson distribution can be approximated by a Normal distribution, with mean and variance equal to the Poisson parameter. Once the mean value and the variance of a Normal distribution are given, all tests can be rephrased in terms of the standard Normal distribution with zero mean and variance one.
3. The use of accident statistics for traffic safety policy.

The testing procedure described has its merits for those types of analysis that are based on the assumptions mentioned. The best example of such an application is the monitoring of safety for a country or region over a year, using the total number of accidents (possibly of a particular type, such as fatal accidents), in order to compare this number with the outcome of the year before. If sequences of accident counts are given over several years, then trends in the development can be detected and accident numbers predicted for following years. Once such a trend is established, the value for the next year or years can be predicted, together with its error bounds. Deviations from a given trend can also be tested afterwards, and new actions planned. The most famous such analysis is that carried out by Smeed (1949). We will discuss this type of accident analysis in more detail later.
(1) The application of the Chi-square test for interaction is generalised to higher-order classifications. Foldvary and Lane (1974), in measuring the effect of compulsory wearing of seat belts, were among the first to apply the partitioning of the total Chi-square into values for the higher-order interactions of four-way tables.
(2) Tests are not restricted to overall effects; Chi-square values can be decomposed with regard to sub-hypotheses within the model. Also in the two-way table, the total Chi-square can be decomposed into interaction effects of part tables. The advantage of (1) and (2) over previous practice is that large numbers of Chi-square tests on many interrelated (sub)tables, with their corresponding Chi-squares, were replaced by one analysis with an exact partitioning of one Chi-square.
(3) More attention is paid to parameter estimation. E.g., the partitioning of the Chi-square made it possible to test for linear or quadratic restraints on the row parameters or for discontinuities in trends.
(4) The unit of analysis is generalised from counts to weighted counts. This is especially advantageous for road safety analyses, where corrections for period of time, number of road users, number of locations or number of vehicle kilometres are often necessary. The last option is not found in many statistical packages. Andersen (1977) gives an example for road safety analysis in a two-way table. A computer programme WPM, developed for this type of analysis of multi-way tables, is available at SWOV (see: De Leeuw and Oppe 1976). Accident analysis at this level is not explanatory; it tries to detect safety problems that need special attention. The basic information needed consists of accident numbers, to describe the total amount of unsafety, and exposure data to calculate risks and to find situations or (groups of) road users with a high level of risk.

4. Accident analysis for research purposes.

Traffic safety research is concerned with the occurrence of accidents and their consequences. Therefore, one might say that the object of research is the accident. The researcher's interest, however, is focused less on this final outcome itself than on the process that results (or does not result) in accidents. Therefore, it is better to regard the critical event in traffic as his object of study.
One of the major problems in the study of the traffic process that results in accidents is that the actual occurrence is hardly ever observed by the researcher. Investigating a traffic accident, he will try to reconstruct the event from indirect sources such as the information given by the road users involved, or by eye-witnesses, about the circumstances, the characteristics of the vehicles, the road and the drivers. As such this is not unique in science; there are more examples of an indirect study of the object of research. However, a second difficulty is that the object of research cannot be evoked at will. Systematic research by means of controlled experiments is only possible for aspects of the problem, not for the problem itself. The combination of indirect observation and lack of systematic control makes it very difficult for the investigator to detect which factors, under what circumstances, cause an accident. Although the researcher is primarily interested in the process leading to accidents, he has almost exclusively information about its consequence, the product: the accident. Furthermore, the context of accidents is complicated. Generally speaking, the following aspects can be distinguished:
- Given the state of the traffic system (traffic volume and composition, the manoeuvres of the road users, their speeds, the weather conditions, the condition of the road, the vehicles, the road users and their interactions), accidents can or cannot be prevented.
- Given an accident, and depending on a large number of factors, such as the speed and mass of vehicles, the collision angle, the protection of road users and their vulnerability, the location of impact, etc., injuries are more or less severe or the material damage is more or less substantial.
Although these aspects cannot be studied independently, from a theoretical point of view it is advantageous to distinguish the number of situations in traffic that are potentially dangerous, from the probability of having an accident given such a potentially dangerous situation, and also from the resulting outcome given a particular accident. This conceptual framework is the general basis for the formulation of risk regarding the decisions of individual road users as well as the decisions of controllers at higher levels. In the mathematical formulation of risk we need an explicit description of our probability space, consisting of the elementary events (the situations) that may result in accidents, the probability for each type of event to end in an accident, and finally the particular outcome, the loss, given that type of accident. A different approach is to look at combinations of accident characteristics to find critical factors. This type of analysis may be carried out on the total group of accidents or on subgroups. The accident itself may be the unit of research, but so may a road, a road location, a road design (e.g. a roundabout), etc.

Chinese translation (rendered in English): The Possibilities and Limitations of Traffic Accident Analysis, S. Oppe. Abstract: Traffic accident statistics, especially data at the national level, are particularly useful for monitoring and forecasting the development of accidents, for detecting positive or negative developments, and for defining safety targets and assessing industrial safety.
Foreign Literature Translation: Draft Translation and Original Text [Template]
Translation draft 1: A typical application of the Kalman filter is to predict the coordinates and velocity of an object's position from a finite sequence of noisy (and possibly biased) observations of its position. It can be found in many engineering applications (such as radar and computer vision). The Kalman filter is also an important topic in control theory and control systems engineering. For example, in the case of radar, one is interested in tracking a target, but the measured values of the target's position, velocity and acceleration are noisy at every moment. The Kalman filter uses the target's dynamic information to remove the influence of the noise and obtain a good estimate of the target's position. This estimate can be of the current position (filtering), of a future position (prediction), or of a past position (interpolation or smoothing).

Naming: This filtering method is named after its inventor, Rudolf E. Kalman (Rudolph E. Kalman), although the literature shows that Peter Swerling had in fact proposed a similar algorithm earlier. Stanley Schmidt was the first to implement a Kalman filter. While Kalman was visiting the NASA Ames Research Center, he found that his method was useful for the orbit prediction problem of the Apollo program, and the navigation computer of the Apollo spacecraft subsequently used this filter. Papers on the filter were published by Swerling (1958), Kalman (1960), and Kalman and Bucy (1961). Many different implementations of the Kalman filter now exist; the form originally proposed by Kalman is generally called the simple Kalman filter. In addition there are the Schmidt extended filter, the information filter, and many variants of the square-root filters developed by Bierman and Thornton. Perhaps the most common Kalman filter is the phase-locked loop, which is found everywhere in radios, computers, and almost any kind of video or communications equipment.

The following discussion requires a general knowledge of linear algebra and probability theory. The Kalman filter is built on linear algebra and the hidden Markov model. Its underlying dynamic system can be represented as a Markov chain built on a linear operator perturbed by Gaussian noise (that is, normally distributed noise). The state of the system can be represented by a vector of real numbers. At each increment of discrete time, the linear operator acts on the current state to produce the new state, bringing in some noise, and the control information of the system's known controllers is also added in.
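To make the state-space description above concrete, here is a minimal Python sketch of one-dimensional target tracking with a constant-velocity Kalman filter. It is an illustration only: the matrices, noise levels and simulated measurements are invented for the example, not taken from the translated source.

    import numpy as np

    # State x = [position, velocity]; only the noisy position is observed.
    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition (the linear operator)
    H = np.array([[1.0, 0.0]])               # observation model
    Q = 0.01 * np.eye(2)                     # process noise covariance (assumed)
    R = np.array([[4.0]])                    # measurement noise covariance (assumed)

    x = np.array([[0.0], [0.0]])             # initial state estimate
    P = np.eye(2)                            # initial estimate covariance

    def kalman_step(x, P, z):
        # Predict: propagate the state and its uncertainty through the dynamics.
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update: correct the prediction with the new measurement z.
        y = z - H @ x_pred                   # innovation
        S = H @ P_pred @ H.T + R             # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
        return x_pred + K @ y, (np.eye(2) - K @ H) @ P_pred

    rng = np.random.default_rng(0)
    for t in range(20):                      # target moves at 0.5 units per step
        z = np.array([[0.5 * t + rng.normal(0.0, 2.0)]])
        x, P = kalman_step(x, P, z)
    print("estimated position and velocity:", x.ravel())

Run over the simulated track, the velocity estimate settles near the true 0.5 units per step even though individual position measurements are heavily corrupted; that is precisely the noise-removal behaviour described in the translation.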
Process Equipment and Control Engineering: U-tube Heat Exchanger, Graduation Thesis Foreign Literature Translation and Original Text
Graduation Project (Thesis) Foreign Literature Translation. Thesis title: U-tube heat exchanger design; English title: U-tube heat exchangers. Major: Process Equipment and Control Engineering. Translation date: 2017.02.14.

U-tube heat exchangers
M. Spiga and G. Spiga, Bologna

1 Summary: Some analytical solutions are provided to predict the steady temperature distributions of both fluids in U-tube heat exchangers. The energy equations are solved assuming that the fluids remain unmixed and single-phased. The analytical predictions are compared with the design data and the numerical results concerning the heat exchanger of a spent nuclear fuel pool plant, assuming distinctly full mixing and no mixing conditions for the secondary fluid (shell side). The investigation is carried out by studying the influence of all the usual dimensionless parameters (flow capacitance ratio, heat transfer resistance ratio and number of transfer units), to get an immediate and significant insight into the thermal behaviour of the heat exchanger.

More detailed and accurate studies of the fluid temperature distribution inside heat exchangers are greatly required nowadays. This is needed to provide correct evaluation of thermal and structural performances, mainly in the industrial fields (such as nuclear engineering) where larger, more efficient and reliable units are sought, and where a good thermal design cannot leave integrity and safety requirements out of consideration [1-3]. In this view, the huge amount of scientific and technical information available in several texts [4, 5], mainly concerning charts and maps useful for exit temperature and effectiveness considerations, is not quite satisfactory for a more rigorous and local analysis. In fact the investigation of the thermomechanical behaviour (thermal stresses, plasticity, creep, fracture mechanics) of tubes, plates, fins and structural components in the heat exchanger depends on the temperature distribution. So it should be very useful to equip the stress analysis codes for heat exchangers with simple analytical expressions for the temperature map (without resorting to time-consuming numerical solutions for the thermal problem), allowing a sensible saving in computer costs. Analytical predictions provide the thermal map of a heat exchanger, aiding in design optimization. Moreover they greatly reduce the need for scale model testing (generally prohibitively expensive in nuclear engineering), and furnish an accurate benchmark for the validation of more refined numerical solutions obtained by computer codes. The purpose of this paper is to present the local bulk-wall and fluid temperature distributions for U-tube heat exchangers, solving the energy balance equations analytically.

2 General assumptions
Let m, c, h, and A denote mass flow rate (kg/s), specific heat (J kg^-1 K^-1), heat transfer coefficient (W m^-2 K^-1), and heat transfer surface (m^2) for each leg, respectively. The theoretical analysis is based on classical assumptions [6]:
- steady state working conditions,
- equal flow distribution (same mass flow rate for every tube of the bundle),
- single phase fluid flow,
- constant physical properties of exchanger core and fluids,
- adiabatic exchanger shell or shroud,
- no heat conduction in the axial direction,
- constant thermal conductances hA comprehending wall resistance and fouling.
According to this last assumption, the wall temperature is the same for the primary and secondary flow.
However the heat transfer balance between the fluids is still respected, since the fluid-wall conductances are appropriately reduced to account for the wall thermal resistance and the fouling factor [6]. The dimensionless parameters typical of the heat transfer phenomena between the fluids are the flow capacitance and heat transfer resistance ratios and the number of transfer units, commonly labeled NTU in the literature, where (mc)min stands for the smaller of the two values (mc)s and (mc)p. In (1) the subscripts s and p refer to the secondary and primary fluid, respectively. Only three of the previous five numbers are independent. The boundary conditions are the inlet temperatures of both fluids.

3 Parallel and counter flow solutions
The well known one-dimensional solutions for single-pass parallel and counterflow heat exchangers, which will be useful later for the analysis of U-tube heat exchangers, are presented below. If t, T, ν are the wall, primary fluid, and secondary fluid bulk temperatures (K), and ξ and L represent the longitudinal space coordinate and the heat exchanger length (m), the energy balance equations are written in the dimensionless coordinate x = ξ/L for parallel and counterflow respectively, as in (2)-(4). After some algebra, a second order differential equation is deduced for the temperature of the primary (or secondary) fluid, leading to the solutions (5), (6), where the integration constants follow from the boundary conditions: T(0) = Ti, ν(0) = νi for parallel flow; T(1) = Ti, ν(0) = νi for counterflow. Wishing to give prominence to the number of transfer units, a corresponding NTU form can be noticed. For counterflow heat exchangers, when E = 1, the solutions (5), (6) degenerate and the fluid temperatures take a simpler form. It can be realized that (5)-(9) actually depend only on the two parameters E, NTU. However a formalism involving the numbers E, Ns, R has been chosen here in order to avoid the double formalism (E ≤ 1 and E > 1) connected with NTU.

4 U-tube heat exchanger
In the primary side of the U-tube heat exchanger, whose schematic drawing is shown in Fig. 1, the hot fluid enters the inlet plenum, flows inside the tubes, and exits from the outlet plenum. In the secondary side the fluid flows in the tube bundle (shell side). This arrangement suggests that the heat exchanger can be considered as formed by the coupling of a parallel-flow and a counterflow heat exchanger, each with a height equal to half the length of the mean U-tube. However it is necessary to take into account the interactions in the secondary fluid between the hot and the cold leg, considering that the two flows are not physically separated. Two extreme opposite conditions can be investigated: no mixing and full mixing in the two streams of the secondary fluid. The actual heat transfer phenomena are certainly characterized by only a partial mixing of the shell side fluid between the legs, hence the analysis of these two extreme theoretical conditions will provide an upper and a lower limit for the actual temperature distribution.

4.1 No mixing conditions
In this hypothesis the U-tube heat exchanger can be modelled by two independent heat exchangers, a cocurrent heat exchanger for the hot leg and a countercurrent heat exchanger for the cold leg. The only coupling condition is that, for the primary fluid, the inlet temperature in the cold side must be the exit temperature of the hot side.
The numbers R, E, Ns, NTU can have different values for the two legs, because of the different values of the heat transfer coefficients and physical properties. The energy balance equations are the same as given in (2)-(4), where now the numbers E and Ns must be changed into E/2 and 2Ns in both legs if we want to use the total secondary mass flow rate in their definition, since the flow in every leg is reduced to half the inlet mass flow rate ms. Of course it is understood that the area A to be used here is half of the total exchange area of the unit, as is the case for the length L too. Recalling (5)-(9) and resorting to the subscripts c and h to label the cold and hot leg, respectively, the temperature profile is given by (10)-(15), where the integration constants follow from the coupling conditions. If E = 2, the solutions (13), (14) for the cold leg degenerate.

4.2 Full mixing conditions
A different approach can be proposed to predict the temperature distributions in the core wall and fluids of the U-tube heat exchanger. The assumption of full mixing implies that the temperatures of the secondary fluid in the two legs, at the same longitudinal section, coincide exactly. In this situation the steady state energy balance equations constitute the differential set (18)-(22). The bulk wall temperature in both sides then follows, and (18)-(22) are simplified to a set of three equations, whose summation gives a differential equation for the secondary fluid temperature, with a general solution containing an integration constant to be specified. Consequently a second order differential equation (24) is deduced for the primary fluid temperature in the hot leg, where the numbers B, C and D are defined in terms of the basic parameters. The solution to (24) allows the temperatures to be determined; the boundary conditions for the fluids provide the integration constants. Again the fluid temperatures depend only on the numbers E and NTU.

5 Results
The analytical solutions allow useful information to be deduced about temperature profiles and effectiveness. Concerning the U-tube heat exchanger, the solutions (10)-(15) and (25)-(27) have been used as a benchmark for the numerical predictions of an already validated computer code [7], obtaining very satisfactory agreement. Moreover a test has been performed considering a Schutte & Koerting Co. U-tube heat exchanger, designed for the cooling system of a spent nuclear fuel storage pool. The demineralized water of the fuel pit flows inside the tubes, the raw water in the shell side. The correct determination of the thermal resistances is very important to get a reliable prediction; for every leg the heat transfer coefficients have been evaluated by the Dittus-Boelter correlation on the tube side [8] and by the Weisman correlation on the shell side [9]; the wall material is stainless steel AISI 304. The circles in the figures indicate the experimental data supplied by the manufacturer. The numbers E, NTU, R for the hot and the cold leg are respectively 1.010, 0.389, 0.502 and 1.011, 0.38~, 0.520. The difference between the experimental datum and the analytical prediction of the exit temperature is 0.7% for the primary fluid and 0.9% for the secondary fluid. The average exit temperature of the secondary fluid in the no mixing model differs from the full mixing result by only 0.6%.
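Since the solutions (5)-(15) themselves are not reproduced in this extract, the following Python sketch uses the standard textbook effectiveness-NTU relations for single-pass parallel and counterflow exchangers to mimic the two legs of the no-mixing model. These are the classical relations, not the paper's equations, and the numerical inputs merely echo the order of magnitude of the E and NTU values quoted above.

    import math

    def effectiveness(ntu, cr, parallel):
        # Standard single-pass effectiveness-NTU relations for capacity-rate
        # ratio cr = Cmin/Cmax (textbook forms, assumed here for illustration).
        if parallel:
            return (1 - math.exp(-ntu * (1 + cr))) / (1 + cr)
        if abs(cr - 1.0) < 1e-12:            # balanced counterflow limit
            return ntu / (1 + ntu)
        return (1 - math.exp(-ntu * (1 - cr))) / (1 - cr * math.exp(-ntu * (1 - cr)))

    # Hot leg modelled as parallel flow, cold leg as counterflow; the quoted
    # E of about 1.01 is treated as a balanced exchanger (cr = 1), NTU = 0.39.
    for label, par in (("hot leg (parallel)", True), ("cold leg (counter)", False)):
        print(label, round(effectiveness(0.39, 1.0, par), 4))

As expected, the counterflow leg shows a slightly higher effectiveness at the same NTU, consistent with the small differences between the two legs discussed above.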
It is worth pointing out the relatively small differences between the profiles obtained through the two different hypotheses (full and no mixing conditions), mainly for the primary fluid; the actual temperature distribution is certainly bounded between these upper and lower limits, hence it is very well specified. Figures 3-5 report the longitudinal temperature distribution in the core wall, τw = (t - νi)/(Ti - νi), emphasizing the effects of the parameters E, NTU, R. As discussed above, this profile can be very useful for detailed stress analysis, for instance as an input for related computer codes. In particular the thermal conditions at the U-bend transitions are responsible for a relative movement between the hot and the cold leg, producing hoop stresses with possible occurrence of tube cracking. It is evident that the cold leg is more constrained than the hot leg; the axial thermal gradient is higher in the inlet region and increases with increasing values of E, NTU, R. The heat exchanger effectiveness ε, defined as the ratio of the actual heat transfer rate (mc)p(Ti - Tout), with Tout = Tc(0), to the maximum hypothetical rate under the same conditions, (mc)min(Ti - νi), is shown in Figs. 6 and 7, respectively versus the number of transfer units and the flow capacitance ratio. As known, balanced heat exchangers (E = 1) present the worst behaviour; the effectiveness does not depend on R and is the same for reciprocal values of the flow capacitance ratio.

Chinese translation (abstract, rendered in English): U-tube heat exchangers, M. Spiga and G. Spiga, Bologna. Abstract: Analytical solutions are provided describing the distribution of the two fluids in U-tube heat exchangers.
Foreign Literature Translation: Customer Satisfaction (original attached)
Foreign literature translation (original attached). Translation 1: Determinants of online shoppers' satisfaction in Korea.

Abstract: The purpose of this article is to identify the factors that may lead to customer satisfaction with online shopping malls in Korea. It is hypothesized that customers' positive perceptions of the usefulness, security, technical competence, customer support and mall interface of Internet shopping positively influence customer satisfaction. It is further hypothesized that satisfied customers become loyal customers. The findings confirm that customer satisfaction has a significant effect on customer loyalty, indicating that customers who are satisfied with the service display high loyalty. We also found that online customers' perceptions of transaction security, customer support, the usefulness of online shopping and the mall interface are positively correlated with customer satisfaction.

Conceptual model: Online shoppers can easily sort the goods within a mall by price or quality, and can compare the same product across different malls. Online shopping also saves time and reduces information search costs. Customers may therefore perceive that they can get better deals online with less time and effort. This innovative system characteristic has been defined as perceived usefulness. Several empirical studies have found that perceived usefulness, realized after adopting an innovative technology, influences satisfaction. It is therefore hypothesized that the perceived usefulness of online shopping is positively related to satisfaction (H1).

Online customers' primary concern is the perceived insecurity of using credit cards on the Internet. Although authentication systems have improved markedly, customers' worries about transmitting sensitive information such as credit card numbers online will not easily be resolved. Online privacy protection is another concern: research shows that online customers worry that online transactions may lead to identity theft or misuse of their private information. It is therefore hypothesized that the security of online shopping positively influences customer satisfaction (H2).

Previous studies have shown that technical aspects of the system, such as network speed, error recovery capability and system stability, are important factors in customer satisfaction. For example, Kim and Lim (2001) found that network speed is related to online shoppers' satisfaction. Dellaert and Kahn (1999) also reported that slow web surfing negatively affects the evaluation of website content when it is not well managed by the provider. Daniel and Aladwani report that quick and accurate recovery from system errors, together with network speed, are important factors affecting the satisfaction of online banking users (H3). Because of the impersonal nature of online transactions, prompt responses to customer inquiries about products and other services are important for customer satisfaction.
Foreign Literature Translation (translated text)

Environmental Management Accounting (EMA) as the trend in the development of management accounting, Christine Jasch. Abstract: Why should organizations and accountants care about environmental issues? Pressure regarding environmental performance and its disclosure, coming from supply chains, capital providers, regulators and other stakeholders, is raising organizations' environment-related costs. At the same time, the view that improving environmental performance can bring potential monetary benefits is gaining acceptance, and conventional accounting practice cannot adequately provide the information needed for environmental management and the related strategic decisions. EMA has been promoted by the establishment of the Environmental Management Accounting Working Group under the United Nations Division for Sustainable Development and by the publications it has sponsored. Recently, the International Federation of Accountants issued a guidance document on EMA, which will further promote the application of EMA among accountants. This special issue of the Journal of Cleaner Production on EMA focuses on its methodological background and on case-study experience from Australia, Austria, Argentina, Canada, Japan and Lithuania.

Main text: Environmental issues, together with the related costs, revenues and benefits, are receiving increasing attention from citizens, governmental organizations and corporate leaders in most countries of the world. However, there is a growing consensus that conventional accounting does not provide adequate information to properly support decision-making on environmental management responsibilities. To fill this gap, the emerging field of EMA has been receiving steadily increasing attention. In the early 1990s, the US Environmental Protection Agency was the first national agency to set up a formal programme to promote the adoption of EMA. Since then, organizations in 30 countries have begun to promote and implement many different types of environment-related management measures under EMA. The broad attention paid to EMA stems from the advocacy of the United Nations Division for Sustainable Development and from the EMA books whose publication it commissioned. The International Federation of Accountants decided to commission a guidance document on EMA, building on the two earliest EMA publications issued by the UN DSD's EMA working group, in order to integrate the best available information on EMA while making the necessary updates and additions. This document is neither a standard with prescriptive requirements nor a descriptive research report. It is intended to be a guidance document, occupying the middle ground between regulatory requirements, standards and pure information. As such, its goal is to provide an overall framework and a fairly comprehensive definition of EMA, as consistent as possible with other existing, widely used environmental accounting frameworks with which EMA must work in concert, so as to reduce some of the international confusion on this important topic.
Foreign Literature Translation: Original Text + Translation
Foreign literature original text: Analysis of Continuous Prestressed Concrete Beams, Chris Burgoyne, March 26, 2005

1. Introduction

This conference is devoted to the development of structural analysis rather than the strength of materials, but the effective use of prestressed concrete relies on an appropriate combination of structural analysis techniques with knowledge of the material behaviour. Design of prestressed concrete structures is usually left to specialists; the unwary will either make mistakes or spend inordinate time trying to extract a solution from the various equations. There are a number of fundamental differences between the behaviour of prestressed concrete and that of other materials. Structures are not unstressed when unloaded; the design space of feasible solutions is totally bounded; in hyperstatic structures, various states of self-stress can be induced by altering the cable profile; and all of these factors are influenced by creep and thermal effects. How were these problems recognised and how have they been tackled? Ever since the development of reinforced concrete by Hennebique at the end of the 19th century (Cusack 1984), it was recognised that steel and concrete could be more effectively combined if the steel was pretensioned, putting the concrete into compression. Cracking could be reduced, if not prevented altogether, which would increase stiffness and improve durability. Early attempts all failed because the initial prestress soon vanished, leaving the structure to behave as though it was reinforced; good descriptions of these attempts are given by Leonhardt (1964) and Abeles (1964). It was Freyssinet's observations of the sagging of the shallow arches on three bridges that he had just completed in 1927 over the River Allier near Vichy which led directly to prestressed concrete (Freyssinet 1956). Only the bridge at Boutiron survived WWII (Fig. 1). Hitherto, it had been assumed that concrete had a Young's modulus which remained fixed, but he recognised that the deferred strains due to creep explained why the prestress had been lost in the early trials. Freyssinet (Fig. 2) also correctly reasoned that high tensile steel had to be used, so that some prestress would remain after the creep had occurred, and also that high quality concrete should be used, since this minimised the total amount of creep. The history of Freyssinet's early prestressed concrete work is written elsewhere.

Figure 1: Boutiron Bridge, Vichy. Figure 2: Eugen Freyssinet.

At about the same time work was underway on creep at the BRE laboratory in England (Glanville 1930 and 1933). It is debatable which man should be given credit for the discovery of creep, but Freyssinet clearly gets the credit for successfully using the knowledge to prestress concrete. There are still problems associated with understanding how prestressed concrete works, partly because there is more than one way of thinking about it. These different philosophies are to some extent contradictory, and certainly confusing to the young engineer. This is also reflected, to a certain extent, in the various codes of practice. Permissible stress design philosophy sees prestressed concrete as a way of avoiding cracking by eliminating tensile stresses; the objective is for sufficient compression to remain after creep losses. Untensioned reinforcement, which attracts prestress due to creep, is anathema.
This philosophy derives directly from Freyssinet's logic and is primarily a working stress concept. Ultimate strength philosophy sees prestressing as a way of utilising high tensile steel as reinforcement. High strength steels have high elastic strain capacity, which could not be utilised when used as ordinary reinforcement; if the steel is pretensioned, much of that strain capacity is taken out before bonding the steel to the concrete. Structures designed this way are normally designed to be in compression everywhere under permanent loads, but allowed to crack under high live load. The idea derives directly from the work of Dischinger (1936) and his work on the bridge at Aue in 1939 (Schonberg and Fichter 1939), as well as that of Finsterwalder (1939). It is primarily an ultimate load concept, and the idea of partial prestressing derives from it. The load-balancing philosophy, introduced by T.Y. Lin, uses prestressing to counter the effect of the permanent loads (Lin 1963). The sag of the cables causes an upward force on the beam, which counteracts the load on the beam. Clearly, only one load can be balanced, but if this is taken as the total dead weight, then under that load the beam will perceive only the net axial prestress and will have no tendency to creep up or down. These three philosophies all have their champions, and heated debates take place between them as to which is the most fundamental.

2. Section design

From the outset it was recognised that prestressed concrete has to be checked at both the working load and the ultimate load. For steel structures, and those made from reinforced concrete, there is a fairly direct relationship between the load capacity under an allowable stress design and that at the ultimate load under an ultimate strength design. Older codes were based on permissible stresses at the working load; new codes use moment capacities at the ultimate load. Different load factors are used in the two codes, but a structure which passes one code is likely to be acceptable under the other. For prestressed concrete, those ideas do not hold, since the structure is highly stressed even when unloaded. A small increase of load can cause some stress limits to be breached, while a large increase in load might be needed to cross other limits. The designer has considerable freedom to vary both the working load and ultimate load capacities independently; both need to be checked. A designer normally has to check the tensile and compressive stresses, in both the top and bottom fibre of the section, for every load case.
The critical sections are normally, but not always, the mid-span and the sections over piers, but other sections may become critical when the cable profile has to be determined. The stresses at any position are made up of three components, one of which normally has a different sign from the other two; consistency of sign convention is essential. If P is the prestressing force and e its eccentricity, A and Z are the area of the cross-section and its elastic section modulus, and M is the applied moment, then, with ft and fc the permissible stresses in tension and compression,

    ft ≤ P/A + Pe/Z - M/Z ≤ fc.

Thus, for any combination of P and M, the designer already has four inequalities to deal with. The prestressing force differs over time, due to creep losses, and a designer is usually faced with at least three combinations of prestressing force and moment:
- the applied moment at the time the prestress is first applied, before creep losses occur,
- the maximum applied moment after creep losses, and
- the minimum applied moment after creep losses.

Figure 4: Gustave Magnel.

Other combinations may be needed in more complex cases. There are at least twelve inequalities that have to be satisfied at any cross-section, but since an I-section can be defined by six variables, and two are needed to define the prestress, the problem is over-specified and it is not immediately obvious which conditions are superfluous. In the hands of inexperienced engineers, the design process can be very long-winded. However, it is possible to separate the design of the cross-section from the design of the prestress. By considering pairs of stress limits on the same fibre, but for different load cases, the effects of the prestress can be eliminated, leaving expressions of the form

    Z ≥ (moment range) / (permissible stress range).

These inequalities, which can be evaluated exhaustively with little difficulty, allow the minimum size of the cross-section to be determined. Once a suitable cross-section has been found, the prestress can be designed using a construction due to Magnel (Fig. 4). The stress limits can all be rearranged into the form

    e ≤ -Z/A + (1/P)(fZ + M).

By plotting these on a diagram of eccentricity versus the reciprocal of the prestressing force, a series of bound lines is formed. Provided the inequalities (2) are satisfied, these bound lines will always leave a zone showing all feasible combinations of P and e. The most economical design, using the minimum prestress, usually lies on the right hand side of the diagram, where the design is limited by the permissible tensile stresses. Plotting the eccentricity on the vertical axis allows direct comparison with the cross-section, as shown in Fig. 5. Inequalities (3) make no reference to the physical dimensions of the structure, but these practical cover limits can be shown as well. A good designer knows how changes to the design and the loadings alter the Magnel diagram. Changing both the maximum and minimum bending moments, while keeping the range the same, raises or lowers the feasible region. If the moments become more sagging the feasible region gets lower in the beam. In general, as spans increase, the dead load moments increase in proportion to the live load. A stage will be reached where the economic point (A on Fig. 5) moves outside the physical limits of the beam; Guyon (1951a) denoted the limiting condition as the critical span.
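The Magnel construction lends itself to a direct numerical scan. The Python sketch below checks the four working-stress inequalities over a grid of eccentricities at several prestressing forces; the section properties, moments and stress limits are invented for illustration and do not come from the paper.

    # Minimal Magnel-diagram scan: find feasible (P, e) combinations for a
    # rectangular section under the four working-stress inequalities.
    # All numbers below are illustrative assumptions.
    A = 0.18                        # cross-section area, m^2 (0.3 m x 0.6 m)
    Z = 0.018                       # elastic section modulus, m^3 (b*h^2/6)
    M_max, M_min = 250e3, 120e3     # extreme applied moments, N*m
    f_t, f_c = -2.0e6, 16.0e6       # permissible tension (negative) and compression, Pa

    def stresses(P, e, M):
        top = P / A - P * e / Z + M / Z   # top-fibre stress (compression positive)
        bot = P / A + P * e / Z - M / Z   # bottom-fibre stress
        return top, bot

    def feasible(P, e):
        # Both extreme load cases must respect both stress limits in both fibres.
        return all(f_t <= s <= f_c
                   for M in (M_min, M_max)
                   for s in stresses(P, e, M))

    # Scan eccentricity for several prestressing forces, as in Magnel's
    # diagram of e against 1/P.
    for P in (1.5e6, 2.0e6, 2.5e6, 3.0e6):
        es = [e for e in (i * 0.01 for i in range(-20, 21)) if feasible(P, e)]
        band = f"{es[0]:+.2f} .. {es[-1]:+.2f} m" if es else "infeasible"
        print(f"P = {P/1e6:.1f} MN  ->  feasible e: {band}")

Each printed band is a horizontal slice of the feasible zone; watching the band shrink or vanish as P, the moments or the stress limits change reproduces, in miniature, how a designer reads the Magnel diagram.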
Shorter spans will be governed by tensile stresses in the two extreme fibres, while longer spans will be governed by the limiting eccentricity and tensile stresses in the bottom fibre. However, it does not then take a large increase in moment before compressive stresses also govern in the bottom fibre under maximum moment. Only when much longer spans are required, and the feasible region moves as far down as possible, does the structure become governed by compressive stresses in both fibres.

3. Continuous beams

The design of statically determinate beams is relatively straightforward; the engineer can work on the basis of the design of individual cross-sections, as outlined above. A number of complications arise when the structure is indeterminate, which means that the designer has to consider not only a critical section but also the behaviour of the beam as a whole. These are due to the interaction of a number of factors, such as creep, temperature effects and construction sequence effects. It is the development of these ideas which forms the core of this paper. The problems of continuity were addressed at a conference in London (Andrew and Witt 1951). The basic principles, and nomenclature, were already in use, but to modern eyes the concentration on hand analysis techniques is unusual, and one of the principal concerns seems to have been the difficulty of estimating losses of prestressing force.

3.1 Secondary moments

A prestressing cable in a beam causes the structure to deflect. Unlike the statically determinate beam, where this motion is unrestrained, the movement causes a redistribution of the support reactions, which in turn induces additional moments. These are often termed secondary moments (but they are not always small) or parasitic moments (but they are not always bad). Freyssinet's bridge across the Marne at Luzancy, started in 1941 but not completed until 1946, is often thought of as a simply supported beam, but it was actually built as a two-hinged arch (Harris 1986), with support reactions adjusted by means of flat jacks and wedges which were later grouted in (Fig. 6). The same principles were applied in the later and larger beams built over the same river. Magnel built the first indeterminate beam bridge at Sclayn, in Belgium (Fig. 7), in 1946. The cables are virtually straight, but he adjusted the deck profile so that the cables were close to the soffit near mid-span. Even with straight cables the sagging secondary moments are large; about 50% of the hogging moment at the central support caused by dead and live load. The secondary moments cannot be found until the profile is known, but the cable cannot be designed until the secondary moments are known. Guyon (1951b) introduced the concept of the concordant profile, a profile that causes no secondary moments; es and ep thus coincide. Any line of thrust is itself a concordant profile. The designer is then faced with a slightly simpler problem; a cable profile has to be chosen which not only satisfies the eccentricity limits (3) but is also concordant. That in itself is not a trivial operation, but it is helped by the fact that the bending moment diagram resulting from any load applied to a beam will itself be a concordant profile for a cable of constant force. Such loads are termed notional loads to distinguish them from the real loads on the structure.
Superposition can be used to progressively build up a set of notional loads whose bending moment diagram gives the desired concordant profile.

3.2 Temperature effects

Temperature variations apply to all structures, but the effect on prestressed concrete beams can be more pronounced than in other structures. The temperature profile through the depth of a beam (Emerson 1973) can be split into three components for the purposes of calculation (Hambly 1991). The first causes a longitudinal expansion, which is normally released by the articulation of the structure; the second causes curvature, which leads to deflection in all beams and reactant moments in continuous beams; while the third causes a set of self-equilibrating stresses across the cross-section. The reactant moments can be calculated and allowed for, but it is the self-equilibrating stresses that cause the main problems for prestressed concrete beams. These beams normally have high thermal mass, which means that daily temperature variations do not penetrate to the core of the structure. The result is a very non-uniform temperature distribution across the depth, which in turn leads to significant self-equilibrating stresses. If the core of the structure is warm while the surface is cool, such as at night, then quite large tensile stresses can develop on the top and bottom surfaces. However, they only penetrate a very short distance into the concrete and the potential crack width is very small. It can be very expensive to overcome the tensile stress by changing the section or the prestress.
Foreign Literature and Translation
(English references and translation) June 2016, undergraduate thesis. Title: STATISTICAL SAMPLING METHOD, USED IN THE AUDIT. Student: Wang Xueqin; School: School of Management; Department: Accounting; Major: Financial Management; Class: Financial Management 12-2; School code: 10128; Student number: 201210707016

Statistics and Audit. Romanian Statistical Review nr. 5 / 2010

STATISTICAL SAMPLING METHOD, USED IN THE AUDIT - views, recommendations, findings
PhD Candidate Gabriela-Felicia UNGUREANU

Abstract
The rapid increase in the size of U.S. companies from the early twentieth century created the need for audit procedures based on the selection of a part of the total population audited, in order to obtain reliable audit evidence characterizing the entire population of account balances or classes of transactions. Sampling is not used only in auditing; it is also used in sample surveys, market analysis and medical research, whenever someone wants to reach a conclusion about a large body of data by examining only a part of it. The difference is the "population" from which the sample is selected, i.e. the set of data about which a conclusion is to be drawn. Audit sampling applies only to certain types of audit procedures.

Key words: sampling, sample risk, population, sampling unit, tests of controls, substantive procedures.

Statistical sampling
The statistical sampling committee of the American Institute of Certified Public Accountants (AICPA) issued in 1962 a special report, titled "Statistical sampling and independent auditors", which allowed the use of the statistical sampling method in accordance with Generally Accepted Auditing Standards (GAAS). During 1962-1974, the AICPA published a series of papers on statistical sampling, "Auditor's Approach to Statistical Sampling", for use in the continuing professional education of accountants. In 1981, the AICPA issued the professional standard "Audit Sampling", which provides general guidelines for both sampling methods, statistical and non-statistical. Earlier audits included checks of all transactions in the period covered by the audited financial statements. At that time, the literature gave no particular attention to this subject. Only in 1971 did an audit procedures program printed in the Federal Reserve Bulletin include several references to sampling, such as selecting the "few items" of inventory. The program was developed by a special committee, which later became part of the American Institute of Certified Public Accountants. In the first decades of the last century, auditors often applied sampling, but sample size was not related to the effectiveness of the entity's internal control. In 1955, the American Institute of Accountants published a case study on extending audit sampling, summarizing an audit program developed by certified public accountants, to show why sampling is necessary to extend the audit. The study was important because it is one of the leading works on sampling that recognize a relationship of dependency between detail testing and the reliability of internal control. In 1964, the AICPA's Auditing Standards Board issued a report entitled "The relationship between statistical sampling and Generally Accepted Auditing Standards (GAAS)", which illustrated the relationship between accuracy and reliability in sampling and the provisions of GAAS. In 1978, the AICPA published the work of Donald M.
Roberts, "Statistical Auditing", which explains the theory underlying statistical sampling in auditing. In 1981, the AICPA issued the professional standard named "Audit Sampling", which provides guidelines for both sampling methods, statistical and non-statistical. An auditor does not rely solely on the results of a single procedure to reach a conclusion on an account balance, class of transactions or the operational effectiveness of controls. Rather, the audit findings are based on combined evidence from several sources, as a consequence of a number of different audit procedures. When an auditor selects a sample from a population, his objective is to obtain a representative sample, i.e. a sample whose characteristics are identical with the population's characteristics; this means that the selected items are identical with those remaining outside the sample. In practice, auditors do not know for sure whether a sample is representative, even after completing the test, but they "may increase the probability that a sample is representative by accuracy of activities made related to design, sample selection and evaluation" [1]. Lack of representativeness in the sample results may be caused by observation errors and sampling errors; the risks of producing these errors can be controlled. Observation error (risk of observation) appears when the audit test fails to identify existing deviations in the sample, when an inadequate audit technique is used, or through negligence of the auditor. Sampling error (sampling risk) is an inherent characteristic of the survey, resulting from the fact that only a fraction of the total population is tested. Sampling error occurs because it is possible for the auditor to reach a conclusion, based on a sample, that differs from the conclusion which would be reached if the entire population were subject to identical audit procedures. Sampling risk can be reduced by adjusting the sample size, depending on the size and characteristics of the population, and by using an appropriate method of selection. Increasing the sample size will reduce the sampling risk; a sample comprising the whole population presents zero sampling risk. Audit sampling is a method of testing used to gather sufficient and appropriate audit evidence for the purposes of the audit. The auditor may decide to apply audit sampling to an account balance or class of transactions. Audit sampling applies audit procedures to less than 100% of the items within an account balance or class of transactions, such that every sampling unit has a chance of being selected. The auditor is required to determine appropriate ways of selecting items for testing. Audit sampling can take a statistical or a non-statistical approach. Statistical sampling is a method in which the sample is constructed so that each unit of the total population has an equal probability of being included; the method of sample selection is random, which allows the results to be assessed on the basis of probability theory and the sampling risk to be quantified. Choosing the appropriate population ensures that the auditor's findings can be extended to the entire population. Non-statistical sampling is a method of sampling in which the auditor uses professional judgment to select the elements of a sample. Since the purpose of sampling is to draw conclusions about the entire population, the auditor should select a representative sample by choosing sample units which have characteristics typical of that population.
Only then can the results be extrapolated to the entire population, since the sample selected is representative. Audit tests can be applied to all the elements of the population where the population is small, or to an unrepresentative sample where the auditor knows the particularities of the population to be tested and is able to identify a small number of items of interest to the audit. If the sample does not share the characteristics of the elements of the entire population, the errors found in the tested sample cannot be extrapolated. The decision between a statistical and a non-statistical approach depends on the auditor's professional judgment in seeking sufficient appropriate audit evidence on which to base his findings about the audit opinion. A statistical sampling method relies on random selection, in which any possible combination of elements of the population is equally likely to enter the sample. Simple random sampling is used when no stratification is applied. Random selection involves using random numbers generated by a computer: after selecting a random starting point, the auditor finds the first random number that falls within the range of the test document numbers. Only when the approach has the characteristics of statistical sampling are statistical assessments of sampling risk valid. In another variant of probability sampling, namely systematic selection (also called mechanical random selection), elements naturally succeed one another in space or time; the auditor has a preliminary listing of the population and makes a decision on sample size. "The auditor calculates a counting step, and selects the sample elements based on the step size. The counting step is determined by dividing the size of the population by the desired number of sample units. The advantage of systematic selection is its usability. In most cases, a systematic sample can be extracted quickly, and the method automatically arranges the numbers in successive series." [2] Selection with probability proportional to size is a method which emphasizes those population units with higher recorded values; the sample is constituted so that the probability of selecting any given element of the population is proportional to the recorded value of the item. Stratified selection is a method which emphasizes units with higher values through the stratification of the population into subpopulations. Stratification provides a complete picture for the auditor when the population (the data to be analyzed) is not homogeneous. In this case, the auditor stratifies the population by dividing it into distinct subpopulations which have common, pre-defined characteristics. "The objective of stratification is to reduce the variability of elements in each layer and therefore allow a reduction in sample size without a proportionate increase in the risk of sampling." [3] If the stratification is done properly, the total sample size across the layers will be less than the sample size that would be obtained, at the same level of sampling risk, from a sample extracted from the entire population. Audit results from a layer can be projected only onto the items that are part of that layer. Some views on non-statistical sampling methods are also useful; these imply guided selection of the sample, each element being selected according to certain criteria determined by the auditor.
The method is subjective, because the auditor intentionally selects items containing the features he has set. Selection in series is done by selecting multiple series of successive elements. Series sampling is recommended only if a reasonable number of series is used; with just a few series there is a risk that the sample is not representative. This type of sampling can be used in addition to other samples where there is a high probability of occurrence of errors. In arbitrary selection, no items are selected preferentially by the auditor, regardless of size, source or characteristics. It is not a recommended method, because it is not objective. Such sampling is based on the auditor's professional judgment, which may decide which items can or cannot be part of the sample. Because it is not a statistical method, the standard error cannot be calculated. Although the sample structure can be constructed to reproduce the population, there is no guarantee that the sample is representative: if a feature that would be relevant in a particular situation is omitted, the sample is not representative. Sampling applies when the auditor plans to draw conclusions about a population based on a selection. The auditor considers the audit program and determines the audit procedures to which random selection may apply. Sampling is used by auditors in testing internal control systems and in substantive testing of operations. The general objectives of tests of the control system and of substantive tests of operations are to verify the application of pre-defined control procedures, and to determine whether operations contain material errors. Control tests are intended to provide evidence of the operational efficiency and design or operation of a control system to prevent or detect material misstatements in financial statements. Control tests are necessary if the auditor plans to assess control risk for management's assertions. Controls are generally expected to be applied similarly to all transactions covered by the records, regardless of transaction value. Therefore, if the auditor uses sampling, it is not advisable to select only high value transactions; samples must be chosen so as to be representative of the population. An auditor must be aware that an entity may change a particular control during the course of the audit. If the control is replaced by another which is designed to achieve the same specific objective, the auditor must decide whether to design a sample covering all transactions made during the period or just a sample of the transactions subject to the new control; the appropriate decision depends on the overall objective of the audit test. Verification of the internal control system of an entity is intended to provide guidance on the identification of relevant controls and the design of evaluation tests of controls.

Other tests: In testing the internal control system and testing operations, an audit sample is used to estimate the proportion of elements of a population containing a characteristic or attribute under analysis. This proportion is called the frequency of occurrence or percentage of deviation, and is equal to the ratio of the number of elements containing the specific attribute to the total number of population elements. Deviations found in a sample are weighted to calculate an estimate of the proportion of deviations in the total population. Risk associated with sampling refers to a sample selection which may not be representative of the population tested.
In other words, the sample itself may contain material errors or deviations. Moreover, a conclusion issued on the basis of a sample may be different from the conclusion which would be reached if the entire population were subject to audit. Types of risk associated with sampling: the risk of concluding that controls are more effective than they actually are, or that there are no significant errors when in fact there are (leading to an inappropriate audit opinion); and the risk of concluding that controls are less effective than they actually are, or that there are significant errors when in fact there are not (calling for additional work to establish that the initial conclusions were incorrect). Attributes testing: the auditor should define the characteristics to test and the conditions that constitute a deviation. Attributes testing is performed when objective statistical projections of various characteristics of the population are required. The auditor may decide to select items from a population based on his knowledge of the entity and its control environment, on risk analysis and on the specific characteristics of the population to be tested. The population is the mass of data about which the auditor wishes to generalize the findings obtained from a sample. The population must be defined in compliance with the audit objectives and must be complete and consistent, because the results from the sample can be projected only onto the population from which the sample was selected. Sampling unit: a sampling unit may be, for example, an invoice, an entry or a line item. Each sampling unit is an element of the population. The auditor will define the sampling unit in accordance with the objectives of the audit tests. Sample size: to determine the sample size, it should be considered whether sampling risk is reduced to an acceptably low level. Sample size is affected by the sampling risk the auditor is willing to accept: the lower the acceptable risk, the larger the sample must be. Error: for detailed testing, the auditor should project the monetary errors found in the sample onto the population, and should weigh the projected error against the specific objective of the audit and other audit areas. The auditor projects the total error onto the population to get a broad perspective on the size of the error, comparing it with the tolerable error. For detailed testing, the tolerable error will be a value less than or equal to the materiality used by the auditor for the individual classes of transactions or balances audited. If a class of transactions or account balances has been divided into layers, the error is projected separately for each layer. Projected errors and anomalous errors for each stratum are then combined when considering the possible effect on the total classes of transactions and account balances. Evaluation of sample results: the auditor should evaluate the sample results to determine whether the assessment of the relevant characteristics of the population is confirmed or needs to be revised. When testing controls, an unexpectedly high sample error rate may lead to an increase in the assessed risk of material misstatement, unless additional audit evidence supporting the initial assessment is obtained. For control tests, an error is a deviation from the performance of the prescribed control procedures.
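As a concrete illustration of attributes testing and the projection of sample deviations, the following Python sketch compares an approximate upper confidence bound on the deviation rate with a tolerable rate. The sample size, number of deviations, tolerable rate and confidence factor are all invented for the example, and the normal approximation used is crude for very small deviation counts.

    import math

    def attribute_sampling_evaluation(n, deviations, tolerable_rate, z=1.645):
        # Observed deviation rate in the sample.
        rate = deviations / n
        # One-sided upper bound via the normal approximation to the binomial
        # (z = 1.645 corresponds to roughly 95% one-sided confidence).
        upper = rate + z * math.sqrt(rate * (1 - rate) / n)
        return rate, upper, upper <= tolerable_rate

    rate, upper, ok = attribute_sampling_evaluation(n=120, deviations=4,
                                                    tolerable_rate=0.07)
    print(f"sample rate {rate:.1%}, upper bound {upper:.1%}, "
          f"{'accept control' if ok else 'revise risk assessment'}")

With these invented numbers the upper bound stays below the tolerable rate, so the preliminary control-risk assessment would stand; had the bound exceeded it, the assessment would have to be revised, as described above.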
The auditor should obtain evidence about the nature and extent of any significant changes in the internal control system, including changes in staffing. If significant changes occur, the auditor should review his understanding of the internal control environment and consider testing the changed controls. Alternatively, the auditor may consider performing substantive analytical procedures or tests of details covering the audit period. In some cases, the auditor might not need to wait until the end of the audit to form a conclusion about the operational effectiveness of controls in support of the control risk assessment; in that case, the auditor might decide to modify the planned substantive tests accordingly. In tests of details, an unexpectedly large amount of error in a sample may cause the auditor to believe that a class of transactions or account balances is materially misstated, in the absence of additional audit evidence showing that there are no material misstatements. When the best estimate of error is very close to the tolerable error, the auditor recognizes the risk that another sample could yield a different best estimate that might exceed the tolerable error.

Conclusions
Following this analysis of sampling methods, we conclude that all methods have advantages and disadvantages. What matters for the auditor in choosing the sampling method is that the choice is based on professional judgment and takes the cost/benefit ratio into account. Thus, if a sampling method proves to be costly, the auditor should seek the most efficient method in view of the main and specific objectives of the audit. The auditor should evaluate the sample results to determine whether the preliminary assessment of the relevant characteristics of the population is confirmed or must be revised. If the evaluation of sample results indicates that this assessment needs review, the auditor may: require management to investigate the identified errors and the likelihood of future errors and make the necessary adjustments; and change the nature, timing and extent of further procedures to take the effect on the audit report into account.

Selective bibliography:
[1] Law no. 672/2002, updated, on public internal audit
[2] Arens, A. and Loebbecke, J., "Audit: An Integrated Approach", 8th edition, Arc Publishing House
[3] ISA 530, Financial Audit 2008, International Standards on Auditing, IRECSON Publishing House, 2009; Dictionary of Macroeconomics, Ed. C.H. Beck, Bucharest, 2008

Abstract (Chinese translation rendered in English): The rapid increase in the size of U.S. companies from the early twentieth century created the need for audit procedures based on the selection of a part of the total audited population, in order to obtain reliable audit evidence characterizing the entire population of account balances or classes of transactions.
Enterprise Marketing Foreign Literature: Chinese Translation
Science and Technology Enterprises' Marketing Strategy

ABSTRACT
With the coming of the knowledge-based economy, high & new-tech enterprises play an increasingly strategic role in the national economy, and also contribute greatly to providing advanced products and services, promoting technical progress, enlarging employment and developing the national economy's competitive power. But while they succeed on the strength of advanced technology and hi-tech products, they usually put too much emphasis on technology advantages and accordingly neglect the research and application of marketing strategy and management, which causes the marketing myopia that results in passiveness and even defeat for the business. So how to exercise modern marketing theories, how to research and constitute the marketing strategy and policy of high & new-tech enterprises, and how to provide the necessary theoretical base and support for the marketing problems of high & new-tech enterprises, has practical significance and general application value for promoting the continuous, healthy and rapid development of high & new-tech enterprises.

KEYWORDS: high & new-tech enterprise, marketing strategy, technical marketing, innovation of marketing theories

First, the science and technology enterprise marketing strategy
Marketing strategy is the overall business marketing thinking and planning that an enterprise, under the guidance of the marketing concept and applying modern management methods, develops for a period of time. Marketing strategy consists of three different levels of content: target market, market positioning and marketing mix.
Foreign Literature and Chinese Translation for Graduation Thesis
Stores like the one in Shenzhen show how much has changed in Chinese retailing. Just two decades ago, shops had surly staff offering a few drab items, often locked safely away in glass cases. Yet there is still a long way to go. Even today, much of the population buys from daily markets or directly from producers. Organised retailing remains relatively new. Most Chinese stores are tiny, family-run outfits. China's top 100 chains account for just a tenth of total retail sales.
Foreign Literature Translation - Customer Satisfaction (with Original Text)
Foreign Literature Translation (with Original Text)

Translation I: Determinants of Online Shoppers' Satisfaction in Korea

Abstract: The purpose of this article is to identify the factors that may lead to customer satisfaction at Internet shopping malls across Korea. It is hypothesized that customers' positive perceptions of the usefulness of Internet shopping, its security, its technical competence, its customer support, and the mall interface positively affect customer satisfaction. It is further hypothesized that satisfied customers become loyal customers. The survey results confirm that customer satisfaction has a significant effect on customer loyalty, indicating that customers show high loyalty when they are satisfied with the service. We also found that online customers' perceptions of transaction security, customer support, online shopping, and the mall interface are positively related to customer satisfaction.

Conceptual Model

Online shoppers can easily sort the products within a mall by price or quality, and can compare the same product across different malls. Online shopping can also save time and reduce information search costs. Customers may therefore perceive that they can get better deals online with less time and effort. This characteristic of the innovative system has been defined as perceived usefulness. Several empirical studies have found that the perceived usefulness of an adopted innovation influences customer satisfaction. It is therefore hypothesized that the perceived usefulness of online shopping is positively related to satisfaction (H1).

The primary concern of online customers is the perceived insecurity of using credit cards on the Internet. Although authentication systems have improved markedly, customers' worries about transmitting sensitive information such as credit card numbers online will not be easily allayed. Privacy protection in the online environment is another concern: research shows that online customers worry that these online transactions may expose their private information to identity theft or misuse. It is therefore hypothesized that the security of online shopping positively affects customer satisfaction (H2).

Previous studies have shown that technical aspects of the system, such as network speed, error recovery capability, and system stability, are important factors in customer satisfaction. For example, Kim and Lim (2001) found that network speed is related to online shoppers' satisfaction. Dellaert and Kahn (1999) also reported that slow web browsing negatively affects evaluations of website content when it is not well managed by the provider. Daniel and Aladwani document that quick and accurate recovery from system errors, together with network speed, are important factors affecting the satisfaction of online banking users (H3).

Because of the impersonal nature of online transactions, quick responses to customer inquiries about products and other services are important to customer satisfaction.
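Hypotheses such as H1 through H3 are typically tested against survey data with a regression or structural equation model. As a hedged illustration only, the sketch below simulates Likert-scale survey responses and fits an ordinary least squares regression in Python; the variable names, sample size, and effect sizes are invented for the example and do not come from the study, which may well have used a different estimation method.

```python
# Hypothetical sketch of testing H1-H3 with an OLS regression on
# simulated survey data; not the original study's method or data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # hypothetical number of survey respondents

# Simulated 7-point Likert-style predictor scores
usefulness = rng.integers(1, 8, n)   # H1: perceived usefulness
security = rng.integers(1, 8, n)     # H2: perceived security
technical = rng.integers(1, 8, n)    # H3: technical competence

# Simulated satisfaction built from positive effects plus noise
satisfaction = (0.4 * usefulness + 0.3 * security + 0.2 * technical
                + rng.normal(0, 1, n))

X = sm.add_constant(np.column_stack([usefulness, security, technical]))
model = sm.OLS(satisfaction, X).fit()
print(model.summary(xname=["const", "usefulness", "security", "technical"]))

# A hypothesis is considered supported when its coefficient is
# positive and statistically significant (e.g. p < 0.05).
```

The satisfaction-to-loyalty link hypothesized in the abstract would require a second equation, which this sketch omits.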
Original text:

Gagne's Theory of Instruction
Michael Corry
Dr. Donald Cunningham
P540 - Spring 1996

Robert Gagne's theory of instruction has provided a great number of valuable ideas to instructional designers, trainers, and teachers. But is it really useful to everyone at all times? In this paper, I will assume the position of a teacher educator (something I have done formally for several years now) while examining the strengths and weaknesses of Gagne's theory of instruction. Driscoll (1994) breaks Gagne's theory into three major areas - the taxonomy of learning outcomes, the conditions of learning, and the events of instruction. I will focus on each of these three areas while briefly describing the theory of instruction. Once this brief introduction of the theory is completed, I will attempt to turn this theory "back upon itself" while examining the strengths and weaknesses of its various assumptions.

Gagne's Theory of Instruction

As previously explained, Gagne's theory of instruction is commonly broken into three areas. The first of these areas that I will discuss is the taxonomy of learning outcomes. Gagne's taxonomy of learning outcomes is somewhat similar to Bloom's taxonomies of cognitive, affective, and psychomotor outcomes (some of these taxonomies were proposed by Bloom, but actually completed by others). Both Bloom and Gagne believed that it was important to break down humans' learned capabilities into categories or domains. Gagne's taxonomy consists of five categories of learning outcomes - verbal information, intellectual skills, cognitive strategies, attitudes, and motor skills. Gagne, Briggs, and Wager (1992) explain that each of the categories leads to a different class of human performance.

Essential to Gagne's ideas of instruction are what he calls "conditions of learning." He breaks these down into internal and external conditions. The internal conditions deal with the previously learned capabilities of the learner - in other words, what the learner knows prior to the instruction. The external conditions deal with the stimuli (a purely behaviorist term) presented externally to the learner - for example, what instruction is provided to the learner.

To tie his theory of instruction together, Gagne formulated nine events of instruction. When followed, these events are intended to promote the transfer of knowledge or information from perception through the stages of memory. Gagne bases his events of instruction on the cognitive information processing theory of learning.

The way Gagne's theory is put into practice is as follows. First, the instructor determines the objectives of the instruction. These objectives must then be categorized into one of the five domains of learning outcomes. Each of the objectives must be stated in performance terms using one of the standard verbs (i.e. states, discriminates, classifies, etc.) associated with the particular learning outcome. The instructor then uses the conditions of learning for the particular learning outcome to determine the conditions necessary for learning. Finally, the events of instruction necessary to promote the internal process of learning are chosen and put into the lesson plan. The events in essence become the framework for the lesson plan, or steps of instruction.

Strengths and Weaknesses of the Theory and its Assumptions

As a teacher educator who has employed Gagne's theory in real life, I have some unique insights into the strengths and weaknesses of the theory and its assumptions.
I will again structure my comments following the three areas of the theory as described by Driscoll (1994), and will first examine the domains of learning outcomes. As a teacher, the domains of learning helped me to better organize my thoughts and the objectives of the instructional lesson. This proved very beneficial to me, because I was always looking for a good way to put more structure into the objectives of my lesson plans. Additionally, the domains of learning helped me to better understand what types of learning I was expecting to see from my students.

One of the greatest weaknesses that I experienced with Gagne's theory was taking the goals I had for my students, putting them into the correct learning outcome category, and then creating objectives using Gagne's standard verbs. I would like to break this problem into two parts. First, as I began to use the theory, it quickly became apparent that some goals were easy to classify into the learning outcome categories, but that many were not as easy to categorize. As a teacher, I spent a great deal of time reading and studying Gagne's categories in an attempt to better understand how certain goals fit into the different categories. This was good in the sense that it forced me to really understand what I wanted my students to do. On the other hand, it always caused me a great deal of uneasiness about whether or not I was fouling up the whole process by putting a goal into the wrong learning outcome category.

The second half of this weakness has to do with creating objectives using Gagne's standard verbs. After categorizing a goal into the proper learning outcome, I was faced with changing that goal into a performance objective using one of the standard verbs. This always bothered me as a teacher, because I felt that I couldn't always force my objectives into the form that the theory required. I do believe that writing down objectives is very important, but the standard verbs made the process so rigid that I felt I was merely filling in blanks. I always felt I had no creativity in writing the objectives - I felt pigeonholed. Along with this feeling came the fact that all objectives had to be written in performance terms. This also made me a little uneasy, because some of the overriding objectives I had for my students could not be expressed in performance terms. These objectives were more process oriented than product oriented, and it was always very difficult to put such processes into performance terms using the standard verbs.

As a teacher educator, I found that the conditions of learning proposed by Gagne were very beneficial. I saw them as guidelines to follow; I took them to be not algorithmic in nature but heuristic. They seemed to make logical sense, and in fact I think they helped me better structure my lesson plans and my teaching. Once again, however, even though I viewed the conditions as heuristics, I did feel that I was somewhat of a robot carrying out commands. I always felt as though I was being driven by the conditions.

This leads directly to a discussion of the events of instruction. I felt that the events of instruction helped me the most as a teacher. The events gave me the skeleton on which I could hang my lesson. They not only provided me with a road map to follow, but also a way to look at my lesson plans more holistically.
I was able to see how the parts of the lesson fit together to achieve the ultimate goal. This part of Gagne's theory seemed the least rigid to me, because you did not have to follow it as strictly as other parts of the theory. For example, Gagne explains that most lessons should follow the sequence of the events of instruction, but that the order is not absolute. While I appreciated that this was less rigid than other parts of the theory, I always had one important question: if the events of instruction follow the cognitive learning process, then why would it be advisable to change the sequence of the events or to leave events out? Wouldn't this have a great impact on the learning process? Would learning still take place?

This leads me to the learning theory upon which Gagne bases his instructional theory. As a teacher early in my career who was very enamored with computers, cognitive information processing theory seemed like a great explanation of the learning process (I am not sure I still feel the same way). However, those who do not understand or agree with cognitive information processing theory might not feel the same. For those people, I believe that Gagne's theory might not work very well.

Conclusion

In conclusion, I would like to summarize the points I have tried to cover in this paper. First of all, Gagne's theory does provide a great deal of valuable information to teachers like myself. I believe it is most appealing to teachers who are early in their careers and need structure for their lesson plans and a holistic view of their teaching. The theory is very systematic and rigid at most points; it is almost like a cookbook recipe to ensure successful teaching and, ultimately, learning by the students. However, the systematic nature of the theory may be a turn-off for many teachers, particularly those who like to be creative, dislike rigidity, and do not believe in a cookbook approach to ensuring learning.

An additional point is that the theory is not always easy to implement. I am sure I am not alone in feeling that it is often difficult to take the goals I have for my students, put them into the correct learning outcome category, and then create objectives using Gagne's standard verbs.

The final point I would like to cover deals with the learning theory upon which Gagne bases his theory. First, if the events of instruction really match the learning process, then I do not believe it would be advisable to change the sequence of the events or to leave certain events out of the sequence altogether. Second, cognitive information processing is not acceptable to all teachers; many would not agree with this account of how learning takes place. For those who disagree with cognitive information processing, Gagne's theory of instruction would not fit their needs.

Bibliography

Driscoll, M. P. (1994). Psychology of learning for instruction. Boston: Allyn and Bacon.

Gagne, R. M., Briggs, L. J., & Wager, W. W. (1992). Principles of instructional design. Fort Worth: Harcourt Brace Jovanovich.

Translation (translator: Ma Yu):

Gagne's Theory of Instruction
Michael Corry
Dr. Donald Cunningham
P540 - Spring 1996

Robert Gagne's theory of instruction has provided a great number of valuable ideas to instructional designers, trainers, and teachers.