A Comparison of Frameworks for Enterprise Architecture Modeling Presented at ER2003 Interna

合集下载

TOEFL真题

TOEFL真题

TOEFL真题为了让大家更好的预备托福考试,我给大家整理了托福真题练习,下面我就和大家共享,来观赏一下吧。

TOEFL真题1托福阅读文本:Although only 1 person in 20 in the Colonial period lived in a city, the cities had a disproportionate influence on the development of North America. They were at the cutting edge ofsocial change. It was in the cities that the elements that can be associated with modern capitalism first appeared — the use of money and commercial paper in place of barter, open competition in place of social deference and hierarchy, with an attendant rise in social disorder, and the appearance of factories using coat or water power in place of independent craftspeople working with hand tools. The cities predicted the future, wrote historian Gary. B. Nash, even though they were but overgrown villages compared to the great urban centers of Europe, the Middle East and China.Except for Boston, whose population stabilized at about 16,000 in 1760, cities grew by exponential leaps through the eighteenth century. In the fifteen years prior to the outbreak of the War for independence in 1775, more than 200,000 immigrants arrived on North American shores.This meant that a population the size of Boston was arriving every year, and most of it flowed into the port cities in the Northeast. Philadelphias population nearly doubted in those years, reaching about 30,000 in 1774, New York grew at almost the same rate, reaching about 25,000 by 1775.The quality of the hinterland dictated the pace of growth of the cities. The land surrounding Boston had always been poor farm country, and by the mid-eighteenth century it was virtually stripped of its timber. The available farmland was occupied, there was little in the region beyond the city to attract immigrants. New York and Philadelphia, by contrast, served a rich and fertile hinterland laced with navigable watercourses. Scots, Irish, and Germans landed in these cities and followed the rivers inland. The regions around the cities of New York and Philadelphia became the breadbaskets of North America, sending grain not only to other colonies but also to England and southern Europe, where crippling droughts in the late 1760s created a whole new market.托福阅读题目:1. Which of the following aspects of North America in the eighteenth century does the passagemainly discuss?(A) The effects of war on the growth of cities(B) The growth and influence of cities(C) The decline of farming in areas surrounding cities(D) The causes of immigration to cities2. Why does the author say that the cities had a disproportionate influence on the development ofNorthAmerica (lines 1-2)?(A) The influence of the cities was mostly negative(B) The populations of the cities were small, but their influence was great.(C) The cities were growing at a great rate.(D) Most people pretended to live in cities3. The phrase in place of in lines 4-5 is closest in meaning to(A) connected to(B) in addition to(C) because of(D) instead of4. The word attendant in line 6 is closest in meaning to(A) avoidable(B) accompanying(C) unwelcome(D) unexpected5. Which of the following is mentioned as an element of modern capitalism?(A) Open competition(B) Social deference(C) Social hierarchy(D) Independent craftspeople6. It can be inferred that in comparison with North American cities, cities in Europe, the MiddleEast, and China had(A) large populations(B) little independence(C) frequent social disorder(D) few power sources7. The phrase exponential leaps in line 12 is closest in meaning to(A) long wars(B) new laws(C) rapid increases(D) exciting changes8. The word it in line 15 refers to(A) population(B) size(C) Boston(D)Year9. How many immigrants arrived in NorthAmerica between 1760 and 1775?(A)About 16,000(B)About 25,000(C)About 30,000(D) More than 200,00010. The word dictated in line 18 is closest in meaning to(A) spoiled(B) reduced(C) determined(D) divided11. The word virtually in line 20 is closest in meaning to(A) usually(B) hardly(C) very quickly(D) almost completely12. The region surrounding New York and Philadelphia is contrasted with the region surroundingBoston in terms of(A) quality of farmland(B) origin of immigrants(C) opportunities for fishing(D) type of grain grown13. Why does the author describe the regions around the cities of New York and Philadelphia asbreadbaskets?(A) They produced grain especially for making bread.(B) They stored large quantities of grain during periods of drought(C) They supplied grain to other parts of North America and other countries.(D) They consumed more grain than all the other regions of NorthAmerica.托福阅读答案:BBDBAACADC DACTOEFL真题2托福阅读文本:Throughout the nineteenth century and into the twentieth, citizens of the United States maintained a bias against big cities. Most lived on farms and in small towns and believed cities to be centers of corruption, crime, poverty, and moral degradation. Their distrust was caused, in part,by a national ideology that proclaimed farming the greatest occupation and rural living superior to urban living. This attitude prevailed even as the number of urban dwellers increased and cities became an essential feature of the national landscape. Gradually, economic reality overcame ideology. Thousands abandoned the precarious life on the farm for more secure and better paying jobs in the city. But when these people migrated from the countryside, they carried their fears and suspicious with them. These new urbanities, already convinced that cities were overwhelmed with great problems, eagerly embraced the progressive reforms that promised to bring order out of the chaos of the city.One of many reforms came in the area of public utilities. Water and sewerage systems were usually operated by municipal governments, but the gas and electric networks were privately owned. Reformers feared that the privately owned utility companies would charge exorbitant rates for these essential services and deliver them only to people who could afford them. Some city and state governments responded by regulating the utility companies, but a number of cities began to supply these services themselves. Proponents of these reforms argued that public ownership and regulation would insure widespread access to these utilities and guarantee a fair price.While some reforms focused on government and public behavior, others looked at the cities as a whole. Civic leaders, convinced that physical environment influenced human behavior, argued that cities should develop master plans to guide their future growth and development. City planning was nothing new, but the rapidindustrialization and urban growth of the late nineteenth century took place without any consideration for order. Urban renewal in the twentieth century followed several courses. Some cities introduced plans to completely rebuild the city core. Most other cities contented themselves with zoning plans for regulating future growth. Certain parts of town were restricted to residential use, while others were set aside for industrial or commercial development.托福阅读题目:1. What does the passage mainly discuss?(A)A comparison of urban and rural life in the early twentieth century(B) The role of government in twentieth century urban renewal(C) Efforts to improve urban life in the early twentieth century(D) Methods of controlling urban growth in the twentieth century2. The word bias in line 2 is closest in meaning to(A) diagonal(B) slope(C) distortion(D) prejudice3. The first paragraph suggests that most people who lived in rural areas(A) were suspicious of their neighbors(B) were very proud of their lifestyle(C) believed city government had too much power(D) wanted to move to the cities4. In the early twentieth century, many rural dwellers migrated to the city in order to(A) participate in the urban reform movement(B) seek financial security(C) comply with a government ordinance(D) avoid crime and corruption5. The word embraced in line 11 is closest in meaning to(A) suggested(B) overestimated(C) demanded(D) welcomed6. What concern did reformers have about privately owned utility companies?(A) They feared the services would not be made available to all city dwellers.(B) They believed private ownership would slow economic growth(C) They did not trust the companies to obey the government regulations.(D) They wanted to ensure that the services would be provided to rural areas.7. The word exorbitant in line 16 is closest in meaning to(A) additional(B) expensive(C) various(D) modified8.All of the following were the direct result of public utility reforms EXCEPT(A) local governments determined the rates charged by private utility companies(B) some utility companies were owned and operated by local governments(C) the availability of services was regulated by local government(D) private utility companies were required to pay a fee to local governments9. The word Proponents in line 18 is closest in meaning to(A) Experts(B) Pioneers(C) Reviewers(D) Supporters10. Why does the author mention industrialization (line 24)?(A) To explain how fast urban growth led to poorly designed cities(B) To emphasize the economic importance of urban areas(C) To suggest that labor disputes had become an urban problem(D) To illustrate the need for construction of new factories托福阅读答案:CDBBDABDDATOEFL真题3托福阅读文本:The sculptural legacy that the new United States inherited from its colonial predecessors was far from a rich one, and in fact, in 1776 sculpture as an art form was still in the hands of artisans and craftspeople. Stone carvers engraved their motifs of skulls and crossbones and other religious icons of death into the gray slabs that we still see standing today in old burial grounds. Some skilled craftspeople made intricately carved wooden ornamentations for furniture or architectural decorations, whileothers caved wooden shop signs and ships figureheads. Although they often achieved expression and formal excellence in their generally primitive style, they remained artisans skilled in the craft of carving and constituted a group distinct from what we normally think of as sculptors in todays use of the word.On the rare occasion when a fine piece of sculpture was desired, Americans turned to foreign sculptors, as in the 1770s when the cities of New York and Charleston, South Carolina, commissioned the Englishman Joseph Wilton to make marble statues of William Pitt. Wilton also made a lead equestrian image of King George III that was created in New York in 1770 and torn down by zealous patriots six years later.A few marble memorials with carved busts, urns, or other decorations were produced in England and brought to the colonies to be set in the walls of churches — as in Kings Chapel in Boston. But sculpture as a high art, practiced by artists who knew both the artistic theory of theirRenaissance-Baroque-Rococo predecessors and the various technical procedures of modeling, casting, and carving rich three-dimensional forms, was not known among Americans in 1776. Indeed, for many years thereafter, the United States had two groups from which to choose — either the local craftspeople or the imported talent of European sculptors.The eighteenth century was not one in which powered sculptural conceptions were developed.Add to this the timidity with which unschooled artisans — originally trained as stonemasons, carpenters, or cabinetmakers — attacked the medium from which they sculpture made in the United States in the late eighteenth century.托福阅读题目:1. What is the main idea of the passage ?(A) There was great demand for the work of eighteenth-century artisans.(B) Skilled sculptors did not exist in the United States in the 1770s.(C) Many foreign sculptors worked in the United States after 1776.(D)American sculptors were hampered by a lack of tools and materials.2. The word motifs in line 3 is closest in meaning to(A) tools(B) prints(C) signatures(D) designs3. The work of which of the following could be seen in burial grounds?(A) European sculptors(B) Carpenters(C) Stone carves(D) Cabinetmakers4. The word others in line 6 refers to(A) craftspeople(B) decorations(C) ornamentations(D) shop signs5. The word distinct in line 9 is closest in meaning to(A) separate(B) assembled(C) notable(D) inferior6. The word rare in line 11 is closest in meaning to(A) festive(B) infrequent(C) delightful(D) unexpected7. Why does the author mention Joseph Wilton in line 13?(A) He was an English sculptor who did work in the United States.(B) He was well known for his wood carvings(C) He produced sculpture for churches.(D) He settled in the United States in 1776.8. What can be inferred about the importation of marble memorials from England?(A) Such sculpture was less expensive to produce locally than to import(B) Such sculpture was not available in the United States.(C) Such sculpture was as prestigious as those made locally.(D) The materials found abroad were superior.9. How did the work of American carvers in 1776 differ from that of contemporary sculptors?(A) It was less time-consuming(B) It was more dangerous.(C) It was more expensive.(D) It was less refined.托福阅读答案:BDCAABABDTOEFL真题4托福阅读文本:In seventeenth-century colonial North America, all day-to-day cooking was done in the fireplace. Generally large, fireplaces were planned for cooking as well as for warmth. Those in the Northeast were usually four or five feet high, and in the South, they were often high enough for a person to walk into. A heavy timber called the mantel tree was used as a lintel to support the stonework above the fireplace opening. This timber might be scorched occasionally, but it was far enough in front of the rising column of heat to be safe from catching fire.Two ledges were built across from each other on the inside of the chimney. On these rested the ends of a lug pole from which pots were suspended when cooking. Wood from a freshly cut tree was used for the lug pole, so it would resist heat, but it had to be replaced frequently because it dried out and charred, and was thus weakened. Sometimes the pole broke and the dinner fell into the fire. When iron became easier to obtain, it was used instead of wood for lug poles, and later fireplaces had pivoting metal rods to hang pots from.Beside the fireplace and built as part of it was the oven. It was made like a small, secondary fireplace with a flue leading into the main chimney to draw out smoke. Sometimes the door of the oven faced the room, but most ovens were built with the opening facing into the fireplace. On baking days (usually once or twice a week) a roaring fire of oven wood, consisting of brown maple sticks, was maintained in the oven until its walls were extremely hot. The embers were later removed, bread dough was put into the oven, and the oven was sealed shut until the bread was fully baked.Not all baking was done in a big oven, however. Also used was an iron bake kettle, which looked like a stewpot on legs and which had an iron lid. This is said to have worked well when it was placed in the fireplace, surrounded by glowing wood embers, with more embers piled on its lid.托福阅读题目:1. Which of the following aspects of domestic life in colonial North America does the passagemainly discuss?(A) methods of baking bread(B) fireplace cooking(C) the use of iron kettles in a typical kitchen(D) the types of wood used in preparing meals2. The author mentions the fireplaces built in the South to illustrate(A) how the materials used were similar to the materials used in northeastern fireplaces(B) that they served diverse functions(C) that they were usually larger than northeastern fireplaces(D) how they were safer than northeastern fireplaces3. The word scorched in line 6 is closest in meaning to(A) burned(B) cut(C) enlarged(D) bent4. The word it in line 6 refers to(A) the stonework(B) the fireplace opening(C) the mantel tree(D) the rising column of heat5.According to the passage , how was food usually cooked in a pot in the seventeenth century?(A) By placing the pot directly into the fire(B) By putting the pot in the oven(C) By filling the pot with hot water(D) By hanging the pot on a pole over the fire6. The word obtain in line 12 is closest in meaning to(A) maintain(B) reinforce(C) manufacture(D) acquire7. Which of the following is mentioned in paragraph 2 as adisadvantage of using a wooden lugpole?(A) It was made of wood not readily available.(B) It was difficult to move or rotate.(C) It occasionally broke.(D) It became too hot to touch.8. It can be inferred from paragraph 3 that, compared to other firewood, oven wood produced(A) less smoke(B) more heat(C) fewer embers(D) lower flames9.According to paragraph 3, all of the following were true of a colonial oven EXCEPT:(A) It was used to heat the kitchen every day.(B) It was built as part of the main fireplace.(C) The smoke it generated went out through the main chimney.(D) It was heated with maple sticks.10.According to the passage , which of the following was an advantage of a bake kettle?(A) It did not take up a lot of space in the fireplace.(B) It did not need to be tightly closed.(C) It could be used in addition to or instead of the oven.(D) It could be used to cook several foods at one time.托福阅读答案:BCACD DCBAAB。

A comparison of the entanglement measures negativity and concurrence

A comparison of the entanglement measures negativity and concurrence

The transformation rule is: C (ρ′ ) = C (ρ) (2)
It was furthermore shown that for each density matrix ρ there exists an A and B such that ρ′ is Bell diagonal. The concurrence of a Bell diagonal state is only dependent on its largest eigenvalue λ1 [4]: C (ρBD ) = 2λ1 (ρBD ) − 1. It is then straightforward to obtain the parameterization of the surface of constant concurrence (and hence constant entanglement of formation): it consists of applying all complex full rank 2×2 matrices A and B on all Bell diagonal states with the given concurrence, under the constraint that Tr B†B A† A ⊗ | det(A)| | det B | ρ = 1.
It is clear that we can restrict ourselves to matrices A and B having determinant 1 (A, B ∈ SL(2, C )), as will be done in the sequel. The extremal values of the negativity can now be obtained in two steps: first find the state with extremal negativity for given eigenvalues of the corresponding Bell diagonal state by varying A and B , and then do an optimization over all Bell diagonal states with equal λ1 . The first step can be done by differentiating the following cost function over the manifold of A, B ∈ SL(2, C ):

翻译——A FRAMEWORK FOR EVALUATING THIRD-PARTY LOGISTICS

翻译——A FRAMEWORK FOR EVALUATING THIRD-PARTY LOGISTICS

A FRAMEWORK FOREV ALUATING THIRD-PARTYLOGISTICS3PL providers with advanced IT are expected to lower logistics costs and integrate the supply chain with increased productivity and growth. Here, a set of criteria for choosing the most suitable provider.In recent years, companies have increasingly embraced one-stop global logistics services. By allowing companies to concentrate on their core competencies, these third-party logistics (3PL) providers can improve customer service and reduce costs.A 3PL provider can act as a lead logistics provider or a fourth-party logistics (4PL) provider aligned with a host of 3PL providers. This article explores the major considerations in searching for a 3PL provider to expedite the movement of goods and information. With the help of established theories in the literature, we use an evaluation criteria framework built around IT to examine a 3PL provider.Five streams of literature relate to logistics provider models [9]: strategic decision making in organizations, industrial buying behavior, transportation purchasing, supplier selection, and logistics relationships. Among these topics, supplier selection, or how to evaluate 3PL providers and form strategic alliances with them, has been inadequately addressed in the current literature. Strategic alliances allow companies to reduce conflict, reciprocate regarding mutual goal-related matters, increase efficiency and stability, and establish marketplace legitimacy [3]. Logistics managers consider perceived performance, perceived capability, and responsiveness as important factors in selecting logistics providers [5]. In general, it appears that market and firm characteristics influence the choice of logistics providers [10], and managers achieve customer service improvement and cost reduction by outsourcing logistics services [8].One study applied transaction cost economics to logistics provider selection to explore the conditions under which logistics functions are separated [1]. About 60% of the Fortune 500 companies surveyed reported having at least one logistics provider contract [7]. A conceptual model of the logistics provider buying process has been presented [9] in five steps, in which companies identify the need to outsource logistics, develop feasible alternatives, evaluate and select a supplier, implement service, and engage in ongoing service assessment.A major shortcoming of the 3PL literature is the lack of consideration of IT as a primary component of logistics-providing solutions. The integration of IT with the logistics providers and their customers—known as Inter-organizational Systems (IOS)—essentially supports the outsourcing of logistics activities [6]. IT is a critical factor for 3PL performance since the logistics provider must integrate systems with its clients. IT links members of a supply chain, such as manufacturers, distributors, transportation firms, and retailers, as it automates some element of the logistics workload, such as order processing, order status inquiries, inventory management, or shipment tracking.Framework of 3PL Functions3PL services can be relatively limited or comprise a fully integrated set of logistics activities. Two surveys [8, 9] identified the following as significant outsourcing functions:•Transportation•Warehousing•Freight consolidation and distribution•Product marking, labeling, and packaging•Inventory management•Traffic management and fleet operations•Freight payments and auditing•Cross docking•Product returns•Order management•Packaging•Carrier selection•Rate negotiation•Logistics information systemsThese functions can be divided into four categories, as shown in Figure 1: warehousing, transportation, customer service, and inventory and logistics management. Significant IT improvements are leading to lower transaction costs and allowing all supply chain participants to manage increased complexity [6]. The information and material flow among the four categories have been theorized [8] to validate the interrelationships between transportation and customer service. Material flow occurs as a result of integration of transportation and distribution systems, andinformation flow is essential to integrate the four categories.Figure 1. Categorization of logistics functions.To implement 3PL, real-time information flow is essential. A framework of 3PL provider functions and evaluation criteria can be derived that revolves around the information flow that affects the 3PL provider functions, as illustrated in Figure 2. First, material is transported to distributed-warehousing facilities. Then, using efficient inventory management and logistics techniques, global warehouses are fulfilled according to customized, dynamic allocation levels. The material is distributed either by 3PL or 4PL global transportation freight carriers, and global customer services including reverse logistics are provided. Here, I detail descriptions of the four categories of outsourced functions, and discuss global information flow.Figure 2. A framework of 3PL provider functions and evaluation criteria.Global warehousing. Customers are demanding just-in-time delivery of material and warehousing. The warehousing component necessitates the strategic placement of global mini-distribution centers. Companies need an efficient end-to-end supply chain, and a single point of failure in warehousing can create disaster in order fulfillment. 3PL providers are ramping up their warehouses by investing in new fulfillment equipment and advanced technologies. Warehousing functions include receiving, sort and direct put-away, wave management, merge and pack-out, manifest documents, label or bar code printing, kitting, and pick/pack activities. Many companies, including Nabisco and International Paper, have outsourced their warehousing operations to concentrate on their core competencies.Global transportation. This function must be completed by a freight carrier who can move any-sized units by land, sea, rail, river, and air in a timely manner. A partnership effort between the customer and a 3PL provider may be extended to a 4PL provider, but 4PL providers must work with 3PL providers to bring synergy to the information flow and to realize cost savings. Many companies, including Ford, Honeywell, National Semiconductor, and Cisco, have outsourced transportation operations.Global customer services. 3PL providers offer a wide range of customer services including warranty parts recovery, financial services, automating letters of credit (LOC), auditing, order management, fulfillment, carrier selection, rate negotiation, international trade management, and help desk or call center activities. In addition, with the increased returns generated by e-business, 3PL providers are playing a lead role in developing and executing reverse logistics. Many companies, including Nike, Scovill, Oneida, and Cisco, have outsourced customer services.Global inventory management and logistics .This function includes global inventory visibility, backorder capability and fulfillment, order-entry management, forecasting, cycle count and auditing, shipment management, rotable pool planning, and customs documentation. A planning solution system focusing on the unique complexities of company and customer needs is essential for inventory management and logistics. The system must optimize inventory based on service contracts and required response times, and it must have product-based forecasting capabilities utilizing product life curves. The inventory management system should also optimize placement of warehouses and stocking locations, and automate replenishment of parts.Companies such as Rolls Royce, National Semiconductor, and IBM have outsourced their inventory management and logistics operations to concentrate on their core competencies.Some may think logistics functions can be achieved by a supply chain management (SCM) solution, but many differences exist between service logistics and SCM, as illustrated in Table 1. A major difference is that a penalty for breech of service level agreement (SLA) usually enhances the performance of 3PL providers. Therefore, 3PL providers with SCM expertise and global trade expertise are much needed to provide strategic options and innovative solutions in the areas of logistics, inventory control, demand management to meet optimum allocation levels, multidirectional global transportation, and warehousing. Firms will gain competitive advantage if they fully understand the implications of SCM and tailor programs for customers. As e-commerce grows globally, the financial benefits of supply chain logistics leadership can be exponential.Global information flow. Information flow significantly enhances unit movement, as it helps determine how and when to move units most efficiently. 3PL providers are offering advanced IT and broader global coverage, enabling manufacturing and service industries to concentrate on their core competencies. Companies need a state-of-the-art 3PL provider with a wealth of IT deployment experience to achieve optimal information flow.Figure 3. Evaluation process of 3PLIT revolves around four major players: the 3PL customer, the customer’s clients, the customer’s suppliers and alliances, and the 3PL provider itself. Information flow begins with the 3PL customer. That information is analyzed by the 3PL provider, which dynamically changes the allocation levels at the appropriate warehouse locations globally. The analysis programs typically include software for dynamic material allocation, inventory control, supply chain management, logistics, transportation management, as well as intelligent decision-making algorithms. Each transaction is recorded in the customer system via electronic data interchange (EDI), among other methods. Many companies, including Cisco, Nike, and Ford, have outsourced IT services.Table 1. Differences between 3PL and SCMA 3PL EvaluationFigure 3 describes a 3PL evaluation process, which includes a preliminary screening based on qualitative factors such as reputation. Depending on qualitative and feasibility factors, a short list of 3PL providers is obtained. An evaluation criterion is sent to the short-listed 3PL providers. After receiving the completed evaluation list, the prospective providers are interviewed. After the desired features and criteria are compared and analyzed, a 3PL provider is selected. This process has been tested in a Fortune 100 company and yielded good results. The basic process, as follows, was obtained from previous research [9].Gathering 3PL information. A list of 3PL providers can be obtained from professional organizations. Google and Yahoo searches reveal about 430 logistics providers, of which roughly 75% are U.S.-based. Websites such as , , , and purchasingresearchservice. com offer informal organizational information.Table 2. Comparative functions of 3PL providersCompiling the short list. This preliminary screening eliminates 3PL providers that do not provide the overall functions listed in Table 2. This table also illustrates framework features of few logistics providers, obtained from provider Web sites. Current suppliers of traditional transportation and distribution services, and outside consultants with logistics expertise can help compile the short list. Most companies usually consider six to eight potential suppliers, and evaluate two or three finalists.Evaluation criteria . To evaluate prospective provider, a set of criteria must be defined. These evaluation criteria typically include quality, cost, capacity, delivery capability, and financial stability. In addition, cultural compatibility, customer references, financial strength, operating and pricing flexibility, and IT capabilities play predominant roles [9]. Performance metrics that must be part of the evaluation criteria [5] include shipment and delivery times, error rates, and responsiveness to unexpected events. The following set of factors can be used to evaluate a 3PL provider [5, 9]:•IT•Quality•Cost•Services•Performance metrics•IntangiblesUsing the six factors against the framework we created for 3PL provider evaluation, we derived the criteria shown in Table 3.Final 3PL selection . An evaluation criteria sheet as part of a formal request for proposal (RFP) is usually sent to the prospective short list finalists. This proposal initiates the process whereby the client and the 3PL provider enter into negotiations, not only regarding price, but also skill, culture, and commitment matching. RFP preparation is important because it forms the basis upon which the 3PL provider formulates its assessment of client needs, the resources needed to serve those needs and, finally, the cost of its services. A clear explanation of needs and requirements should be included in the RFP. In addition, a clear and concise statement of the tasks involved and the measurements against which success will be judged must also be included.Once the evaluation sheets are received, prospective 3PL providers are interviewed. In this final face-to-face interview between the 3PL customer and the prospective 3PL provider, each party must clearly understand project details, goals, and expectations. During this step, problem resolution procedures are established, and incentives to assure continued process improvement are defined. A cultural match between a 3PL provider and the client is also established. A 3PL provider will likely have personnel operating at the client site and cultures must mesh for success. Based on the interviews, RFP responses, and a functional comparison, a 3PL can be selected.ConclusionThe evaluation criteria framework presented in this article can help IT management evaluate outsourcing logistics services. The conceptual framework using IT as the focus peruses the core functionalities of 3PL providers such as inventory management, logistics, transportation, warehousing, and customer services. Using this framework and the factors essential to quantify outsourcing, we have established a set of criteria for 3PL provider selection. A careful consideration of this framework and the use of IT in logistics and supply chain management can provide insights to logistics managers, procurement managers, IT managers, and academicians. Thecontinued presence of 3PL providers with advanced IT will lead to lower logistics costs and integrate all aspects of the supply chain with increased productivity and growth.【c】References1. Aertsen, F. Contracting out the physical distribution function: A tradeoff between asset specificity and performance measurement. International Journal of Physical Distribution and Logistics Management 23, 1 (1993), 23–29.2. Bakos, J.Y. Information links and electronic marketplaces. The role of inter-organizational information systems in vertical markets. Journal of Management Information Systems (Fall 1991), 31–52.3. Cooper, M.C. and Gardner, J.T. Building good business relationships: More than partnering or strategic alliances? International Journal of Physical Distribution and Logistics Management 23, 6 (1993), 14–26.4. Gurbaxani, V. and Whang, S. The Impact of information systems on organizations and markets. Commun. ACM 34, 1 (Jan. 1991), 59–73.5. Menon, M.K. et al. Selection criteria for providers of third-party logistics: An exploratory study. Journal of Business Logistics 19 , 1 (1998), 121–136.6. Lewis, I. and Talalayevsky, A. Third-party logistics: Leveraging information technology. Journal of Business Logistics 21 (2000), 173–185.7. Lieb, R.C. and Randall, H.L. A comparison of the use of third-party logistics services by large American manufacturers, 1991, 1994 and 1995. Journal of Business Logistics 17 , 1 (1996), 305–320.8. Rabinovich, E., et al. Outsourcing of integrated logistics functions. International Journal of Physical Distribution and Logistics Management 29, 6 (1999), 353–373.9. Sink, H.L. and Langley, C.J. A managerial framework for the acquisition of third-party logistics services. Journal of Business Logistics 19 , 1 (1997), 121–136.10. V an Damme, D.A and Van Amstel, M.J. Outsourcing logistics management activities. The International Journal of Logistics Management 7 , 2 (1996), 8第三方物流评价框架拥有先进的IT系统的第三方物流供应商,有望降低物流成本,整合供应链,从而提高生产力和促使经济增长。

Introduction to Marketing (市场营销概论 英文版)8 Pricing

Introduction to Marketing (市场营销概论  英文版)8 Pricing

• 8.6 pricing strategy
Formulate pricing strategies reflects its willingness to adapt and modify price according to the needs of customers and market conditions指定价格策略 反映了公司想要根据顾客的需求和 市场的条件来适应和修改价格的意 愿
• 8.6.1 discounting
1 customer buy products in large quantities 顾客买的产品数量多
2 stocks are high, perhaps owing to an overall reduction in demand货存大,可能 因为整体需求的下降
• 2 demand estimation需求的估计
Predicted demand levels at differing prices预计在不同的价格情况下需求 的水平
3 Anticipating competitor reaction预计竞争对 手的反应
When products are easily imitated and markets are easy to enter, the price of competitive products assumes major importance当产品很 容易被模仿,市场很容易进入,竞争的产品 的价格就很重要
• 8.5 price selection techniques价格选 择的方法
1 demand
2 cost
3 other elements of the marketing mix as well as aspects of consumer behaviour

第二语言习得 考试复习题

第二语言习得 考试复习题

第二语言习得期中考试复习题1. acquisition& learning➢The term “acquisition” is used to refer to picking up a second language through exposure, whereas the term “learning” is used to refer to the conscious study of a second language. Now most of the researchers use them interchangeably, irrespective of whether conscious or unconscious processes are involved2. incidental learning & intentional learning➢While reading for pleasure a reader does not bother to look up a new word in a dictionary, but a few pages later realizes what that word means, then incidental learning is said to have taken place.➢If a student is instructed to read a text and find out the meanings of unknown words, then it becomes an intentional learning activity. ngauage➢Language is a system of arbitrary vocal symbols used for human communication .That is to say , language is systematic (rule-governed ), symbolic and social.nguage Acquisition Device➢The capacity to acquire one’s FIRST LANGUAGE , when this capacity is pictured as a sort of mech anism or apparatus.5.Contrastive analysis❖Under the influence of behaviorism, researchers of language teaching developed the method of contrastive analysis (CA) to study learner errors. Its original aim is to serve foreign language teaching.6.Error analysis❖Error analysis aims to 1) find out how well the learner knows a second language, 2) find out how the learner learns a second language, 3) obtain information on common difficulties in second language learning, and to 4) serve as an aid in teaching or in the preparation and compilation of teaching materials (Corder, 1981).It is a methodology of describing Second Language Learners’ language system s.7.interlanguage❖It refers to the language that the L2 learner produced .❖The language produced by the learner is a system in its own right.❖The language is a dynamic system, evolving over time.8.Krashen and His Monitor Model❖ 1. The Acquisition-Learning Hypothesis❖ 2. The Monitor Hypothesis❖ 3. The Natural Hypothesis❖ 4. The Input Hypothesis❖ 5. The Affective Filter Hypothsis9. input hypothesis❖Its claims : The learner improves and progresses along the “natural order” when s/he receives second language “input” that is one step beyond his or her current stage of linguistic competence. For example, if a learner is at a stage “i”, then acquisition takes place when s/he is exposed to “Comprehensible Input” that belongs to level “i+1”.10. affective filter hypothesis❖The hypothesis is based on the theory of an affective filter, which states that successful second language acquisition depends on the learner’s feelings. Negative attitudes (including a lack of motivation or self-confidence and anxiety) are said to act as a filter, preventing the learner from making use of INPUT, and thus hindering success in language learning.11.Shumann’s Acculturation Model❖This model of second language acquisition was formulated by John.H.Schumann(1978), and applies to the natural context of second language acquisition where a second language is acquired without any instruction in the environment. Schumann defines acculturation as the process of becoming adapted to a new culture or rather , the social and psychological integration of the learner with the target language group.12.Universal Grammar⏹The language faculty built into the human mind consisting of principles and parameters.⏹This is the universal grammar theory associated with Noam Chomsky.⏹Universal Grammar sees the knowledge of grammar in the mind as having two components: “principles"that all languages have incommon and “parameters” on which they vary.13.M acLaughlin’s Information processing model☐SLA is the acquisition of a complex cognitive skill that must progress from controlled processing to automatic processing.14.Anderson’s ACT☐This is another general theory of cognitive learning that has been applied to SLA☐Also emphasizes the automatization process.☐Conceptualizing three types of memory:1. Working memory2. Declarative long term memory3. Procedural long-term memory15.fossilization☐It refers to the phenomenon in which second language learners often stop learning even though they might be far short of native-like competence. The term is also used for specific linguistic structures that remain incorrect for lengthy periods of time in spite of plentiful input.munication strategies⏹Communication strategies, known as CSs, consist of attempts to deal with problems of communication that have arisen in interaction.They are characterized by the negotiation of an agreement on meaning between the two parties.1.What it is that needs to be learnt in language acquisition?➢Phonetics and Phonology➢Syntax➢Morphology➢Semantics➢Pragmatics2.How experts study the children’s acquisition➢Observe young children’s learning to talk.➢Record the speech of their children➢Create a database➢Have a single hypothesis3.What are learning strategies? Give examples ?➢Intentional behaviour and thoughts that learners make use of during learning in order to better help them understand, learn or remember new information .➢Learning strategies are classified into :1. meta-cognitive strategies2. cognitive strategies3. socio-affective strategies4.What are the factors influencing the success of SLA ?●Cognitive factors :1. Intelligence2. Language aptitudenguage learning strategies●Affective factors:nguage attitudes2.Motivation5.What are the differences between the Behaviorist learning model and that of Mentalist?➢Behaviorist learning model claims that children acquired the L1 by trying to imitate utterances produced by people around them and by receiving negative or positive reinforcement of their attempts to do so. Language acquisition, therefore, was considered to be environmentally determined.6.What are the beneficial views obtained from the studies on children’s L1 acquisition?1. Children’s language acquisition goes through several stages2. These stages are very similar across children for a given language, although the rate at which individual children progress through them ishighly variable;3. These stages are similar across languages;4. Child language is rule-governed and systematic, and the rules created by the child do not necessarily correspond to adult ones;5. Children are resistant to correction;6. Children’s mental capacity limits the n umber of rules they can apply at any one time, and they will revert to earlier hypotheses when two ormore rules compete.7.What are the differences of error analysis from contrastive analysisContrastive analysis stresses the interfering effects of a first language on second language learning and claims that most errors come from interference of the first language. (Corder ,1967). However, such a narrow view of interference ignores the intralingual effects of language learning among other factors. Error an alysis is the method to deal with intralingual factors in learners’ language (Corder, 1981).it is a methodology of describing Second Language Learners’ language systems .Error analysis is a type of bilingual comparison, a comparison between learners’ inte rlanguage and a target language, while contrastive analysis between languages. (native language and target language)8.What are UG principles and parameters?➢The universal principle is the principle of structure-dependency, which states that language is organized in such a way that it crucially depends on the structural relationships between elements in a sentence.➢Parameters are prnciples that differ in the way they work or function from language to language. That is to say there are certain linguistic features that vary across languages.9.What role does UG play in SLA?➢Three possibilities :1. UG operates in the same way for L2 as it does for L1.2. The learner’s Core grammar is fixed and UG is no longer available to the L2 learner, particularly not to th e adult learner.3. UG is partly available but it is only one factor in the acquisition of L2. There are other factors and they may interfere with the UGinfluence.10.What are classifications of communication strategies?Faerch and Kasper characterizes CSs in the light of learners’ attempts at governing two different behaviors and their taxonomies are achievement and reduction strategies , and they are based on the psycholinguistics.➢Achievement Strategies:⏹Paraphrase⏹Approximation⏹Word coinage⏹Circumlocution⏹Conscious Transfer⏹Literal translation⏹Language switch (borrowing)⏹Mime⏹Use body language and gestures to make communication open⏹Appeal for assistance➢Reduction Strategies⏹Message abandonment(topic shift):Ask a student to answer the question :How old are you ? She must utter two orthree sentences to answer the question, but she mustn’t tell her age.⏹Topic avoidance(Silence)。

A comparison of linear and hypertext formats in information retrieval

A comparison of linear and hypertext formats in information retrieval

A C O M P A R IS O N O F L I N E A R A N D H Y P E R T E X T F O R M A T S I N IN F O R M A T I O N R E T R I E V A L. Cliff McKnight, Andrew Dillon and John RichardsonHUSAT Research Centre, Department of Human Sciences, University of Loughborough.This item is not the definitive copy. Please use the following citation when referencing this material: McKnight, C., Dillon, A. and Richardson, J. (1990) A comparison of linear and hypertext formats in information retrieval. In: R. McAleese and C. Green, HYPERTEXT: State of the Art, Oxford: Intellect, 10-19.AbstractAn exploratory study is described in which the same text was presented to subjects in one of four formats, of which two were hypertext (TIES and Hypercard) and two were linear (Word Processor and paper). Subjects were required to use the text to answer 12 questions. Measurement was made of their time and accuracy and their movement through the document was recorded, in addition to a variety of subjective data being collected. Although there was no significant difference between conditions for task completion time, subjects performed more accurately with linear formats. The implications of these findings and the other data collected are discussed.Introduction.It has been suggested that the introduction of hypertexts could lead to improved access and (human) processing of information across a broad range of situations (Kreitzberg and Shneiderman, 1988). However, the term ‘hypertext’ has been used as though it was a unitary concept when, in fact, major differences exist between the various implementations which are currently available and some (Apple’s Hypercard for example) are powerful enough to allow the construction of a range of different applications. In addition these views tend to disregard the fact that written texts have evolved over several hundred years to support a range of task requirements in a variety of formats. This technical evolution has been accompanied by a comparable evolution in the skills that readers have in terms of rapidly scanning, searching and manipulating paper texts (McKnight et al., 1990).The recent introduction of cheap, commercial hypertext systems has been made possible by the widespread availability of powerful microcomputers. However, the recency of this development means that there is little evidence about the effectiveness of hypertexts and few guidelines for successful implementations.Although a small number of studies have been carried out, their findings have typically been contradictory (cf. Weldon et al.,1985, and Gordon et al., 1988). This outcome is predictable if allowances are not made for the range of text types (e.g., on-linehelp, technical documentation, tutorial packages) and reading purposes (e.g., familiarisation, searching for specific items, reading for understanding). Some text types would appear to receive no advantage from electronic implementation let alone hypertext treatment at the intra-document level (poetry or fiction, for example, where the task is straightforward reading rather than study or analysis of the text per se). Thus there appears to be justification for suggesting that some hypertext packages may be appropriate for some document types and not others. A discussion of text types can be found in Dillon and McKnight (1989).Marchionini and Shneiderman (1988) differentiate between the procedural and often iterative types of information retrieval undertaken by experts on behalf of end users and the more informal methods employed by end users themselves. They suggest that hypertext systems may be well suited to end users because they encourage “informal, personalized, content-oriented information seeking strategies” (p.71).The current exploratory study was designed to evaluate a number of document formats using a task that would tend to elicit ‘content-oriented information seeking strategies’. The study was also undertaken to evaluate a methodology and indicate specific questions to be investigated in future experiments.MethodSubjects16 subjects participated in the study, 9 male and 7 female, age range 21–36. All were members of HUSAT staff and all had experience of using a variety of computer systems and applications.MaterialsThe text used was “Introduction to Wines” by Elizabeth A. Buie and W. Edgar Hassell, a basic guide to the history, production and appreciation of wine. This hypertext was widely distributed by Ben Shneiderman as a demonstration of the TIES (The Interactive Encyclopedia System) package prior to its marketing as HyperTIES.This text was adapted for use in other formats by the present authors. In the TIES version, each topic is held as a separate file, resulting in 40 individual small files. For the Hypercard version, a topic card was created for each corresponding TIES file. For the word processor version, the text was arranged in an order which seemed sensible to the authors starting with the TIES ‘Introduction’ text and grouping the various topics under more general headings. A pilot test confirmed that the final version was generally consistent in structure with the present sample’s likely ordering.The Hypercard and word processor versions were displayed on a monochrome Macintosh II screen and the TIES version was displayed on an IBM PC colour screen. The paper version was a print-out of the word processor text.TaskSubjects were required to use the text to answer a set of 12 questions. These were specially developed by the authors to ensure that a range of information retrieval strategies were employed to answer them and that the questions did not unduly favour any one medium (e.g., one with a search facility).DesignA four-condition, independent subjects design was employed with presentation format (Hypercard, TIES, Paper and Word Processor) as the independent variable. The dependent variables were speed, accuracy, access strategy and subjects’ estimate of document size.ProcedureSubjects were tested individually. One experimenter described the nature of the investigation and introduced the subject to the text and system. Any questions the subject had were answered before a three minute familiarisation period commenced, during which the subject was encouraged to browse through the text. After three minutes the subjects were asked several questions pertaining to estimated document size and range of contents viewed. They were then given the question set and asked to attempt all questions in the presented order. Subjects were encouraged to verbalise their thoughts and a small tie-pin microphone was used to record their comments. Movement through the text was captured by video camera.ResultsEstimates of document sizeAfter familiarisation with the text, subjects were asked to estimate the size of the document in pages or screens. The linear formats contained 13 pages, the Hypercard version contained 53 cards, and the TIES version contained 78 screens. Therefore raw scores were converted to percentages. The responses are presented in Table 1 (where a correct response is 100).Condition TIES Paper HyperCard Word ProcessorSubject1 641.03 76.92 150.94 92.312 58.97 92.31 56.6 76.923 51.28 76.92 465.17 100.04 153.84 153.85 75.47 93.21Mean 226.28 100.0 187.05 90.61 SD 280.41 36.63 189.84 9.75 Table 1: Subjects’ estimates of document size.Subjects in the linear format conditions estimated the size of the document reasonably accurately. However, subjects who read the hypertexts were less accurate, several of them over-estimating the size by a very high margin. While a one-way ANOVA revealed no significant effect (F[3,12] = 0.61, NS) these data are interesting and suggest that subjective assessment of text size as a function of format is an issue worthy of further investigation. Such an assessment may well influence an estimation of the level of detail involved in the content as well as choice of appropriate access strategy.SpeedTime taken to complete the twelve tasks was recorded for each subject. Total time per subject and mean time per condition are presented in Table 2 (all data are in seconds).ProcessorWord Condition TIES Paper HyperCardSubject1 1753 795 1161 14802 1159 1147 655 8273 2139 2231 1013 10144 1073 1115 1610 1836Mean 1531 1322 1110 1289Table 2: Task completion times (in seconds).Clearly, while there is variation at the subject level there is little apparent difference between conditions. A one-way ANOVA confirmed this (F[3,12]= 0.47, p > 0.7) and even a t-test between the fastest and slowest conditions, Hypercard and TIES, failed to show a significant effect for speed (t = 1.31, d.f. = 6, p > 0.2).The term ‘accuracy’ in this instance refers to how many items a subject answers correctly. This was assessed by the experimenters who awarded one point for an unambiguously correct answer, a half-point for a partly correct answer and no points for a wrong answer or abandoned question. The accuracy scores per subject and mean accuracy scores per condition are shown in Table 3.ProcessorWord Condition TIES Paper HyperCardSubject1 6.0 11.0 8.5 11.52 9.5 12.0 7.5 12.08.0 10.5 10.0 10.534 6.0 11.0 7.5 9.0Mean 7.38 11.12 8.38 10.75SD 1.7 0.63 1.18 1.32 Table 3: Accuracy scores.As can be seen from these data, subjects performed better in both linear-format conditions than in the hypertext conditions. A one-way ANOVA revealed a significant effect for format (F[3,12] = 8.24, p < 0.005) and a series of post-hoc tests revealed significant difference between paper and TIES (t = 4.13, d.f. = 6, p < 0.01), Word Processor and TIES (t = 3.13, d.f. = 6, p < 0.05) and between Paper and Hypercard (t = 4.11, d.f. = 6, p < 0.01). Even using a more rigorous basis for rejection than the 5 per cent level, i.e., the 10/k(k-1) level, where k is the number of groups, suggested by Ferguson (1959), which results in a critical rejection level of p < 0.0083 in this instance, the Paper/TIES and Paper/Hypercard differences are still significant.The number of questions abandoned by subjects was also identified. Although there was no significant difference between conditions (F[3,12] = 1.85, NS) subjects using the linear formats abandoned less than those using the hypertext formats (total abandon rates: Paper = 1; Word Processor = 2; Hypercard = 4 and TIES = 9).NavigationTime spent viewing the Contents/Index (where applicable) was determined for each subject and converted to a percentage of total time. These scores are presented in Table 4.WordProcessor Condition TIES Paper HyperCardSubject1 53.28 2.72 47.16 6.342 25.36 1.49 19.1 13.933 49.5 10.24 17.5 12.874 30.84 5.36 23.4 7.54Mean 39.74 4.95 26.79 10.173.88 13.81 3.79 SD 13.72Table 4: Time spent viewing Contents/Index as a percentage of total time.This table demonstrates a very large difference between both hypertext formats and the linear formats. A one-way ANOVA revealed a significant effect for condition (F[3,12]= 9.95, p < 0.005). Once more, applying the more conservative critical rejection level, post-hoc tests revealed significant differences between Paper and TIES (t = 4.90, d.f. = 6, p < 0.003), between Word Processor and TIES (t = 4.16, d.f. = 6, p < 0.006) and between Hypercard and paper (t = 3.06, d.f. = 6, p < 0.03). Thus, interacting with a hypertext document may necessitate heavy usage of indices in order to navigate effectively through the information space.SummaryIn general, subjects performed better with the linear format texts than with the hypertexts. The linear formats led to significantly more accurate performance and to significantlyless time spent in the index and contents. Not surprisingly, estimating document size seems easier with linear formats.DiscussionWhile there was no significant effect for the estimation of document size data, a number of observations can be made. The accurate estimates for the Paper and Word Processor condition may well have resulted from the fact that the Contents pages indicated the total number of pages and that the page number was displayed for each page, and hence browsing through the document would have given repeated cues to the document size. Finally, in the Paper condition the subjects would have received tactile feedback as they manipulated the document.Subjects in the hypertext conditions had none of this information to help them form an impression of the document’s size. While many of the cards were discrete (i.e., there were few continuation cards) and were individually listed in the indices this information did not prevent some subjects from making large over-estimates. A poor estimate of document size could lead to incorrect assumptions concerning the range of coverage and level of a document and the adoption of an inappropriate reading strategy. Future studies might usefully explore the relationship between manipulation strategy and subjective impression of size for larger documents.A number of factors are likely to have influenced the subjects’ task completion times and as a result the lack of an overall speed effect is to be expected. These factors include variation in the subjects’ familiarity with the topic area (wines); variation in the subjects’ reading speeds; the presence or absence of a string search facility in the electronic versions; variation in the subjects’ familiarity with the different software packages; and their determination to continue searching until an answer is found. However, there does not appear to be a speed/accuracy trade-off.The strong effect found for the navigation measure appears to be consistent with the significant difference in accuracy scores between the four conditions. Subjects in the hypertext conditions spent considerably more time viewing the index and contents lists than did subjects in the linear conditions but were less successful in finding the correct answers to the questions. The hypertext systems elicited a style of text manipulation which consisted of selecting likely topics from the index, jumping to them and then returning to the index if the answer was not found. Relatively little time was spent browsing between linked nodes in the database. This is a surprising finding since hypertext systems in general are assumed to be designed with this style of manipulation in mind. It may be argued that a comprehension or summarisation task would have resulted in this style of manipulation but, in contrast to the above, subjects in the linear conditions tended to refer once to the Contents/Index and then scan the text for the answer rather than make repeated use of the Contents or Index.Further evidence of the superiority of scanning the text in the linear conditions as opposed to frequent recourse to the Contents/Index in the hypertext conditions issuggested by considering the relationship between use of the string search facilities and the numbers of questions abandoned before an answer was found. Three of the questions were designed so that a string search strategy would be optimal and two of the electronic text conditions (one linear, one hypertext) supported string searching. The lack of a string search facility in the TIES condition was associated with a very high proportion of abandoned questions (58%) whilst these three questions were answered with 100% accuracy by subjects in the paper condition.In the other two conditions in which string searching was supported it was used to different degrees of effectiveness. In the Hypercard condition the subjects employed string searching with 92% of the relevant questions and this resulted in 66% of the questions being answered correctly (and 17% being abandoned). In the Word Processor condition string searching was employed on 50% of the relevant questions and 92% were answered correctly (0% abandoned). Thus, although string searching was available to the subjects in the Word Processor condition it was used with less frequency than in the Hypercard condition. However, the subjects in the Word Processor condition answered substantially more of the questions correctly and this was presumably using strategies based on visual scanning.ConclusionAlthough some caution should be exercised in interpreting the results of this study, it is clear that for some texts and some tasks hypertext is not the universal panacea which some have claimed it to be. Furthermore, the various implementations of hypertext will support performance to a greater or lesser extent in different tasks. Future work should attempt to establish clearly the situations in which each general style of hypertext confers a positive advantage so that the potential of the medium can be realised.AcknowledgementThis work was funded by the British Library Research and Development Department and was carried out as part of Project Quartet.ReferencesDillon A. and McKnight C (1989) Towards the classification of text types: a repertory grid approach. International Journal of Man-Machine Studies, in press.Ferguson G A (1959) Statistical Analysis in Psychology and Education. McGraw-Hill, New York.Gordon S, Gustavel J, Moore J and Hankey J (1988) The effects of hypertext on reader knowledge representation. Proceedings of the Human Factors Society – 32nd Annual Meeting.Kreitzberg C and Shneiderman B (1988) Restructuring knowledge for an electronic encyclopedia. Proceedings of International Ergonomics Association’s 10th Congress.Marchionini G and Shneiderman B (1988) Finding facts vs. browsing knowledge in hypertext systems. Computer, January, 70–80.McKnight C, Dillon A and Richardson J (1990) From Here to Hypertext. Cambridge University Press, Cambridge, in press.Weldon L J, Mills C B, Koved L and Shneiderman B (1985) The structure of information in online and paper technical manuals. Proceedings of the Human Factors Society – 29th Annual Meeting.。

On the learnability and design of output codes for multiclass problems

On the learnability and design of output codes for multiclass problems

Koby Crammer and Yoram Singer School of Computer Science&Engineering The Hebrew University,Jerusalem91904,Israel kobics,singer@cs.huji.ac.ilAbstractOutput coding is a general framework for solvingmulticlass categorization problems.Previous re-search on output codes has focused on buildingmulticlass machines given predefined output codes.In this paper we discuss for thefirst time the prob-lem of designing output codes for multiclass prob-lems.For the design problem of discrete codes,which have been used extensively in previous works,we present mostly negative results.We then in-troduce the notion of continuous codes and castthe design problem of continuous codes as a con-strained optimization problem.We describe threeoptimization problems corresponding to three dif-ferent norms of the code matrix.Interestingly,forthe norm our formalism results in a quadraticprogram whose dual does not depend on the lengthof the code.A special case of our formalism pro-vides a multiclass scheme for building support vec-tor machines which can be solved efficiently.Wegive a time and space efficient algorithm for solv-ing the quadratic program.Preliminary experimentswe have performed with synthetic data show thatour algorithm is often two orders of magnitude fasterthan standard quadratic programming packages.1IntroductionMany applied machine learning problems require assigning labels to instances where the labels are drawn from afinite set of labels.This problem is often referred to as multiclass categorization or classification.Examples for machine learn-ing applications that include a multiclass categorization com-ponent include optical character recognition,text classifica-tion,phoneme classification for speech synthesis,medical analysis,and more.Some of the well known binary classi-fication learning algorithms can be extended to handle mul-ticlass problem(see for instance[5,19,20]).A general ap-proach is to reduce a multiclass problem to a multiple binary classification problem.Dietterich and Bakiri[9]described a general approach based on error-correcting codes which they termed error-correcting output coding(ECOC),or in short output cod-ing.Output coding for multiclass problems is composed of two stages.In the training stage we need to construct multiple(supposedly)independent binary classifiers each of which is based on a different partition of the set of the labels into two disjoint sets.In the second stage,the classification part,the predictions of the binary classifiers are combined to extend a prediction on the original label of a test instance. Experimental work has shown that output coding can often greatly improve over standard reductions to binary problems [9,10,16,1,21,8,4,2].The performance of output coding was also analyzed in statistics and learning theoretic con-texts[12,15,22,2].Most of the previous work on output coding has concen-trated on the problem of solving multiclass problems using predefined output codes,independently of the specific ap-plication and the class of hypotheses used to construct the binary classifiers.Therefore,by predefining the output code we ignore the complexity of the induced binary problems. The output codes used in experiments were typically con-fined to a specific family of codes.Several family of codes have been suggested and tested so far,such as,comparing each class against the rest,comparing all pairs of classes[12, 2],random codes[9,21,2],exhaustive codes[9,2],and lin-ear error correcting codes[9].A few heuristics attempting to modify the code so as to improve the multiclass prediction accuracy were suggested(e.g.,[1]).However,they did not yield significant improvements and,furthermore,they lack any formal justification.In this paper we concentrate on the problem of designing a good code for a given multiclass problem.In Sec.3we study the problem offinding thefirst column of a discrete code matrix.Given a binary classifier,we show thatfinding a goodfirst column can be done in polynomial time.In con-trast,when we restrict the hypotheses class from which we choose the binary classifiers,the problem offinding a good first column becomes difficult.This result underscores the difficulty of the code design problem.Furthermore,in Sec.4 we discuss the general design problem and show that given a set of binary classifiers the problem offinding a good code matrix is NP-complete.Motivated by the intractability results we introduce in Sec.5the notion of continuous codes and cast the design problem of continuous codes as a constrained optimization problem.As in discrete codes,each column of the code ma-trix divides the set of labels into two subsets which are la-beled positive()and negative().The sign of each entry in the code matrix determines the subset association(or)and the magnitude corresponds to the confidence in this association.Given this formalism,we seek an output code with small empirical loss whose matrix norm is small.We describe three optimization problems corresponding to three different norms of the code matrix:and.For and we show that the code design problem can be solved by linear programming(LP).Interestingly,for the norm our formalism results in a quadratic program(QP)whose dual does not depend on the length of the code.Similar to sup-port vector machines,the dual program can be expressed in terms of inner-products between input instances,hence we can employ kernel-based binary classifiers.Our framework yields,as a special case,a direct and efficient method for constructing multiclass support vector machine.The number of variables in the dual quadratic problem is the product of the number of samples by the number of classes.This value becomes very large even for small datasets. For instance,an English letter recognition problem with training examples would require variables.In this case,the standard matrix representation of dual quadratic problem would require more than5Giga bytes of mem-ory.We therefore describe in Sec.6.1a memory efficient algorithm for solving the quadratic program for code design. Our algorithm is reminiscent of Platt’s sequential minimal optimization(SMO)[17].However,unlike SMO,our algo-rithm optimize on each round a reduced subset of the vari-ables that corresponds to a single rmally,our algorithm reduces the optimization problem to a sequence of small problems,where the size of each reduced problem is equal to the number of classes of the original multiclass problem.Each reduced problem can again be solved us-ing a standard QP technique.However,standard approaches would still require large amount of memory when the num-ber of classes is large and a straightforward solution is also time consuming.We therefore further develop the algorithm and provide an analytic solution for the reduced problems and an efficient algorithm for calculating the solution.The run time of the algorithm is polynomial and the memory re-quirements are linear in the number of classes.We conclude with simulations results showing that our algorithm is at least two orders of magnitude faster than a standard QP technique, even for small number of classes.2Discrete codesLet be a set of training examples where each instance belongs to a domain. We assume without loss of generality that each label is an integer from the set.A multiclass clas-sifier is a function that maps an instance into an element of.In this work we focus on a frame-work that uses output codes to build multiclass classifiers from binary classifiers.A discrete output code is a matrix of size over where each row of corre-spond to a class.Each column of defines a parti-tion of into two disjoint sets.Binary learning algorithms are used to construct classifiers,one for each column of .That is,the set of examples induced by column of is.This set is fed as training data to a learning algorithm thatfinds a hypothesis.This reduction yields different binary classifiers.We denote the vector of predictions of these clas-sifiers on an instance as.We denote the th row of by.Given an example we predict the label for which the row is the“closest”to.We will use a general notion for closeness and define it through an inner-product function.The higher the value ofis the more confident we are that is the correct label of according to the classifiers.An example for a closeness function is.It is easy to verify that this choice of is equivalent to picking the row of which attains the minimal Hamming distance to.Given a classifier and an example,we say that misclassified the example if.Let be 1if the predicate holds and0otherwise.Our goal is there-fore tofind a classifier such thatlabel set and the sample size.First,note that inthis case(2)Second,note that the sample can be divided into equiv-alence classes according to their labels and the classifica-tion of.For and,de-fine,which isequivalent to random guessing).Hence,the size of is:(4)Using Eqs.(2)and(4),we rewrite Eq.(3),.Thus,the minimum of isachieved if and only if the formula is satisfiable.There-fore,a learning algorithm for and can also be usedas an oracle for the satisfiability of.While the setting discussed in this section is somewhatsuperficial,these results underscore the difficulty of the prob-lem.We next show that the problem offinding a good out-put code given a relatively large set of classifiers is in-tractable.We would like to note in passing that efficient al-gorithm forfinding a single column might be useful in othersettings.For instance in building trees or directed acyclicgraphs for multiclass problems(cf.[18]).We leave this forfuture research.4Finding a general discrete output codeIn this section we prove that given a set of binary classi-fiers,finding a code matrix which minimizes the em-pirical loss is NP-complete.Given a sampleand a set of classifiers,let us de-note by the evaluation ofon the sample,where is the predictions vector for the th sample.We now show that even when andthe problem is NP-complete.(Clearly,the problem remains NPC for).Following the notation of previous sections,the output code matrix is composed of two rows and and the predicted class for instance is.For the simplicity of the presentation of the proof,we assume that both the code and the hypotheses’values are over the set(instead of).This assumption does not change the problem since there is a linear transform between the two sets. Theorem2The following decision problem is NP-complete. Input:A natural number,a labeled sample,where,and.Question:Does there exist a matrix such that the classifier based on an output code makes at most mistakes on.Proof:Our proof is based on a reduction technique intro-duced by H¨o ffgen and Simon[14].Since we can check in polynomial time whether the number of classification errors for a given a code matrix exceeds the bound,the prob-lem is clearly in NP.We show a reduction to Vertex Cover in order to prove that the problem is NP-hard.Given an undirected graph ,we will code the structure of the graph as follows. The sample will be composed of two subsets,andof size and respectively.We set.Each edge is encoded by two examples in. We set for thefirst vector to,,and elsewhere.We set the second vector to,, and elsewhere.We set the label of each example in to.Each example in encodes a node where,,and elsewhere.We set the label of each example in to(second class). We now show that there exists a vertex cover with at most nodes if and only if there exists a coding matrixthat induces at most classification errors on the sample.:Let be a vertex cover such that. We show that there exists a code which has at most mis-takes on.Let be the characteristic function of,that is,if and otherwise. Define the output code matrix to be and.Here,denotes the component-wise logical not operator.Since is a cover,for each we getandTherefore,for all the examples in the predicted label equals the true label and we suffer errors on these exam-ples.For each example that corresponds to a node we haveTherefore,these examples are misclassified(Recall that the label of each example in is).Analogously,for each example in which corresponds to we getand these examples are correctly classified.We thus have shown that the total number of mistakes according to is .:Let be a code which achieves at most mis-takes on.We construct a subset as follows.We scan and add to all vertices corresponding to misclas-sified examples from.Similarly,for each misclassified example from corresponding to an edge,we pick either or at random and add it to.Since we have at most misclassified examples in the size of is at most .We claim that the set is a vertex cover of the graph. Assume by contradiction that there is an edge for which neither nor belong to the set.Therefore,by construction,the examples corresponding to the vertices and are classified correctly and we get,Summing the above equations yields that,(7) In addition,the two examples corresponding to the edge are classified correctly,implying thatwhich again by summing the above equations yields,(8) Comparing Eqs.(7)and(8)we get a contradiction.that exactly one class attains the maximum value according to the function.We will concentrate on the problem of finding a good continuous code given a set of binary classi-fiers.The approach we will take is to cast the code design prob-lem as constrained optimization problem.Borrowing the idea of soft margin[7]we replace the discrete0-1multiclass loss with the linear bound(9) This formulation is also motivated by the generalization anal-ysis of Schapire et al.[2].The analysis they give is based on the margin of examples where the margin is closely related to the definition of the loss as given by Eq.(9).Put another way,the correct label should have a confi-dence value which is larger by at least one than any of the confidences for the rest of the labels.Otherwise,we suffer loss which is linearly proportional to the difference between the confidence of the correct label and the maximum among the confidences of the other labels.The bound on the empir-ical loss is then,where equals if and otherwise.We say that a sample is classified correctly using a set of binary classi-fiers if there exists a matrix such that the above loss is equal to zero,(10) Denote by(11) Thus,a matrix that satisfies Eq.(10)would also satisfy the following constraints,(12)We view a code as a collection of vectors and define the norm of to be the norm of the concatenation of the vectors constituting.Motivated by[24,2]we seek a ma-trix with a small norm which satisfies Eq.(12).Thus, when the entire sample can be labeled correctly,the prob-lem offinding a good matrix can be stated as the follow-ing optimization problem,subject to:Here is an integer.Note that of the constraints forare automatically satisfied.This is changed in the following derivation for the non-separable case.In the general case a matrix which classifies all the examples correctly might not exist.We therefore introduce slack variables and modify Eq.(10)to be,(13)The corresponding optimization problem is,(14) subject to:for some constant.This is an optimization problem with“soft”constraints.Analogously,we can define an opti-mization problem with“hard”constraints,subject to:,for someThe relation between the“hard”and“soft”constraints and their formal properties is beyond the scope of this paper. For further discussion on the relation between the problems see[24].5.1Design of continuous codes using LinearProgrammingWe now further develop Eq.(14)for the cases. We dealfirst with the cases and which result in linear programs.For the simplicity of presentation we will assume that.For the case the objective function of Eq.(14)be-comes.We introduce a set of auxiliary variables to get a standard linear programming setting,subject to:To obtain its dual program(see also App.B)we define one variable for each constraint of the primal problem.We use for thefirst set of constraints,and for the second set. The dual program is,subject to:The case of is similar.The objective function of Eq.(14)becomes.We introduce a single new variable to obtain the primal problem,subject to:Following the technique for,we get that the dual pro-gram is,subject to:Both programs(and)can be now solved using standard linear program packages.5.2Design of continuous codes using QuadricProgrammingWe now discuss in detail Eq.(14)for the case.For convenience we use the square of the norm of the matrix (instead the norm itself).Therefore,the primal program be-comes,subject to:(16) The saddle point we are seeking is a minimum for the primal variables(),and the maximum for the dual ones().To find the minimum over the primal variables we require,(18)(19)Eq.(19)implies that when the optimum of the objective function is achieved,each row of the matrix is a linear combination of.We say that an example is a support pattern for class if the coefficient ofin Eq.(19)is not zero.There are two settings for which an example can be a support pattern for class.Thefirst case is when the label of an example is equal to,then the th example is a support pattern if.The second case is when the label of the example is different from,then the th pattern is a support pattern if.Loosely speaking,since for all and we have and,the variable can be viewed as a distri-bution over the labels for each example.An example affects the solution for(Eq.(19))if and only if in not a point distribution concentrating on the correct label.Thus,only the questionable patterns contribute to the learning process.We develop the Lagrangian using only the dual variables. Substituting Eqs.(17)and(19)into Eq.(16)and using vari-ous algebraic manipulations,we obtain that the target func-tion of the dual program is,It is easy to verify that is strictly convex in.Since the constraints are linear the above problem has a single opti-mal solution and therefore QP methods can be used to solve it.In Sec.6we describe a memory efficient algorithm for solving this special QP problem.To simplify the equations we denote bythe difference between the correct point distribution and the distribution obtained by the optimization problem,Eq.(19) becomes,(21)Since we look for the value of the variables which maximize the objective function(and not the optimum of itself), we can omit constants and write the dual problem given by Eq.(20)as,subject to:and(22)whereand the classification rule becomes,(25)The general framework for designing output codes using the QP program described above,also provides,as a special case,a new algorithm for building multiclass Support Vec-tors Machines.Assume that the instance space is the vector space and define(thus),then the primal program in Eq.(15)becomesVariablesConstraints VariablesConstraints(31)For brevity,we will omit the index and drop constants (that do not affect the solution).The reduced optimization has variables and constraints,Since the program from Eq.(32)becomes,subject to:andwhere,(33) In Sec.6.1we discuss an analytic solution to Eq.(33)and in Sec.6.2we describe a time efficient algorithm for computing the analytic solution.6.1An analytic solutionWhile the algorithmic solution we describe in this section is simple to implement and efficient,its derivation is quite complex.Before describing the analytic solution to Eq.(33), we would like to give some intuition on our method.Let us fix some vector and denote.First note that is not a feasible point since the constraintis not satisfied.Hence for any feasible point some of the constraints are not tight.Second,note that the differences between the bounds and the variables sum to one.Let us induce a uniform distribution over the components of.Then,the variance of isSince the expectation is constrained to a given value,the optimal solution is the vector achieving the smallest vari-ance.That is,the components of of should attain similar values,as much as possible,under the inequality constraints .In Fig.1we illustrate this motivation.We pickedand show plots for two different feasible values for.The x-axis is the index of the point and the y-axis designates the values of the components of .The norm of on the plot on the right hand side plot is smaller than the norm of the plot on the left hand side.The right hand side plot is the optimal solution for.The sum of the lengths of the arrows in both plots is. Since both sets of points are feasible,they satisfy the con-straint.Thus,the sum of the lengths of the “arrows”in both plots is one.We exploit this observation in the algorithm we describe in the sequel.We therefore seek a feasible vector whose most of its components are equal to some threshold.Given we de-fine a vector whose its th component equal to the mini-mum between and,hence the inequality constraints are satisfied.We define(34)35duced optimization problem with .The x-axis is the index of the point,and the y-axis denotes the values .The bottom plot has a smaller variance hence it achieves a better value for .We denote byUsing,the equality constraint from Eq.(33)becomes.Let us assume without loss of generality that the compo-nents of the vector are given in a descending order,(this can be done in time).Letand .To prove the main theorem of this section we need the following lemma.Lemma 3is piecewise linear with a slope in eachrangefor .Proof:Let us develop.35Figure 2:An illustration of the solution of the QP problem using the inverse of for .The optimal value is the solution for the equation which is .Note that if thenfor all .Also,the equalityholds for eachin the range.Thus,for ,the functionhas the form,(35)This completes the proof.We now can prove the main theorem of this section.Theorem 5Let be the unique solution of.Then is the optimum value of the optimization problem stated in Eq.(33).The theorem tells us that the optimum value of Eq.(33)isof the form defined by Eq.(34)and that there is exactly one value of for which the equality constraintholds.A plot of and the solution for fromFig.1are shown in Fig.2.Proof:Corollary 4implies that a solution exists and isunique.Note also that from definition ofwe have that the vector is a feasible point of Eq.(33).We now prove that is the optimum of Eq.(33)by showing thatfor all feasible points .Assume,by contradiction,that there is a vector such that .Let ,and define.Since both and satisfy the equality constraint of Eq.(33),we have,(36)Since is a feasible point we have.Also, by the definition of the set we have that for all .Combining the two properties we get,for all(37) We start with the simpler case of for all.In this case,differs from only on a subset of the coordi-nates.However,for these coordinates the components of are equal to,thus we obtain a zero variance from the constant vector whose components are all.Therefore, no other feasible vector can achieve a better variance.For-mally,since for all,then the terms for cancel each other,From the definition of in Eq.(34)we get thatfor all,We use now the assumption that for all and the equality(Eq.(36))to obtain,and we get a contradiction since.We now turn to prove the complementary case in which .Since,then there exists such that.We use again Eq.(36)and conclude that there exists also such that.Let us assume without loss of generality that(The case follows analogously by switching the roles of and ).Define as follows,otherwiseThe vector satisfies the constraints of Eq.(33)sinceand.Since and are equal except for their and components we get,Input:.Initialize.Define.Sort the components of,such that. Define;.While..Eq.(39) ComputeFigure3:The algorithm forfinding the optimal solution of the reduced quadratic program(Eq.(33)).Substituting the values for and from the definition of we obtain,Using the definition of and forand for we obtain,Thefirst term of the bottom equation is negative since and.Also,hence and the second term is also negative.We thus get,which is a contradiction.Input:.Choose-a feasible point for Eq.(24).Iterate.Choose an exampleCompute and Eqs.(29)and(30)Eq.(33) Output thefinal hypothesis:Eq.(25)(40)The complete algorithm is described in Fig.3.Since it takes time to sort the vector and anothertime for the loop search,the total run time is.We arefinally ready to give the algorithm for solving learning problem described by Eq.(24).Since the output code is constructed of the supporting patterns we term our algorithm SPOC for Support Pattern Output Coding.The SPOC algorithm is described in Fig.4.We have also devel-oped methods for choosing an example to modify on each round and a stopping criterion for the entire optimization al-gorithm.Due to lack of space we omit the details which will appear in a full paper.We have performed preliminary experiments with syn-thetic data in order to check the actual performance of our algorithm.We tested the special case corresponding to mul-ticlass SVM by setting.The code matrices we test35Figure5:Run time comparison of two algorithms for code design using quadratic programming:Matlab’s standard QP package and the proposed algorithm(denoted SPOC).Note that we used a logarithmic scale for the run-time()axis. are of rows(classes)and columns.We varied the size of the training set size from to. The examples were generated using the uniform distribution over.The domain was partitioned into four quarters of equal size:,,,and.Each quarter was associated with a different label.For each sample size we tested,we ran the algorithm three times,each run used a different randomly generated training set.We compared the standard quadratic optimization routine available from Mat-lab with our algorithm which was also implemented in Mat-lab.The average running time results are shown in Fig.5. Note that we used a log-scale for the(run-time)axis.The results show that the efficient algorithm can be two orders of magnitude faster than the standard QP package.7Conclusions and future researchIn this paper we investigated the problem of designing out-put codes for solving multiclass problems.Wefirst discussed discrete codes and showed that while the problem is intractable in general we canfind thefirst column of a code matrix in polynomial time.The question whether the algorithm can be generalized to columns with running time ofor less remains open.Another closely related question is whether we canfind efficiently the next column given previ-ous columns.Also left open for future research is further us-age of the algorithm forfinding thefirst column as a subrou-tine in constructing codes based on trees or directed acyclic graphs[18],and as a tool for incremental(column by col-umn)construction of output codes.Motivated by the intractability results for discrete codes we introduced the notion of continuous output codes.We described three optimization problems forfinding good con-tinuous codes for a given a set of binary classifiers.We have discussed in detail an efficient algorithm for one of the three problems which is based on quadratic programming.As a special case,our framework also provides a new efficient al-gorithm for multiclass Support Vector Machines.The im-portance of this efficient algorithm might prove to be crucial in large classification problems with many classes such asKanji character recognition.We also devised efficient im-plementation of the algorithm.The implementation details of the algorithm,its convergence,generalization properties, and more experimental results were omitted due to the lack of space and will be presented elsewhere.Finally,an impor-tant question which we have tackled barely in this paper is the problem of interleaving the code design problem with the learning of binary classifiers.A viable direction in this do-main is combining our algorithm for continuous codes with the support vector machine algorithm. Acknowledgement We would like to thank Rob Schapire for numerous helpful discussions,to Vladimir Vapnik for his encouragement and support of this line of research,and to Nir Friedman and Ran Bachrach for useful comments and suggestions.References[1] D.W.Aha and R.L.Bankert.Cloud classification usingerror-correcting output codes.In Artificial Intelligence Ap-plications:Natural Science,Agriculture,and Environmental Science,volume11,pages13–28,1997.[2] E.L.Allwein,R.E.Schapire,and Y.Singer.Reducing multi-class to binary:A unifying approach for margin classifiers.In Machine Learning:Proceedings of the Seventeenth Interna-tional Conference,2000.[3]Peter L.Bartlett.The sample complexity of pattern classifi-cation with neural networks:the size of the weights is more important than the size of the network.IEEE Transactions on Information Theory,44(2):525–536,March1998.[4] A.Berger.Error-correcting output coding for text classifica-tion.In IJCAI’99:Workshop on machine learning for infor-mationfiltering,1999.[5]Leo Breiman,Jerome H.Friedman,Richard A.Olshen,and Charles J.Stone.Classification and Regression Trees.Wadsworth&Brooks,1984.[6]V.Chvatal.Linear Programming.Freeman,1980.[7]Corinna Cortes and Vladimir Vapnik.Support-vector net-works.Machine Learning,20(3):273–297,September1995.[8]Ghulum Bakiri Thomas G.Dietterich.Achieving high-accuracy text-to-speech with machine learning.In Data min-ing in speech synthesis,1999.[9]Thomas G.Dietterich and Ghulum Bakiri.Solving multiclasslearning problems via error-correcting output codes.Journal of Artificial Intelligence Research,2:263–286,January1995.[10]Tom Dietterich and Eun Bae Kong.Machine learning bias,statistical bias,and statistical variance of decision tree algo-rithms.Technical report,Oregon State University,1995. [11]R.Fletcher.Practical Methods of Optimization.John Wiley,second edition,1987.[12]Trevor Hastie and Robert Tibshirani.Classification by pair-wise coupling.The Annals of Statistics,26(1):451–471,1998.[13]David Haussler.Decision theoretic generalizations of the PACmodel for neural net and other learning rma-tion and Computation,100(1):78–150,1992.[14]Klaus-U.H¨o ffgen and Hans-U.Simon.Robust trainabilityof single neurons.In Proceedings of the Fifth Annual ACM Workshop on Computational Learning Theory,pages428–439,Pittsburgh,Pennsylvania,July1992.[15]G.James and T.Hastie.The error coding method and PiCT.J.of computational and graphical stat.,7(3):377–387,1998.[16] E.B.Kong and T.G.Dietterich.Error-correcting output cod-ing corrects bias and variance.In Proc.of the Twelfth Interna-tional Conference on Machine Learning,p.313–321,1995.[17]J.C.Platt.Fast training of Support Vector Machines usingsequential minimal optimization.In B.Sch¨o lkopf,C.Burges, and A.Smola,editors,Advances in Kernel Methods-Support Vector Learning.MIT Press,1998.[18]J.C.Platt,N.Cristianini,and rge mar-gin dags for multiclass classification.In Advances in Neural Information Processing Systems12.MIT Press,2000.(To appear.).[19]J.Ross Quinlan.C4.5:Programs for Machine Learning.Mor-gan Kaufmann,1993.[20] D.E.Rumelhart,G.E.Hinton,and R.J.Williams.Learn-ing internal representations by error propagation.In David E.Rumelhart and James L.McClelland,editors,Parallel Dis-tributed Processing–Explorations in the Microstructure of Cognition,chapter8,pages318–362.MIT Press,1986. [21]Robert ing output codes to boost multiclasslearning problems.In Machine Learning:Proceedings of the Fourteenth International Conference,pages313–321,1997.[22]Robert E.Schapire and Yoram Singer.Improved boosting al-gorithms using confidence-rated predictions.Machine Learn-ing,37(3):1–40,1999.[23]V.N.Vapnik.Estimation of Dependences Based on EmpiricalData.Springer-Verlag,1982.[24]V.N.Vapnik.Statistical Learning Theory.Wiley,1998.[25]J.Weston and C.Watkins.Support vector machines for multi-class pattern recognition.In Proceedings of the Seventh Euro-pean Symposium On Artificial Neural Networks,April1999.A Legend Description Section6。

A Comparison of Algorithms for the

A Comparison of Algorithms for the

A Comparison of Algorithms for the Optimization of Fermentation ProcessesRui MendesIsabel RochaEug´e nio C.FerreiraMiguel RochaAbstract —The optimization of biotechnological processes isa complex problem that has been intensively studied in the past few years due to the economic impact of the products obtained from fermentations.In fed-batch processes,the goal is to find the optimal feeding trajectory that maximizes the final productivity.Several methods,including Evolutionary Algorithms (EAs)have been applied to this task in a number of different fermentation processes.This paper performs an experimental comparison between Particle Swarm Optimization,Differential Evolution and a real-valued EA in three distinct case studies,taken from previous work by the authors and literature,all considering the optimization of fed-batch fermentation processes.I.I NTRODUCTIONA number of valuable products such as recombinant pro-teins,antibiotics and amino-acids are produced using fer-mentation techniques.Additionally,biotechnology has been replacing traditional manufacturing processes in many areas like the production of bulk chemicals,due to its relatively low requirements regarding energy and environmental costs.Consequently,there is an enormous economic incentive to develop engineering techniques that can increase the pro-ductivity of such processes.However,these are typically very complex,involving different transport phenomena,microbial components and biochemical reactions.Furthermore,the nonlinear behavior and time-varying properties,together with the lack of reliable sensors capable of providing direct and on-line measurements of the biological state variables limits the application of traditional control and optimization techniques to bioreactors.Under this context,there is the need to consider quantita-tive mathematical models,capable of describing the process dynamics and the interrelation among relevant variables.Additionally,robust global optimization techniques must deal with the model’s complexity,the environment constraints and the inherent noise of the experimental process [3].In fed-batch fermentations,process optimization usually encompasses finding a given nutrient feeding trajectory that maximizes productivity.Several optimization methods have been applied in this task.It has been shown that,for simple bioreactor systems,the problem can be solved analytically [24].Rui Mendes and Miguel Rocha are with Department of Infor-matics and the Centro de Ciˆe ncias e Tecnologias da Computac ¸˜a o,Universidade do Minho,Braga,Portugal (email:azuki@di.uminho.pt,mrocha@di.uminho.pt).Isabel Rocha and Eug´e nio Ferreira with the Centro de Engenharia Biol´o gica da Universidade do Minho (email:irocha@deb.uminho.pt,ecferreira@deb.uminho.pt).Numerical methods make a distinct approach to this dy-namic optimization problem.Gradient algorithms are used to adjust the control trajectories in order to iteratively improve the objective function [4].In contrast,dynamic programming methods discretize both time and control variables to a predefined number of values.A systematic backward search method in combination with the simulation of the system model equations is used to find the optimal path through the defined grid.However,in order to achieve a global optimum the computational burden is very high [23].An alternative approach comes from the use of algorithms from the Evolutionary Computation (EC)field,which have been used in the past to optimize nonlinear problems with a large number of variables.These techniques have been applied with success to the optimization of feeding or temperature trajectories [14][1],and,when compared with traditional methods,usually perform better [20][6].In this work,the performance of different algorithms belonging to three main groups -Evolutionary Algorithms (EA),Particle Swarm (PSO)and Differential Evolution (DE)-was compared,when applied to the task of optimizing the feeding trajectory of fed-batch fermentation processes.Three test cases taken from literature and previous work by the authors were used.The algorithms were allowed to run for a given number of function evaluations that was deemed to be enough to achieve acceptable results.The comparison among the algorithms was based on their final result and on the convergence speed.The paper is organized as follows:firstly,the fed-batch fermentation case studies are presented;next,PSO,DE and a real-valued EA are described;the results of the application of the different algorithms to the case studies are presented;finally,the paper presents a discussion of the results,conclu-sions and further work.II.C ASESTUDIES :FED -BATCH FERMENTATIONPROCESSESIn fed-batch fermentations there is an addition of certain nutrients along the process,in order to prevent the accumu-lation of toxic products,allowing the achievement of higher product concentrations.During this process the system states change considerably,from a low initial to a very high biomass and product concen-trations.This dynamic behavior motivates the development of optimization methods to find the optimal input feeding trajectories in order to improve the process.The typical input in this process is the substrate inflow rate time profile.0-7803-9487-9/06/$20.00/©2006 IEEE2006 IEEE Congress on Evolutionary ComputationSheraton Vancouver Wall Centre Hotel, Vancouver, BC, Canada July 16-21, 20062018For the proper optimization of the process,a white box mathematical model is typically developed,based on dif-ferential equations that represent the mass balances of the relevant state variables.A.Case study IIn previous work by the authors,a fed-batch recombinant Escherichia coli fermentation process was optimized by EAs [17][18].This was considered as thefirst case study in this work and will be briefly described next.During the aerobic growth of the bacterium,with glucose as the only added substrate,the microorganism can follow three main different metabolic pathways:•Oxidative growth on glucose:k1S+k5Oµ1−→X+k8C(1)•Fermentative growth on glucose:k2S+k6Oµ2−→X+k9C+k3A(2)•Oxidative growth on acetic acid:k4A+k7Oµ3−→X+k10C(3) where S,O,X,C,A represent glucose,dissolved oxygen, biomass,dissolved carbon dioxide and acetate components, respectively.In the sequel,the same symbols are used to represent the state variables’concentrations(in g/kg);µ1to µ3are time variant specific growth rates that nonlinearly depend on the state variables,and k i are constant yield coefficients.The associated dynamical model can be described by the following equations:dXdt =(−k1µ1−k2µ2)X+F in,S S indt =(k3µ2−k4µ3)X−DA(6)dOdt =(k8µ1+k9µ2+k10µ3)X−CT R−DC(8)dWT f(10) The relevant state variables are initialized with the follow-ing values:X(0)=5,S(0)=0,A(0)=0,W(0)=3. Due to limitations in the feeding pump capacity,the value of F in,S(t)must be in the range[0.0;0.4].Furthermore, the following constraint is defined over the value of W: W(t)≤5.Thefinal time(T f)is set to25hours.B.Case study IIThis system is a fed-batch bioreactor for the production of ethanol by Saccharomyces cerevisiae,firstly studied by Chen and Huang[5].The aim is tofind the substrate feed rate profile that maximizes the yield of ethanol.The model equations are the following:dx1x4(11)dx2x4(12)dx3x4(13)dx41+x30.22+x2(15)g2=171.5x2dx10.12+A −ux1dt =x3x4e−5x4x5(19)dx3x5)x3(20) dx4x5(21)dx5(x4+0.4)(x4+62.5)(23) The aim of the optimization is tofind the feeding profile (u)that maximizes the following PI:P I=x1(T f)x5(T f)(24) Thefinal time is set to T f=15(hours)and the initial values for relevant state variables are the following:x1(0)= 0,x2(0)=0,x3(0)=1,x4(0)=5and x5(0)=1.The feed rate is constrained to the range u(t)∈[0.0;3.0].III.T HE A LGORITHMSThe optimization task is tofind the feeding trajectory, represented as an array of real-valued variables,that yields the best performance index.Each variable will encode the amount of substrate to be introduced into the bioreactor, in a given time unit,and the solution will be given by the temporal sequence of such values.In this case,the size of the genome would be determined based on thefinal time of the process(T f)and the discretization step(d)considered in the numerical simulation of the model,given by the expression: T fdI+1(25) where I stands for the number of points within each interpo-lation interval.The value of d used in the experiments was d=0.005,for case studies I,II and III.The evaluation process,for each individual in the pop-ulation,is achieved by running a numerical simulation of the defined model,given as input the feeding values in the genome.The numerical simulation is performed using ODEToJava,a package of ordinary differential equation solvers,using a linearly implicit implicit/explicit(IMEX) Runge-Kutta scheme used for stiff problems[2].Thefitness value is then calculated from thefinal values of the state variables according to the PI defined for each case.A.Particle Swarm OptimizationA particle swarm optimizer uses a population of particles that evolve over time byflying through space.The particles imitate their most successful neighbors by modifying their velocity component to follow the direction of the most successful position of their neighbors.Each particle is defined by:P(i)t= x t,v t,p t,e tx t∈R d is the current position in the search space;p t∈R d is the position visited by the particle in the past that had the best function evaluation;v t∈R d is a vector that represents the direction in which the particle is moving,it is called the‘velocity’;e t is the evaluation of p t under the function being optimized,i.e.e t=f(p t).Particles are connected to others in the population via a predefined topology.This can be represented by the adja-cency matrix of a directed graph M=(m ij),where m ij= 1if there is an edge from particle i to particle j and m ij=0 otherwise.At each iteration,a new population is produced by allow-ing each particle to move stochastically toward its previous best position and at the same time toward the best of the previous best positions of all other particles to which it is connected.The following is an outline of a generic PSO.1)Set the iteration counter,t=0.2)Initialize each x(i)and v(i)randomly.Set p(i)=x(i)0.3)Evaluate each particle and set e(i)=f(p(i)0).4)Let t=t+1and generate a new population,whereeach particle i is moved to a new position in the search space according to:(i)v(i)t=velocityvelocityupdate(v(i)t−1)=X j∈N(i)r·(c1+c2)4(i.e.,smallperturbations will be preferred over larger ones).where[min i;max i]is the range of values allowed for gene i.In both cases,an innovation is introduced:the mutation operators are applied to a variable number of genes(a value that is randomly set between1and10in each application). 20213p<0.001 20.001≤p<0.01 10.01≤p<0.05 N p≥0.05CanPso2.5154±0.71232.5563±0.70912.5641±0.7168 DE9.3693±0.05709.4738±0.00529.4770±0.0028 DEBest2.7077±0.19212.7419±0.21152.7936±0.2176 DETourn9.1044±0.19839.2913±0.12409.3596±0.1114 EA7.9371±0.13558.5161±0.08838.8121±0.0673 Fips9.1804±0.16429.4280±0.05519.4528±0.0538CanPso7.1461±1.11527.1461±1.11527.1461±1.1152 DE9.4351±0.00009.4351±0.00009.4351±0.0000 DEBest7.6932±0.83217.6937±0.83217.6937±0.8321 DETourn9.4099±0.05519.4099±0.05519.4099±0.0551 EA8.7647±0.14419.0137±0.14219.1324±0.1320 Fips9.4351±0.00009.4351±0.00009.4351±0.0000CanPso DE DEBest DETourn EA DEN-N-N3-3-1DETourn3-3-13-3-23-3-13-3-1FipsAlgorithm PI40k NFEs PI100k NFEs PI200k NFEsCanPso19385.2±284.319386.4±284.319406.8±272.5 DE20379.4±11.620397.2±13.920406.9±14.5 DEBest19418.1±290.019421.0±290.419430.5±293.5 DETourn20362.7±52.420380.4±42.720394.3±32.8 EA20151.8±69.720335.1±54.120394.7±23.1 Fips19818.0±160.719818.9±161.119818.9±161.1TABLE IVR ESULTS FOR CASE II FOR I=100(109VARIABLES),I=200(55 VARIABLES)AND I=540(21VARIABLES)RESPECTIVELY.Table V shows the comparison of the algorithms.As can be seen,CanPso continues to be the worst contender but DEBest is not a very bad choice when the number of variables is small.EA is still beaten by DE and DETourn in most cases. Figure2presents the convergence curve of the algorithms. DE and DETourn converge fast(around40,000NFEs);Fips gets stuck in a plateau that is higher than the one of DEBest and CanPso;EA converges slowly but is steadily improving. It seems that,given enough time,EAfinds similar solutions to either DE and DETourn.20233-3-1DEBest 3-3-1N-N-N 3-3-N EAN-N-N3-3-NN-N-N3-3-N3-3-NTABLE VP AIRWISE T -TEST WITH THEH OLM P -VALUE ADJUSTMENT FOR THEALGORITHMS OF CASEII.T HE P -VALUE CODES CORRESPOND TOI =100,I =200AND I =540RESPECTIVELY .1650017000 17500 18000 18500 19000 19500 20000 20500 050000100000 150000200000P INFEsCanPsoFips DE DEBest DETournEAFig.2.Convergence of the algorithms for case II for I =200.E.Results for case IIITable VI presents the results of the algorithms on case III.This case seems to be quite simple and most algorithms find similar results.DE ,Fips and EA are the best algorithms in this problem because of their reliability:they have narrow confidence intervals.DETourn seems to be a little less reliable,but its confidence intervals are still small enough.Table VII shows the comparison of the algorithms for this problem.In this case,most algorithms are not statistically different.This is the case when we turn to the reliability of the algorithms to draw conclusions.As we stated before,most algorithms find similar solutions,which indicates that this case is probably not a good benchmark to compare algorithms.Figure 3presents the convergence curve of the algorithms for I =100.In this case DE ,DETourn and Fips converge very fast;EA has a slower convergence rate;CanPso and DEBest get stuck in local optima.V.C ONCLUSIONSANDF URTHER W ORKThis paper compares canonical particle swarm (CanPso ),fully informed particle swarm (Fips ),a real-valued EA (EA )and three schemes of differential evolution (DE ,DEBest and DETourn )in three test cases of optimizing the feeding trajectory in fed-batch fermentation.Each of these problems was tackled with different numbers of points (i.e.,different values for I )to interpolate the feeding trajectory.This is a trade off:the more variables we have,the more precise the curve is but the harder it is to optimize.CanPso 27.069±1.75127.370±1.83627.579±1.681DE 32.641±0.02932.674±0.00232.680±0.001DEBest 30.774±1.00430.775±1.00430.775±1.004DETourn 32.624±0.05732.629±0.05632.631±0.056EA 32.526±0.02532.633±0.01332.670±0.008Fips 32.625±0.10032.629±0.09932.630±0.099CanPso 31.914±0.66231.914±0.66231.914±0.662DE 32.444±0.00032.444±0.00032.444±0.000DEBest 31.913±0.70031.914±0.70031.914±0.700DETourn 32.441±0.00532.441±0.00532.441±0.005EA 32.413±0.01232.439±0.00332.443±0.001Fips 32.444±0.00032.444±0.00032.444±0.000CanPsoDE DEBest DETourn EADE 1-N-N 1-N-N DETourn 2-N-NN-3-N1-N-N N-3-NFips1214 16 18 20 22 24 26 28 30 32 34 050000100000 150000200000P INFEsCanPsoFips DE DEBest DETournEAFig.3.Convergence of the algorithms for case III when I =100.to choose DE instead.Previous work by the authors [19]developed a new representation in EAs in order to allow the optimization of a time trajectory with automatic interpolation.It would be interesting to develop a similar approach within DE or Fips .Another area of future research is the consideration of on-line adaptation,where the model of the process is updated during the fermentation process.In this case,the good computational performance of DE is a benefit,if there is the need to re-optimize the feeding given a new model and values for the state variables are measured on-line.A CKNOWLEDGMENTSThis work was supported in part by the Portuguese Foundation for Science and Technology under project POSC/EIA/59899/2004.The authors wish to thank Project SeARCH (Services and Advanced Research Computing with HTC/HPC clusters),funded by FCT under contract CONC-REEQ/443/2001,for the computational resources made avail-able.R EFERENCES[1]P.Angelov and R.Guthke.A Genetic-Algorithm-based Approach toOptimization of Bioprocesses Described by Fuzzy Rules.Bioprocess Engin.,16:299–303,1997.[2]Spiteri Ascher,Ruuth.Implicit-explicit runge-kutta methods for time-dependent partial differential equations.Applied Numerical Mathe-matics ,25:151–167,1997.[3]J.R.Banga,C.Moles,and A.Alonso.Global Optimization of Bio-processes using Stochastic and Hybrid Methods.In C.A.Floudas and P.M.Pardalos,editors,Frontiers in Global Optimization -Nonconvex Optimization and its Applications ,volume 74,pages 45–70.Kluwer Academic Publishers,2003.[4]A.E.Bryson and Y .C.Ho.Applied Optimal Control -Optimization,Estimation and Control .Hemisphere Publication Company,New York,1975.[5]C.T.Chen and C.Hwang.Optimal Control Computation forDifferential-algebraic Process Systems with General Constraints.Chemical Engineering Communications ,97:9–26,1990.[6]J.P.Chiou and F.S.Wang.Hybrid Method of Evolutionary Algorithmsfor Static and Dynamic Optimization Problems with Application to a Fed-batch Fermentation puters &Chemical Engineering ,23:1277–1291,1999.[7]Maurice Clerc and James Kennedy.The particle swarm -explosion,stability,and convergence in a multidimensional complex space.IEEE Transactions on Evolutionary Computation ,6(1):58–73,2002.[8]J.Stuart Hunter George E.P.Box,William G.Hunter.Statistics forexperimenters:An introduction to design and data analysis .NY:John Wiley,1978.[9]Cyril Harold Goulden.Methods of Statistical Analysis,2nd ed .JohnWiley &Sons Ltd.,1956.[10]S Holm.A simple sequentially rejective multiple test procedure.Scandinavian Journal of Statistics ,6:65–70,1979.[11]J.Kennedy and R.Mendes.Topological structure and particle swarmperformance.In David B.Fogel,Xin Yao,Garry Greenwood,Hitoshi Iba,Paul Marrow,and Mark Shackleton,editors,Proceedings of the Fourth Congress on Evolutionary Computation (CEC-2002),Honolulu,Hawaii,May 2002.IEEE Computer Society.[12]Rui Mendes,James Kennedy,and Jos´e Neves.The fully informed par-ticle swarm:Simple,maybe better.IEEE Transactions on EvolutionaryComputation ,8(3):204–210,2004.[13]Z.Michalewicz.Genetic Algorithms +Data Structures =EvolutionPrograms .Springer-Verlag,USA,third edition,1996.[14]H.Moriyama and K.Shimizu.On-line Optimization of CultureTemperature for Ethanol Fermentation Using a Genetic Algorithm.Journal Chemical Technology Biotechnology ,66:217–222,1996.[15]S.Park and W.F.Ramirez.Optimal Production of Secreted Protein inFed-batch Reactors.AIChE J ,34(9):1550–1558,1988.[16]I.Rocha.Model-based strategies for computer-aided operation ofrecombinant E.coli fermentation .PhD thesis,Universidade do Minho,2003.[17]I.Rocha and E.C.Ferreira.On-line Simultaneous Monitoring ofGlucose and Acetate with FIA During High Cell Density Fermentation of Recombinant E.coli.Analytica Chimica Acta ,462(2):293–304,2002.[18]M.Rocha,J.Neves,I.Rocha,and E.Ferreira.Evolutionary algo-rithms for optimal control in fed-batch fermentation processes.In G.Raidl et al.,editor,Proceedings of the Workshop on Evolutionary Bioinformatics -EvoWorkshops 2004,LNCS 3005,pages pp.84–93.Springer,2004.[19]Miguel Rocha,Isabel Rocha,and Eug´e nio Ferreira.A new represen-tation in evolutionary algorithms for the optimization of bioprocesses.In Proceedings of the IEEE Congress on Evolutionary Computation ,pages 484–490.IEEE Press,2005.[20]J.A.Roubos,G.van Straten,and A.J.van Boxtel.An EvolutionaryStrategy for Fed-batch Bioreactor Optimization:Concepts and Perfor-mance.Journal of Biotechnology ,67:173–187,1999.[21]Rainer Storn.On the usage of differential evolution for functionoptimization.In 1996Biennial Conference of the North American Fuzzy Information Processing Society (NAFIPS 1996),pages 519–523.IEEE,1996.[22]Rainer Storn and Kenneth Price.Minimizing the real functions ofthe icec’96contest by differential evolution.In IEEE International Conference on Evolutionary Computation ,pages 842–844.IEEE,May 1996.[23]A.Tholudur and W.F.Ramirez.Optimization of Fed-batch BioreactorsUsing Neural Network Parameters.Biotechnology Progress ,12:302–309,1996.[24]V .van Breusegem and G.Bastin.Optimal Control of Biomass Growthin a Mixed Culture.Biotechnology and Bioengineering ,35:349–355,1990.2025。

斯托克计量经济学课后习题实证答案

斯托克计量经济学课后习题实证答案

斯托克计量经济学课后习题实证答案P ART T WO Solutions to EmpiricalExercisesChapter 3Review of StatisticsSolutions to Empirical Exercises1. (a)Average Hourly Earnings, Nominal $’sMean SE(Mean) 95% Confidence Interval AHE199211.63 0.064 11.50 11.75AHE200416.77 0.098 16.58 16.96Difference SE(Difference) 95% Confidence Interval AHE2004 AHE1992 5.14 0.117 4.91 5.37(b)Average Hourly Earnings, Real $2004Mean SE(Mean) 95% Confidence Interval AHE199215.66 0.086 15.49 15.82AHE200416.77 0.098 16.58 16.96Difference SE(Difference) 95% Confidence Interval AHE2004 AHE1992 1.11 0.130 0.85 1.37(c) The results from part (b) adjust for changes in purchasing power. These results should be used.(d)Average Hourly Earnings in 2004Mean SE(Mean) 95% Confidence Interval High School13.81 0.102 13.61 14.01College20.31 0.158 20.00 20.62Difference SE(Difference) 95% Confidence Interval College High School 6.50 0.188 6.13 6.87Solutions to Empirical Exercises in Chapter 3 109(e)Average Hourly Earnings in 1992 (in $2004)Mean SE(Mean) 95% Confidence Interval High School13.48 0.091 13.30 13.65 College19.07 0.148 18.78 19.36Difference SE(Difference) 95% Confidence Interval College High School5.59 0.173 5.25 5.93(f) Average Hourly Earnings in 2004Mean SE(Mean) 95% Confidence Interval AHE HS ,2004AHE HS ,19920.33 0.137 0.06 0.60 AHE Col ,2004AHE Col ,19921.24 0.217 0.82 1.66Col–HS Gap (1992)5.59 0.173 5.25 5.93 Col–HS Gap (2004)6.50 0.188 6.13 6.87Difference SE(Difference) 95% Confidence Interval Gap 2004 Gap 1992 0.91 0.256 0.41 1.41Wages of high school graduates increased by an estimated 0.33 dollars per hour (with a 95%confidence interval of 0.06 0.60); Wages of college graduates increased by an estimated 1.24dollars per hour (with a 95% confidence interval of 0.82 1.66). The College High School gap increased by an estimated 0.91 dollars per hour.(g) Gender Gap in Earnings for High School Graduates Yearm Y s m n m w Y s w n w m Y w Y SE (m Y w Y )95% CI 199214.57 6.55 2770 11.86 5.21 1870 2.71 0.173 2.37 3.05 200414.88 7.16 2772 11.92 5.39 1574 2.96 0.192 2.59 3.34There is a large and statistically significant gender gap in earnings for high school graduates.In 2004 the estimated gap was $2.96 per hour; in 1992 the estimated gap was $2.71 per hour(in $2004). The increase in the gender gap is somewhat smaller for high school graduates thanit is for college graduates.Chapter 4Linear Regression with One RegressorSolutions to Empirical Exercises1. (a) ·AHE 3.32 0.45 u AgeEarnings increase, on average, by 0.45 dollars per hour when workers age by 1 year.(b) Bob’s predicted earnings 3.32 0.45 u 26 $11.70Alexis’s predicted earnings 3.32 0.45 u 30 $13.70(c) The R2 is 0.02.This mean that age explains a small fraction of the variability in earnings acrossindividuals.2. (a)There appears to be a weak positive relationship between course evaluation and the beauty index.Course Eval 4.00 0.133 u Beauty. The variable Beauty has a mean that is equal to 0; the(b) ·_estimated intercept is the mean of the dependent variable (Course_Eval) minus the estimatedslope (0.133) times the mean of the regressor (Beauty). Thus, the estimated intercept is equalto the mean of Course_Eval.(c) The standard deviation of Beauty is 0.789. ThusProfessor Watson’s predicted course evaluations 4.00 0.133 u 0 u 0.789 4.00Professor Stock’s predicted course evaluations 4.00 0.133 u 1 u 0.789 4.105Solutions to Empirical Exercises in Chapter 4 111(d) The standard deviation of course evaluations is 0.55 and the standard deviation of beauty is0.789. A one standard deviation increase in beauty is expected to increase course evaluation by0.133 u 0.789 0.105, or 1/5 of a standard deviation of course evaluations. The effect is small.(e) The regression R2 is 0.036, so that Beauty explains only3.6% of the variance in courseevaluations.3. (a) ?Ed 13.96 0.073 u Dist. The regression predicts that if colleges are built 10 miles closerto where students go to high school, average years of college will increase by 0.073 years.(b) Bob’s predicted years of completed education 13.960.073 u 2 13.81Bob’s predicted years of completed education if he was 10 miles from college 13.96 0.073 u1 13.89(c) The regression R2 is 0.0074, so that distance explains only a very small fraction of years ofcompleted education.(d) SER 1.8074 years.4. (a)Yes, there appears to be a weak positive relationship.(b) Malta is the “outlying” observation with a trade share of 2.(c) ·Growth 0.64 2.31 u TradesharePredicted growth 0.64 2.31 u 1 2.95(d) ·Growth 0.96 1.68 u TradesharePredicted growth 0.96 1.68 u 1 2.74(e) Malta is an island nation in the Mediterranean Sea, south of Sicily. Malta is a freight transportsite, which explains its larg e “trade share”. Many goods coming into Malta (imports into Malta)and immediately transported to other countries (as exports from Malta). Thus, Malta’s importsand exports and unlike the imports and exports of most other countries. Malta should not beincluded in the analysis.Chapter 5Regression with a Single Regressor:Hypothesis Tests and Confidence IntervalsSolutions to Empirical Exercises1. (a) ·AHE 3.32 0.45 u Age(0.97) (0.03)The t -statistic is 0.45/0.03 13.71, which has a p -value of 0.000, so the null hypothesis can berejected at the 1% level (and thus, also at the 10% and 5% levels).(b) 0.45 r 1.96 u 0.03 0.387 to 0.517(c) ·AHE 6.20 0.26 u Age(1.02) (0.03)The t -statistic is 0.26/0.03 7.43, which has a p -value of 0.000, so the null hypothesis can berejected at the 1% level (and thus, also at the 10% and 5% levels).(d) ·AHE 0.23 0.69 u Age(1.54) (0.05)The t -statistic is 0.69/0.05 13.06, which has a p -value of 0.000, so the null hypothesis can berejected at the 1% level (and thus, also at the 10% and 5% levels).(e) The difference in the estimated E 1 coefficients is 1,1,??College HighScool E E 0.69 0.26 0.43. Thestandard error of for the estimated difference is SE 1,1,??()College HighScoolE E (0.032 0.052)1/2 0.06, so that a 95% confidence interval for the difference is 0.43 r 1.96 u 0.06 0.32 to 0.54(dollars per hour).2. ·_ 4.000.13CourseEval Beauty u (0.03) (0.03)The t -statistic is 0.13/0.03 4.12, which has a p -value of 0.000, so the null hypothesis can be rejectedat the 1% level (and thus, also at the 10% and 5% levels).3. (a) ?Ed13.96 0.073 u Dist (0.04) (0.013)The t -statistic is 0.073/0.013 5.46, which has a p -value of 0.000, so the null hypothesis can be rejected at the 1% level (and thus, also at the 10% and 5% levels).(b) The 95% confidence interval is 0.073 r 1.96 u 0.013 or0.100 to 0.047.(c) ?Ed13.94 0.064 u Dist (0.05) (0.018)Solutions to Empirical Exercises in Chapter 5 113(d) ?Ed13.98 0.084 u Dist (0.06) (0.013)(e) The difference in the estimated E 1 coefficients is 1,1,??Female Male E E 0.064 ( 0.084) 0.020.The standard error of for the estimated difference is SE 1,1,??()Female Male E E (0.0182 0.0132)1/20.022, so that a 95% confidence interval for the difference is 0.020 r 1.96 u 0.022 or 0.022 to0.064. The difference is not statistically different.Chapter 6Linear Regression with Multiple RegressorsSolutions to Empirical Exercises1. Regressions used in (a) and (b)Regressor a bBeauty 0.133 0.166Intro 0.011OneCredit 0.634Female 0.173Minority 0.167NNEnglish 0.244Intercept 4.00 4.07SER 0.545 0.513R2 0.036 0.155(a) The estimated slope is 0.133(b) The estimated slope is 0.166. The coefficient does not change by an large amount. Thus, theredoes not appear to be large omitted variable bias.(c) Professor Smith’s predicted course evaluation (0.166 u 0)0.011 u 0) (0.634 u 0) (0.173 u0) (0.167 u 1) (0.244 u 0) 4.068 3.9012. Estimated regressions used in questionModelRegressor a bdist 0.073 0.032bytest 0.093female 0.145black 0.367hispanic 0.398incomehi 0.395ownhome 0.152dadcoll 0.696cue80 0.023stwmfg80 0.051intercept 13.956 8.827SER 1.81 1.84R2 0.007 0.279R0.007 0.277Solutions to Empirical Exercises in Chapter 6 115(a) 0.073(b) 0.032(c) The coefficient has fallen by more than 50%. Thus, it seems that result in (a) did suffer fromomitted variable bias.(d) The regression in (b) fits the data much better as indicated by the R2, 2,R and SER. The R2 and R are similar because the number of observations is large (n 3796).(e) Students with a “dadcoll 1” (so that the student’s father went to college) complete 0.696 moreyears of education, on average, than students with “dadcoll 0” (so that the student’s father didnot go to college).(f) These terms capture the opportunity cost of attending college. As STWMFG increases, forgonewages increase, so that, on average, college attendance declines. The negative sign on thecoefficient is consistent with this. As CUE80 increases, it is more difficult to find a job, whichlowers the opportunity cost of attending college, so that college attendance increases. Thepositive sign on the coefficient is consistent with this.(g) Bob’s predicted years of education 0.0315 u 2 0.093 u58 0.145 u 0 0.367 u 1 0.398 u0 0.395 u 1 0.152 u 1 0.696 u 0 0.023 u 7.5 0.051 u 9.75 8.82714.75(h) Jim’s expected years of education is 2 u 0.0315 0.0630 less than Bob’s. Thus, Jim’s expectedyears of education is 14.75 0.063 14.69.3.Variable Mean StandardDeviation Unitsgrowth 1.86 1.82 Percentage Pointsrgdp60 3131 2523 $1960tradeshare 0.542 0.229 unit freeyearsschool 3.95 2.55 yearsrev_coups 0.170 0.225 coups per yearassasinations 0.281 0.494 assasinations per yearoil 0 0 0–1 indicator variable (b) Estimated Regression (in table format):Regressor Coefficienttradeshare 1.34(0.88)yearsschool 0.56**(0.13)rev_coups 2.15*(0.87)assasinations 0.32(0.38)rgdp60 0.00046**(0.00012)intercept 0.626(0.869)SER 1.59R2 0.29R0.23116 Stock/Watson - Introduction to Econometrics - Second EditionThe coefficient on Rev_Coups is í2.15. An additional coup in a five year period, reduces theaverage year growth rate by (2.15/5) = 0.43% over this 25 year period. This means the GPD in 1995 is expected to be approximately .43×25 = 10.75% lower. This is a larg e effect.(c) The 95% confidence interval is 1.34 r 1.96 u 0.88 or 0.42 to 3.10. The coefficient is notstatistically significant at the 5% level.(d) The F-statistic is 8.18 which is larger than 1% critical value of 3.32.Chapter 7Hypothesis Tests and Confidence Intervals in Multiple RegressionSolutions to Empirical Exercises1. Estimated RegressionsModelRegressor a bAge 0.45(0.03)0.44 (0.03)Female 3.17(0.18)Bachelor 6.87(0.19)Intercept 3.32(0.97)SER 8.66 7.88R20.023 0.1902R0.022 0.190(a) The estimated slope is 0.45(b) The estimated marginal effect of Age on AHE is 0.44 dollars per year. The 95% confidenceinterval is 0.44 r 1.96 u 0.03 or 0.38 to 0.50.(c) The results are quite similar. Evidently the regression in (a) does not suffer from importantomitted variable bias.(d) Bob’s predicted average hourly earnings 0.44 u 26 3.17 u 0 6.87 u 0 3.32 $11.44Alexis’s predicted average hourly earnings 0.44 u 30 3.17 u 1 6.87 u 1 3.32 $20.22 (e) The regression in (b) fits the data much better. Gender and education are important predictors of earnings. The R2 and R are similar because the sample size is large (n 7986).(f) Gender and education are important. The F-statistic is 752, which is (much) larger than the 1%critical value of 4.61.(g) The omitted variables must have non-zero coefficients and must correlated with the includedregressor. From (f) Female and Bachelor have non-zero coefficients; yet there does not seem to be important omittedvariable bias, suggesting that the correlation of Age and Female and Age and Bachelor is small. (The sample correlations are ·Cor(Age, Female) 0.03 and·Cor(Age,Bachelor) 0.00).118 Stock/Watson - Introduction to Econometrics - Second Edition2.ModelRegressor a b cBeauty 0.13**(0.03) 0.17**(0.03)0.17(0.03)Intro 0.01(0.06)OneCredit 0.63**(0.11) 0.64** (0.10)Female 0.17**(0.05) 0.17** (0.05)Minority 0.17**(0.07) 0.16** (0.07)NNEnglish 0.24**(0.09) 0.25** (0.09)Intercept 4.00**(0.03) 4.07**(0.04)4.07**(0.04)SER 0.545 0.513 0.513R2 0.036 0.155 0.1552R0.034 0.144 0.145(a) 0.13 r 0.03 u 1.96 or 0.07 to 0.20(b) See the table above. Intro is not significant in (b), but the other variables are significant.A reasonable 95% confidence interval is 0.17 r 1.96 u 0.03 or0.11 to 0.23.Solutions to Empirical Exercises in Chapter 7 119 3.ModelRegressor (a) (b) (c)dist 0.073**(0.013) 0.031**(0.012)0.033**(0.013)bytest 0.092**(0.003) 0.093** (.003)female 0.143**(0.050) 0.144** (0.050)black 0.354**(0.067) 0.338** (0.069)hispanic 0.402**(0.074) 0.349** (0.077)incomehi 0.367**(0.062) 0.374** (0.062)ownhome 0.146*(0.065) 0.143* (0.065)dadcoll 0.570**(0.076) 0.574** (0.076)momcoll 0.379**(0.084) 0.379** (0.084)cue80 0.024**(0.009) 0.028** (0.010)stwmfg80 0.050*(0.020) 0.043* (0.020)urban 0.0652(0.063) tuition 0.184(0.099)intercept 13.956**(0.038) 8.861**(0.241)8.893**(0.243)F-statitisticfor urban and tuitionSER 1.81 1.54 1.54R2 0.007 0.282 0.284R0.007 0.281 0.281(a) The group’s claim is that the coefficien t on Dist is 0.075 ( 0.15/2). The 95% confidence forE Dist from column (a) is 0.073 r 1.96 u 0.013 or 0.099 to 0.046. The group’s claim is includedin the 95% confidence interval so that it is consistent with the estimated regression.120 Stock/Watson - Introduction to Econometrics - Second Edition(b) Column (b) shows the base specification controlling for other important factors. Here thecoefficient on Dist is 0.031, much different than the resultsfrom the simple regression in (a);when additional variables are added (column (c)), the coefficient on Dist changes little from the result in (b). From the base specification (b), the 95% confidence interval for E Dist is0.031 r1.96 u 0.012 or 0.055 to 0.008. Similar results are obtained from the regression in (c).(c) Yes, the estimated coefficients E Black and E Hispanic are positive, large, and statistically significant.Chapter 8Nonlinear Regression FunctionsSolutions to Empirical Exercises1. This table contains the results from seven regressions that are referenced in these answers.Data from 2004(1) (2) (3) (4) (5) (6) (7) (8)Dependent VariableAHE ln(AHE) ln(AHE) ln(AHE) ln(AHE) ln(AHE) ln(AHE) ln(AHE) Age 0.439**(0.030) 0.024**(0.002)0.147**(0.042)0.146**(0.042)0.190**(0.056)0.117*(0.056)0.160Age2 0.0021**(0.0007) 0.0021** (0.0007)0.0027**(0.0009)0.0017(0.0009)0.0023(0.0011)ln(Age) 0.725**(0.052)Female u Age 0.097 (0.084) 0.123 (0.084) Female u Age2 0.0015 (0.0014)0.0019 (0.0014) Bachelor u Age 0.064 (0.083)0.091 (0.084) Bachelor u Age2 0.0009 (0.0014) 0.0013 (0.0014) Female 3.158**(0.176) 0.180**(0.010)0.180**(0.010)0.180**(0.010)(0.014)1.358*(1.230)0.210**(0.014)1.764(1.239)Bachelor 6.865**(0.185) 0.405**(0.010)0.405**(0.010)0.405**(0.010)0.378**(0.014)0.378**(0.014)0.769(1.228)1.186(1.239)Female u Bachelor 0.064** (0.021) 0.063**(0.021)0.066**(0.021)0.066**(0.021)Intercept 1.884(0.897) 1.856**(0.053)0.128(0.177)0.059(0.613)0.078(0.612)0.633(0.819)0.604(0.819)0.095(0.945)F-statistic and p-values on joint hypotheses(a) F-statistic on terms involving Age 98.54(0.00)100.30(0.00)51.42(0.00)53.04(0.00)36.72(0.00)(b) Interaction termswithAge24.12(0.02)7.15(0.00)6.43(0.00)SER 7.884 0.457 0.457 0.457 0.457 0.456 0.456 0.456 R0.1897 0.1921 0.1924 0.1929 0.1937 0.1943 0.1950 0.1959 Significant at the *5% and **1% significance level.122 Stock/Watson - Introduction to Econometrics - Second Edition(a) The regression results for this question are shown in column (1) of the table. If Age increasesfrom 25 to 26, earnings are predicted to increase by $0.439 per hour. If Age increases from33 to 34, earnings are predicted to increase by $0.439 per hour. These values are the samebecause the regression is a linear function relating AHE and Age .(b) The regression results for this question are shown in column (2) of the table. If Age increasesfrom 25 to 26, ln(AHE ) is predicted to increase by 0.024. This means that earnings are predicted to increase by 2.4%. If Age increases from 34 to 35, ln(AHE ) is predicted to increase by 0.024.This means that earnings are predicted to increase by 2.4%. These values, in percentage terms,are the same because the regression is a linear function relating ln(AHE ) and Age .(c) The regression results for this question are shown in column (3) of the table. If Age increasesfrom 25 to 26, then ln(Age ) has increased by ln(26) ln(25) 0.0392 (or 3.92%). The predictedincrease in ln(AHE ) is 0.725 u (.0392) 0.0284. This means that earnings are predicted toincrease by 2.8%. If Age increases from 34 to 35, then ln(Age ) has increased by ln(35) ln(34) .0290 (or 2.90%). The predicted increase in ln(AHE ) is 0.725 u (0.0290) 0.0210. This means that earnings are predicted to increase by 2.10%.(d) When Age increases from 25 to 26, the predicted change in ln(AHE ) is(0.147 u 26 0.0021 u 262) (0.147 u 25 0.0021 u 252) 0.0399.This means that earnings are predicted to increase by 3.99%.When Age increases from 34 to 35, the predicted change in ln(AHE ) is(0. 147 u 35 0.0021 u 352) (0. 147 u 34 0.0021 u 342) 0.0063.This means that earnings are predicted to increase by 0.63%.(e) The regressions differ in their choice of one of the regressors. They can be compared on the basis of the .R The regression in (3) has a (marginally) higher 2,R so it is preferred.(f) The regression in (4) adds the variable Age 2 to regression(2). The coefficient on Age 2 isstatistically significant ( t 2.91), and this suggests that the addition of Age 2 is important. Thus,(4) is preferred to (2).(g) The regressions differ in their choice of one of the regressors. They can be compared on the basis of the .R The regression in (4) has a (marginally) higher 2,R so it is preferred.(h)Solutions to Empirical Exercises in Chapter 8 123 The regression functions using Age (2) and ln(Age) (3) are similar. The quadratic regression (4) is different. It shows a decreasing effect of Age on ln(AHE) as workers age.The regression functions for a female with a high school diploma will look just like these, but they will be shifted by the amount of the coefficient on the binary regressor Female. The regression functions for workers with a bachelor’s degree will also look just like these, but they would be shifted by the amount of the coefficient on the binary variable Bachelor.(i) This regression is shown in column (5). The coefficient on the interaction term Female uBachelor shows the “extra effect” of Bachelor on ln(AHE) for women relative the effect for men.Predicted values of ln(AHE):Alexis: 0.146 u 30 0.0021 u 302 0.180 u 1 0.405 u 1 0.064 u 1 0.078 4.504Jane: 0.146 u 30 0.0021 u 302 0.180 u 1 0.405 u 0 0.064 u 0 0.078 4.063Bob: 0.146 u 30 0.0021 u 302 0.180 u 0 0.405 u 1 0.064 u 0 0.078 4.651Jim: 0.146 u 30 0.0021 u 302 0.180 u 0 0.405 u 0 0.064 u 0 0.078 4.273Difference in ln(AHE): Alexis Jane 4.504 4.063 0.441Difference in ln(AHE): Bob Jim 4.651 4.273 0.378Notice that the difference in the difference predicted effects is 0.441 0.378 0.063, which is the value of the coefficient on the interaction term.(j) This regression is shown in (6), which includes two additional regressors: the interactions of Female and the age variables, Age and Age2. The F-statistic testing the restriction that the coefficients on these interaction terms is equal to zero is F 4.12 with a p-value of 0.02. This implies that there is statistically significant evidence (at the 5% level) that there is a different effect of Age on ln(AHE) for men and women.(k) This regression is shown in (7), which includes two additional regressors that are interactions of Bachelor and the age variables, Age and Age2. The F-statistic testing the restriction that the coefficients on these interaction terms is zero is 7.15 with a p-value of 0.00. This implies that there is statistically significant evidence (at the 1% level) that there is a different effect of Age on ln(AHE) for high school and college graduates.(l) Regression (8) includes Age and Age2 and interactions terms involving Female and Bachelor.The figure below shows the regressions predicted value of ln(AHE) for male and females with high school and college degrees.124 Stock/Watson - Introduction to Econometrics - Second EditionThe estimated regressions suggest that earnings increase as workers age from 25–35, the rangeof age studied in this sample. There is evidence that the quadratic term Age2 belongs in theregression. Curvature in the regression functions in particularly important for men.Gender and education are significant predictors of earnings, and there are statistically significant interaction effects between age and gender and age and education. The table below summarizes the regressions predictions for increases in earnings as a person ages from 25 to 32 and 32 to 35Gender, Education Predicted ln(AHE) at Age(Percent per year)25 32 35 25 to 32 32 to 35Males, High School 2.46 2.65 2.67 2.8% 0.5%Females, BA 2.68 2.89 2.93 3.0% 1.3%Males, BA 2.74 3.06 3.09 4.6% 1.0%Earnings for those with a college education are higher than those with a high school degree, andearnings of the college educated increase more rapidly early in their careers (age 25–32). Earnings for men are higher than those of women, and earnings of men increase more rapidly early in theircareers (age 25–32). For all categories of workers (men/women, high school/college) earningsincrease more rapidly from age 25–32 than from 32–35.。

cnn-pieee-lenet

cnn-pieee-lenet

Gradient-Based Learning Appliedto Document RecognitionYANN LECUN,MEMBER,IEEE,L´EON BOTTOU,YOSHUA BENGIO,AND PATRICK HAFFNER Invited PaperMultilayer neural networks trained with the back-propagation algorithm constitute the best example of a successful gradient-based learning technique.Given an appropriate network architecture,gradient-based learning algorithms can be used to synthesize a complex decision surface that can classify high-dimensional patterns,such as handwritten characters,with minimal preprocessing.This paper reviews various methods applied to handwritten character recognition and compares them on a standard handwritten digit recognition task.Convolutional neural networks,which are specifically designed to deal with the variability of two dimensional(2-D)shapes,are shown to outperform all other techniques.Real-life document recognition systems are composed of multiple modules includingfield extraction,segmentation,recognition, and language modeling.A new learning paradigm,called graph transformer networks(GTN’s),allows such multimodule systems to be trained globally using gradient-based methods so as to minimize an overall performance measure.Two systems for online handwriting recognition are described. Experiments demonstrate the advantage of global training,and theflexibility of graph transformer networks.A graph transformer network for reading a bank check is also described.It uses convolutional neural network character recognizers combined with global training techniques to provide record accuracy on business and personal checks.It is deployed commercially and reads several million checks per day. Keywords—Convolutional neural networks,document recog-nition,finite state transducers,gradient-based learning,graphtransformer networks,machine learning,neural networks,optical character recognition(OCR).N OMENCLATUREGT Graph transformer.GTN Graph transformer network.HMM Hidden Markov model.HOS Heuristic oversegmentation.K-NN K-nearest neighbor.Manuscript received November1,1997;revised April17,1998.Y.LeCun,L.Bottou,and P.Haffner are with the Speech and Image Processing Services Research Laboratory,AT&T Labs-Research,Red Bank,NJ07701USA.Y.Bengio is with the D´e partement d’Informatique et de Recherche Op´e rationelle,Universit´e de Montr´e al,Montr´e al,Qu´e bec H3C3J7Canada. Publisher Item Identifier S0018-9219(98)07863-3.NN Neural network.OCR Optical character recognition.PCA Principal component analysis.RBF Radial basis function.RS-SVM Reduced-set support vector method. SDNN Space displacement neural network.SVM Support vector method.TDNN Time delay neural network.V-SVM Virtual support vector method.I.I NTRODUCTIONOver the last several years,machine learning techniques, particularly when applied to NN’s,have played an increas-ingly important role in the design of pattern recognition systems.In fact,it could be argued that the availability of learning techniques has been a crucial factor in the recent success of pattern recognition applications such as continuous speech recognition and handwriting recognition. The main message of this paper is that better pattern recognition systems can be built by relying more on auto-matic learning and less on hand-designed heuristics.This is made possible by recent progress in machine learning and computer ing character recognition as a case study,we show that hand-crafted feature extraction can be advantageously replaced by carefully designed learning machines that operate directly on pixel ing document understanding as a case study,we show that the traditional way of building recognition systems by manually integrating individually designed modules can be replaced by a unified and well-principled design paradigm,called GTN’s,which allows training all the modules to optimize a global performance criterion.Since the early days of pattern recognition it has been known that the variability and richness of natural data, be it speech,glyphs,or other types of patterns,make it almost impossible to build an accurate recognition system entirely by hand.Consequently,most pattern recognition systems are built using a combination of automatic learning techniques and hand-crafted algorithms.The usual method0018–9219/98$10.00©1998IEEE2278PROCEEDINGS OF THE IEEE,VOL.86,NO.11,NOVEMBER1998Fig.1.Traditional pattern recognition is performed with two modules:afixed feature extractor and a trainable classifier.of recognizing individual patterns consists in dividing the system into two main modules shown in Fig.1.Thefirst module,called the feature extractor,transforms the input patterns so that they can be represented by low-dimensional vectors or short strings of symbols that:1)can be easily matched or compared and2)are relatively invariant with respect to transformations and distortions of the input pat-terns that do not change their nature.The feature extractor contains most of the prior knowledge and is rather specific to the task.It is also the focus of most of the design effort, because it is often entirely hand crafted.The classifier, on the other hand,is often general purpose and trainable. One of the main problems with this approach is that the recognition accuracy is largely determined by the ability of the designer to come up with an appropriate set of features. This turns out to be a daunting task which,unfortunately, must be redone for each new problem.A large amount of the pattern recognition literature is devoted to describing and comparing the relative merits of different feature sets for particular tasks.Historically,the need for appropriate feature extractors was due to the fact that the learning techniques used by the classifiers were limited to low-dimensional spaces with easily separable classes[1].A combination of three factors has changed this vision over the last decade.First, the availability of low-cost machines with fast arithmetic units allows for reliance on more brute-force“numerical”methods than on algorithmic refinements.Second,the avail-ability of large databases for problems with a large market and wide interest,such as handwriting recognition,has enabled designers to rely more on real data and less on hand-crafted feature extraction to build recognition systems. The third and very important factor is the availability of powerful machine learning techniques that can handle high-dimensional inputs and can generate intricate decision functions when fed with these large data sets.It can be argued that the recent progress in the accuracy of speech and handwriting recognition systems can be attributed in large part to an increased reliance on learning techniques and large training data sets.As evidence of this fact,a large proportion of modern commercial OCR systems use some form of multilayer NN trained with back propagation.In this study,we consider the tasks of handwritten character recognition(Sections I and II)and compare the performance of several learning techniques on a benchmark data set for handwritten digit recognition(Section III). While more automatic learning is beneficial,no learning technique can succeed without a minimal amount of prior knowledge about the task.In the case of multilayer NN’s, a good way to incorporate knowledge is to tailor its archi-tecture to the task.Convolutional NN’s[2],introduced in Section II,are an example of specialized NN architectures which incorporate knowledge about the invariances of two-dimensional(2-D)shapes by using local connection patterns and by imposing constraints on the weights.A comparison of several methods for isolated handwritten digit recogni-tion is presented in Section III.To go from the recognition of individual characters to the recognition of words and sentences in documents,the idea of combining multiple modules trained to reduce the overall error is introduced in Section IV.Recognizing variable-length objects such as handwritten words using multimodule systems is best done if the modules manipulate directed graphs.This leads to the concept of trainable GTN,also introduced in Section IV. Section V describes the now classical method of HOS for recognizing words or other character strings.Discriminative and nondiscriminative gradient-based techniques for train-ing a recognizer at the word level without requiring manual segmentation and labeling are presented in Section VI. Section VII presents the promising space-displacement NN approach that eliminates the need for segmentation heuris-tics by scanning a recognizer at all possible locations on the input.In Section VIII,it is shown that trainable GTN’s can be formulated as multiple generalized transductions based on a general graph composition algorithm.The connections between GTN’s and HMM’s,commonly used in speech recognition,is also treated.Section IX describes a globally trained GTN system for recognizing handwriting entered in a pen computer.This problem is known as “online”handwriting recognition since the machine must produce immediate feedback as the user writes.The core of the system is a convolutional NN.The results clearly demonstrate the advantages of training a recognizer at the word level,rather than training it on presegmented, hand-labeled,isolated characters.Section X describes a complete GTN-based system for reading handwritten and machine-printed bank checks.The core of the system is the convolutional NN called LeNet-5,which is described in Section II.This system is in commercial use in the NCR Corporation line of check recognition systems for the banking industry.It is reading millions of checks per month in several banks across the United States.A.Learning from DataThere are several approaches to automatic machine learn-ing,but one of the most successful approaches,popularized in recent years by the NN community,can be called“nu-merical”or gradient-based learning.The learning machine computes afunction th input pattern,andtheoutputthatminimizesand the error rate on the trainingset decreases with the number of training samplesapproximatelyasis the number of trainingsamples,is a number between0.5and1.0,andincreases,decreases.Therefore,when increasing thecapacitythat achieves the lowest generalizationerror Mostlearning algorithms attempt tominimize as well assome estimate of the gap.A formal version of this is calledstructural risk minimization[6],[7],and it is based on defin-ing a sequence of learning machines of increasing capacity,corresponding to a sequence of subsets of the parameterspace such that each subset is a superset of the previoussubset.In practical terms,structural risk minimization isimplemented byminimizingisaconstant.that belong to high-capacity subsets ofthe parameter space.Minimizingis a real-valuedvector,with respect towhichis iteratively adjusted asfollows:is updated on the basis of a singlesampleof several layers of processing,i.e.,the back-propagation algorithm.The third event was the demonstration that the back-propagation procedure applied to multilayer NN’s with sigmoidal units can solve complicated learning tasks. The basic idea of back propagation is that gradients can be computed efficiently by propagation from the output to the input.This idea was described in the control theory literature of the early1960’s[16],but its application to ma-chine learning was not generally realized then.Interestingly, the early derivations of back propagation in the context of NN learning did not use gradients but“virtual targets”for units in intermediate layers[17],[18],or minimal disturbance arguments[19].The Lagrange formalism used in the control theory literature provides perhaps the best rigorous method for deriving back propagation[20]and for deriving generalizations of back propagation to recurrent networks[21]and networks of heterogeneous modules[22].A simple derivation for generic multilayer systems is given in Section I-E.The fact that local minima do not seem to be a problem for multilayer NN’s is somewhat of a theoretical mystery. It is conjectured that if the network is oversized for the task(as is usually the case in practice),the presence of “extra dimensions”in parameter space reduces the risk of unattainable regions.Back propagation is by far the most widely used neural-network learning algorithm,and probably the most widely used learning algorithm of any form.D.Learning in Real Handwriting Recognition Systems Isolated handwritten character recognition has been ex-tensively studied in the literature(see[23]and[24]for reviews),and it was one of the early successful applications of NN’s[25].Comparative experiments on recognition of individual handwritten digits are reported in Section III. They show that NN’s trained with gradient-based learning perform better than all other methods tested here on the same data.The best NN’s,called convolutional networks, are designed to learn to extract relevant features directly from pixel images(see Section II).One of the most difficult problems in handwriting recog-nition,however,is not only to recognize individual charac-ters,but also to separate out characters from their neighbors within the word or sentence,a process known as seg-mentation.The technique for doing this that has become the“standard”is called HOS.It consists of generating a large number of potential cuts between characters using heuristic image processing techniques,and subsequently selecting the best combination of cuts based on scores given for each candidate character by the recognizer.In such a model,the accuracy of the system depends upon the quality of the cuts generated by the heuristics,and on the ability of the recognizer to distinguish correctly segmented characters from pieces of characters,multiple characters, or otherwise incorrectly segmented characters.Training a recognizer to perform this task poses a major challenge because of the difficulty in creating a labeled database of incorrectly segmented characters.The simplest solution consists of running the images of character strings through the segmenter and then manually labeling all the character hypotheses.Unfortunately,not only is this an extremely tedious and costly task,it is also difficult to do the labeling consistently.For example,should the right half of a cut-up four be labeled as a one or as a noncharacter?Should the right half of a cut-up eight be labeled as a three?Thefirst solution,described in Section V,consists of training the system at the level of whole strings of char-acters rather than at the character level.The notion of gradient-based learning can be used for this purpose.The system is trained to minimize an overall loss function which measures the probability of an erroneous answer.Section V explores various ways to ensure that the loss function is differentiable and therefore lends itself to the use of gradient-based learning methods.Section V introduces the use of directed acyclic graphs whose arcs carry numerical information as a way to represent the alternative hypotheses and introduces the idea of GTN.The second solution,described in Section VII,is to eliminate segmentation altogether.The idea is to sweep the recognizer over every possible location on the input image,and to rely on the“character spotting”property of the recognizer,i.e.,its ability to correctly recognize a well-centered character in its inputfield,even in the presence of other characters besides it,while rejecting images containing no centered characters[26],[27].The sequence of recognizer outputs obtained by sweeping the recognizer over the input is then fed to a GTN that takes linguistic constraints into account andfinally extracts the most likely interpretation.This GTN is somewhat similar to HMM’s,which makes the approach reminiscent of the classical speech recognition[28],[29].While this technique would be quite expensive in the general case,the use of convolutional NN’s makes it particularly attractive because it allows significant savings in computational cost.E.Globally Trainable SystemsAs stated earlier,most practical pattern recognition sys-tems are composed of multiple modules.For example,a document recognition system is composed of afield loca-tor(which extracts regions of interest),afield segmenter (which cuts the input image into images of candidate characters),a recognizer(which classifies and scores each candidate character),and a contextual postprocessor,gen-erally based on a stochastic grammar(which selects the best grammatically correct answer from the hypotheses generated by the recognizer).In most cases,the information carried from module to module is best represented as graphs with numerical information attached to the arcs. For example,the output of the recognizer module can be represented as an acyclic graph where each arc contains the label and the score of a candidate character,and where each path represents an alternative interpretation of the input string.Typically,each module is manually optimized,or sometimes trained,outside of its context.For example,the character recognizer would be trained on labeled images of presegmented characters.Then the complete system isLECUN et al.:GRADIENT-BASED LEARNING APPLIED TO DOCUMENT RECOGNITION2281assembled,and a subset of the parameters of the modules is manually adjusted to maximize the overall performance. This last step is extremely tedious,time consuming,and almost certainly suboptimal.A better alternative would be to somehow train the entire system so as to minimize a global error measure such as the probability of character misclassifications at the document level.Ideally,we would want tofind a good minimum of this global loss function with respect to all theparameters in the system.If the loss functionusing gradient-based learning.However,at first glance,it appears that the sheer size and complexity of the system would make this intractable.To ensure that the global loss functionwithrespect towith respect toFig.2.Architecture of LeNet-5,a convolutional NN,here used for digits recognition.Each plane is a feature map,i.e.,a set of units whose weights are constrained to be identical.or other2-D or one-dimensional(1-D)signals,must be approximately size normalized and centered in the input field.Unfortunately,no such preprocessing can be perfect: handwriting is often normalized at the word level,which can cause size,slant,and position variations for individual characters.This,combined with variability in writing style, will cause variations in the position of distinctive features in input objects.In principle,a fully connected network of sufficient size could learn to produce outputs that are invari-ant with respect to such variations.However,learning such a task would probably result in multiple units with similar weight patterns positioned at various locations in the input so as to detect distinctive features wherever they appear on the input.Learning these weight configurations requires a very large number of training instances to cover the space of possible variations.In convolutional networks,as described below,shift invariance is automatically obtained by forcing the replication of weight configurations across space. Secondly,a deficiency of fully connected architectures is that the topology of the input is entirely ignored.The input variables can be presented in any(fixed)order without af-fecting the outcome of the training.On the contrary,images (or time-frequency representations of speech)have a strong 2-D local structure:variables(or pixels)that are spatially or temporally nearby are highly correlated.Local correlations are the reasons for the well-known advantages of extracting and combining local features before recognizing spatial or temporal objects,because configurations of neighboring variables can be classified into a small number of categories (e.g.,edges,corners,etc.).Convolutional networks force the extraction of local features by restricting the receptive fields of hidden units to be local.A.Convolutional NetworksConvolutional networks combine three architectural ideas to ensure some degree of shift,scale,and distortion in-variance:1)local receptivefields;2)shared weights(or weight replication);and3)spatial or temporal subsampling.A typical convolutional network for recognizing characters, dubbed LeNet-5,is shown in Fig.2.The input plane receives images of characters that are approximately size normalized and centered.Each unit in a layer receives inputs from a set of units located in a small neighborhood in the previous layer.The idea of connecting units to local receptivefields on the input goes back to the perceptron in the early1960’s,and it was almost simultaneous with Hubel and Wiesel’s discovery of locally sensitive,orientation-selective neurons in the cat’s visual system[30].Local connections have been used many times in neural models of visual learning[2],[18],[31]–[34].With local receptive fields neurons can extract elementary visual features such as oriented edges,endpoints,corners(or similar features in other signals such as speech spectrograms).These features are then combined by the subsequent layers in order to detect higher order features.As stated earlier,distortions or shifts of the input can cause the position of salient features to vary.In addition,elementary feature detectors that are useful on one part of the image are likely to be useful across the entire image.This knowledge can be applied by forcing a set of units,whose receptivefields are located at different places on the image,to have identical weight vectors[15], [32],[34].Units in a layer are organized in planes within which all the units share the same set of weights.The set of outputs of the units in such a plane is called a feature map. Units in a feature map are all constrained to perform the same operation on different parts of the image.A complete convolutional layer is composed of several feature maps (with different weight vectors),so that multiple features can be extracted at each location.A concrete example of this is thefirst layer of LeNet-5shown in Fig.2.Units in thefirst hidden layer of LeNet-5are organized in six planes,each of which is a feature map.A unit in a feature map has25inputs connected to a5case of LeNet-5,at each input location six different types of features are extracted by six units in identical locations in the six feature maps.A sequential implementation of a feature map would scan the input image with a single unit that has a local receptive field and store the states of this unit at corresponding locations in the feature map.This operation is equivalent to a convolution,followed by an additive bias and squashing function,hence the name convolutional network.The kernel of the convolution is the set of connection weights used by the units in the feature map.An interesting property of convolutional layers is that if the input image is shifted,the feature map output will be shifted by the same amount,but it will be left unchanged otherwise.This property is at the basis of the robustness of convolutional networks to shifts and distortions of the input.Once a feature has been detected,its exact location becomes less important.Only its approximate position relative to other features is relevant.For example,once we know that the input image contains the endpoint of a roughly horizontal segment in the upper left area,a corner in the upper right area,and the endpoint of a roughly vertical segment in the lower portion of the image,we can tell the input image is a seven.Not only is the precise position of each of those features irrelevant for identifying the pattern,it is potentially harmful because the positions are likely to vary for different instances of the character.A simple way to reduce the precision with which the position of distinctive features are encoded in a feature map is to reduce the spatial resolution of the feature map.This can be achieved with a so-called subsampling layer,which performs a local averaging and a subsampling,thereby reducing the resolution of the feature map and reducing the sensitivity of the output to shifts and distortions.The second hidden layer of LeNet-5is a subsampling layer.This layer comprises six feature maps,one for each feature map in the previous layer.The receptive field of each unit is a232p i x e l i m a g e .T h i s i s s i g n i fic a n tt h e l a r g e s t c h a r a c t e r i n t h e d a t a b a s e (a t28fie l d ).T h e r e a s o n i s t h a t i t it h a t p o t e n t i a l d i s t i n c t i v e f e a t u r e s s u c h o r c o r n e r c a n a p p e a r i n t h e c e n t e r o f t h o f t h e h i g h e s t l e v e l f e a t u r e d e t e c t o r s .o f c e n t e r s o f t h e r e c e p t i v e fie l d s o f t h e l a y e r (C 3,s e e b e l o w )f o r m a 2032i n p u t .T h e v a l u e s o f t h e i n p u t p i x e l s o t h a t t h e b a c k g r o u n d l e v e l (w h i t e )c o ro fa n d t h e f o r e g r o u n d (b l ac k )c o r r e s p T h i s m a k e s t h e m e a n i n p u t r o u g h l y z e r o r o u g h l y o n e ,w h i c h a c c e l e r a t e s l e a r n i n g I n t h e f o l l o w i n g ,c o n v o l u t i o n a l l a y e r s u b s a m p l i n g l a y e r s a r e l a b e l ed S x ,a n d l a ye r s a r e l a b e l e d F x ,w h e r e x i s t h e l a y L a y e r C 1i s a c o n v o l u t i o n a l l a y e r w i t h E a c h u n i t i n e a c hf e a t u r e m a p i s c o n n e c t28w h i c h p r e v e n t s c o n n e c t i o n f r o m t h e i n p t h e b o u n d a r y .C 1c o n t a i n s 156t r a i n a b l 122304c o n n e c t i o n s .L a y e r S 2i s a s u b s a m p l i n g l a y e r w i t h s i s i z e 142n e i g h b o r h o o d i n t h e c o r r e s p o n d i n g f T h e f o u r i n p u t s t o a u n i t i n S 2a r e a d d e d ,2284P R O C E E D I N G S O F T H E I E E E ,V O L .86,N O .11,N O VTable 1Each Column Indicates Which Feature Map in S2Are Combined by the Units in a Particular Feature Map ofC3a trainable coefficient,and then added to a trainable bias.The result is passed through a sigmoidal function.The25neighborhoods at identical locations in a subset of S2’s feature maps.Table 1shows the set of S2feature maps combined by each C3feature map.Why not connect every S2feature map to every C3feature map?The reason is twofold.First,a noncomplete connection scheme keeps the number of connections within reasonable bounds.More importantly,it forces a break of symmetry in the network.Different feature maps are forced to extract dif-ferent (hopefully complementary)features because they get different sets of inputs.The rationale behind the connection scheme in Table 1is the following.The first six C3feature maps take inputs from every contiguous subsets of three feature maps in S2.The next six take input from every contiguous subset of four.The next three take input from some discontinuous subsets of four.Finally,the last one takes input from all S2feature yer C3has 1516trainable parameters and 156000connections.Layer S4is a subsampling layer with 16feature maps of size52neighborhood in the corresponding feature map in C3,in a similar way as C1and yer S4has 32trainable parameters and 2000connections.Layer C5is a convolutional layer with 120feature maps.Each unit is connected to a55,the size of C5’s feature maps is11.This process of dynamically increasing thesize of a convolutional network is described in Section yer C5has 48120trainable connections.Layer F6contains 84units (the reason for this number comes from the design of the output layer,explained below)and is fully connected to C5.It has 10164trainable parameters.As in classical NN’s,units in layers up to F6compute a dot product between their input vector and their weight vector,to which a bias is added.This weighted sum,denotedforunit (6)wheredeterminesits slope at the origin.Thefunctionis chosen to be1.7159.The rationale for this choice of a squashing function is given in Appendix A.Finally,the output layer is composed of Euclidean RBF units,one for each class,with 84inputs each.The outputs of each RBFunit(7)In other words,each output RBF unit computes the Eu-clidean distance between its input vector and its parameter vector.The further away the input is from the parameter vector,the larger the RBF output.The output of a particular RBF can be interpreted as a penalty term measuring the fit between the input pattern and a model of the class associated with the RBF.In probabilistic terms,the RBF output can be interpreted as the unnormalized negative log-likelihood of a Gaussian distribution in the space of configurations of layer F6.Given an input pattern,the loss function should be designed so as to get the configuration of F6as close as possible to the parameter vector of the RBF that corresponds to the pattern’s desired class.The parameter vectors of these units were chosen by hand and kept fixed (at least initially).The components of thoseparameters vectors were set to1.While they could have been chosen at random with equal probabilities for1,or even chosen to form an error correctingcode as suggested by [47],they were instead designed to represent a stylized image of the corresponding character class drawn on a7。

GRE分类模拟题23_真题(含答案与解析)-交互(876)

GRE分类模拟题23_真题(含答案与解析)-交互(876)

GRE分类模拟题23(总分100, 做题时间90分钟)基础填空1.Mongolian gazelles are the dominant herbivore in Mongolia"s eastern steppe, but they are ______ by the ongoing loss of their habitat.SSS_MULTI_SELA threatenedB trespassedC invadedD establishedE strengthenedF menaced该题您未回答:х该问题分值: 2.5答案:A,F[解析] ● but说明前后反义重复。

● 空格与dominant(处于支配地位的)反义重复。

● threaten威胁,trespass冒犯,invade冒犯,establish创立,strengthen加强,menace威胁。

答案选AF。

● 注:这里trespass和invade是干扰项,同表示“冒犯”。

2.A trained anthropologist raised in an **munity, Sven Haakanson has stressed the importance of **munities ______ anthropologists so that **munities can take an active role in ______ their own cultural heritage.SSS_MULTI_SELA oustingB collaborating withC contending withD describingE exemptingF depleting该题您未回答:х该问题分值: 2.5答案:B,D[解析] ● so that说明前后同义重复。

● 第一空与take an active role in(扮演……角色)同义重复。

2024年教师资格(中学)-英语学科知识与教学能力(初中)考试历年真题摘选附带答案

2024年教师资格(中学)-英语学科知识与教学能力(初中)考试历年真题摘选附带答案

2024年教师资格(中学)-英语学科知识与教学能力(初中)考试历年真题摘选附带答案第1卷一.全考点押密题库(共100题)1.(单项选择题)(每题2.00 分) Language is a tool of communication. The symbol “Highway Closed” on a highway serves→ ←.A. an expressive functionB. an informative functionC. a performative functionD. a persuasive function2.(单项选择题)(每题 2.00 分) I’ve loved my mother's desk since I was just tall enough to see above the top of it as mother sat writing letters. Standing by her chair, looking at the ink bottle, pens, and white paper, I decided that the act of writing must be the most wonderful thing in the world.Years later, during her final illness, mother kept different things for my sister and brother.“But the desk.” she’d said again, “It's for Elizabeth.”I never saw her angry, never saw her cry. I knew she loved me; she showed it in action. But as a young girl, I wanted heart-to-heart talks between mother and daughter.They never happened. And a gulf opened between us. I was “too emotional”. But she lived on the surface”.As years passed I had my own family. I loved my mother and thanked her for our happy family.I wrote to her in careful words and asked her to let me know in any way she chose that she did forgive me.I posted the letter and waited for her answer.None came.My hope turned to disappointment, then little interest and, finally, peace—it seemed that nothing happene I couldn’t be sure that the letter had even got to mother.I only knew that I had written it, and I could stop trying to make her into someone she was not.Now the present of her desk told, as she'd never been able to, that she was pleased thatwriting was my chosen work. I cleaned the desk carefully and found some papers inside—a photo of my father and a one—page letter, folded and refolded many times.Give me an answer, my letter asks, in any way you choose. Mother, you always choose the actthat speaks louder than words. The underlined word “gulf” in the passage means→ ←.A. deep understanding between the old and the youngB. different ideas between the mother and the daughterC. free talks between mother and daughterD. part of the sea going far in land3.(单项选择题)(每题 2.00 分) 40 years ago the idea of disabled people doing sport was never heard of. But when the annual games for the disabled were started at Stoke-Mandeville, Englandin 1948 by Sir Ludwig Guttmann, the situation began to change.Sir Ludwig Guttmann, who had been driven to England in 1939 from Nazi Germany, had been askedby the British government to set up an injuries center at Stoke Mandeville Hospital near London. His ideas about treating injuries included sport for the disabled.In the first games just two teams of injured soldiers took part. The next year, 1949, five teams took part. From those beginnings, things have developed fast Teams now come from abroadto Stoke Mandeville every year. In 1960 the first Olympics for the Disabled were held in Rome, in the same place as the normal Olympic Games. Now, every four years the Olympic Gamesfor the Disabled are held, if possible, in the same place as the normal Olympic Games, althoughthey are organized separately. In other years Games for the Disabled are still held at Stoke Mandeville. In the 1984 wheelchair Olympic Games, 1064 wheelchair athletes from about 40 countries took part. Unfortunately, they were held at Stoke Mandeville and not in Los Angeles, along with the other Olympics.The Games have been a great success in promoting international friendship and understanding,and in proving that being disabled does not mean you can’t enjoy sport. One small sourceof disappointment for those who organize and take part in the games, however, has been the unwillingness of the International Olympic Committee to include disabled events at Olympic Games for the able-bodied. Perhaps a few more years are still needed to convince those fortunate enough not to be disabled that their disabled fellow athletes should not be excluded. The first games for the disabled were held→ ←after Sir Ludwig Guttmann arrived in England.A. 40 yearsB. 21 yearsC. 10 yearsD. 9 years4.(单项选择题)(每题 2.00 分) I’ve loved my mother's desk since I was just tall enough tosee above the top of it as mother sat writing letters. Standing by her chair, looking atthe ink bottle, pens, and white paper, I decided that the act of writing must be the most wonderful thing in the world.Years later, during her final illness, mother kept different things for my sister and brother.“But the desk.” she’d said again, “It's for Elizabeth.”I never saw her angry, never saw her cry. I knew she loved me; she showed it in action. Butas a young girl, I wanted heart-to-heart talks between mother and daughter.They never happened. And a gulf opened between us. I was “too emotional”. But she livedon the surface”.As years passed I had my own family. I loved my mother and thanked her for our happy family.Iwrote to her in careful words and asked her to let me know in any way she chose that she did forgive me.I posted the letter and waited for her answer.None came.My hope turned to disappointment, then little interest and, finally, peace—it seemed that nothing happene I couldn’t be sure that the letter had even got to mother.I only knew thatI had written it, and I could stop trying to make her into someone she was not.Now the present of her desk told, as she'd never been able to, that she was pleased that writing was my chosen work. I cleaned the desk carefully and found some papers inside—a photo of my father and a one—page letter, folded and refolded many times.Give me an answer, my letter asks, in any way you choose. Mother, you always choose the act that speaks louder than words. The passage shows that→ ←.A. mother was cold on the surface but kind in her heart to her daughterB. mother was too serious about everything her daughter had doneC. mother eared much about her daughter in wordsD. mother wrote to her daughter in careful words5.(单项选择题)(每题 2.00 分) In which of the following situations is the teacher playing the role of observer?→ ←A. Giving feedback and dealing with errors.B. Organizing students to do activities by giving instructions.C. Walking around to see how each student performs in group work.D. Offering help to those who need it both in ideas and language.6.(单项选择题)(每题 2.00 分) One evening, while Marcos Ugarte was doing his homework and his father, Eduardo, prepared lesson plans, they heard someone yelling outside. Eduardo, 47, and Marcos, 15, stepped onto the porch of their home in Troutdale, Oregon, and saw a commotion four doors d own, outside the home of their neighbors, the Ma family. “I didn’t think anything was wrong.” Eduardo recalls. “I told Marcos we should give them some privacy.” He headed back inside, but Marcos’s eye was caught by a glow from the Ma house."Dad, the house is on fire!” Marcos cried.Clad only in shorts, the barefoot teen sprinted toward the Ma’s home with his dad. Grandmother Yim Ma, mother Suzanne Ma, and son Nathan Ma were gathered on the front lawn yelling for help. When the Ugartes got there,they saw father Alex Ma stumbling down the stairs, coughing, his face black with soot."Is anyone else in the house?” Eduardo asked."My son!” Alex managed to say, pointing to the second floor Eduardo started up the stairs, but thick, black smoke, swirling ash, and intense heat forced him to his knees. He crawled upstairs and down the hall where Alex said he would find Cody, eight, who had locked himselfin a bedroom.As the fire raged across the hall, Eduardo banged on the bedroom door and tried to turn the doorknob. Cody didn’t respond Eduardo made his way back downstairs.Meanwhile, Marcos saw Yim and Suzanne pulling an aluminum ladder out of the garage. “Cody was standing at the window, screaming for help,"says Marcos, "I knew I had to do something.” He grabbed the ladder, positioned it near the window, and climbed toward the boy.When Marcos reached the window, he pushed the screen into the room and coaxed Cody out. “It’s OK.” Marcos told him. “I‘ve got you.”Holding Cody with one arm. Marcos descended the ladder.When firefighters arrived, plumes of black smoke were billowing from the back of the house as flames engulfed the second floor. Emergency personnel took Cody to a nearby hospital,where he was treated for smoke inhalation and released. No one else was injured. The causeof the blaze is still under investigation."You just don’t see a teenager have that kind of→ composure←," says Mark Maunder, GreshamFire Department battalion chief.The Ma family relocated. The day after the fire, Alex visited Marcos. “Than k you for savingmy son.” Alex said. “You are his hero forever.”What does the underlined word “composure” in the last but one PARAGRAPH mean?→ ←.A. sympathyB. braveryC. calmnessD. warm-heartedness7.(单项选择题)(每题 2.00 分) Resources can be said to be scarce in both an absolute and relative sense: the surface of the Earth is finite, imposing absolute scarcity; but the scarcity that concerns economists is the relative scarcity of resources in different uses. Materials used for one purpose cannot at the same time be used for other purposes, if the quantity of an input is limited, the increased use of it in one manufacturing process must cause it to become less available for other uses.The cost of a product in terms of money may not measure its true cost to society. The truecost of, say, the construction of a supersonic jet is the value of the schools and refrigerators that will never be built as a result. Every act of production uses up someof society's available resources; it means the foregoing of an opportunity to produce something else. In deciding how to use resources most effectively to satisfy the wants ofthe community, this opportunity cost must ultimately be taken into account.In a market economy the price of a good and the quantity supplied depend on the cost of makingit, and that cost, ultimately, is the cost of not making other goods. The market mechanism enforces this relationship. The cost of, say, a pair of shoes is the price of the leather,the labor, the fuel, and other elements used up in producing them. But the price of thesein- puts, in turn, depends on what they can produce elsewhere-if the leather can be usedto produce handbags that are valued highly by consumers, the prices of leather will be bidup correspondingly. W hat does this passage mainly discuss?→ ←A. The scarcity of manufactured goods.B. The value of scarce materials.C. The manufacturing of scarce goods.D. The cost of producing shoes.8.(单项选择题)(每题 2.00 分) Writing exercises like copying, fill-in, completions and transformation are mainly the type of exercises used in→ ←.A. controlled writingB. guided writingC. flee writingD. expressive writing9.(单项选择题)(每题 2.00 分) Which of the following consonants doesn't fall under the same category according to the voicing?________A. [m]B. [b]C. [d]D. [p]10.(单项选择题)(每题 2.00 分) Which of the following is most suitable for the cultivation of linguistic competence?→ ←A. sentence-makingB. cue-card dialogueC. simulated dialogueD. learning syntax11.(单项选择题)(每题 2.00 分) Which of the following sets of phonetic features characterizes the English phoneme [u:]?→ ←A. [high, back, rounded]B. [high, back, unfounded]C. [low, back, rounded]D. [low, front unfounded]12.(单项选择题)(每题 2.00 分) 40 years ago the idea of disabled people doing sport was never heard of. But when the annual games for the disabled were started at Stoke-Mandeville, England in 1948 by Sir Ludwig Guttmann, the situation began to change.Sir Ludwig Guttmann, who had been driven to England in 1939 from Nazi Germany, had been asked by the British government to set up an injuries center at Stoke Mandeville Hospital near London. His ideas about treating injuries included sport for the disabled.In the first games just two teams of injured soldiers took part. The next year, 1949, five teams took part. From those beginnings, things have developed fast Teams now come from abroad to Stoke Mandeville every year. In 1960 the first Olympics for the Disabled were held in Rome, in the same place as the normal Olympic Games. Now, every four years the Olympic Games for the Disabled are held, if possible, in the same place as the normal Olympic Games, although they are organized separately. In other years Games for the Disabled are still held at Stoke Mandeville. In the 1984 wheelchair Olympic Games, 1064 wheelchair athletes from about 40 countries took part. Unfortunately, they were held at Stoke Mandeville and not in Los Angeles, along with the other Olympics.The Games have been a great success in promoting international friendship and understanding, and in proving that being disabled does not mean you can’t enjoy sport. One small source of disappointment for those who organize and take part in the games, however, has been the unwillingness of the International Olympic Committee to include disabled events at Olympic Games for the able-bodied. Perhaps a few more years are still needed to convince those fortunate enough not to be disabled that their disabled fellow athletes should not be excluded. Besides Stoke Mandeville, surely the games for the disabled were once held in→ ←.A. New YorkB. LondonC. RomeD. Los Angeles13.(单项选择题)(每题 2.00 分) —I’m going to study engineering in Peking University tomorrow.—→ ←.A. All the best in your studyB. All the best with your studyC. All the best in your businessD. All the best in your new job14.(单项选择题)(每题 2.00 分) The doctor→ ←a medicine for my headache.A. subscribedB. describedC. prescribedD. inscribed15.(单项选择题)(每题 2.00 分) When a lady customer intends to buy a coat with white stripes,what is she supposed to place an emphasis on if she says to the shop assistant?________A. I'd like a Red coat with .white stripesB. I'd Like a red coat with white stripes.C. I'd like a red coat with White StripesD. I'd like a red Coat with white stripes16.(单项选择题)(每题 2.00 分) What purpose does NOT post-listening activities serve?→ ←A. Helping students relate the text with their personal experience.B. Offering students the opportunities of extending other language skills.C. Practicing students’ ability of matching the pre-listing predictions with contents ofthe text.D. Give the answer directly to students and not to explain.17.(单项选择题)(每题 2.00 分) Which of the following activities can be adopted at thepre-reading stage?→ ←.A. rearranging the materialsB. brainstorming the topicC. writing a summary of the textD. draft framework18.(单项选择题)(每题 2.00 分) Throughout the history of the arts, the nature of creativityhas remained constant to artists. No matter what objects they select, artists are to bringforth new forces and forms that cause change to find poetry where no one has ever seen or experienced it before.Landscape(风景)is another unchanging element of art. It can be found from ancient times through the 17th-century Dutch painters to the 19th-century romanticists and impressionists.In the 1970s, Alfred Leslie, one of the new American realists, continued this practice. Leslie sought out the same place where Thomas Cole, a romanticist, had produced paintings of thesame scene a century and a half before. Unlike Cole who insists on a feeling of lonelinessand the idea of finding peace in nature, Leslie paints what he actually sees. In his paintings,there is no particular change in emotion, and he includes ordinary things like the highwayin the hack: ground. He also takes advantage of the latest developments of color photographyto help both the eye and the memory when he improves his painting back in his workroom. Besides, all art begs the age-old question: What is real? Each generation of artists has shown their understanding of reality in one form or another. The impressionists saw realityin brief emotional effects, the realists in everyday subjects and in forest scenes, and theCro-Magnon cave people in their naturalistic drawings of the animals in the ancient forests.To sum up, understanding reality is a necessary struggle for artists of all periods.Over thousands of years the function of the arts has remained relatively constant. Past or present, Eastern or Western, the arts are a basic part of our immediate experience. Manyand different are the faces of art, and together they express the basic need and hope of human beings. Which of the following is the ma in topic of the passage?→ ←A. History of the arts.B. Basic questions of the arts.C. New developments in the arts.D. Use of modem technology in the arts.19.(单项选择题)(每题 2.00 分) By the end of last year, nearly a million cars→ ←in that auto factory.A. had producedB. had been producedC. would be producedD. were produced20.(单项选择题)(每题 2.00 分) → ←the temperature might drop, coal was prepared for warming.A. To considerB. ConsideredC. ConsideringD. To be considered21.(单项选择题)(每题 2.00 分) → ←a moment and I will go to your rescue.A. Go onB. Hold onC. Move toD. Carry on22.(单项选择题)(每题 2.00 分) The study of language development over a period of time is generally called→ ←linguistics.A. appliedB. synchronicC. comparativeD. diachronic23.(单项选择题)(每题 2.00 分) Which of the following is a slip of tongue?→ ←A. Seeing is believing.B. Where there is smoke, there is fire.C. Where there is life, there is hope.D. Where there is a way, there is a will.24.(单项选择题)(每题 2.00 分) Which of the following statements is NOT a way of presenting new vocabulary?_A. Defining.B. Using real objects.C. Writing a passage by using new words.D. Giving explanations.25.(单项选择题)(每题 2.00 分) For a while, my neighborhood was taken ever by an army of joggers(慢跑者). They were there all the time: early morning, noon, and evening. There were little old ladies in gray sweats, young couples in Adidas shoes, middle-aged men with red faces. "Come on!" My friend Alex encouraged me to join him as he jogged by my house every evening. "You'll feel great."Well, I had nothing against feeling great and if Alex could jog every day, anyone could. So I took up jogging seriously and gave it a good two months of my life, and not a day more. Based on my experience, jogging is the most overvalued form of exercise around, and judging from the number of the people who left our neighborhood jogging army. I’m not alone in my opinion.First of all, jogging is very hard on the body. Your legs and feet a real pounding(追击)ruining down a road for two or three miles. I developed foot, leg, and back problems. Then I readabout a nationally famous jogger who died of a heart attack while jogging, and I had something else to worry about Jogging doesn't kill hundreds of people, but if you have any physical weaknesses, jogging will surely bring them out, as they did with me.Secondly, I got no enjoyment out of jogging. Putting one foot in front of the other for forty-five minutes isn't my idea of fun. Jogging is also a lonely pastime. Some joggers say, "I love being out there with just my thoughts." Well, my thoughts began to bore me, and most of them were on how much my legs hurt.And how could I enjoy something that brought me pain? And that wasn't just the first week, it was practically every day for two months. I never got past the pain level, and pain isn't fun. What a cruel way to do it! So many other exercises, including walking, lead to almost the same results painlessly, so why jog?I don't jog any more, and I don't think I ever will. I’m walking two miles three times a week at a fast pace, and that feels good. I bicycle to work when the weather is good. I'm getting exercise, and I'm enjoying it at the same time. I could never say the same for jogging, and I've found a lot of better ways to stay in shape. What was the writer's attitude towards jogging in the beginning?→ ←A. He felt it was worth a try.B. He was very fond of it.C. He was strongly against it.D. He thought it must be painful.26.(单项选择题)(每题 2.00 分) If a teacher attempts to implement the top-down model to teach listening, he is likely to present_____.A. new words after playing the tapeB. new words before playing the tapeC. background information after playing the tapeD. background information before playing the tape27.(单项选择题)(每题 2.00 分) Trees should only be pruned when there is a good and clear reason for doing so and, fortunately, the number of such reasons is small. Pruning involves the cutting away of overgrown and unwanted branches, and the inexperienced gardener can be encouraged by the thought that more damage results from doing it unnecessarily than from leaving the tree to grow in its own way.First, pruning may be done to make sure that trees have a desired shape or size. The object may be to get a tree of the right height, and at the same time to help the growth of small side branches which will thicken its appearance or give it a special shape. Secondly, pruning may be done to make the tree healthier. You may cut diseased or dead wood, or branches that are rubbing against each other and thus cause wounds. The health of a tree may be encouraged by removing branches that are blocking up the centre and so preventing the free movement of air.One result of pruning is that an open wound is left on the tree and this provides an easy entry for disease, but it is a wound that will heal. Often there is a race between the healing and the disease as to whether the tree will live or die, so that there is a period when the tree is at risk. It should be the aim of every gardener to reduce which has been pruned smooth and clean, for healing will be slowed down by roughness. You should allow the cut surface to dry for a few hurts and then paint it with one of the substances available from garden shops produced especially for this purpose. Pruning is usually without interference from the leaves and also it is very unlikely that the cuts you make will bleed If this does happen, it is, of course, impossible to paint them properly. Pruning should b e done to→ ←.A. make the tree grow tallerB. improve the shape of the treeC. get rid of the small branchesD. make the small branches thicker28.(单项选择题)(每题 2.00 分) Which of the following statements about Audio-lingual Methodis wrong?→ ←A. The method involves giving the learner stimuli in the form of prompts.B. The method involves praising the correct response or publishing incorrect response untilthe right one is given.C. Mother tongue is accepted in the classroom just 8s the target language.D. Emphasis is laid upon using oral language in the classroom; some reading and writing mightbe done as homework.29.(单项选择题)(每题 2.00 分) I’ve loved my mother's desk since I was just tall enough tosee above the top of it as mother sat writing letters. Standing by her chair, looking atthe ink bottle, pens, and white paper, I decided that the act of writing must be the most wonderful thing in the world.Years later, during her final illness, mother kept different things for my sister and brother.“But the desk.” she’d said again, “It's for Elizabeth.”I never saw her angry, never saw her cry. I knew she loved me; she showed it in action. Butas a young girl, I wanted heart-to-heart talks between mother and daughter.They never happened. And a gulf o pened between us. I was “too emotional”. But she livedon the surface”.As years passed I had my own family. I loved my mother and thanked her for our happy family.Iwrote to her in careful words and asked her to let me know in any way she chose that shedid forgive me.I posted the letter and waited for her answer.None came.My hope turned to disappointment, then little interest and, finally, peace—it seemed that nothing happene I couldn’t be sure that the letter had even got to mother.I only knew thatI had written it, and I could stop trying to make her into someone she was not.Now the present of her desk told, as she'd never been able to, that she was pleased that writing was my chosen work. I cleaned the desk carefully and found some papers inside—aphoto of my father and a one—page letter, folded and refolded many times.Give me an answer, my letter asks, in any way you choose. Mother, you always choose the actthat speaks louder than words. What did mother do with her daughter's letter asking forgiveness?A. She had never received the letter.B. For years, she often talked about the letter.C. She didn't forgive her daughter at all in all her life.D. She read the letter again and again till she die.30.(单项选择题)(每题 2.00 分) It was not→ ←she took off her dark glasses→ ←I realized she was a famous actress.A. when; thatB. until; thatC. until; whenD. when; then31.(单项选择题)(每题 2.00 分) How could we possibly think that keeping animals in cages in unnatural environments-mostly for entertainment purposes is fair and respectful?Zoo officials say they are concerned about animals. However, most zoos remain "collections" of interesting "things" rather than protective habitats. Zoos teach people that it is acceptable to keep animals bored, lonely, and far from their natural homes.Zoos claim to educate people and save endangered species, but visitors leave zoos without having learned anything meaningful about the animals natural behavior, intelligence, or beauty. Zoos keep animals in small spaces or cages, and most signs only mention the species name, diet, and natural range. The animals normal behavior is seldom noticed because zoos don’t usually take care of the animals natural needs.The animals are kept together in small spaces, with no privacy and little opportunity for mental and physical exercise. This results in unusual and self-destructive behavior called zoochosis. A world-wide study of zoos found that zoochosis is common among animals kept in small spaces or cages. Another study showed that elephants spend 22 percent of their time making repeated head movements or biting cage bars, and bears spend 30 percent of their time walking back and forth, a sign of unhappiness and pain.Furthermore, most animals in zoos are not endangered. Captive breeding of endangered big cats, Asian elephants, and other species has not resulted in their being sent back to the wild. Zoos talk a lot about their captive breeding programs because they do not want people to worry about a species dying out. In fact, baby animals also attract a lot of paying customers. Havent we seen enough competitions to name baby animals?Actually, we will save endangered species only if we save their habitats and put an end to the reasons people kill them. Instead of supporting zoos, we should support groups that work to protect animals natural habitats.The author tries to persuade readers to accept his argument mainly by→←.A. pointing out the faults in what zoos doB. using evidence he has collected at zoosC. questioning the way animals are protectedD. discussing the advantages of natural habitats32.(单项选择题)(每题 2.00 分) When Thomas Butler stepped off a plane in April 2002 on his return to the United States from a trip to Tanzania, he set in motion a chain of events that now threatens to destroy his life. A microbiologist at Texas Tech University in Lubbock, Butler was bringing back samples of the plague bacterium Yersinia pestis for his research. Yet on re-entering the country, he is alleged to have passed right by US customs inspectors without notifying them that he was carrying this potentially deadly cargo. That move and its consequences have led the federal government to prosecute Butler for a range of offences. If convicted on all counts, he could be fined millions of dollars and spend the rest of his life in jail.The US scientific community has leapt to butlers defense, arguing that his prosecution is over- zealous, alarming and unnecessary. The presidents of the National Academy of Sciences。

A framework for non-rigid matching and correspondence

A framework for non-rigid matching and correspondence
1
e-mail address of authors: lastname- rstname@
physical component elements, which does not generalize easily to 3D, and so, only a rigid 3D transformation was considered. We present a framework for non-rigid matching that begins with solving the basic a ne point matching problem. The algorithm iteratively updates the a ne parameters and correspondence in turn, each as a function of the other. The a ne transformation is solved in closed form, which lends tremendous exibility{ the formulation can be used in 2D or 3D. The correspondence is solved by using a softassign 1] procedure, in which the two-way assignment constraints are solved without penalty functions. The accuracy of the correspondence is improved by the integration of multiple features. A method for non-rigid parameter estimation is developed, based on the assumption of a well-articulated model with distinct regions, each of which may move in an a ne fashion, or can be approximated as such. Umeyama 3] has done work on parameterized parts using an exponential time tree search technique, and Wakahara 4] on local a ne transforms, but neither integrates multiple features nor explicitly considers the non-rigid matching case, while expressing a one-to-one correspondence between points.

语言学智慧树知到答案章节测试2023年山东农业工程学院

语言学智慧树知到答案章节测试2023年山东农业工程学院

第一章测试1.Study the following dialogue. What function does it play according to thefunctions of language? —A nice day, isn’t it? — Right! I really enjoy thesunlight. ( )A:EmotiveB:PerformativeC:InterpersonalD:Phatic答案:D2.Which of the following property of language enables language users toovercome the barriers caused by time and place, due to this feature oflanguage, speakers of a language are free to talk about anything in anysituation?()A:ArbitrarinessB:DualityC:DisplacementD:Transferability答案:C3.__________ refers to a language user’s underlying knowledge about the systemof rules. ( )A:PerformanceB:CompetenceC:ParoleD:Langue答案:B4.If a linguistic study prescribes how things should be, it is said to beprescriptive. ( )A:对B:错答案:A5.De Saussure, who made the distinction between langue and parole in theearly 20th century, was a French linguist. ( )A:对B:错答案:B第二章测试1.An aspirated [ph], an unaspirated [p] and an unreleased [p-] are __________ ofthe /p/ phoneme. ( )A:tagmemesB:analoguesC:morphemesD:allophones答案:D2.Which one is different from the others according to places of articulation? ( )A:[p]B:[n]C:[m]D:[ b ]答案:B3.When pure vowels or monophthongs are pronounced, no vowel glides takeplace. ( )A:错B:对答案:B4.[l] is a lateral alveolar. ( )A:错B:对答案:B5.Acoustic phonetics is concerned with the perception of speech sounds. ( )A:对B:错答案:B第三章测试1. A lexeme can be understood as a family of words that differ only in theirgrammatical endings, for example the endings for number, case, tense,participle form, etc.()A:错B:对答案:B2.In most cases, prefixes change the meaning of the base whereas suffixeschange the word-class of the base. ()。

英语口语5000句大全

英语口语5000句大全

AbstractsAbstract 1加强A演化规律研究,特别是定量评价其与B之间的关系对A的勘探具有重要意义。

1. Enhancing studies of evolution laws of A, especially quantitatively evaluating the relationship between A and B have significant meanings for prospecting of A.存在问题:主语是单数还是复数注意动词名词化的使用“特别是”….especially/(Br.)specially/ particularly/ in particular/notably 都可以。

2. It is significant for exploring to strengthen on the research of the evolution rules o A, especially of the quantitative evaluation between A and B.3. The research on strengthening A evolution rule【直接跟名词】, especially on quantitatively evaluating A related with B is very significant for A exploration.采集鄂尔多斯盆地的A,通过物理模拟实验模拟A演化,利用C法获得相关数据,结合…结果,确定A演化规律及其与B的关系。

1.The evolution of A sampled from the Ordos Basin wasstudied[sampled 修饰不清] by physical simulation.Combining the relevant data obtained by Method C with the ……results, we confirmed the evolution laws of A and the relationship between A and B.2.Through collecting A from Ordos Basin, simulating theevolution rules of A by physics simulation, applying MethodC to obtain relevant data and combining the results of ……we can determine the evolution laws of A and the relationship with B. [方法论部分和结果部分混杂,句子比较长]3.First of all, collecting A in Ordos Basins; second,simulating A evolution by physical simulation experiment;third obtaining correlation data by method C; fourth, combing…results to determine A evolution rule and relation with B. 【句子不完整】结果表明D先增加后减小,E随着模拟实验温度的增加呈现先减小后增大。

Toward a Comprehensive Framework for Software Process Modeling Evolution

Toward a Comprehensive Framework for Software Process Modeling Evolution

Toward a Comprehensive Framework for Software Process ModelingEvolutionOsama Eljabiri Fadi P. DeekNew Jersey Institute of Technology New Jersey Institute of TechnologyCIS Department CIS DepartmentUniversity Heights University HeightsNewark, NJ 07102 Newark, NJ 07102omae@ deek@AbstractS oftware process modeling has undergone extensive changes in the last three decades, impacting process' structure, degree of control, degree of visualization, degree of automation and integration. These changes can be attributed to several factors. This paper studies two of these factors, the time dimension and the interdisciplinary impact, and assesses their effect on the evolution of process modeling. A literature survey for software process modeling was carried out which provided evidence of how the time dimension and the interdisciplinary impact triggered process evolution and changes in methodology, technology, experience and business requirements. Finally, the paper concludes with a theoretical framework to serve as an illustrative model for the effects of the time dimension and interdisciplinary impact on process modeling evolution. This framework can serve as to develop more advanced models for technological forecasting in software process modeling evolution.KeywordsSoftware engineering, software process modeling, software process evolution, interdisciplinary impact, change in time, software development, software project management.1. IntroductionExamining the software development life cycle literature reveals a wealth of approaches that have been introduced in the last three decades. Many vary by titles, rationales, structures, degree of mapping and visualizing the real world applications and the extent of how these models reflect strategic goals in organizations. Moreover, there are a variety of approaches to classifying these models and to providing criteria for applying them to diverse business requirements. This can be attributed to several factors including evolving experiences of software engineers, degree of problem complexity, organizational goals, availability of technology, human factors and cognitive styles in addressing problems. Although several studies have examined the software development process literature at different levels of detail and abstraction [1, 2, 3, 4, 5, 6, 7, 8], there is still a benefit to a comprehensive review of the current software process literature, with a focus on the evolution of software process models as a function of time. As already indicated, there were several factors contributing to the diversity of software process models. One of these factors is the interdisciplinary impact influencing the development of software process models. The combined effect of the time dimension and the interdisciplinary impact might not only explain the evolution of process models but also may be useful in perhaps foreseeing future developments of process modeling.2. Literature ReviewThe evolution of process models started by the code and fix model [2], which fits the solution into the problem rather than drawing solutions from well-defined problems. Pressman [9] presented a comprehensive survey, though some approaches were not considered, and introduced the following process models: linear sequential (classical waterfall), prototyping model, RAD model, incremental model, spiral model, component assembly model, concurrent development model.Somerville [3] placed the process models he addressed in four main categories: the waterfall approach, the evolutionary development, the formal transformation, and assembly from reusable components. Evolutionary development is based on stages that consist of increments where "the directionsof evolution are determined by operational experience"[2].Behforooz and Hudson introduced another useful classification [4]. They considered all process models virtually as versions of the waterfall model and introduced models that were overlooked by others such as the Department Of Defense (DOD) system development life cycle and the NASA model.The waterfall model has played a significant role in process modeling evolution over the decades and has become the basis for most software acquisition standards [2].Although the waterfall model has its drawbacks, it is still the super class of many process-modeling approaches in software engineering. The unified software development approach proposed by Jacobson et al. [5] addressed some of the problems with previous models using an object-oriented approach and UML standards. This model is use-case driven, architecture centric, iterative and incremental, and has new phases: Inception, elaboration, construction and transition. While the unified process model can be characterized in terms of its object-oriented methodology and iterative nature, a framework by Abdel-Hamid et al. [10] was introduced to address management considerations coupled with software economics aspects. This framework recognized the impact of the control of resources variable on the overall performance of process models and thus gained popularity. IBM Cleanroom is another team oriented approach to software engineering in which intellectual control of the work is ensured by ongoing review by a qualified small team and the use of the formal methods in all the process phases and statistical quality control of an incremental development process [11].Process models with built in object-oriented techniques can be easily modified, extended and viewed at appropriate levels of abstraction. Their application areas include "development of an abstract theory of software process, formal methods of software process, definition and analysis, simulation of software process and the development of automated enactment support"[12].A lthough object-oriented methodologies have proven to be advantageous in process modeling, SOFL (structured-object-oriented-formal language) [13] is an approach that shows how integration between structured and object-oriented methodologies can add more value to a process model. This approach also combines static and dynamic modeling. These integrations aimed to develop a process model that overcomes formal methods problems, which limited their use in the industry.Introducing risk-driven process models was a significant breakthrough in process modeling after a large library of models based on document-driven or code-driven approaches as "the evolving risk driven approach provided a new framework for guiding the software process" [2]. This was referred to as the spiral model, which was to be adaptable to the full range of software project situations and flexible to accommodate a high dynamic range of technical alternatives and user objectives. However, the spiral model required further calibration to be fully usable in all situations [2]. In an effort to resolve model conflicts, Boehm [14] expanded the spiral model to another version named "win-win spiral model". In this version of spiral model Boehm used a stakeholder win-win approach to determine the objectives, constraints and alternatives for each cycle of the spiral.The prototyping model can be used as a generic tool in the software development process. Not only it can be integrated with other process models, but also it can assist in developing the requirements analysis phase. Furthermore, it can be used as an experimental tool in assessing the efficiency of the entire development process. In this respect, prototyping can be utilized as a mechanism in monitoring software processes before investing a great deal of efforts and resources [15]. The spiral model can also be utilized as a process model generator [16]. Boehm et al. used the spiral model as a framework for eliciting or generating adequate process models by means of the decision table technique. Another example for combined effect of both interdisciplinary impact and change in methodology is the commercial off-the-shelf (COTS) approach, which gained more attention recently. COTS components can be a complete application, an application generator, a problem-oriented language, or a framework in which specific applications are addressed by parameter choices [17].Web development life cycle, recently referred to as web engineering, is also gaining an increasing interest in software development [18]. In order to develop and maintain web sites in a cost-efficient way throughout their entire life cycle, sophisticated methods and tools have to be deployed [18].The reengineering process model is an approach based on business metrics of cost, time, and risk reduction as a result of substantial change in existing processes, which would create breakthroughs in the final product. According to Somerville [3], software reengineering has three main phases: defining the existing system, understanding and transformation, and reengineering the system. While traditional models were supported by loosely coupled CASE tools that provide assistance independently each phase of the life cycle, more sophisticated architectures were lately introduced. These provide mechanisms to ensure proper tool integration and interface capable of monitoring andcoordinate the activities and actions of software projects and team members [19]. The TAME process modeling approach represents a step toward integrating process modeling with product metrics along with the automation capabilities of CASE tools in a more comprehensive framework [1]. Integrating experimental data with CASE tools can also make the process model much more efficient by allowing data collection and knowledge base building throughout the development process. This approach has been introduced through the CAESE methodology where the tools and the experiment design combine to meet the software production goals as assessing software product metrics will be more efficient with the statistical analysis based on experiments accompanied by the high degree of CASE automation [20]. The flow of events represented by the cleanroom development increment life cycle based on formal techniques can be categorized in the same class [21].Finally, the cognitive prospective and human factors in developing process models are also relevant since problem solving cannot be achieved efficiently without adopting adequate strategies that are based on correct understanding of humans and their real needs [22]. Behavioral approaches have enhanced software usability from a user-oriented prospective particularly in the area of user interface design, thus influencing process modeling as well [23].3. AnalysisThe software development life cycle offers a methodology that developers follow to achieve software solutions for real world problems. This methodology reflects the evolving of the software solutions through a timeframe of a development process. It also represents the technical, human and financial resources required to perform the software project activities. In other words, it is a problem-solving framework that works within limited time and limited resources.As identified by Jaccheri et al., software processes are complex activities that affect critical parameters such as final product quality and costs [24]. Therefore, process control is significant to assure software product quality, as the duality of product and the process is an important element in software engineering [9]. The control capability is not only utilized for the purpose of preventive maintenance and corrective actions but also for quality assurance, quality improvement and forecasting. both in their structure and outcomes.Methodology adopted is an important factor that should be considered. Object-oriented methodology has a different impact than the process-oriented methodology on software development life cycle modeling.The degree of complexity in business problems is also an influential factor that should be considered. The change in the nature of business problems added more complexity to business processes, which resulted in changes in business requirements as a function of time.The time dimension variable and its associated factors impact the degree of visualization across process models. Degree of visualization is also a measure of process modeling evolution.4. ConclusionIn conclusion, the paper suggests a final conceptual framework. This framework indicates the effect of the time dimension and the interdesciplinary impact on the evolution of process modeling. It also indicates the effect of the time dimension on the interdesciplinary impact. This cross-relationship can be attributed to the change in experience and business requirements that triger the involvement of more deciplines in the assessment and development of more effecient process models.Based on this framework, several implications could be extracted.The first implication is the significant role of the four intervenning variables (i.e: change in experience, technology, methodology and business requirements) in transferring the effect of the time dimension on the evolution of process modeling. Another implication is that the dgree of automation, degree of visualization, degree of control, degree of integration and changes in structure could be used as measures of the extent in which process models evolve. The third implication is that the interdesciplinary impacts have had critical effects on process model evolution. This effect was coupled with the time dimension variable and trigered by its four intervenning variables. Cognitive phsycology played an important role in the context of behavioral and protpotyping models as more user involvement implies more human considerations. This can also be understood in the context of the customer economy as user satisfaction becames an issue in evaluating information systems.Software economics is another significant issue in this framework as it trigers the attention to risk considerations. Software economics encompasses several metrics of business performance that can also be addressesd in future studies. These business metrics that should be reflected in process modeling include cost reduction, profit maximazation, market share, competitive advantage, and the effect of project diversification in large organizations.Other interdecilpinary components addressed in this paper include management and industrial engineering which are correlated. Management had asignificant effect on process modeling structure as it allowed the incorporation of system dynamics subsequent to static modeling. Industrial engineering drew the attention to quality assurance standards applied to business processes which motivated software engineers to develop standards such as ISO9001 and the CMM model. The paper also discussed the impact of mathematics on the evolving of formal mothods and specification languages. In sum, this paper presents a suggested framework for a better understanding of the evolution of process modeling in terms of the time dimension and interdisciplinary impact. This framework can be used as an explanatory model of process modeling history and evolution, as well as for predictive purposes and technological forecasting. References[1] Victor R. Basili and H. Dieter Rombach, "The TAME Project: Towards Improvement-Oriented Software Environments", IEEE Transactions on Software Engineering, vol. SE-14, no. 6, June 1988, pp. 752-772.[2] Barry Boehm, "A Spiral Model of Software Development and Enhancement", IEEE Computer, vol. 21, no. 5, May 1988, pp. 61-72.[3] Ian Somerville, Software Engineering,New York, NY, Addison-Wesley, ISBN 0-201-17568-1, 1995.[4]Ali Behforooz,, Software Engineering Fundamentals, ISBN 0-19-510539-7, Oxford university press, New York, 1996.[5] Ivar Jacobson, Grady Booch and James Rambaugh, “The Unified Software Development Process”, ISBN: 0-201-57169-2, Addison Wesley, New York ,1998.[6] Barry Boehm ,"Anchoring the Software Process", IEEE Software, July 1996 .[7]Shari Lawrence Pfleeger, Software Engineering: Theory and Practice, Upper saddle River, NJ: Prentice Hall Corp, 1998.[8] 1074-1995: IEEE Guide for Developing Software Life Cycle Processes.[9]Roger Pressman, Software Engineering: A Practitioner's Approach, 4th Edition, New York, NY McGraw-Hill, ISBN 0070521824- 1438, 1996.[10] Tarek. Abdel-Hamid and Stuart E. Madnick, “Lessons Learned From Modeling The Dynamics Of Software Development “, Communications of the ACM vol. 32, no. 12 Dec. 1989, pp. 14-26.[11]Carmen J., Trammell, Leon H. Binder and Catherine E. Snyder, “The Automated Production Control Documentation System: A Case Study In Cleanroom Software Engineering“, ACM Transactions on Software Engineering Methodology vol. 1, no. 1 , Jan. 1992, pp. 81 – 94.[12] John D. Riley, “An Object-Oriented Approach To Software Process Modeling And Definition”, Proceedings of the 1994 conference on TRI-Ada '94, 1994, pp. 16 – 22. [13] Shaoying Liu, Offutt, A.J., Ho-Stuart, C., Sun, Y., Ohba, M., “SOFL: A Formal Engineering Methodology For Industrial Applications“, IEEE Transactions on Software Engineering, vol. 24, no. 1, Jan. 1998, pp. 24 –45.[14] Boehm, B. & Port, D., “Escaping The Software Tar Pit: Model Clashes And How To Avoid Them”, Software Engineering Notes. 24(1), January 1999 ,pp. 36-48.[15] Bradac, M., D. Perry, and L. Votta, "Prototyping a Process Monitoring Experiment", IEEE Transactions on Software Engineering, vol. 20, no .10, October 1994, pp. 774-784.[16] Barry Boehm and Frank Belz, “Experiences With The Spiral Model As A Process Model Generator”, Proceedings of the 5th international software process workshop on Experience with software process models, 1990, pp. 43 – 45.[17]W. Morven Gentleman, “Effective use of COTS (commercial-off-the-shelf) software components in long-lived systems” (tutorial), ACM Proceedings of the 1997 international conference on Software engineering, 1997, pp. 635 – 636.[18] Jung Reinhard and Robert Winter, “Case For WEB SITES Towards An Integration Of Traditional Case Concepts And Novel Development Tools”, Institute for Information Management University of St. Gallen, http:\\iwi1.unsg.ch\research\webcase, 1998.[19]Jayashree Ramanathan and Soumitra Sarkar, "Providing Customized Assistance for Software Lifecycle Approaches", IEEE Transactions on Software Engineering, vol. ~14, no. ~6, June 1988, pp. 749-757.[20] Torli, K., Matsumoto, K., Nakakoji, K., Takada, Y., Takada, S., Shims, K., “Ginger2: An Environment For Computer-Aided Empirical Software Engineering “, IEEE Transactions on Software Engineering, vol. 25, no. 4, July/August 1999, pp. 474 –491.[21] Carmen J. Trammell, Leon H. Binder and Catharine E. Snyder, “The Automated Production Control Documentation System: A Case Study In Cleanroom Software Engineering“, ACM Transactions Software Engineering Methodology, vol. 1, no. 1 , Jan. 1992, pp. 81 – 94.[22]Leveson N.G., “Intent Specifications: An Approach To Building Human-Centered Specifications”, IEEE Transactions on Software Engineering, vol. 26, no. 1, Jan. 2000, pp. 15 –35.[23]J. D. Chase, Robert S. Schulman, H. Rex Hartson and Deborah Hix, ”Development And Evaluation Of A Taxonomical Model Of Behavioral Representation Techniques“; ACM, Conference proceedings on Human factors in computing systems: “celebrating interdependence”, 1994, pp. 159 – 165.[24]Maria Letizia Jaccheri, Gian Pietro Picco and Patricia Lago, “Eliciting Software Process Models With The E3 Language “, ACM Transactions on Software Engineering Methodology, vol. 7, no. 4, Oct. 1998, pp. 368 – 410.。

Performance Comparison of Persistence Frameworks

Performance Comparison of Persistence Frameworks

Performance Comparison of Persistence FrameworksSabu M. Thampi* Asst. Prof., Department of CSE L.B.S College of Engineering Kasaragod-671542 Kerala, India smtlbs@yahoo.co.in Ashwin A.K S8, Department of CSE L.B.S College of Engineering Kasaragod-671542 Kerala, India ashwin_a_k@yahoo.co.inAbstractOne of the essential and most complex components in the software development process is the database. The complexity increases when the "orientation" of the interacting components differs. A persistence framework moves the program data in its most natural form to and from a permanent data store, the database. Thus a persistence framework manages the database and the mapping between the database and the objects. This paper compares the performance of two persistence frameworks – Hibernate and iBatis’s SQLMaps using a banking database. The performance of both of these tools in single and multi-user environments are evaluated.1. IntroductionWhen a component based on one kind of approach (e.g. object oriented) tries to interact directly with another object having its roots in another kind of approach (e.g. relational), the complexity increases due to the knots and knaves of cross approach communication. This is evident in all the database APIs provided by different languages. The best example of this is the Java Database Connectivity (JDBC) API. Though JDBC provides an easy method for accessing different databases without much ado, it is basically a low level API providing only a thin layer of abstraction. This is adequate for small and medium projects, but is not well suited for enterprise level applications. With JDBC opening and closing the connection involves a lot of code. What is required is a framework that can act as a mediator between both parties. In OOP, it is typically the behavior of objects (usecases, algorithmic logic) being emphasized. On the other hand, it is the data that counts in database technology. This fact serves as a common motive for the combination of these two paradigms [1]. The core component of this coupling is what is called “object*relational mapping” which takes care of the transitions of data and associations from one paradigm into the other (and vice versa). In order to make a program's object persistent, which means to save its current state and to be able to load that data later on, it is necessary to literally map its attributes and relations to a set of relational tuples. The rules defining such mappings can be quite complex. Here, the term “mapping“ can be defined as the application of rules to transfer object data to a unique equivalent in an RDBMS (relational database management system) and vice-versa. Viewed from the object's perspective, this ensures that all relevant object data can be saved to a database and retrieved again. A persistence framework moves the program data in its most natural form (in memory objects) to and from a permanent data store, the database. The persistence framework manages the database and the mapping between the database and the objects. Persistence framework simplifies the development process. There are many persistence frameworks (both Open Source and Commercial) in the market. Hibernate and iBatis are examples for ORM frameworks for Java. Hibernate [2] is an open source project being covered by BossTM. It is intended to be a full-scale ORM environment and features interesting functionality, such as “real transparency”: a data class does not have to extend special classes of Hibernate; it only has to make properties available through standard get-/set-methods. Hibernate uses bytecode processing to extend from these classes and implement persistence. It also supports – according to the project homepage – a sophisticated caching mechanism (duallayer, which can be distributed as well) using pluggable cache providers. Hibernate is an object/relational persistence and query service for Java. Hibernate lets you develop persistent classes following common Java features - including association, inheritance, polymorphism, composition and the Java collections framework. The HibernateCorresponding AuthorQuery Language, designed as a "minimal" objectoriented extension to SQL, provides a bridge between the object and relational worlds. Hibernate also allows you to express queries using native SQL or Java-based Criteria and Example queries.Apache Software Organization is an open source framework for building web applications that integrate with standard technologies, such as Java Servlets, JavaBeans, and JavaServer Pages. Struts offer many benefits to the web application developer, including Model 2 implementation of Model-View-Controller (MVC) design patterns in JSP web applications. The MVC Model 2 paradigm applied to web applications separate display code (for example, HTML and tag libraries) from flow control logic (action classes).Figure 1: Full Cream Architecture of Hibernate The SQLMaps product of iBatis [3,4] does not represent an ORM environment at the scale of Hibernate. Just as the name suggests, it is heavily SQL-centric and provides means to access centrally stored SQL-statements in a convenient way. The mapping functionality is able to create objects based on query data, but there is no transaction support. The SQLMaps product is light-weight and is expected to run faster than the heavy loaded full-scale ORM toolsIt uses a special mapping files in which the developer should expose object’s properties to be made persistent as well as respective database tables and columns these properties should be mapped to. In addition to that there are something called dynamic queries, caching of queries, transactions and calling stored procedures. The framework maps JavaBeans to SQL statements using a XML descriptor. The performance comparison of Hibernate and iBATIS are explored in this paper. Both of the above tools have their advantages and disadvantages. The remaining sections of the paper are organized as follows. Section 2 gives an overview of protype banking application. Simulation results are presented in section 3. Section 4 concludes the paper.Figure 2: iBATIS Data Mapper framework2. Online Banking systemA very simple prototype version of an online banking application using struts framework and iBatis/ Hibernate framework is developed to analyze the performance of persistence frameworks. The Jakarta Project's Struts framework, version 1.1b2, fromFigure 3: E-R Diagram for banking application The Banking application has a number of functionality such as summary, account details, transfer, transactions, update and contact details. The entity relationship diagram in figure 3 illustrates the relationship among different entities in the prototype system.The figure 4 shows the architecture of the Demo Banking application. The request is given through the browser. The DAO layer contains only method names. When the username and password are entered through GUI and the login button is clicked, the application through the struts framework calls for a login action class which first generates a hash code with the entered data .This is supposed to be the account id. Then the application checks the validity of the entered data by querying from the database based on account id. If they are correct, entry is given otherwise an error page is displayed. The jsp page stores the value of the account id to be referenced in future pages.transfer money to others account as well as to the different account types he has.3. Simulation ResultsPerformance between Hibernate and iBatis is measured using a java program which uses both hibernate and iBatis to perform basic sql operations on the banking database and the RTT (Round Trip Time) is calculated and used to measure the way these mapping tools perform under various situations. The aim is to get the time from generation of sql to querying bank database and then getting back the data. The program was run from one system and the SQL Server was located in another system. The conditions were the same for both the Hibernate and iBatis. The test also included simulation for a single user and multi user. The simulation of multi user was made through the creation of threads. Java supports multithreading environment. The number of threads is passed as input to the program. The response of Hibernate and iBatis under multi user environment is monitored. The RTT is monitored for both the cases.Figure 4: Architecture of Banking applicationFigure 5: Account Summary Page Each account type is a link. When a particular link is clicked the account id and the account number corresponding to the particular link is passed to transactionaction class. From here the transaction list corresponding to the respective account number is retrieved and is passed to the jsp. There is a unique id for each transaction, which is denoted by Transid. When the user clicks the transfer button in the jsp page the pretransferaction class is called which checks the account types belonging to the account holder from the data table named Ac_details and displays it in a drop down menu. The holder canFigure 6: An example of Transaction details of Banking Application The tests were conducted in the following environment: Operating system: Microsoft Windows 2000 Processor: Intel Xeon 4 Processor Memory: 1024 MB DDR RAM The following inputs are needed for the test program: Whether hibernate or iBatis: The user can specify which DAO to be executed whether Hibernate or iBatis. Number of records: The number of records to be inserted, deleted, updated is also given at a time. Insert, update, select, delete or all the operations specified: The user can also specify what operation to be monitored whether insert, update, delete or all operations together can be done.Number of iterations: Number of times the particular set is to be repeated can also be given as input. Number of threads: This simulates number of user accessing the application. The data shown below is how the raw data is recorded and stored into a text file. It is this data that is summarized into graph in figures 8. Average time (with 5000 records, 10 iterations & 50 threads for hibernate) Avg_Insert=3917 Avg_Update=1462 Avg_Select(First Time)=37182 Avg_Select(Second Time)=2361 Avg_Delete=1414 --------------------------------------------Average Time (With 5000 Records, 10 Iterations & 50 Threads For Ibatis) Avg_Insert=6272 Avg_Update=5556 Avg_Select(First Time)=5197 Avg_Select(Second Time)=5157 Avg_Delete=5414 --------------------------------------------Avg_Insert,Avg_Update,,Avg_Select (First Time), Avg_Select (Second Time),Avg_Delete corresponds to average time taken for insert, update, select1, select2 & delete. The above values are computed as follows: i. Find the time taken for an operation in each set of record is noted. ii. The sum of the time taken for all the iterations is found. iii. The average for that set of iterations is computed. iv. If multiple threads (say x no of threads) are present we will have many number of averages (here x) v. The final values are obtained by computing the averages of all the averages previously obtained in step iv. In the figure 7, y-axis represents time in milli seconds and the x-axis represents the various operations performed (such as insert, update, select1, select2) for both hibernate and iBatis. The graph shows that there are minute differences the time taken between hibernate and iBatis except for select1. The large variation in time taken for select1 is caused due to the complex caching algorithms employed by hibernates. Such techniques have proved to be useful in case of subsequent searches as seen in the graph. The graph in figure 8 shows the results of time taken when there are 5 threads 5000 records and 10 iterations. As shown in the graph the time taken forhibernate for the first select is very large compare to iBatis, but in all other cases hibernate has an upper hand over iBatis in terms of time taken. Even in the second select operation time taken by hibernate is less compared to that of iBatis. This implies that barring the initial overhead caused by hibernate during the first select it fares well compared to iBatis.Figure 7: 1 Thread 5000 records 10 iterationsFigure 8: 5 Threads 5000 records 10 iterationsFigure 9: 50 Threads 5000 records 10 iterations Figure 9 represents the time taken when there are 50 threads involved for 5000 records and 10 iterations. In this case it is seen that the large variation that was noticed in case of select1 operation in figures 7 and 8 has now been minimized. When the number of threads was increased to simulate multiple user environments, it is seen that in this case it is iBatis, which lagsbehind, hibernate compared to the previous cases. The only operation in which hibernate consumes more time is for the insert operation.[3] F. Gianneschi: “JDBC vs. iBATIS a case study”, /vqwiki/jsp/Wiki? Action=action_view_attachment&attachment=EN _iBATISCaseStudy.pdf, (Oct. 2005) [4] iBATIS Developer Guide, available at /DevGuide.html#d 0e112, (Sept. 2005) [5] /enterprise/ persistenceframework.shtml [6] Ambler, Scott W, “Mapping Objects to Relational Databases”, An AmbySoft Inc. White Paper, /mappingObjects.html, October 2004 [7] Fussell, Mark L., Foundations of Object Relational Mapping, v0.2 [mlf-970703], published online at , copyright by Mark Fussell, 1997 [8] Various authors, Object Relational Tool Comparison,Online-Wiki, /cgi/wiki?ObjectRelationalToolCom parison, October 2004 [9] Sun Microsystems, “Enterprise JavaBeansTM Specification,Version 2.1”,November 12, 2003, available at /products/ejb/docs.html, (Oct.2005) [10] E. Roman, R. P. Sriganesh, G. Brose: “Mastering Enterprise JavaBeans“, Third Edition, Jan. 2005, available at /books/wiley/maste ringEJB/downloads/MasteringEJB3rd Ed.pdf, (Oct. 2005) [11] iBATIS Mail Archive, available /userjava@ /, (Oct. 2005) at4. ConclusionObject relational mapping became important due to increasing coupling between relational database management systems and object oriented application concepts and development. There are tools to automate these mapping tasks, which can be distinguished by the degree to which they abstract the storage logic for the application. Choosing a suitable product can significantly cut down development efforts, costs and time. After conducting the DAO tests on banking database and comparing a similar application using hibernate and iBatis we come to the following conclusions: 1. In terms of round trip delays iBatis takes lesser time. The slighter increase in time in case of hibernate can be accounted to the time taken for automatically generating the queries and the complex caching algorithms used by hibernate. 2. In terms of flexibility iBatis has an upper hand over hibernate. 3. Considering the learning curve iBatis has a smaller curve since it is more similar to JDBC. 4. Programming using iBatis requires an SQL guru in the team but while using hibernate in-depth knowledge in SQL is not required. 5. Considering the features provided by both the tools, hibernate is much stronger since it supports lazy fetching and mapping associations.5. References[1] Schirrer, “Object-Relational Mapping, Theoretical Background and Tool Comparison”, Bachelor Thesis, Nov. 2004. [2] The Hibernate Project, “Hibernate Reference Documentation, version 2.1.6”, , October 2004, page 61ff。

计量经济学 Test bank questions Chapter 4

计量经济学 Test bank questions Chapter 4

Multiple Choice Test Bank Questions No Feedback – Chapter 4 Correct answers denoted by an asterisk.1. A researcher conducts a Breusch-Godfrey test for autocorrelation using 3 lags of theresiduals in the auxiliary regression. The original regression contained 5 regressors including a constant term, and was estimated using 105 observations. What is the critical value using a 5% significance level for the LM test based on T R2?(a)1.99(b)2.70(c)* 7.81(d)8.56.2. Which of the following would NOT be a potential remedy for the problem of multicollinearity between regressors?(a) Removing one of the explanatory variables(b) * Transforming the data into logarithms(c) Transforming two of the explanatory variables into ratios(d) Collecting higher frequency data on all of the variables3. Which of the following conditions must be fulfilled for the Durbin Watson test to be valid?(i) The regression includes a constant term(ii) The regressors are non-stochastic(iii) There are no lags of the dependent variable in the regression(iv) There are no lags of the independent variables in the regression(a)* (i), (ii) and (iii) only(b) (i) and (ii) only(c) (i), (ii), (iii) and (iv)(d) (i), (ii), and (iv) only4. If the residuals of a regression on a large sample are found to be heteroscedastic which of the following might be a likely consequence?(i) The coefficient estimates are biased(ii) The standard error estimates for the slope coefficients may be too small(iii) Statistical inferences may be wrong(a) (i) only(b) * (ii) and (iii) only(c) (i), (ii) and (iii)(d) (i) and (ii) only5. The value of the Durbin Watson test statistic in a regression with 4 regressors (including the constant term) estimated on 100 observations is 3.6. What might we suggest from this?(a) The residuals are positively autocorrelated(b) * The residuals are negatively autocorrelated(c) There is no autocorrelation in the residuals(d) The test statistic has fallen in the intermediate region6. Which of the following is NOT a good reason for including lagged variables in a regression?(a) Slow response of the dependent variable to changes in the independent variables(b) Over-reactions of the dependent variables(c) The dependent variable is a centred moving average of the past 4 values of the series(d) * The residuals of the model appear to be non-normal7. What is the long run solution to the following dynamic econometric model?∆y t = β1 + β2∆X2t + β3∆X3t + u t(a) y = β1 + β2X2+ β3X3(b) y t = β1+ β2X2t + β3X3t(c) y = - (β2/β1) X2 - (β3 /β1)X3(d) * There is no long run solution to this equation8. Which of the following would you expect to be a problem associated with adding lagged values of the dependent variable into a regression equation?(a) * The assumption that the regressors are non-stochastic is violated(b) A model with many lags may lead to residual non-normality(c) Adding lags may induce multicollinearity with current values of variables(d) The standard errors of the coefficients will fall as a result of adding more explanatory variables9. A normal distribution has coefficients of skewness and excess kurtosis which are respectively(a) * 0 and 0(b) 0 and 3(c) 3 and 0(d) Will vary from one normal distribution to another10. Which of the following would probably NOT be a potential “cure” for non-normal residuals?(a) * Transforming two explanatory variables into a ratio(b) Removing large positive residuals(c) Using a procedure for estimation and inference which did not assume normality(d) Removing large negative residuals11. What would be the consequences for the OLS estimator if autocorrelation is present in a regression model but ignored?(a) It will be biased(b) It will be inconsistent(c) * It will be inefficient(d) All of (a), (b) and (c) will be true.12. If OLS is used in the presence of heteroscedasticity, which of the following will be likely consequences?(i) Coefficient estimates may be misleading(ii) Hypothesis tests could reach the wrong conclusions(iii) Forecasts made from the model could be biased(iv) Standard errors may inappropriate(a) * (ii) and (iv) only(b) (i) and (iii) only(c) (i), (ii), and (iii) only(d) (i), (ii), (iii), and (iv).13. If a residual series is negatively autocorrelated, which one of the following is the most likely value of the Durbin Watson statistic?(a) Close to zero(b) Close to two(c) * Close to four(d) Close to one.14. If the residuals of a model containing lags of the dependent variable areautocorrelated, which one of the following could this lead to?(a) Biased but consistent coefficient estimates(b) * Biased and inconsistent coefficient estimates(c) Unbiased but inconsistent coefficient estimates(d) Unbiased and consistent but inefficient coefficient estimates.15. Which one of the following is NOT a symptom of near multicollinearity? (a) The R 2 value is high(b) The regression results change substantively when one particular variable is deleted (c) * Confidence intervals on parameter estimates are narrow(d) Individual parameter estimates are insignificant16. Which one of the following would be the most appropriate auxiliary regression for a Ramsey RESET test of functional form?(a) * t t t v yy ++=210ˆαα (b) t t t t t t t t v x x x x x x y ++++++=326235224322102αααααα (c) t t t v y u++=2102ˆˆαα (d) t t t t t t t t v x x x x x x u ++++++=32623522432210αααααα17. If a regression equation contains an irrelevant variable, the parameter estimates will be(a)* Consistent and unbiased but inefficient(b)Consistent and asymptotically efficient but biased(c)Inconsistent(d)Consistent, unbiased and efficient.18. Put the following steps of the model-building process in the order in which it would be statistically most appropriate to do them:(i) Estimate model(ii) Conduct hypothesis tests on coefficients(iii) Remove irrelevant variables(iv) Conduct diagnostic tests on the model residuals(a)(i) then (ii) then (iii) then (iv)(b)(i) then (iv) then (ii) then (iii)(c)* (i) then (iv) then (iii) then (ii)(d)(i) then (iii) then (ii) then (iv).。

考研英语作文精炼句子软件

考研英语作文精炼句子软件

When it comes to refining sentences for the postgraduate entrance English exam, a software can be a valuable tool. Here are some key features that such a software should possess to be effective:1. Syntax Analysis: The software should have the capability to analyze the syntax of a sentence to identify any grammatical errors or awkward phrasings.2. Vocabulary Enhancement: It should offer suggestions for more sophisticated vocabulary that could elevate the quality of the writing without compromising the original meaning.3. Cohesion and Coherence: The software should check for the flow of ideas and suggest ways to improve the cohesion and coherence of the text.4. Style Consistency: Ensuring that the writing style remains consistent throughout the essay, the software should flag any inconsistencies in tone or formality.5. Idiom and Phrase Usage: The program should recognize common idioms and phrases and suggest more idiomatic expressions where appropriate.6. Punctuation Correction: Proper punctuation is crucial for clear communication. The software should correct any punctuation errors or suggest improvements.7. Paragraph Structuring: It should provide guidance on how to structure paragraphs effectively, ensuring each one has a clear topic sentence and supporting details.8. Thesaurus Integration: For those looking to vary their word choice, the software should have a builtin thesaurus to find synonyms and related terms.9. Readability Score: Providing a readability score can help users understand the complexity of their writing and make adjustments to meet their target audiences comprehension level.10. Customizable Feedback: Users should be able to customize the type of feedback they receive, focusing on areas they feel need the most improvement.11. Example Sentences: The software could provide example sentences to illustrate how to correct or improve a particular sentence structure or choice of words.12. Interactive Learning: Incorporating interactive elements where users can practicesentence refinement in a simulated environment can be beneficial for learning.13. Export Options: Users should be able to export their refined essays in various formats for further use or review.14. UserFriendly Interface: The software should be easy to navigate, with a clean and intuitive interface that makes the sentence refinement process straightforward.15. Regular Updates: Language evolves, and so should the software. Regular updates to the database and algorithms will ensure the software remains effective and relevant.By incorporating these features, a sentence refinement software can significantly assist students preparing for the postgraduate entrance English exam, helping them to produce highquality, polished essays.。

  1. 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
  2. 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
  3. 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。

Copyright 2003 by Richard Martin and Edward Robertson
1 of 6
A Comparison of Frameworks for Enterprise Architecture Modeling
Presented at ER2003 International Conference on Conceptual Modeling And constraints, by which we evaluate conformance. A deficiency of models is a lack of ability to formalize constraints between models. ⊗ Slide 7 – Artifact Prototypes Enterprise architecture frameworks are based upon notions of some preexisting set of prototype models that are known to interact. If we did not have so many kinds of model artifacts we would probably not need a framework to organize our access to them. When we only had a few simple models, usually because we severely restricted our model domain, we did not use frameworks. It was only after the complexity of models and their number increased dramatically that we began to see the framework concepts emerge. ⊗ Slide 8 – Entities in Time The next three slides attempt to convey our feeling that frameworks, of the kind that interest us the most, are of two basic varieties for which the distinguishing characteristic is their expression of time dependency. We’ll use the terms continuant and occurrent for their intuitive feel and in a relative sense. We realize that even the current literature is inconsistent in their definition. Recall our claim that purpose is characterized by ordered dependence. When that purposeful order is roughly chronological, the framework is occurrent. Frameworks with a purpose that is not referenced to chronology, or that extracts time from its purposeful dimension is called continuant – it was here yesterday, is here today, and will be here tomorrow. ⊗ Slide 9 – Continuants/Occurrents ⊗ Slide 10 – Enterprise Description ⊗ Slide 11 – Zachman Framework for Enterprise Architecture This is a somewhat dated by still valid image of John Zachman’s framework. You can get a current version off the web. ⊗ Slide 12 – Zachman Framework for Enterprise Architecture (IS version) The first characteristic of a big “F” framework is the framework or grid structure. In this case it’s shown as two dimensional. ⊗ It has a purposeful ordered ordinate dimension R, usually called role or perspective, of 5 or 6 coordinates depending upon version. The first and last coordinates provide an interface to externalities and the middle three abstract a conceptual owner, logical designer, and physical builder partitioning for model artifacts. ⊗ The other dimension is unordered ordinate and consists of coordinates expressing the universal partition of inquiry, what, how, where, who, when, and why. ⊗ The framework is to be populated by a wide variety Copyright 2003 by Richard Martin and Edward Robertson 2 of 6
A Comparison of Frameworks for Enterprise Architecture Modeling
Presented at ER2003 International Conference on Conceptual Modeling of models – among them a logical data model, logistics network, and rule design. ⊗ Of critical importance is the primitive model proto-type or association for each kind of inquiry that serves to distinguish that interrogative from all others. Keeping the models for each column primitive in the associative sense presents the greatest challenge in using this framework. ⊗ Slide 13 – Zachman Recursion Our formal approach to this framework, and others as well, includes recursion to manage an additional decomposition dimension. ⊗ Something like this as nested frames. ⊗ Slide 14 – Zachman Properties ⊗ Slide 15 – ISO 15704: Annex A – GERAM Here we have the overview slide for GERAM, the result of a task force of IFAC/IFIP that pulls together several previous efforts from the manufacturing domains. Notice that in addition to its three dimensional representation, its vertical ordinate dimension is labeled “life-cycle phases” giving it a distinctly occurrent character. GERAM is used in several international efforts including the virtual logistics networks of the Globemen project and is the basis for ISO 19439 that we present and discuss in more detail. ⊗ Slide 16 – ISO/CEN FDIS 19439 Again we have the three dimensional space of ISO 15704 and GERAM. ⊗ The model phases are familiar to anyone building models. Start at the top and work your way down to the bottom – hopefully gaining handsome return during operation. ⊗ 19439 distinguishes four views of the enterprise model that are considered to satisfy completely the need for enterprise description. ⊗ The last dimension makes the notion of proto-type models explicit and distinguishes partial models, say for a business sector or standard, from both the particular models of the enterprise being modeled or the generic constructs from which they are fabricated. Note that generic and parቤተ መጻሕፍቲ ባይዱial models form a reference catalog not defined during the operation phase. ⊗ Slide 17 – 19439 – Model Dimension ⊗ Slide 18 – 19439 – View Dimension ⊗ Slide 19 – 19439 – Genericity Dimension ⊗ Slide 20 – 19439 - Recursion The concept of recursion expressed in 15704’s GERAM and 19439 is somewhat different. Here, the models of the operational phase can be used either in conjunction with a reference catalog or ⊗
相关文档
最新文档