Graduation Project: Translating English Literature into Chinese
What Does Foreign Literature Translation for a Graduation Project Mean? (Two Essays)
Introduction: In modern higher education, the graduation project (also called the graduation thesis or bachelor's thesis) is a key part of completing a degree. For certain majors, such as translation, students may also be required to complete a foreign literature translation. This essay examines the meaning and purpose of the graduation-project translation task and explains why it is so important for translation students.
Main text:
1. Improving translation skills and techniques. Foreign literature translation is an important task for translation students. By translating foreign texts, students improve their translation skills and techniques through practice. In the process, they learn how to handle different types of foreign-language texts, become familiar with the terminology of different fields, and master common translation techniques and strategies.
2. Broadening language and cultural knowledge. The translation task requires students to have some knowledge of the languages involved and their backgrounds. When translating, students must follow the grammatical rules of the target language and make sure the translation conveys the meaning of the source text accurately and clearly. Through this process, they extend their linguistic and cultural knowledge and improve their cross-cultural communication skills.
3. Providing practical experience. The translation task gives students an opportunity to apply the theory learned in class to real work. Through practice, they deepen their understanding of what they have learned and discover and solve the problems and challenges that arise in actual translation. This is important for building the practical ability and experience they will need in future translation work.
4. Cultivating professional competence. The translation task requires students to demonstrate sound professional conduct as translators. During translation, they must maintain a professional attitude and a sense of responsibility and treat every task rigorously. They must learn how to assess and control translation quality to ensure that the final version is accurate and fluent. These requirements and this practice help students develop strong professional competence.
5. Strengthening independent learning and research skills. The translation task requires extensive reading and research so that students can better understand the material being translated and choose appropriate translation methods and strategies. In the process, they develop independent learning and research skills, become more sensitive to their academic and professional fields, and learn to think and solve problems independently. This has a positive effect on their future academic research and career development.
Summary (and introduction to the second essay): Graduation-project foreign literature translation (Thesis Translation) refers to translating relevant foreign-language literature during the graduation project and applying it in the research to provide theoretical support and reference.
Foreign Literature Translation for a Management-System Graduation Project
What's New in the .NET Compact Framework 2.0

The .NET Compact Framework version 2.0 provides many improvements over the previous release, the .NET Compact Framework version 1.0. Although the improvements are broad, they all focus on common goals: improving developer productivity, providing stronger compatibility with the full .NET Framework, and increasing support for device features. This article provides a high-level overview of the changes and improvements in the .NET Compact Framework 2.0.

User-interface-related features
The small size of mobile-device displays requires applications to use the available space efficiently. In the past, this forced developers to spend a great deal of time designing and implementing application user interfaces. Recent advances in mobile display capabilities, such as high resolutions and multiple-orientation support, have made user-interface development even more challenging. To simplify the task of creating application user interfaces, the .NET Compact Framework 2.0 provides many new features in this area.

Windows Forms controls
At the heart of any user interface are its controls, and the .NET Compact Framework 2.0 provides many new controls. These new controls include controls designed specifically for devices, as well as controls that give the .NET Compact Framework the same full set of controls as the .NET Framework.

MonthCalendar
The MonthCalendar control is a customizable calendar control for displaying dates, and it is useful for giving users a graphical way to select a date.

DateTimePicker
The DateTimePicker control is a customizable control for displaying date and time information and for allowing users to enter it. Because it combines a compact display with a graphical date-selection format, it is especially well suited to mobile-device applications. When displaying information, the DateTimePicker control looks similar to a text box; when the user selects a date, however, it can display a pop-up calendar similar to the MonthCalendar control.

WebBrowser
The WebBrowser control wraps the device's web browser, providing powerful display capabilities and exposing many events. In addition to letting your application provide customized behavior in response to these events, they let your application track the user's interaction with the content of the web browser.
3. Guilin University of Electronic Technology graduation project: English source text for translation
Graduation Design (Thesis) English Translation Source Text
School (Department): School of Adult Education  Major: Business Administration  Student Name: Zhou Yang  Student ID: 030113300433  Supervisor: Wang Yun  Title: Professor  October 12, 2014

The Changing Pattern of Pay and Benefits
Tudor, Thomas R.; Trumble, Robert R.
Journal of Compensation & Benefits, May 2008

Today, many companies still base their reward systems on the 1950s compensation model made popular during the brief period when U.S. companies dominated the world. With today's increasingly competitive environment, however, companies must look more closely at the cost-benefit of rewards, instead of just using them in an attempt to reduce employee dissatisfaction. Companies must provide short-term motivation and encourage employees to develop long-term skills that will aid the company. Most importantly, companies must also attract and retain high performers, instead of alienating them with pay systems that give everyone pay increases without regard to levels of performance. For example, such new compensation approaches may include skill-based pay, gainsharing plans, and flexible benefits systems.

Traditional compensation approaches are still often modeled on the centralization-based organizational model, in which decisions were made at the top and management rigidly defined tasks. However, with global competition becoming an increasingly prominent issue, companies need reward systems that match their movement to decentralized structures. Larger numbers of companies are also becoming very aware that they cannot just pass additional compensation costs onto future customers. Today, our pay systems must move in step with the participative-management trend by becoming more flexible instead of remaining fixed. This adjustment involves many factors, including shorter product life cycles, a need to be more flexible, a need for workers to continually gain additional skills, and for them to think more on the job.

In today's most successful companies, employee rewards and benefits are increasingly incorporated into an organization's strategic planning. Why? The rationale is that employee compensation has a substantial impact on the long-term financial position of a firm. Compensation structures should consider an organization's strategic requirements and should match organizational goals. Compensation strategic planning should involve: consideration of the internal and external environment; and creation of an organization's compensation statement, compensation goals, and the development of compensation policies.

Today, one strategic compensation trend is the use of pay incentives instead of the traditional, annual "everybody gets" pay increase. The rationale is to control costs and to more closely tie performance to compensation. We can group the changing pattern of compensation into two general areas: Pay Method Trends and Benefits Trends. Human Resources managers should familiarize themselves with these changing trends and determine the plan that is most suitable for their organization.

PAY METHOD TRENDS
There are a number of pay methods available for use by employers, including general pay increases, cost-of-living increases, merit pay, bonuses, skill-based pay, competence-based pay, CEO compensation, gainsharing, and various types of incentive pay.

General Pay Increase
A general pay increase is a pay increase given to everyone in a company. It can be a lump-sum payment, but it is more likely to be a percentage increase in base salary. The employer's rationale for the pay increase may have been the result of a market survey, job evaluation, or just a profitable year.
The trend, however, is for general increases to decline as pay-for-perfor mance systems become increasingly dominant. In addition, giving everyone the same raise so metimes decreases morale because high-performing employees see poor performers getting th e same reward.Cost-of-Living IncreaseCost-of-living increases are general pay increases triggered by a rise in an inflation-sensi tive index, such as the consumer price index or the producer price index. As with general pay increases, the use of cost-of-living pay increases is decreasing among companies. The rational e for this decrease is that with lower inflation (thus little change in prices), incomes are more s table and the need for inflation adjustments is not as great as it was in the past. In addition, col lective bargaining agreements are now less likely to include provisions for cost-of-living incre ases, so nonunion firms are not under as much pressure to provide them in an attempt to matc h union-negotiated compensation. Their decline can also be attributed to the fact that employe rs are moving away from pay systems that are nonperformance related.Merit PayMerit pay is another generic term in which pay incentives are given for overall job perfor mance.² Some problems frequently encountered with merit pay plans include: the use of subjective criteria when measuring employee performance;a lack of uniform standards for rating individual employees;differences among managers in how to make individual ratings.Merit pay was the first attempt by firms to create a pay-for-performance system. Howeve r, due to employer (and employee) dissatisfaction with merit pay plans, the trend is to eliminat e them and instead use pay-for-performance plans that are more objective (such as bonus plan s), and that use specific performance measuring criteria that aid in the performance appraisal p rocess.³ This trend includes both the private and public sectors, because the merit pay system i n the federal sector has also been inadequate.BonusA bonus is a generic term involving a type of pay-for-performance plan. Managers can give a bonus for individual or group performance, and for meeting objectives such as MBO (ma nagement by objectives). Researchers and practitioners have given these plans high marks for motivating employees, for creating loyalty, and for meeting performance objectives. In additio n, bonuses reduce the turnover of high-performing employees and increase the turnover of lo w performers, who do not get bonuses. If the bonus system is well-designed, they also create i nternal equity. As such, bonus systems (pay-for-performance) are the current trend in compens ation.Skill-Based PaySkill-based pay emphasizes a company's desire to increase the skills and knowledge of it s workforce. It may involve classes, voluntary job rotation, or tests. Its benefits are many, incl uding having trained people available to do a job if someone is absent. Skill-based pay also w orks well with quality circles because:it provides employees with a better understanding of the jobs their coworkers perform;it reduces resistance to restructuring or other needed changes;it leads to a more flexibleworkforce that can better adapt to new technologies or processes; and it encourages a lea rning environment.It does, however, require a large investment in training which can be expen sive.Competence-Based PayCompetence-based pay (the grid system) is very new and does vary from plan-to-plan. 
T he idea is not only to reward employees for how well they do a job, but for how they do the jo b. For example, a competence-based pay plan can be used to persuade workers to use the com puters that are sitting on their desks, or to adapt to other changes that come along. The rational e behind a competence-based pay plan is to keep employee skills current.CEO CompensationThe compensation of CEOs (and other top executives) has also been changing, and now inclu des more pay incentives—such as stock options—to better link performance with compensati on. Plans linking executive pay with performance may include stock options, cash bonuses, p hantom stock, or deferred compensation, all of which are ways of making top management m ore accountable for company performance. Today, performance considerations are a larger par t of executive compensation. The Securities and Exchange Commission also requires corporat ions to explain the rationale behind their executive compensation programs to shareholders.GainsharingGainsharing is a pay-for-performance plan in which “gains” are shared with employees f or improvements in profitability or productivity.Gainsharing plans are designed to create a partnership with employees so that both management and labor are working toward the same goa ls and that both groups are benefiting from the results. Gainsharing is a growing trend, and it f its well with other trends, such as participatory management, worker empowerment, and team work. It is also being used in many service businesses, such as banking and insurance. Gainsh aring encourages employee involvement and acceptance of change, and aligns employee goals with company goals.Five Types of Pay IncentivesWhile all pay incentives can be generically coined as “gainsharing,” we will briefly ment ion five types:1. ESOPs. Employee Stock Ownership Plans allow the sharing of gains through dividends and any increase in the value of company stock. ESOPs do create ownership in the company for e mployees that may result in additional motivation, but they do not necessarily have a participa tive-management component.2. Profit-Sharing Plans. Profit-sharing plans allow employees to share in the revenue they hel ped generate. This sharing can be either deferred or immediate. Some observers argue that ass ociating rewards and performance is difficult if managers only give rewards annually, and that perhaps employees should not share in the profits because they do not share in the risks. How ever, companies such as Lincoln Electric and Ford feel that profit sharing is a strong inducem ent to increase performance. The current rate of growth of these plans is significant. For best motivational results, companies should use a system that is based on some criteria that emplo yees understand, instead of just an arbitrary amount. The advantage of profitsharing plans is t hat employers do not have to pay a large sum of money if the profit target is not met.3. Scanlon Plans. Scanlon plans allow employees to share in any savings in labor cost (using a ratio) that is due to their increased performance. The rationale for ScanIon plans is to help em ployees identify with and participate in the company. Employees participating in such plans m ay have access to suggestion programs, brainstorming sessions, or committees to solve produc tion problems. The employer and the employees then share in the savings that result.4. Rucker Plans. 
Rucker plans allow employees to share in any improvement in the ratio of e mployee costs to the valued added in manufacturing. This is the most complex gainsharing pla n, because it deals with four variables: labor costs, sales value of production (changes in equip ment, or work methods, for example), purchases of outside services such as subcontracting, or utilities, and purchases of outside materials, involving “inventory, theft, and so on”. Rucker p lans are designed to give employees a stake in areas such as reducing labor costs, using raw m aterials, and outsourcing decisions. As such, everyone shares in the savings.5. Improshare Pl ans. Improshare plans allow employees to share in productivity gains that occur because of their efforts.[sup5] Following the Improshare approach, managers give bonuses when the actual hours for a specific amount of productivity are less than the standard that they created using a formula. The savings are split between the company and the workers, in a ratio such as 50⁄50.CHANGES IN BENEFIT PLANSChanges in benefit plans have occurred as a result of efforts to keep up with trends, to co ntain costs, and to meet government regulations. Employees often view benefits as an entitle ment, and their cost—which has steadily increased—now averages 36 percent of total wages. The trend is to get the most out of benefits, while keeping costs down. For example, employer s do not want to pay for any overlap of coverage, or to pay too much for coverage. As their co sts continue to go up, employers are now starting to question how much employees value their benefits. For example:Do they support recruitment, motivate, and retain good employees? Do they support the strategic mission of the firm?Do proposed benefits support the company's retention goals and the demographics of pot ential recruits?Do they support the company culture or the culture the company now wants to promote?A movement now exists among employers for measuring benefit results and continuously eval uating benefits. A focus on Total Quality Management makes the internal employee the custo mer of HR departments who have the product of “benefits.” HR departments want to satisfy t he customer, but are also benchmarking and quantifying each benefit. The strategic trend is to design benefits to make it easier to realize the corporate mission and to enhance the value of t he benefits offered. Another major trend is offering flexible benefits where employees make b enefit decisions to fit their lifestyles. 401(k) PlansToday, 401 (k) plans are popular retirement vehicles because contributions are made on b efore-tax basis and investment earnings are tax deferred. They also address the trend of more mobile employees, who do not stay with a company for their entire working lives. With 401 ( k) plans, employee accounts can be transferred to another company's plan or to an Individual Retirement Account. A company can also establish 401(k) plans without providing for employ er matching contributions, so the only employer cost is for plan administration.Managed Care PlansManaged care plans, such as Preferred Provider Organizations (PPOs) and Health Mainte nance Organizations (HMOs), are a growing benefit trend away from traditional medical insur ance. These plans often include preventive maintenance features that attempt to treat illnesses earlier to avoid higher costs. Although they have disadvantages, they are designed to save ben efit expenses. 
And, due to the of rising cost of health care, companies can no longer afford towrite a blank check to cover their employees' health care costs. So, they are requiring employ ees to pick up a portion of these costs by shifting more of the premium burden to employees, and⁄or increasing deductibles.Prepaid Legal ServicesPrepaid legal services are new plans in which legal expenses are paid before the services are used. The growing number of lawsuits in this country has sparked demand for this type of benefit. A company may offer this benefit if it wants to protect its employees from the threat o f litigation, so that their minds are on their work. Or, it may offer this benefit to keep up with i ts competitors who are offering such plans. At this point, it is too early to tell how popular pre paid legal services plans will be in the future, though it is possible that they will be offered as a flexible benefit option.Dependent-Care AssistanceDependent-care assistance is also a new benefit whose popularity is growing. Companies are beginning to recognize that in todays economy, both parents often work and that many wo rkers are raising children in single-parent households. This benefit can help attract employees and reduce turnover because parents do not like to make changes if their child-care provider s atisfies them. In addition to caring for children, many employees are responsible for the care o f elderly parents or other relatives. Eldercare is a benefit that addresses this need, and allows e mployees to stay focused on work instead of worrying about their parents. Dependent care ass istance is likely to be increasingly offered as an option in flexible benefit plans.Wellness ProgramsWellness programs are designed to reduce sick-leave and medical expenses. These progr ams may include exercise, nutrition, stress reduction classes, as well assmoking and substance abuse help. Why the popularity of wellness and counseling progr ams? Studies show that lifestyle and diet impact illness, and that counseling programs can hel p curtail other higher cost benefit usage.In linking benefits to a corporate strategy plan, employers want to: help employees to lower their health costs; reduce turnover of good employees; and increase productivity .A company's HR department can perform audits to make sure that a wellness program is a valued added benefit.Flexible Benefit PlansFlexible benefit plans are increasing in number because the needs of workers are more di verse today. The rationale behind these plans is to increase employee satisfaction, reduce turn over, and decrease expenses to employers. Flexible benefit plans can also help employees realize the value of their benefits. The cost to administer these plans may be higher than with stan dard benefit provision, but flexible benefit plans can save money by not providing a specific b enefit to an employee who does not want it. Flexible benefit plans support workplace diversit y and changing employee demographics by allowing employers to offer a variety of benefits t o their workers.Frequently included in flexible benefit plans are salary reduction features that enable em ployees to divert pretax dollars into nontaxable benefit choices. 
If an employer needs to reduc e costs because of low profits one year, it can lessen its contribution to benefits, but still allow employees to direct where they want their benefit dollars to go, instead of making across-the-board cuts in coverage.Flexible benefit plans also put a price on benefits, which helps makes employees aware o f their actual cost—a fact often taken for granted. Flexible benefit plans help to equalize benef its provision because one employee may want a child-care benefit, but an older employee may want more life insurance coverage. These plans tend to have a positive impact on employees and are more cost-effective to employers.Flexible benefit plans also:reduce the entitlement mentality that has become associated •with the provision of many benefits;better associate benefits with direct compensation; andfit well with the trend of more employee involvement in company decision-making.Outplacement Benefit PlansOutplacement benefits plans provide support for terminated employees, and in turn show the remaining employees that the company is trying to be fair. Such plans may include office space, resume writing assistance, and employment counseling, among other benefits. These pl ans are designed to reduce termination litigation and to help maintain the morale of remaining employees.Source:Tudor,Thomas R,Trumble,Robert R.The Changing Pattern of Pay and Benefits[J].Jour nal of Compensation & Benefits,2008,(May):22-25Pay for performanceNot everyone sees the trend toward paying for skills and/or competencies as a good thing:It would be easy to conclude from reports in the business press that merit pay is dead and organizations need to reconstitute pay plans to pay people in some new way. Suggestions include paying employees for the knowledge, shills, abilities and behaviors they bring to theworkplace. Although interesting, this call for wholesale reform overlooks fundamental tenets of economic and behavioral theories.Pay for performance is the holy grail of modern compensation administration—widely sought but hard to actually achieve .Pay for performance is the flag, motherhood, and apple pie, but it is easier said than done. One primary problem is defining performance properly, so that the organization pays for results and not for effort. Once over that hurdle, there remains the large impediment of finding enough money to make the reward for top performance meaningful. Many different approaches are used—various variable pay schemes, annual awards in lieu of permanent increase in base pay, and the traditional merit pay salary increase.The concept of pay for performance has different meanings to different people. Many either fail to recognize the pay for performance fails when the different in reward between adequate performance and outstanding performance is inconsequential or cannot solve the problem of funding adequate differentiation while dealing with essential range maintenance costs.For example, Logue reported on the introduction of performance-based pay for unionized employees in a public university. The old system had four annual, essentially “automatic,”5percent steps from minimum to maximum. The new system added 10 percent to the top of the salary range. All employees would move through the regular range automatically, but growth within the top 10 percent was based only on performance. 
Since 20 percent of all salary increase funds were allocated to performance increases, top performers could receive additional amounts over and above the automatic movement through the standard portion of the salary range.Such performance-based salary increases (PSIs) went to 12 percent of the represented employees, who receive PSIs ranging from 3.9 to 5.9 percent in the first fiscal year (2000 to 2001). PSIs ranged from 0.5 to 4 percent in fiscal year 2001 to 2002 due to the greater number of employees receiving increases. One wonders what happened the third year! In any event, achieving an extra 1 or 2 or 3 percent is unlikely to stimulate anyone to significantly higher levels or performance, particularly when they are guaranteed automatic annual increases.Others take steps to address the differentiation problem:Through the implementation of a new tool called the Monoline Merit Increase Matrix, one organization shows how it rewards employees based on performance and gets more mileage out of its merit increase budget…The Monoline Merit eliminates the use of comparisons for merit increase. It is designed to create a larger distinction in the merit percent provided between top performers and employees who meet expectations and are paid fairly for their work…Under the new methodology, managers must examine the possibility that employees who meet performance goals do not have to receive a merit increase if they are competitively paid. Pay for whose performance :Even if one can solve the differentiation problem, there still remains the problem of determining the locus of performance pay plans all devolve into two broad categories, depending on whether performance is measured at the group or at the individual level: Group plans can fail to specifically direct or reward individual employees behaviors. As a result, group plans have produced somewhat limited results with respect to improvements in employees performance or organizational profitability. Further, group plans do not different reward individual who perform well vs. those who do not. This may exact the perception of pay inequities among better performers.Performance pay plans based on individual performance are more effectives in improving individual employee performances vs. group plans. Typically, these plans provide specific and objective goals for employees to work toward. However, rewarding individual performance may reduce cooperation among employees and focus employees on a restricted range of results.Designing an effective compensation program:First, an effective compensation program should recognize that monetary rewards do change employee behavior despite what some academicians have claimed. The power of money is twofold. It not only is valued for itself, for what it can buy, but it can also serve as a powerful communication devise, as a score card if you will.Second, stick to the basics when designing a salary program. Pay people at a reasonable market level for base salary based on survey data (what is reasonable will depend on your ability to pay and the availability of the talent you need.) focus primarily on external pay market data, and maintain internal equity only within each separate pay market. That is, internal equity is important within information technology, engineering, accounting, etc., but is not important between these groups as they are in separate pay markets. One size never fits all!Third, use variable pay everywhere. 
For those positions that cannot be individually measured, use group measures (work group, location, division, and/or corporate measures, as appropriate). For those positions that can be individual measured, use a combination of individual and group measures (individual measures to motivate individual effort, group measures to encourage cooperative behavior).Fourth, keep the performance measures as simple as possible and limit their numbers, preferably to two or three, Remember, what you measure is what you get, so pick yourmeasures carefully.Fifth, communicate, communicate, communicate. Communicate the details of the program. Communicate the rationale for the measures—that is, how they fit into the organization’s strategy. Communicate on an ongoing basis actual performance versus target performance.Source: Martin G.Wolf,2002 “linking performance scorecards to profit performance pay”ACA News,vol.41,no.4,april,pp.23-25.Variable payVariable pay is an expanding field within compensation driven by the emerging trends of pay for performance and competitive advantage. Funding these new programs and developing the processes supporting long-term effectiveness is critical.Pay for performanceIn the past, company employment was routinely assumed to be for a career. Many, many employees worked for one organization for their entire work life. Loyalty on the part of the employer and employees was taken for granted. Times have changed. Reengineering, downsizing, and talent wars have reworked the playing field for employment decisions. No longer does a new college graduate dream of working for the same company for life. In addition, worldwide competitive business pressure has focused corporations on performance. In the past and still for many organizations today, paying for performance is normally done with promotions over the career. Base pay increase over time is a normal method to reward performance.Information technology professionals can now move from company to company with ease and can expect to receive a year 2000 bonus if they stay until the new millennium. Organizations realize the competitive demands for change and the need to motivate change. Many employees are now asking “What is in it for me if I take the risk?” Variable pay is an excellent way to answer the question. Pay for performance with variable pay below the executive level is in its infancy for most organization excluding the sales organization. Less than 30 percent is profit sharing and does not have a line of sight to business unit performance .Fewer than 10 percent of organization have variable pay programs for all employees that reward individual, team, and business unite performance. Variable pay has many opportunities for growth with the new organization emphasis on performance, retention, and competitive advantage.Funding variable payFinancially, variable pay is very attractive compared to base pay increase programs. Base pay increase compound and a concern for permanent increase cost. In addition, base pay increases have an entitlement mentality where the recipient is looking for the next one shortly after receiving the last increase. Many corporation reinforce this expectation by having an annual increase plan (normally called a merit increase plan ) to adjust for inflation and market movement.Variable pay is attractive because it does not compound from year to year, and the unspent funds can be reused each year or budget cycle. 
Having employees learn their performance bonus each year creates a compelling reason for them to improve instead of relaxing into an entitlement mentality, which is often the result of base pay increase programs. When business results are good, the payout can be attractive, and, when times are bad, the payout is small, reducing costs and helping to improve the bottom line.Strategic planning can support the movement to variable pay. Moving to a strong variable pay program can take years with the need to build success along the way.Variable pay successSo if variable pay has such great potential, why has there been such a reluctant to implement variable pay? One answer is that the failure rate for variable pay plans is 38 percent as document in an ACA study by Marc Wallace. The success rate in executive compensation and sales compensation is substantially greater, but the concern for excessive reward is real. Executive compensation requires hand holding and considerable administration. Many small-group plans require period redesign, which takes more compensation consulting resources than are available. These draw backs are part of the reluctance of management to implement variable pay.Building variable pay plans to be continuous for the long term is the key to variable pay success, Most plans need to be renewed annually to ensure on going success. Fairness, trust and impact on the business are all measures of success. Plans that do not continuously evolve need extra attention every year and will fail to more frequently. I helped implement two variable pay plan for all employees at Coring incorporated, and those plans are now over 10 years old and going strong, One is a spot bonus plan, and the other is good sharing .variable pay plans can indeed work very well.Balancing individual incentives with shared business goals is important. This rewards for business success are the most critical and should be more significant in total dollars than individual reward. The bottom line is that the business needs to succeed. Line of sight and control are also important variables. Many times this is where incentives come into play. People like to be judged on what is control is delicate. Too much emphasis on individual。
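As a concrete illustration of the Improshare-style gainsharing arithmetic described in the pay-incentive discussion above (actual hours compared against a standard, with the savings split between the company and the workers, for example 50/50), here is a minimal sketch. The standard hours, wage rate, and split ratio are hypothetical and are not taken from the article.

```python
# Minimal sketch of an Improshare-style gainsharing bonus calculation.
# All numbers are hypothetical; the article only describes the general mechanism:
# a bonus is paid when the actual hours for a given amount of output fall below
# a standard, and the savings are split between company and workers (e.g., 50/50).

def improshare_bonus_pool(standard_hours: float,
                          actual_hours: float,
                          hourly_rate: float,
                          worker_share: float = 0.5) -> float:
    """Return the workers' share of the labor savings for one period."""
    hours_saved = max(standard_hours - actual_hours, 0.0)  # no negative bonus
    savings = hours_saved * hourly_rate                    # value of labor saved
    return savings * worker_share                          # workers' portion

# Example period: the workforce produced the standard output in fewer hours.
pool = improshare_bonus_pool(standard_hours=10_000,
                             actual_hours=9_200,
                             hourly_rate=25.0,   # hypothetical average labor cost
                             worker_share=0.5)   # 50/50 split, as in the article
print(f"Workers' bonus pool for the period: ${pool:,.2f}")
```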
Graduation Project Document Set: Foreign Literature Translation
Undergraduate Graduation Design (Thesis)
Foreign Literature Translation
Original Title:
Worlds Collide:
Exploring the Use of Social Media Technologies for
Online Learning
Translated Title (Chinese):
世界的碰撞:
探索社交媒体技术在在线学习的应用
Author's Department: Department of Computer Science and Engineering
Author's Major: Computer Science and Technology
Author's Class:
Author's Name:
Author's Student ID:
Supervisor's Name:
Supervisor's Title: Lecturer
Completion Date: February 2013
Prepared by the Academic Affairs Office, North China Institute of Aerospace Engineering
Note: 1. When reviewing the translation, the supervisor should pay attention to the following: (1) whether the translated foreign literature is highly relevant to the topic of the graduation design (thesis) and is included as a foreign-language reference in the thesis's reference list; (2) whether the translated foreign literature reaches the required length (more than 3,000 characters); (3) whether the language of the translation is accurate, fluent, and of reference value.
2. The original foreign-language text should be placed after the translation as an attachment.
Graduation Thesis Foreign Literature Translation
Graduation Design (Thesis) Foreign Literature Translation
School: School of Finance and Accounting  Year and Major: Financial Management, Class of 201*  Name:  Student ID: 132148***
Attachment: Financial Risk Management
[Abstract] Although financial risk has increased significantly in recent years, risk and risk management are not contemporary issues.
The result of increasingly global markets is that risk may originate with events thousands of miles away that have nothing to do with the domestic market. Information is available instantaneously, which means that change and subsequent market reactions occur very quickly. The economic climate and markets can be affected very quickly by changes in exchange rates, interest rates, and commodity prices. Counterparties can rapidly become problematic. As a result, it is important to ensure financial risks are identified and managed appropriately. Preparation is a key component of risk management.
[Key Words] Financial risk, Risk management, Yields
I. Financial Risks Arising
1.1 What Is Risk
1.1.1 The concept of risk
Risk provides the basis for opportunity. The terms risk and exposure have subtle differences in their meaning. Risk refers to the probability of loss, while exposure is the possibility of loss, although they are often used interchangeably.
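As a minimal numeric sketch of the abstract's point that exchange-rate moves can change a firm's position very quickly, the example below revalues an unhedged foreign-currency receivable under a sudden rate shock. The currency pair, amounts, and rates are hypothetical and are not taken from the source text.

```python
# Hypothetical illustration: revaluing an unhedged foreign-currency receivable
# after an exchange-rate move (all figures are made up for illustration only).

def domestic_value(foreign_amount: float, rate: float) -> float:
    """Value of a foreign-currency amount in domestic currency at a given rate."""
    return foreign_amount * rate

receivable_eur = 1_000_000   # amount owed to the firm, in EUR (hypothetical)
rate_today = 1.10            # USD per EUR today (hypothetical)
rate_after_shock = 1.04      # USD per EUR after a sudden move (hypothetical)

value_today = domestic_value(receivable_eur, rate_today)
value_after = domestic_value(receivable_eur, rate_after_shock)

print(f"Value today:       ${value_today:,.0f}")
print(f"Value after shock: ${value_after:,.0f}")
print(f"Unhedged loss:     ${value_today - value_after:,.0f}")
```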
Graduation Project Translation (English)
School of Rail Transportation, Graduation Design (Thesis) Foreign Literature Translation
Topic: Design of an On-Board DC Constant-Current Source for Trains
Major: Electronic Information Engineering  Class: 10115111  Student ID: 1011511137  Name: Zhao Shiwei  Supervisor: Chen Wen  March 3, 2014
Source: IEEE TRANSACTIONS ON INDUSTRY AND GENERAL APPLICATIONS, VOL. IGA-2, NO. 5, SEPT/OCT 1966

Highly Regulated DC Power Supplies

Abstract: The design and application of highly regulated dc power supplies present many subtle, diverse, and interesting problems. This paper discusses some of these problems (especially in connection with medium power units), but emphasis has been placed more on circuit economics rather than on ultimate performance. Sophisticated methods and problems encountered in connection with precision reference supplies are therefore excluded. The problems discussed include the subjects of temperature coefficient, short-term drift, thermal drift, transient response degeneration caused by remote sensing, and switching preregulator-type units and some of their performance characteristics.

INTRODUCTION
Any survey of the commercial dc power supply field will uncover the fact that 0.01 percent regulated power supplies are standard types and can be obtained at relatively low costs. While most users of these power supplies do not require such high regulation, they nevertheless get this at little extra cost for the simple reason that it costs the manufacturer very little to give him 0.01 percent instead of 0.1 percent. The performance of a power supply, however, includes other factors besides line and load regulation. This paper will discuss a few of these, namely temperature coefficient, short-term drift, thermal drift, and transient response. Present medium power dc supplies commonly employ preregulation as a means of improving power/volume ratios and costs, but some characteristics of the power supply suffer by this approach. Some of the shortcomings as well as advantages of this technology will be examined.

TEMPERATURE COEFFICIENT
A decade ago, most commercial power supplies were made to regulation specifications of 0.25 to 1 percent. The reference elements were gas diodes having temperature coefficients of the order of 0.01 percent [1]. Consequently, the TC (temperature coefficient) of the supply was small compared to the regulation specifications and often ignored. Today, the reference element often carries a TC specification greater than the regulation specification. While the latter may be improved considerably at little cost increase, this is not necessarily true of TC. Therefore, the use of very low TC zener diodes, matched differential amplifier stages, and low TC wire-wound resistors must be analyzed carefully, if costs are to be kept low.

A typical first amplifier stage is shown in Fig. 1. CR1 is the reference zener diode and R1 is the output adjustment potentiometer.

Fig. 1. Input stage of power supply.
Fig. 2. Equivalent circuit of zener reference.

Let it be assumed that e3, the output of the stage, feeds additional differential amplifiers, and under steady-state conditions e3 = 0. A variation of any of the parameters could cause the output to drift; while this is also true of the other stages, the effects are reduced by the gain of all previous stages. Consequently, the effects of other stages will be neglected. The following discussion covers the effects of all elements having primary and secondary influences on the overall TC.

Effect of R3
The equivalent circuit of the CR1-R3 branch is shown in Fig. 2. The zener has been replaced with its equivalent voltage source Ez and internal impedance Rz.
For high gain regulators, the input of the differential amplifier will have negligible change with variations of R3 so thatbefore and after a variation of R3 is made.If it is further assumed that IB << Iz; then from (1)Also,Eliminating I, from (2b),andNow, assuming thatthen,Equation (2b) can also be writtenThe Zener DiodeThe zener diode itself has a temperature coefficient andusually is the component that dominates the overall TCof the unit. For the circuit of Fig. 1, the TC ofthe circuit describes, in essence, the portion of the regulator TC contributed by the zener. If the bridge circuit shown in Fig. 1 were used in conjunction with a dropping resistor so that only a portion of the output voltage appeared across the bridge circuit shown, the TC of the unit and the zener would be different. Since the characteristic of zeners is so well known and so well described in the literature, a discussion will not be given here [2].Variation of Base-Emitter VoltagesNot only do the values of V,, of the differential am-plifier fail to match, but their differentials with tem perature also fail to match. This should not, however,suggest that matched pairs are required. The true reference voltage of Fig. 1 is not the value E,, but E, + (Vie, -Vbe2)-Since, for most practical applicatioinsthe TC of the reference will be the TC of the zener plusConsidering that it is difficult to obtain matched pairs that have differentials as poor as 50 V/°C, it becomes rather apparent that, in most cases, a matched pair bought specifically for TC may be overdesigning.Example 2: A standard available low-cost matched pair laims 30AV/°C. In conjunction with a 1N752, the ontribution to the overall TC would beTests, performed by the author on thirteen standard germanium signal transistors in the vicinity of room temperature and at a collector current level of 3 mA,indicated that it is reasonable to expect that 90 to 95 percent of the units would have a base-emitter voltage variation of -2.1 to -2.4 mV/°C. Spreads of this magnitude have also been verified by others (e.g., Steiger[3]). The worst matching of transistors led to less than 400 ,V/°C differential. In conjunction with a 1N752,even this would give a TC of better than 0.007%/0C.Variation of Base CurrentsThe base current of the transistors is given byA variation of this current causes a variation in signal voltage at the input to the differential amplifier due to finite source impedances. Matching source impedances is not particularly desirable, since it reduces the gain of the system and requires that transistors matched for I,o and A be used. Hunter [4 ] states that the TC of a is in the range of +0.2%/0C to -0.2%7/'C and that 1,, may be approximated bywhere Ao is the value at To.β is also temperature dependent and Steiger [3] experimentally determined the variation to be from about 0.5%/°C to 0.9%/0C.And,Fig. 3. Input circuit of Q2.The current AIB flows through the source impedance per Fig. 3. The drops in the resistance string, however, are subject to the constraint that EB (and AEB) are determined by the zener voltage and the base-emitter drops of Q1 and Q2. 
Consequently, if in going from temperature T1to T2 a change AEB occurs,The change in output voltage isAndExample 3: For Q2 (at 25°C)(see Example 1)∴Variation of R,The effects of a variation of the TC between RIA and RIB is sufficiently self-evident so that a discussion of the contribution is not included.SHORT-TERM DRIFTThe short-term drift of a supply is defined by the National Electrical Manufacturers Association (NEMA) as "a change in output over a period of time, which change is unrelated to input, environment, or load [5]."Much of the material described in the section on temperature coefficient is applicable here as well. It has been determined experimentally, however, that thermal air drafts in and near thevicinity ofthe powersupplycontributesenormouslyto theshort-termcharacteristics. Thecooling effects of moving air are quite well known, but it is not often recognized that even extremely slow air movements over such devices as zeners and transistors cause the junction temperature of these devices to change rapidly. If the TC of the supply is large compared to the regulation, then large variations in the output will be observed. Units having low TC's achieved by compensation-that is, by canceling out the effects of some omponents by equal and opposite effects of others may still be plagued by these drafts due to the difference in thermal time constants of the elements.Oftentimes, a matched transistor differential amplifier in a common envelope is used for the first amplifier just to equalize and eliminate the difference in cooling effects between the junctions. Approximations to this method include cementing or holding the transistors together, imbedding the transistors in a common metal block, etc. Excellent results were achieved by the author by placing the input stage and zener reference in a separate enclosure. This construction is shown in Fig. 4. The improvement in drift obtained by means of the addition of the metal cover is demonstrated dramatically in Fig. 5.Fig. 5. Short-term drift of a power supply similar to the one shown in Fig. 4 with and without protective covers. The unit was operated without the cover until time tl, when the cover was attached. The initial voltage change following t, is due to a temperaturerise inside the box.Fig. 5. Short-term drift of a power supply similar to the one shown n Fig. 4 withand without protective covers. The unit was operated without the cover until time tl, when the cover was attached. The initial voltage change following t, is due to atemperature rise inside the box.If potentiometers are used in the supply for output adjustment (e.g., RI), care should be used in choosing the value and design. Variations of the contact resistance can cause drift. It is not always necessary, however, to resort to the expense of high-resolution multiturn precision units to obtain low drift. A reduction in range of adjustment, use of low-resistance alloys and low-resolution units which permit the contact arm to rest firmly between turns, may be just as satisfactory. Of course, other considerations should include the ability of both the arms and the wire to resist corrosion. Silicone greases are helpful here. Periodic movement of contact arms has been found helpful in "healing" corroded elements.THERMAL DRIFTNEMA defines thermal drift as "a change in output over a period of time, due to changes in internal ambient temperatures not normally related to environmental changes. 
Thermal drift is usually associated with changes in line voltage and/or load changes [5]."Thermal drift, therefore, is strongly related to the TC of the supply as well as its overall thermal design. By proper placement of critical components it is possible to greatly reduce or even eliminate the effect entirely. It is not uncommon for supplies of the 0.01 percent(regulation) variety to have drifts of between 0.05 to 0.15 percent for full line or full load variations. In fact, one manufacturer has suggested that anything better than 0.15 percent is good. Solutions to reducing thermal drift other than the obvious approach of improving the TC and reducing internal losses include a mechanical design that sets up a physical and thermal barrier between the critical amplifier components and heat dissipating elements. Exposure to outside surfaces with good ventilation is recommended. With care, 0.01 to 0.05 percent is obtainable.TRANSIENT RESPONSEMost power supplies of the type being discussed have a capacitor across the load terminals. This is used for stabilization purposes and usually determines the dominant time constant of the supply. The presence of this capacitor unfortunately leads to undesirable transient phenomena when the supply is used in the remote sensing mode①. Normally, transistorized power supplies respond in microseconds, but as the author has pointed out [6], the response can degenerate severely in remote sensing .The equivalent circuit is shown in Fig. 6. The leads from the power supply to the load introduce resistance r. Is is the sensing current of the supply and is relatively constant.Under equilibrium conditions,A sudden load change will produce the transient of Fig. 7. The initial "spike" is caused by an inductive surge Ldi/dt; the longer linear discharge following is the resultof the capacitor trying to discharge (or charge). The discharge time iswhereandThe limitations of I,, are usually not due to available drive of the final amplifier stages but to other limitations, current limiting being the most common. Units using pre regulators of the switching type (transistor or SCR types) should be looked at carefully if the characteristics mentioned represent a problem.①Remote sensing is the process by which the power supply senses voltage directly at the load.Fig. 6. Output equivalent circuit at remote sensing.Fig. 7. Transient response, remote sensing.Fig. 8. Block diagram.Preregulated supplies are used to reduce size and losses by monitoring and controlling the voltage across the class-A-type series passing stage (Fig. 8). Since the main regulator invariably responds much quicker than the preregulator, sufficient reserve should always be built into the drop across the passing stage. Failure to provide this may result in saturation of the passing stage when load is applied, resulting in a response time which is that of the preregulator itself.SWITCHING PREREGULATOR-TYPE UNITS The conventional class-A-type transistorized power supply becomes rather bulky, expensive, and crowded with passing stages, as the current and power level of the supply increases. The requirement of wide output adjustment range, coupled with the ability of the supply to be remotely programmable, aggravates the condition enormously. For these reasons the high-efficiency switching regulator has been employed as a preregulator in commercial as well as military supplies for many years. The overwhelming majority of the supplies used silicon controlled rectifiers as the control element. 
For systems operating from 60-cycle sources, this preregulator responds in 20 to 50 ms.Recent improvements in high-voltage, high-power switching transistors has made the switching transistor pproach more attractive. This system offers a somewhat lower-cost, lower-volume approach coupled with a submillisecond response time. This is brought about by a high switching rate that is normally independent of line frequency. The switching frequency may be fixed, a controlled variable or an independent self-generated (by the LC filter circuit) parameter [7], [8]. Faster response time is highly desirable since it reduces the amount of reserve voltage required across the passing stage or the amount of (storage) capacity required in the preregulator filter.A transistor suitable for operating as a power switch has a high-current, high-voltage rating coupled with low leakage current. Unfortunately, these characteristics are achieved by a sacrifice in thermal capacity, so that simultaneous conditions of voltage and current leading to high peak power could be disastrous. It therefore becomes mandatory to design for sufficient switch drive during peak load conditions and also incorporate current-limiting or rapid overload protection systems.Commercial wide-range power supplies invariably have output current limiting, but this does not limit the preregulator currents except during steady-state load conditions (including short circuits). Consider, for example, a power supply operating at short circuit and the short being removed suddenly. Referring to Fig. 8, the output would rise rapidly, reduce the passing stage voltage, and close the switching transistor. The resulting transient extends over many cycles (switching rate) so that the inductance of the preregulator filter becomes totally inadequate to limit current flow. Therefore, the current will rise until steady state is resumed, circuit resistance causes limiting, or insufficient drive causes the switch to come out of saturation. The latter condition leads to switch failure.Other operating conditions that would produce similar transients include output voltage programming and initial turn-on of the supply. Momentary interruption of input power should also be a prime consideration.One solution to the problem is to limit the rate of change of voltage that can appear across the passing stage to a value that the preregulator can follow. This can be done conveniently by the addition of sufficient output capacitance. This capacitance inconjunction with the current limiting characteristic would produce a maximum rate of change ofwhereC0 = output capacity.Assuming that the preregulator follows this change and has a filter capacitor Cl, then the switch current isDuring power on, the preregulator reference voltage rise must also be limited. Taking this into account,whereER = passing stage voltageTl = time constant of reference supply.The use of SCR's to replace the transistors would be a marked improvement due to higher surge current ratings, but turning them off requires large energy sources. While the gate turn-off SCR seems to offer a good compromise to the overall problem, the severe limitations in current ratings presently restrict their use.REFERENCES[1] J. G. Truxal, Control Engineer's Handbook. New York: McGrawHill, 1958, pp. 11-19.[2] Motorola Zener Diode/Rectifier Handbook, 2nd ed. 1961.[3] W. Steiger, "A transistor temperature analysis and its applica-tion to differential amplifiers," IRE Trans. on Instrumentation,vol. 1-8, pp. 
82-91, December 1959.[4] L. P. Hunter, Handbook of Semi-Conductor Electronics. NewYork: McGraw Hill, 1956, p. 13-3.[5] "Standards publication for regulated electronic dc powersupplies," (unpublished draft) Electronic Power Supply Group,Semi-Conductor Power Converter Section, NEMA.[6] P. Muchnick, "Remote sensing of transistorized power sup-plies," Electronic Products, September 1962.[7] R. D. Loucks, "Considerations in the design of switching typeregulators," Solid State Design, April 1963.[8] D. Hancock and B. Kurger, "High efficiency regulated powersupply utilizing high speed switching," presented at the AIEEWinter General Meeting, New York, N. Y., January 27-February 1, 1963.[9] R. D. Middlebrook, Differential Amplifiers. New York: Wiley,1963.[10] Sorensen Controlled Power Catalog and Handbook. Sorensen,Unit of Raytheon Company, South Norwalk, Conn.With the rapid development of electronic technology, application field of electronic system is more and more extensive, electronic equipment, there are more and more people work with electronic equipment, life is increasingly close relationship. Any electronic equipment are inseparable from reliable power supply for power requirements, they more and more is also high. Electronic equipment miniaturized and low cost in the power of light and thin, small and efficient for development direction. The traditional transistors series adjustment manostat is continuous control linear manostat. This traditional manostat technology more mature, and there has been a large number of integrated linear manostat module, has the stable performance is good, output ripple voltage small, reliable operation, etc. But usually need are bulky and heavy industrial frequency transformer and bulk and weight are big filter.In the 1950s, NASA to miniaturization, light weight as the goal, for a rocket carrying the switch power development. In almost half a century of development process, switch power because of its small volume, light weight, high efficiency, wide range, voltage advantages in electric, control, computer, and many other areas of electronic equipment has been widely used. In the 1980s, a computer is made up of all of switch power supply, the first complete computer power generation. Throughout the 1990s, switching power supply in electronics, electrical equipment, home appliances areas to be widely, switch power technology into the rapid development. In addition, large scale integrated circuit technology, and the rapid development of switch power supply with a qualitative leap, raised high frequency power products of, miniaturization, modular tide.Power switch tube, PWM controller and high-frequency transformer is an indispensable part of the switch power supply. The traditional switch power supply is normally made by using high frequency power switch tube division and the pins, such as using PWM integrated controller UC3842 + MOSFET is domestic small power switch power supply, the design method of a more popularity.Since the 1970s, emerged in many function complete integrated control circuit, switch power supply circuit increasingly simplified, working frequency enhances unceasingly, improving efficiency, and for power miniaturization provides the broad prospect. Three end off-line pulse width modulation monolithic integrated circuit TOP (Three switch Line) will Terminal Off with power switch MOSFET PWM controller one package together, has become the mainstream of switch power IC development. 
Adopt TOP switch IC design switch power, can make the circuit simplified, volume further narrowing, cost also is decreased obviouslyMonolithic switching power supply has the monolithic integrated, the minimalist peripheral circuit, best performance index, no work frequency transformer can constitute a significant advantage switching power supply, etc. American PI (with) company in Power in the mid 1990s first launched the new high frequency switching Power supply chip, known as the "top switch Power", with low cost, simple circuit, higher efficiency. The first generation of products launched in 1994 represented TOP100/200 series, the second generation product is the TOPSwitch - debuted in 1997 Ⅱ. The above products once appeared showed strong vitality and he greatly simplifies thedesign of 150W following switching power supply and the development of new products for the new job, also, high efficiency and low cost switch power supply promotion and popularization created good condition, which can be widely used in instrumentation, notebook computers, mobile phones, TV, VCD and DVD, perturbation VCR, mobile phone battery chargers, power amplifier and other fields, and form various miniaturization, density, on price can compete with the linear manostat AC/DC power transformation module.Switching power supply to integrated direction of future development will be the main trend, power density will more and more big, to process requirements will increasingly high. In semiconductor devices and magnetic materials, no new breakthrough technology progress before major might find it hard to achieve, technology innovation will focus on how to improve the efficiency and focus on reducing weight. Therefore, craft level will be in the position of power supply manufacturing higher in. In addition, the application of digital control IC is the future direction of the development of a switch power. This trust in DSP for speed and anti-interference technology unceasing enhancement. As for advanced control method, now the individual feels haven't seen practicability of the method appears particularly strong,perhaps with the popularity of digital control, and there are some new control theory into switching power supply.(1)The technology: with high frequency switching frequencies increase, switch converter volume also decrease, power density has also been boosted, dynamic response improved. Small power DC - DC converter switch frequency will rise to MHz. But as the switch frequency unceasing enhancement, switch components and passive components loss increases, high-frequency parasitic parameters and high-frequency EMI and so on the new issues will also be caused.(2)Soft switching technologies: in order to improve the efficiency ofnon-linearity of various soft switch, commutation technical application and hygiene, representative of soft switch technology is passive and active soft switch technology, mainly including zero voltage switch/zero current switch (ZVS/ZCS) resonance, quasi resonant, zero voltage/zero current pulse width modulation technology (ZVS/ZCS - PWM) and zero voltage transition/zero current transition pulse width modulation (PWM) ZVT/ZCT - technical, etc. By means of soft switch technology can effectively reduce switch loss and switch stress, help converter transformation efficiency (3)Power factor correction technology (IC simplifies PFC). 
At present PFC technology is divided mainly into the two broad classes of passive PFC and active PFC. PFC technology improves the input power factor of AC-DC converters and reduces the harmonic pollution they inject into the power grid.
(4) Modular technology. Modular technology can meet the needs of distributed power systems and enhance system reliability.
(5) Low-output-voltage technology. With the continuous development of semiconductor manufacturing technology, microprocessors and portable electronic devices operate at lower and lower voltages, which requires future DC-DC converters to provide low output voltages suited to the supply requirements of microprocessors and portable electronic devices.
People working in the switching power supply field are developing the related power electronic devices on one hand and high-frequency conversion technology on the other; the two promote each other and push switching power supplies steadily toward light, small, thin, low-noise, highly reliable and interference-immune products. Switching power supplies can be divided into the two broad classes of AC/DC and DC/DC converters (there are also AC/AC converters and DC/AC inverters). DC/DC converters are now modularized; their design technology and production processes are mature and standardized at home and abroad and have been accepted by users. AC/DC modules, however, because of their own characteristics, meet more complex technical and manufacturing problems in the course of modularization. The structure and properties of these two types of switching power supply are discussed separately below.
The development direction of switching power supplies is high frequency, high reliability, low loss, low noise, interference immunity and modularization. Because raising the switching frequency is the key technique for making supplies light, small and thin, the major foreign switching power supply manufacturers are all devoted to developing new highly intelligent components, in particular reducing the losses of the secondary-side synchronous rectifier devices, and to increasing technical innovation in Mn-Zn ferrite materials so as to obtain higher magnetic performance at high frequency and large magnetic flux density (Bs); the miniaturization of capacitors is likewise a key technology. The application of SMT technology has allowed switching power supplies to make considerable progress, with components arranged on both sides of the circuit board to ensure that the supply remains light, small and thin. High-frequency operation also requires innovation in the traditional PWM switching technique; realizing ZVS and ZCS soft switching has become the mainstream of switching power supply technology and greatly improves switching power supply efficiency. For the sake of high reliability, American switching power supply manufacturers reduce the stresses on devices by measures such as lowering operating currents and junction temperatures, so that the reliability of their products is greatly increased.
Modularization is the general development trend of switching power supplies. Modular power components can be used to build distributed power systems, can be designed as N+1 redundant systems, and allow capacity expansion through parallel connection.
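Returning to item (3), power factor correction: the quantity being corrected is the power factor of the AC input, whose standard definition (added here for reference, not taken from the translated source) is

PF = \frac{P}{V_{rms}\,I_{rms}},

and for a sinusoidal mains voltage with a distorted input current this factors into a distortion term and a displacement term,

PF = \frac{I_{1,rms}}{I_{rms}}\,\cos\varphi_1 .

A plain rectifier with a capacitor-input filter draws narrow current pulses and commonly achieves a power factor of only about 0.5 to 0.7, while an active PFC front end (typically a boost stage shaping the input current to follow the input voltage) can reach values above 0.95, which is how PFC reduces the harmonic pollution mentioned above.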
As for the drawback that switching power supplies generate considerable noise: if high switching frequency alone is pursued, the noise increases with it, whereas partial-resonance conversion circuit techniques can, in theory, raise the frequency and reduce the noise at the same time. Some practical applications of resonant conversion technology still face technical problems, however, so a great deal of work remains to be done in this area before the technique can be put to practical use.
With the continuous innovation of power electronics technology, the switching power supply industry has broad prospects for development. To speed up the development of China's switching power supply industry, it must take the road of rapid technological innovation and follow a joint development path with Chinese characteristics, so as to contribute to the high-speed development of the national economy.
The basic principle and the function of the components
Classified according to their control principle, switching power supplies have the following operating modes:
1) The pulse width modulation (PWM) type. Its main characteristic is a fixed switching frequency: the output voltage is regulated by varying the pulse width, which achieves the purpose of voltage stabilization. Its core is the pulse width modulator. The fixed switching period makes the design of the filter circuit convenient. Its shortcoming is that, limited by the minimum conduction time of the power switch, the output voltage cannot be regulated over a wide range; in addition, the output usually has to carry a dummy load (also called a pre-load) to prevent the output voltage from rising at no load. At present most integrated switching power supplies adopt the PWM approach.
2) The pulse frequency modulation (PFM) type. Its characteristic is that the pulse width is fixed and the output voltage is regulated by varying the switching frequency, which achieves the purpose of voltage stabilization. Its core is the pulse frequency modulator. In the circuit design a fixed-pulse-width generator is used in place of the pulse width modulator, together with a sawtooth generator and a voltage-to-frequency converter (for example a voltage-controlled oscillator, VCO, whose frequency changes with the control voltage). Its voltage stabilization principle is: when the output voltage Uo rises, the pulse width of the controller output stays unchanged while the period becomes longer, so the duty ratio falls and Uo is brought back down. The output voltage range of a PFM switching power supply is very wide, and the output does not need a dummy load.
The modulation waveforms of the PWM and PFM approaches are shown in Fig. 1(a) and (b) respectively, where tp denotes the pulse width (namely the conduction time tON of the power switch) and T denotes the period. The difference between the two is easy to see, but they have something in common: both use time ratio control (TRC) as the voltage stabilization principle, so whether tp is changed or T is changed, what is ultimately adjusted is the ratio of tp to T, that is, the duty ratio, and with it the output voltage.
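To illustrate the time-ratio-control idea in code, the following minimal C sketch is offered (illustrative only, not part of the translated source; it assumes an ideal buck-type stage operating in continuous conduction, where Vout is approximately D times Vin, and the gain and duty limits are arbitrary). It computes the duty ratio a PWM controller would settle at and applies one bounded proportional correction step, which is roughly what the error amplifier and modulator do in hardware.

#include <stdio.h>

/* Ideal buck relationship in continuous conduction: Vout = D * Vin. */
static double ideal_duty(double vin, double vref)
{
    double d = vref / vin;
    if (d < 0.05) d = 0.05;   /* respect a minimum on-time, as noted in the text */
    if (d > 0.95) d = 0.95;   /* and a maximum duty ratio */
    return d;
}

/* One step of a crude time-ratio controller: nudge the duty ratio in
   proportion to the output-voltage error (the gain is illustrative). */
static double update_duty(double d, double vout, double vref)
{
    double kp = 0.01;
    d += kp * (vref - vout);
    if (d < 0.05) d = 0.05;
    if (d > 0.95) d = 0.95;
    return d;
}

int main(void)
{
    double vin = 12.0, vref = 5.0;
    printf("steady-state duty ratio ~ %.2f\n", ideal_duty(vin, vref));
    printf("one corrective step from D=0.40 at Vout=4.8 V: %.3f\n",
           update_duty(0.40, 4.8, vref));
    return 0;
}

In a PFM scheme the same error signal would instead lengthen or shorten the period T while the on-time tp stayed fixed; either way it is the ratio tp/T that is being controlled.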
毕业设计外文文献翻译(原文+译文)
Environmental problems caused by Istanbul subway excavation and suggestions for remediation
伊斯坦布尔地铁开挖引起的环境问题及补救建议
Ibrahim Ocak
Abstract: Many environmental problems caused by subway excavations have inevitably become an important point in city life. These problems can be categorized as transporting and stocking of excavated material, traffic jams, noise, vibrations, piles of dust and mud, and lack of supplies. Although these problems cause many difficulties, the most pressing for a big city like Istanbul is excavation, since the other listed difficulties result from it. Moreover, these problems are environmentally and regionally restricted to the period over which construction projects are underway and disappear when construction is finished. Currently, in Istanbul, there are nine subway construction projects in operation, covering approximately 73 km in length; over 200 km is to be constructed in the near future. The amount of material excavated from ongoing construction projects covers approximately 12 million m3. In this study, the problems caused by subway excavation, primarily the problem of excavation waste (EW), are analyzed and suggestions for remediation are offered.
摘要:许多地铁开挖引起的环境问题不可避免地成为城市生活的重要部分。
汽车电子毕设设计外文文献翻译(适用于毕业论文外文翻译+中英文对照)
Ultrasonic ranging system design
Publication title: Sensor Review. Bradford: 1993. Vol.
ABSTRACT: Ultrasonic ranging technology has wide application value in many fields, such as industrial sites, vehicle navigation and sonar engineering. It has already been used in level measurement, self-guided autonomous vehicles, fieldwork robots, automotive navigation, and air and underwater target detection, identification and location. There is therefore real practical value in studying ranging theory and methods in depth. To improve the precision of existing ultrasonic ranging systems and to satisfy engineering requirements on ranging precision, range and usability, a portable ultrasonic ranging system based on a single-chip processor was developed.
Keywords: Ultrasound, Ranging System, Single Chip Processor
1. Introduction
With the development of science and technology and the improvement of people's standard of living, the development and construction of cities is speeding up. Urban drainage systems have developed greatly and their construction keeps improving. However, for historical reasons and because of many unpredictable factors, the drainage system often lags behind urban construction, so it frequently happens that ground which has just been built on must be excavated again in order to upgrade the drainage facilities, and the city's sewage culverts have to be cleared so that the sewage can reach the treatment system. Comfort is very important to people's lives. A mobile robot designed to clear drainage culverts, together with its automatic control system, provides a way of keeping the culverts clear, and the design of such a culvert-clearing robot is the core of this work. The core component to be developed for its control system is an ultrasonic range finder; it is therefore very important to design a good ultrasonic range finder.
2. A principle of ultrasonic distance measurement
The application of the AT89C51: a single-chip microcomputer (SCM) integrates the major components of a computer onto one chip. It is a microcontroller that combines multiple interfaces and counters, and such intelligent products are widely used in industrial automation; the MCS-51 microcontroller is typical and representative of this class.
Microcontrollers are used in a multitude of commercial applications such as modems, motor-control systems, air-conditioner control systems and automotive engine control, among others. The high processing speed and enhanced peripheral set of these microcontrollers make them suitable for such high-speed, event-based applications. However, these critical application domains also require that the microcontrollers be highly reliable. High reliability and low market risk can be ensured by a robust testing process and a proper tools environment for the validation of these microcontrollers both at the component and at the system level. Intel's Platform Engineering department developed an object-oriented, multi-threaded test environment for the validation of its AT89C51 automotive microcontrollers. The goal of this environment was not only to provide a robust testing environment for the AT89C51 automotive microcontrollers, but also to develop an environment that can easily be extended and reused for the validation of several other future microcontrollers.
The environment was developed in conjunction with Microsoft Foundation Classes(AT89C51).1.1 Features* Compatible with MCS-51 Products* 2Kbytes of Reprogrammable Flash MemoryEndurance: 1,000Write/Erase Cycles* 2.7V to 6V Operating Range* Fully Static operation: 0Hz to 24MHz* Two-level program memory lock* 128x8-bit internal RAM* 15programmable I/O lines* Two 16-bit timer/counters* Six interrupt sources*Programmable serial UART channel* Direct LED drive output* On-chip analog comparator* Low power idle and power down modes1.2 DescriptionThe AT89C2051 is a low-voltage, high-performance CMOS 8-bit microcomputer with 2Kbytes of flash programmable and erasable read only memory (PEROM). The device is manufactured using Atmel’s high density nonvolatile memory technology and is compatible with the industry standard MCS-51 instruction set and pinout. By combining a versatile 8-bit CPU with flash on a monolithic chip, the Atmel AT89C2051 is a powerful microcomputer which provides a highly flexible and cost effective solution to many embedded control applications.The AT89C2051 provides the following standard features: 2Kbytes of flash,128bytes of RAM, 15 I/O lines, two 16-bit timer/counters, a five vector two-level interrupt architecture, a full duplex serial port, a precision analog comparator, on-chip oscillator and clock circuitry. In addition, the AT89C2051 is designed with static logicfor operation down to zero frequency and supports two software selectable power saving modes. The idle mode stops the CPU while allowing the RAM, timer/counters, serial port and interrupt system to continue functioning. The power down mode saves the RAM contents but freezer the oscillator disabling all other chip functions until the next hardware reset.1.3 Pin Configuration1.4 Pin DescriptionVCC Supply voltage.GND Ground.Prot 1Prot 1 is an 8-bit bidirectional I/O port. Port pins P1.2 to P1.7 provide internal pullups. P1.0 and P1.1 require external pullups. P1.0 and P1.1 also serve as the positive input (AIN0) and the negative input (AIN1), respectively, of the on-chip precision analog comparator. The port 1 output buffers can sink 20mA and can drive LED displays directly. When 1s are written to port 1 pins, they can be used as inputs. When pins P1.2 to P1.7 are used as input and are externally pulled low, they will source current (IIL) because of the internal pullups.Port 3Port 3 pins P3.0 to P3.5, P3.7 are seven bidirectional I/O pins with internal pullups. P3.6 is hard-wired as an input to the output of the on-chip comparator and is not accessible as a general purpose I/O pin. The port 3 output buffers can sink 20mA. When 1s are written to port 3 pins they are pulled high by the internal pullups and can be used as inputs. As inputs, port 3 pins that are externally being pulled low will source current (IIL) because of the pullups.Port 3 also serves the functions of various special features of the AT89C2051 as listed below.1.5 Programming the FlashThe AT89C2051 is shipped with the 2 Kbytes of on-chip PEROM code memory array in the erased state (i.e., contents=FFH) and ready to be programmed. The code memory array is programmed one byte at a time. 
Once the array is programmed, to re-program any non-blank byte, the entire memory array needs to be erased electrically.Internal address counter: the AT89C2051 contains an internal PEROM address counter which is always reset to 000H on the rising edge of RST and is advanced applying a positive going pulse to pin XTAL1.Programming algorithm: to program the AT89C2051, the following sequence is recommended.1. power-up sequence:Apply power between VCC and GND pins Set RST and XTAL1 to GNDWith all other pins floating , wait for greater than 10 milliseconds2. Set pin RST to ‘H’ set pin P3.2 to ‘H’3. Apply the appropriate combination of ‘H’ or ‘L’ logic to pins P3.3, P3.4, P3.5,P3.7 to select one of the programming operations shown in the PEROM programming modes table.To program and Verify the Array:4. Apply data for code byte at location 000H to P1.0 to P1.7.5.Raise RST to 12V to enable programming.5. Pulse P3.2 once to program a byte in the PEROM array or the lock bits. The byte-write cycle is self-timed and typically takes 1.2ms.6. To verify the programmed data, lower RST from 12V to logic ‘H’ level and set pins P3.3 to P3.7 to the appropriate levels. Output data can be read at the port P1 pins.7. To program a byte at the next address location, pulse XTAL1 pin once to advance the internal address counter. Apply new data to the port P1 pins.8. Repeat steps 5 through 8, changing data and advancing the address counter for the entire 2 Kbytes array or until the end of the object file is reached.9. Power-off sequence: set XTAL1 to ‘L’ set RST to ‘L’Float all other I/O pins Turn VCC power off2.1 The principle of piezoelectric ultrasonic generatorPiezoelectric ultrasonic generator is the use of piezoelectric crystal resonators to work. Ultrasonic generator, the internal structure as shown, it has two piezoelectric chip and a resonance plate. When it’s two plus pulse signal, the frequency equal to the intrinsic piezoelectric oscillation frequency chip, the chip will happen piezoelectric resonance, and promote the development of plate vibration resonance, ultrasound is generated. Conversely, it will be for vibration suppression of piezoelectric chip, the mechanical energy is converted to electrical signals, then it becomes the ultrasonic receiver.The traditional way to determine the moment of the echo’s arrival is based on thresholding the received signal with a fixed reference. The threshold is chosen well above the noise level, whereas the moment of arrival of an echo is defined as the first moment the echo signal surpasses that threshold. The intensity of an echo reflecting from an object strongly depends on the object’s nature, size and distance from the sensor. Further, the time interval from the echo’s starting point to the moment when it surpasses the threshold changes with the intensity of the echo. As a consequence, a considerable error may occur even two echoes with different intensities arriving exactly at the same time will surpass the threshold at different moments. The stronger one will surpass the threshold earlier than the weaker, so it will be considered as belonging to a nearer object.2.2 The principle of ultrasonic distance measurementUltrasonic transmitter in a direction to launch ultrasound, in the moment to launch the beginning of time at the same time, the spread of ultrasound in the air, obstacles on his way to return immediately, the ultrasonic reflected wave received by the receiverimmediately stop the clock. 
Ultrasound in the air as the propagation velocity of 340m/s, according to the timer records the time t, we can calculate the distance between the launch distance barrier(s), that is: s=340t / 23. Ultrasonic Ranging System for the Second Circuit DesignSystem is characterized by single-chip microcomputer to control the use of ultrasonic transmitter and ultrasonic receiver since the launch from time to time, single-chip selection of 875, economic-to-use, and the chip has 4K of ROM, to facilitate programming.3.1 40 kHz ultrasonic pulse generated with the launchRanging system using the ultrasonic sensor of piezoelectric ceramic sensorsUCM40, its operating voltage of the pulse signal is 40kHz, which by the single-chip implementation of the following procedures to generate.puzel: mov 14h, # 12h; ultrasonic firing continued 200msHere: cpl p1.0; output 40kHz square wavenop;nop;nop;djnz 14h, here;retRanging in front of single-chip termination circuit P1.0 input port, single chip implementation of the above procedure, the P1.0 port in a 40kHz pulse output signal, after amplification transistor T, the drive to launch the first ultrasonic UCM40T, issued 40kHz ultrasonic pulse, and the continued launch of 200ms. Ranging the right and the left side of the circuit, respectively, then input port P1.1 and P1.2, the working principle and circuit in front of the same location.3.2 Reception and processing of ultrasonicUsed to receive the first launch of the first pair UCM40R, the ultrasonic pulse modulation signal into an alternating voltage, the op-amp amplification IC1A and after polarization IC1B to IC2. IC2 is locked loop with audio decoder chip LM567, internal voltage-controlled oscillator center frequency of f0=1/1.1R8C3, capacitor C4 determinetheir target bandwidth. R8-conditioning in the launch of the high jump 8 feet into a low-level, as interrupt request signals to the single-chip processing.Ranging in front of single-chip termination circuit output port INT0 interrupt the highest priority, right or left location of the output circuit with output gate IC3A access INT1 port single-chip, while single-chip P1.3 and P1.4 received input IC3A, interrupted by the process to identify the source of inquiry to deal with, interrupt priority level for the first left right after. Part of the source code is as follows:Receivel: push pswpush accclr ex1; related external interrupt 1jnb p1.1, right; P1.1 pin to 0, ranging from right to interrupt service routine circuitjnb p1.2, left; P1.2 pin to 0, to the left ranging circuit interrupt service routinereturn: SETB EX1; open external interrupt 1pop accpop pswretiright: …; right location entrance circuit interrupt service routineAjmp Returnleft: …; left ranging entrance circuit interrupt service routineAjmp Return3.3 The calculation of ultrasonic propagation timeWhen you start firing at the same time start the single-chip circuitry within the timer T0, the use of timer counting function records the time and the launch of ultrasonic reflected wave received time. When you receive the ultrasonic reflected wave, the receiver circuit output a negative jump in the end of INT0 or INT1 interrupt request generates a signal, single-chip microcomputer in response to external interrupt request, the implementation of the external interrupt service subroutine, read the time difference, calculating the distance. 
Some of its source code is as follows:RECEIVE0: PUSH PSWPUSH ACCCLR EX0; related external interrupt 0MOV R7, TH0; read the time valueMOV R6, TL0CLR CMOV A, R6SUBB A, #0BBH; calculate the time differenceMOV 31H, A; storage resultsMOV A, R7SUBB A, # 3CHMOV 30H, ASETB EX0; open external interrupt 0\POP ACCPOP PSWRETIFor a flat target, a distance measurement consists of two phases: a coarse measurement and a fine measurement:Step 1: Transmission of one pulse train to produce a simple ultrasonic wave.Step 2: Changing the gain of both echo amplifiers according to equation, until the echo is detected.Step 3: Detection of the amplitudes and zero-crossing times of both echoes.Step 4: Setting the gains of both echo amplifiers to normalize the output at, say 3 volts. Setting the period of the next pulses according to the: period of echoes. Setting the time window according to the data of step 2.Step 5: Sending two pulse trains to produce an interfered wave. Testing the zero-crossing times and amplitudes of the echoes. If phase inversion occurs in the echo, determine to otherwise calculate to by interpolation using the amplitudes near the trough. Derive t sub m1 and t sub m2.Step 6: Calculation of the distance y using equation.4、The ultrasonic ranging system software designSoftware is divided into two parts, the main program and interrupt service routine. Completion of the work of the main program is initialized, each sequence of ultrasonic transmitting and receiving control.Interrupt service routines from time to time to complete three of the rotation direction of ultrasonic launch, the main external interrupt service subroutine to read the value of completion time, distance calculation, the results of the output and so on.5、ConclusionsRequired measuring range of 30cm-200cm objects inside the plane to do a number of measurements found that the maximum error is 0.5cm, and good reproducibility. Single-chip design can be seen on the ultrasonic ranging system has a hardware structure is simple, reliable, small features such as measurement error. Therefore, it can be used not only for mobile robot can be used in other detection system.Thoughts: As for why the receiver do not have the transistor amplifier circuit, because the magnification well, integrated amplifier, but also with automatic gain control level, magnification to 76dB, the center frequency is 38k to 40k, is exactly resonant ultrasonic sensors frequency.6、Parking sensor6.1 Parking sensor introductionReversing radar, full name is "reversing the anti-collision radar, also known as" parking assist device, car parking or reversing the safety of assistive devices, ultrasonic sensors(commonly known as probes), controls and displays (or buzzer)and other components. To inform the driver around the obstacle to the sound or a moreintuitive display to lift the driver parking, reversing and start the vehicle around tovisit the distress caused by, and to help the driver to remove the vision deadends and blurred vision defects and improve driving safety.6.2 Reversing radar detection principleReversing radar, according to high-speed flight of the bats in thenight, not collided with any obstacle principles of design anddevelopment. Probe mounted on the rear bumper, according to different price and brand, the probe only ranging from two, three, four, six, eight,respectively, pipe around. The probe radiation, 45-degree angle up and downabout the search target. 
The greatest advantage is to explore lower than the bumper of the driver from the rear window is difficult to see obstacles, and the police, suchas flower beds, children playing in the squatting on the car.Display parking sensor installed in the rear view mirror, it constantlyremind drivers to car distance behindthe object distance to the dangerous distance, the buzzer starts singing, allow the driver to stop. When the gear lever linked into reverse gear, reversing radar, auto-start the work, the working range of 0.3 to 2.0 meters, so stop when the driver was very practical. Reversing radar is equivalent to an ultrasound probe for ultrasonic probe can be divided into two categories: First, Electrical, ultrasonic, the second is to use mechanical means to produce ultrasound, in view of the more commonly used piezoelectric ultrasonic generator, it has two power chips and a soundingboard, plus apulse signal when the poles, its frequency equal to the intrinsic oscillation frequency of the piezoelectric pressure chip will be resonant and drivenby the vibration of the sounding board, the mechanical energy into electrical signal, which became the ultrasonic probe works. In order to better study Ultrasonic and use up, people have to design and manufacture of ultrasonic sound, the ultrasonic probe tobe used in the use of car parking sensor. With this principle in a non-contactdetection technology for distance measurement is simple, convenient and rapid, easyto do real-time control, distance accuracy of practical industrial requirements. Parking sensor for ranging send out ultrasonic signal at a givenmoment, and shot in the face of the measured object back to the signal wave, reversing radar receiver to use statistics in the ultrasonic signal from the transmitter to receive echo signals calculate the propagation velocity in the medium, which can calculate the distance of the probe and to detect objects.6.3 Reversing radar functionality and performanceParking sensor can be divided into the LCD distance display, audible alarm, and azimuth directions, voice prompts, automatic probe detection function is complete, reversing radar distance, audible alarm, position-indicating function. A good performance reversing radar, its main properties include: (1) sensitivity, whether theresponse fast enough when there is an obstacle. (2) the existence of blind spots. (3) detection distance range.6.4 Each part of the roleReversing radar has the following effects: (1) ultrasonic sensor: used tolaunch and receive ultrasonic signals, ultrasonic sensors canmeasure distance. (2) host: after the launch of the sine wave pulse to the ultrasonic sensors, and process the received signal, to calculate the distance value, the data and monitor communication. (3) display or abuzzer: the receivinghost from the data, and display the distance value and provide differentlevels according to the distance from the alarm sound.6.5 Cautions1, the installation height: general ground: car before the installation of 45 ~55: 50 ~ 65cmcar after installation. 2, regular cleaningof the probe to prevent the fill. 3, do not use the hardstuff the probe surface cover will produce false positives or ranging allowed toprobe surface coverage, such as mud. 4, winter to avoid freezing. 5, 6 / 8 probe reversing radar before and after the probe is not free to swap may cause the ChangMing false positive problem. 6, note that the probe mounting orientation, in accordance with UP installation upward. 
7, do not mount the probe directly on sheet metal; vibration of the sheet metal will cause the probe to resonate, resulting in false alarms.
超声测距系统设计
原文出处:传感器文摘,布拉福德,1993年
超声测距技术在工业现场、车辆导航、水声工程等领域具有广泛的应用价值,目前已应用于物位测量、机器人自动导航以及空气中与水下的目标探测、识别、定位等场合。
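Returning to the English excerpt above, the s = 340t/2 relation and the distance-to-alarm mapping it describes can be illustrated with the short C sketch below (illustrative only, not part of the source; the assumption of 1 microsecond per timer tick corresponds to a standard 8051 running from a 12 MHz crystal, and the alarm thresholds are made-up values rather than figures from the text).

#include <stdio.h>

#define SOUND_SPEED_M_PER_S 340.0
#define US_PER_TIMER_TICK   1.0   /* assumed: 12 MHz crystal, 1 machine cycle = 1 us */

/* Round-trip time of flight to one-way distance: s = v * t / 2. */
static double echo_to_distance_m(unsigned char th0, unsigned char tl0)
{
    double ticks = (double)(((unsigned)th0 << 8) | tl0);   /* 16-bit timer reading */
    double t_seconds = ticks * US_PER_TIMER_TICK * 1e-6;
    return SOUND_SPEED_M_PER_S * t_seconds / 2.0;
}

/* Map distance to a parking-assist warning level (thresholds are illustrative). */
static int warning_level(double d_m)
{
    if (d_m < 0.3) return 3;   /* below the 0.3 m working limit quoted above: stop */
    if (d_m < 0.8) return 2;   /* fast beeping */
    if (d_m < 2.0) return 1;   /* slow beeping, within the 0.3-2.0 m working range */
    return 0;                  /* no warning */
}

int main(void)
{
    unsigned char th0 = 0x0B, tl0 = 0xB8;   /* example echo: 0x0BB8 = 3000 ticks = 3.0 ms */
    double d = echo_to_distance_m(th0, tl0);
    printf("distance = %.2f m, warning level = %d\n", d, warning_level(d));
    return 0;
}

With a 3.0 ms round trip the sketch reports a distance of about 0.51 m, which would fall in the fast-beeping band of a typical reversing radar.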
本科毕业设计外文文献翻译
本科毕业设计外文文献翻译
学校代码: 10128
学 号:
题 目: Shear wall structural design of high-level framework
学生姓名:
学 院: 土木工程学院
系 别: 建筑工程系
专 业: 土木工程专业(建筑工程方向)
班 级: 土木08-(5)班
指导教师: (副教授)
Shear wall structural design of high-level framework
Wu Jicheng
Abstract: Starting from the basic concepts of the frame-shear wall structure, this paper analyses the detailing design content of the frame-shear wall, including the design of the seismic walls and of the shear span ratio, and gives the points to note in the design of the frame-shear wall structure, the most commonly used concrete structural form.
Keywords: concrete; frame-shear wall structure; high-rise buildings
The wall is an important building element of modern high-rise buildings, and the dimensions of a frame shear wall must comply with the building regulations. The principle is that the wall may be large in its plan dimensions but must be small in thickness, so that geometrically it takes the form of a plate and its state of stress is close to that of a thin shell. The shear wall is a planar member: it resists in-plane horizontal shear forces and bending moments and must also carry vertical compression. It works under the combined action of bending moment, axial force and shear force, and under horizontal load it behaves as a cantilever deep beam fixed at its base into the foundation. In actual projects shear walls are divided into solid walls and coupled shear walls; solid walls include, for example, the gable walls of ordinary housing, the web walls of fish-bone structural systems, and walls with only small openings. Coupled shear walls are wall limbs connected by coupling beams. Because the stiffness of the coupling beams is generally smaller than the stiffness of the wall limbs, the individual action of each wall limb is pronounced; attention must be paid to the inflection points of the coupling beams and to the axial compression ratio limits of the wall limbs. When the openings in a shear wall are too large, the members degenerate into short, wide beams and wide-column wall limbs, that is, variable-cross-section members rigidly connected at both ends, and under load many wall limbs develop inflection points; the calculation and detailing should therefore be treated approximately as for a frame structure.
The design of shear walls should be based on the characteristics of each type of wall, on their different mechanical behaviour and requirements, and on the internal force distribution and failure modes of the wall, with specific and comprehensive consideration given to the reinforcement design and the detailing measures. In frame-shear wall structural design the structure is analysed as a whole for horizontal and vertical actions in both directions, and the internal forces so obtained are used in normal-section capacity calculations for eccentric compression or eccentric tension.
As for the seismic walls in a frame-shear wall high-rise building: in actual projects the quantity of seismic walls should be sufficient to satisfy the storey drift limits, while their location is relatively flexible. Seismic walls should be arranged continuously and run through the full height of the building, and the design should avoid abrupt changes in wall limb length and openings that are not aligned from storey to storey. At the same time, the wall width remaining beside an opening should not be less than 300 mm, so as to guarantee the length of the edge member acting as a column, that is, of the constrained boundary element. Longitudinal and transverse walls should be connected to each other to form a bi-directional lateral-force-resisting system, each acting as the flange of the other.
For frame-shear wall structures of seismic grade one or two, the span-to-depth ratio of the coupling beams should not be greater than 5 and the beam depth should not be less than 400 mm. The offset between the centreline of a beam and the centreline of the column or wall should not be greater than 1/4 of the column width, in order to reduce the torsional effect of the seismic action on the column; otherwise the stirrup ratio of the column can be increased to compensate. If the shear span ratio of the shear wall is greater than 2 and the span-to-depth ratio of the coupling beams is greater than 2.5, the design shear compression ratio should not exceed 0.2; if, however, the shear span ratio of the wall is less than 2 and the span-to-depth ratio of the coupling beams is less than 2.5, the shear compression ratio should not exceed 0.15. In addition, in the strengthened region at the bottom of a frame-shear wall structure the wall thickness should not be less than 200 mm nor less than 1/16 of the storey height, while in other regions it should not be less than 160 mm nor less than 1/20 of the storey height. The walls of a frame-shear wall structure should be enclosed on all sides by beams or concealed beams and by end columns, so as to form a boundary frame. The horizontally distributed reinforcement of a shear wall mainly resists shear; when the building is taller or longer, or where the wall works together with the frame, this reinforcement should be appropriately increased, especially at sensitive locations such as beam positions or places where temperature effects or stiffness changes occur. The vertical reinforcement of the wall should then be considered, since it mainly resists bending; in some multi-storey shear wall structures the vertical reinforcement ratio may be taken somewhat lower, for example in the less heavily constrained boundary elements or in the reinforcement of the boundary elements.
高层框架剪力墙结构设计
吴继成
摘要: 本文从框架剪力墙结构设计的基本概念入手, 分析了框架剪力墙的构造设计内容, 包括抗震墙、剪跨比等的设计, 并给出混凝土结构中最常用的框架剪力墙结构设计的注意要点。
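A note on the two ratios quoted in the English text above, with the definitions as they are commonly used in concrete design practice (added here for the reader; the exact notation varies between codes and is not taken from the translated article). The shear span ratio of a wall section is usually written

\lambda = \frac{M}{V\,h_{w0}},

where M and V are the bending moment and shear force at the section and h_{w0} is the effective depth of the wall, while the shear compression ratio that the 0.2 and 0.15 limits above constrain is of the form

\frac{V}{f_c\,b\,h_{w0}},

that is, the design shear force normalized by the concrete compressive strength f_c times the web thickness b times the effective depth.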
毕业设计(论文)外文资料翻译(学生用)
毕业设计外文资料翻译学院:信息科学与工程学院专业:软件工程姓名: XXXXX学号: XXXXXXXXX外文出处: Think In Java (用外文写)附件: 1.外文资料翻译译文;2.外文原文。
附件1:外文资料翻译译文网络编程历史上的网络编程都倾向于困难、复杂,而且极易出错。
程序员必须掌握与网络有关的大量细节,有时甚至要对硬件有深刻的认识。
一般地,我们需要理解连网协议中不同的“层”(Layer)。
而且对于每个连网库,一般都包含了数量众多的函数,分别涉及信息块的连接、打包和拆包;这些块的来回运输;以及握手等等。
这是一项令人痛苦的工作。
但是,连网本身的概念并不是很难。
我们想获得位于其他地方某台机器上的信息,并把它们移到这儿;或者相反。
这与读写文件非常相似,只是文件存在于远程机器上,而且远程机器有权决定如何处理我们请求或者发送的数据。
Java最出色的一个地方就是它的“无痛苦连网”概念。
有关连网的基层细节已被尽可能地提取出去,并隐藏在JVM以及Java的本机安装系统里进行控制。
我们使用的编程模型是一个文件的模型;事实上,网络连接(一个“套接字”)已被封装到系统对象里,所以可象对其他数据流那样采用同样的方法调用。
除此以外,在我们处理另一个连网问题——同时控制多个网络连接——的时候,Java内建的多线程机制也是十分方便的。
本章将用一系列易懂的例子解释Java的连网支持。
15.1 机器的标识当然,为了分辨来自别处的一台机器,以及为了保证自己连接的是希望的那台机器,必须有一种机制能独一无二地标识出网络内的每台机器。
早期网络只解决了如何在本地网络环境中为机器提供唯一的名字。
但Java面向的是整个因特网,这要求用一种机制对来自世界各地的机器进行标识。
为达到这个目的,我们采用了IP(互联网地址)的概念。
IP以两种形式存在着:(1) 大家最熟悉的DNS(域名服务)形式。
我自己的域名是。
所以假定我在自己的域内有一台名为Opus的计算机,它的域名就可以是。
土木工程专业毕业设计外文文献翻译2篇
土木工程专业毕业设计外文文献翻译2篇XXXXXXXXX学院学士学位毕业设计(论文)英语翻译课题名称英语翻译学号学生专业、年级所在院系指导教师选题时间Fundamental Assumptions for Reinforced ConcreteBehaviorThe chief task of the structural engineer is the design of structures. Design is the determination of the general shape and all specific dimensions of a particular structure so that it will perform the function for which it is created and will safely withstand the influences that will act on it throughout useful life. These influences are primarily the loads and other forces to which it will be subjected, as well as other detrimental agents, such as temperature fluctuations, foundation settlements, and corrosive influences, Structural mechanics is one of the main tools in this process of design. As here understood, it is the body of scientific knowledge that permits one to predict with a good degree of certainly how a structure of give shape and dimensions will behave when acted upon by known forces or other mechanical influences. The chief items of behavior that are of practical interest are (1) the strength of the structure, i. e. , that magnitude of loads of a give distribution which will cause the structure to fail, and (2) the deformations, such as deflections and extent of cracking, that the structure will undergo when loaded underservice condition.The fundamental propositions on which the mechanics of reinforced concrete is based are as follows:1.The internal forces, such as bending moments, shear forces, and normal andshear stresses, at any section of a member are in equilibrium with the effect of the external loads at that section. This proposition is not an assumption but a fact, because any body or any portion thereof can be at rest only if all forces acting on it are in equilibrium.2.The strain in an embedded reinforcing bar is the same as that of thesurrounding concrete. Expressed differently, it is assumed that perfect bonding exists between concrete and steel at the interface, so that no slip can occur between the two materials. Hence, as the one deforms, so must the other. With modern deformed bars, a high degree of mechanical interlocking is provided in addition to the natural surface adhesion, so this assumption is very close to correct.3.Cross sections that were plane prior to loading continue to be plan in themember under load. Accurate measurements have shown that when a reinforced concrete member is loaded close to failure, this assumption is not absolutely accurate. However, the deviations are usually minor.4.In view of the fact the tensile strength of concrete is only a small fraction ofits compressive strength; the concrete in that part of a member which is in tension is usually cracked. While these cracks, in well-designed members, are generally so sorrow as to behardly visible, they evidently render the cracked concrete incapable of resisting tension stress whatever. This assumption is evidently a simplification of the actual situation because, in fact, concrete prior to cracking, as well as the concrete located between cracks, does resist tension stresses of small magnitude. Later in discussions of the resistance of reinforced concrete beams to shear, it will become apparent that under certain conditions this particular assumption is dispensed with and advantage is taken of the modest tensile strength that concrete can develop.5.The theory is based on the actual stress-strain relation ships and strengthproperties of the two constituent materials or some reasonable equivalent simplifications thereof. 
The fact that novelistic behavior is reflected in modern theory, that concrete is assumed to be ineffective in tension, and that the joint action of the two materials is taken into consideration results in analytical methods which are considerably more complex and also more challenging, than those that are adequate for members made of a single, substantially elastic material.These five assumptions permit one to predict by calculation the performance of reinforced concrete members only for some simple situations. Actually, the joint action of two materials as dissimilar and complicated as concrete and steel is so complex that it has not yet lent itself to purely analytical treatment. For this reason, methods of design and analysis, while using these assumptions, are very largely based on the results of extensive and continuing experimental research. They are modified and improved as additional test evidence becomes available.钢筋混凝土的基本假设作为结构工程师的主要任务是结构设计。
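Before leaving this passage, a one-line consequence of assumptions 2 and 3 in the English text above may help the reader (standard flexural theory, added for illustration): if plane sections remain plane, the strain varies linearly over the depth of the section, so with an extreme-fibre concrete strain \varepsilon_c and a neutral-axis depth c, the strain in reinforcement placed at effective depth d is

\varepsilon_s = \varepsilon_c\,\frac{d - c}{c},

and perfect bond (assumption 2) allows this same strain to be entered directly into the steel stress-strain curve to obtain the bar force.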
毕设英语翻译(英文和译文都有)
Wear 225–229 Ž1999. 354–367Wear of TiC-coated carbide tools in dry turningC.Y .H. Lim ) , S.C. Lim, K.S. LeeDepartment of Mechanical and Production Engineering, National Uni Õersity of Singapore, 10 Kent Ridge Crescent, Singapore 119260, SingaporeAbstractThis paper examines the flank and crater wear characteristics of titanium carbide ŽTiC .-coated cemented carbide tool inserts during dry turning of steel workpieces. A brief review of tool wear mechanisms is presented together with new evidence showing that wear of the TiC layer on both flank and rake faces is dominated by discrete plastic deformation, which causes the coating to be worn through to the underlying carbide substrate when machining at high cutting speeds and feed rates. Wear also occurs as a result of abrasion, as well as cracking and attrition, with the latter leading to the wearing through of the coating on the rake face under low speed conditions. When moderate speeds and feeds are used, the coating remains intact throughout the duration of testing. Wear mechanism maps linking the observed wear mechanisms to machining conditions are presented for the first time. These maps demonstrate clearly that transitions from one dominant wear mechanism to another may be related to variations in measured tool wear rates. Comparisons of the present wear maps with similar maps for uncoated carbide tools show that TiC coatings dramatically expand the range of machining conditions under which acceptable rates of tool wear might be experienced. However, the extent of improvement brought about by the coatings depends strongly on the cutting conditions, with the greatest benefits being seen at higher cutting speeds and feed rates. q 1999 Elsevier Science S.A. All rights reserved.Keywords: Wear mechanisms; Wear maps; TiC coatings; Carbide tools1. IntroductionIn the three decades since the commercial debut of coated cutting tools, these tools have gained such popular- ity that today’s metal cutting industry has come to rely almost exclusively on them. This success stems from the spectacular improvements in tool performance and cutting economies that the coatings are able to bring to traditional high-speed-steel and cemented carbide tools w 1x . At pre- sent, the most common group of coated tools consists of various combinations of titanium nitride ŽTiN ., titanium carbide ŽTiC ., titanium carbonitride ŽTiCN . and aluminium selection of the appropriate tool and machining conditions for a particular application, but also assist engineers andscientists in their development of new tool andcoating materials.This work investigates the wear of coated cementedcarbide tool inserts during dry turning under a wide range ofmachining conditions. Although the current trend is towardsthe use of multilayer coatings, an understanding of the wearcharacteristics of the individual constituent mate- rials wouldbenefit the development of such multilayers. TiC ischosen in this study, since it often forms theimportant base layer in multilayer-coatings due to its lowoxide ŽAl 2 O 3 ., deposited in a multilayer manner onto thermal mismatch with cemented carbide substratesw 5x . cemented carbide substrates. The wear behaviour of coated tools has understandably been the subject of much research but in most instances, the focus appears to be limited to relatively narrow ranges ofmachining conditions Žsee for example, Refs. 
w 2–4x ..There seems to be a lack of overviews on the wearcharacteristics of different coated tools throughoutthe entire range of their recommended cutting conditions.Such information would not only contribute to a more informedThe flank and crater wear characteristics of TiC-coated tools will be examined, and the methodology of wear maps will be applied to explore the ways in which these tools may be used more effectively.2. Experimental detailsA series of experiments was carried out in accordance with the International Standard ISO 3685-1977 ŽE.test for single-point turning tools w6x. Commerciallyavailable) Corresponding author. Tel.: q65-874-8082; fax: q65-779-1459;e-mail: mpelimc@.sg TiC-coated tool inserts of geometry ISO SNMN 120408 from Sumitomo’s AC720 coated grade were used in these0043-1648r99r$ - see front matter q 1999 Elsevier Science S.A. All rights reserved. PII: S 0 0 4 3 - 1 6 4 8 Ž9 8 .0 0 3 6 6 - 4C.Y.H. Lim et al.r W ear 225–229 (1999) 354–367 355Table 1Tool geometry used in the turning testsBack rake angley68Side rake angley68End clearance angle68Side clearance angle68End cutting edge angle158Side cutting edge angle158Nose radius0.8 mmtests. The cemented carbide substrates belonged to the ISO application group P20–P30 and these had been coated with TiC to an average thickness of 8.5 mm. Knoop microhard- ness indentation testing on the TiC coating with a load of 50 gf indicated a mean hardness of 2678 kg r mm2 . The workpiece material, a hot-rolled medium carbon steel ŽAISI 1045 equivalent.with an average hardness of 89 HRB, was used in its as-received condition. A toolholder of designation ISO CSBNR 2525M12 was employed to achieve the specified cutting geometry. Details of this tool geometry and the test configuration adopted during the turning tests may be found respectively in Table 1 and Fig.23. The chipbreaker, which formed part of the clamping mechanism of the toolholder, was fully wound back during the tests to prevent it from supporting the chip and shorten- ing the contact length.A total of 13 sets of various combinations of cutting speed and feed rate were selected for the tests, with the aim of adequately covering the recommended range of machining conditions for coated carbide tools w7–9x. The choice of these 13 conditions was also influenced in part by the need to explore the wear behaviour under certain machining conditions for which no wear data were avail- able from the open literature. This was to ensure the proper tool wear w10x. A value of 2 mm was chosen, based on the average depth of cut used in the machining tests of other researchers whose data were extracted for the wear maps. No cutting fluid was used in these experiments, as stipu- lated in ISO 3685-1977 ŽE.w6x. Each insert was tested for a total of 20 min or until catastrophic failure, whichever occurred first. The period of 20 min was chosen to limit the amount of work material consumed, while at the same time corresponding to the average tool life of between 10 to 20 min seen in industrial practice.Flank and crater wear were monitored at regular inter- vals throughout the machining experiments. The locations of these wear regions on the tool are shown in Fig. 24. According to ISO 3685-1977 ŽE. w6x, flank and crater wear were measured by the width of the flank wear land, VB, and the depth of the crater, KT, respectively. These mea- surements are illustrated in Fig. 25. 
It has been shown previously w11x that the rates of flank and crater wear may be more meaningfully portrayed by the dimensionless pa- rameters of VB and KT per unit cutting distance. These quantities are more conveniently represented by log wŽVB or KT.rŽcutting distance.x, and the experimental wear rates from the present tests are given in Table 2.3. Tool wear mechanismsSeveral studies on the mechanisms of flank and crater wear in TiC-coated carbide tools may be found in the open literature Žsee for example, Refs. w4,12,13x., but in each case, the tools were tested under a relatively narrow range of machining conditions. In this work, the worn tools were examined using scanning electron microscopy ŽSEM., after removing adherent work material by immersion in concen-construction of the wear maps later. These conditions are trated hydrochloricacid ŽHCl.. A number of metallo-listed in Table 2. The depth of cut was kept constant since it has been shown that this parameter has little effect ongraphic sections through the centre of thecrater and normal to the cutting edge were also made. TheobservedTable 2Machining conditions and experimental tool wear ratesSet Speed Žm r min.Feed Žmm r rev.Flank wear rate Crater wear rateŽVB.rŽDistance.log 10 ŽŽVB.rŽKT.rŽDistance.log 10 ŽŽKT.rŽDistance..ŽDistance..1 32.3 0.06 9.60 =10y82 207.5 0.06 3.04 =10y83 404.0 0.06 1.57 =10y74 103.9 0.2 3.03 =10y85 186.5 0.2 2.04 =10y86 302.9 0.2 6.09 =10y87 98.3 0.3 3.81 =10y88 193.0 0.3 2.39 =10y89 316.9 0.3 1.12 =10y710 31.7 0.4 9.78 =10y811 349.2 0.4 2.37 =10y712 150.1 0.5 2.33 =10y813 241.0 0.5 4.05 =10y8y7.0 1.32 =10y8y7.5 1.08 =10y9y6.8 5.65 =10y9y7.5 7.22 = 10y 10y7.7 6.79 = 10y 10y7.2 4.73 =10y9y7.4 1.27 =10y9y7.6 1.50 =10y9y7.0 1.17 =10y7y7.0 1.18 =10y8y6.6 5.72 =10y7y7.6 1.88 =10y9y7.4 5.93 =10y9y7.9y9.0y8.2y9.1y9.2y8.3y8.9y8.8y6.9y7.9y6.2y8.7y8.2C.Y.H. Lim et al.r W ear 225–229 (1999) 354–367 356appear plastically deformed in the direction of workpiecerotation ŽFig. 1b.. As cutting continues to the end of the20-min test, a ‘ridge-and-furrow’ topography isformedŽFig. 1c.. This ridge-and-furrow appearance has previouslybeen reported w4,13,14x, and the mechanismresponsibleFig. 1. Topography of flank face Ža.when new, Žb.after 3 mins, and Žc.after 20 mins of cutting at 103.9 m r min and 0.2 mm r rev,showing original dimple-like surface features being worn bydiscrete plastic deformation to give a ridge-and-furrow appearance.wear mechanisms are discussed below, with an attempt tocorrelate the current findings with published reportsin order to present a better picture of tool wear across a widerrange of cutting conditions.3.1. Flank wear mechanismsThe unworn TiC coating on the flank face of the newtool shown in Fig. 1a exhibits ‘dimple-like’ surface fea-tures. After 3 min of machining, however, these featuresC.Y.H. Lim et al.r W ear 225–229 (1999) 354–367357was termed ‘discrete plastic deformation’ since the depth ofthe gradual thinning of the coating worn by discrete plastic deforma- tion. deformation is limited to 1 mm or less of the coating surfacew4x, as seen in Fig. 1c.It has been demonstrated that during machining, highcompressive stresses and intimate contact between theatomically clean surfaces of the newly-machined work- pieceand the flank face of TiC-coated carbide tools results inseizure over much of the tool–work contact area w15x. 
ŽItshould be pointed out that the term ‘seizure’used in themachining context differs somewhat from the usual tribo-logical understanding in which the real and nominal con- tactareas are equal..However, cutting is able to continue as thework material moves by shear in the layers of the workadjacent to this interface w16x. Such conditions gener- ate highshear stresses on the tool surface that plastically deform and‘smear’ the original dimple-like features of the coating in thedirection of workpiece rotation. With time, these deformeddimples flow and merge into the ridge- and-furrowtopography shown in Fig. 1c. It has been proposed that thisprocess culminates in the ductile frac- ture of tiny fragmentsof the coating, which are then swept away by the passingwork w4x.Discrete plastic deformation gradually reduces thethickness of the TiC coating during machining. The cross-sectional view of the flank wear land in Fig. 2 shows a‘depression’ in the coating worn by discrete plastic defor-mation. As wear progresses, localized areas of the underly-ing carbide substrate become exposed ŽFig. 3a., and even-tually merge into a continuous band of exposed substrate ŽFig.3b., an observation shared by several other workers w4,17–20x.Fig. 2. Section through flank wear land after 20 min of cutting at 150.1m r min and 0.5 mm r rev, showing a ‘depression’ in the TiC coating due toC.Y.H. Lim et al.r W ear 225–229 (1999) 354–367 358Fig. 3. Flank wear land showing coating removal by discrete plastic deformation, Ža.beginning with the appearance of holes and voids, and Žb. followed by the merging of voids to form a continuous band of exposed substrate. deformation around the tool edge w22x. This may contribute to cracking due to the inability of the brittle TiC layer to conform to this deformation. Although cracking of the coating does not directly result in flank wear, the cracks compromise the integrity of the coating and may become preferential sites for coating removal. Fig. 4 shows an example of how tiny fragments of the coating have been ‘ plucked out’ in the vicinity of cracks. This could acceler- ate coating wear and hasten the exposure of the substrate.The severity of cracking and attrition appears to depend on the cutting speed and feed rate. At speeds below 40 m r min, no signs of cracking or attrition are seen. At moderate speeds and feeds, a few fine cracks are visible, but attrition is not evident in most cases. Cracks become more abundant at high speeds and feeds, accompanied by slight attrition of the coating. These observations lend support to the earlier suggestion that cracking is related to the compressive deformation of the tool edge. Higher cutting speeds increase tool temperatures, thus causing greater softening of the tool nose, while a higher feed imposes higher compressive stresses on the tool edge. Both factors result in greater deformation of the tool nose, which increases the extent of cracking. However, under the conditions of the present tests, cracking and attrition of the TiC coating on the flank does not appear to play a dominant role in tool wear. There is also no evidence to support the suggestion that hard coating fragments re- moved via the attrition process contribute significantly to wear by abrasion w20x.Flank wear of TiC-coated carbide tools has frequently been attributed to the dominance of abrasive wear Žsee for example, Refs. w12,14,20,23x.. 
Ridge-and-furrow wear surfaces are seen under all the conditions used in the present tests, but they appear more pronounced at higher cutting speeds and feed rates. It has been shown that increases in speed and feed lead to a rise in temperatures at the tool flank [21]. This causes the TiC layer to soften [22], rendering it more susceptible to plastic deformation. In tools tested at high speeds and feeds, the coating is worn away rapidly by discrete plastic deformation in a matter of minutes, exposing the carbide substrate beneath.

Other features observed on the worn tool flanks are fine cracks parallel to the tool edge and perpendicular to the direction of workpiece rotation (Fig. 4). These are found on all tools except those tested at very low cutting speeds (less than 40 m/min). Careful examination of new tools shows that such cracks are not present prior to testing. Furthermore, the cracks are confined to the flank wear land, the only region that is in contact with the workpiece during machining. These findings suggest that the cutting process is responsible for the formation of these cracks. It is believed that the high shear stresses that cause discrete plastic deformation also lead to cracking within the TiC coating. In addition, the high compressive loads imposed on the tool edge during cutting are known to cause bulk […].

Fig. 4. Flank wear land showing fine cracks in the TiC coating, and the attrition of coating particles in the vicinity of the cracks, when cutting at 316.9 m/min and 0.3 mm/rev.

It seems unlikely, though, that abrasion would be a major mechanism of coating wear, since the TiC coating has a hardness equal to that of the hard inclusions in the workpiece [22]. There is also little evidence of deep abrasion grooves on the coating, which might indicate the dominance of abrasive wear. At low magnifications, the ridge-and-furrow topography of discrete plastic deformation could perhaps be mistaken for evidence of abrasion, and it is usually only upon closer examination that the difference between the two mechanisms may be discerned.

On some tools, however, there appear to be faint grooves scratched on the surface of the coating. The tool shown in Fig. 5 has been tested at a very low cutting speed and feed rate. The worn surface appears smooth, with numerous shallow but sharp grooves. These grooves do not resemble the coarser ridge-and-furrow features of discrete plastic deformation. Even on surfaces that do exhibit ridge-and-furrow formation, such as the one in Fig. 6, faint sharp lines may also be seen on top of the ridges. It is possible that these grooves are the result of abrasion by favourably-oriented inclusions in the workpiece that are able to plough into the TiC coating [24]. Abrasion is probably significant only at lower speeds and feeds, where discrete plastic deformation involves very small strains and the wear rate is low [4]. At higher speeds and feeds, the effect that such abrasion has on the overall wear of the coating on the flank face is likely to be small, since the grooves are very shallow, especially when compared with the depth of the furrows formed by discrete plastic deformation. TiC has been found to be the most resistant to abrasion among the common coating materials (see, for example, Refs. [13,14,20]), so it is hardly surprising that abrasion is not a major wear mechanism in the present investigation.

Fig. 5. Flank wear land after 20 min of cutting at 32.3 m/min and 0.06 mm/rev, showing sharp, shallow grooves due to abrasion.

Fig. 6. Flank wear land after 20 min of cutting at 207.5 m/min and 0.06 mm/rev, showing sharp, shallow abrasion grooves on top of a ridge-and-furrow surface.

For most of the tools, the TiC coatings remained intact throughout the entire duration of the experiment. Only a few tools tested under high-speed or high-feed conditions suffered coating loss through discrete plastic deformation and attrition. Once an area of substrate has been exposed, the passing work 'impinges' on the lower border of the coating adjacent to these exposed areas of substrate and slowly chips away at the coating, causing the flank wear land to grow downwards. This process leaves a very uneven border at the bottom of the flank wear land, as seen in Fig. 7. The growth of the exposed areas of substrate through such a chipping mechanism has also been reported elsewhere [12,25].

Fig. 7. Flank wear land after 20 min of cutting at 207.5 m/min and 0.06 mm/rev, showing the uneven border between the TiC coating and the exposed carbide substrate as a result of chipping of the coating.

Examination of the tools in their as-machined condition shows that regions where the substrate has become exposed are completely covered by work material, which may be removed only by dissolving it in acid. This suggests that there has been intimate contact between the work material and the exposed carbide substrate during machining. Removal of the adherent work material reveals a smooth topography, with ridges and grooves on some of the carbide grains (Fig. 8). This feature has been commonly associated with diffusion wear in cemented carbide tools [22,26]. However, it is also likely that the discrete plastic deformation mechanism observed on the TiC coating continues to wear away the substrate as well. Such a process could also contribute to the smoothly worn appearance.

Fig. 8. Flank wear land after 2 min of cutting at 404 m/min and 0.06 mm/rev, showing a smooth topography, as well as carbide grains with randomly-oriented ridges and grooves.

3.2. Crater wear mechanisms

The ridge-and-furrow topography associated with discrete plastic deformation on the flank face is also seen on the rake face. In the comparison of a new and a worn tool in Fig. 9, it is apparent that the original dimple-like features on the coating have been deformed in the chip flow direction. These observations suggest that discrete plastic deformation is also a dominant mechanism in coating wear on the rake face. As in the case of flank wear described earlier, discrete plastic deformation on the rake face becomes more pronounced as cutting speed and feed rate are raised. These findings agree with a previous suggestion that the severity of this wear process is governed by temperature and shear stress, which rise with increasing speed and feed [4]. The coating on the rake face is gradually worn to expose the carbide substrate in a manner similar to that seen on the flank face.

Fig. 9. Topography of the rake face (a) when new, and (b) after 3 min of cutting at 302.9 m/min and 0.2 mm/rev, showing the original dimple-like surface features being worn by discrete plastic deformation to give a ridge-and-furrow appearance.

The micrograph in Fig. 10 shows extensive cracking on the rake face parallel to the cutting edge and perpendicular to the chip flow direction. In the vicinity of these cracks, tiny pieces of the TiC coating have been plucked out, leaving jagged, uneven edges. The cracks here resemble the cracks seen on the flank face. It is believed that in moving the chip over the tool under a condition of seizure [15], the high shear stresses generated on the tool surface cause the coating to fracture. Unlike the case of flank wear, in which cracking is more severe at higher speeds and feeds, this phenomenon is observed on the rake face only at low cutting speeds. This may be explained by considering the different temperatures experienced on the rake face at low and at high speeds. At low speeds, the temperature during cutting is lower and the hardness of the TiC coating remains relatively high; the coating is therefore likely to be brittle and more susceptible to cracking. With rising speed, the higher temperatures cause the hardness of the TiC coating to drop [22], and the coating becomes more ductile. Consequently, the shear stresses imposed on the coating surface result in discrete plastic deformation of the coating rather than cracking.

Fig. 10. Rake face after cutting for 20 min at 32.3 m/min and 0.06 mm/rev, showing cracks parallel to the cutting edge and perpendicular to the chip flow direction.

The cracking of the TiC coating on the rake face facilitates removal of the coating by attrition. The view of the rake face at low magnification in Fig. 11 shows large areas where the coating has been removed by such an attrition process. The extensive attrition at low speeds and feeds could also be caused by a less laminar and more intermittent flow of the chip over the rake face. Sudden and local tensile stresses imposed by the unevenly flowing work material would tear away fragments of the coating. Uneven flow of the chip is associated with the occurrence of a built-up edge (BUE). Although no adherent BUE was found on any tool after cutting, this does not rule out its occurrence during machining. An earlier report described the formation of a BUE when machining at speeds between 15 m/min and 40 m/min [27]. The same study also noted that the adhesion between the BUE and the coated tool edge is very weak, and that the BUE is readily loosened during metallographic preparation. Here, a comparison of the surface finish of the workpiece after machining at two different speeds (32.3 m/min and 103.9 m/min) reveals that the workpiece is a lot rougher at the lower speed: 6.35 μm Ra as compared with 2.46 μm Ra. A poor surface finish has been found to be a good indication of the formation of a BUE during machining [22].

Fig. 11. Rake face after cutting for 20 min at 32.3 m/min and 0.06 mm/rev, showing large areas where the TiC coating has been removed via cracking and attrition.

The TiC coating is worn away fairly rapidly at high speeds and feeds via discrete plastic deformation. When this happens, the crater quickly fills with work material. Dissolving the adherent work material, or examining the tool in cross-section, reveals a deep crater. Careful study of the crater in cross-section shows the occasional protruding carbide at the tool–work interface (Fig. 12). This is a characteristic often associated with diffusion wear of carbide tools, in which it is believed that carbides that are less soluble in the work material are left protruding while the surrounding tool material is worn away at a faster rate [26]. However, the discrete plastic deformation mechanism could also account for such features, since the softer matrix would be deformed to a greater extent and worn away more quickly than the harder carbides. Removing the work material in the crater with acid reveals a smooth topography with carbide grains that have randomly-oriented grooves on their surfaces (Fig. 13). This appearance is very similar to that of the flank wear land shown in Fig. 8.

Fig. 12. Section through the crater after cutting for 30 s at 307.8 m/min and 0.6 mm/rev, showing protruding carbide grains at the tool–work interface.

Fig. 13. Crater bottom after 30 s of cutting at 241 m/min and 0.5 mm/rev, showing a smooth topography, as well as carbide grains with randomly-oriented ridges and grooves.

4. Wear maps for TiC-coated carbide tools

Wear maps are useful tools for presenting the overall behaviour of wearing systems in a more meaningful and complete fashion [28–30]. Research on metals [31–34], ceramics (see, for example, Ref. [35]) and some cutting tools [36–38] has shown that such maps facilitate the study and understanding of the relationships between measured wear rates and the dominant wear mechanisms over a wide range of operating conditions. The wear-map approach is adopted in this work to examine the wear characteristics of the TiC-coated carbide tools.

The construction of a wear map first requires the extensive gathering of wear data from the technical literature for the particular wear system of interest. In this case, information relating to flank and crater wear of TiC-coated carbide tools during dry turning of steel workpieces was collected. The axes of the map are then decided: usually two (sometimes three) operating parameters of the system are selected to form a plane (or space) within which the empirical wear data are presented. Here, the same axes as those employed previously [11,36–38], namely cutting speed (in m/min) and feed rate (in mm/rev), are used.

Fig. 14. Map for flank wear of TiC-coated carbide inserts during dry turning. The regions where the different ranges of wear rates are observed are shaded accordingly. The safety zone is the region where flank wear rates are the lowest; the considerably larger least-wear regime is also indicated.

4.1. Wear-rate maps

Maps showing the flank and crater wear rates of TiC-coated carbide tools are given in Figs. 14 and 15. These maps, which have been introduced earlier [38], are based on the results of the present cutting tests, together with similar data from 35 other sources. The boundaries on the maps define different regions within which wear rates of similar ranges of values are contained. Three major wear regions are demarcated on the flank wear map: the safety zone (dimensionless wear rates < −7.5), the moderate-wear region (−7.0 to −7.4), and the high-wear zone (> −6.9). On the crater wear map, the wear rates span a wider range than in the case of flank wear, and these are divided into five regimes: the safety zone (< −8.5), a least-wear zone (−8.0 to −8.4), and three other higher-wear regions (> −7.9).

The boundaries on the wear maps reflect the influence of cutting speed and feed rate on the wear of the inserts. Under high-speed, high-feed conditions, the protective TiC coating is worn away quickly to expose the substrate, which has a much lower wear resistance and thus gives rise to the higher wear rates seen. The increased wear rate at low speeds is probably due to the presence of a BUE occurring at those speeds. Information on the wear of TiC-coated tools in the speed range of BUE formation is scanty, since such low speeds are not within the normal machining range of these tools. It is believed, however, that the BUE that forms on TiC-coated inserts during low-speed machining is very unstable [27]. The unstable nature of the BUE causes it to break off and reform over and over again, […]
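Purely as an illustration added in this translation (not part of the paper), the regime boundaries quoted above can be restated as a small classification routine. The class and method names are invented, the thresholds simply repeat the dimensionless wear-rate ranges given in the text, and values falling between the quoted ranges are assigned to the nearer lower regime.

public class WearMapRegimes {

    // Flank wear map: safety < -7.5, moderate -7.0 to -7.4, high > -6.9.
    static String flankRegime(double logWearRate) {
        if (logWearRate < -7.5) return "safety zone";
        if (logWearRate <= -7.0) return "moderate-wear region";
        return "high-wear zone";
    }

    // Crater wear map: safety < -8.5, least-wear -8.0 to -8.4, higher-wear > -7.9.
    // The map distinguishes three higher-wear regions; this sketch lumps them together.
    static String craterRegime(double logWearRate) {
        if (logWearRate < -8.5) return "safety zone";
        if (logWearRate <= -8.0) return "least-wear zone";
        return "higher-wear region";
    }

    public static void main(String[] args) {
        System.out.println(flankRegime(-7.8));   // safety zone
        System.out.println(craterRegime(-7.6));  // higher-wear region
    }
}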
Graduation design (thesis) foreign literature translation
xxxx University, xxx College. Graduation design (thesis) foreign literature translation. Department: xxxx. Major: xxxx. Student: xxxx. Student number: xxxx. Supervisor: xxxx. Title: xxxx. March 2013.

Introducing the Spring Framework

The Spring Framework is a popular open source application framework that addresses many of the issues outlined in this book. This chapter will introduce the basic ideas of Spring and discuss the central "bean factory" lightweight Inversion-of-Control (IoC) container in detail.

Spring makes it particularly easy to implement lightweight, yet extensible, J2EE architectures. It provides an out-of-the-box implementation of the fundamental architectural building blocks we recommend. Spring provides a consistent way of structuring your applications, and provides numerous middle tier features that can make J2EE development significantly easier and more flexible than in traditional approaches.

The basic motivations for Spring are:

To address areas not well served by other frameworks. There are numerous good solutions to specific areas of J2EE infrastructure: web frameworks, persistence solutions, remoting tools, and so on. However, integrating these tools into a comprehensive architecture can involve significant effort, and can become a burden. Spring aims to provide an end-to-end solution, integrating specialized frameworks into a coherent overall infrastructure. Spring also addresses some areas that other frameworks don't. For example, few frameworks address generic transaction management, data access object implementation, and gluing all those things together into an application, while still allowing for best-of-breed choice in each area. Hence we term Spring an application framework, rather than a web framework, IoC or AOP framework, or even middle tier framework.

To allow for easy adoption. A framework should be cleanly layered, allowing the use of individual features without imposing a whole worldview on the application. Many Spring features, such as the JDBC abstraction layer or Hibernate integration, can be used in a library style or as part of the Spring end-to-end solution.

To deliver ease of use. As we've noted, J2EE out of the box is relatively hard to use to solve many common problems. A good infrastructure framework should make simple tasks simple to achieve, without forcing tradeoffs for future complex requirements (like distributed transactions) on the application developer. It should allow developers to leverage J2EE services such as JTA where appropriate, but to avoid dependence on them in cases where they are unnecessarily complex.

To make it easier to apply best practices. Spring aims to reduce the cost of adhering to best practices such as programming to interfaces, rather than classes, almost to zero. However, it leaves the choice of architectural style to the developer.

Non-invasiveness. Application objects should have minimal dependence on the framework. If leveraging a specific Spring feature, an object should depend only on that particular feature, whether by implementing a callback interface or using the framework as a class library. IoC and AOP are the key enabling technologies for avoiding framework dependence.

Consistent configuration. A good infrastructure framework should keep application configuration flexible and consistent, avoiding the need for custom singletons and factories. A single style should be applicable to all configuration needs, from the middle tier to web controllers.

Ease of testing. Testing either whole applications or individual application classes in unit tests should be as easy as possible. Replacing resources or application objects with mock objects should be straightforward.
To allow for extensibility. Because Spring is itself based on interfaces, rather than classes, it is easy to extend or customize it. Many Spring components use strategy interfaces, allowing easy customization.

A Layered Application Framework

Chapter 6 introduced the Spring Framework as a lightweight container, competing with IoC containers such as PicoContainer. While the Spring lightweight container for JavaBeans is a core concept, this is just the foundation for a solution for all middleware layers.

Basic Building Blocks

Spring is a full-featured application framework that can be leveraged at many levels. It consists of multiple sub-frameworks that are fairly independent but still integrate closely into a one-stop shop, if desired. The key areas are:

Bean factory. The Spring lightweight IoC container, capable of configuring and wiring up JavaBeans and most plain Java objects, removing the need for custom singletons and ad hoc configuration. Various out-of-the-box implementations include an XML-based bean factory. The lightweight IoC container and its Dependency Injection capabilities will be the main focus of this chapter.

Application context. A Spring application context extends the bean factory concept by adding support for message sources and resource loading, and providing hooks into existing environments. Various out-of-the-box implementations include standalone application contexts and an XML-based web application context.

AOP framework. The Spring AOP framework provides AOP support for method interception on any class managed by a Spring lightweight container. It supports easy proxying of beans in a bean factory, seamlessly weaving in interceptors and other advice at runtime. Chapter 8 discusses the Spring AOP framework in detail. The main use of the Spring AOP framework is to provide declarative enterprise services for POJOs.

Auto-proxying. Spring provides a higher level of abstraction over the AOP framework and low-level services, which offers similar ease-of-use to .NET within a J2EE context. In particular, the provision of declarative enterprise services can be driven by source-level metadata.

Transaction management. Spring provides a generic transaction management infrastructure, with pluggable transaction strategies (such as JTA and JDBC) and various means for demarcating transactions in applications (a short sketch of the programmatic option appears after this overview). Chapter 9 discusses its rationale and the power and flexibility that it offers.

DAO abstraction. Spring defines a set of generic data access exceptions that can be used for creating generic DAO interfaces that throw meaningful exceptions independent of the underlying persistence mechanism. Chapter 10 illustrates the Spring support for DAOs in more detail, examining JDBC, JDO, and Hibernate as implementation strategies.

JDBC support. Spring offers two levels of JDBC abstraction that significantly ease the effort of writing JDBC-based DAOs: the org.springframework.jdbc.core package (a template/callback approach) and the org.springframework.jdbc.object package (modeling RDBMS operations as reusable objects). Using the Spring JDBC packages can deliver much greater productivity and eliminate the potential for common errors such as leaked connections, compared with direct use of JDBC. The Spring JDBC abstraction integrates with the transaction and DAO abstractions.
Integration with O/R mapping tools. Spring provides support classes for O/R mapping tools like Hibernate, JDO, and iBATIS Database Layer to simplify resource setup, acquisition, and release, and to integrate with the overall transaction and DAO abstractions. These integration packages allow applications to dispense with custom ThreadLocal sessions and native transaction handling, regardless of the underlying O/R mapping approach they work with.

Web MVC framework. Spring provides a clean implementation of web MVC, consistent with the JavaBean configuration approach. The Spring web framework enables web controllers to be configured within an IoC container, eliminating the need to write any custom code to access business layer services. It provides a generic DispatcherServlet and out-of-the-box controller classes for command and form handling. Request-to-controller mapping, view resolution, locale resolution, and other important services are all pluggable, making the framework highly extensible. The web framework is designed to work not only with JSP, but with any view technology, such as Velocity—without the need for additional bridges. Chapter 13 discusses web tier design and the Spring web MVC framework in detail.

Remoting support. Spring provides a thin abstraction layer for accessing remote services without hard-coded lookups, and for exposing Spring-managed application beans as remote services. Out-of-the-box support is included for RMI, Caucho's Hessian and Burlap web service protocols, and WSDL Web Services via JAX-RPC. Chapter 11 discusses lightweight remoting.

While Spring addresses areas as diverse as transaction management and web MVC, it uses a consistent approach everywhere. Once you have learned the basic configuration style, you will be able to apply it in many areas. Resources, middle tier objects, and web components are all set up using the same bean configuration mechanism. You can combine your entire configuration in one single bean definition file or split it by application modules or layers; the choice is up to you as the application developer. There is no need for diverse configuration files in a variety of formats, spread out across the application.

Spring on J2EE

Although many parts of Spring can be used in any kind of Java environment, it is primarily a J2EE application framework. For example, there are convenience classes for linking JNDI resources into a bean factory, such as JDBC DataSources and EJBs, and integration with JTA for distributed transaction management. In most cases, application objects do not need to work with J2EE APIs directly, improving reusability and meaning that there is no need to write verbose, hard-to-test JNDI lookups.

Thus Spring allows application code to seamlessly integrate into a J2EE environment without being unnecessarily tied to it. You can build upon J2EE services where it makes sense for your application, and choose lighter-weight solutions if there are no complex requirements. For example, you need to use JTA as the transaction strategy only if you face distributed transaction requirements. For a single database, there are alternative strategies that do not depend on a J2EE container. Switching between those transaction strategies is merely a matter of configuration; Spring's consistent abstraction avoids any need to change application code.
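As a minimal illustration of this point (a sketch added for this translation, not code from the book), the following class uses Spring's programmatic TransactionTemplate. The AccountService class, its transfer() method, and the DAO work inside the callback are assumed names; the same code runs unchanged whether the PlatformTransactionManager configured behind the interface is JDBC-based or JTA-based.

import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.TransactionStatus;
import org.springframework.transaction.support.TransactionCallback;
import org.springframework.transaction.support.TransactionTemplate;

// Hypothetical service: the transaction strategy is decided purely by which
// PlatformTransactionManager implementation the container injects.
public class AccountService {

    private final TransactionTemplate transactionTemplate;

    public AccountService(PlatformTransactionManager transactionManager) {
        this.transactionTemplate = new TransactionTemplate(transactionManager);
    }

    public void transfer(final String fromAccount, final String toAccount, final double amount) {
        transactionTemplate.execute(new TransactionCallback() {
            public Object doInTransaction(TransactionStatus status) {
                // debit fromAccount and credit toAccount via DAOs here;
                // an unchecked exception thrown from this block rolls the transaction back
                return null;
            }
        });
    }
}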
Spring offers support for accessing EJBs. This is an important feature (and relevant even in a book on "J2EE without EJB") because the use of dynamic proxies as codeless client-side business delegates means that Spring can make using a local stateless session EJB an implementation-level, rather than a fundamental architectural, choice. Thus if you want to use EJB, you can, within a consistent architecture; however, you do not need to make EJB the cornerstone of your architecture. This Spring feature can make developing EJB applications significantly faster, because there is no need to write custom code in service locators or business delegates. Testing EJB client code is also much easier, because it only depends on the EJB's Business Methods interface (which is not EJB-specific), not on JNDI or the EJB API.

Spring also provides support for implementing EJBs, in the form of convenience superclasses for EJB implementation classes, which load a Spring lightweight container based on an environment variable specified in the ejb-jar.xml deployment descriptor. This is a powerful and convenient way of implementing SLSBs or MDBs that are facades for fine-grained POJOs: a best practice if you do choose to implement an EJB application. Using this Spring feature does not conflict with EJB in any way—it merely simplifies following good practice.

The main aim of Spring is to make J2EE easier to use and promote good programming practice. It does not reinvent the wheel; thus you'll find no logging packages in Spring, no connection pools, no distributed transaction coordinator. All these features are provided by other open source projects—such as Jakarta Commons Logging (which Spring uses for all its log output), Jakarta Commons DBCP (which can be used as a local DataSource), and ObjectWeb JOTM (which can be used as a transaction manager)—or by your J2EE application server. For the same reason, Spring doesn't provide an O/R mapping layer: there are good solutions for this problem area, such as Hibernate and JDO.

Spring does aim to make existing technologies easier to use. For example, although Spring is not in the business of low-level transaction coordination, it does provide an abstraction layer over JTA or any other transaction strategy. Spring is also popular as middle tier infrastructure for Hibernate, because it provides solutions to many common issues like SessionFactory setup, ThreadLocal sessions, and exception handling. With the Spring HibernateTemplate class, implementation methods of Hibernate DAOs can be reduced to one-liners while properly participating in transactions.

The Spring Framework does not aim to replace J2EE middle tier services as a whole. It is an application framework that makes accessing low-level J2EE container services easier. Furthermore, it offers lightweight alternatives for certain J2EE services in some scenarios, such as a JDBC-based transaction strategy instead of JTA when just working with a single database. Essentially, Spring enables you to write applications that scale down as well as up.
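To make the HibernateTemplate point concrete, here is a minimal sketch added for this translation (not taken from the book). The DAO class, the Product mapping, and the query string are assumptions, and the package name reflects the Spring 1.x support for Hibernate 2 (later versions use org.springframework.orm.hibernate3).

import java.util.List;
import org.springframework.orm.hibernate.support.HibernateDaoSupport;

// One-line Hibernate DAO method: session handling, exception translation and
// transaction participation are taken care of by the template, not by this code.
public class HibernateProductDao extends HibernateDaoSupport {

    public List loadProductsByCategory(String category) {
        return getHibernateTemplate().find(
                "from Product p where p.category = ?", category);
    }
}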
Spring for Web Applications

A typical usage of Spring in a J2EE environment is to serve as the backbone for the logical middle tier of a J2EE web application. Spring provides a web application context concept, a powerful lightweight IoC container that seamlessly adapts to a web environment: it can be accessed from any kind of web tier, whether Struts, WebWork, Tapestry, JSF, Spring web MVC, or a custom solution.

The following code shows a typical example of such a web application context. In a typical Spring web app, an applicationContext.xml file will reside in the WEB-INF directory, containing bean definitions according to the "spring-beans" DTD. In such a bean definition XML file, business objects and resources are defined, for example, a "myDataSource" bean, a "myInventoryManager" bean, and a "myProductManager" bean. Spring takes care of their configuration, their wiring up, and their lifecycle.

<beans>
  <bean id="myDataSource"
        class="org.springframework.jdbc.datasource.DriverManagerDataSource">
    <property name="driverClassName">
      <value>com.mysql.jdbc.Driver</value>
    </property>
    <property name="url">
      <value>jdbc:mysql:myds</value>
    </property>
  </bean>

  <bean id="myInventoryManager" class="ebusiness.DefaultInventoryManager">
    <property name="dataSource">
      <ref bean="myDataSource"/>
    </property>
  </bean>

  <bean id="myProductManager" class="ebusiness.DefaultProductManager">
    <property name="inventoryManager">
      <ref bean="myInventoryManager"/>
    </property>
    <property name="retrieveCurrentStock">
      <value>true</value>
    </property>
  </bean>
</beans>

By default, all such beans have "singleton" scope: one instance per context. The "myInventoryManager" bean will automatically be wired up with the defined DataSource, while "myProductManager" will in turn receive a reference to the "myInventoryManager" bean. Those objects (traditionally called "beans" in Spring terminology) need to expose only the corresponding bean properties or constructor arguments (as you'll see later in this chapter); they do not have to perform any custom lookups.

A root web application context will be loaded by a ContextLoaderListener that is defined in web.xml as follows:

<web-app>
  <listener>
    <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
  </listener>
  ...
</web-app>

After initialization of the web app, the root web application context will be available as a ServletContext attribute to the whole web application, in the usual manner. It can be retrieved from there easily, by fetching the corresponding attribute or via a convenience method in org.springframework.web.context.support.WebApplicationContextUtils. This means that the application context will be available in any web resource with access to the ServletContext, like a Servlet, Filter, JSP, or Struts Action, as follows:

WebApplicationContext wac = WebApplicationContextUtils.getWebApplicationContext(servletContext);

The Spring web MVC framework allows web controllers to be defined as JavaBeans in child application contexts, one per dispatcher servlet. Such controllers can express dependencies on beans in the root application context via simple bean references. Therefore, typical Spring web MVC applications never need to perform a manual lookup of an application context or bean factory, or do any other form of lookup. Neither do other client objects that are managed by an application context themselves: they can receive collaborating objects as bean references.
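For illustration only, the following sketch (added for this translation, not from the book) shows the kind of plain JavaBean that the "myProductManager" definition above configures. The InventoryManager interface and the method bodies are invented here; in the book's example, DefaultInventoryManager would implement such an interface. Note that nothing in the class depends on Spring: the container supplies its collaborators through ordinary setter methods.

// DefaultProductManager.java -- a plain JavaBean with no Spring dependency.
interface InventoryManager {
    int getStock(String productId);
}

public class DefaultProductManager {

    private InventoryManager inventoryManager;   // wired via <ref bean="myInventoryManager"/>
    private boolean retrieveCurrentStock;        // wired via <value>true</value>

    public void setInventoryManager(InventoryManager inventoryManager) {
        this.inventoryManager = inventoryManager;
    }

    public void setRetrieveCurrentStock(boolean retrieveCurrentStock) {
        this.retrieveCurrentStock = retrieveCurrentStock;
    }

    public int currentStockFor(String productId) {
        return retrieveCurrentStock ? inventoryManager.getStock(productId) : -1;
    }
}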
The Core Bean Factory

In the previous section, we have seen a typical usage of the Spring IoC container in a web environment: the provided convenience classes allow for seamless integration without having to worry about low-level container details. Nevertheless, it does help to look at the inner workings to understand how Spring manages the container. Therefore, we will now look at the Spring bean container in more detail, starting at the lowest building block: the bean factory. Later, we'll continue with resource setup and details on the application context concept.

One of the main incentives for a lightweight container is to dispense with the multitude of custom factories and singletons often found in J2EE applications. The Spring bean factory provides one consistent way to set up any number of application objects, whether coarse-grained components or fine-grained business objects. Applying reflection and Dependency Injection, the bean factory can host components that do not need to be aware of Spring at all. Hence we call Spring a non-invasive application framework.

Fundamental Interfaces

The fundamental lightweight container interface is org.springframework.beans.factory.BeanFactory. This is a simple interface, which is easy to implement directly in the unlikely case that none of the implementations provided with Spring suffices. The BeanFactory interface offers two getBean() methods for looking up bean instances by String name, with the option to check for a required type (and throw an exception if there is a type mismatch).

public interface BeanFactory {
    Object getBean(String name) throws BeansException;
    Object getBean(String name, Class requiredType) throws BeansException;
    boolean containsBean(String name);
    boolean isSingleton(String name) throws NoSuchBeanDefinitionException;
    String[] getAliases(String name) throws NoSuchBeanDefinitionException;
}

The isSingleton() method allows calling code to check whether the specified name represents a singleton or prototype bean definition. In the case of a singleton bean, all calls to the getBean() method will return the same object instance. In the case of a prototype bean, each call to getBean() returns an independent object instance, configured identically.

The getAliases() method will return alias names defined for the given bean name, if any. This mechanism is used to provide more descriptive alternative names for beans than are permitted in certain bean factory storage representations, such as XML id attributes.

The methods in most BeanFactory implementations are aware of a hierarchy that the implementation may be part of. If a bean is not found in the current factory, the parent factory will be asked, up until the root factory. From the point of view of a caller, all factories in such a hierarchy will appear to be merged into one. Bean definitions in ancestor contexts are visible to descendant contexts, but not the reverse.

All exceptions thrown by the BeanFactory interface and sub-interfaces extend org.springframework.beans.BeansException, and are unchecked. This reflects the fact that low-level configuration problems are not usually recoverable: hence, application developers can choose to write code to recover from such failures if they wish to, but should not be forced to write code in the majority of cases where configuration failure is fatal.

Most implementations of the BeanFactory interface do not merely provide a registry of objects by name; they provide rich support for configuring those objects using IoC. For example, they manage dependencies between managed objects, as well as simple properties. In the next section, we'll look at how such configuration can be expressed in a simple and intuitive XML structure.
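As a short usage sketch added for this translation (the file and bean names simply reuse the earlier example; this is not code from the book), a bean factory can be bootstrapped from an XML definition and queried directly. In a web application this bootstrapping is normally done by the container itself rather than by application code.

import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.xml.XmlBeanFactory;
import org.springframework.core.io.ClassPathResource;

// Load bean definitions from the classpath and look up a configured bean by name.
public class BeanFactoryDemo {
    public static void main(String[] args) {
        BeanFactory factory =
                new XmlBeanFactory(new ClassPathResource("applicationContext.xml"));
        Object productManager = factory.getBean("myProductManager");
        System.out.println("Loaded bean: " + productManager.getClass().getName());
    }
}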
The sub-interface org.springframework.beans.factory.ListableBeanFactory supports listing beans in a factory. It provides methods to retrieve the number of beans defined, the names of all beans, and the names of beans that are instances of a given type:

public interface ListableBeanFactory extends BeanFactory {
    int getBeanDefinitionCount();
    String[] getBeanDefinitionNames();
    String[] getBeanDefinitionNames(Class type);
    boolean containsBeanDefinition(String name);
    Map getBeansOfType(Class type, boolean includePrototypes,
                       boolean includeFactoryBeans) throws BeansException;
}

The ability to obtain such information about the objects managed by a ListableBeanFactory can be used to implement objects that work with a set of other objects known only at runtime.

In contrast to the BeanFactory interface, the methods in ListableBeanFactory apply to the current factory instance and do not take account of a hierarchy that the factory may be part of. The org.springframework.beans.factory.BeanFactoryUtils class provides analogous methods that traverse an entire factory hierarchy.

There are various ways to leverage a Spring bean factory, ranging from simple bean configuration to J2EE resource integration and AOP proxy generation. The bean factory is the central, consistent way of setting up any kind of application objects in Spring, whether DAOs, business objects, or web controllers. Note that application objects seldom need to work with the BeanFactory interface directly, but are usually configured and wired by a factory without the need for any Spring-specific code.

For standalone usage, the Spring distribution provides a tiny spring-core.jar file that can be embedded in any kind of application. Its only third-party dependency beyond J2SE 1.3 (plus JAXP for XML parsing) is the Jakarta Commons Logging API.

The bean factory is the core of Spring and the foundation for many other services that the framework offers. Nevertheless, the bean factory can easily be used standalone if no other Spring services are required.

Source: the Internet. Introducing the Spring Framework (Chinese translation): the Spring Framework is a popular open source application framework that can solve many problems.
Power-supply graduation design (including foreign literature and Chinese translation)
Design of the Protection and Safeguard Systems for the Substation of an Iron and Steel Enterprise

1 Introduction

1.1 Development of substation relay protection

The substation is an important component of the power system and directly affects the safe and economical operation of the whole system. As the intermediate link between power plants and users, it performs the transformation and distribution of electric energy. The main electrical connection scheme is the key element of a power plant or substation: its design directly determines the selection of electrical equipment for the whole plant, the layout of the switchgear, and the configuration of relay protection and automatic devices, and it is the decisive factor in the investment required for the electrical part of the substation.

Current state of relay protection: the rapid development of power systems continually places new demands on relay protection, while the rapid progress of electronic, computer, and communication technology keeps injecting new vitality into protection technology. Relay protection has therefore developed under uniquely favourable conditions and has passed through four historical stages in a little over 40 years.

With the high-speed development of power systems and advances in computer and communication technology, relay protection technology faces further development.

At home and abroad, the development trends of relay protection technology are: computerization; networking; the integration of protection, control, measurement, and data communication; and the application of artificial intelligence.

Future development of relay protection: the future trend is towards computerization, networking, intelligence, and the integration of protection, control, measurement, and data communication. The development trends of microprocessor-based protection are: (1) application of high-speed data-processing chips; (2) networking of microprocessor-based protection; (3) integration of protection, control, measurement, signalling, and data communication; (4) intelligent relay protection.

1.2 Main work of this thesis

In this graduation project I mainly carried out the design of the protection and safeguard systems for the substation of an iron and steel enterprise. Making full use of the knowledge I have learned and strictly following the requirements of the assignment brief, the study centres on the reliability and flexibility of the main connection diagram to be designed, and includes load calculation, selection of the main connection scheme, short-circuit current calculation, configuration of the relay protection for the main transformer, and calculation and verification of the line relay protection.
1.3 Design overview

1.3.1 Design basis

1) The relay protection design assignment brief.
2) National standard GB 50062-92, Code for Design of Relay Protection and Automatic Devices for Electric Power Installations.
3) Power Supply for Industrial Enterprises.

1.3.2 Original design data

The enterprise has 12 workshops in total, which undertake the repair and manufacture of equipment and transformers for its subsidiary plants.

1. Electrical equipment of each workshop. Details of the electrical equipment are given in Table 1.1.

Table 1.1 List of electrical equipment

2. Nature of the load. Most workshops of the plant operate a single shift, while a few operate two or three shifts; the annual utilization hours of the maximum active load are […]
English translation for a Java graduation thesis
Java and the Internet

If Java is, in fact, yet another computer programming language, you may question why it is so important and why it is being promoted as a revolutionary step in computer programming. The answer isn't immediately obvious if you're coming from a traditional programming perspective. Although Java is very useful for solving traditional standalone programming problems, it is also important because it will solve programming problems on the World Wide Web.

What is the Web?

The Web can seem a bit of a mystery at first, with all this talk of "surfing," "presence," and "homepages." It's helpful to step back and see what it really is, but to do this you must understand client/server systems, another aspect of computing that's full of confusing issues.

Client/Server computing

The primary idea of a client/server system is that you have a central repository of information—some kind of data, often in a database—that you want to distribute on demand to some set of people or machines. A key to the client/server concept is that the repository of information is centrally located so that it can be changed and so that those changes will propagate out to the information consumers. Taken together, the information repository, the software that distributes the information, and the machine(s) where the information and software reside is called the server. The software that resides on the remote machine, communicates with the server, fetches the information, processes it, and then displays it on the remote machine is called the client.

The basic concept of client/server computing, then, is not so complicated. The problems arise because you have a single server trying to serve many clients at once. Generally, a database management system is involved, so the designer "balances" the layout of data into tables for optimal use. In addition, systems often allow a client to insert new information into a server. This means you must ensure that one client's new data doesn't walk over another client's new data, or that data isn't lost in the process of adding it to the database (this is called transaction processing). As client software changes, it must be built, debugged, and installed on the client machines, which turns out to be more complicated and expensive than you might think. It's especially problematic to support multiple types of computers and operating systems. Finally, there's the all-important performance issue: you might have hundreds of clients making requests of your server at any one time, so any small delay is crucial. To minimize latency, programmers work hard to offload processing tasks, often to the client machine, but sometimes to other machines at the server site, using so-called middleware. (Middleware is also used to improve maintainability.)

The simple idea of distributing information has so many layers of complexity that the whole problem can seem hopelessly enigmatic. And yet it's crucial: client/server computing accounts for roughly half of all programming activities. It's responsible for everything from taking orders and credit-card transactions to the distribution of any kind of data—stock market, scientific, government, you name it. What we've come up with in the past is individual solutions to individual problems, inventing a new solution each time. These were hard to create and hard to use, and the user had to learn a new interface for each one.
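As a minimal, self-contained illustration of the client side of this request/response idea (an addition for this translation, not part of the original text), the following Java program connects to a server, requests a page, and prints the reply; the URL is a placeholder.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

// Minimal "client": ask a web server for a document and print the response body.
public class SimpleWebClient {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://www.example.com/");            // the server side
        BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream()));         // issue the request
        String line;
        while ((line = in.readLine()) != null) {                  // consume the reply
            System.out.println(line);
        }
        in.close();
    }
}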
The entire client/server problem needs to be solved in a big way.

The Web as a giant server

The Web is actually one giant client/server system. It's a bit worse than that, since you have all the servers and clients coexisting on a single network at once. You don't need to know that, because all you care about is connecting to and interacting with one server at a time (even though you might be hopping around the world in your search for the correct server). Initially it was a simple one-way process. You made a request of a server and it handed you a file, which your machine's browser software (i.e., the client) would interpret by formatting onto your local machine. But in short order people began wanting to do more than just deliver pages from a server. They wanted full client/server capability so that the client could feed information back to the server, for example, to do database lookups on the server, to add new information to the server, or to place an order (which required more security than the original systems offered). These are the changes we've been seeing in the development of the Web.

The Web browser was a big step forward: the concept that one piece of information could be displayed on any type of computer without change. However, browsers were still rather primitive and rapidly bogged down by the demands placed on them. They weren't particularly interactive, and tended to clog up both the server and the Internet because any time you needed to do something that required programming you had to send information back to the server to be processed. It could take many seconds or minutes to find out you had misspelled something in your request. Since the browser was just a viewer it couldn't perform even the simplest computing tasks. (On the other hand, it was safe, because it couldn't execute any programs on your local machine that might contain bugs or viruses.)

To solve this problem, different approaches have been taken. To begin with, graphics standards have been enhanced to allow better animation and video within browsers. The remainder of the problem can be solved only by incorporating the ability to run programs on the client end, under the browser. This is called client-side programming.

Client-side programming

The Web's initial server-browser design provided for interactive content, but the interactivity was completely provided by the server. The server produced static pages for the client browser, which would simply interpret and display them. Basic HyperText Markup Language (HTML) contains simple mechanisms for data gathering: text-entry boxes, check boxes, radio boxes, lists and drop-down lists, as well as a button that can only be programmed to reset the data on the form or "submit" the data on the form back to the server. This submission passes through the Common Gateway Interface (CGI) provided on all Web servers. The text within the submission tells CGI what to do with it. The most common action is to run a program located on the server in a directory that's typically called "cgi-bin." (If you watch the address window at the top of your browser when you push a button on a Web page, you can sometimes see "cgi-bin" within all the gobbledygook there.) These programs can be written in most languages. Perl has been a common choice because it is designed for text manipulation and is interpreted, so it can be installed on any server regardless of processor or operating system.
However, Python (my favorite—see www.Python.org) has been making inroads because of its greater power and simplicity. Many powerful Web sites today are built strictly on CGI, and you can in fact do nearly anything with CGI. However, Web sites built on CGI programs can rapidly become overly complicated to maintain, and there is also the problem of response time. The response of a CGI program depends on how much data must be sent, as well as on the load on both the server and the Internet. (On top of this, starting a CGI program tends to be slow.) The initial designers of the Web did not foresee how rapidly this bandwidth would be exhausted for the kinds of applications people developed. For example, any sort of dynamic graphing is nearly impossible to perform with consistency because a Graphics Interchange Format (GIF) file must be created and moved from the server to the client for each version of the graph. And you've no doubt had direct experience with something as simple as validating the data on an input form. You press the submit button on a page; the data is shipped back to the server; the server starts a CGI program that discovers an error, formats an HTML page informing you of the error, and then sends the page back to you; you must then back up a page and try again. Not only is this slow, it's inelegant.

The solution is client-side programming. Most machines that run Web browsers are powerful engines capable of doing vast work, and with the original static HTML approach they are sitting there, just idly waiting for the server to dish up the next page. Client-side programming means that the Web browser is harnessed to do whatever work it can, and the result for the user is a much speedier and more interactive experience at your Web site. The problem with discussions of client-side programming is that they aren't very different from discussions of programming in general. The parameters are almost the same, but the platform is different; a Web browser is like a limited operating system. In the end, you must still program, and this accounts for the dizzying array of problems and solutions produced by client-side programming. The rest of this section provides an overview of the issues and approaches in client-side programming.

Plug-ins

One of the most significant steps forward in client-side programming is the development of the plug-in. This is a way for a programmer to add new functionality to the browser by downloading a piece of code that plugs itself into the appropriate spot in the browser. It tells the browser "from now on you can perform this new activity." (You need to download the plug-in only once.) Some fast and powerful behavior is added to browsers via plug-ins, but writing a plug-in is not a trivial task, and isn't something you'd want to do as part of the process of building a particular site. The value of the plug-in for client-side programming is that it allows an expert programmer to develop a new language and add that language to a browser without the permission of the browser manufacturer. Thus, plug-ins provide a "back door" that allows the creation of new client-side programming languages (although not all languages are implemented as plug-ins).

Scripting languages

Plug-ins resulted in an explosion of scripting languages. With a scripting language, you embed the source code for your client-side program directly into the HTML page, and the plug-in that interprets that language is automatically activated while the HTML page is being displayed.
Scripting languages tend to be reasonably easy to understand and, because they are simply text that is part of an HTML page, they load very quickly as part of the single server hit required to procure that page. The trade-off is that your code is exposed for everyone to see (and steal). Generally, however, you aren't doing amazingly sophisticated things with scripting languages, so this is not too much of a hardship.

This points out that the scripting languages used inside Web browsers are really intended to solve specific types of problems, primarily the creation of richer and more interactive graphical user interfaces (GUIs). However, a scripting language might solve 80 percent of the problems encountered in client-side programming. Your problems might very well fit completely within that 80 percent, and since scripting languages can allow easier and faster development, you should probably consider a scripting language before looking at a more involved solution such as Java or ActiveX programming.

The most commonly discussed browser scripting languages are JavaScript (which has nothing to do with Java; it's named that way just to grab some of Java's marketing momentum), VBScript (which looks like Visual BASIC), and Tcl/Tk, which comes from the popular cross-platform GUI-building language. There are others out there, and no doubt more in development.

JavaScript is probably the most commonly supported. It comes built into both Netscape Navigator and the Microsoft Internet Explorer (IE). Unfortunately, the flavor of JavaScript on the two browsers can vary widely (the Mozilla browser, freely downloadable from www.Mozilla.org, supports the ECMAScript standard, which may one day become universally supported). In addition, there are probably more JavaScript books available than there are for the other browser languages, and some tools automatically create pages using JavaScript. However, if you're already fluent in Visual BASIC or Tcl/Tk, you'll be more productive using those scripting languages rather than learning a new one. (You'll have your hands full dealing with the Web issues already.)

Java and the Internet (Chinese translation): Since Java is just another programming language, you may wonder why it is so important and why so many people regard it as a milestone in computer programming. If you come from a traditional programming background, the answer is not immediately obvious at first.
Graduation design foreign literature translation: English original
Harmonic source identification and current separation in distribution systems

Yong Zhao a,b, Jianhua Li a, Daozhi Xia a,*
a Department of Electrical Engineering, Xi'an Jiaotong University, 28 West Xianning Road, Xi'an, Shaanxi 710049, China
b Fujian Electric Power Dispatch and Telecommunication Center, 264 Wusi Road, Fuzhou, Fujian 350003, China

Abstract

To effectively diminish harmonic distortions, the locations of harmonic sources have to be identified and their currents have to be separated from that absorbed by conventional linear loads connected to the same CCP. In this paper, based on the intrinsic difference between linear and nonlinear loads in their V–I characteristics and by utilizing a new simplified harmonic source model, a new principle for harmonic source identification and harmonic current separation is proposed. By using this method, not only the existence of a harmonic source can be determined, but the contributions of the harmonic source and the linear loads to harmonic voltage distortion can also be distinguished. The detailed procedure based on least squares approximation is given. The effectiveness of the approach is illustrated by test results on a composite load.

© 2004 Elsevier Ltd. All rights reserved.

Keywords: Distribution system; Harmonic source identification; Harmonic current separation; Least squares approximation

1. Introduction

Harmonic distortion has experienced a continuous increase in distribution systems owing to the growing use of nonlinear loads. Many studies have shown that harmonics may cause serious effects on power systems, communication systems, and various apparatus [1–3]. Harmonic voltages at each point on a distribution network are not only determined by the harmonic currents produced by harmonic sources (nonlinear loads), but are also related to all linear loads (harmonic current sinks) as well as to the structure and parameters of the network. To effectively evaluate and diminish the harmonic distortion in power systems, the locations of harmonic sources have to be identified and the responsibility for the distortion caused by the related individual customers has to be separated.

As to harmonic source identification, most commonly the negative harmonic power is considered as essential evidence of an existing harmonic source [4–7]. Several approaches aiming at evaluating the contribution of an individual customer can also be found in the literature. Schemes based on power factor measurement to penalize the customer's harmonic currents are discussed in Ref. [8]. However, it would be unfair to apply economic penalties if we could not distinguish whether the measured harmonic current comes from a nonlinear load or from a linear load.

In fact, the intrinsic difference between linear and nonlinear loads lies in their V–I characteristics. Harmonic currents of a linear load are in linear proportion to its supply harmonic voltages of the same order, whereas the harmonic currents of a nonlinear load are complex nonlinear functions of its supply fundamental and harmonic voltage components of all orders.
To successfully identify and isolate the harmonic source in an individual customer, or in several customers connected at the same point in the network, the V–I characteristics should be involved, and measurements of voltages and currents under several different supply conditions should be carried out. As the existing approaches based on measurements of the voltage and current spectrum or harmonic power at a certain instant cannot reflect the V–I characteristics, they may not provide reliable information about the existence and contribution of harmonic sources, which has been substantiated by theoretical analysis and experimental research [9,10].

In this paper, to approximate the nonlinear characteristics and to facilitate the work of harmonic source identification and harmonic current separation, a new simplified harmonic source model is proposed. Then, based on the difference between linear and nonlinear loads in their V–I characteristics, and by utilizing the harmonic source model, a new principle for harmonic source identification and harmonic current separation is presented. By using the method, not only the existence of a harmonic source can be determined, but the contributions of the harmonic sources and the linear loads can also be separated. The detailed procedure of harmonic source identification and harmonic current separation based on least squares approximation is presented. Finally, test results on a composite load containing linear and nonlinear loads are given to illustrate the effectiveness of the approach.

2. New principle for harmonic source identification and current separation

Consider a composite load to be studied in a distribution system, which may represent an individual consumer or a group of customers supplied by a common feeder in the system. To identify whether it contains any harmonic source, and to separate the harmonic currents generated by the harmonic sources from those absorbed by conventional linear loads in the measured total harmonic currents of the composite load, the following assumptions are made.

(a) The supply voltage and the load currents are both periodic waveforms with period T, so that they can be expressed by Fourier series as

$$v(t)=\sum_{h=1}^{\infty}V_h\sin\!\left(\frac{2\pi h t}{T}+\theta_h\right),\qquad i(t)=\sum_{h=1}^{\infty}I_h\sin\!\left(\frac{2\pi h t}{T}+\phi_h\right) \quad (1)$$

The fundamental-frequency and harmonic components can further be represented by the corresponding phasors

$$\dot V_h=V_{hr}+jV_{hi}=V_h\angle\theta_h,\qquad \dot I_h=I_{hr}+jI_{hi}=I_h\angle\phi_h,\qquad h=1,2,3,\ldots,n \quad (2)$$

(b) During the period of identification, the composite load is stationary, i.e. both its composition and the circuit parameters of all individual loads remain unchanged.

Under the above assumptions, the relationship between the total harmonic currents of the harmonic sources (denoted by subscript N) in the composite load and the supply voltage, i.e. the V–I characteristics, can be described by the following nonlinear equation

$$i_N(t)=f\bigl(v(t)\bigr) \quad (3)$$

and can also be represented in terms of phasors as

$$\dot I_{Nh}=\begin{bmatrix}I_{Nhr}(V_1,V_{2r},V_{2i},\ldots,V_{nr},V_{ni})\\ I_{Nhi}(V_1,V_{2r},V_{2i},\ldots,V_{nr},V_{ni})\end{bmatrix},\qquad h=2,3,\ldots,n \quad (4)$$

Note that in Eq. (4) the initial (reference) time of the voltage waveform has been properly selected such that the phase angle $\theta_1$ becomes 0, and hence $V_{1i}=0$ and $V_{1r}=V_1$ in Eq. (2), for simplicity.

The V–I characteristics of the linear part (denoted by subscript L) of the composite load can be represented by its equivalent harmonic admittance $Y_{Lh}=G_{Lh}+jB_{Lh}$, and the total harmonic currents absorbed by the linear part can be described as

$$\dot I_{Lh}=\begin{bmatrix}I_{Lhr}\\ I_{Lhi}\end{bmatrix}=\begin{bmatrix}G_{Lh} & -B_{Lh}\\ B_{Lh} & G_{Lh}\end{bmatrix}\begin{bmatrix}V_{hr}\\ V_{hi}\end{bmatrix},\qquad h=2,3,\ldots,n \quad (5)$$
From Eqs. (4) and (5), the whole harmonic current absorbed by the composite load can be expressed as

$$\dot I_h=\begin{bmatrix}I_{hr}\\ I_{hi}\end{bmatrix}=\begin{bmatrix}I_{Lhr}\\ I_{Lhi}\end{bmatrix}-\begin{bmatrix}I_{Nhr}(V_1,V_{2r},V_{2i},\ldots,V_{nr},V_{ni})\\ I_{Nhi}(V_1,V_{2r},V_{2i},\ldots,V_{nr},V_{ni})\end{bmatrix},\qquad h=2,3,\ldots,n \quad (6)$$

As the V–I characteristics of a harmonic source are nonlinear, Eq. (6) can be directly used neither for harmonic source identification nor for harmonic current separation. To facilitate the work in practice, simplified methods should be involved. The common practice in harmonic studies is to represent nonlinear loads by means of harmonic current sources or equivalent Norton models [11,12]. However, these models are not of sufficient precision, and a new simplified model is needed.

From the engineering point of view, the variations of $V_{hr}$ and $V_{hi}$ ordinarily fall within a ±3% bound of the rated bus voltage, while the change of $V_1$ is usually less than ±5%. Within such a range of supply voltages, the following simplified linear relation is used in this paper to approximate the harmonic source characteristics, Eq. (4):

$$\dot I_{Nh}=\begin{bmatrix}a_{h0}+a_{h1}V_1+a_{h2r}V_{2r}+a_{h2i}V_{2i}+\cdots+a_{hnr}V_{nr}+a_{hni}V_{ni}\\ b_{h0}+b_{h1}V_1+b_{h2r}V_{2r}+b_{h2i}V_{2i}+\cdots+b_{hnr}V_{nr}+b_{hni}V_{ni}\end{bmatrix},\qquad h=2,3,\ldots,n \quad (7)$$

[Translator's note: in the source, the subscript of the third term of the second row reads $b_{h3r}$ while all other terms use 2; this is presumably a misprint.]

The precision and superiority of this simplified model will be illustrated in Section 4 by test results on several kinds of typical harmonic sources.

The total harmonic current (Eq. (6)) then becomes

$$\dot I_h=\begin{bmatrix}G_{Lh} & -B_{Lh}\\ B_{Lh} & G_{Lh}\end{bmatrix}\begin{bmatrix}V_{hr}\\ V_{hi}\end{bmatrix}-\begin{bmatrix}a_{h0}+a_{h1}V_1+a_{h2r}V_{2r}+a_{h2i}V_{2i}+\cdots+a_{hnr}V_{nr}+a_{hni}V_{ni}\\ b_{h0}+b_{h1}V_1+b_{h2r}V_{2r}+b_{h2i}V_{2i}+\cdots+b_{hnr}V_{nr}+b_{hni}V_{ni}\end{bmatrix},\qquad h=2,3,\ldots,n \quad (8)$$

It can be seen from the above equations that the harmonic currents of the harmonic sources (nonlinear loads) and of the linear loads differ from each other intrinsically in their V–I characteristics. The harmonic current component drawn by the linear loads is uniquely determined by the harmonic voltage component of the same order in the supply voltage. On the other hand, the harmonic current component of the nonlinear loads contains not only a term caused by the same-order harmonic voltage but also a constant term and terms caused by the fundamental and harmonic voltages of all other orders. This property will be used for identifying the existence of harmonic sources in a composite load.

As the test results shown in Section 4 demonstrate that the sum of the constant term and the component related to the fundamental-frequency voltage is dominant in the harmonic current of nonlinear loads, whereas the other components are negligible, a further approximation of Eq. (7) can be made as follows. Let

$$\dot I'_{Nh}=\begin{bmatrix}a_{h0}+a_{h1}V_1+\sum_{k=2,\,k\neq h}^{n}\bigl(a_{hkr}V_{kr}+a_{hki}V_{ki}\bigr)\\ b_{h0}+b_{h1}V_1+\sum_{k=2,\,k\neq h}^{n}\bigl(b_{hkr}V_{kr}+b_{hki}V_{ki}\bigr)\end{bmatrix}$$

$$\dot I''_{Nh}=\begin{bmatrix}a_{hhr} & a_{hhi}\\ b_{hhr} & b_{hhi}\end{bmatrix}\begin{bmatrix}V_{hr}\\ V_{hi}\end{bmatrix}$$

$$\dot I''_{Lh}=\dot I_{Lh}-\dot I''_{Nh}=\begin{bmatrix}a''_{hhr} & a''_{hhi}\\ b''_{hhr} & b''_{hhi}\end{bmatrix}\begin{bmatrix}V_{hr}\\ V_{hi}\end{bmatrix}$$

$$\begin{bmatrix}a''_{hhr} & a''_{hhi}\\ b''_{hhr} & b''_{hhi}\end{bmatrix}=\begin{bmatrix}G_{Lh} & -B_{Lh}\\ B_{Lh} & G_{Lh}\end{bmatrix}-\begin{bmatrix}a_{hhr} & a_{hhi}\\ b_{hhr} & b_{hhi}\end{bmatrix},\qquad h=2,3,\ldots,n$$

The total harmonic current of the composite load becomes

$$\dot I_h=\dot I''_{Lh}-\dot I'_{Nh}=\begin{bmatrix}a''_{hhr} & a''_{hhi}\\ b''_{hhr} & b''_{hhi}\end{bmatrix}\begin{bmatrix}V_{hr}\\ V_{hi}\end{bmatrix}-\begin{bmatrix}a_{h0}+a_{h1}V_1+\sum_{k=2,\,k\neq h}^{n}\bigl(a_{hkr}V_{kr}+a_{hki}V_{ki}\bigr)\\ b_{h0}+b_{h1}V_1+\sum_{k=2,\,k\neq h}^{n}\bigl(b_{hkr}V_{kr}+b_{hki}V_{ki}\bigr)\end{bmatrix},\qquad h=2,3,\ldots,n \quad (9)$$

By neglecting $\dot I''_{Nh}$ in the harmonic current of the nonlinear load and adding it to the harmonic current of the linear load, $\dot I'_{Nh}$ can then be deemed the harmonic current of the nonlinear load, while $\dot I''_{Lh}$ can be taken as the harmonic current of the linear load.
$\dot I'_{Nh}=0$ means that the composite load contains no harmonic sources, while $\dot I'_{Nh}\neq 0$ signifies that harmonic sources may exist in this composite load. As the neglected term $\dot I''_{Nh}$ is not dominant, it is obvious that this simplification does not introduce significant error into the total harmonic current of the nonlinear load. However, it makes harmonic source identification and current separation possible.

3. Identification procedure

In order to identify the existence of harmonic sources in a composite load, the parameters in Eq. (9) should be determined first, i.e.

$$C_{hr}=\begin{bmatrix}a_{h0} & a_{h1} & a_{h2r} & a_{h2i} & \cdots & a''_{hhr} & a''_{hhi} & \cdots & a_{hnr} & a_{hni}\end{bmatrix}$$
$$C_{hi}=\begin{bmatrix}b_{h0} & b_{h1} & b_{h2r} & b_{h2i} & \cdots & b''_{hhr} & b''_{hhi} & \cdots & b_{hnr} & b_{hni}\end{bmatrix}$$

For this purpose, measurements of different supply voltages and the corresponding harmonic currents of the composite load should be repeatedly performed several times within some short period, while keeping the composite load stationary. The change of supply voltage can, for example, be obtained by switching in or out some shunt capacitors, disconnecting a parallel transformer, or changing the tap position of transformers with OLTC. Then the least squares approach can be used to estimate the parameters from the measured voltages and currents. The identification procedure is explained as follows.

(1) Perform the test m (m ≥ 2n) times to obtain the measured fundamental-frequency and harmonic voltage and current phasors $V_h^{(k)}\angle\theta_h^{(k)}$, $I_h^{(k)}\angle\phi_h^{(k)}$, $k=1,2,\ldots,m$, $h=1,2,\ldots,n$.

(2) For $k=1,2,\ldots,m$, shift the phasors so that the fundamental voltage phase angle is zero ($\theta_1^{(k)}=0$) and transform them into orthogonal components, i.e.

$$V_{1r}^{(k)}=V_1^{(k)},\qquad V_{1i}^{(k)}=0$$
$$V_{hr}^{(k)}=V_h^{(k)}\cos\bigl(\theta_h^{(k)}-h\theta_1^{(k)}\bigr),\qquad V_{hi}^{(k)}=V_h^{(k)}\sin\bigl(\theta_h^{(k)}-h\theta_1^{(k)}\bigr)$$
$$I_{hr}^{(k)}=I_h^{(k)}\cos\bigl(\phi_h^{(k)}-h\theta_1^{(k)}\bigr),\qquad I_{hi}^{(k)}=I_h^{(k)}\sin\bigl(\phi_h^{(k)}-h\theta_1^{(k)}\bigr),\qquad h=2,3,\ldots,n$$

(3) Let

$$V^{(k)}=\begin{bmatrix}1 & V_1^{(k)} & V_{2r}^{(k)} & V_{2i}^{(k)} & \cdots & V_{hr}^{(k)} & V_{hi}^{(k)} & \cdots & V_{nr}^{(k)} & V_{ni}^{(k)}\end{bmatrix}^{T},\qquad k=1,2,\ldots,m$$
$$X=\begin{bmatrix}V^{(1)} & V^{(2)} & \cdots & V^{(m)}\end{bmatrix}^{T}$$
$$W_{hr}=\begin{bmatrix}I_{hr}^{(1)} & I_{hr}^{(2)} & \cdots & I_{hr}^{(m)}\end{bmatrix}^{T},\qquad W_{hi}=\begin{bmatrix}I_{hi}^{(1)} & I_{hi}^{(2)} & \cdots & I_{hi}^{(m)}\end{bmatrix}^{T}$$

Minimize $\sum_{k=1}^{m}\bigl(I_{hr}^{(k)}-C_{hr}V^{(k)}\bigr)^2$ and $\sum_{k=1}^{m}\bigl(I_{hi}^{(k)}-C_{hi}V^{(k)}\bigr)^2$, and determine the parameters $C_{hr}$ and $C_{hi}$ by the least squares approach as [13]:

$$C_{hr}^{T}=(X^{T}X)^{-1}X^{T}W_{hr},\qquad C_{hi}^{T}=(X^{T}X)^{-1}X^{T}W_{hi} \quad (10)$$

(4) By using Eq. (9), calculate $\dot I''_{Lh}$ and $\dot I'_{Nh}$ with the obtained $C_{hr}$ and $C_{hi}$; the existence of a harmonic source is then identified and the harmonic current is separated.

It can be seen that in the course of model construction, harmonic source identification and harmonic current separation, m changes of the supply-system operating condition and m measurements of harmonic voltages and currents are needed. The more accurate the model, the more manipulations are necessary.

To compromise between the required number of switching operations and the accuracy of the results, the proposed models for the nonlinear load (Eq. (7)) and the composite load (Eq. (9)) can be further simplified by considering only the dominant terms in Eq. (7), i.e.

$$\dot I_{Nh}=\begin{bmatrix}I_{Nhr}\\ I_{Nhi}\end{bmatrix}=\begin{bmatrix}a_{h0}+a_{h1}V_1\\ b_{h0}+b_{h1}V_1\end{bmatrix}+\begin{bmatrix}a_{hhr} & a_{hhi}\\ b_{hhr} & b_{hhi}\end{bmatrix}\begin{bmatrix}V_{hr}\\ V_{hi}\end{bmatrix},\qquad h=2,3,\ldots,n \quad (11)$$

$$\dot I'_{Nh}=\begin{bmatrix}a_{h0}+a_{h1}V_1\\ b_{h0}+b_{h1}V_1\end{bmatrix}$$

$$\dot I_h=\begin{bmatrix}I_{hr}\\ I_{hi}\end{bmatrix}=\dot I''_{Lh}-\dot I'_{Nh}=\begin{bmatrix}a''_{hhr} & a''_{hhi}\\ b''_{hhr} & b''_{hhi}\end{bmatrix}\begin{bmatrix}V_{hr}\\ V_{hi}\end{bmatrix}-\begin{bmatrix}a_{h0}+a_{h1}V_1\\ b_{h0}+b_{h1}V_1\end{bmatrix},\qquad h=2,3,\ldots,n \quad (12)$$

In this case, some of the equations in the previous procedure should be changed as follows:

$$C_{hr}=\begin{bmatrix}a_{h0} & a_{h1} & a''_{hhr} & a''_{hhi}\end{bmatrix},\qquad C_{hi}=\begin{bmatrix}b_{h0} & b_{h1} & b''_{hhr} & b''_{hhi}\end{bmatrix}$$
$$V^{(k)}=\begin{bmatrix}1 & V_1^{(k)} & V_{hr}^{(k)} & V_{hi}^{(k)}\end{bmatrix}^{T}$$

Similarly, $\dot I'_{Nh}$ and $\dot I''_{Lh}$ can still be taken as the harmonic currents caused by the nonlinear load and the linear load, respectively.
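To illustrate Eq. (10) for the further-simplified model of Eq. (12), the following is a small, self-contained sketch added for this translation (not part of the paper). The measurement values in main() are invented, and the routine solves the normal equations (X^T X) C^T = X^T W by Gauss-Jordan elimination.

public class HarmonicModelFit {

    // Each row of X is one measurement [1, V1, Vhr, Vhi]; w holds the matching
    // Ihr (or Ihi) samples. Returns [a_h0, a_h1, a''_hhr, a''_hhi] (or the b's).
    static double[] leastSquares(double[][] X, double[] w) {
        int n = X[0].length;
        double[][] A = new double[n][n + 1];          // augmented [X^T X | X^T w]
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++)
                for (double[] row : X) A[i][j] += row[i] * row[j];
            for (int k = 0; k < X.length; k++) A[i][n] += X[k][i] * w[k];
        }
        // Gauss-Jordan elimination with partial pivoting (assumes X^T X is non-singular)
        for (int col = 0; col < n; col++) {
            int p = col;
            for (int r = col + 1; r < n; r++)
                if (Math.abs(A[r][col]) > Math.abs(A[p][col])) p = r;
            double[] tmp = A[col]; A[col] = A[p]; A[p] = tmp;
            for (int r = 0; r < n; r++) {
                if (r == col) continue;
                double f = A[r][col] / A[col][col];
                for (int c = col; c <= n; c++) A[r][c] -= f * A[col][c];
            }
        }
        double[] c = new double[n];
        for (int i = 0; i < n; i++) c[i] = A[i][n] / A[i][i];
        return c;
    }

    public static void main(String[] args) {
        // Hypothetical measurements: rows are [1, V1, V5r, V5i]; w is the measured I5r.
        double[][] X = {
            {1, 1.00,  0.01, -0.02},
            {1, 0.97, -0.02,  0.01},
            {1, 1.03,  0.02,  0.00},
            {1, 0.99, -0.01, -0.01},
            {1, 1.02,  0.00,  0.02}
        };
        double[] w = {0.386, 0.374, 0.398, 0.382, 0.393};
        double[] c = leastSquares(X, w);
        System.out.printf("a_h0=%.4f a_h1=%.4f a''_hhr=%.4f a''_hhi=%.4f%n",
                c[0], c[1], c[2], c[3]);
    }
}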
4. Experimental validation

4.1. Model accuracy

To demonstrate the validity of the proposed harmonic source models, simulations are performed on the following three kinds of typical nonlinear loads: a three-phase six-pulse rectifier, a single-phase capacitor-filtered rectifier and an ac arc furnace, all under stationary operating conditions. Diagrams of the three-phase six-pulse rectifier and the single-phase capacitor-filtered rectifier are shown in Figs. 1 and 2 [14,15], respectively; the V–I characteristic of the arc furnace is simplified as shown in Fig. 3 [16]. The harmonic currents used in the simulation tests are calculated precisely from their mathematical models. As to the supply voltage, V_1^{(k)} is assumed to be uniformly distributed between 0.95 and 1.05, while V_{hr}^{(k)} and V_{hi}^{(k)} (k = 1, 2, ..., m) are uniformly distributed between -0.03 and 0.03, with a base voltage of 10 kV and a base power of 1 MVA.

Fig. 1. Diagram of three-phase six-pulse rectifier.
Fig. 2. Diagram of single-phase capacitor-filtered rectifier.
Fig. 3. Approximate V–I characteristics of arc furnace.

Three different models, namely the harmonic current source (constant current) model, the Norton model and the proposed simplified model, are simulated and estimated by the least squares approach for comparison.

For the three-phase six-pulse rectifier with fundamental current I_1 = 1.7621, the parameters of the simplified model for the fifth and seventh harmonic currents are listed in Table 1.

To compare the accuracy of the three different models, the means and standard deviations of the errors in I_hr, I_hi and I_h between the estimated values and the simulated actual values are calculated for each model. The error comparison of the three models on the three-phase six-pulse rectifier is shown in Table 2, where m_hr, m_hi and m_ha denote the means, and s_hr, s_hi and s_ha represent the standard deviations. Note that I_1 and I_h in Table 2 are the current values caused by the rated, purely sinusoidal supply voltage.

Error comparisons on the single-phase capacitor-filtered rectifier and the arc furnace load are listed in Tables 3 and 4, respectively.

It can be seen from the above test results that the accuracy of the proposed model differs for different nonlinear loads, while for a given load the accuracy decreases as the harmonic order increases. However, the proposed model is always more accurate than the other two models.

It can also be seen from Table 1 that the components a_50 + a_51 V_1 and b_50 + b_51 V_1 are around -0.0074 + 0.3939 ≈ 0.3865 and 0.0263 + 0.0623 ≈ 0.0886, while the components a_55 V_5r and b_55 V_5i will not exceed 0.2676 × 0.03 ≈ 0.008 and 0.9675 × 0.03 ≈ 0.029, respectively. The results show that the fifth harmonic current caused by the sum of the constant term and the fundamental voltage is about 10 times that caused by the harmonic voltage of the same order, so the former is dominant in the harmonic current of the three-phase six-pulse rectifier. The same situation exists for other harmonic orders and other nonlinear loads.
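The accuracy figures of Tables 2-4 are means and standard deviations of the differences between model-estimated and simulated harmonic currents. A minimal sketch of how such statistics can be obtained is shown below; the sample values are hypothetical and are not the paper's data.

```python
import numpy as np

def error_statistics(I_est, I_act):
    """Mean and standard deviation of the estimation errors for the real part,
    the imaginary part and the magnitude of a harmonic current (cf. Tables 2-4)."""
    errors = {
        "hr": I_est.real - I_act.real,
        "hi": I_est.imag - I_act.imag,
        "ha": np.abs(I_est) - np.abs(I_act),
    }
    return {k: (float(e.mean()), float(e.std())) for k, e in errors.items()}

# Hypothetical complex-valued current samples (estimated vs. simulated "actual"):
I_act = np.array([0.35 + 0.02j, 0.36 - 0.01j, 0.34 + 0.00j])
I_est = np.array([0.34 + 0.03j, 0.37 - 0.02j, 0.35 + 0.01j])
print(error_statistics(I_est, I_act))   # {"hr": (m_hr, s_hr), "hi": ..., "ha": ...}
```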
4.2. Effectiveness of harmonic source identification and current separation

To show the effectiveness of the proposed harmonic source identification method, simulations are performed on a composite load consisting of a linear load (30%) and nonlinear loads comprising a three-phase six-pulse rectifier (30%), a single-phase capacitor-filtered rectifier (20%) and an ac arc furnace load (20%).

For simplicity, only the errors of the third order harmonic currents of the linear and nonlinear loads are listed in Table 5, where I_N3 denotes the third order harmonic current corresponding to the rated, purely sinusoidal supply voltage; m_N3r, m_N3i, m_N3a and m_L3r, m_L3i, m_L3a are the error means of I_N3r, I_N3i, I_N3 and I_L3r, I_L3i, I_L3 between the simulated actual values and the estimated values; s_N3r, s_N3i, s_N3a and s_L3r, s_L3i, s_L3a are the corresponding standard deviations.

Table 2. Error comparison on the three-phase six-pulse rectifier
Table 3. Error comparison on the single-phase capacitor-filtered rectifier

It can be seen from Table 5 that the current errors of the linear load are smaller than those of the nonlinear loads. This is because the errors of the nonlinear load currents are due both to the model error and to neglecting the components related to harmonic voltages of the same order, whereas only the latter components introduce errors into the linear load currents. Moreover, it can be found that the more precise the composite load model is, the smaller the introduced error. However, even with the very simple model of Eq. (12), the existence of harmonic sources can be correctly identified and the harmonic currents of the linear and nonlinear loads can be effectively separated.

Table 4. Error comparison on the arc furnace
Table 5. Errors of the third order harmonic currents of the composite load

5. Conclusions

In this paper, from an engineering point of view, a new linear model is first presented for representing harmonic sources. On the basis of the intrinsic difference between linear and nonlinear loads in their V–I characteristics, and by using the proposed harmonic source model, a new concise principle for identifying harmonic sources and separating harmonic source currents from those of linear loads is proposed. The detailed modeling and identification procedure is also developed, based on the least squares approximation approach. Test results on several kinds of typical harmonic sources reveal that the simplified model is of sufficient precision and is superior to other existing models. The effectiveness of the harmonic source identification approach is illustrated using a composite nonlinear load.

Acknowledgements

The authors wish to acknowledge the financial support of the National Natural Science Foundation of China for this project, under Research Program Grant No. 59737140.

References

[1] IEEE Working Group on Power System Harmonics. The effects of power system harmonics on power system equipment and loads. IEEE Trans Power Apparatus Syst 1985;9:2555–63.
[2] IEEE Working Group on Power System Harmonics. Power line harmonic effects on communication line interference. IEEE Trans Power Apparatus Syst 1985;104(9):2578–87.
[3] IEEE Task Force on the Effects of Harmonics. Effects of harmonics on equipment. IEEE Trans Power Deliv 1993;8(2):681–8.
[4] Heydt GT. Identification of harmonic sources by a state estimation technique. IEEE Trans Power Deliv 1989;4(1):569–75.
[5] Ferach JE, Grady WM, Arapostathis A. An optimal procedure for placing sensors and estimating the locations of harmonic sources in power systems. IEEE Trans Power Deliv 1993;8(3):1303–10.
[6] Ma H, Girgis AA. Identification and tracking of harmonic sources in a power system using a Kalman filter. IEEE Trans Power Deliv 1996;11(3):1659–65.
[7] Hong YY, Chen YC. Application of algorithms and artificial intelligence approach for locating multiple harmonics in distribution systems. IEE Proc.-Gener. Transm. Distrib. 1999;146(3):325–9.
[8] McEachern A, Grady WM, Moncerief WA, Heydt GT, McGranaghan M. Revenue and harmonics: an evaluation of some proposed rate structures. IEEE Trans Power Deliv 1995;10(1):474–82.
[9] Xu W. Power direction method cannot be used for harmonic source detection. Power Engineering Society Summer Meeting, IEEE; 2000. p. 873–6.
[10] Sasdelli R, Peretto L. A VI-based measurement system for sharing the customer and supply responsibility for harmonic distortion. IEEE Trans Instrum Meas 1998;47(5):1335–40.
[11] Arrillaga J, Bradley DA, Bodger PS. Power system harmonics. New York: Wiley; 1985.
[12] Thunberg E, Soder L. A Norton approach to distribution network modeling for harmonic studies. IEEE Trans Power Deliv 1999;14(1):272–7.
[13] Giordano AA, Hsu FM. Least squares estimation with applications to digital signal processing. New York: Wiley; 1985.
[14] Xia D, Heydt GT. Harmonic power flow studies. Part I. Formulation and solution. IEEE Trans Power Apparatus Syst 1982;101(6):1257–65.
[15] Mansoor A, Grady WM, Thallam RS, Doyle MT, Krein SD, Samotyj MJ. Effect of supply voltage harmonics on the input current of single-phase diode bridge rectifier loads. IEEE Trans Power Deliv 1995;10(3):1416–22.
[16] Varadan S, Makram EB, Girgis AA. A new time domain voltage source model for an arc furnace using EMTP. IEEE Trans Power Deliv 1996;11(3):1416–22.
A Novel Automatic Image Annotation Method Based on Multi-instance Learning

Abstract

Automatic image annotation (AIA) is the bridge between high-level semantic information and low-level features, and it is an effective way to address the problem of the "semantic gap". According to the intrinsic character of AIA, namely that an annotated image contains many regions, an AIA method based on the framework of multi-instance learning (MIL) is proposed in this paper. Each keyword is analyzed hierarchically at a low granularity level under the MIL framework. By mining representative instances, the semantic similarity of images can be expressed effectively and better annotation results can be acquired, which testifies to the effectiveness of the proposed annotation method.

1. Introduction

With the development of multimedia and network technology, the amount of image data has been growing rapidly. Facing such massive image resources, content-based image retrieval (CBIR), a technology for organizing, managing and analyzing these resources efficiently, has become a research hotspot. However, CBIR faces an unprecedented challenge owing to the limitation of the "semantic gap", that is, the underlying visual features, such as color, texture and shape, cannot completely reflect and match the query intention.

In recent years, the newly proposed automatic image annotation (AIA) has focused on building a bridge between high-level semantics and low-level features, which is an effective approach to the above-mentioned semantic gap. Research on automatic image annotation was initiated with the co-occurrence model proposed by Mori et al. in 1999 [1]. In [2], a translation model was developed to annotate images automatically, based on the assumption that keywords and visual features are two different languages describing the same image. Similar to [2], literature [3] proposed the Cross Media Relevance Model (CMRM), where the visual information of each image is represented by a blob set that conveys the semantic information of the image. However, the blob set in CMRM is built by discrete region clustering, which loses visual feature information, so the annotation results are far from ideal. To compensate for this problem, a Continuous-space Relevance Model (CRM) was proposed in [4]. Furthermore, in [5] the Multiple-Bernoulli Relevance Model was proposed to improve on CMRM and CRM.

Although the above-mentioned methods differ in various respects, their core idea is identical: annotated images are used to build a model that describes the potential relationship, or mapping, between keywords and image features, and this model is then used to predict annotations for unknown images. Even though previous studies achieved results from various perspectives, the semantic description of each keyword has not been defined explicitly in them. To this end, based on an investigation of the characteristics of automatic image annotation, namely that annotated images comprise multiple regions, automatic image annotation is regarded in this paper as a multi-instance learning problem.
The proposed method analyzes each keyword in a multi-granularity hierarchy to reflect semantic similarity, so that it not only characterizes the semantic implication accurately but also improves annotation performance, which verifies the effectiveness of the proposed method.

This article is organized as follows: Section 1 briefly introduces automatic image annotation; automatic image annotation based on the multi-instance learning framework is discussed in detail in Section 2; the experimental process and results are described in Section 3; Section 4 gives a summary and briefly discusses future research.

2. Automatic Image Annotation in the Framework of Multi-instance Learning

In previous learning frameworks, a sample is viewed as an instance, i.e. the relationship between samples and instances is one-to-one, whereas in multi-instance learning a sample may contain several instances, that is, the relationship between samples and instances is one-to-many. The ambiguity among training samples in multi-instance learning differs completely from that in supervised, unsupervised and reinforcement learning, so previous methods can hardly solve such problems. Owing to its characteristic features and wide prospects, multi-instance learning is attracting more and more attention in the machine learning domain and is referred to as a new learning framework [7]. The core idea of multi-instance learning is that the training sample set consists of concept-annotated bags which contain unannotated instances. The purpose of multi-instance learning is to assign a conceptual annotation to bags beyond the training set by learning from the training bags. In general, a bag is annotated as Positive if and only if at least one of its instances is Positive; otherwise the bag is annotated as Negative.

2.1 Framework of Image Annotation Based on Multi-instance Learning

According to the above definition of multi-instance learning, namely that a Positive bag contains at least one positive instance, we can conclude that positive instances are distributed much more densely in Positive bags than negative instances. This conclusion shares common ground with the DD algorithm [8] in the multi-instance learning domain. If some point in the feature space represents the semantics of a specified keyword better than any other point, then at least one instance in each Positive bag should be close to this point, while all instances in Negative bags will be far away from it. In the proposed method, we consider each semantic keyword independently. Even though part of the useful information is lost by neglecting the relationships between keywords, the various keywords of each image are used to compute the similarities between images, so that the proposed method can represent the semantic similarity of images effectively at low granularity. In the following sections, each keyword is analyzed and applied at the local level, so that information irrelevant to the keyword is eliminated to improve the precision of the representation of keyword semantics. Firstly, for a keyword w, the Positive and Negative bags are collected, and the regions covered by the Positive bags are grouped by adaptive clustering. Secondly, the cluster that contains the most items and is farthest from the Negative bags is taken as the Positive set of w. Thirdly, a Gaussian Mixture Model (GMM) is used to learn the semantics of w.
Finally, the images can be annotated automatically, using Bayesian estimation, according to the posterior probability of each keyword given the image under the GMM. Figure 1 illustrates this process.

Fig. 1. The framework of automatic image annotation based on multi-instance learning

2.2 Automatic Image Annotation

For convenience, we first introduce some notation. w denotes a semantic keyword; X = {X_k | k = 1, ..., N} is the set of training samples, where N is the number of training samples; S = {x_1, ..., x_n} is the set of representative instances obtained by adaptive clustering, where x_n is the nth item in a cluster. A GMM is constructed to describe the semantic concept of w, i.e. the GMM is used to estimate the distribution of each keyword in the feature space so as to build the mapping from keywords to visual features. Note that the superiority of the GMM lies in producing a smooth estimate of any density distribution, which can reflect the feature distribution of semantic keywords effectively by non-parametric density estimation.

For a specified keyword w, the GMM represents its visual feature distribution, and p(x|w) is defined as follows:

p(x \mid w) = \sum_{i=1}^{M} \pi_i N(x; \mu_i, \Sigma_i) \qquad (1)

where N(x; \mu_i, \Sigma_i) is the Gaussian distribution of the ith component, \mu_i and \Sigma_i are the corresponding mean and covariance respectively, \pi_i is the weight of the ith component, reflecting its significance, with \sum_{i=1}^{M} \pi_i = 1, and M is the number of components. Each component represents a cluster in the feature space, reflecting one visual feature of w. In each component, the conditional probability density of the low-level visual feature vector x can be computed as follows:

N(x; \mu_i, \Sigma_i) = \frac{1}{(2\pi)^{d/2} |\Sigma_i|^{1/2}} \exp\left(-\frac{1}{2}(x - \mu_i)^T \Sigma_i^{-1} (x - \mu_i)\right) \qquad (2)

where d is the dimension of the feature vector x. The parameters of the GMM are estimated by the EM method, a maximum likelihood estimation technique for distribution parameters from incomplete data. EM consists of two steps, the expectation step (E-step) and the maximization step (M-step), which are executed alternately until convergence after multiple iterations. Assuming that the keyword w produces N_w representative instances, \theta_i = (\mu_i, \Sigma_i) represents the mean and covariance of the ith Gaussian component. Intuitively, different semantic keywords should correspond to different visual features, and the numbers of components are in general not identical, so an adaptive value of M can be obtained based on the Minimum Description Length (MDL) criterion [9].

The proposed method extracts semantic clustering sets from the training images, which are used to construct the GMM in which each component represents some visual feature of a specified keyword. From the perspective of semantic mapping, the proposed model describes the one-to-many relationship between keywords and the corresponding visual features. The extracted semantic clustering sets can reflect the semantic similarity between instances and keywords. According to the above method, a GMM is constructed for each keyword to describe its semantics. Then, for a specified image to be annotated, X = {x_1, ..., x_m}, where x_m denotes the mth segmented region, the probability of keyword w is computed according to formula (3):

p(w \mid X) \propto \prod_{i=1}^{m} p(x_i \mid w) \qquad (3)

Finally, the image X is annotated with the 5 keywords of greatest posterior probability.
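As a rough illustration of Section 2.2, the sketch below fits one GMM per keyword on its representative instances and ranks the keywords of a new image by the product of region likelihoods in Eq. (3). It relies on scikit-learn's GaussianMixture and uses BIC as a stand-in for the MDL-based choice of M, with diagonal covariances for numerical robustness; this is an assumed implementation for illustration, not the authors' code, and all data below are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_keyword_gmm(instances, max_components=5):
    """Fit p(x|w) for one keyword on its representative instances.
    BIC is used here in place of the MDL criterion to pick the number of
    components M; diagonal covariances keep the 36-D fit well conditioned."""
    best, best_score = None, np.inf
    for M in range(1, min(max_components, len(instances)) + 1):
        gmm = GaussianMixture(n_components=M, covariance_type="diag",
                              random_state=0).fit(instances)
        score = gmm.bic(instances)
        if score < best_score:
            best, best_score = gmm, score
    return best

def annotate(image_regions, keyword_models, top_k=5):
    """Score each keyword by the product of region likelihoods, Eq. (3), computed
    as a sum of log-densities, and return the top_k keywords."""
    scores = {w: gmm.score_samples(image_regions).sum()
              for w, gmm in keyword_models.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Synthetic usage: 36-dimensional region features for two toy keywords.
rng = np.random.default_rng(0)
models = {
    "sky":   fit_keyword_gmm(rng.normal(0.0, 1.0, (60, 36))),
    "tiger": fit_keyword_gmm(rng.normal(2.0, 1.0, (60, 36))),
}
test_image = rng.normal(2.0, 1.0, (10, 36))   # 10 segmented regions of one image
print(annotate(test_image, models))           # -> ['tiger', 'sky']
```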
3. Experimental Results and Analysis

For a fair comparison with other image annotation algorithms, COREL [2], a widely used image data set, is selected for our experiments. This image set consists of 5000 images, of which 4500 are used as training samples and the remaining 500 as test samples. Each image is annotated with 1 to 5 keywords, and 371 keywords exist in the dataset in total. In our experiments, each image is divided into 10 regions using the Normalized Cut segmentation technique [6]. In all, 42,379 regions are produced for the whole image data set, and these regions are then clustered into 500 groups, each of which is called a blob. For each region, 36-dimensional features such as color, shape and location are considered, as in literature [2].

In order to measure the performance of the various image annotation methods, we adopt the same evaluation metrics as literature [5], which are popular indicators in automatic image annotation and image retrieval. Precision is the ratio of the number of correct annotations to the total number of annotations, while recall is the ratio of the number of correct annotations to the number of positive samples. The detailed definitions are as follows:

precision = B / A \qquad (4)
recall = B / C \qquad (5)

where A is the number of images annotated with some keyword, B is the number of images annotated correctly, and C is the number of images annotated with that keyword in the whole data set. As a tradeoff between the above indicators, the F-measure, which combines precision and recall, is widely adopted, namely:

F\text{-measure} = \frac{2 \times precision \times recall}{precision + recall} \qquad (6)

Moreover, we count the number of keywords that are used to annotate at least one image correctly. This statistic reflects the keyword coverage of the annotation method and is denoted by "NumWords". (A short computational sketch of these metrics is given at the end of this section.)

3.1 Experimental Results

Figure 2 shows that the annotation results of the proposed method, MIL Annotation, are highly consistent with the ground truth. This fact verifies the effectiveness of the proposed method.

Fig. 2. Illustrations of annotation results of MIL Annotation

3.2 Annotation Results of MIL Annotation

Tables 1 and 2 compare the average performance of our proposed method with some traditional annotation models, such as COM [1], TM [2], CMRM [3], CRM [4] and MBRM [5], on the COREL image data set. In the experiments, 263 keywords are concerned.

Table 1. The performance of various annotation models on COREL
Table 2. The comparison of F-measure between various models

From Tables 1 and 2, we can see that the annotation performance of the proposed method outperforms the other models on the two keyword sets, and that the proposed method achieves a significant improvement over existing algorithms in average precision, average recall, F-measure and "NumWords". Specifically, MIL Annotation obtains a significant improvement over COM, TM, CMRM and CRM; among existing probability-based image annotation models, MBRM achieves the best annotation performance, which is comparable to that of MIL Annotation.
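For reference, the per-keyword metrics behind Tables 1 and 2 can be computed from simple annotation counts, as in the sketch below. The standard harmonic-mean F-measure of Eq. (6) is assumed, and the counts are hypothetical.

```python
def keyword_metrics(num_annotated, num_correct, num_relevant):
    """Precision, recall and F-measure for one keyword, Eqs. (4)-(6).
    num_annotated: images the system labels with the keyword (A),
    num_correct:   of those, the correctly labelled images (B),
    num_relevant:  images carrying the keyword in the ground truth (C)."""
    precision = num_correct / num_annotated if num_annotated else 0.0
    recall = num_correct / num_relevant if num_relevant else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

# Hypothetical counts for a single keyword: A = 20, B = 12, C = 30.
print(keyword_metrics(20, 12, 30))   # -> (0.6, 0.4, 0.48)
```

Averaging these per-keyword values over the evaluated keywords gives the average precision and recall reported in Table 1.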
4. Conclusions

A deep analysis of the properties of automatic image annotation shows that it can be viewed as a multi-instance learning problem, so we proposed a method to annotate images automatically based on multi-instance learning. Each keyword is analyzed independently to guarantee more effective semantic similarity at low granularity. Then, under the framework of multi-instance learning, each keyword is further analyzed at various hierarchical levels. Information irrelevant to a keyword is eliminated, by mapping the keyword to its corresponding regions, to improve the precision of the representation of keyword semantics. Experimental results demonstrate the effectiveness of MR-MIL.

References

[1] Mori Y, Takahashi H, Oka R. Image-to-word transformation based on dividing and vector quantizing images with words. In: Proc. of Intl. Workshop on Multimedia Intelligent Storage and Retrieval Management (MISRM'99), Orlando, Oct. 1999.
[2] Duygulu P, Barnard K, Freitas N, Forsyth D. Object recognition as machine translation: learning a lexicon for a fixed image vocabulary. In: Proc. of European Conf. on Computer Vision (ECCV'02), Copenhagen, Denmark, May 2002: 97-112.
[3] Jeon J, Lavrenko V, Manmatha R. Automatic image annotation and retrieval using cross-media relevance models. In: Proc. of Int. ACM SIGIR Conf. on Research and Development in Information Retrieval (ACM SIGIR'03), Toronto, Canada, Jul. 2003: 119-126.
[4] Lavrenko V, Manmatha R, Jeon J. A model for learning the semantics of pictures. In: Proc. of Advances in Neural Information Processing Systems (NIPS'03), 2003.
[5] Feng S, Manmatha R, Lavrenko V. Multiple Bernoulli relevance models for image and video annotation. In: Proc. of IEEE Int. Conf. on Computer Vision and Pattern Recognition (CVPR'04), Washington DC, USA, Jun. 2004: 1002-1009.
[6] Shi J, Malik J. Normalized cuts and image segmentation. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2000, 22(8): 888-905.
[7] Maron O. Learning from ambiguity. PhD dissertation, Department of Electrical Engineering and Computer Science, MIT, 1998.
[8] Maron O, Lozano-Perez T. A framework for multiple-instance learning. In: Proc. of Advances in Neural Information Processing Systems (NIPS'98), Pittsburgh, USA, Oct. 1998: 570-576.
[9] Li J, Wang J. Automatic linguistic indexing of pictures by a statistical modeling approach. IEEE Trans. on Pattern Analysis and Machine Intelligence, 2003, 25(9): 1075-1088.

A Novel Automatic Image Annotation Method Based on Multi-instance Learning
Shunle Zhu, Xiaoqiu Tan
School of Mathematics, Physics and Information, Zhejiang Ocean University, Zhoushan 316000, China
Abstract: Automatic image annotation is the bridge connecting high-level semantic features and low-level features.